text |
---|
transmission spectrum of KELT-9 b",2,"['Very proud of my postdoc Jens Hoeijmakers, who followed up his Nature paper on discovering iron and titanium in KELT-9b with this new A&A study. First discovery of chromium, scandium and yttrium in an exoplanetary atmosphere at high spectral resolution. <LINK>', 'It is necessary to point out massive contributions from two of my other (senior) postdocs: Simon Grimm basically converted the entire Kurucz database of atomic line lists into opacities; Daniel Kitzmann converted these opacities into transmission spectra (with variable gravity).']",19,05,540 |
195,161,1290541721676214272,2324423269,Peyman 𝕄𝕀𝕃𝔸ℕ𝔽𝔸ℝ,"We study a compression framework that considers the interplay between rate, distortion and classification accuracy. Optimizing the quantization tables in JPEG yields a nice boost in performance using an easily-implemented modification of these tables. <LINK> <LINK> @HadiAmirpour For this paper, we used PSNR only since it is the standard and a full-reference distortion that is easy to compute. Other perceptual measures are very appropriate indeed, but in practice rather more difficult to optimize against, especially since they are typically ""no-reference""",https://arxiv.org/abs/2008.00605,"Handling digital images is almost always accompanied by a lossy compression in order to facilitate efficient transmission and storage. This introduces an unavoidable tension between the allocated bit-budget (rate) and the faithfulness of the resulting image to the original one (distortion). An additional complicating consideration is the effect of the compression on recognition performance by given classifiers (accuracy). This work aims to explore this rate-distortion-accuracy tradeoff. As a case study, we focus on the design of the quantization tables in the JPEG compression standard. We offer a novel optimal tuning of these tables via continuous optimization, leveraging a differential implementation of both the JPEG encoder-decoder and an entropy estimator. This enables us to offer a unified framework that considers the interplay between rate, distortion and classification accuracy. In all these fronts, we report a substantial boost in performance by a simple and easily implemented modification of these tables. ",The Rate-Distortion-Accuracy Tradeoff: JPEG Case Study,2,"['We study a compression framework that considers the interplay between rate, distortion and classification accuracy. Optimizing the quantization tables in JPEG yields a nice boost in performance using an easily-implemented modification of these tables. \n\n<LINK> <LINK>', '@HadiAmirpour For this paper, we used PSNR only since it is the standard and a full-reference distortion that is easy to compute. Other perceptual measures are very appropriate indeed, but in practice rather more difficult to optimize against, especially since they are typically ""no-reference""']",20,08,561 |
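The record above concerns tuning JPEG quantization tables against a rate-distortion-accuracy tradeoff. For reference, here is a minimal sketch of the lossy step being tuned: quantizing one 8x8 block's DCT coefficients against a table Q and scoring the damage with PSNR. The flat table and random block are illustrative only; the paper's differentiable encoder-decoder and entropy estimator are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    """Round one 8x8 block's DCT coefficients to multiples of Q (the lossy step)."""
    coeffs = dctn(block, norm="ortho")
    return idctn(np.round(coeffs / Q) * Q, norm="ortho")

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    return 10.0 * np.log10(peak**2 / np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
Q = np.full((8, 8), 16.0)   # a flat toy table; real JPEG tables vary per frequency
print(psnr(img, quantize_block(img, Q)))
```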
196,99,1088658322247495681,154024287,Jake Clark 👻🎃,"#astronomyTwitter we've found some #exoplanets!! Four newly discovered big bois were uncovered by scouring through old archival data. Check out the @arxiv link for more: <LINK> #HD7449c, #HD65216c, #HD89744c and #HD92788c @usqedu, @UNSWPhysics @KutztownU <LINK> Oddly enough, the whole point of the paper was to see IF these singular eccentric exoplanetary systems were actually two planet systems on circular-resonant orbits. But, it seems like those exoplanets are indeed there not and masking as two. But the big surprise came from orbital fitting routines returning long period fits. #HD7449c is a 19 Jupiter mass exoplanet (presumably a very successful exoplanet classed as a brown dwarf) on a ~42 year long orbit. #Saturn's orbit is only 29! So that brings us to THE SECOND PAPER ACCEPTED TODAY!!! Wooo. It turns out that two planets, in mutual mean-motion resonance could actually 'appear' to be a singular highly eccentric planet: <LINK> Welp, let's get some physics up in this beast! We wanted to know at what eccentricities can this behaviour occur at. Is there a defined region where you can say with some certainty that it is INDEED a single planet AND is there a DANGER ZONE? And as it turns out, there is!! If an exoplanet is discovered with an eccentricity greater than ~0.5, then it's most certainly a singular exoplanet. BUT, if your exoplanet has an eccentricity of around 0.2-0.4 then, my friend, you're in the ... <LINK> @ExoCytherean, @JontiHorner, have I missed anything here?",https://arxiv.org/abs/1901.08471,"We examine eight known single-eccentric planetary systems in light of recently released large data archives and new analysis techniques. For four of these systems (HD 7449, HD 65216, HD 89744, HD 92788) we find evidence for additional long-period companions. HD 65216c is a Jupiter analog, with a period of 14.7 yr, $e=0.18$, and m sin $i$ of 2M_Jup, whilst the remaining candidate companions move on as-yet-incomplete orbits. Our results highlight the importance of revisiting the analysis of known exoplanetary systems when new data become available, particularly given the possibility that poorly-sampled data might previously have led to the detection of a 'false-positive' single eccentric planet, when the system in question actually contains two (or more) planets on near-circular orbits. ",Truly eccentric. I. Revisiting eight single-eccentric planetary systems,7,"[""#astronomyTwitter we've found some #exoplanets!! Four newly discovered big bois were uncovered by scouring through old archival data. Check out the @arxiv link for more: <LINK>\n\n#HD7449c, #HD65216c, #HD89744c and #HD92788c \n@usqedu, @UNSWPhysics @KutztownU <LINK>"", 'Oddly enough, the whole point of the paper was to see IF these singular eccentric exoplanetary systems were actually two planet systems on circular-resonant orbits. But, it seems like those exoplanets are indeed there not and masking as two.', ""But the big surprise came from orbital fitting routines returning long period fits. #HD7449c is a 19 Jupiter mass exoplanet (presumably a very successful exoplanet classed as a brown dwarf) on a ~42 year long orbit. #Saturn's orbit is only 29!"", ""So that brings us to THE SECOND PAPER ACCEPTED TODAY!!! Wooo. It turns out that two planets, in mutual mean-motion resonance could actually 'appear' to be a singular highly eccentric planet: https://t.co/vk8WcJ6Y4i"", ""Welp, let's get some physics up in this beast! We wanted to know at what eccentricities can this behaviour occur at. Is there a defined region where you can say with some certainty that it is INDEED a single planet AND is there a DANGER ZONE?"", ""And as it turns out, there is!!\nIf an exoplanet is discovered with an eccentricity greater than ~0.5, then it's most certainly a singular exoplanet. BUT, if your exoplanet has an eccentricity of around 0.2-0.4 then, my friend, you're in the ... https://t.co/R1v943IdCX"", '@ExoCytherean, @JontiHorner, have I missed anything here?']",19,01,1498 |
197,106,1250702034153676800,1146521605457272832,Jorinde van de Vis,New paper with Felix Giese and Thomas Konstandin! We show how to compute the fraction of energy going into gravitational waves in a cosmological phase transition for any new physics model - an essential quantity for producing LISA sensitivity curves. <LINK>,https://arxiv.org/abs/2004.06995,"We study the energy budget of a first-order cosmological phase transition, which is an important factor in the prediction of the resulting gravitational wave spectrum. Formerly, this analysis was based mostly on simplified models as for example the bag equation of state. Here, we present a model-independent approach that is exact up to the temperature dependence of the speed of sound in the broken phase. We find that the only relevant quantities that enter in the hydrodynamic analysis are the speed of sound in the broken phase and a linear combination of the energy and pressure differences between the two phases which we call pseudotrace (normalized to the enthalpy in the broken phase). The pseudotrace quantifies the strength of the phase transition and yields the conventional trace of the energy-momentum tensor for a relativistic plasma (with speed of sound squared of one third). We study this approach in several realistic models of the phase transition and also provide a code snippet that can be used to determine the efficiency coefficient for a given phase transition strength and speed of sound. It turns out that our approach is accurate to the percent level for moderately strong phase transitions, while former approaches give at best the right order of magnitude. ","Model-independent energy budget of cosmological first-order phase transitions",1,['New paper with Felix Giese and Thomas Konstandin!\nWe show how to compute the fraction of energy going into gravitational waves in a cosmological phase transition for any new physics model - an essential quantity for producing LISA sensitivity curves. \n\n<LINK>'],20,04,258 |
198,45,1176181071727095808,888216099757490176,Maithra Raghu,"Rapid Learning or Feature Reuse? New paper: <LINK> We analyze MAML (and meta-learning and meta learning more broadly) finding that feature reuse is the critical component in the efficient learning of new tasks -- leading to some algorithmic simplifications! <LINK> @solvay_1927 There are definitely connections, though in this paper we mostly looked at the standard setups for few-shot learning. Lots of open questions for future work!",https://arxiv.org/abs/1909.09157,"An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of a MAML-trained network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly. ","Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML",2,"['Rapid Learning or Feature Reuse? \n\nNew paper: <LINK>\n\nWe analyze MAML (and meta-learning and meta learning more broadly) finding that feature reuse is the critical component in the efficient learning of new tasks -- leading to some algorithmic simplifications! <LINK>', '@solvay_1927 There are definitely connections, though in this paper we mostly looked at the standard setups for few-shot learning. Lots of open questions for future work!']",19,09,436 |
199,27,1364690743735025668,1012689495420833792,Simon Powers,"Our new paper reviews agent-based traffic simulators, with the aim of helping you choose the right one for your research. Led by @ComputingNapier PhD student Johannes Nguyen, and co-authored by @NeilUrquhart1, Thomas Farrenkopf, Michael Guckert and myself.<LINK>",https://arxiv.org/abs/2102.07505,"Individual traffic significantly contributes to climate change and environmental degradation. Therefore, innovation in sustainable mobility is gaining importance as it helps to reduce environmental pollution. However, effects of new ideas in mobility are difficult to estimate in advance and strongly depend on the individual traffic participants. The application of agent technology is particularly promising as it focuses on modelling heterogeneous individual preferences and behaviours. In this paper, we show how agent-based models are particularly suitable to address three pressing research topics in mobility: 1. Social dilemmas in resource utilisation; 2. Digital connectivity; and 3. New forms of mobility. We then explain how the features of several agent-based simulators are suitable for addressing these topics. We assess the capability of simulators to model individual travel behaviour, discussing implemented features and identifying gaps in functionality that we consider important. ",An Overview of Agent-based Traffic Simulators,1,"['Our new paper reviews agent-based traffic simulators, with the aim of helping you choose the right one for your research. Led by @ComputingNapier PhD student Johannes Nguyen, and co-authored by @NeilUrquhart1, Thomas Farrenkopf, Michael Guckert and myself.<LINK>']",21,02,262 |
200,88,1504639823184809984,22148802,Leo C. Stein 🦁,"🎉 New paper day! 🎉 Tidally-induced nonlinear resonances in EMRIs with an analogue model (<LINK>) This is David's first paper! So, what did we study? 1/6 <LINK> Orbits around spinning black holes have 3 frequencies, so there can be resonances. Adding a perturbation—e.g. a distant 3rd body—can ""break"" resonant tori, creating nonlinear resonances. Here's what it looks like on a Poincaré section. Our phase space is 6d → 4d Poinc. sect. 2/6 <LINK> To visualize a 4-dimensional Poincaré section, use 3 spatial dimensions, and color as a 4th dimension. Here is one for our system, a resonant torus that broke into a nonlinear resonance because of an external perturbation (the gravitational field of some distant stuff). 3/6 <LINK> Inside one of these resonances, we get libration of the ""resonance angle"" on a new time scale, seen below. So, what's the big idea in our paper? If we don't model this, will it screw up the ability of the LISA mission to detect systems that pass through resonance? 4/6 <LINK> We computed the mismatch between signals where we do or don't attempt to model the nonlinear resonance, over a range of parameter space, to find out: how strong must the external perturbation be so that it *must be* modeled to get things right? 5/6 In the end, we found a simple approximate region of parameter space where the resonance must be modeled: ε ≳ 300q², where q is the small mass ratio, and ε is a dimensionless measure of the strength of the perturbation. Read all about it here ➡️ <LINK> 6/6ish If you want to learn more about Poincaré sections, check out this interactive web toy: <LINK> You can see our progress in this project by when I was tweeting about it long ago: <LINK>",https://arxiv.org/abs/2203.08841,"One of the important classes of targets for the future space-based gravitational wave observatory LISA is extreme mass ratio inspirals (EMRIs), where long and accurate waveform modeling is necessary for detection and characterization. When modeling the dynamics of an EMRI, several effects need to be included, such as the modifications caused by an external tidal field. The effects of such perturbations will generally break integrability at resonance, and can produce significant dephasing from an unperturbed system. In this paper, we use a Newtonian analogue of a Kerr black hole to study the effect of an external tidal field on the dynamics and the gravitational waveform. We have developed a numerical framework that takes advantage of the integrability of the background system to evolve it with a symplectic splitting integrator, and compute approximate gravitational waveforms to estimate the time scale over which the perturbation affects the dynamics. We find that different entry points into the resonance in phase-space can produce substantially different dynamics. Finally, by comparing this time scale with the inspiral time, we find tidal effects will need to be included when modeling EMRI gravitational waves when $\varepsilon \gtrsim 300\, q^2$, where $q$ is the small mass ratio, and $\varepsilon$ measures the strength of the external tidal field. ",Tidally-induced nonlinear resonances in EMRIs with an analogue model,8,"[""🎉 New paper day! 🎉 \n\nTidally-induced nonlinear resonances in EMRIs with an analogue model (<LINK>)\n\nThis is David's first paper! So, what did we study?\n1/6 <LINK>"", 'Orbits around spinning black holes have 3 frequencies, so there can be resonances. Adding a perturbation—e.g. a distant 3rd body—can ""break"" resonant tori, creating nonlinear resonances. Here\'s what it looks like on a Poincaré section. Our phase space is 6d → 4d Poinc. sect.\n2/6 https://t.co/HQl3lOLS09', 'To visualize a 4-dimensional Poincaré section, use 3 spatial dimensions, and color as a 4th dimension. Here is one for our system, a resonant torus that broke into a nonlinear resonance because of an external perturbation (the gravitational field of some distant stuff).\n3/6 https://t.co/AGnHAyU4oJ', 'Inside one of these resonances, we get libration of the ""resonance angle"" on a new time scale, seen below. So, what\'s the big idea in our paper? If we don\'t model this, will it screw up the ability of the LISA mission to detect systems that pass through resonance?\n4/6 https://t.co/26AnYhT0Hy', ""We computed the mismatch between signals where we do or don't attempt to model the nonlinear resonance, over a range of parameter space, to find out: how strong must the external perturbation be so that it *must be* modeled to get things right?\n\n5/6"", 'In the end, we found a simple approximate region of parameter space where the resonance must be modeled: ε ≳ 300q², where q is the small mass ratio, and ε is a dimensionless measure of the strength of the perturbation.\n\nRead all about it here ➡️ https://t.co/GrWbb8uJ0F\n\n6/6ish', 'If you want to learn more about Poincaré sections, check out this interactive web toy: https://t.co/AIGfyCcJoT', 'You can see our progress in this project by when I was tweeting about it long ago: https://t.co/CVzoT7n0zO']",22,03,1697 |
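A stroboscopic Poincaré section like those in the thread above is computed by sampling a flow once per driving period. A generic sketch for a damped driven pendulum follows; this is a stand-in system chosen for simplicity, not the paper's Kerr analogue model with its 4d sections.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, gamma=0.1, f=1.2, omega=2.0 / 3.0):
    """Damped, periodically driven pendulum: theta'' = -sin(theta) - gamma*theta' + f*cos(omega*t)."""
    theta, v = y
    return [v, -np.sin(theta) - gamma * v + f * np.cos(omega * t)]

omega = 2.0 / 3.0
strobe = 2.0 * np.pi / omega * np.arange(400)    # one sample per driving period
sol = solve_ivp(rhs, (0.0, strobe[-1]), [0.2, 0.0], t_eval=strobe, rtol=1e-9, atol=1e-9)
theta = np.mod(sol.y[0] + np.pi, 2.0 * np.pi) - np.pi   # wrap the angle to (-pi, pi]
section = np.column_stack([theta, sol.y[1]])     # the points of the Poincaré section
print(section[:5])
```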
201,161,1380099795612733442,888857981403582465,Xiaoxiang ZHU,"Interested in a new #AI4EO task - multi-scene classification in single aerial images? We @Zhu_XLab @ai4eo_de are sharing a large-scale benchmark, called #MultiScene, composed of 100,000 high-resolution aerial images. Stay tuned! Link to paper: <LINK> <LINK>",https://arxiv.org/abs/2104.02846,"Aerial scene recognition is a fundamental research problem in interpreting high-resolution aerial imagery. Over the past few years, most studies focus on classifying an image into one scene category, while in real-world scenarios, it is more often that a single image contains multiple scenes. Therefore, in this paper, we investigate a more practical yet underexplored task -- multi-scene recognition in single images. To this end, we create a large-scale dataset, called MultiScene, composed of 100,000 unconstrained high-resolution aerial images. Considering that manually labeling such images is extremely arduous, we resort to low-cost annotations from crowdsourcing platforms, e.g., OpenStreetMap (OSM). However, OSM data might suffer from incompleteness and incorrectness, which introduce noise into image labels. To address this issue, we visually inspect 14,000 images and correct their scene labels, yielding a subset of cleanly-annotated images, named MultiScene-Clean. With it, we can develop and evaluate deep networks for multi-scene recognition using clean data. Moreover, we provide crowdsourced annotations of all images for the purpose of studying network learning with noisy labels. We conduct experiments with extensive baseline models on both MultiScene-Clean and MultiScene to offer benchmarks for multi-scene recognition in single images and learning from noisy labels for this task, respectively. To facilitate progress, we make our dataset and trained models available on this https URL ","MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images",1,"['Interested in a new #AI4EO task - multi-scene classification in single aerial images? \n\nWe @Zhu_XLab @ai4eo_de are sharing a large-scale benchmark, called #MultiScene, composed of 100,000 high-resolution aerial images. Stay tuned!\n\nLink to paper: <LINK> <LINK>']",21,04,258 |
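Multi-scene recognition as above is a multi-label task: each aerial image may carry several scene labels at once. A minimal per-class binary cross-entropy, the standard loss such benchmarks are typically trained with (illustrative only; not the authors' released models):

```python
import numpy as np

def multilabel_bce(logits, labels, eps=1e-7):
    """Mean binary cross-entropy over (n_images, n_scenes) arrays;
    labels[i, j] = 1 when scene j is present in image i."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

logits = np.array([[2.0, -1.0, 0.5]])   # one image, three candidate scenes
labels = np.array([[1.0, 0.0, 1.0]])    # two scenes present at once
print(multilabel_bce(logits, labels))
```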
202,102,1425006628231864331,945445796098473984,Patrick Schnider,"A new paper on the @arxiv . In „Topological Art in Simple Galleries“, together with Daniel Bertschinger, Nicolas El Maalouly, Till Miltzow and Simon Weber, we study the space of optimal guard placements in the Art Gallery Problem. <LINK> 1/6 See this link for more info on the Art Gallery Problem: <LINK> 2/6 We show a universality theorem similar to Mnëvs universality theorem for order types. Specifically we show that for every semi-algebraic set S, there exists a polygon for which the space of optimal guard placements is homotopy-equivalent to S. 3/6 We further give some small polygons for which the space of optimal guard placements are homeomorphic to non-trivial spaces such as spheres, torus or double torus. 4/6 The paper is accompanied by a video with some animations, made by my coauthor Simon Weber. <LINK> 5/6 The animations were made using @geogebra , and Simon has also made an applet where you can play around with guard placements yourself. Sphere: <LINK> Double Torus: <LINK> 6/6",https://arxiv.org/abs/2108.04007,"Let $P$ be a simple polygon, then the art gallery problem is looking for a minimum set of points (guards) that can see every point in $P$. We say two points $a,b\in P$ can see each other if the line segment $seg(a,b)$ is contained in $P$. We denote by $V(P)$ the family of all minimum guard placements. The Hausdorff distance makes $V(P)$ a metric space and thus a topological space. We show homotopy-universality, that is for every semi-algebraic set $S$ there is a polygon $P$ such that $V(P)$ is homotopy equivalent to $S$. Furthermore, for various concrete topological spaces $T$, we describe instances $I$ of the art gallery problem such that $V(I)$ is homeomorphic to $T$. ",Topological Art in Simple Galleries,6,"['A new paper on the @arxiv . In „Topological Art in Simple Galleries“, together with Daniel Bertschinger, Nicolas El Maalouly, Till Miltzow and Simon Weber, we study the space of optimal guard placements in the Art Gallery Problem.\n\n<LINK>\n\n1/6', 'See this link for more info on the Art Gallery Problem:\n\nhttps://t.co/TRE5OOhiGo\n\n2/6', 'We show a universality theorem similar to Mnëvs universality theorem for order types. Specifically we show that for every semi-algebraic set S, there exists a polygon for which the space of optimal guard placements is homotopy-equivalent to S.\n\n3/6', 'We further give some small polygons for which the space of optimal guard placements are homeomorphic to non-trivial spaces such as spheres, torus or double torus.\n\n4/6', 'The paper is accompanied by a video with some animations, made by my coauthor Simon Weber.\n\nhttps://t.co/J1FWh66Qhh\n\n5/6', 'The animations were made using @geogebra , and Simon has also made an applet where you can play around with guard placements yourself.\n\nSphere:\n\nhttps://t.co/7CC8KOTeSU\n\nDouble Torus:\n\nhttps://t.co/OXqdcUCth3\n\n6/6']",21,08,1000 |
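The paper above turns the family of optimal guard placements into a metric space via the Hausdorff distance. For finite placements, that distance is a small computation; the point sets below are hypothetical.

```python
import numpy as np

def hausdorff(P, Q):
    """Hausdorff distance between finite point sets P (m, 2) and Q (n, 2)."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())        # larger directed distance

guards_a = np.array([[0.0, 0.0], [2.0, 1.0]])
guards_b = np.array([[0.1, 0.0], [2.0, 1.5]])
print(hausdorff(guards_a, guards_b))   # 0.5
```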
203,8,1356269604612567046,3433220662,Anthony Bonato,"New paper up on arXiv. We study a new model for complex hypernetworks and explore its properties related to motifs and clustering. Along the way, we introduce a new clustering coefficient for hypergraphs...there are many competing definitions! <LINK> <LINK>",https://arxiv.org/abs/2101.12560,"Complex networks are pervasive in the real world, capturing dyadic interactions between pairs of vertices, and a large corpus has emerged on their mining and modeling. However, many phenomena are comprised of polyadic interactions between more than two vertices. Such complex hypergraphs range from emails among groups of individuals, scholarly collaboration, or joint interactions of proteins in living cells. A key generative principle within social and other complex networks is transitivity, where friends of friends are more likely friends. The previously proposed Iterated Local Transitivity (ILT) model incorporated transitivity as an evolutionary mechanism. The ILT model provably satisfies many observed properties of social networks, such as densification, low average distances, and high clustering coefficients. We propose a new, generative model for complex hypergraphs based on transitivity, called the Iterated Local Transitivity Hypergraph (or ILTH) model. In ILTH, we iteratively apply the principle of transitivity to form new hypergraphs. The resulting model generates hypergraphs simulating properties observed in real-world complex hypergraphs, such as densification and low average distances. We consider properties unique to hypergraphs not captured by their 2-section. We show that certain motifs, which are specified subhypergraphs of small order, have faster growth rates in ILTH hypergraphs than in random hypergraphs with the same order and expected average degree. We show that the graphs admitting a homomorphism into the 2-section of the initial hypergraph appear as induced subgraphs in the 2-section of ILTH hypergraphs. We consider new and existing hypergraph clustering coefficients, and show that these coefficients have larger values in ILTH hypergraphs than in comparable random hypergraphs. ",The iterated local transitivity model for hypergraphs,1,"['New paper up on arXiv. We study a new model for complex hypernetworks and explore its properties related to motifs and clustering. Along the way, we introduce a new clustering coefficient for hypergraphs...there are many competing definitions!\n\n<LINK> <LINK>']",21,01,257 |
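As background (my recollection of the earlier ILT literature, stated as an assumption): on graphs, one iterated local transitivity step adds, for each vertex x, a clone adjacent to x and to all of x's neighbours; the paper's ILTH model lifts this idea to hypergraphs. A sketch of the graph-level step under that assumption, not the authors' code:

```python
def ilt_step(adj):
    """One ILT step on a graph given as {vertex: set(neighbours)}: every
    vertex x gains a clone (x, "clone") joined to x and to x's neighbours.
    (Assumed graph-level rule; the ILTH model generalises it to hypergraphs.)"""
    new = {v: set(nbrs) for v, nbrs in adj.items()}
    for x, nbrs in adj.items():
        c = (x, "clone")
        new[c] = {x} | set(nbrs)   # clone sees x's closed neighbourhood
        new[x].add(c)
        for y in nbrs:
            new[y].add(c)
    return new

g = {0: {1}, 1: {0, 2}, 2: {1}}    # a path on three vertices
print(len(ilt_step(g)))            # 6 vertices after one step
```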
204,22,1177397120099241984,48144742,Gill Verdon ⚛️🎲🤖,"New quantum paper w/ @TheteamatX & @GoogleAI! Quantum Graph Neural Networks: learning graph-based representions on quantum computers. See thread👇 <LINK> <LINK> We use GQNN's to learn to generate graph-based quantum states & quantum dynamics. For classical problems, we tested on graph clustering / isomorphism tasks <LINK> As it turns out, the QAOA and Graph Convolutional networks had a lot to do with each other! We cooked up an analog of GCN's for as a QNN. A first test was the task of spectral clustering of graphs. Both the multi-qubit per node and single-qubit precision ansatz perform quite well <LINK> Going from unsupervised learning to supervised now, we also tested this ansatz for identifying whether two graphs are isomorphic. The performance was quite impressive, even at a reasonable number of samples for the quantum computer. <LINK> On the quantum data learning sude, a first task we tested was learning quantum dynamics and the effective Hamiltonian/topology of a quantum system from black box access to its dynamics. We introduced a Quantum Recurrent Graph Neural Net architecture for this task. <LINK> As a hint of things to come, we test applications of QGCNN's for learning quantum protocols for quantum networks (quantum internet). We learn how to create resources for quantum sensor networks (multipartite entanglement) without the need for local variational parameters. <LINK> Overall excited for future applications of graph-based quantum neural nets. Next step: quantum chemistry? 🤔 Huge shoutout to the team! Special shoutout to @vsingh_5 and @eluzhnica who did a fantastic job on their first quantum paper (coming from a classical ML background) Big thanks to @jackhidary for putting together the awesome Quantum@X and AI@X residency programs!",https://arxiv.org/abs/1909.12264,"We introduce Quantum Graph Neural Networks (QGNN), a new class of quantum neural network ansatze which are tailored to represent quantum processes which have a graph structure, and are particularly suitable to be executed on distributed quantum systems over a quantum network. Along with this general class of ansatze, we introduce further specialized architectures, namely, Quantum Graph Recurrent Neural Networks (QGRNN) and Quantum Graph Convolutional Neural Networks (QGCNN). We provide four example applications of QGNNs: learning Hamiltonian dynamics of quantum systems, learning how to create multipartite entanglement in a quantum network, unsupervised learning for spectral clustering, and supervised learning for graph isomorphism classification. ",Quantum Graph Neural Networks,8,"['New quantum paper w/ @TheteamatX & @GoogleAI!\n\nQuantum Graph Neural Networks: learning graph-based representions on quantum computers. \n\nSee thread👇\n\n<LINK> <LINK>', ""We use GQNN's to learn to generate graph-based quantum states & quantum dynamics. For classical problems, we tested on graph clustering / isomorphism tasks https://t.co/Oqv7hhnxD7"", ""As it turns out, the QAOA and Graph Convolutional networks had a lot to do with each other!\nWe cooked up an analog of GCN's for as a QNN. A first test was the task of spectral clustering of graphs. Both the multi-qubit per node and single-qubit precision ansatz perform quite well https://t.co/bCu15s99Jp"", 'Going from unsupervised learning to supervised now, we also tested this ansatz for identifying whether two graphs are isomorphic. The performance was quite impressive, even at a reasonable number of samples for the quantum computer. https://t.co/goKPrlyTHo', 'On the quantum data learning sude, a first task we tested was learning quantum dynamics and the effective Hamiltonian/topology of a quantum system from black box access to its dynamics. We introduced a Quantum Recurrent Graph Neural Net architecture for this task. https://t.co/Q49VUGlMWS', ""As a hint of things to come, we test applications of QGCNN's for learning quantum protocols for quantum networks (quantum internet). \n\nWe learn how to create resources for quantum sensor networks (multipartite entanglement) without the need for local variational parameters. https://t.co/XM8FsyIlSM"", 'Overall excited for future applications of graph-based quantum neural nets. Next step: quantum chemistry? 🤔', 'Huge shoutout to the team! Special shoutout to @vsingh_5 and @eluzhnica who did a fantastic job on their first quantum paper (coming from a classical ML background)\n\nBig thanks to @jackhidary for putting together the awesome Quantum@X and AI@X residency programs!']",19,09,1775 |
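The thread above leans on the QAOA, whose alternating structure the authors relate to graph convolutions. Below is a dense-matrix sketch of such an alternating ansatz on a 3-qubit ring; this is generic QAOA machinery on a toy graph, not the paper's QGNN ansätze.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

edges = [(0, 1), (1, 2), (2, 0)]                      # 3-cycle "graph Hamiltonian"
Hg = sum(kron_all([Z if q in e else I2 for q in range(3)]) for e in edges)
Hm = sum(kron_all([X if q == j else I2 for q in range(3)]) for j in range(3))

def qaoa_energy(gammas, betas):
    """<psi|Hg|psi> after alternating exp(-i*gamma*Hg), exp(-i*beta*Hm) layers."""
    psi = np.ones(8, dtype=complex) / np.sqrt(8.0)    # the |+++> state
    for g, b in zip(gammas, betas):
        psi = expm(-1j * g * Hg) @ psi
        psi = expm(-1j * b * Hm) @ psi
    return float(np.real(psi.conj() @ Hg @ psi))

print(qaoa_energy([0.4], [0.3]))
```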
205,137,1497285329925316608,825579580597628930,Soumyabrata Pal,"New paper (<LINK>) titled “On Learning Mixture Models with Sparse Parameters” is accepted to AISTATS 2022! This is a joint work with Arya Mazumdar (@MountainOfMoon ). Overall, the main technical contribution in this paper is to introduce a very novel and general framework for support recovery in a variety of mixture models where we reduce the support recovery problem to estimating certain statistics of the indices. Below, is a detailed thread: Suppose we obtain samples from a uniform mixture model with unknown parameters (high dimensional). Examples include mixtures of Distributions (MD) with unknown means, mixtures of linear regressions (MLR)/classifiers (MLC) with unknown weights, and Gaussian covariates. We assume that the parameter vectors are sparse and formulate the objective of finding the support of the unknown parameter vectors. Support recovery is useful as a pre-processing step for feature selection and can significantly speed up subsequent learning algorithms. In MD setting, we provide sample complexity guarantees that hold for most component distributions. In particular, our results for support recovery hold for mixtures of Gaussians, mixtures of Poisson, mixtures of Laplacian among many others. Importantly, our sample complexity guarantees scale logarithmically with the dimension. We use the canonical method of moments for estimating the parameters where we show an upper bound on the sufficient number of moments via an interesting application of Newton’s identities. In MLR/MLC, we provide similar sample complexity guarantees for support recovery under certain assumptions. Although sometimes, it is trivial to find the union of support and then apply known parameter estimation algorithms, we show that our results improve on these baselines.",https://arxiv.org/abs/2202.11940,"Mixture models are widely used to fit complex and multimodal datasets. In this paper we study mixtures with high dimensional sparse latent parameter vectors and consider the problem of support recovery of those vectors. While parameter learning in mixture models is well-studied, the sparsity constraint remains relatively unexplored. Sparsity of parameter vectors is a natural constraint in variety of settings, and support recovery is a major step towards parameter estimation. We provide efficient algorithms for support recovery that have a logarithmic sample complexity dependence on the dimensionality of the latent space. Our algorithms are quite general, namely they are applicable to 1) mixtures of many different canonical distributions including Uniform, Poisson, Laplace, Gaussians, etc. 2) Mixtures of linear regressions and linear classifiers with Gaussian covariates under different assumptions on the unknown parameters. In most of these settings, our results are the first guarantees on the problem while in the rest, our results provide improvements on existing works. ",On Learning Mixture Models with Sparse Parameters,7,"['New paper (<LINK>) titled “On Learning Mixture Models with Sparse Parameters” is accepted to AISTATS 2022! This is a joint work with Arya Mazumdar (@MountainOfMoon ).', 'Overall, the main technical contribution in this paper is to introduce a very novel and general framework for support recovery in a variety of mixture models where we reduce the support recovery problem to estimating certain statistics of the indices. Below, is a detailed thread:', 'Suppose we obtain samples from a uniform mixture model with unknown parameters (high dimensional). Examples include mixtures of Distributions (MD) with unknown means, mixtures of linear regressions (MLR)/classifiers (MLC) with unknown weights, and Gaussian covariates.', 'We assume that the parameter vectors are sparse and formulate the objective of finding the support of the unknown parameter vectors. Support recovery is useful as a pre-processing step for feature selection and can significantly speed up subsequent learning algorithms.', 'In MD setting, we provide sample complexity guarantees that hold for most component distributions. In particular, our results for support recovery hold for mixtures of Gaussians, mixtures of Poisson, mixtures of Laplacian among many others.', 'Importantly, our sample complexity guarantees scale logarithmically with the dimension. We use the canonical method of moments for estimating the parameters where we show an upper bound on the sufficient number of moments via an interesting application of Newton’s identities.', 'In MLR/MLC, we provide similar sample complexity guarantees for support recovery under certain assumptions. Although sometimes, it is trivial to find the union of support and then apply known parameter estimation algorithms, we show that our results improve on these baselines.']",22,02,1782 |
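The thread above reduces support recovery to estimating coordinate-wise statistics. A toy version of that idea for a mixture of Gaussians with sparse means: declare a coordinate active when its excess second moment clears a threshold. This is a simplification in the spirit of the paper, not the authors' estimator or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)

def support_from_moments(X, sigma, tau):
    """Coordinate i is in the estimated support when mean(x_i^2) - sigma^2 > tau;
    for a Gaussian mixture with shared noise sigma, the excess equals the
    average squared mean of the components along coordinate i."""
    excess = (X ** 2).mean(axis=0) - sigma ** 2
    return np.flatnonzero(excess > tau)

# Mixture of two Gaussians in d=50 whose means share a 3-sparse support.
d, n, sigma = 50, 4000, 1.0
mu = np.zeros((2, d))
mu[0, :3] = [2.0, -2.0, 2.0]
mu[1, :3] = [-2.0, 2.0, 2.0]
comp = rng.integers(0, 2, size=n)              # uniform component labels
X = mu[comp] + sigma * rng.normal(size=(n, d))
print(support_from_moments(X, sigma, tau=0.5))  # expect [0 1 2]
```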
206,166,1389671819767123970,2826142023,Carsten Binnig,"In our most recent paper, we (@BHilprecht @DMTUDA) discuss the vision of zero-shot learning for databases which is a new learning approach for database components. If you want to know more and see our first very promising results, then see <LINK> <LINK>",https://arxiv.org/abs/2105.00642,"In this paper, we present our vision of so called zero-shot learning for databases which is a new learning approach for database components. Zero-shot learning for databases is inspired by recent advances in transfer learning of models such as GPT-3 and can support a new database out-of-the box without the need to train a new model. Furthermore, it can easily be extended to few-shot learning by further retraining the model on the unseen database. As a first concrete contribution in this paper, we show the feasibility of zero-shot learning for the task of physical cost estimation and present very promising initial results. Moreover, as a second contribution we discuss the core challenges related to zero-shot learning for databases and present a roadmap to extend zero-shot learning towards many other tasks beyond cost estimation or even beyond classical database systems and workloads. ",One Model to Rule them All: Towards Zero-Shot Learning for Databases,1,"['In our most recent paper, we (@BHilprecht\n@DMTUDA) discuss the vision of zero-shot learning for databases which is a new learning approach for database components. If you want to know more and see our first very promising results, then see <LINK> <LINK>']",21,05,253 |
207,125,1257677775630188546,894548126329184260,Noriyuki Kojima,"New work with Hadar Averbuch-Elor, @srush_nlp and @yoavartzi to appear at ACL2020! We show building significantly less expressive models isolate the core factors of the strong performance in the black-box model trained on multimodal signals. paper: <LINK> (1/6) <LINK> We consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal. (2/6) We significantly reduce the expressivity of the constituent parser in Shi et al., 2019. 1. adding a bottleneck for word-embeddings, reducing the embedding dimension drastically (i.e., 512-d ==> 1-d). 2. simplifying a scoring function to merges constituents. (3/6) We find our significantly less expressive versions produce similar predictions and perform just as well or even better. (4/6) We also find that a simple lexical signal of noun concreteness plays the main role in the model’s predictions as opposed to more complex syntactic reasoning. This can be easily visualized in 1-d embeddings space learned by a bottleneck for word-embeddings. (5/6) <LINK> This is a work at @CornellCIS and @cornelltech. Code will be released soon! (6/6) Hadar Averbuch-Elor @ElorHadar (6+/6)",https://arxiv.org/abs/2005.01678,"Visual features are a promising signal for learning bootstrap textual models. However, blackbox learning models make it difficult to isolate the specific contribution of visual components. In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal. By constructing simplified versions of the model, we isolate the core factors that yield the model's strong performance. Contrary to what the model might be capable of learning, we find significantly less expressive versions produce similar predictions and perform just as well, or even better. We also find that a simple lexical signal of noun concreteness plays the main role in the model's predictions as opposed to more complex syntactic reasoning. ",What is Learned in Visually Grounded Neural Syntax Acquisition,7,"['New work with Hadar Averbuch-Elor, @srush_nlp and @yoavartzi to appear at ACL2020! We show building significantly less expressive models isolate the core factors of the strong performance in the black-box model trained on multimodal signals. paper: <LINK> \n(1/6) <LINK>', 'We consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal. (2/6)', 'We significantly reduce the expressivity of the constituent parser in Shi et al., 2019. 1. adding a bottleneck for word-embeddings, reducing the embedding dimension drastically (i.e., 512-d ==> 1-d). 2. simplifying a scoring function to merges constituents. (3/6)', 'We find our significantly less expressive versions produce similar predictions and perform just as well or even better. (4/6)', 'We also find that a simple lexical signal of noun concreteness plays the main role in the model’s predictions as opposed to more complex syntactic reasoning. This can be easily visualized in 1-d embeddings space learned by a bottleneck for word-embeddings. (5/6) https://t.co/GrX1ZdMkJN', 'This is a work at @CornellCIS and @cornelltech. Code will be released soon! (6/6)', 'Hadar Averbuch-Elor @ElorHadar (6+/6)']",20,05,1220 |
208,68,1204386350503485450,720998241073029120,MacKenzie Warren,"New paper up today!! With @sean_couch, @evanoc, and Viktoriya Morozova. The short version: how much can we learn from detections of CCSN neutrinos + GWs about the progenitor star, explosion, & remnant compact object?? A TON <LINK> <LINK> We used several hundred 1D CCSN simulations to study multi-messenger (neutrino + gravitational wave) signals. There is no ""universal"" neutrino or GW signal, but the differences between CCSN events will tell us a lot about the progenitor structure and explosion (or lack thereof) But seriously, that's all you need - total neutrino counts, neutrino average energy, and/or the dominant GW frequency. It's stupidly simple. We can do this with current neutrino + GW detectors. Near future facilities will do this even better. This will give us constraints on things like the explosion energy, progenitor star, etc long before shock breakout and w/o relying on pre-explosion imaging. Pre-explosion imaging can constrain the stellar surface before collapse & this constrains the stellar core. Complimentary! I don't have any thoughts here on 70 M_sun BHs, but I can get an estimate for the BH mass for a failed CCSN event within a few seconds of core-collapse. So that's something <LINK> I'll be posting the neutrino + GW info from all of these simulations publicly whenever I get it all uploaded. There are already some awesome follow-up studies in the works. Keep an eye out for @AstroBarker's work on CCSN light curves! This is one of those papers that took approx an eternity, with endless iterations on simulations + analysis. Thanks to my coauthors for their endless patience & insights! As happy as I am to have it submitted, I'm mostly relieved to have it off my back. Now back to more writing😅",https://arxiv.org/abs/1912.03328,"With the advent of modern neutrino and gravitational wave detectors, the promise of multi-messenger detections of the next galactic core-collapse supernova has become very real. Such detections will give insight into the core-collapse supernova mechanism, the structure of the progenitor star, and may resolve longstanding questions in fundamental physics. In order to properly interpret these detections, a thorough understanding of the landscape of possible core-collapse supernova events, and their multi-messenger signals, is needed. We present detailed predictions of neutrino and gravitational wave signals from 1D simulations of stellar core collapse, spanning the landscape of core-collapse progenitors from $9-120\,\mathrm{M}_{\odot}$. In order to achieve explosions in 1D, we use the STIR model, which includes the effects of turbulence and convection in 1D supernova simulations to mimic the 3D explosion mechanism. We study the gravitational wave emission from the 1D simulations using an astroseismology analysis of the proto-neutron star. We find that the neutrino and gravitational wave signals are strongly correlated with the structure of the progenitor star and remnant compact object. Using these correlations, future detections of the first few seconds of neutrino and gravitational wave emission from a galactic core-collapse supernova may be able to provide constraints on stellar evolution independent of pre-explosion imaging and the mass of the compact object remnant prior to fallback accretion. ","Constraining properties of the next nearby core-collapse supernova with multi-messenger signals",7,"['New paper up today!! With @sean_couch, @evanoc, and Viktoriya Morozova. The short version: how much can we learn from detections of CCSN neutrinos + GWs about the progenitor star, explosion, & remnant compact object?? A TON <LINK> <LINK>', 'We used several hundred 1D CCSN simulations to study multi-messenger (neutrino + gravitational wave) signals. There is no ""universal"" neutrino or GW signal, but the differences between CCSN events will tell us a lot about the progenitor structure and explosion (or lack thereof)', ""But seriously, that's all you need - total neutrino counts, neutrino average energy, and/or the dominant GW frequency. It's stupidly simple. We can do this with current neutrino + GW detectors. Near future facilities will do this even better."", 'This will give us constraints on things like the explosion energy, progenitor star, etc long before shock breakout and w/o relying on pre-explosion imaging. Pre-explosion imaging can constrain the stellar surface before collapse & this constrains the stellar core. Complimentary!', ""I don't have any thoughts here on 70 M_sun BHs, but I can get an estimate for the BH mass for a failed CCSN event within a few seconds of core-collapse. So that's something https://t.co/FXZzlTY78L"", ""I'll be posting the neutrino + GW info from all of these simulations publicly whenever I get it all uploaded. There are already some awesome follow-up studies in the works. Keep an eye out for @AstroBarker's work on CCSN light curves!"", ""This is one of those papers that took approx an eternity, with endless iterations on simulations + analysis. Thanks to my coauthors for their endless patience & insights! As happy as I am to have it submitted, I'm mostly relieved to have it off my back. Now back to more writing😅""]",19,12,1734 |
209,74,1105784884629565440,1978330974,Jacob D Biamonte,"My new paper appeared on the arXiv today. <LINK> ""Universal Variational Quantum Computation"" Current quantum processors, being built by IBM, Google, etc. enable a new model of computation, called variational... <LINK> @sycramore The early work on variational quantum computing started with chemistry, see: A. Peruzzo, @JarrodMcclean, ... @A_Aspuru_Guzik A variational eigenvalue solver on a photonic quantum processor. @NatureComms, 5:4213, 2014. The point was actually to extend outside of chem :)",https://arxiv.org/abs/1903.04500,"Variational quantum algorithms dominate contemporary gate-based quantum enhanced optimisation, eigenvalue estimation and machine learning. Here we establish the quantum computational universality of variational quantum computation by developing two objective functions which minimise to prepare outputs of arbitrary quantum circuits. The fleeting resource of variational quantum computation is the number of expected values which must be iteratively minimised using classical-to-quantum outer loop optimisation. An efficient solution to this optimisation problem is given by the quantum circuit being simulated itself. The first construction is efficient in the number of expected values for $n$-qubit circuits containing $\mathcal{O}({poly} \ln n)$ non-Clifford gates -- the number of expected values has no dependence on Clifford gates appearing in the simulated circuit. The second approach yields $\mathcal{O}(L^2)$ expected values while introducing not more than $\mathcal{O}(\ln L)$ slack qubits, for a quantum circuit partitioned into $L$ gates. Hence, the utilitarian variational quantum programming procedure -- based on the classical evaluation of objective functions and iterated feedback -- is in principle as powerful as any other model of quantum computation. This result elevates the formal standing of the variational approach while establishing a new universal model of quantum computation. ",Universal Variational Quantum Computation,2,"['My new paper appeared on the arXiv today. <LINK> \n\n""Universal Variational Quantum Computation""\n\nCurrent quantum processors, being built by IBM, Google, etc. enable a new model of computation, called variational... <LINK>', '@sycramore The early work on variational quantum computing started with chemistry, see: A. Peruzzo, @JarrodMcclean, ... @A_Aspuru_Guzik A variational eigenvalue solver on a photonic quantum processor. @NatureComms, 5:4213, 2014. The point was actually to extend outside of chem :)']",19,03,499 |
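The variational model discussed above rests on a familiar loop: a classical optimizer drives a parameterized quantum state to minimize an expectation value. A one-qubit toy of that outer loop is below; it is generic variational machinery, far simpler than the paper's universality constructions.

```python
import numpy as np
from scipy.optimize import minimize

# One-qubit toy: minimise <psi(t)|H|psi(t)> for H = Z with psi(t) = Ry(t)|0>.
H = np.diag([1.0, -1.0])

def energy(theta):
    t = theta[0]
    psi = np.array([np.cos(t / 2.0), np.sin(t / 2.0)])  # Ry(t)|0>, real amplitudes
    return psi @ H @ psi                                # equals cos(t)

res = minimize(energy, x0=[0.3])
print(res.x, res.fun)   # theta -> pi, energy -> -1 (the ground state of Z)
```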
210,154,1316775765690601472,232287209,Dallas Card,"New EMNLP paper with @PeterHndrsn @ukhndlwl @robinomial @kmahowald and @jurafsky -- With Little Power Comes Great Responsibility -- <LINK> (1/3) In this paper we show why and how statistical power analysis is relevant to NLP, analyze three cases showing that underpowered experiments are widespread in NLP research, and suggested ways to improve things going forward. (2/3) Data, code, and notebooks for running simple power analyses are all available at <LINK>",https://arxiv.org/abs/2010.06595,"Despite its importance to experimental design, statistical power (the probability that, given a real effect, an experiment will reject the null hypothesis) has largely been ignored by the NLP community. Underpowered experiments make it more difficult to discern the difference between statistical noise and meaningful model improvements, and increase the chances of exaggerated findings. By meta-analyzing a set of existing NLP papers and datasets, we characterize typical power for a variety of settings and conclude that underpowered experiments are common in the NLP literature. In particular, for several tasks in the popular GLUE benchmark, small test sets mean that most attempted comparisons to state of the art models will not be adequately powered. Similarly, based on reasonable assumptions, we find that the most typical experimental design for human rating studies will be underpowered to detect small model differences, of the sort that are frequently studied. For machine translation, we find that typical test sets of 2000 sentences have approximately 75% power to detect differences of 1 BLEU point. To improve the situation going forward, we give an overview of best practices for power analysis in NLP and release a series of notebooks to assist with future power analyses. ",With Little Power Comes Great Responsibility,3,"['New EMNLP paper with @PeterHndrsn @ukhndlwl @robinomial @kmahowald and @jurafsky -- With Little Power Comes Great Responsibility -- <LINK> (1/3)', 'In this paper we show why and how statistical power analysis is relevant to NLP, analyze three cases showing that underpowered experiments are widespread in NLP research, and suggested ways to improve things going forward. (2/3)', 'Data, code, and notebooks for running simple power analyses are all available at https://t.co/yv3IUrZHx8']",20,10,461 |
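A power analysis of the kind the paper advocates can be run by simulation: draw many hypothetical test sets under an assumed true accuracy gap and count how often a significance test rejects. A minimal unpaired sketch follows; the paper's analyses (e.g. paired comparisons on a shared test set) are more refined.

```python
import numpy as np

rng = np.random.default_rng(4)

def power_two_accuracies(n_test, acc_a, acc_b, n_sims=4000, z_crit=1.96):
    """Simulated power of a two-proportion z-test for an accuracy gap,
    assuming independent per-item correctness for each system."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.random(n_test) < acc_a          # per-item correctness, system A
        b = rng.random(n_test) < acc_b          # per-item correctness, system B
        p = (a.sum() + b.sum()) / (2 * n_test)  # pooled proportion
        se = np.sqrt(p * (1 - p) * 2 / n_test)
        if se > 0 and abs(a.mean() - b.mean()) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# e.g. a one-point gap on a 2000-item test set is badly underpowered:
print(power_two_accuracies(2000, 0.90, 0.91))
```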
211,196,1506544527682715651,1190175298106675200,Jonas Latz,"Minimising a cost function + {L1/LASSO regulariser, Ginzburg-Landau energy} is hard: we propose and analyse a randomised, continuous-time splitting method. New preprint: Gradient flows and randomised thresholding: sparse inversion and classification. (<LINK>)",https://arxiv.org/abs/2203.11555,"Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg--Landau energy. Standard (sub)gradient descent methods have shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions -- each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions. In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows one of the two subtarget function's negative subgradient at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen-Cahn equation minimising a Ginzburg--Landau energy. In both cases, we study the longtime behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow at any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in a low-dimensional classification problem. ","Gradient flows and randomised thresholding: sparse inversion and classification",1,"['Minimising a cost function + {L1/LASSO regulariser, Ginzburg-Landau energy} is hard: we propose and analyse a randomised, continuous-time splitting method.\n\nNew preprint: Gradient flows and randomised thresholding: sparse inversion and classification. (<LINK>)']",22,03,259 |
212,31,1032504981469913088,30989098,Karin Sandstrom,Interested in the dust-to-gas ratio and its dependence on metallicity? Or maybe dust spectral energy distribution fitting? Check out the new paper by my grad student I-Da Chiang! <LINK> And wish him good luck on his candidacy exam tomorrow am :) (it'll be great) Update: exam was indeed great! On to the next papers of the thesis!,https://arxiv.org/abs/1808.07164,"The dust-to-metals ratio describes the fraction of the heavy elements contained in dust grains, and its variation provides key insights into the life cycle of dust. We measure the dust-to-metals ratio in M101, a nearby galaxy with a radial metallicity (Z) gradient spanning $\sim$1 dex. We fit the dust spectral energy distribution from 100 to 500 $\mu m$ with five variants of the modified blackbody dust emission model in which we vary the temperature distribution and how emissivity depends on wavelength. Among them, the model with a single temperature blackbody modified by a broken power-law emissivity gives the statistically best fit and physically most plausible results. Using these results, we show that the dust-to-gas ratio is proportional to $\rm Z^{1.7}$. This implies that the dust-to-metals ratio is not constant in M101, but decreases as a function of radius, equivalent to a lower fraction of metals trapped in dust at low metallicity (large radius). The dust-to-metals ratio in M101 remains at or above what would be predicted by the minimum depletion level of metals observed in the Milky Way. Our current knowledge of metallicity-dependent CO-to-H$_2$ conversion factor suggests that variations in the conversion factor cannot be responsible for the dust-to-metals ratio trends we observe. This change of dust-to-metals ratio is significantly correlated with molecular hydrogen fraction, which suggests that the accretion of gas phase metals onto existing dust grains could be a mechanism contributing to a variable dust-to-metals ratio. ",The Spatially Resolved Dust-to-Metals Ratio in M101,2,"[""Interested in the dust-to-gas ratio and its dependence on metallicity? Or maybe dust spectral energy distribution fitting? Check out the new paper by my grad student I-Da Chiang! <LINK>\n\nAnd wish him good luck on his candidacy exam tomorrow am :) (it'll be great)"", 'Update: exam was indeed great! On to the next papers of the thesis!']",18,08,330 |
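SED fitting of the sort described above amounts to fitting a modified blackbody to far-infrared photometry. A toy single-temperature, single-power-law-emissivity fit on synthetic fluxes is sketched below; note the paper's preferred model uses a *broken* power-law emissivity, and all numbers here are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants
NU0 = 1e12                                # reference frequency for the emissivity law

def modified_blackbody(nu, log_amp, T, beta):
    """Optically thin modified blackbody: amp * (nu/NU0)^beta * B_nu(T)."""
    planck = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return 10.0**log_amp * (nu / NU0) ** beta * planck

# Synthetic 100-500 micron photometry from assumed dust parameters
wavelengths_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])
nu = c / (wavelengths_um * 1e-6)
flux = modified_blackbody(nu, 0.0, 22.0, 1.8)
popt, _ = curve_fit(modified_blackbody, nu, flux, p0=[0.0, 20.0, 1.5])
print(popt)   # recovers (log_amp, T, beta) ~ (0.0, 22.0, 1.8)
```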
213,100,1503728070880071680,1291064871510069249,Calvin McPhail-Snyder,"I have a new paper up on the arXiv! <LINK> It's not strictly expository (there are some new results) but it's mostly about explaining how two approaches to a problem in knot theory are equivalent. Here's a longer thread: Say you have a link L (disjoint union of circles up to isotopy) in S³. Many links are *hyperbolic*: S³ ∖ L has a complete finite-volume metric of curvature -1. This tells you very powerful information about L. I want to be able to compute with these hyperbolic structures. One way to describe them is to give a representation π₁(S³ ∖ L) → PSL₂(ℂ) that's discrete and faithful; in general I'm also interested in non-discrete/faithful representations, which we can think of as generalized hyperbolic structures. This is direct and algebraic, but pretty hard to do in practice. Also, knowing the matrix coefficients doesn't tell you much about the geometry of the hyperbolic structure. On the other hand, knowing the matrix coefficients is exactly what you need to compute the quantum invariants studied in my PhD thesis <LINK> There is a better way, due to Thurston. If you triangulate your link complement, you can describe the geometry of each tetrahedron, and there are equations to ""glue"" these together to a coherent geometry on S³ ∖ L. If you want to work with link diagrams (not triangulations) then you can use the ""octahedral decomposition"" to get a standard triangulation from a link diagram. The gluing equations for this decomposition have a nice form. <LINK> However, it turns out that these are equivalent! If you use the coordinates on SL₂(ℂ)-reps from quantum 𝔰𝔩₂ they turn out to be equivalent to the shape coordinates from the octahedral decomposition. Explaining this equivalence is why I wrote the paper, so if you want to know more take a look! It's designed to be pretty accessible if you know basic knot theory. @arun_bassoon Thanks! One goal of this is to give a better theory of string diagrams for hyperbolic links (and manifolds, via surgery) which is something you might be interested in",https://arxiv.org/abs/2203.06042,"Hyperbolic structures (equivalently, principal $\operatorname{PSL}_2(\mathbb C)$-bundles with connection) on link complements can be described algebraically by using the octahedral decomposition, which assigns an ideal triangulation to any diagram of the link. The decomposition (like any ideal triangulation) gives a set of gluing equations in shape parameters whose solutions are hyperbolic structures. We show that these equations are closely related to a certain presentation of the Kac-de Concini quantum group $\mathcal{U}_q(\mathfrak{sl}_2)$ in terms of cluster algebras at $q = \xi$ a root of unity. Specifically, we identify ratios of the shape parameters of the octahedral decomposition with central characters of $\mathcal{U}_\xi(\mathfrak{sl}_2)$. The quantum braiding on these characters is known to be closely related to $\operatorname{SL}_2(\mathbb C)$-bundles on link complements, and our work provides a geometric perspective on this construction. ","Hyperbolic structures on link complements, octahedral decompositions, and quantum $\mathfrak{sl}_2$",10,"[""I have a new paper up on the arXiv! <LINK>\n\nIt's not strictly expository (there are some new results) but it's mostly about explaining how two approaches to a problem in knot theory are equivalent. Here's a longer thread:"", 'Say you have a link L (disjoint union of circles up to isotopy) in S³. Many links are *hyperbolic*: S³ ∖ L has a complete finite-volume metric of curvature -1. This tells you very powerful information about L. I want to be able to compute with these hyperbolic structures.', ""One way to describe them is to give a representation π₁(S³ ∖ L) → PSL₂(ℂ) that's discrete and faithful; in general I'm also interested in non-discrete/faithful representations, which we can think of as generalized hyperbolic structures."", ""This is direct and algebraic, but pretty hard to do in practice. Also, knowing the matrix coefficients doesn't tell you much about the geometry of the hyperbolic structure."", 'On the other hand, knowing the matrix coefficients is exactly what you need to compute the quantum invariants studied in my PhD thesis https://t.co/kUgn0mNpq7', 'There is a better way, due to Thurston. If you triangulate your link complement, you can describe the geometry of each tetrahedron, and there are equations to ""glue"" these together to a coherent geometry on S³ ∖ L.', 'If you want to work with link diagrams (not triangulations) then you can use the ""octahedral decomposition"" to get a standard triangulation from a link diagram. The gluing equations for this decomposition have a nice form.\n https://t.co/PIrb0h5HZU', 'However, it turns out that these are equivalent! If you use the coordinates on SL₂(ℂ)-reps from quantum 𝔰𝔩₂ they turn out to be equivalent to the shape coordinates from the octahedral decomposition.', ""Explaining this equivalence is why I wrote the paper, so if you want to know more take a look! It's designed to be pretty accessible if you know basic knot theory."", '@arun_bassoon Thanks! One goal of this is to give a better theory of string diagrams for hyperbolic links (and manifolds, via surgery) which is something you might be interested in']",22,03,2034 |
214,15,1266035801604923395,2530947115,Max Tegmark,"We just posted a new #AI paper on how to auto-discover laws of physics from raw warped video with machine learning. It took Silviu & me a year to get this working, using ideas inspired by #generalrelativity & high-dimensional #knottheory - phew! <LINK> <LINK>",https://arxiv.org/abs/2005.11212,"We present a method for unsupervised learning of equations of motion for objects in raw and optionally distorted unlabeled video. We first train an autoencoder that maps each video frame into a low-dimensional latent space where the laws of motion are as simple as possible, by minimizing a combination of non-linearity, acceleration and prediction error. Differential equations describing the motion are then discovered using Pareto-optimal symbolic regression. We find that our pre-regression (""pregression"") step is able to rediscover Cartesian coordinates of unlabeled moving objects even when the video is distorted by a generalized lens. Using intuition from multidimensional knot-theory, we find that the pregression step is facilitated by first adding extra latent space dimensions to avoid topological problems during training and then removing these extra dimensions via principal component analysis. ",Symbolic Pregression: Discovering Physical Laws from Distorted Video,1,"['We just posted a new #AI paper on how to auto-discover laws of physics from raw warped video with machine learning. It took Silviu & me a year to get this working, using ideas inspired by #generalrelativity & high-dimensional #knottheory - phew! <LINK> <LINK>']",20,05,259 |
215,0,1448592644888793089,935236213,Gavin Lamb,"NEW PAPER! Where we ask, ""what is shaping the jets in neutron-star mergers?"" <LINK> Work led by Lorenzo Nativi (@Stockholm_Uni), with me (@PhysicsUoL), S. Rosswog and C. Lundman (@Stockholm_Uni), and G. Kowal (@usponline) We inject 4 jets with one of 2 powers and 2 structures into a realistic neutron star merger ejecta with neutrino-driven winds. We evolve these systems in 3D relativistic hydrodynamics (using AMUN) to find the resultant jet profile. The initial jet-power has the biggest effect! Lower powered jets result in a more collimated outflow. Resultant profile differences between the jets with varying initial structure jets are largely due to chaotic processes due to the jet-ejecta interaction. But this isn't the end of the story! Afterglow modellers take note Despite the ""small"" variation between fixed power jet's resultant outflow profiles. When we fit afterglows using these structures to data, we find a significant difference in the inferred parameters, such as the system inclination! Used to infer the H_0 from a single GWEM event So, chaotic processes in the jet-ejecta interaction shape the outflow that results in the afterglows to Gamma-ray Bursts and particularly for Gravitational-wave counterparts. Variability in the profile due to these processes is significant, in terms of afterglow parameter fits Lower powered jets are more significantly collimated. Where we used a relatively low-mass ejecta/wind environment -- so, for GW170817 (with a higher ejected mass), the resultant jet structure seen in the afterglow was a result of the jet-ejecta interaction",http://arxiv.org/abs/2109.00814,"Jets can become collimated as they propagate through dense environments and understanding such interactions is crucial for linking physical models of the environments to observations. In this work, we use 3D special-relativistic simulations to study how jets propagate through the environment created around a neutron star merger remnant by neutrino-driven winds. We simulate four jets with two different initial structures, top-hat and Gaussian, and two luminosities. After jet breakout, we study the angular jet structures and the resulting afterglow light curves. We find that the initial angular structures are efficiently washed out during the propagation, despite the small wind mass of only $\sim 10^{-3}$ M$_\odot$. The final structure depends on the jet luminosity as less energetic jets are more strongly collimated, and entrainment of baryons leads to a moderate outflow Lorentz factor ($\approx 40$). Although our jets are not specifically intended to model the outflows of the GW170817 event, we show that they can be used to produce light curves consistent with the afterglow observed in the aftermath of GW170817. Using this procedure we show how the inferred physical parameters e.g., inclination angle, ambient particle number density, can vary substantially between independent fits of the same dataset and appear to be sensitive to smaller details of the angular jet shape, indicating that observationally inferred parameters may depend sensitively on the employed jet models. ",Are Interactions with Neutron Star Merger Winds Shaping the Jets?,6,"['NEW PAPER! Where we ask, ""what is shaping the jets in neutron-star mergers?"" <LINK>\n\nWork led by Lorenzo Nativi (@Stockholm_Uni), with me (@PhysicsUoL), S. Rosswog and C. Lundman (@Stockholm_Uni), and G. Kowal (@usponline)', 'We inject 4 jets with one of 2 powers and 2 structures into a realistic neutron star merger ejecta with neutrino-driven winds. We evolve these systems in 3D relativistic hydrodynamics (using AMUN) to find the resultant jet profile.\n\nThe initial jet-power has the biggest effect!', ""Lower powered jets result in a more collimated outflow.\n\nResultant profile differences between the jets with varying initial structure jets are largely due to chaotic processes due to the jet-ejecta interaction. \n\nBut this isn't the end of the story! Afterglow modellers take note"", 'Despite the ""small"" variation between fixed power jet\'s resultant outflow profiles. When we fit afterglows using these structures to data, we find a significant difference in the inferred parameters, such as the system inclination!\n\nUsed to infer the H_0 from a single GWEM event', 'So, chaotic processes in the jet-ejecta interaction shape the outflow that results in the afterglows to Gamma-ray Bursts and particularly for Gravitational-wave counterparts.\n\nVariability in the profile due to these processes is significant, in terms of afterglow parameter fits', 'Lower powered jets are more significantly collimated.\n\nWhere we used a relatively low-mass ejecta/wind environment -- so, for GW170817 (with a higher ejected mass), the resultant jet structure seen in the afterglow was a result of the jet-ejecta interaction']",21,09,1592 |
216,2,991636581067968512,449236360,Antoine Cully,Our new paper “Hierarchical Behavioral Repertoires with Unsupervised Descriptors” is now on ArXiv: <LINK> and Youtube: <LINK> It shows how a robot learns to draw digits in an unsupervised manner and how to transfer this knowledge to another robot <LINK>,https://arxiv.org/abs/1804.07127,"Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-lasting challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with a twice better fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task. ",Hierarchical Behavioral Repertoires with Unsupervised Descriptors,1,['Our new paper “Hierarchical Behavioral Repertoires with Unsupervised Descriptors” is now on ArXiv: <LINK> and Youtube: <LINK> It shows how a robot learns to draw digits in an unsupervised manner and how to transfer this knowledge to another robot <LINK>'],18,04,253 |
217,58,1070702916393156610,50343115,Thomas Kipf,"CompILE discovers composable segments & encodings of behavior from sequential data (unsupervised and differentiable). Codes can be recomposed to facilitate generalization and exploration in RL. New work with collaborators from @DeepMindAI Paper: <LINK> <LINK> If you're around at #NeurIPS2018, consider stopping by at our talk at the Learning By Instruction (LBI) Workshop, this Saturday at 10:00 -- <LINK>, or ping me if you want to have a chat!",https://arxiv.org/abs/1812.01483,"We introduce Compositional Imitation Learning and Execution (CompILE): a framework for learning reusable, variable-length segments of hierarchically-structured behavior from demonstration data. CompILE uses a novel unsupervised, fully-differentiable sequence segmentation module to learn latent encodings of sequential data that can be re-composed and executed to perform new tasks. Once trained, our model generalizes to sequences of longer length and from environment instances not seen during training. We evaluate CompILE in a challenging 2D multi-task environment and a continuous control task, and show that it can find correct task boundaries and event encodings in an unsupervised manner. Latent codes and associated behavior policies discovered by CompILE can be used by a hierarchical agent, where the high-level policy selects actions in the latent code space, and the low-level, task-specific policies are simply the learned decoders. We found that our CompILE-based agent could learn given only sparse rewards, where agents without task-specific policies struggle. ",CompILE: Compositional Imitation Learning and Execution,2,"['CompILE discovers composable segments & encodings of behavior from sequential data (unsupervised and differentiable). Codes can be recomposed to facilitate generalization and exploration in RL.\n\nNew work with collaborators from @DeepMindAI\nPaper: <LINK> <LINK>', ""If you're around at #NeurIPS2018, consider stopping by at our talk at the Learning By Instruction (LBI) Workshop, this Saturday at 10:00 -- https://t.co/T90GsZDz3q, or ping me if you want to have a chat!""]",18,12,446 |
218,82,1306541068901588993,261658198,Juan Cruz-Benito,"New paper from our #IBMQuantum Cloud Team c/ Sanjay Vishwakarma, @pacomartinfdez, and @ismaelfaro <LINK> 👆We compare different neural network architectures like AWD-LSTMs, AWD-QRNNs and Transformer, while using transfer learning, and different tokenizations to see how they behave in building language models using a Python dataset for automated code generation and filling mask tasks",https://arxiv.org/abs/2009.07740,"In recent years, the use of deep learning in language models gained much attention. Some research projects claim that they can generate text that can be interpreted as human-writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the Machine Learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the Deep-Learning-enabled language models approach, we detected a lack of empirical papers that compare different deep learning architectures to create and use language models based on programming code. This paper compares different neural network architectures like AWD-LSTMs, AWD-QRNNs, and Transformer while using transfer learning and different tokenizations to see how they behave in building language models using a Python dataset for code generation and filling mask tasks. Considering the results, we discuss each approach's different strengths and weaknesses and what gaps we find to evaluate the language models or apply them in a real programming context. ","Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches",2,"['New paper from our #IBMQuantum Cloud Team c/ Sanjay Vishwakarma, @pacomartinfdez, and @ismaelfaro\n\n<LINK>', '👆We compare different neural network architectures like AWD-LSTMs, AWD-QRNNs and Transformer, while using transfer learning, and different tokenizations to see how they behave in building language models using a Python dataset for automated code generation and filling mask tasks']",20,09,384 |
219,55,1062682557505069057,131879500,John Ilee,"Our new paper on G11.92 MM1 - an extreme mass ratio proto-binary star caught in the act of formation, with a fantastic fragmenting disc. Out on astro-ph today: <LINK> (with huge thanks to @ToddRHunter, @dh4gan, @TomHaworthAstro, @tharries and others!) <LINK>",https://arxiv.org/abs/1811.05267,"We present high resolution ($\sim$300 au) Atacama Large Millimeter/submillimeter Array (ALMA) observations of the massive young stellar object G11.92-0.61 MM 1. We resolve the immediate circumstellar environment of MM 1 in 1.3 mm continuum emission and CH$_{3}$CN emission for the first time. The object divides into two main sources - MM 1a, which is the source of a bipolar molecular outflow, and MM 1b, located 0.57'' (1920 au) to the South-East. The main component of MM 1a is an elongated continuum structure, perpendicular to the bipolar outflow, with a size of $0.141'' \times 0.050''$ ($480\times170$ au). The gas kinematics toward MM 1a probed via CH$_{3}$CN trace a variety of scales. The lower energy $J=12-11$ $K=3$ line traces extended, rotating gas within the outflow cavity, while the $v$8=1 line shows a clearly-resolved Keplerian rotation signature. Analysis of the gas kinematics and dust emission shows that the total enclosed mass in MM 1a is $40\pm5$ M$_{\odot}$ (where between 2.2-5.8 M$_{\odot}$ is attributed to the disk), while MM 1b is $<0.6$ M$_{\odot}$. The extreme mass ratio and orbital properties of MM 1a and MM 1b suggest that MM 1b is one of the first observed examples of the formation of a binary star via disk fragmentation around a massive young (proto)star. ",G11.92-0.61 MM 1: A fragmented Keplerian disk surrounding a proto-O star,1,"['Our new paper on G11.92 MM1 - an extreme mass ratio proto-binary star caught in the act of formation, with a fantastic fragmenting disc. Out on astro-ph today: <LINK>\n\n(with huge thanks to @ToddRHunter, @dh4gan, @TomHaworthAstro, @tharries and others!) <LINK>']",18,11,258 |
220,203,1392109863636017155,942238055607435264,Luca Carlone,Traditional approaches for category-level object pose and shape estimation are sensitive to outliers and get stuck in local minima. We propose the first approach that avoids local minima and is robust to 70-90% outliers: <LINK> #computervision #mitsparklab accepted at RSS 2021: <LINK>,https://arxiv.org/abs/2104.08383,"We consider a category-level perception problem, where one is given 3D sensor data picturing an object of a given category (e.g. a car), and has to reconstruct the pose and shape of the object despite intra-class variability (i.e. different car models have different shapes). We consider an active shape model, where -- for an object category -- we are given a library of potential CAD models describing objects in that category, and we adopt a standard formulation where pose and shape estimation are formulated as a non-convex optimization. Our first contribution is to provide the first certifiably optimal solver for pose and shape estimation. In particular, we show that rotation estimation can be decoupled from the estimation of the object translation and shape, and we demonstrate that (i) the optimal object rotation can be computed via a tight (small-size) semidefinite relaxation, and (ii) the translation and shape parameters can be computed in closed-form given the rotation. Our second contribution is to add an outlier rejection layer to our solver, hence making it robust to a large number of misdetections. Towards this goal, we wrap our optimal solver in a robust estimation scheme based on graduated non-convexity. To further enhance robustness to outliers, we also develop the first graph-theoretic formulation to prune outliers in category-level perception, which removes outliers via convex hull and maximum clique computations; the resulting approach is robust to 70%-90% outliers. Our third contribution is an extensive experimental evaluation. Besides providing an ablation study on a simulated dataset and on the PASCAL3D+ dataset, we combine our solver with a deep-learned keypoint detector, and show that the resulting approach improves over the state of the art in vehicle pose estimation in the ApolloScape datasets. ","Optimal Pose and Shape Estimation for Category-level 3D Object Perception",2,"['Traditional approaches for category-level object pose and shape estimation are sensitive to outliers and get stuck in local minima. We propose the first approach that avoids local minima and is robust to 70-90% outliers: <LINK> \n#computervision #mitsparklab', 'accepted at RSS 2021: https://t.co/zhC7MTjaCJ']",21,04,285 |
221,23,977189038598885378,823957466,Hanna Wallach,"New paper on arXiv!!! 🎉🎊 Locally Private Bayesian Inference for Count Models by @AaronSchein, Steven Wu, Mingyuan Zhou & me! Limited-precision local privacy; a reinterpretation of the geometric mechanism; Poisson, Skellam & Bessel distributions!!! 😮 <LINK> <LINK> And we're VERY excited to be working with the incredible @XandaSchofield on the next phase of this project over the summer!!!! 😃🎉🎊 Turns out Steven Wu is on Twitter!!!! @zstevenwu @mdekstrand @zstevenwu This is a great question! I am not sure, but would be very excited to discuss this/think this through with you! Perhaps w/ @_ajbc too... @mdekstrand @_ajbc @zstevenwu @DrMehrpouyan Yay!!! Maybe in 2 weeks time? @timnitGebru @zstevenwu @goodfellow_ian 😮",https://arxiv.org/abs/1803.08471,"We present a general method for privacy-preserving Bayesian inference in Poisson factorization, a broad class of models that includes some of the most widely used models in the social sciences. Our method satisfies limited precision local privacy, a generalization of local differential privacy, which we introduce to formulate privacy guarantees appropriate for sparse count data. We develop an MCMC algorithm that approximates the locally private posterior over model parameters given data that has been locally privatized by the geometric mechanism (Ghosh et al., 2012). Our solution is based on two insights: 1) a novel reinterpretation of the geometric mechanism in terms of the Skellam distribution (Skellam, 1946) and 2) a general theorem that relates the Skellam to the Bessel distribution (Yuan & Kalbfleisch, 2000). We demonstrate our method in two case studies on real-world email data in which we show that our method consistently outperforms the commonly-used naive approach, obtaining higher quality topics in text and more accurate link prediction in networks. On some tasks, our privacy-preserving method even outperforms non-private inference which conditions on the true data. ",Locally Private Bayesian Inference for Count Models,6,"['New paper on arXiv!!! 🎉🎊\n\nLocally Private Bayesian Inference for Count Models\n\nby @AaronSchein, Steven Wu, Mingyuan Zhou & me!\n\nLimited-precision local privacy; a reinterpretation of the geometric mechanism; Poisson, Skellam & Bessel distributions!!! 😮\n\n<LINK> <LINK>', ""And we're VERY excited to be working with the incredible @XandaSchofield on the next phase of this project over the summer!!!! 😃🎉🎊"", 'Turns out Steven Wu is on Twitter!!!! @zstevenwu', '@mdekstrand @zstevenwu This is a great question! I am not sure, but would be very excited to discuss this/think this through with you! Perhaps w/ @_ajbc too...', '@mdekstrand @_ajbc @zstevenwu @DrMehrpouyan Yay!!! Maybe in 2 weeks time?', '@timnitGebru @zstevenwu @goodfellow_ian 😮']",18,03,719 |
222,122,1413419040639524867,1171357907574824961,Łukasz Tychoniec,There it goes! Our new paper presenting an overview of the chemical tracers in protostellar systems is out! Thanks to amazing ALMA observatory @almaobs we can map stellar nurseries at Solar System scales. <LINK> <LINK> I like to think about it as a map that we will use as a reference when observing those systems with JWST at comparable resolution. @JWSTObserver Many of sources covered in this paper are in a queue for the Cycle 1 observations!,https://arxiv.org/abs/2107.03696,"The physical and chemical conditions in Class 0/I protostars are fundamental in unlocking the protostellar accretion process and its impact on planet formation. The aim is to determine which physical components are traced by different molecules at sub-arcsecond scales (100 - 400 au). We use a suite of Atacama Large Millimeter/submillimeter Array (ALMA) datasets in Band 6 (1 mm), Band 5 (1.8 mm) and Band 3 (3 mm) at spatial resolutions 0.5 - 3"" for 16 protostellar sources. The protostellar envelope is well traced by C$^{18}$O, DCO$^+$ and N$_2$D$^+$, with the freeze-out of CO governing the chemistry at envelope scales. Molecular outflows are seen in classical shock tracers like SiO and SO, but ice-mantle products such as CH$_3$OH and HNCO released with the shock are also observed. The molecular jet is prominent not only in SiO and SO but also occasionally in H$_2$CO. The cavity walls show tracers of UV-irradiation such as C$_2$H c-C$_3$H$_2$ and CN. The hot inner envelope, apart from showing emission from complex organic molecules (COMs), also presents compact emission from small molecules like H$_2$S, SO, OCS and H$^{13}$CN, most likely related to ice sublimation and high-temperature chemistry. Sub-arcsecond millimeter-wave observations allow to identify those (simple) molecules that best trace each of the physical components of a protostellar system. COMs are found both in the hot inner envelope (high excitation lines) and in the outflows (lower-excitation lines) with comparable abundances. COMs can coexist with hydrocarbons in the same protostellar sources, but they trace different components. In the near future, mid-IR observations with JWST-MIRI will provide complementary information about the hottest gas and the ice mantle content, at unprecedented sensitivity and at resolutions comparable to ALMA for the same sources. ",Which molecule traces what: chemical diagnostics of protostellar sources,2,"['There it goes! Our new paper presenting an overview of the chemical tracers in protostellar systems is out! Thanks to amazing ALMA observatory @almaobs we can map stellar nurseries at Solar System scales. \n<LINK> <LINK>', 'I like to think about it as a map that we will use as a reference when observing those systems with JWST at comparable resolution. @JWSTObserver Many of sources covered in this paper are in a queue for the Cycle 1 observations!']",21,07,446 |
223,30,1110680737039044609,986037210309783552,Aida Behmard,"New paper - wanna know where some **cool** (i.e., not just water) hydrocarbon snowlines are in your simulated protoplanetary planetary disks?? Try out our binding energies! 🌠 <LINK> ...more science explanation below: This was laboratory astrochemistry study that basically involved releasing tiny amounts of hydrocarbon gas (C2H2, C2H4, C2H6, C3H4, C3H6, C3H8) into a vacuum sealed chamber cooled to ~10 K. As soon as the gas hit a substrate within the chamber, it froze out, and then we heated the chamber up **really** slowly and determined at what temperature points the hydrocarbon ices desorbed into the gas phase. We then backed out binding energies which can be used to extrapolate where within a protoplanetary disk these different hydrocarbons would exist in the solid or gas phase, which has major implications for the eventual compositions of planets that form there! 🌍 @astroshrey bruh i can't believe it's finally over",https://arxiv.org/abs/1903.09720,"Small hydrocarbons are an important organic reservoir in protostellar and protoplanetary environments. Constraints on desorption temperatures and binding energies of such hydrocarbons are needed for accurate predictions of where these molecules exist in the ice vs. gas-phase during the different stages of star and planet formation. Through a series of temperature programmed desorption (TPD) experiments, we constrain the binding energies of 2 and 3-carbon hydrocarbons (C$_{2}$H$_{2}$ - acetylene, C$_{2}$H$_{4}$ - ethylene, C$_{2}$H$_{6}$ - ethane, C$_{3}$H$_{4}$ - propyne, C$_{3}$H$_{6}$ - propene, and C$_{3}$H$_{8}$ - propane) to 2200-4200 K in the case of pure amorphous ices, to 2400-4400 K on compact amorphous H$_{2}$O, and to 2800-4700 K on porous amorphous H$_{2}$O. The 3-carbon hydrocarbon binding energies are always larger than the 2-carbon hydrocarbon binding energies. Within the 2- and 3-carbon hydrocarbon families, the alkynes (i.e., least-saturated) hydrocarbons exhibit the largest binding energies, while the alkane and alkene binding energies are comparable. Binding energies are $\sim$5-20% higher on water ice substrates compared to pure ices, which is a small increase compared to what has been measured for other volatile molecules such as CO and N$_{2}$. Thus in the case of hydrocarbons, H$_{2}$O has a less pronounced effect on sublimation front locations (i.e., snowlines) in protoplanetary disks. ",Desorption Kinetics and Binding Energies of Small Hydrocarbons,5,"['New paper - wanna know where some **cool** (i.e., not just water) hydrocarbon snowlines are in your simulated protoplanetary planetary disks?? Try out our binding energies! 🌠 <LINK> \n\n...more science explanation below:', 'This was laboratory astrochemistry study that basically involved releasing tiny amounts of hydrocarbon gas (C2H2, C2H4, C2H6, C3H4, C3H6, C3H8) into a vacuum sealed chamber cooled to ~10 K.', 'As soon as the gas hit a substrate within the chamber, it froze out, and then we heated the chamber up **really** slowly and determined at what temperature points the hydrocarbon ices desorbed into the gas phase.', 'We then backed out binding energies which can be used to extrapolate where within a protoplanetary disk these different hydrocarbons would exist in the solid or gas phase, which has major implications for the eventual compositions of planets that form there! 🌍', ""@astroshrey bruh i can't believe it's finally over""]",19,03,932 |
224,80,963983121027878912,19510090,Julian Togelius,"Who killed Albert Einstein? That's up to you to find out. @GabbBarros @Bumblebor @SentientDesigns and I proudly present our new paper on generating murder mystery games from open data. <LINK> <LINK> It works like this: you input the name of a person with a Wikipedia profile, and the system creates a whodunnit for you to solve. This involves a lot of crawling @Wikipedia , @WikiCommons and @openstreetmap, evolutionary algorithms and a bunch of other tricks. The paper includes a ""blooper reel"" of sorts, as you would not believe the kind of weird stuff you can find on Wikipedia and the ways in which combining innocuous data can lead to troublesome content. @FlorianHollandt @Wikipedia @WikiCommons @openstreetmap Thanks! Not yet, we're working on a new version which we hope will be stable enough to release publicly this spring. @samim @GabbBarros @Bumblebor @SentientDesigns Thanks! The current version will not be available, but we are working on a new version which will be public this spring.",https://arxiv.org/abs/1802.05219,"This paper presents a framework for generating adventure games from open data. Focusing on the murder mystery type of adventure games, the generator is able to transform open data from Wikipedia articles, OpenStreetMap and images from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves around the murder of a person with a Wikipedia article and populates the game with suspects who must be arrested by the player if guilty of the murder or absolved if innocent. Starting from only one person as the victim, an extensive generative pipeline finds suspects, their alibis, and paths connecting them from open data, transforms open data into cities, buildings, non-player characters, locks and keys and dialog options. The paper describes in detail each generative step, provides a specific playthrough of one WikiMystery where Albert Einstein is murdered, and evaluates the outcomes of games generated for the 100 most influential people of the 20th century. ",Who Killed Albert Einstein? From Open Data to Murder Mystery Games,5,"[""Who killed Albert Einstein? That's up to you to find out. @GabbBarros @Bumblebor @SentientDesigns and I proudly present our new paper on generating murder mystery games from open data.\n<LINK> <LINK>"", 'It works like this: you input the name of a person with a Wikipedia profile, and the system creates a whodunnit for you to solve. This involves a lot of crawling @Wikipedia , @WikiCommons and @openstreetmap, evolutionary algorithms and a bunch of other tricks.', 'The paper includes a ""blooper reel"" of sorts, as you would not believe the kind of weird stuff you can find on Wikipedia and the ways in which combining innocuous data can lead to troublesome content.', ""@FlorianHollandt @Wikipedia @WikiCommons @openstreetmap Thanks! Not yet, we're working on a new version which we hope will be stable enough to release publicly this spring."", '@samim @GabbBarros @Bumblebor @SentientDesigns Thanks! The current version will not be available, but we are working on a new version which will be public this spring.']",18,02,1001 |
225,70,936661302486777856,1524366572,Dieter Lukas,"We are excited that we have a preprint of our study on “Women’s visibility in academic seminars” <LINK> on which we welcome comments. A big thank you to everyone who took our survey/reported observations! <LINK> @alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint <LINK> : Male attendees at academic seminars were over two and half times more likely to ask a question than women attendees (similar to what has been reported for conferences e.g. <LINK>) @alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint <LINK> : women reported more frequently than men that they believed that such a bias exists (see also <LINK>) @alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint <LINK>: while not the focus of our study, only 40% of speakers at academic seminars are women (advice for seminar organizers on how to balance speaker lineup on page 2 <LINK> by @HannahMRowland & me) @catherinelinnen @alecia_carter @AlyssaCroftUBC @GillianSocial @HannahMRowland Thanks - and sorry to hear about your experience. In our observations, we only counted who asked questions but not who wanted to ask questions - so we can't say how much biases in person calling questions contributes to pattern",https://arxiv.org/abs/1711.10985,"The attrition of women in academic careers is a major concern, particularly in Science, Technology, Engineering, and Mathematics subjects. One factor that can contribute to the attrition is the lack of visible role models for women in academia. At early career stages, the behaviour of the local community may play a formative role in identifying ingroup role models, shaping women's impressions of whether or not they can be successful in academia. One common and formative setting to observe role models is the local departmental academic seminar, talk, or presentation. We thus quantified women's visibility through the question-asking behaviour of academics at seminars using observations and an online survey. From the survey responses of over 600 academics in 20 countries, we found that women reported asking fewer questions after seminars compared to men. This impression was supported by observational data from almost 250 seminars in 10 countries: women audience members asked absolutely and proportionally fewer questions than male audience members. When asked why they did not ask questions when they wanted to, women, more than men, endorsed internal factors (e.g., not working up the nerve). However, our observations suggest that structural factors might also play a role; when a man was the first to ask a question, or there were fewer questions, women asked proportionally fewer questions. Attempts to counteract the latter effect by manipulating the time for questions (in an effort to provoke more questions) in two departments were unsuccessful. We propose alternative recommendations for creating an environment that makes everyone feel more comfortable to ask questions, thus promoting equal visibility for women and members of other less visible groups. ","Women's visibility in academic seminars: women ask fewer questions than men",5,"['We are excited that we have a preprint of our study on “Women’s visibility in academic seminars” <LINK> on which we welcome comments. A big thank you to everyone who took our survey/reported observations! <LINK>', '@alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint https://t.co/2FQpCsFiPY : Male attendees at academic seminars were over two and half times more likely to ask a question than women attendees (similar to what has been reported for conferences e.g. https://t.co/tMHtU3DBeV)', '@alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint https://t.co/2FQpCsFiPY : women reported more frequently than men that they believed that such a bias exists (see also https://t.co/fi9JAbyptt)', '@alecia_carter @AlyssaCroftUBC @GillianSocial Some results from our preprint https://t.co/2FQpCsFiPY: while not the focus of our study, only 40% of speakers at academic seminars are women (advice for seminar organizers on how to balance speaker lineup on page 2 https://t.co/vAiGk0lFwU by @HannahMRowland & me)', ""@catherinelinnen @alecia_carter @AlyssaCroftUBC @GillianSocial @HannahMRowland Thanks - and sorry to hear about your experience. In our observations, we only counted who asked questions but not who wanted to ask questions - so we can't say how much biases in person calling questions contributes to pattern""]",17,11,1248 |
226,86,1023217908321804288,335473253,Jason Kalirai,Our group’s new paper led by @stsci Support Scientist Matteo Correnti on how we can use sensitive IR photometry on #Hubble to better measure the ages of old Milky Way globular clusters. This work paves the way for future #JWST stellar pops research. <LINK> <LINK>,https://arxiv.org/abs/1807.10142,"Globular Clusters (GCs) in the Milky Way represent the ideal laboratory to establish the age of the oldest stellar populations and to measure the color-magnitude relation of stars. Infrared (IR) photometry of these objects provides a new opportunity to accomplish this task. In particular, at low stellar masses, the stellar main sequence (MS) in an IR color-magnitude diagram (CMD) exhibits a sharp ""kink"" (due to opacity effects in M dwarfs), such that lower mass and cooler dwarfs become bluer in the F110W - F160W color baseline and not redder. This inversion of the color-magnitude relation offers the possibility to fit GC properties using IR imaging, and to reduce their uncertainties. Here, we used the IR channel of the Wide Field Camera 3 onboard the Hubble Space Telescope to obtain new, deep high-resolution photometry of the old metal-poor GC NGC6397. From the analysis of the GC CMD, we revealed below the MS ""kink"" the presence of two MSs with different chemical composition. We derived the cluster fiducial line and we compared it with a grid of isochrones over a large range of parameter space, allowing age, metallicity, distance and reddening to vary freely within reasonable selected ranges. We derived an age of 12.6 Gyr with a random uncertainty sigma ~ 0.7 Gyr. These results confirm that the analysis of the IR color-magnitude of stars provide a valuable tool to measure the GC ages and offers a new venue to determine their absolute age to sub-Gyr accuracy with next generation IR telescopes. ","The Age of the Old Metal-Poor Globular Cluster NGC6397 Using WFC3/IR Photometry",1,['Our group’s new paper led by @stsci Support Scientist Matteo Correnti on how we can use sensitive IR photometry on #Hubble to better measure the ages of old Milky Way globular clusters. This work paves the way for future #JWST stellar pops research. <LINK> <LINK>'],18,07,263 |
227,21,1311068545720016897,1062160582369959936,ewin,"new paper, in which I learn that SGD is good: <LINK> with András Gilyén and @realZhaoSong held off on tweeting it until now because i noticed after pushing to arxiv that the paper had a bug. thankfully it was not too sick but I took some time to nurse it back to health; hopefully should be all fine now",http://arxiv.org/abs/2009.07268,"We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09, arXiv:0811.3171] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18, arXiv:1704.06174], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an $A \in \mathbb{C}^{m\times n}$ with minimum non-zero singular value $\sigma$ which supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$. Then, for some $x \in \mathbb{C}^n$ satisfying $\|x - A^+b\| \leq \varepsilon\|A^+b\|$, we can output a measurement of $|x\rangle$ in the computational basis and output an entry of $x$ with classical algorithms that run in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^6}{\sigma^{12}\varepsilon^4}\big)$ and $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, respectively. This improves on previous ""quantum-inspired"" algorithms in this line of research by at least a factor of $\frac{\|A\|^{16}}{\sigma^{16}\varepsilon^2}$ [Chia, Gily\'en, Li, Lin, Tang and Wang, STOC'20, arXiv:1910.06151]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired settings, for comparison against future quantum computers. ",An improved quantum-inspired algorithm for linear regression,2,"['new paper, in which I learn that SGD is good: <LINK> with András Gilyén and @realZhaoSong', 'held off on tweeting it until now because i noticed after pushing to arxiv that the paper had a bug. thankfully it was not too sick but I took some time to nurse it back to health; hopefully should be all fine now']",20,09,303 |
228,53,1351811220614090752,970380086036791296,Dr. Melissa van Beekveld,"New paper is out! And it is all about logarithms! Those nasty things were invented ~450 years ago, but still a topic of active research: they have the annoying property that they spoil the validity of our particle-physics predicitions... <LINK> But despair not: resummation to the rescue! This technique restores the predictability of the perturbative series. However, depending how you set it up, you actually get a different numerical answer... This is caused by power corrections, which we study in our work. And then the magic happens... If you add the largest contribution of these power corrections, the methods actually agree! <LINK>",https://arxiv.org/abs/2101.07270,"We study next-to-leading-power (NLP) threshold corrections in colour-singlet production processes, with particular emphasis on Drell-Yan (DY) and single-Higgs production. We assess the quality of the partonic and hadronic threshold expansions for each process up to NNLO. We determine numerically the NLP leading-logarithmic (LL) resummed contribution in addition to the leading-power next-to-next-to-leading logarithmic (LP NNLL) resummed DY and Higgs cross sections, matched to NNLO. We find that the inclusion of NLP logarithms is numerically more relevant than increasing the precision to N$^3$LL at LP for these processes. We also perform an analytical and numerical comparison of LP NNLL + NLP LL resummation in soft-collinear effective theory and direct QCD, where we achieve excellent analytical and numerical agreement once the NLP LL terms are included in both formalisms. Our results underline the phenomenological importance of understanding the NLP structure of QCD cross sections. ","Next-to-leading power threshold corrections for finite order and resummed colour-singlet cross sections",3,"['New paper is out! And it is all about logarithms! Those nasty things were invented ~450 years ago, but still a topic of active research: they have the annoying property that they spoil the validity of our particle-physics predicitions... \n\n<LINK>', 'But despair not: resummation to the rescue! This technique restores the predictability of the perturbative series. However, depending how you set it up, you actually get a different numerical answer...', 'This is caused by power corrections, which we study in our work. And then the magic happens... If you add the largest contribution of these power corrections, the methods actually agree! https://t.co/DXpH2ImWEe']",21,01,641 |
229,89,1058155262714859520,1954858226,Lloyd Knox,"1/8 The following is a haiku tweet thread about a new paper (<LINK>) with K. Aylor, M. Joy, @cosmic_mar, S. Raghunathan, and K. Wu, called ""Sounds Discordant: Classical Distance Ladder and LCDM-based Determinations of the Cosmological Sound Horizon."" 2/8 The Sound Horizon. It is how far sound travels In Big Bang plasma. 3/8 Empirically small. Standard model says it's big. Is the model wrong? 4/8 If model is wrong, Just before end of plasma Is probably when. 5/8 But how is it wrong? Is this the sound of darkness? We do not yet know. 6/8 Is the dark sector More than 'matter', 'energy'? Is there a third piece? 7/8 Or are we confused By distance ladder foibles? This seems unlikely. 8/8 Microwave mapping will test viable models within five years' time.",https://arxiv.org/abs/1811.00537,"Type Ia Supernovae, calibrated by classical distance ladder methods, can be used, in conjunction with galaxy survey two-point correlation functions, to empirically determine the size of the sound horizon $r_{\rm s}$. Assumption of the $\Lambda$CDM model, together with data to constrain its parameters, can also be used to determine the size of the sound horizon. Using a variety of cosmic microwave background (CMB) datasets to constrain $\Lambda$CDM parameters, we find the model-based sound horizon to be larger than the empirically-determined one with a statistical significance of between 2 and 3$\sigma$, depending on the dataset. If reconciliation requires a change to the cosmological model, we argue that change is likely to be important in the two decades of scale factor evolution prior to recombination. Future CMB observations will therefore likely be able to test any such adjustments; e.g., a third generation CMB survey like SPT-3G can achieve a three-fold improvement in the constraints on $r_{\rm s}$ in the $\Lambda$CDM model extended to allow additional light degrees of freedom. ","Sounds Discordant: Classical Distance Ladder & $\Lambda$CDM-based Determinations of the Cosmological Sound Horizon",8,"['1/8 \nThe following is a haiku tweet thread about a new paper (<LINK>) with K. Aylor, M. Joy, @cosmic_mar, S. Raghunathan, and K. Wu, called ""Sounds Discordant: Classical Distance Ladder and LCDM-based Determinations of the Cosmological Sound Horizon.""', '2/8 \nThe Sound Horizon.\nIt is how far sound travels\nIn Big Bang plasma.', ""3/8 \nEmpirically small.\nStandard model says it's big.\nIs the model wrong?"", '4/8 \nIf model is wrong,\nJust before end of plasma\nIs probably when.', '5/8\nBut how is it wrong?\nIs this the sound of darkness?\nWe do not yet know.', ""6/8\nIs the dark sector \nMore than 'matter', 'energy'?\nIs there a third piece?"", '7/8\nOr are we confused\nBy distance ladder foibles?\nThis seems unlikely.', ""8/8\nMicrowave mapping\nwill test viable models\nwithin five years' time.""]",18,11,757 |
230,98,1469016556864692225,1202165863895449600,François Charton,"Can you know if a metabolic network has an equilibrium and which ? Transformers can ! We predict graph equilibriums and their associated flows with very high precision. 1/4 New paper on Arxiv <LINK> with @Amaury_Hayat @RutgersCCIB @Rutgers_Camden <LINK> Metabolic graphs can represent many things, from the elimination of cholesterol from plaque to the carbon cycle of engineered bacterias 2/4 <LINK> We train transformers on randomly generated graphs to predict if there is an equilibrium and what are the associated flows, with high accuracy 3/4 <LINK> Interestingly, models trained on synthetic data generalize to biological networks, and also to graphs with different structures than seen at train time 4/4 <LINK>",https://arxiv.org/abs/2112.03588,"We show that deep learning models, and especially architectures like the Transformer, originally intended for natural language, can be trained on randomly generated datasets to predict to very high accuracy both the qualitative and quantitative features of metabolic networks. Using standard mathematical techniques, we create large sets (40 million elements) of random networks that can be used to train our models. These trained models can predict network equilibrium on random graphs in more than 99% of cases. They can also generalize to graphs with different structure than those encountered at training. Finally, they can predict almost perfectly the equilibria of a small set of known biological networks. Our approach is both very economical in experimental data and uses only small and shallow deep-learning model, far from the large architectures commonly used in machine translation. Such results pave the way for larger use of deep learning models for problems related to biological networks in key areas such as quantitative systems pharmacology, systems biology, and synthetic biology. ",A deep language model to predict metabolic network equilibria,4,"['Can you know if a metabolic network has an equilibrium and which ? Transformers can ! We predict graph equilibriums and their associated flows with very high precision. 1/4\nNew paper on Arxiv <LINK>\nwith @Amaury_Hayat @RutgersCCIB @Rutgers_Camden <LINK>', 'Metabolic graphs can represent many things, from the elimination of cholesterol from plaque to the carbon cycle of engineered bacterias 2/4 https://t.co/eUoRaTyE3V', 'We train transformers on randomly generated graphs to predict if there is an equilibrium and what are the associated flows, with high accuracy 3/4 https://t.co/xS92NK9ln8', 'Interestingly, models trained on synthetic data generalize to biological networks, and also to graphs with different structures than seen at train time 4/4 https://t.co/lEMFjJddx1']",21,12,717 |
231,142,1271449150937333760,61623544,Dr./Prof. Renée Hložek,"New paper out today, with awesome work by my student @ikape_margaret as well as fun collaborators @jcolinhill Simone Ferraro and Marcelo Alvarez. It was fun to work with this group on reionization constraints. Our take home plot is here: <LINK> <LINK> @ikape_margaret @jcolinhill With future CMB missions, the combined 2pt and 4pt constraints on the patchy reionization signal will mean we can break the degeneracy between the width (dz) of reionization and the optical depth (tau). @ikape_margaret @jcolinhill @ikape_margaret already contributed the 2pt constraints for the @SimonsObs - our 2pt constraints will be pretty good already! She’s working on a paper right now looking at constraining these models with current data. Such fun! @DunlapInstitute @That_Astro_Chic @ikape_margaret @jcolinhill @SimonsObs @DunlapInstitute 🥰",https://arxiv.org/abs/2006.06594,"The epoch of reionization is one of the major phase transitions in the history of the universe, and is a focus of ongoing and upcoming cosmic microwave background (CMB) experiments with improved sensitivity to small-scale fluctuations. Reionization also represents a significant contaminant to CMB-derived cosmological parameter constraints, due to the degeneracy between the Thomson-scattering optical depth, $\tau$, and the amplitude of scalar perturbations, $A_s$. This degeneracy subsequently hinders the ability of large-scale structure data to constrain the sum of the neutrino masses, a major target for cosmology in the 2020s. In this work, we explore the kinematic Sunyaev-Zel'dovich (kSZ) effect as a probe of reionization, and show that it can be used to mitigate the optical depth degeneracy with high-sensitivity, high-resolution data from the upcoming CMB-S4 experiment. We discuss the dependence of the kSZ power spectrum on physical reionization model parameters, as well as on empirical reionization parameters, namely $\tau$ and the duration of reionization, $\Delta z$. We show that by combining the kSZ two-point function and the reconstructed kSZ four-point function, degeneracies between $\tau$ and $\Delta z$ can be strongly broken, yielding tight constraints on both parameters. We forecast $\sigma(\tau) = 0.003$ and $\sigma(\Delta z) = 0.25$ for a combination of CMB-S4 and Planck data, including detailed treatment of foregrounds and atmospheric noise. The constraint on $\tau$ is nearly identical to the cosmic-variance limit that can be achieved from large-angle CMB polarization data. The kSZ effect thus promises to yield not only detailed information about the reionization epoch, but also to enable high-precision cosmological constraints on the neutrino mass. ","Mitigating the optical depth degeneracy using the kinematic Sunyaev-Zel'dovich effect with CMB-S4",4,"['New paper out today, with awesome work by my student @ikape_margaret as well as fun collaborators @jcolinhill Simone Ferraro and Marcelo Alvarez. It was fun to work with this group on reionization constraints. Our take home plot is here: <LINK> <LINK>', '@ikape_margaret @jcolinhill With future CMB missions, the combined 2pt and 4pt constraints on the patchy reionization signal will mean we can break the degeneracy between the width (dz) of reionization and the optical depth (tau).', '@ikape_margaret @jcolinhill @ikape_margaret already contributed the 2pt constraints for the @SimonsObs - our 2pt constraints will be pretty good already! She’s working on a paper right now looking at constraining these models with current data. Such fun! @DunlapInstitute', '@That_Astro_Chic @ikape_margaret @jcolinhill @SimonsObs @DunlapInstitute 🥰']",20,06,829 |
232,189,1290466948355100673,1231998355972096000,Yoshitomo Matsubara,"""Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks"" was accepted @icpr2020milan We propose generalized head network distillation & neural filter for split computing Code: <LINK> Preprint: <LINK> @icpr2020milan Our paper on Generalized Head Network Distillation, for introducing bottlenecks into models for Split Computing, and on a Neural Filter that detects the presence of objects with an R-CNN head, has been accepted at ICPR 2020! It is a follow-up to the EMDL paper introduced previously. Code: <LINK> Preprint: <LINK>",https://arxiv.org/abs/2007.15818,"The edge computing paradigm places compute-capable devices - edge servers - at the network edge to assist mobile devices in executing data analysis tasks. Intuitively, offloading compute-intense tasks to edge servers can reduce their execution time. However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading. Herein, we focus on edge computing supporting remote object detection by means of Deep Neural Networks (DNNs), and develop a framework to reduce the amount of data transmitted over the wireless link. The core idea we propose builds on recent approaches splitting DNNs into sections - namely head and tail models - executed by the mobile device and edge server, respectively. The wireless link, then, is used to transport the output of the last layer of the head model to the edge server, instead of the DNN input. Most prior work focuses on classification tasks and leaves the DNN structure unaltered. Herein, our focus is on DNNs for three different object detection tasks, which present a much more convoluted structure, and modify the architecture of the network to: (i) achieve in-network compression by introducing a bottleneck layer in the early layers on the head model, and (ii) prefilter pictures that do not contain objects of interest using a convolutional neural network. Results show that the proposed technique represents an effective intermediate option between local and edge computing in a parameter region where these extreme point solutions fail to provide satisfactory performance. The code and trained models are available at this https URL . ","Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks",2,"['""Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks"" was accepted @icpr2020milan\nWe propose generalized head network distillation & neural filter for split computing\n\nCode: <LINK>\nPreprint: <LINK>', '@icpr2020milan Our paper on Generalized Head Network Distillation, for introducing bottlenecks into models for Split Computing, and on a Neural Filter that detects the presence of objects with an R-CNN head, has been accepted at ICPR 2020!\nIt is a follow-up to the EMDL paper introduced previously.\n\nCode: https://t.co/faDcuLq3Yf\nPreprint: https://t.co/CIOzx8WHwm']",20,09,437 |
233,87,1047392281177849856,235735524,Jonathan Pritchard,"Our new paper led by @ClaudejpSchmit with me and @AlanHeavens on the gravitational and lensing-ISW 21cm bispectrum at low redshift. Come for the “sail plots”, stay for the science. <LINK> @ClaudejpSchmit @AlanHeavens <LINK> @ClaudejpSchmit @AlanHeavens (Is that how this works? Or is that far too silly?) @ClaudejpSchmit @AlanHeavens It’s one of jubilant celebration. From Monty Python and the Holy Grail. <LINK>",https://arxiv.org/abs/1810.00973,"Cosmic Microwave Background experiments from COBE to Planck, have launched cosmology into an era of precision science, where many cosmological parameters are now determined to the percent level. Next generation telescopes, focussing on the cosmological 21cm signal from neutral hydrogen, will probe enormous volumes in the low-redshift Universe, and have the potential to determine dark energy properties and test modifications of Einstein's gravity. We study the 21cm bispectrum due to gravitational collapse as well as the contribution by line of sight perturbations in the form of the lensing-ISW bispectrum at low-redshifts ($z \sim 0.35-3$), targeted by upcoming neutral hydrogen intensity mapping experiments. We compute the expected bispectrum amplitudes and use a Fisher forecast model to compare power spectrum and bispectrum observations of intensity mapping surveys by CHIME, MeerKAT and SKA-mid. We find that combined power spectrum and bispectrum observations have the potential to decrease errors on the cosmological parameters by an order of magnitude compared to Planck. Finally, we compute the contribution of the lensing-ISW bispectrum, and find that, unlike for the cosmic microwave background analyses, it can safely be ignored for 21cm bispectrum observations. ",The gravitational and lensing-ISW bispectrum of 21cm radiation,4,"['Our new paper led by @ClaudejpSchmit with me and @AlanHeavens on the gravitational and lensing-ISW 21cm bispectrum at low redshift. Come for the “sail plots”, stay for the science. <LINK>', '@ClaudejpSchmit @AlanHeavens https://t.co/uMpVrDCmJ6', '@ClaudejpSchmit @AlanHeavens (Is that how this works? Or is that far too silly?)', '@ClaudejpSchmit @AlanHeavens It’s one of jubilant celebration. From Monty Python and the Holy Grail. https://t.co/fzclesTcaY']",18,10,412 |
234,35,940148860541980672,2974498451,Dr. Monika Moscibrodzka,"What does the black hole shadow look like in polarized light? It is an extremely difficult problem. I describe the details of it in my new paper <LINK>. The picture shows a theoretical model of the black hole shadow that #EHT is trying to detect in Stokes I, Q, U, and V. <LINK>",https://arxiv.org/abs/1712.03057,"We describe ${\tt ipole}$, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the ${\tt ibothros}$ scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate- independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, ${\tt ipole}$ is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's ${\tt grtrans}$ polarized transport code on a complicated model problem. We expect ${\tt ipole}$ will mainly find applications in modeling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modeling for the IXPE mission. ","ipole - semianalytic scheme for relativistic polarized radiative transport",1,"['What does the black hole shadow look like in polarized light? It is an extremely difficult problem. I describe the details of it in my new paper <LINK>. The picture shows a theoretical model of the black hole shadow that #EHT is trying to detect in Stokes I, Q, U, and V. <LINK>']",17,12,262
235,200,1488812546165514240,1432777256682860547,Lucas Siemons,"I always felt like the bee’s knees researching biology to gain medical insights, but why do it if we can’t live on Earth? Huge respect for Nick’s study looking for green alternatives to our energy needs. Congrats on your first pre-print! <LINK> #BroScience",https://arxiv.org/abs/2202.00337,"Exchanging hydrophobic alkyl-based side chains to hydrophilic glycol-based side chains is a widely adopted method for improving mixed-transport device performance, despite the impact on solid state packing and polymer-electrolyte interactions being poorly understood. Presented here is a Molecular Dynamics (MD) force field for modelling alkoxylated and glycolated polythiophenes. The force field is validated against known packing motifs for their monomer crystals. MD simulations, coupled with X-ray Diffraction (XRD), show that alkoxylated polythiophenes will pack with a `tilted stack' and straight interdigitating side chains, whilst their glycolated counterpart will pack with a `deflected stack' and an s-bend side chain configuration. MD simulations reveal water penetration pathways into the alkoxylated and glycolated crystals - through the {\pi}-stack and through the lamellar stack respectively. Finally, the two distinct ways tri-ethylene glycol polymers can bind to cations are revealed, showing the formation of a meta-stable single bound state, or an energetically deep double bound state, both with a strong side chain length dependance. The minimum energy pathways for the formation of the chelates are identified, showing the physical process through which cations can bind to one or two side chains of a glycolated polythiophene, with consequences for ion transport in bithiophene semiconductors. ","Impact of Side Chain Hydrophilicity on Packing, Swelling and Ion Interactions in Oxy-bithiophene Semiconductors",1,"['I always felt like the bee’s knees researching biology to gain medical insights, but why do it if we can’t live on Earth? Huge respect for Nick’s study looking for green alternatives to our energy needs. Congrats on your first pre-print! <LINK> #BroScience']",22,02,255
236,262,1274425129771175938,937004359,Marina M.-C. Höhne (née Vidovic),"In our novel paper ""How much can I trust you? Quantifying Uncertainties in Explaining Neural Networks"" we found that this #cleverHansEffect could be the primary strategy for model prediction #RightForTheWrongReason - Check out the preprint <LINK> <LINK> <LINK>",https://arxiv.org/abs/2006.09000,"Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and furthermore trustworthy also for applications in e.g. safety-critical areas. So far, however, no methods for quantifying uncertainties of explanations have been conceived, which is problematic in domains where a high confidence in explanations is a prerequisite. We therefore contribute by proposing a new framework that allows to convert any arbitrary explanation method for neural networks into an explanation method for Bayesian neural networks, with an in-built modeling of uncertainties. Within the Bayesian framework a network's weights follow a distribution that extends standard single explanation scores and heatmaps to distributions thereof, in this manner translating the intrinsic network model uncertainties into a quantification of explanation uncertainties. This allows us for the first time to carve out uncertainties associated with a model explanation and subsequently gauge the appropriate level of explanation confidence for a user (using percentiles). We demonstrate the effectiveness and usefulness of our approach extensively in various experiments, both qualitatively and quantitatively. ","How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks",1,"['In our novel paper ""How much can I trust you? Quantifying Uncertainties in Explaining Neural Networks"" we found that this #cleverHansEffect could be the primary strategy for model prediction #RightForTheWrongReason - Check out the preprint <LINK> <LINK> <LINK>']",20,06,259
237,144,1411043189125955586,1227540343177928704,Mayee Chen,"New paper appearing in #ICML2021! Mandoline: Model Evaluation under Distribution Shift: Paper: <LINK> Code: <LINK> work done w/ equal contribution from @krandiash and @nimit_sohoni , as well as @faitpoms, @kayvonf, and @HazyResearch 1/6 <LINK> ML models are often deployed on unlabeled data that is very different from the data it was trained on. In our paper, we explore how to utilize domain knowledge and side information to efficiently evaluate models under distribution shift. 2/6 We introduce Mandoline, an importance weighting (IW) framework that allows us to estimate model performance on an unlabeled target deployment dataset using just a labeled source dataset. 3/6 Typically, IW based on the covariates suffers from support shift (e.g. when there are previously unseen points in the target dataset) and also struggles when the feature space is complex and high-dimensional. 4/6 Instead, Mandoline performs IW based on user-defined/programmatic ""slices"" that aim to capture relevant axes of distribution shift. 5/6 Theoretically, such slices are able to correct for distribution shift while mitigating the above problems of standard IW. Empirically, we do up to 3x better than baselines of standard IW in the raw feature space. 6/6",https://arxiv.org/abs/2107.00643,"Machine learning models are often deployed in different settings than they were trained and validated on, posing a challenge to practitioners who wish to predict how well the deployed model will perform on a target distribution. If an unlabeled sample from the target distribution is available, along with a labeled sample from a possibly different source distribution, standard approaches such as importance weighting can be applied to estimate performance on the target. However, importance weighting struggles when the source and target distributions have non-overlapping support or are high-dimensional. Taking inspiration from fields such as epidemiology and polling, we develop Mandoline, a new evaluation framework that mitigates these issues. Our key insight is that practitioners may have prior knowledge about the ways in which the distribution shifts, which we can use to better guide the importance weighting procedure. Specifically, users write simple ""slicing functions"" - noisy, potentially correlated binary functions intended to capture possible axes of distribution shift - to compute reweighted performance estimates. We further describe a density ratio estimation framework for the slices and show how its estimation error scales with slice quality and dataset size. Empirical validation on NLP and vision tasks shows that Mandoline can estimate performance on the target distribution up to 3x more accurately compared to standard baselines. ",Mandoline: Model Evaluation under Distribution Shift,6,"['New paper appearing in #ICML2021! Mandoline: Model Evaluation under Distribution Shift:\n\nPaper: <LINK>\nCode: <LINK>\n\nwork done w/ equal contribution from @krandiash and @nimit_sohoni , as well as @faitpoms, @kayvonf, and @HazyResearch 1/6 <LINK>', 'ML models are often deployed on unlabeled data that is very different from the data it was trained on. In our paper, we explore how to utilize domain knowledge and side information to efficiently evaluate models under distribution shift. 2/6', 'We introduce Mandoline, an importance weighting (IW) framework that allows us to estimate model performance on an unlabeled target deployment dataset using just a labeled source dataset. 3/6', 'Typically, IW based on the covariates suffers from support shift (e.g. when there are previously unseen points in the target dataset) and also struggles when the feature space is complex and high-dimensional. 4/6', 'Instead, Mandoline performs IW based on user-defined/programmatic ""slices"" that aim to capture relevant axes of distribution shift. 5/6', 'Theoretically, such slices are able to correct for distribution shift while mitigating the above problems of standard IW. Empirically, we do up to 3x better than baselines of standard IW in the raw feature space. 6/6']",21,07,1242
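A minimal sketch of slice-based importance weighting in the spirit of the Mandoline record above. The logistic-regression density-ratio step is a common stand-in (the paper describes its own density ratio estimation framework for slices), and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_target_accuracy(g_src, g_tgt, correct_src):
    """g_src, g_tgt: (n, k) binary matrices from user-written slicing
    functions; correct_src: 0/1 per-example correctness on the source."""
    X = np.vstack([g_src, g_tgt])
    d = np.concatenate([np.zeros(len(g_src)), np.ones(len(g_tgt))])
    clf = LogisticRegression().fit(X, d)       # P(target | slice vector)
    p = clf.predict_proba(g_src)[:, 1]
    w = p / (1 - p) * len(g_src) / len(g_tgt)  # density ratio over slices
    return np.average(correct_src, weights=w)  # reweighted source accuracy
```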
238,157,1424325432028155909,2794056066,Matthias Grundmann,"For about a month, many invalid IP addresses have been distributed in the #Bitcoin P2P network. We found that this can be used to estimate the number of neighbors of public Bitcoin Core peers and to match multiple addresses to the same peer. Find our report at <LINK>",https://arxiv.org/abs/2108.00815,"A recent spam wave of IP addresses in the Bitcoin P2P network allowed us to estimate the degree distribution of reachable peers in the network. The resulting distribution shows that about every second reachable peer runs with Bitcoin Core's default setting of a maximum of 125 concurrent connections and nearly all connection slots are taken. We validate this result and, in addition, use our observations of the spam wave to group addresses that belong to the same peer. By doing this grouping, we improve on previous measurements and show that simply counting addresses overestimates the number of reachable peers by 13 %. ","Estimating the Peer Degree of Reachable Peers in the Bitcoin P2P Network",1,"['For about a month, many invalid IP addresses have been distributed in the #Bitcoin P2P network. We found that this can be used to estimate the number of neighbors of public Bitcoin Core peers and to match multiple addresses to the same peer. Find our report at <LINK>']",21,08,263
239,32,1264942753097682944,38715455,Saad Bhamla,"Congrats to @Kate__Burgener and graduating @GTChBE undergrad on her 1st 1st author paper out: <LINK> ($) and free pdf (arXiv: <LINK>) In this paper, we describe a new way to clean pollutants from contact lenses... <LINK> If you wear contact lenses, typically you use a rinse-and-rub method -- which involves rinsing with a multipurpose solution and rubbing between your fingers. We wondered if this would remove pollutants such as pollen and nanoparticles? <LINK> If you imagine tiny pollutant particles stuck on a contact lens (wet, soft, fragile and squishy) material, we envisioned rubbing it between fingers would lead to abrasion and couldn't provide sufficient shear forces to remove say nanoparticles (from air pollution)... @Kate__Burgener and I developed a new method we refer to as PoPPR (popper) that stands for Polymer on Polymer Pollutant Removal. Essentially we use another softer polymer (PDMS) to remove pollutants... <LINK> We found that although our PoPPR method was as good as the rinse-and-rub technique for Pollen (25-40um) sized particles, our method really shines for microbeads (1-5um) and nanoparticles (5-10 nm).. <LINK> Interestingly, we found that of the different ratios of PDMS (setting agent to polymer), which tune how soft PDMS is, an optimal ratio (1:40) exists that offers the best performance for lens cleaning across all particles - pollen to nanoparticles. <LINK> Ultimately, we think that this method would be useful for contact lens wearers in areas with high air pollution -- if you wear contact lens and live in any area with high pollution - contact us if you would like to try our method and we can ship you some cleaning polymers. Last, I am proud of @Kate__Burgener who started as a researcher in my lab in 2017 (when the lab was founded) and is now off for her PhD at UW Madison in molecular and environmental toxicology. Good luck to her in her next adventure! <LINK> Here's one last image from her work of pollutants glowing on a contact lens under a microscope -- if you have never looked at a contact lens under a microscope -- you should! <LINK> @nateorndorf @Kate__Burgener @GordonConf Nice - thanks for sharing - these are fantastic!",https://arxiv.org/abs/2005.08732,"Purpose: To demonstrate an alternative to the rinse and rub (RR) method for cleaning pollutants from the exterior surface of soft contact lenses. This proposed technique is termed Polymer on Polymer Pollutant Removal (PoPPR), which utilizes the elastic properties of polydimethylsiloxane (PDMS) to physically remove contaminants from contact lens surfaces through non-adhesive unpeeling. Methods: Three different ratios of setting agent to polymer PDMS (1:30, 1:40, and 1:50) were evaluated using the PoPPR method against the control method of RR with a commercial multi-purpose lens cleaning solution. Three simulated pollutants of different sizes: pollen (25-40 {\mu}m), microbeads (1-5 {\mu}m), and nanoparticles (5-10 nm), were used to test the effectiveness of both cleaning methods. The fraction of pollutants removed from each contact lens was recorded and evaluated for significance. Results: PDMS 1:40 was found to be the optimal ratio for lens cleaning using the PoPPR method. For larger particles (>10 {\mu}m), no difference was observed between conventional RR and proposed PoPPR method (p > 0.05). However, the new PoPPR technique was significantly better at removing small PM2.5 particles (<2.5 {\mu}m) compared to the RR method, specifically for microbeads (p = 0.006) and nanoparticles (p < 0.001). Conclusion: This proof-of-concept work demonstrates that the PoPPR method of cleaning contact lenses is as effective as the conventional cleaning method for larger particles such as pollen. The PoPPR method is more effective at removing extremely fine particulate pollutants, including microplastics and nanoparticles. This method offers a potentially more efficient cleaning protocol that could enhance the safety, health, and comfort of contact lens users, especially those living in regions with significant air pollution. ","A polymer-based technique to remove pollutants from soft contact lenses",10,"['Congrats to @Kate__Burgener and graduating @GTChBE undergrad on her 1st 1st author paper out: <LINK> ($) and free pdf (arXiv: <LINK>)\nIn this paper, we describe a new way to clean pollutants from contact lenses... <LINK>', 'If you wear contact lenses, typically you use a rinse-and-rub method -- which involves rinsing with a multipurpose solution and rubbing between your fingers. We wondered if this would remove pollutants such as pollen and nanoparticles? https://t.co/dS6ZntrXd3', ""If you imagine tiny pollutant particles stuck on a contact lens (wet, soft, fragile and squishy) material, we envisioned rubbing it between fingers would lead to abrasion and couldn't provide sufficient shear forces to remove say nanoparticles (from air pollution)..."", '@Kate__Burgener and I developed a new method we refer to as PoPPR (popper) that stands for Polymer on Polymer Pollutant Removal. Essentially we use another softer polymer (PDMS) to remove pollutants... https://t.co/caol7bn3Yo', 'We found that although our PoPPR method was as good as the rinse-and-rub technique for Pollen (25-40um) sized particles, our method really shines for microbeads (1-5um) and nanoparticles (5-10 nm).. https://t.co/2wVnuqeTA0', 'Interestingly, we found that of the different ratios of PDMS (setting agent to polymer), which tune how soft PDMS is, an optimal ratio (1:40) exists that offers the best performance for lens cleaning across all particles - pollen to nanoparticles. https://t.co/O2k0lAWkTE', 'Ultimately, we think that this method would be useful for contact lens wearers in areas with high air pollution -- if you wear contact lens and live in any area with high pollution - contact us if you would like to try our method and we can ship you some cleaning polymers.', 'Last, I am proud of @Kate__Burgener who started as a researcher in my lab in 2017 (when the lab was founded) and is now off for her PhD at UW Madison in molecular and environmental toxicology. Good luck to her in her next adventure! https://t.co/yQscx7BZ1u', ""Here's one last image from her work of pollutants glowing on a contact lens under a microscope -- if you have never looked at a contact lens under a microscope -- you should! https://t.co/Cxt6nVKFbk"", '@nateorndorf @Kate__Burgener @GordonConf Nice - thanks for sharing - these are fantastic!']",20,05,2192
240,69,1014891929794797568,39525395,Aditya Grover,"Amidst all the interest in compressed sensing using deep generative models, our new paper shows that sparsity assumptions can co-exist & aid generative models. Long oral at @icmlconf next week! w/ Manik Dhar & @ermonste Paper: <LINK> Code: <LINK> <LINK>",https://arxiv.org/abs/1807.01442,"In compressed sensing, a small number of linear measurements can be used to reconstruct an unknown signal. Existing approaches leverage assumptions on the structure of these signals, such as sparsity or the availability of a generative model. A domain-specific generative model can provide a stronger prior and thus allow for recovery with far fewer measurements. However, unlike sparsity-based approaches, existing methods based on generative models guarantee exact recovery only over their support, which is typically only a small subset of the space on which the signals are defined. We propose Sparse-Gen, a framework that allows for sparse deviations from the support set, thereby achieving the best of both worlds by using a domain specific prior and allowing reconstruction over the full space of signals. Theoretically, our framework provides a new class of signals that can be acquired using compressed sensing, reducing classic sparse vector recovery to a special case and avoiding the restrictive support due to a generative model prior. Empirically, we observe consistent improvements in reconstruction accuracy over competing approaches, especially in the more practical setting of transfer compressed sensing where a generative model for a data-rich, source domain aids sensing on a data-scarce, target domain. ","Modeling Sparse Deviations for Compressed Sensing using Generative Models",1,"['Amidst all the interest in compressed sensing using deep generative models, our new paper shows that sparsity assumptions can co-exist & aid generative models. Long oral at @icmlconf next week! w/ Manik Dhar & @ermonste Paper: <LINK> Code: <LINK> <LINK>']",18,07,253
241,78,1440160643953291266,66452419,Alexey Svyatkovskiy,"Our new work on modeling long-range aspects of source code has been accepted to #EMNLP2021 Paper is available on arXiv: <LINK> Summary (1/4) eWASH is an architecture-independent approach to incorporating entire file-level context into a fixed length window for learning, that compares favorably to memory-efficient transformers (Reformer, Performer). (2/4) Our hypothesis was that the syntax hierarchy imposed by developers is a real signal of importance in a task context, and that methods, containing most lines of code, are most dependent on the higher-level scopes of their file-level attributes (3/4) We applied eWASH to GPT-C and PyMT5 models. (4/4)",https://arxiv.org/abs/2109.08780,"Statistical language modeling and translation with transformers have found many successful applications in program understanding and generation tasks, setting high benchmarks for tools in modern software development environments. The finite context window of these neural models means, however, that they will be unable to leverage the entire relevant context of large files and packages for any given task. While there are many efforts to extend the context window, we introduce an architecture-independent approach for leveraging the syntactic hierarchies of source code for incorporating entire file-level context into a fixed-length window. Using concrete syntax trees of each source file we extract syntactic hierarchies and integrate them into context window by selectively removing from view more specific, less relevant scopes for a given task. We evaluate this approach on code generation tasks and joint translation of natural language and source code in Python programming language, achieving a new state-of-the-art in code completion and summarization for Python in the CodeXGLUE benchmark. We also introduce new CodeXGLUE benchmarks for user-experience-motivated tasks: code completion with normalized literals, method body completion/code summarization conditioned on file-level context. ","Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy",4,"['Our new work on modeling long-range aspects of source code has been accepted to #EMNLP2021\n\nPaper is available on arXiv: <LINK>\n\nSummary (1/4)', 'eWASH is an architecture-independent approach to incorporating entire file-level context into a fixed length window for learning, that compares favorably to memory-efficient transformers (Reformer, Performer). (2/4)', 'Our hypothesis was that the\nsyntax hierarchy imposed by developers is a real\nsignal of importance in a task context, and that\nmethods, containing most lines of code, are most\ndependent on the higher-level scopes of their file-level attributes (3/4)', 'We applied eWASH to GPT-C and PyMT5 models. (4/4)']",21,09,655
242,35,951579934614589440,2284142012,Brigitta Sipőcz,"Exciting times, the new Astropy paper is on arxiv including significant updates since the previous paper (0.2 release), the status of the project and some future plans. <LINK> Oh, and let's hope we'll have enough to say for a new paper in a few years time and don't have to wait for version v20.0 :) (First paper was v0.2, this brand new second one is v2.0)",http://arxiv.org/abs/1801.02634,"The Astropy project supports and fosters the development of open-source and openly-developed Python packages that provide commonly-needed functionality to the astronomical community. A key element of the Astropy project is the core package Astropy, which serves as the foundation for more specialized projects and packages. In this article, we provide an overview of the organization of the Astropy project and summarize key features in the core package as of the recent major release, version 2.0. We then describe the project infrastructure designed to facilitate and support development for a broader ecosystem of inter-operable packages. We conclude with a future outlook of planned new features and directions for the broader Astropy project. ","The Astropy Project: Building an inclusive, open-science project and status of the v2.0 core package",2,"['Exciting times, the new Astropy paper is on arxiv including significant updates since the previous paper (0.2 release), the status of the project and some future plans. <LINK>', ""Oh, and let's hope we'll have enough to say for a new paper in a few years time and don't have to wait for version v20.0 :) (First paper was v0.2, this brand new second one is v2.0)""]",18,01,357
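For readers unfamiliar with the core package discussed in the record above, a tiny example of standard astropy functionality (units and coordinates); this is generic library usage, not code from the paper.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

d = (500 * u.pc).to(u.lyr)                              # unit conversion
c = SkyCoord(ra=10.68 * u.deg, dec=41.27 * u.deg, frame="icrs")
print(d, c.galactic)                                    # distance, l/b coords
```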
243,119,1006811970690080769,992779795,Kristof De Mey,"New @ugent paper online: Prediction of the @FIFAWorldCup – A random forest approach with an emphasis on estimated team ability parameters. The model slightly favors #Spain 🇪🇸before the defending champion #Germany.🇩🇪 But... ➡️<LINK> #Statistics #Analytics <LINK>",https://arxiv.org/abs/1806.03208,"In this work, we compare three different modeling approaches for the scores of soccer matches with regard to their predictive performances based on all matches from the four previous FIFA World Cups 2002 - 2014: Poisson regression models, random forests and ranking methods. While the former two are based on the teams' covariate information, the latter method estimates adequate ability parameters that reflect the current strength of the teams best. Within this comparison the best-performing prediction methods on the training data turn out to be the ranking methods and the random forests. However, we show that by combining the random forest with the team ability parameters from the ranking methods as an additional covariate we can improve the predictive power substantially. Finally, this combination of methods is chosen as the final model and based on its estimates, the FIFA World Cup 2018 is simulated repeatedly and winning probabilities are obtained for all teams. The model slightly favors Spain before the defending champion Germany. Additionally, we provide survival probabilities for all teams and at all tournament stages as well as the most probable tournament outcome. ","Prediction of the FIFA World Cup 2018 - A random forest approach with an emphasis on estimated team ability parameters",1,['New @ugent paper online: Prediction of the @FIFAWorldCup – A random forest approach with an emphasis on estimated team ability parameters. The model slightly favors #Spain 🇪🇸before the defending champion #Germany.🇩🇪 But... ➡️<LINK> #Statistics #Analytics <LINK>'],18,06,261
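A hedged sketch of the combined approach in the record above: a random forest over team covariates augmented with a ranking-derived ability parameter as an extra feature. The column names and data below are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

matches = pd.DataFrame({
    "gdp_diff": [0.3, -1.2, 0.8], "fifa_rank_diff": [-5, 20, -12],
    "ability_diff": [0.42, -0.77, 0.31],  # fitted by a ranking method
    "goals": [2, 0, 1],                   # goals scored by the first team
})
rf = RandomForestRegressor(n_estimators=500, random_state=1)
rf.fit(matches.drop(columns="goals"), matches["goals"])
```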
244,81,976989585648517121,712016577567264769,Hernan Makse,"See our latest study of how fake news influenced the 2016 US election. We study 170 million tweets and characterize influencers spreading fake, extremely biased, and traditional news from left to right. <LINK> @kcore_analytics #FakeNews #FakeNewsInfluencers",https://arxiv.org/abs/1803.08491,"The dynamics and influence of fake news on Twitter during the 2016 US presidential election remains to be clarified. Here, we use a dataset of 171 million tweets in the five months preceding the election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated by www.opensources.co, we find that 25% of these tweets spread either fake or extremely biased news. We characterize the networks of information flow to find the most influential spreaders of fake and traditional news and use causal modeling to uncover how fake news influenced the presidential election. We find that, while top influencers spreading traditional center and left leaning news largely influence the activity of Clinton supporters, this causality is reversed for the fake news: the activity of Trump supporters influences the dynamics of the top fake news spreaders. ","Influence of fake news in Twitter during the 2016 US presidential election",1,"['See our latest study of how fake news influenced the 2016 US election. We study 170 million tweets and characterize influencers spreading fake, extremely biased, and traditional news from left to right. <LINK> @kcore_analytics #FakeNews #FakeNewsInfluencers']",18,03,255
245,163,1186752805211693057,325846699,Thaddeus Komacek,"Is weather in the atmospheres of hot exoplanets observationally detectable? In <LINK>, Adam Showman and I determine a baseline for how time-variable hot Jupiter atmospheres should be. We find that JWST should be able to detect time-variability in emitted flux. The figure below shows an emitted flux map and the change in emitted flux over 150 and 75 days. We find that the variability in flux, temperature, and wind speed is concentrated in equatorial regions. The local variability in emitted flux can be as large as ~10%! <LINK> The global variability in the secondary eclipse depth can be as large as 2%. This is smaller than previous upper limits with Spitzer, but should be detectable with JWST. We also find that the phase curve offset and amplitude can shift significantly. Lastly, we find that the wind speeds at the terminator can be strongly time-variable. In some cases, the wind speed at the western limb can reverse from eastward to westward! This variability might be detectable with many high-resolution spectroscopic transit observations. <LINK> Our simulations did not include many other effects that can lead to time-variability, including vertical shear instabilities, hydrogen dissociation, magnetic effects, and clouds. We find that these effects are needed to explain previous observations of time-variability.",https://arxiv.org/abs/1910.09523,"Hot Jupiters receive intense incident stellar light on their daysides, which drives vigorous atmospheric circulation that attempts to erase their large dayside-to-nightside flux contrasts. Propagating waves and instabilities in hot Jupiter atmospheres can cause emergent properties of the atmosphere to be time-variable. In this work, we study such weather in hot Jupiter atmospheres using idealized cloud-free general circulation models with double-grey radiative transfer. We find that hot Jupiter atmospheres can be time-variable at the $\sim 0.1-1\%$ level in globally averaged temperature and at the $\sim 1-10\%$ level in globally averaged wind speeds. As a result, we find that observable quantities are also time variable: the secondary eclipse depth can be variable at the $\lesssim 2\%$ level, the phase curve amplitude can change by $\lesssim 1\%$, the phase curve offset can shift by $\lesssim 5^{\circ}$, and terminator-averaged wind speeds can vary by $\lesssim 2~ \mathrm{km}~\mathrm{s}^{-1}$. Additionally, we calculate how the eastern and western limb-averaged wind speeds vary with incident stellar flux and the strength of an imposed drag that parameterizes Lorentz forces in partially ionized atmospheres. We find that the eastern limb is blueshifted in models over a wide range of equilibrium temperature and drag strength, while the western limb is only redshifted if equilibrium temperatures are $\lesssim1500~\mathrm{K}$ and drag is weak. Lastly, we show that temporal variability may be observationally detectable in the infrared through secondary eclipse observations with JWST, phase curve observations with future space telescopes (e.g., ARIEL), and/or Doppler wind speed measurements with high-resolution spectrographs. ",Temporal Variability in Hot Jupiter Atmospheres,5,"['Is weather in the atmospheres of hot exoplanets observationally detectable? In <LINK>, Adam Showman and I determine a baseline for how time-variable hot Jupiter atmospheres should be. We find that JWST should be able to detect time-variability in emitted flux.', 'The figure below shows an emitted flux map and the change in emitted flux over 150 and 75 days. We find that the variability in flux, temperature, and wind speed is concentrated in equatorial regions. The local variability in emitted flux can be as large as ~10%! https://t.co/iLYCYqsltp', 'The global variability in the secondary eclipse depth can be as large as 2%. This is smaller than previous upper limits with Spitzer, but should be detectable with JWST. We also find that the phase curve offset and amplitude can shift significantly.', 'Lastly, we find that the wind speeds at the terminator can be strongly time-variable. In some cases, the wind speed at the western limb can reverse from eastward to westward! This variability might be detectable with many high-resolution spectroscopic transit observations. https://t.co/F2EoaG2zuQ', 'Our simulations did not include many other effects that can lead to time-variability, including vertical shear instabilities, hydrogen dissociation, magnetic effects, and clouds. We find that these effects are needed to explain previous observations of time-variability.']",19,10,1333
246,156,1356669509680066560,1177300366620184576,Xuhui,"Our new work on debiasing toxic language detection systems @eacl2021 with @MaartenSap @swabhz @nlpnoah @YejinChoinka gives you a better understanding of challenges in the automated debiasing of those systems 1/4 Paper: <LINK> With many discovered (and potentially undiscovered) unwanted biases present in current toxic language detection systems😟, we apply bias-aware and bias-agnostic methods to debias those systems. 2/4 ⚠️: offensive contents 👇 Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). We find that methods show debiasing effects on the in-distribution test sets but fail to debias on the out-of-distribution test sets. 3/4 <LINK> We then propose an automatic, dialect-aware data correction method, as a proof-of-concept study. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. 4/4",https://arxiv.org/abs/2102.00086,"Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases. ",Challenges in Automated Debiasing for Toxic Language Detection,4,"['Our new work on debiasing toxic language detection systems @eacl2021 with @MaartenSap @swabhz @nlpnoah @YejinChoinka gives you a better understanding of challenges in the automated debiasing of those systems 1/4\n\nPaper: <LINK>', 'With many discovered (and potentially undiscovered) unwanted biases present in current toxic language detection systems😟, we apply bias-aware and bias-agnostic methods to debias those systems. 2/4\n\n⚠️: offensive contents 👇', 'Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English).\nWe find that methods show debiasing effects on the in-distribution test sets but fail to debias on the out-of-distribution test sets. 3/4 https://t.co/2u8EvxUED3', 'We then propose an automatic, dialect-aware data correction method, as a proof-of-concept study. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. 4/4']",21,02,920
247,172,1268837557728673794,1171357907574824961,Łukasz Tychoniec,"Super excited about our new paper accepted to A&A! We look into my favorite molecular cloud - Perseus - to measure the masses of the youngest disks. Result: Early planet formation is a likely solution to the missing mass problem in protoplanetary disks. <LINK> <LINK>",https://arxiv.org/abs/2006.02812,"In recent years evidence has been building that planet formation starts early, in the first $\sim$ 0.5 Myr. Studying the dust masses available in young disks enables understanding the origin of planetary systems since mature disks are lacking the solid material necessary to reproduce the observed exoplanetary systems, especially the massive ones. We aim to determine if disks in the embedded stage of star formation contain enough dust to explain the solid content of the most massive exoplanets. We use Atacama Large Millimeter/submillimeter Array (ALMA) Band 6 observations of embedded disks in the Perseus star-forming region together with Very Large Array (VLA) Ka-band (9 mm) data to provide a robust estimate of dust disk masses from the flux densities. Using the DIANA opacity model including large grains, with a dust opacity value of $\kappa_{\rm 9\ mm}$ = 0.28 cm$^{2}$ g$^{-1}$, the median dust masses of the embedded disks in Perseus are 158 M$_\oplus$ for Class 0 and 52 M$_\oplus$ for Class I from the VLA fluxes. The lower limits on the median masses from ALMA fluxes are 47 M$_\oplus$ and 12 M$_\oplus$ for Class 0 and Class I, respectively, obtained using the maximum dust opacity value $\kappa_{\rm 1.3mm}$ = 2.3 cm$^{2}$ g$^{-1}$. The dust masses of young Class 0 and I disks are larger by at least a factor of 10 and 3, respectively, compared with dust masses inferred for Class II disks in Lupus and other regions. The dust masses of Class 0 and I disks in Perseus derived from the VLA data are high enough to produce the observed exoplanet systems with efficiencies acceptable by planet formation models: the solid content in observed giant exoplanets can be explained if planet formation starts in Class 0 phase with an efficiency of $\sim$ 15%. Higher efficiency of $\sim$ 30% is necessary if the planet formation is set to start in Class I disks. ","Dust masses of young disks: constraining the initial solid reservoir for planet formation",1,['Super excited about our new paper accepted to A&A!\nWe look into my favorite molecular cloud - Perseus - to measure the masses of the youngest disks. \nResult: Early planet formation is a likely solution to the missing mass problem in protoplanetary disks. \n\n<LINK> <LINK>'],20,06,264
248,1,1513618963330224138,1212484521678929920,Leonie Weissweiler,"Excited to announce our new paper ""CaMEL: Case Marker Extraction without Labels 🐫"", with @vjhofmann, @_masoudjalili, and @HinrichSchuetze was accepted to #ACL2022! <LINK> (1/5) First, we introduce the new task of extracting case markers from a parallel corpus without morphological segmentation or annotation. Then, we automatically compile a silver standard to evaluate this new task by extracting case markers from the UniMorph dataset. (2/5) We also present a model featuring annotation projection using alignments and statistical tests that achieves 45% F1 on the silver standard. (3/5) After extracting case markers for 83 languages from the Parallel Bible Corpus, we demonstrate how they can be used to cluster noun phrases in it by their deep case or semantic role, opening up exciting avenues for further research. (4/5) We provide our silver standard and our code at <LINK> and look forward to seeing you in Dublin! 🇮🇪 (5/5)",https://arxiv.org/abs/2203.10010,"We introduce CaMEL (Case Marker Extraction without Labels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages. We propose a first model for CaMEL that uses a massively multilingual corpus to extract case markers in 83 languages based only on a noun phrase chunker and an alignment system. To evaluate CaMEL, we automatically construct a silver standard from UniMorph. The case markers extracted by our model can be used to detect and visualise similarities and differences between the case systems of different languages as well as to annotate fine-grained deep cases in languages in which they are not overtly marked. ",CaMEL: Case Marker Extraction without Labels,5,"['Excited to announce our new paper ""CaMEL: Case Marker Extraction without Labels 🐫"", with @vjhofmann, @_masoudjalili, and @HinrichSchuetze was accepted to #ACL2022!\n<LINK> (1/5)', 'First, we introduce the new task of extracting case markers from a parallel corpus without morphological segmentation or annotation. Then, we automatically compile a silver standard to evaluate this new task by extracting case markers from the UniMorph dataset. (2/5)', 'We also present a model featuring annotation projection using alignments and statistical tests that achieves 45% F1 on the silver standard. (3/5)', 'After extracting case markers for 83 languages from the Parallel Bible Corpus, we demonstrate how they can be used to cluster noun phrases in it by their deep case or semantic role, opening up exciting avenues for further research. (4/5)', 'We provide our silver standard and our code at https://t.co/p5j7u5fe5f and look forward to seeing you in Dublin! 🇮🇪 (5/5)']",22,03,933 |
249,148,1435189824424710145,2370131880,Andy Keller,"Together with @wellingmax, we think deep learning needs more organization and structure... topographic organization and equivariant structure 😁 Introducing our new paper: Topographic VAEs learn Equivariant Capsules 📃<LINK> 🧬<LINK> 1/6 <LINK> Topographic organization in the brain describes the observation that nearby neurons on the cortical surface tend to have more strongly correlated activations than spatially distant neurons. What is the advantage of such organization and what is the relation to equivariance? 2/6 To answer these questions we introduce the Topographic VAE, a deep generative model with topographically organized latent variables, and show that it indeed learns to organize its activations according to salient characteristics such as class, width, and style on MNIST. 3/6 <LINK> We then show, by extension of such organization over time, it is possible to learn sets of approximately equivariant features (aka 'capsules') directly from sequences -- where observed transformations become encoded as cyclic permutations within the capsule dimension. (Fig. 1) 4/6 Experimentally we observe that our model yields higher likelihoods than non-topographic baselines on transforming test sequences while additionally being able to decode unseen future sequence elements through a 'capsule roll' in latent space. 5/6 <LINK> We hope this work enables further exploration of the computational benefits of topographic organization in deep neural networks, and further advances in the domain of learned approximate equivariance. Please see our paper for more experiments, and don't hesitate to reach out! 6/6 This work could not have been done without the prior works of Aapo Hyvarinen, @pohoyer, Jarmo Hurri, Mika Inki, Jaakko Väyrynen, @wellingmax, @sindero, @geoffreyhinton, Laurenz Wiskott, @sejnowski, @TacoCohen, @dpkingma, @DaniloJRezende, @shakir_za and so many more. Thank you!",https://arxiv.org/abs/2109.01394,"In this work we seek to bridge the concepts of topographic organization and equivariance in neural networks. To accomplish this, we introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables. We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST. Furthermore, through topographic organization over time (i.e. temporal coherence), we demonstrate how predefined latent space transformation operators can be encouraged for observed transformed input sequences -- a primitive form of unsupervised learned equivariance. We demonstrate that this model successfully learns sets of approximately equivariant features (i.e. ""capsules"") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. Equivariance is verified quantitatively by measuring the approximate commutativity of the inference network and the sequence transformations. Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks. ",Topographic VAEs learn Equivariant Capsules,7,"['Together with @wellingmax, we think deep learning needs more organization and structure... topographic organization and equivariant structure 😁\n\nIntroducing our new paper:\nTopographic VAEs learn Equivariant Capsules\n📃<LINK>\n🧬<LINK>\n\n1/6 <LINK>', 'Topographic organization in the brain describes the observation that nearby neurons on the cortical surface tend to have more strongly correlated activations than spatially distant neurons. What is the advantage of such organization and what is the relation to equivariance?\n\n2/6', 'To answer these questions we introduce the Topographic VAE, a deep generative model with topographically organized latent variables, and show that it indeed learns to organize its activations according to salient characteristics such as class, width, and style on MNIST.\n\n3/6 https://t.co/7XRFazsSb8', ""We then show, by extension of such organization over time, it is possible to learn sets of approximately equivariant features (aka 'capsules') directly from sequences -- where observed transformations become encoded as cyclic permutations within the capsule dimension. (Fig. 1)\n\n4/6"", ""Experimentally we observe that our model yields higher likelihoods than non-topographic baselines on transforming test sequences while additionally being able to decode unseen future sequence elements through a 'capsule roll' in latent space.\n\n5/6 https://t.co/LqaAA2XVIH"", ""We hope this work enables further exploration of the computational benefits of topographic organization in deep neural networks, and further advances in the domain of learned approximate equivariance. Please see our paper for more experiments, and don't hesitate to reach out!\n\n6/6"", 'This work could not have been done without the prior works of Aapo Hyvarinen, @pohoyer, Jarmo Hurri, Mika Inki, Jaakko Väyrynen, @wellingmax, @sindero, @geoffreyhinton, Laurenz Wiskott, @sejnowski, @TacoCohen, @dpkingma, @DaniloJRezende, @shakir_za and so many more. Thank you!']",21,09,1891
250,47,974077485892583424,3242991169,Bharath Ramsundar,"Check out our new paper on ""Spatial Graph Convolutions for Drug Discovery."" Converts a 3D macro molecular structure into a graph structure that it feeds into a graph convolutional deep network. Matches state-of-art with end-to-end learning. <LINK> The work is written up in a great Medium post by lead author @enfeinberg and corresponding author @vijaypande <LINK>",https://arxiv.org/abs/1803.04465,"The arc of drug discovery entails a multiparameter optimization problem spanning vast length scales. They key parameters range from solubility (angstroms) to protein-ligand binding (nanometers) to in vivo toxicity (meters). Through feature learning---instead of feature engineering---deep neural networks promise to outperform both traditional physics-based and knowledge-based machine learning models for predicting molecular properties pertinent to drug discovery. To this end, we present the PotentialNet family of graph convolutions. These models are specifically designed for and achieve state-of-the-art performance for protein-ligand binding affinity. We further validate these deep neural networks by setting new standards of performance in several ligand-based tasks. In parallel, we introduce a new metric, the Regression Enrichment Factor $EF_\chi^{(R)}$, to measure the early enrichment of computational models for chemical data. Finally, we introduce a cross-validation strategy based on structural homology clustering that can more accurately measure model generalizability, which crucially distinguishes the aims of machine learning for drug discovery from standard machine learning tasks. ",PotentialNet for Molecular Property Prediction,2,"['Check out our new paper on ""Spatial Graph Convolutions for Drug Discovery."" Converts a 3D macro molecular structure into a graph structure that it feeds into a graph convolutional deep network. Matches state-of-art with end-to-end learning. <LINK>', 'The work is written up in a great Medium post by lead author @enfeinberg and corresponding author @vijaypande https://t.co/UwegCC8ni7']",18,03,364 |
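A toy illustration of a spatial graph convolution in the spirit of the record above: edges built from 3D atom coordinates within a distance cutoff, then one neighborhood-aggregation step. A simplification for intuition only, not the PotentialNet architecture.

```python
import numpy as np

def spatial_gcn_step(coords, H, W, cutoff=4.0):
    """coords: (n, 3) atom positions; H: (n, f) features; W: (f, f2) weights."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = ((d < cutoff) & (d > 0)).astype(float)   # spatial adjacency, no self-loop
    return np.maximum(A @ H @ W, 0.0)            # aggregate neighbors + ReLU
```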
251,146,1410986063598870528,19391874,Naren Ramakrishnan,"New paper on AntiPatterns in MLOps: <LINK> Joint work with Hays ""Skip"" McCormick, Prakash Arunachalam, Yu Yu et al. of @BNYMellon and @nikhilm_1, @sathappanspm, @hbar of @SanghaniCtrVT. I have learnt a lot about antipatterns from Skip esp. via his book: <LINK> Our paper provides a vocabulary to describe defective ML practices, esp. in financial applications like forecasting treasury settlement failures & balance prediction. See this very accessible blog post from @BNYMellon about their role as a clearing provider processing more than $8.6 trillion in Fed-eligible securities daily and how ML is used in their pipeline. <LINK> As ML matures and gets viewed through a software engineering lens (see @isbellHFh keynote), patterns and antipatterns will pop up everywhere. This paper to appear in KDD workshop on ML in Finance (<LINK>).",https://arxiv.org/abs/2107.00079,"We describe lessons learned from developing and deploying machine learning models at scale across the enterprise in a range of financial analytics applications. These lessons are presented in the form of antipatterns. Just as design patterns codify best software engineering practices, antipatterns provide a vocabulary to describe defective practices and methodologies. Here we catalog and document numerous antipatterns in financial ML operations (MLOps). Some antipatterns are due to technical errors, while others are due to not having sufficient knowledge of the surrounding context in which ML results are used. By providing a common vocabulary to discuss these situations, our intent is that antipatterns will support better documentation of issues, rapid communication between stakeholders, and faster resolution of problems. In addition to cataloging antipatterns, we describe solutions, best practices, and future directions toward MLOps maturity. ",Using AntiPatterns to avoid MLOps Mistakes,4,"['New paper on AntiPatterns in MLOps: \n<LINK>\nJoint work with Hays ""Skip"" McCormick, Prakash Arunachalam, Yu Yu et al. of @BNYMellon and @nikhilm_1, @sathappanspm, @hbar of @SanghaniCtrVT.', 'I have learnt a lot about antipatterns from Skip esp. via his book: \nhttps://t.co/6IYvZd7eHx\nOur paper provides a vocabulary to describe defective ML practices, esp. in financial applications like forecasting treasury settlement failures & balance prediction.', 'See this very accessible blog post from @BNYMellon about their role as a clearing provider processing more than $8.6 trillion in Fed-eligible securities daily and how ML is used in their pipeline.\nhttps://t.co/M1XkMyLlLZ', 'As ML matures and gets viewed through a software engineering lens (see @isbellHFh keynote), patterns and antipatterns will pop up everywhere. This paper to appear in KDD workshop on ML in Finance (https://t.co/Nl51dNuRc5).']",21,07,837 |
252,126,1172580007547301888,2835427044,Sneha Kudugunta,"New EMNLP paper ""Investigating Multilingual NMT Representation at Scale"" w/ @ankurbpn, @orf_bnw, @caswell_isaac, @naveenariva. We study transfer in massively multilingual NMT @GoogleAI from the perspective of representational similarity. Paper: <LINK> 1/n <LINK> @ankurbpn @orf_bnw @caswell_isaac @naveenariva @GoogleAI We discuss the challenges of comparing misaligned sequences, and use a variant of SVCCA. 2/n @ankurbpn @orf_bnw @naveenariva @GoogleAI We find that encoder representations of different languages cluster according to linguistic similarity... 3/n <LINK> @ankurbpn @orf_bnw @naveenariva @GoogleAI … Even at a more fine-grained level. 4/n <LINK> @ankurbpn @orf_bnw @naveenariva @GoogleAI We look at how our similarity measure captures linguistic similarity vs script. 5/n <LINK> @ankurbpn @orf_bnw @naveenariva @GoogleAI We also find that representations of high resource and/or linguistically similar languages are more robust when fine-tuning on an arbitrary language pair, which is critical to determining how much cross-lingual transfer can be expected in a zero or few-shot setting. 6/n <LINK> @ankurbpn @orf_bnw @naveenariva @GoogleAI Colab to play with coming soon! 7/n @ankurbpn @orf_bnw @naveenariva @GoogleAI Huge thanks to my collaborators at @GoogleAI, without whom this work would not have been possible. This work was done as a part of the Google AI Residency - applications open soon, so definitely check it out! <LINK> 8/8",https://arxiv.org/abs/1909.02197,"Multilingual Neural Machine Translation (NMT) models have yielded large empirical success in transfer learning settings. However, these black-box representations are poorly understood, and their mode of transfer remains elusive. In this work, we attempt to understand massively multilingual NMT representations (with 103 languages) using Singular Value Canonical Correlation Analysis (SVCCA), a representation similarity framework that allows us to compare representations across different languages, layers and models. Our analysis validates several empirical results and long-standing intuitions, and unveils new observations regarding how representations evolve in a multilingual translation model. We draw three major conclusions from our analysis, with implications on cross-lingual transfer learning: (i) Encoder representations of different languages cluster based on linguistic similarity, (ii) Representations of a source language learned by the encoder are dependent on the target language, and vice-versa, and (iii) Representations of high resource and/or linguistically similar languages are more robust when fine-tuning on an arbitrary language pair, which is critical to determining how much cross-lingual transfer can be expected in a zero or few-shot setting. We further connect our findings with existing empirical observations in multilingual NMT and transfer learning. ",Investigating Multilingual NMT Representations at Scale,8,"['New EMNLP paper “Investigating Multilingual NMT Representation at Scale” w/ @ankurbpn, @orf_bnw, @caswell_isaac, @naveenariva. We study transfer in massively multilingual NMT @GoogleAI from the perspective of representational similarity.\n\nPaper: <LINK> 1/n <LINK>', '@ankurbpn @orf_bnw @caswell_isaac @naveenariva @GoogleAI We discuss the challenges of comparing misaligned sequences, and use a variant of SVCCA. 2/n', '@ankurbpn @orf_bnw @naveenariva @GoogleAI We find that encoder representations of different languages cluster according to linguistic similarity... 3/n https://t.co/lHFtSIdilm', '@ankurbpn @orf_bnw @naveenariva @GoogleAI … Even at a more fine-grained level. 4/n https://t.co/iiwfV8MaoO', '@ankurbpn @orf_bnw @naveenariva @GoogleAI We look at how our similarity measure captures linguistic similarity vs script. 5/n https://t.co/V5qqInb1CH', '@ankurbpn @orf_bnw @naveenariva @GoogleAI We also find that representations of high resource and/or linguistically similar languages are more robust when fine-tuning on an arbitrary language pair, which is critical to determining how much cross-lingual transfer can be expected in a zero or few-shot setting. 6/n https://t.co/IX4HYmYOSd', '@ankurbpn @orf_bnw @naveenariva @GoogleAI Colab to play with coming soon! 7/n', '@ankurbpn @orf_bnw @naveenariva @GoogleAI Huge thanks to my collaborators at @GoogleAI, without whom this work would not have been possible. This work was done as a part of the Google AI Residency - applications open soon, so definitely check it out!\n\nhttps://t.co/gFZs0tnWTA 8/8']",19,09,1455
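A rough sketch of the SVCCA similarity used in the record above (SVD to keep top activation directions, CCA on the reduced views, mean correlation as the score); a simplification of the published method, with illustrative defaults.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def svcca(X, Y, keep=20, ncomp=10):
    """X, Y: (examples, neurons) activation matrices, e.g. two languages/layers."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Ux = np.linalg.svd(X, full_matrices=False)[0][:, :keep]  # top directions
    Uy = np.linalg.svd(Y, full_matrices=False)[0][:, :keep]
    a, b = CCA(n_components=ncomp).fit_transform(Ux, Uy)
    return np.mean([np.corrcoef(a[:, i], b[:, i])[0, 1] for i in range(ncomp)])
```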
253,50,1418399407792443392,923231130383536128,Eduardo Fonseca,"New paper! We evaluate two pooling methods to improve shift invariance in CNNs, obtaining SOTA on FSD50K. Methods are based on low-pass filtering & adaptive sampling of feature maps. They increase robustness to time/freq shifts in the input! w/ @andrebola_ <LINK> <LINK>",https://arxiv.org/abs/2107.00623,"Recent studies have put into question the commonly assumed shift invariance property of convolutional networks, showing that small shifts in the input can affect the output predictions substantially. In this paper, we analyze the benefits of addressing lack of shift invariance in CNN-based sound event classification. Specifically, we evaluate two pooling methods to improve shift invariance in CNNs, based on low-pass filtering and adaptive sampling of incoming feature maps. These methods are implemented via small architectural modifications inserted into the pooling layers of CNNs. We evaluate the effect of these architectural changes on the FSD50K dataset using models of different capacity and in presence of strong regularization. We show that these modifications consistently improve sound event classification in all cases considered. We also demonstrate empirically that the proposed pooling methods increase shift invariance in the network, making it more robust against time/frequency shifts in input spectrograms. This is achieved by adding a negligible amount of trainable parameters, which makes these methods an appealing alternative to conventional pooling layers. The outcome is a new state-of-the-art mAP of 0.541 on the FSD50K classification benchmark. ","Improving Sound Event Classification by Increasing Shift Invariance in Convolutional Neural Networks",1,"['New paper! We evaluate two pooling methods to improve shift invariance in CNNs, obtaining SOTA on FSD50K. Methods are based on low-pass filtering & adaptive sampling of feature maps. They increase robustness to time/freq shifts in the input! w/ @andrebola_ <LINK> <LINK>']",21,07,270
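
The low-pass-filtering pooling this row reports can be sketched as a drop-in PyTorch layer that blurs feature maps before subsampling, in the spirit of anti-aliased pooling (cf. Zhang 2019); the 3x3 binomial kernel, reflect padding, and the MaxPool pairing are assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Low-pass filter each channel, then subsample: an anti-aliased
    replacement for strided pooling (illustrative configuration)."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)
        k = k / k.sum()                              # normalized binomial kernel
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).clone())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        # depthwise conv: blur every channel independently before striding
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

# drop-in for MaxPool2d(2): dense max first, anti-aliased subsample second
pool = nn.Sequential(nn.MaxPool2d(2, stride=1), BlurPool2d(channels=64))
```
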
254,3,1435290715186159622,790289877388505089,Jeff Hyde,"New paper* w/ undergrad Matthew Saveliev, who stuck with this project admirably throughout the past crazy year. <LINK> *actually ancient history from last week, but I was preoccupied w/ start of classes. (1/N) We took a fresh look at the possibility of high-energy neutrino signals from decay or annihilation of dark matter particles captured within Earth. Some older papers look at this, and there are newer ones addressing things like the anomalous ANITA events, but... ...we wanted a straightforward way to connect IceCube event rates to assumptions about DM properties & existing constraints on that. e.g. how do limits on the DM-nucleon cross section relate to expected rate of neutrinos from annihilation? First, some cool physics that happens here: tau neutrino regeneration. For ~PeV tau neutrinos, the mean free path is much shorter than Earth’s radius. But CC interactions produce a tau lepton, which quickly decays and gives us another (secondary) tau neutrino. Above 10 PeV or so, the dominant energy loss mechanism becomes EM interactions of these tau leptons w/ matter. In fact, even taus that are orders of magnitude more energetic are quickly reduced down to ~PeV, which IceCube can detect. We used a numerical simulation of tau neutrino propagation / regeneration to find a convenient approx distribution for emerging nu-taus. Along with other estimates of DM capture and decay/annihilation rates, we related model parameters to event rates. Decays: The dotted line shows constraints on the DM-nucleon cross section, in comparison w/ the regions of parameter space (above each curve) that would have seen > 10 events from Earth’s core in the IceCube HESE data set. <LINK> Annihilations: The dotted line shows constraints on the thermally-averaged annihilation cross section. As w/ the decay plot, above each curve would give > 10 events. <LINK> IceCube would be sensitive to quite small <sigma v> … but only if the DM-nucleon cross section is pretty big by the standards of direct detection limits. (The cross section is relevant here because of the capture rate.) So it would be tough to explain any HE neutrino events (not just ANITA) in terms of terrestrial DM annihilation or decay. Unless, of course, DM model-specific details manage to evade the assumptions and/or constraints used here... ... in that case, the semianalytic estimate we give for event rate would still be useful, probably just with a change to the capture rate.",https://arxiv.org/abs/2108.13412,"Dark matter particles can be gravitationally trapped by celestial bodies, motivating searches for localized annihilation or decay. If neutrinos are among the decay products, then IceCube and other neutrino observatories could detect them. We investigate this scenario for dark matter particles above $m_{\chi} \gtrsim$ PeV producing tau neutrino signals, using updated modeling of dark matter capture and thermalization. At these energies, tau neutrino regeneration is an important effect during propagation through Earth, allowing detection at distances far longer than one interaction length. We show how large energy loss of tau leptons above $\sim$ PeV drives a wide range of initial energies to the same final energy spectrum of ""secondary"" tau neutrinos at the detector, and we provide an analytic approximation to the numerical results. This effect enables an experiment to constrain decays that occur at very high energies, and we examine the reach of the IceCube high-energy starting event (HESE) sample in the parameter space of trapped dark matter annihilations and decays above PeV. We find that the parameter space probed by IceCube searches would require dark matter cross sections in tension with existing direct-detection bounds. ",Using Secondary Tau Neutrinos to Probe Heavy Dark Matter Decays in Earth,11,"['New paper* w/ undergrad Matthew Saveliev, who stuck with this project admirably throughout the past crazy year. <LINK>\n*actually ancient history from last week, but I was preoccupied w/ start of classes. (1/N)', 'We took a fresh look at the possibility of high-energy neutrino signals from decay or annihilation of dark matter particles captured within Earth. Some older papers look at this, and there are newer ones addressing things like the anomalous ANITA events, but...', '...we wanted a straightforward way to connect IceCube event rates to assumptions about DM properties & existing constraints on that. e.g. how do limits on the DM-nucleon cross section relate to expected rate of neutrinos from annihilation?', 'First, some cool physics that happens here: tau neutrino regeneration. For ~PeV tau neutrinos, the mean free path is much shorter than Earth’s radius. But CC interactions produce a tau lepton, which quickly decays and gives us another (secondary) tau neutrino.', 'Above 10 PeV or so, the dominant energy loss mechanism becomes EM interactions of these tau leptons w/ matter. In fact, even taus that are orders of magnitude more energetic are quickly reduced down to ~PeV, which IceCube can detect.', 'We used a numerical simulation of tau neutrino propagation / regeneration to find a convenient approx distribution for emerging nu-taus. Along with other estimates of DM capture and decay/annihilation rates, we related model parameters to event rates.', 'Decays: The dotted line shows constraints on the DM-nucleon cross section, in comparison w/ the regions of parameter space (above each curve) that would have seen > 10 events from Earth’s core in the IceCube HESE data set. https://t.co/sKnQa80QbN', 'Annihilations: The dotted line shows constraints on the thermally-averaged annihilation cross section. As w/ the decay plot, above each curve would give > 10 events. https://t.co/Sw5Xt98rVF', 'IceCube would be sensitive to quite small <sigma v> … but only if the DM-nucleon cross section is pretty big by the standards of direct detection limits. (The cross section is relevant here because of the capture rate.)', 'So it would be tough to explain any HE neutrino events (not just ANITA) in terms of terrestrial DM annihilation or decay. Unless, of course, DM model-specific details manage to evade the assumptions and/or constraints used here...', '... in that case, the semianalytic estimate we give for event rate would still be useful, probably just with a change to the capture rate.']",21,08,2463
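
For context on how the thread links the DM-nucleon cross section to an annihilation (and hence neutrino event) rate: the textbook capture-annihilation balance, not an equation quoted from this paper, reads

```latex
% N(t) trapped particles under capture rate C and annihilation strength C_A:
% dN/dt = C - C_A N^2, whose solution gives the annihilation rate
\Gamma_{\rm ann} = \tfrac{1}{2} C_A N^2
                 = \frac{C}{2}\,\tanh^{2}\!\left(t/\tau\right),
\qquad \tau = \frac{1}{\sqrt{C\,C_A}} .
% In equilibrium (t >> tau) the rate saturates at C/2, so direct-detection
% limits on the scattering cross section (which sets C) cap the expected
% flux of neutrinos from Earth's core.
```
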
255,120,1403276386089984000,978860930,"Jan Scholtz, Logic Wizard","<LINK> It is a new paper day! We analyzed IFU data from VLT/SINFONI with ALMA band 7 continuum observations to answer: Are AGN driven outflows in Quasars instantaneously suppressing star formation? The answer is. Not really, at least not on 4 kpc scale! It is probably all happening on much smaller scales and over many AGN episodes! Huge thanks to all my co-authors especially @CMHarrisonAstro, @astro_sario and Dave Alexander!",https://arxiv.org/abs/2106.05277,"We present high-resolution ($\sim$2.4\,kpc) ALMA band 7 observations (rest-frame $\lambda \sim 250\mu$m) of three powerful z$\sim$2.5 quasars ($L_{\rm bol}=10^{47.3}$-$10^{47.5}$ ergs s$^{-1}$). These targets have previously been reported as showing evidence for suppressed star formation based on cavities in the narrow H$\alpha$ emission at the location of outflows traced with [O~{\sc iii}] emission. Here we combine the ALMA observations with a re-analysis of the VLT/SINFONI data to map the rest-frame far-infrared emission, H$\alpha$ emission, and [O~{\sc iii}] emission. In all targets we observe high velocity [O~{\sc iii}] gas (i.e., W80$\sim$1000--2000\,km\,s$^{-1}$) across the whole galaxy. We do not identify any H$\alpha$ emission that is free from contamination from AGN-related processes; however, based on SED analyses, we show that the ALMA data contains a significant dust-obscured star formation component in two out of the three systems. This dust emission is found to be extended over $\approx$1.5--5.5\,kpc in the nuclear regions, overlaps with the previously reported H$\alpha$ cavities and is co-spatial with the peak in surface brightness of the [O~{\sc iii}] outflows. In summary, within the resolution and sensitivity limits of the data, we do not see any evidence for an instantaneous shut down of in-situ star formation caused directly by the outflows. However, similar to the conclusions of previous studies and based on our measured star formation rates, we do not rule out that the global host galaxy star formation could be suppressed on longer timescales by the cumulative effect of quasar episodes during the growth of these massive black holes. ","The impact of ionised outflows from z$\sim$2.5 quasars is not through instantaneous in-situ quenching: the evidence from ALMA and VLT/SINFONI",2,"['<LINK> It is a new paper day! We analyzed IFU data from VLT/SINFONI with ALMA band 7 continuum observations to answer: Are AGN driven outflows in Quasars instantaneously suppressing star formation?', 'The answer is. Not really, at least not on 4 kpc scale! It is probably all happening on much smaller scales and over many AGN episodes! Huge thanks to all my co-authors especially @CMHarrisonAstro, @astro_sario and Dave Alexander!']",21,06,428
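
The W80 values quoted in this row's abstract (1000-2000 km/s) use the standard non-parametric line-width statistic, defined from the cumulative emission-line flux:

```latex
% v_x: velocity below which x% of the total line flux lies
W_{80} = v_{90} - v_{10},
\qquad \int_{-\infty}^{v_{x}} F(v)\,dv = \frac{x}{100}\int_{-\infty}^{\infty} F(v)\,dv .
% For a single Gaussian profile W80 is about 1.09 x FWHM, so these values
% indicate ionised gas far faster than ordinary galactic rotation.
```
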
256,57,1117841310876979200,767345894,Xiang Ren,"Happy to share our new work ""Recurrent Event Network for Reasoning over Temporal Knowledge Graphs"" (short version accepted to @iclr2019-RLGM): modeling temporal, multi-relational, concurrent interactions in dynamic KGs. Paper: <LINK> Code: <LINK> <LINK>",https://arxiv.org/abs/1904.05530,"Knowledge graph reasoning is a critical task in natural language processing. The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp. Most existing methods focus on reasoning at past timestamps and they are not able to predict facts happening in the future. This paper proposes Recurrent Event Network (RE-NET), a novel autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. Specifically, our RE-NET employs a recurrent event encoder to encode past facts and uses a neighborhood aggregator to model the connection of facts at the same timestamp. Future facts can then be inferred in a sequential manner based on the two modules. We evaluate our proposed method via link prediction at future times on five public datasets. Through extensive experiments, we demonstrate the strength of RE-NET, especially on multi-step inference over future timestamps, and achieve state-of-the-art performance on all five datasets. Code and data can be found at this https URL ","Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs",1,"['Happy to share our new work ""Recurrent Event Network for Reasoning over Temporal Knowledge Graphs"" (short version accepted to @iclr2019-RLGM): modeling temporal, multi-relational, concurrent interactions in dynamic KGs. \nPaper: <LINK>\nCode: <LINK> <LINK>']",19,04,252
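
Schematically, the autoregressive modeling this row's abstract describes factorizes the next graph snapshot over its events; the notation below paraphrases the idea (it is not lifted from the paper), with h a recurrent summary of the past m snapshots built by the event encoder and neighborhood aggregator:

```latex
% A temporal KG is a sequence of event sets G_t of triples (s, r, o):
P\big(G_{t}\mid G_{t-m:t-1}\big) =
  \prod_{(s,r,o)\in G_{t}}
  p\big(o \mid s, r, \mathbf{h}_{t-1}\big)\,
  p\big(r \mid s, \mathbf{h}_{t-1}\big)\,
  p\big(s \mid \mathbf{h}_{t-1}\big) .
% Future graphs are then rolled out one step at a time, which is what makes
% multi-step inference over future timestamps possible.
```
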
257,132,1333967745872916481,1015053310603284480,Stephen Kane,"My co-authors (@astro_tiff, @ThomasFauchez, @fselsis, Alma Ceja) and I present our new paper ""Phase Modeling of the TRAPPIST-1 Planetary Atmospheres"", in which we use analytical and climate models to simulate phase signatures of the TRAPPIST-1 planets. <LINK> The compact architecture of the TRAPPIST-1 system combined with mean motion resonances ensures frequent syzygy events, where planets line up along the line-of-sight. As such, even though individual phase signatures may be small, combined phase signatures become detectable. We further used ROCKE-3D to simulate Modern Earth and Archean climate models for TRAPPIST-1 e and f, and integrate these to calculate the phase signatures. These show different symmetry properties in the phase curves that will be challenging but rewarding to detect. @jcbastro Thanks Juliette! We did take the results into account in our JWST proposals but it's a tough detection that will likely need LUVOIR. For JWST, we're mostly (ironically) concerned with how planetary alignments may introduce phase ""noise"" into our secondary eclipse measurements.",https://arxiv.org/abs/2012.00080,"Transiting compact multi-planet systems provide many unique opportunities to characterize the planets, including studies of size distributions, mean densities, orbital dynamics, and atmospheric compositions. The relatively short orbital periods in these systems ensure that events requiring specific orbital locations of the planets (such as primary transit and secondary eclipse points) occur with high frequency. The orbital motion and associated phase variations of the planets provide a means to constrain the atmospheric compositions through measurement of their albedos. Here we describe the expected phase variations of the TRAPPIST-1 system and times of superior conjunction when the summation of phase effects produce maximum amplitudes. We also describe the infrared flux emitted by the TRAPPIST-1 planets and the influence on the overall phase amplitudes. We further present the results from using the global circulation model ROCKE-3D to model the atmospheres of TRAPPIST-1e and TRAPPIST-1f assuming modern Earth and Archean atmospheric compositions. These simulations are used to calculate predicted phase curves for both reflected light and thermal emission components. We discuss the detectability of these signatures and the future prospects for similar studies of phase variations for relatively faint M stars. ",Phase Modeling of the TRAPPIST-1 Planetary Atmospheres,4,"['My co-authors (@astro_tiff, @ThomasFauchez, @fselsis, Alma Ceja) and I present our new paper ""Phase Modeling of the TRAPPIST-1 Planetary Atmospheres"", in which we use analytical and climate models to simulate phase signatures of the TRAPPIST-1 planets. <LINK>', 'The compact architecture of the TRAPPIST-1 system combined with mean motion resonances ensures frequent syzygy events, where planets line up along the line-of-sight. As such, even though individual phase signatures may be small, combined phase signatures become detectable.', 'We further used ROCKE-3D to simulate Modern Earth and Archean climate models for TRAPPIST-1 e and f, and integrate these to calculate the phase signatures. These show different symmetry properties in the phase curves that will be challenging but rewarding to detect.', '@jcbastro Thanks Juliette! We did take the results into account in our JWST proposals but it\'s a tough detection that will likely need LUVOIR. For JWST, we\'re mostly (ironically) concerned with how planetary alignments may introduce phase ""noise"" into our secondary eclipse measurements.']",20,12,1088
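
The analytical side of the phase modeling in this row rests on the standard reflected-light contrast of a Lambert sphere; a self-contained sketch (edge-on orbit by default; the albedo and Rp/a values are placeholders, and the paper's ROCKE-3D climate curves go well beyond this):

```python
import numpy as np

def lambert_phase_curve(Ag, Rp_over_a, orbital_phase, inclination=np.pi / 2):
    """Reflected-light contrast Fp/Fstar of a Lambert sphere.

    orbital_phase: 0 at inferior conjunction (transit), 0.5 at superior
    conjunction (secondary eclipse), in units of the orbital period.
    """
    # phase angle between star and observer, as seen from the planet
    cos_alpha = -np.sin(inclination) * np.cos(2 * np.pi * orbital_phase)
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
    # Lambert phase function, equal to 1 at full phase (alpha = 0)
    phi = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    return Ag * Rp_over_a**2 * phi

phase = np.linspace(0.0, 1.0, 200)
contrast = lambert_phase_curve(Ag=0.3, Rp_over_a=0.02, orbital_phase=phase)
# contrast peaks at phase 0.5, which is why superior conjunctions (and their
# frequent alignments in a compact system) maximize the summed signal
```
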
258,122,1356927302819545089,405103790,Matteo Angelinelli,"A new paper today <LINK> by myself, S. Ettori, @franco_vazza and T.W. Jones! We study the outskirts of simulated galaxy clusters to investigate the physical properties of matter clumps and filaments #magcow <LINK> We developed two different algorithms, which detect matter clumps, starting from overdensity in the simulated density field, and filaments, using a new proxy based on the gas radial velocity and gas entropy. <LINK> We find that density and temperature for our clumps population are independent of the central cluster's mass, while for filaments we note a slight increase of temperature with the cluster's mass. <LINK> We investigate possible relations between clumps' and filaments' properties, and we find a high level of correlation, both for density and temperature. <LINK> Moreover, we study the mass and volume contribution of clumps and filaments over the total amount of gas in our simulations. We find that combining the different contributions accounts for ~17% of the total gas mass and only ~1% of the volume. <LINK> We divide our simulated volume into two different radial shells. We note that closer to the central cluster, both clumps' density and temperature are higher than those in the peripheral regions, making the clumps' X-ray detection easier. <LINK> Furthermore, analysing clumps' and filaments' mass and volume contributions in the different shells, we conclude that the inner one is a suitable candidate to detect and analyse matter clumps, while the outer one is better suited to filaments' studies. <LINK> Finally, we study three different scale relations M-L, L-T and M-T. Up to 3*R500, the interactions between clumps and ICM change the physical properties of the infalling structures. Otherwise, over 3*R500, clumps are described by scaling relations similar to the clusters' ones. <LINK> We are working on possible extensions of this work, using different kinds of simulations. Moreover, using the SIXTE simulator, we are simulating what the WFI and @AthenaXIFU instruments (which will be aboard the @AthenaXobs) would observe in the outskirts of galaxy clusters <LINK>",https://arxiv.org/abs/2102.01096,"We report on the possibility of studying the properties of cosmic diffuse baryons by studying self-gravitating clumps and filaments connected to galaxy clusters. While filaments are challenging to detect with X-ray observations, the higher density of clumps makes them visible and a viable tracer to study the thermodynamical properties of baryons undergoing accretion along cosmic web filaments onto galaxy clusters. We developed new algorithms to identify these structures and applied them to a set of non-radiative cosmological simulations of galaxy clusters at high resolution. We find that in those simulated clusters, the density and temperature of clumps are independent of the mass of the cluster where they reside. We detected a positive correlation between the filament temperature and the host cluster mass. The density and temperature of clumps and filaments also tended to correlate. Both the temperature and density decrease moving outward. We observed that clumps are hotter, more massive, and more luminous if identified closer to the cluster center. Especially in the outermost cluster regions (~3*R500,c or beyond), X-ray observations might already have the potential to locate cosmic filaments based on the distribution of clumps and to allow one to study the thermodynamics of diffuse baryons before they are processed by the intracluster medium. ",Properties of clumps and filaments around galaxy clusters,9,"['A new paper today <LINK> by myself, S. Ettori, @franco_vazza and T.W. Jones! We study the outskirts of simulated galaxy clusters to investigate the physical properties of matter clumps and filaments #magcow <LINK>', 'We developed two different algorithms, which detect matter clumps, starting from overdensity in the simulated density field, and filaments, using a new proxy based on the gas radial velocity and gas entropy. https://t.co/l47aFkS8gc', ""We find that density and temperature for our clumps population are independent of the central cluster's mass, while for filaments we note a slight increase of temperature with the cluster's mass. https://t.co/GxJL6FDkzk"", ""We investigate possible relations between clumps' and filaments' properties, and we find a high level of correlation, both for density and temperature. https://t.co/GAuuVIZOgr"", 'Moreover, we study the mass and volume contribution of clumps and filaments over the total amount of gas in our simulations. We find that combining the different contributions accounts for ~17% of the total gas mass and only ~1% of the volume. https://t.co/8PS5LG1qG0', ""We divide our simulated volume into two different radial shells. We note that closer to the central cluster, both clumps' density and temperature are higher than those in the peripheral regions, making the clumps' X-ray detection easier. https://t.co/tv9gfrctq6"", ""Furthermore, analysing clumps' and filaments' mass and volume contributions in the different shells, we conclude that the inner one is a suitable candidate to detect and analyse matter clumps, while the outer one is better suited to filaments' studies. https://t.co/GqarDD9BwE"", ""Finally, we study three different scale relations M-L, L-T and M-T. Up to 3*R500, the interactions between clumps and ICM change the physical properties of the infalling structures. Otherwise, over 3*R500, clumps are described by scaling relations similar to the clusters' ones. https://t.co/sud4AhOYt1"", 'We are working on possible extensions of this work, using different kinds of simulations. Moreover, using the SIXTE simulator, we are simulating what the WFI and @AthenaXIFU instruments (which will be aboard the @AthenaXobs) would observe in the outskirts of galaxy clusters https://t.co/gnL6nOfmDi']",21,02,2106
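
A toy version of the clump-finding step in this row (flag cells whose density exceeds a cut over a smooth background, then group them into connected components); the cut values, the flat background, and the random field are illustrative assumptions only:

```python
import numpy as np
from scipy import ndimage

def find_clumps(density, background, overdensity_cut=30.0, min_cells=8):
    """Label face-connected overdense regions and drop tiny noise peaks."""
    mask = density / background > overdensity_cut
    labels, n = ndimage.label(mask)                  # connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_cells) + 1    # surviving clump ids
    return labels, keep

rng = np.random.default_rng(1)
rho = rng.lognormal(mean=0.0, sigma=1.5, size=(64, 64, 64))
labels, clump_ids = find_clumps(rho, background=np.ones_like(rho))
```

A filament finder would replace the scalar density cut with the paper's combined proxy (gas radial velocity plus low entropy), but the labeling step is the same idea.
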
259,97,1291830136975892481,1003652696723873792,Max Gaspari,"New paper with a brilliant student, I had the pleasure to mentor in recent years, dissecting the hot halo properties of rotating #galaxies (e.g. #cca_rain/condensation): <LINK> Here her precursor work: <LINK> Great works Anna!! #BlackHoleWeather <LINK>",https://arxiv.org/abs/2008.01161,"X-ray emitting atmospheres of non-rotating early-type galaxies and their connection to central active galactic nuclei have been thoroughly studied over the years. However, in systems with significant angular momentum, processes of heating and cooling are likely to proceed differently. We present an analysis of the hot atmospheres of six lenticulars and a spiral galaxy to study the effects of angular momentum on the hot gas properties. We find an alignment between the hot gas and the stellar distribution, with the ellipticity of the X-ray emission generally lower than that of the optical stellar emission, consistent with theoretical predictions for rotationally-supported hot atmospheres. The entropy profiles of NGC 4382 and the massive spiral galaxy NGC 1961 are significantly shallower than the entropy distribution in other galaxies, suggesting the presence of strong heating (via outflows or compressional) in the central regions of these systems. Finally, we investigate the thermal (in)stability of the hot atmospheres via criteria such as the TI- and C-ratio, and discuss the possibility that the discs of cold gas present in these objects have condensed out of the hot atmospheres. ",Hot gaseous atmospheres of rotating galaxies observed with XMM-Newton,1,"['New paper with a brilliant student, I had the pleasure to mentor in recent years, dissecting the hot halo properties of rotating #galaxies (e.g. #cca_rain/condensation): <LINK> \nHere her precursor work: <LINK>\nGreat works Anna!!\n#BlackHoleWeather <LINK>']",20,08,252 |
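
The TI- and C-ratio criteria named in this row's abstract are, as commonly defined in this strand of the literature (definitions recalled from earlier condensation work, so treat them as context rather than quotes from the paper):

```latex
% cooling vs. free-fall, and cooling vs. turbulent eddy turnover:
\mathrm{TI\mbox{-}ratio} \equiv \frac{t_{\rm cool}}{t_{\rm ff}},
\qquad
C\mbox{-}\mathrm{ratio} \equiv \frac{t_{\rm cool}}{t_{\rm eddy}},
% with condensation of cold gas out of the hot halo expected roughly where
% the C-ratio approaches unity (and, classically, where t_cool/t_ff <~ 10).
```
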
260,184,1395683669554122753,195773271,Marios Fournarakis,"Quantized training can pave the way for efficient on-device training. In our latest work with @mnagel87, we propose ""In-hindsight quantization range estimation"" (<LINK>) to enable fast and efficient gradient quantization. (accepted at EDLCV Workshop @CVPR 2021) <LINK>",https://arxiv.org/abs/2105.04246,"Quantization techniques applied to the inference of deep neural networks have enabled fast and efficient execution on resource-constraint devices. The success of quantization during inference has motivated the academic community to explore fully quantized training, i.e. quantizing back-propagation as well. However, effective gradient quantization is still an open problem. Gradients are unbounded and their distribution changes significantly during training, which leads to the need for dynamic quantization. As we show, dynamic quantization can lead to significant memory overhead and additional data traffic slowing down training. We propose a simple alternative to dynamic quantization, in-hindsight range estimation, that uses the quantization ranges estimated on previous iterations to quantize the present. Our approach enables fast static quantization of gradients and activations while requiring only minimal hardware support from the neural network accelerator to keep track of output statistics in an online fashion. It is intended as a drop-in replacement for estimating quantization ranges and can be used in conjunction with other advances in quantized training. We compare our method to existing methods for range estimation from the quantized training literature and demonstrate its effectiveness with a range of architectures, including MobileNetV2, on image classification benchmarks (Tiny ImageNet & ImageNet). ",In-Hindsight Quantization Range Estimation for Quantized Training,1,"['Quantized training can pave the way for efficient on-device training. In our latest work with @mnagel87, we propose ""In-hindsight quantization range estimation"" (<LINK>) to enable fast and efficient gradient quantization.\n(accepted at EDLCV Workshop @CVPR 2021) <LINK>']",21,05,268 |
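
The core trick in this row, quantizing the current iteration with ranges estimated in hindsight from previous ones so the grid is static within a step, fits in a few lines of PyTorch; the EMA momentum and the asymmetric uniform quantizer below are assumptions, not the paper's exact estimator:

```python
import torch

class InHindsightQuantizer:
    """Quantize step t with the range gathered up to step t-1, then fold in
    this step's min/max for the next iteration (illustrative sketch)."""
    def __init__(self, bits=8, momentum=0.9):
        self.levels = 2 ** bits - 1
        self.momentum = momentum
        self.lo = self.hi = None

    def __call__(self, x):
        if self.lo is None:                           # bootstrap on first call
            self.lo, self.hi = x.min().item(), x.max().item()
        scale = max(self.hi - self.lo, 1e-8) / self.levels
        xq = ((x - self.lo) / scale).round().clamp_(0, self.levels) * scale + self.lo
        # "in hindsight": the current statistics only affect the *next* step,
        # so no extra pass over the tensor is needed before quantizing it
        m = self.momentum
        self.lo = m * self.lo + (1 - m) * x.min().item()
        self.hi = m * self.hi + (1 - m) * x.max().item()
        return xq

quant = InHindsightQuantizer(bits=8)
g_q = quant(torch.randn(1024))    # e.g. a gradient tensor
```
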
261,11,1475724913625505797,1074313879612784640,Anna Abalkina,"This is my new preprint on @arxiv ""Publication and collaboration anomalies in academic papers originating from a paper mill: evidence from Russia"". At least 303 papers were identified, and a set of predictors of a Russian paper mill was proposed. <LINK> <LINK>",http://arxiv.org/abs/2112.13322,"This study attempts to detect papers originating from the Russia-based paper mill International publisher LLC. A total of 1009 offers published during 2019-2021 on the 123mi.ru website were analysed. The study allowed us to identify at least 434 papers that are potentially linked to the paper mill including one preprint, a duplication paper and 15 republications of papers erroneously published in hijacked journals. Evidence of suspicious provenance from the paper mill is provided: matches in title, number of coauthorship slots, year of publication, country of the journal, country of a coauthorship slot and similarities of abstracts. These problematic papers are coauthored by scholars associated with at least 39 countries and submitted both to predatory and reputable journals. This study also demonstrates collaboration anomalies and the phenomenon of suspicious collaboration in questionable papers and examines the predictors of the Russia-based paper mill. The value of coauthorship slots offered by International Publisher LLC in 2019-2021 is estimated at $6.5 million. Since the study analysed a particular paper mill, it is likely that the number of papers with forged authorship is much higher. ","Publication and collaboration anomalies in academic papers originating from a paper mill: evidence from a Russia-based paper mill",1,"['This is my new preprint on @arxiv ""Publication and collaboration anomalies in academic papers originating from a paper mill: evidence from Russia"". At least 303 papers were identified, and a set of predictors of a Russian paper mill was proposed.\n<LINK> <LINK>']",21,12,261
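
The predictors listed in this row's abstract (title match, publication year, number of coauthorship slots, journal country) suggest a simple scoring sketch for pairing published papers with paper-mill offers. Every field name, weight, and example record below is invented for illustration; the study's actual matching was richer and partly manual:

```python
from difflib import SequenceMatcher

def match_score(offer, paper):
    """Crude average of the cues the abstract lists (hypothetical schema)."""
    s = SequenceMatcher(None, offer["title"].lower(), paper["title"].lower()).ratio()
    s += offer["year"] == paper["year"]                     # bools count as 0/1
    s += offer["n_slots"] == len(paper["authors"])
    s += offer["journal_country"] == paper["journal_country"]
    return s / 4

offer = {"title": "Digital transformation of logistics", "year": 2020,
         "n_slots": 4, "journal_country": "US"}
paper = {"title": "Digital Transformation of Logistics Systems", "year": 2020,
         "authors": ["A", "B", "C", "D"], "journal_country": "US"}
print(match_score(offer, paper))   # high score flags a candidate for review
```
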
262,112,1447894959626063874,1205923801709588481,Yuval Kirstain,"I’m thrilled to share our new work “A Few More Examples May Be Worth *Billions* of Parameters”. Joint w/ @PSH_Lewis @riedelcastro @omerlevy_ Paper: <LINK> 1/4 <LINK> While scaling parameters consistently yields performance improvements, the contribution of additional examples highly depends on the *task’s format*. 2/4 <LINK> For classification, multiple choice, and extractive QA, annotating a few more examples may yield similar benefits as adding *billions* of model parameters. However, in open QA tasks, parameters are of immense value that just cannot be traded with labeled data. 3/4 <LINK> We reached this conclusion after training thousands of models and conducting ablation experiments where we convert one format into another! Check out the paper for more :) 4/4 <LINK>",http://arxiv.org/abs/2110.04374,"We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks. Our exploration reveals that while scaling parameters consistently yields performance improvements, the contribution of additional examples highly depends on the task's format. Specifically, in open question answering tasks, enlarging the training set does not improve performance. In contrast, classification, extractive question answering, and multiple choice tasks benefit so much from additional examples that collecting a few hundred examples is often ""worth"" billions of parameters. We hypothesize that unlike open question answering, which involves recalling specific information, solving strategies for tasks with a more restricted output space transfer across examples, and can therefore be learned with small amounts of labeled data. ",A Few More Examples May Be Worth Billions of Parameters,4,"['I’m thrilled to share our new work “A Few More Examples May Be Worth *Billions* of Parameters”.\n\nJoint w/ @PSH_Lewis @riedelcastro @omerlevy_\nPaper: <LINK> \n\n1/4 <LINK>', 'While scaling parameters consistently yields performance improvements, the contribution of additional examples highly depends on the *task’s format*.\n\n2/4 https://t.co/1ssoX1K4iE', 'For classification, multiple choice, and extractive QA, annotating a few more examples may yield similar benefits as adding *billions* of model parameters. However, in open QA tasks, parameters are of immense value that just cannot be traded with labeled data.\n\n3/4 https://t.co/iHyfNdkRiT', 'We reached this conclusion after training thousands of models and conducting ablation experiments where we convert one format into another!\n\nCheck out the paper for more :)\n\n4/4 https://t.co/3shlUXHwVq']",21,10,782 |
263,6,1145518012201615360,1047899041311412224,Francois Grondin,"I just released a new paper that shows how SVD-PHAT can be used to perform multiple sound source localization, with more accuracy than the vanilla SRP-PHAT, but with the low-complexity of SVD-PHAT: <LINK> @thaytan During the last month I also participated to the DCASE 2019 sound event detection and localization challenge :P We didn't win, but it was interesting to see how CRNN would perform for detection/localization tasks. Here's the tech report: <LINK>",https://arxiv.org/abs/1906.11913,"This paper introduces a modification of phase transform on singular value decomposition (SVD-PHAT) to localize multiple sound sources. This work aims to improve localization accuracy and keeps the algorithm complexity low for real-time applications. This method relies on multiple scans of the search space, with projection of each low-dimensional observation onto orthogonal subspaces. We show that this method localizes multiple sound sources more accurately than discrete SRP-PHAT, with a reduction in the Root Mean Square Error up to 0.0395 radians. ",Multiple Sound Source Localization with SVD-PHAT,2,"['I just released a new paper that shows how SVD-PHAT can be used to perform multiple sound source localization, with more accuracy than the vanilla SRP-PHAT, but with the low-complexity of SVD-PHAT: <LINK>', ""@thaytan During the last month I also participated to the DCASE 2019 sound event detection and localization challenge :P We didn't win, but it was interesting to see how CRNN would perform for detection/localization tasks. Here's the tech report: https://t.co/hrcY0EAmLt""]",19,06,458 |
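
Underneath SRP-PHAT and the SVD-PHAT variant in this row sits the PHAT-weighted cross-correlation; this is the generic textbook routine (not the paper's multi-source, subspace-projection algorithm):

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """Estimate the time difference of arrival between two microphone signals
    by whitening the cross-spectrum so only phase information remains."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    R = X1 * np.conj(X2)
    R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 16000
sig = np.random.randn(4096)
tau = gcc_phat(sig, np.roll(sig, 2), fs)   # x2 delayed by 2 -> tau = -2/fs here
```

SRP-PHAT scans candidate directions and sums such correlations over microphone pairs; SVD-PHAT's contribution is making that scan cheap via a low-rank projection.
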
264,157,1270435732243578881,19344537,Lukas Ruff,"Classification of normal data against a few random natural images doesn't sound promising for anomaly detection, right? Well... check out our new preprint 'Rethinking Assumptions in Deep Anomaly Detection'. Paper: <LINK> @PyTorch Code: <LINK> The multiscale structure of images seems to fool this intuition. On a recent ImageNet one-class benchmark, classification of the normal training data against as few as 64 random images is able to outperform the current state of the art. This is joint work with @robvdm, @BillyJoeFranks, Klaus-Robert Müller, and Marius Kloft. @sabokrou Looking forward to reading your work!",https://arxiv.org/abs/2006.00339,"Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous) it is usually treated in an unsupervised manner since one typically does not have access to, or it is infeasible to utilize, a dataset that sufficiently characterizes what it means to be ""anomalous."" In this paper we present results demonstrating that this intuition surprisingly seems not to extend to deep AD on images. For a recent AD benchmark on ImageNet, classifiers trained to discern between normal samples and just a few (64) random natural images are able to outperform the current state of the art in deep AD. Experimentally we discover that the multiscale structure of image data makes example anomalies exceptionally informative. ",Rethinking Assumptions in Deep Anomaly Detection,4,"[""Classification of normal data against a few random natural images doesn't sound promising for anomaly detection, right? Well... check out our new preprint 'Rethinking Assumptions in Deep Anomaly Detection'.\n\nPaper: <LINK>\n@PyTorch Code: <LINK>"", 'The multiscale structure of images seems to fool this intuition. On a recent ImageNet one-class benchmark, classification of the normal training data against as few as 64 random images is able to outperform the current state of the art.', 'This is joint work with @robvdm, @BillyJoeFranks, Klaus-Robert Müller, and Marius Kloft.', '@sabokrou Looking forward to reading your work!']",20,06,611
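
The recipe this row reports, training the normal class against a tiny pool of random natural images and reading the classifier output as an anomaly score, reduces to ordinary binary classification; a schematic training step (the model, optimizer, and batch shapes are assumptions):

```python
import torch
import torch.nn as nn

def oe_training_step(model, normal_batch, oe_batch, optimizer):
    """One step of normal-vs-auxiliary classification; `oe_batch` is drawn
    from the tiny fixed pool of (e.g. 64) random natural images, and
    `model` is any backbone with a single-logit head."""
    criterion = nn.BCEWithLogitsLoss()
    x = torch.cat([normal_batch, oe_batch])
    y = torch.cat([torch.zeros(len(normal_batch)),   # 0 = normal class
                   torch.ones(len(oe_batch))])       # 1 = auxiliary "anomaly"
    optimizer.zero_grad()
    loss = criterion(model(x).squeeze(1), y)         # assumes (N, 1) output
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time the sigmoid of the logit serves directly as the anomaly score.
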
265,137,1317129255973654528,2432031889,Ana Marasović,"📢 New at Findings #EMNLP2020 📢 ""Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs"" w/ @_csBhagav @jae_sung_park96 @Ronan_LeBras @nlpnoah @YejinChoinka 📖 Paper: <LINK> Thread 👇 <LINK> Why natural language rationales? Explaining higher-level conceptual reasoning cannot be well conveyed *only* by attributing individual pixels or words---the cause behind prediction is often *not* explicitly grounded in the input (""she doesn’t know whose order is whose"") 1/ The key challenge of visual-textual reasoning rationalization is image understanding beyond explicit content (highlighting objects): understanding contextual content like the relations among objects through action predicates (semantics) and the action's intent (pragmatics) 2/ We combine GPT-2 with object recognition, grounded visual semantic frames, and commonsense inferences inferred from an image and an optional event predicted from a visual commonsense graph 3/ <LINK> GPT-2 benefits from visual adaptation across complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering; adapted models generate more plausible rationales that are less likely to mention content irrelevant to an image 4/ <LINK> Our best performing models for visual commonsense reasoning and visual-textual entailment are still notably behind human-written rationales showing that free-text rationalization remains a challenging task despite our improvements 5/ I'm really excited about this work and the numerous open questions. Can natural language rationales be used to persuade users? Yes, don't generate rationales independently after the prediction. We need evaluations of association btw rationale generation and label prediction 7/ Is generation of natural language rationales possible only with human-written rationales? I don't believe so. Recent related work that uses weak supervision is a promising direction, but we need more exploration there 8/ <LINK> Are there alternatives to human evaluation of plausibility? Maybe. BLEU and co. are not suitable, but someone should investigate newly emerging *learned* evaluation measures such as BLEURT 9/ And I'm sure there are many more. Feel free to reach out! 10/10 :)",http://arxiv.org/abs/2010.07526,"Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present Rationale^VT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that the base pretrained language model benefits from visual adaptation and that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. ","Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs",10,"['📢 New at Findings #EMNLP2020 📢 \n\n""Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs""\n\nw/ @_csBhagav @jae_sung_park96 @Ronan_LeBras @nlpnoah @YejinChoinka\n\n📖 Paper: <LINK>\n\nThread 👇 <LINK>', 'Why natural language rationales?\n\nExplaining higher-level conceptual reasoning cannot be well conveyed *only* by attributing individual pixels or words---the cause behind prediction is often *not* explicitly grounded in the input (""she doesn’t know whose order is whose"") 1/', ""The key challenge of visual-textual reasoning rationalization is image understanding beyond explicit content (highlighting objects): understanding contextual content like the relations among objects through action predicates (semantics) and the action's intent (pragmatics) 2/"", 'We combine GPT-2 with object recognition, grounded visual semantic frames, and commonsense inferences inferred from an image and an optional event predicted from a visual commonsense graph 3/ https://t.co/U8eND0geM1', 'GPT-2 benefits from visual adaptation across complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering; adapted models generate more plausible rationales that are less likely to mention content irrelevant to an image 4/ https://t.co/HwKFIoHq3A', 'Our best performing models for visual commonsense reasoning and visual-textual entailment are still notably behind human-written rationales showing that free-text rationalization remains a challenging task despite our improvements 5/', ""I'm really excited about this work and the numerous open questions. \n\nCan natural language rationales be used to persuade users? Yes, don't generate rationales independently after the prediction. We need evaluations of association btw rationale generation and label prediction 7/"", ""Is generation of natural language rationales possible only with human-written rationales? I don't believe so. Recent related work that uses weak supervision is a promising direction, but we need more exploration there 8/ https://t.co/JkHLhvPTp3"", 'Are there alternatives to human evaluation of plausibility? Maybe. BLEU and co. are not suitable, but someone should investigate newly emerging *learned* evaluation measures such as BLEURT 9/', ""And I'm sure there are many more. Feel free to reach out! 10/10 :)""]",20,10,2286
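
One way to picture the 'visual adaptation' in this row is prefixing GPT-2 with projected visual features before generating a rationale. The sketch below uses Hugging Face's inputs_embeds path; the 2048-d features, the single (untrained) linear adapter, and the prompt are assumptions, while the paper's Rationale^VT Transformer combines object recognition, semantic frames, and commonsense graphs in task-specific ways:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

visual_feats = torch.randn(1, 5, 2048)               # e.g. 5 detected objects
project = torch.nn.Linear(2048, lm.config.n_embd)    # learnable adapter (untrained here)
vis_embeds = project(visual_feats)

prompt_ids = tok("My rationale is:", return_tensors="pt").input_ids
txt_embeds = lm.transformer.wte(prompt_ids)          # token embeddings
inputs_embeds = torch.cat([vis_embeds, txt_embeds], dim=1)

out = lm(inputs_embeds=inputs_embeds)                # visual tokens condition the LM
next_id = out.logits[:, -1].argmax(-1)               # greedy next-token pick
```
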