3823,45,1054902731352100864,759118366468481024,Decker French,"New paper out today on identifying TDE candidates using their host galaxies! <LINK> Because the TDE rate is high in post-starburst and ""quiescent Balmer-strong"" galaxies, and because their star formation rates are low, a large fraction of transients in these galaxies will be TDEs with low contamination from supernovae. This makes it efficient to follow up transients in such galaxies in order to (1) get rapid follow-up observations and (2) perhaps identify unusual TDEs. This strategy will help identify more TDEs from the deluge of LSST events. But, LSST is in the Southern hemisphere, where we lack large spectroscopic surveys. Without spectra, we need a new way to identify post-starburst and similar galaxies. In this paper, we developed a method using machine learning to identify post-starburst and quiescent Balmer-strong galaxies using photometry alone. This works best at z~0 where we can also use archival UV and IR data from GALEX and WISE, but also out to z~0.5, where the rest-frame NUV becomes accessible in SDSS or LSST u. All in all, using this strategy, we expect to be able to find and identify 100-250 TDEs per year with LSST. This will make all sorts of TDE science possible that we can't do with the small numbers currently known. Because there is a tradeoff in this strategy in host galaxy bias vs. detection, this strategy will be complementary to other methods of identifying TDEs with LSST. In advance of LSST, we constructed a catalog of 70,000 new likely TDE host galaxies from Pan-STARRS and DES. These catalogs of photometrically identified galaxies as well as the catalog of spectroscopically identified galaxies will be downloadable as MRTs from ApJ, but feel free to contact me if you'd like them sooner.",https://arxiv.org/abs/1810.09507,"A nuclear transient detected in a post-starburst galaxy or other quiescent galaxy with strong Balmer absorption is likely to be a Tidal Disruption Event (TDE). Identifying such galaxies within the planned survey footprint of the Large Synoptic Survey Telescope (LSST)---before a transient is detected---will make TDE classification immediate and follow-up more efficient. Unfortunately, spectra for identifying most such galaxies are unavailable, and simple photometric selection is ineffective; cutting on ""green valley"" UV/optical/IR colors produces samples that are highly contaminated and incomplete. Here we propose a new strategy using only photometric optical/UV/IR data from large surveys. Applying a machine learning Random Forest classifier to a sample of ~400k SDSS galaxies with GALEX and WISE photometry, including 13,592 quiescent Balmer-strong galaxies, we achieve 53-61% purity and 8-21% completeness, given the range in redshift. For the subset of 1299 post-starburst galaxies, we achieve 63-73% purity and 5-12% completeness. Given these results, the range of likely TDE and supernova rates, and that 36-75% of TDEs occur in quiescent Balmer-strong hosts, we estimate that 13-99% of transients observed in photometrically-selected host galaxies will be TDEs and that we will discover 119-248 TDEs per year with LSST. Using our technique, we present a new catalog of 67,484 candidate galaxies expected to have a high TDE rate, drawn from the SDSS, Pan-STARRS, DES, and WISE photometric surveys. This sample is 3.5x larger than the current SDSS sample of similar galaxies, thereby providing a new path forward for transient science and galaxy evolution studies. 
","Identifying Tidal Disruption Events via Prior Photometric Selection of
Their Preferred Hosts",10,"['New paper out today on identifying TDE candidates using their host galaxies! <LINK>', 'Because the TDE rate is high in post-starburst and ""quiescent Balmer-strong"" galaxies, and because their star formation rates are low, a large fraction of transients in these galaxies will be TDEs with low contamination from supernovae.', 'This makes it efficient to follow up transients in such galaxies in order to (1) get rapid follow-up observations and (2) perhaps identify unusual TDEs. This strategy will help identify more TDEs from the deluge of LSST events.', 'But, LSST is in the Southern hemisphere, where we lack large spectroscopic surveys. Without spectra, we need a new way to identify post-starburst and similar galaxies.', 'In this paper, we developed a method using machine learning to identify post-starburst and quiescent Balmer-strong galaxies using photometry alone.', 'This works best at z~0 where we can also use archival UV and IR data from GALEX and WISE, but also out to z~0.5, where the rest-frame NUV becomes accessible in SDSS or LSST u.', ""All in all, using this strategy, we expect to be able to find and identify 100-250 TDEs per year with LSST. This will make all sorts of TDE science possible that we can't do with the small numbers currently known."", 'Because there is a tradeoff in this strategy in host galaxy bias vs. detection, this strategy will be complementary to other methods of identifying TDEs with LSST.', 'In advance of LSST, we constructed a catalog of 70,000 new likely TDE host galaxies from Pan-STARRS and DES.', ""These catalogs of photometrically identified galaxies as well as the catalog of spectroscopically identified galaxies will be downloadable as MRTs from ApJ, but feel free to contact me if you'd like them sooner.""]",18,10,1739
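As a rough illustration of the kind of photometric host-galaxy classifier the abstract above describes (a sketch only, assuming scikit-learn; the color features, split, and hyperparameters here are placeholder assumptions, not the authors' pipeline):

# Illustrative sketch of a photometric host classifier of the kind described
# above (not the authors' pipeline; features and settings are placeholders).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_host_classifier(X_colors, y_label):
    # X_colors: per-galaxy colors, e.g. [NUV-u, u-g, g-r, r-i, W1-W2, ...]
    # y_label: 1 if spectroscopically quiescent Balmer-strong, else 0
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_colors, y_label, test_size=0.2, stratify=y_label)
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # classifier and hold-out accuracy

In practice the purity/completeness numbers quoted in the abstract would come from the precision-recall behaviour of clf.predict_proba on held-out galaxies, not from raw accuracy.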
3824,32,1321808905442365441,48017871,Andreas Orthey,"New journal preprint (Submitted to Transactions) on lifting paths from a lower-dimensional space into a higher-dimensional space while keeping them feasible. Paper: <LINK> Code: <LINK> <LINK> We mainly use those methods to plan motions through narrow passages, for example for the Bugtrap scenario or for dexterous manipulation of the @shadowrobot hand. Those are prototypical scenarios for many robotic applications like grasping, assembly, disassembly or egress tasks. <LINK> With @LIS_TUBerlin #MotionPlanning #Robotics #FiberBundles #Relaxation #LiftingPaths #AI #Algorithms",https://arxiv.org/abs/2010.14524,"Sampling-based planning methods often become inefficient due to narrow passages. Narrow passages induce a higher runtime, because the chance to sample them becomes vanishingly small. In recent work, we showed that narrow passages can be approached by relaxing the problem using admissible lower-dimensional projections of the state space. Those relaxations often increase the volume of narrow passages under projection. Solving the relaxed problem is often efficient and produces an admissible heuristic we can exploit. However, given a base path, i.e. a solution to a relaxed problem, there are currently no tailored methods to efficiently exploit the base path. To efficiently exploit the base path and thereby its admissible heuristic, we develop section patterns, which are solution strategies to efficiently exploit base paths in particular around narrow passages. To coordinate section patterns, we develop the pattern dance algorithm, which efficiently coordinates section patterns to reactively traverse narrow passages. We combine the pattern dance algorithm with previously developed multilevel planning algorithms and benchmark them on challenging planning problems like the Bugtrap, the double L-shape, an egress problem and on four pregrasp scenarios for a 37 degrees of freedom shadow hand mounted on a KUKA LWR robot. Our results confirm that section patterns are useful to efficiently solve high-dimensional narrow passage motion planning problems. ","Section Patterns: Efficiently Solving Narrow Passage Problems in
Multilevel Motion Planning",3,"['New journal preprint (Submitted to Transactions) on lifting paths from a lower-dimensional space into a higher-dimensional space while keeping them feasible. \n\nPaper: <LINK>\nCode: <LINK> <LINK>', 'We mainly use those methods to plan motions through narrow passages, for example for the Bugtrap scenario or for dexterous manipulation of the @shadowrobot hand. Those are prototypical scenarios for many robotic applications like grasping, assembly, disassembly or egress tasks. https://t.co/55My1ClDcA', 'With @LIS_TUBerlin #MotionPlanning #Robotics #FiberBundles #Relaxation #LiftingPaths #AI #Algorithms']",20,10,579
3825,19,1311114599421407234,727177363344105474,Q. Liu | 刘启民,"NEW PAPER accepted at @SpringerNature Computer Science! Co-authored with Dr. Fang Liu, “Selective Cascade of Residual ExtraTrees” (SCORE) is a machine learning algorithm we developed based on neural networks and decision trees. See post-print @ <LINK> <LINK>",https://arxiv.org/abs/2009.14138,"We propose a novel tree-based ensemble method named Selective Cascade of Residual ExtraTrees (SCORE). SCORE draws inspiration from representation learning, incorporates regularized regression with variable selection features, and utilizes boosting to improve prediction and reduce generalization errors. We also develop a variable importance measure to increase the explainability of SCORE. Our computer experiments show that SCORE provides comparable or superior performance in prediction against ExtraTrees, random forest, gradient boosting machine, and neural networks; and the proposed variable importance measure for SCORE is comparable to studied benchmark methods. Finally, the predictive performance of SCORE remains stable across hyper-parameter values, suggesting potential robustness to hyperparameter specification. ",Selective Cascade of Residual ExtraTrees,1,"['NEW PAPER accepted at @SpringerNature Computer Science! Co-authored with Dr. Fang Liu, “Selective Cascade of Residual ExtraTrees” (SCORE) is a machine learning algorithm we developed based on neural networks and decision trees. See post-print @ <LINK> <LINK>']",20,09,258
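The name "Selective Cascade of Residual ExtraTrees" points to boosting ExtraTrees on residuals; a minimal sketch of that generic idea follows (an illustration only, not the published SCORE algorithm, which additionally uses representation learning, regularized regression with variable selection, and a selection step):

# Minimal sketch of boosting ExtraTrees on residuals (not the SCORE code).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fit_residual_cascade(X, y, n_stages=5, lr=0.5):
    stages, residual = [], np.asarray(y, dtype=float).copy()
    for _ in range(n_stages):
        stage = ExtraTreesRegressor(n_estimators=100).fit(X, residual)
        stages.append(stage)
        residual = residual - lr * stage.predict(X)  # next stage fits what is left
    return stages

def predict_cascade(stages, X, lr=0.5):
    return sum(lr * s.predict(X) for s in stages)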
3826,11,1455180236971253764,725994666953428993,Chen Zhao,"Training SOTA Open-Domain QA systems needs expensive evidence labels. Not for us, but we are equally good! Excited to share our new #EMNLP21 paper, Distantly-Supervised Evidence Retrieval Enables Question Answering without Evidence Annotation <LINK> (1/6) <LINK> Instead of annotating evidence (i.e., passages or chain of passages) for training Open-Domain QA systems, we focus on a much cheaper weakly-supervised setting that only has question-answer pairs during training. (2/6) We propose a hard-EM approach (DistDR) to find evidence as distant training signals. We iteratively improve over a weak retriever by alternately finding evidence from the up-to-date model (Hard E-step) and encouraging the model to learn the most likely evidence (M-step). (3/6) <LINK> DistDR is on par with fully-supervised state-of-the-art methods on both a multi-hop QA benchmark (HotpotQA) and a single-hop QA benchmark (NaturalQuestions). (4/6) Our analysis on multi-hop QA further indicates that albeit some (~30%) predicted evidence differs from the annotation, it’s still helpful! As DistDR can find alternative evidence; some questions only need a single passage as evidence (and DistDR finds it). (5/6) <LINK> Our code is at <LINK>, EMNLP video link <LINK>. Joint work w/ @XiongChenyan , @boydgraber and @haldaume3. (6/6)",https://arxiv.org/abs/2110.04889,"Open-domain question answering answers a question based on evidence retrieved from a large corpus. State-of-the-art neural approaches require intermediate evidence annotations for training. However, such intermediate annotations are expensive, and methods that rely on them cannot transfer to the more common setting, where only question-answer pairs are available. This paper investigates whether models can learn to find evidence from a large corpus, with only distant supervision from answer labels for model training, thereby generating no additional annotation cost. We introduce a novel approach (DistDR) that iteratively improves over a weak retriever by alternately finding evidence from the up-to-date model and encouraging the model to learn the most likely evidence. Without using any evidence labels, DistDR is on par with fully-supervised state-of-the-art methods on both multi-hop and single-hop QA benchmarks. Our analysis confirms that DistDR finds more accurate evidence over iterations, which leads to model improvements. ","Distantly-Supervised Evidence Retrieval Enables Question Answering
without Evidence Annotation",6,"['Training SOTA Open-Domain QA systems needs expensive evidence labels. Not for us, but we are equally good!\n\nExcited to share our new #EMNLP21 paper, Distantly-Supervised Evidence Retrieval Enables Question Answering without Evidence Annotation\n<LINK> \n(1/6) <LINK>', 'Instead of annotating evidence (i.e., passages or chain of passages) for training Open-Domain QA systems, we focus on a much cheaper weakly-supervised setting that only has question-answer pairs during training. \n(2/6)', 'We propose a hard-EM approach (DistDR) to find evidence as distant training signals. We iteratively improve over a weak retriever by alternately finding evidence from the up-to-date model (Hard E-step) and encouraging the model to learn the most likely evidence (M-step).\n(3/6) https://t.co/9SxvUEgcGh', 'DistDR is on par with fully-supervised state-of-the-art methods on both a multi-hop QA benchmark (HotpotQA) and a single-hop QA benchmark (NaturalQuestions). \n(4/6)', 'Our analysis on multi-hop QA further indicates that albeit some (~30%) predicted evidence differs from the annotation, it’s still helpful! \nAs DistDR can find alternative evidence; some questions only need a single passage as evidence (and DistDR finds it). \n(5/6) https://t.co/hbqhSq1x0W', 'Our code is at https://t.co/ALvGSDoDXb, EMNLP video link https://t.co/GoYm4EutMm. Joint work w/ @XiongChenyan , @boydgraber and @haldaume3. \n(6/6)']",21,10,1311
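A schematic of the hard-EM alternation described in the thread, with hypothetical retriever/reader/corpus interfaces standing in for the real components (this is not the released DistDR code):

# Schematic hard-EM loop for distantly-supervised evidence retrieval.
# `retriever`, `corpus`, and `reader_loss` are hypothetical stand-ins.
def distdr_train(retriever, reader_loss, optimizer, qa_pairs, corpus,
                 n_rounds=3, top_k=10):
    for _ in range(n_rounds):
        # Hard E-step: let the current model pick the most likely evidence;
        # the only supervision is that a candidate passage contains the answer.
        evidence = {}
        for question, answer in qa_pairs:
            candidates = [p for p in corpus.retrieve(question, top_k)
                          if answer in p.text]
            if candidates:
                evidence[question] = max(
                    candidates, key=lambda p: retriever.score(question, p))
        # M-step: push the model toward the evidence it just selected.
        for question, answer in qa_pairs:
            if question in evidence:
                loss = reader_loss(question, answer, evidence[question])
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
    return retriever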
3827,79,1306626047383924737,82497649,Moin Nadeem,"New AACL paper! Sampling from a language model is a crucial task for generation. While many sampling algorithms exist, what properties are desirable in a good sampling algorithm? 🧐 Joint work w/ @TianxingH, @kchonyc, Jim Glass, and I Paper: <LINK> Thread 👇 What makes the current sampling algorithms (top-k, nucleus, tempered) perform well? We inspected them and extracted three shared properties. All algs reduce entropy of the distribution, preserve the relative order of the logits, and preserve the “slope” of the distribution. <LINK> We design two algorithms that satisfy these properties, and three algorithms that violate these properties. Most interestingly, we find that we can design new sampling algorithms that satisfy these properties, and obtain competitive performance 😲 <LINK> Conversely, we find that two of the three algorithms that violate these properties yields significant performance degradation. For our random masked algorithm, we were surprised to see that randomly masking logits (other than the first) yields similar performance! 🤔 <LINK> We acknowledge the empirical limitations of our study, and emphasize that it is entirely possible for some crucial property that we have not discovered to exist! We are hopeful that this study may help guide the development of novel sampling algorithms in the future.",https://arxiv.org/abs/2009.07243,"This work studies the widely adopted ancestral sampling algorithms for auto-regressive language models, which is not widely studied in the literature. We use the quality-diversity (Q-D) trade-off to investigate three popular sampling algorithms (top-k, nucleus and tempered sampling). We focus on the task of open-ended language generation. We first show that the existing sampling algorithms have similar performance. After carefully inspecting the transformations defined by different sampling algorithms, we identify three key properties that are shared among them: entropy reduction, order preservation, and slope preservation. To validate the importance of the identified properties, we design two sets of new sampling algorithms: one set in which each algorithm satisfies all three properties, and one set in which each algorithm violates at least one of the properties. We compare their performance with existing sampling algorithms, and find that violating the identified properties could lead to drastic performance degradation, as measured by the Q-D trade-off. On the other hand, we find that the set of sampling algorithms that satisfies these properties performs on par with the existing sampling algorithms. Our data and code are available at this https URL ","A Systematic Characterization of Sampling Algorithms for Open-ended
Language Generation",5,"['New AACL paper!\n\nSampling from a language model is a crucial task for generation. While many sampling algorithms exist, what properties are desirable in a good sampling algorithm? 🧐\n\nJoint work w/ @TianxingH, @kchonyc, Jim Glass, and I\nPaper: <LINK>\nThread 👇', 'What makes the current sampling algorithms (top-k, nucleus, tempered) perform well? \n\nWe inspected them and extracted three shared properties. All algs reduce entropy of the distribution, preserve the relative order of the logits, and preserve the “slope” of the distribution. https://t.co/vMct1pdl8s', 'We design two algorithms that satisfy these properties, and three algorithms that violate these properties. \n\nMost interestingly, we find that we can design new sampling algorithms that satisfy these properties, and obtain competitive performance 😲 https://t.co/FsuhW7eXV0', 'Conversely, we find that two of the three algorithms that violate these properties yields significant performance degradation. \n\nFor our random masked algorithm, we were surprised to see that randomly masking logits (other than the first) yields similar performance! 🤔 https://t.co/0ZoMOl6d4G', 'We acknowledge the empirical limitations of our study, and emphasize that it is entirely possible for some crucial property that we have not discovered to exist!\n\nWe are hopeful that this study may help guide the development of novel sampling algorithms in the future.']",20,09,1337
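For concreteness, minimal numpy versions of two of the transformations studied (a sketch, not the paper's code): both reduce the entropy of the distribution, preserve the ordering of the logits, and leave the relative probabilities among the surviving tokens unchanged.

# Illustrative top-k and nucleus transforms on a logit vector.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def top_k_probs(logits, k=50):
    """Keep the k largest logits, mask the rest, renormalize."""
    logits = np.asarray(logits, dtype=float)
    masked = np.full_like(logits, -np.inf)
    idx = np.argsort(logits)[-k:]
    masked[idx] = logits[idx]
    return softmax(masked)

def nucleus_probs(logits, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability exceeds p."""
    logits = np.asarray(logits, dtype=float)
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    keep = order[: np.searchsorted(np.cumsum(probs[order]), p) + 1]
    masked = np.full_like(logits, -np.inf)
    masked[keep] = logits[keep]
    return softmax(masked)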
3828,168,1329111048985714691,976521596289671172,Thomas Fai,"New preprint, and a change of pace from my usual work. We study hard spheres packed on lattices in terms of geometry, i.e. the free volume. We find a ""leaky"" regime in which the geometry changes and the cage of neighbors confining each sphere expands! <LINK> Because of the simplifying lattice geometry, we can calculate the free volumes exactly using known formulas for the volumes of intersection between multiple (e.g. two, three, and four) spheres. We repeat this for different lattices above and below the leaky transition. <LINK> Related to this work, I also made an outreach lecture on triangulations and tilings of space, a favorite topic of mine that comes up in surprisingly diverse applications such as simulating red blood cells and guarding art galleries. You can find it here: <LINK>",https://arxiv.org/abs/2011.07106,"We study packings of hard spheres on lattices. The partition function, and therefore the pressure, may be written solely in terms of the accessible free volume, i.e. the volume of space that a sphere can explore without touching another sphere. We compute these free volumes using a leaky cell model, in which the accessible space accounts for the possibility that spheres may escape from the local cage of lattice neighbors. We describe how elementary geometry may be used to calculate the free volume exactly for this leaky cell model in two- and three-dimensional lattice packings and compare the results to the well-known Carnahan-Starling and Percus-Yevick liquid models. We provide formulas for the free volumes of various lattices and use the common tangent construction to identify several phase transitions between them in the leaky cell regime, indicating the possibility of coexistence in crystalline materials. ",Leaky Cell Model of Hard Spheres,3,"['New preprint, and a change of pace from my usual work. We study hard spheres packed on lattices in terms of geometry, i.e. the free volume. We find a ""leaky"" regime in which the geometry changes and the cage of neighbors confining each sphere expands! <LINK>', 'Because of the simplifying lattice geometry, we can calculate the free volumes exactly using known formulas for the volumes of intersection between multiple (e.g. two, three, and four) spheres. We repeat this for different lattices above and below the leaky transition. https://t.co/n2rxI99Stp', 'Related to this work, I also made an outreach lecture on triangulations and tilings of space, a favorite topic of mine that comes up in surprisingly diverse applications such as simulating red blood cells and guarding art galleries. You can find it here: https://t.co/gpGWeLkOE6']",20,11,797
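For reference, the simplest of the "known formulas" mentioned above — the volume of the lens-shaped intersection of two equal spheres of radius R whose centres are a distance d <= 2R apart — is the textbook result (quoted here for context, not taken from the paper):

V_\mathrm{lens}(d) \;=\; \frac{\pi}{12}\,(2R - d)^2\,(4R + d), \qquad 0 \le d \le 2R .

The three- and four-sphere intersection volumes needed for denser lattice geometries are longer closed-form expressions of the same kind.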
3829,197,1271156830803234820,312448486,Dr. Karan Jani,"In a new paper led by @GTSciences researcher Deborah Ferguson, we find that the most accurate solutions to Einstein's Equations (on fastest supercomputers!) are just not good enough for the next-gen gravitational-wave detectors. <LINK> Submitted to @PhysRevLett <LINK>",https://arxiv.org/abs/2006.04272,"Future detectors such as LISA promise signal-to-noise ratios potentially in the thousands and data containing simultaneous signals. Accurate numerical relativity waveforms will be essential to maximize the science return. A question of interest to the broad gravitational wave community is: Are the numerical relativity codes ready to face this challenge? Towards answering this question, we provide a new criteria to identify the minimum resolution a simulation must have as a function of signal-to-noise ratio in order for the numerical relativity waveform to be indistinguishable from a true signal. This criteria can be applied to any finite-differencing numerical relativity code with multiple simulations of differing resolutions for the desired binary parameters and waveform length. We apply this criteria to binary systems of interest with the fourth-order MAYA code to obtain the first estimate of the minimum resolution a simulation must have to be prepared for next generation detectors. ","Assessing the Readiness of Numerical Relativity for LISA and 3G
Detectors",1,"[""In a new paper led by @GTSciences researcher Deborah Ferguson, we find that the most accurate solutions to Einstein's Equations (on fastest supercomputers!) are just not good enough for the next-gen gravitational-wave detectors.\n\n<LINK>\n\nSubmitted to @PhysRevLett <LINK>""]",20,06,268
3830,195,1393203111452348417,857263993207099392,Ori Fox,"It’s a BIG paper day! (i.e., one of my own) INFRARED obs of Type Ia thermonuclear supernovae out to z=0.1. Offering the low-z anchor for #jwst and @NASARoman measurements of DARK ENERGY. We find NO evidence for a ``mass-step’’! #astrospax #ratir <LINK> <LINK>",https://arxiv.org/abs/2105.06236,"We present optical and near-infrared (NIR, $YJH$-band) observations of 42 Type Ia supernovae (SNe Ia) discovered by the untargeted intermediate Palomar Transient Factory (iPTF) survey. This new data-set covers a broad range of redshifts and host galaxy stellar masses, compared to previous SN Ia efforts in the NIR. We construct a sample, using also literature data at optical and NIR wavelengths, to examine claimed correlations between the host stellar masses and the Hubble diagram residuals. The SN magnitudes are corrected for host galaxy extinction using either a global total-to-selective extinction ratio, $R_V$=2.0 for all SNe, or a best-fit $R_V$ for each SN individually. Unlike previous studies which were based on a narrower range in host stellar mass, we do not find evidence for a ""mass-step"", between the color- and stretch-corrected peak $J$ and $H$ magnitudes for galaxies below and above $\log(M_{*}/M_{\odot}) = 10$. However, the mass-step remains significant ($3\sigma$) at optical wavelengths ($g,r,i$) when using a global $R_V$, but vanishes when each SN is corrected using their individual best-fit $R_V$. Our study confirms the benefits of the NIR SN Ia distance estimates, as these are largely exempted from the empirical corrections dominating the systematic uncertainties in the optical. ","Near-IR Type Ia SN distances: host galaxy extinction and mass-step
corrections revisited",1,"['It’s a BIG paper day! (i.e., one of my own) \n\nINFRARED obs of Type Ia thermonuclear supernovae out to z=0.1. \n\nOffering the low-z anchor for #jwst and @NASARoman measurements of DARK ENERGY.\n\nWe find NO evidence for a ``mass-step’’! \n\n#astrospax #ratir \n<LINK> <LINK>']",21,05,262
3831,42,974321395898265600,162293874,Jeff Clune,"New Scientist article summarizes our new paper on how evolution routinely outsmarts the scientists that wield it. <LINK> Paper <LINK> Gif from Cully, Clune, Tarapore, Mouret Nature 2015 @CULLYAntoine @jb_mouret @joelbot3000 @DuleFR @newscientist <LINK>",https://arxiv.org/abs/1803.03453v1,"Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems. ","The Surprising Creativity of Digital Evolution: A Collection of
Anecdotes from the Evolutionary Computation and Artificial Life Research
Communities",1,"['New Scientist article summarizes our new paper on how evolution routinely outsmarts the scientists that wield it. <LINK> Paper <LINK> Gif from Cully, Clune, Tarapore, Mouret Nature 2015 @CULLYAntoine @jb_mouret @joelbot3000 @DuleFR @newscientist <LINK>']",18,03,252
3832,78,1405731090476716035,926000798970073088,Efthymios Tzinis,I am happy to announce that the new version of AudioScope is online 🥳📢! We obtained some impressive results for on-screen sound separation tasks simply by watching in-the-wild videos! Please check our paper with @ScottTWisdom here: <LINK> <LINK> We extended the AudioScope model (<LINK>) towards these axes: - We generalize to a wider set of videos for both training and testing. - Audio-visual self and cross-modal attention from low level features boost the performance and esp. on higher frame rates. - Pretraining the sound separation model helps significantly to provide more stable pseudo-labels for the audio-visual coincidence classifier. - Freezing the embedding networks also helps the model to prevent overfitting and leverage vast amounts of in-the-wild videos. We introduce efficient separable versions of our self and cross-modal attention blocks which capture dependencies between each estimated source waveform and the input video! First perform self-attention across time and then attend across the second axis (space and/or sources)! <LINK>,https://arxiv.org/abs/2106.09669,"We introduce a state-of-the-art audio-visual on-screen sound separation system which is capable of learning to separate sounds and associate them with on-screen objects by looking at in-the-wild videos. We identify limitations of previous work on audio-visual on-screen sound separation, including the simplicity and coarse resolution of spatio-temporal attention, and poor convergence of the audio separation model. Our proposed model addresses these issues using cross-modal and self-attention modules that capture audio-visual dependencies at a finer resolution over time, and by unsupervised pre-training of audio separation model. These improvements allow the model to generalize to a much wider set of unseen videos. We also show a robust way to further improve the generalization capability of our models by calibrating the probabilities of our audio-visual on-screen classifier, using only a small amount of in-domain videos labeled for their on-screen presence. For evaluation and semi-supervised training, we collected human annotations of on-screen audio from a large database of in-the-wild videos (YFCC100m). Our results show marked improvements in on-screen separation performance, in more general conditions than previous methods. ","Improving On-Screen Sound Separation for Open-Domain Videos with
Audio-Visual Self-Attention",4,"['I am happy to announce that the new version of AudioScope is online 🥳📢! We obtained some impressive results for on-screen sound separation tasks simply by watching in-the-wild videos! Please check our paper with @ScottTWisdom here: <LINK> <LINK>', 'We extended the AudioScope model (https://t.co/BuTSVUuTeJ) towards these axes:\n- We generalize to a wider set of videos for both training and testing.\n- Audio-visual self and cross-modal attention from low level features boost the performance and esp. on higher frame rates.', '- Pretraining the sound separation model helps significantly to provide more stable pseudo-labels for the audio-visual coincidence classifier.\n- Freezing the embedding networks also helps the model to prevent overfitting and leverage vast amounts of in-the-wild videos.', 'We introduce efficient separable versions of our self and cross-modal attention blocks which capture dependencies between each estimated source waveform and the input video! First perform self-attention across time and then attend across the second axis (space and/or sources)! https://t.co/Wgh9RXTqOh']",21,06,1058
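A sketch of the "separable" attention idea described above, in PyTorch: self-attention is applied across time within each source, then across the source axis at each time step. This is an illustration with assumed tensor shapes, not the AudioScope implementation (which also includes cross-modal attention with the video features):

# Illustrative separable attention over (batch, sources, time, dim) features.
import torch
import torch.nn as nn

class SeparableAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.src_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (batch, sources, time, dim)
        b, s, t, d = x.shape
        xt = x.reshape(b * s, t, d)            # self-attention across time
        xt, _ = self.time_attn(xt, xt, xt)
        x = xt.reshape(b, s, t, d)
        xs = x.permute(0, 2, 1, 3).reshape(b * t, s, d)   # attention across sources
        xs, _ = self.src_attn(xs, xs, xs)
        return xs.reshape(b, t, s, d).permute(0, 2, 1, 3)

Factoring the attention this way keeps the cost linear in sources x time rather than quadratic in their product, which is the usual motivation for separable attention.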
3833,51,1407411113226952705,822667790628880384,Nir Goldman,"Please see our latest paper on a new graph theory-based order parameter for characterizing condensed phases: <LINK> Our order parameter is physically motivated, and involves a single user-defined variable. It clearly outperforms the Steinhardt order parameters as a well machine learned approach for a wide variety of solids, molten materials, and nanoclusters. @abhishekshwarma Thanks for your comments! I'll be sure to have the lead author post is as soon as possible. @abhishekshwarma We did experiment with using simpler functional forms (e.g., either the connectivity OR ""entropy"" term) but found that the combination of the two did the best job of distinguishing between materials. The order parameter does well for 2D systems (Fig. 2). @abhishekshwarma That's a nice point -- a clean example showing improved results with the cubic exponent would be interesting.",https://arxiv.org/abs/2106.08215,"A new graph-based order parameter is introduced for the characterization of atomistic structures. The order parameter is universal to any material/chemical system, and is transferable to all structural geometries. Three sets of data are used to validate both the generalizability and accuracy of the algorithm: (1) liquid lithium configurations spanning up to 300 GPa, (2) condensed phases of carbon along with nanotubes and buckyballs at ambient and high temperature, and (3) a diverse set of aluminum configurations including surfaces, compressed and expanded lattices, point defects, grain boundaries, liquids, nanoparticles, all at non-zero temperatures. The aluminum configurations are also compared to existing characterization methods for both speed and accuracy. Our order parameter uniquely classifies every configuration and outperforms all crystalline order parameters studied here, opening the door for its use in a multitude of complex application spaces that can require fine configurational characterization of materials. ","A Physically-informed Graph-based Order Parameter for the Universal
Characterization of Atomic Structures",5,"['Please see our latest paper on a new graph theory-based order parameter for characterizing condensed phases: <LINK>', 'Our order parameter is physically motivated, and involves a single user-defined variable. It clearly outperforms the Steinhardt order parameters as a well machine learned approach for a wide variety of solids, molten materials, and nanoclusters.', ""@abhishekshwarma Thanks for your comments! I'll be sure to have the lead author post is as soon as possible."", '@abhishekshwarma We did experiment with using simpler functional forms (e.g., either the connectivity OR ""entropy"" term) but found that the combination of the two did the best job of distinguishing between materials. The order parameter does well for 2D systems (Fig. 2).', ""@abhishekshwarma That's a nice point -- a clean example showing improved results with the cubic exponent would be interesting.""]",21,06,869
3834,250,1368841022374088707,776765039726460929,Carlo Felice Manara,"Paper by previous @ESO student Eleonora Fiorellino is out! <LINK> We used #KMOS on the #VLT to study accretion in young stars in NGC1333 in the Perseus region. We find higher accretion rates in Class I wrt Class II, but no extremely high Macc in Class I <LINK>",https://arxiv.org/abs/2103.03863,"The mass accretion rate is the fundamental parameter to understand the process of mass assembly that results in the formation of a low-mass star. This parameter has been largely studied in Classical TTauri stars in star-forming regions with ages of 1-10Myr. However, little is known about the accretion properties of young stellar objects (YSOs) in younger regions and early stages of star formation, such as in the Class0/I phases. We present new NIR spectra of 17 ClassI/Flat and 35 ClassII sources located in the young (<1Myr) NGC1333 cluster, acquired with the KMOS instrument at the VLT. Our goal is to study whether the mass accretion rate evolves with age, as suggested by the widely adopted viscous evolution model, by comparing the properties of the NGC1333 members with samples of older regions. We measured the stellar parameters and accretion rates of our sample, finding a correlation between accretion and stellar luminosity, and between mass accretion rate and stellar mass. Both correlations are compatible within the errors with the older Lupus star-forming region, while only the latter is consistent with results from ChamaeleonI. The ClassI sample shows larger accretion luminosities with respect to the ClassII stars of the same cloud. However, the derived accretion rates are not sufficiently high to build up the inferred stellar masses, assuming steady accretion during the ClassI lifetime. This suggests that the sources are not in their main accretion phase and that most of their mass has already been accumulated during a previous stage and/or that the accretion is an episodic phenomenon. We show that some of the targets originally classified as Class I through Spitzer photometry are in fact evolved or low accreting objects. This evidence can have implications for the estimated protostellar phase lifetimes. Further observations are needed to determine if this is a general result. ","KMOS study of the mass accretion rate from Class I to Class II in NGC
1333",1,"['Paper by previous @ESO student Eleonora Fiorellino is out! <LINK>\nWe used #KMOS on the #VLT to study accretion in young stars in NGC1333 in the Perseus region. We find higher accretion rates in Class I wrt Class II, but no extremely high Macc in Class I <LINK>']",21,03,260
3835,84,1059699725648232449,1000951360404312065,Alexei Moiseev,"The galactic wind in NGC 6286 was discovered at the 6-m telescope in 2004. Now it is only a member of a large team of wind outflows in our new paper: ""Systematic study of outflows in the Local Universe using CALIFA: I. Sample selection..."" Lopez Coba + <LINK> <LINK>",https://arxiv.org/abs/1811.01253,"We present a sample of 17 objects from the CALIFA survey where we find initial evidence of galactic winds based on their off-axis ionization properties. We identify the presence of outflows using various optical diagnostic diagrams (e.g., EW(H$\alpha$), [Nii]/H$\alpha$, [Sii]/H$\alpha$, [Oi]/H$\alpha$ line-ratio maps). We find that all 17 candidate outflow galaxies lie along the sequence of active star formation in the M$_\star$ vs. star-formation rate diagram, without a clear excess in the integrated SFR. The location of galaxies along the star-formation main sequence (SFMS) does not influence strongly the presence or not of outflows. The analysis of the star-formation rate density ($\Sigma_{\rm SFR}$) reveals that the CALIFA sources present higher values when compared with normal star-forming galaxies. The strength of this relation depends on the calibrator used to estimate the SFR. This excess in $\Sigma_{\rm SFR}$ is significant within the first effective radius supporting the idea that most outflows are driven by processes in the inner regions of a galaxy. We find that the molecular gas mass density ($\Sigma_\mathrm{gas}$) is a key parameter that plays an important role in the generation of outflows through its association with the local SFR. The canonical threshold reported for the generation of outflows -- $\Sigma_{\rm SFR}>0.1$ $\mathrm{M}_\odot \mathrm{yr}^{-1} \mathrm{kpc}^{-2}$ -- is only marginally exceeded in our sample. Within the Kennicutt-Schmidt diagram we propose a domain for galaxies hosting starburst-driven outflows defined by $\Sigma_{\rm SFR}>10^{-2} \,\mathrm{M}_\odot \mathrm{yr}^{-1} \mathrm{kpc}^{-2}$ and $\Sigma_\mathrm{gas}>10^{1.2} \, \mathrm{M}_\odot \mathrm{pc}^{-2}$ within a central kiloparcec region. ","Systematic study of outflows in the Local Universe using CALIFA: I.
Sample selection and main properties",1,"['The galactic wind in NGC 6286 was discovered at the 6-m telescope in 2004. Now it is only a member of a large team of wind outflows in our new paper: \n""Systematic study of outflows in the Local Universe using CALIFA: I. Sample selection..."" Lopez Coba +\n<LINK> <LINK>']",18,11,266
3836,130,1455602326194974720,777776718928941056,Daniel Muthukrishna,"New paper on the Arxiv today on the real-time detection of anomalies in astronomical time series. <LINK> We compare a deep neural network and a parametric approach. The fact that NNs generalize too well to different types of data obscures anomaly detection Excited to finally have this paper out with Michelle Lochner, @Doctor_Lobster, @SaraWebbScience, and @gsnarayan",https://arxiv.org/abs/2111.00036,"New time-domain surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST), will observe millions of transient alerts each night, making standard approaches of visually identifying new and interesting transients infeasible. We present two novel methods of automatically detecting anomalous transient light curves in real-time. Both methods are based on the simple idea that if the light curves from a known population of transients can be accurately modelled, any deviations from model predictions are likely anomalies. The first modelling approach is a probabilistic neural network built using Temporal Convolutional Networks (TCNs) and the second is an interpretable Bayesian parametric model of a transient. We demonstrate our methods' ability to provide anomaly scores as a function of time on light curves from the Zwicky Transient Facility. We show that the flexibility of neural networks, the attribute that makes them such a powerful tool for many regression tasks, is what makes them less suitable for anomaly detection when compared with our parametric model. The parametric model is able to identify anomalies with respect to common supernova classes with low false anomaly rates and high true anomaly rates achieving Area Under the Receive Operating Characteristic (ROC) Curve (AUC) scores above 0.8 for most rare classes such as kilonovae, tidal disruption events, intermediate luminosity transients, and pair-instability supernovae. Our ability to identify anomalies improves over the lifetime of the light curves. Our framework, used in conjunction with transient classifiers, will enable fast and prioritised follow-up of unusual transients from new large-scale surveys. ",Real-time detection of anomalies in large-scale transient surveys,2,"['New paper on the Arxiv today on the real-time detection of anomalies in astronomical time series. <LINK>\n\nWe compare a deep neural network and a parametric approach. The fact that NNs generalize too well to different types of data obscures anomaly detection', 'Excited to finally have this paper out with Michelle Lochner, @Doctor_Lobster, @SaraWebbScience, and @gsnarayan']",21,11,368
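One generic way to turn "deviation from the model prediction" into a real-time anomaly score (an illustration only; the paper defines its own scores for the neural and parametric models):

# Generic uncertainty-weighted residual score for a light curve, computed as a
# running quantity so it can be updated as new photometry arrives.
import numpy as np

def anomaly_score(flux_obs, flux_err, flux_pred, pred_std):
    var = flux_err**2 + pred_std**2            # data + model uncertainty
    chi2 = (flux_obs - flux_pred)**2 / var     # per-epoch squared deviation
    return np.cumsum(chi2) / np.arange(1, len(chi2) + 1)   # score vs. time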
3837,39,1088078623410737152,806058672619212800,Guillaume Lample,"Check out our new paper on cross-lingual language model pretraining! We extend BERT to the cross-lingual setting. Huge improvements on XNLI, Supervised MT, Unsupervised MT. <LINK> With @alex_conneau <LINK> @yoavgo @alex_conneau Thanks :) The data used for supervised/unsupervised MT pretraining is monolingual only, so the unsupervised MT remains without parallel data. The only case where we use parallel data for pretraining is for the TLM loss, and we only use it on the XNLI task. We will clarify this. @GregFrench26 @alex_conneau Thank you ! @arankomatsuzaki @alex_conneau By ""unconditional LM"" you mean regular causal LM? We haven't tried, I think it could help but as you say, probably not as much as in MT. Also note that in MT pretraining the encoder is the important part, pretraining the decoder doesn't help much. @arankomatsuzaki @alex_conneau Which basically means that learning a good encoder / good sentence representations is harder than learning a good decoder.",https://arxiv.org/abs/1901.07291,"Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available. ",Cross-lingual Language Model Pretraining,5,"['Check out our new paper on cross-lingual language model pretraining! We extend BERT to the cross-lingual setting. Huge improvements on XNLI, Supervised MT, Unsupervised MT.\n<LINK>\nWith @alex_conneau <LINK>', '@yoavgo @alex_conneau Thanks :) The data used for supervised/unsupervised MT pretraining is monolingual only, so the unsupervised MT remains without parallel data. The only case where we use parallel data for pretraining is for the TLM loss, and we only use it on the XNLI task. We will clarify this.', '@GregFrench26 @alex_conneau Thank you !', '@arankomatsuzaki @alex_conneau By ""unconditional LM"" you mean regular causal LM? We haven\'t tried, I think it could help but as you say, probably not as much as in MT. Also note that in MT pretraining the encoder is the important part, pretraining the decoder doesn\'t help much.', '@arankomatsuzaki @alex_conneau Which basically means that learning a good encoder / good sentence representations is harder than learning a good decoder.']",19,01,979
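A rough sketch of how a translation language modeling (TLM) training example is built from a parallel sentence pair, as described in the thread: concatenate the two sentences and mask tokens on both sides so the model can attend across languages. Token IDs, the mask ID, the language-ID list, and the -100 ignore-index convention are placeholder assumptions here, not the released XLM code:

# Hypothetical TLM example construction; tokenization is assumed done elsewhere.
import random

MASK_ID = 0  # placeholder mask-token id

def make_tlm_example(src_ids, tgt_ids, mask_prob=0.15):
    tokens = src_ids + tgt_ids                       # concatenated parallel pair
    langs = [0] * len(src_ids) + [1] * len(tgt_ids)  # language-embedding inputs
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(MASK_ID)
            labels.append(tok)        # predict the original token
        else:
            inputs.append(tok)
            labels.append(-100)       # position ignored by the loss
    return inputs, labels, langs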
3838,81,1306572504887111683,169369247,Dripto Debroy,"New paper! In this work we create flag-style gadgets which can detect errors on NISQ algorithms, allowing for improved performance under postselection. These low-overhead ideas may be helpful for algorithms being run on machines too small for QEC. <LINK> <LINK> I'm particularly excited because this is my first project where the core idea was one I came up. I scribbled Fig. 2b on one of the scratch pads at the QEC19 poster session after a lunch with our group and @JarrodMcclean. Joint work w/ @kenbrownquantum",https://arxiv.org/abs/2009.07752,"Flag verification techniques are useful in quantum error correction for detecting critical faults. Here we present an application of flag verification techniques to improving post-selected performance of near-term algorithms. We extend the definition of what constitutes a flag by creating error-detection gadgets based on known transformations of unitary operators. In the case of Clifford or near-Clifford circuits, these unitary operators can be chosen to be controlled Pauli gates, leading to gadgets which require only a small number of additional Clifford gates. We show that such flags can improve circuit fidelities by up to a factor of 2 after post selection, and demonstrate their effectiveness over error models featuring single-qubit depolarizing noise, crosstalk, and two-qubit coherent overrotation. ",Extended flag gadgets for low-overhead circuit verification,2,"['New paper! \n\nIn this work we create flag-style gadgets which can detect errors on NISQ algorithms, allowing for improved performance under postselection. These low-overhead ideas may be helpful for algorithms being run on machines too small for QEC.\n\n<LINK> <LINK>', ""I'm particularly excited because this is my first project where the core idea was one I came up. I scribbled Fig. 2b on one of the scratch pads at the QEC19 poster session after a lunch with our group and @JarrodMcclean. Joint work w/ @kenbrownquantum""]",20,09,514
3839,207,1251092435653935104,1197738606866993152,Fatemeh Saleh,"Check out our CVPR 2020 (Oral) paper proposing UCNet: Paper: <LINK> Code: <LINK> In UCNet, we propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. <LINK> Existing RGBD saliency detection methods treat the saliency detection task as a point estimation problem, and produce a single saliency map following a deterministic learning pipeline. Inspired by the saliency data labeling process, we propose probabilistic RGB-D saliency detection network via conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space.",https://arxiv.org/abs/2004.05763,"In this paper, we propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. Existing RGB-D saliency detection methods treat the saliency detection task as a point estimation problem, and produce a single saliency map following a deterministic learning pipeline. Inspired by the saliency data labeling process, we propose probabilistic RGB-D saliency detection network via conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space. With the proposed saliency consensus process, we are able to generate an accurate saliency map based on these multiple predictions. Quantitative and qualitative evaluations on six challenging benchmark datasets against 18 competing algorithms demonstrate the effectiveness of our approach in learning the distribution of saliency maps, leading to a new state-of-the-art in RGB-D saliency detection. ","UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional
Variational Autoencoders",3,"['Check out our CVPR 2020 (Oral) paper proposing UCNet:\nPaper:\xa0<LINK>\nCode:\xa0<LINK>\n\nIn UCNet, we propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. <LINK>', 'Existing RGBD saliency detection methods treat the saliency detection task as a point estimation problem, and produce a single saliency map following a deterministic learning pipeline.', 'Inspired by the saliency data labeling process, we propose probabilistic RGB-D saliency detection network via conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space.']",20,04,683
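A generic sketch of the "sample several saliency maps, then fuse them" inference described above. The prior_net/decoder modules and their signatures are assumed, and the mean-and-threshold fusion is a simple stand-in for the paper's saliency consensus process:

# Illustrative CVAE-style inference: sample latents, decode multiple maps, fuse.
import torch

def predict_with_consensus(prior_net, decoder, rgbd, n_samples=8, thresh=0.5):
    mu, logvar = prior_net(rgbd)                      # assumed module outputs
    maps = []
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample
        maps.append(torch.sigmoid(decoder(rgbd, z)))  # one stochastic saliency map
    stacked = torch.stack(maps)                       # (n_samples, B, 1, H, W)
    consensus = (stacked.mean(dim=0) > thresh).float()
    return consensus, stacked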
3840,71,1517039759071186951,547776192,Chris Lovell,"New paper on the arXiv today led by @stewilkins looking at 'the redshift frontier' (z > 10) in FLARES ✨ We compare to the few existing constraints, and make predictions for the properties of galaxies (intrinsic and observed) accessible to @NASAWebb <LINK>",https://arxiv.org/abs/2204.09431,"The James Webb Space Telescope (JWST) is set to transform many areas of astronomy, one of the most exciting is the expansion of the redshift frontier to $z>10$. In its first year alone JWST should discover hundreds of galaxies, dwarfing the handful currently known. To prepare for these powerful observational constraints, we use the First Light And Reionisation Epoch (FLARES) simulations to predict the physical and observational properties of the $z>10$ population of galaxies accessible to JWST. This is the first time such predictions have been made using a hydrodynamical model validated at low redshift. Our predictions at $z=10$ are broadly in agreement with current observational constraints on the far-UV luminosity function and UV continuum slope $\beta$, though the observational uncertainties are large. We note tension with recent constraints $z\sim 13$ from Harikane et al. 2022 - compared to these constraints, FLARES predicts objects with the same space density should have an order of magnitude lower luminosity, though this is mitigated slightly if dust attenuation is negligible in these systems. Our predictions suggest that in JWST's first cycle alone, around $600$ galaxies should be identified at $z>10$, with the first small samples available at $z>13$. ","First Light And Reionisation Epoch Simulations (FLARES) V: The redshift
frontier",1,"[""New paper on the arXiv today led by @stewilkins looking at 'the redshift frontier' (z > 10) in FLARES ✨\n\nWe compare to the few existing constraints, and make predictions for the properties of galaxies (intrinsic and observed) accessible to @NASAWebb \n\n<LINK>""]",22,04,259
3841,115,1403340131566768132,1310552063999438849,Hauke Group,🔝Check out the new paper ▶ <LINK> 🙌🏼Congrats to the authors @JCHalimeh @PhilippHauke #MaartenVanDamme #LingzhenGuo #JohannesLang 🗣<Dynamical phase transitions in #antiferromagnetic long-range spin chains are different from those in their ferromagnetic cousins> <LINK>,https://arxiv.org/abs/2106.05282,"In recent years, dynamical phase transitions and out-of-equilibrium criticality have been at the forefront of ultracold gases and condensed matter research. Whereas universality and scaling are established topics in equilibrium quantum many-body physics, out-of-equilibrium extensions of such concepts still leave much to be desired. Using exact diagonalization and the time-dependent variational principle in uniform matrix product states, we calculate the time evolution of the local order parameter and Loschmidt return rate in transverse-field Ising chains with antiferromagnetic power law-decaying interactions, and map out the corresponding rich dynamical phase diagram. \textit{Anomalous} cusps in the return rate, which are ubiquitous at small quenches within the ordered phase in the case of ferromagnetic long-range interactions, are absent within the accessible timescales of our simulations in the antiferromagnetic case, showing that long-range interactions are not a sufficient condition for their appearance. We attribute this to much weaker domain-wall binding in the antiferromagnetic case. For quenches across the quantum critical point, \textit{regular} cusps appear in the return rate and connect to the local order parameter changing sign, indicating the concurrence of two major concepts of dynamical phase transitions. Our results consolidate conclusions of previous works that a necessary condition for the appearance of anomalous cusps in the return rate after quenches within the ordered phase is for topologically trivial local spin flips to be the energetically dominant excitations in the spectrum of the quench Hamiltonian. Our findings are readily accessible in modern trapped-ion setups, and we outline the associated experimental considerations. ","Dynamical phase transitions in quantum spin models with
antiferromagnetic long-range interactions",1,['🔝Check out the new paper ▶ <LINK>\n🙌🏼Congrats to the authors @JCHalimeh @PhilippHauke #MaartenVanDamme #LingzhenGuo #JohannesLang\n🗣<Dynamical phase transitions in #antiferromagnetic long-range spin chains are different from those in their ferromagnetic cousins> <LINK>'],21,06,273
3842,98,1146459699841134595,2445765961,Macartan Humphreys,"Happy to share a new working paper with Philip Dawid and Monica Musio on when you can use information on causal processes to make claims about ""causes of effects"" (is Y due to X? / process tracing / causal attribution) <LINK> Highlights: 1/n * Experiments focus on treatment effects but we often care about whether an outcome is due to a cause * That's a harder question and answer usually not identified by experimental data * But don't obsess about identification: you can learn lots even if estimands are not identified * Knowledge of mediation processes can tighten bounds even if you cannot observe the mediators. But: 1 Getting arbitrarily ""close"" to a causal process does not render causal effects observable 2 Process data better for disconfirming causal relations than for confirming them 3. Max learning arises from short processes (when X is a necessary condition for a sufficient condit'n for Y) 4. Understanding conditional fx can tighten bounds more than knowledge of mediation Reminder for me how much to learn working with great people outside your discipline",https://arxiv.org/abs/1907.00399,"Suppose X and Y are binary exposure and outcome variables, and we have full knowledge of the distribution of Y, given application of X. From this we know the average causal effect of X on Y. We are now interested in assessing, for a case that was exposed and exhibited a positive outcome, whether it was the exposure that caused the outcome. The relevant ""probability of causation"", PC, typically is not identified by the distribution of Y given X, but bounds can be placed on it, and these bounds can be improved if we have further information about the causal process. Here we consider cases where we know the probabilistic structure for a sequence of complete mediators between X and Y. We derive a general formula for calculating bounds on PC for any pattern of data on the mediators (including the case with no data). We show that the largest and smallest upper and lower bounds that can result from any complete mediation process can be obtained in processes with at most two steps. We also consider homogeneous processes with many mediators. PC can sometimes be identified as 0 with negative data, but it cannot be identified at 1 even with positive data on an infinite set of mediators. The results have implications for learning about causation from knowledge of general processes and of data on cases. ",Bounding Causes of Effects with Mediators,4,"['Happy to share a new working paper with Philip Dawid and Monica Musio on when you can use information on causal processes to make claims about ""causes of effects"" \n\n(is Y due to X? / process tracing / causal attribution)\n\n<LINK>\n\nHighlights:\n\n1/n', ""* Experiments focus on treatment effects but we often care about whether an outcome is due to a cause\n* That's a harder question and answer usually not identified by experimental data\n* But don't obsess about identification: you can learn lots even if estimands are not identified"", '* Knowledge of mediation processes can tighten bounds even if you cannot observe the mediators.\n\nBut:\n\n1 Getting arbitrarily ""close"" to a causal process does not render causal effects observable\n\n2 Process data better for disconfirming causal relations than for confirming them', ""3. Max learning arises from short processes (when X is a necessary condition for a sufficient condit'n for Y)\n\n4. Understanding conditional fx can tighten bounds more than knowledge of mediation\n\nReminder for me how much to learn working with great people outside your discipline""]",19,07,1076
3843,73,1229669766035578880,223144852,Suchita Kulkarni,"Paper day! <LINK> What if our searches for heavy Higgses fall short because we only look for its Standard Model decays? Worry not, if the Higgs decays to new particles, we can look for it at the LHC! High luminosity LHC will be great for this. Albeit the paper considers only supersymmetric models, it is possible to generalise this to extended Higgs sectors containing additional BSM particles. Upshot: consider Higgs production mechanisms due to b-quarks inside proton (bbH mode). Consider backgrounds from known SM processes but also model dependent BSM processes. If the Higgs decays to charged particles, it is worth considering long lived particle searches. Look at kinematics of the final states, given that the heavy Higgs is a resonance, it can leave imprint on the kinematics. This usually results in kinematic end points, distinct visible and missing energy imbalances, and long-er lived particles due to boosts. This was absolute fun to work with. Learned a lot! Thanks @OeAD_worldwide and @IndiaDST for generous funding to make this collaboration possible. @VM_Lozano Hope we cite you ;) Yes, non-SM decays of heavy Higgs are not well studied so far. It has a lot of potential as I see it. @VM_Lozano Sounds great! I have thoughts :)",http://arxiv.org/abs/2002.07137,"In this work, we analyse and demonstrate possible strategies to explore extended Higgs sector of the Minimal Supersymmetric Standard Model (MSSM). In particular we concentrate on heavy Higgs decays to electroweakinos. We analyse the Higgs to electroweakino decays in the allowed MSSM parameter space after taking into account 13 TeV LHC searches for supersymmetric particles and phenomenological constraints such as flavour physics, Higgs measurements and dark matter constraints. We explore some novel aspects of these Higgs decays. The final states resulting from Higgs to electroweakino decays will have backgrounds arising from the Standard Model as well as direct electroweakino production at the LHC. We demonstrate explicit kinematical differences between Higgs to electroweakino decays and associated backgrounds. Furthermore, we demonstrate for a few specific example points, optimised analysis search strategies at the high luminosity LHC (HL-LHC) run. Finally, we comment on possible search strategies for heavy Higgs decays to exotic final states, where the lightest chargino is long lived and leads to a disappearing track at the LHC. ",Searching for heavy Higgs in supersymmetric final states at the LHC,7,"['Paper day! <LINK>\n\nWhat if our searches for heavy Higgses fall short because we only look for its Standard Model decays? \n\nWorry not, if the Higgs decays to new particles, we can look for it at the LHC! High luminosity LHC will be great for this.', 'Albeit the paper considers only supersymmetric models, it is possible to generalise this to extended Higgs sectors containing additional BSM particles.', 'Upshot: consider Higgs production mechanisms due to b-quarks inside proton (bbH mode). Consider backgrounds from known SM processes but also model dependent BSM processes. If the Higgs decays to charged particles, it is worth considering long lived particle searches.', 'Look at kinematics of the final states, given that the heavy Higgs is a resonance, it can leave imprint on the kinematics. This usually results in kinematic end points, distinct visible and missing energy imbalances, and long-er lived particles due to boosts.', 'This was absolute fun to work with. Learned a lot! Thanks @OeAD_worldwide and @IndiaDST for generous funding to make this collaboration possible.', '@VM_Lozano Hope we cite you ;)\nYes, non-SM decays of heavy Higgs are not well studied so far. It has a lot of potential as I see it.', '@VM_Lozano Sounds great! I have thoughts :)']",20,02,1247
3844,104,1392376737359699969,1707692827,Paul McMillan,"New paper on the Milky Way's spiral arms lead by @alfredcas - <LINK> <LINK> This paper started life when Alfred came for a 6 week visit at the end of February last year - as you can imagine this got cut short by circumstances beyond our control... ... but we/he persevered, and now we have this great paper to show for it!",https://arxiv.org/abs/2105.04590,"Context. The physical processes driving the formation of Galactic spiral arms are still under debate. Studies using open clusters favour the description of the Milky Way spiral arms as long-lived structures following the classical density wave theory. Current studies comparing the Gaia DR2 field stars kinematic information of the Solar neighbourhood to simulations, find a better agreement with short-lived arms with a transient behaviour. Aims. Our aim is to provide an observational, data-driven view of the Milky Way spiral structure and its dynamics using open clusters as the main tracers, and to contrast it with simulation-based approaches. We use the most complete catalogue of Milky Way open clusters, with astrometric Gaia EDR3 updated parameters, estimated astrophysical information and radial velocities, to re-visit the nature of the spiral pattern of the Galaxy. Methods. We use a Gaussian mixture model to detect overdensities of open clusters younger than 30 Myr that correspond to the Perseus, Local, Sagittarius and Scutum spiral arms, respectively. We use the birthplaces of the open cluster population younger than 80 Myr to trace the evolution of the different spiral arms and compute their pattern speed. We analyse the age distribution of the open clusters across the spiral arms to explore the differences in the rotational velocity of stars and spiral arms. Results. We are able to increase the range in Galactic azimuth where present-day spiral arms are described, better estimating its parameters by adding 264 young open clusters to the 84 high-mass star-forming regions used so far, thus increasing by a 314% the number of tracers. We use the evolution of the open clusters from their birth positions to find that spiral arms nearly co-rotate with field stars at any given radius, discarding a common spiral pattern speed for the spiral arms explored. [abridged] ",On the Milky Way spiral arms from open clusters in Gaia EDR3,3,"[""New paper on the Milky Way's spiral arms lead by @alfredcas - <LINK> <LINK>"", 'This paper started life when Alfred came for a 6 week visit at the end of February last year - as you can imagine this got cut short by circumstances beyond our control...', '... but we/he persevered, and now we have this great paper to show for it!']",21,05,322
3845,106,1456695666814636036,1003652696723873792,Max Gaspari,"New paper on multiscale BH #feeding & #feedback in the beautiful CenA galaxy (with B. McKinley), majorly contributing to the theory part. Another strong case supporting the #ChaoticColdAccretion unified scenario and pillar for #BlackHoleWeather. <LINK> #astronomy <LINK>",https://arxiv.org/abs/2111.02683,"Supermassive black holes and supernovae explosions at the centres of active galaxies power cycles of outflowing and inflowing gas that affect galactic evolution and the overall structure of the Universe. While simulations and observations show that this must be the case, the range of physical scales (over ten orders of magnitude) and paucity of available tracers, make both the simulation and observation of these effects difficult. By serendipity, there lies an active galaxy, Centaurus A (NGC 5128), at such a close proximity as to allow its observation over this entire range of scales and across the entire electromagnetic spectrum. In the radio band, however, details on scales of 10-100 kpc from the supermassive black hole have so far been obscured by instrumental limitations. Here we report low-frequency radio observations that overcome these limitations and show evidence for a broad, bipolar outflow with velocity 1100 km per s and mass outflow rate of 2.9 solar masses per year on these scales. We combine our data with the plethora of multi-scale, multi-wavelength historical observations of Centaurus A to probe a unified view of feeding and feedback, which we show to be consistent with the Chaotic Cold Accretion self-regulation scenario. ",Multi-scale feedback and feeding in the closest radio galaxy Centaurus A,1,"['New paper on multiscale BH #feeding &amp; #feedback in the beautiful CenA galaxy (with B. McKinley), majorly contributing to the theory part. Another strong case supporting the #ChaoticColdAccretion unified scenario and pillar for #BlackHoleWeather.\n<LINK>\n#astronomy <LINK>']",21,11,270
3846,17,1355178915854127107,1865461842,Luca Soldaini 🏳️‍🌈,"New #EACL2021 paper with our fantastic intern @HanRujun is out! With @amoschitti1, we study the problem of making Answer Sentence Selection models more contextual without having to parse passages like slow Machine Reading Comprehension models do 🧵1/5 <LINK> <LINK> .@HanRujun experimented with two approaches for selecting relevant snippets from documents to use as context, as well as three different architectures for exploiting said context. 2/5 #EACL2021 <LINK> we found that, by using sentence similarity to find relevant snippets, and combining signals using a multi-way attention similar to Tan et al. (2018), we can essentially match the performance of an ensemble model with 2x parameters and 1.3x latency 3/5 #EACL2021 <LINK> for full analysis and results, see our paper on ArXiv <LINK> Code is not out yet, but watch this space 👀 #EACL2021 4/5 lastly, I want to stress how wonderful was to have @HanRujun with us – remote internships during a pandemic are not easy, but Rujun did an amazing job with deepening our understanding of this important document reading / efficiency for QA! Ty so much for your hard work ☺️ 5/5",https://arxiv.org/abs/2101.12093,"Answer Sentence Selection (AS2) is an efficient approach for the design of open-domain Question Answering (QA) systems. In order to achieve low latency, traditional AS2 models score question-answer pairs individually, ignoring any information from the document each potential answer was extracted from. In contrast, more computationally expensive models designed for machine reading comprehension tasks typically receive one or more passages as input, which often results in better accuracy. In this work, we present an approach to efficiently incorporate contextual information in AS2 models. For each answer candidate, we first use unsupervised similarity techniques to extract relevant sentences from its source document, which we then feed into an efficient transformer architecture fine-tuned for AS2. Our best approach, which leverages a multi-way attention architecture to efficiently encode context, improves 6% to 11% over noncontextual state of the art in AS2 with minimal impact on system latency. All experiments in this work were conducted in English. ","Modeling Context in Answer Sentence Selection Systems on a Latency Budget",5,"['New #EACL2021 paper with our fantastic intern @HanRujun is out! With @amoschitti1, we study the problem of making Answer Sentence Selection models more contextual without having to parse passages like slow Machine Reading Comprehension models do 🧵1/5 <LINK> <LINK>', '.@HanRujun experimented with two approaches for selecting relevant snippets from documents to use as context, as well as three different architectures for exploiting said context. 2/5 #EACL2021 https://t.co/b1XgdEsTqd', 'we found that, by using sentence similarity to find relevant snippets, and combining signals using a multi-way attention similar to Tan et al. (2018), we can essentially match the performance of an ensemble model with 2x parameters and 1.3x latency 3/5 #EACL2021 https://t.co/WtEM2bpqZo', 'for full analysis and results, see our paper on ArXiv https://t.co/zO0voouGWb Code is not out yet, but watch this space 👀 #EACL2021 4/5', 'lastly, I want to stress how wonderful was to have @HanRujun with us – remote internships during a pandemic are not easy, but Rujun did an amazing job with deepening our understanding of this important document reading / efficiency for QA! Ty so much for your hard work ☺️ 5/5']",21,01,1131
3847,61,1027368062419259393,822304279037784065,Terry Taewoong Um,"""Parkinson's Disease Assessment from a Wrist-Worn Wearable Sensor in Free-Living Conditions: Deep Ensemble Learning and Visualization"" <LINK> My new paper! Unreliable predictions can be overcome by introducing an ensemble of CNNs & a simple prediction smoothing. <LINK>",https://arxiv.org/abs/1808.02870,"Parkinson's Disease (PD) is characterized by disorders in motor function such as freezing of gait, rest tremor, rigidity, and slowed and hyposcaled movements. Medication with dopaminergic medication may alleviate those motor symptoms, however, side-effects may include uncontrolled movements, known as dyskinesia. In this paper, an automatic PD motor-state assessment in free-living conditions is proposed using an accelerometer in a wrist-worn wearable sensor. In particular, an ensemble of convolutional neural networks (CNNs) is applied to capture the large variability of daily-living activities and overcome the dissimilarity between training and test patients due to the inter-patient variability. In addition, class activation map (CAM), a visualization technique for CNNs, is applied for providing an interpretation of the results. ","Parkinson's Disease Assessment from a Wrist-Worn Wearable Sensor in Free-Living Conditions: Deep Ensemble Learning and Visualization",1,"['""Parkinson\'s Disease Assessment from a Wrist-Worn Wearable Sensor in Free-Living Conditions: Deep Ensemble Learning and Visualization""\n<LINK>\nMy new paper! Unreliable predictions can be overcome by introducing an ensemble of CNNs &amp; a simple prediction smoothing. <LINK>']",18,08,269
3848,80,1426101099971284995,42604759,Oliver Obst,"Machine learning is already used to improve performance of athletes. Modelling energy and recovery involves exhausting tests, so established models are simple - but it doesn't mean they can't be improved. Fabian Weigend has just done that in our new paper: <LINK> <LINK>",https://arxiv.org/abs/2108.04510,"Data Science advances in sports commonly involve ""big data"", i.e., large sport-related data sets. However, such big data sets are not always available, necessitating specialized models that apply to relatively few observations. One important area of sport-science research that features small data sets is the study of energy recovery from exercise. In this area, models are typically fitted to data collected from exhaustive exercise test protocols, which athletes can perform only a few times. Recent findings highlight that established recovery models like W' balance (W'bal) models are too simple to adequately fit observed trends in the data. Therefore, we investigated a hydraulic model that requires the same few data points as W'bal models to be applied, but promises to predict recovery dynamics more accurately. To compare the hydraulic model to established W'bal models, we retrospectively applied them to a compilation of data from published studies. In total, one hydraulic model and three W'bal models were compared on data extracted from five studies. The hydraulic model outperformed established W'bal models on all defined metrics, even those that penalize models featuring higher numbers of parameters. These results incentivize further investigation of the hydraulic model as a new alternative to established performance models of energy recovery. ","A hydraulic model outperforms work-balance models for predicting recovery kinetics from intermittent exercise",1,"[""Machine learning is already used to improve performance of athletes. Modelling energy and recovery involves exhausting tests, so established models are simple - but it doesn't mean they can't be improved. Fabian Weigend has just done that in our new paper: <LINK> <LINK>""]",21,08,270
3849,5,1024499681240539136,2596589880,Nikolaus Kriegeskorte,"New preprint: Cognitive Computational Neuroscience -- a review/perspective paper with @pamelitadouglas <LINK> <LINK> Functional imaging gives us unprecedentedly rich measurements of brain activity, but theory-driven analyses with brain-computational models are needed to reveal computational mechanisms. <LINK> <LINK> Cognitive computational neuroscience combines the criteria of success of cognitive science, computational neuroscience, and AI: to explain detailed patterns of behavior and brain activity using neurobiologically plausible computational models that perform complex tasks. <LINK> Explaining how cognition is implemented in the brain requires biologically plausible process models that perform cognitive functions. We face a tradeoff between biological and cognitive fidelity, but the tradeoff can turn into a synergy. <LINK> <LINK> Building large-scale biologically plausible task-performing models that explain brain and behavioral data will require new forms of cross-disciplinary collaboration. Tasks, data, models, and tests are the key shareable components to coordinate efforts across labs and disciplines. <LINK>",https://arxiv.org/abs/1807.11819,"To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments. Cognitive science has developed computational models of human cognition, decomposing task performance into computational components. However, its algorithms still fall short of human intelligence and are not grounded in neurobiology. Computational neuroscience has investigated how interacting neurons can implement component functions of brain computation. However, it has yet to explain how those components interact to explain human cognition and behavior. Modern technologies enable us to measure and manipulate brain activity in unprecedentedly rich ways in animals and humans. However, experiments will yield theoretical insight only when employed to test brain-computational models. It is time to assemble the pieces of the puzzle of brain computation. Here we review recent work in the intersection of cognitive science, computational neuroscience, and artificial intelligence. Computational models that mimic brain information processing during perceptual, cognitive, and control tasks are beginning to be developed and tested with brain and behavioral data. ",Cognitive computational neuroscience,5,"['New preprint: Cognitive Computational Neuroscience -- a review/perspective paper with @pamelitadouglas <LINK> <LINK>', 'Functional imaging gives us unprecedentedly rich measurements of brain activity, but theory-driven analyses with brain-computational models are needed to reveal computational mechanisms. https://t.co/qQwvTEfx3g https://t.co/OS6DlrIjMe', 'Cognitive computational neuroscience combines the criteria of success of cognitive science, computational neuroscience, and AI: to explain detailed patterns of behavior and brain activity using neurobiologically plausible computational models that perform complex tasks. https://t.co/1Kq20mts9R', 'Explaining how cognition is implemented in the brain requires biologically plausible process models that perform cognitive functions. We face a tradeoff between biological and cognitive fidelity, but the tradeoff can turn into a synergy. https://t.co/qQwvTEfx3g https://t.co/1vYuIM0hEl', 'Building large-scale biologically plausible task-performing models that explain brain and behavioral data will require new forms of cross-disciplinary collaboration. Tasks, data, models, and tests are the key shareable components to coordinate efforts across labs and disciplines. https://t.co/ytzqa6p1od']",18,07,1135
3850,98,1514352308783624193,74846617,John Forbes,"New paper out today! I've been working on this one for *coughcough* years... We tracked every bit of gas entering or leaving a given region of a simulated galaxy to understand its energy budget: <LINK> <LINK> Featuring stacking, regression, and cross-correlation analyses, we found that gas accreting onto a galaxy has a *direct impact* on the energy budget, with an efficiency around 10%. <LINK> The title of the paper ""Gas Accretion Can Drive Turbulence in Galaxies"" is a friendly poke at @PFHopkins_Astro's 2013 paper ""Accretion does not drive the turbulence in galactic discs"" Many thanks to my coauthors, virtually none of whom are on twitter! @StephTonnesen @cchayward82",https://arxiv.org/abs/2204.05344,"The driving of turbulence in galaxies is deeply connected with the physics of feedback, star formation, outflows, accretion, and radial transport in disks. The velocity dispersion of gas in galaxies therefore offers a promising observational window into these processes. However, the relative importance of each of these mechanisms remains controversial. In this work we revisit the possibility that turbulence on galactic scales is driven by the direct impact of accreting gaseous material on the disk. We measure this effect in a disk-like star-forming galaxy in IllustrisTNG, using the high-resolution cosmological magnetohydrodynamical simulation TNG50. We employ Lagrangian tracer particles with a high time cadence of only a few Myr to identify accretion and other events, such as star formation, outflows, and movement within the disk. The energies of particles as they arrive in the disk are measured by stacking the events in bins of time before and after the event. The average effect of each event is measured on the galaxy by fitting explicit models for the kinetic and turbulent energies as a function of time in the disk. These measurements are corroborated by measuring the cross-correlation of the turbulent energy in the different annuli of the disk with other time series, and searching for signals of causality, i.e. asymmetries in the cross-correlation across zero time lag. We find that accretion contributes to the large-scale turbulent kinetic energy even if it is not the dominant driver of turbulence in this $\sim 5 \times 10^{9} M_\odot$ stellar mass galaxy. Extrapolating this finding to a range of galaxy masses, we find that there are regimes where energy from direct accretion may dominate the turbulent energy budget, particularly in disk outskirts, galaxies less massive than the Milky Way, and at redshift $\sim 2$. ",Gas Accretion Can Drive Turbulence in Galaxies,4,"[""New paper out today! I've been working on this one for *coughcough* years...\n\nWe tracked every bit of gas entering or leaving a given region of a simulated galaxy to understand its energy budget:\n\n <LINK> <LINK>"", 'Featuring stacking, regression, and cross-correlation analyses, we found that gas accreting onto a galaxy has a *direct impact* on the energy budget, with an efficiency around 10%. https://t.co/Rfamc9Ar8O', 'The title of the paper ""Gas Accretion Can Drive Turbulence in Galaxies"" is a friendly poke at @PFHopkins_Astro\'s 2013 paper ""Accretion does not drive the turbulence in galactic discs""', 'Many thanks to my coauthors, virtually none of whom are on twitter! @StephTonnesen @cchayward82']",22,04,677
3851,93,984518582783574016,2383638403,Savvas Zannettou,"Ever wondered what are the different types of mis/disinformation, as well as the various actors and their motives? Find out in our latest survey paper: <LINK> Also, we report useful research directions for various lines of work. @msirivia @jhblackb @kourtellis",https://arxiv.org/abs/1804.03461,"A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we make a step in this direction by providing a typology of the Web's false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences to the community (e.g., when election results are mutated) and 2) previous work show that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web. ","The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans",1,"['Ever wondered what are the different types of mis/disinformation, as well as the various actors and their motives? \nFind out in our latest survey paper: <LINK>\nAlso, we report useful research directions for various lines of work.\n@msirivia @jhblackb @kourtellis']",18,04,260
3852,134,1432969414912131076,284468794,Joe Zuntz,"Paper out today! We want to know if we can use a limited set of wavelengths (riz) to classify galaxies into broad redshift bins, so launched a mini-challenge to find methods. The answer seems to be ""yes"", at least if the training data is good enough.<LINK>",https://arxiv.org/abs/2108.13418,"This paper presents the results of the Rubin Observatory Dark Energy Science Collaboration (DESC) 3x2pt tomography challenge, which served as a first step toward optimizing the tomographic binning strategy for the main DESC analysis. The task of choosing an optimal tomographic binning scheme for a photometric survey is made particularly delicate in the context of a metacalibrated lensing catalogue, as only the photometry from the bands included in the metacalibration process (usually riz and potentially g) can be used in sample definition. The goal of the challenge was to collect and compare bin assignment strategies under various metrics of a standard 3x2pt cosmology analysis in a highly idealized setting to establish a baseline for realistically complex follow-up studies; in this preliminary study, we used two sets of cosmological simulations of galaxy redshifts and photometry under a simple noise model neglecting photometric outliers and variation in observing conditions, and contributed algorithms were provided with a representative and complete training set. We review and evaluate the entries to the challenge, finding that even from this limited photometry information, multiple algorithms can separate tomographic bins reasonably well, reaching figures-of-merit scores close to the attainable maximum. We further find that adding the g band to riz photometry improves metric performance by ~15% and that the optimal bin assignment strategy depends strongly on the science case: which figure-of-merit is to be optimized, and which observables (clustering, lensing, or both) are included. ",The LSST-DESC 3x2pt Tomography Optimization Challenge,1,"['Paper out today! We want to know if we can use a limited set of wavelengths (riz) to classify galaxies into broad redshift bins, so launched a mini-challenge to find methods. The answer seems to be ""yes"", at least if the training data is good enough.<LINK>']",21,08,256
3853,180,1285220814665723904,1004365363574902784,Kevin J. Kelly,"Digging into the weeds of our new paper <LINK> a little bit. Here's the key reason that NOvA and T2K have begun to (in combination) prefer the inverted mass ordering over the normal mass ordering. In the middle panel here, we have fixed our analysis to be *only* in the Normal mass ordering. As you can see, the blue (NOvA) and red (T2K) regions are pretty much perfectly complementary. @Tokai2Kamioka wants to have delta_CP = -\pi/2 and sin^2 q_{23} of about 0.55. <LINK> @novaexperiment wants nothing to do with that combination of parameters -- it prefers a value of delta_CP closer to 0 and smaller sin^2 q_{23}. However, if you take a look at the bottom panel (where we fix to be in the inverted ordering), both experiments are relatively happy at delta_CP = -\pi/2! Despite each experiment on its own preferring the Normal Ordering (at low significance), their combination actually ends up preferring the Inverted Ordering (again, low significance).",https://arxiv.org/abs/2007.08526,"We inspect recently updated neutrino oscillation data -- specifically coming from the Tokai to Kamioka and NuMI Off-axis $\nu_e$ Appearance experiments -- and how they are analyzed to determine whether the neutrino mass ordering is normal ($m_1 < m_2 < m_3$) or inverted ($m_3 < m_1 < m_2$). We show that, despite previous results giving a strong preference for the normal ordering, with the newest data from T2K and NOvA, this preference has all but vanished. Additionally, we highlight the importance of this result for non-oscillation probes of neutrinos, including neutrinoless double beta decay and cosmology. Future experiments, including JUNO, DUNE, and T2HK will provide valuable information and determine the mass ordering at a high confidence level. ","Back to (Mass-)Square(d) One: The Neutrino Mass Ordering in Light of Recent Data",5,"[""Digging into the weeds of our new paper <LINK> a little bit. Here's the key reason that NOvA and T2K have begun to (in combination) prefer the inverted mass ordering over the normal mass ordering."", 'In the middle panel here, we have fixed our analysis to be *only* in the Normal mass ordering. As you can see, the blue (NOvA) and red (T2K) regions are pretty much perfectly complementary. @Tokai2Kamioka wants to have delta_CP = -\\pi/2 and sin^2 q_{23} of about 0.55. https://t.co/fXLi8DIP9a', '@novaexperiment wants nothing to do with that combination of parameters -- it prefers a value of delta_CP closer to 0 and smaller sin^2 q_{23}.', 'However, if you take a look at the bottom panel (where we fix to be in the inverted ordering), both experiments are relatively happy at delta_CP = -\\pi/2!', 'Despite each experiment on its own preferring the Normal Ordering (at low significance), their combination actually ends up preferring the Inverted Ordering (again, low significance).']",20,07,955
3854,42,894951186939338752,38637367,Debidatta Dwibedi,"Want to detect objects but don't have bounding box annotations,check out our new paper:<LINK> #DeepLearning #computervision <LINK> Cut out objects, paste them in real scenes with different modes of blending and train an object detector on these synthetic images. <LINK> Important takeaways: 1. Synthesized scenes need not look 'realistic' 2. Randomization is essential for effective data augmentation <LINK> 3. Adding synthetic data helps: i) Adds complementary information to existing real images ii) Model needs fewer annotated real images",https://arxiv.org/abs/1708.01642,"A major impediment in rapidly deploying object detection models for instance detection is the lack of large annotated datasets. For example, finding a large labeled dataset containing instances in a particular kitchen is unlikely. Each new environment with new instances requires expensive data collection and annotation. In this paper, we propose a simple approach to generate large annotated instance datasets with minimal effort. Our key insight is that ensuring only patch-level realism provides enough training signal for current object detector models. We automatically `cut' object instances and `paste' them on random backgrounds. A naive way to do this results in pixel artifacts which result in poor performance for trained models. We show how to make detectors ignore these artifacts during training and generate data that gives competitive performance on real data. Our method outperforms existing synthesis approaches and when combined with real images improves relative performance by more than 21% on benchmark datasets. In a cross-domain setting, our synthetic data combined with just 10% real data outperforms models trained on all real data. ","Cut, Paste and Learn: Surprisingly Easy Synthesis for Instance Detection",4,"[""Want to detect objects but don't have bounding box annotations,check out our new paper:<LINK> #DeepLearning #computervision <LINK>"", 'Cut out objects, paste them in real scenes with different modes of blending and train an object detector on these synthetic images. https://t.co/mjHB0NjbsN', ""Important takeaways: 1. Synthesized scenes need not look 'realistic' 2. Randomization is essential for effective data augmentation https://t.co/hH9PBJZqca"", '3. Adding synthetic data helps: i) Adds complementary information to existing real images ii) Model needs fewer annotated real images']",17,08,541
3855,86,1252649152158355462,2818695390,Sasho Nikolov,"New paper with Vivek Madan, @mohitsinghr, and Tao Tantipongpipat, the result of an awesome visit to Atlanta last August (which feels approximately a century ago). ""Maximizing Determinants under Matroid Constraints"" <LINK> The problem: given n rank-1 d-by-d matrices, and a matroid of rank k over them, find a basis B of the matroid so that the sum of the matrices in B has the largest determinant. This shows up in many settings: optimal design, network design, allocation of goods, ML. We give the first algorithms with approximation factor that only depends on the dimension d, and not on the rank k. The main idea is to show that a convex relaxation of the problem has an optimal solution with only d^2 fractional variables. Surprising given the non-linearity. We also leverage known and cool connections with real stable & completely log-concave polynomials. We show that a relaxation studied by myself and Mohit, and also by Straszak and @NisheethVishnoi, is at least as strong as one by @nimaanari and @oveisgharan.",https://arxiv.org/abs/2004.07886,"Given vectors $v_1,\dots,v_n\in\mathbb{R}^d$ and a matroid $M=([n],I)$, we study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S}v_i v_i^\top)$ is maximized. This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning. The current best results include an $e^{2k}$-estimation for any matroid of rank $k$ and a $(1+\epsilon)^d$-approximation for a uniform matroid of rank $k\ge d+\frac d\epsilon$, where the rank $k\ge d$ denotes the desired size of the optimal set. Our main result is a new approximation algorithm with an approximation guarantee that depends only on the dimension $d$ of the vectors and not on the size $k$ of the output set. In particular, we show an $(O(d))^{d}$-estimation and an $(O(d))^{d^3}$-approximation for any matroid, giving a significant improvement over prior work when $k\gg d$. Our result relies on the existence of an optimal solution to a convex programming relaxation for the problem which has sparse support; in particular, no more than $O(d^2)$ variables of the solution have fractional values. The sparsity results rely on the interplay between the first-order optimality conditions for the convex program and matroid theory. We believe that the techniques introduced to show sparsity of optimal solutions to convex programs will be of independent interest. We also give a randomized algorithm that rounds a sparse fractional solution to a feasible integral solution to the original problem. To show the approximation guarantee, we utilize recent works on strongly log-concave polynomials and show new relationships between different convex programs studied for the problem. Finally, we use the estimation algorithm and sparsity results to give an efficient deterministic approximation algorithm with an approximation guarantee that depends solely on the dimension $d$. ",Maximizing Determinants under Matroid Constraints,4,"['New paper with Vivek Madan, @mohitsinghr, and Tao Tantipongpipat, the result of an awesome visit to Atlanta last August (which feels approximately a century ago). ""Maximizing Determinants under Matroid Constraints"" <LINK>', 'The problem: given n rank-1 d-by-d matrices, and a matroid of rank k over them, find a basis B of the matroid so that the sum of the matrices in B has the largest determinant. This shows up in many settings: optimal design, network design, allocation of goods, ML.', 'We give the first algorithms with approximation factor that only depends on the dimension d, and not on the rank k. The main idea is to show that a convex relaxation of the problem has an optimal solution with only d^2 fractional variables. Surprising given the non-linearity.', 'We also leverage known and cool connections with real stable &amp; completely log-concave polynomials. We show that a relaxation studied by myself and Mohit, and also by Straszak and @NisheethVishnoi, is at least as strong as one by @nimaanari and @oveisgharan.']",20,04,1021
3856,103,1415483853192404998,279821183,Ben Montet,"Happy to announce a new paper out of our group led by Elsa Palumbo, a @Caltech ('23) undergrad working with us through the @CaltechSURF program! ""Evidence for Centrifugal Breakout around the Young M Dwarf TIC 234284556,"" <LINK>; submitted to AAS journals As a part of an investigation into variability of young stars with @NASA_TESS, Elsa found a rapidly rotating star with strange transit-looking features. They had the same period as the star's rotation, but also changed in depth and duration! <LINK> With additional data coming in from TESS during Sector 3, we took bets on what a new month of data would bring. What we weren't expecting was over ~24 hours the dips would completely disappear, but that's what happened! <LINK> A dip into the literature led Elsa to theories of centrifugal breakout, in which a magnetically-trapped cloud of nearly co-rotating material drags the magnetic field lines until the drag becomes too strong, then the lines snap and reconnect, quickly dispersing the material. The theory is &gt;15 years old, but mostly applied in the context of high-mass stars, with no direct evidence of this signal to date. We think this is the best evidence for the centrifugal breakout model ever found! Additional spectroscopy with the Veloce spectrograph at the AAT observatory in Australia shows the hydrogen features are fairly constant (no real shape changes, flux variations consistent with starspots) suggesting that it is clouds rather than plasma causing the dips. <LINK> It's a fun paper! Read it at <LINK> and read Elsa's summary of what she found at <LINK>. Elsa will be applying for PhD programs in ~18 months and would make a great addition to any astronomy program! ☺️",https://arxiv.org/abs/2107.05649,"Magnetospheric clouds have been proposed as explanations for depth-varying dips in the phased light curves of young, magnetically active stars such as $\sigma$ Ori E and RIK-210. However, the stellar theory that first predicted magnetospheric clouds also anticipated an associated mass-loss mechanism known as centrifugal breakout for which there has been limited empirical evidence. In this paper, we present data from TESS, LCO, ASAS-SN, and Veloce on the 45 Myr M3.5 star TIC 234284556, and propose that it is a candidate for the direct detection of centrifugal breakout. In assessing this hypothesis, we examine the sudden ($\sim$1-day timescale) disappearance of a previously stable ($\sim$1-month timescale) transit-like event. We also interpret the presence of an anomalous brightening event that precedes the disappearance of the signal, analyze rotational amplitudes and optical flaring as a proxy for magnetic activity, and estimate the mass of gas and dust present immediately prior to the potential breakout event. After demonstrating that our spectral and photometric data support a magnetospheric clouds and centrifugal breakout model and disfavor alternate scenarios, we discuss the possibility of a coronal mass ejection (CME) or stellar wind origin of the corotating material and we introduce a reionization mechanism as a potential explanation for more gradual variations in eclipse parameters. Finally, after comparing TIC 234284556 with previously identified ""flux-dip"" stars, we argue that TIC 234284556 may be an archetypal representative of a whole class of young, magnetically active stars. ",Evidence for Centrifugal Breakout around the Young M Dwarf TIC 234284556,7,"['Happy to announce a new paper out of our group led by Elsa Palumbo, a @Caltech (\'23) undergrad working with us through the @CaltechSURF program! ""Evidence for Centrifugal Breakout around the Young M Dwarf TIC 234284556,"" <LINK>; submitted to AAS journals', ""As a part of an investigation into variability of young stars with @NASA_TESS, Elsa found a rapidly rotating star with strange transit-looking features. They had the same period as the star's rotation, but also changed in depth and duration! https://t.co/79PdpPRaNy"", ""With additional data coming in from TESS during Sector 3, we took bets on what a new month of data would bring. What we weren't expecting was over ~24 hours the dips would completely disappear, but that's what happened! https://t.co/vlfWERZo0m"", 'A dip into the literature led Elsa to theories of centrifugal breakout, in which a magnetically-trapped cloud of nearly co-rotating material drags the magnetic field lines until the drag becomes too strong, then the lines snap and reconnect, quickly dispersing the material.', 'The theory is &gt;15 years old, but mostly applied in the context of high-mass stars, with no direct evidence of this signal to date. We think this is the best evidence for the centrifugal breakout model ever found!', 'Additional spectroscopy with the Veloce spectrograph at the AAT observatory in Australia shows the hydrogen features are fairly constant (no real shape changes, flux variations consistent with starspots) suggesting that it is clouds rather than plasma causing the dips. https://t.co/HrGHDGwkbD', ""It's a fun paper! Read it at https://t.co/1I5s3PV2LJ and read Elsa's summary of what she found at https://t.co/OVzAz5c3Zf. \n\nElsa will be applying for PhD programs in ~18 months and would make a great addition to any astronomy program! ☺️""]",21,07,1702
3857,54,1384459287452323842,29955721,Lamberto Ballan,"Check out our new paper on arXiv: Conditional Variational Capsule Network for Open Set Recognition! We also released code, data splits, etc. and addressed reproducibility (that on this task is really a serious issue) Paper: <LINK> Project: <LINK> <LINK>",https://arxiv.org/abs/2104.09159,"In open set recognition, a classifier has to detect unknown classes that are not known at training time. In order to recognize new categories, the classifier has to project the input samples of known classes in very compact and separated regions of the features space for discriminating samples of unknown classes. Recently proposed Capsule Networks have shown to outperform alternatives in many fields, particularly in image recognition, however they have not been fully applied yet to open-set recognition. In capsule networks, scalar neurons are replaced by capsule vectors or matrices, whose entries represent different properties of objects. In our proposal, during training, capsules features of the same known class are encouraged to match a pre-defined gaussian, one for each class. To this end, we use the variational autoencoder framework, with a set of gaussian priors as the approximation for the posterior distribution. In this way, we are able to control the compactness of the features of the same class around the center of the gaussians, thus controlling the ability of the classifier in detecting samples from unknown classes. We conducted several experiments and ablation of our model, obtaining state of the art results on different datasets in the open set recognition and unknown detection tasks. ",Conditional Variational Capsule Network for Open Set Recognition,1,"['Check out our new paper on arXiv: Conditional Variational Capsule Network for Open Set Recognition!\nWe also released code, data splits, etc. and addressed reproducibility (that on this task is really a serious issue)\nPaper: <LINK>\nProject: <LINK> <LINK>']",21,04,253
3858,67,930855250331803648,503452360,William Wang,"Are you still using random sampling for negative examples in your embedding models? Our new paper ""KBGAN: Adversarial Learning for Knowledge Graph Embeddings"" introduces a novel method for generating high-quality negative examples. <LINK> #NLProc #DeepLearning <LINK>",https://arxiv.org/abs/1711.04071,"We introduce KBGAN, an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically only contain positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of the two probability-based models, DistMult and ComplEx. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings. ",KBGAN: Adversarial Learning for Knowledge Graph Embeddings,1,"['Are you still using random sampling for negative examples in your embedding models? Our new paper ""KBGAN: Adversarial Learning for Knowledge Graph Embeddings"" introduces a novel method for generating high-quality negative examples. <LINK> #NLProc #DeepLearning <LINK>']",17,11,267
3859,190,1506942458248183812,1360175358,Michele Starnini,"Last on @arxiv! The interplay between mobility & spreading dynamics is a topic interesting for both epidemiology and active matter communities. We study the impact of motility (run-and-tumble self-propelled particles) on epidemic spreading (SIS process). <LINK> <LINK> In the long-time diffusive regime, the transition becomes of the mean-field type in 1D, 2D, and 3D, while dimensionality matters in the static case. Insights obtained from an analytical, continuum description are validated by numerical simulations of an agent-based model. <LINK>",https://arxiv.org/abs/2203.12355,"Most spreading processes require spatial proximity between agents. The stationary state of spreading dynamics in a population of mobile agents thus depends on the interplay between the time and length scales involved in the epidemic process and their motion in space. We analyze the steady properties resulting from such interplay in a simple model describing epidemic spreading (modeled as a Susceptible-Infected-Susceptible process) on self-propelled particles (performing Run-and-Tumble motion). Focusing our attention on the diffusive long-time regime, we find that the agents' motion changes qualitatively the nature of the epidemic transition characterized by the emergence of a macroscopic fraction of infected agents. Indeed, the transition becomes of the mean-field type for agents diffusing in one, two and three dimensions, while, in the absence of motion, the epidemic outbreak depends on the dimension of the underlying static network determined by the agents' fixed locations. The insights obtained from a continuum description of the system are validated by numerical simulations of an agent-based model. Our work aims at bridging soft active matter physics and theoretical epidemiology, and may be of interest for researchers in both communities. ","Epidemic processes on self-propelled particles: continuum and agent-based modelling",2,"['Last on @arxiv! The interplay between mobility &amp; spreading dynamics is a topic interesting for both epidemiology and active matter communities.\n\nWe study the impact of motility (run-and-tumble self-propelled particles) on epidemic spreading (SIS process). <LINK> <LINK>', 'In the long-time diffusive regime, the transition becomes of the mean-field type in 1D, 2D, and 3D, while dimensionality matters in the static case. Insights obtained from an analytical, continuum description are validated by numerical simulations of an agent-based model. https://t.co/YR0Vk0Aymm']",22,03,548
3860,253,1403341402218524673,187221383,kourosh hakhamaneshi,"How can we mitigate cold starts in BayesOpt using large offline datasets? In JUMBO, we propose a new combination of neural nets and GPs for large-scale multi-task BayesOpt. 📖 <LINK> 💻 <LINK> w/ @adityagrover_, @pabbeel, Vladimir Stojanović 1/8 <LINK> JUMBO is a no-regret algorithm that employs a careful hybrid of neural networks and Gaussian Processes for scalable and sample-efficient Multi-task BayesOpt. It scales cubically w.r.t. number of target queries but linearly w.r.t. the offline dataset size. 2/8 First, we pre-train a NN using the offline data to learn a latent feature h(x). Warm-GP and cold-GP are then trained using target queries. For picking the next candidate query point, we consider upper confidence bounds of both warm and cold GPs. 3/8 <LINK> JUMBO's acquisition function is a convex combination of the UCB of individual GPs. Warm GP term prunes the search space. Cold GP term then selects the next candidate point to query. We extend the analysis in GP-UCB [Srinivas et al.] to prove that JUMBO is no-regret. 4/8 Empirically, we outperform prior SoTA MBO methods on benchmark hyper-parameter optimization problems. 5/8 <LINK> We also applied JUMBO for optimizing the layout designs of circuits. We hope JUMBO is used by practitioners in many other science and engineering disciplines with offline domain data. 6/8 <LINK> JUMBO builds on related advances in large-scale Multi-task BayesOpt such as Deep Kernel Learning (<LINK>) and Adaptive Bayesian Linear Regression (<LINK>). 7/8 I really enjoyed working with amazing colleagues and advisors and I am looking forward to further collaborations with them: @adityagrover_, @pabbeel, and Vladimir Stojanović. 8/8",https://arxiv.org/abs/2106.00942,"The goal of Multi-task Bayesian Optimization (MBO) is to minimize the number of queries required to accurately optimize a target black-box function, given access to offline evaluations of other auxiliary functions. When offline datasets are large, the scalability of prior approaches comes at the expense of expressivity and inference quality. We propose JUMBO, an MBO algorithm that sidesteps these limitations by querying additional data based on a combination of acquisition signals derived from training two Gaussian Processes (GP): a cold-GP operating directly in the input domain and a warm-GP that operates in the feature space of a deep neural network pretrained using the offline data. Such a decomposition can dynamically control the reliability of information derived from the online and offline data and the use of pretrained neural networks permits scalability to large offline datasets. Theoretically, we derive regret bounds for JUMBO and show that it achieves no-regret under conditions analogous to GP-UCB (Srinivas et. al. 2010). Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems: hyper-parameter optimization and automated circuit design. ",JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data,8,"['How can we mitigate cold starts in BayesOpt using large offline datasets?\nIn JUMBO, we propose a new combination of neural nets and GPs for large-scale multi-task BayesOpt.\n📖 <LINK>\n💻 <LINK>\nw/ @adityagrover_, @pabbeel, Vladimir Stojanović\n\n1/8 <LINK>', 'JUMBO is a no-regret algorithm that employs a careful hybrid of neural networks and Gaussian Processes for scalable and sample-efficient Multi-task BayesOpt. It scales cubically w.r.t. number of target queries but linearly w.r.t. the offline dataset size.\n\n2/8', 'First, we pre-train a NN using the offline data to learn a latent feature h(x). Warm-GP and cold-GP are then trained using target queries. For picking the next candidate query point, we consider upper confidence bounds of both warm and cold GPs.\n\n3/8 https://t.co/P1I48WGwb3', ""JUMBO's acquisition function is a convex combination of the UCB of individual GPs. Warm GP term prunes the search space. Cold GP term then selects the next candidate point to query. We extend the analysis in GP-UCB [Srinivas et al.] to prove that JUMBO is no-regret.\n\n4/8"", 'Empirically, we outperform prior SoTA MBO methods on benchmark hyper-parameter optimization problems.\n\n5/8 https://t.co/C5jLwyCRHd', 'We also applied JUMBO for optimizing the layout designs of circuits. We hope JUMBO is used by practitioners in many other science and engineering disciplines with offline domain data.\n\n6/8 https://t.co/Ls69MXQXqO', 'JUMBO builds on related advances in large-scale Multi-task BayesOpt such as Deep Kernel Learning (https://t.co/kQeWLWJ92u) and Adaptive Bayesian Linear Regression (https://t.co/O0tPzYxLrT).\n\n7/8', 'I really enjoyed working with amazing colleagues and advisors and I am looking forward to further collaborations with them: @adityagrover_, @pabbeel, and Vladimir Stojanović.\n\n8/8']",21,06,1685
3861,146,1171600474224713729,70874545,Josh Lothringer,"So we found water vapor in the atmosphere of a 9 Earth-mass exoplanet in the habitable zone! After observing 8 transits of K2-18b with HST/WFC3, we (led by Björn Benneke) find a significant water feature: <LINK> <LINK> K2-18b receives only 5% more radiation than the Earth, leaving it with an equilibrium temperature of 265 K. This is the coolest exoplanet that we've detected water in. While not a true Earth-analogue due to its size, this bodes well for our exploration of small planets. <LINK> One of the most intriguing parts to me is how clear the atmosphere is! K2-18b appears cool enough that cloud species like KCl and ZnS don't form, while also being far enough away from its moderately active M-dwarf host star to not have significant haze production. That's not to say there aren't *any* clouds: at pressures around 100 mbar, we find some water clouds: the first water clouds in an exoplanet! (Turns out we've found some water clouds in a chilly brown dwarf though: <LINK>) I should emphasize that it was mostly Björn doing the heavy lifting on this work. I just happen to be the highest co-author with a twitter account and wanted to share this exciting result!🔭🌌🛸 Keep your eyes peeled later today for results using some of the same data by a different team...!",https://arxiv.org/abs/1909.04642,"Results from the Kepler mission indicate that the occurrence rate of small planets ($<3$ $R_\oplus$) in the habitable zone of nearby low-mass stars may be as high as 80%. Despite this abundance, probing the conditions and atmospheric properties on any habitable-zone planet is extremely difficult and has remained elusive to date. Here, we report the detection of water vapor and the likely presence of liquid and icy water clouds in the atmosphere of the $2.6$ $R_\oplus$ habitable-zone planet K2-18b. The simultaneous detection of water vapor and clouds in the mid-atmosphere of K2-18b is particularly intriguing because K2-18b receives virtually the same amount of total insolation from its host star ($1368_{-107}^{+114}$ W m$^{-2}$) as the Earth receives from the Sun (1361 W m$^{-2}$), resulting in the right conditions for water vapor to condense and explain the detected clouds. In this study, we observed nine transits of K2-18b using HST/WFC3 in order to achieve the necessary sensitivity to detect the water vapor, and we supplement this data set with Spitzer and K2 observations to obtain a broader wavelength coverage. While the thick hydrogen-dominated envelope we detect on K2-18b means that the planet is not a true Earth analog, our observations demonstrate that low-mass habitable-zone planets with the right conditions for liquid water are accessible with state-of-the-art telescopes. ","Water Vapor and Clouds on the Habitable-Zone Sub-Neptune Exoplanet K2-18b",6,"['So we found water vapor in the atmosphere of a 9 Earth-mass exoplanet in the habitable zone! After observing 8 transits of K2-18b with HST/WFC3, we (led by Björn Benneke) find a significant water feature:\n\n<LINK> <LINK>', ""K2-18b receives only 5% more radiation than the Earth, leaving it with an equilibrium temperature of 265 K. This is the coolest exoplanet that we've detected water in. While not a true Earth-analogue due to its size, this bodes well for our exploration of small planets. https://t.co/SrAXA5yPgz"", ""One of the most intriguing parts to me is how clear the atmosphere is! K2-18b appears cool enough that cloud species like KCl and ZnS don't form, while also being far enough away from its moderately active M-dwarf host star to not have significant haze production."", ""That's not to say there aren't *any* clouds: at pressures around 100 mbar, we find some water clouds: the first water clouds in an exoplanet! (Turns out we've found some water clouds in a chilly brown dwarf though: https://t.co/G7zPTbizWC)"", 'I should emphasize that it was mostly Björn doing the heavy lifting on this work. I just happen to be the highest co-author with a twitter account and wanted to share this exciting result!🔭🌌🛸', 'Keep your eyes peeled later today for results using some of the same data by a different team...!']",19,09,1274
3862,35,976111421485436929,64444175,Jenn Gustetic 🚀💪🏻,".@MilindTambe_AI, you might want to check out @NASA's new paper on using #AI, #emergingtech and #citzenscience to address grand challenges. This paper focuses on #asteroid science but would have many learnings for your NAE & Social work GC: <LINK> #transformers",http://arxiv.org/abs/1803.04564,"Beginning in 2012, NASA utilized a strategic process to identify broad societal questions, or grand challenges, that are well suited to the aerospace sector and align with national priorities. This effort generated NASA's first grand challenge, the Asteroid Grand Challenge, a large scale effort using multidisciplinary collaborations and innovative engagement mechanisms focused on finding and addressing asteroid threats to human populations. In April 2010, President Barack Obama announced a mission to send humans to an asteroid by 2025. This resulted in the agency's Asteroid Redirect Mission to leverage and maximize existing robotic and human efforts to capture and reroute an asteroid, with the goal of eventual human exploration. The AGC, initiated in 2013, complemented ARM by expanding public participation, partnerships, and other approaches to find, understand, and overcome these potentially harmful asteroids. This paper describes a selection of AGC activities implemented from 2013 to 2017 and their results, excluding those conducted by NASA's Near Earth Object Observations Program and other organizations. The strategic development of the initiative is outlined as well as initial successes, strengths, and weaknesses resulting from the first four years of AGC activities and approaches. Finally, we describe lesson learned and areas for continued work and study. The AGC lessons learned and strategies could inform the work of other agencies and organizations seeking to conduct a global scientific investigation with matrixed organizational support, multiple strategic partners, and numerous internal and external open innovation approaches and audiences. ","NASA's Asteroid Grand Challenge: Strategy, Results and Lessons Learned",1,"["".@MilindTambe_AI, you might want to check out @NASA's new paper on using #AI, #emergingtech and #citzenscience to address grand challenges. This paper focuses on #asteroid science but would have many learnings for your NAE &amp; Social work GC: <LINK> #transformers""]",18,03,261
3863,144,1456180048222998528,569063423,Heino Falcke,"New official paper by @ehtelescope led by @azstewobs group. Low variability we find on large interferometry baseline triangles is well explained if the ring we see in M87* ist indeed gravitational and not due to plasma effects. Technical but promising. <LINK> <LINK> Followed by investigation of how general we can relate size of the black hole shadow to the bright ring we see, even if the theory of gravity deviates from general relativity. Turns out the shadow is a robust measure of spacetime properties of black holes <LINK> <LINK> Maybe useful to say that we introduced the concept of a black hole shadow in a paper in 2000 with @AgolEric and Melia and you can find a more intuitive description in this recent paper with @thomasbronzwaer <LINK> <LINK>",https://arxiv.org/abs/2111.01317,"The black-hole images obtained with the Event Horizon Telescope (EHT) are expected to be variable at the dynamical timescale near their horizons. For the black hole at the center of the M87 galaxy, this timescale (5-61 days) is comparable to the 6-day extent of the 2017 EHT observations. Closure phases along baseline triangles are robust interferometric observables that are sensitive to the expected structural changes of the images but are free of station-based atmospheric and instrumental errors. We explored the day-to-day variability in closure phase measurements on all six linearly independent non-trivial baseline triangles that can be formed from the 2017 observations. We showed that three triangles exhibit very low day-to-day variability, with a dispersion of $\sim3-5^\circ$. The only triangles that exhibit substantially higher variability ($\sim90-180^\circ$) are the ones with baselines that cross visibility amplitude minima on the $u-v$ plane, as expected from theoretical modeling. We used two sets of General Relativistic magnetohydrodynamic simulations to explore the dependence of the predicted variability on various black-hole and accretion-flow parameters. We found that changing the magnetic field configuration, electron temperature model, or black-hole spin has a marginal effect on the model consistency with the observed level of variability. On the other hand, the most discriminating image characteristic of models is the fractional width of the bright ring of emission. Models that best reproduce the observed small level of variability are characterized by thin ring-like images with structures dominated by gravitational lensing effects and thus least affected by turbulence in the accreting plasmas. ","The Variability of the Black-Hole Image in M87 at the Dynamical Time
Scale",3,"['New official paper by @ehtelescope led by @azstewobs group. Low variability we find on large interferometry baseline triangles is well explained if the ring we see in M87* ist indeed gravitational and not due to plasma effects. Technical but promising. <LINK> <LINK>', 'Followed by investigation of how general we can relate size of the black hole shadow to the bright ring we see, even if the theory of gravity deviates from general relativity. Turns out the shadow is a robust measure of spacetime properties of black holes https://t.co/ItzuROxg7T https://t.co/hlAnqrKYEa', 'Maybe useful to say that we introduced the concept of a black hole shadow in a paper in 2000 with @AgolEric and Melia and you can find a more intuitive description in this recent paper with @thomasbronzwaer https://t.co/8cQgkm1uKs https://t.co/niJSXhWWLt']",21,11,757
3864,116,1502218034160807940,1134375290581524480,Kai Schmitz,"New paper on the @arxiv: <LINK>, a #Snowmass White Paper on baryogenesis, in which I review our work on wash-in leptogenesis 2011.09347 and leptoflavorgenesis 2111.03082 (see Sec. 2.3). Also, this is my 20th preprint since joining @CERN in September 2019. Yay! 🥳",https://arxiv.org/abs/2203.05010,"The Standard Model of Particle Physics cannot explain the observed baryon asymmetry of the Universe. This observation is a clear sign of new physics beyond the Standard Model. There have been many recent theoretical developments to address this question. Critically, many new physics models that generate the baryon asymmetry have a wide range of repercussions for many areas of theoretical and experimental particle physics. This white paper provides an overview of such recent theoretical developments with an emphasis on experimental testability. ",New Ideas in Baryogenesis: A Snowmass White Paper,1,"['New paper on the @arxiv: <LINK>, a #Snowmass White Paper on baryogenesis, in which I review our work on wash-in leptogenesis 2011.09347 and leptoflavorgenesis 2111.03082 (see Sec. 2.3). Also, this is my 20th preprint since joining @CERN in September 2019. Yay! 🥳']",22,03,262
3865,97,1319234172439810055,2467076389,Kunihiko Tanaka (x2),"NEW PAPER! <LINK> The central region of the Galaxy is known for its modest star formation even though it hosts large amounts of dense gas (although that point has recently become a bit questionable). Fortunately, we had a well-organized dataset of dense clumps with and without star formation, so we ran a statistical analysis to look for parameters that correlate with the presence or absence of star formation. I expected to find that nothing correlates with any parameter, but in fact only the virial ratio (a parameter describing the strength of gravity) showed a very strong correlation. The result is also significant when compared against randomly generated control samples. If the same analysis is applied to low-density regions not directly tied to star formation, the dependence on the virial ratio disappears and the presence or absence of star formation becomes close to random. In that case the global star formation rate would appear to be set only by the scale (size and mass) of the region. The result itself is not particularly surprising, but it was interesting that a clean conclusion came out of statistical analysis alone, so we wrote it up as a paper. @tatary Yes. I think it is meaningful in its own right that no dependence on any other quantity appeared. Magnetic fields and the like are not included, though. @tatary The collision rate and the number of star-forming clumps agree reasonably well. But recent observations find many regions that look like they are colliding yet show no star formation, so perhaps not every collision leads to star formation. Once that efficiency is included, my guess is that the quoted collision rates would not be enough",https://arxiv.org/abs/2010.10552,"We report a statistical analysis exploring the origin of the overall low star formation efficiency (SFE) of the Galactic central molecular zone (CMZ) and the SFE diversity among the CMZ clouds using a wide-field HCN $J$=4-3 map, whose optically thin critical density ($\sim10^7\,\mathrm{cm}^{-3}$) is the highest among the tracers ever used in CMZ surveys. Logistic regression is performed to empirically formulate star formation probability of 195 HCN clumps, 13 of which contain star formation signatures. The explanatory parameters in the best-fit model are reduced into the virial parameter $\alpha_{\mathrm{vir}}$ without significant contribution from other parameters, whereas the performance of the model without $\alpha_{\mathrm{vir}}$ is no better than that using randomly generated data. The threshold $\alpha_{\mathrm{vir}}$ is 6, which translates into a volume density ($n_{\mathrm{H_2}}$) of $10^{4.6}\,\mathrm{cm}^{-3}$ with the $n_{\mathrm{H_2}}$-$\alpha_{\mathrm{vir}}$ correlation. The scarcity of the low-$\alpha_{\mathrm{vir}}$ clumps, whose fraction to all HCN clumps is 0.1, can be considered as one of the immediate causes of the suppressed SFE. No correlation between the clump size or mass and star formation probability is found, implying that HCN $J$=4-3 does not immediately trace the mass of star-forming gas above a threshold density. Meanwhile, star-forming and non-star-forming clouds are degenerate in the physical parameters of the CS $\mathit{J}$=1-0 clouds, highlighting the efficacy of the HCN $\mathit{J}$=4-3 line to probe star-forming regions in the CMZ. The time scale of the high-$\alpha_{\mathrm{vir}}$ to low-$\alpha_{\mathrm{vir}}$ transition is $\lesssim2$ Myr, which is consistent with the tidal compression and X1/X2 orbit transition models but possibly does not fit the cloud-cloud collision picture. ","HCN $J$=4-3, HNC $J$=1-0, $\mathrm{H^{13}CN}$ $J$=1-0, and
$\mathrm{HC_3N}$ $J$=10-9 Maps of Galactic Center Region II.: Physical
Properties of Dense Gas Clumps and Probability of Star Formation",7,"['NEW PAPER! \n<LINK>', 'The central region of the Galaxy is known for its modest star formation even though it hosts large amounts of dense gas (although that point has recently become a bit questionable). Fortunately, we had a well-organized dataset of dense clumps with and without star formation, so we ran a statistical analysis to look for parameters that correlate with the presence or absence of star formation.', 'I expected to find that nothing correlates with any parameter, but in fact only the virial ratio (a parameter describing the strength of gravity) showed a very strong correlation. The result is also significant when compared against randomly generated control samples.', 'If the same analysis is applied to low-density regions not directly tied to star formation, the dependence on the virial ratio disappears and the presence or absence of star formation becomes close to random. In that case the global star formation rate would appear to be set only by the scale (size and mass) of the region.', 'The result itself is not particularly surprising, but it was interesting that a clean conclusion came out of statistical analysis alone, so we wrote it up as a paper.', '@tatary Yes. I think it is meaningful in its own right that no dependence on any other quantity appeared. Magnetic fields and the like are not included, though.', '@tatary The collision rate and the number of star-forming clumps agree reasonably well. But recent observations find many regions that look like they are colliding yet show no star formation, so perhaps not every collision leads to star formation. Once that efficiency is included, my guess is that the quoted collision rates would not be enough']",20,10,674
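The abstract in the record above fits a logistic regression for the star-formation probability of dense clumps and finds that the virial parameter is the only explanatory variable that matters. The sketch below shows that style of fit with scikit-learn on purely synthetic clump properties; the feature names, distributions, and the injected virial-parameter dependence are made up for illustration and are not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_clumps = 195  # same order as the sample in the abstract; the data itself is synthetic

# Synthetic clump properties: log virial parameter, log mass, log radius.
log_alpha_vir = rng.normal(0.8, 0.5, n_clumps)
log_mass = rng.normal(3.0, 0.6, n_clumps)
log_radius = rng.normal(0.0, 0.3, n_clumps)

# Star formation is made (artificially) more likely at low virial parameter.
p_sf = 1.0 / (1.0 + np.exp(4.0 * (log_alpha_vir - 0.6)))
has_sf = rng.random(n_clumps) < p_sf

X = np.column_stack([log_alpha_vir, log_mass, log_radius])
model = LogisticRegression().fit(X, has_sf)

# A strongly negative coefficient on log(alpha_vir) would mirror the reported
# anti-correlation between virial parameter and star-formation probability.
print(dict(zip(["log_alpha_vir", "log_mass", "log_radius"], model.coef_[0])))
```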
3866,38,1109115986802700288,253418172,Kenneth Hung,"A new paper from @wfithian and me on replicability from a statistical perspective! We provide new metrics, new methods to estimate these metrics and applied them to the Reproducibility Project: Psychology data! <LINK> The RP:P metrics are clearly affected by selection bias, but does selection bias alone explain the observed phenomenon? Our metrics and methods for assessing replicability allow us to see through the effects of selection bias. <LINK>",https://arxiv.org/abs/1903.08747,"Large-scale replication studies like the Reproducibility Project: Psychology (RP:P) provide invaluable systematic data on scientific replicability, but most analyses and interpretations of the data fail to agree on the definition of ""replicability"" and disentangle the inexorable consequences of known selection bias from competing explanations. We discuss three concrete definitions of replicability based on (1) whether published findings about the signs of effects are mostly correct, (2) how effective replication studies are in reproducing whatever true effect size was present in the original experiment, and (3) whether true effect sizes tend to diminish in replication. We apply techniques from multiple testing and post-selection inference to develop new methods that answer these questions while explicitly accounting for selection bias. Our analyses suggest that the RP:P dataset is largely consistent with publication bias due to selection of significant effects. The methods in this paper make no distributional assumptions about the true effect sizes. ",Statistical Methods for Replicability Assessment,2,"['A new paper from @wfithian and me on replicability from a statistical perspective! We provide new metrics, new methods to estimate these metrics and applied them to the Reproducibility Project: Psychology data! <LINK>', 'The RP:P metrics are clearly affected by selection bias, but does selection bias alone explain the observed phenomenon? Our metrics and methods for assessing replicability allow us to see through the effects of selection bias.\n\nhttps://t.co/Ee7cQwkGVX']",19,03,451
3867,326,1313582132300845057,1255599597763862529,Luyu Gao,"Excited to share our paper ""Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation"" We propose a character n-gram embedding for translation into low resource language. With @cindyxinyiwang @gneubig, in findings of EMNLP. <LINK>",http://arxiv.org/abs/2010.01667,"To improve the performance of Neural Machine Translation~(NMT) for low-resource languages~(LRL), one effective strategy is to leverage parallel data from a related high-resource language~(HRL). However, multilingual data has been found more beneficial for NMT models that translate from the LRL to a target language than the ones that translate into the LRLs. In this paper, we aim to improve the effectiveness of multilingual transfer for NMT models that translate \emph{into} the LRL, by designing a better decoder word embedding. Extending upon a general-purpose multilingual encoding method Soft Decoupled Encoding~\citep{SDE}, we propose DecSDE, an efficient character n-gram based embedding specifically designed for the NMT decoder. Our experiments show that DecSDE leads to consistent gains of up to 1.8 BLEU on translation from English to four different languages. ","Improving Target-side Lexical Transfer in Multilingual Neural Machine
Translation",1,"['Excited to share our paper ""Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation"" We propose a character n-gram embedding for translation into low resource language. With @cindyxinyiwang @gneubig, in findings of EMNLP.\n<LINK>']",20,10,254
3868,163,1499212752472182793,1012125662117851136,Edward Kennedy,"New paper! <LINK> How do trt effects vary across people? Such heterogeneity is crucial for optimal allocation, generalizability, etc Many methods out there... but optimality's been unsolved. What is ""best""? We derive minimax rates & give new optimal estimator <LINK> Minimax rates give a benchmark for the best possible performance of any estimator - ie when can you stop searching for better methods? Also an important measure of fundamental limits - how difficult is this in statistical sense? <LINK> <LINK> <LINK> We show the CATE minimax rate has an unusual elbow phenomenon, & interpolates bw regression & functional rates ie the CATE is a strange hybrid beast Our lower bd uses a ""mixed"" fuzzy hypotheses construction. And our estimator uses *localized* higher-order influence functions <LINK> The CATE minimax rate very clearly mixes regression & functional est rates: Minimax rate for regression scales with d/2g (dim=d, smoothness=g) Minimax rate for functional estimation scales with d/4s (nuisance smoothness=s) And the CATE minimax rate scales with (d/2g + d/4s) ! <LINK> This paper has meant a lot to me It's pretty different from my previous work - so I learned lots of fun new tools Also: - resolves something I pondered for ~a decade - wrote it during a pandemic while raising 2 young kids - made for some really fun mtgs, thanks to Siva & Larry <LINK>",https://arxiv.org/abs/2203.00837,"Estimation of heterogeneous causal effects - i.e., how effects of policies and treatments vary across subjects - is a fundamental task in causal inference, playing a crucial role in optimal treatment allocation, generalizability, subgroup effects, and more. Many flexible methods for estimating conditional average treatment effects (CATEs) have been proposed in recent years, but questions surrounding optimality have remained largely unanswered. In particular, a minimax theory of optimality has yet to be developed, with the minimax rate of convergence and construction of rate-optimal estimators remaining open problems. In this paper we derive the minimax rate for CATE estimation, in a nonparametric model where distributional components are Holder-smooth, and present a new local polynomial estimator, giving high-level conditions under which it is minimax optimal. More specifically, our minimax lower bound is derived via a localized version of the method of fuzzy hypotheses, combining lower bound constructions for nonparametric regression and functional estimation. Our proposed estimator can be viewed as a local polynomial R-Learner, based on a localized modification of higher-order influence function methods; it is shown to be minimax optimal under a condition on how accurately the covariate distribution is estimated. The minimax rate we find exhibits several interesting features, including a non-standard elbow phenomenon and an unusual interpolation between nonparametric regression and functional estimation rates. The latter quantifies how the CATE, as an estimand, can be viewed as a regression/functional hybrid. We conclude with some discussion of a few remaining open problems. ",Minimax rates for heterogeneous causal effect estimation,5,"['New paper!\n<LINK>\n\nHow do trt effects vary across people? Such heterogeneity is crucial for optimal allocation, generalizability, etc\n\nMany methods out there... but optimality\'s been unsolved. 
What is ""best""?\n\nWe derive minimax rates &amp; give new optimal estimator <LINK>', 'Minimax rates give a benchmark for the best possible performance of any estimator - ie when can you stop searching for better methods?\n\nAlso an important measure of fundamental limits - how difficult is this in statistical sense?\n\nhttps://t.co/jdhDkDH4bk\n\nhttps://t.co/kTnywEv6TQ https://t.co/raizPSn2wj', 'We show the CATE minimax rate has an unusual elbow phenomenon, &amp; interpolates bw regression &amp; functional rates\n\nie the CATE is a strange hybrid beast\n\nOur lower bd uses a ""mixed"" fuzzy hypotheses construction. And our estimator uses *localized* higher-order influence functions https://t.co/bsC48Asat4', 'The CATE minimax rate very clearly mixes regression &amp; functional est rates:\n\nMinimax rate for regression scales with d/2g (dim=d, smoothness=g)\n\nMinimax rate for functional estimation scales with d/4s (nuisance smoothness=s)\n\nAnd the CATE minimax rate scales with (d/2g + d/4s) ! https://t.co/yJIWDmlrwb', ""This paper has meant a lot to me\n\nIt's pretty different from my previous work - so I learned lots of fun new tools\n\nAlso:\n- resolves something I pondered for ~a decade\n- wrote it during a pandemic while raising 2 young kids\n- made for some really fun mtgs, thanks to Siva &amp; Larry https://t.co/GKe8YrDpJ7""]",22,03,1368
3869,23,1256025959695777793,3290526443,Jia-Bin Huang,"Check out our #SIGGRAPH2020 paper on Consistent Video Depth Estimation. Our geometrically consistent depth enables cool video effects to a whole new level! Video: <LINK> Paper: <LINK> Project page: <LINK> <LINK> Joint work with amazing co-authors Xuan Luo (@XuanLuo14), Richard Szeliski, Kevin Matzen, and Johannes Kopf (@JPKopf). @aiwakr Thanks, Avinash! Glad to see you here on Twitter!",https://arxiv.org/abs/2004.15021,"We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. We leverage a conventional structure-from-motion reconstruction to establish geometric constraints on pixels in the video. Unlike the ad-hoc priors in classical reconstruction, we use a learning-based prior, i.e., a convolutional neural network trained for single-image depth estimation. At test time, we fine-tune this network to satisfy the geometric constraints of a particular input video, while retaining its ability to synthesize plausible depth details in parts of the video that are less constrained. We show through quantitative validation that our method achieves higher accuracy and a higher degree of geometric consistency than previous monocular reconstruction methods. Visually, our results appear more stable. Our algorithm is able to handle challenging hand-held captured input videos with a moderate degree of dynamic motion. The improved quality of the reconstruction enables several applications, such as scene reconstruction and advanced video-based visual effects. ",Consistent Video Depth Estimation,3,"['Check out our #SIGGRAPH2020 paper on Consistent Video Depth Estimation. Our geometrically consistent depth enables cool video effects to a whole new level!\n\nVideo: <LINK>\nPaper: <LINK>\nProject page: <LINK> <LINK>', 'Joint work with amazing co-authors Xuan Luo (@XuanLuo14), Richard Szeliski, Kevin Matzen, and Johannes Kopf (@JPKopf).', '@aiwakr Thanks, Avinash! Glad to see you here on Twitter!']",20,04,388
3870,116,1303871529319632896,970481802308653056,Max Lipton,"My new paper on electrostatic knot theory is online! This paper, along with my previous result, articulates the relationship between the knot type and the topologies of their level potential surfaces, in a cute snippet of data I call the “Morse code.” <LINK> <LINK> @JSEllenberg @stevenstrogatz The Morse code can be “rearranged” via the Morse Replacement Lemma. The critical set and its behavior remains fixed, but the critical values can be rearranged. @JSEllenberg @stevenstrogatz However, the new function will not necessarily be harmonic, and therefore not necessarily representative of the electric potential of a charge distribution.",https://arxiv.org/abs/2009.03958,"Consider a knot $K$ in $S^3$ with uniformly distributed electric charge. From the standpoint of both physics and knot theory, it is natural to try to understand the critical points of the electric potential and their behavior. When the knot is sufficiently close to a planar projection, we prove a lower bound on the size of the critical set based on the projection's crossings, improving a 2019 result of the author. Next, we show that critical points of index $1$ correspond to increases in the genus of the equipotential surfaces as one increases the value of the potential, whilst critical points of index $2$ correspond to decreases. We conclude with a Cerf-theoretic description of the bifurcation of critical points under generic knot isotopies. Our theorems are proven with Morse theory and techniques from geometric topology. keywords: Physical knot theory, electrostatics, Morse theory, dynamical systems, geometric topology, Cerf theory ","Critical points and equipotential surfaces of knotted electric charge
distributions",3,"['My new paper on electrostatic knot theory is online! This paper, along with my previous result, articulates the relationship between the knot type and the topologies of their level potential surfaces, in a cute snippet of data I call the “Morse code.”\n\n<LINK> <LINK>', '@JSEllenberg @stevenstrogatz The Morse code can be “rearranged” via the Morse Replacement Lemma. The critical set and its behavior remains fixed, but the critical values can be rearranged.', '@JSEllenberg @stevenstrogatz However, the new function will not necessarily be harmonic, and therefore not necessarily representative of the electric potential of a charge distribution.']",20,09,640
3871,65,1016228598875918336,20444488,Seth Moortgat,"Proud to announce our new paper on ArXiv: <LINK> A study on how to pinpoint the origin of an #EFT signal at the LHC, in the broad #SMEFT landscape, using multi-class #NeuralNetworks. With the ttbb signature in a lead role as a very interesting case-study! <LINK>",https://arxiv.org/abs/1807.02130,"In the context of the Standard Model effective field theory (SMEFT), we study the LHC sensitivity to four fermion operators involving heavy quarks by employing cross section measurements in the $t\bar{t}b\bar{b}$ final state. Starting from the measurement of total rates, we progressively exploit kinematical information and machine learning techniques to optimize the projected sensitivity at the end of Run III. Indeed, in final states with high multiplicity containing inter-correlated kinematical information, multi-variate methods provide a robust way of isolating the regions of phase space where the SMEFT contribution is enhanced. We also show that training for multiple output classes allows for the discrimination between operators mediating the production of tops in different helicity states. Our projected sensitivities not only constrain a host of new directions in the SMEFT parameter space but also improve on existing limits demonstrating that, on one hand, $t\bar{t}b\bar{b}$ production is an indispensable component in a future global fit for top quark interactions in the SMEFT, and on the other, multi-class machine learning algorithms can be a valuable tool for interpreting LHC data in this framework. ","Learning to pinpoint effective operators at the LHC: a study of the
$t\bar{t}b\bar{b}$ signature",1,"['Proud to announce our new paper on ArXiv:\n<LINK>\nA study on how to pinpoint the origin of an #EFT signal at the LHC, in the broad #SMEFT landscape, using multi-class #NeuralNetworks. With the ttbb signature in a lead role as a very interesting case-study! <LINK>']",18,07,262
3872,187,1493308664219934729,1316746180085403655,Pierre Colombo,"Super excited to share our new work on Evaluation of NLP system: <LINK> ! When using large benchmark we often try to answer to the question: « what are the best systems? ». We study alternative such as using ranking ideas instead of considering the mean! Joint work with @nathan_noiry, Ekhine and Stephan! CLI interface is available at <LINK>…. We thanks @seb_ruder for it's inspirational blog post as well: <LINK>….",https://arxiv.org/abs/2202.03799,"In Machine Learning, a benchmark refers to an ensemble of datasets associated with one or multiple metrics together with a way to aggregate different systems performances. They are instrumental in (i) assessing the progress of new methods along different axes and (ii) selecting the best systems for practical use. This is particularly the case for NLP with the development of large pre-trained models (e.g. GPT, BERT) that are expected to generalize well on a variety of tasks. While the community mainly focused on developing new datasets and metrics, there has been little interest in the aggregation procedure, which is often reduced to a simple average over various performance measures. However, this procedure can be problematic when the metrics are on a different scale, which may lead to spurious conclusions. This paper proposes a new procedure to rank systems based on their performance across different tasks. Motivated by the social choice theory, the final system ordering is obtained through aggregating the rankings induced by each task and is theoretically grounded. We conduct extensive numerical experiments (on over 270k scores) to assess the soundness of our approach both on synthetic and real scores (e.g. GLUE, EXTREM, SEVAL, TAC, FLICKR). In particular, we show that our method yields different conclusions on state-of-the-art systems than the mean-aggregation procedure while being both more reliable and robust. ",What are the best systems? New perspectives on NLP Benchmarking,2,"['Super excited to share our new work on Evaluation of NLP system: <LINK> ! When using large benchmark we often try to answer to the question: « what are the best systems? ». We study alternative such as using ranking ideas instead of considering the mean!', ""Joint work with @nathan_noiry, Ekhine and Stephan! CLI interface is available at https://t.co/uzKUGS8e5O…. We thanks @seb_ruder for it's inspirational blog post as well: https://t.co/EQvXmfMayf….""]",22,02,416
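The record above argues for aggregating per-task rankings of systems instead of averaging raw scores that live on different scales. The snippet below contrasts mean aggregation with a simple Borda-count rank aggregation on made-up scores; Borda is only one basic social-choice rule and is not necessarily the exact procedure proposed in the paper.

```python
import numpy as np

# scores[t, s] = score of system s on task t (higher is better); values are made up.
scores = np.array([
    [0.71, 0.69, 0.75],   # task 1
    [0.52, 0.40, 0.55],   # task 2
    [11.0, 95.0, 12.0],   # task 3 (a metric on a completely different scale)
])
systems = ["sysA", "sysB", "sysC"]

# Mean aggregation is dominated by whichever task has the largest numeric scale.
mean_agg = scores.mean(axis=0)

# Rank aggregation: convert each task's scores to ranks, then combine them with a
# Borda count (a system earns one point per system it beats; ties broken arbitrarily).
ranks = scores.argsort(axis=1).argsort(axis=1)   # 0 = worst, n-1 = best, per task
borda = ranks.sum(axis=0)

# Here the mean picks sysB (huge win on task 3), while Borda picks sysC,
# which is best on two of the three tasks.
print("mean aggregation :", dict(zip(systems, mean_agg.round(3))))
print("borda aggregation:", dict(zip(systems, borda)))
```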
3873,47,1205902384112848899,2239670346,Jonathan Frankle,"How do the lottery ticket hypothesis and the loss landscape relate? Winning lottery tickets always find the same, linearly-connected optimum. Check out our (@KDziugaite, @roydanroy, @mcarbin) poster at the SEDL workshop (West 121) and our new paper <LINK> <LINK>",https://arxiv.org/abs/1912.05671,"We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet). ",Linear Mode Connectivity and the Lottery Ticket Hypothesis,1,"['How do the lottery ticket hypothesis and the loss landscape relate? Winning lottery tickets always find the same, linearly-connected optimum. Check out our (@KDziugaite, @roydanroy, @mcarbin) poster at the SEDL workshop (West 121) and our new paper <LINK> <LINK>']",19,12,262
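The record above is about linear mode connectivity: two trained networks are judged to lie in the same linearly connected minimum if the loss shows no barrier along the straight line between their weights. The sketch below shows that interpolation check in PyTorch for two copies of a toy (untrained) model; the architecture, data, and number of interpolation points are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Two copies of a toy network stand in for two SGD runs (untrained here, so the
# curve only demonstrates the mechanics of the check, not a real result).
model_a, model_b = make_model(), make_model()
X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
loss_fn = nn.CrossEntropyLoss()

@torch.no_grad()
def loss_at_interpolation(alpha):
    """Loss at theta = (1 - alpha) * theta_a + alpha * theta_b."""
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    blended = {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}
    model = make_model()
    model.load_state_dict(blended)
    return loss_fn(model(X), y).item()

# A profile with no bump above the endpoint losses indicates linear mode
# connectivity; a pronounced barrier indicates the runs found different basins.
for alpha in [i / 4 for i in range(5)]:
    print(f"alpha={alpha:.2f}  loss={loss_at_interpolation(alpha):.4f}")
```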
3874,69,1106016016268705792,913058608883142657,Patrick Vallely 🛰️🌌🔭,"AWESOME new paper by @bigskybooms now posted to @arxiv! If you hadn't been convinced already, this should be the last nail in the coffin ruling out the single-degenerate scenario producing a significant fraction of Type Ia supernovae. For more: <LINK> <LINK>",https://arxiv.org/abs/1903.05115,"We place statistical constraints on Type Ia supernova (SN Ia) progenitors using 227 nebular phase spectra of 111 SNe Ia. We find no evidence of stripped companion emission in any of the nebular phase spectra. Upper limits are placed on the amount of mass that could go undetected in each spectrum using recent hydrodynamic simulations. With these null detections, we place an observational $3\sigma$ upper limit on the fraction of SNe Ia that are produced through the classical H-rich non-degenerate companion scenario of < 5.5%. Additionally, we set a tentative $3\sigma$ upper limit on He star progenitor scenarios of < 6.4%, although further theoretical modelling is required. These limits refer to our most representative sample including normal, 91bg-like, 91T-like, and ""Super Chandrasekhar"" \sne but excluding SNe Iax and SNe Ia-CSM. As part of our analysis, we also derive a Nebular Phase Phillips Relation, which approximates the brightness of a SN Ia from $150-500$~days after maximum using the peak magnitude and decline rate parameter $\Delta m_{15} (B)$. ","Nebular Spectra of 111 Type Ia Supernovae Disfavor Single Degenerate
Progenitors",1,"[""AWESOME new paper by @bigskybooms now posted to @arxiv! If you hadn't been convinced already, this should be the last nail in the coffin ruling out the single-degenerate scenario producing a significant fraction of Type Ia supernovae.\n\nFor more: <LINK> <LINK>""]",19,03,258
3875,117,1489494341550854144,1047808083995578368,Oded Zilberberg Group,"Interested in nonlinear out-of-equilibrium systems? Check out our new @JuliaLanguage package <LINK> and its white paper @arxiv (<LINK>). @ETH_physics, @UniKonstanz, @SFB1432, @NCCR_QSIT. With Jan Košata, Javier del Pino, and Toni L. Heugel <LINK> You can also hear more about it in Javier's talk today at @NCCR_QSIT Arosa meeting.",http://arxiv.org/abs/2202.00571,"HarmonicBalance.jl is a publicly available Julia package designed to simplify and solve systems of periodic time-dependent nonlinear ordinary differential equations. Time dependence of the system parameters is treated with the harmonic balance method, which approximates the system's behaviour as a set of harmonic terms with slowly-varying amplitudes. Under this approximation, the set of all possible steady-state responses follows from the solution of a polynomial system. In HarmonicBalance.jl, we combine harmonic balance with contemporary implementations of symbolic algebra and the homotopy continuation method to numerically determine all steady-state solutions and their associated fluctuation dynamics. For the exploration of involved steady-state topologies, we provide a simple graphical user interface, allowing for arbitrary solution observables and phase diagrams. HarmonicBalance.jl is a free software available at this https URL ","HarmonicBalance.jl: A Julia suite for nonlinear dynamics using harmonic
balance",2,"['Interested in nonlinear out-of-equilibrium systems? Check out our new @JuliaLanguage package <LINK> and its white paper @arxiv (<LINK>). \n@ETH_physics, @UniKonstanz, @SFB1432, @NCCR_QSIT. With Jan Košata, Javier del Pino, and Toni L. Heugel <LINK>', ""You can also here more about it today in Javier's talk today at @NCCR_QSIT Arosa meeting.""]",22,02,336
3876,224,1275318361925115908,495550336,Abhishek Gupta,"New work led by JD, Suvansh studying what properties of an environment can make sparse reward, non episodic learning easier. We find that highly dynamic environments or environments with natural “environment shaping” can help! <LINK> @svlevine @GlenBerseth",https://arxiv.org/abs/2006.12478,"Much of the current work on reinforcement learning studies episodic settings, where the agent is reset between trials to an initial state distribution, often with well-shaped reward functions. Non-episodic settings, where the agent must learn through continuous interaction with the world without resets, and where the agent receives only delayed and sparse reward signals, is substantially more difficult, but arguably more realistic considering real-world environments do not present the learner with a convenient ""reset mechanism"" and easy reward shaping. In this paper, instead of studying algorithmic improvements that can address such non-episodic and sparse reward settings, we instead study the kinds of environment properties that can make learning under such conditions easier. Understanding how properties of the environment impact the performance of reinforcement learning agents can help us to structure our tasks in ways that make learning tractable. We first discuss what we term ""environment shaping"" -- modifications to the environment that provide an alternative to reward shaping, and may be easier to implement. We then discuss an even simpler property that we refer to as ""dynamism,"" which describes the degree to which the environment changes independent of the agent's actions and can be measured by environment transition entropy. Surprisingly, we find that even this property can substantially alleviate the challenges associated with non-episodic RL in sparse reward settings. We provide an empirical evaluation on a set of new tasks focused on non-episodic learning with sparse rewards. Through this study, we hope to shift the focus of the community towards analyzing how properties of the environment can affect learning and the ultimate type of behavior that is learned via RL. ",Ecological Reinforcement Learning,1,"['New work led by JD, Suvansh studying what properties of an environment can make sparse reward, non episodic learning easier. We find that highly dynamic environments or environments with natural “environment shaping” can help!\n\n<LINK>\n@svlevine @GlenBerseth']",20,06,256
3877,83,1129008425793409026,2423945684,Vinny Davies,"New paper now available on @arxiv <LINK> The paper looks at inferring left ventricle heart model parameters from MRI data, using emulation to speed up the inference and make it suitable for clinical use Work was done with Umberto Noè, @LazarusAl, @sharpgao, @akohneko, @ColinBerryMD, Xiaoyu Luo, Dirk Husmeier as part of the @SofTMech project",https://arxiv.org/abs/1905.06310,"A central problem in biomechanical studies of personalised human left ventricular (LV) modelling is estimating the material properties and biophysical parameters from in-vivo clinical measurements in a time frame suitable for use within a clinic. Understanding these properties can provide insight into heart function or dysfunction and help inform personalised medicine. However, finding a solution to the differential equations which mathematically describe the kinematics and dynamics of the myocardium through numerical integration can be computationally expensive. To circumvent this issue, we use the concept of emulation to infer the myocardial properties of a healthy volunteer in a viable clinical time frame using in-vivo magnetic resonance image (MRI) data. Emulation methods avoid computationally expensive simulations from the LV model by replacing the biomechanical model, which is defined in terms of explicit partial differential equations, with a surrogate model inferred from simulations generated before the arrival of a patient, vastly improving computational efficiency at the clinic. We compare and contrast two emulation strategies: (i) emulation of the computational model outputs and (ii) emulation of the loss between the observed patient data and the computational model outputs. These strategies are tested with two different interpolation methods, as well as two different loss functions... ","Fast Parameter Inference in a Biomechanical Model of the Left Ventricle
using Statistical Emulation",2,"['New paper now available on @arxiv <LINK> The paper looks at inferring left ventricle heart model parameters from MRI data, using emulation to speed up the inference and make it suitable for clinical use', 'Work was done with Umberto Noè, @LazarusAl, @sharpgao, @akohneko, @ColinBerryMD, Xiaoyu Luo, Dirk Husmeier as part of the @SofTMech project']",19,05,342
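The record above replaces an expensive biomechanical simulator with an emulator trained on simulations generated before the patient arrives, so that parameter inference becomes fast enough for the clinic. Below is a generic, hypothetical sketch of that workflow (Gaussian-process emulators plus loss minimisation) using a stand-in toy "simulator"; it is not the left-ventricle model or the specific emulation strategies compared in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def expensive_simulator(theta):
    """Stand-in for a slow forward model mapping parameters to measurements."""
    return np.array([np.sin(theta[0]) + theta[1] ** 2, theta[0] * theta[1]])

# Offline phase (before the patient arrives): run the simulator on a design of
# parameter points and fit one GP emulator per simulated output.
thetas = rng.uniform(-1, 1, size=(200, 2))
sims = np.array([expensive_simulator(t) for t in thetas])
emulators = [GaussianProcessRegressor(normalize_y=True).fit(thetas, sims[:, j])
             for j in range(sims.shape[1])]

def emulate(theta):
    return np.array([gp.predict(theta.reshape(1, -1))[0] for gp in emulators])

# Online phase (at the clinic): fit parameters to observed data by minimising the
# misfit against the cheap emulator instead of the slow simulator.
theta_true = np.array([0.3, -0.5])
observed = expensive_simulator(theta_true) + 0.01 * rng.normal(size=2)
result = minimize(lambda th: np.sum((emulate(th) - observed) ** 2),
                  x0=np.zeros(2), method="Nelder-Mead")
print("true:", theta_true, "estimated:", result.x)
```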
3878,110,1163898677854818304,17354555,Emilio Ferrara,Great work by @palashiitkgp & team: We propose & developed an open-source library to benchmark any graph embedding methods. We hope this will foster standardization and more research in the area of graph embeddings! Paper: <LINK> Code: <LINK> <LINK> With thanks to @aaronclauset & team for providing most real world datasets! Check out their project: <LINK>,https://arxiv.org/abs/1908.06543,"Graph embedding is the task of representing nodes of a graph in a low-dimensional space and its applications for graph tasks have gained significant traction in academia and industry. The primary difference among the many recently proposed graph embedding methods is the way they preserve the inherent properties of the graphs. However, in practice, comparing these methods is very challenging. The majority of methods report performance boosts on few selected real graphs. Therefore, it is difficult to generalize these performance improvements to other types of graphs. Given a graph, it is currently impossible to quantify the advantages of one approach over another. In this work, we introduce a principled framework to compare graph embedding methods. Our goal is threefold: (i) provide a unifying framework for comparing the performance of various graph embedding methods, (ii) establish a benchmark with real-world graphs that exhibit different structural properties, and (iii) provide users with a tool to identify the best graph embedding method for their data. This paper evaluates 4 of the most influential graph embedding methods and 4 traditional link prediction methods against a corpus of 100 real-world networks with varying properties. We organize the 100 networks in terms of their properties to get a better understanding of the embedding performance of these popular methods. We use the comparisons on our 100 benchmark graphs to define GFS-score, that can be applied to any embedding method to quantify its performance. We rank the state-of-the-art embedding approaches using the GFS-score and show that it can be used to understand and evaluate novel embedding approaches. We envision that the proposed framework (this https URL) will serve the community as a benchmarking platform to test and compare the performance of future graph embedding techniques. ",Benchmarks for Graph Embedding Evaluation,2,"['Great work by @palashiitkgp &amp; team: We propose &amp; developed an open-source library to benchmark any graph embedding methods. \n\nWe hope this will foster standardization and more research in the area of graph embeddings!\n\nPaper: <LINK>\nCode: <LINK> <LINK>', 'With thanks to @aaronclauset &amp; team for providing most real world datasets! Check out their project: https://t.co/aj5qkD5Hbj']",19,08,358
3879,73,1105785748974956545,802858315172737024,Nicholas Chancellor #OneOfUsAllOfUs,"new arXiv paper <LINK> shows a new way of encoding integer variables in quantum annealers, in some cases this gives a similar relative advantage to embedding in the @dwavesys Pegasus versus chimera graph (of course advantages will 'stack') @jqcDurNew @DurhamQlm",https://arxiv.org/abs/1903.05068,"In this paper I propose a new method of encoding discrete variables into Ising model qubits for quantum optimization. The new method is based on the physics of domain walls in one dimensional Ising spin chains. I find that these encodings and the encoding of arbitrary two variable interactions is possible with only two body Ising terms. Following on from similar results for the `one hot' method of encoding discrete variables [Hadfield et. al. Algorithms 12.2 (2019): 34] I also demonstrate that it is possible to construct two body mixer terms which do not leave the logical subspace, an important consideration for optimising using the quantum alternating operator ansatz (QAOA). I additionally discuss how, since the couplings in the domain wall encoding only need to be ferromagnetic and therefore could in principle be much stronger than anti-ferromagnetic couplers, application specific quantum annealers for discrete problems based on this construction may be beneficial. Finally, I compare embedding for synthetic scheduling and colouring problems with the domain wall and one hot encodings on two graphs which are relevant for quantum annealing, the chimera graph and the Pegasus graph. For every case I examine I find a similar or better performance from the domain wall encoding as compared to one hot, but this advantage is highly dependent on the structure of the problem. For encoding some problems, I find an advantage similar to the one found by embedding in a Pegasus graph compared to embedding in a chimera graph. ","Domain wall encoding of discrete variables for quantum annealing and
QAOA",1,"[""new arXiv paper <LINK> shows a new way of encoding integer variables in quantum annealers, in some cases this gives a similar relative advantage to embedding in the @dwavesys Pegasus versus chimera graph (of course advantages will 'stack') @jqcDurNew @DurhamQlm""]",19,03,261
3880,51,1331102851637022720,4716962310,Li Junnan,"Excited to introduce CoMatch, our new semi-supervised learning method! CoMatch jointly learns class probability and image representation with graph-based contrastive learning. @CaimingXiong @stevenhoi Blog: <LINK> Paper: <LINK> CoMatch advances both semi-supervised classification and representation learning. With only 1% of labeled ImageNet training samples, CoMatch achieves a state-of-the-art top1-accuracy of 66%!",https://arxiv.org/abs/2011.11183,"Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. The two representations interact with each other to jointly evolve. The embeddings impose a smoothness constraint on the class probabilities to improve the pseudo-labels, whereas the pseudo-labels regularize the structure of the embeddings through graph-based contrastive learning. CoMatch achieves state-of-the-art performance on multiple datasets. It achieves substantial accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with 1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch by 12.6%. Furthermore, CoMatch achieves better representation learning performance on downstream tasks, outperforming both supervised learning and self-supervised learning. Code and pre-trained models are available at this https URL ",CoMatch: Semi-supervised Learning with Contrastive Graph Regularization,2,"['Excited to introduce CoMatch, our new semi-supervised learning method! CoMatch jointly learns class probability and image representation with graph-based contrastive learning. @CaimingXiong @stevenhoi \nBlog: <LINK>\nPaper: <LINK>', 'CoMatch advances both semi-supervised classification and representation learning. With only 1% of labeled ImageNet training samples, CoMatch achieves a state-of-the-art top1-accuracy of 66%!']",20,11,418
3881,193,1491465210414178310,1169242545139986432,Martin Lefebvre,"It's preprint time ! 😀 Check out our latest work with Prof. David Bol, in which we propose a family of simple current references, requiring a limited silicon area while ensuring robustness against supply voltage and temperature variations. 👇 <LINK> These references consist of a two-transistor voltage reference, buffered onto a voltage-to-currrent converter by a single transistor. We proposed two novel topologies, a nA-range proportional-to-absolute-temperature (PTAT) one and a µA-range constant-with-temperature (CWT) one. We fabricated and measured both current references in a 0.18-µm partially-depleted silicon-on-insulator (PDSOI) technology, demonstrating functionality on silicon in real-world conditions. First, the PTAT reference is obtained by biasing a self-cascode MOSFET with a PTAT voltage. It generates a 0.096-nA current, consumes 0.28 nW at 0.55V, and only requires 7 transistors, thus occupying a silicon area of 8700 µm^2. Second, the CWT reference is obtained by biasing a polysilicon resistor with a CWT voltage. It generates a 1.09-µA current with a temperature coefficient (TC) of 38 ppm/°C, and only requires 4 transistors and resistor, occupying a silicon area of 4300 µm^2. Finally, we demonstrated the portability of the references to common scaled technologies, such as 65-nm bulk and 28-nm fully-depleted SOI, by proposing techniques to deal with their analog non-idealites and by simulating the references post-layout in these two technologies.",https://arxiv.org/abs/2202.01751,"The robustness of current and voltage references to process, voltage and temperature (PVT) variations is paramount to the operation of integrated circuits in real-world conditions. However, while recent voltage references can meet most of these requirements with a handful of transistors, current references remain rather complex, requiring significant design time and silicon area. In this paper, we present a family of simple current references consisting of a two-transistor (2T) ultra-low-power voltage reference, buffered onto a voltage-to-current converter by a single transistor. Two topologies are fabricated in a 0.18-$\mu$m partially-depleted silicon-on-insulator (SOI) technology and measured over 10 dies. First, a 7T nA-range proportional-to-absolute-temperature (PTAT) reference intended for constant-$g_m$ biasing of subthreshold operational amplifiers demonstrates a 0.096-nA current with a line sensitivity (LS) of 1.48 %/V, a temperature coefficient (TC) of 0.75 %/$^\circ$C, and a variability $(\sigma/\mu)$ of 1.66 %. Then, two 4T+1R $\mu$A-range constant-with-temperature (CWT) references with (resp. without) TC calibration exhibit a 1.09-$\mu$A (resp. 0.99-$\mu$A) current with a 0.21-%/V (resp. 0.20-%/V) LS, a 38-ppm/$^\circ$C (resp. 290-ppm/$^\circ$C) TC, and a 0.87-% (resp. 0.65-%) $(\sigma/\mu)$. In addition, portability to common scaled CMOS technologies, such as 65-nm bulk and 28-nm fully-depleted SOI, is discussed and validated through post-layout simulations. ","A Family of Current References Based on 2T Voltage References:
Demonstration in 0.18-$\mu$m with 0.1-nA PTAT and 1.1-$\mu$A CWT
38-ppm/$^\circ$C Designs",6,"[""It's preprint time ! 😀\nCheck out our latest work with Prof. David Bol, in which we propose a family of simple current references, requiring a limited silicon area while ensuring robustness against supply voltage and temperature variations. 👇\n<LINK>"", 'These references consist of a two-transistor voltage reference, buffered onto a voltage-to-currrent converter by a single transistor.\nWe proposed two novel topologies, a nA-range proportional-to-absolute-temperature (PTAT) one and a µA-range constant-with-temperature (CWT) one.', 'We fabricated and measured both current references in a 0.18-µm partially-depleted silicon-on-insulator (PDSOI) technology, demonstrating functionality on silicon in real-world conditions.', 'First, the PTAT reference is obtained by biasing a self-cascode MOSFET with a PTAT voltage. It generates a 0.096-nA current, consumes 0.28 nW at 0.55V, and only requires 7 transistors, thus occupying a silicon area of 8700 µm^2.', 'Second, the CWT reference is obtained by biasing a polysilicon resistor with a CWT voltage. It generates a 1.09-µA current with a temperature coefficient (TC) of 38 ppm/°C, and only requires 4 transistors and resistor, occupying a silicon area of 4300 µm^2.', 'Finally, we demonstrated the portability of the references to common scaled technologies, such as 65-nm bulk and 28-nm fully-depleted SOI, by proposing techniques to deal with their analog non-idealites and by simulating the references post-layout in these two technologies.']",22,02,1478
3882,207,1412703697616969728,712960453,Prashant Saxena,"New #preprint from @sumitmehta1992's PhD work on instabilities in a pressurised constrained compressible soft cylinder. <LINK> We study pattern formation along the axis and circumference and propose a way to switch between either of them by tuning the anisotropy <LINK> @sumitmehta1992 It is an interesting research topic for me. Have worked a great deal on incompressible materials, but there are some finer points that need to be addressed while working on compressible solids. I'm also starting to love the compound matrix method that we used in this paper to solve ODEs. It's much faster and can deal with rapidly varying solutions quite nicely.",https://arxiv.org/abs/2107.01375,"Pressurised cylindrical channels made of soft materials are ubiquitous in biological systems, soft robotics, and metamaterial designs. In this paper, we study large deformation of a long, thick-walled, and compressible hyperelastic cylindrical channel under internal pressure. The applied pressure can lead to elastic bifurcations along the axial or circumferential direction. Incremental theory is used to derive the partial differential equations that govern the bifurcation behaviour of the cylindrical channel. Two cases of boundary conditions on the outer surface of the cylinder, namely, free and constrained are studied to understand their influence on the buckling behaviour. The derived equations are solved numerically using the compound matrix method to evaluate the critical pressure. The effects of the thickness of the cylinder and the compressibility of the material on the critical pressure are investigated for both the boundary conditions. The results reveal that for an isotropic material, the bifurcation occurs along the axial direction of the cylinder at lower critical pressure compared to the circumferential direction for all cases considered. Finally, we demonstrate the tailorability of bifurcation behaviour of the cylinder by adding reinforcements along the length of cylinder. The anisotropic hyperelastic material behaviour for triggering the bifurcation in the circumferential direction is studied by varying the material parameters. ","Instabilities in a compressible hyperelastic cylindrical channel due to
internal pressure and external constraints",3,"[""New #preprint from @sumitmehta1992's PhD work on instabilities in a pressurised constrained compressible soft cylinder.\n<LINK>\n\nWe study pattern formation along the axis and circumference and propose a way to switch between either of them by tuning the anisotropy <LINK>"", '@sumitmehta1992 It is an interesting research topic for me. Have worked a great deal on incompressible materials, but there are some finer points that need to be addressed while working on compressible solids.', ""I'm also starting to love the compound matrix method that we used in this paper to solve ODEs. It's much faster and can deal with rapidly varying solutions quite nicely.""]",21,07,649
3883,150,1270991464214953984,4249537197,Christian Wolf,"Second paper: we introduce a new semantic loss for VQA adding structure to the VQA answer space estimated from redundancy in annotations, questioning the classification approach to VQA. Work by @CorentK, @antigregory, @moezbac and yours, truly. <LINK> <LINK>",https://arxiv.org/abs/2006.05726,"Since its appearance, Visual Question Answering (VQA, i.e. answering a question posed over an image), has always been treated as a classification problem over a set of predefined answers. Despite its convenience, this classification approach poorly reflects the semantics of the problem limiting the answering to a choice between independent proposals, without taking into account the similarity between them (e.g. equally penalizing for answering cat or German shepherd instead of dog). We address this issue by proposing (1) two measures of proximity between VQA classes, and (2) a corresponding loss which takes into account the estimated proximity. This significantly improves the generalization of VQA models by reducing their language bias. In particular, we show that our approach is completely model-agnostic since it allows consistent improvements with three different VQA models. Finally, by combining our method with a language bias reduction approach, we report SOTA-level performance on the challenging VQAv2-CP dataset. ",Estimating semantic structure for the VQA answer space,1,"['Second paper: we introduce a new semantic loss for VQA adding structure to the VQA answer space estimated from redundancy in annotations, questioning the classification approach to VQA.\nWork by @CorentK, @antigregory, @moezbac and yours, truly.\n<LINK> <LINK>']",20,06,258
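The record above proposes a loss that respects proximity between VQA answers rather than treating them as independent classes. One simple way to realise that idea, not necessarily the paper's exact formulation, is a cross-entropy against soft targets obtained by spreading the label mass over semantically close answers, sketched below in PyTorch with a made-up proximity matrix.

```python
import torch
import torch.nn.functional as F

answers = ["dog", "german shepherd", "cat", "red"]

# Hypothetical answer-proximity matrix; in practice it would be estimated, e.g.,
# from annotation redundancy or answer embeddings. Rows are normalised to sum to 1.
proximity = torch.tensor([
    [1.00, 0.60, 0.20, 0.00],
    [0.60, 1.00, 0.15, 0.00],
    [0.20, 0.15, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])
soft_targets_table = proximity / proximity.sum(dim=1, keepdim=True)

def semantic_cross_entropy(logits, target_idx):
    """Cross-entropy against proximity-smoothed targets instead of one-hot labels."""
    soft_targets = soft_targets_table[target_idx]      # (batch, n_answers)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Toy batch: predicting "german shepherd" when the label is "dog" is penalised
# less than predicting "red".
logits = torch.randn(2, len(answers), requires_grad=True)
target_idx = torch.tensor([0, 2])  # "dog", "cat"
loss = semantic_cross_entropy(logits, target_idx)
loss.backward()
print(float(loss))
```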
3884,293,1320820554321055746,3524520857,Piotr Żelasko,"In our new study, we aim to gain some insights into the limitations of zero-shot ASR transfer to an unknown language. This work was done together with @SFeng9, Laureano Moro, Ali Abavisani, @OScharenborg, @hasegawajohnson, and Najim Dehak. 🔗<LINK> <LINK>",https://arxiv.org/abs/2010.12104,"The idea of combining multiple languages' recordings to train a single automatic speech recognition (ASR) model brings the promise of the emergence of universal speech representation. Recently, a Transformer encoder-decoder model has been shown to leverage multilingual data well in IPA transcriptions of languages presented during training. However, the representations it learned were not successful in zero-shot transfer to unseen languages. Because that model lacks an explicit factorization of the acoustic model (AM) and language model (LM), it is unclear to what degree the performance suffered from differences in pronunciation or the mismatch in phonotactics. To gain more insight into the factors limiting zero-shot ASR transfer, we replace the encoder-decoder with a hybrid ASR system consisting of a separate AM and LM. Then, we perform an extensive evaluation of monolingual, multilingual, and crosslingual (zero-shot) acoustic and language models on a set of 13 phonetically diverse languages. We show that the gain from modeling crosslingual phonotactics is limited, and imposing a too strong model can hurt the zero-shot transfer. Furthermore, we find that a multilingual LM hurts a multilingual ASR system's performance, and retaining only the target language's phonotactic data in LM training is preferable. ",How Phonotactics Affect Multilingual and Zero-shot ASR Performance,1,"['In our new study, we aim to gain some insights into the limitations of zero-shot ASR transfer to an unknown language. This work was done together with @SFeng9, Laureano Moro, Ali Abavisani, @OScharenborg, @hasegawajohnson, and Najim Dehak.\n\n🔗<LINK> <LINK>']",20,10,254
3885,138,1298103182569209857,630587365,Bill Peebles,"We propose a simple approach to disentanglement: make your generative model's Hessian diagonal in its input. Can implement in &lt;10 lines of code. <LINK> Project Page: <LINK> ECCV 2020 Spotlight w/ J Peebles, @junyanz89, Efros and Torralba <LINK> Basic idea: When we perturb one input z component, we want the *change* in the output to be invariant to other components. This is equivalent to saying we want a diagonal Hessian in z. So we can just add a regularizer that penalizes a generator's off-diagonal Hessian term! 2/7 <LINK> This model-agnostic regularizer can get reasonable axis-aligned disentanglement results when applied to GANs, such as ProGAN trained on CLEVR. In comparison, InfoGAN seems to struggle. 3/7 <LINK> The Hessian Penalty displays a tendency to *turn-off* extra z components when the latent space is overparameterized. For example, if |z|=12 but your dataset has 1 factor of variation, 11 components get disabled. In contrast, vanilla ProGAN uses all 12 components. 4/7 <LINK> Finally, it can also be used for unsupervised direction discovery. Can identify BigGAN directions that, e.g., perform object rotation cleaner than past methods. 5/7 <LINK> The Hessian Penalty can be efficiently computed by minimizing the variance of Hutchinson's estimator. We have @PyTorch and TensorFlow implementations ready for use at <LINK>. Example implementation in six lines of code below. 6/7 <LINK> Big thanks to @pabbeel, Taesung Park and @rzhang88 for very helpful conversations and feedback! 7/7 @unsorsodicorda @pabbeel @rzhang88 We've only tried it with GANs so far, but I don't think there's anything preventing it from being applied to arbitrary latent variable generative models (or even discriminative networks).",https://arxiv.org/abs/2008.10599,"Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces. ",The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement,8,"[""We propose a simple approach to disentanglement: make your generative model's Hessian diagonal in its input. Can implement in &lt;10 lines of code.\n\n<LINK>\nProject Page: <LINK>\nECCV 2020 Spotlight\nw/ J Peebles, @junyanz89, Efros and Torralba <LINK>"", ""Basic idea: When we perturb one input z component, we want the *change* in the output to be invariant to other components. This is equivalent to saying we want a diagonal Hessian in z. So we can just add a regularizer that penalizes a generator's off-diagonal Hessian term! 
2/7 https://t.co/aonmrvNe7G"", 'This model-agnostic regularizer can get reasonable axis-aligned disentanglement results when applied to GANs, such as ProGAN trained on CLEVR. In comparison, InfoGAN seems to struggle. 3/7 https://t.co/WZd0Abpqm3', 'The Hessian Penalty displays a tendency to *turn-off* extra z components when the latent space is overparameterized. For example, if |z|=12 but your dataset has 1 factor of variation, 11 components get disabled. In contrast, vanilla ProGAN uses all 12 components. 4/7 https://t.co/dusf0oU1ti', 'Finally, it can also be used for unsupervised direction discovery. Can identify BigGAN directions that, e.g., perform object rotation cleaner than past methods. 5/7 https://t.co/g0YBTVAG0y', ""The Hessian Penalty can be efficiently computed by minimizing the variance of Hutchinson's estimator. We have @PyTorch and TensorFlow implementations ready for use at https://t.co/QLJm8D9d2r. Example implementation in six lines of code below. 6/7 https://t.co/2odGzKoWpQ"", 'Big thanks to @pabbeel, Taesung Park and @rzhang88 for very helpful conversations and feedback! 7/7', ""@unsorsodicorda @pabbeel @rzhang88 We've only tried it with GANs so far, but I don't think there's anything preventing it from being applied to arbitrary latent variable generative models (or even discriminative networks).""]",20,08,1735
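The thread above gives the intuition but defers the implementation to the linked repository. As a rough, self-contained sketch (not the official implementation; k, eps, and the mean reduction are illustrative choices), the penalty described in the abstract and in tweet 6/7 can be approximated by taking the variance, over random Rademacher directions v, of a finite-difference estimate of the second directional derivative v^T H v, which vanishes exactly when the Hessian of the generator with respect to z is diagonal.

import torch

def hessian_penalty(G, z, k=2, eps=0.1):
    # G:   callable mapping latent codes z -> generator outputs
    # z:   (batch, dim) latent codes
    # k:   number of random Rademacher directions (k >= 2 for a variance)
    # eps: finite-difference step size
    Gz = G(z)
    second_orders = []
    for _ in range(k):
        # Rademacher direction scaled by eps.
        v = (torch.randint(0, 2, z.shape, device=z.device).float() * 2 - 1) * eps
        # Central finite difference approximating v^T H v for each output element.
        second_orders.append((G(z + v) - 2 * Gz + G(z - v)) / eps ** 2)
    # For Rademacher directions, the variance of this estimator is (up to scale)
    # the sum of squared off-diagonal Hessian entries; reduce over outputs and batch.
    return torch.stack(second_orders).var(dim=0, unbiased=True).mean()

In training, a term like this would simply be added, with some weight, to the usual generator loss; the official PyTorch and TensorFlow implementations linked in tweet 6/7 should be preferred for real experiments.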
3886,58,1262445769514254338,3051993112,Anna Wright,"New paper day! We use the Romulus25 cosmological simulation to study the formation and evolution of 100+ ultra-diffuse galaxies (UDGs) in the field: <LINK> #nbodyshopgotchu Shout-out to co-authors Michael Tremmel (@MichaelTremmel), Alyson Brooks, Ferah Munshi (@fdmtweets), Daisuke Nagai, Ray Sharma (@RaySSharma), and Tom Quinn. Here are some of our results: We search Romulus25 – one of the highest resolution volumes ever run – for isolated UDGs. That is, large, low surface brightness (LSB) galaxies that live far away from anything massive. Here are a few of our galaxies + SB profiles & fits – UDGs on the right, non-UDGs on the left: <LINK> Other than their large sizes and LSB, present-day UDGs aren’t really that weird. They have normal star formation rates (see below), HI masses, colors, and virial masses. They’re also pretty common: ~20% of all field galaxies with Mstar = 10^7-9 Msol are UDGs. <LINK> What distinguishes UDGs is their evolution! Higher mass dwarf galaxies don’t typically fade much after z~1, but UDGs become increasingly LSB. Same with size: non-UDGs don’t change much over time, but UDGs start growing and don’t stop. <LINK> Part of the reason for this seems to be that UDGs evolve to lower central star formation rates. As their central stars age, the cores of the galaxies get fainter, similar to what we see in galaxy cluster simulations (RomulusC; <LINK>) <LINK> However, field UDGs have typical star formation rates for their stellar masses, so what’s really happening is that star-forming gas is spreading out! This results in bright, new stars forming at larger radii, leading to an increase in size over time. But why? It turns out that most of our field UDGs had early mergers (z&gt;1) that temporarily spun them up and redistributed their star formation. In many UDGs, we see bursts of star formation at high radii for billions of years post-merger: <LINK>",https://arxiv.org/abs/2005.07634,"We use the \textsc{Romulus25} cosmological simulation volume to identify the largest-ever simulated sample of {\it field} ultra-diffuse galaxies (UDGs). At $z=0$, we find that isolated UDGs have average star formation rates, colors, and virial masses for their stellar masses and environment. UDGs have moderately elevated HI masses, being 70\% (300\%) more HI-rich than typical isolated dwarf galaxies at luminosities brighter (fainter) than M$_\mathrm{B}$=-14. However, UDGs are consistent with the general isolated dwarf galaxy population and make up $\sim$20\% of all field galaxies with 10$^7$<M$_\star$/M$_\odot$<10$^{9}$. The HI masses, effective radii, and overall appearances of our UDGs are consistent with existing observations of field UDGs, but we predict that many isolated UDGs have been missed by current surveys. Despite their isolation at $z=0$, the UDGs in our sample are the products of major mergers. Mergers are no more common in UDG than non-UDG progenitors, but mergers that create UDGs tend to happen earlier - almost never occurring after $z=1$, produce a temporary boost in spin, and cause star formation to be redistributed to the outskirts of galaxies, resulting in lower central star formation rates. The centers of the galaxies fade as their central stellar populations age, but their global star formation rates are maintained through bursts of star formation at larger radii, producing steeper negative g-r color gradients. 
This formation channel is unique relative to other proposals for UDG formation in isolated galaxies, demonstrating that UDGs can potentially be formed through multiple mechanisms. ",The Formation of Isolated Ultra-Diffuse Galaxies in Romulus25,8,"['New paper day! We use the Romulus25 cosmological simulation to study the formation and evolution of 100+ ultra-diffuse galaxies (UDGs) in the field: <LINK>\n#nbodyshopgotchu', 'Shout-out to co-authors Michael Tremmel (@MichaelTremmel), Alyson Brooks, Ferah Munshi (@fdmtweets), Daisuke Nagai, Ray Sharma (@RaySSharma), and Tom Quinn.\nHere are some of our results:', 'We search Romulus25 – one of the highest resolution volumes ever run – for isolated UDGs. That is, large, low surface brightness (LSB) galaxies that live far away from anything massive. Here are a few of our galaxies + SB profiles &amp; fits – UDGs on the right, non-UDGs on the left: https://t.co/wkdemCZ0Ci', 'Other than their large sizes and LSB, present-day UDGs aren’t really that weird. They have normal star formation rates (see below), HI masses, colors, and virial masses. They’re also pretty common: ~20% of all field galaxies with Mstar = 10^7-9 Msol are UDGs. https://t.co/l0OeHNpaGn', 'What distinguishes UDGs is their evolution! Higher mass dwarf galaxies don’t typically fade much after z~1, but UDGs become increasingly LSB. Same with size: non-UDGs don’t change much over time, but UDGs start growing and don’t stop. https://t.co/xR7cS7m4JD', 'Part of the reason for this seems to be that UDGs evolve to lower central star formation rates. As their central stars age, the cores of the galaxies get fainter, similar to what we see in galaxy cluster simulations (RomulusC; https://t.co/9oWZA88Trv) https://t.co/b8xlA1b7D4', 'However, field UDGs have typical star formation rates for their stellar masses, so what’s really happening is that star-forming gas is spreading out! This results in bright, new stars forming at larger radii, leading to an increase in size over time. But why?', 'It turns out that most of our field UDGs had early mergers (z&gt;1) that temporarily spun them up and redistributed their star formation. In many UDGs, we see bursts of star formation at high radii for billions of years post-merger: https://t.co/UTGayIlwJV']",20,05,1898
3887,30,1473011163638030339,15311609,James Reed,"Interested in learning the design principles and technical decisions that went into PyTorch's new `torch.fx` program transformation framework? Learn all that and more from our new paper on arXiv <LINK> @Dr_flerken @PyTorch Can you elaborate? I just whipped up a small transform in terms of the `nn.Transformer` module: <LINK>. That works. Would be glad to look into whatever issue you're having @RaineyCode Hi @RaineyCode, let us know any abbreviations/terms that are not easy to understand from the paper and we can help!",https://arxiv.org/abs/2112.08429,"Modern deep learning frameworks provide imperative, eager execution programming interfaces embedded in Python to provide a productive development experience. However, deep learning practitioners sometimes need to capture and transform program structure for performance optimization, visualization, analysis, and hardware integration. We study the different designs for program capture and transformation used in deep learning. By designing for typical deep learning use cases rather than long tail ones, it is possible to create a simpler framework for program capture and transformation. We apply this principle in torch.fx, a program capture and transformation library for PyTorch written entirely in Python and optimized for high developer productivity by ML practitioners. We present case studies showing how torch.fx enables workflows previously inaccessible in the PyTorch ecosystem. ","Torch.fx: Practical Program Capture and Transformation for Deep Learning
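Since the record above concerns a program capture and transformation library, a tiny end-to-end illustration may help readers unfamiliar with torch.fx. The Block module and the relu-to-gelu rewrite below are invented for this example (they are not from the paper or the linked gist); symbolic_trace, GraphModule.graph, graph.lint(), and recompile() are part of the public torch.fx API for the capture-and-transform workflow the paper describes.

import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class Block(nn.Module):
    # Toy module; the capture/transform workflow is the point, not the model.
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x) + x)

traced = symbolic_trace(Block())   # capture: nn.Module -> GraphModule
print(traced.graph)                # inspect the recorded IR

# Transform: rewrite every torch.relu call in the captured graph to gelu.
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.nn.functional.gelu
traced.graph.lint()                # sanity-check the mutated graph
traced.recompile()                 # regenerate Python code from the graph

out = traced(torch.randn(2, 16))   # the transformed module runs like any other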