diff --git "a/arxiv_sample.jsonl" "b/arxiv_sample.jsonl" new file mode 100644--- /dev/null +++ "b/arxiv_sample.jsonl" @@ -0,0 +1,10 @@ +{"text": "\\section{Introduction}\n\\label{sec:intro}\n\n\\emph{Gender diversity}, or more often the lack thereof, among participants in\nsoftware development activities has been thoroughly studied in recent years. In\nparticular, the presence of, effects of, and countermeasures for \\emph{gender\n bias} in Free/Open Source Software (FOSS) have received a lot of attention\nover the past decade~\\cite{david2008fossdevs, qiu2010kdewomen,\n nafus2012patches, kuechler2012genderfoss, vasilescu2014gender,\n oneil2016debiansurvey, robles2016womeninfoss, terrell2017gender,\n zacchiroli2021gender}. \\emph{Geographic diversity}, on the other hand, is the\nkind of diversity that stems from participants in some global activity coming\nfrom different world regions and cultures.\n\nGeographic diversity in FOSS has received relatively little attention in scholarly\nworks. In particular, while seminal survey-based and\npoint-in-time medium-scale studies of the geographic origins of FOSS\ncontributors exist~\\cite{ghosh2005understanding, david2008fossdevs,\n barahona2008geodiversity, takhteyev2010ossgeography, robles2014surveydataset,\n wachs2021ossgeography}, large-scale longitudinal studies of the geographic\norigin of FOSS contributors are still lacking. 
Such a quantitative\ncharacterization would be useful to inform decisions related to global\ndevelopment teams~\\cite{herbsleb2007globalsweng} and hiring strategies in the\ninformation technology (IT) market, as well as contribute factual information\nto the debates on the economic impact and sociology of FOSS around the world.\n\n\n\\paragraph{Contributions}\n\nWith this work we contribute to closing this gap by conducting \\textbf{the first\n longitudinal study of the geographic origin of contributors to public code\n over 50 years.} Specifically, we provide a preliminary answer to the\nfollowing research question:\n\\begin{researchquestion}\n From which world regions do authors of publicly available commits come,\n and how has this changed over the past 50 years?\n \\label{rq:geodiversity}\n\\end{researchquestion}\nWe use as dataset the \\SWH/ archive~\\cite{swhipres2017} and analyze from it\n2.2 billion\\xspace commits archived from 160 million\\xspace projects and authored by\n43 million\\xspace authors during the 1971--2021 time period. \nWe geolocate developers to\n\\DATAWorldRegions/ world regions, using as signals email country code top-level domains (ccTLDs) and \nauthor (first/last) names compared with name distributions around the world, and UTC offsets \nmined from commit metadata.\n\nWe find evidence of the early dominance of North America in open source\nsoftware, later joined by Europe. 
After that period, the geographic diversity \nin public code has been constantly increasing.\nWe also identify relevant historical shifts\nrelated to the end of the UNIX wars and the increase of coding literacy in\nCentral and South Asia, as well as to broader phenomena like colonialism and\nthe movement of people across countries (immigration/emigration).\n\n\n\n\n\\paragraph{Data availability.}\n\nA replication package for this paper is available from Zenodo at\n\\url{https://doi.org/10.5281/zenodo.6390355}~\\cite{replication-package}.\n\n\n \\section{Related Work}\n\\label{sec:related}\n\nBoth early and recent works~\\cite{ghosh2005understanding, david2008fossdevs,\n robles2014surveydataset, oneil2016debiansurvey} have characterized the\ngeography of Free/Open Source Software (FOSS) using \\emph{developer surveys},\nwhich provide high-quality answers but are limited in size (2-5\\,K developers)\nand can be biased by participant sampling.\n\nIn 2008 Barahona et al.~\\cite{barahona2008geodiversity} conducted a seminal\nlarge-scale (for the time) study on FOSS \\emph{geography using mining software\n repositories (MSR) techniques}. They analyzed the origin of 1\\,M contributors\nusing the SourceForge user database and mailing list archives over the\n1999--2005 period, using as signals information similar to ours: email domains\nand UTC offsets. \nThe studied period (7 years) in~\\cite{barahona2008geodiversity} is shorter than \nwhat is studied in the present paper (50 years) and the data sources are \nlargely different; with that in mind, our results show a slightly larger share of \nEuropean v.~North American contributions.\n\nAnother empirical work from 2010 by Takhteyev and\nHilts~\\cite{takhteyev2010ossgeography} harvested self-declared geographic\nlocations of GitHub accounts recursively following their connections,\ncollecting information for $\\approx$\\,70\\,K GitHub users. 
A very recent\nwork~\\cite{wachs2021ossgeography} by Wachs et al.~has geolocated half a million\nGitHub users, each having contributed at least 100 commits and\nself-declaring locations on their GitHub profiles. While the study is\npoint-in-time as of 2021, the authors compare their findings\nagainst~\\cite{barahona2008geodiversity, takhteyev2010ossgeography} to\ncharacterize the evolution of FOSS geography over the time snapshots taken by\nthe three studies.\n\nCompared with previous empirical works, our study is much larger in scale---having\nanalyzed 43 million\\xspace authors of 2.2 billion\\xspace commits from 160 million\\xspace\nprojects---longitudinal over 50 years of public code contributions rather than\npoint in time, and also more fine-grained (with year-by-year granularity over\nthe observed period). Methodologically, our study relies on Version Control\nSystem (VCS) commit data rather than platform-declared location information.\n\n\nOther works---in particular the work by Daniel~\\cite{daniel2013ossdiversity}\nand, more recently, Rastogi et al.~\\cite{rastogi2016geobias,\n rastogi2018geobias, prana2021geogenderdiversity}---have studied geographic\n\\emph{diversity and bias}, i.e., the extent to which the origin of FOSS\ndevelopers affects their collaborative coding activities.\nIn this work we characterized geographic diversity in public code for the first\ntime at this scale, both in terms of contributors and observation period. 
We do\nnot tackle the bias angle, but provide empirical data and findings that can be\nleveraged to that end as future work.\n\n\\emph{Global software engineering}~\\cite{herbsleb2007globalsweng} is the\nsub-field of software engineering that has analyzed the challenges of scaling\ndeveloper collaboration globally, including the specific concern of how to deal\nwith geographic diversity~\\cite{holmstrom2006globaldev, fraser2014eastwest}.\nDecades later the present study provides evidence that can be used, in the\nspecific case of public code and at a very large scale, to verify which\npromises of global software engineering have borne fruit.\n\n\n\n\n\n\n \\section{Methodology}\n\\label{sec:method}\n\n\n\\newif\\ifgrowthfig \\growthfigtrue\n\\ifgrowthfig\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{yearly-commits}\n \\caption{Yearly public commits over time (log scale).\n}\n \\label{fig:growth}\n\\end{figure}\n\\fi\n\n\\paragraph{Dataset}\n\nWe retrieved from \\SWH/~\\cite{swh-msr2019-dataset} all commits archived until \\DATALastCommitDate/.\nThey amount to \\DATACommitsRaw/ commits, unique by SHA1 identifier, harvested from \\DATATotalCommitsInSH/ public projects coming from major development forges (GitHub, GitLab, etc.) 
and package repositories (Debian, PyPI, NPM, etc.).\nCommits in the dataset are by \\DATAAuthorsRaw/ authors, unique by $\\langle$name, email$\\rangle$ pairs.\nThe dataset came as two relational tables, one for commits and one for authors, with the former referencing the latter via a foreign key.\n\\iflong\nEach row in the commit table contains the following fields: commit SHA1 identifier, author and committer timestamps, author and committer identifiers (referencing the author table).\nThe distinction between commit authors and committers comes from Git, which allows committing a change authored by someone else.\nFor this study we focused on authors and ignored committers, as the difference between the two is not relevant for our research questions and the number of commits whose committer differs from their author is negligible.\n\\fi\nFor each entry in the author table we have author full name and email as two separate strings of raw bytes.\n\nWe removed implausible or unusable names that: are not decodable as UTF-8 (\\DATAAuthorsRmNondecodable/ author names removed), are email addresses instead of names (\\DATAAuthorsRmEmail/ ``names''), consist of only blank characters (\\DATAAuthorsRmBlank/), contain more than 10\\% non-letters (\\DATAAuthorsRmNonletter/), are longer than 100 characters (\\DATAAuthorsRmToolong/).\nAfter filtering, about \\DATAAuthorsPlausibleApprox/ authors (\\DATAAuthorsPlausiblePct/ of the initial dataset) remained for further analysis.\n\nNote that the number of public code commits (and authors) contained in the\ninitial dataset grows exponentially over\ntime~\\cite{swh-provenance-emse}\\ifgrowthfig, as shown for commits in\n\\Cref{fig:growth}\\else: from $10^4$ commits in 1971, to $10^6$ in 1998, to\nalmost $10^9$ in 2020\\fi. 
As a consequence the observed trends tend to be more\nstable in recent decades than in 40+ year-old ones, due to statistics taken on\nexponentially larger populations.\n\n\n\\paragraph{Geolocation}\n\n\\begin{figure}\n \\centering\n \\includegraphics[clip,trim=6cm 6cm 0 0,width=\\linewidth]{subregions-ours}\n \\caption{The \\DATAWorldRegions/ world regions used as geolocation targets.}\n \\label{fig:worldmap}\n\\end{figure}\n\nAs geolocation targets we use macro world regions derived from the United Nations geoscheme~\\cite{un1999geoscheme}.\nTo avoid domination by large countries (e.g., China or Russia) within macro regions, we merged and split some regions based on geographic proximity and the sharing of preeminent cultural identification features, such as spoken language.\n\\Cref{fig:worldmap} shows the final list of \\DATAWorldRegions/ world regions used as geolocation targets in this study.\n\nGeolocation of commit authors to world regions uses the two complementary techniques introduced in~\\cite{icse-seis-2022-gender}, briefly recalled below.\nThe first one relies on the country code top-level domain (ccTLD) of email addresses extracted from commit metadata, e.g., \\texttt{.fr}, \\texttt{.ru}, \\texttt{.cn}, etc.\nWe started from the IANA list of Latin character ccTLDs~\\cite{wikipedia-cctld} and manually mapped each corresponding territory to a target world region.\n\nThe second geolocation technique uses the UTC offset of commit timestamps (e.g., UTC-05:00) and author names to determine the most likely world region of the commit author.\nFor each UTC offset we determine a list of compatible places (country, state, or dependent territory) in the world that, at the time of that commit, had that UTC offset; commit time is key here, as country UTC offsets vary over time due to timezone changes.\nTo make this determination we use the IANA time zone database~\\cite{tzdata}.\n\nThen we assign to each place a score that captures the likelihood that a given author 
name is characteristic of it.\nTo this end we use the Forebears dataset of the frequencies of the most common first and family names which, quoting from~\\cite{forebear-names}: {\\itshape ``provides the approximate incidence of forenames and surnames produced from a database of \\num{4 044 546 938} people (55.5\\% of living people in 2014). As of September 2019 it covers \\num{27 662 801} forenames and \\num{27 206 821} surnames in 236 jurisdictions.''}\nAs in our dataset authors are full name strings (rather than split by first/family name), we first tokenize names (by blanks and case changes) and then look up individual tokens in both first and family name frequency lists.\nFor each element found in name lists we multiply the place population\\footnotemark{} by the name frequency to obtain a measure that is proportional to the number of persons bearing that name (token) in the specific place.\n\\footnotetext{To obtain population totals---as the notion of ``place'' is heterogeneous: full countries v.~slices of large countries spanning multiple timezones---we use a mixture of primary sources (e.g., government websites), and non-primary ones (e.g., Wikipedia articles).}\nWe sum this figure for all elements to obtain a place score, ending up with a list of $\\langle$place, score$\\rangle$ pairs.\nWe then partition this list by the world region that a place belongs to and sum the score for all the places in each region to obtain an overall score, corresponding to the likelihood that the commit belongs to a given world region.\nFinally, we assign the commit to the world region with the highest score.\n\nThe email-based technique suffers from the limited and unbalanced use of ccTLDs: most developers use generic TLDs such as \\texttt{.com}, \\texttt{.org}, or \\texttt{.net}.\nMoreover, this does not happen uniformly across zones: US-based developers, for example, use the \\texttt{.us} ccTLD much more rarely than their European counterparts.\nOn the other 
hand the offset/name-based technique relies on the UTC offset of the commit timestamps.\nDue to tool configurations on developer setups, a large number of commits in the dataset have a UTC offset equal to zero.\nThis affects recent commits less (\\DATACommitsTZZTwoThousandTwenty/ of 2020s commits have a zero offset) than older ones (\\DATACommitsTZZTwoThousand/ in 2000).\nAs a result the offset/name-based technique could end up detecting a large share of older commits as authored by African developers, and to a lesser extent Europeans.\n\nTo counter these issues we combine the two geolocation techniques by applying the offset/name-based technique to all commits with a non-zero UTC offset, and the email-based one to all other commits.\n\n\n \\section{Results and Discussion}\n\\label{sec:results}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{stacked.pdf}\n \\caption{Ratio of commits (above) and active authors (below) by world zone over the 1971--2020 period.}\n \\Description[Chart]{Stacked bar chart showing the world zone ratios for commits and authors over the 1971--2020 period.}\n \\label{fig:results}\n\\end{figure*}\n\n\n \nTo answer \\cref{rq:geodiversity} we gathered the number of commits and distinct authors per year and per world zone.\nWe present the obtained results in \\Cref{fig:results} as two stacked bar charts, showing yearly breakdowns for commits and authors respectively.\nEvery bar represents a year and is partitioned into slices showing the commit/author ratio for each of the world regions of \\Cref{fig:worldmap} in that year.\nTo avoid outliers due to sporadic contributors, in the author chart we only consider authors having contributed at least 5 commits in a given year.\n\nWhile observing trends in the charts, remember that the total numbers of commits and authors grow exponentially over time.\nHence for the first years in the charts, the number of data points in some world regions can be extremely small, with 
negative consequences on the stability of trends.\n\n\n\n\n\\paragraph{Geographic diversity over time}\n\nOverall, the general trend appears to be that the \\textbf{geographic diversity in public code is increasing}: North America and Europe alternated their ``dominance'' until the middle of the 90s; from that moment on most other world regions show a slow but steady increment.\nThis trend of increased participation in public code development includes Central and South Asia (comprising India), Russia, Africa, and Central and South America.\nEven zones that do not seem to follow this trend, such as Australia and New Zealand, are increasing their participation, but at a lower speed than other zones.\nFor example, Australia and New Zealand increased the absolute number of their commits by about 3 orders of magnitude from 2000 to the present day.\n\nAnother interesting phenomenon that can be appreciated in both charts is the sudden contraction of contributions from North America in 1995; since the charts depict ratios, this corresponds to other zones, and Europe in particular, increasing their share.\nAn analysis of the main contributors in the years right before the contraction shows that nine out of ten have \\texttt{ucbvax.Berkeley.EDU} as author email domain, and the tenth is Keith Bostic, one of the leading Unix BSD developers, appearing with email \\texttt{bostic}.\nNo developer with the same email domain appears among the first hundred contributors in 1996.\nThis shows the relevance that BSD Unix and the Computer Systems Research Group at the University of California at Berkeley had in the history of open source software.\nThe group was disbanded in 1995, partially as a consequence of the so-called UNIX wars~\\cite{kernighan2019unixhistory}, and this contributes significantly---also because of the relatively low amount of public code circulating at the time---to the sudden drop of contributions from North America in subsequent 
years.\nDescendant UNIX operating systems based on BSD, such as OpenBSD, FreeBSD, and NetBSD, had smaller relevance to world trends due to (i) the increasing amount of open source code coming from elsewhere and (ii) their more geographically diverse developer community.\n\nAnother time frame in which the ratios for Europe and North America are subject to large, sudden changes is 1975--79.\nA preliminary analysis shows that these ratios are erratic due to the very limited number of commits in that time period, but we were unable to detect a specific root cause.\nTrends for those years should be subject to further studies, in collaboration with software historians.\n\n\n\\paragraph{Colonialism}\n\nAnother trend that stands out from the charts is that Africa appears to be well represented.\nTo assess if this results from a methodological bias, we double-checked the commits detected as originating from Africa for timezones included in the $[0, 3]$ range using both the email-based and the offset/name-based methods.\nThe results show that the offset/name-based approach assigns 22.7\\% of the commits to Africa whereas the email-based one only assigns 2.7\\% of them.\nWhile a deeper investigation is in order, it is our opinion that the phenomenon we are witnessing here is a consequence of colonialism, specifically the adoption of European names in African countries.\nFor example, the name Eric, derived from Old Norse, is more popular in Ghana than it is in France or in the UK.\nThis challenges the ability of the offset/name-based method to correctly differentiate between candidate places.\nTogether with the fact that several African countries have large populations, the offset/name-based method could detect European names as originating from Africa.\nWhile this cuts both ways, the likelihood of a random person contributing to public code is very different between European countries, all having a well-developed software industry, and African countries that do not all share this 
trait.\n\n\n\\paragraph{Immigration/emigration}\n\nAnother area where a similar phenomenon could be at play is the evolution of Central and South America.\nContributions from this macro region appear to be growing steadily.\nTo assess if this is the result of a bias introduced by the name-based detection, we analyzed the evolution of offset/name-based assignment over time for authors whose email domain is among the top-ten US-based entities in terms of overall contributions (estimated in turn by analyzing the most frequent email domains and manually selecting those belonging to US-based entities).\nIn 1971 no author with an email from top US-based entities is detected as belonging to Central and South America, whereas in 2019 the ratio is 12\\%.\nNowadays more than one tenth of the people email-associated with top US-based entities have popular Central and South American names, which we posit as a likely consequence of immigration into the US (emigration from Central and South America).\nSince immigration has a much longer history than what we are studying here, what we are witnessing probably includes long-term consequences of it, such as second and third generation immigrants employed in white-collar jobs like software development.\n\n\n\n\n \\section{Limitations and Future Work}\n\\label{sec:conclusion}\n\nWe have performed an exploratory, yet very large scale, empirical study of the geographic diversity in public code commits over time.\nWe have analyzed 2.2 billion\\xspace public commits covering the \\DATAYearRange/ time period.\nWe have geolocated developers to \\DATAWorldRegions/ world regions using as signals email domains, timezone offsets, and author names.\nOur findings show that the geographic diversity in public code is increasing over time, and markedly so over the past 20--25 years.\nObserved trends also co-occur with historical events and macro phenomena like the end of the UNIX wars, increase of coding literacy around the world, colonialism, and 
immigration.\n\n\n\\medskip\n\\emph{Limitations.}\nThis study relies on a combination of two geolocation methods: one based on email domains, another based on commit UTC offsets and author names.\nWe discussed some of the limitations of either method in \\Cref{sec:method}, motivating our decision to restrict the use of the email-based method to commits with a zero UTC offset.\nAs a consequence, for most commits in the dataset the offset/name-based method is used.\nWith this method, the frequencies of forenames and surnames are used to rank candidate zones that have a compatible UTC offset at commit time.\n\nA practical consequence of this is that for commits with, say, offset UTC+09:00 the candidate places can be Russia, Japan and Australia, depending on the specific date due to daylight saving time.\nPopular forenames and surnames in these regions tend to be quite different, so the method is likely to provide a reliable detection.\nFor other offsets the set of popular forenames and surnames from candidate zones can exhibit more substantial overlaps, negatively impacting detection accuracy.\nWe have discussed some of these cases in \\Cref{sec:results}, but others might be lingering in the results, impacting observed trends.\n\nThe choice of using the email-based method for commits with zero UTC offset, and the offset/name-based method elsewhere, has allowed us to study all developers not having a country-specific email domain (ccTLD), but comes with the risk of under-representing the world zones that have (in part and at some times of the year) an actual UTC offset of zero.\n\nA potential bias in this study could be introduced by the fact that the name database used for offset/name-based geolocation only contains names formed using Latin alphabet characters.\nWe looked for names containing Chinese, Japanese, and Korean characters in the original dataset, finding only a negligible number of authors who use non-Latin characters in their VCS names, 
which leads us to believe that the impact of this issue is minimal.\n\nWe did not apply identity merging (e.g., using state-of-the-art tools like SortingHat~\\cite{moreno2019sortinghat}), but we do not expect this to be a significant issue because: (a) to introduce bias in author trends the distribution of identity merges around the world should be uneven, which seems unlikely; and (b) the observed commit trends (which would be unaffected by identity merging) are very similar to observed author trends.\n\nWe did not systematically remove known bot accounts~\\cite{lebeuf2018swbots} from the author dataset, but we did check for the presence of software bots among the top committers of each year. We only found limited traces of continuous integration (CI) bots, used primarily to automate merge commits. After removing CI bots from the dataset the observed global trends were unchanged; therefore this paper presents unfiltered data.\n\n\n\\medskip\n\\emph{Future work.}\nTo some extent the above limitations are the price to pay to study such a large dataset: there exists a trade-off between large-scale analysis and accuracy.\nWe plan nonetheless to further investigate and mitigate them in future work.\nMulti-method approaches, merging data mining with social science methods, could be applied to address some of the questions raised in this exploratory study.\nWhile they do not scale to the whole dataset, multi-method approaches can be adopted to dig deeper into specific aspects, specifically those related to social phenomena.\nSoftware is a social artifact; it is no wonder that aspects related to sociocultural evolution emerge when analyzing its evolution at this scale.\n\n\n\n\n \n\\clearpage\n\n\n", "meta": {"timestamp": "2022-03-30T02:27:00", "yymm": "2203", "arxiv_id": "2203.15369", "language": "en", "url": "https://arxiv.org/abs/2203.15369"}} +{"text": "\\section{Introduction}\n\nOne of the fundamental ingredients in the theory of non-commutative or\nquantum geometry is the 
notion of a differential calculus.\nIn the framework of quantum groups the natural notion\nis that of a\nbicovariant differential calculus as introduced by Woronowicz\n\\cite{Wor_calculi}. Due to the allowance of non-commutativity\nthe uniqueness of a canonical calculus is lost.\nIt is therefore desirable to classify the possible choices.\nThe most important piece is the space of one-forms or ``first\norder differential calculus'' to which we will restrict our attention\nin the following. (From this point on we will use the term\n``differential calculus'' to denote a\nbicovariant first order differential calculus).\n\nMuch attention has been devoted to the investigation of differential\ncalculi on quantum groups $C_q(G)$ of function algebra type for\n$G$ a simple Lie group.\nNatural differential calculi on matrix quantum groups were obtained by\nJurco \\cite{Jur} and\nCarow-Watamura et al.\\\n\\cite{CaScWaWe}. A partial classification of calculi of the same\ndimension as the natural ones\nwas obtained by\nSchm\\\"udgen and Sch\\\"uler \\cite{ScSc2}.\nMore recently, a classification theorem for factorisable\ncosemisimple quantum groups was obtained by Majid \\cite{Majid_calculi},\ncovering the general $C_q(G)$ case. A similar result was\nobtained later by Baumann and Schmitt \\cite{BaSc}.\nAlso, Heckenberger and Schm\\\"udgen \\cite{HeSc} gave a\ncomplete classification on $C_q(SL(N))$ and $C_q(Sp(N))$. \n\n\nIn contrast, for $G$ not simple or semisimple the differential calculi\non $C_q(G)$\nare largely unknown. A particularly basic case is the Lie group $B_+$\nassociated with the Lie algebra $\\lalg{b_+}$ generated by two elements\n$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra\n\\ensuremath{U_q(\\lalg{b_+})}{}\nis self-dual, i.e.\\ is non-degenerately paired with itself \\cite{Drinfeld}.\nThis has an interesting consequence: \\ensuremath{U_q(\\lalg{b_+})}{} may be identified with (a\ncertain algebraic model of) \\ensuremath{C_q(B_+)}. 
The differential calculi on this\nquantum group and on its ``classical limits'' \\ensuremath{C(B_+)}{} and \\ensuremath{U(\\lalg{b_+})}{}\nwill be the main concern of this paper. We pay hereby equal attention\nto the dual notion of ``quantum tangent space''.\n\nIn section \\ref{sec:q} we obtain the complete classification of differential\ncalculi on \\ensuremath{C_q(B_+)}{}. It turns out that (finite\ndimensional) differential\ncalculi are characterised by finite subsets $I\\subset\\mathbb{N}$.\nThese\nsets determine the decomposition into coirreducible (i.e.\\ not\nadmitting quotients) differential calculi\ncharacterised by single integers. For the coirreducible calculi the\nexplicit formulas for the commutation relations and braided\nderivations are given.\n\nIn section \\ref{sec:class} we give the complete classification for the\nclassical function algebra \\ensuremath{C(B_+)}{}. It is essentially the same as in the\n$q$-deformed setting and we stress this by giving an almost\none-to-one correspondence of differential calculi to those obtained in\nthe previous section. In contrast, however, the decomposition and\ncoirreducibility properties do not hold at all. (One may even say that\nthey are maximally violated). We give the explicit formulas for those\ncalculi corresponding to coirreducible ones.\n\nMore interesting perhaps is the ``dual'' classical limit. I.e.\\ we\nview \\ensuremath{U(\\lalg{b_+})}{} as a quantum function algebra with quantum enveloping\nalgebra \\ensuremath{C(B_+)}{}. This is investigated in section \\ref{sec:dual}. It\nturns out that in this setting we have considerably more freedom in\nchoosing a\ndifferential calculus since the bicovariance condition becomes much\nweaker. This shows that this dual classical limit is in a sense\n``unnatural'' as compared to the ordinary classical limit of section\n\\ref{sec:class}. \nHowever, we can still establish a correspondence of certain\ndifferential calculi to those of section \\ref{sec:q}. 
The\ndecomposition properties are conserved while the coirreducibility\nproperties are not.\nWe give the\nformulas for the calculi corresponding to coirreducible ones.\n\nAnother interesting aspect of viewing \\ensuremath{U(\\lalg{b_+})}{} as a quantum function\nalgebra is the connection to quantum deformed models of space-time and\nits symmetries. In particular, the $\\kappa$-deformed Minkowski space\ncoming from the $\\kappa$-deformed Poincar\\'e algebra\n\\cite{LuNoRu}\\cite{MaRu} is just a simple generalisation of \\ensuremath{U(\\lalg{b_+})}.\nWe use this in section \\ref{sec:kappa} to give\na natural $4$-dimensional differential calculus. Then we show (in a\nformal context) that integration is given by\nthe usual Lebesgue integral on $\\mathbb{R}^n$ after normal ordering.\nThis is obtained in an intrinsic context different from the standard\n$\\kappa$-Poincar\\'e approach.\n\nA further important motivation for the investigation of differential\ncalculi on\n\\ensuremath{U(\\lalg{b_+})}{} and \\ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale\nHopf algebra \\cite{Majid_Planck}\\cite{Majid_book}. This shall be\ndeveloped elsewhere.\n\nIn the remaining parts of this introduction we will specify our\nconventions and provide preliminaries on the quantum group \\ensuremath{U_q(\\lalg{b_+})}, its\ndeformations, and differential calculi.\n\n\n\\subsection{Conventions}\n\nThroughout, $\\k$ denotes a field of characteristic 0 and\n$\\k(q)$ denotes the field of rational\nfunctions in one parameter $q$ over $\\k$.\n$\\k(q)$ is our ground field in\nthe $q$-deformed setting, while $\\k$ is the\nground field in the ``classical'' settings.\nWithin section \\ref{sec:q} one could equally well view $\\k$ as the ground\nfield with $q\\in\\k^*$ not a root of unity. 
This point of view is\nproblematic, however, when obtaining ``classical limits'' as\nin sections \\ref{sec:class} and \\ref{sec:dual}.\n\nThe positive integers are denoted by $\\mathbb{N}$ while the non-negative\nintegers are denoted by $\\mathbb{N}_0$.\nWe define $q$-integers, $q$-factorials and\n$q$-binomials as follows:\n\\begin{gather*}\n[n]_q=\\sum_{i=0}^{n-1} q^i\\qquad\n[n]_q!=[1]_q [2]_q\\cdots [n]_q\\qquad\n\\binomq{n}{m}=\\frac{[n]_q!}{[m]_q! [n-m]_q!}\n\\end{gather*}\nFor a function of several variables (among\nthem $x$) over $\\k$ we define\n\\begin{gather*}\n(T_{a,x} f)(x) = f(x+a)\\\\\n(\\fdiff_{a,x} f)(x) = \\frac{f(x+a)-f(x)}{a}\n\\end{gather*}\nwith $a\\in\\k$ and similarly over $\\k(q)$\n\\begin{gather*}\n(Q_{m,x} f)(x) = f(q^m x)\\\\\n(\\partial_{q,x} f)(x) = \\frac{f(x)-f(qx)}{x(1-q)}\\\\\n\\end{gather*}\nwith $m\\in\\mathbb{Z}$.\n\nWe frequently use the notion of a polynomial in an extended\nsense. Namely, if we have an algebra with an element $g$ and its\ninverse $g^{-1}$ (as\nin \\ensuremath{U_q(\\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power\nseries in $g$ with exponents in $\\mathbb{Z}$. 
The length of such a polynomial\nis the difference between highest and lowest degree.\n\nIf $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra\nwith the opposite product.\n\n\\subsection{\\ensuremath{U_q(\\lalg{b_+})}{} and its Classical Limits}\n\\label{sec:intro_limits}\n\nWe recall that,\nin the framework of quantum groups, the duality between enveloping algebra\n$U(\\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie\ngroup carries over to $q$-deformations.\nIn the case of\n$\\lalg{b_+}$, the\n$q$-deformed enveloping algebra \\ensuremath{U_q(\\lalg{b_+})}{} defined over $\\k(q)$ as\n\\begin{gather*}\nU_q(\\lalg{b_+})=\\k(q)\\langle X,g,g^{-1}\\rangle \\qquad\n\\text{with relations} \\\\\ng g^{-1}=1 \\qquad Xg=qgX \\\\\n\\cop X=X\\otimes 1 + g\\otimes X \\qquad\n\\cop g=g\\otimes g \\\\\n\\cou (X)=0 \\qquad \\cou (g)=1 \\qquad\n\\antip X=-g^{-1}X \\qquad \\antip g=g^{-1}\n\\end{gather*}\nis self-dual. Consequently, it\nmay alternatively be viewed as the quantum algebra \\ensuremath{C_q(B_+)}{} of\nfunctions on the Lie group $B_+$ associated with $\\lalg{b_+}$.\nIt has two classical limits, the enveloping algebra \\ensuremath{U(\\lalg{b_+})}{}\nand the function algebra $C(B_+)$.\nThe transition to the classical enveloping algebra is achieved by\nreplacing $q$\nby $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in\n$t$, introducing a new generator $H$. 
Now, all expressions are written in\nthe form $\\sum_j a_j t^j$ and only the lowest order in $t$ is kept.\nThe transition to the classical function algebra on the other hand is\nachieved by setting $q=1$.\nThis may be depicted as follows:\n\\[\\begin{array}{c @{} c @{} c @{} c}\n& \\ensuremath{U_q(\\lalg{b_+})} \\cong \\ensuremath{C_q(B_+)} && \\\\\n& \\diagup \\hspace{\\stretch{1}} \\diagdown && \\\\\n \\begin{array}{l} q=e^{-t} \\\\ g=e^{tH} \\end{array} \\Big| _{t\\to 0} \n && q=1 &\\\\\n \\swarrow &&& \\searrow \\\\\n \\ensuremath{U(\\lalg{b_+})} & <\\cdots\\textrm{dual}\\cdots> && \\ensuremath{C(B_+)}\n\\end{array}\\]\nThe self-duality of \\ensuremath{U_q(\\lalg{b_+})}{} is expressed as a pairing\n$\\ensuremath{U_q(\\lalg{b_+})}\\times\\ensuremath{U_q(\\lalg{b_+})}\\to\\k$\nwith\nitself:\n\\[\\langle X^n g^m, X^r g^s\\rangle =\n \\delta_{n,r} [n]_q!\\, q^{-n(n-1)/2} q^{-ms}\n \\qquad\\forall n,r\\in\\mathbb{N}_0\\: m,s\\in\\mathbb{Z}\\]\nIn the classical limit this becomes the pairing $\\ensuremath{U(\\lalg{b_+})}\\times\\ensuremath{C(B_+)}\\to\\k$\n\\begin{equation}\n\\langle X^n H^m, X^r g^s\\rangle =\n \\delta_{n,r} n!\\, s^m\\qquad \\forall n,m,r\\in\\mathbb{N}_0\\: s\\in\\mathbb{Z}\n\\label{eq:pair_class}\n\\end{equation} \n\n\n\n\\subsection{Differential Calculi and Quantum Tangent Spaces}\n\nIn this section we recall some facts about differential calculi\nalong the lines of Majid's treatment in \\cite{Majid_calculi}.\n\nFollowing Woronowicz \\cite{Wor_calculi}, first order bicovariant differential\ncalculi on a quantum group $A$ (of\nfunction algebra type) are in one-to-one correspondence to submodules\n$M$ of $\\ker\\cou\\subset A$ in the category $^A_A\\cal{M}$ of (say) left\ncrossed modules of $A$ via left multiplication and left adjoint\ncoaction:\n\\[\na\\triangleright v = av \\qquad \\mathrm{Ad_L}(v)\n =v_{(1)}\\antip v_{(3)}\\otimes v_{(2)}\n\\qquad \\forall a\\in A, v\\in A\n\\]\nMore precisely, given a crossed submodule $M$, the 
corresponding\ncalculus is given by $\\Gamma=\\ker\\cou/M\\otimes A$ with $\\diff a =\n\\pi(\\cop a - 1\\otimes a)$ ($\\pi$ the canonical projection).\nThe right action and coaction on $\\Gamma$ are given by\nthe right multiplication and coproduct on $A$, the left action and\ncoaction by the tensor product ones with $\\ker\\cou/M$ as a left\ncrossed module. In all of what follows, ``differential calculus'' will\nmean ``bicovariant first order differential calculus''.\n\nAlternatively \\cite{Majid_calculi}, given in addition a quantum group $H$\ndually paired with $A$\n(which we might think of as being of enveloping algebra type), we can\nexpress the coaction of $A$ on\nitself as an action of $H^{op}$ using the pairing:\n\\[\nh\\triangleright v = \\langle h, v_{(1)} \\antip v_{(3)}\\rangle v_{(2)}\n\\qquad \\forall h\\in H^{op}, v\\in A\n\\]\nThereby we change from the category of (left) crossed $A$-modules to\nthe category of left modules of the quantum double $A\\!\\bowtie\\! H^{op}$.\n\nIn this picture the pairing between $A$ and $H$ descends to a pairing\nbetween $A/\\k 1$ (which we may identify with $\\ker\\cou\\subset A$) and\n$\\ker\\cou\\subset H$. Further quotienting $A/\\k 1$ by $M$ (viewed in\n$A/\\k 1$) leads to a pairing with the subspace $L\\subset\\ker\\cou H$\nthat annihilates $M$. $L$ is called a ``quantum tangent space''\nand is dual to the differential calculus $\\Gamma$ generated by $M$ in\nthe sense that $\\Gamma\\cong \\Lin(L,A)$ via\n\\begin{equation}\nA/(\\k 1+M)\\otimes A \\to \\Lin(L,A)\\qquad\nv\\otimes a \\mapsto \\langle \\cdot, v\\rangle a\n\\label{eq:eval}\n\\end{equation}\nif the pairing between $A/(\\k 1+M)$ and $L$ is non-degenerate.\n\nThe quantum tangent spaces are obtained directly by dualising the\n(left) action of the quantum double on $A$ to a (right) action on\n$H$. 
Explicitly, this is the adjoint action and the coregular action\n\\[\nh \\triangleright x = h_{(1)} x \\antip h_{(2)} \\qquad\na \\triangleright x = \\langle x_{(1)}, a \\rangle x_{(2)}\\qquad\n \\forall h\\in H, a\\in A^{op},x\\in A\n\\]\nwhere we have converted the right action to a left action by going\nfrom \\mbox{$A\\!\\bowtie\\! H^{op}$}-modules to \\mbox{$H\\!\\bowtie\\! A^{op}$}-modules.\nQuantum tangent spaces are subspaces of $\\ker\\cou\\subset H$ invariant\nunder the projection of this action to $\\ker\\cou$ via \\mbox{$x\\mapsto\nx-\\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be\nconverted to a left coaction of $H$ being the comultiplication (with\nsubsequent projection onto $H\\otimes\\ker\\cou$).\n\nWe can use the evaluation map (\\ref{eq:eval})\nto define a ``braided derivation'' on elements of the quantum tangent\nspace via\n\\[\\partial_x:A\\to A\\qquad \\partial_x(a)={\\diff a}(x)=\\langle\nx,a_{(1)}\\rangle a_{(2)}\\qquad\\forall x\\in L, a\\in A\\]\nThis obeys the braided derivation rule\n\\[\\partial_x(a b)=(\\partial_x a) b\n + a_{(2)} \\partial_{a_{(1)}\\triangleright x}b\\qquad\\forall x\\in L, a\\in A\\]\n\nGiven a right invariant basis $\\{\\eta_i\\}_{i\\in I}$ of $\\Gamma$ with a\ndual basis $\\{\\phi_i\\}_{i\\in I}$ of $L$ we have\n\\[{\\diff a}=\\sum_{i\\in I} \\eta_i\\cdot \\partial_i(a)\\qquad\\forall a\\in A\\]\nwhere we denote $\\partial_i=\\partial_{\\phi_i}$. (This can be easily\nseen to hold by evaluation against $\\phi_i\\ \\forall i$.)\n\n\n\\section{Classification on \\ensuremath{C_q(B_+)}{} and \\ensuremath{U_q(\\lalg{b_+})}{}}\n\\label{sec:q}\n\nIn this section we completely classify differential calculi on \\ensuremath{C_q(B_+)}{}\nand, dually, quantum tangent spaces on \\ensuremath{U_q(\\lalg{b_+})}{}. 
We start by\nclassifying the relevant crossed modules and then proceed to a\ndetailed description of the calculi.\n\n\\begin{lem}\n\\label{lem:cqbp_class}\n(a) Left crossed \\ensuremath{C_q(B_+)}-submodules $M\\subseteq\\ensuremath{C_q(B_+)}$ by left\nmultiplication and left\nadjoint coaction are in one-to-one correspondence to\npairs $(P,I)$\nwhere $P\\in\\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\\subset\\mathbb{N}$ is\nfinite.\n$\\codim M<\\infty$ iff $P=1$. In particular $\\codim M=\\sum_{n\\in I}n$\nif $P=1$.\n\n(b) The finite codimensional maximal $M$\ncorrespond to the pairs $(1,\\{n\\})$ with $n$ the\ncodimension. The infinite codimensional maximal $M$ are characterised by\n$(P,\\emptyset)$ with $P$ irreducible and $P(g)\\neq 1-q^{-k}g$ for any\n$k\\in\\mathbb{N}_0$.\n\n(c) Crossed submodules $M$ of finite\ncodimension are intersections of maximal ones.\nIn particular $M=\\bigcap_{n\\in I} M^n$, with $M^n$ corresponding to\n$(1,\\{n\\})$.\n\\end{lem}\n\\begin{proof}\n(a) Let $M\\subseteq\\ensuremath{C_q(B_+)}$ be a crossed \\ensuremath{C_q(B_+)}-submodule by left\nmultiplication and left adjoint coaction and let\n$\\sum_n X^n P_n(g) \\in M$, where $P_n$ are polynomials in $g,g^{-1}$\n(every element of \\ensuremath{C_q(B_+)}{} can be expressed in\nthis form). From the formula for the coaction ((\\ref{eq:adl}), see appendix)\nwe observe that for all $n$ and for all $t\\le n$ the element\n\\[X^t P_n(g) \\prod_{s=1}^{n-t} (1-q^{s-n}g)\\]\nlies in $M$.\nIn particular\nthis is true for $t=n$, meaning that elements of constant degree in $X$\nlie separately in $M$. It is therefore enough to consider such\nelements.\n\nLet now $X^n P(g) \\in M$.\nBy left multiplication $X^n P(g)$ generates any element of the form\n$X^k P(g) Q(g)$, where $k\\ge n$ and $Q$ is any polynomial in\n$g,g^{-1}$. 
(Note that $Q(q^kg) X^k=X^k Q(g)$.)\nWe see that $M$ contains the following elements:\n\\[\\begin{array}{ll}\n\\vdots & \\\\\nX^{n+2} & P(g) \\\\\nX^{n+1} & P(g) \\\\\nX^n & P(g) \\\\\nX^{n-1} & P(g) (1-q^{1-n}g) \\\\\nX^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\\\\n\\vdots & \\\\\nX & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\ldots (1-q^{-1}g) \\\\\n& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\ldots (1-q^{-1}g)(1-g) \n\\end{array}\n\\]\nMoreover, if $M$ is generated by $X^n P(g)$ as a module\nthen these elements generate a basis for $M$ as a vector\nspace by left\nmultiplication with polynomials in $g,g^{-1}$. (Observe that the\napplication of the coaction to any of the elements shown does not\ngenerate elements of new type.)\n\nNow, let $M$ be a given crossed submodule. We pick, among the\nelements in $M$ of the form $X^n P(g)$ with $P$ of minimal\nlength,\none\nwith lowest degree in $X$. Then certainly the elements listed above are\nin $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must\ncontain $P$ as a factor and for $k0 \\}$ in the crossed submodule or not. In\nparticular, the crossed submodule characterised by \\{1\\} in lemma\n\\ref{lem:uqbp_class} is projected out.\n\\end{proof}\n\nDifferential calculi in the original sense of Woronowicz are\nclassified by corollary \\ref{cor:cqbp_eclass} while from the quantum\ntangent space\npoint of view the\nclassification is given by corollary \\ref{cor:uqbp_eclass}.\nIn the finite dimensional case the duality is strict in the sense of a\none-to-one correspondence.\nThe infinite dimensional case on the other hand depends strongly on\nthe algebraic models we use for the function or enveloping\nalgebras. It is therefore not surprising that in the present purely\nalgebraic context the classifications are quite different in this\ncase. 
We will restrict ourselves to the finite dimensional\ncase in the following description of the differential calculi.\n\n\n\\begin{thm}\n\\label{thm:q_calc}\n(a) Finite dimensional differential calculi $\\Gamma$ on \\ensuremath{C_q(B_+)}{} and\ncorresponding quantum tangent spaces $L$ on \\ensuremath{U_q(\\lalg{b_+})}{} are\nin one-to-one correspondence to\nfinite sets $I\\subset\\mathbb{N}\\setminus\\{1\\}$. In particular\n$\\dim\\Gamma=\\dim L=\\sum_{n\\in I}n$.\n\n(b) Coirreducible $\\Gamma$ and irreducible $L$ correspond to\n$\\{n\\}$ with $n\\ge 2$ the dimension.\nSuch a $\\Gamma$ has a\nright invariant basis $\\eta_0,\\dots,\\eta_{n-1}$ so that the relations\n\\begin{gather*}\n\\diff X=\\eta_1+(q^{n-1}-1)\\eta_0 X \\qquad\n \\diff g=(q^{n-1}-1)\\eta_0 g\\\\\n[a,\\eta_0]=\\diff a\\quad \\forall a\\in\\ensuremath{C_q(B_+)}\\\\\n[g,\\eta_i]_{q^{n-1-i}}=0\\quad \\forall i\\qquad\n[X,\\eta_i]_{q^{n-1-i}}=\\begin{cases}\n \\eta_{i+1} & \\text{if}\\ i0\n\\end{gather*}\nas a crossed module.\n\\end{proof}\n\nFor the transition from the $q$-deformed (lemma\n\\ref{lem:uqbp_class}) to the classical case we\nobserve that the space spanned by $g^{s_1},\\dots,g^{s_m}$ with $m$\ndifferent integers $s_i\\in\\mathbb{Z}$ maps to the space spanned by\n$1, H, \\dots, H^{m-1}$ in the\nprescription of the classical limit (as described in section\n\\ref{sec:intro_limits}). I.e.\\ the classical crossed submodule\ncharacterised by an integer $l$ and a finite set $I\\subset\\mathbb{N}$ comes\nfrom a crossed submodule characterised by this same $I$ and additionally $l$\nother integers $j\\in\\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In\nparticular, we have a one-to-one correspondence in the finite\ndimensional case.\n\nTo formulate the analogue of corollary \\ref{cor:uqbp_eclass} for the\nclassical case is essentially straightforward now. However, as for\n\\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed\nsetting. 
This is due to the degeneracy introduced by forgetting the\npowers of $g$ and just retaining the number of different powers. \n\n\begin{cor}\n\label{cor:ubp_eclass}\n(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules\n$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the\nleft adjoint\naction and left regular coaction (with subsequent projection to\n$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to\npairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$\nor $I\neq\emptyset$.\n$\dim L<\infty$ iff $l=0$. In particular $\dim\nL=(\sum_{n\in I}n)-1$ if $l=0$.\n\end{cor}\n\n\nAs in the $q$-deformed setting, we give a description of the finite\ndimensional differential calculi where we have a strict duality to\nquantum tangent spaces.\n\n\begin{prop}\n(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and\nfinite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are\nin one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.\nIn particular $\dim\Gamma=\dim L=(\sum_{n\in I}n)-1$.\n\nThe $\Gamma$ with $1\in I$ are in\none-to-one correspondence to the finite dimensional\ncalculi and quantum tangent spaces of the $q$-deformed setting\n(theorem \ref{thm:q_calc}(a)).\n\n(b) The differential calculus $\Gamma$ of dimension $n\ge 2$\ncorresponding to the\ncoirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right\ninvariant\nbasis $\eta_0,\dots,\eta_{n-1}$ so that\n\begin{gather*}\n\diff X=\eta_1+\eta_0 X \qquad\n \diff g=\eta_0 g\\\n[g, \eta_i]=0\ \forall i \qquad\n[X, \eta_i]=\begin{cases}\n 0 & \text{if}\ i=0\ \text{or}\ i=n-1\\\n \eta_{i+1} & \text{if}\ 0 10 \\\n\end{array}\)}\n\right. 
\\)\n\n\\\\\n\nKulczynski2 \\cite{Kulczynski1927,Naish2011} & %\n\n\\( \\frac{ 1 }{ 2 } \\times ( \\frac{ \\Aef }{ \\Aef + \\Anf } + \\frac{ \\Aef }{ \\Aef + \\Aep } ) \\)\n\n\\\\\n\nFailed only & %\n\n\\( \\left\\{\\scalebox{.8}{\\(\\renewcommand{\\arraystretch}{1} %\n\\begin{array}{@{}ll@{}}\n1 & \\text{if~} \\Ncs = 0 \\\\\n0 & \\text{otherwise~} \\\\\n\\end{array}\\)}\n\\right. \\)\n\n\\\\\n\\bottomrule\n\n\\end{tabular}} &\n\\begin{tabular}{lp{2.99cm}}\n\\toprule\n\\multicolumn{2}{l}{notation used} \\\\\\midrule\n\\Ncf & number of \\emph{failing} logs \\\\ & that \\emph{include} the event \\\\\n\\Nuf & number of \\emph{failing} logs \\\\ & that \\emph{exclude} the event \\\\\n\\Ncs & number of \\emph{passing} logs \\\\ & that \\emph{include} the event \\\\\n\\Nus & number of \\emph{passing} logs \\\\ & that \\emph{exclude} the event \\\\\n\\bottomrule\n\\end{tabular}\n\\end{tabular}\\vspace*{1ex}\n\\caption{\\label{table:measures}The 10 interestingness measures under consideration in this paper.}\n\\vspace*{-3ex}\n\\end{table*}\n\n\\head{Analyzing a target log file} Using our database of event scores,\nwe first identify the events occurring in the target log file and the\ninterestingness scores associated with these events. Then, we group\nsimilarly scored events together using a clustering algorithm. Finally,\nwe present the best performing cluster of events to the end user. The\nclustering step helps us make a meaningful selection of events rather\nthan setting an often arbitrary window selection size. Among other\nthings, it prevents two identically scored events from falling at\nopposite sides of the selection threshold. If the user suspects that\nthe best performing cluster did not report all relevant events, she can\ninspect additional event clusters in order of decreasing\naggregate interestingness score. 
To perform the clustering step we use Hierarchical Agglomerative\nClustering (HAC) with Complete linkage~\\cite{manning2008introduction}, where\nsub-clusters are merged until the maximal distance between members of\neach candidate cluster exceeds some specified threshold. In SBLD,\nthis threshold is the uncorrected sample standard deviation of the event\nscores for the events being clustered.\\footnote{~Specifically, \nwe use the \\texttt{numpy.std} procedure from the SciPy framework~\\cite{2020SciPy-NMeth},\nin which the uncorrected sample standard deviation is given by\n$ \\sqrt{\\frac{1}{N} \\sum_{i=1}^{N}\\lvert x_{i} - \\bar{x} \\rvert^2} $ where\n$\\bar{x}$ is the sample mean of the interestingness scores obtained for the\nevents in the log being analyzed and $N$ is the number of events in the log.} \nThis ensures that the ``interestingness-distance'' between two events \nin a cluster never exceeds the uncorrected sample standard deviation observed in the set.\n\n %\n\n\\section{Research Questions}\n\\label{sec:rqs}\n\nThe goal of this paper is to present SBLD and help practitioners make\nan informed decision whether SBLD meets their needs. 
To this end, we have identified\nthree research questions that encompass several concerns practitioners\nare likely to have and that also are of interest to the research community at\nlarge:\n\begin{enumerate}[\bfseries RQ1]\n\n\item How well does SBLD reduce the effort needed to identify all\n known-to-be relevant events (\"does it work?\")?\n\n\item How is the efficacy of SBLD impacted by increased evidence in the form of\n additional failing and passing logs (\"how much data do we need before\n running the analysis?\")?\n\n\item How does SBLD perform compared to a strategy based on searching for\n common textual patterns with a tool like \texttt{grep} (\"is it better than doing the obvious thing?\")?\n\end{enumerate}\nRQ1 looks at the aggregated performance of SBLD to assess its viability.\nWith RQ2 we assess how sensitive the performance is to the amount of\navailable data: How many logs should you have before you can expect the\nanalysis to yield good results? Is more data unequivocally a good thing?\nWhat type of log is more informative: A passing log or a failing log?\nFinally, we compare SBLD's performance to a more traditional method for\nfinding relevant segments in logs: Using a textual search for strings \none expects to occur near informative segments, like\n\"failure\" and \"error\". The next section details the dataset used, our\nchosen quality measures for assessment and our methodology for answering\neach research question.\n\n %\n\n\section{Experimental Design}\n\label{sec:expdesign}\n\n\begin{table}\n\centering\n\caption{The key per-test attributes of our dataset. Two events are considered\n distinct if they are treated as separate events after the abstraction\n step. 
A \"mixed\" event is an event that occurs in logs of both failing and\n passing runs.}\n\\vspace*{-1ex}\n\\label{table:descriptive}\n\\renewcommand{\\tabcolsep}{0.11cm}\\small\n\\begin{tabular}{rcrrrrrr}\n\\toprule\n & & \\# fail & \\# pass & distinct & fail-only & mixed & pass-only \\\\\ntest & signature & logs & logs & events & events & events & events \\\\\n\\midrule\n 1 & C & 24 & 100 & 36391 & 21870 & 207 & 14314 \\\\\n 2 & E & 11 & 25 & 380 & 79 & 100 & 201 \\\\\n 3 & E & 11 & 25 & 679 & 174 & 43 & 462 \\\\\n 4 & E & 4 & 25 & 227 & 49 & 39 & 139 \\\\\n 5 & C & 2 & 100 & 33420 & 2034 & 82 & 31304 \\\\\n 6 & C & 19 & 100 & 49155 & 15684 & 893 & 32578 \\\\\n 7 & C & 21 & 100 & 37316 & 17881 & 154 & 19281 \\\\\n 8 & C & 4 & 100 & 26614 & 3976 & 67 & 22571 \\\\\n 9 & C & 21 & 100 & 36828 & 19240 & 228 & 17360 \\\\\n 10 & C & 22 & 100 & 110479 & 19134 & 1135 & 90210 \\\\\n 11 & E & 5 & 25 & 586 & 95 & 47 & 444 \\\\\n 12 & E & 7 & 25 & 532 & 66 & 18 & 448 \\\\\n 13 & C & 2 & 100 & 15351 & 2048 & 232 & 13071 \\\\\n 14 & C & 3 & 100 & 16318 & 2991 & 237 & 13090 \\\\\n 15 & C & 26 & 100 & 60362 & 20964 & 1395 & 38003 \\\\\n 16 & C & 12 & 100 & 2206 & 159 & 112 & 1935 \\\\\n 17 & E & 8 & 25 & 271 & 58 & 98 & 115 \\\\\n 18 & A & 23 & 75 & 3209 & 570 & 156 & 2483 \\\\\n 19 & C & 13 & 100 & 36268 & 13544 & 411 & 22313 \\\\\n 20 & B & 3 & 19 & 688 & 69 & 31 & 588 \\\\\n 21 & B & 22 & 25 & 540 & 187 & 94 & 259 \\\\\n 22 & E & 1 & 25 & 276 & 11 & 13 & 252 \\\\\n 23 & C & 13 & 100 & 28395 & 13629 & 114 & 14652 \\\\\n 24 & E & 7 & 26 & 655 & 117 & 56 & 482 \\\\\n 25 & C & 21 & 100 & 44693 & 18461 & 543 & 25689 \\\\\n 26 & C & 21 & 100 & 42259 & 19434 & 408 & 22417 \\\\\n 27 & C & 21 & 100 & 44229 & 18115 & 396 & 25718 \\\\\n 28 & C & 20 & 100 & 43862 & 16922 & 642 & 26298 \\\\\n 29 & C & 28 & 100 & 54003 & 24216 & 1226 & 28561 \\\\\n 30 & C & 31 & 100 & 53482 & 26997 & 1063 & 25422 \\\\\n 31 & C & 27 & 100 & 53092 & 23283 & 463 & 29346 \\\\\n 32 & C & 21 & 100 & 55195 & 19817 & 
768 & 34610 \\\\\n 33 & E & 9 & 25 & 291 & 70 & 30 & 191 \\\\\n 34 & D & 2 & 13 & 697 & 76 & 92 & 529 \\\\\n 35 & E & 9 & 25 & 479 & 141 & 47 & 291 \\\\\n 36 & E & 10 & 75 & 1026 & 137 & 68 & 821 \\\\\n 37 & E & 7 & 25 & 7165 & 1804 & 94 & 5267 \\\\\n 38 & E & 4 & 25 & 647 & 67 & 49 & 531 \\\\\n 39 & G & 47 & 333 & 3350 & 428 & 144 & 2778 \\\\\n 40 & G & 26 & 333 & 3599 & 240 & 157 & 3202 \\\\\n 41 & G & 26 & 332 & 4918 & 239 & 145 & 4534 \\\\\n 42 & C & 17 & 100 & 30411 & 14844 & 348 & 15219 \\\\\n 43 & F & 267 & 477 & 10002 & 3204 & 1519 & 5279 \\\\\n 44 & C & 9 & 100 & 29906 & 8260 & 274 & 21372 \\\\\n 45 & E & 3 & 25 & 380 & 44 & 43 & 293 \\\\\n\\bottomrule\n\\end{tabular}\n\\vspace*{-2ex}\n\\end{table}\n %\n\n\\begin{table}\n\\centering\n\\caption{Ground-truth signatures and their occurrences in distinct events.}\n\\label{table:signature}\n\\vspace*{-1ex}\n\\small\n\\begin{tabular}{ccrrrc}\n\\toprule\n & sub- & fail-only & pass-only & fail \\& & failure \\\\\nsignature & pattern & events & events & pass & strings* \\\\\n\\midrule\n A & 1 & 1 & 0 & 0 & yes \\\\\n A & 2 & 2 & 0 & 0 & no \\\\\n B & 1 & 2 & 0 & 0 & yes \\\\\n C & 1 & 21 & 0 & 0 & yes \\\\\n C & 2 & 21 & 0 & 0 & yes \\\\\n D & 1 & 4 & 0 & 0 & yes \\\\\n \\textbf{D$^{\\#}$} & \\textbf{2} & 69 & 267 & 115 & no \\\\\n \\textbf{D$^{\\#}$} & \\textbf{3} & 2 & 10 & 13 & no \\\\\n \\textbf{E$^{\\#}$} & \\textbf{1} & 24 & 239 & 171 & no \\\\\n E & 1 & 1 & 0 & 0 & no \\\\\n E & 2 & 9 & 0 & 0 & no \\\\\n E & 3 & 9 & 0 & 0 & yes \\\\\n E & 4 & 23 & 0 & 0 & yes \\\\\n F & 1 & 19 & 0 & 0 & yes \\\\\n F & 2 & 19 & 0 & 0 & no \\\\\n F & 3 & 19 & 0 & 0 & yes \\\\\n F & 4 & 14 & 0 & 0 & yes \\\\\n G & 1 & 2 & 0 & 0 & yes \\\\\n G & 2 & 1 & 0 & 0 & no \\\\\n G & 3 & 1 & 0 & 0 & no \\\\\n\\bottomrule\n\\multicolumn{6}{l}{* signature contains the lexical patterns 'error', 'fault' or 'fail*'}\\\\\n\\multicolumn{6}{l}{$^{\\#}$ sub-patterns that were removed to ensure a clean ground 
truth}\n\\end{tabular}\n\\vspace*{-3ex}\n\\end{table}\n \n\\subsection{Dataset and ground truth}\n\\label{sec:dataset}\n\nOur dataset provided by \\CiscoNorway{our industrial partner} consists\nof failing and passing log files from 45 different end-to-end integration\ntests. In addition to the log text we also have data on when a given\nlog file was produced. Most test-sets span a time-period of 38 days, while\nthe largest set (test 43 in Table~\\ref{table:descriptive}) spans 112\ndays. Each failing log is known to exemplify symptoms of one of seven\nknown errors, and \\CiscoNorway{our industrial partner} has given us a\nset of regular expressions that help determine which events are relevant\nfor a given known error. We refer to the set of regular expressions\nthat identify a known error as a \\emph{signature} for that error. These\nsignatures help us construct a ground truth for our investigation.\nMoreover, an important motivation for developing SBLD is to help create\nsignatures for novel problems: The events highlighted by SBLD should be\ncharacteristic of the observed failure, and the textual contents of the\nevents can be used in new signature expressions.\n\nDescriptive facts about our dataset is listed in\nTable~\\ref{table:descriptive} while Table~\\ref{table:signature}\nsummarizes key insights about the signatures used.\n\nIdeally, our ground truth should highlight exactly and \\emph{only} the\nlog events that an end user would find relevant for troubleshooting\nan error. However, the signatures used in this investigation were\ndesigned to find sufficient evidence that the \\emph{entire log} in\nquestion belongs to a certain error class: the log might contain other\nevents that a human user would find equally relevant for diagnosing\na problem, but the signature in question might not encompass these\nevents. 
Nevertheless, the events that constitute sufficient evidence\nfor assigning the log to a given error class are presumably relevant\nand should be presented as soon as possible to the end user. However,\nif our method cannot differentiate between these signature events and\nother events we cannot say anything certain about the relevance of\nthose other events. This fact is reflected in our choice of quality\nmeasures, specifically in how we assess the precision of the approach. This\nis explained in detail in the next section.\n\nWhen producing the ground truth, we first ensured that a log would only be\nassociated with a signature if the entire log taken as a whole satisfied all\nthe sub-patterns of that signature. If so, we then determined which events\nthe patterns were matching on. These events constitute the known-to-be relevant\nset of events for a given log. However, we identified some problems with two of the provided\nsignatures that made them unsuitable for assessing SBLD. Signature \\emph{E}\n(see Table~\\ref{table:signature}) had a sub-pattern that searched for a \"starting test\"-prefix that necessarily\nmatches on the first event in all logs due to the structure of the logs.\nSimilarly, signature \\emph{D} contained two sub-patterns that necessarily\nmatch all logs in the set--in this case by searching for whether the test\nwas run on a given machine, which was true for all logs for the corresponding\ntest. We therefore elected to remove these sub-patterns from the signatures\nbefore conducting the analysis.\n\n\\subsection{Quality Measures}\n\nAs a measure of how well SBLD reports all known-to-be relevant log\nevents, we measure \\emph{recall in best cluster}, which we for brevity refer to\nas simply \\emph{recall}. \nThis is an adaption of the classic recall measure used in information retrieval,\nwhich tracks the proportion of all relevant events that were retrieved\nby the system~\\cite{manning2008introduction}. 
\nAs our method presents events to the user in a series of ranked clusters, \nwe ideally want all known-to-be relevant events to appear in the highest ranked cluster. \nWe therefore track the overall recall obtained as if the first cluster were the only events retrieved.\nNote, however, that SBLD ranks all clusters, and a user can retrieve additional clusters if desired. \nWe explore whether this could improve SBLD's performance on a\nspecific problematic test-set in Section~\\ref{sec:testfourtythree}.\n\nIt is trivial to obtain a perfect recall by simply retrieving all events\nin the log, but such a method would obviously be of little help to a user\nwho wants to reduce the effort needed to diagnose failures.\nWe therefore also track the \\emph{effort reduction} (ER), defined as\n\\[ \\text{ER} = 1 - \\frac{\\text{number of events in first cluster}}{\\text{number of events in log}} \\]\n\nMuch like effective information retrieval systems aim for high recall and\nprecision, we want our method to score a perfect recall while obtaining the\nhighest effort reduction possible. \n\n\\subsection{Recording the impact of added data}\n\nTo study the impact of added data on SBLD's performance, we need to measure how\nSBLD's performance on a target log $t$ is affected by adding an extra\nfailing log $f$ or a passing log $p$. There are several strategies\nfor accomplishing this. One way is to try all combinations in the\ndataset i.e.\\ compute the performance on any $t$ using any choice of\nfailing and passing logs to produce the interestingness scores. This\napproach does not account for the fact that the logs in the data are\nproduced at different points in time and is also extremely expensive\ncomputationally. 
We opted instead to order the logs chronologically and\nsimulate a step-wise increase in data as time progresses, as shown in\nAlgorithm~\ref{alg:time}.\n\n\begin{algorithm}[b]\n\caption{Pseudo-code illustrating how we simulate a step-wise increase in data\n as time progresses and account for variability in choice of\n interestingness measure.}\n\label{alg:time}\n\begin{algorithmic}\small\n\STATE $F$ is the set of failing logs for a given test\n\STATE $P$ is the set of passing logs for a given test\n\STATE $M$ is the set of interestingness measures considered\n\STATE sort $F$ chronologically\n\STATE sort $P$ chronologically\n\FOR{$i=0$ to $i=\lvert F \rvert$}\n \FOR{$j=0$ to $j=\lvert P \rvert$}\n \STATE $f = F[:i]$ \COMMENT{get all elements in F up to and including position i}\n \STATE $p = P[:j]$\n \FORALL{$l$ in $f$}\n \STATE initialize $er\_scores$ as an empty list\n \STATE initialize $recall\_scores$ as an empty list\n \FORALL{$m$ in $M$}\n \STATE perform SBLD on $l$ using $m$ as measure \\ \hspace*{1.75cm} and $f$ and $p$ as spectrum data\n \STATE append recorded effort reduction score to $er\_scores$\n \STATE append recorded recall score to $recall\_scores$\n \ENDFOR\n \STATE record median of $er\_scores$\n \STATE record median of $recall\_scores$\n \ENDFOR\n \ENDFOR\n\ENDFOR\n\end{algorithmic}\n\end{algorithm}\n\n\subsection{Variability in interestingness measures}\n\label{sec:imvars}\n\nAs mentioned in Section~\ref{sec:approach}, SBLD requires a\nchoice of interestingness measure for scoring the events, \nwhich can have a considerable impact on SBLD's performance. \nConsidering that the best choice of interestingness measure is context-dependent, \nthere is no global optimum; it is up to the user to decide which interestingness metric best reflects their\nnotion of event relevance. \n\nConsequently, we want to empirically study SBLD in a way\nthat captures the variability introduced by this decision. 
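Read as code, Algorithm~\ref{alg:time} above amounts to the following sketch (our own helper names; \texttt{sbld} stands in for the full analysis and is assumed to return an (effort reduction, recall) pair, and for brevity we start from non-empty prefixes rather than $i=0$):

```python
from statistics import median

def simulate_added_data(failing, passing, measures, sbld):
    # Grid over chronological prefixes: i failing logs, j passing logs.
    failing = sorted(failing)  # assume logs sort chronologically
    passing = sorted(passing)
    results = {}
    for i in range(1, len(failing) + 1):
        for j in range(1, len(passing) + 1):
            f, p = failing[:i], passing[:j]
            for target in f:
                # One SBLD run per interestingness measure; keep medians.
                er, recall = zip(*(sbld(target, f, p, m) for m in measures))
                results[(i, j, target)] = (median(er), median(recall))
    return results

# A stub standing in for the real analysis:
stub = lambda target, f, p, m: (0.5, 1.0)
out = simulate_added_data(["f1", "f2"], ["p1"], ["m1", "m2"], stub)
print(len(out))  # one record per (i, j, target) combination
```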
\nTo this end, we record the median score obtained by performing SBLD for every possible choice of\ninterestingness measure from those listed in Table~\\ref{table:measures}.\nAlgorithm~\\ref{alg:time} demonstrates the procedure in pseudo-code.\n\n\\subsection{Comparing alternatives}\n\\label{sec:comps}\n\nTo answer RQ2 and RQ3, we use pairwise comparisons of\ndifferent configurations of SBLD with a method that searches for regular expressions. \nThe alternatives are compared\non each individual failing log in the set in a paired fashion. An\nimportant consequence of this is that the statistical comparisons have\nno concept of which test the failing log belongs to, and thus the test\nfor which there is most data has the highest impact on the result of the\ncomparison.\n\nThe pairwise comparisons are conducted using paired Wilcoxon signed-rank\ntests~\\cite{wilcoxon1945} where the Pratt correction~\\cite{Pratt1959}\nis used to handle ties. We apply Holm's correction~\\cite{Holm1979}\nto the obtained p-values to account for the family-wise error\nrate arising from multiple comparisons. We declare a comparison\n\\emph{statistically significant} if the Holm-adjusted p-value is below\n$\\alpha=0.05$. The Wilcoxon tests check the two-sided null hypothesis of\nno difference between the alternatives. We report the Vargha-Delaney $A_{12}$ and\n$A_{21}$~\\cite{Vargha2000} measures of stochastic superiority to\nindicate which alternative is the strongest. Conventionally, $A_{12}=0.56$ is\nconsidered a small difference, $A_{12}=.64$ is considered a medium difference\nand $A_{12}=.71$ or greater is considered large~\\cite{Vargha2000}. Observe\nalso that $A_{21} = 1 - A_{12}$.\n\n\\begin{figure*}\n \\includegraphics[width=0.8\\textwidth]{rq1_boxplot.png}\n %\n \\caption{The overall performance of SBLD in terms of effort reduction\n and recall. 
On many tests, SBLD exhibited perfect recall for\n all observations in the inter-quartile range and thus the box collapses to a single line on the $1.0$ mark.\\label{fig:rq1boxplot}}\n\\end{figure*}\n\n\\subsection{Analysis procedures}\n\nWe implement the SBLD approach in a prototype tool, \nDAIM (Diagnosis and Analysis using Interestingness Measures), \nand use DAIM to empirically evaluate the idea.\n\n\\head{RQ1 - overall performance} We investigate the overall performance\nof SBLD by analyzing a boxplot for each test in our dataset. Every individual\ndatum that forms the basis of the plot is the median performance of SBLD over\nall choices of interestingness measures for a given set of failing and passing\nlogs subject to the chronological ordering scheme outlined above.\n\n\\head{RQ2 - impact of data} We analyze the impact of added data by\nproducing and evaluating heatmaps that show the obtained performance\nas a function of the number of failing logs (y-axis) and number of\npassing logs (x-axis). The color intensity of each tile in the heatmaps\nis calculated by taking the median of the scores obtained for each\nfailing log analyzed with the given number of failing and passing logs\nas data for the spectrum inference, wherein the score for each log is\nthe median over all the interestingness measures considered as outlined in\nSection~\\ref{sec:imvars}.\n\nFurthermore, we compare three variant configurations\nof SBLD that give an overall impression of the influence of added\ndata. The three configurations considered are \\emph{minimal evidence},\n\\emph{median evidence} and \\emph{maximal evidence}, where minimal\nevidence uses only events from the log being analyzed and one additional\npassing log, median evidence uses the median number of failing and\npassing logs available, respectively, while maximal evidence uses\nall available data for a given test. 
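The statistical scheme of Section~\ref{sec:comps} is straightforward to sketch. The Vargha-Delaney measure and Holm's step-down adjustment can be implemented directly, while the paired Wilcoxon test itself is available in SciPy as \texttt{scipy.stats.wilcoxon} with \texttt{zero\_method="pratt"}; this is an illustration, not the exact analysis code used in the study.

```python
def vargha_delaney_a12(x, y):
    """A12: probability that a value from x exceeds one from y,
    counting ties as half. Note that A21 = 1 - A12."""
    greater = sum(xi > yi for xi in x for yi in y)
    ties = sum(xi == yi for xi in x for yi in y)
    return (greater + 0.5 * ties) / (len(x) * len(y))

def holm_adjust(pvalues):
    """Holm's step-down adjustment controlling the family-wise error rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k), keeping monotonicity.
        running_max = max(running_max, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

For identical samples $A_{12}=0.5$ (stochastic equality); values toward 1 indicate that the first alternative tends to score higher.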
The comparisons are conducted with the\nstatistical scheme described above in Section~\\ref{sec:comps}.\n\n\\head{RQ3 - SBLD versus pattern-based search} To compare SBLD\nagainst a pattern-based search, we record the effort reduction and\nrecall obtained when only selecting events in the log that match the\ncase-insensitive regular expression \\texttt{"error|fault|fail*"}, where\nthe $*$ denotes a wildcard operator and the $\\lvert$ denotes logical\n$OR$. This simulates the results that a user would obtain by using\na tool like \\texttt{grep} to search for words like ``error'' and ``failure''.\nSometimes the ground-truth signature expressions contain words from this\npattern, and we indicate this in Table~\\ref{table:signature}. If so, the\nregular expression-based method is guaranteed to retrieve the event.\nSimilarly to RQ2, we compare the three configurations of SBLD described\nabove (minimal, median and maximal evidence) against the pattern-based\nsearch using the statistical scheme described in Section~\\ref{sec:comps}.\n\n %\n\n\\section{Results and Discussion}\n\\label{sec:resdiscuss}\n\nThis section gradually dissects Figure~\\ref{fig:rq1boxplot}, which breaks down SBLD's performance per test for both recall\nand effort reduction; Figures \\ref{fig:erheat} and \\ref{fig:recallheat}, \nwhich show SBLD's performance as a function of the number of failing and passing\nlogs used; and Table~\\ref{table:comparisons}, which shows the results\nof the statistical comparisons we have performed.\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{er_heatmap.pdf}\n \\caption{Effort reduction score obtained when SBLD is run on a given number of failing and passing logs. The tests not listed in this figure all obtained a lowest median effort reduction score of 90\\% or greater and are thus not shown for space considerations. 
\\label{fig:erheat}}\n\\vspace*{-2ex}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{Statistical comparisons performed in this investigation. The\nbold p-values are those for which no statistically significant difference under $\\alpha=0.05$\n could be established.}\n\\label{table:comparisons}\n{\\small%\n\\begin{tabular}{lllrrrr}\n\\toprule\n variant 1 & variant 2 & quality measure & Wilcoxon statistic & $A_{12}$ & $A_{21}$ & Holm-adjusted p-value\\\\\n\\midrule\n pattern-based search & minimal evidence & effort reduction & 29568.5 & 0.777 & 0.223 & $\\ll$ 0.001 \\\\\n pattern-based search & maximal evidence & effort reduction & 202413.0 & 0.506 & 0.494 & \\textbf{1.000} \\\\\n pattern-based search & median evidence & effort reduction & 170870.5 & 0.496 & 0.504 & $\\ll$ 0.001 \\\\\n minimal evidence & maximal evidence & effort reduction & 832.0 & 0.145 & 0.855 & $\\ll$ 0.001 \\\\\n minimal evidence & median evidence & effort reduction & 2666.0 & 0.125 & 0.875 & $\\ll$ 0.001 \\\\\n maximal evidence & median evidence & effort reduction & 164674.0 & 0.521 & 0.479 & \\textbf{1.000} \\\\\n pattern-based search & minimal evidence & recall & 57707.0 & 0.610 & 0.390 & $\\ll$ 0.001 \\\\\n pattern-based search & maximal evidence & recall & 67296.0 & 0.599 & 0.401 & $\\ll$ 0.001 \\\\\n pattern-based search & median evidence & recall & 58663.5 & 0.609 & 0.391 & $\\ll$ 0.001 \\\\\n minimal evidence & maximal evidence & recall & 867.5 & 0.481 & 0.519 & $\\ll$ 0.001 \\\\\n minimal evidence & median evidence & recall & 909.0 & 0.498 & 0.502 & 0.020 \\\\\n maximal evidence & median evidence & recall & 0.0 & 0.518 & 0.482 & $\\ll$ 0.001 \\\\\n\\bottomrule\n\\end{tabular}\n %\n}\n\\end{table*}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{recall_heatmap.pdf}\n \\caption{Recall score obtained when SBLD is run on a given number of failing and passing logs. 
For space\n considerations, we only show tests for which the minimum observed\n median recall was smaller than 1 (SBLD attained perfect median recall for all configurations in the other tests). \\label{fig:recallheat}}\n\\vspace*{-3ex}\n\\end{figure}\n\n\\subsection{RQ1: The overall performance of SBLD}\n\nFigure~\\ref{fig:rq1boxplot} suggests that SBLD's overall performance is strong,\nsince it obtains near-perfect recall while retaining a high degree of effort\nreduction. In terms of recall, SBLD obtains perfect performance on all except\nfour tests: 18, 34, 42 and 43, with the lower quartile at perfect recall for all tests\nexcept 43 (which we discuss in detail in Section~\\ref{sec:testfourtythree}).\nFor test 18, only 75 out of 20700 observations ($0.36\\%$) obtained a recall score\nof $0.5$ while the rest obtained a perfect score. On test 34 (the smallest in our\ndataset), 4 out of 39 observations obtained a recall score of zero while the\nothers obtained perfect recall. \nFor test 42, 700 out of 15300 observations ($4.6\\%$) obtained a recall score of zero while the rest obtained perfect recall.\nHence, with the exception of test 43, which is discussed later, \nSBLD obtains very strong recall scores overall with only a few outliers.\n\nThe performance is also strong in terms of effort reduction, albeit\nmore varied. To a certain extent this is expected since the attainable\neffort reduction on any log will vary with the length of the log and\nthe number of ground-truth relevant events in the log. As can be seen\nin Figure~\\ref{fig:rq1boxplot}, most of the observations fall well\nover the 75\\% mark, with the exceptions being tests 4 and 22. For test\n4, Figure~\\ref{fig:erheat} suggests that one or more of the latest\npassing logs helped SBLD refine the interestingness scores. A similar\nbut less pronounced effect seems to have happened for test 22. 
However,\nas reported in Table~\\ref{table:descriptive}, test 22 consists only of\n\\emph{one} failing log. Manual inspection reveals that the log consists\nof 30 events, of which 11 are fail-only events. Without additional\nfailing logs, most interestingness measures will give a high score to\nall events that are unique to that singular failing log, which is likely\nto include many events that are not ground-truth relevant. Reporting 11\nout of 30 events to the user yields a meager effort reduction of around\n63\\%. Nevertheless, the general trend is that SBLD presents the user with a compact\nset of events, which yields a high effort reduction score.\n\nIn summary, the overall performance shows that SBLD\nretrieves the majority of all known-to-be-relevant events\nin compact clusters, which dramatically reduces the analysis burden for the\nend user. The major exception is Test 43, which we return to in\nSection~\\ref{sec:testfourtythree}.\n\n\\subsection{RQ2: On the impact of evidence}\n\nThe heatmaps suggest that the effort reduction is generally not\nadversely affected by adding more \\emph{passing logs}. If the\nassumptions underlying our interestingness measures are correct,\nthis is to be expected: Each additional passing log either gives us\nreason to devalue certain events that co-occur in failing and passing\nlogs or contains passing-only events that are deemed uninteresting.\nMost interestingness measures highly value events that\nexclusively occur in failing logs, and additional passing logs help\nreduce the number of events that satisfy this criterion. However, since\nour method is based on clustering similarly scored events, it is\nvulnerable to \\emph{ties} in interestingness scores. It is possible that\nan additional passing log introduces ties where previously there were\nnone. This is likely to have an exaggerated effect in situations with\nlittle data, where each additional log can have a dramatic impact on the\ninterestingness scores. 
This might explain the gradual dip in effort\nreduction seen in Test 34, for which there are only two failing logs.\n\nAdding more failing logs, on the other hand, paints a more nuanced\npicture: When the number of failing logs (y-axis) is high relative\nto the number of passing logs (x-axis), effort reduction seems to suffer.\nAgain, while most interestingness measures will prioritize events that\nonly occur in failing logs, this strategy only works if there is a\nsufficient corpus of passing logs to weed out false positives. When\nthere are far fewer passing than failing logs, many events will be\nunique to the failing logs even though they merely reflect a different\nvalid execution path that the test can take. This is especially true for\ncomplex integration tests like the ones in our dataset, which might test\na system's ability to recover from an error, or in other ways have many\nvalid execution paths.\n\nThe statistical comparisons summarized in Table~\\ref{table:comparisons}\nsuggest that the minimal evidence strategy performs poorly compared to the\nmedian and maximal evidence strategies. This is especially\npronounced for effort reduction, where the Vargha-Delaney\nmetric scores well over 80\\% in favor of the maximal and median\nstrategies. For recall, the difference between the minimal strategy and\nthe other variants is small, albeit statistically significant. Furthermore,\nthe jump from minimal evidence to median evidence is much more\npronounced than the jump from median evidence to maximal evidence.\nFor effort reduction, there is in fact no statistically discernible\ndifference between the median and maximal strategies. For recall, the maximal\nstrategy seems slightly better, but the $A_{12}$ measure suggests that the\nmagnitude of the difference is small.\n\nOverall, SBLD seems to benefit from extra data, especially additional passing\nlogs. Failing logs also help, but a proportional number of passing\nlogs is needed for SBLD to fully benefit. 
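This interplay can be illustrated with the \emph{Tarantula} measure, one of the interestingness measures SBLD supports, which scores an event by its relative rate of occurrence in failing versus passing logs; the counts below are hypothetical.

```python
def tarantula(event_failing, event_passing, total_failing, total_passing):
    """Tarantula suspiciousness: relative rate at which an event occurs
    in failing versus passing logs (higher = more failure-associated)."""
    fail_rate = event_failing / total_failing
    pass_rate = event_passing / total_passing if total_passing else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

# With only one passing log, an event seen in all 3 failing logs and no
# passing log scores maximally, whether or not it is failure-relevant:
spurious = tarantula(3, 0, 3, 1)        # 1.0
# A larger passing corpus can reveal it as an ordinary event:
revealed = tarantula(3, 5, 3, 10)       # 1.0 / 1.5 = 2/3
```

This is why failure-exclusive events only become trustworthy evidence once enough passing logs have been observed to weed out the false positives.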
\nThe performance increase when going from minimal data to some data is more pronounced than when going from some data to\nmaximal data. This suggests that there may be diminishing returns to\ncollecting extra logs, but our investigation cannot prove or disprove this.\n\n\\subsection{RQ3: SBLD versus simple pattern search}\n\nIn terms of effort reduction, Table~\\ref{table:comparisons} shows that\nthe pattern-based search clearly beats the minimal evidence variant of\nSBLD. It does not, however, beat the median and maximal variants: The\ncomparison to median evidence suggests a statistically significant win\nin favor of median evidence, but the effect reported by $A_{12}$ is\nso small that it is unlikely to matter in practice. No statistically\nsignificant difference could be established between the pattern-based\nsearch and SBLD with maximal evidence.\n\nIn one sense, it is to be expected that the pattern-based search does\nwell on effort reduction assuming that events containing words like\n``fault'' and ``error'' are rare. The fact that the pattern-based search\nworks so well could indicate that \\CiscoNorway{our industrial partner}\nhas a well-designed logging infrastructure where such words are\nrare and occur at relevant positions in the logs. On the other\nhand, it is then notable that the median and maximal variants of SBLD perform\ncomparably on effort reduction without having any concept of the textual\ncontent in the events.\n\nIn terms of recall, however, pattern-based search beats all variants of\nSBLD in a statistically significant manner, where the effect size of the\ndifferences is small to medium. One likely explanation for this better performance is that the\npattern-based search performs very well on Test 43, which SBLD generally\nperforms less well on. Since the comparisons are run per failing log and test\n43 constitutes 29\\% of the failing logs (specifically, 267 out of 910 logs), the\nperformance of test 43 has a massive impact. 
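For concreteness, the grep-style baseline from RQ3 can be sketched as follows. The event strings are hypothetical, and Python's \texttt{re} semantics are assumed, with the paper's \texttt{fail*} wildcard covered by the prefix \texttt{fail} (matching ``fail'', ``failed'', ``failure'', and so on).

```python
import re

# Case-insensitive baseline pattern from RQ3; as a regular expression,
# the prefix "fail" subsumes the wildcard form "fail*".
PATTERN = re.compile(r"error|fault|fail", re.IGNORECASE)

def pattern_search(events):
    """Return the indices of events a grep-like search would retrieve."""
    return [i for i, event in enumerate(events) if PATTERN.search(event)]

# Hypothetical abstracted log events, for illustration only:
events = ["link up", "ERROR: timeout on port 3", "retrying", "test FAILED"]
retrieved = pattern_search(events)  # [1, 3]
```

Effort reduction for such a search is simply the fraction of events not retrieved, so the baseline does well whenever these keywords are rare and well-placed.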
We return to test 43 and its\nimpact on our results in Section~\\ref{sec:testfourtythree}.\n\nOn the whole, SBLD performs similarly to pattern-based search, obtaining\nslightly poorer results on recall, likely due\nto a particular test we discuss below. At any rate, there is no\ncontradiction in combining SBLD with a traditional pattern-based search.\nAnalysts could start by issuing a set of pattern-based searches and\nrun SBLD afterward if the pattern search returned unhelpful results.\nIndeed, an excellent and intended use of SBLD is to suggest candidate\nsignature patterns that, once proven reliable, can be incorporated into a\nregular-expression-based search to automatically identify known issues\nin future runs.\n\n\\subsection{What happens in Test 43?}\n\\label{sec:testfourtythree}\n\nSBLD's performance is much worse on Test 43 than on the other tests, which\nwarrants a dedicated investigation. The first thing we observed in the\nresults for Test 43 is that all of the ground-truth-relevant events\noccurred \\emph{exclusively} in failing logs and were often singular\n(11 of the 33 events) or infrequent (30 out of 33 events occurred in 10\\%\nof the failing logs or fewer). Consequently, we observed a strong\nperformance from the \\emph{Tarantula} and \\emph{Failed only} measures\nthat put a high premium on failure-exclusive events. Most of the\ninterestingness measures, on the other hand, will prefer an event that\nis very frequent in the failing logs and sometimes occurs in passing logs\nover a very rare event that only occurs in failing logs. This goes a\nlong way toward explaining the poor performance on recall. The abundance of\nsingular events might also suggest that there is an error in the event\nabstraction framework, where several events that should be treated as\ninstances of the same abstract event are treated as separate events. 
We\ndiscuss this further in Section~\\ref{sec:ttv}.\n\n\\begin{sloppypar}%\nAnother observation we made is that the failing logs contained only \\emph{two}\nground-truth relevant events, which means that the recorded recall can quickly\nfluctuate between $0$, $0.5$ and $1$.\n\\end{sloppypar}\n\nWould the overall performance improve by retrieving an additional\ncluster? A priori, retrieving an extra cluster would strictly improve\nor not change recall since more events are retrieved without removing\nthe previously retrieved events. Furthermore, retrieving an additional\ncluster necessarily decreases the effort reduction. We re-ran the\nanalysis on Test 43 and collected effort reduction and recall scores\nfor SBLD when retrieving \\emph{two} clusters, and found that the added\ncluster increased median recall from $0$ to $0.5$ while the median\neffort reduction decreased from $0.97$ to $0.72$. While the proportional\nincrease in recall is larger than the decrease in effort reduction,\nthis should in our view not be seen as an improvement: As previously\nmentioned, the failing logs in this set contain only two ground-truth\nrelevant events and thus recall is expected to fluctuate greatly.\nSecondly, an effort reduction of $0.72$ implies that you still have to\nmanually inspect 28\\% of the data, which in most information retrieval\ncontexts is unacceptable. An unfortunate aspect of our analysis in this\nregard is that we do not account for event \\emph{lengths}: An abstracted\nevent is treated as one atomic entity, but could in reality vary from a\nsingle line to a stack trace that spans several pages. 
A better measure\nof effort reduction should incorporate a notion of event length to\nbetter reflect the real-world effect of retrieving more events.\n\nAll in all, Test 43 exhibits a challenge that SBLD is not suited for:\nIt asks SBLD to prioritize rare events that are exclusive to failing\nlogs over events that frequently occur in failing logs but might\noccasionally occur in passing logs. The majority of interestingness\nmeasures supported by SBLD would prioritize the latter category of\nevents. In a way, this might suggest that SBLD is not suited for finding\n\\emph{outliers} and rare events: Rather, it is useful for finding\nevents that are \\emph{characteristic} of failures that have occurred\nseveral times -- a ``recurring suspect'', if you will. An avenue for future\nresearch is to explore ways of letting the user combine a search for\n``recurring suspects'' with the search for outliers.\n\n %\n\n\\section{Related Work}\n\\label{sec:relwork}\n\nWe distinguish two main lines of related work: \nFirst, there is other work aimed at automated analysis of log files, \ni.e., our problem domain,\nand second, there is other work that shares similarities with our technical approach, \ni.e., our solution domain.\n\n\\head{Automated log analysis}\nAutomated log analysis originates in \\emph{system and network monitoring} for security and administration~\\cite{lin1990:error,Oliner2007}, \nand saw a revival in recent years due to the needs of \\emph{modern software development}, \\emph{CE} and \\emph{DevOps}~\\cite{Hilton2017,Laukkanen2017,Debbiche2014,Olsson2012,Shahin2017,candido2019:contemporary}.\n\nA considerable amount of research has focused on automated \\emph{log parsing} or \\emph{log abstraction}, \nwhich aims to reduce and organize log data by recognizing latent structures or templates in the events in a log~\\cite{zhu2019:tools,el-masri2020:systematic}.\nHe et al. 
analyze the quality of these log parsers and conclude that many of them are not accurate or efficient enough for parsing the logs of modern software systems~\\cite{he2018:automated}.\nIn contrast to these automated approaches, \nour study uses a handcrafted log abstracter developed by \\CiscoNorway{our industrial collaborator}.\n\n\\emph{Anomaly detection} has traditionally been used for intrusion detection and computer security~\\cite{liao2013:intrusion,ramaki2016:survey,ramaki2018:systematic}.\nApplication-level anomaly detection has been investigated for troubleshooting~\\cite{chen2004:failure,zhang2019:robust},\nand to assess compliance with service-level agreements~\\cite{banerjee2010:logbased,He2018,sauvanaud2018:anomaly}.\nGunter et al. present an infrastructure for troubleshooting large distributed systems %\nby first (distributively) summarizing high-volume event streams before submitting those summaries to a centralized anomaly detector. \nThis helps them achieve the fidelity needed for detailed troubleshooting, \nwithout suffering from the overhead that such detailed instrumentation would bring~\\cite{Gunter2007}.\nDeepLog by Du et al. enables execution-path and performance anomaly detection in system logs by training a Long Short-Term Memory neural network to model the system's expected behavior from the logs, and using that model to flag events and parameter values in the logs that deviate from the model's expectations~\\cite{Du2017}.\nSimilarly, LogRobust by Zhang et al. performs anomaly detection using a bi-LSTM neural network but also detects events that are likely evolved versions of previously seen events, making the learned model more robust to updates in the target logging infrastructure~\\cite{zhang2019:robust}.\n\nIn earlier work, we use \\emph{log clustering} to reduce the effort needed to process a backlog of failing CE logs \nby grouping those logs that failed for similar reasons~\\cite{rosenberg2018:use,rosenberg:2018:improving}. 
\nThese works build on earlier research that uses log clustering to identify problems in system logs~\\cite{Lin2016,Shang2013}.\nCommon to these approaches is how the contrast between passing and failing logs is used to improve accuracy, \nwhich is closely related to how SBLD highlights failure-relevant events.\n\nNagaraj et al.~\\cite{nagaraj:2012} explore the use of dependency networks to exploit the contrast between two sets of logs, \none with good and one with bad performance, \nto help developers understand which component(s) likely contain the root cause of performance issues.\n\nAn often-occurring challenge is the need to (re)construct an interpretable model of a system's execution.\nTo this end, several authors investigate the combination of log analysis with (static) source code analysis, \nwhere they try to (partially) match events in logs to log statements in the code, \nand then use these statements to reconstruct a path through the source code to help determine \nwhat happened in a failed execution~\\cite{Xu2009,yuan:2010:sherlog,zhao2014:lprof,schipper2019:tracing}.\nGadler et al. employ Hidden Markov Models to create a model of a system's usage patterns from logged events~\\cite{gadler2017:mining}, while\nPettinato et al. model and analyze the behavior of a complex telescope system using Latent Dirichlet Allocation~\\cite{pettinato2019:log}.\n\nOther researchers have analyzed the logs of successful and failing builds, \nto warn about anti-patterns and decay~\\cite{vassallo2019:automated}, \ngive build repair hints~\\cite{Vassallo2018}, \nand automatically repair build scripts~\\cite{hassan2018:hirebuild, tarlow2019:learning}. 
\nIn contrast to our work,\nthese techniques exploit the \\emph{overlap} in build systems used by many projects to mine patterns that hint at decay or help repair a failing build, \nwhereas we exploit the \\emph{contrast} with passing runs for the same project to highlight failure-relevant events.\n\n\\begin{sloppypar}\n\\head{Fault Localization} \nAs mentioned, our approach was inspired by Spectrum-Based Fault Localization (SBFL), \nwhere the fault-proneness of a statement is computed as a function of \nthe number of times that the statement was executed in a failing test case, combined with \nthe number of times that the statement was skipped in a passing test case~\\cite{Jones2002,Chen2002,Abreu2007,Abreu2009,Naish2011}.\nThis more or less directly translates to the inclusion or exclusion of events in failing, resp. passing logs, \nwhere the difference is that SBLD adds clustering of the results to enable step-wise presentation of results to the user. \n\\end{sloppypar}\n\nA recent survey of Software Fault Localization includes the SBFL literature up to 2014~\\cite{Wong2016}.\nDe Souza et al. extend this with SBFL work up to 2017, and add an overview of seminal work on automated debugging from 1950 to 1977~\\cite{deSouza2017}.\nBy reflecting on the information-theoretic foundations of fault localization, Perez proposes the DDU metric, \nwhich can be used to evaluate test suites and predict their diagnostic performance when used in SBFL~\\cite{Perez2018}. \nOne avenue for future work is exploring how a metric like this can be adapted to our context, \nand to see if it helps to explain what happened with test 43.\n\nA recent evaluation of \\emph{pure} SBFL on large-scale software systems found that it under-performs in these situations \n(only 33--40\\% of the bugs are identified within the top 10 ranked results)~\\cite{heiden2019:evaluation}. 
\nThe authors discuss several directions beyond pure SBFL, such as combining it with dynamic program analysis techniques, \nincluding additional text analysis/IR techniques~\\cite{Wang2015a}, mutation based fault localization, \nand using SBFL in an interactive feedback-based process, such as whyline-debugging~\\cite{ko2008:debugging}.\nPure SBFL is closely related to the Spectrum-Based Log Diagnosis proposed here, \nso we may see similar challenges (in fact, test 43 may already show some of this). \nOf the proposed directions to go beyond pure SBFL, \nboth the inclusion of additional text analysis/IR techniques, \nand the application of Spectrum-Based Log Diagnosis in an interactive feedback-based process\nare plausible avenues to extend our approach. \nClosely related to the latter option, \nde Souza et al.~\\cite{deSouza2018b} assess guidance and filtering strategies to \\emph{contextualize} the fault localization process.\nTheir results suggest that contextualization by guidance and filtering can improve the effectiveness of SBFL,\nby classifying more actual bugs in the top ranked results.\n\n\\begin{comment}\n\nDirect comparison~\\cite{He2018, jiang2017:what, Jones:2007:DP:1273463.1273468,\nXu2009, Hwa-YouHsu:2008:RIB:1642931.1642994}. \n\nHsu et\nal~\\cite{Hwa-YouHsu:2008:RIB:1642931.1642994} discuss methods for extracting\nfailure signatures as sequences of code executions, which in spirit is rather\nsimilar to what we are trying to accomplish.\n\nAn interesting data-structure, the event correlation\ngraph, is explores in~\\cite{Fu2012a}. 
An FL metric that takes frequencies into\naccount~\\cite{Shu2016}.\n\\end{comment}\n\n %\n\\section{Threats to Validity}\n\\label{sec:ttv}\n\n\\head{Construct Validity} %\nThe signatures that provide our ground truth were devised to determine whether a given log \\emph{in its entirety} showed symptoms of a known error.\nAs discussed in Section~\\ref{sec:dataset}, we have used these signatures to detect events that give sufficient evidence for a symptom, \nbut there may be other events that could be useful to the user that are not part of our ground truth.\nWe also assume that the logs exhibit exactly the failures described by the signature expression.\nIn reality, the logs could contain symptoms of multiple failures beyond the ones described by the signature.\n\nFurthermore, we currently do not distinguish between events that consist of a single line of text \nand events that contain a multi-line stack trace, although these clearly represent different comprehension efforts.\nThis threat could be addressed by tracking the \\emph{length} of the event contents, \nand using it to further improve the accuracy of our effort reduction measure.\n\nThe choice of clustering algorithm and parameters affects the events retrieved, \nbut our investigation currently only considers HAC with complete linkage.\nWhile we chose complete linkage to favor compact clusters, \noutliers in the dataset could cause unfavorable clustering outcomes.\nFurthermore, using the uncorrected sample standard deviation as the threshold criterion \nmay be too lenient if the variance in the scores is high.\nThis threat could be addressed by investigating alternative clustering algorithms and parameter choices.\n\nMoreover, as for the majority of log analysis frameworks, the performance of SBLD strongly depends on the quality of log abstraction. 
\nAn error in the abstraction will directly propagate to SBLD: \nFor example, if abstraction fails to identify two concrete events as being instances of the same generic event, \ntheir aggregated frequencies will be smaller and they will consequently be treated as less interesting by SBLD.\nSimilarly, the accuracy will suffer if two events that represent distinct generic events are treated as instances of the same generic event.\nFuture work could investigate alternative log abstraction approaches.\n\n\\head{Internal Validity} %\nWhile our heatmaps illustrate the interaction between additional data and SBLD performance, \nthey are not sufficient to prove a causal relationship between performance and added data.\nOur statistical comparisons suggest that a strategy of maximizing data is generally preferable, \nbut they are not sufficient for discussing the respective contribution of failing or passing logs.\n\n\\head{External Validity} %\nThis investigation is concerned with a single dataset from one industrial partner.\nStudies using additional datasets from other contexts are needed to assess the generalizability of SBLD to other domains.\nMoreover, while SBLD is designed to help users diagnose problems that are not already well understood,\nwe are assessing it on a dataset of \\emph{known} problems.\nIt could be that these errors, being known, are of a kind that are generally easier to identify than most errors.\nStudying SBLD in-situ over time and directly assessing whether end users found it helpful\nin diagnosis would better indicate the generalizability of our approach.\n\n %\n\n\\section{Concluding Remarks}\n\\label{sec:conclusion}\n\n\\head{Contributions}\nThis paper presents and evaluates Spectrum-Based Log Diagnosis (SBLD), \na method for automatically identifying segments of failing logs \nthat are likely to help users diagnose failures. 
\nOur empirical investigation of SBLD addresses the following questions: \n(i) How well does SBLD reduce the \\emph{effort needed} to identify all \\emph{failure-relevant events} in the log for a failing run? \n(ii) How is the \\emph{performance} of SBLD affected by \\emph{available data}? \n(iii) How does SBLD compare to searching for \\emph{simple textual patterns} that often occur in failure-relevant events? \n\n\\head{Results}\nIn response to (i), \nwe find that SBLD generally retrieves the failure-relevant events in a compact manner \nthat effectively reduces the effort needed to identify failure-relevant events. \nIn response to (ii), \nwe find that SBLD benefits from additional data, especially more logs from successful runs. \nSBLD also benefits from additional logs from failing runs if there is a proportional number of successful runs in the set. \nWe also find that the effect of added data is most pronounced when going from little data to \\emph{some} data rather than from \\emph{some} data to maximal data. \nIn response to (iii), \nwe find that SBLD achieves roughly the same effort reduction as traditional search-based methods but obtains slightly lower recall. \nWe trace the likely cause of this discrepancy in recall to a prominent part of our dataset, whose ground truth emphasizes rare events. \nA lesson learned in this regard is that SBLD is not suited for finding statistical outliers but rather \\emph{recurring suspects} \nthat characterize the observed failures. 
\nFurthermore, the investigation highlights that traditional pattern-based search and SBLD can complement each other nicely: \nUsers can resort to SBLD if they are unhappy with what the pattern-based searches turn\nup, and SBLD is an excellent method for finding characteristic textual patterns\nthat can form the basis of automated failure identification methods.\n\n\\head{Conclusions}\nWe conclude that SBLD shows promise as a method for diagnosing failing runs, \nthat its performance is positively affected by additional data, \nbut that it does not outperform textual search on the dataset considered. \n\n\\head{Future work}\nWe see the following directions for future work: \n(a) investigate SBLD's performance on other datasets, to better assess generalizability, \n(b) explore the impact of alternative log abstraction mechanisms,\n(c) explore ways of combining SBLD with outlier detection, to accommodate different user needs, \n(d) adapt Perez' DDU metric to our context and see if it can help predict diagnostic efficiency,\n(e) experiment with extensions of \\emph{pure SBLD} that include additional text analysis/IR techniques, \n or apply it in an interactive feedback-based process, and\n(f) rigorously assess (extensions of) SBLD in in-situ experiments.\n\n\\begin{acks}\nWe thank Marius Liaaen and Thomas Nornes of Cisco Systems Norway for help with obtaining and understanding the dataset, for developing the log abstraction\nmechanisms and for extensive discussions.\nThis work is supported by the \\grantsponsor{RCN}{Research Council of Norway}{https://www.rcn.no} through the\nCertus SFI (\\grantnum{RCN}{\\#203461/030}).\nThe empirical evaluation was performed on resources provided by \\textsc{uninett} Sigma2,\nthe national infrastructure for high performance computing and data\nstorage in Norway.\n\\end{acks}\n\n \\printbibliography\n\n\\end{document}\n", "meta": {"timestamp": "2020-08-18T02:18:33", "yymm": "2008", "arxiv_id": "2008.06948", "language": "en", "url": 
"https://arxiv.org/abs/2008.06948"}} +{"text": "\section{Introduction}\nWhen granular material in a cubic container is shaken\nhorizontally one observes experimentally different types of\ninstabilities, i.e. spontaneous formation of ripples in shallow\nbeds~\cite{StrassburgerBetatSchererRehberg:1996},\nliquefaction~\cite{RistowStrassburgerRehberg:1997,Ristow:1997}, convective\nmotion~\cite{TennakoonBehringer:1997,Jaeger} and recurrent swelling of\nshaken material where the period of swelling decouples from the\nforcing period~\cite{RosenkranzPoeschel:1996}. Other interesting experimental results concerning simultaneously vertically and horizontally vibrated granular systems~\cite{TennakoonBehringer:1998} and enhanced packing of spheres due to horizontal vibrations~\cite{PouliquenNicolasWeidman:1997} have been reported recently. Horizontally shaken\ngranular systems have been simulated numerically using cellular\nautomata~\cite{StrassburgerBetatSchererRehberg:1996} as well as\nmolecular dynamics\ntechniques~\cite{RistowStrassburgerRehberg:1997,Ristow:1997,IwashitaEtAl:1988,LiffmanMetcalfeCleary:1997,SaluenaEsipovPoeschel:1997,SPEpre99}.\nTheoretical work on horizontal shaking can be found\nin~\cite{SaluenaEsipovPoeschel:1997} and the dynamics of a single\nparticle in a horizontally shaken box has been discussed\nin~\cite{DrosselPrellberg:1997}.\n\n\begin{figure}[htbp]\n \centerline{\psfig{file=sketch.eps,width=7cm,clip=}} \n \caption{Sketch of the simulated system.}\n \label{fig:sketch}\n\end{figure}\n\nRecently the effect of convection in a horizontally shaken box filled with \ngranular material has attracted much attention, and the effect is presently studied\nexperimentally by different\ngroups~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.\nUnlike the effect of convective motion in vertically shaken granular\nmaterial which has been studied intensively experimentally,\nanalytically and by means of computer 
simulations\n(see, e.g.,~\cite{vertikalEX,JaegerVert,vertikalANA,vertikalMD}), there\nexist only a few references on horizontal shaking. Different from the\nvertical case, where the ``architecture'' of the convection pattern is\nvery simple~\cite{BizonEtAl:1998}, in horizontally shaken containers one observes a variety\nof different patterns, convecting in different directions, in parallel\nas well as perpendicular to the direction of\nforcing~\cite{TennakoonBehringer:1997}. Under certain conditions one\nobserves several convection rolls on top of each other~\cite{Jaeger}.\nAn impression of the complicated convection can be found on the\ninternet~\cite{movies}.\n\nWhereas the properties of convection in vertically sha\-ken systems\ncan be reproduced by two dimensional molecular dynamics simulations\nwith good reliability, for the case of horizontal motion the results\nof simulations are inconsistent with the experimental results: in {\em\n all} experimental investigations it was reported that the material\nflows downwards close to the vertical\nwalls~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996,movies},\nbut reported numerical simulations systematically show surface rolls\nin the opposite direction accompanying the more realistic deeper rolls, or\neven replacing them completely~\cite{LiffmanMetcalfeCleary:1997}.\n\nOur investigation is thus concerned with the convection pattern, i.e. the\nnumber and direction of the convection rolls in a two dimensional\nmolecular dynamics simulation. 
We will show that the choice of the\ndissipative material parameters has a crucial influence on the convection pattern\nand, in particular, that the type of convection rolls observed experimentally\ncan be \nreproduced by using sufficiently high dissipation constants.\n\n\section{Numerical Model}\nThe system under consideration is sketched in Fig.~\ref{fig:sketch}:\nwe simulate a two-dimensional vertical cross section of a three-dimensional\ncontainer.\nThis rectangular section of width $L=100$ (all units in the cgs system) and\ninfinite height contains $N=1000$ spherical particles. The system is\nperiodically driven by an external oscillator $x(t) = A \sin (2\pi f\nt)$ along a horizontal plane. For the effect we want to show, a\nworking frequency $f=10$ and amplitude $A=4$ are\nselected. \nThese values give an acceleration amplitude of approximately $16 g$.\nLower accelerations affect the intensity of the\nconvection but do not change the basic features of the convection \npattern which we want to discuss. \nAs has been shown in~\cite{SPEpre99},\npast the fluidization point, a much better indicator of the convective\nstate is the dimensionless velocity $A 2\pi f/ \sqrt{Lg}$. This means\nthat in small containers motion saturates earlier; hence, results for\ndifferent container lengths at the same values of the acceleration amplitude \ncannot be compared directly. Our acceleration amplitude $\approx 16g$ corresponds to\n$\approx 3g$ in a 10 cm container (provided that the frequency is the same\nand particle sizes have been \nscaled by the same amount).\n\n\nThe radii of the particles of density $2$ are homogeneously\ndistributed in the interval $[0.6, 1.4]$. The rough inner walls of the\ncontainer are simulated by attaching additional particles of the same\nradii and material properties (this simulation technique is similar to ``real''\nexperiments, e.g.~\cite{JaegerVert}). 
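As a quick numerical cross-check of the driving parameters quoted above (a minimal sketch in Python; all values are taken from the text, in cgs units):

```python
import math

# Driving and geometry parameters from the text (cgs units)
A = 4.0    # oscillation amplitude [cm]
f = 10.0   # oscillation frequency [Hz]
L = 100.0  # container width [cm]
g = 981.0  # gravitational acceleration [cm/s^2]

# Peak acceleration of x(t) = A*sin(2*pi*f*t), in units of g
a_max = A * (2.0 * math.pi * f) ** 2 / g

# Dimensionless velocity A*2*pi*f/sqrt(L*g), the convective-state
# indicator past the fluidization point
v_star = A * 2.0 * math.pi * f / math.sqrt(L * g)

print(f"a_max ~ {a_max:.1f} g, v* ~ {v_star:.2f}")
```

This reproduces the acceleration amplitude of approximately $16g$ stated in the text.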
\n\nFor the molecular dynamics simulations, we apply a modified\nsoft-particle model by Cundall and Strack~\\cite{CundallStrack:1979}:\nTwo particles $i$ and $j$, with radii $R_i$ and $R_j$ and at positions\n$\\vec{r}_i$ and $\\vec{r}_j$, interact if their compression $\\xi_{ij}=\nR_i+R_j-\\left|\\vec{r}_i -\\vec{r}_j\\right|$ is positive. In this case\nthe colliding spheres feel the force\n $F_{ij}^{N} \\vec{n}^N + F_{ij}^{S} \\vec{n}^S$, \nwith $\\vec{n}^N$ and $\\vec{n}^S$ being the unit vectors in normal and shear\ndirection. The normal force acting between colliding spheres reads\n\\begin{equation}\n F_{ij}^N = \\frac{Y\\sqrt{R^{\\,\\mbox{\\it\\footnotesize\\it eff}}_{ij}}}{1-\\nu^2} \n~\\left(\\frac{2}{3}\\xi_{ij}^{3/2} + B \\sqrt{\\xi_{ij}}\\, \n\\frac{d {\\xi_{ij}}}{dt} \\right)\n\\label{normal}\n\\end{equation}\nwhere $Y$ is the Young modulus, $\\nu$ is the Poisson ratio and $B$ \nis a material constant which characterizes the dissipative\ncharacter of the material~\\cite{BSHP}. \n\\begin{equation}\nR^{\\,\\mbox{\\it\\footnotesize\\it\n eff}}_{ij} = \\left(R_i R_j\\right)/\\left(R_i + R_j\\right) \n\\end{equation}\n is the\neffective radius. 
For a strict derivation of (\ref{normal})\nsee~\cite{BSHP,KuwabaraKono}.\n\nFor the shear force we apply the model by Haff and Werner~\cite{HaffWerner}\n\begin{equation}\nF_{ij}^S = \mbox{sign}\left({v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right) \n\min \left\{\gamma_s m_{ij}^{\,\mbox{\it\footnotesize\it eff}} \n\left|{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right|~,~\mu \n\left|F_{ij}^N\right| \right\} \n\label{shear} \n\end{equation}\nwith the effective mass $m_{ij}^{\,\mbox{\it\footnotesize\it eff}} =\n\left(m_i m_j\right)/\left(m_i + m_j\right)$ and the relative velocity\nat the point of contact\n\begin{equation}\n{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}} = \left(\dot{\vec{r}}_i - \n\dot{\vec{r}}_j\right)\cdot \vec{n}^S + R_i {\Omega}_i + R_j {\Omega}_j ~.\n\end{equation}\n$\Omega_i$ and $\Omega_j$ are the angular velocities of the particles.\n \nThe resulting torques $M_i$ and $M_j$ acting upon the particles are\n$M_i = F_{ij}^S R_i$ and $M_j = - F_{ij}^S R_j$. Eq.~(\ref{shear})\ntakes into account that the particles slide upon each other in the\ncase that the Coulomb condition $\mu \left| F_{ij}^N \right| < \left| \nF_{ij}^S \right|$ holds; otherwise they feel viscous friction.\nBy means of the normal and shear damping coefficients $\gamma _{n} \equiv BY/(1-\nu ^2)$ and $\gamma _{s}$,\nenergy loss during particle\ncontact is taken into account~\cite{restitution}.\n\nThe equations of motion for translation and rotation have been solved\nusing a Gear predictor-corrector scheme of sixth order\n(e.g.~\cite{AllenTildesley:1987}).\n\nThe values of the coefficients used in simulations are $Y/(1-\nu\n^2)=1\times 10^{8}$, $\gamma _{s}=1\times 10^{3}$, $ \mu =0.5$. 
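The contact model of Eqs.~(\ref{normal})--(\ref{shear}) can be sketched as follows (Python; the function names are ours, and the parameter values are those quoted in the text):

```python
import math

Y_EFF = 1.0e8    # Y/(1-nu^2) (cgs), from the text
B = 1.0e-4       # dissipative material constant [s] (so gamma_n = B*Y_EFF = 1e4)
GAMMA_S = 1.0e3  # shear damping coefficient
MU = 0.5         # Coulomb friction coefficient

def normal_force(xi, xi_dot, R_i, R_j):
    """Normal force of Eq. (1): Hertz elasticity plus viscous dissipation."""
    if xi <= 0.0:
        return 0.0  # particles not in contact
    R_eff = R_i * R_j / (R_i + R_j)
    return Y_EFF * math.sqrt(R_eff) * (2.0 / 3.0 * xi ** 1.5
                                       + B * math.sqrt(xi) * xi_dot)

def shear_force(v_rel, F_N, m_i, m_j):
    """Shear force of Eq. (3): viscous friction capped by the Coulomb criterion."""
    m_eff = m_i * m_j / (m_i + m_j)
    return math.copysign(min(GAMMA_S * m_eff * abs(v_rel), MU * abs(F_N)), v_rel)
```

For slow tangential motion the viscous branch $\gamma_s m^{\mathit{eff}} |v^{\mathit{rel}}|$ applies; once it would exceed $\mu |F^N|$, the particles slide and the force saturates at the Coulomb value.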
For\nthe effect we want to show, the coefficient $\gamma _{n}$ takes values within the range\n$\left[10^2,10^4\right]$.\n\n\section{Results}\nThe mechanisms for convection under horizontal shaking have been\ndiscussed in \cite{LiffmanMetcalfeCleary:1997}. Now we can show that\nthese mechanisms can be better understood by taking into account the\nparticular role of dissipation in this problem. The most striking\nconsequence of varying the normal damping coefficient is the change\nin organization of the convective pattern, i.e. the direction and\nnumber of rolls in the stationary regime. This is shown in\nFig.~\ref{fig1}, which has been obtained after averaging particle\ndisplacements over 200 cycles \n(2 snapshots per cycle).\nThe asymmetry of compression and expansion of particles close to\nthe walls (where the material becomes highly compressible) explains \nthe large transverse velocities shown in the figure.\nNote, however, that the upward and downward motion at the walls cannot be altered \nby this particular averaging procedure. \n\nThe first frame shows a convection pattern with only two rolls, where\nthe arrows indicate that the grains slide down the walls, with at most\na slight expansion of the material at the surface. \nThere are no surface rolls.\nThis is very\nsimilar to what has been observed in\nexperiments~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.\nIn this case, dissipation is high enough to damp most of the sloshing\ninduced by the vertical walls, and not even the grains just below the\nsurface can overcome the pressure gradient directed downwards.\n\nFor lower damping, we see the development of surface rolls, \nwhich\ncoexist with the inner rolls circulating in the opposite direction. Some\nenergy is now available for upward motion when the walls compress the\nmaterial fluidized during the opening of the wall ``gap'' (empty space\nwhich is created alternately during the shaking motion). 
This is the\ncase reported in \cite{LiffmanMetcalfeCleary:1997}. The last frames\ndemonstrate how the original rolls vanish while the\nsurface rolls grow, occupying a significant part of the system.\nAnother feature shown in the figure is the thin layer of material, involving\n3 particle rows close to the bottom, which performs a different kind\nof motion. This effect, which can be seen in all frames,\nis due to the presence of the constraining boundaries\nbut has not been analyzed separately.\n\onecolumn\n\begin{figure}\n\centerline{\psfig{file=fric1nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric2nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric3nn.eps,width=5.7cm,clip=}}\n\centerline{\psfig{file=fric4nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric5nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric6nn.eps,width=5.7cm,clip=}}\n\centerline{\psfig{file=fric7nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric8nn.eps,width=5.7cm,clip=}\n\hspace{0.3cm}\psfig{file=fric9nn.eps,width=5.7cm,clip=}}\n\vspace{0.3cm}\n\caption{Velocity field obtained after cycle averaging of \n particle displacements, for different values of the normal damping\n coefficient, $\gamma_n$. The first value is $1\times 10^4$, and for\n obtaining each subsequent frame the coefficient has been divided by\n two. The frames are ordered from left to right and from top to\n bottom. The cell size for averaging is approximately one particle diameter.}\n\label{fig1}\n\vspace*{-0.2cm}\n\end{figure}\n\twocolumn\n\nWith decreasing normal damping $\gamma_n$ there are two transitions \nobservable in Fig.~\ref{fig1}, meaning that the convection pattern changes\nqualitatively at these two particular values of $\gamma_n$:\nThe first transition leads to the appearance of two surface rolls\nlying on top of the bulk cells and circulating in the opposite direction.\nThe second transition eliminates the bulk rolls. 
A more detailed analysis of \nthe displacement fields (Fig.~\ref{fig2})\nallows us to locate the transitions much more precisely.\nIn Fig.~\ref{fig2} we have represented in grey-scale the horizontal and\nvertical components of the displacement vectors pictured in\nFig.~\ref{fig1} but in a denser sampling, analyzing data from 30 simulations \ncorresponding to \nvalues of the normal damping coefficient within the interval [50,10000]. \nFor horizontal displacements, we have chosen vertical sections \nat a representative position in the horizontal direction\n($x=30$). For the vertical displacements, vertical sections of the\nleftmost part of the container were selected ($x=10$); see\nFig.~\ref{fig2}, lower part.\n\begin{figure}\n \centerline{\psfig{file=vx.eps,width=4.5cm,clip=}\hspace{-0.5cm}\n \psfig{file=vy.eps,width=4.5cm,clip=}}\n\n\centerline{\psfig{file=sectionn.eps,height=4.2cm,bbllx=7pt,bblly=16pt,bburx=507pt,bbury=544pt,clip=}}\n\vspace*{0.2cm}\n\caption{Horizontal (left) and vertical (right) displacements at \n selected positions of the frames in Fig.~\ref{fig1} (see the text\n for details), for decreasing normal damping and as a function of\n depth. White indicates strongest flow along positive axis directions\n (up, right), and black the corresponding negative ones. The black region \n at the bottom of the left picture corresponds to the complex boundary\n effect observed in Fig.~\ref{fig1}, involving only two particle layers.\n The \n figure below shows a typical convection pattern together with the sections\n at $x=10$ and $x=30$ at which the displacements were recorded.}\n\label{fig2}\n\vspace*{-0.1cm}\n\end{figure}\n\nThe horizontal axis shows the values of the normal damping\ncoefficient scaled logarithmically in decreasing sequence. The\nvertical axis represents the position in the vertical direction, with the\nfree surface of the system located at $y \approx 60$. 
One observes first\nthat white surface shades, complemented by subsurface black ones,\nappear quite clearly at about $\gamma_n \approx 2000$ in Fig.~\ref{fig2}\n(left), indicating the appearance of surface rolls. On the other\nhand, Fig.~\ref{fig2} (right) shows a black area (indicative of\ndownward flow along the vertical wall) that vanishes at\n$\gamma_n \approx 200$ (at this point the grey shade represents vanishing vertical velocity). \nThe dashed lines in Fig.~\ref{fig2} lead the eye to identify the transition values.\nIn the interval $ 200 \lesssim \gamma_n\n\lesssim 2000$ surface and inner rolls coexist, rotating in opposite\ndirections.\n\nOne can analyze the situation in terms of the restitution coefficient.\nFrom Eq. (\ref{normal}), the equation of motion for the displacement\n$\xi_{ij}$ can be integrated and the relative energy loss in a\ncollision $\eta=(E_0-E)/E_0$ (with $E$ and $E_0$ being the energy of\nthe relative motion of the particles) can be evaluated approximately.\nUp to the lowest order in the expansion parameter, one\nfinds~\cite{Thomas-Thorsten}\n\begin{equation}\n\eta = 1.78 \left( \frac{\tau}{\ell} v_0\right)^{1/5}\;,\n\label{energyloss}\n\end{equation}\nwhere $v_0$ is the relative initial velocity in the normal direction, and\n$\tau$ and $\ell$ are time and length scales associated with the problem\n(see~\cite{Thomas-Thorsten} for details),\n\n\begin{equation}\n\tau = \frac{3}{2} B\; ,~~~~~~~~~\n\ell = \left(\frac{1}{3} \frac{m_{ij}^{\,\mbox{\it\footnotesize\it eff}} \n}{\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}} \nB \gamma_{n}}\right)^{2}.\n\end{equation}\nFor $\gamma_n = 10^4$ (the highest value analyzed) and the values of\nthe parameters specified above ($v_0 \approx A 2\pi f$ for collisions\nwith the incoming wall), $B= 10^{-4}$ and $\eta$ is typically\n50\%. 
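The order-of-magnitude drop of $\eta$ between $\gamma_n=10^4$ and $\gamma_n=10^3$ follows directly from Eq.~(\ref{energyloss}), since $B$ scales with $\gamma_n$ at fixed $Y/(1-\nu^2)$. A sketch in Python (the particle pair $R_i=R_j=1$ cm is our illustrative choice, so the absolute values of $\eta$, unlike their ratio, depend on it):

```python
import math

Y_EFF = 1.0e8                      # Y/(1-nu^2) (cgs), from the text
V0 = 4.0 * 2.0 * math.pi * 10.0    # impact velocity ~ A*2*pi*f [cm/s]
RHO, R = 2.0, 1.0                  # density [g/cm^3]; illustrative radius [cm]

M_EFF = 0.5 * RHO * 4.0 / 3.0 * math.pi * R ** 3   # reduced mass, equal spheres
R_EFF = 0.5 * R                                     # effective radius

def eta(gamma_n):
    """Relative energy loss per collision, Eq. (4) (lowest-order estimate)."""
    B = gamma_n / Y_EFF            # from gamma_n = B*Y/(1-nu^2)
    tau = 1.5 * B
    ell = (M_EFF / (3.0 * math.sqrt(R_EFF) * B * gamma_n)) ** 2
    return 1.78 * (tau * V0 / ell) ** 0.2

# tau ~ gamma_n and ell ~ gamma_n^(-4), so tau/ell ~ gamma_n^5 and
# eta ~ gamma_n: lowering gamma_n by a factor of 10 lowers eta by one
# order of magnitude, independently of the particle pair chosen.
print(eta(1.0e4) / eta(1.0e3))
```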
This means that after three more collisions the particle leaves\nwith too little energy to overcome the height of a single\nparticle in the gravity field. For $\gamma_n = 10^3$ and the other\nparameters kept constant, $B=10^{-5}$ and $\eta$ is\nreduced to 5\%, so that the number of collisions needed\nto reduce the particle's kinetic energy to the same residual\nfraction increases roughly by an order of magnitude. On the other\nhand, given the weak dependence of Eq. (\ref{energyloss}) on the\nvelocity, one expects that the transitions shown in Fig.~\ref{fig2}\nwill also depend only weakly on the amplitude of the shaking velocity. The reduction of the\ninelasticity $\eta$ by an order of magnitude seems enough for\nparticles to ``climb'' the walls and develop the characteristic\nsurface rolls observed in numerical simulations.\n\n\section{Discussion}\nWe have shown that the value of the normal damping coefficient\ninfluences the convective pattern of horizontally shaken granular\nmaterials. By means of molecular dynamics simulations in two\ndimensions we can reproduce the pattern observed in real experiments,\nwhich corresponds to a situation of comparatively high damping,\ncharacterized by inelasticity parameters $\eta$ larger than 5\%. For\nlower damping, the upper layers of the material develop additional\nsurface rolls as has been reported previously. As normal damping\ndecreases, the lower rolls descend and finally disappear completely at\ninelasticities of the order of 1\%.\n\n\begin{acknowledgement}\nThe authors want to thank R. P. Behringer, H. M. Jaeger, M. Medved,\nand D. Rosenkranz for providing experimental results prior to\npublication and V. Buchholtz, S. E. Esipov, and L. Schimansky-Geier\nfor discussion. 
The calculations have been done on the parallel\nmachine {\\it KATJA} (http://summa.physik.hu-berlin.de/KATJA/) of the\nmedical department {\\em Charit\\'e} of the Humboldt University Berlin.\nThe work was supported by Deut\\-sche Forschungsgemeinschaft through\ngrant Po 472/3-2.\n\\end{acknowledgement}\n\n", "meta": {"timestamp": "2002-03-19T12:47:20", "yymm": "9807", "arxiv_id": "cond-mat/9807071", "language": "en", "url": "https://arxiv.org/abs/cond-mat/9807071"}} +{"text": "\\section{\\label{sec:intro}Introduction}\n \nDemonstration of non-abelian exchange statistics is one of the most active areas of condensed matter research and yet experimental realization of braiding of Majorana modes remains elusive~\\cite{RevModPhys.80.1083,zhang2019next}. Most efforts so far have been focused on superconductor/semiconductor nanowire hybrids, where Majorana bound states (MBS) are expected to form at the ends of a wire or at boundaries between topologically trivial and non-trivial regions~\\cite{rokhinson2012fractional, deng2012anomalous, mourik2012signatures, LutchynReview}. Recently, it became clear that abrupt interfaces may also host topologically trivial Andreev states with experimental signatures similar to MBS \\cite{pan2020generic,Yu2021}, which makes demonstrating braiding in nanowire-based platforms challenging. Phase-controlled long Josephson junctions (JJ) open much wider phase space to realize MBS with a promise to solve some problems of the nanowire platform, such as enabling zero-field operation to avoid detrimental flux focusing for in-plane fields \\cite{pientka2017topological, ren2019topological}. However, MBSs in long JJs suffer from the same problems as in the original Fu-Kane proposal for topological insulator/superconductor JJs, such as poor control of flux motion along the junction and presence of sharp interfaces in the vicinity of MBS-carrying vortices which may host Andreev states and trap quasiparticles. 
For instance, MBS spectroscopy in both HgTe and InAs-based JJs shows a soft gap \\cite{fornieri2019evidence}, despite a hard SC gap in an underlying InAs/Al heterostructure.\n\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}{0.95\\textwidth}\n\\includegraphics[width=1\\textwidth]{Schematic.pdf}\n\\caption{\\label{fig:schematic}}\n\\end{subfigure}\n\\begin{subfigure}{0.35\\textwidth}\n\\includegraphics[width=1\\textwidth]{stack_2.pdf}\n\\caption{\\label{fig:layers}}\n\\end{subfigure}\n\\begin{subfigure}{0.6\\textwidth}\n\\includegraphics[width=1\\textwidth]{Flow_2.pdf}\n\\caption{\\label{fig:flow}}\n\\end{subfigure}\n\\caption{\\label{fig:one} (a) Schematic of the Majorana braiding platform. Magnetic multilayer (MML) is patterned into a track and is separated from TSC by a thin insulating layer. Green lines represent on-chip microwave resonators for a dispersive parity readout setup. The left inset shows a magnified view of a SVP and the right inset shows the role of each layer (b) Expanded view of the composition of an MML (c) Process flow diagram for our Majorana braiding scheme. Here, $T_c$ is superconducting transition temperature and $T_{BKT}$ is Berezinskii\u2013Kosterlitz\u2013Thouless transition temperature for the TSC.}\n\n\\end{figure*}\n\nIn the search for alternate platforms to realize Majorana braiding, spectroscopic signatures of MBS have been recently reported in STM studies of vortex cores in iron-based topological superconductors (TSC) \\cite{wang2018evidence}. Notably, a hard gap surrounding the zero-bias peak at a relatively high temperature of $0.55$ K, and a $5$ K separation gap from trivial Caroli-de Gennes-Matricon (CdGM) states were observed \\cite{chen2020observation, chen2018discrete}. Moreover, vortices in a TSC can be field-coupled to a skyrmion in an electrically-separated magnetic multilayer (MML) \\cite{volkov,petrovic2021skyrmion}, which can be used to manipulate the vortex. 
This allows for physical separation of the manipulation layer from the layer wherein MBS reside, eliminating the problem of abrupt interfaces faced by nanowire hybrids and JJs. Finally, recent advances in the field of spintronics provide a flexible toolbox to design MML in which skyrmions of various sizes can be stabilized in zero external magnetic field and at low temperatures \\cite{petrovic2021skyrmion, buttner2018theory, dupe2016engineering}. Under the right conditions, stray fields from these skyrmions alone can nucleate vortices in the adjacent superconducting layer. In this paper, we propose TSC--MML heterostructures hosting skyrmion-vortex pairs (SVP) as a viable platform to realize Majorana braiding. By patterning the MML into a track and by driving skyrmions in the MML with local spin-orbit torques (SOT), we show that the SVPs can be effectively moved along the track, thereby facilitating braiding of MBS bound to vortices.\n\nThe notion of coupling skyrmions (Sk) and superconducting vortices (Vx) through magnetic fields has been studied before \\cite{volkov, baumard2019generation, zhou_fusion_2022, PhysRevLett.117.077002, PhysRevB.105.224509, PhysRevB.100.064504, PhysRevB.93.224505, PhysRevB.99.134505, PhysRevApplied.12.034048}. Menezes et al. \\cite{menezes2019manipulation} performed numerical simulations to study the motion of a skyrmion--vortex pair when the vortex is dragged via supercurrents and Hals et al. \\cite{hals2016composite} proposed an analytical model for the motion of such a pair where a skyrmion and a vortex are coupled via exchange fields. However, the dynamics of a SVP in the context of Majorana braiding remains largely unexplored. Furthermore, no \\textit{in-situ} non-demolition experimental technique has been proposed to measure MBS in these TSC--MML heterostructures. 
In this paper, through micromagnetic simulations and analytical calculations within London and Thiele formalisms, we study the dynamics of a SVP subjected to external spin torques. We demonstrate that the SVP moves without dissociation up to speeds necessary to complete Majorana braiding within estimated quasiparticle poisoning time. We further eliminate the problem of \\textit{in-situ} MBS measurements by proposing a novel on-chip microwave readout technique. By coupling the electric field of the microwave cavity to dipole-moments of transitions from Majorana modes to CdGM modes, we show that a topological non-demolition dispersive readout of the MBS parity can be realized. Moreover, we show that our platform can be used to make the first experimental observations of quasiparticle poisoning times in topological superconducting vortices.\n\nThe paper is organized as follows: in Section~\\ref{sec:plat} we present a schematic and describe our platform. In Section~\\ref{sec:initial} we present the conditions for initializing a skyrmion--vortex pair and discuss its equilibrium properties. In particular, we characterize the skyrmion--vortex binding strength. In Section~\\ref{sec:braid} we discuss the dynamics of a SVP in the context of braiding. Then in Section~\\ref{sec:read}, we present details of our microwave readout technique. 
Finally, we discuss the scope of our platform in Section~\\ref{sec:summ}.\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{energies.jpg}\n \\caption{\\label{fig:energies}}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{forces.jpg}\n \\caption{\\label{fig:forces}}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{fvav.jpg}\n \\caption{\\label{fig:fvav}}\n \\end{subfigure}\n \\caption{\\label{fig:onenew} (a -- b) Normalized energies and forces for Sk--Vx interaction between a Pearl vortex and a N\\'eel skyrmion of varying thickness. (c) Attractive $F_{Vx-Avx}$ and repulsive $F_{Sk-Avx}$ (colored lines) for the example materials in Appendix~\\ref{app:A}: $M_{0}=1450$ emu/cc, $r_{sk}=35$ nm, $d_s = 50$ nm, $\\Lambda = 5$ $\\mu$m and $\\xi=15$ nm.}\n\n\\end{figure*}\n\n\\section{\\label{sec:plat}Platform Description}\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.59\\textwidth}\n \\includegraphics[width=1\\textwidth]{Braiding.jpg}\n \\caption{\\label{fig:braiding}}\n \\end{subfigure}\n \\begin{subfigure}{0.39\\textwidth}\n \\includegraphics[width=1\\textwidth]{t0.jpg}\n \\caption{\\label{fig:t0}}\n \\end{subfigure}\n \n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t1.jpg}\n \\caption{\\label{fig:t1}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t2.jpg}\n \\caption{\\label{fig:t2}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t3.jpg}\n \\caption{\\label{fig:t3}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t4.jpg}\n \\caption{\\label{fig:t4}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t55.jpg}\n \\caption{\\label{fig:t5}}\n \\end{subfigure}\n 
\\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t6.jpg}\n \\caption{\\label{fig:t6}}\n \\end{subfigure}\n\\caption{\\label{fig:two} (a) Schematic of our braiding process: manipulations of four skyrmions in the MML track are shown. MBS at the centers of vortices bound to each of these skyrmions are labeled $\\gamma_1$--$\\gamma_4$. Ohmic contacts in HM layers of the MML are shown in brown and rf readout lines are shown in green. II--VI show the steps involved in braiding $\\gamma_2$ and $\\gamma_4$. In step II, $\\gamma_1$ and $\\gamma_2$ are brought close to rf lines by applying charge currents from C to A and D to B, respectively. $\\gamma_1$ and $\\gamma_2$ are then initialized by performing a dispersive readout of their parity (see Section~\\ref{sec:read}). Similarly, $\\gamma_3$ and $\\gamma_4$ are initialized after applying charge currents along P to R and Q to S, respectively. In step III, $\\gamma_2$ is moved aside to make room for $\\gamma_4$ by applying currents from B to X followed by applying currents from X to C. In step IV, $\\gamma_4$ is braided with $\\gamma_2$ by applying currents along S to X and X to B. Finally, in step V, the braiding process is completed by bringing $\\gamma_2$ to S by applying currents from A to X and from X to S. Parities (i.e., fusion outcomes) of $\\gamma_1$ and $\\gamma_4$, and $\\gamma_3$ and $\\gamma_2$ are then measured in step VI. Fusion outcomes in each pair of MBS indicate the presence or absence of a fermion corresponding to a parity of $\\pm1$ \\cite{PhysRevApplied.12.054035, PhysRevX.6.031016}. (b) Initial position of the skyrmions labeled A and B in the micromagnetic simulation for skyrmion braiding (see Appendix.~\\ref{app:A}) (c--h) Positions of the two skyrmions at the given times as the braiding progresses. 
Charge current $j = 2\\times 10^{12}$ A/m$^2$ was applied.}\n\n\\end{figure*}\n\nOur setup consists of a thin TSC layer that hosts vortices grown on top of a MML that hosts skyrmions as shown in Fig.~\\ref{fig:schematic}. A thin insulating layer separates the magnetic and superconducting layers ensuring electrical separation between the two. Vortices in a TSC are expected to host MBS at their cores \\cite{wang2018evidence,chen2020observation, chen2018discrete}. Stray fields from a skyrmion in the MML nucleate such a vortex in the TSC, forming a bound skyrmion--vortex pair under favorable energy conditions (see Sec.~\\ref{sec:initial}). This phenomenon has been recently experimentally demonstrated in Ref.~\\cite{petrovic2021skyrmion}, where stray fields from N\\'eel skyrmions in Ir/Fe/Co/Ni magnetic multilayers nucleated vortices in a bare Niobium superconducting film.\n\nThe MML consists of alternating magnetic and heavy metal (HM) layers, as shown in Fig.~\\ref{fig:layers}. The size of a skyrmion in a MML is determined by a delicate balance between exchange, magnetostatic, anisotropy and Dzyaloshinskii\u2013Moriya interaction (DMI) energies \\cite{wang2018theory, romming2015field} -- and the balance is highly tunable, thanks to advances in spintronics \\cite{buttner2018theory, dupe2016engineering, soumyanarayanan2017tunable}. Given a TSC, this tunability allows us to find a variety of magnetic materials and skyrmion sizes that can satisfy the vortex nucleation condition [to be detailed in Eq.~(\\ref{eqn:nuc})]. In Appendix~\\ref{app:A}, we provide a specific example of FeTeSe topological superconductor coupled with Ir/Fe/Co/Ni magnetic multilayers.\n\nDue to large intrinsic spin-orbit coupling, a charge current through the heavy metal layers of a MML exerts spin-orbit torques (SOT) on the magnetic moments in the MML, which have been shown to drive skyrmions along magnetic tracks \\cite{fert2013skyrmions, woo2017spin}. 
In our platform, to realize Majorana braiding we propose to pattern the MML into a track as shown in Fig.~\ref{fig:schematic} and use local spin-orbit torques to move skyrmions along each leg of the track. If skyrmions are braided on the MML track, and if the skyrmion-vortex binding force is stronger than the total pinning force on the SVPs, then the MBS-hosting vortices in the TSC will closely follow the motion of the skyrmions, resulting in the braiding of MBS. We note here that there is an upper threshold speed with which a SVP can be moved, as detailed in Sec.~\ref{sec:braid}. By using experimentally-relevant parameters for TSC and MML in Appendix~\ref{app:A}, we show that our Majorana braiding scheme can be realized with existing materials.\n\nWe propose a non-demolition microwave measurement technique for the readout of the quantum information encoded in a pair of vortex Majorana bound states (MBS). A similar method has been proposed for the parity readout in topological Josephson junctions~\cite{PhysRevB.92.245432,Vayrynen2015,Yavilberg2015,PhysRevB.99.235420,PRXQuantum.1.020313} and in Coulomb blockaded Majorana islands~\cite{PhysRevB.95.235305}. Dipole moments of transitions from MBS to CdGM levels couple dispersively to electric fields in a microwave cavity, producing a parity-dependent dispersive shift in the cavity resonator frequency. Thus, by probing the change in the resonator's natural frequency, the state of the Majorana modes can be inferred. Virtual transitions from the Majorana subspace to the excited CdGM subspace induced by coupling to the cavity electric field are truly parity conserving, making our readout scheme a so-called topological quantum non-demolition technique \cite{PRXQuantum.1.020313, PhysRevB.99.235420}. The readout scheme is explained in greater detail in Sec.~\ref{sec:read}.\n\nAs discussed above, in our platform we consider coupling between a thin superconducting layer and magnetic multilayers. 
We note that in thin superconducting films, vortices are characterized by the Pearl penetration depth, given by $\\Lambda \\ =\\ \\lambda ^{2} /d_{s}$, where $\\lambda$ is the London penetration depth and $d_{s}$ is the thickness of the TSC film. Typically, these penetration depths $\\Lambda$ are much larger than the skyrmion radii $r_{sk}$ in MMLs of interest. Further, interfacial DMI in the MML stabilizes a N\\'eel skyrmion as opposed to a Bloch skyrmion. Hence, from here on we study only the coupling between a N\\'eel skyrmion and a Pearl vortex in the limit $\\Lambda\\gg r_{sk}$.\n\n\\section{\\label{sec:initial}Initialization and SVP in Equilibrium}\n\nFig.~\\ref{fig:flow} illustrates the process flow of our initialization scheme. Skyrmions can be generated individually in the MML by locally modifying the magnetic anisotropy through an artificially created defect center and applying a current through the adjacent heavy metal layers \\cite{zhang2020skyrmion}. Such defect centers have been experimentally observed to act as skyrmion creation sites \\cite{buttner2017field}. When the TSC--MML heterostructure is cooled below the superconducting transition temperature (SC $T_{C}$), stray fields from a skyrmion in the MML will nucleate a vortex and an antivortex in the superconducting layer if the nucleation leads to a lowering of the overall free energy of the system \\cite{volkov}. 
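As a quick numerical check of the thin-film regime invoked above, the Pearl depth $\Lambda=\lambda^2/d_s$ can be evaluated directly; the film and skyrmion parameters below are illustrative assumptions, not the Appendix~\ref{app:A} materials:

```python
# Pearl penetration depth of a thin superconducting film: Lambda = lambda^2 / d_s.
lam = 100e-9    # London penetration depth (illustrative assumption), m
d_s = 10e-9     # superconducting film thickness (illustrative), m
r_sk = 50e-9    # typical MML skyrmion radius (illustrative), m

Lambda = lam**2 / d_s
print(Lambda)          # 1e-06 m, i.e. 1 micron
print(Lambda / r_sk)   # 20.0, so the limit Lambda >> r_sk holds for these numbers
```

Since $\Lambda$ grows as $1/d_s$, thinner films are pushed deeper into the Pearl limit assumed throughout this section.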
An analytical expression has been obtained for the nucleation condition in Ref.~\\cite{NeelInteraction}, ignoring the contributions of dipolar and Zeeman energies to the total magnetic energy: a N\\'eel skyrmion nucleates a vortex directly on top of it if \n\\begin{equation}\n d_{m}\\left[ \\alpha _{K}\\frac{Kr_{sk}^{2}}{2} -\\alpha _{A} A-M_{0} \\phi _{0}\\right] \\geq \\frac{{\\phi _{0}}^2}{8 \\pi^2 \\lambda} \\ln\\left(\\frac{\\Lambda }{\\xi }\\right).\n \\label{eqn:nuc}\n\\end{equation}\n\\noindent Here, $d_{m}$ is the effective thickness, $M_{0}$ is the saturation magnetization, $A$ is the exchange stiffness and $K$ is the perpendicular anisotropy constant of the MML; $\\alpha_K$ and $\\alpha_A$ are positive constants that depend on the skyrmion's spatial profile (see Appendix~\\ref{app:A}), $r_{sk}$ is the radius of the skyrmion in the presence of a Pearl vortex \\footnote{The radius of a skyrmion is not expected to change significantly in the presence of a vortex \\cite{NeelInteraction}. We verified this claim with micromagnetic simulations. For the materials in Appendix~\\ref{app:A}, when vortex fields are applied to a bare skyrmion, its radius increases by less than $10\\%$. So, for numerical calculations in this paper, we use the bare skyrmion radius for $r_{sk}$.}, $\\phi _{0}$ is the magnetic flux quantum, and $\\Lambda$ ($\\xi$) is the Pearl depth (coherence length) of the TSC. Although a complete solution of the nucleation condition must include the contributions of dipolar and Zeeman energies to the total energy of the MML, such a calculation can only be done numerically; Eq.~(\\ref{eqn:nuc}) can still be used as an approximate estimate. For the choice of materials listed in the Appendix, the left side of the equation exceeds the right side by $400\\%$, strongly suggesting the nucleation of a vortex for every skyrmion in the MML. 
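The inequality in Eq.~(\ref{eqn:nuc}) is straightforward to evaluate once material parameters are fixed. A minimal sketch, with dimensionless placeholder numbers rather than real material values (all inputs must be supplied in one consistent unit system):

```python
import math

def nucleation_sides(d_m, alpha_K, K, r_sk, alpha_A, A, M0, phi0, lam, Lambda, xi):
    """Left- and right-hand sides of the vortex nucleation condition, Eq. (nuc).

    All arguments are placeholders in consistent (arbitrary) units; they are
    NOT the material parameters of Appendix A.
    """
    lhs = d_m * (alpha_K * K * r_sk**2 / 2 - alpha_A * A - M0 * phi0)
    rhs = phi0**2 / (8 * math.pi**2 * lam) * math.log(Lambda / xi)
    return lhs, rhs

# With these made-up numbers the condition is comfortably satisfied:
lhs, rhs = nucleation_sides(d_m=1.0, alpha_K=1.0, K=10.0, r_sk=1.0,
                            alpha_A=1.0, A=0.1, M0=1.0, phi0=1.0,
                            lam=1.0, Lambda=100.0, xi=1.0)
print(lhs >= rhs)   # True for this placeholder parameter set
```

The anisotropy term grows with $r_{sk}^2$, so larger skyrmions satisfy the condition more easily, consistent with the tunability argument above.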
Furthermore, skyrmions in Ir/Fe/Co/Ni heterostructures have also been experimentally shown to nucleate vortices in Niobium superconducting films \\cite{petrovic2021skyrmion}. \n\nWe proceed to characterize the strength of the skyrmion (Sk)--vortex (Vx) binding force, as it plays a crucial role in determining the feasibility of moving the skyrmion and the vortex as a single object. The spatial magnetic profile of a N\\'eel skyrmion is given by $\\boldsymbol{M}_{sk} =M_{0}[\\zeta \\sin\\theta(r) \\boldsymbol{\\hat{r}}+ \\cos\\theta(r) \\boldsymbol{\\hat{z}}]$, where $\\zeta=\\pm 1$ is the chirality and $\\theta(r)$ is the polar angle of the magnetization. For $\\Lambda\\gg r_{sk}$, the interaction energy between a vortex and a skyrmion is given by \\cite{NeelInteraction}:\n\\begin{equation}\n E_{Sk-Vx} =\\frac{M_{0} \\phi _{0} r_{sk}^{2}}{2\\Lambda }\\int_{0}^{\\infty} \\frac{1}{q^2}(e^{-q\\tilde{d}}-1) J_{0}(qR) m_{z,\\theta}(q) \\,dq,\n \\label{eqn:energy}\n\\end{equation}\n\n\\noindent where $\\tilde{d} = d_m \\slash r_{sk}$, $J_{n}$ is the $n$th-order Bessel function of the first kind, and $R=r/r_{sk}$ is the normalized horizontal displacement $r$ between the centers of the skyrmion and the vortex. 
$m_{z,\\theta}(q)$ contains information about the skyrmion's spatial profile and is given by \\cite{NeelInteraction}: $m_{z,\\theta}(q) = \\int_{0}^{\\infty} x [\\zeta q + \\theta^\\prime ( x )] J_{1}( qx) \\sin\\theta(x) \\,dx$, where $\\theta ( x )$ is determined by the skyrmion ansatz.\n\nWe now derive an expression for the skyrmion--vortex restoring force by differentiating Eq.~(\\ref{eqn:energy}) with respect to $r$:\n\\begin{equation}\n F_{Sk-Vx} =-\\frac{M_{0} \\phi _{0} r_{sk}}{2\\Lambda }\\int_{0}^{\\infty} \\frac{1}{q}(1- e^{-q\\tilde{d}}) J_{1}(qR) m_{z,\\theta}(q) \\,dq.\n \\label{eqn:force}\n\\end{equation}\nFor small horizontal displacements $r\\ll r_{sk}$ between the centers of the skyrmion and the vortex, we can approximate the Sk--Vx energy as:\n\\begin{equation}\n E_{Sk-Vx} =\\frac{1}{2} kr^{2},\n \\label{eqn:springconstant}\n\\end{equation}\n\\noindent with an effective spring constant \n\\begin{equation}\n k =-\\frac{M_{0} \\phi _{0}}{4\\Lambda }\\int_{0}^{\\infty} (1- e^{-q\\tilde{d}}) m_{z,\\theta}(q) \\,dq.\n \\label{eqn:spring}\n\\end{equation}\n\nFigs.~\\ref{fig:energies}--\\ref{fig:forces} show the binding energy and restoring force between a vortex and skyrmions of varying thickness for the materials listed in Appendix~\\ref{app:A}. Here we used the domain-wall ansatz for the skyrmion, with $\\theta(x) = 2\\tan^{-1}[\\frac{\\sinh(r_{sk}/\\delta)}{\\sinh(r_{sk}x/\\delta)}]$, where $r_{sk}/\\delta$ is the ratio of the skyrmion radius to its domain-wall width and $x$ is the distance from the center of the skyrmion normalized by $r_{sk}$. As seen in Fig.~\\ref{fig:forces}, the restoring force between a skyrmion and a vortex increases with increasing separation between their centers until it reaches a maximum value, $F_{max}$, and then decreases with further increase in separation. We note that $F_{max}$ occurs when the Sk--Vx separation is equal to the radius of the skyrmion, i.e. 
when $R=1$ in Eq.~(\\ref{eqn:force}):\n\\begin{equation}\n F_{max} = -\\frac{M_{0} \\phi _{0} r_{sk}}{2\\Lambda }\\int_{0}^{\\infty} \\frac{1}{q}(1- e^{-q\\tilde{d}}) J_{1}(q) m_{z,\\theta}(q) \\,dq. \n \\label{eqn:fmax}\n\\end{equation}\n\n\\noindent As the size of the skyrmion increases, the maximum binding force $F_{max}$ of the SVP increases. For a given skyrmion size, increasing the skyrmion thickness increases the attractive force until the thickness reaches the size of the skyrmion. Further increase in MML thickness does not lead to an appreciable increase in stray fields outside the MML layer and, as a result, the Sk--Vx force saturates.\n\nIt is important to note that stray fields from a skyrmion nucleate both a vortex and an antivortex (Avx) in the superconducting layer \\cite{volkov, PhysRevLett.88.017001, milosevic_guided_2010, PhysRevLett.93.267006}. While the skyrmion attracts the vortex, it repels the antivortex. Eqs.~(\\ref{eqn:energy}) and (\\ref{eqn:force}) remain valid for the Sk--Avx interaction, but with opposite sign. The equilibrium position of the antivortex is at the location where the repulsive skyrmion--antivortex force, $F_{Sk-Avx}$, is balanced by the attractive vortex--antivortex force, $F_{Vx-Avx}$~\\cite{lemberger2013theory, ge2017controlled}. Fig.~\\ref{fig:fvav} shows $F_{Vx-Avx}$ against $F_{Sk-Avx}$ for the platform in the Appendix. We see that for thicker magnets, the location of the antivortex is far away from that of the vortex, where the Avx can be pinned with artificially implanted pinning centers \\cite{aichner2019ultradense, gonzalez2018vortex}. For thin magnetic films, where the antivortex is expected to be nucleated right outside the skyrmion radius, we can leverage the Berezinskii\u2013Kosterlitz\u2013Thouless (BKT) transition to negate $F_{Vx-Avx}$ for Vx--Avx distances $r<\\Lambda$ \\cite{PhysRevB.104.024509, schneider_excess_2014, goldman2013berezinskii, zhao2013evidence}. 
Namely, when a Pearl superconducting film is cooled to a temperature below $T_C$ but above $T_{BKT}$, vortices and antivortices dissociate to gain entropy, which minimizes the overall free energy of the system \\cite{beasley1979possibility}. While the attractive force between a vortex and an antivortex is nullified, a skyrmion in the MML still attracts the vortex and pushes the antivortex towards the edge of the sample, where it can be pinned. Therefore, we assume that the antivortices are located far away and neglect their presence in our braiding and readout schemes.\n\n\\section{\\label{sec:braid}Braiding}\n\nMajorana braiding statistics can be probed by braiding a pair of MBS \\cite{RevModPhys.80.1083}, which involves swapping the positions of the two vortices hosting the MBS. We propose to pattern the MML into interconnected Y-junctions as shown in Fig.~\\ref{fig:two} to enable this swapping. Ohmic contacts in the HM layers across each leg of the Y-junctions enable independent application of charge currents along each leg of the track. These charge currents in turn apply spin-orbit torques on the adjacent magnetic layers and enable skyrmions to be moved independently along each leg of the track. As long as the skyrmion and the vortex move as a collective object, braiding of skyrmions in the MML leads to braiding of the MBS-hosting vortices in the superconducting layer. Below we study the dynamics of an SVP subjected to spin torques for braiding. 
We calculate all external forces acting on the SVP in the process and discuss the limits in which the skyrmion and the vortex move as a collective object.\n\nFor a charge current $\\bm{J}$ in the HM layer, the dynamics in the magnetic layer is given by the modified Landau\u2013Lifshitz\u2013Gilbert (LLG) equation \\cite{hayashi2014quantitative, slonczewski1996current}:\n\\begin{equation}\n \\partial _{t}\\bm{m} =-\\gamma (\\bm{m} \\times {{\\bm H}_{eff}} +\\eta J\\ \\bm{m} \\times \\bm{m} \\times \\bm{p}) +\\alpha \\bm{m} \\times \\partial _{t}\\bm{m}\n \\label{eqn:llg}\n\\end{equation}\n\\noindent where we have included the damping-like term from the SOT and neglected the field-like term, as it does not induce motion of N\\'eel skyrmions for our geometry \\cite{jiang_blowing_2015}. Here, $\\gamma$ is the gyromagnetic ratio, $\\alpha$ is the Gilbert damping parameter, and ${{\\bm H}_{eff}}$ is the effective field from dipole, exchange, anisotropy and DMI interactions. $\\bm{p}=\\mathrm{sgn}(\\Theta _{SH})\\bm{\\hat{J}} \\times \\hat{\\bm{n}}$ is the direction of polarization of the spin current, where $\\Theta _{SH}$ is the spin Hall angle, $\\bm{\\hat{J}}$ is the direction of the charge current in the HM layer and $\\hat{\\bm{n}}$ is the unit vector normal to the MML. $\\eta=\\hbar \\Theta _{SH}/(2eM_{0} d_{m})$ quantifies the strength of the torque, $\\hbar$ is the reduced Planck constant and $e$ is the charge of an electron. 
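For a feel of the numbers, the torque strength $\eta=\hbar\Theta_{SH}/(2eM_0 d_m)$ can be evaluated for illustrative parameters (assumptions for this sketch, not the Appendix~\ref{app:A} values):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J s
e = 1.602176634e-19      # elementary charge, C

Theta_SH = 0.1           # spin Hall angle (illustrative assumption)
M0 = 1.0e6               # saturation magnetization, A/m (illustrative)
d_m = 10e-9              # effective magnetic thickness, m (illustrative)

# SOT strength eta = hbar * Theta_SH / (2 e M0 d_m)
eta = hbar * Theta_SH / (2 * e * M0 * d_m)
print(eta)   # ~3.3e-15 in SI units for these inputs
```

Note that $\eta$ scales inversely with $M_0 d_m$: thicker or more strongly magnetized stacks require larger current densities for the same torque.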
\n\nAssuming skyrmion and vortex move as a collective object, semiclassical equations of motion for the centers of mass of the skyrmion and the vortex can be written using collective coordinate approach as done in Ref.~\\cite{hals2016composite}:\n\\begin{eqnarray}\n m_{sk}\\ddot{\\bm{R}}_{sk}= {\\bf{F}}_{SOT} - \\frac{\\partial U_{sk,\\ pin}}{\\partial \\bm{R}_{sk}} - & {\\bm{G}}_{sk}\\times \\dot{\\bm{R}}_{sk} - 4\\pi s \\alpha \\dot{\\bm{R}}_{sk} \\nonumber \\\\\n &- k({\\bm{R}}_{sk}-{\\bm{r}}_{vx}),\n \\label{eqn:skmotion}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n m_{vx}\\ddot{\\bm{R}}_{vx} = - \\frac{\\partial U_{vx,\\ pin}}{\\partial \\bm{R}_{vx}} - &{\\bm{G}}_{vx}\\times \\dot{\\bm{R}}_{vx} - {\\alpha}_{vx} \\dot{\\bm{R}}_{vx} \\nonumber \\\\\n & + k({\\bm{R}}_{sk}-{\\bm{r}}_{vx}),\n \\label{eqn:vxmotion}\n\\end{eqnarray}\n\\noindent where ${\\bm{R}}_{sk}$ (${\\bm{R}}_{vx}$), $m_{sk}$ ($m_{vx}$) and $q_{sk}$ ($q_{vx}$) are the position, mass and chirality of the skyrmion (vortex). $k$ is the effective spring constant of the Sk--Vx system, given in Eq.~(\\ref{eqn:spring}). ${\\bm{F}}_{SOT}=\\pi ^{2} \\gamma \\eta r_{sk} s\\bm{{J}} \\times \\hat{\\bm{n}}$ is the force on a skyrmion due to spin torques in Thiele formalism, where $s=M_0 d_m/\\gamma$ is the spin density \\cite{upadhyaya2015electric, thiele1970theory}. The third term on the right side of Eq.~(\\ref{eqn:skmotion}) gives Magnus force on the skyrmion, with ${\\bm{G}}_{sk} = 4\\pi s q_{sk}\\hat{\\bm{z}}$, and the fourth term characterizes a dissipative force due to Gilbert damping. Similarly, the second term on the right side of Eq.~(\\ref{eqn:vxmotion}) gives the Magnus force on the vortex with ${\\bm{G}}_{vx} = 2\\pi s n_{vx} q_{vx} \\hat{\\bm{z}}$, with $n_{vx}$ being the superfluid density of the TSC, and the third term characterizes viscous force with friction coefficient ${\\alpha}_{vx}$. $U_{sk,\\ pin}$ ($U_{vx,\\ pin}$) gives the pinning potential landscape for the skyrmion (vortex). 
The last term in Eq.~(\\ref{eqn:vxmotion}) represents the restoring force on the vortex due to its separation from the skyrmion and is valid when $\\mid{\\bm{R}}_{sk}-{\\bm{R}}_{vx}\\mid \\ll r_{sk}$.\n\nEvents with core positions at distances $> 100$~m were rejected,\ncorresponding to the area near the fifth telescope currently \nnot included in the system.\n\\begin{figure}[htb]\n\\begin{center}\n\\mbox{\n\\epsfxsize8.0cm\n\\epsffile{coreloc.eps}}\n\\end{center}\n\\caption\n{Distribution of the core locations of events, after the cuts to\nenhance the fraction of $\\gamma$-rays. Also indicated are the\nselection region and the telescope locations.}\n\\label{fig_core}\n\\end{figure}\nAfter these cuts, a sample of 11874 on-source events remained, including\na background of 1543 cosmic-ray events, as estimated using the equal-sized\noff-source region.\n\nFor such a sample of events at TeV energies, \nthe core location is measured with a\nprecision of about 6~m to 7~m for events with cores within a \ndistance up to 100~m from the central telescope; for larger\ndistances, the resolution degrades gradually, due to\nthe smaller angles between the different views,\nand the reduced image {\\em size} (see Fig.~\\ref{fig_coreres}).\n\\begin{figure}[htb]\n\\begin{center}\n\\mbox{\n\\epsfxsize7.0cm\n\\epsffile{res.ps}}\n\\end{center}\n\\caption\n{Resolution in the core position as a function of the distance\nbetween the shower core and the central telescope, as determined\nfrom Monte Carlo simulations of $\\gamma$-ray showers with\nenergies between 1 and 2 TeV. The resolution is defined by\nfitting a Gaussian to the distribution of differences between the true and\nreconstructed coordinates of the shower impact point, projected\nonto the $x$ and $y$ axes of the coordinate system. 
Due to slight\nnon-Gaussian tails, the rms widths of the distributions are about\n20\\% larger.}\n\\label{fig_coreres}\n\\end{figure}\n\n\\section{The shape of the Cherenkov light pool for $\\gamma$-ray\nevents}\n\nUsing the technique described in the introduction, the intensity\ndistribution in the Cherenkov light pool can now simply be traced\nby selecting events with the shower core at a given distance $r_i$ from\na `reference' \ntelescope $i$ and with a fixed image {\\em size} $a_i$, and plotting the\nmean amplitude $a_j$ of telescope $j$ as a function of $r_j$.\nHowever, in this simplest form, the procedure is not very practical,\ngiven the small sample of events remaining after such additional\ncuts. To be able to use a larger sample of events, one has to\n\\begin{itemize}\n\\item select events with $a_i$ in a certain range, $a_{min} < a_i \n< a_{max}$, and plot $a_j/a_i$ vs $r_j$, assuming that the shape of\nthe light pool does not change rapidly with energy, and that one\ncan average over a certain energy range\n\\item repeat the measurement of $a_j(r_j)/a_i$ for different (small) bins \nin $r_i$, and combine these measurements after normalizing the distributions\nat some fixed distance\n\\item combine the results obtained for different pairs of telescopes $i,j$.\n\\end{itemize}\nCare has to be taken not to introduce a bias due to the trigger\ncondition. For example, one has to ensure that the selection\ncriterion of at least three triggered telescopes is fulfilled regardless\nof whether telescope $j$ has triggered or not; otherwise the selection\nmight enforce a minimum image {\\em size} in telescope $j$. \n\nTo avoid truncation of images by the border of the camera, only images\nwith a maximum distance of $1.5^\\circ$ between the image centroid and\nthe camera center were included, leaving a $0.6^\\circ$ margin to\nthe edge of the field of view. 
Since \nthe image of the source is offset by $0.5^\\circ$ from the camera \ncenter, a maximum distance of $2.0^\\circ$ is possible between the source\nimage and the centroid of the shower image.\n\nEven after these selections, the comparison between data and shower models\nis not completely straightforward. One should not, e.g., simply compare\ndata to the predicted photon flux at ground level since\n\\begin{itemize}\n\\item as is well known, the radial dependence\nof the density of Cherenkov light depends on the solid angle over which\nthe light is collected, i.e., on the field of view of the camera\n\\item the experimental resolution in the\nreconstruction of the shower core position causes a \ncertain smearing, which is visible in particular near the break \nin the light distribution\nat the Cherenkov radius\n\\item the selection of image pixels using the tail cuts results in a\ncertain loss of photons; this loss is more significant the lower\nthe intensity of the image and the more diffuse the image is.\n\\end{itemize}\nWhile the distortion in the measured radial distribution of Cherenkov\nlight due to the latter two effects is relatively modest (see\nFig.~\\ref{fig_pool}), a detailed\ncomparison with Monte Carlo should take these effects into account by\nprocessing Monte-Carlo generated events using the same procedure as\nreal data, i.e., by plotting the distance to the reconstructed core\nposition rather than the true core position, and by applying the same\ntail cuts etc. 
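The ratio-based procedure itemized above can be sketched end to end on synthetic events. The toy light-pool model and all numbers below are assumptions for illustration only, not HEGRA data or the actual analysis code:

```python
import math
import random

def pool(r):
    # toy radial Cherenkov light-pool model (assumption): a flat plateau out
    # to ~120 m, then a rapid exponential fall-off, qualitatively as in the text
    return 1.0 if r < 120.0 else math.exp(-(r - 120.0) / 40.0)

random.seed(1)
ratios = {}  # r_j bin (20 m wide) -> list of size ratios a_j / a_i
for _ in range(20000):
    r_i = random.uniform(50.0, 120.0)   # core distance to reference telescope i
    r_j = random.uniform(0.0, 200.0)    # core distance to telescope j
    a_i = 150.0 * pool(r_i)             # image sizes; fluctuations ignored here
    a_j = 150.0 * pool(r_j)
    if 100.0 <= a_i <= 200.0:           # size cut in the reference telescope
        ratios.setdefault(int(r_j // 20.0), []).append(a_j / a_i)

# average per bin, then normalize the profile at r ~ 100-120 m (bin 5)
prof = {b: sum(v) / len(v) for b, v in ratios.items()}
norm = prof[5]
prof = {b: p / norm for b, p in sorted(prof.items())}
print(prof[0])   # flat plateau near the core
print(prof[9])   # well below the plateau, beyond the Cherenkov radius
```

In the real analysis the measurements from different $r_i$ bins and telescope pairs would additionally be combined after this normalization, as listed above.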
\n\\begin{figure}[htb]\n\\begin{center}\n\\mbox{\n\\epsfxsize11.0cm\n\\epsffile{mc_final.eps}}\n\\end{center}\n\\caption\n{Radial distribution of Cherenkov light for TeV $\\gamma$-ray\nshowers, for unrestricted aperture of the photon detector (full line),\nfor a $2^\\circ$ aperture (dashed), and\nincluding the full camera simulation and image processing (shaded).\nThe curves are normalized at $r \\approx $100~m.}\n\\label{fig_pool}\n\\end{figure}\n\nFor a first comparison between data and simulation,\nshowers from the zenith (zenith angle between\n$10^\\circ$ and $15^\\circ$) were selected. \nThe range of distances $r_i$ from the shower core \nto the reference telescope was restricted to the plateau region\nbetween 50~m and 120~m. Smaller\ndistances were not used because of the large fluctuations of image\n{\\em size} close to the shower core, and larger distances were excluded\nbecause of the relatively steep variation of light yield with \ndistance. The showers were further selected on an amplitude in the `reference'\ntelescope $i$ between 100 and 200 photoelectrons, corresponding to\na mean energy of about 1.3~TeV. \nContamination of the Mrk 501 on-source data sample by cosmic\nrays was subtracted using an off-source region displaced from\nthe optical axis by the same amount as the source, but in\nthe opposite direction. The measured radial distribution\n(Fig.~\\ref{fig_dat2}(a))\nshows the expected features: a relatively flat plateau out to distances\nof 120~m, and a rapid decrease in light yield for larger distances.\n\nThe errors given in the Figure are purely statistical. To estimate the\ninfluence of systematic errors, one can look at the consistency of\nthe data for different ranges in distance $r_i$ to the `reference' \ntelescope, one can compare results for different telescope combinations,\nand one can study the dependence on the cuts applied. 
Usually,\nthe different data sets were consistent to better than $\\pm 0.05$ units;\nsystematic effects certainly do not exceed a level of $\\pm 0.1$ units. \nWithin these\nerrors, the measured distribution is reasonably well reproduced\nby the Monte-Carlo\nsimulations.\n\n\\begin{figure}[p]\n\\begin{center}\n\\mbox{\n\\epsfysize18.0cm\n\\epsffile{reng1.eps}}\n\\end{center}\n\\caption\n{Light yield as a function of shower energy, for image {\\em size} in \nthe reference telescope between 100 and 200 photoelectrons (a),\n200 and 400 photoelectrons (b), and 400 to 800 photoelectrons (c).\nEvents were selected \nwith a distance range between 50~m and 120~m from the reference telescope,\nfor zenith angles between $10^\\circ$ and $15^\\circ$.\nThe shaded bands indicate the Monte-Carlo results.\nThe distributions are normalized at $r \\approx 100$~m. Only \nstatistical errors are shown.}\n\\label{fig_dat2}\n\\end{figure}\n\\begin{figure}[p]\n\\begin{center}\n\\mbox{\n\\epsfysize20.0cm\n\\epsffile{rall1.eps}}\n\\end{center}\n\\caption\n{Light yield as a function of core distance, for zenith angles between\n$10^\\circ$ and $15^\\circ$ (a), $15^\\circ$ and $25^\\circ$ (b), $25^\\circ$ and\n$35^\\circ$ (c), and $35^\\circ$ and $45^\\circ$ (d). Events were selected \nwith a distance range between 50~m and 120~m from the reference telescope,\nand an image {\\em size} between 100 and 200 photoelectrons in the reference\ntelescope. \nThe shaded bands indicate the Monte-Carlo results.\nThe distributions are normalized at $r \\approx 100$~m.\nOnly statistical errors are shown.}\n\\label{fig_dat3}\n\\end{figure}\n\nShower models predict that the distribution\nof light intensity varies (slowly) with the shower\nenergy and with the zenith angle. 
Fig.~\\ref{fig_dat2} compares the\ndistributions obtained for different {\\em size} ranges $a_i$ of\n100 to 200, 200 to 400, and 400 to 800 photoelectrons at distances\nbetween 50~m and 120~m, corresponding\nto mean shower energies of about 1.3, 2.5, and 4.5 TeV, respectively.\nWe note that the intensity close to the shower core increases with\nincreasing energy. This component of the Cherenkov light is generated\nby penetrating particles near the shower core. Their number grows\nrapidly with increasing shower energy, and correspondingly decreasing\nheight of the shower maximum. The increase in the mean light intensity \nat small distances from the shower core is primarily caused by\nthe long tails of the distribution of image {\\em sizes} towards large\n{\\em size}; the\nmedian {\\em size} is more or less constant.\nThe observed trends are well reproduced by the\nMonte-Carlo simulations.\n\nThe dependence on zenith angle is\nillustrated in Fig.~\\ref{fig_dat3}, where zenith angles between \n$10^\\circ$ and $15^\\circ$, $15^\\circ$ and $25^\\circ$, $25^\\circ$ and\n$35^\\circ$, and $35^\\circ$ and $45^\\circ$ are compared. Events were\nagain selected for an image {\\em size} in the `reference' telescope\nbetween 100 and 200 photoelectrons, in a distance range of 50~m to \n120~m \\footnote{Core\ndistance is always measured in the plane perpendicular to the shower\naxis.}. The corresponding \nmean shower energies for the four ranges in zenith angle are about \n1.3~TeV, 1.5~TeV, 2~TeV, and 3~TeV.\nFor increasing zenith angles, the distribution of Cherenkov light\nflattens for small radii, and the diameter of the light pool\nincreases. Both effects are expected, since for larger zenith\nangles the distance between the telescope and the shower maximum\ngrows, reducing the number of penetrating particles, and resulting\nin a larger Cherenkov radius. 
The simulations properly account for \nthis behaviour.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\mbox{\n\\epsfxsize7.0cm\n\\epsffile{rms.eps}}\n\\end{center}\n\\caption\n{Relative variation in the {\\em size} ratio $a_j/a_i$ as a function\nof $r_j$, for $r_i$ in the range 50~m to 120~m, and for image {\\em size}\nin the `reference' telescope between 100 and 200 photoelectrons.\nFull circles refer to zenith angles between $10^\\circ$ and $15^\\circ$, \nopen circles to zenith angles between $25^\\circ$ and $35^\\circ$.}\n\\label{fig_rms}\n\\end{figure}\nIt is also of some interest to consider the fluctuations of\nimage {\\em size}, $\\Delta(a_j/a_i)$.\nFig.~\\ref{fig_rms} shows the relative rms fluctuation in the\n{\\em size} ratio, as a function of $r_j$, for small ($10^\\circ$ to\n$15^\\circ$) and for larger ($25^\\circ$ to $35^\\circ$) zenith\nangles. The fluctuations are minimal near the Cherenkov radius;\nthey increase for larger distances, primarily due to the smaller\nlight yield and hence larger relative fluctuations in the number\nof photoelectrons. In particular for the small zenith angles,\nthe fluctuations also increase for small radii, reflecting the\nlarge fluctuations associated with the penetrating tail of the\nair showers. For larger zenith angles, this effect is much reduced,\nsince now all shower particles are absorbed well above the telescopes;\nmore detailed studies show that zenith angles of as little as $20^\\circ$\nalready make a significant difference. \n\n\\section{Summary}\n\nThe stereoscopic observation of $\\gamma$-ray induced air showers\nwith the HEGRA Cherenkov telescopes allowed for the first time\nthe measurement of the light distribution in the Cherenkov light \npool at TeV energies, providing a consistency check of one of the\nkey inputs for the calculation of shower energies based on the \nintensity of the Cherenkov images. 
The light distribution shows a\ncharacteristic variation with shower energy and with zenith angle.\nData are well reproduced by the Monte-Carlo\nsimulations.\n\n\\section*{Acknowledgements}\n\nThe support of the German Ministry for Research \nand Technology BMBF and of the Spanish Research Council\nCYCIT is gratefully acknowledged. We thank the Instituto\nde Astrofisica de Canarias for the use of the site and\nfor providing excellent working conditions. We gratefully\nacknowledge the technical support staff of Heidelberg,\nKiel, Munich, and Yerevan.\n\n", "meta": {"timestamp": "1998-07-13T09:54:01", "yymm": "9807", "arxiv_id": "astro-ph/9807119", "language": "en", "url": "https://arxiv.org/abs/astro-ph/9807119"}} +{"text": "\\section{Introduction}\n\\label{sec:introduction}\nA plethora of observations has confirmed the standard $\\Lambda$CDM framework as the most economical and successful model describing our current universe.\nThis simple picture (pressureless dark matter, baryons and a cosmological constant representing the vacuum energy) has been shown to provide an excellent fit to cosmological data.\nHowever, there are a number of inconsistencies that persist and, instead of diluting with improved precision measurements, gain significance~\\cite{Freedman:2017yms,DiValentino:2020zio,DiValentino:2020vvd,DiValentino:2020srs,Freedman:2021ahq,DiValentino:2021izs,Schoneberg:2021qvd,Nunes:2021ipq,Perivolaropoulos:2021jda,Shah:2021onj}.\n\nThe most exciting (i.e.\\ probably not due to systematics) and most statistically significant ($4-6\\sigma$) tension in the literature is the so-called Hubble constant tension, which refers to the discrepancy between cosmological predictions and low-redshift estimates of $H_0$~\\cite{Verde:2019ivm,Riess:2019qba,DiValentino:2020vnx}.\nWithin the $\\Lambda$CDM scenario, Cosmic Microwave Background (CMB) measurements from the Planck satellite provide a value of $H_0=67.36\\pm 0.54$~km s$^{-1}$ Mpc$^{-1}$ at 
68\\%~CL~\\cite{Planck:2018vyg}.\nLocal measurements of $H_0$ in the nearby universe, using the cosmic distance ladder calibration of Type Ia Supernovae with Cepheids, such as those carried out by the SH0ES team, provide a measurement of the Hubble constant $H_0=73.2\\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ at 68$\\%$~CL~\\cite{Riess:2020fzl}.\nThis problematic $\\sim 4\\sigma$ discrepancy is aggravated when considering other late-time estimates of $H_0$.\nFor instance, measurements from the Megamaser Cosmology Project~\\cite{Pesce:2020xfe}, or those exploiting Surface Brightness Fluctuations~\\cite{Blakeslee:2021rqi}, only exacerbate this tension~\\footnote{%\nOther estimates are unable to discriminate between nearby-universe and CMB measurements. These include results from the Tip of the Red Giant Branch~\\cite{Freedman:2021ahq},\nfrom the astrophysical strong lensing observations~\\cite{Birrer:2020tax}\nor from gravitational wave events~\\cite{Abbott:2017xzu}.}.\n\nAs previously mentioned, the SH0ES collaboration exploits the cosmic distance ladder calibration of Type Ia Supernovae, which means that these observations do not provide a direct extraction of the Hubble parameter.\nMore concretely, the SH0ES team measures the absolute peak magnitude $M_B$ of Type Ia Supernovae \\emph{standard candles} and then translates these measurements into an estimate of $H_0$ by means of the magnitude-redshift relation of the Pantheon Type Ia Supernovae sample~\\cite{Scolnic:2017caz}.\nTherefore, strictly speaking, the SH0ES team does not directly extract the value of $H_0$, and there have been arguments in the literature aiming to translate the Hubble constant tension into a Type Ia Supernovae absolute magnitude tension $M_B$~\\cite{Camarena:2019rmj,Efstathiou:2021ocp,Camarena:2021jlr}.\nIn this regard, late-time exotic cosmologies have been questioned as possible solutions to the Hubble constant tension~\\cite{Efstathiou:2021ocp,Camarena:2021jlr}, since within these scenarios, it is possible that the 
supernova absolute magnitude $M_B$ used to derive the low redshift estimate of $H_0$ is no longer compatible with the $M_B$ needed to fit supernovae, BAO and CMB data. \n\nA number of studies have prescribed to use in the statistical analyses a prior on the intrinsic magnitude rather than on the Hubble constant $H_0$~\\cite{Camarena:2021jlr,Schoneberg:2021qvd}.\nFollowing the very same logic of these previous analyses, we reassess here the potential of interacting dark matter-dark energy cosmology~\\cite{Amendola:1999er}\nin resolving the Hubble constant (\\cite{Kumar:2016zpg, Murgia:2016ccp, Kumar:2017dnp, DiValentino:2017iww, Yang:2018ubt, Yang:2018euj, Yang:2019uzo, Kumar:2019wfs, Pan:2019gop, Pan:2019jqh, DiValentino:2019ffd, DiValentino:2019jae, DiValentino:2020leo, DiValentino:2020kpf, Gomez-Valent:2020mqn, Yang:2019uog, Lucca:2020zjb, Martinelli:2019dau, Yang:2020uga, Yao:2020hkw, Pan:2020bur, DiValentino:2020vnx, Yao:2020pji, Amirhashchi:2020qep, Yang:2021hxg, Gao:2021xnk, Lucca:2021dxo, Kumar:2021eev,Yang:2021oxc,Lucca:2021eqy,Halder:2021jiv}\nand references therein)\nand/or the intrinsic magnitude $M_B$ tension, by demonstrating explicitly from a full analysis that the results are completely independent of whether a prior on $M_B$ or $H_0$ is assumed (see also the recent~\\cite{Nunes:2021zzi}).\n\n\n\\section{Theoretical framework}\n\\label{sec:theory}\nWe adopt a flat cosmological model described by the Friedmann-Lema\\^{i}tre-Robertson-Walker metric.\nA possible parameterization of a dark matter-dark energy interaction is provided by the following expressions~\\cite{Valiviita:2008iv,Gavela:2009cy}:\n\n\\begin{eqnarray}\n \\label{eq:conservDM}\n\\nabla_\\mu T^\\mu_{(dm)\\nu} &=& Q \\,u_{\\nu}^{(dm)}/a~, \\\\\n \\label{eq:conservDE}\n\\nabla_\\mu T^\\mu_{(de)\\nu} &=&-Q \\,u_{\\nu}^{(dm)}/a~.\n\\end{eqnarray}\nIn the equations above, $T^\\mu_{(dm)\\nu}$ and $T^\\mu_{(de)\\nu}$ represent the energy-momentum tensors for the dark matter and dark energy 
components, respectively, the function $Q$ is the interaction rate between the two dark components, and $u_{\\nu}^{(dm)}$ represents the dark matter four-velocity. \nIn what follows we shall restrict ourselves to the case in which the\ninteraction rate is proportional to the dark energy density $\\rho_{de}$~\\cite{Valiviita:2008iv,Gavela:2009cy}:\n\\begin{equation}\nQ=\\ensuremath{\\delta{}_{DMDE}}\\mathcal{H} \\rho_{de}~,\n\\label{rate}\n\\end{equation}\nwhere $\\ensuremath{\\delta{}_{DMDE}}$ is a dimensionless coupling parameter and\n$\\mathcal{H}=\\dot{a}/a$~\\footnote{The dot denotes a derivative with respect to conformal time $d\\tau=dt/a$.}.\nThe background evolution equations in the coupled model considered\nhere read~\\cite{Gavela:2010tm}\n\\begin{eqnarray}\n\\label{eq:backDM}\n\\dot{{\\rho}}_{dm}+3{\\mathcal H}{\\rho}_{dm}\n&=&\n\\ensuremath{\\delta{}_{DMDE}}{\\mathcal H}{\\rho}_{de}~,\n\\\\\n\\label{eq:backDE}\n\\dot{{\\rho}}_{de}+3{\\mathcal H}(1+\\ensuremath{w_{\\rm 0,fld}}){\\rho}_{de}\n&=&\n-\\ensuremath{\\delta{}_{DMDE}}{\\mathcal H}{\\rho}_{de}~.\n\\end{eqnarray}\nThe evolution of the dark matter and dark energy density perturbations and velocity divergence fields is described in \\cite{DiValentino:2019jae} and references therein.\n\nIt has been shown in the literature that this model is free of instabilities\nif the sign of the coupling $\\ensuremath{\\delta{}_{DMDE}}$ and the sign of $(1+\\ensuremath{w_{\\rm 0,fld}})$ are opposite,\nwhere $\\ensuremath{w_{\\rm 0,fld}}$ refers to the dark energy equation of state~\\cite{He:2008si,Gavela:2009cy}.\nIn order to satisfy such stability conditions, we explore three possible scenarios, all of them with a redshift-independent equation of state.\nIn Model A, the equation of state $\\ensuremath{w_{\\rm 0,fld}}$ is fixed to $-0.999$.\nConsequently, since $(1+\\ensuremath{w_{\\rm 0,fld}}) >0$, in order to ensure an instability-free perturbation evolution, the dark matter-dark energy coupling 
$\\ensuremath{\\delta{}_{DMDE}}$ is allowed to vary in a negative range.\nIn Model B, $\\ensuremath{w_{\\rm 0,fld}}$ is allowed to vary but we ensure that the condition $(1+\\ensuremath{w_{\\rm 0,fld}})>0$ is always satisfied.\nTherefore, the coupling parameter $\\ensuremath{\\delta{}_{DMDE}}$ is also negative.\nIn Model C, instead, the dark energy equation of state is phantom ($\\ensuremath{w_{\\rm 0,fld}}<-1$), therefore the dark matter-dark energy coupling is taken as positive to avoid early-time instabilities.\nWe shall present separately the cosmological constraints for these three models, together with those corresponding to the canonical $\\Lambda$CDM.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{c|c|c}\n Model & Prior $\\ensuremath{w_{\\rm 0,fld}}$ & Prior $\\ensuremath{\\delta{}_{DMDE}}$ \\\\\n \\hline\n A & -0.999 & [-1.0, 0.0]\\\\\n B & [-0.999, -0.333] & [-1.0, 0.0] \\\\\n C & [-3, -1.001]& [0.0, 1.0] \\\\\n \\end{tabular}\n \\caption{Priors of $\\ensuremath{w_{\\rm 0,fld}}$, $\\delta$ in models A, B, C.}\n \\label{tab:priors}\n\\end{table}\n\n\n\\section{Datasets and Methodology}\n\\label{sec:data}\n\nIn this Section, we present the data sets and methodology employed to obtain the observational constraints on the model parameters by performing Bayesian Monte Carlo Markov Chain (MCMC) analyses.\nIn order to constrain the parameters, we use the following data sets:\n\\begin{itemize}\n\\item The Cosmic Microwave Background (CMB) temperature and polarization power spectra from the final release of Planck 2018, in particular we adopt the plikTTTEEE+lowl+lowE likelihood \\cite{Aghanim:2018eyx,Aghanim:2019ame}, plus the CMB lensing reconstruction from the four-point correlation function~\\cite{Aghanim:2018oex}.\n\\item Type Ia Supernovae distance moduli measurements from the \\textit{Pantheon} sample~\\cite{Scolnic:2017caz}. 
These measurements constrain the uncalibrated luminosity distance $H_0d_L(z)$, or in other words the slope of the late-time expansion rate (which in turn constrains the current matter energy density, $\\Omega_{\\rm 0,m}$). We refer to this dataset as \\textit{SN}. \n\\item Baryon Acoustic Oscillations (BAO) distance and expansion rate measurements from the 6dFGS~\\cite{Beutler:2011hx}, SDSS-DR7 MGS~\\cite{Ross:2014qpa}, BOSS DR12~\\cite{Alam:2016hwk} galaxy surveys,\nas well as from the eBOSS DR14 Lyman-$\\alpha$ (Ly$\\alpha$) absorption~\\cite{Agathe:2019vsu} and Ly$\\alpha$-quasars cross-correlation~\\cite{Blomqvist:2019rah}.\nThese consist of isotropic BAO measurements of $D_V(z)/r_d$\n(with $D_V(z)$ and $r_d$ the spherically averaged volume distance and sound horizon at baryon drag, respectively)\nfor 6dFGS and MGS, and anisotropic BAO measurements of $D_M(z)/r_d$ and $D_H(z)/r_d$\n(with $D_M(z)$ the comoving angular diameter distance and $D_H(z)=c/H(z)$ the radial distance)\nfor BOSS DR12, eBOSS DR14 Ly$\\alpha$, and eBOSS DR14 Ly$\\alpha$-quasars cross-correlation. 
\n\\item A Gaussian prior on $M_B= -19.244 \\pm 0.037$~mag~\\cite{Camarena:2021jlr}, corresponding to the SN measurements from SH0ES.\n\\item A Gaussian prior on the Hubble constant $H_0=73.2\\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ in\nagreement with the measurement obtained by the\nSH0ES collaboration in~\\cite{Riess:2020fzl}.\n\\end{itemize}\nFor the sake of brevity, data combinations are indicated as CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH) and CMB+SN+BAO+$M_B$ (CSBM).\n\nCosmological observables are computed with \\texttt{CLASS}~\\cite{Blas:2011rf,Lesgourgues:2011re}.\nIn order to derive bounds on the proposed scenarios, we modify the efficient and well-known cosmological package \\texttt{MontePython}~\\cite{Brinckmann:2018cvx}, supporting the Planck 2018 likelihood~\\cite{Planck:2019nip}.\nWe make use of CalPriorSNIa, a module for \\texttt{MontePython}, publicly available at \\url{https://github.com/valerio-marra/CalPriorSNIa}, that implements an effective calibration prior on the absolute magnitude of Type Ia Supernovae~\\cite{Camarena:2019moy,Camarena:2021jlr}.\n\n\n\n\\section{Main results and discussion}\n\\label{sec:results}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{H0.pdf} \n\\caption{Posterior distribution of the Hubble parameter in the $\\Lambda$CDM model (black) and in interacting cosmologies, with priors on the parameters as given in Tab.~\\ref{tab:priors}. 
\nWe show the constraints obtained within model A (green), model B (red) and model C (blue)\nfor the CMB+SN+BAO data combination (solid lines),\nCMB+SN+BAO+$H_0$ (dashed lines)\nand CMB+SN+BAO+$M_B$ (dotted lines).}\n\\label{fig:h0}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{0_PlSB-vs-0_PlSBH-vs-0_PlSBM_triangle.pdf} \n\\caption{68\\% CL and 95\\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within the canonical $\\Lambda$CDM picture, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}\n\\label{fig:triangle_LCDM}\n\\end{center}\n\\end{figure*}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|c|} \n\\hline \nParameter & CSB & CSBH & CSBM \\\\\n\\hline\n$\\omega{}_{cdm }$ & $0.1193\\pm0.0010$ & $0.1183\\pm0.0009$ & $0.1183_{-0.0009}^{+0.0008}$ \\\\\n$\\ensuremath{\\Omega_{\\rm 0,fld}}$ & $0.6889_{-0.0061}^{+0.0057}$ & $0.6958_{-0.0050}^{+0.0056}$ & $0.6956_{-0.0049}^{+0.0057}$ \\\\\n$\\Omega_{\\rm 0,m}$ & $0.3111_{-0.0057}^{+0.0061}$ & $0.3042_{-0.0056}^{+0.0050}$ & $0.3044_{-0.0057}^{+0.0049}$ \\\\\n$M_B$ & $-19.42\\pm0.01$ & $-19.40\\pm0.01$ & $-19.40\\pm0.01$ \\\\\n$H_0$ & $67.68_{-0.46}^{+0.41}$ & $68.21_{-0.41}^{+0.42}$ & $68.20_{-0.41}^{+0.41}$ \\\\\n$\\sigma_8$ & $0.8108_{-0.0058}^{+0.0061}$ & $0.8092_{-0.0065}^{+0.0060}$ & $0.8090_{-0.0059}^{+0.0064}$ \\\\\n\\hline \nminimum $\\chi^2$ & $3819.46$ & $3836.50$ & $3840.44$ \\\\\n\\hline \n\\end{tabular}\n\\caption{Mean values and 68\\% CL errors on $\\omega_{cdm }\\equiv\\Omega_{cdm} h^2$, the current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$, the current matter energy density $\\Omega_{\\rm 0,m}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\\sigma_8$ within the standard $\\Lambda$CDM paradigm. 
We also report the minimum value of the $\\chi^2$ function obtained for each of the data combinations.}\n\\label{tab:model_LCDM}\n\\end{table}\n\nWe start by discussing the results obtained within the canonical $\\Lambda$CDM scenario. Table~\\ref{tab:model_LCDM} presents the mean values and the $1\\sigma$ errors on a number of different cosmological parameters.\nNamely, we show the constraints on\n$\\omega_{cdm }\\equiv\\Omega_{0,cdm} h^2$,\nthe current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$,\nthe current matter energy density $\\Omega_{\\rm 0,m}$,\nthe Supernovae Ia intrinsic magnitude $M_B$,\nthe Hubble constant $H_0$ and the clustering parameter $\\sigma_8$\narising from the three data combinations considered here and described above:\nCMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH), CMB+SN+BAO+$M_B$ (CSBM).\nInterestingly, \\emph{all} the parameters experience the very same shift regardless of whether the prior is adopted on the Hubble constant or on the intrinsic Supernovae Ia magnitude $M_B$.\nThe mean value of $H_0$ coincides for both the CSBH and the CSBM data combinations, as one can clearly see from the dashed and dotted black lines in Fig.~\\ref{fig:h0}. \nFigure~\\ref{fig:triangle_LCDM} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities on the parameters shown in Tab.~\\ref{tab:model_LCDM}.\nNotice that all the parameters are equally shifted when adding the prior on $H_0$ or on $M_B$, except for $\\sigma_8$ which remains almost unchanged. 
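The size of the $\Omega_{\rm 0,m}$ shift can be checked directly from the numbers in Tab.~\ref{tab:model_LCDM}. The sketch below is an illustration only (not part of the paper's analysis): it assumes the CMB keeps the physical matter density $\Omega_{\rm 0,m}h^2$ approximately fixed, so a prior that raises $H_0$ must lower $\Omega_{\rm 0,m}$:

```python
# Illustrative check (not from the paper's chains): the CMB pins down the
# physical matter density Omega_m * h^2 almost independently of h, so a
# prior that raises H0 must lower Omega_m. Values from the CSB column of
# the LCDM table.
omega_m_phys = 0.3111 * 0.6768 ** 2      # Omega_m * h^2 from the CSB fit

h_with_prior = 0.6820                    # H0/100 after adding the H0 or M_B prior
omega_m_with_prior = omega_m_phys / h_with_prior ** 2

print(round(omega_m_with_prior, 4))      # ~0.306, close to the quoted 0.3042
```

The small residual difference from the quoted $0.3042$ reflects the fact that the CMB peak structure fixes $\Omega_{\rm 0,m}h^2$ only approximately.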
Notice also that the value of the current matter density, $\\Omega_{\\rm 0,m}$, is smaller when a prior from SN measurements is considered:\ndue to the larger $H_0$ value that these measurements imply, in order to keep the CMB peak structure unaltered, the value of $\\Omega_{\\rm 0,m}$ should be smaller to ensure that the product $\\Omega_{\\rm 0,m} h^2$ is barely shifted.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|c|} \n\\hline \nParameter & CSB & CSBH & CSBM \\\\\n\\hline\n$\\omega{}_{cdm }$ & $0.107_{-0.005}^{+0.011}$ & $0.09\\pm0.01$ & $0.096_{-0.009}^{+0.011}$ \\\\\n$\\ensuremath{\\Omega_{\\rm 0,fld}}$ & $0.723_{-0.028}^{+0.017}$ & $0.758_{-0.024}^{+0.026}$ & $0.754_{-0.028}^{+0.025}$ \\\\\n$\\Omega_{\\rm 0,m}$ & $0.277_{-0.017}^{+0.028}$ & $0.242_{-0.026}^{+0.024}$ & $0.246_{-0.025}^{+0.028}$ \\\\\n$\\ensuremath{\\delta{}_{DMDE}}$ & $-0.116_{-0.044}^{+0.100}$ & $-0.219_{-0.086}^{+0.083}$ & $-0.203_{-0.087}^{+0.093}$ \\\\\n$M_B$ & $-19.40\\pm0.02$ & $-19.38_{-0.01}^{+0.02}$ & $-19.37\\pm0.02$ \\\\\n$H_0$ & $68.59_{-0.79}^{+0.65}$ & $69.73_{-0.72}^{+0.71}$ & $69.67_{-0.85}^{+0.75}$ \\\\\n$\\sigma_8$ & $0.90_{-0.08}^{+0.04}$ & $1.01_{-0.11}^{+0.08}$ & $1.00_{-0.12}^{+0.07}$ \\\\\n\\hline \nminimum $\\chi^2$ & $3819.86$ & $3831.90$ & $3835.86$ \\\\ \n\\hline \n\\end{tabular}\n\\caption{Mean values and 68\\% CL errors on $\\omega_{cdm }\\equiv\\Omega_{cdm} h^2$, the current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$, the current matter energy density $\\Omega_{\\rm 0,m}$, the dimensionless dark matter-dark energy coupling $\\ensuremath{\\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\\sigma_8$ within the interacting model A, see Tab.~\\ref{tab:priors}. 
We also report the minimum value of the $\\chi^2$ function obtained for each of the data combinations.}\n\\label{tab:model_A}\n\\end{table}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{A_PlSB-vs-A_PlSBH-vs-A_PlSBM_triangle.pdf} \n\\caption{68\\% CL and 95\\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model A, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}\n\\label{fig:triangle_A}\n\\end{center}\n\\end{figure*}\n\nWe focus now on Model A, which refers to an interacting cosmology with $\\ensuremath{w_{\\rm 0,fld}}=-0.999$ and $\\ensuremath{\\delta{}_{DMDE}}<0$.\nTable~\\ref{tab:model_A} presents the mean values and the $1\\sigma$ errors on the same cosmological parameters listed above, with the addition of the coupling parameter $\\ensuremath{\\delta{}_{DMDE}}$, for the same three data combinations already discussed.\n\nNotice again that all the parameters are equally shifted to either smaller or larger values, regardless of whether the prior is adopted on $H_0$ or on $M_B$. 
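The significance of the coupling detection quoted in the text can be estimated from the table values alone. The following sketch symmetrises the asymmetric 68\% CL errors on $\ensuremath{\delta{}_{DMDE}}$ from Tab.~\ref{tab:model_A}; this is only a rough approximation to the full (non-Gaussian) posterior:

```python
# Rough number-of-sigma estimate for delta_DMDE in Model A, symmetrising the
# asymmetric 68% CL errors quoted in the table. This approximates the full
# (non-Gaussian) posterior and is meant only as a sanity check.
def pull(mean, err_minus, err_plus):
    sigma = 0.5 * (err_minus + err_plus)   # symmetrised 1-sigma error
    return abs(mean) / sigma

print(round(pull(-0.116, 0.044, 0.100), 1))  # CSB:  ~1.6 sigma
print(round(pull(-0.219, 0.086, 0.083), 1))  # CSBH: ~2.6 sigma
print(round(pull(-0.203, 0.087, 0.093), 1))  # CSBM: ~2.3 sigma
```

Consistent with the text, the preference for a non-zero coupling reaches the $2\sigma$ level only once the $H_0$ or $M_B$ prior is added.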
In this case the shift on the Hubble parameter is larger than that observed within the $\\Lambda$CDM model, as one can notice from the green curves depicted in \nFig.~\\ref{fig:h0}.\nInterestingly, we observe a $2\\sigma$ indication in favor of a non-zero value of the coupling $\\ensuremath{\\delta{}_{DMDE}}$ when considering the CSBH and the CSBM data combinations.\nIndeed, while the value of the minimum $\\chi^2$ is almost equal to that obtained in the $\\Lambda$CDM framework for the CSB data analyses, when adding either a prior on $H_0$ or on $M_B$,\nthe minimum $\\chi^2$ value is \\emph{smaller} than that obtained for the standard cosmological picture: therefore, the addition of a coupling \\emph{improves} the overall fit.\nFigure~\\ref{fig:triangle_A} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model A.\nIt can be noticed that the priors on the Hubble constant and on the intrinsic magnitude lead to the very same shift, and the main conclusion is therefore prior-independent:\nthere is a $\\sim 2\\sigma$ indication for a non-zero dark matter-dark energy coupling when considering either $H_0$ or $M_B$ measurements,\n\\emph{and} the value of the Hubble constant is considerably larger, alleviating the $H_0$ tension.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|c|} \n\\hline \nParameter & CSB & CSBH & CSBM \\\\\n\\hline\n$\\omega{}_{cdm }$ & $0.077_{-0.014}^{+0.036}$ & $0.061_{-0.019}^{+0.034}$ & $0.065_{-0.017}^{+0.036}$ \\\\\n$\\ensuremath{\\Omega_{\\rm 0,fld}}$ & $0.785_{-0.081}^{+0.034}$ & $0.825_{-0.070}^{+0.045}$ & $0.818_{-0.075}^{+0.041}$ \\\\\n$\\Omega_{\\rm 0,m}$ & $0.215_{-0.034}^{+0.081}$ & $0.174_{-0.044}^{+0.069}$ & $0.182_{-0.041}^{+0.075}$ \\\\\n$\\ensuremath{w_{\\rm 0,fld}}$ & $-0.909_{-0.090}^{+0.026}$ & $-0.917_{-0.082}^{+0.026}$ & $-0.918_{-0.081}^{+0.026}$ \\\\\n$\\ensuremath{\\delta{}_{DMDE}}$ & $-0.35_{-0.14}^{+0.26}$ & $-0.45_{-0.16}^{+0.22}$ & 
$-0.43_{-0.15}^{+0.24}$ \\\\\n$M_B$ & $-19.41\\pm0.02$ & $-19.38\\pm0.02$ & $-19.38\\pm0.02$ \\\\\n$H_0$ & $68.28_{-0.85}^{+0.79}$ & $69.68_{-0.75}^{+0.71}$ & $69.57_{-0.76}^{+0.75}$ \\\\\n$\\sigma_8$ & $1.30_{-0.51}^{+0.01}$ & $1.60_{-0.76}^{+0.06}$ & $1.53_{-0.71}^{+0.03}$ \\\\\n\\hline \nminimum $\\chi^2$ & $ 3819.96$ & $3832.28$ & $3836.24$ \\\\\n\\hline \n\\end{tabular}\n\\caption{Mean values and 68\\% CL errors on $\\omega_{cdm }\\equiv\\Omega_{cdm} h^2$, the current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$, the current matter energy density $\\Omega_{\\rm 0,m}$, the dark energy equation of state $\\ensuremath{w_{\\rm 0,fld}}$,\nthe dimensionless dark matter-dark energy coupling $\\ensuremath{\\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\\sigma_8$ within the interacting model B, see Tab.~\\ref{tab:priors}.\nWe also report the minimum value of the $\\chi^2$ function obtained for each of the data combinations.}\n\\label{tab:model_B}\n\\end{table}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{B_PlSB-vs-B_PlSBH-vs-B_PlSBM_triangle.pdf} \n\\caption{68\\% CL and 95\\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model B, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}\n\\label{fig:triangle_B}\n\\end{center}\n\\end{figure*}\n\nFocusing now on Model B, which assumes a negative coupling $\\ensuremath{\\delta{}_{DMDE}}$ and a constant, but freely varying, dark energy equation of state $\\ensuremath{w_{\\rm 0,fld}}$ within the $\\ensuremath{w_{\\rm 0,fld}}>-1$ region,\nwe notice again the same shift on the cosmological parameters, regardless of whether the prior is introduced on the Hubble parameter ($H_0$) or on the Supernovae Ia intrinsic magnitude ($M_B$), as can be noticed from Tab.~\\ref{tab:model_B}.\nAs in Model A, the value of 
$H_0$ in this interacting cosmology is larger than within the $\\Lambda$CDM framework (see the red curves in Fig.~\\ref{fig:h0}),\nalbeit slightly smaller than in Model A, due to the strong anti-correlation between $\\ensuremath{w_{\\rm 0,fld}}$ and $H_0$~\\cite{DiValentino:2016hlg,DiValentino:2019jae}.\nConsequently, a larger value of $\\ensuremath{w_{\\rm 0,fld}}>-1$ implies a lower value of $H_0$.\nNevertheless, a $2\\sigma$ preference for a non-zero value of the dark matter-dark energy coupling is present also in this case, and also when the CSB dataset is considered:\nfor the three data combinations presented here, there is always a preference for a non-zero dark matter-dark energy coupling. \nNotice that the minimum $\\chi^2$ in Model B is smaller than that corresponding to the minimal $\\Lambda$CDM framework, but slightly larger than that of Model A, which is nested in Model B. The differences between the minimum $\\chi^2$ in Model A and Model B, however, are small\nenough to be considered as numerical fluctuations. Since, as previously stated, $\\ensuremath{w_{\\rm 0,fld}}$ and $H_0$ are strongly anti-correlated, a more negative value of the dark energy equation of state (i.e.\\ $\\ensuremath{w_{\\rm 0,fld}}=-0.999$ as in Model A, close to the prior limit) is preferred by both the CSBH and the CSBM data combinations. 
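The role of the energy exchange can be made concrete by integrating the background equations of Sec.~\ref{sec:theory} directly. The sketch below (pure-Python RK4 in $x=\ln a$, arbitrary density units, illustrative parameter values of our choosing) shows that for a negative coupling the past dark matter density exceeds the uncoupled $\rho_{dm,0}\,a^{-3}$ scaling, which is why a lower matter density is needed today:

```python
import math

def coupled_densities(delta, w, a_end, rho_dm0=1.0, rho_de0=2.3, steps=20000):
    """RK4 integration of the background equations in x = ln(a), backward
    from a = 1 (today):
        d rho_dm / dx = -3 rho_dm + delta * rho_de
        d rho_de / dx = -(3 (1 + w) + delta) * rho_de
    Densities are in arbitrary units; parameter values are illustrative."""
    def rhs(y):
        dm, de = y
        return (-3.0 * dm + delta * de, -(3.0 * (1.0 + w) + delta) * de)

    h = math.log(a_end) / steps          # negative step: integrate into the past
    y = (rho_dm0, rho_de0)
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(2)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(2)))
        k4 = rhs(tuple(y[i] + h / 1.0 * k3[i] for i in range(2)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(2))
    return y

# delta < 0: energy flows from dark matter to dark energy, so the past
# DM density is larger than the uncoupled rho_dm0 * a^-3 scaling.
a = 1e-3
rho_dm, _ = coupled_densities(delta=-0.35, w=-0.999, a_end=a)
print(rho_dm / (1.0 * a ** -3))   # ratio > 1 (about 1.24 for these values)
```

Fixing instead the past (CMB-era) matter density, the same ratio translates into a smaller $\Omega_{\rm 0,m}$ today, as discussed next for Model B.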
\n\nIn Fig.~\\ref{fig:triangle_B} we depict the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained for Model B.\nComparing with Fig.~\\ref{fig:triangle_LCDM}, and also confronting the mean values of Tab.~\\ref{tab:model_B} with those shown in Tab.~\\ref{tab:model_LCDM} (and, to a minor extent, with those in Tab.~\\ref{tab:model_A}),\none can notice that the value of $\\ensuremath{\\Omega_{\\rm 0,fld}}$ is much larger.\nThe reason for this is related to the lower value for the present matter energy density $\\Omega_{\\rm 0,m}$ (the values are also shown in the tables), which is required within the interacting cosmologies when the dark matter-dark energy coupling is negative.\nIn the context of a universe with a negative dark coupling, indeed, there is an energy flow from dark matter to dark energy.\nConsequently, the (dark) matter content in the past is higher than in the standard $\\Lambda$CDM scenario and the amount of intrinsic (dark) matter needed today is lower, because of the extra contribution from the dark energy sector.\nIn a flat universe, this translates into a much higher value of $\\ensuremath{\\Omega_{\\rm 0,fld}}$.\nOn the other hand, a lower value of $\\Omega_{\\rm 0,m}$ requires a larger value of the clustering parameter $\\sigma_8$ to be able to satisfy the overall normalization of the matter power spectrum. In any case, we find again that the addition of a prior on either $H_0$ or $M_B$ leads to exactly the same shift for all the cosmological parameters.\nTherefore, Model B also provides an excellent solution to the Hubble constant tension,\nalthough at the expense of a very large $\\sigma_8$. 
\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|c|} \n\\hline \nParameter & CSB & CSBH & CSBM \\\\\n\\hline\n$\\omega{}_{cdm }$ & $0.138_{-0.015}^{+0.008}$ & $0.137_{-0.016}^{+0.007}$ & $0.135_{-0.013}^{+0.008}$ \\\\\n$\\ensuremath{\\Omega_{\\rm 0,fld}}$ & $0.655_{-0.021}^{+0.032}$ & $0.671_{-0.018}^{+0.031}$ & $0.675_{-0.018}^{+0.027}$ \\\\\n$\\Omega_{\\rm 0,m}$ & $0.345_{-0.032}^{+0.021}$ & $0.329_{-0.031}^{+0.018}$ & $0.325_{-0.027}^{+0.018}$ \\\\\n$\\ensuremath{w_{\\rm 0,fld}}$ & $-1.087_{-0.042}^{+0.051}$ & $-1.131_{-0.044}^{+0.053}$ & $-1.117_{-0.044}^{+0.048}$ \\\\\n$\\ensuremath{\\delta{}_{DMDE}}$ & $0.183_{-0.180}^{+0.061}$ & $0.173_{-0.170}^{+0.051}$ & $0.150_{-0.150}^{+0.051}$ \\\\\n$M_B$ & $-19.41\\pm0.02$ & $-19.38\\pm0.02$ & $-19.37\\pm0.02$ \\\\\n$H_0$ & $68.29_{-0.91}^{+0.66}$ & $69.74_{-0.73}^{+0.75}$ & $69.67_{-0.77}^{+0.78}$ \\\\\n$\\sigma_8$ & $0.735_{-0.057}^{+0.045}$ & $0.748_{-0.041}^{+0.068}$ & $0.755_{-0.047}^{+0.051}$ \\\\\n\\hline\nminimum $\\chi^2$ & $3818.24$ & $3830.56$ & $3835.10$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Mean values and 68\\% CL errors on $\\omega_{cdm }\\equiv\\Omega_{cdm} h^2$, the current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$, the current matter energy density $\\Omega_{\\rm 0,m}$, the dark energy equation of state $\\ensuremath{w_{\\rm 0,fld}}$,\nthe dimensionless dark matter-dark energy coupling $\\ensuremath{\\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\\sigma_8$ within the interacting model C, see Tab.~\\ref{tab:priors}.\nWe also report the minimum value of the $\\chi^2$ function obtained for each of the data combinations.}\n\\label{tab:model_C}\n\\end{table}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{C_PlSB-vs-C_PlSBH-vs-C_PlSBM_triangle.pdf} \n\\caption{68\\% CL and 95\\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological 
parameters within model C, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}\n\\label{fig:triangle_C}\n\\end{center}\n\\end{figure*}\n\nFinally, Tab.~\\ref{tab:model_C} shows the mean values and the $1\\sigma$ errors on the usual cosmological parameters explored throughout this study, for Model C.\nNotice that this model benefits from both its interacting nature and from the fact that $\\ensuremath{w_{\\rm 0,fld}}<-1$ and $\\ensuremath{\\delta{}_{DMDE}}>0$.\nBoth features of the dark energy sector have been shown to be excellent solutions to the Hubble constant problem.\nAs in the previous cases, the shift in the cosmological parameters induced by the addition of a prior is independent of its nature, i.e.\\ it is independent of whether a prior on $H_0$ or $M_B$ is adopted.\nWithin this model, the value of the Hubble constant is naturally larger than within the $\\Lambda$CDM model (see the blue lines in Fig.~\\ref{fig:h0}),\nregardless of the data sets assumed in the analyses.\nDespite its phantom nature (in this particular case $\\ensuremath{w_{\\rm 0,fld}}<-1$ is required to ensure an instability-free evolution of perturbations), Model C provides the \\emph{best fits to all of the data combinations explored here, performing even better than} the minimal $\\Lambda$CDM picture,\nas one can clearly notice from the last row of Tab.~\\ref{tab:model_C}.\nThis fact makes Model C a very attractive cosmological scenario which can provide a solution for the long-standing $H_0$ tension. 
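The fit improvement can be weighed against the extra parameters with a simple information-criterion estimate. The back-of-the-envelope comparison below is ours, not performed in the text; it uses the minimum $\chi^2$ values quoted in the tables and the standard definition $\mathrm{AIC}=\chi^2_{\rm min}+2k$:

```python
# chi^2_min values quoted in the LCDM and Model C tables; Model C has two
# extra free parameters (w_0,fld and delta_DMDE). This AIC comparison is an
# illustration, not an analysis performed in the paper.
chi2_lcdm = {"CSB": 3819.46, "CSBH": 3836.50, "CSBM": 3840.44}
chi2_modC = {"CSB": 3818.24, "CSBH": 3830.56, "CSBM": 3835.10}
extra_params = 2

for combo in chi2_lcdm:
    delta_chi2 = chi2_modC[combo] - chi2_lcdm[combo]
    delta_aic = delta_chi2 + 2 * extra_params   # AIC penalty for 2 extra dof
    print(f"{combo}: dchi2 = {delta_chi2:+.2f}, dAIC = {delta_aic:+.2f}")
```

Under this crude criterion, Model C remains mildly favoured over $\Lambda$CDM once the $H_0$ or $M_B$ prior is included, even after penalising the two extra degrees of freedom.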
We must remember that model C, however, has two degrees of freedom more than the standard $\\Lambda$CDM paradigm.\nFigure~\\ref{fig:triangle_C} illustrates the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model C.\nNotice that here the situation is the opposite of Model B: the value of $\\ensuremath{\\Omega_{\\rm 0,fld}}$ is much smaller than in standard scenarios,\ndue to the larger value required for the present matter energy density $\\Omega_{\\rm 0,m}$ when the dark matter-dark energy coupling $\\ensuremath{\\delta{}_{DMDE}}>0$ and $\\ensuremath{w_{\\rm 0,fld}}<-1$.\nThis larger value of the present matter energy density also implies a lower value for the clustering parameter $\\sigma_8$, in contrast to what was required within Model B.\n\n\n\\section{Final Remarks}\n\\label{sec:conclusions}\n\nIn this study we have reassessed the ability of interacting dark matter-dark energy cosmologies to alleviate the long-standing and highly significant Hubble constant tension.\nDespite the fact that in the past these models have been shown to provide an excellent solution to the discrepancy between local measurements and high-redshift Cosmic Microwave Background estimates of $H_0$, there have been recent works in the literature questioning \ntheir effectiveness, on the grounds of a misinterpretation of SH0ES data, which indeed do not directly extract the value of $H_0$.\nWe have therefore quantified the ability of interacting cosmologies to reduce the Hubble tension by means of two different priors in the cosmological analyses:\na prior on the Hubble constant and, separately, a prior on the Type Ia Supernova absolute magnitude.\nWe combine these priors with Cosmic Microwave Background (CMB), Type Ia Supernovae (SN) and Baryon Acoustic Oscillation (BAO) measurements,\nshowing that the constraints on the cosmological parameters are independent of the choice of prior, and that the Hubble constant tension 
is always alleviated.\nThis last statement is also prior-independent.\nFurthermore, one of the possible interacting cosmologies considered here,\nwith a phantom nature, provides a better fit than the canonical $\\Lambda$CDM framework for all the considered data combinations, but with two extra degrees of freedom.\nWe therefore conclude that interacting dark matter-dark energy cosmologies still provide a very attractive and viable theoretical and phenomenological scenario\nin which the Hubble constant tension can be robustly relieved,\nregardless of the method one adopts to process SH0ES data. \n\n\n\\begin{acknowledgments}\n\\noindent \nSG acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 754496 (project FELLINI).\nEDV is supported by a Royal Society Dorothy Hodgkin Research Fellowship. \nOM is supported by the Spanish grants PID2020-113644GB-I00, PROMETEO/2019/083 and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019//860881-HIDDeN).\nRCN acknowledges financial support from the Funda\\c{c}\\~{a}o de Amparo \\`{a} Pesquisa do Estado de S\\~{a}o Paulo (FAPESP, S\\~{a}o Paulo Research Foundation) under the project No. 
2018/18036-5.\n\\end{acknowledgments}\n\n", "meta": {"timestamp": "2021-11-08T02:04:43", "yymm": "2111", "arxiv_id": "2111.03152", "language": "en", "url": "https://arxiv.org/abs/2111.03152"}} +{"text": "\n\n\\section{Introduction} \\label{sec:introduction} \\input{introduction}\n\\section{Related Work} \\label{sec:related_work} \\input{relatedWork}\n\\section{Model Description} \\label{sec:model} \\input{modelDescription}\n\\section{Experiments} \\label{sec:experiments} \\input{experiments}\n\\section{Conclusions and Future Work} \\label{sec:conclusions} \\input{conclusion}\n\n{\\small\n\\textbf{Acknowledgements}\n\\input{acknowledgements}\n}\n\n{\\small\n\\bibliographystyle{ieee}\n\n\\subsection{Composable Activities Dataset} \\label{subsec:composableActivities}\r\n\r\n\r\n\r\n\r\n\n\n\n\\subsection{Inference of per-frame annotations.}\n\\label{subsec:action_annotation}\nThe hierarchical structure and compositional\nproperties of our model enable it to output a predicted global activity,\nas well as per-frame annotations of predicted atomic actions and poses for each body\nregion.\nIt is important to highlight that in the generation of the per-frame annotations, no prior temporal \nsegmentation of atomic actions is needed. Also, no post-processing of the output is performed. The \nability of our model to produce\nper-frame annotated data, enabling action detection both temporally and\nspatially, makes it unique. \n\nFigure \\ref{fig:annotation} illustrates\nthe capability of our model to provide per-frame annotation of the atomic\nactions that compose each activity. The accuracy of\nthe mid-level action prediction can be evaluated as in \\cite{Wei2013}.\nSpecifically, we first obtain segments of the same predicted action in each\nsequence, and then compare these segments with ground truth action labels. 
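This segment-level matching can be sketched as follows; it is a minimal illustration of the containment-or-Jaccard test on 1-D frame intervals (function and variable names are ours, not from the released code):

```python
def segment_is_correct(pred, gt, jaccard_thresh=0.6):
    """pred, gt: (start, end) frame intervals carrying the same action label.
    Correct if pred is fully contained in gt, or if their Jaccard index
    (intersection over union of the two intervals) exceeds the threshold."""
    (ps, pe), (gs, ge) = pred, gt
    if gs <= ps and pe <= ge:                      # full containment
        return True
    inter = max(0, min(pe, ge) - max(ps, gs))      # overlap length
    union = max(pe, ge) - min(ps, gs)              # span of both intervals
    return union > 0 and inter / union > jaccard_thresh

print(segment_is_correct((12, 20), (10, 30)))      # contained -> True
print(segment_is_correct((8, 28), (10, 30)))       # IoU = 18/22 -> True
print(segment_is_correct((0, 12), (10, 30)))       # IoU = 2/30  -> False
```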
The\nestimated label of the segment is assumed correct if the detected segment is\ncompletely contained in a ground truth segment with the same label, or if the\nJaccard Index considering the segment and the ground truth label is greater\nthan 0.6. Using these criteria, the accuracy of the mid-level actions is\n79.4\\%. In many cases, the wrong action prediction is highly localized in time\nor space, and the model is still able to correctly predict the activity label\nof the sequence. Taking only the correctly predicted videos in terms of global\nactivity prediction, the accuracy of action labeling reaches 83.3\\%. When considering this number, it \nis\nimportant to note that not every ground truth action label is accurate: the\nvideos were hand-labeled by volunteers, so there is a chance for mistakes in\nterms of the exact temporal boundaries of the action. In\nthis sense, in our experiments we observe cases where the predicted\nlabels show more accurate temporal boundaries than the ground \ntruth.\n\n \n\\begin{figure*}[th]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]{./fig_all_sequences_red.pdf}\n\\end{center}\n\\caption{Per-frame predictions of atomic actions for selected activities,\nshowing 20 frames of each video. Each frame is joined with the predicted action\nannotations of left arm, right arm, left leg and right leg. 
Besides the prediction of the global \nactivity of the video, our algorithm is able to\ncorrectly predict the atomic actions that compose each activity in each frame,\nas well as the body regions that are active during the execution of the action.\nNote that in the example video of the activity \\emph{Walking while calling with\nhands}, the \\emph{calling with hands} action is correctly annotated even when\nthe subject changes the waving hand during the execution of the activity.}\n\\label{fig:annotation}\n\\end{figure*}\n\n\\subsection{Robustness to occlusion and noisy joints.}\nOur method is also capable of inferring action and activity labels even if some\njoints are not observed. This is a common situation in practice,\nas body motions induce temporal self-occlusions of body regions.\nNevertheless, due to the joint estimation of poses, actions, and activities,\nour model is able to reduce the effect of this problem. To illustrate this, we\nsimulate a totally occluded region by fixing its geometry to the position\nobserved in the first frame.\nWe select the region to be completely occluded in each sequence using uniform sampling.\nIn this scenario, the accuracy of our preliminary model in \\cite{Lillo2014} drops\nby 7.2\\%. Using our new SR setup including NI handling, the accuracy only drops\nby 4.3\\%, showing that the detection of non-informative poses helps the model\nto deal with occluded regions. In fact, as we show in Section\n\\ref{subsec:exp_non_info_handling}, many of the truly occluded regions in the\nvideos are identified using NI handling. In contrast, the drop in performance of\nBoW is 12.5\\% and of HMM is 10.3\\%: simpler models are less capable of robustly dealing\nwith occluded regions, since their pose assignments rely only on the descriptor\nitself, while in our model the assigned pose depends on the descriptor,\nsequences of poses and actions, and the activity evaluated, making inference\nmore robust. Fig. 
\\ref{fig:occlusions} shows some qualitative results of\noccluded regions.\n\n\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]\n{./subject_1_6.pdf} \\\\\n{\\footnotesize Right arm occluded} \\\\\n\\includegraphics[width=0.999\\linewidth]\n{./subject_1_23.pdf}\\\\\n{\\footnotesize Left leg occluded} \\\\\n\\includegraphics[width=0.999\\linewidth]\n{./subject_1_8.pdf}\\\\\n{\\footnotesize Left arm occluded}\\\\\n\\end{center}\n\\caption{The occluded body regions are depicted in light blue. When an arm or\nleg is occluded, our method still provides a good estimation of the underlying actions in each\nframe.}\n\\label{fig:occlusions}\n\\end{figure}\n\nIn terms of noisy joints, we manually add random Gaussian noise to the\n3D joint locations of the testing videos, using the SR setup and the GEO descriptor\nto isolate the effect of the joints without mixing in the motion descriptor. Figure\n\\ref{fig:joint_noise} shows the accuracy on the testing videos in terms of the noise\ndispersion $\\sigma_{noise}$ measured in inches. For small amounts of noise, there is not\nmuch effect on our model accuracy, as expected given the robustness of the\ngeometric descriptor. However, for more drastic noise added to every joint, the\naccuracy drops dramatically. This behavior is expected, since for highly noisy\njoints the model can no longer reliably predict the sequence of actions and poses. \n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]{./fig_acc_vs_noise.pdf} \\\\\n\\end{center}\n\\caption{Performance of our model in the presence of simulated Gaussian noise in\nevery joint, as a function of $\\sigma_{noise}$ measured in inches. When the\nnoise is less than 3 inches on average, the model performance is not much\naffected, while for larger noise dispersion the model accuracy is drastically\ndegraded. 
It is important to note that in our simulation, every joint is\naffected by noise, while in a real setup, noisy joint estimates tend to occur\nmore rarely. } \\label{fig:joint_noise}\n\\end{figure}\n\n\\subsection{Early activity prediction.}\nOur model needs the complete video to make an accurate activity and action\nprediction for a query video. In this section, we analyze the number of frames\n(as a percentage of a complete activity sequence) needed\nto make an accurate activity prediction. Figure \\ref{fig:accuracy_reduced_frames}\nshows the mean accuracy over the dataset (using leave-one-subject-out\ncross-validation) as a function of the\npercentage of frames used by the classifier to label each video. We note that\nusing 30\\% of the frames, the classifier makes reasonable predictions,\nwhile 70\\% of the frames are needed to closely match the\naccuracy of using all frames.\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]{./fig_acc_vs_frame_reduction.pdf}\n\\end{center}\n\\caption{Accuracy of activity recognition versus percentage of frames used in\nthe Composable Activities dataset. In general, 30\\% of the frames are needed to\nmake reasonable predictions, while 70\\% of the frames are needed to closely match the\naccuracy of using all frames.}\n\\label{fig:accuracy_reduced_frames}\n\\end{figure}\n\n\\subsection{Failure cases.}\n\nWe also study some of the failure cases that we observe during the\nexperimentation with our model.\nFigure \\ref{fig:errors} shows some error cases. It is interesting that\nthe sequences are confusing even for humans when only the skeleton is available\nas in the figure. 
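Returning to the noise-robustness experiment above, the joint perturbation can be sketched as follows (a toy illustration with made-up joint coordinates; the actual descriptor computation and classification are not shown):

```python
import random

def perturb_joints(joints, sigma_noise):
    """Add i.i.d. Gaussian noise (in inches) to every 3D joint coordinate,
    mimicking the robustness experiment in which all joints are perturbed."""
    return [[coord + random.gauss(0.0, sigma_noise) for coord in joint]
            for joint in joints]

# Toy skeleton: a short list of (x, y, z) joint positions in inches.
skeleton = [[0.0, 40.0, 2.0], [1.5, 35.0, 2.2], [-1.5, 35.0, 2.2]]
for sigma in (0.5, 3.0, 9.0):      # small sigma barely moves the joints;
    noisy = perturb_joints(skeleton, sigma)   # large sigma destroys the pose
```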
These errors are unlikely to be resolved by the model\nitself and will require additional sources of information, such as object\ndetectors capable of distinguishing a cup from a cellphone, as in the\nthird row of Figure \\ref{fig:errors}.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]\n{./sbj1_1.pdf} \\\\\n{\\footnotesize Ground truth: Walking while calling with hands\\\\\nPrediction: Walking while waving hand} \\\\\n\\includegraphics[width=0.999\\linewidth]\n{./sbj4_4.pdf}\\\\\n{\\footnotesize Ground truth: Composed activity 1\\\\\nPrediction: Talking on cellphone and drinking} \\\\\n\\includegraphics[width=0.999\\linewidth]\n{./sbj4_6.pdf}\\\\\n{\\footnotesize Ground truth: Waving hand and drinking\\\\\nPrediction: Talking on cellphone and scratching head} \\\\\n\\end{center}\n\\caption{Failure cases. Our algorithm tends to confuse activities that share very similar\nbody postures.}\n\\label{fig:errors}\n\\end{figure}\n\n\n\\begin{comment}\n\\subsubsection{New activity characterization}\nAs mentioned in the previous section, our model, using sparse regularization and\nnon-negative weights on activity ($\\alpha$) classifiers and action ($\\beta$)\nclassifiers, does not \\emph{punish} poses that have no influence on the\nactivities. For this reason, our model can represent a new composed activity\nby simply combining the coefficients of two known activities, leaving the rest of\nthe parameters of the model untouched. We use a heuristic approach to combine\ntwo models: given two classes $c_1$ and $c_2$, their coefficients for a region\n$r$ and action $a$ are $ \\alpha^r_{c_1,a}$ and $ \\alpha^r_{c_2,a}$\nrespectively.
For a new class $c_{new}$ composed of classes $c_1$ and $c_2$, we\nuse the mean value of the coefficients \\begin{equation}\n\\alpha^r_{{c_{new},a}} = \\frac{(\\alpha^r_{c_1,a} + \\alpha^r_{c_2,a})}{2}\n\\end{equation}\nonly when the corresponding coefficients are both positive; otherwise, we\nuse the maximum value of the two coefficients. For all subjects of the dataset,\nwe create all combinations of two activities and test the new model\nusing three composed videos per subject. The average accuracy on the new activity\n($16+1$) is 90.2\\%, while on average the activities that compose it\ndrop their accuracy by 12.3\\%, showing that we effectively incorporate a new\ncomposed activity into the model at the small cost of some additional confusion over\nthe original activities. Moreover, the accuracy of action labeling for the new\nclass is 74.2\\%, similar to the action labeling accuracy of the\noriginal model, so we can effectively transfer the learning of atomic action\nclassifiers to new compositions of activities.\n\n\\begin{table}\n\\begin{tabular}\n\\hline\nActivity group & Accuracy of new class & \\\\ \n\\hline\nSimple & 92.\nComplex & 87.2\\% & \\\\\n\\hline\nAll & 90.2\\% & \\\\\n\\end{tabular}\n\\caption{}\n\\label{tab:acc_new_class}\n\\end{table}\n\n\\end{comment}\n\n\\subsection{Classification of Simple and Isolated Actions}\n\nAs a first experiment,\nwe evaluate the performance of our model on the task of simple and\nisolated human action recognition in the MSR-Action3D dataset\n\\cite{WanLi2010}.\nAlthough our model is tailored to recognizing complex\nactions, this experiment verifies its performance in the\nsimpler scenario of isolated atomic action classification.\n\nThe MSR-Action3D dataset provides pre-trimmed depth videos and estimated body poses\nfor isolated actors performing actions from 20\ncategories.
We use 557 videos\nin a setup similar to\n\\cite{Wang2012}, where videos from subjects 1, 3, 5, 7, 9 are used for\ntraining and the rest for testing. Table \\ref{tab:msr3d} shows that in this\ndataset our model achieves classification accuracies comparable to\nstate-of-the-art methods.\n\n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nOur model & 93.0\\% \\\\\n\\hline\nL. Tao \\etal \\cite{Tao2015} & 93.6\\% \\\\\nC. Wang \\etal \\cite{Wang2013} & 90.2\\% \\\\\nVemulapalli \\etal \\cite{Vemulapalli2014} & 89.5\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{\\footnotesize\nRecognition accuracy in the MSR-Action3D\ndataset.}\n\\label{tab:msr3d}\n\\end{table}\n\n\\subsection{Detection of Concurrent Actions}\nOur second experiment evaluates the performance of our model in a concurrent\naction recognition setting. In this scenario, the goal is to predict\nthe temporal localization of actions that may occur concurrently in a long\nvideo. We evaluate this task on the Concurrent Actions dataset \\cite{Wei2013},\nwhich provides 61 RGBD videos and pose estimation data annotated with 12\naction categories.\nWe use an evaluation setup similar to the one proposed by the authors.\nWe split the dataset into training and testing sets with a 50\\%-50\\% ratio.\nWe evaluate performance by measuring precision-recall: a detected action\nis declared a true positive if its temporal overlap with the ground\ntruth action interval is larger than 60\\% of their union, or if\nthe detected interval is completely covered by the ground truth annotation.\n\nOur model is tailored to recognizing complex actions that are composed\nof atomic components. However, in this scenario, only atomic actions are\nprovided and no compositions are explicitly defined.
Therefore, we apply\na simple preprocessing step: we cluster training videos into groups\nby comparing the occurrence of atomic actions within each video.\nThe resulting groups are used as complex action labels for the training\nvideos of this dataset.\nAt inference time, our model outputs a single labeling per video,\nwhich corresponds to the atomic action labeling that maximizes the energy of\nour model.\nSince there are no thresholds to adjust, our model produces the single\nprecision-recall measurement reported in Table \\ref{tab:concurrent}.\nOur model outperforms the state-of-the-art method on this\ndataset at that recall level.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Precision} & \\textbf{Recall}\\\\\n\\hline\nOur full model & 0.92 & 0.81 \\\\\n\\hline\nWei \\etal \\cite{Wei2013} & 0.85 & 0.81 \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nDetection precision and recall in the Concurrent Actions dataset. }\n\\label{tab:concurrent}\n\\end{table}\n\n\\subsection{Recognition of Composable Activities}\nIn this experiment, we evaluate the performance of our model in recognizing complex\nand composable human activities. In the evaluation, we use the Composable\nActivities dataset \\cite{Lillo2014},\nwhich provides 693 videos of 14 subjects performing 16 activities.\nEach activity is a spatio-temporal composition of atomic actions.\nThe dataset provides a total of 26 atomic actions that are shared across\nactivities. We train our model using two levels of supervision:\ni) spatial annotations that map body regions to the execution of each action are made available;\nii) spatial supervision is not available, and therefore the labels $\\vec{v}$ that assign spatial regions to actionlets\nare treated as latent variables.\n\nTable \\ref{tab:composable} summarizes our results. We observe that under both\ntraining conditions, our model achieves comparable performance.
This indicates\nthat our weakly supervised model can recover part of the missing information\nwhile still performing well at the activity categorization task.\nDespite using less\nsupervision at training time, our method outperforms state-of-the-art\nmethods trained with full spatial supervision.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nBase model + GC, GEO desc. only, spatial supervision & 88.5\\%\\\\\nBase model + GC, with spatial supervision & 91.8\\% \\\\\nOur full model, no spatial supervision (latent $\\vec{v}$) & 91.1\\%\\\\\n\\hline\nLillo \\etal \\cite{Lillo2014} (without GC) & 85.7\\% \\\\\nCao \\etal \\cite{cao2015spatio} & 79.0\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nRecognition accuracy in the Composable Activities\ndataset.}\n\\label{tab:composable}\n\\end{table}\n\n\\subsection{Action Recognition in RGB Videos}\nOur experiments so far have evaluated the performance of our model\non the task of human action recognition in RGBD videos.\nIn this experiment, we explore the use of our model for human\naction recognition in RGB videos. For this purpose, we use the sub-JHMDB\ndataset \\cite{Jhuang2013}, which focuses on videos depicting 12 actions and\nwhere most of the actor's body is visible in the image frames.\nIn our validation, we use the 2D body pose configurations provided by the\nauthors and compare against previous methods that also use them. Given that\nthis dataset only includes 2D image coordinates for each body joint, we obtain\nthe geometric descriptor by adding a depth coordinate with a value $z = d$ for\njoints corresponding to wrists and knees, $z = -d$ for elbows, and $z = 0$ for all other joints,\nso that we can compute angles between segments; the value $d = 30$ is fixed by cross-validation.
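As an illustration, the pseudo-depth lift just described can be written in a few lines; the joint ordering and array layout here are assumptions for the sketch, not the dataset's actual format:

```python
import numpy as np

# Illustrative joint ordering; the actual sub-JHMDB layout may differ.
JOINTS = ["head", "neck", "l_elbow", "r_elbow", "l_wrist", "r_wrist",
          "l_knee", "r_knee", "l_ankle", "r_ankle"]

def lift_to_pseudo_3d(joints_2d, d=30.0):
    """Append a fixed depth coordinate to 2D joints so that 3D angles
    between body segments can be computed: z = d for wrists/knees,
    z = -d for elbows, z = 0 elsewhere (d fixed by cross-validation)."""
    z = np.zeros(len(JOINTS))
    for j, name in enumerate(JOINTS):
        if "wrist" in name or "knee" in name:
            z[j] = d
        elif "elbow" in name:
            z[j] = -d
    return np.column_stack([joints_2d, z])

pose2d = np.random.rand(len(JOINTS), 2) * 100   # fake image coordinates
pose3d = lift_to_pseudo_3d(pose2d)              # (J, 3) pseudo-3D pose
```

The fixed offsets only break coplanarity so that segment angles are well defined; any consistent sign convention for the limb joints would serve the same purpose.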
We summarize the results in Table\n\\ref{tab:subjhmdb},\nwhich shows that our method outperforms alternative state-of-the-art techniques.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nOur model & 77.5\\% \\\\\n\\hline\nJhuang \\etal \\cite{Jhuang2013} & 75.6\\% \\\\\nCh\\'eron \\etal \\cite{Cheron2015} & 72.5\\%\\\\\n\\hline\n\\end{tabular}\n\\caption{\\footnotesize\nRecognition accuracy in the sub-JHMDB dataset.}\n\\label{tab:subjhmdb}\n\\end{table}\n\n\\subsection{Spatio-temporal Annotation of Atomic Actions}\nIn this experiment, we study the ability of our model to provide spatial and\ntemporal annotations of relevant atomic actions. Table \\ref{tab:annotation}\nsummarizes our results. We report precision-recall rates\nfor the spatio-temporal annotations predicted by our model in the\ntesting videos (first and second rows). Notice that this is a\nvery challenging task: the testing videos do not provide any labels, and\nthe model needs to predict both the temporal extent of each action and the\nbody regions associated with its execution. Despite the\ndifficulty of the task, our model shows satisfactory results, inferring\nsuitable spatio-temporal annotations.\n\nWe also study the capability of the model to provide spatial and temporal\nannotations during training. In our first experiment, each video\nis provided\nwith the temporal extent of each action, so the model only needs to infer the\nspatial annotations (third row in Table \\ref{tab:annotation}). In a\nsecond experiment, we do not provide any temporal or spatial annotations,\nbut only the global action label of each video (fourth row in Table\n\\ref{tab:annotation}).
In both experiments, we observe that the model is\nstill able to infer suitable spatio-temporal annotations.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\textbf{Videos} & \\textbf{Annotation inferred} & \\textbf{Precision} & \\textbf{Recall}\\\\\n\\hline\nTesting set & Spatio-temporal, no GC & 0.59 & 0.77 \\\\\nTesting set & Spatio-temporal & 0.62 & 0.78 \\\\\n\\hline\nTraining set & Spatial only & 0.86 & 0.90\\\\\nTraining set & Spatio-temporal & 0.67 & 0.85 \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nAtomic action annotation performance in the Composable Activities\ndataset. The results show that our model is able to recover spatio-temporal\nannotations both at training and testing time.}\n\\label{tab:annotation}\n\\end{table}\n\n\\subsection{Effect of Model Components}\nIn this experiment,\nwe study the contribution of key components of the\nproposed model. First, using the sub-JHMDB dataset,\nwe measure the impact of three components of our model: the garbage collector for\nmotion poselets (GC), the multimodal modeling of actionlets, and the use of latent\nvariables to infer spatial annotations of body regions (latent $\\vec{v}$). Table\n\\ref{tab:components} summarizes our experimental results.
The table shows that the full version\nof our model achieves the best performance, with each of the components\nmentioned above contributing to the overall success of the method.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nBase model, GEO descriptor only & 66.9\\%\\\\\nBase Model & 70.6\\%\\\\\nBase Model + GC & 72.7\\% \\\\\nBase Model + Actionlets & 75.3\\%\\\\\nOur full model (Actionlets + GC + latent $\\vec{v}$) & 77.5\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nAnalysis of the contribution to recognition performance of\neach model component in the sub-JHMDB dataset.}\n\\label{tab:components}\n\\end{table}\n\nSecond, using the Composable Activities dataset, we also analyze the\ncontribution of the proposed self-paced learning scheme for initializing and\ntraining our model. We summarize our results in\nTable \\ref{tab:initialization} by reporting action\nrecognition accuracy under different initialization schemes: i) Random: random\ninitialization of latent variables $\\vec{v}$; ii) Clustering: initialize\n$\\vec{v}$ by first computing a BoW descriptor for the atomic action intervals\nand then performing $k$-means clustering, assigning each action interval to the\nclosest cluster center; and iii) Ours: initialize $\\vec{v}$ using the proposed\nself-paced learning scheme. The proposed initialization scheme helps the model achieve its best\nperformance.\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Initialization Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nRandom & 46.3\\% \\\\\nClustering & 54.8\\% \\\\\nOurs & 91.1\\% \\\\\n\\hline\nOurs, fully supervised & 91.8\\%\\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nResults in the Composable Activities dataset, with latent $\\vec{v}$ and different initialization schemes.
}\n\\label{tab:initialization}\n\\end{table}\n\n\\subsection{Qualitative Results}\nFinally, we provide a qualitative analysis of\nrelevant properties of our model. Figure \\ref{fig:poselets_img} \nshows examples of moving poselets learned in the Composable \nActivities dataset. We observe that each moving poselet captures \na salient body configuration that helps to discriminate among atomic \nactions. To further illustrate this, Figure \\ref{fig:poselets_img} \nindicates the most likely underlying atomic action for each moving poselet.\nFigure \\ref{fig:poselets_skel} presents a similar analysis for moving \nposelets learned in the MSR-Action3D dataset.\n\nWe also visualize the action annotations produced by our model.\nFigure \\ref{fig:actionlabels} (top) shows the action labels associated\nwith each body part in a video from the Composable Activities dataset.\nFigure \\ref{fig:actionlabels} (bottom) illustrates per-body part action\nannotations for a video in the Concurrent Actions dataset. 
These\nexamples illustrate the capabilities of our model to correctly\nannotate the body parts that are involved in the execution of each action,\nin spite of not having that information during training.\n\n\n\\begin{figure}[tb]\n\\begin{center}\n\\scriptsize\n Motion poselet \\#4 - most likely action: talking on cellphone\\\\\n \\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\\textwidth]{Fig/poselets1}\n\n Motion poselet \\#7 - most likely action: erasing on board\\\\\n \\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\\textwidth]{Fig/poselets2}\n\n Motion poselet \\#19 - most likely action: waving hand\\\\\n \\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\\textwidth]{Fig/poselets3}\n\\end{center}\n\\caption{\n\\footnotesize\nMoving poselets learned from the Composable Activities\ndataset.}\n\\label{fig:poselets_img}\n\\end{figure}\n\n\n\\begin{figure}[tb]\n\\begin{center}\n\\scriptsize\n Motion poselet \\#16 - most likely action: tennis swing\\\\\n \\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\\textwidth]{Fig/poselets4}\n\n Motion poselet \\#34 - most likely action: golf swing\\\\\n \\includegraphics[trim=0 0 0cm 0cm,clip, width=0.49\\textwidth]{Fig/poselets5}\n\n Motion poselet \\#160 - most likely action: bend\\\\\n \\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\\textwidth]{Fig/poselets6}\n\n\\end{center}\n\\caption{\n\\footnotesize\nMoving poselets learned from the MSR-Action3D\ndataset.}\n\\label{fig:poselets_skel}\n\\end{figure}\n\n \n\n\\begin{figure}[tb]\n\\begin{center}\n\\scriptsize\n\\includegraphics[]{Fig/labels_acciones}\n\\end{center}\n\\caption{\n\\footnotesize\nAutomatic spatio-temporal annotation of atomic actions. 
Our method\ndetects the temporal span and spatial body regions that are involved in\nthe performance of atomic actions in videos.}\n\\label{fig:actionlabels}\n\\end{figure}\n\n\n\\begin{comment}\n\n[GENERAL IDEA]\n\nWhat we want to show:\n\\begin{itemize}\n\\item Show tables of results that can be useful to compare the model.\n\\item Show how the model is useful for videos of simple and composed actions, since now the level of annotations is similar.\n\\item Show how the inference produces annotated data (poses, actions, etc). In particular, show in Composable Activities and Concurrent actions how the action compositions are handled by the model without post-processing.\n\\item Show results in sub-JHMDB,showing how the model detects the action in the videos and also which part of the body performs the action (search for well-behaved videos). It could be interesting to show the annotated data over real RGB videos. \n\\item Show examples of poses (like poselets) and sequences of 3 or 5 poses for actions (Actionlets?)\n\\end{itemize}\n\n\\subsection{Figures}\nThe list of figures should include:\n\\begin{itemize}\n\\item A figure showing the recognition and mid-level labels of Composable Activities, using RGB videos\n\\item Comparison of action annotations, real v/s inferred in training set, showing we can recover (almost) the original annotations.\n\\item Show a figure similar to Concurrent Actions paper, with a timeline showing the actions in color. 
We can show that our inference is more stable than proposed in that paper, and it is visually more similar to the ground truth than the other methods.\n\\item Show a figure for sub-JHMDB dataset, where we can detect temporally and spatially the action without annotations in the training set.\n\\item Show Composable Activities and sub-JHMDB the most representative poses and actions.\n\\end{itemize}\n\n\n\\paragraph{Composable Activities Dataset}\nIn this dataset we show several results.\n(1) Comparing TRAJ descriptor (HOF over trajectory);\n(2) Compare the results using latent variables for action assignations to\nregions, with different initializations;\n(3) Show results of the annotations of the videos in inference.\n\nWe must include figures comparing the real annotations\nand the inferred annotations for training data, to show we are able to get the\nannotations only from data.\n\n\n\n\\subsection{Recognition of composable activities}\n\\label{subsec:experiments_summary}\n\n\\subsection{Impact of including motion features}\n\\label{subsec:exp_motionfeats}\n\n\\subsection{Impact of latent spatial assignment of actions}\n\\label{subsec:exp_vlatent}\n\n\\subsection{Impact of using multiple classifiers per semantic action}\n\\label{subsec:exp_multiple}\n\n\\subsection{Impact of handling non-informative poses}\n\\label{subsec:exp_non_info_handling}\n\\end{comment}\n\n\n\n\n\\begin{comment}\n\\subsection{CAD120 Dataset}\nThe CAD120 dataset is introduced in \\cite{Koppula2012}. It is composed of 124\nvideos that contain activities in 10 clases performed by 4 actors. Activities\nare related to daily living: \\emph{making cereal}, \\emph{stacking objects}, or\n\\emph{taking a meal}. Each activity is composed of simpler actions like\n\\emph{reaching}, \\emph{moving}, or \\emph{eating}. In this database, human-object\ninteractions are an important cue to identify the actions, so object\nlocations and object affordances are provided as annotations. 
Performance\nevaluation is made through leave-one-subject-out cross-validation. Given\nthat our method does not consider objects, we use only\nthe data corresponding to the 3D joints of the skeletons. As shown in Table\n\\ref{Table-CAD120},\nour method outperforms the results reported in\n\\cite{Koppula2012} using the same experimental setup. It is clear that using\nonly 3D joints is not enough to characterize each action or activity in this\ndataset. As part of our future work, we expect that adding information related\nto objects will further improve accuracy.\n\\begin{table}\n\\centering\n{\\small\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Average precision} & \\textbf{Average recall}\\\\\n\\hline\nOur method & 32.6\\% & 34.58\\% \\\\\n\\hline\n\\cite{Koppula2012} & 27.4\\% & 31.2\\%\\\\\n\\cite{Sung2012} & 23.7\\% & 23.7\\% \\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Recognition accuracy of our method compared to state-of-the-art methods\nusing the CAD120 dataset.}\n\\label{Table-CAD120}\n\\end{table}\n\\end{comment}\n\n\\subsection{Latent spatial actions for hierarchical action detection}\n\n\\subsection{Hierarchical activity model}\n\nSuppose we have a video $D$ with $T$ frames, each frame described by a feature vector $x_t$. Assume we have available $K$ classifiers $\\{w_k\\}_{k=1}^K$ over the frame descriptors, such that each frame descriptor can be associated with a single classifier. If we choose the maximum response for every frame, encoded as $z_t = \\argmax_k\\{w_k^\\top x_t\\}$, we can build a BoW representation to feed linear action classifiers $\\beta$, computing the histogram $h(Z)$ of $Z = \\{z_1,z_2,\\dots,z_T\\}$ and using these histograms as a feature vector for the complete video to recognize single actions. Imagine now that we would like to use the scores of the maximum responses, $w_{z_t}^\\top x_t$, as a potential to help discriminate videos that present reliable poses from videos that do not.
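The max-response quantization and BoW construction just described can be sketched as follows; the dimensions and random data are purely illustrative:

```python
import numpy as np

def pose_bow(X, W):
    """Quantize each frame by its highest-scoring pose classifier and
    build the label histogram h(Z).
    X: (T, D) frame descriptors; W: (K, D) linear pose classifiers.
    Returns labels Z, histogram h, and the aggregated max-response
    score sum_t w_{z_t}^T x_t."""
    responses = X @ W.T                   # (T, K) classifier responses
    Z = responses.argmax(axis=1)          # z_t = argmax_k w_k^T x_t
    h = np.bincount(Z, minlength=W.shape[0]).astype(float)
    score = responses.max(axis=1).sum()   # potential rewarding reliable poses
    return Z, h, score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16))             # T = 50 frames, D = 16 features
W = rng.normal(size=(8, 16))              # K = 8 pose classifiers
Z, h, score = pose_bow(X, W)
```

An action classifier $\beta$ would then consume `h` as the video-level feature, and `score` can enter an energy function as the aggregated frame-classifier potential.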
We can build a joint energy function, combining the action classifier score and the aggregated frame classifier scores, as\n\\begin{equation}\n\\label{eq:2-levels}\n\\begin{split}\nE(D) &= \\beta_{a}^\\top h(Z) + \\sum_{t=1}^T w_{z_t}^\\top x_t \\\\ & = \\sum_{t=1}^T\\sum_{k=1}^K\\left(\\beta_{a,k} + w_k^\\top x_t \\right)\\delta(z_t=k)\n\\end{split}\n\\end{equation}\nWhat is interesting about Eq. (\\ref{eq:2-levels}) is that every term in the sum is tied to the value of $z_t$, creating a model in which all components depend on the labeling $Z$. We can expand the previous model to more levels using the same philosophy. In fact, for a new level, we could create a new indicator $v_t$ for every frame that encodes the selection of which classifier $\\beta$ will be used (just as $z_t$ indicates which classifier $w$ is used). If we call $w$ the \\emph{pose classifiers} and $\\beta$ the \\emph{action classifiers}, we can create a hierarchical model where multiple poses and actions can be present in a single video. Suppose we have $A$ actions; the energy of a three-level hierarchy for an \\emph{activity} $l$ could be\n\\begin{equation}\nE(D) =\\alpha_l^\\top h(V) + \\sum_{a=1}^A \\beta_{a}^\\top h^a(Z,V) + \\sum_{t=1}^T w_{z_t}^\\top x_t\n\\end{equation}\nwhere $h^a(Z,V)$ refers to the BoW representation of $Z$ for those frames labeled as action $v_t = a$.\n\nRecent work in action recognition \\cite{Cheron2015,Tao2015, Wang2011,Jhuang2013} shows a resurgence of describing human actions as a collection of dynamic spatial parts resembling Poselets. In line with this research, we split the human body into $R$ semantic regions. As modeling actions using the whole body is hard, separating the body into groups of limbs helps action recognition, especially in complex datasets \\cite{Tao2015}.
Our view is that while poses are in general well defined in most research, little effort has been made to mine actions from videos in terms of detecting their temporal span (action detection) and their localization. In addition, since most action datasets contain only single actions, there is a lack of research on the general setup where actions are combined in the same video. Nevertheless, a few works have noticed that humans usually perform complex actions in real life \\cite{Wei2013, Lillo2014}, providing their own datasets based on RGB-D cameras. In our work, we aim to bring together the worlds of single and composed actions in a single hierarchical model with three semantic levels, using human body regions to improve representational power.\n\nDuring training, we assume temporal annotations of actions are available. As we want our model to perform action localization, we model the action assignments $V_r$ in each region as latent variables during training, allowing the model to infer which human part executes each action without requiring this kind of annotation in the training set, and we include a model for the initialization of action labels. In this way, we advance from a simple detection problem to also inferring \\emph{how} the subject executes the action, which is important in surveillance, health monitoring, and other applications. We also expand the modeling of recurrent patterns of poses to construct a general model for shared actions, aiming to handle multimodal information, produced by actions that share a label but differ in execution patterns, or by changes in the representation of actions such as varying camera views. We handle this problem by augmenting the number of action classifiers, where each original action acts as a parent node of several non-overlapping child actions. Finally, as we use local information for poses, some frames could be noisy or represent an uncommon pose that is not useful for building the pose models.
We attack this issue by adding a garbage collector for poses, where only the most informative poses are used by the pose classifiers during learning. We describe these contributions in the following paragraphs.\n\n\\paragraph{Latent assignments of actions to human regions}\n\nKnowing which parts of the body are involved in each action is highly appealing. Suppose we have $M$ videos, each video annotated with $Q_m$ action intervals. Each action interval can be associated with any number of regions, from $1$ to all $R$ regions. For example, a \\emph{waving hand} action could be associated only with \\emph{right\\_arm}, while the action \\emph{jogging} could be associated with the whole body. We want to learn the associations between actions and human parts for the training videos, and we build these associations using latent variables. The main problem to solve is how to obtain a proper initialization for actions, since there is a very high chance of getting stuck in a local minimum far from the optimum, producing bad results.\n\nOur first contribution is a method to obtain a proper initialization of fine-grained spatial action labels, knowing only the time span of the actions. Using the known action intervals, we formulate the problem of action-to-region assignment as an optimization problem, constrained using structural information: the action intervals must not overlap in the same region, and every action interval must be present in at least one region. We formulate this labeling problem as a binary Integer Linear Programming (ILP) problem. We define $v_{r,q}^m=1$ when the action interval $q \\in \\{1,\\dots,Q_m\\}$ appears in region $r$ of video $m$, and $v_{r,q}^m=0$ otherwise. We assume we have pose labels $z_{t,r}$ in each frame, independent for each region, learned via clustering the poses over all frames in all videos.
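This pose-label initialization (independent per-region clustering of frame descriptors) can be sketched with a plain $k$-means loop; the array shapes, number of regions, and $K$ below are illustrative only:

```python
import numpy as np

def init_pose_labels(frames_per_region, K, iters=20, seed=0):
    """Cluster frame pose descriptors independently for each body region
    to obtain initial labels z_{t,r}.
    frames_per_region: list of (T_r, D) arrays, one per region.
    Returns a list with one (T_r,) label array per region."""
    rng = np.random.default_rng(seed)
    labels = []
    for X in frames_per_region:
        # Initialize centers from random frames of this region.
        centers = X[rng.choice(len(X), size=K, replace=False)].copy()
        for _ in range(iters):
            dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
            z = dists.argmin(axis=1)
            for k in range(K):
                if (z == k).any():        # keep old center if cluster empties
                    centers[k] = X[z == k].mean(axis=0)
        labels.append(z)
    return labels

rng = np.random.default_rng(1)
regions = [rng.normal(size=(200, 10)) for _ in range(4)]  # R = 4 regions
z_init = init_pose_labels(regions, K=8)
```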
For an action interval $q$, we use as descriptor the histogram of pose labels per region within the interval, defined for video $m$ as $h_{r,q}^m$. We can solve the problem of finding the correspondence between action intervals and regions with a formulation similar to $k$-means, using the structure of the problem as constraints on the labels, and using the $\\chi^2$ distance between the action interval descriptors and the cluster centers:\n\\begin{equation}\n\\begin{split}\nP1) \\quad \\min_{v,\\mu} &\\sum_{m=1}^M \\sum_{r=1}^R \\sum_{q=1}^{Q_m} v_{r,q}^m d( h_{r,q}^m - \\mu_{a_q}^r) -\\frac{1}{\\lambda} v_{r,q}^m\\\\ \n \\text{s. to} \n\\quad \n& \\sum_{r=1}^R v_{r,q}^m \\ge 1\\text{, }\\forall q\\text{, }\\forall m \\\\ \n& v_{r,q_1}^m + v_{r,q_2}^m \\le 1 \\text{ if } q_1\\cap q_2 \\neq \\emptyset \\text{, }\\forall r\\text{, }\\forall m\\\\ \n& v_{r,q}^m \\in \\{0,1\\}\\text{, }\\forall q\\text{, }\\forall{r}\\text{, }\\forall m\n\\end{split}\n\\end{equation}\nwith\n\\begin{equation}\nd( h_{r,q}^m - \\mu_{a_q}^r) = \\sum_{k=1}^K (h_{r,q}^m[k] - \\mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\\mu_{a_q}^r[k]).\n\\end{equation}\n\nThe centers $\\mu_{a_q}^r$ are computed as the mean of the descriptors with the same action label within the same region. We solve $P1$ iteratively, as in $k$-means: we first find the cluster centers $\\mu_{a}^r$ for each region $r$ using the labels $v_{r,q}^m$, and then find the best labeling given the cluster centers by solving an ILP. Note that the first term of the objective function is similar to a $k$-means model, while the second term resembles the objective of \\emph{self-paced} learning as in \\cite{Kumar2010}, encouraging a balance between assigning a single region to every action and assigning all feasible regions to each action interval.
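To make the labeling step concrete, the sketch below replaces the exact ILP with a greedy feasible assignment for a single video (a simplified stand-in, not the solver used here); `H[r][q]` and `centers[r][q]` are hypothetical containers for the histograms $h_{r,q}^m$ and the centers $\mu_{a_q}^r$:

```python
import numpy as np

def chi2(h, mu, eps=1e-8):
    """Chi-square distance between a histogram and a cluster center."""
    return ((h - mu) ** 2 / (h + mu + eps)).sum()

def assign_intervals(H, intervals, centers, lam=10.0):
    """Greedy stand-in for one labeling step of P1: activate region r for
    interval q when the self-paced gain 1/lam exceeds the chi-square cost,
    subject to (i) at least one region per interval and (ii) no two
    temporally overlapping intervals sharing a region. A full ILP would
    optimize these choices jointly instead of interval by interval."""
    R, Q = len(H), len(intervals)
    v = np.zeros((R, Q), dtype=int)
    def overlaps(q1, q2):
        s1, e1 = intervals[q1]; s2, e2 = intervals[q2]
        return max(s1, s2) < min(e1, e2)
    for q in range(Q):
        costs = [chi2(H[r][q], centers[r][q]) for r in range(R)]
        for r in np.argsort(costs):       # cheapest regions first
            blocked = any(v[r, p] and overlaps(q, p) for p in range(q))
            if not blocked and (v[:, q].sum() == 0 or costs[r] < 1.0 / lam):
                v[r, q] = 1               # best free region, or cost < 1/lam
    return v

H = [[np.array([1.0, 2.0]), np.array([3.0, 1.0])],
     [np.array([2.0, 2.0]), np.array([1.0, 1.0])]]
centers = [[np.array([1.0, 2.0]), np.array([0.0, 5.0])],
           [np.array([5.0, 0.0]), np.array([1.0, 1.0])]]
v = assign_intervals(H, [(0, 10), (20, 30)], centers)
```

The activation rule mirrors the objective: each term $v_{r,q}(d - 1/\lambda)$ is only worth activating when $d < 1/\lambda$, while the $\ge 1$ constraint forces the cheapest feasible region regardless.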
We describe the further changes to the hierarchical model of \\cite{Lillo2014} in the learning and inference sections.\n\n\\paragraph{Representing semantic actions with multiple atomic sequences}\n\nAs the poses and atomic actions in the model of \\cite{Lillo2014} are shared, a single classifier is generally not enough to capture multimodal representations, which usually occur in complex videos. We modify the original hierarchical model of \\cite{Lillo2014} to include multiple linear classifiers per action. We introduce two new concepts: \\textbf{semantic actions}, which refer to the action \\emph{names} that compose an activity; and \\textbf{atomic sequences}, which refer to the sequences of poses that make up an action. Several atomic sequences can be associated with a single semantic action, creating disjoint sets of atomic sequences, each set associated with a single semantic action. The main idea is that the action annotations in the datasets are associated with semantic actions, whereas for each semantic action we learn several atomic sequence classifiers. With this formulation, we can handle the multimodal nature of semantic actions, covering changes in motion, poses, or even in the meaning of the action according to context (e.g., the semantic action ``open'' can be associated with opening a can, opening a door, etc.).\n\nInspired by \\cite{Raptis2012}, we first use \\emph{Cattell's scree test} to find a suitable number of atomic sequences for every semantic action. Using the semantic action labels, we compute a descriptor for every interval using normalized histograms of pose labels. Then, for a particular semantic action $u$, we compute the eigenvalues $\\lambda_u$ of the affinity matrix of the semantic action descriptors, using the $\\chi^2$ distance.
For each semantic action $u \\in \\{1,\\dots,U\\}$ we find the number of atomic sequences $G_u$ as $G_u = \\argmin_i \\lambda_{i+1}^2 / (\\sum_{j=1}^i \\lambda_j) + c\\cdot i$, with $c=2\\cdot 10^{-3}$. Finally, we cluster the descriptors corresponding to each semantic action using $k$-means, with a different number of clusters $G_u$ for each semantic action $u$. This approach generates non-overlapping atomic sequences, each associated with a single semantic action.\n\nTo transfer the new labels to the model, we define $u(v)$ as the function that, given the atomic sequence label $v$, returns the corresponding semantic action label $u$. The energy for the activity level is then\n\\begin{equation}\nE_{\\text{activity}} = \\sum_{u=1}^U\\sum_{t=1}^T \\alpha_{y,u}\\delta(u(v_t)=u)\n\\end{equation} \n\nFor the action and pose labels the model remains unchanged. Using the new atomic sequences allows a richer representation of actions, while at the activity level several atomic sequences map to a single semantic action. This behavior resembles a max-pooling operation: at inference we choose the atomic sequences that best describe the actions performed in the video, keeping the semantics of the original labels.\n\n\\paragraph{Towards a better representation of poses: adding a garbage collector}\n\nThe model in \\cite{Lillo2014} uses all poses to feed the action classifiers. Our intuition is that only a subset of poses in each video is really discriminative or informative for the actions performed, while many poses correspond to noisy or non-informative ones: low-scored frames in terms of poses (i.e., a low value of $w_{z_t}^\\top x_t$ in Eq. (\\ref{eq:energy2014})) make the same contribution as high-scored poses at higher levels of the model, while at the same time degrading the pose classifiers, since low-scored poses are likely to correspond to non-informative frames.
We propose to include a new pose label to explicitly handle those low-scored frames, keeping them apart from the pose classifiers $w$ but still adding a fixed score to the energy function, to avoid normalization issues and to help in the specialization of the pose classifiers. We call this change in the model a \\emph{garbage collector}, since it handles all low-scored frames and groups them with a fixed energy score $\\theta$. In practice, we use a special pose entry $K+1$ to identify the non-informative poses. The energy for the pose level becomes\n\\begin{equation} \\label{Eq_poseEnergy}\nE_{\\text{poses}} = \\sum_{t=1}^T \\left[ {w_{z_t}}^\\top x_{t}\\delta(z_{t} \\le K) + \\theta \n\\delta(z_{t}=K+1)\\right] \n\\end{equation}\nwhere $\\delta(\\ell) = 1$ if $\\ell$ is true and $\\delta(\\ell) = 0$ if\n$\\ell$ is false. The action level also changes its energy:\n\\begin{equation}\n\\begin{split}\n \\label{Eq_actionEnergy}\nE_{\\text{actions}} = \\sum_{t=1}^T \\sum_{a=1}^A \\sum_{k=1}^{K+1} \\beta_{a,k} \\delta(z_t = k) \\delta(v_t = a).\n\\end{split}\n\\end{equation}\n\n\\begin{comment}\nIntegrating all contribution detailed in previous sections, the model is written as:\nEnergy function:\n\\begin{equation}\nE = E_{\\text{activity}} + E_{\\text{action}} + E_{\\text{pose}}\n + E_{\\text{action transition}} + E_{\\text{pose transition}}.\n\\end{equation}\n\n\\begin{equation}\nE_{\\text{poses}} = \\sum_{t=1}^T \\left[ {w_{z_t}}^\\top x_{t}\\delta(z_{t} \\le K) + \\theta \n\\delta(z_{t}=K+1)\\right] \n\\end{equation}\n\n\\begin{equation}\nE_{\\text{actions}} = \\sum_{t=1}^T \\sum_{a=1}^A \\sum_{k=1}^{K+1} \\beta_{a,k} \\delta(z_t = k) \\delta(v_t = a).\n\\end{equation}\n\n\\begin{equation}\nh_g^{r}(U) = \\sum_{t} \\delta_{u_{t,r}}^g\n\\end{equation}\n\nSo the energy in the activity level is\n\\begin{equation}\nE_{\\text{activity}} = \\sum_{r} {\\alpha^r_{y}}^\\top h^{r}(U) = \\sum_{r,g,t} \\alpha^r_{y,g} 
\\delta_{u_{t,r}}^g\n\\end{equation}\n\n\\begin{equation}\nE_{\\text{action transition}} = \\sum_{r,a,a'} \\gamma^r_{a',a} \\sum_{t} \\delta_{v_{t-1,r}}^{a'}\\delta_{v_{t,r}}^a \n\\end{equation}\n\n\\begin{equation}\nE_{\\text{pose transition}} =\\sum_{r,k,k'} \\eta^r_{k',k}\\sum_{t}\\delta_{z_{t-1,r}}^{k'}\\delta_{z_{t,r}}^{k}\n\\end{equation}\n\\end{comment}\n\n\n\n\\subsection{Inference}\n\\label{subsec:inference}\nThe input to the inference algorithm is a new video sequence with features\n$\\vec{x}$. The task is to infer the best complex action label $\\hat y$, and to \nproduce the best labeling of actionlets $\\hat{\\vec{v}}$ and motion poselets $\\hat{\\vec{z}}$.\n{\\small\n\\begin{equation}\n \\hat y, \\hat{\\vec{v}}, \\hat{\\vec{z}} = \\argmax_{y, \\vec{v},\\vec{z}} E(\\vec{x}, \\vec{v}, \\vec{z}, y)\n\\end{equation}}\nWe can solve this by exhaustively enumerating all values of complex actions $y$, and solving for $\\hat{\\vec{v}}$ and $\\hat{\\vec{z}}$ using:\n\\small\n\\begin{equation}\n\\begin{split}\n \\hat{\\vec{v}}, \\hat{\\vec{z}} | y ~ =~ & \\argmax_{\\vec{v},\\vec{z}} ~ \\sum_{r=1}^R \\sum_{t=1}^T \\left( \\alpha^r_{y,u(v_{(t,r)})} \n + \\beta^r_{v_{(t,r)},z_{(t,r)}}\\right. \\\\\n\t\t\t\t&\\quad\\quad \\left.+ {w^r_{z_{(t,r)}}}^\\top x_{t,r} \\delta(z_{(t,r)} \\le K) + \\theta^r \\delta_{z_{(t,r)}}^{K+1} \\right. \\\\ \n\t\t\t\t& \\quad\\quad \\left.+ \\gamma^r_{v_{({t-1},r)},v_{(t,r)}} + \\eta^r_{z_{({t-1},r)},z_{(t,r)}} \\vphantom{{w^r_{z_{(t,r)}}}^\\top x_{t,r}} \\right). \\\\\n\\end{split}\n\\label{eq:classify_inference}\n\\end{equation}\n\\normalsize\n\n\n\n\\subsection{Learning} \\label{subsec:learning}\n\\textbf{Initial actionlet labels.} An important step in the training process is\nthe initialization of latent variables. 
This is challenging due to the lack of spatial supervision: at each time instance, the available atomic actions can be associated with any of the $R$ body regions. We adopt the machinery of self-paced learning \\cite{Kumar:EtAl:2010} to provide a suitable solution and formulate the association between actions and body regions as an optimization problem. We constrain this optimization using two structural restrictions: i) atomic action intervals must not overlap in the same region, and ii) a labeled atomic action must be present in at least one region. We formulate the labeling process as a binary Integer Linear Programming (ILP) problem, where we define $b_{r,q}^m=1$ when action interval $q \\in \\{1,\\dots,Q_m\\}$ is active in region $r$ of video $m$, and $b_{r,q}^m=0$ otherwise. Each action interval $q$ is associated with a single atomic action. We assume that we have initial motion poselet labels $z_{t,r}$ in each frame and region. We describe the action interval $q$ and region $r$ using the histogram $h_{r,q}^m$ of motion poselet labels. 
We can find the correspondence between action intervals and regions using a formulation that resembles the operation of $k$-means, but using the structure of the problem to constrain the labels:\n\\small\n\\begin{equation}\n\\begin{split}\n\\text{P1}) \\quad \\min_{b,\\mu} &\\sum_{m=1}^M \\sum_{r=1}^R \\sum_{q=1}^{Q_m} b_{r,q}^m \nd( h_{r,q}^m - \\mu_{a_q}^r) -\\frac{1}{\\lambda} b_{r,q}^m\\\\ \n \\text{s.t.} \n\\quad \n& \\sum_{r=1}^R b_{r,q}^m \\ge 1\\text{, }\\forall q\\text{, }\\forall m \\\\ \n& b_{r,q_1}^m + b_{r,q_2}^m \\le 1 \\text{ if } q_1\\cap q_2 \\neq \\emptyset \n\\text{, \n}\\forall r\\text{, }\\forall m\\\\ \n& b_{r,q}^m \\in \\{0,1\\}\\text{, }\\forall q\\text{, }\\forall{r}\\text{, }\\forall m\n\\end{split}\n\\end{equation}\nwith\n\\begin{equation}\nd( h_{r,q}^m - \\mu_{a_q}^r) = \\sum_{k=1}^K (h_{r,q}^m[k] - \n\\mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\\mu_{a_q}^r[k]).\n\\end{equation}\n\\normalsize\nHere, $\\mu_{a_q}^r$ are the means of the descriptors with action label $a_q$ within region $r$. We solve $\\text{P1}$ iteratively using a block coordinate descent scheme, alternating between solving for $\\mu_{a}^r$ with $b_{r,q}^m$ fixed, which has a trivial solution, and fixing $\\mu_{a}^r$ to solve for $b_{r,q}^m$, relaxing $\\text{P1}$ to a linear program. Note that the second term of the objective function in $\\text{P1}$ resembles the objective function of \\emph{self-paced} learning \\cite{Kumar:EtAl:2010}, managing the balance between assigning a single region to every action and assigning all possible regions to the respective action interval. \n\n\\textbf{Learning model parameters.}\nWe formulate learning the model parameters as a Latent Structural SVM\nproblem \\cite{Yu:Joachims:2010}, with latent variables for motion\nposelets $\\vec{z}$ and actionlets $\\vec{v}$. 
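The alternating scheme for P1 can be sketched as follows. This is a simplified single-video version of our own design, assuming NumPy: the interval-overlap constraint is dropped and the $b$-step is a simple threshold against $1/\lambda$ plus a best-region fallback, so it illustrates the idea rather than reproducing the ILP solver described above.

```python
import numpy as np

def chi2(h, mu, eps=1e-12):
    """Chi-squared distance d(h, mu) used in P1."""
    return float(np.sum((h - mu) ** 2 / (h + mu + eps)))

def assign_regions(H, actions, n_actions, lam=100.0, n_iter=10):
    """Simplified block coordinate descent for P1 on one video.
    H[q, r] is the motion-poselet histogram of interval q in region r,
    and actions[q] its atomic-action label. Returns b[q, r] in {0, 1}.
    The non-overlap constraint between intervals is omitted for brevity."""
    actions = np.asarray(actions)
    Q, R, K = H.shape
    b = np.ones((Q, R), dtype=bool)               # start fully assigned
    for _ in range(n_iter):
        # mu-step (b fixed): per-(action, region) means -- trivial solution
        mu = np.zeros((n_actions, R, K))
        for a in range(n_actions):
            for r in range(R):
                sel = (actions == a) & b[:, r]
                if sel.any():
                    mu[a, r] = H[sel, r].mean(axis=0)
        # b-step (mu fixed): keep region r iff d <= 1/lam (self-paced term),
        # always keeping the best region so that sum_r b >= 1 holds
        for q in range(Q):
            d = np.array([chi2(H[q, r], mu[actions[q], r]) for r in range(R)])
            b[q] = d <= 1.0 / lam
            b[q, int(d.argmin())] = True
    return b
```

On toy data where each action is distinctive in exactly one region, the assignments converge to that region after the first sweep.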
We find values for the parameters in equations (\\ref{eq:motionposelets}-\\ref{eq:actionletstransition}), slack variables $\\xi_i$, motion poselet labels $\\vec{z}_i$, and actionlet labels $\\vec{v}_i$, by solving:\n{\\small\n\\begin{equation}\n\\label{eq:big_problem}\n\\min_{W,\\xi_i,~i=\\{1,\\dots,M\\}} \\frac{1}{2}||W||_2^2 + \\frac{C}{M} \\sum_{i=1}^M\\xi_i ,\n\\end{equation}}\nwhere\n{\\small \\begin{equation}\nW^\\top=[\\alpha^\\top, \\beta^\\top, w^\\top, \\gamma^\\top, \\eta^\\top, \\theta^\\top],\n\\end{equation}}\nand\n{\\small\n\\begin{equation} \\label{eq:slags}\n\\begin{split}\n\\xi_i = \\max_{\\vec{z},\\vec{v},y} \\{ & E(\\vec{x}_i, \\vec{z}, \\vec{v}, y) + \\Delta( (y_i,\\vec{v}_i), (y, \\vec{v})) \\\\\n & - \\max_{\\vec{z}_i}{ E(\\vec{x}_i, \\vec{z}_i, \\vec{v}_i, y_i)} \\}, \\; \\;\\; i\\in\\{1,\\dots,M\\}.\n\\end{split}\n\\end{equation}}\nIn Equation (\\ref{eq:slags}), each slack variable\n$\\xi_i$ quantifies the error of the inferred labeling for\nvideo $i$. We solve Equation (\\ref{eq:big_problem}) iteratively using the CCCP\nalgorithm \\cite{Yuille:Rangarajan:03}, by solving for latent labels $\\vec{z}_i$ and $\\vec{v}_i$ given model parameters $W$, temporal atomic action annotations (when available), and labels of complex actions occurring in training videos (see Section \\ref{subsec:inference}). Then, we solve for $W$ via the 1-slack formulation using the cutting-plane algorithm \\cite{Joachims2009}. \n\nThe role of the loss function $\\Delta((y_i,\\vec{v}_i),(y,\\vec{v}))$ is to penalize inference errors during training. If the true actionlet labels are known in advance, the loss function is the same as in \\cite{Lillo2014}, using the actionlets instead of atomic actions:\n\\small \\begin{equation}\n\\Delta((y_i,\\vec{v}_i),(y,\\vec{v})) = \\lambda_y\\delta(y_i \\ne y) + \\lambda_v\\frac{1}{T}\\sum_{t=1}^T \n\\delta({v_t}_{i} \\neq v_t),\n\\end{equation}\n\\normalsize\n\\noindent where ${v_t}_{i}$ is the true actionlet label. 
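For the fully supervised case, the loss above reduces to a few lines of code; a minimal sketch (the function name and argument layout are ours):

```python
def labeling_loss(y_true, v_true, y_pred, v_pred, lam_y=1.0, lam_v=1.0):
    """Delta((y_i, v_i), (y, v)) for the fully supervised case:
    lam_y * [y_i != y] + lam_v * (1/T) * sum_t [v_t,i != v_t]."""
    T = len(v_true)
    frame_errors = sum(a != b for a, b in zip(v_true, v_pred))
    return lam_y * (y_true != y_pred) + lam_v * frame_errors / T
```

The complex-action term contributes a flat penalty, while the actionlet term is the per-frame Hamming error, so a labeling that is wrong in one of four frames but right on the activity costs 0.25 with unit weights.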
If the spatial ordering of actionlets is unknown (hence the latent actionlet formulation), but the temporal composition is known, we can compute a list $A_t$ of possible actionlets for frame $t$, and include that information in the loss function as\n\\small \\begin{equation}\n\\Delta((y_i,\\vec{v}_i),(y,\\vec{v})) = \\lambda_y\\delta(y_i \\ne y) + \\lambda_v\\frac{1}{T}\\sum_{t=1}^T \n\\delta(v_t \\notin A_t)\n\\end{equation}\n\\normalsize\n\n\\subsection{Body regions}\nWe divide the body pose into $R$ fixed spatial regions and independently compute a pose feature vector for each region. Figure \\ref{fig:skeleton_limbs_regions} illustrates the case of $R = 4$ that we use in all our experiments. Our body pose feature vector consists of the concatenation of two descriptors. At frame $t$ and region $r$, a descriptor $x^{g}_{t,r}$ encodes geometric information about the spatial configuration of body joints, and a descriptor $x^{m}_{t,r}$ encodes local motion information around each body joint position.\nWe use the geometric descriptor from \\cite{Lillo2014}:\nwe construct six segments that connect pairs of joints at each region\\footnote{Arm segments: wrist-elbow, elbow-shoulder, shoulder-neck, wrist-shoulder, wrist-head, and neck-torso; Leg segments: ankle-knee, knee-hip, hip-hip center, ankle-hip, ankle-torso and hip center-torso}\nand compute 15 angles between those segments.\nAlso, three angles are calculated between a plane formed by three segments\\footnote{Arm plane: shoulder-elbow-wrist; Leg plane: hip-knee-ankle} and the remaining three non-coplanar segments, yielding an 18-D geometric descriptor (GEO) for every region.\nOur motion descriptor is based on tracking motion trajectories of key points \\cite{WangCVPR2011}, which in our case coincide with body joint positions.\nWe extract a HOF descriptor using $32\\times 32$ RGB patches centered at the joint location for a temporal window of 15 frames. 
At each joint location, this produces a 108-D descriptor, which we concatenate across all joints in each region to obtain our motion descriptor. Finally, we apply PCA to reduce the dimensionality of the concatenated motion descriptor to 20. The final descriptor is the concatenation of the geometric and motion descriptors, $x_{t,r} = [x_{t,r}^g ; x_{t,r}^m]$.\n\n\n\\subsection{Hierarchical compositional model}\n\nWe propose a hierarchical compositional model that spans three semantic levels. Figure \\ref{fig:overview} shows a schematic of our model. At the top level, our model assumes that each input video has a single complex action label $y$. Each complex action is composed of a temporal and spatial arrangement of atomic actions with labels $\\vec{u}=[u_1,\\dots,u_T]$, $u_i \\in \\{1,\\dots,S\\}$.\nIn turn, each atomic action consists of several non-shared \\emph{actionlets}, which correspond to representative sets of pose configurations for action identification, modeling the multimodality of each atomic action.\nWe capture actionlet assignments in $\\vec{v}=[v_1,\\dots,v_T]$, $v_i \\in \\{1,\\dots,A\\}$.\nEach actionlet index $v_i$ corresponds to a unique and known atomic action label $u_i$, so they are related by a mapping $\\vec{u} = \\vec{u}(\\vec{v})$. At the intermediate level, our model assumes that each actionlet is composed of a temporal arrangement of a subset of the $K$ body poses, encoded in $\\vec{z} = [z_1,\\dots,z_T]$, $z_i \\in \\{1,\\dots,K\\}$,\nwhere $K$ is a hyperparameter of the model.\nThese subsets capture pose geometry and local motion, so we call them \\emph{motion poselets}.\nFinally, at the bottom level, our model identifies motion poselets using a bank of linear classifiers that are applied to the incoming frame descriptors.\n\n\nWe build each layer of our hierarchical model on top of BoW representations of labels. 
To this end, at the bottom level of our hierarchy, and for \neach body region, we learn a dictionary of motion poselets. Similarly, at the mid-level of our hierarchy, we learn a dictionary of actionlets, using the BoW representation of motion poselets as inputs. At each of these levels, \nspatio-temporal activations of the respective dictionary words are used \nto obtain the corresponding histogram encoding the BoW representation. \nThe next two sections provide\ndetails on the process to represent and learn the dictionaries of motion \nposelets and actionlets. Here we discuss our\nintegrated hierarchical model.\n\nWe formulate our hierarchical model using an energy function.\nGiven a video of $T$ frames corresponding to complex action $y$ encoded by descriptors $\\vec{x}$, with the label vectors $\\vec{z}$ for motion poselets,\n$\\vec{v}$ for actionlets and $\\vec{u}$ for atomic actions, we\ndefine an energy function for a video as:\n\\small\n\\begin{align}\\label{Eq_energy}\nE(\\vec{x},&\\vec{v},\\vec{z},y) = E_{\\text{motion poselets}}(\\vec{z},\\vec{x}) \\nonumber \\\\&+ E_{\\text{motion poselets BoW}}(\\vec{v},\\vec{z}) + \nE_{\\text{atomic actions BoW}}(\\vec{u}(\\vec{v}),y) \\nonumber \\\\ \n& + E_{\\text{motion poselets transition}}(\\vec{z}) + E_{\\text{actionlets \ntransition}}(\\vec{v}).\n\\end{align}\n\\normalsize\nBesides the BoW representations and motion poselet classifiers\ndescribed above, Equation (\\ref{Eq_energy}) includes\ntwo energy potentials that encode information related to\ntemporal\ntransitions between pairs of motion poselets ($E_{\\text{motion poselets \ntransition}}$) and \nactionlets ($E_{\\text{actionlets transition}}$). \nThe energy potentials are given by:\n{\\small\n\\begin{align}\n\\label{eq:motionposelets}\n&E_{\\text{mot. poselet}}(\\vec{z},\\vec{x}) = \\sum_{r,t} \\left[ \\sum_{k} {w^r_k}^\\top \nx_{t,r}\\delta_{z_{(t,r)}}^{k} + \\theta^r \\delta_{z_{(t,r)}}^{K+1}\\right] \\\\\n&E_{\\text{mot. 
poselet BoW}}(\\vec{v},\\vec{z}) = \\sum_{r,a,k} {\\beta^r_{a,k}}\\delta_{v_{(t,r)}}^{a}\\delta_{z_{(t,r)}}^{k}\\\\\n\\label{eq:actionlets_BoW} \n&E_{\\text{atomic act. BoW}}(\\vec{u}(\\vec{v}),y) =\\sum_{r,s} {\\alpha^r_{y,s}}\\delta_{u(v_{(t,r)})}^{s} \\\\\n&E_{\\text{mot. pos. trans.}}(\\vec{z}) = \n\\sum_{r,k_{+1},k'_{+1}} \\eta^r_{k,k'} \n\\sum_{t} \\delta_{z_{(t-1,r)}}^{k}\\delta_{z_{(t,r)}}^{k'} \\\\\n\\label{eq:actionletstransition}\n&E_{\\text{acttionlet trans.}}(\\vec{v}) =\\sum_{r,a,a'} \\gamma^r_{a,a'} \n\\sum_{t} \n\\delta_{v_{(t-1,r)}}^{a}\\delta_{v_{(t,r)}}^{a'} \n\\end{align}\n}\n\nOur goal is to \nmaximize $E(\\vec{x},\\vec{v},\\vec{z},y)$, and obtain the \nspatial and temporal arrangement \nof motion poselets $\\vec{z}$ and actionlets $\\vec{v}$, as well as, the underlying \ncomplex action $y$.\n\nIn the previous equations, we use $\\delta_a^b$ to indicate the Kronecker delta function $\\delta(a = b)$, and use indexes $k \\in \\{1,\\dots,K\\}$ for motion poselets, $a \\in \\{1,\\dots,A\\}$ for actionlets, and $s \\in \\{1,\\dots,S\\}$ for atomic actions.\nIn the energy term for motion poselets,\n$w^r_k$ are a set of $K$ linear pose classifiers applied to frame \ndescriptors $x_{t,r}$, according to the label of the latent variable $z_{t,r}$. \nNote that there is a special label $K+1$; the role of this label will be \nexplained in Section \\ref{subsec:garbage_collector}.\nIn the energy potential associated to \nthe BoW representation for motion poselets, $\\vec{\\beta}^r$ denotes a set of $A$ \nmid-level classifiers, whose inputs are histograms of motion \nposelet labels at those frame annotated as actionlet $a$. At the highest level, \n$\\alpha^r_{y}$ is a linear classifier associated with complex action $y$, whose \ninput is the histogram of atomic action labels,\nwhich are related to actionlet assignments by the mapping function $\\vec{u}(\\vec{v})$. Note that all classifiers \nand labels here correspond to a single region $r$. 
We add the contributions of all regions to compute the global energy of the video. The transition terms act as linear classifiers $\\eta^r$ and $\\gamma^r$ over histograms of temporal transitions of motion poselets and of actionlets, respectively. As we have a special label $K+1$ for motion poselets, the summation index $k_{+1}$ ranges over $\\{1,\\dots,K+1\\}$.\n\n\\subsection{Learning motion poselets}\nIn our model, motion poselets are learned by treating them as latent variables during training. Before training, we fix the number of motion poselets per region to $K$.\nIn every region $r$, we learn an independent set of pose classifiers $\\{w^r_k\\}_{k=1}^K$, initializing the motion poselet labels using the $k$-means algorithm. We learn pose classifiers, actionlets, and complex action classifiers jointly, allowing the model to discover discriminative motion poselets useful to detect and recognize complex actions. \nAs shown in previous work, jointly learning linear classifiers to identify body parts and atomic actions improves recognition rates \\cite{Lillo2014,Wang2008}, so here we follow a similar hierarchical approach, and integrate the learning of motion poselets with the learning of actionlets.\n\n\\subsection{Learning actionlets}\n\\label{sec:learningactionlets}\nA single linear classifier does not offer enough flexibility to identify atomic actions that exhibit high visual variability. As an example, the atomic action ``open'' can be associated with ``opening a can'' or ``opening a book'', displaying high variability in action execution. Consequently, we augment our hierarchical model by including multiple classifiers to identify different modes of action execution. \n\nInspired by \\cite{Raptis2012}, we use \\emph{Cattell's Scree test} to find a suitable number of actionlets to model each atomic action. 
Specifically, using the atomic action labels, we compute a descriptor for every video interval using normalized histograms of the initial pose labels obtained with $k$-means. Then, for a particular atomic action $s$, we compute the eigenvalues $\\lambda(s)$ of the affinity matrix of the atomic action descriptors, which is built using the $\\chi^2$ distance. For each atomic action $s \\in \\{1,\\dots,S\\}$, we find the number of actionlets $G_s$ as $G_s = \\argmin_i {\\lambda(s)}_{i+1}^2 / (\\sum_{j=1}^i {\\lambda(s)}_j) + c\\cdot i$, with $c=2\\cdot 10^{-3}$. Finally, we cluster the descriptors from each atomic action $s$ by running $k$-means with $k = G_s$. This scheme generates a set of non-overlapping actionlets to model each single atomic action. In our experiments, we notice that the number of actionlets used to model each atomic action typically varies from 1 to 8.\n\nTo transfer the new labels to the model, we define $u(v)$ as a function that maps the actionlet label $v$ to the corresponding atomic action label $u$. A dictionary of actionlets provides a richer representation for actions, where several actionlets will map to a single atomic action. This behavior resembles a max-pooling operation, where at inference time we will choose the set of actionlets that best describe the performed actions in the video, keeping the semantics of the original atomic action labels.\n\n\\subsection{A garbage collector for motion poselets}\n\\label{subsec:garbage_collector}\nWhile poses are highly informative for action recognition, an input video might contain irrelevant or idle zones, where the underlying poses are noisy or non-discriminative to identify the actions being performed in the video. As a result, low-scoring motion poselets could degrade the pose classifiers during training, decreasing their performance. To deal with this problem, we include in our model a \\emph{garbage collector} mechanism for motion poselets. 
This mechanism operates by assigning all low-scoring motion poselets to the $(K+1)$-th pose dictionary entry. Poses are collected when their classifier scores fall below the learned score $\\theta^r$ in Equation (\\ref{eq:motionposelets}). Our experiments show that this mechanism leads to learning more discriminative motion poselet classifiers.\n\n\n\\input{learning}\n\\input{inference}\n\n\n\n\n\n\n\n\n\\subsection{Video Representation} \\label{subsec:videorepresentation}\n\nOur model is based on skeleton information encoded in joint annotations. We use the same geometric descriptor as in \\cite{Lillo2014}, using angles between segments connecting two joints, and angles between these segments and a plane formed by three joints. In addition to geometry, other authors \\cite{Zanfir2013,Tao2015,Wang2014} have noticed that including local motion information is beneficial to the categorization of videos. Moreover, in \\cite{zhu2013fusing} the authors create a fused descriptor using spatio-temporal descriptors and joint descriptors, showing that the combined descriptor performs better than either one separately. With this in mind, we augment the original geometric descriptor with motion information: when only skeleton joint data is available, we use the joint displacement vectors (velocity) as a motion descriptor; if RGB video is available, we use a HOF descriptor extracted from the trajectory of each joint in a small temporal window.\n\nFor the geometric descriptor, we use six segments per body region (see Fig. XXXX). The descriptor is composed of the angles between the segments (15 angles), and the angles between a plane formed by three segments and the non-coplanar segments (3 angles). 
For the motion descriptor, we use either the 3D velocity of every joint in each region as a concatenated vector (18 dimensions), or the concatenated HOF descriptor of the joint trajectories, transformed to a low-dimensional space using PCA (20 dimensions).\n", "meta": {"timestamp": "2016-06-17T02:01:41", "yymm": "1606", "arxiv_id": "1606.04992", "language": "en", "url": "https://arxiv.org/abs/1606.04992"}}
{"text": "\\section{Introduction}\nThe recent discovery of Weyl semimetals (WSMs)~\\cite{Lv2015TaAs,Xu2015TaAs,Yang2015TaAs} in realistic materials has stimulated tremendous research interest in topological semimetals, such as WSMs, Dirac semimetals, and nodal line semimetals~\\cite{volovik2003universe,Wan2011,Balents2011,Burkov2011,Hosur2013,Vafek2014}, as a new frontier of condensed matter physics after the discovery of topological insulators~\\cite{qi2011RMP, Hasan2010}.\nThe WSMs are of particular interest not only because of their exotic Fermi-arc-type surface states but also because of their appealing bulk chiral magneto-transport properties, such as the chiral anomaly effect~\\cite{Xiong2015,Huang2015anomaly,Arnold2015}, nonlocal transport~\\cite{Parameswaran2014,Baum2015}, large magnetoresistance, and high mobility~\\cite{Shekhar2015}.\nCurrently discovered WSM materials can be classified into two groups. One group breaks crystal inversion symmetry but preserves time-reversal symmetry (e.g., TaAs-family transition-metal pnictides~\\cite{Weng2015,Huang2015} and WTe$_2$- and MoTe$_2$-family transition-metal dichalcogenides~\\cite{Soluyanov2015WTe2,Sun2015MoTe2,Wang2016MoTe2,Koepernik2016,Deng2016,Jiang2016}). The other group breaks time-reversal symmetry in ferromagnets with possibly tilted moments (e.g., magnetic Heusler GdPtBi~\\cite{Hirschberger2016,Shekhar2016} and YbMnBi$_2$~\\cite{Borisenko2015}). 
An antiferromagnetic (AFM) WSM compound has yet to be found, although Y$_2$Ir$_2$O$_7$ with a noncoplanar AFM structure was theoretically predicted to be a WSM candidate~\\cite{Wan2011}.\n\nIn a WSM, the conduction and valence bands cross each other linearly through nodes called Weyl points. Between a pair of Weyl points with opposite chiralities (sink or source of the Berry curvature)~\\cite{volovik2003universe}, the emerging Berry flux can lead to the anomalous Hall effect (AHE)~\\cite{Burkov2014}, as observed in GdPtBi~\\cite{Hirschberger2016,Shekhar2016}, and an intrinsic spin Hall effect (SHE), as predicted in TaAs-type materials~\\cite{Sun2016}, for systems without and with time-reversal symmetry, respectively. Herein, we propose a simple recipe to search for WSM candidates among materials that host a strong AHE or SHE.\n\nRecently, Mn$_3$X (where $\\rm X=Sn$, Ge, and Ir), which exhibit noncollinear AFM phases at room temperature, have been found to show a large AHE~\\cite{Kubler2014,Chen2014,Nakatsuji2015,Nayak2016} and SHE~\\cite{Zhang2016}, prompting us to investigate their band structures. In this work, we report the existence of Weyl fermions in the Mn$_3$Ge and Mn$_3$Sn compounds and the resultant Fermi arcs on the surface by \\textit{ab initio} calculations, awaiting experimental verification. Dozens of Weyl points exist near the Fermi energy in their band structures, and these can be well understood with the assistance of lattice symmetry.\n\n\n\\section{Methods}\n\n\nThe electronic ground states of Mn$_3$Ge and Mn$_3$Sn were calculated by using density-functional theory (DFT) within the Perdew-Burke-Ernzerhof-type generalized-gradient approximation (GGA)~\\cite{Perdew1996} using the Vienna {\\it ab initio} Simulation Package (\\textsc{vasp})~\\cite{Kresse1996}. The $3d^6 4s^1$, $4s^24p^2$, and $5s^2 5p^2$ electrons were considered as valence electrons for Mn, Ge, and Sn atoms, respectively. 
Primitive cells with the experimental crystal parameters $a=b=5.352$ and $c=4.312$~\\AA~for Mn$_3$Ge and $a=b=5.67$ and $c=4.53$~\\AA~for Mn$_3$Sn were adopted. Spin-orbit coupling (SOC) was included in all calculations.\n\nTo identify the Weyl points with the monopole feature, we calculated the Berry curvature distribution in momentum space.\nThe Berry curvature was calculated from a tight-binding Hamiltonian constructed from localized Wannier functions\\cite{Mostofi2008} projected from the DFT Bloch wave functions. We chose atomic-orbital-like Wannier functions, which include Mn-$spd$ and Ge-$sp$/Sn-$p$ orbitals, so that the tight-binding Hamiltonian is consistent with the symmetry of the \\textit{ab initio} calculations.\nFrom such a Hamiltonian, the Berry curvature can be calculated using the Kubo-formula approach\\cite{Xiao2010},\n\\begin{equation}\n\\begin{aligned}\\label{equation1}\n\\Omega^{\\gamma}_n(\\vec{k})= 2i\\hbar^2 \\sum_{m \\ne n} \\dfrac{\\langle u_{n}(\\vec{k})|\\hat{v}_{\\alpha}|u_{m}(\\vec{k})\\rangle \\langle u_{m}(\\vec{k})|\\hat{v}_{\\beta}|u_{n}(\\vec{k})\\rangle}{(E_{n}(\\vec{k})-E_{m}(\\vec{k}))^2},\n\\end{aligned}\n\\end{equation}\nwhere $\\Omega^{\\gamma}_n(\\vec{k})$ is the Berry curvature in momentum space for a given band $n$,\n$\\hat{v}_{\\alpha (\\beta, \\gamma)}=\\frac{1}{\\hbar}\\frac{\\partial\\hat{H}}{\\partial k_{\\alpha (\\beta, \\gamma)}}$ is the velocity operator with $\\alpha,\\beta,\\gamma=x,y,z$, and $|u_{n}(\\vec{k})\\rangle$ and $E_{n}(\\vec{k})$ are the eigenvector and eigenvalue of the Hamiltonian $\\hat{H}(\\vec{k})$, respectively. The summation of $\\Omega^{\\gamma}_n(\\vec{k})$ over all valence bands gives the Berry curvature vector $\\mathbf{\\Omega} ~(\\Omega^x,\\Omega^y,\\Omega^z)$.\n\nIn addition, the surface states that demonstrate the Fermi arcs were calculated on a semi-infinite surface, where the momentum-resolved local density of states (LDOS) on the surface layer was evaluated based on the Green's function method. 
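As an illustration of the Kubo-formula evaluation, the Berry curvature of a toy two-band Weyl node $H(\vec{k}) = \vec{k}\cdot\vec{\sigma}$ can be computed numerically. This is a minimal example of ours with $\hbar=1$ and finite-difference velocity operators, not the Wannier Hamiltonian used in the paper; we use the equivalent real-valued form $-2\,\mathrm{Im}\sum_m(\cdots)$ of the $2i$ expression.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def weyl_hamiltonian(k):
    """Toy two-band Weyl node H(k) = k . sigma (hbar = 1)."""
    return k[0] * SX + k[1] * SY + k[2] * SZ

def berry_curvature_z(hfun, k, band, dk=1e-5):
    """Omega^z_n(k) = -2 hbar^2 Im sum_{m != n} <n|vx|m><m|vy|n> / (E_n - E_m)^2,
    with velocity operators from central finite differences of H(k)."""
    k = np.asarray(k, dtype=float)
    energies, states = np.linalg.eigh(hfun(k))  # ascending eigenvalues
    def velocity(axis):
        step = np.zeros(3)
        step[axis] = dk
        return (hfun(k + step) - hfun(k - step)) / (2 * dk)
    vx, vy = velocity(0), velocity(1)
    omega, n = 0.0, band
    for m in range(len(energies)):
        if m == n:
            continue
        elem = (states[:, n].conj() @ vx @ states[:, m]) * \
               (states[:, m].conj() @ vy @ states[:, n])
        omega -= 2.0 * elem.imag / (energies[n] - energies[m]) ** 2
    return omega
```

For this model the lower band carries $\Omega^z = \pm 1/(2k_z^2)$ on the $k_z$ axis, so the sign flips across the node, which is exactly the monopole (source/sink) behavior the text describes.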
We note that the current surface band structure corresponds to the bottom surface of a half-infinite system.\n\n\\section{Results and Discussion}\n\\subsection{Symmetry analysis of the antiferromagnetic structure}\n\nMn$_3$Ge and Mn$_3$Sn share the same layered hexagonal lattice (space group $P6_3/mmc$, No. 194).\nInside a layer, Mn atoms form a kagome-type lattice with mixed triangles and hexagons, and Ge/Sn atoms are located at the centers of these hexagons.\nEach Mn atom carries a magnetic moment of 3.2 $\\mu_B$ in Mn$_3$Sn and 2.7 $\\mu_B$ in Mn$_3$Ge.\nAs revealed in a previous study~\\cite{Zhang2013}, the magnetic ground state is a noncollinear AFM state, where Mn moments align inside the $ab$ plane and form 120-degree angles with neighboring moment vectors, as shown in Fig.~\\ref{stru}(b). Along the $c$ axis, stacking two layers leads to the primitive unit cell.\nGiven the magnetic lattice, these two layers can be transformed into each other by inversion symmetry or by a mirror reflection ($M_y$) combined with a half-lattice ($c/2$) translation, i.e., the nonsymmorphic symmetry $\\{M_y|\\tau = c/2\\}$. In addition, two other mirror reflections ($M_x$ and $M_z$) combined with time reversal ($T$), $M_x T$ and $M_z T$, exist.\n\nIn momentum space, we can utilize three important symmetries, $M_x T$, $M_z T$, and $M_y$, to understand the electronic structure and locate the Weyl points. Suppose a Weyl point with chirality $\\chi$ (+ or $-$) exists at a generic position $\\mathbf{k}~(k_x,k_y,k_z)$.\nMirror reflection reverses $\\chi$, while time reversal does not; both act on $\\mathbf{k}$. 
The transformations are as follows:\n\\begin{equation}\n\\begin{aligned}\nM_x T : & ~ (k_x,k_y,k_z) \\rightarrow (k_x, -k_y, -k_z); &~\\chi &\\rightarrow -\\chi \\\\\nM_z T : &~ (k_x,k_y,k_z) \\rightarrow (-k_x, -k_y, k_z); &~ \\chi &\\rightarrow -\\chi \\\\\nM_y : &~ (k_x,k_y,k_z) \\rightarrow (k_x, -k_y, k_z); &~ \\chi &\\rightarrow -\\chi \\\\\n\\end{aligned}\n\\label{symmetry}\n\\end{equation}\nEach of the above three operations doubles the number of Weyl points. Thus, eight nonequivalent Weyl points can be generated at $(\\pm k_x,+k_y,\\pm k_z)$ with chirality $\\chi$ and\n$(\\pm k_x,-k_y,\\pm k_z)$ with chirality $-\\chi$ (see Fig.~\\ref{stru}c). We note that the $k_x=0/\\pi$ and $k_z=0/\\pi$ planes can host Weyl points. However, the $k_y=0/\\pi$ planes cannot host Weyl points, because $M_y$ simply reverses the chirality and annihilates the Weyl point with its mirror image if it exists. Similarly, the $M_y$ mirror reflection requires that a nonzero anomalous Hall conductivity can only exist in the $xz$ plane (i.e., $\\sigma_{xz}$), as already shown in Ref.~\\onlinecite{Nayak2016}.\n\nIn addition, the symmetry of the 120-degree AFM state is slightly broken in these materials, owing to the existence of a tiny net moment ($\\sim$0.003~$\\mu_B$ per unit cell)~\\cite{Nakatsuji2015,Nayak2016,Zhang2013}. Such weak symmetry breaking seems to induce negligible effects in transport measurements. However, it gives rise to a perturbation of the band structure, for example, slightly shifting the mirror image of a Weyl point away from its expected position, as we will see in the surface states of Mn$_3$Ge.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.45\\textwidth]{figure1.png}\n \\end{center}\n \\caption{ Crystal and magnetic structures of Mn$_3X$ (where $\\rm X = Sn$ or Ge) and related symmetry.\n(a) Crystal structure of Mn$_3$X. 
Three mirror planes are shown in purple, corresponding to the\n \\{$M_y|\\tauup=c/2$\\}, $M_xT$, and $M_zT$ symmetries.\n(b) Top view along the $c$ axis of the Mn sublattice. Chiral AFM order with an angle of 120 degrees between neighboring magnetic moments is formed in each Mn layer.\nThe mirror planes that correspond to $M_xT$ and \\{$M_y|\\tauup=c/2$\\} are marked by dashed lines.\n(c) Symmetry in momentum space, $M_y$, $M_xT$, and $M_zT$.\nIf a Weyl point appears at $(k_x,k_y,k_z)$, eight Weyl points in total can be generated at $(\\pm k_x,\\pm k_y,\\pm k_z)$ by the above three symmetry operations. For convenience, we choose the $k_y=\\pi$ plane for $M_y$ here.\n }\n \\label{stru}\n\\end{figure}\n\n\\begin{table}\n\\caption{\nPositions and energies of the Weyl points in the first Brillouin zone for Mn$_3$Sn.\nThe positions ($k_x$, $k_y$, $k_z$) are in units of $\\pi$.\nEnergies are relative to the Fermi energy $E_F$.\nEach type of Weyl point has four copies whose coordinates can be generated\nfrom the symmetry as $(\\pm k_x, \\pm k_y, k_z=0)$.\n}\n\\label{table:Mn3Sn}\n\\centering\n\\begin{tabular}{cccccc}\n\\toprule\n\\hline\nWeyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\\\\n\\hline\nW$_1$ & $-0.325$ & 0.405 & 0.000 & $-$ & 86 \\\\\nW$_2$ & $-0.230$ & 0.356 & 0.003 & + & 158 \\\\\nW$_3$ & $-0.107$ & 0.133 & 0.000 & $-$ & 493 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{table}\n\\caption{\nPositions and energies of the Weyl points in the first Brillouin zone for Mn$_3$Ge.\nThe positions ($k_x$, $k_y$, $k_z$) are in units of $\\pi$.\nEnergies are relative to the Fermi energy $E_F$.\nEach of W$_{1,2,7}$ has four copies whose coordinates can be generated\nfrom the symmetry as $(\\pm k_x, \\pm k_y, k_z=0)$.\nW$_4$ has four copies at $(k_x \\approx 0, \\pm k_y, \\pm k_z)$ and\nW$_9$ has two copies at $(k_x \\approx 0, \\pm k_y, k_z =0)$.\nEach of the other Weyl points has four copies whose coordinates can be generated\nfrom the symmetry as $(\\pm k_x, \\pm k_y, 
\\pm k_z)$.\n} \\label{table:Mn3Ge}\n\\centering\n\\begin{tabular}{@{}cccccc@{}}\n\\toprule\n\\hline\nWeyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\\\\n\\hline\nW$_1$ & $-0.333$ & 0.388 & $-0.000$ & $-$ & 57 \\\\\nW$_2$ & 0.255 & 0.378 & $-0.000$ & + & 111 \\\\\nW$_3$ & $-0.101$ & 0.405 & 0.097 & $-$ & 48 \\\\\nW$_4$ & $-0.004$ & 0.419 & 0.131 & + & 8 \\\\\nW$_5$ & $-0.048$ & 0.306 & 0.164 & + & 77 \\\\\nW$_6$ & 0.002 & 0.314 & 0.171 & $-$ & 59 \\\\\nW$_7$ & $-0.081$ & 0.109 & 0.000 & + & 479 \\\\\nW$_8$ & 0.069 & $-0.128$ & 0.117 & + & 330 \\\\\nW$_9$ & 0.004 & $-0.149$ & $-0.000$ & + & 470 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Weyl points in the bulk band structure}\n\nThe bulk band structures are shown along high-symmetry lines in Fig.~\\ref{bandstrucure} for Mn$_3$Ge and Mn$_3$Sn. It is not surprising that the two materials exhibit similar band dispersions.\nAt first glance, one can find two seemingly degenerate band-touching points at the $Z$ and $K$ points, which are below the Fermi energy. Because of $M_z T$ and the nonsymmorphic symmetry \\{$M_y|\\tauup=c/2$\\}, the bands are supposed to be quadruply degenerate at the Brillouin zone boundary $Z$, forming a Dirac point protected by the nonsymmorphic space group~\\cite{Young2012,Schoop2015,Tang2016}. Given the slight mirror symmetry breaking by the residual net magnetic moment, this Dirac point is gapped at $Z$ (as shown in the enlarged panel) and splits into four Weyl points, which are very close to each other in $k$ space. A tiny gap also appears at the $K$ point. Two additional Weyl points appear nearby. Since the Weyl point separations near both the $Z$ and $K$ points are very small, these Weyl points are expected to have little observable consequence in experiments such as those probing Fermi arcs. 
Therefore, we will not focus on them in the following investigation.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{figure2.png}\n\\end{center}\n\\caption{\nBulk band structures for (a) Mn$_3$Sn and (b) Mn$_3$Ge along high-symmetry lines with SOC.\nThe bands near the $Z$ and $K$ points (indicated by red circles) are expanded to show details in (a).\nThe Fermi energy is set to zero.}\n\\label{bandstrucure}\n\\end{figure}\n\nMn$_3$Sn and Mn$_3$Ge are actually metallic, as seen from the band structures. However, we retain the terminology of Weyl semimetal for simplicity and consistency. The valence and conduction bands cross each other many times near the Fermi energy, generating multiple pairs of Weyl points. We first investigate the Sn compound. Supposing that the total valence electron number is $N_v$, we search for the crossing points between the $N_v ^{\\rm th}$ and $(N_v +1) ^{\\rm th}$ bands.\n\nAs shown in Fig.~\\ref{bc_Mn3Sn}a, there are six pairs of Weyl points in the first Brillouin zone; these can be classified into three groups according to their positions, denoted as W$_1$, W$_2$, and W$_3$. These Weyl points lie in the $M_z$ plane (with W$_2$ points being only slightly off this plane owing to the residual-moment-induced symmetry breaking) and slightly above the Fermi energy. Therefore, there are four copies for each of them according to the symmetry analysis in Eq.~\\ref{symmetry}.\n Their representative coordinates and energies are listed in Table~\\ref{table:Mn3Sn} and also indicated in Fig.~\\ref{bc_Mn3Sn}a. A Weyl point (e.g., W$_1$ in Figs.~\\ref{bc_Mn3Sn}b and~\\ref{bc_Mn3Sn}c) acts as a source or sink of the Berry curvature $\\mathbf{\\Omega}$, clearly showing the monopole feature with a definite chirality.\n\nIn contrast to Mn$_3$Sn, Mn$_3$Ge displays many more Weyl points. As shown in Fig.~\\ref{bc_Mn3Ge}a and listed in Table~\\ref{table:Mn3Ge}, there are nine groups of Weyl points. 
Here W$_{1,2,7,9}$ lie in the $M_z$ plane with W$_9$ on the $k_y$ axis, W$_4$ appears in the $M_x$ plane, and the others are in generic positions. Therefore, there are four copies of W$_{1,2,7,4}$, two copies of W$_9$, and eight copies of other Weyl points.\nAlthough there are many other Weyl points at higher energies owing to different band crossings, we mainly focus on the Weyl points listed above, which are close to the Fermi energy. The monopole-like distribution of the Berry curvature near these Weyl points is verified; see W$_1$ in Fig.~\\ref{bc_Mn3Ge} as an example.\nWithout including SOC, we observed a nodal-ring-like band crossing in the band structures of both Mn$_3$Sn and Mn$_3$Ge. SOC gaps the nodal rings but leaves isolated band-touching points, i.e., Weyl points. Since Mn$_3$Sn exhibits stronger SOC than Mn$_3$Ge, more pairs of Weyl points with opposite chirality may be pushed together and annihilate each other in Mn$_3$Sn. This might be why Mn$_3$Sn exhibits fewer Weyl points than Mn$_3$Ge.\n\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{figure3.png}\n \\end{center}\n \\caption{Surface states of Mn$_3$Sn.\n(a) Distribution of Weyl points in momentum space.\nBlack and white points represent Weyl points with $-$ and $+$ chirality, respectively. \n(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.\n(d) Fermi surface at $E_F= 86$ meV crossing the W$_1$ Weyl points.\nThe color represents the surface LDOS.\nTwo pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.\n(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.\n(f) Surface band structure along the white horizontal line indicated in (d). 
Here p1 and p2 are the chiral states corresponding to the Fermi arcs.\n}\n \\label{bc_Mn3Sn}\n\\end{figure}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{figure4.png}\n \\end{center}\n \\caption{ Surface states of Mn$_3$Ge.\n(a) Distribution of Weyl points in momentum space.\nBlack and white points represent Weyl points with $-$ and $+$ chirality, respectively. Larger points indicate two Weyl points ($\\pm k_z$) projected into this plane.\n(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.\n(d) Fermi surface at $E_F= 55$ meV crossing the W$_1$ Weyl points.\nThe color represents the surface LDOS.\nTwo pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.\n(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.\n(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.\n}\n \\label{bc_Mn3Ge}\n\\end{figure}\n\n\n\\subsection{Fermi arcs on the surface}\n\nThe existence of Fermi arcs on the surface is one of the most significant consequences of Weyl points inside the three-dimensional (3D) bulk. We first investigate the surface states of Mn$_3$Sn, which has a simpler bulk band structure with fewer Weyl points. When projecting W$_{2,3}$ Weyl points to the (001) surface, they overlap with other bulk bands that overwhelm the surface states. Fortunately, W$_1$ Weyl points are visible on the Fermi surface. When the Fermi energy crosses them, W$_1$ Weyl points appear as the touching points of neighboring hole and electron pockets. Therefore, they are typical type-II Weyl points~\\cite{Soluyanov2015WTe2}. Indeed, their energy dispersions demonstrate strongly tilted Weyl cones.\n\nThe Fermi surface of the surface band structure is shown in Fig.~\\ref{bc_Mn3Sn}d for the Sn compound. 
In each corner of the surface Brillouin zone, a pair of W$_1$ Weyl points with opposite chirality exists. Connecting such a pair of Weyl points, a long Fermi arc appears in both the Fermi surface (Fig.~\\ref{bc_Mn3Sn}d) and the band structure (Fig.~\\ref{bc_Mn3Sn}e). Although the projection of the bulk bands exhibits the pseudo-symmetry of a hexagonal lattice, the surface Fermi arcs do not. It is clear that the Fermi arcs originating from two neighboring Weyl pairs (see Fig.~\\ref{bc_Mn3Sn}d) do not exhibit $M_x$ reflection symmetry, because the chirality distribution of the Weyl points violates $M_x$ symmetry. For a generic $k_x$--$k_z$ plane between each pair of W$_1$ Weyl points, the net Berry flux points in the $-k_y$ direction. As a consequence, the Fermi velocities of both Fermi arcs point in the $+k_x$ direction on the bottom surface (see Fig.~\\ref{bc_Mn3Sn}f). These two right movers are consistent with the nonzero net Berry flux, i.e., a Chern number of $2$.\n\nFor Mn$_3$Ge, we also focus on the W$_1$-type Weyl points at the corners of the hexagonal Brillouin zone. In contrast to Mn$_3$Sn, Mn$_3$Ge exhibits a more complicated Fermi surface. Fermi arcs connect each pair of W$_1$-type Weyl points with opposite chirality, but they are divided into three pieces as shown in Fig.~\\ref{bc_Mn3Ge}d. In the band structures (see Figs.~\\ref{bc_Mn3Ge}e and f), these three pieces are indeed connected together as a single surface state. Crossing a line between two pairs of W$_1$ points, one can find two right movers in the band structure, which are indicated as p1 and p2 in Fig.~\\ref{bc_Mn3Ge}f. 
The existence of two chiral surface bands is consistent with a nontrivial Chern number between these two pairs of Weyl points.\n\n\\section{Summary}\n\nIn summary, we have discovered the Weyl semimetal state in the chiral AFM compounds Mn$_3$Sn and Mn$_3$Ge by {\\it ab~initio} band structure calculations.\nMultiple Weyl points were observed in the bulk band structures, most of which are type II.\nThe positions and chirality of Weyl points are in accordance with the symmetry of the magnetic lattice.\nFor both compounds, Fermi arcs were found on the surface, each of which connects a pair of Weyl points with opposite chirality, calling for further experimental investigations such as angle-resolved photoemission spectroscopy.\nThe discovery of Weyl points explains the large anomalous Hall conductivity observed recently in the title compounds.\nOur work further reveals a guiding principle to search for Weyl semimetals among materials\nthat exhibit a strong anomalous Hall effect.\n\n\\begin{acknowledgments}\nWe thank Claudia Felser, J{\\\"u}rgen K{\\\"u}bler and Ajaya K. Nayak for helpful discussions.\nWe acknowledge the Max Planck Computing and Data Facility (MPCDF) and Shanghai Supercomputer Center for computational resources and the German Research Foundation (DFG) SFB-1143 for financial support.\n\\end{acknowledgments}\n\n\n", "meta": {"timestamp": "2016-08-18T02:05:38", "yymm": "1608", "arxiv_id": "1608.03404", "language": "en", "url": "https://arxiv.org/abs/1608.03404"}} +{"text": "\\section{Introduction}\n\nConformal invariance was first recognised to be of physical interest when it was realized that the Maxwell equations are covariant under the $15$-dimensional conformal group \\cite{Cu,Bat}, a fact that motivated a more detailed analysis of conformal invariance in other physical contexts such as General Relativity, Quantum Mechanics or high energy physics \\cite{Ful}. 
These applications further suggested the study of conformal invariance in connection with the physically relevant groups, among which the Poincar\\'e and Galilei groups were the first to be considered. In this context, conformal extensions of the Galilei group have been considered in Galilei-invariant field theories, in the study of possible dynamics of interacting particles as well as in the nonrelativistic AdS/CFT correspondence\n\\cite{Bar54,Hag,Hav,Zak,Fig}. Special cases such as the (centrally extended) Schr\\\"odinger algebra $\\widehat{\\mathcal{S}}(n)$, corresponding to the maximal invariance group of the \nfree Schr\\\"odinger equation, have been studied in detail by various authors, motivated by different applications such as the kinematical invariance of hierarchies of partial differential equations, Appell systems, quantum groups or representation theory \\cite{Ni72,Ni73,Do97,Fra}. The class of Schr\\\"odinger algebras can be generalized in a natural manner to the so-called conformal Galilei algebras $\\mathfrak{g}_{\\ell}(d)$ for (half-integer) values $\\ell\\geq \\frac{1}{2}$, \nalso corresponding to semidirect products of the semisimple Lie algebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ with a Heisenberg algebra but with a higher dimensional characteristic representation.\\footnote{By characteristic representation we mean the representation of $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ that describes the action on the Heisenberg algebra.} Such algebras, which can be interpreted as a nonrelativistic analogue of the conformal algebra, have been used in a variety of contexts, ranging from classical (nonrelativistic) mechanics, electrodynamics and fluid dynamics to higher-order Lagrangian mechanics \\cite{Ai12,Tac,Du11,St13}.\nThe algebraic structure of the conformal Galilei algebra $\\mathfrak{g}_{\\ell}(d)$ for values of $\\ell\\geq \\frac{3}{2}$ and its representations have been analyzed in some detail, and algorithmic procedures to 
compute their Casimir operators have been proposed (see e.g. \\cite{Als17,Als19} and references therein). In the recent note \\cite{raub}, a synthetic formula for the Casimir operators of the $\\mathfrak{g}_{\\ell}(d)$ algebra has been given. Although not cited explicitly, the \nprocedure used there corresponds to the so-called ``virtual-copy'' method, a technique, well known for some years, that enables one to compute the Casimir operators of a Lie algebra using those of its maximal semisimple subalgebra (\\cite{Que,C23,C45,SL3} and references therein). \n\n\\medskip\n\\noindent \nIn this work, we first propose a further generalization of the conformal Galilei algebras $\\mathfrak{g}_{\\ell}(d)$, replacing the $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ subalgebra of the latter by the semisimple Lie algebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$. As the defining representation $\\rho_d$ of $\\mathfrak{so}(p,q)$ is real for all values $p+q=d$ \\cite{Tits}, the structure of a semidirect product with a Heisenberg Lie algebra remains unaltered. The Lie algebras $\\mathfrak{Gal}_{\\ell}(p,q)$ describe a class of semidirect products of semisimple and Heisenberg Lie algebras, among which $\\mathfrak{g}_{\\ell}(d)$ corresponds to the case with the largest maximal compact subalgebra. \nUsing the method developed in \\cite{C45}, we construct a virtual copy of $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$ in the enveloping algebra of $\\mathfrak{Gal}_{\\ell}(p,q)$ for all half-integer values of $\\ell$ and any $d=p+q\\geq 3$. The Casimir operators of these Lie algebras are determined by combining the analytical and the matrix trace methods, showing how to compute them explicitly in terms of the determinant of a polynomial matrix. 
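Before turning to the general construction, it may help to recall the defining property of a Casimir operator on the simplest possible example. The following sketch is our own toy illustration (it is not the construction used for $\mathfrak{Gal}_{\ell}(p,q)$, and the chosen basis normalization of $\mathfrak{sl}(2,\mathbb{R})$ is an assumption made here): it verifies symbolically that the quadratic Casimir element of $\mathfrak{sl}(2,\mathbb{R})$ commutes with all generators in the defining representation and hence, by Schur's lemma, acts as a scalar.

```python
# Toy illustration (not the paper's construction): the quadratic Casimir
# element of sl(2,R) evaluated in the defining 2x2 representation.
# With the basis convention [h,e] = 2e, [h,f] = -2f, [e,f] = h (an
# assumption made here), the element
#   C = (1/2) h^2 + e f + f e
# lies in the centre of the enveloping algebra, so it must act as a
# scalar multiple of the identity in any irreducible representation.
import sympy as sp

e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])

C = sp.Rational(1, 2) * h**2 + e*f + f*e

# C commutes with every generator ...
assert all(C*X - X*C == sp.zeros(2, 2) for X in (e, f, h))
# ... and acts as the scalar 3/2 on the defining representation.
print(C)  # Matrix([[3/2, 0], [0, 3/2]])
```

The same centrality property is what the virtual-copy method exploits in the non-semisimple setting, where the central elements are built from the copy of the Levi subalgebra inside the enveloping algebra.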
\n\n\n\\medskip\n\\noindent We further determine the exact number of Casimir operators for the unextended Lie algebras $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ obtained by factorizing \n$\\mathfrak{Gal}_{\\ell}(p,q)$ by its centre. Using the reformulation of the Beltrametti-Blasi formula in terms of the Maurer-Cartan equations, we show that albeit the number $\\mathcal{N}$ of invariants increases considerably for fixed $\\ell$ and varying $d$, a generic polynomial formula at most quadratic in $\\ell$ and $d$ that gives the exact value of $\\mathcal{N}$ can be established. Depending on the fact whether the relation $d\\leq 2\\ell+2$ is satisfied or not, it is shown that $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ admits a complete set of invariants formed by operators that do not depend on the generators of the Levi subalgebra. An algorithmic procedure to compute these invariants by means of a reduction to a linear system is proposed. \n \n\n\\section{Maurer-Cartan equations of Lie algebras and Casimir operators }\n\nGiven a Lie algebra $ \\frak{g}=\\left\\{X_{1},..,X_{n}\\; |\\;\n\\left[X_{i},X_{j}\\right]=C_{ij}^{k}X_{k}\\right\\}$ in terms of\ngenerators and commutation relations, we are principally interested\non (polynomial) operators\n$C_{p}=\\alpha^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$ in the\ngenerators of $\\frak{s}$ such that the constraint $\n\\left[X_{i},C_{p}\\right]=0$,\\; ($i=1,..,n$) is satisfied. Such an\noperator can be shown to lie in the centre of the enveloping\nalgebra of $\\frak{g}$ and is called a (generalized) Casimir\noperator. For semisimple Lie algebras, the determination of\nCasimir operators can be done using structural properties\n\\cite{Ra,Gel}. However, for non-semisimple Lie algebras the relevant\ninvariant functions are often rational or even transcendental\nfunctions \\cite{Bo1,Bo2}. This suggests to develop a method in order to\ncover arbitrary Lie algebras. One convenient approach is the\nanalytical realization. 
The generators of the Lie algebra\n$\\frak{g}$ are realized in the space $C^{\\infty }\\left(\n\\frak{g}^{\\ast }\\right) $ by means of the differential operators:\n\\begin{equation}\n\\widehat{X}_{i}=C_{ij}^{k}x_{k}\\frac{\\partial }{\\partial x_{j}},\n\\label{Rep1}\n\\end{equation}\nwhere $\\left\\{ x_{1},..,x_{n}\\right\\}$ are the coordinates in a dual basis of\n$\\left\\{X_{1},..,X_{n}\\right\\} $. The invariants of $\\frak{g}$ hence correspond to solutions of the following\nsystem of partial differential equations:\n\\begin{equation}\n\\widehat{X}_{i}F=0,\\quad 1\\leq i\\leq n. \\label{sys}\n\\end{equation}\nWhenever we have a polynomial solution of (\\ref{sys}), the\nsymmetrization map defined by\n\\begin{equation}\n{\\rm Sym}(x_{i_{1}}^{a_{1}}..x_{i_{p}}^{a_{p}})=\\frac{1}{p!}\\sum_{\\sigma\\in\nS_{p}}x_{\\sigma(i_{1})}^{a_{1}}..x_{\\sigma(i_{p})}^{a_{p}}\\label{syma}\n\\end{equation}\nallows us to rewrite the Casimir operators in their usual form \nas central elements in the enveloping algebra of $\\frak{g}$,\nafter replacing the variables $x_{i}$ by the corresponding\ngenerator $X_{i}$. A maximal set of functionally\nindependent invariants is usually called a fundamental basis. The\nnumber $\\mathcal{N}(\\frak{g})$ of functionally independent\nsolutions of (\\ref{sys}) is obtained from the classical criteria\nfor differential equations, and is given by the formula \n\\begin{equation}\n\\mathcal{N}(\\frak{g}):=\\dim \\,\\frak{g}- {\\rm\nsup}_{x_{1},..,x_{n}}{\\rm rank}\\left( C_{ij}^{k}x_{k}\\right),\n\\label{BB}\n\\end{equation}\nwhere $A(\\frak{g}):=\\left(C_{ij}^{k}x_{k}\\right)$ is the matrix\nassociated to the commutator table of $\\frak{g}$ over the given\nbasis \\cite{Be}.\\newline \nThe reformulation of condition (\\ref{BB}) in terms of differential forms (see e.g. \\cite{C43})\nallows one to compute $\\mathcal{N}(\\frak{g})$ quite efficiently and even to \nobtain the Casimir\noperators under special circumstances \\cite{Peci,C72}. 
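Formula (\ref{BB}) is straightforward to evaluate symbolically. The following sketch is a minimal sanity check of ours, using the three-dimensional Heisenberg algebra as an assumed example (it is not one of the algebras studied in this work): it computes $\mathcal{N}(\frak{g})$ from the generic rank of the matrix $A(\frak{g})$.

```python
# Minimal check of formula (BB), assuming the 3-dimensional Heisenberg
# algebra [X1, X2] = X3 (all other brackets zero).  Its only Casimir
# operator is the central generator X3 itself, so N(g) should be 1.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
dim = 3
# Structure constants C[i][j][k] for [X_i, X_j] = C_ij^k X_k (0-indexed).
C = [[[0]*dim for _ in range(dim)] for _ in range(dim)]
C[0][1][2], C[1][0][2] = 1, -1

x = [x1, x2, x3]
# Matrix A(g)_{ij} = C_ij^k x_k associated to the commutator table.
A = sp.Matrix(dim, dim, lambda i, j: sum(C[i][j][k]*x[k] for k in range(dim)))

# sympy treats the symbols as generic, so A.rank() is the supremum in (BB).
N = dim - A.rank()
print(N)  # 1
```

The same computation, applied to the commutator table of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, is what the differential-form reformulation below makes tractable for arbitrary $\ell$ and $d$.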
In terms of the\nMaurer-Cartan equations, the Lie algebra $\\frak{g}$\nis described as follows: If $\\left\\{ C_{ij}\n^{k}\\right\\} $ denotes the structure tensor over the basis $\\left\\{ X_{1},..,X_{n}\\right\\} $,\nthe identification of the dual space $\\frak{g}^{\\ast}$ with the\nleft-invariant 1-forms on the simply connected Lie group whose Lie algebra is isomorphic to $\\frak{g}$ allows one to define an exterior\ndifferential $d$ on $\\frak{g}^{\\ast}$ by\n\\begin{equation}\nd\\omega\\left( X_{i},X_{j}\\right) =-C_{ij}^{k}\\omega\\left(\nX_{k}\\right) ,\\;\\omega\\in\\frak{g}^{\\ast}.\\label{MCG}\n\\end{equation}\nUsing the coboundary operator $d$, we rewrite $\\frak{g}$ as a\nclosed system of $2$-forms%\n\\begin{equation}\nd\\omega_{k}=-C_{ij}^{k}\\omega_{i}\\wedge\\omega_{j},\\;1\\leq\ni<j\\leq n.\\label{MCA}\n\\end{equation}\nWe now distinguish two cases, according to whether $d\\leq 2\\ell+2$ or $d>2\\ell+2$. \n\n\n\\begin{enumerate}\n\\item Let $d=p+q\\leq 2\\ell +2$. In this case the dimension of the characteristic representation $\\Gamma$ is clearly larger than that of the Levi subalgebra, so that a 2-form of maximal rank can be constructed using only the differential forms associated to the generators $P_{n,k}$. 
Consider the 2-form in (\\ref{MCA}) given by $\\Theta=\\Theta_1+\\Theta_2$, where \n\\begin{eqnarray}\n\\Theta_1=d\\sigma_{0,1}+d\\sigma_{2\\ell,d}+d\\sigma_{2\\ell-1,d-1},\\; \n\\Theta_2=\\sum_{s=1}^{d-4} d\\sigma_{s,s+1}.\\label{difo1}\n\\end{eqnarray}\nUsing the decomposition formula $\\bigwedge^{a}\\Theta=\\sum_{r=0}^{a} \\left(\\bigwedge^{r}\\Theta_1\\right) \\wedge \\left(\\bigwedge^{a-r}\\Theta_2\\right)$ we obtain that \n\\begin{eqnarray}\n\\fl \\bigwedge^{\\frac{1}{2}\\left(6-d+d^2\\right)}\\Theta= &\\bigwedge^{d+1}d\\sigma_{0,1}\\wedge\\bigwedge^{d-1}d\\sigma_{2\\ell,d}\\wedge\\bigwedge^{d-3}d\\sigma_{2\\ell-1,d-1}\\wedge\n\\bigwedge^{d-4}d\\sigma_{1,2}\\wedge\\nonumber\\\\\n& \\wedge\\bigwedge^{d-5}d\\sigma_{2,3}\\wedge\\bigwedge^{d-6}d\\sigma_{3,4}\\wedge\\cdots \\bigwedge^{2}d\\sigma_{d-5,d-4}\\wedge d\\sigma_{d-4,d-3}+\\cdots \\neq 0.\\label{pro2}\n\\end{eqnarray}\nAs $\\frac{1}{2}\\left(6-d+d^2\\right)=\\dim\\left(\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)\\right)$, the 2-form $\\Theta$ is necessarily of maximal rank, as all the generators of the Levi subalgebra appear in some term of the product (\\ref{pro2}) and no products of higher rank are possible due to the Abelian nilradical. We therefore conclude that $j(\\mathfrak{g})=\\frac{1}{2}\\left(6-d+d^2\\right)$ and by formula (\\ref{BB1}) we have \n\\begin{equation}\n\\mathcal{N}(\\mathfrak{g})= \\frac{1}{2}\\left(4\\ell d+3d-d^2-6\\right).\\label{inva1}\n\\end{equation}\n\n\\item Now let $d \\geq 2\\ell +3$. The main difference with respect to the previous case is that a generic form $\\omega\\in\\mathcal{L}(\\mathfrak{g})$ of maximal rank must necessarily contain linear combinations of the 2-forms $d\\omega_{i,j}$ corresponding to the semisimple part of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$. 
Let us consider first the 2-form \n\\begin{equation}\n\\Xi_1= \\Theta_1+\\Theta_2,\n\\end{equation}\nwhere $\\Theta_1$ is the same as in (\\ref{difo1}) and $\\Theta_2$ is defined as\n\\begin{equation}\n\\Theta_2=\\sum_{s=0}^{2\\ell-3} d\\sigma_{1+s,2+s}.\n\\end{equation}\nIn analogy with the previous case, for the index $\\mu_1=(2\\ell+1)d+(\\ell+2)(1-2\\ell)$ the first term of the following product does not vanish: \n\\begin{equation}\n\\fl \\bigwedge^{\\mu_1}\\Xi_1=\\bigwedge^{d+1}d\\sigma_{0,1}\\bigwedge^{d-1}d\\sigma_{2\\ell,d}\\bigwedge^{d-3}d\\sigma_{2\\ell-1,d-1} \n\\bigwedge^{d-4}d\\sigma_{1,2}\\cdots \\bigwedge^{d-1-2\\ell}d\\sigma_{2\\ell-2,2\\ell-1}+\\cdots \\neq 0.\\label{Pot1}\n\\end{equation}\nThis form, although not maximal in $\\mathcal{L}(\\mathfrak{g})$, is indeed of maximal rank when restricted to the subspace $\\mathcal{L}(\\mathfrak{r})$ generated by the 2-forms $d\\sigma_{n,k}$ with $0\\leq n\\leq 2\\ell$, $1\\leq k\\leq d$. \nThis means that the wedge product of $\\bigwedge^{\\mu_1}\\Xi_1$ with any other $d\\sigma_{n,k}$ is identically zero. Hence, in order to construct a 2-form of maximal rank in $\\mathcal{L}(\\mathfrak{g})$, we have to consider a 2-form $\\Xi_2$ that is a linear combination of the differential forms associated to the generators of the Levi subalgebra of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$. As follows at once from (\\ref{Pot1}), the forms $\\theta_1,\\theta_2,\\theta_3$ associated to $\\mathfrak{sl}(2,\\mathbb{R})$-generators have already appeared, thus it suffices to restrict our analysis to linear combinations of the forms $d\\omega_{i,j}$ corresponding to the pseudo-orthogonal Lie algebra $\\mathfrak{so}(p,q)$. 
Specifically, we make the choice \n\\begin{equation}\n\\Xi_2= \\sum_{s=0}^{\\nu}d\\omega_{3+2s,4+2s},\\quad \\nu=\\frac{1}{4}\\left(2d-4\\ell-9+(-1)^{1+d}\\right).\n\\end{equation} \nConsider the integer $\\mu_2=\\frac{1}{4}\\left(11+(d-4\\ell)(1+d)-4\\ell^2-2\\left[\\frac{d}{2}\\right]\\right)$ and take the 2-form $\\Xi=\\Xi_1+\\Xi_2$. A long but routine computation shows that the following identity is satisfied:\n\\begin{eqnarray}\n\\fl \\bigwedge^{\\mu_1+\\mu_2}\\Xi =& \\left(\\bigwedge^{\\mu_1}\\Xi_1\\right)\\wedge \\left(\\bigwedge^{\\mu_2}\\Xi_2\\right) \\nonumber\\\\\n& = \\left(\\bigwedge^{\\mu_1}\\Xi_1\\right)\\wedge\\bigwedge^{d-6}d\\omega_{3,4}\\bigwedge^{d-8}d\\omega_{5,6}\\cdots \\bigwedge^{d-6-2\\nu}d\\omega_{3+2\\nu,4+2\\nu}+\\cdots \\neq 0.\\label{pro1}\n\\end{eqnarray}\nWe observe that this form involves $\\mu_1+2\\mu_2$ forms $\\omega_{i,j}$ from $\\mathfrak{so}(p,q)$, hence there remain $\\frac{d(d-1)}{2}-\\mu_1-2\\mu_2$ elements of the pseudo-orthogonal algebra that do not appear in the first term in (\\ref{pro1}). From this product and (\\ref{MCA}) it can be seen that these uncovered elements are of the type $\\left\\{\\omega_{i_1,i_1+1},\\omega_{i_2,i_2+1},\\cdots \\omega_{i_r,i_r+1}\\right\\}$ with the subindices satisfying $i_{\\alpha+1}-i_{\\alpha}\\geq 2$ for $1\\leq \\alpha\\leq r$, from which we deduce that no other 2-form $d\\omega_{i_\\alpha,i_\\alpha+1}$, when multiplied with $\\bigwedge^{\\mu_1+\\mu_2}\\Xi$, gives a nonzero result. 
\nWe conclude that $\\Xi$ has maximal rank equal to $j_0(\\mathfrak{g})=\\mu_1+\\mu_2$, thus applying (\\ref{BB1}) we find that \n\\begin{equation}\n\\fl \\mathcal{N}(\\mathfrak{g})= 3 + \\frac{d(d-1)}{2}+ (2 \\ell + 1) d-2(\\mu_1+\\mu_2)= 2\\ell^2+2\\ell-\\frac{5}{2}+\\left[\\frac{d}{2}\\right],\n\\end{equation}\nas asserted.\n\\end{enumerate}\n\n\\medskip\n\\noindent In Table \\ref{Tabelle1} we give the numerical values for the number of Casimir operators of the Lie algebras $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ with $d=p+q\\leq 12$, and where the linear increment with respect to $\\ell$ can be easily recognized. \n \n\\smallskip\n\\begin{table}[h!] \n\\caption{\\label{Tabelle1} Number of Casimir operators for $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$.}\n\\begin{indented}\\item[]\n\\begin{tabular}{c||cccccccccc}\n$\\;d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\\\\\hline \n{$\\ell=\\frac{1}{2}$} & $2$ & $3$ & $3$ & $4$ & $4$ & $5$ & $5$\n& $6$ & $6$ & $7$ \\\\ \n{$\\ell=\\frac{3}{2}$} & $6$ & $7$ & $7$ & $8$ & $8$ & $9$ & $9$\n& $10$ & $10$ & $11$ \\\\ \n{$\\ell=\\frac{5}{2}$} & $12$ & $15$ & $17$ & $18$ & $18$ & $19$\n& $19$ & $20$ & $20$ & $21$ \\\\ \n{$\\ell=\\frac{7}{2}$} & $18$ & $23$ & $27$ & $30$ & $32$ & $33$\n& $33$ & $34$ & $34$ & $35$ \\\\ \n{$\\ell=\\frac{9}{2}$} & $24$ & $31$ & $37$ & $42$ & $46$ & $49$\n& $51$ & $52$ & $52$ & $53$ \\\\ \n{$\\ell=\\frac{11}{2}$} & $30$ & $39$ & $47$ & $54$ & $60$ & $65\n$ & $69$ & $72$ & $74$ & $75$%\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\medskip\n\\noindent As follows from a general property concerning virtual copies \\cite{C45}, Lie algebras of the type $\\mathfrak{g}=\\mathfrak{s}\\overrightarrow{\\oplus} \\mathfrak{r}$ with an Abelian radical $\\mathfrak{r}$ do not admit virtual copies of $\\mathfrak{s}$ in $\\mathcal{U}\\left(\\mathfrak{g}\\right)$. 
Thus for Lie algebras of this type the Casimir invariants must be computed either directly from system (\\ref{sys}) or by some other procedure. Among the class $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$, an exception is given by the unextended (pseudo-)Schr\\\"odinger algebra $\\overline{\\mathfrak{Gal}}_{\\frac{1}{2}}(p,q)\\simeq \\mathcal{S}(p,q)$, where the invariants can be deduced from those of the central extension $\\widehat{\\mathcal{S}}(p,q)$ by the widely used method of contractions (see e.g. \\cite{IW,We}). For the remaining values $\\ell\\geq \\frac{3}{2}$ the contraction procedure is impractical, given the high number of invariants. However, an interesting property concerning the invariants of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ emerges when we try to find the Casimir operators $F$ that only depend on the variables $p_{n,k}$ associated to the generators $P_{n,k}$ of the radical, i.e., such that the condition \n\\begin{equation}\n\\quad \\frac{\\partial F}{\\partial x}=0,\\quad \\forall x\\in\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)\\label{kond}\n\\end{equation}\nis satisfied. As will be shown next, the number of such solutions tends to stabilize for high values of $d=p+q$, showing that almost any invariant will depend on all of the variables in $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$, implying that finding a complete set of invariants is a computationally formidable task, as there is currently no general method to derive these invariants in closed form. \n\n\\begin{proposition}\nLet $\\ell\\geq \\frac{3}{2}$. 
For sufficiently large $d$, the number of Casimir invariants of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ depending only on the variables $p_{n,k}$ of the Abelian radical is constant and given by \n\\begin{equation}\n\\mathcal{N}_1(S)=2\\ell^2+3\\ell-2.\\label{sr2}\n\\end{equation}\n\\end{proposition}\n\n\\noindent The proof follows by analyzing the rank of the subsystem of (\\ref{sys}) corresponding to the differential operators $\\widehat{X}$ associated to the generators of the Levi subalgebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$ and such that condition (\\ref{kond}) is fulfilled. Specifically, this leads to the system $S$ of PDEs\n\\begin{eqnarray}\n\\widehat{D}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} (2\\ell-n)p_{n,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\; \n\\widehat{H}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} n p_{n-1,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\nonumber\\\\\n\\widehat{C}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} (2\\ell-n)p_{n+1,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\label{kond2}\\\\\n\\widehat{E}_{j,k}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} \\left( g_{ij} p_{n,k} -g_{ik} p_{n,j}\\right) \\frac{\\partial F}{\\partial p_{n,i}}=0,\\quad 1\\leq j<k\\leq d.\\nonumber\n\\end{eqnarray}\nFor $d\\leq 2\\ell+2$, those invariants of $\\mathfrak{Gal}_{\\ell}(p,q)$ satisfying the condition (\\ref{kond}) can be easily computed by means of a reduction argument that leads to a linear system. To this end, consider the last of the equations in (\\ref{kond2}). As the generators of $\\mathfrak{so}(p,q)$ permute the generators of the Abelian radical, it is straightforward to verify that the quadratic polynomials \n\\begin{equation}\n\\Phi_{n,s}= \\sum_{k=1}^{d} \\frac{g_{11}}{g_{kk}}\\;p_{n,k}p_{n+s,k},\\quad 0\\leq n\\leq 2\\ell,\\; 0\\leq s\\leq 2\\ell-n,\\label{ELE}\n\\end{equation}\nare actually solutions of these equations. Indeed, any solution of the type (\\ref{kond}) is built up from these functions. 
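The claim that the polynomials (\ref{ELE}) solve the equations of system (\ref{kond2}) associated to the pseudo-orthogonal generators can be checked symbolically. The following sketch is our own verification for one small instance, $\ell=\frac{3}{2}$ and $d=3$ with the assumed diagonal metric $g={\rm diag}(1,1,-1)$ of signature $(p,q)=(2,1)$: it confirms that every $\Phi_{n,s}$ is annihilated by all the operators $\widehat{E}^{\prime}_{j,k}$.

```python
# Check, for the small instance 2l = 3, d = 3 and the assumed metric
# g = diag(1, 1, -1), that every Phi_{n,s} of Eq. (ELE) is annihilated
# by the operators E'_{j,k} of system (kond2).  All indices are 0-based.
import sympy as sp

ell2, d = 3, 3                      # 2*l = 3, so n = 0, ..., 3
g = [1, 1, -1]                      # diagonal metric of so(2,1)
p = [[sp.Symbol(f'p{n}_{i}') for i in range(d)] for n in range(ell2 + 1)]

def E_prime(F, j, k):
    """E'_{j,k}(F) = sum_{n,i} (g_ij p_{n,k} - g_ik p_{n,j}) dF/dp_{n,i},
    specialized to a diagonal metric g."""
    return sp.expand(sum(
        g[j]*p[n][k]*sp.diff(F, p[n][j]) - g[k]*p[n][j]*sp.diff(F, p[n][k])
        for n in range(ell2 + 1)))

def Phi(n, s):
    """Phi_{n,s} = sum_k (g_11/g_kk) p_{n,k} p_{n+s,k}."""
    return sum(sp.Rational(g[0], g[k])*p[n][k]*p[n + s][k] for k in range(d))

ok = all(E_prime(Phi(n, s), j, k) == 0
         for n in range(ell2 + 1) for s in range(ell2 - n + 1)
         for j in range(d) for k in range(j + 1, d))
print(ok)  # True
```

The cancellation works because $g_{jj}\,(g_{11}/g_{jj}) = g_{kk}\,(g_{11}/g_{kk}) = g_{11}$, so the two terms produced by $\widehat{E}^{\prime}_{j,k}$ always carry opposite coefficients; this is the diagonal-metric version of the statement that the $\mathfrak{so}(p,q)$ generators merely permute the radical generators.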
Let $\\mathcal{M}_d=\\left\\{\\Phi_{n,s},\\; 0\\leq n\\leq 2\\ell,\\; 0\\leq s\\leq 2\\ell-n\\right\\}$. The cardinal of this set is given by $2\\ell^2+3\\ell+1$, and we observe that not all of the elements in $\\mathcal{M}_d$ are independent. It follows by a short computation that \n\\begin{equation}\n\\widehat{D}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\; \\widehat{H}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\; \\widehat{C}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\label{ELE2}\n\\end{equation}\nshowing that this set is invariant by the action of $\\mathfrak{sl}(2,\\mathbb{R})$. Therefore, we can construct the solutions of system (\\ref{kond2}) recursively using polynomials in the new variables $\\Phi_{n,s}$. Specifically, renumbering the elements in $\\mathcal{M}_d$ as $\\left\\{u_{1},\\cdots ,u_{2\\ell^2+3\\ell+1}\\right\\}$, for any $r\\geq 2$ we define a polynomial of degree $2r$ as \n\\begin{equation}\n\\Psi_r= \\sum_{1\\leq i_1< \\cdots