Additive models provide an important family of models for semiparametric regression or classification. Some reasons for the success of additive models are their increased flexibility when compared to linear or generalized linear models, and their increased interpretability when compared to fully nonparametric models. It is well known that good estimators in additive models are in general less prone to the curse of high dimensionality than good estimators in fully nonparametric models. Many examples of such estimators belong to the large class of regularized kernel based methods over a reproducing kernel Hilbert space. In recent years many interesting results on learning rates of regularized kernel based models for additive models have been published for the case that the focus is on sparsity and the classical least squares loss function is used. Of course, the least squares loss function is differentiable and has many nice mathematical properties, but it is only locally Lipschitz continuous, and therefore regularized kernel based methods based on this loss function typically suffer from bad statistical robustness properties, even if the kernel is bounded. This is in sharp contrast to kernel methods based on a Lipschitz continuous loss function and a bounded kernel, where results on upper bounds for the maxbias and on a bounded influence function are known, both for the general case and for additive models. Therefore, we will here consider the case of regularized kernel based methods based on a general convex and Lipschitz continuous loss function, on a general kernel, and on the classical regularizing term, which is a smoothness penalty but not a sparsity penalty.
Such regularized kernel based methods are now often called support vector machines (SVMs), although this term was historically used only for such methods based on the special hinge loss function and for special kernels. In this paper we address the open question of whether an SVM with an additive kernel can provide a substantially better learning rate in high dimensions than an SVM with a general kernel, say a classical Gaussian RBF kernel, if the assumption of an additive model is satisfied. Our leading example covers learning rates for quantile regression based on the Lipschitz continuous but non-differentiable pinball loss function, which is also called the check function in the literature on parametric and kernel based quantile regression. We will not address the question of how to check whether the assumption of an additive model is satisfied, because this would be a topic for a paper of its own; of course, a practical approach might be to fit both models and compare their risks evaluated on test data. For the same reason we will also not cover sparsity. Consistency of support vector machines generated by additive kernels for additive models was considered in earlier work.
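As a concrete illustration of the pinball (check) loss mentioned above, here is a minimal sketch; the function name and signature are ours, not the paper's.

```python
def pinball_loss(tau, y, t):
    """Pinball (check) loss for tau-quantile regression.

    tau: quantile level in (0, 1); y: observed response; t: prediction.
    Positive residuals are weighted by tau, negative ones by (1 - tau),
    so minimizing the expected loss targets the tau-quantile.
    """
    r = y - t
    return tau * r if r >= 0 else (tau - 1.0) * r
```

Note that the loss is convex and Lipschitz continuous in the prediction, with Lipschitz constant max(tau, 1 - tau), but it is not differentiable at a zero residual.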
In this paper we establish learning rates for these algorithms. Let us recall the framework, with a complete separable metric space as the input space and a closed subset of the real line as the output space. A Borel probability measure on their product is used to model the learning problem, and an independent and identically distributed sample is drawn according to this measure for learning. A loss function is used to measure the quality of a prediction function by the local error. _Throughout the paper we assume that the loss function is measurable, convex with respect to the third variable, and uniformly Lipschitz continuous with a finite Lipschitz constant._ The support vector machines (SVMs) considered here are kernel based regularization schemes in a reproducing kernel Hilbert space (RKHS) generated by a Mercer kernel. With a shifted loss function, introduced for dealing even with heavy-tailed distributions, they take the form of a regularized empirical risk minimizer; for a general Borel measure the population version is defined analogously, with a regularization parameter entering the penalty term. The idea of shifting a loss function has a long history, for example
in the context of M-estimators. It was shown earlier that the SVM solution is also a minimizer of the corresponding optimization problem involving the original (unshifted) loss function, if such a minimizer exists. The additive model we consider consists of an _input space decomposition_, with each component a complete separable metric space, and a _hypothesis space_ built from sets of functions, each of which is also identified as a map from the full input space to the reals. Hence the functions from the hypothesis space take an additive form. We mention that, strictly speaking, there is a notational problem here, because in the formula for the additive form each argument is an element of one component of the input space, whereas in the definition of the sample each input is an element of the full input space. Because these notations will only be used in different places, and because we do not expect any misunderstandings, we think this notation is easier and more intuitive than specifying these quantities with different symbols. The additive kernel is defined as the sum of Mercer kernels on the component spaces. It generates an RKHS which can be written in terms of the RKHSs generated by the component kernels, corresponding to the form ([additive]), and the norm in this RKHS satisfies the expected additive bound. To illustrate advantages of additive models, we provide two examples comparing additive with product kernels; all proofs will be given in Section [proofsection]. The first example [gaussadd] deals with Gaussian RBF kernels: the additive kernel is the sum of one-dimensional Gaussian kernels on the components, while the product kernel is the standard multivariate Gaussian kernel. One can define a Gaussian function on the square which depends on only one variable and which lies in the RKHS generated by the additive Gaussian kernel, but does not lie in the RKHS generated by the multivariate Gaussian kernel of the same variance. The second example concerns Sobolev-type spaces: the space consisting of all square integrable functions whose partial derivatives are all square integrable contains discontinuous functions and is not an RKHS. We also fix notation for the marginal distribution of the data-generating measure on the input space.
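The additive and product Gaussian kernels compared in the example above can be sketched as follows; the bandwidth parameter `gamma` and the two-dimensional inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_1d(u, v, gamma=1.0):
    """One-dimensional Gaussian RBF kernel exp(-gamma * (u - v)^2)."""
    return np.exp(-gamma * (u - v) ** 2)

def additive_kernel(x, xp, gamma=1.0):
    """Additive kernel: the SUM of one-dimensional Gaussian kernels,
    one per coordinate of the input space decomposition."""
    return sum(gaussian_1d(u, v, gamma) for u, v in zip(x, xp))

def product_kernel(x, xp, gamma=1.0):
    """Standard multivariate Gaussian RBF kernel on the whole input
    space; it factorizes as the PRODUCT of the one-dimensional kernels."""
    x, xp = np.asarray(x, dtype=float), np.asarray(xp, dtype=float)
    return float(np.exp(-gamma * np.sum((x - xp) ** 2)))
```

The sum versus product structure is exactly what makes the two RKHSs differ: a function of one coordinate alone fits naturally into the additive RKHS, while the product kernel ties all coordinates together.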
Under the assumption that the hypothesis space is rich enough, namely dense in the space of integrable functions with respect to the marginal distribution, it was proved earlier that the excess risk tends to zero in probability, as long as the regularization parameter tends to zero at a suitable rate as the sample size grows. The rest of the paper has the following structure. Section [ratessection] contains our main results on learning rates for SVMs based on additive kernels; learning rates for quantile regression are treated as important special cases. Section [comparisonsection] contains a comparison of our results with other learning rates published recently. Section [proofsection] contains all the proofs and some results which may be interesting in their own right. In this paper we provide learning rates for the support vector machines generated by additive kernels for additive models, which helps improve the quantitative understanding presented in earlier work. The rates describe the asymptotic behavior of the excess risk. They will be stated under three kinds of conditions involving the hypothesis space, the measure, the loss, and the choice of the regularization parameter. The first condition concerns the approximation ability of the hypothesis space. Since the output function is from the hypothesis space, the learning rates of the learning algorithm depend on the approximation ability of the hypothesis space with respect to the optimal risk, measured by the following approximation error. [defapprox] The approximation error of the triple of hypothesis space, regularization parameter, and measure is defined as the best regularized excess risk achievable over the hypothesis space. To estimate the approximation error, we make an assumption about the minimizer of the risk. For each component, define the integral operator associated with the kernel in the usual way. We mention that this operator is compact and positive; hence we can find its normalized eigenpairs, which form an orthonormal basis with eigenvalues decreasing to zero.
Fix an exponent and define the corresponding power of the integral operator by spectral calculus. This is a positive and bounded operator, and its range is well defined; the assumption below means the target component lies in this range. [assumption1] We assume that the minimizer of the risk takes an additive form, where each component function lies in the range of a power of the integral operator of its kernel. The borderline case of Assumption [assumption1] means each component lies in the RKHS. A standard condition in the literature for achieving polynomial decay of the approximation error ([approxerrordef]) is that the target lies in the range of a power of the integral operator defined on the whole input space. In general, such a function cannot be written in an additive form. However, the hypothesis space ([additive]) takes an additive form, so it is natural for us to impose an additive expression for the target function, with the component functions satisfying the power condition. This natural assumption leads to a technical difficulty in estimating the approximation error: a component function has no direct connection to the marginal distribution projected onto its component space, hence existing methods in the literature cannot be applied directly. Note that on a component space there is no natural probability measure projected from the data-generating measure, and the risk on a component space is not defined. Our idea for overcoming the difficulty is to introduce an intermediate function. It may not minimize a risk (which is not even defined), but it approximates the component function well. When we add up such functions, we get a good approximation of the target function, and thereby a good estimate of the approximation error. This is the first novelty of the paper. [approxerrorthm] Under Assumption [assumption1], the approximation error decays polynomially in the regularization parameter, with an explicit constant. The second condition for our learning rates concerns the capacity of the hypothesis space, measured by empirical covering numbers.
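The fractional power of the integral operator used in Assumption 1 can be illustrated numerically. The sketch below is a discretized (Nyström-style) approximation on a uniform grid; the grid, the uniform marginal, and the Gaussian kernel are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Discretized sketch of the integral operator L_K and its fractional power.
# Approximate L_K on [0, 1] (uniform marginal assumed) by the kernel
# matrix scaled by 1/n, then apply L_K^r via spectral calculus.
n = 200
X = np.linspace(0.0, 1.0, n)
K = np.exp(-(X[:, None] - X[None, :]) ** 2)   # Gaussian Mercer kernel
L = K / n                                     # Nystrom approximation of L_K

# L is symmetric positive semi-definite: eigenpairs (lambda_i, phi_i)
w, V = np.linalg.eigh(L)
w = np.clip(w, 0.0, None)                     # clear tiny negative round-off

def operator_power(r):
    """Spectral calculus: L_K^r = sum_i lambda_i^r <., phi_i> phi_i."""
    return (V * w ** r) @ V.T

# Sanity check of the power property: L^(1/2) L^(1/2) = L
half = operator_power(0.5)
assert np.allclose(half @ half, L, atol=1e-10)
```

The range condition of Assumption 1 then says the target component can be written as `operator_power(r) @ g` for some square integrable `g`; larger exponents `r` correspond to smoother targets.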
Let F be a set of functions on the input space. For every radius, the covering number of F with respect to the empirical metric is defined as the minimal number of balls of that radius needed to cover F, and the empirical covering number of F is defined by taking the supremum over all samples. [assumption2] We assume that, for some exponent and every radius, the empirical covering number of the unit ball of each component RKHS grows at most polynomially in the inverse radius. The second novelty of this paper is the observation that the additive nature of the hypothesis space yields the following nice bound, with a dimension-independent power exponent, for the covering numbers of the balls of the hypothesis space, to be proved in Section [samplesection]. [capacitythm] Under Assumption [assumption2], the covering numbers of the balls of the additive hypothesis space satisfy a polynomial bound with the same exponent, up to a constant factor. The bound for the covering numbers stated in Theorem [capacitythm] is special: the power exponent is independent of the number of components in the additive model. It is well known in the literature of function spaces that the covering numbers of balls of the Sobolev space on the cube have a power exponent that grows with the dimension. The following theorem, to be proved in Section [proofsection], gives a learning rate for the regularization scheme ([algor]) in the special case of quantile regression. [quantilethm] Suppose that the output variable is bounded almost surely by some constant, and that each kernel is sufficiently smooth. If Assumption [assumption1] holds and the conditional distribution has a quantile of average type, then, with a suitable choice of the regularization parameter, the learning rate ([quantilerates]) holds with high confidence. The third condition is a variance bound: there exist an exponent in (0, 1] and a positive constant such that Assumption [assumption3] holds; it always holds true with the trivial exponent. If the triple of hypothesis space, measure, and loss satisfies additional conditions, the exponent can be larger; for example, when the loss is the pinball loss ([pinloss]) and the conditional distribution has a quantile of average type, a larger exponent is available. In a concrete example on the square, taking the regularization parameter appropriately, ([quantilerates]) holds with confidence at least the prescribed level. It is unknown whether the above learning rate can be derived by existing approaches in the literature
even after projection. Note that the kernel in the above example is independent of the sample size. It would be interesting to see whether there exists some positive index such that the function defined by ([gaussfcn]) lies in the range of a power of the integral operator; the existence of such a positive index would lead to the approximation error condition ([approxerrorb]). Let us now add some numerical comparisons of the quality of our learning rates given by Theorem [mainratesthm] with those in the literature. Their Corollary 4.12 gives (essentially) minimax optimal learning rates for (clipped) SVMs in the context of nonparametric quantile regression using one Gaussian RBF kernel on the whole input space, under appropriate smoothness assumptions on the target function. Let us consider the case that the distribution has a quantile of average type, and assume that both their Corollary 4.12 and our Theorem [mainratesthm] are applicable; i.e., we assume in particular that the distribution is a probability measure on the cube whose quantile function has the additive structure stated in Assumption [assumption1], has minimal risk, and additionally fulfills the smoothness assumption needed to make Corollary 4.12 applicable, where a user-defined positive constant independent of the sample size appears. For reasons of simplicity, let us fix the remaining parameters. Then their Corollary 4.12 gives learning rates for the risk of SVMs for quantile regression, if a single Gaussian RBF kernel on the whole input space is used for quantile functions of average type, which are of a dimension-dependent order. Hence the learning rate in Theorem [quantilethm] is better than the one in their Corollary 4.12 in this situation, provided the assumption of the additive model is valid. Table [table1] lists the values of the exponent from ([explicitratescz2]) for some finite values of the dimension. All of these values are positive, with exceptions only for the smallest dimensions. This is in contrast to the corresponding exponent in the learning rate of their Corollary
4.12. Table [table2] and Figures [figure1] to [figure2] give additional information on the limiting behavior. Of course, higher values of the exponent indicate faster rates of convergence. It is obvious that an SVM based on an additive kernel has a significantly faster rate of convergence in higher dimensions compared to an SVM based on a single Gaussian RBF kernel defined on the whole input space, under the assumption that the additive model is valid. The figures seem to indicate that our learning rate from Theorem [mainratesthm] is probably not optimal for small dimensions; however, the main focus of the present paper is on high dimensions. [table1] The table lists the limits of the exponents from their Corollary 4.12 and from Theorem [mainratesthm], respectively, if the regularization parameter is chosen in an optimal manner for the nonparametric setup.

Abstract. Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel based methods for additive models. These learning rates compare favourably, in particular in high dimensions, to recent results on optimal learning rates for purely nonparametric regularized kernel based quantile regression using the Gaussian radial basis function kernel, provided the assumption of an additive model is valid. Additionally, a concrete example is presented to show that a Gaussian function depending only on one variable lies in a reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance. Key words and phrases: additive model, kernel, quantile regression, semiparametric, rate of convergence, support vector machine.
The transport properties of nonlinear non-equilibrium dynamical systems are far from well understood. Consider in particular so-called ratchet systems, which are asymmetric periodic potentials in which an ensemble of particles experiences directed transport. The origins of interest in this lie in considerations about extracting useful work from unbiased noisy fluctuations, as seems to happen in biological systems. Recently, attention has been focused on the behavior of deterministic chaotic ratchets as well as Hamiltonian ratchets. Chaotic systems are defined as those which are sensitively dependent on initial conditions. Whether chaotic or not, the behavior of nonlinear systems, including the transition from regular to chaotic behavior, is in general sensitively dependent on the parameters of the system. That is, the phase-space structure is usually relatively complicated, consisting of stability islands embedded in chaotic seas, for example, or of simultaneously co-existing attractors. This can change significantly as parameters change: stability islands can merge into each other or break apart, the chaotic sea itself may get pinched off or otherwise changed, and attractors can change symmetry or bifurcate. This means that the transport properties can change dramatically as well. A few years ago, Mateos considered a specific ratchet model with a periodically forced underdamped particle. He looked at an ensemble of particles, specifically the velocity of the particles averaged over time and over the entire ensemble. He showed that this quantity, which is an intuitively reasonable definition of "the current", could be either positive or negative depending on the amplitude of the periodic forcing of the system.
At the same time, there exist ranges of the forcing amplitude where the trajectory of an individual particle displays chaotic dynamics. Mateos conjectured a connection between these two phenomena, specifically that the reversal of current direction was correlated with a bifurcation from chaotic to periodic behavior in the trajectory dynamics. Even though it is unlikely that such a result would be universally valid across all chaotic deterministic ratchets, it would still be extremely useful to have general heuristic rules such as this. These organizing principles would allow some handle on characterizing the many different kinds of behavior that are possible in such systems. A later investigation of the Mateos conjecture by Barbi and Salerno, however, argued that it is not a valid rule even in the specific system considered by Mateos. They presented results showing that it is possible to have current reversals in the absence of bifurcations from periodic to chaotic behavior. They proposed an alternative origin for the current reversal, suggesting it was related to the different stability properties of the rotating periodic orbits of the system. These latter results seem fundamentally sensible. However, that paper based its arguments about currents on the behavior of a _single_ particle as opposed to an ensemble. This implicitly assumes that the dynamics of the system are ergodic, which is not true in general for chaotic systems of the type being considered. In particular, there can be extreme dependence of the result on the statistics of the ensemble being considered. This has been pointed out in earlier studies, which laid out a detailed methodology for understanding transport properties in such a mixed regular and chaotic system.
Depending on the specific parameter values, the particular system under consideration has multiple coexisting periodic or chaotic attractors, or a mixture of both. It is hence appropriate to understand how a probability ensemble might behave in such a system. The details of the dependence on the ensemble are particularly relevant to the issue of possible experimental validation of these results, since experiments are always conducted, by virtue of finite precision, over finite time and finite ensembles. It is therefore interesting to probe the results of Barbi and Salerno with regard to the details of the ensemble used and, more formally, to see how ergodicity alters our considerations about the current, as we do in this paper. We report here on studies of the properties of the current in a chaotic deterministic ratchet, specifically the same system as considered by Mateos and by Barbi and Salerno. We consider the impact of different kinds of ensembles of particles on the current and show that the current depends significantly on the details of the initial ensemble. We also show that it is important to discard transients in quantifying the current. This is one of the central messages of this paper: broad heuristics are rare in chaotic systems, and hence it is critical to understand the ensemble dependence in any study of the transport properties of chaotic ratchets. Having established this, we then proceed to discuss the connection between the bifurcation diagram for individual particles and the behavior of the current. We find that while we disagree with many of the details of Barbi and Salerno's results, the broader conclusion still holds. That is, it is indeed possible to have current reversals in the absence of bifurcations from chaos to periodic behavior, as well as bifurcations without any accompanying current reversals. The result of our investigation is therefore that the transport properties of a chaotic ratchet are not as simple as the initial
conjecture suggested. However, we do find evidence for a generalized version of Mateos's conjecture: in general, bifurcations of the trajectory dynamics as a function of a system parameter seem to be associated with abrupt changes in the current. Depending on the specific value of the current, these abrupt changes may lead the net current to reverse direction, but not necessarily so. We start below with a preparatory discussion necessary to understand the details of the connection between bifurcations and current reversal, in which we discuss the potential and the phase space for single trajectories of this system, and define a bifurcation diagram for it. In the next section we discuss the subtleties of establishing a connection between the behavior of individual trajectories and of ensembles. After this, we are able to compare details of specific trajectory bifurcation curves with current curves, and thus justify our broader statements above, after which we conclude. The goal of these studies is to understand the behavior of general chaotic ratchets. The approach taken here is that, to discover heuristic rules, we must consider specific systems in great detail before generalizing. We choose the same one-dimensional ratchet considered previously by Mateos, as well as by Barbi and Salerno. We consider an ensemble of particles moving in an asymmetric periodic potential, driven by a periodic time-dependent external force with zero time average. There is no noise in the system, so it is completely deterministic, although there is damping. The equations of motion for an individual trajectory of such a system are given in dimensionless variables, with the periodic asymmetric potential written in the form
\[
V(x) \propto \Big[ \sin\big(2\pi (x - x_0)\big) + \frac{1}{4}\,\sin\big(4\pi (x - x_0)\big) \Big].
\]
In this equation, constants have been introduced for convenience such that one potential minimum exists at the origin. (a) Classical phase space for the
unperturbed system; for suitable parameter values, two chaotic attractors emerge (b), (c), together with a period-four attractor consisting of the four centers of the circles. (Figure caption.)

The phase space of the undamped, undriven ratchet corresponding to the unperturbed potential looks like a series of asymmetric pendula. That is, individual trajectories have one of the following possible time-asymptotic behaviors: (i) inside the potential wells, trajectories and all their properties oscillate, leading to zero net transport; outside the wells, the trajectories either (ii) librate to the right or (iii) to the left, with corresponding net transport depending upon initial conditions; there are also (iv) trajectories on the separatrices between the oscillating and librating orbits, moving between unstable fixed points in infinite time, as well as the unstable and stable fixed points themselves, all of which constitute a set of negligible measure. When damping is introduced via the velocity-dependent term in Eq. [eq:dyn], the stable fixed points become the only attractors of the system. When the driving is turned on, the phase space becomes chaotic, with the usual phenomena of intertwining separatrices and resulting homoclinic tangles. The dynamics of individual trajectories in such a system are now very complicated in general and depend sensitively on the choice of parameters and initial conditions. We show snapshots of the development of this kind of chaos in the set of Poincaré sections in Fig. [figure1](b, c), together with a period-four orbit represented by the centers of the circles. A broad characterization of the dynamics of the problem as a function of a parameter emerges in a bifurcation diagram.
This can be constructed in several different and essentially equivalent ways. The relatively standard form that we use proceeds as follows: first choose the bifurcation parameter (say the driving amplitude) and correspondingly fix the values of the other parameters, and start with a given value of the swept parameter. Now iterate an initial condition, recording the value of the particle's position (sometimes the velocity) from its integrated trajectory. This is done stroboscopically, at discrete times that are integer multiples of the driving period, up to the maximum number of observations made. Of these, discard observations at times less than some cut-off time and plot the remaining points against the parameter value. It must be noted that discarding transient behavior is critical to get results which are independent of initial conditions, and we shall emphasize this further below in the context of the net transport or current. If the system has a fixed-point attractor, then all of the data lie at one particular location. A periodic orbit with period commensurate with the driving shows up as a finite set of distinct locations. All other orbits, including periodic orbits of incommensurate period, result in a simply connected or multiply connected dense set of points.
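The stroboscopic construction described above can be sketched as follows. This is a minimal illustration, not the authors' code: the parameter values (damping `gamma = 0.1`, drive frequency `omega = 0.67`, offset `x0 = 0.19`) and the normalization of the ratchet potential are illustrative assumptions.

```python
import numpy as np

def dV(x, x0=0.19):
    """Assumed derivative of the asymmetric ratchet potential
    V(x) ~ -[sin(2*pi*(x - x0)) + (1/4)*sin(4*pi*(x - x0))]."""
    return -(2 * np.pi * np.cos(2 * np.pi * (x - x0))
             + np.pi * np.cos(4 * np.pi * (x - x0)))

def bifurcation(a_values, gamma=0.1, omega=0.67,
                n_per=200, n_drive=60, n_skip=40):
    """Stroboscopic bifurcation data: sweep the drive amplitude a,
    carry the final state over to the next a, record the velocity once
    per drive period, and discard the first n_skip periods as transients."""
    T = 2 * np.pi / omega
    dt = T / n_per
    state = np.array([0.0, 0.0])           # (x, v), carried across a values
    out = []
    for a in a_values:
        def f(t, s, a=a):
            x, v = s
            return np.array([v, -gamma * v - dV(x) + a * np.cos(omega * t)])
        t, samples = 0.0, []
        for k in range(n_drive):
            for _ in range(n_per):
                k1 = f(t, state)            # classical RK4 step
                k2 = f(t + dt / 2, state + dt / 2 * k1)
                k3 = f(t + dt / 2, state + dt / 2 * k2)
                k4 = f(t + dt, state + dt * k3)
                state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                t += dt
            if k >= n_skip:
                samples.append(state[1])    # stroboscopic velocity record
        out.append(samples)
    return np.array(out)
```

Plotting each row of the returned array against the corresponding amplitude gives the diagram: a fixed point collapses a column to one value, a commensurate period-n orbit to n values, and chaotic or incommensurate orbits to a dense band.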
For the next parameter value, the last computed state of the trajectory is used as the initial condition, and, as previously, results are stored after the cutoff, and so on until the maximum parameter value is reached. That is, the bifurcation diagram is generated by sweeping the relevant parameter, in this case the driving amplitude, from an initial value through some maximum value. This procedure is intended to catch all coexisting attractors of the system within the specified parameter range. Note that several initial conditions are effectively used throughout the process, and a bifurcation diagram is not the behavior of a single trajectory. We have made several plots, as a test, with different initial conditions, and the diagrams obtained are identical. We show several examples of this kind of bifurcation diagram below, where they are compared with the corresponding behavior of the current. Having broadly understood the wide range of behavior for individual trajectories in this system, we now turn in the next section to a discussion of the non-equilibrium properties of a statistical ensemble of these trajectories, specifically the current for an ensemble. The current for an ensemble in the system is defined in an intuitive manner by Mateos as the time average of the average velocity over an ensemble of initial conditions. That is, an average over several initial conditions is performed at a given observation time to yield the average velocity over the particles; this average velocity is then further time-averaged, which, given the discrete observation times, leads to a second sum over the number of time observations made.
For this to be a relevant quantity to compare with bifurcation diagrams, the current should be independent of the averaging parameters (the ensemble size and the number of observation times) but still strongly dependent on the system parameter. A further parameter dependence that is being suppressed in the definition above is the shape and location of the ensemble being used. That is, the transport properties of an ensemble in a chaotic system depend in general on the part of phase space being sampled. It is therefore important to consider many different initial conditions to generate a current. The first straightforward result, shown in Fig. [figure2], is that in the case of chaotic trajectories, a single trajectory easily displays behavior very different from that of many trajectories. However, it turns out that in the regular regime it is possible to use a single trajectory to get essentially the same result as obtained from many trajectories. Further, consider the bifurcation diagram in Fig. [figure3], where we superimpose the different curves resulting from varying the number of points in the initial ensemble. First, the curve is significantly smoother as a function of the parameter for larger ensembles. Even more relevant is the fact that the single-trajectory data may show current reversals that do not exist in the large-ensemble data. (Figure captions: current versus the number of trajectories; dashed lines correspond to regular motion while solid lines correspond to chaotic motion; note that a single trajectory is sufficient for regular motion, while convergence in the chaotic case is only obtained if the ensemble size exceeds a certain threshold. Current versus the parameter for different sets of trajectories; note that a single trajectory suffices in the regular regime, where all the curves match.
In the chaotic regime, as the ensemble size increases, the curves converge towards the dashed one.) Also, note that single-trajectory current values are typically significantly greater than ensemble averages. This arises from the fact that an arbitrarily chosen ensemble has particles with idiosyncratic behaviors which often average out. As a result, with these ensembles we see typical currents substantially smaller than the single-trajectory values reported by Barbi and Salerno, which are several times greater. However, it is not true that only a few trajectories dominate the dynamics completely, else there would not be a saturation of the current as a function of ensemble size. All this is clear in Fig. [figure3]. We note that the *net* drift of an ensemble can be a lot closer to zero than the behavior of an individual trajectory. It should also be clear that there is a dependence of the current on the location of the initial ensemble, this being particularly true for small ensembles, of course. The location is defined by the ensemble's centroid. For small ensembles it is trivially true that the initial location matters to the asymptotic value of the time-averaged velocity, given that this is a non-ergodic and chaotic system. Further, considering a Gaussian ensemble, say, the width of the ensemble also affects the details of the current, and can show, for instance, illusory current reversal, as seen in Figs. [current-bifur1] and [current-bifur2], for example. Notice also that in Fig. [current-bifur1] the deviations between the different ensembles are particularly pronounced at certain parameter values. These points are close to bifurcation points, where some sort of symmetry breaking is clearly occurring, which underlines our emphasis on the relevance of specifying ensemble characteristics in the neighborhood of unstable behavior. However, why these specific bifurcations should stand out among all the bifurcations in the parameter range shown is not entirely clear.
To understand how to incorporate this knowledge into calculations of the current, consider the fact that if we look at the classical phase space for the Hamiltonian (undamped) motion, we see the typical structure of stable islands embedded in a chaotic sea, with quite complicated behavior. In such a situation, the dynamics always depend on the location of the initial conditions. However, we are not in the Hamiltonian situation once the damping is turned on; in that case, the phase space consists in general of attractors. That is, if transient behavior is discarded, the current is less likely to depend significantly on the location or on the spread of the initial conditions. In particular, in the chaotic regime of a non-Hamiltonian system, the initial ensemble needs to be chosen larger than a certain threshold to ensure convergence. However, in the regular regime it is not important to take a large ensemble, and a single trajectory can suffice, as long as we take care to discard the transients. That is to say, in the computation of currents, the definition of the current needs to be modified so that the time average runs only over observations after some empirically obtained cut-off time, chosen such that we get a converged current (in our calculations we obtained converged results with a suitable cut-off).
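The modified current, i.e. the time average of the ensemble-averaged velocity taken only after a transient cut-off, can be sketched as follows. As before, the parameter values, the Gaussian ensemble width, and the normalization of the potential are illustrative assumptions, not the authors' values.

```python
import numpy as np

def ensemble_current(a, n_traj=100, gamma=0.1, omega=0.67, x0=0.19,
                     n_per=100, n_drive=80, n_cut=40, seed=0):
    """Time- and ensemble-averaged velocity J, discarding the first
    n_cut drive periods as transients (the modified current definition)."""
    rng = np.random.default_rng(seed)
    # Gaussian initial ensemble centred at the origin in phase space
    x = rng.normal(0.0, 0.05, n_traj)
    v = rng.normal(0.0, 0.05, n_traj)
    T = 2 * np.pi / omega
    dt = T / n_per

    def acc(t, x, v):
        # Assumed ratchet force -V'(x) with
        # V(x) ~ -[sin(2*pi*(x - x0)) + (1/4)*sin(4*pi*(x - x0))]
        dVdx = -(2 * np.pi * np.cos(2 * np.pi * (x - x0))
                 + np.pi * np.cos(4 * np.pi * (x - x0)))
        return -gamma * v - dVdx + a * np.cos(omega * t)

    t, vbar = 0.0, []
    for k in range(n_drive):
        for _ in range(n_per):
            # RK4 for the coupled (x, v) system, vectorized over the ensemble
            k1x, k1v = v, acc(t, x, v)
            k2x, k2v = v + dt/2 * k1v, acc(t + dt/2, x + dt/2 * k1x, v + dt/2 * k1v)
            k3x, k3v = v + dt/2 * k2v, acc(t + dt/2, x + dt/2 * k2x, v + dt/2 * k2v)
            k4x, k4v = v + dt * k3v, acc(t + dt, x + dt * k3x, v + dt * k3v)
            x = x + dt/6 * (k1x + 2 * k2x + 2 * k3x + k4x)
            v = v + dt/6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += dt
        if k >= n_cut:
            vbar.append(v.mean())   # ensemble-average velocity at this time
    return float(np.mean(vbar))     # time average after the cut-off
```

Sweeping `a` over a grid and plotting `ensemble_current(a)` against it produces the current curves that are compared with the bifurcation diagrams below; increasing `n_traj` and `n_cut` until the curve stops changing is the convergence check described in the text.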
When this modified form is used, the convergence (ensemble independence) is more rapid as a function of both the ensemble size and the width of the initial conditions. Armed with this background, we are now finally in a position to compare bifurcation diagrams with the current, as we do in the next section. Our results are presented in the set of figures Fig. [figure5] through Fig. [rev-nobifur], in each of which we plot both the ensemble current and the bifurcation diagram as a function of the parameter. The main point of these numerical results can be distilled into a series of heuristic statements which we state below; these are labelled with Roman numerals. [Figure caption: current (upper) and bifurcation diagram (lower) versus the parameter. Note that there is a *single* current reversal while there are many bifurcations visible in the same parameter range.] Consider Fig. [figure5], which shows a parameter range chosen relatively arbitrarily. In this figure, we see several period-doubling bifurcations leading to order-chaos transitions. However, there is only one instance of current reversal. Note, however, that the current is not without structure: it changes fairly dramatically as a function of the parameter. This point is made even more clearly in Fig. [figure6], where the current remains consistently below zero, and hence there are in fact no current reversals at all. Note again, however, that the current has considerable structure, even while remaining negative. [Figure caption: current (upper) and bifurcation diagram (lower) versus the parameter; notice the current stays consistently below zero.] A further figure compares current and bifurcations versus the parameter.
In (a) and (b) we show ensemble dependence: in (a) the black curve is for an ensemble of trajectories starting centered at the stable fixed point with a given root-mean-square Gaussian width, and the brown curve for trajectories starting from the unstable fixed point with the same width. In (b), all ensembles are centered at the stable fixed point, with the black, brown, and maroon lines corresponding to ensembles of increasing width. (c) is the comparison of the current without transients (black) and with transients (brown), along with the single-trajectory results in blue (after Barbi and Salerno); the initial conditions for these ensembles are Gaussian with a given root-mean-square width. (d) is the corresponding bifurcation diagram. It is possible to find several examples of this at different parameters, leading to the negative conclusion, therefore, that *(i) not all bifurcations lead to current reversal*. However, we are searching for positive correlations, and at this point we have not precluded the more restricted statement that all current reversals are associated with bifurcations, which is in fact Mateos' conjecture. We therefore now move on to comparing our results against the specific details of Barbi and Salerno's treatment of this conjecture. In particular, we look at their Figs. (2, 3a, 3b), where they scan a range of the parameter. The distinction between their results and ours is that we are using _ensembles_ of particles, and are investigating the convergence of these results as a function of the number of particles, the width of the ensemble in phase space, as well as transience parameters.
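The kind of ensemble integration being compared here can be sketched as follows. This is an illustrative, simplified driven damped ratchet model (a Mateos-type asymmetric potential with arbitrary parameter values and a predictor-corrector step of our own choosing, not the exact system, parameters, or integrator of the paper):

```python
import numpy as np

def ratchet_accel(x, v, t, b, a, w):
    """Acceleration of a damped particle in the asymmetric periodic
    potential V(x) = -[sin(2*pi*x) + 0.25*sin(4*pi*x)]/(4*pi**2),
    driven by a*cos(w*t); b is the damping coefficient."""
    force = (np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)) / (2 * np.pi)
    return -b * v + force + a * np.cos(w * t)

def integrate_ensemble(x0, v0, b, a, w, dt, n_steps):
    """Integrate all ensemble members at once (vectorized Heun-like step;
    adequate for illustration, not production accuracy)."""
    x, v = x0.astype(float).copy(), v0.astype(float).copy()
    for i in range(n_steps):
        t = i * dt
        a1 = ratchet_accel(x, v, t, b, a, w)
        xp, vp = x + v * dt, v + a1 * dt            # predictor
        a2 = ratchet_accel(xp, vp, t + dt, b, a, w)
        x = x + 0.5 * (v + vp) * dt                 # corrector
        v = v + 0.5 * (a1 + a2) * dt
    return x, v

# small Gaussian ensemble centered at the origin
rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 0.05, size=16)
v0 = np.zeros(16)
xf, vf = integrate_ensemble(x0, v0, 0.1, 0.08, 0.67, 0.01, 2000)
```

Vectorizing over ensemble members makes the centroid, width, and size of the initial Gaussian explicit inputs, which is precisely what must be controlled before comparing currents across parameter scans.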
Our data with larger ensembles yield different results in general, as we show in the recomputed versions of these figures, presented here in Figs. [current-bifur1, current-bifur2]. Specifically: (a) the single-trajectory results are, not surprisingly, cleaner and can be more easily interpreted as part of transitions in the stability properties of the periodic orbits; the ensemble results, on the other hand, even when converged, show statistical roughness. (b) The ensemble results are consistent with Barbi and Salerno in general, although disagreeing in several details. For instance, (c) one bifurcation has a much gentler impact on the ensemble current, which has been growing for a while, while the single-trajectory result changes abruptly. Note (d) the very interesting fact that the single-trajectory current completely misses the bifurcation-associated spike. Further, (e) Barbi and Salerno's discussion of the behavior of the current in one parameter range is seen to be flawed: our results are consistent with theirs, but the current changes are seen to be consistent with bifurcations despite their statements to the contrary.
On the other hand, (f) the ensemble current shows a case [in Fig. [current-bifur2]] of current reversal that does not seem to be associated with bifurcations. In this spike, the current abruptly drops below zero and then rises above it again. The single-trajectory current completely ignores this particular effect, as can be seen. The bifurcation diagram indicates that in this case the important transitions happen either before or after the spike. All of this adds up to two statements. The first is a reiteration of the fact that there is significant information in the ensemble current that cannot be obtained from the single-trajectory current. The second is that the heuristic that arises from this is again a negative conclusion: *(ii) not all current reversals are associated with bifurcations.* Where does this leave us in the search for 'positive' results, that is, useful heuristics? One possible way of retaining the Mateos conjecture is to weaken it, i.e., make it into the statement that *(iii) _most_ current reversals are associated with bifurcations.* [Figure captions: same as Fig. [current-bifur1] except for the parameter range considered; current (upper) and bifurcation diagram (lower) versus the parameter. Note in particular that eyeball tests can be misleading: we see reversals without bifurcations in (a), whereas the zoomed version (c) shows that there are windows of periodic and chaotic regimes. This is further evidence that jumps in the current correspond in general to bifurcations.] However, a *different* rule of thumb, previously not proposed, emerges from our studies. This generalizes the Mateos conjecture to say that *(iv) bifurcations correspond to sudden current changes (spikes or jumps)*.
Note that this means these changes in current are not necessarily reversals of direction. If the current jump or spike goes through zero, it coincides with a current reversal, making the Mateos conjecture a special case. The physical basis of this argument is the fact that ensembles of particles in chaotic systems _can_ have net directed transport, but the details of this behavior depend relatively sensitively on the system parameters. This parameter dependence is greatly exaggerated at a bifurcation point, where the dynamics of the underlying single-particle system undergoes a transition: a period-doubling transition, for example, or one from chaos to regular behavior. Scanning the relevant figures, we see that this is a very useful rule of thumb. For example, it completely captures the behavior of Fig. [figure6], which cannot be understood either as an example of the Mateos conjecture or even as a failure thereof. As such, this rule significantly enhances our ability to characterize changes in the behavior of the current as a function of parameter. A further example of where this modified conjecture helps us is in looking at a seeming negation of the Mateos conjecture, that is, an example where we seem to see current reversal without bifurcation, visible in Fig. [hidden-bifur]. The current reversals in that scan of parameter space seem to happen inside the chaotic regime and seemingly independent of bifurcation.
However, this turns out to be a 'hidden' bifurcation: when we zoom in on the chaotic regime, we see hidden periodic windows. This is therefore consistent with our statement that sudden current changes are associated with bifurcations. Each of the transitions from periodic behavior to chaos and back provides opportunities for the current to spike. However, such hidden bifurcations cannot be found in all cases. We can see an example of this in Fig. [rev-nobifur]. The current is seen to move smoothly across zero with seemingly no corresponding bifurcations, even when we do a careful zoom on the data, as in Fig. [hidden-bifur]. However, arguably, although subjectively, this change is close to a bifurcation point. This result, that there are situations where the heuristics simply do not seem to apply, is part of the open questions associated with this problem, of course. We note, however, that these broad arguments hold when we vary other parameters as well (figures not shown here). In conclusion, in this paper we have taken the approach that it is useful to find general rules of thumb (even if not universally valid) to understand the complicated behavior of non-equilibrium nonlinear statistical mechanical systems. In the case of chaotic deterministic ratchets, we have shown that it is important to factor out issues of size, location, spread, and transience in computing the 'current' due to an ensemble before we search for such rules, and that the dependence on ensemble characteristics is most critical near certain bifurcation points. We have then argued that the following heuristic characteristics hold: bifurcations in single-trajectory behavior often correspond to sudden spikes or jumps in the current for an ensemble in the same system. Current reversals are a special case of this.
However, not all spikes or jumps correspond to a bifurcation, nor vice versa. The open question is clearly to figure out whether the reasons these rules hold or fail can be made more concrete. A.K. gratefully acknowledges T. Barsch and Kamal P. Singh for stimulating discussions, the Reimar Lüst grant, and financial support from the Alexander von Humboldt Foundation in Bonn. A.K.P. is grateful to Carleton College for the 'Sit, Wallin, and Class of 1949' sabbatical fellowships, and to the MPIPKS for hosting him for a sabbatical visit, which led to this collaboration. Useful discussions with J.-M. Rost on preliminary results are also acknowledged. P. Hänggi and R. Bartussek, in Nonlinear Physics of Complex Systems, Lecture Notes in Physics Vol. 476, edited by J. Parisi, S. C. Mueller, and W. Zimmermann (Springer Verlag, Berlin, 1996), pp. 294-308; R. D. Astumian, Science 276, 917 (1997); F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. 69, 1269 (1997); C. R. Doering, Nuovo Cimento D 17, 685 (1995); S. Flach, O. Yevtushenko, and Y. Zolotaryuk, Phys. Rev. Lett. 84, 2358 (2000); O. Yevtushenko, S. Flach, Y. Zolotaryuk, and A. A. Ovchinnikov, Europhys. Lett. 54, 141 (2001); S. Denisov et al., Phys. Rev. E 66, 041104 (2002) | In Phys. Rev. Lett. 84, 258 (2000), Mateos conjectured that current reversal in a classical deterministic ratchet is associated with bifurcations from chaotic to periodic regimes. This is based on the comparison of the current and the bifurcation diagram as a function of a given parameter for a periodic asymmetric potential. Barbi and Salerno, in Phys. Rev. E 62, 1988 (2000), have further investigated this claim and argue that, contrary to Mateos' claim, current reversals can occur also in the absence of bifurcations. Barbi and Salerno's studies are based on the dynamics of one particle rather than the statistical mechanics of an ensemble of particles moving in the chaotic system.
The behavior of ensembles can be quite different, depending upon their characteristics, which leaves their results open to question. In this paper we present results from studies showing how the current depends on the details of the ensemble used to generate it, as well as conditions for convergent behavior (that is, independence from the details of the ensemble). We are then able to present the converged current as a function of parameters, in the same system as Mateos as well as Barbi and Salerno. We show evidence for current reversal without bifurcation, as well as bifurcation without current reversal. We conjecture that it is appropriate to correlate abrupt changes in the current with bifurcation, rather than current reversals, and show numerical evidence for our claims. |
With significant research efforts being directed to the development of neurocomputers based on the functionalities of the brain, a seismic shift is expected in the domain of computing based on the traditional von Neumann model. Several recent flagship neuromorphic projects, including efforts at IBM, aim to develop brain-inspired computing platforms suitable for recognition (image, video, speech), classification, and mining problems. While Boolean computation is based on sequential fetch, decode, and execute cycles, such neuromorphic computing architectures are massively parallel and event-driven, and are potentially appealing for pattern recognition tasks and cortical brain simulations. To that end, researchers have proposed various nanoelectronic devices where the underlying device physics offers a mapping to the neuronal and synaptic operations performed in the brain. The main motivation behind the usage of such non-von Neumann post-CMOS technologies as neural and synaptic devices stems from the fact that the significant mismatch between CMOS transistors and the underlying neuroscience mechanisms results in significant area and energy overhead for a corresponding hardware implementation. A very popular instance is the simulation of a cat's brain on IBM's Blue Gene supercomputer, where the reported power consumption was many orders of magnitude larger than that of a biological brain. While the power required to simulate the human brain will rise significantly as we proceed along the hierarchy in the animal kingdom, actual power consumption in the mammalian brain is just a few tens of watts.
In a neuromorphic computing platform, synapses form the pathways between neurons, and their strengths modulate the magnitude of the signal transmitted between the neurons. The exact mechanisms that underlie the "learning" or "plasticity" of such synaptic connections are still under debate. Meanwhile, researchers have attempted to mimic several plasticity measurements observed in biological synapses in nanoelectronic devices like phase change memories, memristors, and spintronic devices. However, the majority of this research has focused on non-volatile plasticity changes of the synapse in response to the spiking patterns of the neurons it connects, corresponding to long-term plasticity, and the volatility of human memory has been largely ignored. As a matter of fact, neuroscience studies have demonstrated that synapses exhibit an inherent learning ability where they undergo volatile plasticity changes and ultimately undergo long-term plasticity conditionally, based on the frequency of the incoming action potentials. Such volatile or meta-stable synaptic plasticity mechanisms can lead to neuromorphic architectures where the synaptic memory can adapt itself to a changing environment, since sections of the memory that have not been receiving frequent stimulus can be erased and utilized to memorize more frequent information. Hence, it is necessary to include such volatile memory transition functionalities in a neuromorphic chip in order to leverage the computational power that such meta-stable synaptic plasticity mechanisms have to offer. Fig. [drawing1](a) demonstrates the biological process involved in such volatile synaptic plasticity changes.
During the transmission of each action potential from the pre-neuron to the post-neuron through the synapse, an influx of ionic species (such as calcium ions) causes the release of neurotransmitters from the pre- to the post-neuron. This results in a temporary strengthening of the synaptic strength. However, in the absence of the action potential, the ionic species concentration settles down to its equilibrium value and the synapse strength diminishes. This phenomenon is termed short-term plasticity (STP). However, if the action potentials occur frequently, the concentration of the ions does not get enough time to settle down to the equilibrium concentration, and this buildup of concentration eventually results in long-term strengthening of the synaptic junction. This phenomenon is termed long-term potentiation (LTP). While STP is a meta-stable state and lasts for a very small time duration, LTP is a stable synaptic state which can last for hours, days, or even years. A similar discussion is valid for the case where there is a long-term reduction in synaptic strength with frequent stimulus; the phenomenon is then referred to as long-term depression (LTD). Such STP and LTP mechanisms have often been correlated to the short-term memory (STM) and long-term memory (LTM) models proposed by Atkinson and Shiffrin (Fig. [drawing1](b)). This psychological model partitions the human memory into an STM and an LTM. On the arrival of an input stimulus, information is first stored in the STM. However, upon frequent rehearsal, information gets transferred to the LTM. While the "forgetting" phenomenon occurs at a fast rate in the STM, information can be stored for a much longer duration in the LTM.
In order to mimic such volatile synaptic plasticity mechanisms, a nanoelectronic device is required that is able to undergo meta-stable resistance transitions depending on the frequency of the input, and also to transition to a long-term stable resistance state upon frequent stimulation. Hence a competition between synaptic memory reinforcement (strengthening) and memory loss is a crucial requirement for such nanoelectronic synapses. In the next section, we describe the mapping of the magnetization dynamics of a nanomagnet to such volatile synaptic plasticity mechanisms observed in the brain. Let us first describe the device structure and principle of operation of an MTJ, as shown in Fig. [drawing2](a). The device consists of two ferromagnetic layers separated by a tunneling oxide barrier (TB). The magnetization of one of the layers is magnetically "pinned" and hence it is termed the "pinned" layer (PL). The magnetization of the other layer, denoted the "free layer" (FL), can be manipulated by an incoming spin current. The MTJ structure exhibits two extreme stable conductive states: the low-conductance "anti-parallel" orientation (AP), where the PL and FL magnetizations are oppositely directed, and the high-conductance "parallel" orientation (P), where the magnetizations of the two layers are in the same direction. Let us consider that the initial state of the MTJ synapse is the low-conductance AP state. Considering the input stimulus (current) to flow from terminal T2 to terminal T1, electrons will flow from terminal T1 to T2 and get spin-polarized by the PL of the MTJ. Subsequently, these spin-polarized electrons will try to orient the FL of the MTJ "parallel" to the PL. It is worth noting here that the spin polarization of incoming electrons in the MTJ is analogous to the release of neurotransmitters in a biological synapse. The STP and LTP mechanisms exhibited in the MTJ due to the spin polarization of
the incoming electrons can be explained by the energy profile of the FL of the MTJ. Let the angle between the FL magnetization and the PL magnetization be denoted by θ. The FL energy as a function of θ is shown in Fig. [drawing2](a), where the two energy minima (at θ = 0 and θ = π) are separated by an energy barrier. During the transition from the AP state to the P state, the FL has to transition from θ = π to θ = 0. Upon the receipt of an input stimulus, the FL magnetization proceeds "uphill" along the energy profile (from initial point 1 to point 2 in Fig. [drawing2](a)). However, since point 2 is a meta-stable state, it starts going "downhill" to point 1 once the stimulus is removed. If the input stimulus is not frequent enough, the FL will try to stabilize back to the AP state after each stimulus. However, if the stimulus is frequent, the FL will not get sufficient time to reach point 1 and ultimately will be able to overcome the energy barrier (point 3 in Fig. [drawing2](a)). It is worth noting here that, on crossing the energy barrier, it becomes progressively more difficult for the MTJ to exhibit STP and switch back to the initial AP state. This is in agreement with the psychological model of human memory, where it becomes progressively more difficult for the memory to "forget" information during the transition from STM to LTM. Hence, once it has crossed the energy barrier, it starts transitioning from the STP to the LTP state (point 4 in Fig. [drawing2](a)). The stability of the MTJ in the LTP state is dictated by the magnitude of the energy barrier, to which the lifetime of the LTP state is exponentially related. For instance, for the energy barrier used in this work, the LTP lifetime is on the order of hours, while the lifetime can be extended to around years by engineering a larger barrier height. The lifetime can be varied by varying the energy barrier, or equivalently, the volume of the MTJ. The STP-LTP behavior of the MTJ can be
also explained from the magnetization dynamics of the FL, described by the Landau-Lifshitz-Gilbert (LLG) equation with an additional term to account for the spin momentum torque according to Slonczewski:

$$\frac{d\hat{m}}{dt} = -\gamma\,(\hat{m}\times \vec{H}_{eff}) + \alpha\,\Big(\hat{m}\times\frac{d\hat{m}}{dt}\Big) + \frac{1}{qN_s}\,(\hat{m}\times \vec{I}_s\times\hat{m})$$

where $\hat{m}$ is the unit vector of FL magnetization, $\gamma$ is the gyromagnetic ratio for an electron, $\alpha$ is Gilbert's damping ratio, $\vec{H}_{eff}$ is the effective magnetic field including the shape anisotropy field for elliptic disks, $N_s = M_sV/\mu_B$ is the number of spins in a free layer of volume $V$ ($M_s$ is the saturation magnetization and $\mu_B$ is the Bohr magneton), and $\vec{I}_s = \beta I$ is the spin current generated by the input stimulus ($\beta$ is the spin-polarization efficiency of the PL). Thermal noise is included by an additional thermal field, $\vec{H}_{thermal} = \sqrt{\frac{\alpha}{1+\alpha^2}\,\frac{2k_BT}{\gamma M_sV\,\delta t}}\;\vec{G}_{0,1}$, where $\vec{G}_{0,1}$ is a Gaussian distribution with zero mean and unit standard deviation, $k_B$ is the Boltzmann constant, $T$ is the temperature, and $\delta t$ is the simulation time step. Eliminating the implicit $d\hat{m}/dt$ on the right-hand side by simple algebraic manipulation gives

$$\frac{d\hat{m}}{dt} = -\frac{\gamma}{1+\alpha^2}\,(\hat{m}\times \vec{H}_{eff}) - \frac{\alpha\gamma}{1+\alpha^2}\,\big(\hat{m}\times(\hat{m}\times \vec{H}_{eff})\big) + \frac{1}{(1+\alpha^2)\,qN_s}\,\big(\hat{m}\times \vec{I}_s\times\hat{m} + \alpha\,(\hat{m}\times \vec{I}_s)\big)$$

Hence, in the presence of an input stimulus, the magnetization of the FL starts changing due to integration of the input; in the absence of the input, it starts leaking back due to the first two terms on the right-hand side of the above equation. It is worth noting that, as in traditional semiconductor memories, the magnitude and duration of the input stimulus certainly have an impact on the STP-LTP transition of the synapse. However, the frequency of the input is a critical factor in this scenario: even though the total flux through the device is the same, the synapse will conditionally change its state only if the frequency of the input is high. We verified that this functionality is exhibited in MTJs by performing LLG simulations (including thermal noise). The conductance of the MTJ as a function of θ can be described by $G(\theta) = G_{AP} + \tfrac{1}{2}(G_P - G_{AP})(1+\cos\theta)$, where $G_P$ ($G_{AP}$) is the MTJ conductance in the P (AP) orientation, respectively.
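A minimal macrospin sketch of these dynamics is given below. It uses explicit Euler integration, drops the thermal field and the small α-correction to the spin-torque term, and uses illustrative parameter values rather than those of the paper's Table I; it only demonstrates the "leak" of the first two terms:

```python
import numpy as np

GAMMA = 1.76e11   # gyromagnetic ratio (rad s^-1 T^-1)
ALPHA = 0.01      # Gilbert damping (illustrative value)

def llg_step(m, h_eff, i_s, n_s, m_p, dt):
    """One explicit-Euler step of the LLG equation with Slonczewski torque.

    m     : unit FL magnetization
    h_eff : effective field (T)
    i_s   : spin current magnitude, in spins per second
    n_s   : number of spins in the free layer
    m_p   : unit PL magnetization
    """
    prec = -GAMMA * np.cross(m, h_eff)                        # precession
    damp = -ALPHA * GAMMA * np.cross(m, np.cross(m, h_eff))   # damping ("leak")
    stt = -(i_s / n_s) * np.cross(m, np.cross(m, m_p))        # input integration
    m_new = m + dt * (prec + damp + stt) / (1.0 + ALPHA**2)
    return m_new / np.linalg.norm(m_new)                      # keep |m| = 1

# with no input current, a tilted free layer leaks back to the easy axis
m = np.array([0.1, 0.0, 0.995]); m /= np.linalg.norm(m)
h = np.array([0.0, 0.0, 0.1])          # easy-axis field, 0.1 T
mp = np.array([0.0, 0.0, 1.0])
for _ in range(200000):                # 20 ns at dt = 0.1 ps
    m = llg_step(m, h, 0.0, 1e5, mp, 1e-13)
```

With the current turned on between pulses of varying spacing, the same integrator exhibits the competition between integration and leakage that the text describes; a production simulation would add the thermal field and a higher-order integrator.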
As shown in Fig. [drawing2](b), the MTJ conductance undergoes meta-stable transitions (STP) and is not able to undergo LTP when the time interval between the input pulses is large. However, upon frequent stimulation with a small time interval, the device undergoes the LTP transition incrementally. Fig. [drawing2](b) and (c) illustrate the competition between memory reinforcement and memory decay in an MTJ structure that is crucial to implement STP and LTP in the synapse. We demonstrate simulation results to verify the STP and LTP mechanisms in an MTJ synapse depending on the time interval between stimulations. The device simulation parameters were obtained from experimental measurements and are shown in Table I. The MTJ was subjected to 10 stimulations, each stimulation being a current pulse of fixed magnitude and duration. As shown in Fig. [drawing3], the probability of LTP transition and the average device conductance at the end of each stimulation increase with decreasing time interval between the stimulations. The dependence on stimulation time interval can be further characterized by measurements corresponding to paired-pulse facilitation (PPF: synaptic plasticity increase when a second stimulus follows a previous similar stimulus) and post-tetanic potentiation (PTP: progressive synaptic plasticity increment when a large number of such stimuli are received successively). Fig. [drawing4] depicts such PPF (after the 2nd stimulus) and PTP (after the 10th stimulus) measurements for the MTJ synapse with variation in the stimulation interval. The measurements closely resemble measurements performed in frog neuromuscular junctions, where PPF measurements revealed a small synaptic conductivity increase when the stimulation rate was frequent enough, while PTP measurements indicated LTP transition upon frequent stimulation, with a fast decay in synaptic conductivity upon decrease in
the stimulation rate. Hence, stimulation rate indeed plays a critical role in the MTJ synapse in determining the probability of LTP transition. The psychological model of STM and LTM utilizing such MTJ synapses was further explored in a memory array. The array was stimulated by a binary image of the Purdue University logo, where a set of 5 pulses (each of fixed magnitude and duration) was applied for each ON pixel. Snapshots of the conductance values of the memory array after each stimulus are shown for two different stimulation intervals, one long and one short. While the memory array attempts to remember the displayed image right after stimulation, it fails to transition to LTM for the longer interval and the information is eventually lost after stimulation. However, information gets transferred to LTM progressively for the shorter interval. It is worth noting that the same amount of flux is transmitted through the MTJ in both cases. The simulation not only provides a visual depiction of the temporal evolution of a large array of MTJ conductances as a function of stimulus, but also provides inspiration for the realization of adaptive neuromorphic systems exploiting the concepts of STM and LTM. Readers interested in the practical implementation of such arrays of spintronic devices are referred to the cited literature. The contributions of this work over state-of-the-art approaches may be summarized as follows. This is the first theoretical demonstration of STP and LTP mechanisms in an MTJ synapse. We demonstrated the mapping of neurotransmitter release in a biological synapse to the spin polarization of electrons in an MTJ, and performed extensive simulations to illustrate the impact of stimulus frequency on the LTP probability in such an MTJ structure. There have been recent proposals of other emerging devices that can exhibit such STP-LTP mechanisms, such as inorganic (atomic-switch) synapses and memristors. However, it is worth noting that input stimulus magnitudes there are usually in the range of volts (1.3 V in one case and
80 mV in another), and stimulus durations are of the order of milliseconds (1 ms and 0.5 s in the respective works). In contrast, similar mechanisms can be exhibited in MTJ synapses at much lower energy consumption (with stimulus magnitudes of a few hundred microamperes and durations of a few nanoseconds). We believe that this work will stimulate proof-of-concept experiments to realize such MTJ synapses, which can potentially pave the way for future ultra-low-power intelligent neuromorphic systems capable of adaptive learning. The work was supported in part by the Center for Spintronic Materials, Interfaces, and Novel Architectures (C-SPIN), a MARCO and DARPA sponsored StarNet center, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, and by the National Security Science and Engineering Faculty Fellowship. J. Schemmel, J. Fieres, and K. Meier, in Neural Networks, 2008. IJCNN 2008 (IEEE World Congress on Computational Intelligence), IEEE International Joint Conference on, IEEE, 2008, pp. 431-438. B. L. Jackson, B. Rajendran, G. S. Corrado, M. Breitwisch, G. W. Burr, R. Cheek, K. Gopalakrishnan, S. Raoux, C. T. Rettner, A. Padilla et al., "Nanoscale electronic synapses using phase change devices," ACM Journal on Emerging Technologies in Computing Systems (JETC), vol. 9, no. 2, p. 12, 2013. M. N. Baibich, J. M. Broto, A. Fert, F. Nguyen Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, and J. Chazelas, "Giant magnetoresistance of (001)Fe/(001)Cr magnetic superlattices," Physical Review Letters, vol. 61, no. 21, p. 2472, 1988. G. Binasch, P. Grünberg, F. Saurenbach, and W. Zinn, "Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange," Physical Review B, vol. 39, no. 7, p. 4828, 1989. W. Scholz, T. Schrefl, and J.
Fidler, "Micromagnetic simulation of thermally activated switching in fine particles," Journal of Magnetism and Magnetic Materials, vol. 233, no. 3, pp. 296-304, 2001. C.-F. Pai, L. Liu, Y. Li, H. W. Tseng, D. C. Ralph, and R. A. Buhrman, "Spin transfer torque devices utilizing the giant spin Hall effect of tungsten," Applied Physics Letters, vol. 101, no. 12, p. 122404, 2012. H. Noguchi, K. Ikegami, K. Kushida, K. Abe, S. Itai, S. Takaya, N. Shimomura, J. Ito, A. Kawasumi, H. Hara et al., in Solid-State Circuits Conference (ISSCC), 2015 IEEE International, IEEE, 2015. T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. K. Gimzewski, and M. Aono, "Short-term plasticity and long-term potentiation mimicked in single inorganic synapses," Nature Materials, vol. 10, no. 8, pp. 591-595, 2011. R. Yang, K. Terabe, Y. Yao, T. Tsuruoka, T. Hasegawa, J. K. Gimzewski, and M. Aono, "Synaptic plasticity and memory functions achieved in a WO3-x-based nanoionics device by using the principle of atomic switch operation," Nanotechnology, vol. 24, no. 38, p. 384003 | Synaptic memory is considered to be the main element responsible for learning and cognition in humans. Although traditionally non-volatile long-term plasticity changes have been implemented in nanoelectronic synapses for neuromorphic applications, recent studies in neuroscience have revealed that biological synapses undergo meta-stable volatile strengthening followed by long-term strengthening, provided that the frequency of the input stimulus is sufficiently high. Such "memory strengthening" and "memory decay" functionalities can potentially lead to adaptive neuromorphic architectures.
In this paper, we demonstrate the close resemblance of the magnetization dynamics of a magnetic tunnel junction (MTJ) to the short-term plasticity and long-term potentiation observed in biological synapses. We illustrate that, in addition to the magnitude and duration of the input stimulus, the frequency of the stimulus plays a critical role in determining long-term potentiation of the MTJ. Such MTJ synaptic memory arrays can be utilized to create compact, ultra-fast, and low-power intelligent neural systems. |
The segmentation process as a whole can be thought of as consisting of two tasks: recognition and delineation. Recognition is to determine roughly "where" the object is and to distinguish it from other object-like entities. Although delineation is the final step for defining the spatial extent of the object region/boundary in the image, an efficient recognition strategy is a key for successful delineation. In this study, a novel, general method is introduced for object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated images. As an alternative to the manual methods in the literature based on initial placement of the models by an expert, model-based methods can be employed for recognition. For example, in one approach, the position of an organ model (such as the liver) is estimated from its histogram. In another, the generalized Hough transform is successfully extended to incorporate variability of shape for the 2D segmentation problem. Atlas-based methods are also used to define an initial position for a shape model: affine registration can be performed to align the data into an atlas to determine the initial position for a shape model of the knee cartilage. Similarly, a popular particle filtering algorithm has been used to detect the starting pose of models for both single- and multi-object cases. However, due to the large search space and numerous local minima in most of these studies, conducting a global search on the entire image is not a feasible approach.
in this paper, we investigate an approach for automatically recognizing objects in 3d images without performing elaborate searches or optimization. the proposed method consists of the following key ideas and components: *1. model building:* after aligning image data from all subjects in the training set into a common coordinate system via 7-parameter affine registration, the live-wire algorithm is used to segment different objects from the subjects. segmented objects are used for the automatic extraction of landmarks in a slice-by-slice manner. from the landmark information for all objects, a model assembly is constructed. *2. b-scale encoding:* the b-scale value at every voxel in an image helps to understand the `` objectness '' of a given image without doing explicit segmentation. for each voxel, the radius of the largest ball of homogeneous intensity is weighted by the intensity value of that particular voxel in order to incorporate appearance (texture) information into the object information (called intensity weighted b-scale: ) so that a model of the correlations between shape and texture can be built. a simple and proper way of thresholding the b-scale image yields a few largest balls remaining in the image. these are used for the construction of the relationship between the segmented training objects and the corresponding images. the resulting images have a strong relationship with the actual delineated objects. *3. relationship between and :* a principal component system is built via pca for the segmented objects in each image, and their mean system, denoted , is found over all training images. has an origin and three axes. similarly the mean system, denoted , for intensity weighted b-scale images is found. finally the transformation that maps to is found. given an image to be segmented, the main idea here is to use to facilitate a quick placement of in with a proper pose as indicated in step 4 below.
*4. hierarchical recognition:* for a given image , is obtained and its system, denoted , is computed subsequently. assuming the relationship of to to be the same as that of to , and assuming that offers the proper pose of in the training images, we use transformation and to determine the pose of in . this level of recognition is called coarse recognition. further refinement of the recognition can be done using the skin boundary object in the image with the requirement that a major portion of should lie inside the body region delimited by the skin boundary. moreover, a little search inside the skin boundary can be done for fine tuning; however, since the offered coarse recognition method gives high recognition rates, there is no need to do any elaborate searches. we will focus on the fine tuning of coarse recognition in future study. the finest level of recognition requires the actual delineation algorithm itself, which is a hybrid method in our case, called gc-asm (synergistic integration of graph-cut and active shape model). this delineation algorithm is presented in a companion paper submitted to this symposium. a convenient way of achieving the incorporation of prior information automatically in computing systems is to create and use a flexible _model_ to encode information such as the expected _size_, _shape_, _appearance_, and _position_ of objects in an image. among such information, _shape_ and _appearance_ are two complementary but closely related attributes of biological structures in images, and hence they are often used to create statistical models.
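the b-scale idea described in step 2 above can be sketched directly: at each voxel (pixel, in this 2d sketch) one grows a ball until the intensities inside it stop being homogeneous, and the intensity-weighted variant multiplies the resulting radius by the voxel's intensity. the function names, the homogeneity test via a fixed tolerance, and the parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def b_scale(image, tolerance=10, max_radius=20):
    """Crude b-scale sketch: at each pixel, the radius of the largest disc
    whose intensities stay within `tolerance` of the center value."""
    h, w = image.shape
    scale = np.zeros((h, w), dtype=int)
    yy, xx = np.mgrid[0:h, 0:w]
    for i in range(h):
        for j in range(w):
            center = int(image[i, j])
            r = 0
            while r < max_radius:
                # disc of radius r+1, automatically clipped at image borders
                mask = (yy - i) ** 2 + (xx - j) ** 2 <= (r + 1) ** 2
                if np.abs(image[mask].astype(int) - center).max() > tolerance:
                    break
                r += 1
            scale[i, j] = r
    return scale

def intensity_weighted_b_scale(image, **kw):
    """Radius of the largest homogeneous ball, weighted by local intensity."""
    return b_scale(image, **kw) * image
```

on a uniform image every disc is homogeneous, so the scale saturates at `max_radius`; near a sharp boundary the scale drops toward zero, which is what makes thresholding the b-scale image leave only a few large balls.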
in particular, shape has been used extensively in both high- and low-level image analysis tasks, and it has been demonstrated that shape models (such as active shape models (asms)) can be quite powerful in compensating for misleading information due to noise, poor resolution, clutter, and occlusion in the images. therefore, we use asm to estimate population statistics from a set of examples (training set). in order to guarantee the 3d point correspondences required by asm, we build our statistical shape models combining semi-automatic methods: (1) manually selected anatomically correspondent slices by an expert, and (2) a semi-automatic way of specifying key points on the shapes starting from the same anatomical locations. once step (1) is accomplished, the remaining problem turns into a problem of establishing point correspondence in 2d shapes, which is easily solved. it is extremely important to choose correct correspondences so that a good representation of the modelled object results. although landmark correspondence is usually established manually by experts, this is time-consuming, prone to errors, and restricted to only 2d objects. because of these limitations, a semi-automatic landmark tagging method, _equal space landmarking_, is used to establish correspondence between landmarks of each sample shape in our experiments. although this method was proposed for 2d objects, and equally spacing a fixed number of points for 3d objects is much more difficult, we use the equal space landmarking technique in a pseudo-3d manner where the 3d object is annotated slice by slice. let be a single shape and assume that its finite dimensional representation after the landmarking consists of landmark points with positions , where are the cartesian coordinates of the point on the shape. equal space landmark tagging for points for on shape boundaries (contours) starts by selecting an initial point on each shape sample in the training set and equally spacing a
fixed number of points on each boundary automatically. selecting the starting point has been done manually by annotating the same anatomical point for each shape in the training set. figure [img:landmarking_abd] shows annotated landmarks for five different objects (skin, liver, right kidney, left kidney, spleen) in a ct slice of the abdominal region. note that different numbers of landmarks are used for different objects considering their size. (1) the b-scale image of a given image captures object morphometric information without requiring explicit segmentation. b-scales constitute fundamental units of an image in terms of largest homogeneous balls situated at every voxel in the image. the b-scale concept has been previously used in object delineation, filtering and registration. our results suggest that their ability to capture object geography in conjunction with shape models may be useful in quick and simple yet accurate object recognition strategies. (2) the presented method is general and does not depend on exploiting the peculiar characteristics of the application situation. (3) the specificity of recognition increases dramatically as the number of objects in the model increases. (4) we emphasize that both modeling and testing procedures are carried out on ct data sets that are part of the clinical pet/ct data routinely acquired in our hospital. the ct data sets are thus of relatively poor (spatial and contrast) resolution compared to other ct-alone studies with or without contrast. we expect better performance if higher resolution ct data are employed in modeling or testing. this paper is published in the spie medical imaging conference - 2010. falcao, a.x., udupa, j.k., samarasekera, s., sharma, s., hirsch, b.e., and lotufo, r.a., 1998. user-steered image segmentation paradigms: live wire and live lane. graph. models image process. 60(4), pp. 233-260. kokkinos, i., maragos, p.
, 2009. synergy between object recognition and image segmentation using the expectation-maximization algorithm. ieee transactions on pattern analysis and machine intelligence, vol. 31(8), pp. 1486-1501. brejl, m., sonka, m., 2000. object localization and border detection criteria design in edge-based image segmentation: automated learning from examples. ieee transactions on medical imaging, vol. 19(10), pp. 973-985. fripp, j., crozier, s., warfield, s.k., ourselin, s., 2005. automatic initialisation of 3d deformable models for cartilage segmentation. in proceedings of digital image computing: techniques and applications, pp. | this paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3d images without performing elaborate searches or optimization. that is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. this is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly. (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. the evaluation results on a set of 20 routine clinical abdominal female and male ct data sets indicate the following: (1) incorporating a large number of objects improves the recognition accuracy dramatically.
(2) the recognition algorithm can be thought of as a hierarchical framework such that quick placement of the model assembly is defined as coarse recognition and delineation itself is known as the finest recognition. (3) scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) effective object recognition can make delineation most accurate. |
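the equal-space landmarking procedure described earlier (pick an anatomical starting point on each contour, then place a fixed number of equally spaced points along its arc length) can be sketched for a single 2d slice as follows; the function name and the linear-interpolation resampling are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def equal_space_landmarks(contour, start_index, n_points):
    """Resample a closed 2-d contour (m, 2) to n_points equally spaced
    landmarks, beginning at a chosen anatomical starting point."""
    # rotate so the annotated starting point comes first, then close the loop
    c = np.roll(contour, -start_index, axis=0)
    c = np.vstack([c, c[:1]])
    # cumulative arc length along the contour
    seg = np.linalg.norm(np.diff(c, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    # target arc-length positions, equally spaced around the perimeter
    targets = np.linspace(0.0, arc[-1], n_points, endpoint=False)
    x = np.interp(targets, arc, c[:, 0])
    y = np.interp(targets, arc, c[:, 1])
    return np.column_stack([x, y])
```

applied slice by slice, this yields the pseudo-3d correspondence described in the text: the same anatomical starting point and the same number of points per slice give landmark-to-landmark correspondence across training shapes.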
biological aggregations such as fish schools, bird flocks, bacterial colonies, and insect swarms have characteristic morphologies governed by the group members' interactions with each other and with their environment. the _endogenous_ interactions, _i.e._, those between individuals, often involve organisms reacting to each other in an attractive or repulsive manner when they sense each other either directly by sound, sight, smell or touch, or indirectly via chemicals, vibrations, or other signals. a typical modeling strategy is to treat each individual as a moving particle whose velocity is influenced by social (interparticle) attractive and repulsive forces. in contrast, the _exogenous_ forces describe an individual's reaction to the environment, for instance a response to gravity, wind, a chemical source, a light source, a food source, or a predator. the superposition of endogenous and exogenous forces can lead to characteristic swarm shapes; these equilibrium solutions are the subject of our present study. more specifically, our motivation is rooted in our previous modeling study of the swarming desert locust _schistocerca gregaria_. in some parameter regimes of our model (presented momentarily), locusts self-organize into swarms with a peculiar morphology, namely a bubble-like shape containing a dense group of locusts on the ground and a flying group of locusts overhead; see figure [fig:locust](b,c). the two are separated by an unoccupied gap. with wind, the swarm migrates with a rolling motion. locusts at the front of the swarm fly downwards and land on the ground. locusts on the ground, when overtaken by the flying swarm, take off and rejoin the flying group; see figure [fig:locust](c,d). the presence of an unoccupied gap and the rolling motion are found in real locust swarms.
as we will show throughout this paper, features of swarms such as dense concentrations and disconnected components (that is, the presence of gaps) arise as properties of equilibria in a general model of swarming. the model of is [eq:locusts] which describes interacting locusts with positions . the direction of locust swarm migration is strongly correlated with the direction of the wind and has little macroscopic motion in the transverse direction, so the model is two-dimensional, _i.e._, where the coordinate is aligned with the main current of the wind and is a vertical coordinate. as the velocity of each insect is simply a function of position, the model neglects inertial forces. this so-called kinematic assumption is common in swarming models, and we discuss it further in section [sec:discretemodel]. the first term on the right-hand side of ([eq:locusts]) describes endogenous forces; measures the force that locust exerts on locust . the first term of describes attraction, which operates with strength over a length scale and is necessary for aggregation. the second term is repulsive, and operates more strongly and over a shorter length scale in order to prevent collisions. time and space are scaled so that the repulsive strength and length scale are unity. the second term on the right-hand side of ([eq:locusts]) describes gravity, acting downwards with strength . the last term describes advection of locusts in the direction of the wind with speed . furthermore, the model assumes a flat impenetrable ground.
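a minimal numerical sketch of a kinematic locust model of this type is below: pairwise exponential repulsion with unit strength and length scale, exponential attraction of strength `G` over length scale `L`, gravity `g`, wind speed `U`, and a flat impenetrable ground. the function name and all parameter values are illustrative assumptions, not the original paper's settings.

```python
import numpy as np

def locust_step(x, dt=0.01, G=0.5, L=2.0, g=1.0, U=0.0):
    """One forward-Euler step of a kinematic locust swarm (sketch).

    x is an (n, 2) array of positions; column 0 is aligned with the wind,
    column 1 is height above flat ground."""
    n = len(x)
    v = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = x[i] - x[j]
            d = np.linalg.norm(r)
            # repulsion pushes i away from j; attraction pulls it back
            v[i] += (np.exp(-d) - G * np.exp(-d / L)) * r / (n * (d + 1e-12))
    v[:, 1] -= g                      # gravity
    v[:, 0] += U                      # wind advection
    # grounded locusts whose computed vertical velocity is negative stay put
    resting = (x[:, 1] <= 0.0) & (v[:, 1] < 0.0)
    v[resting] = 0.0
    x_new = x + dt * v
    x_new[:, 1] = np.maximum(x_new[:, 1], 0.0)   # impenetrable ground
    return x_new
```

the `resting` mask implements the stipulation discussed next in the text: grounded individuals whose computed vertical velocity is negative remain stationary, while airborne individuals follow the full velocity rule.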
since locusts rest and feed while grounded, their motion in that state is negligible compared to their motion in the air. thus we add to ([eq:locusts]) the stipulation that grounded locusts whose vertical velocity is computed to be negative under ([eq:locusts]) remain stationary. as mentioned above, for some parameters, ([eq:locusts]) forms a bubble-like shape. this can occur even in the absence of wind, that is, when ; see figure [fig:locust](b). the bubble is crucial, for it allows the swarm to roll in the presence of wind. as discussed in , states which lack a bubble in the absence of wind do not migrate in the presence of wind. conditions for bubble formation, even in the equilibrium state arising in the windless model, have not been determined; we will investigate this problem. some swarming models adopt a discrete approach as in our locust example above because of the ready connection to biological observations. a further advantage is that simulation of discrete systems is straightforward, requiring only the integration of ordinary differential equations. however, since biological swarms contain many individuals, the resulting high-dimensional systems of differential equations can be difficult or impossible to analyze. furthermore, for especially large systems, computation, though straightforward, may become a bottleneck. continuum models are more amenable to analysis. one well-studied continuum model is that of , a partial integrodifferential equation model for a swarm population density in one spatial dimension: the density obeys a conservation equation, and is the velocity field, which is determined via convolution with the antisymmetric pairwise endogenous force , the one-dimensional analog of a social force like the one in ([eq:locusts]). the general model ([eq:introeq]) displays at least three solution types as identified in . populations may concentrate to a point, reach a finite steady
state, or spread. in , we identified conditions on the social interaction force for each behavior to occur. these conditions map out a `` phase diagram '' dividing parameter space into regions associated with each behavior. similar phase diagrams arise in a dynamic particle model and its continuum analog. models that break the antisymmetry of (creating an asymmetric response of organisms to each other) display more complicated phenomena, including traveling swarms. many studies have sought conditions under which the population concentrates to a point mass. in a one-dimensional domain, collapse occurs when the force is finite and attractive at short distances. the analogous condition in higher dimensions also leads to collapse. one may also consider the case when the velocity includes an additional term describing an exogenous force . in this case, equilibrium solutions consisting of sums of point-masses can be linearly and nonlinearly stable, even for social forces that are repulsive at short distances. these results naturally lead to the question of whether a solution can be continued past the time at which a mass concentrates. early work on a particular generalization of ([eq:introeq]) suggests the answer is yes. for ([eq:introeq]) itself in arbitrary dimension, there is an existence theory beyond the time of concentration. some of the concentration solutions mentioned above are equilibrium solutions. however, there may be classical equilibria as well. for most purely attractive , the only classical steady states are constant in space, as shown via a variational formulation of the steady-state problem. however, these solutions are non-biological, as they contain infinite mass.
there do exist attractive-repulsive which give rise to compactly-supported classical steady states of finite mass. for instance, in simulations of ([eq:introeq]), we found classical steady state solutions consisting of compactly supported swarms with jump discontinuities at the edges of the support. in our current work, we will find equilibria that contain both classical and nonclassical components. many of the results reviewed above were obtained by exploiting the underlying gradient flow structure of ([eq:introeq2]). there exists an energy functional $w[\rho] = \frac{1}{2} \int_\mathbb{r} \int_\mathbb{r} \rho(x) \rho(y) q(x-y)\,dx\,dy + \int_\mathbb{r} f(x)\rho(x)\,dx$, which is minimized under the dynamics. this energy can be interpreted as the continuum analog of the summed pairwise energy of the corresponding discrete (particle) model. we will also exploit this energy to find equilibrium solutions and study their stability. in this paper, we focus on equilibria of swarms and ask the following questions: * what sorts of density distributions do swarming systems make? are they classical or nonclassical? * how are the final density distributions reached affected by endogenous interactions, exogenous forces, boundaries, and the interplay of these? * how well can discrete and continuum swarming systems approximate each other? to answer these questions, we formulate a general mathematical framework for discrete, interacting swarm members in one spatial dimension, also subject to exogenous forces. we then derive an analogous continuum model and use variational methods to seek minimizers of its energy. this process involves solution of a fredholm integral equation for the density. for some choices of endogenous forces, we are able to find exact solutions. perhaps surprisingly, they are not always classical.
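the discrete counterpart of this energy (summed pairwise potential plus exogenous potential) should be non-increasing under the induced kinematic dynamics, and this is easy to check numerically. the sketch below assumes a laplace-type repulsive potential q(x) = e^{-|x|}, a quadratic exogenous potential, and a social-mass scaling m = mass/n; these choices and the function names are illustrative assumptions.

```python
import numpy as np

def energy(x, q, f, mass=1.0):
    """Discrete analog of w[rho]:
    (1/2) m^2 sum_{i != j} q(x_i - x_j) + m sum_i f(x_i)."""
    n = len(x)
    m = mass / n
    pair = sum(q(x[i] - x[j]) for i in range(n) for j in range(n) if i != j)
    return 0.5 * m * m * pair + m * sum(f(xi) for xi in x)

def velocity(x, q_prime, f_prime, mass=1.0):
    """Kinematic velocities: a (scaled) negative gradient of the energy."""
    n = len(x)
    m = mass / n
    v = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                v[i] -= m * q_prime(x[i] - x[j])
        v[i] -= f_prime(x[i])
    return v
```

a small forward-euler step `x + dt * velocity(...)` then lowers the energy, mirroring the minimization property quoted above; the symmetry of the pairwise force also makes the total velocity sum to zero in the absence of a net exogenous push.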
in particular, they can involve $\delta$-function concentrations of mass at the domain boundary. the rest of this paper is organized as follows. in section [sec:formulation], we create the mathematical framework for our study, and derive conditions for a particular density distribution to be an equilibrium solution, and to be stable to various classes of perturbations. in sections [sec:repulsive] and [sec:morse], we demonstrate different types of swarm equilibria via examples. in section [sec:repulsive], we focus on purely repulsive endogenous interactions. we consider a bounded domain with no exogenous forces, a half-line subject to gravitational forces, and an unbounded domain subject to a quadratic exogenous potential, modeling attraction to a light, chemical, or nutrient source. for all three situations, we find exact solutions for swarm equilibria. for the first two examples, these equilibria consist of a density distribution that is classical in the interior of the domain, but contains $\delta$-functions at the boundaries. for the third example, the equilibrium is compactly supported with the density dropping discontinuously to zero at the edge of the support. for all three examples, we compare analytical solutions from the continuum framework to equilibria obtained from numerical simulation of the underlying discrete system. the two agree closely even for small numbers of discrete swarm members. section [sec:morse] is similar to section [sec:repulsive], but we now consider the more complicated case of endogenous interactions that are repulsive on short length scales and attractive over longer ones; such forces are typical for swarming biological organisms. in section [sec:locust-ground], we revisit locust swarms, focusing on their bubble-like morphology as described above, and on the significance of dimensionality.
in a one-dimensional model corresponding to a vertical slice of a wide locust swarm under the influence of social interactions and gravity, energy minimizers can reproduce concentrations of locusts on the ground and a group of locusts above the ground, but there cannot be a separation between the two groups. however, a quasi-two-dimensional model accounting for the influence of the swarm's horizontal extent does, in contrast, have minimizers which qualitatively correspond to the biological bubble-like swarms. consider identical interacting particles (swarm members) in one spatial dimension with positions . assume that motion is governed by newton's law, so that acceleration is proportional to the sum of the drag and motive forces. we will focus on the case where the acceleration is negligible and the drag force is proportional to the velocity. this assumption is appropriate when drag forces dominate momentum, commonly known in fluid dynamics as the low reynolds number or stokes flow regime. in the swarming literature, the resulting models, which are first-order in time, are known as _kinematic_. kinematic models have been used in numerous studies of swarming and collective behavior, including . we now introduce a general model with both endogenous and exogenous forces, as with the locust model ([eq:locusts]). the endogenous forces act between individuals and might include social attraction and repulsion; see for a discussion. for simplicity, we assume that the endogenous forces act in an additive, pairwise manner. we also assume that the forces are symmetric, that is, the force induced by particle on particle is the opposite of that induced by particle on particle . exogenous forces might include gravity, wind, and taxis towards light or nutrients. the governing equations take the form [eq:discretesystem] eventually we will examine the governing equations for a continuum limit of the discrete problem.
to this end, we have introduced a _social mass_ which scales the strength of the endogenous forces so as to remain bounded for . is the total social mass of the ensemble. ([eq:vee]) defines the velocity rule; is the endogenous velocity one particle induces on another, and is the exogenous velocity. from our assumption of symmetry of endogenous forces, is odd and in most realistic situations is discontinuous at the origin. each force, and , can be written as the gradient of a potential under the relatively minor assumption of integrability. as pointed out in , most of the specific models for mutual interaction forces proposed in the literature satisfy this requirement. many exogenous forces, including gravity and common forms of chemotaxis, do so as well. under this assumption, we rewrite ([eq:discretesystem]) as a gradient flow, where the potential is [eq:discrete_gradient] the double sum describes the endogenous forces and the single sum describes the exogenous forces. also, is the mutual interaction potential, which is even, and is the exogenous potential. the flow described by ([eq:discretegradient1]) will evolve towards minimizers of the energy. up to now, we have defined the problem on . in order to confine the problem to a particular domain , one may use the artifice of letting the exogenous potential tend to infinity on the complement of . while this discrete model is convenient from a modeling and simulation standpoint, it is difficult to analyze. presently, we will derive a continuum analog of ([eq:discretesystem]). this continuum model will allow us to derive equilibrium solutions and determine their stability via the calculus of variations and integral equation methods. to derive a continuum model, we begin by describing our evolving ensemble of discrete particles with a density function equal to a sum of $\delta$-functions. (for brevity, we suppress the dependence of in the following discussion.
) our approach here is similar to . these $\delta$-functions have strength and are located at the positions of the particles: the total mass is where is the domain of the problem. using ([eq:deltafuncs]), we write the discrete velocity in terms of a continuum velocity . that is, we require where by conservation of mass, the density obeys with no mass flux at the boundary. we now introduce an energy functional $w[\rho]$ for nontrivial perturbations, which follows from ([eq:expand_w]) being exact. to summarize, we have obtained the following results: * equilibrium solutions satisfy the fredholm integral equation ([eq:fie]) and the mass constraint ([eq:mass1]). * the solution is a local and global minimizer with respect to the first class of perturbations (those with support in ) if in ([eq:second_variation]) is positive. * the solution is a local minimizer with respect to the second (more general zero-mass) class of perturbations if satisfies ([eq:stable]). if in addition is positive for these perturbations, then is a global minimizer as well. in practice, we solve the integral equation ([eq:fie]) to find candidate solutions. then, we compute to determine whether is a local minimizer. finally, when possible, we show the positivity of to guarantee that is a global minimizer. as the continuum limit replaces individual particles with a density, we need to make sure the continuum problem inherits a physical interpretation for the underlying problem. if we think about perturbing an equilibrium configuration, we note that mass cannot `` tunnel '' between disjoint components of the solution.
as such, we define the concept of a multi-component swarm equilibrium. suppose the swarm's support can be divided into a set of disjoint, closed, connected components, that is . we define a swarm equilibrium as a configuration in which each individual swarm component is in equilibrium . we can still define $\lambda(x) = \int_{\omega_{\bar\rho}} q(x-y)\,\bar\rho(y)\,dy + f(x)$, but now in . we can now define a swarm minimizer. we say a swarm equilibrium is a swarm minimizer if for some neighborhood of each component of the swarm. in practice this means that the swarm is an energy minimizer for infinitesimal redistributions of mass in the neighborhood of each component. this might also be called a lagrangian minimizer in the sense that the equilibrium is a minimizer with respect to infinitesimal lagrangian deformations of the distributions. it is crucial to note that even if a solution is a global minimizer, other multi-component swarm minimizers may still exist. these solutions are local minimizers and consequently a global minimizer may not be a global attractor under the dynamics of ([eq:pde]). in this section we discuss the minimization problem formulated in section [sec:formulation]. it is helpful for expository purposes to make a concrete choice for the interaction potential. as previously mentioned, in many physical, chemical, and biological applications, the pairwise potential is symmetric. additionally, repulsion dominates at short distances (to prevent collisions) and the interaction strength approaches zero for very long distances. a common choice for is the morse potential with parameters chosen to describe long-range attraction and short-range repulsion. for the remainder of this section, we consider a simpler example where is the laplace distribution which represents repulsion with strength decaying exponentially in space. when there is no exogenous potential, , and when the domain is infinite, _e.g
_, the swarm will spread without bound. the solutions asymptotically approach the barenblatt solution to the porous medium equation as shown in . however, when the domain is bounded or when there is a well in the exogenous potential, bounded swarms are observed both analytically and numerically, as we will show. figure [fig:repulsion_schematic] shows solutions for three cases: a bounded domain with no exogenous potential, a gravitational potential on a semi-infinite domain, and a quadratic potential well on an infinite domain. in each case, a bounded swarm solution is observed but the solutions are not necessarily continuous and can even contain $\delta$-function concentrations at the boundaries. we discuss these three example cases in detail later in this section. first, we will formulate the minimization problem for the case of the laplace potential. we will attempt to solve the problem classically; when the solution has compact support contained within the domain we find solutions that are continuous within the support and may have jump discontinuities at the boundary of the support. however, when the boundary of the support coincides with the boundary of the domain, the classical solution may break down and it is necessary to include a distributional component in the solution. we also formulate explicit conditions for the solutions to be global minimizers. we then apply these results to the three examples mentioned above. recall that for to be a steady solution, it must satisfy the integral equation ([eq:fie]) subject to the mass constraint ([eq:mass1]). for to be a local minimizer, it must also satisfy ([eq:stable]). finally, recall that for a solution to be a global minimizer, the second variation ([eq:second_variation]) must be positive. we saw that if , this is guaranteed.
for ([eq:laplace]), and so for the remainder of this section, we are able to ignore the issue of . any local minimizer that we find will be a global minimizer. additionally, for the remainder of this section, we restrict our attention to cases where the support of the solution is a single interval in ; in other words, the minimizing solution has a connected support. the reason that we are able to make this restriction follows from the notion of swarm minimization, discussed above. in fact, we can show that there are no multi-component swarm minimizers for the laplace potential as long as the exogenous potential is convex, that is, on . to see this, assume we have a swarm minimizer with at least two disjoint components. consider in the gap between two components so that . we differentiate twice to obtain note that as in . by assumption . consequently, in and so is convex upwards in the gap. also, at the endpoints of the gap. we conclude from the convexity that must be less than near one of the endpoints. this violates the condition of swarm minimization from the previous section, and hence the solution is not a swarm minimizer. since swarm minimization is a necessary condition for global minimization, we now, as discussed, restrict attention to single-component solutions. for concreteness, assume the support of the solution is $[\alpha,\beta]$ and any mass ; we can find a solution to ([eq:fie]) with smooth in the interior and with a concentration at the endpoints. however, we have not yet addressed the issue of being non-negative, nor have we considered whether it is a minimizer. we next consider whether the extremal solution is a minimizer, which involves the study of ([eq:stable2]). we present a differential operator method that allows us to compute and deduce sufficient conditions for to be a minimizer. we start by factoring the differential operator where .
applying these operators to the interaction potential , we see that substituting in ( [ eq : fie - sol2 ] ) into our definition of in ( [ eq : stable2 ] ) yields now consider applying to ( [ eq : one ] ) at a point in .we see that = { \mathcal{d}^- } [ \lambda],\ ] ] where we've used the fact that in .if we let and let decrease to zero , the integral term vanishes and solving for yields the first half of ( [ eq : ab ] ) .a similar argument near yields the value of .assuming does not coincide with an endpoint of , we now consider the region , which is to the left of the support . again , applying to ( [ eq : fie ] ) simplifies the equation ; we can check that both the integral term and the contribution from the -functions are annihilated by this operator , from which we deduce that = 0 \qquad \rightarrow \qquad f(x)-\lambda(x ) = ce^x,\ ] ] where is an unknown constant .a quick check shows that if is continuous , then is continuous at the endpoints of so that .this in turn determines , yielding ^{x-\alpha } \qquad { \rm for } \quad x \leq \alpha.\ ] ] a similar argument near yields ^{\beta -x } \qquad { \rm for } \quad x \geq \beta.\ ] ] as discussed in section [ sec : minimizers ] , for to be a minimizer we wish for for and .a little algebra shows that this is equivalent to [ eq : mincon ] . if and are both strictly inside , then ( [ eq : mincon ] ) constitutes sufficient conditions for the extremal solution to be a global minimizer ( recalling that ) . we may also derive a necessary condition at the endpoints of the support from ( [ eq : mincon ] ) .as increases to , we may apply l'hôpital's rule and this equation becomes equivalent to the condition , as expected .a similar calculation letting decrease to implies that .however , since is a density , we are looking for positive solutions .
hence, either or coincides with the left endpoint of .similarly , either or coincides with the right edge of .this is consistent with the result ( [ eq : nodeltas2 ] ) which showed that -functions can not occur in the interior of . in summary, we come to two conclusions : * a globally minimizing solution contains a -function only if a boundary of the support of the solution coincides with a boundary of the domain . * a globally minimizing solution must satisfy ( [ eq : mincon ] ) .we now consider three concrete examples for and .we model a one - dimensional biological swarm with repulsive social interactions described by the laplace potential .we begin with the simplest possible case , namely no exogenous potential , and a finite domain which for convenience we take to be the symmetric interval ] with .cross - hatched boxes indicate the boundary of the domain .the solid line is the classical solution .dots correspond to the numerically - obtained equilibrium of the discrete system ( [ eq : discretesystem ] ) with swarm members .the density at each lagrangian grid point is estimated using the correspondence discussed in section ( [ sec : contmodel ] ) and pictured in figure [ fig : delta_schematic ] .each `` lollipop '' at the domain boundary corresponds to a -function of mass in the analytical solution , and simultaneously to a superposition of swarm members in the numerical simulation .hence , we see excellent agreement between the continuum minimizer and the numerical equilibrium even for this relatively small number of lagrangian points .we now consider repulsive social interactions and an exogenous gravitational potential .the spatial coordinate describes the elevation above ground .consequently , is the semi - infinite interval . then with , shown in figure [ fig : repulsion_schematic](f ) . as we know from ( [ eq : singlecomponent ] ) that the minimizing solution has a connected support , _i.e. 
_ , it is a single component .moreover , translating this component downward decreases the exogenous energy while leaving the endogenous energy unchanged .thus , the support of the solution must be ] which is positive , and hence the solution is globally stable . for previous calculation naively implies .since can not be negative , the minimizer in this case is a -function at the origin , namely , shown in figure [ fig : repulsion_schematic](d ) . in this case , from ( [ eq : gravitylambda ] ) and from ( [ eq : lambdaright ] ) .it follows that the first inequality follows from a taylor expansion .the second follows from our assumption . since the solution is a global minimizer . in summary, there are two cases . when , the globally stable minimizer is a -function at the origin . when there is a globally stable minimizer consisting of a -function at the origin which is the left - hand endpoint of a compactly - supported classical swarm .the two cases are shown schematically in figures [ fig : repulsion_schematic](de ) .figure [ fig : repulsion_numerics](b ) compares analytical and numerical results for the latter ( ) case with and .we use swarm members for the numerical simulation . 
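a minimal sketch of the discrete computation used for such comparisons ( parameter values are illustrative , not those of the figure ; we assume the repulsive kernel q(x) = e^{-|x|} and gravity f(x) = gx on the half - line ) :

```python
import numpy as np

def laplace_gravity_equilibrium(N=80, M=4.0, g=1.0, dt=0.02, steps=10000):
    """Gradient flow x_i' = -sum_j m_j Q'(x_i - x_j) - g, projected onto x >= 0,
    for the repulsive kernel Q(x) = exp(-|x|), i.e. Q'(x) = -sign(x) exp(-|x|)."""
    m = M / N
    x = np.linspace(1.0, 2.0, N)                 # start the swarm above the ground
    for _ in range(steps):
        dx = x[:, None] - x[None, :]             # pairwise separations
        f_social = m * np.sum(np.sign(dx) * np.exp(-np.abs(dx)), axis=1)
        x = np.maximum(x + dt * (f_social - g), 0.0)   # project onto the half-line
    return x

x_eq = laplace_gravity_equilibrium()
```

because the pairwise forces cancel in the total momentum balance , gravity can only be balanced by the wall at x = 0 , so some members must end up pinned there ; the stack of coincident members at the origin is the discrete analogue of the -function in the analytical solution .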
the numerical ( dots ) and analytical ( line ) agree , as does the nonclassical part of the solution , pictured as the `` lollipop '' which represents a superposition of swarm members in the numerical simulation having total mass , and simultaneously a -function of mass in the analytical solution .we now consider the infinite domain with a quadratic exogenous potential well , pictured in figure [ fig : repulsion_schematic](c ) .this choice of a quadratic well is representative of a generic potential minimum , as might occur due to a chemoattractant , food source , or light source .thus where controls the strength of the potential .as we know from ( [ eq : singlecomponent ] ) that the minimizing solution is a single component .we take the support of the solution to be ] and then allow for -functions at the boundaries . once again , we will see that minimizers contain -functions only when the boundary of the support , , coincides with the boundary of the domain , . for convenience , define the differential operators and , and apply to ( [ eq : fie ] ) to obtain , \quad x \in { { \omega_{{{\bar \rho } } } } } , \label{eq : morselocal}\ ] ] where thus , we guess the full solution to the problem is obtained by substituting ( [ eq : morseansatz ] ) into ( [ eq : mass1 ] ) and ( [ eq : fie ] ) which yields where in .we begin by considering the amplitudes and of the distributional component of the solution .we factor the differential operators where and where .note that h(x - y),\ ] ] where is the heaviside function .now we apply to ( [ eq : morseintegraleq ] ) at a point in , which yields { { { { { \bar \rho}}_*}}}(y)\,dy & & \\\mbox{}+a { \mathcal{p}^-}{\mathcal{q}^-}q(x-\alpha ) = { \mathcal{p}^-}{\mathcal{q}^-}\ { \lambda -f(x)\}. 
\nonumber\end{aligned}\ ] ] taking the limit yields where we have used the fact that .a similar calculation using the operators and focusing near yields . eqs .( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) relate the amplitudes of the -functions at the boundaries to the value of the classical solution there .further solution of the problem requires to be specified . in the case where , solving ( [ eq : morselocal ] ) for and solving ( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) for and yields an equilibrium solution .one must check that the solution is non - negative and then consider the solution's stability to determine if it is a local or global minimizer . in the case where is contained in the interior of , we know that as discussed in section [ sec : absence ] .we consider this case below .suppose is contained in the interior of . then . following section [ sec : funcmin ] , we try to determine when in and when , which constitute necessary and sufficient conditions for to be a global minimizer .we apply to ( [ eq : morseintegraleq ] ) at a point . the integral term and the terms arising from the functions vanish .the equation is simply .we write the solution as the two constants are determined as follows . from ( [ eq : morseintegraleq ] ) , is a continuous function , and thus we derive a jump condition on the derivative to get another equation for .we differentiate ( [ eq : morseintegraleq ] ) and determine that is continuous . however , since for , .
substituting this result into the derivative of ( [ eq : lambdamorse ] ) and letting increase to , we find the solution to ( [ eq : lambdacontinuity ] ) and ( [ eq : lambdaprime ] ) is [ eq : kvals ] now that is known near we can compute when , at least near the left side of .taylor expanding around , we find the quadratic term in ( [ eq : lambdatayl ] ) has coefficient where the second line comes from substituting ( [ eq : morseboundary1 ] ) with and noting that the classical part of the solution must be nonnegative since it is a density . furthermore , since we expect ( this can be shown a posteriori ) , we have that the quadratic term in ( [ eq : lambdatayl ] ) is positive .a similar analysis holds near the boundary .therefore , for in a neighborhood outside of . stated differently , the solution ( [ eq : morseansatz ] ) is a swarm minimizer , that is , it is stable with respect to infinitesimal redistributions of mass . the domain is determined through the relations ( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) , which , when , become [ eq : bcs ] in the following subsections , we will consider the solution of the continuum system ( [ eq : fie ] ) and ( [ eq : mass1 ] ) with no external potential , . we consider two cases for the morse interaction potential ( [ eq : morse ] ) : first , the catastrophic case on , for which the above calculation applies , and second , for the h - stable case on a finite domain , in which case and there are -concentrations at the boundary .exact solutions for cases with an exogenous potential , can be straightforwardly derived , though the algebra is even more cumbersome and the results unenlightening . in this case , in ( [ eq : fie ] ) and in ( [ eq : morse ] ) so that in ( [ eq : morselocal ] ) .the solution to ( [ eq : morselocal ] ) is where in the absence of an external potential , the solution is translationally invariant .consequently , we may choose the support to be an interval ] . 
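a sketch of the corresponding discrete computation ( illustrative parameters ; we take the morse kernel q(x) = -e^{-|x|} + g e^{-|x|/l} with g l < 1 , the catastrophic regime in the convention assumed here ) , illustrating that the equilibrium support is independent of the total mass :

```python
import numpy as np

def morse_equilibrium(N=60, M=1.0, G=2.0, L=0.25, dt=0.02, steps=12000):
    """Gradient flow for the Morse kernel Q(x) = -exp(-|x|) + G exp(-|x|/L)
    (attraction range 1, repulsion amplitude G, repulsion range L).
    With G*L < 1 the swarm contracts to a finite-width clump."""
    m = M / N
    x = np.linspace(-2.0, 2.0, N)
    for _ in range(steps):
        dx = x[:, None] - x[None, :]
        qprime = np.sign(dx) * (np.exp(-np.abs(dx)) - (G / L) * np.exp(-np.abs(dx) / L))
        x = x - dt * m * np.sum(qprime, axis=1)
    return x

width_M1 = np.ptp(morse_equilibrium(M=1.0))
width_M3 = np.ptp(morse_equilibrium(M=3.0))
```

scaling every mass by the same factor rescales all velocities uniformly , so the equilibrium positions — and hence the support — are unchanged ; only the density scales with the mass , consistent with the continuum result that such swarms become denser as the population grows .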
as before , in ( [ eq : fie ] ) but now in ( [ eq : morse ] ) so that in ( [ eq : morselocal ] ) .the classical solution to ( [ eq : morselocal ] ) is where we will again invoke symmetry to assume .the minimizer will be the classical solution together with -functions on the boundary , again by symmetry , .consequently , the solution can be written as .\ ] ] substituting into the integral equation ( [ eq : fie ] ) and the mass constraint ( [ eq : mass1 ] ) will determine the constants , and .the integral operator produces modes spanned by .this produces two homogeneous , linear equations for , and .the mass constraint ( [ eq : mass1 ] ) produces an inhomogeneous one , namely an equation linear in , , and for the mass .we have the three dimensional linear system the solution is [ eq : hstablesoln ] }{2\phi } , \\ c & = & -\frac{{{\tilde \mu}}m ( 1-{{\tilde \mu}}^2 ) ( 1-{{\tilde \mu}}^2 l^2)}{\phi } , \\\lambda & = & \frac{m { { \tilde \mu}}(1-{{g}}l^2)\left [ ( 1+{{\tilde \mu}})(1+{{\tilde \mu}}l ) + ( 1-{{\tilde \mu}})(1-{{\tilde \mu}}l ) \right]}{\phi},\end{aligned}\ ] ] where for convenience we have defined for this h - stable case , which ensures that for nontrivial perturbations .this guarantees that the solution above is a global minimizer . in the limit of large domain size ,the analytical solution simplifies substantially . to leading order ,the expressions ( [ eq : hstablesoln ] ) become note that is exponentially small except in a boundary layer near each edge of , and therefore the solution is nearly constant in the interior of .figure [ fig : morse_numerics](b ) compares analytical and numerical results for an example case with a relatively small value of .we take total mass and set the domain half - width to be . 
the interaction potential parameters and .the solid line is the classical solution .dots correspond to the numerically - obtained equilibrium of the discrete system ( [ eq : discretesystem ] ) with swarm members .each `` lollipop '' at the domain boundary corresponds to a -function of mass in the analytical solution , and simultaneously to a superposition of swarm members in the numerical simulation .we now return to the locust swarm model of , discussed also in section [ sec : intro ] .recall that locust swarms are observed to have a concentration of individuals on the ground , a gap or `` bubble '' where the density of individuals is near zero , and a sharply delineated swarm of flying individuals .this behavior is reproduced in the model ( [ eq : locusts ] ) ; see figure [ fig : locust](b ) .in fact , figure [ fig : locust](c ) shows that the bubble is present even when the wind in the model is turned off , and only endogenous interactions and gravity are present . to better understand the structure of the swarm , we consider the analogous continuum problem . to further simplify the model , we note that the vertical structure of the swarm appears to depend only weakly on the horizontal direction , and thus we will construct a _ quasi - two - dimensional _ model in which the horizontal structure is assumed uniform . in particular , we will make a comparison between a one - dimensional and a quasi - two - dimensional model . 
both models take the form of the energy minimization problem ( [ eq : fie ] ) on a semi - infinite domain , with an exogenous potential describing gravity .the models differ in the choice of the endogenous potential , which is chosen to describe either one - dimensional or quasi - two - dimensional repulsion .the one - dimensional model is precisely that which we considered in section [ sec : grav ] .there we saw that minimizers of the one - dimensional model can reproduce the concentrations of locusts on the ground and a group of individuals above the ground , but there cannot be a separation between the grounded and airborne groups .we will show below that for the quasi - two - dimensional model , this is not the case , and indeed , some minimizers have a gap between the two groups .as mentioned , the one - dimensional and quasi - two - dimensional models incorporate only endogenous repulsion .however , the behavior we describe herein does not change for the more biologically realistic situation when attraction is present .we consider the repulsion - only case in order to seek the minimal mechanism responsible for the appearance of the gap .we consider a swarm in two dimensions , with spatial coordinate .we will eventually confine the vertical coordinate to be nonnegative , since it describes the elevation above the ground at .we assume the swarm to be uniform in the horizontal direction , so that .we construct a quasi - two - dimensional interaction potential , letting and , this yields . it is straightforward to show that the two - dimensional energy per unit horizontal length is given by = \frac{1}{2 } \int_{{\omega}}\int_{{\omega}}\rho(x_1 ) \rho(y_1 ) q_{2d}(x_1-y_1)\,dx_1\,dy_1 + \int_{{\omega}}f(x_1)\rho(x_1)\,dx_1,\ ] ] where the exogenous force is and the domain is the half - line .this is exactly analogous to the one - dimensional problem ( [ eq : continuum_energy ] ) , but with particles interacting according to the quasi - two - dimensional endogenous
potential .similarly , the corresponding dynamical equations are simply ( [ eq : cont_velocity ] ) and ( [ eq : pde ] ) but with endogenous force . for the laplace potential ( [ eq : laplace ] ) , the quasi - two - dimensional potential is . this integral can be manipulated for ease of calculation , where ( [ eq : q2db ] ) comes from symmetry , ( [ eq : q2dc ] ) comes from letting , ( [ eq : q2dd ] ) comes from letting , and ( [ eq : q2de ] ) comes from the trigonometric substitution . from an asymptotic expansion of ( [ eq : q2dd ] ) , we find that for small , whereas for large , .\ ] ] in our numerical study , it is important to have an efficient method of computing values of . in practice , we use ( [ eq : smallz ] ) for small , ( [ eq : largez ] ) for large , and for intermediate values of we interpolate from a lookup table pre - computed using ( [ eq : q2de ] ) .the potential is shown in figure [ fig : q2d ] .note that is horizontal at , and monotonically decreasing in .the negative of the slope reaches a maximum of . the quantity plays a key role in our analysis of minimizers below .the fourier transform of can be evaluated exactly using the integral definition ( [ eq : q2dc ] ) and interchanging the order of integration of and to obtain which we note is positive , so local minimizers are global minimizers per the discussion in section [ sec : minimizers ] .we model a quasi - two - dimensional biological swarm with repulsive social interactions of laplace type and subject to an exogenous gravitational potential , .the spatial coordinate describes the elevation above ground .consequently , is the semi - infinite interval . from section [ sec : grav ] , recall that for the one - dimensional model , is a minimizer for some , corresponding to all swarm members pinned by gravity to the ground .we consider this same solution as a candidate minimizer for the quasi - two - dimensional problem . in this case , above is actually a minimizer for any mass .
to see this , we can compute , since , increases away from the origin and hence is at least a swarm minimizer . in fact , if , is a global minimizer because which guarantees that is strictly increasing for as shown in figure [ fig : lambda](a ) . because it is strictly increasing , for .given this fact , and additionally , since as previously shown , is a global minimizer .this means that if an infinitesimal amount of mass is added anywhere in the system , it will descend to the origin .consequently , we believe this solution is the global attractor ( though we have not proven this ) . note that while the condition is sufficient for to be a global minimizer , it is not necessary . as alluded to above , it is not necessary that be strictly increasing , only that for .this is the case for for , where .figure [ fig : lambda](b ) shows a case when .although for , has a local minimum .in this situation , although the solution with the mass concentrated at the origin is a global minimizer , it is _ not _ a global attractor .we will see that a small amount of mass added near the local minimum of will create a swarm minimizer , which is dynamically stable to perturbations .figure [ fig : lambda](c ) shows the critical case when . in this case the local minimum of at satisfies and .figure [ fig : lambda](d ) shows the case when and now in the neighborhood of the minimum . in this case the solution with the mass concentrated at the origin is only a swarm minimizer ; the energy of the system can be reduced by transporting some of the mass at the origin to the neighborhood of the local minimum . when it is possible to construct a continuum of swarm minimizers .we have conducted a range of simulations for varying and have measured two basic properties of the solutions .we set and use in all simulations of the discrete system .initially , all the swarm members are high above the ground and we evolve the simulation to equilibrium .
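a sketch of such a simulation ( illustrative parameters , not those of the figures ) ; instead of a lookup table we use the closed form \int_{-\infty}^{\infty} e^{-\sqrt{z^2+w^2}} dw = 2|z|k_1(|z|) , a standard bessel - function identity , so that the kernel slope is q_{2d}'(z) = -2 z k_0(|z|) :

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0, k1

# consistency check of the Bessel closed form at z = 1
chk, _ = quad(lambda w: np.exp(-np.hypot(1.0, w)), -np.inf, np.inf)

def quasi2d_equilibrium(N=40, M=4.0, g=0.3, dt=0.05, steps=15000):
    """Gradient flow x_i' = -sum_j m_j Q2d'(x_i - x_j) - g on x >= 0,
    with Q2d'(z) = -2 z K0(|z|), which is smooth and vanishes at z = 0."""
    m = M / N
    x = np.linspace(2.0, 3.0, N)              # all members start high up
    for _ in range(steps):
        dx = x[:, None] - x[None, :]
        az = np.maximum(np.abs(dx), 1e-12)    # guard K0's log singularity at 0
        f_social = 2.0 * m * np.sum(dx * k0(az), axis=1)
        x = np.maximum(x + dt * (f_social - g), 0.0)
    return x

x2d = quasi2d_equilibrium()
```

as in the one - dimensional case , gravity is unbalanced unless the wall at x = 0 carries some of the load , so the relaxed state always has members pinned to the ground ; whether a detached airborne component also survives depends on the mass and on g .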
figure [ fig : quasi2dnumerics](a ) measures the mass on the ground as a percentage of the total swarm mass .the horizontal blue line indicates ( schematically ) that for , the equilibrium consists of all mass concentrated at the origin ; as discussed above , this state is the global minimizer and ( we believe ) the global attractor . as mass is increased through , the equilibrium is a swarm minimizer consisting of a classical swarm in the air separated from the origin , and some mass concentrated on the ground . as increases , the proportion of mass located on the ground decreases monotonically .figure [ fig : quasi2dnumerics](b ) visualizes the support of the airborne swarm , which exists only for ; the lower and upper data represent the coordinates of the bottom and top of the swarm , respectively . as mass is increased , the span of the swarm increases monotonically . as established above , when , swarm minimizers exist with two components .in fact , there is a continuum of swarm minimizers with different proportions of mass in the air and on the ground .which minimizer is obtained in simulation depends on initial conditions .figure [ fig : lambda2 ] shows two such minimizers for and , and the associated values of ( each obtained from a different initial condition ) . recalling that for a swarm minimizer , each connected component of the swarm , is constant , we define for the grounded component and for the airborne component . in figure [ fig : lambda2](ab ) , of the mass is contained in the grounded component . in this case , indicating that the total energy could be reduced by transporting swarm members from the air to the ground .
in this case , indicating that the total energy could be reduced by transporting swarm members from the ground to the air .note that by continuity , we believe a state exists where , which would correspond to a global minimizer .however , this state is clearly not a global attractor and hence will not necessarily be achieved in simulation .we've demonstrated that for one can construct a continuum of swarm minimizers with a gap between grounded and airborne components , and that for these solutions can have a lower energy than the state with the density concentrated solely on the ground .by contrast with the one - dimensional system of section [ sec : grav ] in which no gap is observed , these gap states appear to be the generic configuration for sufficiently large mass in the quasi - two - dimensional system . we conclude that dimensionality is a crucial element for the formation of the bubble - like shape of real locust swarms .in this paper we developed a framework for studying equilibrium solutions for swarming problems .we related the discrete swarming problem to an associated continuum model .this continuum model has an energy formulation which enables analysis of equilibrium solutions and their stability .we derived conditions for an equilibrium solution to be a local minimizer , a global minimizer , and/or a swarm minimizer , that is , stable to infinitesimal lagrangian deformations of the mass .we found many examples of compactly supported equilibrium solutions , which may be discontinuous at the boundary of the support .in addition , when a boundary of the support coincides with the domain boundary , a minimizer may contain a -concentration there .for the case of endogenous repulsion modeled by the laplace potential , we computed three example equilibria . on a bounded domain ,the minimizer is a constant density profile with -functions at each end .
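this first example is simple enough to verify directly ; a sketch ( we assume q(x) = e^{-|x|} , domain [ -1 , 1 ] and unit mass ; the amplitudes follow from requiring the induced potential to be constant on the support , giving interior density m/(2d+2) and boundary point masses of the same value ) :

```python
import numpy as np

def interaction_energy(x, w):
    """W = (1/2) sum_ij w_i w_j Q(x_i - x_j) with Q(r) = exp(-|r|), for a
    discretized measure with nodes x and weights w (deltas are extra nodes)."""
    return 0.5 * np.sum(w[:, None] * w[None, :] * np.exp(-np.abs(x[:, None] - x[None, :])))

d, M, n = 1.0, 1.0, 2000
xs = np.linspace(-d, d, n)
wq = np.full(n, 2.0 * d / n)                    # simple uniform quadrature weights

rho0 = M / (2.0 * d + 2.0)                      # constant-potential amplitudes
x1 = np.concatenate([xs, [-d, d]])              # interior density rho0 plus
w1 = np.concatenate([rho0 * wq, [rho0, rho0]])  # boundary masses rho0, mass M

x2, w2 = xs, (M / (2.0 * d)) * wq               # competitor: uniform, no deltas

W_min, W_unif = interaction_energy(x1, w1), interaction_energy(x2, w2)
```

the delta - decorated profile beats the purely classical competitor of the same mass , consistent with the boundary concentrations being genuine features of the minimizer rather than numerical artifacts .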
on a half - line with an exogenous gravitational potential , the minimizer is a compactly supported linear density profile with a -function at the origin . in free space with an exogenous quadratic potential , the minimizer is a compactly supported inverted parabola with jump discontinuities at the endpoints .each of the aforementioned solutions is also a global minimizer . to extend the results above , we also found analytical solutions for endogenous attractive - repulsive forces , modeled with the morse potential . in the case that the social force was in the catastrophic statistical mechanical regime , we found a compactly supported solution whose support is independent of the total population mass .this means that within the modeling assumptions , swarms become denser with increasing mass . for the case of an h - stable social force , there is no equilibrium solution on an infinite domain . on a finite domain , mass is partitioned between a classical solution in the interior and -concentrations on the boundary .we recall that for the locust model of ( see figure [ fig : locust ] ) a concentration of locusts occurs on the ground , with a seemingly classical component above , separated by a gap .none of the one - dimensional solutions ( for the laplace and morse potentials ) discussed above contain a gap , that is , multiple swarm components that are spatially disconnected , suggesting that this configuration is intrinsically two - dimensional .
to study this configuration , we computed a quasi - two - dimensional potential corresponding to a horizontally uniform swarm .we demonstrated numerically that for a wide range of parameters , there exists a continuous family of swarm minimizers that consist of a concentration on the ground and a disconnected , classical component in the air , reminiscent of our earlier numerical studies of a discrete locust swarm model .we believe that the analytical solutions we found provide a sampling of the rich tapestry of equilibrium solutions that manifest in the general model we have considered , and in nature .we hope that these solutions will inspire further analysis and guide future modeling efforts .cmt acknowledges support from the nsf through grants dms-0740484 and dms-1009633 .ajb gratefully acknowledges the support from the nsf through grants dms-0807347 and dms-0730630 , and the hospitality of robert kohn and the courant institute of mathematical sciences .we both wish to thank the institute for mathematics and its applications where portions of this work were completed . , _ optimal transportation , dissipative pdes and functional inequalities _ , in optimal transportation and applications ( martina franca , 2001 ) , vol .1813 of lecture notes in math . ,springer , berlin , 2003 , pp .

we study equilibrium configurations of swarming biological organisms subject to exogenous and pairwise endogenous forces . beginning with a discrete dynamical model , we derive a variational description of the corresponding continuum population density . equilibrium solutions are extrema of an energy functional , and satisfy a fredholm integral equation . we find conditions for the extrema to be local minimizers , global minimizers , and minimizers with respect to infinitesimal lagrangian displacements of mass . in one spatial dimension , for a variety of exogenous forces , endogenous forces , and domain configurations , we find exact analytical expressions for the equilibria .
these agree closely with numerical simulations of the underlying discrete model .the exact solutions provide a sampling of the wide variety of equilibrium configurations possible within our general swarm modeling framework . the equilibria typically are compactly supported and may contain -concentrations or jump discontinuities at the edge of the support . we apply our methods to a model of locust swarms , which are observed in nature to consist of a concentrated population on the ground separated from an airborne group . our model can reproduce this configuration ; quasi - two - dimensionality of the model plays a critical role . swarm , equilibrium , aggregation , integrodifferential equation , variational model , energy , minimizer , locust
inflation generically predicts a primordial spectrum of density perturbations which is almost precisely gaussian . in recent years the small non - gaussian component has emerged as an important observable , and will be measured with good precision by the _ planck surveyor _ satellite . in the near future , as observational data become more plentiful , it will be important to understand the non - gaussian signal expected in a wide variety of models , and to anticipate what conclusions can be drawn about early - universe physics from a prospective detection of primordial non - gaussianity . in this paper , we present a novel method for calculating the primordial non - gaussianity produced by super - horizon evolution in two - field models of inflation .our method is based on the real - space distribution of inflationary field values on a flat hypersurface , which can be thought of as a probability density function whose evolution is determined by a form of the collisionless boltzmann equation .
using a cumulant representation to expand our density function around an exact gaussian , we derive ordinary differential equations which evolve the moments of this distribution .further , we show how these moments are related to observable quantities , such as the dimensionless bispectrum measured by .we present numerical results which show that this method gives good agreement with other techniques .it is not necessary to make any assumptions about the inflationary model beyond requiring a canonical kinetic term and applying the slow - roll approximation .while there are already numerous methods for computing the super - horizon contribution to , including the widely used formalism , we believe the one reported here has a number of advantages .first , this new technique is ideally suited to unraveling the various contributions to .this is because we follow the moments of the inflaton distribution directly , which makes it straightforward to identify large contributions to the skewness or other moments .the evolution equation for each moment is simple and possesses clearly identifiable source terms , which are related to the properties of the inflationary flow on field space .this offers a clear separation between two key sources of primordial non - gaussianity .one of these is the intrinsic non - linearity associated with evolution of the probability density function between successive flat hypersurfaces ; the other is a gauge transformation from field fluctuations to the curvature perturbation , .it would be difficult or impossible to observe this split within the context of other calculational schemes , such as the conventional formalism .a second advantage of our method is connected with the computational cost of numerical implementation .analytic formulas for are known in certain cases , mostly in the context of the framework , but only for very specific choices of the potential or hubble rate .these formulas become increasingly cumbersome as the number of
fields increases , or if one studies higher moments . in the future , it seems clear that studies of complex models with many fields will increasingly rely on numerical methods .the numerical framework requires the solution to a number of ordinary differential equations which scales exponentially with the number of fields .since some models include hundreds of fields , this may present a significant obstacle .moreover , the formalism depends crucially on a numerical integration algorithm with low noise properties , since finite differences must be extracted between perturbatively different initial conditions after e - folds of evolution .thus , the background equations must be solved to great accuracy , since any small error has considerable scope to propagate . in this paperwe ultimately solve our equations numerically to determine the evolution of moments in specific models .our method requires the solution to a number of differential equations which scales at most polynomially ( or in certain cases perhaps even linearly ) with the number of fields .it does not rely on extracting finite differences , and therefore is much less susceptible to numerical noise .these advantages may be shared with other schemes , such as the numerical method recently employed by lehners & renaux - petel .a third advantage , to which we hope to return in a future publication , is that our formalism yields explicit evolution equations with source terms . from an analysis of these source terms , we hope that it will be possible to identify those physical features of specific models which lead to the production of large non - gaussianities .this paper is organized as follows . 
in [ sec : computing_fnl ] , we show how the non - gaussian parameter can be computed in our framework .the calculation remains in real space throughout ( as opposed to fourier space ) , which modifies the relationship between and the multi - point functions of the inflaton field .our expression for shows a clean separation between different contributions to non - gaussianity , especially between the intrinsic nonlinearity of the field evolution and the gauge transformation between comoving and flat hypersurfaces . in [ sec : transport ] , we introduce our model for the distribution of inflaton field values , which is a " moment expansion " around a purely gaussian distribution .we derive the equations which govern the evolution of the moments of this distribution in the one- and two - field cases . in [ sec : numerics ] , we present a comparison of our new technique and those already in the literature . we compute numerically in several two - field models , and find excellent agreement between techniques .we conclude in [ s : conclusions ] . throughout this paper , we use units in which , and the reduced planck mass is set to unity .in this section , we introduce our new method for computing the non - gaussianity parameter .this method requires three main ingredients : a formula for the curvature perturbation , , in terms of the field values on a spatially flat hypersurface ; expressions for the derivatives of the number of e - foldings , , as a function of field values at horizon exit ; and a prescription for evolving the field distribution from horizon exit to the time when we require the statistical properties of .the final ingredient is discussed in [ sec : transport ] .
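the flavor of the moment - transport idea can be seen in a toy one - field system ( the equations below are a generic illustration , not those derived in this paper ) : for a flow \delta\phi' = u'\,\delta\phi + \tfrac12 u''\,\delta\phi^2 , the centred moments obey \mu_2' = 2u'\mu_2 and \mu_3' = 3u'\mu_3 + 3u''\mu_2^2 at leading order , which can be checked against a direct monte carlo over trajectories :

```python
import numpy as np

# toy flow phi' = u(phi) = -phi + 0.3 phi^2 about the fiducial point phi = 0,
# so u' = -1 and u'' = 0.6 along the whole evolution
up, upp = -1.0, 0.6
sigma0, n_total, nsteps = 0.05, 1.0, 20000
dn = n_total / nsteps

mu2, mu3 = sigma0**2, 0.0
for _ in range(nsteps):                     # truncated moment-transport system
    mu2, mu3 = mu2 + dn * 2 * up * mu2, mu3 + dn * (3 * up * mu3 + 3 * upp * mu2**2)

# Monte Carlo over exact trajectories: the Bernoulli ODE solves in closed form,
# 1/dphi(n) = (1/dphi(0) - 0.3) e^n + 0.3
rng = np.random.default_rng(1)
d0 = rng.normal(0.0, sigma0, 400_000)
d1 = 1.0 / ((1.0 / d0 - 0.3) * np.exp(n_total) + 0.3)

c = d1 - d1.mean()
skew_mc = np.mean(c**3) / np.mean(c**2) ** 1.5
skew_transport = mu3 / mu2**1.5
```

each moment carries an explicit source term ( here 3u''\mu_2^2 seeds the skewness ) , which is what makes it easy to see where a large non - gaussian signal originates .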
once it became clear that non - linearities of the microwave background anisotropies could be detected by the wmap and _ planck _ survey satellites , many authors studied higher - order correlations of the curvature perturbation . in early work , direct calculations of a correlation function were matched to the known limit of local non - gaussianity .this method works well if isocurvature modes are absent , so that the curvature perturbation is constant after horizon exit . in the more realistic situation that isocurvature modes cause evolution on superhorizon scales , all correlation functions become time dependent .various formalisms have been employed to describe this evolution .lyth & rodríguez extended the method beyond linear order .this method is simple and well - suited to analytical calculation .rigopoulos , shellard and van tent worked with a gradient expansion , rewriting the field equations in langevin form .the noise term was used as a proxy for setting initial conditions at horizon crossing .a similar ` exact ' gradient formalism was written down by langlois & vernizzi . in its perturbative form , this approach has been used by lehners & renaux - petel to obtain numerical results .another numerical scheme has been introduced by huston & malik .what properties do we require of a successful prediction ? consider a typical observer , drawn at random from an ensemble of realizations of inflation .
in any of the formalisms discussed above , we aim to estimate the statistical properties of the curvature perturbation which would be measured by such an observer .some realizations may yield statistical properties which are quite different from the ensemble average , but these large excursions are uninteresting unless anthropic arguments are in play .next we introduce a collection of comparably - sized spacetime volumes whose mutual scatter is destined to dominate the microwave background anisotropy on a given scale .neglecting spatial gradients , each spacetime volume will follow a trajectory in field space which is slightly displaced from its neighbors .the scatter between trajectories is fixed by initial conditions set at horizon exit , which are determined by promoting the vacuum fluctuation to a classical perturbation .a correct prediction is a function of the trajectories followed by every volume in the collection , taken as a whole .one never makes a prediction for a single trajectory .each spacetime volume follows a trajectory , which we label with its position at some fixed time , to be made precise below . throughout this paper , superscript ` ' denotes evaluation on a spatially flat hypersurface .consider the evolution of some quantity of interest , which is a function of trajectory .if we know the distribution of trajectories we can study statistical properties of this quantity , such as its central moments , defined with respect to the ensemble average . in these expressions , the field label stands for a collection of any number of fields .it is these moments which are observable quantities .this prescription defines what we will call the exact separate universe picture .it is often convenient to expand the quantity of interest as a power series in the field values centered on a fiducial trajectory , labelled ` fid . ' when this power series is used to evaluate the moments , we refer to the ` perturbative ' separate universe picture .if all terms in the power series are retained , these two versions of the calculation are formally equivalent .
in unfavorable cases , however , convergence may occur slowly or not at all .this possibility was discussed in refs . .although our calculation is formally perturbative , it is not directly equivalent to this expansion .we briefly discuss the relation of our calculation to conventional perturbation theory in [ s : conclusions ] . by definition , the curvature perturbation measures local fluctuations in expansion history ( expressed in e - folds ) , calculated on a comoving hypersurface . in many models , the curvature perturbation is synthesized by superhorizon physics , which reprocesses a set of gaussian fluctuations generated at horizon exit . in a single - field model , only one gaussian fluctuation can be present . neglecting spatial gradients , the total curvature perturbation must then be a function of this gaussian fluctuation alone . for small fluctuations , this can be well - approximated by a quadratic expression whose coefficient is independent of spatial position .this defines the so - called `` local '' form of non - gaussianity .it applies only when quantum interference effects can be neglected , making the curvature perturbation a well - defined object rather than a superposition of operators .if this condition is satisfied , spatial correlations may be extracted and it follows that the non - linearity parameter can be estimated using a simple rule , where we have recalled that the underlying fluctuation is nearly gaussian . with a spatially independent coefficient , this rule strictly applies only in single - field inflation . in this case one can accurately determine the signal by applying the formula to a single trajectory with fixed initial conditions , as in the method of lehners & renaux - petel .where more than one field is present , the coefficient may vary in space because it depends on the isocurvature modes . in this case one must determine the signal statistically on a bundle of adjacent trajectories which sample the local distribution of isocurvature modes .a statistical treatment is then indispensable .following maldacena , and later lyth & rodríguez , we adopt this rule as our definition of the non - linearity parameter , whatever its origin .
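The local-form rule described above can be exercised with a toy Monte Carlo. The sketch below is our own illustration, not part of the paper's machinery: the function name is invented, and the normalization f_NL ≈ (5/18) ⟨ζ³⟩_c / ⟨ζ²⟩², with ζ = ζ_g + (3/5) f_NL ζ_g², is assumed to be the convention intended in the text.

```python
import random

def simulate_local_fnl(fnl_in, sigma_g, n, seed=1):
    """Draw a Gaussian fluctuation zeta_g, build the local-model curvature
    perturbation zeta = zeta_g + (3/5) fnl_in zeta_g^2, then re-estimate
    fnl from the one-point central moments."""
    rng = random.Random(seed)
    c = 0.6 * fnl_in                      # c = (3/5) f_NL
    zeta = [0.0] * n
    for i in range(n):
        g = rng.gauss(0.0, sigma_g)
        zeta[i] = g + c * g * g
    mean = sum(zeta) / n
    m2 = sum((z - mean) ** 2 for z in zeta) / n   # variance
    m3 = sum((z - mean) ** 3 for z in zeta) / n   # third central moment
    return (5.0 / 18.0) * m3 / m2 ** 2

# recover an input f_NL = 50 from the sampled skew (small bias and Monte
# Carlo noise remain at this sample size)
est = simulate_local_fnl(fnl_in=50.0, sigma_g=2e-3, n=200_000)
```

For perturbatively small fluctuations the quadratic term contributes a skew 6cσ⁴ at leading order, which is what the estimator inverts; the accuracy degrades once cσ_g is no longer small.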
in real space , the coefficient in this rule depends on the convention adopted for the average of the curvature perturbation .more generally , this follows from the definition of the non - linearity parameter . in fourier space , either prescription is automatically enforced after dropping disconnected contributions , again leading to the same rule . to proceed , we require estimates of the two - point and three - point correlation functions .we first describe the conventional approach , in which ` ' denotes a flat hypersurface at a fixed initial time .the key quantity is the number of e - foldings between this initial slice and a final comoving hypersurface , regarded as a function of the light scalar fields .the local variation in expansion can be written in the fiducial picture as a power series in the field fluctuations . subject to the condition that the relevant scales are all outside the horizon , we are free to choose the initial time set by the hypersurface ` ' at our convenience . in the conventional approach , ` ' is taken to lie a few e - folds after our collection of spacetime volumes passes outside the causal horizon .this choice has many virtues .first , we need to know statistical properties of the field fluctuations only around the time of horizon crossing , where they can be computed without the appearance of large logarithms .second , as a consequence of the slow - roll approximation , the field fluctuations are uncorrelated at this time , leading to algebraic simplifications .finally , the resulting formula subsumes a gauge transformation from the field variables to the observational variable . combining these ingredients , one finds that the non - linearity parameter can be written to a good approximation in terms of the first and second derivatives of the number of e - foldings , where for simplicity we have dropped the ` ' which indicates time of evaluation .this expression is accurate up to small intrinsic non - gaussianities present in the field fluctuations at horizon exit . as a means of predicting the non - linearity parameter it is pleasingly compact , and straightforward to evaluate in many models .unfortunately , it also obscures the physics which determines the signal . for this reason it is hard to infer , from this expression
alone , those classes of models in which the non - gaussianity is always large or small .our strategy is quite different .we choose ` ' to lie around the time when we require the statistical properties of the curvature perturbation .the role of the formula is then to encode _ only _ the gauge transformation between the field fluctuations and the curvature perturbation . in [ sec : derivative - n ] below , we show how the appropriate gauge transformation is computed using the formula . in the present section we restrict our attention to determining a formula for the curvature perturbation under the assumption that the distribution of field values on ` ' is known . in [ sec : transport ] , we will supply the required prescription to evolve the distribution of field values between horizon exit and ` ' .although the field fluctuations are independent random variables at horizon exit , correlations can be induced by subsequent evolution .one must therefore allow for off - diagonal terms in the two - point function .remembering that we are working with a collection of spacetime volumes in real space , smoothed on some characteristic scale , we write the two - point function as a correlation matrix .this matrix does not vary in space , but it may be a function of the scale which characterizes our ensemble of spacetime volumes . in all but the simplest models it will vary in time .it is also necessary to account for intrinsic non - linearities among the field fluctuations , which are small at horizon crossing but may grow .we therefore define a set of skewnesses ; likewise , these should be regarded as functions of time and scale .the permutation symmetries of expectation values of this kind guarantee that components related by a permutation of indices are equal .when written explicitly , we place the indices of such symbols in numerical order .neglecting a small intrinsic four - point correlation , the four - point function reduces to products of two - point functions .now we specialize to a two - field model , parametrized by two scalar fields . using eqs .
, and , it follows that the two - point function of the curvature perturbation can be expressed in terms of the field correlations .the three - point function can likewise be written as a sum of two separate contributions , labelled ` 1 ' and ` 2 ' .the ` 1 ' term includes all contributions involving _ intrinsic _ non - linearities , those which arise from non - gaussian correlations among the field fluctuations ; the ` 2 ' term encodes non - linearities arising directly from the gauge transformation from the field fluctuations to the curvature perturbation . after use of the rule above , this decomposition can be used to extract the non - linearity parameter .it splits likewise into two contributions , which we shall discuss in more detail in [ sec : numerics ] . to compute these quantities in concrete models , we require expressions for the first and second derivatives of the number of e - folds . for generic initial and final times , these are difficult to obtain .lyth & rodríguez used direct integration , which is effective for quadratic potentials and constant slow - roll parameters .vernizzi & wands obtained expressions in a two - field model with an arbitrary sum - separable potential by introducing gaussian normal coordinates on the space of trajectories .their approach was generalized to many fields by battefeld & easther .product - separable potentials can be accommodated using the same technique .an alternative technique has been proposed by yokoyama _ et al . _a considerable simplification occurs in the present case , because we only require the derivative evaluated between flat and comoving hypersurfaces which coincide in the unperturbed universe . for any species , and to leading order in the slow - roll approximation , the number of e - folds between the flat hypersurface ` ' and a comoving hypersurface ` ' satisfies an integral formula in which the field values are evaluated on ` ' and ` , ' respectively . under an infinitesimal shift of the fields , we deduce the variation obeyed by the number of e - folds .note that this applies for an arbitrary potential , which need not factorize into a sum or product of potentials for the individual species .
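The slow-roll e-fold integral invoked here can be illustrated in the simplest single-field setting (the paper's two-field case is analogous). The sketch below, with invented function names, integrates dN = (V/V') dφ in reduced Planck units and checks it against the textbook result N = (φ⋆² − φ_end²)/4 for a quadratic potential.

```python
def efolds(V, dV, phi_star, phi_end, steps=10_000):
    """Slow-roll number of e-folds N = integral of V/V' dphi from phi_end
    up to phi_star, computed with the trapezoidal rule (reduced Planck
    units, field rolling toward smaller phi)."""
    h = (phi_star - phi_end) / steps
    f = lambda p: V(p) / dV(p)
    total = 0.5 * (f(phi_end) + f(phi_star))
    for k in range(1, steps):
        total += f(phi_end + k * h)
    return total * h

# V = m^2 phi^2 / 2 gives V/V' = phi/2, hence N = (phi_star^2 - phi_end^2)/4
m2 = 1e-10
N = efolds(lambda p: 0.5 * m2 * p * p, lambda p: m2 * p, 16.0, 1.0)
```

Because V/V' is linear in φ for this potential, the trapezoidal rule is exact here up to rounding; for a general potential the quadrature error scales with the step size squared.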
in principle a contribution from variation of the integrand is present , which spoils a naïve attempt to generalize the method of refs . to an arbitrary potential .this contribution vanishes by virtue of our supposition that ` ' and ` ' are infinitesimally separated . to compute this variation it is helpful to introduce a quantity which in the sum - separable case coincides with the conserved quantity of vernizzi & wands . for our specific choice of a two - field model , this quantity is built from integrals evaluated on a single spatial hypersurface . in a many - field model , one would obtain a set of conserved quantities which label the isocurvature fields .the construction of these quantities is discussed in refs . . for sum - separable potentials one can show using the equations of motion that this quantity is conserved under time evolution to leading order in slow - roll .it is not conserved for general potentials , but the variation can be neglected for infinitesimally separated hypersurfaces . under a change of trajectory it varies according to simple rules , and the comoving hypersurface ` ' is defined by a condition of uniform energy density .we are assuming that the slow - roll approximation applies , so that the kinetic energy may be neglected in comparison with the potential .therefore on ` ' the potential itself is uniform . combining these ingredients we obtain expressions for the derivatives of the number of e - folds , written in terms of an angle which parametrizes the direction of the background trajectory in field space .these expressions can alternatively be derived without use of the conserved quantity by comparison with the formulas of ref . , which were derived using conventional perturbation theory . to proceed , we require the second derivatives of the number of e - folds .these can be obtained directly by further differentiation .
we find ^\star \cos^2 \theta + 2 \left ( \frac{v}{v_{,1 } } \right)^{\star 2 } \cos^2 \theta \\\hspace{-6 mm } \mbox { } \times \left [ \frac{v_{,11}}{v } \sin^2 \theta - \frac{v_{,1 } v_{,12}}{v v_{,2 } } \sin^4 \theta - \left ( \frac{v_{,11}}{v } - \frac{v_{,22}}{v } + \frac{v_{,2 } v_{,12}}{v v_{,1 } } \right ) \cos^2 \theta \sin^2 \theta \right]^c .\end{aligned}\ ] ] an analogous expression for can be obtained after the simultaneous exchange .the mixed derivative satisfies ^c \\ \hspace{-6 mm } \mbox { } + \cos^2 \theta \left ( \frac{v_{,2}}{v_{,1 } } - \frac{v v_{,12}}{v_{,1}^2 } \right)^c . \end{aligned}\ ] ] now that the calculation is complete , we can drop the superscripts ` ' and ` , ' since any background quantity is the same on either hypersurface .once this is done it can be verified that ( despite appearances ) eq .is invariant under the exchange .in this section we return to the problem of evolution between horizon exit and the time of observation , and supply the prescription which connects the distribution of field values at these two times .we begin by discussing the single - field system , which lacks the technical complexity of the two - field case , yet still exhibits certain interesting features which recur there . among these featuresare the subtle difference between motion of the statistical mean and the background field value , and the hierarchy of moment evolution equations .moreover , the structure of the moment mixing equations is similar to that which obtains in the two - field case .for this reason , the one - field scenario provides an instructive example of the techniques we wish to employ .recall that we work in real space with a collection of comparably sized spacetime volumes , each with a slightly different expansion history , and the scatter in these histories determines the microwave background anisotropy on a given angular scale . 
within each volume the smoothed background field takes a uniform value described by a density function , where in this section we are dropping the superscript ` ' denoting evaluation on spatially flat hypersurfaces .our ultimate goal is to calculate the reduced bispectrum , which describes the third moment of the curvature perturbation . in the language of probability this is the skewness .a gaussian distribution has skewness zero , and inflation usually predicts that the skew is small . for this reason , rather than seek a distribution with non - zero third moment , as proposed in ref . , we will introduce higher moments as perturbative corrections to the gaussian .such a procedure is known as a _ cumulant expansion _ .the construction of cumulant expansions is a classical problem in probability theory .we seek a distribution with a given centroid , variance , and skew , with all higher moments determined by these alone .a distribution with suitable properties is a pure gaussian multiplied by a correction proportional to a hermite polynomial , for which there are multiple normalization conventions .we choose the normalization in which the leading coefficient of each hermite polynomial is unity .this is sometimes called the `` probabilist 's convention . ''we define expectation values by the usual rule .the density function constructed in this way reproduces the desired centroid , variance and skew exactly ; these properties do not depend on the approximation that the skew is small .however , for large skew the density function may become negative for some values of its argument .it then ceases to be a probability density in the strict sense .this does not present a problem in practice , since we are interested in distributions which are approximately gaussian , and for which the skew will typically be small . moreover , our principal use of this density function is as a formal tool to extract evolution equations for each moment .for this reason we will not worry whether it defines an honest probability density function in the strict mathematical sense .
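A minimal numerical check of this construction (our own illustration, with invented names): using the probabilist's He₃, the density p(x) = p_g(x)[1 + α/(3!σ³) He₃((x−μ)/σ)] reproduces the centroid, variance, and skew exactly, independent of the size of α.

```python
import math

def he3(z):
    """Probabilist's Hermite polynomial He_3 (monic normalization)."""
    return z**3 - 3.0 * z

def gram_charlier_pdf(x, mu, sigma, alpha):
    """First correction of the Gram-Charlier 'A' series: a Gaussian with
    centroid mu and variance sigma^2, perturbed to carry skew alpha."""
    z = (x - mu) / sigma
    pg = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return pg * (1.0 + alpha / (6.0 * sigma**3) * he3(z))

# integrate numerically and confirm the normalization and first three moments
mu, sigma, alpha = 0.5, 1.2, 0.1
h = 1e-3
xs = [mu - 12.0 * sigma + k * h for k in range(int(24.0 * sigma / h) + 1)]
w = [gram_charlier_pdf(x, mu, sigma, alpha) for x in xs]
norm = sum(w) * h
m1 = sum(wi * x for wi, x in zip(w, xs)) * h
m2 = sum(wi * (x - m1) ** 2 for wi, x in zip(w, xs)) * h
m3 = sum(wi * (x - m1) ** 3 for wi, x in zip(w, xs)) * h
```

The exactness follows from the orthogonality relations ⟨He₃ Heₙ⟩ = 0 for n < 3 under the Gaussian measure, so the correction term shifts only the third moment.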
the moments of this distribution may be time - dependent , so evolution of the probability density in time can be accommodated by finding evolution equations for these quantities .the density function described here is well - known and has been applied in many situations .it is a solution to the problem of approximating a nearly - gaussian distribution whose moments are known .eq . ( [ e : p1d ] ) is in fact the first two terms of the _ gram charlier ` a ' series _ , also sometimes called the _ hermite series _ . in recent years it has found multiple applications to cosmology , of which our method is closest to that of taylor & watts .other applications are discussed in refs . . for a review of the ` a ' series and related nearly - gaussian probability distributions from an astrophysical perspective , see . in this paper , we will refer to this construction and its natural generalization to higher moments as the `` moment expansion . '' in the slow - roll approximation , the field in each spacetime volume obeys a simple equation of motion in which the number of e - foldings of expansion plays the role of time .we refer to the right - hand side of this equation as the velocity field .expanding the velocity field about the instantaneous centroid gives a series whose coefficients are time - dependent , because the centroid itself evolves with time .hence , we do not assume that the velocity field is _ globally _ well - described by a quadratic taylor expansion , but merely that it is well - described as such in the neighborhood of the instantaneous centroid .we expand the velocity field to second order , although in principle this expansion could be carried to arbitrary order .it remains to specify how the probability density evolves in time .conservation of probability leads to a transport equation , which can also be understood as the limit of a chapman kolmogorov process as the size of each hop goes to zero .it is well known , for example from the study of starobinsky 's diffusion equation which forms the basis of the stochastic approach to inflation , that the choice of time variable in
this equation is significant , with different choices corresponding to the selection of a temporal gauge .we have chosen to use the e - folding time , which means that we are evolving the distribution on hypersurfaces of uniform expansion .these are the spatially flat hypersurfaces whose field perturbations enter the formulas described in [ sec : computing_fnl ] . in principle , the transport equation can be solved directly . in practice it is simpler to extract equations for the moments of the distribution , giving evolution equations for the centroid , variance and skew . to achieve this , one need only resolve the transport equation into a hermite series .the hermite polynomials are linearly independent , and application of the orthogonality condition shows that the series coefficients must all vanish .this leads to a hierarchy of equations , which we refer to as the moment hierarchy . at the top of the hierarchy , the first equation is empty and expresses conservation of probability .the first non - trivial equation yields an evolution equation for the centroid .the first term on the right - hand side drives the centroid along the velocity field , as one would anticipate based on the background equation of motion .
however , the second term shows that the centroid is also influenced as the wings of the probability distribution probe the nearby velocity field .this influence is not captured by the background equation of motion .if the second derivative of the velocity field is positive , then the wings of the density function will be moving faster than the center .hence , the velocity of the centroid will be larger than one might expect by restricting attention to the background motion .accordingly , the mean fluctuation value is not following a solution to the background equations of motion .evolution equations for the variance and skew are obtained from the next levels of the hierarchy . in both equations , the first term on the right - hand side describes how the variance and skew scale as the density function expands or contracts in response to the velocity field .these terms force the variance and skew to scale in proportion to the velocity field . specifically , if we temporarily drop the second terms in each equation , one finds that the variance scales as the square of the velocity field and the skew as its cube .this precisely matches our expectation for the scaling of these quantities .hence , these terms account for the jacobians associated with infinitesimal transformations induced by the flow . for applications to inflationary non - gaussianity , the second terms in each equation are more relevant .these terms describe how each moment is sourced by higher moments and the interaction of the density function with the velocity field . in the example above , if the second derivative of the velocity field is positive , the tails of the density function are moving faster than the core .this means that one tail is shrinking and the other is extending , skewing the probability density .the opposite occurs when the second derivative is negative .these effects are measured by the second term in the skew equation .hence , by expanding our pdf to the third moment , and our velocity field to quadratic order , we are able to construct a set of evolution equations which include the leading - order source terms for each moment .
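The single-field hierarchy described above can be integrated directly. The sketch below implements one consistent version of these equations, dμ/dN = u + ½u″σ², d(σ²)/dN = 2u′σ² + u″α, and dα/dN = 3u′α + 3u″σ⁴, obtained from the transport equation with a Gaussian closure for the fourth moment; the precise coefficients are our reconstruction rather than a quotation of the paper, and the function names are ours. For a linear velocity field the moments should grow as u, u², and u³ respectively, which the usage at the bottom verifies.

```python
import math

def moment_rhs(state, u, du, d2u):
    """Right-hand side of the single-field moment hierarchy for the
    centroid mu, variance var and skew alpha, with the velocity field
    u(phi) Taylor-expanded to quadratic order about the centroid."""
    mu, var, alpha = state
    u0, u1, u2 = u(mu), du(mu), d2u(mu)
    dmu    = u0 + 0.5 * u2 * var           # background motion + wing correction
    dvar   = 2.0 * u1 * var + u2 * alpha   # Jacobian scaling + skew source
    dalpha = 3.0 * u1 * alpha + 3.0 * u2 * var * var
    return (dmu, dvar, dalpha)

def rk4(state, rhs, dN, steps):
    """Classical fourth-order Runge-Kutta integration in e-fold time."""
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * dN * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * dN * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + dN * k for s, k in zip(state, k3)))
        state = tuple(s + dN * (a + 2 * b + 2 * c + d) / 6.0
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

# linear velocity field u = lam * phi (u'' = 0): the equations decouple and
# mu, var, alpha scale as exp(lam N), exp(2 lam N), exp(3 lam N)
lam = 0.1
u, du, d2u = (lambda p: lam * p), (lambda p: lam), (lambda p: 0.0)
mu, var, alpha = rk4((1.0, 1e-4, 1e-8),
                     lambda s: moment_rhs(s, u, du, d2u), dN=0.01, steps=1000)
```

With u″ = 0 the source terms vanish, isolating the Jacobian scaling discussed in the text; switching on u″ makes the skew grow even from Gaussian initial conditions.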
there is little conceptually new as we move from one field to two .the new features are mostly technical in nature .our primary challenge is a generalization of the moment expansion to two fields , allowing for the possibility of correlation between the fields . with this done , we can write down evolution equations whose structure is very similar to those found in the single - field case . the two - field system is described by a two - dimensional velocity field , where again we are using the number of e - folds as the time variable and the species index runs over the two fields . while we think it is likely that our equations generalize to any number of fields , we have only explicitly constructed them for a two - field system . as will become clear below , certain steps in this construction apply only for two fields , and hence we make no claims at present concerning examples with three or more fields .the two - dimensional transport equation again expresses conservation of probability . here and in the following we have returned to our convention that repeated species indices are summed . as in the single - field case , we construct a probability distribution which is nearly gaussian , but has a small non - zero skewness .it is written as a pure gaussian distribution multiplied by a non - gaussian correction factor . in this construction , the centroid defines the center of the distribution and a covariance matrix describes the covariance between the fields .we adopt a conventional parametrization in terms of two variances and a correlation coefficient .the covariance matrix defines two - point correlations of the fields , while all skewnesses are encoded in the non - gaussian factor . before defining this explicitly , it is helpful to pause and notice a complication which was not present in the single - field case .
to extract a hierarchy of moment evolution equations from the transport equation , we made a hermite expansion and argued that orthogonality of the hermite polynomials implied the hierarchy .however , hermite polynomials evaluated separately in each field are _ not _ orthogonal under the gaussian measure when the fields are correlated . following an expansion analogous to the single - field case , the moment hierarchy would instead comprise linear combinations of the expansion coefficients .the problem is essentially an algebraic question of gram schmidt orthogonalization . to avoid this problem it is convenient to diagonalize the covariance matrix , introducing new variables for which the gaussian measure factorizes into the product of two measures under which the corresponding hermite polynomials are separately orthogonal .the necessary redefinitions are linear combinations of the field fluctuations , normalized using the variances and the correlation coefficient . in these variables the two new coordinates are uncorrelated , each with unit variance .we now define the non - gaussian factor , which encodes the skewnesses , in terms of the new variables . in order for this expansion to be useful , it is necessary to express the skewnesses associated with the physical variables in terms of the moments of the new variables .
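The diagonalization step can be made concrete with a small sketch. This is our own construction (the paper's transformed variables may differ by a rotation or normalization): map the correlated fluctuations to coordinates whose covariance is the identity, and verify that property explicitly.

```python
import math

def whiten(sigma1, sigma2, rho):
    """Return the 2x2 matrix A mapping correlated fluctuations
    (dphi1, dphi2) to variables (x, y) with unit, diagonal covariance.
    One standard choice: x = dphi1/sigma1, then y is the part of
    dphi2/sigma2 orthogonal to x, rescaled to unit variance."""
    q = math.sqrt(1.0 - rho * rho)
    return [[1.0 / sigma1, 0.0],
            [-rho / (sigma1 * q), 1.0 / (sigma2 * q)]]

def transformed_cov(A, s1, s2, rho):
    """Compute A Sigma A^T for the covariance matrix Sigma built from
    variances s1^2, s2^2 and correlation coefficient rho."""
    S = [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]
    AS = [[sum(A[i][k] * S[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(AS[i][k] * A[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

# example: strongly unequal variances with 50% correlation
C = transformed_cov(whiten(0.3, 0.7, 0.5), 0.3, 0.7, 0.5)
```

In the whitened coordinates the bivariate Gaussian measure factorizes, so products of one-dimensional Hermite polynomials in x and y are orthogonal, which is exactly what the construction above requires.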
by definition , these satisfy the usual relations between skewnesses and third moments . after substituting the definition of the new variables inside the expectation values , we arrive at relations expressing the physical skewnesses in terms of the moments of the transformed variables .these moments are time - dependent , but for clarity we will usually suppress this in our notation .next we must extract the moment hierarchy , which governs evolution of the centroid , the covariance and the skewnesses .we expand the velocity field in a neighborhood of the instantaneous centroid . as in the single - field case , the expansion coefficients are functions of time and vary with the motion of the centroid .the expansion can be pursued to higher order if desired .our construction implies that the two - field transport equation can be arranged as a double gauss hermite expansion , = p_g \sum_{m , n \ge 0 } c_{mn } h_m(x ) h_n(y ) = 0 , and because the hermite polynomials are orthogonal in this measure , the coefficients must all vanish .this yields the moment hierarchy .we define the `` rank '' of each coefficient by the total order of its hermite polynomials .we terminated the velocity field expansion at quadratic order , and our probability distribution included only the first three moments .it follows that only coefficients with rank five or less are nonzero .if we followed the velocity field to higher order , or included higher terms in the moment expansion , we would obtain non - trivial higher - rank coefficients .inclusion of additional coefficients requires no qualitative modification of our analysis and can be incorporated in the scheme we describe below . a useful feature of this expansion is that the rank - n coefficients give evolution equations for the moments of order n . written explicitly in components , the expressions that result are quite cumbersome .however , when written as field - space covariant expressions they can be expressed in a surprisingly compact form . : : the rank-0 coefficient is identically zero .this expresses the fact that the total probability is conserved as the distribution evolves .
: : the rank-1 coefficients give evolution equations for the centroid .we remind the reader that here and below , the velocity field and its derivatives are evaluated at the centroid .the first term in these equations expresses the non - anomalous motion of the centroid , which coincides with the background velocity field .the second term describes how the wings of the probability distribution sample the velocity field at nearby points .narrow probability distributions have small covariance components and hence are only sensitive to the local value of the velocity field .broad probability distributions have large covariance components and are therefore more sensitive to the velocity field far from the centroid . : : the rank-2 coefficients give evolution equations for the variances and the correlation .these can conveniently be packaged as an evolution equation for the covariance matrix , which describes the stretching and rotation of the distribution as it is transported by the velocity field .it includes a sensitivity to the wings of the probability distribution , in a manner analogous to the similar term appearing in the centroid equation . hence the skew acts as a source for the correlation matrix . : : the rank-3 coefficients describe evolution of the third moments .the first term describes how these moments flow into each other as the velocity field rotates and shears the transformed coordinate frame relative to the field - space frame .the second term describes sourcing of non - gaussianity from inhomogeneities in the velocity field and the overall spread of the probability distribution .some higher - rank coefficients , in our case those of ranks four and five , are also nonzero , but do not give any new evolution equations .these coefficients measure the `` error '' introduced by truncating the moment expansion .if we had included higher cumulants , these higher - rank coefficients would have given evolution equations for the higher moments of the probability distribution . in
general , all moments of the density function will mix , so it is always necessary to terminate our expansion at a predetermined order both in cumulants and powers of the field fluctuation .the order we have chosen is sufficient to generate evolution equations containing both the leading - order behavior of the moments , given by the first terms in the equations above , and the leading corrections , given by the latter terms in these equations . at this point we put our new method into practice .we study two models for which the non - gaussian signal is already known , using the standard formula . for each case we employ our method and compare it with results obtained using the standard approach . to ensure a fair comparison , we solve numerically in both cases .our new method employs the slow - roll approximation , as described above .therefore , when using the standard approach we produce results both with and without slow - roll simplifications .first consider double quadratic inflation , which was studied by rigopoulos , shellard & van tent and later by vernizzi & wands .the potential is a sum of quadratic terms for the two fields .we use the initial conditions chosen in ref . , where the masses and the coordinates of the fiducial trajectory are fixed as in that reference .we plot the evolution of the non - linearity parameter in fig .[ fig1 ] , which also shows the prediction of the standard formula ( with and without employing slow - roll simplifications ) .we implement the algorithm using a finite difference method to calculate the derivatives of the number of e - folds .a similar technique was used in ref . .this model yields a very modest non - gaussian signal , below unity even at its peak .if inflation ends away from the spike then the signal is practically negligible .this comparison shows that the method of moment transport allows us to separate contributions to the non - linearity parameter from the intrinsic non - gaussianity of the field fluctuations , and from non - linearities of the gauge transformation to the curvature perturbation . as explained in [ ss : sep_universe ] , we separate these two contributions and plot them individually in fig .[ fig2 ] .
inspection of this figure clearly shows that the non - linearity parameter is determined by a cancellation between two much larger components .its final shape and magnitude are exquisitely sensitive to their relative phase .initially , the magnitudes of the two contributions grow , but their sum remains small .the peak in fig .[ fig1 ] arises from the peak of one contribution , which is incompletely cancelled by the other .it is remarkable that the intrinsic contribution initially evolves in exact opposition to the gauge transformation , to which it is not obviously connected . in the double quadratic model , the signal is always small .however , it has recently been shown by byrnes _ et al . _ that a large non - gaussian signal can be generated even when slow - roll is a good approximation . the conditions for this to occur are incompletely understood , but apparently require a specific choice of potential and strong tuning of initial conditions . in figs .[ fig3][fig4 ] we show the evolution of the non - linearity parameter in a model whose potential corresponds to example a of ref . ( 5 ) when we choose the parameters and initial conditions used there .it is clear that the agreement is exact . in this model , the signal is overwhelmingly dominated by the contribution from the second - order gauge transformation , as shown in fig .[ fig4 ] .this conclusion applies equally to the other large - signal examples discussed in refs . , although we make no claim that this is a general phenomenon .
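The finite-difference evaluation of the derivatives of N used for these comparison curves can be sketched as follows. The toy function `Ntoy` below is an invented sum-separable stand-in, not the double-quadratic N of the text, and serves only to check the stencils against known analytic derivatives.

```python
def n_derivs(N, phi, h=1e-4):
    """Central finite differences for the first and second derivatives of
    the e-fold function N at the point phi = (p1, p2)."""
    p1, p2 = phi
    N1 = (N(p1 + h, p2) - N(p1 - h, p2)) / (2 * h)
    N2 = (N(p1, p2 + h) - N(p1, p2 - h)) / (2 * h)
    N11 = (N(p1 + h, p2) - 2 * N(p1, p2) + N(p1 - h, p2)) / h**2
    N22 = (N(p1, p2 + h) - 2 * N(p1, p2) + N(p1, p2 - h)) / h**2
    N12 = (N(p1 + h, p2 + h) - N(p1 + h, p2 - h)
           - N(p1 - h, p2 + h) + N(p1 - h, p2 - h)) / (4 * h**2)
    return (N1, N2), (N11, N12, N22)

# toy check: for N = (p1^2 + p2^2)/4 one has N_i = p_i/2 and N_ii = 1/2
Ntoy = lambda a, b: 0.25 * (a * a + b * b)
(g1, g2), (h11, h12, h22) = n_derivs(Ntoy, (8.0, 6.0))
```

Since the stencils are second-order accurate, they are exact on a quadratic N up to rounding; in a realistic model the step h must balance truncation error against the noise from long background integrations mentioned in the introduction.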
in conclusion , figs .[ fig1 ] and [ fig3 ] show excellent agreement between our new method and the outcome of the numerical formula .these figures also compare the moment transport method with results obtained both with and without the slow - roll approximation .we conclude that the slow - roll estimate remains broadly accurate throughout the entire evolution .non - linearities are now routinely extracted from all - sky observations of the microwave background anisotropy .our purpose in this paper has been to propose a new technique with which to predict the observable signal .present data already give interesting constraints on the skewness parameter , and over the next several years we expect that the _ planck _ survey satellite will make these constraints very stringent . it is even possible that higher - order moments , such as the kurtosis parameter , will become better constrained . to meet the need of the observational community for comparison with theory , reliable estimates of these non - linear quantities will be necessary for various models of early - universe physics .a survey of the literature suggests that the ` conventional ' method , originally introduced by lyth & rodríguez , remains the method of choice for analytical study of non - gaussianity . in comparison , our proposed moment transport method exhibits several clear differences .first , the conventional method functions best when we base the expansion on a flat hypersurface immediately after horizon exit . in our method , we make the opposite choice and move the flat hypersurface as close as possible to the time of observation .
after this , the role of the formula is to provide no more than the non - linear gauge transformation between field fluctuations and the curvature perturbation .we substitute the method of moment transport to evolve the distribution of field fluctuations between horizon exit and observation .second , in integrating the transport equation one uses an expansion of the velocity field such as the one given in eqs .. this expansion is refreshed at each step of integration , so the result is related to conventional perturbative calculations in a very similar way to renormalization - group improved perturbation theory . in this interpretation ,derivatives of play the role of couplings . at a given order , , in the moment hierarchy, the equations for lower - order moments function as renormalization group equations for the couplings at level- , resumming potentially large terms before they spoil perturbation theory .this property is shared with any formalism such as which is non - perturbative in time evolution , but may be an advantage in comparison with perturbative methods .we also note that although is non - perturbative as a point of principle , practical implementations are frequently perturbative .for example , the method of vernizzi & wands and battefeld & easther depends on the existence of quantities which are conserved only to leading order in , and can lose accuracy after e - foldings .numerical calculations confirm that our method gives results in excellent agreement with existing techniques . as a by - product of our analysis, we note that the large non - gaussianities which have recently been observed in sum- and product - separable potentials are dominated by non - linearities from the second - order part of the gauge transformation from to .the contribution from intrinsic non - linearities of the field fluctuations , measured by the skewnesses , is negligible . 
in such casesone can obtain a useful formula for by approximating the field distribution as an exact gaussian .the non - gaussianity produced in such cases arises from a distortion of comoving hypersurfaces with respect to adjacent spatially flat hypersurfaces .our new method joins many well - established techniques for estimating non - gaussian properties of the curvature perturbation . in our experience , these techniques give comparable estimates of , but they do not exactly agree .each method invokes different assumptions , such as the neglect of gradients or the degree to which time dependence can be accommodated .the mutual scatter between different methods can be attributed to the theory error inherent in any estimate of .the comparison presented in [ sec : numerics ] shows that while all of these methods slightly disagree , the moment transport method gives good agreement with other established methods .dm is supported by the cambridge centre for theoretical cosmology ( ctc ) .ds is funded by stfc .dw acknowledges support from the ctc .we would like to thank chris byrnes , jim lidsey and karim malik for helpful conversations .10 t. falk , r. rangarajan , and m. srednicki , _ the angular dependence of the three - point correlation function of the cosmic microwave background radiation as predicted by inflationary cosmologies _ , _ astrophys .j. _ * 403 * ( 1993 ) l1 , [ http://xxx.lanl.gov/abs/astro-ph/9208001 [ arxiv : astro - ph/9208001 ] ] .a. gangui , f. lucchin , s. matarrese , and s. mollerach , _ the three - point correlation function of the cosmic microwave background in inflationary models _ , _ astrophys . j. _ * 430 * ( 1994 ) 447457 , [ http://xxx.lanl.gov/abs/astro-ph/9312033 [ arxiv : astro - ph/9312033 ] ] .t. pyne and s. m. carroll , _ higher - order gravitational perturbations of the cosmic microwave background _* d53 * ( 1996 ) 29202929 , [ http://xxx.lanl.gov/abs/astro-ph/9510041 [ arxiv : astro - ph/9510041 ] ] .v. acquaviva , n. 
bartolo , s. matarrese , and a. riotto , _ second - order cosmological perturbations from inflation _ , _ nucl .phys . _ * b667 * ( 2003 ) 119148 , [ http://xxx.lanl.gov/abs/astro-ph/0209156 [ arxiv : astro - ph/0209156 ] ] .e. komatsu and d. n. spergel , _acoustic signatures in the primary microwave background bispectrum _ , _ phys .* d63 * ( 2001 ) 063002 , [ http://xxx.lanl.gov/abs/astro-ph/0005036 [ arxiv : astro - ph/0005036 ] ] .f. r. bouchet and r. juszkiewicz , _ perturbation theory confronts observations : implications for the ` initial ' conditions and _ , http://xxx.lanl.gov/abs/astro-ph/9312007 [ arxiv : astro - ph/9312007 ] .p. fosalba , e. gaztanaga , and e. elizalde , _ gravitational evolution of the large - scale density distribution : the edgeworth & gamma expansions _ , http://xxx.lanl.gov/abs/astro-ph/9910308 [ arxiv : astro - ph/9910308 ] .m. sasaki and e. d. stewart , _ a general analytic formula for the spectral index of the density perturbations produced during inflation _ , _ prog .* 95 * ( 1996 ) 7178 , [ http://xxx.lanl.gov/abs/astro-ph/9507001 [ arxiv : astro - ph/9507001 ] ] .g. i. rigopoulos , e. p. s. shellard , and b. j. w. van tent , _ non - linear perturbations in multiple - field inflation _rev . _ * d73 * ( 2006 ) 083521 , [ http://xxx.lanl.gov/abs/astro-ph/0504508 [ arxiv : astro - ph/0504508 ] ] .h. r. s. cogollo , y. rodrguez , and c. a. valenzuela - toledo , _ on the issue of the series convergence and loop corrections in the generation of observable primordial non - gaussianity in slow - roll inflation .part i : the bispectrum _ , _ jcap _ * 0808 * ( 2008 ) 029 , [ http://xxx.lanl.gov/abs/0806.1546[arxiv:0806.1546 ] ] .y. rodrguez and c. a. valenzuela - toledo , _ on the issue of the series convergence and loop corrections in the generation of observable primordial non - gaussianity in slow - roll inflation .part ii : the trispectrum _ , http://xxx.lanl.gov/abs/0811.4092 [ arxiv:0811.4092 ] .choi , l. m. h. hall , and c. 
van de bruck , _ spectral running and non - gaussianity from slow - roll inflation in generalised two - field models _ , _ jcap _ * 0702 * ( 2007 ) 029 , [ http://xxx.lanl.gov/abs/astro-ph/0701247 [ arxiv : astro - ph/0701247 ] ] . c. gordon , d. wands , b. a. bassett , and r. maartens , _ adiabatic and entropy perturbations from inflation _ , _ phys ._ * d63 * ( 2001 ) 023506 , [ http://xxx.lanl.gov/abs/astro-ph/0009131 [ arxiv : astro - ph/0009131 ] ] .s. matarrese , l. verde , and r. jimenez , _ the abundance of high - redshift objects as a probe of non- gaussian initial conditions _ , _ astrophys. j. _ * 541 * ( 2000 ) 10 , [ http://xxx.lanl.gov/abs/astro-ph/0001366 [ arxiv : astro - ph/0001366 ] ] .l. amendola , _ the dependence of cosmological parameters estimated from the microwave background on non - gaussianity _ , _ astrophys. j. _ * 569 * ( 2002 ) 595599 , [ http://xxx.lanl.gov/abs/astro-ph/0107527 [ arxiv : astro - ph/0107527 ] ] .m. loverde , a. miller , s. shandera , and l. verde , _ effects of scale - dependent non - gaussianity on cosmological structures _ , _ jcap _ * 0804 * ( 2008 ) 014 , [ http://xxx.lanl.gov/abs/0711.4126 [ arxiv:0711.4126 ] ] .d. seery and j. c. hidalgo , _ non - gaussian corrections to the probability distribution of the curvature perturbation from inflation _ , _ jcap _ * 0607 * ( 2006 ) 008 , [ http://xxx.lanl.gov/abs/astro-ph/0604579 [ arxiv : astro - ph/0604579 ] ] .s. blinnikov and r. moessner , _expansions for nearly gaussian distributions _ , _ astron .* 130 * ( 1998 ) 193205 , [ http://xxx.lanl.gov/abs/astro-ph/9711239 [ arxiv : astro - ph/9711239 ] ] .g. i. rigopoulos , e. p. s. shellard , and b. j. w. van tent , _ quantitative bispectra from multifield inflation _rev . _ * d76 * ( 2007 ) 083512 , [ http://xxx.lanl.gov/abs/astro-ph/0511041 [ arxiv : astro - ph/0511041 ] ] .m. sasaki , j. valiviita , and d. wands , _ non - gaussianity of the primordial perturbation in the curvaton model _ , _ phys .rev . 
_ * d74 * ( 2006 ) 103003 , [ http://xxx.lanl.gov/abs/astro-ph/0607627 [ arxiv : astro - ph/0607627 ] ] . | we present a novel method for calculating the primordial non - gaussianity produced by super - horizon evolution during inflation . our method evolves the distribution of coarse - grained inflationary field values using a transport equation . we present simple evolution equations for the moments of this distribution , such as the variance and skewness . this method possesses some advantages over existing techniques . among them , it cleanly separates multiple sources of primordial non - gaussianity , and is computationally efficient when compared with popular alternatives , such as the framework . we adduce numerical calculations demonstrating that our new method offers good agreement with those already in the literature . we focus on two fields and the parameter , but we expect our method will generalize to multiple scalar fields and to moments of arbitrarily high order . we present our expressions in a field - space covariant form which we postulate to be valid for any number of fields . * keywords * : inflation , cosmological perturbation theory , physics of the early universe , quantum field theory in curved spacetime . |
invariants are a popular concept in object recognition and image retrieval .they aim to provide descriptions that remain constant under certain geometric or radiometric transformations of the scene , thereby reducing the search space .they can be classified into global invariants , typically based either on a set of key points or on moments , and local invariants , typically based on derivatives of the image function which is assumed to be continuous and differentiable .the geometric transformations of interest often include translation , rotation , and scaling , summarily referred to as _ similarity _ transformations . in a previous paper , building on work done by schmid and mohr , we have proposed differential invariants for those similarity transformations , plus _ linear _ brightness change . here, we are looking at a _ non - linear _ brightness change known as _ gamma correction_. gamma correction is a non - linear quantization of the brightness measurements performed by many cameras during the image formation process .the idea is to achieve better _ perceptual _ results by maintaining an approximately constant ratio between adjacent brightness levels , placing the quantization levels apart by the _ just noticeable difference_. incidentally , this non - linear quantization also precompensates for the non - linear mapping from voltage to brightness in electronic display devices .gamma correction can be expressed by the equation where is the input intensity , is the output intensity , and is a normalization factor which is determined by the value of . for output devices , the ntsc standard specifies . for input devices like cameras ,the parameter value is just inversed , resulting in a typical value of .the camera we used , the sony 3 ccd color camera dxc 950 , exhibited . 
for the kodak megaplus xrc camera ][ fig : gammacorr ] shows the intensity mapping of 8-bit data for different values of .it turns out that an invariant under gamma correction can be designed from first and second order derivatives .additional invariance under scaling requires third order derivatives .derivatives are by nature translationally invariant .rotational invariance in 2-d is achieved by using rotationally symmetric operators .the key idea for the design of the proposed invariants is to form suitable ratios of the derivatives of the image function such that the parameters describing the transformation of interest will cancel out .this idea has been used in to achieve invariance under linear brightness changes , and it can be adjusted to the context of gamma correction by at least conceptually considering the _ logarithm _ of the image function . for simplicity , we begin with 1-d image functions .let be the image function , i.e. the original signal , assumed to be continuous and differentiable , and the corresponding gamma corrected function .note that is a special case of where .taking the logarithm yields with the derivatives , and .we can now define the invariant under gamma correction to be {0mm}{13 mm } & = \\ & = \end{tabular}\ ] ] the factor has been eliminated by taking derivatives , and has canceled out .furthermore , turns out to be completely specified in terms of the _ original _ image function and its derivatives , i.e. 
the logarithm actually does nt have to be computed .the notation indicates that the invariant depends on the underlying image function and location the invariance holds under gamma correction , not under spatial changes of the image function .a shortcoming of is that it is undefined where the denominator is zero .therefore , we modify to be continuous everywhere : {0mm}{8 mm } { \normalsize } & & { \normalsize if } \\ & & { \normalsize else } \\\end{tabular}\ ] ] where , for notational convenience , we have dropped the variable .the modification entails .note that the modification is just a heuristic to deal with poles .if all derivatives are zero because the image function is constant , then differentials are certainly not the best way to represent the function .if scaling is a transformation that has to be considered , then another parameter describing the change of size has to be introduced .that is , scaling is modeled here as variable substitution : the scaled version of is .so we are looking at the function where the derivatives with respect to are , , and . nowthe invariant is obtained by defining a suitable ratio of the derivatives such that both and cancel out : {0mm}{10 mm } & = \end{tabular}\ ] ] analogously to eq .( [ eq : thm12 g ] ) , we can define a modified invariant {0mm}{8 mm } { \normalsize } & & { \normalsize if cond2 } \\ & & { \normalsize else } \\\end{tabular}\ ] ] where condition cond1 is , and condition cond2 is .again , this modification entails .it is a straightforward albeit cumbersome exercise to verify the invariants from eqs .( [ eq : th12 g ] ) and ( [ eq : th123 g ] ) with an analytical , differentiable function . as an arbitrary example, we choose the first three derivatives are , , and .then , according to eq .( [ eq : th12 g ] ) , .if we now replace with a gamma corrected version , say , the first derivative becomes , the second derivative is , and the third is . 
if we plug these derivatives into eq .( [ eq : th12 g ] ) , we obtain an expression for which is identical to the one for above .the algebraically inclined reader is encouraged to verify the invariant for the same function .[ fig : analyex ] shows the example function and its gamma corrected counterpart , together with their derivatives and the two modified invariants .as expected , the graphs of the invariants are the same on the right as on the left .note that the invariants define a many - to - one mapping .that is , the mapping is not information preserving , and it is not possible to reconstruct the original image from its invariant representation .if or are to be computed on images , then eqs . ( [ eq : th12 g ] ) to ( [ eq : thm123 g ] ) have to be generalized to two dimensions .this is to be done in a rotationally invariant way in order to achieve invariance under similarity transformations .the standard way is to use rotationally symmetric operators .for the first derivative , we have the well known _gradient magnitude _ , defined as where is the 2-d image function , and , are partial derivatives along the x - axis and the y - axis . for the second order derivative, we can use the linear _ laplacian _ horn also presents an alternative second order derivative operator , the _quadratic variation _ since the qv is not a linear operator and more expensive to compute , we use the laplacian for our implementation . 
for the third order derivative ,we can define , in close analogy with the quadratic variation , a _cubic variation _ as the invariants from eqs .( [ eq : th12 g ] ) to ( [ eq : thm123 g ] ) remain valid in 2-d if we replace with , with , and with .this can be verified by going through the same argument as for the functions .recall that the critical observation in eq .( [ eq : th12 g ] ) was that cancels out , which is the case when all derivatives return a factor .but such is also the case with the rotationally symmetric operators mentioned above .for example , if we apply the gradient magnitude operator to , i.e. to the logarithm of a gamma corrected image function , we obtain returning a factor , and analogously for , qv , and cv .a similar argument holds for eq .( [ eq : th123 g ] ) where we have to show , in addition , that the first derivative returns a factor , the second derivative returns a factor , and the third derivative returns a factor , which is the case for our 2-d operators . while the derivatives of continuous , differentiable functions are uniquely defined , there are many ways to implement derivatives for _ sampled _ functions .we follow schmid and mohr , ter haar romeny , and many other researchers in employing the derivatives of the gaussian function as filters to compute the derivatives of a sampled image function via convolution .this way , derivation is combined with smoothing .the 2-d zero mean gaussian is defined as the partial derivatives up to third order are , , , , , , , , .they are shown in fig .[ fig : gausskernels ] . 
we used the parameter setting and kernel size these kernels , eq .( [ eq : th12 g ] ) , for example , is implemented as at each pixel , where denotes convolution .we evaluate the invariant from eq .( [ eq : thm12 g ] ) in two different ways .first , we measure how much the invariant computed on an image without gamma correction is different from the invariant computed on the same image but with gamma correction .theoretical , this difference should be zero , but in practice , it is not .second , we compare template matching accuracy on intensity images , again without and with gamma correction , to the accuracy achievable if instead the invariant representation is used .we also examine whether the results can be improved by prefiltering .a straightforward error measure is the _ absolute error _ , where `` 0gc '' refers to the image without gamma correction , and gc stands for either `` sgc '' if the gamma correction is done synthetically via eq .( [ eq : gammacorr ] ) , or for `` cgc '' if the gamma correction is done via the camera hardware . like the invariant itself ,the absolute error is computed at each pixel location of the image , except for the image boundaries where the derivatives and therefore the invariants can not be computed reliably .[ fig : imas ] shows an example image .the sgc image has been computed from the 0gc image , with .note that the gamma correction is done _after _ the quantization of the 0gc image , since we do nt have access to the 0gc image before quantization .[ fig : accuinv ] shows the invariant representations of the image data from fig .[ fig : imas ] and the corresponding absolute errors . since , we have .the dark points in fig . 
[ fig : accuinv ] , ( c ) and ( e ) , indicate areas of large errors .we observe two error sources : * the invariant can not be computed robustly in homogeneous regions .this is hardly surprising , given that it is based on differentials which are by definition only sensitive to spatial changes of the signal .* there are outliers even in the sgc invariant representation , at points of very high contrast edges .they are a byproduct of the inherent smoothing when the derivatives are computed with differentials of the gaussian .note that the latter put a ceiling on the maximum gradient magnitude that is computable on 8-bit images .in addition to computing the absolute error , we can also compute the relative error , in percent , as then we can define the set of _ reliable points _ , relative to some error threshold , as and , the percentage of reliable points , as where is the number of valid , i.e. non - boundary , pixels in the image .[ fig : reliapts ] shows , in the first row , the reliable points for three different values of the threshold .the second row shows the sets of reliable points for the same thresholds if we gently prefilter the 0gc and cgc images . the corresponding data for the ten test images from fig .[ fig : imadb ] is summarized in table [ tab : reliaperc ] .derivatives are known to be sensitive to noise .noise can be reduced by smoothing the original data before the invariants are computed . on the other hand , derivatives should be computed as locally as possible . 
with these conflicting goals to be considered , we experiment with gentle prefiltering , using a gaussian filter of size =1.0 .the size of the gaussian to compute the invariant is set to =1.0 .note that and can _ not _ be combined into just one gaussian because of the non - linearity of the invariant .with respect to the set of reliable points , we observe that after prefiltering , roughly half the points , on average , have a relative error of less than 20% .gentle prefiltering consistently reduces both absolute and relative errors , but strong prefiltering does not .template matching is a frequently employed technique in computer vision . here, we will examine how gamma correction affects the spatial accuracy of template matching , and whether that accuracy can be improved by using the invariant .an overview of the testbed scenario is given in fig .[ fig : templloca ] . a small template of size , representing the search pattern ,is taken from a 0gc intensity image , i.e. without gamma correction .this query template is then correlated with the corresponding cgc intensity image , i.e. the same scene but with gamma correction switched on .if the correlation maximum occurs at exactly the location where the 0gc query template has been cut out , we call this a _ correct maximum correlation position _ , or cmcp. 
the correlation function employed here is based on a normalized mean squared difference : where is an image , is a template positioned at , is the mean of the subimage of at of the same size as , is the mean of the template , and .the template location problem then is to perform this correlation for the whole image and to determine whether the position of the correlation maximum occurs precisely at .[ fig : matchtempl ] demonstrates the template location problem , on the left for an intensity image , and on the right for its invariant representation .the black box marks the position of the original template at ( 40,15 ) , and the white box marks the position of the matched template , which is incorrectly located at ( 50,64 ) in the intensity image . on the right , the matched template ( white )has overwritten the original template ( black ) at the same , correctly identified position .[ fig : correlexmpl ] visualizes the correlation function over the whole image .the white areas are regions of high correlation .the example from figs .[ fig : matchtempl ] and [ fig : correlexmpl ] deals with only _ one _ arbitrarily selected template . in order to systematically analyze the template location problem , we repeat the correlation process for all possible template locations. 
then we can define the _ correlation accuracy _ ca as the percentage of correctly located templates , where is the size of the template , is the set of correct maximum correlation positions , and , again , is the number of valid pixels .we compute the correlation accuracy both for unfiltered images and for gently prefiltered images , with .[ fig : corrcorrelpts ] shows the binary correlation accuracy matrices for our example image .the cmcp set is shown in white , its complement and the boundaries in black .we observe a higher correlation accuracy for the invariant representation , which is improved by the prefiltering .we have computed the correlation accuracy for all the images given in fig .[ fig : imadb ] .the results are shown in table [ tab : ca ] and visualized in fig .[ fig : correlaccuras ] .we observe the following : * the correlation accuracy ca is higher on the invariant representation than on the intensity images .* the correlation accuracy is higher on the invariant representation with gentle prefiltering , , than without prefiltering .we also observed a decrease in correlation accuracy if we increase the prefiltering well beyond .by contrast , prefiltering seems to be always detrimental to the intensity images ca .* the correlation accuracy shows a wide variation , roughly in the range 30%% for the unfiltered intensity images and 50%% for prefiltered invariant representations .similarly , the gain in correlation accuracy ranges from close to zero up to 45% . for our test images, it turns out that the invariant representation is always superior , but that does nt necessarily have to be the case . * the medians and means of the cas over all test images confirm the gain in correlation accuracy for the invariant representation . 
* the larger the template size , the higher the correlation accuracy , independent of the representation .a larger template size means more structure , and more discriminatory power .we have proposed novel invariants that combine invariance under gamma correction with invariance under geometric transformations . in a general sense ,the invariants can be seen as trading off derivatives for a power law parameter , which makes them interesting for applications beyond image processing .the error analysis of our implementation on real images has shown that , for sampled data , the invariants can not be computed robustly everywhere .nevertheless , the template matching application scenario has demonstrated that a performance gain is achievable by using the proposed invariant .bob woodham suggested to the author to look into invariance under gamma correction .his meticulous comments on this work were much appreciated .jochen lang helped with the acquisition of image data through the acme facility .d. forsyth , j. mundy , a. zisserman , c. coelho , c. rothwell , `` invariant descriptors for 3-d object recognition and pose '' , _ ieee transactions on pattern analysis and machine intelligence _ , vol.13 ,no.10 , pp.971 - 991 , oct.1991 .d. pai , j. lang , j. lloyd , r. woodham , `` acme , a telerobotic active measurement facility '' , sixth international symposium on experimental robotics , sydney , 1999 .see also : http://www.cs.ubc.ca/nest/lci/acme/ | _ this paper presents invariants under gamma correction and similarity transformations . the invariants are local features based on differentials which are implemented using derivatives of the gaussian . the use of the proposed invariant representation is shown to yield improved correlation results in a template matching scenario . _ |
"developments are currently underway to promote the sensitivity of ligo and to improve its prospect (...TRUNCATED) | "matched - filtering for the identification of compact object mergers in gravitational - wave antenn(...TRUNCATED) |
"the origin - destination ( od ) matrix is important in transportation analysis .the matrix contains(...TRUNCATED) | "the estimation of the number of passengers with the identical journey is a common problem for publi(...TRUNCATED) |
"a fair number of astronomers and astronomy students have a physical challenge .it is our responsibi(...TRUNCATED) | "making online resources more accessible to physically challenged library users is a topic deserving(...TRUNCATED) |
# Dataset Card for 'ML Articles Subset of Scientific Papers'
## Dataset Summary
The dataset consists of 32,621 instances drawn from the 'Scientific papers' dataset, a collection of scientific papers and their summaries from the ArXiv repository. This subset focuses on articles that are most similar in vocabulary, structure, and meaning to machine learning articles. It was created using sentence embeddings and K-means clustering.
## Supported Tasks and Leaderboards
The dataset supports text summarization tasks. In particular, it was created for fine-tuning transformer models for summarization. There are no established leaderboards at this time.
## Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
An instance in the dataset includes a scientific paper and its summary, both in English.
### Data Fields
- `article`: the full text of the scientific paper.
- `abstract`: the summary of the paper.
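Concretely, a record is a flat mapping with exactly these two string fields. The text below is invented placeholder content for illustration; real articles and abstracts are much longer.

```python
# A minimal illustrative record with the dataset's two fields.
# The strings here are placeholders, not actual dataset content.
example = {
    "article": "the full text of a scientific paper on, e.g., kernel methods ...",
    "abstract": "a short summary of the paper's contributions ...",
}
```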
### Data Splits
The dataset is split into:

- training: 30,280 articles
- validation: 1,196 articles
- test: 1,145 articles
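A quick sanity check on the figures above: the three splits sum to the 32,621 instances quoted in the summary, in roughly a 93/3.7/3.5 percent partition.

```python
# Split sizes as stated in the card.
splits = {"train": 30280, "validation": 1196, "test": 1145}
total = sum(splits.values())

# Print each split's share of the full subset.
for name, n in splits.items():
    print(f"{name}: {n} articles ({100 * n / total:.1f}%)")

print(f"total: {total}")  # → total: 32621
```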
## Dataset Creation
### Methods
The subset was created using sentence embeddings from SciBERT, a transformer model pretrained on scientific text. The embeddings were grouped into 6 clusters using the K-means algorithm, and the cluster closest (by cosine similarity) to articles strongly related to machine learning was chosen to form this dataset.
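The selection pipeline itself is not published with the card, but its shape can be sketched as follows. Everything below is an assumption for illustration: random vectors stand in for the SciBERT article embeddings, and a random reference vector stands in for the mean embedding of articles known to be about machine learning.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Stand-in for SciBERT sentence embeddings of 200 articles (768-dim).
embeddings = rng.normal(size=(200, 768))

# Group the embeddings into 6 clusters, as described above.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(embeddings)

# Stand-in for a reference embedding of known machine learning articles.
ml_reference = rng.normal(size=(1, 768))

# Choose the cluster whose centroid is closest to the reference
# by cosine similarity.
sims = cosine_similarity(kmeans.cluster_centers_, ml_reference).ravel()
ml_cluster = int(np.argmax(sims))

# Indices of the articles that would form the subset.
selected = np.flatnonzero(kmeans.labels_ == ml_cluster)
```

In the real pipeline, the embeddings would come from encoding each article (or its abstract) with SciBERT, for example via the `transformers` or `sentence-transformers` libraries.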
### Source Data
The dataset is a subset of the 'Scientific papers' dataset, which includes scientific papers from the ArXiv repository.
## Social Impact
This dataset could help improve the quality of summarization models for machine learning research articles, which in turn can make such content more accessible.
## Discussion of Biases
As the dataset focuses on machine learning articles, it may not be representative of scientific papers in general or other specific domains.
## Other Known Limitations
Because the subset was selected by an automatic clustering procedure, it may not include all machine learning articles and may inadvertently include articles from other fields.
## Dataset Curators
The subset was created as part of a project aimed at building an effective summarization model for machine learning articles.