traffic is a realization of an open one - dimensional many - body system . recently ,popkov and schtz found that the fundamental diagram determines the phase diagram of such a system , at least for a very simple , yet exactly solvable toy model , the so called asymmetric exclusion process ( asep ) .in particular , the most important feature that influences the phase diagram is the number of extrema in the fundamental diagram .this is exactly the theme of this report .we present an extension of classical , macroscopic ( `` fluid - like '' ) traffic flow models .usually , it is assumed that the fundamental diagram is a one - hump function , however recent empirical results point to more complicated behaviour .it is impossible to assign a single flow function to the measured data - points in a certain density range .therefore , it can be speculated , that this scatter hides a more complicated behaviour of the fundamental diagram in this regime .we explore two qualitatively different forms of the safe velocity , the velocity to which the flow tends to relax , which leads from the usual one - hump behaviour of the flow density relation to a more complicated function that exhibits , depending on the relaxation parameter , one , two or three humps .obviously , real drivers may have different , adding another source of dynamical complexity , which will not be discussed in this paper .if the behaviour of individual vehicles is not of concern , but the focus is more on aggregated quantities ( like density , mean velocity etc . ), one often describes the system dynamics by means of macroscopic , fluid - like equations .the form of these navier - stokes - like equations can be motivated from anticipative behaviour of the drivers .+ assume there is a safe velocity that only depends on the density .the driver is expected to adapt the velocity in a way that relaxes on a time scale to this desired velocity corresponding to the density at , if both sides are taylor - expanded to first order one finds inserting abbreviating with the payne equation is recovered : if one seeks the analogy to the hydrodynamic equations one can identify a `` traffic pressure '' . in this sense traffic follows the equation of state of a perfect gas ( compare to thermodynamics : ) .+ the above described procedure to motivate fluid - like models can be extended beyond the described model in a straight forward way . if , for example , eq .( [ ansatz ] ) is expanded to second order , quadratic terms in are neglected , the abbreviation is used and the terms in front of are absorbed in the coupling constant , one finds : the primes in the last equation denote derivatives with respect to the density . since these equations allow infinitely steep velocity changes , we add ( as in the usual macroscopic traffic flow equations , ) a diffusive term to smooth out shock fronts : since a vehicle passing through an infinitely steep velocity shock front would suffer an infinite acceleration , we interpret the diffusive ( `` viscosity '' ) term as a result of the finite acceleration capabilities of real world vehicles . our model equations ( [ modeq ] ) extend the equations of the khne - kerner - konhuser ( in the sequel called k model ; , ) model by a term coupling to the second derivative of the desired velocity . 
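The display equations referred to above are not reproduced in this text, so the following standard forms are quoted for orientation only, in the usual notation (safe velocity V_e, relaxation time τ, pressure constant c_0, viscosity coefficient μ); they are textbook expressions, not copies of the paper's eqs. ([ansatz]) and ([modeq]). The Payne equation and its viscous Kühne-Kerner-Konhäuser extension read
\[
\partial_t v + v\,\partial_x v \;=\; \frac{V_e(\rho)-v}{\tau}\;-\;\frac{c_0^{2}}{\rho}\,\partial_x \rho ,
\qquad P(\rho)=c_0^{2}\rho ,
\]
\[
\partial_t v + v\,\partial_x v \;=\; \frac{V_e(\rho)-v}{\tau}\;-\;\frac{c_0^{2}}{\rho}\,\partial_x \rho\;+\;\frac{\mu}{\rho}\,\partial_x^{2} v ,
\qquad
\partial_t \rho + \partial_x(\rho v)=0 .
\]
The model equations ([modeq]) of this paper contain, in addition, a term coupling to the second derivative of the safe velocity; its precise form is given in the original and is not reconstructed here.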
throughout this studywe use ms , ms and m .the form of the safe velocity plays an important role in this class of models ( as can be seen , for example , from the linear stability analysis of the model ) .however , experimentally the relation between this desired velocity and the vehicle density is poorly known .it is reasonable to assume a maximum at vanishing density and once the vehicle bumpers touch , the velocity will ( hopefully ) be zero .+ to study the effect of the additional term in the equations of motion we first investigate the case of the conventional safe velocity given by a fermi - function of the form since is at present stage rather uncertain , we also examine the effects of a more complicated relation between the desired velocity and the density . for this reason we look at a velocity - density relation that has a plateau at intermediate densities , which , in a microscopic interpretation , means that in a certain density regime drivers do not care about the exact distance to the car ahead .we chose an -function of the form \label{vdes_eq}\ ] ] with where is used .the parameters , and m s are used throughout this study , the corresponding safe velocity and flow are shown in fig .note that the densities are always normalized with respect to their maximum possible value which is given by the average vehicle length as .we use a lagrangian particle scheme to solve the navier - stokes - like equations for traffic flow . a particle method similar to the smoothed particle hydrodynamics method( sph ; ) has been used previously to simulate traffic flow , the method we use here , however , differs in the way the density and the derivatives are calculated .the particles correspond to moving interpolation centers that carry aggregated properties of the vehicle flow , like , for example , the vehicle density .they are not to be confused with single `` test vehicles '' in the flow , they rather correspond to `` a bulk '' of vehicles .+ the first step in this procedure is to define , what is meant by the term `` vehicle density '' .since we assign a number indicating the corresponding vehicle number to each particle with position , the density definition is straight forward , i.e. the number of vehicles per length that can be assigned unambiguously to particle , or once this is done one has to decide in which way spatial derivatives are to be evaluated .one possibility would be to take finite differences of properties at the particle positions .however , one has to keep in mind that the particles are not necessarily distributed equidistantly and thus in standard finite differences higher order terms do not automatically cancel out _exactly_. the introduced errors may be appreciable in the surrounding of a shock and they can trigger numerical instabilities that prevent further integration of the system .therefore we decided to evaluate first order derivatives as the _ analytical derivatives _ of cubic spline interpolations through the particle positions .second order derivatives of a variable are evaluated using centered finite differences where and are evaluated by spline interpolation and is an appropriately chosen discretisation length . 
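A minimal Python sketch of the interpolation machinery just described, using SciPy's cubic splines. The half-spacing assignment of road length to each particle and the one-sided treatment of the end points are assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def particle_density(x, n):
    """Vehicle density at each particle: the weight n_i (number of vehicles
    carried by particle i) divided by the stretch of road attributable to
    that particle, taken here as half the distance to each neighbour."""
    dx = np.empty_like(x)
    dx[1:-1] = 0.5 * (x[2:] - x[:-2])
    dx[0] = x[1] - x[0]             # one-sided at the ends (or wrap on a ring)
    dx[-1] = x[-1] - x[-2]
    return n / dx

def first_derivative(x, f):
    """Analytical first derivative of the cubic-spline interpolant through
    the particle values f_i, evaluated at the particle positions."""
    return CubicSpline(x, f)(x, 1)

def second_derivative(x, f, h):
    """Centred finite difference of the splined quantity with an
    appropriately chosen discretisation length h, as described above."""
    spl = CubicSpline(x, f)
    return (spl(x + h) - 2.0 * spl(x) + spl(x - h)) / h**2
```

Taking first derivatives from the spline rather than from finite differences of unevenly spaced particle data is what avoids the spurious higher-order error terms near shocks mentioned above.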
since we do not evolve the `` weights '' in time , there is no need to handle a continuity equation , the total vehicle number is constant and given as + denoting the left hand side of ( [ modeq ] ) in lagrangian form , we are left with a first order system : this set of equations is integrated forward in time by means of a fourth order accurate runge - kutta integrator with adaptive time step . + the described scheme is able to resolve emerging shock fronts sharply without any spurious oscillations .an example of such a shock front is shown in fig .[ shockfront ] for the -model .traffic modelling as well as traffic measurements have a long - standing tradition ( e.g. greenshields , lighthill and whitham , richards , gazis et al . , treiterer , to name just a few ) . in recent yearsphysicists working in this area have tried to interpret and formulate phenomena encountered in traffic flow in the language of non - linear dynamics ( see , for example , the review of kerner and many of the references cited therein ) .the measured real - world data reveal a tremendous amount of different phenomena many of which are also encountered in other non - linear systems .+ to identify properties of our model equations we apply them to a closed one - lane road loop .the loop has a length of km and we prepare initial conditions close to a homogeneous state with density ( i.e. same density and velocity everywhere ) .the system is slightly perturbed by a sinusoidal density perturbation of fixed maximum amplitude and a wavelength equal to the loop length .the particles are initially distributed equidistantly , the weights are assigned according to ( [ dens ] ) in order to reproduce the desired density distribution , and the velocities corresponding to are used .all calculations are performed using 500 particles . + it is important to keep in mind that the results are only partly comparable to real world data since the latter may reflect the response of the non - linear system to external perturbations like on - ramps , accidents etc . which are not included in the model .+ in the following the model parameter , which determines the time scale on which the flow tries to adapt on , is allowed to vary .this corresponds to a varying acceleration capability of the flow due to a changing vehicle composition ( percentage trucks etc . ) .this parameter which is typically of the order of seconds controls a wide variety of different dynamical phenomena .a similar result has been found in for a microscopic car - following model .the -isocline , i.e. the locus of points in the -plane for which , is found from eq .( [ eq1 ] ) to be the abscissa .the velocity isocline , i.e. the -points where the acceleration vanishes can be inferred from eq .( [ eq2 ] ) . 
for the homogeneous and stationary solution one finds the isocline velocity as a function of : fixed - points of the flow , defined as intersections of the - and -isoclines ,are thus expected only for densities above , where approaches the abscissa , see fig .[ force_free_fd ] .the flow of the homogeneous and stationary solution has no fixed point in a strict sense since becomes extremely small at , but does not vanish exactly .however , this could easily be changed by choosing another form of .[ force_free_fd ] shows these `` force free velocities '' ( in the homogeneous and stationary limit ) together with the `` force free fundamental diagrams '' ( for s ) for both investigated forms of .we expect the fundamental diagrams ( fd ) found from the numerical analysis of the full equation set to be centered around these `` force free fundamental diagrams '' . while for the of the -model the fd always ( i.e. for s ) exhibits a simple one - hump behaviour , eq .( [ vdes_eq ] ) leads to a one , two or three hump structure of the fd depending on the time constant .that is why a stronger dependence of qualitative features on this constant may be expected for the plateau safe velocity .note , however , that even with the conventional the `` force free velocity '' exhibits for s two additional extrema at intermediate densities ( up to four with the plateau function ) .the implications of these additional extrema for the stability of the flow are discussed below .+ to get a preliminary idea about the stability regimes of the model it is appropriate to perform a _ linear stability analysis_. by inserting ( [ vel_iso ] ) into the equations of motion we obtain equations that formally look like the equations of the -model : the role of now being played by .thus with the appropriate substitution the _ linear stability criterion _ of can be used : \frac { c_0}{\rho}. \label{stab_crit}\ ] ] thus we expect the flow to be linearly unstable in density regimes where the decline of with is steeper than a given threshold .specifically , extrema of the are ( to linear order ) stable and we therefore expect _ stable density regions embedded in unstable regimes_. the previous analytical considerations give a rough idea of what to expect , for a more complete analysis , however , we have to resort to a numerical treatment of the full equation set . in order to be able to distinguish the effects resulting from the additional term in the equations of motion from those coming from the form of we treat two cases separately : in the first case the conventional form of is used and in the second the effects due to a plateau in are investigated . to obtain a * fundamental diagram * ( fd ) comparable to measurements we chose a fixed site on our road loop .we determine averages over one minute in the following way : and , where is the number of particles that have passed the reference point within the last minute . 
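Two of the relations above can be restated for orientation, with the caveat that neither is copied from the paper. For Payne-type models the linear stability threshold of eq. ([stab_crit]) is of the form
\[
\left|\frac{d\tilde V(\rho)}{d\rho}\right| \;>\; \frac{c_0}{\rho},
\]
i.e. homogeneous flow is unstable where the effective velocity falls off with density faster than c_0/ρ (a bracketed correction factor present in the original equation is omitted here). For the one-minute detector averages, one natural, but here assumed, choice is
\[
\bar J=\frac{1}{T}\sum_{\alpha\,\in\,\mathrm{passing}} n_\alpha ,
\qquad
\bar v=\frac{1}{N}\sum_{\alpha\,\in\,\mathrm{passing}} v_\alpha ,
\qquad
\bar\rho=\frac{\bar J}{\bar v},
\qquad T=1\ \mathrm{min},
\]
where the sums run over the N particles that crossed the reference site during the interval and n_α are their weights.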
the thus calculated fd ( fig .[ fd_alltau ] , left panel ) is , as expected , close to a superposition of the `` force - free fds '' , for different values of see fig .[ force_free_fd ] , left panel .as in real - world traffic data in the higher density regimes the flow is not an unambiguous function of , but rather covers a surface given by the range of in the measured data .note that many data points in the unstable regime ( see below ) exhibit substantially higher flows than expected from the `` force - free fds '' ( see fig .[ force_free_fd ] ) .+ in certain density ranges the model shows * instability * with respect to jam formation from an initial slight perturbation .in this regime the initial perturbation of the homogeneous state grows and finally leads to a breakdown of the flow into a backward moving jam ( kerner refers to this state , where vehicles come in an extended region to a stop , as `` wide jam '' ( wj ) contrary to a `` narrow jam '' ( nj ) which basically consists only of its upstream and downstream fronts and vehicles do not , on average , come to a stop ; ) .this phenomenon , widely known as `` jam out of nowhere '' , is reproducible with several traffic flow models ( e.g. ) .an example of a spontaneously forming wj accompanied by two njs is given in fig .[ widejam ] , left panel , for an initial density of .it is interesting to note that the initial perturbation remains present in the system for approximately 15 minutes without noticeably growing in amplitude before the flow breaks down . as in realitythe inflow front of the wj is much steeper than the outflow front of the jam .note the similarity with the jam formation process within the k-model .+ to give a global idea in which density regimes congestion phenomena occur we show in fig .[ velvar ] , left panel , the velocity variance for given initial densities .the system is allowed to evolve from its initial state until converges . if has not converged after a very long time (= 10000 s ) it is assumed that no stationary state ( const ) can be reached and is taken at . denotes the particle number and the average particle velocity in the system . for low values of ( 1 and 5 s ) the system shows spontaneous jam formation in a coherent density regime from to , comparable to measured data . for s a stable regime at intermediate densities surrounded by unstable density regimesis encountered .this region corresponds to the two close extrema seen in fig .[ force_free_fd ] , first panel . +another widespread phenomenon is the formation of several jams following each other , so - called * stop - and - go - waves*. this phenomenon is also a solution of our model equations , see fig .[ stop_and_go ] , left panel .the emerging pattern of very sharply localized perturbations is found in empirical traffic data as well ( see fig .14 , detector d7 in ) . +a very interesting phenomenon happens towards the upper end of the instability range ( ) .after the initial perturbation has remained present in the system for more than 20 minutes without growing substantially in amplitude , see fig .[ plateau1 ] , suddenly a sharp velocity spike appears at s that broadens in the further evolution until the system has separated into two phases : a totally queued phase , where the velocity vanishes on a distance of several kilometers , and a homogeneous high velocity phase , both separated by a shock - like transition .we refer to these states with homogeneous velocity plateaus separated by shock fronts as * mesa states*. 
the numerically determined * fundamental diagrams * for the case with plateau is shown in fig .[ fd_alltau ] , right panel .the additional extrema expected from the `` force - free velocity '' are visible in the data points .we therefore conclude that _ if a pronounced plateau in really does exist , additional extrema should appear in the measured fundamental diagrams , at least for flows with poor acceleration capabilities , i.e. large . + also with the plateau function the system shows * spontaneous jam appearance*. the formation of an isolated , stable wj is displayed in fig .refwidejam , right panel . with a change in the parameter ( 10 s rather than 5 s as in fig . [ widejam ] )one finds a more complicated pattern with one wj that coexists for a long time with constantly emerging and disappearing njs , see fig .[ jamsync ] .+ the global stability properties for the case with plateau are shown in fig .[ velvar ] , right panel . as expected from the linear stability analysis( [ stab_crit ] ) and fig .[ force_free_fd ] , right panels ) we find alternating regimes of stability and instability rather than one coherent density range where the flow is prone to instability . for low ( ) and very high density ( ) , initial perturbations decrease in amplitude , i.e. the system relaxes towards the homogeneous state . in between these density perturbationsmay grow and lead to spontaneous structure formation of the flow .the stable regions within unstable flow are found around densities for which .this is displayed for two values of in fig .[ velvar_comp ] .+ the * accelerations * in the model were never found to exceed ms for negative and ms for positive signs and thus agree with accelerations from real - world traffic data ( for both forms of ) . for reasons of illustration fig .[ acc ] displays velocities and the corresponding accelerations at one time slice of a simulation ( ms s ) for according to eq .( [ vdes_eq ] ) .also the plateau function allows for * stop - and - go - waves * , see fig . [ stop_and_go ] , right panel .the shown evolution process is close to what kerner describes as general features of stop - and - go - waves : initiated by a local phase transition from free to synchronized flow , numerous well localized njs emerge , move through the flow and begin to grow .one part of the njs propagates in the downstream direction ( see e.g. the perturbations located at km at s ) while the rest ( at s at km ) move upstream .once the first wj has formed after approximately 500 s the njs start to merge with it .this nj - wj merger process continues until a stationary pattern of three wjs has formed ( at around 1000 s ; not shown ) which moves with constant velocity in upstream direction .the distance scale of the downstream fronts of these self - formed wjs is in excellent agreement with the experimental value of 2.5 - 5 km .+ we found for the conventional form of a separation into different homogeneous velocity phases that we called * mesa states*. this feature is also present if the plateau function is used . in fig .[ plateau1 ] , right panel , the initial perturbation organizes itself into different platoons of homogeneous velocities .these platoons are separated by sharp , shock - like transitions and form a stationary pattern that moves along the loop without changing in shape . + the relaxation term in eq .( [ modeq ] ) plays a crucial role for the stabilisation of this pattern . 
if , for example , the relaxation time is increased ( see fig .[ plateau2 ] ) and thus the importance of the relaxation term is reduced , the system is not able to stabilize the velocity plateau .it seems to be aware of these states , but it is always heavily disturbed and never able to reach a stationary state .again , the composition of the traffic flow plays via _ the _ crucial role for the emerging phenomena .starting from the assumption that a safe velocity exists towards which drivers want to relax by anticipating the density ahead of them , we motivate a set of equations for the temporal evolution of the mean flow velocity .the resulting partial differential equations possess a navier - stokes - like form , they extend the well - known macroscopic traffic flow equations of khne , kerner and konhuser by an additional term proportional to the second derivative of .motivated by recent empirical results , we explore , in addition to the new equation set , also the effects of a -function that exhibits a plateau at intermediate densities .the results are compared to the use of a conventional form of .+ these fluid - like equations are solved using a lagrangian particle scheme , that formulates density in terms of particle properties and evaluates first order derivatives analytically by means of cubic spline interpolation and second order derivatives by equidistant finite differencing of the splined quantities .the continuity equation is fulfilled automatically by construction .this method is able to follow the evolution of the ( in some ranges physically unstable ) traffic flow in a numerically stable way and to resolve emerging shock - fronts accurately without any spurious oscillations .+ the presented model shows for both investigated forms of a large variety of phenomena that are well - known from real - world traffic data .for example , traffic flow is found to be unstable with respect to jam formation initiated by a subtle perturbation around the homogeneous state . as in reality, stable backward - moving _ wide jams _ as well as sharply localized _ narrow jams _ form .these latter ones move through the flow without leading to a full breakdown until they merge and form wide jams .the distance - scale of the downstream fronts of these self - formed wide jams is in excellent agreement with the empirical values .the encountered accelerations are in very good agreement with measured values . for the with plateauwe also find states where a stable wide jam coexists with narrow jams that keep emerging and disappearing without ever leading to a breakdown of the flow , properties that are usually attributed to the elusive state of `` synchronized flow '' .another , striking phenomena is encountered that we call the `` mesa - effect '' : the flow may organize into a state , where platoons of high and low velocity follow each other , separated only by a very sharp , shock - like transition region .this pattern is found to be stationary , i.e. it moves forward without changing shape .one may speculate , that these mesa states are related to the minimum flow phase found in the work using the asep as a model for traffic flow .+ in other regions of parameter space the flow is never able to settle into a stationary state . 
herewide jams and a multitude of emerging moving or disappearing narrow jams may coexist for a very long time .again , it may be presumeded that in these cases the system displays deterministic chaos , however , we did not check this beyond any doubt .+ the basic effect of the new interaction term is to make the `` force - free velocity '' , which essentially determines the shape of the fundamental diagram , sensitive to the relaxation parameter . for large values of additional extrema in the `` force - free flows '' are introduced and a stability analysis shows that that the flows are stable against perturbations in the vicinity of these extrema .this leads to the emergence of alternating regimes of stability and instability , the details of which depend on the shape of .we find that if a pronounced plateau in really does exist , it should appear in the measured fundamental diagrams , at least for flows with poor acceleration capabilities , i.e. large s .+ the crucial parameter besides density which determines the dynamic evolution of the flow and all the related phenomena is the relaxation time .since this parameter governs the time scale on which the flow tries to adapt to the desired velocity , we may interpret it as a measure for the flow composition ( fraction of trucks etc . ) .it is this composition that determines whether / which structure formation takes place , whether the system relaxes into a homogeneous state , forms isolated wide jams or a multitude of interacting narrow jams . to conclude, this work shows that a surprising richness of phenomena is encountered if one allows for a slight change of the underlying traffic flow equations .further work is needed in order to extend the qualitative description undertaken in this work and to find more quantitative relationships between the traffic flow models and reality . | based on the assumption of a safe velocity depending on the vehicle density a macroscopic model for traffic flow is presented that extends the model of the khne - kerner - konhuser by an interaction term containing the second derivative of . we explore two qualitatively different forms of : a conventional , fermi - type function and , motivated by recent experimental findings , a function that exhibits a plateau at intermediate densities , i.e. in this density regime the exact distance to the car ahead is only of minor importance . to solve the fluid - like equations a lagrangian particle scheme is developed . + the suggested model shows a much richer dynamical behaviour than the usual fluid - like models . a large variety of encountered effects is known from traffic observations many of which are usually assigned to the elusive state of `` synchronized flow '' . furthermore , the model displays alternating regimes of stability and instability at intermediate densities , it can explain data scatter in the fundamental diagram and complicated jam patterns . within this model , a consistent interpretation of the emergence of very different traffic phenomena is offered : they are determined by the velocity relaxation time , i.e. the time needed to relax towards . this relaxation time is a measure of the average acceleration capability and can be attributed to the composition ( e.g. the percentage of trucks ) of the traffic flow . |
the concept of coherent control precise measurement or determination of a process through control of the phase of an applied oscillating field has been applied to many different systems , including quantum dynamics , trapped atomic ions , chemical reactions , cooper pairs , quantum dots and thz generation to name but a few .a plasma wave is a coherent and deterministically evolving structure that can be generated by the interaction of laser light with plasma .it is therefore natural to assume that coherent control techniques may also be applied to plasma waves .plasma waves produced by high power lasers have been studied intensively for their numerous applications , such as the production of ultrashort pulses by plasma wave compression , generation of extremely high power pulses by raman amplification , for inertial confinement fusion ignition schemes , as well as for fundamental scientific investigations . in particular ,laser wakefield acceleration of ultra - relativistic electron beams , has been a successful method for accelerating electrons to relativistic energies over a very short distance . in laser wakefield acceleration , an electron bunch ` surfs ' on the electron plasma wave generated by an intense laser and gains a large amount of energy .the accelerating electric field strength that the plasma wave can support can be many orders of magnitude higher than that of a conventional accelerator , which makes laser wakefield acceleration an exciting prospect as an advanced accelerator concept .however , although highly competitive in terms of accelerating gradient , beams from laser wakefield accelerator experiments are currently inferior to conventional accelerators in terms of other important characteristics , such as energy spread and stability .in addition , due to constraints in laser wakefield technology , experimental demonstrations have predominantly been performed in single shot operation , far below the khz - mhz repetition rates of conventional accelerators . in recent years, deformable mirror adaptive optical systems have been successfully implemented in high intensity laser experiments to increase the peak laser intensity by improving the beam focusability , especially in systems using high numerical aperture optics .the shape of the deformable mirror is generally determined in a closed loop where either a direct measurement of the wavefront is performed or some nonlinear optical signal is used as feedback in an iterative algorithm .the objective of adaptive optics has largely been optimization of the laser focal shape to a near diffraction - limited spot , thus producing the highest possible intensity .adaptive optics can also be useful for certain focal profile shaping , optimization of a laser machining process or harmonic generation . in the following ,we demonstrate that orders of magnitude improvement to electron beam properties from a laser wakefield accelerator operating at khz repetition rate can be made , through the use of a genetic algorithm coupled to a deformable mirror adaptive optical system to coherently control the plasma wave formation . the electron image from a scintillator screenwas processed and used in the fitness function as feedback for the genetic algorithm . 
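To make the optimisation loop concrete, here is a toy Python sketch of a genetic algorithm over the 37 actuator voltages of the deformable mirror. The population size, elitism, crossover and mutation parameters and the voltage range are illustrative assumptions, not the experiment's values; `fitness(voltages)` stands for a hypothetical wrapper that sets the mirror shape, acquires a scintillator image and returns a figure of merit.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACT, V_MIN, V_MAX = 37, 0.0, 60.0   # 37 actuators; the voltage range is assumed

def evolve(fitness, n_pop=20, n_gen=100, elite=4, sigma=3.0):
    """Toy genetic algorithm over deformable-mirror voltage patterns.
    Starts from the 'flat' 30 V shape mentioned in the text and maximises
    the user-supplied figure of merit."""
    pop = np.clip(np.full((n_pop, N_ACT), 30.0)
                  + rng.normal(0.0, sigma, (n_pop, N_ACT)), V_MIN, V_MAX)
    for _ in range(n_gen):
        scores = np.array([fitness(v) for v in pop])
        parents = pop[np.argsort(scores)[::-1][:elite]]    # keep the fittest shapes
        children = []
        while len(children) < n_pop - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(N_ACT) < 0.5                  # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0.0, sigma, N_ACT)
            children.append(np.clip(child, V_MIN, V_MAX))
        pop = np.vstack([parents, children])
    return pop[0]           # best shape of the final evaluated generation
```

With real-time diagnostics at kHz repetition rate each fitness evaluation is fast, which is why such a loop can converge within a practical time frame, as reported below.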
using this method, we were able to improve the beam properties significantly .this result was not simply due to an improvement in focal quality since a laser pulse with the ` best ' ( highest intensity / lowest ) focus in vacuum produced a greatly inferior electron beam compared with a laser pulse optimized using the electron beam properties themselves .it was found that the focal spot optimized for electron beam production had pronounced intensity ` wings ' .modifications to the phase front of the tightly focusing laser alter the light propagation , which experiences strong optical nonlinearities in the plasma , and therefore affect the plasma wave dynamics in a complex but deterministic manner .the experiment was performed using the relativistic lambda - cubed ( ) laser system ( see methods ) .the output laser beam was reflected from a deformable mirror and focused onto a free - flowing argon gas plume to produce an electron beam by laser wakefield acceleration ( see methods ) at 500 hz .electrons were measured using a scintillating screen imaged onto a lens - coupled ccd camera .the experimental setup is shown schematically in fig .[ setup ] .we first implemented a genetic algorithm for laser focus optimization using the second - harmonic signal generated from a beta barium borate ( -bbo ) crystal ( setup a in fig .[ setup ] ) .the laser spot was optimized such that highest peak intensity is achieved when the second harmonic generation is strongest .subsequently , we modified the fitness function to use a figure of merit ( fom , refer to equation [ fom2 ] in methods ) from the electron scintillation data , calculating the inverse distance weighting ( with power parameter n ) to a single point for all pixel intensities within an electron image .the pixel of the optimization point was _ dynamically _ adjusted during the genetic algorithm to concentrate all electron signal to the peak location of the charge distribution during each generation .the genetic algorithm was initialized using a ` flat ' mirror shape with 30 v for all actuators to allow immediate deformation in both directions . for comparison , electron beams produced by the ` best ' laser focus ( by optimizing the intensity ) and the initial mirror shape at 30 vare shown in fig .[ ebeam]a and b respectively .the optimized electron beam profiles are shown in fig .[ ebeam]e - j for various weighting parameters , .the genetic algorithm converged to the best electron beam using in terms of beam divergence and peak charge density .the peak charge density was increased by a factor of 20 compared to the initial electron beam profile before optimization ( see fig . [ ebeam]d ) .the optimized electron profile is highly stable and collimated , with a full - width - at - half - maximum ( fwhm ) divergence of mrad and mrad .the shot - to - shot pointing ( defined by the centroid position ) fluctuation of the electron beam is less than 1 mrad ( root mean square , r.m.s . ) .the integrated charge was increased by more than two - fold from the electron beams generated by a laser focus of highest intensity .the high repetition rate and real - time diagnostics permit implementation of the algorithm within a practical time frame using a standard personal computer .typical optimization takes only a few minutes ( iterations ) to reach convergence ( see fig . [ebeam]c ) .the second harmonic optimization generates a near - diffraction - limited focal spot as shown in fig .[ spot]a for the far - field laser intensity profile in vacuum . 
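One plausible reading of the figures of merit described here and in Methods, whose explicit formulas are not reproduced in this text: an inverse-distance-weighted sum of pixel intensities with power parameter n, re-targeted each generation to the brightest pixel, and, for the spectrum optimisation, the mean counts inside a pre-defined mask. The function names and defaults below are assumptions.

```python
import numpy as np

def beam_fom(image, n=2, target=None):
    """Inverse-distance-weighted figure of merit for an electron-beam image.
    Every pixel intensity is summed with weight 1/r**n, where r is the
    distance to a single target pixel; if no target is given, the brightest
    pixel of the current image is used, mimicking the dynamic re-targeting
    to the peak of the charge distribution described in the text."""
    img = np.asarray(image, dtype=float)
    if target is None:
        target = np.unravel_index(np.argmax(img), img.shape)
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - target[0], xx - target[1])
    r[target] = 1.0                  # avoid dividing by zero at the target pixel
    return float(np.sum(img / r**n))

def spectrum_fom(image, mask):
    """Mean counts inside a rectangular mask (boolean array of the same
    shape as the image), used to steer the dispersed energy spectrum."""
    img = np.asarray(image, dtype=float)
    return float(img[mask].sum() / mask.sum())
```

Either function could serve as the `fitness` callback of the genetic-algorithm sketch above, once wrapped in the hardware calls that apply the mirror shape and grab a camera frame.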
in figs .[ spot]a and b , we compare the transverse intensity distribution within the focal region over the length scale of the gas jet .the laser profile ( fig .[ spot]b ) that produces the best electron beam exhibits several low intensity side lobes around the central peak , and has a peak intensity about half that of the optimized focus .the complex laser profiles appear to have a very dramatic effect on the structure of the plasma waves produced and consequently the electron beam profile .[ spot]c shows the relative wavefront change recorded by a shack - hartmann wavefront sensor .calculation using the reconstructed wavefront gives a strehl ratio of about 0.5 , which is in agreement with the far - field intensity measurement .this small wavefront modification of the driver pulse can lead to a significant improvement in the electron beam properties through the relativistic nonlinear optics of the plasma .the relative position between the focal plane and the center of the gas flow was controlled by moving the nozzle .scanning the gas nozzle both before and after genetic algorithm confirms the optimal focal position does not change , excluding the possibility that the improvement may be due to optimizing the focal position .furthermore , we extended the genetic algorithm optimization to _ control the electron energy distribution_. through control of the light propagation , the plasma wave amplitude will be affected and therefore also the strength of the accelerating gradient .hence , we can expect to be able to modify the energy spectrum .a high resolution energy analyzer using a dipole magnet pair was used to obtain the electron energy spectrum as the electrons were dispersed in the horizontal plane in the magnetic field .a 150 m pinhole was placed 2.2 cm from the electron source to improve the energy resolution of the spectrometer .the schematic setup is shown in fig .[ espec]a .the energy resolution limited by the entrance pinhole and transverse emittance of the beam is estimated to be 2 kev for the energy range of measurement .three rectangular masks are set in the low- , mid- and high - energy region on the dispersed data , namely masks i , ii , and iii in fig .[ espec]b .we employed a fitness function ( see methods ) to preferentially maximize the total counts inside the mask .raw spectra from the genetic algorithm optimization are displayed in fig .[ espec]b , showing that the brightest part has shifted congruently .the resulting spectra have mean energies of 89 kev , 95 kev and 98 kev respectively for masks i , ii , and iii , noting that they do not fall on the visual centroid of the image because the scintillator sensitivity is not included in the presentation of the raw data , however it was taken into account for computing the mean energies .our results show that manipulation of electron energy distribution using the deformable mirror is somewhat restricted .the final result after optimization does not reach the objective mask completely despite that the mean energies can be varied by up to 10% .this result is somewhat unsurprising as the scope for controlling the electron spectrum is mostly limited by the physics of the interaction - while changing the transverse intensity profile can make big differences to the shape of the plasma electric field structure , changing the maximum field amplitude of the wakefield ( and therefore peak energy of the accelerated electrons ) will be limited .although the details of the initial conditions required for optimal beams are difficult to 
determine and are therefore found using the genetic algorithm , we can at least demonstrate how modifications to the phase front of the laser pulse can improve the beam properties with an example . to illustrate the underlying physics of the plasma wave dynamics determined by the conditions of the driving laser pulse , we performed two dimensional ( 2d ) particle - in - cell ( pic ) simulations using the osiris framework . parameters similar to the experiment conditions were used , with a gaussian plasma density profile to enable trapping of electrons in the density down ramp ( see ref . and methods for details on the simulation ) .it was previously shown in ref . that the focusing fields of laser plasma accelerators can be controlled by tailoring the transverse intensity profile of the laser pulse using higher - order modes , where generalization to 3d was also discussed . here , we simulated a laser pulse with a fundamental gaussian mode ( tem ) or a coherent superposition of a fundamental ( tem ) and a second - order hermite - gaussian ( tem ) mode ( fig .[ sim]a ) .although the plasma wave has a larger amplitude when it is driven by a single mode laser pulse , the wake phase front evolves a backward curvature when electrons are trapped and accelerated ( see top panel in supplementary movie 1 and fig .[ sim]b ) .contrastingly , the evolution of the wakefield driven by the laser pulse with additional mode forms a flatter plasma phase front _ at the point of trapping _ ( fig .[ sim]d ) . in fig .[ sim]c and e , the momentum distribution of the forward accelerated electrons has a larger transverse spread for the single mode laser pulse compared to the one with the addition of higher order modes .this is a consequence of the different trapping conditions and accelerating fields from the coherent plasma wakefield structure , which is governed by the structure of the driving laser pulse . in a comparative testto show this effect is not simply due to a lower intensity , we repeated the simulation using a single fundamental gaussian mode laser pulse with a larger focal spot with the same peak intensity as that with the superimposed modes .the wakefield evolution shows very similar response as fig .[ sim]b and does not develop a flatter phase front as seen in fig .the subsequently accelerated electrons have very similar divergence to that in fig .[ sim]c , eliminating the possibility that the improvement comes from either a high intensity effect or a simple change in -number ._ note that we are not saying that this mixed - hermite - gaussian mode is the optimal pulse in the experiment ; this is simply an illustration of how small changes in pulse shape can have significant effects on electron beam properties .when a particular wavefront of laser light interacts with plasma , it can affect the plasma wave structures and trapping conditions of the electrons in a complex way .for example , raman forward scattering , envelope self - modulation , relativistic self - focusing , and relativistic self - phase modulation and many other nonlinear interactions modify both the pulse envelope and phase as the pulse propagates , in a way that can not be easily predicted and that subsequently dictates the formation of plasma waves .moreover , under realistic experimental conditions , ionization dynamics before the laser pulse reaches the vacuum focus can also modify the phase of the driving pulse . 
ideally , the light interacts in such a way as to generate large amplitude plasma waves with electric field structures that accelerate electrons with small divergence , high charge etc . because of the complicated interaction , it is difficult to determine a laser phase profile that will lead to such a plasma structure .however , such unforeseeable conditions were successfully revealed by _ using the evolutionary algorithm method _, with the result that the electron charge can be increased and emitted in a very well collimated beam .here we have implemented coherent control of a nonlinear plasma wave and demonstrated an order of magnitude improvement in the electron beam parameters .the laser beam optimized to generate the best electron beam was not the one with the ` best ' focal spot .control and shaping of the electron energy distribution was observed to be less effective , but was still possible .the capability for wavefront control was also limited by the number of actuators and maximum deformation of the deformable mirror used in our experiments .in addition , this work was performed using adaptive optics , but it is clear that coherent control of plasma waves should be possible in a variety of configurations , for example by using an acousto - optic modulator to control the temporal phase of the driving pulse .recently developed techniques for single - shot diagnosis of plasma wave structures may provide an avenue for direct control of the plasma evolution .the concept of coherent control for plasmas opens new possibilities for future laser - based accelerators .although still at the stage of fundamental research , laser wakefield accelerators are showing significant promise . in principle , such improvements could be integrated into next generation high - power laser projects , such as ican , based on coherent combination of many independent fibers , taking advantage of both their high repetition rate and controllability .the stability and response of the wakefield to laser conditions , such as phase front errors , is not well understood , but is crucial for the success of laser wakefield acceleration as a source of relativistic electrons and secondary radiation .for example , the presence of an asymmetric laser pulse was shown to affect the betatron oscillations and properties of x - rays produced in laser wakefield accelerators .implementing the methods of this study should enable a significantly improved understanding and control of the wakefield acceleration process with regard to stability , dark current reduction and beam emittance .the relativistic lambda cubed laser ( ) produces 30 fs pulses of 800 nm light at a repetition rate of 500 hz with an ase ( amplified - spontaneous - emission ) intensity contrast of around 1 ns before the main pulse .the system is seeded by a femto - laser ti : sapphire oscillator , which generates 12 fs pulses and has a companion carrier envelope phase locking system .an rf addressable acousto - optic filter called a dazzler controls the spectral amplitude and phase of these pulses . selected pulses from the dazzler trainare stretched to 220 ps in a low - aberration stretcher and amplified to 7 mj in a cryogenically cooled large - mode regenerative amplifier ( regen ) .the energy dumped from the regen cavity is ` cleaned ' in a pockels cell and used to seed a 3-pass amplifier as an upgrade from the laser system described in ref . 
, which delivers up to 28 mj pulses before compression .following 71% efficient compression , 20 mj pulses are trimmed to 18 mj at the perimeter of a 47 mm - diameter , 37-actuator deformable mirror . throughout the system ,pump light is provided by a variety of internally doubled nd - doped yag , ylf and vanadate lasers . the output beam with its controllable wavefrontis then delivered to one of five experimental areas for the production of x - rays , electron beams , ion beams , thz radiation , high - order harmonics , or warm - dense matter .the focused laser pulse drives plasma waves by interacting with an argon gas jet flowing continuously from a 100 m inner diameter fused silica capillary .typically the laser axis is 300 m above the orifice of the tubing .the laser pulses were focused by an off - axis parabolic mirror to a spot size of 2.5 m fwhm with a maximum of 10 mj energy on target .the plasma electron density is measured to be in the range ( 0.5 - 2) using transverse interferometry .electrons are accelerated in the density down ramp with a final energy in the 100 kev range and detected by a high resolution scintillating screen ( j6677 fos by hamamatsu ) , which is placed about 35 cm downstream from the source and imaged with a lens coupled 12-bit ccd camera for a 4 cm effective area .the scintillator sensitivity was calibrated using an electron microscope for the energy range in the spectrum measurement .electron beam charge was estimated using the calibrated scintillator response , manufacturer - provided information for the ccd camera ( gain , quantum efficiency etc . ) and the measured effective numerical aperture of the imaging system .the amplified laser beam was attenuated by using a half - wave plate and the polarization dependent properties of the compressor grating of the laser system .25 mm neutral density filters ( thorlabs , inc . )were inserted in the exit beam after the compressor and before a telescope beam expander .the laser focus was imaged by a 60 microscope objective lens ( newport corporation , m-60x ) onto a 8-bit ccd camera for focal characterization ( cf setup a in fig .[ setup ] ) . 
a shack - hartmann wavefront sensor ( flexible optical bv )was used to determine the relative wavefront change between different deformable mirror configurations .the sensor , which consists of a microlens array having a focal length of 3.5 mm and 150 m pitch , was directly placed in the path of the converging beam after the focusing parabolic mirror .reconstruction of the wavefront was performed using the frontsurfer analysis software ( flexible optical bv ) , typically with measured local wavefront slopes and rms error on the order of 0.05 .rotating the neutral density filters and the half - wave plate did not change the focal spot or the wavefront measurement significantly , insuring the wavefront distortion introduced by attenuation was negligible .the deformable mirror ( aoa xinetics ) has a 47-mm clear aperture of a continuous face sheet with 37 piezoelectric actuators arranged on a square grid spaced 7 mm apart .the maximum stroke used in this experiment is about 2 m .the mirror shape is controlled by a genetic algorithm , which is a method mimicking the process of natural selection and routinely used to generate optimal solutions in complex systems with a large number of variables .the _ genetic representation _ in our experiments comprises a set of 37 independent voltage values for the deformable mirror actuator array .a _ fitness function _ is designed to produce a single _ figure of merit _ ( fom ) to evaluate how close the solution is to the goal . in the electron beam profile optimization experiment , fom is computed as follows : where is the pixel intensity for every pixel in the whole image and is a coordinate point in the image used as an optimization target . the power factor gives higher weighting to those pixels closer to the target ( inverse distance weighting ) . in the experiment to control the energy spectrum , fomis calculated using the following formula given a pre - defined image mask , the mean intensity is the sum of the pixel counts divided by the number of pixels for a defined region . a rectangular mask was used in the experiment as specified by the region enclosed by the red dashed lines in fig .[ espec]b .the 2d pic simulations were performed in a stationary box of the dimensions m with cells and 4 particles - per - cell .a gaussian plasma density profile was used in the propagation dimension ( ) , peaked at m with a full - width - at - half - maximum of 120 m and a maximum electron density of 0.005 , where is the plasma critical density .the laser pulse was initialized at the left edge of the simulation window and focused at 215 m in the density down ramp . in 2d geometry ,the transverse intensity profile of the laser pulse for fundamental gaussian mode ( tem ) has the form , and the second - order hermite - gaussian mode , where is the normalized vector potential and is the beam waist parameter .the two modes are coherently superimposed in the same plane of polarization . 
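The explicit mode profiles are not reproduced in the text above; in one standard convention (writing the transverse profile of the normalised vector potential a, with peak amplitude a_0 and waist W_0), the two 2D profiles being superposed can be written as
\[
a_{00}(y)\;=\;a_0\,\exp\!\left(-\frac{y^{2}}{W_0^{2}}\right),
\qquad
a_{20}(y)\;\propto\;H_2\!\left(\frac{\sqrt{2}\,y}{W_0}\right)\exp\!\left(-\frac{y^{2}}{W_0^{2}}\right)
\;=\;\left(\frac{8y^{2}}{W_0^{2}}-2\right)\exp\!\left(-\frac{y^{2}}{W_0^{2}}\right),
\]
added coherently in the same plane of polarisation with a relative phase applied at the waist. The normalisation and waist convention here are assumptions; the paper's own amplitudes, waist parameters and phase value are given in the original.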
herewe used even - order hermite - gaussian mode ( tem ) for its symmetric property .a phase difference of was applied at the beam waist between the two modes to simulate variations in the optical phase front condition .the beam waist was positioned to account for focal shift as a result of coherent superposition of two modes such that the location of the maximum on - axis laser intensity was the same ( at m ) for all simulation runs .the simulation parameters are and m for the gaussian mode alone , or , and m for the superimposed mode .this work was supported by darpa ( contract no .n66001 - 11 - 1 - 4208 ) , the nsf ( grant no .0935197 ) , nsf career ( grant no 1054164 ) , the afosr young investigator program ( grant no . fa9550 - 12 - 1 - 0310 ) and mcubed at the university of michigan .the authors acknowledge the osiris consortium , consisting of university of california , los angeles ( ucla ) and instituto superior tcnico ( ist ) ( lisbon , portugal ) , for the use of the osiris 2.0 framework . this research was supported in part through computational resources and services provided by advanced research computing at the university of michigan , ann arbor . z.h , b.h . , j.a.ndesigned and carried out the experiment .z.h . performed all data analysis and the simulations .b.h . developed the labview program .k.m.k and a.g.r.t . directed and guided the project .z.h . and a.g.r.t .wrote the paper .all authors discussed the results and contributed to the manuscript . .( c ) relative wavefront change reconstructed from direct measurement using a shack - hartmann wavefront sensor .root - mean - square phase deviation in the aperture is 0.14 wave .the wavefront was reconstructed over a slighly smaller aperture ( 2.26 mm diameter ) than the full beam diameter on the sensor ( .7 mm width ) to reduce errors in the peripheral area .( scale bar , 1 mm).,width=160 ] m ) for a single gaussian mode ( solid curves ) and a superimposed mode ( dashed curves ) obtained in the 2d simulations . plotted are the laser pulses at vacuum focus .( b)(d ) are snapshots showing the different plasma wave structures around the time ( ps ) and location of trapping , driven by a laser pulse of single mode ( b ) and superimposed mode ( d ) .the laser pulse propagates to the right .the phase space distribution of the accelerated electrons shown for ( c ) single mode and ( e ) superimposed mode before the electrons exit the simulation box ( ps).,width=160 ] | coherent control of a system involves steering an interaction to a final coherent state by controlling the phase of an applied field . plasmas support coherent wave structures that can be generated by intense laser fields . here , we demonstrate the coherent control of plasma dynamics in a laser wakefield electron acceleration experiment . a genetic algorithm is implemented using a deformable mirror with the electron beam signal as feedback , which allows a heuristic search for the optimal wavefront under laser - plasma conditions that is not known _ a priori_. we are able to improve both the electron beam charge and angular distribution by an order of magnitude . these improvements do not simply correlate with having the ` best ' focal spot , since the highest quality vacuum focal spot produces a greatly inferior electron beam , but instead correspond to the particular laser phase front that steers the plasma wave to a final state with optimal accelerating fields . 
Center for Ultrafast Optical Science, University of Michigan, Ann Arbor, MI 48109-2099, USA; Polytech Paris-Sud, Université Paris-Sud, 91405 Orsay, France |
rna polymerase ( rnap ) is a molecular motor .it moves on a stretch of dna , utilizing chemical energy input , while polymerizing a messenger rna ( mrna ) .the sequence of monomeric subunits of the mrna is dictated by the corresponding sequence on the template dna .this process of template - dictated polymerization of rna is usually referred to as _ transcription _it comprises three stages , namely , initiation , elongation of the mrna and termination .we first report analytical results on the characteristic properties of single rnap motors . in our approach , each rnap is represented by a hard rod while the dna track is modelled as a one - dimensional lattice whose sites represent a nucleotide , the monomeric subunits of the dna .the mechano - chemistry of individual rnap motors is captured in this model by assigning distinct `` chemical '' states to each rnap and postulating the nature of the transitions between these states .the dwell time of an rnap at successive monomers of the dna template is a random variable ; its distribution characterizes the stochastic nature of the movement of rnap motors .we derive the _ exact _ analytical expression for the dwell - time distribution of the rnaps in this model .we also report results on the collective movements of the rnaps .often many rnaps move simultaneously on the same dna track ; because of superficial similarities with vehicular traffic , we refer to such collective movements of rnaps as rnap traffic .our model of rnap traffic can be regarded as an extension of the totally asymmetric simple exclusion process ( tasep ) for hard rods where each rod can exist at a location in one of its possible chemical states .the movement of an rnap on its dna track is coupled to the elongation of the mrna chain that it synthesizes .naturally , the rate of its forward movement depends on the availability of the monomeric subunits of the mrna and the associated `` chemical '' transitions on the dominant pathway in its mechano - chemical cycle . because of the incorporation of the mechano - chemical cycles of individual rnap motors , the number of rate constants in this model is higher than that in a tasep for hard rods .consequently , we plot the phase diagrams of our model not in a two - dimensionl plane ( as is customary for the tasep ) , but in a 3-dimensional space where the additional dimension corresponds to the concentration of the monomeric subunits of the mrna .we take the dna template as a one dimensional lattice of length and each rnap is taken as a hard rod of length in units of the length of a nucleotide .although an rnap covers nucleotides , its position is denoted by the _nucleotide covered by it .transcription initiation and termination steps are taken into account by the rate constants and , respectively . a hard rod , representing an mrna , attaches to the first site on the lattice with rate if the first sites are not covered by any other rnap at that instant of time .similarly , an mrna bound to the rightmost site is released from the system , with rate .we have assumed hard core steric interaction among the rnaps ; therefore , no site can be simultaneously covered by more than one rnap . at every lattice site , an rnap can exist in one of two possible chemical states : in one of these it is bound with a pyrophosphate ( which is one of the byproducts of rna elongation reaction and is denoted by the symbol ) , whereas no is bound to it in the other chemical state ( see fig.[fig - model ] ) . 
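A toy Monte Carlo sketch of this lattice model in Python (random-sequential updates with a small time step). Conventions chosen here, not taken from the paper: an RNAP's position is its leftmost covered site, a freshly initiated RNAP starts in an internal state labelled A, and the internal cycle is A to B with rate w12 followed by a one-site forward hop back to A with rate w21f; all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(L=600, ell=35, w_a=1.0, w_b=1.0, w12=10.0, w21f=5.0,
             dt=2e-3, t_max=200.0):
    """Two-state exclusion model with open boundaries.
    Returns the average number of RNAPs released per unit time (the flux)."""
    occupied = np.zeros(L, dtype=bool)     # which template sites are covered
    pos, state = [], []                    # leftmost site and state of each RNAP
    exits = 0
    for _ in range(int(t_max / dt)):
        # initiation: attach a new RNAP if the first ell sites are all empty
        if not occupied[:ell].any() and rng.random() < w_a * dt:
            pos.append(0); state.append(0); occupied[:ell] = True
        # one random pick per RNAP per sweep
        for _ in range(len(pos)):
            i = rng.integers(len(pos))
            if pos[i] + ell >= L:                        # rod covers the last site
                if rng.random() < w_b * dt:              # termination / release
                    occupied[pos[i]:] = False
                    pos.pop(i); state.pop(i); exits += 1
            elif state[i] == 0:
                if rng.random() < w12 * dt:              # internal step A -> B
                    state[i] = 1
            elif not occupied[pos[i] + ell] and rng.random() < w21f * dt:
                occupied[pos[i]] = False                 # vacate the old leftmost site
                occupied[pos[i] + ell] = True            # cover the new rightmost site
                pos[i] += 1
                state[i] = 0                             # hop completes the cycle
    return exits / t_max
```

Scanning such runs over a range of w_a and w_b reproduces, qualitatively, the low-density, high-density and maximal-current regimes discussed later in the text.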
for plotting our results ,we have used throughout this paper , and ~\tilde{\omega}_{21}^f ~s^{-1} ] is concentration of nucleotide triphosphate monomers ( fuel for transcription elongation ) and .for every rnap , the dwell time is measured by an imaginary `` stop watch '' which is reset to zero whenever the rnap reaches the chemical state , _ for the first time _ , after arriving at a new site ( say , -th from the -th ) .let be the probability of finding a rnap in the chemical state at time .the time evolution of the probabilities are given by there is a close formal similarity between the mechano - chemical cycle of an rnap in our model ( see fig.[fig - model ] ) and the catalytic cycle of an enzyme in the michaelis - menten scenario .the states and in the former correspond to the states and in the latter where represents the free enzyme while represents the enzyme - substrate complex . following the steps of calculation used earlier by kuo et al . for the kinetics of single - molecule enzymatic reactions , we obtain the dwell time distribution \label{eq - ftgen}\end{aligned}\ ] ] where \(a ) + release ( and , hence , ) fixed , and ( b ) two different values of , keeping the ntp concentration fixed . , title="fig : " ] + ( b ) + release ( and , hence , ) fixed , and ( b ) two different values of , keeping the ntp concentration fixed . , title="fig : " ] the dwell time distribution ( [ eq - ftgen ] ) is plotted in fig.[fig - ft ] .depending on the magnitudes of the rate constants the peak of the distribution may appear at such a small that it may not possible to detect the existence of this maxium in a laboratory experiment .in that case , the dwell time distribution would appear to be purely a single exponential .it is worth pointing out that our model does not incorporate backtracking of rnap motors which have been observed in the _ in - vitro _experiments .it has been argued by some groups that short transcriptional pausing is distinct from the long pauses which arise from backtracking .in contrast , some other groups claim that polymerase backtracking can account for both the short and long pauses .thus , the the role of backtracking in the pause distribution remain controversial .moreover , it has been demonstrated that a polymerase stalled by backtracking can be re - activated by the `` push '' of another closely following it from behind .therefore , in the crowded molecular environment of intracellular space , the occurrence of backtracking may be far less frequent that those observed under _ in - vitro _ conditions .our model , which does not allow backtracking , predicts a dwell time distribution which is qualitatively very similar to that of the short pauses provided the most probable dwell time is shorter than 1 s. 
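The explicit form of eq. ([eq-ftgen]) is not reproduced above. For orientation, in the strictly sequential limit with no backward branch (the limit adopted later in the text), the dwell time is the sum of two exponential waiting times with rates ω_12 and ω_21^f, the latter proportional to the NTP concentration, and its distribution is the textbook convolution
\[
f(t)\;=\;\frac{\omega_{12}\,\omega_{21}^{f}}{\omega_{12}-\omega_{21}^{f}}
\left(e^{-\omega_{21}^{f}t}-e^{-\omega_{12}t}\right),
\qquad \omega_{12}\neq\omega_{21}^{f},
\]
which vanishes at t = 0, rises to a single maximum and decays exponentially, consistent with fig. [fig-ft]. In the same limit its moments reduce to
\[
\frac{1}{\langle t\rangle}=\frac{\omega_{12}\,\omega_{21}^{f}}{\omega_{12}+\omega_{21}^{f}},
\qquad
r\equiv\frac{\langle t^{2}\rangle-\langle t\rangle^{2}}{\langle t\rangle^{2}}
=\frac{\omega_{12}^{2}+\bigl(\omega_{21}^{f}\bigr)^{2}}{\bigl(\omega_{12}+\omega_{21}^{f}\bigr)^{2}},
\]
so the randomness parameter equals one whenever a single step is rate-limiting and dips to 1/2 when the two rates coincide, matching the limits discussed in the next paragraph. These are standard two-step results quoted as a consistency check; the paper's full expressions may contain additional terms.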
from equation ( [ eq - ftgen ] ) we get the inverse mean dwell time
\begin{equation}
\langle t \rangle^{-1} = \frac{\omega_{12}\,\omega_{21}^{f}}{\omega_{12}+\omega_{21}^{f}} , \label{eq - avtgen}
\end{equation}
where $\omega_{12}$ is proportional to the ntp concentration . the form of the expression ( [ eq - avtgen ] ) is identical to the michaelis - menten formula for the average rate of an enzymatic reaction . it describes the slowing down of the `` bare '' elongation progress of an rnap due to the ntp reaction cycle that it has to undergo . the corresponding velocity is measured in nucleotides per unit time . the fluctuations of the dwell time can be computed from the second moment
\begin{equation}
\langle t^{2} \rangle = \frac{2\left[ \omega_{12}^{2}+\omega_{12}\,\omega_{21}^{f}+\left(\omega_{21}^{f}\right)^{2} \right]}{\left(\omega_{12}~\omega_{21}^f \right)^2}
\end{equation}
of the dwell time distribution . we find the randomness parameter
\begin{equation}
r = \frac{\langle t^{2} \rangle - \langle t \rangle^{2}}{\langle t \rangle^{2}} = \frac{\omega_{12}^{2}+\left(\omega_{21}^{f}\right)^{2}}{\left(\omega_{12}+\omega_{21}^{f}\right)^{2}} . \label{eq - ranpar}
\end{equation}
( figure [ fig - ranpar ] : the randomness parameter plotted against ntp concentration for three values of the parameter $\omega_{21}^{f}$ . ) note that , for a one - step poisson process , $r = 1$ . the randomness parameter , given by ( [ eq - ranpar ] ) , is plotted against the ntp concentration in fig.[fig - ranpar ] for three different values of $\omega_{21}^{f}$ . at sufficiently low ntp concentration , $r$ is unity because ntp binding with the rnap is the rate - limiting step . as ntp concentration increases , $r$ exhibits a nonmonotonic variation . at sufficiently high ntp concentration , pp - release ( which occurs with the rate $\omega_{21}^{f}$ ) is the rate - limiting step and , therefore , $r$ is unity also in this limit . this interpretation is consistent with the fact that the smaller the magnitude of $\omega_{21}^{f}$ , the quicker is the crossover of $r$ back to unity as the ntp concentration is increased . the randomness parameter yields the diffusion coefficient
\begin{equation}
d = \frac{1}{2}\, r \, \langle t \rangle^{-1} \label{eqn - diffusion}
\end{equation}
( in units where the step size is one nucleotide ) . the expression ( [ eqn - diffusion ] ) is in agreement with the general expression for the effective diffusion constant of a molecular motor with an unbranched mechano - chemical cycle which was first reported by fisher and kolomeisky . now we will take into account the hard - core steric interaction among the rnaps which are simultaneously moving on the same dna track . equations ( [ eq - masterp1 ] ) and ( [ eq - masterp2 ] ) will be modified by mean - field factors involving the conditional probability of finding the relevant neighbouring site vacant , given that there is a particle at the present site . due to the steric interactions between rnaps , their stationary flux ( and hence the transcription rate ) is no longer limited solely by the initiation and release at the terminal sites of the template dna . we calculate the resulting phase diagram utilizing the extremum current hypothesis ( ech ) . the ech relates the flux in the system under open boundary conditions ( obc ) to that under periodic boundary conditions ( pbc ) with the same bulk dynamics . in this approach , one imagines that the initiation and termination sites are connected to two separate reservoirs where the number densities of particles are $\rho_{-}$ and $\rho_{+}$ , respectively , and where the particles follow the same dynamics as in the bulk of the real physical system . then the actual rates $\omega_{\alpha}$ and $\omega_{\beta}$ of initiation and termination of mrna polymerization are incorporated by appropriate choices of $\rho_{-}$ and $\rho_{+}$ , respectively . ( figure [ fig:3d ] : the three - dimensional phase diagram ; the surfaces separating the mc phase from the hd and ld phases are indicated . figure [ fig : xyplane ] : projections onto the $\omega_{\alpha}$-$\omega_{\beta}$ plane for several values of the ntp concentration ; the numbers on the phase boundary lines represent the corresponding values ; the inclined lines have ld and hd above and below , respectively , while the mc phase lies in the upper right corner . ) an expression for the stationary flux under pbc was reported by us in ref . .
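Under the same two-step assumption used in the sketches above, the randomness parameter and diffusion coefficient can be tabulated as functions of NTP concentration. The proportionality constant `w_tilde_12` and the release rate are hypothetical values; the point is only the limiting behaviour described in the text (r equal to one in both rate-limiting extremes, dipping in between).

```python
import numpy as np

def randomness(omega_12, omega_21f):
    """r = (<t^2> - <t>^2) / <t>^2 for the assumed two-step cycle."""
    return (omega_12**2 + omega_21f**2) / (omega_12 + omega_21f)**2

def diffusion(omega_12, omega_21f, step=1.0):
    """Effective diffusion constant D = r * v * step^2 / 2 (step = one nucleotide)."""
    v = omega_12 * omega_21f / (omega_12 + omega_21f)   # mean velocity, sites per unit time
    return 0.5 * randomness(omega_12, omega_21f) * v * step**2

w_tilde_12 = 1.0          # hypothetical NTP-binding rate per unit concentration
omega_21f = 25.0          # hypothetical PPi-release rate (s^-1)
for ntp in [0.1, 1.0, 25.0, 1000.0]:          # illustrative concentrations
    w12 = w_tilde_12 * ntp                    # omega_12 assumed proportional to [NTP]
    print(f"[NTP]={ntp:7.1f}  r={randomness(w12, omega_21f):.3f}  "
          f"D={diffusion(w12, omega_21f):.3f}")
# r -> 1 when either step is rate-limiting and dips to 1/2 when the two rates match,
# reproducing the nonmonotonic behaviour described in the text.
```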
in the special case where the dominant pathway is that shown in fig . [ fig - model ] ( the rate of the off - pathway transition being set to zero for the further calculation ) , we have
\begin{equation}
j(\rho) = \frac{\omega_{12}\,\omega_{21}^{f}}{\omega_{12}+\omega_{21}^{f}} \, \frac{\rho\,(1-\rho\ell)}{1-\rho(\ell-1)} , \label{eq : effj}
\end{equation}
and the number density that corresponds to the maximum flux is given by the expression
\begin{equation}
\rho_{*} = \left[ \sqrt{\ell}\left(\sqrt{\ell}+1\right) \right]^{-1} .
\end{equation}
by comparing ( [ eq : effj ] ) with the exact current - density relation of the usual tasep for extended particles of size $\ell$ , which have no internal states ( formally obtained by letting the rate of the intermediate chemical transition tend to infinity in the present model ) , we predict that the stationary current ( i.e. , the collective average rate of transcription ) is reduced by the occurrence of the intermediate state 1 through which the rnaps have to pass . from ( [ ech ] ) one expects three phases , viz . , a maximal - current ( mc ) phase with bulk density $\rho_{*}$ , a low - density ( ld ) phase with bulk density $\rho_{-}$ , and a high - density ( hd ) phase with bulk density $\rho_{+}$ . using arguments similar to those used in ref . in a similar context , we get the expressions ( [ eq - rhom ] ) and ( [ eq - rhop ] ) for $\rho_{-}$ and $\rho_{+}$ . ( figure caption : as in figure [ fig : xyplane ] , except that the projections are onto a different coordinate plane , for several values of the remaining parameter ; the inclined lines have ld and hd above and below , respectively , and each vertical line separates the ld phase on the left from the mc phase on its right . ) the condition for the coexistence of the high - density ( hd ) and low - density ( ld ) phases is $j(\rho_{-}) = j(\rho_{+})$ with $\rho_{-} \neq \rho_{+}$ . using the expression ( [ eq : effj ] ) for the flux in ( [ eq - ldhd ] ) we get the relation ( [ eq - rhomp ] ) between $\rho_{-}$ and $\rho_{+}$ ; substituting ( [ eq - rhom ] ) and ( [ eq - rhop ] ) into ( [ eq - rhomp ] ) , we get the equation for the plane of coexistence of ld and hd , given in ( [ eq : phasec ] ) . in order to compare our result with the 2-d phase diagram of the tasep in the $\alpha$-$\beta$ plane , we project 2-d cross sections of the 3-d phase diagram , for several different values of the ntp concentration , onto the $\omega_{\alpha}$-$\omega_{\beta}$ plane . the lines of coexistence of the ld and hd phases on this projected two - dimensional plane are curved ; a similar curvature is also reported by antal and schtz . this is in contrast to the straight coexistence line for the ld and hd phases of the tasep . the bulk density of the system is governed by the following conditions :
\begin{equation}
\rho = \left\{ \begin{array}{ll}
\rho_{-} & \mbox{if}~\omega_{\beta} > f(\omega_{\alpha},\omega_{21}^f) ~{\rm and}~\omega_{\alpha} < \biggl[\dfrac{\rho_*}{1-\rho_*(\ell-1)}\biggr] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{low density,} \\[2ex]
\rho_{+} & \mbox{if}~\omega_{\beta} < f(\omega_{\alpha},\omega_{21}^f) ~{\rm and}~\omega_{\beta} < \biggl[\dfrac{1-\rho_*\ell}{1-\rho_*(\ell-1)}\biggr] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{high density,} \\[2ex]
\rho_{*} & \mbox{if}~\omega_{\beta} > \biggl[\dfrac{1-\rho_*\ell}{1-\rho_*(\ell-1)}\biggr] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr] ~{\rm and}~\omega_{\alpha} > \biggl[\dfrac{\rho_*}{1-\rho_*(\ell-1)}\biggr] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{maximal current.}
\end{array} \right.
\end{equation}
in fig . [ fig:3d ] , we plot the 3d phase diagram . ( figure caption : as in figure [ fig : xyplane ] , except that the projections are onto the other coordinate plane ; here the inclined lines have hd and ld , respectively , above and below , and each vertical line separates the hd phase on the left from the mc phase on its right . ) in general , a plane of constant ntp concentration intersects the surfaces i , ii and iii , thereby generating the phase transition lines between the ld , hd and mc phases in the $\omega_{\alpha}$-$\omega_{\beta}$ plane . we have projected several of these 2d phase diagrams , each for one constant value of the ntp concentration , in figure [ fig : xyplane ] .
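As a hedged illustration of how the extremum current hypothesis is used here, the sketch below classifies the phase from the bulk current-density relation for hard rods of length ℓ with the effective hopping rate ω12ω21^f/(ω12+ω21^f). The relations ( eq - rhom ) / ( eq - rhop ) giving the reservoir densities in terms of ω_α and ω_β are not reproduced, so the reservoir densities `rho_minus` and `rho_plus` are treated as given inputs; the rod length and rate are made-up values.

```python
import numpy as np

ELL = 2           # rod length in nucleotides (hypothetical value)
V_EFF = 1.0       # effective hopping rate omega_12*omega_21f/(omega_12+omega_21f)

def flux(rho):
    """Bulk current-density relation for hard rods of length ELL,
    used as the fundamental diagram entering the ECH."""
    return V_EFF * rho * (1 - ELL * rho) / (1 - (ELL - 1) * rho)

def ech_phase(rho_minus, rho_plus, grid=2001):
    """Extremum current hypothesis: minimise J over [rho_-, rho_+] if rho_- < rho_+,
    maximise over [rho_+, rho_-] otherwise; report the phase and bulk density."""
    lo, hi = sorted((rho_minus, rho_plus))
    rho = np.linspace(lo, hi, grid)
    j = flux(rho)
    k = np.argmin(j) if rho_minus < rho_plus else np.argmax(j)
    rho_bulk, j_bulk = rho[k], j[k]
    rho_star = 1.0 / (ELL + np.sqrt(ELL))            # maximal-current density
    if np.isclose(rho_bulk, rho_star, atol=1e-3):
        return "MC", rho_bulk, j_bulk
    return ("LD" if np.isclose(rho_bulk, rho_minus) else "HD"), rho_bulk, j_bulk

# rho_minus / rho_plus stand for the reservoir densities obtained (not shown here)
# from omega_alpha and omega_beta via (eq-rhom)/(eq-rhop).
for rm, rp in [(0.05, 0.45), (0.40, 0.10), (0.35, 0.48)]:
    print(rm, rp, ech_phase(rm, rp))
```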
in the inset ,we have shown the value of for different lines .we have also projected several 2d phase diagrams in the - plane and - plane , respectively , in figures [ fig : yzplane ] and [ fig : xzplane ] .in this paper we have reported the exact dwell time distribution for a simple 2-state model of rnap motors . from this distributionwe have also computed the average velocity and the fluctuations of position and dwell time of rnap s on the dna nucleotides .these expressions are consistent with a general formula derived earlier by fisher and kolomeisky for a generic model of molecular motors with unbranched mechano - chemical cycles . taking into account the presence of steric interactions between different rnap moving along the same dna template we have plotted the full 3d phase diagram of a model for multiple rnap traffic .this model is a biologically motivated extension of the tasep , the novel feature being the incorporation of the mechano - chemical cycle of the rnap into the dynamics of the transcription process .this leads to a hopping process with a dwell time distribution that is not a simple exponential .nevertheless , the phase diagram is demonstrated to follow the extremal - current hypothesis for driven diffusive systems . using mean field theorywe have computed the effective boundary densities that enter the ech from the reaction constants of our model .we observe that the collective average rate of translation as given by the stationary rnap current ( [ eq : effj ] ) is reduced by the need of the rnap to go through the pyrophosphate bound state .this is a prediction that is open to experimental test .the 2d cross sections of this phase diagram have been compared and contrasted with the phase diagram for the tasep . unlike in the tasep , the coexistence line between low- and high - density phase is curved for all parameter values .this is a signature of broken particle - vacancy symmetry of the rnap dynamics .the presence of this coexistence line suggests the occurrence of rnap `` traffic jams '' that our model predicts to appear when stationary initiation and release of rnap at the terminal sites of the dna track are able to balance each other .this traffic jam would perform an unbiased random motion , as argued earlier on general theoretical grounds in the context of protein synthesis by ribosomes from mrna templates . + * acknowledgments * : this work is supported by a grant from csir ( india ) .gms thanks iit kanpur for kind hospitality and dfg for partial financial support .50 m. schliwa , ( ed . ) _ molecular motors _ , ( wiley - vch , 2003 ) .j. gelles and r. landick , cell , * 93 * , 13 ( 1998 ) .b. alberts et al ._ essential cell biology _ , 2nd ed .( garland science , taylor and francis , 2004 ) .t. tripathi and d. chowdhury , phys .e * 77 * , 011921 ( 2008 ) .d. chowdhury , l. santen and a. schadschneider , phys .rep . * 329 * , 199 ( 2000 ) .s. klumpp and r. lipowsky , j. stat .phys . * 113 * , 233 ( 2003 ) .d. chowdhury , a. schadschneider and k. nishinari , phys . of life rev .* 2 * , 318 ( 2005 ) .m. voliotis , n. cohen , c. molina - paris and t.b .liverpool , biophys .j. * 94 * , 334 ( 2007 ) .s. klumpp and t. hwa , pnas * 105 * , 18159 ( 2008 ) .g. m. schtz , in : _ phase transitions and critical phenomena _ , vol .19 ( acad . press , 2001 ) .m. dixon and e.c .webb , _ enzymes _ ( academic press , 1979 ) .kou , b.j .cherayil , w. min , b.p .english and x.s .xie , j. phys .b * 109 * , 19068 - 19081 ( 2005 ) .k. adelman , a. 
la porta , t.j .santangelo , j.t .lis , j.w .roberts and m.d .wang , pnas * 99 * , 13538 ( 2002 ) .a. shundrovsky , t.j .santangelo , j.w .roberts and m.d .wang , biophys .j. * 87 * , 3945 ( 2004 ) .abbondanzieri , w.j .greenleaf , j.w .shaevitz , r. landick and s.m .block , nature * 438 * , 460 ( 2005 ) .shaevitz , e.a .abbondanzieri , r. landick and s.m .block , nature * 426 * , 684 ( 2003 ) .e. galburt , s.w .grill , a. wiedmann , l. lubhowska , j. choy , e. nogales , m. kashlev and c. bustamante , nature * 446 * , 820 ( 2007 ) .neuman , e.a .abbondanzieri , r. landick , j. gelles and s.m .block , cell * 115 * , 437 ( 2003 ) .m. depken , e. galburt and s.w .grill , biophys .j. * 96 * , 2189 ( 2009 ) .v. epshtein and e. nudler , science * 300 * , 801 ( 2003 ) .m. schnitzer and s. block , cold spring harbor symp .biol . * 60 * , 793 ( 1995 ) t. tripathi , ph.d .thesis , iit kanpur ( 2009 ) .fisher and a.b .kolomeisky , pnas * 96 * , 6597 ( 1999 ) .v. popkov and g. m. schtz , europhys . lett . * 48 * , 257 ( 1999 ) .j. krug , phys . rev. lett . * 67 * , 1882 ( 1991 ) . c. macdonald , j. gibbs and a. pipkin , biopolymers , * 6 * , 1 ( 1968 )shaw , r.k.p .zia and k.h .lee , phys .e * 68 * , 021910 ( 2003 ) .g. schnherr and g.m .schtz , j. phys .a * 37 * , 8215 ( 2004 ) .a. basu and d. chowdhury , phys .e * 75 * , 021902 ( 2007 ) . t. antal and g. m. schtz , phys .e * 62 * , 83 ( 2000 ) .schtz , int . j. modb * 11 * , 197 ( 1997 ) . | polymerization of rna from a template dna is carried out by a molecular machine called rna polymerase ( rnap ) . it also uses the template as a track on which it moves as a motor utilizing chemical energy input . the time it spends at each successive monomer of dna is random ; we derive the exact distribution of these `` dwell times '' in our model . the inverse of the mean dwell time satisfies a michaelis - menten - like equation and is also consistent with a general formula derived earlier by fisher and kolomeisky for molecular motors with unbranched mechano - chemical cycles . often many rnap motors move simultaneously on the same track . incorporating the steric interactions among the rnaps in our model , we also plot the three - dimensional phase diagram of our model for rnap traffic using an extremum current hypothesis . |
we study degrees of freedom , or the `` effective number of parameters , '' in -penalized linear regression problems .in particular , for a response vector , predictor matrix and tuning parameter , we consider the lasso problem [ , ] the above notation emphasizes the fact that the solution may not be unique [ such nonuniqueness can occur if . throughout the paper , when a function may have a nonunique minimizer over its domain , we write to denote the set of minimizing values , that is , .a fundamental result on the degrees of freedom of the lasso fit was shown by .the authors show that if follows a normal distribution with spherical covariance , , and are considered fixed with , then where denotes the active set of the unique lasso solution at , and is its cardinality .this is quite a well - known result , and is sometimes used to informally justify an application of the lasso procedure , as it says that number of parameters used by the lasso fit is simply equal to the ( average ) number of selected variables .however , we note that the assumption implies that ; in other words , the degrees of freedom result ( [ eq : lassodffull ] ) does not cover the important `` high - dimensional '' case . in this case, the lasso solution is not necessarily unique , which raises the questions : * can we still express degrees of freedom in terms of the active set of a lasso solution ? *if so , which active set ( solution ) would we refer to ? in section [ sec: lasso ] , we provide answers to these questions , by proving a stronger result when is a general predictor matrix . we show that the subspace spanned by the columns of in is almost surely unique , where `` almost surely '' means for almost every .furthermore , the degrees of freedom of the lasso fit is simply the expected dimension of this column space .we also consider the generalized lasso problem , where is a penalty matrix , and again the notation emphasizes the fact that need not be unique [ when .this of course reduces to the usual lasso problem ( [ eq : lasso ] ) when , and demonstrate that the formulation ( [ eq : genlasso ] ) encapsulates several other important problems including the fused lasso on any graph and trend filtering of any order by varying the penalty matrix .the same paper shows that if is normally distributed as above , and are fixed with , then the generalized lasso fit has degrees of freedom .\ ] ] here denotes the boundary set of an optimal subgradient to the generalized lasso problem at ( equivalently , the boundary set of a dual solution at ) , denotes the matrix after having removed the rows that are indexed by , and , the dimension of the null space of .it turns out that examining ( [ eq : genlassodffull ] ) for specific choices of produces a number of interpretable corollaries , as discussed in .for example , this result implies that the degrees of freedom of the fused lasso fit is equal to the expected number of fused groups , and that the degrees of freedom of the trend filtering fit is equal to the expected number of knots , where is the order of the polynomial . the result ( [ eq : genlassodffull ] ) assumes that and does not cover the case ; in section [ sec : genlasso ] , we derive the degrees of freedom of the generalized lasso fit for a general ( and still a general ) . 
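Since the choice of penalty matrix is what turns the generalized lasso into these familiar special cases, a small sketch constructing two standard penalty matrices may help fix ideas. The discrete difference operators below are the usual constructions for the 1-d fused lasso and for trend filtering; the exact scaling conventions used in the paper are not shown in the text, so treat these as illustrative.

```python
import numpy as np

def fused_lasso_D(p):
    """First-order difference operator: penalizing |beta_{i+1} - beta_i|
    gives the 1-d fused lasso."""
    D = np.zeros((p - 1, p))
    for i in range(p - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def trend_filter_D(p, k):
    """(k+1)-th order discrete difference operator; penalizing it gives
    k-th order trend filtering (k = 0 recovers the fused lasso)."""
    D = fused_lasso_D(p)
    for _ in range(k):
        D = fused_lasso_D(D.shape[0]) @ D
    return D

print(fused_lasso_D(5))
print(trend_filter_D(6, k=1))   # second differences -> piecewise-linear fits
```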
as in the lasso case, we prove that there exists a linear subspace that is almost surely unique , meaning that it will be the same under different boundary sets corresponding to different solutions of ( [ eq : genlasso ] ) .the generalized lasso degrees of freedom is then the expected dimension of this subspace .our assumptions throughout the paper are minimal . as was already mentioned , we place no assumptions whatsoever on the predictor matrix or on the penalty matrix , considering them fixed and nonrandom .we also consider fixed . for theorems [ thm : lassodfequi ] , [ thm : lassodfact ] and [ thm : genlassodf ]we assume that is normally distributed , for some ( unknown ) mean vector and marginal variance .this assumption is only needed in order to apply stein s formula for degrees of freedom , and none of the other lasso and generalized lasso results in the paper , namely lemmas [ lem : lassoproj ] through [ lem : invbound ] , make any assumption about the distribution of .this paper is organized as follows .the rest of the introduction contains an overview of related work , and an explanation of our notation .section [ sec : prelim ] covers some relevant background material on degrees of freedom and convex polyhedra .though the connection may not be immediately obvious , the geometry of polyhedra plays a large role in understanding problems ( [ eq : lasso ] ) and ( [ eq : genlasso ] ) , and section [ sec : poly ] gives a high - level view of this geometry before the technical arguments that follow in sections [ sec : lasso ] and [ sec : genlasso ] . in section [ sec : lasso ] , we derive two representations for the degrees of freedom of the lasso fit , given in theorems [ thm : lassodfequi ] and [ thm : lassodfact ] . in section [ sec : genlasso ] , we derive the analogous results for the generalized lasso problem , and these are given in theorem [ thm : genlassodf ] . 
as the lasso problem is a special case of the generalized lasso problem ( corresponding to ) , theorems [ thm : lassodfequi ] and [ thm : lassodfact ] can actually be viewed as corollaries of theorem [ thm : genlassodf ] .the reader may then ask : why is there a separate section dedicated to the lasso problem ?we give two reasons : first , the lasso arguments are simpler and easier to follow than their generalized lasso counterparts ; second , we cover some intermediate results for the lasso problem that are interesting in their own right and that do not carry over to the generalized lasso perspective .section [ sec : disc ] contains some final discussion .all of the degrees of freedom results discussed here assume that the response vector has distribution , and that the predictor matrix is fixed .to the best of our knowledge , were the first to prove a result on the degrees of freedom of the lasso fit , using the lasso solution path with moving from to .the authors showed that when the active set reaches size along this path , the lasso fit has degrees of freedom exactly .this result assumes that has full column rank and further satisfies a restrictive condition called the `` positive cone condition , '' which ensures that as decreases , variables can only enter , and not leave , the active set .subsequent results on the lasso degrees of freedom ( including those presented in this paper ) differ from this original result in that they derive degrees of freedom for a fixed value of the tuning parameter , and not a fixed number of steps taken along the solution path .as mentioned previously , established the basic lasso degrees of freedom result ( for fixed ) stated in ( [ eq : lassodffull ] ) .this is analogous to the path result of ; here degrees of freedom is equal to the expected size of the active set ( rather than simply the size ) because for a fixed the active set is a random quantity , and can hence achieve a random size .the proof of ( [ eq : lassodffull ] ) appearing in relies heavily on properties of the lasso solution path . as also mentioned previously , derived an extension of ( [ eq : lassodffull ] ) to the generalized lasso problem , which is stated in ( [ eq : genlassodffull ] ) for an arbitrary penalty matrix .their arguments are not based on properties of the solution path , but instead come from a geometric perspective much like the one developed in this paper .both of the results ( [ eq : lassodffull ] ) and ( [ eq : genlassodffull ] ) assume that ; the current work extends these to the case of an arbitrary matrix , in theorems [ thm : lassodfequi ] , [ thm : lassodfact ] ( the lasso ) and [ thm : genlassodf ] ( the generalized lasso ) . in terms of our intermediate results ,a version of lemmas [ lem : lcequi ] , [ lem : lcact ] corresponding to appears in , and a version of lemma [ lem : lcbound ] corresponding to appears in [ furthermore , only consider the boundary set representation and not the active set representation ] .lemmas [ lem : nonexp ] , [ lem : locaff ] and the conclusions thereafter , on the degrees of freedom of the projection map onto a convex polyhedron , are essentially given in , though these authors state and prove the results in a different manner . 
in preparing a draft of this manuscript ,it was brought to our attention that other authors have independently and concurrently worked to extend results ( [ eq : lassodffull ] ) and ( [ eq : genlassodffull ] ) to the general case .namely , prove a result on the lasso degrees of freedom , and prove a result on the generalized lasso degrees of freedom , both for an arbitrary .these authors results express degrees of freedom in terms of the active sets of special ( lasso or generalized lasso ) solutions .theorems [ thm : lassodfact ] and [ thm : genlassodf ] express degrees of freedom in terms of the active sets of any solutions , and hence the appropriate application of these theorems provides an alternative verification of these formulas .we discuss this in detail in the form of remarks following the theorems . in this paper ,we use , and to denote the column space , row space and null space of a matrix , respectively ; we use and to denote the dimensions of [ equivalently , ] and , respectively .we write for the the moore penrose pseudoinverse of ; for a rectangular matrix , recall that .we write to denote the projection matrix onto a linear subspace , and more generally , to denote the projection of a point onto a closed convex set . for readability ,we sometimes write ( instead of ) to denote the inner product between vectors and . for a set of indices satisfying , and a vector , we use to denote the subvector . we denote the complementary subvector by .the notation is similar for matrices . given another subset of indices with , and a matrix , we use to denote the submatrix \in{\mathbb{r}}^{k \times\ell}.\ ] ] in words , rows are indexed by , and columns are indexed by . when combining this notation with the transpose operation , we assume that the indexing happens first , so that . as above ,negative signs are used to denote the complementary set of rows or columns ; for example , . to extract only rows or only columns , we abbreviate the other dimension by a dot , so that and ; to extract a single row or column , we use or . finally , and most importantly , we introduce the following shorthand notation : * for the predictor matrix , we let . * for the penalty matrix , we let . in other words ,the default for is to index its columns , and the default for is to index its rows .this convention greatly simplifies the notation in expressions that involve multiple instances of or ; however , its use could also cause a great deal of confusion , if not properly interpreted by the reader !the following two sections describe some background material needed to follow the results in sections [ sec : lasso ] and [ sec : genlasso ] .if the data vector is distributed according to the homoskedastic model , meaning that the components of are uncorrelated , with having mean and variance for , then the degrees of freedom of a function with , is defined as this definition is often attributed to or , and is interpreted as the `` effective number of parameters '' used by the fitting procedure .note that for the linear regression fit of onto a fixed and full column rank predictor matrix , we have , and , which is the number of fitted coefficients ( one for each predictor variable ). furthermore , we can decompose the risk of , denoted by , as a well - known identity that leads to the derivation of the statistic [ ] . 
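As a quick numerical check of the covariance definition of degrees of freedom just recalled, the sketch below simulates many responses from the homoskedastic normal model and verifies that, for the linear regression fit, the Monte Carlo estimate of the summed covariances matches the trace of the hat matrix (the number of fitted coefficients). All names and data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 50, 5, 1.0
X = rng.normal(size=(n, p))
mu = X @ rng.normal(size=p)                  # true mean vector

H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix of the linear regression fit
reps = 20_000
Y = mu + sigma * rng.normal(size=(reps, n))  # many draws y ~ N(mu, sigma^2 I)
fits = Y @ H.T                               # hat{y} = H y for each draw

# df = sum_i cov(hat{y}_i, y_i) / sigma^2, estimated by Monte Carlo
cov_sum = np.mean(np.sum((fits - fits.mean(0)) * (Y - Y.mean(0)), axis=1)) / sigma**2
print(f"Monte Carlo df : {cov_sum:.3f}")
print(f"trace of H (=p): {np.trace(H):.3f}")
```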
for a general fitting procedure , the motivation for the definition ( [ eq : df ] ) comes from the analogous decomposition of the quantity , therefore a large difference between risk and expected training error implies a large degrees of freedom .why is the concept of degrees of freedom important ?one simple answer is that it provides a way to put different fitting procedures on equal footing .for example , it would not seem fair to compare a procedure that uses an effective number of parameters equal to 100 with another that uses only 10 .however , assuming that these procedures can be tuned to varying levels of adaptivity ( as is the case with the lasso and generalized lasso , where the adaptivity is controlled by ) , one could first tune the procedures to have the same degrees of freedom , and then compare their performances .doing this over several common values for degrees of freedom may reveal , in an informal sense , that one procedure is particularly efficient when it comes to its parameter usage versus another . a more detailed answer to the above questionis based the risk decomposition ( [ eq : riskd ] ) .the decomposition suggests that an estimate of degrees of freedom can be used to form an estimate of the risk , furthermore , it is straightforward to check that an unbiased estimate of degrees of freedom leads to an unbiased estimate of risk ; that is , ] .hence , the risk estimate ( [ eq : riskhat ] ) can be used to choose between fitting procedures , assuming that unbiased estimates of degrees of freedom are available .[ it is worth mentioning that bootstrap or monte carlo methods can be helpful in estimating degrees of freedom ( [ eq : df ] ) when an analytic form is difficult to obtain . ]the natural extension of this idea is to use the risk estimate ( [ eq : riskhat ] ) for tuning parameter selection . if we suppose that depends on a tuning parameter , denoted , then in principle one could minimize the estimated risk over to select an appropriate value for the tuning parameter , this is a computationally efficient alternative to selecting the tuning parameter by cross - validation , and it is commonly used ( along with similar methods that replace the factor of above with a function of or ) in penalized regression problems . even though such an estimate ( [ eq : tunsel ] ) is commonly used in the high - dimensional setting ( ) , its asymptotic properties are largely unknown in this case , such as risk consistency , or relatively efficiency compared to the cross - validation estimate . proposed the risk estimate ( [ eq : riskhat ] ) using a particular unbiased estimate of degrees of freedom , now commonly referred to as _ stein s unbiased risk estimate _ ( sure ) .stein s framework requires that we strengthen our distributional assumption on and assume normality , as stated in ( [ eq : normal ] ) .we also assume that the function is continuous and almost differentiable .( the precise definition of almost differentiability is not important here , but the interested reader may take it to mean that each coordinate function is absolutely continuous on almost every line segment parallel to one of the coordinate axes . ) given these assumptions , stein s main result is an alternate expression for degrees of freedom , ,\ ] ] where the function is called the divergence of .immediately following is the unbiased estimate of degrees of freedom , we pause for a moment to reflect on the importance of this result . 
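As an aside before the discussion continues, here is a hedged sketch of the tuning-parameter selection rule described above, applied to the lasso. It uses scikit-learn's `Lasso` (whose `alpha` corresponds to λ/n under this paper's parameterization of the lasso criterion, an assumption worth checking against the solver's documentation) and, anticipating the degrees-of-freedom result discussed in the introduction, estimates df by the rank of the predictor submatrix over the active set of the computed solution.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, sigma = 100, 150, 1.0                       # p > n is allowed here
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 3.0
y = X @ beta_true + sigma * rng.normal(size=n)

def df_hat(X, coef, tol=1e-8):
    """Unbiased df estimate: rank of X restricted to the active set of the solution."""
    A = np.flatnonzero(np.abs(coef) > tol)
    return 0 if A.size == 0 else np.linalg.matrix_rank(X[:, A])

best = None
for lam in np.geomspace(0.01, 10.0, 30):          # grid of lambda values
    model = Lasso(alpha=lam / n, fit_intercept=False, max_iter=50_000).fit(X, y)
    fit = X @ model.coef_
    risk_est = np.sum((y - fit) ** 2) - n * sigma**2 + 2 * sigma**2 * df_hat(X, model.coef_)
    if best is None or risk_est < best[0]:
        best = (risk_est, lam)
print(f"lambda minimising the estimated risk: {best[1]:.3f}")
```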
from its definition ( [ eq : df ] ) , we can see that the two most obvious candidates for unbiased estimates of degrees of freedom are \bigr ) y_i.\ ] ] to use the first estimate above , we need to know ( remember , this is ultimately what we are trying to estimate ! ) . using the second requires knowing { { \hat{\beta}}}_i = 0 i ] , the expected dimension of the linear subspace spanned by the active variables .meanwhile , for the linear regression problem where we consider fixed , the degrees of freedom of the fit is . in other words ,the lasso adaptively selects a subset of the variables to use for a linear model of , but on average it only `` spends '' the same number of parameters as would linear regression on the variables in , if was pre - specified .how is this possible ? broadly speaking, the answer lies in the shrinkage due to the penalty . although the active set is chosen adaptively , the lasso does not estimate the active coefficients as aggressively as does the corresponding linear regression problem ( [ eq : lsa ] ) ; instead , they are shrunken toward zero , and this adjusts for the adaptive selection .differing views have been presented in the literature with respect to this feature of lasso shrinkage . on the one hand , for example , point out that lasso estimates suffer from bias due to the shrinkage of large coefficients , and motivate the nonconvex _ scad _ penalty as an attempt to overcome this bias . on the other hand , for example, loubes and massart ( ) discuss the merits of such shrunken estimates in model selection criteria , such as ( [ eq : tunsel ] ) . in the current context ,the shrinkage due to the penalty is helpful in that it provides control over degrees of freedom . a more precise study of this idea is the topic of future work .in this section we extend our degrees of freedom results to the generalized lasso problem , with an arbitrary predictor matrix and penalty matrix . as before ,the kkt conditions play a central role , and we present these first . also , many results that follow have equivalent derivations from the perspective of the generalized lasso dual problem ; see appendix [ app : dual ] .we remind the reader that is used to extract to extract rows of corresponding to an index set . the kkt conditions for the generalized lasso problem ( [ eq : genlasso ] ) are & \quad if . } & \ ] ] now is a subgradient of the function evaluated at .similar to what we showed for the lasso , it follows from the kkt conditions that the generalized lasso fit is the residual from projecting onto a polyhedron .[ lem : genlassoproj ] for any and , the generalized lasso fit can be written as , where is the polyhedron the proof is quite similar to that of lemma [ lem : lassoproj ] .as in ( [ eq : innerprod ] ) , we want to show that for all , where is as in the lemma . for the first term above, we can take an inner product with on both sides of ( [ eq : genlassokkt ] ) to get , and furthermore , therefore ( [ eq : innerprod2 ] ) holds if for some , in other words , if . to show that is a polyhedron , note that we can write it as where is taken to mean the inverse image under the linear map , and , a hypercube in .clearly is a polyhedron , and the image or inverse image of a polyhedron under a linear map is still a polyhedron . 
as with the lasso, this lemma implies that the generalized lasso fit is nonexpansive , and therefore continuous and almost differentiable as a function of , by lemma [ lem : nonexp ] .this is important because it allows us to use stein s formula when computing degrees of freedom . in the next sectionwe define the boundary set , and derive expressions for the generalized lasso fit and solutions in terms of .the following section defines the active set in the generalized lasso context , and again gives expressions for the fit and solutions in terms of .though neither nor are necessarily unique for the generalized lasso problem , any choice of or generates a special invariant subspace ( similar to the case for the active sets in the lasso problem ) .we are subsequently able to express the degrees of freedom of the generalized lasso fit in terms of any boundary set , or any active set . like the lasso ,the generalized lasso fit is always unique ( following from lemma [ lem : genlassoproj ] , and the fact that projection onto a closed convex set is unique ) .however , unlike the lasso , the optimal subgradient in the generalized lasso problem is not necessarily unique . in particular , if , then the optimal subgradient is not uniquely determined by conditions ( [ eq : genlassokkt ] ) and ( [ eq : genlassosg ] ) . given a subgradient satisfying ( [ eq : genlassokkt ] ) and ( [ eq : genlassosg ] ) for some , we define the _ boundary set _ as this generalizes the notion of the equicorrelation set in the lasso problem [ though , as just noted , the set is not necessarily unique unless ] .we also define now we focus on writing the generalized lasso fit and solutions in terms of and . abbreviating , note that we can expand .therefore , multiplying both sides of ( [ eq : genlassokkt ] ) by yields since , we can write . also , we have by definition of , so .these two facts allow us to rewrite ( [ eq : genlassokktb ] ) as and hence the fit is where we have un - abbreviated .further , any generalized lasso solution is of the form where .multiplying the above equation by , and recalling that , reveals that ; hence . in the case that , the generalized lasso solution is unique and is given by ( [ eq : genlassosol ] ) with .this occurs when , for example .otherwise , any gives a generalized lasso solution in ( [ eq : genlassosol ] ) as long as it also satisfies the sign condition necessary to ensure that is a proper subgradient of .we define the _ active set _ of a particular solution as which can be alternatively expressed as .if corresponds to a subgradient with boundary set and signs , then ; in particular , given and , different active sets can be generated by taking such that ( [ eq : genlassosign ] ) is satisfied , and also if , then , and there is only one active set ; however , in this case , can still be a strict subset of .this is quite different from the lasso problem , wherein for almost every whenever .[ note that in the generalized lasso problem , implies that is unique but implies nothing about the uniqueness of is determined by the rank of .the boundary set is not necessarily unique if , and in this case we may have for some , which certainly implies that for any .hence some boundary sets may not correspond to active sets at any .] 
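Before continuing with the sign bookkeeping, a small sketch may clarify how an active set of a generalized lasso solution is read off and how it enters the degrees-of-freedom formula that appears later for the signal-approximation case (X equal to the identity): for the 1-d fused lasso, nullity(D restricted to the inactive rows) counts the constant segments. The piecewise-constant solution below is supplied by hand rather than computed, so the focus is purely on the bookkeeping.

```python
import numpy as np

def fused_lasso_D(p):
    """First-difference penalty matrix of the 1-d fused lasso."""
    return np.eye(p - 1, p, k=1) - np.eye(p - 1, p)

# a hypothetical piecewise-constant solution (three constant segments)
beta_hat = np.array([1.0, 1.0, 1.0, 4.0, 4.0, -2.0, -2.0, -2.0])
p = beta_hat.size
D = fused_lasso_D(p)

A = np.flatnonzero(np.abs(D @ beta_hat) > 1e-10)     # active set of D beta_hat
D_minus_A = np.delete(D, A, axis=0)                  # rows of D outside the active set
df_estimate = p - np.linalg.matrix_rank(D_minus_A)   # nullity(D_{-A})
print(f"active set (jump locations): {A}")
print(f"df estimate = nullity(D_-A) = {df_estimate}  (number of constant segments)")
```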
we denote the signs of the active entries in by and we note that .following the same arguments as those leading up to the expression for the fit ( [ eq : genlassofit ] ) in section [ sec : genlassobound ] , we can alternatively express the generalized lasso fit as where and are the active set and signs of any solution .computing the divergence of the fit in ( [ eq : genlassofit2 ] ) , and pretending that and are constants ( not depending on ) , gives . the same logic applied to ( [ eq : genlassofit ] ) gives .the next section shows that , for almost every , the quantities or can indeed be treated as locally constant in expressions ( [ eq : genlassofit2 ] ) or ( [ eq : genlassofit ] ) , respectively .we then prove that linear subspaces are invariant under all choices of boundary sets , respectively active sets , and that the two subspaces are in fact equal , for almost every .furthermore , we express the generalized lasso degrees of freedom in terms of any boundary set or any active set .we call an _ optimal pair _ provided that and jointly satisfy the kkt conditions , ( [ eq : genlassokkt ] ) and ( [ eq : genlassosg ] ) , at .for such a pair , we consider its boundary set , boundary signs , active set , active signs , and show that these sets and sign vectors possess a kind of local stability .[ lem : lcbound ] there exists a set , of measure zero , with the following property : for , and for any optimal pair with boundary set , boundary signs , active set , and active signs , there is a neighborhood of such that each point yields an optimal pair with the same boundary set , boundary signs , active set and active signs .the proof is delayed to appendix [ app : lcbound ] , mainly because of its length .now lemma [ lem : lcbound ] , used together with expressions ( [ eq : genlassofit ] ) and ( [ eq : genlassofit2 ] ) for the generalized lasso fit , implies an invariance in representing a ( particularly important ) linear subspace .[ lem : invbound ] for the same set as in lemma [ lem : lcbound ] , and for any , the linear subspace is invariant under all boundary sets defined in terms of an optimal subgradient at at .the linear subspace is also invariant under all choices of active sets defined in terms of a generalized lasso solution at .finally , the two subspaces are equal , .let , and let be an optimal subgradient with boundary set and signs .let be the neighborhood of over which optimal subgradients exist with boundary set and signs , as given by lemma [ lem : lcbound ] .recalling the expression for the fit ( [ eq : genlassofit ] ) , we have that for every if is a solution with active set and signs , then again by lemma [ lem : lcbound ] there is a neighborhood of such that each point yields a solution with active set and signs .[ note that and are not necessarily equal unless and jointly satisfy the kkt conditions at . ] therefore , recalling ( [ eq : genlassofit ] ) , we have for each .the uniqueness of the generalized lasso fit now implies that for all .as is open , for any , there exists an such that . plugging into the equation abovereveals that , hence .the reverse inclusion follows similarly , and therefore .finally , the same strategy can be used to show that these linear subspaces are unchanged for any choice of boundary set , coming from an optimal subgradient at and for any choice of active set coming from a solution at . 
noticing that for matrices gives the result as stated in the lemma .this local stability result implies the following theorem .[ thm : genlassodf ] assume that follows a normal distribution ( [ eq : normal ] ) . for any and , the degrees of freedom of the generalized lasso fit can be expressed as ,\ ] ] where is the boundary set corresponding to any optimal subgradient of the generalized lasso problem at .we can alternatively express degrees of freedom as ,\ ] ] with being the active set corresponding to any generalized lasso solution at .note : lemma [ lem : invbound ] implies that for almost every , for any defined in terms of an optimal subgradient , and for any defined in terms of a generalized lasso solution , .this makes the above expressions for degrees of freedom well defined .proof of theorem [ thm : genlassodf ] first , the continuity and almost differentiability of follow from lemmas [ lem : nonexp ] and [ lem : genlassoproj ] , so we can use stein s formula ( [ eq : steindf ] ) for degrees of freedom .let , where is the set of measure zero as in lemma [ lem : lcact ] .if and are the boundary set and signs of an optimal subgradient at , then by lemma [ lem : invbound ] there is a neighborhood of such that each point yields an optimal subgradient with boundary set and signs .therefore , taking the divergence of the fit in ( [ eq : genlassofit ] ) , and taking an expectation over gives the first expression in the theorem .similarly , if and are the active set and signs of a generalized lasso solution at , then by lemma [ lem : invbound ] there exists a solution with active set and signs at each point in some neighborhood of .the divergence of the fit in ( [ eq : genlassofit2 ] ) is hence and taking an expectation over gives the second expression .if , then for any linear subspace , so the results of theorem [ thm : genlassodf ] reduce to = { \mathrm{e}}[{\operatorname{nullity}}(d_{-{{\mathcal{a}}}})].\ ] ] the first equality above was shown in . analyzing the null space of ( equivalently , ) for specific choices of gives interpretable results on the degrees of freedom of the fused lasso and trend filtering fits as mentioned in the introduction .it is important to note that , as , the active set is unique , but not necessarily equal to the boundary set [ since can be nonunique if ] . if , then for any subset .therefore the results of theorem [ thm : genlassodf ] become = { \mathrm{e}}[{\operatorname{rank}}(x_{{\mathcal{a}}})],\ ] ] which match the results of theorems [ thm : lassodfequi ] and [ thm : lassodfact ] ( recall that for the lasso the boundary set is exactly the same as equicorrelation set ) .recent and independent work of shows that , for arbitrary and for any , there exists a generalized lasso solution whose active set satisfies ( calling the `` smallest '' active set is somewhat of an abuse of terminology , but it is the smallest in terms of the above intersection . 
) the authors then prove that , for any , the generalized lasso fit has degrees of freedom ,\ ] ] with the special active set as above .this matches the active set result of theorem [ thm : genlassodf ] applied to , since for this special active set .we conclude this section by comparing the active set result of theorem [ thm : genlassodf ] to degrees of freedom in a particularly relevant equality constrained linear regression problem ( this comparison is similar to that made in lasso case , given at the end of section [ sec : lasso ] ) .the result states that the generalized lasso fit has degrees of freedom ] . here is the active set of any lasso solution at , that is , .this result is well defined , since we proved that any active set generates the same linear subspace , almost everywhere in .in fact , we showed that for almost every , and for any active set of a solution at , the lasso fit can be written as for all in a neighborhood of , where is a constant ( it does not depend on ) .this draws an interesting connection to linear regression , as it shows that locally the lasso fit is just a translation of the linear regression fit of on . the same results ( on degrees of freedom and local representations of the fit )hold when the active set is replaced by the equicorrelation set .our results also extend to the generalized lasso problem , with an arbitrary predictor matrix and arbitrary penalty matrix .we showed that degrees of freedom of the generalized lasso fit is $ ] , with being the active set of any generalized lasso solution at , that is , .as before , this result is well defined because any choice of active set generates the same linear subspace , almost everywhere in .furthermore , for almost every , and for any active set of a solution at , the generalized lasso fit satisfies for all in a neighborhood of , where is a constant ( not depending on ) .this again reveals an interesting connection to linear regression , since it says that locally the generalized lasso fit is a translation of the linear regression fit on , with the coefficients subject to .the same statements hold with the active set replaced by the boundary set of an optimal subgradient .we note that our results provide practically useful estimates of degrees of freedom . for the lasso problem, we can use as an unbiased estimate of degrees of freedom , with being the active set of a lasso solution . to emphasizewhat has already been said , here we can actually choose any active set ( i.e. , any solution ) , because all active sets give rise to the same , except for in a set of measure zero .this is important , since different algorithms for the lasso can produce different solutions with different active sets . for the generalized lasso problem , an unbiased estimate for degrees of freedomis given by , where is the active set of a generalized lasso solution .this estimate is the same , regardless of the choice of active set ( i.e. , choice of solution ) , for almost every .hence any algorithm can be used to compute a solution .first , we prove the statement for the projection map . note that where the first inequality follows from ( [ eq : pfact ] ) , and the second is by cauchy dividing both sides by gives the result .we have shown that and are lipschitz ( with constant ) ; they are therefore continuous , and almost differentiability follows from the standard proof of the fact that a lipschitz function is differentiable almost everywhere .we write to denote the set of faces of . 
to each face , there is an associated normal cone , defined as the normal cone of satisfies for any .[ we use to denote the relative interior of a set , and to denote its relative boundary . ]now let .we have for some , and by construction .furthermore , we claim that projecting onto is the same as projecting onto the affine hull of , that is , .otherwise there is some with , and as , this means that . by definition of , there is some such that . but , which is a contradiction .this proves the claim , and writing , we have as desired . in the first type of points above , vertices are excluded because when is a vertex . in the second type , is excluded because .the lattice structure of tells us that for any face , we can write . this , and the fact that the normal cones have the opposite partial ordering as the faces , imply that points of the first type above can be written as with and for some . notethat actually we must have because otherwise we would have .therefore it suffices to consider points of the second type alone , and can be written as as is a polyhedron , the set of its faces is finite , and for each .therefore is a finite union of sets of dimension , and hence has measure zero . now let ^\perp } [ ( x_{\mathcal{e}})^+]_{(-{{\mathcal{a}}},\cdot ) } \bigl(z - ( x_{\mathcal{e}}{^t})^+\lambda s \bigr ) = 0 \bigr\}.\ ] ] the first union is taken over all possible subsets and all sign vectors ; as for the second union , we define for a fixed subset ^\perp } [ ( x_{\mathcal{e}})^+]_{(-{{\mathcal{a}}},\cdot ) } \not= 0 \bigr\}.\ ] ] notice that is a finite union of affine subspace of dimension , and hence has measure zero .let , and let be a lasso solution , abbreviating and for the active set and active signs . also write and for the equicorrelation set and equicorrelation signs of the fit .we know from ( [ eq : lassosol ] ) that we can write where is such that {(-{{\mathcal{a}}},\cdot ) } \bigl(y - ( x_{\mathcal{e}}{^t})^+ \lambda s \bigr ) + b_{-{{\mathcal{a } } } } = 0.\ ] ] in other words , {(-{{\mathcal{a}}},\cdot ) } \bigl(y - ( x_{\mathcal{e}}{^t})^+ \lambda s \bigr ) = -b_{-{{\mathcal{a } } } } \in \pi_{-{{\mathcal{a } } } } ( { \operatorname{null}}(x_{\mathcal{e}})),\ ] ] so projecting onto the orthogonal complement of the linear subspace gives zero , ^\perp } [ ( x_{\mathcal{e}})^+]_{(-{{\mathcal{a}}},\cdot ) } \bigl(y - ( x_{\mathcal{e}}{^t})^+\lambda s \bigr ) = 0.\ ] ] since , we know that ^\perp } [ ( x_{\mathcal{e}})^+]_{(-{{\mathcal{a}}},\cdot ) } = 0,\ ] ] and finally , this can be rewritten as {(-{{\mathcal{a}}},\cdot ) } \bigr ) \subseteq\pi_{-{{\mathcal{a}}}}({\operatorname{null}}(x_{\mathcal{e}})).\ ] ] consider defining , for a new point , where , and is yet to be determined .exactly as in the proof of lemma [ lem : lcequi ] , we know that , and for all , a neighborhood of .now we want to choose so that has the correct active set and active signs . 
for simplicity of notation , first define the function , equation ( [ eq : colsp ] ) implies that there is a such that , hence .however , we must choose so that additionally for and .write by the continuity of , there exits a neighborhood of of such that for and , for all .therefore we only need to choose a vector , with , such that sufficiently small .this can be achieved by applying the bounded inverse theorem , which says that the bijective linear map has a bounded inverse ( when considered a function from its row space to its column space ) .therefore there exists some such that for any , there is a vector , , with finally , the continuity of implies that can be made sufficiently small by restricting , another neighborhood of .define the set ^\perp}\cdot d_{{\mathcal{b}}\setminus{{\mathcal{a } } } } \bigl(x p_{{\operatorname{null}}(d_{-{\mathcal{b}}})}\bigr)^+ \\ & & \hspace*{165pt}{}\times \bigl(z - \bigl(p_{{\operatorname{null}}(d_{-{\mathcal{b } } } ) } x{^t}\bigr)^+ d_{\mathcal{b}}{^t}\lambda s \bigr ) = 0 \bigr\}.\end{aligned}\ ] ] the first union above is taken over all subsets and all sign vectors .the second union is taken over subsets , where ^\perp } d_{{\mathcal{b}}\setminus{{\mathcal{a } } } } \bigl(x p_{{\operatorname{null}}(d_{-{\mathcal{b}}})}\bigr)^+ \not= 0 \bigr\}.\ ] ] since is a finite union of affine subspaces of dimension , it has measure zero .now fix , and let be an optimal pair , with boundary set , boundary signs , active set , and active signs . starting from ( [ eq : genlassokktb ] ) , and plugging in for the fit in terms of , as in ( [ eq : genlassofit ] ) we can show that where . by ( [ eq : genlassosol ] ) , we know that where .furthermore , or equivalently , projecting onto the orthogonal complement of the linear subspace therefore gives zero , ^\perp } d_{{\mathcal{b}}\setminus{{\mathcal{a } } } } \bigl(x p_{{\operatorname{null}}(d_{-{\mathcal{b}}})}\bigr)^+ \bigl(y - \bigl(p_{{\operatorname{null}}(d_{-{\mathcal{b } } } ) } x{^t}\bigr)^+ d_{\mathcal{b}}{^t}\lambda s \bigr ) = 0,\ ] ] and because , we know that in fact ^\perp } d_{{\mathcal{b}}\setminus{{\mathcal{a } } } } \bigl(x p_{{\operatorname{null}}(d_{-{\mathcal{b}}})}\bigr)^+ = 0.\ ] ] this can be rewritten as at a new point , consider defining , and where is yet to be determined . by construction , and satisfy the stationarity condition ( [ eq : genlassokkt ] ) at .hence it remains to show two parts : first , we must show that this pair satisfies the subgradient condition ( [ eq : genlassosg ] ) at ; second , we must show this pair has boundary set , boundary signs , active set and active signs .actually , it suffices to show the second part alone , because the first part is then implied by the fact that and satisfy the subgradient condition at .well , by the continuity of the function , we have provided that , a neighborhood of .this ensures that has boundary set and signs . as for the active set and signs of , note first that , following directly from the definition .next , define the function , so .equation ( [ eq : colsp2 ] ) implies that there is a vector such that , which makes .however , we still need to choose such that for all and .to this end , write the continuity of implies that there is a neighborhood of such that for all and , for . 
since where is the operator norm of the , we only need to choose such that , and such that is sufficiently small .this is possible by the bounded inverse theorem applied to the linear map : when considered a function from its row space to its column space , is bijective and hence has a bounded inverse .therefore there is some such that for any , there is a with and the continuity of implies that the right - hand side above can be made sufficiently small by restricting , a neighborhood of .the dual of the lasso problem ( [ eq : lasso ] ) has appeared in many papers in the literature ; as far as we can tell , it was first considered by .we start by rewriting problem ( [ eq : lasso ] ) as then we write the lagrangian and we minimize over to obtain the dual problem taking the gradient of with respect to to , and setting this equal to zero gives where is a subgradient of the function evaluated at . from ( [ eq : lassodual1 ] ) , we can immediately see that the dual solution is the projection of onto the polyhedron as in lemma [ lem : lassoproj ] , and then ( [ eq : lassopd1 ] ) shows that is the residual from projecting onto .further , from ( [ eq : lassopd2 ] ) , we can define the equicorrelation set as noting that together ( [ eq : lassopd1 ] ) , ( [ eq : lassopd2 ] ) are exactly the same as the kkt conditions ( [ eq : lassokkt ] ) , ( [ eq : lassosg ] ) , and all of the arguments in section [ sec : lasso ] involving the equicorrelation set can be translated to this dual perspective .there is a slightly different way to derive the lasso dual , resulting in a different ( but of course , equivalent ) formulation .we first rewrite problem ( [ eq : lasso ] ) as and by following similar steps to those above , we arrive at the dual problem each dual solution ( now no longer unique ) satisfies the dual problem ( [ eq : lassodual2 ] ) and its relationship ( [ eq : lassopd3 ] ) , ( [ eq : lassopd4 ] ) to the primal problem offer yet another viewpoint to understand some of the results in section [ sec : lasso ] . for the generalized lasso problem, one might imagine that there are three different dual problems , corresponding to the three different ways of introducing an auxiliary variable into the generalized lasso criterion : { { \hat{\beta}}},\hat{z } & \in&\mathop{{\operatorname{argmin}}}_{\beta\in{\mathbb{r}}^p , z \in{\mathbb{r}}^p } { \frac{1}{2}}\|y - x\beta\|_{2}^2 + \lambda\|dz\|_{1}\qquad\mbox{subject to } z=\beta ; \\[2pt ] { { \hat{\beta}}},\hat{z } & \in&\mathop{{\operatorname{argmin}}}_{\beta\in{\mathbb{r}}^p , z \in{\mathbb{r}}^m } { \frac{1}{2}}\|y - x\beta\|_{2}^2 + \lambda\|z\|_{1}\qquad\mbox{subject to } z = d\beta.\end{aligned}\ ] ] however , the first two approaches above lead to lagrangian functions that can not be minimized analytically over .only the third approach yields a dual problem in closed - form , as given by , \\[-7pt ] \eqntext{\mbox{subject to } \|v\|_{\infty}\leq\lambda , d{^t}v \in{\operatorname{row}}(x).}\end{aligned}\ ] ] the relationship between primal and dual solutions is \label{eq : genlassopd2 } { \hat{v}}&= & \lambda\gamma,\end{aligned}\ ] ] where is a subgradient of evaluated at . directly from ( [ eq : genlassodual ] ) we can see that is the projection of the point onto the polyhedron by ( [ eq : genlassopd1 ] ) , the primal fit is , which can be rewritten as where is the polyhedron from lemma [ lem : genlassoproj ] , and finally because is zero on . 
by ( [ eq : genlassopd2 ] ) , we can define the boundary set corresponding to a particular dual solution as ( this explains its name , as gives the coordinates of that are on the boundary of the box . ) as ( [ eq : genlassopd1 ] ) , ( [ eq : genlassopd2 ] ) are equivalent to the kkt conditions ( [ eq : genlassokkt ] ) , ( [ eq : genlassosg ] ) [ following from rewriting ( [ eq : genlassopd1 ] ) using , the results in section [ sec : genlasso ] on the boundary set can all be derived from this dual setting . | we derive the degrees of freedom of the lasso fit , placing no assumptions on the predictor matrix . like the well - known result of zou , hastie and tibshirani [ _ ann . statist . _ * 35 * ( 2007 ) 21732192 ] , which gives the degrees of freedom of the lasso fit when has full column rank , we express our result in terms of the active set of a lasso solution . we extend this result to cover the degrees of freedom of the generalized lasso fit for an arbitrary predictor matrix ( and an arbitrary penalty matrix ) . though our focus is degrees of freedom , we establish some intermediate results on the lasso and generalized lasso that may be interesting on their own . . |
in high - dimensional time series analysis , the need to define time - varying patterns of sparsity in model parameters has proven challenging .dynamic latent thresholding , introduced in , provides a general approach that induces parsimony into time series model structures with potential to reduce effective parameter dimension and improve model interpretations as well as forecasting performance .the utility of various classes of latent threshold models ( ltms ) has been demonstrated in recent applied studies in macroeconomics and financial forecasting and portfolio decisions .the scope of the approach includes dynamic regressions , dynamic latent factor models , time - varying vector autoregressions , and dynamic graphical models of multivariate stochastic volatility , and also opens a path to new approaches to dynamic network modeling .this paper adapts the latent thresholding approach to different classes of multivariate factor models with a one main interest in dynamic transfer response analysis .our detailed case - study concerns time - varying lag / lead relationships among multiple time series in electroencephalographic ( eeg ) studies . here the latent threshold analysis of such models induces relevant , time - varying patterns of sparsity in otherwise time - varying factor loadings matrices , among other model features .we evaluate and compare two different classes of models in the eeg study , and explore a number of posterior summaries in relation to this main interest .time series factor modeling has been an area of growth for bayesian analysis in recent years .two key themes are : ( i ) dynamic factor models , where latent factors are time series processes underlying patterns of relationships among multiple time series ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) ; and ( ii ) sparse factor models , where the bipartite graphs representing conditional dependencies of observed variables on factors are not completely connected ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , increasingly applied in problems of classification and prediction . herewe combine dynamics with sparsity .some of the practical relevance of models with time - varying factor loadings is evident in recent studies ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . as the number of variables and factors increase, so does the need to induce sparsity in loadings matrices to reflect the view that variables will typically be conditionally dependent on only a subset of factors . in a time series setting , however , the patterns of occurrence of zeros in otherwise time - varying factor loadings matrices may also be time - varying .one factor may relate to one particular variable with a time - varying loading over a period of time , but be insignificant for that variable in other time periods . thus the need to develop models of time - varying sparsity of loadings matrices in dynamic factor models .all vectors are column vectors .we use , , , , , for the normal , uniform , beta , gamma , and wishart distributions , respectively .succinct notation for ranges uses to denote when e.g. 
, denotes .the indicator function is and is the diagonal matrix with diagonal elements in the argument and hence dimension implicit .elements of any time series are , and those of any matrix time series are a general setting , the time series , ( ) is modeled as where : * is a of predictor variables known at time ; * is the matrix of regression coefficients at time ; * is the vector of latent factors , arising from some underlying latent factor process over time ; * is the matrix of factor loadings at time ; * is the residual term , assumed zero - mean normal with diagonal variance matrix of volatilities at time complete specification requires models for , , and over time .typically , , and models are identified via constraints on , such as fixing to have zeros above a unit upper diagonal : and for in section [ sec : modelsmandm+ ] , there is interpretable structure to and alternative assumptions are natural . special cases and assumptions now follow .* constant and sparse factor models : * much past work uses constant coefficients and loadings the pure factor model , with and typically assumes the factors are zero - mean and independent , yielding a linear factor representation of the conditional variance matrix of sparsity in then begins development of more parsimonious models for larger ( e.g. * ? ? ?* favar models : * when concatenates past values to lag and are constant , the model is a factor - augmented vector autoregression ( favar ) .variants based on differing models for are becoming of increasing interest in macroeconomics .* factor stochastic volatility models : * traditional bayesian multivariate volatility models have and where model completion involves stochastic volatility model for the and based on either log - ar(1 ) models or bayesian discounting ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* time - varying regression and factor loadings models : * variants of models with time - varying are well - used ( e.g * ? ? ?* ; * ? ? ?* ; * ? ? ?typically , the elements are ar(1 ) processes . within this class ,random walk models have flexibility to adapt to change over time , while stationary ar(1 ) models can have longer - term predictive value and interpretation .* process models for factors : * models of factor processes typically involve either conditionally independent factors over time , with or without time - varying conditional variances , or stationary vector autoregressive ( var ) models .we highlight example models that incorporate elements noted in section [ sec : dfms ] , while being customized to the eeg study : a response variable is hierarchically linked to current and lagged values of an underlying latent process of scientific interest .a first latent factor model is discussed , then extended with a time - varying vector autoregressive component ; these two models are customized examples of time - varying favar processes .a dynamic transfer response factor model ( dtrfm ) relates the outcome variables to a foundational , scalar latent process by specifying to be a vector of recent values of this underlying scalar process .each outcome variable relates to potentially several recent and lagged values of through time - varying loadings coefficients ; at any instant in time , these coefficients define the transfer response of the variable to the history of the underlying process . as the loadings vary in time, the form of this response then naturally varies .the basic structure of the model is described here . 
in , set for all suppose also that for some where the scalar series is modeled as a time - varying autoregressive ( tvar ) process of order .that is , where is the vector of ar coefficients at time conditional on the variance elements and , the , and sequences are assumedly independent over time and mutually independent .equation ( [ eq : delta ] ) indicates that the coefficients follow a vector random walk over time , permitting time variation but not anticipating its direction or form .coupled with we have the traditional specification of a bayesian tvar model for the latent process . for the scalar response variable , the above model implies showing the transfer of responses from past values of via the possibly quite widely time - varying loadings , the latter specific to series for each model identification is straightforward . from , it is clear that an identification problem exists with respect to the lag / lead structure , i.e. , the time origin for the latent process , as well as the scale of relative to the fixing elements of one row of to specified values obviates this .here we do this on the first row : for one factor ( lag ) index we set and for this way , is a direct , unbiased measurement of , subject to residual noise , so that we have formal identification and a quantitative anchor for prior specification . beyond the need for priors for model hyper - parameters ,we need structures for the error volatility processes in and the tvar innovations variance process in . for routine analysis that is not inherently focused on volatility prediction , standard bayesian variance discount learning models effectiverandom walks whose variability is controlled by a single discount factor are defaults .specified to describe typically slowly , randomly changing variances , the inverse gamma / beta bayesian model has the ability to track time - varying variances over time , and to deliver full posterior samples from relevant conditional posteriors for volatility sequences in mcmc analyses .we use variance discount models here , based on standard theory in , for example , and ; these are simply specified via two discount factor hyper - parameters : , for each of the set of observation volatilities , and for the tvar innovations volatility .substantive interpretation is aided by investigating the more detailed structure that theoretically underlies the latent tvar process specifically , well - known ( and well - exploited ) time series decomposition theory ( e.g * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) shows that , given the model parameters , the series has the decomposition where the are simpler component time series processes and are non - negative integers such that the values of these integers and the nature of the component processes depend on the model parameters . 
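a minimal numerical sketch of this construction is given below; it is illustrative only and not the authors' code. it simulates a tvar(p) latent process whose coefficient vector follows a vector random walk and, at a given time, reads off the moduli and frequencies of the instantaneous components from the eigenvalues of the companion matrix of the ar coefficients, which is one standard way of computing the decomposition just described. all scales and starting values below are assumed.

```python
# minimal sketch (not the authors' mcmc): simulate a latent tvar(p) process with
# random-walk coefficients, then read off the moduli and frequencies of its
# instantaneous components from the companion-matrix eigenvalues of phi_t.
import numpy as np

rng = np.random.default_rng(1)
T, p = 500, 4
w_sd, v_sd = 0.01, 1.0                   # assumed coefficient and innovation scales

phi = np.zeros((T, p))
phi[0] = [0.5, -0.2, 0.1, 0.05]
x = np.zeros(T)
for t in range(1, T):
    phi[t] = phi[t - 1] + w_sd * rng.standard_normal(p)   # random-walk coefficient evolution
    lags = x[max(0, t - p):t][::-1]                        # x_{t-1}, x_{t-2}, ...
    x[t] = phi[t, :lags.size] @ lags + v_sd * rng.standard_normal()

def companion(phi_t):
    m = phi_t.size
    C = np.zeros((m, m))
    C[0, :] = phi_t
    C[1:, :-1] = np.eye(m - 1)
    return C

# complex-conjugate eigenvalue pairs correspond to quasi-periodic components;
# real eigenvalues correspond to first-order, tvar(1)-like components.
eig = np.linalg.eigvals(companion(phi[-1]))
moduli = np.abs(eig)
freqs = np.abs(np.angle(eig)) / (2 * np.pi)   # cycles per sampling interval
print(np.column_stack([moduli, freqs]))
```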
typically , slow variation over time in these yields stable numbers and the resulting processes are computable directly from ( posterior samples or estimates of ) the and the component processes have the ( approximate ) forms of time - varying autoregressive moving averages tvarma(2,1 ) processes exhibiting quasi - periodic behavior : each is a stochastic sine wave whose amplitude , phase and frequency varies in time ; the time variation in frequency is directly related to that in , while the amplitude and phase variation is inherent and driven by the levels of variation controlled by further , posterior inferences for the time - varying frequencies , amplitude and phase are directly available from posterior simulations that generate samples of the and at each time . in parallel , each is a tvar(1 ) process , with time variation in short - term autocorrelations driven by that in . as with the we have direct access to posterior inferences on the tvar(1 ) parameters of these component processes from simulations of the posterior for at each time .this decomposition therefore gives inferences on underlying time - frequency and short - term dependency structures underlying and its dynamic behavior . from itfollows that where , for each in the ranges displayed , thus the transfer response pattern defined by the time - varying factor loadings translates the nature of the inherent , underlying components of the driving process to each of the output / response variables .the above shows that this class of models provides broad scope for capturing multiple time - varying patterns of component structure including several or many components with dynamically varying time - frequency characteristics via a single latent process filtered to construct the latent factor vector process in the general framework .the flexibility of these models for increasingly high - dimensional response series is then further enhanced through the ability of models with series - specific and time - varying loadings to differentiate both instantaneous and time - varying patterns in the transfer responses .a direct model extension adds back a non - zero dynamic regression term to provide an example of time - varying favar models .that is , with the dynamic factor component as specified via model m ,suppose now follows where the matrix contains time - varying autoregressive parameters and that is , is dynamically regressed on the immediate past value as well as the underlying components of a driving latent process through the dynamic transfer response mechanism : we denote this as a tv - var(1 ) component of the model .this extension of model m allows for the transfer response effects of the fundamental , driving process to be overlaid with spill - over effects between individual response series from one time point to the next , modeled by a basic tv - var(1 ) component .this can be regarded as a model extension to assess whether the empirical tv - var component is able to explain structure in the response data not adequately captured by the structure dynamic factor component .for increasingly large the tv - var(1 ) model component alone ( i.e. 
, setting implies what can be quite flexible marginal processes for the individual in contrast , the dynamic transfer response factor component while also quite flexible represents structurally related processes .there is thus opportunity to for evaluation of the latter in the extended model m+ .as the dimension of response variables and the number of effective latent factors increases , it becomes increasingly untenable to entertain models in which all loadings in are non - zero .further , depending on context , it is also scientifically reasonable to entertain models in which one or more variables may relate in a time - varying manner to a particular element of the latent factor vector for some periods of time , but that the relationships may be practically negligible at other epochs .this is the concept of dynamic sparsity : a particular may be non - zero over multiple , disjoint time periods , and adequately modeled by a specified stochastic process model when non - zero , but effectively zero in terms of the effect of on in other periods .the same idea applies to dynamic regression and/or autoregressive parameters in analysis that permits this will allow for adaptation over time to zero / non - zero periods as well as to inference on actual values when non - zero .this includes extreme cases when a may be inferred as effectively zero ( or non - zero ) over the full time period of interest .dynamic latent thresholding addresses this question of _ time - varying sparsity _ in some generality ; this approach is now developed in our context of dynamic transfer response factor models .we anchor the development on basic ar(1 ) process models for the free elements of the dynamic factor loadings matrix recalling that the first row of elements is constrained to fixed ( 0/1 ) values as noted in section [ subsec : dtrfm ] . for are modeled via what we denote by the lt - ar(1 ) processes defined as follows : where the latent process is ar(1 ) with and where the processes are assumed independent over the latent threshold structure allows each time - varying factor loading to be shrunk fully to zero when its absolute value falls below a threshold .this way , a factor loads in explaining a response only when the corresponding is `` large enough '' .inference on the latent processes and threshold parameters make this data - adaptive , neatly embodying and yielding data - informed time - varying sparsity / shrinkage and parameter reduction .the same approach applies to the time - varying autoregressive parameters in the extension to model m+ .that is , the effective model parameters are modeled as thresholded values of ar(1 ) processes in precisely the same way as for the details are left to the reader as they follow the development for with simple notational changes. it will be evident that prior specification for threshold parameters are key in defining practical models .we can do this by referencing the expected range of variation of the corresponding process . under the ar(1 ) process model detailed above, has a stationary normal distribution with mean and variance given the hyper - parameters this allows us to compute the probability that exceeds the threshold i.e. , the probability of a practically significant coefficient across any range of possible thresholds . 
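a small sketch of the latent-threshold ar(1) construction and of this prior sparsity probability follows. the symbols mu, phi, s and d stand in for the (unreproduced) hyper-parameters and threshold, and all numerical values are illustrative only, not values from the paper.

```python
# sketch with assumed notation: the latent coefficient b_t follows an ar(1) process
# with mean mu, persistence phi and innovation sd s, and the effective loading is
# beta_t = b_t when |b_t| >= d and zero otherwise.  the stationary normal
# distribution of b_t then gives the prior probability of a practically
# significant (non-zero) coefficient for any candidate threshold d.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
T = 1000
mu, phi, s, d = 0.3, 0.98, 0.05, 0.25     # illustrative hyper-parameters and threshold

b = np.zeros(T)
b[0] = mu
for t in range(1, T):
    b[t] = mu + phi * (b[t - 1] - mu) + s * rng.standard_normal()
beta = np.where(np.abs(b) >= d, b, 0.0)    # thresholded, dynamically sparse loading

v = s**2 / (1 - phi**2)                    # stationary variance of b_t
p_active = norm.sf((d - mu) / np.sqrt(v)) + norm.cdf((-d - mu) / np.sqrt(v))
print(p_active)                            # prior probability that |b_t| exceeds d
```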
we follow this logic to specify informative, structured priors for the thresholds that depend explicitly on the hyper-parameters of the corresponding latent processes. we use this specification here; in particular, we take conditional uniform priors for the thresholds. some direct evaluation then yields marginal sparsity probabilities expressed through the standard normal cdf. this is trivially evaluated. for large values of the prior range parameter, it is also very well approximated by a simpler expression (this is extremely accurate for values as low as 2 and practically relevant values exceeding that). the sparsity probability strictly decreases in this parameter and decays to values of about 0.25, 0.20 and 0.15 at increasing values, the last at about 5.3. this gives us an assessment of what a particular choice implies in terms of overall expected levels of sparsity _a priori_. in our studies, we find strong robustness in posterior inferences to specified values above 3 or so, and use that value as a default. note also that there is flexibility to customize the prior to use different values for each threshold, to cater for contexts in which we aim to favor higher thresholds (and hence higher probabilities of zero parameter process values) for some coefficients than for others. mcmc computations extend and modify those developed previously for time-varying autoregressions and multivariate stochastic volatility factor models. the overall mcmc integrates a series of steps that use standard simulation components from bayesian state space models and from traditional (static loadings) latent factor models. customization here involves modifications to resample the latent tvar factor process in our dynamic transfer response factor context, and other elements including metropolis-hastings steps for the latent threshold components. the appendix accompanying this paper describes key details, and notes how the mcmc directly extends previously described strategies for dynamic latent threshold models. electroencephalographic (eeg) traces are time series of electrical potential fluctuations at various scalp locations of a human subject, reflecting the complex dynamics of underlying neural communication. analysis of multichannel eeg traces is key to understanding the impact of electroconvulsive therapy (ect), one of the most effective treatments known for major depression, in which seizures are electrically induced in patients. the convulsive seizure activity drives multichannel eeg traces and statistical interest is to model such multivariate time series in order to reveal underlying characteristics and effects of ect. various models have been studied to explore features of eeg time series. univariate tvar models are well-used and proven as models of individual eeg channels
; they can adequately represent what can be considerable changes in the patterns of evolution of time-frequency structure in such series, as well as differences and changes in relationships across the eeg channels. such studies highlight the need for multivariate models of the time-varying commonalities across channels, with latent process structure reflecting the inherent, underlying mechanisms of neural communication. our analysis adapts an earlier approach. that work was the first bayesian approach to multivariate time series analysis that incorporated the key scientific feature of a single, focal latent process driving the eeg signals across multiple channels. the authors used a novel dynamic distributed lag approach that aimed to capture time-varying lag-lead structures across the eeg channels, introducing a customized model specific to that context. though effective, that approach was very specific and empirical: the authors developed dynamic regressions of the other channels on the _observed signal_ of one selected channel, the latter chosen as an empirical proxy for the underlying latent driving process. the developments of the current paper provide a general, flexible and, in terms of the specific goals of the dynamic lag/lead study, almost perfectly suited context that can be seen, in part, as an outgrowth from that prior work. here the identification of dynamically adaptive lag/lead structure is driven by combining time variation in non-zero factor loadings with the latent threshold approach. the study here explores 19-channel eeg time series recorded in one seizure of one patient, as reported and analyzed in earlier studies. the eeg channels are electrodes located around and over the patient's scalp; see figure [fig:map]. the original data set has sampling rate 256hz over a period of 1-2 minutes; following those studies, and to compare directly with them, we analyze the series subsampled every sixth observation after removing about 2,000 observations from the beginning (up to a higher amplitude portion of the seizure), yielding the observations analyzed here. representative graphs of data on two of the channels over selected epochs appear in figure [fig:eegdata]. visual inspection of the substantial time-varying, quasi-periodic trajectories of the data indicates that signals on some eeg channels are obviously delayed with respect to other channels, and the apparent delays (lags) vary substantially through time. this is perfectly consistent with the dynamic patterns of relationships of individual channels (the observed series) with an underlying seizure process (the latent factor process) captured by our model structure (section [subsec:dtrfm]); the latent process represents a range of dynamic quasi-periodicities characterizing multiple brain wave components overlaid by, and modified by, the induced seizure, and the time-varying lag/lead relationships among channels are represented by channel-specific and time-varying factor loadings, some of which may be negligible for all time or for periods of time, and relevant elsewhere. (figure [fig:map] caption: series are measurements of electrical potential fluctuations taken in parallel at each of these locations, defining the eeg channels; international 10-20 eeg system.) our analysis summaries are based on effective lags, i.e.
, the model has a latent factor vector and the first row of set to as the basis for model identification .this precisely parallels the setup in the empirical model of .as discussed in section [ subsec : dtrfm ] , some constraints of this form are needed on elements of to formally identify the single latent factor process model . there is no loss of generality nor any superfluous structure imposed on the model here ; we could choose any element of the first row of to insert the 1 , with different choices simply shifting the implied time origin of the process . under this structure ,the first eeg channel loads only , while the other channels have loadings in the first ( last ) two columns of related to the leading ( lagged ) values of the process .our analysis takes the so - called vertex channel cz as series .see figure [ fig : map ] .this parallels the use of the observed data on this specific channel as an empirical factor process in .the other channels are ordered from the centre out .one further modeling detail relates to a modification for a further , subtler soft identification question .the model so far implies that so the conditional variation expected in channel 1 is the sum of time - varying contributions from the process plus as in all state - space models with multiple components contributing to variability in observed data , distinguishing and partitioning the contributions requires care in prior specification ; the picture is complicated here as time variation in competes with the intricate dynamics of the and time variation in .a specification that controls this more usefully in the current latent factor models is to fix as constant the measurement error in series 1 , i.e. , set constant over time .this ensures the interpretation of as pure measurement error ( there being no reason to expect time variation in pure measurement error variances , as opposed to the characteristics of the underlying factor processes and transfer response / loadings parameters ) .we do this , maintaining the stochastic variance discount model for the other the latter combine pure measurement error and any other identified changes in residual volatility across these channels .then , posterior inferences indicating substantial patterns of time variation in the latter then indicate the ability of the discount models to account for relative variability not captured by the underlying , identified latent factor process . the mcmc analysis of section [ sec : computation ] is trivially modified ; a traditional inverse gamma prior on leads to an inverse gamma complete conditional posterior .the analyses summarized are based on model order for the latent process . while formal order selection approaches could be entertained ( e.g. , * ? ? ?* ; * ? ? ?* ) , our philosophy based on applied work with tvar and related models in many areas is to fit successively larger models and assess practical relevant of resulting inferences .here we fit the dtrfm with model orders up to and for each analysis examine the posterior estimates of components as detailed in section [ subsec : dtrfmcomps ] . 
with successively higher values of model order find robust estimation of quasi - periodic components with estimated frequencies varying over time in ranges consistent with known ranges of seizure and normal brain wave activity .model order is needed to identify these three components , and they persist in models of higher order ; in addition to their substantive relevant , the estimated components are sustained over time and vary at practically relevant levels in terms of their contributions to each of the eeg series . however , moving beyond leads to increasing numbers of estimated components that are very ephemeral , of low amplitudes and higher inferred frequencies beyond the range of substantive relevance .this signals over - fitting as the model caters to finer aspects of what is really noise in the data . andthen disregarding such estimated noise components is certainly acceptable , we prefer to cut - back to the model order that identifies the main component structures without these practically spurious elements . model specification is completed with priors for hyper - parameters .we take , supporting a range of values for and with prior mean for near 4.5 .seizure eeg data typically range over 300 - 600 units on the potential scale , with sample standard deviations over selected epochs varying from 40 - 100 or more .hence an expectation of measurement error standard deviation around 4 - 5 is consistent with prior expectations that measurement error constitutes in the range of 4 - 12% of the signal uncertainty in the traces . for the stochastic discount variance models ,we set values of the discount factors as ; this is based in part on examination of analyses with various values , and consistent with relatively modest levels of volatility in variance components .priors for the hyper - parameters of the latent ar(1 ) parameter processes are as follows : , , and independently , for .this anticipates persistence in non - thresholded latent factor loadings , while allowing for some of the loadings to exhibit notable patterns of change . finally , we take and set in the conditional uniform priors for thresholds .summaries here come from mcmc draws after a burn - in period of .computations were performed using custom code in ox .figure [ fig : factor ] displays time trajectories of the posterior means of the factor process , and the volatility of its driving innovations . the figure displays similar trajectories for the time - varying characteristic frequency and modulus for each of the three identified quasi - periodic components in , the of section [ subsec : dtrfmcomps ] .the component of lowest frequency has oscillations in the so - called seizure or slow wave band , considerably decaying toward the end of the seizure episode .notably , the other two inferred components have characteristic frequencies and moduli that are rather stable over time though exhibit minor variation . 
the frequency trajectories of the three quasi-periodic components show that each lies in one of the expected neuropsychiatric categories: the so-called _delta_ band (roughly 0-4hz), _theta_ band (4-8hz), and _alpha_ band (8-13hz). each component process is defined by the corresponding characteristic frequency, while being broad-band in the spectral domain with time-varying spectra that can be understood to peak at the characteristic frequencies themselves. the lowest-frequency component stays in the _delta_ range and gradually decreases over time; its modulus is close to one (solid line in figure [fig:factor](iv)), which indicates a considerably persistent component; this so-called delta-slow wave dominates the factor process during the course of the seizure, while its frequency content slides towards lower values towards the end of the seizure episode. the other two quasi-periodic components lie in the _theta_ and _alpha_ ranges; their moduli and amplitudes are lower than those of the dominant component over the whole seizure course, and show only limited changes over time. these reflect known frequency ranges of normal brain signaling, being dominated through much of the period by the strong seizure waveform. the innovations volatility rises in the initial part of the seizure to drive increased amplitude fluctuations throughout the central section, and then decays in later stages corresponding to the control and dissipation of the brain seizure effects. these features are consistent with expected structure in the seizure process, and with the broad results of earlier analyses. figure [fig:tvar] provides the trajectories of the posterior means and 95% credible intervals for the tvar coefficients. all are markedly time-varying. the 95% credible intervals are slightly wider during the late time periods, which feeds through to increased uncertainties in features of the quasi-periodic components; figures [fig:factor](v) and (vi), displaying the posterior means and 95% credible intervals of the frequency and modulus for the lowest-frequency component respectively, show somewhat increased uncertainties towards the end of the seizure. (figure [fig:s] caption: trajectories of posterior probabilities of non-zero loadings for channel f7, indicating the inferred probabilities of lag-lead structure in transfer response as they vary over time.) to generate some insights into the nature of dynamic sparsity under the latent threshold model, we select one channel, f7, and plot the corresponding trajectories of the estimated posterior probabilities over time; i.e., the probability of a non-zero loading of channel f7 on each of the leading, current and lagged values of the latent process. see figure [fig:s], where we indicate the loadings on the two leading values by lead(+2) and lead(+1) respectively, that on the current value by sync, and those on the lagged values by lag(-1) and lag(-2) respectively. the annotation here refers to lead/lag relative to the vertex location cz that reads out an unbiased estimate of the latent process. so a non-zero loading of f7 on the two-step-ahead value, for example, defines a 2-period lead of that channel relative to the vertex, whereas a non-zero loading on the one-step-lagged value represents a 1-period lag relative to the vertex channel cz, and so forth.
from the figure, it is clearly inferred that there is strong synchrony between f7 and cz in their transfer responses to fluctuations in the latent process, based on the sync trajectory. also, f7 has a reasonable probability of responding to the latent factor process 1 period ahead of cz, and almost surely does not lag cz in the transfer response over most of the time period, nor lead by more than 1 period until perhaps later in the seizure episode. the ability of latent thresholding to adaptively indicate existence of non-zero loadings during some periods and not others, while also operating as a global variable selection mechanism, is nicely exemplified here. figure [fig:st] provides a visual display of posterior probabilities across all the channels, drawn at selected snapshots in time, with images created by linearly interpolating between the estimates at the electrode locations. note that the model says nothing about spatial relationships between channels. the marked patterns of shrinkage in the latent threshold model analysis do nevertheless indicate strong spatial relationships, while the relationships also show marked patterns of change over time. for example, loadings of lead(+2) are commonly and globally shrunk to zero from left frontal to right occipital sites. the lead(+2) loadings around right frontal and prefrontal areas exhibit evolving degrees of shrinkage. similar changes are found in the parietal and occipital regions of lag(-2) loadings. meanwhile, almost no shrinkage is found in the synchronized loadings except for the channel t3 (left temporal). (figure [fig:st] caption: posterior probabilities interpolated from values at the channels; each row corresponds to the values at a selected time point, and the columns represent the lag/lead indices in relation to the transfer responses. figure [fig:beta] caption: estimated factor loadings with the lag-lead structure at selected time points; the row-column layout of images corresponds to that in figure [fig:st].) figure [fig:beta] is a companion to figure [fig:st] that exhibits aspects of estimated factor loadings with the lag-lead structure at selected time points. the images represent loading estimates that are set to the posterior mean when the loading is inferred to be non-zero and to zero otherwise. recall that the factor loading of vertex channel cz is fixed at 1 for the synchronized (zero lag/lead) position and 0 for lagged/leading positions. the estimates show strong patterns of positive spatial dependencies with cz at the synchronized state (zero lag/lead), with concurrent loadings on the process decaying towards the exterior regions. the approximate centroid of the higher loadings region moves from front to back through the course of the seizure, consistent with what is understood to be the typical progression of seizure waveforms. in the third row of the figure, the highest loadings appear at and near channel pz, and the parietal region exhibits rising intensity. another region of higher intensity is detected around the right temporal area in lead(+2) and in the channel c4 in lead(+1). this indicates dynamics of the driving latent process exhibited earlier in right temporal/central areas and followed in the occipital region; this spatial asymmetry in estimated transfer response loadings again links to experimental expectations for the progression of seizure activity.
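for concreteness, a sketch of how such interpolated scalp maps can be produced is given below. the electrode coordinates are placeholders rather than the true international 10-20 positions, and the values stand in for the posterior summaries plotted in the figures; this is an illustrative reconstruction, not the authors' plotting code.

```python
# sketch of the interpolated scalp maps: values at the 19 electrode sites are
# linearly interpolated onto a regular grid.  electrode coordinates below are
# placeholders, not the true international 10-20 positions.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
n_channels = 19
xy = rng.uniform(-1.0, 1.0, size=(n_channels, 2))    # placeholder electrode positions
values = rng.uniform(0.0, 1.0, size=n_channels)       # e.g. posterior non-zero probabilities

gx, gy = np.mgrid[-1:1:200j, -1:1:200j]
img = griddata(xy, values, (gx, gy), method="linear")  # nan outside the convex hull of sites
```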
in the last row of figure [fig:beta] (a late stage of the seizure), the major lead/lag loadings diminish while the synchronized loadings persist. two animated figures, available as online supplementary material, provide more insight into the patterns of variation over time in factor loadings, the differences across channels, and the nature of the dynamic latent thresholding in particular. the first animation http://www.stat.duke.edu/~mw/.downloads/nakajimawest2016eeg/animate-st.avi (linked at the external site here) shows a movie of the patterns of posterior probabilities interpolated from values at the channels; this shows how these patterns evolve over time, providing a dynamic display from which the snapshots in figure [fig:st] are selected at four specific times. the second animation http://www.stat.duke.edu/~mw/.downloads/nakajimawest2016eeg/animate-betat.avi (linked at the external site here) shows the corresponding movie for the interpolated estimates of factor loadings over all time; the snapshots in figure [fig:beta] are selected at four specific times. the animations clearly show and highlight regions of the brain surface where there is very low or negligible probability of lag or lead effects of the process, other regions where sustained effects are very evident, and regions in which there is more uncertainty about potential effects, together with inferences on the quantified lag/lead effects in terms of the temporal evolution of the spatial patterns in estimated factor loadings. figure [fig:sigma] plots the estimated standard deviations of the idiosyncratic shocks in each channel; each graph is roughly located at the corresponding electrode placement. recall that the innovation standard deviation for the channel cz is assumed time-invariant, representing measurement error only, as part of the model specification to define and identify the latent driving process. the model allows for potential variations over time in standard deviations at other channels, with opportunity to identify variability in the data not already captured through the time-varying loadings and latent process structure. (figure [fig:sigma] caption: estimated standard deviations of the idiosyncratic shocks across all channels; for clarity in presentation the y-scale has been omitted, and each standard deviation is graphed on a common scale for comparability across channels. the anchor channel cz has constant standard deviation representing pure measurement error around the latent process at that channel. each graph is roughly located at the corresponding electrode placement and the x-axes represent the full time period.) from figure [fig:sigma], there do appear to be variations across channels and they show some local spatial dependence. trajectories of the neighboring channels f4 and f8 are clearly similar, exhibiting a major hike in the middle of the seizure. it is evident that some parietal and occipital sites (t3, p3, o1, o2 and p4) share a common trajectory, which marks a peak in an early stage of the seizure and then gradually decreases towards the end of the seizure.
as seen in figures [fig:st] and [fig:beta], these sites also share some relationships in the latent threshold-induced shrinkage and loadings at lag(-1). further, the estimates show similarities among the channels pz, c4 and t6, whose patterns differ from those in the occipital region. this suggests an intrinsic difference between the central sites (pz, c4 and t6) and the occipital sites (t3, p3, o1, o2 and p4), also suggested by figures [fig:st] and [fig:beta]. across all but the vertex channel, the idiosyncratic error terms are a compound of measurement error and of additional patterns including local sub-activity of the seizure that is not explained by the latent factor process. there are also experimental and physiological noise sources that are likely to induce local/spatial residual dependencies in the data not forming part of the main driving process, including electrical recording/power line noise and scalp-electrode impedance characteristics; these presumably also contribute to the time-variation patterns in the identified volatilities and their spatial dependencies. model m+ adds a tv-var(1) term to the observation equation, in which the matrix of lag-1 time-varying coefficients is modeled using latent threshold ar(1) processes. model m+ extends model m to potentially capture data structure not fully explained by the factor and residual components. one interest is to more structurally explain the time variation in estimated residual volatilities exhibited in the analysis of the baseline model m. a contextual question is that of representing potential spill-over effects between eeg channels as the seizure waves cascade around the brain; that is, local (in terms of the neural network and perhaps in part, though not necessarily, physically spatially) transmission of signals between subsets of channels that represent delayed responses to the latent process not already captured by the dynamic latent factor model form. the coefficient matrix is expected to be sparse and is modeled via latent threshold dynamic models, as earlier described. (figure captions: estimated trajectories across all channels for the extended model m+; details as in figure [fig:sigma] for model m.) figure [fig:at] plots the posterior means of the states and the corresponding posterior probabilities of non-zero values, with a latent threshold for each state. the matrix is evidently sparse and exhibits considerable changes in the state and the posterior shrinkage probability among the selected time points. figure [fig:sigma-var] shows the estimated standard deviations of the idiosyncratic shocks. compared with figure [fig:sigma], the trajectories of standard deviations are generally somewhat smoother over time; some of the variation in the data not already captured by the factor component is now absorbed by the tv-var component. to explore some practical implications of the extended model m+ and compare with the baseline model m, one aspect of interest is predicted behavior of the time series based on _impulse response analysis_ relative to the underlying process. standing at a current, specified time, this simply asks about the nature of expected development of the series over the next time points based on an assumed level of the impulse to the driving innovations of the latent process.
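this computation amounts to a simple monte carlo average over posterior draws of shocked-minus-baseline forward projections. the sketch below records that logic with a placeholder forward simulator; it is not code from the paper, and `simulate_forward` is an assumed name standing in for a forward simulation of the fitted model given one posterior draw.

```python
# hedged sketch of the monte carlo impulse-response computation: for each posterior
# draw, project the fitted model forward from time t0 twice, once with a one-time
# shock to the innovation driving the latent process and once without, and average
# the difference.  `simulate_forward` is a placeholder, not a function from the paper.
import numpy as np

def impulse_response(posterior_draws, t0, horizon, shock, simulate_forward):
    """monte carlo average of shocked-minus-baseline forward projections."""
    diffs = []
    for draw in posterior_draws:                     # one set of states/parameters at t0
        base = simulate_forward(draw, t0, horizon, shock=0.0)
        hit = simulate_forward(draw, t0, horizon, shock=shock)
        diffs.append(hit - base)                     # arrays of shape (horizon, n_channels)
    return np.mean(diffs, axis=0)
```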
in applied work in economics, impulse responses are often primary vehicles for communicating model implications, comparing models, and feeding into decisions. the use of latent thresholding in macroeconomic models has focused on this, in part, and clearly demonstrated the utility of dynamic latent thresholding in inducing more accurate predictions and, in particular, more statistically and substantively reliable impulse response analyses. we do this here from three time points chosen in the early, middle and later sections of the eeg series; this exhibits differences in the model projections/implications over time due to the dynamics, as well as differences between the two models in each of these periods. computations are easily done by using the posterior mcmc samples to project forward in time; predictive expectations are then computed as monte carlo averages. the impulse value is taken as the average over time of the estimated historical innovations standard deviations. figure [fig:imp] plots the impulse responses of the 19 eeg channels from each of the two models. note that, for our comparison purposes here, we are interested in the _forms_ of the impulse responses over the horizon specified, not their specific values. we already know that the innovations variance shows marked changes over time and, in particular, decays to rather low values in the later stages of the seizure. hence the shock size taken here is larger than the relevant and realized innovations over the latter part of the time period, and the amplitudes of impulse responses should therefore not be regarded as pertinent. the form of the projections is the focus. (figure [fig:imp] caption: impulse responses of the eeg channels to a shock to the underlying factor process obtained from (i) model m and (ii) model m+. the impulse response functions computed at three different time points throughout the seizure are shown (columns). for each model, the impulse response projections are made from the time point indicated by the column header up to 80 time periods ahead (lower rows); the same responses are shown on a truncated period of only 30 time periods ahead (upper rows), for clarity.) patterns of the impulse response are clearly time-varying across the three exhibited time points; variation is evident with respect to wave frequency, persistence/decay speed, and variation across the channels.
in early periods of the seizure, the responses decay slowly with a high-frequency cyclical wave, while in later periods the decay is more rapid and the oscillations are at lower frequency. while there are, as we have discussed above, marked patterns of variation in lag/lead relationships across channels, there is the appearance of stronger synchronicity earlier in the seizure episode, and this deteriorates towards the end of the seizure. the responses from the tv-var extended model m+ exhibit more variation across the channels than those from model m. this is attributable to the induction of some spill-over effects of the shock. through the latent factor model component alone, the shock has an impact on each of the channels through its immediate influence on the latent process and the consequent transfer response of these effects via the loadings, and so forth. in the extended model, additional feed-forward effects are passed through the channels via the tv-var component. some additional insights into the nature of impulse responses can be gained from figure [fig:impulses] that shows images interpolating the 19 channel responses across the brain areas, based on analysis of the extended model m+. these are shown at six time points across the seizure period, and for selected horizons; these images clearly show the time variation of the responses spreading over the channels. an animated figure, available as online supplementary material, provides a dynamic display over impulse response horizons 1:80, with a movie that more vividly exhibits the differences due to time period. the animation http://www.stat.duke.edu/~mw/.downloads/nakajimawest2016eeg/animate-impulse.avi (linked at the external site here) represents the six time points in figure [fig:impulses], and shows images of the impulse responses as the projections are made out to the full horizon. the eeg time series analysis highlights the utility of latent thresholding dynamic models in constraining model parametrization adaptively in time, with resulting improvements in interpretation and inferences on inter-relationships among series and transfer response characteristics. an additional comment on model comparison in the case study is worth mentioning. statistical evaluation and comparison of model m+ with model m is implicit since the latter is a special case of the former. the analysis results of m+ explicitly show the relevance of the extensions and hence support the more general model. this is separately supported by values of the deviance information criterion (dic) computed from the mcmc results for each model separately; this yields an estimated dic of 996,191.7 for model m and 988,435.9 for model m+, which indicates strong evidence that model m+ dominates model m. a number of methodological and computational areas remain for further study. among them, we note potential for integrating spatially-dependent structures with latent threshold factor models, motivated in part by the spatial-temporal findings in the eeg study. also, incorporating two or more common latent processes might allow evaluation of more complex latent factor structures for these and other applications. computational challenges are clear in connection with applying these models to higher dimensional time series such as are becoming increasingly common in neuroscience as they are in other areas.
that said , we expect the dynamic latent thresholding approach to become increasing relevant and important in constraining and reducing effective parameter dimension via dynamic sparsity in model parameters in contexts with higher - dimensional time series .based on the observations , the full set of latent state parameters and model parameters for the posterior analysis of dtrfm model m is as follows : * the latent factor process states including uncertain initial values ; * the latent tvar coefficient process ; * the variance processes and ; * the latent factor loading process , including the uncertain initial state ; * the hyper - parameters and ; * the latent threshold hyper - parameters . the model of can be written in a conditionally linear , gaussian dynamic model form with a modified state and a state transition where generation of the full sets of states is obtained by the standard forward filtering , backward sampling ( ffbs ) algorithm ( e.g. * ? ? ?* ) , which is efficient in the sense that the full trajectories of the states over time are regenerated at each iterate of the overall mcmc .conditional on and the variances , reduce to a univariate , linear and gaussian dynamic regression model with respect to the state process .we sample the states using the ffbs algorithm . based on the standard inverse gamma / beta bayesian discount model for the variance sequence over timeas noted in section [ subsec : dtrfm ] , the corresponding ffbs for volatilities provides a full sample from the conditional posterior for given all other quantities .similarly , the full conditional posterior for the factorizes into components involving the individual separately over and the discount variance ffbs applies to each in parallel to generate full conditional posterior samples .following , we sample each from its conditional posterior distribution given and all other parameters . recall that the elements of follow standard ar(1 ) processes , but are linked to the observation equation by the latent threshold structure .the resulting conditional posterior for is a non - standard distribution that we can not directly sample .we use a metropolis - within - gibbs sampling strategy with the proposal distribution derived in the _ non - threshold _case by assuming ; i.e. , we generate the candidate from a standard linear dynamic model for without the latent thresholds ( see section 2.3 of * ? ? ?priors for the latent ar hyper - parameters assume prior independence across series with traditional forms : normal or log - gamma priors for , truncated normal or shifted beta priors for and inverse gamma priors for . on this basis ,the full conditional posterior for breaks down into conditionally independent components across we then resample the in parallel across using direct sampling from the conditional posterior in cases that the priors are conditionally conjugate , or alternatively via metropolis hastings steps .as discussed in section [ subsec : dtrfm ] , the structured prior for the thresholds takes them as conditionally independent over with marginal priors that depend on the parameters of the corresponding latent ar processes , viz . 
where the set of thresholds are then also independent in the complete conditional posterior ; they are resampled in parallel via metropolis hastings independence chain steps using the conditional uniform priors as proposals .this is precisely as pioneered in in other latent threshold models , and its efficacy has been borne out in a number of examples there .finally , note that the above requires a slight modification and extension to generalize the mcmc for the extended dtrfm model m+ of section [ subsec : tvvardtrfm ] .the extension now involves the tv - var parameter matrices in with together with the required latent initial missing vector the above development applies conditional on these elements with the obvious modifications to subtract from throughout .then additional mcmc steps are needed .first , is generated from a complete conditional normal posterior under a suitably diffuse normal prior .second , the latent thresholded elements of the sequence and the set of hyper - parameters of the underlying ar(1 ) processes as well as the corresponding thresholds , are treated just as are the elements of discussed above .this component is a special case of the mcmc analysis for more general tv - var models as developed in .some insights into the convergence of the mcmc sampling are gained by viewing trace plots for selected parameters . as an example, some such plots from the analysis of the extended model m are shown in figure [ fig : trace ] . | we discuss bayesian analysis of multivariate time series with dynamic factor models that exploit time - adaptive sparsity in model parametrizations via the latent threshold approach . one central focus is on the transfer responses of multiple interrelated series to underlying , dynamic latent factor processes . structured priors on model hyper - parameters are key to the efficacy of dynamic latent thresholding , and mcmc - based computation enables model fitting and analysis . a detailed case study of electroencephalographic ( eeg ) data from experimental psychiatry highlights the use of latent threshold extensions of time - varying vector autoregressive and factor models . this study explores a class of dynamic transfer response factor models , extending prior bayesian modeling of multiple eeg series and highlighting the practical utility of the latent thresholding concept in multivariate , non - stationary time series analysis . _ msc 2010 subject classifications : _ 62f15 , 62m10 , 62p10 _ key words & phrases : _ dynamic factor models ; dynamic sparsity ; eeg time series ; factor - augmented vector autoregression ; impulse response ; multivariate time series ; sparse time - varying loadings ; time - series decomposition ; transfer response factor models . |
control problems for wheeled robots are usually described by nonholonomic systems. a nonholonomic system arises when the dimension of the configuration space is greater than the dimension of the control. here and below, the configuration space represents possible positions of a wheeled robot, i.e., for a car-like robot it can be expressed in the following way: $m = \{ q = (x, y, \theta) \mid (x, y) \in \mathbb{r}^2, \ \theta \in s^1 \}$, where $(x, y)$ corresponds to the reference point of the robot on a plane and $\theta$ corresponds to the orientation of the robot. for nonholonomic systems, motion in some directions is infinitesimally prohibited, but it is locally and globally possible (through complex maneuvers of the system). there is a well-known problem about parking of a car which can not move in a direction perpendicular to the direction of motion of the wheels. in fact, every driver has faced this problem. let us consider the simplest case where a car should be parked on an empty parking lot (a plane) with no other cars or static obstacles. it is the classical hypothesis of "rolling without slipping" that provides the kinematic model of the car. the control problem for this model is stated as follows: given a system of differential equations, fixed initial and final positions of the mobile robot and restrictions on one or two dimensional control, one should find a control law and the corresponding trajectory satisfying these conditions. note that in order to control a real robot, the kinematic model should be derived from a dynamic one. from a driver's point of view, cars have two controls: the accelerator and the steering wheel. the position of a rear-wheel drive car can be defined by a vector $(x, y, \theta)$, where $(x, y)$ is the midpoint of the rear wheels and $\theta$ is the angle of the car orientation which coincides with the direction of the rear wheels. the direction of the front wheels is not fixed and corresponds to the steering control. it is possible that each pair of wheels can be reduced to one wheel only. moreover, if we are not concerned with the direction of the front wheels, the control system for a car-like robot is equivalent to the system for a wheel or a skate ([sys]): $\dot{x} = u_1 \cos\theta, \ \dot{y} = u_1 \sin\theta, \ \dot{\theta} = u_2$, where $u_1$ and $u_2$ are respectively linear and angular velocities as controls. note that the mechanical constraint for the steering angle of the front wheels of a car should be reduced to a constraint on the controls. such a system looks like the kinematic model of a unicycle or a two-driving-wheel mobile robot. however, the dynamical models of these systems are not identical. the main difference between kinematic models lies in the admissible control domains, which should be obtained from dynamical models. further in the text, a car is identified with a mobile robot. the problem is to find controls and the corresponding trajectory satisfying system ([sys]) with boundary conditions ([positions]): $q(0) = q_0$, $q(T) = q_1$, where $q = (x, y, \theta)$. since such a formulation of the problem has a continuum of solutions in the general case, a cost functional should be added and minimized in order to choose the optimal solution among the family. the fact that we can choose the coordinate system allows us to always associate the initial position with the origin ([origin]): $q_0 = (0, 0, 0)$. further, the paper considers a number of known approaches to solve the problem described above, which admit different restrictions on the control.
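for concreteness, a minimal sketch of integrating system ([sys]) for given control functions is shown below. the controls used are arbitrary illustrative choices, not optimal ones, and the integrator is a plain euler step rather than anything from the cited works.

```python
# minimal sketch of the kinematic model ([sys]) under arbitrary illustrative
# controls: dx/dt = u1*cos(theta), dy/dt = u1*sin(theta), dtheta/dt = u2.
import numpy as np

def integrate_unicycle(u1, u2, q0=(0.0, 0.0, 0.0), T=5.0, dt=1e-3):
    x, y, th = q0
    traj = [(x, y, th)]
    for k in range(int(T / dt)):
        t = k * dt
        x += dt * u1(t) * np.cos(th)
        y += dt * u1(t) * np.sin(th)
        th += dt * u2(t)
        traj.append((x, y, th))
    return np.array(traj)

traj = integrate_unicycle(u1=lambda t: 1.0, u2=lambda t: 0.5 * np.sin(t))
# the nonholonomic constraint: the velocity has no component perpendicular to the
# heading, i.e. -sin(theta)*dx + cos(theta)*dy = 0 along any admissible trajectory.
```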
markov is one of the first mathematicians to work on such a problem. in 1887, he studied four problems with application to railroad practice. the first problem can be formulated geometrically in the following way. given two points in a plane, a positive number $r$ and a direction at the first point, the problem is to find the shortest smooth path joining the two points s.t. the given direction is the tangent direction of the path at the first point and the curvature of the path is bounded everywhere by $1/r$. the solution consists of an arc of a circle with radius $r$ and a straight segment, or of two arcs, depending on the position of the second point (see fig. [fig:skmark]). such a solution can be applied to parking problem ([sys]), ([positions]), provided that the car is moving with a constant speed equal to one and is unable to move on a circle with radius smaller than $r$, and the direction of the car is of no importance at the desired end state, i.e., $\theta(T)$ is not fixed. since the car is moving with the unit velocity, the length minimization is equivalent to the time minimization ([timemin]): $T \to \min$. other problems from that work involve the condition about the direction at the second point, but also have some additional constraints on the curvature and its derivative. in 1957, l. dubins solved problem ([sys]), ([timemin]) with fixed $\theta(T)$ using geometric methods. the solution is reduced to selection among 6 variants of smooth connection of arcs and straight segments (see fig. [fig:dub]). this model corresponds to the time-optimal control problem for a car moving with a constant velocity and with bounds on angular velocity. such a car is called the dubins car. in 1990, j.a. reeds and l.a. shepp studied the so-called model of the reeds-shepp car which can move forwards and backwards. this case gives constraints on the controls that allow both signs of the linear velocity, with the same bound on the turning radius. however, the number of variants of shortest paths increases significantly, the worst case providing 68 variants. optimal trajectories could contain cusp points, i.e., points where the movement vector of the car changes to the opposite (see example in fig. [fig:rsh]). as in dubins' work, this characterization is done without obstacles. soon, simpler solutions for both models were obtained via pontryagin's maximum principle. in addition, h.j. sussmann and g. tang reduced the sufficient family to 46 canonical paths. the reeds-shepp car is small time controllable from everywhere, i.e., the reachable set of admissible configurations for any small time contains a small neighborhood of an initial point (a geometric proof is available in the literature). the dubins car is locally controllable, i.e., it is possible to find a path for initial and final configurations that are arbitrarily close, but not small time controllable from everywhere, so the length of such a path does not converge to zero. optimal control for both cars is a piecewise constant function. both models serve as classical examples of time-optimal problems.
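as an illustration of the dubins construction, the sketch below evaluates one of the six candidate words (lsl) using the standard formulas; a full planner would evaluate all six words lsl, rsr, lsr, rsl, rlr, lrl and keep the shortest. this is a generic sketch under the stated conventions, not code from the cited works.

```python
# sketch of one of the six dubins words (lsl) for a car with minimum turning
# radius r; the shortest path evaluates all six words and keeps the minimum.
# standard formulas, not taken from the papers cited above.
import numpy as np

def mod2pi(a):
    return a % (2.0 * np.pi)

def dubins_lsl_length(q0, q1, r):
    x0, y0, th0 = q0
    x1, y1, th1 = q1
    dx, dy = (x1 - x0) / r, (y1 - y0) / r            # work in units of the turning radius
    d = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    alpha, beta = mod2pi(th0 - theta), mod2pi(th1 - theta)

    p_sq = 2 + d**2 - 2 * np.cos(alpha - beta) + 2 * d * (np.sin(alpha) - np.sin(beta))
    if p_sq < 0:
        return np.inf                                # the lsl word is not feasible here
    tmp = np.arctan2(np.cos(beta) - np.cos(alpha), d + np.sin(alpha) - np.sin(beta))
    t = mod2pi(-alpha + tmp)                         # first left arc, in radians
    p = np.sqrt(p_sq)                                # straight segment, in units of r
    q = mod2pi(beta - tmp)                           # second left arc, in radians
    return r * (t + p + q)

print(dubins_lsl_length((0.0, 0.0, 0.0), (4.0, 3.0, np.pi / 2), r=1.0))
```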
saalschutz obtained explicit parametrization of the curves .these curves have rich history including max born s work on stability of euler s elasticae ( see fig .[ fig : euborn ] ) , for more details see .they appear as solutions to several problems of sub - riemannian geometry : nilpotent sub - riemannian problem on the engel group , nilpotent sub - riemannian problem with the growth vector , the problem of rolling a sphere over a plane .in other words , we have system ( [ sys ] ) with , boundary conditions ( [ positions ] ) and a cost functional of energy to be minimized one of the solutions to the parking problem of a car can be expressed via the shape of an elastic rod , provided the car is moving forwards with a constant linear speed and with no boundary on steering .time minimization in this case has no solutions ( curves tend to straight segments but can not reach them for general boundary conditions ) .note that time is fixed in ( [ inte ] ) and is equal to the length of the corresponding rod .jacobi elliptic functions are used in the expression of optimal control for a car moving along an elastica .the global structure of all solutions of the problem is described in a series of works , the software and algorithm for numerical computation of optimal elasticae are given in .a tool for plotting and evaluating generic euler s elastica can be found in .all models considered above have one - dimensional steering control .the reeds shepp car has an additional discrete control for direction of movement .a natural extension to it is the reeds - sheep car with varying speed and two dimensional control .one of such models arises in sub - riemannian geometry .consider system ( [ sys ] ) with no bounds on control .the problem is to find a curve connecting two points ( [ positions ] ) with a minimum sub - riemannian length optimal solutions for this problem were obtained by yu .sachkov in via geometric control theory .a more general length functional reflecting the different weights of controls can be minimized : the extension is easily implemented by changing the configuration variables .the parameter corresponds to the choice of scale in plane .this solution can be applied to a mobile robot with two parallel driving wheels , the acceleration of each being controlled by an independent motor .such a robot is called a differential drive robot .the distance between the wheels may define the value of parameter .consider a general model of a mobile robot with a trailer ( see fig .[ fig : trailer ] ) .if a robot is able to move forward only then such a system can usually be reduced to system ( [ sys ] ) , since the position of the trailer is of no importance .such systems can use dubins paths ( see .[ secdub ] ) or planar elastica ( see .[ secela ] ) for the path planning problem .that being said , the complexity of the parking problem increases significantly for a car with a trailer moving backwards .a detailed survey on the various control strategies for the backward motion of a mobile robot with trailers can be found in .formalization of control laws for such a problem requires the study of a nonlinear four - dimensional differential system with two - dimensional control which defines the linear and angular velocity . 
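as a rough numerical illustration, the sketch below integrates one commonly used kinematic model of a robot pulling an off-axle trailer with a simple forward euler step. the sign convention for the trailer angle and the hitch-distance parameters r1, r2 are assumptions made for this sketch; conventions vary in the literature and may differ from the form written out below.

```python
import math

def trailer_step(state, v, omega, r1, r2, dt):
    """One Euler step of a standard kinematic model of a robot pulling an
    off-axle trailer.  state = (x, y, theta, beta): (x, y, theta) is the
    robot pose and beta is the trailer angle measured relative to the robot.
    r1 and r2 are the distances from the robot and the trailer to the hitch
    point (illustrative convention; signs differ between papers)."""
    x, y, theta, beta = state
    x_new = x + dt * v * math.cos(theta)
    y_new = y + dt * v * math.sin(theta)
    theta_new = theta + dt * omega
    beta_new = beta - dt * (omega * (1.0 + (r1 / r2) * math.cos(beta))
                            + (v / r2) * math.sin(beta))
    return (x_new, y_new, theta_new, beta_new)

# drive forward with a gentle constant turn; the trailer angle settles
state = (0.0, 0.0, 0.0, 0.3)
for _ in range(2000):
    state = trailer_step(state, v=1.0, omega=0.2, r1=0.5, r2=1.0, dt=0.01)
print(state)
```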
such a differential system has the following form : where are coordinates of the position of the car defined in sec .[ car ] , is the angle of the trailer with respect to the car ; constant parameters define distances from the robot and the trailer to the connection point as shown in fig . [fig : trailer ] .the problem is to move a robot with a trailer from one configuration to another , i.e. , to find a path , s.t . where such a problem has symmetries which translate solutions of the problem to other solutions .the group of motions of the plane gives such symmetries and allows to move the initial position of a car to the origin ( [ origin ] ) .another known symmetry is the similarity transformation , which changes configuration coordinates , constants and controls in the following way : this symmetry is used in the algorithm for trailer reparking , see subsec . [ reparking ] .note that usually there is mechanical constraint .such a constraint can be treated as an obstacle in .the presence of obstacles can be described by inequalities and equalities for . in other wordswe have a function which tells us if a point belongs to an obstacle .the resulting constraint equations for configuration space have different forms for different shapes of a robot .the simplest shape of a car - like robot is a circle because such constraint equations do not involve the angle parameter .the algorithm for computing the configuration space obstacles , when the objects are polygons , is given in . in 1989 ,j. barraquand and j .- c .latombe , using standard results of differential geometry , nonlinear control theory and dynamic programming search , developed path planning algorithms for two robotic systems : car - like robots and trailer - like robots .a car - like robot is kinematically similar to a rectangular - shaped car .left fig .[ fig : parking ] shows an example of maneuvering in an unstructured workspace represented as a bitmap with a maximal steering angle of the front wheels equal to 45 degrees ( the running time was about 2 minutes ) .a trailer - car is kinematically similar to a vehicle towing a trailer .right fig .[ fig : parking ] shows an example with a simulated two - body trailer - like robot where the trailer has to maneuver in a cluttered workspace with a maximal steering angle equal to 45 degrees ( the running time was about 5 minutes ) .their planners confirm controllability of such systems among obstacles in practice , but the solutions are not optimal . in 1994 ,laumond , p. jacobs , m. tax and r. murray developed a fast and accurate planner for a car - like model in presence of obstacles , based on a recursive subdivision of a collision free path generated by a lower - level geometric planner that ignores the motion constraints .the resultant trajectory is optimized to give a path that has near - minimal length in its homotopy class .the model of a front - wheel - drive car with a constraint on the turning radius can be reduced to system ( [ sys ] ) with the constraints ) satisfying inequalities ( [ constraint3 ] ) are the same as for reeds sheep s problem ( see the proof in ) .the algorithm of the planner consists of three steps : * to plan a collision - free path for the geometric system , i.e. , without taking into account differential system ( [ sys ] ) . if such a path does not exist , neither does a feasible path .the experimental results present two implementations based on two different geometric planners .the first one runs for a polygonal robot ( see left fig . 
[fig : laum94 ] ( a ) ) , the second one for a disk based on the voronoi diagram ( see right fig . [ fig : laum94 ] ( a ) ) . * to perform subdivision of the path until all endpoints are linked by a minimal - length path satisfying system ( [ sys ] ) and constraints ( [ constraint2 ] ) . the shortest path for the reeds - shepp car is used in order to link intermediate configurations along the path generated in the previous step ( see fig . [ fig : laum94 ] ( b ) ) . * to perform an `` optimization '' routine to remove extra maneuvers and reduce the length of the path ( see fig . [ fig : laum94 ] ( c ) ) . the resulting paths are shown in fig . [ fig : laum94 ] ( d ) . later , in 1998 , j .- p . laumond presented notes which describe a more general approach to the motion planning problem for a car and a car with one or two trailers . he mentioned several steering methods for step ( b ) , including methods for nilpotent systems with piecewise and sinusoidal control . at that time , steering with optimal control was possible only for car - like systems , so the only possibility for general systems was to resort to numerical methods . in 2013 , h. chitsaz considered the time - optimal control problem for system ( [ sysgen])([vf ] ) with and the constraints can be used to solve the parking problem . otherwise , a way to improve the obtained solution should be found . fig . [ fig : parking1 ] shows an example with . here and below , gray images of a robot and a trailer correspond to desired positions . one of the ways to improve the approximation is to consider the parking problem with the constraints , where is an arbitrary point of the curve . this improvement can be repeated several times until the resulting curve comes to a point close enough to . fig . [ fig : parking2 ] shows an improved approximation of the example shown in fig . [ fig : parking1 ] with . black points correspond to points where the nilpotent approximation has been recalculated . such an improvement is not always achieved with the desired precision ; fig . [ fig : parking3 ] illustrates an example with for the improved approximation , where the initial approximation has precision . in order to obtain a more accurate solution , a dedicated algorithm should be developed on the basis of the proposed approach . more examples are shown in fig . [ fig : parking4 ] with ( initial approximation has precision ) and in fig . [ fig : parking5 ] with ( initial approximation has precision ) . note that , as the last example shows , there are no restrictions on the parameter . the article explores motion planning problems for a mobile robot with a trailer . this is quite a difficult task even without obstacles . the simplest case is given by the kinematic model of a car - like mobile robot , i.e.
when the position of the trailer is not taken into account .the article also includes a brief overview of the existing methods of solving a motion planning problem for a mobile robot and a mobile robot with a trailer .one of them is based on the concept of nilpotent approximation .different solutions , obtained from different classes of controls , are used to control a nilpotent system .optimal control , in sense of minimization of controls , corresponds to nilpotent sub - riemannian problems .recently , the nilpotent sub - riemannian problem on the engel group was solved .this problem is given by a 4-dimensional control system with a 2-dimensional control and provides an approximate optimal solution to controlling a differential system for a mobile robot with a trailer .the algorithm of nilpotent approximation is applied for solving the problem of reparking a trailer .this algorithm is improved using a symmetry of the corresponding differential system for a mobile robot with a trailer . at the end of the articlea new algorithm for parking a mobile robot with a trailer is presented .it will be applied for controlling a real mobile robot with a trailer .dubins , l.e . , on curves of minimal length with a constraint on average curvature , and with prescribed initial and terminal positions and tangents , _ american journal of mathematics _ , 1957 .79 , no . 3 , pp .497516 .pontryagin , l.s . ,boltyanskii , v.r . ,gamkrelidze , r.v . , and mishchenko , e.f . ,_ the mathematical theory of optimal processes _ , ed .: neustadt , l.w ., translator : trirogoff , k.n ., london : interscience , 1962 .laumond , j .-p . , feasible trajectories for mobile robots with kinematic and environment constraints , in _ proc . on intelligent autonomous systems _ ,: hertzberger , l.o . and groen , f.c.a ., new york : north holland , 1987 , pp .346354 .brockett , r.w . , and dai , l. , non - holonomic kinematics and the role of elliptic functions in constructive controllability , _ nonholonomic motion planning _, eds . li , z. and canny , j. , boston : kluwer acad . publ . , 1993 , 121 .agrachev , a.a . andsarychev , a.v ., filtrations of a lie algebra of vector fields and the nilpotent approximation of controllable systems , _ dokl . akad .nauk sssr _ , 1987 ,295 , no . 4 , pp .777781 ( in russian ) ; translation in _soviet math ._ , 1988 , vol .36 , no . 1 ,104108 .mashtakov , a.p . ,algorithms and software solving a motion planning problem for nonholonomic five - dimensional control systems , _ programmnye sistemy : teoriya i prilozheniya _ , 2012 , vol . 3 , no . 1 , p. 329 ( in russian ) . | the work studies a number of approaches to solving motion planning problem for a mobile robot with a trailer . different control models of car - like robots are considered from the differential - geometric point of view . the same models can be also used for controlling a mobile robot with a trailer . however , in cases where the position of the trailer is of importance , i.e. , when it is moving backward , a more complex approach should be applied . at the end of the article , such an approach , based on recent works in sub - riemannian geometry , is described . it is applied to the problem of reparking a trailer and implemented in the algorithm for parking a mobile robot with a trailer . * keywords : * mobile robot , trailer , motion planning , sub - riemannian geometry , nilpotent approximation . * msc 2010 : * 22e25 , 53a17 , 58e25 , 70q05 . |
-5 mm how can we understand intelligent behavior ?how can we design intelligent computers ?these are questions that have been discussed by scientists and the public at large for over 50 years . as mathematicians ,however , the question we want to ask is `` is there a _ mathematical _ theory underlying intelligence ? ''i believe the first mathematical attack on these issues was control theory , led by wiener and pontryagin .they were studying how to design a controller which drives a motor affecting the world and also sits in a feedback loop receiving measurements from the world about the effect of the motor action .the goal was to control the motor so that the world , as measured , did something specific , i.e. move the tiller so that the boat stays on course .the main complication is that nothing is precisely predictable : the motor control is not exact , the world does unexpected things because of its complexities and the measurements you take of it are imprecise .all this led , in the simplest case , to a beautiful analysis known as the wiener - kalman - bucy filter ( to be described below ) .but control theory is basically a theory of the output side of intelligence with the measurements modeled in the simplest possible way : e.g. linear functions of the state of the world system being controlled plus additive noise .the real input side of intelligence is perception in a much broader sense , the analysis of all the noisy incomplete signals which you can pick up from the world through natural or artificial senses .such signals typically display a mix of distinctive patterns which tend to repeat with many kinds of variations and which are confused by noisy distortions and extraneous clutter .the interesting and important structure of the world is thus coded in these signals , using a code which is complex but not perversely so .-5 mm the first serious attack on problems of perception was the attempt to recognize speech which was launched by the us defense agency arpa in 1970 .at this point , there were two competing ideas of what was the right formalism for combining the various clues and features which the raw speech yielded .the first was to use logic or , more precisely , a set of ` production rules ' to augment a growing database of true propositions about the situation at hand .this was often organized in a ` blackboard ' , a two - dimensional buffer with the time of the asserted proposition plotted along the -axis and the level of abstraction ( i.e. signal phone phoneme syllable word sentence ) along the -axis .the second was to use statistics , that is , to compute probabilities and conditional probabilities of various possible events ( like the identity of the phoneme being pronounced at some instant ) .these statistics were computed by what was called the ` forward - backward ' algorithm , making 2 passes in time , before the final verdict about the most probable translation of the speech into words was found .this issue of logic vs. 
statistics in the modeling of thought has a long history going back to aristotle about which i have written in [ m ] .i think it is fair to say that statistics won .people in speech were convinced in the 1970 s , artificial intelligence researchers converted during the 1980 s as expert systems needed statistics so clearly ( see pearl s influential book [ p ] ) , but vision researchers were not converted until the 1990 s when computers became powerful enough to handle the much larger datasets and algorithms needed for dealing with 2d images .the biggest reason why it is hard to accept that statistics underlies all our mental processes perception , thinking and acting is that we are not consciously aware of 99% of the ambiguities with which we deal every second .what philosophers call the ` raw qualia ' , the actual sensations received , do not make it to consciousness ; what we are conscious of is a precise unambiguous enhancement of the sensory signal in which our expectations and our memories have been drawn upon to label and complete each element of the percept .a very good example of this comes from the psychophysical experiments of warren & warren [ w ] in 1970 : they modified recorded speech by replacing a single phoneme in a sentence by a noise and played this to subjects . remarkably ,the subjects did _ not _ perceive that a phoneme was missing but believed they had heard the one phoneme which made the sentence semantically consistent : l|l actual sound & perceived words + the ? eel is on the shoe & the _ h_eel is on the shoe + the ? eel is on the car & the _ wh_eel is on the car + the ? eel is on the table & the _ m_eal is on the table + the ? eel is on the orange & the _ p_eel is on the orange two things should be noted .firstly , this showed clearly that the actual auditory signal did not reach consciousness .secondly , the choice of percept was a matter of probability , not certainty .that is , one might find some odd shoe with a wheel on it , a car with a meal on it , a table with a peel on it , etc .but the words which popped into consciousness were the most likely .an example from vision of a simple image , whose contents require major statistical reasoning to reconstruct , is shown in figure [ fig : oldman ] .it is important to clarify the role of probability in this approach .the uncertainty in a given situation need not be caused by observations of the world being truly unpredictable as in quantum mechanics or even effectively so as in chaotic phenomena .it is rather a matter of efficiency : in order to understand a sentence being spoken , we do not need to know all the things which affect the sound such as the exact acoustics of the room in which we are listening , nor are we even able to know other factors like the state of mind of the person we are listening to . in other words , we always have incomplete data about a situation .a vast number of physical and mental processes are going on around us , some germane to the meaning of the signal , some extraneous and just cluttering up the environment . 
in this ` blooming , buzzing ' world , as william james called it , we need to extract information and the best way to do it , apparently , is to make a stochastic model in which all the irrelevent events are given a simplified probability distribution .this is not unlike the stochastic approach to navier - stokes , where one seeks to replace turbulence or random molecular effects on small scales by stochastic perturbations .-5 mm having accepted that we need to use probabilities to combine bits and pieces of evidence , what is the mathematical set up for this ?we need the following ingredients : a ) a set of random variables , some of which describe the observed signal and some the ` hidden ' variables describing the events and objects in the world which are causing this signal , b ) a class of stochastic models which allow one to express the variability of the world and the noise present in the signals and c ) specific parameters for the one stochastic model in this class which best describes the class of signals we are trying to decode now .more formally , we shall assume we have a set of observed and hidden random variables , which may have real values or discrete values in some finite or countable sets , we have a set of parameters and we have a class of probability models on the s for each set of values of the s .the crafting or learning of this model may be called the first problem in the mathematical theory of perception .it is usual to factor these probability distributions : where the first factor , describing the likelihood of the observations from the hidden variables , is called the _ imaging model _ and the second , giving probabilities on the hidden variables , is called the _prior_. in the full bayesian setting , one has an even stronger prior , a full probability model , including the parameters .the second problem of perception is that we need to estimate the values of the parameters which give the best stochastic model of this aspect of the world .this often means that you have some set of measurements and seek the value of which maximizes their likelihood .if the hidden variables as well as the observations are known , this is called supervised learning ; if the hidden variables are not known , then it is unsupervised and one may maximize , for instance , .if one has a prior on the s too , one can also estimate them from the mean or mode of the full posterior .usually a more challenging problem is how many parameters to include . at one extreme, there are simple ` off - the - shelf ' models with very few parameters and , at the other extreme , there are fully non - parametric models with infinitely many parameters . herethe central issue is how much data one has : for any set of data , models with too few parameters distort the information the data contains and models with too many overfit the accidents of this data set .this is called the _ bias - variance dilemma_. there are two main approaches to this issue .one is cross - validation : hold back parts of the data , train the model to have maximal likelihood on the training set and test it by checking the likelihood of the held out data .there is also a beautiful theoretical analysis of the problem due principally to vapnik [ v ] and involving the _ vc dimension _ of the models the size of the largest set of data which can be split in all possible ways into more and less likely parts by different choices of .as grenander has emphasized , a very useful test for a class of models is to synthesize from it , i.e. 
choose random samples according to this probability measure and to see how well they resemble the signals we are accustomed to observing in the world .this is a stringent test as signals in the world usually express layers and layers of structure and the model tries to describe only a few of these . the third problem of perception is using this machinary to actually perceive : we assume we have measured specific values and want to infer the values of the hidden variables in this situation . given these observations , by bayes rule ,the hidden variables are distributed by the so - called _ posterior _ distribution : one may then want to estimate the mode of the posterior , the most likely value of . or one may want to estimate the mean of some functions of the hidden variables . or , if the posterior is often multi - modal and some evidence is expected to available later , one usually wants a more complete description or approximation to the full posterior distribution .-5 mm a convenient way to introduce the ideas of pattern theory is to outline the simple hidden markov model method in speech recognition to illustrate many of the ideas and problems which occur almost everywhere . herethe observed random variables are the values of the sound signal , a pressure wave in air .the hidden random variables are the states of the speaker s mouth and throat and the identity of the phonemes being spoken at each instant .usually this is simplified , replacing the signal by samples and taking for hidden variables a sequence whose values indicate which phone in which phoneme is being pronounced at time .the stochastic model used is : i.e. the form a markov chain and each depends only on .this is expressed by the graph : in which each variable corresponds to a vertex and the graphical markov property holds : if 2 vertices in the graph are separated by a subset of vertices , then the variables associated to and are conditionally independent if we fix the variables associated to . this simple model works moderately well to decode speech because of the linear nature of the graph , which allows the ideas of dynamic programming to be used to solve for the marginal distributions and the modes of the hidden variables , given any observations .this is expressed simply in the recursive formulas : note that if each can take values , the complexity of each time step is . in any model ,if you can calculate the conditional probabilities of the hidden variables and if the model is of exponential type , i.e. then there is also an efficient method of optimizing the parameters .this is called the _ em algorithm _ and , because it holds for hmm s , it is one of the key reasons for the early successes of the stochastic approach to speech recognition .for instance , a markov chain is an exponential model if we let the s be and write the chain probabilities as : the fundamental result on exponential models is that the s are determined by the expectations and that any set of expectations that can be achieved in some probability model ( with all probabilities non - zero ) , is also achieved in an exponential model .-5 mm in this model , the observations are naturally continuous random variables , like all primary measurements of the physical world . 
but the hidden variables are discrete : the set of phonemes , although somewhat variable from language to language , is always a small discrete set .this combination of discrete and continuous is characteristic of perception .it is certainly a psychophysical reality : for example experiments show that our perceptions lock onto one or another phoneme , resisting ambiguity ( see [ l ] , ch.8 , esp .but it shows itself more objectively in the low - level statistics of natural signals .take almost any class of continuous real - valued signals generated by the world and compile a histogram of their changes over some fixed time interval .this empirical distribution will very likely have kurtosis ( ) greater than 3 , the kurtosis of any gaussian distribution !this means that , compared to a gaussian distribution with the same mean and standard deviation , has higher probability of being quite small or quite large but a lower probability of being average .thus , compared to brownian motion , tends to move relatively little most of the time but to make quite large moves sometimes .this can be made precise by the theory of stochastic processes with iid increments , a natural first approximation to any stationary markov process .the theory of such processes says that ( a ) their increments always have kurtosis at least 3 , ( b ) if it equals 3 the process is brownian and ( c ) if it is greater , samples from the process almost surely have discontinuities . at the risk of over - simplfying, we can say _kurtosis 3 is nature s universal signal of the presence of discrete events / objects in continuous space - time_. a classic example of this are stock market prices .their changes ( or better , changes in log(price ) ) have a highly non - gaussian distribution with polynomial tails . in speech , the changes in the log(power ) of the windowed fourier transform show the same phenomenon , confirming that can not be decently modeled by colored gaussian noise .-5 mm applying hmm s in realistic settings , it usually happens that is too large for an exhaustive search of complexity or that the are real valued and , when adequately sampled , again is too large .there is one other situation in which the hmm - style approach works easily the _ kalman filter_. in kalman s setting , each variable and is real vector - valued instead of being discrete and and are _ gaussian distributions with fixed covariances and means depending linearly on the conditioning variable_. it is then easy to derive recursive update formulas , similar to those above , for the conditional distributions on each , given the past data . 
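to make these recursive update formulas concrete, here is a minimal scalar kalman filter; the linear-gaussian model and the noise variances below are illustrative choices for the sketch, not parameters of any particular application.

```python
import random

def kalman_1d(observations, a=1.0, c=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for the linear-Gaussian model
        x_t = a * x_{t-1} + process noise (variance q)
        y_t = c * x_t     + observation noise (variance r).
    Returns the posterior means and variances of x_t given y_1..y_t."""
    m, p = m0, p0
    means, variances = [], []
    for y in observations:
        # predict one step ahead
        m_pred = a * m
        p_pred = a * a * p + q
        # update with the new observation
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
        variances.append(p)
    return means, variances

# filter a simulated noisy random walk
random.seed(0)
x, ys = 0.0, []
for _ in range(50):
    x += random.gauss(0.0, 0.1 ** 0.5)
    ys.append(x + random.gauss(0.0, 0.5 ** 0.5))
means, variances = kalman_1d(ys)
print(means[-1], variances[-1])
```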
but usually , in the real - valued variable setting , the s are more complex than gaussian distributions .an example is the tracking problem in vision : the position and velocity of some specific moving object at time is to be inferred from a movie , in which the object s location is confused by clutter and noise .it is clear that the search for the optimal reconstruction must be pruned or approximated .a dramatic breakthrough in this and other complex situations has been to adapt the hmm / kalman ideas by using weak approximations to the marginals by a finite set of samples , an idea called _ particle filtering _ : this idea was proposed originally by gordon , salmond and smith [ g - s - s ] and is developed at length in the recent survey [ d - f - g ] .an example with explicit estimates of the posterior from the work of isard and blake [ i - b ] is shown in figure 2 .they follow the version known as bootstrap particle filtering in which , for each , samples are drawn with replacement from the weak approximation above , each sample is propagated randomly to a new sample at time using the prior and these are reweighted proportional to .-5 mm a more serious problem with the hmm approach is that the markov assumption is never really valid and it may be much too crude an approximation . consider speech recognition .the finite lexicon of words clearly constrains the expected phoneme sequences , i.e. if are the phonemes , then depends on the current word(s ) containing these phonemes , i.e. on a short but variable part of the preceding string of phonemes . to fix this, we could let be a pair consisting of a word and a specific phoneme in this word ; then would have two quite different values depending on whether was the last phoneme in the word or not . within a word, the chain needs only to take into account the variability with which the word can be pronounced . at word boundaries, it should use the conditional probabilities of word pairs .this builds much more of the patterns of the language into the model .why stop here ?state - of - the - art speech recognizers go further and let be a pair of consecutive words plus a triphone in the second word ( or bridging the first and second word ) whose middle phoneme is being pronounced at time .then the transition probabilities in the hmm involve the statistics of ` trigrams ' , consecutive word triples in the language .but grammar tells us that words sequences are also structured into phrases and clauses of variable length forming a parse tree .these clearly affect the statistics .semantics tells us that words sequences are further constrained by semantic plausibility ( ` sky ' is more probable as the word following ` blue ' than ` cry ' ) and pragmatics tells us that sentences are part of human communications which further constrain probable word sequences .all these effects make it clear that certain parts of the signal should be grouped together into units on a higher level and given labels which determine how likely they are to follow each other or combine in any way .this is the essence of grammar : higher order random variables are needed whose values are subsets of the low order random variables .the simplest class of stochastic models which incorporate variable length random substrings of the phoneme sequence are _ probabilistic context free grammars _ or pcfg s .mathematically , they are a particular type of random branching tree . 
* definition * _ a is a stochastic model in which the random variables are ( a ) a sequence of rooted trees , ( b ) a linearly ordered sequence of observations and a 1:1 correspondence between the observations and the leaves of the whole forest of trees such that the children of any vertex of any tree form an interval in time and ( c ) a set of labels for each vertex .the probability model is given by conditional probabilities for the labels of each child of each vertex is the name of the ` production rule ' with this vertex as its head , esp .it fixes the arity of the vertex .we are doing it this way to simplify the markov property . ] and for the observations , conditional on the label of the corresponding leaf ._ see figure 3 for an example .this has a markov property if we define the ` extended ' state at leaf to be not only the label at this leaf but the whole sequence of labels on the path from this leaf to the root of the tree in which this leaf lies .conditional on this state , the past and the future are independent .this is a mathematically elegant and satisfying theory : unfortunately , it also fails , or rather explodes because , in carrying it out , the set of labels gets bigger and bigger .for instance , it is not enough to have a label for noun phrase which expands into an adjective plus a noun .the adjective and noun must agree in number and ( in many languages ) gender , a constraint that must be carried from the adjective to the noun ( which need not be adjacent ) via the label of the parent .so we need 4 labels , all combinations of singular / plural masculine / feminine noun phrases .and semantic constraints , such as pr(`blue sky ' ) pr(`blue cry ' ) , would seem to require even more labels like ` colorable noun phrases ' .rather than letting the label set explode , it is better to consider a bigger class of grammars , which express these relations more succinctly but which are not so easily converted into hmm s : _ unification grammars _ [ sh ] or _ compositional grammars _ [ b - g - p ] .the need for grammars of this type is especially clear when we look at formalisms for expressing the grouping laws in vision : see figure 3 .the further development of stochastic compositional grammars , both in language and vision , is one of the main challenges today .-5 mm the theory of hmm s deals with one - dimensional signals .but images , the signals occurring in vision , are usually two - dimensional or three - dimensional for mr scans and movies ( 3 space dimensions and 2 space plus 1 time dimension ) , even four - dimensional for echo cardiograms . on the other hand ,the parse tree is a more abstract graphical structure and other ` signals ' , like medical data gathered about a particular patient , are structured in complex ways ( e.g. 
a set of blood tests , a medical history ) .this leads to the basic insight of grenander s pattern theory [ g ] : that the variables describing the structures in the world are typically related in a graphical fashion , edges connecting variables which have direct bearing on each other .finding the right graph or class of graphs is a crucial step in setting up a satisfactory model for any type of patterns .thus the applications , as well as the mathematical desire to find the most general setting for this theory , lead to the idea of replacing a simple chain of variables by a set of variables with a more general graphical structure .the general concept we need is that of a markov random field : * definition * _ a is a graph , a set of random variables , one for each vertex , and a joint probability distribution on these variables of the form : where ranges over the cliques ( fully connected subsets ) of the graph , are any functions and a constant . if the variables are real - valued for , we make this into a probability density , multiplying by .moreover , we can put each model in a family by introducing a temperature and defining : _ these are also called _gibbs models _ in statistical mechanics ( where the are called _ energies _ ) and _ graphical models _ in learning theory and , like markov chains , are characterized by their conditional independence properties .this characterization , called the hammersley - clifford theorem , is that if two vertices are separated by a subset ( all paths in from to must include some vertex in ) , then and are conditionally independent given .the equivalence of these independence properties , plus the requirement that all probabilities be positive , with the simple explicit formula for the joint probabilities makes it very convincing that mrf s are a natural class of stochastic models .-5 mm this class of models is very expressive and many types of patterns which occur in the signals of nature can be captured by this sort of stochastic model . a basic example is the ising model and its application to the image segmentation problem . in the simplest form ,we take the graph to be a square grid with two layers , with observable random variables associated to the top layer and hidden random variables associated to the bottom layer .we connect by edges each vertex to the vertex above it and to its 4 neighbors in the -grid ( except when the neighbor is off the grid ) and no others .the cliques are just the pairs of vertices connected by edges . finally , we take for energies : the modes of the posteriors are quite subtle : s at adjacent vertices try to be equal but they also seek to have the same sign as the correponding .if has rapid positive and negative swings , these are in conflict .hence the more probable values of will align with the larger areas where is consistently of one sign .this can be used to model a basic problem in vision : the _ segmentation problem_. the vision problem is to decompose the domain of an image into parts where distinct objects are seen .for example , the oldman image might be decomposed into 6 parts : his body , his head , his cap , the bench , the wall behind him and the sky .the decomposition is to be based on the idea that the image will tend to either slowly varying or to be statistically stationary at points on one object , but to change abruptly at the edges of objects . 
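as a toy illustration of the two-layer model just described, the sketch below resamples each hidden label from its conditional distribution given its four neighbours and the observed pixel (the elementary operation behind the sampling algorithms discussed further below). the particular energies and coupling constants are one standard choice, invented for this sketch rather than taken from any of the cited work.

```python
import math
import random

def gibbs_ising_segmentation(y, beta=1.5, alpha=2.0, sweeps=50, seed=0):
    """Toy Gibbs sampler for a two-layer Ising model: hidden labels
    s(i, j) in {-1, +1} on a grid, coupled to their four neighbours with
    strength beta and to the observed pixel y(i, j) with strength alpha.
    All parameter values are purely illustrative."""
    random.seed(seed)
    h, w = len(y), len(y[0])
    s = [[1 if y[i][j] > 0 else -1 for j in range(w)] for i in range(h)]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                if i > 0:     nb += s[i - 1][j]
                if i < h - 1: nb += s[i + 1][j]
                if j > 0:     nb += s[i][j - 1]
                if j < w - 1: nb += s[i][j + 1]
                # local field: neighbour agreement plus data term
                field = beta * nb + alpha * y[i][j]
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))
                s[i][j] = 1 if random.random() < p_plus else -1
    return s

# noisy test image: a bright square on a dark background
random.seed(1)
h, w = 20, 20
y = [[(1.0 if 5 <= i < 15 and 5 <= j < 15 else -1.0) + random.gauss(0, 0.8)
      for j in range(w)] for i in range(h)]
labels = gibbs_ising_segmentation(y)
print(sum(row.count(1) for row in labels), "pixels labelled +1")
```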
as proposed in [ g - g ] , the ising model can be used to treat the case where the image has 2 parts , one lighter and one darker , so that at the mode of the posterior the hidden variables will be on one part , on the other .an example is shown in figure 4 .this approach makes a beautiful link between statistical mechanics and perception , in which the process of finding global patterns in a signal is like forming large scale structures in a physical material as the temperature cools through a phase transition .+ more complex models of this sort have been used extensively in image analysis , for texture segmentation , for finding disparity in stereo vision , for finding optic flow in moving images and for finding other kinds of groupings .we want to give one example of the expressivity of these models which is quite instructive .we saw above that exponential models can be crafted to reproduce some set of observed expectations but we also saw that scalar statistics from natural signals typically have high kurtosis , i.e. significant outliers , so that their whole distribution and not just their mean needs to be captured in the model .putting these 2 facts together suggests that we seek exponential models which duplicate the whole distribution of some important statistics .this can be done using as parameters not just unknown constants but unknown functions : if depends only the variables , for some clique , this is a mrf , whose energies have unknown functions in them .an example of this fitting is shown in figure 5 .-5 mm however , a problem with mrf models is that the dynamic programming style algorithm used in speech and one - dimensional models to find the posterior mode has no analog in 2d .one strategy for dealing with this , which goes back to metropolis , is to imitate physics and introduce an artifical dynamics into the state space whose equilibrium is the gibbs distribution .this dynamics is called a _monte carlo markov chain _( mcmc ) and is how the panels in figure 4 were generated .letting the temperature converge to zero , we get _ simulated annealing _ ( see [ g - g ] ) and , if we do it slowly enough , will find the mode of the mrf model .although slow , this can be speeded up by biasing the dynamics ( called _ importance sampling _ see [ t - z ] for a state - of - the - art implementation with many improvements ) and is an important tool .recently , however , another idea due to weiss and collaborators ( see [ y - f - w ] ) and linked to statistical mechanics has been found to give new and remarkably effective algorithms for finding these modes . from an algorithmic standpoint ,the idea is to use the natural generalization of dynamic programming , called _ bayesian belief propagation _ ( bbp ) , which computes the marginals and modes correctly whenever the graph is a tree and just use it anyway on an arbitrary graph !mathematically , it amounts to working on the universal covering graph , which is a tree , hence much simpler , instead of . in statistical mechanics, this idea is called the _ bethe approximation _ , introduced by him in the 30 s .to explain the idea , start with the _ mean field approximation_. the mean field idea is to find the best approximation of the mrf by a probability distribution in which the variables are all independent .this is formulated as the distribution which minimizes the kullback - liebler divergence . 
unlike computing the true marginals of on each which is very hard ,this approximation can be found by solving iteratively a coupled set of non - linear equations for the .but the assumption of independence is much too restrictive .the idea of bethe is instead to approximate by a -invariant distribution on .such distributions are easy to describe : note that a markov random field on a _ tree _ is uniquely determined by its marginals for each edge and , conversely , if we are given a compatible set of distributions for each edge ( in the sense that , for all edges abutting a vertex , the marginals of give distributions on independent of ) , they define an mrf on .so if we start with a markov random field on any , we get a -invariant markov random field on by making duplicate copies for each random variable for each over and lifting the edge marginals .but more generally , if we have any compatible set of probability distributions on , we also get a -invariant mrf on .then the bethe approximation is that family which minimizes . as in the mean field case, there is a natural iterative method of solving for this minimum , which turns out , remarkably , to be identical to the generalization of bbp to general graphs .this approach has proved effective in some cases at finding best segmentations of images via the mode of a two - dimensional mrf .other interesting ideas have been proposed for solving the segmentation problem which we do not have time to sketch : region growing , see esp .[ z - y ] ) , using the eigenfunctions of the graph - theoretic laplacian , see [ s - m ] , and multi - scale algorithms , see [ p - b ] and [ s - b - b ] .-5 mm although signals as we measure them are always sampled discretely , in the world itself signals are functions on the continua , time or space or both together . in some situations , a much richer mathematical theory emerges by replacing a countable collection of random variables by random processes and asking whether we can find good stochastic models for these continuous signals .i want to conclude this talk by mentioning three instances where some interesting analysis has arisen when passing to the continuum limit and going into some detail on two . we will not worry about algorithmic issues for these models .-5 mm this is the area where the most work has been done , both because of its links with other areas of analysis and because it is one of the central problems of image processing . you observe a degraded image as a function of continuous variables and seek to restore it , removing simultaneously noise and blur . in the discretesetting , the ising model or variants thereof discussed above can be applied for this .there are two closely related ways to pass to the continuous limit and reformulate this as a problem in analysis .as both drop the stochastic interpretation and have excellent treatments in the literature , we only mention briefly one of a family of variants of each approach : _ optimal piecewise smooth approximation of i via a variational problem _ : where , the improved image , has discontinuities along the set of ` edge ' curves .this approach is due to the author and shah and has been extensively pursued by the schools of degiorgi and morel .see [ m - s ] .it is remarkable that it is still unknown whether the minima to this functional are well behaved , e.g. 
whether has a finite number of components .stochastic variants of this approach should exist ._ non - linear diffusion of i _ : where at some future time is the enhancement .this approach started with the work of perona and malik and has been extensively pursued by osher and his coworkers .see [ gu - m ] .it can be interpreted as gradient descent for a variant of the previous variational problem .-5 mm one of the earliest discoveries about the statistics of images was that their power spectra tend to obey power laws where varies somewhat from image to image but clusters around the value 2 .this has a very provocative interpretation : this power law is implied by self - similarity ! in the language of lattice field theory , if is a random lattice field and is the block averaged field then we say the field is a renormalization fixed point if the distributions of and of are the same .the hypothesis that natural images of the world , treated as a single large database , have renormalization invariant statistics has received remarkable confirmation from many quite distinct tests .why does this hold ?it certainly is nt true for auditory or tactile signals .i think there is one major and one minor reason for it .the major one is that the world is viewed from a random viewpoint , so one can move closer or farther from any scene .to first approximation , this scales the image ( though not exactly because nearer objects scale faster than distant ones ) .the minor one is that most objects are opaque but have , by and large , parts or patterns on them and , in turn , belong to clusters of larger things .this observation may be formulated as saying the world is not merely made up of objects but it is cluttered with them . the natural setting forscale invariance is pass to the limit and model images as random functions of two real variables .then the hypothesis is that a suitable function space supports a probability measure which is invariant under both translations and scalings , whose samples are ` natural images ' .this hypothesis encounters , however , an infra - red and an ultra - violet catastrophe : + a ) the infra - red one is caused by larger and larger scale effects giving bigger and bigger positive and negative swings to a local value of . but these large scale effects are very low - frequency and this is solved by considering to be defined only modulo an unknown constant , i.e. it is a sample from a measure on a function space mod constants .+ b ) the ultra - violet one is worse : there are more and more local oscillations of the signals at finer and finer scales and this contradicts lusin s theorem that an integrable function is continuous outside sets of arbitrarily small measure .in fact , it is a theorem that _ there is no translation and scale invariant probability measure on the space of locally integrable functions mod constants_. this can be avoided by allowing images to be generalized functions .in fact , the support can be as small as the intersection of all negative sobolev spaces .to summarize what a good statistical theory of natural images should explain , we have scale - invariance as just described , kurtosis greater than 3 as described in section 2.1 and finally the right local properties : * hypothesis i * : : a theory of images is a translation and scale invariant probability measure on the space of generalized functions mod constants .* hypothesis ii * : : for any filter with mean 0 , the marginal statistics of have kurtosis greater than 3 . 
*hypothesis iii * : : the local statistics of images reflect the preferred local geometries , esp .images of straight edges , but also curved edges , corners , bars , ` t - junctions ' and ` blobs ' as well as images without geometry , blank ` blue sky ' patches .hypothesis iii is roughly the existence of what marr , thinking globally of the image called the _ primal sketch _ and what julesz , thinking locally of the elements of texture , referred to as _ textons_. by scale invariance , the local and global image should have the same elements . to quantify hypothesis iii ,what is needed is a major effort at data mining .specifically , the natural approach seems to be to take a small filter bank of zero mean local filters , a large data base of natural images leading to the sample of points in given by for all .one seeks a good non - parametric fit to this dataset . buthypothesis iii shows that this distribution will not be simple .for example lee et al [ l - p - m ] have taken , a basis of zero mean filters with fixed support .they then make a linear tranformation in normalizing the covariance of the data to ( ` whitening ' the data ) , and to investigate the outliers , map the data with norms in the upper 20% to by dividing by the norm .the analysis reveals that the resulting data has asymptotic infinite density along a non - linear surface in ! this surface is constructed by starting with an ideal image , black and white on the two sides of a straight edge and forming a discrete image patch by integrating this ideal image over a tic - tac - toe board of square pixels .as the angle of the edge and the offset of the pixels to the edge vary , the resulting patches form this surface .this is the most concrete piece of evidence showing the complexity of local image statistics . are there models for these three hypotheses ?we can satisfy the first hypothesis by the unique scale - invariant gaussian model , called the free field by physicists but its samples look like clouds and its marginals have kurtosis 3 , so neither the second nor third hypothesis is satisfied .the next best approximation seems to be to use infinitely divisible measures , such as the model constructed by the author and b.gidas [ m - g ] , which we call _ random wavelet expansions _ : where is a poisson process in and are samples from an auxiliary levi measure , playing the role of individual random wavelet primitives .but this model is based on adding primitives , as in a world of transparent objects , which causes the probability density functions of its marginal filter statistics to be smooth at 0 instead of having peaks there , i.e. the model does not produce enough ` blue sky ' patches with very low constrast .a better approach are the random collage models , called _dead leaves models _ by the french school : see [ l - m - h ] . 
herethe are assumed to have bounded support , the terms have a random depth and , instead of being simply added , each term occludes anything behind it with respect to depth .this means equals the one which is in front of all the others whose support contains .this theory has major troubles with both infra - red and ultra - violet limits but it does provide the best approximation to date of the empirical statistics of images .it introduces explicitly the hidden variables describing the discrete objects in the image and allows one to model their preferred geometries .crafting models of this type is not simply mathematically satisfying .it is central to the main application of computer vision : object recognition .when an object of interest is obscured in a cluttered badly lit scene , one needs a -value for the hypothesis test is this fragment of stuff part of the sought - for object or an accidental conjunction of things occurring in generic images ? to get this -value , one needs a null hypothesis , a theory of generic images .-5 mm as we have seen in the last section , modeling images leads to objects and these objects have shape so we need stochastic models of shape , the ultimate non - linear sort of thing .again it is natural to consider this in the continuum limit and consider a -dimensional shape to be a subset of , e.g. a connected open subset with nice boundary .it is very common in multiple images of objects like faces , animals , clothes , organs in your body , to find not identical shapes but warped versions .how is this to be modeled ?one can follow the ideas of the previous section and take a highly empirical approach , gathering huge databases of faces or kidneys .this is probably the road to the best pattern recognition in the long run . butanother principle that grenander has always emphasized is to take advantage of the group of symmetries of the situation in this case , the group of all diffeomorphisms of .he and miller and collaborators ( see [ gr - m ] ) were led to rediscover the point of view of arnold which we next describe .let and be the volume - preserving subgroup .we want to bypass issues of the exact degree of differentiability of these diffeomorphisms , but consider and as infinite dimensional riemannian manifolds .let be a path in and define its length by : this length is nothing but the _ right_-invariant riemannianmetric : arnold s beautiful theorem is : * theorem * _ geodesics in are solutions of euler s equation : _ this result suggests using geodesics on suitable infinite dimensional manifolds to model optimal warps between similar shapes in images and using diffusion on these manifolds to craft stochastic models .but we need to get rid of the volume - preserving restriction .the weak metric used by arnold no longer works on the full and in [ c - r - m ] , christensen et al introduced : where is any vector field and is a fixed positive self - adjoint differential operator e.g. 
then a path in has both a velocity : and a _ momentum : _ ( so , the green s function of ) .what is important here is that the momentum can be a generalized function , even when is smooth .the generalization of arnold s theorem , first derived by vishik , states that geodesics are : this equation is a new kind of regularized compressible euler equation , called by marsden the template matching equation ( tme ) .the left hand side is the derivative along the flow of the momentum , as a measure , and the right hand side is the force term .a wonderful fact about this equation is that by making the momentum singular , we get very nice equations for geodesics on the -homogeneous spaces : + ( a ) set of all -tuples of distinct points in and + ( b ) set of all images of the unit ball under a diffeomorphism . + in the first case , we have where is the stabilizer of a specific set of distinct points . to get geodesics on , we look for ` particle solutions of the tme ' , i.e. where is a path in the geodesics on , which are perpendicular to all cosets , are then the geodesics on for the quotient metric : where . for these we get the interesting hamiltonian ode : which makes points traveling in the same direction attract each other and points going in opposite directions repel each other . this space leads to a non - linear version of the theory of landmark points and shape statistics of kendall [ sm ] and has been developed by younes [ yo ] .a similar treatment can be made for the space of shapes , where is the stabilizer of the unit sphere .geodesics on come from solutions of the tme for which is supported on the boundary of the shape and perpendicular to it .even though the first of these spaces might seem to be quite a simple space , it seems to have a remarkable global geometry reflecting the many perceptual distinctions which we make when we recognize a similar shapes , e.g. a cell decomposition reflecting the different possible graphs which can occur as the ` medial axis ' of the shape .this is an area in which i anticipate interesting results .we can also use these riemannian structures to define brownian motion on and ( see [ d - g - m ] , [ yi ] ) . putting a random stopping time on this walk ,we get probability measures on these spaces . to make the ideas more concrete ,in figure 6 we show a simulation of the random walk on .-5 mm the patterns which occur in nature s sensory signals are complex but allow mathematical modeling .their study has gone through several phases . at first , ` off - the - shelf ' classical models ( e.g. linear gaussian models ) were adopted based only on intuition about the variability of the signals .now , however , two things are happening : computers are large enough to allow massive data gathering to support fully non - parametric models .and the issues raised by these models are driving the study of new areas of mathematics and the development of new algorithms for working with these models .applications like general purpose speech recognizers and computer driven vehicles are likely in the foreseeable future .perhaps the ultimate dream is a fully unsupervised learning machine which is given only signals from the world and which finds their statistically significant patterns with no assistance : something like a baby in its first 6 months . | is there a mathematical theory underlying intelligence ? 
control theory addresses the output side , motor control , but the work of the last 30 years has made clear that perception is a matter of bayesian statistical inference , based on stochastic models of the signals delivered by our senses and the structures in the world producing them . we will start by sketching the simplest such model , the hidden markov model for speech , and then go on to illustrate the complications , mathematical issues and challenges that this has led to . * keywords and phrases : * perception , speech , vision , bayesian , statistics , inference , markov . |
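as a concrete illustration of the ` particle solutions ' of the tme discussed above, the sketch below integrates the standard landmark hamiltonian ode in python. it is not the authors' implementation: the green's function of the operator is not specified in this excerpt, so a gaussian kernel is assumed in its place, and the parameter values are arbitrary.

```python
import numpy as np

def landmark_geodesic(q0, p0, sigma=1.0, dt=1e-3, steps=2000):
    """Integrate the landmark ('particle') Hamiltonian ODE with a Gaussian kernel,
    assumed here as a stand-in for the Green's function of the unspecified operator."""
    q, p = np.array(q0, dtype=float), np.array(p0, dtype=float)
    traj = [q.copy()]
    for _ in range(steps):
        diff = q[:, None, :] - q[None, :, :]                    # q_i - q_j, shape (N, N, d)
        K = np.exp(-(diff ** 2).sum(-1) / (2.0 * sigma ** 2))   # kernel matrix K_ij
        dq = K @ p                                              # dq_i/dt = sum_j K_ij p_j
        coef = (p @ p.T) * K / sigma ** 2                       # (p_i . p_j) K_ij / sigma^2
        dp = (coef[:, :, None] * diff).sum(axis=1)              # dp_i/dt = sum_j coef_ij (q_i - q_j)
        q, p = q + dt * dq, p + dt * dp
        traj.append(q.copy())
    return np.array(traj)

# two landmarks launched with momenta pointing the same way; the trajectories can be
# used to check the attraction / repulsion behaviour described in the text above
q0 = np.array([[0.0, 0.0], [0.0, 1.5]])
p0 = np.array([[1.0, 0.0], [1.0, 0.0]])
paths = landmark_geodesic(q0, p0, sigma=1.0)
print(paths[-1])
```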
with rising demand for bandwidth , several researchers around the world have measured and studied the occupancy of spectrum in different countries .these measurements suggest that except for the spectrum allocated to services like cellular technologies , and the industrial , scientific and medical ( ism ) bands , most of the allocated spectrum is heavily underutilized .the overall usage of the analyzed spectrum is as low as 4.54% in singapore , 6.2% in auckland , 17.4% in chicago and 22.57% in barcelona . among all the unutilized portions of the frequency spectrum , white spaces in the ultra high frequency ( uhf ) television ( tv ) bands have been of particular interest owing to the superior propagation characteristics as compared to the higher frequency bands .loosely speaking , the unutilized ( or underutilized ) tv channels collectively form the tv white spaces .the amount of available tv white space varies with location and time .tv white space estimation has been done in countries like the united states ( us ) , the united kingdom ( uk ) , europe , and japan . in the indian context, single - day experiments at three locations in urban and sub - urban delhi have been performed .the estimation of tv white space in the uhf band , based on spectrum allocation and tv transmitter parameters , is presented in this work .the main contributions of this paper are the following : 1 .for the first time , the empirical quantification of the available tv white space in the - in india is presented .the quantification utilizes existing methods in the literature , namely pollution and protection viewpoints , and the technical specifications of the federal communications commission .it is found that uhf tv band spectrum is heavily underutilized in india .2 . motivated by underutilization of uhf tv band spectrum, a spatial reuse based channel allocation algorithm has been proposed for the existing indian tv transmitters operating in the 470 - 590 mhz band .the algorithm uses the least number of tv channels while ensuring no ( significant ) interference between transmitters operating in the same channel .it is observed that at least uhf tv band channels can be freed by this approach .the importance of the above results must be understood in the context of indian national frequency allocation plan ( nfap ) 2011 where a policy intent for the utilization of tv white spaces was made .therefore , it is necessary to estimate the amount of tv white spaces in india . besides , based on above results ,the tv band in india is underutilized and this situation is quite different than in the developed countries .the optimal mechanism(s ) for the use of tv white spaces in india can be _ different _ and it should be studied by further research . _organization : _ the tv white space scenario and the related work on quantitative analysis in a few other countries is briefly described in sec .[ sec : tvws_review ] .[ sec : india_tvplan ] describes the current indian usage scenario of the uhf tv bands .[ sec : methodology ] presents the methodology and assumptions used in calculating the white space availability in india .[ sec : results ] presents the results of our work , and compares the tv white space availability in india with that of other countries . 
in sec .[ sec : channel_allocation ] , we propose a frequency allocation scheme to the tv transmitters in india so as to ensure minimum number of channel usage in the country .concluding remarks and directions for future work are discussed in sec .[ sec : conclusions ] .regulators fcc in the us and ofcom in the uk have allowed for secondary operations in the tv white spaces . under this provision , a secondary user can use the unutilized tv spectrum provided it does not cause harmful interference to the tv band users and it relinquishes the spectrum when a primary user ( such as tv transmitter ) starts operation . since the actual availability of tv white spaces varies both with location and time , operators of secondary services are interested in the amount of available white space .the available tv white space depends on regulations such as the protection margin to the primary user , maximum height above average terrain ( haat ) , transmission power of secondary user , and the separation distance . as per fcc, a band can declared as unutilized if no primary signal is detected above a threshold of . using the parameters of terrestrial tv towers ,tv white space availability in the us has been done in the literature .the average number of channels available per user has been calculated using the pollution and protection viewpoints .these viewpoints are explained in more detail in sec .[ sec : methodology ] . using the pollution viewpoint into account , the average number of channels available per location increases with the allowable pollution level .this average number of available channels is maximum in the lower uhf band . in the protection viewpoint too , the average number of available channels at a location is maximum in the lower uhf band ( channels 14 - 51 of the us ) and this decreases as more and more constraints are applied . in uk , ofcom published a consultation providing details of cognitive access to tv white spaces in 2009 .the coverage maps and database of digital tv ( dtv ) transmitters can be used to develop a method for identification of the tv white space at any location within uk . the tv white space availability in japan has also been studied in .the results of indicate that the amount of available tv white space in japan is larger than that in us and uk .however , this availability decreases with an increase in the separation distance . to the best of our knowledge ,a comprehensive study of tv white space availability has not been done in india and is the focus of this work .as per the nfap 2011 , the spectrum in the frequency band - is earmarked for fixed , mobile and broadcasting services .the nfap has allowed the digital broadcasting services to operate in the - band .india is a part of the itu region 3 , and the - band has been earmarked for international mobile telecommunications - advanced ( imt - a ) applications ( see footnote ind 38 of ) .hence , the digital tv broadcasting will operate in the frequency band from to .currently the tv transmitters operate only in the - band in the uhf band . 
in india , the sole terrestrial tv service provider is doordarshan which currently transmits in two channels in most parts across india .currently doordarshan has tv transmitters operating in india , out of which transmitters transmit in the vhf band - i ( - comprising of two channels of each ) , transmitters operate in the vhf band - iii ( - comprising of eight channels of each ) , and the remaining transmitters transmit in the uhf band - iv ( - comprising of fifteen channels of each ) . in india ,a small number of transmitters operate in the uhf bands . as a result ,apart from - band depending on the location , _ the uhf band is quite sparsely utilized in india _ !this observation will be made more precise in the next section .the quantification of tv white space in india will be addressed in this section .a computational tool has been developed that calculates the protection region and separation distance for each tower , and also the pollution region around the tower where a secondary device should not operate .currently , there are no tv white space regulations in india . the regulations of fcc ( us ) are borrowed for the estimation of tv white space in india ._ microphones are ignored in our computation due to lack of available information_. the input to the developed computational tool include the following parameters for all the tv transmitters : 1 .position of the tower ( latitude and longitude ) , 2 .transmission power of the tv transmitter , 3 .frequency of operation , 4 .height of the antenna , 5 . and , terrain information of area surrounding the tower .the above parameters of all the tv towers operating in the uhf band - iv have been obtained from the terrestrial tv broadcaster doordarshan . out of 373 transmitters operating in the 470 - 590 mhz uhf band ( channel no .21 - 35 ) , doordarshan has provided data of 254 towers operating in the west , east , south and the north east zone .the tv transmitter information for the north zone is yet to be provided by doordarshan .comprehensive field strength measurements in india suggest that hata model is fairly accurate for propagation modeling .the hata model will be used for path - loss calculations . using the tv transmitter information and the propagation model, we quantify the available tv white space in the uhf tv band by two methods .the first method utilizes the protection and pollution viewpoints while the second one utilizes technical specification made by the fcc . the protection and pollution viewpoints used for calculating tv white spacehave been introduced by mishra and sahai .their method is reviewed in this section and utilized in our work for obtaining tv white space availability ( see sec . [sec : results ] ) . 
in the protection viewpoint ,when a secondary user operates , it must not cause any interference to the primary receivers in its vicinity .this is illustrated in fig .[ fig : picture31 ] protection radius , separation distance and the no - talk radius , width=240 ] the protected area is defined using the following equations .let be the transmit power of primary in dbm , be the path - loss in db at a radial distance from the transmitter , be the thermal noise in dbm , and be the threshold in dbm .then , the protection radius is defined by the following equation , the regulator provides an additional margin ( ) to account for fading .the modified equation for is , the no - talk radius is defined as the distance from the transmitter up to which no secondary user can transmit .the difference is calculated such that if a secondary device transmits at a distance of from the tv band receiver located at , the at the tv band receiver within a radius does not fall below .the separation distance is then calculated such that where , is the secondary transmitter power in dbm .in addition to the co - channel considerations , a tv receiver tuned to a particular channel has a tolerance limit on the interference level in the adjacent bands . in the protection viewpoint , we consider that the protection radius in the adjacent channel is the same as in co - channel. however , the tv receiver can tolerate more adjacent channel interference than co - channel interference .therefore , a margin of more than co - channel fading margin ( set by the fcc regulations ) is provisioned for adjacent channel interference .the pollution viewpoint takes into consideration the fact that even though a region could be used by a secondary device , the interference at the secondary receiver due to the primary transmitter might be higher than the tolerable interference level of the secondary receiver .if is the interference tolerable by the secondary receiver , then is given by , similar to the protection viewpoint , there are adjacent channel conditions ( leakage of primary transmitter s power in the adjacent channel ) in the pollution viewpoint as well .it is assumed that the secondary device can tolerate up to of interference if it is operating in the adjacent channels .the tv white space available is the _ intersection _ of the white space determined from the pollution and protection viewpoints .the parameters used in our computations for calculating the available tv white space are given in table [ i ] .& + & ( specified for 802.11 g systems ) + maximum tolerable interference ( ) by secondary ( adjacent channel ) & + noise in a band ( ) & + + & + & ( specified by fcc ) + additional fading margin in adjacent channel & ( specified by fcc ) + required for primary receiver & + transmission power of secondary device & + haat of secondary device & m + as an example , we consider the tv tower located at the sinhagad fort in pune .doordarshan informed us that the tower at sinhagad fort operates in the - band ( channel ) at a height of m and power of ( ) . in the hata model used for path loss calculations ,pune has been considered as an urban city . 
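a minimal sketch of how the protection / no - talk radii described above can be computed from the hata urban model. the model form is the standard okumura - hata expression with the small / medium - city mobile - antenna correction; the transmitter power, frequency, antenna height and threshold used in the example call are illustrative placeholders, not the values used in the paper ( which are not reproduced in this excerpt ).

```python
import numpy as np

def hata_urban_loss_db(d_km, f_mhz, h_b, h_m=1.5):
    """Okumura-Hata median path loss in dB (urban, small/medium-city correction)."""
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_m - (1.56 * np.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * np.log10(f_mhz) - 13.82 * np.log10(h_b) - a_hm
            + (44.9 - 6.55 * np.log10(h_b)) * np.log10(d_km))

def radius_for_threshold(p_tx_dbm, thresh_dbm, f_mhz, h_b, d_grid_km):
    """Largest distance at which the median received power still exceeds thresh_dbm."""
    rx = p_tx_dbm - hata_urban_loss_db(d_grid_km, f_mhz, h_b)
    above = d_grid_km[rx >= thresh_dbm]
    return above.max() if above.size else 0.0

# illustrative numbers only; the formula is extrapolated beyond its nominal 1-20 km range
d = np.linspace(1.0, 200.0, 4000)
p_tx_dbm = 10 * np.log10(10e3) + 30            # e.g. a 10 kW transmitter in dBm
print(radius_for_threshold(p_tx_dbm, -84.0, f_mhz=550.0, h_b=100.0, d_grid_km=d))
```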
using the pollution viewpoint , for a tolerable interference in channel ( - ), the pollution radius for the tower is calculated to be km , and for a tolerable interference of in the adjacent channel , the pollution radius is km .what this means for a secondary device is that the interference level is more than the allowable limit ( above noise floor ) in a region of km in channel and km in the adjacent channels around the tower . from the protection viewpoint , if a fading margin of is provided , the protection and no - talk radius in channel are km and km respectively .if we consider an additional fading margin of in the adjacent band , the no talk radius in the adjacent channel is km .this implies that if a secondary device operates within a distance of km in channel and km in the adjacent channels , the primary user receiving on channel will experience interference .the available white space is the intersection of the white space using the two viewpoints .thus , in pune , no secondary device can operate within a distance of km ( limit set by pollution viewpoint ) on channel and km in the adjacent channels ( limit set by protection viewpoint ) around the tower at sinhagad fort . in the fcc s definition of tv white space , the protection radius is same in the grade b contour ( ) . in the uhf band , is the distance from the tv tower where the field strength of the primary signal falls to .the required field strength is converted from from dbu to dbm using the following conversion formula , where , is transmit power in dbm , is the field strength in dbu , is the upper frequency - limit of the channel , and is the lower frequency - limit of the channel . to calculate the separation distance , i.e. distance beyond where no secondary device can transmit , the distance such that the signal from the secondary device at results in a signal level of at the tv receiver located at is calculated . for the tv transmitter at pune ,the no - talk radius , i.e. the distance from the tower beyond which a secondary device can use the channel is computed to be km .the results obtained by tv white space calculation methods of sec . [ sec : methodology ] will be discussed in this section . using the methodology described in sec .iii , the pollution and the no - talk radius are calculated for every tv tower in the four zones .each region is plotted as a circle around the tv tower .here it has been assumed that each tower has an omnidirectional antenna . the tv white space availability in the west , east , south and north - east zone using the pollution viewpointis shown in fig .[ poll15db ] and using the protection viewpoint in fig .[ prot1db ] .white space availability using the fcc regulations is shown in fig .[ fcc2 ] .[ poll15db ] and fig .[ prot1db ] illustrate that at most places in india , not even a single channel in the uhf band is utilized ! to quantify this result further , the complementary cumulative distribution function of the number of channels available per unit area as tv white space is plotted in fig .[ ccdf1 ] . [ cols="^,<,<,<,<,<,<",options="header " , ]conclusions that can be drawn from table [ ii ] are as follows : 1 . out of the 15 uhf tv channels ( 470 - 590 mhz ), the average number of tv channels available for secondary usage is above 14 ( 112mhz ) in each of the four zones .2 . availabletv white space is the maxium in the north east , where 18 transmitters operate in the uhf band .3 . if we use the adjacent channel constraint , the available white space decreases . 
however , this decrease is less than 1% in each case .table [ iii ] concludes that in almost all cases at least 12 out of the 15 channels ( 80% ) are available as tv white space in 100% of the areas in india .this is larger than japan , where out of 40 channels , on an average 16.67 channels ( 41.67% ) are available in 84.3% of the areas .this white space is also larger than what is available in us and the european countries .the available tv white space by area in germany , uk , switzerland , denmark on an average are 19.2 ( 48% ) , 23.1 ( 58% ) , 25.3 ( 63% ) and 24.4 ( 61% ) channels out of the 40 channels respectively . similarly , as compared to the us , the available tv white space in india is much larger .it must be noted that in tv white space studies across the world , the imt - a band is also included .there are a total of 254 doordarshan tv transmitters in the four zones illustrated in fig .[ loc ] operating in the - .currently , in these zones , 14 out of the 15 channels ( channels - ) are _ sparsely _ used for transmissions . as shown in fig .[ loc ] , channels allocated to the transmitters are reused inefficiently or at very large distances . for instance , out of the transmitters , only transmitters in the four zones operate on channel .we propose a channel allocation scheme such that the minimum number of tv channels are used in each zone , while ensuring that the coverage areas of different transmitters do not overlap .the algorithm of the proposed channel allocation scheme is as follows .using the algorithm described above , the minimum number of distinct channels required without any overlap of the coverage areas for four zones are given in fig . [ channel ] . under this channel allocation scheme ,the maximum number of distinct tv channels required in the entire zone is four , which is much smaller than the fourteen channels currently used in india . to avoid adjacent channel interference, the overlapping channels must be non - adjacent .in this paper , quantitative analysis of the available tv white space in the - uhf tv band in india was performed .it is observed that unlike developed countries , a major portion of tv band spectrum is unutilized in india .the results show that even while using conservative parameters , in at least % areas in the four zones all the channels ( % of the tv band spectrum ) are free !the average available tv white space was calculated using two methods : ( i ) the protection and pollution viewpoints ; and , ( ii ) the fcc regulations . by both methods , the average available tv white space in the uhf tv band was shown to be more than !an algorithm was proposed for reassignment of tv transmitter frequencies to maximize unused spectrum .it was observed that four tv channels ( or ) are sufficient to provide the existing uhf tv band coverage in india . in the future ,we plan to obtain and include the missing north zone data in our work .we also wish to explore suitable regulations in india for the tv white space to enable affordable broadband coverage .this is timely and important since policy intent for tv white space was made in nfap 2011 .m. islam et .al , spectrum survey in singapore : occupancy measurements and analyses , " in proc . of 3rd intl .conference on cognitive radio oriented wireless networks and communications , may 2008 , pp . 1 - 7 .m. mchenry et .chicago spectrum occupancy measurements & analysis and a long - term studies proposal , " in proc . of the acm 1st intl .wkshp . on tech . 
and policy for accessing spectrum , aug . 2006 , pp . 1 - 12 . t. shimomura , t. oyama and h. seki , `` analysis of tv white space availability in japan , '' in proc . of ieee vehicular tech . p. kumar et . al , `` white space detection and spectrum characterization in urban and rural india , '' in proc . of ieee 14th symp . and wkshops on a world of wireless , mobile and multimedia networks , jun . 2013 , pp . 1 - 6 . ofcom , `` digital dividend : cognitive access . consultation on license - exempting cognitive devices using interleaved spectrum , '' feb . | licensed but unutilized television ( tv ) band spectrum is called tv white space in the literature . ultra high frequency ( uhf ) tv band spectrum has very good wireless radio propagation characteristics . the amount of tv white space in the uhf tv band in india is of interest . comprehensive quantitative assessment and estimates for the tv white space in the - band for four zones of india ( all except north ) are presented in this work . this is the first effort in india to estimate tv white spaces in a comprehensive manner . the average available tv white space per unit area in these four zones is calculated using two methods : ( i ) the primary ( licensed ) user and secondary ( unlicensed ) user point of view ; and , ( ii ) the regulations of federal communications commission in the united states . by both methods , the average available tv white space in the uhf tv band is shown to be more than ! a tv transmitter frequency - reassignment algorithm is also described . based on spatial - reuse ideas , a tv channel allocation scheme is presented which results in insignificant interference to the tv receivers while using the least number of tv channels for transmission across the four zones . based on this reassignment , it is found that four tv band channels ( or ) are sufficient to provide the existing uhf tv band coverage in india . |
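the algorithm listing referenced in the channel - reassignment discussion above is not reproduced in this excerpt; the sketch below is a plausible minimal version of such a scheme, phrased as greedy colouring of the transmitter conflict graph ( two transmitters conflict when their coverage circles overlap ). the tower coordinates and radii are toy values, and the non - adjacency constraint mentioned in the text is not handled here.

```python
import math

def coverage_conflict(t1, t2):
    """Two transmitters conflict if their coverage circles overlap."""
    (x1, y1, r1), (x2, y2, r2) = t1, t2
    return math.hypot(x1 - x2, y1 - y2) < (r1 + r2)

def greedy_channel_allocation(transmitters):
    """Assign each transmitter the smallest channel index not used by any
    conflicting, already-assigned transmitter (largest coverage first)."""
    order = sorted(range(len(transmitters)),
                   key=lambda i: transmitters[i][2], reverse=True)
    channel = {}
    for i in order:
        used = {channel[j] for j in channel
                if coverage_conflict(transmitters[i], transmitters[j])}
        c = 0
        while c in used:
            c += 1
        channel[i] = c
    return channel

# toy example: (x, y, coverage radius) in km; these are not the Doordarshan data
towers = [(0, 0, 60), (100, 0, 60), (50, 90, 60), (300, 300, 60)]
print(greedy_channel_allocation(towers))   # {0: 0, 1: 1, 2: 2, 3: 0} -> three channels
```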
forecasting changes in volatility is essential for risk management , asset pricing and scenario analysis .indeed , models for describing and forecasting the evolution of volatility and covariance among financial assets are widely applied in industry . among the most popular approachesare worth mentioning the multivariate extensions of garch , the stochastic covariance models and realized covariance .however most of these econometrics tools are not able to cope with more than few assets , due to the curse of dimensionality and the increase in the number of parameters , limiting their insight into the volatility evolution to baskets of few assets only .this is unfortunate , since gathering insights into systemic risk and the unfolding of financial crises require modelling the evolution of entire markets which are composed by large numbers of assets .we suggest to use network filtering as a valuable tool to overcome this limitation .correlation - based filtering networks are tools which have been widely applied to filter and reduce the complexity of covariance matrices made of large numbers of assets ( of the order of hundreds ) , representative of entire markets .this strand of research represents an important part of the econophysics literature and has given important insights for risk management , portfolio optimization and systemic risk regulation .the volatility of a portfolio depends on the covariance matrix of the corresponding assets .therefore , the latter can provide insights into the former . in this work we elaborate on this connection : we show that correlation matrices can be used to predict variations of volatility , once they are analysed through the lens of network filtering .this is quite an innovative use of correlation - based networks , which have been used mostly for descriptive analyses , with the connections with risk forecasting being mostly overlooked .some works have shown that is possible to use dimensionality reduction techniques , such as spectral methods , as early - warning signals for systemic risk : however these approaches , although promising , do not provide proper forecasting tools , as they are affected by high false positive ratios and are not designed to predict a specific quantity . the approach we propose exploits network filtering to explicitly predict future volatility of markets made of hundreds of stocks . to this end , we introduce a new dynamical measure that quantifies the rate of change in the structure of the market correlation matrix : the `` correlation structure persistence '' . this quantity is derived from the structure of network filtering from past correlations . 
then we show how such measure exhibits significant predicting power on the market volatility , providing a tool to forecast it .we assess the reliability of this forecasting through out - of - sample tests on two different equity datasets .the rest of this paper is structured as follows : we first describe the two datasets we have analysed and we introduce the correlation structure persistence ; then we show how our analyses point out a strong interdependence between correlation structure persistence and future changes in the market volatility ; moreover , we describe how this result can be exploited to provide a forecasting tool useful for risk management , by presenting out - of - sample tests and false positive analysis ; then we investigate how the forecasting performance changes in time ; finally we discuss our findings and their theoretical implications .we have analysed two different datasets of equity data . the first set ( nyse dataset )is composed by daily closing prices of us stocks traded in new york stock exchange , covering 15 years from 02/01/1997 to 31/12/2012 .the second set ( lse dataset ) is composed by daily closing prices of uk stocks traded in the london stock exchange , covering 13 years from 05/01/2000 to 21/08/2013 .all stocks have been continuously traded throughout these periods of time .these two sets of stocks have been chosen in order to provide a significant sample of the different industrial sectors in the respective markets .for each asset ( ) we have calculated the corresponding daily log - return , where is the asset price at day .the market return is defined as the average of all stocks returns : . in order to calculate the correlation between different assetswe have then analysed the observations by using moving time windows , with .each time window contains observations of log - returns for each asset , totaling to observations .the shift between adjacent time windows is fixed to trading days .we have calculated the correlation matrix within each time window , , by using an exponential smoothing method that allows to assign more weight on recent observations .the smoothing factor of this scheme has been chosen equal to according to previously established criteria . from each correlation matrix have then computed the corresponding planar maximally filtered graph ( pmfg ) .the pmfg is a sparse network representation of the correlation matrix that retains only a subset of most significant entries , selected through the topological criterion of being maximally planar .such networks serve as filtering method and have been shown to provide a deep insight into the dependence structure of financial assets .once the pmfgs , with , have been computed we have calculated two measures , a backward - looking and a forward - looking one .the first is a measure that monitors the correlation structure persistence , based on a measure of pmfg similarity .this backward - looking measure , that we call , relies on past data only and indicates how slowly the correlation structure measured at time window is differentiating from structures associated to previous time windows .the forward - looking measure is the volatility ratio , that at each time window quantifies how good the market volatility measured at is as a proxy for the next time - window volatility . unlike ,the value of is not known at the end of .[ fig : time_windows ] shows a graphical representation of the time window set - up . 
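a minimal sketch of the exponentially smoothed correlation estimate described above. the weights decay exponentially towards older observations and are normalised to one; the particular smoothing - factor value used in the example ( one third of the window length ) is an assumption, since the value adopted in the paper is not reproduced in this excerpt.

```python
import numpy as np

def exp_weights(n_obs, theta):
    """Exponentially decaying weights; the most recent observation is weighted highest."""
    w = np.exp((np.arange(1, n_obs + 1) - n_obs) / theta)
    return w / w.sum()

def weighted_corr(returns, theta):
    """Exponentially smoothed correlation matrix of an (n_obs x n_assets) return array."""
    w = exp_weights(len(returns), theta)
    mu = w @ returns                                   # weighted means
    x = returns - mu
    cov = (w[:, None] * x).T @ x                       # weighted covariance
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

# example on synthetic data, not the NYSE/LSE returns
rng = np.random.default_rng(0)
r = rng.normal(size=(1000, 5))
C = weighted_corr(r, theta=1000 / 3)
print(C.shape, np.allclose(np.diag(C), 1.0))
```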
in the followingwe define the two measures : [ sec : methods ] * * correlation structure persistence * : we define the correlation structure persistence at time as : + + where is an exponential smoothing factor , is a parameter and is the fraction of edges in common between the two pmfgs and , called `` edge survival ratio '' . in formula , reads : + + where is the number of edges ( links ) in the two pmfgs ( constant and equal to for a pmfg ) , and ( ) represents the edge - sets of pmfg at ( ) .the correlation structure persistence is therefore a weighted average of the similarity ( as measured by the edge survival ratio ) between and the first previous pmfgs , with an exponential smoothing scheme that gives more weight to those pmfgs that are closer to .the parameter in eq .[ eq : es ] can be calculated by imposing .intuitively , measures how slowly the change of correlation structure is occurring in the near past of . ** volatility ratio * : in order to quantify the agreement between the estimated and the realized risk we here make use of the volatility ratio , a measure which has been used for this purpose and defined as follows : + + where is the realized volatility of the average market return computed on the time window ; is the estimated volatility of computed on time window , by using the same exponential smoothing scheme described for the correlation .specifically , is the time window of length that follows immediately : if is the last observation in , covers observations from to ( fig .[ fig : time_windows ] ) .therefore the ratio in eq .[ eq : q ] estimates the agreement between the market volatility estimated with observations in and the actual market volatility observed over an investment in the assets over .if , then the historical data gathered at has underestimated the ( future ) realized volatilty , whereas indicates overestimation .+ let us stress that provides an information on the reliability of the covariance estimation too , given the relation between market return volatility and covariance : + + + where and are respectively the estimated and realized covariances . to investigate the relation between and we have calculated the two quantities with different values of and in eqs .[ eq : es ] and [ eq : q ] , to assess the robustness against these parameters .specifically , we have used trading days , that correspond to time windows of length 1 , 2 , 3 and 4 years respectively ; , that correspond ( given trading days ) to an average in eq .[ eq : es ] reaching back to , , and trading days respectively . has been chosen equal to trading days ( one year ) for all the analysis . in fig .[ fig : es_matrices ] we show the matrices ( eq . [ eq : es_ab ] ) for the nyse and lse dataset , for .we can observe a block structure with periods of high structural persistence and other periods whose correlation structure is changing faster .in particular two main blocks of high persistence can be found before and after the 2007 - 2008 financial crisis ; a similar result was found in a previous work with a different measure of similarity .these results are confirmed for all values of considered . 
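a minimal sketch of the edge survival ratio and of the correlation structure persistence defined above. two simplifications are assumed: the filtered graph is a minimum spanning tree rather than a pmfg ( the text notes that the mst gives very similar results ), and the exponential weighting over the previous windows uses a simple normalised form, since the exact normalisation of eq. [ eq : es ] is not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edges(corr):
    """Edge set of the minimum spanning tree built on d_ij = sqrt(2(1 - rho_ij));
    used here as a stand-in for the PMFG."""
    d = np.sqrt(2.0 * (1.0 - corr))
    np.fill_diagonal(d, 0.0)
    mst = minimum_spanning_tree(d).toarray()
    return {tuple(sorted(e)) for e in zip(*np.nonzero(mst))}

def edge_survival(edges_a, edges_b):
    """Fraction of edges shared by two filtered graphs with the same edge count."""
    return len(edges_a & edges_b) / len(edges_a)

def structure_persistence(edge_sets, t, c1, theta):
    """Exponentially weighted average similarity of graph t with its c1 predecessors;
    edge_sets is a sequence of edge sets, one per time window."""
    bs = np.arange(1, c1 + 1)
    w = np.exp(-bs / theta)
    w /= w.sum()
    return sum(w_b * edge_survival(edge_sets[t], edge_sets[t - b])
               for w_b, b in zip(w, bs))
```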
in fig .[ fig : plot_342 ] we show and as a function of time , for and .as expected , main peaks of occur during the months before the most turbulent periods in the stock market , namely the 2002 market downturn and the 2007 - 08 credit crisis .interestingly , the corresponding seems to follow a specular trend .this is confirmed by explicit calculation of pearson correlation between the two signals , reported in tabs .[ tab : corr_4 ] - [ tab : corr_8 ] : as one can see , for all combinations of parameters the correlation is negative .* matrices for , for nyse ( left ) and lse dataset ( right)*. a block - like structure can be observed in both datasets , with periods of high structural persistence and other periods whose correlation structure is changing faster .the 2007 - 2008 financial crisis marks a transition between two main blocks of high structural persistence.,title="fig : " ] * matrices for , for nyse ( left ) and lse dataset ( right)*. a block - like structure can be observed in both datasets , with periods of high structural persistence and other periods whose correlation structure is changing faster .the 2007 - 2008 financial crisis marks a transition between two main blocks of high structural persistence.,title="fig : " ] * and signals represented for and * , for both nyse ( left graph ) and lse ( right graph ) datasets .it is evident the anticorrelation between the two signals .the financial crisis triggers a major drop in the structural persistence and a corresponding peak in ., title="fig : " ] * and signals represented for and * , for both nyse ( left graph ) and lse ( right graph ) datasets .it is evident the anticorrelation between the two signals .the financial crisis triggers a major drop in the structural persistence and a corresponding peak in ., title="fig : " ] * matrices for , for nyse ( left ) and lse dataset ( right)*. a block - like structure can be observed in both datasets , with periods of high structural persistence and other periods whose correlation structure is changing faster .the blocks of high similarity show higher compactness than in fig .[ fig : es_matrices].,title="fig : " ] * matrices for , for nyse ( left ) and lse dataset ( right)*. a block - like structure can be observed in both datasets , with periods of high structural persistence and other periods whose correlation structure is changing faster .the blocks of high similarity show higher compactness than in fig .[ fig : es_matrices].,title="fig : " ] in order to check the significance of this anticorrelation we can not rely on standard tests on pearson coefficient , such as fisher transform , as they assume i.i.d .series .our time series are instead strongly autocorrelated , due to the overlapping between adjacent time windows . therefore we have calculated confidence intervals by performing a block bootstrapping test .this is a variation of the bootstrapping test , conceived to take into account the autocorrelation structure of the bootstrapped series .the only free parameter in this method is the block length , that we have chosen applying the optimal selection criterion proposed in literature : such criterion is adaptive on the autocorrelation strength of the series as measured by the correlogram .we have found , depending on the parameters and , optimal block lengths ranging from 29 to 37 , with a mean of 34 ( corresponding to 170 trading days ) . 
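a minimal sketch of the block - bootstrap confidence interval used above; the circular - block variant is assumed, and in practice the block length would be chosen with the optimal selection criterion mentioned in the text rather than fixed by hand.

```python
import numpy as np

def block_bootstrap_corr(x, y, block_len, n_boot=10000, seed=0):
    """Circular block-bootstrap percentile interval for the Pearson correlation
    of two autocorrelated series, resampling (x_t, y_t) pairs in blocks."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for k in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)[None, :]).ravel() % n
        idx = idx[:n]
        stats[k] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

# usage on two synthetic autocorrelated series (the text reports block lengths ~34 windows)
rng = np.random.default_rng(1)
e = rng.normal(size=400)
x = np.convolve(e, np.ones(20) / 20, mode="same")
y = -0.3 * x + rng.normal(scale=0.5, size=400)
print(block_bootstrap_corr(x, y, block_len=34))
```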
by performing block bootstrapping tests we have therefore estimated confidence intervals for the true correlation between and ; in tabs .[ tab : corr_4 ] - [ tab : corr_8 ] correlations whose and confidence intervals ( ci ) do not include zero are marked with one and two stars respectively .as we can see , 14 out of 16 correlation coefficients are significantly different from zero within ci in the nyse dataset , and 12 out of 16 in the lse dataset . for what concerns the ci, we observe 13 out of 16 for the nyse and 9 out of 16 for the lse dataset .non - significant correlations appear only for , suggesting that this length is too small to provide a reliable measure of structural persistence .very similar results are obtained by using minimum spanning tree ( mst ) instead of pmfg as network filtering .given the interpretation of and given above , anticorrelation implies that an increase in the `` speed '' of correlation structure evolution ( low ) is likely to correspond to underestimation of future market volatility from historical data ( high ) ) , whereas when the structure evolution `` slows down '' ( high ) there is indication that historical data is likely to provide an overestimation of future volatility .this means that we can use as a valuable predictor of current historical data reliability .this result is to some extent surprising as is derived from pmfgs topology , which in turns depends only on the ranking of correlations and not on their actual value : yet , this information provides meaningful information about the future market volatility and therefore about the future covariance . in principle other measures of correlation ranking structure , more straightforward than the correlation persistence , might capture the same interplay with .we have therefore considered also the metacorrelation , that is the pearson correlation computed between the coefficients of correlation matrices at and ( see methods for more details ) .such measure does not make use of pmfg .[ fig : meta_matrices ] displays the similarity matrices obtained with this measure for nyse and lse datasets : we can observe again block - like structures , that however carry different information from the in fig .[ fig : es_matrices ] ; in particular , blocks show higher intra - similarity and less structure . similarly to eq .[ eq : es ] , we have then defined as the weighted average over past time windows ( see methods ) . in tabs . [tab : corr_meta_nyse ] and [ tab : corr_meta_lse ] we show the correlation between and . 
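a minimal sketch of the metacorrelation, i.e. the pearson correlation between the entries of two correlation matrices; restricting the average to the distinct off - diagonal pairs is an assumption of this sketch.

```python
import numpy as np

def metacorrelation(corr_a, corr_b):
    """Pearson correlation between the entries of two correlation matrices,
    computed over the distinct off-diagonal pairs i < j."""
    iu = np.triu_indices_from(corr_a, k=1)
    return np.corrcoef(corr_a[iu], corr_b[iu])[0, 1]

# a persistence measure analogous to ES(t) is then obtained by averaging
# metacorrelation(C_t, C_{t-b}) over b = 1..c1 with the same exponential weights.
```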
as we can see , although an anticorrelation is present for each combination of parameters and , correlation coefficients are systematically closer to zero than in tabs .[ tab : corr_4 ] - [ tab : corr_8 ] , where was used .moreover the number of significant pearson coefficients , according to the block bootstrapping , decreases to 12 out of 16 in nyse and to 10 out of 16 in lse dataset .since does not make use of pmfg , this result suggests that the filtering procedure associated to correlation - based networks is a necessary step for capturing at best the correlation ranking evolution and its interplay with the volatility ratio .cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & -0.2129 & -0.2224 & & + & & & & & + & & & & & + & & & & & + cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & & & -0.1872 & + & & & & & + & & & & & + & & & & & + cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & -0.0992 & -0.0754 & -0.1055 & -0.1157 + & & -0.2146 & -0.2232 & -0.2309 & -0.2753 + & & -0.2997 & & & + & & & & & + cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & -0.1470 & -0.1095 & -0.1326 & -0.1720 + & & & -0.2113 & & + & & & & & + & & & -0.2954 & -0.3163 & + * partition of data into training ( left graphs ) and test ( right graphs ) set*. training sets are used to regress against , in order to estimate the coefficents in the logistic regression and therefore identify the regression threshold , shown as a vertical continuous line . the test sets are used to test the forecasting performance of such regression on a subset of data that has not been used for regression ; the model predicts ( ) if is greater than the regression threshold , and ( ) otherwise.,title="fig : " ] * partition of data into training ( left graphs ) and test ( right graphs ) set*. training sets are used to regress against , in order to estimate the coefficents in the logistic regression and therefore identify the regression threshold , shown as a vertical continuous line . the test sets are used to test the forecasting performance of such regression on a subset of data that has not been used for regression ; the model predicts ( ) if is greater than the regression threshold , and ( ) otherwise.,title="fig : " ] * partition of data into training ( left graphs ) and test ( right graphs ) set*. training sets are used to regress against , in order to estimate the coefficents in the logistic regression and therefore identify the regression threshold , shown as a vertical continuous line . the test sets are used to test the forecasting performance of such regression on a subset of data that has not been used for regression ; the model predicts ( ) if is greater than the regression threshold , and ( ) otherwise.,title="fig : " ] * partition of data into training ( left graphs ) and test ( right graphs ) set*. 
training sets are used to regress against , in order to estimate the coefficents in the logistic regression and therefore identify the regression threshold , shown as a vertical continuous line .the test sets are used to test the forecasting performance of such regression on a subset of data that has not been used for regression ; the model predicts ( ) if is greater than the regression threshold , and ( ) otherwise.,title="fig : " ] in this section we evaluate how well the correlation structure persistence can forecast the future through its relation with the forward - looking volatility ratio .in particular we focus on estimating whether is greater or less than : this information , although less complete than a precise estimation of , gives us an important insight into possible overestimation ( ) or underestimation ( ) of future volatility .we have proceeded as follows .given a choice of parameters and , we have calculated the corresponding set of pairs , with .then we have defined as the categorical variable that is if and if .finally we have performed a logistic regression of against : namely , we assume that : where is the sigmoid function ; we estimate parameters and from the observations through maximum likelihood . once the model has been calibrated , given a new observation we have predicted if , and otherwise . this classification criterion , in a case with only one predictor , corresponds to classify according to whether is greater or less than a threshold which depends on and , as shown in fig .[ fig : training_test ] ( right graphs ) for a particular choice of parameters .therefore the problem of predicting whether market volatility will increase or decrease boils down to a classification problem with as predictor and as target variable .we have made use of a logistic regression because it is more suitable than a polynomial model for dealing with classification problems .other classification algorithms are available ; we have chosen the logistic regression due to its simplicity .we have also implemented the knn algorithm and we have found that it provides similar outcomes but worse results in terms of the forecasting performance metrics that we discuss in this section .we have then evaluated the goodness of the logistic regression at estimating given a new observation . to this end , we have computed three standard metrics for assessing the performance of a classification method : the probability of successful forecasting , the true positive rate and the false positive rate . represents the expected fraction of correct predictions , is the method goodness at identifying true positives ( in this case , actual increases in volatility ) and quantifies the method tendency to false positives ( predictions of volatility increase when the volatility will actually decrease ) : see methods for more details .overall these metrics provide a complete summary of the model goodness at predicting changes in the market volatility . in order to avoid overfittingwe have estimated the metrics above by means of an out - of - sample procedure .we have divided our dataset into two periods , a training set and a test set . in the trainingset we have calibrated the logistic equation in eq .[ eq : logistic_regression ] , estimating the parameters and ; in the test set we have used the calibrated model to measure the goodness of the model predictions by computing the measures of performance in eq .[ eq : prob_forecast]-[eq : fpr ] . 
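a minimal sketch of the calibration / forecasting step described above, using scikit - learn's logistic regression; the synthetic series below merely illustrate the interface and are not the nyse / lse data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_and_forecast(es, q, test_frac=0.3):
    """Fit P(B=1 | ES) = sigmoid(a + b*ES) on the first part of the sample and
    predict whether q > 1 (future volatility underestimated) on the held-out part."""
    b = (q > 1.0).astype(int)                       # target: 1 if realized vol exceeds estimate
    split = int(len(es) * (1.0 - test_frac))
    clf = LogisticRegression().fit(es[:split, None], b[:split])
    pred = clf.predict(es[split:, None])            # thresholds the probability at 0.5
    psf = (pred == b[split:]).mean()                # fraction of correct out-of-sample calls
    return clf, psf

# synthetic illustration: q anticorrelated with ES, as observed in the text
rng = np.random.default_rng(2)
es = rng.normal(size=300)
q = np.exp(-0.8 * es + rng.normal(scale=0.8, size=300))
model, psf = fit_and_forecast(es, q)
print(round(psf, 2))
```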
in fig .[ fig : training_test ] this division is shown for a particular choice of and , for both nyse and lse dataset . in this examplethe percentage of data included in the test set ( let us call it ) is .probabilities of successful forecasting are reported in tabs . [tab : nyse_p_outs ] and [ tab : lse_p_outs ] , for .as we can see is higher than for all combinations of parameters in nyse dataset , and in almost all combinations for lse dataset .stars mark those values of that are significantly higher than the same probability obtained by using the most recent value of instead of as a predictor for in the logistic regression ( let us call such probability ) .specifically , we have defined a null model where variations from such probability are due to random fluctuations only ; given observations , such fluctuations follow a binomial distribution , with mean and variance . then p - values have been calculated by using this null distribution for each combination of parameters .this null hypothesis accounts for the predictability of that is due to the autocorrelation of only ; therefore significantly higher than the value expected under this hypothesis implies a forecasting power of that is not explained by the autocorrelation of . from the tablewe can see that is significant in 12 out of 16 combinations of parameters for nyse dataset , and in 13 out of 16 for lse dataset .this means that correlation persistence is a valuable predictor for future average correlation , able to outperform forecasting method based on past average correlation trends .these results are robust against changes of , as long as the training set is large enough to allow an accurate calibration of the logistic regression .we have found this condition is satisfied for .however does not give any information on the method ability to distinguish between true and false positives . to investigate this aspect we need and .a traditional way of representing both measures from a binary classifier is the so - called `` receiver operating characteristic '' ( roc ) curve . in a roc plot , plotted against as the discriminant threshold is varied .the discriminant threshold is the value of the probability in eq .[ eq : logistic_regression ] over which we classify : the higher is , the less likely the method is to classify ( in the analysis on we chose ) .ideally , a perfect classifier would yield for all , whereas a random classifier is expected to lie on the line .therefore a roc curve which lies above the line indicates a classifier that is better than chance at distinguishing true from false positives . as one can see from fig .[ fig : roc_curve ] , the roc curve s position depends on the choice of parameters and . in this respectour classifier performs better for low values of and .this can be quantified by measuring the area under the roc curve ; such measure , often denoted by auc , is shown in tabs .[ tab : auc_nyse_outs]-[tab : auc_lse_outs ] . 
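a minimal sketch of the roc construction described above: the discriminant threshold is swept, the true and false positive rates are recorded, and the area under the curve is obtained by trapezoidal integration. the scores could come, for instance, from predict_proba of the previous sketch; the data below are synthetic.

```python
import numpy as np

def roc_curve_and_auc(prob, truth):
    """TPR and FPR as the discriminant threshold Pc is varied, plus the
    trapezoidal area under the resulting ROC curve."""
    thresholds = np.linspace(1.0, 0.0, 101)
    tpr, fpr = [], []
    for pc in thresholds:
        pred = prob >= pc
        tp = np.sum(pred & (truth == 1)); fn = np.sum(~pred & (truth == 1))
        fp = np.sum(pred & (truth == 0)); tn = np.sum(~pred & (truth == 0))
        tpr.append(tp / max(tp + fn, 1))
        fpr.append(fp / max(fp + tn, 1))
    auc = np.trapz(tpr, fpr)
    return np.array(fpr), np.array(tpr), auc

# synthetic, mildly informative scores
rng = np.random.default_rng(3)
truth = rng.integers(0, 2, size=500)
prob = np.clip(0.5 * truth + 0.7 * rng.uniform(size=500), 0.0, 1.0)
print(round(roc_curve_and_auc(prob, truth)[2], 2))
```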
for both datasetsthe optimal choice of parameters is and .cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & 0.546 & 0.560 * & 0.599 * * & 0.539 * * + & & 0.704 * * & 0.695 * * & 0.658 * * & 0.605 * * + & & 0.634 * & 0.585 & 0.539 & 0.708 * + & & 0.704 * & 0.7638 * * & 0.839 * * & 0.860 + cc | c c c c | & & + & & * 10 * & * 25 * & * 50 * & * 100 * + & & 0.616 * * & 0.645 * * & 0.612 * * & 0.568 * * + & & 0.652 * * & 0.635 * * & 0.598 * * & 0.393 + & & 0.651 * * & 0.560 * * & 0.453 * * & 0.412 + & & 0.544 * * & 0.573 * * & 0.706 * * & 0.689 + * receiver operating characteristic ( roc ) curve*. upper graph : true positive rate ( tpr ) against false positive rate ( fpr ) as the discriminant threshold of the classifier is varied , for each combination of parameters and in the nyse dataset. the closer the curve is to the upper left corner of each graph , the better is the classifier compared to chance .bottom graph : true positive rate ( tpr ) against false positive rate ( fpr ) as the discriminant threshold of the classifier is varied , for each combination of parameters and in the lse dataset.,title="fig : " ] * receiver operating characteristic ( roc ) curve*. upper graph : true positive rate ( tpr ) against false positive rate ( fpr ) as the discriminant threshold of the classifier is varied , for each combination of parameters and in the nyse dataset .the closer the curve is to the upper left corner of each graph , the better is the classifier compared to chance .bottom graph : true positive rate ( tpr ) against false positive rate ( fpr ) as the discriminant threshold of the classifier is varied , for each combination of parameters and in the lse dataset.,title="fig : " ] .[tab : auc_nyse_outs ] nyse dataset : * area under the curve ( auc ) * , measured from the roc curve in fig .[ fig : roc_curve ] .values greater than 0.5 indicate that the classifier performs better than chance . [ cols="^,^,^,^,^,^ " , ] * fraction of successful predictions as a function of time*. nyse ( left graph ) and lse dataset ( right graph ) .forecasting is based on logistic regression with predictor ( top graphs ) and most recent value of ( bottom graphs ) .horizontal lines represent the average over the entire period.,title="fig : " ] * fraction of successful predictions as a function of time*. nyse ( left graph ) and lse dataset ( right graph ) .forecasting is based on logistic regression with predictor ( top graphs ) and most recent value of ( bottom graphs ) .horizontal lines represent the average over the entire period.,title="fig : " ] in this section we look at how the forecasting performance changes at different time periods . in order to explore this aspect we have counted at each time window the number of predictions ( out of the 16 predictions corresponding to as many combinations of and ) that have turned out to be correct; we have then calculated the fraction of successful predictions as . in this way is a proxy for the goodness of our method at each time window .logistic regression parameters and have been calibrated by using the entire time period as training set , therefore this amounts to an in - sample analysis . in fig .[ fig : forecast_time ] we show the fraction of successful predictions for both nyse and lse datasets ( upper graphs , blue circles ) . 
for comparisonwe also show the same measure obtained by using the most recent value of as predictor ( bottom graphs ) ; as in the previous section , it represents a null model that makes prediction by using only the past evolution of . as we can see , both predictions based on and on past values of display performances changing in time . in particular drops just ahead of the main financial crises ( the market downturn in march 2002 , 2007 - 2008 financial crisis , euro zone crisis in 2011 ) ; this is probably due to the abrupt increase in volatility that occurred during these events and that the models took time to detect .after these drops though performances based on recover much more rapidly than those based on past value of .for instance in the first months of 2007 our method shows quite high ( more than of successful predictions ) , being able to predict the sharp increase in volatility to come in 2008 while predictions based on fail systematically until 2009 .overall , predictions based on correlation structure persistence appear to be more reliable ( as shown by the average over all time windows , the horizontal lines in the plot ) and faster at detecting changes in market volatility .in this paper we have proposed a new tool for forecasting market volatility based on correlation - based information filtering networks and logistic regression , useful for risk and portfolio management .the advantage of our approach over traditional econometrics tools , such as multivariate garch and stochastic covariance models , is the `` top - down '' methodology that treats correlation matrices as the fundamental objects , allowing to deal with many assets simultaneously ; in this way the curse of dimensionality , that prevents e.g. multivariate garch to deal with more than few assets , is overcome .we have proven the forecasting power of this tool by means of out - of - sample analyses on two different stock markets ; the forecasting performance has been proven to be statistically significant against a null model , outperforming predictions based on past market correlation trends . 
moreover we have measured the roc curve and identified an optimal region of the parameters in terms of true positive and false positive trade - off .the temporal analysis indicates that our method is able to adapt to abrupt changes in the market , such as financial crises , more rapidly than methods based on past volatility .this forecasting tool relies on an empirical fact that we have reported in this paper for the first time .specifically , we have shown that there is a deep interplay between market volatility and the rate of change of the correlation structure .the statistical significance of this relation has been assessed by means of a block - bootstrapping technique .an analysis based on metacorrelation has revealed that this interplay is better highlighted when filtering based on planar maximally filtered graphs is used to estimate the correlation structure persistence .this finding sheds new light into the dynamic of correlation .the topology of planar maximally filtered graphs depends on the ranking of the pairs of cross - correlations ; therefore an increase in the rate of change in pmfgs topology points out a faster change of this ranking .our result indicates that such increase is typically followed by a rise in the market volatility , whereas decreases are followed by drops .a possible interpretation of this is related to the dynamics of risk factors in the market .indeed higher volatility in the market is associated to the emergence of a ( possibly new ) risk factor that makes the whole system unstable ; such transition could be anticipated by a quicker change of the correlation ranking , triggered by the still emerging factor and revealed by the correlation structure persistence .such persistence can therefore be a powerful tool for monitoring the emergence of new risks , valuable for a wide range of applications , from portfolio management to systemic risk regulation .moreover this interpretation would open interesting connections with those approaches to systemic risk that make use of principal component analysis , monitoring the emergence of new risk factors by means of spectral methods .we plan to investigate all these aspects in a future work .given two correlation matrices and at two different time windows and , their metacorrelation is defined as follows : [\langle \rho^2_{ij}(t_b ) \rangle_{ij } - \langle \rho_{ij}(t_b ) \rangle_{ij}^2 ] } } , \label{eq : z}\ ] ] where is the average over all couples of stocks .similarly to eq .[ eq : es ] we have then defined as the weighted average over past time windows : with reference to figs .[ fig : training_test ] b ) and d ) , let us define the number of observations in each quadrant ( ) as . in the terminology of classification techniques , is the number of true positive ( observations for which the model correctly predicted ) , is the number of true negative ( observations for which the model correctly predicted ) , the number of false negative ( observations for which the model incorrectly predicted ) and the number of false positive ( observations for which the model incorrectly predicted ) .we have then computed the following measures of quality of classification , that are the standard metrics for assessing the performances of a classification method : * * probability of successful forecasting ( ) * : represents the method probability of a correct prediction , expressed as fraction of observed values through which the method has successfully identified the correspondent value of . 
in classification problems the error rate , which is simply one minus this probability , is sometimes used instead . the probability of successful forecasting is computed as follows : + * * true positive rate ( ) * : it is the probability of predicting , conditional on the fact that the real is indeed ( that is , to predict an increase in volatility when the volatility will indeed increase ) ; it represents the method 's sensitivity to increases in volatility . it is also called `` recall '' . in formula : + * * false positive rate ( ) * : it is the probability of predicting , conditional on the fact that the real is instead ( that is , to predict an increase in volatility when the volatility will actually decrease ) . it is also called `` 1-specificity '' . in formula : + the authors wish to thank bloomberg for providing the data . tdm wishes to thank the cost action td1210 for partially supporting this work . ta acknowledges support of the uk economic and social research council ( esrc ) in funding the systemic risk centre ( es / k002309/1 ) . * competing financial interests . * the authors declare no competing financial interest . | we discovered that past changes in the market correlation structure are significantly related to future changes in the market volatility . by using correlation - based information filtering networks we devise a new tool for forecasting the market volatility changes . in particular , we introduce a new measure , the `` correlation structure persistence '' , that quantifies the rate of change of the market dependence structure . this measure shows a deep interplay with changes in volatility and we demonstrate it can anticipate market risk variations . notably , our method overcomes the curse of dimensionality that limits the applicability of traditional econometric tools to portfolios made of a large number of assets . we report on forecasting performances and statistical significance of this tool for two different equity datasets . we also identify an optimal region of parameters in terms of true positive and false positive trade - off , through a roc curve analysis . we find that our forecasting method is robust and it outperforms predictors based on past volatility only . moreover the temporal analysis indicates that our method is able to adapt to abrupt changes in the market , such as financial crises , more rapidly than methods based on past volatility . |
let us consider a random sequence of total length with alphabet .each position of is independently assigned the letter , , , or with corresponding probabilities , , and ( ) .a word of length has the generic form of , where is the letter at the -th position .the total number of such words is .this number exceeds when increases to order , therefore most of the words of length will never appear in . then what is the probability of a particular word being a maw of sequence ? for to be a maw , it must not appear in but its two subwords of length , and , must appear in at least once , as demonstrated schematically in fig .[ fig : modelvsrandom](a ) .we define the core of the maw as the substring which must appear in at least twice , except for the special case of where the and overlap ( see appendix a in _ supplemental information _ ) .the core must immediately follow the letter at least once and it must also be immediately followed by the letter at least once . similarly ,if immediately follows the letter , it must not be immediately followed by the letter .we can construct subsequences of length from , say .neighboring subsequences are not fully independent as there is an overlap of length between and with .however , for two randomly chosen subsequences of length from the random sequence have a high probability of being completely uncorrelated .we can thus safely neglect these short - range correlations , and consequentially the probability of word being a maw is expressed as ^{n - l+1 } \nonumber \\ & & - \bigl\{\bigl [ 1 - \omega({\bf w}^{(p ) } ) \bigr]^{n - l+2 } + \bigl [ 1 - \omega({\bf w}^{(s ) } ) \bigr]^{n - l+2 } \nonumber \\ & & \quad - \bigl [ 1 - \omega({\bf w}^{(p ) } ) - \omega({\bf w}^{(s ) } ) + \omega({\bf w } ) \bigr]^{n - l+1 } \bigr\ } \ ; , \label{eq : rdprob}\end{aligned}\ ] ] where ( with being the -th letter of ) is the probability of a randomly chosen subsequence of length from to be identical to the word , while and are , respectively , the probabilities of a randomly chosen subsequences of length from being identical to and .summing over all the possible words of length , we obtain the expected number of maws of length for a random sequence of length : where the summation is over all the combinations of the two terminal letters and over all the possibilities with which the letters , , , , may appear in the core a total number of times equal to respectively , , , and .( a ) illustration of the properties of a minimal absent word and its subwords .( b ) comparison between the length distribution predicted by eq .[ eq : mawnum ] and the number of maws calculated for one instance of a random genome of mbp with uniform nucleotide distribution and with gc content ., title="fig : " ] ( a ) ( a ) illustration of the properties of a minimal absent word and its subwords .( b ) comparison between the length distribution predicted by eq .[ eq : mawnum ] and the number of maws calculated for one instance of a random genome of mbp with uniform nucleotide distribution and with gc content ., title="fig : " ] ( b ) in the simplest case of maximally random sequences , namely , eq .( [ eq : mawnum ] ) reduces to \ ; .\label{eq : mawnummaxrand}\end{aligned}\ ] ] we have checked by numerical simulations ( see fig . [fig : modelvsrandom]b ) that eq .[ eq : mawnum ] and eq . 
[ eq : mawnummaxrand ] indeed give excellent predictions of the number maws as a function of their length in random sequences .we define a predicted minimum and a predicted maximum of the support of the bulk ( and ) as the two values of such that . in the general case , requiring that we obtain , with the shortest length such that is closest to one , while in the other limit we obtain that , with the longest length being the bulk distribution is therefore centered around lengths of order . in the case of maximally random sequences , we can obtain the lower limit analytically and also the first correction to ( [ eq : maxl - gen ] ) , as and the above definition of is good enough for our purposes , and is also a good predictor for ( see below ) .a more refined predictor for is discussed in appendix b in the _ supplemental information_.we now describe a protocol for constructing random genome by a iterative copy paste mutation scheme that qualitatively reproduces the tail behavior observed for most of real genomes .the model is in principle similar to but differs in the details of the implementation .the starting point is a string of nucleotides chosen independently at random with a length . at each iteration , we chose two positions and uniformly at random on the genome and a length from a poisson distribution with mean .we copy the sequence between to and insert it between positions and , thus increasing the genome size by .we then randomly alter a fraction of nucleotides in the genome , choosing the positions uniformly at random and the new letters from an arbitrary distribution that can be tuned to adjust the gc content .the process is repeated until the genome reaches the desired length . in this model, represents the typical size of region involved in a translocation in the genome and corresponds to the expected number of mutations between such events .we observed that the length or the content of the initial string is unimportant provided that it is much shorter than the final genome size .the exact value of given a constant ration only affects the tail of the distribution far away from the bulk .we have also checked by simulations that using different distributions for the choice of does not affect the results qualitatively ( see fig .s1 in _ supplemental information _ ) .a low ratio generates genomes with tail maws while higher values cause them to only have bulk maws as in random texts discussed above ( fig .[ fig : tailmodelresult ] ) .this is in agreement with the observations for viruses in fig .[ fig : absentwordsplot ] : _ hepatitis b _ and _ h5n1 _ are viruses that replicate using an error prone reverse transcription and the maw distributions for their genomes lack the tail .in contrast , _ human herpers 5 _ virus and the _ cafeteria roenbergensis _ virus are dna viruses that use the higher fidelity dna replication mechanism and their genomes clearly have tail maws. 
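The random-text part of the model lends itself to a compact numerical check. The sketch below follows our reading of the inclusion-exclusion argument above for the maximally random case (all four letters equiprobable): a word of length l is a MAW if it is absent while both of its length-(l-1) subwords occur. Because parts of the displayed equations are garbled in this copy, the closed form coded here should be read as a plausible reconstruction rather than the published formula; the predicted support of the bulk is then simply the range of lengths where the expected count stays of order one.

```python
import numpy as np

def expected_maw_count(N, l):
    """Expected number of MAWs of length l in a maximally random
    sequence of length N (uniform letter frequencies)."""
    w = 4.0 ** (-l)            # probability of the full word at a fixed position
    ws = 4.0 ** (-(l - 1))     # probability of the length-(l-1) prefix/suffix
    p_absent   = (1.0 - w) ** (N - l + 1)             # word absent everywhere
    p_sub_abs  = (1.0 - ws) ** (N - l + 2)            # one subword absent
    p_both_abs = (1.0 - 2.0 * ws + w) ** (N - l + 1)  # both subwords absent
    p_maw = p_absent - (2.0 * p_sub_abs - p_both_abs)
    return 4.0 ** l * p_maw

def bulk_support(N, l_search=range(2, 64)):
    """Predicted edges of the bulk: shortest and longest lengths at
    which the expected MAW count is still at least one."""
    ls = [l for l in l_search if expected_maw_count(N, l) >= 1.0]
    return min(ls), max(ls)

# e.g. bulk_support(10**6) gives roughly (9, 21) for this reconstruction
```

The tail, by contrast, is reproduced by the copy-paste-mutate growth protocol described above. A minimal sketch follows, with parameter names of our own choosing (`lam` for the mean translocated length, `mut_per_event` for the expected number of point mutations between translocations); the paper's exact parameter values are not reproduced here.

```python
import random

def grow_genome(target_len, lam=1000, mut_per_event=10, n0=2000,
                alphabet="ACGT", weights=(0.25, 0.25, 0.25, 0.25), seed=0):
    """Iterative copy-paste-mutate genome growth: copy a segment of
    roughly Poisson(lam) length from one random position, insert it at
    another, then apply `mut_per_event` random point mutations drawn
    from `weights` (which also sets the GC content)."""
    rng = random.Random(seed)
    g = rng.choices(alphabet, weights=weights, k=n0)
    while len(g) < target_len:
        l = max(0, int(rng.gauss(lam, lam ** 0.5)))   # normal approx. to Poisson
        i = rng.randrange(len(g))
        j = rng.randrange(len(g))
        g[j:j] = g[i:i + l]                           # translocation
        for _ in range(mut_per_event):                # point mutations
            g[rng.randrange(len(g))] = rng.choices(alphabet, weights=weights)[0]
    return "".join(g[:target_len])
```

A small ratio of mutations to translocated length leaves long exact repeats in the genome and therefore produces tail MAWs, while a large ratio destroys them and leaves only the bulk, consistent with the contrast drawn above between reverse-transcribing and DNA viruses.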
ratio .the curves were obtained using and a constant .[ fig : tailmodelresult ] ]equations ( [ eq : lmin ] ) and ( [ eq : minl - gen ] ) can be used to estimate the length of the shortest absent words .[ fig : lminallgenomes ] compares the prediction of of the simplest estimate in eq .[ eq : lmin ] to the length of the shortest maw for a large set of genomes .the estimator eq .[ eq : lmin ] is expected to be most accurate only for genomes with neutral gc content .the figure reveals that genomes of comparable sizes typically vary in their by about nt , and that our estimator captures very well the upper values in this distribution . using eq .[ eq : minl - gen ] only improves the predictions for genomes with much biased gc content ( or less ) and leads to results in line with the earlier published estimator by wu et al . , see appendix c and table s1 in _supplemental information_. this analysis shows that , contrary to the conclusion of , there is no need to invoke a biological mechanism to explain the length of the shortest maw ; it is instead a property of rare events when sampling from a random distribution .indeed , the estimator eq .[ eq : minl - gen ] gives the length of the shortest human maw as and not , only one nucleotide away from the correct answer ( ) . .[ fig : lminallgenomes ] ]the human herpes virus 5 , a double stranded dna virus with a linear genome of kbp , has four very long maws with lengths of nt for two of them , and nt for two others , all other maws being much shorter ( nt or less ) . the cores of these four maws come from three regions , two of them located at the very beginning and very end of the genome , and the third at position kbp made up of the juxtaposition of the reverse complements of the two others .these regions are annotated as repeated and regulatory .based on the ncbi blast webservice , these particular sequences are highly conserved ( or more ) in numerous strains of the virus and do not seem to have homologues in any other species : the closely related _ human herpes virus 2 _ shows sequences with no more than similarities to these maw cores . in _ e.coli _ and _ e. faecalis _ , the longest maws with lengths between and nt all originate from rrna regions : a set of genes present in a few copies made of highly conserved regions with minor variations between the copies . for yeast ,the four longest maws ( nt ) originate from the two copies of rrna rdn37 on chromosome xii , another four ( nt ) are caused by copies of the region containing pau1 to vth2 on chromosome x and pau4 to vth2 on chromosome ix and two more ( nt ) originate from the copies of gene yrf on chromosome vii and xvi ( yrf is present in at least copies on the yeast genome ) .we performed an extensive search for all occurrences of the cores of every maw found in the organisms mentioned above and considered the density of maw cores along the genome for maws in the tail ( see fig .s2 in _ supplemental information _ , ) . except for the human herpes virus 5 that only has few maws which cluster in the repeated segments discussed above, maw cores do not appear to be preferentially located in any specific regions on the genome scale . a more detailed analysis ( fig .[ fig : distribmaw2 ] ) however reveals that , while cores of maws from the bulk appear uniformly distributed , those from the tail cluster downstream of ends of annotated coding dna sequences ( cds ) ( i.e. in the utrs and terminator sequences ) . 
a similar yet weaker effectcan be observed upstream of the start of annotated cds ( the utr ) .+ by definition , a maw core corresponds to a repeated region on the genome immediately surrounded by nucleotides varying between the copies .exact repeated regions lead to only a few maws with cores corresponding to that repeat . introducing a few random changes in such regionscreates more but shorter maws , the cores of which are the sub - strings common to two or more regions .a high density of maw cores in a family of regions such as the utrs thus indicates that they share a limited set of building blocks , implying a similar set of evolutionary constraints or a common origin .we now consider the lengths of the longest maw found in the genomes of numerous organisms and viruses . we observe that this length generally lies between and a length of about of the genome size ( fig .[ fig : maxlength ] ) .viruses are the class showing the largest spread in the length of their longest maws .many viruses are close to , particularly for those with shorter genomes , confirming our previously mentioned observation that some lack the tail and are thus closer to random texts .nevertheless , a few viral genomes have maws longer than of the genome length , _i.e. _ proportionally longer than in any living organism .the figure suggests that bacteria have on average slightly longer maws than archaea , but overall no clear distinctions between the four types of genomes can be noted based on the length of maws alone , suggesting that the mechanisms behind evolution of all organisms and viruses influence the maws distribution in the same way . a more detailed analysis of the data presented in fig .[ fig : distribmaw2 ] shows that organisms have the longest maws closest to the lower bound of the tail ( ) or the observed upper bound of of the genome length share some common traits . in bacteria ,the genomes having their longest maws closest to are two strains from the _ buchnera aphidicola _ species , two strains of _ candidatus carsonella ruddii _, one _ candidatus phytoplasma solani _ and one _ bacteroides uniformis_. while the last is a putative bacterial species living in human feces , the five other species are intra - cellular symbiotic or parasitic gammaproteobacteria in plants or insects . among eukaryotes ,the same analysis gives us _ plasmodium gaboni _, an agent responsible for malaria , a species of _ cryptosporidium _ , another family of intracellular parasites found in drinking water and _ chromera velia _ , a photosynthetic organism from the same apicomplexa phylum as plasmodium , which is remarkable in this class for its ability to survive outside a host and is of particular interest for studying the origin of photosynthesis in eukaryotes . for archaea, we find _ candidatus parvarchaeum acidiphilum _ and _ c. p. acidophilus _ , which are two organisms with short genomes ( and kbp ) living in low ph drainage water from the richmond mine in nothern california , and an uncultivated hyperthermophilic archaea `` scgc aaa471-e16 '' of the aigarchaeota phylum .additionally , we searched for maws in human mithocondrial genomes with lengths between and bp and found that the longest maws are only or nt long , while for these genomes . among bacteria having their longest maw close to 10% of the genome length , we find several strains of _ e. 
coli _ , _ francisella tularensis _ , _ shewanella baltica _ , _ methylobacillus flagellatus _ , _ xanthomonas oryzae _ and a species of the _ wolbachia _ genus .all of these are facultative or obligatory aerobes and are a lot more widespread than the bacteria listed in the previous paragraph .at least the four first species are free - living and commonly cultured in labs ._ x. oryzae _ is a pathogen affecting rice residing in the intercellular spaces ._ wolbachia _ species are a very common intracellular parasites or symbiotes living in arthropods . among eukaryotes , in addition to human and mouse , we find _ dictyostelium discoideum _ , an organism living in soil that changes from uni- to multi - cellular during its life cycle , and _ thalassiosira pseudonana _ a unicellular algea commonly found in marine phytoplankton .finally , archaea with the longest maws are _ methanococcus voltae _, a mesophilic methanogen , _ halobacterium salinarum _ , a halophilic marine obligate aerobic archeon also found in food such as salted pork or sausages , and _ halalkalicoccus jeotgali _ , another halophilic archeon isolated from salted sea - food . to summarize, it appears from these results that development in specific environmental niches that offer rather stable conditions , and particularly the inside of the cell of another organism , is , albeit with some exceptions , associated with short maws . on the other hand ,aerobic life and widespread presence in changing and diverse environments such as soil , sea water or food is generally associated with long maws .intracellular organisms have a well known tendency to reduce the size of their genomes and increase error rate for dna replication due to elimination of error correction mechanisms .this translates into a high value for in our random model for the tail and explains why their longest maws are short .in particular , we note that _ b. aphidicola _ ( genome size of kbp ) , brought out by our analysis , is known to have the highest mutation rate among all prokaryotes and indeed its longest maw is as short as nt . as for organisms specialized for a niche environment, one may hypothesize that proliferation speed is more important than replication fidelity .reasons for a widespread presence in changing environments to increase the length of the longest maws are less clear .multicellular eukaryotic organisms do not show particularly high dna replication fidelity .the fidelity of _ e. coli _ is fairly good at about 3.5x the one of human germline , but it is unclear if this is enough to make it stand out from other bacteria . we speculate that soil , sea water or food , which are likely to contain many types of microorganisms , may favor species more likely to undergo horizontal gene transfers , thus increasing the rate of translocation events and decreasing in our model , while leaving the _ per generation _ mutation rate unchanged . 
.the dashed line represents the estimator in eq .[ eq : lmax ] and the dotted line .[ fig : maxlength ] ]we have proposed a two - parts model explaining the unusual shape of the length distribution of minimal absent words ( maws ) in genomes .the first part of the model quantitatively reproduces the bulk of the distribution by considering the genome as a random text with random and independent letters .the second part is a stochastic algorithm grounded in basic principles of how genomes evolve , through translocation events and mutations , that qualitatively reproduces the behavior in the tail .our theory provides an estimator for the length of the shortest maws that is remarkably simple and captures well the global trend observed in large numbers of genomes from all sorts of organisms and viruses .considerations about the longest maw in a genome reveal sets of organisms sharing common high - level features such as the type of environment they live in .we have shown arguments for believing that organisms and viruses having few tail maws do so because of a low replication fidelity .why some organisms such as _ e. coli _ have long tail maws is less clear and replication fidelity alone does not seem to be a sufficient reason . finally , we have introduced the concept of maw cores , sequences present on the genome that tell us about what causes the existence of their parent maw .we have shown that , while cores from bulk maws do not seem biologically relevant , cores from tail maws cluster in regions of the genome with special roles , namely ribosomal rnas and untranslated regions surrounding coding regions of genes , a feature that can not be explained by a stochastic protocol that ignores the biological roles of the strings it manipulates .viral and bacterial genomes were downloaded under the form of the `` all.fna '' archives from the `` genomes / viruses/ '' and `` genomes / bacteria '' from the ncbi database on 17 - 18 may 2015 respectively .the norway spruce s genome was downloaded from the `` spruce genome project'' homepage and the yeast genome strain 288c from saccharomyces genome database .genomes of other eukaryotes and archaeas were downloaded from the ncbi database at several different dates over the period may - june 2015 .the human mithocondrial genomes were downloaded from the human mitochondrial genome database ( mtdb) in early september 2015 .all maws were computed using the software provided by pinho et al . in .the software was run taking into account the reverse - complementary strand ( -rc command line switch ) and requesting maws ( -n command line option ) with length up to five million nucleotides , i.e. 
much longer than the expected length of the longest maws .the search for maws was performed on commodity desktop computers for all but the human , mouse , and norway spruce genomes , for which the computer `` ellen '' from the center for high performance computing ( pdc ) at kth was used .localization of occurrences of maw cores on the genomes for figures [ fig : distribmaw2 ] and s2 was done by aligning these subwords to their respective genomes using bowtie2 , allowing only strict alignments ( command line option -v 0 ) .identical cores from different maws are counted independently in the coverage .this research is supported by the swedish science council through grant 621 - 2012 - 2982 ( ea ) , by the academy of finland through its center of excellence coin ( ea ) , and by the natural science foundation of china through grant 11225526 ( hjz ) .ea thanks the hospitality of kitpc and hjz thanks the hospitality of kth .the authors thank profs rdiger urbanke and nicolas macris , and dr .franoise wessner for valuable discussions , and pdc , the center for high performance computing at kth , for access to computational resources needed to analyze large genomes .amir , a , zeisel , a , zuk , o , elgart , m , stern , s , shamir , o , turnbaugh , j , soen , y , & shental , n. ( 2013 ) high - resolution microbial community reconstruction by integrating short reads from multiple 16s rrna regions . .chatterjee , s , koslicki , d , dong , s , innocenti , n , cheng , l , lan , y , vehkaper , m , skoglund , m , rasmussen , l. k , aurell , e , & corander , j. ( 2014 ) sek : sparsity exploiting k - mer - based estimation of bacterial community composition ., 24232431 .fofanov , y , fofanov , y , & pettitt , b. ( 2002 ) _ counting array algorithms for the problem of finding appearances of all possible patterns of size in a sequence_. ( w.m . keck center for computational and structural biology ) .innocenti , n , golumbeanu , m , dhroul , a. f , lacoux , c , bonnin , r. a , kennedy , s. p , wessner , f , serror , p , bouloc , p , repoila , f , & aurell , e. ( 2015 ) whole genome mapping of 5 ends in bacteria by tagged sequencing : a comprehensive view in _ enterococcus faecalis_. , 10181030 .quaglino , f , zhao , y , casati , p , bulgari , d , bianco , p. a , wei , w , & davis , r. e. ( 2013 ) _candidatus phytoplasma solani_ , a novel taxon associated with stolbur and bois noir related diseases of plants .ijs0 .van ham , r. c. h. j , kamerbeek , j , palacios , c , rausell , c , abascal , f , bastolla , u , fernndez , j. m , jimnez , l , postigo , m , silva , f. j , tamames , j , viguera , e , latorre , a , valencia , a , morn , f , & moya , a. ( 2003 ) reductive genome evolution in _buchnera aphidicola_. , 581586 .moore , r. b , obornk , m , janoukovec , j , chrudimsky , t , vancov , m , green , d. h , wright , s. w , davies , n. w , bolch , c. j , heimann , k , et al .( 2008 ) a photosynthetic alveolate closely related to apicomplexan parasites . , 959963 .baker , b. j , comolli , l. r , dick , g. j , hauser , l. j , hyatt , d , dill , b. d , land , m. l , verberkmoes , n. c , hettich , r. l , & banfield , j. f. ( 2010 ) enigmatic , ultrasmall , uncultivated archaea ., 88068811 .rinke , c , schwientek , p , sczyrba , a , ivanova , n. n , anderson , i. j , cheng , j .- f , darling , a , malfatti , s , swan , b. k , gies , e. a , dodsworth , j. a , hedlund , b. p , tsiamis , g , sievert , s. m , liu , w .-t , eisen , j. a , hallam , s. j , kyrpides , n. c , stepanauskas , r , rubin , e. 
m , hugenholtz , p , & woyke , t. ( 2013 ) insights into the phylogeny and coding potential of microbial dark matter ., 431437 .roh , s. w , nam , y .- d , chang , h .-w , sung , y , kim , k .-h , oh , h .-m , & bae , j .- w . ( 2007 ) _ halalkalicoccus jeotgali _ sp . nov . , a halophilic archaeon from shrimp jeotgal , a traditional korean fermented seafood ., 22962298 .nystedt , b , street , n. r , wetterbom , a , zuccolo , a , lin , y .- c , scofield , d. g , vezzi , f , delhomme , n , giacomello , s , alexeyenko , a , vicedomini , r , sahlin , k , sherwood , e , elfstrand , m , gramzow , l , holmberg , k , hallman , j , keech , o , klasson , l , koriabine , m , kucukoglu , m , kaller , m , luthman , j , lysholm , f , niittyla , t , olson , a , rilakovic , n , ritland , c , rossello , j. a , sena , j , svensson , t , talavera - lopez , c , theiszen , g , tuominen , h , vanneste , k , wu , z .-q , zhang , b , zerbe , p , arvestad , l , bhalerao , r , bohlmann , j , bousquet , j , garcia gil , r , hvidsten , t. r , de jong , p , mackay , j , morgante , m , ritland , k , sundberg , b , lee thompson , s , van de peer , y , andersson , b , nilsson , o , ingvarsson , p. k , lundeberg , j , & jansson , s. ( 2013 ) the norway spruce genome sequence and conifer genome evolution ., 579584 .engel , s. r , dietrich , f. s , fisk , d. g , binkley , g , balakrishnan , r , costanzo , m. c , dwight , s. s , hitz , b. c , karra , k , nash , r. s , weng , s , wong , e. d , lloyd , p , skrzypek , m. s , miyasato , s. r , simison , m , & cherry , j. m. ( 2014 ) the reference genome sequence of _ saccharomyces cerevisiae _ : then and now ., 389398 . | minimal absent words ( maw ) of a genomic sequence are subsequences that are absent themselves but the subwords of which are all present in the sequence . the characteristic distribution of genomic maws as a function of their length has been observed to be qualitatively similar for all living organisms , the bulk being rather short , and only relatively few being long . it has been an open issue whether the reason behind this phenomenon is statistical or reflects a biological mechanism , and what biological information is contained in absent words . in this work we demonstrate that the bulk can be described by a probabilistic model of sampling words from random sequences , while the tail of long maws is of biological origin . we introduce the novel concept of a core of a minimal absent word , which are sequences present in the genome and closest to a given maw . we show that in bacteria and yeast the cores of the longest maws , which exist in two or more copies , are located in highly conserved regions the most prominent example being ribosomal rnas ( rrnas ) . we also show that while the distribution of the cores of long maws is roughly uniform over these genomes on a coarse - grained level , on a more detailed level it is strongly enhanced in 3 untranslated regions ( utrs ) and , to a lesser extent , also in 5 utrs . this indicates that maws and associated maw cores correspond to fine - tuned evolutionary relationships , and suggest that they can be more widely used as markers for genomic complexity . enomic sequences are texts in languages shaped by evolution . the simplest statistical properties of these languages are short - range dependencies , ranging from single - nucleotide frequencies ( gc content ) to -step markov models , both of which are central to gene prediction and many other bioinformatic tasks . 
more complex characteristics , such as abundances of -mers , sub - sequences of length , have applications to classification of genomic sequences , and _ e.g. _ to fast computations of species abundancies in metagenomic data . the reverse image of words present are absent words ( aws ) , subsequences which actually can not be found in a text . in genomics the concept was first introduced around 15 years ago for fragment assembly and for species identification , and later developed for inter- and intra - species comparisons as well as for phylogeny construction . a practical application is to the design of molecular bar codes such as in the tagrna - seq protocol recently introduced by us to distinguish primary and processed transcripts in bacteria . short sequences or tags are ligated to transcript ends , and reads from processed and primary transcripts can be distinguished _ in silico _ after sequencing based on the tags . for this to be possible it is crucial that the tags do not match any subsequence of the genome under study , _ i.e. _ that they correspond to absent words . in a further recent study we also showed that the same method allows to separate true antisense transcripts from sequencing artifacts giving a high - fidelity high - throughput antisense transcript discovery protocol . in these as in other biotechnological applications there is an interest in finding short absent words , preferably additionally with some tolerance . minimal absent words ( maw ) are absent words which can not be found by concatenating a substring to another absent word . all the subsequences of a maw are present in the text . maws in genomic sequences have been addressed repeatedly as these obviously form a basis for the derived set of all absent words . furthermore , while the number of absent words grows exponentially with their length , because new aws can be built by adding letters to other aws , the number of maws for genomes shows a drastically different behavior , as illustrated below in fig . [ fig : absentwordsplot](a ) and previously reported in the literature . the behavior can be summarized as there being one or more shortest minimal absent word of a length which we will denote , a maximum of the distribution at a length we will denote , and a very slow decay of the distribution for large . in human is equal to , as first found in , is equal to , there being about billion maws of that length , while the support of the distribution extends to around ( fig . [ fig : absentwordsplot](c ) and ( d ) ) . the total number of human maws is about eight billion . as already found in the very end of the distribution depends on the genome assembly ; for human genome assembly grch38.p2 the three longest maws are , and nt in length . ( a ) ( b ) ( c ) ( d ) several aspects of this distribution are interesting . first , in a four - letter alphabet there are possible subsequences of length , but in a text of length only subsequences of length actually appear . if the human genome were a completely random string of letters one would therefore expect the shortest maw to be of length the fact that is considerably shorter ( ) is therefore already an indication of a systematic bias , in attributed to the hypermutability of cpg sites . we will return to this point below . more intriguing is the observation that the overwhelming majority of the maws lie in a smooth distribution around , and then a small minority are found at longer lengths . 
we will call the first part of the distribution the _ bulk _ and the second the _ tail_. we separate the tail from the bulk by a cut - off which we describe below ; the human is , a typical number for larger genomes , while the _ escherichia coli _ is . using this separation there are about million human tail maws , about of the total , while there are _ e. coli _ tail maws , about of the total . the effect is qualitatively the same for , as far as we are aware of , all eukaryotic , archeal and baterial genomes analyzed in the literature , as well as all tested by us . only a few viruses with short genomes are exceptions to this rule and show only the bulk , see fig . [ fig : absentwordsplot](c ) . the questions we want to answer in this work are why the distributions of maws are described by the bulk and the tail . can these be understood quantitatively ? do they carry biological information or are they some kind of sampling effects ? can one make further observations ? we will show that both the bulk and the tail can be described probabilistically , but in two very different ways . the bulk of the maw distribution arises from sampling words from finite random sequences and are contained in an interval $ ] , where was introduced above and is a good predictor of , the actual length of the shortest aws . to the best of our knowledge this has not been shown previously , and although our analysis uses only elementary considerations , they have to be combined in a somewhat intricate manner . the distribution of bulk maws , which comprise the vast majority of maws in all genomic sequences , can hence be seen as nothing more than a complicated transformation of simple statistical properties of the sequence . in fact , excellent results are obtained taking only the single nucleotide composition into account ( fig [ fig : absentwordsplot](b ) ) . nevertheless , the tail maws are different , and can be described by a statistical model of genome growth by a copy paste mutate mechanism similar to the one presented in . we show that the distributions of the tail maws vary , both in the data and in the model . the human and the mouse maw tail distributions follow approximately a power - law , but this seems to be more the exception than the rule ; bacteria and yeast as well as _ e.g. _ _ picea abies _ ( norway spruce ) show a cross - over behavior to a largest maw length . for bacteria this largest length ranges from hundreds to thousands ; for _ p. abies _ it is around 30 000 ; while for human and mouse the tail maw distribution reaches up to one million , without cross - over behavior ( fig [ fig : absentwordsplot](d ) ) . from the definition , any subword obtained by removing letters from the start or end of a maw is present in the sequence . in particular , removing the first and last letters of a maw leads to a subword that is present at least twice , which we denote here as a _ maw core_. maws made of a repeat of the same letter are an exception to this rule as they can have the two copies of their cores overlapping each other , see appendix a in _ supplemental information_. maw cores can be considered as the causes that create the maws and their location on the genome combined with functional information from the annotation tells us about their biological significance . finding all the occurrences on a genome of a given word ( such as a maw core ) is the very common bioinformatic task of alignment , which can be done quickly and efficiently using one of the many software packages available . 
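Both notions — MAWs and their cores — are easy to make concrete on a toy sequence. The brute-force enumeration below is only practical for very short strings (dedicated software is used for real genomes, as noted above), and the naive occurrence scan stands in for the exact-alignment step; both are illustrative sketches rather than the tools used in the paper.

```python
def minimal_absent_words(seq, max_len=12, alphabet="ACGT"):
    """Brute-force MAW enumeration for a short sequence: a word is a
    MAW if it is absent from `seq` while both the word obtained by
    dropping its first letter and the one obtained by dropping its
    last letter are present."""
    present = {seq[i:i + l] for l in range(1, max_len + 1)
               for i in range(len(seq) - l + 1)}
    maws = []
    for l in range(2, max_len + 1):
        for w in present:                 # candidate = one-letter extension of a present word
            if len(w) != l - 1:
                continue
            for a in alphabet:
                cand = a + w
                if cand not in present and cand[:-1] in present:
                    maws.append(cand)
    return sorted(set(maws))

def maw_core(maw):
    """Core of a MAW: drop its first and last letters; by definition
    the core occurs at least twice in the sequence (up to the special
    case of single-letter repeats mentioned in the text)."""
    return maw[1:-1]

def core_positions(seq, core):
    """All occurrences of a core, by naive exact search."""
    hits, start = [], seq.find(core)
    while start != -1:
        hits.append(start)
        start = seq.find(core, start + 1)
    return hits

# Example calls: minimal_absent_words("ACGTACGGTACC", max_len=6)
#                core_positions(genome_string, maw_core(some_maw))
```

Mapping `core_positions` over the cores of tail MAWs gives, in miniature, the density-along-the-genome analysis discussed next.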
in bacteria and yeast , the cores from the longest maws are predominantly found in regions coding for ribosomal rnas ( rrnas ) , regions present in multiple copies on the genome and under high evolutionary pressure as their sequence determines their enzymatic properties , required for protein synthesis and vital to every living cell . at the global scale , it appears that maw cores obtained from maws in the bulk are distributed roughly uniformly over the genome while those from the tail cluster in utrs and , to a lesser extent , also in utrs . these regions are important for post - transcriptional regulation , and thus likely to be under evolutionary pressure similarly to rrnas . we end this introduction by noting that from a linguistic perspective a language can be described by its list of _ forbidden _ sub - sequences , or the list of its forbidden words . minimal forbidden words relate to forbidden words as maws to absent words , and in a text of infinite length the lists of maws and minimal forbidden words would agree . if there is a finite list of minimal forbidden words the resulting language lies on the lowest level of regular languages in the chomsky hierarchy , and is hence relatively simple , while a complex set of instructions , such as a genome , is expected to correspond to a more complex language , with many layers of meaning . such aspects have been exploited in cellular automata theory and in dynamical systems theory , and are perhaps relevant to genomics as well . the present investigation is however focused on properties of texts of finite length , for which minimal forbidden words and maws are quite different . |
recent years have seen a surge of modelling particle dynamics from a discrete potential theoretic perspective .the models , which usually run under the heading _ aggregation models _ , in particular cases boil down to ( harmonic / poisson ) redistribution of a given initial mass ( sandpile ) according to some prescribed governing rules .the most famous and well - known model is the poincar s balayage , where a given initial mass distribution ( in a given domain ) is to be redistributed ( or mapped ) to another distribution on the boundary of the domain where is uniquely defined through , and .this model uses continuous amounts of mass instead of discrete ones ; the latter is more common in chip firing on graphs ( see for instance ) . a completely different model called _ partial - balayage _ ( see ) aims at finding a body ( domain ) that is gravi - equivalent with the given initial mass .this problem , in turn , is equivalent to variational inequalities and the so - called obstacle problem .the discrete version of this problem was ( probably for the first time ) studied by d. zidarov , where he performed ( what is now called ) a _ divisible sandpile _ model ; see page 108 - 118 .levine , and levine - peres , started a systematic study of such problems , proving , among other things , existence of scaling limit for divisible sandpiles .although zidarov was the first to consider such a problem , the mathematical rigour is to be attributed to levine , and levine - peres , .the divisible sandpile , which is of particular relevance to our paper , is a growth model on which amounts to redistribution of a given continuous mass .the redistribution of mass takes place according to a ( given ) simple rule : each lattice point can , and eventually must topple if it already has more than a prescribed amount of mass ( sand ) .the amount to topple is the excess , which is divided between all the neighbouring lattice points equally or according to a governing background pde . the scaling limit of this model , when the lattice spacing tends to 0 , and the amount of mass is scaled properly , leads to the _ obstacle problem _ in .the divisible sandpile model of zidarov , and levine - peres also relates to a well - known problem in potential theory , the so - called quadrature domains ( qd ) . a quadrature domain ( with respect to a given measure ) is a domain that has the same exterior newtonian potential ( with uniform density ) as that of the measure .hence , potential theoretically and are equivalent in the free space ; i.e. outside the support of one has , where these are the newtonian potentials of , respectively .the odometer function of levine - peres ( which represents the amount of mass emitted from each lattice site ) corresponds to the difference between the above potentials ( up to a normalization constant ) , i.e. , where is and in .this , expressed differently , means that for all harmonic and integrable over ( see ) .in many other ( related ) free boundary value problems ( known as bernoulli type ) the zero boundary gradient in the above takes a different turn , and is a prescribed ( strictly ) non - zero function , and the volume potential in is replaced by surface / single layer potential ( of the a priori unknown domain ) . in terms of sandpile redistributionthis means to find a domain such that the given initial mass in is replaced by a prescribed mass on . 
here is the standard surface measure on , and a given function ; for the simplest case the counterpart of would be for all harmonic on a neighbourhood of .such domains are referred to as quadrature surfaces , with corresponding pde with , where is a normalization constant .this problem is well - studied in the continuum case and there is a large amount of literature concerning existence , regularity and geometric properties of both solutions and the free boundary , see , , and the references therein . in our searchto find a model for the sandpile particle dynamics to model we came across a few models that we ( naively ) thought could and should be the answer to our quest .however , running numerical simulations ( without any theoretical attempts ) it became apparent that the models we suggested are far from the bernoulli type free boundaries admitting quadrature identity , in continuum case , should be a sphere . ] .in particular , redistribution of an initial point mass does not converge ( numerically ) to a sphere , but to a shape with a boundary remotely resembling the boundary of the abelian sandpile ( see figure [ fig-1 ] ; cf .* figure 1 ) , or ( * ? ? ? * figure 2 ) for instance ) .[ fig : sub1 ] [ fig : sub2 ] [ fig : sub1 ] [ fig : sub2 ] notwithstanding this , our model seems to present a new alluring and fascinating phenomenon not considered earlier , neither by combinatorics nor free boundary communities . hence the birth " of this article .since the boundary of a set will be a prime object in the analysis of the paper , we will assume throughout the text that to avoid uninteresting cases .given an initial mass on , we want to find a model that builds a domain such that the corresponding ( discrete ) balayage measure on the boundary ( see ) is prescribed .such a model seems for the moment infeasible for us .a slight variation of it asks to find a canonical " domain such that the boundary mass stays below the prescribed threshold .since larger domains ( with reasonable geometry ) have larger boundaries , and hence a greater amount of boundary points , we should expect that one can always find a solution to our problem by taking a very large domain containing the initial mass .this , in particular , suggests that a canonical domain should be the smallest domain among all domains with the property that the boundary mass is below the prescribed mass , which we shall assume to be uniform mass .results in a so - called bernoulli free boundary problem , where the green s potential of the mass , in the sought domain , has boundary gradient , in case of uniform distribution , i.e. . ] departing from the above point of view of canonicity one may ask whether there is a natural lattice growth model which corresponds to this minimality . with . ]it turns out that there actually is a model which seemingly is more complicated than the divisible sandpile model described above . to introduce our model , which we call _ boundary sandpile _, we need a few formal definitions ) , we think it might be more convenient without any further specification with descriptive names to use the term boundary sandpile . ] .through the text various letters will stand for constants which may change from formulae to formula .we use lower case to denote small constants , and upper case for large constants . call two lattice points _ neighbours _ and write if , where -norm of is the sum of absolute values of its coordinates .clearly iff for some where is the -th vector of the standard basis of . 
for any non - empty set , this concept of lattice adjacency induces a natural graph having vertices on .we call such the _ lattice graph _ of . next , for a set define and call them respectively the * boundary * of and the * interior * of . for the convenience of the exposition we have chosen to include the boundary into the set . for recall the definition of the _ discrete laplace operator _ denoted by and acting on a function by , \qquad x\in h{{\mathbb z}}^d,\ ] ] where , as for , means that is a neighbour of in the lattice . throughout the text , when , i.e. on a unit scale lattice , we will write for unless explicitly stated otherwise. for we set for the closed discrete ball of radius including its 1-neighbourhood .it is clear from the definition that is from the interior of iff .let now be a given ( mass ) distribution on , i.e. a non - negative function supported on finitely many sites of .fix also a threshold . for each integer we inductively construct a sequence of sets , mass distributions , and functions as follows .start with and .for an integer time a particular site is called * unstable * if either of the following holds : * and , * and .otherwise a site is called * stable*. we call the number the * ( boundary ) capacity * of the model and refer to as the set of _ visited sites _ at time .any unstable site can _ topple _ by distributing all its mass equally among its lattice neighbours .more precisely , for each we choose an unstable site and define , and , , where is the kronecker delta symbol at the origin , i.e. equals 1 if and is zero otherwise for .we call the _ odometer _ function at time . for the sake of convenience, we do allow the toppling to be applied to a stable site , as an identity operator , i.e. if at time a toppling is applied to a stable site , then we set , and .we say that toppling is _ legal _ , if is unstable .if for some there are no unstable sites , the process is terminated .we call this model * boundary sandpile * ( ) and denote by , where is the initial distribution , and is the boundary capacity of the model . it is clear that the triple may depend on the choice of the unstable sites , i.e. the toppling sequence . later onwe will see that for a suitable class of toppling sequences stable configurations exist and are identical ( see propositions [ prop - stabil - toppling ] and [ prop - abelian ] ) .observe that from the definition of discrete laplacian and ( [ mu - k+1-def ] ) we easily see that for each one has i.e. the laplacian of represents the net gain of mass for a site at time .we further define a few concepts needed for our analysis . for a , and a ( toppling ) sequence , we say that is * stabilizing * if there exists a distribution such that as for any .we call * infinitive * on a set if every appears in the sequence infinitely often . if the set on which is infinitive is not specified , then it is assumed to be the entire . our main results concern general qualitative analysis of the boundary sandpile model , introduced in this paper . for any initial mass distributionwe prove well - posedness of the model ( propositions [ prop - stabil - toppling ] and [ prop - abelian ] ) , and canonical representation of the model in terms of the smallest super - solution among a certain class of functions ( theorem [ thm - canonical ] ) . 
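Before listing the remaining results, here is a minimal computational sketch of the toppling dynamics just defined. It performs legal topplings in an arbitrary scan order and, because exact stabilization in general requires infinitely many topplings, it truncates once every interior mass drops below a small tolerance; function and parameter names are ours, and the code is only meant to illustrate the rules of BS(μ0, κ), not to reproduce the exact algorithms given later in the paper.

```python
from collections import defaultdict

def boundary_sandpile(mu0, capacity, dim=2, tol=1e-12, max_topplings=10 ** 6):
    """One admissible toppling order for the boundary sandpile
    BS(mu0, capacity) on Z^d.  `mu0` maps lattice points (d-tuples of
    ints) to their initial mass.  Interior sites of the visited set
    topple whenever they carry mass (above `tol`), boundary sites only
    when their mass exceeds `capacity`; a toppling site sends its whole
    mass in equal parts to its 2d neighbours, which are added to the
    visited set.  Returns the visited set, the final mass distribution
    and the odometer (total mass emitted per site)."""
    def neighbours(x):
        for i in range(dim):
            for s in (1, -1):
                yield x[:i] + (x[i] + s,) + x[i + 1:]

    mu = defaultdict(float, mu0)
    visited = set(mu0)
    odometer = defaultdict(float)

    def pick_unstable():
        for x in sorted(visited):
            on_bdry = any(y not in visited for y in neighbours(x))
            threshold = capacity if on_bdry else tol
            if mu[x] > threshold:
                return x
        return None

    for _ in range(max_topplings):
        x = pick_unstable()
        if x is None:
            break
        m, mu[x] = mu[x], 0.0
        odometer[x] += m
        for y in neighbours(x):
            visited.add(y)
            mu[y] += m / (2 * dim)
    return visited, dict(mu), dict(odometer)

# Point mass of size 20 at the origin of Z^2 with boundary capacity 1:
V, mu, u = boundary_sandpile({(0, 0): 20.0}, capacity=1.0)
```

Running this with a large point source and plotting the visited set reproduces, qualitatively, the non-spherical clusters shown in the figures above that motivated the present study.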
specifying our analysis for point masses and using this minimality of the model ,we show directional monotonicity of the odometer function ( theorem [ thm - monotonicity ] ) .we then determine the reasonable size of the boundary capacity ( subsection [ sub - sec - heuristics ] ) and estimate the growth rate of the model in lemma [ lem - balls - inside - out ] .we prove a uniform lipschitz bound for the scaled odometer function in proposition [ lem - lipschitz - est ] . using these results obtained in discrete setting , in the final part of the paper , section [ sec - shape - analysis ] we study the scaling limit of the model in continuum space .in particular , we show that ( along a subsequence ) the scaled odometers , and hence the visited sites , converge to a continuum limit ( theorem [ thm - scaling - limit ] parts ( i ) and ( iii ) ) .we also prove that the free boundary of ( any ) scaling limit of the model is locally a lipschitz graph ( theorem [ thm - scaling - limit ] part ( v ) ) .in subsection [ subsec - asm ] we apply our methods developed for boundary sandpile model to classical abelian sandpile .most notably , we show that the boundary of the scaling limit of abelian sandpile corresponding to initial configuration of chips at a single vertex , has lipschitz boundary .[ rem - general - f ] it is worthwhile to remark that the boundary sandpile described above can be represented slightly different , using discrete partial derivatives . indeed , our model generates a set whose green s function ( corresponding to the given initial distribution of mass ) satisfies for a given , where . in light of this observation one may consider a wider class of problems in terms of general prescribed boundary mass given by , where defined on has to satisfy some properties ( ellipticity and other ) , yet to be found .the methods developed in this paper seem to have a good chance to go through for at least nice enough " function .this aspect we shall leave for interested reader to explore .in this section we prove two basic properties of the boundary sandpile model .namely , that the model needs to visit a finite number of sites in to reach a stable state , and that the final stable configuration is independent of the toppling sequence . for the proofs we will use some ideas from (* lemma 3.1 ) and ( * ? ? ?* lemma 3.1 ) .[ prop - stabil - toppling ] for a given any infinitive toppling sequence is stabilizing . let be the total mass of the sandpile , and set . for each let be the set of visited sites after invoking -th toppling in .let also and be respectively the odometer function and the distribution of mass after the -th site in has toppled .we first show that for any and for each satisfying one has indeed , let be the smallest integer such that , and let be the neighbour of that topples at time . since and we have , but as toppling of is legal ( since it is producing a previously unvisited site ) , we must have .consequently we get , and since did not topple at steps , as otherwise will be in the interior of , we get and hence ( [ mu - k - from - below ] ) . comparing ( [ mu - k - from - below ] ) with the total mass , for any we get latexmath:[\[\label{bound - on - bdry - points } upper bound for the number of newly emerged .the simple reason is that in general may have , say , arbitrarily large number of isolated points with small mass , which may never be visited again in the course of the process . ]boundary points of visited sites at any time of the process . 
now assume that is connected as a lattice graph .fix and let be such that and . for each integer consider the hyperplane .it is clear that contains at least one element of and these points are pairwise different for integer ] , where each in is replaced by its equivalence class in any ordering .clearly the new toppling sequence ] in ] .since the odometer corresponding to ] for the average of . with these preparationswe easily obtain the following .[ lem - graph - balayage ] for any map such that the pre - image of each is infinite , for all we have we will assume without loss of generality that the subgraph on is connected , as otherwise we can apply the argument that follows to each connected component of the subgraph .set and let be the diameter of the subgraph on , i.e. the maximal length of the shortest path between any two vertices .assume that the maximum of is attained at some , and take any shortest path from to the sink through the vertices of .let be such path where each .we have by definition . from the definition of can choose large enough such that the sequence is a subsequence of .but after toppling the total mass of is decreased by at least , and as , we have \leq \mathbb{e}[\mu ] \left(1- \frac{1}{|v_0| } \frac{1}{d_\ast^\rho } \right).\ ] ] since the average is reduced by a constant multiple the claim follows .[ rem - abelian - graph ] by adapting the proof of proposition [ prop - abelian ] ( in fact in a simpler form ) one can easily show that the amount of mass each vertex of the sink receives is independent of the toppling order , as long as each source vertex is toppled infinite number of times .[ rem - infinite - steps ] take a graph with two vertices and a single edge connecting them .let also each of these two vertices be attached to any number of sinks ( possibly none , but not simultaneously ) .then clearly one has to topple each vertex infinitely often in order to entirely transfer any given mass to the sinks .hence the requirement on each vertex to topple infinite number of times can not be eliminated .we will see shortly that one can avoid a straightforward application of topplings to transform mass from _ source _ to _sink_. by realizing this transference as a linear operator we will perform the mass transportation from source to sink effectively in one step . keeping the notation of lemma [ lem - graph - balayage ]let the source be and the mass vector be equal to .consider the toppling and for let denote the -th iterate of .then by lemma [ lem - graph - balayage ] we have that as and the goal is to determine how much mass each sink vertex will receive in the limit . due to the abelian property ( remark [ rem - abelian - graph ] )the amount of mass distributed to the sinks is independent of the toppling sequence , hence specifying the toppling order results in no loss of generality . to compute the final distribution of the mass on the sinkwe introduce matrices where we call the * mass transformation * matrix , and the * mass distribution * matrix .more precisely shows the proportion of which is present at vertex after we have applied the transformation to the graph .namely if is the new mass - vector after applying , then ( v_i ) = m_{i1 } \mu_0(v_1 ) + m_{i2 } \mu_0(v_2)+ ... 
+m_{i n } \mu_0(v_n).\ ] ] in a similar fashion shows the proportion of transferred to its neighbours by .thus each sink ( if any ) attached to gets mass equal to a simple bookkeeping procedure illustrated below in algorithm [ algo-1 ] allows us to compute these coefficients when we consequently topple .* input : * connected graph on source vertices initialize to _ identity _ and to _ zero _ matrices . * output : * matrices and [ algo-1 ] now observe that after the toppling , , is applied , the mass vector distributed to sinks becomes + ...+ d[m^{k-1 } \mu_0 ] = d \left [ \sum\limits_{i=0}^{k-1 } m^i \right ] \mu_0 \in { { \mathbb r}}^n.\ ] ] to sum the series in ( [ neumann - series ] ) we need the following .[ lem - e - value1 ] all eigenvalues of are less than 1 by absolute value .take any vector with all coordinates being non - negative . by virtue of ( [ exp - decay ] )there exists an integer and a constant such that \leq ( 1-c ) \mathbb{e } [ \mu_0].\ ] ] since has non - negative entries due to construction by algorithm [ algo-1 ] , from the last inequality we get now for any .hence , if is an eigenvector of corresponding to an eigenvalue , we get which implies completing the proof of the lemma .lemma [ lem - e - value1 ] implies that the matrix is invertible , where is the identity matrix .hence the neumann series of , as we have in ( [ neumann - series ] ) , is convergent . passing to limit in ( [ neumann - series ] ) as demonstrates that the total mass distributed to the sinks equals \mu_0 = d ( \mathrm{id}- m)^{-1 } \mu_0.\ ] ] observe that all mass is being transferred to the sink by lemma [ lem - graph - balayage ] , and hence there is no loss of mass .this shows that the linear operator defined by ( [ mass - trans - sink ] ) is non - degenerate .hence , the matrix is invertible as well .finally , each sink vertex attached to gets mass equal to the -th component of the vector \in { { \mathbb r}}^n.\ ] ] based on the above discussion , we next present a simple pseudocode , which allows to stabilize the given initial distribution in finite number of steps .* input : * initial distribution and boundary capacity initialize , the set of visited sites , as the support of populate a stack of sites from that need to be toppled .topple(v ) is the top element of the stack include in if not there already remove from construct , the lattice graph on vertices of construct the distribution matrices and for by algorithm [ algo-1 ] use and to distribute the mass from to by re - populate the stack by elements of that need to be toppled * output : * final set of visited sites and stable mass distribution on the boundary [ algo-2 ] in algorithm [ algo-2 ] we simply move the boundary of the set of visited sites as long as there is a possibility .then , we transfer at once all mass from the interior onto the boundary which requires a passage to the limit in a toppling procedure . 
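The one-shot interior-to-boundary transfer used in Algorithm [algo-2] boils down to the matrices M and D and the resolvent (Id − M)^{-1} discussed above. The following self-contained sketch illustrates the bookkeeping; the vertex indexing and the normalisation of D are our choices (the text's D may record a per-sink rather than per-vertex share), so it is an illustration of the idea rather than a transcription of Algorithm [algo-1].

```python
import numpy as np

def sweep_matrices(nbr, sinks):
    """One sweep of topplings over the n source vertices, in index
    order, tracked as linear maps of the initial mass vector.
    nbr[i]   : indices of the source neighbours of vertex i,
    sinks[i] : number of sink vertices attached to vertex i.
    Returns M (mass remaining on the sources after one sweep) and
    D (mass absorbed by the sinks attached to each vertex)."""
    n = len(nbr)
    S = np.eye(n)                  # row i = composition of the mass sitting at vertex i
    D = np.zeros((n, n))
    for i in range(n):
        share = S[i] / (len(nbr[i]) + sinks[i])
        S[i] = 0.0
        for k in nbr[i]:
            S[k] += share
        D[i] += sinks[i] * share
    return S, D

def sink_distribution(M, D, mu0):
    """Total mass delivered to the sinks attached to each vertex:
    D (Id - M)^{-1} mu0.  Id - M is invertible because every
    eigenvalue of M has modulus < 1 once each vertex can reach a sink."""
    n = len(mu0)
    return D @ np.linalg.solve(np.eye(n) - M, np.asarray(mu0, dtype=float))

# Two sources joined by an edge, one sink attached to each; unit mass on v0.
M, D = sweep_matrices(nbr=[[1], [0]], sinks=[1, 1])
print(sink_distribution(M, D, [1.0, 0.0]))   # [2/3, 1/3]: all mass reaches the sinks
```

Iterating the sweep directly, mu_k = M @ mu_{k-1}, and accumulating D @ mu_k reproduces the same numbers term by term; the solve above just sums that Neumann series in one step.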
consequently , the fact that this will produce the same final configuration as the original boundary sandpile process , is not a direct corollary of abelian property of the sandpile , but follows from lemma [ lem - interim ] .in particular , due to lemma [ lem - interim ] each iteration of the algorithm resuming from step 3 , starts with an initial distribution for which the two sandpiles and produce the same final configurations .this fact , coupled with proposition [ prop - stabil - toppling ] implies that algorithm [ algo-2 ] terminates in finite number of steps since the set of visited sites can not grow to infinity .depending on and , however , it is interesting to observe , that the same approach can stabilize the divisible sandpile of levine - peres in finite number of steps . ]we start with the following . * discrete maximum principle ( dmp ) : * let , be finite , and in the interior of . then .the proof of this important statement is an easy exercise ( see e.g. ( * ? ? ?* exercise 1.4.7 ) ) .we now show that the boundary sandpile requires the smallest possible domain in the discrete space , where it can stabilize itself .[ def - stab - pair ] for an initial distribution and a boundary capacity , we say that the pair is stabilizing for if the following holds : * is a finite set , * is a function satisfying in this setting , we will call odometer function .[ rem - stabilizing ] for any and let be the set of visited sites of the corresponding and be the odometer function .then clearly the pair is stabilizing in a sense of definition [ def - stab - pair ] .[ thm - canonical ] let be the set of visited sites of .then is the intersection of all for which there is a function such that the pair is stabilizing for .the set of stabilizing pairs is non - empty in view of remark [ rem - stabilizing ] . hence if is the intersection in the formulation of the theorem , then is finite and contains the support of .first , we show that for some the pair is stabilizing . to see this for ,it is enough to prove that for any two stabilizing pairs , the set has the property . to this end , consider the function in .clearly from which we get on .it is also clear that in .next , using the fact that vanishes on and is the pointwise minimum of and , from ( [ bdry - as - union ] ) we have now let be the unique solution to since is a subsolution in the interior of , from dmp we have in . combining this with the fact that both and vanish on we arrive at but then , where the last inequality is due to ( [ v0-on - v0 ] )hence , the pair is stabilizing and the claim for follows .we are now in a position to show . by definition of have .take any toppling sequence where for all and is infinitive might be empty , in which case is empty as well , that is no toppling occurs .this corresponds to the case , when the initial distribution is already stable . 
] on .observe that if is stabilizing for the original , then as the final configuration is unique by abelian property of the sandpile .hence , it is left to prove that is stabilizing .assume it is not .invoking , we get that all mass from the interior of is being redistributed onto by lemma [ lem - graph - balayage ] .we have , and since is not stabilizing it follows that the distribution of mass on obtained after applying is not stable .since no mass remains in the interior of it follows that the boundary of is unstable .let be the odometer function corresponding to the toppling .then , by definition we have let also be the odometer for which is stabilizing , whose existence was established above . by definition , satisfies ( [ u - in - int - v - star ] ) hence , due to uniqueness of solutions , we get that . but then where the inequality follows since the pair is stabilizing . this shows , that is stabilizing as well , which contradicts our assumption that the toppling was not stabilizing , and completes the proof of the theorem .[ def - stab - pair ] for a given we call _ the stabilizing pair _ of the sandpile if is minimal in a sense of theorem [ thm - canonical ] .an immediate reformulation of the previous theorem is the following characterization of the odometer as the smallest supersolution in a certain class of functions on . for let be the odometer function .then where .a useful corollary of the minimality principle , which is being used in algorithm [ algo-2 ] above , is the following .[ lem - interim ] let be the stabilizing pair for . fix any such that and let be the solution to then the final configurations of and are identical. the lemma states that if we balayage to first , and then balayage the resulting measure from to , the boundary measure produced on will be the same as if we had balayaged on .this is obvious from a potential theoretic and balayage perspective .it is however not straightforward from a dynamic point of view .set , which , by the definition of , is a distribution supported on .let be the stabilizing pair for .the aim is to show that and that the final configurations for and coincide .the latter is equivalent to equality on , which in turn is equivalent to which we will prove below .we have as well as on .clearly therefore on .we also have hence the pair is stabilizing for . as a consequence of theorem [ thm - canonical ] we obtain .we now prove the reverse inclusion .since , it follows that is supported in .hence , we may define as the unique solution of since it follows that on .this implies that the function is vanishing on .but we also have and by the uniqueness of solutions to the dirichlet problem , we infer that in .in particular it follows that which , by the stability of , implies that is stabilizing for .again , by the minimality principle we get , consequently .it is left to show ( [ config - equiv ] ) . 
since ,the function vanishes on and is identically 0 outside .hence in the interior of we have which shows that on and hence ( [ config - equiv ] ) .the lemma is proved .yet another important consequence of the minimality principle of theorem [ thm - canonical ] is a special directional monotonicity of the odometer function corresponding to an initial distribution concentrated at a single point .this key property of the boundary sandpile will prove crucial in studying the regularity of the boundary of scaling limit of the process .let be the set of mirror symmetry hyperplanes of the unit cube ^dd=2 d \geq 3 $ } , \end{cases}\ ] ] and is a constant depending on .now things are reduced to some elementary computations .we start with a lower bound of the lemma .consider the case of .fix a point such that there exists with .let .in view of the definition of , the neighbour of having the largest norm among all 4 neighbours must be on the boundary of . since we also have , it follows that as one can take for and for .assume we have , i.e. . then observe that since the random walk can exit from in one step through if started at .on the other hand , from the definition of we have for any and any , hence all terms in the sum of are non - negative .using this , for and as above we get this implies for any having a neighbour on .since vanishes on the bound of the lemma follows .the lower bound in case of is handled similarly .the upper bound of the lemma follows directly from ( [ g1])-([g2 ] ) and definition of the ball .the details , which we will omit , are elementary .the proof of the lemma is complete .[ lem - balls - inside - out ] there are dimension dependent constants such that if is the set of visited sites of , with large , then by lemma [ lem - mono - growth ] part ( b ) every site of is eventually visited by the sandpile if we keep increasing .thus we will assume that is so large that the discrete ball is inside , where is fixed from lemma [ lem - greens - laplace ] .let us also remark , that if for some we have the inclusion , then for all we get in view of lemma [ lem - mono - growth ] part ( a ) .let be as above ( see ) .then , the function satisfies from lemma [ lem - greens - laplace ] we have everywhere on .observe that if and and are chosen so that on , then by lemma [ lem - no - touch ] we have , i.e. stays strictly inside .we now choose a sequence of radii , where is as above , and for each we have and , where the latter simply means that we enlarge the balls by 1 lattice step at most .given this , as long as we have we get .we can therefore increase the radius up to obtaining by so the first inclusion of the current lemma .the upper bound can be obtained similarly , by starting with a ball of sufficiently large radius , containing , then shrinking the ball by at most 1 lattice step at a time and using the upper bound of lemma [ lem - greens - laplace ] instead .the proof of the lemma is complete .our next result gives a uniform lower bound on the odometer function , which will be needed in the analysis of the shape of scaling limit .for the proof we need a discrete version of harnack s inequality .[ thm - harnack ] for any there exists a constant such that if is harmonic in then for any . [ lem - bounds - below ]for any small there exists a constant depending on and dimension such that for any and each satisfying one has where is the stabilizing pair of .we first observe that the bound of the lemma holds near the origin , i.e. 
there exist dimension dependent constants and such that for any one has indeed , by lemma [ lem - balls - inside - out ] and dmp we have for some constant , where the green s function is defined in ( [ green - def ] ) .thanks to this bound , follows directly from asymptotics of green s function proved in ( * ? ? ?* proposition 1.5.9 ) for , and ( * ? ? ? * proposition 1.6.7 ) for .next , we reduce the general case of the lemma to by constructing a harnack chain leading from a given point to the ball . let and assume , since otherwise the estimate follows by .suppose , and denote , where is fixed from ( [ u - n - large - near-0 ] ) and from the formulation of the lemma .we get that the ball lies inside . by denote the orthogonal projection of onto , i.e. by directional monotonicity of theorem [ thm - monotonicity ] applied to the direction , and axial symmetry of the odometer given by corollary [ cor - o - symm ] , we have \big ) \cap { { \mathbb z}}^d \subset v_n.\ ] ] if the cylinder intersects the ball , the estimate of the lemma follows by harnack s inequality of theorem [ thm - harnack ] and upper estimate of lemma [ lem - balls - inside - out ] , where the latter is used to estimate the length of harnack chain . otherwise , if , denote , and . by construction .we have , due to harnack s inequality , that with a constant , hence it is enough to prove the lemma for replaced by .for that , we apply the same argument as we had for to .since has at most non - zero coordinates , this reduction procedure on coordinates will terminate in at most number of steps , where in the last step , the corresponding cylinder ( [ cylinder ] ) will intersect the ball implying the desired estimate of the lemma .the proof is complete .the purpose of this section is the study of scaling limit of the model when the initial distribution is concentrated at a point .we will show that after proper scaling of mass and the underlying lattice , there is a convergence along subsequences .the approach is via uniform gradient bounds for the odometers .combined with some results of the previous section , we will prove a certain non - degeneracy result of the converging shapes , as well as lipschitz regularity of the boundary of scaling limit . throughout this section, we assume that initial distribution of the sandpile equals , where and boundary capacity of the model is . for all , by denote the corresponding odometer function and by the set of visited sites .let be the green s function for , i.e. for any and if or . from (* proposition 4.6.2 ) for we have - g(x , y),\ ] ] where as before is defined from ( [ g - x - y ] ) , and is the first exit time from of the simple random walk started at . from the definition of we have where we have used the asymptotics ( [ g - x - y ] ) and mean - value theorem along with the choice of and to bound the first summand . from lemma [ lem - balls - inside - out ] and the choice of get that for any .this , coupled with ( [ g - x - y ] ) and mean - value theorem , for any implies combining this estimate with ( [ a1 ] ) we obtain we get from the definition of that for all , which together with ( [ a2 ] ) completes the proof of the lemma .recall that is the set of visited sites for mass .define where is the combinatorial distance . foreach consider the discrete derivative of , namely with .clearly is harmonic in .now , in view of the stability of the sandpile , for any such that we have with a constant . 
on the other hand , by lemma [ lem - grad - outside-0 ] we have if .since in the interior of , by dmp we get in .the same bound obviously works for .we have thus proved 1-step lipschitz bound , i.e. the estimate of the proposition if and are lattice neighbours . for the general case , take any such that , and consider the shortest lattice path connecting and and staying outside the ball .namely , let where and . clearly such path exists .it is also clear that for the length of the path we have where equivalence is with dimension dependent constants . using this andthe 1-step lipschitz bound already proved above , we obtain completing the proof of the proposition . for set , and define the scaled odometer by where . let also be the scaled set of visited sites for mass .clearly , is supported in the interior of . moreover , in view of lemma [ lem - balls - inside - out ] we have that the sets , are uniformly bounded and contain a ball of some fixed radius . in order to study the scaling limit of the model , we need to extend each to a function defined on .we will use a standard extension of which preserves its -laplacian .namely , for fixed define a function , where for each and any we have set . clearly for all .note that we are using the same notation for extended odometers . in what follows will stand for this extension . also , for a given set we write for the interior of in the next theorem .the following is our main result concerning scaling limit of the model . * locally uniformly in , * in in the sense of distributions , where is the dirac delta at the origin and denotes the continuous laplace operator , * if is the support of then , * setting , and considering the set of vectors for any with the property that is non - zero and is collinear to any of the vectors of , we have * the boundary of is locally a lipschitz graph . fix small , and for set extend each as 0 outside . from proposition [ lem - lipschitz - est ] and definition of , there exists a constant independent of such that we have , hence for any thanks to ( [ u - h - lipschitz ] ) we obtain where the second inequality with a constant follows from lemma [ lem - balls - inside - out ] .thus each , , when restricted to , is bounded uniformly in due to and is -lipschitz in view of .next , we extend each from to a lipschitz function on having the same lipschitz constant .namely , for a given define this is a well - known method of lipschitz extension , and it is not hard to verify that is a -lipschitz function on ( see e.g. ( * ? ? ?* theorem 2.3 ) ) and coincides with on .moreover , by construction we have that the family is uniformly bounded and is non - negative everywhere . observe also that for , and ( see ) by construction we have latexmath:[\[\label{u - h - u - h - est } by arzel - ascoli there is sequence as , and -lipschitz function such that locally uniformly in .due to ( [ u - h - u - h - est ] ) we get locally uniformly in . obviously on . for the support of we have to prove this take any from the _ l.h.s ._ of ( [ supp - outside-2delta ] ) .since is continuous on there is such that the closed ball and on for some . by constructionwe have that converges to uniformly on , hence there exists large enough such that on for any integer .but since agrees with on we get for any .this implies on hence for all . in particular, is from the _ r.h.s ._ of ( [ supp - outside-2delta ] ) . 
to see the reverse inclusion fix any from the _ r.h.s ._ of ( [ supp - outside-2delta ] ) .there exists large and small such that .we get , in particular , that for all . hence , by lemma [ lem - bounds - below ] and definition of one gets on with a constant uniform in .this bound , together with ( [ u - h - u - h - est ] ) implies , accordingly lies in the _ l.h.s ._ of ( [ supp - outside-2delta ] ) .this completes the proof of ( [ supp - outside-2delta ] ) .we next prove that is harmonic on .fix any such that .since is continuous , there exists a closed ball .let be any . since is continuous , to prove its harmonicity in it suffices , due to weyl s lemma , to show that where is the usual ( continuous ) laplace operator in .the latter follows by where the first equality comes from definition of and smoothness of , the second is in view of ( [ u - h - u - h - est ] ) , the third one follows from discrete integration by parts .the last equality is a consequence of the fact that for large enough due to one has for all , which , in particular , implies that on for all . * locally uniformly outside any neighbourhood of 0 , * , * in , * is lipschitz continuous outside any neighbourhood of the origin , where lipschitz norm depends on the neighbourhood .these properties give us parts ( i ) and ( iii ) of the theorem , as well as the claim of ( ii ) except at the origin .to finish the proof of part ( ii ) it remains to study the laplacian of at 0 . to this end ,let be the fundamental solution for in , i.e. . hereas well , following the convention above , suppose that is extended to so that to preserve its -laplacian .it is well - known ( see ) that as locally uniformly in , where is the fundamental solution to continuous laplacian .observe , that in view of ( b ) and lemma [ lem - balls - inside - out ] there is small such that the ball .using this and the definition of we have but since both and converge locally uniformly away from 0 to continuous functions , then the difference on for all where is some large constant .by dmp we get that in , hence , the limit is also bounded by in . but as in we get that is a removable singularity for , in particular at 0 , and the proof of part ( ii ) is now complete .it remains to show ( v ) , the last assertion of the theorem .we will show that at each there exists a double cone , with its size ( opening and height ) independent of , having vertex at and intersecting at only is the 0-level set of which is a lipschitz function on compact subsets away from the origin , and in particular in a neighbourhood of .but this fact alone is not enough to conclude that is lipschitz , see for instance . ] .this clearly implies ( v ) . applying the monotonicity of ( iv ) in directions we obtain for all . 
due to this symmetry , to prove ( v ) it is enough to treat the part of where all coordinates are non - negative .in addition , we will also assume that the point satisfies for all .the rest of the cases are similar .observe that due to ( iii ) and lemma [ lem - balls - inside - out ] there is a constant such that now the choice of and ( [ ball - inside ] ) together imply we first determine a certain monotonicity region for .let be a finite set of vectors containing and having the rest of its elements chosen by the following rule : for each it is easy to check from definition of that it contains linearly independent vectors .for a given consider the halfspace .we have the following monotonicity of in .indeed , decomposing into tangential and normal components in , one gets , where , and is the same for both due to assumption .then , , hence if and only if since the latter reduces to .it remains to apply the monotonicity result of part ( iv ) to get .we thus obtain that is non - increasing in in the normal direction . as a direct corollary of ( [ mono - plane ] ) the cone inherits the following property .if and where , for all , and is the cardinality of , then along with consider as well the cone generated by since has linearly independent vectors , the cone is -dimensional .we claim that is the sought cone , for which we need to show that the truncated cone has uniform size and that is positive in the interior of and is zero in the interior of . we will only show the former claim , as the latter follows similarly .it is clear , thanks to ( [ x-0-is - inside ] ) , that is of uniform size .now assume for contradiction , that there is such that .consider the cone .let us see that the first assertion of ( [ 2claims ] ) follows directly from since by definition and any element is from as well , and has the form with all .the second one is simply a consequence of the definition of .armed with ( [ 2claims ] ) the proof of ( v ) follows readily , since we simply get that is zero in an open neighbourhood of , which violates the condition that .this contradiction finishes the proof of ( v ) , and the proof of the theorem is now complete .assertion ( a ) follows from convergence result of pegden - smart discussed in subsection [ subsec - asm ] in conjunction with theorem [ thm - monotonicity - asm ] . the proof of part ( b ) follows from part ( a ) by the same argument as for part ( v ) of theorem [ thm - scaling - limit ] .the proof is complete .it is clear from the proof of theorem [ thm - scaling - limit ] that for any sequence of scalings , where , one may extract a subsequence such that the corresponding sequence of scaled odometers will have a limit , call it , satisfying the same properties as the function of theorem [ thm - scaling - limit ] .in particular , by theorem [ thm - scaling - limit ] ( ii ) we have that any such is on the set where it is positive , and is lipschitz continuous up to the boundary of that set . however , we do not know if scaling limits generated by different scaling sequences must coincide . for several lattice - growth modelsthe existence of scaling limit , and in some cases its geometry , are known ; for divisible sandpile see , and , for rotor - router see , , , and , and for abelian sandpile see . the internal diffusion limited aggregation ( idla ) , which is a stochastic growth model ,is studied in , , , . 
for the idla in critical regime, one may consult .growth models involving continuous amounts of mass similar to divisible sandpile model , are studied in , , and , where the last paper considers a non - abelian growth model .it is interesting to observe , that the strong flat patterns appearing on the boundary of a sandpile as can be seen in figure [ fig-1 ] , might be a result of a significant increase of the role of the free boundary , rather than the pde problem in the interior .it is remarkable that a similar geometry of the boundary as in figure [ fig-1 ] , in particular the flatness , arises also from the classical abelian sandpile , see figure [ fig-2 ] , if one treats the boundary of the growth cluster likewise , i.e. allowing each boundary point to accumulate many particles however not exceeding the given threshold .nevertheless , the pde problem of the abelian sandpile in the set of visited sites , is very different from that of the boundary sandpile considered here .it seems very interesting to understand the emergence of the flatness on the boundary rigorously .yet another surprising property of the boundary sandpile is the discontinuity of its geometry under small perturbations of the initial mass distribution , and a certain tendency of the sandpile dynamics towards generating convex sets ( see figure [ fig-3 ] ) .a rigorous explanation of these phenomena seems to be a very interesting problem .* problem 3 .* is there a boundary sandpile type process , leading to that the total mass of the system is being redistributed onto the combinatorial free boundary ( possibly with a different background rules ) which has a sphere as its scaling limit ?in fact , this problem does not exclude , as a possible candidate , the boundary sandpile process considered in this paper .it is apparent from numerical simulations ( see figure [ fig-1 ] ) , that the shape of divaricates from a sphere . however , we do not have a rigorous proof of this fact .gustafsson , b. : direct and inverse balayage - some new developments in classical potential theory .proceedings of the second world congress of nonlinear analysts , part 5 ( athens , 1996 ) nonlinear anal .30 ( 1997 ) , no . 5 , 2557 - 2565 | we introduce a new lattice growth model , which we call boundary sandpile . the model amounts to potential - theoretic redistribution of a given initial mass on ( ) onto the boundary of an ( a priori ) unknown domain . the latter evolves through sandpile dynamics , and has the property that the mass on the boundary is forced to stay below a prescribed threshold . since finding the domain is part of the problem , the redistribution process is a discrete model of a free boundary problem , whose continuum limit is yet to be understood . we prove general results concerning our model . these include canonical representation of the model in terms of the smallest super - solution among a certain class of functions , uniform lipschitz regularity of the scaled odometer function , and hence the convergence of a subsequence of the odometer and the visited sites , discrete symmetry properties , as well as directional monotonicity of the odometer function . the latter ( in part ) implies the lipschitz regularity of the free boundary of the sandpile . as a direct application of some of the methods developed in this paper , combined with earlier results on classical abelian sandpile , we show that the boundary of the scaling limit of abelian sandpile is locally a lipschitz graph . |
we consider numerical inversion of a forward deformation vector field ( dvf ) from one image to another . very often the inverse dvf is needed along with the forward dvf to map medical images , structures , or doses back and forth throughout the process of 4d image reconstruction and adaptive radiotherapy .the inverse dvf may be obtained in different ways , such as through deformable registration with swapped inputs , simultaneous registration in both directions , or inverting the forward dvf from the reference image to the deformed target image . the latter option ( inverting the forward dvf ) is often preferred in clinical applications , due to several reasons :inversion is typically faster , empirically ; image quality can be quite different for the reference and target image sets , which may make the other approaches more error - prone ; and inversion can ensure consistency between forward and inverse dvfs .previously , chen _ et al . _ developed a fixed - point iteration method for dvf inversion . in this study, we aim to advance the dvf inversion approach further by improving its convergence behavior , in terms of convergence region and rate , using a feedback control .the problem of dvf inversion can be framed as follows .the reference and target images , denoted by and , respectively , can be related to one another by two non - linear transformations . the forward transformation , , maps the voxels of the reference image , , onto those of the target image , , via the forward deformation vector field , : where is the 3d displacement of the reference voxel at , and is the image domain .conversely , the backward transformation , , maps the voxels of back to , via , the reverse dvf .the problem of dvf inversion is to obtain given .the two transformations are the inverse of each other , i.e. , consequently , the forward and backward dvfs satisfy the _ simultaneous inverse consistency condition _ : [ eq : consistency - condition ] where .inverse consistency is of great importance to deformable registration and estimation of 4d dose accumulation , among other biomedical applications .the inverse consistency condition is commonly incorporated in deformable registration processes .for instance , christensen and johnson formulate image registration using objective functions symmetrically between the two images in both matching and regularization terms .et al . _ also treat the two images symmetrically , and use inverse consistency in approximating the unknown inverse fields .additional related studies on employing the consistency condition in simultaneous estimates of the forward and inverse dvf can be found in the survey by sotiras _et al . _ .the study reported in this paper follows and improves upon the work of chen _ et al . _the precursor work presented a fixed - point iteration method for dvf inversion , with regard to inverse consistency condition .the significance of that work lies not only in the simple iterative process , but also in the corresponding convergence condition .assuming is given , chen s iteration proceeds as , the initial guess , , is set to zero ; i.e. , . negating the forward dvf used to a prevailing approach for inverse dvf computation , but the resulting inverse estimate , , does not in general satisfy inverse consistency .this common misconception was made clear and amended by the fixed - point iteration solution of ( [ eq : fpim ] ) . 
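to make the iteration concrete, the following python sketch runs a fixed-point inversion of this type on a hypothetical smooth one-dimensional displacement field. the update is written here as v_{k+1}(x) = -u(x + v_k(x)) with v_0 = 0, which is the form consistent with the inverse consistency condition above, although the exact notation of ( [ eq : fpim ] ) is not reproduced in this text; all numbers are illustrative stand-ins:

    import numpy as np

    # 1d stand-in for the forward dvf: a smooth displacement with |u'(x)| < 1, so the
    # fixed-point iteration is a contraction and converges everywhere on the grid
    x = np.linspace(0.0, 10.0, 501)
    u = 0.8 * np.sin(0.6 * x)

    v = np.zeros_like(x)                          # initial guess v_0 = 0
    for k in range(30):
        v = -np.interp(x + v, x, u)               # update v_{k+1}(x) = -u(x + v_k(x))
        residual = v + np.interp(x + v, x, u)     # inconsistency with the forward dvf
        if np.max(np.abs(residual)) < 1e-6:
            break

    # at the fixed point the inverse consistency condition v(x) + u(x + v(x)) = 0 holds
    print(k + 1, np.max(np.abs(residual)))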
with fixed - point method ,the convergence behavior of the iterative inversion process can be analyzed , which is a substantial advancement from previous methods which solely relied on empirical studies .a sufficient convergence condition for ( [ eq : fpim ] ) is the contraction condition on : where is a well - defined distance metric in the 3d image domain , and is a lipschitz constant , .the convergence behavior of chen s iteration depends passively on this condition , which is not always met in clinical cases with large deformation . in this study, we introduce an iterative method with an active feedback control mechanism . at each step of the iteration, we compute a residual which measures the inconsistency between the forward dvf and the iterative inverse estimate , see ( [ eq : residual - r ] ) .the residual is incorporated into the next iterate after being modulated by the feedback control .the feedback control provides an extra handle for controlling and improving the convergence behavior .the rest of the document is organized as follows . in, we describe the new iterative method with feedback control , introduce a simple feedback control mechanism , and provide the underlying principle . in , we make experimental assessment of the new method with an analytic dvf pair and with numerical dvfs obtained via the 4d extended cardiac - torso ( xcat ) digital anthropomorphic phantom . in , we conclude the presented work and give additional remarks on extended feedback control .an iterative method with feedback control is first introduced for numerical dvf inversion .an analysis is then provided for steering the feedback mechanism to improve convergence behavior . at each iteration step , we get an iterative estimate , , of the inverse dvf , . we use the residual with respect to the consistency condition of ( [ eq : consistency - condition ] ) as the feedback : the residual can be obtained at each iteration step .this computationally available quantity allows us to monitor and control the ( unknown ) estimate error , which is to be reduced to zero , or sufficiently close to zero , via the iteration process .the residual is small when the error is small , and is zero when is equal to the inverse satisfying the consistency condition ( [ eq : consistency - condition ] ) .feedback control is introduced in the iteration process as follows . the residual for the -th iterate , ,is calculated , modulated by a nonsingular matrix , and used as an incremental correction , where is a matrix , local to each .feedback control is spatially homogeneous when does not vary with .there are many formulations or reformulations of a fixed - point equation .the formulation in ( [ eq : fpim - general ] ) is a general formulation for fixed - point iterations , with feedback control modifying and transforming the residual before it is fed back to . in this study , we focus on the feedback mechanism in its simplest form with a single parameter .we let , where is the identity matrix , and is a scalar which may be referred to as the relaxation parameter .the iteration process thus takes the simple form : when , the iteration of ( [ eq : fpim - single - parameter ] ) reduces to chen s iteration of ( [ eq : fpim ] ) .hereafter , we refer to the iteration procedure by chen __ as the iteration with feedback control .we first demonstrate how the feedback control can be designed for inverting the analytic dvfs introduced by chen _et al . 
_the forward and backward dvfs are expressed by the following closed - form formulas over a 2d spatial domain : [ eq:2d - analytic - dvfs ] where denotes cartesian coordinates in the 2d spatial domain ^{2} ] , denotes the jacobian of evaluated at , and denotes the error propagation matrix . in order to express the error propagation matrix in a first - order form , we assume that is smooth almost everywhere over the spatial domain and the jacobian can be well approximated from the numerical values of . the first control objective is to make the propagation matrix a contractor over the image domain of interest , which implies that the convergence region must span the entire image domain . when the feedback control , , is limited by a specific mechanism structure , one may consider making the convergence region in the image domain as large as possible . the second objective is to make the convergence as fast as possible .we describe in particular the control design approach with the single - parameter control mechanism .first , we find the feasible values of over which is a contractor over the spatial domain of interest , where is the spectral radius of the propagation matrix .if the feasible set is not empty , we next locate the values of that reach the fastest convergence speed possible . in, we have demonstrated how to achieve these objectives with analytic dvfs . in, we will introduce a simple approach to locate in practical situation where the dvf is numerically provided , without a closed - form expression .we present experimental results for assessing the new iterative method .first , we will describe the data sets and assessment measures .two particular sets , referred to as set a and set b for convenience , were used for evaluating the single - parameter feedback control process of ( [ eq : fpim - single - parameter ] ) .data set a is analytically designed by chen _et al . _it includes a reference image and a deformed ( target ) image , in 2d , as shown in .the reference image is binary - valued ; the intensity is one over the concentric rings , and otherwise .the dvf from the reference to the target image follows the analytic expression ( [ eq:2d - analytic - dvfs ] ) . 
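since the closed form of ( [ eq:2d - analytic - dvfs ] ) is not reproduced in this text, the following python sketch uses a hypothetical periodic one-dimensional displacement whose derivative exceeds one in places, so that the plain fixed-point iteration is no longer a contraction. the single-parameter feedback iteration is written here as v_{k+1} = v_k - (1 - mu) r_k with r_k(x) = v_k(x) + u(x + v_k(x)); this convention reduces to the precursor iteration at mu = 0, but the exact sign and parameterization of ( [ eq : fpim - single - parameter ] ) may differ. the scan over mu mimics the numerical way of locating a good relaxation parameter:

    import numpy as np

    # hypothetical periodic 1d displacement whose derivative ranges over roughly
    # [-0.64, 1.2]: the deformation is still invertible (1 + u' > 0) but the plain
    # fixed-point iteration fails to be a contraction where u' > 1
    x = np.linspace(0.0, 10.0, 500, endpoint=False)
    omega = 2.0 * np.pi / 10.0
    u = (0.9 / omega) * np.sin(omega * x) + (0.3 / (2.0 * omega)) * np.sin(2.0 * omega * x)

    def iterations_to_converge(mu, max_iter=1000, tol=1e-6):
        v = np.zeros_like(x)
        for k in range(max_iter):
            r = v + np.interp(x + v, x, u, period=10.0)   # consistency residual r_k
            if np.max(np.abs(r)) < tol:
                return k                                  # converged
            v = v - (1.0 - mu) * r                        # feedback-modulated update
        return max_iter                                   # not converged within budget

    for mu in (0.0, 0.1, 0.2, 0.3, 0.4, 0.6):
        print(mu, iterations_to_converge(mu))

with these stand-in numbers, mu = 0 does not converge within the iteration budget, while intermediate values of mu do, with the fastest convergence around mu = 0.2 - 0.3; this mirrors the enlarged convergence region and the speedup reported below for the analytic dvf.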
in the experiments ,the dvf parameter is set to , and the parameter takes values below or above ( which is the critical point for chen s iteration from converge to diverge ) .data set b consists of two images of a digital lung phantom generated using xcat .the reference image is at the end - of - expiration phase and the target image is at the end - of - inspiration phase .the forward dvf , from the reference to the target image , was provided by xcat .respiratory motion was generated by two parametrized sinusoidal curves : one for diaphragm motion in the si direction , and one for chest surface motion in the ap direction .the curve and surface parameters were set to the xcat default values , with a respiratory cycle , and asymmetric sinusoidals with respect to inspiration and expiration duration .peak - to - peak displacements at the diaphragm and chest surface were set to and in si and ap directions , respectively .the large displacements were chosen in order to test the convergence behavior of the iteration methods in a challenging condition that may be expected in a clinical scenario .voxel resolution was set to mm in all three dimensions .the xcat phantom deformation fields are mostly smooth over the image domain , except at certain surface boundaries , such as around the spine .we use two kinds of schemes for assessing the effect of feedback control on convergence behavior .the first kind of scheme shows the spatial variation of errors using images or error maps .the second kind scheme uses scalar quantities to summarize estimate errors in the worst case or in aggregation . for dataseta , visual assessment of spatial variation of errors is materialized in two ways . in one ,the proxy target and references images are constructed by the inverse dvf estimate under evaluation . in the other ,a map of pixel - wise residuals is overlaid on the target image .the scalar assessment scheme comprises the maximal residual error and the maximal absolute error at each iteration , [ eq : max - errors ] where is the image domain , ^ 2 $ ] . for dataset b , the scheme for assessing the spatial variation of errors is similar to that for dataseta , except that proxy images are constructed for only the target phase , for the following reason . unlike the ideal case with dataset a, local transforms of image voxels by the dvfs may not be one - to - one numerically , which happens to be the case with the backward transform for dataset b. this numerically ill condition makes the constructed reference images via numerical interpolation much more susceptible to numerical artifacts , especially around the outer body interface .the scalar assessment scheme is limited to the measure of residual errors , as the absolute errors are not obtainable with the exact inverse unknown .taking into account the presence of discontinuities at certain boundaries and other sources of variation , residual errors are assessed via percentiles at different levels , defined as , where is the normalized cumulative histogram of , and the image domain excludes voxels outside the body . 
in particular, we use percentiles with and .we present experimental results , in , with the analytical dvf of ( [ eq:2d - analytic - dvfs ] ) , at two dvf parameter settings .the parameter is set to in both settings and is set to in one setting and in the other .1 iteration 5 iterations 10 iterations + + 1 iteration 5 iterations 10 iterations + + 1 iteration 5 iterations 10 iterations + + 1 iteration 5 iterations 10 iterations + + when , the iteration of ( [ eq : fpim - single - parameter ] ) converges for any .show the iterative estimates of the reference and target images respectively at intermediate steps of the iteration process .the iteration of ( [ eq : fpim - single - parameter ] ) with feedback control recovers the reference and target images in at most steps , whereas the iteration with feedback control takes more steps .a quantitative confirmation of the speedup , shown in [ fig : result - analytical_phantom - b_pt_4 ] , shows the comparison in convergence rate among the feasible values .the iteration with is at least two times faster than with .show the results when the deformation is larger .the iteration with diverges , whereas iteration with recovers the reference and target images successfully in at most steps .shows the critical change from divergence to convergence .provide numerical evidence that errors decrease faster when the feedback control parameter is set to the analytically optimal value in ( [ eqn : optimal - mu - specific ] ) .we present experimental results for the xcat phantom .shows residual errors overlaid on ct slices .the residual errors are much smaller with the feedback control , especially around the diaphragm . shows the spatial distribution of errors by means of the constructed target image .the target image is constructed by deforming the reference image with inverse estimate .the constructed target image with exhibits substantially smaller errors compared to that with . shows the residuals at and against the number of iteration steps .the residual drops below mm for iteration with feedback control ( and ) , whereas the same remains above cm level with . 
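as a concrete form of the scalar assessment, the following python sketch computes the worst-case residual of ( [ eq : max - errors ] ) and residual percentiles over a body mask. the residual magnitudes and the mask are random stand-ins for the values produced by the xcat inversion; the percentile levels 50 and 95 are examples (the 95th is the one quoted in the results, the other level used in the paper is not recoverable from this text), and numpy's interpolated percentile is used as a proxy for the cumulative-histogram definition given above:

    import numpy as np

    # stand-in residual magnitudes |r(x)| (in mm) and a stand-in body mask; in practice
    # both come from the inversion of the xcat forward dvf
    rng = np.random.default_rng(0)
    res_mag = rng.exponential(scale=0.2, size=(64, 64, 32))
    body = np.zeros((64, 64, 32), dtype=bool)
    body[8:56, 8:56, 4:28] = True

    vals = res_mag[body]                          # exclude voxels outside the body
    max_residual = vals.max()                     # worst-case measure of eq. (max-errors)
    p50, p95 = np.percentile(vals, [50, 95])      # percentile measures p_q
    print(max_residual, p50, p95)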
shows the variation of residual percentiles with parameter with minima occurring at .axial coronal sagittal axial coronal sagittal + + + ( a ) ( b ) axial coronal sagittal + + have introduced an iterative method for numerical dvf inversion with a feedback control mechanism to improve convergence behavior .we have described and demonstrated the key principle for altering and controlling error propagation , and a methodology for determining the control parameters numerically as well as analytically .experimental results are in agreement with the presented error analysis and control design objectives .we give below some additional comments on computational issues and extended feedback control mechanisms .the iteration ( [ eq : fpim - general ] ) can be described alternatively as the following procedure with two sub - steps per iteration , [ eq : two - step - relaxed ] the first sub - step is the same as chen s iteration .the second sub - step makes a modification by a combination of the estimates at step and sub - step .the alternative computation procedure is mathematically equivalent with respect to convergence analysis , to iteration ( [ eq : fpim - general ] ) , although the residual feedback is not used explicitly .the residuals can be effectively utilized in termination criteria .commonly used termination criteria include a threshold on the numerical difference between two successive iterates and an upper bound on the total number of iteration steps .the threshold is set based on an acceptable precision in computed results , and it should not be set higher than the computational precision used , such as ieee single or double precision . a process is terminated by the numerical threshold criterion either because the iteration has converged numerically , or because its progression is too slow .we would need an additional indicator to differentiate the two cases . recall that the residual is related , but not necessarily equal , to the difference between successive iterates .if the residual is not sufficiently small , the estimate is far from meeting the inverse consistency condition and the iteration has not converged well enough .the single - parameter feedback control described in [ subsec : feedback - control ] is simple .the parameter is kept constant over the spatial domain and across all iteration steps .while this simple control mechanism proved effective with the analytic and xcat dvf data used in this study , the robustness of the convergence can be further improved by extending the control in the following two approaches .the first extension is to turn the constant parameter to a spatially variant control function .in fact , by the error propagation analysis specific to the analytic dvf ( [ eq:2d - analytic - dvfs ] ) , an ideal control scheme is spatially - variant , .both analysis and numerical results ( not presented in this paper ) show that the iteration with takes only one or two steps for the residuals to drop to numerical zero , as determined by the machine precision .since the residual is vector - valued in general , a spatially - variant control function with matrix local to each can be used .the second extension is to make the control non - stationary , i.e. 
, to allow changes throughout the iteration process in adaptation to changes in the iterates or residuals .this work was supported by the national institutes of health grant no .the authors thank dr .paul segars and wendy harris for their insightful suggestions on generating the xcat phantom .they thank animesh srivastava for his generous time and effort on technical assistance .[ sec : conflict - interest ] the authors have no relevant conflicts of interest to disclose .we give the derivation of error propagation equations presented in this paper .denote the error in estimating the dvf inverse at iteration by . for the analytical phantom of ( [ eq:2d - analytic - dvfs ] ) , which gives the specific error propagation equation ( [ eq : error - propagation - customized ] ) , for the analytical phantom of ( [ eq:2d - analytic - dvfs ] ) , which gives the specific error propagation equation ( [ eq : error - propagation - customized ] ) , for the analytical phantom of ( [ eq:2d - analytic - dvfs ] ) , which gives the specific error propagation equation ( [ eq : error - propagation - customized ] ) , for the analytical phantom of ( [ eq:2d - analytic - dvfs ] ) , which gives the specific error propagation equation ( [ eq : error - propagation - customized ] ) , general dvf , using first order taylor expansion , where .this gives the error propagation equation of ( [ eq : error - propagation ] ) , necessary and sufficient condition of the convergence of iteration process ( [ eq : fpim - single - parameter ] ) is , derive the optimal in convergence rate for the analytical phantom ( [ eq:2d - analytic - dvfs ] ) .}{\max } \quad { \mathopen{}\mathclose\bgroup\leftorig } ( 1 \!- ( 1\!-\mu ) \frac{1}{1 + b\cos ( m \theta ) } { \aftergroup\egroup\rightorig})^{2}\nonumber\\ & = \displaystyle \underset{\mu}{\arg \min } \quad \underset{y \in [ -b , b]}{\max } \quad { \mathopen{}\mathclose\bgroup\leftorig } ( 1 - \frac{1 - \mu}{1 + y } { \aftergroup\egroup\rightorig})^{2}\nonumber\\ & = \displaystyle \underset{\mu}{\arg \min } \quad \begin{cases } { \mathopen{}\mathclose\bgroup\leftorig}(\frac{\mu - b}{1 - b } { \aftergroup\egroup\rightorig})^{2 } , \mu \leq b^2\\ { \mathopen{}\mathclose\bgroup\leftorig}(\frac{\mu + b}{1 + b } { \aftergroup\egroup\rightorig})^{2 } , \mu > b^2 \end{cases}\nonumber\\ & = \displaystyle b^2\end{aligned}\ ] ] a. leow , s .- c .huang , a. geng , j. becker , s. davis , a. toga , and p. thompson .inverse consistent mapping in 3d deformable image registration : its construction and statistical properties . in _ information processing in medical imaging _, pages 493503 .springer , 2005 .w. lu , g. h. olivera , q. chen , k. j. ruchala , j. haimerl , s. l. meeks , k. m. langen , and p. a. kupelian .deformable registration of the planning image ( kvct ) and the daily images ( mvct ) for adaptive radiation therapythis work was in part presented at the aapm meeting in seattle , july 2005 ., 51(17):4357 , 2006 .y. seppenwoolde , h. shirato , k. kitamura , s. shimizu , m. van herk , j. v. lebesque , and k. miyasaka .precise and real - time measurement of 3d tumor motion in lung due to breathing and heartbeat , measured during radiotherapy ., 53(4):822834 , 2002 . | * purpose : * the inverse of a deformation vector field ( dvf ) is often needed in deformable registration , 4d image reconstruction , and adaptive radiation therapy . 
this study aims at improving both the accuracy with respect to inverse consistency and efficiency of the numerical dvf inversion by developing a fixed - point iteration method with feedback control . + * method : * we introduce an iterative method with active feedback control for dvf inversion . the method is built upon a previous fixed - point iteration method , which is represented as a particular passive instance in the new method . at each iteration step , we measure the inconsistency , namely the residual , between the iterative inverse estimate and the input dvf . the residual is modulated by a feedback control mechanism before being incorporated into the next iterate . the feedback control design is based on analysis of error propagation in the iteration process . the control design goal is to suppress estimation error progressively to make the convergence region as large as possible , and to make estimate errors vanish faster whenever possible . we demonstrated the feedback control with a single - parameter control mechanism . the optimal parameter value is determined either analytically by a closed - form expression for analytical test data , or numerically for experimental data . the feedback control design is demonstrated and assessed with two data sets : an analytic dvf pair , and a dvf generated between two phases of the 4d extended cardiac - torso ( xcat ) digital anthropomorphic phantom . * results : * the single - parameter feedback control improved both the convergence region and convergence rate of the iterative algorithm , for both datasets . with the analytic data , the iteration becomes convergent over the entire image domain , and the convergence is sped up substantially compared to the precursor method , which suffers from slow convergence or even divergence , as the deformation becomes larger . with the xcat dvf data , the new iteration method substantially outperforms the precursor method in both accuracy and efficiency ; feedback control reduced the 95th percentile of residual errors from mm to mm . additionally , convergence rate was accelerated by at least a factor of for both datasets . * conclusion : * the introduced iteration method for dvf inversion shows the previously unexplored possibility in exercising active feedback control in dvf inversion , and the unexploited potential in improving both numerical accuracy and computational efficiency . = 1 mailto:[[`[`]author to whom correspondence should be addressed . electronic mail : ] lei.ren.edu mailto:[[`[`]author to whom correspondence should be addressed . electronic mail : ] lei.ren.edu |
a role for modification of activation functions , or intrinsic plasticity ( ip ) , for behavioral learning has been demonstrated for a number of systems .for instance , in rabbit eyeblink conditioning , when ion channels related to afterhyperpolarization are being suppressed by a learning event , they can become permanently suppressed .this has been shown for pyramidal cells of hippocampal areas ca1 and ca3 , and for cerebellar purkinje cells . in some cases , these changes are permanent and still present after 30 days , in other cases , intrinsic changes disappear after 3 - 7 days , while the behavioral memory remains intact , raising questions about the long - term component of intrinsic plasticity in these systems .there are at the present time conflicting ideas on the significance of ip compared to synaptic plasticity , and the range of functions that ip may have in adaptivity .few computational models have been proposed that show how modification in activation functions can be achieved with ion channel based models of realistic single neurons .marder and colleagues have developed an approach , where they sample a very large parameter space for conductances of ion channels , exploring nonlinearities in the relation between conductances and neural spiking behavior .the motivation for this research are observations about neuromodulation and intrinsic plasticity in specific neurons of an invertebrate ganglion ( e.g. , ) .they have noted that large variations in some parameters may have little effect on neuronal behavior , while comparatively small variations in certain regions in parameter space may change response properties significantly .they also suggest that neuromodulation may provide an efficient means of targeting regions in parameter space with significant effects on response properties .a study by assumed the goal of modification of activation functions is to achieve an optimal distribution of firing rates for a population of neurons .the idea was that by tuning each neuron to a different band of the frequency spectrum , the full bandwidth of frequencies could be employed for information transfer .this goal was achieved by adjusting , and channels for a generically defined neuron until a desired frequency was stably reached .we present a different approach , where the modification of activation functions reflects the history of exposure to stimuli for a specific neuron .similarly , suggested that synaptic ltp / ltd and linear regulations of intrinsic excitability could operate in a synergistic fashion .however , in our approach , different types of synaptic stimulation result in state changes for the neuronal unit , influencing its capacity for read - out of stored intrinsic properties .thus , intrinsic plasticity is conceptualized as fundamentally different from ltp / ltd which does not encompass such a state change .the learning rule that we derive as the basis for adjustment concerns one - dimensional upregulation or down - regulation of excitability in the `` read - out '' state of the neuron , and affecting only this state .this rule uses neural activation , significantly determined by intracellular calcium for the learning parameter , which can be shown to be biologically well - motivated ( cf . also ) .the membrane voltage is modeled as ] .physiological ranges for can be estimated by various means .there are measurements for variability in eletrophysiologically defined membrane behavior ( current threshold , spike response to current pulses etc . 
) that are typically expressed as standard errors ( e.g. 16 - 20% for current threshold , ) .there are also attempts at classifying msn cells into different types based on their electrophysiological profile .modeling shows that variability of ion channel conductances with a range of matches measures of electrophysiological variability and reproduces the ranges for msn types ( data not shown ) .interestingly , direct measurements for dopamine d1 receptor - mediated changes on ion channel conductances are approximately in the same ranges ( , ) .our discussion is thus based on an estimate of ranging fom 0.6 - 1.4 for each channel .synaptic input is defined by overlays of the epsps generated by n individual poisson - distributed spike trains with mean interspike interval .each epsp is modeled as a spike with peak amplitude and exponential decay with similar to .ipsps are modeled in a similar way with .this corresponds to 0.5na ( -0.2na ) as peak current ( with ) .synaptic conductances are calculated by with set to 0mv .we have tuned the model to = for a first spike for the naive or standard neuron ( all ) . at -40mv ,this is or , which corresponds to the experimentally measured average value for the rheobase in .we may increase the correlation in the input by using a percentage of neurons which fire at the same time .higher values for increase the amplitude of the fluctuations of the input ( cf . ) .the simulator has been implemented in matlab and executed on an apple 1ghz g4 notebook .the entire code is interpreted and no specific code optimizations have been applied . for numerical integration , the solver ode45 was used .simulation of one neuron for 1 s took approximately 68s cpu time .it is expected that for compiled and optimized code the simulation speed could be increased by at least an order of magnitude .we explore the impact of small variations in ion channel conductances on the shape of the activation function . as an example , we show the current and conductance changes for a slowly inactivating a - type k+ channel ( kv1.2 , ) , l - type calcium channel ( ) and inward rectifying k+ channel ( ) at different membrane potentials modulated by a scaling factor ( fig .[ kas - conductance ] , fig . [ actfun ] ) .regulation of the voltage - dependence and even of the inactivation dynamics of an ion channel has also been shown , but these effects are not further discussed here .we can see that there are critical voltage ranges ( around -50mv , around -80mv and starting at -40mv ) , where the conductance and the current are highest , and where scaling has a significant effect , while scaling has small or no effect in other voltage ranges .( the na+ current has been disabled for this example to prevent the neuron from firing ) . in fig .[ temporal ] , we show the current over time - to graphically display the slow dynamics of the and channel . since we do not change the activation - inactivation dynamics of any channel in our model , we show currents only for , = 1 .we can see that activates moderately fast ( 20ms ) , while it inactivates with a half - time of about 300ms , depending on the voltage . for , activation is almost instantaneous , but inactivation is 500ms . 
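the original simulator is written in matlab; the following python sketch reconstructs only the synaptic-input step described above: n poisson spike trains with a given mean interspike interval are overlaid, each spike contributing an exponentially decaying epsp, and a fraction c of the inputs is synchronized by copying a shared train, which is one way of making a percentage of neurons fire at the same time. peak amplitude, decay time and the conversion to a conductance with a 0 mv reversal are illustrative stand-ins, since the exact values are not recoverable from this text:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, duration = 0.1, 1000.0                    # time step and episode length in ms
    t = np.arange(0.0, duration, dt)
    n, mean_isi, c = 100, 100.0, 0.2              # inputs, mean isi (ms), correlated fraction
    peak, tau = 0.05, 5.0                         # hypothetical epsp peak (na) and decay (ms)

    def poisson_train(rate_per_ms):
        # binary spike train on the simulation grid, poisson with the given rate
        return rng.random(t.size) < rate_per_ms * dt

    master = poisson_train(1.0 / mean_isi)        # shared train for the correlated inputs
    total_spikes = np.zeros(t.size)
    for i in range(n):
        train = master if i < int(c * n) else poisson_train(1.0 / mean_isi)
        total_spikes += train

    kernel = peak * np.exp(-np.arange(0.0, 10.0 * tau, dt) / tau)   # epsp waveform
    i_syn = np.convolve(total_spikes, kernel)[: t.size]             # summed synaptic current (na)
    g_syn = i_syn / 70.0 * 1e3                    # conductance (ns) for a 0 mv reversal at -70 mv
    print(i_syn.mean(), i_syn.max(), g_syn.max())

raising c increases the amplitude of the fluctuations of the summed current while leaving its mean essentially unchanged, which is the property used above to switch the model neuron between integrator-like and coincidence-detector-like behaviour.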
the activation function for the msn model shows a time - dependence only in the high - voltage range ( at or above -55mv ) , whereas the components in the lower voltage ranges are not time - dependent .mathematically , we can consider the individual channels as a set of basis functions that allow function approximation for the activation function .each particular adjustment of an activation function can be considered learning of a filter which is suited to a specific processing task .the activation - inactivation dynamics would provide a similar set of basis functions for the temporal domain . of course , it is interesting to note which particular basis functions exist , and also how the temporal dimension is tied in with specific voltage - dependences. for instance , the slowly inactivating potassium channel provides a skewed mirror image of the function of calcium - gated sk / bk channels , which are responsible for afterhyperpolarization , making different variants of frequency filters possible . on this basis, a mapping of ion channel components and their density or distributions in different types of neurons could provide an interesting perspective on direct interactions for neurons from different tissue types or brain areas , as well as e.g. between cholinergic interneurons and msns within striatum . to further explore the influence of variability of the activation function, we apply realistic synaptic input with different amounts of correlation to individual ms neurons ( see fig .[ currentgsyn ] ) .this shows us that small adjustments in the contribution of a specific ion channel can result in significantly different spiking behavior even for identical synaptic input .this occurs when the input is distributed , i.e. has low correlation . in this case, the neurons spike independently of each other and with different frequencies .we can eliminate this effect by increasing the correlation of the input .because of the slow activation / inactivation dynamics of the channel , ( latency of 20ms ) only low correlated input activates these channels ( neuronal integrator mode ) , but highly correlated inputs do not activate these channels , driving the membrane to spiking quickly ( coincidence detector mode ) .therefore correlated input can produce reliable spiking behavior for model neurons which differ in the relative contribution of the slow channel .distributed input , in contrast , activates slower ion channels , and can produce different tonic firing rates , here according to the contribution of the channels , as long as strong synaptic input keeps the neuron in the relevant voltage range ( persistent activity ) .similarly , the differential contribution of other channels ( high - voltage gated l - type ca - channels , hyperpolarization - activated girk channels or calcium - dependent sk / bk channels ) will affect neuronal behavior , when the conditions for a prominent influence of these channels are met .we are modeling a state of ms neurons that exhibits regular tonic firing .in experimental studies , showed that ms neurons , similar to cortical neurons , exhibit upstate - downstate behavior , reminiscent of slow - wave sleep , under certain forms of anesthesia ( ketamine , barbiturate ) . however , under neurolept - analgesia ( fentanyl - haloperidol ) , showed that ms neurons can show driven activity , when cortical input is highly synchronized , and exhibit a state characterized by fluctuating synaptic inputs without rhythmic activity ( i.e. 
without upstates / downstates ) , when cortical input is desynchronized .the regular , tonic spiking in this state is very low , much less than in the waking animal , which may be related to the dopamine block by haloperidol .this makes a waking state of ms neurons characterized by regular tonic spiking at different firing rates probable . in the following ,we show how intrinsic excitability adaptation can lead to different recalled firing rates under appropriate synaptic stimulation . the model could thus reflect learning that is recalled or read out during msn states under desynchronized cortical input - in contrast to highly synchronized input , which would homogenize the response of the coincidence detecting neurons and favor reliable transfer of spikes .the general idea for learning intrinsic plasticity is to use a learning parameter for each individual update of the conductance scaling factor .the _ direction _ of learning ( or ) is determined from the neural activation ( ) for each individual neuron .neural activation is largely determined by intracellular calcium but here we estimate the neural activation from the spike rate of the neuron , measured over 1 s of simulated behavior ( see [ biolearn ] for a discussion ) .we define a bidirectional learning rule dependent on an initial firing rate : excitability is increased by a step function ( with stepsize ) when is greater than , excitability is decreased when is lower than ( `` positive learning '' ) .this means , when the actual neural activation is higher than the initial firing rate , membrane adaptations aim to move the neuron to a higher excitability in order to create a positive memory trace of a period of high activation ( which can then be replicated under distributed synaptic stimulation ) .the same mechanism applies to lower the excitability of a neuron . this rule can also be implemented by individual increases in excitability after each action potential , and decreases of excitability for periods of time without action potentials .initial experiments indeed show such adaptation of intrinsic excitability after individual spikes . the function can be applied to a single ion channel , such as , but also to a number of ion channels in parallel : e.g. to mimic dopamine d1 receptor activation , may be applied to ( upregulated with high ) , ( downregulated with high ) , and ( downregulated with high ) .we can show the effect of this learning rule on pattern learning .we generate synaptic inputs from a grid of 200 input neurons for a single layer of 10 msns . on this gridwe project two stripes of width 4 as a simple input pattern by adjusting the mean interspike interval ( isi ) for the corresponding input neurons to a higher value ( ms for _ on _ vs. ms for _ off _ neurons , see fig .[ toplo ] ) .we apply the learning rule to each of the currents , and .this mimics changes in dopamine d1 receptor sensitivity , which targets these ion channels .adaptation can be weaker or stronger , depending on learning time ( e.g. , =0.01 , =20s ( 20 steps ) ( weak ) , =40s ( 40 steps ) ( strong ) ) .after a number of steps , we achieve a distribution of -values that reflects the strength of the input ( table [ table3 ] a ) . in fig .[ positive ] , we obtain spike frequency histograms from the set of ms neurons under different conditions . fig .[ positive ] a shows the naive response to the input pattern - high activation in two medial areas . after adaptation , this responseis increased ( fig .[ positive ] b ) . 
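before turning to the test inputs below, a compact python sketch (a stand-in for the matlab implementation) of the bidirectional update described above is given here: after each 1 s episode the measured rate is compared with the reference rate, and the conductance scaling factors of a set of channels are stepped up or down by the stepsize 0.01; a sign per channel encodes whether it is up- or down-regulated when excitability should rise, mimicking the parallel d1-like modulation, and the clip to [0.6, 1.4] implements the kind of saturation state discussed below. channel names, signs and the rates used are illustrative assumptions:

    import numpy as np

    step = 0.01                                   # stepsize per 1 s update, as in the text
    signs = {"ca_l": +1, "k_as": -1, "k_ir": -1}  # +1: raised with excitability, -1: lowered
    bounds = (0.6, 1.4)                           # assumed physiological range of the scaling factors

    def update(gammas, a, a0, positive=True):
        # one learning step; positive=False gives the negative-trace (homeostatic) variant
        direction = np.sign(a - a0) if positive else np.sign(a0 - a)
        return {ch: float(np.clip(g + direction * signs[ch] * step, *bounds))
                for ch, g in gammas.items()}

    gammas = {ch: 1.0 for ch in signs}            # naive neuron: all scaling factors at 1
    for rate in [14.0, 15.0, 13.0, 16.0]:         # measured rates (hz) over successive episodes
        gammas = update(gammas, a=rate, a0=10.0)
    print(gammas)                                 # excitability-raising channels drift up, others down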
when we apply a test input of a random noise pattern , we see that the learned pattern is still reflected in the spike histogram ( fig .[ positive ] c ) .for positive learning , this process is theoretically unbounded , and only limited by the stepsize and the adaptation time .a saturation state could be defined to prevent unbounded learning , which would also allow to perform capacity calculations .we should note that applying just one pattern continuously results in a very simple learning trajectory : each update results in a step change in the relevant ion channel currents .however , we also show that the effects of stepwise adaptation of individual ion channels do not necessarily lead to a completely parallel adaptation of firing rate . in fig .[ positive ] we see that adaptation is much stronger for high input rather than low input neurons . in this case , at 11.5hz is a fairly low value for neurons to continue to lower their firing rate with stepwise adaptation of the chosen ion channels .this shows the importance of using appropriate tuning ( harnessing ) mechanisms to make highly nonlinear channels work in a purely linear learning context .clearly one of the results of learning is an altered spiking behavior of individual neurons dependent on their history .it is important to realize that this rule is based on neural activation , not synaptic input as a learning parameter - since synaptic input is constant during learning .we show that this mechanism can be employed not only for positive trace learning , when excitability adaptation corresponds to frequency response , but also for negative trace learning , when excitability adaptation counteracts frequency response and approximates a target firing rate .this target rate could be set as a result of global inhibitory mechanisms corresponding to the expected mean values under physiological stimulation .accordingly , the neuron responds with decreases of excitability to high input ranges and increases of excitability to low input ranges ( fig . [ negative ] ) . this emphasizes that `` homeostatic '' responses - adjusting excitability in the opposite direction to the level of input - can implement trace learning ( pattern learning and feature extraction ) as well .negative learning rule results in a mirror image of parameter values compared to positive learning , as shown in table [ table3 ] b. the naive response is the same as before ( fig .[ positive ] a ) . but here , after adaptation , the neurons have habituated to the input , and do not produce a strong response anymore ( fig .[ neg2 ] a ) . when neurons are tested with , an inverse version of the original pattern appears ( fig .[ neg2 ] b ) .similarly , when we apply a different pattern , we obtain a spike histogram , where the learned pattern is overlayed with the new input , resulting in a dampening of the frequency response for ( fig .[ neg2 ] c ) . for both positive and negative traces ,learning is pattern - specific , i.e. 
training with homogeneous , fluctuating ( high - low ) noise , such as , results in no adjustments ( or computes an average ) .however , any prolonged sequence of neuron - selective stimulation results in neuron - selective patterns .this requires the population to be protected from prolonged stimulation with random patterns in a biological setting .we may assume most patterns to be meaningful and highly repetitive , while the neuron exists in a plastic state , while patterns may be random , when the neuron is not plastic ( because it is stimulated with highly correlated or very low frequency input , saturated in its parameters or undergoes ion channel block by selected neuromodulators ) .a number of experimental results show that intrinsic plasticity in msns may be prominently induced and regulated by intracellular calcium : it has been shown that e.g. the regulation of delayed rectifier -channels ( kv2.1 channels ) is effectively performed by influx and calcineurin activation in cultured hippocampal neurons , which can be achieved by glutamate stimulation .the regulation concerns marked dephosphorylation ( reduction of conductance ) plus a shift in voltage - dependence .it has also been shown that 20 s of nmda stimulation , or alternatively , increase of intracellular calcium , increases functional dopamine d1 receptor density at the membrane , which corresponds to an alteration in for d1 parameters , targeting a number of ion channels simultaneously . for deep cerebellar neurons, there has recently been some direct evidence on the conditions that induce intrinsic plasticity . here, alterations in intrinsic excitability can be induced by bursts of epsps and ipsps , accompanied by dendritic calcium transients . in striatal msns , it has been determined that synaptic stimulation at 1 hz does not cause significant calcium signals , but 10 hz stimulation causes moderate increases , and higher stimulation ( up to 100hz ) significantly raises calcium levels . in the simulations , neural activation ( )is estimated from the number of spikes generated , measured over the simulated behavior . in the model case ,the membrane potential is not used as a separate parameter , because membrane potential and spiking behavior are closely linked . however ,when a neuron exhibits prominent upstates ( periods of high membrane voltages with a variable number of actual spikes ) , membrane potential may need to be treated as an additional , independent component of , since a great part of the intracellular calcium signal in striatal msns is being generated from high - voltage activated nmda and l - type calcium channels .the number of spikes produced nonetheless seems important because of the phenomenon of backpropagating spikes .backpropagating spikes enhance the calcium signal , thus providing a basis for a prominent role for spiking behavior , or firing rate , for defining intracellular calcium .the presence of backpropagation of spikes has recently been confirmed for msns . 
in general , the induction of intrinsic plasticity may be linked not only to intracellular calcium .there exists an intricate intracellular system of interactions between diffusible substances like calcium and camp , as well as a number of crucial proteins ( rgs , calcineurin , pka , pkc , other kinases and phosphatases ) for regulating receptor sensitivity and ion channel properties , which are furthermore influenced by nm receptor activation .thus the learning parameter may be analyzed as being dependent not only on , but also on ] - where ] will regulate several ion channels in parallel , but there may be different for each ion channel . if activation function adaptation proceeds by nm - activated parameters , rather than unconditioned parameters , response to stimuli will consist of an early , non - modulated component , where the input pattern is reflected directly in the spiking frequency , and a later , modulated component , where habituation occurs for a learned pattern , or the stored pattern is reflected by overlaying a new stimulus and the stored pattern .nm signals orchestrate both adjustments in activation function and synaptic input , with nm activation often depressing synapses , but increasing the variability in the activation function through selected conductance changes ( activating -parameters ) . as a result , the input component of the signal is reduced in comparison to the stored intrinsic component after nm activation . presumably , this has a dynamic component , such that for a short time after a strong signal there is an input - dominant phase which is then followed by an intrinsic - dominant phase .there are different ideas at the present time what intrinsic plasticity can achieve within a network model of neuronal interaction .a recent review of intrinsic plasticity has left the authors wondering , whether ip acts mainly to maintain homeostasis , adapting to changes in synaptic strength by keeping neurons within certain ranges but without significant informational capacity , as in the model of . however , as we have shown , homeostatic adaptation does not exclude information storage under conditions of conditional read - out .the synergy between synaptic and intrinsic plasticity may take different forms , beyond e - s potentiation .in contrast to , have listed many possible functions and roles of intrinsic adaptive plasticity , based on a review of experimental evidence in different systems .we have greatly simplified the exposition here by concentrating on spike frequency as a major indicator of neural behavior .certainly the type of firing ( e.g. burst firing ) is also under control of neuromodulators , and may be influenced by the distribution and density of ion channels .single neuron computation is more complex than what can be shown with a single compartment model . 
in dendritic computation, the coupling of different compartments may be prominently affected by intrinsic plasticity .for instance , showed a loss of clustering for k+ channels on the membrane , induced by high glutamate stimulation , indicating a possible input - dependent regulation of dendritic integration .a recent study on concurrent simulation of synaptic coupling parameters and intrinsic ion channel conductances has concluded that intrinsic and synaptic plasticity can achieve similar effects for network operation .we have suggested that synaptic and intrinsic plasticity can substitute for each other , and furthermore that this essential functional parallelism could be an indication for _ information flow _ over time from one modality to the other .the direction of this information flow may be from intrinsic to synaptic for the induction of permanent , morphological changes ( such as dendritic spine morphology ) - however the results of in cerebellum have also shown the possibility of permanent intrinsic plasticity ( albeit in purkinje cells which lack nmda receptors and thus may be highly atypical neurons concerning learning properties ) .clearly the interaction between synaptic and intrinsic plasticity is still an open question .here we have shown a simple , local learning mechanism for intrinsic plasticity that allows to store pattern information without synaptic plasticity .this is different from theoretical approaches , where activation functions are only being modulated to optimize global measures of information transmission between neurons while the information is exclusively stored in synaptic weights .further work will be needed to investigate the smooth integration of synaptic and intrinsic plasticity and their respective functions in different systems .we wanted to show quantitatively that ip can have significant effects on spike frequency , dependent on the statistical structure of the input .in particular , low correlated input , or input during sensitive ( high - voltage membrane ) states induces the strongest variability of spike responses for different activation functions , while highly correlated input acts as drivers for neurons , eliminating subtle differences in activation function . we suggested that starting from a very general , natural format for a learning rule , which can be biologically motivated , we arrive at simple pattern learning , the basis for feature extraction , and realistic types of neural behavior : population - wide increases / decreases of neural firing rates to novel input stimuli , habituation to known stimuli and history - dependent distortions of individual stimuli . a significant application of this theoretical model exists in the observation of pervasive whole - cell adaptations in selected ion channels ( , ) after cocaine sensitization , with implications of the type of learning that underlies addiction .this would reduce the dynamic range of intrinsic plasticity .potentially , then , learning in striatum is mediated in part by intrinsic plasticity , and a reduction in inducible intrinsic plasticity or dynamic range of intrinsic plasticity after cocaine sensitization may contribute to the pathology of addiction .bargas j , howe a , eberwine j , cao y , surmeier dj ( 1994 ) cellular and molecular characterization of ca++ currents in acutely isolated , adult rat neostriatal neurons. 
journal of neuroscience 14 ( 11 ) , 6667 - 6686 .gruber aj , solla sa , surmeier dj , houk jc ( 2003 ) modulation of striatal single units by expected reward : a spiny neuron model displaying dopamine - induced bistability .journal of neurophysiology 90 , 1095 - 114 .hu xt , basu s , white fj ( 2004 ) repeated cocaine administration suppresses hva - ca2 + potentials and enhances activity of k+ channels in rat nucleus accumbens neurons .journal of neurophysiology 92 , 1597 - 607 .mahon s , deniau jm , charpier s , delord b ( 2000 ) role of a striatal slowly inactivating potassium current in short - term facilitation of corticostriatal inputs : a computer simulation study .journal of experimental psychology learning memory and cognition 7 , 357 - 62 .misonou h , mohapatra dp , park ew , leung v , zhen d , misonou k , anderson ae , trimmer js ( 2004 ) regulation of ion channel localization and phosphorylation by neuronal activity .nature neuroscience 7 , 711 - 8 .nisenbaum es , mermelstein pg , wilson cj , surmeier dj ( 1998 ) selective blockade of a slowly inactivating potassium current in striatal neurons by ( + /-) 6-chloro - apb hydrobromide ( skf82958 ) .synapse 29 , 213 - 24 .onn sp , fienberg aa , grace aa ( 2003 ) dopamine modulation of membrane excitability in striatal spiny neurons is altered in darpp-32 knockout mice .journal of pharmacology and experimental therapeutics 306 , 870 - 9 .schreurs bg , gusev pa , tomsic d , alkon dl , shi t ( 1998 ) intracellular correlates of acquisition and long - term memory of classical conditioning in purkinje cell dendrites in slices of rabbit cerebellar lobule hvi .journal of neuroscience 18 , 5498 - 507 .scott l , kruse ms , forssberg h , brismar h , greengard p , aperia a ( 2002 ) selective up - regulation of dopamine d1 receptors in dendritic spines by nmda receptor activation .proceedings of the national academy of sciences u s a 99 , 1661 - 4 .thompson lt , moyer jrj , disterhoft jf ( 1996 ) transient changes in excitability of rabbit ca3 neurons with a time course appropriate to support memory consolidation .journal of neurophysiology 76 , 1836 - 49 .zhang xf , cooper dc , white fj ( 2002 ) repeated cocaine treatment decreases whole - cell calcium current in rat nucleus accumbens neurons .journal of pharmacology and experimental therapeutics 301 , 1119 - 25 .
= \{0.6 ... 1.4 } ) for the slowly inactivating -channel ( kv1.2 , ) , the l - type calcium channel ( ) , and the inward rectifying k+ channel ( ) are shown at different membrane voltages ( a ) in an i - v plot , ( b ) as variability in conductance .
= \{0.6 ... 1.4 } ) for , , and as components of the activation function ( vs.
) .the activation function is defined as the membrane voltage response for different injected ( synaptic ) conductances ( ) , and computed by solving eq [ eq : mu - factorsg ] for the membrane voltage .
( b ) the l - type ca channel , and ( c ) for the set of ion channels used in the standard msn model .we see a rise time due to and overlapping inactivation dynamics in the -55 to -40 mv range .
neurons with independent poisson processes using different correlation parameters ( a , b ) .three slightly different neurons with are shown under both conditions .( a ) response variability and different firing rates for each neuron ( here : 20,26,40 hz ) occur with distributed ( low correlation ) input .( b ) highly correlated input produces reliable spiking and by implication a single firing rate ( 20hz ) .the upper panel shows the membrane voltage , the middle panel shows the membrane conductances , and the lower panel shows the synaptic input as conductance .
= 11.5hz ) ( a ) response of naive neurons to ( b ) response of -adapted neurons to ( c ) -adapted neurons tested with .average synaptic input ( ) for each neuron is shown on top .responses in ( a ) and ( b ) to the same input are different , a pattern similar to emerges in response to uniform ( noise ) pattern input in ( c ) .
= 11.5hz ) ( a ) habituation for neurons adapted to , ( b ) an inverse pattern for -adapted neurons tested with and ( c ) interference ( dampening of response ) for a new pattern ( naive : line - drawn bars , adaptive : filled bars ) .average synaptic input ( ) for each neuron is shown on top .( a ) shows a uniform response to patterned synaptic input and ( b ) a patterned response to uniform ( noise ) input .( c ) shows a difference of response for naive vs. -adapted neurons to a new pattern .[ neg2 ] | we present an unsupervised , local activation - dependent learning rule for intrinsic plasticity ( ip ) which affects the composition of ion channel conductances for single neurons in a use - dependent way . we use a single - compartment conductance - based model for medium spiny striatal neurons in order to show the effects of parametrization of individual ion channels on the neuronal activation function . we show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions . we emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input . we show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity ( sp ) . the adaptation of the spike response may result in either `` positive '' or `` negative '' pattern learning . however , read - out of stored information depends on a distributed pattern of synaptic activity to let intrinsic variability determine spike response . we briefly discuss the implications of this conditional memory on learning and addiction . * _ keywords : _ * single neuron model , striatal medium spiny neuron , ion channels , activation function , unsupervised learning , intrinsic plasticity , pattern learning , neuronal variability |
a variety of data in many different fields can be described by networks .examples include friendship and social networks , food webs , protein - protein interaction and gene regulatory networks , the world wide web , and many others .one of the fundamental problems in network science is link prediction , where the goal is to predict the existence of a link between two nodes based on observed links between other nodes as well as additional information about the nodes ( node covariates ) when available ( see , and for recent reviews ) .link prediction has wide applications .for example , recommendation of new friends or connections for members is an important service in online social networks such as facebook . in biological networks , such as protein - protein interaction and gene regulatory networks ,it is usually time - consuming and expensive to test existence of links by comprehensive experiments ; link prediction in these biological networks can provide specific targets for future experiments .there are two different settings under which the link prediction problem is commonly studied . in the firstsetting , a snapshot of the network at time , or a sequence of snapshots at times , is used to predict new links that are likely to appear in the near future ( at time ) . in the secondsetting , the network is treated as static but not fully observed , and the task is to fill in the missing links in such a partially observed network .these two tasks are related in practice , since a network evolving over time can also be partially observed and a missing link is more likely to emerge in the future . from the analysis point of view , however , these settings are quite different ; in this paper , we focus on the partially observed setting and do not consider networks evolving over time .there are several types of methods for the link prediction problem in the literature .the first class of methods consists of unsupervised approaches based on various types of node similarities .these methods assign a similarity score to each pair of nodes and , and higher similarity scores are assumed to imply higher probabilities of a link .similarities can be based either on node attributes or solely on the network structure , such as the number of common neighbors ; the latter are known as structural similarities .typical choices of structural similarity measures include local indices based on common neighbors , such as the jaccard index or the adamic - adar index , and global indices based on the ensemble of all paths , such as the katz index and the leicht - holme - newman index .comprehensive reviews of such similarity measures can be found in and .another class of approaches to link prediction includes supervised learning methods that use both network structures and node attributes .these methods treat link prediction as a binary classification problem , where the responses are indicating whether there exists a link for a pair , and the predictors are covariates for each pair , which are constructed from node attributes .a number of popular supervised learning methods have been applied to the link prediction problem .for example , and use the support vector machine with pairwise kernels , and compares the performance of several supervised learning methods .other supervised methods use probabilistic models for incomplete networks to do link prediction , for example , the hierarchical structure models , latent space models , latent variable models , and stochastic relational models .our approach falls in 
the supervised learning category , in the sense that we make use of both the node similarities and observed links . however , one difficulty in treating link prediction as a straightforward classification problem is the lack of certainty about the negative and positive examplesthis is particularly true for negative examples ( absent edges ) . in biological networksin particular , there may be no certain negative examples at all .for instance , in a protein - protein interaction network , an absent edge may not mean that there is no interaction between the two proteins instead , it may indicate that the experiment to test that interaction has not been done , or that it did not have enough sensitivity to detect the interaction .positive examples could sometimes also be spurious for example , high - throughput experiments can yield a large number of false positive protein - protein interactions . herewe propose a new link prediction method that allows for the presence of both false positive and false negative examples .more formally , we assume that the network we observe is the true network with independent observation errors , i.e. , with some true edges missing and other edges recorded erroneously .the error rates for both kinds of errors are assumed unknown , and in fact can not be estimated under this framework .however , we can provide rankings of potential links in order of their estimated probabilities , for node pairs with observed links as well as for node pairs with no observed links .these relative rankings rather than absolute probabilities of edges are sufficient in many applications .for example , pairs of proteins without observed interactions that rank highly could be given priority in subsequent experiments .to obtain these rankings , we utilize node covariates when available , and/or network topology based on observed links .the rest of the paper is organized as follows . in section [ sec :model ] , we specify our ( rather minimal ) model assumptions for the network and the edge errors .we propose link ranking criteria for both directed and undirected networks in section [ sec : meth ] .the algorithms used to optimize these criteria are discussed in section [ sec : alg ] . in section [ sec : sim ] we compare performance of proposed criteria to other link prediction methods on simulated networks . in section [ sec: data ] , we apply our methods to link prediction in a protein - protein interaction network and a school friendship network .section [ sec : summary ] concludes with a summary and discussion of future directions .a network with nodes ( vertices ) can be represented by an adjacency matrix ] provides rankings for both these special cases and the general problem , and thus we focus on estimating for the rest of the paper .in this section , we propose criteria for estimating the probabilities of edges in the observed network , , for both directed and undirected networks .the criteria rely on a symmetric matrix ] is a symmetric matrix , and is the number of communities in the network .suppose we have the best similarity measure we can possibly hope to have based on the truth , , where is the indicator function .in that case , implies if , whereas the sum of the weights would be misleading . 
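the observation model described above is easy to simulate , and the experiments later in the paper generate partially observed networks in essentially this way . the snippet below is a sketch with our own variable names ; the two error rates are free parameters ( in the paper they are unknown and not estimable ) .

```python
import numpy as np

def observe_network(A_true, p_false_neg=0.5, p_false_pos=0.0, seed=0):
    """simulate a partially observed undirected network.

    each true edge is missed independently with probability p_false_neg and
    each absent edge is spuriously recorded with probability p_false_pos.
    """
    rng = np.random.default_rng(seed)
    n = A_true.shape[0]
    keep = rng.random((n, n)) >= p_false_neg
    add = rng.random((n, n)) < p_false_pos
    A_obs = np.where(A_true == 1, keep, add).astype(int)
    A_obs = np.triu(A_obs, 1)
    A_obs = A_obs + A_obs.T          # symmetrize; diagonal stays zero
    return A_obs

# toy usage: hide half of the edges of an erdos-renyi "true" network
rng = np.random.default_rng(1)
n = 200
A_true = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
A_true = A_true + A_true.T
A_obs = observe_network(A_true, p_false_neg=0.5)
print(A_true.sum() // 2, "true edges ->", A_obs.sum() // 2, "observed edges")
```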
using as the measure of pair similarity , we propose estimating for undirected networks by similarly to the directed case ,if we have information about true positive and negative examples , we can use a partial - sum criterion where if it is known that , otherwise .the last component we need to specify is the node similarity matrix .one typical situation is when we have reasons to believe that the external node covariates are related to the structure of the network , in which case it is natural to use covariate information to construct .though more complicated formats do exist , node covariates are typically represented by an matrix where is the value of variable on node .then can be taken to be some similarity measure between the -th and -th rows of .for example , if contains only numerical variables and has been standardized , we can use the exponential decay kernel , where is the euclidean vector norm .when node covariates are not available , node similarity is usually obtained from the topology of the observed network , i.e. , is large if and have a similar pattern of connections with other nodes . for undirected networks ,a simple choice of could be where denotes cardinality of a set .this particular measure turns out to be not very useful : since most real networks are sparse , most entries of any -th column will be 0 , and thus most of s would be large .a more informative measure is the jaccard index , where is the set of neighbors of node .the directed networks case is similar , except we need to count the in and the out links separately .the formulas corresponding to and become where and .the proposed link prediction criteria are convex and quadratic in parameters , and thus optimization is fairly straightforward .the obvious approach is to treat the matrix as a long vector with elements ( or in the undirected case ) , and solve the linear system obtained by taking the first derivative of any criterion above with respect to this vector .however , solving a system of linear equations could be challenging for large - scale problems ; the number of parameters here is , and so the linear system requires memory .however , if is sparse , or sparsified by applying thresholding or some other similar method , then solving the linear system is the efficient choice .if the matrix is not sparse , an iterative algorithm with sequential updates that only requires memory would be a better choice than solving the linear system .we propose an iterative algorithm following the idea of block coordinate descent . a block coordinate descent algorithm partitions the coordinates into blocks and iteratively optimizes the criterion with respect to each block while holding the other blocks fixed .first , we derive the update equations for directed networks .note and can be written in the general form where for and for . for any matrix ,let be the row of .we treat as a block , and update iteratively . define . then let be an diagonal matrix with . then plugging and into , and taking the first derivative of with respect to , we obtain .\nonumber\end{aligned}\ ] ] solving with respect to , we obtain the updating formula where is the value of at iteration . 
this update is fast to compute but its derivation relies on the product form of and , and thus is not directly applicable in the undirected case , where is used as the similarity measure .however , we can still approximate with a product , using the fact that for , { x^q+y^q}= \max(x , y)$ ] .thus , for sufficiently large , we have ^q \approx ( w_{ii'}w_{jj'})^q + ( w_{ij'}w_{ji'})^q . \label{appx}\end{aligned}\ ] ] further , is a monotone transformation of and can also serve as a similarity measure .based on , we propose to substitute the following approximate criterion for undirected networks , where for the full sum criterion and for the partial sum criterion . by symmetry , this is now in the same form as , with each term in the sum containing a product of and , and therefore can be solved by block coordinate descent with an analogous updating equation as that in the directed network case . in practice, we found that when is sparse or truncated to be sparse , solving the linear system can be much faster than the block coordinate descent method ; however , when is dense and the number of nodes is reasonably large , the block coordinate descent method dominates directly solving linear equations .in this section , we test performance of our link prediction methods on simulated networks . in all cases ,each network consists of nodes , and node s covariates are independently generated from a multivariate normal distribution with .each is generated independently , with .we consider the following functions : the right hand column gives sparser versions of functions in the left hand column ( subtracting a constant within the logit link functions lowers the overall degree ) , which we use to compare dense and sparse networks ( the average degrees of all these networks are reported in figures [ fig : sim1 ] and [ fig : sim2 ] ) .functions ( a ) and ( b ) are asymmetric in and , giving directed networks , while ( c ) and ( d ) are symmetric functions corresponding to undirected networks .further , and are linear functions ; is the projection model proposed in , under which the link probability is determined by the projection of onto the direction of , and is an undirected version of the projection model .we also generate indicators s as independent bernoulli variables taking values 1 and 0 with equal probability , and set .this setup corresponds to the `` partially observed '' network of the title , where all the observed edges are true but the missing edges may or may not be true 0s .since we have node covariates affecting the probabilities of links in this case , we define the similarity matrix by where we choose .after truncating at 0.1 , we optimize all criteria by solving linear equations , with chosen by 5-fold cross validation . the performance of link prediction is evaluated on the `` test '' set .we report roc curves , which only depend on the rankings of the estimates rather than their numerical values .specifically , let be the ranking of on the test set in descending order .for any integer , we define false positives as pairs ranked within top but without links in the true network ( ) , and true positives as pairs ranked within top with . then the true positive rate ( tpr ) and the false positive rate ( fpr ) are defined by the roc curves showing the false positive rate vs. 
the true positive rate over a range of values are shown in figures [ fig : sim1 ] ( directed networks ) and [ fig : sim2 ] ( undirected networks ) .each curve is the average of 20 replicates .we also show the roc curve constructed from true s as a benchmark .. overall , both the full sum and the partial sum criteria perform well .there is little difference between directed network models and their undirected versions .as expected , the partial sum criterion always gives better results since it has more information and only uses the true positive and negative examples for training .but its performance is quite comparable to the completely unsupervised full sum criterion , except perhaps for model .the gaps between the unsupervised full sum criterion and semi - supervised partial sum criterion become smaller for sparse networks , as the false negatives in the full sum are only a small proportion of the large number of true negatives in a sparse network .the roc curve obtained from the true model in sparse networks is better than in the corresponding dense networks ; this seemingly counter - intuitive finding is also explained by the large number of 0s in sparse networks .however , gaps between both our link prediction methods and the true model are larger in all the sparse networks than in their dense counterparts .this confirms the observation that a small number of positive examples in sparse networks makes the link prediction problem challenging .our first application is to an undirected network containing yeast protein - protein interactions from .this network was edited to contain only highly reliable interactions supported by multiple experiments , resulting in 984 protein nodes and 2438 edges , with the average node degree about 5 .we take this verified network to be the true underlying network . also constructed a matrix measuring similarities between proteins based on gene expression , protein localization , phylogenetic profiles and yeast two - hybrid data , which we use as the node similarity matrix for link prediction . here , we compare the full sum criterion , the partial sum criterion , and the latent variable model proposed by . 
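both kinds of node similarity used in the paper - a kernel on node covariates and the jaccard index on observed neighbourhoods - can be prototyped in a few lines . the sketch below uses an exponential - decay kernel exp( - gamma * || x_i - x_j || ) with a placeholder bandwidth gamma , and the undirected jaccard index ; the directed variant , which treats in - and out - neighbourhoods separately , is omitted .

```python
import numpy as np

def covariate_kernel(X, gamma=1.0):
    """exponential-decay similarity on (standardized) node covariates:
    K[i, j] = exp(-gamma * ||x_i - x_j||)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-gamma * d)

def jaccard_similarity(A):
    """jaccard index between neighbourhoods of an undirected 0/1 adjacency:
    |N(i) & N(j)| / |N(i) | N(j)|, taken as 0 when both neighbourhoods are empty."""
    A = (A > 0).astype(float)
    common = A @ A.T                                  # counts of common neighbours
    deg = A.sum(axis=1)
    union = deg[:, None] + deg[None, :] - common
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(union > 0, common / union, 0.0)

# toy usage
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))                      # node covariates
A = np.triu((rng.random((50, 50)) < 0.1).astype(int), 1)
A = A + A.T                                           # observed undirected graph
K_cov = covariate_kernel(X, gamma=0.5)
K_net = jaccard_similarity(A)
print(K_cov.shape, K_net.shape)
```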
to test prediction , we generate indicators s as independent bernoulli variables taking value 1 with probability , and set .we consider three different values of , , corresponding to different amounts of available information .we use the block coordinate descent algorithm proposed in section [ sec : alg ] to approximately optimize and , with and chosen by cross - validation .the latent variable model depends on a tuning parameter , the dimension of the latent space .we fix since larger values of do not significantly change the performance in this example .we again use roc curves to evaluate the link prediction performance on the set .each roc curve in figure [ fig : ppi ] is the average of 10 random realizations of s .the semi - supervised criterion always performs better than the unsupervised criterion , as it should .further , the semi - supervised criterion almost always outperforms the latent variable model , except for very small values of the false positive rate , and the fully unsupervised criterion also starts to outperform the latent variable model as the false positive rate increases .the latent variable model is also more sensitive to the sampling rate , with performance deteriorating for .this is because the model relies heavily on the structure of the network , and a low sampling rate may substantially distort the overall network topology .on the other hand , we use the node similarity matrix which depends only on the features of the proteins , and is thus unaffected by the sampling rate .this dataset is a school friendship network from the national longitudinal study of adolescent health ( see for detailed information ) .this network contains 1011 high school students and 5459 directed links connecting students to their friends , as reported by the students themselves .the average degree of this network is also around .here we test our two link prediction criteria , with the same settings for as in the protein example .since the latent variable model of is not applicable to directed networks , we omit it here . due to lack of node covariates, we construct a network - based similarity by using the jaccard index defined in .we again apply block coordinate descent to minimize the criteria with chosen by cross - validation , and report the average roc curves over 10 realizations of s . 
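since every comparison is made through rankings , evaluation reduces to sweeping a cut - off k down the ranked test pairs and recording the true and false positive rates defined in the simulation section . the helper below is a generic sketch of that computation ; names and the toy data are ours .

```python
import numpy as np

def roc_from_scores(scores, truth):
    """roc curve from ranked test pairs.

    scores: estimated link scores for the held-out pairs
    truth : 0/1 indicators of true links for the same pairs
    returns (fpr, tpr) obtained by sweeping the cut-off k down the ranking.
    """
    scores = np.asarray(scores, dtype=float)
    truth = np.asarray(truth, dtype=int)
    order = np.argsort(-scores)            # descending by score
    hits = truth[order]
    tp = np.cumsum(hits)                   # true positives among the top k
    fp = np.cumsum(1 - hits)               # false positives among the top k
    tpr = tp / max(truth.sum(), 1)
    fpr = fp / max((1 - truth).sum(), 1)
    return fpr, tpr

# toy usage: scores that are informative but noisy
rng = np.random.default_rng(3)
truth = (rng.random(1000) < 0.1).astype(int)
scores = truth + 0.8 * rng.standard_normal(1000)
fpr, tpr = roc_from_scores(scores, truth)
auc = np.sum(np.diff(fpr) * 0.5 * (tpr[1:] + tpr[:-1]))   # trapezoidal area under the curve
print(f"auc ~ {auc:.2f}")
```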
as shown in figure [ fig : school ] , both criteria perform fairly well for and , but fail for , as the sampling rate is too small for to capture the overall network topology .this does not happen in the protein - protein interactions network , since is constructed from covariates on proteins and is unaffected by sub - sampling .in this article , we have proposed a new framework for link prediction that allows uncertainty in observed links and non - links of a given network .our method can provide relative rankings of potential links for pairs with and without observed links .the proposed link prediction criteria are fully non - parametric and essentially model - free , relying only on the assumption that similar node pairs have similar link probabilities , which is valid for a wide range of network models .one direction we would like to explore in the future is to combine more specific parametric network models with our non - parametric approach , with the goal of achieving both robustness and efficiency .we are also investigating consistency properties of our method , which is challenging because it requires developing a novel theoretical framework for evaluating consistency of rankings .we are also developing extensions that would allow the probabilities of errors , and , to depend on the underlying probabilities of links .this would allow , for example , making highly probable links more likely to be observed correctly .ultimately , we would also like to incorporate the general framework of link uncertainty into other network problems , for example , community detection .h. kashima , t. kato , y. yamanishi , m. sugiyama , and k. tsuda .link propagation : a fast semi - supervised learning algorithm for link prediction . in _ proceedings of the 2009 siam international conference on data mining _ , 2009 .k. miller , t. griffiths , and m.i .nonparametric latent feature models for link prediction . in y.bengio , d. schuurmans , j. lafferty , and c. williams , editors , _ advances in neural information processing systems ( nips ) _ , volume 22 , 2010 .k. yu , w. chu , s. yu , v. tresp , and z. xu . stochastic relational models for discriminative link prediction . in _ proceedings of neural information precessing systems _, pages 15531560 . mit press , cambridge ma , 2007 . | link prediction is one of the fundamental problems in network analysis . in many applications , notably in genetics , a partially observed network may not contain any negative examples of absent edges , which creates a difficulty for many existing supervised learning approaches . we develop a new method which treats the observed network as a sample of the true network with different sampling rates for positive and negative examples . we obtain a relative ranking of potential links by their probabilities , utilizing information on node covariates as well as on network topology . empirically , the method performs well under many settings , including when the observed network is sparse . we apply the method to a protein - protein interaction network and a school friendship network . |
a fundamental problem of broad theoretical and practical interest is to characterize the maximum correlation between the outputs of a pair of functions of random sequences .consider the two distributed agents shown in figure [ fig : agents ] .a pair of correlated discrete memoryless sources ( dms ) are fed to the two agents .these agents are to each make a binary decision .the goal of the problem is to maximize the correlation between the outputs of these agents subject to specific constraints on the decision functions .the study of this setup has had impact on a variety of disciplines , for instance , by taking the agents to be two encoders in the distributed source coding problem , or two transmitters in the interference channel problem , or alice and bob in a secret key - generation problem , or two agents in a distributed control problem .a special case of the problem is the study of common - information ( ci ) generated by the two agents . as an example , consider two encoders in a slepian - wolf ( sw ) setup .let , and be independent , non - constant binary random variables .then , an encoder observing the dms , and an encoder observing agree on the value of with probability one .the random variable is called the ci observed by the two encoders .these encoders require a sum - rate equal to to transmit the source to the decoder .this gives a reduction in rate equal to the entropy of , compared to the transmission of the sources over independent point - to - point channels .the gain in performance is directly related to the entropy of the ci .so , it is desirable to maximize the entropy of the ci between the encoders .in , the authors investigated multi - letterization as a method for increasing the ci .they showed that multi - letterization does not lead to an increase in the ci .more precisely , they prove the following statement : _ let and be two sequences of dmss .let and be two sequences of functions which converge to one another in probability .then , the normalized entropies , and are less than or equal to the entropy of the ci between and for large . _a stronger version of the result was proved by witsenhausen , where maximum correlation between the outputs is upper - bounded subject to the following restrictions on the decision functions : + _ 1 ) the entropy of the binary output is fixed .+ 2 ) the agents cooperate with each other ._ it was shown that maximum correlation is achieved if both users output a single element of the string without further processing ( e.g. each user outputs the first element of its corresponding string ) .this was used to conclude that common - information can not be induced by multi - letterization . while , the result was used extensively in a variety of areas such as information theory , security , and control , in many problems , there are additional constraints on the set of admissible decision functions . for example, one can consider constraints on the ` effective length ' of the decision functions .this is a valid assumption , for instance , in the case of communication systems , the users have lower - bounds on their effective lengths due to the rate - distortion requirements in the problem . in this paper ,the problem under these additional constraints is considered . 
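the role of the effective length can be seen in a small monte - carlo experiment : for binary sequences related through a binary symmetric channel , a single - coordinate ( ` dictator ' ) decision retains the per - symbol correlation , while an additive function of n coordinates ( a parity , effective length n ) has output correlation decaying like the n - th power of the per - symbol correlation . the snippet below only illustrates this phenomenon ; it is not the bound derived in the paper , and the crossover probability and block length are arbitrary choices .

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, eps = 10, 200_000, 0.1        # block length, monte-carlo size, bsc crossover
rho = 1 - 2 * eps                        # per-coordinate correlation of the +/-1 symbols

X = rng.integers(0, 2, size=(trials, n))
flips = (rng.random((trials, n)) < eps).astype(int)
Y = X ^ flips                            # y_i = x_i passed through a bsc(eps)

def output_correlation(a, b):
    """empirical correlation of two binary decisions mapped to +/-1 (both are unbiased)."""
    return np.mean((1.0 - 2.0 * a) * (1.0 - 2.0 * b))

dictator_x, dictator_y = X[:, 0], Y[:, 0]                    # effective length 1
parity_x, parity_y = X.sum(axis=1) % 2, Y.sum(axis=1) % 2    # effective length n

print("dictator:", round(output_correlation(dictator_x, dictator_y), 3), " vs rho    =", rho)
print("parity  :", round(output_correlation(parity_x, parity_y), 3), " vs rho**n =", round(rho ** n, 3))
```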
a new upper - bound on the correlation between the outputs of arbitrary pairs of boolean functionsis derived .the bound is presented as a function of the dependency spectrum of the boolean functions .this is done in several steps .first , the effective length of an additive boolean function is defined .then , we use a method similar to , and map the boolean functions to the set of real - valued functions .using tools in real analysis , we find an additive decomposition of these functions .the decomposition components have well - defined effective lengths . using the decompositionwe find the dependency spectrum of the boolean function .the dependency spectrum is a generalization of the effective length and is defined for non - additive boolean functions .lastly , we use the dependency spectrum to derive the new upper - bound .the rest of the paper is organized as follows : section [ sec : not ] presents the notation used in the paper .section [ sec : eff ] develops useful mathematical machinery to analyze boolean function .section [ sec : corr ] contains the main result of the paper .finally , section [ sec : con ] concludes the paper .in this section , we introduce the notation used in this paper .we represent random variables by capital letters such as .sets are denoted by calligraphic letters such as .particularly , the set of natural numbers and real numbers are shown by , and , respectively . for random variables ,the -length vector is denoted by . the binary string is written as .the vector of random variables , j_i\neq j_k ] .for example , take , the vector is denoted by , and the vector by . for two binary strings , we write if and only if ] , where the addition operator is the binary addition , the effective length is defined as the cardinality of the set . for a general boolean function ( e.g. non - additive ) , we find a decomposition of into a set of functions whose effective length is well - defined .first , we provide a mapping from the set of boolean functions to the set of real functions .this allows us to use the tools available in real analysis to analyze these functions .fix a discrete memoryless source , and a boolean function defined by .let .the real - valued function corresponding to is represented by , and is defined as follows : note that has zero mean and variance .[ rem : exp_0 ] the random variable has finite variance on the probability space .the set of all such functions is denoted by .more precisely , we define as the separable hilbert space of all measurable functions . since x is a dms , the isomorphy relation holds , where indicates the tensor product .the hilbert space is the space of all measurable functions .the space is spanned by the two linearly independent functions and , where .we conclude that the space is two - dimensional .[ ex : ex1 ] the tensor operation in is real multiplication ( i.e. ) .let \} ] .example [ ex : ex1 ] gives a decomposition of the space .next , we introduce another decomposition of which turns out to be very useful .let be the subset of all measurable functions of which have 0 mean , and let be the set of constant real functions of .we argue that gives a decomposition of . and are linear subspaces of . is the null space of the linear functional which takes an arbitrary function to its expected value .the null space of any non - zero linear functional is a hyper - space in .so , is a one - dimensional subspace of . from remark [ rem : exp_0 ] , .we conclude that any element of can be written as . 
is also one dimensional .it is spanned by the function .consider an arbitrary element .one can write where , and . [ex : one_dim ] replacing with in , we have : where and , in ( a ) , we have used the distributive property of tensor products over direct sums .equation , can be interpreted as follows : for any , we can find a decomposition , where . can be viewed as the component of which is only a function of . in this sense ,the collection }i_j = k\} ] using the following result in linear algebra : let ] be the basis for where is the dimension of .then , any element }\mathcal{h}_i ] .[ lem : tensor_dec ] since s , ] .then : where we have used linearity of expectation in ( a ) , and the last two equalities use the fact that which means it satisfies properties and .so far we have shown that .now assume .then : so , .by assumption we have .\1 ) for two n - length binary vectors , and , we write if ] be the element of the standard basis. then . by the smoothing property of expectation , .assume that , .then , + 2 ) this statement is also proved by induction . is a function of , so by induction is also a function of .+ 3 ) let ] .we have : where we have used the memoryless property of the source in ( a ) and ( b ) results from the smoothing property of expectation .we extend the argument by noetherian induction .fix .assume that , and . the second and third terms in the above expression can be simplified as follows .first , note that : our goal is to simplify .we proceed by considering two different cases : + * case 1 : * and : replacing the terms in the original equality we get : where in ( b ) we have used that s are uncorrelated , and ( a ) is proved below : + * case 2 : * assume : + * case 3 : * when the proof is similar to case 2 .+ 4 ) clearly when , the claim holds .assume it is true for all such that .take and , i_t=1 ] .we prove ( c ) in lemma [ lem : init ] below . in ( d ) we have used proposition [ pr : partfun ] .using equations and we get : + * step 2 : * we use the results from step one to derive a bound on .define , , , and , then we write this equation in terms of , q , and r using the following relations : solving the above we get : we replace , and in by their values in : on the other hand , where the last equality follows from the fact that s are uncorrelated .this proves the lower bound .next we use the lower bound to derive the upper bound . +* step 3 : * the upper - bound can be derived by considering the function to be the complement of ( i.e. . ) in this case .the corresponding real function for is : so , .using the same method as in the previous step , we have : on the other hand .so , this completes the proof .h. s. witsenhausen , `` on sequences of pair of dependent random variables , '' _ siam journal of applied mathematics _ ,28 , no . 1 ,pp . 100113 , 1975 . f. s. chaharsooghi , a. g. sahebi , and s. s. pradhan , `` distributed source coding in absence of common components , '' in _ information theory proceedings ( isit ) , 2013 ieee international symposium on _ , july 2013 , pp .13621366 .a. mahajan , a. nayyar , and d. teneketzis , `` identifying tractable decentralized control problems on the basis of information structure , '' in _ 2008 46th annual allerton conference on communication , control , and computing _ , sept 2008 , pp .14401449 .m. reed and b. simon , _ methods of modern mathematical physics , i : functional analysis_.1em plus 0.5em minus 0.4emnew york : academic press inc . ltd . ,1972 . f. shirani , m. heidari , s. s. 
pradhan , _ on the sub - optimality of single - letter coding in multi - letter communications _ , arxiv.org , jan 2017 .f. shirani , s. s. pradhan , _ on the correlation between boolean functions of sequences of random variables _ , arxiv.org , jan 2017 . | in this paper , we establish a new inequality tying together the effective length and the maximum correlation between the outputs of an arbitrary pair of boolean functions which operate on two sequences of correlated random variables . we derive a new upper - bound on the correlation between the outputs of these functions . the upper - bound is useful in various disciplines which deal with common - information . we build upon witsenhausen s bound on maximum - correlation . the previous upper - bound did not take the effective length of the boolean functions into account . one possible application of the new bound is to characterize the communication - cooperation tradeoff in multi - terminal communications . in this problem , there are lower - bounds on the effective length of the boolean functions due to the rate - distortion constraints in the problem , as well as lower bounds on the output correlation at different nodes due to the multi - terminal nature of the problem . |
to continuous introduction of mobile devices and services , future cellular systems are facing a significantly increased number of mobile devices requesting large data volume . to accommodate such a large growth of mobile devices , there are active researches on the 5th generation ( 5 g ) cellular system .new targets for the 5 g cellular system are to support latency - sensitive applications such as tactile internet and low energy consumption for machine - type communication ( mtc ) or the internet of things ( iot ) .unfortunately , a cellular system can not achieve the two targets simultaneously , but a non - trivial tradeoff can exist .although this tradeoff is very important to 5 g cellular system designers , related researches are rare .this is because it is often hard to deal with the latency and the energy consumption analytically so that intensive simulation - based network plannings are widely spread , .however , this approach becomes impractical when the network components , such as the number of users and bs antennas are scaled up .more viable approach is to analyze the network .this paper mainly concentrates on the analysis about the tradeoff between the latency and the energy consumption in a promising 5 g cellular system . in 5 g cellular systems ,there has been great interest to a large - scale antenna system ( lsas ) , a.k.a .massive multiple - input multiple - output ( mimo ) , in which very large number of antennas are equipped at a base station ( bs ) to serve many users simultaneously .its inherent merits come from massive spatial dimensions , which include i ) generating sharp beams for intended users to improve spectral efficiency by suppressing unintended interference , , ii ) reducing transmit energy while guaranteeing quality of service ( qos ) , and iii ) allowing a complexity - efficient transceiver algorithm . in order to achieve such advantages ,an appropriate channel state information ( csi ) acquisition process is essential . to acquire csi ,a widely - accepted approach is the training - based transmission in which a frame is divided into two phases : one is the training phase , in which users transmit known training signals and the bs estimates the csi , and the other is the data transmission phase , in which the users transmit information - bearing signals and the bs extracts the information by utilizing the estimated csi .even if the training - based transmission is not optimal in information - theoretic point of view , it gives an efficient way to acquire the csi as well as to provide the optimal degrees of freedom in the high signal - to - noise ratio ( snr ) regime . 
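the tension between the two phases can be caricatured with a single - user , single - antenna computation : more training symbols improve the channel estimate but leave fewer symbols for data . the sketch below uses the standard mmse estimation error 1/(1 + tau * rho ) for a rayleigh coefficient and the usual worst - case - noise rate lower bound , with the fading of the estimate averaged out for simplicity ; it is meant only as an illustration of the training / data split , not as the multi - user lsas analysis of this paper , and the block length and snr are placeholder values .

```python
import numpy as np

T = 168      # symbols per coherence block (assumption; cf. the lte sub-frame example later)
rho = 1.0    # per-symbol snr, taken equal in both phases (placeholder)

def effective_rate(tau, T, rho):
    """crude single-antenna effective rate with tau pilot symbols out of T.

    mmse estimation error of h ~ cn(0,1) from tau pilots : mse = 1/(1 + tau*rho)
    data-phase sinr with the estimation error treated as extra noise,
    and |h_hat|^2 replaced by its mean (1 - mse) for simplicity:
        sinr = rho*(1 - mse) / (1 + rho*mse)
    the pre-log factor (1 - tau/T) is the fraction of time left for data symbols.
    """
    mse = 1.0 / (1.0 + tau * rho)
    sinr = rho * (1.0 - mse) / (1.0 + rho * mse)
    return (1.0 - tau / T) * np.log2(1.0 + sinr)

taus = np.arange(1, T)
rates = np.array([effective_rate(t, T, rho) for t in taus])
best = int(taus[np.argmax(rates)])
print(f"best training length ~ {best} symbols, effective rate ~ {rates.max():.3f} bits/symbol")
```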
in order to analyze the latency in the training - based lsas ,it is necessary to optimize the user scheduling policy as well as the resource allocation under reasonable and practical constraints .if this optimization is not performed , it often gives an inappropriate cellular system design .the optimization of the training - based transmission is firstly investigated by hassibi and hochwald .they consider the mimo point - to - point channel with a capacity - approaching transmitter / receiver pair and successfully derive the optimal system parameters as a function of snr and other parameters .later , this results are extended to the mimo broadcast channel , multiple access channel , relay channel , and interference channel .however , these works optimize the energy and time dedicated to the training phase only under a given user set so that it can not be directly applied to the latency - energy tradeoff in the lsas . in order to evaluate the latency of the lsas ,it is necessary to further optimize those variables under the optimal scheduling policy .the scheduling policies to minimize the latency ( or delay ) under a minimum rate constraint or to maximize spectral efficiency under a maximum latency constraint have been widely investigated in literature under various system models . in , the system average delay is optimized by using combined energy / rate control under average symbol - energy constraints . in ,delay - optimal energy and subcarrier allocation is proposed for orthogonal frequency division multiple access ( ofdma ) . in ,the energy minimizing scheduler , by adapting energy and rate based on the queue and channel state is proposed .however , most of them assume perfect csi at transmitter and receiver so that it often overestimates the network - wise performance .also , their scheduling policies are too complicated to be analyzed for providing an intuitive insight on the network - wise performance .thus , a practically optimal scheduling policy for the training - based lsas is needed and an intuitive analysis is desired to provide an insight on the latency - energy tradeoff in the lsas .decreasing the latency in the lsas is closely related to increasing the spectral efficiency , because higher spectral efficiency results in a smaller transmission completion time if the number of users and their rate constraints are given .in addition , the spectral efficiency of a multiple - access channel with bs antennas and scheduled users is asymptotically expressed as as , which implies that the spectral efficiency can be enhanced by scheduling users as many as possible in the lsas .however , most literature assumes that _ orthogonal _ pilots are allocated to users so that the maximum number of scheduled users is limited by the number of available pilots in practice .actually , there is no reason that orthogonal pilots are optimal for the latency - energy tradeoff so that it is natural to consider _ non - orthogonal _ pilots in general .there are a few results related to the case using non - orthogonal pilots . in ,optimal non - orthogonal pilots for minimizing channel estimation error are derived and it turns out that finding the optimal non - orthogonal pilots is equivalent to solving the grassmannian subspace packing problem . 
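to make the notion of non - orthogonal pilots concrete , the snippet below draws k unit - norm pilot sequences of length tau and compares their worst - case cross - correlation with the welch lower bound sqrt( ( k - tau ) / ( tau * ( k - 1 ) ) ) , which any packing ( including the grassmannian - type packings mentioned above ) must satisfy when k > tau . the random construction is only a baseline , not an optimal packing .

```python
import numpy as np

def random_pilots(K, tau, seed=0):
    """K unit-norm complex pilot sequences of length tau (as columns)."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((tau, K)) + 1j * rng.standard_normal((tau, K))
    return S / np.linalg.norm(S, axis=0, keepdims=True)

def max_cross_correlation(S):
    G = np.abs(S.conj().T @ S)        # |<s_i, s_j>| for all pairs
    np.fill_diagonal(G, 0.0)
    return G.max()

K, tau = 16, 8                        # more users than pilot symbols
S = random_pilots(K, tau)
welch = np.sqrt((K - tau) / (tau * (K - 1)))
print("random non-orthogonal pilots:", round(float(max_cross_correlation(S)), 3))
print("welch lower bound           :", round(float(welch), 3))

# with tau >= K an orthogonal set exists, e.g. columns of a normalized dft matrix
F = np.exp(-2j * np.pi * np.outer(np.arange(K), np.arange(K)) / K) / np.sqrt(K)
print("orthogonal pilots (tau = K) :", round(float(max_cross_correlation(F)), 3))
```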
in ,an iterative algorithm is proposed to find optimal non - orthogonal pilots for maximizing the number of users with a minimum rate constraint in a downlink lsas .however , they still do not address the effect of the non - orthogonal pilots on the latency and it would be very interesting to find whether the use of non - orthogonal pilots can reduce the latency and when and how much reduction can be obtained over the case of using orthogonal pilots . in this paper , we are interested in an uplink training - based lsas serving many users with an average energy constraint , in which each user has a limited average energy for transmitting a frame . in addition , we assume a block rayleigh fading model and a practical receiver such as the maximum ratio combining ( mrc ) or the zero - forcing ( zf ) receiver and focus on the resource allocation and multiple access strategy ( to be specific , pilot allocation , user grouping and scheduling , and energy allocation ) . the main target of this paper is to address the following question : * _ how much time is needed for guaranteeing the minimum throughput to all users in the uplink training - based lsas ? _ * it is , in general , hard to address this question analytically so that we look into the two asymptotic regimes , high and low energy regimes and successfully derive the effect on the network latency according to the energy consumption . the main contributions of this paper are summarized as follows : * we optimize the uplink scheduling policy for minimizing the latency with guaranteeing the minimum rate constraint .the optimizing variables are the scheduling groups in which users are simultaneously scheduled , the scheduling portion indicating how often each scheduling group actually transmits , and the energy allocation indicating how much portion of energy is dedicated to the training phase .this problem is transformed into an equivalent problem of maximizing the spectral efficiency with the rate constraint .the optimal scheduling policy is obtained by solving the binary integer programming ( bip ) and it is proved that the optimal solution of the original bip can be obtained by a linear programming relaxation with a polynomial - time complexity . * we investigate the asymptotic performance of the proposed optimal uplink scheduling policy for a large number of users .we derive a simple close - form expression for the asymptotic network latency and find the optimal parameters for the proposed optimal uplink scheduling policy . then , we identify four operating regimes of the training - based lsas according to the growth or decay rate of the average received signal quality , , and the number of bs antennas , .it turns out that orthogonal pilot sequences are optimal only when the average received signal quality is sufficiently good and the number of bs antennas is not - so - large . in other regimes , it turns out that non - orthogonal pilot sequences become optimal .in fact , the use of non - orthogonal pilots can reduce the network latency by a factor of when the received signal quality is quite poor ( ) or by a factor of when the received signal quality is sufficiently good ( ) and the number of bs antennas is sufficiently large ( ) .the remainder of this paper is organized as follows . in sectionii , a detailed model description is presented including channel , energy , and signal models for a training - based lsas . 
in section iii ,the uplink scheduling policy problem is formulated and its optimal solution is provided .numerical experiments to verify the superiority of the proposed uplink scheduling policy are shown in section vi . in sectionv , an asymptotic analysis provides the closed - form network latency and optimal parameters for the proposed uplink scheduling policy .finally , conclusion is drawn in section vi .matrices and vectors are respectively denoted by boldface uppercase and lowercase characters . also , , , and stand for the transpose , conjugate transpose , and cardinality of a set , respectively , and and are the natural logarithm and the logarithm with base 2 , respectively .also , denotes the function rounding towards the nearest integer , , and denotes the indicator function . denotes the distribution of a circularly symmetric complex gaussian random vector with mean vector and covariance matrix and ] & the achievable rate of users in sub - frame of frame and its approximation ( bits / hz ) + & throughput threshold ( bits ) + & the scheduling group and its scheduling portion + & the energy dedicated to the training symbols or data symbols ( joule / symbol ) +we consider an uplink lsas consisting of a bs with antennas , and single - antenna users as illustrated in fig .it is assumed that the users are randomly distributed on the cell coverage region and they want the quality of service ( qos ) on their own data traffic ( rate , latency , and reliability ) so that the bs serves these users persistently and try to guarantee their qos . a two - phase frame structure with training and data transmission phases , illustrated as in fig .2 , is used . for every uplink scheduling period ,the bs broadcasts the scheduling information and then the users transmit frames in uplink direction step by step .the frame of time length seconds and bandwidth hz is divided into equal - bandwidth sub - frames by partitioning frequency domain by using the orthogonal division multiplex access ( ofdma ) , single - carrier frequency domain multiple access ( sc - fdma ) or any good one of the newly considered waveforms .the sub - frame consists of symbols of time period seconds . in the training phase of time period seconds , the scheduled users send training symbols , and the bs estimates the uplink channels .then , in the data transmission phase of the remaining time period seconds , all of the scheduled users transmit data symbols to the bs simultaneously in a space division multiple access ( sdma ) manner .sub - frames and symbols in the sub - frame can be arbitrarily configured both in time and frequency domains .for example , in the long - term evolution ( lte ) , one sub - frame includes symbols ( 14 symbols in time domain and 12 sub - carriers in frequency domain ) and one frame includes sub - frames in time domain .] so , the transmit signal vector of user , who is allocated to the sub - frame in frame , is written as = { \left [ { { { \left ( { { \bf{x}}_j^{{\rm{tr}}}[f;t ] } \right)}^h},{{\left ( { { \bf{x}}_j^{{\rm{dt}}}[f;t ] } \right)}^h } } \right]^h},\ ] ] where ] is the data symbol vector for the data transmission phase . 
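before continuing with the received-signal model, note that the split of each sub-frame into training and data symbols implies a corresponding split of the per-sub-frame energy budget and of the time available for payload data, which is what the later optimization trades off. the following back-of-envelope sketch shows the bookkeeping; all numbers (frame length, symbol counts, energy budget, training fraction, rate and throughput target) are illustrative assumptions, and the scheduling portions optimized later are ignored here.

```python
import math

# illustrative numbers only: frame of T seconds and W Hz split into N sub-frames,
# each with L symbols of which L_tr carry pilots; E_j is user j's per-sub-frame energy budget
T, W, N = 10e-3, 20e6, 100
L, L_tr = 168, 12
E_j = 0.2                       # joules per sub-frame (assumed)
kappa = 0.3                     # fraction of the budget spent on training (a design choice)

E_tr = kappa * E_j / L_tr               # energy per training symbol
E_dt = (1 - kappa) * E_j / (L - L_tr)   # energy per data symbol
assert L_tr * E_tr + (L - L_tr) * E_dt <= E_j + 1e-12

# crude latency estimate: frames needed to deliver T_th bits to a user scheduled in one
# sub-frame per frame at spectral efficiency R (bits/s/Hz); portions are ignored
R, T_th = 2.0, 1e5
bits_per_frame = R * (W / N) * T * (L - L_tr) / L
print(f"E_tr={E_tr:.2e} J/sym, E_dt={E_dt:.2e} J/sym, "
      f"latency ~ {math.ceil(T_th / bits_per_frame)} frames")
```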
in every sub - frame , at most users are scheduled and the set of users scheduled in sub - frame of frame is denoted as ] .within one block of symbols , the received signal matrix , denoted as =\left[\mathbf{y}_1[f;t],\mathbf{y}_2[f;t],\dots,\mathbf{y}_n[f;t]\right] ] is the channel matrix , =[\mathbf{x}_1[f;t],\dots,\mathbf{x}_k[f;t ] ] ] is the noise matrix , whose elements are independent and identically distributed ( i.i.d . ) random variables with .the flat - fading channel vector between the bs and user at the sub - frame of frame , ] is the short - term csi whose elements are i.i.d .random variables with and is the long - term csi depending on the path - loss and shadowing .the long - term csi between the bs and user is modeled as , where is the wireless channel path - loss exponent and is the distance between the bs and user .we assume that rayleigh block fading model , where the short - term csi of each user remains constant within a given frame but is independent across different frames , while the long - term csi does not vary during a much longer interval .further , it is assumed that the long - term csi of all users is perfectly known at the bs . since each of users has a different limited battery capacity , recharge process , or radio frequency ( rf ) transmitter , they are assumed to be limited to spend energy for transmitting each sub - frame .let be the average allowed energy level ( in joule ) of user per sub - frame of length .the energy is consumed during both the training phase of length and the data transmission phase of length .so , the consumed energy transmitting each sub - frame needs to meet \|^2 = \mathbb{e}\|\mathbf{x}_j^{\rm{tr}}[f;t]\|^2+\mathbb{e}\|\mathbf{x}_j^{\rm{dt}}[f;t]\|^2\le e_j,~\forall j.\ ] ] letting \|^2/l ] be the transmit energy of each training symbol and data symbol , respectively , the constraint is represented as in the sequel , we drop the sub - frame index and the frame index if there is no ambiguity . to estimate the csi of scheduled users , the bs allocates pilot sequences with length of .let ] is set to the common target received energy , .so , the transmit energy at the training phase is set by then , the received signal matrix in frame at the bs during the training phase , denoted as ] , and ] is the information - bearing data symbol of user at sub - frame of frame . withthe channel - inversely power - controlled non - orthogonal pilots , the achievable rate of user of the mrc or zf receiver is approximated by \approx \left\ { { \begin{array}{*{20}{l } } { \log_2\left ( { 1 + \gamma _ k[f;t ] } \right),}&{{\text{if } } k \in { \cal s}[f;t],}\\ { 0,}&{{\text{if } } k \notin { \cal s}[f;t ] , } \end{array } } \right.\ ] ] where ] if ] is the state matrix , is the cost vector given by ^t},\end{aligned}\ ] ] is the all - one vector , denotes the number of candidate scheduling groups , and denotes the optimal common rate at given and , defined in ( [ eq32 ] ) .the optimizing variable informs which candidate scheduling groups are selected , i.e. , if , the corresponding candidate scheduling group is selected as one of the optimal scheduling groups .such a bip has been widely researched in literature and a variety of efficient algorithms are summarized in .unfortunately , finding the optimal solution in a bip is known as np - hard in general . 
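although the general bip is np-hard, the structure established in the following paragraphs (users sorted by long-term channel quality, candidate groups restricted to runs of consecutive users) makes a linear-programming relaxation exact, and this can be previewed on a toy instance. the sketch below builds the consecutive-ones constraint matrix, relaxes the 0/1 variables to the interval [0, 1], and checks that the lp solution is nevertheless integral. the per-group scores are arbitrary made-up numbers standing in for the optimized spectral-efficiency contributions, so only the integrality property, not the particular groups selected, is meaningful.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# toy instance: J users sorted by long-term channel quality; candidate scheduling groups are
# restricted to runs of consecutive users with size <= M
J, M = 6, 3
groups = [tuple(range(i, i + s)) for s in range(1, M + 1) for i in range(J - s + 1)]

# arbitrary positive scores stand in for the per-group contribution to spectral efficiency
c = rng.uniform(1.0, 2.0, size=len(groups))

# constraint: every user appears in exactly one selected group -> consecutive-ones columns
A = np.zeros((J, len(groups)))
for j, g in enumerate(groups):
    A[list(g), j] = 1.0

# LP relaxation of the 0/1 selection problem; the interval structure of A (total unimodularity)
# is what makes the relaxed vertex solution integral
res = linprog(-c, A_eq=A, b_eq=np.ones(J), bounds=(0.0, 1.0), method="highs")
x = res.x
print("selected groups:", [groups[j] for j in np.flatnonzero(x > 0.5)])
print("integral solution:", bool(np.allclose(x, np.round(x), atol=1e-8)))
```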
however , due to the special structure of our bip , it will be shown that a linear programming ( lp ) relaxation using ^{c\times 1} ] , has integer vertices only .now , we are ready to state the optimality of the proposed algorithm using the lp relaxation in ( [ eq34 ] ) .the optimal solution of the bip in ( [ eq34 ] ) is identical to the solution obtained by using the lp relaxation on ( [ eq34 ] ) . from the properties of theorem 2 , every column of the matrix has consecutive ones only , which implies that is totally unimodular from lemma 4 .using lemma 5 , the feasible region of ( [ eq34 ] ) is a polytope with integer vertices only , which guarantees that the solution obtained by using the lp relaxation on ( [ eq34 ] ) does not affect the optimality .the proposed algorithm for obtaining the optimal static uplink scheduling policy is outlined in algorithm 1 .the proposed optimal algorithm obtains and corresponding for each , and then find the optimal maximizing . the algorithm for obtaining is composed of the two parts .the first part finds the candidate scheduling groups , denoted as , and their corresponding common rate by using ( [ eq32 ] ) obtained by using the optimal energy allocations in theorem 1 .then , the second part finds the optimal combination of the selected scheduling groups that maximizes the spectral efficiency by applying the lp relaxation by virtue of theorems 2 and 3 ._ example : _ here , we explain a toy example .suppose that the network has users , , and and , , , , , and .the first part returns the candidate scheduling groups as and suppose that the corresponding common rate is determined as , respectively .then , the cost vector and the state matrix are respectively set as ^t}\ ] ] and .\ ] ] to satisfy the constraints ( [ eq34]b ) , there exist five feasible solutions : ^t ] , ^t ] . since , and are selected as the optimal uplink scheduling groups . and and are the optimal scheduling portions and bps / hz .then , the latency is frames .note that among sub - frames sub - frames are allocated to and the remained sub - frames are allocated to .define as the whole search space for finding the optimal scheduling groups in ( [ eq19 ] ) without theorem 2 , where is the collection of -ary partitions of with at most elements , i.e. , each scheduling group size is no greater than the number of antennas , given by note that is well - defined only when and , where denotes the stirling number of the second kind . for a fixed , increases exponentially with .thus , the whole search space is given as for a large .now , define as the reduced search space for finding the optimal scheduling groups in ( [ eq19 ] ) with the aid of theorem 2 .then , we can show that the cardinality of can be represented as the following recursive formula : which is known as the generalized fibonacci number .with help of the binet s formula , we arrive at where is the unique positive root of . after some algebraic manipulations , and as so that , which implies that the reduced search space still increases exponentially with the number of total users , .however , combined with theorem 3 , the following dramatical complexity reduction can be obtained . *the reduction gain of theorem 2 itself also increases exponentially with .in fact , the reduction gain is at least for a large . * without theorem 2 ,the number of candidate scheduling groups in ( [ eq34 ] ) is , ( for , ) , which increases exponentially with .however , due to theorem 2 , it reduces into , ( for , ) , which increases only squarely with . 
* as shown in theorem 3 , applying the lp relaxation on ( [ eq34 ] ) still provides the optimal solution of the original bip ( [ eq34 ] ) , which can dramatically reduce the exponential - time complexity into a polynomial - time complexity .now , we are ready to quantify the computational complexity of algorithm 1 . the computational complexity of algorithm 1 consists of the following three parts , namely 1 ) the sorting operation ( line 1 ) , 2 ) the optimal energy allocation ( lines 3 - 9 ) and 3 ) solving the relaxed lp ( lines 10 - 15 ) . the worst - case computational complexity for sorting samples is .since the optimal energy allocation requires iterations , the worst - case computational complexity of the second part is .finally , the worst - case computational complexity of the lp is by using the karmarkar s algorithm .thus , the total worst - case computational complexity for the proposed algorithm is .in this section , we present some numerical results to verify the superiority of the proposed uplink scheduling policy .one frame is set to occupy and in the frequency and time domains and consists of sub - frames with and .the number of symbols in each sub - frame is set to by assuming ( 25% cp overhead ) .there are users each requesting date volume .we use the pathloss model , where , and is given by with .this pathloss model reflects the bs located at the origin and the users are located uniformly along the line ] , and the optimal training length , , as a function of when , , and the zf receiver is employed .6 shows that small - size scheduling groups are preferred at low , while large - size ones are preferred at high , because high array gain is required at low . in spite of the pathloss difference ,the size of each optimal scheduling group is nearly identical .although a longer training period is required in high , it is less than or equal to , which is the half of each sub - frame . over a wide range of , is larger than or equal to for , which implies that orthogonal pilots can be used for not - so - large number of . also , small - size scheduling groups are preferred at low , while large - size ones are preferred at high because higher array gain is available .7 illustrates the optimal scheduling groups , ] as the averages of the transmit and the receive energies , respectively . the following theorem states the asymptotic behavior of the static uplink scheduling policy and its network latency .let denote a random variable with cdf .suppose that ] and , is obtained by finding a real root of the quadratic function as long as ( [ eq30 ] ) is feasible , i.e. , .first , show that is real . to do this ,it is sufficient that after inserting values in table ii , ( [ eq48 ] ) becomes equivalent to which is always true since .now , show that . if , it is trivial so we omit .suppose that .then , the condition can be written as which is also always true since .similarly when . finally , show that .the case is again trivial so that we omit it .assume that .then , we have which holds since for any and inserting it in ( [ eq31 ] ) shows .similarly for , we have which completes the proof .without loss of generality , we assume and and rewrite the objective function as note that is a monotonically decreasing function of for and is independent to . 
since , in order to minimize , should be similarly , is successively determined once , , are determined , which concludes that the two properties in theorem 2 hold for all .before proving theorem 4 , some preliminary results about a quantile function are introduced .suppose that are i.i.d .real - valued random variables with cdf and the order statistics of are denoted by . for ,the quantile of is defined as .correspondingly , the sample quantile is defined as the quantile of the empirical cdf with samples , , which can also be expressed as .let be i.i.d .random variables from a cdf satisfying for any . then , for every and , where and is a positive constant . the proof is directly obtained by applying the dvoretzky - kiefer - wolfowitz inequality . now , show the almost - sure convergence of as .let be i.i.d .random variables from a cdf .then , . forany given and sufficiently large , we have which can be made arbitrarily small by increasing because is convergent .thus , as , which implies . for a riemann - integrable function ,we have from lemma 7 , as , where the last convergence comes from the definition of the riemann integral . let be the samples of i.i.d random variables with .define and assume that as .then , since each and , as . from lemma 8 , we have since is continuous and each , small variation in the length of each interval ( vs. ) does not affect the convergence of the riemann integral so that which concludes the proof . in this proof, we omit the index by assuming is used in the symbols , , , , and for simplicity .assume that and let be the sequence of positive finite integers such that and , , where for .we first prove that the optimal scheduling groups can be selected among equi - sized ones as . from ( [ eq23 ] ) and ( [ eq24 ] ), we have where is the number of scheduling groups ( as ) and note that in ( [ eq29 ] ) , is given as since depends only on , denote by allowing some notational abuse . from lemma 9 as , which implies that the equi - sized scheduling group with size of can achieve the optimal network latency asymptotically . also from lemma 8, the asymptotically optimal network latency can be expressed as first consider the case of and so that = 0 .then , is given as inserting ( [ eq56 ] ) into ( [ eq54 ] ) , we obtain by using ( [ eq55 ] ) , we obtain which concludes the proof for the case .the other cases can be shown in similar ways .for , we obtain using the equality simplifies the case , , and inserting into ( [ eq61 ] ) results in by which we arrive at ( [ eq59 ] ) . on the other hand , by using taylor expansion , for .then , for , we obtain to simplify the case , we use the identity by which we arrive at ( [ eq60 ] ) . by using the chebyshev s inequality ,we obtain , as } \right| > c\mathbb{e}[{x_n } ] } \right ) \le \frac{{{\rm{var}}\left ( { { x_n } } \right)}}{{{c^2}{{\left ( { \mathbb{e}[{x_n } ] } \right)}^2 } } } \to 0,\ ] ] where is a finite and positive constant , which implies that the realizations of is included in the set \le x_n\le(1+c)\mathbb{e}[x_n]\} ] almost - surely so that its transformation , i.e. , is also scaled as ) ] becomes independent to and asymptotically . 
by using theorem 5 and ( [ eq63 ] ), we obtain }{{k\left ( { 1 - \frac{l}{n } } \right ) } } \\&=\mathop { \arg \max } \limits_{k \le l } k\left ( { 1 - \frac{l}{n } } \right ) \nonumber \\&= \left(\frac{n}{2},\frac{n}{2}\right),\label{eq64}\end{aligned}\ ] ] and by inserting ( [ eq64 ] ) into ( [ eq43 ] ) , we obtain \\ & = \theta \left ( \frac{\mathcal{t}_{\rm{th}}}{w{\log \rho } } \right),\label{eq65}\end{aligned}\ ] ] where the last equality comes from lemma 11 and .now , consider the case . by using lemma 10 ,we can rewrite ( [ eq41 ] ) as where if is finite , is also finite so that is maximized at and . inserting it into ( [ eq43 ] ) , we obtain comparing ( [ eq67 ] ) with ( [ eq65 ] ) indicates that the asymptotically lower network latency can be achieved when .now , consider the case when .for a fixed and , can be approximated as which is maximized at for some positive . ignoring the non - dominant term , inserting , and replacing with ] .although ( [ eq63 ] ) still holds , ( [ eq65 ] ) becomes , which , together with ( [ eq72 ] ) , results in ._ regime iii ) and iv ) _ : from lemma 10 , ( [ eq41 ] ) can be rewritten as }}{{k\left ( { 1 - \frac{l}{n } } \right)}}\\ & = \mathop{\arg\min}\limits_{{\scriptstyle 1\le l < n,\atop \scriptstyle 1\le k\le m } } \frac{{4n{{\log } _ 2}e}}{{k\left ( { m - k } \right)}}\mathbb{e}\left [ { { x^ { - 2 } } } \right],\label{eq73}\end{aligned}\ ] ] where the second equality comes from that for small .finally , and can be selected arbitrarily among integers between and .inserting , we obtain , which competes the proof. 1 g. p. fettweis , `` the tactile internet : applications and challenges , '' _ ieee veh .technol . mag ._ , vol . 9 , no .1 , pp . 6470 , mar .t. taleb and a. kunz , `` machine type communications in 3gpp networks : potential , challenges , solutions , '' _ ieee commun . mag .3 , pp . 178184 , mar .l. atzori , a. iera , and g. morabito , `` the internet of things : a survey , '' _ comput .15 , pp . 27872805 , oct .j. c. ikuno , m. wrulick , and m. rupp , `` system - level simulation of lte networks , '' _ in proc .ieee vtc spring _ ,taipei , taiwan , may 2010 , pp . 15 .g. piro , a. grieco , g. boggia , f. capozzi , and p. camarda , `` simulating lte cellular systems : an open source framework , '' _ ieee trans . veh .498513 , feb .2011 . t. l. marzetta , `` noncooperative cellular wireless with unlimited numbers of base station antennas , '' _ ieee trans .wireless commun ._ , vol . 9 , no . 11 , pp .35903600 , nov .f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , `` scaling up mimo : opportunities and challenges with very large arrays , _ ieee signal process . mag ._ , vol . 30 , no .40 - 46 , jan . 2013 .n. q. ngo , e. g. larsson , and t. l. marzetta , ' ' energy and spectral efficiency of very large multiuser mimo systems , `` _ ieee trans .4 , pp . 14361449 , apr . 2012 .a. muller , a. kammoun , e. bjornson , and m. debbah , ' ' linear precoding based on polynomial expansion : large - scale multi - cell mimo systems , `` _ ieee j. sel .topics signal process . _ , vol . 8 , no .5 , pp . 861875 , oct . 2014 .l. zheng and d. tse , ' ' communication on the grassmann manifold : a geometric approach to the noncoherent multiple - antenna channel " , _ ieee trans .inf . theory _ ,2 , pp . 359383 , feb . 2002 .b. hassibi and b. m. hochwald , `` how much training is needed in multiple - antenna wireless links ? '' _ ieee trans .inf . theory _ , vol .4 , pp . 951963 , apr . 2003 .m. 
kobayashi , n. jindal , and g. caire , `` training and feedback optimization for multiuser mimo downlink , '' _ ieee trans .59 , no . 8 , pp .22282240 , aug .s. murugesan , e. u .- biyikoglu , and p. schniter , `` optimization of training and scheduling in the non - coherent simo multiple access channel , '' _ ieee j. sel .areas commun .25 , no . 7 , pp .1441456 , sep .2007 . s. sun , and y. jing , `` channel training design in amplify - and - forward mimo relay networks , '' _ ieee trans .wireless commun .10 , pp . 33803391 , oct .o. ayach , a. lozano , r. w. heath , `` on the overhead of interference alignment : training , feedback , and cooperation , '' _ ieee trans .wireless commun ._ , vol . 11 , no . 11 , pp .41924203 , nov .i. bettesh and s. shamai , `` optimal energy and rate control for minimal average delay : the single - user case , '' _ ieee trans .inf . theory _9 , pp . 41154141 , sep . 2006 . v. k. n. lau and c. ying , `` delay - optimal energy and subcarrier allocation for ofdma systems via stochastic approximation , '' _ ieee trans .wireless commun ._ , vol . 9 , no. 1 , pp . 227233 , jan .d. rajan , a. sabharwal , and b. aazhang , `` delay - bounded packet scheduling of bursty traffic over wireless channels , '' _ ieee trans .inf . theory _ ,1 , pp . 125144 , jan .h. wang , w. zhang , y. liu , q. xu , and p. pan , `` on design of non - orthogonal pilot signals for a multi - cell massive mimo system , '' _ ieee wireless commun .2 , pp . 129132 , apr .shen , j. zhang , and k. b. letaief , `` downlink user capacity of massive mimo unser pilot contamination , '' _ ieee trans .wireless commun .14 , no . 6 , pp .31833193 , jun . 2015 .p. banelli , s. buzzi , g. colavolpe , a. modenini , f. rusek , and a. ugolini , `` modulation formats and waveforms for 5 g networks : who will be the heir of ofdm ? : an overview of alternative modulation schemes for improved spectral efficiency , '' _ ieee signal process . mag .6 , pp . 8093 , nov 2014 .b. m. hochwald , t. l. marzetta , and v. tarokh , `` multiple - antenna channel hargening and its implications for rate feedback and scheduling , '' _ ieee trans .theory , _ vol .9 , pp . 18931909 , sep .2004 .b. m. hochwald , t. l. marzetta , t. j. richardson , w. sweldens , and r. urbanke , `` systematic design of unitary space - time constellation , '' _ ieee trans .inf . theory _ ,19621973 , sep .p. xia , s. zhou , and g. b. giannakis , `` achieving the welch bound with difference sets , '' _ ieee trans .inf . theory _ ,5 , pp . 19001907 , may 2005 .d. calabuig , r. h. gohary , and h. yanikomeroglu , `` optimum transmission through the multiple - antenna gaussian multiple access channel , '' _ ieee trans .inf . theory _ ,1 , pp . 230243 , jan . | in this paper , an uplink scheduling policy problem to minimize the network latency , defined as the air - time to serve all of users with a quality - of - service ( qos ) , under an energy constraint is considered in a training - based large - scale antenna systems ( lsas ) employing a simple linear receiver . an optimal algorithm providing the exact latency - optimal uplink scheduling policy is proposed with a polynomial - time complexity . via numerical simulations , it is shown that the proposed scheduling policy can provide several times lower network latency over the conventional ones in realistic environments . in addition , the proposed scheduling policy and its network latency are analyzed asymptotically to provide better insights on the system behavior . 
four operating regimes are classified according to the average received signal quality , , and the number of bs antennas , . orthogonal pilots turn out to be optimal only in the regime and . in the other regimes ( or ), non-orthogonal pilots become optimal. more rigorously, the use of non-orthogonal pilots can reduce the network latency by a factor of when or by a factor of when and , which provides a critical guideline for designing future 5g cellular systems. large-scale antenna system, training-based transmission, network latency minimization, uplink scheduling policy, non-orthogonal pilots |
the recent upgrade of the very large array ( vla ) has resulted in a greatly increased imaging sensitivity due to the availability of large instantaneous bandwidths at the receivers and correlator .at least two new dish array telescopes ( in particular , askap and meerkat ) are currently under construction to improve upon the vla s specifications in terms of instantaneous sky coverage and total collecting area .a considerable amount of observing time has been allotted on all three instruments for large survey projects that need deep and sometimes high dynamic range imaging over fields of view that span one or more primary beams .desired data products include images and high precision catalogs of source intensity , spectral index , polarized intensity and rotation measure , produced by largely automated imaging pipelines .for these experiments , data sizes range from a few hundred gigabytes up to a few terabytes and contain a large number of frequency channels for one or more pointings . in this imaging regime ,traditional algorithms have limits in the achievable dynamic range and accuracy with which weak sources are reconstructed .narrow - band approximations of the sky brightness and instrumental effects result in sub - optimal continuum sensitivity and angular resolution .narrow - field approximations that ignore the time- , frequency- , and polarization dependence of antenna primary beams prevent accurate reconstructions over fields of view larger than the inner part of the primary beam .mosaics constructed by stitching together images reconstructed separately from each pointing often have a lower imaging fidelity than a joint reconstruction . despite these drawbacks , there are several science cases for which such accuracies will suffice .further , all these methods are easy to apply using readily available and stable software and are therefore used regularly .more recently - developed algorithms that address the above shortcomings also exist .wide - field imaging algorithms include corrections for instrumental effects such as the w - term and antenna aperture illumination functions .wide - band imaging algorithms such as multi - term multi - frequency - synthesis ( mt - mfs ) make use of the combined multi - frequency spatial frequency coverage while reconstructing both the sky intensity and spectrum at the same time .wideband a - projection , a combination of the two methods mentioned above accounts for the frequency dependence of the sky separately from that of the instrument during wideband imaging .algorithms for joint mosaic reconstruction add together data from multiple pointings either in the spatial - frequency or image domain and take advantage of the combined spatial - frequency coverage during deconvolution . such joint mosaic imaging along with a wideband sky model and wideband primary beam correction has recently been demonstrated to work accurately and is currently being commissioned (in prep ) .these methods provide superior numerical results compared to traditional methods but they require all the data to be treated together during the reconstruction and need specialized software implementations that are optimized for the large amount of data transport and memory usage involved in each imaging run . 
with so many methods to choose from and various trade - offs between numerical accuracy , computational complexity and ease of use , it becomes important to identify the most appropriate approach for a given imaging goal and to quantify the errors that would occur if other methods are used .the square kilometre array ( ska ) will involve much larger datasets than the vla , askap or meerkat will encounter with even more stringent accuracy requirements , making it all the more relevant to characterize all our algorithmic options and use existing , smaller instruments to derive and validate algorithmic parameters .this paper describes some preliminary results based on a series of simulated tests of deep wide - band and wide - field mosaic observations with the vla .section [ sec : sims ] describes how the datasets were simulated .sections [ sec : algos : single1][sec : algos : mosaic ] list the imaging methods that were compared , for the single pointing as well as the mosaic tests .section [ sec : metrics ] describes the metrics used to quantify imaging quality .sections [ sec : results : single ] and [ sec : results : mosaic ] describe the results from several tests performed with the single - pointing and mosaic datasets .section [ sec : discussion ] summarizes the results , discusses what one can and can not conclude from such tests , and lists several other tests that are required before ska - level algorithmic accuracy predictions can be made .a sky model was chosen to contain a set of 8000 point sources spanning one square degree in area .the source list is a subset of that available from the skads / scubed simulated sky project .in this sample , intensities ranged between and and followed a realistic source count distribution . for high dynamic range tests , one source was also added .spectral indices ranged between 0.0 and -0.8 with a peak in the spectral index distribution at -0.7 plus a roughly gaussian distribution around -0.3 with a width of 0.5 .[ fig : scounts ] shows the source count vs intensity on the top - left panel and intensity vs spectral index on the bottom - left .two types of datasets were simulated .one was for a vla single pointing at c - config and l - band with 16 channels ( or spectral windows ) between 1 and 2 ghz .the -coverage was a series of snapshots the hpbw of the primary beam at l - band is 30arcmin and therefore covers the central part of the simulated region of sky .the second dataset was for a vla mosaic at d - config and c - band with 46 pointings ( of primary beams 6arcmin in hpbw ) spaced 5 arcmin apart to cover roughly the same patch of sky at a comparable angular resolution . 
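a population with the qualitative properties described above can be drawn with a few lines of numpy. this is only an illustrative sketch: the actual skads/scubed counts are not reproduced here, and the power-law slope, flux limits, and the weights and widths of the two spectral-index components are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_src = 8000

# flux densities from a power-law differential count dN/dS ~ S^-gamma between assumed limits
# (slope and limits are placeholders, not the SKADS values); inverse-CDF sampling, in Jy
gamma, s_min, s_max = 1.8, 50e-6, 50e-3
a = 1.0 - gamma
u = rng.uniform(size=n_src)
flux = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# spectral indices: a narrow component near -0.7 plus a broader one around -0.3
# (mixture weight and the width of the -0.7 peak are assumed)
steep = rng.normal(-0.7, 0.05, size=n_src)
flat = rng.normal(-0.3, 0.5, size=n_src)
alpha = np.clip(np.where(rng.uniform(size=n_src) < 0.5, steep, flat), -0.8, 0.0)

# positions over a 1 deg^2 field and per-channel spectra relative to the 1.5 GHz band centre
ra_off = rng.uniform(-0.5, 0.5, size=n_src)      # degrees
dec_off = rng.uniform(-0.5, 0.5, size=n_src)
freqs = np.linspace(1.0e9, 2.0e9, 16)
spectra = flux[:, None] * (freqs[None, :] / 1.5e9) ** alpha[:, None]
print(spectra.shape)                              # (8000, 16): one spectrum per source
```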
at these simulated angular resolutions ( 10arcsec at l - band c- config and 9arcsec at c - band d - config ) , the expected confusion limit is , and the simulation included only sources brighter than in order to insulate these tests from errors due to main - lobe confusion .16 channels ( or spectral windows ) were chosen to span the frequency range of 4 - 8 ghz , and the -coverage corresponds to one pointing snapshot every 6 minutes , tracing the entire mosaic twice within 8.8 hours .a true sky image cube was constructed by evaluating wideband point source components from the skads list for a set of frequencies that matched those being observed .all sources were evaluated as delta functions ( naturally at pixel centers ) using the same cell size as would be later used during imaging .specific tests with off - pixel - center sources were done by using different image cell sizes during simulation and imaging so that a source at a pixel center during simulation is not at the center during imaging .visibilities were simulated per pointing for this image cube , using the wb - a - projection de - gridder which uses complex antenna aperture illumination functions to model primary beams that scale with frequency , have polarization squint and rotate with time ( due to the vla altitude - azimuth mount ) .noise was not added to these simulations as our first goal was to characterize numerical limits purely due to the algorithms and their software implementations .only after all observed trends and limits are understood will it be instructive to add gaussian random noise .theoretically , pure gaussian random noise should not change the behaviour of algorithms other than increase error in predictable ways and it is important to systematically confirm that this is indeed the case in practice .however , numerical noise will be present in these tests at the level as most image domain operations use single float precision .all references to signal - to - noise ratio in this analysis therefore relate to numerical precision noise .the datasets described above were imaged in a variety of ways . in all casesthe data products were continuum intensity images and spectral index maps .the methods that were tested are several possible combinations of standard clean for narrow - band imaging , mt - mfs for wideband imaging , a - projection to account for direction - dependent effects during gridding , and stitched versus joint mosaics .wideband primary beam correction was applied as appropriate using whatever primary beam models were available to the reconstruction algorithms .all image reconstruction runs used the standard major and minor cycle iterative approach with different combinations of gridding algorithms for the major cycle ( prolate spheroid , a - projection ) and deconvolution algorithms for the minor cycle ( hogbom clean , mt - mfs ) .the casa imaging software was used for these simulations and reconstructions as a combination of production tasks and custom c++ code and python scripts .each frequency channel is reconstructed independently with standard narrow - band imaging algorithms ( clean ) .there is no intrinsic primary beam correction , but deconvolution is followed by post - deconvolution primary - beam correction done per frequency .all images are then smoothed to the angular resolution of the lowest frequency in the observation , spectral models are fitted per pixel to extract spectral indices , and channels are collapsed to form a continuum intensity image . 
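the per-pixel spectral fitting step of this cube approach can be sketched as follows on a tiny synthetic, primary-beam-corrected cube; the image size, flux threshold and the two injected sources are made up, and a real pipeline would restrict the fit to a proper deconvolution mask rather than a simple intensity cut.

```python
import numpy as np

rng = np.random.default_rng(3)
freqs = np.linspace(1.0e9, 2.0e9, 16)
nu0 = 1.5e9

# tiny synthetic pb-corrected image cube (nchan, ny, nx) with two point-like sources
ny = nx = 32
cube = 1e-5 * rng.standard_normal((freqs.size, ny, nx))          # numerical noise floor
for (y, x, s, a) in [(10, 12, 5e-3, -0.7), (20, 25, 8e-4, -0.3)]:
    cube[:, y, x] += s * (freqs / nu0) ** a

# per-pixel linear fit of log(I) against log(nu/nu0), masking pixels below a threshold
thresh = 5e-4
mask = cube.mean(axis=0) > thresh
x = np.log(freqs / nu0)
y = np.log(np.clip(cube[:, mask], 1e-12, None))                  # (nchan, n_masked_pixels)
design = np.vstack([np.ones_like(x), x]).T                       # columns: [1, log(nu/nu0)]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

alpha_map = np.full((ny, nx), np.nan)
alpha_map[mask] = coef[1]
print(alpha_map[10, 12], alpha_map[20, 25])                      # recovers ~ -0.7 and ~ -0.3
```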
the main advantage of this method is computational simplicity and ease of parallelization , with each channel and pointing being treated independently . the disadvantage of this approach is low angular resolution and possible sub - optimal imaging fidelity as the reconstruction process can not take advantage of the additional constraints that multi - frequency measurements provide .this procedure is the same as cube , but with projection - based gridding algorithms applied per channel to account for baseline and time dependent primary beam effects .aw - projection uses models of the antenna aperture illumination functions at different parallactic angles to compute gridding convolution functions per baseline and timestep .these convolution kernels are constructed to have conjugate phase structure compared to what exists in the visibilities , and this eliminates beam squint during gridding .the main expected differences from standard cube imaging is in the quality of the primary beam correction , visible in stokes v images all the time ( beam squint ) and at high dynamic range in the stokes i image .multi - term multi - frequency synthesis was used to simultaneously solve for the sky intensity and spectrum , using the combined wideband -coverage . with no intrinsic primary beam corrections ,the output spectral taylor coefficients represent the time - averaged product of the sky and primary beam spectrum .a wideband post - deconvolution correction , a polynomial division carried out in terms of taylor coefficients . ] of the average primary beam and its spectrum ( ) was done at the end to produce intensity and spectral index maps that represent only the sky .this method has the advantage of algorithmic simplicity while taking advantage of the wideband -coverage .the main disadvantage is that the time variability of the antenna primary beam is ignored , which in the case of squinted and rotating beams can result in artifacts and errors in both the intensity and spectrum for sources near the half - power level .also , since the frequency dependence of the primary beam persists through to the minor cycle modeling stage , the multi - term reconstruction has to model a spectrum that is steeper than just that of the sky .more taylor terms are required , increasing cost and low snr instabilities .multi - term multi - frequency synthesis is used along with wideband a - projection , an adaptation of aw - projection that uses convolution functions from conjugate frequencies to undo the frequency dependent effects of the aperture function during gridding in addition to accounting for beam rotation and squint .this achieves a clear separation between frequency dependent sky and instrument parameters before the sky intensity and spectrum are modeled .the output spectral taylor coefficients represent where the effective is no longer frequency dependent . 
in this case , a post - deconvolution division of only the intensity image by an average primary beam is required .the output spectrum already represents only that of the sky brightness .alternatively , a hybrid of cube imaging with narrow - band a - projection and mt - mfs can also be used in which the frequency dependence of the primary beam is removed in the image domain from the residual image cube before combining the frequency planes to form the taylor weighted averages needed for the mt - mfs minor cycle .these two approaches have different trade - offs in numerical accuracy , computational load , memory use and ease of parallelization and a choice between them will depend on the particular imaging problem at hand .this general approach has the advantage of accounting for the time and frequency dependence of the primary beam during gridding , and clearly separating sky parameters from instrumental ones .the required number of taylor coefficients depend only on the spectrum of the sky , which is usually less steep than that of the primary beam . with a - projection , a flat - noise normalization choice can sometimes cause numerical instabilities around the nulls of the primary beam where the true sensitivity of the observations is also the lowest work is in progress to find a robust solution to this .alternate normalization choices will alleviate the problem but they will increase the degree of approximation that the minor cycle must now handle . in general , a mosaic can be constructed as a weighted average of single pointing images , using an average primary beam model as the weighting function . in this discussion , a combination after deconvolution will be called stitched mosaic , and a combination before deconvolution will be called a joint mosaic .a joint mosaic can also combine the data in the visibility domain by applying appropriate phase gradients across the gridding convolution functions used by projection algorithms .several wideband mosaic imaging options exist as various combinations of imaging with and without dd correction during imaging , cube versus multi - term multi - frequency synthesis imaging , and stitched versus joint mosaics . for cube clean imaging , joint mosaicswere made by combining data in the visibility domain and applying appropriate phase gradients across gridding convolution functions .two methods were compared with the first using an azimuthally symmetric primary beam model to construct a single gridding convolution kernel for all visibilities and the second using full aw - projection to account for pb - rotation and beam squint ( per frequency ) .the first method has the advantage of computational simplicity compared to full aw - projection where convolution functions can potentially be different for every visibility , but it has the disadvantage of ignoring beam squint and pb - rotation .the primary beam model is also not the same as what was used to simulate the data and this test evaluates the effect of this commonly used simplifying assumption . 
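stepping back to the primary-beam frequency dependence that separates these mt-mfs variants: even a gaussian beam whose width scales inversely with frequency shows why uncorrected sources away from the pointing centre acquire an artificially steep apparent spectrum. the sketch below uses the 30 arcmin l-band half-power width quoted for these simulations (assumed here to apply at the 1.5 ghz band centre) but adopts an azimuthally symmetric gaussian shape, which is exactly the simplification that squint- and rotation-aware a-projection avoids.

```python
import numpy as np

# Gaussian approximation to the primary beam, FWHM scaling as 1/frequency (indicative only)
def pb(radius_arcmin, freq_hz, fwhm_arcmin_at_1p5ghz=30.0):
    fwhm = fwhm_arcmin_at_1p5ghz * 1.5e9 / freq_hz
    return np.exp(-4.0 * np.log(2.0) * (radius_arcmin / fwhm) ** 2)

freqs = np.linspace(1.0e9, 2.0e9, 16)
x = np.log(freqs / 1.5e9)

for r in (5.0, 10.0, 15.0):                       # radii in arcmin; 15' is the half-power radius
    alpha_pb = np.polyfit(x, np.log(pb(r, freqs)), 1)[0]   # effective PB spectral index
    print(f"r = {r:4.1f} arcmin  alpha_PB ~ {alpha_pb:+.2f}")

# to first order the apparent spectral index is alpha_sky + alpha_PB, so a flat-spectrum source
# at the half-power point appears steepened by roughly -1.4 before the wideband PB correction
```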
for mfs imaging ( with multiple taylor terms ), a joint mosaic was computed using aw - projection with its wideband adaptation ( to correct for the pb frequency dependence ) along with phase gradients applied to convolution kernels .an alternate approach is to stitch together sets of pb - corrected output taylor coefficient images , using the time - averaged primary beam as a weighting function , and then recomputing spectral index over the mosaic .however , initial tests showed that stitched mosaics ( with or without wb - awprojection ) produced larger errors than joint mosaics and stitched multi - term mfs mosaics were not included in this analysis .the following metrics were used to evaluate the numerical performance of the different algorithms . 1 .image rms : the rms of pixel amplitudes from off - source regions in the image , or in the case of no source - free regions , the width of a pixel amplitude histogram .dynamic range : ratio of peak flux to peak artifact level or image rms when no artifacts are visible .error distributions ( imaging fidelity ) : the intensity and spectral index maps produced by the above algorithms were compared with the known simulated sky and estimates of error per source were binned into histograms . for each output image ,the simulated sky model image was first smoothed to match its angular resolution , and then pixel values were read off from both images at all the locations of the true source pixels .histograms were plotted for where deviations from 1.0 indicate relative flux errors and for where deviations from 0.0 indicate relative errors in spectral index .all histograms were made with multiple intensity ranges ( e.g.fig .[ fig.lowdr.hist1 ] ) and over different fields of view ( e.g. fig .[ fig.lowdr.hist2 ] ) to look for trends .the l - band ( 1 - 2ghz ) c - configuration simulated data were imaged with the algorithms listed in sec . [sec : algos : single1 ] through [ sec : algos : single4 ] . [ cols="^,^,^,^,^,^ " , ] +the tests described in this paper address the use of wideband data for the deep imaging of crowded fields of compact sources at a sensitivity close to the confusion limit of the observation .the simulations use a realistic source distribution ( from which only sources a few times brighter than the confusion limit were used ) and include primary beam effects arising from azimuthal asymmetry , parallactic angle rotation and frequency scaling .imaging results were based on the ability to apply appropriate primary beam corrections and recover intensities and spectral indices of sources out to the 0.2 gain level of the primary beam and down to sensitivities a few times the confusion limit .bright sources were introduced to evaluate dynamic range limits with and without corrections for the time variability of primary beams .most observed trends were as expected and we were able to quantify the accuracy with which algorithms performed .there were also a few surprises that highlight the unpredictability of current algorithms in certain situations . 
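the third metric above amounts to comparing catalogue values with values read off the restored intensity and spectral-index maps and binning the deviations by source intensity. the sketch below shows that bookkeeping on purely synthetic stand-in numbers; the error model is invented and carries no information about the actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic stand-ins: true intensities (Jy) and the corresponding recovered values
i_true = 10 ** rng.uniform(-4.3, -1.0, size=2000)
i_out = i_true * (1.0 + rng.normal(0.0, 0.05, i_true.size) / np.sqrt(i_true / 1e-4))
a_true = rng.uniform(-0.8, 0.0, i_true.size)
a_out = a_true + rng.normal(0.0, 0.2, i_true.size) / np.sqrt(i_true / 1e-4)

ratio, dalpha = i_out / i_true, a_out - a_true     # deviations from 1.0 and from 0.0

# summarize the errors in bins of true intensity, as in the fidelity histograms described above
edges = [5e-5, 2e-4, 1e-3, 5e-3, 1e-1]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (i_true >= lo) & (i_true < hi)
    print(f"{lo:.0e}-{hi:.0e} Jy: n={sel.sum():4d}, "
          f"median(I_out/I_in)={np.median(ratio[sel]):.3f}, rms(d_alpha)={dalpha[sel].std():.3f}")
```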
for a crowded field of compact sources being imaged at a sensitivity close to the confusion limit , the quality of the psf matters a great deal during deconvolution , and multi - frequency synthesis has a clear advantage over subband based imaging especially for weak sources .the clean bias effect was seen for cube based methods which required careful masking to eliminate the effect .however , mfs - based wideband algorithms had psfs with narrower main lobes and lower sidelobes and did not suffer from the clean bias thus making complicated masking procedures unnecessary . for a mosaic of such a field ,a joint imaging approach is prefered ( within reasonable image size limits ) . for dynamic ranges higher than a - projectionbased methods are required to account for baseline and time - dependent primary beam effects .methods that derive spectral models of the sky while decoupling them from wideband instrumental effects ( mt - mfs with wideband a - projection ) are capable of producing usable spectral indices at all snrs except the lowest snrs ( where all methods fall short ) .the use of wb a - projection even at dynamic ranges lower than might benefit imaging cases where strong sources exist in the outer parts of the pb where pb - spectral index is higher .finally , simple data parallelization during the major cycle goes a long way in balancing out the increase in cost due to the more complex algorithms .given the best possible approach ( multi - term mfs with wideband a - projection ) for the specific problem being studied ( deep widefield imaging of crowded fields at or near the confusion limit ) , we quantified errors in the recovered intensity and spectral index as a function of source snr for both single pointings and a joint mosaic .note that for these noiseless simulations there still is numerical noise due to the use of single float precision in many image domain calculations . for our single pointing tests ( at l - band ) , errors in reconstructed intensitywere as follows .sources brighter than have errors less than 5% , sources between and show errors at the 10% level and sources below show errors at the 20 to 30% level with several sources more than 50% .errors in reconstructed spectral index were as follows .sources brighter than show errors of , but sources between and show errors of . in these tests, the weakest sources did not meet the threshold for spectral index calculation ( for both cube and multi - term mfs methods ) . for such crowded fields ,accuracy also depends on the quality of the psf even if source snr is not a problem .we showed that as psf sidelobe levels decreased ( by choosing different subsets of the data ) sources brighter than are always reconstructed to within a few percent , sources between 8 and 50 improve from 20% errors to less than 5% , and sources between and improve from over 50% errors to about 20% . 
for our joint mosaic tests( at c - band ) , errors in reconstructed intensity were as follows .sources brighter than had 2% errors , sources between and show errors of 4% and sources below have about errors .spectral index errors were for sources brighter than but sources between and had errors upto with both cube and mfs methods showing a slight systematic bias of towards flatness .the difference between the scale of the errors between the single pointing and mosaic tests relate to the amount of data used in the simulation .there are many sources of error during image reconstruction and the systematic separation of various contributing factors and their eventual combination is crucial to building a complete picture and truly understanding the reasons behind observed effects .conclusions derived from approximations done in isolation must be treated with caution and trends observed in real data must be reproduced when applicable . in a first stepwe include artifacts typical of various instrumental effects ( primary beams , psf sidelobe levels ) and demonstrate clean bias and show how to eliminate it .in fact , even the simulations done in this paper require the addition of several more effects to become accurate predictors of reality .for example , this paper quantifies algorithmic limits in the situation of no noise , point sources located at pixel centers , and no differences between the primary beam models used for simulation versus those used during imaging ( except for one of the wideband mosaic tests ) .these errors are thereforem a lower limit on what one could expect in reality .effects such as the clean bias were reproduced clearly and its cause and solution understood .future enhancements ( even just for stokes i imaging ) should include noise as well as residual calibration errors ( some of which may masquerade as effects needing baseline based calibration ) , the use of inaccurate primary beam models during imaging , the presence of extended emission in addition to point sources , etc .such questions are listed in detail in sec .[ sec : openqns ] . 
sometimes, algorithmic choices and achievable numerical accuracy depend on the type of available computing resources. this section revisits the various algorithmic options in the context of numerical accuracy versus computational cost. cube imaging methods are the easiest to parallelize, with both data and images being partitioned across frequency for the entire iterative imaging process. there is minimal need for special-purpose software for such a setup. imaging accuracy is limited to that offered by the uv-coverage per channel, deconvolution depth is limited to the single-channel sensitivity, and the resolution at which spectral structure can be calculated is limited to that of the lowest frequency. there is no dependence on any particular spectral model, which makes this approach very flexible in its reconstruction of spectral structure. multi-frequency synthesis is demonstrably superior for continuum imaging due to its increased angular resolution, imaging sensitivity and fidelity, especially for crowded fields with thousands of compact sources. the multi-term mfs algorithm is useful to compute in-band spectral indices along with intensity, but the cost of both the major and minor cycles increases with the order of the polynomials used. also, the accuracy of the spectral indices depends on the source snr and the choice of the order of the polynomial. work is in progress to test an approach where the number of taylor terms is snr dependent. for multi-frequency synthesis, only the major cycle can be easily parallelized, with a gather step performed before the joint deconvolution step. the preferred partition axis when the wb-awp algorithm is used is time, because of the use of aperture illumination functions from conjugate frequencies during gridding.
if narrow - band a - projection is used to form a cube before primary beam correction and the formation of taylor weighted averages in the image domain , the partition axis of choice for the major cycle would be frequency .projection algorithms are significantly more expensive than the more standard method of using prolate spheroidal functions during gridding , mainly because of the support size of the convolution kernels and the overhead of computing such functions for potentially every visibility .however , the more useful metric is the _ total _ runtime for imaging and the extra cost of using projection algorithms can be offset by a comparable reduction in the runtime due to its numerical advantages .for example , wb a - projection increases the computing load for imaging but decreases the computing load and memory footprint of the mt - mfs setup which will need fewer terms to fit the spectral structure since the primary beam spectrum has been eliminated .also , in practice the roughly 10 fold increase in computation due to the use of a - projection compared to the standard gridder is readily absorbed by simple data parallelization during the major cycle .in addition , approximations can always be made ( identical antennas , coarser sampling of the aperture illumination function to reduce the size of the convolution functions , etc ) but effects of such approximations are visible beyond the dynamic range level .another axis along which parallelization is relatively easy is pointing .however , our tests show that the numerical differences between stitched vs joint mosaics are large enough that ( for crowded fields ) joint mosaics are always preferred .another basic factor is the use of single vs double precision calculations during imaging and deconvolution .currently ( in casa ) , all intermediate and output images use single precision , which is not the best option for dynamic ranges .the simulations and tests described in this paper demonstrate several ways in which imaging accuracy can be sub - optimal even for the simple situation of point sources imaged using both traditional trusted techniques as well as newer ones .a vast number of open questions and details remain , and a truly accurate picture can be derived only after these avenues are explored carefully and quantified to provide trends and usage guidelines to astronomers .work is in progress on several of these fronts , and results will be presented in subsequent papers . 1 .is it better to trade integration time at a single frequency band for shorter samples taken across a wider frequency range ?for example for the vla , simulations have shown that comparable imaging sensitivities and far more accurate spectral indices are recovered when an observation spans multiple bands ( l - band and c - band for example ) compared with the entire time spent at only one band .2 . what is the best algorithm for emission consisting of extended structure as well as compact emission? algorithms like multiscale clean are usable but relatively more expensive in terms of computing and memory footprint load and require considerable human input .several newer methods have been shown to produce superior results on their own but do not currently have production - quality optimized implementations that one can use .3 . 
does the addition of noise and residual calibration errors change any of the above conclusions ?theoretically , one would not expect the addition of gaussian random noise to change any results but it will be instructive to understand how robust these algorithms are to various noise levels .residual calibration errors on the other hand might cause changes that must be quantified ( e.g. , see ) . to assess how well such effects can be corrected , traditional self - calibration techniques must be compared with more flexible methods such as direction dependent calibration schemes and peeling ( for example , sage ) , especially to test their effects on the accuracy of the reconstructed sources .4 . how does baseline based averaging affect the achievable accuracy in the reconstructed intensity and spectral index ?a popular mode of data compression is to average time - contiguous visibilities that will all fall on the same grid cell during gridding .one concern with such an approach is whether it would prevent the handling of time variability of directional dependent instrumental effects or not . a simple test that achieved a data size reduction of 20% for one of the test datasets showed no noticeable effect with a - projection imaging out to twice the hpbw of the pb .additional tests must be done with more practical data compression ratios .how effective is the standard p(d ) analyses in predicting source counts below confusion limits ?simulations similar to those described in this paper with sources weaker than ( or an observation with a larger angular resolution ) can be used to test the effect of main - lobe confusion for the simulated source count distribution and the accuracy of p(d ) analyses on such an image .what type of software implementation and parallelization strategy is the most appropriate for a particular type of survey ? 
in the past few years several new imagers have begun to become available and it would be instructive to repeat an imaging test with different algorithms and implementations to evaluate and quantify differences that arise simply from different software implementations and subtle numerical and algorithmic choices within them . for example , shows examples of the numerical differences one can achieve simply by using pb models of different kinds and different algorithmic and software implementations for wideband mosaic imaging . how do these results extend to full polarization imaging and faraday rotation synthesis ? simulations with polarization dependent primary beams and full - stokes imaging ( with and without a - projection ) can quantify polarization imaging limits and identify appropriate imaging strategies . work is in advanced stages to analyse this and to produce the required algorithms and software to do such imaging . how accurate do primary beam models need to be for use within a - projection ? simulations with controlled differences in actual aperture illumination functions can give a useful idea of how much variation can be left unmodeled during imaging . for example , it is easy to produce a spurious bias in spectral index simply by using a pb model whose shape is slightly different from what is present in the data . work is in progress to quantify imaging errors for alma when ( not so ) subtle differences between antenna structures and illumination patterns are ignored at different levels of approximation , and for the vla to carefully model primary beams from holography data and use them during image reconstruction . these tests probe the limits of commonly used interferometric imaging algorithms in the context of crowded fields of compact sources being imaged at a few times the confusion limit and times the ( numerical ) noise level . in this regime , the quality of the psf is of considerable importance even at signal - to - noise ratios of simply because of the limitations of clean based deconvolution algorithms in crowded fields . a psf sidelobe level of is needed to achieve errors of in intensity and in spectral index across a 1 - 2ghz band for low brightness sources near the confusion limit . since psfs from the joint imaging of data ( mfs , for example ) typically have lower sidelobes compared to psfs from partitioned pieces of data ( cube , for example ) , the former are the preferred algorithmic choice in this regime . for shallow surveys where the emission above thermal noise limits fills the sky sparsely , we did not find any statistically significant difference between algorithms that partition the data and those that do not ( like mfs ) . surveys that require imaging at the native resolution of the data / telescope , particularly where detection and reconstruction of extended emission is important , will still need to use mfs . computing resources will be another discriminator in choosing algorithms in the shallow regime , with algorithms that do not require data partitioning ( like mfs ) requiring fewer resources than algorithms that do require partitioning .
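one of the software choices mentioned above , single versus double precision for intermediate images , is easy to illustrate in isolation . the following short python sketch ( numpy assumed ; purely illustrative , not part of the casa implementation ) shows that in single precision a feature fainter than roughly one part in 10 million of a bright source can not even be represented next to it , which is one reason why single - precision intermediate products limit the attainable dynamic range .

....
import numpy as np

# float32 resolves ~7 significant digits (eps ~1.2e-7), float64 ~16 (eps ~2.2e-16)
print(np.finfo(np.float32).eps, np.finfo(np.float64).eps)

peak, faint = 1.0, 1.0e-8          # a bright source and structure 1e8 times fainter
print(np.float32(peak) + np.float32(faint) == np.float32(peak))   # True : the faint signal is lost
print(np.float64(peak) + np.float64(faint) == np.float64(peak))   # False: it survives in double precision
....

repeated accumulation of model and residual images over many major cycles can compound this rounding , which is why double precision ( or compensated summation ) is the safer choice when high dynamic range is the goal .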
for wide - field imaging ( dr ) ,primary beam correction methods need to include details such as its time , frequency and polarization dependence to both reconstruct the bright source accurately and to eliminate artifacts that may contaminate surrounding weaker sources .our investigation shows that accounting for azimuthal asymmetry of the beams , rotation or pointing jitter as a function of time , scaling with frequency and polarization beam squint due to off - axis feed locations is certainly required .the imaging performance of joint image reconstruction methods is fundamentally superior to those that work with partitioned data and combining number of reconstructed images each from a fraction of the available data .hybrid implementations that take advantage of the ease of parallelization of partitioned methods where possible may be useful , but require careful analysis of its final imaging performance .work presented here also gives a lower limit on the level of detail at which simulations for future surveys and telescopes must be undertaken .it is important to systematically build up the complexity of the simulation in a way that enables one to efficiently pinpoint the reasons behind observed feature or trends . given the wide range of observation types and analysis methods , simulations must be specific to the parameters of each survey and must be as complete as possible in their inclusion of instrumental effects and observing modes .in particular , effects of time , frequency and polarization dependence of the antenna primary beams , effects of long baselines and wide fractional bandwidths , and correct sampling of the coherence field in time and frequency to realistically reflect both the data volume and the filling factor in the uv - plane .clear accuracy requirements are also necessary for each survey in order to choose the optimal analysis procedure based on scientific estimates .we wish to thank the various nrao staff members for useful discussions at various stages of this project .we wish to thank the common astronomy software applications ( casa ) group for the use of their imaging libraries in our simulation and imaging scripts . | many deep wide - band wide - field radio interferometric surveys are being designed to accurately measure intensities , spectral indices and polarization properties of faint source populations . in this paper we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed . we simulated a wideband single - pointing ( c - array , l - band ( 1 - 2ghz ) ) and 46-pointing mosaic ( d - array , c - band ( 4 - 8ghz ) ) jvla observation using realistic brightness distribution ranging from to and time- , frequency- , polarization- and direction - dependent instrumental effects . the main results from these comparisons are ( a ) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise , ( b ) errors are systematically lower for joint reconstruction methods ( such as mt - mfs ) along with a - projection for accurate primary beam correction , and ( c ) use of mt - mfs for image reconstruction eliminates clean - bias ( which is present otherwise ) . auxiliary tests include solutions for deficiencies of data partitioning methods ( e.g. 
the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left undeconvolved ) , the effect of sources not at pixel centers and the consequences of various other numerical approximations within software implementations . this paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality , enable one to systematically identify specific reasons for every trend that is observed and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms / analysis procedures . |
adiabatic quantum computation ( aqc ) was proposed by farhi et al. in 2000 .the aqc model is based on the _ adiabatic theorem _ ( see , e.g. ) .informally , the theorem says that if we take a quantum system whose hamiltonian `` slowly '' changes from ( initial hamiltonian ) to ( final hamiltonian ) , then if we start with the system in the _ groundstate _( eigenvector corresponding to the lowest eigenvalue ) of , then at the end of the evolution the system will be `` predominantly '' in the ground state of . the theorem is used to construct _adiabatic algorithms _ for optimization problems in the following way : the initial hamiltonian is designed such that the system can be readily initialized into its known groundstate , while the groundstate of the final hamiltonian encodes the answer to the desired optimization problem .the complete ( or _ system _ ) hamiltonian at a time is then given by for ] , ^{\dagger}$ ] .( 2 ) tensor product property : .more precisely , for , , . | we show that the np - hard quadratic unconstrained binary optimization ( qubo ) problem on a graph can be solved using an adiabatic quantum computer that implements an ising spin-1/2 hamiltonian , by reduction through _ minor - embedding _ of in the quantum hardware graph . there are two components to this reduction : _ embedding _ and _ parameter setting_. the embedding problem is to find a minor - embedding of a graph in , which is a subgraph of such that can be obtained from by contracting edges . the parameter setting problem is to determine the corresponding parameters , qubit biases and coupler strengths , of the embedded ising hamiltonian . in this paper , we focus on the parameter setting problem . as an example , we demonstrate the embedded ising hamiltonian for solving the maximum independent set ( mis ) problem via adiabatic quantum computation ( aqc ) using an ising spin-1/2 system . we close by discussing several related algorithmic problems that need to be investigated in order to facilitate the design of adiabatic algorithms and aqc architectures . |
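as a concrete toy illustration of the reduction described in the abstract above ( not taken from the paper itself ) , the following python sketch writes down the qubo form of the maximum independent set problem for a hypothetical 5 - vertex cycle graph , verifies by brute force that its ground state is a maximum independent set , and converts the objective into the ising biases and couplings that an adiabatic / annealing device would be programmed with . the graph , the penalty value and all variable names are illustrative choices only .

....
import itertools

# hypothetical toy instance: maximum independent set (MIS) on a 5-cycle (optimum size 2)
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
penalty = 2.0                     # any edge penalty > 1 makes constraint violations unfavourable

def qubo_energy(x):
    # qubo objective: minimize  -sum_i x_i + penalty * sum_{(i,j) in E} x_i * x_j
    return -sum(x) + penalty * sum(x[i] * x[j] for i, j in edges)

best = min(itertools.product((0, 1), repeat=n), key=qubo_energy)
print("ground-state occupation:", best, " independent-set size:", sum(best))

# substituting x_i = (1 + s_i)/2 gives the ising form  sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j
deg = [sum(1 for e in edges if i in e) for i in range(n)]
h = [-0.5 + penalty / 4 * deg[i] for i in range(n)]   # qubit biases
J = {e: penalty / 4 for e in edges}                   # coupler strengths
print("h =", h)
print("J =", J)
....

on actual hardware these biases and coupler strengths would still have to be mapped onto the hardware graph through the minor - embedding and parameter - setting steps discussed above .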
a tv broadcast speech database named kalaka-2 is employed for analysing language - dependent speech .it was originally designed for language recognition evaluation purposes and consists of wide - band tv broadcast speech recordings ( roughly 4 hours per language ) featuring 6 different languages : basque , catalan , galician , spanish , portuguese and english .tv broadcast shows were recorded and sampled using bytes at samples / second rate , taking care of including as much diversity as possible regarding speakers and speech modalities .it includes both planned and spontaneous speech throughout diverse environment conditions , such as studio or outside journalist reports but excluding telephonic channel .therefore audio excerpts may contain voices from several speakers but only a single language .for illustrative purposes in figure [ presentacion ] we depict a sample speech waveform amplitude and its squared , semi - definite positive instantaneous energy , respectively . without loss of generality , dropping the irrelevant constants , has units of energy per time .then , a threshold is defined as the instantaneous energy level for which a fixed percentage of data is larger than the threshold . for instance , is the threshold for which of the data fall under this energy level ( this allows to compare data across different empirical signals ) . not only works as a threshold of ` zero energy ' that filters out background ( environmental ) noise , but help us to unambiguously distinguish a speech event , defined as a sequence of _ consecutive _ measurements , from a silence event , whose duration is ( see figure [ presentacion ] ) .accordingly , speech can now be seen as a dynamical process of energy releases or ` speech earthquakes ' separated by silence events , with in principle different statistical properties given different thresholds . in what follows we address all these properties . + * energy release : a gutenberg - richter - like scaling law in speech * + the energy of a speech event is computed from the integration of the instantaneous energy over the duration of that event where is the inverse of the sampling frequency ( and therefore has arbitrary units of energy ) . in order to get rid of environmental noise , we set a fixed threshold and for each language , we compute its histogram . in figure [ energy ]we draw , in log - log scales , this histogram for all languages considered ( note that a logarithmic binning was used to smooth out the data ) .we find out a robust power - law scaling over five decades saturated by the standard finite - size cutoff , where the fitted exponents are all consistent with a language - independent universal behaviour : for spanish , for basque , for portuguese , for galician , for catalan , and for english , all having a correlation coefficient ( for completeness , in the inset of figure [ energy ] we also depict the binned histogram of instantaneous energy for all languages ) . 
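the event - extraction step described above is simple to state operationally . the following python sketch ( numpy assumed ) runs the whole pipeline on a synthetic waveform : square the signal to get the instantaneous energy , set the threshold at a fixed percentile , group consecutive supra - threshold samples into speech events , integrate the energy of each event and build a logarithmically binned histogram . the synthetic signal and the chosen sampling rate are only stand - ins to exercise the procedure ; the exponents quoted in the text of course require the kalaka-2 recordings .

....
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                                    # assumed sampling rate of the toy signal
x = rng.standard_normal(fs * 60) * np.exp(0.5 * rng.standard_normal(fs * 60))
e = x**2                                       # instantaneous energy per sample

theta = np.percentile(e, 70)                   # threshold leaving 30% of the samples above it
idx = np.flatnonzero(e > theta)
events = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)   # runs of consecutive samples
energies = np.array([e[ev].sum() / fs for ev in events if ev.size])

bins = np.logspace(np.log10(energies.min()), np.log10(energies.max()), 30)
hist, _ = np.histogram(energies, bins=bins, density=True)
centers = np.sqrt(bins[:-1] * bins[1:])
mask = hist > 0
slope = np.polyfit(np.log10(centers[mask]), np.log10(hist[mask]), 1)[0]
print("log-binned histogram slope:", slope)
....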
as long as the magnitude in seismicity is related to the logarithm of the energy release , under this definition can be identified as a new gutenberg - richter - like law in speech .this may be seen as related to other scaling laws in cognitive sciences , although at this stage it is still unclear what particular contributions come from both the mechanical ( vocal folds and resonating cavity ) and the cognitive systems .+ released in the statistics of all languages considered in this study , after a logarithmic binning .the figure shows a power law , that holds for five decades , truncated for large ( rare ) events by a finite - size cut - off .( inset panel ) log - log plot of the associated instantaneous energy histogram of each language , which we include for completeness .we note that this non scaling shape of instantaneous energy is similar to the one found for precipitation rates .,scaledwidth=40.0% ] * scaling and universality of waiting time distributions . *+ in a second part , we study on the temporal orchestration of fluctuations , that is , the arrangement of silences or speech interevents of duration . we will pay a special attention to the intraphoneme range ( timescales s ) , where we assume no cognitive effects are present , in order to focus on the physiological aspects of speech . at this pointwe introduce a renormalisation group ( rg ) transformation to explore the origin of temporal correlations .this technique originates in the statistical physics community and has been previously used in the context of earthquakes and tropical - cyclone statistics .the first part of the transformation consists of a decimation : we raise the threshold .this in general leads to different interevent distributions .the second part of the transformation consists in a scale transformation in time , such that renormalized systems become comparable : , , where is the mean interevent time of the system for a particular .invariant distributions under this rg transformation collapse into a threshold - independent universal curve : an adimensional waiting time distribution .while the complete fixed point structure of this rg is not well understood yet , recent advances rigorously found that stable ( attractive ) fixed points include the exponential distribution and a somewhat exotic double power - law distribution , which are attractors for both memoryless and short - range correlated stochastic point processes under the rg flow .invariant distributions other than the previous fixed points are likely to be unstable solutions of the rg flow , therefore encompassing criticality in the underlying dynamics .+ in the left panel of figure [ interevent ] we plot in log - log , for different thresholds , the interevent histogram associated to the english language . in the intraphoneme range , interevents are power - law distributed in every case . in the right panel of the same figure we plot the rescaled histograms ( for every language , yielding a total of 18 curves ) , collapsing under a single curve .note that the collapse is quite good for those timescales that belong to the intraphoneme range , where only physiological mechanisms are in place , and such collapse is lost for larger timescales .this suggests that for every language , the statistics are invariant under this rg transformation , and the pattern is robust across languages .a more careful statistical analysis is required here as the range of the power law , restricted to the intraphoneme regime , is smaller . 
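before turning to that more careful fit , the decimation - and - rescaling step just described can be written down compactly . the sketch below ( python / numpy , self - contained with a synthetic signal , so purely illustrative ) raises the threshold , recomputes the silence durations , rescales them by their mean and compares the resulting histograms ; for data that are invariant under the transformation , the rescaled curves should collapse onto each other .

....
import numpy as np

rng = np.random.default_rng(1)
fs = 16_000
x = rng.standard_normal(fs * 60) * np.exp(0.5 * rng.standard_normal(fs * 60))
e = x**2                                        # instantaneous energy, as in the previous sketch

def waiting_times(e, theta, fs):
    """durations (s) of the sub-threshold runs separating consecutive speech events."""
    idx = np.flatnonzero(e <= theta)
    gaps = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    return np.array([g.size / fs for g in gaps if g.size])

bins = np.logspace(-3, 3, 40)
for pct in (50, 70, 90):                        # decimation: increasingly severe thresholds
    tau = waiting_times(e, np.percentile(e, pct), fs)
    hist, _ = np.histogram(tau / tau.mean(), bins=bins, density=True)   # time rescaling
    print(pct, np.round(hist[10:15], 3))        # invariant distributions overlay after rescaling
....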
accordingly , exponent is estimated now following clauset et al.s method which employs maximum likelihood estimation ( mle ) of a power law model , where goodness - of - fit test and confidence interval are based on kolmogorov - smirnov ( ks ) tests ( comparing the actual distribution with 100 synthetic power law models whose exponent is the one found in the mle to obtain p - values ) .this method yields a fairly universal exponent within the confidence interval $ ] , ks p - value of and the statistical support for the power - law hypothesis given by a p - value of .it is well known that the dynamics of speech generation are complex , nonlinear , and certainly poorly explained by the benchmark source - filter theory . the fact that a gutenberg - richter law for the energy release probability distribution during speech emerges opens the possibility of understanding speech production in terms of crackling noise , a highly nonlinear behaviour that was first described in condensed matter physics and latter found in a variety of natural hazards systems including earthquakes , rain or solar flares to cite some .the underlying theory might then describe energy releases as the resonating response function of the system under airflow perturbation ( the so called susceptibility in the statistical physics jargon ) , and the fact that it is a power law distributed quantity is the first evidence of criticality in these systems .+ the fact that the waiting time distributions are different from the well known stable exponential laws while being invariant under the rg transformation confirms that the system is indeed operating close to a critical point .note that the currently established low dimensional chaotic hypothesis suffers from being a process with short - range correlations and therefore chaotic speech interevents should typically renormalise into the exponential law .the most plausible conclusion of this work is that the physiological process of speech production evidences long - range correlations and criticality .this argument can be put in the same grounds as equilibrium critical phenomena , where only long - range correlations allow the system to escape from the basin of attraction of the trivial rg ( high or low temperature ) fixed points .as long as the critical solution is an _ unstable _ fixed point of the rg flow , the fact that the underlying dynamics seems to be poised near this point is somewhat remarkable .if we assume that the dynamics of speech generation at the level of the vocal folds are influenced by the properties of the glottal airflow , then our results would suggest that a similar critical behaviour might take place in other physiological processes involving such airflow , something that has been found empirically in the mechanism of lung inflation .+ note also that whereas a simple stochastic processes such as a brownian motion can not explain these results , under the more general paradigm of fractional brownian motion with first return distribution , our findings would be consistent with so called noise ( with hurst exponent ) , on agreement with previous evidence , this latter being a trait of long range correlations .this gives further credit to our results , as noise is usually found along with criticality .+ although we acknowledge that weakly chaotic systems such as intermittent ones can operate close to criticality , our results suggest that the paradigm of self - organised criticality ( soc ) - out of equilibrium dissipative systems that self - organise towards a 
critical state- may be a more adequate modelling scenario than standard low dimensional chaos . if this was to be the case , threshold dynamics would appear as the essential ingredient that encompasses the nonlinear properties in speech .these new approaches could further contribute to the development of both ( i ) microscopic ( generative ) models of speech production , and ( ii ) alternative methods of speech synthesis that would profit from the self - similar properties of speech at the intraphoneme range to make refined speech interpolation without needs to incorporate in the synthesis model pieces of real speech .+ furthermore , the combined fact that ( i ) the results are robust for different human languages and that ( ii ) the timescales involved in this analysis require that the process is purely physiological , lead us to conclude that this mechanism is a universal trait of human beings .it is an open question if the onset of soc in this system is the result of an evolutionary process , where human speech waveforms would have evolved to be independent of speaker and receiver distance and perception thresholds . in this context, further work should be done to investigate whether if similar patterns originate in the communication of other species , and up to which extent other variables , such as ageing , may play a role . +* acknowledgments . *the authors would like to thank luis javier rodrguez - fuentes and mikel peagarikano for recording and hand - labeling the speech corpus .j. neubauer , p. mergell , u. eysholdt , ulrich and h. herzel , spatio - temporal analysis of irregular vocal fold oscillations : biphonation due to desynchronization of spatial modes _ j. acoust .* 110 * , 3179 - 3192 ( 2001 ) . luis javier rodrguez - fuentes , mikel peagarikano , amparo varona and mireia dez and germn bordel , kalaka-2 : a tv broadcast speech database for the recognition of iberian languages in clean and noisy environments , _ lrec _ , 99105 , ( 2012 ) . | speech is a distinctive complex feature of human capabilities . in order to understand the physics underlying speech production , in this work we empirically analyse the statistics of large human speech datasets ranging several languages . we first show that during speech the energy is unevenly released and power - law distributed , reporting a universal robust gutenberg - richter - like law in speech . we further show that such earthquakes in speech show temporal correlations , as the interevent statistics are again power - law distributed . since this feature takes place in the intraphoneme range , we conjecture that the responsible for this complex phenomenon is not cognitive , but it resides on the physiological speech production mechanism . moreover , we show that these waiting time distributions are scale invariant under a renormalisation group transformation , suggesting that the process of speech generation is indeed operating close to a critical point . these results are put in contrast with current paradigms in speech processing , which point towards low dimensional deterministic chaos as the origin of nonlinear traits in speech fluctuations . as these latter fluctuations are indeed the aspects that humanize synthetic speech , these findings may have an impact in future speech synthesis technologies . results are robust and independent of the communication language or the number of speakers , pointing towards an universal pattern and yet another hint of complexity in human speech . 
the description , understanding and modelling of speech is an interdisciplinary topic of current interest for physics , social and cognitive sciences , data mining as well as engineering . classical speech synthesis technologies and algorithms were firstly based on linear stochastic models and linear prediction , and their underlying theory , so called source - filter theory of speech production , was initially relying on several key assumptions including uncoupled vocal tract and speech source , laminar airflow propagating linearly , periodic fold vibration , and homogeneous tract conditions . despite the successes of this benchmark theory , the synthetic speech generated by linear models fails to be ` natural ' . as a matter of fact , current speech synthesizers usually require to incorporate pieces of real speech , e.g. concatenation of smaller speech units ( phoneme and diphone - based synthesis ) to improve their synthetic output . + with the popularisation of nonlinear dynamics , fractals , and chaos theory , the modelling paradigm slightly shifted and nonlinear speech processing emerged . accordingly , a number of authors pointed towards low dimensional , chaotic phenomena as the underlying mechanism governing the fluctuations in speech . this paradigm shift has fostered a number of inspiring theoretical vocal - fold and tract models of increasing complexity , displaying a range of nonlinear phenomena such as bifurcations and chaos or irregular oscillations to cite some . glottal airflow induced by the fluid - tissue interaction has also been modelled using navier - stokes equations . nevertheless , the empirical justification for low dimensional chaos is , up to a certain extent , based on preliminary evidence , and this modelling approach only theoretically justified through an analogy between turbulent states , chaos , and fractals . as a matter of fact , it has been recently acknowledged in the nonlinear dynamics community that a word of caution should be taken when empirically analysing the low dimensional nature of short , noisy and nonstationary empirical signals , as not only the computation of dynamical invariants such as fractal dimensions or lyapunov exponents is difficult in those cases , but furthermore correlated stochastic noise can be misleadingly described as to have a low dimensional attractor . it is therefore reasonably unclear what modelling paradigm should we follow to pinpoint the nonlinear nature of the fine grained details of speech . + in this work we make no a priori assumptions about the underlying adequate dynamical model , and we follow a data driven approach . we thoroughly analyse the statistics of speech waveforms in extensive real datasets that extend all the way into the intraphoneme range ( s ) , enabling us to dissect the purely physiological aspects that are playing a role in the production of speech , from other aspects such as cognitive effects . our main results are the following : ( i ) energy releases in speech are power law distributed with a language - independent universal exponent , and accordingly a gutenberg - richter - like law is proposed within speech . ( ii ) in the intraphoneme range ( s ) , the interevent times ( silences of duration ) between energy releases are also power - law distributed , suggesting long - range correlations in the time fluctuations of the amplitude signal . ( iii ) furthermore , these distributions are invariant under a time renormalisation group ( rg ) transformation , ) . 
on the basis of these results , we should conclude that the physiological mechanism of speech production self - organises close to a critical point . figure caption : ( top panel ) amplitude of the speech waveform of a single - speaker recording ( spanish language ) . ( bottom panel ) instantaneous energy per unit time in an excerpt from the top panel . the energy threshold , defined as the instantaneous energy level for which a fixed percentage of the entire data remains above that level , helps us to unambiguously distinguish , for a given threshold , speech events ( runs of samples whose energy stays above the threshold ) from silence or speech interevents of duration . the energy released in a speech event is computed from the integration of the instantaneous energy over the duration of that event ( the dark area in the figure denotes the energy released in a given speech event ) . |
title of program : siliconiap
computer hardware and operating system : any shared memory computer running under unix or linux
programming language : fortran90 with openmp compiler directives
memory requirements : roughly 150 words per atom
no . of bits in a word : 64
no . of processors used : tested on up to 4 processors
has the code been vectorized or parallelized : parallelized with openmp
no . of bytes in distributed program , including test data , etc : 50 000
distribution format : compressed tar file
keywords : silicon , interatomic potential , force field , molecular dynamics
nature of physical problem : condensed matter physics
method of solution : interatomic potential
restrictions on the complexity of the problem : none
typical running time : 30 per step and per atom on a compaq dec alpha
unusual features of the program : none

due to its technological importance , silicon is one of the most studied materials . for small system sizes ab - initio density functional calculations are the preferred approach . unfortunately this kind of calculation becomes unfeasible for the larger systems required to study problems such as interfaces or extended defects . for this type of calculation one resorts to force fields , which are several orders of magnitude faster . recent progress in the development of force fields has demonstrated that they can be a reliable tool for such studies . a highly accurate silicon force field has been developed by lenosky and coworkers . its transferability has been demonstrated by extensive tests containing both bulk and cluster systems . its accuracy is in part due to the fact that second nearest neighbor interactions are included . this unfortunately makes it somewhat slower than force fields containing only nearest neighbor interactions . in the following a highly optimized parallel implementation of this force field will be presented that allows large scale calculations . the parallelization is achieved by using openmp , an emerging industry standard for medium size shared memory parallel computers . molecular dynamics calculations have also been parallelized on distributed memory supercomputers . this approach is considerably more complex than the one presented here . since few researchers have access to massively parallel supercomputers and are willing to overcome the complexities of doing molecular dynamics on such machines , medium scale parallelization of molecular dynamics has an important place in practice . user friendliness was one of the major design goals in the development of this routine . using fortran90 made it possible to hide all the complexities in an object oriented fashion from the user . the calling sequence is just .... call lenosky(nat , alat , rxyz , fxyz , ener , coord , ener_var , coord_var , count ) .... on input the user has to specify the number of atoms nat , the vector alat containing the 3 lattice constants of the orthorhombic periodic volume and the atomic positions rxyz . the program then returns the total energy ener , the forces fxyz , the average coordination number coord , the variation of the energy per atom ( ener_var ) and of the coordination number ( coord_var ) , as well as a counter count that is increased in each call . in particular the user does not have to supply any verlet list . since the calculation of the forces is typically much more expensive than the update of the atomic positions in molecular dynamics or geometry optimizations , we expect that the subroutine will be called in most cases from within a serial program .
in casethe user is on a shared memory machine the subroutine will then nevertheless be executed in parallel if the program is compiled with the appropriate openmp options .in addition the subroutine can of course also be used on a serial machine . in this caseall the parallelization directives are considered by the compiler to be comments .the verlet list gives all the atoms that are contained within the potential cutoff distance of any given atom .typically the verlet list consists of two integer arrays .the first array , called in this work , points to the first / last neighbor position in the second array that contains the numbering of the atoms that are neighbors .a straightforward implementation for a non - periodic system containing atoms is shown below . in this simple casethe search through all atoms is sequential with respect to their numbering and it is redundant to give both the starting positions and the ending position , since .but in the more complicated linear scaling algorithm to be presented below , both will be needed . ....indc=0 do 10 iat=1,nat c starting position lsta(1,iat)=indc+1 do 20 jat=1,nat if ( jat.ne.iat ) then xrel1= rxyz(1,jat)-rxyz(1,iat ) xrel2= rxyz(2,jat)-rxyz(2,iat ) xrel3= rxyz(3,jat)-rxyz(3,iat ) rr2=xrel1**2 + xrel2**2 + xrel3**2 if ( rr2 .le .cut**2 ) then indc = indc+1 c nearest neighbor numbers lstb(indc)=jat endif endif 20 continue c ending position lsta(2,iat)=indc 10 continue .... this straightforward implementations has a quadratic scaling with respect to the numbers of atoms . due to thisscaling the calculation of the verlet list starts to dominate the linear scaling calculation of the energies and forces for system sizes of more than 10 000 atoms .it is therefore good practice to calculate the verlet list with a modified algorithm that has linear scaling as well .to do this one first subdivides the system into boxes that have a side length that is equal to or larger than and then finds all the atoms that are contained in each box .the cpu time for this first step is less than 1 percent of the entire verlet list calculation .hence this part was not parallelized .it could significantly affect the parallel performance according to amdahls law only if more than 50 processors are used .the largest smp machines at our disposal had however only 4 processors . 
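for readers not fluent in fortran , the linear - scaling idea described above ( bin the atoms into boxes of side at least the cutoff and search only the 27 surrounding boxes ) can be summarised in a few lines of python . this is only a language - neutral sketch with numpy assumed : the function name and array layout are illustrative , it uses free boundary conditions and therefore omits the replication of atoms across the periodic faces that the actual code performs , and it is not a substitute for the optimized routine .

....
import numpy as np
from collections import defaultdict

def neighbour_list(rxyz, alat, cut):
    """O(nat) neighbour search: bin atoms into cells of side >= cut and scan only
    the 27 surrounding cells of each atom (free boundary conditions here)."""
    nat = rxyz.shape[0]
    box = np.asarray(alat, dtype=float)
    ncell = np.maximum((box // cut).astype(int), 1)
    cell_of = np.minimum((rxyz / (box / ncell)).astype(int), ncell - 1)
    cells = defaultdict(list)
    for iat, c in enumerate(map(tuple, cell_of)):
        cells[c].append(iat)

    lsta = np.zeros((2, nat), dtype=int)          # start/end indices, analogous to lsta(1:2,iat)
    lstb = []                                     # neighbour numbers, analogous to lstb
    for iat in range(nat):
        lsta[0, iat] = len(lstb)
        cx, cy, cz = cell_of[iat]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for jat in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if jat != iat and np.sum((rxyz[jat] - rxyz[iat])**2) <= cut**2:
                            lstb.append(jat)
        lsta[1, iat] = len(lstb) - 1
    return lsta, np.array(lstb)

rxyz = np.random.rand(1000, 3) * 20.0             # toy configuration in a 20 x 20 x 20 box
lsta, lstb = neighbour_list(rxyz, (20.0, 20.0, 20.0), cut=2.5)
print("neighbours of atom 0:", lstb[lsta[0, 0]:lsta[1, 0] + 1])
....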
to implement periodic boundary conditions all atoms within a distance of the boundary of the periodic volumeare replicated on the opposite part as shown in figure [ period ] .this part is equally well less than 1 percent of the cpu time for the verlet list for a 8000 atom system .being a surface term it becomes even smaller for larger systems .consequently it was nt parallelized either .( -1.5,0.5 ) after these two preparing steps one has to search only among all the atoms in the reference cell containing the atom for which one wants to find its neighbors as well as all the atoms in the cells neighboring this reference cell ( 26 cells in 3 dimensions ) .this implies that starting and ending positions for the atoms to are not calculated in a sequential way , necessitating , as mentioned before , separate starting and ending positions in the array .the corresponding parallel code is shown below .the indices refer to the cells , contains the number of atoms in cell and their numbering .the array saves the relative positions and distances that will again be needed in the loop calculating the forces and energies .each thread has its own starting position in the shared memory space and these starting positions are uniformly distributed .this approach allows the different threads to work independently .the resulting speedup is much higher than the one that one would obtain by calculating in the parallel version an array that is identical to the one from the serial version .if there are on the average more neighbors than expected ( 24 by default ) the allocated space becomes too small . in this casethe array is deallocated and a new larger version is allocated .this check for sufficient memory requires some minimal amount of coordination among the processors and is implemented by a critical section .if a reallocation is necessary , a message is written into an file to alert the user of the inefficiency due to the need of a second calculation of the verlet list . ....allocate(lsta(2,nat ) ) nnbrx=24 2345 nnbrx=3*nnbrx/2 allocate(lstb(nnbrx*nat),rel(5,nnbrx*nat ) ) indlstx=0 ! omp private(iat , cut2,iam , ii , indlst , l1,l2,l3,myspace , npr ) & ! omp rel , rxyz , cut , myspaceout ) npr=1 ! iam = omp_get_thread_num ( ) cut2=cut**2 myspace=(nat*nnbrx)/npr if ( iam.eq.0 ) myspaceout = myspace !verlet list , relative positions indlst=0 do 6000,l3=0,ll3 - 1 do 6000,l2=0,ll2 - 1 do 6000,l1=0,ll1 - 1 do 6600,ii=1,icell(0,l1,l2,l3 ) iat = icell(ii , l1,l2,l3 ) if ( ( ( iat-1)*npr)/nat .eq .iam ) then lsta(1,iat)=iam*myspace+indlst+1 call sublstiat(iat , nn , ncx , ll1,ll2,ll3,l1,l2,l3,myspace , & rxyz , icell , lstb(iam*myspace+1),lay , rel(1,iam*myspace+1),cut2,indlst ) lsta(2,iat)=iam*myspace+indlst endif 6600 continue 6000 continue ! omp end critical ! ] is rare and unimportant for performance considerations .the important cubic spline case is characterized by many dependencies . in the case of such dependenciesthe latency of the functional unit pipeline comes into play and reduces the attainable speed .a latency of some 20 cycles comes from the first two statements ( tt=(x - tmin)*hi ; klo = tt ) alone , requiring arithmetic operations and a floating point to integer conversion . for this reasonthe calculation of tt was taken out of the ( most likely occurring ) else block to overlap its evaluation with the evaluation of the if clauses . to further speed up the evaluation of the splines the structure of the energy expression [ energy ]was exploited . 
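for reference , the quantity that the heavily hand - optimized fortran below computes is the standard cubic - spline value and first derivative on a uniform grid , given precomputed second derivatives at the nodes . a plain ( unoptimized ) python version of the in - range branch is sketched here , with numpy assumed , the function name chosen for illustration , and the linear extrapolation outside the grid omitted .

....
import numpy as np

def splint_plain(ya, y2a, tmin, h, x):
    """cubic-spline value y and derivative yp at x on the uniform grid tmin + k*h,
    given node values ya and node second derivatives y2a (in-range case only)."""
    t = (x - tmin) / h
    klo = int(t)
    khi = klo + 1
    b = t - klo                      # fractional position inside the interval
    a = 1.0 - b
    y = (a * ya[klo] + b * ya[khi]
         + ((a**3 - a) * y2a[klo] + (b**3 - b) * y2a[khi]) * h**2 / 6.0)
    yp = ((ya[khi] - ya[klo]) / h
          - (3.0 * a**2 - 1.0) * h / 6.0 * y2a[klo]
          + (3.0 * b**2 - 1.0) * h / 6.0 * y2a[khi])
    return y, yp

# with all second derivatives zero the spline reduces to linear interpolation
ya = np.array([0.0, 1.0, 2.0, 3.0]); y2a = np.zeros(4)
print(splint_plain(ya, y2a, 0.0, 1.0, 1.25))   # (1.25, 1.0)
....

the fortran version produces the same numbers ; the interleaving of two such evaluations and the hoisting of the argument scaling out of the branches , described in the text , only serve to hide the pipeline latencies .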
in the computationally most important loop over and two splines ( and )have to be evaluated .inlining by hand the subroutine splint for both evaluations and calculating alternatingly one step of the first spline evaluation and one step of the second spline evaluation introduces two independent streams .this reduces the effect of latencies and boosts speed .compilers are not able to do these complex type of optimizations .the best performance after these optimizations was obtained with low level compiler optimization flags ( -o3 -qarch = pwr3 -qtune - pwr3 on ibm power3 , -o2 on the compaq dec alpha , -o2 -xw on intel pentium4 ) .... subroutine splint(ya , y2a , tmin , tmax , hsixth , h2sixth , hi ,n , x , y , yp ) implicit real*8 ( a - h , o - z ) dimension y2a(0:n-1),ya(0:n-1 ) ! interpolate if the argument is outside the cubic spline interval [ tmin , tmax ] tt=(x - tmin)*hi if ( x.lt.tmin ) then yp = hi*(ya(1)-ya(0 ) ) - & ( y2a(1)+2.d0*y2a(0 ) ) * hsixth y = ya(0 ) + ( x - tmin)*yp else if ( x.gt.tmax ) then yp = hi*(ya(n-1)-ya(n-2 ) ) + & ( 2.d0*y2a(n-1)+y2a(n-2 ) ) * hsixth y = ya(n-1 ) + ( x - tmax)*yp! otherwise evaluate cubic spline else klo = tt khi = klo+1 ya_klo = ya(klo ) y2a_klo = y2a(klo ) b = tt - klo a=1.d0-b ya_khi = ya(khi ) y2a_khi = y2a(khi ) b2=b*b y = a*ya_kloyp = ya_khi - ya_klo a2=a*a cof1=a2 - 1.d0 cof2=b2 - 1.d0 y = y+b*ya_khi yp = hi*yp cof3=3.d0*b2 cof4=3.d0*a2 cof1=a*cof1 cof2=b*cof2 cof3=cof3 - 1.d0 cof4=cof4 - 1.d0 yt1=cof1*y2a_klo yt2=cof2*y2a_khi ypt1=cof3*y2a_khi ypt2=cof4*y2a_klo y = y + ( yt1+yt2)*h2sixth yp = yp + ( ypt1 - ypt2 ) * hsixth endif return end .... the final single processor performance for the entire subroutine is 460 mflops on a compaq dec alpha at 833 mhz , 300 mflops on a ibm power3 at 350 mhz and 550 mflops on a pentium 4 . in order to obtain a high parallel speedup in this central part of the subroutinethe threads are completely decoupled .this was done by introducing private copies for each thread to accumulate the energies and forces .the global energy and force are summed up in an additional loop at the end of the parallel region in a critical section . .... ! omp private(iam , npr , iat , iat1,iat2,lot , istop , tcoord , tcoord2 , & ! omp shared ( nat , nnbrx , lsta , lstb , rel , ener , ener2,fxyz , coord , coord2,istopg ) npr=1 ! iam = omp_get_thread_num ( ) npjx=300 ; npjkx=3000 istopg=0 if ( npr.ne.1 ) then ! parallel case !create temporary private scalars for reduction sum on energies and ! temporary private array for reduction sum on forces ! ompend critical if ( iam.eq.0 ) then ener=0.d0 ener2=0.d0 coord=0.d0 coord2=0.d0 do 121,iat=1,nat fxyz(1,iat)=0.d0 fxyz(2,iat)=0.d0 121 fxyz(3,iat)=0.d0 endif lot = nat / npr+.999999999999d0 iat1=iam*lot+1 iat2=min((iam+1)*lot , nat ) call subfeniat(iat1,iat2,nat , lsta , lstb , rel , tener , tener2 , & tcoord , tcoord2,nnbrx , txyz , f2ij , npjx , f3ij , npjkx , f3ik , istop ) ! 
omp end critical deallocate(txyz , f2ij , f3ij , f3ik ) else !serial case iat1=1 iat2=nat allocate(f2ij(3,npjx),f3ij(3,npjkx),f3ik(3,npjkx ) ) call subfeniat(iat1,iat2,nat , lsta , lstb , rel , ener , ener2 , & coord , coord2,nnbrx , fxyz , f2ij , npjx , f3ij , npjkx , f3ik , istop ) deallocate(f2ij , f3ij , f3ik ) endif !$ omp end parallel if ( istopg.gt.0 ) stop ' dimension error ( see warning above ) ' ener_var = ener2/nat-(ener / nat)**2 coord = coord / nat coord_var = coord2/nat - coord**2 deallocate(rxyz , icell , lay , lsta , lstb , rel ) end subroutine subfeniat(iat1,iat2,nat , lsta , lstb , rel , tener , tener2 , & tcoord , tcoord2,nnbrx , txyz , f2ij , npjx , f3ij , npjkx , f3ik , istop ) implicit real*8 ( a - h , o - z ) dimension lsta(2,nat),lstb(nnbrx*nat),rel(5,nnbrx*nat),txyz(3,nat ) dimensionf2ij(3,npjx),f3ij(3,npjkx),f3ik(3,npjkx ) initialize data ........ ! create temporary private scalars for reduction sum on energies and tener=0.d0 tener2=0.d0 tcoord=0.d0 tcoord2=0.d0 istop=0 do 121,iat=1,nat txyz(1,iat)=0.d0 txyz(2,iat)=0.d0 121 txyz(3,iat)=0.d0 !calculation of forces , energy do 1000,iat = iat1,iat2 dens2=0.d0 dens3=0.d0 jcnt=0 jkcnt=0 coord_iat=0.d0 ener_iat=0.d0 do 2000,jbr = lsta(1,iat),lsta(2,iat ) jat = lstb(jbr ) jcnt = jcnt+1 if ( jcnt.gt.npjx ) then write(6 , * ) ' warning : enlarge npjx ' istop=1 endif fxij = rel(1,jbr ) fyij = rel(2,jbr ) fzij = rel(3,jbr ) rij = rel(4,jbr ) sij = rel(5,jbr ) ! coordination number calculated with soft cutoff between first and !second nearest neighbor if ( rij.le.2.36d0 ) then coord_iat = coord_iat+1.d0 else if ( rij.ge.3.83d0 ) then else x=(rij-2.36d0)*(1.d0/(3.83d0 - 2.36d0 ) ) coord_iat = coord_iat+(2*x+1.d0)*(x-1.d0)**2 endif !pairpotential term call splint(cof_phi , dof_phi , tmin_phi, tmax_phi , & hsixth_phi , h2sixth_phi , hi_phi,10,rij , e_phi , ep_phi ) ener_iat = ener_iat+(e_phi*.5d0 ) txyz(1,iat)=txyz(1,iat)-fxij*(ep_phi*.5d0 ) txyz(2,iat)=txyz(2,iat)-fyij*(ep_phi*.5d0 ) txyz(3,iat)=txyz(3,iat)-fzij*(ep_phi*.5d0 ) txyz(1,jat)=txyz(1,jat)+fxij*(ep_phi*.5d0 ) txyz(2,jat)=txyz(2,jat)+fyij*(ep_phi*.5d0 ) txyz(3,jat)=txyz(3,jat)+fzij*(ep_phi*.5d0 ) ! 2 body embedding term call splint(cof_rho , dof_rho , tmin_rho , tmax_rho , & hsixth_rho , h2sixth_rho , hi_rho,11,rij , rho , rhop ) dens2=dens2+rho f2ij(1,jcnt)=fxij*rhop f2ij(2,jcnt)=fyij*rhop f2ij(3,jcnt)=fzij*rhop ! 3 body embedding term call splint(cof_fff , dof_fff , tmin_fff , tmax_fff , & hsixth_fff , h2sixth_fff , hi_fff,10,rij , fij , fijp ) do 3000,kbr = lsta(1,iat),lsta(2,iat ) kat = lstb(kbr ) if ( kat.lt.jat ) then jkcnt = jkcnt+1 if ( jkcnt.gt.npjkx ) then write(6 , * ) ' warning : enlarge npjkx ' istop=1 endif ! 
begin optimized version rik = rel(4,kbr ) if ( rik.gt.tmax_fff ) then fikp=0.d0 ; fik=0.d0 gjik=0.d0 ; gjikp=0.d0 ; sik=0.d0 costheta=0.d0 ; fxik=0.d0 ; fyik=0.d0 ; fzik=0.d0 else if ( rik.lt.tmin_fff ) then fxik = rel(1,kbr ) fyik = rel(2,kbr ) fzik = rel(3,kbr ) costheta = fxij*fxik+fyij*fyik+fzij*fzik sik = rel(5,kbr ) fikp = hi_fff*(cof_fff(1)-cof_fff(0 ) ) - & ( dof_fff(1)+2.d0*dof_fff(0 ) ) * hsixth_fff fik = cof_fff(0 ) + ( rik - tmin_fff)*fikp tt_ggg=(costheta - tmin_ggg)*hi_ggg if ( costheta.gt.tmax_ggg ) then gjikp = hi_ggg*(cof_ggg(8 - 1)-cof_ggg(8 - 2 ) ) + & ( 2.d0*dof_ggg(8 - 1)+dof_ggg(8 - 2 ) ) * hsixth_ggg gjik = cof_ggg(8 - 1 ) + ( costheta - tmax_ggg)*gjikp else klo_ggg = tt_ggg khi_ggg = klo_ggg+1 cof_ggg_klo = cof_ggg(klo_ggg ) dof_ggg_klo = dof_ggg(klo_ggg )b_ggg = tt_ggg - klo_ggg a_ggg=1.d0-b_ggg cof_ggg_khi = cof_ggg(khi_ggg ) dof_ggg_khi = dof_ggg(khi_ggg ) b2_ggg = b_ggg*b_ggg gjik = a_ggg*cof_ggg_klo gjikp = cof_ggg_khi - cof_ggg_klo a2_ggg = a_ggg*a_ggg cof1_ggg = a2_ggg-1.d0 cof2_ggg = b2_ggg-1.d0 gjik = gjik+b_ggg*cof_ggg_khigjikp = hi_ggg*gjikp cof3_ggg=3.d0*b2_ggg cof4_ggg=3.d0*a2_ggg cof1_ggg = a_ggg*cof1_ggg cof2_ggg = b_ggg*cof2_ggg cof3_ggg = cof3_ggg-1.d0 cof4_ggg = cof4_ggg-1.d0 yt1_ggg = cof1_ggg*dof_ggg_klo yt2_ggg = cof2_ggg*dof_ggg_khi ypt1_ggg = cof3_ggg*dof_ggg_khi ypt2_ggg = cof4_ggg*dof_ggg_klo gjik = gjik + ( yt1_ggg+yt2_ggg)*h2sixth_ggg gjikp = gjikp + ( ypt1_ggg - ypt2_ggg ) * hsixth_ggg endif else fxik = rel(1,kbr ) tt_fff = rik - tmin_fff costheta = fxij*fxik fyik = rel(2,kbr ) tt_fff = tt_fff*hi_fff costheta = costheta+fyij*fyik fzik = rel(3,kbr ) klo_fff = tt_fff costheta = costheta+fzij*fzik sik = rel(5,kbr ) tt_ggg=(costheta - tmin_ggg)*hi_ggg if ( costheta.gt.tmax_ggg ) then gjikp = hi_ggg*(cof_ggg(8 - 1)-cof_ggg(8 - 2 ) ) + & ( 2.d0*dof_ggg(8 - 1)+dof_ggg(8 - 2 ) ) * hsixth_ggg gjik = cof_ggg(8 - 1 ) + ( costheta - tmax_ggg)*gjikp khi_fff = klo_fff+1 cof_fff_klo = cof_fff(klo_fff ) dof_fff_klo = dof_fff(klo_fff ) b_fff = tt_fff - klo_fff a_fff=1.d0-b_fff cof_fff_khi = cof_fff(khi_fff ) dof_fff_khi = dof_fff(khi_fff ) b2_fff = b_fff*b_fff fik = a_fff*cof_fff_klo fikp = cof_fff_khi - cof_fff_klo a2_fff = a_fff*a_fff cof1_fff = a2_fff-1.d0 cof2_fff = b2_fff-1.d0 fik = fik+b_fff*cof_fff_khi fikp = hi_fff*fikp cof3_fff=3.d0*b2_fff cof4_fff=3.d0*a2_fff cof1_fff = a_fff*cof1_fff cof2_fff = b_fff*cof2_fff cof3_fff = cof3_fff-1.d0 cof4_fff = cof4_fff-1.d0 yt1_fff = cof1_fff*dof_fff_klo yt2_fff = cof2_fff*dof_fff_khi ypt1_fff = cof3_fff*dof_fff_khi ypt2_fff = cof4_fff*dof_fff_klo fik = fik + ( yt1_fff+yt2_fff)*h2sixth_fff fikp = fikp + ( ypt1_fff - ypt2_fff ) * hsixth_fff else klo_ggg = tt_ggg khi_ggg = klo_ggg+1 khi_fff = klo_fff+1 cof_ggg_klo = cof_ggg(klo_ggg ) cof_fff_klo = cof_fff(klo_fff ) dof_ggg_klo = dof_ggg(klo_ggg ) dof_fff_klo = dof_fff(klo_fff ) b_ggg = tt_ggg - klo_ggg b_fff = tt_fff - klo_fff a_ggg=1.d0-b_ggg a_fff=1.d0-b_fff cof_ggg_khi = cof_ggg(khi_ggg ) cof_fff_khi = cof_fff(khi_fff ) dof_ggg_khi = dof_ggg(khi_ggg ) dof_fff_khi = dof_fff(khi_fff ) b2_ggg = b_ggg*b_ggg b2_fff = b_fff*b_fff gjik = a_ggg*cof_ggg_klo fik = a_fff*cof_fff_klo gjikp = cof_ggg_khi - cof_ggg_klo fikp = cof_fff_khi - cof_fff_klo a2_ggg = a_ggg*a_ggg a2_fff = a_fff*a_fff cof1_ggg = a2_ggg-1.d0 cof1_fff = a2_fff-1.d0 cof2_ggg = b2_ggg-1.d0 cof2_fff = b2_fff-1.d0 gjik = gjik+b_ggg*cof_ggg_khi fik = fik+b_fff*cof_fff_khi gjikp = hi_ggg*gjikp fikp = hi_fff*fikp cof3_ggg=3.d0*b2_ggg cof3_fff=3.d0*b2_fff cof4_ggg=3.d0*a2_ggg cof4_fff=3.d0*a2_fff cof1_ggg = 
a_ggg*cof1_ggg cof1_fff = a_fff*cof1_fff cof2_ggg = b_ggg*cof2_ggg cof2_fff = b_fff*cof2_fff cof3_ggg = cof3_ggg-1.d0 cof3_fff = cof3_fff-1.d0 cof4_ggg = cof4_ggg-1.d0 cof4_fff = cof4_fff-1.d0 yt1_ggg = cof1_ggg*dof_ggg_klo yt1_fff = cof1_fff*dof_fff_klo yt2_ggg = cof2_ggg*dof_ggg_khi yt2_fff = cof2_fff*dof_fff_khi ypt1_ggg = cof3_ggg*dof_ggg_khi ypt1_fff = cof3_fff*dof_fff_khi ypt2_ggg = cof4_ggg*dof_ggg_klo ypt2_fff = cof4_fff*dof_fff_klo gjik = gjik + ( yt1_ggg+yt2_ggg)*h2sixth_ggg fik = fik + ( yt1_fff+yt2_fff)*h2sixth_fff gjikp = gjikp + ( ypt1_ggg - ypt2_ggg ) * hsixth_ggg fikp = fikp + ( ypt1_fff - ypt2_fff ) * hsixth_fff endif endif !end optimized version tt = fij*fik dens3=dens3+tt*gjik t1=fijp*fik*gjik t2=sij*(tt*gjikp ) f3ij(1,jkcnt)=fxij*t1 + ( fxik - fxij*costheta)*t2 f3ij(2,jkcnt)=fyij*t1 + ( fyik - fyij*costheta)*t2 f3ij(3,jkcnt)=fzij*t1 + ( fzik - fzij*costheta)*t2 t3=fikp*fij*gjik t4=sik*(tt*gjikp ) f3ik(1,jkcnt)=fxik*t3 + ( fxij - fxik*costheta)*t4 f3ik(2,jkcnt)=fyik*t3 + ( fyij - fyik*costheta)*t4 f3ik(3,jkcnt)=fzik*t3 + ( fzij - fzik*costheta)*t4 endif 3000 continue 2000 continue dens = dens2+dens3 call splint(cof_uuu , dof_uuu , tmin_uuu , tmax_uuu , & hsixth_uuu , h2sixth_uuu , hi_uuu,8,dens , e_uuu , ep_uuu ) ener_iat = ener_iat+e_uuu ! only now ep_uu is known and the forces can be calculated , lets loop again jcnt=0 jkcnt=0 do 2200,jbr = lsta(1,iat),lsta(2,iat ) jat = lstb(jbr ) jcnt = jcnt+1 txyz(1,iat)=txyz(1,iat)-ep_uuu*f2ij(1,jcnt ) txyz(2,iat)=txyz(2,iat)-ep_uuu*f2ij(2,jcnt ) txyz(3,iat)=txyz(3,iat)-ep_uuu*f2ij(3,jcnt ) txyz(1,jat)=txyz(1,jat)+ep_uuu*f2ij(1,jcnt ) txyz(2,jat)=txyz(2,jat)+ep_uuu*f2ij(2,jcnt ) txyz(3,jat)=txyz(3,jat)+ep_uuu*f2ij(3,jcnt ) !3 body embedding term do 3300,kbr = lsta(1,iat),lsta(2,iat ) kat = lstb(kbr ) if ( kat.lt.jat ) then jkcnt = jkcnt+1 txyz(1,iat)=txyz(1,iat)-ep_uuu*(f3ij(1,jkcnt)+f3ik(1,jkcnt ) ) txyz(2,iat)=txyz(2,iat)-ep_uuu*(f3ij(2,jkcnt)+f3ik(2,jkcnt ) ) txyz(3,iat)=txyz(3,iat)-ep_uuu*(f3ij(3,jkcnt)+f3ik(3,jkcnt ) ) txyz(1,jat)=txyz(1,jat)+ep_uuu*f3ij(1,jkcnt ) txyz(2,jat)=txyz(2,jat)+ep_uuu*f3ij(2,jkcnt ) txyz(3,jat)=txyz(3,jat)+ep_uuu*f3ij(3,jkcnt ) txyz(1,kat)=txyz(1,kat)+ep_uuu*f3ik(1,jkcnt ) txyz(2,kat)=txyz(2,kat)+ep_uuu*f3ik(2,jkcnt ) txyz(3,kat)=txyz(3,kat)+ep_uuu*f3ik(3,jkcnt ) endif 3300 continue 2200 continue tener = tener+ener_iat tener2=tener2+ener_iat**2 tcoord = tcoord+coord_iat tcoord2=tcoord2+coord_iat**2 1000 continue return end .... in addition to the energy and the forces the program still returns the coordination number as well as the variance of the energy per atom and the coordination number .the coordination number is calculated using a soft cutoff between the first and second nearest neighbor distance .these extra calculations are very cheap and not visible as an increase in the cpu timetable [ speedup_dec ] shows the final overall speedups obtained by the program .the results were obtained for an 8000 atom system , but the cpu time per call and atom is nearly independent of system size . 
table [ speedup_dec ] : timings for a combined evaluation of the forces and the energy per particle , as well as the corresponding speedups ( in parentheses ) , on an ibm sp3 based on a 375 mhz power3 processor , on a compaq sc 232 based on a 833 mhz ev67 processor and on an intel pentium4 biprocessor at 2 ghz . obtaining such high speedups was not straightforward . only the compaq fortran90 compiler was able to use the openmp parallel do directive in the original version of the program to obtain a good speedup . both the ibm compiler and the intel compiler failed . in order to get the performances of table [ speedup_dec ] , it was necessary to encapsulate the workload of the different threads into the subroutines sublstiat and subfeniat , which amounts to doing the parallelization quasi by hand . using allocatable arrays in connection with openmp also turned out to be tricky . because of these problems , the parallelization was much more painful than one might expect for a shared memory model . the results show that simulations for very large silicon systems are feasible on relatively cheap serial or parallel computers accessible to a large number of researchers . | the force field by lenosky and coworkers is the latest force field for silicon , which is one of the most studied materials . it has turned out to be highly accurate in a large range of test cases . the optimization and parallelization of this force field using openmp and fortran90 is described here . the optimized program allows us to handle a very large number of silicon atoms in large scale simulations . since all the parallelization is hidden in a single subroutine that returns the total energies and forces , this subroutine can be called from within a serial program in a user friendly way .
quantum information science ( see e.g. and references therein ) has received an increased attention in recent years due to the understanding that it enables to perform procedures not possible by purely classical resources .experimental techniques to manipulate increasingly complex quantum systems are also rapidly progressing .one of the central issues is on the one hand to control and manipulate delicate complex quantum states in an efficient manner , but on the other hand at the same time to prevent all uncontrollable influences from the environment . in order to tackle such problems ,one has to understand the structure and properties of quantum states .this can be done either through studies of particular states in a particular setting , or through focusing on the properties of the most generic states .random quantum states , that is states distributed according to the unitarily invariant fubini - study measure , are good candidates for describing generic states .indeed , they are typical in the sense that statistical properties of states from a given hilbert space are well described by those of random quantum states . also , they describe eigenstates of sufficiently complex quantum systems as well as time evolved states after sufficiently long evolution .not least , because random quantum states possess a large amount of entanglement they are useful in certain quantum information processes like quantum dense coding and remote state preparation . they can be used to produce random unitaries needed in noise estimation and twirling operations .in addition , as random states are closely connected to the unitarily invariant haar measure of unitary matrices , the unitary invariance makes theoretical treatment of such states simpler .producing such states therefore enables to make available a useful quantum resource , and in addition to span the space of quantum states in a well - defined sense .therefore several works have recently explored different procedures to achieve this goal .it is known that generating random states distributed according to the exact invariant measure requires a number of gates exponential in the number of qubits .a more efficient but approximate way to generate random states uses pseudo - random quantum circuits in which gates are randomly drawn from a universal set of gates .as the number of applied gates increases the resulting measure gets increasingly close to the asymptotic invariant measure .some bipartite properties of random states can be reproduced in a number of steps that is smaller than exponential in the number of qubits .polynomial convergence bounds have been derived analytically for bipartite entanglement for a number of pseudo - random protocols . on the numerical side , different properties of circuits generating random stateshave been studied . in order to quantifyhow well a given pseudo - random scheme reproduces the unitarily invariant distribution , one can study averages of low - order polynomials in matrix elements .in particular , one can define a state or a unitary -design , for which moments up to order agree with the haar distribution .although exact state -designs can be built for all ( see references in ) they are in general inefficient . 
in contrast , efficient approximate -designs can be constructed for arbitrary ( for the specific case of 2-design see ) .the pseudo - random circuit approach can yield only pseudo - random states , which do not reproduce exactly the unitarily invariant distribution .the method has been shown to be useful for large number of qubits , where exact methods are clearly inefficient .however , for systems with few qubits , the question of asymptotic complexity is not relevant .it is thus of interest to study specifically these systems and to find the most efficient way in terms of number of gates to generate random states distributed according to the unitarily invariant measure .this question is not just of academic interest since , as mentioned , few - qubit random unitaries are needed for e.g. noise estimation or twirling operations .optimal circuits for small number of qubits could also be used as a basic building block of pseudo - random circuits for larger number qubits , which might lead to faster convergence .in addition , systems of few qubits are becoming available experimentally , and it is important to propose algorithms that could be implemented on such small quantum processors , and which use as little quantum gates as possible .indeed , quantum gates , and especially two - qubit gates , are a scarce resource in real systems which should be carefully optimized . in this paperwe therefore follow a different strategy from the more generally adopted approach of using pseudo - random circuits to generate pseudo - random states , and try and construct exact algorithms generating random states for systems of three qubits . in the language of -designssuch algorithms are exact -designs .we present a circuit composed of one - qubit and two - qubit gates which produces exact random states in an optimal way , in the sense of using the smallest possible number of cnot gates .the circuit uses in total cnot gates and one - qubit elementary rotations .our procedure uses results recently obtained which described optimal procedures to transform a three - qubit state into another .our circuit needs random numbers which should be classically drawn and used as parameters for performing the one - qubit gates .the probability distribution of these parameters is derived , showing that it factorizes into a product of 10 independent distributions of one parameter and a joint distribution of the 4 remaining ones , each of these distributions being explicitly given .since we had to devise specific methods to compute these distributions , we explain the derivation in some details , as these methods can be useful in other contexts . after presenting the main idea of the calculation in section [ metrictensor ] , we start by treating the simple case of two - qubit states in section [ 2qubits ] .we then turn to the three - qubit case and first show factorization of the probability distribution for a certain subset of the parameters ( section [ 7 - 14 ] ) , the remaining parameters being treated in section [ 1 - 6 ] .the full probability distribution for three qubits is summarized in section [ explicit_calc ] .formally , a quantum state can be considered as an element of the complex projective space , with the hilbert space dimension for qubits .the natural riemannian metric on is the fubini - study metric , induced by the unitarily invariant haar measure on .it is the only metric invariant under unitary transformations . to parametrize needs independent real parameters . 
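independently of any explicit parametrization , states distributed according to this unitarily invariant measure can be produced classically by normalising a vector of independent complex gaussian amplitudes . this standard construction , sketched below in python ( numpy assumed , function name illustrative ) , provides a convenient reference ensemble against which the angle distributions of the circuit construction discussed next can be checked numerically .

....
import numpy as np

rng = np.random.default_rng(0)

def random_state(n_qubits):
    """state drawn from the fubini-study (unitarily invariant) measure:
    i.i.d. complex gaussian amplitudes, normalised; the global phase is irrelevant."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

# sanity check for one qubit: |<0|psi>|^2 should be uniform on [0,1]
p0 = np.array([abs(random_state(1)[0]) ** 2 for _ in range(20_000)])
print(p0.mean(), p0.var())          # close to 1/2 and 1/12
....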
such parametrizations are well - known , for instance using hurwitz parametrization of .however , they do not easily translate into one and two - qubit operations , as desired in quantum information . in ref . , optimal quantum circuits transforming the three - qubit state into an arbitrary quantum state were discussed . in the case of three qubits, a generic state can be parametrized up to a global phase by 14 parameters .the quantum circuit requiring the smallest amount of cnot gates has three cnots and 15 one - qubit gates depending on 14 independent rotation angles. from it is possible ( see appendix ) to extract the circuit depicted in fig .[ circuit ] , expressed as a series of cnot gates and single qubit rotations , where z - rotation is and y - rotation is with the pauli matrices .the circuit allows to go from to any quantum state ( up to an irrelevant global phase ) .it therefore provides a parametrization of a quantum state by angles . in order to generate random vectors distributed according to the fubini - study measure, it would of course be possible to use e. g. hurwitz parametrization to generate classically a random state , and then use the procedure described in to find out the consecutive steps that allow to construct this particular vector from .however this procedure requires application of a specific algorithm for each realization of the random vector . instead , our aim here is to directly find the distribution of the such that the resulting is distributed according to the fubini - study measure .this is equivalent to calculating the invariant measure associated with the parametrization provided by fig .[ circuit ] in terms of the angles .geometrically , the fubini - study distance is the angle between two normalized states , .the metric induced by this distance is obtained by taking , getting where is the usual hermitian scalar product on .if a state is parametrized by some parameters then the riemannian metric tensor is such that and the volume form at each point of the coordinate patch , directly giving the invariant measure , is then given by .thus the joint distribution of the is simply obtained by calculating the determinant of the metric tensor given by with the parametrization , .unfortunately the calculation of such a determinant for qubits is intractable and one has to resort to other means .let us first consider the easier case of qubits , where by contrast the calculation can be performed directly .a normalized random 2-qubit state depends , up to a global phase , on 6 independent real parameters .a circuit producing from an initial state is depicted in fig .[ fig:2qcircuit ] .one can easily calculate the parametrization of the final state in terms of all six angles , thus directly obtaining the metric tensor from ( [ ds2 ] ) .square root of the determinant of then gives an unnormalized probability distribution of the angles as ( see also ) .several observations can be made about this distribution .first , the three rotations applied on the first qubit ( top wire in fig . [ fig:2qcircuit ] ) after the cnot gate represent a random su(2 ) rotation , for which the y - rotation angle is distributed as and the z - rotation angles are uniformly distributed .secondly , angle gives the eigenvalue of the reduced density matrix , , for which the distribution is well known , see , _e.g. 
_ , .the third observation is that , remarkably , the joint distribution ( [ eq:2qp ] ) of all 6 angles factorizes into 6 independent one - angle distributions .let us now turn to our main issue , which is the distribution of angles in the three - qubit case . in order to have an indication whether the distribution of an angle factorizes , we numerically computed the determinant det of the metric tensor as a function of with the other angles fixed .we also numerically computed the marginal distribution of by using the procedure given in the appendix to find the angles corresponding to a sample of uniformly distributed random vectors . if the distribution for a given angle factorizes , these two numerically computed functions should match ( up to a constant factor ) .this is what we observed for all angles but four of them ( angles to ) . in order to turn this numerical observation into a rigorous proof, we are going to show in this section that the distributions for angles to indeed factorize . in the next sectionwe will complete the proof by dealing with the cases to .the explicit analytical expression of the probability distribution for individual angles will be given in section [ explicit_calc ] .let us denote by the circuit of fig .[ circuit ] and by the unitary operator corresponding to it , so that . because circuits span the whole space of 3-qubit states , any unitary 3-qubit transformation maps parameters to new parameters such that .we denote by the circuit parametrized by angles corresponding to performing followed by .it is associated with the unitary operator such that .unitary invariance of the measure implies for that with the jacobian of the transformation and denotes the determinant .note that eq .( [ eq : invariance ] ) is not a simple change of variables , as the same function appears on both sides of the equation .the jacobian matrix for transformation from angles to , , tells how much do the angles of change if we vary angles in keeping transformation matrix fixed .choosing that sets some angles in circuit to a fixed value , say zero , and at the same time showing that depends only on these angles , would prove factorization of with respect to angles through eq .[ eq : invariance ] .the simplest case is that of gates at the end of the circuit of fig .[ circuit ] , _e.g. _ , gate .for we take y - rotation by angle on the third qubit , .it defines a mapping such that for and .matrix elements of the jacobian , i.e. partial derivatives , are equal to the kronecker symbol .the jacobian is equal to an identity matrix and its determinant is one .equation taken at then gives , from which one concludes that the distribution for factorizes and is in fact uniform ( unless noted otherwise s are not normalized ) .the same argument holds for the two other rotations by angles and applied at the end of each qubit wire .proceeding to angle one could use applied on the first qubit and show that the jacobian depends only on and , while at and one gets , from which factorization of would follow from eq .[ eq : invariance ] .there is however a simpler way .observe that the three single - qubit gates with angles and on the first qubit span the whole su(2 ) group .therefore , for any one - qubit unitary , gates can be rewritten as , without affecting other s .the distribution of these three angles must therefore be the same as the distribution of corresponding su(2 ) parameters .note that the same argument can be applied in the 2-qubit case of section [ 2qubits ] . 
as a consequence, the distribution of angles for gates at the end of the circuit should be the same in both cases , that is the distribution of is uniform while that of is proportional to .similarly , one can show that the distribution for the angles to is the same as for angles to .as opposed to gates 10 - 12 , for gate 13 we can not use the analogy with the 2-qubit circuit ( fig.[fig:2qcircuit ] ) because the two gates 13 and 14 on the third qubit do not span the whole su(2 ) group . therefore a different argument should be used . in what followswe show that the joint distribution for angles and can be factorized out of the full distribution .since it has been shown in subsection [ gate14 ] that angle factorizes , this will prove that the distribution for also factorizes .using on the third qubit we can set and to zero with the choice and .our goal is to show that depends only on and .we can formally consider each angle as being a function of the initial as well as of the parameters through . to calculate matrix elements of for our choice of at and , we must obtain the first - order expansion in of the quantities some angles are very simple .we immediately see that when varying angles , that is taking in eq.[limlim ] , angles do not change ( i.e ) .the corresponding -dimensional subblock in is therefore equal to an identity matrix .similarly , taking in eq.[limlim ] we see that do not change .the corresponding column in is therefore zero apart from on the diagonal .the jacobian thus has a block structure of the form where is a -dimensional identity matrix and is a -dimensional block with partial derivative .the angle given by eq.([limlim ] ) is obtained by varying angle by . the condition that is , where is a state after the third cnot acts on ( counting from the left in fig . [ circuit ] ) , explicitly given by with is therefore determined by and , where .similarly , is determined by and .projecting these conditions for and on the computational basis , eliminating unwanted variables , we get the following two equations : one with , one with . expanding these equations to first order in yields and .the derivative is therefore equal to , which completes the proof .incidentally , we also see that the distribution of is proportional to .in the preceding section , we have shown that the distribution for angles to factorizes . as was mentioned , numerical observations indicated us that the distribution for angles and should also factorize , but that it is not the case for the joint distribution of .as we were not able to directly prove by the same methods as above that the distributions for and factorize , we use a different strategy .namely , we first assume that this factorization is true , then we compute the distributions under this assumption , and the knowledge of the answer allows us to prove a posteriori that it is indeed the correct probability distribution . if the factorization holds , the distribution for and is easily calculated from the matrix using symbolic manipulation software , by replacing angles , , in by suitably chosen simple values , so that the determinant giving the volume form can now be handled .this yields , up to a normalization constant , the joint distribution of can not be further factorized , and requires heavy calculations . 
indeed , even replacing all angles but by numerical values the determinant of the metric tensor given by still depends on 4 variables , which is too much for it to be evaluated by standard software .we thus proceed as follows .first one can show that can be put under the form with the sums running over all but only even values of and .because of the parity of , there are independent coefficients .evaluating numerically the determinant at random values of the angles one gets an linear system that can be solved numerically .if the values of the coefficients of the matrix are multiplied by a factor 4 , then one is ensured ( from inspection of ) that the are rationals of the form , .this allows to deduce their exact value from the numerical result .we are left with 6998 nonzero terms in , and terms with odd or do not exist .we then suppose that can be expanded as this assumption is validated a posteriori , since a solution of the form can indeed be found .there are 4851 coefficients , which can be obtained by identifying term by term coefficients in the expansion of and .we have to solve a system of quadratic equations the first equation is quadratic and fixes an overall sign .equation is linear once the values obtained from equations to are plugged into it .starting with the highest - degree term one can thus recursively solve all equations .there are only 1320 non - zero coefficients .gathering together terms one can simplify the sum to a sum of 96 terms of the form . expanding this expression in powers of and and simplifying separately each coefficient we finally get where and with , given by eq . .recall that is the bit - flip transform of , .note that .angles can be obtained from where and .we do not have a general argument to explain this remarkable expression of the distribution in terms of the scalar products of , and . to complete the proof for the joint distribution it remains to be checked that the determinant of the metric tensor with angles to replaced by constants is indeed proportional to .this a posteriori verification is easier to handle symbolically than the full a priori calculation of the determinant .indeed , the determinant can first be reduced to an determinant by gauss - jordan elimination .the remaining determinant can be expanded as a trigonometric polynomial . although symbolic manipulation software do not allow to simplify the coefficients of this polynomial , they are able to check that these coefficients match those of the expected distribution .we proved in that way that the difference between the determinant det and our expression is identically zero .this gives a computer - assisted but rigorous proof for the distribution of angles to .gathering together the results of the previous sections we obtain that the joint distribution can be factorized as the joint distribution has been derived in the previous section and is given by eq . .the distribution for and is given by eqs . 
- .given the factorization , it is easy to calculate the remaining for each as was done for and in the previous section : replacing angles , , in by suitably chosen simple values , the determinant giving the volume form can be easily evaluated by standard symbolic manipulation .this yields , up to a normalization constant , the knowledge of the angle distribution ( [ eq : p ] ) allows to easily generate random three - qubit vectors using the circuit of fig .[ circuit ] .angles and to can be drawn classically according to their individual probability distribution .angles can be obtained classically from the joint distribution by , for instance , monte - carlo rejection method ( that is , drawing angles to and a parameter $ ] at random , and keeping them if ) . bounding from above by yields a success rate of about .in this work , we constructed a quantum circuit for generating three - qubit states distributed according to the unitarily invariant measure .the construction is exact and optimal in the sense of having the smallest possible number of cnot gates .the procedure requires a set of random numbers classically drawn , which will be the angles of the one - qubit rotations , and whose distribution has been explicitly given .remarkably , we have shown that the distribution of angles factorizes , apart from that of four angles .the circuit can be used as a three - qubit random state generator , thus producing at will typical states on three qubits .it could be also used as a building block for pseudo - random circuits in order to produce pseudo - random quantum states on an arbitrary number of qubits . at last, it gives an example of a quantum algorithm producing interesting results which could be implemented on a few - qubit platform , using only quantum gates , of which are one - qubit elementary rotations much less demanding experimentally .we thank the french anr ( project infosysqq ) and the ist - fet program of the ec(project eurosqip ) for funding .m would like to acknowledge support by slovenian research agency , grant j1 - 7437 , and hospitality of laboratoire de physique th ' eorique , toulouse , where this work has been started .in this appendix , we explain how to obtain the angles of the circuit ( fig . [ circuit ] ) for a given , based on the discussion in .this justifies the use of these angles as a parametrization of the quantum states .we start from a state , and transform it by the inverse of the different gates of fig . [ circuit ] to end up with , specifying how the angles are obtained in turn .more details can be found in .a generic three - qubit state can be written in a canonical form as a sum of two ( not normalized ) product terms , where are one - qubit states , is a one - qubit state orthogonal to and is a two - qubit state of the second and third qubits . the angle is chosen such that the z - rotation of angle eliminates a relative phase between the coefficients of the expansion of into and .( note that because we are using the circuit in the reverse direction the angles of rotations have opposite signs ) . a subsequent y - rotation with angle results in the transformation ( up to a global phase ) .similarly , rotations of angles and rotate into . 
after applying rotations of angles , , and the state has become of the form ( up to normalization ) .two rotations on the third qubit of angles and are now chosen so as to rotate into some new state while is rotated , up to normalization , into .it was shown in that this can always be done by writing the normalized as , and then is a solution of while , where s are relative phases in .acting with a cnot gate on the resulting state one obtains a quantum state for the three qubits of the form , with .the z - rotation angle on the second qubit is now determined so as to eliminate a relative phase between the expansion coefficients of , making them real up to a global phase .on the third qubit we now apply three rotations of angles , , and to bring to and into , eliminating also a relative phase . then a cnot is applied . at this point( after the second cnot in fig . [ circuit ] , counting from right , but without the rotation ), the state has become of the form , where the one - qubit states and are normalized and real . with now eliminate the relative phase , and with an y - rotation of angle the third qubit is brought to the state . then the combination of two y - rotation of angles and with a cnot brings the second qubit to , and the last rotation of angle on the first qubit yields the final state .note that in the circuit of fig .[ circuit ] the two z - rotations of angles and commute with cnot gates if they act on the control qubit .this is the reason why the rotation of angle can be applied at any point between and and , similarly , can be applied at any point between and .weinstein and c. s. hellberg , phys .. lett . * 95 * , 030501 ( 2005 ) ; m. nidari , phys .a * 76 * , 012318 ( 2007 ) ; y. most , y. shimoni and o. biham , phys .a * 76 * , 022328 ( 2007 ) ; d. rossini and g. benenti , phys .100 * , 060501 ( 2008 ) ; y. s. weinstein , w. g. brown and l. viola , phys .a * 78 * , 052332 ( 2008 ) ; l. arnaud and d. braun , phys .a * 78 * , 062329 ( 2008 ) . | we explicitly construct a quantum circuit which exactly generates random three - qubit states . the optimal circuit consists of three cnot gates and fifteen single qubit elementary rotations , parametrized by fourteen independent angles . the explicit distribution of these angles is derived , showing that the joint distribution is a product of independent distributions of individual angles apart from four angles . |
at the 2012 varenna summer school on _ physics of complex colloids _ , i gave a series of lectures on computer simulations in the context of complex liquids .the lectures were introductory , although occasionally , i would mix in a more general cautionary remark. it seemed to me that there was little point in writing a chapter in the proceedings on ` introduction to computer simulations ' .books on the topic exist .however , i did not quite know what to write instead .then , over lunch , _ wilson poon _ suggested to me to write something on the limitations of existing simulations methods : where do they go wrong and why ?i liked the idea very much .the scope of the present manuscript is a bit broader : after a fairly general ( but brief ) introduction , i will discuss three types of issues : 1 .computer simulation methods that seem simple yet require great care 2 .computer simulation methods that seem reasonable but are not 3 . myths and misconceptions not all issues that i list are of direct relevance for soft matter . however , i hope that the reader will forgive me .i should also point out that many of the issues that i discuss are very well known sometimes they are even trivial .however , i thought it better to list even the trivial examples , rather than assume that every single one of them is well known to all readers .some of the issues that i highlight may not be well known , simply because i am mistaken or i have missed a key reference .if so , i apologise .i also apologise for the rather black - or - white way in which i present problems .seen in their original context , the issues are usually more subtle . my aim is to show what can go wrong if techniques are used outside their original context .over the past 60 years , the speed at which computers perform elementary calculations has increased by a factor 10 , and the size of computer memories and the capacity of data storage devices have undergone similarly spectacular increases .the earliest computer simulations of systems consisting of a few hundred atoms could only be performed on the world s largest computers .now , anybody who has access to a standard computer for personal use can carry out simulations that would have required a supercomputer only 15 years ago .moreover , software to carry out computer simulations is readily available .the fact that the hardware and software thresholds for performing ` normal ' simulations have all but disappeared forces us to think about the role of computer simulations .the key question is : why should one perform a simulation in the first place .when we look at computer simulations in an applied context , the answer to the question ` why simulation ? ' is simple : they can save time ( and money ) .increasingly , simulations are used to complement experiment or , more precisely , to guide experiments in such a way that they can focus on the promising compounds or materials .this is the core of the rapidly growing field of computational materials science and computational ` molecular ' design .computer simulations allow us to predict the properties of potentially useful substances , e.g. pharmaceutical compounds or materials with unique physical properties .using computer simulations we can pre - screen candidate substances to minimise the amount of experimental work needed to find a substance that meets our requirements .in addition , simulations are very useful to predict the properties of materials under conditions that are difficult to achieve in controlled experiments ( e.g. 
very high temperatures or pressures ) .computational materials science of the type sketched above is the ` front end ' of a broader scientific endeavour that aims to advance the field of particle - based modelling , thus opening up new possibilities .much of this development work is carried out in an academic environment where other criteria apply when we wish to answer the question whether a simulation serves a useful purpose .below , i list several valid reasons to perform a simulation , but i also indicate what reasons i consider less convincing .let me begin with the latter .the total number of molecular systems that can , in principle , be simulated is very , very large .hence , it is not difficult to find a system that nobody else has simulated before .this may seem very tempting .it is easy to perform a simulation , create a few nice colour snapshots and compute , say , a radial distribution function .then , we write a manuscript for a high impact journal and , in the abstract , we write ` here , for the first time , we report molecular dynamics simulations of _18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricos-6,13-diene-19-yne-3,9-dione _ ' i took the name from wikipedia , and my guess is that nobody has simulated this substance .then , in the opening sentence of our manuscript we write : ` recently , there has been much interest in the molecular dynamics of _ 18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricos-6,13-diene-19-yne-3,9-dione ... _ ' and , with a few more sentences , a some snapshots and graphs , and a concluding section that mirrors the abstract , the work is done ... of course , this example is a parody of reality but only just .such simulations provide information that answers no existing question it is like the famous passage in the hitchhikers guide to the galaxy , where the computer ` deep thought ' has completed a massive calculation to answer the question of _ life , the universe and everything_. the answer is 42 but the problem is that nobody really remembers what the question was .a simulation should answer a question .but there are different kinds of questions .i will discuss some of the categories below .our knowledge of forces between all but the simplest molecules is limited .moreover , the construction of reliable force - fields is a time - consuming business that combines experimental and theoretical information .the most widely used force fields are hopefully ` transferable ' .this means that the interactions between molecules are decomposed into interactions between their constituent atoms or groups , and that the same interactions can be used when these atoms or groups are arranged to form other molecules. 
the model may take interactions between charges and polarisability into account but , in the end , the force - field is always approximate .this is even true when interactions are computed ` on the fly ' using density - functional theory or another quantum - based approach .therefore , if we wish to apply a force field to a new class of molecules , or in a range of physical conditions for which it has not been tested , we can not assume _ a priori _ that our force - field will perform well : we must compare the predictions of our simulations with experiment .if simulation and experiment disagree , our force field will have to be improved .optimising and validating force fields is an important ingredient of computer simulations .it has been suggested that simulation constitutes a third branch of science , complementing experiment and theory .this is only partly true .there is a two - way relation between theory and experiment : theories make predictions for experimental observations and , conversely , experiments can prove theories wrong if experiment agrees with theory then this does not demonstrate that the theory is correct but that it is compatible with the observations .simulations have some characteristics of theory and some of experiment . as is the case for theory , simulations start with the choice of a model , usually a choice for the hamiltonian ( ` the force field ' ) describing the system and , in the case of dynamical simulations , a choice for the dynamics ( newtonian , langevin , brownian etc ) . with this choice , simulations can ( in principle ) arrive at predictions of any desired accuracy the limiting factor is computing power. however , simulations can never arrive at exact results , let alone analytical relations . unlike theories , simulations are therefore not a good tool to summarise our understanding of nature .however , they can provide important insights : for instance , a simulation can show that a particular model captures the essential physics that is needed to reproduce a given phenomenon .a case in point is hard - sphere freezing : before the simulations of alder and wainwright and jacobson and wood , it was not obvious that systems with hard - sphere interactions alone could freeze .once this was known , approximate theories of freezing could focus on the simple hard - sphere model and ` computer experiments ' could be used to test these theories .this shows two aspects of simulations : a ) it can act as a ` discovery tool ' and b ) it can be used to test predictions of approximate theories .conversely , exact results from theory are an essential tool to test whether a particular algorithm works : if simulation results ( properly analysed and , if necessary , extrapolated to the thermodynamic limit ) disagree with an exact theoretical result , then there is something wrong with the simulation .these tests need not be very sophisticated : they can be as simple as computing the average kinetic energy of a system and comparing with equipartition ( for a system of the same size ) , or computing the limiting behaviour of the non - ideal part of the pressure of a system with short - ranged interactions at low densities and comparing with the value expected on the basis of our knowledge of the second virial coefficient . 
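to make the last point concrete , the following minimal sketch ( in reduced units , for an illustrative lennard - jones pair potential that plays no role elsewhere in this text ) evaluates the second virial coefficient $B_2(T) = -2\pi \int_0^{\infty} dr\, r^2 \left[ e^{-u(r)/k_B T} - 1 \right]$ by straightforward quadrature , so that the low - density limit of a simulated equation of state , $p/(\rho k_B T) \to 1 + B_2 \rho$ , can be checked against it :

```python
import numpy as np

def u_lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def b2(T, r_max=50.0, n=400000):
    """Second virial coefficient B2(T) = -2*pi * int_0^inf [exp(-u/kT) - 1] r^2 dr,
    evaluated with a simple uniform-grid quadrature (k_B = 1, reduced units)."""
    r = np.linspace(1.0e-6, r_max, n)
    integrand = (np.exp(-u_lj(r) / T) - 1.0) * r * r
    dr = r[1] - r[0]
    return -2.0 * np.pi * np.sum(integrand) * dr

for T in (1.0, 1.5, 3.0, 5.0):
    print(f"T* = {T:3.1f} :  B2* = {b2(T):8.3f}")
# a low-density simulation at temperature T should then reproduce
# p/(rho*T) ~ 1 + b2(T)*rho as rho -> 0
```

similar one - line checks ( equipartition , conservation of total momentum ) cost next to nothing and catch a surprising number of coding errors .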
when comparing simulation with experiment , we do something else : we are testing whether our hamiltonian ( ` force - field ' ) can reproduce the behaviour observed in experiment .as said before : agreement does not imply that the force field is correct , just that it is ` good enough ' .once a simulation is able to reproduce the experimental behaviour of a particular system , we can use the computer as a microscope : we can use it to find hints that help us identify the microscopic phenomena that are responsible for unexplained experimental behaviour .experimentalists and simulators often speak a different ` language ' .experimental data are usually ( although not always ) expressed in s.i . units .simulators often use reduced units that can be an endless source of confusion for experimentalists ( see fig .[ fig : reducedunits ] ) .there are good reasons to use reduced units in simulations [ e.g. to use a characteristic particle diameter as a unit of length and to use the thermal energy ( ) as a unit of energy ] . however , the units should be clearly defined , such that a conversion to s.i .units can easily be made .there is one general rule , or rather a form of numerical ` etiquette ' , governing the comparison between experiment and simulation that is not often obeyed .the rule is that , if at all possible , simulations should compute directly what is being measured in experiment .the reason is that it is often not straightforward to convert experimental data into the quantities that are computed in a simulation : the procedure may involve additional approximations and , at the very least , is likely to lead to a loss of information . to give a specific example : scattering experiments do not measure the radial distribution function , rather , they measure over a finite range of wave - vectors . as is related to the fourier transform of , we would need to know for _ all _ wave vectors to compute . in practice ,experiments probe over a finite window of wave - vectors .if we would perform a fourier transform within this window , we would obtain an estimate for that contains unphysical oscillations ( and possibly even negative regions ) , due to the truncation of the fourier transform .of course , procedures exist to mitigate the effects of such truncation errors , yet , if at all possible , simulations should report to save the experimentalist the trouble of having to perform an approximate fourier transform of their data .there is , of course , a limit to this rule : some corrections to experimental data are specific for one particular piece of equipment .if simulations would account for these specific instrument effects , they could only be compared to one particular experiment . in that case, the meeting point should be half way .computers have become more powerful but so have experimental techniques ( often for the same reason ) .many experiments generate huge data streams that are impossible to analyse by hand . 
for this reason ,computers play an important role in data analysis .most of these techniques are not related to computer simulations but some are .increasingly , molecular modelling is integrated in experiments .the experimental data provide constraints that allow us to refine the model .increasingly , this type of simulation will be embedded in the experimental equipment a black box that most users will not even see .the staggering increase in computing power during the past 60 years [ roughly by a factor 10 ] , is such that it has not only quantitatively changed computing , but also qualitatively , because we can now contemplate simulations ( e.g. a short simulation of a system the size of a bacterium ) that were not even on the horizon in the early 50 s .however , in some cases , the speed - up achieved through the use of better algorithms ( e.g. techniques to study rare events ) have been even more dramatic . yet , the algorithms at the core of monte carlo or molecular dynamics simulations have barely changed since the moment that they were first implemented . hence , a computer simulator in the 60 s would have been able to predict fairly accurately on the basis of moore s law , what system sizes and time - scales we can simulate today .however , he / she ( more ` he ' than ` she ' in the 50 s and 60 s arianna rosenbluth and mary - ann mansigh are notable exceptions ) would not have been able to predict the tools that we now use to study rare events , quantum systems or free - energy landscapes .i should add that many algorithmic improvements are not primarily concerned with computing speed but with the fact that they enable new types of calculations ( think of thermostats , barostats , fluctuating box shapes etc ) .it would be incorrect to measure the impact of these techniques in terms of a possible gain in computing speed . although this point of view is not universally accepted , scientists are human .being human , they like to impress their peers . one way to impress your peers is to establish a record .it is for this reason that , year after year , there have been and will be claims of the demonstration of ever larger prime numbers : at present 2012 the record - holding prime contains more than ten million digits but less than one hundred million digits . as the number of primes is infinite , that search will never end and any record is therefore likely to be overthrown in a relatively short time .no eternal fame there . in simulations , we see a similar effort : the simulation of the ` largest ' system yet , or the simulation for the longest time yet ( it is necessarily ` either - or ' ) .again , these records are short - lived .they may be useful to advertise the power of a new computer , but their scientific impact is usually limited . at the same time , there are many problems that can not be solved with today s computers but that are tantalisingly close .it may then be tempting to start a massive simulation to crack such a problem before the competition do it .that is certainly a valid objective , and alder and wainwright s discovery of long - time tails in the velocity auto - correlation function of hard spheres is a case in point .the question is : how much computer time ( ` real ' clock time ) should we be prepared to spend . 
to make this estimate , let us assume that the project will have unlimited access to the most powerful computer available at the time that the project starts . let us assume that this computer would complete the desired calculation in $T_0$ years . somewhat unrealistically , i assume that the computer , once chosen , is not upgraded . the competition has decided to wait until a faster computer becomes available . we assume that the speed of new computers follows moore s law . then , after waiting for $t$ years , the time to completion on a new computer will be \[ \tau(t) = t + T_0\, e^{-t/2.17} \] ( 2.17 years is the time it takes the computer power to increase by a factor $e$ if moore s law holds ) . the question is : who will win , the group that starts first , or the group that waits for a faster computer to become available : _ i.e. _ what is the waiting time $t^*$ that minimises $\tau(t)$ ? it satisfies \[ \frac{d\tau}{dt} = 1 - \frac{T_0}{2.17}\, e^{-t^*/2.17} = 0 \;, \quad \mbox{hence} \quad t^* = 2.17\, \ln ( T_0 / 2.17 ) \;. \] this equation has a solution for positive $t^*$ when $T_0 > 2.17$ years . if the calculation takes less than 2.17 years , the group that starts first wins ; otherwise , the group that waits finishes the calculation after \[ \tau(t^*) = 2.17\, \left[ 1 + \ln ( T_0 / 2.17 ) \right] \] years . of course , this calculation is very oversimplified , but the general message is clear : it is not useful to buy a computer to do a calculation that will take several years . once more , i should stress that i mean ` wall clock ' time , not processor years . finally , it is important to realise that moore s law is not a law but an extrapolation that , at some point , will fail . many of the subtleties of computer simulations are related to the fact that the systems that are being studied in a simulation are far from macroscopic . the behaviour of a macroscopic system can be imitated , but never really reproduced , by using periodic boundary conditions . the main advantage of periodic boundary conditions is that they make it possible to simulate a small system that is not terminated by a surface , as it is periodically repeated in all directions . but even though the use of periodic boundary conditions will eliminate surface effects ( that are very dramatic for small systems ) , there are many other finite - size effects that can not be eliminated quite so easily . below , i will give a few examples . but first , a general comment about testing for finite size effects . if a simulator suspects finite size effects in a simulation , the obvious test is to repeat the same simulation with a system of a larger size , e.g. a system with double the linear dimension of the original system . the problem with this test is that it is rather expensive . the computational cost of a simulation of a fixed duration scales ( at least ) linearly with the number of particles . for a 3d system , doubling the linear dimensions of the system makes the simulation 8 times more expensive . in contrast , halving the system size makes the simulation 8 times cheaper . of course , if there is evidence for finite - size effects , then it is necessary to perform a systematic study of the system - size dependence of the simulation results , and then there is no cheap solution . a systematic analysis of finite size effects is of particular importance in the study of critical phenomena . in general , finite size effects will show up whenever we study phenomena that are correlated over large distances : these can be static correlations ( as in the case of critical fluctuations ) or dynamic correlations , as in the case of hydrodynamic interactions . when a particle moves in a fluid , it gradually imparts its original momentum to the surrounding medium .
as momentum is conserved , this momentum spreads in space , partly as a sound wave , partly as ( overdamped ) shear waves . the sound wave moves with the speed of sound $c_s$ and will therefore cross a simulation box with diameter $L$ in a time $L/c_s$ . after that time , a particle will ` feel ' the effect of the sound waves emitted by its nearest periodic images . clearly , this is a finite size effect that would not be observed in a bulk fluid . the time for such spurious correlations to arise is , in practice , relatively short . if $L$ is of the order of 10 nm and $c_s$ is of the order of $10^3$ m / s , then $L/c_s$ is of order 10 ps . needless to say that most modern simulation runs are much longer than that . in addition to a sound wave , a moving particle also induces shear flow that carries away the remainder of its momentum . this ` transverse ' momentum is transported diffusively . the time to cross the periodic box is of order $L^2/\nu$ , where $\nu$ is the kinematic viscosity ( $\nu = \eta/\rho_m$ , where $\eta$ is the shear viscosity and $\rho_m$ the mass density ) . to take water as an example : $\nu \approx 10^{-6}$ m$^2$/s , and hence the time that it takes transverse momentum to diffuse one box diameter ( again , i will assume $L$ = 10 nm ) is around 100 ps . what these examples show is that any numerical study of long - time dynamical correlations may require a careful study of hydrodynamic finite - size effects . below , i will discuss the specific example of hydrodynamic interactions . however , many of my comments apply ( suitably modified ) to the study of dislocations or the calculation of the properties of systems consisting of particles that interact through long - ranged forces ( e.g. charged or dipolar particles ) . at large distances , the hydrodynamic flow field that is induced by a force acting on a particle in a fluid decays as $1/r$ . if we now consider a system with periodic boundary conditions , the hydrodynamic flow fields due to all periodic images of the particle that is being dragged by the external force will have to be added . it is clear that , just as in the case of the evaluation of coulomb interactions , a naive addition will not work . it would even seem that the total flow field due to an infinite periodic array of dragged particles would diverge . this is not the case , and the reason is the same as in the coulomb case : when we compute coulomb interactions in a periodic system , we must impose charge neutrality . similarly , in the case of hydrodynamic interactions , we must impose that the total momentum of the fluid is fixed ( in practice : zero ) . this means that if we impart a momentum $\mathbf{p}$ to a given particle , then we must distribute a momentum $-\mathbf{p}$ among all other particles . the result is that , if we start with a fluid at rest , dragging one particle in the $+x$ direction creates a back flow of all other particles in the $-x$ direction . if there are $N$ particles in the system , this is an effect of order $1/N$ . but that is not the only finite size effect . the other effect is due to the fact that the drag force per particle needed to move a periodic array of particles through a fluid is not simply the sum of the forces needed to move a single particle at infinite dilution . in fact , the force is different ( larger ) by an amount that is proportional to $1/L$ ( roughly speaking , because the hydrodynamic interactions decay as $1/r$ ) . hence , the mobility ( velocity divided by force ) of a particle that belongs to a periodic array is less than the mobility of an isolated particle in a bulk fluid . a rough numerical illustration of the size of this effect is sketched below .
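to get a feeling for the magnitude of this mobility reduction before drawing its consequences for the diffusion coefficient , the following minimal sketch evaluates a widely used estimate for cubic boxes , $\Delta D \approx \xi\, k_B T / ( 6 \pi \eta L )$ with $\xi \approx 2.837$ ( the yeh - hummer correction ; this particular formula and all numbers below are illustrative assumptions , not taken from the text ) :

```python
import numpy as np

# illustrative parameters (assumptions for the sketch, not from the text):
# a small colloid in water at room temperature
kB    = 1.380649e-23   # J/K
T     = 300.0          # K
eta   = 1.0e-3         # Pa s  (shear viscosity of water)
sigma = 5.0e-9         # m     (particle diameter)
xi    = 2.837297       # cubic-box constant of the hydrodynamic self-correction

# Stokes-Einstein estimate for the bulk diffusion coefficient (stick boundary conditions)
D_bulk = kB * T / (3.0 * np.pi * eta * sigma)

for L in np.array([5.0, 10.0, 20.0, 50.0]) * 1e-9:   # box edge lengths in m
    dD = xi * kB * T / (6.0 * np.pi * eta * L)       # finite-size suppression of D
    print(f"L = {L*1e9:5.1f} nm :  dD / D_bulk = {dD / D_bulk:5.2f}")
# even in a box ten times the particle diameter, the apparent diffusion
# coefficient is still suppressed by roughly 14 per cent.
```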
as a consequence ( stokes - einstein ) , the diffusion coefficient of a particle in a periodic fluid has a substantial finite - size correction that , relative to the bulk value , scales as $\sigma/L$ , where $\sigma$ is the diameter of the particle . for a three - dimensional system , the diffusion coefficient therefore approaches its infinite system - size limit as $N^{-1/3}$ , _ i.e. _ very slowly . by comparison , the finite - size correction due to back flow is relatively minor . periodic boundary conditions impose a specific symmetry on the system under study . for instance , if we study a fluid using simple - cubic periodic boundary conditions , then a single ` cell ' of this periodic lattice may look totally disordered , yet as it is repeated periodically in all directions , the system should be viewed as a simple cubic crystal with a particularly complex unit cell . as was already well known in the early days of computer simulations , the imposed symmetry of the periodic boundary conditions induces some degree of bond - orientational order in the fluid . often , this ( slight ) order has no noticeable effect , but in the case of crystal nucleation it may strongly enhance the nucleation rate . this is why finite size effects on crystal nucleation are so pronounced . the shape of the simulation box has an effect on the magnitude of finite size effects . intuitively , it is obvious that this should be the case in the limit that one dimension of the box is much smaller than the others . for instance , if the simulation box has the shape of a rectangular prism with sides $L_x < L_y < L_z$ , then the shortest distance between a particle and its periodic image is $L_x$ . clearly , we want $L_x$ to be sufficiently large that a particle does not ` feel ' the direct effect of its periodic images . this sets a lower limit for $L_x$ . for a cubic box , $L_y$ and $L_z$ would have the same value , but for the rectangular prism mentioned above , they will be larger . hence , a larger system volume would be needed to suppress finite size effects for a rectangular prismatic box than for a cubic box . this is one of the reasons why cubic boxes are popular ; the other reason is that it is very easy to code the periodic boundary conditions for cubic boxes . this second reason explains why cubic boxes are being used at all , because there are other box shapes that result in a larger distance between nearest periodic images for the same box volume . depending on the criteria that one uses , there are two shapes that can claim to be optimal . the first is the _ rhombic dodecahedron _ ( i.e. the wigner - seitz cell of a face - centred cubic lattice ) . for a given volume , the rhombic dodecahedron has the largest distance between nearest periodic images or , what amounts to the same thing , it has the largest inscribed sphere . the other ( more popular ) shape is the _ truncated octahedron _ ( the wigner - seitz cell of a body - centred cubic lattice ) . the truncated octahedron is the most ` spherical ' unit cell that can be used , in the sense that it has the smallest enclosing sphere . in fact , the ratio of the radii of the enclosing and inscribed spheres of the truncated octahedron is $\sqrt{5/3} \approx 1.29$ . for a rhombic dodecahedron , this ratio is $\sqrt{2} \approx 1.41$ and for a cube it is $\sqrt{3} \approx 1.73$ . in two dimensions the optimal box shape is a hexagon ( and in 4d it is the wigner - seitz cell of the so - called $D_4$ lattice ) . in simulations of isotropic fluids , any shape of the simulation box can be chosen , provided that it is sufficiently large to make finite - size effects unimportant . a simple numerical comparison of the nearest - image distances for these box shapes is sketched below .
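the geometric statements above are easy to verify numerically ; the following sketch ( using the standard wigner - seitz constructions for the fcc and bcc lattices , with an arbitrary reference volume ) compares the nearest - image distance of the three box shapes at equal volume :

```python
import numpy as np

V = 1.0  # box volume (arbitrary units); all distances below scale as V**(1/3)

# cube of edge a_c: volume a_c**3, nearest image at distance a_c
a_c = V ** (1.0 / 3.0)
d_cube = a_c

# truncated octahedron = Wigner-Seitz cell of a bcc lattice with cubic lattice
# constant a_b: cell volume a_b**3 / 2, nearest images at sqrt(3)/2 * a_b
a_b = (2.0 * V) ** (1.0 / 3.0)
d_trunc_oct = np.sqrt(3.0) / 2.0 * a_b

# rhombic dodecahedron = Wigner-Seitz cell of an fcc lattice with cubic lattice
# constant a_f: cell volume a_f**3 / 4, nearest images at a_f / sqrt(2)
a_f = (4.0 * V) ** (1.0 / 3.0)
d_rhombic = a_f / np.sqrt(2.0)

for name, d in [("cube", d_cube),
                ("truncated octahedron", d_trunc_oct),
                ("rhombic dodecahedron", d_rhombic)]:
    print(f"{name:22s} nearest-image distance / V^(1/3) = {d / V**(1.0/3.0):.4f}")
# expected output: 1.0000, 1.0911, 1.1225 -- the rhombic dodecahedron wins,
# in line with the statement above, at the price of slightly more involved
# minimum-image code.
```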
however , for ordered phases such as crystals , the choice of the simulation box is constrained by the fact that it has to accommodate an integral number of crystal unit cells . if the crystal under study is cubic , then a cubic simulation box will do , otherwise the simulation box should be chosen such that it is as ` spherical ' as possible , yet commensurate with the unit cell of the crystal under study . note that this constraint implies that , if a given substance has several crystal polymorphs of lower than cubic symmetry , then different box shapes will be needed to simulate these phases . moreover , for low symmetry crystals , the shape of the crystal unit cell will , in general , depend on temperature and pressure . hence , simulations of different thermodynamic state points will require simulation boxes with different shapes . if the shape of the simulation box is incompatible with the crystal symmetry , the crystal will be strained . this is easy to verify by computing the stress tensor of the system : if the average stress is not isotropic ( hydrostatic ) , the shape of the simulation box causes a deformation of the crystal . as a consequence , elastic free energy will be stored in the crystal and it will be less stable than in its undeformed state . not surprisingly , such a deformation will affect the location of phase coexistence curves . the effect of the shape of the simulation box on the crystal under study is not simply a finite size effect : even a macroscopic crystal can be deformed . however , for a large enough system , it is always possible to choose the number of crystal unit cells in the $x$ , $y$ and $z$ directions such that the resulting crystal is almost commensurate with the simulation box . the remaining difference is a finite size effect . however , if we have been less careful and have prepared a deformed crystal , then it will stay deformed , if not forever , then at least much longer than the time that we can study in a simulation . in practice , the shape of the simulation box is not chosen by trial and error until the crystal is stress free ( or , more precisely , has an isotropic stress ) . in the early 80 s , parrinello and rahman developed a constant - stress molecular dynamics scheme that treated the parameters characterising the box shape as dynamical variables . in a simulation , the box shape fluctuates and the average box shape is the one compatible with the imposed stress . if the imposed stress is isotropic , then the parrinello - rahman technique yields the box shape compatible with an undeformed crystal . shortly after its introduction , the parrinello - rahman technique was extended to monte carlo simulations by najafabadi and yip . more recently , the method of najafabadi and yip was used by filion and coworkers to predict the crystal structure of novel materials . in this method , small systems are used such that appreciable fluctuations in the shape of the simulation box are possible : this allows one to explore a wide range of possible structures . potentially , the fluctuations in the shape of the simulation box can become a problem : if the box becomes very anisometric , the nearest - neighbour distance in some directions becomes very small and serious finite size effects are observed . this problem is always present if fluctuating box shapes are used to simulate liquids ( but , usually , there is no need to use fluctuating box shapes in that case ) . in the case of crystals , the problems are most pronounced for small systems .
in that case, the fluctuations of the box shape may have to be constrained . to allow for larger changes in the shape of the crystal unit cell , the system should be mapped onto a new simulation box , once the original box becomes too deformed .this remapping does not affect the atomic configuration of the system .liquid crystals are phases with symmetry properties intermediate between those of an isotropic liquid and of a fully ordered , three - dimensional crystal. the simplest liquid crystal , the so - called _ nematic _ phase , has no long - range translational order , but the molecules are on average aligned along a fixed direction , called the nematic _ director_. like isotropic liquids , nematic phases are compatible with any shape of the simulation box . however , for more highly ordered liquid - crystalline phases , the box shape should be chosen carefully . to give a simple example : the smectic - a phases is similar to the nematic phase in that all molecules are , on average aligned along a director but , unlike the nematic phase , the density of the nematic phase along the director is periodic . in the simplest case one can view the smectic - a phase as a stack of liquid - like molecular layers . clearly , in a simulation, the boundary conditions should be such that an integral number of layers fits in the simulation box .this constrains the system size in one direction ( viz . along the director ) , but not along the other two directions .again , simulations with a flexible box shape can be used to let the system relax to its equilibrium layer spacing . however , as the smectic phase is liquid like in the transverse direction , one should make sure that the box shape in those directions is fixed .there is a practical reason for using slightly more complex boundary conditions for smectic - a phases : the average areal density in all smectic - a layers is the same , but the number of molecules per layer will fluctuate .this is a real effect .however , the time it takes a molecule to diffuse ( ` permeate ' ) from one smectic layer to the next tends to be very long ( it involves crossing a free - energy barrier ) . hence ,unless special precautions are taken , the distribution of molecules over the various smectic layers will hardly vary over the course of a simulation and hence , the system does not adequately sample the configuration space . this problem can be solved by using so - called ` helical ' boundary conditions .suppose that we have a smectic phase in a rectangular prismatic simulation box .we assume that the smectic layers are perpendicular to the axis and have an equilibrium spacing . now consider the next periodic box in the -direction .normally , this box is placed such that the point maps onto .if we use helical boundary conditions , the box at is shifted by along the direction : hence , maps onto ( see fig . 
[fig : helical ] ) .the result is that all the smectic layers in the system are now connected laterally into a single layer .now , fluctuations in the number of particles per layer is easy because it does not require permeation .similar techniques can be used to allow faster equilibration of columns in the so - called _ columnar _ phase ( a phase that has crystalline order in two directions and liquid - like order in the third ) .i should point out that helical boundary conditions are not necessarily more ` realistic ' than normal periodic boundary conditions .in particular , the slow permeation of particles between layers is a real effect .the simulation of cholesteric liquid crystals poses a special problem .a cholesteric liquid crystal is similar to the nematic phase in that it has no long - ranged translational order .however , whereas the director if a nematic phase has a fixed orientation throughout the system , the director of a cholesteric phase varies periodically ( ` twists ' ) in one direction and is constant in any plane perpendicular to the twist direction . for instance , if we choose the twist direction to be parallel to the -axis , then the -dependence of the director would be of the form where denotes the cholesteric pitch .ideally , the periodic boundary conditions should be chosen such that the cholesteric pitch is not distorted . in practice , this is often not feasible because typical cholesteric pitches are very long compared to the molecular dimensions . for cholesteric phases of small molecules ( as opposed to colloids ) , pitches of hundreds of nanometers are common .simulations of systems with such dimensions would be very expensive .the problem can be alleviated a bit by using twisted periodic boundary conditions . in that casethe system is not simply repeated periodically in the -direction ( the pitch axis ) , but it is rotated by an amount that is compatible with the boundary conditions . if the simulation box is a square prism , any rotation by a multiple of is allowed .the minimal twist corresponds to the case where the rotation is . if the simulation box is a hexagonal prism , rotations that are a multiple of ( corresponding to 1/6-th of a pitch )are allowed .smaller angles are not possible . although , the box size needed to accommodate an undeformed cholesteric is usually still too large to be computationally feasible .hence , cholesterics are usually simulated in an over- or under - twisted geometry and the properties of the undistorted are deduced using thermodynamics .allen has used such an approach to compute the twisting power of chiral additives in a nematic phase . here, i outline how a simulation of over- and under - twisted cholesterics can be used to estimate the equilibrium pitch .to do so , we have to make use of the fact that , for small deformations , the free energy of a cholesteric phase depends quadratically on the degree of over(under)-twist : where and is the value of for which the system is torque - free and is the volume of the system .the constant is the so - called ` twist elastic constant ' .the subscript is there for historical reasons ( in a nematic , two other types of director deformation are possible : ` splay ' and ` bend ' : the associated elastic constants are denoted by and respectively ) . in what follows, i drop the subscript of . in a periodic system with a box length in the -direction , , where is the twist of the system over a distance . 
in practice, we will only consider or .then an infinitesimal change in the helmholtz free energy is given by where is the entropy of the system , the temperature , the pressure , the chemical potential ( per particle ) and the number of particles .in what follows , i fix and .it is useful to write as the sum of a volume change due to , a lateral expansion / compression of the system and , the volume change due to deformations in the -direction .note that the total free energy change due to a change in is given by \;.\ ] ] we can write this as \parallel= -\left[p + k(\xi-\xi_0)\xi\right]dv_\parallel\;.\ ] ] a change in volume at constant pitch ( i.e. constant ) would yield the excess pressure for deformations along the -axis is therefore by measuring we can determine .momentum conservation in classical mechanics follows if the lagrangian of the system is invariant under infinitesimal translations .this result carries over from a free system in the absence of external forces to a system with periodic boundary conditions . for a non - periodic system, momentum conservation implies that the centre of mass of the system moves at a constant velocity . in the presence of periodic boundary conditions ,this statement becomes ambiguous because one should specify how the position of the centre of mass is computed .clearly if we interpret the centre of mass of the system as the mass - weighted average position of the particles in the simulation box at the origin , then the center of mass motion is not at all conserved .every time the particle crosses the boundary of the simulation box and is put back at the other side the centre of mass undergoes a jump . however, if we consider the center of mass of the particles that were originally in the ` central ' simulation box , and do not put these particles back in that box when they move to another box , then the velocity of the center of mass of those particles is conserved .it should be stressed that the very concept of the ` boundary ' of the simulation box is confusing : for normal ( equilibrium ) simulations , we can draw the lattice of simulation boxes wherever we like , hence the boundaries between such boxes can be anywhere in the system . the ` boundary ' therefore has no physical meaning ( the situation is different in non - equilibrium simulations with so - called lees - edwards ` sliding ' boundary conditions ) . unlike linear momentum ,angular momentum is not a conserved quantity in systems with periodic boundary conditions .the basic reason is that the lagrangian is not invariant under infinitesimal rotations .this invariance ( for spatially isotropic systems ) is at the origin of angular momentum conservation . in periodic systems ,angular momentum is simply not uniquely defined . to see this ,consider a system consisting of two particles zero net linear momentum . in a free system, the angular momentum of this system is uniquely defined as it does not depend on the origin from which we measure the angular momentum .however , it is easy to see that in a periodic system the apparent angular momentum will be different if we choose the origin of our coordinate system between the two particles in the same simulation box , or between a particle in one simulation box and the periodic image of the other particle . 
considering only particles in the same simulation box will not help because then the angular momentum will change discontinuously whenever we move a particle that has crossed a boundary back into the original simulation box .there is another , more subtle , effect of periodic boundary conditions that is not always properly realised .it has to do with the calculation of fourier transform of spatial correlation function . as an example ( and an important one at that ) , i consider the computation of the structure factor . for simplicity, i will consider an atomic fluid . is related to the mean square value of , the fourier component of the number density at wave vector : where note that the structure factor is , by construction , non - negative . in liquid state theory , we often use the fact that the structure factor of a fluid is related to the radial distribution function though \ ; .\ ] ] in the thermodynamic limit , the volume of the system tends to infinity and the integral does not depend on the size of the integration volume .however , in a small , periodic system it does .more importantly , in an isotropic fluid , the radial distribution function is isotropic and we can write \ ; .\ ] ] for a finite , periodic system , one might be tempted to replace the upper limit of this integration by half the box diameter . however ,this is very dangerous because the function thus computed is no longer equivalent to the expression in eq .( [ eq : rhoq2 ] ) .in particular , the apparent is not guaranteed to be non - negative , even if one only considered those values of that are compatible with the periodic boundary conditions .one solution to this problem is to extrapolate to larger distances , using a plausible approximation .however , in general , it is safest to stick with eq .( [ eq : rhoq2 ] ) , even though it is computationally more demanding .computer simulations provide a powerful tool to locate first - order phase transitions . however , it is important to account properly for finite size effects .the safest ( although not necessarily simplest ) way to locate first order phase transitions is to compute the free energy of the relevant phases in the vicinity of the coexistence curve . for a given temperature , this involves computing the free energy of both phases at one ( or a few ) state points and then computing the pressure of the system in the vicinity of these points . 
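returning for a moment to the structure-factor point above: the safe route is to accumulate s(q) directly from eq. ([eq:rhoq2]) at wave vectors commensurate with the periodic box, rather than fourier transforming a truncated g(r). the sketch below does this for a single configuration of an atomic fluid in a cubic box; the configuration is a random placeholder, and in a real calculation the average is taken over many configurations.

```python
import numpy as np

def structure_factor(pos, box, nmax=5):
    """S(q) = |rho_q|^2 / N at wave vectors q = (2*pi/box)*(nx, ny, nz)
    compatible with a cubic periodic box of side `box`."""
    N = len(pos)
    qs, sq = [], []
    for nx in range(0, nmax + 1):
        for ny in range(-nmax, nmax + 1):
            for nz in range(-nmax, nmax + 1):
                if (nx, ny, nz) == (0, 0, 0):
                    continue
                q = (2.0 * np.pi / box) * np.array([nx, ny, nz])
                rho_q = np.sum(np.exp(-1j * pos @ q))
                qs.append(np.linalg.norm(q))
                sq.append(np.abs(rho_q) ** 2 / N)   # non-negative by construction
    return np.array(qs), np.array(sq)

# toy configuration: ideal-gas-like random positions in a cubic box (placeholder data)
rng = np.random.default_rng(0)
box, N = 10.0, 500
positions = rng.uniform(0.0, box, size=(N, 3))
q, s = structure_factor(positions, box)
print("minimum S(q) =", s.min())   # never negative, unlike a truncated transform of g(r)
```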
in this way, one can evaluate the free - energy as a function of volume .coexistence is then determined using a double tangent construction .the advantage of this procedure is that finite - size effects are relatively small as the two phases are simulated under conditions where no interfaces are present .however , the approach is rather different from the one followed in experiment where one typically looks for the point where the two phases are in direct contact .if the interface between the two phases does not move , we have reached coexistence .there are several reasons why , in simulations , this approach requires great care .first of all , the creation of a two - phase system involves creating a slab of phase i in contact with a slab of phase ii ( the slabs are obviously periodically repeated ) .the free energy of such a system is equal to the free energy of the coexisting bulk phases plus the total surface free energy .a ( potential ) problem is that , if the slabs are not thick enough ( how thick depends on the range of the intermolecular potential and on the range of density correlations ) , the total surface free energy is not equal to that of two surfaces at infinite separation . a second problem, that is more relevant for monte carlo than for molecular dynamics simulations is that two - phase systems equilibrate slowly .in particular , coexistence between two phases requires that the pressures are equal . in molecular dynamics simulations ,the rate of pressure equilibration is determined by the speed of sound typically such equilibration is rapid .in contrast , in monte carlo simulations , pressure equilibrates through diffusion , and this is slower .most likely , our first guess for the coexistence pressure at a given temperature will not be correct and we will see the phase boundary moving. then we have to adjust the pressure .however , it is very important that we do not impose an isotropic pressure on the system .rather , we should vary the volume of the system by changing the box - length in the direction perpendicular to the interface .the reason is that , in the transverse direction , the surface tension of the two interfaces contributes to the apparent pressure .hence if the ` longitudinal ' pressure ( i.e. the one acting on a plane parallel to the interfaces ) is denoted by , then the apparent pressure in the perpendicular direction is in other words : the transverse pressure contains a contribution due to the laplace pressure of the interfaces .the coexistence pressure is , not . in the above example, we have considered the case of a liquid - liquid interface where the laplace correction is determined by the surface tension . however , when one of the coexisting phases is a crystalline solid the problems are more severe because the pressure inside a solid need not be isotropic . in that case , the procedure becomes more complicated .first , one must determine the compressibility of the crystal or , more generally , the relation between the lattice constant of the bulk crystal and the applied isotropic pressure .then the lateral dimensions of the crystalline slab are chosen such that it is commensurate with a crystal with the correct lattice constant for the applied longitudinal pressure . 
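a minimal sketch of the bookkeeping implied by this last step, assuming the bulk-crystal relation between isotropic pressure and lattice constant has already been tabulated: interpolate the lattice constant at the applied longitudinal pressure and set the lateral box lengths to an integer number of lattice constants. the numbers and units below are invented placeholders.

```python
import numpy as np

# hypothetical bulk-crystal equation of state: lattice constant a versus isotropic pressure P
P_table = np.array([0.0, 5.0, 10.0, 15.0, 20.0])     # pressure (reduced units)
a_table = np.array([1.10, 1.08, 1.06, 1.05, 1.04])   # lattice constant (reduced units)

def lateral_box_lengths(p_longitudinal, n_cells_x, n_cells_y):
    """lateral box lengths commensurate with the bulk lattice constant at this pressure."""
    a = np.interp(p_longitudinal, P_table, a_table)
    return n_cells_x * a, n_cells_y * a

# e.g. a slab that is 8 x 8 unit cells in cross-section at an applied longitudinal pressure of 12
Lx, Ly = lateral_box_lengths(12.0, 8, 8)
print(f"Lx = {Lx:.4f}, Ly = {Ly:.4f}")
# if the guessed coexistence pressure is later revised, both the longitudinal pressure and
# these lateral lengths must be updated together, as described in the text
```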
if the initial guess for the coexistence pressure was incorrect one should change the applied longitudinal pressure but also the lateral dimensions of the simulation box , such that the crystal again has the correct lattice constant corresponding to the applied pressure .the rescaling of the lateral dimension of the simulation box should also be performed if we do not fix the longitudinal pressure but . in that casewe do not know a priori what the coexistence pressure will be , but we still must rescale the transverse dimensions of the simulation box to make sure that the stress in the bulk of the crystal is isotropic . finally , there is a general point that , for a small system , the free energy cost to create two interfaces is not negligible compared to the bulk free energy . for sufficiently narrow slabs the result may be that the system will form a strained homogeneous phase instead of two coexisting phases separated by interfaces. in general , simulations that study phase equilibria by preparing two phases separated by interfaces will use biasing techniques to facilitate the formation of a two - phase system . for examples ,see refs . .( [ eq : deltapgamma ] ) illustrates the surface tension of a liquid - liquid interface can create a difference between the longitudinal and transverse components of the pressure that can be measured in a simulation . in principle , eq .( [ eq : deltapgamma ] ) can be used to determine the surface tension . however , eq .( [ eq : deltapgamma ] ) is only valid for the interface between two disordered phases ( e.g. liquid - liquid or liquid - vapour ) .if one of the phases involved is a crystal , the situation becomes more complicated .in particular , the laplace pressure is not determined by the surface tension ( which , for a solid - liquid interface is called the ` surface free - energy density ' ) , but by the surface stress . to see why this is so , consider the expression of the surface free energy of an interface with area : the surface stress is the derivative of the surface free energy with respect to surface area : for a liquid - liquid interface , does not depend on the surface area , as the structure of the surface remains the same when the surface is stretched . hence , in that case .however , for a solid , the structure of the surface is changed when the surface is stretched and hence changes . in that case , . as a consequenceit is much more difficult to determine for a solid - liquid interface than for a liquid - liquid interface .in fact , whereas for a liquid - liquid interface can be obtained from eq .( [ eq : deltapgamma ] ) , the determination of for a solid - liquid interface requires a free - energy calculation that yields the reversible work needed to create an interface between two coexisting phases .an example of such a calculation can be found in ref . .classical mechanics and classical statistical mechanics are at the basis of a large fraction of all molecular dynamics and monte carlo simulations .this simple observation leads to a trivial conclusion : the results of purely classical simulations can _ never _ depend on the value of planck constant and , indeed , the expressions that are used to compute the internal energy , pressure , heat capacity or , for that matter , the viscosity of a system are functions of the classical coordinates and momenta only .however , there seems to be an exception : the chemical potential , . 
when we perform a grand - canonical simulation , we fix , and and compute ( for instance ) the average density of the system . , and do not depend on , but does . to be more precise , we can always write as : where is a purely classical quantity that depends on the interactions between molecules . however , depends on planck constant .for instance , for an atomic system , where is the thermal de broglie wavelength : for a molecular system , the expression for is of a similar form : where is the part of the molecular partition function associated with the vibrations and rotations ( and , possibly , electronic excitations ) of an isolated molecule . is a function of but its value depends on .it would then seem that the results of classical simulations can depend on .in fact , they do not .first of all , if we consider a system at constant temperature , the factors involving just add a constant to .hence , the value of will not affect the location of any phase transitions : at the transition , the chemical potentials in both phases must be equal , and this equality is maintained if all chemical potentials are shifted by a constant amount .one might object that the density of a system itself depends on and therefore on .this is true , but in reality it is not a physical dependence : a shift the absolute value of the chemical potential is not an observable property .one should recall that the chemical potential describes the state of a system in contact with an infinite particle reservoir at a given temperature and density . in experiments ,such a reservoir can be approximated by a large system , but in simulations it is most convenient to view this reservoir as consisting of the same atoms or molecules , but with all inter - molecular interaction switched off . to see this , consider two systems at temperature : a system with particles in volume and a reservoir with particles in volume .we assume that .the density in the reservoir .the reservoir and the system can exchange particles .we now assume that the particle in the reservoir do not interact . to be more precise : we assume that all _ inter_-molecular interaction have been switched off in the reservoir , but all _ intra_-molecular interactions are still present .this ideal gas is our reference system .it is straightforward to write down the expression for the partition function of the reservoir for a given number of particles ( say ) : the total partition function ( system plus reservoir ) is : note that all terms in the sum have the same number of factors and .i take these out and define through note that does not depend on .now , the probability to find particles in the system is : dividing numerator and denominator by , and using the fact that for , , we can write note that in this expression , the number of particles in the system is controlled by , the number density in the reservoir . has disappeared , as it should .the concept of a reservoir that contains an ideal gas of molecules that have normal intra - molecular interactions is extremely useful in practice , in particular for flexible molecules . in a grand canonical simulation , we prepare thermally equilibrated conformations of the molecules that we wish to insert . we can then attempt to insert one of these particles into the system that already contains particles .the probability of insertion is determined by : where measures the interaction of the added particle with the particles already present , but _ not _ the intramolecular interactions of the added particle . 
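a minimal sketch of how the insertion move can be written directly in terms of the reservoir number density, which is one common way of implementing the prescription above; the pair energy is a crude placeholder, and a matching removal move (plus ordinary displacement moves) would be needed for a real grand-canonical simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def pair_energy(new_pos, positions, box):
    # placeholder: hard-sphere-like overlap test with the minimum image convention
    if len(positions) == 0:
        return 0.0
    d = positions - new_pos
    d -= box * np.round(d / box)
    return np.inf if np.any(np.sum(d * d, axis=1) < 1.0) else 0.0

def try_insertion(positions, box, rho_res, beta):
    """one grand-canonical insertion attempt.

    rho_res is the number density of the ideal (non-interacting) reservoir that plays
    the role of the chemical potential; planck's constant never appears. the matching
    removal move, with acceptance min[1, N/(rho_res*V) * exp(-beta*(U_after - U_before))],
    is required for detailed balance and is omitted here.
    """
    V = box ** 3
    N = len(positions)
    trial = rng.uniform(0.0, box, size=3)
    dU = pair_energy(trial, positions, box)   # interaction with the particles already present
    acc = min(1.0, rho_res * V / (N + 1) * np.exp(-beta * dU))
    if rng.random() < acc:
        positions = np.vstack([positions, trial[None, :]])
    return positions

# exercise the move (this loop is not a full simulation: no removals or displacements)
positions = np.empty((0, 3))
for _ in range(1000):
    positions = try_insertion(positions, box=8.0, rho_res=0.05, beta=1.0)
print("particles inserted:", len(positions))
```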
note ( again ) that the crucial control parameter is , the hypothetical but purely classical density in the ideal reservoir .occasionally , the role of planck constant can be a bit more subtle .we have assumed that the quantised intramolecular motions of the particles do not depend on the inter - molecular interactions . at high densities ,this approximation may break down .usually , this means that the distinction between inter and intra - molecular degrees of freedom becomes meaningless .however , if we insist on keeping the distinction , then we must account for the shift of intra - molecular energy levels due to inter - molecular interactions . in that case , necessarily enters into the description . of course , the examples above are but a small subset of the many subtleties that one may run into when performing a simulation .my reason for discussing this particular set is that 1 ) they are important and 2 ) they illustrate that one should never use a simulation programme as a ` black box ' .in the previous section , i discussed aspects of simulations that are more tricky than they might at first seem to be . in the present section ,i list a few examples of simulations techniques that can not be used because they are intrinsically flawed . in a monte carlo simulation, we compute thermal averages of the type in this expression , stands for any ` mechanical ' property of the system ( e.g. the potential energy or the virial ) .what we can not compute in this way are ` thermal ' properties , such as the free energy of the entropy .let us take the free energy as an example .the relation between the helmholtz free energy and the partition function of the system is in what follows , i will focus on the configurational part of the partition function , as the remaining contributions to can usually be computed analytically . clearly , the integral on the righthand side is not a ratio of two integrals , as in eqn .[ eq : aav ] .however , we can write we can rewrite this as and hence it would seem that we _ can _ express the partition function in terms of s thermal average that can be samples . however , the method will not work ( except for an ideal gas , or for similarly trivial systems ) .the reason is that the most important contributions are those for which is very large , whilst the boltzmann weight ( ) is very small - these parts of configuration space will therefore never be sampled . for systems with hard - core interactions ,the situation is even more extreme : an important contribution to the average comes for parts of configuration space where and . in other words , there is no free lunch : ( and therefore ) can not be determined by normal monte carlo sampling .that is why dedicated simulation techniques are needed to compute free energies .the widom ` particle - insertion ' method is used extensively to compute ( excess ) chemical potentials of the components of a moderately dense fluid .below i briefly show the derivation of widom s expression . however , an almost identical derivation leads to an expression that relates the chemical potential to the boltzmann factor associated with particle removal . that expression is useless and , in the limit of hard - core interactions , even wrong .consider the definition of the chemical potential of a species . from thermodynamicswe know that can be defined as : where is the helmholtz free energy of a system of particles in a volume , at temperature . 
for convenience ,i focus on a one - component system .if we express the helmholtz free energy of an -particle system in terms of the partition function , then it is obvious that for sufficiently large the chemical potential is given by : .for instance , for an ideal gas of atoms , \;.\ ] ] the excess chemical potential of a system is defined as }{v\int d{\bf r}^{n}\exp\left[-\beta u({\bf r}^n)\right]}\right\}\;.\end{aligned}\ ] ] we now separate the potential energy of the -particle system into the potential energy function of the -particle system , , and the interaction energy of the -th particle with the rest : . using this separation , we can write as : \exp\left[-\beta\delta u_{n , n+1}\right]}{v\int d{\bf r}^{n}\exp\left[-\beta u({\bf r}^n)\right]}\right\ } \nonumber\\ & = & -k_bt\ln\left\{\frac{\int d{\bf r}_{n+1}\left\langle\exp(-\beta\delta u_{n , n+1})\right\rangle_n}{v}\right\}\;,\end{aligned}\ ] ] where denotes canonical ensemble averaging over the configuration space of the -particle system . in other words ,the excess chemical potential is related to the average of over all possible positions of particle . in a translationally invariant system , such as a liquid, this average does not depend on the position of the addition particle , and we can simply write the widom method to determine the excess chemical potential is often called the particle - insertion method " because it relates the excess chemical potential to the average of the boltzmann factor , associated with the random insertion of an additional particle in a system where particles are already present . to arrive at the ( incorrect ) ` particle - removal ' expression we simply write }{\int d{\bf r}^{n+1}\exp\left[-\beta u({\bf r}^n+1)\right]}\right\}\;.\end{aligned}\ ] ] following the same steps as above ,we then find : if the potential energy of the system is bounded from above ( and , of course , from below ) then this expression is not wrong , but simply rather inefficient .the reason why it is inefficient is the same as in the case of sampling partition functions : the most important contribution come from regions of configuration space that are rarely sampled .if the potential energy function is not bounded from above ( as is the case for hard - core interaction ) , eq .( [ eqn : widomremoval ] ) simply yields the wrong answer . having said that , particle - insertion and particle - removal can be fruitfully combined in the so - called ` overlapping - distribution ' method .but even the particle insertion method may occasionally yield nonsensical results .this happens when applying the method to a system at high densities . as an example , let us consider a crystalline material . during a simulation of a perfect crystal containing particles, we can carry out random particle insertions and we can compute the average boltzmann factor associated with such insertions . the method appears to work , in the sense that it gives an answer .however , that answer is _ not _ the excess chemical potential of atoms in a crystal lattice .rather , it is the excess chemical potential of _ interstitial _ particles .the reason why the method does not yield the expected answer is that in a real crystal , there are always some vacancies .the vacancy concentration is low ( ranging from around 1:10 at melting , to vanishingly values at low temperatures ) . 
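as a concrete illustration of the particle-insertion expression derived above (before turning to the crystal example that follows), the sketch below estimates the excess chemical potential of an atomic fluid by random trial insertions into stored configurations; the pair potential and the configurations are placeholders, and the trial particle is never actually added.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_repulsion(new_pos, positions, box):
    # placeholder pair energy of the trial particle with all others (minimum image convention)
    d = positions - new_pos
    d -= box * np.round(d / box)
    r2 = np.sum(d * d, axis=1)
    return np.sum(np.where(r2 < 1.0, 5.0 * (1.0 - r2) ** 2, 0.0))

def widom_mu_excess(configs, box, beta, n_insert=200):
    """mu_ex = -kT * ln < exp(-beta * dU) >, averaged over random trial insertions."""
    boltz = []
    for pos in configs:
        for _ in range(n_insert):
            trial = rng.uniform(0.0, box, size=3)
            boltz.append(np.exp(-beta * soft_repulsion(trial, pos, box)))
    return -np.log(np.mean(boltz)) / beta

# placeholder 'trajectory': a few uncorrelated configurations of N = 100 particles
box, N = 6.0, 100
configs = [rng.uniform(0.0, box, size=(N, 3)) for _ in range(10)]
print("mu_ex (units of kT):", widom_mu_excess(configs, box, beta=1.0))
```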
yet, for particle insertion, these vacancies are absolutely crucial: the contribution to the average of a single insertion in a vacancy far outweighs all the contributions due to insertion in interstitial positions. using a combination of particle insertion and particle removal we can obtain information about the equilibrium concentration of vacancies and interstitials. because there are so few vacancies, a naive grand-canonical simulation of a crystal will be inefficient. as the vacancy concentration is low, very large systems are needed to create enough vacancies where particles can be added to the system. of course, one can perform biased grand-canonical simulations to alleviate this problem, but i am unaware of any attempts to do so. because the concentration of vacancies is so low in most (but not all) crystals, one subtlety is often overlooked. the point is that, because of the presence of vacancies, the number of particles is not the same as the number of lattice sites (i assume, for simplicity, a one-component bravais lattice). the free energy of the system is then a function of the number of particles _ and _ the number of lattice sites. we can then write the variation of the free energy at constant temperature as where denotes the number of lattice sites and the corresponding ` chemical potential '. i use quotation marks here because is not a true chemical potential: in equilibrium, the number of lattice sites is such that the free energy is a minimum and hence must be zero. however, in a simulation where we constrain and to be the same, is definitely not zero, and incorrect predictions of the phase diagram result if this is not taken into account. there are only a few examples where the analysis of the chemical potential associated with lattice sites has been taken into account correctly. i end this paper with a very brief summary of some ` urban legends ' in the simulation field. i should apologise to the reader that what i express are my personal views; not everybody will agree. in the beginning of this paper i pointed out that in a number of specific cases the computational gains made by novel algorithms far outweigh the increase in computing power expressed by moore's law. however, the number of such algorithms is very small. this is certainly true if we avoid double counting: successful algorithms tend to be re-invented all the time (with minor modifications) and then presented by the unsuspecting authors as new (for example, simpson's rule has recently been reinvented in a paper that has attracted over 150 citations). although i have no accurate statistics, it seems reasonable to assume that the number of new algorithms produced every year keeps on growing. unfortunately, these ` new ' algorithms are often not compared at all with existing methods, or the comparison is performed such that the odds are stacked very much in favour of the new method. emphatically, i am not arguing against new algorithms; quite the opposite is true and, in fact, over the past few decades we have seen absolutely amazing novel techniques being introduced in the area of computer simulation. i am only warning against the ` not invented here ' syndrome.
[[ molecular - dynamics - simulations - predict - the - time - evolution - of - a - many - body - system ] ] molecular dynamics simulations predict the time evolution of a many - body system ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ this view is more common outside the field than inside .however , the outside community is much larger than the tribe of simulators .the reason why molecular dynamics does not predict the time evolution of a many - body system is that the dynamics of virtually all relevant many - body systems is chaotic .this means that a tiny error of the phase - space trajectory of the system will grow exponentially in time , such that in a very short time the numerical trajectory will bear no resemblance to the ` true ' trajectory with the same initial conditions . using more accurate algorithmscan postpone this so - called lyapunov instability .however , doubling the accuracy of the simulation simply shifts the onset by a time .doubling it again , will gain you another time interval .the point is that the computational effort required to suppress the lyapunov instability below a certain preset level scales exponentially with simulation time : in practice , no simulation worthy of the name is short enough to avoid the lyapunov instability .it would seem that this is a serious problem : if molecular dynamics does not predict the time evolution of the system , then what is the use of the method ? the reason why molecular dynamics simulations are still widely used is that ( most likely ) the trajectories that are generated in a simulation ` post - dict ' rather than ` pre - dict ' a possible real trajectory of the system .in particular , it can be shown that the widely used verlet algorithm generates trajectories that are a solution to the discretised lagrangian equations of motion .this means that the algorithm generates a trajectory that starts at the same point as a real trajectory , ends at the same point as that trajectory and takes the same time to move from the starting point to the end point . what molecular dynamics generates is a good approximation to a _possible _ trajectory , not to the trajectory with the same initial conditions .the distinction between the two types of trajectories is illustrated in fig .[ fig : lyapunov ] .typically , the time between the beginning and end points of a trajectory is much longer than the time it takes for the lyapunov instability to appear .therefore , the trajectory that we generate need not be close to any real trajectory .however , if we consider the limit where the time - step goes to zero , then the discretised lagrangian approaches the full lagrangian and the discretised path ( presumably ) approaches the real path .i should stress that this result holds for the verlet algorithm and , presumably , for a number of verlet - style algorithms .however , it is not obvious ( to me ) that the molecular dynamics trajectories generated in an event - driven algorithm with hard collisions are necessarily close to a real trajectory for long times . 
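the sensitivity to initial conditions is easy to see in a toy calculation. the sketch below integrates two copies of the henon-heiles system (a standard chaotic test case, standing in here for a real molecular system) with the velocity verlet algorithm, starting from initial conditions that differ by 1e-10, and prints their phase-space separation; for typical initial conditions at this energy the separation grows by many orders of magnitude long before energy conservation degrades.

```python
import numpy as np

def force(q):
    # henon-heiles potential U = (x^2 + y^2)/2 + x^2*y - y^3/3
    x, y = q
    return np.array([-(x + 2.0 * x * y), -(y + x * x - y * y)])

def vv_step(q, p, dt):
    # one velocity verlet step (unit mass)
    f = force(q)
    p_half = p + 0.5 * dt * f
    q_new = q + dt * p_half
    return q_new, p_half + 0.5 * dt * force(q_new)

dt, nsteps = 0.01, 20000
qa, pa = np.array([0.0, -0.10]), np.array([0.565, 0.0])   # energy just below the escape energy
qb, pb = qa + np.array([1e-10, 0.0]), pa.copy()            # perturbed copy of the same state

for step in range(1, nsteps + 1):
    qa, pa = vv_step(qa, pa, dt)
    qb, pb = vv_step(qb, pb, dt)
    if step % 4000 == 0:
        sep = np.sqrt(np.sum((qa - qb) ** 2) + np.sum((pa - pb) ** 2))
        print(f"t = {step * dt:6.1f}   phase-space separation = {sep:.3e}")
```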
if we can believe molecular dynamics simulations ( and i believe that we can in a ` statistical ' sense ) then there are several excellent reasons to use the technique .the first is , obviously , that molecular dynamics probes the dynamics of the system under study and can therefore be used to compute transport coefficients , something that a monte carlo simulation can not do .another practical reason for using molecular dynamics is that it can be ( and , more importantly , has been ) parallellized efficiently .hence , for large systems , molecular dynamics is almost always the method of choice .this rule - of - thumb is old and nowhere properly justified .there are many examples where it appears that the optimal acceptance of monte carlo moves should be substantially lower ( certainly for hard core systems ) .also , in the mathematical literature it is argued ( but in a different context ) that the optimal acceptance of metropolis - style monte carlo moves is probably closer to 25 % ( 23.4 % has been shown to be optimal for a specific but not a statistical mechanical example ) .a very simple example can be used to illustrate this point .consider a hard particle in an ideal gas .the number density of the ideal gas is denoted by .if we move the hard particle over a distance , it sweeps out a volume proportional to .to make life easy , i will consider a one - dimensional example . in that casethe volume swept out by the hard particle is equal to ( i will assume that the particle itself is much larger than ) .the probability that a monte carlo move over a distance will be accepted is given by the probability that there will be no ideal gas particles in the volume swept out by the particle : if the trial displacement is , then the average _ accepted _ displacement is \times 0 ] that is around 28 % ( i.e. considerably less than 50% ) .does it matter ?probably not very much : the very fact that there is an ` optimal ' acceptance ratio means that the efficiency is fairy flat near this optimum .still , 25 % is rather different form 50 % .when in doubt , check it out .this is a point that i already mentioned above but i repeat it here : in normal ( equilibrium ) monte carlo or molecular dynamics simulations , the origin of the periodic box can be chosen wherever we like and we can remap the periodic boxes at any time during our simulation .hence , the ` boundary ' of the periodic box is not a physical divide .again , as said before , this situation changes for some non - equilibrium situations , e.g. those that use lees - edwards sliding boundary conditions .there is another situation where the boundary of the simulation box has ( a bit ) more meaning , namely in the simulation of ionic systems . due to the long - range nature of the coulomb forces ,the properties of the system depend on the boundary conditions at infinity .this is not very mysterious : if we have a flat capacitor with a homogeneous polarised medium between the plates , then the system experiences a depolarisation field , no matter how far away the plates are .however , if the capacitor is short - circuited , there is no depolarisation field .something similar happens in a simulation of a system of charged particles : we have to specify the boundary conditions ( or , more precisely , the dielectric permittivity ) at infinity . 
if we choose a finite dielectric constant at the boundaries ( ` finite ' in this context means : anything except infinite )then the internal energy of the system that we simulate has a polarisation contribution that depends quadratically on the dipole moment of the simulation box .if we study a dipolar fluid , than the dipole moment of the simulation box does not depend on our choice of the origin of the box .however , if we study a fluid of charged particles , then the choice of the boundaries does matter .again , it is nothing mysterious : some crystal faces of ionic crystals are charged ( even though the crystal is globally neutral ) .the depolarisation field in this crystal will depend on where we choose the boundary ( at a positively or at a negatively charged layer ) , no matter how large the crystal .if the potential energy of a system is known as a function of the particles coordinates , then it is meaningful to speak about _ the _ energy ` landscape ' of the system : the potential energy of an particle system in dimensions will have minima , saddle points and maxima in the dimensional space spanned by the cartesian coordinates of the particles .in contrast , it is not meaningful to speak of _ the _ free energy landscape : there are many . in general ,free energies appear when we wish to describe the state of the system by a smaller number of coordinates than .of course , using a smaller number of coordinates to characterise the system will result in a loss of information .its advantage is that it may yield more physical insight in particular if we use physically meaningful observables as our coordinates .examples are : the total dipole moment of a system , the number of particles in a crystalline cluster or the radius of gyration of a polymer .typically , we would like to know the probability to find the system within a certain range of these new variables .this probability ( density ) is related to the original boltzmann distribution . in what follows, i will consider the case of a single coordinate ( often referred to as the ` order parameter ' ) .the generalisation to a set of different s is straightforward .note that may be a complicated , and often non - linear function of the original particle coordinates .the probability to find the system in a state between and is , where is given by } { z } \;,\ ] ] where , the configurational part of the partition function , is given by we now _ define _ the free energy ` landscape ' as which implies that hence , the plays the same role in the new coordinate system as in the original dimensional system . the free energy may exhibit maxima and minima and , in the case of higher dimensional free - energy landscapes , free - energy minima are separated by saddle points .these minima and saddle points play an important role when we are interested in processes that change the value of , e.g. a phase transition , or the collapse of a polymer .the natural assumption is that the rate of such a process will be proportional to the value of at the top of the free - energy barrier separating initial and final states .however , whilst , for reasonable choose of , this statement is often approximately correct , it is misleading because the height of the free energy barrier depends on the choice of the coordinates . to see this ,consider another coordinate that measures the progress of the same transformation from initial to final state . 
for simplicity ,i assume that i have a one - dimensional free - energy landscape and that is a function of only : .then it is easy to show that the derivative on the right - hand side is the jacobian associated with the coordinate transformation from to . if is a linear function of , then the jacobian is harmless : it simply adds a constant to the free energy and hence does not affect barrier heights .however , in the more general case where is a non - linear function of , the choice of the functional form affects the magnitude of the barrier height .let us consider an extreme ( and unphysical ) examples to illustrate this .we choose : then in other words : with this choice of coordinates , the barrier has disappeared altogether . in general , the choice of different order parameters will not affect the free energy barrier as drastically as this , but the fact remains : the height of the free energy barrier depends on the choice of the order parameter(s ) . of course , this dependence of the free - energy barrier on the choice of the order parameter is only an artefact of our description .it can not have physically observable consequences . and ,indeed , it does not .the observable quantity is the rate at which the initial state transforms into the final state ( folding rate , nucleation rate etc . ) and that rate is independent of the choice of .it is instructive to consider a simple example : the diffusive crossing of a one - dimensional barrier of height and with a flat top of width \ ; , \ ] ] where is the heaviside step function .we assume that the diffusion constant at the top of the barrier has a constant value .moreover , we assume that , initially , the density is on the reactant side , and zero on the product side .then the initial , steady - state flux across the barrier is given by : now we apply our transformation eq .( [ eq : jacobiantransformation ] ) . in the region of the barrier, we then get : this transformation will completely eliminate the barrier .however , it should not have an effect to the reactive flux .indeed , it does not , as we can easily verify : the width of the barrier in the new coordinates is and the diffusion constant becomes combining all terms , we get : which is identical to the original flux , as it should be .there is another problem with free energy barriers : the fact that a coordinate joins two minima in a free energy landscape , does not necessarily imply that a physical path exists . again , we consider a simple example : a two dimensional system of a particle in a rectangular box .the box is cut obliquely by an infinitely thin , but infinitely high , energy barrier ( see fig . [fig : nopath ] ) .we now choose the coordinate as our order parameter .the free energy is computed by integrating the boltzmann factor over at constant .the fact that the boltzmann factor vanishes on one point along a line of constant makes no difference to the integral .hence , is constant .the path between ( on the left ) and ( on the right ) appears barrier free . yet, clearly , there is no physical path from to . 1 .the construction of the coordinates is a matter of choice that should be guided by physical insight or intuition 2 .the height of free energy barriers depends on the choice of these coordinates 3 .the rate of ` barrier - crossing ' processes does _ not _ depend on the choice of the coordinates 4 .the fact that there is no free - energy barrier separating two states does not necessarily imply that there is an easy physical bath between these states . 
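to make points 1 and 2 of the list above concrete, the sketch below builds a free-energy profile from a histogram of samples of an order parameter and then histograms a non-linear function of the same samples; the apparent barrier changes substantially. the double-well potential and the metropolis chain are placeholders standing in for data harvested from a real simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0
U = lambda q: 4.0 * (q * q - 1.0) ** 2          # double well with a barrier of ~4 kT at q = 0

# crude metropolis sampling of q from exp(-beta*U), standing in for simulation output
q, samples = 1.0, []
for _ in range(200000):
    trial = q + rng.uniform(-0.5, 0.5)
    if rng.random() < np.exp(-beta * (U(trial) - U(q))):
        q = trial
    samples.append(q)
samples = np.array(samples[2000:])              # discard a short equilibration period

def free_energy_profile(x, bins=60):
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    keep = hist > 0
    F = -np.log(hist[keep]) / beta
    return centers[keep], F - F.min()

def apparent_barrier(centers, F):
    # free energy of the bin nearest the barrier top (q = 0), relative to the global minimum
    return F[np.argmin(np.abs(centers))]

cQ, FQ = free_energy_profile(samples)           # landscape in the original coordinate Q
cP, FP = free_energy_profile(samples ** 3)      # same data, non-linear coordinate Q' = Q^3
print("apparent barrier in Q  (kT):", apparent_barrier(cQ, FQ))
print("apparent barrier in Q' (kT):", apparent_barrier(cP, FP))  # much lower: it depends on the coordinate
```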
as the field of numerical simulation expands ,new techniques are invented and old techniques are rediscovered .but not all that is new is true .there are several examples of methods that had been discredited in the past that have been given a new identity and thereby a new lease on life .usually , these methods disappear after some time , only to re - emerge again later .the fight against myths and misconceptions never ends ( including the fight against fallacies that i have unknowingly propagated in this paper ) .i am grateful to wilson poon for suggesting the topic of this paper .what i wrote is necessarily very incomplete , but i enjoyed writing it .i thank patrick varilly for a critical reading of this manuscript and for his many helpful suggestions .i should stress that all remaining errors and misconceptions are mine alone .this work was supported by the european research council ( erc ) advanced grant 227758 , the wolfson merit award 2007/r3 of the royal society of london and the engineering and physical sciences research council ( epsrc ) programme grant ep / i001352/1 .99 , _ j. chem ._ , * 27 * ( 1957 ) 1208 . , _ j. chem_ , * 27 * ( 1957 ) 1207 . ,_ a guide to monte carlo simulations in statistical physics _ ,second edition ( cambridge university press , cambridge ) 2005 . , _ j. stat ._ , * 15 * ( 1976 ) 299 ., _ j. appl ._ , * 52 * ( 1981 ) 7182 . ,_ scripta metall ._ , * 17 * ( 1983 ) 1199 . , _ phys_ , * 103 * ( 2009 ) 188302 . ,_ , * 79 * ( 1993 ) 277 . , _ phys . rev .* 47 * ( 1993 ) 4611 .a _ , * 25 * ( 1982 ) 1699 .b _ * 81 * ( 2010 ) 125416 . ,_ , * 49 * ( 1983 ) 1121 . ,_ j. comp ._ , * 22 * ( 1976 ) 245 . ,_ j. chem .b _ , * 105 * ( 2001 ) 6722 . , _ a monte carlo method for chemical potential determination in single and multiple occupancy crystals _ , arxiv:1209.3228 ( 2012 ) . ,natl . acad .usa _ , * 109 * ( 2012 ) 17886 .* 109 * ( 2012 ) 17728 ._ , * 99 * ( 2007 ) 235702 . ,_ diabetes care _ , * 17 * ( 1994 ) 152 ., _ annals appl ._ , * 7 * ( 1997 ) 110 . ,_ j. phys .c _ , * 5 * ( 1972 ) 1921 . | this paper discusses the monte carlo and molecular dynamics methods . both methods are , in principle , simple . however , simple does not mean risk - free . in the literature , many of the pitfalls in the field are mentioned , but usually as a footnote and these footnotes are scattered over many papers . the present paper focuses on the ` dark side ' of simulation : it is one big footnote . i should stress that ` dark ' , in this context , has no negative moral implication . it just means : under - exposed . |
we address reversing and anti - reversing properties of interfaces in the following one - dimensional slow diffusion equation with strong absorption where is a positive function , _e.g. _ , a concentration of some species , and and denote space and time , respectively . restricting the exponents to the ranges and corresponds to the slow diffusion and strong absorption cases respectively. interfaces sometimes termed ` contact lines ' by fluid dynamicists correspond to the points on the -axis , where regions for positive solutions for are connected with the regions where is identically zero . the initial data is assumed to be compactly supported .the motion of the interfaces is determined from conditions that require the function be continuous and the flux of through the interface to be zero . in the presence of slow diffusion ( ) , the interfaces of compactly supported solutions have a finite propagation speed . in the presence of strong absorption ( ), the solution vanishes for all after some finite time , which is referred to as finite - time extinction .therefore , the interfaces for a compactly supported initial data coalesce in a finite time . depending on the shape of and the values of and , the interfaces may change their direction of propagation in a number of different ways .it was proved by chen __ that bell - shaped initial data remains bell - shaped for all time before the compact support shrinks to a point .however , the possible types of dynamics of interfaces for this bell - shaped data were not identified in .the slow diffusion equation with the strong absorption ( [ heat ] ) describes a variety of different physical processes , including : ( i ) the slow spreading of a slender viscous film over a horizontal plate subject to the action of gravity and a constant evaporation rate ( when and ) ; ( ii ) the dispersion of a biological population subject to a constant death - rate ( when and ) ; ( iii ) non - linear heat conduction along a rod with a constant rate of heat loss ( when and ) , and ; ( iv ) fluid flows in porous media with a drainage rate driven by gravity or background flows ( when and either or ) .let us denote the location of the left interface by and the limit , where is nonzero , by .if , it was proved in that the position of the interface , , is a lipschitz continuous function of time . in the case , the function is found from the boundary conditions and where a dot denotes differentiation with respect to time . in the case , the spatial derivatives at are not well defined . and the zero flux condition ( [ zero - flux ] ) must be rewritten as \displaystyle h^{n } \left ( \frac{\partial h}{\partial x } \right)^{-1 } \big{|}_{x=\ell(t)^+ } , \quad \mbox{\rm if } \;\ ; \dot{\ell } \geq 0 .\end{array } \right.\end{aligned}\ ] ] one could choose to close the slow diffusion equation ( [ heat ] ) in a variety of ways , _e.g. _ , by supplying analogous conditions at the right interface , or by supplying a dirichlet or neumann condition elsewhere . for instance, if is even in , then the solution remains even in for all times , and therefore , the slow diffusion equation ( [ heat ] ) can be closed on the compact interval ] , * or as , whereas . for convenience ,let us reverse the time variable , by transforming , and rewrite system ( [ infinity - dynamics ] ) with the lower sign in the negative direction of : by proposition [ proposition - center - manifold - infinity ] , we have and for the trajectory departing from the equilibrium point ( in negative ) along . 
from the second equation of system ( [ infinity - dynamics - negative - time ] ), remains an increasing function of negative as long as remains positive .therefore , we have an alternative : either vanishes before diverges or diverges before vanishes .the first choice of the alternative gives case ( i ) .for the second choice , let us consider variables and given by ( [ scaling - explicit ] ) and parameterized by in the limit ( the map is one - to - one and onto ) . by dividing the first and third equations in system ( [ infinity - dynamics - negative - time ] ) by the second equation , we obtain by using variables and given by ( [ scaling - explicit ] ) , we obtain to show that the second choice of the alternative above gives case ( ii ) , we will prove that remains finite as .this is done by a contradiction .assume that as .therefore , there exists such that for all sufficiently large .then , from the first equation of system ( [ closed - infinity ] ) , we have for sufficiently large , because is integrable at infinity , there is a finite positive such that for all sufficiently large .then , from the second equation of system ( [ closed - infinity ] ) , we obtain for sufficiently large , since both and are integrable at infinity , there is a finite positive such that for all sufficiently large , contradicting the assumption that as .therefore , the case ( ii ) is proved .the trajectory of system ( [ infinity - dynamics ] ) departing from the equilibrium point with in the negative direction of intersects either the half - plane and in system ( [ zero - dynamics ] ) in case ( i ) of lemma [ lemma - continuation ] or the half - plane , in system ( [ zero - dynamics ] ) in case ( ii ) .moreover , in the corresponding cases , * if , then * if , then .consequently , one can define two piecewise maps [ corollary - continuation ] in case ( i ) , it is trivial to see that and $ ] corresponds to the half - plane and .the first equation of system ( [ infinity - dynamics - negative - time ] ) can be written for the variable as follows : where the prime still denotes the derivative with respect to the time variable in the negative direction of .if remains finite as ( so that ) , then is bounded . in case ( ii ) , it is also trivial to see that and corresponds to the half - plane and .if remains nonzero in the limit , then , there exists such that for all sufficiently large .then , the same analysis as in lemma [ lemma - continuation ] applies to the first equation of system ( [ closed - infinity ] ) and shows that remains finite as . __ unfortunately , we do not control the value of at the intersection of the two piecewise maps ( [ maps - negative ] ) .however , we will show numerically in [ num - scheme ] that the piecewise maps ( [ maps - negative ] ) are typically connected at the points where and . in this case , a true solution of the differential equation ( [ ode ] ) with the lower sign satisfying properties ( [ ( i)])([(iii ) ] ) exist .let us describe a new numerical approach , based on the results of lemmas [ lemma - connection ] and [ lemma - continuation ] , that will be used to furnish meaningful solutions to the differential equations ( [ ode ] ) , for and . in [ t - neg - shooting ] and [ t - pos - solutions ] we describe the numerical procedures for finding solutions for and respectively . finally , in [ num - sum ] , we summarize the results of the numerical experiments and compare them with the results found in foster _ et ._ . 
as discussed in [ h - minus - system ], we wish to numerically construct a unique trajectory from infinite to finite values of .to do so , we integrate the system ( [ infinity - dynamics ] ) from near the equilibrium point in the far - field towards the equilibrium point of the system ( [ zero - dynamics ] ) in the near - field .the numerical procedure is carried out as follows : select a value of .ideally , one would like to begin by using this choice of to specify unique initial values for , and then numerically integrating the system ( [ infinity - dynamics ] ) backward in the ` time ' variable .equivalently , one could integrate the system ( [ infinity - dynamics - negative - time ] ) forwards in time .however , since is an equilibrium point , it is not possible to escape in a finite time .thus , in order to ensure that any numerical integration scheme can depart from near the equilibrium point along the center manifold , , it is necessary to take a ` small step ' , say , away from using the relevant asymptotic behaviour . using ( [ center - manifold - infinity ] ) and ( [ center - manifold - infinity - dynamics ] ) we find that a trajectory on the center manifold has the local behaviour for small positive values of . having selected values for both and , the behaviours ( [ numeric - begin - t - negative ] )may be used to specify unique ( pseudo-)initial values for and to begin the numerical integration of the system ( [ infinity - dynamics ] ) in the direction of decreasing . in this study , numerical integration of the system ( [ infinity - dynamics ] )was carried out using the ode45 routine in matlab with the default settings , except abstol and reltol which were both set to have a value of . selecting an appropriate value of a somewhat ad - hoc procedure : there is trade - off between taking too small , which renders it difficult for the numerical integration to escape the neighbourhood of the equilibrium point ( leading to poor accuracy of the integration ) , and taking too large which could result in low accuracy of the asymptotic expansion ( [ numeric - begin - t - negative ] ) .however , we found that choosing gave good results over the ranges of parameters we studied .robustness of the results with respect to changes in : ( i ) the choice of , and ; ( ii ) the number of terms in the asymptotic expansion ( [ numeric - begin - t - negative ] ) were verified .the result of corollary [ corollary - continuation ] asserts that all such trajectories will ultimately in either a finite or infinite time intersect with either the plane or , see the maps ( [ maps - negative ] ) . to ensure accurate numerical integration of the system in the near - field , in variables , it is necessary to ` switch ' from integrating the far - field system ( [ infinity - dynamics ] ) to the near - field system ( [ zero - dynamics ] ) backward in time .the choice of conditions under which this switch should occur is , again , somewhat arbitrary .however , as long as the values of and hence the values of are all finite and non - zero , this switching is valid at any point . 
in this study, we chose to switch from integrating ( [ infinity - dynamics ] ) to ( [ zero - dynamics ] ) when ( or equivalently when ) .however , we verified that our results were robust to changes in the choice of switching conditions .this switching procedure can be readily automated within matlab using the events function to autonomously : ( i ) stop the integration of the system ( [ infinity - dynamics ] ) when specified the conditions are satisfied ; ( ii ) read - off the final values of ; ( iii ) transform these to corresponding values for using the change of variables ( [ scaling - explicit ] ) , and ; ( iv ) begin the integration of ( [ zero - dynamics ] ) backwards in time from the appropriate initial data .the integration of the system ( [ zero - dynamics ] ) is then continued backward in the time variable until either or .again , we used the events function to autonomously detect when either of these events occurred and to stop the integration .it is noteworthy that we found it helpful to use ode15s to integrate the near - field system again , the default settings were used with the exception of abstol and reltol which we both set to .although ode15s is typically slower than ode45 , it is considerably more appropriate to deal with integrating systems of equations that exhibit apparent stiffness .this apparent stiffness , manifested as rapid changes in the direction of the trajectory in variables , arises as an artifact of the infinite time required to reach the equilibrium point .thus , if a trajectory approaches very close to the equilibrium point it appears to be rapidly rejected from that neighbourhood .when the integration is terminated , we record the following pieces of data : ( i ) the selected value of ; ( ii ) whether the trajectory reached or , and ; ( iii ) the value of either or at the termination point .these data define a point on one of the two piecewise maps defined in ( [ maps - negative ] ) .it is by computing a large number of trajectories , each emanating from different values of , that we are able to trace out the forms of these maps .the non - trivial solutions for that we seek correspond to trajectories emanating from particular equilibrium points in the far - field , say , that reach the near - field equilibrium point , for some finite .in addition to these non - trivial solutions , we recall that for all values of there exists a trivial solution given by ( [ exact - solution - zero ] ) and ( [ exact - solution - infinity ] ) that emanates from and reaches , as discussed in proposition [ lemma - exact ] .some representative results for and are shown in figure [ individual - maps ] . in these plots , a suitable non - trivial solution for is found by identifying a value of for which the value of or at the termination point . in figure[ example - shooting ] , we show some representative computations highlighting the differences between the near - field behaviour of trajectories local to a true solution with and . as is evidenced by figure [ individual - maps ] ,close to a trajectory with the piecewise continuous maps defined in ( [ maps - negative ] ) are smooth , whereas close to a trajectory with the maps exhibit rapid changes and ( vertical ) cusp - like features . 
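for readers who prefer python, the skeleton below reproduces the control flow of the scheme just described with scipy's solve_ivp: start a small step away from the far-field equilibrium point, integrate with a terminal event playing the role of the switching condition, change variables, and continue with a stiff-capable solver (LSODA here stands in for ode15s) until one of the two termination events fires. the right-hand sides, the change of variables and the thresholds are simple invented placeholders for systems ([infinity-dynamics]) and ([zero-dynamics]) and the choices quoted above; only the structure of the computation is the point.

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder right-hand sides standing in for systems ([infinity-dynamics]) and ([zero-dynamics])
def far_field_rhs(s, u):
    F, G, H = u
    return [-0.02 * F, -0.5 * G, F - H]

def near_field_rhs(s, v):
    return [-0.3, -0.1 * v[1], 0.0]

def change_of_variables(u):
    # placeholder for the explicit map ([scaling-explicit]) between the two sets of variables
    return np.array([u[1] + 1.0, u[2], u[0]])

# terminal event playing the role of the switching condition (here: first component reaches 1)
switch = lambda s, u: u[0] - 1.0
switch.terminal, switch.direction = True, -1

# terminal events ending the near-field integration (either of the first two components reaching 0)
hit_first = lambda s, v: v[0]
hit_first.terminal, hit_first.direction = True, -1
hit_second = lambda s, v: v[1]
hit_second.terminal, hit_second.direction = True, -1

delta = 1e-3                                     # small step away from the far-field equilibrium
u0 = [5.0 - delta, delta, delta]
far = solve_ivp(far_field_rhs, (0.0, 1e3), u0, events=switch, rtol=1e-10, atol=1e-10)

v0 = change_of_variables(far.y[:, -1])           # state at the switch, in the near-field variables
near = solve_ivp(near_field_rhs, (0.0, 1e3), v0, events=(hit_first, hit_second),
                 method="LSODA", rtol=1e-10, atol=1e-10)

which = "first" if near.t_events[0].size else "second"
print(f"near-field integration terminated when the {which} variable reached zero:", near.y[:, -1])
```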
as a result, determining negative value(s) of is more challenging: despite resolving to machine precision (approximately in the standard ieee double precision), it is not possible to approximate the value of sufficiently well that an accurate estimate of can be determined. in these cases, we therefore found it necessary to implement one additional stage in the numerical scheme as follows. once had been determined up to machine precision, and two ` limiting ' trajectories had been identified (one emanating from and terminating at , and the other emanating from and terminating at , where ), the expected near-field linear asymptotic behaviour of , according to the expansion ([center-manifold-zero-asymptotics]) for the solution in theorem [theorem-center-manifold-zero], is clearly visible. this linear behaviour can then be extrapolated, in the direction of decreasing , until it intersects the -axis. this intersection point is, to a good approximation, the value of corresponding to the trajectory emanating from . panel (a) of figure [example-shooting] gives an example of the linear extrapolation procedure described above. using this procedure, for all values of , we recover the trivial solution discussed in proposition [lemma-exact]. in addition, we see that in the case there is only one suitable non-trivial solution, with the following data: for and , similar results are observed, with one trivial and only one non-trivial solution as follows: and contrastingly, in the case we find that three suitable non-trivial solutions exist, with the data x_0^* \approx 0.137 , a_- \approx -0.932 and x_0^* \approx 0.0592 , a_- \approx -0.546 . notably, all values of for are negative, whereas for and they are positive. in addition to the detailed results for and shown in figure [individual-maps], we also show in figure [all-sols-negative-t] the values of determined for all values of from to . for the primary red and blue branches, emanating from along the black branch, we show the corresponding values of in figure [negative-t-a-minus]. intriguingly, the numerical results indicate that, in addition to the exact solution which is valid for all , there are a whole host of additional solutions, some with , and others with . in particular, there is at least one additional trajectory corresponding to a suitable solution for with for all values of . further, for all there exists at least one additional solution for , although in this case for a value of . another noteworthy feature of the plots shown in panels (a)-(d) of figure [all-sols-negative-t] is that at each value of for , additional branches of solutions depart from the branch along which and . the underlying reason for this structure is as yet not understood.
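the extrapolation stage described above amounts to a straight-line fit through the last few computed points of a limiting trajectory, followed by locating the root of that line. a minimal sketch, with invented sample points standing in for the tail of a trajectory in the near-field variables:

```python
import numpy as np

# hypothetical tail of a limiting trajectory in the near-field variables; the last few points
# are expected to follow the linear asymptotics of expansion ([center-manifold-zero-asymptotics])
x_tail = np.array([0.40, 0.35, 0.30, 0.25, 0.20])
y_tail = np.array([0.262, 0.213, 0.165, 0.116, 0.068])

slope, intercept = np.polyfit(x_tail, y_tail, 1)
x_star = -intercept / slope          # where the extrapolated straight line meets the x-axis
print(f"estimated intercept ~ {x_star:.4f}")
```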
[figure caption: maps defined in ([maps-negative]); in all cases the blue, red and black curves show the value of at , the value of at and the value of at the termination point respectively; the dashed vertical line indicates the value of corresponding to the exact solution ([exact-solution]); the crosses on panel (a) mark the data points extracted using the extrapolation procedure discussed in the text.]
[figure caption: representative trajectories; the red and blue trajectories begin at for and ; the black line shows the artificially extrapolated linear behaviour; panel (b) shows representative trajectories emanating from for ; notably, in the latter case, despite only resolving to 3 significant digits, a good estimate of has already been obtained.]
[figure caption: versus ; the red, blue and black curves indicate values of that define trajectories terminating at the near-field equilibrium point; panels (b)-(d) show zoomed-in regions from panel (a).]
[figure caption: values along the red and blue curves emanating from the black curve; panel (b) shows the same plot zoomed in on positive values.]
[figure caption: panel (a): the map for various different values of ; panel (b): plots of the trajectories emanating from ; the constant to which these trajectories tend in the far-field is selected to be the corresponding value.]
having successfully found suitable solutions for , we now proceed to compute suitable solutions for .
as discussed in [ h - plus - connect ] , we can numerically construct a unique trajectory from small to infinite values of .to do so , we integrate the system ( [ zero - dynamics ] ) from the equilibrium point in the near - field towards the equilibrium point of the system ( [ infinity - dynamics ] ) in the far - field .the numerical procedure is carried out as follows : select a value of . since is an equilibrium point , it is not possible to escape in finite time .we therefore begin integration of the system ( [ zero - dynamics ] ) by taking a small step , say , away from using the relevant asymptotic behaviour . using ( [ center - manifold - zero ] ) and ( [ center - manifold - zero - dynamics ] ) for , we find that a trajectory exiting the equilibrium point along the center manifold , , has the local asymptotic behaviour for a small positive value of .by contrast , using ( [ stable - manifold - zero ] ) and ( [ stable - manifold - zero - dynamics ] ) for , we find that a trajectory along the unstable manifold has the local asymptotic behaviour for a small positive value of .having selected values for both and , either ( [ numeric - begin - t - positive - positive ] ) or ( [ numeric - begin - t - positive - negative ] ) define unique ( pseudo-)initial conditions to begin integrating the system ( [ zero - dynamics ] ) in the direction of increasing time towards the far - field .we proved in corollary [ coral-1 ] , that the ultimate fate of all such trajectories , in variables , is approaching the equilibrium state for some .thus , by continuing integration of the system ( [ zero - dynamics ] ) to some large value of , denoted by say , and reading off the value at , we can obtain an arbitrarily accurate approximation of the corresponding value of that is obtained in the far - field a higher degree of accuracy can be achieved by simply increasing the value of . for this purpose we found ode45 with the majority of the default setting to be sufficiently robust . to ensure high numerical accuracy , at the cost of a relatively small increase in computation time ,both abstol and reltol were decreased to .in contrast to the case for solutions , we found it unnecessary to ` switch ' from integrating the near - field system ( [ zero - dynamics ] ) to the far - field system ( [ infinity - dynamics ] ) .typically , we found that taking gave an approximation of correct to 8 significant digits . carrying out this procedure for a variety of choices of we are able to trace out the form of the piecewise map between and defined earlier in ( [ two - maps ] ) . 
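The integration step just described translates almost directly into code. The sketch below is a structural illustration only, written in Python with scipy's solve_ivp standing in for MATLAB's ode45: the right-hand side of the near-field system ([zero-dynamics]) and the asymptotic pseudo-initial condition are not reproduced in this excerpt, so they are left as user-supplied callables (rhs_factory, start_factory), and the default tolerances and final pseudo-time are arbitrary placeholders rather than the values used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def far_field_value(rhs, y0, t_end=100.0, component=0, rtol=1e-10, atol=1e-10):
    """Integrate an autonomous system dy/dt = rhs(t, y) from a pseudo-initial
    condition y0 (a small step away from the near-field equilibrium, built from
    the local asymptotic expansion) up to a large pseudo-time t_end, and read
    off the value approached by the selected component in the far field."""
    sol = solve_ivp(rhs, (0.0, t_end), y0, method="RK45", rtol=rtol, atol=atol)
    return sol.y[component, -1]

def trace_map(rhs_factory, start_factory, values, **kwargs):
    """Sweep the shooting parameter and record the far-field constant obtained
    for each value, tracing out the scalar map discussed in the text."""
    return np.array([far_field_value(rhs_factory(c), start_factory(c), **kwargs)
                     for c in values])

# rhs_factory(c) should return the right-hand side of the near-field system
# ([zero-dynamics]) for shooting parameter c, and start_factory(c) the
# pseudo-initial condition a small step away from the equilibrium point; neither
# is reproduced in this excerpt.  Doubling t_end and checking that the returned
# value is unchanged gives a simple accuracy check of the kind described above.
```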
in figure[ t - positive - map ] , we show this map for and ( see panel ( a ) ) , as well as some representative trajectories of the system ( [ zero - dynamics ] ) for ( see panel ( b ) ) .in addition to the results shown , other computations for different values of were also carried out and it appears generic that is a monotonically increasing function of .crucially , it appears that range of the map ( [ two - maps ] ) is the entire semi - axis for .we have demonstrated that : ( i ) for each value of there exists _ at least _ one value of ( different from the trivial case ) that defines a trajectory emanating from and terminating at a , and thus a suitable solution for , and ; ( ii ) for every value of there exists a unique corresponding value of , thereby defining an infinite family of suitable solutions for .the one remaining step is therefore to invoke the matching condition ( [ far - field - matching ] ) .this condition is equivalent to requiring that the far - field behaviour of is characterized by .thus , given a solution for , the matching condition ( [ far - field - matching ] ) specifies a unique choice of , a unique , and thus a unique solution for , thereby closing the problem .the solutions found here for and show a qualitative , although not quantitative , agreement with those reported in . here, we found that and in foster _ et . , they claimed that for : , , and , whereas for : , , and .additionally , they claim another suitable solution for : , , and . in contrast , here we found that no such solution with exists .notably this value of reported in for is very close to the value of . for , using our numerical approach we have been able to identify three other solutions with , namely : a_- \approx -0.932 , \quad a_+ \approx -30.625 , \quad x_0^* \approx 0.137 , \\*[1 mm ] a_- \approx -0.546 , \quad a_+ \approx -166.623 , \quad x_0^ * \approx 0.0592 . \end{array } \right.\ ] ] we believe that the origin of these discrepancies is due to the low accuracy of the numerical scheme used in .indeed , in , the solutions for were computed by identifying the value of which characterizes solutions in the near - field that extend into far - field with the requisite behaviour , as in panel ( a ) of figure [ return - to - zero ] .solutions for were computed by finding the value of inferred ( via shooting from the far - field toward the near - field ) by invoking the matching condition in the far - field as in panel ( b ) on figure [ t - positive - map ] .both numerical methods used in are ill - posed . here, we pose the numerical problem as a shooting scheme for uniquely defined piecewise scalar functions , _i.e. 
_ the maps defined in ( [ two - maps ] ) and ( [ maps - negative ] ) .we therefore believe that the results obtained here are more reliable than those in .at 10 equally spaced values of between and , and for to demonstrate the continuity of across .solid and dashed curves show the solution for and respectively .panel ( a ) shows the anti - reversing dynamics for , , and .panel ( b ) shows the reversing dynamics for , , and .,title="fig:",scaledwidth=49.0% ] at 10 equally spaced values of between and , and for to demonstrate the continuity of across .solid and dashed curves show the solution for and respectively .panel ( a ) shows the anti - reversing dynamics for , , and .panel ( b ) shows the reversing dynamics for , , and .,title="fig:",scaledwidth=49.0% ] this work has focused on constructing local ( in both space and time ) self - similar reversing and anti - reversing solutions to the nonlinear diffusion equation ( [ heat ] ) with and .we have demonstrated how the dynamical theory combined with the numerical scheme can be used to furnish suitable solutions to the differential equations ( [ ode ] ) for and . via the self - similar reductions( [ reduction ] ) , the solutions to these differential equations can be transformed into physically meaningful solutions for to the nonlinear diffusion equation ( [ heat ] ) with and , which is completed with the no flux boundary condition ( [ zero - flux2 ] ) and the condition the . in this final sectionwe shall discuss the connection between the self - similar solutions found here and the dynamics of the model ( [ heat ] ) .it is well - known , and can be readily verified , that the nonlinear diffusion equation with absorption ( [ heat ] ) admits two different travelling wave solutions .seeking such solutions near a left interface that have the form , for some function and constant , we find that \label{lin - nose } h \sim ( \dot{\ell})^{-1 } ( x-\ell(t ) ) \quad \mbox{as } \quad x \searrow \ell(t ) , \quad \mbox{for } \quad \dot{\ell } > 0.\end{aligned}\ ] ] the former , is an advancing wave local to a left interface whose motion is driven by diffusion , whereas the latter is a receding wave driven by absorption .this study has therefore elucidated the process by which the wave ( [ non - lin - nose ] ) _ becomes _ ( [ lin - nose ] ) , giving rise to a reversing interface , or vice versa , giving rise to an anti - reversing interface . in [ t - neg - shooting ] we used the result of lemma [ lemma - continuation ] to numerically construct suitable solutions for . for each value of we identified _ at least _one suitable solution , defined by a pair of values of and , in addition to the exact solution ( [ exact - solution ] ) in the original time and space variables , this exact solution corresponds to a steady solution for and thus does not constitute a reversing nor an anti - reversing solution . 
in [ t - pos - solutions ] , we used lemma [ lemma - connection ] to formulate a numerical scheme for constructing solutions for defined by pairs of values of and .we showed that the map ( [ t - positive - map ] ) is one - to - one and its range is the entire semi - axis for .importantly , for each value of , we found that : ( i ) if then , and ( ii ) if then where .the final stage in constructing solutions for is to invoke the matching condition ( [ far - field - matching ] ) that ensures continuity of across .owing to the aforementioned properties of ( [ t - positive - map ] ) we are forced to reject any solution for that is defined by a trajectory with and or and on the basis that it necessarily can not match to solution for . in summarywe have found that for up to 5 different solutions are available with .for there is at least one solution with . for an additional branch of solutions with , whereas for , there is another branch of solutions with .for it seems quite possible that yet more branches of solutions will emerge .there is a distinct difference between the interpretation of solutions with in terms of the original model ( [ heat ] ) compared to those with .the former , correspond to a reversing solution where the left interface advances for , with the behaviour ( [ non - lin - nose ] ) , and then subsequently recedes for with the behaviour ( [ lin - nose ] ) .contrastingly , the latter corresponds to an anti - reversing solution where the interface recedes for , with the form ( [ lin - nose ] ) , and then advances according to the form ( [ non - lin - nose ] ) for .one representative local solution for for both types of behaviour one reversing and one anti - reversing are shown in figure [ generic - figure - name ] .some natural open questions raised by this study are : ( i ) whether any self - similar solutions with non - monotone profiles ( in ) exist _i.e. _ solutions that do not satisfy ( [ ( ii ) ] ) ; ( ii ) whether the self - similar solutions identified here are stable in the context of the model ( [ heat ] ) , and ; ( iii ) if more than one reversing or anti - reversing solution is stable for a particular value of , what is the mechanism for selecting the appropriate self - similar solution at a particular reversing or anti - reversing event. * acknowledgements . *thanks m. chugunova and r. taranets , while j.f .thanks j. r. king and a. d. fitt for useful discussions regarding this project .is supported by a postdoctoral fellowship at mcmaster university .he thanks b. protas for hospitality and many useful discussions .a part of this work was completed during the visit of d.p . to claremont graduate university .the work of d.p . is supported by the ministry of education and science of russian federation ( the base part of the state task no .2014/133 , project no . | we consider the slow nonlinear diffusion equation subject to a constant absorption rate and construct local self - similar solutions for reversing ( and anti - reversing ) interfaces , where an initially advancing ( receding ) interface gives way to a receding ( advancing ) one . we use an approach based on invariant manifolds , which allows us to determine the required asymptotic behaviour for small and large values of the concentration . we then ` connect ' the requisite asymptotic behaviours using a robust and accurate numerical scheme . by doing so , we are able to furnish a rich set of self - similar solutions for both reversing and anti - reversing interfaces . 
nonlinear diffusion equation , slow diffusion , strong absorption , self - similar solutions , invariant manifolds , reversing interface , anti - reversing interface |
diffuse reflectance spectroscopy in the visible and near - infrared range ( vis - nir drs ) has proved to be useful to assess various soil properties .it can be employed to provide more data rapidly and inexpensively compared to classical laboratory analysis .therefore , drs is increasingly used for vast soil surveys in agriculture and environmental research . recently, several studies have shown the applicability of vis - nir drs _ in situ _ as a proximal soil sensing technique . to predict soil properties from soil spectra ,a model is calibrated , often using partial least squares ( pls ) regression . however , when calibration is based on air - dried spectra collected under laboratory conditions , predictions of soil properties from field spectra tend to be less accurate .usually , this decrease in accuracy is attributed to varying moisture between air - dried calibration samples and field spectra recorded with a variable moisture content .different remediation techniques have been proposed , ranging from advanced preprocessing of the spectra to `` spiking '' the calibration set with field spectra . in our study, we adopt a slightly different view on the calibration problem .it does not only apply to the varying moisture conditions between the calibration data set and the field spectra .indeed , it is also valid if we want to predict soil properties in a range where calibration samples are rare .mining with rarity or learning from imbalanced data is an ongoing research topic in machine learning . in imbalanced data setsfrequent samples outnumber the rare once .therefore , a model will be better at predicting the former and might fail for the latter .two different approaches exist to take care of the data imbalance : we can either adjust the model or `` balance '' the data .the latter approach has the advantage that we can use the usual modelling framework .synthetic minority oversampling technique ( smote ) is one way to balance the data .it was first proposed for classification and recently for regression .smote oversamples the rare data by generating synthetic points and thus helps to equalize the distribution . in this study, we propose a strategy to increase the prediction accuracy of soil properties from field spectra when they are rare in calibration .the goal of this study is to build a calibration model to predict soil organic carbon content ( socc ) from field spectra by air - dried samples spiked with synthetic field spectra .the studied soil was sampled at the southern slopes of mt .kilimanjaro , tanzania ( 3 4 33 s , 37 21 12 e ) in coffee plantations . due to favourable soil and climate in this region ,extensive coffee plantations constitute a frequent form of land use .we took 31 samples for calibration at 4 different study sites . for validation, we scanned 12 field spectra at a wall of a soil pit and sampled soil material for chemical analysis at the scanned spots .we call these validation field spectra f. after collection , the calibration samples were dried in an oven at 45 and sieved 2 mm .subsequently , they were scanned with an agrispec portable spectrophotometer equipped with a contact probe ( analytical spectral devices , boulder , colorado ) in the range 3502500 nm with 1 nm intervals .the same spectrometer was used in the field .the instrument was calibrated with a spectralon white tile before scanning the soil samples . 
for the measurement, a thoroughly mixed aliquot of the sample was placed in a small cup and the surface was smoothed with a spatula .each sample was scanned 30 times and the signal averaged to reduce the noise . in the following , we call this calibration data set l. socc was measured in a cns - analyser by high temperature combustion with conductivity detectors .to generate new data to spike the calibration data set l , we used smote and its extension for regression .this algorithm consists of generating new synthetic data using existing data and is summarized below . in our case , we generated new spectra and the related socc using the field spectra f. the new spectra are created by calculating the difference between a field spectrum and one of its nearest neighbours and adding this difference ( weighted by a random number between 0 and 1 ) to the field spectrum .the socc of the synthetic spectrum is then a weighted average between the socc of the field spectrum and the used nearest neighbour .smote has two parameters , namely , the number of points to generate for each existing point ( given in percent of the whole data set ) and , the number of nearest neighbours . to study the influence of these parameters we generated six different synthetic data sets s1 through s6 , varying and . ] : target value of original sample ] : target values of synthetic sample : number of synthetic samples to compute for each original sample generate synthetic samples : compute nearest neighbours for ] = { \mathit{orig.s}}[i ] + \mathrm{random}(0,1 ) \times { \mathit{diff}} ] we corrected each spectrum ( calibration , validation and synthetic ) for the offset at 1000 and 1830 nm and kept only parts with a high signal - to - noise ratio ( 4502400 nm ) .then , we transformed the spectra to absorbance and smoothed them using the singular spectrum analysis ( ssa ) .ssa is a non - parametric technique to decompose a signal into additive components that can be identified as the signal itself or as noise . finally , we divided each spectrum by its maximum and calculated the first derivative . in order to assess similarities between the calibration , validation and synthetic data sets , we calculated the principal component analysis ( pca ) of the ( uncorrected original ) spectra l and f and projected the synthetic data into the space spanned by the principal components .we calibrated seven different pls models .for model i we used the data set l , the spectra scanned under laboratory conditions .model ii through vii were calibrated on l spiked with synthetic spectra s1 through s6 . to find the best model i through vii, we varied the number of pls components between 1 and 15 .based on the predictions in the leave - one - out cross - validation ( loocv ) we calculated the corrected akaike information criterion aic , where is the number of calibration samples , the number of pls components and the root mean - squared error .the latter is defined as , where are the predicted and the measured soccs .we selected the model with the smallest aic as the most plausible .to assess the model quality , we used the , the mean error and the coefficient of determination , where is the mean socc .smote has two random components because it selects spectra randomly ( with replacement ) among the nearest neighbours and weights the difference between spectra by a random number ( between 0 and 1 ) . 
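A compact implementation of the SMOTE-for-regression step summarized above might look as follows. This is an illustrative sketch rather than the code used in the study: the function name, the use of Euclidean distance in spectral space for the nearest-neighbour search, and the random-number handling are our own choices; only the interpolation of spectra and the weighted averaging of the target values follow the description given in the text.

```python
import numpy as np

def smote_regression(spectra, targets, n_synthetic_per_sample, k=5, rng=None):
    """Generate synthetic (spectrum, target) pairs in the spirit of SMOTE for
    regression: each synthetic spectrum lies on the segment between an original
    spectrum and one of its k nearest neighbours, and its target value is the
    correspondingly weighted average of the two original target values."""
    rng = np.random.default_rng(rng)
    spectra = np.asarray(spectra, dtype=float)   # shape (n_samples, n_bands)
    targets = np.asarray(targets, dtype=float)   # shape (n_samples,)
    n = len(spectra)

    # Pairwise distances in spectral space (assumption: Euclidean metric).
    dist = np.linalg.norm(spectra[:, None, :] - spectra[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    neighbours = np.argsort(dist, axis=1)[:, :k]   # k nearest neighbours of each sample

    synth_spectra, synth_targets = [], []
    for i in range(n):
        for _ in range(n_synthetic_per_sample):
            j = rng.choice(neighbours[i])          # neighbour chosen at random (with replacement)
            gap = rng.random()                     # random weight in (0, 1)
            synth_spectra.append(spectra[i] + gap * (spectra[j] - spectra[i]))
            # Target of the synthetic sample: weighted average of the two seeds.
            synth_targets.append((1.0 - gap) * targets[i] + gap * targets[j])
    return np.array(synth_spectra), np.array(synth_targets)

# Usage sketch: spike the laboratory calibration set L with synthetic field-like samples.
# X_synth, y_synth = smote_regression(X_field, y_field, n_synthetic_per_sample=2, k=5)
# X_calib = np.vstack([X_lab, X_synth]); y_calib = np.concatenate([y_lab, y_synth])
```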
to study the influence of these random steps, we generated 100 different datasets s1 through s6 .each data set was then used to spike the calibration data set l , to build a new pls model and to predict the data set f.the first principal components ( pcs ) explain 85.4% and 11.2% of variance , respectively .we can clearly identify two distinct groups of samples : the calibration data set l and the field spectra f ( fig .[ fig : pca ] ) .in other words , the data sets l and f differ .the synthetic points that were projected into the space spanned by the pcs resemble the field spectra as expected .the distinct characteristics of the data sets l and f accord well with the difficulties to predict the data set f by using the laboratory spectra l only ( table [ tab : results : calib ] and table [ tab : results : valid ] ) . although the loocv of model i yields a moderate and a large , the validation on the data set f fails . spiking the calibration dataset l with synthetic spectra increases the prediction accuracy of the socc in the data set f. actually , the decreases and increases with increasing number of synthetic points both for the loocv and the validation ( table [ tab : results : calib ] and table [ tab : results : valid ] ) .however , the number of model parameters also increases from 2 to 7 .the monte carlo results show only a small variability in the interquartile range . however , some synthetic data sets in model v produced values smaller than .53 , the value we obtain in model i on air - dried samples only .this might be due to the combination of neighbours during smoting .in general , models with 5 neighbours were more accurate than those with 3 neighbours .however , the number of neighbours had a smaller influence on the prediction accuracy than the number of synthetic points .it is difficult to decide _ a priory _ how many synthetic points should be included in the calibration .indeed , in a classification problem the goal is to approximate an equal distribution of different classes such that the rare class becomes an ordinary one . in regression , however , we do not know which features of the data make them rare . for our data , the range of socc in the data set l is larger than in the data set f. therefore , we conclude that concentration is not responsible for the difference between these data sets . based on the monte carlo results we chose one synthetic data set from model vi , namely the one with the median number of model parameters and the best in the validation .thus , the calibration data set includes 31 air - dried and 24 synthetic spectra . compared to model i, spiking the air - dried data set l with these synthetic spectra clearly improves the prediction of the data set f ( fig .[ fig : models ] ) . [ cols="<,<,>,>,>,>,>,>",options="header " , ]we propose a framework to predict soil properties from _ in situ _ acquired field spectra by spiking air - dried laboratory calibration data by synthetic ones generated from these field spectra . in general , the prediction accuracy increases when a sufficient number of synthetic points is included in the calibration . however , because it is difficult to determine this number _ a priori _ , we recommend to generate several synthetic data sets to find an appropriate model .this study is part of the project dfg for 1246 `` kilimanjaro ecosystems under global change : linking biodiversity , biotic interactions and biogeochemical ecosystem processes '' and was supported by the deutsche forschungsgemeinschaft .b. stenberg and r. 
a. viscarra rossel , `` diffuse reflectance spectroscopy for high - resolution soil sensing , '' in _ proximal soil sensing _ , r. a. viscarra rossel , a. b. mcbratney , and b. minasny , eds . , pp .springer , 2010 .k. d. shepherd and m. g. walsh , `` infrared spectroscopy - enabling an evidence - based diagnostic surveillance approach to agricultural and environmental management in developing countries , '' , vol .1 , pp . 119 , 2007 .vgen , k. d. shepherd , and m. g. walsh , `` sensing landscape level change in soil fertility following deforestation and conversion in the highlands of madagascar using vis - nir spectroscopy , '' , vol .3 , pp . 281294 , 2006 .r. a. viscarra rossel , s. r. cattle , a. ortega , and y. fouad , `` in situ measurements of soil colour , mineral composition and clay content by vis nir spectroscopy , '' , vol .3 , pp . 253266 , 2009 .t. h. waiser , c. l. s. morgan , d. j. brown , and c. t. hallmark , `` in situ characterization of soil clay content with visible near - infrared diffuse reflectance spectroscopy , '' , vol .2 , pp . 389396 , 2007 .b. minasny , a. b. mcbratney , v. bellon - maurel , j .-roger , a. gobrecht , l. ferrand , and s. joalland , `` removing the effect of soil moisture from nir diffuse reflectance spectra for the prediction of soil organic carbon , '' , vol .118124 , 2011 . | diffuse reflectance spectroscopy is a powerful technique to predict soil properties . it can be used _ in situ _ to provide data inexpensively and rapidly compared to the standard laboratory measurements . because most spectral data bases contain air - dried samples scanned in the laboratory , field spectra acquired _ in situ _ are either absent or rare in calibration data sets . however , when models are calibrated on air - dried spectra , prediction using field spectra are often inaccurate . we propose a framework to calibrate partial least squares models when field spectra are rare using synthetic minority oversampling technique ( smote ) . we calibrated a model to predict soil organic carbon content using air - dried spectra spiked with synthetic field spectra . the root mean - squared error of prediction decreased from 6.18 to 2.12 mg g and increased from .53 to 0.82 compared to the model calibrated on air - dried spectra only . = 1 diffuse reflectance spectroscopy , soil , partial least squares , calibration , smote |
scene flow is a three - dimensional motion field of the surface in world space , or in other words , it shows the three - dimensional displacement vector of each surface point between two frames . as most computer vision issues are , scene flow estimation is essentially an ill - posed energy minimization problem with three unknowns .prior knowledge in multiple aspects is required to make the energy function solvable with just a few pairs of images .hence , it s essential to fully make use of information from the data source and to weigh different prior knowledge for a better performance .the paper attempts to reveal clues by providing a comprehensive literature survey in this field .scene flow is first introduced by vedula in 1999 and has made constant progress over the years .diverse data sources has emerged thus scene flow estimation do nt need to set up the complicated array of cameras .the conventional framework derived from optical flow field has extended to this three - dimensional motion field estimation task , while diverse ideas and optimization manners has improved the performance noticeably .the widely concerned learning based method has been utilized for scene flow estimation , which brings fresh blood to this integrated field . moreover , a few methods have achieved real - time estimation with gpu implementation at the qvga( ) resolution , which insure a promising efficiency .the emergence of these methods stands for the fact that scene flow estimation will be widely utilized and applied to practice soon in the near future .the paper is organized as follows .section [ sec : background ] illustrates the relevant issues , challenges and applications of scene flow as a background .section [ sec : taxonomy ] provides classification of scene flow in terms of three major components . emerging datasets that are publicly available andthe diverse evaluation protocols are presented and analyzed in section [ sec : evaluation ] .section [ sec : discussion ] arises few questions to briefly discuss the content mentioned above , and the future vision is provided .finally , a conclusion is presented in section [ sec : conclusion ] .we provide relevant issues , major challenges and applications as the background information for better understanding this field .scene flow estimation is an integrated task , which is relevant to multiple issues .firstly , optical flow is the projection of scene flow onto an image plane , which is the basis of scene flow and has made steady progress over the years . the basic framework andinnovations of scene flow estimation mainly derives from optical flow estimation field .secondly , in a binocular setting , scene flow can be simply acquired by coupling stereo and optical flow , which makes the stereo matching an essential part for scene flow estimation .most scene flow estimation methods with promising performance are initialized with a robust optical flow method or a stereo matching method . 
andthe innovation in scene flow mostly derived from these two fields .hence , we provides the changes and trend in the relevant issues as heuristic information .optical flow is a two - dimensional motion field .the global variational horn - schunck(h - s ) method and the local total - least - square(tls ) lucas - kanade(l - k ) method have led the optical flow field and scene flow field over the years .early works was studied and categorized by barron and otte with quantitative evaluation models .afterwards , brox implemented the coarse - to - fine strategy to deal with large displacement , while sun studied the statistics of optical flow methods to find the best way for modeling .baker proposed a thorough taxonomy of current optical flow methods and introduced the middlebury dataset for evaluation , and comparisons between error evaluation methodologies , statistics and datasets are presented as well .currently , optical flow estimation has reached to a promising status . a segmentation - based method with the approximate nearest neighbor field to handle large displacement ranks the top of middlebury dataset in terms of both endpoint error(epe ) and average angular error(aae )currently , where epe varies from 0.07 to 0.41 in different data and aae varies from 0.99 to 2.39 .a similar method reached promising results as well .moreover , there are a variety of methods which achieve top - tier performance and solve different problems respectively .rushwan utilized a tensor voting method to preserve discontinuity .xu introduced a novel extended coarse - to - fine optimization framework for large displacement , while stoll combines the feature matching method with variational estimation to keep small displacement area from being compromised .he also introduced a multi - frame method utilizing trilateral filter . to handle non - rigid optical flow , li proposed a laplacian mesh energy formula which combines both laplacian deformation and mesh deformation .stereo matching is essential to scene flow estimation under binocular setting .a stereo algorithm generally consists of four parts:(1 ) matching cost computation , ( 2 ) cost aggregation , ( 3 ) estimation and optimization and ( 4 ) refinement .it is categorized into local methods and global methods depending on how the cost aggregation and computation are performed .local methods suffer from the textureless region , while global methods is computationally expensive .a semi - global - matching(sgm ) method combines local smoothness and global pixel - wise estimation and leads to a dense matching result at low runtime , which is commonly utilized as the modification .a comprehensive review is presented by scharstein in 2001 .the upper rank algorithms of middlebury stereo dataset and kitti stereo dataset are mainly occupied by unpublished papers , indicating the rapid development in this field .learning methods are utilized with promising efficiency and accuracy . besides, zhang proposed a mesh - based approach considering the high speed of rendering and ranks the top among the published papers , while segmentation - based methods are proven to tackle the textureless problem .the complex scene and the limited image capture approach post challenges in diverse ways , which are discussed as follows .what happens when an unstoppable force meets an immovable object ?the relationship between accuracy and efficiency is just like the omnipotence paradox . 
to achieve better accuracy ,sufficient and complicated prior knowledge is obliged , while in terms of efficiency , the data need to be listed down to a reasonable scale and the calculation scheme should be as simple as possible .we can not only consider the enhancement of efficiency and accuracy , but also value the trade - off between these two .the trade - off is discussed in section [ sec : discussion ] , and the performance is illustrated in figure [ fig : accuracy - efficiency ] .occlusion is common in a complex scene with multiple moving objects .it occurs between views and frames as figure [ fig : occlusion ] illustrates .it violates the data consistency assumption and may lead to mismatching on account of missing information of the occluded object .besides , occlusion may perturb the consistency between frames and affect the multi - frames tracking method . [ htbp ] the occlusion between views can be handled well under the multi - view stereopsis with abundant prior knowledge , while temporal constraint may provide robust temporal coherence and prediction to alleviate occlusion between frames .large displacement occurs frequently when an object is moving at a high speed or under a limited frame - rate .moreover , articulated motion may lead to large displacement as well .this kind of problem is hard to tackle on account that the scene flow algorithms normally assume the constancy and smoothness within a small region , large displacement may make the solution to energy function trapped into a local minimum which leads to enormous errors propagated by iteration procedure .brox implemented the coarse - to - fine method along with a gradient constancy assumption to alleviate the impact caused by large displacement in the optical flow field .currently , several matching algorithms have been introduced to handle this issue specifically and achieved promising results .brightness constancy does nt obey the illumination - varying environment .however , this issue is common in an outdoor scene , e.g. , drifting clouds that block the sunlight , sudden reflection from a window , and lens flares .furthermore , it will be a disaster at night when lights start to flash .in the optical flow field , additional assumptions such as gradient constancy and some more complicated constraints have been added to make it more robust to the illumination changes .schuchert specifically studied range flow estimation under varying illumination . in his paper , pre - filtering and changes of brightness modelimprove the accuracy .gotardo introduced an albedo consistency assumption as a revision .a relighting procedure was proposed as a key element to handle the multiplexed situation in his paper as well .the lack of texture may make the scene flow estimation still an ill - posed problem , which is a challenge for discovering consistency .it is also a challenge to stereo matching , which may lead to enormous errors in the binocular - based scene flow estimation .the textureless region is still a major distribution of the estimation error . to overcome this problem ,different scene representations have been utilized .for example , popham introduced a patch - based method .the motion of each patch does nt only rely on the texture information , but utilizes the motion from neighbor patches .this makes it more robust for a textureless region . 
as a solution to the occlusion issue ,segmentation - based method is valid because it assumes uniform motion among the small regions to deal with the ambiguousness .scene flow estimation is a comprehensive problem .motion information reveals the temporal coherence between two moments . in a long sequence, scene flow can be utilized to get the initial value for the next frame and serve as a constraint in its relevant issue fields .scene flow can not only profit from its relevant issues , but also facilitate them mutually .gotardo captured three - dimensional scene flow to provide delicate geometric details , while liu utilized scene flow as a soft constraint for stereo matching and a prediction for next frame disparity estimation .ghuffar combined local estimation and global regularization in a tls framework and utilized scene flow for segmentation and trajectory generation . beyond that, scene flow can be a valuable input or mobile robotics and autonomous driving field , which consist of multiple task such as obstacle avoidance and scene understanding .frank first fused optical flow and stereo by means of kalman filter for obstacle avoidance .alcantarilla combined scene flow estimation with the visual slam to enhance the robustness and accuracy .herbst got object segmentation with the rgb - d scene flow estimation result , aiming to achieve autonomous exploration of indoor scenes .menze utilized scene flow to reason objects by regarding the scene as a set of rigid objects .autonomous driving could make use of both the geometric information that represents distance and the scene flow information that represents motion for multiple tasks . in addition, scene flow can be utilized to serve as a feature as the histogram of optical flow(hof ) or the motion boundary histogram(mbh ) feature descriptors for object detection and recognition , e.g. , facial expression , gesture , and body motion recognition .it may enrich the information in the descriptor with additional depth dimension and can be applied for motion like rotation or dolly moves that optical flow ca nt handle .for instance , in 2009 , furukawa recorded the motion model of the facial expression using scene flow estimation .scene flow estimation is viewed as an ill - posed problem , which includes three main steps : data acquisition , energy function modeling , energy minimization and optimization .the general taxonomy is depicted in figure [ fig : overall ] .the energy function consists of data term and regularization as equation [ eq : energy ] illustrates . data terms derive from different data sources assuming brightness constancy(bc ) or gradient constancy(gc ) as local constraint . 
while there are three unknown parameters ,regularization terms need to be added to regularize the ill - posed problem and provide spatial coherence .theoretically , the more regularization terms are , the more robust and accurate the estimation is .however , miscellaneous regularization terms may lead to redundancy , intractability and over - fitting , and that s why the design and solution of regularization term are key to a method .hence , in this section , existing methods are categorized in terms of three fundamental properties that distinguish the major algorithms : _ scene representation _ presented the diverse representations for both scene and scene flow ._ data source _ describes the major data acquisition manner and the corresponding data term choices ._ calculation scheme _ mainly discuss the idea for estimation and optimization manner , including diverse choices of regularization terms and implement . over the years , diverse ways to representthe scene have emerged with different emphasis , which can be broadly categorized into _depth / disparity _ , _ point cloud _ , _ mesh _ and_ patch_. the convenient way to represent the scene is to couple color image and depth map as color - d information , where d stands for depth information with rgb - d data or disparity information under a binocular setting .scene flow under this sort of representation is known as 2.5d scene flow or 2d parameterization of scene flow .it consists of optical flow component which is measured in pixels , and disparity or depth change component which is measured in pixels or .the binocular - based scene flow can be presented as , where is the 2d optical flow , and the stands for disparity change .likewise , in terms of rgb - d scene flow , the motion field consists can be presented , where stands for the depth change .particularly , disparity value can be converted into depth value as equation [ eq : disparitytodepth ] illustrates . where is the focal length of the camera , and is the camera baseline value . to truly present the three - dimensional scene ,the image pixel need to be projected into the scene space .the projection is illustrated in figure [ fig:3d - mapping ] and presented in equation [ eq:3d - mapping ] . where is the image pixel , is the projection from a pixel value and a depth value to a 3d point . and stands for the camera focal length , and and are the principle points .formulation [ eq:3d - mapping ] can also be presented as : where matrix is known as the camera projection matrix .hence , scene flow under point cloud representation can be presented as , which truly reveal the three - dimensional displacement .meshes represent a surface as a set of planar polygons , e.g. , triangle , which connected to each other as is shown in figure [ fig : mesh ] .this representation is a efficient way for rendering , and it occupies less memory .mesh is essentially sort of point cloud representation as the vertex can be viewed as a point in the three - dimensional point cloud .scene flow estimation methods with a mesh representation are only under a multi - view setting and the geometry estimation is given simultaneously .the motion of vertice is solved with a point cloud methods , while the rest part are solved by interpolating along the meshes . 
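Both conversions described above follow from the standard pinhole camera model, so they can be written down directly. In the sketch below the symbols f (focal length in pixels), baseline (camera baseline) and (cx, cy) (principal point) name the quantities referred to in the text; the function names and the small eps guard against zero disparity are our additions for illustration.

```python
import numpy as np

def disparity_to_depth(disparity, f, baseline, eps=1e-6):
    """Convert a disparity map (in pixels) to a depth map: Z = f * B / d."""
    return f * baseline / np.maximum(disparity, eps)

def backproject(u, v, depth, f, cx, cy):
    """Project a pixel (u, v) with depth Z into a 3D point (X, Y, Z) in the
    camera frame, i.e. the mapping of equation [eq:3d-mapping]."""
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1)

# A 2D-parameterized scene flow (u_flow, v_flow, delta_d) can be lifted to a true
# 3D displacement by back-projecting corresponding pixels at t and t+1:
# P0 = backproject(u, v, disparity_to_depth(d, f, B), f, cx, cy)
# P1 = backproject(u + u_flow, v + v_flow, disparity_to_depth(d + delta_d, f, B), f, cx, cy)
# scene_flow_3d = P1 - P0
```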
in terms of patch representation , the surfaceis represented by collections of small planar or sphere patches .each patch is six - dimensional in terms of three - dimensional patch center position and three - dimensional patch direction .patch representation is similar to mesh representation with different emphasis , where mesh representation focuses on the precision and deformable property of each vertex , and patch representation values the local consistency in terms of rigidity and motion within a small neighborhood region .early paper viewed patch as a surface element(surfel ) under a multi - view setting . a few binocular - based scene flow methods utilized patches to fit the surface of the scene on account that this kind of patch - based methods are common in stereo matching field .in addition , hornacek uniquely exploited a pair of rgb - d data to seek patch correspondences in the 3d world space and leads to dense body motion field including both translation and rotation . over the years, scene flow has been estimated under three main kinds of data source : a calibrated multi - camera system , the binocular stereo camera or the rgb - d camera which consists of both rgb color information and depth information .moreover , the emerging light field has been applied into the scene flow estimation with promising performance .data terms under different data sources differs from each other .hence , in this section , scene flow estimation methods are categorized into these four kinds : _ multi - view stereopsis _ , _ binocular setting _ , _ rgb - d data _ and _ light field data_.most of the algorithms in the early 2000s assume a multi - view system , with multiple cameras set in a complex calibrated scene .multi - view scene flow estimation is usually along with 3d geometry reconstruction simultaneously .ample data sources and diverse prior knowledge ensure the robustness of the estimation , and occlusion issue can be handled well .however , it is commonly at a high computational cost with an intricate full - view scene to deal with .vedula proposed two choices for regularization and distinguished three scenarios in 1999 , which guides the multi - view scene flow estimation till now . in his paper ,a multi - view scene flow can be estimated from one optical flow and the known surface geometry .the equation is formulated in equation [ eq : vedula ] . where is the three - dimensional scene point , is the two - dimensional image pixel , is the scene flow , is the optical flow , and is the inverse jacobian which can be estimated from the surface gradient .afterwards , zhang proposed two systems for estimation , where ims assumed each small patch undergoes 3d affine motion , and egs used segmentation to keep the boundary .these papers modeled energy function with multiple constraint , which provided a basic estimation process . 
similarly , pons presented a common variational framework with local similarity criteria constraint .henceforth , different scene representations were introduced to describe the surface .diverse multi - frame tracking methods mentioned for sparse estimation are utilized as well to build the temporal coherence .moreover , letouzey added an rgb - d camera into the multi - view system with a mesh representation , aiming to enrich the geometry information with the depth data constraint .binocular setting is regarded as a basic and simplified version of multi - view system , while the difference between the two is that binocular scene flow estimation is usually along with disparity estimation between two views but not the full - view 3d geometry knowledge .the relevance between views and frames is illustrated in figure [ fig : binocular - a ] , and figure [ fig : binocular - b ] depicts the basic data terms in a binocular setting which consists of stereo consistency terms in time and along with optical flow consistency terms in both views .the specific formulation is presented in equation [ eq : binoculardataterms ] . where [ eq : binoculardataterms ] and are the optical flow consistency terms that assume the brightness of the same pixel stay constant between frames . similarly , and are the stereo consistency terms that assume brightness constancy between views . moreover , is the cross term to constrain the constancy between both frames and views .most binocular - based methods fused stereo and optical flow estimation into a joint framework . on the contrary , others decoupled motion from disparity estimation to estimate scene flow with stereo matching method replaceable at will , and basha utilized a point cloud scene representation as a three - dimensional parametrization version of scene flow .moreover , local rigidity prior was presented along with segmentation prior and achieved promising results .specifically , valgaerts introduced a variational framework for scene flow estimation under an uncalibrated stereo setup by embedding an epipolar constraint , which makes it possible for scene flow estimation under two arbitrary cameras . in 2016 , richardt has made it a reality to compute dense scene flow from two handheld cameras with varying camera settings .scene flow was estimated under a variational framework with a daisy descriptor for wide - baseline matching .table [ tab : binoculardataterm ] enumerates some typical methods under a binocular setting with diverse choices of data terms .most methods chose optical flow consistency terms in both views and stereo consistency terms in both time and time , and few methods only take parts of terms mentioned above .cross term was utilized in .moreover , huguet and hung utilized additional gradient constancy assumption besides intensity to enhance robustness against illumination changes , which turns the image intensity value in energy function into image gradient . additionally , extra rgb constancy terms and are taken in hung s paper as well , which extends gray value intensity into three - channel information . 
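To make the structure of equation [eq:binoculardataterms] concrete, the sketch below evaluates the four brightness-constancy residuals per pixel for a rectified stereo pair, parameterizing the unknowns as left-view optical flow (u, v), disparity d at time t and disparity change dd. The sign convention for the disparity, the nearest-neighbour sampling and the function names are assumptions made purely for illustration; an actual scene flow solver would embed these residuals, together with a regularization term, in a variational or discrete optimization with differentiable (e.g. bilinear) warping.

```python
import numpy as np

def sample(img, x, y):
    """Nearest-neighbour lookup with clamping to the image border."""
    h, w = img.shape
    xi = np.clip(np.rint(x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(y).astype(int), 0, h - 1)
    return img[yi, xi]

def binocular_data_residuals(Lt, Rt, Lt1, Rt1, u, v, d, dd):
    """Per-pixel brightness-constancy residuals of the four data terms.

    Lt, Rt   : left / right images at time t
    Lt1, Rt1 : left / right images at time t+1
    u, v     : optical flow of the left view
    d, dd    : disparity at time t and disparity change between t and t+1
    """
    h, w = Lt.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    L1_w = sample(Lt1, xs + u, ys + v)             # left image at t+1, warped by the flow
    R0_w = sample(Rt,  xs - d, ys)                 # right image at t, warped by the disparity
    R1_w = sample(Rt1, xs + u - (d + dd), ys + v)  # right image at t+1, warped by flow and new disparity

    e_flow_left  = L1_w - Lt    # optical flow consistency, left view
    e_flow_right = R1_w - R0_w  # optical flow consistency, right view
    e_stereo_t0  = R0_w - Lt    # stereo consistency at time t
    e_stereo_t1  = R1_w - L1_w  # stereo consistency at time t+1
    return e_flow_left, e_flow_right, e_stereo_t0, e_stereo_t1
```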
however , it is proposed that image gradient is sensitive to noise and is view dependent .hence , the necessity of additional assumptions like gradient constancy remains further research to balance the pros and cons ..typical methods under the binocular setting [ cols="<,<,<,<",options="header " , ] we calculated the statistics of the ground truth optical flow in terms of average magnitude and the max magnitude to show the challenging level and large displacement situation of each dataset , as figure [ fig : datasetstatistic ] presents . where the bigger the average magnitude and standard deviation are , the more challenging the dataset is , and the bigger the max magnitude is , the more a robust large displacement handling is required .[ htbp ] we can see clearly that the up - to - date dataset is more challenging than datasets published before . along with information provided in table [ tab : dataset ] , we provide suggestion for datasets with different purpose as follows : + * comprehensiveness * taken multiple issues , e.g. , categories of ground truth data , image resolution , challenging level , naturalism , scale , popularity , into consideration , we recommended mpi sintel dataset and freiburg dataset for their comprehensive property . +* challenging * flyingthings3d subset of freiburg dataset shows the characteristic of large displacement , complex occlusion and diverse changes between frames , which is really challenging for scene flow estimation .monkaa subset shows the similar characteristic which is recommended as well .+ * public popularity * middlebury , kitti(2012&2015 ) and mpi sintel dataset provide evaluation protocols and online ranking that can evaluate the performance of a method conveniently .moreover , the evaluation can be compared with top tier methods in optical flow estimation field and stereo matching field to indicate the superiority of scene flow estimation . +* multi - view and rgb - d data source * for multi - view stereopsis , basha rotating sphere and kitti2012/2015 provide multi - view extension for evaluation .to be noted , basha provides only point cloud scene flow ground truth for quantitative analysis , while the ground truth of kitti2012/2015 is sparse . in terms of rgb - d scene flow estimation ,mpi sintel dataset and basha rotating sphere dataset provides depth ground truth so that disparity - to - depth conversion is no need .in addition , long range of distance may make the depth map transferred from disparity map unclear for visualization , which make image - based algorithm hard to work .the provided depth visualization map is a good option .+ * large scale for learning * freiburg dataset is currently the only dataset with order of scale that is designed for training optical flow , disparity , and scene flow. protocols usually varies from each other based on different datasets .the previous datasets like middlebury and rotating sphere usually use rmse / nrmse and aae for evaluation , while newly - introduced datasets like kitti , mpi sintel and freiburg utilize epe for evaluation .we recommend researchers to use epe as a overall protocol , while rmse and aae can be supplementary means that reveal error distribution and angular error . 
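For completeness, the three protocols recommended above can be computed as follows. The functions are a direct transcription of the standard definitions (endpoint error as the Euclidean norm of the flow difference, angular error between the space-time vectors (u, v, 1) and their ground truth, and the root mean-squared error); they are not taken from any particular benchmark toolkit.

```python
import numpy as np

def endpoint_error(u, v, u_gt, v_gt):
    """Average endpoint error (EPE) of a 2D flow field, in pixels."""
    return np.mean(np.sqrt((u - u_gt) ** 2 + (v - v_gt) ** 2))

def angular_error(u, v, u_gt, v_gt):
    """Average angular error (AAE) in degrees, using the (u, v, 1) convention."""
    num = u * u_gt + v * v_gt + 1.0
    den = np.sqrt(u ** 2 + v ** 2 + 1.0) * np.sqrt(u_gt ** 2 + v_gt ** 2 + 1.0)
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def rmse(x, x_gt):
    """Root mean-squared error; applicable to flow components, disparity or depth."""
    return np.sqrt(np.mean((x - x_gt) ** 2))

# For a 2D-parameterized scene flow, EPE can be reported separately for the
# optical flow (u, v), the disparity at time t and the disparity at time t+1,
# as recommended in the text.
```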
+* should we evaluate the 3d error ?* as equation [ eq : errorprojection ] presents , the disparity in time or the disparity change do have influence on scene flow estimation .hence , we highly recommended that for 2d parametrization scene flow estimation methods , the accuracy of disparity or the disparity change should be evaluated and provided .considering the fact that equation [ eq : nrmse - wedel ] and [ eq : aae - wedel ] is rarely utilized , and rotating sphere of basha is the only dataset that provide three - dimension point cloud ground truth , we recommend researchers to provide epe for optical flow , disparity in time and time as a common protocol for scene flow evaluation . with 17 years of development, scene flow estimation has reached a promising status , while there are still many issues remains to be solved . in this section , we discuss the limitation of existing datasets , and present a brief vision on modification of algorithms .the survey of existing evaluation methodologies reveals problems and limitations as elaborated below : + * size * on account that real - time scene flow has been achieved with gpu implementation , high resolution data can be introduced for more challenging work . + * ground truth * point cloud is the real three - dimensional parametrization representation for scene flow , which reveals the difference between scene flow and optical flow significantly .hence , the ground truth for scene flow under the point cloud representation is necessary .meanwhile , occlusion , textureless region and discontinuity region ground truth are essential for evaluation due to the fact that errors mainly exist near these regions . + *data source * current datasets mainly focus on scene flow under the binocular setting , while multi - view extension and specific datasets for rgb - d and light field based scene flow is required .otherwise , the properties of missing data in rgb - d cameras and the abilities like refocusing of light field cameras will be neglected with current datasets .+ * protocol * a protocol for evaluating performances under different datasets remains vacant .moreover , the three - dimensional protocol is nt been applied due to the limitation of point cloud ground truth . by checking the error map provided by kitti benchmark, it s clear that inaccuracy mainly exists in the boundaries of objects .since this is a common issue for all computer vision tasks , edge - preserving and reasonable filtering is the first priority .gpu implementation has shown a great efficiency improvement , and the duality - based optimization has proved to enhance the efficiency of global variational methods without accuracy sacrifice . these kind of methods may be a routine in the future for better efficiency . with the development of a robust and efficient estimation between two frames ,some papers have studied motion estimation under a long sequence .the multi - fames estimation with temporal prior knowledges deserves more attention .a robust temporal constraint can benefit the methods with a better initial value or a better feature to match .the challenges like varying illumination and occlusion can be handled with the help of it .the emerging learning based methods and light field technique has brought fresh blood to scene flow estimation .learning method with cnn shows an upward tendency in the relevant issues of scene flow estimation like stereo matching and optical flow with promising accuracy and computational cost . 
with the help of the up - to - date large scale training dataset , learning - based method has a profound potential to achieve an accurate and fast estimation .light field camera provides more data than existing data source , which brings diverse possibilities for this field .similar to the emergence of rgb - d cameras , this new source of data may lead to a new attractive branch . on account of the fact that scene flow estimation relies highly on texture and intensity information, application will suffer in the night or an insufficient illumination circumstance .moreover , the car headlights and lighting on the building that are frequent in the autonomous driving scene may interfere motion estimation significantly .hence , the scene flow estimation with insufficient illumination is worth studying .this paper presents a comprehensive and up - to - date survey on both scene flow estimation methods and the evaluation methodologies for the first time after 17 years since scene flow was introduced .we have discussed most of the estimation methods so researchers could have a clear view of this field and get inspired for their studies of interest .the representative methods are highlighted so the differences between these methods are clear , and the similarities between top - tier methods can be seen as a tendency for modification .the widely used benchmarks have been analyzed and compared , so are multiple evaluation protocols .this paper provides sufficient information for researchers to choose the appropriate datasets and protocols for evaluating performance of their algorithms .there are still ample rooms for future research on accuracy , efficiency and multiple challenges .we wish our work could arise public interest in this field and bring it to a new stage .this work was supported by projects of national natural science foundation of china [ 61401113 ] ; and natural science foundation of heilongjiang province of china [ lc201426 ] .n. mayer , e. ilg , p. husser , p. fischer , d. cremers , a. dosovitskiy , t. brox , a large dataset to train convolutional networks for disparity , optical flow , and scene flow estimation , arxiv preprint arxiv:1512.02134 . c. rabe , t. muller , a. wedel , u. franke , dense , robust , and accurate motion field estimation from stereo image sequences in real - time , in : european conference on computer vision ( eccv ) , springer - verlag , 2010 , pp . 582595 .m. jaimez , m. souiai , j. gonzalez - jimenez , d. cremers , a primal - dual framework for real - time dense rgb - d scene flow , in : ieee international conference on robotics and automation ( icra ) , 2015 , pp .98104 . o. u. n. jith , s. a. ramakanth , r. v. babu , optical flow estimation using approximate nearest neighbor field fusion , in : ieee international conference on acoustics , speech and signal processing ( icassp ) , 2014 , pp .6736577 .h. a. rashwan , m. a. garca , d. puig , variational optical flow estimation based on stick tensor voting , ieee transactions on image processing a publication of the ieee signal processing society 22 ( 7 ) ( 2013 ) 25892599 . c. zhang , z. li , y. cheng , r. cai , h. chao , y. rui , meshstereo : a global stereo model with mesh alignment regularization for view interpolation , in : ieee international conference on computer vision ( iccv ) , 2015 , pp . 20572065 .f. alcantarilla , j. j. yebes , j. almaz , x00e , l. m. 
bergasa , on combining visual slam and dense scene flow to increase the robustness of localization and mapping in dynamic environments , in : ieee international conference on robotics and automation ( icra ) , 2012 , pp .12901297 .a. wedel , c. rabe , t. vaudrey , t. brox , u. franke , d. cremers , efficient dense scene flow from sparse or dense stereo data , in : european conference on computer vision ( eccv ) , vol .5302 lncs , 2008 , pp . 739751 .y. zhang , c. kambhamettu , integrated 3d scene flow and structure recovery from multiview image sequences , in : ieee conference on computer vision and pattern recognition ( cvpr ) , vol . 2 , 2000 , pp .674681 .d. ferstl , c. reinbacher , g. riegler , r. m , x00fc , ther , h. bischof , atgv - sf : dense variational scene flow through projective warping and higher order regularization , in : international conference on 3d vision , vol . 1 , 2014 , pp .285292 .j. park , t. h. oh , j. jung , y .- w .tai , i. s. kweon , a tensor voting approach for multi - view 3d scene flow estimation and refinement , in : european conference on computer vision ( eccv ) , vol .7575 lncs , 2012 , pp . 288302 .r. l. carceroni , k. n. kutalakos , multi - view scene capture by surfel sampling : from video streams to non - rigid 3d motion , shape and reflectance , in : ieee international conference on computer vision ( iccv ) , vol . 2 , 2001 , pp .6067 vol.2 .j. p. pons , r. keriven , o. faugeras , g. hermosillo , variational stereovision and 3d scene flow estimation with statistical similarity measures , in : ieee international conference on computer vision ( iccv ) , 2003 , pp .597602 vol.1 .j. p. pons , r. keriven , o. faugeras , multi - view stereo reconstruction and scene flow estimation with a global image - based matching score , international journal of computer vision 72 ( 2 ) ( 2007 ) 179193 .l. valgaerts , a. bruhn , h. zimmer , j. weickert , c. stoll , c. theobalt , joint estimation of motion , structure and geometry from stereo sequences , in : european conference on computer vision ( eccv ) , vol .6314 lncs , 2010 , pp .568581 .x. zhang , d. chen , z. yuan , n. zheng , dense scene flow based on depth and multi - channel bilateral filter , in : x. zhang , d. chen , z. yuan , z. hu ( eds . ) , asian conference on computer vision ( accv ) , 2012 , pp .140151 .m. jaimez , m. souiai , j. st , x00fc , ckler , j. gonzalez - jimenez , d. cremers , motion cooperation : smooth piece - wise rigid scene flow from rgb - d images , in : international conference on 3d vision ( 3dv ) , 2015 , pp .6472 .m. w. tao , s. hadap , j. malik , r. ramamoorthi , depth from combining defocus and correspondence using light - field cameras , in : ieee international conference on computer vision ( iccv ) , 2013 , pp .673680 .v. lempitsky , s. roth , c. rother , fusionflow : discrete - continuous optimization for optical flow estimation , in : computer vision and pattern recognition , 2008 .cvpr 2008 .ieee conference on , ieee , 2008 , pp .z. lv , c. beall , p. f. alcantarilla , f. li , z. kira , f. dellaert , a continuous optimization approach for efficient and accurate scene flow , in : european conference on computer vision , springer , 2016 , pp .757773 .a. dosovitskiy , p. fischer , e. ilg , h. p , xe , usser , c. hazirbas , v. golkov , p. v. d. smagt , d. cremers , t. brox , flownet : learning optical flow with convolutional networks , in : ieee international conference on computer vision ( iccv ) , 2015 , pp .27582766 .t. vaudrey , c. rabe , r. klette , j. 
milburn , differences between stereo and motion behavior on synthetic and real - world stereo sequences , in : international conference on image and vision computing new zealand ( ivcnz ) , 2008 , pp . 1 - 6 . j. gall , c. stoll , e. de aguiar , c. theobalt , b. rosenhahn , h. p. seidel , motion capture using joint skeleton tracking and surface estimation , in : ieee conference on computer vision and pattern recognition ( cvpr ) , 2009 . | to the best of our knowledge , this paper is the first to review the scene flow estimation field ; it analyzes and compares the methods , technical challenges , evaluation methodologies and performance of scene flow estimation . existing algorithms are categorized in terms of scene representation , data source , and calculation scheme , and the pros and cons in each category are compared briefly . the datasets and evaluation protocols are enumerated , and the performance of the most representative methods is presented . a future vision is illustrated , with a few questions raised for discussion . this survey presents a general introduction and analysis of scene flow estimation . |
there are various models of collective opinion formation in which agents modify their opinions according to interaction with other agents . opinion formation is a dynamic process : for example , interaction between agents makes their opinions approach each other .an important problem in opinion dynamics is to examine when an agreement ( i.e. , consensus ) among all the agents occurs .complete agreement is rarely observed in the real world .however , it is an established fact that opinion dynamics under the voter model , a classical opinion model in statistical physics and probability theory , inevitably reaches agreement in finite populations .the majority rule model has a similar feature .partly motivated by this discrepancy , various extensions of voter and majority rule models and different models of collective opinion formation have been proposed to account for the disagreement in finite populations .examples include the deffuant model , language competition models , voter - like models on adaptive networks , voter model under partisan bias ( the assumption that agents naturally prefer one opinion ) , and variations of axelrod s cultural dynamics ( see for references ) .theoretical models have also been proposed in social sciences to explain disagreement in the context of polarization .for example , prior beliefs or initially received signals can cause disagreement between agents , even if they receive the same public signals from then on .although there is a plethora of studies addressing the problem of agreement and disagreement in opinion dynamics , we propose a model incorporating two factors that are relevant to human behavior : bayesian belief updating and confirmation bias .bayesian belief updating is commonly used in studies of the decision making of agents receiving uncertain information .the confirmation bias is a psychological bias inherent in humans , in which an agent inclined towards an opinion tends to misperceive incoming signals as supporting the agent s belief . a non - bayesian model with the confirmation biaswas previously proposed for explaining the influences of media and interactions between agents .we are not the first to study opinion formation under the bayesian updating and confirmation bias . in the framework of single agent opinion formation, rabin and schrag showed that the confirmation bias triggers overconfidence and can cause the individual to hold incorrect beliefs , even if it receives a series of external signals suggesting the true state of the world .orlan studied the bayesian dynamics of agents subjected to the confirmation bias , interacting through the mean field . the model yields agreement or disagreement depending on the parameter values . in this study , motivated by the rabin - schrag model , we propose a model of collective opinion formation with a confirmation bias .we model direct peer - to - peer interactions between agents ( not through the mean field ) and their effects on the bayesian updating of each agent . 
to study the pure effects of interactions among agents, we do not assume that agents receive signals from the environment as in previous studies .we numerically simulate the model to reveal the conditions under which the populations of agents agree and disagree , depending on the values of parameters such as the strength of the confirmation bias , fidelity of the signal , and the system size .our model modifies the bayesian decision - making model proposed by rabin and schrag in two main ways .first , we consider a well - mixed population of bayesian agents that interact with each other ; rabin and schrag focused on the case of the single agent .second , agents do not receive external signals from the environment in our model . in the rabin - schrag model , such an external signal , which represents the correct " answer in the binary choice situation ( i.e. , the true state of nature ) ,is assumed . by making the two changes , we concentrate on collective opinion formation by bayesian agents , whereby there are two possible alternative opinions of equal attractiveness .we label the agents and denote the opinion of agent ( ) by , where a and b are the alternative opinions .we assume that agents are not perfectly confident in their opinions . to model this factor, we adopt the bayesian formalism used by rabin and schrag .we denote by the strength of the belief ( hereafter , simply the belief ) with which agent believes in opinion a. a parallel definition is applied to .it should be noted that , , and . if , agent is indifferent to either opinion .we update the agent s belief as follows . the time starts from . upon every updating of an agent s belief , we add to such that the belief of each agent is updated once per time unit on average . in an updating event , we select an agent to be updated with equal probability .agent refers to agent s opinion for updating s belief , where ( ) is selected with equal probability from the population .agent imparts a signal , where and correspond to s opinions a and b , respectively .we assume that the probabilities that agent imparts and are given by and respectively , where represents the reliability of the signal , and .if is confident in its own opinion and the transformation from s belief [ i.e. , to s output signal ( i.e. , or ) is reliable , signals and are likely to indicate opinions a and b , respectively . in the limit , and . if , such that does not convey any information about s belief .we implicitly assume that all the agents share the same value of and that they know this fact when performing the bayesian update , as described below .when agent imparts signal , agent is assumed to perceive a subject signal , where and correspond to and , respectively .the flow of the signal conversion is depicted in fig .[ fig : signals ] .if agent is not subject to the confirmation bias , and are equal to and , respectively .otherwise , agent may misinterpret the signal imparted by agent , depending on the prior exposure of agent to other signals . following rabin and schrag , we define = \pr[\sigma=\beta | s = b,\pr(x_i={\rm a } ) \le 1/2 ] = 1 \label{eq : s_to_sigma_bias_1}\end{aligned}\ ] ] and = \pr[\sigma=\beta | s = a,\pr(x_i={\rm a } ) < 1/2 ] = q , \label{eq : s_to_sigma_bias_2}\end{aligned}\ ] ] where ( ) parameterizes the strength of the confirmation bias .equation states that an agent preferring opinion a misinterprets an arriving signal as ( i.e. 
, ) with probability .if , the confirmation bias is absent , and and are always converted to and , respectively . if , the agent perceives the signal that is consistent with its current preference [ i.e. , if and if , irrespective of the signal imparted by agent ( i.e. , or ) . the other conditional probabilities can be readily derived from eqs .( [ eq : s_to_sigma_bias_1 ] ) and ( [ eq : s_to_sigma_bias_2 ] ) . for example , eq . ( [ eq : s_to_sigma_bias_1 ] ) implies = 1 - \pr[\sigma=\alpha|s = a , \pr(x_i={\rm a})>1/2 ] = 0 , \label{eq : s_to_sigma_bias_3}\end{aligned}\ ] ] and eq .( [ eq : s_to_sigma_bias_2 ] ) implies = 1- \pr[\sigma=\beta|s = a , \pr(x_i={\rm a})<1/2 ] = 1-q .\label{eq : s_to_sigma_bias_4}\end{aligned}\ ] ] then , by using the bayes theorem , we update agent s belief on the basis of the old belief [ and the perceived signal ( i.e. , or ) . the perceived signal may be different from the received signal ( i.e. , or ) because of their confirmation bias [ eqs . and ] .we assume that agents are not aware that they may be subject to the confirmation bias .agents use the subjective conditional probabilities given by and to perform the bayesian update . the posterior belief is given by should be noted that .then , we increment the time by such that each agent is updated once per unit time on average .iterative application of eq .( [ eq : pia_update ] ) leads to and where ( ) is the accumulated number of signals ( ) that agent has perceived .the state of each agent is uniquely determined by , which is consistent with basic bayesian theory .unless otherwise stated , we set and assume a neutral initial condition ( ) , or , equivalently , ( ) . the agents exchange signals andupdate their beliefs , possibly under a confirmation bias .after a transient , the agents believe in either opinion with a strong confidence , i.e. , or .we halt a run when is satisfied for all for the first time , where is the threshold .in other words , a run continues if at least one agent has the value smaller than .we first consider the case without a confirmation bias ( i.e. , ) .we investigate the dynamics of the mean belief at time by drawing a return map , i.e. , as a function of .the return map for , , and based on runs is shown in fig .[ fig : meanp_ba ] . because when and when , the dynamics is in accordance with majority rule behavior .all runs finished with an agreement of opinion a [ i.e. , for all ] or opinion b [ i.e. , for all ] .each case occurred approximately half the time .we turn on the confirmation bias to examine the possibility that it induces disagreement among agents . at least for large ( i.e. , ), disagreement is expected to be reached because the first perceived signal would determine the final belief of each agent and is equally likely to be and for many agents . in the following numerical simulations , we measured the degree of disagreement , which we defined as follows .we determined that agreement was reached in a run if the final signs of were the same for .otherwise , we said that disagreement was reached .we denoted the fraction of runs that finished with disagreement by .we set and the number of runs to . 
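several symbols in the model description above ( the belief variables , the signal probabilities and the halting threshold ) did not survive extraction , so the following python sketch is only meant to make the update cycle concrete under stated assumptions : beliefs are tracked through the perceived - signal count difference m_i = n_alpha - n_beta ; the sender imparts s = a with probability theta * Pr_j(A) + (1 - theta) * Pr_j(B) , a form chosen to match the limiting behaviour described above rather than taken verbatim from the original equations ; a signal contradicting the recipient's current preference is misperceived with probability q ; and each perceived signal shifts m_i by plus or minus one , which is the bayesian update in log - odds form . the parameter values and the threshold are illustrative .

```python
import numpy as np

def pr_A(m, theta):
    """Belief Pr(x_i = A) of an agent whose perceived-signal count difference is m = n_alpha - n_beta,
    assuming Bayes' rule is applied with subjective signal reliability theta."""
    odds = (theta / (1.0 - theta)) ** m
    return odds / (1.0 + odds)

def simulate(N=100, theta=0.6, q=0.3, p_threshold=1e-3, rng=None):
    """One run of the well-mixed model (sketch); returns the final opinion sign of each agent."""
    rng = np.random.default_rng(rng)
    m = np.zeros(N, dtype=int)                      # neutral initial beliefs, Pr(A) = 1/2
    beliefs = pr_A(m, theta)
    while np.any(np.minimum(beliefs, 1.0 - beliefs) > p_threshold):
        i = rng.integers(N)                         # recipient
        j = (i + 1 + rng.integers(N - 1)) % N       # sender, distinct from the recipient
        # assumed signal rule: sender imparts s = a with prob theta*Pr_j(A) + (1-theta)*Pr_j(B)
        p_j = pr_A(m[j], theta)
        s_is_a = rng.random() < theta * p_j + (1.0 - theta) * (1.0 - p_j)
        # confirmation bias: a signal contradicting the recipient's preference is flipped with prob q
        perceived_a = s_is_a
        if m[i] > 0 and not s_is_a and rng.random() < q:
            perceived_a = True
        elif m[i] < 0 and s_is_a and rng.random() < q:
            perceived_a = False
        m[i] += 1 if perceived_a else -1            # Bayesian update in log-odds (count) form
        beliefs = pr_A(m, theta)
    return np.sign(m)

opinions = simulate()
print("disagreement" if opinions.min() != opinions.max() else "agreement")
```

repeating such runs and recording whether all final signs coincide gives the fraction of disagreeing runs discussed below .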
in figs .[ fig : r_bm](a ) and [ fig : r_bm](b ) , is shown as a function of and for and , respectively .first , monotonically increases with and decreases with for both and .it should be noted that disagreement occurred in at least one run in the regions right to the solid fractured lines in fig .[ fig : r_bm ] .second , for [ fig .[ fig : r_bm](a ) ] is smaller than for [ fig .[ fig : r_bm](b ) ] for all the and values . therefore , disagreement seems to be a likely outcome of the model for large , particularly for large and small .when , perfect agreement , i.e. , , is realized only for close to zero . in other words ,even a small degree of confirmation bias elicits disagreement among the agents . to obtain analytical insights into the model, we performed an annealed approximation for by averaging out fluctuations of the dynamics for different times and runs .the configuration of the population is specified by .the stochastic dynamics of the model can be mapped to a random walk on the two - dimensional lattice ; a walker is initially located at and randomly hops to one of the four neighboring lattice points in each time step .we defined , , , and as the probabilities that the walker located at moves to , , , and , respectively .the four probabilities are given by \dfrac{\pr(s = a | m_2)}{2 } & ( m_1 = 0 ) , \\[6pt ] \dfrac{(1-q ) \pr(s = a | m_2)}{2 } & ( m_1 \le -1 ) , \end{cases } \label{eq : fr}\end{aligned}\ ] ] \dfrac{\pr(s = a | m_1)}{2 } & ( m_2 = 0 ) , \\[6pt ] \dfrac{(1-q ) \pr(s = a | m_1)}{2 } & ( m_2 \le -1 ) , \end{cases } \label{eq : fu}\end{aligned}\ ] ] and where is the probability that agent with imparts signal . and increase with and , and and decrease with and . in the following , we study the mean dynamics of the randomwalk driven by the drift terms . because the transition probability of the random walk is symmetric with respect to the lines and , we focus on the region given by .we define and , which are not integers in general , as the values satisfying and , respectively .they are given by ^{-1}}{\ln \dfrac{\theta}{1-\theta}}. \label{eq : r2_fr = fl}\end{aligned}\ ] ] note that and exist if and only if , i.e. , first , we consider the case .we partition the upper quadrant of the lattice ( given by ) into five regions : region ( ) , region ( ) , region ( ) , region ( ) , and region ( ) , as shown in fig .[ fig : schematicview_fr - fl_fu - fd](a ) .we obtain from the condition in region , in region , in region , in region , and in region .the probability flow of the walker after the annealed approximation , i.e. , ( ) inferred from eqs .- is shown schematically in fig .[ fig : schematicview_fr - fl_fu - fd](a ) . if the walker is in the second quadrant ( i.e. , regions , , and ) where the two agents disagree with each other , the random walker is likely to eventually escape and enter the first quadrant ( i.e. , region ) where the two agents agree with each other .in fact , fig . [fig : vectorplot](a ) , which shows the actual probability flow , indicates that the agreement necessarily occurs .therefore , agreement is the expected outcome when .second , if , regions and are absent because and diverge . regions and , in which inequalities and are satisfied , respectively , are the same as those in the case .region , in which inequality is satisfied , is modified to . the probability flows are schematically shown in fig .[ fig : schematicview_fr - fl_fu - fd](b ) . 
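since the explicit expressions for the move probabilities did not survive extraction , the probability - flow picture can also be reproduced directly from the elementary update step . the sketch below computes the per - step drift of the two - agent walk under the same assumed signal and misperception rules as in the previous code block ; because those functional forms are assumptions , the output should be read qualitatively .

```python
import numpy as np

def pr_A(m, theta):
    odds = (theta / (1.0 - theta)) ** m
    return odds / (1.0 + odds)

def drift(m1, m2, theta, q):
    """Per-step drift (E[dm1], E[dm2]) of the two-agent walk at state (m1, m2), computed from the
    same assumed signal and misperception rules used in the simulation sketch above."""
    def perceive_alpha_prob(m_recv, m_send):
        p_a = theta * pr_A(m_send, theta) + (1.0 - theta) * (1.0 - pr_A(m_send, theta))
        if m_recv > 0:                # recipient prefers A: a 'b' signal is flipped with prob q
            return p_a + q * (1.0 - p_a)
        if m_recv < 0:                # recipient prefers B: an 'a' signal is flipped with prob q
            return (1.0 - q) * p_a
        return p_a                    # an indifferent recipient perceives the signal faithfully
    f_R = 0.5 * perceive_alpha_prob(m1, m2)          # move m1 -> m1 + 1
    f_L = 0.5 - f_R                                  # move m1 -> m1 - 1
    f_U = 0.5 * perceive_alpha_prob(m2, m1)          # move m2 -> m2 + 1
    f_D = 0.5 - f_U                                  # move m2 -> m2 - 1
    return f_R - f_L, f_U - f_D

# qualitative reproduction of the vector-field figure on a small grid of states
for m2 in range(3, -4, -1):
    row = ["(%+.2f,%+.2f)" % drift(m1, m2, theta=0.6, q=0.3) for m1 in range(-3, 4)]
    print(" ".join(row))
```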
and are satisfied in region .therefore , once the walker is deep in the second quadrant , it is likely to move toward and , which implies that two agents finally disagree . the actual probability flow shown in fig .[ fig : vectorplot](b ) is consistent with this prediction . the transition line is shown by the dashed line in fig .[ fig : r_bm](a ) .it accurately predicts the parameter region in which disagreement can occur , i.e. , the region right to the solid line .the same transition line is also derived for the rabin - schrag model , which is concerned with a single agent subjected to a confirmation bias . in their model ,the agent forms a belief by repetitively receiving a stochastic signal from nature , according to .rabin and schrag calculated the probability that the agent eventually misunderstands the state of the nature ( i.e. , a or b ) , starting from neutral belief .this probability is equal to zero when and positive when ( see proposition in ) .our results obtained in this section are consistent with theirs because disagreement in our model roughly corresponds to misunderstanding in the rabin - schrag model . in general , there are disagreement configurations , as distinguished by the number of agents that finally believe in opinion a , which ranges from to . to distinguish different disagreement configurations , we examined the fraction of agents that believed in the minority opinion at the end of a run .we averaged this fraction over the runs ending with disagreement .we called this quantity the average size of the minority .figures [ fig : r_minority_minoritymin](a ) and [ fig : r_minority_minoritymin](b ) show the average size of the minority for and , respectively .the black regions indicate the parameter values for which the average size of the minority is undefined because all runs end with agreement .when is small , the average size of the minority monotonically decreases with and monotonically increases with for both and .therefore , small and large values allow only balanced disagreement configurations , in which the numbers of the agents believing in the opposite opinions are close to . however , the average size of the minority increases when is large .this is particularly the case for [ fig .[ fig : r_minority_minoritymin](b ) ] .this increase occurs for the following reason . with a strong confirmation bias, agents end up with an opinion consistent with a small number of signals perceived in the early stages , and both signals are equally likely to be observed in the early stages under neutral initial conditions . in the extreme case in which , agents reinforce the opinion that is consistent with their first perceived signal .therefore , unbalanced disagreement configurations are rarely realized when is large .figures [ fig : r_bm ] and [ fig : r_minority_minoritymin ] suggest that the agreement is unlikely to be reached in a large population .to examine the effect of the population size , we defined as the value of such that a threshold number of runs among runs end with agreement . 
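the quantity just defined is located numerically by bisection , as described in the next paragraph ; a rough sketch of that procedure is given below , reusing the simulate() function from the earlier code block . the number of runs , the agreement threshold and the tolerance are illustrative , and since the agreement counts are noisy the bracketing is only approximate .

```python
import numpy as np

def agreement_count(N, theta, q, runs=100, rng=None):
    """Number of runs, out of `runs`, that end with all agents holding the same opinion."""
    rng = np.random.default_rng(rng)
    count = 0
    for _ in range(runs):
        opinions = simulate(N=N, theta=theta, q=q, rng=rng)   # simulate() from the sketch above
        count += int(opinions.min() == opinions.max())
    return count

def q_c(N, theta, threshold=95, runs=100, tol=1e-2, rng=0):
    """Bisection estimate of the largest q for which at least `threshold` of `runs` runs agree.
    The counts are noisy, so the bracketing is only approximate."""
    rng = np.random.default_rng(rng)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if agreement_count(N, theta, mid, runs=runs, rng=rng) >= threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(q_c(N=50, theta=0.6))
```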
for a given value, we determined by the bisection method .the number of agreement runs may not monotonically change in because the number of runs is finite .therefore , the bisection method does not perfectly work in general .however , we corroborated that the following results were negligibly affected by the lack of monotonicity .the dependence of on is shown in fig .[ fig : qcr_vs_n](a ) for three threshold values .for example , the results for the threshold value ( shown by circles ) indicate that at least runs among the runs end up with disagreement when .we set and .to explore the possibility of disagreement in large populations , we set close to .it should be noted that fig .[ fig : r_bm ] indicates that the probability of disagreement is small for a large value . in fig .[ fig : qcr_vs_n](a ) , quickly decreases for and gradually decreases for .disagreement often occurs for large unless is small .nevertheless , fig .[ fig : qcr_vs_n](a ) suggests that the range of for which agreement always occurs survives for diverging . in generating fig .[ fig : qcr_vs_n](a ) , we used an initial condition in which all the agents had a neutral belief [ i.e. , . to check the effect of the initial condition ,we investigated the dependence of on under two other initial conditions . in the bimodal initial condition , we initially set ( ) and ( ) .we assumed that was even for this initial condition .in the so - called most unbalanced initial condition , we set and ( ) . the numerical results for the two initial conditions are shown in figs .[ fig : qcr_vs_n](b ) and [ fig : qcr_vs_n](c ) .the parameter values , are the same as those used in fig . [fig : qcr_vs_n](a ) .the transition point decreases with more rapidly with the bimodal initial condition [ fig .[ fig : qcr_vs_n](b ) ] than with the neutral initial condition [ fig .[ fig : qcr_vs_n](a ) ] .this result is intuitive : the bimodal initial condition paves the way to disagreement .in contrast , under the most unbalanced initial condition is almost constant near irrespective of .therefore , disagreement is highly unlikely unless the confirmation bias is strong ( i.e. , is greater than ) .the results shown in fig . [fig : qcr_vs_n ] suggest that the eventual behavior of the model strongly depends on the initial condition even after the results are averaged over runs .our numerical results are summarized as follows . when the confirmation bias is absent ( i.e. , ) , the opinion dynamics under the bayesian update rule leads to the complete agreement among agents .the behavior of the model is similar to majority rule dynamics ( fig .[ fig : meanp_ba ] ) . when the confirmation bias is present , disagreement is a likely outcome , particularly for a strong confirmation bias ( i.e. , large ) .disagreement is also more likely for a lower fidelity of the signal ( i.e. , ) and a larger system size .the transition line separating the parameter region in which both agreement and disagreement can occur and that in which only agreement occurs is approximately given by when .this line is identical to the one determined by rabin and schrag for their model for a single agent s decision making . 
finally , the behavior of the model strongly depends on the initial condition .our model and results are different from orlan s , although orlan s model employs multiple agents that perform the bayesian updates under a confirmation bias .first , the belief of each agent is binary in orlan s model , whereas our model introduces an infinite range of discrete beliefs , as in .second , interaction between agents is introduced differently in the two models . in orlan s model, each agent refers to the global fraction of agents believing in one of the two opinions . in our model, agents refer to other opinions by peer - to - peer interaction , i.e. , by receiving a binary signal that is correlated with the belief of the sender .third , the stochastic dynamics of orlan s model is ergotic when the collective opinion does not reach agreement .the collective opinion obeys a stationary distribution , irrespective of the initial condition .in contrast , in all our simulations , the stochastic dynamics of our model was nonergotic , such that the final configuration depended on the initial condition in a wide parameter region .in social science studies of polarization , several authors analyzed bayesian models in which different agents receiving a series of common signals end up in disagreement .the proposed mechanisms governing disagreement include different initial beliefs or factors that affect perception of later incoming signals , different update rules , and ambiguity aversion .these models and ours are different in three major ways .first , a ground truth opinion corresponding to the state of nature is assumed in these models but not in ours .second , public signals commonly received by different agents are assumed in these models but not in ours .third , the agents do not have direct peer - to - peer interaction in these models , but they do in ours. models with interacting bayesian agents , which show disagreement ( reviewed in ref . ) , are also different from our model in the first respect .it should be noted that zimper and ludwig discussed confirmation bias with their bayesian model .however , they derived a confirmation bias from their model , rather than assuming one , such that their results pertaining to confirmation bias were also distinct from ours . extending our model to the case of networksis straightforward .for example , we can select a recipient of the signal with probability and then select the sender with equal probability among the neighbors of the recipient on the network .another possible update rule is to select the sender first and then the recipient among the sender s neighbors .yet another possibility is to select a link with equal probability and designate one of the two agents as sender and the other as recipient . 
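the three update rules just mentioned differ only in how the ( sender , recipient ) pair is drawn ; a minimal sketch of the three sampling schemes follows , assuming the networkx library for the graph ( the library choice and the example graph are illustrative ) .

```python
import random
import networkx as nx

def draw_pair(G, rule, rng=random):
    """Return a (sender, recipient) pair under one of the three update rules discussed above."""
    if rule == "recipient-first":
        recipient = rng.choice(list(G.nodes))
        sender = rng.choice(list(G.neighbors(recipient)))
    elif rule == "sender-first":
        sender = rng.choice(list(G.nodes))
        recipient = rng.choice(list(G.neighbors(sender)))
    elif rule == "link":
        u, v = rng.choice(list(G.edges))
        sender, recipient = (u, v) if rng.random() < 0.5 else (v, u)
    else:
        raise ValueError("unknown rule: %s" % rule)
    return sender, recipient

G = nx.barabasi_albert_graph(100, 2)
print(draw_pair(G, "recipient-first"), draw_pair(G, "link"))
```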
on heterogeneous networks , the results may depend on the update rule , as is the case for the voter model . extension of the model to the case of heterogeneous confirmation bias may also be interesting . neurological evidence shows that different individuals have different confirmation bias strengths . the strength of the confirmation bias and the position of the node in a social network may be correlated and affect the dynamics . it is also straightforward to extend the model to the case of more than two opinions . these and other extensions , along with the study of analytically tractable models that capture the essence of the present study , warrant future work . we thank mitsuhiro nakamura , taro takaguchi , and shoma tanabe for critical reading of the manuscript . this work is supported by grants - in - aid for scientific research [ grant 23681033 and innovative areas " systems molecular ethology " ( grant no . 20115009 ) ] from mext , japan .
[ figure captions , kept as placeholders : fig . 1 , the return map of the mean belief , obtained by recording pairs over many runs , binning them into classes and plotting the class averages together with the diagonal as a guide ; fig . 2 , the fraction of runs ending in disagreement , with solid lines marking the boundary between zero and positive disagreement probability , a dashed theoretical transition line in panel ( a ) , and neutral initial beliefs ; fig . 3 , the probability flow of the random walker shown as arrows of proportional size on either side of the transition , with the arrow sizes normalized independently in the two panels ; fig . 4 , the average size of the minority , where the black region represents runs that all end with agreement so that the average size of the minority is undefined . ]
| we propose a collective opinion formation model with a so - called confirmation bias . the confirmation bias is a psychological effect with which , in the context of opinion formation , an individual in favor of an opinion is prone to misperceive new incoming information as supporting the current belief of the individual . our model modifies a bayesian decision - making model for single individuals [ m. rabin and j. l. schrag , q. j. econ . 114 , 37 ( 1999 ) ] for the case of a well - mixed population of interacting individuals in the absence of the external input .
we numerically simulate the model to show that all the agents eventually agree on one of the two opinions only when the confirmation bias is weak . otherwise , the stochastic population dynamics ends up creating a disagreement configuration ( also called polarization ) , particularly for large system sizes . a strong confirmation bias allows various final disagreement configurations with different fractions of the individuals in favor of the opposite opinions . pacs numbers : 87.23.ge , 02.50.ey , 02.50.le |
the credit valuation adjustment ( cva ) is , by definition , the difference between the risk - free portfolio value and the true portfolio value that takes into account default risk of the counterparty . in other words , cva is the market value of counterparty credit risk .after the financial crisis in 2007 - 2008 , it has been widely recognized that even major financial institutions may default .therefore , the market participants has become fully aware of counterparty credit risk . in order to reflect the counterparty credit risk in the price of over - the - counter ( otc ) derivative transactions ,cva is widely used in the financial institutions today .although duffie - huang has already introduced the basic idea of cva in 1990 s , several people reconsidered the theory of cva related to collateralized derivatives ( cf . ) and also efficient numerical calculation methods appeared(cf . ) .there are two approaches to measuring cva : unilateral and bilateral ( cf . ) . under the unilateral approach, it is assumed that the bank that does the cva analysis is default - free .cva measured in this way is the current market value of future losses due to the counterparty s potential default .the problem with unilateral cva is that both the bank and the counterparty require a premium for the credit risk they are bearing and can never agree on the fair value of the trades in the portfolio .therefore , we have to consider not only the market value of the counterparty s default risk , but also the bank s own counterparty credit risk called debit value adjustment ( dva ) in order to calculate the correct fair value . bilateral cva ( it is calculated by netting unilateral cva and dva ) takes into account the possibility of both the counterparty default and the own default .it is thus symmetric between the own company and the counterparty , and results in an objective fair value calculation .mathematically , unilateral cva and dva are calculated in the same way , and bilateral cva is the difference of them .so we focus on the calculation of unilateral cva in this paper .cva is measured at the counterparty level and there are many assets in the portfolio generally .therefore , we have to be involved in the high dimensional numerical problem to obtain the value of cva .this is one of the reasons why cva calculation is difficult . 
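before the formal setup below , it may help to see the quantity being computed : in discretized form , unilateral cva is a time integral of the discounted positive exposure weighted by the loss given default , the survival probability and the default intensity . the monte carlo sketch below assumes a constant hazard rate , a constant loss given default and a flat discount rate , and takes simulated exposure paths as input ; all parameter values and the left - endpoint time discretization are illustrative , and the toy exposure ( a one - dimensional brownian motion ) merely echoes the numerical example given later in the paper .

```python
import numpy as np

def cva_mc(exposure_paths, times, hazard=0.02, lgd=0.6, rate=0.01):
    """Discretized unilateral CVA estimate from simulated exposure paths.
    exposure_paths : array of shape (n_paths, n_times), simulated portfolio values V(t_i)
    times          : increasing time grid t_0 = 0 < ... < t_n = T
    CVA ~ E[ sum_i lgd * exp(-hazard*t_i) * hazard * exp(-rate*t_i) * max(V(t_i), 0) * dt_i ]."""
    dt = np.diff(times, append=times[-1])        # left-endpoint weights (last point gets weight 0)
    survival = np.exp(-hazard * times)
    discount = np.exp(-rate * times)
    positive_exposure = np.maximum(exposure_paths, 0.0)
    integrand = lgd * survival * hazard * discount * positive_exposure
    return float(np.mean(np.sum(integrand * dt, axis=1)))

# toy exposure: a one-dimensional Brownian motion on a yearly grid over ten years
rng = np.random.default_rng(0)
times = np.linspace(0.0, 10.0, 11)
increments = rng.normal(scale=np.sqrt(np.diff(times, prepend=0.0)), size=(10000, 11))
paths = np.cumsum(increments, axis=1)
print(cva_mc(paths, times))
```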
on the other hand, each payoff usually depends only on a few assets .we will focus on this property and suggest an efficient calculation methods of cva in the present paper .let us consider the portfolio consist of the contracts on one counterparty .let be -valued stochastic processes we think that is an underlying process .we consider the model that the macro factor is determined by and the payoff of each derivative at maturity is the form of let be the final maturity of all the contracts in the portfolio .let be the default time of the counterparty , be its hazard rate process , be the process of loss when the default takes place at time , and be the discount factor process from to .we assume that is the function of and that and are the function of let be total value of all contracts in the portfolio at time under the assumption that counterparty is default free .then is given by ,\ ] ] where denotes the expectation with respect to the risk neutral measure .then unilateral cva on this portfolio is the restructuring cost when the counterparty defaults .so unilateral cva is given by \nonumber\\ & = e[\int_0^t l(t)\exp(-\int_0^t \lambda(s)ds ) \lambda(t)d(0,t ) ( \tilde{v}_0(t ) \vee 0 ) dt ] \nonumber\\ & = e[\int_0^t l(t)\exp(-\int_0^t \lambda(s)ds ) \lambda(t ) ( v_0(t ) \vee 0 ) dt ] \label{defcva},\end{aligned}\ ] ] where ,\ ] ] and is a function such as since is a function of we denote it by then cva is given by the following form . \vee 0 ) dt ] \label{defcva2}.\end{aligned}\ ] ] now we prepare the mathematical setting .let be fixed , , and .+ let be the borel algebra over and be the wiener measure on let be given by then is a -dimensional brownian motion .let let here denotes the space of -valued smooth functions defined in whose derivatives of any order are bounded .we regard elements in as vector fields on now let us consider the following stratonovich stochastic differential equations . where + let and be then we have there is a unique solution to this equation. then also satisfies the solution to the following stratonovich stochastic differential equation . where is we assume that vector fields satisfy condition ( ufg ) stated in the section [ vec ] .let be defined by ( [ defe ] ) in section [ vec ] . by ,if the distribution law of under has a smooth density function for + let we assume that the underlying asset process is .we also assume that let denotes the space of functions on given by where .+ denotes the space of lipschitz continuous functions on and we define a semi - norm on by let be the linear subspace of spanned by + + we define linear operators by , \quad f\in { lip}({\bf r}^n),\ ] ] and by , \quad f \in { lip}({\bf r}^{\tilde{n}_m}).\ ] ] we remind that is represented by we assume that \times { \bf r}^n \to [ 0,\infty) ] and such that for all . ] inductively by } = v_i , \quad i = 0,1,\ldots , d,\ ] ] } = [ v_{[\alpha ] } , v_i ] , \qquad i = 0,1,\ldots , d.\ ] ] here for and we assume that a system of vector fields satisfies the following condition ( ufg ) .( ufg ) there is an integer and there are functions satisfying the following . } = \sum_{\beta \in { \cal a}^{*}_{\leqq \ell_0 } } \varphi_{\alpha,\beta}v_{[\beta ] } } , \qquad \alpha \in { \cal a}^{*}.\ ] ] a system of vector fields also satisfies the condition . 
_ proof ._ we prove following by induction on } ( f \circ \pi_m ) = ( \tilde{v}^{(m)}_{[\alpha ] } f)\circ \pi_m , \quad f \in c_b^{\infty}({\bf r}^{\tilde{n}_m } ) , \end{aligned}\ ] ] for any and + it is trivial in the case of by the assumption for induction , } ( f \circ \pi_m ) = ( { v}_{[\alpha ] } v_i- v_i { v}_{[\alpha ] } ) ( f \circ \pi_m)\ ] ] } ( ( \tilde{v}^{(m)}_{i } f)\circ \pi_m ) - v_i ( ( \tilde{v}^{(m)}_{[\alpha]}f ) \circ \pi_m)\ ] ] }(\tilde{v}^{(m)}_{i } f ) ) \circ \pi_m - ( \tilde{v}^{(m)}_{i } ( \tilde{v}^{(m)}_{[\alpha]}f ) ) \circ \pi_m.\ ] ] so we have ( [ inductive ] ) . from ( ufg ) condition , we have } ( f \circ \pi_m)= \sum _ { \beta \in { \cal a}^{*}_{\leqq \ell_0 } } \varphi _ { \alpha , \beta } { v}_{[\beta ] } ( f \circ \pi_m)\ ] ] }^{(m)}f ) \circ \pi_m.\ ] ] let be then }f=(v_{[\alpha]}^{(m)}f)\circ \pi_m \circ j_m\ ] ] }^{(m)}f ) \circ \pi_m ) \circ j_m = \sum _ { \beta \in { \cal a}^{*}_{\leqq \ell_0 } } ( \varphi _ { \alpha , \beta } \circ j_m ) \tilde{v}_{[\beta]}^{(m)}f.\ ] ] so we have our assertion .+ + let be a symmetric matrix given by }(\tilde{x}_m)\tilde{v}^j_{m,[\alpha ] } ( \tilde{x}_m ) , \qquad i , j=1,\ldots , \tilde{n}_m.\ ] ] let and by , we see that if the distribution law of under has a smooth density function for moreover , by we see that we have by .in this section , we use the notation in . we have the following lemma similarly to the proof of lemma 8 ( 3 ) . [ rev ] for any let and then }f ) ( x(t , x ) ) ] = t^{-\|\alpha\| / 2}e^{\mu}[\phi_{\alpha}(t , x ) f(x(t , x))],\ ] ] and , x \in { \bf r}^n , p\in ( 1 , \infty ) } e[|\phi_{\alpha}(t , x)|^p ] < \infty.\ ] ] + let be a smooth function such that let and be then for any [ rev2 ] if then _ proof ._ let then for any and we have then is a cauchy sequence in because \}}\|d\phi(t , x)\|_h , \quad n\geqq m\geqq 1.\end{aligned}\ ] ] because is a closed operator , we have for any .so we have the assertion .+ let us denote [ disclem2 ] let .then there exists a such that \\ & \leqq c \|\nabla f\|_{\infty } \int_{s}^t(r^{-1/2}+(t - r)^{-1/2})dr , \end{aligned}\ ] ] for any and any ._ let be =(p_{t - t}f)(x(t , x^*)).\ ] ] is a -martingale .let let by it formula , note that , and so we have \ ] ] dr\ ] ] + \sum_{j=1}^d \int_s^t e^{\mu}[|(v_jg)(v_j ( p_{t - r}f))(x(r , x^*))|]dr.\ ] ] now by the definition of and , we have , \leqq \int_s^t e^{\mu}[|m(r)|^2]^{1/2}e^{\mu}[|(l_t g)(x(r , x^*))|^2]^{1/2}dr\ ] ] }e^{\mu}[|(l_t g)(x(r , x^*))|^2]^{1/2 } \int_s^t e^{\mu}[|m(r)|^2]^{1/2}dr.\ ] ] by burkholder s inequality , ^{1/2}dr \leqq e^{\mu}[\langle m \rangle_t]^{1/2 } ( t - s ) \leqq \sup_{r\in ( s , t ) } \|v_j ( p_{t - r}f)\|_{\infty}(t - s).\ ] ] on the other hand , we have , \ ] ] \ ] ] for any and any .+ on the other hand notice that we have \ ] ] \ ] ] where ,\ ] ] .\ ] ] let then by lemma [ rev2 ] , let be defined by the formula of lemma [ rev ] . then we have ,\ ] ] and , x\in { \bf r}^n } e^{\mu}[|\phi_{g , i}(t , x)|^p ] < \infty.\ ] ] then there exists a constant such that also we have \|v_j^2 ( p_{t - r}f)\|_{\infty},\ ] ] for any and any . 
+ + let vector field be represented by then we have where and .\ ] ] moreover , we have then by corollary 9 of , since and there is a constant such that and for any and any .+ + so we have \ ] ] letting we have our assertion .let .there exists a constant such that \ ] ] for any and any ._ for , there exists such that and , for any .so we obtain the result from lemma [ disclem2 ] .+ [ disclem1 ] let and there exists a constant such that \nonumber \\\leqq & c(\|\nabla h\|_{\infty}+ \|\nabla^2 h\|_{\infty } ) ( t - t),\label{disc1}\\ \text{and } \nonumber\\ & e^{\mu } [ |g(t , x(t , x))(p^{(m)}_{t - t}(h\vee 0))(\tilde{x}^{(m)}(t,\tilde{x}_m^*))-(h\vee 0)(\tilde{x}^{(m)}(t,\tilde{x}_m^*))| ] \nonumber \\ \leqq & c(\|\nabla h\|_{\infty}+\|\nabla^2 h\|_{\infty } ) ( t - t ) \label{disc2}.\end{aligned}\ ] ] for any _ proof . _ ( [ disc1 ] ) follows from it s formula .so we show ( [ disc2 ] ) .+ let are as defined in ( [ defphi ] ) .let by it s formula so we have -\bar{\varphi}_k(h(\tilde{x}^{(m)}(t,\tilde{x}_m^*))\ ] ] ds\ ] ] ds\ ] ] notice that then we have -\bar{\varphi}_k(h(\tilde{x}^{(m)}(t,\tilde{x}_m^*))|]\ ] ] \|\tilde{l}_m h\|_{\infty}(t - t)\ ] ] ds.\ ] ] on the other hand , we have + let and be defined by the formula of lemma [ rev ] .then it follows that \|_{\infty}\ ] ] \|(\varphi_k \circ h ) \tilde{v}^{(m)}_{i } h \|_{\infty } \leqq c s^{-1/2 } \| \tilde{v}^{(m)}_{i } h \|_{\infty},\ ] ] and \|_{\infty } \leqq c\| ( \tilde{v}^{(m)}_{i})^2 h \|_{\infty}.\ ] ] so we have ds\ ] ] letting we have the assertion .let and .there exists a constant such that \leqq c(t - t),\end{aligned}\ ] ] for any _ proof . _notice that and lemma [ disclem1 ] is valid for . on the other hand , for , we have the expression that so our assertion follows from lemma [ disclem1 ] .let be given by .\ ] ] let be such that then we have is as follows .\ ] ] let be sub -algebra of given by , \ \ell = 1,2,\ldots \ } .\ ] ] + [ time disc ] there exists a constant such that ._ let then by lemma [ disclem2 ] , there is a constant such that + \\ - & e^{\mu } [ g(t_i , x(t_i , x^*))(p_{t_{k}-t_i } \tilde{f}_{k})(x(t_i , x^ * ) ) \vee 0]| dt\\ \leqq & c\sum_{k=1}^k \sum_{i = i_{(k-1)}}^{i_{(k)}-1 } \int_{t_i}^{t_{i+1 } } dt \int_{t_i}^t ( r^{-1/2}+(t_k - r)^{-1/2})dr\\ \leqq & c|\delta| \sum_{k=1}^k \sum_{i = i_{(k-1)}}^{i_{(k)}-1}\int_{t_i}^{t_{i+1 } } ( r^{-1/2}+(t_k - r)^{-1/2})dr . 
\end{aligned}\ ] ] so we have so the assertion follows .to estimate the stochastic mesh operator , we use the following estimation of transition kernel obtained by proposition 8 of .[ heatkernel ] let be given by then for any and there is a such that , \ x , y \in e_m,\ ] ] and , \ x , y \in e_m.\ ] ] in particular , for any and there is a such that , \ x , y \in e_m.\ ] ] let from proposition 13 , 21 and proposition 15 ( 1 ) of , we have the followings .[ basic ] let and then we have = ( p^{(m)}_{s , t}f)(x ) , \qquad \nu_s^{(m)}-a.e.x \in e_m.\ ] ] and \leqq \frac{1}{l } \int_{e_m } \frac{p^{(m)}(t - s , x , y)^2 |f(y)|^2}{q_{s , t}^{(m , l,\omega)}(y)}\ ; dy .\ ] ] [ key ] let then there exists a such that \right)^{1/2 } \\ & \leqq c l^{-(1-\delta)/2}(t_k - t)^{-(1+\delta)(\tilde{n}+1)\ell_0/4 } ( \int_{e_m } f(y)^2 ( 1+|y|^2)^{-\tilde{n}_m}dy)^{1/2}.\end{aligned}\ ] ] for any any and any [ exset ] let } z_l^{(m , k)}(s;\delta).\end{aligned}\ ] ] then we have the followings .+ for any , and , there is a such that ^{1/p}\leqq c_{p,\delta}\varepsilon^{-5\ell_0}l^{-p\delta^2/2 + 1/p}\ ] ] for any , k=1,\ldots , k, ] and now we introduce the following sets and functions .let be given by and let \times e \times \omega \to [ 0 , \infty ) , i=1,2,3, ] and _ proof ._ equation ( [ d3 ] ) follows from lemma [ disclem1 ] .so we will show ( [ d1 ] ) and ( [ d2 ] ) . note that if , both sides are in ( [ d1 ] ) and ( [ d2 ] ) .so we will consider the case . by proposition [ basic ] , we have |g(t , x)| p(t , x^ * , dx)\\ = & \int_{e } e^p[|(q_{t , t_k , \varepsilon}^{(m)}(1-\varphi_{m , k , l})f_{m , k})(x ) -(p_{t_k - t}^{(m ) } ( 1-\varphi_{m , k , l})f_{m ,k})(\pi_m(x ) ) | ] |g(t , x)| p(t , x^ * , dx ) \\ \leqq & \int_{e } ( e^p[(q_{t , t_k , \varepsilon}^{(m ) } |(1-\varphi_{m , k , l})f_{m , k}| ) ( \pi_m(x))]\\ & \qquad \qquad \qquad \qquad \qquad \qquad+ ( p_{t_k - t}^{(m ) } |(1-\varphi_{m , k , l})f_{m , k}| ) ( \pi_m(x ) ) ) |g(t , x)| p(t , x^ * , dx)\\ \leqq & 2 \int_{e } ( p_{t_k - t}^{(m ) } ( 1-\varphi_{m , k , l})|f_{m , k}|)(\pi_m(x ) ) g(t , x ) p(t , x^ * , dx).\end{aligned}\ ] ] using hlder s inequality for we used in the last inequality .so we have equation ( [ d1 ] ) .+ next we will show equation ( [ d2 ] ) . noting that from and , since propsition [ exset ] , . 
and by proposition [ basic ] , we have \\ \leqq & 1_{b^{(m , k)}(t , \delta , l ) } \frac{1}{l } \int_{e_m } \frac{|\varphi_{m , k , l}f_{m ,k}(\tilde{y}_m)|^2p^{(m)}(t_k - t , \tilde{x}_m,\tilde{y}_m)^2}{q_{t , t_k}^{(m , l,\omega)}(\tilde{y}_m)}d\tilde{y}_m\\ \leqq & \frac{2}{l } \int_{e_m } \frac{|\varphi_{m , k , l}f_{m , k}(\tilde{y}_m)|^2}{p^{(m)}(t_k , \pi_m(x^*),\tilde{y}_m ) } p^{(m)}(t_k - t , \tilde{x}_m,\tilde{y}_m)^2 dy.\end{aligned}\ ] ] then we have p(t , x^ * , dx))^{1/2 } \\ \leqq & ( \int_{e } e^p[1_{b^{(m , k)}(t , \delta , l)}e^p [ |(q_{t , t_k , \varepsilon}^{(m)}\varphi_{m , k , l}f_{m , k})(x ) -(p_{t_k - t}^{(m ) } \varphi_{m , k , l}f_{m , k})(x ) |^2| \mathcal{f}_t ] ] p(t , x^ * , dx))^{1/2}\\ \leqq & ( \frac{2}{l } \int_{e_m } \frac{|\varphi_{m , k , l}f_{m , k}(y)|^2}{p^{(m)}(t_k , \pi_m(x^*),\tilde{y}_m ) } ( \int_{e } p^{(m)}(t_k - t , \tilde{x}_m,\tilde{y}_m)^{(1-\delta)+(1+\delta ) } p(t , x^ * , dx))d\tilde{y}_m)^{1/2}\\ \leqq & ( \frac{2}{l } \int_{e_m } \frac{|\varphi_{m , k , l}f_{m , k}(y)|^2}{p^{(m)}(t_k , \pi_m(x^*),\tilde{y}_m)^{\delta } } ( \int_{e_m } p^{(m)}(t , \pi_m(x^ * ) , \tilde{x}_m ) p^{(m)}(t_k - t , \tilde{x}_m,\tilde{y}_m)^{(1+\delta)/\delta}d\tilde{x}_m)^{\delta}d\tilde{y}_m)^{1/2}.\end{aligned}\ ] ] let from lemma [ heatkernel ] , there exists a constant such that we set as }\max_{\substack{m=1,\ldots , m,\\ k=1,\ldots , k}}(\int_{e_m } h_m(x)^{-(\tilde{n}_m+1)\ell_0(1+\delta)/\delta}(1+|x|^2)^{q(1+\delta)/\delta}p^{(m)}(t , \tilde{x}_m^*,x)dx)^{\delta/2}\\ & \quad \quad \times ( \int_e|g(t , x)|dx)^{1/2}.\end{aligned}\ ] ] is bounded by proposition 3 of . then since , we have ^{1/2}|g(t , x)| p(t , x^ * , dx)\\ \leqq & \frac{c_1}{l } \int_{e_m } p^{(m)}(t_k , \tilde{x}_m^*,\tilde{y}_m)^{-\delta } |\varphi_{m , k , l}f_{m , k}(\tilde{y}_m)|^2(1+|\tilde{y}_m|^2)^{-q(1+\delta)}d\tilde{y}_m ( t_k - t)^{-(1+\delta)(n+1)\ell_0/2 } , \\\leqq & c_1l^{-(1-\delta)}(t_k - t)^{-(1+\delta)(\tilde{n}_m+1)l_0/2 } \int_{e_m } |f_{m , k}(\tilde{y}_m)|^2(1+|\tilde{y}_m|^2)^{-q(1+\delta)}d\tilde{y}_m .\end{aligned}\ ] ] since and is lipschitz continuous , so we have the assertion . +let and let and be [ dd ] there exists a constant such that and , for any ._ let us take as if ] and such that then there exists a constant , , such that and for _ proof ._ in this proof , we denote by for simplicity .let let be given by and let be given by then |,\ ] ] since is measurable .+ applying lemma [ improve ] to and we have where ,\\ & i_2=1_{\tilde{b}_{\varepsilon } } \sum_{i=0}^{n-1}(t_{i+1}-t_i ) \theta e^{\mu } [ |g(t_i , x(t_i))||1_{\{f_{p , i}(x(t_i)| \leqq \theta \}}].\end{aligned}\ ] ] by hlder s inequality , ^{\delta } \mu ( |f_{p , i}(x(t_i)| \leqq \theta)^{1-\delta } \leqq c \theta \mu ( |f_{p , i}(x(t_i)| \leqq \theta).\ ] ] from the assumption , we have next we will estimate . .\end{aligned}\ ] ] is dominated by where ,\ ] ] ,\ ] ] .\ ] ] \ ] ] from proposition [ d12 ] , =\sum_{i=0}^{n-1 } ( t_{i+1}-t_i ) \sum_{\substack{m=1,\ldots , m,\\ k ; t_k \geqq t_{i+1 } } } \int_{e } e^p[d_{1,\varepsilon , l}^{(m , k)}(t,\tilde{x}_m ) ] + next , we will estimate . 
by hlder s inequality ^{1/2}\ ] ] ^{\delta/2}e^{\mu}[1_{\{\sum_{m=1}^m \sum_{k : t_k\geqq t_{i+1 } } ( d_{1,\varepsilon , l}^{(m , k)}(t_i , \pi_m(x(t_i ) + d_{3,\varepsilon , l}^{(m , k)}(t_i , \pi_m ( x(t_i ) ) > \theta/2\}}]^{(1-\delta)/2},\ ] ] ^{1/2}\ ] ] )^{(1-\delta)/2}.\ ] ] so we have \leqq c\sqrt{\frac{2}{\theta } } \sum_{i=0}^{n-1}(t_{i+1}-t_i ) e^p[\sum_{\substack{m=1,\ldots , m,\\k : t_k\geqq t_{i+1 } } } \int_{e_m } d_{2,\varepsilon , l}^{(m , k)}(t_i , x)^2 p^{(m)}(t_i , \tilde{x}_m^ * , x)dx ] ^{1/2}\\ & \times e^p [ \sum_{\substack{m=1,\ldots , m,\\k : t_k\geqq t_{i+1 } } } \int_{e_m } ( d_{1,\varepsilon , l}^{(m , k)}(t_i , x ) + d_{3,\varepsilon , l}^{(m , k)}(t_i , x ) ) p^{(m)}(t_i , \tilde{x}_m^ * , x)dx ] ^{(1-\delta)/2}\\ \leqq & c \theta^{-(1-\delta)/2 } \sum_{i=0}^{n-1}(t_{i+1}-t_i ) \left ( \sum_{k : t_k\geqq t_{i+1 } } \phi^{(k)}(t_i , \varepsilon ; l^{-(1-\delta)^2/2 } , ( 1-\delta^2)(\tilde{n}+1)\ell_0/4 , 0 , 0)\right ) \\ & \times \left(\sum_{k : t_k\geqq t_{i+1 } } \phi^{(k)}(t_i , \varepsilon ; l^{-(1-\delta)^4/2 } , 0 , 1 , ( 1-\delta)/2)\right).\end{aligned}\ ] ] by proposition [ dd ] , \leqq c\theta^{-1/2 } \left ( l^{-(1-\delta)^4 } \hat{e}\left(\varepsilon , ( 1-\delta^2)(\tilde{n}+1)\ell_0/4\right ) + l^{-(1-\delta)^2/2}\varepsilon^{3(1-\delta)/2 } \right).\ ] ] similarly , we have \leqq c \theta^{-(1-\delta)/2 } \sum_{i=0}^{n-1}(t_{i+1}-t_i ) e[\sum_{\substack{m=1,\ldots , m,\\k : t_k\geqq t_{i+1 } } } \int_{e_m } d_{2,\varepsilon , l}^{(k)}(t_i , x)^2 p^{(k)}(t_i , \tilde{x}_k^ * , x)dx ] ^{(1-\delta)}\\ \leqq & c \theta^{-(1-\delta)/2 } \sum_{i=0}^{n-1}(t_{i+1}-t_i ) ( \sum_{k : t_k\geqq t_{i+1 } } \phi^{(k)}(t_i , \varepsilon ; l^{-(1-\delta)^2/2 } , ( 1-\delta^2)(\tilde{n}+1)\ell_0/4 , 0,0 ) ) ^2\\ \leqq & c\theta^{-1 } l^{-(1-\delta)^2 } \hat{e}\left(\varepsilon,(1-\delta^2)(\tilde{n}+1)\ell_0/2 \right).\end{aligned}\ ] ] it follows easily that \leqq c\varepsilon^2.\ ] ] notice that we have \leqq c\left(\theta^{\gamma+1}+\theta^{-1 } l^{-(1-\delta)^2/2 } \left ( l^{-(1-\delta)^2/2 } \hat{e}\left(\varepsilon,(1-\delta^2)(\tilde{n}+1)\ell_0/2 \right)+\varepsilon^{3(1-\delta)/2 } \right ) \right).\ ] ] in particular if we take as then we have \leqq c\left ( l^{-(1-\delta)^2/2 } \left ( l^{-(1-\delta)^2/2 } \hat{e}\left(\varepsilon,(1-\delta^2)(\tilde{n}+1)\ell_0/2 \right)+\varepsilon^{3(1-\delta)/2 } \right ) \right)^{(1+\gamma)/(2+\gamma ) } .\ ] ] let be from proposition [ exset ] , we have and theorem [ main2 ] and theorem [ main1 ] follow from the theorem [ convergence ] and theorem [ improve ] .let be 1 dimensional brownian motion .let let be \vee0 ) \ dt ] = \frac{2}{3 \sqrt{2\pi}}.\ ] ] let be the discretization of , such that .\ ] ] we approximate as remark [ actual ] , where let be i.i.d sample paths of .we compute and by using of paths . let be another i.i.d sample paths of .we compute by we have when we take we have we also take and we replicate 100 estimators of for each .let `` average '' denote the average and `` standard dviation '' denote the unbiased standard deviation of these 100 estimators of we show the numerical result in table [ nume ] , we show graph of `` average '' and in figure [ fig1 ] and graph of `` standard deviation '' in figure [ fig2 ] . we see in figure [ fig1 ] that both and are close to , but we see in figure [ fig2 ] that is more stable than + 99 avramidis , a.n . , and p. 
hyden , efficiency improvements for pricing american options with a stochastic mesh , in : proceedings of the 1999 winter simulation conference , pp . 344 - 350 . broadie , m. , and p. glasserman , a stochastic mesh method for pricing high - dimensional american options , j. computational finance , 7 ( 4 ) ( 2004 ) , 35 - 72 . duffie , d. , and m. huang , swap rates and credit quality , journal of finance , vol . 51 , no . 3 ( 1996 ) , 921 . fujii , m. , and a. takahashi , derivative pricing under asymmetric and imperfect collateralization , and cva , carf working paper f-240 , december 2010 . glasserman , p. , monte carlo methods in financial engineering , springer , berlin , 2004 . gregory , j. , counterparty credit risk and credit value adjustment : a continuing challenge for global financial markets , second edition , john wiley & sons , 2012 . kusuoka , s. , malliavin calculus revisited , j. math . sci . univ . tokyo 10 ( 2003 ) , 261 - 277 . kusuoka , s. , and d. w. stroock , applications of malliavin calculus ii , j. fac . sci . univ . tokyo sect . ia math . 32 ( 1985 ) , 1 - 76 . kusuoka , s. , and y. morimoto , stochastic mesh methods for hörmander type diffusion processes , preprint . labordere , h. , cutting cva's complexity , risk magazine , july 2012 . liu , g. , and l. j. hong , revisit of stochastic mesh method for pricing american options , operations research letters 37 ( 2009 ) , 411 - 414 . shigekawa , i. , stochastic analysis , translations of mathematical monographs vol . 224 , ams , 2000 . | in this paper , the author considers the numerical computation of cva for large systems by monte carlo methods . he introduces two types of stochastic mesh methods for the computation of cva . in the first method , the stochastic mesh method is used to obtain the future value of the derivative contracts . in the second method , the stochastic mesh method is used only to judge whether the future value of the derivative contracts is positive or not . he discusses the rate of convergence of these methods to the true cva value . jel classification : c63 , g12 . mathematical subject classification ( 2010 ) : 65c05 , 60g40 . keywords : computational finance , option pricing , malliavin calculus , stochastic mesh method , cva |
the concept of entropy plays a central role in information theory , and has found a wide array of uses in other disciplines , including statistics , probability and combinatorics .the _ ( differential ) entropy _ of a random vector with density function is defined as where .it represents the average information content of an observation , and is usually thought of as a measure of unpredictability . in statistical contexts , it is often the estimation of entropy that is of primary interest , for instance in goodness - of - fit tests of normality or uniformity , tests of independence , independent component analysis and feature selection in classification .see , for example , and for other applications and an overview of nonparametric techniques , which include methods based on sample spacings ( in the univariate case ) , histograms and kernel density estimates , among others .the estimator of is particularly attractive as a starting point , both because it generalises easily to multivariate cases , and because , since it only relies on the evaluation of - nearest neighbour distances , it is straightforward to compute . to introduce this estimator ,let be independent random vectors with density on .write for the euclidean norm on , and for , let denote a permutation of such that .for conciseness , we let denote the distance between and the nearest neighbour of . leonenko estimator of the entropy is given by where denotes the volume of the unit -dimensional euclidean ball and where denotes the digamma function .in fact , this is a generalisation of the estimator originally proposed by , which was defined for . for integers we have where is the euler mascheroni constant , so that as .this estimator can be regarded as an attempt to mimic the ` oracle ' estimator , based on a -nearest neighbour density estimate that relies on the approximation the initial purpose of this paper is to provide a detailed description of the theoretical properties of the kozachenko leonenko estimator .in particular , when , we show that under a wide range of choices of ( which must diverge to infinity with the sample size ) and under regularity conditions , the estimator satisfies this immediately implies that in these settings , is efficient in the sense that the fact that the asymptotic variance is the best attainable follows from , e.g. , .when , we show that no longer holds but the kozachenko leonenko estimator is still root- consistent provided is bounded .moreover , when , it turns out a non - trivial bias means that the rate of convergence is slower than in general , regardless of the choice of .these results motivate our second main contribution , namely the proposal of a new entropy estimator , formed as a weighted average of kozachenko leonenko estimators for different values of .we show that it is possible to choose the weights in such a way as to cancel the dominant bias terms , thereby yielding an efficient estimator in arbitrary dimensions , given sufficient smoothness .there have been several previous studies of the kozachenko leonenko estimator , but results on the rate of convergence have until now confined either to the case or ( very recently ) the case where is fixed as diverges .the original paper proved consistency of the estimate under mild conditions in the case . 
proved that the mean squared error of a truncated version of the estimator is when and under a condition that is almost equivalent to an exponential tail ; showed that the bias vanishes asymptotically while the variance is when and is compactly supported and bounded away from zero on its support . very recently , in independent work and under regularity conditions , derived the asymptotic normality of the estimator when , confirming the suboptimal asymptotic variance in this case .previous works on the general case include , where heuristic arguments were presented to suggest the estimator is consistent for general and general fixed and has variance of order for and general fixed . the only work of which we are aware in which is allowed to diverge with is , where the estimator was shown to be consistent for general .other very recent contributions include and . in this paper , we substantially weaken regularity conditions compared with these previous works , and show that with an appropriate choice of , efficient entropy estimators can be obtained in arbitrary dimensions , even in cases where the support of the true density is the whole of .such settings present significant new challenges and lead to different behaviour compared with more commonly - studied situations where the underlying density is compactly supported and bounded away from zero on its support . to gain intuition , consider the following second - order taylor expansion of around a density estimator : when is bounded away from zero on its support, one can estimate the ( smaller order ) second term on the right - hand side , thereby obtaining efficient estimators of entropy in higher dimensions ; however , when is not bounded away from zero on its support such procedures are no longer effective . to the best of our knowledge ,therefore , this is the first time that a nonparametric entropy estimator has been shown to be efficient in multivariate settings for densities having unbounded support .( we remark that when , the histogram estimator of is known to be efficient under fairly strong tail conditions . )although the focus of this work is on efficient estimation , our results have other methodological implications : first , they suggest that prewhitening the data , i.e. replacing with before computing the estimator , where denotes the sample covariance matrix , can yield substantial bias reduction .second , they yield asymptotically valid confidence intervals and hypothesis tests in a straightforward manner .the outline of the rest of the paper is as follows . in section[ sec : bias ] , we give our main results on the bias of kozachenko leonenko estimators , and present examples to understand the strength of the conditions .we also discuss prewhitening the estimators to improve performance .section [ sec : var ] is devoted to the study of the variance of the estimators and their asymptotic normality . in section [ sec : weighted ] , we introduce our weighted kozachenko leonenko estimators , and show both how to choose the weights to achieve efficiency , and how to construct confidence intervals for entropy .proofs of main results are presented in section [ sec : proof ] with auxiliary material and detailed bounds for various error terms deferred to the appendix .we conclude the introduction with some notation used throughout the paper . for and , let be the closed euclidean ball of radius about .we write , and for the operator norm , frobenius norm and determinant , respectively , of . for an dimensional array write . 
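before turning to the bias analysis , here is a minimal implementation of the estimator defined above . because the displayed formula was partly lost in extraction , the expression coded below is the standard form H_n = n^{-1} sum_i log( rho_{(k),i}^d V_d (n-1) e^{-Psi(k)} ) , which reduces to the original kozachenko leonenko estimator when k = 1 ; scipy's k - d tree is used for the nearest - neighbour distances .

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def kl_entropy(X, k=1):
    """Kozachenko-Leonenko entropy estimate (in nats) from an (n, d) sample array."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # distance from each point to its k-th nearest neighbour (the query returns the point itself first)
    rho = cKDTree(X).query(X, k=k + 1)[0][:, k]
    log_Vd = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)   # log volume of the unit d-ball
    return float(np.mean(d * np.log(rho) + log_Vd + np.log(n - 1) - digamma(k)))

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 2))
print(kl_entropy(X, k=3))      # the true value for a standard bivariate normal is log(2*pi*e) ~ 2.84
```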
for a smooth function , we write and for its first , second , and derivatives respectively , for its laplacian , and for its uniform norm . we write to mean that there exists , depending only on and , such that .in this section we study the bias of the kozachenko leonenko estimator . for , we write so that to gain intuition about the bias , we introduce for and , the sequence of distribution functions where further , for , define the limiting distribution function where . that this is the limit distribution for each fixed follows from a poisson approximation to the binomial distribution .we therefore expect that although we do not explicitly use this approximation in our asymptotic analysis of the bias , it motivates much of our development .it also explains the reason for using in the definition of , rather than simply .we will work with the following conditions : * ( a1)*( ) : : is bounded , and writing and , we have that is times continuously differentiable and there exist and a borel measurable function such that for each and , and as , for each . *( a2)*( ) : : .condition * ( a1)*( ) is discussed in section [ sec : assumptions ] below .we are now in a position to state our main result on the bias of the kozachenko leonenko estimator .[ biasthm ] suppose * ( a1)*( ) and * ( a2)*( ) hold for some .let denote any deterministic sequence of positive integers with as for some . then *if and , we have uniformly for as . * if or with either or ] .for , we have it therefore follows that condition * ( a1)*( ) imposes smoothness and tail constraints on the unknown density . in particular, it is reminiscent of more standard hlder smoothness conditions , though it also requires that the partial derivatives of the density vary less where is small . roughly speaking, it also means that the partial derivatives of the density decay about as fast as the density itself in the tails of the distribution .moreover , it can be shown that if * ( a1)*( ) holds for a density then it also holds for any density from the location - scale family : in fact , * ( a1)*( ) is satisfied by many standard families of distributions , as indicated by the following examples .[ eg : normal ] consider the standard -dimensional normal distribution , with density function then , for any , for some polynomial of degree with positive coefficients .moreover , hence , since only when , and since as for all , we may therefore use taylor s theorem and the equivalence of norms on finite - dimensional vector spaces to see that * ( a1)*( ) is satisfied for any with and .[ eg : t ] consider the standard -dimensional -distribution with degrees of freedom , with density function in this case , where is a polynomial of degree with positive coefficients .moreover , it may also be shown that so we may use taylor s theorem to show that * ( a1)*( ) is satisfied for any with and taken be a sufficiently large constant ( which may depend on and ) .although * ( a1)*( ) is rather general for densities with for all , a class of such densities for which the condition does not hold is those of the form for large , which have highly oscillatory behaviour in the tails . on the other hand , * ( a1)*( )apparently precludes points with . 
to provide some guarantees on the performance of kozachenko leonenko estimators in settings where the density vanishes at certain points, we now give a very general condition under which our approach to studying the bias can be applied .[ prop : weakcond ] assume that is bounded , that * ( a2)*( ) holds for some , and let be as in theorem [ biasthm ] .let , let , and assume further that there exists such that the function on given by is real - valued .suppose that is such that as , where . then writing , we have for every that uniformly for . to aid interpretation of proposition [ prop : weakcond ] , we first remark that if * ( a1)( ) * holds , then so does , with , where is defined in below .we can then obtain explicit bounds on the terms in , which enable us to recover theorem [ biasthm](ii ) . for ,consider the density of the distribution .then for any , we may take \ ] ] to deduce from proposition [ prop : weakcond ] that for every , similar calculations show that the bias is of the same order for distributions with .finally , we show that we can even obtain a small bias in settings where all derivatives of are zero at a point where .let .then for any , we may take \ ] ] to deduce from proposition [ prop : weakcond ] once again that for every .so far , we have assumed that the kozachenko leonenko estimator is applied directly to ; when and our regularity conditions hold , theorem [ biasthm](i ) then yields that the dominant asymptotic contribution to the bias of is in this section , we consider the possibility of an alternative strategy , namely to make a transformation to the observations , apply the kozachenko leonenko estimator to the transformed data , and then correct for the transformation in an appropriate way . recall that for any with , we have this motivates us to consider transformed estimators defined as follows : set , and let if is either orthogonal , or a positive scalar multiple of the identity , then and we have gained nothing , so we should look for transformations that are not of this type .in fact , under the conditions of theorem [ biasthm](i ) , we see that the dominant asymptotic contribution to the bias of is in general , it is hard to compare the magnitudes of these two expressions ( [ eq : originalbias ] ) and ( [ eq : lineartrans ] ) .however , in certain specific cases , an explicit comparison is possible .for instance , suppose that we can write for some positive definite , where either for some univariate density , or can be written in the form for some and .then , as shown in propositions [ prop : product ] and [ prop : spherical ] in appendix [ appendix : conds ] , we have from the expressions above , we see that the linear transformation strategy results in reduced asymptotic bias whenever . in fact , by the am gm inequality , is minimised when the eigenvalues of are equal .one option for a practical data - driven approach is to ` prewhiten ' the data by setting , where is the sample covariance matrix and is the sample mean . in order to investigate the empirical performance of the prewhitened estimator with , we generated samples of size from the distributions and , where figures [ fig : prew1 ] and [ fig : prew2 ] present the mean squared errors of both the prewhitened and orignal kozachenko leonenko estimator based on repetitions , suggesting that the prewhitened estimator can yield significant improvements in performance for appropriate choices of . 
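The prewhitening recipe described above can be written down in a few lines. The only mathematical ingredient is the standard transformation rule H(AX + b) = H(X) + log|det A|, so after whitening with the sample covariance the estimate is corrected by adding (1/2)·log det of the sample covariance. A minimal sketch; the function names are mine, and `entropy_fn` stands for any base estimator, e.g. the `kl_entropy` sketch above:

```python
import numpy as np
from scipy.linalg import sqrtm

def prewhitened_entropy(x, entropy_fn, k=1):
    """Whiten with the sample covariance, estimate entropy on the whitened data,
    then undo the linear map using H(AX + b) = H(X) + log|det A|."""
    x = np.asarray(x, dtype=float)
    sigma = np.cov(x, rowvar=False)
    y = (x - x.mean(axis=0)) @ np.linalg.inv(sqrtm(sigma)).real   # Sigma^{-1/2}(x - xbar)
    _, logdet = np.linalg.slogdet(sigma)
    return entropy_fn(y, k=k) + 0.5 * logdet
```

Usage would be, for example, `prewhitened_entropy(x, kl_entropy, k=3)`.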
] ]we now study the asymptotic variance of kozachenko leonenko estimators under the assumption that the tuning parameter is diverging with ( which is necessary for efficiency ) .[ varthm ] assume * ( a1)*(2 ) and that * ( a2)*( ) holds for some .let and denote any two deterministic sequences of positive integers with , with and with , where then as , uniformly for .the proof of this theorem is lengthy , and involves many delicate error bounds , so we outline the main ideas here .first , we argue that where we hope to exploit the fact that .the main difficulties in the argument are caused by the fact that handling the covariance above requires us to study the joint distribution of , and this is complicated by the fact that may be one of the nearest neighbours of or vice versa , and more generally , and may have some of their nearest neighbours in common .dealing carefully with the different possible events requires us to consider separately the cases where is small and large , as well as the proximity of to .finally , however , we can apply a normal approximation to the relevant multinomial distribution ( which requires that ) to deduce the result .we remark that under stronger conditions on , it should also be possible to derive the same conclusion about the asymptotic variance of while only assuming similar conditions on the density to those required in proposition [ prop : weakcond ] , but we do not pursue this here . a straightforward consequence of theorem [ varthm ] is the asymptotic normality of kozachenko leonenko estimators .[ clt ] assume that and the conditions of theorem [ varthm ] are satisfied .if , we additionally assume that . then and as , uniformly for .as was mentioned in the introduction , the asymptotic variance in theorem [ clt ] is best possible .although the results of sections [ sec : bias ] and [ sec : var ] reveal that the estimators can not be efficient when , the insights provided motivate us to propose and analyse alternative estimators of that can be written as weighted averages of kozachenko leonenko entropy estimators . in this section we show that , for carefully chosen weights , they have reduced asymptotic bias while maintaining the same asymptotic variance as kozachenko leonenko estimators , and as such can achieve efficiency in higher dimensions . for such that , we define the weighted estimator where . then , under the conditions of theorem [ biasthm](i ) , we will show that for appropriate choices of and for every , as .this facilitates a choice of and such that , and results in an estimator that has bias for and , under the stronger assumption that , for . 
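The weighted estimator introduced above is just a linear combination of fixed-j Kozachenko–Leonenko estimators, all of which can be read off from a single nearest-neighbour query. A sketch, again in the standard normalisation; the specific efficiency-achieving weights (constructed below via a Vandermonde-type system) are simply passed in as a vector, which for a weighted average is assumed to sum to one. Function names are mine.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropies_all_j(x, kmax):
    """Kozachenko-Leonenko estimates for every neighbour order j = 1, ..., kmax."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    dist, _ = cKDTree(x).query(x, k=kmax + 1)     # column 0 is the point itself
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    js = np.arange(1, kmax + 1)
    return d * np.log(dist[:, 1:]).mean(axis=0) + log_vd + np.log(n - 1) - digamma(js)

def weighted_entropy(x, weights):
    """Weighted average of the fixed-j Kozachenko-Leonenko estimators."""
    weights = np.asarray(weights, dtype=float)
    return float(weights @ kl_entropies_all_j(x, kmax=len(weights)))
```

Passing `weights=[1.0]` recovers the ordinary first-nearest-neighbour estimator.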
in propostion [ weightedbiasprop ] in section [ sec : weightedproof ] we will see that , under stronger assumptions on the smoothness of and for diverging to infinity , the bias of can be expanded in powers of .it then turns out that the weights we should consider are elements of the following set , where the restrictions on the support of are convenient for our analysis .let and let for .a sufficient condition for is that the matrix , with entry is invertible , as then gives the non - zero entries of a vector , where .now define to have entry .since as for , we have as .now , is a vandermonde matrix ( depending only on ) and as such has determinant hence , by the continuity of the determinant and eigenvalues of a matrix , we have that there exists such that , for , the matrix is invertible and where denotes the eigenvalue of a matrix with smallest absolute value .thus , for each , there exists satisfying .the following theorem establishes the efficiency of for arbitrary .the main ideas of the proof are provided in section [ sec : weightedproof ] , with details deferred to appendix [ appendix : weighted ] .[ weightedclt ] assume * ( a1)*( ) , * ( a2)*( ) , and that the conditions on of theorem [ varthm ] hold .assume further that for large and that .* if , and , then uniformly for .* if , ] .recall the definition of in .the first part of lemma [ lemma : hxinvbounds ] below provides crude but general bounds ; the second gives much sharper bounds in a more restricted region .[ lemma : hxinvbounds ] a. assume that has a bounded density and that * ( a2)*( ) holds .then for every and , b. assume * ( a1)*( ) , and let , be such that where is taken from * ( a1)*( ) .then there exists such that for all sufficiently large , and , we have where and if is an even integer .moreover , if , then we are now in a position to prove theorem [ biasthm ] .\(i ) define where is as in assumption * ( a1)*( ) , let and let . recall that and let the proof is based on the fact that and lemma [ lemma : hxinvbounds](ii ) , which allow us to make the transformation . writing for remainder terms to be bounded at the end of the proof , we can write proposition [ prop : moment ] tells us that uniformly for as . noting that , it now remains to bound . henceforth , to save repetition ,we adopt without further mention the convention that whenever an error term inside or depends on , this error is uniform for ; thus as means as . _to bound . first note that for any , where the conclusion follows from hlder s inequality as in the proof of proposition [ prop : moment ] in appendix [ appendix : auxiliary ] .now , from lemma [ lemma : hxinvbounds](i ) , moreover , when , so we deduce that for each , as . since when , this bound on is sufficiently tight . _ to bound . _ for random variables and we have that for every where the final equality follows from standard bounds on the left - hand tail of the binomial distribution ( see , e.g. , equation ( 6 ) , page 440 ) .we therefore deduce from and the cauchy schwarz inequality that for each , ( with further work , it can in fact be shown that , but this is not necessary for our overall conclusion . 
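The construction of the weight vector described above amounts to solving a small linear system whose coefficient matrix is Vandermonde-type. The exact entries of that matrix are lost in this extraction, so the sketch below is purely illustrative: it takes a hypothetical set of constraints of the form sum_j w_j = 1 and sum_j c_{l,j} w_j = 0 for l = 1, ..., L, with c_{l,j} a placeholder of my own choosing (the paper's actual entries involve ratios of gamma functions), and returns weights supported on L+1 chosen indices.

```python
import numpy as np

def solve_weights(support, constraint_rows):
    """Weights w on the given support with sum(w) = 1 and each constraint row r
    satisfying sum_j r_j * w_j = 0 (requires a square, invertible system)."""
    a = np.vstack([np.ones(len(support))] + list(constraint_rows))
    b = np.zeros(a.shape[0]); b[0] = 1.0
    return np.linalg.solve(a, b)

# Illustration with placeholder entries c_{l,j} = j**(2*l/d); not the paper's matrix.
d, L = 5, 1
support = np.arange(1, L + 2)                       # L+1 non-zero weights
rows = [support ** (2 * l / d) for l in range(1, L + 1)]
w = solve_weights(support, rows)
print(w, w.sum(), np.dot(rows[0], w))               # weights, 1.0, ~0.0
```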
) _ to bound ._ we can write \mathrm{b}_{k , n - k}(s ) \ , ds \,dx \\ & = : r_{31}+r_{32},\end{aligned}\ ] ] say .now , note that } \sup_{x \in \mathcal{x}_n } \frac{g_*^{d\beta}(x)s}{f(x ) } \rightarrow 0.\ ] ] it follows by lemma [ lemma : hxinvbounds](ii ) that there exists such that for , , and , taking , thus , for and , using the fact that for , \mathrm{b}_{k , n - k}(s ) \,ds \,dx \\ & \leq \frac { \gamma(k+4/d ) \gamma(n)}{2(d+2)^{2}v_d^{4/d } \gamma(k ) \gamma(n+4/d ) \delta_n^{2/d } } \int_{\mathcal{x}_n } g_*(x)^2 f(x)^{1 - 2/d } \ , dx \\ & \hspace{3 cm } + \frac{2c^{2 } \gamma(k+(4 + 2 \eta)/d ) \gamma(n ) } { \gamma(k ) \gamma(n+(4 + 2 \eta)/d ) \delta_n^{(2 + 2 \eta)/d } } \int_{\mathcal{x}_n } g_*(x)^2 f(x)^{1 - 2/d } \,dx.\end{aligned}\ ] ] on the other hand , we also have for and that it follows by proposition [ prop : moment ] and the fact that that _ to bound . consider the random variable .then , using we conclude that for every , _ to bound ._ it follows from proposition [ prop : moment ] and the fact that as that for each , _ to bound . by stirling s formula , as required .\(ii ) the bias calculation in this setting is very similar , but we approximate simply by .writing for the modified error terms , we obtain here , , and , for every in both cases . on the other hand , for every .similarly , finally , for every . this concludes the proof . to deal with the integrals over , we first observe that by , for every , moreover , for every . now a simpler form of lemma [ lemma : hxinvbounds](ii ) states that there exists such that for and , we have } \bigl|v_df(x)h_x^{-1}(s)^d - s\bigr| \leq \frac{2dv_d^{-\tilde{\beta}/d}}{d+\tilde{\beta}}s^{1+\tilde{\beta}/d}\frac{c_{n,\tilde{\beta}}(x)}{f(x)^{1+\tilde{\beta}/d}}.\ ] ] it follows from , , and an almost identical argument to that leading to that for every , as required .since this proof is long , we focus here on the main argument , and defer proofs of bounds on the many error terms to appendix [ appendix : varproof ] .we use the same notation as in the proof of theorem [ biasthm ] , except that we change the definition of so that .we write for this newly - defined .similar to the proof of theorem [ biasthm ] , all error terms inside and that depend on are uniform for . first note that our first claim is that for every , \ ] ] as .the proof of this claim uses similar methods to those in the proof of theorem [ biasthm ] .in particular , writing for remainder terms to be bounded later , we have \,dx + \sum_{i=1}^4 s_i \nonumber \\ & = \int_\mathcal{x } f(x ) \log^2 f(x ) \,dx + \sum_{i=1}^5 s_i + \frac{1}{k}\{1+o(1)\ } , \end{aligned}\ ] ] as . in appendix[ appendix : s ] , we show that for every , \ ] ] as .the next step of our proof consists of showing that for every , as .define so that writing , we therefore have that to deal with the first term in , we make the substitution and let . writing for remainder terms to be bounded later , for every , in appendix [ appendix : t ] , we show that for every , as .we now deal with the second term in . 
by similar arguments to those used previously , for every , in appendix [ appendix : u ], we show that for every , .we also require some further notation .let denote the conditional distribution function of given .let , and let so that and for every .for pairs with and , let , and write write with for , let denote the distribution function of a random vector at , and let denote the standard univariate normal distribution function .writing for remainder terms to be bounded later , we have the proof is completed by showing in appendix [ appendix : w ] that for every , as .recall from the introduction that by the central limit theorem applied to , for the first part it suffices by slutsky s theorem to show that as . by theorem [ biasthm ] , under the conditions imposed , we know that as . moreover , from in the proof of theorem [ varthm ] , we have that as .hence as .now by similar , but simpler , methods to those employed in the proof of theorem [ biasthm ] , we have that for all , thus , as and the first result follows .the second part is an immediate consequence of and , together with an application of the cauchy schwarz inequality .the first step towards proving theorem [ weightedclt ] is to gain a higher - order understanding of the bias of kozachenko leonenko estimators , which is provided by the following proposition . the proof is given in appendix [ appendix : weighted ] .[ weightedbiasprop ] suppose that * ( a1)*( ) and * ( a2)*( ) hold for some .let denote any deterministic sequence of positive integers with as for some .then there exist , depending only on and , such that for each , as , uniformly for , where if , and if is an even integer. note that in the setting of theorem [ biasthm](i ) we have proposition [ weightedbiasprop ] provides justification for our choice of .the following result on the variance of is proved similarly to theorem [ varthm ] , and the proof is given in appendix [ appendix : weighted ] .[ weightedvarprop ] assume that the conditions of theorem [ varthm ] hold . for sufficiently large , find such that .then as , uniformly for .we are now in a position to prove our main result on the theoretical properties of our weighted kozachenko leonenko estimators .\(i ) by proposition [ weightedbiasprop ] and the fact that , we have under our conditions on . by proposition [ weightedvarprop ]we have .the results therefore follow by very similar arguments to those presented in theorem [ clt ] .\(ii ) by proposition [ weightedbiasprop ] again , we have similarly , by proposition [ weightedvarprop ] , the result follows by an application of chebychev s inequality .similar to , we may write thus , mimicking arguments in the proofs of theorems [ biasthm ] and [ varthm ] , for any , as . hence , by the cauchy schwarz inequality , \rightarrow \mathbb{e}\{\log^m f(x_1)\}\end{aligned}\ ] ] as .thus . similarly , from and cauchy schwarz , as .we conclude that .the fact that is also asymptotically unbiased can be deduced from ( and the subsequent bounds on the remainder terms ) in the proof of theorem [ varthm ] .the first conclusion follows , and the second can be deduced from the first , together with the fact that as . * acknowledgements : * the second author is grateful to sebastian nowozin for introducing him to this problem , and to grard biau for helpful discussions. 99 beirlant , j. , dudewicz , e. j. , gyrfi , l. , and van der meulen , e. c. ( 1997 ) nonparametric entropy estimation : an overview ._ , * 6 * , 1739 .biau , g. and devroye , l. 
( 2015 ) _ lectures on the nearest neighbor method_. springer , new york .cai , t. t. and low , m. g. ( 2011 ) testing composite hypotheses , hermite polynomials and optimal estimation of a nonsmooth functional ._ , * 39 * , 10121041 .cressie , n. ( 1976 ) on the logarithms of high - order spacings ._ biometrika _ , * 63 * , 343355 .delattre , s. and fournier , n. ( 2016 ) on the kozachenko leonenko entropy estimator .available at ` arxiv:1602.07440 ` .gao , w. , oh , s. and viswanath , p. ( 2016 ) demystifying fixed -nearest neighbor information estimators .available at ` arxiv:1604.03006 ` .goria , m. n. , leonenko , n. n. , mergel , v. v. and novi inverardi , p. l. ( 2005 ) a new class of random vector entropy estimators and its applications in testing statistical hypotheses . _ j. nonparametr ._ , * 17 * , 277297 .gtze , f. ( 1991 ) on the rate of convergence in the multivariate clt ._ , * 19 * , 724739. hall , p. and morton , s. c. ( 1993 ) on the estimation of entropy ._ ann . inst ._ , * 45 * , 6988 .ibragimov , i. a. and khasminskii , r. z. ( 1991 ) asymptotically normal families of distributions and efficient estimation .statist . _ ,* 19 * , 16811724 .kozachenko , l. f. and leonenko , n. n. ( 1987 ) sample estimate of the entropy of a random vector ._ , * 23 * , 95101 .kwak , n. and choi , c. ( 2002 ) input feature selection by mutual information based on parzen window ._ ieee trans . pattern anal ._ , * 24 * , 16671671 .laurent , b. ( 1996 ) efficient estimation of integral functionals of a density ._ , * 24 * , 659681 .lepski , o. , nemirovski , a. and spokoiny , v. ( 1999 ) on estimation of the norm of a regression function .fields _ , * 113 * , 221253 .levit , b. ya .( 1978 ) asymptotically efficient estimation of nonlinear functionals ._ , * 14 * , 204209 .miller , e. g. and fisher , j. w. ( 2003 ) ica using spacings estimates of entropy ._ j. mach ._ , * 4 * , 12711295 .mnatsakanov , r. m. , misra , n. , li , s. and harner , e. j. ( 2008 ) -nearest neighbor estimators of entropy .methods statist . _ , * 17 * , 261277 .paninski , l. ( 2003 ) estimation of entropy and mutual information ._ neural comput ._ , * 15 * , 11911253 .shorack , g. r. and wellner , j. a. ( 2009 ) _ empirical processes with applications to statistics_. .singh , h. , misra , n. , hnizdo , v. , fedorowicz , a. and demchuk , e. ( 2003 ) nearest neighbor estimates of entropy ._ , * 23 * , 301321 .singh , s. and pczos , b. ( 2016 ) analysis of nearest neighbor distances with application to entropy estimation .available at ` arxiv:1603.08578 ` .tsybakov , a. b. and van der meulen , e. c. ( 1996 ) root- consistent estimators of entropy for densities with unbounded support . __ , * 23 * , 7583 .vasicek , o. ( 1976 ) a test for normality based on sample entropy ._ j. r. stat ._ , * 38 * , 5459 .fix ] such that for .but then } \delta^\epsilon \sup_{x : f(x ) \geq \delta } g(x ) \leq \max\bigl\{1,c^\epsilon \sup_{x : f(x ) \geq \delta_0 } g(x)\bigr\ } \leq c^\epsilon \delta_0^{-\epsilon}.\ ] ] hence , defining , we have which establishes our first claim .now choose and let .then , by hlder s inequality , since .\(i ) the lower bound is immediate from the fact that for any . 
for the upper bound ,observe that by markov s inequality , for any , the result follows on substituting for .\(ii ) we first prove this result in the case ] , we have now , by our hypothesis , we know that as .hence there exists such that for all , all and all , we have moreover , there exists such that for all , all and all we have finally , we can choose such that and such that for .it follows that for , and , we have that ] follows .the general case can be proved using very similar arguments , and is omitted for brevity .let be a density of the form for some positive definite and density , and let .suppose further that for some invertible .then thus , provided , we have this is proportional to as a function of if there exists such that where is the kronecker delta . in the next two propositions , we provide two simple , but reasonably general , conditions under which holds .[ prop : product ] suppose that and that satisfies * ( a1)*( ) and * ( a2)*( ) for some and .suppose further that for some density on . then holds . for , we have we may integrate by parts to write , for any , since satisfies * ( a1)*( ) for , it must be the case that has a bounded second derivative and hence must satisfy as .hence , taking and in we see that when , we find that is independent of , as required . [prop : spherical ] suppose that and that satisfies * ( a1)*( ) and * ( a2)*( ) for some and .suppose further that for some function and some , where . then ( [ d2int ] ) holds . for , and , moreover , for , since this mixed second partial derivative is an odd function of both and when , it follows that finally , when , is independent of , so the claim follows ._ to bound ._ we have it therefore follows from lemma [ lemma : hxinvbounds](ii ) that for every , latexmath:[\ ] ] by similar means we can establish the same bound for the approximation of by . to conclude the proof , we write , where and deal with these two regions separately . by slepian s inequality , for all and , so by , for every , by lemma [ lemma : hxinvbounds](ii ) we have , uniformly in , with similar bounds holding for and .we therefore have that for each , a corresponding lower bound of the same order for the left - hand side of follows from and the fact that uniformly in .it now follows from , , and that for each as required .the proof of the proposition is very similar to the proof of theorem [ biasthm ] , with the main difference being in bounding the error corresponding to . in this casewe just need to bound the error in a higher - order taylor expansion of this can be done using lemma [ lemma : hxinvbounds](ii ) ; we omit the details for brevity .we first study the diagonal terms in the standard expansion of the variance of , and claim that in the proof of theorem [ varthm ] we studied the terms with and showed that , for with , as . for , using similar arguments to those used in the proof of theorem [ biasthm ] , we have that as .now follows on noting that and each has at most non - zero entries .we now study the off - diagonal terms in the expansion of the variance of , and claim that as . in light of, it is sufficient to show that as .we suppose here that and ; the case is dealt with in the proof of theorem [ varthm ] . 
we broadly follow the corresponding part of the proof of theorem [ varthm ] , though we require some new ( similar ) notation .let denote the conditional distribution function of given and let denote the conditional distribution function of given .let let , and let for pairs with and , let and write also write where .writing for remainder terms to bounded later , we have as .the final equality follows by the fact that , for borel measurable sets , _ to bound _ : similar to our work used to bound , we may show that as , for fixed . also , as . using these facts and very similar arguments to those used to bound we have _ to bound _ : let , where .further , let then , as in , we have we now proceed to approximate by and by .this is rather similar to the corresponding approximation in the bounds on , so we only present the main differences .first , let we also define and set .our aim now is to provide a bound on . by the smoothness of the function and * ( a1)*(2 )we have that for . from this and similar bounds to , we find that . we therefore have which is as in the case except with the factor of missing .note now that using , similar bounds to and the same arguments as leading up to , where and now let and . similarly to , we have similarly to the arguments leading up to , it follows that where the powers on the log factors are smaller because of the absence of the factor of in .the remainder of the work required to bound is very similar to the work done from to , using also .we then have as required . | many statistical procedures , including goodness - of - fit tests and methods for independent component analysis , rely critically on the estimation of the entropy of a distribution . in this paper , we seek entropy estimators that are efficient in the sense of achieving the local asymptotic minimax lower bound . to this end , we initially study a generalisation of the estimator originally proposed by , based on the -nearest neighbour distances of a sample of independent and identically distributed random vectors in . when and provided ( as well as other regularity conditions ) , we show that the estimator is efficient ; on the other hand , when , a non - trivial bias precludes its efficiency regardless of the choice of . this motivates us to consider a new entropy estimator , formed as a weighted average of kozachenko leonenko estimators for different values of . a careful choice of weights enables us to obtain an efficient estimator in arbitrary dimensions , given sufficient smoothness . in addition to the new estimator proposed and theoretical understanding provided , our results also have other methodological implications ; in particular , they motivate the prewhitening of the data before applying the estimator and facilitate the construction of asymptotically valid confidence intervals of asymptotically minimal width . |
network coding technique has significantly improved the performance of communication networks , and has been studied extensively in the past two decades .index coding problem ( icp ) can be considered as a special case of network coding problem .icp has emerged as an important topic of recent research due to its applications in many of the practically relevant problems including that in satellite networks , topological interference management , wireless caching and cache enabled cloud radio access networks for 5 g cellular systems .the noiseless index coding problem with side information was first studied in as an informed - source coding - on - demand ( iscod ) problem , in which a central server ( sender ) wants to broadcast data blocks to a set of clients ( receivers ) which already has a proper subset of the data blocks .the problem is to minimize the data that must be broadcast , so that each receiver can derive its required data blocks .consider the case of a sender with messages denoted by the set , , is a field with elements , which it broadcasts as coded messages , to a set of receivers , .each receiver wants a subset of the messages , knows a priori a proper subset of the messages , where , and is identified by the pair .the noiseless index coding problem is to find the smallest number of transmissions required and is specified by .the set is referred to as the side information available to the receiver . an index code ( ic ) for a given icp is defined by an encoding function , and a set of decoding functions corresponding to the receivers , such that , in this paper , we consider icp over binary field ( ) .the integer , as defined above is called the length of the index code . for noiseless broadcast channels , an index code of minimum lengthis called an optimal index code , . even though this is interestingtheoretically , index coding over noisy channels is more practical .noisy index coding over a binary symmetric channel was considered in , .binary transmission of index coded bits were assumed .for this set up the problem of identifying the number of optimal index codes possible for a given icp is important and that was studied in , . a special case of icp over gaussian broadcast channel , based on multidimensional qam constellation with points , where every receiver demands all messages ( which it does not have ) from the source , was considered in .the case of noisy index coding over awgn broadcast channel , along with minimum euclidean distance decoding , was studied in , where the receivers demand a subset of messages as defined in . an algorithm to map the broadcast vectors to psk signal points so that the receiver with maximum side information gets maximum psk side information coding gain , was also proposed .the algorithm assumes that an index code is given and is applicable only for one specific order of priority ( in the non increasing order of amount of side information ) among the receivers .minimum euclidean distance of the effective broadcast signal set seen by a receiver , was considered as the basic parameter which decides the message error probability of the receiver and the proposed algorithm tries to maximize the minimum euclidean distance . 
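To make the definitions above concrete, here is a tiny hypothetical instance of my own (not taken from the paper): three binary messages, where receiver i wants x_i and already knows the other two messages. A single broadcast bit y = x_1 xor x_2 xor x_3 is then an index code of length 1, and every receiver decodes by XORing the broadcast bit with its side information.

```python
import numpy as np
from itertools import product

# Hypothetical toy instance: 3 binary messages, receiver i wants x_i, knows the other two.
L = np.array([[1], [1], [1]])          # encoding matrix: one column per coded bit
want = [0, 1, 2]
known = [[1, 2], [0, 2], [0, 1]]

def encode(x):
    return tuple(np.asarray(x) @ L % 2)

def decode(i, y, side_info):
    # for this toy code, XOR the coded bit with the receiver's known bits
    return (y[0] + sum(side_info)) % 2

for x in product([0, 1], repeat=3):
    y = encode(x)
    for i in range(3):
        assert decode(i, y, [x[j] for j in known[i]]) == x[want[i]]
print("every receiver decodes its wanted bit for all message realizations")
```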
in this paper, we discuss the maximum likelihood ( ml ) decoder for index coded psk modulation .further , we study the case in which the length of the index code is specified for the icp but not necessarily the index code .the receivers can have any a priori defined arbitrary order of priority among themselves .for a chosen priority order , we consider all possible index codes , to obtain the mappings to appropriate psk constellation which will result in the best message error performance in terms of psk index coding gain ( psk - icg , defined in section [ isd_section ] ) of the receivers , respecting the defined order of priority .consider a noisy index coding problem with messages , over which uses an awgn broadcast channel for transmission .for the icp , consider index codes of length , , which will generate broadcast vectors ( elements of ) .the broadcast vectors are mapped to -psk signal points , so that -psk modulation can be used , to minimize the bandwidth requirement .note that , transmitting one -psk signal point instead of binary bits ( as in noiseless index coding ) , results in fold saving in bandwidth .our contributions are summarized below : * we derive a decision rule for maximum likelihood decoding which gives the best message error performance , for any receiver , for a given index code and mapping . * we show that , at very high snr , the message error performance of the receiver employing ml decoder , depends on the minimum inter - set distance ( defined in section [ isd_section ] ) .the mapping which maximized the minimum inter - set distance is optimal for the best message error performance at high snr .* for the icp , when the receivers are arranged in the decreasing order of priority , we propose an algorithm to find ( index code , mapping ) pairs , each of which gives the best message error performance for the receivers , for the given order of priority .using any one of the above ( index code , mapping ) pairs , the highest priority receiver achieves the maximum possible gain ( psk - icg ) that it can get using any ic and any mapping for -psk constellation , at very high snr .given that the highest priority receiver achieves its best performance , the next highest priority receiver achieves its maximum gain possible and so on in the specified order of priority .let \triangleq \{1,2, ... ,n\} ] ( for any integer ) , where , denote the vector .we consider the noisy index coding problem over with a single sender having a set of messages , , and a set of receivers , , where each receiver is identified by , the want set and the known set .let , be the set of indices corresponding to the known set .it is sufficient to consider the case where each receiver demands only one message .if there is a receiver which demands more than one message , it can be considered as equivalent receivers each demanding one message and having the same side information .each , ] and , $ ] .for the given icp , we consider scalar linear index codes of length ( not necessarily the minimum or optimum length ) , such that the set of all broadcast vectors gives let be an encoding matrix for one such index code , .let and denote the message vector and the broadcast vector respectively , where [ exmp : eg1 ] consider the following icp with and .the side information available with the receivers is as follows : . for this icp we can choose a scalar linear index code of length , as given by the following encoding matrix .+ the index coded bits are given by + as . 
instead of using bpsk transmissions , the index coded bits of sent as a signal point from a -psk signal set , over an awgn channel , to save bandwidth . in this paperwe consider index coded -psk modulation for a chosen and so when we refer to index codes of length , we consider only those index codes for which the set of all broadcast vectors is .let the chosen -psk signal set be denoted as .assume that for the index code a mapping scheme specifies the mapping of to the signal set .all receivers are assumed to know the encoding matrix , for the index code .let be a realization of .as each receiver knows some messages ( from its side information ) , needs to consider only a subset of for decoding and this subset is called the _ effective broadcast vector set_. for a chosen index code based on the encoding matrix , the _ effective broadcast vector set_ seen by for is defined by , \setminus \mathcal{i}_{i}\}. \end{aligned}\ ] ] the corresponding set of signal points in -psk constellation is referred to as the _ effective broadcast signal set _ seen by for and is denoted by . for a chosen index code , all effective broadcast signal sets and effective broadcast vector sets seen by are of the same size ( where ) .half the number of broadcast vectors in an effective broadcast vector set corresponds to and the remaining half corresponds to .so , we can partition an effective broadcast vector set into two subsets as defined below .the _ 0-effective broadcast vector set_ seen by for is defined by , \setminus ( \mathcal{i}_{i } \cup \{f(i)\})\}. \end{aligned}\ ] ] the corresponding set of signal points in -psk constellation is referred to as the _ 0-effective broadcast signal set_ seen by for and is denoted as .the _ 1-effective broadcast vector set _ seen by for is defined by , \setminus ( \mathcal{i}_{i } \cup \{f(i)\})\}. \end{aligned}\ ] ] the corresponding set of signal points in -psk constellation is referred to as the _ 1-effective broadcast signal set_ seen by for and is denoted as .the effective broadcast vector sets , 0-effective broadcast vector sets and 1-effective broadcast vector sets seen by for the ic in example [ exmp : eg1 ] is given in table [ table : tbl1 ] .it is clear that , two different realizations of may have the same effective broadcast vector set .however , 1-effective broadcast vector set for a particular realization of may become the 0-effective broadcast vector set of another realization of and vice versa .but the way in which the effective broadcast vector set gets partitioned will be the same .for example consider the case of and in table [ table : tbl1 ] ..effective broadcast vector sets and its partitions ( seen by ) for the ic in example [ exmp : eg1 ] .[ cols="<,<,<,<",options="header " , ] [ table : tbl2 ] there are optimal mappings for .the set has ( index code , mapping ) pairs which are optimal for , with the index code being the same for all the pairs .consider .after all pairs in are considered , the maximum value possible for and there are 128 pairs which are optimal for .now consider .there are 24 pairs which are optimal with . for are 16 optimal pairs with minimum inter - set distance . for and these pairs give the same minimum inter - set distance .these 16 pairs are the optimal mappings for the ic considered .one of these mappings is given in fig .[ figure : fig2](a ) . 
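The effective broadcast vector sets and their 0/1 partition can be enumerated mechanically from the encoding matrix: fix the receiver's known bits at their realized values, sweep the remaining message bits, and bucket each resulting broadcast vector by the value of the wanted bit. A sketch with a hypothetical length-2 code on four messages; the encoding matrices and tables of Examples 1 and 2 are not recoverable from the text above, so the instance below is mine.

```python
import numpy as np
from itertools import product

def effective_sets(L, known, known_vals, wanted):
    """Partition the broadcast vectors consistent with a receiver's side-information
    realization into the 0-effective and 1-effective sets (keyed by the wanted bit)."""
    n, l = L.shape
    free = [j for j in range(n) if j not in known]
    sets = {0: set(), 1: set()}
    for bits in product([0, 1], repeat=len(free)):
        x = np.zeros(n, dtype=int)
        x[list(known)] = known_vals
        x[free] = bits
        sets[int(x[wanted])].add(tuple(x @ L % 2))
    return sets

# Hypothetical example: 4 messages, index code of length 2 with y1 = x1^x2, y2 = x3^x4.
L = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
print(effective_sets(L, known=[0], known_vals=[1], wanted=1))
```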
.we have considered the icp given in example [ exmp : eg1 ] and used algorithm [ alg : a1 ] to obtain all optimal ( index code , mapping ) pairs .one such pair , has the index code as given in example [ exmp : eg1 ] and mapping as given in fig .[ figure : fig1](b ) .we compared this optimal mapping with another mapping ( shown in fig .[ figure : fig1](c ) ) which is not optimal for the same index code , .the pair , the output set obtained from the execution of the algorithm .we obtained the message error probability of the receivers for the two different mappings , by simulation .the first mapping , ( ) used algorithm [ alg : a1 ] and the second mapping ( ) used an algorithm based on maximizing the minimum euclidean distances .simulation results are given in fig .[ figure : sim1 ] . ) . ]the performance of receivers , and is significantly better with than with at high snr .the minimum inter - set distances are more for ( fig.[figure : fig1](b ) ) than for ( fig.[figure : fig1](c ) ) . for receivers and , the minimum inter - set distances are same for both the mappings .we have carried out simulation based studies to compare the performance of the receivers for the icp and the ic given in example [ exmp : eg2 ] for two different mappings as given in fig .[ figure : fig2 ] .the mapping ( ) given in fig .[ figure : fig2](a ) used algorithm [ alg : a1 ] and the mapping ( ) given in fig [ figure : fig2](b ) used the algorithm based on maximizing the minimum euclidean distances .simulation results are given in fig .[ figure : sim2 ] . ) . ] for , , , and , the minimum inter - set distances and hence the performances are the same for both the mappings . butthe performance of receiver is significantly better with than with at high snr .the simulation results indicate the effectiveness of the algorithm based on minimum inter - set distances ( algorithm [ alg : a1 ] ) for mapping the broadcast vectors to psk signal points . it should be noted that algorithm [ alg : a1 ] does not guarantee that all the receivers will perform better or as good as that with any other algorithm .it is possible that , a mapping based on some algorithm ( say , algorithm 2 ) gives a better performance to a receiver than that with algorithm [ alg : a1 ] .but then there will be a receiver which performs better with algorithm [ alg : a1 ] than with algorithm 2 , where is a higher priority receiver than .in other words , algorithm [ alg : a1 ] attempts to maximize the gain achieved by the receivers by considering the receivers in the given order of priority .this is further illustrated in example [ exmp : eg3 ] .[ exmp : eg3 ] consider the following icp with and .the side information available with the receivers is as follows : . for this icp a scalar linear index code of length ( not optimal ) ,is specified as .assume that the decreasing order of priority is given as .using algorithm [ alg : a1 ] , optimal mappings for the specified ic is obtained , of which one mapping ( ) is given in fig .[ figure : fig3](a ) .another mapping , is found by using the algorithm based on maximizing the minimum euclidean distances and is given in fig .[ figure : fig3](b ) .simulation results comparing the performance of the receivers for these two mappings are given in fig .[ figure : sim3 ] . .] ) . ]it is clear from fig .[ figure : sim3 ] that , performs better with than with . 
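The quantity driving the high-SNR comparisons above can be computed directly once a mapping of broadcast vectors to PSK points is fixed. The precise formula for the inter-set distance is garbled in this extraction, so the sketch below assumes the natural reading consistent with the high-SNR argument: the minimum Euclidean distance between a point of the 0-effective signal set and a point of the 1-effective signal set of a receiver. The mapping and the two sets are the hypothetical ones from the previous sketch.

```python
import numpy as np

def psk_point(index, m):
    return np.exp(2j * np.pi * index / m)

def min_inter_set_distance(set0, set1, mapping, m):
    pts0 = np.array([psk_point(mapping[v], m) for v in set0])
    pts1 = np.array([psk_point(mapping[v], m) for v in set1])
    return float(np.min(np.abs(pts0[:, None] - pts1[None, :])))

m = 4                                                   # 4-PSK for a length-2 index code
mapping = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}  # hypothetical natural binary labelling
set0 = {(1, 0), (1, 1)}   # 0-effective broadcast vectors from the previous sketch
set1 = {(0, 0), (0, 1)}   # 1-effective broadcast vectors
print(min_inter_set_distance(set0, set1, mapping, m))   # ~1.414 on the unit circle
```

A mapping search in the spirit of Algorithm 1 would maximise this quantity for the highest-priority receiver first, then, among the surviving mappings, for the next receiver, and so on.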
but , which is of lower priority than , has better performance with .in this paper we have considered index coded psk modulation and have derived a decision rule for the ml decoder which minimizes the message error probability of the receivers , for a given icp .we have introduced the concept of inter - set distances and illustrated its importance in noisy index coding problems .it was also shown that at high snr the dominant factor which decides the message error is the minimum inter - set distance and so an optimal mapping must maximize the minimum inter - set distance .subsequently , we have considered the problem of finding optimal ( index code , mapping ) pairs across all possible mappings for all possible index codes of length , for a chosen -psk modulation .this problem was not addressed so far in literature .the algorithm which is proposed for a given icp , can find ( index code , mapping ) pairs , each of which gives the best psk - icg for the receivers , for any given order of priority . finding all index codes of a chosen length ( greater than or equal to the optimal length ) for a given icpis in general np hard . if it is too complex to find all the index codes , the algorithm can be executed by considering a chosen set of index codes .but the complexity of the proposed algorithm increases exponentially with the length of the index code .this work was supported partly by the science and engineering research board ( serb ) of department of science and technology ( dst ) , government of india , through j.c .bose national fellowship to b. sundar rajan .y. birk and t. kol , coding on demand by an informed source ( iscod ) for efficient broadcast of different supplemental data to caching clients , " _ieee trans.inf.theory_ , 52(6 ) , june 2006 , pp .2825 - 2830 .anoop thomas , kavitha radhakumar , attada chandramouli and b. sundar rajan , optimal index coding with min - max probability of error over fading channels , " in _ proc .ieee pimrc . , _hong kong , 2015 , pp .889 - 894 anoop thomas , kavitha radhakumar , attada chandramouli and b. sundar rajan , single uniprior index coding with min - max probability of error over fading channels , " accepted for publication in ieee transactions on vehicular technology .kavitha radhakumar and b. sundar rajan , on the number of optimal index codes , " proceedings of ieee international symposium on information theory , ( isit 2015 ) , hong kong , 14 - 19 june 2015 , pp .1044 - 1048 .kavitha radhakumar , niranjana ambadi and b. sundar rajan , on the number of optimal linear index codes for unicast index coding problems , " in _ proc .47th annu .ieee wireless communications and networking conference _ , doha , qatar , april 2016 , pp .1897 - 1903 . | index coded psk modulation over an awgn broadcast channel , for a given index coding problem ( icp ) is studied . for a chosen index code and an arbitrary mapping ( of broadcast vectors to psk signal points ) , we have derived a decision rule for the maximum likelihood ( ml ) decoder . the message error performance of a receiver at high snr is characterized by a parameter called _ * psk index coding gain ( psk - icg)*_. the psk - icg of a receiver is determined by a metric called _ * minimum inter - set distance*_. for a given icp with an order of priority among the receivers , and a chosen -psk constellation we propose an algorithm to find _ * ( index code , mapping ) * _ pairs , each of which gives the best performance in terms of psk - icg of the receivers . 
no other pair of index code ( of length with broadcast vectors ) and mapping can give a better psk - icg for the highest priority receiver . also , given that the highest priority receiver achieves its best performance , the next highest priority receiver achieves its maximum gain possible and so on in the specified order of priority . |
market impact is the expected price change conditioned on initiating a trade of a given size and a given sign .understanding market impact is important for several reasons .one motivation is practical : to know whether a trade will be profitable it is essential to be able to estimate transaction costs , and in order to optimize a trading strategy to minimize such costs , it is necessary to understand the functional form of market impact .another motivation is ecological : impact exerts selection pressure against a fund becoming too large , and therefore is potentially important in determining the size distribution of funds . finally , an important motivation is theoretical : market impact reflects the shape of excess demand , the understanding of which has been a central problem in economics since the time of alfred marshall . in this paperwe present a theory for the market impact of large trading orders that are split into pieces and executed incrementally .we call these _ metaorders_. the true size of metaorders is typically not public information , a fact that plays a central role in our theory .the strategic reasons for incremental execution of metaorders were originally analyzed by kyle ( ) , who developed a model for an inside trader with monopolistic information about future prices .kyle showed that the optimal strategy for such a trader is to break her metaorder into pieces and execute it incrementally at a uniform rate , gradually incorporating her information into the price . in kyle s theory the price increases linearly with time as the trading takes place , and all else being equal , the total impact is a linear function of size .the prediction of linearity is reinforced by huberman and stanzl ( ) who show that , providing liquidity is constant , to prevent arbitrage permanent impact must be linear .real data contradict these predictions : metaorders do not show linear impact .empirical studies consistently find concave impact , i.e. impact per share decreases with size .it is in principle possible to reconcile the kyle model with concave dependence on size by making the additional hypothesis that larger metaorders contain less information per share than smaller ones , for example because more informed traders issue smaller metaorders .a drawback of this hypothesis is that it is neither parsimonious nor easily testable , and as we will argue here , under the assumptions of our model it violates market efficiency .huberman and stanzl are careful to specify that linearity only applies when liquidity is constant .in fact , liquidity fluctuates by orders of magnitude and has a large effect on price fluctuations .empirical studies find that order flow is extremely persistent , in the sense that the autocorrelation of order signs is positive and decays very slowly .no arbitrage arguments imply either fluctuating asymmetric liquidity as postulated by lillo and farmer ( ) , or no permanent impact , as discussed by bouchaud et al .( ) .the central goal of our model is to understand how order splitting affects market impact . 
whereas kyle assumed a single , monopolistic informed trader ,our informed traders are competitive .they submit their orders to an algorithmic execution service that bundles them together as one large metaorder and executes them incrementally .we show that this leads to a symmetric nash equilibrium satisfying the condition that the final price after a metaorder is executed equals its average transaction price .we call this condition _ fair pricing _ , to emphasize the fact that under this assumption trading a metaorder is a breakeven deal neither party makes a profit as a result of trading . our equilibrium is less general than kyle s in that it assumes uniform execution , but it is more general in that it allows an arbitrary information distribution .this is key because , as we show , there is an equilibrium between information and metaorder size , making it possible to match the metaorder size distribution to empirical data .combining the fair pricing condition with a martingale condition makes it possible to derive the price impact of metaorders as a function of the metaorder size distribution .this allows us to make several strong predictions based on a simple set of hypotheses . for a given metaorder size distributionit predicts the average impact as a function of time both during and after execution .we thus predict the relationship between the functional form of two observable quantities with no a priori relationship , making our theory falsifiable in a strong sense .this is in contrast to theories that make assumptions about the functional form of utility and/or behavioral or institutional assumptions about the informativeness of trades , which typically leave room for interpretation and require auxiliary assumptions to make empirical tests . for example , gabaix et al .( ) have also argued that the distribution of trading volume plays a central role in determining impact , and have derived a formula for impact that is concave under some circumstances . however , in contrast to our model , their prediction for market impact depends sensitively on the functional form for risk aversion , where is the standard deviation of profits , the impact will increase with the size of the metaorder as .thus the impact is concave if , linear if ( i.e. if risk is proportional to variance ) , and convex otherwise .for another theory that also predicts concave impact see toth et al .( ) . ] . our theory , in contrast , is based entirely on market efficiency and does not depend on the functional form of utility .our work here is related to several papers that study market design .viswanathan and wang ( ) , glosten ( ) , and back and baruch ( ) derive and compare the equilibrium transaction prices of orders submitted to markets with uniform vs. discriminatory pricing . depending on the setup of the model , these prices can be different so that investors will prefer one pricing structure to the other and can potentially be cream - skimmed " by a competing exchange .the fair pricing condition we introduce here forces the average transaction price of a metaorder ( which transacts at discriminatory prices ) to be equal to the price that would be set under uniform pricing .fair pricing , therefore , means investors have no preference between the two pricing structures , and they have no incentive to search out arrangements for better execution . 
on the surface, this result is similar to the equivalence of uniform and discriminatory pricing in back and baruch ( ) .however , in their paper , this equivalence results because orders are always allowed to be split , whereas ours is a true equivalence between the pricing of a split vs. unsplit order . in section iiwe give a description of the model and discuss its interpretation . in section iiiwe develop the consequences of the martingale condition and show how this leads to zero overall profits and asymmetric price responses when order flow is persistent .in section iv we show that any nash equilibrium must satisfy the fair pricing condition . in sectionv we derive in general terms what this implies about market impact . in sectionvi we introduce specific functional forms for the metaorder size distribution and explicitly compute the impact for these cases .finally , in section vii we discuss the empirical implications of the model and make some concluding remarks .we study a stylized model of an algorithmic trading service combining and executing orders of long - term traders .this can be thought of as a broker - dealer receiving multiple orders on the same security and executing them algorithmically at the same time , or as an institutional trading desk of a large asset manager combining the orders from multiple portfolio managers into one large metaorder .our goal is to model the price impact of a metaorder during a trading period in which it may or may not be present .we set up the model in a stylized manner as a game in which trading takes place across multiple periods , after which final prices are revealed . while this is somewhat artificial in comparison to a real market ( which has no such thing as a final " price ) , the framework is simple enough to allow us to find a solution , and the basic conclusions should apply more broadly .the structure of the model is in many respects similar to the classic framework of kyle ( ) , but with several important differences .( a point - by - point comparison is made in the conclusions ) .we begin with an overview .there is a single asset which is traded in each of periods , which can be regarded as a game .there are three kinds of agents , long - term traders , market makers and day traders .the long - term traders have a common information signal that is received before the game starts ; based on this information they formulate their orders and submit them to an algorithmic trading service .the algorithmic service operates mechanically , dividing the bundle of orders ( which we call a _ metaorder _ ) into equal sized lots which are submitted as anonymous market orders in each successive period until the metaorder is fully executed .the day traders , in contrast , receive a new information signal and submit orders based on this signal in every period of the game . in every periodeach market maker observes the netted order of the long - term and day traders and submits a quote accordingly ; all orders are filled at the best price .the game ends with a final liquidation period at time step in which prices are set exogenously to reflect the accumulated information .the length of the game varies based on the amount of information received , and is unknown to the market makers .we now discuss the setup of the game in more detail , beginning with a description of each of the agents .the _ long - term traders _ receive a common information signal before the game begins , which only they observe . 
in order to model situations in which there may or may not be a metaorder present , we allow a nonzero probability that . thus with probability the signal is drawn from a distribution , which has nonzero support over a continuous interval , where , and with probability is no information , i.e. . for simplicity we discuss the case where the draw of happens to be positive , i.e. it causes the final price to increase , but the results apply equally well if is negative .after is revealed the orders of the long - term traders are aggregated together into a _ metaorder _ and executed in a package .the bundling and execution process can be thought of as representing an algorithmic trading firm , broker , or institutional trading desk .there are long - term traders , labeled by an index , where is a large number . after the common information signalis received each long - term trader submits an order of size , where is a large positive integer .each long - term trader decides independently .the individual orders are bundled together into a metaorder of size shares . .for this figure we assume the metaorder is present and show only expected price paths , averaged over the day trader s information .the price is initially ; after the first lot is executed it is .if it is finished and the price reverts to , but if another lot is executed and it rises to .this proceeds similarly until the execution of the metaorder is completed . at any given pointthe probability that the metaorder has size , i.e. that the order continues , is . if had we followed a typical price path under circumstances when the day trader s noisy information signal is large , rather than the expected price paths shown here , the sequence of prices would be a random walk with a time - varying drift caused by the metaorder s impact . ]the algorithmic trading firm operates purely mechanically , chopping the metaorder into equal pieces and submitting market orders at successive times .these are executed at transaction prices , where , as illustrated in figure [ fig1 ] .we assume the metaorder is executed in lots , which for mathematical convenience are chosen to be of shares .the imposition of a maximum trade size is a technical detail , which induces a bound on , to avoid mathematical problems that occur in the limit .this is explained in the appendix .for the typical situations we have in mind , , and are all large numbers .the choice of lots of size shares is purely a matter of convenience ; the assumption that the lots have a constant size is something we hope can be relaxed in the future . ]; we assume that is large .if the metaorder is present ( i.e. if ) , trading ends when the metaorder is fully executed , i.e. when .the equilibrium distribution of metaorder lengths is . if we randomly choose a value of from an arbitrary distribution ( which in general differs from ) .the _ day traders _ can be treated as a single representative agent who receives a private information signal at the beginning of each period of the game , and submits a market order ( either to buy or sell ) of size , where is a zero mean iid noise process with an otherwise arbitrary distribution , and is an increasing function is not important here .we assume the day traders do not engage in order splitting , i.e. they trade on the information they receive in given time step only in that time step .this guarantees that in the absence of a metaorder the day trader s order flow provides a sufficient signal for a market maker to infer the new information . 
] .there is no restriction on the size of , and in particular we allow the possibility that is large and negative while is positive , so that during the course of execution of the metaorder the combined order flow may change sign , and that at different times the market makers may buy or sell . at each timestep the market orders of the long - term traders and the day trader are netted and the single combined order is submitted to the _ market makers _ , who are competitive and profit maximizing .we are not assuming any special institutional privileges , such as those of the specialists in the nyse ; our market makers are simply competitive liquidity providers . at each time stepeach market maker observes the combined order and submits a quote .the combined order is fully executed by the market maker(s ) offering the best price .the market makers are able to take past order flow and prices into account in setting their quotes . given that is iid , the typical transaction price sequence will look like a random walk , as the market maker responds to the day trader s order flow , with a possible superimposed drift if a metaorder is present .in the final period the combination of the long - term information and the accumulated short term information are revealed . since the long - term traders information signals are independent of those of the day trader , information is additive and the final price is where is the initial price .the average final price is = s_0 + \alpha ] .the goal of the paper is to compute the _ average immediate impact _ and the _ average permanent impact _ of the metaorder .the corresponding incremental average impacts are and .as we will show , at the equilibrium the average impacts do not depend on , or .the number of long - term traders is fixed and is common knowledge .the market makers know the initial price , the information distributions and , they can deduce the function relating the day trader s information to their order size , and they observe the net combined order in each period and remember previous order flow .however they do not know the information signals or , and thus they do not know how much of the order flow to ascribe to the long - term trader vs. the day trader . they do not know whether a metaorder is present , and if it is , they do not know its size .perhaps the strongest assumption we have made concerns knowledge of the timing of metaorders . the market makers know the period , and as a result if a metaorder is present , they know when it started . in typical market settings order flow is anonymous and the starting time of a metaorder is uncertain .nonetheless , for long meta - orders and sufficiently high participation rates the starting time can be inferred from the imbalance in order flow , where is the desired number of standard deviations of statistical significance .the accumulated imbalance after steps is , where is the participation rate .equating these gives .hiding an order of size requires .thus larger metaorders need to be executed more slowly to avoid detection .since the time needed to complete execution is inversely proportional , for a large metaorder this can become prohibitive it is impossible to escape detection . ] . assuming a typical participation rate of and two standard deviations to reject the null hypothesis of balanced order flow means that a metaorder can be detected after about timesteps .large metaorders are frequently executed in or even lots share of ibm . 
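The detection argument sketched above (the accumulated order-flow imbalance produced by a metaorder grows linearly in time, while the day-trader noise only grows like the square root of time, so a metaorder trading at participation rate pi escapes a z-standard-deviation test only for roughly (z/pi)^2 steps) can be checked with a crude Monte Carlo. The exact rates and constants quoted in the text are not recoverable here, so the numbers below (pi = 0.1, z = 2) are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
z, part = 2.0, 0.1                      # detection threshold (std devs), participation rate
steps = np.arange(1, 2001)
detect = []
for _ in range(200):
    day = rng.choice([-1.0, 1.0], size=steps.size) * (1 - part)  # day-trader signed volume
    imbalance = np.cumsum(day + part)                            # metaorder always buys
    hit = np.nonzero(imbalance > z * (1 - part) * np.sqrt(steps))[0]
    detect.append(steps[hit[0]] if hit.size else np.inf)
print("median detection time:", np.median(detect), " vs (z/part)^2 =", (z / part) ** 2)
```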
] , implying an error in inferring the starting time of to .thus , it is not unrealistic to imagine that market makers can infer the presence of large metaorders and estimate their starting times .we are not concerned here with the question of whether or not the algorithmic execution service s strategy of splitting the order into equal pieces is an optimal strategy .our goal is instead to assume that this is what they do , and to derive the implications for price impact .our main result is to derive the equilibrium relationship between the metaorder size distribution and the average immediate and permanent impacts and .in addition to taking expectations over the day trader s noise , which we denote by , we must compute expectations about the length of the game .the crux of our argument hinges around the market makers ignorance of ; when this translates into uncertainty about metaorder size .we will use the notation to represent an average over all metaorders of size , and as described above , for averages over . for a generic function the average over metaorder sizes is = \sum_{n = t}^\infty q_n f_n= \frac{\sum_{n = t}^\infty p_n f_n}{\sum_{n = t}^\infty p_{n}}= \frac{\sum_{i=0}^\infty p_{it } f_{it}}{\sum_{i=0}^\infty p_{it } } , e_t[f_n ] = \frac{\sum_{n = t}^m p_n f_n}{\sum_{n = t}^m p_{n}}= \frac{\sum_{i=0}^{m - t } p_{t+i } f_{t+i}}{\sum_{i=0}^{m - t } p_{t+i } } , \label{averages}\ ] ] where in the last term we made the substitution . assuming that a metaorder is present , the likelihood that it will persist depends on the distribution and the number of executions that it has already experiencedlet be the probability that the metaorder will continue given that it is still active at timestep .this is equivalent to the probability that it is at least of size , i.e. this makes precise how order splitting can make order flow positively autocorrelated . in particular ,if has heavy tails than an exponential will increase with time and induce persistence in order flow .in the case where no metaorder is present one can similarly define the probability that the game continues as this section we introduce a martingale condition and discuss its implications for liquidity and overall profitability .market makers must set prices given only past and present order flow information , without knowing whether the order flow originated from a metaorder or from day traders .their decision function is of the form where ] , where ] , where ] , i.e. the average transaction price is unchanged .4 . with probability metaorder is not present and with probability time is the last trading period . for similar reasons = \tilde{s}'_t ] and similarly for .taking averages over gives equation ( [ shortterm ] ) holds for . 
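The continuation probability defined above is easily evaluated numerically for any assumed metaorder size distribution. The sketch below (the truncation level and parameter values are illustrative assumptions) contrasts a Pareto-tailed size distribution with a thin-tailed geometric one: with heavy tails the probability that a still-active metaorder continues grows with the number of lots already executed, which is precisely the mechanism by which order splitting makes order flow positively autocorrelated, whereas in the geometric case the continuation probability is constant. For the Pareto case these tail sums are discrete truncations of the generalized (Hurwitz) zeta functions that appear later in the text.

```python
# Sketch: continuation probability P_t = Pr(N >= t+1 | N >= t) for two
# illustrative metaorder-size distributions.  A Pareto tail (heavy) makes P_t
# increase with t, a geometric (thin-tailed) distribution keeps it constant.
# The truncation N_max and the parameter values are assumptions.

def continuation_prob(p, t_max):
    """P_t computed from an (unnormalised) size distribution p[n-1] = Pr(N = n)."""
    tail = [0.0] * (len(p) + 2)          # tail[n] = Pr(N >= n), tail[len(p)+1] = 0
    for n in range(len(p), 0, -1):
        tail[n] = tail[n + 1] + p[n - 1]
    return [tail[t + 1] / tail[t] for t in range(1, t_max + 1)]

N_max = 200_000        # truncation used to approximate the infinite sums
alpha = 1.5            # assumed tail exponent: p_N ~ N**-(alpha + 1)
q = 0.3                # per-step stopping probability of the geometric example

pareto    = [n ** -(alpha + 1.0) for n in range(1, N_max + 1)]
geometric = [(1.0 - q) ** (n - 1) * q for n in range(1, N_max + 1)]

for t, (pp, pg) in enumerate(zip(continuation_prob(pareto, 50),
                                 continuation_prob(geometric, 50)), start=1):
    if t in (1, 2, 5, 10, 20, 50):
        print(f"t = {t:3d}   P_t(Pareto) = {pp:.3f}   P_t(geometric) = {pg:.3f}")
```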
if the metaorder has maximal length at the end of the interval by definition , which implies that .let us pause for a moment to digest this result .we started with a martingale condition for realized prices , including fluctuations caused by day traders , and then reduced it to a martingale condition for the average impact due to the presence of a metaorder .the reduced martingale no longer depends on , , or the day trader s information .the fact that we assume a martingale for realized prices implies that arbitrage of the impact is impossible .the ability to average away the day traders is a consequence of our assumption that and are independent , which implies additivity of information .this separates the problem of the metaorder s impact from that of the day trader s impact the metaorder s impact effectively rides on top of the day trader s impact . as we will see , the virtue of this approach is that it allows us to infer quite a lot without needing to solve for the market makers optimal quote setting function .equation ( [ shortterm ] ) can be trivially rewritten in the form where and .thus the martingale condition fixes the ratio of the price responses and , but does not fix their scale . if is large , corresponding to a metaorder that is likely to continue , then small .this means that the price response if the order continues is much less it is than if it stops . to complete the calculationwe need another condition to set the scale of the price responses and , which may change as varies .such a condition is introduced in section [ sizeindependencesec ] .even without such a condition , one can already see intuitively that all else equal " , for a heavy - tailed metaorder distribution , the impact will be concave .( recall that heavy tails in imply that increases with ) .the martingale condition implies that the market makers break even overall , i.e. that their total profits summing over metaorders of all sizes is zero .this is stated more precisely in proposition one .* proposition 1 . *the market makers transact lots at average prices , which later are all valued at a final price .the martingale condition implies zero overall profits , i.e. \equiv\sum_{n=1}^\infty p_n \pi_n=0 .\pi = e_1[\pi_n]\equiv\sum_{n=1}^m n \pi_n p_n = 0 , \label{breakevenonaverage}\ ] ] where is the profit per lot transacted .the proof of proposition 1 is given in appendix a. the phrase overall profits " emphasizes that the martingale condition only implies zero profits when averaged over metaorders of all sizes .it allows for the possibility that the market makers may make profits on metaorders in a given size range , as long as they take corresponding losses in other size ranges .surprisingly , proposition 1 is not necessarily true when is infinite .the basic problem is similar to the st .petersburg paradox : as the metaorder size becomes infinite it is possible to have infinitely rare but infinitely large losses . the conditions under which this holds are more complicated , as discussed in appendix a.we now derive the _ fair pricing condition _ , which states that for any under fair pricing the average execution price is equal to the final price .we call this fair pricing for the obvious reason that both parties would naturally regard this as fair " .fair pricing implies that the market makers break even on metaorders of any size , as opposed to the martingale condition , which only implies they break even when averaging over metaorders of all sizes . 
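The displayed form of the martingale condition is not reproduced above, so the following sketch assumes its natural reading: the current expected transaction price equals the probability-weighted average of the next transaction price, if the metaorder continues, and of the reverted price, if it stops. Under that assumption the two incremental responses are locked into the ratio discussed in the text, and the asymmetry grows with the continuation probability: a small up-tick while the order is running, a comparatively large reversion when it ends. This is the asymmetric price response that reappears below as empirical prediction (2).

```python
# Asymmetry of price responses fixed by the martingale condition, assuming it
# takes the form  s_t = P_t * s_{t+1} + (1 - P_t) * s'_t.  Writing
# r_cont = s_{t+1} - s_t and r_stop = s'_t - s_t, this gives
#   0 = P_t * r_cont + (1 - P_t) * r_stop,
# so a unit up-tick on continuation implies a reversion of size P_t / (1 - P_t)
# if the metaorder stops.  The P_t values below are illustrative only.

for t, P_t in [(1, 0.55), (5, 0.75), (20, 0.90), (100, 0.985)]:
    r_cont = 1.0                           # normalised response if the order continues
    r_stop = -P_t / (1.0 - P_t) * r_cont   # response if the order stops at this step
    print(f"t = {t:4d}  P_t = {P_t:.3f}  r_cont = {r_cont:+.2f}  r_stop = {r_stop:+.2f}")
```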
.* proposition 2 .* _ if the immediate impact has a second derivative bounded below zero , in the limit where the number of informed traders , any nash equilibrium must satisfy the fair pricing condition for . on average market makers profit from orders of length one and take ( equal and opposite ) losses from orders of length ._ this result is driven by competition between informed traders .all informed traders receive the same information signal , and the strategy of informed trader is the choice of the order size .the orders are then bundled together to determine the combined metaorder size .the decision of each informed trader is made without knowing the decisions of others .the derivation has two steps : first we examine the case for , and show that if others hold their strategies constant , providing the impact is concave and is sufficiently large , traders can increase profits by changing strategy . secondly we show that if there is no incentive to change strategy. then we return to examine the cases and , which must be treated separately .the derivation is given in the appendix .in contrast to the martingale condition , which only implies that immediate profits are zero when averaged over size , fair pricing means that they are identically zero for every size .it implies that no one pays any costs or makes any profits simply by trading in any particular size range .the nash equilibrium is symmetric , i.e. all agents make the same decision .( this must be in any case since there is nothing to distinguish them ) .this means that there is a unique order size for any , and the distribution of information implies the distribution of metaorder size .we can use as a proxy for , which is the key fact allowing us to state our results in terms of the observable quantity rather than , which is much more difficult to observe .although this derivation is based on rationality , the fair pricing condition potentially stands on its own , even if other aspects of rationality and efficiency are violated . in modern markets portfolio managers routinely receive trade cost analysis reportsthat compare their execution prices relative to the close , which for a trade that takes place over one day is a good proxy for .such reports are typically broken down into size bins , making persistent inconsistencies across sizes clear .execution times for metaorders range from less than a day to several months ( vaglica et al . , ) , and are much shorter than typical holding times , which for mutual funds are on average a year and are often much more ( schwartzkopf and farmer , ) .thus the statistical fluctuations for assessing whether fair pricing holds are much smaller than those for assessing informational efficiency .since is fairly well determined , portfolio managers will exert pressure on their brokers to provide them with good execution . 
as a resultwe expect the fair pricing condition to be obeyed to a higher degree of accuracy than the informational efficiency condition .although we have derived the nash equilibrium only in the case where the immediate impact is concave , for the remainder of the paper we will simply assume that the martingale condition holds for all and the fair pricing condition holds for .this allows us to derive both the immediate impact and the permanent impact for any given metaorder size distribution .we later argue that for realistic situations the metaorder size distribution gives rise to a concave impact function , consistent with the nash equilibrium .the martingale condition ( eq . [ shortterm ] ) and the fair pricing condition ( eq . [ fairpricing ] ) define a system of linear equations for and at each value of , which we can alternatively express in terms of the price differences and , where .the martingale condition holds for and the fair pricing condition holds for .there are thus homogeneous linear equations with unknowns does not exist , so is not needed .this reduces the number of unknowns by one . ] . because the number of unknowns is one greater than the number of conditions there is necessarily an undetermined constant , which we choose to be .* proposition 3 .* _ the system of martingale conditions ( eq . [ shortterm ] ) and fair pricing conditions ( eq . [ fairpricing ] ) has solution _ the proof is given in appendix a. an important property of the solution is the equivalence of the impact as a function of either time or size .this is in contrast to the prediction of an extended " kyle model under the assumption that traders of different sizes are differently informed , which yields linear impact as a function of time , but allows the slope to vary nonlinearly with size .summing eq .[ solution ] implies that for the immediate impact is for the immediate impact is and for it is .( the meaning of the undetermined constants and is discussed in a moment ) . the permanent impact is easily obtained .making some simple algebraic manipulations by combining eqs .( [ final ] ) and ( [ calpdef ] ) we get we have expressed both the permanent and immediate impact purely in terms of and the undetermined constants and . the undetermined constants can in principle be fixed based on the information at the equilibrium . at the equilibrium information signals in the range ] will be assigned to metaorders of size two , with an average size , and so on .the scale of the impact is set by the relations we have used the words in principle " because , unlike metaorder sizes , information is not easily observed . barring the ability to independently measure information , the constants and remain undetermined parameters . plays the important role of setting the scale of the impact .the constant , in contrast , is unimportant it is simply the impact of the first trade , before the metaorder has been detected .the power of the theory developed here is that the impact is predicted in terms of , which is directly measurable ( at least with the proper data ) .the calculation to set the scale shows how the continuous variable maps onto the discrete variable . 
is a discrete function whose inverse can be written .if there is a continuum limit for large , the two distributions are related by conservation of probability as so for example , if the empirical metaorder size is is asymptotically pareto distributed , as argued in the next section , , , and .based on the empirically observed value , this gives , which means that the cumulative scales as .this is what is typically observed for price returns in american stock markets ( plerou et al . , ) .we have so far left the metaorder size distribution unspecified . in this sectionwe compute the impact for two examples .the first of these is the pareto distribution , which we argue is well - supported by empirical data means that there exists a constant such that in the limit , .we use it to indicate that this relationship is only valid in the limit of large metaorder size . ] .the second is the stretched exponential distribution , which is not supported by data , but provides a useful point of comparison .there is now considerable accumulated evidence that in the large size limit in most major equity markets the metaorder size is distributed as , with .* _ trade size ._ in many different equity markets for large trades the volume has been observed by several groups to be distributed as a power law ( gopikrishnan et al .( ) ; gabaix et al .( ) is somewhat controversial , however : eisler and kertesz ( ) and racz et al .( ) have argued that the correct value of . ] . this relationship becomes sharper if only block trades are considered ( lillo et al . , ) .* _ long - memory in order flow . _ the signs of order flow in many equity markets are observed to have long - memory .we use the term long - memory in its more general sense to mean any process whose autocorrelation function is non - integrable ( beran , ) . this can include processes with structure breaks , such as that studied by ding , engle and granger ( ) . ]this means that the transaction sign autocorrelation function decays in time as , where . under a simple theory of order splitting the exponent , which is in good agreement with the data [ ( lillo et al . , ) ; ( gerig , ) ; bouchaud , farmer , and lillo ( ) ] . *_ reconstruction of large metaorders from brokerage data ._ vaglica et al .( ) reconstructed metaorders spanish stock exchange using data with brokerage codes and found that is distributed as a power law for large with .there is thus good evidence that metaorders have a power law distribution , though more study is of course needed . in this sectionwe derive the functional form of the impact for pareto metaorder size .we do this by using eq .[ final ] in the limit as , and return in section [ finitesize ] and in appendix b to discuss how this is modified when is finite . while we only care about the asymptotic form for large , for convenience we assume an exact pareto distribution for all , i.e. where the normalization constant is the riemann zeta function . for the pareto distributionthe probability that an order of size will continue is where is the generalized riemann zeta function ( also called the hurwitz zeta function ) . the approximations are valid in the large limit . 
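Before turning to the closed-form Pareto results in the next paragraphs, the system of martingale and fair-pricing conditions can be solved numerically for any size distribution. The sketch below is our reconstruction of that construction, not the original code, and it rests on the following assumed readings of the elided equations: the martingale condition is taken as s_t = P_t s_{t+1} + (1 - P_t) s'_t for t = 1, ..., M-1 with no reversion at the maximal size (s'_M = s_M); fair pricing sets s'_N equal to the average of s_1, ..., s_N for N = 2, ..., M-1; and the market makers' profit per lot on a size-N buy metaorder is taken as the average sale price minus the reverted price. The two undetermined constants s_1 and s'_1 are given arbitrary illustrative values. For a Pareto tail with alpha = 1.5 the recursion reproduces the behaviour derived analytically below: an immediate impact growing roughly as the square root of the number of executed lots, a permanent-to-peak ratio close to two thirds, and size-weighted market-maker profits that vanish, as required by Proposition 1. With a finite truncation M the impact also bends sharply upward as t approaches M, the finite-support effect discussed later in the text and in appendix C.

```python
import numpy as np

# Numerical sketch of the martingale + fair-pricing construction (Proposition 3),
# under the reconstructed forms stated in the lead-in; all parameter values and
# the two undetermined constants are illustrative assumptions.

def impact_paths(p, s1=1.0, s1_prime=0.5):
    """Immediate impact s_t, permanent impact s'_N and the Proposition-1 check,
    for a size distribution p[n-1] proportional to Pr(N = n), n = 1..M."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                                   # condition on a metaorder being present
    M = len(p)
    surv = np.concatenate([np.cumsum(p[::-1])[::-1], [0.0]])   # surv[t-1] = Pr(N >= t)
    P = surv[1:M] / surv[:M - 1]                      # continuation probability P_t, t = 1..M-1

    s  = np.empty(M)                                  # expected transaction price after t lots
    sp = np.empty(M)                                  # expected reverted price for a size-N order
    s[0], sp[0] = s1, s1_prime
    running_sum = s[0]
    for t in range(1, M):
        # martingale at lot t:  s_{t+1} = (s_t - (1 - P_t) * s'_t) / P_t
        s[t] = (s[t - 1] - (1.0 - P[t - 1]) * sp[t - 1]) / P[t - 1]
        running_sum += s[t]
        sp[t] = running_sum / (t + 1)                 # fair pricing at size t + 1
    sp[M - 1] = s[M - 1]                              # P_M = 0: no reversion at the maximal size

    # Proposition 1: size-weighted market-maker profits vanish, with the profit
    # per lot on a size-N order taken as  mean(s_1..s_N) - s'_N  (an assumption).
    cum_mean = np.cumsum(s) / np.arange(1, M + 1)
    overall = np.sum(np.arange(1, M + 1) * p * (cum_mean - sp))
    return s, sp, overall

alpha, M = 1.5, 50_000                                # assumed Pareto tail and truncation
p_pareto = np.arange(1, M + 1, dtype=float) ** -(alpha + 1.0)
s, sp, overall = impact_paths(p_pareto)

for t in (10, 100, 1000):
    print(f"t = {t:5d}   immediate impact = {s[t - 1]:8.3f}   "
          f"permanent / immediate = {sp[t - 1] / s[t - 1]:.3f}")
slope = (np.log(s[999]) - np.log(s[99])) / (np.log(1000.0) - np.log(100.0))
print(f"log-log slope of the immediate impact for t ~ 10^2-10^3: {slope:.2f} "
      f"(compare with alpha - 1 = {alpha - 1:.1f})")
print(f"size-weighted market-maker profit (Proposition 1): {overall:.1e}")
```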
the immediate impact can be easily calculated from eq .( [ final ] ) .for pareto distributed metaorder sizes , using eqs .( [ solution2 ] ) and ( [ pareto2eq ] ) , is thus the immediate impact behaves asymptotically for large as the exponent has a dramatic effect on the shape of the impact .for lorenzian distributed metaorder size ( ) the impact is logarithmic , for it increases as a square root , for it is linear , and for it is superlinear .thus as we vary the impact goes from concave to convex , with as the borderline case is special is that for the second moment of the pareto distribution is undefined . under the theory of lillo et al .( ) , long - memory requires . ] .( black circles ) and permanent impact ( red squares ) for .the dashed line is the price profile of a metaorder of size , demonstrating how the price reverts from immediate to permanent impact when metaorder execution is completed .the inset shows a similar plot in double logarithmic scale for a wider range of sizes ( from to ) .the blue dashed line is a comparison to the asymptotic square root scaling . _bottom panel _ : expected immediate impact as a function of time for tail exponents and , illustrating how the impact goes from concave to convex as increases . , title="fig : " ] ( black circles ) and permanent impact ( red squares ) for . the dashed line is the price profile of a metaorder of size , demonstrating how the price reverts from immediate to permanent impact when metaorder execution is completed .the inset shows a similar plot in double logarithmic scale for a wider range of sizes ( from to ) .the blue dashed line is a comparison to the asymptotic square root scaling ._ bottom panel _ : expected immediate impact as a function of time for tail exponents and , illustrating how the impact goes from concave to convex as increases . , title="fig : " ] figure [ numerical ] illustrates the reversion process for and shows how the shape of the impact varies with . the permanent impact under the pareto assumption is easily computed using eq .( [ generalpermanentimpact ] ) .a direct calculation shows that eqs .( [ permanentimpact ] ) and ( [ temporaryimpact ] ) imply that the ratio of the permanent to the immediate impact is for example if the model predicts that on average the permanent impact is equal to two thirds of the maximum immediate impact , i.e. , following the completion of a metaorder the price should revert by one third from its peak value .in the previous section we have assumed that , so that we can treat the problem as if were infinite . in real marketsthe maximum order size is probably quite large , a significant fraction of the market capitalization of the asset .thus we doubt that the finite support of has much practical importance , except perhaps for extremely large metaorders . from a conceptual point of view, however , having an upper bound on metaorder size creates some interesting effects .as we have already mentioned , proposition 1 fails to hold when , so this must be handled with some care . in appendix cwe illustrate how the results change when .what we observe is that when the impact is roughly unchanged from its behavior in the limit , but when the impact becomes highly convex .this is caused by the fact that the market maker knows the metaorder must end when . since by definition , the martingale condition requires that , i.e. 
there is no reversion when the metaorder is completed .this propagates backward and when it significantly alters the impact , as seen in figure [ figimpact ] .nonetheless , from a practical point of view we do not think this is an important issue , which is why we have relegated the details of the discussion to appendix c. changing the metaorder distribution has a dramatic effect on the impact . while we believe that the pareto distribution is empirically the correct functional form for metaorder size , to get more insight into the role of we compute the impact for an alternative functional form .for this purpose we choose the stretched exponential , which can be tuned from thin tailed to heavy tailed behavior and contains the exponential distribution as a special case .there is no simple expression for the normalization factor needed for a discrete stretched exponential distribution , so we make a continuous approximation , in which the metaorder size distribution is the normalization factor is the incomplete gamma function .the shape parameter specifies whether the distribution decays faster or slower than an exponential .( implies faster decay and implies slower decay . ) for short data sets , when is small this functional form is easily confused with a power law .it can be shown that the stretched exponential leads to an immediate impact function that for large asymptotically behaves as this is the product of a power law and an exponential ; for large the exponential dominates .the permanent impact is where is the exponential integral function and in the last approximation we have used its asymptotic expansion .the ratio of the permanent to the immediate impact of the last transaction is in contrast to the pareto metaorder size distribution this is not constant . instead the ratio between permanent and immediate impact decreases with size , going to zero in the limit as . the fixed ratio of permanent and immediate impact is a rather special property of the pareto distribution .the three types of agents in our model are similar to kyle s ; his informed trader is replaced by our long - term traders , and his noise traders are replaced by our day traders . in both cases we assume a final liquidation .there are also several key differences . in our model : * our long - term traders do not have a monopoly , but rather have common information and compete in setting the size of their orders .their orders are bundled together and executed as a package by an algorithmic execution service .these two facts are essential to show that the fair pricing condition is a nash equilibrium . *the number of periods for execution is uncertain , and depends on the information of the long - term traders .this is important because the martingale condition is based on the market makers uncertainty about when the metaorder will terminate . *the distribution is arbitrary ( whereas kyle assumed a normal distribution ) .our key result is that information is almost fully reflected in metaorder size , i.e. that and are closely related .this means our results apply to any empirical size distribution . 
*our day traders respond to ongoing information signals that are permanent in the sense that they affect the final price , in contrast to kyle s noise traders , who are completely uninformed .this means that the information in the final liquidation price is incrementally revealed .* the most important difference is that our model has a different purpose .kyle assumed an information signal and solved for the optimal strategy to exploit it .we assume metaorder execution may or may not be going on in the background , and solve for its impact on prices .the theory presented here makes several predictions with clear empirical implications . in this sectionwe summarize what these are and outline a few of the problems that are likely to be encountered in empirical testing . 1 . the fair pricing condition , eq .[ fairpricing ] , is directly testable , although it requires a somewhat arbitrary choice about when enough time has elapsed since the metaorder has completed for reversion to occur .( one wants to minimize this time because of the diffusive nature of prices , but one wants to allow enough time to make sure that reversion is complete ) .2 . the asymmetric price response predicted by eq .[ returnratio ] is testable .however , this only tests the martingale condition , which is the less controversial part of our model .the equivalence of impact as a function of time and size is directly testable . under our theory , for immediate impact from the first steps is the same , regardless of .this is in contrast to the kyle model which predicts linear impact as a function of time , but can explain concavity in size only by postulating variable informativeness of trades vs. metaorder size .the prediction of immediate and permanent impact based on is directly testable through equations [ final ] and [ generalpermanentimpact ] .if the metaorder distribution is a power law ( pareto distribution ) , then for large the immediate impact scales as and the ratio of the permanent to the immediate impact of the last transaction is .see section [ paretosec ] .prediction ( 2 ) has been tested and confirmed by lillo and farmer ( ) , farmer et al .( ) and gerig ( ) .preliminary results seem to support , or at least not contradict , prediction ( 5 ) . the only studies of which we are aware that attempted to fit functional form to the impact of metaorders are by torre ( ) , almgren et al . , and moro et al .( ) ; they find immediate impact roughly consistent with a square root functional form .moro et al . also tested the ratio of permanent to immediate impact and found for the spanish stock market and for the london stock market , with large error bars . to our knowledgethe other predictions remain to be empirically tested . though the market makers in our model are uncertain whether or not a metaorder is present , if it is present , they know when its execution begins and ends . the ability to detect metaorders from imbalances in order flow using brokerage codes has been demonstrated [ ( vaglica et al . 
) , ( toth et al ) ] .a recent study of metaorders based on brokerage code information found average participation rates of for the spanish stock market ( bme ) and for the london stock market , for metaorders whose average size was just under in both markets , making such metaorders difficult to hide .the detection problem introduces uncertainties in starting and stopping times that may affect shape of the price impact .the traditional view in finance is that market impact is just a reflection of information .this point of view often goes a step further and postulates that the functional form of impact is determined by behavioral and institutional factors , such as how informed the agents are who trade with a given volume .this hypothesis is difficult to test because it is inherently complicated and information is difficult to measure independently of impact . within the framework developed here, such anomalies would violate the fair pricing condition .in this paper we embrace the view that impact reflects information , but we show how at equilibrium the trading volume reflects the underlying information and makes it possible to compute the impact .the metaorder size distribution determines the shape of the impact but does not set its scale .metaorder size has the important advantage of being a measurable quantity , and thus predictions based on it are much more testable than those based directly on information .the fair pricing condition that we have derived here may well hold on its own , even without informational efficiency .this could be true for purely behavioral reasons : the fair pricing condition holds because it can be measured reliably , and both parties view it as fair .thus while the main results here are consistent with rationality , they do not necessarily depend on it .we provide an example solution for the pareto distribution for metaorder size because we believe that the evidence supports this hypothesis .this gives the simple result that the impact is a power law of the form , and the ratio of permanent impact to the temporary impact of the last transaction is .however , the bulk of our results do not depend on this assumption .thus the reader who is skeptical about power laws may simply view the results for the pareto distribution as a worked example .the strength of our approach is its empirical predictions . because these involve explicit functional relationships between observable variables they are strongly falsifiable in the popperian sense .a preliminary empirical analysis seems to support the theory , but the statistical analysis so far remains inconclusive .we look forward to more rigorous empirical tests .we would like to acknowledge conversations with jean - philippe bouchaud and a very helpful discussion by ionid rosu .this work was supported by prin project 2007tkltsr computational markets design and agent - based models of trading behavior `` and national science foundation grant 0624351 .fl acknowledges financial support from the grant ' ' progetto interno lillo 2010 " by scuola normale superiore di pisa .almgren , r.f . , 2003 .optimal execution with nonlinear impact functions and trading - enhanced risk .applied mathematical finance , 10 , 118 .almgren , r.f . and chriss , n. , 1999 .value under liquidation .risk , 12 , 6163 .almgren , r.f . and chriss , n. , 2000. optimal execution of portfolio transactions .journal of risk , 3 , 539 .almgren , r.f . ,thum , c. and hauptmann , h. l. , 2005 .direct estimation of equity market impact . risk . 
back , kerry and baruch , shmuel , 2007 .working orders in limit order markets and floor exchanges .journal of finance , 62 , 15891621 .beran , j. , 1994 .statistics for long - memory processes .new york : chapman & hall .berk , j.b . andgreen , r.c , 2004 . mutual fund flows and performance in rational markets .journal of political economy , 112 , 12691295 .bernhardt , dan , hughson , eric and naganathan , girish , 2002 .cream - skimming and payment for order flow . technical report .university of illinois .bertismas , d and lo , a. , 1998 . optimal control of execution costs . journal of financial markets , 1 , 150 .bouchaud , j - p , farmer , j. doyne and lillo , f. , 2009 .how markets slowly digest changes in supply and demand , in : hens , t. and schenk - hoppe , k. ( eds . ) , handbook of financial markets : dynamics and evolution .elsevier , pp .57156 .bouchaud , j - p . ,gefen , y. , potters , m. and wyart , m. , 2004 .fluctuations and response in financial markets : the subtle nature of random " price changes . quantitative finance , 4 , 176190 .bouchaud , j - p . ,kockelkoren , j. and potters , m. , 2006 .random walks , liquidity molasses and critical response in financial markets . quantitative finance , 6 , 115123 .challet , 2007 .the demise of constant price impact functions and single - time step models of speculation .physica a. , 382 , 2935 .chan , l. k.c . and lakonishok , j. , 1993 .institutional trades and intraday stock price behavior .journal of financial economics , 33 , 173199 .chan , l. k.c . and lakonishok , j. , 1995 .the behavior of stock prices around institutional trades .the journal of finance , 50 , 11471174 .chordia , t. and subrahmanyam , a. , 2004 .order imbalance and individual stock returns : theory and evidence .journal of financial markets , 72 , 485518 .ding , z. , granger , c. w. j. and engle , r. f. , 1993 .a long memory property of stock returns and a new model . journal of empirical finance , 1 , 83106 .eisler , z. and kertecz , j. , 2006 .size matters , some stylized facts of the market revisited .european journal of physics b , 51 , 145154 .engle , r. , ferstenberg , r. and russel , j. , 2008 .measuring and modeling execution cost and risk .technical report 08 - 09 .university of chicago .evans , m. d. d. and lyons , r. k. , 2002 .order flow and exchange rate dynamics .journal of political economy , 110 , 170180 .farmer , j. d. , gillemot , l. , lillo , f. , mike , s. and sen , a. , 2004 .what really causes large price changes ?quantitative finance , 4 , 383397 .farmer , j. d. , patelli , p. and zovko , ilija , 2005 .the predictive power of zero intelligence in financial markets .proceedings of the national academy of sciences of the united states of america , 102 , 22542259 .farmer , j.d . , gerig , a. , lillo , f. and mike , s. , 2006 .market efficiency and the long - memory of supply and demand : is price impact variable and permanent or fixed and temporary ?quantitative finance , 6 , 107112 .gabaix , x. , gopikrishnan , p. , plerou , v. and stanley , h. e. , 2003 . a theory of power - law distributions in financial market fluctuations .nature , 423 , 267270 .gabaix , x. , gopikrishnan , p. , plerou , v. and stanley , h.e .institutional investors and stock market volatility . quarterly journal of economics , 121 , 461504 .gatheral , j. , 2010 .no - dynamic - arbitrage and market impact . quantitative finance , 10 , 749759 .gerig , a.n . 
, 2007 .a theory for market impact : how order flow affects stock price .university of illinois .glosten , lawrence r. , 2003 .discriminatory limit order books , uniform price clearing and optimality .technical report .columbia .glosten , l.r ., 1994 . is the electronic limit order book inevitable ?journal of finance , 49 , 11271161 .gopikrishnan , p. , plerou , v. , gabaix , x. and stanley , h. e. , 2000 .statistical properties of share volume traded in financial markets .physical review e , 62 , r4493r4496 ; part a. hasbrouck , j. , 1991 .measuring the information content of stock trades . the journal of finance , 46 , 179207 .hausman , j. a. , lo , a. w. and mackinlay , a. c. , 1992 . an ordered probit analysis of transaction stock prices . journal of financial economics , 31 , 319379 .hopman , c. , 2006 .do supply and demand drive stock prices ?quantitative finance ; to appear .huberman , g. and stanzl , w. , 2004 . price manipulation and quasi - arbitrage .econometrica , 72 , 12471275 .keim , d. b. and madhavan , a. , 1996 .the upstairs market for large - block transactions : analysis and measurement of price effects .the review of financial studies , 9 , 136 .kempf , a. and korn , o. , 1999 .market depth and order size .journal of financial markets , 2 , 2948 .kyle , a. s. , 1985 .continuous auctions and insider trading .econometrica , 53 , 13151335 .lillo , f. and farmer , j. d. , 2004 .the long memory of the efficient market .studies in nonlinear dynamics & econometrics , 8 , .lillo , f. , farmer , j. d. and mantegna , r. n. , 2003 .master curve for price impact function .nature , 421 , 129130 .lillo , f. , mike , s. and farmer , j. d. , 2005 .theory for long memory in supply and demand .physical review e , 7106 , 066122 .moro , e. , moyano , l. g. , vicente , j. , gerig , a. , farmer , j. d. , vaglica , g. , lillo , f. and mantegna , r.n . , 2009 .market impact and trading protocols of hidden orders in stock markets .physical review e. , 80 , 066102 .obizhaeva , a.a . and wang , j. , 2005 .optimal trading strategy and supply / demand dynamcis . technical report .afa 2006 boston meetings paper .plerou , v. , gopikrishnan , p. , amaral , l. a. n. , meyer , m. and stanley , h. e. , 1999 .scaling of the distribution of price fluctuations of individual companies .physical review e , 60 , 65196529 ; part a. plerou , v. , gopikrishnan , p. , gabaix , x. and stanley , h. e. , 2002 . quantifying stock price response to demand fluctuations .physical review e , 66 , article no . 027104 .potters , m. and bouchaud , j - p . , 2003 .more statistical properties of order books and price impact .physica a , 324 , 133140 .racz , e. , eisler , z. and kertesz , j. , 2009 .comment on ` tests of scaling and universality . ' by plerou and stanley .technical report .schwartzkopf , y. and farmer , j.d . , 2010 .technical report . for a preliminary report ,see y. schwartzkopf s caltech ph.d thesis , complex phenomena in social and financial systems : from bird population growth to the dynamics of the mutual fund industry .torre , n. , 1997 .barra market impact model handbook .berkeley : barra inc .toth , b. , lemperiere , y. , deremble , c. , de lataillade , j. , kockelkoren , j. and bouchaud , j - p . , 2011 .anomalous price impact and the critical nature of liquidity in financial markets .phyiscal review x , 1 , 021006 ; http://arxiv.org/abs/1105.1694 .toth , b. , lillo , f. and farmer , j.d . 
, 2010 .segmentation algorithm for non - stationary compound poisson processes .technical report .santa fe institute .toth , b. , palit , i. , lillo , f. and farmer , j. d. , 2011 .why is order flow so persistent ?technical report .arxiv:1108.1632 .vaglica , g. , 2008 . scaling laws of strategic behavior and specialization of strategies in agent dynamics of a financial market . thesis .university of palermo .vaglica , g. , lillo , f. , moro , e. and mantegna , r. , 2008 . scaling laws of strategic behavior and size heterogeneity in agent dynamics .physical review e. , 77 , 0036110 .viswanathan , j. and wang , james j. d. , 2002 .market architecture : limit order books versus dealership markets .journal of financial markets , 5 , 127167 .weber , p. and rosenow , b. , 2006 .large stock price changes : volume or liquidity ?quantitative finance , 6 , 714 .wyart , m. , bouchaud , j .-, kockelkoren , j. , potters , m. and vettorazzo , m. , 2006 .relation between bid - ask spread , impact and volatility in double auction markets .technical report .zhang , y. c. , 1999 . toward a theory of marginally efficient markets .physica a , 269 , 3044 .* proposition 1 . * _ the martingale condition implies zero overall profits , i.e. _ \equiv\sum_{n=1}^\infty p_n \pi_n=0 .e_1[n\pi_n]\equiv\sum_{n=1}^m p_n n \pi_n=0 .\label{appeq1}\ ] ] * proof . * given the definition of and we can write the prices as with these expressions for , can be rewritten as where in the last equality we have used the martingale condition of eq .( [ returnratio ] ) . for profit per share is by substituting these two last expressions in eq .( [ appeq1 ] ) we obtain =\sum_{n=1}^{m-1}n\tilde r_n \sum_{i= n+1}^{m } p_i-\sum_{n=1}^{m-1}p_n\sum_{i=1}^{n-1}i\tilde r_i - p_m\sum_{i=1}^{m-1 } i\tilde r_i = \nonumber \\= \sum_{n=1}^{m-1}n\tilde r_n \sum_{j = n+1}^{m } p_j-\sum_{n=1}^{m}p_n\sum_{i=1}^{n-1}i\tilde r_i\end{aligned}\ ] ] by explicitly computing the coefficients of each , it is easy to show they vanish for each , i.e. =0 ] goes to zero , to a finite value , or diverges .this result shows that in the infinite support case the martingale condition does not imply zero overall immediate profits .* proposition 2 . * _ if the second derivative of the immediate impact is bounded strictly below zero , in the limit where the number of informed traders , any nash equilibrium must satisfy the fair pricing condition for . on average market makers profit from orders of length one and take ( equal and opposite ) losses from orders of length ._ the strategy of the proof is to show that counterexamples for which result in contradictions .first consider a candidate equilibrium with for some value of , where .assume a long - term trader buys shares at an average price .after averaging over the shares are subsequently valued at price .the profit is if long - term trader increases her order size by one share while all others hold their order size constant , her profit becomes the change in profit can be written the first term on the right ( ) represents the additional profit if it were possible to trade one extra share at the same average price , and the second term represents the reduction in profit because the average price increases . since there is nothing to distinguish the long - term traders , the equilibrium must be symmetric, i.e. they all make the same decision .thus if there are long - term traders buying shares and a day trader buying a random number of shares , at equilibrium . thus if is large , is also large . 
in the limit as is large the second term vanishes if this is true providing the second derivative of the function is bounded strictly below zero .thus in this limit and the candidate equilibrium fails because the informed trader has an incentive to deviate . similarly if the informed traders take a loss which can be reduced by trading less .when ( and as before ) no informed trader has an incentive to change her order size .this is clear since in eq .( [ deltaprofit ] ) with the change in profit is given by the second term alone , which is always negative .a similar calculation shows that this is also true for decreasing order size , i.e. when , causes .the cases and have to be examined separately because in these cases the fair pricing condition is incompatible with the martingale condition and informational efficiency ( i.e. with the conditions on the final price ) . for market makers profit is and from ( eq .[ shortterm ] ) the martingale condition is thus if , satisfaction of both the martingale condition and the fair pricing condition requires that , or equivalently that = 0 . in other words , if both conditions are satisfied then both the permanent and the temporary impact on the first step are identically zero , which would violate informational efficiency since . in section [ impactsolution ]we show by construction that this holds for all , i.e. it is clear in eq .( [ final ] ) and ( [ generalpermanentimpact ] ) that the impacts and are identically zero if . to have sensible impact functions we must have , which means that market making is profitable on the first timestep , so that they always make a profit .this is false : the profit from a metaorder of length one is , where is a small number . in contrast , if a market maker participates only in the first trade of a large metaorder , her profit is , where is a large number . thus while , . ] .similarly if the martingale condition implies , i.e. no reversion , and since is an increasing function the market maker takes a loss assuming for , the market makers profit and loss are related by eq .( [ breakevenonaverage ] ) as for realistic size distributions we expect metaorders of size one to be much more common than those of size , i.e. . the ratio of the total profits is thus the market maker receives frequent but small profits on metaorders of length one and rare but large losses for metaorders of length .long - term traders will rationally abstain from taking a loss on metaorders of length by simply not participating when they receive signals that are too weak ; thus , the trading volume at is due entirely to the day trader .similarly , although , the long - term traders are unable to improve their profits by trading more , since we have bounded the total amount an individual can trade at so they are blocked from further increase .thus the violations of the fair pricing condition when and occur naturally due to the institutional constraints that we have assumed and do not invalidate the equilibrium .* proposition 3 . * _ the system of martingale conditions ( eq . [ shortterm ] ) and fair pricing conditions ( eq . [ fairpricing ] ) has solution _ * proof . * the solution of eq .( [ solution2a ] ) is a direct consequence of the martingale conditions ( eq . [ shortterm ] ) .the total profit of metaorders of length can be rewritten as ( see proof of proposition 1 ) the fair pricing conditions ( eq . [ fairpricing ] ) state that for it is , i.e. 
this is a recursive equation which determines once is given ( note that this equation does not hold for because we do not have fair pricing for metaorders of length one ) .the solution of this equation is and we prove it by induction .we assume that the solution holds for and we prove that it is true for .( [ solreceqa ] ) holds for we can rewrite eq . ( [ receqa ] ) for as now by expanding the sum in brackets it is direct to show that since , by definition , the first two terms in the right hand side cancel and thus one obtains eq .( [ solreceqa ] ) .this equation is equivalent to eq .( [ solutiona ] ) .in fact i.e. our thesis , eq .( [ solutiona ] ) .as already discussed briefly in section [ finitesize ] , if the condition is violated this has an effect on the impact . in this section we consider the exact case of a finite support pareto distribution .we show that when we obtain the same results of the previous section and we discuss what happens when .we assume that the metaorder size distribution is a truncated pareto distribution for all , i.e. where the normalization constant is the harmonic number of order .for the truncated pareto distribution the probability that a metaorder of size will continue is where is the generalized riemann zeta function ( also called the hurwitz zeta function ) . for small function increases meaning that it is more and more likely that the order continues . in the regime of , is well approximated by the expression of eq .( [ calpapprox ] ) for an infinite support pareto distribution .however , around , starts to decrease meaning that it becomes more and more likely that the order is going to stop soon , with a corresponding effect on the impact .the immediate impact can be easily calculated once the distribution of metaorder size is known by using eq .( [ final ] ) . for truncated pareto distributed metaorder sizes , is ( for ) is for large but it is which is the same scaling as the infinite support pareto distribution ( see eq .( [ temporaryimpact ] ) ) .the same holds true for the permanent impact .we have therefore shown that when the finite support of the metaorder size distribution is irrelevant and we obtain approximately the same results as in section [ paretosec ] .the finite size effects and the role of the finiteness of the support becomes relevant when .figure [ figimpact ] shows the total impact for and different values of .it is clear that the impact is initially described by a power law , but then it becomes strongly convex when the order length becomes comparable with the maximal length . | we develop a theory for the market impact of large trading orders , which we call _ metaorders _ because they are typically split into small pieces and executed incrementally . market impact is empirically observed to be a concave function of metaorder size , i.e. the impact per share of large metaorders is smaller than that of small metaorders . we formulate a stylized model of an algorithmic execution service and derive a fair pricing condition , which says that the average transaction price of the metaorder is equal to the price after trading is completed . we show that at equilibrium the distribution of trading volume adjusts to reflect information , and dictates the shape of the impact function . the resulting theory makes empirically testable predictions for the functional form of both the temporary and permanent components of market impact . 
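The turnover of the continuation probability described here is easy to see numerically. The sketch below evaluates P_t for a Pareto distribution truncated at a modest maximal size M (the parameter values are illustrative): P_t first increases with t, the heavy-tail behaviour responsible for concave impact, and then falls towards one half and below as t approaches M, which is what bends the impact from an initial power law to the strongly convex shape discussed above. Feeding these values of P_t into the martingale and fair-pricing recursion sketched earlier in the main text would show the corresponding upturn of the impact near the maximal length.

```python
# Continuation probability P_t = Pr(N >= t+1 | N >= t) for a Pareto
# distribution truncated at N = M, illustrating the rise-then-fall behaviour
# described in this appendix.  Parameter values are illustrative assumptions.

alpha, M = 1.5, 200
p = [n ** -(alpha + 1.0) for n in range(1, M + 1)]   # truncated Pareto weights
surv = [0.0] * (M + 2)                               # surv[t] = Pr(N >= t), surv[M+1] = 0
for n in range(M, 0, -1):
    surv[n] = surv[n + 1] + p[n - 1]
P = {t: surv[t + 1] / surv[t] for t in range(1, M)}

for t in (1, 5, 20, 50, 100, 150, 180, 195, 199):
    print(f"t = {t:3d}   P_t = {P[t]:.3f}")
```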
based on the commonly observed asymptotic distribution for the volume of large trades , it says that market impact should increase asymptotically roughly as the square root of size , with average permanent impact relaxing to about two thirds of peak impact . |
in physics , the study of colloidal brownian motion has a long history beginning with einstein s famous paper in 1905 , and the understanding of its mechanism has been systematically developed in molecular kinetic theory .recently , experimental developments have enabled researchers to observe a single trajectory itself of brownian motion , which engages physicists in quantitative modelling of sub - micrometer systems , such as molecular motors .likewise , recent breakthroughs for computational technologies have enabled physicists to study microstructure of financial brownian motion in detail .they have applied their knowledge beyond material science and into studying in particular price movements in financial markets for about a quarter century . thereconsequently appeared new approaches as mentioned below , in contrast to conventional mathematical finance where a priori theoretical dynamics is assumed .there are three physical approaches to financial brownian motions as shown in figure [ fig : micro - meso - macro ] .the microscopic approach focuses on the dynamics of traders in the financial markets ( figure [ fig : micro - meso - macro]a ) .traders correspond to molecules in the modelling of materials , which enables both a numerical and a theoretical analysis of the macroscopic motion of prices .another approach is based on macroscopic empirical analyses of price time series ( figure [ fig : micro - meso - macro]c ) and the direct empirical modelling of price dynamics focusing on fat - tailed distributions and long - time correlations in volatility .the third approach focuses on mesoscopic dynamics concerning order - books ( figure [ fig : micro - meso - macro]b ) , which are accumulated buy sell orders initiated by traders in the price axis where deals occur either at the best bid ( buy ) or ask ( sell ) price defining the market prices. numerical simulations of markets become available by introducing models of order - book dynamics .recently , the mesoscopic approach developed considerably with the analysis of high frequency financial market data in which the whole history of orders is tractable using a direct analogy between order - books and colloids . from the order - book data , the importance of inertia was reported for market price , implying the existence of market trends . the langevin equation was then found to hold most of time showing that the fluctuation dissipation relation can be extended to order - book dynamics .however , microscopic mechanisms behind market trends have not been clarified so far because direct information is required on individual traders strategies . in the present paper , we analyse more informative order - book data in which traders can be identified in an anonymised way for each order so that we can estimate each trader s dynamics directly from the data . herewe present a minimal model validated by direct observation of individual traders trajectories for financial brownian motion .we first report a novel statistical law on the trend - following behaviour of foreign exchange ( fx ) traders by tracking the trajectories of all individuals .we next introduce a corresponding microscopic model incorporated with the empirical law .we reveal the mesoscopic and macroscopic behaviour of our model by developing a parallel theory to that for molecular kinetics : boltzmann - like and langevin - like equations are derived for the order - book and the price dynamics , respectively . 
a quantitative agreement with empirical findingsis finally presented without further assumptions .we analysed the high - frequency fx data between the us dollar ( usd ) and the japanese yen ( jpy ) from the 5th 16.00 to the 10th 20.00 gmt september 2016 on electronic broking services , one of the biggest fx platforms in the world .all trader activities were recorded for our dataset with anonymised trader ids with one - millisecond time - precision .the minimum price - precision was 0.005 yen for the usd / jpy pair at that time , and the currency unit in this paper is 0.001 yen , called the tenth pip ( tpip ) .the minimum volume unit for transaction was one million usd , and the total monetary flow was about billion usd during this week .the market adopts the double - auction system , where traders quote bid or ask prices . in this paper , we particularly focused on the dynamics of high - frequency traders ( hfts ) , who frequently submit or cancel orders according to algorithms ( see appendix [ app : def_hft ] for the definition ) .the presence of hfts has rapidly grown recently and of the total orders were submitted by the hfts in our dataset .we first illustrate the trajectories of bid and ask prices quoted for the top 3 hfts in figure .[ fig : trajectory_traders]a c .we observed that with the two - sided quotes typical hfts tend to play the role of liquidity providers ( called market - makers ) .we also observed that buy sell spreads ( i.e. , the difference between the bid and ask prices for a single market - maker ) fluctuated around certain time - constants , showing a strong coupling between these prices .indeed , the buy sell spread distributions exhibit sharp peaks for individual hfts as shown in the insets in figure [ fig : trajectory_traders]a c ( see also appendix [ app : buy - sell ] ) .we next report the empirical microscopic law for the trend - following strategy of individual traders .let us denote the bid and ask prices of the top hft by and ( see appendix [ app : trend - following ] for the definitions ) .we investigated the average movement of the mid - price between transactions conditional on the previous market price movement ( figure [ fig : trend_follow]a ) . for the top 20 hfts ( figure [ fig : trend_follow]b and c ) ,we find that the average movement is described by where the conditional average is taken when the previous price movement is and ( see appendix [ app : trend - following ] for the detail ) . and are constants characterizing the price movement and the saturation threshold against the market trend . 
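The displayed form of the law (eq. [eq:trendfollow]) is not reproduced in the text above; it is stated verbally that the conditional average of the quote movement is linear in the preceding market price change for small trends and saturates beyond a threshold, and the discussion section later refers to its hyperbolicity. As a rough illustration of how such a response can be estimated, the sketch below applies the same conditional-averaging procedure to synthetic data: the previous price change is binned, the subsequent mid-quote movements are averaged within each bin, and a saturating curve of the form c tanh(dp / dp*) is fitted. The tanh form and all parameter values are assumptions introduced here for illustration, not necessarily the exact functional form of the empirical law.

```python
import numpy as np

# Sketch: estimating a linear-then-saturating trend-following response by
# conditional averaging, on synthetic data.  The saturating form
# dz = c * tanh(dp / dp_star) and all parameter values are assumptions.
rng = np.random.default_rng(0)
c_true, dp_star_true, noise = 6.0, 10.0, 8.0      # [tpip], illustrative values

dp = rng.normal(0.0, 15.0, size=200_000)          # preceding market price changes
dz = c_true * np.tanh(dp / dp_star_true) + rng.normal(0.0, noise, size=dp.size)

bins = np.linspace(-50.0, 50.0, 41)               # conditional average <dz | dp>
idx = np.digitize(dp, bins)
centers, means = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 200:
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        means.append(dz[sel].mean())
centers, means = np.array(centers), np.array(means)

# crude grid-search fit of (c, dp*) to the binned conditional averages
grid_c = np.linspace(1.0, 12.0, 111)
grid_d = np.linspace(2.0, 30.0, 141)
best = min((np.mean((means - c * np.tanh(centers / d)) ** 2), c, d)
           for c in grid_c for d in grid_d)
print(f"fitted c ~ {best[1]:.2f} tpip (true {c_true}), "
      f"dp* ~ {best[2]:.2f} tpip (true {dp_star_true})")
```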
here, typical values are given using [ tpip ] and [ tpip ] .the empirical law ( [ eq : trendfollow ] ) implies that the reaction of traders is linear for small market trends but saturates for large market trends .remarkably , a similar behaviour was reported from a full macroscopic analysis of market price data at one - month precision .here we introduce a minimal microscopic model incorporating the empirical law ( [ eq : trendfollow ] ) .we make four assumptions : ( i ) the number of traders is sufficiently large .( ii ) they always quote both bid and ask prices ( for the trader , and ) simultaneously with a unit volume as market - makers .( iii ) buy sell spreads are time - constants unique to traders with distribution .the trader dynamics is then characterized by the mid - price .( iv ) trend - following random walks are assumed in the microscopic dynamics ( see figure [ fig : dealermodel]a c ) : with a constant strength for trend - following , white gaussian noise with constant variance , and a requotation jump after transactions ( figure [ fig : dealermodel]c ) . is defined by with jump size and the transaction time between traders and .the transaction condition at time is given by ( figure [ fig : dealermodel]b ) .this model can be assessed using parallel tools employed in molecular kinetic theory . in this theory , the boltzmann equation is first derived for the one - body velocity distribution from the hamiltonian dynamics by the method of bogoliubov , born , green , kirkwood , and yvon assuming molecular chaos .the langevin equation is derived in turn from the boltzmann equation for massive brownian particles .here we have followed the same mathematical procedure to elucidate the dynamics behind order - book profiles and financial brownian motion . for relative position from the centre of mass ,the boltzmann - like equation is first derived for the one - body probability distribution density conditional for a trader with a buy sell spread from the multi - agent dynamics ( [ eq : dealermodel ] ) : \label{eq : financialboltzmann}\ ] ] with and . 
here , ( ) represents transactions as bid ( ask ) orders .the average order - book profile is given by for the ask side .the integral term in equation ( [ eq : financialboltzmann ] ) corresponds to the collision integral in the conventional boltzmann equation .as the langevin equation is derived from it , then similarly the langevin - like equation is derived from the boltzmann - like equation ( [ eq : financialboltzmann ] ) , where and are the market price movement and transaction time interval at the tick time .the tick time is an integer time incremented by every transaction and the mean time interval between transactions is seconds in this week .the first and second terms on the right - hand side of eq .( [ eq : financiallangevin ] ) describe trend - following and random noise , respectively ; the trend - following term corresponds to the momentum inertia in the conventional langevin equation .equations ( [ eq : financialboltzmann ] ) and ( [ eq : financiallangevin ] ) can be solved for under an appropriate boundary condition ( see appendix [ app : boundaryconditions ] ) .we first set the buy sell spread distribution as with decay length }$ ] , empirically validated in our dataset ( figure [ fig : pricediff]a and appendix [ app : buy - sell ] ) .the average order - book profile is given for by .\label{eq : orderbook}\ ] ] the tail of the price movement is approximately given by with decay length , average movement from trend - following , average transaction interval , and complementary cumulative distribution .further technical details are to be published in a forthcoming paper .we next investigated the consistency between our microscopic model and our dataset .the empirical daily profile was first studied for the average order - book in figure [ fig : pricediff]b ( see appendix [ app : order - book ] for the detail ) .surprisingly , we found an agreement with our theoretical lines ( [ eq : orderbook ] ) without fitting parameter that strongly supports the validity of our description .we also empirically evaluated the two - hourly segmented cumulative distribution for the price movement in one - tick precision ( figure [ fig : pricediff]c ) , which obeys an exponential law that is consistent with our theoretical prediction ( [ eq : theory_pricemove ] ) . in our dataset ,the decay length is approximately constant over a two - hour period but varies over time during the week .to remove this non - stationary feature , we introduced the two - hourly scaled cumulative distribution with scaling parameters and ( figure [ fig : pricediff]d ) , thereby incorporating the two - hourly exponential - law for the whole week .the price movements obey an exponential law for short periods but simultaneously obeys a power - law over long periods with exponent ( figure [ fig : pricediff]e ) .this apparent discrepancy is explained by the power - law nature of the decay length . 
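As a rough numerical companion to these results, the sketch below simulates a minimal version of the microscopic dynamics (eq. [eq:dealermodel]): market-makers with fixed buy-sell spreads move their mid-quotes with a common trend-following drift plus independent Gaussian noise, and a transaction occurs whenever the highest bid reaches the lowest ask, updating the market price and hence the trend. The spread distribution, the tanh form of the trend term, the requotation rule (both counterparties are simply re-centred on the transaction price) and all parameter values are assumptions introduced for illustration; the sketch is a starting point rather than a faithful reproduction of the simulations behind the figures. It prints the width and tail weight of the one-tick price-change distribution and a mean-squared-displacement estimate in tick time, two of the quantities examined above.

```python
import numpy as np

# Minimal, schematic simulation of trend-following market-makers.  The spread
# distribution, the tanh trend term, the requotation rule and the parameter
# values are assumptions; see the lead-in above.
rng = np.random.default_rng(1)

N_traders = 50
L_star    = 15.0        # mean buy-sell spread [tpip]        (assumed)
c_trend   = 1.0         # trend-following strength [tpip]    (assumed)
dp_star   = 10.0        # trend saturation scale [tpip]      (assumed)
sigma     = 3.0         # quote noise per step [tpip]        (assumed)
n_steps   = 200_000

L = rng.exponential(L_star, size=N_traders)       # fixed spread of each trader
z = rng.normal(0.0, 5.0, size=N_traders)          # initial mid-quotes
last_dp, price = 0.0, 0.0
ticks = []                                        # transaction prices (tick time)

for _ in range(n_steps):
    drift = c_trend * np.tanh(last_dp / dp_star)  # common trend-following term
    z += drift + sigma * rng.normal(size=N_traders)
    i = int(np.argmax(z - L / 2))                 # trader with the best bid
    j = int(np.argmin(z + L / 2))                 # trader with the best ask
    if z[i] - L[i] / 2 >= z[j] + L[j] / 2:        # quotes cross: transaction
        new_price = 0.5 * ((z[i] - L[i] / 2) + (z[j] + L[j] / 2))
        last_dp, price = new_price - price, new_price
        ticks.append(price)
        z[i] = z[j] = price                       # requotation (assumed rule)

ticks = np.array(ticks)
dP = np.diff(ticks)                               # one-tick price changes
print(f"{ticks.size} transactions;  std(dP) = {dP.std():.2f} tpip,  "
      f"P(|dP| > 2 std) = {(np.abs(dP) > 2.0 * dP.std()).mean():.3f}")

lags = np.array([1, 2, 5, 10, 20, 50, 100])       # mean-squared displacement
lags = lags[lags < ticks.size]
msd = np.array([np.mean((ticks[l:] - ticks[:-l]) ** 2) for l in lags])
print("MSD(lag): " + ", ".join(f"{l}: {m:.1f}" for l, m in zip(lags, msd)))
slope = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"log-log slope of MSD vs tick-time lag ~ {slope:.2f} (1 = normal diffusion)")
```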
because approximately obeys a power - law cumulative distribution over the week with ( figure [ fig : pricediff]f ) , the one - week distribution obeys the power - law as a superposition of the two - hourly segmented exponential distribution , with .remarkably , tends to be long when the market is inactive ( figure [ fig : pricediff]g ) .we therefore obtain a consistent result with the previously reported power - law as a non - stationary property of .we note that our model can exhibit super - diffusion under an appropriate parameter set as with for short periods ( figure [ fig : pricediff]h ) , which is consistent with previous reports . herethe mean squared displacement is defined by as a function of tick time with the ensemble average .we also note that our model asymptotically shows ballistic behaviour when trend - following is sufficiently large , which implies that it plays the role of a momentum inertia " in financial markets .we further note that our model can show sub - diffusion ( ) when trend - following is sufficiently small .in this article , we have presented an intensive data analysis of anonymised traders in a foreign exchange market to directly observe strategies of hfts .we first report a simple empirical law characterizing trend - following behaviour of individual traders against market trends .a trend - following random walk model is correspondingly introduced as a microscopic dynamics of the financial market .the mesoscopic and macroscopic behaviours of this model are systematically analysed in a parallel calculation to molecular kinetic theory .our theoretical model reproduces the average order - book profile and the price movement distributions empirically .this work would be an important step toward unified description of financial markets from individual traders dynamics .we discuss here a possible reason behind the success of our kinetic - like theory for our model . in material physics ,mean - field approximations are invalid for low - dimensional systems because the low - dimensional geometry does not allow two - body correlations to disappear after collision .in contrast , in our model , traders are separated compulsorily after transactions , and there is little possibility for the same pair to enter successive transactions .the two - body correlation then quickly decays after collision assuming molecular chaos " is valid .this scenario implies that the kinetic - like description may work well in various socio - economic systems , in addition to the previously studied examples , such as opinion formation and wealth distribution .our report dealt mainly with short - duration trends ; their correlation with long - duration trends is a topic to future studies .of interest is the study of traders behaviour in unstable markets triggered by external shocks , as those that occur in financial crises and flash crashes .the economics reason behind the hyperbolicity ( [ eq : trendfollow ] ) in trend - following needs to be pursed further .we greatly appreciate icap for their provision of the ebs data .we also appreciate m. katori , h. hayakawa , s. ichiki , k. yamada , s. ogawa , f. van wijland , d. sornette , t. sano , and t. ito for fruitful discussions .this work was supported by jsps kakenhi ( grand no .16k16016 ) and jst , strategic international collaborative research program ( sicorp ) on the topic of ict for a resilient society " by japan and israel .99 einstein , a. 
ber die von der molekularkinetischen theorie der wrme geforderte bewegung von in ruhenden flssigkeiten suspendierten teilchen . __ 322 , 549 - 560 ( 1905 ) .chapman , s. & cowling , t. g. _ the mathematical theory of non - uniform gases _ ( cambridge univ .press , cambridge , 1970 ) .van kampen , n. g. _ stochastic processes in physics and chemistry _( amsterdam , north - holland , 2007 ) .van den broeck , c. , kawai , r. & meurs , p. microscopic analysis of a thermal brownian motor ._ 93 , 090601 - 090604 ( 2004 ) .huang , r. , chavez , i. , taute , k. m. , luki , b. , jeney , s. , raizen , m. g. & florin , e .-direct observation of the full transition from ballistic to diffusive brownian motion in a liquid . _ nat ._ 7 , 576 - 580 ( 2011 ) .li , t. , kheifets , s. , medellin , d. & raizen , m. g. measurement of the instantaneous velocity of a brownian particle ._ science _ 328 , 1673 - 1675 ( 2010 ) .noji , h. , yasuda , r. , yoshida , m. & kinosita , k. jr direct observation of the rotation of f1-atpase ._ nature _ 386 , 299 - 302 ( 1997 ) .black , f. & scholes , m. s. the pricing of options and corporate liabilities ._ j. political econ ._ 81 , 637 - 654 ( 1973 ) .engle , r. dynamic conditional correlation : a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. _ j. bus ._ 20 , 339 - 350 ( 2002 ) .takayasu , h. , miura , h. , hirabayashi , t. & hamada , k. statistical properties of deterministic threshold elements the case of market price . _physica a _ 184 , 127 - 134 ( 1992 ) .bak , p. , paczuski , m. & shubik , m. price variations in a stock market with many agents ._ physica a _ 246 , 430 - 453 ( 1997 ) .lux , t. & marchesi , m. scaling and criticality in a stochastic multi - agent model of a financial market ._ nature _ 397 , 498 - 500 ( 1999 ) .yamada , k. , takayasu , h. , ito , i. & takayasu , m. solvable stochastic dealer models for financial markets .e _ 79 , 051120 - 051131 ( 2009 ) .mantegna , r. n. & stanley , h. e. scaling behaviour in the dynamics of an economic index ._ nature _ 376 , 46 - 49 ( 1995 ) .mantegna , r. n. & stanley , h. e. _ an introduction to econophysics _( cambridge univ .press , 2000 ) .lux , t. the stable paretian hypothesis and the frequency of large returns : an examination of major german stocks .financial economics _ 6 463 - 475 ( 1996 ) .plerou , v. , gopikrishnan , p. , amaral , l. a. , meyer , m. & stanley , h. e. scaling of the distribution of price fluctuations of individual companies .e _ 60 , 6519 - 6528 ( 1999 ) .guillaume , d. m. , dacorogna , m. m. , dav , r. r. , mller , u. a. , olsen , r. b. & pictet , o. v. from the bird s eye to the microscope : a survey of new stylized facts of the intra - daily foreign exchange markets ._ finance stochast ._ 1 , 95 - 129 ( 1997 ) .longin , f. m. the asymptotic distribution of extreme stock market returns ._ j. business _ 69 , 383 - 408 ( 1996 ) .takayasu , m. , mizuno , t. & takayasu , h. potential force observed in market dynamics ._ physica a _ 370 , 91 - 97 ( 2006 ) .maslov , s. simple model of a limit order - driven market ._ physica a _ 278 , 571 - 578 ( 2000 ) .smith , e. , farmer , j. d. , gillemot , l. & krishnamurthy , s. statistical theory of the continuous double auction ._ quantitative finance _ 3 , 481 - 514 ( 2003 ) .bouchaud , j .-, mzard , m. & potters , m. statistical properties of stock order books : empirical results and models ._ quantitative finance _ 2 , 251 - 256 ( 2002 ) .yura , y. , takayasu , h. , sornette , d. & takayasu , m. 
financial brownian particle in the layered order - book fluid and fluctuation - dissipation relations . _ phys ._ 112 , 098703 - 098707 ( 2014 ) .hendershott , t. , jones , c. m. & menkveld , a. j. does algorithmic trading improve liquidity ? _ the journal of finance _ 66 , 1 - 33 ( 2011 ) .menkveld , a. j. high frequency trading and the new market makers ._ journal of financial markets _ 16 , 712 - 740 ( 2013 ) .ebs dealing rules appendix ebs market ( at the time of june 2016 ) .lemprire , y. , deremble , c. , seager , p. , potters , m. & bouchaud , j .-two centuries of trend following ._ journal of investment strategies ._ 3 , 41 - 61 ( 2014 ) .slanina , f. _ essentials of econophysics modelling ._ ( oxford university , oxford , 2014 ) .pareschi , l. & toscani , g. _ interacting multiagent systems : kinetic equations and monte carlo methods . _( oxford university press , oxford , 2013 ) .for this paper , we define a high frequent trader ( hft ) as a trader who submits more than 500 times a day on average ( i.e. , more than 2500 times for the week ) . asseveral traders are unwilling to transact and often interrupt orders at the instant of submission ( called flashing ) , we excluded traders with live orders of less than percent of the transaction time . with this definition ,the number of hfts was 134 during this week , whereas the total number of traders was 1015 .we note that the total number of traders who submitted limit orders was 922 ; the other 93 traders submitted only market orders .we calculated the percentage of two - sided quotes as follows : when a bid ( ask ) order is submitted by a trader , we check whether corresponding ask ( bid ) orders exist .we then count the number of two - sided quotes for all traders at every order submission and finally divide it by the total number of submissions .the difference in the median bid and ask prices was studied as a buy sell spread for an hft .samples where only both bid and ask prices exist are taken in the one - second time - precision for figure [ fig : trajectory_traders ] and [ fig : pricediff]a .we plotted standard deviations of the averages as error bars for each point .we remark on the precise definition of the bid ( ask ) price of individual hfts for the analysis of trend - following .if a trader quotes both single - bid and single - ask orders at any time , the bid and ask prices are defined literally . in the presence of multiple bid orask orders , we use the value of the median for the bid or ask orders as or . in the absence of any bid or ask orders , we use the previous bid or ask price as or for interpolation for figure .[ fig : trend_follow]b and c. we also remark that exceptional samples where the bid or ask price is far from the market price by 10 yen ( 0.0659% of the total ) are excluded from the calculation of the conditional ensemble average .we note that the standard deviations of the conditional averages are plotted for each point as error bars .also , median values in the top 20 hfts are given using [ tpip ] and [ tpip ] .for the boltzmann - like equation ( [ eq : financialboltzmann ] ) , we first introduce sufficiently large cutoffs at .the limit is taken with reflecting boundaries assumed at . 
a large cutoff limit is taken finally .for the langevin - like equation ( [ eq : financiallangevin ] ) , we make the following two assumptions .( i ) trend - following has the same order as random noise : .( ii ) saturation in trend - following ( [ eq : trendfollow ] ) is dominant : .we note that equation ( [ eq : trendfollow ] ) can be approximated for large fluctuations as under these conditions with the signature function defined by for and for .the daily average order - book profile is calculated for the hfts .we took snapshots of the order - book every second and its ensemble average every day .we also plotted standard deviations of the averages as error bars for each point .we take snapshots of the order - book after every transaction and count the total number of different trader ids for both bid and ask sides . the counting weight for an hft quoting both sidesis set to 1 and that for an hft quoting one side is 1/2 .we then plot the average of the number of trader ids for both bid and ask sides every two hours in figure [ fig : pricediff]g .the typical number of hft ids was about in our dataset with this definition .the number of total volumes quoted by hfts is typically about .admittedly , there is room for debate on which number is appropriate for the calibration of the total number of traders in our model ; it remains a topic for future study . | brownian motion has been a pillar of statistical physics for more than a century , and recent high - frequency trading data have shed new light on microstructure of brownian motion in financial markets . though evidences of trend - following behaviour of traders were indirectly shown in such trading data , the microscopic model has not been established so far by direct observation of trajectories for individual traders . in this paper , we present a minimal microscopic model for financial brownian motion through an intensive analysis of trajectory data for all individuals in a foreign exchange market . this model includes a novel empirical law quantifying traders trend - following behaviour that can create the inertial motion in market prices over short durations . we present a systematic solution paralleling molecular kinetic theory to reveal mesoscopic and macroscopic dynamics of our model . our model exhibits quantitative agreements with empirical results strongly supporting our analysis . |
complex networks can be observed in a wide variety of natural and man - made systems , and an important general problem is the relationship between the connection structure and the dynamics of these networks . with graph - theoretical approaches, networks may be characterized using graphs , where nodes represent the elements of a complex system and edges their interactions . in the study of brain dynamics , a node may represent the dynamics of a circumscribed brain region determined by electrophysiologic or imaging techniques .then two nodes are connected by an edge , or direct path , if the strength of their interaction increases above some threshold . among other structural ( or statistical ) parameters ,the average shortest path length and the cluster coefficient are important characteristics of a graph . is the average fewest number of steps it takes to get from each node to every other , and is thus an emergent property of a graph indicating how compactly its nodes are interconnected . is the average probability that any pair of nodes is linked to a third common node by a single edge , and thus describes the tendency of its nodes to form local clusters .high values of both and are found in regular graphs , in which neighboring nodes are always interconnected yet it takes many steps to get from one node to the majority of other nodes , which are not close neighbors . at the other extreme ,if the nodes are instead interconnected completely at random , both and will be low .recently , the emergence of collective dynamics in complex networks has been intensively investigated in various fields .it has for example been proposed that random , small - world , and scale - free networks , due to their small network distances , might support efficient and stable globally synchronized dynamics .synchronized dynamics , however , depends not only on statistical but also on spectral properties of a network , which can be derived from the eigenvalue spectrum of the laplacian matrix describing the corresponding network .although a number of studies reported on a correlation between statistical network properties ( such as degree homogeneity , cluster coefficient , and degree distribution ) and network synchronizability , the exact relationship between the propensity for synchronization of a network and its topology has not yet been fully clarified .one of the most challenging dynamical systems in nature is the human brain , a large , interacting , complex network with nontrivial topological properties .anatomical data , theoretical considerations , and computer simulations suggest that brain networks exhibit high levels of clustering combined with short average path lengths , which was taken as an indication of a small - world architecture .a disorder of the brain that is known to be particularly associated with changes of neuronal synchronization is epilepsy along with its cardinal symptom , recurrent epileptic seizures .seizures are extreme events with transient , strongly enhanced collective activity of spatially extended neuronal networks . despite considerable progress in understanding the physiological processes underlying epileptic dynamics , the network mechanisms involved in the generation , maintenance , propagation , and termination of epileptic seizures in humans are still not fully understood .there are strong indications that seizures resemble a nonlinear deterministic dynamics , and recent modeling studies indicate the general importance of network topology in epilepsy . 
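for readers who wish to reproduce these graph quantities, a short sketch using networkx and numpy is given below; the example graphs (a regular ring lattice and a slightly rewired small-world variant) are arbitrary and only illustrate how the average shortest path length, the cluster coefficient and the laplacian eigenratio are obtained from a connected graph.

```python
# Sketch of the graph measures used below: average shortest path length L,
# cluster coefficient C, and the synchronizability eigenratio lambda_N/lambda_2
# of the graph Laplacian.  The example graphs are arbitrary illustrations.
import numpy as np
import networkx as nx

def graph_measures(G):
    """Average shortest path length, cluster coefficient and Laplacian eigenratio."""
    assert nx.is_connected(G), "the measures are evaluated here on connected graphs only"
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    lap = nx.laplacian_matrix(G).toarray().astype(float)
    ev = np.linalg.eigvalsh(lap)        # 0 = ev[0] <= ev[1] <= ... <= ev[-1]
    return L, C, ev[-1] / ev[1]         # eigenratio lambda_N / lambda_2

# regular ring lattice versus a slightly rewired (small-world) graph
ring = nx.watts_strogatz_graph(n=60, k=6, p=0.0, seed=0)
sw = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=0)
for name, G in (("regular", ring), ("rewired", sw)):
    L, C, r = graph_measures(G)
    print(f"{name:8s} L = {L:.2f}  C = {C:.2f}  eigenratio = {r:.1f}")
```

for the seizure networks analysed below, the same quantities are simply evaluated on the thresholded adjacency matrix of every analysis window.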
clinical and anatomic observations together with invasive electroencephalography and functional neuroimaging now provide increasing evidence for the existence of specific cortical and subcortical _ epileptic networks _ in the genesis and expression of not only primary generalized but also focal onset seizures .an improved understanding of both structure and dynamics of epileptic networks underlying seizure generation could improve diagnosis and , more importantly , could advice new treatment strategies , particularly for the 25% of patientswhose seizures can not be controlled by any available therapy . in order to gain deeper insights into the global network dynamics during seizures we study in a time resolved manner statistical and spectral properties of functionally defined seizure networks in human epileptic brains .we observe that , while seizures evolve , statistical network properties indicate a concave - like movement between a more regular ( during seizures ) and a more random functional topology ( prior to seizure initiation and already before seizure termination ) .network synchronizability , however , is drastically decreased during the seizure state and increases already prior to seizure end .we speculate that network randomization , accompanied by an increasing synchronization of neuronal activity may be considered as an emergent self - regulatory mechanism for seizure termination .we retrospectively analyzed multichannel ( channels ) electroencephalograms ( eeg ) that were recorded prior to , during , and after one - hundred focal onset epileptic seizures from 60 patients undergoing pre - surgical evaluation for drug - resistant epilepsy .seizure onsets were localized in different anatomical regions .all patients had signed informed consent that their clinical data might be used and published for research purposes .the study protocol had previously been approved by the ethics committee of the university of bonn .eeg data were recorded via chronically implanted strip , grid , or depth electrodes from the cortex and from within relevant structures of the brain , hence with a high signal - to - noise ratio .signals were sampled at 200 hz using a 16 bit analog - to - digital ( a / d ) converter and filtered within a frequency band of 0.5 to 70 hz . in order to minimize the influence of a particular referencing of electrodes on spatially extended correlations, we here applied a bipolar re - referencing prior to analyses by calculating differences between signals from nearest neighbor channels .we defined _ functional _ links between any pair of bipolar eeg channels and ( $ ] ) regardless of their anatomical connectivity using the cross - correlation function as a simple and most commonly used measure for interdependence between two eeg signals , which is both computationally efficient and , in light of the known correlation - based changes of synaptic strengths , also physiologically plausible .we used a sliding window approach ( see fig .[ fig : figure1 ] ) to estimate the elements of the normalized maximum - lag correlation matrix where denotes the cross - correlation function .this function yields high values for such time lags for which the signals and have a similar course in time .the sliding window had a duration = 2.5 s ( 500 sampling points ; no overlap ) , which can be regarded as a compromise between the required statistical accuracy for the calculation of the cross - correlation function and approximate stationarity within a window s length . 
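a sketch of this interdependence estimation is given below: within one window the channels are normalized to zero mean and unit variance, cross-correlated over a range of lags, and the maximum absolute correlation (together with the lag at which it occurs) is stored for every channel pair. the window length follows the text; the maximum lag, the channel count and the stand-in random signals are illustrative assumptions.

```python
# Sketch of the sliding-window maximum-lag cross-correlation matrix described
# above.  Real bipolar EEG would replace the random stand-in signals.
import numpy as np

def max_lag_corr_matrix(window, max_lag=20):
    """window: array of shape (n_channels, n_samples) for one analysis window."""
    n_ch, n_s = window.shape
    # normalise every channel to zero mean and unit variance
    z = (window - window.mean(axis=1, keepdims=True)) / window.std(axis=1, keepdims=True)
    rho = np.eye(n_ch)
    lag = np.zeros((n_ch, n_ch), dtype=int)
    mid = n_s - 1                                   # index of zero lag in the full correlation
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            cc = np.correlate(z[i], z[j], mode="full") / n_s
            seg = cc[mid - max_lag: mid + max_lag + 1]
            k = int(np.argmax(np.abs(seg)))
            rho[i, j] = rho[j, i] = np.abs(seg[k])  # maximum-lag correlation value
            lag[i, j] = k - max_lag                 # lag (in samples) at the maximum
            lag[j, i] = -lag[i, j]
    return rho, lag

# example: 2.5 s windows at 200 Hz (500 samples), no overlap, 10 stand-in channels
fs, win = 200, 500
rng = np.random.default_rng(2)
eeg = rng.standard_normal((10, 10 * fs))            # placeholder for bipolar EEG
matrices = []
for start in range(0, eeg.shape[1] - win + 1, win):
    rho, lag = max_lag_corr_matrix(eeg[:, start:start + win])
    matrices.append(rho)                            # thresholding into adjacency matrices follows
print(len(matrices), "windows analysed")
```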
denotes the normalized ( zero mean and unit variance ) eeg signal at channel . from time - resolved matrices we derived adjacency matrices by thresholding : elements on the main diagonal of were set to 0 in order to exclude self - connections of nodes . since our aim is to characterize the evolving _global _ network dynamics , we chose , for each time window , the highest possible threshold for which the resulting graph represented by was connected . starting from a threshold we gradually decreased , and we calculated , at each step , the second smallest eigenvalue of the corresponding laplacian matrix , whose elements are , where is the kronecker delta , and denotes the degree of node . is positive if and only if the graph is connected . following ref . we used to compute average shortest path length , cluster coefficient , and normalized edge density ( i.e , the actual number of edges in divided by the number of possible edges between nodes ) and assigned their values to the time point at the beginning of each window . to detect deviations from a random network topology we consider the ratios and and computed and for a random graph with a preserved degree distribution and an identical average number of edges per node . in order to investigate synchronizability during seizures, we analyzed again in a time - resolved manner the spectrum of the laplacian matrix .as a measure for the stability of the globally synchronized state of a connected graph we consider the eigenratio , where denotes the smallest non - vanishing eigenvalue and the maximum eigenvalue of .a network is said to be less synchronizable for larger values of and better synchronizable for smaller values of , the latter indicating a more stable globally synchronized state .in fig . [ fig : figure2 ] we show typical time courses of and . given our thresholding , which can yield a different number of links for every time window , we show , in addition , the time course of . already prior to the electrographic seizure onset ( see ref . for a fully automated detection of electrographic seizure onset and seizure end ) both average shortest path length and cluster coefficient indicate a slight deviation from a random network , while edge density of the seizure network remains almost constant .approximately in the middle of the seizure this deviation is most pronounced and indicates a movement toward a more regular functional topology .interestingly , already prior to the electrographic seizure end we observe a movement away from the more regular functional topology , which extends into the post - seizure period .edge density slowly increases during the second half of the seizure and reaches an average value during the post - seizure period that is almost twofold the average value of the pre - seizure period .we note that this temporal evolution is also reflected in the dynamics of maximum degree .the difference between maximum and average degree ( ) remains almost constant at one quarter of the number of nodes while the minimum degree is 1 for all windows ( data not shown ) .the variability of the concave - like temporal evolution of network characteristics and for different seizures from the same patient was low , and moreover , was a consistent finding for all investigated seizures independently of the anatomical location of their onset ( cf . 
figs .[ fig : figure4]a and b ) .typical time courses for , , and during a seizure are shown in fig .[ fig : figure3 ] .again we observed a concave - like temporal evolution , with highest values of ( i.e. , lowest synchronizability ) in the middle of the seizure , followed by a decline ( i.e. , an increasing synchronizability ) already prior to the electrographic seizure end .although this behavior varied from patient to patient , it was a consistent finding for all seizures ( cf .[ fig : figure4]c ) . a comparison of fig .[ fig : figure3]b with fig .[ fig : figure3]c shows that is largely dominated by the dynamics of the smallest non - vanishing eigenvalue , and its decrease in the middle of the seizure may indicate a reorganization of the network into local sub - structures . in this case , sparsely occurring links between local sub - structures can significantly affect .the relative change of largest eigenvalue during the seizure is less pronounced as compared to that of and resembles the time course of edge density ( cf .[ fig : figure3]d ) .this similarity can be expected , at least to some extent , since edge density constitutes a lower bound for the largest eigenvalue , ( cf .we could , however , not observe such a clear cut influence of on and hence on . as regards the degree distribution , we again observed and for all windows and a temporal evolution of quite similar to the dynamics of ( data not shown ) indicating that the degree distribution of the seizure network does not appear to determine its synchronizability ( cf . ) . fig .[ fig : figure4 ] summarizes the dynamics of functional network properties and synchronizability for all one - hundred focal onset seizures .irrespective of the anatomical location of seizure origin , both the normalized cluster coefficient and the normalized average shortest path length rapidly increased during the first half of the seizures then gradually decreased again .interestingly , this temporal evolution was more pronounced for the normalized cluster coefficient than for the normalized average shortest path length .this indicates a relative shift toward a less random functional topology of the seizure state .seizures are usually associated with massively synchronized brain activity , and the significantly decreased synchronizability of the underlying functional topology may catalyze the emergence of a globally synchronized state of the epileptic brain .once such a state has been established , synchronizability increases again , as observed in our data .if the observed changes of functional topology were simply a consequence of enhanced volume conduction during the seizures , i.e. , due to direct propagation of distant sources to remote sensors , the covariance of the eeg signals would be expected to occur with zero time lag . in order to exclude the effect of a linear superposition of sources leading to a more regular - looking graph , we estimated the normalized frequency distribution of absolute time lag of maximal correlation ( cf .( [ eq : rho ] ) ) for all seizures , partitioned into 10 equidistant time bins ( cf . 
fig .[ fig : figure4 ] ) .we observed to peak in the range of 550ms ( see fig .[ fig : figure5 ] ) , which indicates that the observed changes of functional topology are not due to passive electromagnetic field effects in the extracellular space , but rather due to propagation of electrical activity along anatomical pathways .we have presented findings obtained from a time - resolved analysis of statistical and spectral properties of functionally defined networks underlying human epileptic seizures . despite the many influencing factors ( number of nodes , non - uniform arrangement of sensors , focus on particular brain regions , choosing a threshold for the extraction of functional networks , etc . )that impede an interpretation of graph - theoretical measures in a strict sense when analyzing field data , our results indicate that seizure dynamics irrespective of the anatomical onset location can be characterized by a relative transient shift toward a more regular and then back toward a more random functional topology .this is consistent with recent observations reported in ref . , where a small number of seizures that originated from a circumscribed brain region have been analyzed .we here observed that the changing functional network topology during seizures was accompanied by an initially decreased stability of the globally synchronized state , which increased already prior to seizure end . in a previous study we analyzed the same data set using multivariate time series analysis techniques from random matrix theory and observed that surprisingly global neuronal synchronization ( derived from the eigenvalue spectrum of the zero - lag correlation matrix ) significantly increased during the second half of the seizures , and before seizures stopped .our present findings indicate that such a global increase of neuronal synchronization prior to seizure end may be promoted by the underlying functional topology of brain dynamics .this corroborates the hypothesis that increasing synchronization of neuronal activity may be considered as an emergent self - regulatory mechanism for seizure termination .thus , our result can provide clues as to how to control seizure network , e.g. via pinning .while the aforementioned interpretation would indicate that the transient evolution in graph properties is an active process of the brain to abort a seizure , our findings could also be understood as a passive consequence of the seizure itself .the extremely intense firing of neurons during a seizure might lead to a saturation of the capacity of neurons to fire , particularly in brain areas with a high number of functional links .if such hubs are saturated and become silent , then the connections between local sub - structures are affected to a larger extent , which could lead to a segregation of the global network and to a decrease of global synchronizability .thus , a reliable identification of local sub - structures or hubs could improve understanding of mechanisms underlying the generation , maintenance , propagation , and termination of epileptic seizures . 
at present ,our findings are restricted to interdependences that can be assessed by the cross - correlation function .future studies will clarify whether additional information can be gained from analyses invoking other time series analysis techniques , including nonlinear ones , as well as techniques that take into account the direction of interactions .nevertheless , disentangling the interplay between connection structure and dynamics of the complex network human brain may advance our understanding of epileptic processes .we thank philip h. goodman and christof cebulla for helpful comments .k. s. was supported by a scholarship of the ssmbs ( schweizerische stiftung fr medizinisch - biologische stipendien ) donated by roche .s. b. was supported by the german national academic foundation ( studienstiftung ) .h. , c. e. e. , and k. l. acknowledge support from the deutsche forschungsgemeinschaft ( grant nos . sfb - tr3 sub - project a2 and le660/4 - 1 ) . | we assess electrical brain dynamics before , during , and after one - hundred human epileptic seizures with different anatomical onset locations by statistical and spectral properties of functionally defined networks . we observe a concave - like temporal evolution of characteristic path length and cluster coefficient indicative of a movement from a more random toward a more regular and then back toward a more random functional topology . surprisingly , synchronizability was significantly decreased during the seizure state but increased already prior to seizure end . our findings underline the high relevance of studying complex systems from the view point of complex networks , which may help to gain deeper insights into the complicated dynamics underlying epileptic seizures . * epilepsy represents one of the most common neurological disorders , second only to stroke . patients live with a considerable risk to sustain serious or even fatal injury during seizures . in order to develop more efficient therapies the pathophysiology underlying epileptic seizures should be better understood . in human epilepsy , however , the exact mechanisms underlying seizure termination are still as uncertain as are mechanisms underlying seizure initiation and spreading . there is now growing evidence that an improved understanding of seizure dynamics can be achieved when considering epileptic seizures as network phenomena . by applying graph - theoretical concepts , we analyzed seizures on the eeg from a large patient group and observed that a global increase of neuronal synchronization prior to seizure end may be promoted by the underlying functional topology of epileptic brain dynamics . this may be considered as an emergent self - regulatory mechanism for seizure termination , providing clues as to how to efficiently control seizure networks . * |
polarimetric imagery consists in forming an image of the state of polarization of the light backscattered by a scene .we consider in this paper that the scene is artificially illuminated with coherent light ( laser ) .for example , this illumination is used in active imagery in order to combine night vision capability and to improve image resolution for a given aperture size . in practice , using a coherent illumination produces speckle noise that deteriorate the image .however , the backscattered light gives information about the capability of the scene to polarize or depolarize the emitted light and thus allows one to determine the medium that compose the scene .these information can be described by a scalar parameter : the degree of polarization of light .this quantity is obtained in standard configurations of polarimetric systems using four pair of angular rotations of both a compensator and a polarizer .four transmittance are thus recorded that lead to the estimation of the degree of polarization . however , this system is complex and it is interesting to develop methods to estimate the degree of polarization that could reduce the number of images to register . in ,the authors proposed to estimate the degree of polarization with only two intensity images , however this method relies on the assumption that the measurements of the two components are uncorrelated which can be in some cases a too restrictive hypothesis .this paper extends the work of by taking into account the correlation of the different components .+ let us first introduce the context of the study .the electric field of the light at a point of coordinates * r * ( vector of 3 component ) in a 3d space and at time can be written , if we assume the light to propagate in a homogeneous and isotropic medium , as .e^{- i 2 \pi \nu t}\ ] ] where is the central frequency of the field and are unitary orthogonal vectors ( in the following bold letters represent vectors ) .+ the terms and are complex and define the random vector called jones vector .\ ] ] the state of polarization of light corresponds to the properties of at a particular point of space .it can be described by the covariance matrix \ ] ] where and define respectively the statistical average and the complex conjugate . for sake of brevity ,the following notations for are introduced = \frac{1}{c_1c_4-|c_2|^2 } \left[\begin{array}{ll } c_4 & -c_2 \\ -c_2^ * & c_1 \\\end{array}\right];\ ] ] \ ] ] let us note that this matrix can be diagonalized since it is hermitic .+ in the case of coherent light , the electric field is represented by the complex jones vector which follows a gaussian circular law where stands for the adjoint of the vector .+ the degree of polarization is defined by where and are the eigenvalues of ( ) .+ the degree of polarization is a scalar parameter that characterizes the state of polarization of the light : if , the light is said to be totally depolarized , and if , the light is said to be totally polarized . in the intermediate cases ,the light is partially polarized .+ with the notations introduced in ( [ cov ] ) one can show that the knowledge of this quantity allows one to study the way the illuminated scene polarized or depolarized the emitted light .this degree gives information about the nature of the medium in the scene . in the standard configuration ,four measurements are needed to estimate it . 
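a short numerical sketch of this definition is given below: the degree of polarization is computed both from the eigenvalues of the coherency matrix and from the equivalent closed form involving its determinant and trace; the example matrix is one of the covariance matrices listed later in the text and is otherwise arbitrary.

```python
# Sketch: degree of polarization P = (lam1 - lam2)/(lam1 + lam2), which equals
# sqrt(1 - 4 det(Gamma) / trace(Gamma)^2) for a 2x2 Hermitian coherency matrix.
import numpy as np

def degree_of_polarization(gamma):
    """Degree of polarization from the 2x2 coherency (covariance) matrix."""
    lam = np.linalg.eigvalsh(gamma)     # real eigenvalues of the Hermitian matrix, ascending
    return (lam[1] - lam[0]) / (lam[1] + lam[0])

gamma = np.array([[18.0, 7 + 8j],
                  [7 - 8j, 11.0]])      # example Hermitian matrix (gamma_4 of the simulations)
P_eig = degree_of_polarization(gamma)
P_det = np.sqrt(1.0 - 4.0 * np.linalg.det(gamma).real / np.trace(gamma).real ** 2)
print(P_eig, P_det)                     # the two expressions coincide (about 0.77 here)
```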
in the case of uncorrelated measurements , two intensity imagesgives a good estimation of this degree using the osci .however , in some cases the uncorrelation of the measurements may be not valid .+ in this paper , an original estimation method that both uses a pair of images and that accounts for correlation in the components is proposed .we recall in the following the method proposed in and we extend it to correlated measurements .we then compare them through statistical measures using simulated data .the results are also presented when the standard estimation of is used ( four images ) as this case is expected to give the best results .finally we conclude and give the perspectives of this work .in the case of two uncorrelated images , the orthogonal state contrast image ( osci ) is determined from , , where and are intensity measurements at the pixel site , assuming a lexicographic order for the pixels .+ these two images are obtained with simple polarimetric systems .first , the scene is illuminated by coherent light with single elliptical polarization state .then the backscattered light is analysed in the polarization state parallel ( which leads to and orthogonal ( which leads to to the incident one .+ the osci is an estimation of in each pixel provided that the materials of the scene modify the degree of polarization of incident light without modifying its principal polarized state ( . ] ) .let us recall that this kind of material is called pure depolarizer . for such materials ,the covariance matrix is diagonal and , since the diagonal terms represent the intensity images , the osci gives an estimation of with in the case of non pure depolarizer objects ( i.e. is non diagonal ) , the osci still reveals interesting contrast image but no more defines .this leads us to a new method that considers the cases of correlated measurements .this is the object of the following part .in the case of correlated measurements , the covariance matrix is non diagonal and of the form ( [ cov ] ) . in its standard estimation ,the degree of polarization needs four measurements , however , two images are sufficient to get an estimation of .indeed , the coefficient is obtained from one measurement , as the coefficient and the squared modulus of can be estimated from the cross - correlation coefficient between two measurements and .we have can be calculated by using the joined density probability function assuming that is gaussian circular ( i.e. the speckle is supposed to be fully developped ) .it can be shown that the correlation coefficient is obtained with where stands for the modulus of . calculating the centered correlation coefficient defined by after some simple algebra, we get thus the coefficient can be obtained from two measurements with .this remark leads to write the following property . + * property a : * _ for fully developped speckle fluctuations , the degree of polarization can be obtained from only two intensity images . 
_ + + one can easily note that the degree of polarization can be written as a function of the osci with from ( [ posci ] ) , the degree of polarization estimated from the osci is clearly under - estimated .thus , we can correct the osci in order to get an estimation of the degree of polarization using the correlation coefficient .+ in the following part , the different estimation of the degree of polarization are compared to the estimation with four images through simulated data .we generated experiments of samples of complex jones vectors which follow a gaussian circular law .the covariance matrix is known and thus is also known . under the assumption that the statistical average can be estimated by spatial averages in homogenous regions ,the coefficients can be estimated from a single image ( ) like the coefficient ( ) since where represents the component of the jones vector for the sample .+ the estimation of differs in the studied cases by the way is estimated .three different methods are used : * case of four images + in this situation , we have both the real and the imaginary part of the coefficient .the quantity is estimated by + * case of two images with the osci + in this case is assumed to be equal to zero .* case of two images with the proposed approach + the coefficient is estimated using ( [ cross_coef0 ] ) with + in the three cases , was estimated from the relation ( [ deg_pol ] ) with the estimated parameters . +in order to characterize the precision of the estimation , one considers six examples of matrix . ; \gamma_2=\left [ \begin{array}{cc } 16 & 0\\ 0 & 3.6 \end{array}\right];\ ] ] ; \gamma_4=\left [ \begin{array}{cc } 18 & 7 + 8i\\ 7 - 8i & 11 \end{array}\right];\ ] ] ; \gamma_6=\left [ \begin{array}{cc } 1.25 & 5.5i\\ -5.5i & 26 \end{array}\right];\ ] ] these matrices were chosen such as the degree of polarization are approximatively in .+ the simulations are performed for realisations of samples for the six covariance matrices . for the two matrices and , supplementary cases have been studied when .the results are presented in figures [ fig1 ] , [ fig2 ] , [ fig3 ] , [ fig4 ] , [ fig5 ] , [ fig6 ] .several points are important to notice .first of all , as expected , the best estimations of the degree of polarization , regarding all the cases tested , was achieved with four images .however the proposed approach that relies on two correlated images produces good estimations of the degree of polarization whatever the covariance matrix is used ( fig.[fig1 ] ) as soon as ( fig.[fig3 ] and fig.[fig5 ] ) . note that the estimation with the osci gives results which can not be used if the term is too high ( for example if is non negligible ) .fig.[fig2 ] , [ fig4 ] and [ fig6 ] show that the experimental variance using four measurements or the osci are comparable whatever and are , whereas the variance obtained with the proposed approach is larger. however this precision should be sufficient for some practical applications .this point should be studied in details in a future work .we have proposed a new approach to estimate the degree of polarization on polarimetric images degraded by speckle noise . 
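the following monte carlo sketch mirrors the comparison described above: circular complex gaussian jones vectors with a known covariance matrix are generated, and the degree of polarization is estimated from the full complex field (standing in for the four-image estimate), from the osci, and from the two intensity images together with their empirical covariance (the proposed approach, which uses the fact that the intensity covariance equals the squared modulus of the off-diagonal term for fully developed speckle). the sample size, the random seed and the choice of covariance matrix are illustrative assumptions.

```python
# Monte Carlo sketch comparing the three estimators of the degree of
# polarization; values chosen here are illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def p_from_moments(c1, c4, c2_sq):
    """Degree of polarization from the diagonal terms and |c2|^2."""
    return np.sqrt((c1 - c4) ** 2 + 4.0 * c2_sq) / (c1 + c4)

gamma = np.array([[18.0, 7 + 8j], [7 - 8j, 11.0]])    # true covariance matrix (example)
P_true = p_from_moments(gamma[0, 0].real, gamma[1, 1].real, abs(gamma[0, 1]) ** 2)

# circular complex Gaussian Jones vectors with covariance gamma
A = np.linalg.cholesky(gamma)
n = 1000
w = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2.0)
E = A @ w
I1, I2 = np.abs(E[0]) ** 2, np.abs(E[1]) ** 2         # the two intensity images

c1_hat, c4_hat = I1.mean(), I2.mean()
c2_sq_full = abs(np.mean(E[0] * np.conj(E[1]))) ** 2  # available when four measurements are made
c2_sq_two = max(np.cov(I1, I2)[0, 1], 0.0)            # cov(I1, I2) = |c2|^2 for circular Gaussian light

print("true P      :", round(P_true, 3))
print("four images :", round(p_from_moments(c1_hat, c4_hat, c2_sq_full), 3))
print("OSCI        :", round(abs(c1_hat - c4_hat) / (c1_hat + c4_hat), 3))
print("two images  :", round(p_from_moments(c1_hat, c4_hat, c2_sq_two), 3))
```

repeating such a run over many independent realizations gives the bias and variance comparisons summarised in the figures.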
assuming that the speckle is fully developed , this method allows one to estimate this degree with only two intensity images , whereas four images are needed in a standard experimental setup . this is of great interest in terms of reducing the cost of the imagery system , since the original setup can be simplified . the proposed approach has been tested on simulated data and compared to the standard estimation techniques that require either 4 images or 2 independent images ( osci ) . the results show that the proposed method gives a good approximation of the degree of polarization . this study needs to be extended with a theoretical analysis in order to specify the conditions of validity of the proposed approach . | active polarimetric imagery is a powerful tool for accessing the information present in a scene . indeed , the polarimetric images obtained can reveal polarizing properties of the objects that are not available using conventional imaging systems . however , when coherent light is used to illuminate the scene , the images are degraded by speckle noise . the polarization properties of a scene are characterized by the degree of polarization . in a standard polarimetric imagery system , four intensity images are needed to estimate this degree . if the measurements are assumed to be uncorrelated , this number can be decreased to two images using the orthogonal state contrast image ( osci ) . however , this assumption appears too restrictive in some cases . we thus propose in this paper a new statistical parametric method to estimate the degree of polarization from only two intensity images while allowing for correlated measurements . the estimators obtained from four images , from the osci and from the proposed method are compared using simulated polarimetric data degraded by speckle noise .
in the last fifteen years , power - law distributions , with their heavy - tailed and scale - invariant features , have become ubiquitous in scientific literature in general .heavy - tailed distributions have been fitted to data from a wide range of sources , including ecological size spectra , dispersal functions for spores , seeds and birds , and animal foraging movements . in the latter case ,the fit of a heavy - tailed distribution has been used as evidence that the optimal foraging strategy is a lvy walk with exponent .however , the appropriateness of heavy - tailed distributions for some of these data sets , and the methods used to fit them , have recently been questioned by edwards _et al _ .in a literature search of studies on foraging movements , ecological size spectra and dispersal functions , the first 24 papers that fitted power - law distributions to data were chosen . in only six of these 24 papers was it clear that the authors correctly fitted a statistical distribution to the data , whereas 15 used a flawed fitting procedure ( and in the remaining three it was unclear ) .ten of the studies compared the goodness - of - fit of the power - law distribution to other types of distribution , but at least two of these ten used comparison methods that were invalid in this context . correctly fitting a power - law distribution to data is not difficult but requires a little more statistical knowledge than the standard linear regression and linear correlation techniques used in most cases . in this report ,the most common methods currently used to fit a power law to data are reviewed , and some of the drawbacks of these methods are discussed . a method that avoids these drawbacks , based on maximum likelihood estimation ,is then described in a step - by - step guide ( the reader is also referred to ) .two case studies are presented illustrating the advantages of this method .the probability density function ( pdf ) for a power - law distribution has the form this function is not defined on the range 0 to , but on the range to infinity , otherwise the distribution can not be normalised to sum to .this condition implies that the exponent is related to and by a common problem is to determine whether a particular sample of data is drawn from a power - law distribution and , if so , to estimate the value of the exponent . a standard approach tothis problem ( used by 14 of the papers in the literature sample ) is to bin the data and plot a frequency histogram on a log log scale .if the sample is indeed drawn from a power - law distribution and the correct binning strategy is used , there is a linear relationship between and , so the frequency plot will produce a straight line of slope .alternatively , one may use the cumulative distribution function ( cdf ) , which shows a linear relationship between and .so plotting ( _ i.e. 
_ the relative frequency of observations with a value greater than ) against ( called a rank frequency ( rf ) plot ) on a log log scale will give a straight line of slope .all data sets are subject to statistical noise , particularly in the tail of the distribution .hence a frequency chart will never produce a perfectly straight line , so some kind of fitting procedure is necessary .the most common method , used by 16 of the 24 studies surveyed , is to estimate the power - law exponent by using linear regression to find the line of best fit on the frequency histogram or rf plot .linear regression assumes that one is free to vary both the slope and intercept of the line in order to obtain the best possible fit .however , this is not true in the case of fitting a probability distribution , because of the constraint that the distribution must sum to .once the range of the data has determined , the power - law distribution has only one degree of freedom , as opposed to the two afforded by the naive linear regression or line of best fit approach .unless this constraint is explicitly acknowledged and respected , the fitting procedure will produce an incorrect estimate of the exponent and the fitted distribution will not be a pdf on the appropriate range ( see for example case study 1 ) .furthermore , the linear regression method does not offer a natural way to estimate the size of the error in the fitted value of ( very few of the studies surveyed made any attempt to do this ) .a point value for on its own is of questionable merit .a third criticism of the linear regression method of fitting a power law to some sample is that very rarely is any meaningful attempt made to judge the goodness - of - fit of the proposed model .the most common approach , taken by twelve of the surveyed works , is to provide the correlation coefficient .this is a measure of the strength of linear correlation between and , but is not a measure of the goodness - of - fit of a proposed model such as a power law .of course , in the absence of any _ a priori _ knowledge of the distributions of errors in the data , measuring goodness - of - fit of a single model is extremely difficult .however , it is possible to compare goodness - of - fit of two or more candidate models . in the work surveyed, nine papers explicitly compared the goodness - of - fit of the power - law distribution with other candidate models , but at least two of these nine used a comparison based on the flawed linear regression method and on , so in reality added little to the analysis . finally , although it is not necessary to bin the data to use a linear regression method , 14 of the 24 surveyed papers did so in their analyses , and nine used bins of equal width .it should be noted that , when a sample that does follow a power - law distribution is binned using fixed width bins , the log log plot of the pdf is not linear . to see a linear relationship one must use logarithmic width bins .this fitting approach is described in detail in , but the statistical results are often sensitive to the binning strategy ( in particular the logarithmic base ) used and the occurrence of empty bins is problematic .case study 1 illustrates some of the nonsensical conclusions that can result from binning the data . by far the best approach to fitting a power lawis not to bin the data in any way , but to use the raw data whenever possible . 
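a small numerical illustration of this pitfall is sketched below: a sample with a known exponent is drawn from a power law, binned with fixed-width bins, and fitted by an unconstrained straight line on the log log histogram (two free parameters), and the result is compared with the maximum likelihood estimate used in the guide that follows. the sample size, the exponent, the bin width and the histogram range are arbitrary choices.

```python
# Sketch: regression on a fixed-width binned log-log histogram versus the MLE
# of the power-law exponent, for a synthetic sample with a known exponent.
import numpy as np

rng = np.random.default_rng(4)
mu_true, x_min, n = 2.5, 1.0, 10000
x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (mu_true - 1.0))   # inverse-CDF power-law sample

# fixed-width bins and an unconstrained straight-line fit on the log-log histogram
counts, edges = np.histogram(x, bins=np.arange(x_min, 50.0, 1.0))
centres = 0.5 * (edges[:-1] + edges[1:])
keep = counts > 0                                                # empty bins cannot be logged
slope, intercept = np.polyfit(np.log(centres[keep]), np.log(counts[keep]), 1)

# maximum likelihood estimate of the exponent (see the guide that follows)
mu_mle = 1.0 + n / np.sum(np.log(x / x_min))

print("true exponent    :", mu_true)
print("regression value :", round(-slope, 2))   # typically distorted by the noisy tail bins
print("MLE value        :", round(mu_mle, 2))
```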
a simple technique for fitting a power - law distribution is to use maximum likelihood estimation ( mle ) .this method provides an unbiased estimate of the exponent .equally importantly , it provides an estimate of the error in the fitted value of , and allows the goodness - of - fit of different candidate models to be directly compared .furthermore , the method does not require the data to be binned in any way . in the step - by - step guidebelow , the method is used to compare the fit of a power - law distribution and an exponential distribution , , on the same range , but can be easily adapted to compare with other candidate distributions ( _ e.g. _ normal distribution , gamma distribution ) .the method proceeds in the following steps . 1 .* draw a rf plot . * to gain a picture of the distribution of the data , plot the fraction of observations greater than against on a log log plot by : * ordering the data from largest to smallest so that ; * plotting against ( for ) .2 . * choose .* in some cases , it may be desirable to examine only the ` tail ' of the distribution by first discarding from the sample all values less than some cut - off .typically , this is chosen so as to disregard the curved part of the rf plot on its left - hand side . if it is required to fit a distribution to the entire sample , take to be the smallest observation .3 . * calculate mle parameters and likelihoods : * * mle power - law exponent * log - likelihood for power - law model * mle exponential parameter * log - likelihood for exponential model + where is the number of data points in the ( truncated ) sample .see for a derivation of these formulae .* select the best model .* for each model , calculate the akaike information criterion ( aic ) and akaike weight ( ) , which are defined , for model , by : where is the number of parameters in model ( for the power - law and exponential models ) , is the total number of models being compared ( in this case , comparing a power - law model and an exponential model means that ) and is the smallest of the aic across all models .+ the akaike weight gives a measure of the likelihood that a particular model provides the best representation of the data .if then a power - law distribution provides a better fit to the data than an exponential distribution , and the estimated value of the exponent of the power law is .if then an exponential distribution is favoured over a power law on that range .calculate the error in the estimate of . * the standard deviation of the estimated power - law exponent may also be calculated : this gives the average size of the error in the estimated value of .six of the 24 surveyed studies correctly fitted the data and compared different candidate models using a method similar to the one described above .( it is interesting to note that these six were all found in the dispersal literature . )austin _ et al _ fitted power - law distributions to the movement lengths of grey seals ; figure [ fig : seals ] shows an example data set consisting of 96 observations .the original data are already binned , so it was necessary , for the re - analysis , to assume that all observations in the sample were at the midpoint of their respective bin . using linear regression, the line of best fit has equation ( in log coordinates ) , leading to an estimated value of the exponent of .however , it was omitted from that , in order for the distribution to sum to , this line of best fit implies an value of km . 
as the data are in the range km to km , it is nonsensical to fit a distribution with range . these data were re - analysed in . this work used logarithmic binning but , again , used linear regression to find the line of best fit and estimated the exponent to be . this exponent can not be correct as any power law with an exponent of less than can not be normalised to sum to and hence is not a probability distribution . this illustrates the major drawbacks of using linear regression on binned data : the fitted line may be sensitive to the size and number of bins used and to the occurrence of empty bins , and may result in a power law that is not an appropriate probability distribution . using the maximum likelihood method outlined above , the estimated exponent is ( with an expected error of ) . this explicitly presumes a minimum data value of km ( the smallest observation in the sample ) . however , comparing to an exponential distribution as described above shows that the exponential distribution ( with ) provides a much better model for this data set than a power law ( ) . hence , the power - law hypothesis for this data set can be confidently rejected . [ figure [ fig : seals ] : movement lengths ( in km ) on a log - log scale , together with the line of best fit ; this line is not a probability distribution on an appropriate range . ] the data set shown in appendix [ sec : appb ] contains the land area ( in hectares ) of 789 farms in the hurunui district of new zealand ; this data set was chosen for illustrative purposes . the rf plot for this data set is shown in figure [ fig : farms](a ) . we find the maximum likelihood estimate of the power - law exponent for a range of values of the minimum cut - off ( discarding all data less than ) using the method described above . by calculating the akaike weights , it is possible to determine whether an exponential or a power - law distribution provides the better fit to the data , for any given value of . a power law is favoured over an exponential ( ) for all choices of between and , but an exponential is favoured ( ) for . figure [ fig : farms](b ) shows the estimated power - law exponent for the different values : it is clear that the estimated exponent has a wide range of values from to , depending on the range of data used . it should also be noted that , if the aim is to find the best statistical model describing the entire data set ( _ i.e. _ equal to the smallest observation in the sample ) , then an exponential distribution ( with ) is clearly favoured over a power law ( ) . the dependence of the outcome of the analysis on the arbitrary selection of further illustrates the potential problems with fitting power - law distributions . [ figure [ fig : farms ] : ( a ) rf plot for the farm data ; ( b ) estimated power - law exponent against the cut - off . a power law provides a better fit to the data than an exponential distribution for cut - off values between and ; an exponential provides a better fit for values outside this range . ]
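for completeness, a sketch implementing the step-by-step guide above (maximum likelihood fits of the power-law and exponential models, akaike weights, and a scan over the cut-off as in case study 2) is given below. the formulas follow the guide in the text; the synthetic lognormal stand-in sample and the commented-out file path are placeholders for a real data set such as the farm sizes of appendix b.

```python
# Sketch of the MLE fitting guide above: power-law versus exponential fits
# above a cut-off x_min, with Akaike weights and a scan over x_min.
import numpy as np

def fit_and_compare(sample, x_min):
    """MLE fits of power-law and exponential tails above x_min, plus Akaike weights."""
    x = np.asarray(sample, dtype=float)
    x = x[x >= x_min]
    n = len(x)
    # power law: mu = 1 + n / sum(ln(x/x_min)); standard error (mu - 1)/sqrt(n)
    mu = 1.0 + n / np.sum(np.log(x / x_min))
    se = (mu - 1.0) / np.sqrt(n)
    logL_pow = n * np.log(mu - 1.0) - n * np.log(x_min) - mu * np.sum(np.log(x / x_min))
    # exponential on the same range: lambda = 1 / (mean(x) - x_min)
    lam = 1.0 / (x.mean() - x_min)
    logL_exp = n * np.log(lam) - lam * np.sum(x - x_min)
    # AIC (one free parameter per model) and Akaike weights
    aic = np.array([2.0 - 2.0 * logL_pow, 2.0 - 2.0 * logL_exp])
    delta = aic - aic.min()
    w = np.exp(-delta / 2.0)
    w /= w.sum()
    return mu, se, w[0]                          # w[0] is the weight of the power-law model

data = np.random.default_rng(5).lognormal(mean=4.0, sigma=1.8, size=789)  # synthetic stand-in
# data = np.loadtxt("farm_sizes.txt")            # placeholder path for the appendix-B farm sizes
for x_min in (data.min(), 1.0, 10.0, 100.0, 500.0):
    mu, se, w_pow = fit_and_compare(data, x_min)
    print(f"x_min = {x_min:7.2f}  mu = {mu:.2f} +/- {se:.2f}  "
          f"Akaike weight (power law) = {w_pow:.2f}")
```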
in this paper ,a simple step - by - step method for fitting a power law to a data set has been described .this method , based on maximum likelihood estimation , provides an unbiased estimate of the power - law exponent , as well as the expected error in this value , and assesses the goodness - of - fit of a power - law model compared to alternative candidate models such as the exponential distribution .the authors are grateful to environment canterbury for providing the data set on farm sizes , and to richard duncan and two anonymous referees for useful comments that helped to improve this paper .the following data set ( courtesy of environment canterbury ) contains the sizes ( in hectares ) of 789 farms in the hurunui district of new zealand .0.06 , 0.08 , 0.08 , 0.08 , 0.1 , 0.1 , 0.1 , 0.1 , 0.1 , 0.3 , 0.5 , 0.5 , 0.53 , 0.6 , 0.9 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1.3 , 1.5 , 1.9 , 1.9 , 2 , 2 , 2 , 2 , 2 , 2 , 2 , 2 , 2 , 2 , 2.3 , 2.9 , 2.9 , 2.9 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3.2 , 3.3 , 3.5 , 3.8 , 3.9 , 4 , 4 , 4 , 4 , 4 , 4 , 4 , 4 , 4 , 4.2 , 4.2 , 4.3 , 4.6 , 4.8 , 4.9 , 5 , 5 , 5 , 5 , 5 , 5.4 , 5.7 , 6 , 6 , 6 , 6 , 6 , 6 , 6.2 , 6.3 , 6.3 , 7 , 7 , 7 , 7 , 7 , 7.2 , 7.5 , 7.9 , 8 , 8 , 8 , 8 , 8 , 8 , 8 , 8.3 , 8.8 , 9 , 9 , 9 , 9 , 9.3 , 9.9 , 10 , 10 , 10 , 10 , 10 , 10.2 , 10.4 , 11 , 11 , 11 , 11 , 11 , 11.8 , 12 , 12 , 12 , 12 , 12 , 12 , 12 , 12 , 12.1 , 13 , 13 , 1 , 13 , 13.8 , 13.9 , 14 , 14 , 14 , 14 , 14.4 , 14.5 , 15 , 15 , 15 , 15 , 15 , 15 , 15.2 , 15.7 , 15.8 , 16 , 16 , 16 , 16 , 16 , 17 , 17 , 17 , 17 , 17 , 17.1 , 17.1 , 17.5 , 17.6 , 18 , 18 , 18 , 18 , 18 , 18 , 18 , 18 , 19 , 19 , 19 , 19 , 19 , 19 , 19 , 19 , 19 , 20 , 20 , 20 , 21 , 21 , 21 , 21.9 , 22 , 22 , 23 , 23 , 23 , 24 , 24 , 24 , 25 , 25 , 26 , 27 , 27 , 27 , 28 , 28 , 29 , 29 , 30 , 31 , 31 , 32 , 32 , 32 , 32 , 32 , 32 , 32.9 , 34 , 34 , 35 , 35 , 35 , 36 , 36 , 36 , 37 , 38 , 38 , 39 , 39 , 39 , 40 , 41 , 43 , 43 , 44.1 , 45 , 45 , 46 , 46 , 46 , 46 , 47 , 47 , 47 , 47 , 48 , 49 , 50 , 53 , 54 , 55 , 56 , 56.5 , 57 , 58 , 58 , 60 , 63 , 67 , 68 , 69 , 70 , 71 , 72 , 72 , 72 , 73 , 73 , 74 , 75 , 75 , 75 , 75 , 77 , 78 , 78 , 78 , 78 , 79 , 80 , 81 , 82 , 83 , 85 , 85 , 87 , 87 , 90 , 90 , 91 , 91 , 91 , 92 , 93 , 93 , 96 , 9 , 97 , 99 , 99 , 99 , 100 , 100 , 100 , 101 , 101 , 102 , 103 , 103 , 104 , 104 , 105 , 106 , 107 , 109 , 109 , 113 , 115 , 117 , 118 , 120 , 121 , 121 , 121 , 124 , 124 , 124 , 124 , 125 , 126 , 12 , 128 , 128 , 131 , 131 , 132 , 133 , 135 , 135 , 136 , 136 , 137 , 137 , 137 , 138 , 141 , 142 , 143 , 143 , 143 , 143 , 144 , 144 , 145 , 145 , 148 , 148 , 149 , 149 , 150 , 151 , 153 , 155 , 15 , 156 , 157 , 157 , 158 , 160 , 161 , 162 , 162 , 162 , 163 , 164 , 164 , 165 , 167 , 167 , 169 , 169 , 170 , 171 , 171 , 172 , 173 , 173 , 173 , 175 , 176 , 177 , 181 , 182 , 182 , 183 , 183 , 18 , 184 , 185 , 186 , 188 , 188 , 190 , 191 , 192 , 192 , 192 , 193 , 193.5 , 194 , 195 , 196 , 197 , 198 , 198 , 200 , 200 , 200 , 202 , 202 , 203 , 206 , 207 , 209 , 210 , 210 , 213 , 216 , 216 , 219 , 221 , 223 , 223 , 226 , 226 , 226 , 228 , 229 , 229 , 230 , 232 , 233 , 233 , 234 , 237 , 239 , 242 , 245 , 247 , 247 , 250 , 250 , 252 , 254 , 256 , 258 , 263 , 267 , 267 , 267 , 268 , 269 , 269 , 269 , 270 , 271 , 273 , 273 , 273 , 275 , 277 , 277 , 279 , 280 , 282 , 283 , 284 , 284 , 284 , 288 , 289 , 289 , 295 , 295 , 298 , 298 , 298 , 299 , 301 , 305 , 306 , 308 , 309 , 311 , 31 , 313 , 313 , 316 , 316 , 320 , 322 , 324 , 327 , 332 , 333 , 333 , 339 , 341 , 342 , 345 , 346 , 348 , 349 , 
349 , 352 , 353 , 354 , 354 , 355 , 357 , 361 , 362 , 364 , 365 , 365 , 365 , 366 , 36 , 367 , 372 , 372 , 375 , 376 , 379 , 380 , 384 , 384 , 385 , 388 , 388 , 388 , 390 , 391 , 392 , 394 , 395 , 395 , 396 , 398 , 399 , 401 , 401 , 401 , 404 , 405 , 410 , 410 , 415 , 416 , 423 , 42 , 428 , 430 , 431 , 431 , 432 , 440 , 443 , 443 , 448 , 448 , 450 , 453 , 456 , 457 , 461 , 461 , 464 , 466 , 471 , 476 , 478 , 480 , 496 , 497 , 498 , 502 , 504 , 504 , 505 , 511 , 512 , 513 , 51 , 515 , 517 , 518 , 520 , 524 , 526 , 529 , 533 , 540 , 542 , 546 , 550 , 550 , 552 , 552 , 553 , 555 , 555 , 558 , 563 , 565 , 565 , 567 , 567 , 571 , 571 , 576 , 578 , 580 , 581 , 583 , 587 , 58 , 591 , 597 , 597 , 597 , 598 , 602 , 611 , 612 , 614 , 614 , 616 , 630 , 635 , 635 , 640 , 640 , 645 , 650 , 652 , 652 , 661 , 662 , 667 , 681 , 685 , 695 , 699 , 719 , 720 , 731 , 732 , 741 , 75 , 752 , 760 , 763 , 764 , 769 , 771 , 774 , 776 , 785 , 790 , 792 , 792 , 804 , 807 , 814 , 824 , 843 , 844 , 849 , 859 , 874 , 886 , 892 , 892 , 892 , 923 , 923 , 932 , 935 , 938 , 942 , 952 , 95 , 955 , 958 , 975 , 980 , 984 , 989 , 996 , 999 , 1002 , 1004 , 1013 , 1040 , 1044 , 1054 , 1059 , 1064 , 1082 , 1094 , 1110 , 1117 , 1124 , 1153 , 1172 , 1191 , 1196 , 1201 , 1216 , 1219 , 1226 , 1261 , 1262 , 126 , 1278 , 1303 , 1312 , 1336 , 1339 , 1364 , 1371 , 1372 , 1379 , 1380 , 1387 , 1401 , 1403 , 1405 , 1431 , 1431 , 1439 , 1459 , 1477 , 1489 , 1491 , 1521 , 1534 , 1537 , 1544 , 1560 , 176 , 1593 , 1594 , 1607 , 1608 , 1631 , 1690 , 1765 , 1778 , 1780 , 1780 , 1798 , 1812 , 1815 , 1899 , 1917 , 1986 , 2081 , 2152 , 2200 , 2205 , 2288 , 2335 , 2440 , 2576 , 2758 , 2975 , 3378 , 3512 , 4126 , 4136 , 4300 , 4466 , 5299 , 5967 , 6700 , 8596 , 10336 .bartumeus , f. , peters , f. , pueyo , s. , marras , c. and catalan , j. , 2003 .helical lvy walks : adjusting searching statistics to resource availability in microzooplankton ., 100 , 1277112775 .bertrand , s. , burgos , j. m. , gerlotto , f. and atiquipa , j. , 2005 .lvy trajectories of peruvian purse - seiners as an indicator of the spatial distribution of anchovy ( _ engraulis ringens _ ) ._ ices j. marine sci ._ , 62 , 477482 .chapman , c. a. , chapman , l. j. , vulinec , k. , zanne , a. and lawes , m. j. , 2003 . fragmentation and alteration of seed dispersal processes : an initial evaluation of dung beetles , seed fate , and seedling diversity ._ bioone _ , 35 , 382393 .edwards , a. m. , phillips , r. a. , watkins , n. w. , freeman , m. p. , murphy , e. j. , afanasyev , v. , buldyrev , s. v. , da luz , m. g. e. , raposo , e. p. , stanley , h. e. and viswanathan , g. m. , 2007 .revisiting lvy flight search patterns of wandering albatrosses , bumblebess and deer ._ nature _ , 449 , 10441048 .katul , g. g. , porporato , a. , nathan , r. , siqueira , m. , soons , m. b. , poggi , d. , horn , h. s. and levin , s. a. , 2005 .mechanistic analytical models for long - distance seed dispersal by wind . _ am . naturalist _, 166 , 368381 .klein , e. t. , lavigne , c. , picault , h. , renard , m. and gouyon , p .- h . , 2006 .pollen dispersal of oilseed rape : estimation of the dispersal function and effects of field dimension ._ j. appl ._ , 43 , 141151 . , g. , mateos , j. l. , miramontes , o. , cocho , g. , larralde , h. and ayala - orozco , b. , 2004 .lvy walk patterns in the foraging movements of spider monkeys ( _ ateles geoffroyi _ ) ._ , 55 , 223230 . | heavy - tailed or power - law distributions are becoming increasingly common in biological literature . 
a wide range of biological data has been fitted to distributions with heavy tails . many of these studies use simple fitting methods to find the parameters in the distribution , which can give highly misleading results . the potential pitfalls that can occur when using these methods are pointed out , and a step - by - step guide to fitting power - law distributions and assessing their goodness - of - fit is offered . |
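For readers who want to reproduce the kind of fit described above, the following Python sketch implements the standard maximum-likelihood estimator for a continuous power-law tail, alpha_hat = 1 + n / sum(ln(x_i / x_min)), together with its expected error (alpha_hat - 1)/sqrt(n) and log-likelihoods for the power-law and an exponential alternative. It is a minimal illustration, not the paper's own code: the function names, the fixed choice of x_min, and the use of AIC to compare the two candidate models are assumptions made here for concreteness.

```python
import math

def fit_power_law_mle(data, x_min):
    """Continuous power-law tail p(x) ~ (x/x_min)**(-alpha) for x >= x_min.
    Returns the MLE of the exponent, its expected (asymptotic) error, and
    the number of points in the tail."""
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    s = sum(math.log(x / x_min) for x in tail)
    alpha = 1.0 + n / s
    return alpha, (alpha - 1.0) / math.sqrt(n), n

def loglik_power_law(data, x_min, alpha):
    """Log-likelihood of the continuous power-law tail model."""
    tail = [x for x in data if x >= x_min]
    s = sum(math.log(x / x_min) for x in tail)
    return len(tail) * math.log(alpha - 1.0) - len(tail) * math.log(x_min) - alpha * s

def loglik_exponential(data, x_min):
    """MLE fit and log-likelihood of an exponential tail, one possible
    alternative candidate model."""
    tail = [x for x in data if x >= x_min]
    lam = len(tail) / sum(x - x_min for x in tail)
    return len(tail) * math.log(lam) - lam * sum(x - x_min for x in tail)

# Illustrative usage on a data set such as the farm sizes listed above
# (x_min = 10.0 is an arbitrary cut-off chosen only for the example):
# alpha, err, n = fit_power_law_mle(farm_sizes, x_min=10.0)
# aic_pl  = 2 * 1 - 2 * loglik_power_law(farm_sizes, 10.0, alpha)
# aic_exp = 2 * 1 - 2 * loglik_exponential(farm_sizes, 10.0)
```

The model with the smaller AIC is favoured under this illustrative comparison; the paper's own goodness-of-fit assessment may use a different criterion.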
seasonal unit roots and seasonal heterogeneity often coexist in seasonal data . hence , it is important to design seasonal unit root tests that allow for seasonal heterogeneity . in particular ,consider quarterly data , generated by where are seasonally varying autoregressive ( ar ) filters , and have seasonally varying autocovariances . for more information on seasonal timeseries , see ghysels and osborn ( 2001 ) , and franses and paap ( 2004 ) .now suppose is a weakly stationary vector - valued process , and for all , the roots of are on or outside the unit circle .if for all , have roots at , , or , then respectively has stochastic trends with period , , or . to remove these stochastic trends , we need to test the roots at 1 , , or . to address this task , franses ( 1994 ) andboswijk , franses , and haldrup ( 1997 ) limit their scope to finite order seasonal ar data and apply johansen s method ( 1988 ) .however , their approaches can not directly test the existence of a certain root without first checking the number of seasonal unit roots . as a remedy ,ghysels , hall , and lee ( 1996 ) designs a wald test that directly tests whether a certain root exists .however , in their own simulation , the wald test turn out less powerful than the augmented hegy test .does hegy test work in the seasonally heterogeneous setting ? to the best of our knowledge , no literature has offered a satisfactory answer .burridge and taylor ( 2001a ) analyze the behavior of augmented hegy test when only seasonal heteroscadasticity exists ; del barrio castro and osborn ( 2008 ) put augmented hegy test in the periodic integrated model , a model related but different from model .no literature has ever touched the behavior of unaugmented hegy test proposed by breitung and franses ( 1998 ) , the important semi - parametric version of hegy test . since unaugmented hegy test does not assume the noise having an ar structure , it may suit our non - parametric noise in better . to check the legitimacy of hegy tests in the seasonally heterogeneous setting, this paper derives the asymptotic null distributions of the unaugmented hegy test and the augmented hegy test whose order of lags goes to infinity .it turns out that , the asymptotic null distributions of the statistics testing single roots at 1 or are standard .more specifically , for each single root at 1 or , the asymptotic null distributions of the augmented hegy statistics are identical to that of augmented dickey - fuller ( adf ) test ( dickey and fuller , 1979 ) , and the asymptotic null distributions of the unaugmented hegy statistics are identical to those of phillips - perron test ( phillips and perron , 1988 ) . however , the asymptotic null distributions of the statistics testing any combination of roots at 1 , , , or depend on the seasonal heterogeneity parameters , and are non - standard , non - pivotal , and not directly pivotable .therefore , when seasonal heterogeneity exists , both augmented hegy and unaugmented hegy tests can be straightforwardly applied to single roots at 1 or , but can not be directly applied to the coexistence of any roots . as a remedy, this paper proposes the application of bootstrap . in general , bootstrap s advantages are two fold .firstly , bootstrap helps when the asymptotic distributions of the statistics of interest can not be found or simulated . 
secondly ,even when the asymptotic distributions can be found and simulated , bootstrap method may enjoy second order efficiency .for the aforementioned problem , bootstrap therefore serves as an appealing solution .firstly , it is hard to estimate the seasonal heterogeneity parameters in the asymptotic null distribution , and to simulate the asymptotic null distribution .secondly , it can be conjectured that bootstrap seasonal unit root test inherits second order efficiency from bootstrap non - seasonal unit root test ( park , 2003 ) .the only methodological literature we find on bootstrapping hegy test is burridge and taylor ( 2004 ) .their paper centers on seasonal heteroscadasticity , designs a bootstrap - aided augmented hegy test , reports its simulation result , but does not give theoretical justification for their test .it will be shown ( remark [ re : seasonal iid bootstrap ] ) that their bootstrap approach is inconsistent under the general seasonal heterogeneous setting . to cater to the general heterogeneous setting, this paper designs new bootstrap tests , namely 1 ) seasonal iid bootstrap augmented hegy test , and 2 ) seasonal block bootstrap unaugmented hegy test . to generate bootstrap replicates , the first test get residuals from season - by - season augmented hegy regressions , and then applies seasonal iid bootstrap to the whitened regression errors .on the other hand , the second test starts with season - by - season unaugmented hegy regressions , and then handles the correlated errors with seasonal block bootstrap proposed by dudek , lekow , paparoditis , and politis ( 2014 ) .our paper establishes the functional central limit theorem ( fclt ) for both bootstrap tests .based on the fclt , the consistency for both bootstrap approaches is proven . to the best of our knowledge, this result gives the first justification for bootstrapping hegy tests under .this paper proceeds as follows .section 2 formalizes the settings , presents the assumptions , and states the hypotheses .section 3 gives the asymptotic null distributions of the augmented hegy test statistics , details the algorithm of seasonal iid bootstrap augmented hegy test , and establishes the consistency of the bootstrap .section 4 presents the asymptotic null distributions of the unaugmented hegy test statistics , specifies the algorithm of seasonal block bootstrap unaugmented hegy test , and proves the consistency of the bootstrap .section 5 compares the simulation performance of the two aforementioned tests .appendix includes all technical proofs .recall the quarterly data , generated by the seasonal ar model , where , .if for all , has roots on the unit circle , we suppose that all share the same set of roots on the unit circle , this set of roots on the unit circle is a subset of , and ; otherwise , suppose our data is a stretch of the process , . let and be the regression errors and regression coefficients of , respectively .more specifically , is the distance between and the vector space generated by , , and is the coefficient of the projection of on the aforementioned vector space . let , . denote by ar an autoregressive process with order , by vma a vector moving average process with infinite moving average order , and by varma a vector autoregressive moving average process with autoregressive order and moving average order . 
let be the real part of complex number .let be the largest integer smaller or equal to real number , and be the smallest integer larger or equal to .assump [ assump 1a ] assume where , ; the entry of , denoted by , satisfies for all and ; the determinant of has all roots outside the unit circle ; is a lower diagonal matrix whose diagonal entries equal 1 ; is a vector - valued white noise process with mean zero and covariance matrix ; and is diagonal .assumption [ assump 1a ] assumes that is vma with respect to white noise innovation .this is equivalent to the assumption that is a weakly stationary process with no deterministic part in the multivariate wold decomposition .the assumptions on and the determinant of ensure the causality and the invertibility of and the identifiability of .[ assump 1b ] assume where ; ; determinants of and have all roots outside the unit circle ; is the identity matrix ; is a lower diagonal matrix whose diagonal entries equal 1 ; is a vector - valued white noise process with mean zero and covariance matrix ; and is diagonal .assumption [ assump 1b ] restricts to be varma with respect to white noise innovation .compared to the vma model in assumption [ assump 1a ] , varma s main restraint is its exponentially decaying autocovariance .again , the assumptions on , and the determinant of and in assumption [ assump 1b ] ensure the causality and the invertibility of and the identifiablity of . at this stage only assumed to be a white noise sequence of random vectors .in fact , needs to be weakly dependent as well .assump [ assump 2a ] ( i ) is a fourth - order stationary martingale difference sequence with finite moment for some .( ii ) , , , , and , .[ assump 2b ] ( i ) is a strictly stationary strong mixing sequence with finite moment for some .( ii ) s strong mixing coefficient satisfies .notice the higher moment has , the weaker assumption we require on the strong mixing coefficient of in assumption [ assump 2b ] .the strong mixing condition in assumption [ assump 2b ] actually guarantees ( ii ) of assumption [ assump 2a ] ( see lemma [ boundedness ] ) .we tackle the following set of null hypotheses .the alternative hypotheses are the complement of the null hypotheses .indeed , the alternative hypotheses can be written as one - sided .recall we suppose that for all , the roots of are either on or outside the unit circle . since , by the intermediate value theorem , implies , implies , and implies . to further analyze the roots of , hegy ( hylleberg , engle , granger , and yoo , 1990 ) propose the partial fraction decomposition thus + substituting into , we get where indeed, relates to the root of , i.e. , ; hence the proposition below .[ prop : hegy ] by proposition [ prop : hegy ] , the test for the null hypotheses can be carried on by checking the corresponding .further , can be estimated by ordinary least squares ( ols ) .unfortunately , ols can not be readily applied to season by season , because in are not asymptotically orthogonal for any fixed .( see also ghysels and osborn , 2001 , p. 158 . 
) on the other hand , in non - periodic regression equations and are asymptotically orthogonal ( see lemma [ le : unaug real1 ] ) .so we wonder if the ols estimators based on and can be used to test the null hypotheses .when we regress with non - periodic regression equations and , the seasonally heterogeneous sequence is fitted in seasonal homogeneous ar models .consider , as an example , fitting in a misspecified ar(1 ) model .then , where .\label{eqn : tgamma}\ ] ] since is positive semi - definite , we can find a weakly stationary sequence with mean zero and autocovariance function .we call a misspecified constant parameter representation ( see also osborn , 1991 ) of , and will refer to this concept in later sections .in seasonally homogeneous setting where , the augmented hegy test detailed below copes with the roots of at , , and . by calculations similar to , hegy ( 1990 )get where augmentations , , pre - whiten the time series up to an order of . as the sample size ,let , so that the residual is asymptotically uncorrelated. let be the ols estimator of , be the t - statistics corresponding to , and be the f - statistic corresponding to and .other f - statistics , , , and can be defined similarly . in seasonally homogeneous configuration ,hegy ( 1990 ) proposes to reject if is too small , reject if is too small , reject if is too large , and reject other composite hypotheses if their corresponding f - statistics are too large .now we apply the augmented hegy test to seasonally heterogeneous processes .namely , we run regression equation with generated by .our results show that when testing roots at 1 or individually , the t - statistics , , and the f - statistics have standard and pivotal asymptotic distributions . on the other hand , when testing joint roots at 1 and , and when testing hypotheses that involve roots at , the asymptotic distributions of the t - statistics and the f - statistics are non - standard , non - pivotal , and not directly pivotable .[ aug real ] assume that assumption 1.b and one of assumption 2.a or 2.b hold .further , assume , , , and for some , . then under , the asymptotic distributions of , , and f - statistics are given by where , , , and , , , is a four - dimensional standard brownian motion .the asymptotic distributions presented in theorem [ aug real ] degenerate to the distributions in burridge and taylor ( 2001b ) and del barrio castro , osborn and taylor ( 2012 ) when is a seasonally homogeneous sequence with homoscedastic noise , and to the distributions in burridge and taylor ( 2001a ) when is a seasonally homogeneous finite - order ar sequence with heteroscedastic noise .[ pm1 ] notice s are standard brownian motions . when is seasonally homogeneous ( burridge and taylor , 2001b , del barrio castro et al . , 2012 ) , s are independent , so are the asymptotic distributions of and . on the other hand , when has seasonal heterogeneity , s are in general independent , so and are in general dependent , even asymptotically .hence , when testing , it is problematic to test and separately and calculate the level of the test with the independence of and in mind . 
instead , the test of should be handled with .further , because of the dependence of and , the asymptotic distribution of under heterogeneity is different from its counterpart when is seasonally homogeneous .hence , the augmented hegy test can not be directly applied to test .[ heter ] when is only seasonally heteroscedastic ( burridge and taylor , 2001a ) , does not occur in the asymptotic distributions of the f - statistics . on the other hand , when has generic seasonal heterogeneity , impacts firstly the correlation between brownian motions and , and secondly the weights and .as burridge and taylor ( 2001a ) point out , the dependence of the asymptotic distributions on weights and can be expected .indeed , is the partial sum of , while is the partial sum of . since these two partial sums differ in their variances , both and involve two different weights and .[ root ] theorem [ aug real ] presents the asymptotics when has all roots at 1 , , and .when has some but not all roots at 1 , , and , we let , , and calculate such that .the asymptotic distributions can be expressed with respective to and end up having the same form with those given in theorem [ aug real ] , where has all roots .[ power ] the preceding results give the asymptotic behaviors of the testing statistics under the null hypotheses . under the alternative hypotheses ,we conjecture the powers of the augmented hegy tests tend to one , as the sample size goes to infinity . to see this , we can without loss of generalityassume that has root at none of , or .then is stationary , and thus for , the corresponding to ( the misspecified constant parameter representation of ) are negative , due to proposition [ prop : hegy ] .we conjecture that for , the ols estimators in converge in probability to , and as a result the powers of the tests converge to one .see also theorem 2.2 of paparoditis and politis ( 2016 ) . to accommodate the non - standard , non - pivotal asymptotic null distributions of the augmented hegy test statistics ,we propose the application of bootstrap . in particular , the bootstrap replications are created as follows .firstly , we pre - whiten the data season by season to obtain uncorrelated noises .although these noises are uncorrelated , they are not white due to seasonally heteroscadasticity .hence secondly we resample season by season in order to generate bootstrapped noise , as in burridge and taylor ( 2001b ) .finally , we post - color the bootstrapped noise .the detailed algorithm of this seasonal iid bootstrap augmented hegy test is given below .[ seasonal iid bootstrap ] step 1 : calculate the t - statistics , , and the f - statistics , and from the augmented hegy test regression step 2 : record ols estimators , and residuals from the season - by - season regression step 3 : let .store demeaned residuals of the four seasons separately , then independently draw four iid samples from each of their empirical distributions , and then combine these four samples into the vector , with their seasonal orders preserved ; + + step 4 : set all corresponding to the null hypothesis to be zero . 
for example , set for all when testing roots at .let be generated by step 5 : get t - statistics , , and the f - statistics from the regression step 6 : run step 3 , 4 , and 5 for times to get sets of statistics , , and the bootstrapped f - statistics .count separately the numbers of , and than which , , and the f - statistics are more extreme .if these numbers are higher than , then we consider , , and the f - statistics extreme , and reject the corresponding hypotheses .it seems also reasonable to keep steps 1 , 2 , 3 , 5 , and 6 of the algorithm [ seasonal iid bootstrap ] , but change the generation of in step 4 to this new algorithm is in fact theoretically invalid for the tests of any coexistence of roots ( see remark [ pm1 ] , [ heter ] , and [ root ] ) , but it is valid for individual tests of roots at 1 or , due to the pivotal asymptotic distributions of and in theorem [ aug real ] . [ re : seasonal iid bootstrap ] if we keep steps 1 , 3 , 5 , and 6 of algorithm [ seasonal iid bootstrap ] , but run regression equations with seasonally homogeneous coefficients and in steps 2 and 4 , then this algorithm is identical with burridge and taylor ( 2004 ) .however , this algorithm can not in step 2 fully pre - whiten the time series , and it leaves the regression error serially correlated .when is bootstrapped by seasonal iid bootstrap , this serial correlation structure is ruined . as a result , differs from in its correlation structure , in particular , and the conditional distributions of the bootstrapped f - statistics differ from the distributions of the original f - statistics ( see remark [ pm1 ] and [ heter ] ) .now we justify the seasonal iid bootstrap augmented hegy test ( algorithm [ seasonal iid bootstrap ] ) .since the derivation of the real - world asymptotic distributions in theorem [ aug real ] calls on fclt ( see lemma [ le : unaug real1 ] ) , the justification of bootstrap approach also requires fclt in the bootstrap world . from now on ,let , , , , be the probability , expectation , variance , standard deviation , and covariance , respectively , conditional on our data .[ iid fclt ] suppose the assumptions in theorem [ aug real ] hold .let where , & \sigma_{2}^{\star}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum _ { t=1}^ { 4 t } ( -1)^{t}{\epsilon}_{t}^{\star}],\\ \sigma_{3}^{\star}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum _ { t=1}^ { 4 t } \sqrt{2}\sin(\frac{\pi t}{2}){\epsilon}_{t}^{\star } ] , & \sigma_{4}^{\star}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum _ { t=1}^ { 4 t } \sqrt{2}\cos(\frac{\pi t}{2}){\epsilon}_{t}^{\star}].\end{aligned}\ ] ] then , no matter which hypothesis is true , in probability as , where is a four - dimensional standard brownian motion . 
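For concreteness, the season-by-season resampling at the heart of Algorithm [seasonal iid bootstrap] can be sketched in a few lines of Python. Only steps 3 and 6 are shown: the augmented HEGY regressions of steps 1 and 2 and the recolouring of the resampled noise under the imposed null (step 4) depend on details elided in the text above, so they are left as inputs. The period of 4 reflects the quarterly setting; the function names and the use of numpy are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def seasonal_iid_resample(residuals, period=4, rng=None):
    """Step 3 of the seasonal iid bootstrap: demean the regression residuals
    season by season and redraw them iid *within* each season, keeping the
    seasonal ordering of the series intact."""
    rng = np.random.default_rng() if rng is None else rng
    res = np.asarray(residuals, dtype=float)
    out = np.empty_like(res)
    for s in range(period):
        pool = res[s::period] - res[s::period].mean()  # demeaned residuals of season s
        out[s::period] = rng.choice(pool, size=pool.size, replace=True)
    return out

def bootstrap_pvalue(stat, boot_stats, lower_tail=True):
    """Step 6: locate the original statistic in its bootstrap distribution.
    t-type statistics reject for small values, F-type for large values."""
    boot_stats = np.asarray(boot_stats)
    if lower_tail:
        return np.mean(boot_stats <= stat)
    return np.mean(boot_stats >= stat)
```

Within each season the demeaned residuals are redrawn independently with replacement, which corresponds to the "four iid samples from each empirical distribution" of step 3, while the seasonal order of the output series is preserved.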
by the fclt given by proposition [ iid fclt ] and the proof of theorem [ aug real ] , in probability the conditional distributions of , and to the limiting distributions of , and , respectively .since conditional on , is a finite - order seasonal ar process , the derivation of the conditional distributions of , and turns out easier than that of theorem [ aug real ] , and in particular does not involve the fourth moments of .hence the consistency of the bootstrap .suppose the assumptions in theorem [ aug real ] hold .let be the probability measure corresponding to the null hypothesis .then , the proceeding section our analysis focuses on the augmented hegy test , an extension of the adf test to the seasonal unit root setting .an important alternative of the adf test is the phillips - perron test ( phillips and perron , 1988 ) .while the adf test assumes an ar structure over the noise and thus becomes parametric , its semi - parametric counterpart , phillips - perron test , allows a wide class of weakly dependent noises .unaugmented hegy test ( breitung and franses , 1998 ) , as the extension of phillips - perron test to the seasonal unit root , inherits the semi - parametric nature and does not assume the noise to be ar .given seasonal heterogeneity , it will be shown in theorem [ unaug real ] that the unaugmented hegy test estimates seasonal unit root consistently under the very general vma class of noise ( assumption 1.a ) , instead of the more restrictive varma class of noise ( assumption 1.b ) , which is needed for the augmented hegy test .now we specify the unaugmented hegy test .consider regression let be the ols estimator of , be the t - statistic corresponding to , and be the f - statistic corresponding to and .other f - statistics , , , and can be defined analogously .similar to the phillips - perron test ( phillips and perron , 1988 ) , the unaugmented hegy test can use both and when testing roots at 1 or . as in the augmented hegy test , we reject if ( or ) is too small , reject if ( or ) is too small , and reject the joint hypotheses if the corresponding f - statistics are too large .the following results give the asymptotic null distributions of , , , and the f - statistics .[ unaug real ] assume that assumption 1.a and one of assumption 2.a or assumption 2.b hold .then under , as , where , , , , , , is a four - dimensional standard brownian motion , are defined in , , , , and .the results in theorem [ unaug real ] degenerate to the asymptotics in burridge and taylor ( 2001ab ) when is uncorrelated , and degenerate to the asymptotics in breitung and franses ( 1998 ) when is seasonally homogeneous .when is seasonally homogeneous ( breitung and franses , 1998 ) , the asymptotic distributions of and are independent . on the other hand ,when has seasonal heterogeneity , and are dependent , as what we have seen for augmented hegy test ( remark [ pm1 ] ) .hence , when testing , it is problematic to test and separately and calculate the level of the test with the independence of and in mind . instead , the test of should be handled with .the parameters have the same definition as in theorem [ aug hegy ] . since , and , the asymptotic distributions of and , , only depends on the autocorrelation function of , the misspecified constant parameter representation of .since can be considered as a seasonally homogeneous version of , we can conclude that the asymptotic behaviors of the tests for single roots at 1 or are not affected by the seasonal heterogeneity in . 
on the other side , the asymptotic distributions of the f - statisticsdo not solely depend on .hence , the test for the concurrence of roots at 1 and and the tests involving roots at are affected by the seasonal heterogeneity . to remove the nuisance parameters in the asymptotic distributions , we notice that the asymptotic behaviors of and , have identical forms as in phillips and perron ( 1988 ) . in light of their approach, we can construct pivotal versions of and , that converge in distribution to standard dickey - fuller distributions ( dickey and fuller , 1979 ) . more specifically ,for we can substitute any consistent estimator for and below : however , there is no easy way to construct pivotal statistics for , , , , and f - statistics such as .the difficulties are two - fold .firstly the denominators of the asymptotic distributions of these statistics contain weighted sums with unknown weights and ; secondly and are in general correlated standard brownian motions as in theorem [ aug real ] . the result in theorem[ unaug real ] can be generalized .suppose is not generated by , and only has some of the seasonal unit roots .let , and . then we can find such that .the asymptotic distributions of , , and the f - statistics have the same forms as those in theorem [ unaug real ] , with substituted by , and based on . as for the asymptotic results under the alternative hyphothese, we conjecture that the powers of the unaugmented hegy tests converge to one as sample size goes to infinity . as in remark[ power ] , we can assume without loss of generality that has no root at , , or .then for , the coefficient corresponding to ( the misspecified constant parameter representation of ) are negative , according to proposition [ prop : hegy ] .we conjecture that for , the ols estimators in converge to , and as a result the power of the tests tend to one .since many of the asymptotic distributions delivered in theorem [ unaug real ] are non - standard , non - pivital , and not directly pivotable , we propose the application of bootstrap . since the regression error of is seasonally stationary , we in particular apply the seasonal block bootstrap of dudek et al .the algorithm of seasonal block bootstrap unaugmented hegy test is illustrated below .[ seasonal block boot ] step 1 : get the ols estimators , , t - statistics , , and the f - statistics , and from the regression of the unaugmented hegy test step 2 : record residual from regression step 3 : let , choose a integer block size , and let . 
for , let where is a sequence of iid uniform random variables taking values in with and + + step 4 : set the corresponding to the null hypothesis to be zero .for example , set for all when testing roots at .generate by step 5 : get ols estimates , , t - statistics , , and f - statistics from regression step 6 : run step 3 , 4 , and 5 for times to get sets of statistics , , , , and .count separately the numbers of , , , , and than which , , , , and are more extreme .if these numbers are higher than , then consider , , , and extreme , and reject the corresponding hypotheses .[ sbb fclt ] let where , & \sigma_{2}^{*}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum _ { t=1}^ { 4 t } ( -1)^{t}v_{t}^{*}],\\ \sigma_{3}^{*}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum _ { t=1}^ { 4 t } \sqrt{2}\sin(\frac{\pi t}{2})v_{t}^ { * } ] , & \sigma_{4}^{*}&=std^{\circ}[\frac{1}{\sqrt{4t}}\sum_ { t=1}^ { 4 t } \sqrt{2}\cos(\frac{\pi t}{2})v_{t}^{*}].\end{aligned}\ ] ] if , , , then no matter which hypothesis is true , in probability , where is a four - dimensional standard brownian motion . by the fclt given by proposition [ sbb fclt ] , the proof of theorem [ unaug real ] , and the convergence of the bootstrap standard deviation ( dudek et al . , 2014 ), we have that the conditional distribution of , , and in probability converges to the limiting distribution of , , and , respectively .hence the consistence of the bootstrap .[ coro : unaug real ] suppose the assumptions in theorem [ unaug real ] hold .let be the probability measure corresponding to the null hypothesis .if , , , then focus on the hypotheses test for root at 1 ( against ) , root at ( against ) , and root at ( against ) . in each hypothesis test , we equip one sequence with all nuisance unit roots at 1 , , and , and the other with none of the nuisance unit roots .the detailed data generation processes are listed in table [ dgp ] . to produce power curves , we let parameter , 0.004 , 0.008 , 0.012 , 0.016 , and 0.020 .notice that is set to be seasonally homogeneous for the sake of simplicity .further , we generate six types of innovations according to table [ noise ] , where .the values of are assigned so that the misspecified constant parameter representation ( see section [ sec : settings ] ) of the `` period '' sequence has almost the same ar structure as the `` ar '' sequence ..data generation processes [ cols="^,^,^,^ " , ] now we present in figure [ fig : root1 ] , [ fig : root-1 ] , and [ fig : rooti ] the main simulation result of the seasonal iid bootstrap augmented hegy test and the seasonal block bootstrap unaugmented hegy test .this simulation includes two cases of nuisance roots ( see table [ dgp ] ) and six types of noises ( see table [ noise ] ) , and sets sample size , number of bootstrap replicates , number of iterations , and nominal size . 
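Before turning to the simulation results, it may help to sketch the resampling step of the seasonal block bootstrap used by the second test. The index sets in step 3 of Algorithm [seasonal block boot] are garbled in the extract above, so the Python sketch below only conveys the idea of the generalized seasonal block bootstrap of Dudek et al. (2014) as understood here: each block of length b is placed so that it starts in the same season as the position it fills, which preserves the periodic structure of the residuals. The block length, the truncation of the last block, and the assumption that b is much smaller than the sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

def seasonal_block_bootstrap(series, block_length, period=4, rng=None):
    """Glue together blocks of length `block_length`; each block must start
    in the same season as the position it is placed at, so the periodic
    (seasonal) structure of the resampled series matches the original."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(series, dtype=float)
    n = x.size
    out = np.empty(n)
    for t in range(0, n, block_length):
        # candidate starting indices sharing the seasonal phase of position t
        # (assumes block_length is small relative to n, so this set is non-empty)
        candidates = np.arange(t % period, n - block_length + 1, period)
        start = rng.choice(candidates)
        m = min(block_length, n - t)  # the last block may be truncated
        out[t:t + m] = x[start:start + m]
    return out
```

The resampled residuals would then be recoloured under the imposed null and passed through the unaugmented HEGY regression, exactly as in steps 4 and 5 above.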
when our data have a potential root at 1 , but no other nuisance roots at or , the power curves of the both bootstrap tests almost overlap , according to ( a)-(f ) in figure [ fig : root1 ] .further , both power curves start at the correct size , , and tend to one when departs from zero .hence both tests work well when no nuisance root occurs .when data have a potential root at 1 and all nuisance roots at and , the sizes of seasonal block bootstrap unaugmented hegy test are distorted in ( g ) , ( h ) , ( j ) , and ( l ) in figure [ fig : root1 ] .these distortions may result from the errors in estimating and the need to recover with the estimated .the size distortion in ( j ) is particularly serious , since the unit root filter is partially cancelled by the moving average ( ma ) filter , and this cancellation can not be handled well by block bootstrap ( paparoditis and politis , 2003 ) .in contrast , in ( l ) the filter is enhanced by the ar filters , thus the size is distorted toward zero . on the other hand ,seasonal iid bootstrap augmented hegy test is free of the size distortions when data have nuisance roots .this is in part because the test recovers using the true values of , namely zero , instead of using the estimated values .moreover , when both hegy tests have almost the correct sizes as in ( i ) and ( k ) , seasonal iid bootstrap augmented hegy test attains equal or higher powers .therefore , when testing the root at 1 , seasonal iid bootstrap augmented hegy test is recommended .( a)-(f ) have no nuisance roots ; ( g)-(l ) have all nuisance roots ; + blue dotted curve is for seasonal iid bootstrap ; red solid curve is for seasonal block bootstrap .[ fig : root1 ]now we come to the tests for root at .when none of the nuisance root at 1 or exists , the power curves of the two tests are very close to each other , as ( a)-(f ) in figure [ fig : root-1 ] indicate .this patterns of curves have been seen in ( a)-(f ) in figure [ fig : root1 ] , and indicate the nice performance of both tests .when nuisance roots are present , sizes of seasonal block bootstrap unaugmented hegy test are distorted in nearly all scenarios in ( g)-(l ) in figure [ fig : root-1 ] .in particular , the size distortion in ( i ) is the worst , because of the partial cancellation of the seasonal unit root filter and the ma filter .however , the power curves of seasonal iid bootstrap augmented hegy test start around the nominal size 0.05 in all of ( g)-(l ) .further , these curves tend to 1 , as grows larger .therefore , we recommend seasonal iid bootstrap test for testing root at .( a)-(f ) have no nuisance roots ; ( g)-(l ) have all nuisance roots ; + blue dotted curve is for seasonal iid bootstrap ; red solid curve is for seasonal block bootstrap .[ fig : root-1 ] finally we discuss the tests for roots at .with none of the nuisance root at 1 or ,( a)-(f ) in figure [ fig : rooti ] illustrate that both tests achieve sizes that are close to the nominal size , and powers that tend to one .when all of nuisance roots show up , both tests suffer from some size distortions .the empirical sizes of seasonal iid bootstrap augmented hegy test are biased toward zero in ( g)-(l ) ; the sizes of seasonal block bootstrap unaugmented hegy test are biased toward zero in ( g ) and ( h ) , but are biased toward one in ( j)-(l ) .on the other hand , seasonal block bootstrap unaugmented hegy test s empirical powers prevail throughout ( g)-(l ) , and therefore shall be recommended for testing roots at .( a)-(f ) have no nuisance 
roots ; ( g)-(l ) have all nuisance roots ; + blue dotted curve is for seasonal iid bootstrap ; red solid curve is for seasonal block bootstrap .[ fig : rooti ]in this paper we analyze the augmented and unaugmented hegy tests in the seasonal heterogeneous setting . given root at 1 or , the asymptotic distributions of the test statistics are standard .however , given concurrent roots at 1 and , or roots at , the asymptotic distributions are neither standard , pivotal , nor directly pivotable .therefore , when seasonal heterogeneity exists , hegy tests can be used to test the single roots at 1 or , but can not be directly applied to any combinations of roots . bootstrap proves to be an effective remedy for hegy tests in the seasonal heterogeneous setting .the two bootstrap approaches , namely 1 ) seasonal iid bootstrap augmented hegy test and 2 ) seasonal block bootstrap unaugmented hegy test , turn out both theoretically solid . in the comparative simulation study ,seasonal iid bootstrap augmented hegy test has better performance when testing roots at 1 or , but seasonal block bootstrap unaugmented hegy test outperforms when testing roots at . therefore , when testing seasonal unit roots under seasonal heterogeneity , the aforementioned bootstrap hegy tests become competitive alternatives of the wald - test proposed by ghysels et al .further study will be needed to compare the theoretical and empirical efficiency of the two bootstrap hegy tests and the wald - test by ghysels et al .the appendix includes the proof of the theorems in this paper .we first present the proof for the asymptotics of the unaugmented hegy test , then the asymptotics of the augmented hegy test , then the consistency of the seasonal iid bootstrap augmented hegy test , and finally the consistency of the seasonal block bootstrap unaugmented hegy test . thoughout the appendix ,let , , ] .similarly , it can be shown that =o(1) ] , and \stackrel{p}\rightarrow 0 ] .notice , ) + \frac{1}{\sqrt{4t}}\sum_{m=1}^{l}\sum_{s=-3}^{0}\sum_{t=1}^{b/4}(e^{\circ}v_{i_{m}+4t+s-1}-\frac{1}{t}\sum_{t=1}^{t}v_{4t+s})\\ & \mathrel{\phantom{=}}-\sum_{j=1}^{4}\sum_{s=-3}^{0}(\hat{\pi}_{j , s}-\pi_{j , s})\frac{1}{\sqrt{4t}}\sum_{m=1}^{l}\sum_{t=1}^{b/4}(j , y_{i_{m}+4t+s-1}-\frac{1}{t}\sum_{t=1}^{t}y_{4t+s})\\ & = a_{t}+b_{t}-\sum_{j=1}^{4}c_{t , j}\end{aligned}\ ] ] where and have obvious definitions .it is straightforward to show \stackrel { p } \rightarrow 0 ] for , and =var^{\circ}[\frac{1}{\sqrt{b}}\sum_{h=1}^{b}v_{i_{m}+h-1}] ] and $ ] converge in probability to constants ( dudek et al . , 2014 ) , we only need to show that \stackrel { p } \rightarrow 0.\ ] ] notice , \\ & = \frac{\sqrt{2}}{b(t - b/4)}\sum_{i=1}^{t - b/4}\sum_{h=1}^{b}\sum_{r=1}^{b}\sin(\pi r/2)v_{4i+h-4}v_{4i+r-4}\\ & = -a+b+o_{p}(1),\end{aligned}\ ] ] where the proof under assumption [ assump 1b ] is complete after showing by lemma [ martingale difference ] below .now consider assumption [ assump 2b ] .let .let ( ,,, be the eigenvalues of .it is sufficient ( wooldridge and white , 1988 , corollary 4.2 ) to show that the following two properties hold : notice , to show , it suffices to show , which is ensured by lemma [ boundedness ] and lemma [ martingale difference ] .equation follows from the continuity of the eigenvalue function .hence we have completed the proof when block size is a multiple of four .when is not a multiple of four , it is straightforward to show . 
for, let since , are mutually independent with respect to , and in probability for all , we have in probability .[ martingale difference ] suppose ( i ) is a fourth - order stationary time series with finite moment for some .( ii ) , , , , and , . suppose and .then , \rightarrow 0.\ ] ] \\ & = \frac{1}{b^{2}n^{2}}\sum_{t_1=1}^{n}\sum_{t_2=1}^{n}\sum_{j_1=1}^{b}\sum_{j_2=1}^{b}cov[z_{0}z_{-j_1},z_{t_2-t_1}z_{t_2-t_1-j_1}]\\ & = \frac{1}{b^{2}n^{2}}\sum_{h=1-n}^{n-1}(n-|h|)\sum_{j_1=1}^{b}\sum_{j_2=1}^{b}cov[z_{0}z_{-j_1},z_{h}z_{h - j_1}]\\ & < \frac{k}{n}\rightarrow 0 . & \mbox{\qedhere}\end{aligned}\ ] ] 99 berk , kenneth n. `` consistent autoregressive spectral estimates . ''the annals of statistics ( 1974 ) : 489 - 502 .billingsley , patrick .convergence of probability measures .new jersey : john wiley & sons , 1999 .breitung , jorg , and philip hans franses .`` on phillips - perron - type tests for seasonal unit roots . '' econometric theory 14.02 ( 1998 ) : 200 - 221 .boswijk , h. peter , philip hans franses , and niels haldrup .`` multiple unit roots in periodic autoregression . ''journal of econometrics 80.1 ( 1997 ) : 167 - 193 .burridge , peter , and am robert taylor .`` on regression - based tests for seasonal unit roots in the presence of periodic heteroscedasticity . '' journal of econometrics 104.1 ( 2001a ) : 91 - 117 .burridge , peter , and am robert taylor .`` on the properties of regression - based tests for seasonal unit roots in the presence of higher - order serial correlation . '' journal of business and economic statistics 19 , no .3 ( 2001b ) : 374 - 379 .burridge , peter , and am robert taylor .`` bootstrapping the hegy seasonal unit root tests . ''journal of econometrics 123.1 ( 2004 ) : 67 - 87 .chan , ngai hang , and c. z. wei .`` limiting distributions of least squares estimates of unstable autoregressive processes . '' the annals of statistics ( 1988 ) : 367 - 401 . de jong , robert m. , and james davidson . `` the functional central limit theorem and weak convergence to stochastic integrals i. '' econometric theory 16.05 ( 2000 ) : 621 - 642 . del barrio castro , toms , and denise r. osborn .`` testing for seasonal unit roots in periodic integrated autoregressive processes . ''econometric theory 24.04 ( 2008 ) : 1093 - 1129 .del barrio castro , toms , and denise r. osborn .`` hegy tests in the presence of moving averages*. '' oxford bulletin of economics and statistics 73.5 ( 2011 ) : 691 - 704. del barrio castro , toms , denise r. osborn , and am robert taylor . `` on augmented hegy tests for seasonal unit roots . ''econometric theory 28.05 ( 2012 ) : 1121 - 1143 .del barrio castro , toms , denise r. osborn , and am robert taylor . `` the performance of lag selection and detrending methods for hegy seasonal unit root tests .'' econometric reviews 35.1 ( 2016 ) : 122 - 168 .dickey , david a. , and wayne a. fuller .`` distribution of the estimators for autoregressive time series with a unit root . '' journal of the american statistical association 74.366a ( 1979 ) : 427 - 431 .dudek , anna e. , jacek lekow , efstathios paparoditis , and dimitris n. politis .`` a generalized block bootstrap for seasonal time series . ''journal of time series analysis 35.2 ( 2014 ) : 89 - 114 .dudek , anna e. , efstathios paparoditis , and dimitris n. politis .`` generalized seasonal tapered block bootstrap . '' statistics and probability letters ( 2016 ) .franses , philip hans .`` a multivariate approach to modeling univariate seasonal time series . 
''journal of econometrics 63.1 ( 1994 ) : 133 - 151 .franses , philip hans , and richard paap . periodic time series models . oxford university press , oxford .galbraith , john w , and victoria zinde - walsh .`` on the distributions of augmented dickey - fuller statistics in processes with moving average components . '' journal of econometrics 93.1 ( 1999 ) : 25 - 47 .ghysels , eric , alastair hall , and hahn shik lee .`` on periodic structures and testing for seasonal unit roots .'' journal of the american statistical association 91.436 ( 1996 ) : 1551 - 1559 .ghysels , eric , and denise r. osborn . the econometric analysis of seasonal time series .cambridge university press , 2001 .hamilton , james douglas .time series analysis .princeton : princeton university press , 1994 .hylleberg , svend , robert f. engle , clive wj granger , and byung sam yoo .`` seasonal integration and cointegration . ''journal of econometrics 44.1 ( 1990 ) : 215 - 238 .helland , inge s. `` central limit theorems for martingales with discrete or continuous time . ''scandinavian journal of statistics ( 1982 ) : 79 - 94 .johansen , sren .`` statistical analysis of cointegration vectors . ''journal of economic dynamics and control 12.2 ( 1988 ) : 231 - 254 .kreiss , j .-p . and paparoditis ,e. bootstrap methods for time series , manuscript ( in progress ) , 2015 .osborn , denise r. `` the implications of periodically varying coefficients for seasonal time - series processes . ''journal of econometrics 48.3 ( 1991 ) : 373 - 384 .paparoditis , efstathios , and dimitris n. politis .`` residualbased block bootstrap for unit root testing . ''econometrica 71.3 ( 2003 ) : 813 - 855 .paparoditis , efstathios , and dimitris n. politis .`` the asymptotic size and power of the augmented dickey - fuller test for a unit root . '' econometric reviews just - accepted ( 2016 ) .park , joon y. `` bootstrap unit root tests . ''econometrica 71.6 ( 2003 ) : 1845 - 1895 .phillips , peter cb , and pierre perron .`` testing for a unit root in time series regression . ''biometrika 75.2 ( 1988 ) : 335 - 346 .politis , dimitris n. , joseph p. romano , and michael wolf .`` subsampling . ''springer , new york .said , said e. , and david a. dickey .`` testing for unit roots in autoregressive - moving average models of unknown order . ''biometrika 71.3 ( 1984 ) : 599 - 607 .wooldridge , jeffrey m. , and halbert white .`` some invariance principles and central limit theorems for dependent heterogeneous processes . '' econometric theory 4.02 ( 1988 ) : 210 - 230 . | both seasonal unit roots and seasonal heterogeneity are common in seasonal data . when testing seasonal unit roots under seasonal heterogeneity , it is unclear if we can apply tests designed for seasonal homogeneous settings , i.e. the hegy test ( hylleberg , engle , granger , and yoo , 1990 ) . in this paper , the validity of both augmented hegy test and unaugmented hegy test is analyzed . the asymptotic null distributions of the statistics testing the single roots at or turn out to be standard and pivotal , but the asymptotic null distributions of the statistics testing any coexistence of roots at , , , or are non - standard , non - pivotal , and not directly pivotable . therefore , the hegy tests are not directly applicable to the joint tests for the concurrence of the roots . as a remedy , we bootstrap augmented hegy with seasonal independent and identically distributed ( iid ) bootstrap , and unaugmented hegy with seasonal block bootstrap . 
the consistency of both bootstrap procedures is established . simulations indicate that for roots at 1 and -1 the seasonal iid bootstrap augmented hegy test prevails , but for roots at i and -i the seasonal block bootstrap unaugmented hegy test enjoys better performance . * keywords : * seasonality , unit root , ar sieve bootstrap , block bootstrap , functional central limit theorem . |
the problem of index coding over noiseless broadcast channels was introduced in and has been well studied - .it involves a single source and a set of caching receivers .each of the receivers wants a subset of the set of messages transmitted by the source and knows another non - intersecting subset of messages a priori as side information .the problem is to minimize the number of binary transmissions required to satisfy the demands of all the receivers , which amounts to minimizing the bandwidth required .an index coding problem , involves a single source , that wishes to send a set of messages to a set of receivers , .the messages , , take values from some finite field .a receiver , , is defined as . is the set of messages demanded by and is the set of messages known to , a priori , known as the side information that has .an index code for the index coding problem with consists of 1 .an encoding map , , where is called the length of the index code , and 2 . a set of decoding functions such that , for a given input , .an optimal index code for binary transmissions minimizes , the number of binary transmissions required to satisfy the demands of all receivers .a linear index code is one whose encoding function is linear and it is linearly decodable if all the decoding functions are linear .it was shown in that for the class of index coding problems over which can be represented using side information graphs , which were labeled later in as single unicast index coding problems , the length of optimal linear index code is equal to the minrank over of the corresponding side information graph .this was extended in to general index coding problems , over , using minrank over of their corresponding side information hypergraphs . in this paper , we consider noisy index coding problems with over awgn channels . in the noisy version of index coding ,the messages are sent by the source over a noisy broadcast channel . instead of binary transmissionsif multilevel ( -ary , ) modulation schemes are used further bandwidth reduction can be achieved .this has been introduced in for gaussian broadcast channels .it was also found in that using -ary modulation has the added advantage of giving coding gain to receivers with side information , termed the `` side information gain '' .the idea of side information gain was characterized for the case where the source use -psk for transmitting the index coded bits in , where , where is the length of the index code used , with the average energy of the -qam signal being equal to the total energy of binary transmissions .this paper discusses the case of noisy index coding over awgn channels where the source uses -qam to transmit the index coded bits . as in , here also , m = , where is the length of the index code used . the contributions and organization in this paper may be summarized as follows : 1 .an algorithm to map binary symbols to appropriate sized qam constellation is presented which uses the well known ungerboeck labelling as an ingredient .( section [ sec3 ] 2 . 
a necessary and sufficient condition for a receiver to get side information coding gain is presented .( theorem [ thm1 ] in section [ sec4 ] ) 3 .it is shown that the difference in probability of error performance between the best and worst performing receivers increases monotonically as the length of the index code used increases .( theorem [ thm2 ] in section [ sec4 ] ) in section [ sec2 ] the notions of bandwidth gain and qam side information coding gain are explained and simulation results are presented in section [ sec5 ] . concluding remarks constitute section [ sec6 ]consider a general index coding problem with messages , and receivers , . let the length of a linear index code ( not necessarily optimal ) for the index coding problem at hand be .we have , where is the length of the optimal index code which is equal to the minrank over of the corresponding side information hypergraph .let the encoding matrix corresponding to the linear index code chosen be , where is an matrix over .the index coded bits are given by = \textbf{x}l, ] the noiseless index coding involves binary transmissions .it was shown in that for noisy index coding problems , if we transmit the index coded bits as a point from -psk signal set instead of binary transmissions , the receivers satisfying certain conditions will get coding gain in addition to bandwidth gain whereas other receivers trade off coding gain for bandwidth gain .this gain which was termed as the `` psk side information coding gain '' was obtained by proper mapping of index coded bits to psk symbols an algorithm for which was presented . in this paper , we extend the results in for the case where we use -qam to transmit the index coded bits .the term * qam bandwidth gain * is defined as the bandwidth gain obtained by each receiver by going from binary transmissions to a single -qam symbol transmission .when we transmit a single signal point instead of transmitting binary transmissions we are going from an - real dimensional or equivalently - complex dimensional signal set to 1 complex dimensional signal set .hence all the receivers get a - fold qam bandwidth gain .we state this simple fact as [ lem : qam bg ] each receiver gets an - fold qam bandwidth gain .the term * qam side information coding gain * ( qam - sicg ) is defined as the coding gain a receiver with a non - empty side information set gets w.r.t a receiver with no side information while using -qam to transmit the index coded bits .let the set , , be defined as the set of all binary transmissions which a receiver knows a priori due to its available side information , i.e. , also , let , be defined as follows .in this section , we describe an algorithm to map the index coded bits to signal points of an appropriate sized qam constellation so that the receivers satisfying the conditions of theorem [ thm1 ] in the following section will get qam - sicg . for the given index coding problem ,choose an index code of length .this fixes the value of .order the receivers in the non - decreasing order of .wlog , let be such that . before starting to run the algorithm to map the index coded bits to -qam symbols , we need to 1 .choose an appropriate -qam signal set .2 . use ungerboeck set partitioning to partition the -qam signal set chosen into subsets with increasing minimum subset distances . 
to choose the appropriate qam signal set , do the following : * *if * is even , then choose the -square qam with average symbol energy being equal to .* * else * , take the -square qam with average symbol energy equal to .use ungerboeck set partitioning to partition the qam signal set into two signal sets .choose any one of them as the -qam signal set .let denote the different levels of partitions of the -qam with the minimum distance at layer , , being such that .the algorithm to map the index coded bits to qam symbols is given in * algorithm 1*. , do an arbitrary order mapping and * exit*. , * exit*. fix such that the set of codewords , , obtained by running all possible combinations of with has maximum overlap with the codewords already mapped to psk signal points ., * = together with all combinations of will result in . * * * if * * then * , * * . * * goto * step 3 * * * else * , goto * step 3 * * of the codewords in which are yet to be mapped , pick any one and map it to a qam signal point in that sized subset at level which has maximum number of signal points mapped by codewords in without changing the already labeled signal points in that subset .+ if all the signal points in such a subset have been already labeled , then map it to a signal point in another sized subset at the same level that this point together with the signal points corresponding to already mapped codewords in , has the largest minimum distance possible .clearly this minimum distance , is such that . * * goto * step 3 * note that algorithm [ algo1 ] above does not result in a unique mapping of index coded bits to -qam symbols .the mapping will change depending on the choice of in each step .however , the performance of all the receivers obtained using any such mapping scheme resulting from the algorithm will be the same .if for some , depending on the ordering of done before starting the algorithm , and may give different performances in terms of probability of error . and with will give the same performance if and only if or vice - versa .in this section we present the main results apart from the algorithm given in the previous section . [ thm1 ] a receiver gets qam side information coding gain , with the scheme proposed , if and only if , where is the length of the index code used .consider a receiver .let and , . for any given realization of , the effective signal set seen by the receiver of points .let the minimum distance of the signal set seen by the receiver , _ proof of the ` if part ' : _ if , then the effective signal set seen by the receiver will have points .hence by appropriate mapping of index coded bits to qam symbols , we can increase .thus will get coding gain over a receiver that has no side information because the minimum distance seen by a receiver with no side information will be the minimum distance of -qam signal set ._ proof of the ` only if part ' : _ let us a consider a receiver such that .then , will not increase . 
will remain equal to the minimum distance of the corresponding - qam , same as that of a receiver with no side information .thus a receiver with will not get qam - sicg .it is to be noted that the value of not only depends on but also on , which , in turn , depends on the index code chosen .hence for the same index coding problem , a particular receiver may satisfy theorem [ thm1 ] and get qam - sicg for some index codes and may not get qam - sicg for other index codes .[ thm2 ] the difference in probability of error performance between the best performing receiver and the worst performing receiver for a given index coding problem , while using - qam signal point to transmit the index coded bits , will increase monotonically while increases from to if the following conditions are satisfied . +( 1 ) the best performing receiver gets qam - sicg .+ ( 2 ) the worst performing receiver has no side information .if there is a receiver with no side information , say , whatever the length , , of the index code used is , the effective signal set seen by will be -qam .therefore the minimum distance seen by will be the minimum distance of -qam signal set . for -qam with average symbol energy equal to , the squared minimum pair - wise distance of -qam , -qam , obtained by the proposed mapping scheme in algorithm [ algo1 ]is given by which is monotonically decreasing in .therefore the performance of the receiver with no side information deteriorates as the length of the index code increases from to .although condition ( 1 ) in theorem [ thm2 ] is necessary , the same can not be said about condition ( 2 ) . even when condition ( 2 ) above is not satisfied , i.e. , the worst performing receiver has at least 1 bit of side information but not the same amount of side information as the best performing receiver , the difference between their performances can still increase monotonically as we move from to .this is because the error performance is determined by the effective minimum distances seen by the receivers , which , in turn , depend on the mapping used between index coded bits and qam symbols .a general index coding problem can be converted into one where each receiver demands only one message since a receiver can be converted into receivers all with the same side information and each demanding a single message .so it is enough to consider index coding problems where the receivers demand a single message each and hence both the examples considered in this section are such problems . even though the examples considered are what are called single unicast index coding problems in , the results hold for any general index coding problem . in this subsection , we give an example with simulation results to support our claims in section [ sec4 ] .the mapping of index coded bits to qam symbols is done using our algorithm [ algo1 ] .the receivers which satisfy theorem [ thm1 ] are shown to get qam - sicg .we also compare the performance of different receivers while using qam and psk to transmit index coded bits . for a given index coding problem and a chosen index code , the mapping of index coded bits to qam symbolsis done using the algorithm described in section [ sec3 ] , whereas the mapping to psk symbols is done using the algorithm 1 in .we also give the effective minimum distances which are seen by different receivers which explains the difference in their error performance .[ ex_psk_qam ] let . . 
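The effective minimum distances mentioned above can be computed mechanically from any labeling of index code words to QAM points. The Python sketch below assumes the labeling is supplied as a dictionary from length-N bit tuples to complex constellation points (for instance, one produced by Algorithm 1); the function names are placeholders introduced here, and the sketch assumes the receiver knows strictly fewer than N of the index coded bits, so that the effective signal set contains at least two points.

```python
import itertools

def effective_min_distance(labeling, known_bits):
    """Minimum distance of the sub-constellation seen by a receiver that
    already knows the index coded bits listed in `known_bits`
    (a dict: bit position -> bit value).  `labeling` maps binary tuples
    (index coded bits) to complex M-QAM points."""
    points = [p for bits, p in labeling.items()
              if all(bits[i] == v for i, v in known_bits.items())]
    return min(abs(a - b) for a, b in itertools.combinations(points, 2))

def worst_case_min_distance(labeling, known_positions):
    """Minimise over all realizations of the known bits, since the side
    information bits may take any value."""
    return min(
        effective_min_distance(labeling, dict(zip(known_positions, values)))
        for values in itertools.product((0, 1), repeat=len(known_positions))
    )
```

A receiver with no side information corresponds to an empty `known_bits`, in which case the function simply returns the minimum distance of the full M-QAM signal set, consistent with the discussion above.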
+ the minrank over of the side information graph corresponding to the above problem evaluates to =4 .an optimal linear index code is given by the encoding matrix , + $ ] .the index coded bits are , ; ; ; .the 16-qam mapping for the above example is given in fig .[ map_ex - psk_qam ] .the simulation result which compares the performance of different receivers when they use 16-qam and 16-psk for transmission of index coded bits is shown in fig .[ sim_ex - psk_qam ] .the probability of error plot corresponding to 4-fold binary transmission is also shown in fig .[ sim_ex - psk_qam ] .the reason for the difference in performance while using qam and psk can be explained using the minimum distance seen by the different receivers for the 2 cases .this is summarized in table [ table - ex_psk_qam ] . .[ cols="^,^,^,^,^,^,^,^",options="header " , ]in this paper we considered noisy index coding over awgn channel .the problem of finding an optimal index code , for a given index coding problem , is , in general , exponentially hard .however , we have shown that finding the minimum number of binary transmissions required is not required for reducing transmission bandwidth over a noisy channel since , we can use an index code of any given length as a single qam point thus saving bandwidth .the mapping scheme by the proposed algorithm and qam transmission are valid for any general index coding problem .it was further shown that if the receivers have huge amount of side information , it is more advantageous to transmit using a longer index code as it will give a higher coding gain as compared to binary transmission scheme .1 anjana a. mahesh and b. sundar rajan , `` index coded psk modulation , '' arxiv:1356200 , [ cs.it ] 19 september 2015 .y. birk and t. kol , informed - source coding - on - demand ( iscod ) over broadcast channels , " in _ proc .ieee conf ._ , san francisco , ca , 1998 , pp .1257 - 1264. z. bar - yossef , z. birk , t. s. jayram and t. kol , index coding with side information , " in _ proc .47th annu .ieee symp . found ._ , 2006 , pp .197 - 206 .l ong and c k ho , optimal index codes for a class of multicast networks with receiver side information , " in _ proc .ieee icc _ , 2012 ,2213 - 2218 .l. natarajan , y. hong , and e. viterbo , index codes for the gaussian broadcast channel using quadrature amplitude modulation , " in _ ieee commun ._ , aug . 2015 .s h dau , v skachek , y m chee , error correction for index coding with side information , `` in _ ieee trans .inf theory _ , vol .59 , no.3 , march 2013 . g. ungerboeck , ' ' channel coding for multilevel / phase signals " , in _ ieee trans .inf theory _ , vol .it-28 , no .1 , january 1982 . | this paper discusses noisy index coding problem over gaussian broadcast channel . we propose a technique for mapping the index coded bits to m - qam symbols such that the receivers whose side information satisfies certain conditions get coding gain , which we call the * qam side information coding gain*. we compare this with the psk side information coding gain , which was discussed in . index coding , awgn broadcast channel , m , qam side information coding gain . |
while advances in two - dimensional ( 2d ) percolation have recently allowed to determine the site percolation threshold on the square lattice with an astonishing accuracy of 14 significant digits and many critical exponents in 2d have been known exactly for decades , the progress in higher dimensions is far slower .the main reason for this is that the two theoretical concepts that proved particularly fruitful in percolation theory , conformal field theory and duality , are useful only in 2d systems , and the thresholds in higher dimensions are known only from simulations .the site and bond percolation thresholds in dimensions are known with accuracy of at least 6 significant digits , but for more complicated lattices , e.g. fcc , bcc or diamond lattices , complex neighborhoods , or continuum percolation models this accuracy is often far from satisfactory . moreover , even though the upper critical dimension is known to be , numerical estimates of the critical exponents for are still rather poor .continuous percolation of aligned objects can be regarded as a limit of a corresponding discrete model .using this fact , we recently improved the accuracy of numerical estimates of continuous percolation of aligned cubes ( ) .we also generalized the excluded volume approximation to discrete systems and found that the limit of the continuous percolation is controlled by a power - law dependency with an exponent valid for both and .the main motivation behind the present paper is to verify whether the relation holds also for higher dimensions and if so , whether it can be used to improve the accuracy of continuous percolation thresholds in the model of aligned hypercubes in dimensions . with this selection, the conjecture will be verified numerically for all dimensions as well as in one case above , which should render its generalization to all plausible .answering these questions required to generate a lot of data , from which several other physically interesting quantities could also be determined .in particular , we managed to improve the accuracy of the correlation length critical exponent in dimensions and to determine the values of various universal wrapping probabilities in dimensions .we consider a hypercubic lattice of the linear size lattice units ( l.u . ) in a space dimension .this lattice is gradually filled with hypercubic `` obstacles '' of linear size l.u .( ) until a wrapping percolation has been found ( for the sake of simplicity , henceforth we will assume that , are dimensionless integers ) .the obstacles , aligned to the underlying lattice and with their edges coinciding with lattice nodes , are deposited at random into the lattice and the periodic boundary conditions in all directions are assumed to reduce finite - size effects . during this process the deposited hypercubes are free to overlap ; however , to enhance the simulation efficiency , no pair of obstacles is allowed to occupy exactly the same position . as illustrated in figure[ fig : model ] , construction of the model in the space of dimension .an empty regular lattice of size lattice units ( l.u . ) with periodic boundary conditions ( a ) is filled at random with square obstacles of size l.u .aligned to the lattice axes ( b ) and the elementary cells occupied by the obstacles are identified ( c ) ; finally , a wrapping path through the occupied elementary cells ( site percolation ) is looked for ( d ) .the same method was used for larger . 
] the volume occupied by the obstacles can be regarded as a simple union of elementary lattice cells and the model is essentially discrete .two elementary cells are considered to be connected directly if and only if they are occupied by an obstacle and share the same hyperface of an elementary cell .we define a percolation cluster as a set of the elementary cells wrapping around the system through a sequence of directly connected elementary cells .thus , the model interpolates between the site percolation on a hypercubic lattice for and the model of continuous percolation of aligned hypercubes in the limit of .the percolation threshold is often expressed in terms of the volume fraction defined as the ratio of the number of the elementary cells occupied by the obstacles to the system volume , . what is the expected value of after hypercubeshave been placed at random ( but different ) positions ? to answer this question , notice that while the obstacles can overlap , they can be located at exactly distinct locations and so . moreover , owing to the periodic boundary conditions , any elementary cell can be occupied by exactly different hypercubes , where is the volume of a hypercube .thus , the probability that an elementary cell is not occupied by an obstacle , , is equal to the product of probabilities that no hypercubes were placed at locations .this implies that for this formula reduces to irrespective of . in the limit of equation ( [ eq : def - phi ] ) reduces to , where is the reduced number density .the number of lattice sites in a cluster of linear size is of order of , a quantity rapidly growing with in high dimensions .this imposes severe constraints on numerical methods . on the one hand, one would like to have a large to minimize finite - size effects , which are particularly important near a critical state ; on the other hand , dealing with objects exerts a pressure on the computer storage and computational time . to mitigate this problem ,special algorithms were developed that focus on the efficient use of the computer memory .for example , leath s algorithm , in which a single cluster is grown from a single - site `` seed '' , turned out very successful in high - dimensional simulations of site and bond percolation .however , the use of such algorithms in the present model would be impractical , as the obstacle linear size is now allowed to assume values as large as 1000 , which greatly complicates the definition of leath s `` active neighborhood '' of a cluster . therefore we used a different approach , with data structures typical of algorithms designed for the continuous percolation : each hypercube is identified by its coordinates , i.e. , by integers , and the clusters are identified using the union - find algorithm . with this choice , the computer memory storage , as well as the simulation time of each percolation cluster ,is , which enables one to use large values of and .we were able to run the simulations for ( ) , ( ) , ( ) , , and , and the maximum values of were limited by the acceptable computation time rather than the storage .the simulation time in our method is determined by how quickly one can identify all obstacles connected to the next obstacle being added to the system . to speed this step up , we divided the system into bins of linear size .each obstacle was assigned to exactly one bin and for each bin we stored a list of obstacles already assigned to it . 
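a minimal python sketch of this bookkeeping may look as follows (the production code described later in the text is c++; the names, the overlap test and the choice of bins of linear size k are assumptions made for this illustration). obstacles are stored by their integer corner coordinates, each obstacle is assigned to the bin containing its corner, and a weighted union-find structure records which obstacles belong to the same cluster; periodic boundary conditions are handled with modular arithmetic, assuming that the system size is a multiple of the obstacle size.

    import numpy as np
    from itertools import product

    class UnionFind:
        def __init__(self):
            self.parent, self.size = {}, {}
        def find(self, i):
            while self.parent.setdefault(i, i) != i:
                self.parent[i] = self.parent[self.parent[i]]   # path compression
                i = self.parent[i]
            return i
        def union(self, i, j):
            ri, rj = self.find(i), self.find(j)
            if ri == rj:
                return
            si, sj = self.size.get(ri, 1), self.size.get(rj, 1)
            if si < sj:
                ri, rj, si, sj = rj, ri, sj, si
            self.parent[rj] = ri
            self.size[ri] = si + sj

    def connected(a, b, k, L):
        # two aligned k - cubes (given by their corner cells) occupy face - connected cells
        # iff their periodic offset is at most k on every axis and equal to k on at most one axis
        offs = [min((x - y) % L, (y - x) % L) for x, y in zip(a, b)]
        return all(o <= k for o in offs) and sum(o == k for o in offs) <= 1

    def add_obstacle(corner, k, L, bins, uf):
        # bins of linear size k (an assumption of this sketch) limit the neighbour search
        # to the 3**d bins around the new obstacle; L is assumed to be a multiple of k
        d = len(corner)
        home = tuple(c // k for c in corner)
        for shift in product((-1, 0, 1), repeat=d):
            nb = tuple((h + s) % (L // k) for h, s in zip(home, shift))
            for other in bins.get(nb, []):
                if connected(corner, other, k, L):
                    uf.union(corner, other)
        bins.setdefault(home, []).append(corner)

    L, k, d = 12, 3, 2
    uf, bins = UnionFind(), {}
    rng = np.random.default_rng(0)
    corners = {tuple(int(c) for c in rng.integers(0, L, size=d)) for _ in range(30)}
    for corner in corners:
        add_obstacle(corner, k, L, bins, uf)

this sketch only answers which obstacles end up in the same cluster; detecting a wrapping cluster additionally requires storing, for every obstacle, its displacement relative to its cluster root and checking whether a union closes a loop around the periodic box, which is omitted here.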
in this way , upon adding a new obstacle , the program had to check neighboring bins to identify all obstacles connected to the just added one .as increases , this step becomes the most time - consuming part of the algorithm .fortunately , the negative impact of the factor is to some extent mitigated by the fact that the critical volume fraction , , is much smaller for than for , so that for one needs to generate a relatively small number of obstacles to reach the percolation .this is related to the fact that for the value of , whereas .the case is special in that one has to check only neighboring sites of a given obstacle , a value much smaller than .thus , the total simulation time at a high space dimension for a fixed value of can be approximated as for and for . in practice , simulations in the space dimension ( with fixed ) are between 8 to 13 times faster for than for .this allowed us to run more simulations and obtain more accurate results for than for .we assumed periodic boundary conditions along all main directions of the lattice .the number of independent samples varied from for very small systems ( e.g. , , , ) to for larger and ( e.g. , , , ) . starting from an empty system of volume , we added hypercubes of volume at different random locations until we have detected wrapping clusters in all directions .thus , for each simulation we stored numbers equal to the number of hypercubes for which a wrapping percolation was first detected along cartesian direction .having determined all in a given simulation , we can use several definitions of the onset of percolation in a finite - size system .for example , one can assume that the system percolates when there is a wrapping cluster along some preselected direction , say , . we shall call this definition ` case a ' .alternatively , a system could be said to be percolating when there is a wrapping cluster along _ any _ of the directions .we shall call this ` case b '. another popular definition of a percolation in a finite - size system is the requirement that the wrapping condition must be satisfied in _ all _ directions .this will be denoted as ` case c ' .next , for each of the three percolation definitions , a , b and c , we determined the probability that a system of size , obstacle size , and volume fraction contains a percolating ( wrapping ) cluster .this step is based on a probability distribution function constructed from from all in case a , from all in case b , and from all in case c. notice that in case a we take advantage of the symmetry of the system which ensures that all main directions are equivalent so that after simulations we have pieces of data from which a single probability distribution function can be constructed . in doing thiswe implicitly assume that all are independent of each other , which may improve statistics .fixing and , and using ( [ eq : def - phi ] ) , we can write as a function of , . in accordance with the finite - size scaling theory , is expected to scale with and the deviation from the critical volume fraction as ^{1/\nu } \right ) , \quad l / k \gg 1,\ ] ] where is the correlation length exponent , is a scaling function , and is related to through ( [ eq : def - phi ] ) .this formula describes the probability that there is a percolation cluster in a system containing exactly obstacles , i.e. 
, is a quantity computed for a `` microcanonical percolation ensemble '' .the corresponding value in the canonical ensemble is where is the probability that there is an obstacle assigned to a given location ; this quantity is related to the mean occupied volume fraction through while is a discrete function , is defined for all and has a reduced statistical noise . by using ( [ eq : phi - p ] ) , can be regarded as a continuous function of for .an effective , -dependent volume fraction was then determined numerically as the solution to where is a fixed parameter , and we chose in all our calculations . the critical volume fraction , , as well as the critical exponent then determined using the scaling relation where are some - and -dependent parameters and is the cutoff parameter .recently a more general scaling ansatz for the form of the probability in the vicinity of the critical point was proposed , where is a universal constant , is the leading correction exponent , and are some nonuniversal , model - dependent parameters . at the critical pointthis reduces to we used this ansatz to determine the universal , -independent constant representing the probability that a wrapping cluster exists at the critical point .relation ( [ eq : phi - scaling ] ) contains unknowns : , , and .the critical exponent can be also estimated from an alternative relation containing unknowns by noticing that ( [ eq : scaling ] ) leads to where . to take into account finite - size corrections , we used a formula where are some parameters and we used . actually , since is a quickly growing function near , we calculated the derivative of its inverse , using a five - point stencil , /12h$ ] with .in we conjectured that for sufficiently large \propto k^{-\theta},\ ] ] where is the critical volume fraction for the continuous percolation of aligned hypercubes and .the left - hand side of ( [ eq : approx - k32 ] ) was derived using the excluded volume approximation applied to discrete systems , whereas its right - hand - side was obtained numerically and verified for .this formula enables one to estimate from the critical volumes obtained for discrete ( lattice ) models with finite by investigating the rate of their convergence as .its characteristic feature is the conjectured independence of on that we verify in this report .the uncertainties of the results were determined as follows .first , the percolation data were divided into 10 disjoint groups .then for each group the value of was calculated in the way described above .the value of was then assumed to be equal to their average value , and its uncertainty to the standard error of the mean .next , the value of was obtained from a non - linear fitting to eq .( [ eq : phi - scaling ] ) using the levenberg - marquardt algorithm , with the errors on the parameters estimated from the square roots of the diagonal elements of the covariance matrix , multiplied by , where is the reduced chi - square statistic .the same method was used to estimate the value and uncertainty of from eq .( [ eq : approx - k32 ] ) . finally , making up the sum in eq .( [ eq : q ] ) is potentially even more tricky than in site percolation , as in our model can be as large as . 
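the ensemble conversion just described can be illustrated with the following python sketch (variable names and the tiny data set are invented for this text; scipy's binomial distribution is used for the weights). it builds the microcanonical curves for the three percolation definitions from the recorded onset counts and then smooths them with binomial weights; the relation between the occupation probability and the mean volume fraction at the end is our reconstruction from the covering argument of the model section, not a quotation of the corresponding equation.

    import numpy as np
    from scipy.stats import binom

    def microcanonical_R(onsets):
        # onsets[i, j] = number of obstacles at which run i first wrapped along direction j
        n_max = onsets.max()
        grid = np.arange(1, n_max + 1)
        def cdf(samples):
            return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)
        return {
            "A": cdf(onsets.ravel()),        # one fixed axis (all axes pooled by symmetry)
            "B": cdf(onsets.min(axis=1)),    # wrapping along at least one axis
            "C": cdf(onsets.max(axis=1)),    # wrapping along all axes
        }, grid

    def canonical_Q(R_n, grid, N_locations, p):
        # smooth the microcanonical curve with binomial weights: every one of the N_locations
        # possible positions is occupied independently with probability p
        weights = binom.pmf(grid, N_locations, p)
        tail = binom.sf(grid[-1], N_locations, p)    # beyond the largest observed onset R_n = 1
        return float(np.sum(weights * R_n) + tail)

    def phi_of_p(p, k, d):
        # reconstructed relation between p and the mean occupied volume fraction: a cell stays
        # empty only if none of the k**d locations that would cover it holds an obstacle
        return 1.0 - (1.0 - p) ** (k ** d)

    # tiny invented data set, three runs in d = 3, only to show the calling convention
    onsets = np.array([[1210, 1185, 1302], [1250, 1224, 1198], [1175, 1342, 1267]])
    R, grid = microcanonical_R(onsets)
    print(canonical_Q(R["C"], grid, N_locations=32 ** 3, p=0.04), phi_of_p(0.04, k=4, d=3))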
in solving this technical problemwe followed the method reported in if could be stored in a 64-bit integer , otherwise we approximated the binomial distribution with the normal distribution .another point worth noticing is that in eq .( [ eq : q ] ) can be as small as .in such a case expressions like should be computed using appropriate numerical functions , e.g. , log1p from the c++ standard library , which is designed to produce values of with without a potential loss of significance in the sum .we start our analysis from the particular case , in which the model reduces to the standard site percolation .as site percolation has been analyzed extensively with many dedicated methods , we were going to use the case only to test the correctness of our computer code , but as it turned out , we have managed to obtain some new results , too .the main results are summarized in table [ tab : site - percolation ] .*15 & case & & + & & & + & & best known & present & best known & present ( i ) & present ( ii ) + 3 & a & 0.311 607 68(15) & 0.311 608 8(57 ) & 0.876 19(12) & 0.873 6(35 ) & 0.877 3(12 ) + & b & & 0.311 608 0(42 ) & & 0.855(18 ) & 0.874 3(15 ) + & c & & 0.311 601 7(47 ) & & 0.878 7(16 ) & 0.878 31(80 ) + & & & & + & & & 0.311 606 0(48 ) & & + 4 & a & 0.196 886 1(14) & 0.196 890 8(60 ) & 0.689(10) & 0.683 0(59 ) & 0.682 2(41 ) + & b & & 0.196 891 9(55 ) & & 0.674(18 ) & 0.682 7(34 ) + & c & & 0.196 885(10 ) & & 0.687 9(61 ) & 0.687 5(23 ) + & & & & + & & & 0.196 890 4(65 ) & & + 5 & a & 0.140 796 6(15) & 0.140 796 7(22 ) & 0.569(5) & 0.574 3(50 ) & 0.572 5(38 ) + & b & & 0.140 795(10 ) & & 0.59(11 ) & 0.571 6(26 ) + & c & & 0.140 796 5(21 ) & & 0.573 3(29 ) & 0.572 0(14 ) + & & & & + & & & 0.140 796 6(26 ) & & + 6 & a & 0.109 017(2) & 0.109 011 3(14 ) & 1/2 & 0.58(39 ) & 0.495(19 ) + & b & & 0.109 017 5(26 ) & & & 0.497(33 ) + & c & & 0.109 009 9(16 ) & & 0.513(58 ) & 0.495(40 ) + & & & & + & & & 0.109 011 7(30 ) & & + 7 & a & 0.088 951 1(9) & 0.088 951 4(56 ) & 1/2 & 0.36(12 ) & 0.44(11 ) + & b & & 0.088 950(15 ) & & & + & c & & 0.088 945 7(35 ) & & 0.40(11 ) & 0.41(8 ) + & & & & + & & & 0.088 951 1(90 ) & & + , , , .the values of the critical volume fraction , , were determined using ( [ eq : phi - scaling ] ) with for all three definitions ( a , b , and c ) of the onset of percolation in finite - size systems , as defined in section [ sec : numerical - details ] .for we assumed that , whereas for we treated as an unknown , fitting parameter .percolation thresholds obtained in cases a , b , and c are consistent with each other and with those reported in other studies .we combined them into a single value using the inverse - variance weighting .these combined values are listed in table [ tab : site - percolation ] as `` final '' values .their uncertainty was determined as the square root of the variance of the weighted mean multiplied by a correction term , where is an additional , conservative correction term introduced to compensate for the possibility that the measurements carried out in cases a , b , and c are not statistically independent , as they are carried out using the same datasets .the uncertainties obtained for are similar in magnitude to those obtained with leath s algorithm , even though our algorithm was not tuned to the numerical features of the site percolation problem .it is also worth noticing that the uncertainties of for cases a , b and c are similar to each other even though case a utilizes a larger number of data .this suggests that the numbers obtained in 
individual simulations are correlated .the values of the critical exponent were obtained independently using either ( [ eq : phi - scaling ] ) , which we call `` method i '' , or ( [ eq : derivative ] ) ( `` method ii '' ) . obviously , in contrast to the use of ( [ eq : phi - scaling ] ) to estimate , in method i we treated as a fitting parameter for all .just as for , we publish the values of for individual cases a , b , and c as well as their combined values obtained with the inverse - variance weighting .we also present the results for , where the exact value of is known , as it helps to verify accuracy of the applied methods .again , the results are consistent with the values of reported in previous studies .method ii turned out to be generally more accurate than method i and the accuracy of both methods decreases with .we attribute the latter phenomenon to a rapid decrease of the maximum system size that can be reached in simulations , , with .for example , while for we used , for we had to do with , which certainly has a negative impact on power - law fitting accuracy . for equation ( [ eq : phi - scaling ] ) with treated as a fitting parameter leads to rather poor fits in which the uncertainty of some fitting parameters may exceed 100%however , the same equation still gives good quality fits after fixing at its theoretical value , which justifies its use for in table [ tab : site - percolation ] .combining the results from cases a , b , and c and methods i and ii , we found for and for , which are more accurate than those reported previously , for and for .these improved values will be used in data analysis for the case .next we verified eq .( [ eq : approx - k32 ] ) for cases a , b , and c and . to this endthe values of were determined using ( [ eq : phi - scaling ] ) and fixed at the best value available , i.e. , for , our values reported in table [ tab : site - percolation ] for , and for .the uncertainty of was included into the final uncertainties of the fitting parameters .our results , depicted in figure [ fig : exponents-4 - 7 ] , the left hand side of ( [ eq : approx - k32 ] ) as a function of the obstacle size , .symbols show numerical results for the space dimension ( circles ) , 4 ( pluses ) , 5 ( crosses ) , 6 ( stars ) , and 7 ( squares ) , whereas the lines are the best fits to ( [ eq : approx - k32 ] ) with . ]confirm our hypothesis that irrespective of the space dimension ( similar scaling for but percolation defined through spanning clusters was reported in ) .this opens the way to use eq .( [ eq : approx - k32 ] ) , with , as a means of estimating the continuous percolation threshold of aligned hypercubes , . 
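in practice such an extrapolation is a two-parameter least-squares fit. a sketch using scipy could read as follows; the data points are synthetic (generated from the assumed power law plus noise, purely to demonstrate the procedure) and the exponent is fixed at 3/2 for concreteness, while the error scaling at the end mirrors the procedure described above.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    k = np.array([16, 32, 64, 128, 256, 512], dtype=float)
    sigma = np.full(k.size, 3e-6)
    phi_c = 0.2773 + 0.02 * k ** (-1.5) + rng.normal(0.0, sigma)   # synthetic thresholds

    def model(k, phi_inf, a, theta=1.5):
        # finite - size thresholds approach the continuum value as a power law in 1 / k
        return phi_inf + a * k ** (-theta)

    popt, pcov = curve_fit(model, k, phi_c, sigma=sigma, absolute_sigma=True, p0=(0.277, 0.02))
    resid = (phi_c - model(k, *popt)) / sigma
    chi2_red = np.sum(resid ** 2) / (len(k) - len(popt))
    err = np.sqrt(pcov[0, 0] * chi2_red)       # parameter error rescaled by the reduced chi - square
    print(popt[0], err)                        # extrapolated continuum threshold and its uncertainty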
the results , presented in table [ tab : continuous - percolation ] , llllll & case & & & + & & & & + & & & best known & present & + 3 & a & 0.226 38 & 0.277 27(2) & 0.277 302 0(10 ) & 0.293 + & b & & & 0.277 300 9(10 ) & + & c & & & 0.277 302 61(79 ) & + & & & + & & & & 0.277 301 97(91 ) + 4 & a & 0.098 13 & 0.113 2(5) & 0.113 234 40(73 ) & 0.146 + & b & & & 0.113 233 90(91 ) & + & c & & & 0.113 237 9(13 ) & + & & & + & & & & 0.113 234 8(17 ) + 5 & a & 0.043 73 & 0.049 00(7) & 0.048 163 5(15 ) & 0.071 + & b & & & 0.048 165 8(14 ) & + & c & & & 0.048 162 1(13 ) & + & & & + & & & & 0.048 163 7(19 ) + 6 & a & 0.020 03 & 0.020 82(8) & 0.021 347 4(10 ) & 0.034 + & b & & & 0.021 344 6(27 ) & + & c & & & 0.021 347 9(10 ) & + & & & + & & & & 0.021 347 4(12 ) + 7 & a & 0.009 38 & 0.009 99(5) & 0.009 776 9(10 ) & 0.017 + & b & & & 0.009 782 0(27 ) & + & c & & & 0.009 773 1(10 ) & + & & & + & & & & 0.009 775 4(31 ) + . .turn out far more accurate than those obtained with other methods .however , they agree with the data reported in only for . in particular , for our value of the percolation threshold of aligned hypercubes , , is away from the value predicted in by , where is the sum of the uncertainities of found in our simulations and that reported in .this indicates that for either our uncertainty estimates or those reported in are too small .this discrepancy is very peculiar , because it exists only for , whereas the results for lower are in perfect accord .this finding made us recheck our computations .the main part of our code is written using c++ templates with the space dimension treated as a template parameter .the raw percolation data is then analyzed using a single toolchain for which is just a parameter .this implies that exactly the same software is used for any .next , our results for different percolation definitions ( a , b , and c ) agree with each other well . moreover , our results for are in good agreement with all the results available for the site percolation , and for they satisfy the asymptotic scaling expressed in eq .( [ eq : approx - k32 ] ) . also , as shown in table [ tab : continuous - percolation ] , all our results for lie between the lower and upper bounds , and , reported in .we verified that the reduced chi - square statistic in practically all fits satisfies , which indicates that the data and their uncertainties fit well to the assumed models .we also implemented the code responsible for the transition from the microcanonical to canonical ensemble , equation ( [ eq : q ] ) , in such a way that all floating - point operations could be performed either in the ieee 754 double ( 64-bit ) or extended precision ( 80-bit ) mode .the results turned out to be practically indistinguishable , indicating that the code is robust to numerical errors related to the loss of significance .an alternative verification of the results is presented in figure [ fig : verification ] .percolation threshold in dimension for several values of the obstacle linear size .pluses represent our numerical results , the dashed line was calculated from a fit to ( [ eq : approx - k32 ] ) for , the circle depicts the value extrapolated for from ( [ eq : approx - k32 ] ) , and the square reproduces the value of this limit as reported in . 
]it shows that our simulation data for and are in a very good agreement with ( [ eq : approx - k32 ] ) . the reduced chi - square statistic , , indicates a good fit , even though the uncertainties of individual data points are very small , from ( ) to ( ) . the value reported in for is clearly inconsistent with our data . the situation for is similar ( data not shown ) . it is also worth noticing that our results for are an order of magnitude more accurate than those obtained in using exactly the same method , but with the percolation defined through spanning rather than wrapping clusters . this confirms a known fact that the estimates of the percolation threshold obtained using a cluster wrapping condition in a periodic system exhibit significantly smaller finite - size errors than the estimates made using cluster spanning in open systems . one possible cause of the discrepancy between our results for continuous percolation of aligned hypercubes and those obtained in are the corrections to scaling due to the finite size of the investigated systems . to get some insight into their role , we used ( [ eq : u0 ] ) to obtain the values of the universal constant for together with , , , which control the magnitude of the corrections to scaling . first we focused on and found that it is impossible to find reliable values of this exponent from our data . wang et al . also reported difficulties in determining from simulations , but eventually found for . as the values of this exponent for are unknown , and we checked that the value of obtained from ( [ eq : u0 ] ) is practically insensitive to whether one assumes that or , we chose the simplest option : for all , which turns ( [ eq : u0 ] ) into the usual taylor expansion in , where and . figure [ fig : ypc - individual ] shows as a function of for and selected values of ( case a ) . the probability of a wrapping cluster along a given axis ( case a ) at criticality , , as a function of the system size relative to the obstacle size , for selected values of the obstacle size , ( red crosses ) , 10 ( blue squares ) , and ( green circles ) in dimensions ( panels a , e , respectively ) . the lines show the fits to ( [ eq : u01 ] ) for ( with ) . the lines for would lie very close to those for and are hidden for clarity . the error bars do not include the uncertainty of . the regions filled with a pattern in panel ( e ) show how would change if the value of was allowed to vary by up to its three standard deviations , for ( red ) and ( green ) . ] as is expected to converge to a -dependent limit as , inspection of its convergence rate can serve as an indicator of the magnitude of the corrections to scaling for the range of the values used in the simulations . the plots for are very similar to those obtained for , which suggests that the behavior of for can be used as a good approximation of in the limit of the continuous system , . rather surprisingly , for this behavior is also similar to that observed in the site percolation ( ) . in higher dimensions the convergence patterns are different : the site percolation is characterized by a nonmonotonic dependence of on , whereas in continuous percolation this dependency is monotonic . notice also the different scales used in the plots : the variability of increases with and at the same time the maximum value of attainable in simulations quickly decreases . these two factors amplify each other s negative influence on the simulations , which hinders the usability of the method in higher dimensions . looking at figures [ fig : ypc - individual ] ( c)-(e ) , one might doubt if they represent quantities converging to the same value irrespective of . however , these curves turn out to be very sensitive to even small changes in , which are known with a limited accuracy , a factor not included into the error bars . to illustrate the magnitude of this effect , we show in figure [ fig : ypc - individual ] ( e ) how would change as a function of if was allowed to vary by up to three times its numerical uncertainty for and ( case a ) . for the impact of the uncertainty of on turns out larger than the statistical errors , and if we take it into account , the hypothesis that the curves converge to the same value can no longer be ruled out .
actually , the requirement that this limit is -independent can be used to argue that our estimation of for , is larger than the value obtained from this condition by about twice its numerical uncertainty , which is an acceptable agreement .while this idea could be used to improve the uncertainty estimates of ( see ) , we did not use it systematically in the present study .the reason of high sensitivity of to changes in is related to the fact that the slope of at for the largest system sizes attainable in simulations quickly grows with and , to a lesser extent , with the probability that the system is at percolation , , as a function of the distance to the critical point , , for , , and the largest values of used in our simulations ( case a ) . ]( figure [ fig : pdfs ] ) . for and slope is as large as , so that in this case the uncertainty of of the order of translates into the uncertainty of of the order of .the data in figure [ fig : pdfs ] allows one to make also another observation . using ( [ eq : tau ] ) with and the raw data for , one can estimate the percolation threshold with the accuracy of . using extrapolation ,this can be improved by a factor of to reach the accuracy reported in table [ tab : continuous - percolation ] . for and the error from the raw data is already very small , .extrapolation can be still used to reduce it further , but since now the data come from systems of smaller linear size ( rather than for ) , the reduction factor is also smaller , of the order of .thus , the problems with convergence , which can be seen in panels ( c)-(e ) of figure [ fig : ypc - individual ] , are related to the difficulty in the determination of the universal constant , not . an independent method of evaluating for , even with a moderate precision , would give a powerful method of obtaining the percolation threshold in high dimensional spaces .the values of , , and obtained from the fits of the data shown in figure [ fig : ypc - individual ] are presented in table [ tab : rx ] .cr|lll & & & & + 3 & 1 & 0.2580(2 ) & & + 3 & 10 & 0.2581(6 ) & & + 3 & 100 & 0.2583(6 ) & & + 4 & 1 & 0.1786(7 ) & & + 4 & 10 & 0.1796(19 ) & & + 4 & 100 & 0.1796(16 ) & & + 5 & 1 & 0.167(1 ) & & + 5 & 10 & 0.166(8 ) & & + 5 & 100 & 0.165(9 ) & & + 6 & 1 & 0.206(3 ) & & + 6 & 10 & 0.199(19 ) & & + 6 & 100 & 0.202(18 ) & & + 7 & 1 & 0.313(8 ) & & + 7 & 10 & 0.229(50 ) & & + 7 & 100 & 0.253(32 ) & & + their inspection leads to several conclusions .first , they agree with the hypothesis that is universal for a given space dimension . in particular , our value of the universal constant for , , agrees with reported in .second , even though the uncertainties of and are typically high , often exceeding 100% , one can notice that their magnitude grows with , which means that the magnitude of the corrections to scaling also grows with .this is particularly important for , which controls the main contribution to the corrections to scaling for large system sizes .the absolute value of this parameter for is very likely to be at least two orders of magnitude larger for than for .this translates into much slower convergence of for than for ( c.f .figure [ fig : ypc - individual ] and ) .actually , for the value of the linear coefficient , , is so close to zero that the convergence rate of in simulations is effectively controlled by the quadratic term , , an effect also reported in . 
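the extrapolation of the universal wrapping probability itself amounts to a simple quadratic fit once the correction exponent is set to one. the sketch below assumes that the expansion variable is the inverse relative system size k / L and uses synthetic data generated around the d = 3 value of the table, only to demonstrate the fit; the constant term returned by the fit is the quantity listed in the first column of the table.

    import numpy as np

    rng = np.random.default_rng(2)
    inv_size = 1.0 / np.array([8, 12, 16, 24, 32, 48, 64], dtype=float)     # k / L
    R_crit = 0.2580 - 0.01 * inv_size + 0.5 * inv_size ** 2 \
             + rng.normal(0.0, 5e-4, inv_size.size)                        # synthetic data

    # quadratic taylor expansion with the correction exponent set to 1, as done in the text;
    # numpy returns the coefficients from the highest power down, so the last entry is R_infinity
    a2, a1, R_inf = np.polyfit(inv_size, R_crit, deg=2)
    print(round(R_inf, 4), round(a1, 3), round(a2, 3))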
once is known with sufficiently low uncertainty , one can try and use it to reduce the corrections to scaling by assuming in ( [ eq : tau ] ) .this method turned out very successful for , but in this case is known exactly .availability of the exact value of appears crucial , because the leading term in ( [ eq : phi - scaling ] ) has special properties only at so that small errors in may disturb the fitting .we checked that , as expected , setting in dimensions significantly increased the convergence rate ; however , it did not result in more accurate values of the percolation threshold , probably due to the errors in and the fact that the uncertainty of the extrapolated value ( ) is closely related to the uncertainty of the data being extrapolated ( ) , which is independent of ( data not shown ) .finally , we checked that the value of is universal for other definitions of percolation in finite - size systems .if we assumed that a system percolates when a wrapping cluster appears in any direction ( case b ) , we obtained , , , , and for , respectively .when we waited until a wrapping condition was satisfied along all directions ( case c ) , we obtained , 0.0291(3 ) , 0.0187(3 ) , 0.0224(9 ) , and 0.043(3 ) for , respectively .the values for are consistent with those reported in , and .treating continuous percolation of aligned objects as a limit of the corresponding discrete model turned out to be an efficient way of investigating the continuous model . using this approachwe were able to determine the percolation threshold for a model of aligned hypercubes in dimensions with accuracy far better than attained with any other method before .actually , for the uncertainty of the continuous percolation threshold is now so small that it matches or even slightly surpasses that for the site percolation .we were also able to confirm the universality of the wrapping probability and determine its value for for several definitions of the onset of percolation in finite - size systems .the method proposed here has several advantages .first , it allows one to reduce the statistical noise of computer simulations by transforming the results from the microcanonical to canonical ensemble .second , it allows to exploit the universality of the convergence rate of the discrete model to the continuous one , which we found to be controlled by a universal exponent for all .finally , it can be readily applied to several important shapes not studied here , like hyperspheres or hyperneedles .one drawback of the method is that it does not seem suitable for continuous models in which the obstacles are free to rotate , e.g. randomly oriented hypercubes .we also did not take into account logarithmic corrections to scaling at the upper critical dimension , which may render our error estimates at too optimistic .our results for the continuous percolation threshold in dimensions are incompatible with those reported recently in .the reason for this remains unknown , and we guess that they are related to corrections to scaling , which quickly grow with . finally , we have managed to improve the accuracy of the critical exponent measurement in dimensions .the source code of the software used in the simulations is available at https://bitbucket.org/ismk_uwr/percolation .the calculations were carried out in the wrocaw centre for networking and supercomputing ( http://www.wcss.wroc.pl ) , grant no . 
356 . we are grateful to an anonymous referee for pointing our attention to the gaussian approximation of the binomial distribution . h.g . ballesteros , l.a . fernández , v. martin - mayor , a. muñoz sudupe , g. parisi , and j.j . ruiz - lorenzo . measures of critical exponents in the four - dimensional site percolation . , 400(3 - 4):346 - 351 , 1997 . | we propose a method of studying the continuous percolation of aligned objects as a limit of a corresponding discrete model . we show that the convergence of a discrete model to its continuous limit is controlled by a power - law dependency with a universal exponent . this allows us to estimate the continuous percolation thresholds in a model of aligned hypercubes in dimensions with accuracy far better than that attained using any other method before . we also report improved values of the correlation length critical exponent in dimensions and the values of several universal wrapping probabilities for . _ keywords _ : percolation threshold ; continuous percolation ; critical exponents ; finite - size scaling , percolation wrapping probabilities |
the classical theory by hodgkin and huxley ( hh ) describes nerve impulses ( spikes ) that manifest communication between nerve cells .the underlying mechanism of a single spike is excitability , i.e. , a small disturbance triggers a large excursion that reverts without further input to the original state .a spike lasts a 1/1000 second and even though during this period ions are exchanged across the nerve cell membrane , the change in the corresponding ion concentrations can become significant only in series of such spikes . under certain pathological conditionschanges in ion concentrations become massive and last minutes to hours before they recover .this establishes a new type of excitability underlying communication failure between nerve cells during migraine and stroke . to clarify this mechanism and to recognize the relevant factors that determine the slow time scales of ion changes, we use an extended version of the classical hh theory .we identify one variable of particular importance , the potassium ion gain or loss through some reservoirs provided by the nerve cell surroundings .we suggest to describe the new excitability as a sequence of two fast processes with constant total ion content separated by two slow processes of ion clearance ( loss ) and re uptake ( re gain ) .in this paper we study ion dynamics in ion based neuron models . in comparison to classical hh type membrane modelsthis introduces dynamics on much slower time scales . while spiking activity is in the order of milliseconds , the time scales of ion dynamics range from seconds to minutes and even hours depending on the process ( transmembrane fluxes , glial buffering , backward buffering ) .the slow dynamics leads to new phenomena .slow burst modulation as in seizure like activity ( sla ) emerges from moderate changes in the ion concentrations .phase space excursions with large changes in the ionic variables establish a new type of ionic excitability as observed in cortical spreading depression ( sd ) during stroke and in migraine with aura .such newly emerging dynamics can be understood from the phase space structure of the ion based models .mathematical models of neural ion dynamics can be divided into two classes . on the one handthe discovery of sd by leo in 1944 a severe perturbation of neural ion homeostasis associated with a huge changes in the potassium , sodium and chloride ion concentrations in the extracellular space ( ecs) that spreads through the tissue has attracted many modelling approaches dealing with the propagation of large ion concentration variations in tissue . in 1963 grafstein described spatial potassium dynamics during sd in a reaction diffusion framework with a phenomenological cubic rate function for the local potassium release by the neurons .reshodko and burs proposed an even simpler cellular automata model for sd propagation . in 1978tuckwell and miura developed a sd model that is amenable to a more direct interpretation in terms of biophysical quantities .it contains ion movements across the neural membrane and ion diffusion in the ecs . in more recent studiesdahlem et al. suggested certain refinements of the spatial coupling mechanisms , e.g. , the inclusion of nonlocal and time delayed feedback terms to explain very specific patterns of sd propagation in pathological situations like migraine with aura and stroke . 
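a toy version of this classical reaction-diffusion picture already produces the propagating front these early models were built around. the snippet below (a one-dimensional periodic grid, a cubic rate function and explicit euler time stepping; all numbers are arbitrary illustration values, not fitted to any of the cited models) starts from a locally elevated "potassium" patch and lets the excited state invade the rest of the domain.

    import numpy as np

    def f(u, a=0.25, r=1.0):
        # bistable (cubic) rate function: rest state at 0, excited state at 1, threshold a
        return r * u * (u - a) * (1.0 - u)

    D, dx, dt = 1.0, 0.5, 0.05            # diffusion constant, grid spacing, time step (dt < dx**2 / 2D)
    x = np.arange(0.0, 200.0, dx)
    u = np.where(x < 20.0, 1.0, 0.0)      # locally elevated "potassium" as initial condition

    for step in range(4000):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2   # periodic laplacian
        u = u + dt * (D * lap + f(u))

    print(np.mean(u > 0.5))               # fraction of the domain invaded by the excited state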
on the other hand single cell ion dynamicswere studied in hh like membrane models that were extended to include ion changes in the intracellular space ( ics ) and the ecs since the 1980s . while the first extensions of this type were developed for cardiac cells by difranceso and noble , the first cortical model in this spiritwas developed by kager , wadman and somjen ( kws) only in 2000 .their model contains abundant physiological detail in terms of morphology and ion channels , and was in fact designed for seizure like activity ( sla ) and local sd dynamics .it succeeded spectacularly in reproducing the experimentally known phenomenology .an even more detailed model was proposed by shapiro at the same time who like yao , huang and miura for kws also investigated sd propagation with a spatial continuum ansatz . in the following hh like models of intermediate complexity were developed by frhlich , bazhenov et al . to describe potassium dynamics during epileptiform bursting .the simplest hh like model of cortical ion dynamics was developed by barreto , cressman et al. who describe the effect of ion dynamics in epileptiform bursting modulation in a single compartment model that is based on the classical hh ion channels .interestingly in none of these considerably simpler than shapiro and kws models extreme ion dynamics like in sd or stroke was studied .to our knowledge the only exception is a study by zandt et al . who describe in the framework of cressman et al . what they call the `` wave of death '' that follows the anoxic depolarization after decapitation as measured in experiments with rats] in this study we systematically analyze the entire phase space of such local ion based neuron models containing the full dynamical repertoire ranging from fast action potentials to slow changes in ion concentrations .we start with the simplest possible model for sd dynamics a variation of the barreto , cressman et al .model and reproduce most of the results for the kws model . our analysis covers sla and sd .three situations should be distinguished : isolated , closed , and open systems , which is reminiscent of a thermodynamic viewpoint ( see fig . [fig : system ] ) . an isolated system without transfer of metabolic energy for the atpase driven pumps will attain its thermodynamic equilibrium , i.e. , its donnan equilibrium . a closed neuron system with functioning pumps but without ion regulation by glia cells or the vascular system is generally bistable .there is a stable state of free energy starvation ( fes ) that is close to the donnan equilibrium and coexists with the physiological resting state .the ion pumps can not recover the physiological resting state from fes .we will now develop a novel phase space perspective on the dynamics in open neuron systems .we describe the first slow fast decomposition of local sd dynamics , in which the ion gain and loss through external reservoirs is identified as the crucial quantity whose essential importance was not realized in earlier studies . 
treating this slow variable as a parameter allows us to derive thresholds for sd ignition and the abrupt , subsequent repolarization of the membrane in a bifurcation analysis for the first time .moreover we analyze oscillatory dynamics in open systems and thereby relate sla and sd to different so called torus bifurcations .this categorizes sla and sd as genuinely different though they are ` sibling ' dynamics as they both bifurcate from the same ` parent ' limit cycle in a supercritical and subcritical manner , respectively , which also explains the all or none nature of sd .sla is gradual in contrast .local ion dynamics of neurons has been studied in models of various complexity . reduced model types consist of an electrically excitable membrane containing gated ion channels and ion concentrations in an intra and an extracellular compartment .transmembrane currents must be converted to ion fluxes that lead to changes in the compartmental ion concentrations .such an extension requires ion pumps to prevent the differences between ics and ecs ion concentrations that are present under physiological resting conditions from depleting .we consider a model containing sodium , potassium and chloride ions . the hh like membrane dynamicsis described by the membrane potential and the potassium activation variable .the sodium activation is approximated adiabatically and the sodium inactivation follows from an assumed functional relation between and .the ics and ecs concentrations of sodium , potassium and chloride ions are denoted by , and , respectively . in a closed system mass conservationholds , i.e. , with and the ics / ecs volumes . together with the electroneutrality of ion fluxes across the membrane , i.e. , only two of the six ion concentrations are independent dynamical variables .the full list of rate equations then reads they are complemented by six constraints on gating variables and ion concentrations : superscript 0 of indicates ion concentrations in the physiological resting state .unless otherwise stated and are used as initial conditions in the simulations . constrained ion concentrations ( eqs .( [ eq:5])([eq:8 ] ) ) then also take their physiological resting state values .these ion concentrations , the membrane capacitance , the gating time scale parameter , the conversion factor from currents to ion fluxes , and the ics and ecs volumes are listed in tab .[ tab:1 ] .the conversion factor is an expression of the membrane surface area and faraday s constant ( both given in tab .[ tab:1 ] , too ) : we remark that all parameters in tab .[ tab:1 ] are given in typical units of the respective quantities .the numerical values in these units can directly be used for simulations .time is then given in msec , the membrane potential in mv and ion concentrations in .the electroneutrality of the total transmembrane ion flux as expressed in eqs .( [ eq:0 ] ) and ( [ eq:5 ] ) is a consequence of the large time scale separation between the membrane dynamics and the ion dynamics ( cf . ref . and the below discussion of time scales ) .this constraint is the reason why the thermodynamic equilibrium of the system must be understood as a donnan equilibrium .this is the electrochemical equilibrium of a system with a membrane that is impermeable to some charged particles , which can be reached in an electroneutral fashion , i.e. 
, without separating charges .we do not include this impermeant matter explicitly , because it does not influence the dynamics as long as osmosis is not considered .one should however keep in mind that the initial ion concentrations in tab .[ tab:1 ] do not imply zero charge in the ics or ecs and hence impermeant matter to compensate for this must be present .the gating functions , and are given by here and are the asymptotic values and is potassium activation time scale .they are expressed in terms of the hodgkin huxley exponential functions the three ion currents are they are given in terms of the leak and gated conductances ( with ) and the nernst potentials which are computed from the ( dynamical ) ion concentrations : denotes the valence of the particular ion species .the pump current modelling the atpase driven exchange of intracellular sodium with extracellular potassium at a is given by where is the maximal pump rate .the pump current increases with and .the values for the conductances and pump rate are also given in tab .[ tab:1 ] .let us remark that in comparisons with ref. , we have mildly increased the maximal pump rate and decreased the chloride conductance to obtain a sd threshold in agreement with experiments ( see sect .results ) .( [ eq:1])([eq:10 ] ) describe a closed system in which ion pumps are the only mechanism maintaining ion homeostasis and in which mass conservation holds for each ion species .a remark on terminology is due at this point : a ` closed ' system refers exclusively to the conservation of the ion species that we model .we do not directly model other mass transfer that occurs in real neural systems .yet it is indirectly included . the ion pumps use energy released by hydrolysis of atp , a molecule whose components ( glucose and oxygen or lactate ) therefore have to pass the system boundaries . in thermodynamics, it is customary to call systems that exchange energy but not matter with their environment closed . since atp is in this framework only considered as an energy source , we can describe the system as closed , if ions can not be transferred across its boundaries . as mentioned above the closed systemis bistable .superthreshold stimulations cause a transition from physiological resting conditions to fes . to resolve this and change the behaviour to local sd dynamicsit is necessary to include further regulation mechanisms .since sd is in particular characterized by an extreme elevation of potassium in the ecs we will only discuss potassium regulation .if ecs potassium ions are subject to a regulation mechanism which is independent of the membrane dynamics , then the symmetry between ics and ecs potassium dynamics is broken and eq .( [ eq:7 ] ) for the potassium conservation does not hold .let us represent changes of the potassium content of the system by a variable which is defined by the following relation : changes of the potassium content , i.e. 
, changes of , can be of different physiological origin .if glial buffering is at work the potassium content will be reduced by the amount of buffered potassium .an initial potassium elevation simply leads to an accordingly increased : for the coupling to an extracellular potassium bath or to the vasculature is a measure for the amount of potassium that has diffused into ( positive ) or out of ( negative ) the system .we are going to discuss two regulation schemes coupling to an extracellular bath and glial buffering .they could be implemented simultaneously , but for our purpose it will suffice to apply only one scheme at a time . in the second subsection of sect .results , the dynamics of is given by glial buffering , while in the third subsection we will discuss the oscillatory regimes one finds for bath coupling with elevated bath concentrations .to implement glial buffering we assume a phenomenological chemical reaction of the following type : the buffer concentration is denoted by .we are using the buffer model from ref . in which the potassium dependent buffering rate is given as the parameter is normally assumed to have the same numerical value as the constant backward buffering rate which is hence an overall parameter for the buffering strength .however , the parameters should be denoted differently as they have different units ( cf . tab .[ tab:1 ] ) . this chemical reaction schemetogether with the mass conservation constraint where is the initial buffer concentration , leads to the following differential equation for : eq .( [ eq:25 ] ) the implies the following rate equation for where and are given by eqs .( [ eq:25 ] ) and ( [ eq:24 ] ) , respectively . to model the coupling toa potassium bath one normally includes an explicit rate equation for the ecs potassium concentration where the diffusive coupling flux is defined by its coupling strength and the potassium bath concentration .( [ eq:24 ] ) implies that eq .( [ eq:31 ] ) can be rewritten in terms of as follows : note that we have chosen to formulate ion regulation in terms of rather than which would be completely equivalent .this is crucial , because the dynamics of happens on a time scale that is only defined by the buffering or the diffusive process , while dynamics involves transmembrane fluxes and reservoir coupling dynamics at different time scales ( cf .the last paragraph of this section ) .this can be seen from eq .( [ eq:31 ] ) .both regulation schemes glial buffering given by eq . ( [ eq:30 ] ) and coupling to a bath with a physiological bath concentration as in eq .( [ eq:33])can be used to change the system dynamics from bistable to ionically excitable , i.e. , excitable with large excursions in the ionic variables . 
like all other system parameters the regulation parameters and given in tab .[ tab:1 ] .they have been adjusted so that the duration of the depolarized phase is in agreement with experimental data on spreading depression .note that the parameters we have chosen are up to almost one order of magnitude lower than intact brain values like the ones used in refs .while this does not affect the general time scale separation between glial or vascular ion regulation and ion fluxes across the cellular membrane , the duration of sd depends crucially on these parameters .however , during sd oxygen deprivation will weaken glial buffering , and the swelling of glia cells and blood vessel constriction will restrict diffusion to the vasculature .such processes can be included to ion based neuron models and make ion regulation during sd much slower . for our purposeit is however sufficient to assume smaller values from the beginning .we remark that the ion regulation schemes in our model only refer to vascular coupling and glial buffering .lateral ion movement between the ecs of nearby neurons is a different diffusive process that determines the velocity of a travelling sd wave in tissue .this is not described in our framework . in the following section we will demonstrate in detail how can be understood as the inhibitory variable of this excitation process .the above presented model is indeed the simplest ion based neuron model that exhibits local sd dynamics .model simplicity is an appealing feature in its own right , but one might doubt the physiological relevance of such a reduced model .our hypothesis is that it captures very general dynamical features of neuronal ion dynamics , and to confirm this we will compare the results obtained with the reduced model to the physiologically much more detailed kws model .this detailed model contains five different gated ion channels ( transient and persistent sodium , delayed rectifier and transient potassium , and nmda receptor gated currents ) and has been used intensively to study sd and sla .in fact , one modification is required so that we can replicate the results obtained from the reduced model .the kws model contains an unphysical so called fixed leak current that has a constant reversal potential of mv and no associated ion species .this current only enters the rate equation for the membrane potential .the effect on the model dynamics is dramatic . to see this note that the electroneutrality constraint eq .( [ eq:6 ] ) reflects a model degeneracy that occurs when is modelled explicitly with ( for details see ref . ) . with a fixed leak current eq .( [ eq:1_new ] ) becomes which implies that mv is a necessary fixed point condition for the system .in other words , the type of bistability with a second depolarized fixed point that we normally find in closed systems is ruled out by this unphysical current . if we , however , replace it with a chloride leak current as in our model ( cf.eqs .( [ eq:4 ] ) and ( [ eq:21 ] ) ) , i.e. , a current with a dynamically adjusting reversal potential by virtue of eq .( [ eq:22 ] ) , we find the same type of bistability for the closed system and monostability for the system that is buffered or coupled to a potassium bath . the morphological parameters ( compartmental volumes and membrane surface area ) are the same as for the reduced model .in fact in ref . 
the kws model was used without additional ion regulation for a reaction diffusion study of sd , and the only recovery mechanism of the local system seems to be this unphysical current . theoretically sd could be a travelling wave in a reaction diffusion system with bistable local dynamics , but unpublished results show that the propagation properties in the bistable system are dramatically different from standard sd dynamics with wave fronts and backs travelling at different velocities . we hence suppose that a local potassium clearing mechanism is crucially involved in sd .we conclude this section with a discussion of the time scales of the model . to this end , it is helpful to keep in mind that the phenomenon of excitability requires a separation of time scales .we have electrical and ionic excitability and these dynamics themselves are separated by no fewer than three orders of magnitude .dynamics of happens on a scale that is faster than milliseconds .this follows from the gating time scale which is given explicitly in eq .( [ eq:13 ] ) and the time scale of of which can be computed from the membrane capacitance ( given in tab .[ tab:1 ] ) and the resistance of the ion channels ( for details see ref . ) : with if we approximate the products of gating variables in the above expression with 0.1 this gives .dynamics of happens on a scale in the order of milliseconds .the time scale of ion dynamics is more explicit in the goldman hodgkin katz ( ghk ) formalism than in the nernst formalism used in this paper .the nernst currents in eqs .( [ eq:19])([eq:21 ] ) are an approximation of the physically more accurate ghk currents , but in ref . we have shown that ion dynamics of ghk models and nernst models are very similar .that is why the latter may be used for studies like this . for timescale considerations , however , we will now switch to the ghk description .the ghk current of ions with concentrations across a membrane is given by where is the permeability of the membrane to the considered ion species and is the dimensionless membrane potential with this expression contains the ideal gas constant , the temperature , ion valence and faraday s constant . if we now write down the ghk analogue of the ion rate eqs .( [ eq:3 ] ) and ( [ eq:4 ] ) we obtain for the conversion factor we have inserted the expression eq .( [ eq:11 ] ) .the fraction term is of the order of the ion concentrations , is a dimensionless quantity and hence of order one . with the ion dynamics time scale we can thus group the parameters as follows permeabilities of ion channels can be found in refs .as for the resistance , the permeability of a gated channel involves a product of gating variables .approximating such terms again with 0.1 a typical value for the permeability is .together with the values for the membrane surface area and the cell volume from tab .[ tab:1 ] the time scale of transmembrane ion dynamics is .the slowest time scales are related to potassium regulation , i.e. , to dynamics .
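for concreteness , the nernst and ghk expressions just discussed can be written down in a few lines ; the permeability , area and volume numbers are placeholders and not the values of tab . [ tab:1 ] .

import numpy as np

R, T, F = 8.314, 310.0, 96485.0   # gas constant , temperature ( K ) , faraday constant ( SI units )

def nernst(c_out, c_in, z):
    # reversal potential in volts for an ion of valence z
    return R * T / (z * F) * np.log(c_out / c_in)

def ghk_current(V, c_in, c_out, P, z=1):
    # goldman hodgkin katz current density ; u = z*F*V/(R*T) is the dimensionless membrane potential
    u = z * F * V / (R * T)
    return P * z * F * u * (c_in - c_out * np.exp(-u)) / (1.0 - np.exp(-u))

print(nernst(4.0, 140.0, +1))                                 # potassium gradient , roughly -95 mV
print(ghk_current(V=-0.068, c_in=140.0, c_out=4.0, P=1e-8))   # outward potassium current at -68 mV

# grouping the prefactors gives the transmembrane ion time scale tau ~ volume / ( area * permeability )
area, volume, P_eff = 1.0e-6, 1.0e-9, 1.0e-4   # placeholder values in consistent units
print(volume / (area * P_eff))                 # 10 ( seconds , for these assumed numbers )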
the glia scheme from eq .( [ eq:26 ] ) and eq .( [ eq:30 ] ) contains a forward buffering process that reduces at a time scale and a backward buffering process with time scale with the parameters from tab .[ tab:1 ] this leads to and .so backward buffering is much slower .this is an important property , because in the following section we will see that recovery from fes requires a strong reduction of the potassium content .if buffering and backward buffering happened on the same time scale , the required potassium reduction would not be possible .backward buffering could well happen at a considerably faster scale than eq .( [ eq:43 ] ) , but as soon as is comparable to the buffer can not re - establish physiological conditions after fes .the glia scheme here is phenomenological .a more biophysically detailed model would describe a glial cell as a third compartment .an elevation of ecs potassium leads to glial uptake .spatial buffering , i.e. , the fast transfer of potassium ions from glia cells with elevated concentrations to regions of lower concentrations , maintains an almost constant potassium concentration in the glial cells . in sd , potassium in the ecs is strongly elevated during a phase of fes lasting about 80 sec and is continuously buffered during this time .after 80 sec the concentration quickly reduces to slightly less than the normal physiological level . still there is a local potassium deficit and what we call backward buffering , i.e. , the release of potassium from the glial cells , sets in .it is much slower than the uptake , because it is driven by a far smaller deviation of the potassium concentration from physiological resting conditions of the glial cell .so as for diffusion the forward and the backward process do not actually happen simultaneously .similar to the above explanation of slow backward buffering in the glia scheme , an extremely slow backward time scale follows naturally in diffusive coupling . for diffusion the potassium content is reduced at a time scale if extracellular potassium is greater than .backward diffusion , however , only occurs in the final recovery phase that sets in after the neuron has returned from the transient fes state and is repolarized .while is still far from the resting state level , is comparable to normal physiological conditions ( see the bifurcation diagrams below in figs .[ fig:1]b and [ fig:2]b ) and hence the driving force during the final recovery phase is very small for a bath concentration close to the physiological resting state level .consequently backward diffusion is much slower than forward diffusion .note that this argument for different slow regulation time scales only relies on the values of the ecs potassium concentration along the physiological fixed point branch , and is not a feature of the particular regulation scheme we apply . [ tab:1 ] parameters for the ion based model . the results are presented in three parts that describe ( i ) the stability of closed models , where we treat the change of the potassium content as a bifurcation parameter , ( ii ) open models , i.e. , becomes a dynamical variable , with glial buffering and ( iii ) oscillations in ion concentrations in open models for bath coupling with the bath concentration as a bifurcation parameter .
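the grouping into slow regulation time scales can be illustrated with placeholder numbers , chosen only so that backward buffering comes out much slower than forward buffering and diffusion ; none of these values are taken from tab . [ tab:1 ] .

lam = 3.0e-2       # assumed diffusive coupling strength to the bath ( 1/s )
k1_eff = 1.0e-2    # assumed effective forward buffering rate , forward rate times free buffer ( 1/s )
k2 = 5.0e-4        # assumed backward buffering rate ( 1/s )

tau_forward = 1.0 / k1_eff     # glial potassium uptake : tens of seconds here
tau_backward = 1.0 / k2        # release of buffered potassium : much slower
tau_diffusion = 1.0 / lam      # forward diffusion towards the bath

print(tau_forward, tau_backward, tau_diffusion)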
at first we will not treat the change of the potassium content as a dynamical variable , but as a parameter whose influence on the system s stability we investigate .so the model we consider is defined by the rate eqs .( [ eq:1])([eq:4 ] ) and the constraint eqs .( [ eq:5 ] ) , ( [ eq:6 ] ) , ( [ eq:8])([eq:10 ] ) and ( [ eq:24 ] ). its stability will be important for the full system with dynamical ion exchange between the neuron and a bath or glial reservoir to be discussed in the next two subsections .the phenomenon of ionic excitability as in sd only occurs for dynamical .we will see that a slow fast decomposition of ionic excitability is possible .the fast ion dynamics is governed by the transmembrane dynamics that we discuss now and happens at the time scale .the dynamics of is much slower ( and ) .fast ion dynamics of the full system can hence be understood by assuming as a parameter that determines the level at which fast ( transmembrane ) ion dynamics occurs .this implies a direct physiological relevance of the closed system bifurcation structure with respect to potassium content variation for transition thresholds in the full ( open ) system . as the bifurcation parameter ( purely transmembrane dynamics ) showing * ( a ) * the membrane potential of fixed points ( fp ) and limit cycles ( lc ) , and * ( b ) * potassium concentrations . the fixed point continuation yields the black curves .solid sections are fully stable , dashed sections are unstable . the stability of the fixed point changes in hbs and lps .the initial physiological condition is marked by a black square .the limit cycle is represented by the extremal values of the dynamical variables during one oscillation .the continuation yields the green lines with the same stability convention for solid and dashed sections .the stability of the limit cycle changes either in a lp or in a period doubling bifurcation ( pd ) . in * ( b ) * the maximal and minimal extracellular potassium concentration of the limit cycle never differs by more than mm .the values can hence not be distinguished on the scale of this figure and therefore only the maximal value is drawn .the bifurcations are marked by full circles and labelled by the type , i.e. , hb , lp or , and a counter ( cf . also the insets with blow ups , in particular the rightmost one showing and on a very small horizontal scale ). the vertical and diagonal arrows labelled ` m ' and ` r ' indicate the direction of extracellular potassium changes due to ion fluxes across the membrane ( ` m ' ) and changes only due to , i.e. , because of ion exchange with a reservoir ( ` r ' ) . note that along the horizontal directions only the ics potassium concentration changes by a precise mixture of fluxes across the membrane and ion exchange with a reservoir .[ fig:1 ] ] the bifurcation diagram of the reduced model is presented in fig .[ fig:1 ] .it is shown in the ( fig .[ fig:1]a ) and in the ( fig .[ fig:1]b ) to display membrane and ion dynamics , respectively . a pair of arrows pointing in the direction of extracellular potassium changes only due to fluxes across the membrane ( vertical ` m ' direction ) and only due to exchange with a reservoir ( diagonal ` r ' direction ) is added to fig . [ fig:1]b .the fixed point continuation yields a branch ( black line ) where fully stable sections are solid and unstable sections are dashed .stability changes occur in saddle node bifurcations ( also called limit point bifurcation , lp ) and hopf bifurcations ( hb ) . 
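numerically , the fixed point continuation just described can be sketched as follows : for each value of the potassium content parameter one root finds the fixed point and classifies its stability from the eigenvalues of a finite difference jacobian . the right hand side below is only a placeholder , not the actual closed model .

import numpy as np
from scipy.optimize import fsolve

def rhs(x, p):
    # placeholder right hand side ; in practice this is the closed ( transmembrane only ) model
    return np.array([-x[0] + 0.1 * p, -2.0 * x[1] + x[0] * p])

def jacobian(x, p, eps=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    f0 = rhs(x, p)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (rhs(x + dx, p) - f0) / eps
    return J

x = np.zeros(2)
for p in np.linspace(0.0, 40.0, 81):            # sweep the bifurcation parameter
    x = fsolve(rhs, x, args=(p,))               # previous fixed point as initial guess
    eigenvalues = np.linalg.eigvals(jacobian(x, p))
    stable = bool(np.all(eigenvalues.real < 0.0))
    # a real eigenvalue crossing zero signals an lp , a complex pair crossing signals a hb
    print(p, x, stable)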
in a lp the stability changes in one direction ( zero eigenvalue bifurcation ) , in a hb it changes in two directions and a limit cycle is created ( complex eigenvalue bifurcation ) . a limit cycle is usually represented by the maximal and minimal values of the dynamical variables . however , the oscillation amplitude of the ionic variables is almost zero for the limit cycles in our model .maximal and minimal values can not be distinguished on the figure scale .hence in the the limit cycle continuation appears only as a single line ( green ) .stability changes of limit cycles occur in saddle node bifurcations of limit cycles ( lp ) .the limit cycles in our model disappear in homoclinic bifurcations . in this bifurcation a limit cycle collides with a saddle .when it reaches the saddle it becomes a homoclinic cycle of infinite period . as a reference point the initial physiological condition is marked by a black square .we will call the entire stable fixed point branch that contains this point the physiological branch , because the conditions are comparable to the normal functioning physiological state ; in particular , action potential dynamics is possible when the system is on this branch .let us discuss the bifurcation diagram starting from this reference point and follow the fixed point curve in the right direction , i.e. , for increasing .the physiological fixed point loses its stability in the first ( supercritical ) hopf bifurcation ( hb1 ) at mm .the extracellular potassium concentration is then at mm . in other words , much of the added potassium has been taken up by the cell .the limit cycle associated with hb1 loses its stability in a period doubling bifurcation ( pd ) and remains unstable .finally it disappears in a homoclinic bifurcation shortly after its creation ( cf .right inset in fig .[ fig:1]a ) .the stable limit cycle emanating from the pd point becomes unstable in a and vanishes in a homoclinic bifurcation , too .the parameter range of these bifurcations is extremely small ( ) .such fine parameter scales will not play a role for the interpretation of ion dynamics .ion concentrations are stationary and physiological up to , but for practical purposes it is irrelevant if we identify or as the end of the physiological branch .the first hb is followed by four more bifurcations ( lp1 , hb2 , lp2 , hb3 ) that all neither restore the fixed point stability nor create any stable limit cycles .the limit cycles for hb2 and hb3 are hence not plotted either .it is only the fourth hopf bifurcation ( hb4 ) at mm in which the fixed point becomes stable again and in which a stable limit cycle is created .the limit cycle branch loses its stability in lp1 and regains it in lp2 .it becomes unstable again and even more unstable in lp3 and lp4 . shortly after that ( not resolved on the scales in fig .[ fig:1 ] ) it ends in a homoclinic bifurcation with the saddle between hb1 and lp2 . at hb4 the stable free energy starved branch begins .it is generally characterized by a strong increase in the ecs potassium compared to physiological resting conditions ( fig .[ fig:1]b ) , and a significant membrane depolarization ( fig .[ fig:1]a ) .corresponding to the extracellular elevation intracellular potassium is significantly lowered .this goes along with inverse changes of the compartmental sodium concentrations ( all not shown ) .
is hence characterized by largely reduced ion gradients and strong membrane depolarization .in fact , at this membrane potential the sodium channels are inactivated which is normally called depolarization block in hh like membrane models without ion dynamics .depolarization block is , however , only one feature of fes .the closeness of fes to the thermodynamic equilibrium of the system is more importantly manifested in the reduced ion gradients . on more bifurcations occur and it remains stable for increasing .the interpretation of this bifurcation diagram should be as follows .the end of defines the maximal potassium content compatible with a physiological state of a neuron . for larger will be inevitably driven to the fes .in other words the end of marks the threshold value for a slow , gradual elevation of the potassium content to cause the transition from physiological resting conditions to fes . in a buffered systemit is the threshold for sd ignition . on the other handstable fes like conditions require a minimal potassium content which marks the end of .it is given by mm .below this value the only stable fixed point is physiological .again there is a narrow range , namely between and mm , in which stable oscillations can occur .when glial buffering is at work the end of defines the threshold for potassium buffering , i.e. , for the potassium reduction that is required to return from fes to physiological conditions ( cf.eq .( [ eq:25 ] ) ) . in the second subsection of sect .results , we will see that this is exactly how ion regulation facilitates recovery in sd models .there is another way the bifurcation diagram in fig .[ fig:1]b can be read . as we have remarked above the limit cycles of the modelare characterized by large oscillation amplitudes in the membrane variables ( not shown ) and , but almost constant ionic variables , and ( only shown ) .so fig . [ fig:1]b tells us which extracellular potassium concentrations can possibly be stable and which ones can not .values below the end of at mm , values between mm and mm and finally concentrations in the range of starting at mm can be stable .any other extracellular potassium concentration is unstable and the system will evolve towards a stable ion configuration that is present in the phase space .the highest stable potassium concentration below fes values is .if potassium in the ecs is increased instantaneously , this value indicates the threshold for sd ignition or the transition to fes .panel * ( a ) * shows the membrane potential and panel * ( b ) * shows the extracellular potassium concentration of the invariant sets , i.e. , fixed points and limit cycles .the line style convention ( solid for stable , dashed for unstable ) and bifurcation labels are the same as in fig .[ fig:1 ] .note the similar shape to fig .[ fig:1 ] , but also the different scale of the two figures .[ fig:2 ] ] performing the same type of bifurcation analysis with the physiologically more detailed model from kager et al. ( cf .last paragraph of sect .models ) leads to the diagram in fig .[ fig:2 ] .it has been shown before that also in this model there is stable fes .we do not find the same bifurcations as in the reduced model , but only two lps and one hb .however , the physiological implications are very similar . like in the reduced modelthere is an upper limit of the potassium content for stable physiological conditions ( mm ) and a lower limit for stable fes ( mm ) . 
also the downward snaking and the stability changes of the limit cycle that starts from hb1 are very similar to fig . [ fig:1 ] . this leads to the same type of conclusion concerning possible stable extracellular potassium concentrations .while numerical values of the stability limits in terms of are specific to each model , the topological similarity of the bifurcation diagrams suggests a generality of results : there is a stable physiological branch that ends at some maximal value of the potassium content . beyond this point the neuron can not maintain physiological conditions , but will face fes .on the other hand the stable fes branch ends : for a sufficiently reduced potassium content the neuron will return to physiological conditions .the new bifurcation diagrams presented in this section confirm our results from ref . : neuron models whose ionic homeostasis is only provided by atpase driven pumps , but without diffusive coupling or glial buffering , will have a highly unphysiological fixed point that is characterized by free energy starvation and membrane depolarization .however , the bifurcation diagrams presented here contain additional information of great importance .using the new bifurcation parameter crucially extends our results from ref . by uncovering the threshold values of the extracellular potassium concentration .these are completely novel insights . in * ( a ) * , * ( b ) * the and * ( c ) * , * ( d ) * the .the black curves are the stable fes branches that lose their stability in hopf bifurcations ( black circles ) .starting from the leftmost fixed point curves the fixed values are 8 , 12 , 16 , 20 , 24 , 28 and 32 mm for the reduced model and 9 , 13 , 17 , 21 , 25 , 29 and 33 mm for the detailed model .the hopf bifurcations for different chloride concentrations lead to the blue hopf line . as a reference the fixed point curves from figs .[ fig:1 ] and [ fig:2 ] are also included in the diagram and drawn in grey .[ fig:3 ] ] in the next subsection the bifurcation diagrams of the unbuffered ( closed ) systems shall facilitate a phase space understanding of the activation and inhibition process of ionic excitability as observed in sd in the buffered ( open ) systems .we are aiming for an interpretation of ionic excitability where neuronal discharge and recovery are fast dynamics that are governed by the bistable structure discussed above , whereas additional ion regulation takes the role of slowly changing .however , only the gated ion dynamics , i.e. , dynamics of sodium and potassium , is fast compared to that of ; chloride is similarly slow . by electroneutrality this means that the overall concentration of positively charged ions in the ics , i.e. , the sum of sodium and potassium ion concentrations changes on the same slow time scale as the chloride concentration . to describe this slow process not dynamically but in terms of a parameter , we simply investigate the stability for a given distribution of non dynamic , i.e. , impermeant chloride . to determine this stability we set the chloride current to zero and vary in a certain range ( from 8 to 32 mm for the reduced model , and from 9 to 33 mm for the detailed model ) .this affects the system only through the electroneutrality constraint eq .( [ eq:5 ] ) which sets the intracellular charge concentration to be shared by sodium and potassium . for each of these values we perform a fixed point continuation as in figs .[ fig:1 ] and [ fig:2 ] , which yields similarly folded s shaped curves .the result is shown in fig .[ fig:3 ] .
for our analysis of sd it is only relevant where ends .that is why the plot does not contain the whole fixed point curve , but only and a part of the unstable branch for a selection of values . as a reference the diagrams also contain the fixed point curves from figs .[ fig:1 ] and [ fig:2 ] which include chloride dynamics .the fes branches in fig .[ fig:3 ] end in hopf bifurcations .the bifurcation points for different chloride concentrations yield the blue hopf line .it marks the threshold for recovery from fes when dynamics of chloride and is slow . in the previous subsection we have analyzed the phase space structure of ion based neuron models without contact to a reservoir , i.e. , without glial buffering or diffusive coupling .these models have only transmembrane ion dynamics and obey mass conservation of each ion species .hence they describe a closed system .the bistability of a physiological state and fes that we found in these closed models is not experimentally observed , because real neurons are always open systems : not merely in the sense that they consume energy ( a necessary prerequisite for being far from thermodynamic equilibrium ) , but they also can lose or gain ions through reservoirs or buffers .we will now include glial buffering and show how it facilitates recovery from fes , a condition which in contrast to the physiological state is close to a thermodynamic equilibrium , namely the donnan equilibrium ( cf . ) .when glial buffering is at work , becomes a dynamical variable whose dynamics is given by the buffering rate eq .( [ eq:30 ] ) . in the previous subsection we have explained that the bifurcation diagrams in figs .[ fig:1 ] and [ fig:2 ] imply thresholds for an elevation of extracellular potassium to trigger the transition from physiological resting conditions to fes .this is in agreement with computational and experimental sd studies in which high extracellular potassium concentrations are often used to trigger sd . another physiologically relevant way of sd ignition is the disturbance or temporary interruption of ion pump activity . as we have shown in ref . there is a minimal pump rate required for normal physiological conditions in a neuron . below this rate the neuron will go into a fes state and remain in that state even when the pump activity is back to normal .mm after 20 sec ( vertical line ) . in * ( a ) * and * ( b ) * the time series of the membrane potentials ( black lines ) are shown .nernst potentials for all ion species are included in the diagrams as a reference .ion dynamics are shown in * ( c ) * and * ( d ) * where extracellular ion concentrations are in lighter color . [ fig:4 ] ] for the simulations in fig .[ fig:4 ] we have interrupted the pump activity for about 10 sec in the reduced model , and we have elevated the extracellular potassium concentration by mm in the detailed model to trigger sd . both stimulation types work for both models , but only the two examples are shown .the phase of pump interruption ( fig .[ fig:4]a and [ fig:4]c ) is indicated by the shaded region in the plots , the time of potassium elevation is marked by the vertical grey line .the dynamics of the two models is very similar : in response to the stimulation the neuron strongly depolarizes and remains in that depolarized state for about 70 sec ( fig .[ fig:4]a and [ fig:4]b ) .
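the structure of such a stimulation protocol ( a pump that is switched off for a roughly 10 sec window ) is easy to set up numerically ; the right hand side below is only a placeholder , not one of the two models discussed here .

import numpy as np
from scipy.integrate import solve_ivp

def pump_factor(t, t_on=20.0, t_off=30.0):
    # 0 while the pump is interrupted , 1 otherwise
    return 0.0 if t_on <= t < t_off else 1.0

def rhs(t, y):
    V, K_ext = y                             # placeholder state variables
    I_pump = pump_factor(t) * 1.0            # maximal pump current scaled by the protocol
    dV = -0.1 * (V + 65.0) + 0.5 * (K_ext - 4.0) - I_pump
    dK = 0.02 * (V + 65.0) - 0.01 * (K_ext - 4.0)
    return [dV, dK]

sol = solve_ivp(rhs, (0.0, 120.0), [-65.0, 4.0], max_step=0.05)
print(sol.y[:, -1])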
after that the neurons repolarizeabruptly and asymptotically return to their initial state .in addition to the membrane potential ( black curve ) the potential plots also contain the nernst potentials for sodium ( red line ) , potassium ( blue line ) and chloride ( green line ) that change with the ion concentrations according to the definition of the nernst potentials in eq .( [ eq:22 ] ) . in fig .[ fig:4]c and [ fig:4]d we see that the potential dynamics goes along with great changes in the ion concentrations .in particular , extracellular potassium is strongly increased in the depolarized phase .these conditions are very similar to the type of fes states discussed in the previous subsection .the recovery of ion concentrations sets in with the abrupt repolarization , but it is a very slow asymptotic process that is not shown in fig .[ fig:4 ] . in both modelsthe neuron is capable of producing spiking activity again right after the repolarization .all these aspects of ion dynamics during sd are well known from several studies .we remark that the time series are almost identical if glial buffering is replaced by the coupling to a potassium bath .both , the strength of glial buffering and of diffusive coupling have been adjusted so that the depolarized phase lasts about 70 sec which is the experimentally determined time .we will focus on bath coupling in last subsection of sec . results .if neither buffering nor a potassium bath is included the neuron does not repolarize ( for time series plots of terminal transitions to fes see ref . ) . .as in fig .[ fig:3 ] panels * ( a ) * and * ( b ) * contain plots of the membrane potentials , in panels * ( c ) * and * ( d ) * extracellular potassium is shown . *( a ) * and * ( c ) * are for the reduced model , * ( b ) * and * ( d ) * for the detailed model .the trajectories of the reduced model are represented as red curves , those of the detailed model are magenta .the sections of the trajectories that belong to times before and during the stimulation are dashed .the fixed point curves from fig .[ fig:3 ] are added to the plots as shaded lines whereas the fixed point continuations for the unbuffered models with dynamical chloride are slightly darker .the pair of arrows in the extracellular potassium plots indicates the direction of pure transmembrane ( vertical ) and pure buffering dynamics ( diagonal ) .[ fig:5 ] ] the time series in fig .[ fig:4 ] are useful to confirm that the neuron models we investigate have the desired phenomenology and indeed show sd like dynamics .yet the nature of the different phases of this ionic excitation process the fast depolarization , the prolonged fes phase and the abrupt repolarization remains enigmatic . in a phase space plotthe picture becomes much clearer and the entire process can be directly related to the two stable branches , and , that we found for the closed and therefore pure transmembrane models in the previous subsection . in fig .[ fig:5 ] the time series from fig . [ fig:4 ] for a simulation time of 50 min are shown in the and the .the parts of the trajectories during the stimulation ( pump interruption and potassium elevation ) are dashed . in the chosen planes vertical lines belong to dynamics of constant potassium contents that can be understood in terms of the models we analyzed in the previous subsection .that is why fig .[ fig:5 ] contains the fixed point curves from fig .[ fig:3 ] as shaded lines as a guide to the eye . 
in fig .[ fig:5]c and [ fig:5]d buffering dynamics is diagonal as indicated by the pair of arrows added to the plot . for both trajectories the stimulationis followed by a vertical activation process that leads to the transition from to .the verticality means that this is a process almost purely due to transmembrane dynamics .it is governed by the bistable phase space structure that we discussed in the previous section and also in ref .buffering dynamics is too slow to inhibit the activation .the types of stimulation we applied are related to bifurcations of the transmembrane system : the potassium elevation is beyond the end of which is marked by the first hopf bifurcation ( hb1 ) in fig .[ fig:1 ] .the interruption of pump activity means that we go below a pump rate threshold that is defined by a saddle node bifurcation ( cf . ) . more generally , to initiate an ionic excitation it is necessary to stimulate the system until it enters the basin of attraction derived in the unbuffered system of the fes state .the activation is followed by a phase of both , slow transient transmembrane dynamics mostly due to chloride , and potassium buffering .it is the latter that bends the trajectories in the diagonal direction so that they go along the fes branches from fig .[ fig:3 ] .the trajectories slowly approach the repolarization threshold given by the hopf line .the duration of this fes phase is determined by how long it takes the system to reach the hopf line .this process is a mixture of buffering and transient transmembrane dynamics for the reduced model and more buffering dominated in the detailed model .the duration of the fes phase is consequently a result of both types of dynamics .however , the main insight we gain from this plot is : glial buffering is the necessary inhibitory mechanism that takes the system to the hopf line so that it can repolarize .we remark that the time series and phase space plots for bath coupling instead of buffering are almost identical and the same interpretation holds .the more general conclusion is then : ion dynamics beyond transmembrane processes is necessary to take the system to the hopf line so that it can repolarize .this can , of course , be a combination of bath coupling and buffering . when the hopf line is reached that neuron repolarizes abruptly which is the second almost purely vertical process .the repolarization is followed by slow asymptotic recovery dynamics of ion concentrations that takes the neuron back to the initial state which is at mm .the neuron regains the electrical excitability that is lost during fes already right after the repolarization .so the system is back to physiological function long before the ion gradients are fully restored .let us summarize the results from this subsection . by relating the sd time series from fig .[ fig:4 ] to the bifurcation structure of the unbuffered models from the first subsection of sect .results and in particular to the two stable branches and we have succeeded to understand ionic excitability as a sequence of different dynamical phases .the initial depolarization and the later repolarization are membrane mediated fast processes that obey the bistable dynamics of unbuffered systems .the fes phase is buffering dominated and lasts until buffering has taken the system to a well defined repolarization threshold .the recovery phase is dominated by backward buffering .the full excursion time is the sum of the durations of each phase . 
for the de and repolarization processthis duration mainly depends on the time scale of the transmembrane dynamics and is hence comparably short .the duration of the fes phase is a result of both , the transient transmembrane dynamics and glial ion regulation at a much slower time scale .the final recovery phase is mainly backward buffering dominated which is the slowest process .hence the duration of an sd excursion is mainly determined by the slow buffering and backward buffering time scales .this conclusion that relies on our novel understanding of the different thresholds involved in sd is in fact in agreement with recent experimental data suggesting vascular clearance of extracellular potassium as the central recovery mechanism in sd .the dynamics of excitable systems can often be changed to self sustained oscillations by a suitable parameter variation .the type of bifurcation that leads to the oscillations and the shape of the limit cycle in the oscillatory regime determine excitation properties like threshold sharpness and latency .the oscillatory dynamics that is related to ionic excitability can be obtained for bath coupling with an elevated bath concentration .so in this section we replace the buffering dynamics for with the diffusive coupling given by eq .( [ eq:33 ] ) .this coupling is used in experimental in vitro studies of sd and has also been applied in computational models that are very similar to our reduced one . . * ( a ) * and * ( b ) * , * ( c ) * and * ( d ) * , and * ( e ) * and * ( f ) * are simulations for , and , respectively .the dynamics is typical for * ( a ) * and * ( b ) * seizure like activity , * ( c ) * and * ( d ) * tonic firing , * ( e ) * and * ( f ) * periodic sd .note the different time scales of sla , tonic firing and period sd and also the different oscillation amplitudes in the ionic variables .[ fig:6 ] ] depending on the level of the bath concentration , we find three qualitatively different types of oscillatory dynamics that are shown in fig .[ fig:6 ] .the top row ( a ) shows the time series of seizure like activity for .it is characterized by repetitive bursting and low amplitude ion oscillations .the other types of oscillatory dynamics are tonic firing at with almost constant ion concentrations ( fig .[ fig:6]b ) and periodic sd at with large ionic amplitudes ( fig .[ fig:6]c ) .we see that sla and periodic sd exhibit slow oscillations of the ion concentrations and fast spiking activity , which hints at the toroidal nature of these dynamics . below we will relate sla and periodic sd to torus bifurcations of the tonic firing limit cycle .color and line style conventions for fixed points and limit cycles are figs .[ fig:1 ] and [ fig:2 ] : black and green lines are fixed point and limit cycles , solid and dashed line styles mean stable and unstable sections. stable solution on invariant tori are blue .they were obtained by direct simulations .the fixed point changes stability in hbs and lps .the bifurcation types limit cycle undergoes are , period doubling ( pd ) and torus bifurcation ( tr ) .some physiologically irrelevant unstable limit cycles are omitted ( cf .panel * ( a ) * shows the membrane potential , panel * ( b ) * shows the extracellular potassium concentration . *( b ) * does not contain the limit cycle , because it can hardly be distinguished from the fixed point line.[fig:7 ] ] the examples in fig .[ fig:6 ] show that our model contains a variety of physiologically distinct and clinically important dynamical regimes . 
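the three regimes can be told apart from a simulated extracellular potassium trace alone , for instance by the peak to peak amplitude of its slow component ; the amplitude thresholds below are illustrative assumptions .

import numpy as np

def classify_regime(t, K_ext, window=5.0):
    # smooth out the fast spiking component with a running mean over `window` seconds
    dt = np.median(np.diff(t))
    n = max(1, int(window / dt))
    slow = np.convolve(K_ext, np.ones(n) / n, mode="same")
    amplitude = slow.max() - slow.min()      # peak to peak amplitude of the slow component
    if amplitude < 1.0:                      # almost constant ion concentrations
        return "tonic firing"
    if amplitude < 10.0:                     # slow oscillations of a few mM
        return "seizure-like activity"
    return "periodic spreading depression"   # excursions of tens of mM

t = np.linspace(0.0, 600.0, 6001)
print(classify_regime(t, 4.0 + 10.0 * (1.0 - np.cos(2.0 * np.pi * t / 400.0))))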
a great richness of oscillatory dynamics , in fact, under the simultaneous variation of and the glial buffering strength has already been reported in refs . for a very similar model . in ref . the authors even give a bifurcation analysis of ionic oscillations for elevation . to investigate dynamical changes and the transitions between the dynamical regimes in our model we perform a similar bifurcation analysis and vary , too .two important differences should be noted though .first , ref . uses an approximation of the multi time scale model in which the fast spiking dynamics is averaged over time , while our analysis does not rely on such an approximation .second , our analysis covers a bigger range of values which allows us to compare sla and sd , while ref . exclusively deals with sla .[ fig:7 ] shows the bifurcation diagram for variation in the and in the .in addition to fixed points ( black ) and limit cycles ( green ) also quasiperiodic torus solutions ( blue ) are contained in the diagram . in comparison to fig .[ fig:1 ] this model contains a new type of bifurcation , namely the neimark sacker bifurcation , also called torus bifurcation ( tr ) .a torus bifurcation is a secondary hopf bifurcation of the radius of a limit cycle in which an invariant torus is created .if this torus is stable , nearby trajectories will be asymptotically bound to its surface .however , we can not follow such solutions with standard continuation techniques , because these require an algebraic formulation in terms of the oscillation period .this is not possible for torus solutions , because on a torus the motion is quasiperiodic , i.e. , characterized by two incommensurate frequencies .we can hence only track the stable solutions by integrating the equations of motion and slowly varying .it is due to this numerically expensive method that in this section we will only analyze oscillatory dynamics of the reduced hh model with time dependent ion concentrations .the result of this bifurcation analysis in fig.[fig:7 ] shows us that there is a maximal level of the bath concentration compatible with physiological conditions .it is identified with the subcritical hopf bifurcation hb1 in which the fixed point loses its stability .the related limit cycle is omitted , because it stays unstable and terminates in a homoclinic bifurcation with the unstable fixed point branch .the fixed point undergoes further bifurcations ( lp1 , lp2 , hb2 , hb3 ) which all leave it unstable and do not create stable limit cycles .it is in hb4 that the fixed point becomes stable again and also a stable limit cycle is created .this is the last fixed point bifurcation of the model .the limit cycle that is created in hb4 changes its stability in several bifurcations .the physiologically most relevant ones are the four torus bifurcations .the bifurcation labels indicate the order of detection for the continuation that starts at hb4 . 
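the brute force tracking of stable torus solutions mentioned above amounts to a quasi static parameter sweep : integrate for a while at each bath concentration , reuse the final state as the next initial condition , and record the extrema of the slow ionic variable . the right hand side below is again only a placeholder .

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k_bath):
    V, K_ext = y                                    # placeholder state variables
    dV = -0.1 * (V + 65.0) + 0.2 * (K_ext - 4.0)
    dK = -0.03 * (K_ext - k_bath)
    return [dV, dK]

y = [-65.0, 4.0]
for k_bath in np.linspace(8.0, 30.0, 23):
    sol = solve_ivp(rhs, (0.0, 600.0), y, args=(k_bath,), max_step=0.5)
    y = sol.y[:, -1]                                # slow ramp : reuse the final state
    tail = sol.y[1, sol.t > 300.0]                  # discard the transient
    print(k_bath, tail.min(), tail.max())           # extrema trace out the stable branch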
initially the limit cycle is characterized by fast low amplitude oscillations .it becomes unstable in the subcritical torus bifurcation tr1 .it regains and again loses its stability in the subcritical torus bifurcations tr2 and tr3 .the last torus bifurcation , the restabilizing supercritical tr4 , is directly followed by a pd after which no stable limit cycles exist any more .again we have omitted in the diagram the unstable branch after pd and the limit cycle that is created in pd , which remains unstable .physiologically it is more intuitive to discuss the diagram for increasing starting from the initial physiological conditions marked by the black square .normal physiological conditions become unstable at and above this value the neuron spikes continuously according to the stable limit cycle branch between pd and tr4 . when is reached the dynamics changes from stationary spiking to seizure like activity on an invariant torus .the beginning of sla is hence due to a supercritical torus bifurcation and the related ionic oscillation sets in with finite period and zero amplitude . from on tonic spiking activity is stable again and there is a small range of bistability between sla and this tonic firing . as we mentioned above solutions on an invariant torus can not be followed with normal continuation tools like auto , so only stable branches are detected .the details of the bifurcation scenario at tr3 are hence not totally clear , but we suspect that the unstable invariant torus that must exist near tr3 collides with the right end of the stable torus sla branch in a saddle node bifurcation of tori .tonic spiking then remains stable until tr2 .this bifurcation is related to the periodic sd that already exists well below .in fact , the threshold value is in agreement with experiments . again the unstable torus near tr2 is not detected , but we suppose that a similar scenario as in tr3 occurs .the dynamics on the torus branch related to tr2 ( and tr1 where it seems to end ) is very different from the first torus branch . while the periods of the slow oscillations during sla are 16 to 45 sec the ion oscillations of periodic sds are much slower with periods of 350 to 550 sec .another crucial difference is obvious from fig .[ fig:7]b which shows the bifurcation diagram in the .the fixed point is just a straight line , because the diffusive coupling eq .( [ eq:33 ] ) makes a necessary fixed point condition .the limit cycle is always extremely close to this line . on the chosen scale it can not be distinguished from the fixed point and is hence not contained in the plot . only the torus solutions of sd and sla attain values that differ significantly from the regulation level .the ionic amplitudes of sd are one order of magnitude larger than those of sla .this has to do with the fact that the peak of sd as described above must be understood as a metastable fes state that exists due to the bistability of the transmembrane dynamics .the dynamics of sla is clearly of a different nature .note that the bifurcation diagram reveals a bistability of tonic firing and full blown sd between the left end of the sd branch at about 11 mm and tr2 .this means that there is no gradual increase in the ionic amplitudes that slowly leads to sd , but instead it implies that sd is a manifest all or none process .
.panel * ( a ) * shows the extracellular sodium concentration and includes an inset around tr4 and pd .panel * ( b ) * presents the potassium gain / loss.[fig:8 ] ] in fig .[ fig:8 ] we look at the same bifurcation diagram in the and the . while in fig .[ fig:7 ] most of the ionic phase space structure is hidden , because for fixed points and limit cycles , the in fig .[ fig:8]a provides further insights into the ion dynamics .we see that the stable fixed point branch before hb1 has extracellular sodium concentrations close to the physiological value .the stable branch after hb4 , however , has an extremely reduced extracellular sodium level and is indeed fes like .the stable limit cycles between pd and tr4 and between tr3 and tr2 , and also sla are rather close to the physiological sodium level . on the other hand ,periodic sd is an oscillation between fes and normal physiological conditions , which is an expected confirmation of the findings from the previous section .[ fig:8]b is useful in connecting the phase space structure of the bath coupled system to that of the transmembrane model of the first subsection of sect. results .if we interchange the and the in the diagram it looks very similar to fig .[ fig:1]b .the torus bifurcations tr1 , tr2 and tr3 are very close to the limit point bifurcations , and of the transmembrane model .the fixed point curves are topologically identical .this striking similarity has to do with the fact that the limit cycle in fig .[ fig:1 ] has almost constant ion concentrations .we have pointed out in the first subsection of sect .results that fig .[ fig:1 ] tells us which extracellular potassium concentrations are stable for pure transmembrane dynamics .diffusive coupling with bath concentrations at such potassium levels leads to negligibly small values of ( cf .( [ eq:33 ] ) ) .therefore the limit cycle is still present in the bath coupled model and also the stability changes can be related to those in the transmembrane model . againthis can be seen as a confirmation of the results from the previous section : the transmembrane phase space plays a central role for models that are coupled to external reservoirs .we can interpret the ionic oscillations from fig .[ fig:6 ] and the bifurcations leading to them with respect to this phase space . .only extracellular potassium is shown .the limit cycle and fixed point curves from figs . [ fig:1 ] and [ fig:3 ] are superimposed to the plots as shaded lines whereas the limit cycle and fixed point from fig .[ fig:1 ] ( dynamical chloride ) are darker .the limit cycle and fixed point are not graphically distinguished , but comparison with fig .[ fig:1 ] should avoid confusion .[ fig:9 ] ] last we consider the dynamics of sla and periodic sd in a phase space projection . in fig .[ fig:9 ] the trajectories for sla and periodic sd are plotted in the together with the underlying fixed point and limit cycles from the transmembrane model ( cf .[ fig:3 ] ) .the periodic sd trajectory has a very similar shape to the single sd excursion from fig .[ fig:5 ] and is clearly guided by the stable fixed point branches and . on the other hand sla is a qualitatively very different phenomenon . 
rather than relating to the fes branch, it is an oscillation between physiological conditions and those stable limit cycles that exist for moderately elevated extracellular potassium concentrations .the ion concentrations remain far from fes .so sla and sd are not only related to distinct bifurcations , though of similar toroidal nature and branching from the same limit cycle , but they are also located far from each other in the phase space .this completes our phase space analysis of local ion dynamics in open neuron systemsin this paper we have analyzed dynamics at different time scales in a hh model that includes time dependent ion concentrations .such models are also called second generation hodgkin huxley models .they exhibit two types of excitability , electrical and ionic excitability , which are based on fast and slow dynamics .the time scales of these types of excitability are themselves separated by four to five orders of magnitude .the dynamics ranges from high frequency bursts of about 100 hz with short interburst periods of the order of 10 msec ( fig .[ fig:6]a ) to the slow periodic sd with frequencies of about hz and periods of about 7:30 min ( fig .[ fig:6]c ) .the slow sd dynamics in our model is classified as ultra slow or near dc ( direct current ) activity and can not normally be observed by electroencephalography ( eeg ) recordings , because of artifacts due to the resistance of the dura ( thick outermost layer of the meninges that surrounds the brain ) .however , recently subdural eeg recordings provided evidence that sds occur in abundance in people with structural brain damage .indirect evidence was provided already earlier by functional magnetic resonance imaging ( fmri) and patient s symptom reports combined with fmri that sd also occurs in migraine with aura .the slowest dynamics that can be accurately measured by eeg , i.e. , the delta band , with frequencies about 0.5 to 4 hz , has attracted modelling approaches much more than sd , which was doubted to occur in human brain until the first direct measurements were reported .it is interesting to compare the origin of slow time scales in such delta band models to our slow dynamics .models of the delta band essentially come in two types . on the one hand thalamo cortical network and mean field models of hh neurons with fixed ion concentrations have been studied . in this case , a slow time scale emerges because the cells are interconnected via synaptic connections using metabotropic receptors that are slow , because they act through second messengers . 
on the other hand , single neuron models with currents that are not contained in hh , namely a hyperpolarization activated depolarizing current , sodium and potassium currents , and a persistent sodium current , were suggested .the interplay between these currents gives rise to oscillations at a frequency of about 23 hz .it is therefore hardly surprising that these currents , in particular the persistent sodium and the sodium and potassium currents , have also been proposed to play an essential role in sd .furthermore , bursting as another example of slow modulating dynamics was studied in a pure conductance based model with a dendritic and an axo somatic compartment .in contrast to those approaches our results show that already dynamics in a hh framework with time dependent ion concentrations and buffer reservoirs range from seconds to hours even with the original set of voltage gated ion currents .time scales from milliseconds ( membrane dynamics ) to seconds ( ion dynamics ) and even minutes to hours ( ion exchange with reservoirs ) can be directly computed from the model parameters ( cf.sect .the interplay of membrane dynamics , ion dynamics and coupling to external reservoirs ( glia or vasculature ) naturally leads to dynamics typical of sla and sd . and ( see sec . results ) and the directions ( arrows ) of two paths of ` pure ' flux condition : fluxes exclusively across the membrane and fluxes exclusively from ( or to ) reservoirs .a horizontal path is caused by a particular mixture of these fluxes that induces potassium ion concentration changes exclusively to the intracellular compartment .ionic excitability can be understood as a cyclic process in this diagram ( see text ) .[ fig : bifdiagram ] ] in particular sd is explained by a bistability of neuronal ion dynamics that occurs in the absence of external reservoirs . the potassium gain or loss through reservoirs provided by an extracellular bath ,the vasculature or the glial cells is identified as a bifurcation parameter whose essential importance was not realized in earlier studies ( see fig .[ fig : bifdiagram ] ) . using this bifurcation parameter and the extracellular potassium concentration as the order parameter, we obtain a folded fixed point curve with the two outer stable branches corresponding to states with normal physiological function , hence named physiological branch , and to states being free energy starved ( ) .the definition of the bifurcation parameter implies that exchange with ion reservoirs happens along the diagonal direction labelled by ` r ' .mediated dynamics is in the vertical ` m ' direction . in the full system where the ion exchange is a dynamical variable our unconventional choice of variables , i.e. modelling instead of , makes it obvious that the time scales of diagonal and vertical dynamics is separated by at least two orders of magnitude .slow dynamics is along and , and the fast dynamics describes the jumps between these branches .we remark that dynamics along is slower than along , because the branch is almost horizontal which leads to a very small gradient driving the diffusive coupling .similarly the release of buffered potassium from the glia cells is only weakly driven ( cf .the discussion of buffering time scales in sect .model ) . in the closed system sufficiently strong stimulations lead to the transition from the physiological resting state located on to fes . 
in the full system with dynamical ion exchange with the reservoirs ,physiological conditions are restored after a large phase space excursion to the the before stable fes state .we refer to this process as ionic excitability .in contrast to the electrical excitability of the membrane potential this process involves large changes in the ion concentrations . the entire phase space excursion of this excitation process can be explained through the specific transits between and along and .we observe ion changes on three slow time scales .( i ) vertical transits between and caused by transmembrane dynamics in the order of seconds .the time scale is determined by the volume surface area ratio and the membrane permeability to the ions .( ii ) diagonal dynamics along in the order of tens of seconds caused by contact to ion reservoirs .this time scale is determined by buffer time constants or vascular coupling strength .( iii ) dynamics on again caused by contact to ion reservoirs , but at the slower backward buffering time scale in the order of minutes to hours determined by the slower backward rate of the buffer . during this long refractory phase of ionic excitability the spiking dynamics based on electrical excitability separated by seven orders of magnitude seems fully functional .the right end of and the left end of are marked by bifurcations that occur for an accordingly elevated or reduced potassium content .this is the first explanation of thresholds for local sd dynamics in terms of bifurcations .we remark , however , that for sd ignition the important question is not where ends , but instead where the basin of attraction of begins .this new understanding of sd dynamics suggests a method to investigate the sd susceptibility of a given neuron model . one should consider the closed model without coupling to external reservoirs and check if shows the typical bistability between a physiological resting state and fes .we remark that unphysical so called ` fixed leak ' currents must be replaced by proper leak currents with associated leaking ions .thresholds for the transition between and translate to thresholds for sd ignition and repolarization , i.e. , recovery from fes in the full open model .knowledge of the potassium reduction needed to reach the repolarization threshold and knowledge about the buffer capacity could then tell us if recovery from fes can be expected ( such as in migraine with aura ) or if the depolarization is terminal ( such as in stroke ) .although our model does not contain all important processes involved in sd , our phase space explanation appears to be valid also for certain model extensions .for example , considering only diffusive regulation of potassium is physically inconsistent , but adding an analoguous regulation term for sodium turns out not to alter the dynamics qualitatively .moreover osmosis driven cell swelling normally regarded as a key indicator of sd is not included in our model , but can be added easily .unpublished results confirm that also with such cell swelling dynamics the fundamental bifurcation structure of fig .[ fig : bifdiagram ] is preserved . 
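the susceptibility test suggested above can be automated along the following lines : integrate the closed model from a physiological and from a strongly depolarized initial condition and compare the attractors . the right hand side is a bistable toy system standing in for an actual ion based neuron model .

import numpy as np
from scipy.integrate import solve_ivp

def closed_rhs(t, y):
    # bistable placeholder ; a real test would use the closed ( reservoir free ) neuron model
    V, K_ext = y
    return [-0.1 * V * (V - 1.0) * (V + 1.0) - 0.05 * K_ext, -0.01 * K_ext]

def final_state(y0, t_end=2000.0):
    sol = solve_ivp(closed_rhs, (0.0, t_end), y0, max_step=1.0)
    return sol.y[:, -1]

rest = final_state([-1.0, 0.0])     # start near the physiological state
depol = final_state([1.0, 0.5])     # start from a strongly depolarized state
print("closed model is bistable:", not np.allclose(rest, depol, atol=1e-2))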
as a clinical application of our framework , we have linked a genetic defect , which affects the inactivation gate and which is present in a rare subtype of migraine with aura , to sd .our simulations show that such mutations render neurons more vulnerable to sd .the interesting point , however , is that on the level of the fast time scale the firing rate is decreased , which in a mean field approach ( as done for the delta band ) translates to decreased activity .this effect seemingly contradicts the increased sd susceptibility and hence illustrates the pitfalls in trying to neglect ion dynamics in the brain and to bridge the gap in time scales by population models .the authors are grateful for discussions with steven j. schiff and bas jan zandt . nh thanks prof . dr . eckehard schöll for continuous support , fruitful discussions , and critically reading the manuscript . | the classical hodgkin huxley ( hh ) model neglects the time dependence of ion concentrations in spiking dynamics .the dynamics is therefore limited to a time scale of milliseconds , which is determined by the membrane capacitance multiplied by the resistance of the ion channels , and by the gating time constants .we study slow dynamics in an extended hh framework that includes time dependent ion concentrations , pumps , and buffers .fluxes across the neuronal membrane change intra and extracellular ion concentrations , whereby the latter can also change through contact to reservoirs in the surroundings .ion gain and loss of the system is identified as a bifurcation parameter whose essential importance was not realized in earlier studies .our systematic study of the bifurcation structure and thus the phase space structure helps to understand activation and inhibition of a new excitability in ion homeostasis which emerges in such extended models .also modulatory mechanisms that regulate the spiking rate can be explained by bifurcations .the dynamics on three distinct slow time scales is determined by the cell volume to surface area ratio and the membrane permeability ( seconds ) , the buffer time constants ( tens of seconds ) , and the slower backward buffering ( minutes to hours ) .the modulatory dynamics and the newly emerging excitable dynamics correspond to pathological conditions observed in epileptiform burst activity , and spreading depression in migraine aura and stroke , respectively . |
a galaxy is a self - gravitating system where stellar dynamics is governed by newton s law .it could be naively described as a set of coupled , second - order , non - linear ordinary differential equations , where is the number of stars , which ranges between and .solving such an equation set numerically is practically only possible at the very low end of the -range , and even so very challenging with current computer hardware .thus , various techniques are used to simplify the mathematical description of the system ; these are often designed to fit a particular problem in stellar dynamics and yield unphysical results when applied to another problem .direct -body simulation is one of the main techniques used to study gravitational systems in general and galaxies in particular . in this technique ,the distribution function is sampled at points in a monte - carlo fashion .this depends on the computational capabilities , and an astrophysical system with stars might be represented numerically by a sample of just `` supermassive '' particles .this seems to be allowed because of the equivalence principle and the fact that gravitation is scale free , unlike , for example , in molecular dynamics .however in gravity too this simplification can cause problems , as some dynamical effects depend on number density rather than just mass density .the most well known -dependent effect in stellar dynamics is two - body relaxation .the relaxation time , the characteristic time for a particle s velocity to change by order of itself due to encounters with other particles , scales with the crossing time roughly as .thus , the ratio between the relaxation times in a real and a simulated system is of similar order of magnitude to the undersampling factor .this could be taken into account when interpreting the result of an undersampled simulation , but a poorly sampled distribution function might have other , unexpected , consequences .galaxies are often described as collisionless stellar systems , which means that the relaxation time is much larger than the timescale of interest ( except perhaps at the very center ) .this property could be very useful : since a particle s orbit is basically what it would be if it were moving in a smooth gravitational field , we could evaluate the field instead of calculating all stellar interactions , this is cheaper computationally .another useful property is that galaxies are often spheroidal in shape .even highly flattened galaxies will have a spherical dark halo component .thus , a spherical shape could be used as a zeroth order approximation for the gravitational field , and higher order terms could be written using spherical harmonics .the goal of this paper is to examine two techniques that utilize both these facts .these are the multipole expansion ( mex ) and the self - consistent field ( scf ) methods .they historically come from different ideas , and as explained below in detail , they are mathematically distinct . 
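a back of the envelope comparison makes the relaxation argument explicit ; the prefactor below is the standard textbook estimate and the particle numbers are merely examples .

import numpy as np

def t_relax(N, t_cross=1.0):
    # standard two body relaxation estimate , t_relax ~ 0.1 * N / ln(N) * t_cross
    return 0.1 * N / np.log(N) * t_cross

N_real, N_sim = 1.0e11, 1.0e6
print(t_relax(N_real) / t_relax(N_sim))   # close to the undersampling factor , up to the logarithm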
in the context of numerical simulations, however, they serve a similar function: to evaluate the gravitational force on all particles generated by this same collection of particles, in a way that discards spurious small scale structure (in other words, smooths the field). mex was born of the need to ease the computational burden. the idea is that given spherical symmetry, gauss's law says that the gravitational force on a particle at radius from the center is simply , towards the center, where is the enclosed (internal) mass. the gravitational constant, , will be omitted in the following text. this idea was used by who simulated clusters with up to 100 particles to study phase mixing due to spherical collapse. this ``spherical shells'' method is mex of order zero and was also used for the same purpose by . the extension of this idea is that when spherical symmetry breaks, corrections to the force can be expressed by summing higher multipoles (dipole, quadrupole, etc.) of the other particles, both internal and external to . used such a code to study a stellar cluster of 1000 stars embedded in a galactic potential, truncating the expansion at . used a variation of this method to study galaxy collisions. these authors employed a grid and also conducted simulations of up to and . they additionally assumed azimuthal symmetry, which reduced the number of terms in the expansion. , , and all use variations of this method, with additional features which are partly discussed in section [ sec : discussion - mex ]. see also for a review. the prehistory of scf is rooted in the problem of estimating a disk galaxy's mass distribution from its rotation curve. proposed a mathematical method to generate a surface density profile and a corresponding rotation curve (related to the potential) by means of a hankel transform, and introduced a family of such pairs. used toomre's idea, but in reverse: to calculate the gravitational field from an arbitrary 2d density, he generated an orthogonal set of density profiles and their corresponding potentials. this solved two problems: (1) with his orthogonal set it was possible to represent any flat galaxy as a finite linear combination of basis functions, and (2) unwanted collisional relaxation was curbed due to the smooth nature of the reconstructed gravitational field. cf. a related method by . introduced a 3d extension of his method, which was called scf by ( * ? ? ? * hereafter ho92 ) by analogy to a similar technique used in stellar physics; further historical developments are discussed in section [ sec : radial - basis ]. to exploit recent developments in the world of general purpose computing on gpus, we implemented both scf and mex routines in a code called _ etics _ (acronym for _ expansion techniques in collisionless systems _). in section [ sec : formalism ] we explain the mathematical formalism of both methods and highlight the differences between them. in section 3 we explain the unique challenges in a gpu implementation and measure the code's performance. in section 4 we discuss the accuracy of expansion and direct techniques.
in section 5 we present a general discussion and finally summarize in section 6 .here we clarify some terms used throughout this work : expansion methods : : a way to get potential and force by summing a series of terms ; in this paper either mex or scf .mex : : multipole expansion method ( sometimes known in the literature as the spherical harmonics method ) ; expansion of the angular part .scf : : self - consistent field method ; a `` pure '' expansion method since both angular and radial parts are expanded ._ etics _ : : expansion techniques in collisionless systems ; the name of the code we wrote , which can calculate the force using both mex and scf , using a gpu .gpu : : graphics processing unit ; a chip with highly parallel computing capabilities , originally designed to accelerate image rendering but is also used for general - purpose computing .it often lies on a video card that can be inserted into an expansion slot on a computer motherboard .both mex and scf methods are ways to solve the poisson equation : the formal solution of which is given by the integral : the expression is the green s function of the laplace operator in three dimensions and in free space ( no boundary conditions ) , and the integral is over the whole domain of definition of . in an -body simulation , the density field sampled at discrete points , such that where is the 3d dirac delta function .direct -body techniques evaluate integral ( [ eq : pot-3d ] ) _ directly _ : and thus at each point where the potential is evaluated , require calculations of inverse distance , or if , since there is no self - interaction . in practice , we are interested in evaluating the potential at the same points in which the density field is sampled , and thus a `` full '' solution of the poisson equation requires inverse distance calculations . in both mex and scf the integrand in equation ( [ eq : pot-3d ] ) is expanded as a series of terms , each of which more easily numerically integrable ; this is done in two different ways , lending the two methods quite different properties . in both methods , the reduction in numerical effort comes at the expense of accuracy compared to direct - summation , but this statement is arguable since in practice direct -body techniques use a very small number of particle to sample the phase space . to demonstrate the difference between the two approaches in the following section ,let us consider a 1d version of integral ( [ eq : pot-3d ] ) ; let us further assume that the density exists in the interval : note that this is not a solution for a 1d poisson equation ( hence the notation instead of ) , but just a simplification we will use to illustrate the properties of each method .we will conveniently ignore the fact that this integral is generally divergent in 1d , as it does not affect the following discussion . 
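for reference, the direct evaluation of integral ([ eq : pot-3d ]) mentioned above can be written out in a few lines; the sketch below is ours (the gravitational constant is omitted and no softening is used, as in the text), and it simply performs the N(N-1) inverse-distance calculations.

#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { double x, y, z, m; };

// direct summation of the n-body potential at every particle position,
// skipping self-interaction; the cost is N*(N-1) inverse distances
std::vector<double> direct_potential(const std::vector<Particle>& p) {
    const std::size_t n = p.size();
    std::vector<double> phi(n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            if (i == j) continue;
            const double dx = p[i].x - p[j].x;
            const double dy = p[i].y - p[j].y;
            const double dz = p[i].z - p[j].z;
            phi[i] -= p[j].m / std::sqrt(dx * dx + dy * dy + dz * dz);
        }
    }
    return phi;
}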
in brief, mex is a _taylor_-like expansion of the green's function, while scf is a _fourier_-like expansion of the density. this already hints at the most critical difference between mex and scf: while the former, like a taylor series, is local in nature, the latter is global. another way to look at it is that in both methods the integrand is written as a series of functions (of ) with coefficients: in mex one uses the given density to evaluate the functions, while their coefficients are known in advance; in scf one evaluates coefficients, while the functions are known in advance. let us define and expand the green's function equivalent in equation ([ eq : pot-1d ]) around ; we get that for or : while for or we can expand around : the first and second terms of integral ([ eq : pot-1d ]) define the functions and (utilizing the commutativity of the sum and integral operations): and thus . while seemingly we made things worse (instead of one integral to evaluate, we now have a series of integrals), the fact that has moved from the integrand to the integral's limit greatly simplifies things. let the density be sampled at discrete and ordered points ; it is easy to show that ; in other words, each of these functions is a cumulative sum of simple terms and can be evaluated at all in just one pass, but a sorting operation is required. let us leave the green's function as it is, and instead expand the density as a generalized fourier series: where is a complete set of real or complex functions (the basis functions); orthonormality of the basis functions is assumed above. the integral ([ eq : pot-1d ]) becomes: the function set is defined by the above integral. in essence, we replaced the integral over an arbitrary density with an integral over some predefined `densities'. the advantage is that we can calculate the corresponding potentials in advance, and then the problem is reduced to numerically determining the coefficients . the choice of the basis is not unique, and an efficient scf scheme requires the following: (1) the functions and are easy to evaluate numerically; (2) the sum ([ eq : scf - sum ]) converges quickly, or in other words, is already close to . the standard form of mex in 3d is
\[
\begin{aligned}
\Phi({\bf r}) & = & -\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\frac{4\pi}{2l+1}\left[\frac{Q_{lm}(r)}{r^{l+1}}+r^{l}P_{lm}(r)\right]Y_{lm}(\theta,\phi)\label{eq:mex-phi}\\
Q_{lm}(r) & = & \int_{r'<r}r^{\prime l}\rho({\bf r}')Y_{lm}^{*}(\theta',\phi')\,{\rm d}^{3}r'\label{eq:mex-qlm}\\
P_{lm}(r) & = & \int_{r<r'}r^{\prime-(l+1)}\rho({\bf r}')Y_{lm}^{*}(\theta',\phi')\,{\rm d}^{3}r'.\label{eq:mex-plm}
\end{aligned}
\]
altogether there are complex function pairs (not counting negative , which are complex conjugates of the others) that need to be calculated from the density. since in practice the density field is made of discrete points, they must be sorted by in order for the above integrals to be evaluated in one pass. the standard form for scf is : altogether there are complex coefficients (not counting negative ) that need to be calculated from the density. a typical choice is , for which there would be 308 coefficients. the radial basis functions and coefficients for scf are discussed in the next section. spherical harmonics are used in both cases to expand the angular part, but alternatives exist, such as spherical wavelets (e.g. ). mex has two sums (one infinite) while scf has three sums (two infinite).
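the cumulative structure of the mex integrals is easiest to see in the monopole ("spherical shells") limit, where the potential at a particle is minus the enclosed mass over its radius, minus the sum of m_j / r_j over all exterior particles. the following sketch is our own illustration (not taken from etics): one sort by radius, followed by one forward and one reverse cumulative pass.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Body { double r, m; };   // radius from the expansion centre, mass

// monopole ("spherical shells") potential (G = 1, all radii assumed > 0):
// Phi(r_i) = -M(<r_i)/r_i - sum over exterior particles of m_j/r_j.
std::vector<double> monopole_potential(std::vector<Body> b) {
    std::sort(b.begin(), b.end(),
              [](const Body& a, const Body& c) { return a.r < c.r; });
    const std::size_t n = b.size();
    std::vector<double> phi(n);

    double m_inside = 0.0;                // forward pass: enclosed mass (exclusive)
    for (std::size_t i = 0; i < n; ++i) {
        phi[i] = -m_inside / b[i].r;
        m_inside += b[i].m;
    }
    double outer = 0.0;                   // reverse pass: sum of m_j/r_j outside
    for (std::size_t i = n; i-- > 0;) {
        phi[i] -= outer;
        outer += b[i].m / b[i].r;
    }
    return phi;                           // ordered by increasing radius
}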
in practice, the radial and angular infinite sums must be cut off at and , respectively .the finite sum could in principle also be truncated to discard azimuthal information .simply equating the expressions gives the relation between the two methods : =\sum_{n=0}^{\infty}a_{nlm}\phi_{nl}(r),\ ] ] where is the -pole . in casethe system is azimuthally symmetric , for all .also , the same azimuthal information is carried in positive and negative terms , and they are related to each other by complex conjugation . if one decompose the density to a spherical average and the non - spherical deviation , then it is easy to show that depends only on the spherical average , while all other term depend only on the deviation . in a spherically symmetric system only nonzero , and setting yields an accurate result .while the choice of depends only on the deviation of the system from spherical symmetry , the choice of in scf depends on how well the system is described by the the zeroth radial basis function , and is usually determined by trial and error ( see section [ sec : inifinite - particle ] ) .it is interesting to note a nontrivial mathematical difference between the two methods .one can show that the laplacian of equation ( [ eq : mex - phi ] ) is zero when substituting the appropriate expressions for and from the equations ( [ eq : mex - qlm ] ) and ( [ eq : mex - plm ] ) ; the proof is mathematically cumbersome and will not be brought here .this is surprising , since according to the poisson equation the result should be proportional to the density .one can not appeal to series truncation to resolve this apparent contradiction ; indeed each term in the formally non - truncated infinite series yields a zero density , despite representing the multipoles as continuous functions .the solution is that the potential at point has contributions from all internal ( i.e. at ) particles ( represented by ) and all external particles ( represented by ) , but no information about the density at itself .this is the case also when the potential is constructed by a direct - summation of all gravitational point sources , so one may say mex is similar to direct methods in this sense . in scf , by construction , taking the laplacian of equation ( [ eq : scf - phi ] ) leads right back to the density field ( equation [ eq : poisson ] ) .one can thus use the coefficients to represent a smoothened field .one can also use mex for this purpose , if the derivatives of are calculated on a grid or with a spline .a key difference between mex and scf is the freedom of choice of _ radial _ basis .there are in fact two function sets : the radial densities and the radial potentials ; they are related via the poisson equation ( in this case only contains derivatives with respect to ) .the choice of basis is not unique , and the basis functions themselves need not represent physical densities and potentials ( i.e. could be negative ) .however it is convenient to take the zeroth term ( ) to represent some physical system , and to construct the rest of the set by some orthogonalization method , such as the gram schmidt process .the idea of was to use a model as the zeroth term and construct the next orders using the gegenbauer ( ultraspherical ) polynomials and spherical harmonics ( cf . 
who developed a virtually identical method for finite stellar systems using spherical bessel functions for the radial part ) .ho92 constructed a new radial basis ( also using gegenbauer polynomials ) which zeroth order was a model ; this is the basis set we adopt in _ etics_. they argued that this basis was more well suited to study galaxies .more basis sets followed . used the idea of , that the basis does not have to be biorthogonal , to construct as set which zeroth order was oblate . gave a radial basis set for the more general -model ( of which both plummer and hernquist are special cases ) and introduced a basis for thick disks in cylindrical coordinates . , describe numerical derivation of the radial basis set so that the lowest order matches any initial spherically - symmetric model , so called `` designer basis functions '' . introduced an analytical set which zeroth order is the perfect sphere of .there are several levels of task parallelism available when writing computer code . at one level, tasks are performed in parallel on different computational units ( such as cpus ) but only one copy of the data exists , which is accessed by all tasks ; this is called a _ shared memory _ scheme .the tasks are called `` threads '' , and they are generally managed within one `` process '' of the program . a higher level of parallelism is called _ distributed memory _scheme , where tasks are performed on different units ( often called `` nodes '' ) , but each unit has access only to its own memory ; thus data must be copied and passed . in this casethe parallel tasks are different processes , and cooperation between them is facilitated by a message passing interface ( mpi ) .the parallel programming model is different between shared and distributed memory ; the former is considered easier since threads can faster and more easily cooperate . a high - performance supercomputer will generally enable parallelism on both levels : these machines are made of multiple nodes , each of which has its own memory and multiple computational units .graphics processing units ( gpus ) are powerful and cost - effective devices for high performance parallel computing .they are used to accelerate many scientific calculations , especially in astrophysics , such as dynamics of dense star clusters and galaxy centers ( ; ; ; ; see review by ) .the gpu contains its own memory and many computational units , thus it is a shared memory device .scf force calculation is particularly easy to parallelize , since the contribution of each particle to the coefficients is completely independent of all other particles .particle data can be split to smaller chunks ( each could be on a different node ) , from each chunk partial -s are calculated . then the partial coefficients summed up and the result communicated to all the nodes .this was done by ( * ? ? 
?* hereafter h95 ) , whose code used the mpi call ` mpi_allreduce ` to combine the partial coefficients .this parallelization scheme , however , is not suitable for gpus , as discussed in section [ sec : implementation - scf ] .mex force calculation is harder to parallelize since the contribution of each particle depends on its position in a sorted list ( by radius ) .however , in a shared memory scheme this too could be achieved relatively easily as explained in the following section .the current implementation of the mex method relies on _ thrust _ , a c++ template library of parallel algorithms which is part of the cuda framework .it makes parallel programming on a shared memory device ( either a gpu or a multicore cpu ) transparent , meaning that the task is performed in parallel with a single subroutine call , and the device setup and even choice of algorithm is performed by the library . _thrust _ provides a sorting routine that selects one of two algorithms depending on input type . in the current version of mex and using version 1.6 of _ thrust _ , a general merge sort algorithm is used .a flowchart of the entire mex routine is shown in fig .[ fig : flowchart - mex ] .the flow is controlled by the cpu , and boxes with double - struck vertical edges indicate a gpu - accelerated operation .the blue double - struck boxes represent _ thrust _ calls , while the black ones are regular cuda kernel calls .when a gpu operation is in progress , the cpu flow is paused .[ fig : memory - mex ] shows the four main memory structures of the program and how the _ thrust _ subroutines and kernels in the program operate on them .the particle array contains all particle coordinates and also the distance square from the center , which needs to reside in this structure for the sorting operation ( in practice the particle array contains additional data such as i d and velocity , but this is not used by the mex routine ) ; the cache structure contains functions of particle coordinates which are needed to calculate the multipoles .kernel1 , which is executed once , reads the coordinates , calculates those functions and fills the cache structure .kernel2 calculates the spherical harmonics at the current -level and from that the contribution of the particle to and , which are saved in global memory .when this kernel returns , the _ thrust _ subroutines are dispatched to perform the cumulative sum .the `` scan '' ( forward cumulative sum ) and `` r. scan '' ( reverse scan ) are both in fact calls to the ` exclusive_scan ` subroutine , but to perform the reverse scan , we wrap with a special _ thrust _ structure called ` reverse_iterator ` . not shown in the flowchart , the two scan subroutines have to be called times at each -level since they work on one value at a time .kernel3 has both cache and compute operations : it calculates the partial forces in spherical coordinates ( i.e. the -order correction to the force ) and/or potentials by evaluating all the spherical harmonics again ( and their derivatives with respect to spherical coordinates ) . later it advances and to the next -level ( except at the last iteration ) . 
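the scan and reverse-scan steps can be reproduced with a few lines of thrust; the snippet below is a simplified stand-in for the kernels just described (the array names are ours, not those of the etics source), showing a forward exclusive scan of per-particle contributions and the same call applied through reverse iterators.

#include <thrust/device_vector.h>
#include <thrust/iterator/reverse_iterator.h>
#include <thrust/scan.h>

// q[i], p[i]: contribution of particle i (sorted by radius) to one (l,m) multipole;
// after the scans, q_in[i] holds the sum over all interior particles and
// p_out[i] the sum over all exterior particles (both exclusive of i itself).
void cumulative_multipole(const thrust::device_vector<double>& q,
                          const thrust::device_vector<double>& p,
                          thrust::device_vector<double>& q_in,
                          thrust::device_vector<double>& p_out)
{
    // forward exclusive scan
    thrust::exclusive_scan(q.begin(), q.end(), q_in.begin());

    // reverse exclusive scan: wrap input and output with reverse iterators
    thrust::exclusive_scan(thrust::make_reverse_iterator(p.end()),
                           thrust::make_reverse_iterator(p.begin()),
                           thrust::make_reverse_iterator(p_out.end()));
}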
finally , the last kernel operates on the force structure and transforms it to cartesian coordinates .[ fig : pie - mex ] shows the relative time it takes to do the internal operation .we note that the potential could be calculated at the same time as the force ( in kernel3 ) and stored at another memory structure ( not shown in fig .[ fig : memory - mex ] ) but is skipped if only forces are needed . alternatively, only the potential could be calculated ( this is faster since the derivatives of the special functions are not calculated ) .the choice between calculating force , potential or both is done with c++ templates .we first briefly explain the serial algorithm used by ho92 .the force ( and potential ) calculation had two parts : ( 1 ) calculation of all the -s ( the plural suffix ` -s ' to emphasize that there are hundreds of coefficients in this 3d structure ) and ( 2 ) calculation of all the forces using the coefficients . in both parts ,the particle loop ( the -loop ) was the external one , inside of which there are again two main steps . in step ( 1a )all necessary special functions were calculated using recursion relations .step ( 2a ) was identical but additionally , the derivatives of those functions were calculated . in step ( 1b )there was a nested loop ( -- structure ) in which a particle s contribution to every calculated and added serially . in step ( 2b ) there was also such a loop , which used all the -s to calculate the force on each particle . in the parallel algorithm used by h95, another part was added between the two parts mentioned above : communicating all partial -s from the various mpi processes , adding them up and distributing the results . in practiceit was achieved using just one command , ` mpi_allreduce ` .there are two main reasons why this algorithm could not be used effectively on a gpu , both are related to the difference between how the gpu and cpu access and cache memory . the first difficulty is performing the sum .the partial sums from the different parallel threads could in principle be stored on a part of the gpu memory called _ global memory _ , and then summed in parallel .however a modern gpu can execute tens of thousands of threads per kernel ( note that the concept of a thread in cuda is abstract , and the number of threads by far exceed the number of thread processors on the gpu chip ) , and every partial is kilobyte in size ( depending on and ) .thus , writing and summing the partial coefficients would require extensive access to global memory , which is slow compared to the actual calculation part .the second difficulty is that if one thread uses too much memory , for example to store all necessary legendre and gegenbauer polynomials as well as complex exponent ( as is done in the ho92 code ) , this may lead to an issue called _ register spilling _ , where instead of using the very fast register memory , the thread will store the values on the slow global memory , which again we wish to avoid on performance grounds . to tackle those issues we utilized another type of gpu memory called __ shared memory__. 
this memory is `` on chip '' ( on the multiprocessor circuit rather than elsewhere on the video card ) and has lower latency than global memory .threads in a cuda program are grouped into blocks , threads in the same block share this fast memory ( hence the name ) .it is also much less abundant than global memory .the nvidia tesla k20 gpus have just 64 kilobytes of shared memory per block , while they have 5 gigabytes of global memory .in order to use shared memory to calculate the coefficients , each thread would serially add contributions from particles to the partial -s on shared memory ; then they would be summed up in parallel in each block .however , there are usually hundreds of different -s , as well as tens or hundreds of threads per block ( depending on hardware ; which is required for efficient loading of the gpu ) ; there is not enough shared memory for that ( by far ) . to solve this, we changed the order of the loops : the external loop is the -loop , then comes the -loop . for each pair, a cuda kernel is executed where the -loop is performed in parallel on different threads , inside of which the -loop is done .now each threads has to deal with far fewer -s ( no more than ) , for which there is usually enough shared memory .but for the scf routine .] a flowchart of the entire scf routine is shown in fig .[ fig : flowchart - scf ] .the flow is controlled by the cpu , and boxes with double - struck vertical edges indicate a cuda kernel call .when a gpu operation is in progress , the cpu flow is paused .[ fig : memory - scf ] shows the four main memory structures of the program and how the five kernels in the program operate on them .the particle array contains all particle coordinates ( in practice it contains additional data such as i d and velocity , but this is not used by the scf routine ) ; the cache structure contains functions of particle coordinates which are needed to calculate the basis functions .kernel1 , which is executed once , reads the coordinates , calculates those functions and fills the cache structure .kernel2 only operates on the cache structure , it has just one function which is to advance by one level ; thus it needs to be executed at the beginning of each iteration of the -loop .as shown in the flowchart , it is skipped for because kernel1 calculates and caches .kernel3 has both cache and compute operations : it calculates the current using recursion relations from the cached and and then updates the cache .later it calculates the spherical harmonics and from that the contribution of the particle to the all in the current ( ,)-level , which are saved in shared memory .when all threads in the block have finished calculating contributions of the particles assigned to them , they are synchronized and a parallel reduction is performed . since threads from different blocks can not share memory , the data from each block must be transfered to the host machine s memory and the cpu finishes the summation process . for the force calculation , just a reading the -s is required .the gpu has yet another type of memory which is ideal for storing of coefficient or constant parameters .it is fittingly called `` constant memory '' , and is as fast as shared memory when every thread in a warp accesses the same memory element .it is also very limited ( usually to 64 kilobytes per device ) , but the could still fit there nicely .once calculation of all the coefficients is complete , it is transferred back to the gpu constant memory to be used to calculate the forces . 
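the loop reordering and the use of shared memory can be illustrated with a stripped-down kernel. the following schematic is ours rather than the actual etics kernel (a single coefficient, real arithmetic, and a placeholder basis function), but it shows the pattern: each thread accumulates contributions from its particles in a register, the per-thread sums are combined in shared memory by a block-wide tree reduction, and one partial sum per block is written to global memory to be summed elsewhere.

// placeholder only; NOT the Hernquist-Ostriker basis used by ETICS
__device__ double basis_value(double x, double y, double z)
{
    const double r2 = x * x + y * y + z * z;
    return 1.0 / (1.0 + r2);
}

// schematic: contribution of all particles to a single expansion coefficient.
// launch as partial_coefficient<<<blocks, threads, threads * sizeof(double)>>>(...),
// with threads a power of two; block_sums is then summed on the host or in a
// second kernel.
__global__ void partial_coefficient(const double* x, const double* y,
                                    const double* z, const double* mass,
                                    int n_particles, double* block_sums)
{
    extern __shared__ double shared[];          // one slot per thread
    double my_sum = 0.0;

    // grid-stride loop: each thread handles many particles
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n_particles;
         i += blockDim.x * gridDim.x)
        my_sum += mass[i] * basis_value(x[i], y[i], z[i]);

    shared[threadIdx.x] = my_sum;
    __syncthreads();

    // tree reduction within the block
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            shared[threadIdx.x] += shared[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        block_sums[blockIdx.x] = shared[0];
}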
since only reading the coefficient is required , in kernel4 which calculates the forces and/or potentials by evaluating all the basis functions again ( and their derivatives with respect to spherical coordinates ) , the -loop is the external one . to avoid register spillingwe keep the internal loop structure as -- , and thus we only need to recalculate the complex exponents , which is relatively cheap .finally , the last kernel operates on the force structure and transforms it to cartesian coordinates . fig .[ fig : pie - scf ] shows the relative time it takes to do the internal operation .we note that the potential could be calculated at the same time as the force ( in kernel4 ) and stored at another memory structure ( not shown in fig .[ fig : memory - scf ] ) but is skipped if only forces are needed . alternatively , only the potential could be calculated ( this is faster since the derivatives of the special functions are not calculated ) . the choice between calculating force , potential or both is done with c++ templates .we tested the performance of _ etics _ ( both mex and scf ) on a single nvidia tesla k20 gpus on the laohu supercomputer at the naoc in beijing . for comparison, we also tested the fortran cpu scf code by lars hernquist on the accre cluster at vanderbilt university in nashville , tennessee ( we used a node with intel xeon e5520 cpu ) .if the initial conditions are not sorted by in advance , the first mex force calculation is more costly than all the following , since the sorting of an already nearly - sorted particle list is faster .thus , all measurements of the mex code are done after the system is evolved one very short leapfrog time step . fig .[ fig : scaling ] shows the time it takes to do one full force calculation as a function of , and .each point represents the mean time of 10 different calculations .the dispersion is generally very low , with the exception of _ etics_-mex with ; only for which we show error bars .note that the timing only depends on the number of particles ( and expansion cutoffs ) and not on their spatial distribution .the cpu and gpu scf codes are both theoretically . 
at low gpu is not fully loaded , and _ etics _ performance seems superlinear with ._ etics_-mex is theoretically , but this again is an asymptotic behavior which is not observed .the lack of good gpu load for is much more evident than the nature of the algorithm .the gpu global memory was the limiting factor in how many particles could be used with both methods .the dotted lines show the performance of _ etics _ using single - precision instead of double .the speed increase is 61% for scf and 65% for mex , but there is a price to pay in accuracy as noted in section [ sec : single ] .the speedup factor could be very different for different gpus .all codes should scale quadratically with , but as the middle panel of fig .[ fig : scaling ] shows , this behavior is not so clear for _ etics_-mex .this is due to the extensive memory access this code requires , which rivals the calculation time .memory latency on gpus is not easy to predict ; due to caching and the way memory is copied in blocks , and the latency depends not only on the amount of memory accessed but also on the memory access pattern .scf codes theoretically scale linearly with .a strange behavior of the cpu code is noted : it seems that the time increases with in a `` zigzag '' fashion ( the measurement error of the times is much smaller than this effect , and it is reproducible ) .this is paradoxical : it takes a shorter time to calculate with than with , even though more operations are required .it is not simple to understand why this is , but it seems that the compiler performs some optimization on the first -loop ( coefficient computation ) that only help when is odd but not when it is even .the comparison between _ etics_-gpu and hernquist s code is not exactly fair since they use different types of hardware . specifically for hardware we tested , _etics_-gpu outperforms hernquist s code by a factor of about 20 ( which depends little on all parameters ) .however , hernquist s code can utilize a multicore cpu ( using mpi ) .the xeon cpu we used has 4 cores , and two such cpus are mounted on a single accre node .we could use the fortran code in mpi mode on all 8 effective cores with almost no overhead , and the calculation is accelerated by a factor of 8 .also , hernquist s code calculate the jerk ( force derivative ) , which _etics_-gpu does not ; this takes percent of the total time . and ) ; the total time is 0.15 sec on nvidia tesla k20 gpu .the results may differ significantly on different hardware and if single - precision is used instead .the first operation is sort , followed counter - clockwise by initialization of the cache arrays , the -loop where each iteration is divided to ( a ) summand calculation , ( b ) cumulative sum and ( c ) partial force calculation .the final operation is coordinate transformation from spherical to cartesian ] for a full scf force calculation with _ etics _( particles , and ) ; the total time is 0.16 sec on nvidia tesla k20 gpu .the first operation is initialization , followed counter - clockwise by the -loop ( in which the -loop is nested ) .the partial force calculation is a single cuda kernel , inside of which all the loops are performed . ][ fig : pie - mex ] and [ fig : pie - scf ] show the fraction of time it takes to perform the internal operations for the force calculation for _ etics_-mex and -scf , respectively , both use , and for scf , . 
for mex, operations inside each iteration of the -loop are shown in different shades (also denoted by letters corresponding to stages 3a, 3b and 3c as explained in section [ sec : implementation - mex ]). the most costly operations are the ones we entrust to _ thrust _, namely the sorting and cumulative sum. in fig. [ fig : pie - scf ] the internal structure of each -iteration is not shown (since there are too many internal operations, including the -loop). the force calculation is executed as one operation (a single cuda kernel call), and includes the -loop nested inside it (unlike mex where only a partial force was calculated at every -iteration, step 3c). two separate questions come up when discussing the accuracy of expansion methods: how well the expansion approximates the -body force (i.e. direct-summation), and how well it approximates the smooth force in the limit of infinite particles (which we will refer to as the ``real'' force in the following discussion). both questions depend on , and (for scf) . a related question is how well the -body force approximates the real force, as a function of . all these questions depend not only on the expansion cutoff and , but on the stellar distribution as well (e.g. global shape, central concentration, fractality, etc.); this will not be fully explored in this work. there are two types of error when considering the expansion methods versus the real force, analogous to systematic and random errors. the first, systematic-like error comes from the expansion cutoff; this is called the _bias_. for example, a system which is highly flattened could not be described by keeping just the quadrupole moment, so both mex and scf cut off at would exhibit this type of error, regardless of (see ; for discussion about bias due to softening). the second, random-like error comes from the finite number of particles and their coarse-grained distribution; it is the equivalent of -body noise (also referred to as particle noise or sampling noise). ho92 attempted to estimate the accuracy of scf by showing convergence of the coefficient amplitudes with increasing for the density profiles of some well known stellar models. they showed that decayed exponentially or like a power law with , depending on the model. this analysis was not satisfactory because it applied to the limit of infinite , thus ignoring the random-like error. furthermore, showing convergence of the coefficients does not give information about the force error. the bias and the random error are not easy to distinguish. the bias could be calculated, in principle, only if the true mass density is known, which is not generally the case; however, it is still useful to look at some particular examples where it is known. to test the accuracy of the expansion techniques, we used two simple models for the mass density. both our models are ferrers ellipsoids (often called ferrers bars) with index : ` model1 ` is a mildly oblate spheroid with axis ratio of 10:10:9, ` model2 ` is triaxial with axis ratio of 3:2:1. ferrers ellipsoids are often used in stellar dynamics, especially in the modeling of bars (e.g. ). they have a very simple mass density:
\[
\rho(\mu)=\begin{cases}
\rho_{c}\left[1-(\mu/a)^{2}\right]^{n} & \mu\leq a\\
0 & \mu>a
\end{cases}
\]
where , and are the axes, is the central density, is the index and is the ellipsoidal radius, defined by
\[
\mu^{2}=a^{2}\left(\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}\right).
\]
the potential due to this family of models is simply a polynomial in if is an integer.
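for completeness, the profile can be evaluated directly from the definitions above; the small helper below is our transcription of the standard ferrers form (the symbol names and the index value are illustrative, since the inline symbols are lost from the extracted text).

#include <cmath>

// Ferrers ellipsoid (our transcription of the standard form):
// rho(mu) = rho_c * (1 - (mu/a)^2)^n for mu <= a, 0 otherwise, with the
// ellipsoidal radius mu^2 = a^2 * (x^2/a^2 + y^2/b^2 + z^2/c^2).
struct Ferrers {
    double a, b, c;   // semi-axes
    double rho_c;     // central density
    int n;            // Ferrers index

    double ellipsoidal_radius(double x, double y, double z) const {
        return a * std::sqrt(x * x / (a * a) + y * y / (b * b) + z * z / (c * c));
    }
    double density(double x, double y, double z) const {
        const double mu = ellipsoidal_radius(x, y, z);
        if (mu > a) return 0.0;
        return rho_c * std::pow(1.0 - (mu / a) * (mu / a), n);
    }
};

// example: a 'model2'-like triaxial bar with axis ratio 3:2:1
// Ferrers bar{3.0, 2.0, 1.0, 1.0, 2};   // index value illustrative only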
the coefficients could be calculated numerically ( also analytically for some cases ) by solving a 1d integral ( * ? ? ? * chapter 2.5 ) . for both our models we used the mathematical software _sage _ to calculate the coefficients to better than .the force vector components are trivially derived from the potential polynomial ; this is the `` real '' force .we created many realization of these two models , ranging from just 100 particles to .the goal is to compare for each realization the force calculated using mex , scf and direct - summation ( no softening ) , with the real force .all calculations performed using double - precision , and the direct - summation force is not softened . for each realizationwe get a distribution of values of the relative force error , where is the particle s index .it is not practical to show to full distribution for all cases , so in figs .[ fig : force - error1 ] and [ fig : force - error2 ] we show the mean , and the full distribution for only selected cases . the left panel of fig .[ fig : force - error1 ] shows the mean relative force error in ` model1 ` for direct - summation and mex with even between and ; odd terms are in principle zero if the expansion center coincides with the center of mass , and in practice very small . for this model, is decreasing with for all cases but ( monopole only ) .the smallest error is for ( monopole and quadrupole only ) .unintuitively , adding correction terms _ increases _ the error ( for constant ) , this is because the model s deviation from sphericity is so mild , that the quadruple describes it well enough ; the following terms just capture some of the -body noise in the realization and make more harm than good . in the right panelwe show the full log - distribution for selected cases .the histograms for are made by stacking of realizations , so there are values of in all histograms . in all cases the distributions are close to log - normal; the logarithmic horizontal axis hides the fact that the distributions on the right are much wider in terms of standard deviation due to a very long and fat tail when viewed in linear space .note that while the number of particles increased by 1000 , in all cases the error distribution shifted down by just a factor of .[ fig : force - error2 ] is the same but for the triaxial ` model2 ` . while in the -body casesthe distributions are much the same , mex shows a different behavior . the most prominent feature is the bump on right side of the , error distribution , which demonstrates the issue of bias .most of the particles which make up this bump are located in the lobes of the ellipsoid , where many angular terms are required . when is increased to ,this bump disappears .it also is not present in the , case , probably because it is overwhelmed by the random error .this bump causes the mean error to saturate with particle number , as the left panel shows .increasing will shift the bulk of the bell curve to the left ( zero ) , but will not quench the bump . at much larger ,` model1 ` will show the same behavior as the random error becomes smaller than the bias , and the high- cases would outperform .we repeat this exercise for scf , which has an additional source of bias due to the radial expansion cutoff . fig .[ fig : force - scf - error2 ] illustrates that point by showing the relative force error distribution in ` model2 ` for scf compared to mex with the same number of particles ( ) and same angular cutoff ( ) . 
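the per-particle relative force error used in these comparisons can be computed with a short helper; the definition below, the norm of the force difference divided by the norm of the real force, is one standard choice and is our reading of the expression that is lost from the extracted text.

#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

// delta_i = |F_approx,i - F_real,i| / |F_real,i|; assumes the real force is
// nonzero for every particle. treat this definition as illustrative.
std::vector<double> relative_force_error(const std::vector<Vec3>& f_approx,
                                         const std::vector<Vec3>& f_real)
{
    std::vector<double> delta(f_real.size());
    for (std::size_t i = 0; i < f_real.size(); ++i) {
        double diff2 = 0.0, ref2 = 0.0;
        for (int k = 0; k < 3; ++k) {
            const double d = f_approx[i][k] - f_real[i][k];
            diff2 += d * d;
            ref2  += f_real[i][k] * f_real[i][k];
        }
        delta[i] = std::sqrt(diff2 / ref2);
    }
    return delta;
}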
with increasing , the scf error distribution approaches that of mex , demonstrating the point made in section [ sec:3d ] that mex is equivalent to scf with .it must be noted that the basis set we programmed in _ etics _ is not at all suitable for ferrers models ( which are finite and have a flat core ) , and the apparently slow convergence should not disparage one from using scf , even if it is not known in advance what basis to choose .the overlap between the relative force error distributions of scf at and mex is 77% . a more intelligent choice of basis function is discussed by , who used a similar methodology to choose the best basis set for triaxial -models with from a family of basis sets similar to the ho92 .the results presented in this section suggest that there is some optimal expansion cutoff , which is different for different models and depends on the number of particles .this is analogous to optimal softening in direct - summation force calculations .if not enough terms are used , there is a large bias ; if too many terms are used , the particle noise dominates . addressed this issue by calculating the variance of each scf coefficient among several realizations of the same triaxial dehnen model , found that for particles , angular terms beyond are dominated by noise ( and that only the first few , terms at that l - level are reliable ) . the force error discussed above is not directly related to energy diffusion or relaxation , which are reduced due to the smoothing , but not absent .the mechanism for energy ( and angular momentum ) diffusion in both expansion methods is temporal fluctuation of the multipoles or coefficients ( due to the particle noise ) .this is somewhat analogous to two - body relaxation in that the potential felt by every particle fluctuates ( although in this case there is no spatial graininess ) .* in prep . )examined energy diffusion in a plummer sphere with particles using scf and direct -body codes , and found that scf demonstrated a diffusion rate only several times lower , which was close to the rate in a direct technique using near - optimal softening for this .further reduction was achieved by discarding of expansion terms which are nominally zero in any triaxial system centered around the expansion center .finally , vasiliev used temporal softening ( ho92 ) , where the coefficients ( and thus the potential ) are updated in longer intervals than the dynamical time step ; this procedure however introduces a global energy errors unless some measures are taken to amend this . .the green histogram is also shown in the right panel of fig .[ fig : force - error2 ] and represent a mex expansion with .the other histograms represent scf expansions with and varying values as shown . with increasing , the scf error distribution approaches that of mex with the same .in this case , the model differs greatly from the zeroth order function of the basis set , showing relatively slow convergence . ]due to their original intended use , gpus are not optimized for double - precision arithmetic ( indeed early gpus completely lacked a double - precision floating - point type ) . in cards that do support double - precision , arithmetic operationscould still be significantly slower than for single . 
as noted before , in our test we measured a 6065% speed increase when using single - precision .the nvidia tesla k20 gpus we used have enhanced double - precision performance with respect to other gpus , for which using double - precision may be significantly slower .those devices are somewhat specialized for scientific use and are thus more expensive ( albeit in many applications still superior to parallel cpu architectures in terms of price / performance ratio due to the low energy consumption ) .cpus usually take the same time to perform an arithmetic operation in either single- or double - precision , but a program s general performance could be faster in single - precision due to smaller memory load . for the hernquist - scf cpu code , we measured a 6% improvement in speed .using single - precision however inevitably reduces the accuracy of the calculated force ; here we examine how bad this _ performance - accuracy trade - off _ is . fig .[ fig : single ] show the relative force error distributions of single - precision calculations , compared to double .the relative force error on particle is now defined as : we testes an realization of a hernquist sphere with characteristic scale of one unit .the top panel shows two scf force calculations : the green histogram ( on the left ) is a low order expansion up to , retaining 36 coefficients ; the red histogram is an expansion up to , retaining 308 coefficients .the bottom panel similarly shows two mex expansions . in both cases ,the higher order expansion has relatively large errors . while it is still smaller than the error with respect to the `` real '' force discussed in section [ sec : inifinite - particle ] , its nature is numeric and it could hinder energy conservation .the relatively large error is not remedied by usual methods to improve accuracy of floating point arithmetic such as kahan summation algorithm , because the error does not come from accumulation of round off errors . instead , the accuracy bottle neck is the calculation of the spherical harmonics and/or the gegenbauer polynomials .particles for which those special functions are calculated with large numerical error will have a large force error , but additionally they contribute erroneously to _ all _ the coefficient or multipoles , thus causing some error in the force calculation of all other particles as well .there are two groups of particles with large relative error in this implementation : particles that are very far away from the center , and particles which happen to lie very close to the -axis .the former group is not so problematic since the absolute force is very small as well as their contribution to the coefficients or multipoles .the latter group causes large error because the recursion relation used to calculate the associated legendre polynomials : is not upwardly stable because of the factor , which diverges when the polar angle is very small or very close to ( although the polynomials themselves approach zero in these limits ) .the distributions shown in fig .[ fig : single ] may vary significantly depending on the model .for example , ferrers ellipsoids are finite and flat at the center , thus they do not contain the problematic particles described above and have much smaller error in single - precision .a hernquist sphere is more representative of the general case in galaxies , being infinite and relatively centrally concentrated .one could conceivably improve the accuracy at single - precision in several ways . 
in the testdescribed above everything was calculated in single - precision , apart from some constant coefficients that were only calculated once , in double - precision , and then cast to single .it may be possible to identify the most sensitive parts of the force calculation and use double - precision just for those , or use pseudo - double - precision ( as in ) for part of or the entire force calculation routine .another possibility is to keep using single - precision for everything but prescribe special treatment to those orbits close to the -axis . ) .the model used is a hernquist sphere with .the top and bottom panels shows scf and mex force calculations , respectively . in both casesthe left ( green ) histogram is a lower - order expansion as indicated in the legend . ]expansion techniques , on their own , are best geared to simulate systems with a dominant single center , where it is important to minimize the effects of two - body relaxation , and where the system potential does not change radically ( quickly ) with time .an ideal class would be long - term secular evolution in a near - equilibrium galaxy .both methods presented in this work can be used to quickly calculate the gravitational potential and force on each particle in a many - body system , while discarding small scale structure .mex comes from a taylor - like expansion of the green s function in the formal solution of the poisson equation , while scf is a fourier - like expansion of the density .both methods are important tools for collisionless dynamics and has been used extensively in astrophysics as discussed in the following sections .they are comparable in terms of both accuracy and performance . in both methods , there are free parameters to be set : the choice of length unit ( or model scaling ) affects the accuracy of scf expansion because the zeroth order of the radial basis functions corresponds to a model of a particular scale .for example , the basis set offered by _ etics_ corresponds to a model with scale length .the main difference for the end - user is that scf smooths the radial direction as well .this could be an advantage when is very small , since scf will still provide a rather smooth potential , although it might not represent the real potential well at all due to random error . in mex, particles are not completely unaware of each other , and every time two particles cross each other s shell , there is a discontinuity in the force , which may lead to large energy error when is small .this shell crossing occurs when two particles change places in the -sorted list , and the particles need not be close to each other at all .both methods have some problems close to the center . in scf ,the limitation comes from both radial and angular expansions .the radial expansion cutoff induces a bias if the central density profile does not match the zeroth basis function , and a very non - spherical model would cause force bias at the center as well as the lobes .the latter is also a problem for mex , which has two additional problems : the discrete nature and inevitably small number of particles when one gets arbitrarily close to the center , as well as the numerical error ( and/or small step size required ) due to having to calculate ( the scf basis function we use are completely regular at the center ) .it is clear from the literature that scf has been by far more popular . 
but despite the above, we do not think that most authors intentionally avoided mex; rather, scf was better publicized and became the standard. mex is rarely used in its full form, but more frequently in the spherically symmetric version, sometimes called the ``spherical shells'' method; in this case just the monopole term is kept ( ). for example, it is used in the poisson solvers of the monte carlo method. this hints that it might be easy to extend codes like mocca to non-spherical cases using our version of mex. this monopole approximation has also been used to study dark and stellar halo growth. the extension of the spherical case using spherical harmonics exists in several variations, divided roughly into two classes: grid and gridless codes. the mex version presented in this work is gridless and follows from and . these authors used cartesian instead of spherical coordinates, and softened the potential at the center. this softening, albeit similar mathematically, is not equivalent to particle-particle softening in direct -body simulations and was just used to prevent divergence at the center. the first mex code, however, is by , who divided the simulation volume into thick shells, and the force on a particle was calculated by summing the multipoles of all shells except its own (an own-shell correction was added). similarly, used a mex code with to explore galaxy correlation functions; in their version each shell had six particles, and a softened newtonian interaction was used within a shell. as noted in the introduction, used a variation with axial symmetry (up to but with no azimuthal terms, namely ), with a grid in both and . in a follow-up work the method was extended to 3d geometry. finally, used a grid in only, with logarithmic spacing. he argues that softening sacrifices the higher resolution near the center (which is one of the primary advantages of the method) and that a radial grid smooths the potential and prevents shell crossing. recently, presented a similar potential solver with a spline instead of a grid. we note that a virtually identical mathematical treatment to the mex method has been applied to solve the fokker-planck equation under the local approximation (neglecting diffusion in position). the collisional terms of the fokker-planck equation can be written by means of the rosenbluth potentials, which are integrals in velocity space very similar in form to equation ([ eq : pot-3d ]). assumed azimuthal symmetry and wrote the rosenbluth potentials using the legendre polynomials up to in a way exactly analogous to our equation ([ eq : mex - phi ]). this treatment was expanded to by . as noted in the previous section, scf gained much more popularity than mex. the scf formalism has had wide use on galaxy-scale problems. it has been used to model the effect of black hole growth or adiabatic contraction on the structure (density profile) of the dark matter halo ( e.g. * ? ? ? * ). scf is also an appropriate tool to model the growth of the stellar and dark matter halos ( e.g. * ? ? ? * ; * ? ? ? * ) as well as the mass evolution of infalling satellite galaxies ( e.g. * ? ? ? * ; * ? ? ? * ). one of the clearest uses of the scf technique is when the stability of the orbit matters, such as in the study of chaos in galactic potentials ( e.g. * ? ? ? * ; * ? ? ? * ), and in the exchange of energy and angular momentum by mean resonances ( e.g. * ? ? ? * ; * ? ? ? * ). compare a number of methods and show that scf is superior for stability work.
the initial motivation for _ this _ work was to follow up on , who studied supermassive black hole binaries using a restricted technique. in their method, the stellar potential was held constant while the black holes were treated separately as collisional particles; it was thus not self-consistent in terms of the potential. this class of problems, where there is a small subset of particles that need to be treated collisionally, has already been attempted using an extension of the expansion technique which hybridizes scf and direct aarseth-type gravitational force calculation; in these extensions, either the black holes are the only collisional particles ( e.g. * ? ? ? * ; * ? ? ? * ), or all centrophilic particles are treated collisionally. mex has not been applied to this particular problem to our knowledge, although it is as well suited as scf. our scf implementation on gpu outperformed the serial hernquist cpu version by a factor of (for double-precision), but this number depends on the particular gpu and cpu hardware compared. the cpu code is definitely competitive on multi-core cpus. intel recently introduced the many integrated core architecture (known as intel mic): shared memory boards with the equivalent of tens of cpus. in principle, the fortran scf code for cpus could be adapted for this architecture with little modification, and it will most likely outperform the gpu version. on the other hand, next generation gpus (such as nvidia's maxwell architecture) would also deliver performance-improving features, and it is not clear which one would win. the goal of this project is to ultimately enable simulations of , and to perform them fast enough so that many could be performed, exploring a large parameter space rather than making a few such large- simulations. to do that, the code will be adapted to multi-gpu and multi-node machines using mpi. as noted in section [ sec : implementation - parallel ], this is easy for scf but not so much for mex. simultaneously we will attempt to improve the per-gpu performance. we spent a lot of time trying to optimize this first version of _ etics _, but by no means do we guarantee that our implementation is flawless. some improvement might come from tweaking the implementation. for example, we decided not to cache but rather recalculate it in-kernel before every -loop (as a starting point for the recursion relation). since the legendre polynomials are ``hard-coded'' and computed very efficiently, it is not immediately clear if caching is a more efficient approach (it is probably worthwhile at very high ). likewise, we chose to separate the caching operations that are performed once per routine or once per external loop, and execute them as independent kernels, while in principle they could be executed as statements inside the inner kernels (so-called ``kernel fusion'', which would save kernel execution overhead), with an if-statement making sure that the cache operations are performed only if needed. some possible more fundamental changes include trying to get rid of the sorting operation in mex; while the most basic approach requires the particle list to be sorted and a cumulative sum performed over the multipoles, some alternatives exist, such as a logarithmic grid (as in ) or a spline (as in ).
also, we might find a more sophisticated way to perform the cumulative sum, since we suspect that the _ thrust _ routines are not optimal for our uses. another improvement might come from the integration side rather than the force calculation, such as the implementation of a higher order integrator instead of the leapfrog. hernquist's scf code already contains a 4th order hermite scheme, which is not hard to implement for gpus, but mex has a fundamental problem with this scheme due to shell crossing, which causes the force derivatives to be discontinuous. _ etics _ is a powerful code, but as with any computer program, one should understand its limitations. the code in its current form should not be used for highly flattened systems, or where two-body interactions are significant. the code is currently available upon request from the authors, but we plan to make it public, including a module to integrate it with the amuse framework. we thank peter berczik, adi nusser, marcel zemp and eugene vasiliev for the interesting and helpful discussions and the referee for useful comments. ym is grateful for support from the china postdoctoral science foundation through grant no. 2013m530471. the special gpu accelerated supercomputer laohu at the center of information and computing at national astronomical observatories, chinese academy of sciences, has been used for the simulations; it was funded by the ministry of finance of the people's republic of china, under the grant zdy z2008-2, and by the ``recruitment program of global experts'' (qianren) for rainer spurzem (2013). rs has been partially supported by nsfc (national natural science foundation of china), grant no. . | we present gpu implementations of two fast force calculation methods, based on series expansions of the poisson equation. one is the self-consistent field (scf) method, which is a fourier-like expansion of the density field in some basis set; the other is the multipole expansion (mex) method, which is a taylor-like expansion of the green's function. mex, which has been advocated in the past, has not gained as much popularity as scf. both are particle-field methods and optimized for collisionless galactic dynamics, but while scf is a ``pure'' expansion, mex is an expansion in just the angular part; it is thus capable of capturing radial structure easily, where scf needs a large number of radial terms. we show that despite the expansion bias, these methods are more accurate than direct techniques for the same number of particles. the performance of our gpu code, which we call _ etics _, is profiled and compared to a cpu implementation. on the tested gpu hardware, a full force calculation for one million particles took seconds (depending on expansion cutoff), making simulations with as many as particles fast on a comparatively small number of nodes. |
traditional methods for visual cryptography have been established , are consistent and easily understood . unfortunately , these methods exist for black and white pictures only , leaving the encryption of colour images wanting . while there are a handful of attempts at bringing colour to visual cryptography , it is still an open field with implementations of varying efficiency . in this paper , we will establish some basic standards for encrypting colour pictures , as well as a simple , yet efficient method for encryption based upon those rules . while black and white pictures are fairly easy to work with due to their simple nature , colour pictures contain much more information and possibly details that must not be lost in the process . this means that the static that normally appears in the decryption of black and white pictures must be absent [ 1 ] . at the same time , a partial reconstruction of the picture ( that is , combining fewer than all the parts necessary for full restoration ) must not hint at what the final image is meant to be . in addition to the standard of security for the traditional black and white pictures , we set the following points as mandatory for encryption of colour images . * full restoration upon decryption . * no indication as to the original image , whether by eye or any other method , when combining a subset of all available parts . * usability for any type of image , whether that image contains a mixture of colours , is black and white or is simply one single colour . * destruction of intermediary steps in the encryption process . our method of encryption is , due to the process of creating images within visual cryptography and the expanse of computer use , based around the rgb colour model for computers and other , similar devices . this does not prevent implementation of this process with any other model as long as the information is stored as bits . the encryption is fairly straightforward and can be easily understood as well as implemented . although it is easy to handle , it is efficient , provides the necessary security and fulfills every point previously stated . for the process , the idea is to work on the bitwise level that represents the colours themselves ; in this case , the rgb values are used . as each pixel is processed , two random values are generated ; the first one is compared to the rgb value of the current pixel , which is then separated into two values : an rgb value with the bits that were set in both the original and the first random value , and another one with the set bits left over from the original . next , the second random value is compared to the two new results from the previous step . if a bit is not set in either of the two values while the bit at the same position in the second random value is set , then that bit is set in both values from the previous step . these steps are repeated for every pixel and then the encryption is finished . decryption is easily done via a bitwise xor of the rgb values of the two resulting pictures , and we effectively have a one - time pad implementation on the colour values of an image . the random values from the process of encryption are discarded alongside any other values we might have produced . the encryption algorithm will `` split '' the original image into two so - called _ shadow images _ . let denote one pixel in the original image , and let and denote the corresponding pixels in the shadow images , respectively . here , , and are vectors , representing the channels used , e.g. red , green and blue in the rgb colour model .
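the following is a minimal numpy sketch of the per - pixel scheme just described ; the array names , the use of 8 - bit rgb values and the choice of random number generator are our own illustration and are not taken from the paper .

```python
import numpy as np

def encrypt(img, rng=None):
    """split an rgb image (uint8 array) into two shadow images whose
    bitwise xor recovers the original exactly."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.integers(0, 256, img.shape, dtype=np.uint8)  # first random value
    r2 = rng.integers(0, 256, img.shape, dtype=np.uint8)  # second random value
    s1 = img & r1      # bits set in both the original and the first random value
    s2 = img & ~r1     # set bits of the original that are left over
    # where neither shadow has the bit set but the second random value does,
    # set that bit in both shadows (their xor stays 0 for those positions)
    both_clear = ~(s1 | s2) & r2
    s1 |= both_clear
    s2 |= both_clear
    return s1, s2

def decrypt(s1, s2):
    return s1 ^ s2      # bitwise xor restores the original image
```

saved in a lossless format , the xor of the two shadow arrays reproduces the input bit for bit , as required by the first of the points listed above .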
the calculations during the encryption are carried out both bitwise and channel - wise . we illustrate the encryption with an example . assume that we want to encrypt the image in figure [ fig : ex1 ] . then if we apply the algorithm , it may result in the two shadow images in figure [ fig : ex2 ] ; remember that the algorithm is probabilistic . to restore the original image , we compute for each pixel . the result will give us back the original image , without any loss of quality , that is , figure [ fig : ex1 ] . as with both visual cryptography and the one - time pad , this method offers complete security as there is neither a repetition to be found , nor is a brute force attack possible , as every possible result within the picture's resolution will show up . not only does this method fulfill the standards we previously set , it even leaves the resolution of the original image intact . of course , it should be noted that for this encryption to work to its full potential , the results must be saved as a lossless image type . in the event that one would wish to separate the original into more than two pictures , reapplying this process to the results until a satisfactory number is reached is all one needs to do . it could be argued that all this encryption would need is a randomly generated value applied with a bitwise xor to the original image , leaving the randomly generated sequence as one of the resulting images and the result of the bitwise xor as the other . while this could be done , let us take a look at the absolute worst case scenario , disregarding the possibility that an attacker knows the original or has access to all parts of the picture . the scenario in mind would be the highly unlikely event that an attacker has an intimate enough knowledge of the encryption process to know exactly how it is implemented , as well as knowing exactly which random values were generated . if the method of encryption were nothing more than the simplified version suggested , then having the result of the bitwise xor in your possession would be enough to get the original , as being able to predict the exact pseudo - random values would mean that you effectively have the key . this is not the case for the method we propose : depending on the random values and how they interact with the original data , only some bits could possibly be decided , as shown in table [ tab : values ] . there are four possible cases for the pair of random bits . each case can be divided into two subcases , each corresponding to a possible value of the shadow bit that is available to the attacker . without loss of generality , we consider the cases on bit - level .

table [ tab : values ] :
case   first random bit   second random bit   available shadow bit   recovered original bit
1      0                  0                   0                      --
2      0                  0                   1                      1
3      0                  1                   0                      1
4      0                  1                   1                      --
5      1                  0                   0                      --
6      1                  0                   1                      1
7      1                  1                   0                      1
8      1                  1                   1                      --

as seen , even in the case of knowing the generated pseudo - random values , half the possible combinations lack a value , thus mitigating such an attack . the method of encryption presented here could also be applied to the one - time pad due to their likeness , thus providing an extra bit of security by mitigating this most unlikely of scenarios . the reasoning for the recovered original value presented in the above table follows from simple boolean algebra . the equations used are the ones presented in the pseudocode , also presented here for ease of following . keep in mind that we do not know which one of the two shadow values we have , and thus not what the other one could be . * case 1 and 2 * : here the first random bit is 0 and the second random bit is 0 .
while we know for sure that at least one of the two shadow values must be 0 , the other could be either possible value . this means that if our available result is 0 , we have no way of knowing which value we have in our possession , which in turn means that an original bit of 0 and an original bit of 1 are equally possible . thus , since the original value could be either one for an available result of 0 , it remains undetermined . if , on the other hand , our available value is 1 , then we can logically conclude that it is the second shadow value that we are dealing with , and we only have to complete the corresponding equation , which gives an original bit of 1 . * case 3 and 4 * : here the first random bit is 0 and the second random bit is 1 . as per the same logic as in the previous case : if the available result is the same as the one we know must exist ( in this case 1 ) , then we cannot tell which equation it is we have . if it is the opposing value , then we can in this case rule out the other possibility and calculate the equation , and hence the original bit is 1 . * case 5 and 6 * : here the first random bit is 1 and the second random bit is 0 . following the previous logical steps gives us an undetermined original bit for an available result of 0 , but for an available result of 1 the original bit is 1 . * case 7 and 8 * : here the first random bit is 1 and the second random bit is 1 . once again we cannot say for sure which case is before us if our available result is the same as the one we know with certainty to exist , but in the case of a differing result , namely 0 , we know that the original bit is 1 . we would like to thank robert nyqvist for : | while strictly black and white images have been the basis for visual cryptography , there has been a lack of an easily implemented format for colour images . this paper establishes a simple , yet secure way of implementing visual cryptography with colour , assuming a binary data representation .
in this paper , we consider the cross - view and cross - modality matching problem between street - level rgb images and a geographic information system ( gis ) . specifically , given an image taken from street - level , the goal is to query a database assembled from a gis in order to return likely locations of the street - level query image which contain similar semantic concepts in a consistent layout . relying only on visual data is important in gps - denied environments , for images wheresuch tags have been removed on purpose ( e.g. for applications in intelligence or forensic sciences ) , for historical images , or images from the web which are lacking any gps tags . traditionally , such matching problems are solved by establishing pairwise correspondences between interest points using local descriptors such as sift with a subsequent geometric verification stage .unfortunately , even if top - down satellite imagery is available , such an approach based on local appearance features is not applicable to the wide - baseline cross - view matching considered in our setting , mainly because of the following two reasons .firstly , the extremely wide baseline between top - view gis imagery and the street - level image leads to a strong perspective distortion , and secondly , there can be drastic changes in appearance , e.g. due to different weather conditions , time of day , camera response function , etc . in this paper , we present a system to handle those two challenges .we propose to phrase the cross - view matching problem in a semantic way .our system makes use of two cues : what objects are seen and what their geometric arrangement is .this is very similar to the way we humans try to localize ourselves on a map .for instance , we identify that a house can be seen on the left of a lake and that there are two streets crossing in front of this house. then we will look for the same semantic concepts in a consistent spatial configuration in the map to find our potential locations .inspired by this analogy , in our system , instead of matching low - level appearance - based features , we propose to extract segments from the image and label them with a semantic concept employing imperfect classifiers which are trained using images of the same viewpoint and therefore are not invalidated by the viewpoint change .gis often already provide highly - accurate semantically annotated top - down views thereby rendering the semantic labeling superfluous for the gis satellite imagery .hence , we assume that such a semantic map is provided by the gis .a typical query image and an excerpt of a semantic map can be seen in .the semantic concepts we focus on ( e.g. , buildings , lakes , roads , etc ) form large ( and quite often insignificant in number ) segments in the image , and not points . 
therefore , we argue that a precise point - based geometric verification , like a ransac - search with an inlier criterion based on the euclidean distance between corresponding points , is not applicable .we address these issues by designing a descriptor to robustly capture the spatial layout of those semantic segments .pairwise asymmetric l2 matching between these descriptors is then used to find likely locations in the gis map with a spatial layout of semantic segments which is consistent with the one in the query image .we also develop a tree - based search method based on a hierarchical semantic tree to allow fast geo - localization in a geographically broad areas .cross - view matching in terms of semantic segments between street - level query image and a gis map joins several previous research directions . matching across a wide baselinehas traditionally been addressed with local image descriptors for points , areas , or lines .registration of street - level images with oblique aerial or satellite imagery is generally based only on geometric reasoning .previous work , e.g. , has reduced the matching problem to a 2d-2d registration problem by projecting ground models along vertical directions and rectifying the ground plane . unlike our approach, the mentioned work requires a point cloud at query time , either from a laser scan or from multiple views based on structure - from - motion .more recently , considered the registration problem of a dense multi - view - stereo reconstruction from street - level images to oblique aerial views . building upon accurate 3d city models for assembling a database ,contours of skylines in an upward pointing camera can be matched to a city model or perspective distortion can be decreased by rectifying regions of the query image according to dominant scene planes . also relied on rectification of building facades , however , their system relied on the repetitive structure of elements in large facades , enabling a rectification without access to a 3d city model . using contours between the sky and landscape has also been shown to provide valuable geometric cues when matching to a digital elevation model .not using any 3d information , lin et al . proposed a method to localize a street - level image with satellite imagery and a semantic map .their system relies on an additional , large dataset which contains gps - annotated street - level images which therefore establish an explicit link between street - level images and corresponding areas in the satellite imagery and semantic map .similarly to the idea of information transfer in exemplarsvms , once a short - list of promising images from this additional dataset has been generated by matching appearance - based features , appropriate satellite and semantic map information can be transferred from this short - list to the query image . visual location recognition and image retrieval system emphasise the indexing aspect and can handle large image collections : bag - of - visual - words , vocabulary trees or global image descriptors such as fisher vectors have been proposed for that purpose , for example .all those schemes do not account for any higher - level semantic information .more recently , has therefore introduced a scheme where pooling regions for local image descriptors are defined in a semantic way : detectors assign each segment a class label and a separate descriptor ( e.g. 
a fisher vector ) is computed for each such segment .those descriptors rely on local appearance features , which fail to handle significant viewpoint changes faced in the cross - view matching problem considered in our paper .also , this approach does not encode the spatial layout _ between _ semantic segments . if the descriptors are sufficiently discriminative by themselves , encoding this spatial layout is less important . in our casehowever , the information available in the query image which is shared with the gis only captures class labels and a very coarse estimate of the segment shapes .it is therefore necessary to capture both , the presence of semantic concepts and the spatial layout between those concepts , in a joint representation .very recently , ardeshir et al.s work considered the matching problem between street - level image and a dataset of semantic objects .specifically , deformable - part - models ( dpms ) were trained to detect distinctive objects in urban areas from a single street - level image .the main objective of that paper was improved object detection with a geometric verification stage using a database of objects with known locations and the gps - tag and viewing direction of the image has been assumed to be known roughly .they also present an exhaustive search based approach to matching an image against the entire object database .the considered dpms in are well - localized and can be reduced to the centroid of the detection bounding box for a subsequent ransac step which searches for the best 2d - affine alignment in image space .therefore , can only handle such `` spot '' based information and is not designed to handle less accurate and potentially larger semantic segments with uncertain locations such as the ones provided by classifiers for ` road ' or ` lake ' .a graphical illustration of our proposed system is shown in .the building blocks proposed by our paper will be described in detail in the next section , here we provide a rough overview of the system and describe the system inputs .the computation of this input information relies on previous work and is therefore not considered as one of our contributions .given a street - level query image , first we split it into superpixel segments and label each segment with a semantic concept by a set of pre - trained classifiers .we train two different types of classifiers to annotate superpixels with class labels corresponding to a subset of the labels available in the gis . for semantic concepts with large variation in appearance and shape, we are using the work by ren et al . . however , in street level images , it is also quite common to spot highly informative objects with small with - in class variation .typical examples are traffic signs or lamp posts . for each of those classes ,we therefore employ a deformable - part - model ( dpm ) , similar to the ones of .the second piece of input information is an estimate of the vertical vanishing point or the horizon line , e.g. 
describe how those entities can be estimated from a single image .the perspective distortion of the ground plane can then be undone by warping the query image with a suitable rectifying homography , which is fully determined by the horizon line , assuming a rough guess of the camera intrinsic matrix is available .we have opted to estimate the two inputs for our system , namely the ground plane location and the superpixel segmentation , in two entirely independent steps .we note however , that at the expense of higher computational cost this could be estimated jointly .the semantic map from the gis and the warped and labelled query image now share the same modality and suffer from less perspective distortion . the cross - view matching problem between query and semantic gis mapis then cast as a search for consistent spatial layouts of those semantically labeled regions . in order to do so , we design a semantic segment layout ( ssl ) descriptor which captures the _ spatial _ and _ semantic _ arrangement of those segments within a local support area located at a point in the query image or the semantic map . upon extracting such descriptors from the rectified image and semantic map ,the problem can be reduced to a well - understood matching problem .the goal of the ssl descriptor is to capture the _ presence _ of semantic concepts and at the same time encode the _ rough geometric layout _ of those concepts .similar to several previous descriptors , the neighborhood around the descriptor centre is captured with pooling regions arranged in an annular pattern where the size of the regions increases with increasing distance to the descriptor centre .we instantiate separate pooling regions for each semantic concept and the overall descriptor is the concatenation of the per - concept descriptors .similar to appearance based local descriptors , we have to choose the locations where to extract descriptors and its orientation .we are not aware of a good way to find reliable interest points in arrangements of semantic regions .we initially experimented with placing a separate descriptor at the center of each semantic segment .unfortunately , this choice turns out to be very sensitive to the location of the segment projected onto the ground plane .this projection depends on an accurate estimate of the contact region of that segment with the ground plane . based on our experiments , it is challenging to get a sufficiently accurate estimate without manual user intervention . in some preliminary experiments ,we therefore also tried to factor in the contact region uncertainty by blurring the contributions of a neighbouring segment along the line of sight between the camera and that segment , with an increasing amount of blur the further away the segment is from the camera center .we think this is an elegant and theoretically sound way to account for those uncertainties and we refer to for a graphical illustration of the descriptor and of the subsequent processing steps of our preliminary pipeline .we plan to explore this pipeline in future work in more detail . in this workhowever , we settled for a simpler choice : a single descriptor is placed either in the center of the rectified image ( denoted ci in the experiments ) or at the center of the camera ( cc in the experiments ) . 
also , despite the ssl descriptor being more general , we only choose one annular pooling region thereby putting more emphasis on capturing the direction of semantic segments rather than the direction _ and _ distance .this choice is again motivated by the difficulty of accurately estimating contact regions .we suggest using descriptors which are not rotation invariant and an orientation therefore needs to be assigned to each descriptor as well .the reason for this choice is that the alternative of defining a rotation - invariant descriptor leads to a considerably less discriminative descriptor and pairwise matching score .it is straightforward to define a canonical orientation in the semantic gis map .for example , the first pooling region can be chosen to point to geographic north . however ,unless compass direction is available , it is not easily possible to define a canonical orientation for the rectified query image .hence , we choose an arbitrary direction for the descriptors extracted from the rectified query image , and cope with not knowing the rotation parameters by employing a rotation invariant distance metric at the query time ( see section ) .there are several ways to capture the overlap between a pooling region and a semantic segment .an intuitive approach is to compute the area of intersection between the pooling region and the segment .this can be fairly slow for irregularly shaped segments .moreover , the shape of the segments in the query image are quite imprecise , so an accurate computation of the intersection area might be unnecessary or even harmful . here , we propose a scheme motivated by a probabilistic point of view .the segments can be considered as a spatial probability distribution , that a point sampled in or close to this segment takes a certain label .similarly , the pooling regions are interpreted as probability distributions of sampling a point at a certain location .we then have to compute a statistical measure for the similarity between the two distributions . for discrete distributions ,the mutual information is a good candidate . in our settinghowever , we have to handle continuous distributions defined over the ground plane .the bhattacharyya distance is a good way to measure the overlap between two continuous distributions and can be efficiently computed when we deal with gaussian distributions .hence , in practice , in order to keep the computational requirements low , we will use a two - dimensional gaussian to define a pooling region and each segment will be approximated by a gaussian , as well . in detail , let and denote the gaussian for the segment and the pooling region , respectively .the bhattacharyya distance is then given by where .if multiple segments that are labeled with the same semantic concept are present , they can be treated as a gaussian mixture model ( gmm ) .the bhattacharyya distance between two gmms can be approximated by , where and are a gmm for the segment and the pooling region , respectively . 
in our case , the pooling region is always represented by a single gaussian , so and . the bhattacharyya distance is then converted to the hellinger distance where is the bhattacharyya coefficient . our descriptor is the concatenation of all those hellinger distances . each block of this descriptor corresponding to a semantic concept is then l2-normalized independently of the other blocks . if a concept is not present , the hellinger distances for all pooling regions of that concept are set to zero . the descriptor extraction from the semantic map is considerably simpler than from the query image . matching : given a street - level query image , our goal is to generate a ` heat - map ' of likely locations where this image has been taken . the semantic gis map is therefore split into fixed - size overlapping tiles , based on parameters such as average query image field - of - view and height of camera above ground . as described previously , the rotation alignment between the descriptor used in the semantic map and the query image is unknown . in order to cope with that , we propose to rotate one of the descriptors in discrete rotation steps and compute the l2 distance for each step . this boils down to the computation of a circular correlation ( or circular convolution ) between blocks of the two descriptors corresponding to the same semantic concept . this can be implemented efficiently with a circulant matrix multiplication or even with an fft using the circular convolution theorem . in our implementation , we are currently using the l2 distance between two descriptors . however , especially for the descriptor placed at the camera centre , several pooling regions will not be contained in the field of view . we therefore employ an asymmetric l2 distance where the distance contribution of pooling regions in the query descriptor which are not within the field of view of the camera is set to zero . the field of view of the camera can easily be estimated given the image resolution and focal length . [ figure : the two top layers of the hierarchical spectral clustering of the gis tiles ; the clusters are semantically similar , and the white areas in the top - most layer denote tiles which are mostly empty , i.e. had no semantic entries in the gis map . ] the reference area covered by gis maps is often very broad . this leads to a large number of reference tiles which the query descriptor should be compared against . several techniques , such as k - means or kd - trees , have been developed for fast nearest neighbor search , which we can employ to speed up this process . we describe a procedure inspired by k - means trees suitable for pre - computing a hierarchical semantic tree which arranges the tiles of the map in a semantically and spatially meaningful way . this speeds up the matching and also enables fast semantic queries in a gis system . our tree construction is based on hierarchical spectral clustering with branches on each level .
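for concreteness , the following sketch computes one descriptor entry as described above : the closed - form bhattacharyya distance between two 2d gaussians ( one approximating a segment , one for a pooling region ) , its bhattacharyya coefficient , and the resulting hellinger distance . the function name and the example means and covariances are our own illustration , not the authors' code or parameters .

```python
import numpy as np

def hellinger_gaussian(mu1, cov1, mu2, cov2):
    """hellinger distance between two gaussians via the closed-form
    bhattacharyya distance d_b and coefficient bc = exp(-d_b)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    mahalanobis = diff @ np.linalg.solve(cov, diff)
    log_det = 0.5 * np.log(np.linalg.det(cov)
                           / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    d_b = 0.125 * mahalanobis + log_det     # bhattacharyya distance
    bc = np.exp(-d_b)                       # bhattacharyya coefficient
    return np.sqrt(1.0 - bc)                # hellinger distance

# toy example: a pooling region a few meters ahead vs. a 'building' segment
pooling_mu, pooling_cov = np.array([5.0, 0.0]), 4.0 * np.eye(2)
segment_mu, segment_cov = np.array([8.0, 2.0]), np.diag([6.0, 3.0])
print(hellinger_gaussian(pooling_mu, pooling_cov, segment_mu, segment_cov))
```

the per - concept blocks of such entries are then l2 - normalized and concatenated as described above .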
the similarity matrix required for spectral clustering is composed of the previously described asymmetric distance between pairs of tiles of the gis map . this provides us with a similarity matrix , where denotes the number of tiles . we use spectral clustering , rather than k - means as used in k - means trees or similar methods , since we are employing our own customized distance function ( asymmetric l2 ) while those methods often assume a standard distance . the figure above shows the two top layers of the hierarchical tree ( ) obtained by applying the procedure just defined on a large gis image . what we observe is that the area has been partitioned into three well - defined semantic concepts in the first layer : water ( in blue ) , scarcely populated area ( yellow ) and densely populated area ( grey ) . in the second layer , each of those three areas is decomposed into three other areas , and construction of the tree continues as such . the tree being semantically meaningful is a byproduct of the fact that our descriptor is targeted towards capturing semantic information . at query time , we start traversing the tree at the root . the query image is matched against a random subset of tiles contained in each cluster of the current level . the most promising child node out of the children at each level , which represents the cluster , is found in this way . we use this technique since the cluster center is not straightforward to define for our customized distance functions . the tree is traversed all the way down to the leaf nodes to find the final match . the tree depth is , which leads to a speed - up of roughly , where denotes the cardinality of the random subset of tiles explored at each level . our framework has been tested for the geolocalization of generic outdoor images taken in the entire ( ) of the district of columbia , us , as extensive gis databases of this area are made available to the public . the area is depicted in figure [ fig : exp1 ] and includes a variety of different regions ( water , suburban , urban ) . we have gathered a set of geo - tagged images from google maps and panoramio taken at different locations , which serves as a benchmark for our system . we have semantic classes , namely \{ _ road , tree , building , water , lamp post , traffic signal , traffic sign _ } , and in the following experiments we are going to use different sets of classes . the size of a tile in the gis is around . we set the number of rings in our descriptors to , and the number of pooling regions to . we assume that the focal length is approximately known . the homography which rectifies the ground plane is then determined by the vertical vanishing point or the horizon line . we assume that the y - axis of the image is roughly aligned with the vertical direction , and the vertical vanishing point is then given by the mle including all the line segments within degrees w.r.t . the y - axis , see also . as ground plane estimation is not the topic of our paper , we manually check and correct highly inaccurate estimates of the vanishing point in our benchmark images in order not to bias our evaluation toward mistakes in the ground plane estimation ( this is done for all baselines ) . since the semantic gis map uses metric units and the rectified image is in pixels , the size or scale of the pooling regions needs to be converted between those two units . a reasonable assumption for street - level images is that the camera is roughly at above ground . the conversion factor between metric units and pixels is then given by $[\text{pixels}] = \frac{f}{d}\,[\text{meters}]$ .
the scale of the pooling regions is chosen such that an ssl descriptor captures significant contributions from segments within roughly m . detailed qualitative results of our proposed cross - view matching scheme for several sample queries are reported . several interesting observations can be made in this figure . first , as the covered area is very large , even such generic semantic cues can narrow down the search space to often . second , the semantic and geometric similarity among the top 15 matching gis tiles shows the proposed method is successfully capturing such properties and is yet forgiving of the modeled uncertainties by not being overly discriminative ; note that the ground truth location ( marked in the cdf ) is often among the top few percent . third , the heat map correlates well with the semantic content of the image . as for quantitative results , we compare different nearest neighbor ( nn ) classifiers with the following feature vectors : the ssl descriptor in the center of the image ( per query image / gis tile ) , the ssl descriptor in the center of the camera , a binary indicator vector encoding the presence or absence of semantic concepts ( the _ presence _ term ) , and the ssl descriptor in the center of the camera plus the presence term . the last method is random matching . the superior results of the ssl plus presence matching reported in confirm the necessity of jointly using semantics and coarse geometry for a successful localization task . we also suspect that placing the ssl descriptor in the center of the camera results in better performance than placing the descriptor in the center of the image because the former approach is less sensitive to tiling quantization artifacts : e.g. objects on the left side of the field of view will generally remain on the left side even when the viewpoint is moved to the closest tile location . we used the subset \{ _ building , lamp post , traffic signal , traffic sign _ } of the semantic classes since this subset yielded the best overall geo - localization result . for further evaluation , depicts the results of ssl plus presence matching over four different sets of semantic classes , which shows the contribution of each semantic class to the overall geo - localization results . interestingly , the curves also show how certain sets of semantic classes may even mislead the geo - localization process . we believe this is due to the uninformativeness ( e.g. being too common ) of some classes , as well as several sources of noise whose magnitude can vary between different classes ( e.g. inaccurate entries in the gis map , semantic segmentation misclassifications , etc . )
if enough training data were available , appropriate weights could be learned for each class .this paper proposed an approach for cross - view matching between a street - level image and a gis map .this problem was addressed in a semantic way .a fast semantic segment layout descriptor has been proposed which jointly captures the presence of segments with a certain semantic concept and the spatial layout of those segments .as our experimental evaluation showed , this enabled matching a street - level image to a large reference map based on purely semantic cues of the scene and their coarse spatial layout .the results confirm that the semantic and topological cues captured by our method significantly narrow down the search area .this can be used as an effective pre - processing for other less efficient but more accurate localization techniques , such as street view based methods .* acknowledgments : * this work has been supported by the max planck center for visual computing and communication .we also acknowledge the support of nsf career grant ( n1054127 ) . | matching cross - view images is challenging because the appearance and viewpoints are significantly different . while low - level features based on gradient orientations or filter responses can drastically vary with such changes in viewpoint , semantic information of images however shows an invariant characteristic in this respect . consequently , semantically labeled regions can be used for performing cross - view matching . in this paper , we therefore explore this idea and propose an automatic method for detecting and representing the semantic information of an rgb image with the goal of performing cross - view matching with a ( non - rgb ) geographic information system ( gis ) . a segmented image forms the input to our system with segments assigned to semantic concepts such as traffic signs , lakes , roads , foliage , etc . we design a descriptor to robustly capture both , the presence of semantic concepts and the spatial layout of those segments . pairwise distances between the descriptors extracted from the gis map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout . an experimental evaluation with challenging query images and a large urban area shows promising results . |
ever since it was introduced in , the notion of a nash equilibrium and its refinements have remained among the most prominent solution concepts of noncooperative game theory . in its turn , not only has noncooperative game theory found applications in such diverse topics as economics , biology and network design , but it has also become the standard language to actually _ describe _ complex agent interactions in these fields . still , the issue of why and how players may arrive to equilibrial strategies in the first place remains an actively debated question .after all , the complexity of most games increases exponentially with the number of players and , hence , identifying a game s equilibria quickly becomes prohibitively difficult .accordingly , as was first pointed out by aumann in , a player has no incentive to play his component of a nash equilibrium unless he is convinced that all other players will play theirs . andif the game in question has multiple nash equilibria , this argument gains additional momentum : in that case , even players with unbounded deductive capabilities will be hard - pressed to choose a strategy . from this point of view, rational individuals would appear to be more in tune with aumann s notion of a correlated equilibrium where subjective beliefs are also taken into account .nevertheless , the seminal work of maynard smith on animal conflicts has cast nash equilibria in a different light because it unearthed a profound connection between evolution and rationality : roughly speaking , one leads to the other .so , when different species contend for the limited resources of their habitat , evolution and natural selection steer the ensuing conflict to an equilibrial state which leaves no room for irrational behavior . as a consequence ,instinctive `` fight or flight '' responses that are deeply ingrained in a species can be seen as a form of rational behavior , acquired over the species evolutionary course .of course , this evolutionary approach concerns large populations of different species which are rarely encountered outside the realm of population biology .however , the situation is not much different in the case of a finite number of players who try to learn the game by playing again and again and who strive to do better with the help of some learning algorithm .therein , evolution does not occur as part of a birth / death process ; rather , it is a byproduct of the players acquired experience in playing the game see for a most comprehensive account .it is also worth keeping in the back of our mind that in some applications of game theory , `` rationality '' requirements precede evolution .for example , recent applications to network design start from a set of performance aspirations ( such as robustness and efficiency ) that the players ( network devices ) seek to attain in the network s equilibrial state .thus , to meet these requirements , one has to literally reverse - engineer the process by finding the appropriate game whose equilibria will satisfy the players the parallel with mechanism design being obvious . in all these approaches ,a fundamental selection mechanism is that of the _ replicator dynamics _ put forth in and which reinforces a strategy proportionately to the difference of its payoff from the mean ( taken over the species or the player s strategies , depending on the approach ) . 
as was shown in the multi - population setting of samuelson and zhang ( which is closer to learning than the self - interacting single - population scenaria of and ) , these dynamics are particularly conducive to rationality .strategies that are suboptimal when paired against any choice of one s adversaries rapidly become extinct , and in the long run , only rationally admissible strategies can survive .even more to the point , the only attracting states of the dynamics turn out to be precisely the ( strict ) nash equilibria of the game see for a masterful survey .we thus see that nash equilibria arise over time as natural attractors for rational individuals , a fact which further justifies their prominence among noncooperative solution concepts . yet , this behavior is also conditional on the underlying game remaining stationary throughout the time horizon that it takes players to adapt to it and unfortunately , this stationarity assumption is rarely met in practical applications . in biological models , for example , the reproductive fitness of an individual may be affected by the ever - changing weather conditions ; in networks , communication channels carry time - dependent noise and interference as well as signals ; and when players try to sample their strategies , they might have to deal with erroneous or imprecise readings .it is thus logical to ask : _ does rational behavior still emerge in the presence of stochastic perturbations that interfere with the underlying game _ ? in evolutionary games ,these perturbations traditionally take the form of `` aggregate shocks '' that are applied directly to the population of each phenotype .this approach by fudenberg and harris has spurred quite a bit of interest and there is a number of features that differentiate it from the deterministic one .for example , cabrales showed in that dominated strategies indeed become extinct , but only if the variance of the shocks is low enough .more recently , the work of imhof and hofbauer revealed that even equilibrial play arises over time but again , conditionally on the variance of the shocks .be that as it may , if one looks at games with a finite number of players , it is hardly relevant to consider shocks of this type because there are no longer any populations to apply them to .instead , the stochastic fluctuations should be reflected directly on the stimuli that incite players to change their strategies : their payoffs .this leads to a picture which is very different from the evolutionary one and is precisely the approach that we will be taking . in this paper, we analyze the evolution of players in stochastically perturbed games of this sort .the particular stimulus - response model that we consider is simple enough : players keep cumulative scores of their strategies performance and employ exponentially more often the one that scores better .after a few preliminaries in section [ sec : preliminaries ] , this approach is made precise in section [ sec : replicator ] where we derive the stochastic replicator equation that governs the behavior of players when their learning curves are subject to random perturbations .the replicator equation that we get is different from the `` aggregate shocks '' approach of and , as a result , it exhibits markedly different rationality properties as well . 
in stark contrast to the results of , we show in section [ sec : dominated ] that dominated strategies become extinct irrespective of the noise level ( proposition [ prop : dominated ] ) and provide an exponential bound for the rate of decay of these strategies ( proposition [ prop : timetolive ] ) .in fact , by induction on the rounds of elimination of dominated strategies , we show that this is true even for _ iteratively _ dominated strategies : despite the noise , only rationally admissible strategies can survive in the long run ( theorem [ thm : rational ] ) .then , as an easy corollary of the above , we infer that players will converge to a strict equilibrium ( corollary [ cor : dominance ] ) whenever the underlying game is dominance - solvable .we continue with the issue of equilibrial play in section [ sec : congestion ] by making a suggestive detour in the land of congestion games . if the noise is relatively mild with respect to the rate with which players learn , we find that the game s potential is a lyapunov function which ensures that strict equilibria are stochastically attracting ; and if the game is dyadic ( i.e. , players only have two choices ) , this tameness assumption can be dropped altogether . encouraged by the results of section [ sec : congestion ] , we attack the general case in section [ sec : equilibrium ] .as it turns out , strict equilibria are _ always _ asymptotically stochastically stable in the perturbed replicator dynamics that stem from exponential learning ( theorem [ thm : stability ] ) .this begs to be compared to the results of where it is the equilibria of a suitably modified game that are stable , and not necessarily those of the actual game being played .fortunately , exponential learning seems to give players a clearer picture of the original game and there is no need for similar modifications in our case . given a finite set , we will routinely identify the set of probability measures on with the standard -dimensional simplex of and . under this identification, we will also make no distinction between and the vertex of ; in fact , to avoid an overcluttering of indices , we will frequently use to refer to either or , writing , for example , `` '' or `` '' instead of `` '' or `` , '' respectively . to streamline our presentation, we will consistently employ latin indices for players ( ) and greek for their strategies ( ) , separating the two by a comma when it would have been sthetically unpleasant not to . in like manner , when we have to discriminate between strategies, we will assume that indices from the first half of the greek alphabet start at ( ) while those taken from the second half start at ( ) .finally , if is some stochastic process in starting at , its law will be denoted by or simply by if there is no danger of confusion ; and if the context leaves no doubt as to which process we are referring to , we will employ the term `` almost surely '' in place of the somewhat unwieldy `` -almost surely . ''as is customary , our starting point will be a ( finite ) set of _ players _ , indexed by .the players possible actions are drawn from their _ strategy sets _ and they can combine them by choosing their ( pure ) strategy with probability . in that case , the players _ mixed strategies _ will be described by the points or , more succinctly , by the _ strategy profile _ . in particular , if denotes the vertex of the component simplex , the ( pure ) profile simply corresponds to player playing . 
on the other hand ,if we wish to focus on the strategy of a particular player against that of his _ opponents _ , we will employ the shorthand notation to denote the profile where plays against his opponents strategy .so , once players have made their strategic choices , let be the reward of player in the profile , that is , the payoff that strategy yields to player against the strategy of s opponents . then, if players mix their strategies , their expected reward will be given by the ( multilinear ) _ payoff functions _ : under this light , the payoff that a player receives when playing a pure strategy deserves special mention and will be denoted by this collection of _ players_ , their _ strategies _ and their _ payoffs _ will be our working definition for a _ game in normal form _ , usually denoted by if we need to keep track of more data .needless to say , rational players who seek to maximize their individual payoffs will avoid strategies that always lead to diminished payoffs against any play of their opponents .we will thus say that the strategy is ( _ strictly _ ) _ dominated _ by and we will write when for all strategies of s opponents . with this in mind , dominated strategies can be effectively removed from the analysis of a game because rational players will have no incentive to ever use them .however , by deleting such a strategy , another strategy ( perhaps of another player ) might become dominated and further deletions of _ iteratively dominated _ strategies might be in order ( see section [ sec : dominated ] for more details ) .proceeding ad infinitum , we will say that a strategy is _ rationally admissible _ if it survives every round of elimination of dominated strategies .if the set of rationally admissible strategies is a singleton ( e.g. , as in the prisoner s dilemma ) , the game will be called _ dominance - solvable _ and the sole surviving strategy will be the game s _ rational solution_. then again , not all games can be solved in this way and it is natural to look for strategies which are stable at least under unilateral deviations .hence , we will say that a strategy profile is a _nash equilibrium _ of the game when if the equilibrium profile only contains pure strategies , we will refer to it as a _ pure equilibrium _ ; and if the inequality ( [ eq : nash ] ) is strict for all , the equilibrium will carry instead the characterization _strict_. clearly , if two pure strategies are present with positive probability in an equilibrial strategy , then we must have as a result of being linear in .consequently , only pure profiles can satisfy the strict version of ( [ eq : nash ] ) so that strict equilibria must also be pure .the converse implication is false but only barely so : a pure equilibrium fails to be strict only if a player has more than one pure strategies that return the same rewards . since this is almost always true ( in the sense that the degenerate case can be resolved by an arbitrarily small perturbation of the payoff functions ), we will relax our terminology somewhat and use the two terms interchangeably . to recover the connection of equilibrial play with strategic dominance , note that if a game is solvable by iterated elimination of dominated strategies , the single rationally admissible strategy that survives will be the game s unique strict equilibrium .but the significance of strict equilibria is not exhausted here : strict equilibria are exactly the evolutionarily stable strategies of multi - population evolutionary games proposition 5.1 in . 
moreover , as we shall see a bit later , they are the only asymptotically stable states of the multi - population replicator dynamics again , see chapter 5 , pages 216 and 217 of . unfortunately , strict equilibria do not always exist , rock - paper - scissors being the typical counterexample .nevertheless , pure equilibria do exist in many large and interesting classes of games , even when we leave out dominance - solvable ones .perhaps the most noteworthy such class is that of _ congestion games_.[ def : congestion ] a game will be called a _ congestion game _ when : 1 .all players share a common set of _ facilities _ as their strategy set : for all ; 2 . the payoffs are functions of the number of players sharing a particular facility : where is the number of players choosing the same facility as . amazingly enough, monderer and shapley made the remarkable discovery in that these games are actually equivalent to the class of _ potential games_. [ def : potential ] a game will be called a _ potential game _ if there exists a function such that for all players and all strategies , .this equivalence reveals that both classes of games possess equilibria in pure strategies : it suffices to look at the vertices of the face of where the ( necessarily multilinear ) potential function is minimized . as one would expect , locating the nash equilibria of a game is a rather complicated problem that requires a great deal of global calculations , even in the case of potential games ( where it reduces to minimizing a multilinear function over a convex polytope ) .consequently , it is of interest to see whether there are simple and distributed learning schemes that allow players to arrive at a reasonably stable solution .one such scheme is based on an exponential learning behavior where players play the game repeatedly and keep records of their strategies performance . in more detail , at each instance of the game all players update the cumulative scores of their strategies as specified by the recursive formula where is the players strategy profile at the iteration of the game and , in the absence of initial bias , we assume that for all .these scores reinforce the perceived success of each strategy as measured by the average payoff it yields and hence , it stands to reason that players will lean towards the strategy with the highest score .the precise way in which they do that is by playing according to the namesake exponential law : for simplicity , we will only consider the case where players update their scores in continuous time , that is , according to the coupled equations then , if we differentiate ( [ eq : detlogit ] ) to decouple it from ( [ eq : detscore ] ) , we obtain the _ standard _ ( _ multi - population _ ) _ replicator dynamics _ alternatively ,if players learn at different speeds as a result of varied stimulus - response characteristics , their updating will take the form where represents the _ learning rate _ of player , that is , the `` weight '' which he assigns to his perceived scores . 
in this way , the replicator equation evolves at a different time scale for each player , leading to the _ rate - adjusted _ dynamics naturally , the uniform dynamics ( [ eq : rd ] ) are recovered when all players learn at the `` standard '' rate .if we view the exponential learning model ( [ eq : discretelogit ] ) from a stimulus - response angle , we see that that the payoff of a strategy simply represents an ( exponential ) propensity of employing said strategy .it is thus closely related to the algorithm of _ logistic fictitious play _ where the strategy of ( [ eq : ratelogit ] ) can be seen as the ( unique ) best reply to the profile in some suitably modified payoffs .interestingly enough , turns out to be none other than the _ entropy _ of : that being so , we deduce that the learning rates act the part of ( player - specific ) inverse temperatures : in high temperatures ( small ) , the players learning curves are `` soft '' and the payoff differences between strategies are toned down ; on the contrary , if the scheme `` freezes '' to a myopic best - reply process .the replicator dynamics were first derived in in the context of population biology , first for different phenotypes within a single species ( single - population models ) , and then for different species altogether ( multi - population models ; and provide excellent surveys ) . in both these cases , one begins with large populations of individuals that are programmed to a particular behavior ( e.g. , for `` hawks '' or for `` doves '' ) and matches them randomly in a game whose payoffs directly affect the reproductive fitness of the individual players .more precisely , let be the population size of the phenotype ( strategy ) of species ( player ) in some multi - population model where individuals are matched to play a game with payoff functions .then , the relative frequency ( share ) of will be specified by the _ population state _ where .so , if individuals are drawn randomly from the species , their expected payoffs will be given by , , and if these payoffs represent a proportionate increase in the phenotype s fitness ( measured as the number of offsprings in the unit of time ) , we will have as a result , the population state will evolve according to which is exactly ( [ eq : rd ] ) viewed from an evolutionary perspective .on the other hand , we should note here that in _ single - population _ models the resulting equation is cubic and not quadratic because strategies are matched against themselves . to wit , assume that individuals are randomly drawn from a large population and are matched against one another in a ( symmetric ) 2-player game with strategy space and payoff matrix . then, if denotes the population share of individuals that are programmed to the strategy , their expected payoff in a random match will be given by ; similarly , the population average payoff will be .hence , by following the same procedure as above , we end up with the single - population replicator dynamics which behave quite differently than their multi - population counterpart ( [ eq : erd ] ) .as far as rational behavior is concerned , the replicator dynamics have some far - reaching ramifications .if we focus on multi - population models , samuelson and zhang showed in that the share of a strategy which is strictly dominated ( even iteratively ) converges to zero along any interior solution path of ( [ eq : rd ] ) ; in other words , _ dominated strategies become extinct in the long run_. 
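as a quick numerical illustration of this extinction result , the following sketch integrates the two - player replicator dynamics ( [ eq : rd ] ) with a simple euler step for a toy game in which one pure strategy of the first player is strictly dominated ; the payoff matrices , step size and horizon are our own choices and are not taken from the text .

```python
import numpy as np

# player 1 payoff matrix A (rows: strategies of player 1, cols: strategies of player 2);
# row 2 is strictly dominated by row 0
A = np.array([[3.0, 1.0],
              [2.0, 2.0],
              [1.0, 0.0]])
# player 2 payoff matrix B (rows: strategies of player 2, cols: strategies of player 1)
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0]])

x1 = np.full(3, 1.0 / 3.0)          # interior initial conditions
x2 = np.full(2, 1.0 / 2.0)
dt = 0.01
for _ in range(20_000):
    u1 = A @ x2                      # payoffs of player 1's pure strategies
    u2 = B @ x1
    x1 += dt * x1 * (u1 - x1 @ u1)   # replicator update: share grows with payoff above the mean
    x2 += dt * x2 * (u2 - x2 @ u2)
print(x1)                            # the dominated strategy's share tends to 0
```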
additionally , there is a remarkable equivalence between the game s nash equilibria and the stationary points of the replicator dynamics : _ the asymptotically stable states of coincide precisely with the strict nash equilibria of the underlying game _ .a large part of our work will be focused on examining whether the rationality properties of exponential learning ( elimination of dominated strategies and asymptotic stability of strict equilibria ) remain true in a stochastic setting . however , since asymptotic stability is ( usually ) too stringent an expectation for stochastic dynamical systems , we must instead consider its stochastic analogue .that being the case , let be a standard wiener process in and consider the stochastic differential equation ( sde ) following , the notion of asymptotic stability in this sde is expressed by the following .[ def : stability ] we will say that is _ stochastically asymptotically stable _ when , for every neighborhood of and every , there exists a neighborhood of such that for all initial conditions of the sde ( [ eq : sde ] ) .much the same as in the deterministic case , stochastic asymptotic stability is often established by means of a lyapunov function . in our context , this notion hinges on the second order differential operator that is associated to ( [ eq : sde ] ) , namely the _ generator _ of : the importance of this operator can be easily surmised from it s lemma ; indeed , if is sufficiently smooth , the generator simply captures the drift of the process : in this way , can be seen as the stochastic version of the time derivative ; this analogy then leads to the following .[ def : lyapunov ] let and let be an open neighborhood of .we will say that is a ( local ) _ stochastic lyapunov function _ for the sde ( [ eq : sde ] ) if : 1 . for all , with equality iff ; 2 .there exists a constant such that for all .whenever such a lyapunov function exists , it is known that the point where attains its minimum will be stochastically asymptotically stable for example , see theorem 4 in pages 314 and 315 of .a final point that should be mentioned here is that our analysis will be constrained on the compact polytope instead of all of .accordingly , the `` neighborhoods '' of definitions [ def : stability ] and [ def : lyapunov ] should be taken to mean `` neighborhoods in , '' that is , neighborhoods in the subspace topology of .this minor point should always be clear from the context and will only be raised in cases of ambiguity .of course , it could be argued that the rationality properties of the exponential learning scheme are a direct consequence of the players receiving accurate information about the game when they update their scores .however , this is a requirement that can not always be met : the interference of nature in the game or imperfect readings of one s utility invariably introduce fluctuations in ( [ eq : detscore ] ) , and in their turn , these lead to a perturbed version of the replicator dynamics ( [ eq : rd ] ) . to account for these random perturbations, we will assume that the players scores are now governed instead by the _differential equation where , as before , the strategy profile is given by the logistic law in this last equation , is a standard wiener process living in and the coefficients measure the impact of the noise on the players scoring systems . 
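a minimal euler - maruyama sketch of this perturbed learning process , i.e. of the noisy scores ( [ eq : score ] ) combined with the logistic choice rule ( [ eq : logit ] ) , is given below for a game with two players and two strategies each ; the payoff matrices , the constant noise level and the step size are our own toy choices .

```python
import numpy as np

def softmax(u):
    z = np.exp(u - u.max())          # numerically stable logistic (exponential) law
    return z / z.sum()

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [2.0, 2.0]])   # payoffs of player 1 (rows) vs player 2 (cols)
B = np.array([[2.0, 0.0], [1.0, 3.0]])   # payoffs of player 2 (rows) vs player 1 (cols)
eta, dt, steps = 0.5, 0.01, 50_000       # constant noise level, step size, horizon

u1, u2 = np.zeros(2), np.zeros(2)        # cumulative scores
for _ in range(steps):
    x1, x2 = softmax(u1), softmax(u2)    # mixed strategies induced by the scores
    # dU = u(x) dt + eta dW, with independent wiener increments per strategy
    u1 += (A @ x2) * dt + eta * rng.normal(0.0, np.sqrt(dt), 2)
    u2 += (B @ x1) * dt + eta * rng.normal(0.0, np.sqrt(dt), 2)
print(softmax(u1), softmax(u2))
```

in typical runs the induced mixed strategies concentrate near one of the game's strict equilibria despite the noise , which is the kind of behaviour analyzed in the following sections .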
of course, these coefficients need not be constant : after all , the effect of the noise on the payoffs might depend on the state of the game in some typically continuous way .for this reason , we will assume that the functions are continuous on , and we will only note en passant that our results still hold for essentially bounded coefficients ( we will only need to replace and with and , respectively , in all expressions involving ) . a very important instance of this dependence can be seen if for all , in which case equation ( [ eq : score ] ) becomes a convincing model for the case of insufficient information .it states that when a player actually uses a strategy , his payoff observations are accurate enough ; but with regards to strategies he rarely employs , his readings could be arbitrarily off the mark .now , to decouple ( [ eq : score ] ) and ( [ eq : logit ] ) , we may simply apply it s lemma to the process . to that end ,recall that has independent components across players and strategies , so that ( the kronecker symbols being for and , otherwise ). then , it s formula gives \\[-8pt ] & = & \sum_{\beta } \biggl(u_{i\beta}(x)\,\frac{\partial x_{i\alpha}}{\partial u_{i\beta } } + \frac{1}{2}\eta_{i\beta}^{2}(x)\,\frac{\partial^{2}x_{i\alpha } } { \partial u_{i\beta}^{2 } } \biggr ) \,dt\nonumber\\ & & { } + \sum_{\beta } \eta_{i\beta}(x)\frac{\partial x_{i\alpha } } { \partial u_{i\beta } } \,dw_{i\beta}.\nonumber\end{aligned}\ ] ] on the other hand , a simple differentiation of ( [ eq : logit ] ) yields and by plugging these expressions back into ( [ eq : dxprelim ] ) , we get \,dt\nonumber\\ & & { } + x_{i\alpha } \biggl[\frac{1}{2}\eta_{i\alpha}^{2}(x)(1 - 2 x_{i\alpha } ) - \frac{1}{2}\sum_{\beta}\eta_{i\beta}^{2}(x ) x_{i\beta } ( 1 - 2x_{i\beta } ) \biggr ] \,dt\\ & & { } + x_{i\alpha } \biggl[\eta_{i\alpha}(x ) \,dw_{i\alpha } - \sum_{\beta } \eta _ { i\beta}(x ) x_{i\beta } \,dw_{i\beta } \biggr]\nonumber.\end{aligned}\ ] ] alternatively , if players update their strategies with different learning rates , we should instead apply it s formula to ( [ eq : ratelogit ] ) .in so doing , we obtain \,dt\nonumber\\ & & { } + \frac{\lambda_{i}^{2}}{2}x_{i\alpha } \biggl[\eta_{i\alpha}^{2}(x)(1 - 2 x_{i\alpha } ) -\sum_{\beta}\eta_{i\beta}^{2}(x ) x_{i\beta } ( 1 - 2x_{i\beta } ) \biggr ] \,dt\nonumber\\[-8pt]\\[-8pt ] & & { } + \lambda_{i } x_{i\alpha } \bigl[\eta_{i\alpha}(x)\ , d w_{i\alpha } - \sum\eta_{i\beta}(x ) x_{i\beta } \,dw_{i\beta } \bigr]\nonumber\\ & = & b_{i\alpha}(x ) \,dt + \sum_{\beta } \sigma_{i,\alpha\beta } ( x ) \,d w_{i\beta},\nonumber\end{aligned}\ ] ] where , in obvious notation , and are , respectively , the drift and diffusion coefficients of the diffusion .obviously , when , we recover the uniform dynamics ( [ eq : srd ] ) ; equivalently ( and this is an interpretation that is well worth keeping in mind ) , the rates can simply be regarded as a commensurate inflation of the payoffs and noise coefficients of player in the uniform logistic model ( [ eq : logit ] ) .equation ( [ eq : srd ] ) and its rate - adjusted sibling ( [ eq : slrd ] ) will constitute our stochastic version of the replicator dynamics and thus merit some discussion in and by themselves . first , note that these dynamics admit a ( unique ) strong solution for any initial state , even though they do not satisfy the linear growth condition that is required for the existence and uniqueness theorem for sdes ( e.g. 
, theorem 5.2.1 in ) .instead , an addition over reveals that every simplex remains invariant under ( [ eq : srd ] ) : if , then and hence , will stay in for all , it is not harder to see that every face of is a trap for .so , if is a smooth bump function that is equal to on some open neighborhood of and which vanishes outside some compact set , the sde _ will _ have bounded diffusion and drift coefficients and will thus admit a unique strong solution .but since this last equation agrees with ( [ eq : srd ] ) on and any solution of ( [ eq : srd ] ) always stays in , we can easily conclude that our perturbed replicator dynamics admit a unique strong solution for any initial .it is also important to compare the dynamics ( [ eq : srd ] ) , ( [ eq : slrd ] ) to the `` aggregate shocks '' approach of fudenberg and harris that has become the principal incarnation of the replicator dynamics in a stochastic environment .so , let us first recall how aggregate shocks enter the replicator dynamics in the first place .the main idea is that the reproductive fitness of an individual is not only affected by deterministic factors but is also subject to stochastic shocks due to the `` weather '' and the interference of nature with the game .more precisely , if denotes the population size of phenotype of the species in some multi - population evolutionary game , its growth will be determined by where , as in ( [ eq : offspring ] ) , denotes the population shares . in this way , it s lemma yields the _ replicator dynamics with aggregate shocks _ : \,dt\nonumber\\[-8pt]\\[-8pt ] & & { } + x_{i\alpha } \bigl[\eta_{i\alpha } \,dw_{i\alpha } - \sum\eta _ { i\beta } x_{i\beta } \,dw_{i\beta } \bigr].\nonumber\end{aligned}\ ] ] we thus see that the effects of noise propagate differently in the case of exponential learning and in the case of evolution . indeed , if we compare equations ( [ eq : srd ] ) and ( [ eq : asrd ] ) term by term , we see that the drifts are not quite the same : even though the payoff adjustment ties both equations back together in the deterministic setting ( ) , the two expressions differ by \,dt.\ ] ] innocuous as this term might seem , it is actually crucial for the rationality properties of exponential learning in games with randomly perturbed payoffs .as we shall see in the next sections , it leads to some miraculous cancellations that allow rationality to emerge in all noise levels .this difference further suggests that we can pass from ( [ eq : srd ] ) to ( [ eq : asrd ] ) simply by modifying the game s payoffs to .of course , this presumes that the noise coefficients be constant the general case would require us to allow for games whose payoffs may not be multilinear . this apparent lack of generality does not really change things but we prefer to keep things simple and for the time being , it suffices to point out that this modified game was precisely the one that came up in the analysis of . as a result, this modification appears to play a pivotal role in setting apart learning and evolution in a stochastic setting : whereas the modified game is deeply ingrained in the process of natural selection , exponential learning seems to give players a clearer picture of the actual underlying game .thereby armed with the stochastic replicator equations ( [ eq : srd ] ) , ( [ eq : slrd ] ) to model exponential learning in noisy environments , the logical next step is to see if the rationality properties of the deterministic dynamics carry over to this stochastic setting . 
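before moving on , a short numerical sketch may help fix ideas : one convenient way to simulate the perturbed dynamics without ever leaving the simplex ( mirroring the invariance argument above ) is to discretize the score equation ( [ eq : score ] ) with an euler - maruyama step and to recover the strategy shares through the logistic map ( [ eq : logit ] ) . the game , the constant noise level and the integration parameters below are illustrative assumptions ; in a typical run the trajectory drifts toward the strict equilibrium of the toy game , in agreement with the stability results established later .

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 2x2 game with a strict equilibrium at the first strategy of
# each player; eta is a common, constant noise coefficient (an assumption).
A = np.array([[3.0, 1.0],
              [2.0, 0.0]])
B = np.array([[2.0, 1.0],
              [3.0, 2.0]])
eta, dt, T = 2.0, 1e-3, 200.0

def logit(u):
    w = np.exp(u - u.max())
    return w / w.sum()

U0, U1 = np.zeros(2), np.zeros(2)          # perturbed scores of the two players
for _ in range(int(T / dt)):
    x0, x1 = logit(U0), logit(U1)
    # Euler-Maruyama step for the score equation dU = u dt + eta dW; the
    # strategy shares x = logit(U) then follow the perturbed replicator
    # dynamics and remain on the simplex by construction.
    U0 += (A @ x1) * dt + eta * np.sqrt(dt) * rng.standard_normal(2)
    U1 += (B.T @ x0) * dt + eta * np.sqrt(dt) * rng.standard_normal(2)

print("final mixed strategies:", logit(U0), logit(U1))
```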
in this direction , we will first show that dominated strategies always become extinct in the long run and that only the rationally admissible ones survive . as in ( implicitly ) and ( explicitly ) , the key ingredient of our approach will be the _ cross entropy _ between two mixed strategies of player : where is the _ entropy _ of and is the intimately related _ kullback leibler divergence _ ( or _ relative entropy _ ): this divergence function is central in the stability analysis of the ( deterministic ) replicator dynamics because it serves as a distance measure in probability space .as it stands however , is not a distance function per se : neither is it symmetric , nor does it satisfy the triangle inequality .still , it has the very useful property that iff employs with positive probability all pure strategies that are present in [ i.e. , iff or iff is absolutely continuous w.r.t . ] . therefore ,if for all dominated strategies of player , it immediately follows that can not be dominated itself . in this vein , we have the following . [ prop : dominated ] let be a solution of the stochastic replicator dynamics ( [ eq : srd ] ) for some interior initial condition .then , if is ( strictly ) dominated , in particular , if is pure , we will have ( a.s . ) : strictly dominated strategies do not survive in the long run .note first that and hence , will almost surely stay in for all ; this is a simple consequence of the uniqueness of strong solutions and the invariance of the faces of under the dynamics ( [ eq : srd ] ) .let us now consider the cross entropy between and : as a result of being an interior path , will remain finite for all ( a.s . ) .so , by applying it s lemma we get \\[-8pt ] & = & -\sum_{\beta } \frac{q_{i\beta}}{x_{i\beta } } \,dx_{i\beta } + \frac{1}{2 } \sum_{\beta } \frac{q_{i\beta}}{x_{i\beta}^{2 } } ( d x_{i\beta } ) ^{2}\nonumber\end{aligned}\ ] ] and , after substituting from the dynamics ( [ eq : srd ] ) , this last equation becomes \,dt\nonumber\\[-8pt]\\[-8pt ] & & { } + \sum_{\beta } q_{i\beta}\sum_{\gamma } ( x_{i\gamma } - \delta _ { \beta \gamma } ) \eta_{i\gamma } ( x)\,dw_{i\gamma}.\nonumber\end{aligned}\ ] ] accordingly , if is another mixed strategy of player , we readily obtain \\[-8pt ] & & { } + \sum_{\beta } ( q_{i\beta } ' - q_{i\beta } ) \eta_{i\beta}(x ) \,dw_{i\beta}\nonumber\end{aligned}\ ] ] and , after integrating , \\[-8pt ] & & { } + \sum_{\beta}(q_{i\beta}'-q_{i\beta})\int_{0}^{t}\eta_{i\beta } ( x(s ) ) \,dw_{i\beta}(s).\nonumber\end{aligned}\ ] ] suppose then that and let . with compact, it easily follows that and the first term of ( [ eq : gint ] ) will be bounded from below by .however , since monotonicity fails for it integrals , the second term must be handled with more care . to that end , let and note that the cauchy schwarz inequality gives \\[-8pt ] & \leq & s_{i}\eta_{i}^{2 } \sum_{\beta } ( q'_{i\beta } -q_{i\beta})^{2}\leq 2 s_{i } \eta_{i}^{2},\nonumber\end{aligned}\ ] ] where is the number of pure strategies available to player and ; recall also that for the last step .therefore , if denotes the martingale part of ( [ eq : gcomp ] ) and is its quadratic variation , the previous inequality yields (t ) = \int_{0}^{t}\xi _ { i}^{2}(s)\,ds \leq2 s_{i } \eta_{i}^{2 } t.\ ] ] now , if , it follows from the time - change theorem for martingales ( e.g. , theorem 3.4.6 in ) that there exists a wiener process such that . 
hence , by the law of the iterated logarithm we get on the other hand , if , it is trivial to obtain by letting in ( [ eq : gint ] ) .therefore , with , we readily get ( a.s . ) ; and since for all pure strategies , our proof is complete . as in , we can now obtain the following estimate for the lifespan of pure dominated strategies .[ prop : timetolive ] let be a solution path of ( [ eq : srd ] ) with initial condition and let denote its law . assume further that the strategy is dominated ; then , for any and for large enough , we have where is the number of strategies available to player , and the constants and do not depend on .the proof is pretty straightforward and for the most part follows .surely enough , if and we use the same notation as in the proof of proposition [ prop : dominated ] , we have where and . then \\[-8pt ] & = & \frac{1}{2}\operatorname{erfc}\biggl(\frac{m - h_{i}(x_{i } ) - v_{i}t}{\sqrt{2\rho _ { i}(t ) } } \biggr)\nonumber\end{aligned}\ ] ] and , since the quadratic variation is bounded above by ( [ eq : quvar ] ) , the estimate ( [ eq : timetolive ] ) holds for all sufficiently large [ i.e. , such that .some remarks are now in order : first and foremost , our results should be contrasted to those of cabrales and imhof where dominated strategies die out only if the noise coefficients ( shocks ) satisfy certain tameness conditions . the origin of this notable difference is the form of the replicator equation ( [ eq : srd ] ) and , in particular , the extra terms that are propagated there by exponential learning and which are absent from the aggregate shocks dynamics ( [ eq : asrd ] ) . as can be seen from the derivations in proposition [ prop : dominated ] , these terms are precisely the ones that allow players to pick up on the true payoffs instead of the modified ones that come up in ( and , indirectly , in as well ) .secondly , it turns out that the way that the noise coefficients depend on the profile is not really crucial : as long as is continuous ( or essentially bounded ) , our arguments are not affected . the only way in which a specific dependence influences the extinction of dominated strategies is seen in proposition [ prop : timetolive ] : a sharper estimate of the quadratic variation of could conceivably yield a more accurate estimate for the cumulative distribution function of ( [ eq : timetolive ] ) .finally , it is only natural to ask if proposition [ prop : dominated ] can be extended to strategies that are only _ iteratively _ dominated . as it turns out , this is indeed the case .[ thm : rational ] let be a solution path of ( [ eq : srd ] ) starting at . then ,if is iteratively dominated , that is , _ only rationally admissible strategies survive in the long run . _ as in the deterministic case , the main idea is that the solution path gets progressively closer to the faces of that are spanned by the pure strategies which have not yet been eliminated .following , we will prove this by induction on the rounds of elimination of dominated strategies ; proposition [ prop : dominated ] is simply the case . to wit ,let , and denote by the set of strategies that are admissible ( i.e. 
, not dominated ) with respect to any strategy .so , if we start with and , we may define inductively the set of strategies that remain admissible after elimination rounds by where ; similarly , the pure strategies that have survived after such rounds will be denoted by .clearly , this sequence forms a descending chain and the set will consist precisely of the strategies of player that are rationally admissible .assume then that the cross entropy diverges as for all strategies that die out within the first rounds ; in particular , if this implies that as . we will show that the same is true if survives for rounds but is eliminated in the subsequent one .indeed , if but , there will exist some such that now , note that any can be decomposed as where is the `` admissible '' part of , that is , the projection of on the subspace spanned by the surviving vertices . hence , if , we will have and , by linearity , moreover , by the induction hypothesis , we also have as .thus , there exists some such that for all [ recall that is spanned by already eliminated strategies ] .therefore , as in the proof of proposition [ prop : dominated ] , we obtain for where is a constant depending only on . in this way , the same reasoning as before gives and the theorem follows . as a result ,if there exists only one rationally admissible strategy , we get the following .[ cor : dominance ]let be an interior solution path of the replicator equation ( [ eq : srd ] ) for some dominance - solvable game and let be the ( unique ) strict equilibrium of . then that is , _ players converge to the game s strict equilibrium __ ) . in concluding this section, it is important to note that all our results on the extinction of dominated strategies remain true in the adjusted dynamics ( [ eq : slrd ] ) as well : this is just a matter of rescaling . the only difference in using different learning rates comes about in proposition [ prop : timetolive ] where the estimate ( [ eq : timetolive ] ) becomes as it stands , this is not a significant difference in itself because the two estimates are asymptotically equal for large times . nonetheless , it is this very lack of contrast that clashes with the deterministic setting where faster learning rates accelerate the emergence of rationality .the reason for this gap is that an increased learning rate also carries a commensurate increase in the noise coefficients , and thus deflates the benefits of accentuating payoff differences .in fact , as we shall see in the next sections , the learning rates do not really allow players to learn any faster as much as they help diminish their shortsightedness : by effectively being lazy , it turns out that players are better able to average out the noise .having established that irrational choices die out in the long run , we turn now to the question of whether equilibrial play is stable in the stochastic replicator dynamics of exponential learning .however , before tackling this issue in complete generality , it will be quite illustrative to pay a visit to the class of congestion games where the presence of a potential simplifies things considerably . in this way, the results we obtain here should be considered as a motivating precursor to the general case analyzed in section [ sec : equilibrium ] . 
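the corollary can be illustrated numerically on a hypothetical dominance - solvable game in which one strategy of the first player is strictly dominated , its removal renders a strategy of the second player dominated , and a last round of elimination leaves a unique strict equilibrium . the payoffs , the noise level and the integration parameters in the sketch below are assumptions chosen only for the example ; simulating the perturbed score process as before , the mass of every iteratively dominated strategy decays and , in a typical run , the players concentrate on the surviving profile .

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical dominance-solvable game: the third strategy of player 0 is
# strictly dominated by the second; removing it makes the second strategy of
# player 1 dominated; removing that one in turn leaves the first strategy of
# player 0 as the only admissible choice, so (a0, b0) is the unique strict
# equilibrium.
A = np.array([[5.0, 0.0],
              [4.0, 2.0],
              [3.0, 1.0]])                 # payoffs of player 0
B = np.array([[3.0, 1.0],
              [4.0, 3.0],
              [0.0, 5.0]])                 # payoffs of player 1 (B[a, b])
eta, dt, T = 1.0, 1e-3, 300.0

def logit(u):
    w = np.exp(u - u.max())
    return w / w.sum()

U0, U1 = np.zeros(3), np.zeros(2)
for _ in range(int(T / dt)):
    x0, x1 = logit(U0), logit(U1)
    U0 += (A @ x1) * dt + eta * np.sqrt(dt) * rng.standard_normal(3)
    U1 += (B.T @ x0) * dt + eta * np.sqrt(dt) * rng.standard_normal(2)

# in a typical run the iteratively dominated strategies carry negligible mass
# and the players end up concentrated on (a0, b0).
print("player 0:", np.round(logit(U0), 4))
print("player 1:", np.round(logit(U1), 4))
```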
to begin with, it is easy to see that the potential of definition [ def : potential ] is a lyapunov function for the deterministic replicator dynamics .indeed , assume that player is learning at a rate and let be a solution path of the rate - adjusted dynamics ( [ eq : lrd ] ) .then , a simple differentiation of gives \\[-8pt ] & = & -\sum_{i } \lambda_{i } \biggl(\sum_{\alpha } x_{i\alpha } u^{2}_{i\alpha } ( x ) - u_{i}^{2}(x ) \biggr)\leq0,\nonumber\end{aligned}\ ] ] the last step following from jensen s inequality recall that on account of ( [ eq : potential ] ) and also that .in particular , this implies that the trajectories are attracted to the local minima of , and since these minima coincide with the strict equilibria of the game , we painlessly infer that strict equilibrial play is asymptotically stable in ( [ eq : lrd])as mentioned before , we plead guilty to a slight abuse of terminology in assuming that all equilibria in pure strategies are also strict .it is therefore reasonable to ask whether similar conclusions can be drawn in the noisy setting of ( [ eq : slrd ] ) . mirroring the deterministic case , a promising way to go aboutthis question is to consider again the potential function of the game and try to show that it is stochastically lyapunov in the sense of definition [ def : lyapunov ] . indeed ,if is a local minimum of ( and hence , a strict equilibrium of the underlying game ) , we may assume without loss of generality that so that in a neighborhood of .we are thus left to examine the negativity condition of definition [ def : lyapunov ] , that is , whether there exists some such that for all sufficiently close to . to that end ,recall that and that .then , the generator of the rate - adjusted dynamics ( [ eq : slrd ] ) applied to produces \\[-8pt ] & & { } -\sum_{i,\alpha } \frac{\lambda_{i}^{2}}{2 } x_{i\alpha } u_{i\alpha}(x ) \biggl(\eta_{i\alpha}^{2}(1 - 2x_{i\alpha } ) - \sum_{\beta}\eta_{i\beta}^{2 } x_{i\beta}(1 - 2x_{i\beta } ) \biggr),\nonumber\end{aligned}\ ] ] where , for simplicity , we have assumed that the noise coefficients are constant. we will study ( [ eq : genv ] ) term by term by considering the perturbed strategies where belongs to the face of that lies opposite to ( i.e. , , and ) and measures the distance of player from . 
in this way, we get \\ & = & u_{i,0}(x ) -\varepsilon_{i } \sum_{\mu } y_{i\mu } \delta u_{i\mu } + \mathcal o(\varepsilon_{i}^{2 } ) , \nonumber\end{aligned}\ ] ] where .then , by going back to ( [ eq : genv ] ) , we obtain \nonumber\\ & & \qquad= ( 1-\varepsilon_{i } ) u_{i,0}(x ) [ u_{i,0}(x ) - u_{i}(x ) ] \nonumber\\ & & \qquad\quad { } + \varepsilon_{i } \sum_{\mu } y_{i\mu } u_{i\mu } ( x ) [ u_{i\mu}(x ) - u_{i}(x ) ] \nonumber\\ & & \qquad= ( 1-\varepsilon_{i } ) u_{i,0}(x ) \cdot\varepsilon_{i } \sum_{\mu } y_{i\mu } \delta u_{i\mu}\\ & & \qquad\quad{}- \varepsilon_{i } \sum_{\mu } y_{i\mu } u_{i\mu}(q_{0 } ) \delta u_{i\mu } + \mathcal o(\varepsilon_{i}^{2 } ) \nonumber\\ & & \qquad= \varepsilon_{i } \sum_{\mu } y_{i\mu } u_{i,0}(q_{0 } ) \delta u_{i\mu } - \varepsilon_{i } \sum_{\mu } y_{i\mu } u_{i\mu}(q_{0 } ) \delta u_{i\mu } + \mathcal o(\varepsilon_{i}^{2 } ) \nonumber\\ & & \qquad= \varepsilon_{i } \sum_{\mu } y_{i\mu } ( \delta u_{i\mu } ) ^{2 } + \mathcal o(\varepsilon_{i}^{2 } ) .\nonumber\end{aligned}\ ] ] as for the second term of ( [ eq : genv ] ) , some easy algebra reveals that \\[-8pt ] & & \qquad\quad { } + 2(1-\varepsilon_{i})^{2 } \eta_{i,0}^{2 } + 2\varepsilon_{i}^{2}\sum_{\mu } \eta_{i\mu}^{2 } y_{i\mu } ^{2}\nonumber\\ & & \qquad= -\varepsilon_{i } \biggl(\eta_{i,0}^{2 } + \sum_{\mu } y_{i\mu } \eta _ { i\mu } ^{2 } \biggr ) + \mathcal o(\varepsilon_{i}^{2 } ) \nonumber\end{aligned}\ ] ] and , after a ( somewhat painful ) series of calculations , we get \\[-8pt ] & & \qquad= -\varepsilon_{i } u_{i,0}(q_{0 } ) \biggl ( \eta_{i,0}^{2 } + \sum_{\mu}y_{i\mu}\eta_{i\mu}^{2 } \biggr)\nonumber\\ & & \qquad\quad{}+ \varepsilon_{i}\sum_{\mu } y_{i\mu } u_{i\mu}(q_{0 } ) ( \eta_{i\mu } ^{2 } + \eta_{i,0}^{2 } ) + \mathcal o(\varepsilon_{i}^{2 } ) \nonumber\\ & & \qquad= -\varepsilon_{i}\sum_{\mu } y_{i\mu } \delta u_{i\mu } ( \eta _ { i\mu } ^{2 } + \eta_{i,0}^{2 } ) + \mathcal o(\varepsilon_{i}^{2 } ) .\nonumber\end{aligned}\ ] ] finally , if we assume without loss of generality that and set ( i.e. , and for all , ) , we readily get \\[-8pt ] & = & -\sum_{i,\alpha } u_{i\alpha}(q_{0 } ) \xi_{i\alpha } + \mathcal o(\varepsilon^{2 } ) \nonumber\\ & = & \sum_{i}\varepsilon_{i}\sum_{\mu } y_{i\mu}\delta u_{i\mu } + \mathcal o(\varepsilon^{2 } ) , \nonumber\end{aligned}\ ] ] where . therefore , by combining ( [ eq : paystep ] ) , ( [ eq : noisestep ] ) and ( [ eq : potstep ] ) , the negativity condition becomes \nonumber\\[-8pt]\\[-8pt ] & & \qquad\geq k\sum_{i}\varepsilon_{i } \sum_{\mu } y_{i\mu}\delta u_{i\mu } + \mathcal o ( \varepsilon^{2}).\nonumber\end{aligned}\ ] ] hence , if for all , this last inequality will be satisfied for some whenever is small enough .essentially , this proves the following .[ prop : potential ] let be a strict equilibrium of a congestion game with potential function and assume that . assume further that the learning rates are sufficiently small so that , for all and all , then is stochastically asymptotically stable in the rate - adjusted dynamics ( [ eq : slrd ] ) .we thus see that no matter how loud the noise might be , stochastic stability is always guaranteed if the players choose a learning rate that is slow enough as to allow them to average out the noise ( i.e. , ) . of course , it can be argued here that it is highly unrealistic to expect players to be able to estimate the amount of nature s interference and choose a suitably small rate . 
on top of that , the very form of the condition ( [ eq : payvsnoise ] ) is strongly reminiscent of the `` modified '' game of , a similarity which seems to contradict our statement that exponential learning favors rational reactions in the _ original _ game .the catch here is that condition ( [ eq : payvsnoise ] ) is only _ sufficient _ and proposition [ prop : potential ] merely highlights the role of a potential function in a stochastic environment . as we shall see in section [ sec : equilibrium ], nothing stands in the way of choosing a different lyapunov candidate and dropping requirement ( [ eq : payvsnoise ] ) altogether . to gain some further intuition into why the condition ( [ eq : payvsnoise ] ) is redundant , it will be particularly helpful to examine the case where players compete for the resources of only two facilities ( i.e. , for all ) and try to learn the game with the help of the uniform replicator equation ( [ eq : srd ] ) .this is the natural setting for the el farol bar problem and the ensuing minority game where players choose to `` buy '' or `` sell '' and are rewarded when they are in the minority buyers in a sellers market or sellers in an abundance of buyers .as has been shown in , such games always possess strict equilibria , even when players have distinct payoff functions .so , by relabeling indices if necessary , let us assume that is such a strict equilibrium and set .then , the generator of the replicator equation ( [ eq : srd ] ) takes the form \ , \frac{\partial}{\partial x_{i}}\nonumber\\[-8pt]\\[-8pt ] & & { } + \frac{1}{2}\sum_{i}x_{i}^{2}(1-x_{i})^{2 } \eta_{i}^{2}(x)\ , \frac { \partial^{2}}{\partial x_{i}^{2}},\nonumber\end{aligned}\ ] ] where now and .it thus appears particularly appealing to introduce a new set of variables such that ; this is just the `` logit '' transformation : . in these new variables , ( [ eq:2dgen ] ) assumes the astoundingly suggestive guise which reveals that the noise coefficients can be effectively decoupled from the payoffs .we can then take advantage of this by letting act on the function ( ) : hence , if is chosen small enough so that for all sufficiently large [ recall that since is a strict equilibrium ] , we get where . and since is strictly positive for and only vanishes as ( i.e. , at the equilbrium ) , a trivial modification of the stochastic lyapunov method ( see , e.g. , pages 314 and 315 of ) yields the following . [ prop : minority ] the strict equilibria of minority games are stochastically asymptotically stable in the uniform replicator equation ( [ eq : srd ] ) .it is trivial to see that strict equilibria of minority games will also be stable in the rate - adjusted dynamics ( [ eq : slrd ] ) : in that case we simply need to choose such that .a closer inspection of the calculations leading to proposition [ prop : minority ] reveals that nothing hinges on the minority mechanism per se : it is ( [ eq:2dygen ] ) that is crucial to our analysis and takes this form whenever the underlying game is a _ dyadic _ one ( i.e. , for all ) . 
in other words , proposition [ prop : minority ] also holds for all games with 2 strategies and should thus be seen as a significant extension of proposition [ prop : potential ] . [ prop : dyadic ] the strict equilibria of dyadic games are stochastically asymptotically stable in the replicator dynamics ( [ eq : srd ] ) , ( [ eq : slrd ] ) of exponential learning . in deterministic environments , the `` folk theorem '' of evolutionary game theory provides some pretty strong ties between equilibrial play and stability : strict equilibria are asymptotically stable in the multi - population replicator dynamics ( [ eq : rd ] ) . in our stochastic setting , we have already seen that this is always true in two important classes of games : those that can be solved by iterated elimination of dominated strategies ( corollary [ cor : dominance ] ) and dyadic ones ( proposition [ prop : dyadic ] ) . although interesting in themselves , these results clearly fall short of adding up to a decent analogue of the folk theorem for stochastically perturbed games . nevertheless , they are quite strong omens in that direction and such expectations are vindicated in the following . [ thm : stability ] the strict equilibria of a game are stochastically asymptotically stable in the replicator dynamics ( [ eq : srd ] ) , ( [ eq : slrd ] ) of exponential learning . before proving theorem [ thm : stability ] , we should first take a slight detour in order to properly highlight some of the issues at hand . on that account , assume again that the profile is a strict equilibrium of . then , if is to be stochastically stable , say in the uniform dynamics ( [ eq : srd ] ) , one would expect the strategy scores of player to grow much faster than the scores of his other strategies . this is captured remarkably well by the `` adjusted '' scores [ eq : ratescore ] where is a sensitivity parameter akin ( but not identical ) to the learning rates of ( [ eq : slrd ] ) ( the choice of common notation is fairly premeditated though ) . clearly , whenever is large , will be much greater than any other score and hence , the strategy will be employed by player far more often . to see this in more detail , it is convenient to introduce the variables [ eq : ydef ] where is a measure of how close is to and is a direction indicator ; the two sets of coordinates are then related by the transformation , , . consequently , to show that the strict equilibrium is stochastically asymptotically stable in the replicator equation ( [ eq : srd ] ) , it will suffice to show that diverges to infinity as with arbitrarily high probability . our first step in this direction will be to derive an sde for the evolution of the processes .
to that end ,it s lemma gives \\[-8pt ] & = & \sum_{\beta } \biggl ( u_{i\beta}\frac{\partial y_{i\alpha}}{\partial u_{i\beta } } + \frac{1}{2 } \eta_{i\beta}^{2 } \,\frac{\partial^{2 } y_{i\alpha } } { \partial u_{i\beta}^{2 } } \biggr ) \,dt + \sum_{\beta } \eta_{i\beta } \,\frac{\partial y_{i\alpha}}{\partial u_{i\beta } } \,dw_{i\beta},\nonumber\end{aligned}\ ] ] where , after a simple differentiation of ( [ eq : y0 ] ) , we have \end{aligned}\ ] ] and , similarly , from ( [ eq : ym ] ) \end{aligned}\ ] ] \\[-8pt ] \frac{\partial^{2 } y_{i\mu}}{\partial u_{i\nu}^{2 } } & = & \lambda_{i}^{2 } y_{i\mu}(\delta_{\mu\nu } - y_{i\nu})(1 - 2y_{i\nu}).\nonumber\end{aligned}\ ] ] in this way , by plugging everything back into ( [ eq : itoy ] ) we finally obtain [ eq : dy ] \,dt\nonumber\\[-8pt]\\[-8pt ] & & { } + \lambda_{i } y_{i,0 } \biggl [ \eta_{i,0 } \,dw_{i,0 } -\sum_{\mu } \eta_{i\mu } y_{i\mu } \,dw_{i\mu } \biggr],\nonumber\\ \label{eq : dym } dy_{i\mu } & = & \lambda_{i}y_{i\mu } [ u_{i\mu } - \sum_{\nu } u_{i\nu } y_{i\nu } ] \,dt\nonumber\\ & & { } + \frac{\lambda_{i}^{2}}{2}y_{i\mu } \biggl[\eta_{i\mu}^{2}(1 - 2 y_{i\mu } ) -\sum_{\nu}\eta_{i\nu}^{2 } y_{i\nu } ( 1 - 2y_{i\nu } ) \biggr ] \,dt\\ & & { } + \lambda_{i } y_{i\mu } \biggl[\eta_{i\mu } \,dw_{i\mu } - \sum_{\nu } \eta _ { i\nu } y_{i\nu } \,dw_{i\nu } \biggr],\nonumber\end{aligned}\ ] ] where we have suppressed the arguments of and in order to reduce notational clutter . this last sde is particularly revealing : roughly speaking , we see that if is chosen small enough , the deterministic term will dominate the rest ( cf . with the `` soft '' learning rates of proposition [ prop : potential ] ) . and, since we know that strict equilibria are asymptotically stable in the deterministic case , it is plausible to expect the sde ( [ eq : dy ] ) to behave in a similar fashion .proof of theorem [ thm : stability ] tying in with our previous discussion , we will establish stochastic asymptotic stability of strict equilibria in the dynamics ( [ eq : srd ] ) by looking at the processes of ( [ eq : ydef ] ) . in these coordinates, we just need to show that for every and any , there exist such that if , then , with probability greater than , and for all . in the spirit of the previous section , we will accomplish this with the help of the stochastic lyapunov method. our first task will be to calculate the generator of the diffusion , that is , the second order differential operator where and are the drift and diffusion coefficients of the sde ( [ eq : dy ] ) , respectively .in particular , if we restrict our attention to sufficiently smooth functions of the form , the application of yields \ , \frac{\partial f_{i}}{\partial y_{i,0}}\\ & & { } + \frac{1}{2 } \sum_{i\in\mathcal{n } } \lambda_{i}^{2 } y_{i,0}^{2 } \biggl [ \eta_{i,0}^{2 } + \sum_{\mu } \eta_{i\mu}^{2 } y_{i\mu}^{2 } \biggr]\ , \frac{\partial^{2 } f_{i}}{\partial^{2}y_{i,0}}.\nonumber\end{aligned}\ ] ] therefore , let us consider the function for . 
with and , ( [ eq : generator ] ) becomes \\[-8pt ] & & \hspace*{54.1pt } { } - \frac{\lambda_{i}}{2}\sum_{\mu } y_{i\mu}(1-y_{i\mu})\eta _ { i\mu } ^{2 } \biggr].\nonumber\end{aligned}\ ] ] however , since has been assumed to be a strict nash equilibrium of , we will have for all .then , by continuity , there exists some positive constant with whenever is large enough ( recall that ) .so , if we set and pick positive with , we get for all sufficiently large .moreover , is strictly positive for and vanishes only as .hence , as in the proof of proposition [ prop : minority ] , our claim follows on account of being a ( local ) stochastic lyapunov function . finally , in the case of the rate - adjusted replicator dynamics ( [ eq : slrd ] ) , the proof is similar and only entails a rescaling of the parameters .if we trace our steps back to the coordinates , our lyapunov candidate takes the form .it thus begs to be compared to the lyapunov function employed by imhof and hofbauer in to derive a conditional version of theorem [ thm : stability ] in the evolutionary setting . as it turns out , the obvious extension works in our case as well , but the calculations are much more cumbersome and they are also shorn of their ties to the adjusted scores ( [ eq : ratescore ] ) .we should not neglect to highlight the dual role that the learning rates play in our analysis . in the logistic learning model ( [ eq : ratelogit ] ) , they measure the players convictions and how strongly they react to a given stimulus ( the scores ) ; in this role , they are fixed at the outset of the game and form an intrinsic part of the replicator dynamics ( [ eq : slrd ] ) . on the other hand, they also make a virtual appearance as free temperature parameters in the adjusted scores ( [ eq : ratescore ] ) , to be softened until we get the desired result . for this reason ,even though theorem [ thm : stability ] remains true for any choice of learning rates , the function is lyapunov only if the sensitivity parameters are small enough .it might thus seem unfortunate that we chose the same notation in both cases , but we feel that our decision is justified by the intimate relation of the two parameters .our aim in this last section will be to discuss a number of important issues that we have not been able to address thoroughly in the rest of the paper ; truth be told , a good part of this discussion can be seen as a roadmap for future research . 
in single - population evolutionary models ,an evolutionarily stable strategy ( ess ) is a strategy which is robust against invasion by mutant phenotypes .strategies of this kind can be considered as a stepping stone between mixed and strict equilibria and they are of such significance that it makes one wonder why they have not been included in our analysis .the reason for this omission is pretty simple : even the weakest evolutionary criteria in multi - population models tend to reject all strategies which are not strict nash equilibria .therefore , since our learning model ( [ eq : rd ] ) corresponds exactly to the multi - population environment ( [ eq : erd ] ) , we lose nothing by concentrating our analysis only on the strict equilibria of the game .if anything , this equivalence between ess and strict equilibria in multi - population settings further highlights the importance of the latter .however , this also brings out the gulf between the single - population setting and our own , even when we restrict ourselves to 2-player games ( which are the norm in single - population models ) .indeed , the single - population version of the dynamics ( [ eq : asrd ] ) is : \,dt\nonumber\\[-8pt]\\[-8pt ] & & { } + x_{\alpha } \bigl[\eta_{\alpha } \,dw_{\alpha } - \sum\eta_{\beta } x_{\beta } \,dw_{\beta } \bigr].\nonumber\end{aligned}\ ] ] as it turns out , if a game possesses an interior ess and the shocks are mild enough , the solution paths of the ( single - population ) replicator dynamics will be recurrent ( theorem 2.1 in ) . theorem[ thm : stability ] rules out such behavior in the case of strict equilibria ( the multi - population analogue of ess ) , but does not answer the following question : if the underlying game only has mixed equilibria , will the solution of the dynamics ( [ eq : srd ] ) be recurrent ? this question is equivalent to showing that a profile is stochastically asymptotically stable in the replicator equations ( [ eq : srd ] ) , ( [ eq : slrd ] ) only if it is a strict equilibrium . since theorem [ thm : stability ] provides the converse `` if '' part , an answer in the positive would yield a strong equivalence between stochastically stable states and strict equilibria ; we leave this direction to be explored in future papers . for comparison purposes ( but also for simplicity ) ,let us momentarily assume that the noise coefficients do not depend on the state of the game . in that case , it is interesting ( and very instructive ) to note that the sde ( [ eq : score ] ) remains unchanged if we use stratonovich integrals instead of it ones : then , after a few calculations , the corresponding replicator equation reads the form of this last equation is remarkably suggestive .first , it highlights the role of the modified game even more crisply than ( [ eq : srd ] ) : the payoff terms are completely decoupled from the noise , in contrast to what one obtains by introducing stratonovich perturbations in the evolutionary setting .secondly , one can seemingly use this simpler equation to get a much more transparent proof of proposition [ prop : dominated ] : the estimates for the cross entropy terms are recovered almost immediately from the stratonovich dynamics .however , since ( [ eq : stratonovich ] ) takes this form only for constant coefficients ( the general case is quite a bit uglier ) , we chose the route of consistency and employed it integrals throughout our paper . 
before closing , it is worth pointing out the applicability of the above approach to networks where the presence of noise or uncertainty has two general sources .the first of these has to do with the time variability of the connections which may be due to the fluctuations of the link quality because of mobility in the wireless case or because of external factors ( e.g. , load conditions ) in wireline networks .this variability is usually dependent on the state of the network and was our original motivation in considering noise coefficients that are functions of the players strategy profile ; incidentally , it was also our original motivation for considering randomly fluctuating payoffs in the first place : travel times and delays in traffic models are not determined solely by the players choices , but also by the fickle interference of nature .the second source stems from errors in the measurement of the payoffs themselves ( e.g. , the throughput obtained in a particular link ) and also from the lack of information on the payoff of strategies that were not employed .the variability of the noise coefficients again allows for a reasonable approximation to this problem .indeed , if is continuous and satisfies for all , this means that there are only errors in estimating the payoffs of strategies that were not employed ( or small errors for pure strategies that are employed with high probability ) . of course, this does not yet give the full picture [ one should consider the discrete - time dynamical system ( [ eq : discretescore ] ) instead where the players _ actual _ choices are considered ] , but we conjecture that our results will remain essentially unaltered .we would like to extend our gratitude to the anonymous referee for his insightful comments and to david leslie from the university of bristol for the fruitful discussions on the discrete version of the exponential learning model . | we study repeated games where players use an exponential learning scheme in order to adapt to an ever - changing environment . if the game s payoffs are subject to random perturbations , this scheme leads to a new stochastic version of the replicator dynamics that is quite different from the `` aggregate shocks '' approach of evolutionary game theory . irrespective of the perturbations magnitude , we find that strategies which are dominated ( even iteratively ) eventually become extinct and that the game s strict nash equilibria are stochastically asymptotically stable . we complement our analysis by illustrating these results in the case of congestion games . and . |
with the introduction of microarrays , biologists have been witnessing entire labs shrinking to matchbox size . this paper invites quality researchers to join scientists on their _ fantastic journey _ into the world of microscopic high - throughput measurement technologies . building a biological organism as laid out by the genetic code is a multi - step process with room for variation at each step . the first steps , as described by the _ dogma of molecular biology , _ are genes ( and dna sequence in general ) , their transcripts and proteins . substantial factors contributing to their variation in both structure and abundance include cell type , developmental stage , genetic background and environmental conditions . connecting molecular observations to the state of an organism is a central interest in molecular biology . this includes the study of gene and protein functions and interactions , and their alteration in response to changes in environmental and developmental conditions . traditional methods in molecular biology generally work on a `` one gene ( or protein ) in one experiment '' basis . with the invention of _ microarrays _ , huge numbers of such macromolecules can now be monitored in one experiment . the most common kinds are _ gene expression microarrays , _ which measure the mrna transcript abundance for tens of thousands of genes simultaneously . for biologists , this high - throughput approach has opened up entirely new avenues of research . rather than experimentally confirming the hypothesized role of a certain candidate gene in a certain cellular process , they can use genome - wide comparisons to screen for all genes which might be involved in that process . one of the first examples of such an exploratory approach is the expression profiling study of mitotic yeast cells by , which determined a set of a few hundred genes involved in the cell cycle and triggered a cascade of articles re - analyzing the data or replicating the experiment . microarrays have become a central tool in cancer research , initiated by the discovery and re - definition of tumor subtypes based on molecular signatures ( see e.g. , , , ) . in section [ b ] we will explain different kinds of microarray technologies in more detail and describe their current applications in life sciences research . a dna microarray consists of a glass surface with a large number of distinct fragments of dna called probes attached to it at fixed positions . a fluorescently labelled sample containing a mixture of unknown quantities of dna molecules called the target is applied to the microarray . under the right chemical conditions , single - stranded fragments of target dna will base pair with the probes which are their complements , with great specificity . this reaction is called hybridization , and is the reason dna microarrays work . the fixed probes are either fragments of dna called complementary dna ( cdna ) obtained from messenger rna ( mrna ) , or short fragments known to be complementary to part of a gene , spotted onto the glass surface or synthesized in situ . the point of the experiment is to quantify the abundance in the target of dna complementary to each particular probe , and the hybridization reaction followed by scanning allows this to be done on a very large scale . the raw data produced in a microarray experiment consists of scanned images , where the image intensity in the region of a probe is proportional to the amount of labelled target dna that base pairs with that probe .
in this way we can measure the abundance of thousands of dna fragments in a target sample .microarrays based on cdna or long oligonucleotide probes typically use just one or a few probes per gene .the same probe sequence spotted in different locations , or probe sequences complementary to different parts of the same gene can be used to give within array replication .short oligonucleotide microarrays typically use a larger number per gene , e.g. 11 for the hu133 affymetrix array per gene .such a set of 11 is called probeset for that gene , and the probes in a probe set are arranged randomly over the array . in the biological literature, microarrays are also referred to as ( gene ) chips or slides . when the first microarray platforms were introduced in the early 90s ,the most intriguing fact about them was the sheer number of genes that could be assayed simultaneously .assays that used to be done one gene at a time , could suddenly be produced for thousands of genes at once .a decade later , high - density microarrays would even fit entire genomes of higher organisms .after the initial euphoria , the research community became aware that findings based solely on microarray measurements were not always as reproducible as they would have liked and that studies with inconclusive results were quite common . with this high - throughput measurement technology becoming established in many branches of life sciences research , scientists in both academic and corporate environments raised their expectations concerning the validity of the measurements .data quality issues are now frequently addressed at meetings of the microarray gene expression database group ( mged ) .the _ microarray quality control project , _ a community - wide effort , under the auspices of the u.s . food and drug administration ( fda ) , is aiming at establishing _ operational metrics _ to objectively assess the performance of seven microarray platform and develop minimal quality standards .their assessment is based on the performance of a set of standardized external rna controls .the first formal results of this project have been published in a series of articles in the september 2006 issue of _ nature biotechnology ._ assessing the _ quality of microarray data _ has emerged as a new research topic for statisticians . in this paper , we conceptualize microarray data quality issues from a perspective which includes the technology itself as well as their practical use by the research community .we characterize the nature of microarray data from a quality assessment perspective , and we explain the different levels of microarray data quality assessment. then we focus on short oligonucleotide microarrays to develop a set of specific statistical data quality assessment methods including both numerical measures and spatial diagnostics .assumptions and hopes about the quality of the measurements have become a major issue in microarray purchasing . despite their substantially higher costs , affymetrix short oligonucleotide microarrayshave become a widespread alternative to cdna chips .informally , they are considered the industrial standard among all microarray platforms . 
more recently , agilent s non - contact printed high - density cdna microarrays and illumina s bead arrays have fueled the competition for high quality chips . scientists feel the need for systematic quality assessment methods allowing them to compare different laboratories , different chip generations , or different platforms . they even lack good methods for selecting chips of good enough quality to be included in statistical data analysis beyond preprocessing . we have observed several questionable practices in the recent past :

* skipping hybridization qa / qc altogether
* discarding entire batches of chips following the detection of a few poor quality chips
* basing hybridization qa / qc on raw data rather than data that has already had large - scale technical biases removed
* delaying any qa / qc until all hybridizations are completed , thereby losing the opportunity to remove specific causes of poor quality at an early stage
* focussing on validation by another measurement technology ( e.g. , quantitative pcr ) in publication requirements rather than addressing the quality of the microarray data in the first place
* merging data of variable quality into one database , with the inherent risk of swamping it with poor quality data ( as this is produced at a faster rate due to fewer replicates , fewer quality checks , less re - doing of failed hybridizations etc . )

the community of microarray users has not yet agreed on a framework to measure accuracy or precision in microarray experiments . without universally accepted methods for quality assessment , and guidelines for acceptance , statisticians judgements about data quality may be perceived as arbitrary by experimentalists . users expectations as to the level of gene expression data quality vary substantially . they can depend on time frame and financial constraints , as well as on the purpose of their data collection . , p.120/21 , explained the standpoint of the applied scientist : + _ he knows that if he were to act upon the meagre evidence sometimes available to the pure scientist , he would make the same mistakes as the pure scientist makes in estimates of accuracy and precision . he also knows that through his mistakes someone may lose a lot of money or suffer physical injury or both . [ ... ] he does not consider his job simply that of doing the best he can with the available data ; it is his job to get enough data before making this estimate .
_ + following this philosophy , microarray data used for medical diagnostics should meet high quality standards .in contrast , microarray data collected for a study of the etiology of a complex genetic disease in a heterogeneous population , one may decide to tolerate lower standards at the level of individual microarrays and invest the resources in a larger sample size .scientists need informative quality assessment tools to allow them to choose the most appropriate technology and optimal experimental design for their precision needs , within their time and budget constraints .the explicit goals of quality assessment for microarrays are manifold .which goals can be envisioned depends on the resources and time horizon and on the kind of user single small user , big user , core faculity , multi - center study , or `` researcher into quality '' .the findings can be used to simply exclude chips from further study or recommend to have samples reprocessed .they can be imbedded in a larger data quality management and improvement plan .typical quality phenomena to look for include : * outlier chips * trends or patterns over time * effects of particular hybridization conditions and sample characteristics * changes in quality between batches of chips , cohorts of samples , lab sites etc .* systematic quality differences between subgroups of a study some aspects of quality assessment and control for cdna have been discussed in the literature . among these , and emphasize the need for quality control and replication . define a quality score for each spot based on intensity characteristics and spatial information , while approach this with baysian networks . and explicit statistical quality measures based on individual spot observations using the image analysis software spot from . multivariate statistical process control to detect single outlier chips .the preprocessing and data management software package arraymagic of includes quality diagnostics .the book by is a comprehensive collection of quality assessment and control issues concerning the various stages of cdna microarray experiments including sample preparation , all from an experimentalist s perspective . suggest spot quality scores based on the variance of the ratio estimates of replicates ( on the same chip or on different chips ) .spatial biases have also been addressed .in examining the relationship between signal intensity and print - order , reveals a plate - effect .the normalization methodology by incorporates spatial information such as print - tip group or plate , to remove spatial biases created by the technological processes . and found pairwise correlations between genes due to their relative positioning of the spots on the slide and suggest a localized mean normalization method to adjust for this . proposed a method of identifying poor quality spots , and of addressing this by assigning quality weights . 
developed an approach for the visualization and quantitation of regional bias applicable to both cdna and affymetrix microarrays .for affymetrix arrays , the commercial software includes a _ quality report _ with a dozen scores for each microarray ( see subsection [ d_a ] ) .none of them makes use of the gene expression summaries directly , and there are no universally recognized guidelines as to which range should be considered good quality for each of the gcos quality scores .users of short oligonucleotide chips have found that the quality picture delivered by the gcos quality report is incomplete or not sensitive enough , and that it is rarely helpful in assigning causes to poor quality .the literature on quality assessment and control for short oligonucleotide arrays is still sparse , though the importance of the topic has been stressed in numerous places , and some authors have addressed have looked at specific issues .an algorithm for probeset quality assessment has been suggested by . transfer the weight of a measurement to a subset of probes with optimal linear response at a given concentration . investigate the effect of updating the mapping of probes to genes on the estimated expression values . define four types of degenerate probe behaviour based on free energy computations and pattern recognition . evaluated the affymetrix quality reports of over 5,000 chips collected by st .jude children s research hospital over a period of three years , and linked some quality trends to experimental conditions . extend traditional effect size models to combine data from different microarray experiments , incorporating a quality measure for each gene in each study .the detection of specific quality issues such as the extraction , handling and amount of rna , has been studied by several authors ( e.g. , , , ) . before deriving new methods for assessing microarray data quality, we will relate the issue to established research into data quality from other academic disciplines , emphasizing the particular characteristics of microarray data ( section [ d_c ] ) . a conceptual approach to the statistical assessment of microarray data qualityis suggested in subsection [ d_a ] , and is followed by a summary of the existing quality measures for affymetrix chips .the theoretical basis of this paper is section [ m ] , where we introduce new numerical and spatial quality assessment methods for short oligonucleotide arrays . two important aspects of our approach are : * the quality measures are based on _ all the data _ from the array . *the quality measures are computed after hybridization and data preprocessing .more specifically , we make use of probe level and probeset level quantities obtained as by - products of the robust multichip analysis ( rma / fitplm ) preprocessing algorithm presented in , and . our _ quality landscapes _serve as tools for visual quality inspection of the arrays after hybridization .these are two dimensional pseudo - images of the chips based on probe level quantities , namely the weights and residuals computed by rma / fitplm .these quality landscapes allow us to immediately relate quality to an actual location on the chip , a crucial step in detecting special causes for poor chip quality .our numerical quality assessment is based on two distributions computed at the probeset level , the_ normalized unscaled standard error ( nuse ) _ and the _ relative log expression ( rle ) . 
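to fix ideas , the following sketch shows how these two probeset - level summaries can be tabulated once a probe - level model has been fitted ; it is not the bioconductor implementation referred to below , and the array shapes , variable names and simulated numbers are assumptions used only to display the computation . for each probeset the unscaled standard errors are divided by their median across chips ( nuse ) , and the log2 expression values are centered at their probeset - wise median ( rle ) , so that , under the usual assumption that most genes do not change across chips , comparable chips show nuse distributions centered near 1 and rle distributions centered near 0 .

```python
import numpy as np

def nuse(se):
    """normalized unscaled standard errors: for every probeset, the standard
    errors of its expression estimates are divided by their median across
    chips, so a chip whose nuse values sit well above 1 is noticeably less
    precise than the rest of the set."""
    return se / np.median(se, axis=1, keepdims=True)

def rle(log_expr):
    """relative log expression: every log2 expression value is compared with
    the median of its probeset across chips; if most genes do not change,
    a good-quality chip has an rle distribution centred near 0 and tight."""
    return log_expr - np.median(log_expr, axis=1, keepdims=True)

# toy usage with simulated summaries for 5,000 probesets on 6 chips
rng = np.random.default_rng(0)
log_expr = rng.normal(8.0, 2.0, size=(5000, 6))
se = np.abs(rng.normal(0.25, 0.05, size=(5000, 6)))

print("median nuse per chip:", np.round(np.median(nuse(se), axis=0), 3))
print("median rle per chip: ", np.round(np.median(rle(log_expr), axis=0), 3))
```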
given that a fairly general biological assumption is fulfilled , these distributions can be interpreted for chip quality assessment . we further suggest ways of conveniently visualizing and summarizing these distributions for larger chip sets and of relating this quality assessment to other factors in the experiment , to permit the detection of special causes for poor quality and to reveal biases . quality of gene expression data can be assessed on a number of levels , including that of probeset , chip and batch of chips . another aspect of quality assessment concerns batches of chips . we introduce the _ residual scale factor ( rsf ) , _ a measure of chip batch quality . this allows us to compare quality across batches of chips within an experiment , or across experiments . all our measures can be computed for all available types of short oligonucleotide chips given the raw data ( cel file ) for each chip and the matching cdf file . software packages are described in and available at www.bioconductor.org . in section [ r ] we extensively illustrate and evaluate our quality assessment methods on the experimental microarray datasets described in section [ s ] . to reflect the fact that quality assessment is a necessary and fruitful step in studies of any kind , we use a variety of datasets , involving tissues ranging from fruit fly embryos to human brains , and from academic , clinical , and corporate labs . we show how quality trends and patterns can be associated with sample characteristics and/or experimental conditions , and we compare our measures with the affymetrix gcos quality report . after the hunt for new genes dominated genetics in the 80s and 90s of the last century , there has been a remarkable shift in molecular biology research goals towards a comprehensive understanding of the function of macromolecules on different levels in a biological organism . how and to what extent do genes control the construction and maintenance of the organism ? what is the role of intermediate gene products such as _ rna transcripts _ ? how do the macromolecules interact with others ? the latter may refer to _ horizontal _ interaction , such as genes with genes , or proteins with proteins . it may also refer to _ vertical _ interaction , such as between genes and proteins . _ genomics _ and _ proteomics _ , in professional slang summarized as _ omics sciences _ , have started to put an emphasis on _ functions . _ at the same time , these research areas have become more quantitative , and they have broadened the perspective in the sense of observing huge numbers of macromolecules simultaneously . these trends have been driven by recent biotechnological inventions , the most prominent ones being _ microarrays . _ with these _ high - throughput _ molecular measurement instruments , the relative concentration of huge numbers of macromolecules can be obtained simultaneously in one experiment .
this section will give an overview of the biological background and the applications of microarrays in biomedical research. for an extended introduction to _omics sciences_ and to microarray-based research we refer to the excellent collections of articles in the three nature genetics supplements _the chipping forecast i, ii, iii_ (1999, 2002, 2005) and to the recent review paper by . though the popular belief about genes is still very deterministic (once they are put into place, they function in a preprogrammed, straightforward way), for biologists the effect of a gene is variable. most cells in an organism contain essentially the same set of genes. however, cells will look and act differently depending on which organ they belong to, the state of the organ (e.g. healthy vs. diseased), the developmental stage of the cell, or the phase of the cell cycle. this is predominantly the result of differences in the abundance, distribution, and state of the cells' proteins. according to the _central dogma of molecular biology_, the production of proteins is controlled by dna (for simplicity, the exceptions to this rule are omitted here). proteins are polymers built up from 20 different kinds of amino acids. genes are _transcribed_ into dna-like macromolecules called _messenger rna (mrna)_, which goes from the chromosomes to the _ribosomes._ there, _translation_ takes place, converting mrna into the amino acid chains which fold into proteins. the term _gene expression_ is defined as the relative concentration of mrna and protein produced by a gene. depending on the context, however, it is often used to refer to only one of the two. the _gene expression profile_ of a type of cell usually refers to the relative abundance of each of the mrna species in the total cellular mrna population. from a practical point of view, in particular in many areas of medical research, protein abundance is seen as generally more interesting than mrna abundance. protein abundance, however, is still much more difficult to measure on a large scale than mrna abundance. there is one property which is peculiar to nucleic acids: their complementary structure. dna is reliably replicated by separating the two strands, and complementing each of the single strands to give a copy of the original dna. the same mechanism can be used to detect a particular dna or rna sequence in a mixed sample. the first tool to measure gene expression in a sample of cells was introduced in 1975. the _southern blot,_ named for its inventor, is a multi-stage laboratory procedure which produces a pattern of bands representing the activity of a small set of pre-selected genes. during the 1980s, spotted arrays on nylon holding bacterial colonies carrying different genomic inserts were introduced. in the early 1990s, the latter were exchanged for pre-identified cdnas. the introduction of _gene expression microarrays_ on glass slides in the mid-1990s brought a substantial increase in feature density. with the new technology, gene expression measurements could be taken in parallel for thousands of genes. modern microarray platforms even assess the expression levels of tens of thousands of genes simultaneously. a gene expression microarray is a small piece of glass onto which _a priori_ known dna fragments called _probes_ are attached at fixed positions.
in a chemical process called _hybridization,_ the microarray is brought into contact with material from a sample of cells. each probe binds to its complementary counterpart, an mrna molecule (or a complementary dna copy) from the sample, which we refer to as the _target._ the hybridization reaction product is made visible using fluorescent dyes or other (e.g. radioactive) markers, which are applied to the sample prior to hybridization. the readout of the microarray experiment is a scanned image of the labelled dna. microarrays are specially designed to interrogate the genomes of particular organisms, and so there are yeast, fruit fly, worm and human arrays, to name just a few. there are three major platforms for microarray-based gene expression measurement: _spotted two-color cdna arrays,_ _long oligonucleotide arrays_ and _short oligonucleotide arrays._ in the platform-specific parts of this paper we will focus on the latter. on a short oligonucleotide microarray, each gene is represented on the array by a _probe set_ that uniquely identifies the gene. the individual probes in the set are chosen to have relatively uniform hybridization characteristics. in the affymetrix hu133 arrays, for example, each probe set consists of 11 to 20 probe sequence pairs. each pair consists of a _perfect match (pm)_ probe, a 25-base-long oligonucleotide that matches a part of the gene's sequence, and a corresponding _mismatch (mm)_ probe that has the same sequence as the pm except that the center base is flipped to its complementary letter. the mm probes are intended to give an estimate of the random hybridization and cross-hybridization signals; see and for more details. other affymetrix gene expression arrays may differ from the hu133 in the number of probes per probe set. exon arrays do not have mm probes. most of the arrays produced by nimblegen are composed of 60mer probes, but some use 25mer probes. the number of probes per probeset is adapted to the total number of probesets on the array to make optimal use of the space. besides being more efficient than the classical gene-by-gene approach, microarrays open up entirely new avenues for research. they offer a comprehensive and cohesive approach to measuring the activity of the genome. in particular, this fosters the study of interactions. a typical goal of a microarray-based research project is the search for genes that behave differently between different cell populations. some of the most common examples of comparisons are diseased vs. healthy cells, injured vs. healthy tissue, young vs. old organism, treated vs. untreated cells. more explicitly, life sciences researchers try to find answers to questions such as the following: which genes are affected by environmental changes or in response to a drug? how do the gene expression levels differ across various mutants? what is the gene expression signature of a particular disease? which genes are involved in each stage of a cellular process? which genes play a role in the development of an organism?
or, more generally, which genes vary their activity with time? the principle of microarray measurement technology has been used to assess molecules other than mrna. a number of platforms are currently at various stages of development (see review by ). snp chips detect single nucleotide polymorphisms. they are an example of a well-developed microarray-based genotyping platform. cgh arrays are based on comparative genomic hybridization. this method permits the analysis of changes in gene copy number for huge numbers of probes simultaneously. a recent modification, representational oligonucleotide microarray analysis (roma), offers substantially better resolution. both snp chips and cgh arrays are genome-based methods, which, in contrast to the gene expression-based methods, can exploit the stability of dna. the most common application of these technologies is the localization of disease genes based on association with phenotypic traits. antibody protein chips are used to determine the level of proteins in a sample by binding them to antibody probes immobilized on the microarray. this technology is still considered semi-quantitative, as the different specificities and sensitivities of the antibodies can lead to an inhomogeneity between measurements that, so far, cannot be corrected for. the applications of protein chips are similar to the ones of gene expression microarrays, except that the measurements are taken one step further downstream. more recent platforms address multiple levels at the same time. chip-on-chip, also known as genome-wide location analysis, is a technique for isolation and identification of the dna sequences occupied by specific dna binding proteins in cells. the still growing list of statistical challenges stimulated by microarray data is a _tour d'horizon_ in applied statistics; see e.g. , and for broad introductions. from a statistical point of view a microarray experiment has three main challenges: (i) the measurement process is a multi-step biochemical and technological procedure (array manufacturing, tissue acquisition, sample preparation, hybridization, scanning), with each step contributing to the variation in the data; (ii) huge numbers of measurements of different (correlated) molecular species are taken in parallel; (iii) gold standards covering a representative part of these species are unavailable. statistical methodology has primarily been developed for gene expression microarrays, but most of the conceptual work applies directly to many kinds of microarrays, and many of the actual methods can be transferred to other microarray platforms fitting the characteristics listed above. the first steps of the data analysis, often referred to as _preprocessing_ or _low level analysis_, are the most platform-dependent tasks. for two-color cdna arrays this includes image analysis (see e.g. ) and normalization (see e.g. ). for short oligonucleotide chip data this includes normalization (see e.g. ) and the estimation of gene expression values (see e.g. and as well as subsequent papers by these groups). questions around the design of microarray experiments are mostly relevant for two-color platforms (see e.g. ch.
2 in , and further references there). analysis beyond the preprocessing steps is often referred to as _downstream analysis._ the main goal is to identify genes which act differently in different types of samples. exploratory methods such as classification and cluster analysis have quickly gained popularity for microarray data analysis. for reviews on such methods from a statistical point of view see e.g. ch. 2 and ch. 3 in and ch. 3-7 in . on the other side of the spectrum, hypothesis-driven inferential statistical methods are now well established and used. this approach typically takes a single-gene perspective in the sense that it searches for _individual_ genes that are expressed differentially across changing conditions; see e.g. . the main challenge is the imprecision of the gene-specific variance estimate, a problem that has been tackled by strategies incorporating a gene-unspecific component into the estimate; see e.g. , , and references therein, and for the case of microarray time course data. testing thousands of potentially highly correlated genes at the same time with only a few replicates raises a substantial multiple testing problem that has been systematically addressed by various authors incorporating benjamini and hochberg's _false discovery rate (fdr)_; see e.g. and the review . the joint analysis of pre-defined groups of genes based on _a priori_ knowledge has become an established alternative to the genome-wide exploratory approaches and the gene-by-gene analysis; see e.g. and . while methodology for microarray data analysis has become a fast growing research area, the epistemological foundation of this research area shows gaps. among other issues, addresses the problem of simultaneous validation of research results and research methods . offer a review of the main approaches to microarray data analysis developed so far and attempt to unify them. many software packages for microarray data analysis have been made publicly available by academic researchers. in particular, there is the bioconductor project, a community-wide effort to maintain a collection of r-packages for genomics applications at www.bioconductor.org. many of the main packages are described in . data quality is a well established aspect of many quantitative research fields. the most striking difference between assessing the quality of a measurement and assessing the quality of a manufactured item is the additional layer of uncertainty. concerns around the accuracy of measurements have a long tradition in physics and astronomy; the entire third chapter of the classical book is devoted to this field. biometrics, psychometrics, and econometrics developed around similar needs, and many academic fields have grown a strong quantitative branch, all of them facing data quality questions. clinical trials is a field that is increasingly aware of the quality of large data collections (see and other papers in this special issue). with its recent massive move into the quantitative field, functional genomics gave birth to what some statisticians call _genometrics._ we now touch on the major points that characterize gene expression microarray data from the point of view of qa/qc. these points apply to other high-dimensional molecular measurements as well.
* *unknown kind of data:* being a new technology in the still unknown terrain of functional genomics, microarrays produce datasets with few known statistical properties, including the shape of the distribution, the magnitude and variance of the gene expression values, and the kind of correlation between the expression levels of different genes. this limits access to existing statistical methods. * *simultaneous measurements:* each microarray produces measurements for thousands of genes simultaneously. if we measured just one gene at a time, some version of shewhart control charts could no doubt monitor quality. if we measured a small number of genes, multivariate extensions of control charts might be adequate. in a way, the use of control genes is one attempt by biologists to scale down the task to a size that can be managed by these classical approaches. control genes, however, cannot be regarded as typical representatives of the set of all the genes on the arrays. gene expression measures are correlated because of both the biological interaction of genes and dependencies caused by the common measurement process. biologically meaningful correlations between genes can potentially ``contaminate'' hybridization quality assessment. * *multidisciplinary teams:* microarray experiments are typically planned, conducted and evaluated by a team which may include scientists, statisticians, technicians and physicians. in the interdisciplinarity of the data production and handling, they are similar to large datasets in other research areas. for survey data, names the risk associated with such a _``mélange of workers''_. among other things, he mentions radically different purposes, lack of communication, disagreements on the priorities among the components of quality, and concentration on the ``error of choice'' in their respective discipline. the encouragement of a close cooperation between scientists and statisticians in the care for measurement quality goes all the way back to , p. 70/71: _``where does the statistician's work begin? [...] before one turns over any sample of data to the statistician for the purpose of setting tolerances he should first ask the scientist (or engineer) to cooperate with the statistician in examining the available evidence of statistical control. the statistician's work solely as a statistician begins after the scientist has satisfied himself through the application of control criteria that the sample has arisen under statistically controlled conditions.''_ * *systematic errors:* as pointed out by , and, in the context of clinical trials, by , systematic errors in large datasets are much more relevant than random errors. microarrays are typically used in studies involving different experimental or observational groups. quality differences between the groups are a potential source of confounding. * *heterogeneous quality in data collections:* often microarray data from different sources are merged into one data collection. this includes different batches of chips within the same experiment, data from different laboratories participating in a single collaborative study, or data from different research teams sharing their measurements with the wider community.
depending on the circumstances, the combination of data typically takes place on one of the following levels: raw data, preprocessed data, gene expression summaries, lists of selected genes. typically, no quality measures are attached to the data. even if data are exchanged at the level of cel files, heterogeneity can cause problems. some laboratories filter out chips or reprocess the samples that were hybridized to chips that did not pass screening tests; others do not. these are decision processes that ideally should take place according to the same criteria. the nature of this problem is well known in data bank quality or _data warehousing_ (see e.g. , , ). * *re-use of shared data:* gene expression data are usually generated and used to answer a particular set of biological questions. data are now often being placed on the web to enable the general community to verify the analysis and try alternative approaches to the original biological question. data may also find a secondary use in answering modified questions. the shifted focus potentially requires a new round of qa/qc, as precision needs might have changed, and artifacts and biases that did not interfere with the original goals of the experiment may do so now. * *across-platform comparison:* , p. 112, already values the consistency between different measurement methods higher than consistency in repetition. for microarrays, consistency between the measurements of two or more platforms (two-color cdna, long oligonucleotide, short oligonucleotide (affymetrix), commercial cdna (agilent), and real-time pcr) on rna from the same sample has been addressed in a number of publications. some of the earlier studies show little or no agreement (e.g. , , , ), while others report mixed results (e.g. , , ). more recent studies improved the agreement between platforms by controlling for other factors. and restrict comparisons to subsets of genes above the noise level . use sequence-based matching of probes instead of gene identifier-based matching . , and use superior preprocessing methods and systematically distinguish the lab effect from the platform effect; see and for detailed reviews and further references. for affymetrix arrays, , and found inter-laboratory differences to be manageable. however, merging data from different generations of affymetrix arrays is not as straightforward as one might expect (e.g. , , , , ). quality assessment for microarray data can be studied on at least seven levels: * the raw chip (pre-hybridization) * the sample * the experimental design * the multi-step measurement process * the raw data (post-hybridization) * the statistically preprocessed microarray data * the microarray data as entries in a databank. the last two items are the main focus of this paper. the quality of the data after statistical processing (which includes background adjustment, normalization and probeset summarization) is greatly affected by, but not entirely determined by, the quality of the preceding five aspects. the raw microarray data (5) are the result of a multi-step procedure.
in the case of the expression microarrays this includes converting mrna in the sample to cdna, labelling the target mrna via an in vitro transcription step, fragmenting and then hybridizing the resulting crna to the chip, washing and staining, and finally scanning the resulting array. temperature during storage and hybridization, the amount of sample, and mixing during hybridization all have a substantial impact on the quality of the outcome. seen as a multi-step process (4), the quality management for microarray experiments has a lot in common with chemical engineering, where numerous interwoven quality indicators have to be integrated (see e.g. ). the designer of the experiment (3) aims to minimize the impact of additional experimental conditions (e.g. hybridization date) and to maximize accuracy and precision for the quantities having the highest priority, given the primary objectives of the study. sample quality (2) is a topic in its own right, strongly tied to the organism and the institutional setting of the study. the question of how sample quality is related to the microarray data has been investigated in , based on a variety of rna quality measures and chip quality measures, including both affymetrix scores and ours. the chip before hybridization (1) is a manufactured item. the classical theory of quality control for industrial mass production founded by provides the appropriate framework for the assessment of the chip quality before hybridization. the affymetrix software gcos presents some chip-wide quality measures in the expression report (rpt file). they can also be computed by the bioconductor r package simpleaffy described in . the document ``qc and affymetrix data'' contained in this package discusses how these metrics can be applied. the quantities listed below are the most commonly used ones from the affymetrix report (descriptions and guidelines from and ). while some ranges for the values are suggested, the manuals mainly emphasize the importance of _consistency_ of the measures within a set of jointly analyzed chips using similar samples and experimental conditions. the users are also encouraged to look at the scores in conjunction with other scores. * *average background:* average of the lowest 2% of cell intensities on the chip. affymetrix does not issue official guidelines, but mentions that values typically range from 20 to 100 for arrays scanned with the genechip scanner 3000. a high background indicates the presence of nonspecific binding of salts and cell debris to the array. * *raw q (noise):* measure of the pixel-to-pixel variation of probe cells on the chip. the main factors contributing to noise values are electrical noise of the scanner and sample quality. older recommendations give a range of 1.5 to 3. newer sources, however, do not issue official guidelines because of the strong scanner dependence. they recommend that data acquired from the same scanner be checked for comparability of noise values. * *percent present:* the percentage of probesets called _present_ by the affymetrix detection algorithm. this value depends on multiple factors including cell/tissue type, biological or environmental stimuli, probe array type, and overall quality of rna. replicate samples should have similar percent present values. extremely low percent present values indicate poor sample quality. a general rule of thumb is that human and mouse chips typically have 30-40 percent present, and yeast and e.
coli have 70-90 percent present. * *scale factor:* multiplicative factor applied to the signal values to make the 2% trimmed mean of signal values for selected probe sets equal to a constant. for the hu133 chips, the default constant is 500. no general recommendation for an acceptable range is given, as the scale factors depend on the constant chosen for the scaling normalization (depending on user and chip type). * *gapdh 3' to 5' ratio (gapdh 3'/5'):* ratio of the intensity of the 3' probe set to the 5' probe set for the gene gapdh. it is expected to be an indicator of rna quality. the value should not exceed 3 (for the 1-cycle assay). * *perfect match (pm):* the distribution of the (raw) pm values. while we do not think of this as a full quality assessment measure, it can indicate particular phenomena such as brightness or dimness of the image, or saturation. using this tool in combination with other quality measures can help in detecting and excluding technological reasons for poor quality. a convenient way to look at the pm distributions for a number of chips is to use boxplots. alternatively, the data can be summarized on the chip level by two single values: the median of the pm of all probes on the chip, abbreviated _med(pm),_ and the interquartile range of the pm of all probes on the chip, denoted by _iqr(pm)._ our other assessment tools use probe level and probeset level quantities obtained as a by-product of the robust multichip analysis (rma) algorithm developed in , and . we now recall the basics about rma and refer the reader to the above papers for details. consider a fixed probeset. let $y_{ij}$ denote the intensity of probe $i$ from this probeset on chip $j$, usually already background corrected and normalized. rma is based on the model ([rmamodel]) $\log_2(y_{ij}) = a_i + \theta_j + \varepsilon_{ij}$, with a _probe affinity effect_ $a_i$, $\theta_j$ representing the log-scale expression level for chip $j$, and an i.i.d. centered error $\varepsilon_{ij}$ with standard deviation $\sigma$. for identifiability of the model, we impose a zero-sum constraint on the $a_i$. the number of probes in the probeset depends on the kind of chip (e.g. 11 for the hu133 chip).
for a fixed probeset, rma robustly fits the model using iteratively reweighted least squares and delivers a probeset expression index for each chip. the analysis produces residuals $r_{ij}$ and weights $w_{ij}$ attached to probe $i$ on chip $j$. the weights are used in the irls algorithm to achieve robustness. probe intensities which are discordant with the rest of the probes in the set are deemed less reliable and downweighted. the collective behaviour of all the weights (or all the residuals) on a chip is our starting point in developing post-hybridization chip quality measures. we begin with a ``geographic'' approach, images of the chips that highlight potentially poorly performing probes, and then continue with the discussion of numerical quality assessment methods. * *quality landscapes:* an image of a hybridized chip can be constructed by shading the positions in a rectangular grid according to the magnitude of the perfect match in the corresponding position on the actual chip. in the same way, the positions can be colored according to probe-level quantities other than the simple intensities. a typical color code is to use shades of red for positive residuals and shades of blue for negative ones, with darker shades corresponding to higher absolute values. shades of green are used for the weights, with darker shades indicating lower weights. as the weights are in a sense the reciprocals of the absolute residuals, the overall information gained from these two types of quality landscapes is the same. in some particular cases, the sign of the residuals can help to detect patterns that otherwise would have been overlooked (see both fruit fly datasets in section [r] for examples). if no colors are available, gray level images are used. this has no further implications for the weight landscapes. for the residual landscapes, note that red and blue shades are translated into similar gray levels, so the sign of the residuals is lost. positive and negative residuals can be plotted on two separate images to avoid this problem. * *normalized unscaled standard error (nuse):* fix a probeset. let $\hat\sigma$ be the estimated residual standard deviation in model ([rmamodel]) and $W_j = \sum_i w_{ij}$ the _total probe weight_ (of the fixed probeset) in chip $j$. the expression value estimate for the fixed probeset on chip $j$ is the fitted $\hat\theta_j$, and its standard error is given by $se(\hat\theta_j) = \hat\sigma / \sqrt{W_j}$. the residual standard deviations vary across the probesets within a chip. they provide an assessment of the overall goodness of fit of the model to probeset data for all chips used to fit the model, but provide no information on the relative precision of estimated expressions across chips. the latter, however, is our main interest when we look into the quality of a chip compared to other chips in the same experiment. replacing the $\hat\sigma$ by $1$ gives what we call the _unscaled standard error (use)_ of the expression estimate. another source of heterogeneity is the number of ``effective'' probes, in the sense of being given substantial weight by the rma fitting procedure. that this number varies across probesets is obvious when different numbers of probes per probeset are used on the same chip. another reason is dysfunctional probes, that is, probes with high variability, low affinity, or a tendency to cross-hybridize.
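to make the quantities just introduced concrete, the following is a minimal numerical sketch of a robust fit for a single probeset. it uses a simple huber-type irls scheme and is a simplification of, not a substitute for, the actual fitplm implementation; all function and variable names are placeholders.

```python
import numpy as np

def fit_probeset_plm(y, n_iter=20, k=1.345):
    """Robustly fit log2(y[i, j]) = a[i] + theta[j] + eps[i, j] for one probeset.

    y : (I, J) array of background-corrected, normalized intensities
        (I probes, J chips).
    Returns chip effects theta, probe effects a, robustness weights w,
    residuals r, the residual scale sigma, and the per-chip standard error
    se = sigma / sqrt(total probe weight).  A simplified Huber IRLS sketch,
    not the fitPLM algorithm itself.
    """
    z = np.log2(y)
    a = np.zeros(z.shape[0])        # probe affinity effects (constrained to sum to 0)
    theta = z.mean(axis=0)          # chip effects (log-scale expression)
    w = np.ones_like(z)             # robustness weights
    for _ in range(n_iter):
        r = z - a[:, None] - theta[None, :]            # residuals
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)               # Huber weights: discordant probes downweighted
        # alternating weighted least-squares updates of theta and a
        theta = (w * (z - a[:, None])).sum(axis=0) / w.sum(axis=0)
        a = (w * (z - theta[None, :])).sum(axis=1) / w.sum(axis=1)
        shift = a.mean()
        a -= shift                                     # zero-sum identifiability constraint
        theta += shift                                 # keep the fitted values a + theta unchanged
    r = z - a[:, None] - theta[None, :]
    sigma = np.median(np.abs(r)) / 0.6745              # estimated residual standard deviation
    total_weight = w.sum(axis=0)                       # W_j, total probe weight per chip
    se = sigma / np.sqrt(total_weight)                 # standard error of theta_hat per chip
    return theta, a, w, r, sigma, se
```

applying such a fit probeset by probeset yields, for every chip, the weights, residuals and total probe weights used in the quality measures below.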
to compensate for the heterogeneity in effective probe numbers just described, we divide the use by its median over all chips and call the result the _normalized unscaled standard error (nuse)._ an alternative interpretation of the nuse of a fixed probeset becomes apparent after some arithmetic manipulations. for any odd number of positive observations $x_1, \dots, x_n$ we have $\mathrm{median}_k(1/\sqrt{x_k}) = 1/\sqrt{\mathrm{median}_k(x_k)}$, since the function $x \mapsto 1/\sqrt{x}$ is monotone. for an even number the identity is still approximately true. (the reason for the slight inaccuracy is that, for an even number, the median is the average of the two data points in the center positions.) now we can rewrite $\mathrm{nuse}_j \approx \sqrt{\mathrm{median}_k(W_k)/W_j}$. the total probe weight $W_j$ can also be thought of as an _effective number of observations_ contributing to the probeset summary for this chip. its square root serves as the divisor in the standard error of the expression summaries, similarly to the role of $\sqrt{n}$ in the classical case of the average of $n$ independent observations. this analogy supposes, for heuristic purposes, that the probes are independent; in fact this is not true, due to normalization, probe overlap and other reasons. the median of the total probe weight over all chips serves as normalization constant. in this form, we can think of the nuse as the reciprocal of the normalized square root of the total probe weight. the nuse values fluctuate around 1. chip quality statements can be made based on the distribution of all the nuse values of one chip. as with the pm distributions, we can conveniently look at nuse distributions as boxplots, or we can summarize the information on the chip level by two single values: the median of the nuse over all probesets in a particular chip, _med(nuse),_ and the interquartile range of the nuse over all probesets in the chip, _iqr(nuse)._ * *relative log expression (rle):* we first need a reference chip. this is typically the _median chip,_ which is constructed probeset by probeset as the median expression value over all chips in the experiment. (a computationally constructed reference chip such as this one is sometimes called a ``virtual chip''.) to compute the rle for a fixed probeset, take the difference of its log expression on the chip and its log expression on the reference chip. note that the rle is not tied to rma, but can be computed from any expression value summary. the rle measures how much the measurement of the expression of a particular probeset in a chip deviates from measurements of the same probeset in other chips of the experiment. again, we can conveniently look at the distributions as boxplots, or we can summarize the information on the chip level by two single values: the median of the rle over all probesets in a particular chip, _med(rle),_ and the interquartile range of the rle over all probesets in the chip, _iqr(rle)._ the latter is a measure of deviation of the chip from the median chip. a priori this includes both biological and technical variability. in experiments where it can be assumed that most genes are not differentially expressed across the chips, iqr(rle) is a measure of technical variability in that chip. even if biological variability is present for most genes, iqr(rle) is still a sensitive detector of sources of technical variability that are larger than the biological variability. med(rle) is a measure of bias.
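assuming the total probe weights and the expression summaries from such fits have been assembled into probeset-by-chip arrays, the nuse and rle matrices and the chip-level med/iqr summaries defined above can be sketched as follows; the array names W and theta_hat are placeholders, not part of any existing package.

```python
import numpy as np

def nuse(total_weight):
    """total_weight: (P, J) array of total probe weights W[p, j], one row per
    probeset p, one column per chip j.  Returns the (P, J) matrix of NUSE values."""
    use = 1.0 / np.sqrt(total_weight)                    # unscaled standard error
    return use / np.median(use, axis=1, keepdims=True)   # divide by per-probeset median over chips

def rle(expr):
    """expr: (P, J) array of log2 expression summaries (any summary method).
    Returns the (P, J) matrix of RLE values relative to the 'virtual' median chip."""
    return expr - np.median(expr, axis=1, keepdims=True)

def chip_summaries(x):
    """Per-chip median and interquartile range of a (P, J) quality matrix
    (NUSE, RLE, or raw PM): the med(.) and iqr(.) summaries used in the text."""
    med = np.median(x, axis=0)
    q75, q25 = np.percentile(x, [75, 25], axis=0)
    return med, q75 - q25

# hypothetical usage, with W and theta_hat assembled from per-probeset fits:
# med_nuse, iqr_nuse = chip_summaries(nuse(W))
# med_rle,  iqr_rle  = chip_summaries(rle(theta_hat))
```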
in many experiments there are reasons to believe that up- and down-regulation are roughly balanced; in that case, any deviation of med(rle) from 0 is an indicator of a bias caused by the technology. the interpretation of the rle depends on these two assumptions on the biological variability in the dataset, but it provides a measure that is constructed independently of the quality landscapes and the nuse. for quality assessment, we summarize and visualize the nuse, rle, and pm distributions. we found series of boxplots to be a very convenient way to glance over sets of up to 100 chips. outlier chips as well as trends over time or patterns related to time can easily be spotted. for the detection of systematic quality differences related to circumstances of the experiment, or to properties of the sample, it is helpful to color the boxes accordingly. typical coloring would be according to groups of the experiment, sample cohort, lab site, hybridization date, time of the day, or a property of the sample (e.g. time in freezer). to quickly review the quality of larger sets of chips, shorter summaries such as the above-mentioned median or interquartile range of pm, nuse and rle are useful. these single-value summaries at the chip level are also useful for comparing our quality measures to other chip quality scores in scatter plots, or for plotting our quality measures against continuous parameters related to the experiment or the sample. again, additional use of colors can draw attention to any systematic quality changes due to technical conditions. while the rle is a form of absolute measure of quality, the nuse is not. the nuse has no units. it is designed to detect differences _between chips within a batch._ however, the magnitudes of these differences have no interpretation beyond the batch of chips analyzed together. we now describe a way to attach a quality assessment to a set of chips as a whole. it is based on a common residual scale: for a batch of jointly analyzed chips, rma estimates a common residual scale factor. it enables us to compare quality between different experiments, or between subgroups of chips in one experiment. it has no meaning for _single_ chips. * *residual scale factor (rsf):* this is a quality measure for batches of chips. it does not apply to individual chips, but assesses a batch as a whole. the batches can be a series of experiments or subgroups of one experiment (defined, e.g., by cohort, experimental conditions, sample properties, or diagnostic groups). to compute the rsf, assume the data are background corrected. as the background correction works on a chip-by-chip basis, it does not matter whether the computations were done simultaneously for all batches of chips or individually. for the normalization, however, we need to find one target distribution to which we normalize all the chips in all the batches. this is important, since the target distribution determines the scale of the intensity measures being analyzed. we then fit the rma model to each batch separately. the algorithm delivers, for each batch, a vector of the estimated _residual scales_ for all the probesets. we can now boxplot them to compare quality between batches of chips. the median of each such vector is called the _residual scale factor (rsf)._
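a minimal sketch of this computation, assuming the per-batch fits deliver one estimated residual scale per probeset (e.g. the sigma from the fit sketched earlier, computed batch by batch after normalizing to a common target); the normalized variant computed alongside anticipates the nrsf introduced in the next paragraph, and all names are illustrative only.

```python
import numpy as np

def rsf_and_nrsf(scales_by_batch):
    """scales_by_batch: dict mapping batch label -> (P,) array of estimated
    residual scales, one per probeset, from fitting the probe-level model to
    each batch separately.

    Returns two dicts: the residual scale factor (rsf, median of the raw
    scales per batch) and the normalized residual scale factor (nrsf, median
    of the scales divided probeset-wise by their median over all batches)."""
    labels = list(scales_by_batch)
    s = np.column_stack([scales_by_batch[b] for b in labels])   # (P, B)
    ref = np.median(s, axis=1, keepdims=True)                   # probeset-wise median across batches
    norm = s / ref                                              # normalized residual scales
    rsf = {b: float(np.median(s[:, k])) for k, b in enumerate(labels)}
    nrsf = {b: float(np.median(norm[:, k])) for k, b in enumerate(labels)}
    return rsf, nrsf
```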
a vector of residual scales is a heterogeneous set. to remove the heterogeneity, we can divide it, probeset by probeset, by the median over the estimated scales from all the batches. this leads to alternative definitions of the quantities above, which we call _normalized residual scales_ and _normalized residual scale factor (nrsf)._ the normalization leads to more discrimination between the batches, but has the drawback of having no units. software for the computation and visualization of the quality measures and the interpretation of the statistical plots is discussed in . the code is publicly available from www.bioconductor.org in the r package affyplm. note that the implementation of the nuse in affyplm differs slightly from the above formula. it is based on the ``true'' standard error as it comes from m-estimation theory, instead of the total-weights expression in [m:nuse]. however, the difference is small enough not to matter for any of the applications the nuse has in chip quality assessment. * *affymetrix hu95 spike-in experiments:* here 14 human crna fragments corresponding to transcripts known to be absent from rna extracted from pancreas tissue were spiked into aliquots of the hybridization mix at different concentrations, which we call chip-patterns. the patterns of concentrations of the spike-in crna fragments across the chips form a latin square. the chip-patterns are denoted by a, b, ..., s and t, with a, ..., l occurring just once, and m and q being repeated 4 times each. chip patterns n, o and p are the same as that of m, while patterns r, s, and t are the same as q. each chip-pattern was hybridized to 3 chips selected from 3 different lots, referred to as the l1521, the l1532, and the l2353 series. see www.affymetrix.com/support/technical/sample.data/datasets.affx for further details and data download. for this paper, we are using the data from the 24 chips generated by chip patterns m, n, o, p, q, r, s, t with 3 replicates each. * *st. jude children's research hospital leukemia data collection:* the study by was conducted to determine whether gene expression profiling could enhance risk assignment for pediatric acute lymphoblastic leukemia (all). the risk of relapse plays a central role in tailoring therapy intensity. a total of 389 samples were analyzed for the study, from which high quality gene expression data were obtained on 360 samples. distinct expression profiles identified each of the prognostically important leukemia subtypes, including t-all, e2a-pbx1, bcr-abl, tel-aml1, mll rearrangement, and hyperdiploid chromosomes. in addition, another all subgroup was identified based on its unique expression profile.
re-analyzed 132 cases of pediatric all from the original 327 diagnostic bone marrow aspirates using the higher density u133a and b arrays. the selection of cases was based on having sufficient numbers of each subtype to build accurate class predictions, rather than reflecting the actual frequency of these groups in the pediatric population. the follow-up study identified additional marker genes for subtype discrimination, and improved the diagnostic accuracy. the data of these studies are publicly available as supplementary data. * *fruit fly mutant pilot study:* gene expression of nine fruit fly mutants was screened using affymetrix drosgenome1 arrays. the mutants are characterized by various forms of dysfunctionality in their synapses. rna was extracted from fly embryos, pooled and labelled. three to four replicates per mutant were done. hybridization took place on six different days. in most cases, technical replicates were hybridized on the same day. the data were collected by tiago magalhães in the goodman lab at the university of california, berkeley, to gain experience with the new microarray technology. * *fruit fly time series:* a large population of wild type (canton-s) fruit flies was split into twelve cages and allowed to lay eggs, which were transferred into an incubator and aged for 30 minutes. from that time onwards, at the end of each hour for the next 12 hours, embryos from one plate were washed on the plate, dechorionated and frozen in liquid nitrogen. three independent replicates were done for each time point. as each embryo sample contained a distribution of different ages, we examined the distribution of morphological stage-specific markers in each sample to correlate the time-course windows with the nonlinear scale of embryonic stages. rna was extracted, pooled, labeled and hybridized to affymetrix drosgenome1 arrays. hybridization took place on two different days. this dataset was collected by pavel tomancak in the rubin lab at the university of california, berkeley, as part of their comprehensive study on spatial and temporal patterns of gene expression in fruit fly development. the raw microarray data (.cel files) are publicly available at the project's website www.fruitfly.org/cgi-bin/ex/insitu.pl. * *pritzker data collection:* the pritzker neuropsychiatric research consortium uses brains obtained at autopsy from the orange county coroner's office through the brain donor program at the university of california, irvine, department of psychiatry. rna samples are taken from the left sides of the brains. labeling of total rna, chip hybridization, and scanning of oligonucleotide microarrays are carried out at independent sites (university of california, irvine; university of california, davis; university of michigan, ann arbor). hybridizations are done on hu95 and later generations of affymetrix chips. in this paper, we are looking at the quality of data used in two studies by the pritzker consortium.
the _gender study_ by is motivated by gender differences in prevalence for some neuropsychiatric disorders. the raw dataset has hu95 chip data on 13 subjects in three regions (anterior cingulate cortex, dorsolateral prefrontal cortex, and cortex of the cerebellar hemisphere). the _mood disorder study_ described in is based on a growing collection of gene expression measurements in, ultimately, 25 regions. each sample was prepared and then split so that it could be hybridized to the chips in both michigan and either irvine or davis. we start by illustrating our quality assessment methods on the well known affymetrix spike-in experiments. the quality of these chips is well above what can be expected from an average lab experiment. we then proceed with data collected in scientific studies from a variety of tissue types and experimental designs. different aspects of quality analysis methods will be highlighted throughout this section. our quality analysis results will be compared with the affymetrix quality report for several sections of the large publicly available st. jude children's research hospital gene expression data collection. (a) *outlier in the affymetrix spike-in experiments:* 24 hu95a chips from the affymetrix spike-in dataset. all but the spike-in probesets are expected to be non-differentially expressed across the arrays. as there are only 14 spike-ins out of about twenty thousand probesets, they are, from the quality assessment point of view, essentially 24 identical hybridizations. a glance at the weight (or residual) landscapes gives a picture of homogeneous hybridizations with almost no local defects on any chip but #20 (fig. ). the nuse indicates that chip #20 is an outlier. its median is well above 1.10, while all others are smaller than 1.05, and its iqr is three and more times bigger than it is for any other chip (fig. ). the series of boxplots of the rle distributions confirms these findings. the median is well below zero, and the iqr is two and more times bigger than it is for any other chip. chip #20 has both a technologically caused bias and a higher noise level. the affymetrix quality report (fig. a3), however, does not clearly classify #20 as an outlier. its gapdh 3'/5' of about 2.8 is the largest within this chip set, but the value 2.8 is considered to be acceptable. according to all other affymetrix quality measures (percent present, noise, background average, scale factor), chip #20 is within a group of lower quality chips, but does not stand out. (b) *outlier in st. jude's data not detected by the affymetrix quality report:* the collection of mll hu133b chips consists of 20 chips, one of which turns out to be an outlier. the nuse boxplots (fig.
b1, bottom line) show a median over 1.2 for chip #15, while all others are below 1.025. the iqr is much larger for chip #15 than it is for any other chip. the rle boxplots (fig. b1, top line) also distinguish chip #15 as an obvious outlier. the median deviates clearly from zero for the outlier chip, while it is very close to zero for all other chips. the iqr is about twice as big as the largest of the iqrs of the other chips. fig. b1 displays the weight landscapes of chip #15 along with those of two of the typical chips. a region on the left side of chip #15, covering almost a third of the total area, is strongly downweighted, and the chip has elevated weights overall. the affymetrix quality report (fig. b3) paints a very different picture: the chip is an outlier on the med(nuse) scale, but does not stand out on any of the common affymetrix quality assessment measures: percent present, noise, scale factor, and gapdh 3'/5'. (c) *overall comparison of our measures and the affymetrix quality report for a large number of st. jude's chips:* fig. d1 pairs the med(nuse) with the four most common gcos scores on a set of 129 hu133a chips from the st. jude dataset. there is noticeable linear association between med(nuse) and percent present, as well as between med(nuse) and scale factor. gapdh 3'/5' does not show a linear association with any of the other scores. (d) *disagreement between our quality measures and the affymetrix quality report for the hyperdip subgroup in st. jude's data:* the affymetrix quality report detects problems with many chips in this dataset. for chip a, raw q (noise) is out of the recommended range for the majority of the chips, including c1, c13, c15, c16, c18, c21, c22, c23, c8 and r4. background detects one chip as an outlier. scale factor does not show any clear outliers. percent present is within the typical range for all chips. gapdh 3'/5' is below 3 for all chips. for chip b, raw q (noise) is out of the recommended range for r4 and two further chips. background detects two chips as outliers. scale factor does not show any clear outliers. percent present never exceeds 23 in this chip set, and it is below the typical minimum of 20 for several chips, including c15, c16, c18, c21 and c4. gapdh 3'/5' is satisfactory for all chips. our measures suggest that, with one exception, the chips are of good quality (fig. ). the heterogeneity of the perfect match distributions does not persist after the preprocessing. for chip a, the chip with the largest iqr(rle) is also a clear outlier among the nuse distributions. two other chips have elevated iqr(rle), but do not stand out according to nuse. for chip b, the rle distributions are very similar, with the same chip again having the largest iqr(rle). the nuse distributions consistently show good quality, with the exception of one chip. (e) *varying quality between diagnostic subgroups in the st. jude's data:* each boxplot in fig. e1 sketches the residual scales of the chips of one diagnostic subgroup. they show substantial quality differences. the e2a subgroup has a much higher med(rsf) than the other subgroups. the t subgroup has a slightly elevated med(rsf) and a higher iqr(rsf) than the other subgroups. (f) *hybridization date effects on quality of fruit fly chips:* the fruit fly mutant experiment with dysfunctional synapses stems from the earlier stages of working with affymetrix chips in this lab. it shows a wide range of quality. in the boxplot series of rle and nuse (fig. f1), a dependency on the hybridization date is striking.
the chips of the two mutants hybridized on the day colored yellow show substantially lower quality than any of the other chips. fig. f2 shows a weight landscape revealing smooth mountains and valleys. while the pattern is particularly strong in the chip chosen for this picture, it is quite typical for the chips in this dataset. we are not sure about the specific technical reason for this, but assume it is related to insufficient mixing during the hybridization. (g) *temporal trends or biological variation in fruit fly time series:* the series consists of 12 developmental stages of fruit fly embryos hybridized in 3 technical replicates each. while the pm distributions are very similar in all chips, we can spot two kinds of systematic patterns in the rle and nuse boxplots (fig. ). one pattern is connected to the developmental stage. within each single one of the three repeat time series, the hybridizations in the middle stages look ``better'' than the ones in the early stages and the chips in the late stages. this may, at least to some extent, be due to biological rather than technological variation. in embryo development, especially in the beginning and at the end, huge numbers of genes are expected to be affected, which is a potential violation of the assumption that most genes are not differentially expressed. insufficient staging in the first very short developmental stages may further increase the variability. also, in the early and late stages of development, there is substantial doubt about the symmetry assumption. another systematic trend in this dataset is connected to the repeat series. the second dozen chips are of poorer quality than the others. in fact, we learned that they were hybridized on a different day from the rest. the pairplot in fig. g2 looks at the relationship between our chip quality measures. there is no linear association between the raw intensities summarized as med(pm) and any of the quality measures. a weak linear association can be noted between med(rle) and iqr(rle). it is worth noting that it becomes much stronger when focusing on just the chips hybridized on the day colored in black. iqr(rle) and med(nuse) again have a weak linear association which becomes stronger when looking only at one of the subgroups, except this time it is the chips colored in gray. for the pairing med(rle) and med(nuse), however, there is no linear relationship. finally (not shown), as in the dysfunctional synapses mutant fruit fly dataset, a double-wave gradient, as seen in fig. f2 for the other fruit fly dataset, can be observed in the quality landscapes of many of the chips. although these experiments were conducted by a different team of researchers, they used the same equipment as that used in generating the other fruit fly dataset. (h) *lab differences in pritzker's gender study:* we looked at hu95 chip data from 13 individuals in two brain regions, the cortex of the cerebellar hemisphere (short: cerebellum) and the dorsolateral prefrontal cortex. with some exceptions, each sample is hybridized in both lab m and lab i. the nuse and rle boxplots (fig. h1) for the cerebellum dataset display an eye-catching pattern: they show systematically much better quality in lab m than in lab i. this might be caused by overexposure or saturation effects in lab i. the medians of the raw intensity (pm) values in lab i are, on a log2 scale, between about 9 and 10.5, while they are very consistently about 2 to 3 points lower in lab m.
the dorsolateral prefrontal cortex hybridizations show, for the most part, a lab effect similar to the one we saw in the cerebellum chips (plots not shown here). (i) *lab differences in pritzker's mood disorder study:* after the experiences with lab differences in the gender study, the consortium went through extended efforts to minimize these problems. in particular, the machines were calibrated by affymetrix specialists. fig. i1 summarizes the quality assessments of three of the pritzker mood disorder datasets. we are looking at hu95 chips from two sample cohorts (a total of about 40 subjects) in each of the brain regions anterior cingulate cortex, cerebellum, and dorsolateral prefrontal cortex. in terms of med(pm), for each of the three brain regions, the two replicates came closer to each other: the difference between the two labs in the mood disorder study is a third or less of the difference between the two labs in the gender study (see the first two boxes in each of the three parts of fig. i1, and compare with fig. ). this is due to lab i dropping in intensity (toward lab m) and the new lab d also operating at that level. the consequences of the intensity adjustments for chip quality do not form a coherent story. while for cerebellum the quality in lab m is still better than in the replicate in one of the other labs, for the other two brain regions the ranking is reversed. effects of a slight underexposure in lab m may now have become more visible. generally, in all brain regions, the quality differences between the two labs are still there, but they are much smaller than they were in the gender study data. (j) *assigning special causes of poor quality for st. jude's data:* figs. j1 to j8 show eight quality landscapes from the early st. jude's data, a collection of 335 hu133av2 chips. the examples were picked for being particularly strong cases of certain kinds of shortcomings that repeatedly occur in this chip collection. they do not represent the general quality level in the early st. jude's chips, and even less so the quality of later st. jude's chips. the figures in this paper are in gray levels. if the positive residual landscape is shown, the negative residual landscape is typically some sort of complementary image, and vice versa. colored quality landscapes for all st. jude's chips can be downloaded from bolstad's _chip gallery_ at www.plmimagegallery.bmbolstad.com.
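before turning to the individual examples, here is a minimal sketch of how such residual or weight pseudo-images can be assembled, assuming per-probe chip coordinates and fitted residuals are available as arrays; the names and the matplotlib rendering are illustrative and are not the plotting code used for the figures in this paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def quality_landscape(cols, rows, values, nrows, ncols):
    """Assemble a chip pseudo-image from per-probe quantities.

    cols, rows   : integer arrays giving each probe's (x, y) position on the chip
    values       : the corresponding residuals (or weights) from the probe-level fit
    nrows, ncols : physical dimensions of the probe grid
    Positions without a probe are left as NaN (blank in the image)."""
    img = np.full((nrows, ncols), np.nan)
    img[rows, cols] = values
    return img

# hypothetical usage: split positive and negative residuals into two panels
# img = quality_landscape(x, y, residuals, nrows, ncols)
# plt.imshow(np.clip(img, 0, None), cmap="Reds")    # positive residual landscape
# plt.imshow(np.clip(-img, 0, None), cmap="Blues")  # negative residual landscape
```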
fig. j1 ``bubbles'' is the positive residual landscape of chip hyperdip-50-02. there are small dots in the upper left part of the slide, and two bigger ones in the middle of the slide. we attribute the dots to dust attached to the slide or air bubbles stuck in this place during the hybridization. further, there is an accumulation of positive residuals in the bottom right corner. areas of elevated residuals near the corners and edges of the slide are very common, often much larger than in this chip. mostly they are positive. the most likely explanation is air bubbles that, due to insufficient mixing during the hybridization, got stuck close to the edges, either having gotten there while that edge was in a higher position to start with, or having been brought up there by the rotation. note that there typically is some air in the solution injected into the chip (through a little hole near one of the edges), but that the air is moved around by the rotation during the hybridization to minimize the effects on the probe measurements. fig. j2 ``circle and stick'' is the residual landscape of hyperdip47-50-c17. this demonstrates two kinds of spatial patterns that are probably caused by independent technical shortcomings. first, a circle with approximately equally spaced darker spots. the symmetry of the shape suggests it was caused by a foreign object that scratched trajectories of the rotation into the slide during the hybridization. second, there are little dots that almost seem to be aligned along a straight line connecting the circle to the upper right corner. the dots might be air bubbles stuck to some invisible thin straight object or scratch. fig. j3 ``sunset'' is the negative residual landscape of hyperdip-50-c6. this chip illustrates two independent technical deficiencies. first, there is a dark disk in the center of the slide. it might be caused by insufficient mixing, but the sharpness with which the disk is separated from the rest of the image calls for additional explanations. second, the image obviously splits into an upper and a lower rectangular part with different residual levels, separated by a straight border. as a most likely explanation, we attribute this to scanner problems. fig. j4 ``pond'' is the negative residual landscape of tel-aml1-2m03. the nearly centered disc covers almost the entire slide. it might be caused by the same mechanisms that were responsible for the smaller disc in the previous figure. however, in this data collection, we have only seen two sizes of discs, the small disk as in the previous figure and the large one as in this figure. this raises the question of why the mechanism that causes them does not produce medium size discs. fig. j5 ``letter s'' is the positive residual landscape of hypodip-2m03. the striking pattern, the letter s with the ``cloud'' on top, is a particularly curious example of a common technical shortcoming. we attribute the spatially heterogeneous distribution of the residuals to insufficient mixing of the solution during the hybridization. fig. j6 ``compartments'' is the positive residual landscape of hyperdip-50-2m02. this is a unique chip. one explanation would be that the vertical curves separating the three compartments of this image are long thin foreign objects (e.g.
hair) that got onto the chip and blocked or inhibited the liquid from spreading equally over the entire chip. fig. j7 ``triangle'' is the positive residual landscape of tel-aml1-06. the triangle might be caused by a long foreign object stuck to the center of the slide at one end and free, and hence moved around by the rotation, at the other end. fig. j8 ``fingerprint'' is the positive residual landscape of hyperdip-50-c10. what looks like a fingerprint on the picture might actually be one. with the slide measuring 1 cm by 1 cm, the pattern has about the size of a human fingerprint, or the middle part of it. * *quality landscapes:* the pair of the positive and negative residual landscapes contains the maximum information. often, one of the two residual landscapes can already characterize most of the spatial quality issues. in the weight pictures, the magnitude of the deviation is preserved, but the sign is lost. therefore, unrelated local defects can appear indistinguishable in weight landscapes. the landscapes allow a first glance at the overall quality of the array: a square filled with low-level noise typically comes from a good quality chip, one filled with high-level noise comes from a chip with uniformly bad probes. if the landscape reveals any spatial patterns, the quality may or may not be compromised, depending on the size of the problematic area. even a couple of strong local defects may not lower the chip quality, as indicated by our measures. the reason lies in both the chip design and the rma model. the probes belonging to one probeset are scattered around the chip, ensuring that a bubble or little scratch would only affect a small number of the probes in a probeset; even a larger under- or overexposed area of the chip may affect only a minority of probes in each probeset. as the rma model is fitted robustly, its expression summaries are shielded against this kind of disturbance. we found the quality landscapes most useful in assigning special causes of poor chip quality. a quality landscape composed of smooth mountains and valleys is most likely caused by insufficient mixing during the hybridization. smaller, sharper cut-out areas of elevated residuals are typically related to foreign objects (dust, hair, etc.) or air bubbles. symmetries can indicate that scratching was caused by particles being rotated during hybridization. patterns involving horizontal lines may be caused by scanner miscalibration. it has to be noted that the above assignments of causes are educated guesses rather than facts. they are the result of extensive discussions with experimentalists, but there remains a speculative component to them. even more hypothetical are some ideas we have regarding how the sign of the residual could reveal more about the special cause. all we can say at this point is that the background corrected and normalized probe intensities deviate from what the fitted rma model would expect them to be. the focus in this paper is a global one: chip quality. several authors have worked on spatial chip images from a different perspective, that of automatically detecting and describing local defects (see , or the r-package harshlight by ). it remains an open question how to use this kind of assessment beyond the detection and classification of quality problems.
in our approach , if we do not see any indication of quality landscape features in another quality indicator such as nuse or rle , we suppose that it has been rendered harmless by our robust analysis . this may not be true . * rle : * despite its simplicity the rle distribution turns out to be a powerful quality tool . for a small number of chips , boxplots of the rle distributions of each chip allow the detection of outliers and temporal trends . the use of colors or gray levels for different experimental conditions or sample properties facilitates the detection of more complex patterns . for a large number of chips , the iqr(rle ) is a convenient and informative univariate summary . med(rle ) should be monitored as well to detect bias . the rle relies on the assumptions that most genes are not differentially expressed and that up- and down - regulation are roughly balanced . as seen in the drosophila embryo data , these assumptions are crucial to ensuring that what the rle suggests really are technical artifacts rather than biological differences . note that the rle is not tied to the rma model , but could as well be computed based on expression values derived from other algorithms . the results may differ , but our experience is that the quality message turns out to be similar . * nuse : * as in the case of the rle , boxplots of nuse distributions can be used for small chip sets , and plots of their median and interquartile ranges serve as a less space consuming alternative for larger chip sets . for the nuse , however , we often observe a very high correlation between median and interquartile range , so keeping track of just the median will typically suffice . again , colors or gray levels can be used to indicate experimental conditions , facilitating the detection of their potential impact on quality . the nuse is the most sensitive of our quality tools , and it does not have a scale . observed quality differences , even systematic ones , therefore have to be carefully assessed . even large relative differences do not necessarily compromise an experiment , or render useless batches of chips within an experiment . they should always alert the user to substantial heterogeneity , whose cause needs to be investigated . on this matter we repeat an obvious but important principle we apply . when there is uncertainty about whether or not to include a chip or set of chips in an analysis , we can do both analyses and compare the results . if no great differences result , then sticking with the larger set seems justifiable . * raw intensities and quality measures : * raw intensities are not useful for quality prediction by themselves , but they can provide some explanation for poor performance according to the other quality measures . all the pritzker datasets , for example , suffer from systematic differences in the location of the pm intensity distribution ( indicated by med(pm ) ) .
sometimes the lower location was worse ( too close to underexposure ) and sometimes the higher was worse ( too close to overexposure or saturation ) . we have seen examples of chips for which the raw data give a misleading quality assessment . some kinds of technological shortcomings can be removed without trace by the statistical processing , while others remain . * comparison with affymetrix quality scores : * we found good overall agreement between our quality assessment and two of the affymetrix scores : percent present and scale factor . provided the quality in a chip set covers a wide enough range , we typically see at least a weak linear association between our quality measures and these two , and sometimes other affymetrix quality scores . however , our quality assessment does not always agree with the affymetrix quality report . in the st . jude data collection we saw that the sensitivity of the affymetrix quality report could be insufficient . while our quality assessment based on rle and nuse clearly detected the outlier chip in the mll chip b dataset , none of the measures in the affymetrix quality report did . the reverse situation occurred in the hyperdip chip a dataset . while most of the chips passed according to our quality measures , most of the chips received poor affymetrix quality scores . * rsf : * the residual scale factor can detect quality differences between parts of a data collection , such as the diagnostic subgroups in the st . jude data . in the same way , it can be employed to investigate quality differences between other dataset divisions defined by sample properties , lab site , scanner , hybridization day , or any other experimental condition . more experience as to what magnitudes of differences are acceptable is still needed . in this paper , we have laid out a conceptual framework for a statistical approach to the assessment and control of microarray data quality . in particular , we have introduced a quality assessment toolkit for short oligonucleotide arrays . the tools highlight different aspects of the wide spectrum of potential quality problems . our numerical quality measures , the nuse and the rle , are an efficient way to detect chips of unusually poor quality ( a minimal sketch of how these summaries can be computed from the fitted model output is given below ) .
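the sketch assumes the usual definitions ( rle : per - gene log expression minus the gene - wise median across chips ; nuse : per - gene standard error divided by the gene - wise median standard error ) and that gene - by - chip matrices of log - scale expression values and of their estimated standard errors are available from the fitted probe - level model ; it is a minimal illustration , not the authors' code .

```python
import numpy as np

def rle_nuse_summaries(logexpr, stderr):
    """Chip-level RLE and NUSE summaries from gene-by-chip matrices.

    logexpr : (n_genes, n_chips) log-scale expression summaries
    stderr  : (n_genes, n_chips) estimated standard errors of those summaries
    Both are assumed to come from a robust probe-level (PLM/RMA-type) fit.
    """
    # RLE: deviation of each gene's expression from its median across chips
    rle = logexpr - np.median(logexpr, axis=1, keepdims=True)
    # NUSE: standard error scaled by the gene-wise median standard error
    nuse = stderr / np.median(stderr, axis=1, keepdims=True)

    def summarize(x):
        q1, med, q3 = np.percentile(x, [25, 50, 75], axis=0)
        return {"median": med, "iqr": q3 - q1}

    return {"rle": summarize(rle), "nuse": summarize(nuse)}
```

chips whose med(rle ) is far from zero , or whose iqr(rle ) or median nuse is clearly elevated relative to the rest of the batch , are then candidates for closer inspection .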
furthermore , they permit the detection of temporal trends and patterns , batch effects , and quality biases related to sample properties or to experimental conditions .our spatial quality methods , the weight and residual landscapes , add to the understanding of specific causes of poor quality by marking those regions on the chip where defects occur .furthermore , they illustrate the robustness of the rma algorithm to small local defects .the rsf quantifies quality differences between batches of chips .it provides a broader framework for the quality scores of the individual chips in an experiment .all the quality measures proposed in this paper can be computed based on the raw data using publicly available software .deriving the quality assessment directly from the statistical model used to compute the expression values is more powerful than basing it on the performance of a particular set of of control probes , because the control probes may not behave in a way that is representative for the whole set of probes on the array .the model - based approach is also preferable to metrics less directly related to the bulk of the expression values .some of the affymetrix metrics , for example , are derived from the raw probe values and interpret any artifacts as quality problems , even if they are removed by routine preprocessing steps .a lesson from the practical examples in this paper is the importance of a well designed experiment .one of the most typical sources for bias , for example , is an unfortunate systematic connection between hybridization date and groups of the study a link that could have been avoided by better planning .more research needs to be done on the attribution of specific causes to poor quality measurements . while our quality measures , and , most of all , our quality landscapes , are a rich source for finding specific causes of poor quality , a speculative component remains . to increase the credibility of the diagnoses , systematic quality experiments need to be conducted .a big step forward is bolstad s _ chip gallery _ at www.plmimagegallery.bmbolstad.com , which collects quality landscapes from affymetrix chip collections , provides details about the experiment and sometimes offers explanations for the technical causes of poor quality .started as a collection of chip curiosities this website is now growing into a visual encyclopedia for quality assessment .contributions to the collection , in particular those with known causes of defects , are invited ( anonymous if preferred ) .further methodological research is needed to explore the use of the spatial quality for statistically `` repairing '' local defects , or making partial use of locally damaged chips .we are well aware that the range of acceptable values for each quality scores is the burning question for experimentalists .our quality analysis results with microarray datasets from a variety of scientific studies in section [ r ] show that the question about the right threshold for good chip quality does not have a simple answer yet , at least not as the present level of generality .thresholds computed for gene expression measurements in fruit fly mutant screenings can not necessarily be transferred to brain disease research or to leukemia diagnosis .thresholds need to be calibrated to the tissue type , the design , and the precision needs of the field of application .we offer two strategies to deal with this on different levels : 1 . 
*individual researchers : * we encouraged researchers to look for outliers and artificial patterns in the series of quality measures of the batch of jointly analyzed chips . furthermore , any other form of unusual observations e.g.a systematic disagreement between nuse and rle , or inconsistencies in the association between raw intensities and quality measures potentially hints at a quality problem in the experiment .* community of microarray users : * we recommend the development of quality guidelines .they should be rooted in extended collections of datasets from scientific experiments .complete raw datasets are ideal , where no prior quality screening has been employed .careful documentation of the experimental conditions and properties help to link unusual patterns in the quality measures to specific causes .the sharing of unfiltered raw chip data from scientific experiments on the web and the inclusion of chip quality scores in gene expression databank entries can lead the way towards community - wide quality standards . besides , it contributes to a better understanding how quality measures relate to special causes of poor quality .in addition , we encourage the conduction of _ designed microarray quality experiments . _such experiments aim at an understanding of the effects of rna amount , experimental conditions , sample properties and sample handling on the quality measures as well as on the downstream analysis .they give an idea of the range of chip quality to be expected under given certain experimental , and , again , they help to characterize specific causes of poor quality .benchmarking of microarray data quality will happen one way or another ; if it is not established by community - wide agreements the individual experimentalist will resort to judging on the basis of anecdotal evidence .we recommend that benchmarks be actively developed by the community of microarray researchers , experimentalists and data analysts .the statistical concepts and methods proposed here may serve as a foundation for the quality benchmarking process .we thank st .jude children s research hospital , the pritzker consortium , tiago magalhes , pavel tomanck and affymetrix for sharing their data for quality assessment purposes . 99 natexlab#1#1 affymetrix ( 2001 ) , _ guidelines for assessing data quality _ ,affymetrix inc , santa clara , ca .alizadeh , a. a. , eisen , m. b. , davis , r. e. , ma , c. , lossos , i. s. , rosenwald , a. , boldrick , j. c. , sabet , h. , tran , t. , yu , x. , powell , j. i. , yang , l. , marti , g. e. , moore , t. , hudson , j. , lu , l. , lewis , d. b. , tibshirani , r. , sherlock , g. , chan , w. c. , greiner , t. c. , weisenburger , d. d. , armitage , j. o. , warnke , r. , levy , r. , wilson , w. , grever , m. r. , byrd , j. c. , botstein , d. , brown , p. o. , and staudt , l. m. ( 2000 ) , `` distinct types of diffuse large b - cell lymphoma identified by gene expression profiling , '' _ nature _ , 403 , 503511 .allison , d. , cui , x. , page , g. , and sabripour , m. ( 2006 ) , `` microarray data analysis : from disarray to consolidation and consensus , '' _ nature review genetics _, 7 , 5565 .archer , k. , dumur , c. , joel , s. , and ramakrishnan , v. ( 2006 ) , `` assessing quality of hybridized rna in affymetrix genechip experiments using mixed effects models , '' _ biostatistics _ , 7 , 198212 .barczak , a. , rodriguez , m. , hanspers , k. , koth , l. , tai , y. , bolstad , b. , speed , t. , and erle , d. 
( 2003 ) , `` spotted long oligonucleotide arrays for human gene expression analysis , '' _ genome research _ , 1 , 17751785 .beissbarth , t. , fellenberg , k. , brors , b. , arribas - prat , r. , boer , j. , hauser , n. c. , scheideler , m. , hoheisel , j. d. , schutz , g. , poustka , a. , and vingron , m. ( 2000 ) , `` processing and quality control of dna array hybridization data , '' _ bioinformatics _ , 16 , 10141022 .bild , a. , yao , g. , chang , j. , wang , q. , potti , a. , chasse , d. , joshi , m. , harpole , d. , lancaster , j. , berchuck , a. , olson , j. j. , marks , j. , dressman , h. , west , m. , and nevins , j. ( 2006 ) , `` oncogenic pathway signatures in human cancers as a guide to targeted therapies , '' _ nature _ , 439(7074 ) , 353357 .bolstad , b. ( 2003 ) , `` low level analysis of high - density oligonucleotide array data : background , normalization and summarization , '' ph.d .thesis , university of california , berkeley , http://bmbolstad.com .bolstad , b. , collin , f. , brettschneider , j. , simpson , k. , cope , l. , irizarry , r. , and speed , t. ( 2005 ) , `` quality assessment of affymetrix genechip data , '' in _ bioinformatics and computational biology solutions using r and bioconductor _ , eds .gentleman , r. , carey , v. , huber , w. , irizarry , r. , and dudoit , s. , new york : springer , statistics for biology and health , pp .bolstad , b. , collin , f. , simpson , k. , irizarry , r. , and speed , t. ( 2004 ) , `` experimental design and low - level analysis of microarray data , '' _ int rev neurobiol _, 60 , 2558 .bolstad , b. , irizarry , r. , astrand , m. , and speed , t. ( 2003 ) , `` a comparison of normalization methods for high density oligonucleotide array data based on variance and bias , '' _ bioinformatics _ , 19 , 185193 , evaluation studies .buness , a. , huber , w. , steiner , k. , sultmann , h. , and poustka , a. ( 2005 ) , `` arraymagic : two - colour cdna microarray quality control and preprocessing , '' _ bioinformatics _ , 21 , 554556 .bunney , w. , bunney , b. , vawter , m. , tomita , h. , li , j. , evans , s. , choudary , p. , myers , r. , jones , e. , watson , s. , and akil , h. ( 2003 ) , `` microarray technology : a review of new strategies to discover candidate vulnerability genes in psychiatric disorders , '' _ am j psychiatry _ , 160 , 657666 .cho , r. , campbell , m. , winzeler , e. , steinmetz , l. , conway , a. , wodicka , l. , wolfsberg , t. , gabrielian , a. , landsman , d. , lockhart , d. , and davis , r. ( 1998 ) , `` a genome - wide transcriptional analysis of the mitotic cell cycle , '' _ mol cell _ , jul 2(1 ) , 6573 .cui , x. , hwang , j. , qiu , j. , blades , n. , and churchill , g. ( 2005 ) , `` improved statistical tests for differential gene expression by shrinking variance components estimates , '' _ biostatistics _ , 6 , 3146 .dobbin , k. , beer , d. , meyerson , m. , yeatman , t. , gerald , w. , jacobson , w. , conley , b. , buetow , k. , heiskanen , m. , simon , r. , minna , j. , girard , l. , misek , d. , taylor , j. , hanash , s. , naoki , k. , hayes , d. , ladd - acosta , c. , enkemann , s. , viale , a. , and giordano , t. ( 2005 ) , `` interlaboratory comparability study of cancer gene expression analysis using oligonucleotide microarrays , '' _ clin cancer res _ , 11 , 565572 .draghici , s. , khatri , p. , eklund , a. , and szallasi , z. ( 2006 ) , `` reliability and reproducibility issues in dna microarray measurements , '' _ trends in genetics _ , 22 ( 2 ) , 101109 .dudoit , s. , shaffer , j. 
, and boldrick , j. ( 2003 ) , `` multiple hypothesis testing in microarray experiments , '' _ statistical science _ , 18 , 71103 .dudoit , s. , yang , y. , speed , t. , and mj , c. ( 2002 ) , `` statistical methods for identifying differentially expressed genes in replicated cdna microarray experiments , '' _ statistica sinica _, 12 , 111139 .dumur , c. , nasim , s. , best , a. , archer , k. , ladd , a. , mas , v. , wilkinson , d. , garrett , c. , and ferreira - gonzalez , a. ( 2004 ) , `` evaluation of quality - control criteria for microarray gene expression analysis , '' _ clin chem _ , 50 , 19942002 .efron , b. , tibshirani , r. , storey , j. , and v , t. ( 2001 ) , `` empirical bayes analysis of a microarray experiment , '' _ j am stat ass _ , 96 , 11511160 .finkelstein , d. ( 2005 ) , `` trends in the quality of data from 5168 oligonucleotide microarrays from a single facility , '' _ j biomol tech _ , 16 , 143153 . finkelstein , d. , ewing , r. , gollub , j. , sterky , f. , cherry , j. , and somerville , s. ( 2002 ) , `` microarray data quality analysis : lessons from the afgc project , '' _ plant molecular biology _ , 48 , 119131 .gassman , j. , owen , w. , kuntz , t. , martin , j. , and amoroso , w. ( 1995 ) , `` data quality assurance , monitoring , and reporting , '' _ controlled clinical trials _ , 16(2 suppl ) , 104s136s .gautier , l. , cope , l. , bolstad , b. , and irizarry , r. ( 2004 ) , `` affy - analysis of affymetrix genechip data at the probe level , '' _ bioinformatics _ , 20(3 ) , 307315 .gautier , l. , moller , m. , friis - hansen , l. , and knudsen , s. ( 2004 ) , `` alternative mapping of probes to genes for affymetrix chips , '' _ bmc bioinformatics _, 5 , e111 . gcos ( 2004 ) , _ genechip expression analysis data analysis fundamentals _ , affymetrix , inc , santa clara , ca .gentleman , r. , carey , v. , huber , w. , irizarry , r. , and dudoit , s. e. ( 2005 ) , _ bioinformatics and computational biology solutions using r and bioconductor _ , springer .groves , r. ( 1987 ) , `` research on survey data quality , '' _ public opinion quaterly _ , 51 , part 2 , suppl ., s156s172 .hautaniemi , s. , edgren , h. , vesanen , p. , wolf , m. , jrvinen , a. , yli - harja , o. , astola , j. , kallioniemi , o. , and monni , o. ( 2003 ) , `` a novel strategy for microarray quality control using bayesian networks , '' _ bioinformatics _ , 19 , 20312038 .hoheisel , j. ( 2006 ) , `` microarray technology : beyond transcript profiling and genotype analysis , '' _ nature review genetics _ , 7 ( 3 ) , 200210 .hu , p. , greenwood , c. , and beyene , j. ( 2005 ) , `` integrative analysis of multiple gene expression profiles with quality - adjusted effect size models , '' _ bmc bioinformatics _, 6 , e128 .hwang , k. , kong , s. , greenberg , s. , and park , p. ( 2004 ) , `` combining gene expression data from different generations of oligonucleotide arrays , '' _ bmc bioinformatics _, 5 , e159 .irizarry , r. , bolstad , b. , collin , f. , cope , l. , hobbs , b. , and speed , t. ( 2003 ) , `` summaries of affymetrix genechip probe level data , '' _ nucleic acids res _, 31 , e15 .irizarry , r. , warren , d. , spencer , f. , kim , i. , biswal , s. , frank , b. , gabrielson , e. , garcia , j. , geoghegan , j. , germino , g. , griffin , c. , hilmer , s. , hoffman , e. , jedlicka , a. , kawasaki , e. , martinez - murillo , f. , morsberger , l. , lee , h. , petersen , d. , quackenbush , j. , scott , a. , wilson , m. , yang , y. , ye , s. , and yu , w. 
( 2005 ) , `` multiple - laboratory comparison of microarray platforms , '' _ nat methods _ , 2 , 345350 .jarvinen , a .- k . ,hautaniemi , s. , edgren , h. , auvinen , p. , saarela , j. , kallioniemi , o .- p . , and monni , o. ( 2004 ) , `` are data from different gene expression microarray platforms comparable ? '' _ genomics _ , 83 , 11641168 .jones , l. , goldstein , d. , hughes , g. , strand , a. , collin , f. , dunnett , s. , kooperberg , c. , aragaki , a. , olson , j. , augood , s. , faull , r. , luthi - carter , r. , moskvina , v. , and hodges , a. ( 2006 ) , `` assessment of the relationship between pre - chip and post - chip variables for affymetrix genechip expression data , '' _ bmc bioinformatics _ , 7 , e211 .kerr , m. ( 2003 ) , `` design considerations for efficient and effective microarray studies , '' _ biometrica _ , 59 , 822828 .kluger , y. , yu , h. , qian , j. , and gerstein , m. ( 2003 ) , `` relationship between gene co - expression and probe localization on microarray slides , '' _ bmc genomics _ , 4 , e49 .kong , s. , hwang , k. , kim , r. , zhang , b. , greenberg , s. , kohane , i. , and park , p. ( 2005 ) , `` crosschip : a system supporting comparative analysis of different generations of affymetrix arrays , '' _ bioinformatics _ , 21 , 21162117 .kuo , w. , jenssen , t. , butte , a. , ohno - machado , l. , and kohane , i. ( 2002 ) , `` analysis of matched mrna measurements from two different microarray technologies , '' _ bioinformatics _ , 18 , 405412 .li , c. and wong , h. ( 2001 ) , `` model - based analysis of oligonucleotide arrays : expression index computation and outlier detection , '' _ pnas _ , 98 , 3136 .lipshutz , r. , fodor , s. , gingeras , t. , and lockhart , d. ( 1999 ) , `` high density synthetic oligonucleotide arrays , '' _ nat genet _ , 21 , 2024 .lockhart , d. , dong , h. , byrne , m. , follettie , m. , gallo , m. , chee , m. , mittmann , m. , wang , c. , kobayashi , m. , horton , h. , and brown , e. ( 1996 ) , `` expression monitoring by hybridization to high - density oligonucleotide arrays , '' _ nat biotechnol _ , 14 , 16751680 .loebl , a. ( 1990 ) , `` accuracy and relevance and the quality of data , '' in _ data quality control , theory and pragmatics _ , eds .liepins , g. and uppuluri , v. , new york : marcel dekker , inc .112 in statistics : textbooks and monographs , pp . 105144 .lnnstedt , i. and speed , t. ( 2002 ) , `` replicated microarray data , '' _ statistica sinica _, 12 ( 1 ) , 3146 .marinez , y. , mcmahan , c. , barnwell , g. , and wigodsky , h. ( 1984 ) , `` ensuring data quality in medical research through an integrated data management system , '' _ stat med _ , 3 , 101111 .mason , r. and young , j. ( 2002 ) , _ multivariate statistical process control with industrial applications _ , philadelphia , pennsylvania : asa - siam .mclachlan , g. , do , k. , and ambroise , c. ( 2004 ) , _ analyzing microarray gene expression data _ , hoboken , new jersey : wiley .mecham , b. , klus , g. , strovel , j. , augustus , m. , byrne , d. , bozso , p. , wetmore , d. , mariani , t. , kohane , i. , and szallasi , z. ( 2004 ) , `` sequence - matched probes produce increased cross - platform consistency and more reproducible biological results in microarray - based gene expression measurements , '' _ nucleic acids res _ , 32 , 74 .mehta , t. , tanik , m. , and allison , d. ( 2004 ) , `` towards sound epistemological foundations of statistical methods for high - dimensional biology , '' _ nature genetics _, 36 , 943947 .mitchell , s. 
, brown , k. , henry , m. , mintz , m. , catchpoole , d. , lafleur , b. , and stephan , d. ( 2004 ) , `` inter - platform comparability of microarrays in acute lymphoblastic leukemia , '' _ bmc genomics _, 5 , e71 . model , f. , knig , t. , piepenbrock , c. , and adorjan , p. ( 2002 ) , `` statistical process control for large scale microarray experiments , '' _ bioinformatics _ , 18 suppl 1 , 155163 , evaluation studies .morris , j. , yin , g. , baggerly , k. , wu , c. , and zhang , l. ( 2004 ) , `` pooling information across different studies and oligonucleotide microarray chip types to identify prognostic genes for lung cancer , '' in _ methods of microarray data analysis iii _ , eds . shoemaker , j. and lin , s. , new york : springer , pp .naef , f. , socci , n. , and magnasco , m. ( 2003 ) , `` a study of accuracy and precision in oligonucleotide arrays : extracting more signal at large concentrations , '' _ bioinformatics _ , 19 , 178184 .nimgaonkar , a. , sanoudou , d. , butte , a. , haslett , j. , kunkel , l. , beggs , a. , and kohane , i. ( 2003 ) , `` reproducibility of gene expression across generations of affymetrix microarrays , '' _ bmc bioinformatics _ , 4 , e27 .novikov , e. and barillot , e. ( 2005 ) , `` an algorithm for automatic evaluation of the spot quality in two - color dna microarray experiments , '' _ bmc bioinformatics _ , 6 , e293 .perou , c. , sorlie , t. , eisen , m. , van de rijn , m. , jeffrey , s. , rees , c. , pollack , j. , ross , d. , johnsen , h. , akslen , l. , fluge , o. , pergamenschikov , a. , williams , c. , zhu , s. , lonning , p. , borresen - dale , a. , brown , p. , and botstein , d. ( 2000 ) , `` molecular portraits of human breast tumours , '' _ nature _ , 406 , 747752 .qian , j. , kluger , y. , yu , h. , and gerstein , m. ( 2003 ) , `` identification and correction of spurious spatial correlations in microarray data , '' _ biotechniques _ , 35 , 4244 , evaluation studies .ramaswamy , s. and golub , t. ( 2002 ) , `` dna microarrays in clinical oncology , '' _ j. clin_ , 20 , 19321941 .redman , t. ( 1992 ) , _ data quality : management and technology _, new york : bantam books .reimers , m. and weinstein , j. ( 2005 ) , `` quality assessment of microarrays : visualization of spatial artifacts and quantitation of regional biases , '' _ bmc bioinformatics _ , 6 , e166 .ritchie , m. , diyagama , d. , neilson , j. , van laar , r. , a , d. , a , h. , and gk , s. ( 2006 ) , `` empirical array quality weights in the analysis of microarray data , '' _ bmc bioinformatics _, 7 , e261 .rogojina , a. , orr , w. , song , b. , and geisert , e. j. ( 2003 ) , `` comparing the use of affymetrix to spotted oligonucleotide microarrays using two retinal pigment epithelium cell lines , '' _ mol vis _ , 9 , 482496 .ross , m. , zhou , x. , song , g. , shurtleff , s. , girtman , k. , williams , w. , liu , h. , mahfouz , r. , raimondi , s. , lenny , n. , patel , a. , and downing , j. ( 2003 ) , `` classification of pediatric acute lymphoblastic leukemia by gene expression profiling , '' _ blood _ , 102 , 29512959 .schoor , o. , weinschenk , t. , hennenlotter , j. , corvin , s. , stenzl , a. , rammensee , h. , and stevanovic , s. ( 2003 ) , `` moderate degradation does not preclude microarray analysis of small amounts of rna , '' _ biotechniques _, 35 , 11921196 .shewhart , w. ( 1939 ) , _ statistical method from the viewpoint of quality control _ ,lanceser , pennsylvania : lancester press , inc .shippy , r. , sendera , t. , lockner , r. , palaniappan , c. 
, kaysser - kranich , t. , watts , g. , and j , a. ( 2004 ) , `` performance evaluation of commercial short - oligonucleotide microarrays and the impact of noise in making cross - platform correlations , '' _ bmc genomics _ , 5 , e61 .smith , k. and hallett , m. ( 2004 ) , `` towards quality control for dna microarrays , '' _ j comput biol _ , 11 , 945970 .smyth , g. ( 2002 ) , `` print - order normalization of cdna microarrays , '' tech .rep . , genetics and bioinformatics , walter and eliza hall institute of medical research , melbourne , available at www.statsci.org/smyth/pubs/porder/porder.html .smyth , g. , yang , h. , and speed , t. ( 2003 ) , `` statistical issues in cdna microarray data analysis , '' in _ functional genomics , methods and protocols _ ,brownstein , m. j. and khodursky , a. b. , totowa , new jersey : humana press , no . 224 in methods in molecular biology ,111136 .speed , t. ( 2003 ) , _ statistical analysis of gene expression of gene expression microarray data _ , boca raton ,florida : chapman and hall / crc .stevens , j. and doerge , r. ( 2005 ) , `` combining affymetrix microarray results , '' _ bmc bioinformatics _ , 6 , e57 .storey , j. ( 2003 ) , `` the positive false discovery rate : a bayesian interpretation and the q - value , '' _ annals of statistics _ , 31 , 20132035 .surez - farias , m. , haider , a. , and wittkowski , k. ( 2005 ) , `` `` harshlighting '' small blemishes on microarrays , '' _ bmc bioinformatics _ , 6 , e65 .subramanian , a. , tamayo , p. , mootha , v. , mukherjee , s. , ebert , b. , gillette , m. , paulovich , a. , pomeroy , s. , golub , t. , lander , e. , and mesirov , j. ( 2005 ) , `` gene set enrichment analysis : a knowledge - based approach for interpreting genomewide expression profiles , '' _ pnas _ , 102 , 15554550 .tai , y. and speed , t. ( 2006 ) , `` a multivariate empirical bayes statstic for replicated microarray time course data , '' _ ann statist _ , 34 , 23872412 .thach , d. , lin , b. , walter , e. , kruzelock , r. , rowley , r. , tibbetts , c. , and stenger , d. ( 2003 ) , `` assessment of two methods for handling blood in collection tubes with rna stabilizing agent for surveillance of gene expression profiles with high density microarrays , '' _ j immunol methods _ , 283 , 269279 .the chipping forecast ( 1999 ) , _ the chipping forecast i _ , vol . 21 - 1s, nature genetics suppl . ( 2002 ) , _ the chipping forecast ii _ , vol .32 - 4s , nature genetics suppl . ( 2005 ) , _ the chipping forecast iii _37 - 6s , nature genetics suppl .thompson , k. , rosenzweig , b. , pine , p. , retief , j. , turpaz , y. , afshari , c. , hamadeh , h. , damore , m. , boedigheimer , m. , blomme , e. , ciurlionis , r. , waring , j. , fuscoe , j. , paules , r. , tucker , c. , fare , t. , coffey , e. , he , y. , collins , p. , jarnagin , k. , fujimoto , s. , ganter , b. , kiser , g. , kaysser - kranich , t. , sina , j. , and sistare , f. ( 2005 ) , `` use of a mixed tissue rna design for performance assessments on multiple microarray formats , '' _ nucleic acid research _ , 33 ( 2 ) , e187 .tom , b. , gilks , w. , brooke - powell , e. , and ajioka , j. ( 2005 ) , `` quality determination and the repair of poor quality spots in array experiments , '' _ bmc bioinformatics _, 6 , e234 .tomanck , p. , beaton , a. , weiszmann , r. , kwan , e. , shu , s. , lewis , s. , richards , s. , ashburner , m. , hartenstein , v. , celniker , s. , and rubin , g. 
( 2002 ) , `` systematic determination of patterns of gene expression during drosophila embryogenesis , '' _ genome biol _ , 3 , 114 .vawter , m. , evans , s. , choudary , p. , tomita , h. , meador - woodruff , j. , molnar , m. , li , j. , lopez , j. , myers , r. , cox , d. , watson , s. , akil , h. , jones , e. , and bunney , w. ( 2004 ) , `` gender - specific gene expression in post - mortem human brain : localization to sex chromosomes , '' _ neuropsychopharmacology _ , 29 , 373384 .wang , h. , he , x. , band , m. , and wilson , cand liu , l. ( 2005 ) , `` a study of inter - lab and inter - platform agreement of dna microarray data , '' _ bmc genomics _ , 6 , e71 .wang , r. ( 2001 ) , _ data quality _ , boston : kluwer academic publishers .wang , r. , storey , v. , and firth , c. ( 1995 ) , `` a framework for analysis of data quality research , '' _ ieee transactions of knowledge and data engineering _ , 7 , 623640 .wang , x. , ghosh , s. , and guo , s. ( 2001 ) , `` quantitative quality control in microarray image processing and data acquisition , '' _ nucleic acids res _ , 29 ( 15 ) ,wilson , c. and miller , c. ( 2005 ) , `` simpleaffy : a bioconductor package for affymetrix quality control and data analysis , '' _ bioinformatics _ , 21 , 36833685 . wit , e. and mcclure , j. ( 2004 ) , _ statistics for microarrays : design , analysis , inference _ , hoboken , new jersey : wiley .woo , y. , affourtit , j. , daigle , s. , viale , a. , johnson , k. , naggert , j. , and churchill , g. ( 2004 ) , `` a comparison of cdna , oligonucleotide , and affymetrix genechip gene expression microarray platforms , '' _ j biomol tech _ , 15 , 276284 .yang , y. , buckley , m. , and speed , t. ( 2001 ) , `` analysis of cdna microarray images , '' _ brief bioinform _ , 2 , 341349 .yang , y. , dudoit , s. , luu , p. , lin , d. , peng , v. , ngai , j. , and speed , t. ( 2002 ) , `` normalization for cdna microarray data : a robust composite method addressing single and multiple slide systematic variation , '' _ nucleic acids res _, 30 , e15 .yauk , c. , berndt , m. , williams , a. , and douglas , g. ( 2004 ) , `` comprehensive comparison of six microarray technologies , '' _ nucleic acid research _ , 32 ( 15 ) , e124 .yeoh , e. , ross , m. , shurtleff , s. , williams , w. , patel , d. , mahfouz , r. , behm , f. , raimondi , s. , relling , m. , patel , a. , cheng , c. , campana , d. , wilkins , d. , zhou , x. , li , j. , liu , h. , pui , c. , evans , w. , naeve , c. , wong , l. , and downing , j. ( 2002 ) , `` classification , subtype discovery , and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling , '' _ cancer cell _ , 1 , 133143 .yuen , t. , wurmbach , e. , pfeffer , r. , ebersole , b. , and sealfon , s. ( 2002 ) , `` accuracy and calibration of commercial oligonucleotide and custom cdna microarrays , '' _ nucleic acids res _, 30 , e48 .zhang , w. , shmulevich , i. , and astola , j. ( 2004 ) , _ microarray quality control _ ,hoboken , new jersey : john wiley & sons , inc .zhu , b. , ping , g. , shinohara , y. , zhang , y. , and baba , y. ( 2005 ) , `` comparison of gene expression measurements from cdna and 60-mer oligonucleotide microarrays , '' _ genomics _ , 85 , 657665 . | quality of microarray gene expression data has emerged as a new research topic . 
as in other areas , microarray quality is assessed by comparing suitable numerical summaries across microarrays , so that outliers and trends can be visualized , and poor quality arrays or variable quality sets of arrays can be identified . since each single array comprises tens or hundreds of thousands of measurements , the challenge is to find numerical summaries which can be used to make accurate quality calls . to this end , several new quality measures are introduced based on probe level and probeset level information , all obtained as a by - product of the low - level analysis algorithms rma / fitplm for affymetrix genechips . quality landscapes spatially localize chip or hybridization problems . numerical chip quality measures are derived from the distributions of _ normalized unscaled standard errors _ and of _ relative log expressions . _ quality of chip batches is assessed by _ residual scale factors . _ these quality assessment measures are demonstrated on a variety of datasets ( spike - in experiments , small lab experiments , multi - site studies ) . they are compared with affymetrix s individual chip quality report . * _ to be published in technometrics ( with discussion ) _ * * julia brettschneider , franois collin , benjamin m.bolstad , terence p.speed * * quality assessment for * + + keywords : quality control , microarrays , affymetrix chips , relative log expression , normalized unscaled standard errors , residual scale factors . |
the design of effective numerical methods for solving structured generalized eigenvalue problems has recently attracted a great deal of attention .palindromic matrix polynomials arise in many applications .an matrix polynomial of degree , , , , is said to be t - palindromic if for .it is well - known , that the palindromic structure induces certain spectral symmetries : in particular if is an eigenvalue of then is also an eigenvalue of .numerical solution methods are generally asked to preserve these symmetries .the customary approach for polynomial eigenproblems consists in two steps : first is linearized into a matrix pencil , , and then the eigenvalues of are computed by some iterative solver .the usual choice of the matrix qz algorithm applied to a companion linearization of is implemented in the matlab function _ polyeig_. an alternative solver based on the ehrlich - aberth root finding algorithm is proposed in for dealing with certain structured linearizations .specifically the method of is designed to solve generalized tridiagonal eigenvalue problems but virtually , as shown below , it can be extended to several other rank structures . a generalization for tridiagonal quadratic eigenvalue problemsis presented in . a similar strategy using newton s iteration directly applied to compute the zeros of pursued in .modified methods for palindromic eigenproblems which are able to preserve their spectral symmetries have been proposed in several papers .the construction of t - palindromic linearizations of palindromic eigenproblems is the subject of and , whereas numerical methods based on matrix iterations have been devised in , , and for computing the eigenvalues of these linearizations by maintaining the palindromic structure throughout the computation . to date , however , the authors are not aware of any specific adaptation of the root - finding based methods to palindromic structures .the contribution of this paper is to fill the gap by developing a root finder specifically suited for t - palindromic matrix polynomials , with particular emphasis on the case of large degree .t - palindromic polynomials of large even degree arise as truncation of fourier series in several applications such as spectral theory , filtering problems , optimal control and multivariate discrete time series prediction .the polynomial root - finding paradigm is a flexible , powerful and quite general tool for solving both structured and unstructured polynomial eigenproblems . in its basic form it proceeds in four steps : 1 .the matrix polynomial is represented in some convenient polynomial basis .2 . 
the transformed polynomial is linearized .the linearization is reduced in the customary hessenberg - triangular form .a root - finding method is applied for approximating the eigenvalues of the ( reduced ) pencil .this scheme has some degrees of freedom concerning the choice of the polynomial basis at step 1 and the choice of the linearization at step 2 which can be used to exploit both structural and root properties of the matrix polynomial .the complexity heavily depends on the efficiency of the polynomial zero - finding method applied to the determinant of the pencil .steps 2 and 3 are optional but can substantially improve the numerical and computational properties of the method .some caution should be used at step 1 since the change of the basis could modify the spectral structure of the matrix polynomial .the key idea we propose for the implementation of step 4 is the use of the jacobi formula .we emphasize that , although in this paper we focus on palindromics and on a version of the method that is able to extract the palindromic spectral structure , this strategy may be used to address the most general case of an unstructured matrix polynomial eigenproblem , for instance by applying it to the companion linearization .an analysis of the application of the method to a generic matrix polynomial will appear elsewhere .in this paper we consider the polynomial root - finding paradigm for solving t - palindromic eigenproblems .in particular , we address the main theoretical and computational issues arising at steps 1 , 2 and 4 of the previous scheme applied to t - palindromic matrix polynomials , and also we indicate briefly how to carry out the reduction at step 3 .the proposed approach relies upon the representation and manipulation of t - palindromic matrix polynomials in a different degree - graded polynomial basis , namely the dickson basis , satisfying a three - term recurrence relation and defined by , and for . for the given t - palindromic polynomial of degree we determine a novel polynomial , , , , with the property that if and are two distinct ( i.e. ) finite semi - simple eigenvalues of with multiplicity , then is a semi - simple eigenvalue for with multiplicity .moreover , we find that ^ 2 = p(y ) \cdot p(y),\ ] ] where is a polynomial of degree at most .solving the algebraic equation is at the core of our method for t - palindromic eigenproblems .our computational experience in polynomial root - finding indicates that the ehrlich - aberth method , for the simultaneous approximation of polynomial zeros realizes a quite good balancing between the quality of convergence and the cost per iteration .the main requirements for the effective implementation of the ehrlich - aberth method are both a fast , robust and stable procedure to evaluate the newton correction and a reliable criterion to stop the iteration .concerning the first issue it is worth noting that and , therefore , the computation immediately reduces to evaluating the newton correction of .a suitable structured linearization of can be obtained following which displays a semiseparable structure . in this way , in view of the celebrated jacobi formula , the newton correction can be evaluated by performing a qr factorization of , say , at low computational cost and fulfilling the desired requirements of robustness and stability . 
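to make the use of the jacobi formula concrete : for a pencil l(y) = y b - a one has ( d / dy ) det l(y) = det l(y) trace( l(y)^{-1} b ) , so the newton correction det l(y) / ( d / dy ) det l(y) equals 1 / trace( l(y)^{-1} b ) and can be obtained from a single factorization of l(y) without ever forming the determinant . the sketch below is a plain dense - algebra illustration of this identity ; it uses an lu factorization of a generic pencil given by two square matrices a and b , whereas the text exploits a structured qr factorization of the semiseparable linearization , which is not reproduced here .

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_correction(y, A, B):
    """Newton correction p(y)/p'(y) for p(y) = det(y*B - A).

    Jacobi's formula gives p'(y) = p(y) * trace((y*B - A)^{-1} B),
    so the correction is the reciprocal of that trace.  A and B are
    treated as generic dense matrices here; the structured
    (rank-exploiting) factorization of the paper is not reproduced.
    """
    L = y * B - A
    lu, piv = lu_factor(L)
    X = lu_solve((lu, piv), B)     # X = L(y)^{-1} B, one factorization only
    t = np.trace(X)
    if t == 0:                     # p'(y) = 0: the Newton step is undefined
        raise ZeroDivisionError("derivative of det vanishes at y")
    return 1.0 / t
```

the same routine can be plugged into the ehrlich - aberth iteration sketched at the end of the section .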
also , since for we obtain at no additional cost a reliable stop condition based on an estimate of the backward error given by higham and higham .if , the degree of the matrix polynomial , is large with respect to , the size of its matrix coefficients , our approach looks appealing since with a smart choice of the starting points it needs operations , whereas the qz method makes use of operations .the unpleasant factor in our cost estimate depends on the block structure of the linearization used in our current implementation and can in principle be decreased by performing the preliminary reduction of the linearization in hessenberg - triangular form as stated at step 3 of the basic scheme .the reduction can be carried out by a structured method exploiting the semiseparable structure of the block linearization to compute a rank - structured hessenberg - triangular linearization .incorporating the structured method in our implementation would finally lead to a fast method that outperforms the qz algorithm for large degrees and is comparable in cost for small degrees . the paper is organized as follows .the theoretical properties of the considered linearizations of t - palindromic matrix polynomials expressed in the dickson basis are investigated in section [ teo ] and [ teo1 ] .the derivation of the proposed eigenvalue method for t - palindromic eigenproblems is established in section [ main1 ] and [ main2 ] .the complete algorithm is described in section [ alg ] .numerical experiments are presented in section [ exp ] to illustrate the robustness of our implementation and to indicate computational issues and possible improvements of our algorithm compared with other existing methods .finally , conclusion and future work are discussed in section [ end ] .this preparatory section recalls some basic definitions , background facts and notations used throughout the paper . for let , , be constant matrices and consider the matrix polynomial .the generalized polynomial eigenproblem ( pep ) associated to is to find an eigenvalue and a corresponding nonzero eigenvector satisfying in this paper , we will always suppose that is _ regular _, i.e. its determinant does not identically vanish .a _ linearization _ of is defined as a pencil , with , such that there exist unimodular polynomial matrices and for which moreover , if one defines the reversal of a matrix polynomial as rev , the linearization is said to be _ strong _ whenever rev is a linearization of rev . following the work of mackey , mackey , mehl and mehrmann , in the paper higham , mackey , mackey and tisseur study the two ( right and left ) _ ansatz vector linearization spaces _ : having introduced the vector , these spaces are defined as follows : it is shown in that almost every pencil in these spaces is a linearization , while in two binary operations on block matrices , called column shifted sum and row shifted sum , are first introduced and then used to characterize the above defined spaces .on the other hand , in amiraslani , corless and lancaster consider linearizations of a matrix polynomial expressed in some polynomial bases different than the usual monomial one .equation ( 7 ) in resembles closely the defining equation of .the authors themselves stress this analogy , that suggests an extension of the results of to the case of different polynomial bases .let be a basis for the polynomials of degree less than or equal to . 
in degree - graded bases that satisfy a three - terms recurrence relation ( for instance ,orthogonal polynomials always do so ) are considered : the are obviously linked to the leading - term coefficients of the . specifically , calling such coefficients , one has that .we wish to consider the expansion of the polynomial in this basis : we introduce the vector by generalizing the linearizations studied in , for each choice of two new ansatz vector linearization spaces can be defined : it is worth noticing that it is not strictly necessary for the new basis to be degree - graded , nor it is to satisfy a three - term recurrence relation .in fact , it is sufficient that are linearly independent and have degree less than or equal to , so that there exists an invertible basis change matrix such that .the basis is degree - graded if and only if is lower triangular . in the light of the above definitions it is immediately seen that the main results of , remain valid in the case of a more general polynomial basis . in particular the following result holds .[ prop2 ] let .the following properties are equivalent : * is a linearization of * is a strong linearization of * is regular _ proof ._ it is a corollary of theorem 4.3 of .in fact , any can be written as for some .therefore , has each of the three properties above if and only if has the corresponding property . this proposition guarantees that almost every ( more precisely , all but a closed nowhere dense set of measure zero ) pencil in is a strong linearization for . for a proof , see theorem 4.7 of .the eigenvectors of are related to those of .more precisely , is an eigenpair for if and only if is an eigenpair for .moreover , if is a linearization then every eigenvector of is of the form for some eigenvector of .a similar recovery property holds for the left ansatz vector linearizations .these properties can be simply proved as in theorems 3.8 and 3.14 of , that demonstrate them for the special case . for the numerical treatment of palindromic generalized eigenproblems a crucial roleis played by the so - called dickson basis defined by if we consider the mapping ( which we will refer to as the dickson transformation or the dickson change of variable ) then for . for , we obtain that . 
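the recurrence and the change of variable can be checked with a few lines of code . the normalization used below ( p_0(y) = 2 , p_1(y) = y , p_{k+1}(y) = y p_k(y) - p_{k-1}(y) , giving p_k(x + 1/x) = x^k + x^{-k} ) is the standard one for dickson polynomials and is stated here as an assumption of the sketch rather than quoted from the displayed formulas .

```python
import numpy as np

def dickson_basis(y, m):
    """Values p_0(y), ..., p_m(y) of the Dickson basis at a point y.

    Assumed normalization: p_0 = 2, p_1 = y, p_{k+1} = y*p_k - p_{k-1}.
    """
    p = [2.0 + 0j, y]
    for _ in range(2, m + 1):
        p.append(y * p[-1] - p[-2])
    return np.array(p[: m + 1])

# check of the Dickson change of variable y = x + 1/x:
# p_k(x + 1/x) should equal x^k + x^(-k) for every k
rng = np.random.default_rng(0)
x = rng.standard_normal() + 1j * rng.standard_normal()
y = x + 1.0 / x
assert np.allclose(dickson_basis(y, 6),
                   [x**k + x**(-k) for k in range(7)])
```

the same recurrence is what keeps the change of variable numerically harmless for large degrees , in contrast with the explicit binomial expansion discussed later .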
from by choosing as the ansatz vector we find a suitable strong linearization of represented as in : in the next section we study the spectral modifications induced by the dickson change of variable that provide the basic link between palindromic matrix polynomials and matrix polynomials expressed in the dickson basis .let us recall that if is an eigenvalue of then the set , is a jordan chain of length if and the following relations hold : where denotes the -th derivative of evaluated at .the case corresponds to the definition of an eigenvector .the notion of a jordan chain can be extended to any matrix function whose determinant vanishes at , as long as is analytic in a neighborhood of .in particular , the case of laurent polynomials is important for our investigations .if the principal part of a laurent polynomial is a polynomial of degree in , then is a polynomial .the following lemma relates the jordan chains of the two .the proof is a straightforward application of the product differentiation rule .[ laurent ] let be a ( laurent ) polynomial and for some natural number .then the set is a jordan chain of length for associated to the eigenvalue if and only if is a jordan chain of length for associated to the same eigenvalue .roughly speaking , lemma [ laurent ] makes us able to switch between regular and laurent polynomials without worrying about changes in eigenvalues and generalized eigenvectors .actually , this result can be slightly generalized with the next lemma , which is just an adaptation of a well - known result in for the case where the four matrix functions that we are going to consider are polynomials . in order to prove the lemma , we recall that a vector polynomial is called a root polynomial of order corresponding to for the matrix polynomial if the following conditions are satisfied : obviously a root polynomial of order is defined up to an additive term of the form for any suitable vector polynomial .it is possible to prove that if and only if is a jordan chain of length for at .when , thanks to lemma [ laurent ] it is possible to extend the concept to laurent polynomials : if is a laurent polynomial whose singular part has degree as a polynomial in , then we say that is a root polynomial for if it is a root polynomial for .[ analytic ] let , be ( laurent ) polynomials and , be two matrix functions with .suppose that an open neighborhood of exists such that all the considered functions are analytic in , and also suppose that both and are invertible .then is an eigenvalue for if and only if it is an eigenvalue for , and is a jordan chain of length for at if and only if is a jordan chain of length for at , where ._ if and are classical polynomials then the thesis follows as in the proof of proposition 1.11 in after having represented and by their taylor series expansions . to deal with the laurent case ,let and be the minimal integers such that and are classical polynomials .just follow the previous proof for and apply lemma [ laurent]. 
we are now in the position to prove a result for the dickson change of variable .the following proposition shows that the number of jordan chains and their length at some eigenvalue ( for the sake of brevity , we shall use the expression _jordan structure at _ ) is related to the jordan structures at and .[ change ] let and let be a polynomial in , so that is a laurent polynomial in .let first , , be a finite eigenvalue of .then the jordan structure of at is equal to the jordan structure of at either or .if on the contrary , then there is a jordan chain of length at if and only if there is a jordan chain of length at ._ it is obvious that is an eigenvalue for if and only if both and are eigenvalues of .let , where is the smith form ( , ) of .define .if are such that , , and are polynomials in , then we have the relation ; however , in general is not the smith form of .nevertheless , it has the form where and . in other words ,the -s are palindromic polynomials with no roots at and such that divides for .moreover , is a zero of multiplicity for if and only if both and are zeros of multiplicity for . to reduce into a smith form ,we proceed by steps working on principal submatrices . in each step , we consider the submatrix , with .if , then do nothing ; if , premultiply the submatrix by and postmultiply it by , where while and are such that ; the existence of two such polynomials is guaranteed by bezout s lemma , since is the greatest common divisor of and .it is easy to check that both matrices are unimodular , and that the result of the matrix multiplications is . by subsequent applications of this algorithmwe thus conclude that the smith form of is .it follows that the invariant polynomial of has a root of multiplicity at if and only if the invariant polynomial of has a root of multiplicity at and a root of multiplicity at . from lemma [ analytic ] ,the jordan structures of are equal to those of .the thesis follows from the properties of the smith form and from lemma [ laurent ] .mutatis mutandis , a similar argument can be used to analyze the case of : notice in fact that is a factor of the invariant polynomial of if and only if is a factor of the invariant polynomial of .we will now specialize our analysis to the case of a matrix polynomial with palindromic structure .[ oddtoeven ] in this section , we will only treat the case of even degree palindromic matrix polynomials .notice in fact that an odd degree palindromic may always be transformed to an even degree palindromic , either by squaring the variable ( ) or by multiplication by .potentially , both actions may introduce problems : squaring the variable adds an additional symmetry to the spectrum while multiplying by increases by the multiplicity of as an eigenvalue .however , the first issue may be solved , after passing to laurent form , by the use of the change of variable .see also remark [ evenodd ] . 
regarding the latter issue , since one knows that he is adding n times there is no need to compute it : of the starting points of the ehrlich - aberth iteration shall be set equal to , and there they remain with no further corrections .the shortcoming is that the jordan structure at changes .let be a polynomial of even degree .by lemma [ laurent ] , switching to the laurent form is not harmful for finite nonzero eigenvalues and the corresponding ( generalized ) eigenvectors ; we can therefore consider its laurent counterpart three different kinds of palindromic structure can be defined .we say that the laurent polynomial is purely palindromic ( resp ., -palindromic , ) if the following relations hold between its matrix coefficients : it is well - known that the palindromic structure induces certain symmetries of eigenvalues and eigenvectors : in particular if is an eigenvalue , is a right eigenvector and is a left eigenvector , then , denoting complex conjugation with the operator in this paper we are primarily interested in the design of an efficient solver for eigenproblems . a numerical method will be presented in subsection [ tpalin ] .the proposed approach can however be described very easily with purely palindromic polynomials .thus we first consider this case for the sake of clarity .the most obvious way to deal with this kind of palindromicity is via introduction of the change of variable , in order to halve the degree of the polynomial .more explicitly , one can define ; clearly , the purely palindromic structure of guarantees that is itself a polynomial in the new variable . the next proposition is a simple application of lemmas [ laurent ] and [ change ] , and it relates eigenvectors and jordan chains of the two polynomials : when , the jordan structure of at the eigenvalue is equal to the jordan structure of at either or . if , has a jordan chain of length at if and only if has a jordan chain of length at .in particular , the eigenvectors of at are exactly the same of the eigenvectors of at ( or equivalently at , since they are the same ) .albeit very attractive , from a numerical point of view this trick is not very suitable as soon as one considers a high degree polynomial .in fact , the matrix coefficients of need to be computed as linear combinations of the ones of .since the powers of a binomial are involved , the coefficients of these linear combinations would exponentially grow with the polynomial degree . 
to circumvent this difficulty , we shall make use of the dickson polynomials .the polynomial is readily expressed in terms of the since in the dickson basis the coefficients are just the old ones and therefore no computation at all is needed , namely , the associated linearization has several computational advantages with respect to other customary linearizations of .its size is versus , the spectral symmetries are preserved and , moreover , the linearization displays a semiseparable structure .more precisely , it is of the form where is identity plus low rank while is hermitian plus low rank .this kind of structure is preserved under the qz algorithm and it may be exploited for the design of an efficient and numerically robust root - finder applied to the algebraic equation .consider now a t - palindromic polynomial of even degree .we will suppose once more that neither nor are eigenvalues , so that we can divide by and consider the laurent form , which is a t - palindromic laurent polynomial of degree both in and in .since the symmetry is still present in the spectrum , we expect that the dickson basis may still play a role .however , unlike the purely palindromic case , it is not possible to directly express a t - palindromic polynomial as a polynomial in the variable .in fact , splitting as the sum of its symmetric part and its skew - symmetric part we obtain that .\ ] ] if we introduce the new variables and , then can be expressed as a bivariate polynomial in and which is always linear in , that is , the property follows from by substituting notice moreover that is a symmetric polynomial ( that is to say , every matrix coefficient is symmetric ) , is skew - symmetric , and the operation of transposition corresponds to changing the sign of , that is , in principle one may think of treating with available techniques for the bivariate eigenvalue problem ( see e.g. and references therein ) , but actually and are not independent .they are related by the trigonometric dispersion relation .this suggests that it is possible to obtain a univariate polynomial by doubling the dimensions of the matrix coefficients .let us define then is a polynomial in of degree at most .moreover , it has the following property : if and are two distinct ( i.e. ) finite semisimple eigenvalues of with multiplicity , then is a semisimple eigenvalue for with multiplicity . to see this ,notice first that and hence , we find that since , as long as is defined ( that is to say or ) , then ^ 2 , \quad \forall \ ( y , w)\in \mathbb c\times \mathbb c.\ ] ] therefore , has algebraic multiplicity for if and only if has algebraic multiplicity for .this gives the factorization for a suitable polynomial having the zero of multiplicity . concerning eigenvectors ,if is semisimple , then let ( resp . ) , be the eigenvectors for ( resp . ) corresponding to : it can be easily checked that , where , are two linearly independent eigenvectors for corresponding to .thus , geometric multiplicity is also .indeed , something more can be said in the more general case of jordan chains .[ jc ] let be an eigenvalue of so that and are eigenvalues for . if then the jordan structure of at is equal to the union of the jordan structures of at and at ._ since is t - palindromic , it is clear that the jordan structure of at either or is the union of the jordan structures of at and at .define the matrix function , defined in the previous page , is analytic everywhere in the complex plane but on a branch semiline passing through the origin . 
since by hypothesis , the branch cut can be always chosen in such a way that is analytic in a neighborhood of , and thus is analytic in a neighborhood of . then we can apply lemma [ analytic ] to conclude that the jordan structures of and are the same .application of lemma [ change ] completes the proof . [ ins1 ]another remarkable property of is that its coefficients are all skew - hamiltonian , that is to say they can be written as where and is some skew - symmetric matrix .this link between t - palindromic and skew - hamiltonian polynomials is interesting because it may shed more light on the relation between several polynomial structures .it is known that one can easily transform a palindromic polynomial to an even polynomial by a cayley transformation , and then to a hermitian polynomial via a multiplication by ( if one started from a real polynomial ) or to a symmetric polynomial by squaring the matrix coefficients . on the other hand , hamiltonian polynomialscan lead to skew - hamiltonian polynomials by squaring each coefficients , and multiplication by sends a skew - hamiltonian polynomial to a skew - symmetric polynomial .the dickson change of variable , followed by doubling the dimension , is able to map t - palindromic polynomials of even degree to a subset of skew - hamiltonian polynomials .unlike some of the other mentioned maps , this is not a bijection between two classes of structured polynomials , because what is obtained is actually a subset of skew - hamiltonian polynomials .in fact , since the north - west and south - east coefficients of are the coefficients of they must be symmetric and there is a relation between the north - east and south - west coefficients of .however , a deeper investigation on this subject is needed in the future .[ evenodd ] notice that a similar technique can be applied to _ even / odd _ matrix polynomials , that is polynomials whose coefficients alternate between symmetric and skew - symmetric matrices . in this case , on can apply the transformation and use algebraic manipulations , akin to the ones described for the t - palindromic case , in order to build a new polynomial in with double dimensions .in the case of an odd - degree t - palindromic polynomial , the substitutions and lead to an such that is odd .therefore , one may apply and build a third polynomial in order to extract the additional structure .equation and enable the computation of the eigenvalues of to be reduced to solving algebraic equations . from proposition [ jc ]it follows that possible discrepancies in the jordan structures can be expected for and corresponding to and , respectively . when not only the proof we gave is not valid ( because , since is a branch point , there is no neighborhood of analyticity of the matrix function ) , but in fact the proposition itself does not hold .as a counterexample , let and consider the polynomial we have that is a jordan chain for at .the corresponding is which has a semisimple eigenvalue at with the corresponding eigenvectors and . if the leading coefficient of is not symmetric , then has extra infinite eigenvalues , where is the dimension of the matrix coefficients of . these eigenvalues are defective since their geometric multiplicity is only , where is the leading coefficient of . 
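As a sanity check of the representation used in the next section, the sketch below verifies numerically, for scalar stand-ins of the matrix coefficients, the identity on which the Dickson basis rests: with d_0(x) = 2, d_1(x) = x and d_k(x) = x d_{k-1}(x) - d_{k-2}(x), one has d_k(lam + 1/lam) = lam^k + lam^{-k}, so a purely palindromic Laurent polynomial can be re-expressed in the variable x = lam + 1/lam with its coefficients reused verbatim. The indexing convention (a[0] for the central coefficient) is an illustrative choice:

import numpy as np

def dickson(k, x):
    # d_0(x) = 2, d_1(x) = x, d_k(x) = x*d_{k-1}(x) - d_{k-2}(x);
    # these satisfy d_k(lam + 1/lam) = lam**k + lam**(-k).
    if k == 0:
        return 2.0
    d_prev, d_cur = 2.0, x
    for _ in range(k - 1):
        d_prev, d_cur = d_cur, x * d_cur - d_prev
    return d_cur

rng = np.random.default_rng(1)
d = 5
a = rng.standard_normal(d + 1)   # a[0]: central coefficient, a[k]: pair coefficients

lam = 0.7 + 1.3j                 # arbitrary nonzero point, lam != +-1
x = lam + 1.0 / lam

# Laurent form of a purely palindromic polynomial (scalar stand-in).
laurent = a[0] + sum(a[k] * (lam**k + lam**(-k)) for k in range(1, d + 1))

# The same value in the Dickson basis, reusing the coefficients verbatim.
in_dickson_basis = a[0] + sum(a[k] * dickson(k, x) for k in range(1, d + 1))

assert abs(laurent - in_dickson_basis) < 1e-9 * (1.0 + abs(laurent))

This is the reason no arithmetic on the coefficients is needed when switching to the Dickson basis, in contrast with the monomial basis of the previous section.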
for the numerical approximation of the roots of we can exploit again the properties of the dickson basis to compute the matrix coefficients of .the code below computes the matrices , , given in input the coefficients of , , defined as in .* function dickson * + * input : * + * output : * + ; ; + for + ; ; + end + , ; + for + + ; + end + ; ; + ; ; + for + ; + end + ; ; + for + ; + ] ; ; ; ; + ; ; + ; ; ; ; + ; ; ; + end + for + ; + end + for + ; + end + ; + for + ; ; + end + ; ; + for + ; + end + ; + for + ; + end + ; ; ;a simple tool for the simultaneous approximation of all the eigenvalues of a polynomial is the ehrlich - aberth method .bini and fiorentino showed that a careful implementation of the method yields an efficient and robust polynomial root finder .the software package mpsolve documented in is designed to successfully compute approximations of polynomial zeros at any specified accuracy using a multi - precision arithmetic environment .a root finder for t - palindromic eigenproblems can be based on the ehrlich - aberth method applied for the solution of the algebraic equation , where is related with by and is generated by the function * dickson * applied to the input coefficients of the t - palindromic matrix polynomial of degree given as in .the method simultaneously approximates all the zeros of the polynomial : given a vector , , of initial approximations to the zeros of , the ehrlich - aberth iteration generates a sequence , , which locally converges to the of the roots of , according to the equation the convergence is superlinear for simple roots and linear for multiple roots . in practice, the ehrlich - aberth method exhibits quite good global convergence properties , even though no theoretical results are known in this regard .the main requirements for an efficient implementation of the method are : 1 . a rule for choosing the initial approximations ; 2 . a fast , numerically robust and stable method to compute the newton correction ; 3 .a reliable stopping criterion .concerning the first issue it is commonly advocated that for scalar polynomials the convergence benefits from the choice of equally spaced points lying on some circles around the origin in the complex plane . in the case of matrix polynomials where the eigenvalues are often widely varying in magnitude this choice can not be optimal .a better strategy using the initial guesses lying on certain ellipses around the origin in the complex plane is employed in our method .the second task can be accomplished by means of the function * trace * in the previous section . with respect to the third issue ,it is worth observing that the ql - based method pursued for the trace computation also provides an estimate on the backward error for the generalized eigenvalue problem . from a result in follows that if is not an eigenvalue of then gives an appropriate measure of the backward error for the approximate eigenvalue .since for we have that in our implementation we consider the quantity as an error measure .if is smaller than a fixed tolerance then is taken as an approximate eigenvalue and the corresponding iteration is stopped .the resulting ehrlich - aberth algorithm for approximating finite eigenvalues of and hence obtaining the corresponding eigenvalues of is described below . in the next section we present results of numerical experiment assessing the robustness of the proposed approach. 
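As a point of reference for the pseudocode that follows, the core simultaneous update can be sketched in the scalar case as below. This is only a minimal illustration, assuming the standard Ehrlich-Aberth correction z_j <- z_j - N_j / (1 - N_j * sum_{i != j} 1/(z_j - z_i)) with the scalar Newton correction N_j = p(z_j)/p'(z_j); the matrix-polynomial routine below replaces N_j by the trace-based correction and adds per-root deflation and the backward-error stopping test discussed above:

import numpy as np

def ehrlich_aberth(coeffs, z0, tol=1e-12, max_sweeps=100):
    # Simultaneously refine approximations z0 of all roots of the scalar
    # polynomial with coefficients `coeffs` (highest degree first).
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = np.array(z0, dtype=complex)
    for _ in range(max_sweeps):
        converged = True
        for j in range(len(z)):
            newton = p(z[j]) / dp(z[j])                       # Newton correction
            aberth = sum(1.0 / (z[j] - z[i])                   # Aberth term
                         for i in range(len(z)) if i != j)
            step = newton / (1.0 - newton * aberth)
            z[j] -= step
            if abs(step) > tol * (1.0 + abs(z[j])):
                converged = False
        if converged:
            break
    return z

# Example: the four roots of z^4 - 1, started from points on a circle of radius 2.
start = 2.0 * np.exp(2j * np.pi * (np.arange(4) + 0.3) / 4)
print(np.sort_complex(ehrlich_aberth([1, 0, 0, 0, -1], start)))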
* function palindromic * + * input : * , , , initial guesses + * output : * approximations , , of the zeros of + = ] ; + ; ; + end the total cost of the algorithm is therefore operations , where is the total number of times that the function is called .numerical experiments presented in the next section show that heavily depends on the choice of the starting points . with a smart choice, is of order , which gives a total computational cost of .since the cost of our method grows as but is only quadratic in , where customary qz - like methods use operations , an ehrlich - aberth approach looks particularly suitable when the matrix polynomial has a high degree and small coefficients so that is large .it is worth noticing that the case of large can still be treated by means of an ehrlich - aberth method in operations .the basic observation is that the factor comes from the block structure of the linearization involved in the computation of the trace .a reduction of the cost can therefore be achieved by a different strategy where the linearization is initially converted into ( scalar ) triangular - hessenberg form : say , where is ( scalar ) triangular and is ( scalar ) hessenberg .the task can virtually be performed by any extension of the fast structured methods for the hessenberg reduction proposed in .these methods preserve the rank structure which can therefore be exploited also in the triangular - hessenberg linearization . once the matrices and have been determined then the computation of can be performed by the following algorithm which has a cost of operations : * perform a decomposition of the hessenberg matrix , obtaining a unitary matrix represented as product of givens transformations ( schur decomposition ) and a triangular matrix .* compute the last row of by solving and then computing . *recover the diagonal entries of from the entries of and the elements of the schur decomposition of .this alternative road leads to an algorithm of total cost operations .an efficient implementation exploiting the rank structures of the matrices involved will be presented elsewhere .the function * palindromic * for computing the roots of a t - palindromic matrix polynomial , given its coefficients , , has been implemented in matlab and then used for the computation of the zeros of polynomials of both small and high degree . the tolerance is fixed at and for the maximum number of iterations we set .extensive numerical experiments have been performed to illustrate some basic issues concerned with the efficiency and the accuracy of a practical implementation of our method .an accurate and efficient root - finder is essential to the success of our algorithm . in practice ,the cost of each iteration is strongly dependent on the amount of early convergence ( for the sake of brevity , in the following we will refer to this phenomenon using the word _ deflation _ ) occurring for a given problem . in other words , a critical point to assessthe efficiency of the novel method is the evaluation of the total number of calls of the function trace , and of its dependence on the total number of the eigenvalues . 
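For reference, the quantity that each call of the trace routine must deliver - the Newton correction of the characteristic polynomial p(x) = det(A - xB) of the linearized pencil - can also be computed densely from Jacobi's formula, d/dx log det(A - xB) = -trace((A - xB)^{-1} B). The sketch below is a plain reference implementation that ignores the rank structure exploited by the structured algorithms above and therefore costs a full dense solve per call; the matrix names and the finite-difference check are illustrative:

import numpy as np

def newton_correction(A, B, x):
    # Newton correction p(x)/p'(x) for p(x) = det(A - x*B), via
    # p'/p = -trace((A - x*B)^{-1} B), hence p/p' = -1 / trace((A - x*B)^{-1} B).
    M = A - x * B
    return -1.0 / np.trace(np.linalg.solve(M, B))

# Sanity check against explicit determinants via a central finite difference.
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x, h = 0.4 + 0.2j, 1e-6

p = lambda t: np.linalg.det(A - t * B)
finite_diff = (p(x + h) - p(x - h)) / (2 * h)
print(newton_correction(A, B, x), p(x) / finite_diff)   # the two values should roughly agree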
when the ehrlich - aberth method is used to approximate scalar polynomials roots ,experiments show that depends on the choice of the starting points .if there is not any a priori knowledge about the location of the roots , empirical evidence shows that choosing starting points distributed on some circles around the origin leads to acceptable performances and/or quite regular convergence patterns .the class h of t - palindromic polynomials have been used to verify if these properties still hold in the matrix case .the polynomials are constructed according to the following rules : from we find that most of the eigenvalues lie on the unit circle and for even is a double root of .figure 1 describes the convergence history for our root finder applied to with starting values equally spaced on the circle centered in the origin with radius 4 .the curves represented are generated by plotting the sequences , , for .the convergence is quite regular and very similar to that exhibited in the scalar polynomial case and theoretically predicted for simultaneous iterations based on newton - like methods . with this choice of starting pointswe have observed that the number of global iterations is typically of order of but there are not enough early deflations , that is , iterations that are prematurely stopped due to early convergence . in order to increase the cost savings due to premature deflation in our program we have employed a slightly refined strategy .since the method does not approximate directly the eigenvalues but their dickson transform , we have chosen starting points on the dickson transform of the circles , that is points lying on ellipses .more precisely , this is the algorithm we used to pick the starting points : * input : * number of eigenvalues to approximate and parameters and + * output : * starting points , + ; + =randn ; + for + =mod( ) ; + ; + ; + ; + + end + -0.6 cm the integer determines the number of ellipses whereas is used to tune the lenghts and , defined as above , of their semiaxes .we expect that a good choice for the parameters and depends on the ratio : when we expect many eigenvalues to lie on or near to the unit circle , while when we expect a situation more similar to the eigenvalues of a random matrix , with no particular orientation towards unimodularity .we therefore expect that a small ratio works well in the former case while on the contrary in the latter case should be a better choice .moreover , we expect that as grows it is helpful to increase the total number of ellipses as well .we show here some of the results on random t - palindromic polynomials .figure 2 refers to an experiment on small - dimensional , high - degree polynomials : the value of has been set to while was variable .the average number of over a set of random polynomials for each value of is shown on the graph .the parameters satisfy and and they are determined by and , where the integer is defined as .the graph shows a linear growth of with respect to .figure 3 refers to an experiment where on the contrary the case of small is explored .we have considered here and let vary and we show the results for plotted against for several choices of and .the choice labelled as step function is for and generated by and . 
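One possible implementation of the ellipse-based seeding used in these experiments is sketched below: equally spaced points on circles |lam| = r are pushed through the map x = lam + 1/lam, so the seeds lie on ellipses with semiaxes r + 1/r and |r - 1/r|. The radii, the random phase offset and the even split of points across ellipses are illustrative choices and not the exact parameter rule of the algorithm above:

import numpy as np

def starting_points(n_points, radii, rng=None):
    # Seed points for the Ehrlich-Aberth iteration in the x-variable.
    if rng is None:
        rng = np.random.default_rng()
    pts = []
    per_circle = int(np.ceil(n_points / len(radii)))
    for r in radii:
        phase = rng.uniform(0.0, 2.0 * np.pi)
        theta = phase + 2.0 * np.pi * np.arange(per_circle) / per_circle
        lam = r * np.exp(1j * theta)          # points on the circle |lam| = r
        pts.append(lam + 1.0 / lam)           # Dickson/Joukowski image: an ellipse
    return np.concatenate(pts)[:n_points]

# Example: 24 seeds spread over three ellipses.
seeds = starting_points(24, radii=[1.1, 2.0, 4.0], rng=np.random.default_rng(3))
print(seeds.round(3))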
once againthe experiments suggest that when the starting points are conveniently chosen for some constant and any in the specified range , and , moreover , the bound still holds for different reasonable choices of the parameters and .the experimentation with random polynomials gives as an estimate for the constant . in conclusion, the algorithm can greatly benefit from a smart strategy for the selection of the starting points by increasing the number of early deflations .the experiments show that as long as the starting points are suitably chosen the value of is proportional to . the other important aspect of our solver based on polynomial root - finding concerns the accuracy of computed approximations . in our experiencethe method competes very well in accuracy with the customary qz - algorithm .the accuracy of the computed non - exceptional roots for the random polynomials was always comparable with the accuracy of the approximation obtained with the method .the results of other numerical experiments confirm the robustness of the novel method .figure 4 illustrates the computed eigenvalues for the problem .figure 5 also reports the plot of the absolute error vector and , where is the vector formed by the eigenvalues computed in high precision arithmetic by mathematica while and are , respectively , the vectors formed from the eigenvalues returned by our routine * palindromic * and suitably sorted by the internal function . the numerical results put in evidence the following important aspects : 1 .poor approximations for the exact eigenvalue are in accordance with the theoretical predictions : in fact the reverse transformation from to is known to be ill - conditioned near ( or ) . since in this example is a defective eigenvalue , the approximations returned by have comparable absolute errors of order which are in accordance with the unstructured backward error estimates given in .the accuracy of the remaining approximations is unaffected from the occurrence of near - to - critical eigenvalues and is in accordance with the results returned by . for most non - exceptional eigenvalues ,the accuracy of approximations computed by our method is slightly better .this kind of behavior is confirmed by many other experiments .our method performs similarly to the qz for non - exceptional eigenvalues and for defective exceptional eigenvalues , but generally worse than qz and the structure - preserving methods for exceptional eigenvalues .in this paper we have shown that the ehrlich - aberth method can be used for solving palindromic and t - palindromic generalized eigenproblems .the basic idea can be applied to a generic matrix polynomial of any kind ; moreover , as we have shown in this paper , it is possible to adapt it in order to exploit certain structures as the palindromic structure that we have considered here .the resulting algorithm is numerically robust and achieves computational efficiency by exploiting the rank - structure of the associated linearization in the dickson basis .the algorithm is quite interesting for its potential for parallelization on distributed architectures and , moreover , can be easily incorporated in the mpsolve package to develop a multiprecision root finder for matrix polynomial eigenproblems . 
1 .the development of an automatic procedure for the selection of starting points is important to attain a low operation count due to the prevalence and ease of deflation .we have shown that a smart choice could be based on a few parameters to be determined from some rough information on the spectrum localization .the proposed algorithm is still inefficient with respect to the size of the polynomial coefficients .the preliminary reduction of the linearized problem into a hessenberg - triangular form is the mean to devise a unified efficient algorithm for both small and large coefficients .a fast reduction algorithm would be incorporated in our implementation .the algorithm should be able to exploit the rank structure of the linearization ( for large degrees ) , and , at the same time , the inner structure of the quasiseparable generators ( for large coefficients ) .3 . regarding the accuracy of the methodthere are still some difficulties in the numerical treatment of the critical cases .our current research is focusing on the issue of a structured refinement of the approximations of such eigenvalues .* acknowledgements . *the second author wishes to thank volker mehrmann for the many discussions and the valuable comments and suggestions provided during the gene golub siam summer school 2010 .we are grateful to the anonymous referees for their hints and suggestions .w. gander .zeros of determinants of -matrices . in vadim olshevsky and eugenetyrtyshnikov , editors , _ matrix methods : theory , algorithms and applications , dedicated to the memory of g. golub_. world scientific publisher , 2010 . | an algorithm based on the ehrlich - aberth root - finding method is presented for the computation of the eigenvalues of a t - palindromic matrix polynomial . a structured linearization of the polynomial represented in the dickson basis is introduced in order to exploit the symmetry of the roots by halving the total number of the required approximations . the rank structure properties of the linearization allow the design of a fast and numerically robust implementation of the root - finding iteration . numerical experiments that confirm the effectiveness and the robustness of the approach are provided . _ ams classification : _ 65f15 generalized eigenvalue problem , root - finding algorithm , rank - structured matrix |
in wireless networking , determining the sets of links that can be active simultaneously is a cornerstone optimization task of combinatorial nature . for a link to be active , a given signal - to - interference - and - noise ratio ( sinr ) thresholdmust be met at the receiver , according to the physical connectivity model . within this domain, previous analyses assume that the communication system employs single - user decoding ( sud ) receivers that treat interference as additive noise . for interference - limited scenarios ,it is very unlikely that all links can be active at the same time .hence , it is necessary to construct transmission schedules that orthogonalize link transmissions along some dimension of freedom , such as time .the schedule is composed by link subsets , each of which is a feasible solution to the link activation ( la ) problem .thus , for scheduling , repeatedly solving the la problem becomes the dominant computational task .intuitively , with sud , a solution to the la problem consists in links being spatially separated , as they generate little interference to each other .thus , scheduling amounts to optimal spatial reuse of the time resource .for this reason , scheduling is also referred to as spatial time - division multiple access ( stdma ) .optimal la has attracted a considerable amount of attention .problem complexity and solution approximations have been addressed in .a recent algorithmic advance is presented in .research on scheduling , which uses la as the building block , is extensive ; see , e.g , and references therein .in addition to scheduling , la is an integral part of more complicated resource management problems jointly addressing scheduling and other resource control aspects , such as rate adaptation and power control , as well as routing , in ad hoc and mesh networks ; see , e.g. , . in the general problem setting of la , each linkis associated with a nonnegative weight , and the objective is to maximize the total weight of the active links . the weights may be used to reflect utility values of the links or queue sizes .a different view of weights comes from the column generation , method proposed in , which has become the standard solution algorithm for scheduling as well as for joint scheduling , power control , and rate adaptation .the algorithm decomposes the problem to a master problem and a subproblem , both of which are much more tractable than the original . solving the subproblem constructs a feasible la set . in the subproblem ,the links are associated with prices coming from the linear programming dual , corresponding to the weights of our la problem .a special case of the weights is a vector of ones ; in this case , the objective becomes to maximize the cardinality of the la set .all aforementioned previous works on optimal la have assumed sud , for which interference is regarded as additive noise . in this work ,we examine the problem of optimal la under a novel setup ; namely when the receivers have multiuser decoding ( mud ) capability . note that , unlike noise , interference contains encoded information and hence is a structured signal .this is exploited by mud receivers to perform interference cancellation ( ic ) .that is , the receivers , before decoding the signal of interest , first decode the interfering signals they are able to and remove them from the received signal . 
for ic to take place , a receiver acts as though it is the intended receiver of the interfering signal .therefore , an interfering signal can be cancelled , i.e. , decoded at the rate it was actually transmitted , only if it is received with enough power in relation to the other transmissions , including the receiver s original signal of interest . in other words ,the `` interference - to - other - signals - and - noise '' ratio ( which is an intuitive but non - rigorous term in this context ) , must meet the sinr threshold of the interfering signal . with mud, the effective sinr of the signal of interest is higher than the original sinr , with sud , since the denominator now only contains the sum of the residual , i.e. , undecoded , interference plus noise .clearly , with mud , concurrent activation of strongly interfering links becomes more likely , enabling activation patterns that are counter - intuitive in the conventional stdma setting .the focus of our investigation is on the potential of ic in boosting the performance of la .because la is a key element in many resource management problems , the investigation opens up new perspectives of these problems as well .the topic of implementing mud receivers in real systems has recently gained interest , particularly in the low sinr domain using low - complexity algorithms ; see , e.g. , . technically , implementing ic is not a trivial task .a fundamental assumption in mud is that the receivers have information ( codebooks and modulation schemes ) of the transmissions to be cancelled .furthermore , the transmitters need to be synchronized in time and frequency .finally , the receivers must estimate , with sufficient accuracy , the channels between themselves and all transmitters whose signals are trying to decode . for our work ,we assume that mud is carried out without any significant performance impairments , and examine it as an enabler of going beyond the conventionally known performance limits in wireless networking . hence, the results we provide effectively constitute upper bounds on what can be achievable , for the considered setup , in practice .the significance of introducing mud and more specifically ic to wireless networking is motivated by the fundamental , i.e. , information - theoretic , studies of the so - called interference channel , which accurately models the physical - layer interactions of the transmissions on coupled links .the capacity region of the interference channel is a long - standing open problem , even for the two - link case , dating back to . up to now , it is only known in a few special cases ; see , e.g. , for some recent contributions .two basic findings , regarding optimal treatment of interference in the two - link case , can be summarized as follows .when the interference is very weak , it can simply be treated as additive noise .when the interference is strong enough , it may be decoded and subtracted off from the received signal , leaving an interference - free signal containing only the signal of interest plus thermal noise . 
from a physical - layer perspective, the simple two - link setting above corresponds to a received signal consisting of , where is the signal of interest , with received power and encoded with rate , is the interference signal with received power encoded with rate , and is the receiver noise with power .assuming gaussian signaling and capacity achieving codes , the interference is `` strong enough '' to be decoded , treating the signal of interest as additive noise , precisely if where denotes the sinr threshold for decoding the interference signal .if condition ( [ eq : intro : example1 ] ) holds , i.e. , the `` interference - to - other - signal - and - noise '' ratio is at least , can be decoded perfectly and subtracted off from .then , decoding the signal of interest is possible , provided that the interference - free part of has sufficient signal - to - noise ratio ( snr ) where denotes the sinr threshold for decoding signal .by contrast , if the interference is not sufficiently strong for ( [ eq : intro : example1 ] ) to hold , then it must be treated as additive noise .in such a case , decoding of signal is possible only when this way of reasoning can be extended to more than one interfering signals . towards this end , we examine the effect of ic in scenarios with potentially many links in transmission .our study has a clear focus on performance engineering in wireless networking with arbitrary topologies . in consequence , a thorough study of the gain of ic to la is highly motivated , in view of the pervasiveness of the la problem in resource management of many types of wireless networks . in the multi - link setup that we consider , the optimal scheme is to allow every receiver perform ic successively , i.e. , in multiple stages . in every stage ,the receiver decodes one interfering signal , removes it from the received signal , and continues as long as there exists an undecoded interfering link whose received signal is strong enough in relation to the sum of the residual interference , the signal of interest , and noise .this scheme is referred to as _ successive ic ( sic)_. from an optimization standpoint , modeling sic mathematically is very challenging , because the order in which cancellations take place is of significance .clearly , enumerating the potential cancellation orders will not scale at all .thus compact formulations that are able to deliver the optimal order are essential , especially under the physical connectivity model , which quantifies interference accurately .alternatively , a simplified ic scheme is to consider only the cancellations that can be performed concurrently , in a single stage . in this scheme , when determining the possibility for the cancellation of an interfering link , all remaining transmissions , no matter whether or not they are also being examined for cancellation , are regarded as interference .we refer to this scheme as _parallel ic ( pic)_. it is easily realized that some of the cancellations in sic may not be possible in pic ; thus one can expect that the gain of the latter is less than that of the former .a further restriction is to allow at most one cancellation per receiver .this scheme , which we refer to as _ single - link ic ( slic ) _ , poses additional limit on the performance gain . however , it is the simplest scheme for practical implementation and frequently captures most of the performance gain due to ic . in comparison to sic ,pic and slic are much easier to formulate mathematically , as ordering is not relevant . 
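The two-link conditions above translate directly into a small decision rule; the sketch below (the function name and the numerical values are purely illustrative) checks whether a receiver can decode its signal of interest either after cancelling the interferer or by treating the interference as noise:

def can_decode(p_signal, p_interf, noise, gamma_s, gamma_i):
    # Two-link example: first try to decode the interferer while treating the
    # own signal as noise; if that succeeds, cancel it and check the residual
    # SNR, otherwise check the SINR with the interference kept as noise.
    # Returns (decodable, interference_cancelled); powers are received powers.
    if p_interf / (p_signal + noise) >= gamma_i:            # interference strong enough
        return (p_signal / noise >= gamma_s, True)           # decode after cancellation
    return (p_signal / (p_interf + noise) >= gamma_s, False)  # treat as noise

# Illustrative numbers (not from the paper): a strong interferer that a
# single-user-decoding receiver could not tolerate becomes harmless with IC.
noise, gamma = 1.0, 2.0
print(can_decode(p_signal=4.0, p_interf=12.0, noise=noise, gamma_s=gamma, gamma_i=gamma))
# -> (True, True): 12/(4+1) >= 2 allows cancellation, then 4/1 >= 2.
print(can_decode(p_signal=4.0, p_interf=1.5, noise=noise, gamma_s=gamma, gamma_i=gamma))
# -> (False, False): 1.5/(4+1) < 2, and 4/(1.5+1) < 2 with interference as noise.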
in , we evaluated the potential of slic in the related problem of sinr balancing .that is , we considered as input the number of active links , let the transmit powers be variables and looked for the maximum sinr level that can be guaranteed to all links . in ,we exploited link rate adaptation to maximize the benefits of ic to aggregate system throughput . in parallel ,another set of authors has made a relevant contribution in the context of ic .they considered a sic - enabled system and introduced a greedy algorithm to construct schedules of bounded length in ad - hoc networks with mud capabilities .there though , the interference is modeled using the protocol - based model of conflict graphs , which simplifies the impact of interference , in comparison to the more accurate physical connectivity model that we are using .the overall aim of our work is to deliver a comprehensive theoretical and numerical study on optimal link activation under this novel setup in order to provide insight into the gains from adopting interference cancellation .this is achieved through the following contributions : * first , we introduce and formalize the optimization problems of la in wireless networks with pic and sic , focusing on the latter most challenging case . *second , we prove that these optimization problems are np - hard . *third , we develop ilp formulations that enable us to approach the global optimum for problem sizes of practical interest and thus provide an effective benchmark for the potential of ic on la . *fourth , we present an extensive numerical performance evaluation that introduces insight into the maximum attainable gains of adopting ic . we show that for some of the test scenarios the improvement is substantial .the results indicate that strong interference can indeed be taken as a great advantage in designing new notions for scheduling and cross - layer resource allocation in wireless networking with mud capabilities .the remainder of the paper is organized as follows . in section [sec : preliminaries ] , we introduce the notation and formalize the novel optimization problems . in section[ sec : complexity ] , we prove the theoretical complexity results . in section [ sec :singlestage ] , we propose a compact ilp formulation for the la problem with pic having quadratic size to the number of links .for the most challenging problem of la with sic , we devote two sections . in section [ sec : sicuniform ] , we treat sic under a common sinr threshold . by exploiting the problem structure ,we show that the order of cancellations can be conveniently modeled and derive an ilp formulation of quadratic size .then , in section [ sec : sicvariable ] , we consider individual sinr thresolds . for this case , we give an ilp formulation of cubic size . in section[ sec : simulation ] , we present and discuss simulation results evaluating the performance of all proposed ic schemes . 
in section [ sec : conclusion ] , we give conclusions and outline perspectives .consider a wireless system of pairs of transmitters and receivers , forming directed links .the discussions in the forthcoming sections can be easily generalized to a network where the nodes can act as both transmitters or receivers .let denote the set of links .the gain of the channel between the transmitter of link and the receiver of link is denoted by , for any two .the noise power is denoted by and , for simplicity , is assumed equal at all receivers .the sinr threshold of link is denoted by .each link is associated with a predefined positive activation weight , reflecting its utility value or queue size or dual price .if a link is activated , its transmit power is given and denoted by , for .a la set is said to be feasible if the sinr thresholds of the links in the set are met under simultaneous transmission .all versions of the la problems we consider have the same input that we formalize below .+ * input : * a link set with the following parameters : transmit powers , sinr thresholds , and link weights , , and gain values , .consider first the la problem with the conventional assumption of sud , where the interference is treated as additive noise .this is the baseline version of the la problem in our comparisons ; its output is formulated as follows ._ optimal link activation with single - user decoding ._ + * output : * an activation set , maximizing and satisfying the conditions : + this classical version of the la problem can be represented by means of an ilp formulation ; see , e.g. , .a set of binary variables , is used to indicate whether or not each of the links is active .the activation set is hence . in order to ease comparisons to the formulations that are introduced later , we reproduce below the formulation of la - sud : [ eq : prelim : model] \text{s . t.}~~ & \frac{p_k g_{kk } + m_{k}(1 - x_{k})}{\sum_{m \not = k}\limits p_m g_{mk } x_m + \eta } \geq \gamma_k \qquad \forall k \in \k , \label{eq : prelim : sinr}\\ & x_k \in \{0 , 1\ } \qquad \forall k \in \k .\label{eq : prelim : x_k01}\end{aligned}\ ] ] the objective function aims to maximize the total weight of the la set .the constraints formulate the sinr criteria .if , indicating that link is active , the inequality constrains the sinr of link to be at least . for the case that link is not active , , the inequality in is always satisfied , i.e. , it has null effect , if parameter is set to a sufficiently large value . by construction ,an obvious choice is .note that the size of the formulation , both in the numbers of variables and constraints , is of .now , consider the same system but with receivers having mud capability that enable cancellation of strongly interfering links .we distinguish between ic in a single stage ( pic ) and in multiple stages ( sic ) . in the former , to cancel the transmission of an interfering link , all other signals of active links , including the signal of interest , are considered to be additive noise , independent of other cancellation decisions at the same receiver .a formal definition of the output is given below ._ optimal link activation with parallel interference cancellation . 
_ + * output : * an activation set and the set of cancelled transmissions for each , maximizing and satisfying the conditions : & \frac{p_k g_{kk}}{\sum_{m\in\a \setminus\{k,\c_k\ } } \limits p_m g_{mk } + \eta } \geq \gamma_k \qquad \forall k \in \a .\label{eq : sinr1}\end{aligned}\ ] ] the set of conditions ensures that the specified cancellations can take place . for the receiver of link to cancel the transmission of link , the receiver of acts as if it was the receiver of .hence , the `` interference - to - other - signals - and - noise '' ratio incorporates in the numerator the received power of the interfering link and in the denominator the received power of own link .this ratio must satisfy the sinr threshold of the signal to be decoded .the set of conditions formulates the sinr requirements for the signals of interest , taking into account the effect of ic in the sinr ratio .that is , the cancelled terms are removed from the sum in the denominator , determining the aggregate power of the undecoded interference which is treated as additive noise . for sic , the output must be augmented in order to specify , in addition to the cancellations , by the receiver of , the order in which they take place .a formal definition of the output is given below ._ optimal link activation with successive interference cancellation ._ + * output : * an activation set and the set of cancelled transmissions along with a bijection for each , maximizing and satisfying the conditions : similarly to la - pic , the set of conditions formulates the requirement for sic and the set the requirement for decoding the signals of interest , taking into account the effect of ic in the sinr ratio . in the output ,the cancellation sequence for each in the activation set is given by the bijection ; the bijection defines a unique mapping of the link indices in the cancellation set to the ic order numbers in the cancellation sequence .that is , defines the stage at which link is cancelled by the receiver of link .the bijection is used in the ic conditions , in order to exclude from the sum in the denominator , the interference terms that have been cancelled in stages prior to .the baseline problem , la - sud , is known to be np - hard ; see , e.g. , . fora combinatorial optimization problem , introducing new elements to the problem structure may change the complexity level , potentially making the problem easier to solve . hence , without additional investigation , the np - hardness of la - sud does not carry over to la with ic . in this section , we provide the theoretical result that problems la - pic and la - sic remain np - hard , using a unified proof applicable to both cases . [theorem : hardness ] problem la - pic is np - hard . we provide a reduction from la - sud to la - pic . considering an arbitrary instance of la - sud, we construct an instance of la - pic as follows . for each link , we go through all other links in one by one .let be the link under consideration .the power of link is set to where is a small positive constant . by ,the power of is either kept as before , or grows by an amount such that .therefore , link is not able to decode the signal of , i.e. , the ic condition of la - pic can not be satisfied , even in the most favorable scenario that all other links , apart from and , are inactive . 
after any power increase of link , we make sure that this update does not have any effect in the application of to the other links .this is achieved by scaling down the channel gain as meaning that for any , the received signal strength from remains the same as in the original instance of la - sud . as a result , even though ic is allowed in the instance of la - pic , no cancellation will actually take place , since none of the ic conditions holds due to the scalings in and . by the construction above , for each link the total interference that is treated as noise in the instance of la - pic equals that in the instance of la - sud .thus , the denominator of the sinr of the signal of interest does not change . on the other hand ,the numerator may have grown from to . to account for this growth ,the sinr threshold is set to in effect , the increase of the power on a link , if any , is compensated for by the new sinr threshold .note that , because , prohibits cancellation of the signal by any receiver other than the one . from the construction, one can conclude that a la set is feasible in the instances of la - sud , if and only if this is the case in the instance of la - pic .in addition , the reduction is clearly polynomial . hence the conclusion .[ th : sichardness ] problem la - sic is np - hard .the result follows immediately from the fact that , in the proof of theorem [ theorem : hardness ] , the construction does not impose any restriction on the number of links to be cancelled , nor to the order in which the cancellations take place .in this section , we propose a compact ilp formulation for la - pic .in addition to the , variables in , we introduce a second set of binary variables , .variable is one if the receiver of link decodes and cancels the interference from link and zero otherwise .the output of la - pic is then defined by and , for each .the proposed formulation for la - pic is [ eq : ssic_model ] & y_{mk } \leq x_k \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : ssic_activ_cond_k } \\[1ex ] & \frac{p_k g_{kk } + m_{k}(1 - x_{k})}{\sum_{m \neq k}\limits p_m g_{mk } ( x_m -y_{mk } ) + \eta } \geq \gamma _ k \qquad \forall k \in \k , \label{eq : ssic_sinr_k } \\[1ex ] & \frac{p_m g_{mk } + m_{mk}(1 - y_{mk})}{\sum_{n \neq m}\limits p_n g_{nk } x_n + \eta } \geq \gamma_m \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : ssic_isnr_m } \\[1ex ] % two - column version % & \frac{p_m g_{mk } + m_{mk}(1 - y_{mk})}{\sum_{n \neq m}\limits p_n g_{nk } x_n + \eta } \geq \gamma_m \nonumber\\ % & \qquad \qquad \qquad \qquad \qquad ~~ \forall m , k \in \k , ~ m \neq k , \label{eq : ssic_isnr_m } \\[1ex ] & y_{mk } \in \{0 , 1\ } \qquad \forall m , k\in \k , ~ m \neq k,\\[1ex ] & x_{k } \in \{0 , 1\ } \qquad \forall k \in \k.\end{aligned}\ ] ] the objective function is the same as for la - sud .the first two sets of inequalities , and , pose necessary conditions on the relation between the variable values .namely , a cancellation can take place , i.e. , , only if both links and are active , i.e. 
, .the set of inequalities formulates the sinr requirements for decoding the signals of interest , in a way similar to for la - sud , with the difference that here the cancelled interference terms are subtracted from the denominator using the term .note that , without , the formulation will fail , as in it would allow to reduce the denominator of the ratio by subtracting non - existing interference from non - active links .the next set of constraints formulates the condition for pic : can be set to one only if the interference from link , , is strong enough in relation to all other active signals , including the signal of interest .if the ratio meets the sinr threshold for link , cancellation can be carried out .setting to zero is always feasible , on the other hand , provided that the parameter is large enough .a sufficiently large value is .the construction of reflects the fact of performing all cancellations in a single stage , as in cancelling the signal of link , other transmissions being cancelled in parallel are treated as additive noise . note that the model remains in fact valid even if is removed .doing so would allow the receiver of an inactive link to perform ic .however , since an inactive link does not contribute at all to the objective function , this is a minor `` semantic '' mismatch that can be simply alleviated by post - processing . for practical purposes , each receiver may be allowed to cancel the signal of at most one interfering link .the resulting la problem , denoted la - slic , can be easily formulated by adapting the formulation for la - pic .the only required change is the addition of the set of constraints that restricts each receiver to cancel at most one interfering transmission .the size of the formulation , both in the numbers of variables and constraints , is of .thus , the formulation is compact and its size grows by one magnitude in comparison to .in fact , to incorporate cancellation between link pairs , one can not expect any optimization formulation of smaller size .when implementing the formulation , two pre - processing steps can be applied to reduce the size of the problem and hence speed - up the calculation of the solution .first , the links that are infeasible , taking into account only the receiver noise , are identified by checking for every receiver whether the received snr meets the sinr threshold for activation .if the answer is `` false '' , i.e. , , then link is removed from consideration by fixing the variable to zero .second , the link pairs for which cancellation can never take place are found by checking , for every receiver and interfering signal , whether the `` interference - to - signal - of - interest - and - noise '' ratio meets the sinr threshold for decoding the interference signal .if the answer is `` false '' , i.e. , , then link can not decode the interference from and this option is eliminated from the formulation by setting the respective variable to zero .incorporating the optimal ic scheme , sic , to the la problem is highly desired , since it may activate sets that are infeasible by pic . however , using ilp to formulate compactly the solution space of la - sic , is challenging .this is because the formulation has to deal , for each link , with a bijection giving the cancellation sequence .we propose an ilp approach and present it in two steps . in this section ,we consider la - sic under the assumption that all links have a common sinr threshold for activation , i.e. , . 
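Before turning to the SIC formulations, the two variable-fixing reductions just described (which, as noted later, carry over unchanged to the SIC models) can be sketched as follows; the data layout is an illustrative choice:

def preprocess(p, g, noise, gamma):
    # Variable-fixing pass:
    # (i)  fix x_k = 0 for links whose SNR cannot meet the activation threshold
    #      even in the absence of any interference;
    # (ii) fix y_mk = 0 for pairs where the interference from m can never be
    #      decoded at the receiver of k, even with every other link silent.
    # p[k]: transmit power, g[m][k]: gain from transmitter m to receiver k,
    # gamma[k]: SINR threshold of link k.
    K = range(len(p))
    infeasible_links = {k for k in K
                        if p[k] * g[k][k] / noise < gamma[k]}
    impossible_cancellations = {(m, k) for m in K for k in K
                                if m != k and
                                p[m] * g[m][k] / (p[k] * g[k][k] + noise) < gamma[m]}
    return infeasible_links, impossible_cancellations

Both tests only fix variables that can never take the value one at a feasible point, so the reductions shrink the formulations without affecting the optimum.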
in the next section ,we address the general case of individual sinr thresholds . for sic under common sinr threshold ,we exploit the problem structure and show that the optimal cancellation order can be handled implicitly in the optimization formulation . as a result , we show that la - sic can in fact be formulated as compactly as la - pic , i.e. , using variables and constraints .the idea is to formulate an optimality condition on the ordering of ic . to this end , consider an arbitrary link and observe that meeting the sinr threshold for decoding the signal of interest is equivalent to having in the receiver , after ic , a total amount of undecoded interference and noise at most equal to .we refer to this term as the _ interference margin _ . similarly , the interference margin that allows cancellation of the interference from link at the receiver of link is .consider any two interfering links and , and suppose .note that , because the sinr threshold is common , the condition is equivalent to , i.e. , the receiver of link experiences stronger interference from .if the condition holds , the cancellation of should be `` easier '' in some sense .thus , one may expect that if can decode both and , the decoding of should take place first . in the following ,we prove a theorem , stating that this is indeed the case at the optimum there exists an optimal solution having the structure in which if a weaker interference signal can be cancelled , then any stronger one is cancelled before it .[ th : order ] if and the receiver of link is able to cancel the signal of , then it is feasible to cancel the signal of before and there exists at least one optimum having this structure in the cancellation sequence .let denote the total power of undecoded interference and noise when the receiver of decodes the signal from .assume that has not been cancelled in a previous stage .then , is part of .successful cancellation of means that .since , it holds that .consider now decoding the signal of immediately before .thus for this cancellation , the total power of the undecoded interference and noise incorporates the interference of , but not that of , i.e. , .because implies , it holds that . since ,the cancellation condition is satisfied . after cancelling , ic can still take place for , because the new is decreased by .consequently , both and can be cancelled .obviously , doing so will not reduce the number of active links and the theorem follows . by theorem[ th : order ] , for each link , we can perform a pre - ordering of all other links in descending order of their interference margins .sic at link can be restricted to this order without loss of optimality . at the optimum, the cancellations performed by , for interfering links that are active , will follow the order , until no more additional cancellations can take place . in this optimal solution ,when considering the cancellation condition of interfering link , interference can only originate from links appearing after in the sorted sequence .we propose an ilp formulation based on theorem [ th : order ] .the formulation uses the same variables of for la - pic , as there is no need to formulate the cancellation order explicitly .the sorted sequence is denoted by , for each link , a bijection , where is the position of link in the sorted sequence .the sorting results in if . 
in case of , the tie can be broken arbitrarily without affecting the optimization result .in addition , let denote the number of links appearing after in the sorted sequence for .the proposed formulation for la - sic under common sinr threshold is [ eq : modelic ] & y_{mk } \leq x_k \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : cond_k } \\[1ex ] & \frac{p_k g_{kk } + m_{k}(1 - x_{k})}{\sum_{m \neq k}\limits p_m g_{mk } ( x_m -y_{mk } ) + \eta } \geq \gamma \qquad ~ \forall k \in \k , \label{eq : sinr_k } \\[1ex ] & \frac{p_m g_{mk } + m_{mk}(1 - y_{mk})}{\sum _ { n \neq k , ~ i_{k}(n ) >i{_{k}(m ) } } \limits p_n g_{nk } x_n + p_kg_{kk } + \eta } \geq \gamma \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : sinr_mk } \\[1ex ] & \sum _ { n \neq k , ~ i_{k}(n ) >i{_{k}(m ) } } y_{nk } \leq c_{mk } ( 1 - x_m + y_{mk } ) \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : ordering } \\[1ex ] % two - column version % & \frac{p_m g_{mk } + m_{mk}(1 - y_{mk})}{\sum _ { n \neq k , ~ i_{k}(n ) >i{_{k}(m ) } } \limits p_n g_{nk } x_n + p_kg_{kk } + \eta } \geq \gamma \nonumber\\ % & \qquad \qquad \qquad \qquad \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : sinr_mk } \\[1ex ] % & \sum _ { n \neq k , ~ i_{k}(n ) >i{_{k}(m ) } } y_{nk } \leq c_{mk } ( 1 - x_m + y_{mk } ) \nonumber \\ % & \qquad \qquad \qquad \qquad \qquad \forall m , k \in \k , ~ m \neq k , \label{eq : ordering } \\[1ex ] & y_{mk } \in \{0 , 1\ } \qquad \forall m , k \in \k , ~ m \neq k , \\[1ex ] & x_{k } \in \{0 , 1\ } \qquad \forall k \in \k .\end{aligned}\ ] ] the first three constraint sets have the same meaning with for la - pic ; see section [ sec : singlestage ] . the constraint set ( [ eq : sinr_mk ] )formulates the conditions for sic , making use of theorem [ th : order ] . consider the condition for cancellation of signal from receiver in stage . then, in the denominator of the ratio , the sum of undecoded interference is limited to the transmissions coming after in the sorted sequence of , since all other active links with higher interference margin than have already been cancelled .the formulation is however not complete without .this set of constraints , in fact , ensures the optimality condition set by theorem [ th : order ] and utilized in .that is , if both and are active , , and is cancelled by , then is cancelled by as well .equivalently speaking , if is active but not cancelled by , then none of the other links after in the sequence of may be cancelled .examining , we see that it has no effect as long as equals . if link is active but not cancelled , corresponding to and , the right - hand side of becomes zero , and therefore no cancellation will occur for any having position after in the ordered sequence .also , note that the case but can not occur , because of .given a solution to the formulation , the cancellation sequence of each active link , i.e. , the bijection in the definition of la - sic in section [ sec : preliminaries ] , is easily obtained by retrieving from the predefined bijection the elements with .the compactness of the formulation is manifested by the fact that its size , in both the numbers of variables and constraints , is of .thus , provided that there is a common sinr threshold for activation , we have formulated la - sic as compactly as la - pic .when implementing the formulation , similar pre - processing steps with for la - pic can be applied to reduce the size of the problem . 
first , the infeasible links are removed for consideration by fixing to zero when .second , the infeasible ic options are eliminated from the formulation by fixing to zero when .in this section , we consider the la - sic problem under the most general setup ; namely when the links have individual sinr thresholds . differently from the common sinr case , treated in section [ sec : sicuniform ] ,a pre - ordering of the sequence of potential ic does not apply .the reason is that the interference margin does not depend anymore only on the received power but also on the link - specific sinr threshold . to see this point , consider a scenario where link attempts to cancel the signal of two interfering links and in two consecutive stages .denote by the sum of the remaining interference , other than or , the received power of link s own signal , and noise .assume a mismatch between the relation of interference margin and that of received power : but because . if cancels and then , the cancellation conditions are and . reversing the cancellation order leads to the conditions and .for our example , we let .consider two sets of values for the other parameters .the values in the first set are : , , , and in the second set are : , , . for the first set , both interfering links can be cancelled only if cancellation applies to first , whereas the opposite order must be used for the second set .hence the interference margin ( or received power ) does not provide a pre - ordering for cancellation . in the following ,we propose an ilp formulation for la - sic under individual sinr thresholds , that explicitly accounts for the cancellation order .our approach is to introduce for each pair of links , , a set of binary variables and represent the cancellation stage by the superscript .variable is one if and only if the receiver of link cancels the interference from link at stage .the effect is that , for each link , the solution values of order the feasible cancellations ; hence , they have a direct correspondence to the output bijection of la - sic , defined in section [ sec : preliminaries ] .it is apparent that the index ranges between one and . in practice , due to computational considerations, we may want to restrict the maximum number of cancellation stages .to this end , we define , for each , the integer parameters and the sets . the proposed formulation of the general la - sic problem , under individual sinr thresholds and restricted cancellation stages , is [ eq : finalmodel ] & \text{s . t. 
} \nonumber \\ & \sum _ { t=1}^{t_k } y^{t}_{mk}\leq x_m , \qquad \forall m , k\in \k , ~ m \neq k , \label{eq : finalactiv}\\[1ex ] & \sum_{m \neq k } y^{t}_{mk } \leq x_k , \qquad \forall k \in \k , ~\forall t\in\t_k , \label{eq : finalstage}\\[1ex ] & \frac{p_k g_{kk } + m_{k}(1 - x_{k})}{\sum_{m \not = k}\limits p_m g_{mk } \big(x_m -{\sum _ { t=1}^{t_k}\limits y^{t}_{mk}}\big ) + \eta } \geq \gamma_k \qquad \forall k \in \k , \label{eq : finalsinr } \\[1ex ] & \frac{p_m g_{mk } + m_{mk } ( 1-y_{mk}^t)}{\sum_{n \not = m , k } \limits p_n g_{nk } \big(x_n - { \sum_{t'=1}^{t-1 } \limits y^{t'}_{nk } } \big ) + p_k g_{kk } + \eta } \geq \gamma_m \qquad \forall m , k \in \k , ~ m \neq k , ~\forall t\in\t_k , \label{eq : finalisnr } \\[1ex ] % two - column version % & \frac{p_m g_{mk } + m_{mk } ( 1-y_{mk}^t)}{\sum_{n \not = m , k } \limits p_n g_{nk } \big(x_n - { \sum_{t'=1}^{t-1 } \limits y^{t'}_{nk } } \big ) + p_k g_{kk } + \eta } \geq \gamma_m \nonumber \\ % & \qquad\qquad\qquad\qquad\quad \forall m , k \in \k , ~ m \neq k , ~\forall t\in\t_k , \label{eq : finalisnr } \\[1ex ] & \sum_{m \neq k } \limits y^{t}_{mk } \leq \sum_{m \neq k}\limits y^{t-1}_{mk } \qquad \forall k \in \k , ~\forall t \in \t_k \setminus\{1\ } , \label{eq : finalblock}\\[1ex ] & y^{t}_{mk } \in \{0 , 1\ } \qquad \forall m , k \in \k , ~ m \neq k , ~\forall t\in\t_k , \label{eq : finaly}\\[1ex ] & x_{k } \in \{0 , 1\ } \qquad \forall k \in \k.\label{eq : finalx}\end{aligned}\ ] ] the conditions have similar role with .namely , only when links and are active , the receiver of can consider to cancel the transmission of .in addition , the summation over in ensures that link is cancelled in at most one stage .furthermore , the summation over in enforces each receiver to perform at most one cancellation per stage .this removes equivalent solutions , without compromizing optimality , to enhance computational efficiency .the sinr requirements for decoding the signals of interest are set in , similarly to . in the denominator ,all cancelled links , regardless of the stage the cancellation is performed , are removed from the sum of undecoded interference .the next set of constraints formulates the requirement for the cancellation of link by link at stage .the active interfering transmissions that have been cancelled before stage are excluded from the sum of undecoded interference in the denominator of the ratio .the constraints are formulated with the convention that the sum within the parenthesis in the denominator of the ratio is zero for .note that , even though for each receiver and interfering link , constraints are formulated , due to , all but at most one will be trivially satisfied by the respective variabls being equal to zero . the constraints are not mandatory for the correctness of the formulation , but their role is to enhance the computational efficiency .these constraints ensure that the cancellations are performed as `` early '' as possible , i.e. 
+ since formulates the most general la - sic problem , it also applies to the common - sinr case of section [ sec : sicuniform ] . its computational efficiency , though , is significantly lower than that of formulation . the reason is that its size is one order of magnitude larger than , i.e. , the numbers of variables and constraints grow from to . however , we note that the formulation remains compact . in order to deal with the scalability issue , one may restrict the maximum number of cancellation stages , , to a constant considerably lower than . doing so has little impact on the solution quality , because most of the performance gain from ic is due to the first few cancellations . also , when implementing the formulation , similar pre - processing steps to those of can be applied , see section [ sec : sicuniform ] , to reduce the size of the problem . this section presents a quantitative study of the effect that ic has on the optimal la problem in wireless networking . the ilp formulations , proposed in sections [ sec : singlestage][sec : sicvariable ] , are utilized to conduct extensive simulation experiments on randomly generated network instances with various topologies , densities , cardinalities , and sinr thresholds . nodes are uniformly scattered in square areas of m and m , in order to create sparse and dense topologies , respectively . two types of datasets are generated . the first one takes an information - theoretic viewpoint and is henceforth denoted dataset i. to this end , the transmitter - receiver matchings are arbitrarily chosen , with the sole criterion of feasible single - link activation . thus , the links have arbitrary length within the test area , provided that their snr is larger than the sinr threshold required for activation . the second dataset provides a rather networking - oriented approach and is henceforth denoted dataset n. in this dataset , the length of the links is constrained to be from m up to m , with the rationale of producing instances that resemble a multihop network . the networks considered have cardinality ranging from up to links . fig . [ fig : instances ] illustrates instances of a 20-link network ; figs . [ fig : setup20spit ] and [ fig : setup20deit ] correspond to the sparse and dense topology , respectively , of dataset i , whereas figs . [ fig : setup20spnw ] and [ fig : setup20denw ] correspond to the sparse and dense topology , respectively , of dataset n. + the input parameters are chosen to be common for all links ; specifically , the transmit power , , is set to , the noise power to , and the channel gains follow the geometric , distance - based , path loss model with an exponent of . the major difference between the datasets is the distribution of the link lengths , which effectively determines the snr distribution of the links . the input parameters yield minimum snr approximately equal to , , and for dataset sparse i , dense i , and n , respectively .
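the instance generation just described can be summarized in a few lines of code . the sketch below is a simplified reading of the procedure ; the numeric defaults , helper names and the absence of fading are assumptions for illustration :

```python
import random

def path_gain(d, alpha):
    """geometric, distance-based path loss with exponent alpha."""
    return d ** (-alpha)

def random_instance(n_links, side, p_tx, eta, gamma, alpha, dataset="I",
                    lmin=30.0, lmax=60.0):
    """draw transmitter/receiver positions in a side x side square.
    dataset 'I': arbitrary link lengths, kept only if the single-link snr clears gamma;
    dataset 'N': link lengths forced into [lmin, lmax].
    all numeric defaults are illustrative, not the values used in the paper."""
    tx, rx = [], []
    while len(tx) < n_links:
        t = (random.uniform(0, side), random.uniform(0, side))
        r = (random.uniform(0, side), random.uniform(0, side))
        d = ((t[0] - r[0]) ** 2 + (t[1] - r[1]) ** 2) ** 0.5
        if dataset == "N" and not (lmin <= d <= lmax):
            continue
        if dataset == "I" and p_tx * path_gain(d, alpha) / eta < gamma:
            continue                     # single-link activation must be feasible
        tx.append(t)
        rx.append(r)
    # full gain matrix g[m][k]: from transmitter of link m to receiver of link k
    g = [[path_gain(max(((tx[m][0] - rx[k][0]) ** 2 +
                         (tx[m][1] - rx[k][1]) ** 2) ** 0.5, 1.0), alpha)  # guard tiny distances
          for k in range(n_links)] for m in range(n_links)]
    return tx, rx, g
```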
the histograms in fig . [ fig : snr ] illustrate the snr distribution of each dataset ; as in fig . [ fig : instances ] , left and right sub - figures are for dataset i and n , respectively , whereas upper and lower sub - figures are for sparse and dense topologies , respectively . for dataset i , the links in the sparse topology have on average lower snr than in the dense topology ; the mass of the snr distribution is roughly for in the sparse and for in the dense topology . this is because in the sparse topology the test area is enlarged , allowing generation of longer links which have lower snr values . on the contrary , the snr distribution of dataset n is invariant to the network density ; this is by construction , since the distribution of the link lengths is not affected by the size of the test area . + for each dataset and network cardinality , instances are generated and the performance of la with ic is assessed by two simulation studies . in the first study , all links are assumed to require for activation a common sinr threshold , , taking values from up to , and have equal activation weights , e.g. , , . the goal is to evaluate the performance gain due to single - link , parallel , and successive ic schemes on the la problem over the baseline approach without ic . for this purpose , we implemented the formulations , , , and , for la - sud , la - slic , la - pic , and la - sic , respectively . fig . [ fig : activation ] illustrates exemplary activation sets for an instance of a 10-link network , drawn from dataset dense i , when the sinr threshold is . it is evidenced that performance increases with problem sophistication : figs . , , , and show that la - sud , la - slic , la - pic , and la - sic activate , , , and links , respectively . the optimal solutions are found by an off - the - shelf solver , implementing standard techniques such as branch - and - bound and cutting planes . the simulations were performed on a server with a quad - core amd opteron processor at 2.6 ghz and 7 gb of ram . the ilp formulations were implemented in ampl 10.1 using the gurobi optimizer ver . 3.0 . regarding the computational complexity of the proposed ilp formulations for ic , an empirical measure is the running time of the solution process . we have observed that it is not an obstacle for practical instance sizes . + in the following , a selection of the simulation results is presented . fig . [ fig : sinr-6 ] shows the average , over instances , number of activated links versus the total number of links in the network , achieved by all versions of the la problem when the sinr threshold is . the results in the four sub - figures correspond to the datasets exemplified in fig . [ fig : instances ] . the major observation is that all la schemes with ic clearly outperform la - sud and in particular la - sic yields impressive performance . comparing figs . [ fig : sinr-6is ] and [ fig : sinr-6id ] , it can be concluded that the results for dataset i are density invariant . as the number of links in the network increases , the performance of la - sud improves , due to the diversity , almost linearly but with a very small slope . la - sic , though , improves significantly , activating two to three times more links than the baseline . when the network has up to about links , nearly all of them are activated with la - sic . on the other hand , la - pic has a consistent absolute gain over la - sud , activating one to two links more . furthermore , la - slic has almost as good performance as la - pic , i.e. , it captures most of the gain due to single - stage ic .
fig . [ fig : sinr-6nd ] shows that the la schemes have similar performance in dataset dense n as in dataset i. fig . [ fig : sinr-6ns ] shows that la is easier for dataset sparse n , even without ic . the curves of all schemes linearly increase with network cardinality , but with ic the slopes are higher , so that the absolute gains , differences from the baseline , broaden . maximum gains are for links , where la - sud , la - slic , la - pic , and la - sic activate about , , , and links , respectively . for the tested network cardinalities , la - sic achieves the best possible performance , activating all links . + as seen in fig . [ fig : sinr-6 ] , the performance gains due to ic are very significant when the sinr threshold for activation is low . however , for high sinr thresholds , the gains are less prominent . for example , fig . [ fig : sinr3 ] shows the performance of the la schemes when is set to . figs . [ fig : sinr3is ] and [ fig : sinr3ns ] are for the sparse datasets i and n , respectively ; for the dense topologies the results are similar to fig . [ fig : sinr3is ] . it is evidenced that ic schemes activate one to two links more than the baseline and that most of this gain can be achieved with single - stage ic . + the fact that the ic gains diminish as the sinr threshold increases is clearly illustrated in fig . [ fig : size30 ] , which compares , for networks of 30 links , the average performance of all la schemes for various sinr thresholds . the relative gain of sic is more prominent in the case of dataset i , which is more challenging for the baseline problem . for dataset i , when the sinr threshold is low , around , sic activates nearly all links , whereas sud activates less than a third of them . for sparse and dense dataset n , sic activates effectively all links when the sinr threshold is lower than and , respectively , whereas sud activates less than two thirds and less than half of them , respectively . for mid - range sinr thresholds , up to about , sic has an exponentially decreasing performance , but nevertheless still significantly outperforms sud . on the other hand , for sinr thresholds up to about , pic yields a relatively constant performance improvement of roughly two to five links , depending on the dataset . pic is effectively equivalent to its simpler counterpart slic for sinr higher than . the performance of all ic schemes converges for sinr thresholds higher than 3 db . the interpretation is that if ic is possible , it is most likely restricted to a single link . for very high sinr thresholds , ic is rarely possible . + in the second simulation setup , the performance of the general la - sic problem , under individual sinr thresholds , is evaluated . the sinr threshold for each link takes , with equal probability , one of the values in the set , and the activation weights are set equal to the data rates , in bits per second per hertz , corresponding to the respective sinr thresholds . the formulation is implemented varying the maximum number of cancellation stages , , from , corresponding to the baseline case without ic , up to . fig . [ fig : individual ] shows the average , over 30 instances , throughput of all activated links versus the network cardinality , for all the datasets . for dataset i , the network throughput is almost doubled with ic ; roughly half of this increase is achieved by the first cancellation stage and most of the rest by the next two to three stages . for dataset n , it is seen that the first cancellation stage yields a significant gain of about b / s / hz and that it only pays off to have more than two cancellation stages for large and sparse networks .
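as a complement to the optimization results above , the staged cancellation conditions of ( [ eq : finalisnr ] ) and ( [ eq : finalsinr ] ) can also be checked directly for a candidate activation set and per - receiver cancellation order , which is convenient when post - processing solver output . a minimal sketch , with hypothetical variable names and the same quantities as before ( powers p , gains g , thresholds gamma , noise eta ) :

```python
def sic_feasible(active, order, k, p, g, gamma, eta):
    """check whether the receiver of link k can successively cancel the interferers
    listed in 'order' (earliest stage first) and then decode its own signal.
    'active' is the set of active links; no optimization is involved here."""
    cancelled = set()
    for m in order:
        undecoded = sum(p[n] * g[n][k] for n in active if n not in cancelled | {m, k})
        # link m is decoded while link k's own signal still acts as interference
        if p[m] * g[m][k] / (undecoded + p[k] * g[k][k] + eta) < gamma[m]:
            return False
        cancelled.add(m)
    remaining = sum(p[n] * g[n][k] for n in active if n not in cancelled | {k})
    return p[k] * g[k][k] / (remaining + eta) >= gamma[k]
```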
in this paper , we have addressed the problem of optimal concurrent link activation in wireless systems with interference cancellation . we have proved the np - hardness of this problem and developed integer linear programming formulations that can be used to approach the exact optimum for parallel and successive interference cancellation . using these formulations , we have performed numerical experiments to quantitatively evaluate the gain due to interference cancellation . the simulation results indicate that for low to medium sinr thresholds , interference cancellation delivers a significant performance improvement . in particular , the optimal sic scheme can double or even triple the number of activated links . moreover , node density may also affect performance gains , as evidenced in one of the datasets . given these gains and the proven computational complexity of the problem , the development of approximation algorithms or distributed solutions incorporating ic is of high relevance . concluding , the novel problem setting of optimal link activation with interference cancellation we have introduced here provides new insights for system and protocol design in the wireless networking domain , as in this new context , strong interference is helpful rather than harmful . thus , the topic calls for additional research on resource allocation schemes in scheduling and routing that can take advantage of the interference cancellation capability . indeed , the la setup studied herein assumes fixed transmit power for active links . this can lead to increased interference levels , since the sinrs can be oversatisfied . incorporating power control into the la problem with ic will bring in another design dimension that can yield additional gains . furthermore , it may enable ic even for high sinr thresholds . v. s. annapureddy and v. v. veeravalli , `` gaussian interference networks : sum capacity in the low interference regime and new outer bounds on the capacity region , '' _ ieee trans . theory _ , vol . 55 , no . 6 , pp . 3032 - 3050 , 2009 . a. capone , g. carello , i. filippini , s. gualandi , and f. malucelli , `` routing , scheduling and channel assignment in wireless mesh networks : optimization models and algorithms , '' _ ad hoc netw . _ , vol . 8 , pp . 545 - 563 , 2010 . a. capone , l. chen , s. gualandi , and d. yuan , `` a new computational approach for maximum link activation in wireless networks under the sinr model , '' _ ieee trans . wireless commun . _ , vol . 10 , no . 5 , pp . 1368 - 1372 , 2011 . l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling for maximum throughput in multihop radio networks , '' _ ieee trans . on automat . _ , pp . 1936 - 1949 , 1992 . | a fundamental aspect in performance engineering of wireless networks is optimizing the set of links that can be concurrently activated to meet given signal - to - interference - and - noise ratio ( sinr ) thresholds . the solution of this combinatorial problem is the key element in scheduling and cross - layer resource management . previous works on link activation assume single - user decoding receivers , that treat interference in the same way as noise . in this paper , we assume multiuser decoding receivers , which can cancel strongly interfering signals .
as a result , in contrast to classical spatial reuse , links being close to each other are more likely to be active simultaneously . our goal here is to deliver a comprehensive theoretical and numerical study on optimal link activation under this novel setup , in order to provide insight into the gains from adopting interference cancellation . we therefore consider the optimal problem setting of successive interference cancellation ( sic ) , as well as the simpler , yet instructive , case of parallel interference cancellation ( pic ) . we prove that both problems are np - hard and develop compact integer linear programming formulations that enable us to approach the global optimum solutions . we provide an extensive numerical performance evaluation , indicating that for low to medium sinr thresholds the improvement is quite substantial , especially with sic , whereas for high sinr thresholds the improvement diminishes and both schemes perform equally well . |
ensemble classifiers have become very popular for classification and regression tasks .they offer the potential advantages of robustness via bootstrapping , feature prioritization , and good out - of - sample performance characteristics ( ) .however , they suffer from lack of interpretability , and oftentimes features are reported as `` word bags '' - e.g. by feature importance ( ) .generalized linear models , a venerable statistical toolchest , offer good predictive performance across a range of prediction and classification tasks , well - understood theory ( advantages and modes of failure ) and implementation considerations and , most importantly , excellent interpretability . until recently ,there has been little progress in bringing together ensemble learning and glms , but some recent work in this area ( e.g. ) has resulted in publicly - available implementations of glm ensembles .nevertheless , the resulting ensembles of glms remain difficult to interpret .meantime , human understanding of models is pivotal in some fields - e.g. in translational medicine , where machine learning influences drug positioning , clinical trial design , treatment guidelines , and other outcomes that directly influence people s lives .improvement in performance without interpretability can be useless in such context . to improve performance of maximum - likelihood models , proposed to learn multiple centroids of parameter space .built bottom - up , such ensembles would have only a limited number of models , keeping the ensemble interpretable . in this paper, we work from a model ensemble down .we demonstrate that minimum description length - motivated ensemble summarization can dramatically improve interpretability of model ensembles with little if any loss of predictive power , and outline some key directions in which these approaches may evolve in the future .the problem of ml estimators being drawn to dominant solutions is well understood .likewise , an ensemble consensus can be drawn to the ( possibly infeasible ) mode , despite potentially capturing the relevant variability in the parameter space .relevant observations on this issue are made in , who have proposed centroid estimators as a solution .working from the ensemble backwards , we use this idea as the inspiration to compress ensembles to their constituent centroids . 
in order to frame the problem of ensemble summarization as that of mdl - driven compression , we consider which requirements a glm ensemble must meet in order to be compressible , and what is required of the compression technique . to wit , these are : 1 . representation : * the ensemble members need to be representable as vectors in a cartesian space * the ensemble needs to be `` large enough '' with respect to its feature set * the ensemble needs to have a very non - uniform distribution over features 2 . compression : the compression technique needs to * capture the ensemble as a number of overlapping or non - overlapping clusters * provide a loss measure * formulate a `` description length '' measure it is easy to see that glm ensembles can satisfy the representation requirement very directly . it is sufficient to view ensembles of _ regularized _ glms as low - dimensional vectors in a high - dimensional space . the dimensionality of the overall space will somewhat depend on the cardinality of the ensemble , on the strictness of regularization used , on the amount of signal in the data , on the order of interactions investigated , and on other factors influencing the search space of the optimizer generating the ensemble of glms . coordinates in this space can be alternately captured by ( ideally standardized ) coefficients or , perhaps more meaningfully , by some function of statistical significance of the terms . in this work , we apply the latter . for representation , we choose a basis vector of subnetworks . in order to identify this basis vector , we have experimented with gaussian mixture decomposition ( gmm ) ( finding clusters of vectors in model space ) and hierarchical clustering . for performance reasons , we present results using the latter technique , despite its shortcomings : instability and inability to fit overlapping clusters ( this may lead to overfitting ) . nevertheless , in practice we find that this latter technique performs reasonably well . optionally , to summarize the clusters , centroids can be fit _ de novo _ once these groups of models are identified , or medoids can be used , obviating the need for further fitting . here we use the first method , refitting centroids from training data on just the terms occurring in the models in a given cluster . lastly , the bayesian information criterion ( _ bic _ ) satisfies the representation scoring requirement . the likelihood term serves as the loss function and the penalty term captures `` description length '' ( ) . the bic - regularized glm ensembles were fit for binary - outcome datasets used in and using the software from the same paper ( number of bags = 100 , other settings left at defaults ) . the result of this step was an ensemble which , ignoring the outcome variable and the intercepts , could be captured via a non - sparse matrix as follows : where , the ensemble dimensionality , refers to the number of fitted models and to the number of terms found in the whole fitted ensemble . importantly , is always an arbitrary parameter , a fact that partially motivated our study .
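to make the representation requirement concrete , each fitted ensemble member can be mapped to one row of a models - by - terms matrix . the sketch below anticipates the significance - based entries described in the next section ; the -log10 transform , the toy data and all names are illustrative assumptions :

```python
import numpy as np

def significance_matrix(models, terms):
    """models: list of dicts mapping term name -> p-value for the terms retained by
    that ensemble member; terms: ordered list of all terms seen in the ensemble.
    entries for absent terms are 0; -log10 is one reasonable log-scaling (an assumption,
    the text only states that significances are log-scaled)."""
    S = np.zeros((len(models), len(terms)))
    for i, m in enumerate(models):
        for j, t in enumerate(terms):
            if t in m:
                S[i, j] = -np.log10(max(m[t], 1e-300))   # guard against p == 0
    return S

# toy usage: three fitted members over four candidate terms
models = [{"x1": 1e-4, "x3": 0.02}, {"x1": 5e-5, "x2": 0.3}, {"x3": 0.01, "x4": 0.04}]
terms = ["x1", "x2", "x3", "x4"]
S = significance_matrix(models, terms)   # shape (3, 4)
```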
for each dataset , the fitted ensembles were then compressed using the following procedure . first of all , for each ensemble we created the significance matrix s : where , and the p - value is determined from the fit of the linear model of the glm ensemble ( s is the heatmap in figure [ figure1 ] ) . each row of projects a single model into a multivariate cartesian space where each axis corresponds to a model term observed in the whole ensemble and each coordinate corresponds to log - scaled term significance . log - scaling is introduced to induce separability of models that share terms in the presence or absence of collinear covariates that would be expected to influence , but not obviate , the significance of shared terms . note that , in this representation , if is missing , . as an example of what the and matrices would look like after this step , for the * compressed * ensemble in figure [ figure2 ] we would have = and = . of course , in practice it is the full ensemble we are interested in representing in this manner en route to compressing it , so and will have their dimensions , and , of for a computational - biological application using a regularized glm ensemble construction approach . having constructed the matrix s for each ensemble in this manner , we clustered its rows ( the models coefficients ) using ward s clustering criterion ( ) and the euclidean distance metric . we then traversed the resulting dendrogram from left to right , using each cutpoint to perform model assignment to clusters implied by the leaves of the resulting dendrogram ( figure [ figure1 ] ) . we captured the assignment of models to k clusters produced by each step in this traversal as a column vector expressed as a categorical variable with unique values , such that . then , for each model ensemble term , we extracted the vector , a column slice through the matrix s for term . we next defined a linear model with parameters . the success of ensemble compression via the clusters implied by could then be assessed by defining , being the log - likelihood of the model . this evaluation of cost across clustering levels to find maximal average likelihood compression , , could be viewed as using the bayes factor as the loss function for optimization , and the process of describing the model ensemble by centroids ( or medoids ) of the clusters of models described by could be described as an mdl - driven compression of the ensemble , using the bic - penalized likelihood as the measure of optimal compression . it is worth adding that the aforementioned gmm approach for cluster membership assignment , which can also be driven by bic ( , ) , does not imply a specific nested membership of models in clusters , but generally results in cluster membership strongly correlated with that identified via the hierarchical model clustering technique . while the gmm approach is more robust ( e.g. , it is not path - dependent and does not require specification of a linkage function ) , it scales worse when the number of terms in the ensemble is large .
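the clustering and penalized - likelihood selection just described can be condensed as follows , assuming the matrix s from the previous sketch . the exact likelihood and penalty bookkeeping of the paper may differ ; this is a bic - style simplification :

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def bic_for_cut(S, labels):
    """gaussian log-likelihood of modelling each term (column of S) by its cluster
    means, penalized by the number of fitted means (a bic-style score)."""
    n = S.shape[0]
    uniq = np.unique(labels)
    ll = 0.0
    for col in S.T:
        means = np.array([col[labels == c].mean() for c in uniq])
        fitted = means[np.searchsorted(uniq, labels)]
        sigma2 = max((col - fitted).var(), 1e-12)
        ll += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    n_params = len(uniq) * S.shape[1]
    return -2.0 * ll + n_params * np.log(n * S.shape[1])

def compress_ensemble(S, max_clusters=None):
    """ward clustering of the rows of S and selection of the dendrogram cut with the
    lowest penalized score; returns a cluster label for each ensemble member."""
    Z = linkage(S, method="ward", metric="euclidean")
    best = None
    for k in range(1, (max_clusters or S.shape[0]) + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        score = bic_for_cut(S, labels)
        if best is None or score < best[0]:
            best = (score, labels)
    return best[1]
```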
using the datasets described above , we performed 3-fold cross - validation repeated three times , and for each training fold and repeat fitted medoid- and centroid - compressed model ensembles . for each held - out fold , we then computed out - of - sample auc for every method ( table [ table : table1 ] ) . additionally , we performed paired one - tailed t - tests comparing medoid and centroid compression strategies to uncompressed ensembles across folds . all aucs arising from repeats and folds were averaged prior to the t - test to avoid pseudo - replication issues . while medoids performed slightly worse , on average , than uncompressed ensembles , the centroids performance was degraded only in the statistically `` suggestive '' sense ( 0.05 < p < 0.1 ) . note that in our experience using this technique on real - world datasets , larger datasets and continuous outcomes result in even smaller , if any , degradation of performance , with logistic regression being the setting where performance degrades the most . in other words , we believe that these results , reported for binary outcomes , are essentially a lower bound . maximum - likelihood methods identify the most likely fit in the parameter space . however , unless the most likely fit is vastly superior to all others and is sharply defined , a rare scenario in practice , the total probability of this fit in the infinite model ensemble may be very small . for that reason , model ensembles are thought to be superior to individual models . ensemble construction gains power by sampling multiple models from the parameter space but , by so doing , loses interpretability by introducing alternative parameter configurations and values . we build on the understanding that model parameter space centroids should be sufficient to capture predictive power of large ensembles ( ) , while observing that such centroids exhibit better interpretability by having fewer parameters among the alternative models . working top - down , we demonstrate and validate on several datasets a novel approach to summarizing ensembles of glms . our data show this approach can result in models nearly identical to full ensembles in performance and vastly superior in interpretability , owing to dramatically reduced ensemble sizes ( figure [ figure2 ] ) . in addition , since this approach can operate on any models that can be shoehorned into a cartesian space , it shows promise for compressing and thus summarizing ensembles of other types , for instance , causal model ensembles with individual models represented as orderings ( ) . we posit that our approach can broaden the applicability of ensemble methods in general , making their use possible for a wide range of applications where the bottleneck has been the interpretability of results . future directions of research may include multivariate classification methods beyond gmm and hierarchical clustering , as well as extension of this methodology beyond ensembles of glms to other types of predictive ensembles . presented at nips 2016 workshop on interpretable machine learning in complex systems . aucs by method and dataset . last column shows p - value of one - tailed paired t - test vs randomglm ( : the compressed ensemble performs as well as the full ensemble ) . values are averaged over 3 folds and 3 repeats . because of multiple repeats , standard errors are not shown .
the authors would like to acknowledge leon furchtgott and fred gruber for their invaluable feedback on the manuscript , and fred gruber for his help with latex . song , l. , langfelder , p. , horvath , s. ( 2013 ) random generalized linear model : a highly accurate and interpretable ensemble predictor . _ bmc bioinformatics _ 14:5 pmid : 23323760 doi : 10.1186/1471 - 2105 - 14 - 5 teyssier , m. and koller , d. ( 2005 ) . ordering - based search : a simple and effective algorithm for learning bayesian networks . _ proceedings of the twenty - first conference on uncertainty in ai ( uai ) _ ( pp . 584 - 590 ) . chitraa , v. , thanamani , a.s . ( 2013 ) , review of ensemble classification . _ international journal of computer science and mobile computing a monthly journal of computer science and information technology _ , vol . 2 , issue 5 , may 2013 , pg . 307 - 312 liaw , a. , wiener , m. ( 2002 ) classification and regression by randomforest r news 18 - 22 hansen , m.h . , and yu , b. model selection and the principle of minimum description length . _ journal of the american statistical association _ 96.454 ( 2001 ) : 746 - 774 . carvalho , l.e . , lawrence , c.e . , centroid estimators for inference in high - dimensional discrete spaces _ pnas _ , 105 : 3209 - 3214 ( 2008 ) ward , j. h. , jr . hierarchical grouping to optimize an objective function , _ journal of the american statistical association _ , 58 , 236 - 244 ( 1963 ) fraley , c. , raftery , a.e . , murphy , t.b . , and scrucca , l. ( 2012 ) mclust version 4 for r : normal mixture modeling for model - based clustering , classification , and density estimation technical report no . 597 , department of statistics , university of washington fraley , c. and raftery , a.e . ( 2002 ) model - based clustering , discriminant analysis and density estimation journal of the american statistical association 97:611 - 631 | over the years , ensemble methods have become a staple of machine learning . similarly , generalized linear models ( _ glms _ ) have become very popular for a wide variety of statistical inference tasks . the former have been shown to enhance out - of - sample predictive power and the latter possess easy interpretability . recently , ensembles of glms have been proposed as a possibility . on the downside , this approach loses the interpretability that glms possess . we show that minimum description length ( _ mdl_)-motivated compression of the inferred ensembles can be used to recover interpretability without much , if any , downside to performance and illustrate on a number of standard classification data sets . |
regular expressions ( res ) , because of their succinctness and clear syntax , are the common choice to represent regular languages . equivalent deterministic finite automata ( dfa ) would be the preferred choice for pattern matching or word recognition as these problems can be solved efficiently by dfas . however , minimal dfas can be exponentially bigger than res . nondeterministic finite automata ( nfa ) obtained from res can have a number of states linear with respect to ( w.r.t ) the size of the res . because nfa minimization is a pspace - complete problem , other methods must be used in order to obtain small nfas usable for practical purposes . conversion methods from res to equivalent nfas can produce nfas with or without transitions labelled with the empty word ( -nfa ) . here we consider several constructions of small -free nfas that were recently developed or improved , and that are related to the ones of glushkov and mcnaughton - yamada . the nfa size can be reduced by merging equivalent states . another solution is to simplify the res before the conversion . gruber and gulan showed that res in reduced star normal form ( ) achieve some conversion lower bounds . our experimental results corroborate that res must be converted to reduced . in this paper we present the implementation within the * fado * system of several algorithms for constructing small -free nfas from res , and a comparison of regular expression measures and nfa sizes based on experimental results obtained from uniformly random generated res . we consider nonredundant res and res in reduced in particular . let be an _ alphabet _ ( set of _ letters _ ) . a _ word _ over is any finite sequence of letters . the _ empty word _ is denoted by . let be the set of all words over . a _ language _ over is a subset of . the set of _ regular expressions _ ( re ) over is defined by : where the operator ( concatenation ) is often omitted . the language associated to is inductively defined as follows : , , for , , , and . two regular expressions and are _ equivalent _ if , and we write . the algebraic structure constitutes an idempotent semiring , and with the unary operator , a kleene algebra . there are several ways to measure the size of a regular expression . the _ size _ ( or _ ordinary length _ ) of is the number of symbols in , including parentheses ( but not the operator ) ; the _ alphabetic size _ ( or ) is its number of letters ( multiplicities included ) ; and the _ reverse polish notation size _ is the number of nodes in its syntactic tree . the _ alphabetic size _ is considered in the literature the most useful measure , and will be the one we consider here for several re measure comparisons . moreover all these measures are identical up to a constant factor if the regular expression is reduced . let be if , and otherwise . a regular expression is _ reduced _ if it is normalised w.r.t the following equivalences ( rules ) : a re can be transformed into an equivalent reduced re in linear time .
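for illustration , the re measures and the reduction step can be prototyped on a small syntax - tree representation . this is a generic sketch , not the fado implementation , and the simplification rules shown are the standard epsilon / empty - set identities ( the exact rule list used in the text is not reproduced here ) :

```python
# regular expressions as nested tuples:
# ("empty",), ("eps",), ("sym", a), ("union", r, s), ("cat", r, s), ("star", r)

def alphabetic_size(r):
    op = r[0]
    if op == "sym":
        return 1
    if op in ("union", "cat"):
        return alphabetic_size(r[1]) + alphabetic_size(r[2])
    if op == "star":
        return alphabetic_size(r[1])
    return 0                                   # eps, empty

def reduce_re(r):
    """apply the usual epsilon / empty-set simplifications bottom-up."""
    op = r[0]
    if op in ("sym", "eps", "empty"):
        return r
    if op == "star":
        s = reduce_re(r[1])
        return ("eps",) if s[0] in ("eps", "empty") else ("star", s)
    a, b = reduce_re(r[1]), reduce_re(r[2])
    if op == "cat":
        if a[0] == "empty" or b[0] == "empty":
            return ("empty",)
        if a[0] == "eps":
            return b
        if b[0] == "eps":
            return a
        return ("cat", a, b)
    # union
    if a[0] == "empty":
        return b
    if b[0] == "empty":
        return a
    return ("union", a, b)

# (eps + a) . (empty)*  reduces to  eps + a, with alphabetic size 1
r = ("cat", ("union", ("eps",), ("sym", "a")), ("star", ("empty",)))
assert reduce_re(r) == ("union", ("eps",), ("sym", "a"))
assert alphabetic_size(reduce_re(r)) == 1
```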
a _ nondeterministic automaton _ ( nfa ) is a quintuple , where is a finite set of states , is the alphabet , the transition relation , the initial state , and the set of final states . the _ size _ of an nfa is . for and , we denote by , and we can extend this notation to , and to . the _ language _ accepted by is . two nfas are _ equivalent _ if they accept the same language . if two nfas and are isomorphic we write . an nfa is _ deterministic _ ( dfa ) if for each pair there exists at most one such that . a dfa is _ minimal _ if there is no equivalent dfa with fewer states . minimal dfas are unique up to isomorphism . given an equivalence relation on , for we write {e}] for the corresponding equivalence class . the equivalence relation is _ right invariant _ w.r.t an nfa if and for any , if , then . the quotient automaton satisfies . given two equivalence relations over a set , and , we say that is _ finer _ than ( and _ coarser _ than ) if and only if . we consider three methods for constructing small nfas from a regular expression such that , i.e. , they are _ . the position automaton construction was independently proposed by glushkov , and mcnaughton and yamada . let for , and let . we consider the expression obtained by marking each letter with its position in , . the same notation is used to remove the markings , i.e. , . for and , let , , and . let . the _ position automaton _ for is , with and if , and , otherwise . we note that the number of states of is exactly . another interesting property is that is _ homogeneous _ , i.e. , all transitions arriving at a given state are labelled by the same letter . brggemann - klein showed that the construction of can be obtained in ( ) if the regular expression is in the so - called _ star normal form _ ( ) , i.e. , if for each subexpression of , and . for every there is an equivalent re in star normal form that can be computed in linear time and such that . ilie and yu introduced the construction of the follow automaton from a re . their initial algorithm begins by converting into an equivalent -nfa from which the follow automaton is obtained . for efficiency reasons we implemented that method in the * fado * library . the _ follow automaton _ is a quotient of the position automaton w.r.t the right - invariant equivalence given by the _ follow relation _ . let be a set of regular expressions . then if and . for and , the set of _ partial derivatives _ of w.r.t . is defined inductively as follows : this definition can be extended to sets of regular expressions , words , and languages . given and , for , , for , and for . the _ set of partial derivatives _ of is denoted by . given a regular expression , the partial derivative automaton , introduced by mirkin and antimirov , is defined by where , for all and . champarnaud and ziadi showed that the partial derivative automaton is also a quotient of the position automaton . et al . proved that for a re reduced and in star normal form the size of its partial derivative automaton is always smaller than that of its follow automaton . the automata presented here , and , can in the worst case be constructed in time and space , and have , in the worst case , size , where is the size of the re . recently , nicaud showed that on average the size of the automata is linear . the best worst - case construction of -free nfas from res is the one presented by hromkovic _ et al . _ , which can be constructed in and has size . however this construction is not considered here .
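the partial derivative automaton lends itself to a compact worklist construction . the sketch below follows antimirov s definitions on a tuple - based re representation ( the empty - set constant is omitted for brevity ) ; it is in the spirit of the nfapd method discussed next , not the fado code :

```python
# res as nested tuples: ("eps",), ("sym", a), ("union", r, s), ("cat", r, s), ("star", r)

def ewp(r):                          # empty-word property: does eps belong to L(r)?
    op = r[0]
    if op in ("eps", "star"):
        return True
    if op == "sym":
        return False
    if op == "union":
        return ewp(r[1]) or ewp(r[2])
    return ewp(r[1]) and ewp(r[2])   # cat

def _cat(t, s):                      # concatenate, simplifying eps on the left
    return s if t == ("eps",) else ("cat", t, s)

def pderiv(r, a):                    # partial derivatives of r w.r.t. the letter a
    op = r[0]
    if op == "eps":
        return set()
    if op == "sym":
        return {("eps",)} if r[1] == a else set()
    if op == "union":
        return pderiv(r[1], a) | pderiv(r[2], a)
    if op == "star":
        return {_cat(t, r) for t in pderiv(r[1], a)}
    left = {_cat(t, r[2]) for t in pderiv(r[1], a)}          # cat
    return (left | pderiv(r[2], a)) if ewp(r[1]) else left

def nfa_pd(r, alphabet):
    """worklist construction of the partial derivative automaton."""
    states, delta, todo = {r}, {}, [r]
    while todo:
        q = todo.pop()
        for a in alphabet:
            for q2 in pderiv(q, a):
                delta.setdefault((q, a), set()).add(q2)
                if q2 not in states:
                    states.add(q2)
                    todo.append(q2)
    finals = {q for q in states if ewp(q)}
    return states, delta, r, finals

# (a + b)* a b over {a, b} yields a small pd automaton
r = ("cat", ("cat", ("star", ("union", ("sym", "a"), ("sym", "b"))), ("sym", "a")), ("sym", "b"))
states, delta, init, finals = nfa_pd(r, "ab")
```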
however this construction is not considered here .it is possible to obtain in time a ( unique ) minimal dfaequivalent to a given one .however nfastate minimization is pspace - complete and , in general , minimal nfasare not unique .considering the exponential succinctness of nfasw.r.t dfas , it is important to have methods to obtain small nfas .any right - invariant equivalence relation over w.r.t can be used to diminish the size of ( by computing the quotient automaton ) .coarsest right - invariant equivalence _ can be computed by an algorithm similar to the one used to minimize dfas .this coincides with the notion of ( auto)-bisimulation , widely applied to transition systems and which can be computed efficiently ( in almost linear time ) by the paige and tarjan algorithm .left - invariant _ equivalence relation on w.r.t is any right - invariant equivalence relation on the reversed automaton of , , where if ( and we allow multiple initial states ) .coarsest left - invariant equivalence _ on w.r.t , , is of .* fado * is an ongoing project that aims to provide a set of tools for symbolic manipulation of formal languages . to allow high - level programming with complex data structures , easy prototyping of algorithms , and portability are its main features .it is mainly developed in the ` python`programming language . in * fado * , regular expressions and finite automata are implemented as ` python`classes .figure [ fig : regexpclass ] presents the classes for resand the main methods described in this paper . the `regexp ` class is the base class for all resand the class ` position ` is the base class for marked res . the methods ` first ( ) ` , ` last ( ) ` and ` followmap ( ) ` ( where ) are coded for each subclass .the method ` nfaposition ( ) ` implements a construction of the automaton without reduction to .brggemann - klein algorithm is implemented by the ` nfapsnf ( ) ` method .the methods ` nfafollowepsilon ( ) ` and ` nfafollow ( ) ` implement the construction of the via an -nfa . the exact text of all these algorithms is too long to present here .the method ` nfapd ( ) ` computes the and uses the method ` linearform ( ) ` .this method implements the function defined by antimirov to compute the partial derivatives of a rew.r.t all letters .algorithm [ alg : nfapd ] presents the computation of the . stack push(stack , pd ) , head , head figure [ fig : faclass ] presents the classes for finite automata . `fa ` is the abstract class for finite automata .the class ` nfar ` includes the inverse of the transition relation , that is not included in the ` nfa ` class for efficiency reasons . in the ` nfa `class the method ` autobisimulation ( ) ` implements a nave version for compute , as presented in algorithm [ alg : equiv ] . given an equivalence relation the method ` equivreduced ( )` builds the quotient automaton .given an nfa**a * * , * a*.`requiv ( ) ` corresponds to , * a*.`lequiv ( ) ` to and * a*.`lrequiv ( ) ` to .we refer the reader to gouveia and to * fado*webpage for more implementation details . * return * random generators are essential to obtain reliable experimental results that can provide information about the average - case analysis of both computational and descriptional complexity . for general regular expressions , the task is somehow simplified because they can be described by small unambiguous context - free grammars from which it is possible to build uniform random generators . 
the random samples need to be consistent and large enough to ensure statistically significant results . to have these samples readily available , the * fado * system includes a dataset of random res , that can be accessed online . the current dataset was obtained using a grammar for res given by lee and shallit , and that is presented in figure [ fig : grammar ] . this grammar generates res normalized by rules that define reduced res , except for certain cases of the rule : , where . the database makes available random samples of res with different sizes between and and with alphabet sizes between and . in order to experiment with several properties of res and nfas we developed a generic program to ease adding / removing the methods to be applied and to specify the data , from the database , to be used . here we are interested in the comparison of several re descriptional measures with measures of the nfas obtained using the methods described earlier . for res we considered the following properties : the alphabetic size ( ) ; the size ( ) ; test if it is in ( ) ; if not in , compute the and its measures ( , ) ; test if it is reduced ; if not reduced , reduce it and compute its measures ( , ) ; the number of states ( ) and number of transitions ( ) of the equivalent minimal dfa . for each nfa ( , , and ) we considered the following properties : the number of states ( ) ; the number of transitions ( ) ; if it is deterministic ( ) ; and if it is homogeneous ( ) . all these properties were also considered for the case where the res are in , and for the nfas obtained after applying the invariant equivalences , , and their composition . all tests were performed on samples of uniformly random generated res . each sample contains res of size , , and , respectively . table [ tab : resp ] shows some results concerning res . the ratio of alphabetic size to size is almost constant for all samples . almost all res are in , so we do not present the measures after transforming into . this fact is relevant as the res were generated only _ almost reduced _ . the column contains the percentage of res for which their are reduced . it is interesting to note that the average number of states of the minimal dfa ( ) is near ( i.e. , near the number of states of ) . the standard deviation is here very high . for the sample of size , however , of the res have . more theoretical work is needed for a deeper understanding of these results . table [ tab : nfasp ] and table [ tab : ratios ] show some results concerning the nfas obtained from res . in table [ tab : nfasp ] the values not in percentage are average values . if is deterministic then the re is unambiguous ( and strongly unambiguous , if in ) . the results obtained suggest that perhaps of the reduced res are strongly unambiguous . note that if is not deterministic , almost certainly , neither nor are . for reasonably sized res , although are homogeneous it is unlikely that either or will be so . the difference between and is not significant . on average seems linear in the size of the re , and that fact was recently proved by nicaud . reductions by and ( or ) decrease by less than the size of the considered nfas ( , , and ) . in particular the quotient automata of are less than smaller than .
in general , we can hypothesize that reductions by the coarsest invariant equivalences are not significant when resare reduced ( and/or are in ) .we presented a set of tools within the * fado*system to uniformly random generate res , to convert resinto -free nfasand to simplify both resand nfas .these tools can be used to obtain experimental results about the relative descriptional complexity of regular language representations on the average case .our experimental data corroborate some previous experimental and theoretical results , and suggest some new hypotheses to be theoretically proved .we highlight the two following conjectures .reduced reshave high probability of being in .and the obtained from resin reduced seems to almost coincide with quotient automata of by .a. almeida , m. almeida , j. alves , n. moreira , and r. reis . and guitar : tools for automata manipulation and visualization . in s.maneth , editor , _14th ciaa09 _ , volume 5642 of _ lncs _ , pages 6574 .springer , 2009 .l. ilie , r. solis - oba , and s. yu . reducing the size of nfas by using equivalences and preorders . in a.apostolico , m. crochemore , and k. park , editors , _ 16th cpm 2005 _ , volume 3537 of _ lncs _ , pages 310321 .springer , 2005 .j. lee and j. shallit .enumerating regular expressions and their languages . in m.domaratzki , a. okhotin , k. salomaa , and s. yu , editors , _9th ciaa 2004 _ , volume 3314 of _ lncs _ , pages 222 .springer , 2005 . | regular expressions ( res ) , because of their succinctness and clear syntax , are the common choice to represent regular languages . however , efficient pattern matching or word recognition depend on the size of the equivalent nondeterministic finite automata ( nfa ) . we present the implementation of several algorithms for constructing small -free nfasfrom reswithin the * fado*system , and a comparison of regular expression measures and nfasizes based on experimental results obtained from uniform random generated res . for this analysis , nonredundant resand reduced resin star normal form were considered . |
polyelectrolyte ( pe ) solutions are systems widely studied since they show properties that are of fundamental interest for applications in health science , food industry , water treatment , surface coatings , oil industry , among other fields . in fact , one of the problems found in genetic engineering is the appearance of conformational changes of the dna molecule , which is a charged polyelectrolyte . + here we study an infinite dilution polyelectrolyte solution , so that the interactions among polyelectrolyte macromolecules are negligible . we model the polyelectrolyte as having dissociable functional groups that give rise to charged sites and counter - ions in aqueous solution . the long range interactions arising from these multiple charges are responsible for their macroscopic complex properties , which cannot be explained by regular polymer theories . the spatial structures of these materials in solution have been studied extensively , particularly with scaling theories that are not appropriate for highly charged pe . the first simulations carried out for a single chain predicted the formation of groups of monomers , as the fraction of charged monomers increased . such structures are known as pearl necklaces . the size of such pearls and the distance between them is determined by the balance between the electrostatic repulsion and steric effects . + these pearl necklace structures have also been found in molecular dynamics ( md ) simulations . in this paper we are interested in the application of the much simpler cellular automata simulation to characterize the main features of a polyelectrolyte that could be responsible for such conformations . the complete simulation of this complex system requires the description of a model in terms of potentials or forces . in the md simulations of limbach and holm , the monomers are connected along the chain by the finite extendible nonlinear elastic ( fene ) bond represented by the potential energy . where is the distance between two bonded monomers , , is the elastic bond constant , is the monomer diameter , is boltzmann s constant , is the absolute temperature and the parameter represents the maximum extension of the bond between two neighbor monomers . two charged sites i and j , with charges and , a distance apart , interact with the electrostatic coulomb potential . this potential is weighted by the bjerrum length . the short range and van der waals interaction between any two particles or monomers is represented in the md simulation by a typical truncated lennard - jones potential u_{lj}(r_{ij } ) = \left\{\begin{array}{ll } 4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12 } - \left(\frac{\sigma}{r_{ij}}\right)^{6}\right ] + \epsilon_{cut } & r_{ij } < r_{c},\\ 0 & r_{ij } > r_{c } \end{array}\right . where is the potential energy well depth and is the cut off energy . counter - ions interact via a purely repulsive lj interaction with . even though in the cellular automata simulation we do not use any form of potential energies or forces in an explicit manner , the _ rules _ for the movement of the different particles must be inspired by a model defined in terms of such potentials . we therefore establish our rules based on the essence of the previous three potentials . + the polymer is constructed by placing the monomers in a three dimensional cubic network of side and volume . each cell then has neighbors and represents a monomer with a monovalent charge , , or , for a neutral monomer , , as depicted in fig.(1 ) in two dimensions .
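although the cellular automaton defined below never evaluates these energies explicitly , it is convenient to keep the three reference potentials in executable form when comparing with md results . the sketch uses reduced units ( energies in kT , lengths in monomer diameters ) and illustrative parameter values , not those of the cited simulations :

```python
import math

def fene(r, k=7.0, r_max=2.0):
    """fene bond energy (assumes r < r_max); k and r_max are illustrative values."""
    return -0.5 * k * r_max**2 * math.log(1.0 - (r / r_max)**2)

def coulomb(r, qi, qj, l_bjerrum=3.0):
    """coulomb pair energy in units of kT, weighted by the bjerrum length."""
    return l_bjerrum * qi * qj / r

def lj_truncated(r, eps=1.0, sigma=1.0, r_cut=2.0**(1.0 / 6.0)):
    """truncated lennard-jones energy; with this cutoff it is purely repulsive (wca-like)."""
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r)**6
    eps_cut = -4.0 * eps * ((sigma / r_cut)**12 - (sigma / r_cut)**6)   # shift at the cutoff
    return 4.0 * eps * (sr6**2 - sr6) + eps_cut
```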
out of a total of monomers in the chain , it is assumed that a given fraction is charged . the polymer is then constructed by randomly binding consecutive sites in the network . each monomer could be charged or uncharged , with a distribution chosen randomly . a key step in the construction of the polyion is the spatial location of the dissociated counter - ions . we place the counter - ions also randomly in free cells in the volume around the charged monomer within a distance , that is , in a volume centered on the charged site . the use of the bjerrum parameter , which is related to the quality of the solvent , ensures the conservation of the total electroneutrality but gives a spatial distribution of counter - ions around the charged sites . + so , each monomer of the system is represented in a matrix where each element indicates the polymer with charge , that could be or , at the positions given by the cell label . the counter - ions with opposite charge are represented by a similar matrix . + for simplicity we chose monomers with dissociable groups that give a site with a positive charge . we then set the following displacement rules for the different particles : + * _ neutral monomer particle _ * 1 . locate the unoccupied nearest neighbor sites . the new positions where a move could be acceptable are those where it is not superimposed on any other particle in the system and where no bond is broken . 2 . count the number of monomer particles around the current position and around every unoccupied neighbor , within a cube of volume centered on it . 3 . move the test neutral monomer to the position that has the highest number of monomers around it , including the current one if it were the case . * _ positively charged monomer particle _ * 1 . as before , locate the unoccupied acceptable nearest neighbor sites . 2 . count the amount of charge around the current position and around every unoccupied neighbor , within a cube of volume centered on it . 3 . move the test positively charged monomer to the position that has the lowest positive charge around it , including the current one if it were the case . * _ negatively charged counter - ion _ * 1 . move randomly to an unoccupied site within a cube of volume centered on the accepted new position of the corresponding positive monomer in the polymeric chain . we denote the position of monomer i with and the distance between two particles i and j with . the center of mass for the chain is then , and the center - of - mass coordinates are . a parameter that is useful in the study of the spatial conformations of a polymer is the radius of gyration , defined as . according to our construction and movement rules , we can vary the length of the chain , or the number of monomers , satisfying and the number of charged monomers , . we also take as an independent variable the parameter , which determines the number of cells , of size , where the range of the electrostatic attraction between a charged monomer and counter - ion extends . the charge distribution of the sequence of charged and uncharged monomers is determined randomly depending on the initial random seed .
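the first displacement rule and the radius of gyration translate almost literally into code . the sketch below assumes a 0/1 occupancy array for the monomers and that the bond and superposition checks have already produced the list of acceptable candidate cells ; names and the cube half - width are illustrative :

```python
import numpy as np

def neighbourhood_count(occ, pos, half):
    """number of occupied cells in the cube of half-width `half` centred on pos
    (the particle's own cell is included; boundaries are simply clipped)."""
    x, y, z = pos
    sub = occ[max(x - half, 0):x + half + 1,
              max(y - half, 0):y + half + 1,
              max(z - half, 0):z + half + 1]
    return int(sub.sum())

def move_neutral(occ, pos, candidates, half):
    """rule for a neutral monomer: among the current cell and the acceptable empty
    neighbour cells, pick the one with the most monomers around it."""
    options = [pos] + [c for c in candidates if occ[c] == 0]
    return max(options, key=lambda c: neighbourhood_count(occ, c, half))

def radius_of_gyration(coords):
    """root mean squared distance of the monomers to their centre of mass."""
    coords = np.asarray(coords, dtype=float)
    com = coords.mean(axis=0)
    return float(np.sqrt(((coords - com) ** 2).sum(axis=1).mean()))
```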
+ in fig.(2 ) we show some of the equilibrium conformations obtained for a polyelectrolyte with monomers , with , for several charge fractions . the line represents the polyelectrolyte , the filled black bonded circles represent the charged monomers . the counter - ions in solution are represented by open non - bonded circles . for clarity , the neutral monomers are not shown . for a low charge fraction of , the polyion presents an elongated string appearance . as the charge fraction is increased the polyion contracts and some groups of monomers tend to form clusters , so that , already for , it shows the locally collapsed structures known as pearl necklace . these results are very similar to those obtained by the molecular dynamics simulations of limbach and holm . + it is important to notice that for a given total charge , determined by the fraction , different distributions of the charged monomers give different conformations . to study this behavior , we have carried out simulations with several initial seeds for monomers , with a fixed fraction and a range parameter of . in fig.(3 ) , we show the temporal evolution of the radius of gyration . in all cases the conformations change from the initial given value to a plateau value that corresponds to the equilibrium structures . fig.(3 ) clearly shows that the plateau values , and hence , the final conformations depend on the charge distribution . in fig.(4 ) we show snapshots of final structures corresponding to the different seeds of fig.(3 ) . + as we can see from fig.(4 ) , the formation of the pearl necklace structures is independent of the charge - site distribution , for a given and . these clusters seem to be stabilized by the counter - ions as a consequence of the electroneutrality condition that we force to be satisfied . this is so because in our model the degree of freedom of the mobile counter - ions is much higher than that of the monomers tied to the chain . the strong repulsion that originates from the formation of clusters of neutral and positively charged monomers is compensated by the counter - ion cloud that forms around it . in order to study the reproducibility of the configurations found , we carried out a large number of simulation runs , for fixed values of , , and and for the same initial charge distribution . in fig.(5 ) we show the histograms for the frequency with which a given value of appears . we can see that the distribution of the is very close to a gaussian with a reasonably low dispersion of less than about the mean value . + in fig.(5b ) we can see that a similar distribution is obtained when the range parameter increases to a value of . we have further tested the effect of the parameter by generating structures for different values of it .
in fig.(6 ) we show some equilibrium conformations for equal to a ) 3 , b ) 6 , c ) 9 and d ) 12 . here we use a large charge fraction , and we used the same initial charge distributions in all cases . as we can observe , as the bjerrum length increases the final polyelectrolyte structures become more compact . the number of pearls or conglomerates is higher for the lower values of , a result similar to that obtained by the md simulations of limbach and holm . with the simple technique described here , we were able to reproduce the complex structure of model polyelectrolytes , which compares very well with that predicted by the more sophisticated molecular dynamics and monte carlo simulations . we even predict situations with single conglomerates and with pearl necklace type conglomerates . we thus show the potential of cellular automata in the simulation of the trends in the formation of the various types of spatial conformations of polyelectrolytes . we remark on the importance of the charge distribution once the fractional charge is fixed . this work was supported in part by the grant 04 - 005 - 01 from the decanato de investigacin of universidad nacional experimental del tchira , in part by the grant g-9700741 of fonacyt , and in part by the grant c-1279 - 0402-b from consejo de desarrollo cientco humanstico y tecnolgico of universidad de los andes . + numerical calculations were carried out at the computer center cecalcula . + a.v . dobrynin , r.h . colby and m. rubinstein , _ macromolecules _ , * 28 * , 1859 , ( 1995 ) . a. v. dobrynin ; m. rubinstein ; s. p. obukhok , _ macromolecules _ , * 29 * , 2974 , ( 1996 ) . y. yasmasaki , y. teramoto , and k. yoshikawa , _ biophys . j. _ , * 80 * , 2974 , ( 2001 ) . h. j. limbach and c. holm , _ j. b _ , * 107 * , 8041 - 8055 , ( 2003 ) . u. micka ; c. holm ; k. kremer , _ langmuir _ , * 15 * , 4033 , ( 1999 ) . u. micka ; k. kremer , _ europhys . lett. _ , * 49 * , 189 - 195 , ( 2000 ) . h. j. limbach and c. holm , _ j. chem . phys. _ , * 114 * , 9674 - 9682 , ( 2002 ) . h. j. limbach and c. holm , _ comput . commun. _ , * 147 * , 321 - 324 , ( 2002 ) . h. j. limbach and c. holm ; k. kremer , _ j. phys : condens . matter _ , * 15 * , s205 - s211 , ( 2003 ) . h. j. limbach and c. holm ; k. kremer , _ europhys . lett. _ , * 60 * , 566 - 572 , ( 2002 ) . | resumen + + abstract + . + |
[ figure : ( e ) 2-wasserstein distance ] comparing , summarizing and reducing the dimensionality of empirical probability measures defined on a space are fundamental tasks in statistics and machine learning . such tasks are usually carried out using pairwise comparisons of measures . classic information divergences are widely used to carry out such comparisons . unless is finite , these divergences cannot be directly applied to empirical measures , because they are ill - defined for measures that do not have continuous densities . they also fail to incorporate prior knowledge on the geometry of , which might be available if , for instance , is also a hilbert space . both of these issues are usually solved by smoothing empirical measures with smoothing kernels before computing divergences : the euclidean and distances , the kullback - leibler and pearson divergences can all be computed fairly efficiently by considering matrices of kernel evaluations . the choice of a divergence implicitly defines the _ mean _ element , or barycenter , of a set of measures , as the particular measure that minimizes the sum of all its divergences to that set of target measures . the goal of this paper is to compute efficiently barycenters ( possibly in a constrained subset of all probability measures on ) defined by the _ optimal transport distance _ between measures . we propose to minimize directly the sum of optimal transport distances from one measure ( the variable ) to a set of fixed measures by gradient descent . these gradients can be computed for a moderate cost by solving smoothed optimal transport problems as proposed by . wasserstein distances have many favorable properties , documented both in theory and practice . we argue that their versatility extends to the barycenters they define . we illustrate this intuition in figure [ fig : nines ] , where we consider 30 images of nested ellipses on a grid . each image is a discrete measure on ^2 ; the ground metric is , here , the euclidean distance . note also that the gaussian kernel smoothing approach uses the same distance , in addition to a bandwidth parameter which needs to be tuned in practice . this paper is organized as follows : we provide background on optimal transport in [ sec : back ] , followed by the definition of wasserstein barycenters with motivating examples in [ sec : baryc ] . novel contributions are presented from [ sec : computing ] : we present two subgradient methods to compute wasserstein barycenters , one which applies when the support of the mean measure is known in advance and another when that support can be freely chosen in . these algorithms are very costly even for measures of small support or histograms of small size . we show in [ sec : smooth ] that the key ingredients of these approaches , the computation of primal and dual optimal transport solutions , can be bypassed by solving smoothed optimal transport problems . we conclude with two applications of our algorithms in [ sec : exp ] .
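the smoothed ( entropy - regularized ) transport problems referred to above can be solved with simple matrix - scaling iterations . the following is a generic sinkhorn - style sketch on histograms , not necessarily the exact scheme used later in the paper ; the regularization strength and iteration count are illustrative :

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iter=1000):
    """entropy-regularized optimal transport between histograms a and b with cost
    matrix M; returns the smoothed coupling and the associated transport cost."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]
    return T, float((T * M).sum())

# toy example: two histograms over 3 points on the line, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
M = (x[:, None] - x[None, :]) ** 2
a = np.array([0.5, 0.5, 0.0])
b = np.array([0.0, 0.5, 0.5])
T, cost = sinkhorn(a, b, M)
```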
for and probability measures in , their -wasserstein distance is is the set of all probability measures on that have marginals and .we will only consider empirical measures throughout this paper , that is measures of the form where is an integer , and lives in the probability simplex , let us introduce additional notations : * measures on a set with constrained weights .* let be a non - empty closed subset of .we write **measures supported on up to points . * * given an integer and a subset of , we consider the set of measures of that have discrete support of size up to and weights in , when no constraints on the weights are considered , namely when the weights are free to be chosen anywhere on the probability simplex , we use the shorter notations and .consider two families and of points in .when and , the wasserstein distance between and is the root of the optimum of a network flow problem known as the _ transportation problem _ .this problem builds upon two elements : the _ * matrix * _ _ * of pairwise distances * _ between elements of and raised to the power , which acts as a cost parameter , {ij } \in\mathbb{r}^{n\times m},\ ] ] and the _ * transportation polytope * _ of and , which acts as a feasible set , defined as the set of nonnegative matrices such that their row and column marginals are equal to and respectively .writing for the -dimensional vector of ones , let be the frobenius dot - product of matrices .combining eq .& , we have that distance raised to the power be written as the optimum of a parametric linear program on variables , parameterized by the marginals and a ( cost ) matrix : present in this section the wasserstein barycenter problem , a variational problem involving all wasserstein distances from one to many measures , and show how it encompasses known problems in clustering and approximation . a wasserstein barycenter of measures in is a minimizer of over , where consider more generally a non - negative weight in front of each distance .the algorithms we propose extend trivially to that case but we use uniform weights in this work to keep notations simpler .we highlight a few special cases where minimizing over a set is either trivial , relevant to data analysis and/or has been considered in the literature with different tools or under a different name . inwhat follows and are arbitrary finite subsets of .* * when only one measure , supported on is considered , its closest element in no constraints on weights are given can be computed by defining a weight vector on the elements of that results from assigning all of the mass to the closest neighbor in metric of in . + * centroids of histograms * : finite , .when is a set of size and a matrix describes the pairwise distances between these points ( usually called in that case bins or features ) , the -wasserstein distance is known as the earth mover s distance ( emd ) . in that context, wasserstein barycenters have also been called emd prototypes by . *euclidean * : , . 
minimizing on is a euclidean metric space and is equivalent to the -means problem .* constrained -means * : , .consider a measure with support and weights .the problem of approximating this measure by a uniform measure with atoms a measure in -wasserstein sense was to our knowledge first considered by , who proposed a variant of s algorithm for that purpose .more recently , remarked that such an approximation can be used in the resampling step of particle filters and proposed in that context two ensemble methods inspired by optimal transport , one of which reduces to a single iteration of s algorithm .such approximations can also be obtained with kernel - based approaches , by minimizing an information divergence between the ( smoothed ) target measure and its ( smoothed ) uniform approximation as proposed recently by and . consider conditions on the s for a wasserstein barycenter in to be unique using the multi - marginal transportation problem .they provide solutions in the cases where either ( i ) ; ( ii ) using s interpolant ; ( iii ) all the measures are gaussians in , in which case the barycenter is a gaussian with the mean of all means and a variance matrix which is the unique positive definite root of a matrix equation ( * ? ? ?* eq.6.2 ) . were to our knowledge the first to consider practical approaches to compute wasserstein barycenters between point clouds in .to do so , propose to approximate the wasserstein distance between two point clouds by their _ sliced _wasserstein distance , the expectation of the wasserstein distance between the projections of these point clouds on lines sampled randomly . because the optimal transport between two point clouds on the real line can be solved with a simple sort, the sliced wasserstein barycenter can be computed very efficiently , using gradient descent .although their approach seems very effective in lower dimensions , it may not work for and does not generalize to non - euclidean metric spaces .we propose in this section new approaches to compute wasserstein barycenters when ( i ) each of the measures is an empirical measure , described by a list of atoms of size , and a probability vector in the simplex ; ( ii ) the search for a barycenter is not considered on the whole of but restricted to either ( the set of measures supported on a predefined finite set of size with weights in a subset of ) or ( the set of measures supported on up to atoms with weights in a subset of ) .looking for a barycenter with atoms and weights is equivalent to minimizing ( see eq .[ eq : primal ] for a definition of ) , over relevant feasible sets for and . when is _ fixed _, we show in [ subsec : dualityconvexity ] that is convex w.r.t regardless of the properties of .a subgradient for w.r.t can be recovered through the _ dual optimal solutions _ of all problems , and can be minimized using a projected subgradient method outlined in [ subsec : xrestricted ] .if is _ free _ , constrained to be of cardinal , and and its metric are both _ euclidean _ , we show in [ subsec : xeuclidean ] that is not convex w.r.t but we can provide subgradients for using the _ primal optimal solutions _ of all problems .this in turn suggests an algorithm to reach a local minimum for w.r.t . and in by combining both approaches . *dual transportation problem . * given a matrix , the optimum admits the following dual linear program ( lp ) form ( * ? ? 
?* , 7.8 ) , known as the dual optimal transport problem : where the polyhedron of dual variables is by lp duality , .the dual optimal solutions which can be easily recovered from the primal optimal solution ( * ? ? ?* eq.7.10)define a subgradient for as a function of : [ prop : convex ] given and , the map is a polyhedral convex function .any optimal dual vector of is a subgradient of with respect to .these results follow from sensitivity analysis in lp s . is bounded and is also the maximum of a finite set of linear functions , each indexed by the set of extreme points of , evaluated at and is therefore polyhedral convex .when the dual optimal vector is unique , is a gradient of at , and a subgradient otherwise .because for any real value the pair is feasible if the pair is feasible , and because their objective are identical , any dual optimum is determined up to an additive constant . to remove this degree of freedom which arises from the fact that one among all row / column sum constraints of is redundant we can either remove a dual variable or normalize any dual optimum so that it sums to zero , to enforce that it belongs to the tangent space of .we follow the latter strategy in the rest of the paper .let be fixed and let be a closed convex subset of .the aim of this section is to compute weights such that is minimal .let be the optimal dual variable of normalized to sum to 0 . being a sum of terms , we have that : the function is polyhedral convex , with subgradient assuming is closed and convex , we can consider a naive projected subgradient minimization of . alternatively ,if there exists a bregman divergence for defined by a prox - function , we can define the proximal mapping and consider accelerated gradient approaches .we summarize this idea in algorithm [ algo : discwass ] . * inputs * : . for .form all matrices , see eq . .set . , .form subgradient using all dual optima of . . .notice that when and is the kullback - leibler divergence , we can initialize with and use the multiplicative update to realize the proximal update : , where is schur s product .alternative sets for which this projection can be easily carried out include , for instance , all ( convex ) level set of the entropy function , namely where .we consider now the case where with , is the euclidean distance and .when , a family of points and a family of points can be represented respectively as a matrix in and another in .the pairwise squared - euclidean distances between points in these sets can be recovered by writing and , and observing that * transport cost as a function of . * due to the margin constraints that apply if a matrix is in the polytope , we have : discarding constant terms in and , we have that minimizing with respect to locations is equivalent to solving as a function of , that objective is the sum of a convex quadratic function of with a piecewise linear concave function , since is the minimum of linear functions indexed by the vertices of the polytope . as a consequence , is not convex with respect to . + * quadratic approximation . *suppose that is optimal for problem .updating eq ., minimizing a local quadratic approximation of at yields thus the newton update a simple interpretation of this update is as follows : the matrix has column - vectors in the simplex . 
the suggested update for is to replace it by barycenters of points enumerated in with weights defined by the optimal transport .note that , because the minimization problem we consider in is not convex to start with , one could be fairly creative when it comes to choosing and among other distances and exponents .this substitution would only involve more complicated gradients of w.r.t . that would appear in eq . .we now consider , as a natural extension of [ subsec : xrestricted ] when , the problem of minimizing over a probability measure that is ( i ) supported by _ at most atoms _ described in , a matrix of size , ( ii ) with weights in .* alternating optimization .* to obtain an approximate minimizer of we propose in algorithm [ algo : general ] to update alternatively locations ( with the newton step defined in eq .[ eq : newton ] ) and weights ( with algorithm [ algo : discwass ] ) . * input * : for initialize and using algorithm [ algo : discwass ] . optimal solution of , setting $ ] with line - search or a preset value .* algorithm [ algo : general ] and / algorithms . * as mentioned in [ sec : back ] , minimizing defined in eq . over , with , andno constraints on the weights ( ) , is equivalent to solving the -means problem applied to the set of points enumerated in . in that particular case ,algorithm [ algo : general ] is also equivalent to s algorithm .indeed , the assignment of the weight of each point to its closest centroid in s algorithm ( the maximization step ) is equivalent to the computation of in ours , whereas the re - centering step ( the expectation step ) is equivalent to our update for using the optimal transport , which is in that case the trivial transport that assigns the weight ( divided by ) of each atom in to its closest neighbor in .when the weight vector is constrained to be uniform ( ) , proposed a heuristic to obtain uniform -means that is also equivalent to algorithm [ algo : general ] , and which also relies on the repeated computation of optimal transports . 
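To make the two procedures concrete, the sketch below gives a minimal rendering of the fixed-support weight update (algorithm [algo:discwass]) and of the alternating free-support scheme (algorithm [algo:general]). It is our own schematic code, not the authors' implementation: it assumes squared-Euclidean costs (the case p = 2, with a full step, theta = 1, in the Newton update) and delegates the transport subproblems to two hypothetical helpers, dual_potential(M, a, b) (returning a zero-mean dual optimal vector) and transport_plan(M, a, b) (returning a primal optimal plan); the smoothed Sinkhorn routine sketched later in the text can stand in for both.

```python
import numpy as np

def pairwise_cost(X, Y):
    """Squared-Euclidean cost matrix between columns of X (d x n) and Y (d x m)."""
    d2 = np.sum(X**2, 0)[:, None] + np.sum(Y**2, 0)[None, :] - 2.0 * X.T @ Y
    return np.maximum(d2, 0.0)

def fixed_support_weights(Y_list, b_list, X, dual_potential, t0=1.0, n_iter=100):
    """Algorithm 1 (sketch): optimize the weights a for a fixed support X,
    using a Kullback-Leibler proximal (multiplicative) update on the simplex."""
    n = X.shape[1]
    a = np.full(n, 1.0 / n)
    for it in range(1, n_iter + 1):
        g = np.zeros(n)
        for Y, b in zip(Y_list, b_list):
            g += dual_potential(pairwise_cost(X, Y), a, b)   # dual optimum = subgradient
        g /= len(Y_list)
        a = a * np.exp(-(t0 / np.sqrt(it)) * g)              # step-size schedule: ours
        a /= a.sum()
    return a

def free_support_barycenter(Y_list, b_list, X0, dual_potential, transport_plan,
                            n_outer=20):
    """Algorithm 2 (sketch): alternate the weight update above with the
    barycentric (Newton, theta = 1) update of the support locations."""
    X = X0.copy()
    for _ in range(n_outer):
        a = fixed_support_weights(Y_list, b_list, X, dual_potential)
        X_new = np.zeros_like(X)
        for Y, b in zip(Y_list, b_list):
            T = transport_plan(pairwise_cost(X, Y), a, b)    # rows of T sum to a
            X_new += Y @ T.T                                  # transport-weighted sums
        X = X_new / (len(Y_list) * a[None, :])                # barycentric update of X
    return X, a
```

With free weights, a single input measure and the trivial nearest-neighbour transport, the alternating loop reduces to a Lloyd-type iteration, consistently with the equivalence to k-means noted above.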
for more general sets , algorithm [ algo : discwass ]ensures that the weights remain in at each iteration of algorithm [ algo : general ] , which can not be guaranteed by neither s nor s approach .* algorithm [ algo : general ] and s transform .* has recently suggested to approximate a weighted measure by a uniform measure supported on as many atoms .this approximation is motivated by optimal transport theory , notably asymptotic results by , but does not attempt to minimize , as we do in algorithm [ algo : general ] , any wasserstein distance between that approximation and the original measure .this approach results in one application of the newton update defined in eq ., when is first initialized to and to compute the optimal transport .* summary * we have proposed two original algorithms to compute wasserstein barycenters of probability measures : one which applies when the support of the barycenter is fixed and its weights are constrained to lie in a convex subset of the simplex , another which can be used when the support can be chosen freely .these algorithms are relatively simple , yet to the best of our knowledge novel .we suspect these approaches were not considered before because of their prohibitive computational cost : algorithm [ algo : discwass ] computes at each iteration the dual optima of transportation problems to form a subgradient , each with variables and inequality constraints .algorithm [ algo : general ] incurs an even higher cost , since it involves running algorithm [ algo : discwass ] at each iteration , in addition to solving primal optimal transport problems to form a subgradient to update .since both objectives rely on subgradient descent schemes , they are also likely to suffer from a very slow convergence .we propose to solve these issues by following s approach to smooth the objective and obtain strictly convex objectives whose gradients can be computed more efficiently .to circumvent the major computational roadblock posed by the repeated computation of primal and dual optimal transports , we extend s approach to obtain smooth and strictly convex approximations of both primal and dual problems and . the matrix scaling approach advocated by was motivated by the fact that it provided a fast approximation to .we show here that the same approach can be used to smooth the objective and recover for a cheap computational price its gradients w.r.t . and .a transport , which is by definition in the -simplex , has entropy has recently proposed to consider , for , a regularized primal transport problem as we introduce in this work its dual problem , which is a smoothed version of the original dual transportation problem , where the positivity constraints of each term have been replaced by penalties : these two problems are related below in the sense that their respective optimal solutions are linked by a unique positive vector : [ prop : primdual]let be the elementwise exponential of , .then there exists a pair of vectors such that the optimal solutions of and are respectively given by the result follows from the lagrange method of multipliers for the primal as shown by ( * ? ? 
?* lemma 2 ) , and a direct application of first - order conditions for the dual , which is an unconstrained convex problem .the term in the definition of is used to normalize so that it sums to zero as discussed in the end of [ subsec : dualityconvexity ] .the positive vectors mentioned in proposition [ prop : primdual ] can be computed through s matrix scaling algorithm applied to , as outlined in algorithm [ algo : sk ] : [ theo : sk ] for any positive matrix in and positive probability vectors and , there exist positive vectors and , unique up to scalar multiplication , such that .such a pair can be recovered as a fixed point of the sinkhorn map the convergence of the algorithm is linear when using hilbert s projective metric between the scaling factors .although we use this algorithm in our experiments because of its simplicity , other algorithms exist which are known to be more reliable numerically when is large .* input * ; % use ` bsxfun( , k , a ) ` set `ones(n,1)/n ` ; ` u=1./(\widetilde{k}(b./(k^tu ) ) ) ` . . . % use ; * summary : * given a smoothing parameter , using sinkhorn s algorithm on matrix , defined as the elementwise exponential of ( the pairwise gaussian kernel matrix between the supports and when , using bandwidth ) we can recover smoothed optima and for _ both _ smoothed primal and dual transport problems . to take advantage of this, we simply propose to substitute the smoothed optima and to the original optima and that appear in algorithms [ algo : discwass ] and [ algo : general ] .we present two applications , one of algorithm [ algo : discwass ] and one of algorithm [ algo : general ] , that both rely on the smooth approximations presented in [ sec : smooth ] .the settings we consider involve computing respectively tens of thousands or tens of high - dimensional optimal transport problems2.500.500 for the first application , for the second which can not be realistically carried out using network flow solvers . using network flow solvers ,the resolution of a single transport problem of these dimensions could take between several minutes to several hours .we also take advantage in the first application of the fact that algorithm [ algo : sk ] can be run efficiently on gpgpus using vectorized code ( * ? ? ?* alg.1 ) .we use images of the mnist database , with approximately images for each digit from 0 to 9 .each image ( originally pixels ) is scaled randomly , uniformly between half - size and double - size , and translated randomly within a grid , with a bias towards corners .we display intermediate barycenter solutions for each of these 10 datasets of images for gradient iterations . is set to , where is the squared - euclidean distance matrix between all 2,500 pixels in the grid .using a quadro k5000 gpu with close to 1500 cores , the computation of a single barycenter takes about 2 hours to reach 100 iterations . 
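The matrix-scaling step of algorithm [algo:sk] is short enough to transcribe directly. The following is a plain NumPy sketch of the fixed-point iteration given above, assuming the standard entropic-regularization convention K = exp(-lam * M) for the elementwise exponential; the recovery of the smoothed primal as diag(u) K diag(v) follows proposition [prop:primdual], while the sign and normalization of the smoothed dual potential alpha are our assumption (the corresponding definitions are only partially reproduced here) and should be checked against the paper before use.

```python
import numpy as np

def sinkhorn_smoothed_transport(M, a, b, lam=100.0, n_iter=1000, tol=1e-9):
    """Sinkhorn scaling for the entropically smoothed transport problem.

    M : (n, m) cost matrix; a : (n,), b : (m,) positive marginals.
    lam : regularization strength (larger = closer to the unregularized optimum).
    Returns the smoothed primal plan T and a zero-mean dual-like potential alpha
    (the sign convention for alpha is an assumption on our part).
    """
    K = np.exp(-lam * M)                  # kernel: elementwise exponential of -lam*M
    Kt = K / a[:, None]                   # the "tilde K" = diag(1/a) K of the pseudo-code
    u = np.full(a.shape, 1.0 / len(a))
    for _ in range(n_iter):
        u_new = 1.0 / (Kt @ (b / (K.T @ u)))
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = b / (K.T @ u)
    T = u[:, None] * K * v[None, :]       # smoothed primal: diag(u) K diag(v)
    alpha = np.log(u) / lam               # dual-like potential (sign convention assumed)
    alpha -= alpha.mean()                 # normalize to sum to zero
    return T, alpha
```

The pair (T, alpha) returned by such a routine can be substituted for the primal and dual optima required by the subgradient sketches above (the transport_plan and dual_potential placeholders), trading exactness for speed exactly as proposed in this section; note that lam must be scaled to the magnitude of M, otherwise the kernel K underflows.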
because we use warm starts to initialize in algorithm [ algo : sk ] at each iteration of algorithm [ algo : discwass ] , the first iterations are typically more computationally intensive than those carried out near the end .-1.2 cm ) centroids .the size of each of the 57.647 blue crosses is proportional to the local average of the relevant variable ( income above and population below ) at that location , normalized to sum to 1 .each downward triangle is a centroid of the -means clustering ( equivalent to a wasserstein barycenter with ) whose size is proportional to the portion of mass captured by that centroid .red dots indicate centroids obtained with a uniform constraint on the weights , . since such centroids are constrained to carry a fixed portion of the total weight , one can observe that they provide a more balanced clustering than the -means solution.,title="fig:",width=415 ] in practice , the -means cost function applied to a given empirical measure could be minimized with a set of centroids and weight vector such that the entropy of is very small .this can occur when most of the original points in the dataset are attributed to a very small subset of the centroids , and could be undesirable in applications of -means where a more regular attribution is sought .for instance , in sensor deployment , when each centroid ( sensor ) is limited in the number of data points ( users ) it can serve , we would like to ensure that the attributions agree with those limits . whereas the original -means can not take into account such limits, we can ensure them using algorithm [ algo : general ] .we illustrate the difference between looking for optimal centroids with `` free '' assignments ( ) , and looking for optimal `` uniform '' centroids with constrained assignments ( ) using us census data for income and population repartitions across 57.647 spatial locations in the 48 contiguous states .these weighted points can be interpreted as two empirical measures on with weights directly proportional to these respective quantities .we initialize both `` free '' and `` uniform '' clustering with the actual 48 state capitals .results displayed in figure [ fig : america ] show that by forcing our approximation to be uniform , we recover centroids that induce a more balanced clustering .indeed , each cell of the voronoi diagram built with these centroids is now constrained to hold the same aggregate wealth or population .these centroids could form the new state capitals of equally rich or equally populated states . on an algorithmic note ,we notice in figure [ fig : graphe ] that algorithm [ algo : general ] converges to its ( local ) optimum at a speed which is directly comparable to that of the -means in terms of iterations , with a relatively modest computational overhead .unsurprisingly , the wasserstein distance between the clusters and the original measure is higher when adding uniform constraints on the weights . ) and its unconstrained equivalent ( -means ) to the income empirical measure .note that , because of the constraints on weights , the wasserstein distance of the uniform wasserstein barycenter is necessarily larger . on a single cpu core ,these computations require 12.5 seconds for the constrained case , using sinkhorn s approximation , and 1.55 seconds for the regular -means algorithm . 
using a regular transportation solver , computingthe optimal transport from the 57.647 points to the 48 centroids would require about 1 hour for a single iteration , width=283 ] we have proposed in this paper two original algorithms to compute wasserstein barycenters of empirical measures .using these algorithms in practice for measures of large support is a daunting task for two reasons : they are inherently slow because they rely on the subgradient method ; the computation of these subgradients involves solving optimal and dual optimal transport problems .both issues can be substantially alleviated by smoothing the primal optimal transport problem with an entropic penalty and considering its dual .both smoothed problems admit gradients which can be computed efficiently using only matrix vector products .our aim in proposing such algorithms is to demonstrate that wasserstein barycenters can be used for visualization , constrained clustering , and hopefully as a core component within more complex data analysis techniques in future applications .we also believe that our smoothing approach can be directly applied to more complex variational problems that involve multiple wasserstein distances , such as wasserstein propagation . | we present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric . this mean , known as the wasserstein barycenter , is the measure that minimizes the sum of its wasserstein distances to each element in that set . we propose two original algorithms to compute wasserstein barycenters that build upon the subgradient method . a direct implementation of these algorithms is , however , too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients . extending the work of , we propose to smooth the wasserstein distance used in the definition of wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed for a considerably cheaper computational cost using matrix scaling algorithms . we use these algorithms to visualize a large family of images and to solve a constrained clustering problem . |
diffraction by an infinite wedge is a fundamental canonical problem in acoustic scattering .exact closed - form frequency - domain solutions for point source , line source or plane wave excitation with homogeneous dirichlet ( sound soft ) or neumann ( sound hard , or rigid ) boundary conditions are available in many different forms .for example , series expansions in terms of eigenfunctions are available for near field calculations ( e.g. for analysing edge singularities ) .contour integral representations over so - called sommerfeld - malyuzhinets contours are better suited to far field computations ( e.g. for deriving diffraction coefficients in computational methods such as the geometrical theory of diffraction ) .more recently it has been discovered that the ` diffracted ' component of these solutions ( precisely , that which remains after subtracting from the total field the geometrical acoustics terms ) can be expressed in a more physically intuitive form , namely as a line integral superposition of directional secondary sources located along the diffracting edge .( in fact the frequency domain expressions derived in ref . had appeared already in ref . , but the interpretation in terms of secondary edge sources seems to have been first made in ref . . )one appealing feature of the edge source interpretation is that it offers a natural way to write down approximate solutions for finite edges , simply by truncating the domain of integration .it has also led to edge integral equation formulations of scattering problems , where the integral equation is posed on the union of all the scatterer s edges . for dirichlet and neumann boundary conditions the edge source formulationsare now well understood : efficient numerical evaluation of the line integrals has been considered in ref . using the method of numerical steepest descent , as has the behaviour of the line integrals near shadow boundaries and edges .note also ref . , where the corresponding time - domain case is considered . for the more difficult case of diffraction by a wedge with impedance ( absorbing ) boundary conditions ,some exact solutions are also known .for example , the case of plane wave incidence on an impedance wedge can be solved using the sommerfeld - malyuzhinets technique , and converted to a series expansion using a watson - type transformation ( see ref . and the references therein ) .but the solution obtained is much more cumbersome than those for the corresponding dirichlet and neumann problems , and the technique requires the solution of a certain non - trivial functional difference equation .this increased complexity is perhaps to be expected , since the physics of the impedance problem are fundamentally more complicated than those for the dirichlet and neumann problems ; in particular , the wedge faces can under certain conditions support surface waves .however , for the special ( yet important ) case of a right - angled wedge , the solution takes a particularly simple and explicit form . in ref . , rawlins proves that the solution to the impedance problem for a right - angled wedge ( with possibly different impedances on each face ) can be obtained from that of the corresponding dirichlet problem , generalised to allow complex incident angles , by the application of a certain linear differential operator ( see eqs . 
below for details ) .rawlins applies this operator to the classical series and integral representations of the dirichlet solution to obtain relatively simple series and integral representations for the impedance solution .( the solution for the case where the impedance is the same on both faces was presented previously in a similar but more complicated form in ref . . ) in this paper it will be shown that rawlins solution for the impedance wedge can be transformed into an edge source representation of the same form as those derived for rigid ( sound - hard ) wedges in ref .this appears to be the first edge source representation for diffraction by an impedance wedge .while the solution obtained is valid only for a right - angled wedge , it should be remarked that this special case is ubiquitous in many acoustical applications ( e.g. , urban acoustics ) .the edge source formulation for ideal ( dirichlet and neumann ) wedges will briefly be reviewed . for the most general setting ( illustrated in fig . 1 ) ,consider a point source and point receiver in the presence of a wedge of exterior angle .let denote cylindrical coordinates with the -axis along the edge , the propagation domain occupying the region , and the wedge the region .consider also cartesian coordinates with , , . without loss of generalityit will be assumed that the receiver is located in the plane , at .for each point on the edge one can also introduce local spherical coordinates , with defined as before and , , . for consistency with ref . the time - dependence will be assumed throughout .then the diffracted field at ( i.e. the total field minus the geometrical acoustics field ) due to a monopole source at can be written as a line integral over edge positions , where is the wedge index .the integral in eq . can be interpreted as a superposition of secondary edge sources along the edge .the factor can be interpreted as a directivity function , and takes the following forms for dirichlet ( ) and neumann ( ) boundary conditions : where and the auxiliary function is the second expression for in eq .shows that is a function only of the local spherical angles ; this justifies the interpretation of as a directivity function .the formula for above corrects that in ref . by reversing the sign of . also , the first expression for in eq .( [ eq : etadef ] ) corrects a sign error in the corresponding formula in ref . . a mixed wedge with dirichlet conditions on and neumann conditions on can also be treated , by summing the dirichlet solutions for a wedge angle and source positions and respectively .this gives the directivity factor where which , when inserted in eq . , agrees with the solution derived by buckingham in eqs . ( 62)-(63 ) of ref . using a modal expansion .the solution for a mixed wedge with neumann conditions on and dirichlet conditions on can then be obtained by replacing by and by in eq . .for the two - dimensional case of plane wave incidence perpendicular to the edge , the source is placed at with . in this case and , because of symmetry , the integration range in eq . ([ eq : basicintegral ] ) can be halved .furthermore , for plane wave incidence the spherical attenuation factor is removed , and , if one refers the phase of the diffracted sound pressure relative to the arrival time at the edge , the phase oscillation factor also disappears , giving for plane wave incidence perpendicular to a right - angled wedge , for which ( i.e. ) , further simplification is obtained . now. 
is and using multiple angle formulas one can show that where to summarise , for plane wave incidence perpendicular to the edge of a right - angled wedge , with , the total field is {{\rm e}}^{-{{\rm i}}kr\cos(\theta-\theta_0)}\notag \\ & \quad+ r_1{{\rm h}}[\pi-\theta-\theta_0]{{\rm e}}^{-{{\rm i}}kr\cos(\theta+\theta_0)}\notag \\ & \quad+ r_2{{\rm h}}[\theta+\theta_0 - 2\pi]{{\rm e}}^{{{\rm i}}kr\cos(\theta+\theta_0)}\notag \\ & \quad+ p_d(r,\theta ) , \label{eqn : ptotal}\end{aligned}\ ] ] where the diffracted field is given by eq .and the remaining three terms represent the geometrical acoustics field ( here ] , ; =1/2 ] , ) .the first term in eq . represents the incident field , the second the reflected wave from the face and the third the reflected wave from . are reflection coefficients : for the pure dirichlet case ; for the pure neumann case ; and for the mixed dirichlet / neumann and neumann / dirichlet cases , and , , respectively . in order to understand how eq .should be generalised to the case of impedance boundary conditions , it is instructive to review its derivation from the classical sommerfeld contour integral solution .part of this derivation was presented already in ref . , but in this section the analysis of ref . will be extended to allow a complex incident angle , corresponding physically to diffraction by an inhomogeneous plane wave . for brevity, attention will be restricted to the dirichlet case , but the neumann and mixed problems can be analysed similarly . for the dirichlet casethe sommerfeld contour integral solution is where the integration contour is in two parts : lies above all singularities of the integrand and goes from to , and lies below all singularities of the integrand and goes from to ( see fig .[ fig : gamma ] ) .the integral converges rapidly for all complex and , and the integrand has poles at for . as is pointed out in ref . , the formula in eq .makes sense not just for real , but also for all complex . ]one can obtain an expression suitable for far field ( large ) evaluation by deforming the contour in eq . onto the steepest descent contours and passing through the saddle points at ( see fig .[ fig : gamma ] ) .these contours are defined by respectively , where is the gudermannian function with , for , and . to achieve the deformation one draws down the contour onto , picking up residue contributions from any poles lying between and .assuming that and , one obtains {{\rm e}}^{-{{\rm i}}k r \cos(\theta-\theta_0)}\notag\\ & \;\;-{{\rm h}}[\pi-|\theta+\re{\theta_0}+\mathrm{gd}(\im{\theta_0})|]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \;\;-{{\rm h}}[\pi-|\theta+\re{\theta_0}+\mathrm{gd}(\im{\theta_0})-3\pi|]{{\rm e}}^{{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \;\;+ p^d_d(r,\theta ) , \label{eqn : sommtotal}\end{aligned}\ ] ]where and the integrals over and have been combined into a single integral over the contour illustrated in fig .[ fig : contoura ] by the changes of variable .note that eq . corrects a sign in one of the exponents in the corresponding formula in ref . ( eq . ( 26 ) on p. 166 ) .note also that in ref . rawlins uses the equivalent representation the first term in eq . represents the incident wave , and the second and third terms in eq .represent the two reflected waves . note that the location of the zone boundary across which these waves are switched on / off by the heaviside function prefactors is shifted compared to the case of purely real .the shift agrees exactly with that derived in ref . 
for the case of an inhomogeneous plane wave incident on a half plane .note also that at a zone boundary the argument of one of the heaviside functions in eq. equals zero , and a pole lies on the contour of integration . in this case a principal value integral should be taken in eq .( for consistency with the assumption that =1/2 $ ] ) ; the same convention applies to the other decompositions in eqs . , , and .the edge source representation can then be obtained from eq . as follows .first deform the contour of integration from to the imaginary axis in fig .[ fig : contoura ] ( equivalently , deform the original contours and onto the vertical contours and in fig .[ fig : gamma ] rather than and before changing variables ) .then write the resulting integral as an integral over the positive imaginary axis only , applying the identity which follows from the identity finally , parametrise the resulting integral by so that and , giving {{\rm e}}^{-{{\rm i}}k r \cos(\theta-\theta_0)}\notag\\ & \quad-{{\rm h}}[\pi-|\theta+\re{\theta_0}|]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \quad-{{\rm h}}[\pi-|\theta+\re{\theta_0}-3\pi|]{{\rm e}}^{{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \quad+ p^d_{d,{\rm edge}}(r,\theta ) , \label{eqn : sommtotaledgesource}\end{aligned}\ ] ] where is given by the edge source integral in eq . .clearly eq .reduces to eq . when . for complex is a discrepancy in the arguments of the heaviside functions between eqs . and ; this is simply a consequence of the contour deformation from to .accordingly , eq . does not represent a decomposition of the field into ` geometrical acoustics ' and ` diffracted ' components , at least not in the sense understood in ref .such discrepancies will have implications for the physical interpretation of the edge source integral derived for the impedance problem in the following section .the main aim of this paper is to generalise eq . to the case of impedance boundary conditions , where the unit normal vector points into the wedge , and is the complex admittance ( inversely proportional to the impedance ) , which is assumed to satisfy , so as to prohibit energy creation at the boundary .it will be assumed that takes a constant value on each of the two wedge faces , but that the value of this constant may be different for the two wedge faces . following ref . , in order to simplify later formulas will be written where are complex angles such that recalling that the wedge faces are given respectively by , ( ) and , ( ) , the boundary condition in eq .can then be stated as \dfrac{\partial p}{\partial x } = { { \rm i}}k \cos{\theta_2}p , & \textrm{on } \theta=3\pi/2 .\end{cases}\end{aligned}\ ] ] note that when takes the same value on both faces the angles and are related by , since .rawlins shows that the total field for the impedance problem can be written as where and where , and denotes the dirichlet solution for incident angle ( this follows from eqs .( 28)-(31 ) in ref . combined with standard trigonometric identities ) . for completeness note that in eq .( 29 ) on p. 167 of ref . , in the numerator should be .rawlins derivation of eq . in ref . is based on a trick first introduced by williams in ref . to solve the analogous problem for a mixed neumann / impedance wedge . to evaluate the formula in eq ., rawlins uses the representation for given in eq . 
, but translates to give where is the translated version of passing through the saddle point at ( see fig .[ fig : contourb ] .this transformation greatly simplifies the application of the differential operator because the spatial variables and occur only in the exponential factor in the integrand in eq . , with .hence eq . can be evaluated as {{\rm e}}^{-{{\rm i}}k r \cos(\theta-\theta_0)}\notag\\ & \quad+r_1^i{{\rm h}}[\pi-\theta-\theta_0]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \quad+r_2^i{{\rm h}}[\theta+\theta_0 - 2\pi]{{\rm e}}^{{{\rm i}}kr \cos(\theta+\theta_0)}\notag\\ & \quad+t_1^i{{\rm h}}[\pi-\theta-\re{\theta_1}-\operatorname{gd}(\im{\theta_1})]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_1)}\notag\\ & \quad+t_2^i{{\rm h}}[\theta+\re{\theta_2}+\operatorname{gd}(\im{\theta_2})-2\pi]{{\rm e}}^{{{\rm i}}k r \cos(\theta+\theta_2)}\notag\\ & \quad+ p^i_d(r,\theta ) , \label{eqn : sommtotalimp}\end{aligned}\ ] ] where and ( changing variable back to ) the first term in eq . represents the incident field ; the second and third terms represent reflected waves from the two wedge faces ; the fourth and fifth terms represent surface waves propagating along the two wedge faces ; and the final term represents the diffracted field . a surface wave associated with the face is excited if , and in this case is confined to the angular region .similarly , a surface wave associated with the face is excited if , and in this case is confined to the angular region . in terms of the admittance parameter , recalling eq .one finds that surface waves are excited if the surface waves ( when they exist ) decay exponentially with increasing distance both perpendicular to and along the face with which they are associated ( unless is pure negative imaginary , in which case they maintain a constant amplitude along the face itself , decaying in the perpendicular direction ) .the diffracted field can be approximated in the far field using the method of steepest descent , giving as , where the diffraction coefficient note that eq .corrects a typographical error in ref . : in eq .( 36 ) of ref . , in the denominator should be .note also that the approximation in eq. breaks down near zone boundaries ( i.e. , at values of for which the argument of one of the heaviside functions in eq .equals zero ) .a more sophisticated far field approximation , valid uniformly across the zone boundaries , is given in ref . . an edge source representation for can be derived by closely following the procedure outlined in section [ sec : dirderivation ] for the dirichlet case .first deform the contour of integration from to ( recall fig .[ fig : contoura ] ) . then write the resulting integral as an integral over the positive imaginary axis only . simplifying the resulting expressionrequires slightly more work than in the dirichlet case because the part of the integrand in eq .not involving is no longer an even function of as it was in the dirichlet case . to deal with this ,first decompose into a sum of even and odd parts ( with respect to ) then deal with the contribution from the even part using eq . , and that from the odd part using the identity where with eq. follows from the identity finally , parametrising the contour using eq . 
gives an edge source representation for the impedance solution : {{\rm e}}^{-{{\rm i}}k r \cos(\theta-\theta_0)}\notag\\ & \quad+r_1^i{{\rm h}}[\pi-\theta-\theta_0]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_0)}\notag\\ & \quad+r_2^i{{\rm h}}[\theta+\theta_0 - 2\pi]{{\rm e}}^{{{\rm i}}kr \cos(\theta+\theta_0)}\notag\\ & \quad+t_1^i{{\rm h}}[\pi-\theta-\re{\theta_1}]{{\rm e}}^{-{{\rm i}}k r \cos(\theta+\theta_1)}\notag\\ & \quad+t_2^i{{\rm h}}[\theta+\re{\theta_2}-2\pi]{{\rm e}}^{{{\rm i}}k r \cos(\theta+\theta_2)}\notag\\ & \quad+ p^i_{d,{\rm edge}}(r,\theta ) , \label{eqn : sommtotalimpedgesource}\end{aligned}\ ] ] where and where and , for , and denote the functions and evaluated at incidence angle .with regard to surface waves , note that the arguments of the heaviside functions multiplying the fourth and fifth terms on the right - hand - side of eq .are always equal to zero except in the degenerate cases ( i ) and and ( ii ) and , respectively .thus , recalling the discussion at the end of section [ sec : impcontint ] , one finds that if either or is such that surface waves are present , these surface waves must form part of the edge source integral in eq . .in this case , in the region where the surface waves exist it holds that so that the edge source integral in eq .can not be associated solely with the diffracted field , as is the case for ideal ( dirichlet or neumann ) boundary conditions with a real incidence angle .nonetheless , one can check that by applying the method of stationary phase to the integral in eq ., the far - field diffraction coefficient approximation in eq . is recovered .this does not contradict the above remarks ( in particular eq . ) since the surface wave contributions ( when present ) are exponentially small with respect to increasing , and hence are not picked up by the method of stationary phase .it should be remarked that the existence of the edge source representation in eq .is of mainly theoretical interest ( for example in the development of approximate solutions for finite edges and of edge integral equation formulations of scattering problems - see , e.g. , ref . ) . for numerical computations of the infinite wedge solution at medium to high frequencies the expression in eq . should be used rather than that in eq . , because of the faster convergence of the integral in eq .compared to that in eq . .a secondary edge source representation has been presented ( in eq . ) for the exact solution of scattering of a plane wave at perpendicular incidence on a right - angled impedance wedge . when the impedance parameters are such that surface waves are present , the edge source integral can not be associated solely with the diffracted field , as in the case of ideal ( dirichlet or neumann ) boundary conditions , because it also incorporates the surface waves .a similar edge source representation should also be possible for general wedge angles , starting from the contour integral solutions in refs . and .but the analysis , and the resulting edge integral , are expected to be significantly more complicated than in the right - angled case considered here , for which one has the particularly simple contour integral solution provided by ref .another interesting problem would be the derivation of edge source representations for more general incident waves , for example line source or point source excitation .however , to the present author s knowledge no convenient contour integral solution exists for these cases . 
certainly the expressions obtained would be significantly more complicated than those obtained here for plane wave incidence ; for a start , the green s function for a line source or point source above an impedance boundary can not be obtained by the method of images , as it can in the plane wave case .these generalisations are left for future work . | this paper concerns the frequency domain problem of diffraction of a plane wave incident on an infinite right - angled wedge on which impedance ( absorbing ) boundary conditions are imposed . it is demonstrated that the exact sommerfeld - malyuzhinets contour integral solution for the diffracted field can be transformed to a line integral over a physical variable along the diffracting edge . this integral can be interpreted as a superposition of secondary point sources ( with directivity ) positioned along the edge , in the spirit of the edge source formulations for rigid ( sound - hard ) wedges derived in [ u. p. svensson , p. t. calamia and s. nakanishi , acta acustica / acustica 95 , 2009 , pp . 568 - 572 ] . however , when surface waves are present the physical interpretation of the edge source integral must be altered : it no longer represents solely the diffracted field , but rather includes surface wave contributions . |
the inference of couplings between dynamical subsystems , from data , is a topic of general interest .transfer entropy , which is related to the concept of granger causality , has been proposed to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems . by appropriate conditioning of transitionprobabilities this quantity has been shown to be superior to the standard time delayed mutual information , which fails to distinguish information that is actually exchanged from shared information due to common history and input signals . on the other hand , granger formalized the notion that , if the prediction of one time series could be improved by incorporating the knowledge of past values of a second one , then the latter is said to have a _ causal _ influence on the former . initially developed for econometric applications ,granger causality has gained popularity also in neuroscience ( see , e.g. , ) .a discussion about the practical estimation of information theoretic indexes for signals of limited length can be found in .transfer entropy and granger causality are equivalent in the case of gaussian stochastic variables : they measure the information flow between variables .recently it has been shown that the presence of redundant variables influences the estimate of the information flow from data , and that maximization of the total causality is connected to the detection of groups of redundant variables . in recent years ,information theoretic treatment of groups of correlated degrees of freedom have been used to reveal their functional roles as memory structures or those capable of processing information .information theory suggests quantities that reveal if a group of variables is mutually redundant or synergetic .most approaches for the identification of functional relations among nodes of a complex networks rely on the statistics of motifs , subgraphs of _ k _ nodes that appear more abundantly than expected in randomized networks with the same number of nodes and degree of connectivity .an interesting approach to identify functional subgraphs in complex networks , relying on an exact expansion of the mutual information with a group of variables , has been presented in . in this workwe generalize these results to show a formal expansion of the transfer entropy which puts in evidence irreducible sets of variables which provide information for the future state of the target .multiplets of variables characterized by an high value , unjustifiable by chance , will be associated to informational circuits present in the system .additionally , in applications where linear models are sufficient to explain the phenomenology , we propose to use the exact formula for the conditioned mutual information among gaussian variables so as to get a computationally efficient approach .an approximate procedure is also developed , to find informational circuits of variables starting from few variables of the multiplet by means of a greedy search .we illustrate the application of the proposed expansion to a toy model and two real eeg data sets . the paper is organized as follows . in the next sectionwe describe the expansion and motivate our approach . in section iiiwe report the applications of the approach and describe our greedy search algorithm . in section iv we draw our conclusions .we start describing the work in . 
given a stochastic variable and a family of stochastic variables , the following expansion for the mutual information , analogous to a taylor series , has been derived there : where the variational operators are defined as and so on .now , let us consider time series .the lagged state vectors are denoted being the window length .firstly we may use the expansion ( [ mi ] ) to model the statistical dependencies among the variables at equal times .we take as the target time series , and the first terms of the expansion are for the first order ; for the second order ; and so on .we note that where is the _ interaction information _ , a well known information measure for sets of three variables ; it expresses the amount of information ( redundancy or synergy ) bound up in a set of variables , beyond that which is present in any subset of those variables . unlike the mutual information, the interaction information can be either positive or negative .common - cause structures lead to negative interaction information . as a typical example of positive interaction informationone may consider the three variables of the following system : the output of an xor gate with two independent random inputs ( however some difficulties may arise in the interpretation of the interaction information , see ) .it follows that positive ( negative ) corresponds to redundancy ( synergy ) among the three variables , and . in order to go beyond equal time correlations , here we propose to consider the flow of information from multiplets of variables to a given target .accordingly , we consider which measures to what extent all the remaining variables contribute to specifying the future state of .this quantity can be expanded according to ( [ mi ] ) : drawback of the expansion ( [ mi2 ] ) is that it does not remove shared information due to common history and input signals ; therefore we choose to condition it on the past of , i.e. . to this aimwe introduce the conditioning operator : and observe that and the variational operators ( [ diff1 ] ) commute .it follows that we can condition the expansion ( [ mi3 ] ) term by term , thus obtaining the first order terms in the expansion are given by : and coincide with the bivariate transfer entropies ( times -1 ) .the second order terms are and may be seen as a generalization of the interaction information ; hence a positive ( negative ) corresponds to a redundant ( synergetic ) flow of information .the typical examples of synergy and redundancy , in the present framework of network analysis , are the same as in the static case , plus a delay for the flow of information towards the target .the third order terms are and so on . the generic term in the expansion ( [ mi4 ] ) , is symmetrical under permutations of the and , remarkably , statistical independence among any of the results in vanishing contribution to that order .therefore each nonvanishing accounts for an irreducible set of variables providing information for the specification of the target : the search for for informational multiplets is thus equivalent to search for terms ( [ term ] ) which are significantly different from zero .another property of ( [ mi4 ] ) is that the sign of each term is connected to the informational character of the corresponding set of variables , see ) . 
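As a small numerical check of the XOR example mentioned above, the snippet below computes I(A;B), I(A;B|C) and their difference when C is the XOR of two independent fair bits. With the convention that the interaction information equals I(A;B|C) - I(A;B), the result is +1 bit, i.e. positive, as stated in the text; under our reading, the second-order term of the expansion then carries the opposite sign, so that redundancy comes out positive there. The code is our illustration only.

```python
import numpy as np
from itertools import product

# joint distribution of (A, B, C): A, B independent fair bits, C = A xor B
p = {(a, b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)}

def H(axes):
    """Entropy (in bits) of the marginal over the given coordinates."""
    marg = {}
    for outcome, pr in p.items():
        key = tuple(outcome[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + pr
    return -sum(q * np.log2(q) for q in marg.values() if q > 0)

I_AB   = H([0]) + H([1]) - H([0, 1])                      # I(A;B)   = 0 bits
I_AB_C = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])    # I(A;B|C) = 1 bit
print(I_AB, I_AB_C, I_AB_C - I_AB)                        # 0.0  1.0  1.0
```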
for practical applications ,a reliable estimate of conditional mutual information from data should be used .non parametric methods are recommendable when nonlinear effects are relevant .however , a conspicuous amount of phenomenology in brain can be explained by linear models : therefore , for the sake of computational load , in this work we adopt the assumption of gaussianity and use the exact expression that holds in this case , which reads as follows .given multivariate gaussian random variables , and , the conditioned mutual information is where denotes the determinant , and the partial covariance matrix is defined in terms of the covariance matrix and the cross covariance matrix ; the definition of is analogous . the statistical significance of ( [ term ] ) can be assessed by observing that it is the sum of terms like ( [ bar ] ) which , under the null hypothesis , have a distribution .alternatively , statistical testing may be done using surrogate data obtained by random temporal shuffling of the target vector ; the latter strategy is the one we use in this work .in this subsection we show the application of the proposed expansion , truncated at the second order . to this aimwe turn to real electroencephalogram ( eeg ) data , the window length being fixed by cross validation .firstly we consider recordings obtained at rest from 10 healthy subjects . during the experiment , which lasted for 15 min ,the subjects were instructed to relax and keep their eyes closed . to avoid drowsiness ,every minute the subjects were asked to open their eyes for 5 s. eeg was measured with a standard 10 - 20 system consisting of 19 channels .data were analyzed using the linked mastoids reference , and are available from .for each subject we consider several epochs of 4 seconds in which the subjects kept their eyes closed . for each epochwe compute the second order terms at equal times and the lagged ones ; then we average the results over epochs . in order to visualize these results ,for each target electrode we plot a on a topographic scalp map the pairs of electrodes which are redundant or synergetic with respect to it .both quantities are distributed with a clear pattern across the scalp .interactions at equal times are one order of magnitude higher than the lagged interactions , and are dominated by the effect of spatial proximity , see fig . 1 .on the other hand , show a richer dynamics , such as interhemispheric communications and predominance redundancy to and from the occipital channels , see fig .2 , reflecting the prominence of the occipital rhythms when the subjects rest with their eyes closed .as another example we consider intracranial eeg recordings from a patient with drug - resistant epilepsy and which has thus been implanted with an array of cortical electrodes and two depth electrodes with six contacts .the data are available at and described in . for each seizure dataare recorded from the preictal period , the 10 seconds preceding the clinical onset of the seizure , and the ictal period , 10 seconds from the clinical onset of the seizure .we analyze data corresponding to eight seizures and average the corresponding results . for each electrodewe compute the lagged influences , obtaining for each electrode the pair of other electrodes with redundant or synergetic contribution to its future .the patient has a putative epileptic focus in a deep hippocampal region , with the seizure that then spreads to the cortical areas . 
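Before examining these results in detail, the Gaussian machinery used in both applications can be made explicit. The sketch below is ours: it implements the standard Gaussian identity I(X;Y|Z) = (1/2) ln[ det Sigma(X|Z) / det Sigma(X|Y,Z) ], with the partial covariance Sigma(X|Z) = Sigma_XX - Sigma_XZ Sigma_ZZ^{-1} Sigma_ZX, which we take to be the (partly elided) expression referred to above, and assembles from it the first-order (bivariate transfer entropy) and second-order lagged terms, with the sign convention that redundancy is positive, as stated in the text.

```python
import numpy as np

def partial_cov(S, ix, iz):
    """Partial covariance Sigma(X|Z) = S_XX - S_XZ S_ZZ^{-1} S_ZX."""
    Sxx = S[np.ix_(ix, ix)]
    if len(iz) == 0:
        return Sxx
    Sxz = S[np.ix_(ix, iz)]
    Szz = S[np.ix_(iz, iz)]
    return Sxx - Sxz @ np.linalg.solve(Szz, Sxz.T)

def gaussian_cmi(S, ix, iy, iz):
    """I(X;Y|Z) in nats for jointly Gaussian variables with covariance S."""
    num = np.linalg.slogdet(partial_cov(S, ix, iz))[1]
    den = np.linalg.slogdet(partial_cov(S, ix, list(iy) + list(iz)))[1]
    return 0.5 * (num - den)

def lagged_terms(data, target, drivers, m=1):
    """First- and second-order lagged terms for one target series.

    data : (T, K) array of time series; target, drivers : column indices;
    m : embedding window length. Conditioning is on the target's own past,
    as in the text; second-order terms are positive for redundancy and
    negative for synergy.
    """
    T, K = data.shape
    fut = data[m:, [target]]                                  # future of the target
    past = np.hstack([data[m - l - 1:T - l - 1, :] for l in range(m)])
    S = np.cov(np.hstack([fut, past]), rowvar=False)
    ix = [0]                                                  # future of target
    iY = [1 + l * K + target for l in range(m)]               # past of target
    cols = {j: [1 + l * K + j for l in range(m)] for j in drivers}
    te = {j: gaussian_cmi(S, ix, cols[j], iY) for j in drivers}
    second = {}
    for a in drivers:
        for b in drivers:
            if a < b:
                joint = gaussian_cmi(S, ix, cols[a] + cols[b], iY)
                second[(a, b)] = te[a] + te[b] - joint        # redundancy > 0
    return te, second
```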
in fig .3 we report the values of coefficients taking as the target a cortical electrode located on the putative cortical focus : we report the values of corresponding to all the couple of the electrodes , as well as their sum over electrode .it is clear how the redundancy increases during the seizure . on the other hand , for sensors from 70 to 76 , corresponding to a depth electrode , the redundancy is higher in the preictal period , reflecting the fact that the seizure is already active in its primary focus even if not yet clinically observable .the values of corresponding to this electrode are reported in fig .4 . given a target variable ,the time required for the exhaustive search of all the subsets of variables , with a statistically significant information flow ( [ term ] ) , is exponential in the size of the system .it follows that the exact search for large multiplets is computationally unfeasible , hence we adopt the following approximate strategy .we start from a pair of variables with non - vanishing second order term w.r.t . the given target .we consider these two variables as a _ seed _ , and aggregate other variables to them so as to construct a multiplet .the third variable of the subset is selected among the remaining ones as those that , jointly with the previously chosen variable , maximize the modulus of the corresponding third order term .then , one keeps adding the rest of the variables by iterating this procedure . calling the selected set of k - 1 variables , the set is obtained adding , to , the variable , among the remaining ones , with the greatest modulus of .these iterations stop when , corresponding to , is not significantly different from zero ( the bonferroni correction for multiple comparisons is to be applied at each iteration ) ; is then recognized as the multiplet originated by the initial pair of variables chosen as the seed .we apply this strategy to the following toy model where and are i.i.d .unit variance gaussian variables . in this modelthe target is influenced by the process ; variables , , are a mixture of and noise , whilst the remaining variables are pure noise .estimates of are based on time series , generated from ( [ map ] ) and 1000 samples long . the results are displayed in figure ( [ figmodels ] ) .firstly we consider the case and , with all the twenty variables driving the target with equal couplings ; in figure ( [ figmodels])-a we depict the term corresponding to the -th iteration of the greedy search .we note that has alternating sign and its modulus decreases with . in figure ( [ figmodels])-bwe consider another situation , with and , the ten non - zero couplings being non - uniform . still shows alternating sign , and vanishes for ; hence the multiplet of ten variables is correctly identified .the order of selection is related to the strength of the coupling : variables with stronger coupling are selected first . in figure ( [ figc3 ] )we consider again the eeg data from healthy subjects with closed eyes , and apply the greedy search taking c3 as the target and as the seed .we find a subset of 9 variables influencing the target .the fact that the sign of is alternating , as in the previous model , suggests that the channels in this set correspond to a single source which is responsible for the inter - hemispheric communication towards the target electrode c3 . 
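The greedy aggregation just described can be sketched as follows. This is a schematic transcription of the strategy in the text, with the expansion term and the surrogate/Bonferroni significance test left as hypothetical callables (delta_term and is_significant), since their detailed definitions appear elsewhere in the paper; in the Gaussian case delta_term can be assembled from the conditional mutual information routine sketched above.

```python
def greedy_multiplet(S, ix, iY, candidates, seed, delta_term, is_significant):
    """Greedy aggregation of a multiplet around a seed pair (our sketch).

    S, ix, iY : covariance matrix and index lists as in the previous sketch.
    candidates : list of candidate driver variables.
    seed : pair of variables with a significant second-order term.
    delta_term(S, ix, variables, iY) : hypothetical callable returning the
        expansion term of order len(variables) for that multiplet.
    is_significant(value, order, n_tests) : hypothetical callable implementing
        the surrogate test with Bonferroni correction described in the text.
    """
    multiplet = list(seed)
    history = [delta_term(S, ix, multiplet, iY)]
    remaining = [v for v in candidates if v not in multiplet]
    while remaining:
        # add the variable giving the largest |Delta| at the next order
        scores = {v: delta_term(S, ix, multiplet + [v], iY) for v in remaining}
        best = max(scores, key=lambda v: abs(scores[v]))
        if not is_significant(scores[best], len(multiplet) + 1, len(remaining)):
            break          # new term not distinguishable from zero: stop here
        multiplet.append(best)
        history.append(scores[best])
        remaining.remove(best)
    return multiplet, history
```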
in figure ( [ figo1 ] ) we take o1 as the target and as the seed . a subset of 11 variables is found which describes the information flow from the frontal to the occipital cortex .

figure [ figeegz ] ( caption ) : for two target electrodes , c3 on the left and o1 on the right . the target electrode is in white , and for each of the other electrodes on the map , the value of is displayed .
figure [ figeegz2 ] ( caption ) : for two target electrodes , c3 on the left and o1 on the right . the target electrode is in white , and for each of the other electrodes on the map , the value of is displayed .
figure [ figcort ] ( caption ) : for a cortical electrode ( in white ) right before and during the clinical onset of a seizure , and the sum over the second electrode of the pair in the lower right panel .
figure [ figdepth ] ( caption ) : for a depth electrode ( in white ) right before and during the clinical onset of a seizure , and the sum over the second electrode of the pair in the lower right panel .
figure [ figmodels ] ( caption ) : as a function of the multiplet size for a model in which one variable is influenced by all the other variables or by part of them . in ( a ) and : all the 20 variables influence the target with unitary weight . in ( b ) and : the weights are [ 1.75 1.75 1 1 1 1 .5 .5 .5 .5 ] . the insets show the logarithm of the absolute value of . the first point , in both plots , represents the initial pair of variables chosen as the seed . the other parameters are , in both cases , .
figure [ figc3 ] ( caption , fragment ) : leads to a which is not significantly different from zero . right : the absolute value of these contributions plotted on a scalp map .
figure [ figo1 ] ( caption , fragment ) : is not significantly different from zero . right : the absolute value of these contributions plotted on a scalp map .

summarizing , we have proposed to describe the flow of information in a system by means of multiplets of variables which send information to each assigned target node . we used a recently proposed expansion of the mutual information , between a stochastic variable and a set of other variables , to measure the character and the strength of multiplets of variables . indeed , terms of the proposed expansion put in evidence irreducible sets of variables which provide information about the future state of the target channel . the sign of the contributions is related to their informational character ; for the second order terms , synergy and redundancy correspond to negative and positive sign , respectively . for higher orders , we have shown that groups of variables related to the same source of information lead to contributions with alternating signs as the number of variables is increased . a decomposition with similarities to the present work has been reported in , where for multiple sources the distinction between unique , redundant , and synergistic transfer has been proposed ; in , the inference of an effective network structure , given a multivariate time series , using incrementally conditioned transfer entropy measurements has been discussed . the main purpose of this paper is to introduce an information - based decomposition , and we did that in a framework unifying granger causality and transfer entropy , thus using a formula which is exact for linear models .
in cases in which a nonlinear model is required , the entropy has to be computed , requiring a high enough number of time points for statistical validation ; nonetheless the expansion that we proposed remains valid and exact in both cases .we have reported the results of the applications to two eeg examples .the first data set is from _ resting brains _ and we found signatures of inter - hemispherical communications and frontal to occipital flow of information . concerning a data set from an epileptic subject , our analysis puts in evidence that the seizure is already active , close to the primary lesion , before it is clinically observable .99 t. schreiber,_phys .lett . _ * 85 * , pp .461 - 464 , 2000 .granger , _ econometrica _ * 37 * , pp .424 - 438 , 1969 .m. staniek , k. lehnertz , phys .* 100 * , 158101 ( 2008 ) .m. lindner , r. vicente , v. priesemann , and m. wibral , bmc neuroscience * 12 * , 119 ( 2011 ) .blinowska , r. kus , m. kaminski , _ phys .e _ * 70 * , pp . 50902 - 50905(r ) , 2004 .smirnov , b.p .bezruchko , _ phys rev . _* e 79 * , pp .46204 - 46212 , 2009 .m. dhamala , g. rangarajan , m. ding,_phys ._ * 100 * , pp .18701 - 18704 , 2008 .d. marinazzo , m. pellicoro , s. stramaglia , _ phys .lett . _ * 100 * , pp .144103 - 144107 , 2008 .l. faes , a. porta , g. nollo , _ phys .* e 78 * , 26201 ( 2008 ) .a. porta et al , _ methods of information in medicine _ 49 , pp .506 - 510 , 2010 .l. barnett , a.b .barrett , and a.k .seth , _ phys . rev* 103 * , pp . 238701 - 23704 , 2009 .k. hlavackova - schindler , m. palus , m. vejmelka , j. bhattacharya , physics reports * 441 * , 1 ( 2007 ) .l. angelini , m. de tommaso , d. marinazzo , l. nitti , m. pellicoro , and s. stramaglia , _ phys ._ * e 81 * , 37201 ( 2010 ) .a. borst , f.e .theunissen , _ nat .neurosci . _* 2 * , pp .947 - 957 , 1999 .e. schneidman , w. bialek , m.j .berry ii , _j. neuroscience _ * 23 * , pp . 11539 - 11553 , 2003 .bettencourt et al . , _ phys .rev . _ * e 75 * , pp .21915 - 21924 , 2007 .r. milo et al . , _ science _ 298 , pp .824 - 827 , 2002 .e. yeger - lotem et al . , _ proc .natl.acad . sci ._ 101 , pp .5934 - 5939 , 2004 .bettencourt , v. gintautas , m.i .ham , _ phys .lett . _ * 100 * , pp .238701 - 238704 , 2008 .z. shehzad et al . , _ cerebral cortex _ * 19 * , pp .2209 - 2229 , 2009 .n. tzourio - mazoyer et al . , it neuroimage * 15 * , pp .273 - 289 , 2002 .fox et al ., _ proc natl acad sci u s a. _ * 102 * , pp .9673 - 9678 , 2005 .r. salvador et al . , _ cereb .cortex _ * 15 * , 1332 - 1342 , 2005 .r. leech , r. braga , and d. j. sharp , _ the journal of neuroscience _ 32 , pp .215 - 222 , 2012 .g. nolte et al . , _ phys .lett . _ * 100 * , pp .234101 - 234104 , 2008 .g. nolte , a. ziehe , v. nikulin , a. schlgl , n. krmer , t. brismar , k.r .mller , phys .* 100 * , 234101 , 2008 | we propose a formal expansion of the transfer entropy to put in evidence irreducible sets of variables which provide information for the future state of each assigned target . multiplets characterized by a large contribution to the expansion are associated to informational circuits present in the system , with an informational character which can be associated to the sign of the contribution . for the sake of computational complexity , we adopt the assumption of gaussianity and use the corresponding exact formula for the conditional mutual information . we report the application of the proposed methodology on two eeg data sets . |
the recovery problem of sparse vectors from a linear underdetermined set of equations has recently attracted attention in various fields of science and technology due to its many applications , for example , in linear regression , communication , , , multimedia , , , and compressive sampling ( cs ) , .in such a sparse representation problem , we have the following underdetermined set of linear equations where is is the dictionary is and .[multiblock footnote omitted ] another way of writing is that a large dimensional sparse vector is coded / compressed into a small dimensional vector and the task will be to find the from with the full knowledge of . for this problem , the optimum solution is the sparsest vector satisfying . finding the sparsest vector is however np - hard ; thus , a variety of practical algorithms have been developed . among the most prominentis the convex relaxation approach in which the objective is to find the minimum -norm solution to . for the -norm minimization , if is -sparse , which indicates that the number of non - zero entries of is at most , the minimum that satisfies gives the limit up to which the signal can be compressed for a given dictionary .an interesting question then arises : how does the choice of the dictionary affect the typical compression ratio that can be achieved using the -recovery ?recent results in the parallel problem of cs , where acts as a sensing matrix , reveal that the typical conditions for perfect -recovery are universal for all random sensing matrices that belong to the rotationally invariant matrix ensembles .the standard setup , where the entries of the sensing matrix are independent standard gaussian , is an example that belongs to this ensemble .it is also known that the conditions required for perfect recovery do not in general depend on the details of the marginal distribution related to the non - zero elements . on the other hand , we know that correlations in the sensing matrix can degrade the performance of -recovery .this suggests intuitively that using a sample matrix of the rotationally invariant ensembles as is preferred in the recovery problem when we expect to encounter a variety of dense signals .however , the set of matrix ensembles whose -recovery performance are known is still limited , and further investigation is needed to assess whether the choice of is indeed so straightforward . the purpose of the present study is to fulfill this demand .specifically , we examine the typical -recovery performance of the matrices constructed by concatenating several randomly chosen orthonormal bases .such construction has attracted considerable attention due to ease of implementation and theoretical elegance , , for designing sparsity inducing over - complete dictionaries for natural signals . for a practical engineering scheme , audio coding ( music source coding ) uses a dictionary formed by concatenating several modified discrete cosine transforms with different parameters . 
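For concreteness, the l1-recovery referred to throughout can be carried out with a standard linear-programming reformulation: split x into non-negative parts u and v with x = u - v and minimize the sum of their entries subject to A(u - v) = y. The sketch below uses SciPy's LP solver; it is a generic illustration, not necessarily the solver used in the works cited above.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, via the u/v splitting x = u - v, u,v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                     # objective: 1^T u + 1^T v
    A_eq = np.hstack([A, -A])              # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    uv = res.x
    return uv[:n] - uv[n:]
```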
by using the replica method in conjunction with the development of an integral formula for handling random orthogonal matrices, we show that the dictionary consisting of concatenated orthogonal matrices is also preferred in terms of the performance of -recovery .more precisely , the matrices can result in better -recovery performance than that of the rotationally invariant matrices when the density of non - zero entries of is not uniform among the orthogonal matrix modules , while the performance is the same between the two types of matrices for the uniform densities .this surprising result further promotes the use of the concatenated orthogonal matrices in practical applications .this paper is organized as follows . in the next section ,we explain the problem setting that we investigated . in section 3 , which is the main part of this paper, we discuss the development of a methodology for evaluating the recovery performance of the concatenated orthogonal matrices on the basis of the replica method and an integral formula concerning the random orthogonal matrices . in section 4 , we explain the significance of the methodology through application to two distinctive examples , the validity of which is also justified by extensive numerical experiments .the final section is devoted to a summary .we assume that is a multiple number of ; namely , .suppose a situation in which an dictionary matrix is constructed by concatenating module matrices , which are drawn uniformly and independently from the haar measure on orthogonal matrices , as .\label{lorth}\end{aligned}\ ] ] using this , we compress a sparse vector to following the manner of ( [ eq : sparse_representation_without_noise ] ) . we denote for the concatenation of sub - vectors of dimensions as yielding the expression with full knowledge of and , the -recovery is performed by solving the constrained minimization problem where for and generally denotes the minimization of with respect to and . at the minimum condition , constitutes the recovered vector in the manner of ( [ vector_union ] ) . for theoretically evaluating the -recovery performance , we assume that the entries of , are distributed independently according to a block - dependent sparse distribution where means the density of the non - zero entries of the -th block of the same size in and is a distribution whose second moment about the origin is finite , which is assumed as unity for simplicity .intuitively , as the compression rate decreases , the overall density up to which ( [ l1_recovery ] ) can successfully recover a typical sample of the original vector becomes smaller .however , precise performance may depend on the profile of .the above setting allows us to quantitatively examine how such block dependence of the non - zero density affects the critical relation between and for typically successful -recovery of .expressing the solution of ( [ l1_recovery ] ) as where and , constitutes the basis of our analysis .equations ( [ posterior_mean ] ) and ( [ posterior ] ) mean that can be identified with the average of the state variable for the gibbs - boltzmann distribution ( [ posterior ] ) in the vanishing temperature limit .however , as ( [ posterior ] ) depends on and , further averaging with respect to the generation of these external random variables is necessary for evaluating the typical properties of the -recovery .evaluation of such `` double averages '' can be carried out systematically using the replica method . 
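A small sketch of the setup just described: module matrices drawn (approximately) from the Haar measure by QR decomposition of standard Gaussian matrices, concatenated into the dictionary, and a signal whose sub-vectors have block-dependent non-zero densities with unit-second-moment non-zero entries. The sign fix on the QR factor is the usual trick for making the draw uniform; the concrete sizes and densities at the bottom are only illustrative.

```python
import numpy as np

def haar_orthogonal(m, rng):
    """m x m orthogonal matrix, approximately Haar-distributed:
    QR of a Gaussian matrix with column signs fixed by diag(R)."""
    q, r = np.linalg.qr(rng.standard_normal((m, m)))
    return q * np.sign(np.diag(r))

def concatenated_dictionary(m, T, rng):
    """A = [O_1 O_2 ... O_T], independent Haar orthogonal blocks."""
    return np.hstack([haar_orthogonal(m, rng) for _ in range(T)])

def block_sparse_signal(m, T, rhos, rng):
    """Sub-vector t has non-zero density rhos[t] (length-T list);
    non-zeros are standard normal, i.e. unit second moment."""
    x = np.zeros(m * T)
    for t in range(T):
        mask = rng.random(m) < rhos[t]
        x[t * m:(t + 1) * m][mask] = rng.standard_normal(mask.sum())
    return x

rng = np.random.default_rng(0)
A = concatenated_dictionary(64, 4, rng)
x0 = block_sparse_signal(64, 4, [0.4, 0.0, 0.0, 0.0], rng)   # "localized" example
y = A @ x0
```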
in the replica method , we need to evaluate the average of for with respect to over the uniform distributions of orthogonal matrices . however , this is rather laborious and is easy to yield notational confusions . for reducing such technical obstacles ,we introduce a formula convenient for accomplishing this task before going into detailed manipulations .similar formulae have been introduced for handling random eigenbases of symmetric matrices and random left and right eigenbases of rectangular matrices .let us assume that -dimensional vectors are characterized by their norms as , where denotes the standard euclidean norm of the vector .for these vectors , we define the function {\{{\boldsymbol{o}}_t\ } } \right ) \cr & = & \lim_{m \to \infty } \frac{1}{m}\ln \left ( \frac{\int \left ( \prod_{t=1}^{t}{\cal d}{\boldsymbol{o}}_t \right ) \delta \left ( \sum_{t=1}^{t}{\boldsymbol{o}}_t { \boldsymbol{u}}_t \right ) } { \int \left ( \prod_{t=1}^{t}{\cal d}{\boldsymbol{o}}_t \right ) } \right ) , \label{formula_def}\end{aligned}\ ] ] where ] , can be accurately evaluated for all by using the saddle point method with respect to macroscopic variables , , and , where and .intrinsic permutation symmetry concerning the replica indices in ( [ power_partition ] ) guarantees that there exists a saddle point of the form , , and , which is often termed the replica symmetric ( rs ) solution . as a simple and plausible candidate, we adopt this solution as the relevant saddle point for describing the typical property of the -recovery , the validity of which will be checked in section 3.5 .the detailed computation is carried out as follows .let us consider averaging ( [ power_partition ] ) with respect to and define for each fixed set of : {\{{\boldsymbol{o}}_t\ } } , \label{average_o}\end{aligned}\ ] ] where .when is placed in the configuration of the rs solution , the expression ^{\rm t } \times \left [ { \boldsymbol{u}}_t^1 \ { \boldsymbol{u}}_t^2 \ \ldots \ { \boldsymbol{u}}_t^n \right ] = \left ( \begin{array}{cccc } mr_t & mr_t & \cdots & mr_t \cr mr_t & mr_t & \cdots & mr_t \cr \vdots & \vdots & \ddots & \vdots \cr mr_t & mr_t & \cdots & m r_t \end{array } \right ) \cr & & = { \boldsymbol{e } } \times \left ( \begin{array}{cccc } m(r_t - r_t+nr_t ) & 0 & \cdots & 0 \cr 0 & m(r_t - r_t ) & \cdots & 0 \cr \vdots & \vdots & \ddots & \vdots \cr 0 & 0 & \cdots & m(r_t - r_t ) \end{array } \right ) \times { \boldsymbol{e}}^{\rm t}\end{aligned}\ ] ] holds for each , where stands for matrix transpose , , and . ] may be expressed as = \left [ \tilde{{\boldsymbol{u}}}_t^1\ \tilde{{\boldsymbol{u}}}_t^2 \ \ldots \ \tilde{{\boldsymbol{u}}}_t^n \right ] \times { \boldsymbol{e}}^{\rm t } , \label{cordinate_conversion}\end{aligned}\ ] ] by using a set of orthogonal vectors ,whose norms are given as and for , along with an orthogonal matrix that does not depend on .this guarantees the equality .furthermore , condition and the orthogonality of among the replica indexes allows us to evaluate the average concerning independently for each index when computing .this , in conjunction with ( [ formula_def ] ) , provides each set of the rs configuration with an expression of ( [ average_o ] ) as the right hand side of ( [ energy ] ) is likely to hold for as well , although ( [ average_o ] ) is defined originally for only . 
on the other hand , inserting identities , and into ( [ power_partition ] ) and taking an average concerning , in conjunction with integration with respect to dynamical variables , result in the expression {{\boldsymbol{x}}_t^0}\right ) \cr & & = \int d \hat{{\boldsymbol{q } } } \exp \left ( m ( a(\{q_t , q_t , m_t\ } , \{\hat{q}_t^a,\hat{q}_t^{ab},\hat{m}_t^a\ } ) + b(\{\hat{q}_t^a,\hat{q}_t^{ab},\hat{m}_t^a\ } ) \right ) , \label{volume}\end{aligned}\ ] ] for a fixed set of .the conjugate variable is introduced for expressing a delta function as , and similarly for and .notation stands for an integral measure , and the functions on the right hand side are defined as and {x_t^0 } \right ) , \end{aligned}\ ] ] where the average for is taken according to ( [ sparse_dist ] ) .the expression of ( [ volume ] ) indicates that its rescaled logarithm is accurately evaluated using the saddle point method with respect to the conjugate variables in the large system limit .in addition , the replica symmetry guarantees that the relevant saddle point is of the rs form as , , and . as a consequence ,the evaluation yields {x_t^0 } \!\right ) \!\right \ } , \label{entropy}\end{aligned}\ ] ] where denotes the gaussian measure .this is also likely to hold for , although ( [ volume ] ) is originally defined for only .the replica method uses the identity {\{{\boldsymbol{x}}_t^0\ } , \{{\boldsymbol{o}}_t\ } } = -\lim_{n\to 0 } ( \partial /\partial n)(\beta m)^{-1 } \ln \left ( \left [ z^n(\beta;\{{\boldsymbol{x}}_t^0\ } , \{{\boldsymbol{o}}_t\}\right . ] for evaluating the typical free energy density .the above argument indicates that {\{{\boldsymbol{x}}_t^0\ } , \{{\boldsymbol{o}}_t\ } } \right ) ] as well .in particular , in the limit of , which is relevant in the current problem , the expression of the free energy of the vanishing temperature is expressed as {\{{\boldsymbol{x}}_t^0\ } , \{{\boldsymbol{o}}_t\ } } \cr & & = -\mathop{\rm extr}_{}\left \{\sum_{t=1}^{t}\left ( \frac{\partial f(\{\chi_k\})}{\partial \chi_t } ( q_t-2 m_t+\rho_t ) + \frac{\hat{q}_tq_t}{2}-\frac{\hat{\chi}_t\chi_t}{2}+\hat{m}_tm_t \right . \right .\cr & & \left .. \hspace*{6 cm } -\int dz \left [ \phi \left ( \sqrt{\hat{\chi}_t } z+ { \hat{m}_t } x^0 ; \hat{q}_t \right ) \right ] _ { x^0 } \right ) \right \ } , \label{free_energy}\end{aligned}\ ] ] where and rescaled variables are introduced as , , , and to properly describe the relevant solution in the limit of .extremization is to be performed with respect to .similar to earlier studies , at the extremum characterized by a set of the saddle point equations {x_t^0 } , \label{sp1}\\ & & \chi_t=\int dz \left [ \frac{\partial x^2(\sqrt{\hat{\chi}_t}z+\hat{m}_t x_t^0;\hat{q}_t)}{\partial ( \sqrt{\hat{\chi}_t}z ) } \right ] _{ x_t^0 } , \label{sp2}\\ & & m_t=\int dz \left [ x_t^0x(\sqrt{\hat{\chi}_t}z+\hat{m}_t x_t^0;\hat{q}_t ) \right ] _ { x_t^0 } , \label{sp3}\end{aligned}\ ] ] and physically denote the macroscopic averages of the recovered vector as {\{{\boldsymbol{x}}_k^0\},\{{\boldsymbol{o}}_k\}} ] , respectively . 
here, is provided by the extremum solution of ( [ formula_concrete ] ) for , and for , a lagrange multiplier should be exceptionally introduced in ( [ sph2 ] ) for enforcing .the success of the -recovery is characterized by the condition in which is satisfied at the extremum for .therefore , one can evaluate the critical relation between and by examining the thermodynamic stability of the success solution for the saddle point equations ( [ sph1])([sp3 ] ) .we assume for a while , since an exceptional treatment is required for . for obtaining the success solution , it is necessary that and hold . expanding ( [ sp1])([sp3 ] ) under the assumption of yields where , and we used {x_t^0}=\rho_t ] . for , holds for the success solution at the critical condition of the -recovery .equation ( [ hhath ] ) yields an eigenvalue of unity whose eigenvector is given as , which makes ( [ at ] ) hold .similarly , ( [ at ] ) is also satisfied at the critical condition of the -recovery of .these validate our rs evaluation in terms of the local stability analysis , although further justification with other schemes , such as comparison with numerical experiments , is necessary for examining possibilities that the rs solution becomes thermodynamically irrelevant due to discontinuous phase transitions .let us examine the significance of the developed methodology by applying it to two representative examples .we consider the uniform density case of ( ) as the first example , where the uniformity allows us to solve ( [ sph1])([sp3 ] ) setting all variables to be independent of as .in particular , setting simplifies the expressions of ( [ sph1])([sph3 ] ) providing and .this makes it unnecessary to deal with the saddle point problems of in an exceptional manner . as a consequence, the critical condition of the -recovery is expressed compactly by using a pair of equations as for both and . by setting ,these provide a critical condition identical to that obtained for the rotationally invariance matrix ensembles in earlier studies .this indicates that for vectors of the uniform non - zero density , the -recovery performance of the concatenated orthogonal matrices is identical to that of the standard setup provided by the matrix of independent standard gaussian entries .however , this is not the case when the non - zero density is not uniform . 
as a distinctive example, we examined the case of localized density , which is characterized by setting and for .table [ table ] and figure [ figure1 ] show critical values of the total non - zero density given the compression rate for the uniform and localized density cases .these show that the concatenated matrices always result in better -recovery performance for vectors of the localized densities , and the significance increases as becomes smaller while matrices of rotationally invariant ensembles result in identical performance as long as is unchanged .this indicates that , in addition to their ease of implementation and theoretical elegance , the concatenated orthogonal matrices are preferred for practical use in terms of their high recovery performance for vectors of non - uniform non - zero densities .c|cccccccc &2&3&4&5&6&7&8 uniform & 0.1928 & 0.1021 & 0.0668 & 0.0487 & 0.0378 & 0.0308 & 0.0257 localized & 0.2267 & 0.1190 & 0.0780 & 0.0566 & 0.0438 & 0.0354 & 0.0294 ( color online ) critical values of total non - zero density versus compression rate .circles and crosses correspond to the localized and uniform densities , respectively , for .the curve represents the relation between and for the rotationally invariance matrix ensembles .crosses coincide with values of the curve for ( ) . ] to justify our theoretical results , we conducted extensive numerical experiments of the -reconstruction .figures [ figure2 ] ( a ) and ( b ) depict the experimental assessment of the critical threshold for and , respectively . the case of an i.i.d .standard gaussian dictionary is also plotted for comparison . given fixed values of and ,a trial was started with an empty vector and a concatenated orthogonal dictionary generated from a set of standard i.i.d . gaussian matrices using qr - decomposition . based on the relative densities , one sub - vector then randomly chosen and assigned a non - zero component drawn from the standard gaussian ensemble .matlab algorithm `` linprog '' from optimization toolbox was used to solve the -minimization problem and obtain the reconstruction .the reconstruction was deemed to be a success if and a failure otherwise . given a successful reconstruction , we again randomly chose one sub - vector based on the densities and inserted a non - zero component drawn independently from the standard gaussian ensemble into it .the process was continued until the original vectors had non - zero components and the reconstruction was deemed a failure , that is , . the critical value was recorded and the experiment was started again using a new independent dictionary and an empty vector . for each value of and , we carried out independent trials .the experimental critical density was defined as , where denotes the arithmetic average over the trials . for all system sizes, we also computed the experimental per - block densities and checked that they were close to the desired densities after the trials . for fixed ,the experimental data points were fitted with a quadratic function of .extrapolation for provided the experimental estimates of the critical densities , as listed in table [ table2 ] in which the theoretical estimates in table [ table ] are also listed for comparison .( color online ) experimental assessment of critical densities for -reconstruction . 
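Before turning to the figure and table, here is a minimal sketch of the trial protocol just described, reusing the `concatenated_dictionary` and `basis_pursuit` helpers sketched earlier in this section. The success tolerance is an assumption, since the exact numerical criterion is elided in the text.

```python
import numpy as np

def critical_density_trial(m, T, rel_densities, rng, tol=1e-4):
    """One experimental trial: insert non-zeros one at a time (choosing the block
    according to the relative densities, which must sum to one), reconstruct by
    l1-minimization, and return the last sparsity k that was still recovered."""
    A = concatenated_dictionary(m, T, rng)
    x = np.zeros(m * T)
    k = 0
    while True:
        t = rng.choice(T, p=rel_densities)                 # pick a block
        free = np.flatnonzero(x[t * m:(t + 1) * m] == 0.0)
        if free.size == 0:
            return k
        x[t * m + rng.choice(free)] = rng.standard_normal()
        k += 1
        x_hat = basis_pursuit(A, A @ x)
        if np.linalg.norm(x_hat - x) / np.linalg.norm(x) > tol:
            return k - 1                                   # reconstruction failed
```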
( figure [ figure2 ] caption , continued ) : experimental data ( see the main text ) were fitted with a quadratic function of and plotted with solid lines . concatenated orthogonal `` o '' and i.i.d . standard gaussian `` g '' basis under uniform and localized densities . filled markers represent the predictions obtained through the replica analysis . extrapolation for provides the estimates for the critical values . ( a ) , where the markers correspond to simulated values , and ( b ) , where the markers correspond to simulated values .

( table [ table2 ] ) critical densities , experiment versus theory :

                               2        3        4        5
    uniform ( experiment )   0.1927   0.1019   0.0670   0.0487
    uniform ( theory )       0.1928   0.1021   0.0668   0.0487
    localized ( experiment ) 0.2264   0.1196   0.0779   0.0567
    localized ( theory )     0.2267   0.1190   0.0780   0.0566

comparing the theoretical and experimental results confirms the accuracy of the replica analysis . from figure [ figure2 ] , we observe that for the finite - sized systems , the orthogonal dictionaries seem to always provide higher thresholds than the gaussian one , even for uniform densities .

in summary , we investigated the performance of recovering a sparse vector from a linear underdetermined equation when is provided as a concatenation of independent samples of random orthogonal matrices and the -recovery scheme is used . performance was measured using a threshold value of the density of non - zero entries in the original vector , below which the -recovery is typically successful for a given compression rate . for evaluating this , we used the replica method in conjunction with the development of an integral formula for handling the random orthogonal matrices . our analysis indicated that the threshold is identical to that of the standard setup , for which matrix entries are sampled independently from an identical gaussian distribution , when the non - zero entries in are distributed uniformly among blocks of the concatenation . however , it was also shown that the concatenated orthogonal matrices generally provide higher threshold values than the standard setup when the non - zero entries are localized in a certain block . results of extensive numerical experiments exhibited excellent agreement with the theoretical predictions . these results mean that , in addition to their ease of implementation and theoretical elegance , the concatenated orthogonal matrices are preferred for practical use in terms of their high recovery performance for vectors of non - uniform non - zero densities . promising future studies include performance evaluation in noisy situations and the development of approximate recovery algorithms suitable for the concatenated orthogonal matrices .

the authors would like to thank erik aurell , mikael skoglund , and lars rasmussen for their useful comments . we also thank csc it center for science ltd .
for the allocation of computational resources .this work was partially supported by grants from the japan society for the promotion of science ( kakenhi nos .22300003 and 22300098 ) ( yk ) and swedish research council under vr grant 621 - 2011 - 1024 ( mv ) .when are sampled independently and uniformly from the haar measure of the orthogonal matrices , are distributed independently and uniformly on the surfaces of the -dimensional hyperspheres of radius for a fixed set of -dimensional vectors satisfying .this means that the integral of ( [ formula_def ] ) can be expressed as we insert the fourier expressions of -function and into the numerator of ( [ spherical_integration ] ) , and carry out the integration with respect to , where .this yields the expression evaluating this by means of the saddle point method with respect to results in similarly , the denominator of ( [ spherical_integration ] ) is evaluated as substituting ( [ spherical_integration ] ) , ( [ numer ] ) , and ( [ denomi ] ) into ( [ formula_def ] ) leads to the expression of ( [ formula_concrete ] ) .10 url # 1#1urlprefix[2][]#2 miller a 2002 _ subset selection in regression ( second edition ) _ ( chapman and hall / crc ) | we consider the problem of recovering an -dimensional sparse vector from its linear transformation of dimension . minimizing the -norm of under the constraint is a standard approach for the recovery problem , and earlier studies report that the critical condition for typically successful -recovery is universal over a variety of randomly constructed matrices . for examining the extent of the universality , we focus on the case in which is provided by concatenating matrices drawn uniformly according to the haar measure on the orthogonal matrices . by using the replica method in conjunction with the development of an integral formula for handling the random orthogonal matrices , we show that the concatenated matrices can result in better recovery performance than what the universality predicts when the density of non - zero signals is not uniform among the matrix modules . the universal condition is reproduced for the special case of uniform non - zero signal densities . extensive numerical experiments support the theoretical predictions . |
compressive sensing , or compressive sampling ( cs ) , is a novel signal processing technique proposed to effectively sample and compress sparse signals , i.e. , signals that can be represented by few significant coefficients in some basis .assume that the signal of interest can be represented by , where is the basis matrix and is -sparse , which means only out of its entries are nonzero .one of the essential issues of cs theory lies in recovering ( or equivalently , ) from its linear observations , where is a sensing matrix with more columns than rows and is the measurement vector .unfortunately , directly finding the sparsest solution to ( [ y = ax ] ) is np - hard , which is not practical for sparse recovery .this leads to one of the major aspects of cs theory designing effective recovery algorithms with low computational complexity and fine recovery performance .a family of convex relaxation algorithms for sparse recovery had been introduced before the theory of cs was established .based on linear programming ( lp ) techniques , it is shown that norm optimization , also known as basis pursuit ( bp ) , yields the sparse solution as long as satisfies the restricted isometry property ( rip ) with a constant parameter .recovery algorithms based on convex optimization include interior - point methods and homotopy methods .in contrast with convex relaxation algorithms , non - convex optimization algorithms solve ( [ y = ax ] ) by minimizing norm with respect to , which is not convex .typical algorithms include focal underdetermined system solver ( focuss ) , iteratively reweighted least squares ( irls ) , smoothed ( sl0 ) , and zero - point attracting projection ( zap ) . compared with convex relaxation algorithms , theoretical analysis based on rip shows that fewer measurements are required for exact recovery by non - convex optimization methods .a family of iterative greedy algorithms has received much attention due to their simple implementation and low computational complexity .the basic idea underlying these algorithms is to iteratively estimate the support set of the unknown sparse signal , i.e. , the set of locations of its nonzero entries . in each iteration, one or more indices are added to the support estimation by correlating the columns of with the regularized measurement vector .typical examples include orthogonal matching pursuit ( omp ) , regularized omp ( romp ) , and stage - wise omp ( stomp ) .compared with convex relaxation algorithms , greedy pursuits need more measurements , but they tend to be more computationally efficient . recently , several greedy algorithms including compressive sampling matching pursuit ( cosamp ) and subspace pursuit ( sp ) have been proposed by incorporating the idea of backtracking . in each iteration, sp algorithm refines the columns of matrix that span the subspace where the measurement vector lies .specifically , sp adds more indices to the candidates of support estimate , and discards the most unreliable ones .similarly , cosamp adds more indices in each iteration , while computes the regularized measurement vector in a different way . 
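As a concrete reference point for the greedy family discussed above, a minimal orthogonal matching pursuit loop looks as follows: one index is added per iteration by correlating the columns of the sensing matrix with the current residual, followed by a least-squares refit on the growing support. This is a generic textbook sketch, not the implementation analyzed in the cited works.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit for a k-sparse signal."""
    n = A.shape[1]
    support, x = [], np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                 # do not reselect chosen columns
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A[:, support] @ coef
    return x
```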
by evaluating the reliability of all candidates in each iteration, these algorithms can provide comparable performance to convex relaxation algorithms , and exhibit low computational complexity as matching pursuit algorithms .another kind of greedy pursuits , including iterative hard thresholding ( iht ) and its normalized variation , is proposed with the advantages of low computational complexity and theoretical performance guarantee . in each iteration ,the entries of the iterative solution except for the most reliable ones are set to zero .together with cosamp and sp , these algorithms can be considered as greedy pursuits with replacement involved in each iteration .to apply the theory of cs in practice , the effect of noise and perturbation must be taken into consideration .the common analysis is the additive noise to the measurements , i.e. , where is termed measurement noise .most existing algorithms have been proved to be stable in this scenario , including theoretical analysis of bp , romp , cosamp , sp , and iht .it is shown that the error bounds of the recovered solutions are proportional to the norm of .a certain distribution of the measurement noise can be introduced to achieve better results , such as gaussian or others .until recently , only a few researches involve the perturbation to the sensing matrix , which is also termed system perturbation .existing works include analysis of bp , cosamp , sp , norm minimization with ] in the -th iteration of cosamp and iht algorithms satisfies }\big\|_2\leq c\big\|{\bf s}-{\bf s}^{[l-1]}\big\|_2 + c_1\left\|{\bf e}\right\|_2,\end{aligned}\ ] ] and the estimated support set in the -th iteration of sp algorithm satisfies where denotes the support of . furthermore , if the matrix satisfies , then , and it can be derived that }\big\|_2\le ac^l\left\|{\bf s}\right\|_2 + d\left\|{\bf e}\right\|_2\end{aligned}\ ] ] holds for greedy pursuits with replacement .the specific values of the constants , , , , , and are illustrated in table [ tableconstant1 ] .[ tableconstant1 ] .the specification of the constants [ cols="^,^,^,^",options="header " , ] based on theorem [ gpra ] , two remarks are derived as follows .* remark 4 * in the noiseless scenario , after finite iterations , the recovered solutions of cosamp and sp are guaranteed to be identical to the sparse signal .the result can be verified through the following statement .suppose is the smallest magnitude of the nonzero entries of . then after iterations , the recovered solution }$ ] obeys }\big\|_2 < s_{\min}.\end{aligned}\ ] ] if the support of is perfectly recovered , then for cosamp and sp , the solution is already identical to .otherwise , at least one nonzero entry is not detected , thus the recovery error is no less than , which contradicts ( [ remarkequ1 ] ) .notice that the solution of iht does not possess the above property , since exact support recovery for iht does not imply exact signal recovery .* remark 5 * according to ( [ gprerror2 ] ) , after iterations , the error bound of the recovered solution satisfies }\big\|_2\le ( d+1)\left\|{\bf e}\right\|_2,\end{aligned}\ ] ] which means that the error bound is proportional to the norm of the noise , and the recovery performance of greedy pursuits with replacement is stable in this scenario .for cosamp algorithm , the inequality ( [ gprerror1 ] ) can be obtained by following the steps of the proof of theorem 4.1 in , while preserving rics during the derivation . 
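Before the proofs, the hard-thresholding update mentioned at the start of this section (keep only the most reliable entries, set the rest to zero) can be sketched as follows. The fixed step size `mu` is an assumption here; the normalized IHT variant chooses it adaptively, and for stability a fixed step should satisfy roughly mu < 1/||A||_2^2.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def iht(A, y, k, mu=1.0, n_iter=200):
    """Iterative hard thresholding: gradient step, then prune to k entries."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x
```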
as for the second part of the theorem , it is easy to derive that if , then . by recursion, it can be proved from ( [ gprerror1 ] ) that }\big\|_2\le c^l\big\|{\bf s}-{\bf s}^{[0]}\big\|_2 + \frac{c_1}{1-c}\left\|{\bf e}\right\|_2,\end{aligned}\ ] ] which arrives ( [ gprerror2 ] ) . [ splemma1 ] ( lemma 3 in ) let be a -sparse vector , and is a noisy measurement vector where satisfies the rip with parameter . for an arbitrary set such that , define as then lemma [ splemma2 ] directly implies inequality ( [ sperror1 ] ) .it s easy to check that if , then , and applying lemma [ splemma1 ] to ( [ spapp1 ] ) and with the fact that , inequality ( [ gprerror2 ] ) can be derived .e. cands , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .information theory _ ,52 , no . 2 , pp . 489 - 509 , feb .2006 .s. kim , k. koh , m. lusig , s. boyd , and d. gorinevsky , `` an interior - point method for large - scale -regularized least squares , '' _ ieee journal of selected topics in signal process ._ , vol . 1 , no . 4 , pp . 606 - 617 , dec . 2007 .h. mohimani , m. babaie - zadeh , and c. jutten , `` a fast approach for overcomplete sparse decomposition based on smoothed norm , '' _ ieee trans .signal process ._ , vol .57 , no . 1 ,pp . 289 - 301 , jan .j. jin , y. gu , and s. mei , `` a stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework , '' _ ieee journal of selected topics in signal process ._ , vol . 4 , no . 2 , pp .409 - 420 , apr . 2010 .x. wang , y. gu , and l. chen , `` proof of convergence and performance analysis for sparse recovery via zero - point attracting projection , '' _ ieee trans .signal processing _ , vol .60 , no . 8 ,4081 - 4093 , aug .2012 .d. needell and r. vershynin , `` uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit , '' _ foundations of computational mathematics _ , vol . 9 , no . 3 , pp .317 - 334 , june 2009 .d. donoho , y. tsaig , i. drori , and j. starck , `` sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit , '' _ ieee trans .information theory _ ,58 , no . 2 , pp .1094 - 1121 , feb .2012 .t. blumensath and m. davies , `` normalized iterative hard thresholding : guaranteed stability and performance , '' _ ieee journal of selected topics in signal process . _ , vol . 4 , no . 2 , pp . 298 - 309 , apr .d. needell and r. vershynin , `` signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit , '' _ ieee journal of selected topics in signal process ._ , vol . 4 , no . 2 , pp .310 - 316 , apr . 2010 .z. ben - haim , y. eldar , and m. elad , `` coherence - based performance guarantees for estimating a sparse vector under random noise , '' _ ieee trans .signal process ._ , vol . 58 , no . 10 , pp . 5030 - 5043 , oct. 2010 .w. dai and o. milenkovic , `` subspace pursuit for compressive sensing : closing the gap between performance and complexity , '' available online : http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.154.8384 . | applying the theory of compressive sensing in practice always takes different kinds of perturbations into consideration . in this paper , the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations . 
specifically , greedy pursuits with replacement include three algorithms , compressive sampling matching pursuit ( cosamp ) , subspace pursuit ( sp ) , and iterative hard thresholding ( iht ) , where the support estimation is evaluated and updated in each iteration . based on restricted isometry property , a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals . the results reveal that the recovery performance is stable against both perturbations . in addition , these bounds are compared with that of oracle recovery least squares solution with the locations of some largest entries in magnitude known a priori . the comparison shows that the error bounds of these algorithms only differ in coefficients from the lower bound of oracle recovery for some certain signal and perturbations , as reveals that oracle - order recovery performance of greedy pursuits with replacement is guaranteed . numerical simulations are performed to verify the conclusions . * keywords : * compressive sensing , sparse recovery , general perturbations , performance analysis , restricted isometry property , greedy pursuits , compressive sampling matching pursuit , subspace pursuit , iterative hard thresholding , oracle recovery . |
there is an ever increasing wealth of observational evidence indicating the non - sphericity of almost every type of astronomical object ( e.g. , extended circumstellar environments , novae shells , planetary nebulae , galaxies , and agns ) . to accurately interpret this data, detailed two- and three - dimensional radiation transfer techniques are required . with the availability of fast workstations , many researchers are turning to monte carlo techniques to produce model images and spectra for the asymmetric objects they are investigating . in monte carlo radiation transfer simulations , packets of energy or `` photons '' are followed as they are scattered and absorbed within a prescribed medium .one of the features of this technique is that the locations of the packets are known when they are absorbed , so we can determine where their energy is deposited .this energy heats the medium , and to conserve radiative equilibrium , the absorbed energy must be reradiated at other wavelengths , depending on the opacity sources present . tracking these photon packets , while enforcing radiative equilibrium , permits the calculation of both the temperature structure and emergent spectral energy distribution ( sed ) of the envelope .the ability of monte carlo techniques to easily follow the transfer of radiation through complex geometries makes them very attractive methods for determining the temperature structure within non - spherical environments a task which is very difficult with traditional ray tracing techniques .previous work on this problem for spherical geometries includes the approximate solutions by scoville & kwan ( 1976 ) , who ignored scattering , leung ( 1976 ) , and diffusion approximations by yorke ( 1980 ) .the spherically symmetric problem has been solved exactly by rowan - robinson ( 1980 ) , wolfire & cassinelli ( 1986 ) , and ivezi & elitzur ( 1997 ) , who used a scaling technique .extensions of the exact solution to two dimensions have been performed by efstathiou & rowan - robinson ( 1990 , 1991 ) , while approximate two - dimensional models have been presented by sonnhalter , preibisch , & yorke ( 1995 ) and menshchikov & henning ( 1997 ) .radiative equilibrium calculations using the monte carlo technique have been presented by lefevre , bergeat , & daniel ( 1982 ) ; lefevre , daniel , & bergeat ( 1983 ) ; wolf , henning , & secklum ( 1999 ) ; and lucy ( 1999 ) .most of these authors ( lucy being exceptional ) use a technique in which stellar and envelope photon packets are emitted separately .the number of envelope packets to be emitted is determined by the envelope temperature , while the envelope temperature is determined by the number of absorbed packets .consequently these techniques require iteration , usually using the absorbed stellar photons to provide an initial guess for the envelope temperature .the iteration proceeds until the envelope temperatures converge .note that the stellar luminosity is not automatically conserved during the simulation ; only after the temperatures converge is the luminosity approximately conserved .in contrast , lucy adopts a strategy in which absorbed photon packets are immediately re - emitted , using a frequency distribution set by the current envelope temperature .although the frequency distribution of the reprocessed photons is incorrect ( until the temperatures have converged ) , his method automatically enforces local radiative equilibrium and implicitly conserves the stellar luminosity .the insight of lucy s method is that 
conservation of the stellar luminosity is more important than the spectral energy distribution when calculating the radiative equilibrium temperatures .nonetheless , this method requires iteration .the primary problem faced by lucy s method is the incorrect frequency distribution of the re - emitted photons . in this paperwe develop an adaptive monte carlo technique that corrects the frequency distribution of the re - emitted photons .essentially , our method relaxes to the correct frequency and temperature distribution .furthermore it requires no iteration as long as the opacity is independent of temperature .such is the case for astrophysical dust . in section 2, we describe the temperature correction algorithm .we compare the results of our code with a spherically symmetric code in section 3 , and in section 4 we present results for two dimensional axisymmetric density structures .we wish to develop a method to calculate the temperature distribution throughout an extended dusty environment for use with monte carlo simulations of the radiation transfer .the radiation transfer technique we employ has been described in detail in other papers : code & whitney ( 1995 ) ; whitney & hartmann ( 1992 , 1993 ) ; wood et al .( 1996 ) , so we only summarize it here .the basic idea is to divide the luminosity of the radiation source into equal - energy , monochromatic `` photon packets '' that are emitted stochastically by the source .these packets are followed to random interaction locations , determined by the optical depth , where they are either scattered or absorbed with a probability given by the albedo .if the packet is scattered , a random scattering angle is obtained from the scattering phase function ( differential cross section ) .if instead the packet is absorbed , its energy is added to the envelope , raising the local temperature . to conserve energy and enforce radiative equilibrium ,the packet is re - emitted immediately at a new frequency determined by the envelope temperature .these re - emitted photons comprise the diffuse radiation field . after either scattering or absorption plus reemission, the photon packet continues to a new interaction location .this process is repeated until all the packets escape the dusty environment , whereupon they are placed into frequency and direction - of - observation bins that provide the emergent spectral energy distribution . 
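The packet life cycle just described can be summarized by a toy random walk in a uniform sphere: the optical depth to the next interaction is drawn as -ln(xi), and the albedo decides between scattering and absorption followed by immediate re-emission. This is only a schematic illustration (isotropic scattering, a homogeneous medium, no grid and no frequency change on re-emission), not the full code described in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def isotropic_direction():
    """Random unit vector, uniform on the sphere."""
    mu = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

def propagate_packet(kappa_rho, albedo, r_max):
    """Follow one packet through a uniform sphere of radius r_max.

    kappa_rho : total (absorption + scattering) opacity times density [1/length]
    albedo    : probability that an interaction is a scattering
    Returns the positions where the packet was absorbed (i.e. where energy
    would be deposited and the cell temperature updated) before escape.
    """
    pos, direction = np.zeros(3), isotropic_direction()
    absorptions = []
    while True:
        tau = -np.log(rng.random())                # optical depth to next event
        pos = pos + direction * tau / kappa_rho
        if np.linalg.norm(pos) > r_max:
            return absorptions                     # packet escapes the envelope
        if rng.random() < albedo:
            direction = isotropic_direction()      # scattering
        else:
            absorptions.append(pos.copy())         # absorption: deposit energy,
            direction = isotropic_direction()      # then immediate re-emission
```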
since all the injected packets eventually escape ( either by scattering or absorption followed by reemission ) , this method implicitly conserves total energy .furthermore it automatically includes the diffuse radiation field when calculating both the temperature structure and the emergent spectral energy distribution .we now describe in detail how we calculate the temperature structure and seds of dusty environments illuminated by a radiation source .this radiation can come from any astrophysical source , either internal or external , point - like or extended .initially we divide the source luminosity , , into photon packets emitted over a time interval .each photon packet has the same energy , , so note that the number of physical photons in each packet is frequency - dependent .when the monochromatic photon packet is injected into the envelope , it will be assigned a random frequency chosen from the spectral energy distribution of the source .this frequency determines the dust absorptive opacity , , and scattering opacity , ( both per mass ) , as well as the scattering parameters for the ensuing random walk of the packet through the envelope .the envelope is divided into spatial grid cells with volume , where is the cell index .as we inject source photon packets , we maintain a running total , , of how many packets are absorbed in each grid cell .whenever a packet is absorbed in a grid cell , we deposit its energy in the cell and recalculate the cell s temperature . the total energy absorbed in the cell we assume that the dust particles are in local thermodynamic equilibrium ( lte ) , and for simplicity we adopt a single temperature for the dust grains . note that although we use dust for the continuous opacity source , we could replace the dust by any continuous lte opacity source that is independent of temperature . in radiative equilibrium , the absorbed energy , ,must be reradiated .the thermal emissivity of the dust , where is the planck function at temperature , so the emitted energy is where is the planck mean opacity , and is the frequency integrated planck function .if we adopt a temperature that is constant throughout the grid cell , , then where is the mass of the cell . equating the absorbed ( [ eq : eabs ] ) and emitted ( [ eq : eem ] ) energies , we find that after absorbing packets , the dust temperature is given by note that the planck mean opacity , , is a function of temperature , so equation ( [ eq : retemp ] ) is actually an implicit equation for the temperature , which must be solved _ every time a packet is absorbed_. since this equation is solved so many times , an efficient algorithm is desirable . fortunately is a slowly varying function of temperature , which implies a simple iterative algorithm may be used to solve equation ( [ eq : retemp ] ) .to do so efficiently , we pre - tabulate the planck mean opacities for a large range of temperatures and evaluate by interpolation , using the temperature from the previous guess .after a few steps , we have the solution for .note that because the dust opacity is temperature - independent , the product , which is proportional to , increases monotonically with temperature .consequently always increases when the cell absorbs an additional packet . now that we know the temperature after absorbing an additional packet within the cell , we must reradiate this energy so that the heating always balances the cooling . 
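A sketch of the implicit temperature solve described above: the energy balance is written schematically as n_abs * E_packet = 4 * sigma * T^4 * kappa_P(T) * m * dt, and kappa_P(T) is interpolated from a pre-tabulated grid so that a few fixed-point iterations suffice. Constants are cgs and the prefactor is schematic, since the exact expression is garbled in the extracted text.

```python
import numpy as np

def cell_temperature(n_abs, e_packet, dt, mass, kp_T, kp_val,
                     sigma=5.6704e-5, n_iter=12):
    """Solve n_abs*e_packet = 4*sigma*T**4*kappa_P(T)*mass*dt for T by
    fixed-point iteration, interpolating kappa_P on the tabulated grid
    (kp_T, kp_val).  Converges quickly because kappa_P varies slowly with T."""
    rhs = n_abs * e_packet / (4.0 * sigma * mass * dt)
    T = max(kp_T[0], 1.0)                 # any positive starting guess
    for _ in range(n_iter):
        T = (rhs / np.interp(T, kp_T, kp_val)) ** 0.25
    return T
```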
prior to absorbing this packet ,the cell previously has emitted packets that carried away an energy corresponding to the cell s previous emissivity , where is the temperature increase arising from the packet absorption .note that these previous packets were emitted with an incorrect frequency distribution corresponding to the previous temperature , .the total energy that should be radiated now corresponds to at the new temperature , .thus the additional energy to be carried away is given by \ ; , \label{eq : deltakappab}\ ] ] which is the shaded area shown in figure [ fig : tempcorrect ] . as long as the packet energy is not too large ( this may be assured by choosing a large enough number of photon packets , , to use for the simulation ) , the temperature change is small , so the temperature correction spectrum note that is everywhere positive because , and the planck function is a monotonically increasing function of temperature . therefore to correct the previously emitted spectrum, we immediately re - emit the packet ( to conserve energy ) , and we choose its frequency using the shape of .this procedure statistically reproduces for the distribution of the re - emitted packets .normalizing this distribution , we find the temperature correction probability distribution where is the probability of re - emitting the packet between frequencies and , and the normalization constant . now that the packet s frequency has changed , we change the opacity and scattering parameters accordingly and continue with scattering , absorption , temperature correction , and re - emission until all the photon packets finally escape from the system . in principlewe could also account for back - warming of the source .whenever a packet hits the source , the source must reradiate this new energy .this will change the temperature of the source , and the new source photons can be emitted using a difference spectrum similar to equation ( [ eq : tcorrect ] ) .when we begin our calculation , no packets have been absorbed , so the initial temperature is zero throughout the envelope .this means that the initial temperature change is not small as is required by equation ( [ eq : deltajnu ] ) .one could use equation ( [ eq : deltakappab ] ) to re - emit the first packet that is absorbed ; however , this is not necessary .the number of packets producing this initial temperature change is small ; it is of order the number of spatial grid cells .furthermore , these packets generally are re - emitted at such long wavelengths that they are not observable .consequently the error arising from using equation ( [ eq : deltajnu ] ) to re - emit every packet is too small to be of importance .as the simulation runs , the envelope starts at zero temperature .it then heats up , and the radiation field `` relaxes '' to its final spectral shape .the temperature correction procedure is simply a way of re - ordering the calculation ( which frequencies are being used at a given moment ) so that in the end , all the frequencies have been properly sampled .consequently , after all the stellar photon packets have been transported , the final envelope temperature is the correct radiative equilibrium temperature , and the emergent spectral energy distribution has the correct frequency shape .furthermore , the photon re - emission automatically accounts for the diffuse envelope emission .note that energy is necessarily conserved , and there is no time - consuming iteration in our scheme ; calculating the radiative equilibrium temperature 
requires no more computational time than an equivalent pure scattering calculation in which the temperature structure is held fixed .similarly , there is no issue of convergence in our method . unlike -iteration, the photon packets carry energy over large distances throughout the envelope because they are never destroyed , and of course there is no iteration at all . after running packets, we have the final answer .the simulation does not continue running until some convergence criterion is met , and the only source of error in the calculation is the statistical sampling error inherent in monte carlo simulations .to validate our method for determining the radiative equilibrium temperatures and emergent fluxes , we compared our results against a set of benchmark calculations recently developed by ivezi ' c et al .( 1997 ) for testing spherically symmetric dust radiative equilibrium codes .the parameters listed by ivezi ' c et al . enable us to exactly reproduce the same set of physical conditions ( i.e , input spectrum , dust destruction radius , optical depth , opacity frequency distribution , and radial density structure ) . for all benchmark tests , ivezi ' c et al .used a point source star , radiating with a black body spectrum whose temperature .the dust density distribution was a power law with radius , where the inner radius of the envelope is the dust destruction radius , , the outer radius is , and the total radial optical depth is , specified at .the dust absorptive opacity , , and scattering opacity , , were taken to be since the total optical depth at is independently specified , these opacities have been normalized to that at for convenience .the wavelength - dependent scattering albedo is given by , and the scattering was assumed to be isotropic ( note that in dust simulations , we would normally use a non - isotropic phase function for the scattering ) . in principle , the inner radius of the dust shell , , is determined by the dust condensation temperature , chosen by ivezi ' c et al . to be .however , we are only testing the temperature correction procedure , so we have not implemented a scheme to solve self consistently for the dust destruction radius .the values for were calculated instead using ivezi ' c et al.s eq .( 4 ) and data from their table 1 . finally, the outer radius of the dust shell was set to be .the parameters describing the various test simulations are summarized in table [ table : benchmarkparms ] .lcccc 0 & 1 & 8.44 0 & 10 & 8.46 0 & 100 & 8.60 2 & 1 & 9.11 2 & 10 & 11.37 2 & 100 & 17.67 [ table : benchmarkparms ] lcccc 1000 & 10 & 200 & 0.2 1000 & 10 & 20 & 0.02 10 & 10 & 200 & 20 10 & 10 & 20 & 2 [ table : ellipsoidparms ] to begin the simulation , we release stellar photon packets with a black body frequency distribution , given by the normalized planck function where . a particularly simple method for sampling the black body distribution is given by carter & cashwell ( 1975 ) .since this reference is somewhat obscure and difficult to obtain , we summarize the method here .first , generate a uniform random number , , in the range 0 to 1 , and determine the minimum value of , , that satisfies the condition next obtain four additional uniform random numbers ( in the range 0 to 1 ) , , , , and .finally , the packet frequency is given by after emitting these packets from the star , we track them through the envelope . to determine the envelope temperature , we must count how many packets are absorbed in each grid cell . 
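Two ingredients described above lend themselves to compact sketches: the Carter & Cashwell draw of a blackbody frequency, and the re-emission draw from the temperature-correction spectrum proportional to kappa_nu * dB_nu/dT introduced in the previous section. The formulas elided in the text are reconstructed here from the standard form of the Carter & Cashwell method (find the smallest l with sum_{i<=l} i^-4 >= xi0 * pi^4/90, then set h*nu/(k*T) = -ln(xi1*xi2*xi3*xi4)/l); the tabulated opacity grid and the inverse-CDF draw in the second function are implementation choices, not the authors' code. Everything is in cgs units.

```python
import numpy as np

H, K_B, C = 6.62607015e-27, 1.380649e-16, 2.99792458e10   # cgs constants

def sample_blackbody_frequency(T, rng):
    """Carter & Cashwell (1975) sampling of a Planck-distributed frequency."""
    target = rng.random() * np.pi**4 / 90.0
    l, partial = 0, 0.0
    while partial < target:
        l += 1
        partial += 1.0 / l**4
    x = -np.log(rng.random() * rng.random() * rng.random() * rng.random()) / l
    return x * K_B * T / H                                  # nu = x k_B T / h

def dB_dT(nu, T):
    """Temperature derivative of the Planck function B_nu(T); written with
    exp(-x) for numerical stability at large x = h nu / k T."""
    x = H * nu / (K_B * T)
    return (2.0 * H**2 * nu**4 / (C**2 * K_B * T**2)
            * np.exp(-x) / np.expm1(-x)**2)

def sample_correction_frequency(nu_grid, kappa_abs, T_new, rng):
    """Draw a re-emission frequency from the temperature-correction spectrum
    ~ kappa_nu * dB_nu/dT at the updated cell temperature (small-DeltaT limit),
    by inverse-CDF sampling on the tabulated frequency grid."""
    pdf = kappa_abs * dB_dT(nu_grid, T_new)
    cdf = np.cumsum(pdf * np.gradient(nu_grid))
    cdf /= cdf[-1]
    return np.interp(rng.random(), cdf, nu_grid)
```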
since the envelope is spherically symmetric , we employ a set of spherical shells for our grid . to obtain the best poisson error statistics , we should ideally construct the grid positions so that equal numbers of packets are absorbed in each cell . since the probability of absorbing a photon packet is proportional to the optical depth ,we choose equal radial optical depth grid locations , where is the total number of cells we used , , and . integrating the density , eq .( [ eq : rho ] ) , over the cell volume to obtain the mass , we find from eq .( [ eq : retemp ] ) that the temperature in each grid cell is given by } \over { 4 n_\gamma \tau_{1\mu{\rm m } } [ ( i^2-i+1/3)\delta r^2+(2i-1)\delta r+1 ] } } & ( ),\cr { { n_i(n_f - i)(n_f - i+1)(r_\star / r_{\rm dust})^2 ( 1-r_{\rm dust}/r_{\rm max } ) } \over { 4 n_\gamma \tau_{1\mu{\rm m } } n_f [ \kappa_p(t_i ) / ( \kappa_{1\mu{\rm m}}+\sigma_{1\mu{\rm m } } ) ] } } & ( ),\cr}\ ] ] where we have used for the stellar luminosity .we then proceed with the radiation transfer , temperature calculation , and reemission as described in section 2 until all packets exit the envelope .when the packets escape , they are placed into uniform frequency bins , where . the width of each bin . since the envelope is spherically symmetric , the observed flux , where is the number of packets in the frequency bin , and is the observer s distance from the starnormalizing to the total flux , , the sed is given by the factor arises from using the frequency at the center of the bin . in figures [ fig : benchmarksed ] and [ fig : benchmarktemp ] we show the results of our simulation compared with the output of one of the codes tested by ivezi et al .this code , called dusty , is publicly available and is described in ivezi & elitzur ( 1997 ) .we see that our monte carlo calculations reproduce both the correct temperature structure and sed .note the large errors at the longest and shortest wavelengths in the monte carlo calculations . at these wavelengths , the number of emerging packets is small , resulting in large errors ( the relative error in each flux bin equals , owing to the poisson sampling statistics inherent in monte carlo simulations ) .the excellent agreement of the comparisons shown in figures 2a and 2b ( the differences are smaller than the numerical error of the dusty calculations ) demonstrates the validity of our temperature correction procedure described in section 2 .now that we have verified our our basic radiative equilibrium algorithm , we can proceed to investigate the temperature structure and seds of other geometries . owing to the inherently three - dimension nature of monte carlo simulations ( even our 1-d spherically symmetric code internally tracks the photon packets in three dimensions ) ,our code is readily modified for arbitrary geometries .we now show the results of an application to axisymmetric circumstellar environments .for the purpose of this illustrative simulation , we adopt a stellar blackbody with , envelope inner radius , and a simple ellipsoidal parameterization for the circumstellar density .the isodensity contours are elliptical with being the ratio of the semi - major to semi - minor axis .the density is given by where is given by eq .( [ eq : rho0 ] ; case ) , is the polar angle , and the `` flattening factor '' note that the equatorial to polar density ratio at a given radius is . 
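As an aside, the equal-optical-depth radial grid described above can be constructed analytically for a power-law density; the sketch below assumes rho ~ r^(-p) with p != 1 and is meant only to illustrate the choice of cell boundaries, with arbitrary units.

```python
import numpy as np

def equal_tau_radii(r_in, r_out, n_cells, p=2.0):
    """Shell boundaries r_0..r_{n_cells} such that each shell carries the same radial
    optical depth for a power-law density rho ~ r^(-p), p != 1.  The cumulative optical
    depth from r_in is proportional to (r^(1-p) - r_in^(1-p))/(1-p)."""
    frac = np.arange(n_cells + 1) / n_cells
    total = (r_out**(1 - p) - r_in**(1 - p)) / (1 - p)
    return ((1 - p) * frac * total + r_in**(1 - p)) ** (1.0 / (1 - p))

# e.g. boundaries between the dust destruction radius and the outer edge (arbitrary units)
r = equal_tau_radii(r_in=1.0, r_out=1000.0, n_cells=400, p=2.0)
```

The inner cells come out much thinner than the outer ones, which is what places roughly equal numbers of absorptions, and hence comparable Poisson errors, in each cell.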
as before ,we divide the circumstellar environment into cells with radial and latitudinal grid points .note that the envelope is symmetric about the equator , so we combine the cells below the equator with their counterparts above the equator .this is automatically accomplished by using for the latitudinal grid point coordinate .spacing the grid so that the radial and latitudinal optical depths are the same for each cell , we find with these cell coordinates , equation ( [ eq : retemp ] ) for the temperature in cell becomes [ \tan^{-1}(f\mu_{j-1})-\tan^{-1}(f\mu_{j } ) ] } } \ ; , \ ] ] where is the number of packets absorbed in the cell , and we have chosen ( -band ) for our equatorial radial optical depth parameter , . for the dust opacity , we adopt a standard mrn interstellar grain mixture ( mathis , rumpl , & nordsieck 1977 ) , using optical constants from draine & lee ( 1984 ) .figure [ fig : dustopac ] shows the opacity and albedo in graphical form . the data for this figure is available in tabular form from the dusty web site , http://www.pa.uky.edu//dusty .note the prominent silicate absorption features at m and m .for the current demonstration , we assume that the scattering is isotropic , but we can easily accommodate any phase function , analytic or tabulated when calculating the emergent sed . unlike the spherically symmetric benchmarks , the sed now depends on viewing angle ,so we must bin the escaping packets both in direction and frequency .since the envelope is axisymmetric , the observed flux only depends on the inclination angle , , of the envelope symmetry axis ( i.e. , the stellar rotation axis ) . to obtain approximately equal numbers of escaping packets, we choose direction bins , with equal solid angles , .the escaping directions ( inclination angles ) , , for these bins are given by where .in addition to the direction bins , we use the frequency bins given by eq .( [ eq : freqbins ] ) .now that we have specified both the direction and frequency bins , the observed flux is , where is the number of escaping packets in the bin .normalizing to the bolometric flux f gives the emergent sed , we now choose to investigate the seds produced by two density structures with different degrees of flattening .the first has a density ratio to represent a disk - like structure .the second has a density ratio , which is mildly oblate , representative of an infalling protostellar envelope . for each density structure , we perform optically thick ( in the mid - ir ) and optically thin calculations .the optically thick calculations have an equatorial -band optical depth , while the optically thin calculations have . table [ table : ellipsoidparms ] summarizes these density structures . for comparison , we also have performed calculations for spherically symmetric models containing the same total mass as our disk and envelope densities . keeping the same inner and outer radii ,the optical depth for the equivalent spherical model is figure [ fig : disksed ] shows the incident stellar spectrum and the emergent sed as a function of viewing angle for the disk - like model , as well as the result of the equivalent spherically symmetric calculation . for both disk simulations ,when viewing the system pole - on , we are looking through optically thin circumstellar dust ( see table [ table : ellipsoidparms ] ) and can see the star at optical wavelengths . 
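Before discussing the resulting spectra, the angular and spectral binning of escaping packets used for the axisymmetric models can be sketched as follows. Uniform bins in |cos i| give equal solid angles for an axisymmetric, equatorially symmetric envelope; the flux normalization indicated in the comment is schematic.

```python
import numpy as np

def inclination_bins(n_mu):
    """Equal-solid-angle viewing-angle bins: with axial and equatorial symmetry the
    solid angle of a bin is proportional to d|mu|, so uniform bins in |cos i| suffice.
    Returns the bin edges in |cos i| and the corresponding inclination angles (deg)."""
    edges = np.linspace(0.0, 1.0, n_mu + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return edges, np.degrees(np.arccos(centers))

def bin_escaping_packets(mu_esc, nu_esc, mu_edges, nu_edges):
    """2-D histogram N_{jk} of escaping packets over (|cos i|, frequency); the
    viewing-angle-dependent flux then scales as N_{jk} / (DeltaOmega_j * Delta nu_k)."""
    counts, _, _ = np.histogram2d(np.abs(mu_esc), nu_esc, bins=[mu_edges, nu_edges])
    return counts
```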
in the ir , there is an excess arising from the circumstellar disk , which reprocesses the stellar radiation .note that , for pole - on viewing , we see the silicate features in emission , since the disk is optically thin in the vertical direction . as we go to higher viewing angles , the optical depth to the central star increases , and as a result, the star becomes more extincted in the optical region .note that at almost edge - on viewing , a `` shoulder '' appears around m for the optically thick case .this arises due to the dominant effects of scattering the stellar radiation at these wavelengths .these scattering shoulders are also present in the axisymmetric calculations presented by efstathiou & rowan - robinson ( 1991 ) , sonnhalter et al .( 1995 ) , menschikov & henning ( 1997 ) , and dalessio et al .( 1999 ) . at wavelengthslonger than about m , the albedo begins to drop rapidly ( see fig . [fig : dustopac ] ) , and the disk thermal emission begins to dominate , so the shoulder terminates . beyond m ,the envelope becomes optically thin , so the spectrum is independent of inclination and is dominated by the dust emission .the two dimensional temperature structure for the disk - like models is displayed in figure 4b .we see that at the inner edge of the envelope ( ) there is little variation of the temperature with latitude , while at large radii there is a clear latitudinal temperature gradient , with the dust in the denser equatorial regions being cooler than dust at high latitudes . in the polar region ,the material is optically thin to the stellar radiation , so it heats up to the optically thin radiative equilibrium temperature .this temperature has a power law behavior , for dust opacity ( ) .as can be seen in figure [ fig : disktemp ] , the polar temperature does indeed have a power law decrease with a slope of approximately .in contrast , the disk only displays this power law behavior at large radii . at the inner edge of the disk ,the disk sees the same mean ( stellar ) intensity as is present in the polar region ( , where ^{1/2}\}$ ] is the dilution factor ) .consequently , the inner edge of the disk heats up to the optically thin radiative equilibrium temperature the same as the polar temperature .however at larger radii , the opaque material in the inner regions of the disk shields the outer regions from direct heating by the stellar radiation .this shielding reduces the mean intensity ( ) . as a result ,the outer disk is only heated to a fraction of the optically thin radiative equilibrium temperature .thus the equatorial region is cooler than the polar region . eventually at large enough radii ,the disk becomes optically thin to the heating radiation . at that point, it sees a radially streaming radiation field from an effective photosphere that has a much lower temperature than the star . from that pointoutward , the disk temperature displays an optically thin power law decrease with a slope that parallels the polar temperature .the spherically symmetric calculation overestimates both the emergent flux and the disk temperature . 
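The optically thin radiative equilibrium temperature invoked above follows from balancing absorption of the diluted stellar field against thermal emission. The sketch below assumes a power-law Planck-mean opacity kappa_P(T) ~ T^beta and uses the dilution factor quoted in the text; the stellar temperature and beta are illustrative values.

```python
import numpy as np

def dilution_factor(r, R_star=1.0):
    """Geometric dilution W(r) = 0.5 * (1 - sqrt(1 - (R_star/r)^2))."""
    return 0.5 * (1.0 - np.sqrt(1.0 - (R_star / r) ** 2))

def thin_equilibrium_T(r, T_star, R_star=1.0, beta=1.0):
    """Optically thin radiative equilibrium for kappa_P(T) ~ T^beta:
    kappa_P(T) T^4 = kappa_P(T_star) W(r) T_star^4  =>  T = T_star * W(r)**(1/(4+beta)).
    Far from the star W ~ (R_star/2r)^2, so T ~ r^(-2/(4+beta))."""
    return T_star * dilution_factor(r, R_star) ** (1.0 / (4.0 + beta))

r = np.geomspace(1.01, 1000.0, 60)                          # radii in units of R_star
T = thin_equilibrium_T(r, T_star=2500.0)
slope = np.polyfit(np.log(r[30:]), np.log(T[30:]), 1)[0]    # ~ -0.4 for beta = 1
```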
in the optically thin limit ,the ir continuum is proportional to the mass of the circumstellar material , so one would expect that a spherically equivalent mass would reproduce the long wavelength spectrum .recall however that we can see the star at pole - on viewing angles .this implies that the disk does not reprocess the entire bolometric luminosity of the star .consequently , the ir excess is less than that in the spherical model .note that one can nonetheless reproduce the edge - on sed using a spherical model if we allow the density power law to depart from and change the size of the spherical envelope .this also was noted by sonnhalter et al .( 1997 ) , who found that they could fit the ir continuum of their disk models by changing the radial dependence of the circumstellar density and the outer radius for the same total envelope mass .the spectral energy distribution and temperature structure for the oblate envelope are shown in figures [ fig : envelopesed ] and [ fig : envelopetemp ] . the optically thin envelope displays significant extinction of the star at all viewing angles .close to edge - on , scattering shoulders appear around m that are similar to those seen in the disk model ( fig .[ fig : disksed ] ) . finally , because the model is optically thin in the mid - ir , the silicate feature is always in emission , and the sed is independent of viewing angle for wavelengths longer than a few microns . for the denser envelope ,the star is extremely faint at optical wavelengths . along withthe increased extinction in the optical , the scattering shoulders are less prominent , due to the thermal emission by the envelope . in the mid - ir ,the envelope is optically thick edge - on and optically thin pole - on .consequently , the silicate features go from absorption to emission as the viewing angle changes from edge - on to pole - on .the general shape of the seds , which now peak in the mid - ir for all inclinations , are reminiscent of spectra from embedded ( class i ) t tauri stars , which are commonly modeled using flattened axisymmetric dusty envelopes ( e.g. , adams , lada , & shu 1987 ; kenyon , calvet , & hartmann 1993 ; menshchikov & henning 1997 ; dalessio , calvet , & hartmann 1997 ) . of these t tauri simulations ,only efstathiou & rowan - robinson ( 1991 ) have performed an exact calculation for the sed and circumstellar temperature for the terebey , shu , & cassen ( 1984 ) collapse model . the temperature structure for our envelope models ( fig . [ fig : envelopetemp ] )is qualitatively similar to the temperature structure of the disk models ( fig .[ fig : disktemp ] ) .the temperature at the inner edge of the envelope is independent of latitude , while at larger radii the equatorial regions are cooler than the polar regions .the primary difference is that the latitudinal temperature gradient is not as extreme .finally , we note that the equivalent spherically symmetric model better reproduces the far ir spectrum of the envelope models than the disk models .this is because the envelope models reprocess the entire bolometric luminosity , while in disk models , some of the stellar luminosity escapes through the polar region .we have developed a temperature correction procedure for use in monte carlo radiation transfer codes .we have tested our method against other spherically symmetric benchmark codes and successfully matched their results . 
after verifying our method, we applied it to obtain sample temperature distributions and seds for 2-d axisymmetric disk - like models and mildly oblate envelopes .these simulations illustrate the important role envelope geometry can play when interpreting the seds of embedded sources .the primary limitation of our temperature correction procedure is that it only applies if the opacity is independent of temperature .this is not true for free - free opacity or hydrogen bound - free opacity , but it is true for dust opacity , which is the dominant opacity source in many astrophysical situations .the reason our method is limited to temperature - independent opacities is that two associated problems occur if the opacity varies when the cell s temperature changes .the first is that the cell will have absorbed either too many or too few of the previous packets passing through the cell .the second is the associated change in the interaction locations of the previous packets , which implies that the paths of the previous photon packets should have been different .these problems do not occur if the opacity is independent of temperature .lucy ( 1999 ) has proposed a slightly different method to calculate the equilibrium temperature . instead of sampling photon absorption, he directly samples the photon density ( equivalent to the mean intensity ) by summing the path length of all packets that pass through a cell .typically more packets pass through a cell than are absorbed in the cell , so this method potentially produces a more accurate measurement of the temperature for a given total number of packets , especially when the envelope is very optically thin .the disadvantage of his method is that it requires iteration to determine the envelope temperature . in principleone could partially combine both methods .first , run our simulation , adding pathlength counters to each cell . after running all packets , use the pathlength information to calculate a final temperature .this will provide a more accurate temperature that can be used to calculate the source function .after obtaining the source function for each cell , it is a simple matter to integrate the transfer equation to obtain the sed .another limitation of the monte carlo method is that it is not well suited to envelopes with very high optical depths ( 1000 ) , unless there is an escape channel for the photons .for example , geometrically thin disks can be extremely optically thick in the radial direction but optically thin in the polar direction , providing an escape channel for the photons .similarly , dense envelopes can be very optically thick to the illumination source , but optically thin to the reprocessed radiation , which is a another escape channel . in the event no such escape channels exist, one must turn to other methods .we are currently investigating how to couple monte carlo simulation in the optically thinner regions with other methods in the optically thick interior .the focus of the present paper has been on the development and implementation of the temperature correction procedure . for this reason, we have made several simplifying assumptions regarding the circumstellar opacity and geometry .for example , we have assumed a single temperature for all dust grains regardless of their size and composition .this differs from other investigations where different types of grains can have different temperatures at the same spatial location ( e.g. 
, the spherically symmetric radiation transfer code developed by wolfire & cassinelli 1986 ) .similarly , we have not implemented a procedure to solve for the location of the dust destruction radius , which will be different for grains of differing size and composition .these issues are currently under investigation . with the speed of todays computers , monte carlo radiation transfer simulations can be performed in a reasonably short time .the 2-d simulations presented in this paper employed packets , requiring about two hours of cpu time ; the spherically symmetric cases used packets , requiring about one minute .so for continuum transfer , monte carlo simulation is proving to be a very powerful technique for investigating arbitrary density structures and illuminations .we would like to thank barbara whitney and mrio magalhes for many discussions relating to this work .an anonymous referee also provided thoughtful comments which helped improved the clarity of our presentation .this work has been funded by nasa grants nag5 - 3248 , nag5 - 6039 , and nsf grant ast-9819928 . | we describe a general radiative equilibrium and temperature correction procedure for use in monte carlo radiation transfer codes with sources of temperature - independent opacity , such as astrophysical dust . the technique utilizes the fact that monte carlo simulations track individual photon packets , so we may easily determine where their energy is absorbed . when a packet is absorbed , it heats a particular cell within the envelope , raising its temperature . to enforce radiative equilibrium , the absorbed packet is immediately re - emitted . to correct the cell temperature , the frequency of the re - emitted packet is chosen so that it corrects the temperature of the spectrum previously emitted by the cell . the re - emitted packet then continues being scattered , absorbed , and re - emitted until it finally escapes from the envelope . as the simulation runs , the envelope heats up , and the emergent spectral energy distribution ( sed ) relaxes to its equilibrium value , _ without iteration_. this implies that the equilibrium temperature calculation requires no more computation time than the sed calculation of an equivalent pure scattering model with fixed temperature . in addition to avoiding iteration , our method conserves energy exactly , because all injected photon packets eventually escape . furthermore , individual packets transport energy across the entire system because they are never destroyed . this long - range communication , coupled with the lack of iteration , implies that our method does not suffer the convergence problems commonly associated with -iteration . to verify our temperature correction procedure , we compare our results to standard benchmark tests , and finally we present the results of simulations for two - dimensional axisymmetric density structures . |
despite the absence of an explicit minimization principle , variational methods have been used successfully in many problems of quantum scattering theory .such calculations typically exploit a stationary principle in order to obtain an accurate description of scattering processes .the kohn variational method has been applied extensively to problems in electron - atom and electron - molecule scattering , as well as to the scattering of positrons , , by atoms and molecules .it has been widely documented , however , that matrix equations derived from the kohn variational principle are inherently susceptible to spurious singularities .these singularities were discussed first by schwartz and have subsequently attracted considerable attention . in the region of these singularities, results of kohn calculations can be anomalous .although sharing characteristics similar to those exhibited by scattering resonances , schwartz singularities are nonphysical and arise only because the trial wavefunction , used in kohn calculations to represent scattering , is inexact . for projectiles of a given incident energy ,anomalous results are confined to particular formulations of the trial wavefunction and can , in principle , be mitigated by a small change in boundary conditions or some other parameter .it has also been shown that the use of a complex - valued trial wavefunction avoids anomalous behaviour except in exceptional circumstances .alternative versions of the kohn method have been developed in terms of a feshbach projection operator formalism and have been found to give anomaly - free results . in this articlewe will discuss our investigations of schwartz - type anomalies for generalized kohn calculations involving the elastic scattering of positrons by molecular hydrogen , .we will find that our choice of trial wavefunction contains a free parameter that can be varied in such a way as to produce singularities which are legitimate in the context of the scattering theory and which do not give rise to anomalous results . indeed, these singularities can be used to formulate an optimization scheme for choosing the free parameter so as to automatically avoid anomalous behaviour in calculations of the scattering phase shift .the novelty of determining the phase shift in this way is that an explicit solution of the linear system of kohn equations is not required .we will also develop an alternative optimization and show that the two schemes give results in close agreement .further , the results obtained will be seen to be in excellent agreement at all positron energies with those determined via the complex kohn method .we will give examples of anomalous behaviour which can not be avoided with either optimization , and show that the same anomalies appear in our application of the complex kohn method .we will discuss circumstances under which these anomalies might occur .we will show also that such results are nonphysical by considering small changes in the nonlinear parameters of the trial wavefunction .our investigations of singular behaviour have been carried out as part of a wider study on and annihilation using extremely flexible wavefunctions . 
our ability to recognize clearly and analyze the anomalous behaviour is as good for this system as it would be for a simpler model system , with the advantage that our calculations can be used to provide meaningful and physically relevant results .the kohn variational method is used to calculate approximations to exact scattering wavefunctions .determining an approximation , , allows a variational estimate , ps ., of the scattering phase shift to be calculated , the error in which is of second order in the error of the exact scattering wavefunction , .the standard approach in kohn calculations is to assume an overall form for depends linearly on a set of unknown parameters , optimal values for which are then determined by the application of a stationary principle . in our investigations of anomalous behaviour in kohn calculations for , we have studied the lowest partial wave of symmetry .this partial wave has been shown to be the only significant contributor to scattering processes for incident positron energies below ev .the first significant inelastic channel is positronium formation which has a threshold at ev .although we will here consider positron energies higher than these thresholds , it is not our intention to provide a comprehensive physical treatment of the scattering problem taking higher partial waves and inelastic processes into account .the purpose of the present study is to give a correct and , as far as possible , anomaly - free treatment of the lowest partial wave .it is important to examine the single channel case as accurately as possible as a preliminary for more sophisticated calculations . by not taking into account additional channels ,it is possible that anomalous behaviour could occur due to physical inaccuracies in the trial wavefunction at higher energies .however , we will demonstrate that all of the anomalies in our results ultimately can be attributed to purely numerical effects .we have initially used a trial wavefunction having the same general form as described in our earlier calculations , where = \left [ \begin{array}{cc } \cos(\tau ) & \sin(\tau)\\ -\sin(\tau ) & \cos(\tau)\\ \end{array}\right ] \left [ \begin{array}{c } { s}\\ { c}\\ \end{array}\right],\ ] ] for some phase parameter , , with ,\ ] ] and \lbrace 1-\exp\left[-\gamma\left ( \lambda_{3}-1\right)\right]\rbrace.\ ] ] as before , we have carried out calculations using the fixed - nuclei approximation , taking the internuclear separation to be at the equilibrium value , au .we have labelled the electrons as particles and , taking the positron to be particle .the position vector , , of each lepton is described by the prolate spheroidal coordinates , .these coordinates are defined implicitly in terms of the cartesian coordinates , , as ^{\frac{1}{2 } } \cos \left(\phi_{j}\right),\\ y_{j } & = & \frac{1}{2}r\left[\left ( \lambda_{j}^{2}-1\right ) \left ( 1-\mu_{j}^{2}\right)\right]^{\frac{1}{2 } } \sin \left(\phi_{j}\right),\\z_{j } & = & \frac{1}{2}r\lambda_{j } \mu_{j}.\end{aligned}\ ] ] the functions and represent , respectively , the incident and scattered positrons asymptotically far from the target .the shielding parameter , , ensures the regularity of at the origin and is taken to have the value .the constant , , is defined to be , being the magnitude of the positron momentum in atomic units . 
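For completeness, the prolate spheroidal coordinates used above can be mapped to Cartesian coordinates as follows; the internuclear separation R = 1.4 a.u. is the value quoted in the text, and the check at the end simply verifies that lambda and mu are the sum and difference of the distances to the two nuclei, scaled by R.

```python
import numpy as np

def prolate_to_cartesian(lam, mu, phi, R=1.4):
    """Prolate spheroidal (lambda >= 1, -1 <= mu <= 1, phi) to Cartesian coordinates,
    with the two nuclei on the z-axis at z = -R/2 and z = +R/2."""
    rho = 0.5 * R * np.sqrt((lam**2 - 1.0) * (1.0 - mu**2))
    return rho * np.cos(phi), rho * np.sin(phi), 0.5 * R * lam * mu

x, y, z = prolate_to_cartesian(1.5, 0.3, 0.7)
r1 = np.sqrt(x**2 + y**2 + (z + 0.7)**2)   # distance to the nucleus at z = -R/2
r2 = np.sqrt(x**2 + y**2 + (z - 0.7)**2)   # distance to the nucleus at z = +R/2
assert np.isclose((r1 + r2) / 1.4, 1.5) and np.isclose((r1 - r2) / 1.4, 0.3)
```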
is a normalization constant and can here be regarded as arbitrary .the unknowns , and , are constants to be determined .the inclusion of the parameter , , in a generalization of the kohn method due to kato .this parameter is of only minor physical significance , playing the role of an additive phase factor in the part of the wavefunction representing the incident and scattered positrons asymptotically far from the target .however , at each value of , the value of can be varied to avoid spurious singularities in the kohn calculations .away from the spurious singularities , for an accurate trial wavefunction we can expect the variation in the calculated values of ps . over to be small . in the original application of the kohn method , only wavefunctions corresponding to were considered .the function , , is an approximation to the ground state wavefunction of the unperturbed hydrogen molecule and is determined by the rayleigh - ritz variational method . in the calculations presented here , we have taken be the target wavefunction described in detail in another of our previous calculations , accounting for of the correlation energy of .the function , \\ \label{eq : chi}&\times&\lbrace 1-\exp\left[-\gamma\left ( \lambda_{3}-1\right)\right]\rbrace\exp\left[-\gamma\left ( \lambda_{3}-1\right)\right],\end{aligned}\ ] ] is the same as has been used in our earlier calculations and was introduced first by massey and ridley .the remaining short - range correlation functions , , allow for the description of direct electron - positron and electron - electron interactions . here , we have used the same set of correlation functions described in detail in equations ( 5 - 8 ) of .the general form of each function , , is \quad \left(1\leq i \leq m\right),\ ] ] where each is symmetric in the coordinates of the electrons .they are a mixture of separable correlation functions and hylleraas - type functions containing the electron - positron distance as a linear factor .as discussed previously , the hylleraas - type functions in particular allow for high accuracy of results away from anomalous singularities .unless otherwise noted , we have here chosen values of and , rather than the values of and used earlier .this choice of nonlinear parameters highlights the interesting aspects of schwartz - type anomalies more clearly . in our application of the kohn variational principle , the functional =\tan\left(\eta_{\mathrm{v}}-\tau+c\right)=a_{\mathrm{t}}-\frac{2}{\pi n^2 r^2 k}\langle\psi_{\mathrm{t}},\psi_{\mathrm{t}}\rangle\ ] ] is made stationary with respect to variations in and . here ,we have denoted by the integral , , where is the nonrelativistic hamiltonian for the scattering system and is the sum of the positron kinetic energy and the ground state energy expectation value of .the integral is evaluated over the configuration space of the positron and the two electrons .we will , henceforth , use this notation more generally to denote by integrals of the form .the stationary principle imposed upon ( [ eq : principle ] ) leads to the linear system of equations where ,\\ \label{eq : vecb}b&=&\left[\begin{array}{c } \langle\bar{c}\psi_{\mathrm{g}},\bar{s}\psi_{\mathrm{g}}\rangle \\\langle\chi_{0}\psi_{\mathrm{g}},\bar{s}\psi_{\mathrm{g}}\rangle \\ \vdots \\ \langle\chi_{m},\bar{s}\psi_{\mathrm{g}}\rangle \\ \end{array}\right],\\ x&=&\left[\begin{array}{c } a_{t}\\ p_{0}\\ \vdots\\ p_{m } \end{array}\right].\end{aligned}\ ] ] solving ( [ eq : kohneq ] ) determines the values of and , allowing and , hence , ps . 
to be calculated via ( [ eq : principle ] ) .however , as has been discussed extensively ( see , for example , ) , the particular form of the functions , , used in our calculations does not , in general , permit analytic evaluation of the integrals comprising the matrix elements of and .sophisticated methods to determine these integrals numerically have been developed .however , the numerical approaches can give only accurate approximations to the exact values of the integrals , so that small errors in determining the elements of and are unavoidable .singularities in our generalized kohn calculations arise from zeros of , the determinant of ( [ eq : mata ] ) . under these circumstances , the linear system ( [ eq : kohneq ] )has no unique solution .close to these singularities , it is well known that values of ps . obtained by solving ( [ eq : kohneq ] ) can be anomalous ; small errors in the elements of or can correspond to large errors in the solution , , particularly when is close to singularity in a sense that can be defined formally in terms of the condition number of .a more detailed discussion of the condition number will be given in section [ ss : persistent ] .it is appropriate at this point to define a convention that we will adopt in our discussion of singularities in the generalized kohn method .the type of spurious singularities mentioned by schwartz here correspond to zeros in the particular case when .we will , however , find it convenient to label as schwartz singularities those zeros of at any which give rise to anomalous behaviour in the calculations of when is near .this is an important clarification for the following reason : we claim that , because of our inclusion of in , there exist zeros of are not spurious and which do not correspond to anomalous behaviour in the values of ps . .we will refer to such singularities as anomaly - free singularities . to understand how anomaly - free singularities might arise, it is helpful to consider the component , , of the exact scattering wave function , , corresponding to the lowest partial wave . can be expanded as where is the exact ground state target wavefunction and the complete set of correlation functions , , describes exactly the leptonic interactions at short - range .as noted by takatsuka and fueno in their kohn calculations for single channel scattering , the exact phase shift , , determined by is independent of the choice of in ( [ eq : exact ] ) . as a result , there is precisely one value , , at each positron energy such that for some odd value of , where either or is chosen to keep ] . in the following section we will present results of generalized kohn calculations exhibiting anomalous behaviour due to schwartz singularities and , further , demonstrate empirically that the anomaly - free singularities do exist and that values of can be found . at each , choosing then defines an optimization of that will be seen to avoid anomalies in ps .due to schwartz singularities .in our generalized kohn calculations , we have obtained values of ] .we have again indicated the position of the singularity in this figure by a dashed line .the results shown in the figure suggest that converges smoothly to zero as from either side , supporting the assertion made in section [ ss : singularities ] regarding the correspondence of the zeros in and .we have found that behaviour of the type shown in figure [ fig : cot ] is a general feature of the calculation . at for values of either side of a singularity . 
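In practice, the phase-shift calculation at a given tau reduces to solving the linear system and monitoring how close A is to singularity; a minimal sketch is given below. The small matrix is only a placeholder: in the actual calculation A and b are built from the numerically evaluated integrals discussed above.

```python
import numpy as np

def kohn_solution(A, b):
    """Solve the Kohn linear system A x = b and report the condition number of A.
    x[0] is the quantity a_t entering the variational functional; a very large
    condition number signals that the chosen tau sits near a singularity of det A,
    where the computed phase shift may be anomalous."""
    cond = np.linalg.cond(A)
    x = np.linalg.solve(A, b)
    return x, cond

# placeholder system; scanning tau and re-solving is the usual way to detect anomalies
A = np.array([[2.0, 1.0], [1.0, 0.9]])
b = np.array([0.3, 0.1])
x, cond = kohn_solution(A, b)
```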
] using ( [ eq : tautrans ] ) and ( [ eq : mata ] ) , it is straightforward to show that where , and are constants with respect to variations in . for a given positron momentum , the constants , , and , can be determined by calculating from at particular values of . strictly speaking , in our calculationswe have evaluated , being the approximation to whose elements have been determined using numerical integration .we will assume that the values of , and are not unduly sensitive to small changes in the elements of and henceforth take and to be essentially equivalent . at each ,provided that , the values , , making singular can be found by solving the quadratic equation in , if only is varied , and unless , there will be no more than two zeros of in the range .figure [ fig : det2d ] shows a function of at .the scale on the vertical axis is unimportant , since the value of each can be made arbitrarily large or small by a choice of the normalization constant , .the result of interest in the figure is that it indicates two values of at which is singular . at for . ]the anomalous behaviour in figure [ fig : ps2d ] corresponds directly to the singularity observed at in figure [ fig : det2d ] . however , there are no anomalies in figure [ fig : ps2d ] corresponding to the singularity at in figure [ fig : det2d ] , suggesting that this singularity is of the anomaly - free type described in the previous section .we have examined this phenomenon at other values of the positron momentum .figure [ fig : smap ] indicates the roots of ( [ eq : qeq ] ) for different positron momenta equidistant in the range , corresponding to a positron energy range from mev to ev . for the majority of positron momenta considered here , , and are such that there are two values of at each , the exceptions being and , for which we have found no real - valued solutions of ( [ eq : qeq ] ) .it is apparent from figure [ fig : smap ] that the roots of ( [ eq : qeq ] ) lie in two families of curves .the first family spans the entire range , , for .the second family is confined to values of in the range ] suggests that singularities do genuinely exist for at these values of , but small errors in our calculations of , and due to inexact numerical integration have prevented us from finding them .having investigated this problem in more detail , in figure [ fig : imroot ] we show the calculated values of ] over in this region does not necessarily preclude the notion that the failure to find real - valued solutions is due to small numerical errors in our calculations .it is conceivable that inaccuracies in the calculated values of , and could also arise from systematic errors in the algorithm used to calculate the determinants .nevertheless , the results illustrated in figure [ fig : imroot ] are interesting ; their exact origin may be speculated upon and will remain a subject of our ongoing investigations . ] and different values of in the range $ ] . at . ] to illustrate persistent anomalous behaviour , it is helpful to define a function analogous to ( [ eq : delta1 ] ) , where , for each of the values of considered , is the median value of ps . evaluated across the range of values of .values of are shown in figure [ fig : psssurf ] , from which it is clear that persistent anomalies appear distributed about a curve in the plane . for values of and away from this curve , the calculations are free of anomalies .hence , a small change in the values of or can indeed be shown to successfully avoid persistent anomalous behaviour . at . 
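The singular values of tau can be located without scanning: assuming the trigonometric dependence det A(tau) = a cos 2tau + b sin 2tau + c implied by the discussion above (three constants, at most two zeros per period), three evaluations of the determinant fix a, b and c, and the zeros follow from a quadratic in tan tau. The following is only a sketch of that root-finding step, and the example determinant is made up.

```python
import numpy as np

def singular_tau(detA):
    """Assume det A(tau) = a*cos(2 tau) + b*sin(2 tau) + c.  Determine a, b, c from
    det A at tau = 0, pi/4, pi/2, then solve (c-a) t^2 + 2 b t + (c+a) = 0 with
    t = tan(tau).  Returns the real singular tau values (possibly none) and (a, b, c)."""
    D0, D4, D2 = detA(0.0), detA(np.pi / 4), detA(np.pi / 2)
    a, c = 0.5 * (D0 - D2), 0.5 * (D0 + D2)
    b = D4 - c
    roots = np.roots([c - a, 2.0 * b, c + a])
    real = roots[np.abs(np.imag(roots)) < 1e-12]
    return np.arctan(np.real(real)), (a, b, c)

taus, _ = singular_tau(lambda t: 2.0 * np.cos(2 * t) + 0.5 * np.sin(2 * t) - 1.0)
```

Complex roots of the quadratic correspond to momenta at which no real singular tau exists, which is how the two families of curves discussed above can be traced out numerically.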
] finally , we consider briefly that the shielding parameter , , in ( [ eq : cosopenchannel ] ) and ( [ eq : chi ] ) might also be varied in an effort to avoid anomalous behaviour .values of ps . at , and for are shown in figure [ fig : varygamma ] .it is apparent that small changes in the value of have relatively little effect on the persistent anomaly at .this is not unexpected , being consistent with the findings of lucchese , who investigated the effect of varying a parameter analogous to in his model potential calculations .he noted that singularities due to the choice of occurred when the values of and were not similar , most typically when . with this in mind , and from inspection of figures [ fig : psssurf ] and [ fig : varygamma ] , we can conclude that the anomaly observed in figure [ fig : methcomp ] is due primarily to the choices of and rather than the choice of .we have carried out a thorough examination of singularities and related anomalous behaviour in generalized kohn calculations for . we have argued that singularities do not always occur spuriously and that variational calculations of the scattering phase shift can be anomaly - free at these singularities .subsequently , we have developed an optimization scheme for choosing a free parameter of the trial wavefunction allowing anomaly - free values of the phase shift to be determined without the need to solve the linear system of equations derived from the kohn variational principle .this approach has been seen to be largely successful , giving phase shifts in close agreement with those determined by a conventional generalization of the kohn method , as well as those obtained with the complex kohn method .persistent anomalies in both sets of calculations have been identified and attributed to singularities that can not be avoided with any choice of the parameter , .further , we have found that our implementation of the complex kohn method is susceptible to the same behaviour .we have demonstrated , however , that persistent anomalies can be avoided by small changes in the nonlinear parameters of the short - range correlation functions .hence , by studying the behaviour of , and over , we can predict the appearance of persistent anomalous behaviour quantitatively and avoid it by an appropriate change in or .we wish to thank john humberston for valuable discussions .this work is supported by epsrc ( uk ) grant ep / c548019/1 .99 kohn w 1948 176372 nesbet r k 1980 _ variational methods in electron - atom scattering theory _( new york : plenum ) massey h s w and ridley r o 1956 _ proc .soc . 
_ a * 69 * 65967 schneider b i and rescigno t n 1988 a * 37 * 374954 van reeth p and humberston j w 1995 l5117 van reeth p , humberston j w , iwata k , greaves r g and surko c m 1996 l46571 van reeth p and humberston j w 1999 365167 armour e a g , baker d j and plummer m 1990 305774 cooper j n and armour e a g 2008 b * 266 * 4527 cooper j n , armour e a g and plummer m 2008 245201 schwartz c 1961 3650 schwartz c 1961 146871 nesbet r k 1968 13442 brownstein k r and mckinley w a 1968 125566 shimamura i 1971 85270 takatsuka k and fueno t 1979 a * 19 * 10117 mccurdy c w , rescigno t n and schneider b i 1987 a * 36 * 20616 lucchese r r 1989 a * 40 * 687985 feshbach h 1962 287313 chung k t and chen j c y 1971 11124 charlton m and humberston j w 2005 _ positron physics _ ( _ cambridge monographs on atomic , molecular and chemical physics _vol 11 ) ed a dalgarno ( cambridge : cambridge university press ) temkin a and vasavada k v 1967 10917 temkin a , vasavada k v , chang e s and silver a 1969 5766 flammer c 1957 _ spheroidal wave functions _( stanford : stanford university press ) kato t 1950 475 kato t 1951 _ prog .* 6 * 394407 bransden b h and joachain c j 2003 _ physics of atoms and molecules _( harlow : prentice hall ) hylleraas e a 1929 34766 armour e a g and baker d j 1987 610519 armour e a g 1988 _ phys .rep . _ * 169 * 198 armour e a g , todd a c , jonsell s , liu y , gregory m r and plummer m 2008 b * 266 * 3638 higham n j 2002 _ accuracy and stability of numerical algorithms _( philadelphia : society for industrial and applied mathematics ) armour e a g and humberston j w 1991 _ phys .rep . _ * 204 * 165251 http://www.nag.co.uk/numeric/fl/manual20/pdf/f03/f03aaf.pdf http://www.nag.co.uk/numeric/fl/manual20/pdf/f07/f07agf.pdf | we have carried out an analysis of singularities in kohn variational calculations for low energy scattering . provided that a sufficiently accurate trial wavefunction is used , we argue that our implementation of the kohn variational principle necessarily gives rise to singularities which are not spurious . we propose two approaches for optimizing a free parameter of the trial wavefunction in order to avoid anomalous behaviour in scattering phase shift calculations , the first of which is based on the existence of such singularities . the second approach is a more conventional optimization of the generalized kohn method . close agreement is observed between the results of the two optimization schemes ; further , they give results which are seen to be effectively equivalent to those obtained with the complex kohn method . the advantage of the first optimization scheme is that it does not require an explicit solution of the kohn equations to be found . we give examples of anomalies which can not be avoided using either optimization scheme but show that it is possible to avoid these anomalies by considering variations in the nonlinear parameters of the trial function . |
one of the challenges in rl is the trade - off between exploration and exploitation . the agent must choose between taking an action known to give positive reward or to explore other possibilities hoping to receive a greater reward in the future . in this context ,a common strategy in unknown environments is to assume that unseen states are more promising than those states already seen .one such approach is optimistic initialization of values ( * ? ? ?* section 2.7 ) .several rl algorithms rely on estimates of expected values of states or expected values of actions in a given state .optimistic initialization consists in initializing such estimates with higher values than are likely to be the true value . to do so , we depend on prior knowledge of the expected scale of rewards .this paper circumvents such limitations presenting a different way to optimistically initialize value functions without additional domain knowledge or assumptions . in the next sectionwe formalize the problem setting as well as the rl framework .we then present our optimistic initialization approach .also , we present some experimental analysis of our method using the arcade learning environment as the testbed .consider a markov decision process , at time step the agent is in a state and it needs to take an action .once the action is taken , the agent observes a new state and a reward from a transition probability function .the agent s goal is to obtain a policy that maximizes the expected discounted return ] is the discount factor and is the action - value function for policy . sometimes it is not feasible to compute , we then approximate such values with linear function approximation : , where is a learned set of weights and is the feature vector .function approximation adds further difficulties for optimistic initialization , as one only indirectly specifies the value of state - action pairs through the choice of .an approach to circumvent the requirement of knowing the reward scale is to normalize all rewards ( ) by the first non - zero reward seen ( ) , _ i.e. _ : .then we can optimistically initialize as , representing the expectation that a reward the size of the first reward will be achieved on the next timestep to . for sparse reward domains , which is common in the arcade learning environment ,the mild form is often sufficient . ] . with function approximation, this means initializing the weights to ensure , _e.g. _ : . however , this requires to be constant among all states and actions . 
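The constant-norm requirement is easy to see in a toy example: with binary features and every weight set to the same optimistic constant, the initial value w . phi(s,a) scales with the number of active features, so it is uniformly optimistic only if that number is the same everywhere. The numbers below are purely illustrative.

```python
import numpy as np

# with binary features, initializing all weights to a constant gives
# Q(s,a) = w . phi(s,a) = constant * (number of active features),
# which is state-independent only if ||phi|| is the same for every (s,a)
w = np.full(6, 0.5)                        # intended optimistic contribution per feature
phi_sparse = np.array([1, 0, 0, 0, 0, 0])
phi_dense = np.array([1, 1, 1, 0, 1, 0])
print(np.dot(w, phi_sparse), np.dot(w, phi_dense))   # 0.5 vs 2.0: not uniformly optimistic
```

One remedy, discussed next, is to force the norm to be constant by augmenting the feature vector; the alternative pursued here is to shift the rewards instead.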
if the feature vector is binary - valued then one approach for guaranteeing has a constant norm is to stack and , where is applied to each coordinate .while this achieves the goal , it has the cost of doubling the number of features .besides , it removes sparsity in the feature vector , which can often be exploited for more efficient algorithms .our approach is to shift the value function so that a zero function is in fact optimistic .we normalize by the first reward as described above .in addition , we shift the rewards downward by , so .thus , we have : \\ & = & \underbrace{\mathbb{e}_\pi\bigg[\sum_{k = 0}^\infty \gamma^k \frac{r_{t+k+1}}{|r_{1\mbox{\tiny{st}}}| } \bigg]}_{\frac{q_\pi(s_t , a_t)}{|r_{1\mbox{\tiny{st}}}| } } + \underbrace{\sum_{k = 0}^\infty \gamma^k ( \gamma - 1)}_{-1}\end{aligned}\ ] ] notice that since , initializing is the same as initializing .this shift alleviates us from knowing , since we do not have the requirement anymore .also , even though is defined in terms of , we only need to know once a non - zero reward is observed . in episodic tasks this shift will encourage agents to terminate episodes as fast as possible to avoid negative rewards . to avoid thiswe provide a termination reward , where is the number of steps in the episode and is the maximum number of steps .this is equivalent to receiving a reward of for additional steps , and forces the agent to look for something better .we evaluated our approach in two different domains , with different reward scales and different number of active features .these domains were obtained from the arcade learning environment , a framework with dozens of atari 2600 games where the agent has access , at each time step , to the game screen or the ram data , besides an additional reward signal .we compare the learning curves of regular sarsa( ) and sarsa( ) with its q - values optimistically initialized .basic _ features with the same sarsa( ) parameters reported by .basic _ features divide the screen in to tiles and check , for each tile , if each of the 128 possible colours are active , totalling 28,672 features .the results are presented in figure 1 .we report results using two different learning rates , a low value ( ) and a high value ( ) , each point corresponds to the average after 30 runs .the game freeway consists in controlling a chicken that needs to cross a street , avoiding cars , to score a point ( reward ) .the episode lasts for 8195 steps and the agent s goal is to cross the street as many times as possible .this game poses an interesting exploration challenge for ramdom exploration because it requires the agent to cross the street acting randomly ( ) for dozens of time steps .this means frequently selecting the action `` go up '' while avoiding cars .looking at the results in figure 1 we can see that , as expected , optimistic initialization does help since it favours exploration , speeding up the process of learning that a positive reward is available in the game .we see this improvement over sarsa( ) for both learning rates , with best performance when .the game private eye is a very different domain . in this game the agentis supposed to move right for several screens ( much more than when crossing the street in the game freeway ) and it should avoid enemies to avoid negative rewards . 
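Returning to the reward transformation defined above, a minimal sketch of it is given below. The normalization and the downward shift by (1 - gamma) follow the derivation in the text; the class name and the exact form of the termination bonus (read here as a discounted sum of (gamma - 1) over the unused steps) are illustrative assumptions.

```python
class ShiftedReward:
    """Reward transform making a zero-initialized value function optimistic:
    tilde r = r/|r_first| + (gamma - 1), so tilde Q = Q/|r_first| - 1 and tilde Q = 0
    corresponds to expecting a reward the size of the first reward at every step.
    Sketch only; assumes gamma < 1."""

    def __init__(self, gamma):
        self.gamma = gamma
        self.r_first = None          # |r_1st| is unknown until a non-zero reward is seen

    def transform(self, r):
        if self.r_first is None and r != 0.0:
            self.r_first = abs(r)
        scaled = 0.0 if self.r_first is None else r / self.r_first
        return scaled + (self.gamma - 1.0)

    def termination_bonus(self, steps_taken, max_steps):
        # one reading of the episodic correction: reward (gamma - 1) for each unused step
        remaining = max_steps - steps_taken
        return (self.gamma - 1.0) * (1.0 - self.gamma ** remaining) / (1.0 - self.gamma)
```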
along the path the agent can collect intermediate rewards, but its ultimate goal is to get to the end and reach the goal, obtaining a much larger reward. we can see that the optimistic initialization is much more reckless in the sense that it takes much more time to realize a specific state is not good (one of the main drawbacks of this approach), while sarsa( ) is more conservative. interestingly, we observe that exploration may have a huge benefit in this game, as a larger learning rate guides the agent to see rewards in a scale that was not seen by sarsa( ). thus, besides our formal analysis, we have shown here that our approach behaves as one would expect optimistically initialized algorithms to behave. it increased the agents' exploration, with the trade-off that sometimes the agent ``exploited'' a negative reward hoping to obtain a higher return. rl algorithms can be implemented without needing rigorous domain knowledge but, as far as we know, until this work it was unfeasible to perform optimistic initialization in the same transparent way. besides not requiring adaptations for specific domains, our approach does not hinder algorithm performance. the authors would like to thank erik talvitie for his helpful input throughout this research. this research was supported by alberta innovates technology futures and the alberta innovates centre for machine learning, and computing resources were provided by compute canada through westgrid . | in reinforcement learning (rl), it is common to use optimistic initialization of value functions to encourage exploration. however, such an approach generally depends on the domain, viz., the scale of the rewards must be known, and the feature representation must have a constant norm. we present a simple approach that performs optimistic initialization with less dependence on the domain.
we consider a piece of cortex ( the _ neural field _ ) , which is a regular compact subset when representing locations on the cortex , or periodic domains such as the torus of dimension 1 in the case of the representation of the visual field , in which neurons code for a specific orientation in the visual stimulus : in that model , is considered to be the feature space . ] of for some , and the density of neurons on is given by a probability measure assumed to be absolutely continuous with respect to lebesgue s measure on , with strictly positive and bounded density ] with points . in this model ,typical micro - circuit size could be chosen to be with , and of order with .our model takes into account the fact that in reality , neurons are not regularly placed on the cortex , and therefore such a regular lattice case is extremely unlikely to arise ( this architecture has probability zero ) .moreover , in contrast with this more artificial example , the probability distribution of the location of one given neuron do not depend on the network size . in our setting , accounts for the density of neurons on the cortex , and as the network size is increased , new neurons are added on the neural field at locations independent of that of other neurons , with the same probability , so that neuron locations sample the asymptotic cell density .these elements describe the random topology of the network .prior to the evolution , a number of neurons and a configuration is drawn in the probability space .the configuration of the network provides : * the locations of the neurons i.i.d . with law * the connectivity weights , in particular the values of the i.i.d .bernoulli variables of parameter .let us start by analyzing the topology of the micro - circuit .at the macroscopic scale , we expect local micro - circuits to shrink to a single point in space , which would precisely correspond to the scale at which imaging techniques record the activity of the brain ( a pixel in the image ) .the micro - circuit connects a neuron to its nearest neighbors .we made the assumptions that tends to infinity as while keeping .this property ensures that for a fixed neuron and for any , the distance and , the euclidean norm of , regardless of the space involved and the dimension considered . ] is , with overwhelming probability , upperbounded by a constant multiplied by . in the regular lattice case ,this property is trivial . in our random setting, we introduce the maximal distance between two neurons in the microcircuit associated to neuron is noted : this quantity has a law that is independent of the specific neuron chosen . 
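As a concrete illustration of this random topology, the sketch below draws i.i.d. neuron locations (uniformly, as a stand-in for a general density lambda), builds each neuron's microcircuit from its v(n) nearest neighbours, and records the distance to the farthest member; the pairwise diameter of the microcircuit is at most twice this by the triangle inequality. The choice v(n) = n^0.6 is only an example of the regime v(n) -> infinity with v(n)/n -> 0.

```python
import numpy as np
from scipy.spatial import cKDTree

def microcircuit_radii(n, v, d=2, rng=None):
    """Draw n points i.i.d. uniformly on [0,1]^d, define the microcircuit of each point
    as its v nearest neighbours, and return the distance to the farthest member."""
    rng = np.random.default_rng(0) if rng is None else rng
    pts = rng.uniform(size=(n, d))
    dist, _ = cKDTree(pts).query(pts, k=v + 1)   # first neighbour is the point itself
    return dist[:, -1]

n = 5000
radii = microcircuit_radii(n, v=int(n**0.6))
# the typical radius scales like (v(n)/n)^(1/d), so the microcircuit shrinks to a point as n grows
```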
[lem : sizemicro ] the microcircuit shrinks to a single point in space as to infinity .more precisely , for any , the maximal distance between two neurons in the microcircuit associated to neuron decreases towards , in the sense that there exists such that the maximal distance between two points in a microcircuit satisfies the inequality : \leq c \left(\left(\frac{v(n)}{n}\right)^{\frac 1 d } + \frac{1}{v(n)}\right).\ ] ] we have assumed that the locations are iid with law absolutely continuous with respect to lebesgue s measure with density lowerbounded by some positive quantity .let us fix a neuron at location which is almost surely in the interior of .we are interested in the distances between different neurons within the microcircuit around , and will therefore consider the distribution of relative locations of neurons belonging to conditionally to the location of neuron .we will denote the expectation under this conditioning .it is clear that the set of random variables are identically distributed .moreover , these are independent conditionally on the value of .we will show that , for any neuron , the distance tends to zero as increases with probability one . to this end, we use the characterization of the maximal distance as the minimal radius such that the ball centered at with diameter contains points : we will show that there exists a quantity tending to zero such that with large probability , i.e. . to this end , we start by noting that conditionally on , the random variables are independent , identically distributed. moreover , for fixed , there exists such that for any , .therefore , for , the random variables are such that : }=\int_{b(r_i,\alpha(n ) ) } d\lambda(r)\in [ \lambda_{min},\lambda_{max}]\times \gamma(n)\\ \text{var}_i(z_j^n ) = { \mathcal{e}}_i{[z_j^n ] } - { \mathcal{e}}_i{[z_j^n]}^2 \in [ \lambda_{\min } - \lambda_{\max } \gamma(n),\lambda_{\max}-\lambda_{\min } \gamma(n ) ] \times \gamma(n ) \end{cases}\ ] ] where \times c = [ ac , bc] ] a square integrable process with values in .under the current assumptions , for any configuration of the network , there exists a unique strong solution to the network equations with initial condition .this solution is square integrable and defined for all times .the proof of this proposition is classical .it is a direct application of the general theory of sdes in infinite dimensions ( * ? ? ?* chapter 7 ) , and elementary proof in our particular case of delayed stochastic differential equations can be found in ( * ? ? ?* theorem 5.2.2 ) : for any fixed configuration , we have a regular -dimensional sde with delays satisfying a monotone growth condition [ assump : lineargrowth ] ensuring a.s .boundedness for all times of the solution .the proof of this property is essentially based on the same arguments as those of the proof of theorem [ thm : existenceuniquenessspace ] , and the interested reader is invited to follow the steps of that demonstration .it is important to note that the bound one obtains on the expectation of the squared process depends on the configuration of the network .indeed , the macroscopic interaction term involves the sum of a random number of terms rescaled by .the quantity can take large values ( up to ) with positive ( but small ) probability , and therefore the scaling coefficient is not enough to properly control such cases .the bound obtained by classical methods will therefore diverge in , and this will be a deep question for our aim to prove convergence results as . 
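A quick numerical check of the rate appearing in the lemma can be done with plain numpy (self-contained; uniform density and v(n) = n^0.6 are again illustrative):

```python
import numpy as np

def mean_radius(n, v, d=2, seed=0):
    """Monte Carlo estimate of E[distance to the v-th nearest neighbour] for n points
    uniform on [0,1]^d, to be compared with the bound C*((v/n)^(1/d) + 1/v)."""
    pts = np.random.default_rng(seed).uniform(size=(n, d))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.sort(dist, axis=1)[:, v].mean()

for n in (500, 1000, 2000):
    v = int(n**0.6)
    print(n, round(mean_radius(n, v), 4), round((v / n) ** 0.5 + 1.0 / v, 4))
```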
in the present manuscript , we will be able to handle these terms properly in that limit by using fine estimates related to , see lemma [ lem : sumchi ] .we are interested in the limit , as , of the behavior of the neurons .since we are dealing with diffusions in random environment , there are at least two notions of convergence : _ quenched _ convergence results valid for almost all configuration , and _ annealed _ results valid for the law of the network averaged across all possible configurations . here, we will show averaged convergence results as well as quenched properties along subsequences .similarly to what was observed in , the limit of such spatially extended mean - field models will be stochastic processes indexed by the space variable , which , as a function of space , are not measurable with respect to the borel algebra . as noted in ,this is not a mathematical artifact of the approach , since neurons accumulating on the neural field are driven by independent brownian motions , and therefore no regularity is to be expected in the limit . however , even if trajectories are highly irregular , this will not be the case of the law of these solutions . in order to handle this irregularity, we will use the _ spatially chaotic _brownian motion on , a two - parameter process such that for any fixed , the process is a -dimensional standard brownian motion , and for in , the processes and are independent of _ spatially chaotic _ if the processes and are independent for any . ] .this process is relatively singular seen as a spatio - temporal process : in particular , it is not measurable with respect to .the spatially chaotic brownian motion is distinct from other more usual spatio - temporal processes . in particular, its covariance is }=(t\wedge t ' ) \delta_{r = r'} ] that have the following continuity property : there exists a _ coupled process _ indexed by , such that for any fixed , has the same law as , and moreover , there exists a constant such that for any : }\left\vert \hat{x}_t(r)-\hat{x}_t(r')\right\vert \right]}\leq c ( \vert r - r'\vert + \sqrt{\vert r - r'\vert}).\ ] ] note that in this case , for any lipschitz - continuous function , the map } ] a -progressively measurable real - valued process indexed by that belongs to and which is independent of the collection of brownian motions .we denote by the coupled process corresponding to the regularity condition .we assume that for any we have } < c<\infty , \text { and}\\ { \mathbbm{e}\left [ \vert \hat{\delta}_t(r)-\hat{\delta}_t(r')\vert^2 \right]}\leq c^2 ( \vert r - r'\vert + \sqrt{\vert r - r'\vert})^2 \end{cases}\ ] ] since for any fixed , the process is a standard brownian motion , the process defined by the stochastic integral : is well defined .it is spatially chaotic since for the brownian motions and and the processes and are independent .moreover , they have a regular law in the sense of our definition .indeed , let be a standard brownian motion independent of .the process has the same law as , and moreover , }\leq \left(\int_0^t { \mathbbm{e}\left [ \vert \hat{\delta}_s(r)-\hat{\delta}_s(r')\vert^2 \right]}\,ds\right)^{1/2}\leq \sqrt{t } c ( \vert r - r'\vert+\sqrt{\vert r - r'\vert}).\ ] ] the process therefore belongs to .moreover , it is a square integrable martingale with quadratic variation }\,ds ] : } = { \mathbbm{e}\left [ \vert\int_t^{t ' } \delta_s(r ) dw_s(r)\vert \right]}\leq \left(\int_{t}^{t ' } { \mathbbm{e}\left [ \vert\delta_s(r)\vert^2 \right]}\,ds\right)^{1/2}\leq c \sqrt{t'-t}.\ ] ] the 
process therefore belongs to .note that this example illustrates an important fact .the process involves two processes , and , and in order to build up the coupled , we used the fact that we were able to find two processes and such that the pairs and had the same law ( here , the two components are independent ) .this fact will be also prominent in the definition of the solutions to the mean - field equation .[ def : solution ] a _ strong solution _ to the mean - field equation on the probability space , with respect to the chaotic brownian motion and with an initial condition is a spatially chaotic process , i.e. with continuous sample paths and regular law , such that : 1 .there exists a coupling , r\in\gamma) ] is continuous . in particular , it is measurable and hence the integral }d\lambda(r) ] is a strong solution , in the usual sense ( see ( * ? ? ?* defintion 5.2.1 ) ) , i.e. it is adapted to the filtration , almost surely equal to for ( x_t) ( w_t(\cdot)) ] a square - integrable process with a regular law . the mean - field equation with initial condition has a unique strong solution on ( x_t) ( w_t(\cdot)) ] satisfies the relationship : } \vert \zeta^0_s(r)\vert^2 \right ] } + t \,c\int_0^t ( 1+n_s^x(r))\,ds+ 4\,t \vert\sigma(r)\vert^2 \bigg ) \ ] ] which is finite under the assumption that and are square integrable .note that this property readily implies , by application of gronwall s lemma , that any possible solution is square integrable .the regularity in time is then a direct consequence of this inequality and of the fact that the lipschitz continuity of implies that . indeed , for , we have } & \leq \int_{t}^{t ' } { \mathbbm{e}\left[ \vert \psi(r , s , x_s(r))\vert \right ] } ds + { \mathbbm{e}\left [ \vert \sigma(r ) ( w_{t'}(r)-w_t(r))\vert \right]}\\ & \leq c ( 1+n_t^x(r)^{1/2 } ) ( t'-t ) + \vert \sigma(r)\vert \sqrt{t'-t}. \end{aligned}\ ] ] it therefore remains to show that is regular in law .let be a standard brownian motion , and assume that is a coupling of in the sense that they are equal in law for any fixed , and that both and have the regularity property ( is a process satisfying the assumptions of definition [ def : solution ] ) .we define as : \,ds\\ + \int_0^t \int_{\gamma } j(r , u){\mathbbm{e}}_{z } [ b(\hat{x}_s(r ) , z_{s-\tau(r , u ) } ( u ) ) ] d\lambda(u)\,ds \end{gathered}\ ] ] it is clear that this process has the same law as since , and this obviously also holds for the processes and .let us denote } \vert \hat{x}_s(r)-\hat{x}_s(r')\vert \right]} ] is measurable with respect to the borel algebra in , allowing to make sense of the integral over the space variable .let us eventually remark that , again , by gronwall s lemma , any possible solution has a coupled process satisfying the regularity condition .these properties ensure that we can make sense of the spatial integral term in the definition of for iterates of that function .a sequence of processes can therefore be defined by iterating the map .we fix a process in satisfying the coupling assumptions above ( related to definition [ def : solution ] ) , and build the sequence by induction through the recursion relationship .we show that these processes constitute a cauchy sequence for . this will not be enough for our purposes : we are interested in proving existence and uniqueness of solutions for all . 
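To make the notion of spatial chaos used in this section concrete, here is a minimal simulation sketch restricted to a finite set of locations: each location carries its own independent standard Brownian motion, so the marginal law is identical everywhere while sample paths at distinct locations are statistically unrelated. The grid of locations, horizon and time step are illustrative assumptions.

```python
# Minimal sketch of a "spatially chaotic" Brownian motion on a finite set of
# locations: independent standard Brownian motions W_t(r), one per location r.
import numpy as np

rng = np.random.default_rng(1)
locations = np.linspace(0.0, 1.0, 200)   # assumed finite sample of the field
T, n_steps = 1.0, 1_000
dt = T / n_steps

# increments are independent across time steps AND across locations
dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, locations.size))
W = np.cumsum(dW, axis=0)                # W[k, j] approximates W_{t_k}(r_j)

# same law at every location: Var[W_T(r)] ~ T ...
print("empirical variance of W_T over locations:", round(W[-1].var(), 3))
# ... but no regularity in r: increments at distinct locations are uncorrelated
print("corr of increments at r_0 and r_1:",
      round(np.corrcoef(dW[:, 0], dW[:, 1])[0, 1], 3))
```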
equipped with the estimates on the distance , we will come back to the sequence of processes at single locations , show that these also constitute a cauchy sequence in the space of stochastic processes in ( which is complete ) and conclude .again , one needs to be careful in the definition of the above recursion and build recursively a sequence of processes independent of the collection of processes and having the same law as follows : * is independent of and has the same law as * for , is independent of the sequence of processes and is such that the collection of processes has the same joint law as , i.e. is chosen such as its conditional law given is the same as that of given . once all these ingredients have been introduced , it is easy to show that satisfies a recursion relationship , by decomposing this difference into the sum of elementary terms : \\ & \quad + \int_0^t \int_{\gamma } j(r , r')\big\{\big ( { \mathbbm{e}}_{z } [ b(x_s^{k}(r ) , z^{k}_{s-\tau(r , r ' ) } ( r ' ) ) ] \\ & \qquad \qquad \qquad-{\mathbbm{e}}_{z } [ b(x^{k-1}_s(r ) , z^{k-1}_{s-\tau(r , r ' ) } ( r ' ) ) ] \big)\big\}d\lambda(r')\,ds \\ & = : a_t(r ) + b_t(r)+ c_t(r ) \end{aligned}\ ] ] and checking that the following inequalities apply : through the use of cauchy - schwarz inequality , by standard mckean - vlasov arguments , and } \bigg \vert \int_{\gamma}j(r , r')\int_0^s \big({\mathbbm{e}}_{z } [ b(x_u^{k}(r ) , z_{u-\tau(r , r')}^{k } ( r ' ) ) - b(x_u^{k-1}(r ) , z_{u-\tau(r , r')}^{k-1 } ( r ' ) ) ] \big ) { du\,}d\lambda(r ' ) \bigg\vert d\lambda(r)\bigg]\\ & \leq \vert j\vert_{\infty } \;\int_{\gamma^2 } \int_0^t{\mathbbm{e}}\bigg [ { \mathbbm{e}}_{z } [ \big \vert b(x_u^{k}(r ) , z_{u-\tau(r , r')}^{k } ( r ' ) ) - b(x_u^{k-1}(r ) , z_{u-\tau(r , r')}^{k-1 } ( r ' ) ) ] \big\vert\big ) du\bigg]d\lambda(r)d\lambda(r ' ) \quad ( cs)\\ & \leq 2 \ , l\vert j\vert_{\infty } \int_{\gamma^2 } \int_0^t { \mathbbm{e}\left [ \vert x_s^{k}(r)-x_s^{k-1}(r)\vert \right]}\,ds d\lambda(r)d\lambda(r')\quad \ref{assump : loclipschbspace}\\ & \leq 2 \;l\vert j\vert_{\infty } \int_0^t \vert{x^{k}-x^{k-1}}\vert_s^1\,ds \end{aligned}\ ] ] these inequalities imply : with .let us now denote for the norm } \vertz_s(r ) \vert \right]} ] ( the space of square integrable processes from ] .this property is not classical : indeed , in the network equation , the microcircuit interaction term is , and therefore involve the state of neurons located at different places on and different delays .the convergence will be handled using ( i ) the fact that in the limit considered , the neurons belong to the microcircuit collapse at a single space location , and ( ii ) regularity properties of the law of the solution as a function of space and time. this convergence will be the subject of lemma [ lem : convmicro ] .* second , the macro - circuit interaction term involve delocalized terms across the neural field . 
the sum will be shown to converge to a non - local averaged term involving an integral over space .this will be proved through the use of lemma [ lem : convmacro ] and [ lem : sumchi ] .moreover , we will prove our convergence through a non - classical coupling method that we now describe .let us now fix a configuration of the network .the neuron labeled in the network is driven by the -dimensional brownian motion , and has the initial condition .we aim at defining a spatially chaotic brownian motion on such that the standard brownian motion is equal to , and proceed as follows .let , r\in\gamma} ( \bar{x}^i_t) ( w^{i}_t(\cdot)) ] is precisely the average of the random variable .therefore , a quadratic control argument ( see e.g. ( * ? ? ?* theorem 1.4 . ) ) allows to show that the first term is of order , in the sense that : \right)]]\leq \frac{k_1}{\sqrt{v(n)}}.\ ] ] this argument consists in showing that the expectation of the square of the sum is of order , which is performed by showing that ( i ) the terms of the sum are centered ( i.e. that the expectation term introduced which was chosen to this purpose is precisely the expectation with respect to of for equal in law to which all have the same law ) and ( ii ) using cauchy - schwarz inequality to bound the term by the square root of the expectation of the squared sum , developing the square and showing that the number of null terms is bounded by some constant multiplied by .this argument is not developed here as it will be the core of the proof of lemma [ lem : convmacro ] .the second term is handled by using the control given by equation and the result of proposition [ lem : sizemicro ] , ensuring that - { \mathbbm{e}}_{z}[b(\bar{x}^i_t(r_i),{z}_{t-\tau_s}(r_i))]\big]\big]\\\leq k_1\bigg(\sqrt{\left({\frac{v(n)}{n}}\right)^{\frac{1}{d}}+\frac 1 { v(n ) } } + \left({\frac{v(n)}{n}}\right)^{\frac{1}{d}}+\frac 1 { v(n)}\bigg ) .\end{gathered}\ ] ] put together , the two last estimates yield the desired result .[ lem : convmacro ] the coupled macroscopic interaction term converges towards a non - local mean - field term with speed , in the sense that there exists a constant independent of such that : \lambda(r)\right \vert\big]\big]\leq \frac{k_2}{\sqrt{n\beta(n)}}.\ ] ] conditioned on the location of neuron , the collection of -random variables are independent and identically distributed .the sum is therefore , conditionally on and , the sum of independent and identically distributed processes , with finite mean and variance ( since is a bounded function ) .the expectation of each term in the sum , conditionally on and , is equal to : d\lambda(r).\ ] ] let us denote by the term under consideration is simply the empirical average , and conditionally on and , the terms are independent , identically distributed , centered -random variables with second moment : \leq \frac{1}{\beta(n)}m_2\ ] ] where is a finite constant independent of , and . let us denote by ] is therefore the global expectation , i.e. on .the result shows that the expectation tends to zero .this implies quenched convergence ( i.e. for almost all configuration ) along subsequences . in detail, the speed of converge announced in the theorem ( on the righthand side of equation ) allows to define subsequences ( i.e. 
a sequence of network size ) for which we have almost sure convergence .these sequences are such that borel - cantelli lemma can be applied , namely subsequences extracted through a strictly increasing application such that is summable .we prepare for the proof by demonstrating the following fine estimate that will be used to control configurations with more links than the expected value : [ lem : sumchi ] under our assumptions , for any and , we have for sufficiently large : where . in order to demonstrate the result , we make use of chernoff - hoeffding theorem controlling the deviations from the mean of bernoulli random variables with fixed mean , is such that , for any , \leq \left(\left(\frac p { p+\varepsilon}\right)^{p+\varepsilon}\left(\frac { 1-p } { 1-p-\varepsilon}\right)^{1-p-\varepsilon}\right)^m.\ ] ] ] .let us denote by .this is a binomial variable of parameters .chernoff - hoeffding theorem ensures that : & \leq \left(\left(\frac{\beta(n)}{\gamma\beta(n)}\right)^{\gamma \beta(n)}\left(\frac{1-\beta(n)}{1-\gamma\beta(n)}\right)^{1-\gamma \beta(n)}\right)^n\\ & \leq \exp\left(-\gamma \log(\gamma ) n \beta(n ) + n\big(1-\gamma\beta(n)\big)\big(\log\big(1-\beta(n)\big)-\log\big(1-\gamma\beta(n)\big)\big)\right ) . \end{aligned}\ ] ] using a taylor expansion of the logarithmic terms for large ( using the fact that tends to zero at infinity ) , it is easy to obtain \leq \exp\left ( ( -\gamma\log(\gamma)+\gamma-1 ) n\beta(n ) + o(n\beta(n)^2)\right)\ ] ] note that for , the quantity is strictly positive .we therefore have , for sufficiently large , that the probability is bounded by : \leq \exp\left ( -\frac 1 2 ( \gamma\log(\gamma)-\gamma+1 ) n\beta(n))\right).\ ] ] this allows to conclude the lemma as follows .it is clear by definition that .therefore , \end{aligned}\ ] ] yielding the desired result .we are now in a position to perform the proof of theorem [ thm : propagationchaosspace ] . 
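The tail bound of the lemma is easy to check numerically. The sketch below compares the exact binomial tail with the asymptotic bound exp(-(gamma log(gamma) - gamma + 1) n beta(n) / 2) for an assumed sparsity beta(n) = n^{-1/2}; gamma and the values of n are illustrative choices only.

```python
# Numerical check (illustrative parameters) of the Chernoff-type tail bound:
# for S ~ Binomial(n, beta(n)) and gamma > 1,
#   P(S >= gamma*n*beta(n))  <~  exp(-0.5*(gamma*log(gamma)-gamma+1)*n*beta(n)).
import numpy as np
from scipy.stats import binom

gamma = 2.0
for n in [10_000, 100_000, 1_000_000]:
    beta = n ** (-0.5)                  # assumed sparsity beta(n) = n^(-1/2)
    mean_links = n * beta
    threshold = int(np.ceil(gamma * mean_links))
    tail = binom.sf(threshold - 1, n, beta)          # P(S >= threshold)
    bound = np.exp(-0.5 * (gamma * np.log(gamma) - gamma + 1.0) * mean_links)
    print(f"n={n:>9}  E[S]={mean_links:8.1f}  "
          f"P(S >= gamma*E[S])={tail:.3e}  bound={bound:.3e}")
```

As n beta(n) grows, both the exact tail and the bound decay exponentially, which is what makes the exceptional configurations negligible in the proof of the theorem.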
the proof is based on evaluating the distance $ ] , and breaking it into a few elementary , easily controllable terms .a substantial difference with usual mean - field proofs is that network equations correspond to processes taking values in in which the interaction term is sum over a finite number of neurons in the network equation , while the mean - field equation is a spatially extended equation with an effective interaction term involving an integral over .this will be handled using the result of lemma [ lem : convmacro ] .we introduce in the distance coupled interaction terms that were controlled in lemmas [ lem : convmicro ] and [ lem : convmacro ] and obtain the following elementary decomposition ( each line of the righthand side corresponds to one term of the decomposition , ) : \big)\ , ds\\ & \quad + \frac{1}{n\beta(n ) } \sum_{j=1}^{n } \int_0^t j(r_i , r_j)\chi_{ij } \big(b(x^{i,{\mathcal{a}}_n}_s , x^{j,{\mathcal{a}}_n}_{s-\tau_{ij}})-b(\bar{x}^{i}_s(r_{i}),\bar{x}^{j}_{s-\tau_{ij}}(r_{j } ) ) \big)\,ds\\ & \quad + \int_0^t \big(\frac{1}{n\beta(n ) } \sum_{j=1}^{n} j(r_i , r_j)\chi_{ij } b(\bar{x}^{i}_s(r_{i}),\bar{x}^{j}_{s-\tau_{ij}}(r_{j } ) ) -\int_{\gamma } j(r_i , r ' ) { \mathbbm{e}}_z[b(\bar{x}^i_s(r_{i}),z_{s-\tau(r_{i},r')}(r'))]d\lambda(r')\big)\,ds\\ & \nonumber \qquad = : a^i_t(n)+b^i_t(n)+c^i_t(n)+d^i_t(n)+e^i_t(n ) \end{aligned}\ ] ] it is easy to show , using assumptions [ assump : loclipschspace ] and [ assump : loclipschbspace ] , that the terms and satisfy the inequalities : & \leq k_f\,\int_0^{t } { \mathbbm{e}}\big[\sup_{-\tau\leq u\leq s } \vert x_u^{i,{\mathcal{a}}_n}-\bar{x}_u^i(r_{i})\vert \big]\ , ds\\ \max_{i=1\cdots n}\mathcal{e}\big[{\mathbbm{e}}\big[\sup_{-\tau\leq s\leq t } \vert b_s^i(n ) \vert\big]\big ] & \leq \frac{v(n)+1}{v(n ) } l\ , \int_0^{t } \max_{k=1\cdots n}{\mathcal{e}}\big[{\mathbbm{e}}\big[\sup_{-\tau\leq u\leq s}\vert x^{k,{\mathcal{a}}_n}_u-\bar{x}^k_u(r_{i } ) \vert\big]\big ] \ , ds \end{aligned}\ ] ] the term requires to be handled with care , because of the sparsity of the macrocircuit .indeed , this term involves the sum of random variables and is rescaled by . 
as we assumed in order to account for the sparsity in the macrocircuit , most terms in the sum are equal to zero such that .we have : \leq \vert j\vert_{\infty } \frac 1 { n\beta(n ) } \sum_{j } \chi_{ij } \int_0^t { \mathbbm{e}}\big[\sup_{0 \leq u \leq s } \vert b(x^{i,{\mathcal{a}}_n}_u , x^{j,{\mathcal{a}}_n}_{u-\tau_{ik}})-b(\bar{x}^i_u,\bar{x}^j_{u-\tau_{ik } } ) \vert \big]\,ds\ ] ] this expression shows how critical the singular sparse coupling is to our estimates .indeed , the random variable almost surely tends to as goes to infinity , but it can reach very large values ( up to which diverges as goes to infinity ) .configurations for which the sum is large are increasingly improbable , but for these configurations , the deterministic scaling is not fast enough to overcome the divergence of the input term .there is therefore a competition between the probability of having configurations with large values of and the divergence of the solutions .however , in the present case , this control will be possible using the estimate of the probability that the number of links exceeds using the result of lemma [ lem : sumchi ] .indeed , fixing , and distinguishing whether or not , we obtain : \big ] & \leq 2 \gamma \ ; l \ ; \vert j \vert_{\infty } \int_0^t \max_{k=1\cdots n}{\mathcal{e}}\big [ { \mathbbm{e}}\big[\sup_{-\tau \leq u \leq s } \vert x^{k,{\mathcal{a}}_n}_{u}-\bar{x}^k_{u } \vert \big]\big]\,ds\\ & \qquad + 2 \vert b\vert_{\infty}\vert j\vert_{\infty } { \mathcal{e}}\left(\frac 1 { n\beta(n ) } \sum_{j } \chi_{ij } { \mathbbm{1}_{\mathcal{d}_{\gamma}}}\right ) \end{aligned}\ ] ] where as defined in lemma [ lem : sumchi ] . by application of this lemma , and using the fact that the second term of the upper bound is negligible compared to , we conclude that : \big ] & \leq 2 \gamma \ ; l \ ; \vert j \vert_{\infty } \int_0^t \max_{k=1\cdots n}{\mathcal{e}}\big[ { \mathbbm{e}}\big[\sup_{-\tau \leq u \leq s } \vert x^{k,{\mathcal{a}}_n}_{u}-\bar{x}^k_{u } \vert \big]\,ds + \frac{k_c}{\sqrt{n\beta{(n)}}}. \end{aligned}\ ] ] we are left with controlling the terms and .these consist of sums only involving the coupled processes , and were analyzed in the previous sections . by direct application of the results of lemmas [lem : convmicro ] and [ lem : convmacro ] , we have : \big ] } & \leq \displaystyle{k_1\sqrt{\left(\frac{v(n)}{n}\right)^{\frac{1}{d}}+\frac 1 { { v(n)}}}}\\ \displaystyle{\max_{k=1\cdots n}{\mathcal{e}}\big [ { \mathbbm{e}}\big[\sup_{-\tau\leq s \leq t } \vert e^k_t \vert \big]\big ] } & \leq \displaystyle{\frac{k_2}{\sqrt{n\beta{(n ) } } } } \end{cases}\ ] ] all together , we hence have , for some constants and independent of \right),\ ] ] the inequality : which proves the theorem by application of gronwall s lemma . [ cor : propachaosspace ]let and fix neurons . 
under the assumptions of theorem [ thm : propagationchaosspace ], the process converges in law towards .we have : \right ) \\ & \quad \leq l \max_{k=1\cdotsn}\mathcal{e}\left({\mathbbm{e}}\left [ \sup_{-\tau\leq t \leq t } \left\vert x^{k,{\mathcal{a}}_n}_t-\bar{x}^{k}_t\right\vert^2 \right]\right)\\ \end{aligned}\ ] ] which tends to zero as goes to infinity , hence the law of converges towards that of which is equal by definition to .the dynamics of neuronal networks in the brain lead us to analyze a class of spatially extended networks which display multiscale connectivity patterns that are singular in at least two aspects : * the network display local dense connectivity patterns in which neurons are connected to their -nearest neighbors , where . * the macro - circuit was also singular , in the sense that the probability of two neurons and to be connect tends towards zero .this is very far from usual mean - field models that consider full connectivity patterns , or partial connectivity patterns proportional to the network size . in these cases ,the convergence is substantially slower , and the rescaling actually required thorough controls on the number of incoming connections to each neurons .the introduction of local microcircuits with negligible size was suggested in , in the context of the fluctuations induced by the microcircuit .our scaling , motivated by characterizing the macroscopic activity at the scale of the neural field , lead us to consider local microcircuits with spatial extension of order , which tends to zero in the limit . at this scale ,the fluctuations related to the microcircuit vanish , which allowed identifying the large limit process .however , at the scale of one neuron ( or considering , similarly to , a field of size , i.e. typical distances between neurons of order ) , the microcircuits may induce more complex phenomena in which fluctuations become prominent .this interesting problem remains largely open and can not be addressed with the techniques presented in the manuscript .the developments presented in this article also go way beyond what was done in the domain of mean - field analysis of large spatially extended systems . in that domain ,probably the two most relevant contributions to date are and . in , a relatively sketchy model of neural fieldwas proposed , in which the system was fully connected and neurons gathered at discrete space location that eventually filled the neural field .the model presented here is considerably more relevant from the biological viewpoint , and necessitated to deeply modify the proofs proposed in that manuscript . 
In particular, the connectivity patterns are now randomized, and the proof is made independent of results arising in finite-population networks. Moreover, the main contributions of the article, namely the singular multiscale coupling, were absent from the above-cited manuscript. Such coupling was discussed in , where the authors consider the case of a network with nearest-neighbor topology (only a local micro-circuit) in which neurons connect to a non-trivial proportion of the neurons. There is a substantial difficulty in considering only very local micro-circuit connectivity together with sparse macro-circuits. Here, we solved this problem and framed it in a more general setting with multiscale coupling. The proof presented in the present manuscript is relatively general. In particular, it can be extended to models with non locally Lipschitz-continuous dynamics (as is the case of the classical FitzHugh-Nagumo model), as was presented in , or to networks with multiple layers. The results enjoy a relatively broad universality. Indeed, we observe that the limit obtained is independent of the choice of the size of the micro-circuit and of the sparsity of the macro-circuit (as long as the proper scaling is considered). This property shows that the limit is universal: for any choice of the functions and , the macroscopic limit of our networks is identical. An interesting question is then what would be an optimal choice of functions and so that the convergence is fastest. The speed of convergence towards the mean-field equation is governed by three quantities: * the first term controls the regularity of the law of the solution of the mean-field equation with respect to space; the larger it is, the wider the micro-circuit, and therefore the slower the local convergence. * the second term controls the speed of averaging at the micro-circuit scale, which decreases with the size of the micro-circuit. * the third term controls the speed of averaging at the macro-circuit scale; this term is of course smallest when is large. In the biological system under consideration, there is nevertheless an energetic cost to increasing the connectivity level. The first two terms, corresponding to the micro-circuit convergence properties, give information on the order of the optimal micro-circuit size. Minima are obtained when is of order , e.g. in dimension . Other choices may be analyzed to optimize other criteria, such as information capacity versus energetic cost, anatomical constraints, or the size of clusters sharing resources. Finally, this result also has implications for neuroscience modeling. In this domain, authors widely use the so-called Wilson-Cowan neural field model (see for a review). This model is given by non-local differential equations of the type: where represents the mean firing rate of neurons and corresponds to a sigmoidal function. This type of equation is similar to those obtained in the analysis of fully connected neural fields, as shown in , when considering a discrete Wilson-Cowan type of dynamics for the underlying network, i.e. a case where and for a smooth sigmoidal function.
in this case, we showed that the solutions were attracted by gaussian spatially chaotic processes with mean and standard deviation satisfying the integro - differential equations : where .these are compatible with the neural field equations .however , these actually appear to overlook the complex connectivity pattern , and in particular neglect the additional local averaging term that we found here using rigorous probabilistic methods . taking into account local microcircuitry would actually yield an additional term in the equation on : the study of these new equations will , with no doubt , present substantial different dynamics , are offer a new neural field model well worth analyzing in order to understand the qualitative role of local microcircuits on the dynamics . | the cortex is a very large network characterized by a complex connectivity including at least two scales : a microscopic scale at which the interconnections are non - specific and very dense , while macroscopic connectivity patterns connecting different regions of the brain at larger scale are extremely sparse . this motivates to analyze the behavior of networks with multiscale coupling , in which a neuron is connected to its nearest - neighbors where , and in which the probability of macroscopic connection between two neurons vanishes . these are called singular multi - scale connectivity patterns . we introduce a class of such networks and derive their continuum limit . we show convergence in law and propagation of chaos in the thermodynamic limit . the limit equation obtained is an intricate non - local mckean - vlasov equation with delays which is universal with respect to the type of micro - circuits and macro - circuits involved . ' '' '' ' '' '' the purpose of this paper is to provide a general convergence and propagation of chaos result for large , spatially extended networks of coupled diffusions with multi - scale disordered connectivity . such networks arise in the analysis of neuronal networks of the brain . indeed , the brain cortical tissue is a large , spatially extended network whose dynamics is the result of a complex interplay of different cells , in particular neurons , electrical cells with stochastic behaviors . in the cortex , neurons interact depending on their anatomical locations and on the feature they code for . the neuronal tissue of the brain constitute spatially - extended structures presenting complex structures with local , dense and non - specific interactions ( microcircuits ) and long - distance lateral connectivity that are function - specific . in other words , a given cell in the cortex sends its projections at ( i ) a local scale : the neurons connect extensively to anatomically close cells ( the _ microcircuits _ ) , forming a dense local network , and ( ii ) superimposed to this local architecture , a very sparse functional architecture arises , in which long - range connections are made with other cells that are anatomically more remote but that respond to the same stimulus ( the functional _ macrocircuit _ ) . this canonical architecture was first evidenced by electrophysiological recordings in the 70 s and made more precise as experimental techniques developed ( see for striking representations of this architecture in the striate cortex ) . the primary visual cortex of certain mammals is a paradigmatic well documented cortical area in which this architecture was evidenced . 
in such cortical areas , neurons organize into columns of small spatial extension containing a large number of cells ( on the order of tens of thousands cells ) responding preferentially to specific orientations in visual stimuli , constituting local microcircuits that distribute across the cortex in a continuous map , each cell connecting densely with its nearest neighbors and sparsely with remote cells coding for the same stimulus . these spatially extended networks are called _ neural fields_. such organizations and structures are deemed to subtend processing of complex sensory or cortical information and support brain functions . in particular , the activity of these neuronal assemblies produce a mesoscopic , spatially extended signal , which is precisely at the spatial resolution of the most prominent imaging techniques ( eeg , meg , mri ) . these recordings are good indicators of brain activity : they are a central diagnostic tool used by physicians to assert function or disfunction . in these spatially extended systems , the presence of delays in the communication of cells , chiefly due to the transport of information through axons and to the typical time the synaptic machinery needs to transmit it , is essential to the dynamics . these transmission delays will chiefly affect the long connections of the macrocircuit , which are orders of magnitude longer than those of the microcircuit . the mathematical and computational analysis of the dynamics of neural fields relies almost exclusively on the use of heuristic models since the seminal work of wilson , cowan and amari . these propose to describe the mesoscopic cortical activity through a deterministic , scalar variable whose dynamics is given by integro - differential equations . this model was widely studied analytically and numerically , and successfully accounted for hallucination patterns , binocular rivalry and synchronization . justifying these models starting from biologically realistic settings has since then been a great endeavor . this problem was undertaken recently using probabilistic methods . the first contribution introduced an approximation of the underlying connectivity of the neural network involved , considering a fully connected architecture ( each neuron was connected to all the others ) and neurons in the same column were considered to be precisely at the same spatial location . they showed propagation of chaos and convergence to some intricate mckean - vlasov equation . more recently , an heterogeneous macrocircuit model was analyzed in . in that paper , the authors considered a network with heterogeneous and non - global connectivity : neurons were connected with their -nearest neighbors , where with , or with power - law synaptic weights , and obtained a limit theorem for the behavior of the empirical density . in both cases , the connectivity was considered at a single scale , and did not reproduce the actual type of connectivity pattern observed in the brain . in the present manuscript , we come back to these models with a more plausible architecture including local microcircuit together with non - local macroscopic sparse connectivity . using statistical methods and in particular an extension of the coupling method , we will demonstrate the propagation of chaos property , and convergence towards a complex nonlinear markov equation similar to the classical _ mckean - vlasov _ equations , but with a non - local integral over space locations and delays . 
interestingly , this object presents substantial differences with the usual mckean - vlasov limits : beyond the presence of delays , the neural field limit regime is at a mesoscopic scale where averaging effects locally to occur , but is fine enough to resolve brain s structure and its activity , resulting in the presence of an integral term over space . the solution , seen as a function of space , is everywhere discontinuous , which makes the limiting object highly singular . the present work is distinct of that of in that we consider local connectivity patterns in which neurons connect to a negligible portion of the neurons . this includes non - trivial issues , and necessitate to thoroughly control the regularity of the law of the solution as a function of space . on the other hand , beyond the presence of random locations of individual neurons and the presence of a dense microcircuit , the sparse macro - circuit generalizes non - trivially the work done in . indeed , at the macro - circuit scale , the probability of connecting two fixed neurons tends to zero . we therefore need to deal with a non - globally connected network , and address the problem by using fine estimates on the interaction terms and chernoff - hoeffding theorem . the speed of convergence towards the mean - field equations is quantified and involves three terms , one governing the local averaging effects arising from the micro - circuits , one arising from the regularity properties of the solutions , and one corresponding to the speed of convergence of the macro - circuit interaction term towards a continuous limit . in the neural field regime , the limit equations are very singular , in particular trajectories are not measurable with respect to the space . these limits are very hard to analyze at this level of generality . however , in the type of models usually considered in the study of neural fields , namely the firing - rate model , it was shown in that the behavior can be rigorously and exactly reduced to a system of deterministic integro - differential equations that are compatible with the usual wilson and cowan system in the zero noise limit . noise intervenes in these equations a nonlinear fashion , fundamentally shaping in the macroscopic dynamics . the paper is organized as follows . we start in section [ sec : model ] by introducing precisely our model and proving a few simple results on the network equations and on the topology of the micro - circuit . this being shown , we will turn in section [ sec : existenceuniquenessspace ] to the analysis of the network equations , and will in particular make sense of the intricate non - local mckean - vlasov equation , show well - posedness and some regularity estimates on the law of the mean - field equations . section [ sec : propachaspace ] will be devoted to the demonstration of the convergence of the network equations towards the mean - field equations . |
Random molecular interactions can have profound effects on gene expression. Because the expression of a gene can be regulated by a single promotor, and because the number of mRNA copies and protein molecules is often small, deterministic models of gene expression can miss important behaviors. A deterministic model might show multiple possible stable behaviors, any of which can be realized depending on the initial conditions of the system. Different stable behaviors that depend on initial conditions allow for variability in response and adaptation to environmental conditions. Although in some cases noise from multiple sources can push the behavior far from the deterministic model, here we focus on the situation where the system fluctuates close to the deterministic trajectory (i.e., weak noise). Of particular interest is behavior predicted by a stochastic model that is qualitatively different from its deterministic counterpart, even if the fluctuations are small. Several interesting questions emerge when including stochastic effects in a model of gene expression. For example, what are the different sources of fluctuations affecting a gene circuit? Can noise be harnessed for a useful purpose, and if so, what new functions can noise bring to the gene-regulation toolbox? One way in which noise can induce qualitatively different behavior occurs when a rare sequence of random events pushes the system far enough away from one of the stable deterministic behaviors that the system transitions toward a different stable dynamic behavior, one that would never be realized in the deterministic model without changing the initial conditions. For example, if the deterministic model is bistable, fluctuations can cause the protein concentration to shift between the different metastable protein concentrations. This happens when fluctuations push the system past the unstable fixed point that separates two stable fixed points. While a spontaneous change in gene expression might often be harmful, it might also be beneficial. For example, in certain types of bacteria, a few individuals within a population enter a slow-growth state in order to resist exposure to antibiotics. In a developing organism, a population of differentiating cells might first randomly choose between two or more expression profiles during their development and then later segregate into distinct groups by chemotaxis. In both examples, switching between metastable states leads to mixed populations of phenotypic expression. This leads to the question of how cells coordinate and regulate different sources of biochemical fluctuations, or noise, to function within a genetic circuit. In many cases, the genes within a given circuit are turned on and off by regulator proteins, which are often the gene products of the circuit. If a gene is switched on, its DNA is transcribed into one or more mRNA copies, which are in turn translated into large numbers of proteins. Typically, the protein products form complexes with each other or with other proteins that bind to regulatory DNA sequences, or operators, to alter the expression state of a gene. For example, a repressor binds to an operator which blocks the promotor (the region of DNA that a polymerase protein binds to before transcribing the gene) so that the gene is turned off and no mRNA are transcribed. This feedback enables a cell to regulate gene expression, and often multiple genes interact within groups to form gene circuits.
understanding how different noise sources affect the behavior of a gene circuit and comparing this with how the circuit behaves with multiple noise sources is essential for understanding how a cell can use different sources of noise productively .fluctuations arising from the biochemical reactions involving the dna , mrna , and proteins are commonly classified as `` intrinsic '' noise .one important source of intrinsic noise is fluctuations from mrna transcription , protein translation , and degradation of both mrna and protein product .this type of noise is common among many of the biochemical reactions within a cell , and its effect is reduced as the number of reacting species within a given volume grows large .another source of intrinsic noise is in the expression state of the genes within the circuit .typically there is only one or two copies of a gene within a cell , which means that thermal fluctuations within reactions with regulatory proteins have a significant effect on mrna production . here , we consider the situation where transitions in the behavior of a gene circuit are primarily driven by fluctuations in the on / off state of its promotor and examine the effect of removing all other sources of noise .stochastic gene circuits are typically modelled using a discrete markov process , which tracks the random number of mrna and/or proteins along with the state of one or more promotors ( but see also ) .monte - carlo simulations using the gillespie algorithm can be used to generate exact realizations of the random process .the process can also be described by its probability density function , which satisfies a system of linear ordinary differential equations known as the master equation .the dimension of the master equation is the number of possible states the system can occupy , which can be quite large , leading to the problem of dimensionality when analyzing the master equation directly .however , for the problem considered here , the full solution to the master equation is not necessary in order to understand metastable transitions .the motivating biological question we consider here is what percentage of a population of cells can be expected to exhibit a metastable transition within a given timeframe .if a spontaneous transition is harmfull to the cell , one expects that reaction rates and protein / dna interactions should evolve so that transition times are likely to be much larger than the lifetime of the cell . on the other hand ,if spontaneous transition are functional , transition times should be tuned to achieve the desired population in which the transition occurs . in either case, the key quantity of interest is the distribution of transition times between metastable states , regardless of the noise source driving the transition . 
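Before turning to approximation methods, here is a minimal concrete instance of the master equation mentioned above: a hypothetical circuit with a single two-state promotor and a truncated protein copy number. All rates, the truncation level, and the model itself are illustrative assumptions, chosen only to show how the generator is assembled and how quickly the dimension of the master equation grows.

```python
# Minimal concrete master equation: one on/off promotor, protein produced
# only in the "on" state, linear degradation, copy number truncated at N.
import numpy as np

k_on, k_off = 1.0, 1.0      # promotor switching rates (assumed)
b, deg = 20.0, 1.0          # production (on state) and per-molecule degradation
N = 60                      # truncate protein copy number at N

def idx(s, n):              # state (s, n): s = 0 (off) / 1 (on), n proteins
    return s * (N + 1) + n

M = 2 * (N + 1)
Q = np.zeros((M, M))        # generator: dP/dt = Q P, columns sum to zero
for s in (0, 1):
    for n in range(N + 1):
        i = idx(s, n)
        if s == 1 and n < N:                               # production
            Q[idx(1, n + 1), i] += b;  Q[i, i] -= b
        if n > 0:                                          # degradation
            Q[idx(s, n - 1), i] += deg * n;  Q[i, i] -= deg * n
        switch = k_on if s == 0 else k_off                 # promotor switch
        Q[idx(1 - s, n), i] += switch;  Q[i, i] -= switch

# stationary distribution = normalized null vector of Q
w, v = np.linalg.eig(Q)
p = np.real(v[:, np.argmin(np.abs(w))]); p /= p.sum()
mean_n = sum(p[idx(s, n)] * n for s in (0, 1) for n in range(N + 1))
print("dimension of the master equation:", M)
print("mean protein number at stationarity:", round(mean_n, 2))
```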
Except for a few special cases, exact results, even for the mean transition time, are not possible, and approximation techniques or Monte-Carlo simulations must be used. However, because rare events typically involve long simulation times in which large numbers of jumps occur, Monte-Carlo simulations are computationally expensive to perform, leaving perturbation analysis ideally suited for the task. Past studies of metastable transitions, where perturbation methods are applied to the master equation, have used a simplifying assumption so that the state of the promotor is not accounted for explicitly. The assumption is that proteins are produced in `` bursts '' during which one or more mRNA copies are translated to rapidly produce many proteins. In these models, production bursts occur as instantaneous jumps, with a predefined distribution determining the number of proteins produced during a given burst. More recently, Assaf and coworkers analyzed a model where the on/off state of a single stochastic promotor is accounted for explicitly, and mRNA copies are produced stochastically at a certain rate when the promotor is turned on. However, the case where the model contains an arbitrary number of promotors or promotor states has not been addressed, and as we show in this paper, accounting for even just three promotor states is nontrivial. Similar asymptotic methods have also been developed to study metastable transitions in continuous Markov processes, but they cannot be applied to a discrete chemical reaction system because continuous approximations of discrete Markov processes, such as the system-size expansion, do not, in general, accurately capture transition times. Another source of difficulty that arises from isolating promotor-state fluctuations as the only source of noise is that the resulting state space of the Markov process is both continuous and discrete. After removing all sources of intrinsic noise except for the fluctuating promotor by taking the thermodynamic limit, the protein levels change deterministically and continuously, and the promotor's state jumps at exponentially-distributed random times. The random jumps in the promotor's state make the protein levels appear random, even though they are only responding deterministically to changes in the promotor state. Such random processes are sometimes called hybrid systems, or piecewise deterministic. Here we refer to this as the quasi-deterministic (QD) process because we are taking part of the randomly fluctuating discrete state of the system (the number of protein molecules) and replacing it with a deterministically-changing continuous state. Recently, we have developed asymptotic methods for metastable transitions, similar to those applied to the discrete master equation, for Markov processes with both discrete and continuous state spaces. However, these methods do not account for two or more continuous state variables, which restricts the genetic circuit the method can analyze to one with a single protein product.
in this paper, we develop new perturbation methods so that we can study metastable transitions in genetic circuits driven by promotor fluctuations .these methods are based on previous theory developed for one - dimensional velocity jump processes , and are generalized to account for the multiple continuous states representing the quantity of proteins produced by the genetic circuit .they also fit within a larger framework of methods to study metastable transitions in continuous markov processes and in discrete markov processes . for illustration, we use a simple model known as the `` mutual repressor '' model , which contains two genes , two promotors , and three promotor states .although our example considers only three promotor states , the methods presented are general and can account for an arbitrary number of promotor states . for a range of parameter values , the deterministic limit of the mutual repressor modelis bistable , having two stable fixed points separated by an unstable saddle point . for the stochastic model ,the deterministic forces create two confining wells surrounding each stable fixed point , separated by a stability barrier along the separatrix that contains the unstable saddle .this geometric interpretation is given by taking the logarithm of the stationary probability density function , which we refer as the `` stability landscape . ''an approximation of the first exit time density is found for the random process to escape over the stability barrier from one of the stability wells to the other . using this model, we seek to answer the following question . how does the random process change when protein noise is removed , leaving the state of the promotor as the only source of randomness ?that is , are there any qualitative differences in the behavior of the system without other sources of intrinsic noise ? the paper is organized as follows . in section [ sec : model ]the mutual repressor model is presented along with its reduction to the qd process , and then in section [ sec : trt ] , perturbation methods for estimating the exit time density are applied to the qd process . for comparison ,the stability landscape is also computed for the full process in section [ sec : full ] , which includes fluctuations in protein production / degradation .finally , results are presented in section [ sec : results ] , and the qd process is compared to the full process , using analytical / numerical approximations and monte - carlo simulations .the mutual repressor model is a hypothetical gene circuit consisting of a single promotor driving the expression of two genes : and .each protein product can dimerize and bind to the promotor to repress the expression of the other .when no dimer is bound to the promotor , both genes are expressed equally .thus , the promotor can be in one of three states : bound to a dimer of protein one , unbound , or bound to a dimer of protein two .let the number of protein product of gene and be and , respectively .it is assumed that the mrna and protein production steps can be combined into a single protein production rate and that the dimerization reaction is fast so that it can be taken to be in quasi - steady - state .we then have the following transition between the three promotor states where is a rate and is a nondimensional dissociation constant . protein ( )is produced at a rate while the promotor is in states ( ) , and both proteins are degraded at a rate in all three promotor states. 
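A hedged simulation sketch of this circuit, using the Gillespie algorithm mentioned earlier, is given below. Because the precise rate expressions do not survive in this text, the binding propensities (quasi-steady-state dimer binding taken proportional to n^2/K), the unbinding rate, and all parameter values are assumptions made for illustration only, not the exact rates of the model.

```python
# Gillespie SSA sketch for a mutual repressor circuit.  Promotor state s is
# 1 (dimer of protein 1 bound), 2 (unbound), or 3 (dimer of protein 2 bound).
# Protein 1 is assumed expressed in states 1 and 2, protein 2 in states 2 and 3.
import numpy as np

rng = np.random.default_rng(2)
beta, deg = 60.0, 1.0        # production and degradation rates (assumed)
k, K = 10.0, 400.0           # binding/unbinding rate and dissociation constant

def propensities(s, n1, n2):
    return [
        (beta if s in (1, 2) else 0.0, ( 1, 0, 0)),          # produce protein 1
        (beta if s in (2, 3) else 0.0, ( 0, 1, 0)),          # produce protein 2
        (deg * n1, (-1, 0, 0)),                               # degrade protein 1
        (deg * n2, ( 0, -1, 0)),                              # degrade protein 2
        (k * n1 * n1 / K if s == 2 else 0.0, (0, 0, -1)),     # dimer-1 binds
        (k * n2 * n2 / K if s == 2 else 0.0, (0, 0, +1)),     # dimer-2 binds
        (k if s == 1 else 0.0, (0, 0, +1)),                   # dimer-1 unbinds
        (k if s == 3 else 0.0, (0, 0, -1)),                   # dimer-2 unbinds
    ]

t, s, n1, n2 = 0.0, 2, 40, 5             # start near the protein-1-dominant state
while t < 50.0:
    props = propensities(s, n1, n2)
    total = sum(r for r, _ in props)
    t += rng.exponential(1.0 / total)
    u = rng.random() * total
    for r, (d1, d2, ds) in props:        # roulette-wheel reaction selection
        if u < r:
            n1, n2, s = n1 + d1, n2 + d2, s + ds
            break
        u -= r
print("final state:", {"promotor": s, "protein1": n1, "protein2": n2})
```

Repeated runs of this kind, with the parameters tuned so that switching is rare, provide the empirical exit times against which the perturbation results of the following sections can be compared.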
the probability density function , for promotor state and protein numbers , satisfies the master equation \mathbf{p},\ ] ] where \ ] ] is the matrix responsible for promotor state transitions .the diagonal matrix , responsible for changes in protein numbers , has elements with .\ ] ] the shift operators are defined according to we now introduce the nondimensional variables , , and , where .then , the master equation ( [ eq:2 ] ) for the rescaled probability density , , becomes \bm{\rho},\ ] ] with dimensionless parameters and .the matrices are given by ,\ ] ] and the operators and are defined in terms of taylor series expansions ( in small ) which replace the shift operators with assume that is a small parameter , so that there is a large average number of proteins , and assume also that the parameter is small , which reflects rapid switching between promotor states compared to the rate of protein production / degradation .because we have two small parameters in our system , and , when perusing an asymptotic solution , we must carefully consider how the limit and is taken , or more practically , how large is compared to .the fluctuations in the promotor state are controlled by , and in the limit , the transitions are infinitely fast so that the promotor behaves deterministically .the fluctuations in protein levels is controlled by , and in the limit the protein production / degradation behaves deterministically .since we are concerned primarily with rare transitions driven by promotor fluctuations and not by fluctuations in the protein production / degradation reaction , we assume that ( i.e. , ) .taking both limits , and , yields the fully - deterministic dynamics , where note the symmetry in the problem ; the deterministic system is unchanged if we exchange .dynamically , the system is bistable for . at is a saddle - node bifurcation , and for there is a single stable fixed point .we consider the case of bistability and chose . in fig .[ fig : det ] the nullclines and fixed points are shown . .the black curve shows the -nullcline and the grey curve shows the -nullcline .the green circles show the stable fixed points , the red circle shows the unstable saddle .the blue curve shows a stochastic trajectory leaving the lower basin of attraction to reach the separatrix.,width=377 ] the two stable fixed points are located near the corners , and the unstable saddle point is located along the separatrix .arrows show the eigenvectors of the jacobian with their direction determined by the sign of the eigenvalues , all of which are real .a stochastic trajectory that starts at the lower stable fixed point remains nearby for a long period of time until a rare sequence of jumps carries it to the separatrix . because the separatrix is the stable manifold , trajectories are most likely to exit near the unstable saddle point .to remove protein noise from the system , consider the limit , with fixed , so that the protein production / degradation process is deterministic within each promotor state , while the promotor state remains random .the master equation ( [ eq:7 ] ) becomes where and .\ ] ]the focus of the remaining analysis is to obtain an accurate approximation of the first exit time density function ( fetd ) for the qd process to evolve from one metastable state to the another . 
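Before turning to exit times, the bistable structure of the deterministic limit can be reproduced with a short numerical sketch. The production terms below assume a quasi-equilibrated promotor with dimer-binding weights x^2/kappa and y^2/kappa; this is one plausible reduced form, used only to illustrate the phase-plane picture of fig. [ fig : det ] (two stable fixed points near the corners separated by a saddle), not the exact reduced equations of the text.

```python
# Deterministic skeleton of a mutual repressor (assumed Hill/dimer form):
# locate the fixed points and classify their stability numerically.
import numpy as np
from scipy.optimize import fsolve

kappa = 0.05                     # assumed nondimensional dissociation constant

def rhs(v):
    x, y = v
    D = 1.0 + x**2 / kappa + y**2 / kappa
    return np.array([(1.0 + x**2 / kappa) / D - x,
                     (1.0 + y**2 / kappa) / D - y])

def jac(v, h=1e-6):              # centered-difference Jacobian
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (rhs(v + e) - rhs(v - e)) / (2 * h)
    return J

for guess in ([0.9, 0.1], [0.1, 0.9], [0.5, 0.5]):
    fp = fsolve(rhs, guess)
    eigs = np.linalg.eigvals(jac(fp))
    kind = "stable" if np.all(eigs.real < 0) else "saddle/unstable"
    print(f"fixed point ({fp[0]:.3f}, {fp[1]:.3f}): "
          f"eigenvalues {np.round(eigs, 3)} -> {kind}")
```

For this parameter choice the two asymmetric fixed points are stable and the symmetric one is a saddle, mirroring the geometry described above.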
to obtain the fetd ,we supplement an absorbing boundary condition to the governing equation along the separatrix , of the deterministic dynamics , which is the barrier the process must surmount in order to transition to the other metastable state . for the master equation ( [ eq:2 ] ), the absorbing boundary condition is simply for the qd ck equation ( [ eq:13 ] ) , the absorbing boundary condition is where is the unit vector normal to the boundary .then , the domain for the qd process with the absorbing boundary is given by note the choice of the lower triangular region ( instead of the upper triangular region with ) is arbitrary due to the symmetry in the problem .to see how the absorbing boundary on sets up the exit time problem , define to be the random time at which the separatrix is reached for the first time , given that the process starts at the stable fixed point .consider the survival probability which is the probability that .the fetd ( or probability density function for ) is then the fetd for the qd process can be approximated using perturbation methods as follows .suppose we have a ck equation of the form where is a linear operator acting on the continuous and discrete state variables of the density function . in the case of the qd ck equation ( [ eq:13 ] ), we have assume that has a complete set of eigenfunctions , , so that the solution can be written for some constants , and that all of the eigenvalues , , are nonnegative .assume further that if we impose a reflecting boundary condition on then the principal eigenvalue , , is the only zero eigenvalue and the eigenfunction , , is the stationary density of the process ( after appropriate normalization ) .furthermore , we assume that the stationary density is exponentially small on the boundary .then , if instead we place an absorbing boundary condition on , the stationary density no longer exists and the principal eigenvalue is exponentially small in , while the remaining eigenvalues are much larger so that the solution resembles the stationary density after some initial transients .it is this difference in time scales that we exploit to approximate the fetd .a universal feature of the fetd for rare events is its exponential form ( which follows from the separation of time scales ) because the time dependence is , for . indeedthe fetd ( [ eq:24 ] ) is thus , for large times the fetd is completely characterized by the principal eigenvalue , .the mean exit time is simply , which means that the eigenvalue also has the physical interpretation of the rate at which metastable transitions occur . to obtain an approximation of this eigenvalue, we use a spectral projection method , which makes use of the adjoint operator .consider the adjoint eigenfunctions , , , satisfying and so that the two sets of eigenfunctions are biorthogonal .soppose that the boundary condition is ignored and is approximated by the stationary density , . 
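Operationally, the exponential form of the FETD means that, given a sample of first exit times (for instance from repeated simulations of the kind sketched earlier), the principal eigenvalue can be estimated as the reciprocal of the sample mean, and exponentiality can be checked through the coefficient of variation. The short sketch below uses synthetic stand-in data and is only meant to illustrate that workflow, before the derivation of the eigenvalue approximation continues below.

```python
# Given measured first exit times, estimate lambda_0 = 1/mean and check that
# the distribution is close to exponential (coefficient of variation ~ 1).
import numpy as np

rng = np.random.default_rng(3)
# stand-in data: replace with exit times collected from simulation
exit_times = rng.exponential(scale=250.0, size=2_000)

lam0 = 1.0 / exit_times.mean()          # transition rate; mean exit time = 1/lam0
cv = exit_times.std() / exit_times.mean()
print(f"estimated lambda_0 = {lam0:.4e},  mean exit time = {1.0 / lam0:.1f}")
print(f"coefficient of variation = {cv:.3f}  (~ 1 for an exponential FETD)")
```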
by application of the divergence theorem ,the adjoint operator is such that where the boundary contribution is nonzero because does not satisfy the absorbing boundary condition .then , since , the principal eigenvalue is in the remainder of this section , we use ( [ eq:30 ] ) to approximate the principle eigenvalue by approximating the stationary density , , and the adjoint eigenfunction , .in this section , we obtain an approximation to the quasi - stationary density , , using a wkb approximation method .we begin by illustrating the procedure for the qd process .that is , we seek an approximation of the solution to the equation {\bar{\bm{\rho}}}(x , y ) = 0.\ ] ] consider the anzatz },\ ] ] where are -vectors and both and are scalar functions .note that in other studies of gene regulation models where similar methods are used , the small parameter in the exponential is .this difference in scaling arrises from the assumption that the metastable transitions are driven by fluctuation in the promotor and not the production of protein .substituting ( [ eq:32 ] ) into ( [ eq:31 ] ) and collecting leading order terms in yields \mathbf{r}_{0 } = 0,\ ] ] where the prefactor term ( up to a normalization factor ) is simply the nullspace of the matrix $ ] , and we assume that it is normalized so that . using theorem 3.1 in ref . , we can provide necessary and sufficient conditions for to be unique and positive . for any fixed , there exists a unique vector , satisfying ( [ eq:33 ] ) if and only if the diagonal matrix is such that at least two of its elements have oposite sign .that is , there exist , with , such that .it is interesting to speculate that once the solution to ( [ eq:61 ] ) is substituted into the matrix that this requirement is satisfies for all .however , this is not necessarily the case , which means that the quasi - stationary density is restricted to a subdomain where .it is obvious that the protein levels must be bounded within the domain when protein production / degradation is deterministic .that is , if the gene remains in the unrepressed state , the protein level tends toward the mean value ( or ) but never exceeds it since protein levels do not fluctuate ( unless the promotor state fluctuates ) .however , it is not as obvious that the total amount of protein is further bounded so that , which means the domain is further restricted to the upper triangular portion of the unit square .this means that once a trajectory enters , it remains in this domain for all time and can not escape . to show this ,we need simply look at the rate of protein production / degradation for each state normal to the line .this gives the rate for each of the promotor states ( on , both on , on ) when both protein levels satisfy .these rates are given by the diagonal components of the matrix , where and are defined in ( [ eq:14 ] ) .it is evident that when no repressor is bound and both proteins are produced , the flux across this line is in the positive direction , and when one repressor is bound , there is no flux across this line .the leading order result ( [ eq:33 ] ) defines a nonlinear partial differential equation for , = 0.\ ] ] the function is referred to as the hamiltonian for the system , due to the similarity to classical hamiltonian dynamics .an implicit assumption , is present to ensure that is the gradient of the scalar field .the above system can be solved by the method of characteristics to obtain the system of ordinary differential equations ( see pg . 
360 ) , where each variable is parameterized by , which should not be confused with physical time .the above system of ordinary differential equations is supplemented with an equation for the stability lanscape and solutions specify along a curve in the plane .a family of curves is defined by specifying cauchy data along a curve parameterized by .one of the difficulties found in this method is determining cauchy data . at the stable fixed points ,the value of each of the variables is known ( i.e. , and ) but data at a single point can not hope to generate a family of rays .therefore , data must be specified on an ellipse surrounding the fixed point , using the hessian matrix , .\ ] ] expanding the function in a taylor series around the fixed point yields the quadratic form , as its leading order term .cauchy data is specified on the ellipse for some suitably small . in practice, must be small enough to generate accurate numerical results , but large enough so that trajectories can be generated to cover the domain .on the elliptical contour , the initial values for are it can be shown that the hessian matrix is the solution to the algebraic riccati equation , where ,\quad c = \left [ \begin{array}{c c } \frac{\partial^{2 } \mathcal{h}}{\partial p \partial x } & \frac{\partial^{2 } \mathcal{h}}{\partial p \partial y } \\\frac{\partial^{2 } \mathcal{h}}{\partial q \partial x } & \frac{\partial^{2 } \mathcal{h}}{\partial q \partial y } \end{array}\right],\ ] ] evaluated at , , and .this equation can be transformed into a linear problem ( in order to actually solve it ) by making the substitution to get an equation for the scalar function is found by substituting ( [ eq:32 ] ) into ( [ eq:31 ] ) and keeping second order terms in , to get \mathbf{r}_{1 } = { \frac{\partial } { \partial x}}(f\mathbf{r}_{0 } ) + { \frac{\partial } { \partial y}}(g\mathbf{r}_{0 } ) - \left({\frac{\partial { k}}{\partial x}}f - { \frac{\partial { k}}{\partial y}}g\right)\mathbf{r}_{0}.\ ] ] for solutions to exist , the fredholm alternative theorem requires that for = 0\ ] ] it must be that = 0.\ ] ] note that if spans the right nullspace of , then the left nullspace is also one dimensional . after rewriting ( [ eq:49 ] ), we have the pde for given by although the solution to this equation can be formulated by the method of characteristics , it requires values of the vectors and , which in turn require the solution to the ray equations ( [ eq:38 ] ) . since rays must be integrated numerically in most cases , solving ( [ eq:50 ] ) along its own characteristics is impractical .however , ( [ eq:50 ] ) can be computed along the characteristic curves of ( [ eq:38 ] ) as follows .first , differentiating ( [ eq:50 ] ) along characteristics yields using the fact that along characteristics , we can define then , after combining ( [ eq:50 ] ) and ( [ eq:51 ] ) we have that .\ ] ] the above requires values of and , which are not provided by the system ( [ eq:38 ] ) . to obtain these ,a formula is needed to relate the hessian matrix , , of to .then , the hessian matrix can be computed by expanding the system of ray equations ( [ eq:38 ] ) .first , differentiate both sides of equation ( [ eq:33 ] ) to get \nabla \mathbf{r}_{0 } = -\left ( \nabla a + \nabla ( p f ) + \nabla ( q g ) \right ) \mathbf{r}_{0},\ ] ] the fredholm alternative theorem requires that which is always true since and the general solution to ( [ eq:54 ] ) is where is the pseudoinverse of the matrix and is an unknown constant . 
since the vector is normalized so that its entries sum to one , it follows that . summing over both sides of equation ( [ eq:114 ] ) then yields . thus , we have that . equation ( [ eq:125 ] ) gives a relationship between , , and . to obtain the hessian matrix , , away from the fixed point , the ray equations are extended to include the variables for . a good choice for the current problem is to take and , where is a point on the initial curve defined by ( [ eq:42 ] ) . the hessian matrix is then obtained using = z \left[\begin{array}{c c } x_{1 } & x_{2}\\ y_{1 } & y_{2 } \end{array}\right].\ ] ] as long as the matrix on the rhs is invertible , the matrix can be obtained along characteristics and can be integrated numerically using ( [ eq:53 ] ) and ( [ eq:54 ] ) . the dynamics for the extended variables ( [ eq:55 ] ) are given by where and is the jacobian matrix for the system ( [ eq:38 ] ) . one can choose different variables with which to extend the system based on what works in practice , and the only thing one must change is the initial conditions . for our choice , the initial conditions are where the matrix is the solution to ( [ eq:44 ] ) . the above analysis can be repeated to obtain a hamiltonian system for the full process ( with protein noise ) , but a choice must be made for how the limit , is taken . first consider the equation for the quasi - stationary density , , for the full process ( [ eq:7 ] ) {\bar{\bm{\rho}}}(x , y ) = 0.\ ] ] here , the domain is the cone . the quasi - stationary density is assumed to have the form },\ ] ] where again is a -vector and is a scalar function representing the stability landscape . note that we have ignored higher order terms here because we only want the hamiltonian function for comparison to the qd process . substituting ( [ eq:58 ] ) into ( [ eq:57 ] ) does not lead to any meaningful equation for at leading order unless we make an assumption about how the limit is taken . there are two relevant cases : and . in the former case , one recovers the qd process ( [ eq:33 ] ) , and in the latter case , collecting terms of leading order in , with , yields \mathbf{r } = 0,\ ] ] where and thus , the hamiltonian for the full process is ,\ ] ] which we refer to as the full hamiltonian . the differences between the full process and the qd process are nicely illustrated by comparing their associated hamiltonians . notice that the full hamiltonian ( [ eq:61 ] ) is a transcendental function of and , whereas the hamiltonian for the qd process ( [ eq:36 ] ) is a cubic polynomial in and . one can view this as a taylor series expansion of the full hamiltonian about . for this reason , the qd process , as an approximation for the full process with a small amount of protein noise , is only valid within a neighborhood of a deterministic fixed point . this is , of course , just a reflection of the fluctuation dissipation theorem . an example of numerical integration ( for details regarding numerics see the appendix ) of the ray equations ( [ eq:38 ] ) for the qd ( [ eq:36 ] ) and full ( [ eq:61 ] ) hamiltonians is shown in fig . [ fig : rays ] .
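as a concrete illustration of how the ray ( characteristic ) equations can be integrated in practice , the following minimal python / scipy sketch launches a single characteristic from cauchy data on a small ellipse around a stable fixed point , in the spirit of the procedure described above and in the appendix . the hamiltonian callable ham , the fixed point ( xs , ys ) and the hessian matrix z are placeholders to be supplied by the user ( for instance the qd or full hamiltonian of this section ) ; the derivatives of the hamiltonian are taken by finite differences for brevity , whereas analytical derivatives would normally be preferred .

import numpy as np
from scipy.integrate import solve_ivp

def grad(ham, x, y, p, q, h=1e-6):
    # central finite differences of the hamiltonian in its four arguments
    dHx = (ham(x + h, y, p, q) - ham(x - h, y, p, q)) / (2*h)
    dHy = (ham(x, y + h, p, q) - ham(x, y - h, p, q)) / (2*h)
    dHp = (ham(x, y, p + h, q) - ham(x, y, p - h, q)) / (2*h)
    dHq = (ham(x, y, p, q + h) - ham(x, y, p, q - h)) / (2*h)
    return dHx, dHy, dHp, dHq

def ray_rhs(s, u, ham):
    # u = (x, y, p, q, phi); characteristic (ray) equations of H(x, y, p, q) = 0
    x, y, p, q, phi = u
    dHx, dHy, dHp, dHq = grad(ham, x, y, p, q)
    return [dHp, dHq, -dHx, -dHy, p*dHp + q*dHq]

def shoot_ray(ham, xs, ys, Z, theta, delta=1e-2, s_max=50.0):
    # cauchy data on a small ellipse around the stable fixed point (xs, ys):
    # x0 = fixed point + delta*(cos, sin), p0 = Z @ (x0 - fixed point),
    # phi0 = 0.5 * (x0 - fixed point)^T Z (x0 - fixed point)
    dx = delta*np.array([np.cos(theta), np.sin(theta)])
    p0 = Z @ dx
    u0 = [xs + dx[0], ys + dx[1], p0[0], p0[1], 0.5*dx @ Z @ dx]
    return solve_ivp(ray_rhs, (0.0, s_max), u0, args=(ham,), rtol=1e-8, atol=1e-10)

sweeping the angle theta over the initial ellipse then generates a family of rays of the kind shown in fig . [ fig : rays ] .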
.the triangular restricted domain boundary for the qd process is shown with black lines .the two symmetric stable fixed points are represented by green circles , and the unstable fixed point by a red circle .the separatrix along which an absorbing boundary condition is imposed is shown by a dashed black line .parameter values used are , for the qd hamiltonian , and for the full hamiltonian.,width=453 ] the qd rays are shown above the separatrix for comparison . notice that the qd rays are contained within a triangular domain , while the rays from the full hamiltonian cover the entire domain .this is due to the domain restriction that occurs when removing the protein fluctuations from the process . up to terms that are exponentially small in , the adjoint eigenfunction , , satisfies {\bm{\xi}_{0 } } = 0.\ ] ] to make things easier , we change coordinates with so that this transforms the absorbing boundary , , to the vertical line , .then , ( [ eq:66 ] ) becomes {\bm{\xi}_{0}}(\tau,\sigma ) = 0.\ ] ] where where , , and are defined in ( [ eq:14 ] ) and ( [ eq:15 ] ) .the absorbing boundary condition is then before proceeding , it is convenient to make the following definitions . in the rest of this section , we make frequent use of the eigenvectors ( right eigenvectors and left eigenvectors ) and eigenvalues , , satisfying we normalize the two sets of eigenvectors ( which are biorthogonal ) so that .note that because the matrices and are functions of , so are the eigenpairs .it is easily shown that one of the eigenvalues is zero for all values of , which we set to .the right eigenvector , , is then given by the nullspace of the matrix furthermore , the corresponding left eigenvector is simply it is convenient to define distinct notation for the eigenpairs evaluated on , with at the boundary one of the eigenvalues , say , vanishes and the eigenspace for the zero eigenvalue is degenerate ( i.e. there are two zero eigenvalues but the nullspace is one dimensional ) which means that , and . the approximation of the adjoint eigenfunction proceeds using singular perturbation methods , along the lines of .three solutions are found which are valid in different regions of the domain : an outer solution , a boundary layer solution for the strip near the absorbing boundary , and a transition layer solution in the overlap region between the other two .away from the boundary , the exact solution ( that does not satisfy the boundary condition ) is to obtain a uniform asymptotic approximation that also satisfies ( [ eq:71 ] ) , a boundary - layer solution is needed .consider the stretched variable . to leading order in , the boundary - layer solutions , ,satisfies where .the solution has the form where , is the only eigenpair ( on the boundary ) with a nonzero eigenvalue .however , the eigenvalue , , is negative for all values of , and in order to obtain a bounded solution in the limit we set .the vector is the generalized left eigenvector satisfying and is given by at the boundary , the solution is and the boundary condition ( [ eq:71 ] ) requires so that .thus , up to a single unknown constant , , which must be determined by matching , the boundary - layer solution is because is unbounded in the limit , it is not possible to match it to the outer solution , .we can think of the term , , in the boundary - layer solution is a truncated taylor series expansion of the true solution around . 
to match the boundary - layer and outer solutions ,a transition - layer solution is required for the strip of width along the boundary .consider the stretched coordinate .keeping terms to , the transition - layer solution , , satisfies it is less clear how to truncate the above equation to obtain a leading order transition - layer solution .because we must match the outer solution , , to the boundary layer solution that has the generalized eigenvector , we try a solution of the form where and are unknown scalar functions . in the limit , the deterministic flux across the boundary , ( with given by ( [ eq:12 ] ) )vanishes and the eigenvector .that is , the eigenvalue , corresponding to the eigenvector , vanishes on the boundary .furthermore , it can be shown that where substituting ( [ eq:84 ] ) into ( [ eq:83 ] ) yields to obtain the unknown functions we project ( [ eq:87 ] ) with the right eigenvectors , , .after applying these projections ( using the fact that , , , and ) and collecting leading order terms in , we get where it turns out that is related to the curvature of the stability landscape normal to the separatrix ; that is , if we define then at , the curvature vanishes and changes sign but is always negative ( with ) at the unstable fixed point , .divide the separatrix ( and ) into three regions : , , and .the first region is ignored because it is in part of the domain excluded from the stationary density function ( see sec .[ sec : qsd ] ) .the second region contains the unstable fixed point , and the third we can ignore as only extremely rare trajectories cross the separatrix in this region . up to an unknown constant , the solutions to ( [ eq:88 ] ) and ( [ eq:89 ] ) are }du,\\ a_{1}(\tau , s ) & \sim & \hat{a}{\exp\left [ -\frac{1}{2}\tilde{\mu}_{1}^{(\sigma)}(\tau ) s^{2 } \right]}.\end{aligned}\ ] ] the transition layer solution is then }du{\mathbf{1}}\right .\\ \nonumber & & \qquad\left .+ { \exp\left [ -\frac{1}{2}\tilde{\mu}_{1}^{(\sigma)}(\tau ) s^{2 } \right]}\bm{\chi}(\tau , s)\right).\end{aligned}\ ] ] the three solutions can now be matched .first , matching the transition layer solution to the boundary layer solution is done using the van - dyke rule . 
in terms of the boundary layer variable , ,the transition layer solution is matching terms with the boundary layer solution yields the composite boundary / transition layer solution is then }du{\mathbf{1}}\right.\right.\\ \nonumber & & \qquad\qquad \left.\left.+{\exp\left [ -\frac{1}{2}\tilde{\mu}_{1}^{(\sigma)}(\tau ) \frac{\sigma^{2}}{\epsilon } \right]}\frac{\tilde{{\nu}}^{(\sigma)}(\tau)}{\tilde{\mu}_{1}^{(\sigma)}(\tau ) } \bm{\chi}(\tau,\frac{\sigma}{\sqrt{\epsilon}})\right)\right]\end{aligned}\ ] ] the final unknown constant , , is determined by matching to the outer solution so that which implies that in order to evaluate the term in the numerator of the eigenvalue formula ( [ eq:30 ] ) , we require the adjoint eigenfunction evaluated on the boundary ( in a neighborhood of the unstable fixed point ) which is we now have all of the components necessary to approximate the principal eigenvalue , using the formula ( [ eq:30 ] ) .first , for the term in the denominator , we can approximate the adjoint eigenfunction with the outer solution and the ( unnormalized ) stationary density with ( [ eq:32 ] ) ( the higher order term can be ignored ) .then the term in the denominator is simply the normalization factor , which can be approximated using laplace s method to get } da \sim \frac{2\pi\epsilon}{\sqrt{\mbox{det}(z)}},\ ] ] where is the hessian matrix ( [ eq:40 ] ) of at the stable fixed point .note that we have used the fact that and that the vector is normalized so that its entries sum to one .the term in the numerator of ( [ eq:32 ] ) requires the approximation ( [ eq:99 ] ) of the adjoint eigenfunction on the absorbing boundary .the integral can also be approximated using laplace s method , with }d\tau \\ \nonumber & \sim & \frac{\epsilon b\sqrt{\pi}e^{-{k}(x_{u},y_{u})}}{b\sqrt{\pi } - \epsilon \sqrt{2\tilde{\mu}_{1}^{(\sigma)}(\tau_{u } ) } } \sqrt{\frac{2\tilde{\mu}_{1}^{(\sigma)}(\tau_{u})}{\tilde{\phi}''(\tau_{u})}}\bm{\zeta}^{t}\hat{g}(0)\tilde{\mathbf{r}}_{0}(\tau){\exp\left [ -\frac{1}{\epsilon}\phi(x_{u},y_{u } ) \right]}.\end{aligned}\ ] ] for convenience , we have defined functions on the boundary in the variable with where and are defined in ( [ eq:67 ] ) . although the quantities and must be computed numerically , the remaining unknown terms can be computed analytically by exploiting the reflection symmetry of the problem . 
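as a brief aside , the laplace approximation used above for the normalization factor is easy to check numerically ; the sketch below compares a direct two - dimensional quadrature of exp ( -phi / epsilon ) with the closed - form estimate 2 pi epsilon / sqrt ( det z ) for a toy stability landscape . the quartic landscape and the value of epsilon are purely illustrative and are not those of the gene circuit model .

import numpy as np
from scipy.integrate import dblquad

eps = 0.05
# toy landscape with its minimum at the origin; its hessian there is Z = diag(2, 4)
phi = lambda x, y: x**2 + 2*y**2 + 0.1*x**4
Z = np.array([[2.0, 0.0], [0.0, 4.0]])

num, _ = dblquad(lambda y, x: np.exp(-phi(x, y)/eps), -1.0, 1.0, -1.0, 1.0)
lap = 2*np.pi*eps/np.sqrt(np.linalg.det(Z))
print(num, lap)   # the two values agree to a few percent for small eps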
along , we have that and so that the equation for and ( [ eq:33 ] ) can be written as \tilde{\mathbf{r}}_{0}(\tau ) = 0,\ ] ] where we have defined . the stability landscape function on the boundary is then . the above is just an eigenvalue problem with three possible solutions , one of which can be excluded because there is a zero eigenvalue corresponding to the nullspace of . it can be shown that if the diagonal elements of are such that at least two have opposite sign , then only one of the remaining two eigenvalues has a corresponding positive eigenvector ( must have positive elements ) , making the solution to ( [ eq:103 ] ) unique . it turns out that this is only true for , which is due to the domain restriction caused by removing protein fluctuations . the result is . we also have that and finally , using ( [ eq:79 ] ) , ( [ eq:70 ] ) , and ( [ eq:106 ] ) we get . combining these components together , we have the final result that \sqrt{\frac{\tilde{\mu}_{1}^{(\sigma)}(\tau_{u})\mbox{det}(z)}{\tilde{\phi}''(\tau_{u } ) } } { \exp\left [ -\frac{1}{\epsilon}\phi(x_{u},y_{u } ) \right]},\ ] ] where , and is given by ( [ eq:90 ] ) . as expected , the eigenvalue is exponentially small in , which means that the height of the stability landscape at the unstable fixed point , , must be approximated as accurately as possible . the remaining terms are often referred to as the ` prefactor ' , and except for the quantity , all of the terms in the prefactor ( the derivatives of the stability landscape function and ( [ eq:108 ] ) ) represent properties local to the fixed points . the remaining term in the prefactor depends on the function , which depends on properties of the process not local to the fixed points and must be computed numerically . in this section , the results gathered throughout this paper are used to explore how removing the intrinsic noise that arises from protein production / degradation affects the random process . in particular , we examine the stability landscape and the metastable transition times . first , in fig . [ fig : level_curves ] the numerical solutions to the ray equations ( [ eq:38 ] ) are used to generate level curves of the stability landscape function , , for both the qd ( [ eq:36 ] ) and the full ( [ eq:61 ] ) hamiltonians . black lines show the domain boundary for the qd process , and the dashed line shows the boundary of positive protein levels . ( a ) the qd result is compared to the full process with so that the protein noise is weak compared to promotor noise . ( b ) same as ( a ) but with so that the protein noise strength is comparable to promotor noise . parameter values are the same as fig . [ fig : rays ] . for presentation , the level curves are shown in the plane , with the separatrix along the left edge ( ) of each frame . in fig . [ fig : level_curves]a , level curves for the full process are shown for so that the protein noise is small compared to promotor noise . recall that the parameter controls the strength of protein noise relative to the strength of promotor noise so that there is no protein noise in the limit . the resulting curves match closely in a neighborhood of the stable fixed point and extend out toward the unstable saddle point . as expected , the level curves begin to diverge the farther away from either fixed point they are .
in fig .[ fig : level_curves]b , the strength of the protein noise is increased , with , and in this case , the stability landscape of the two processes are quite different , matching only near the fixed points of the deterministic dynamics . as a result of the domain restriction effect , level curves of the full process extend into regions the qd process is excluded from .this is because protein levels can only cross above the line , not below it , when there is no protein noise .this is also true of the lines and .the domain restriction effect is eliminated when protein noise is added back into the process , even if it is very small compared to promotor noise . for the model considered here ,the effect is of no serious consequence for metastable transitions as the restricted domain still contains all three fixed points .however , a model of a more complex gene circuit might be significantly affected by removing protein noise especially if this restricts the domain for the protein levels in such a way as to generate qualitatively different behavior , which would imply a nontrivial contribution of protein noise , no matter how negligible it may be .because of the symmetry in the problem , we obtained analytical results for various quantities on the separatrix , including the shape stability landscape .we can use these results to obtain an analytical approximation of the probability density for the position along the separatrix a trajectory passes through as it transitions from one basin of attraction to another . using the results of sec .[ sec : eval ] , the stationary density along the separatrix is given by }}{\int_{0}^{1}e^{-\tilde{{k}}(\tau)}{\exp\left [ -\frac{1}{\epsilon}\tilde{\phi}(\tau ) \right]}d\tau},\ ] ] where we remind the reader that and along the separatrix .the only term that can not be obtained analytically is the function , which can be ignored as a first approximation . for simplicity, we also average over the promotor state to get the scalar marginal probability density for the exit point .then , using laplace s method , the exit density is },\quad \tau\in(0,1),\ ] ] where is given by the qd exit density approximation is shown in fig .[ fig : exitdist ] along with two histograms obtained from monte - carlo simulations . , with .the black curve shows the analytical approximation for the qd ( ) process , and the symbols show histograms from monte - carlo simulations for different values of .,width=377 ] while the histogram for is close to the qd approximation , it is evident that some trajectories pass through the separatrix in the interval , which is impossible without protein noise .this effect becomes negligible when the protein noise is reduced to .the fetd for a trajectory , starting from a stable fixed point , to reach the separatrix is asymptotically exponential in the large time limit , and the timescale is determined by the principal eigenvalue , , from equation ( [ eq:109 ] ) .we can then approximate the mean exit time with . in fig .[ fig : mfpt ] the mean exit time is shown on a log scale as a function of along with results from monte - carlo simulations . .symbols represent averaged monte carlo simulations for various values of .the dashed black line is the approximation , , for the qd process ( ).,width=453 ] notice that the approximation and the monte - carlo simulations are asymptotically linear as . 
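the asymptotically exponential exit - time distribution makes the mean exit time easy to tabulate once the barrier height and the prefactor are known ; the short sketch below evaluates 1 / lambda_0 = c * exp ( phi_u / epsilon ) on a grid of epsilon values and recovers the log - linear behaviour just described . the barrier height phi_u and the prefactor c are illustrative placeholders , not the values computed for the model .

import numpy as np

phi_u, prefac = 0.8, 2.0            # illustrative barrier height and prefactor
eps = np.array([0.20, 0.10, 0.05, 0.025])

mean_exit = (1.0/prefac)*np.exp(phi_u/eps)            # ~ 1/lambda_0
slope = np.polyfit(1.0/eps, np.log(mean_exit), 1)[0]  # slope of log(T) vs 1/eps
print(mean_exit)
print(slope)                        # recovers the barrier height phi_u = 0.8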
for the approximation ,the slope of this line is determined by the height of the stability barrier , , while the prefactor affects the vertical shift .from the monte carlo results ( symbols with grey lines ) we see that the mean exit time converges to the approximation as .however , it is clear that the slope of the analytical curve is slightly different then that of the monte carlo results even when is small .thus , we may think of the mean exit time approximation for the qd process as an asymptotic approximation of the full process in terms of the small parameter so long as is also small but not too small .understanding how different noise sources affect the dynamics of a gene circuit is essential to understand how different regulatory components interact to produce the complex variety of environmental responses and behaviors . even if one excludes extrinsic noise sources such as environmental and organism - to - organism variations there are several sources of intrinsic noise , such as fluctuations in translation , transcription , and the conformational state of dna regulatory units .the behavior we are interested in understanding is a transition from one metastable state to another .each metastable state corresponds to the stable steady - state solutions of the underlying deterministic system .the bistable mutual repressor model has two identical stable steady states separated by an unstable saddle node . on small time scales ,the protein levels fluctuate near one of the two stable steady states . on large time scales , fluctuations cause a metastable transition to occur , where the protein levels shift to the other steady state by crossing the separatrix containing the unstable saddle point . to understand how metastable transitions can be induced by promotor noise , we consider a discrete stochastic model of a mutual repressor circuit where the protein levels change deterministically , which we call the qd process .this is done by fixing the promotor state and then taking the thermodynamic , or large system size , limit of the protein production / degradation reactions .we then compare the qd process to the full process that includes protein noise .we find important qualitative differences that persist even when the magnitude of the protein noise is small compared to promotor noise .in particular , without protein noise and after initial transients , the protein levels are restricted such that the total amount of protein is never less than half its maximum possible value ; that is , the total number of proteins is such that , where is the protein production rate and is the degradation rate . said another way , assuming that the random process starts at one of the deterministic stable fixed points , promotor fluctuations could never push the protein copy numbers so that , for example , only a single copy of each protein is present . in contrast , the stability landscape for the full process and monte - carlo simulations show that protein levels are able to reach all positive values . while this restriction does not stop the qd process from exhibiting metastable transitions, more complex gene circuits may require protein fluctuations , even if they are very small , in order to function correctly .in this appendix , we summarize the numerical methods and tools used throughout the paper . most numerical work is performed in python , using the numpy / scipy package . 
for more computationally - expensive tasks, we use scipy s weave package to include functions written in c , which allows us to use the gnu scientific library for numerical integration of the ray equations ( [ eq:38 ] ) , and for random number generators used in monte carlo simulations .there are a few notable observations regarding integration of the ray equations .first , characteristic projections , , have a tendency to stick together along certain trajectories , peeling off one at a time ( see fig . [fig : rays ] ) . to adequately cover the domain with rays ,a shooting method must be used to select points on the cauchy data . formore details on this see ref .we found that the simplest method was to use the secant method ( we use the `` brentq '' function in the scipy.integrate package ) to minimize the euclidian distance between the final value of along the separatrix and the saddle node .this method is convenient since it does not require knowledge of the hessian matrix , .second , the value of used to generate cauchy data must be chosen small enough to get accurate results. however , we found that if it is chosen too small , rays are no longer able to cover the domain , and more importantly , we could no longer generate a ray that reaches the unstable fixed point on the separatrix .for the mutual repressor model , the trajectory connecting the stable fixed point to the saddle is one of the curves along which characteristics tend to stick to each other .suppose that is the point on the cauchy data ( [ eq:126 ] ) that generates the ray that connects the fixed points .then small perturbations , , cause the characteristic , , to diverge sharply away from the saddle .this not only makes it difficult to compute , but also creates difficulties for computing the function ( see sec .[ sec : prefac ] ) . since the expanded set of ray equations ( [ eq:110 ] ) track the derivatives of , , , and with respect to the point on the cauchy data , which , for values of near , becomes very large as the ray approaches the saddle point . as the expanded variables become very large , computing using equation ( [ eq:56 ] ) is unstable . furthermore , this effect becomes worse as the initial value , , goes to zero .jmn would like to thank james p. keener for valuable discussions throughout this project .this work was supported by award no kuk - c1 - 013 - 4 made by king abdullah university of science and technology ( kaust ) .10 charles doering , khachik sargsyan , and leonard sander .extinction times for birth - death processes : exact results , continuum asymptotics , and the failure of the fokker planck approximation ., 3(2):283299 , 2005 . | the stochastic mutual repressor model is analysed using perturbation methods . this simple model of a gene circuit consists of two genes and three promotor states . either of the two protein products can dimerize , forming a repressor molecule that binds to the promotor of the other gene . when the repressor is bound to a promotor , the corresponding gene is not transcribed and no protein is produced . either one of the promotors can be repressed at any given time or both can be unrepressed , leaving three possible promotor states . this model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle , and the case of small noise is considered . 
on small time scales , the stochastic process fluctuates near one of the stable fixed points , and on large time scales , a metastable transition can occur , where fluctuations drive the system past the unstable saddle to the other stable fixed point . to explore how different intrinsic noise sources affect these transitions , fluctuations in protein production and degradation are eliminated , leaving fluctuations in the promotor state as the only source of noise in the system . perturbation methods are then used to compute the stability landscape and the distribution of transition times , or first exit time density . to understand how protein noise affects the system , small magnitude fluctuations are added back into the process , and the stability landscape is compared to that of the process without protein noise . it is found that significant differences in the random process emerge in the presence of protein noise . |
forward - starting options are path dependent put / call financial contracts characterized by having a strike price expressed in terms of a pre - specified percentage of the asset price taken at a contractually fixed intermediate date ] , with denoting the maturity of the contracts .we assume that the market is made up of a single asset and a bond living in a probability space , with a filtration , made complete and right continuous to satisfy the _ usual hypotheses _( see ) .moreover , we assume we are in absence of arbitrage opportunity and that is a risk neutral measure selected by some criterion . before formalizing mathematically the new contract previously described ,we need to recall briefly the evaluation of classical forward - start options . denoting by the discounting process , the price of a forward - start call with percentage and strike - determination time is given by ,\qquad 0\le t\le t.\ ] ] in market models where the price of a plain vanilla call option is represented by a deterministic homogeneous function of degree in two spatial variables , that is applied to the given pair , the price of a classical ( call ) forward - start option is |\cf_t]={\mathbf{e}}^p[b(t , u ) call(u , s_u , s_u\alpha , t)|\cf_t]\\ \nonumber\!\!\!&=&\!\!\!{\mathbf{e}}^p[b(t , u)call(u , \gamma 1 , \gamma \alpha , t)\big |_{\gamma = s_u } |\cf_t]={\mathbf{e}}^p[b(t , u)(\gamma call(u , 1 , \alpha , t))\big |_{\gamma = s_u } |\cf_t]\\ \label{hom}\!\!\!&=&\!\!\ ! { \mathbf{e}}^p[b(t , u)s_u call(u,1,\alpha , t ) |\cf_t].\end{aligned}\ ] ] scale - invariant models ( see ) certainly hold this property . furthermore ,if the call price is a deterministic function of the current price of the underlying and it does not include any additional stochastic factor , we may conclude that the price of the forward - start option verifies = { \mathbf{e}}^p[b(t , u)s_u|\cf_t ] call(u,1,\alpha , t ) = s_t \ call(u,1,\alpha , t),\ ] ] where in the last equality we used the martingale property of the discounted asset price .notice that in such a case the portfolio having units of the stock replicates " the -value of the forward - start option .sometimes an analogous result may be achieved also in presence of stochastic factors via change of numeraire arriving at a formula of the type = s_t{\mathbf{e}}^{\bar p}[call(u,1,\alpha , t)|\cf_t]\ ] ] for some proper equivalent measure .we want to generalize these contracts allowing the strike - determination time to be random , thus we need to consider a market model ( and hence a probability space ) where the asset price lives together with a random time .we therefore assume that there exists a containing and a filtration , satisfying the usual hypotheses , that makes and adapted processes ( that is to say that is stopping time w.r.t . ) and that we may extend the probability to a new probability measure on the measure space . hence , referring to the previous notation , we have for all and from now on we will assume where .we recall that the filtration stopped at the stopping time will be denoted and it means besides we introduce the classical and fundamental hypothesis * ( h)*every remains a .this assumption , known as the h - hypothesis , is widely used in the credit risk literature ( see e.g. and references therein ) . 
with this hypothesis , we may conclude that remains in general a and a martingale under the extended probability .so we can replace the arbitrage free pricing formula ( [ deterministic ] ) by the more general expression .\ ] ] from now on , this product will be called a * random time forward starting * option and we will refer to it as * rtfs * options .notice that the payoff remains .in what follows we treat only the call case as , under hypothesis ( h ) , a natural extension of the put - call parity holds for these products - { \mathbf{e}}^{q}[b(t , t)(\alpha s_{\tau\wedge t}- s_t)^+|\cal g_t ] = s_t - \alpha s_{\tau\wedge t } b(t , \tau\wedge t).\ ] ] lastly we remark that for , these options are worth something only if the random occurrence happens before maturity , in a sense we could say they are triggered by the random time .in this section we want to understand how to make formula ( [ random ] ) more explicitly computable . as a general consideration , let us notice that \\ \nonumber & = & { \mathbf{e}}^{q}[b(t , t)(s_t-\alpha s_{\tau})^+\mathbf 1_{\{0 < \tau\le t\}}|\cal g_t]+(1-\alpha ) { \mathbf{e}}^{q}[b(t , t)s_t \mathbf 1_{\ { \tau > t\}}|\cal g_t]\\ \nonumber & = & { \mathbf{e}}^{q}[b(t , t)(s_t-\alpha s_{\tau})^+|\cal g_t]\mathbf 1_{\{0 < \tau\le t\}}+ { \mathbf{e}}^{q}[b(t , t)(s_t-\alpha s_{\tau})^+\mathbf 1_{\{t < \tau\le t\}}|\cal g_t]\\ & + & ( 1-\alpha)s_t\mathbf 1_{\{t<\tau\}}- ( 1-\alpha){\mathbf{e}}^{q}[b(t , t)s_t\mathbf 1_{\ { t < \tau\le t\}})|\cal g_t].\end{aligned}\ ] ] having written as . for the first termthere is not much to say , there and it behaves as a call price with strike price , which is completely known at time .the third term represents the guaranteed payoff of this contract , so we have to study the remaining terms .hypothesis ( h ) is going to help us for the last one .we know that is a martingale under and that the event ( since for any , ) , so conditioning internally w.r.t . , we obtain ={\mathbf{e}}^{q}[{\mathbf{e}}^q(b(t , t)s_t\mathbf 1_{\ { t < \tau\le t\}}|\cg_\tau)|\cal g_t]\\ & = & { \mathbf{e}}^{q}[{\mathbf{e}}^{q}(b(t , t)s_t|\cal g_\tau)\mathbf 1_{\ { t< \tau\le t\}}|\cal g_t]={\mathbf{e}}^{q}(b(t,\tau)s_\tau\mathbf 1_{\ { t < \tau\le t\}}|\cal g_t],\end{aligned}\ ] ] where we used the optional sampling theorem , assuming enough integrability of the asset price process , as almost surely .the second term may be rewritten as ={\mathbf{e}}^{q}[{\mathbf{e}}^q(b(t , t)(s_t-\alpha s_{\tau})^+\mathbf 1_{\ { t < \tau\le t\}}|\cg_\tau)|\cal g_t]\\ & = & { \mathbf{e}}^{q}[b(t,\tau){\mathbf{e}}^{q}(b(\tau , t)(s_t-\alpha s_{\tau})^+|\cal g_\tau)\mathbf 1_{\ { t < \tau\le t\}}|\cal g_t]={\mathbf{e}}^q[b(t,\tau ) c(\tau,\tau , t)\mathbf 1_{\ { t< \tau\le t\}}|\cal g_t],\end{aligned}\ ] ] with defined w.r.t . as in ( [ deterministic ] ) , where we extended the definition to include also . to proceed with our evaluation we have two possibilities .either , i.e. is an time , or not . 
in the first case , some computable situations may occur .for instance , if is a hitting time ( choosing the barrier level ) , we have \mathbf 1_{\{0 < \tau \le t\}}+ { \mathbf{e}}^{q}[b(t , t)(s_t-\alpha h)^+\mathbf 1_{\{t < \tau \le t\}}|\cf_t]\\ & + & ( 1-\alpha)s_t\mathbf 1_{\{t<\tau\}}- ( 1-\alpha){\mathbf{e}}^{q}[b(t , t)s_t\mathbf 1_{\ { t< \tau\le t\}})|\cf_t].\end{aligned}\ ] ] summarizing +(1-\alpha)s_t\mathbf 1_{\{t<\tau\}},\end{aligned}\ ] ] where barrier price " denotes the price of a barrier call option with strike price which is activated as soon as the threshold is reached by the asset price .if this event does not happen before maturity , at maturity the contract is not going to pay zero to the holder as in standard barrier contracts , but -percent of the terminal asset price , since the first three pieces of the formula are zero . therefore , if , this random starting forward option differs from a plain vanilla barrier option by a specific non negative value , never exceeding .other situations may be more or less complex to evaluate , depending upon the definition of the stopping time . in general, since the observable is the asset s price process , it would be interesting to be able to write the pricing formula in terms of , rather than . for that we have the following key lemma ( see , ) .[ key ] for any integrable r.v . , the following equality holds =q(\tau > t|\cg_t)\frac{{\mathbf{e}}^q\big[\mathbf 1_{\{\tau > t\}}y|\cf_t\big]}{q(\tau > t|\cf_t)}.\ ] ] applying this lemma to the second and fourth term of ( [ general ] ) , respectively with and remembering that is , we obtain &=&\mathbf 1_{\{\tau > t\ } } \frac{{\mathbf{e}}^q[b(t,\tau)c(\tau,\tau , t)\mathbf 1_{\{t < \tau\le t\}}|\cf_t]}{q(\tau > t|\cf_t)}\\ \label{key2 } { \mathbf{e}}^q[b(t,\tau)s_\tau\mathbf 1_{\{t < \tau\le t\}}|\cal g_t]&=&\mathbf 1_{\{\tau > t\ } } \frac{{\mathbf{e}}^q[b(t,\tau)s_\tau\mathbf 1_{\{t <\tau\le t\}}|\cf_t]}{q(\tau > t|\cf_t)}.\end{aligned}\ ] ] the above may be rewritten following the hazard process approach .let us denote the conditional distribution of the default time given as and let us notice that for , .in order to apply the so called intensity based approach , we need to assume that for all ( which automatically excludes that ) to well define the so called risk or hazard process with this notation , we rewrite ( [ key1 ] ) and ( [ key2 ] ) as =(1-h_t){\mathbf{e}}^q[b(t,\tau)c(\tau,\tau , t)(h_t - h_t)\mathrm e^{\gamma_t}|\cf_t]\\ \label{key4 } & & { \mathbf{e}}^q[b(t,\tau)s_\tau\mathbf 1_{\{t < \tau\le t\}}|\cal g_t=(1-h_t){\mathbf{e}}^q[b(t,\tau)s_\tau(h_t - h_t)\mathrm e^{\gamma_t}|\cf_t],\end{aligned}\ ] ] but by proposition 5.1.1 ( ii ) page 147 of these conditional expectations may be written as &=&{\mathbf{e}}^q[\int_t^t b(t , u)c(u , u , t ) df_u|\cf_t]\\ \label{key6 } { \mathbf{e}}^q[b(t,\tau)s_\tau(h_t - h_t)|\cf_t]&=&{\mathbf{e}}^q[\int_t^t b(t , u)s_udf_u|\cf_t].\end{aligned}\ ] ] if we have the additional hypotheses for all and ( ind ) : : and are independent , then is independent of , hypothesis ( h ) is automatically satisfied and is deterministic , so the previous formula becomes ={\mathbf{e}}^q[\int_t^t b(t , u)c(u , u , t ) df_u|\cf_t]\\ \nonumber&=&\int_t^t{\mathbf{e}}^q [ { \mathbf{e}}^q(b(t , t)(s_t-\alpha s_u)^+|\cf_u)|\cf_t]df_u=\int_t^t{\mathbf{e}}^q [ b(t , t)(s_t-\alpha s_u)^+|\cf_t ] df_u\\ & = & \int_t^tc(t , u , t)df_u\end{aligned}\ ] ] and the other is treated similarly .we can summarize the above results in the following [ gener1 ] with the above 
notation , we have 1 . under hypothesis the price of a * rtfs * call option is given by \mathbf 1_{\{0< \tau\le t\}}+(1-\alpha)s_t\mathbf 1_{\{\tau > t\}}\\ & + & \mathbf 1_{\{\tau > t\}}{\mathbf{e}}^q[\int_t^t b(t , u)c(u , u , t)\mathrm e^{-(\gamma_u-\gamma_t)}d\gamma_u|\cf_t]\\ & -&(1-\alpha)\mathbf 1_{\{\tau > t\ } } { \mathbf{e}}^q[\int_t^t b(t , u)s_u \mathrm e^{-(\gamma_u- \gamma_t)}d\gamma_u|\cf_t]\end{aligned}\ ] ] 2 . under hypothesis the price of a * rtfs * call option is given by \mathbf 1_{\{0 < \tau\le t\}}\\ & + & \mathbf 1_{\{\tau >t\}}\big [ \int_t^tc(t , u , t)\mathrm e^{-(\gamma_u- \gamma_t)}d\gamma_u+(1-\alpha)s_t\mathrm e^{-(\gamma_t-\gamma_t ) } \big ] \end{aligned}\ ] ] part ( a ) is only the assembling of the various pieces previously presented . as for part ( b ) , because of independence , is deterministic and it gets pulled out of the expectation . since is a martingale measure , the integrands of the last two terms verify the martingale property w.r.t . under . [ rem1 ] if we assume that is absolutely continuous w.r.t .the lebesgue measure , that is to say for some and non negative process called intensity ( see ) , then also is absolutely continuous and if we denote by its density , necessarily we have in this case we have 1 . under hypothesis the price of a * rtfs *call option is given by \mathbf 1_{\{0 < \tau\le t\}}+(1-\alpha)s_t\mathbf 1_{\{\tau > t\}}\\ \nonumber & + & \mathbf 1_{\{\tau > t\}}{\mathbf{e}}^q[\int_t^t b(t , u)c(u , u , t ) \lambda_u\mathrm e^{-\int_t^u\lambda_s ds}du|\cf_t]\\ \nonumber & -&(1-\alpha)\mathbf 1_{\{\tau > t\ } } { \mathbf{e}}^q[\int_t^t b(t , u)s_u \lambda_u\mathrm e^{-\int_t^u\lambda_s ds}du|\cf_t]\end{aligned}\ ] ] 2 . under hypothesis the price of a * rtfs *call option is given by \mathbf 1_{\{0 < \tau\le t\}}\\ \nonumber&+&\mathbf 1_{\{\tau > t\}}\big [ \int_t^tc(t , u , t ) \lambda_u\mathrm e^{-\int_t^u\lambda_s ds}du+(1-\alpha)s_t\mathrm e^{-\int_t^t\lambda_s ds } \big ] \end{aligned}\ ] ] the above formulas , either in presence of independence or not , rely on the knowledge of the model and of the conditional distribution of the default time. hence , deriving an explicit or implementable formula for the pricing will depend heavily on the modelling choices we are going to make . of course , the independent case is much easier than the other and we focus on it in the next section , to show that there are some interesting treatable cases .we will always take .when we no longer assume independence of and , we need to resort to formula ( [ hazard1 ] ) and a reasonable choice is to make the hazard rate depend also on the price process . as a first attempt one may try an affine model , so we decide that where are deterministic , positive and bounded functions , while is a positive process independent of , . 
by using the optional projection theorem ( see theorem vi.57 ) , we may arrive at the pricing formula \mathbf1_{\{0 < \tau\le t\}}\\ \!\!\!&+&\!\!\!\int_t^t a(u)call(u,1,\alpha , t){\mathbf{e}}^q [ b(t , u)s_u^2\mathrm e^{-\int_t^u a(s ) s_sds}|\cf_t]{\mathbf{e}}^q[\mathrm e^{-\int_t^ub(s ) z_sds}]du\\ \!\!\!&+&\!\!\!\int_t^t b(u)call(u,1,\alpha , t){\mathbf{e}}^q [ b(t , u)s_u\mathrm e^{-\int_t^u ( a(s ) s_sds}|\cf_t]{\mathbf{e}}^q[z_u\mathrm e^{-\int_t^u b(s ) z_sds}]du\\ \!\!\!&+&\!\!\!(1-\alpha){\mathbf{e}}^q[b(t ,t)s_t\mathrm e^{-\int_t^t a(s ) s_sds}|\cf_t]{\mathbf{e } } [ \mathrm e^{-\int_t^t b(s ) z_sds}]\big \},\end{aligned}\ ] ] where all the above expectations are of the same type .if not explicitly computable , the expectations can be evaluated via monte carlo or transform methods , once a grid of points in order to approximate the time integrals has been fixed .more precisely , one applies the monte carlo approach to the inner expectations by generating independent paths of the processes and at the grid points .we now show some examples where the above formulas for the rtfs options can be explicitly computed , of course the first case to analyze is the black and scholes model .* pricing in the black & scholes market * in the classical bs market with constant risk - free rate and volatility , the price of a plain vanilla european call with maturity and strike will be denoted by , while the price of of a forward - start option with maturity , strike - determination time and percentage ( see ) is simply where hence in formula ( [ fs_pricebs ] ) we have .let us notice that is always true . henceforth , under ( ind ) , the price of a ( * rtfs * ) option is given by the valuation of ( [ gfspricebs ] ) can be found in closed form in some simple cases . if , under , then for the second term we obtain ,\ ] ] where and it is clear that for all values of the parameters .the above integrals specialize depending on the choice of the parameters . if we have three cases .if then these integrals can be explicitly computed exploiting integration by parts and the gaussian density , obtaining ( keeping in mind that ) .\end{aligned}\ ] ] when ( hence ) , then the above formulas reduce to \end{aligned}\ ] ] and for we lose the gaussian integral and we arrive at \\ \label{case3a2 } \!\!\!\!\!\!a_2(t , t)\!\!\ ! & = & \!\!\ ! \frac \lambda{\lambda\!-\ !\big \{\mathrm e^{-r(t\!-t)}\cn ( c_2\sqrt{t\!\!-\!t})\!- \!\mathrm e^{-\lambda(t\!-t ) } \big [ \frac 12 + \!\frac{c_2}{\sqrt{2\lambda\!-\!c_1 ^ 2}}\ ! \int^{\sqrt{(2\lambda\!- c_1 ^ 2)(t\!-t)}}_0\!\ ! \frac{\mathrm e^{\frac{z^2}2}}{\sqrt{2\pi}}dz\big ] \big \}.\end{aligned}\ ] ] if instead condition holds , we arrive in any case to an explicit formula with the same as in ( [ case1a1 ] ) and given by .\ ] ] * ( the merton market model ) * the above results can be extended with moderate computational effort to the merton jump - diffusion market model ( see e.g. ) where is a brownian motion , a poisson process with jump - arrival intensity , all independent of each other , - 1 ] the generalized characteristic function of .because of the independence of the increments , does not depend on , so the model is scale - invariant and , applying ( [ scale ] ) , we may conclude that the price of the standard forward starting option with determination time is by inserting this fourier representation of the forward starting call price into the formulas of proposition 1.1 or remark [ rem1 ] , we get a double integral representation of our rtfs call price . 
as usual , under hypothesis ( ind ) ,setting , we obtain \mathbf 1_{\{0 < \tau\le t\ } } \\ & + & \mathbf 1_{\{\tau > t\ } } \frac{s_t}{2 \pi } \int_{\mathrm{i } \nu - \infty}^{\mathrm{i } \nu + \infty } \frac 1{\mathrm{i } z - z^2 } \big [ \int_t^t \mathrm{e}^{- r(t - u)(1 + \mathrm{i } z ) } \phi_{x_t- x_u}(-z ) \lambda_u\mathrme^{-\int_t^u\lambda_s ds}du \big ] dz.\end{aligned}\ ] ] in some cases the inner integral can be solved in closed form . as a matter of fact ,if we choose a variance gamma model for the price dynamics with a standard brownian motion and a gamma process independent of , with parameters 1 and and ( see ) , we have where , with and , denotes the principal complex logarithm .taking also ( for ease of computation ) a constant hazard rate , we may conclude \mathbf 1_{\{0 < \tau\le t\ } } \nonumber \\ \!\!\ ! & + & \!\!\!\mathbf 1_{\{\tau > t\ } } \frac{s_t}{2 \pi}\lambda \mathrm{e}^{-\lambda(t - t ) } \int_{\mathrm{i } \nu - \infty}^{\mathrm{i } \nu + \infty}\frac 1{\mathrm{i } z - z^2}\ ; \frac { 1 - \mathrm{e}^{-[r ( 1+\mathrm{i } z)+\frac 1\mu\ln ( 1-\mathrm{i}b\mu z + c^2 \mu \frac{z^2}2 ) - \lambda](t- t)}}{r ( 1+\mathrm{i } z)+\frac 1\mu\ln ( 1-\mathrm{i}b\mu z + c^2 \mu \frac{z^2}2 ) - \lambda}dz,\end{aligned}\ ] ] leading to a computable formula .it is reasonable that the same arguments might be extended to tempered stable distributions as well as to the carr - geman - madan - yor model ( see ) .* pricing in the heston stochastic volatility market . * assuming independence , we can arrive to an explicit formula also in the heston stochastic volatility model , given ( under the risk neutral measure ) by where is a constant interest rate . here has to be the natural filtration generated by the couple , which is jointly a bidimensional markov process , hence the final pricing formula will depend only upon the initial values of both and . as before , we start from the expression \mathbf1_{\{0 < \tau\le t\}}\\ & + & \mathbf 1_{\{\tau >t\}}\big [ \!\int_t^t\!\!c_h(t , u , t , s_t , \sigma_t ) \lambda_u\mathrm e^{-\int_t^u\lambda_s ds}du+(1\!-\alpha)s_t\mathrm e^{-\int_t^t\lambda_s ds } \big ] .\end{aligned}\ ] ] by employing the results in ( or ) , we can find an explicit formula for the forward start option .indeed by virtue of the scale invariant property verified also by this model , we have where is the standard price of a call option in the heston model ( for details see for instance lemma 2.1 ) and and is a modified bessel function of the first kind . the previous method can be adapted to cover also the case of affine stochastic interest rate market model proposed in , where the interest rate follows a hull and white model and the stochastic volatility is a ornstein - uhlenbeck process .in this section we report the results of the numerical implementation of some of the models presented before : the black & scholes , the variance gamma and the heston models , with a random time with parameter . in all experiments we set , andwe assume the independent hypothesis ( ind ) .analytical or closed - form prices , eventually requiring numerical quadrature , are compared with standard monte carlo approximations of the arbitrage free pricing formula ( [ random ] ) .sample paths of the underlying dynamics are evaluated at the random time and the corresponding payoffs are collected to estimate ( [ random ] ) . in our experiments we simulated samples / paths and remark [ rem1 ] can be easily designed . ] . 
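a minimal monte carlo implementation of the black & scholes experiment described above can be written in a few lines of python ( the document's routines were written in matlab ; this is only an equivalent sketch ) . the random time is drawn from an exponential law with intensity lam , independently of the brownian motion , and the geometric brownian motion is sampled exactly at the random time and at maturity ; all parameter values below ( spot , alpha , r , sigma , lam , maturity ) are illustrative placeholders , since the actual values used in the experiments are not reproduced here .

import numpy as np

def rtfs_mc(s0=1.0, alpha=1.0, r=0.02, sigma=0.2, lam=1.0, T=1.0, n=100_000, seed=0):
    # monte carlo estimate of E[ e^{-rT} (S_T - alpha*S_{tau^T})^+ ] at t = 0,
    # with tau ~ Exp(lam) independent of the driving brownian motion (hypothesis (ind))
    rng = np.random.default_rng(seed)
    tau = np.minimum(rng.exponential(1.0/lam, n), T)   # tau ^ T
    z1, z2 = rng.standard_normal((2, n))
    # exact GBM values at tau and at T (increment simulated on (tau, T])
    s_tau = s0*np.exp((r - 0.5*sigma**2)*tau + sigma*np.sqrt(tau)*z1)
    s_T = s_tau*np.exp((r - 0.5*sigma**2)*(T - tau) + sigma*np.sqrt(T - tau)*z2)
    payoff = np.exp(-r*T)*np.maximum(s_T - alpha*s_tau, 0.0)
    return payoff.mean(), 1.96*payoff.std(ddof=1)/np.sqrt(n)

price, half_width = rtfs_mc()
print(price, half_width)            # estimate and 95% confidence half-width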
adaptive gauss - lobatto quadrature was used to approximate representation integrals ( vg and heston models ) , which required the evaluation of extended fourier transforms involving multivalued functions , such as the complex logarithm . to avoid the well - known numerical instabilities due to a wrong formulation of the characteristic function , we used the so - called _ rotation count algorithm _ by kahl and jäckel ( see e.g. ) . all routines were coded and implemented in matlab , version 8.0 ( r2012b ) on an intel core i7 2.40 ghz machine running under windows 7 with 8 gb physical memory . for each model we report the closed - form ( cf ) and the mc prices for different values of , fixing the other model parameters . in parentheses we report the absolute error w.r.t . the cf price , followed by the corresponding length of the confidence interval . without loss of generality we set , and . in the black & scholes model the rtfs price is given by formulas ( [ gfspricebs])-([case4a2 ] ) . exact simulations of the geometrical brownian motion with at the random time are used to estimate the pricing formulas . [ table : * black & scholes * model for different values of . ] lastly , we choose the random time with varying in order to compare at time , the rtfs price and the price of a fs option with determination time . in other words we compare with , see figure ( [ figall1 ] ) . in all the instances prices decrease as increases ; moreover we notice that : i ) for ( i.e. ) fs and rtfs prices both tend to the price of an atm plain vanilla call ; ii ) for ( i.e. ) the fs value trivially goes to , while the value of a rtfs option stays positive since . in this section we are interested in computing the credit value adjustment ( cva ) for a rtfs option , when one of the parties is subject to a possibly additional default time . the interest lies in the fact that those products would be otc , and knowing that the cva is computable might be convenient . we first recall what the cva is for an additive cashflow . let us denote by the discounted value of the cashflow between time and ; this value has to be additive , which means that for times it holds . all european contingent claims with integrable , payoff , trivially verify ( [ additive ] ) since . we call the net present value of . we now look into unilateral counterparty credit risk . let us suppose that the cashflow is between two parties ( buyer ) and the counterparty , that is subject to default , with default time . then , the discounted value of the cashflow has to be adjusted by subtracting from the defaultless case the quantity where represents a deterministic recovery rate ( see ) . in the case of classical forward start options with fixed strike - determination time , it is possible to compute directly the cva . we denote by the filtration generated by the process and by . under independence between and , taking for granted that remains a martingale in the enlarged filtration w.r.t . a risk neutral probability measure which extends the risk neutral probability , we have where the key lemma and the same notation as in section 1 were used , under the assumption that .
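for the classical forward - start option just considered , and assuming that the adjustment factorizes as ( 1 - r ) times the probability of a default in ( t , t ] times the default - free value — as in the independence argument above — the cva reduces to a one - line computation once the default - free price is available ( for instance from the monte carlo sketch of the previous section ) . the recovery rate , the exponential default intensity and the input price below are illustrative placeholders .

import numpy as np

def cva_independent(price, recovery=0.4, lam_c=0.02, t=0.0, T=1.0):
    # unilateral cva under independence of the counterparty default time:
    # cva = (1 - R) * Q(t < tau_c <= T) * default-free price, with tau_c ~ Exp(lam_c)
    prob_default = np.exp(-lam_c*t) - np.exp(-lam_c*T)
    return (1.0 - recovery)*prob_default*price

print(cva_independent(price=0.08))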
to extend the previous formula to the case of the rtfs options we need to consider two random times : the strike - determination time and the counterparty default time .hence we need to extend the original filtration with both and , obtaining accordingly , we assume there exists an extension of the risk neutral probability to a common probability space that we still denote by and we set the corresponding key assumption * ( hc)*every martingale remains a martingale .taking for simplicity , we have that the price of a defaultable rtfs call option is .\ ] ] so , in presence of a recovery rate , we obtain that the cva for this product has to be = ( 1\!-\!r ) { \mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}b(t , t)(s_t - s_{\tau\wedge t})^+|\cal g_t].\end{aligned}\ ] ] we may decompose the expectation as \\ \!\!\!&=&\!\!\!{\mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}\mathbf1_{\{\tau > t\}}b(t , t)(s_t\!-\!s_{\tau\wedge t})^+|\cal g_t]\!+{\mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}\mathbf 1_{\{\tau\le t\}}b(t , t)(s_t\!-\!s_{\tau\wedge t})^+|\cal g_t]\\ \!\!\!&=&\!\!\!{\mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}\mathbf 1_{\{t<\tau\le t\}}b(t , t)(s_t - s_{\tau})^+|\cal g_t ] + \mathbf 1_{\{\tau\le t\}}{\mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}b(t , t)(s_t - s_{\tau})^+|\cal g_t],\end{aligned}\ ] ] where the last equality is justified by the fact that the first term in the first passage is equal to 0 . in the second equalitythe crucial term is the first , as the second reduces to the cva of a standard forward start option , for which formula ( [ cvafs ] ) applies .we may handle the first term under condition of independence between and . denoting by ,we have and applying the key lemma to and we obtain \\ & = & q(\tau_c > t|\cg_t)\frac{{\mathbf{e}}^{q}[\mathbf 1_{\{t<\tau_c\le t\}}\mathbf 1_{\{t < \tau\le t\}}b(t , t)(s_t - s_{\tau})^+|\cf^h_t]}{q(\tau_c > t|\cf^h_t)}.\end{aligned}\ ] ] the independence of from the remaining factors gives \\ & = & q(\tau_c > t|\cg_t)\frac{ q(t < \tau_c\le t)}{q(\tau_c > t)}{\mathbf{e}}^{q}[\mathbf 1_{\{t < \tau\le t\}}b(t , t)(s_t - s_{\tau})^+|\cf^h_t]\end{aligned}\ ] ] and if , the above reduces to =q(t < \tau_c\le t ) { \mathbf{e}}^{q}[\mathbf 1_{\{t < \tau\le t\}}b(t , t)(s_t - s_{\tau})^+|\cf^h_t]\ ] ] which is the weighted value of a rtfs option , exactly as it happened with the standard fs option .summarizing we have under hypothesis ( hc ) and the condition that , if is independent of , then the cva of a defaultable rtfs call option of price , with recovery rate $ ] , is given by .lastly we remark that if the two random times coincide , , then we may conclude that which implies that the defaultable price is .the authors thank prof . c. chiarella for useful suggestions at an early stage of this work .d. brigo , k .chourdakis , _ counterparty risk for credit default swaps : impact of default volatility and spread correlation _ , international journal of theoretical and applied finance , vol .12 , no . 7 , 1007 - 1029 , ( 2009 ) .g. h. cheang , c. chiarella , _ a modern view on merton s jump - diffusion model _ , in `` stochastic processes , finance and control : a festschrift in honor of robert j. elliott '' , samuel n. cohen , dilip madan , tak kuen siu and hailiang yang ( eds . ) , pg 217 - 234 , world scientific , ( 2012 ) .a. van haastrecht , a. pelsser _accounting for stochastic interest rates , stochastic volatility and a general correlation structure in the valuation of forward starting options _ , journal of futures markets , vol .31 , no . 
2 , 103 - 125 ( 2011 ) .a. ramponi , _ on fourier transform methods for regime - switching jump - diffusions and the pricing of forward starting options _ , international journal of theoretical and applied finance , vol .15 , no . 5 , ( 2012 ) | we introduce a natural generalization of the forward - starting options , first discussed by m. rubinstein ( ) . the main feature of the contract presented here is that the strike - determination time is not fixed ex - ante , but allowed to be random , usually related to the occurrence of some event , either of financial nature or not . we will call these options * random time forward starting ( rtfs)*. we show that , under an appropriate martingale preserving " hypothesis , we can exhibit arbitrage free prices , which can be explicitly computed in many classical market models , at least under independence between the random time and the assets prices . practical implementations of the pricing methodologies are also provided . finally a credit value adjustment formula for these otc options is computed for the unilateral counterparty credit risk . * keywords * : random times , forward - starting options , cva . * jel classification * : g13 |
the evolution of rotation is usually associated with an evolution of angular momentum ; changing the angular momentum of any body requires torques and stars do not escape from this law of physics . in binary stars there is a permanent source of torques : the tides . hence , understanding the evolution of rotation of stars in a binary system demands the understanding of the action of tides . this compulsory exercise was started more than thirty years ago by jean - paul zahn during his " thèse d'état " , _ les marées dans une étoile double serrée _ ( zahn 1966 ) . all the concepts needed to understand tides and their action in the dynamical evolution of binaries are presented in this work . surely , as in isolated stars , rotation is an important ingredient of evolution through the induced mixing processes : turbulence in stably stratified radiative zones , circulations ... all these processes will influence the abundances of elements in the atmospheres or the internal profile of angular velocity , for instance . however , in binary stars new phenomena appear : tides . they make the orbit evolve , force some mixing processes ( through eigenmode resonances for instance ) or may even generate instabilities leading to some turbulence ( see below ) . these new phenomena also need to be understood if one wishes to decipher the observations of binary stars . in this respect binary stars offer more observables than single stars , like the parameters of the orbit , masses of the stars , their radii , etc . if the system has not exchanged mass during evolution and if other parameters like luminosity , surface gravity , abundances can also be determined unambiguously , binary stars offer new constraints on the stars which may be useful for our understanding of stellar evolution . also , a statistical view of orbital parameters may constrain the dissipative processes at work in these stars ( mathieu et al . 1992 ) . let us consider an isolated system made of two stars of mass m , m , of moment of inertia i , i and of spin angular velocity , . the semi - major axis of the orbit is and the eccentricity . for simplicity we shall assume that the angular momentum vectors are all aligned . hence , the total ( projected ) angular momentum of the system , which is conserved during evolution , reads : on the other hand , the total energy of the system , namely , decreases because of dissipative processes . to appreciate the natural evolution of such a system , let us consider the even more simplified system where the angular momentum and the energy of the spin of the stars are negligible compared to their orbital equivalent . using kepler's third law to eliminate the mean angular velocity of the orbital motion , the previous equations lead to . during evolution the system loses energy through dissipative mechanisms , thus decreases , which implies that also decreases to ensure a constant angular momentum . thus , with time the orbit slowly circularizes . once the orbit is circular or nearly circular , the system may continue to evolve if the orbital angular velocity and spin angular velocity are not identical : this is the synchronization process , after which the system has reached its minimum mechanical energy state : all the stars rotate at the same rate , i.e. like the moon on its terrestrial orbit . in the foregoing section we presented a global view of the evolution of the system ; however , the way the orbit or the spin change is controlled by the torques applied to the stars .
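before turning to the torques themselves , the circularization argument above can be made concrete with a few lines of code : with the spin contributions neglected , the orbital angular momentum scales as the square root of a ( 1 - e^2 ) , so its conservation ties the semi - major axis to the eccentricity , while the orbital energy scales as -1 / a . the sketch below follows a toy system from e = 0.6 down to a circular orbit ; the initial values and units are arbitrary and purely illustrative .

import numpy as np

a0, e0 = 1.0, 0.6                 # illustrative initial semi-major axis and eccentricity
const = a0*(1.0 - e0**2)          # a*(1 - e^2) is fixed when the orbital angular momentum is conserved

for e in (0.6, 0.4, 0.2, 0.0):
    a = const/(1.0 - e**2)
    energy = -1.0/(2.0*a)         # orbital energy in units of G*m1*m2
    print(e, a, energy)
# as e decreases at fixed angular momentum, a shrinks and the (negative)
# orbital energy decreases, consistent with energy being dissipated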
as we said at the beginning , a permanent source of torques is given by the tides which therefore need to be studied .but what is a tide ? the tide is the fluid flow generated by the tidal potential , i.e. the azimuth dependent part of the gravitational potential inside a star . in short ,if you sit on a star , it is the forced flow generated by the celestial bodies orbiting around you .if you sit on earth you feel the tides of the moon and the sun essentially .now let us assume that the tidal forcing is milde enough so that the fluid flow obeys linear equations ; formally , we may write the system like where we assume that the tidal force can be separated into its spatial and temporal dependence . written in this way we immediately see that if the inertia of the fluid can be neglected ( i.e. the term ) , then the velocity field can be computed with the same temporal dependence as the exciting force .the response is instantaneous .moreover , if coriolis acceleration and viscosity are negligible , the only response of the fluid is through a pressure perturbation , i.e. it is purely hydrostatic .this extremely simple , but not unrealistic , case is called the _ equilibrium tide_. on earth , this tide is enough to understand the basic properties of terrestrial tides : i.e. that there are two tides a day , that their amplitude is cm or that they are stronger at full or new moon ; the hydrostatic perturbation describes the famous tidal bulge which is often representing tides in elementary courses .such a description is appropriate if you sit on the mediterranean beaches or on those of the pacific ocean ; however , if you sit on the atlantic shore , like here in cancun , you will easily notice that the tide is much larger than the expected 50 cm . in the mont saint - michel bay it easily reaches 10 meters !the difference comes from what we neglected : the inertia of the fluid and the ensuing resonance phenomenon . for the tidal wave ,whose wavelength is a few thousand kilometers , the atlantic ocean is a mere puddle five kilometers deep .surface gravity waves may thus be studied using the shallow water approximation and their speed is given by where is the gravity and the depth . with a mean width of 5000 km, the atlantic is crossed twice in 12.6 hours ; but the tidal forcing is back after 12.4 hours . obviously , we are close to a resonance and in this case the equilibrium tide is insufficient to describe the tidal response of the fluid . in this casethe tide will be qualified as _dynamical_. quite clearly, the equilibrium tide is much easier to handle than the dynamical one ; this is why it was first studied ( zahn 1966 ) ; the dynamical tide received first serious considerations by zahn ( 1970 ) and a proper treatment by zahn ( 1975 ) . in order to compute the dynamical evolution of the system , namely the parameters of the orbit and the spin of the stars it is necessary to evaluate both the torques and the dissipation inside the stars .the torque suffered by a star results from an unsymmetrical distribution of mass with respect to the tidal potential ; mathematically , where is the density perturbation generated by the tidal potential ; is the angular azimuthal variable .we see from this expression that torques can only exist if the excitation and the response are out of phase ( or antiphase ) .the phase lag between these two quantities comes from the dissipative mechanisms at work in the stars and is usually a small but important quantity . 
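the back - of - the - envelope numbers quoted above are easily checked ; the following sketch uses the rounded depth and basin width given in the text .

import numpy as np

g, depth = 9.81, 5.0e3            # m/s^2, mean depth of ~5 km
width = 5.0e6                     # ~5000 km for the atlantic basin
c = np.sqrt(g*depth)              # shallow-water gravity wave speed
round_trip_hours = 2*width/c/3600.0
print(c, round_trip_hours)        # ~221 m/s and ~12.6 hours, against a 12.4 h tidal forcing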
from the expressions of the energy and angular momentum of the orbit, we can derive the evolution equations of the semi-major axis and of the eccentricity in terms of $\dot E$ and $\Gamma$, where $\dot E$ and $\Gamma$ are respectively the power dissipated in the stars and the torque exerted on the orbit (the parallelism of the angular momentum vectors is still assumed). the foregoing discussion therefore shows that, provided the stars do not change, all the evolution of the system is controlled by the dissipation of energy by the tidal flow. when stellar evolution is important, for instance before or after the main sequence, changes in the moments of inertia will change the spin of the stars and the applied torques. the first and most obvious physical mechanism to dissipate mechanical energy is viscosity. however, the viscosity of the stellar plasma is far too weak to be efficient, and only the turbulent viscosity of convection zones can significantly affect the evolution of the system. but the effectiveness of turbulent viscosity is hampered by the fact that tidal flows are periodic in time: in any region of a convective zone where the lifetime of the eddies is longer than the period of the tidal flows, the turbulent eddy viscosity is reduced. the question is, of course, how much it is reduced compared to its usual approximation $\nu_t \sim \frac13 v_t \ell_t$, where $v_t$ and $\ell_t$ are respectively the velocity of the turbulent eddies and their mean free path. presently, two prescriptions coexist: the first, by zahn (1966), says that $\nu_t \sim \frac13 v_t \min(\ell_t, v_t P/2)$, which means that when the period $P$ of the tides is shorter than the turnover time of the eddies, the mean free path of the eddies should be reduced to the distance covered by an eddy during half a period. the second prescription is originally due to goldreich and keeley (1977) but was adapted by goldman and mazeh (1991); it says that $\nu_t \sim \frac13 v_t \ell_t \min[1, (P/2\pi\tau_c)^2]$, with $\tau_c = \ell_t/v_t$ the turnover time of the eddies. this prescription means that the turbulent viscosity should be such that the dissipation by the tidal currents fits the dissipation by the turbulent cascade at the scale whose turnover time matches the tidal period (recall that $v_\lambda \propto \lambda^{1/3}$ in the kolmogorov cascade). these two prescriptions yield different exponents in the dependence of the evolution time scale on the period of the system; in principle, they can be discriminated by observations. however, this exercise turns out to be difficult; moreover, using binaries in stellar clusters, mathieu et al. (1992) found that no reduction of viscosity was necessary to explain the observations! obviously, more work is needed to clarify this question. another viscous damping process was also put forward by tassoul (1987), namely the dissipation in ekman boundary layers. however, we have shown that, because of the stress-free surfaces of the stars, such layers are rather regions with lower dissipation than the rest of the star (see rieutord 1992, rieutord and zahn 1997). another way to dissipate the energy of tidal flows is through radiative damping. this mechanism affects essentially radiative zones; indeed, from the tidal excitation, one needs to generate temperature fluctuations which are dissipated by radiative diffusion. the most natural way to generate these temperature fluctuations comes from the excitation of gravity modes since, mechanically, they are associated with the buoyancy force. it is therefore quite clear that the dissipation and the ensuing torques will be most important when the forcing frequency is near that of an eigenmode of the star.
from this remark, it is also clear that only the dynamical tide will be relevant in this process. on general grounds we may observe that the tidal forcing is of low frequency and that low-frequency gravity modes are high-order modes which can then be described by an asymptotic theory. this was the way chosen by zahn (1975) and later by goldreich and nicholson (1989). this mechanism is essentially relevant for early-type stars, which possess an outer radiative zone. as shown by the previous authors, the tidal excitation is most intense near the core-envelope boundary, but since gravity waves are only partially reflected at the stellar surface, they deposit their angular momentum there and these surface layers are synchronized first. this argument was developed by goldreich and nicholson (1989) to explain the higher synchronization rate observed in early-type stars compared to the theoretical predictions of zahn (1977). these studies are based on an asymptotic approach and ignore the role of the coriolis force. this force is, however, unavoidable since the stars are always rotating and in most cases the tidal forcing is also in the band of the so-called inertial modes, whose restoring force is precisely the coriolis force. in fact inertial modes always combine with gravity modes as they are also low-frequency modes; both together, they occupy the frequency band $[0,\sqrt{N^2+4\Omega^2}]$, where $N$ is the brunt-väisälä frequency. recently, much progress was achieved in the direction of including the whole spectrum of low-frequency modes in the tidal response of an early-type star. indeed, witte and savonije (1999a, b, 2001) computed numerically the response of a 10 m$_\odot$ main-sequence star and the ensuing evolution of the system with various initial eccentricities. their results show many interesting features: * the efficiency of resonance crossings at decreasing the eccentricity when it is initially small (a few percent). * a new phenomenon, which they called "resonance locking", by which two resonant modes, one tending to spin up the star and the other tending to spin it down, yield equilibrating torques which maintain the two stellar modes close to resonance and therefore force a strong evolution of the eccentricity (see figure 3 in witte and savonije 1999b). the calculations of witte and savonije show that radiative damping is efficient at reducing the eccentricity of the orbit in a fraction of the lifetime of the star: in their examples, the 10 m$_\odot$ star has a lifetime of 20 myr; an initial eccentricity of 0.02 is erased in 1 myr, but an initial eccentricity of 0.25 or 0.7 reduces to 0.1 in 3 myr; any further evolution apparently operates on a much longer time scale. obviously, if the model of witte and savonije could be taken as representative of a true system, it would mean that orbits with a small but finite eccentricity (of order 0.1) may be as dynamically evolved as circular ones and differ only by their initial conditions. likely, not all the mechanisms by which energy can be dissipated have been examined. we shall now present a new one, caused by the elliptic instability, which is potentially a rather strong source of dissipation. the elliptic instability has been studied mainly in simple geometries and even in these cases it is a difficult problem. for a recent review we refer the reader to the work of kerswell (2002).
presently, the work closest to astrophysical applications is that of seyed-mahmoud et al. (2000), who investigated this instability in an ellipsoidal configuration for the core of the earth. the basic result we need from fluid mechanics is the growth rate of this instability, namely $\sigma \sim \epsilon\,\Omega$, where $\epsilon$ is the ellipticity of the streamlines and $\Omega$ the rotation rate of the vortex. schematically, this instability may be seen as a parametric instability: the solid-body rotating fluid feels a perturbation (the ellipticity of the streamlines) with a frequency $2\Omega$. such a periodic forcing can destabilize modes at half its frequency, namely $\Omega$; this is precisely the frequency of the so-called spin-over modes (or poincaré modes). such modes are solid-body rotations around an axis in the equatorial plane. when the instability develops and the amplitude gets sufficiently large, the vortex starts to precess and is usually completely destroyed, as observed in experiments (malkus 1989). hence, this instability may be very important as it is able to dissipate the total kinetic energy of the vortex. but let us consider the astrophysical case. for the sake of simplicity we consider a main-sequence star in a binary system with a circular orbit. the tidal perturbation is provided by a point-mass object. the system is not synchronized and therefore $\Omega \neq \omega_{\rm orb}$. in a reference frame rotating with the tidal potential, the main-sequence star is like a strained vortex rotating with the angular velocity $\Omega - \omega_{\rm orb}$ and enduring the elongation imposed by the tidal potential. in such a case the instability develops on a time scale of order $1/(\epsilon\,|\Omega-\omega_{\rm orb}|)$. the energy available for dissipation is $E \sim \frac12 I\,(\Omega-\omega_{\rm orb})^2$, where $I$ is the moment of inertia of the zone developing the elliptic instability. in this first attempt, we shall consider a late-type star and restrict, conservatively, the action of this instability to the convection zone so that it does not interfere with the stratification. we note that the spin-over modes which are destabilized are rigid rotations and are thus not affected by the large turbulent viscosity of the fluid. hence, the power dissipated is approximately $P \sim \sigma E$. with these two quantities, one can estimate the time scale over which synchronization occurs, $t_{\rm sync} \sim E/P \sim 1/\sigma$: this is typically the growth time of the elliptical instability. to get orders of magnitude, let us take two solar-type stars orbiting their centre of mass in 10 days and let their spin be twice as fast (i.e. $\Omega = 2\omega_{\rm orb}$). the time scale is then 64 years, thus very short. as far as circularization is concerned, we need to restrict ourselves to weakly eccentric orbits. using the stars to dissipate energy while the angular momentum of the orbit is assumed constant, the reduction of the eccentricity also occurs on a short time scale, namely a thousand years or so. the numbers thus derived are rather robust as they essentially depend on geometric quantities of the system. in view of the observations, which show that there are systems with a 10-day period which are neither circularized nor synchronized, we may wonder why the elliptic instability is not that efficient in binary stars. the answer likely lies in the much more complex setup offered by stars compared to their laboratory equivalents. in stars, there are stratification, magnetic fields, time-variable ellipticity (on eccentric orbits), rotation, etc...
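before turning to the complicating effects just listed, the 64-year figure quoted above can be reproduced, to order of magnitude, with a few lines of python. the ellipticity estimate $\epsilon \sim (M_2/M_1)(R/a)^3$ and the growth rate $\sigma \sim \epsilon\,|\Omega-\omega_{\rm orb}|$ used below are our own simplifying assumptions (prefactors of order unity are dropped), so the result should only be read as an order-of-magnitude check:

    import math

    G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8   # si units
    P_orb = 10.0 * 86400.0                         # s, 10-day orbit of two solar-type stars

    M1 = M2 = Msun
    a = (G * (M1 + M2) * P_orb**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)  # kepler's third law

    w_orb = 2.0 * math.pi / P_orb
    w_spin = 2.0 * w_orb             # spin assumed twice the orbital rate, as in the text

    eps = (M2 / M1) * (Rsun / a) ** 3       # assumed tidal ellipticity of the streamlines
    sigma = eps * abs(w_spin - w_orb)       # assumed growth rate in the frame of the tide

    print(f"a       = {a / Rsun:5.1f} Rsun")              # ~ 25 Rsun
    print(f"eps     = {eps:8.2e}")                        # ~ 7e-5
    print(f"1/sigma = {1.0 / sigma / 3.156e7:5.0f} yr")   # ~ 65 yr, close to the 64 yr quoted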
all these effects have been explored, but most of the time using systems which are not quite similar to a star. nevertheless, we shall list them and speculate about their effects in a stellar situation. * * rotation * of the frame associated with the orbital motion influences the elliptic instability through the coriolis force. craik (1989) has shown, but using an unbounded strained flow, that rotation is either stabilizing or destabilizing. it is stabilizing if, for instance, in an inertial frame, the star does not rotate. * * stratification : * similarly to rotation, stratification does not act in a unique sense; using the context of an elliptical cylinder, kerswell (1993) has shown that stratification is stabilizing for the polar regions of a star but destabilizing for the equatorial regions. * * magnetic fields * were found invariably stabilizing (see kerswell 1994). * * time - periodic ellipticity * is likely destabilizing through parametric instabilities. however, the only studied cases are those of an unbounded strained fluid (kerswell 2002). * * nonlinear effects * are the most difficult to appreciate. experiments have shown that the nonlinear development is violent (aldridge et al. 1997) and rapidly leads to a turbulent state (malkus 1989). in fact, it seems that the saturated state exists only in a very narrow range of parameters and that beyond this range secondary instabilities, due to triadic interactions of inertial modes, lead to small-scale motions and a turbulent state (mason and kerswell 1999). + globally, the way the instability saturates may be thought of as a change of the spin axis of the fluid so as to reduce the ellipticity of the streamlines. in stars such an effect is possible if the spin axis of the stars is inclined in the plane normal to the orbital plane and passing through the centres of mass. this shows that the apparently innocuous hypothesis of angular momentum vector alignment may be crucial to the instability. it may also be an observational signature. indeed, we noticed that the turbulent viscosity of the convection zone could hardly inhibit the resonance of spin-over modes; however, it can easily suppress the secondary instabilities which, in laboratory experiments, lead to a turbulent state. it may well be that the saturated state is difficult to obtain in the laboratory but much easier to reach in stars. the foregoing discussion shows that many points of the elliptic instability in the stellar context remain in the shadows; but because of its great potential dissipative power, this instability deserves more study. born together, the stars of a binary system share the same age and the same initial metallicity. in the most favourable cases, their masses, radii and rotation rates can be determined. these are of course very interesting pieces for the puzzle of stellar evolution, but there is a price to pay for this additional information: the two stars interact during their whole lifetime. in its simplest form this interaction is of gravitational origin and generates tides. as we have shown, tides generate various fluid flows giving rise to transport processes which may be observationally constrained if a comparison with analogous isolated stars is possible. the foregoing presentation, which sketched out all the known mechanisms by which energy is dissipated, also shows that the situation is not simple. various processes can dissipate energy and we suggest that among them the elliptic instability plays a non-negligible part.
quite surprisingly, this instability has been overlooked until now. despite its strength, shown by laboratory experiments, observations do not yet show evidence of this instability. in the list of the mechanisms which may inhibit this instability, we noticed the misalignment of the rotation axes. as such a misalignment may also result from a saturated state of this instability and can be an observational signature, it deserves some study. * on the theoretical side, more work is obviously needed on the effects of the elliptic instability. * on models, the integration of stellar evolution combined with dynamical (tidal) evolution, following up the works of claret and cunha (1997) and witte and savonije (1999b), will be useful to understand, for instance, the statistical properties of eccentricities as a function of periods and ages, or the relative importance of the pre-main sequence, main sequence and post-main sequence phases. * on the observational side, much data is needed. first, the elements describing the dynamics are very much desired: these are the elements of the orbit, the masses $M_1$, $M_2$, the spins $\Omega_1$, $\Omega_2$ and their variations with time. for instance, the motion of the apsidal line is a quantity constraining the mass distribution of the stars and therefore their internal rotation. it is clear that such data will require a lot of effort, but they will help us much in our understanding of stellar evolution. a pulsar like j0045-7319, which travels around an early-type star on a highly eccentric orbit, offers a good step in this direction (lai et al. 1995). aldridge k., seyed-mahmoud b., henderson g. and van wijngaarden w., 1997, phys. earth plan. int., * 103 *, 365 claret a. and cunha n., 1997, , * 318 *, 187 craik a., 1989, j. fluid mech., * 198 *, 275 goldreich, p. and keeley, d., 1977, , * 212 *, 243 kerswell, r., 1993, geophys. astrophys. fluid dyn., * 71 *, 105 kerswell, r., 1994, j. fluid mech., * 274 *, 219 kerswell, r., 2002, ann. rev. fluid mech., * 34 *, 83 lai d., bildsten l. and kaspi v., 1995, , * 452 *, 819 malkus w., 1989, geophys. fluid dyn., * 48 *, 123 mason d. and kerswell r., 1999, j. fluid mech., * 396 *, 73 mathieu r. d., duquennoy a., latham d. w., mayor m., mermilliod j.-c., mazeh t., 1992, binaries as tracers of stellar formation, proceedings of a workshop held in bettmeralp, switzerland, sept. 1991, eds a. duquennoy, m. mayor, p. 278 rieutord, m., 1992, , * 259 *, 581 rieutord, m. and zahn, j.-p., 1997, , * 474 *, 760 seyed-mahmoud b., henderson g. and aldridge k.d., 2000, phys. earth plan. int., * 117 *, 51 tassoul, j.-l., 1987, , * 322 *, 856 witte, m. and savonije, g., 1999a, , * 341 *, 842 witte, m. and savonije, g., 1999b, , * 350 *, 129 witte, m. and savonije, g., 2001, , * 366 *, 840 zahn, j.-p., 1966, ann. astrophys., * 29 *, 313, 489, 565 zahn, j.-p., 1970, , * 4 *, 452 zahn, j.-p., 1975, , * 41 *, 329 | in this review, we describe the physical processes driving the dynamical evolution of binary stars, namely the circularization of the orbit and the synchronization of their spin and orbital rotation. we also discuss the possible role of the elliptic instability, which turns out to be an unavoidable ingredient of the evolution of binary stars.
modern cosmology is one of the most exciting areas in physical science .decades of surveying the sky have culminated in a cross - validated , `` cosmological standard model '' . yet ,key pillars of the model dark matter and dark energy together accounting for 95% of the universe s mass - energy remain mysterious .deep fundamental questions demand answers : what is the dark matter? why is the universe s expansion accelerating ? what is the nature of primordial fluctuations ? should general relativity be modified ? to address these questions , ground and space - based observatories operating at multiple wavebands are aiming to unveil the true nature of the `` dark universe '' . driven by advances in semiconductor technology ,surveys follow a version of moore s law , in terms of ccd pixels or surveyed galaxies per year . in a major leap forward , current cosmological constraints will soon be improved by an order of magnitude . as an example, the large synoptic survey telescope ( lsst ) can be compared to today s observations from the sloan digital sky survey ( sdss ) : in one night lsst will capture data equivalent to five years of sdss imaging ( fig .[ dls ] ) !interpreting future observations will be impossible without a modeling and simulation effort as revolutionary as the new surveys : the desired size and performance improvements for simulations over the next decade are measured in orders of magnitude . because the simulations to be run are memory - limited on even the largest machines available and a large number of them are necessary , very stringent requirements are simultaneously imposed on code performance and efficiency .we show below how hacc meets these exacting conditions by attaining unprecedented sustained levels of performance , reaching up to of peak on certain bg / q partition sizes .cosmic structure formation is described by the gravitational vlasov - poisson equation in an expanding universe , a 6-d pde for the liouville flow ( [ le ] ) of the phase space pdf where self - consistency is imposed by the poisson equation ( [ pe ] ) : the expansion of the universe is encoded in the time - dependence of the scale factor governed by the cosmological model , the hubble parameter , , is newton s constant , is the critical density , , the average mass density as a fraction of , is the local mass density , and is the dimensionless density contrast , the vlasov - poisson equation is very difficult to solve directly because of its high dimensionality and the development of structure including complex multistreaming on ever finer scales , driven by the gravitational jeans instability .consequently , n - body methods , using tracer particles to sample are used ; the particles follow newton s equations in an expanding universe , with the forces given by the gradient of the scalar potential as computed from eq .( [ pe ] ) . under the jeans instability ,initial perturbations given by a smooth gaussian random field evolve into a ` cosmic web ' comprising of sheets , filaments , and local mass concentrations called halos .the first stars and galaxies form in halos and then evolve as the halo distribution also evolves by a combination of dynamics , mass accretion and loss , and by halo mergers . to capture this complex behavior , cosmological n - body simulationshave been developed and refined over the last three decades .in addition to gravity , gasdynamic , thermal , radiative , and other processes must also modeled , e.g. 
, sub - grid modeling of star formation .large - volume simulations usually incorporate the latter effects via semi - analytic modeling . to understand the essential nature of the challenge posed by future surveys , a few elementary arguments suffice .survey depths are of order a few gpc ( 1 light - years ) ; to follow typical galaxies , halos with a minimum mass of m ( solar mass ) must be tracked . to properly resolve these halos ,the tracer particle mass should be m and the force resolution should be small compared to the halo size , i.e. , .this last argument immediately implies a dynamic range ( ratio of smallest resolved scale to box size ) of a part in ( / kpc ) everywhere in the _ entire _ simulation volume ( fig .[ zoom ] ) .the mass resolution can be specified as the ratio of the mass of the smallest resolved halo to that of the most massive , which is . in terms of the number of simulation particles ,this yields counts in the range of hundreds of billions to trillions .time - stepping criteria follow from a joint consideration of the force and mass resolution .finally , stringent requirements on accuracy are imposed by the very small statistical errors in the observations certain quantities such as lensing shear power spectra must be computed at accuracies of a _ fraction _ of a percent . for a cosmological simulation to be considered `` high - resolution '' , _ all _ of the above demandsmust be met .in addition , throughput is a significant concern .scientific inference from sets of cosmological observations is a statistical inverse problem where many runs of the forward problem are needed to obtain estimates of cosmological parameters via markov chain monte carlo methods .for many analyses , hundreds of large - scale , state of the art simulations will be required .the structure of the hacc framework is based on the realization that it must not only meet the challenges of spatial dynamic range , mass resolution , accuracy , and throughput , but also overcome a final hurdle , i.e. , be fully cognizant of coming disruptive changes in computational architectures . as a validation of its design philosophy ,hacc was among the pioneering applications proven on the heterogeneous architecture of roadrunner , the first supercomputer to break the petaflop barrier .hacc s multi - algorithmic structure also attacks several weaknesses of conventional particle codes including limited vectorization , indirection , complex data structures , lack of threading , and short interaction lists .it combines mpi with a variety of local programming models ( opencl , openmp ) to readily adapt to different platforms .currently , hacc is implemented on conventional and cell / gpu - accelerated clusters , on the blue gene architecture , and is running on prototype intel mic hardware .hacc is the first , and currently the only large - scale cosmology code suite world - wide , that can run at scale ( and beyond ) on _ all _ available supercomputer architectures . 
to showcase this flexibility , we present scaling results for two systems aside from the bg / q in section [ sec : results ]; on the entire anl bg / p system and over all of roadrunner .recent hacc science results on roadrunner include a suite of 64 billion particle runs for baryon acoustic oscillations predictions for boss ( baryon oscillation spectroscopic survey ) and a high - statistics study of galaxy cluster halo profiles .hacc s performance and flexibility are not dependent on vendor - supplied or other high - performance libraries or linear algebra packages ; the 3-d parallel fft implementation in hacc couples high performance with a small memory footprint as compared to available libraries .unlike some other high - performance n - body codes , hacc does not use any special hardware .the implementation for the bg / q architecture has far more generally applicable features than ( the hacc or other ) cpu / gpu short - range force implementations .the cosmological n - body problem is typically treated by a mix of grid and particle - based techniques .the hacc design accepts that , as a general rule , particle and grid methods both have their limitations . for physics and algorithmic reasons ,grid - based techniques are better suited to larger ( ` smooth ' ) lengthscales , with particle methods having the opposite property .this suggests that higher levels of code organization should be grid - based , interacting with particle information at a lower level of the computational hierarchy . following this central idea, hacc uses a hybrid parallel algorithmic structure , splitting the gravitational force calculation into a specially designed grid - based long / medium range spectral particle - mesh ( pm ) component that is common to all architectures , and an architecture - tunable particle - based short / close - range solver ( fig . [ haccforce ] ) .the grid is responsible for 4 orders of magnitude of dynamic range , while the particle methods handle the critical 2 orders of magnitude at the shortest scales where particle clustering is maximal and the bulk of the time - stepping computation takes place .the computational complexity of the pm algorithm is + , where is the total number of particles , and the total number of grid points .the short - range tree algorithms in hacc can be implemented in ways that are either or , where is the number of particles in individual spatial domains ( ) , while the close - range force computations are where is the number of particles in a tree leaf node within which all direct interactions are summed . 
values can range from in a ` fat leaf ' tree , to as large as in the case of a cpu / gpu implementation ( no mediating tree ) .hacc uses mixed precision computation double precision is used for the spectral component of the code , whereas single precision is adequate for the short / close - range particle force evaluations and particle time - stepping .hacc s long / medium range algorithm is based on a fast , spectrally filtered pm method .the density field is generated from the particles using a cloud - in - cell ( cic ) scheme , but is then smoothed with the ( isotropizing ) spectral filter ^{n_s } , \label{filter}\ ] ] with the nominal choices , .this reduces the anisotropy `` noise '' of the cic scheme by over an order of magnitude without requiring complex and inflexible higher - order spatial particle deposition methods .the noise reduction allows matching the short and longer - range forces at a spacing of 3 grid cells , with important ramifications for performance .the poisson solver uses a sixth - order , periodic , influence function ( spectral representation of the inverse laplacian ) .the gradient of the scalar potential is obtained using higher - order spectral differencing ( fourth - order super - lanczos ) .the `` poisson - solve '' in hacc is the composition of all the kernels above in one single fourier transform ; each component of the potential field gradient then requires an independent fft .hacc uses its own scalable , high performance 3-d fft routine implemented using a 2-d pencil decomposition ( details are given in section [ sec : results ] . ) to obtain the short - range force , the filtered grid force is subtracted from the exact newtonian force .the filtered grid force was obtained numerically to high accuracy using randomly sampled particle pairs and then fitted to an expression with the correct large and small distance asymptotics . because this functional form is needed only over a small , compact region , it can be simplified using a fifth - order polynomial expansion to speed up computations in the main force kernel ( section [ sec : bg ] ) .hacc s spatial domain decomposition is in regular ( non - cubic ) 3-d blocks , but unlike the guard zones of a typical pm method , full particle replication termed ` particle overloading ' is employed across domain boundaries ( fig. 
[ overload ] ) .the typical memory overhead cost for a large run is .the point of overloading is to allow essentially exact medium / long - range force calculations with no communication of particle information and high - accuracy local force calculations with relatively sparse refreshes of the overloading zone ( for details , see ref .the second advantage of overloading is that it frees the local force solver from handling communication tasks , which are taken care of by the long / medium - range force framework .thus new ` on - node ' local methods can be plugged in with guaranteed scalability and only local optimizations are necessary .note that all short - range methods in hacc are local to the mpi - rank and the locality can be fine - grained further .this locality has the key advantage of lowering the number of levels in tree algorithms and being able to parallelize across fine - grained particle interaction sub - volumes .the time - stepping in hacc is based on a 2nd - order split - operator symplectic scheme that sub - cycles the short / close - range evolution within long / medium - range ` kick ' maps where particle positions do not change but the velocities are updated .the relatively slowly evolving longer range force is effectively frozen during the shorter - range time steps , which are a symmetric ` sks ' composition of stream ( position update , velocity fixed ) and kick maps for the short / close - range forces : the number of sub - cycles can vary , depending on the force and mass resolution of the simulation , from . the long / medium - range solver remains unchanged across all architectures .the short / close - range solvers are chosen and optimized depending on the target architecture .these solvers can use direct particle - particle interactions , i.e. , a p m algorithm , as on roadrunner , or use both tree and particle - particle methods as on the ibm bg / p and bg / q ( ` pptreepm ' ) .the availability of multiple algorithms within the hacc framework allows us to carry out careful error analyses , for example , the p m and the pptreepm versions agree to within for the nonlinear power spectrum test in the code comparison suite of ref . . for heterogeneous systems such as roadrunner , or in the near future , titan at olcf, the long / medium - range spectral solver operates at the cpu layer . depending on the memory balance between the cpu and the accelerator , we can choose to specify two different modes , 1 ) grids held on the cpu and particles on the accelerator , or 2 ) a streaming paradigm with grid and particle information primarily resident in cpu memory with computations streamed through the accelerator . in both cases , the local force solve is a direct particle - particle interaction , i.e. , the whole is a p m code with hardware acceleration . for a many - core system ,the top layer of the code remains the same , but the short - range solver changes to a tree - based algorithm which is much better suited to the blue gene and mic architectures. we will provide an in - depth description of our blue gene / q - specific implementation in section [ sec : bg ] .to summarize , the hacc framework integrates multiple algorithms and optimizes them across architectures ; it has several interesting performance - enhancing features , e.g. 
, overloading , spectral filtering and differentiation , mixed precision , and compact local trees .hacc attacks the weaknesses of conventional particle codes in ways made fully explicit in the next section lack of vectorization , indirection , complex data structures , lack of threading , and short interaction lists .finally , weak scaling is a function only of the spectral solver ; hacc s 2-d domain decomposed fft guarantees excellent performance and scaling properties ( see section [ sec : results ] ) .the bg / q is the third generation of the ibm blue gene line of supercomputers targeted primarily at large - scale scientific applications , continuing the tradition of optimizing for price performance , scalability , power efficiency , and system reliability .the new bg / q compute chip ( bqc ) is a system - on - chip ( soc ) design combining cpus , caches , network , and messaging unit on a single chip .a single bg / q rack contains 1024 bg / q nodes like its predecessors .each node contains the bqc and 16 gb of ddr3 memory .each bqc uses 17 augmented 64-bit powerpc a2 cores with specific enhancements for the bg / q : 1 ) 4 hardware threads and a simd quad floating point unit ( quad processor extension , qpx ) , 2 ) a sophisticated l1 prefetching unit ( l1p ) with both stream and list prefetching , 3 ) a wake - up unit to reduce certain thread - to - thread interactions , and 4 ) transactional memory and speculative execution . of the 17 bqc cores , 16 are for user applications and one for handling os interrupts and other system services .each core has access to a private 16 kb l1 data cache and a shared 32 mb multi - versioned l2 cache connected by a crossbar .the a2 core runs at 1.6 ghz and the qpx allows for 4 fmas per cycle , translating to a peak performance per core of 12.8 gflops , or 204.8 gflops for the bqc chip .the bg / q network has a 5-d torus topology ; each compute node has 10 communication links with a peak total bandwidth of 40 gb / s .the internal bqc interconnect has a bisection bandwidth of 563 gb / s . in order to evaluate the short - range force on non - accelerated systems , such as the bg / q ,hacc uses a recursive coordinate bisection ( rcb ) tree in conjunction with a highly - tuned short - range polynomial force kernel . the implementation of the rcb tree , although not the force evaluation scheme , generally follows the discussion in ref .two core principles underlie the high performance of the rcb tree s design ._ spatial locality . _the rcb tree is built by recursively dividing particles into two groups .the dividing line is placed at the center of mass coordinate perpendicular to the longest side of the box . once this line is chosen , the particles are partitioned such that particles in each group occupy disjoint memory buffers .local forces are then computed one leaf node at a time .the net result is that the particle data exhibits a high degree of spatial locality after the tree build ; because the computation of the short - range force on the particles in any given leaf node , by construction , deals with particles only in nearby leaf nodes , the cache miss rate during the force computation is extremely low . _walk minimization ._ in a traditional tree code , an interaction list is built and evaluated for each particle . 
while the interaction list size scales only logarithmically with the total number of particles ( hence the overall complexity ) ,the tree walk necessary to build the interaction list is a relatively slow operation .this is because it involves the evaluation of complex conditional statements and requires `` pointer chasing '' operations .a direct force calculation scales poorly as grows , but for a small number of particles , a thoughtfully - constructed kernel can still finish the computation in a small number of cycles .the rcb tree exploits our highly - tuned short - range force kernels to decrease the overall force evaluation time by shifting workload away from the slow tree - walking and into the force kernel .up to a point , doing this actually speeds up the overall calculation : the time spent in the force kernel goes up but the walk time decreases faster .obviously , at some point this breaks down , but on many systems , tens or hundreds of particles can be in each leaf node before the crossover is reached .we point out that the force kernel is generally more efficient as the size of the interaction list grows : the relative loop overhead is smaller , and more of the computation can be done using unrolled vectorized code .in addition to the performance benefits of grouping multiple particles in each leaf node , doing so also increases the accuracy of the resulting force calculation : the local force is dominated by nearby particles , and as more particles are retained in each leaf node , more of the force from those nearby particles is calculated exactly . in highly - clustered regions ( with very many nearby particles ) ,the accuracy can increase by several orders of magnitude when keeping over 100 particles per leaf node .another important consideration is the tree - node partitioning step , which is the most expensive part of the tree build .the particle data is stored as a collection of arrays the so - called structure - of - arrays ( soa ) format .there are three arrays for the three spatial coordinates , three for the velocity components , in addition to arrays for mass , a particle identifier , etc .our implementation in hacc divides the partitioning operation into three phases .the first phase loops over the coordinate being used to divide the particles , recording which particles will need to be swapped .next , these prerecorded swapping operations are performed on six of the arrays .the remaining arrays are identically handled in the third phase .dividing the work in this way allows the hardware prefetcher to effectively hide the memory transfer latency during the particle partitioning operation and reduces expensive read - after - write dependencies .we now turn to the evaluation of the bg / q - specific short - range force kernel , where the code spends the bulk of its computation time .due to the compactness of the short - range interaction ( cf .section [ sec : hacc ] ) , the kernel can be represented as where , (s)$ ] , and is a short - distance cutoff .this computation must be vectorized to attain high performance ; we do this by computing the force for every neighbor of each particle at once .the list of neighbors is generated such that each coordinate and the mass of each neighboring particle is pre - generated into a contiguous array .this guarantees that 1 ) every particle has an independent list of particles and can be processed within a separate thread ; and 2 ) every neighbor list can be accessed with vector memory operations , because contiguity and alignment 
restrictions are taken care of in advance .every particle on a leaf node shares the interaction list , therefore all particles have lists of the same size , and the computational threads are automatically balanced . the filtering of , i.e. , checking the short - range condition , can be processed during the generation of the neighbor list or during the force evaluation itself ; since the condition is likely violated only in a number of `` corner '' cases , it is advantageous to include it into the force evaluation in a form where ternary operators can be combined to remove the need of storing a value during the force computation .each ternary operator can be implemented with the help of the instruction , which also has a vector equivalent .even though these alterations introduce an ( insignificant ) increase in instruction count , the entire force evaluation routine becomes fully vectorizable . on the bg / q ,the instruction latency is 6 cycles for most floating - point instructions ; latency is hidden from instruction dependencies by a combination of placing the dependent instructions as far as 6 instructions away by using 2-fold loop unrolling and running 4 threads per core .register pressure for the 32 vector floating - point registers is the most important design constraint on the kernel .half of these registers hold values common to all iterations , 6 of which store the coefficients of the polynomial .the remaining registers hold iteration - specific values .because of the 2-fold unrolling , this means that we are restricted to 8 of these registers per iteration . evaluating the force in eq .( [ force ] ) requires a reciprocal square root estimate and evaluating a fifth - order polynomial , but these require only 5 and 2 iteration - specific registers respectively .the most register - intensive phase of the kernel loop is actually the calculation of , requiring 3 registers for the particle coordinates , 3 registers for the components of , and one register for accumulating the value of .there is significant flexibility in choosing the number of mpi ranks versus the number of threads on an individual bg / q node . because of the excellent performance of the memory sub - system and the low overhead of context switching ( due to use of the bqc wake - up unit ) , a large number of openmp threads significantly larger than is considered typical can be run to optimize performance .figure [ threads ] shows how increasing the number of threads per core increases the performance of the force kernel as a function of the size of the particle neighbor list .the best performance is attained when running the maximum number of threads ( 4 ) per core , and at large neighbor list size . the optimum value for the current hacc runs turns out to be 16/4 as it allows for short tree walks , efficient fft computation , and a large fraction of time devoted to the force kernel . the numbers in fig .[ threads ] show that for runs with different parameters , such as high particle loading , the broad performance plateau allows us to use smaller or higher rpn / thread ratios as appropriate .the neighbor list sizes in representative simulations are of order 500 - 2500 . at the lower end of this neighbor list size, performance can be further improved using assembly - level programming , if desired . 
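the structure of the short-range evaluation described above can be illustrated with a compact, vectorized sketch. the version below is schematic only: the fifth-order polynomial coefficients, the softening and the cutoff are placeholders of our own choosing (not the values used in hacc), and numpy stands in for the qpx intrinsics; what it retains is the pattern of one pre-gathered, contiguous neighbour list shared by a leaf, a softened inverse-cube term minus a polynomial grid correction, and the cutoff applied as a select rather than a branch:

    import numpy as np

    # placeholder coefficients (ascending order) of a fitted fifth-order polynomial
    # in s = |r|^2 -- illustrative values only, not the coefficients used in hacc
    POLY = np.array([0.3, -0.2, 0.1, -0.05, 0.01, -0.001])
    EPS2 = 1e-4    # assumed plummer-like softening
    RMAX2 = 9.0    # assumed short-range cutoff squared (grid units)

    def short_range_accel(xi, x_nbr, m_nbr):
        """acceleration on one particle xi from its pre-gathered neighbour list."""
        d = x_nbr - xi                       # (n_nbr, 3) separations
        s = np.einsum('ij,ij->i', d, d)      # |r|^2 for every neighbour, vectorized
        inv32 = (s + EPS2) ** -1.5           # "exact" newtonian part (softened)
        poly = np.polyval(POLY[::-1], s)     # subtracted filtered-grid contribution
        f = m_nbr * (inv32 - poly)
        f = np.where((s > 0.0) & (s < RMAX2), f, 0.0)   # cutoff as a select, no branch
        return (f[:, None] * d).sum(axis=0)

    # every particle of a leaf shares the same neighbour list, so the loop over
    # the particles of a leaf is trivially threadable and automatically balanced.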
at the chosen 16/4 operating point, the code spends 80% of the time in the highly optimized force kernel , 10% in the tree walk , and 5% in the fft , all other operations ( tree build , cic deposit ) adding up to another 5% .note that the actual fraction of peak performance attained in the force kernel is close to 80% as against a theoretical maximum value of 81% .the 26 instructions in the kernel correspond to a maximum of 208 flops if they were all fmas , whereas in the actual implementation , 16 of them are fmas yielding a total flop count of implying a theoretical maximum value of or 81% of peak .the high efficiency is due to successful use of the stream prefetch ; we have measured the latency of the l2 cache to be approximately 45 clock cycles , thus even a single miss per iteration would be enough to significantly degrade the kernel performance .we present performance data for three cases : 1 ) weak scaling of the poisson solver up to 131,072 ranks on different architectures for illustrative purposes ; 2 ) weak scaling at parallel efficiency for the full code on the bg / q with up to 1,572,864 cores ( 96 racks ) ; and 3 ) strong scaling of the full code on the bg / q up to 16,384 cores with a fixed - size realistic problem to explore future systems with lower memory per core .all the timing data were taken over many tens of sub - steps , each individual run taking between 4 - 5 hours . to summarize our findings ,both the long / medium range solver and the full code exhibit perfect weak scaling out to the largest system we have had access to so far ; we achieved a performance of 13.94 pflops on 96 racks , at around 67 - 69% of peak in most cases ( up to 69.75% ) . the full code demonstrates strong scaling up to one rack on a problem with 1024 particles .finally , the biggest test run evolved more than 3.6 trillion particles ( 15,360 ) , exceeding by more than an order of magnitude , the largest high - resolution cosmology run performed to date . as discussed in section [ science ] , the results of runs at this scale can be used for many scientific investigations . .fft scaling on up to 10240 grid points on the bg / q [ cols="^,^,^",options="header " , ] the evolution of many - core based architectures is strongly biased towards a large number of ( possibly heterogeneous ) cores per compute node .it is likely that the ( memory ) byte / flop ratio could easily evolve to being worse by a factor of 10 than it is for the bg / q , and this continuing trend will be a defining characteristic of exascale systems . for these future - looking reasons the arrival of the strong - scaling barrier for large - scale codes and for optimizing wall - clock for fixed problem size , it is important to study the robustness of the strong scaling properties of the hacc short / close - range algorithms. we designed the test with a particle problem running on one rack from 512 to 16384 cores , spanning a per node memory utilization factor of approximately 57% , a typical production run value , to as low as 7% .the actual memory utilization factor scales by a factor of 8 , instead of 32 , because on 16384 nodes we are running a ( severely ) unrealistically small simulation volume per rank with high particle overloading memory and compute cost . 
despite this ` abuse ' of the hacc algorithms , which are designed to run at of per node memory utilization to about a factor of 4 less ( ) , the strong scaling performance , as depicted in fig .7 ( associated data in table [ tab : perf3 ] ) is impressive .the performance stays near - ideal throughout , as does the push - time per particle per step up to 8192 cores , slowing down at 16384 cores , only because of the extra computations in the overloaded regions .therefore , we expect the basic algorithms to work extremely well in situations where the byte / flop ratio is significantly smaller than that of the current optimum plateau for the bg / q .dark energy is one of the most pressing puzzles in physics today .hacc will be used to investigate the signatures of different dark energy models in the detail needed to analyze upcoming cosmological surveys .the cosmology community has mostly focused on one cosmological model and a handful of ` hero ' runs to study it . with hacc, we aim to systematically study dark energy model space at extreme scales and derive not only qualitative signatures of different dark energy scenarios but deliver quantitative predictions of unprecedented accuracy urgently needed by the next - generation surveys .the simulations can be used to interpret observations of various kinds , such as weak gravitational lensing measurements to map the distribution of dark matter in the universe , measurements of the distribution of galaxies and clusters , from the largest to the smallest scales , measurements of the growth and distribution of cosmic structure , gravitational lensing of the cosmic microwave background , and many more .we now show illustrative results from a science test run on 16 racks of mira , the bg / q system now under acceptance testing at the alcf . in this simulation , 10240 particlesare evolved in a ( 9.14 gpc) volume box .this leads to a particle mass , m , allowing us to resolve halos that host , e.g. , luminous red galaxies ( lrgs ) , a main target of the sloan digital sky survey .the test simulation was started at an initial redshift of ( our actual science runs have ) and evolved until today ( redshift ) .we stored a slice of the three - dimensional density at the final time ( only a small file system was available during this run ) , as well as a subset of the particles and the mass fluctuation power spectrum at 10 intermediate snapshots .the total run time of the simulation was approximately 14 hours .as the evolution proceeds , the particle distribution transitions from essentially uniform to extremely clustered ( see fig . [ evolv ] ) .the local density contrast can increase by five orders of magnitude during the evolution .nevertheless , the wall - clock per time step does not change much over the entire simulation .the large dynamic range of the simulation is demonstrated in fig .the outer image shows the full simulation volume ( 9.14 gpc on a side ) . in this case , structures are difficult to see because the visualization can not encompass the dynamic range of the simulation . successively zooming into smaller regions , down to a ( 7 mpc) sub - volume holding a large dark matter - dominated halo gives some impression of the enormous amount of information contained in the simulation .the zoomed - in halo corresponds to a cluster of galaxies in the observed universe . 
note that in this region the actual ( formal ) force resolution of the simulation is 0.007 mpc , a further factor of 1000 smaller than the sub - volume size !a full simulation of the type described is extremely science - rich and can be used in a variety of ways to study cosmology as well as to analyze available observations .below we give two examples of the kind of information that can be gained from large scale structure simulations .( we note that the test run is more than three times bigger than the largest high - resolution simulation available today . )clusters are very useful probes of cosmology as the largest gravitationally bound structures in the universe , they form very late and are hence sensitive probes of the late - time acceleration of the universe .figure [ cluster ] gives an example of the detailed information available in a simulation , allowing the statistics of halo mergers and halo build - up through sub - halo accretion to be studied with excellent statistics .in addition , the number of clusters as a function of their mass ( the mass function ) , is a powerful cosmological probe .simulations provide precision predictions that can be compared to observations .the new hacc simulations will not only predict the mass function ( as a function of cosmological models ) at unprecedented accuracy , but also the probability of finding very massive clusters in the universe .large - volume simulations are required to determine the abundance of these very rare objects reliably .cosmological information resides in the nature of material structure and also in how structures grow with time . to test whether general relativity correctly describes the dynamics of the universe ,information related to structure growth ( evolution of clustering ) is essential .figure [ evolv ] shows how structure evolves in the simulation .large - volume simulations are essential in producing predictions for statistical quantities such as galaxy correlation functions and the associated power spectra with small statistical errors in order to compare the predictions against observations .figure [ power ] shows how the power spectrum evolves as a function of time in the science test run . at small wavenumbers ,the evolution is linear , but at large wavenumbers it is highly nonlinear , and can not be obtained by any method other than direct simulation . 
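as an illustration of how a statistic such as the power spectrum of fig. [power] is extracted from a particle snapshot, the sketch below deposits particles on a mesh with cloud-in-cell weights, fourier-transforms the density contrast and shell-averages $|\delta_k|^2$. it is a minimal, unoptimized version (no cic deconvolution, no shot-noise subtraction, simple normalization conventions) and is not the analysis pipeline used for the runs described here:

    import numpy as np

    def power_spectrum(pos, box, ng=128, nbins=40):
        """pos: (n,3) positions in [0, box); returns bin centres k and P(k)."""
        # cloud-in-cell deposit onto an ng^3 mesh
        rho = np.zeros((ng, ng, ng))
        g = pos / box * ng
        i0 = np.floor(g).astype(int)
        f = g - i0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                         * np.abs(1 - dz - f[:, 2]))
                    np.add.at(rho, ((i0[:, 0] + dx) % ng, (i0[:, 1] + dy) % ng,
                                    (i0[:, 2] + dz) % ng), w)
        delta = rho / rho.mean() - 1.0                  # density contrast
        dk = np.fft.rfftn(delta) * (box / ng) ** 3      # approximate continuum convention
        pk3d = np.abs(dk) ** 2 / box ** 3
        # shell-average over |k|
        kf = 2 * np.pi / box
        kx = np.fft.fftfreq(ng, d=1.0 / ng) * kf
        kz = np.fft.rfftfreq(ng, d=1.0 / ng) * kf
        kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
        bins = np.linspace(kf, kmag.max(), nbins + 1)
        which = np.digitize(kmag.ravel(), bins) - 1
        valid = (which >= 0) & (which < nbins)
        counts = np.bincount(which[valid], minlength=nbins)
        sums = np.bincount(which[valid], weights=pk3d.ravel()[valid], minlength=nbins)
        return 0.5 * (bins[1:] + bins[:-1]), sums / np.maximum(counts, 1)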
to summarize , armed with large - scale simulations we can study and evaluate many cosmological probes .these probes involve the statistical measurements of the matter distribution at a given epoch ( such as the power spectrum and the mass function ) as well as their evolution .in addition , the occurrence of rare objects such as very massive clusters can be investigated in the simulations we will carry out with hacc .these are exciting times for users of the bg / q : in the us , two large systems are undergoing acceptance at livermore ( sequoia , 96 racks ) and at argonne ( mira , 48 racks ) .as shown here , hacc scales easily to 96 racks .our next step is to exploit the power of the new systems with the aim of carrying out a suite of full science runs with hundreds of billions to trillions of simulation particles .because hacc s performance and scalability do not rely on the use of vendor - supplied or other ` black box ' high - performance libraries or linear algebra packages , it retains the key advantage of allowing code optimization to be a continuous process ; we have identified several options to enhance the performance as reported here .an initial step is to fully thread all the components of the long - range solver , in particular the forward cic algorithm .next , we will improve ( nodal ) load balancing by using multiple trees at each rank , enabling an improved threading of the tree - build . while the force kernel already runs at very high performance, we can improve it further with lower - level implementations in assembly .many of the ideas and methods presented here are relatively general and can be re - purposed to benefit other hpc applications .in addition , hacc s extreme continuous performance contrasts with the more bursty stressing of the bg / q architecture by linpack ; this feature has allowed hacc to serve as a valuable stress test in the mira and sequoia bring - up process . to summarize ,we have demonstrated outstanding performance at close to 14 pflops on the bg / q ( 69% of peak ) using more than 1.5 million cores and mpi ranks , at a concurrency level of 6.3 million .we are now ready to carry out detailed large - volume n - body cosmological simulations at the size scale of trillions of particles .we are indebted to bob walkup for running hacc on a prototype bg / q system at ibm and to dewey dasher for help in arranging access . at anl, we thank susan coghlan , paul messina , mike papka , rick stevens , and tim williams for obtaining allocations on different blue gene systems . at llnl , we are grateful to brian carnes , kim cupps , david fox , and michel mccoy for providing access to sequoia .we thank ray loy and venkat vishwanath for their contributions to system troubleshooting and parallel i / o .we acknowledge the efforts of the alcf operations team for their assistance in running on mira and the veas bg / q system , in particular , to paul rich , adam scovel , tisha stacey , and william scullin for their tireless efforts to keep mira and veas up and running and helping us carry out the initial long - duration science test runs .this research used resources of the alcf , which is supported by doe / sc under contract de - ac02 - 06ch11357 .albrecht , a. , g. bernstein , r. cahn , w.l .freedman , j. hewitt , w. hu , j. huth , m. kamionkowski , e.w .kolb , l. knox , j.c .mather , s. staggs , n.b .suntzeff , dark energy task force report , arxiv : astro - ph/0609591v1 .chen , d. , n.a .eisley , p. heidelberger , r.m .senger , y. sugawara , s. kumar , v. 
salapura , d.l .satterfield , b. steinmacher - burowin , and j.j .parker , in sc11 , proceedings of the 2011 int .conf . high performance computing networking storage and analysis ( 2011 ) .doe ascr and doe hep joint extreme scale computing report ( 2009 ) `` challenges for understanding the quantum universe and the role of computing at the extreme scale '' , http://extremecomputing.labworks.org/highenergyphysics/index.stm haring , r.a . , m. ohmacht , t.w .fox , m.k .gschwind , p.a .boyle , n.h .christ , c. kim , d.l .satterfield , k. sugavanam , p.w .coteus , p. heidelberger , m.a .blumrich , r.w .wisniewski , a. gara , and g.l .chiu , ieee micro * 32 * , 48 ( 2012 ) .menanteau , f. , j.p .hughes , c. sifon , m. hilton , j. gonzalez , l. infante , l.f .barrientos , a.j .baker , j.r .bond , s. das , m.j .devlin , j. dunkley , a. hajian , a.d .hincks , a. kosowsky , d. marsden , t.a .marriage , k. moodley , m.d .niemack , m.r .nolta , l.a .page , e.d .reese , n. sehgal , j. sievers , d.n .spergel , s.t .staggs , e. wollack , astrophys .j. * 748 * , 1 ( 2012 ) .pfalzner , s. , and p. gibbon , _ many - body tree methods in physics _ ( cambridge university press , 1996 ) ; see also barnes , j. , and p. hut , nature * 324 * , 446 ( 1986 ) ; warren , m.s . and j.k. salmon , technical paper , supercomputing 1993 .wittman , d.m .tyson , i.p .dellantonio , a.c .becker , v.e .margoniner , j. cohen , d. norman , d. loomba , g. squires , g. wilson , c. stubbs , j. hennawi , d. spergel , p. boeshaar , a. clocchiatti , m. hamuy , g. bernstein , a. gonzalez , p. guhathakurta , w. hu , u. seljak , and d. zaritsky , arxiv : astro - ph/0210118 . | remarkable observational advances have established a compelling cross - validated model of the universe . yet , two key pillars of this model dark matter and dark energy remain mysterious . sky surveys that map billions of galaxies to explore the ` dark universe ' , demand a corresponding extreme - scale simulation capability ; the hacc ( hybrid / hardware accelerated cosmology code ) framework has been designed to deliver this level of performance now , and into the future . with its novel algorithmic structure , hacc allows flexible tuning across diverse architectures , including accelerated and multi - core systems . on the ibm bg / q , hacc attains unprecedented scalable performance currently 13.94 pflops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores with an equal number of mpi ranks , and a concurrency of 6.3 million . this level of performance was achieved at extreme problem sizes , including a benchmark run with more than 3.6 trillion particles , significantly larger than any cosmological simulation yet performed . |
due to the increasing use of wireless technology in communication networks , there has been a significant amount of research on methods of improving wireless performance .while there are many ways of measuring wireless performance , a good first step ( which has been extensively studied ) is the notion of _ capacity_. given a collection of communication links , the capacity of a network is simply the maximum number of simultaneously satisfiable links .this can obviously depend on the exact model of wireless communication that we are using , but is clearly an upper bound on the usefulness " of the network .there has been a large amount of research on analyzing the capacity of wireless networks ( see e.g. ) , and it has become a standard way of measuring the quality of a network . because of this , when introducing a new technology it is interesting to analyze its affect on the capacity .for example , we know that in certain cases giving transmitters the ability to control their transmission power can increase the capacity by or , where is the ratio of the longest link length to the smallest transmitter - receiver distance , and can clearly never decrease the capacity . however , while the capacity might improve , it is not nearly as clear that the _ achieved _ capacity will improve . after all , we do not expect our network to actually have performance that achieves the maximum possible capacity .we show that not only might these improved technologies not help , they might in fact _ decrease _ the achieved network capacity . following andrews and dinitz and sgeirsson and mitra , we model each link as a self - interested agent and analyze various types of game - theoretic behavior ( nash equilibria and no - regret behavior in particular ) .we show that a version of _ braess s paradox _ holds : adding new technology to the networks ( such as the ability to control powers ) can actually decrease the average capacity at equilibrium .our main results show that in the context of wireless networks , and particularly in the context of the sinr model , there is a version of _ braess s paradox _ . in his seminal paper ,braess studied congestion in road networks and showed that adding additional roads to an existing network can actually make congestion _worse _ , since agents will behave selfishly and the additional options can result in worse equilibria .this is completely analogous to our setting , since in road networks adding extra roads can not hurt the network in terms of the value of the optimum solution , but can hurt the network since the _ achieved _ congestion gets worse . in this workwe consider the physical model ( also called the sinr model ) , pioneered by moscibroda and wattenhofer and described more formally in section [ sec : models ] .intuitively , this model works as follows : every sender chooses a transmission power ( which may be pre - determined , e.g. due to hardware limitations ) , and the received power decreased polynomially with the distance from the sender . a transmission is successful if the received power from the sender is large enough to overcome the interference caused by other senders plus the background noise . with our baseline being the sinr model ,we then consider four ways of improving " a network : adding power control , adding interference cancellation , adding both power control and interference cancellation , and decreasing the sinr threshold . 
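To make the model concrete, the following sketch spells out the SINR success criterion and the (brute-force) capacity of a small set of links. It is only an illustration of the definitions above: the `Link` class, the path-loss exponent `alpha`, the SINR threshold `beta` and the noise term are illustrative names and default values rather than notation taken from this paper, and the exhaustive subset search is exponential and meant only for tiny instances.

```python
import itertools
import math

class Link:
    def __init__(self, sender, receiver, power=1.0):
        self.sender = sender      # (x, y) position of the transmitter
        self.receiver = receiver  # (x, y) position of the receiver
        self.power = power        # transmission power

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def received_power(tx_pos, rx_pos, power, alpha=3.0):
    # received signal strength decays polynomially with distance (exponent alpha)
    return power / dist(tx_pos, rx_pos) ** alpha

def successful(link, active_links, alpha=3.0, beta=1.0, noise=0.0):
    # a link succeeds if its signal-to-interference-plus-noise ratio is >= beta
    signal = received_power(link.sender, link.receiver, link.power, alpha)
    interference = sum(
        received_power(other.sender, link.receiver, other.power, alpha)
        for other in active_links if other is not link
    )
    return signal >= beta * (interference + noise)

def capacity(links, alpha=3.0, beta=1.0, noise=0.0):
    # capacity = size of the largest simultaneously successful subset of links
    for r in range(len(links), 0, -1):
        for subset in itertools.combinations(links, r):
            if all(successful(l, subset, alpha, beta, noise) for l in subset):
                return r
    return 0

if __name__ == "__main__":
    links = [Link((0, 0), (1, 0)), Link((10, 0), (11, 0)), Link((5, 0), (5.5, 0))]
    print("capacity:", capacity(links))
```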
with all of these modificationsit is easy to see that the optimal capacity can only increase , but we will show that the equilibria can become worse . thus improving " a network might actually result in worse performance . the game - theoretic setup that we useis based on and will be formally described in section [ sec : game - theory ] , but we will give an overview here .we start with a game in which the players are the links , and the strategies depend slightly on the model but are essentially possible power settings at which to transmit .the utilities depend on whether or not the link was successful , and whether or not it even attempted to transmit . in a pure nash equilibrium every player has a strategy ( i.e. power setting ) and has no incentive to deviate : any other strategy would result in smaller utility . in a mixed nash equilibriumevery link has a probability distribution over the strategies , and no link has any incentive to deviate from their distribution .finally , no - regret behavior is the empirical distribution of play when all players use _ no - regret _ algorithms , which are a widely used and studied class of learning algorithms ( see section [ sec : game - theory ] for a formal definition ) .it is reasonably easy to see that any pure nash is a mixed nash , and any mixed nash is a no - regret behavior .for all of these , the quality of the solution is the achieved capacity , i.e. the average number of successful links .our first result is for interference cancellation ( ic ) , which has been widely proposed as a practical method of increasing network performance .the basic idea of interference cancellation is quite simple .first , the strongest interfering signal is detected and decoded .once decoded , this signal can then be subtracted ( canceled " ) from the original signal .subsequently , the next strongest interfering signal can be detected and decoded from the now cleaner " signal , and so on .as long as the strongest remaining signal can be decoded in the presence of the weaker signals , this process continues until we are left with the desired transmitted signal , which can now be decoded .this clearly can increase the capacity of the network , and even in the worst case can not decrease it . and yet due to bad game - theoretic interactionsit might make the achieved capacity worse : [ thm : ic - intro ] there exists a set of links in which the _ best _ no - regret behavior under interference cancellation achieves capacity at most times the _ worst _ no - regret behavior without interference cancellation , for some constant .however , for every set of links the worst no - regret behavior under interference cancellation achieves capacity that is at least a constant fraction of the best no - regret behavior without interference cancellation .thus ic can make the achieved capacity worse , but only by a constant factor .note that since every nash equilibrium ( mixed or pure ) is also no - regret , this implies the equivalent statements for those type of equilibria as well . 
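The successive-cancellation procedure described above can be stated compactly in code. The sketch below assumes the `received_power` helper and the `beta`/`noise` conventions from the previous snippet are in scope, and is only a schematic reading of the informal description: decode and subtract the strongest remaining signal as long as it clears the SINR threshold against everything weaker, and declare success once the desired sender itself is decodable.

```python
def successful_with_ic(link, active_links, alpha=3.0, beta=1.0, noise=0.0):
    # interference cancellation at link's receiver: repeatedly decode (and cancel)
    # the strongest remaining signal until the desired sender can be decoded,
    # or until the strongest remaining signal itself cannot be decoded ("stuck")
    rx = link.receiver
    remaining = [(received_power(l.sender, rx, l.power, alpha), l) for l in active_links]
    remaining.sort(key=lambda t: t[0], reverse=True)
    while remaining:
        strongest_power, strongest_link = remaining[0]
        weaker = sum(p for p, _ in remaining[1:])
        if strongest_power < beta * (weaker + noise):
            return False                  # stuck: cannot decode the strongest signal
        if strongest_link is link:
            return True                   # the desired signal is decodable
        remaining.pop(0)                  # cancel the decoded interferer and continue
    return False
```

With this in hand, the brute-force capacity function from the previous sketch can be rerun with `successful_with_ic` in place of `successful` to see how cancellation enlarges the set of feasible subsets.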
in this result ( as in most of our examples ) we only show a small network ( links ) with no background noise , but these are both only for simplicity it is easy to incorporate constant noise , and the small network can be repeated at sufficient distance to get examples with an arbitrarily large number of links .we next consider power control ( pc ) , where senders can choose not just whether to transmit , but at what power to transmit .it turns out that any equilibrium without power control is also an equilibrium with power control , and thus we can not hope to find an example where the best equilibrium with power control is worse than the worst equilibrium without power control ( as we did with ic ) .instead , we show that adding power control can create worse equilibria : [ thm : pc - intro ] there exists a set of links in which there is a pure nash equilibrium with power control of value at most times the value of the worst no - regret behavior without power control , for some constant .however , for every set of links the worst no - regret behavior with power control has value that is at least a constant fraction of the value of the _ best _ no - regret behavior without power control .note that the first part of the theorem implies that not only is there a pure nash with low - value ( with power control ) , there are also mixed nash and no - regret behaviors with low value ( since any pure nash is also mixed and no - regret ) .similarly , the second part of the theorem gives a bound on the gap between the worst and the best mixed nashes , and the worst and the best pure nashes .our third set of results is on the combination of power control and interference cancellation .it turns out that the combination of the two can be quite harmful .when compared to either the vanilla setting ( no interference cancellation or power control ) or the presence of power control without interference cancellation , the combination of ic and pc acts essentially as in theorem [ thm : pc - intro ] : pure nash equilibria are created that are worse than the previous worst no - regret behavior , but this can only be by a constant factor .on the other hand , this factor can be super - constant when compared to equilibria that only use interference cancellation .let be the ratio of the length of the longest link to the minimum distance between any sender and any receiver .there exists a set of links in which the worst pure nash with both pc and ic ( and thus the worst mixed nash or no - regret behavior ) has value at most times the value of the worst no - regret behavior with just ic .however , for every set of links the worst no - regret behavior with both pc and ic has value at least times the value of the best no - regret behavior with just ic .this theorem means that interference cancellation changes the game " : if interference control were not an option then power control can only hurt the equilibria by a constant amount ( from theorem [ thm : pc - intro ] ) , but if we assume that interference cancellation is present then adding power control can hurt us by . thus when deciding whether to use both power control and interference cancellation , one must be particularly careful to analyze how they act in combination .finally , we consider the effect of decreasing the sinr threshold ( this value will be formally described in section [ sec : models ] ) . 
we show that , as with ic , there are networks in which a decrease in the sinr threshold can lead to _ every _ equilibrium being worse than even the worst equilibrium at the higher threshold , despite the capacity increasing or staying the same : there exists a set of links and constants in which the best no - regret behavior under threshold has value at most times the value of the worst no - regret behavior under threshold , for some constant . however , for any set of links and any the value of the worst no - regret behavior under is at least a constant fraction of the value of the best no - regret behavior under .our main network constructions illustrating braess s paradox in the studied settings are summarized in fig .[ fig : nash_all ] . schematic illustration of the main lower bounds illustrating the braess s paradox with ( a ) ic : a network in which every no - regret behavior without ic is better than any no - regret behavior solution with ic ; ( b ) pc : a network in which there exists a pure nash equilibrium with pc which is worse than any no - regret behavior with ic ; ( c ) pic : a network with a pure nash equilibrium in the pic setting which is worse than any no - regret behavior in the ic setting but _ without _ power control ; and ( d ) decreased sinr threshold : a network in which every no - regret behavior with has a smaller value

than any no - regret behavior with higher sinr threshold .edge weights represent distances ., title="fig : " ] the capacity of _ random _ networks was examined in the seminal paper of gupta and kumar , who proved tight bounds in a variety of models .but only recently has there been a significant amount of work on algorithms for determining the capacity of _ arbitrary _ networks , particularly in the sinr model .this line of work began with goussevskaia , oswald , and wattenhofer , who gave an -approximation for the uniform power setting ( i.e. the vanilla model we consider ) .goussevskaia , halldrson , wattenhofer , and welzl then improved this to an -approximation ( still under uniform powers ) , while andrews and dinitz gave a similar -approximation algorithm for the power control setting .this line of research was essentially completed by an -approximation for the power control setting due to kesselheim . in parallel to the work on approximation algorithms , there has been some work on using game theory ( and in particular the games used in this paper ) to help design distributed approximation algorithms .this was begun by andrews and dinitz , who gave an upper bound of on the price of anarchy for the basic game defined in section [ sec : game - theory ] .but since computing the nash equilibrium of a game is ppad - complete , we do not expect games to necessarily converge to a nash equilibrium in polynomial time .thus dinitz strengthened the result by showing the same upper bound of for no - regret behavior .this gave the first distributed algorithm with a nontrivial approximation ratio , simply by having every player use a no - regret algorithm .the analysis of the same game was then improved to by sgeirsson and mitra .there is very little work on interference cancellation in arbitrary networks from an algorithmic point of view , although it has been studied quite well from an information - theoretic point of view ( see e.g. ) .recently avin et al . studied the topology of _ sinr diagrams _ under interference cancellation , which are a generalization of the sinr diagrams introduced by avin et al . and further studied by kantor et al . for the sinr model without interference cancellation .these diagrams specify the reception zones of transmitters in the sinr model , which turn out to have several interesting topological and geometric properties but have not led to a better understanding of the fundamental capacity question .we model a wireless network as a set of links in the plane , where each link represents a communication request from a sender to a receiver .the senders and receivers are given as points in the euclidean plane .the euclidean distance between two points and is denoted .the distance between sender and receiver is denoted by .we adopt the physical model ( sometime called the sinr model ) where the received signal strength of transmitter at the receiver decays with the distance and it is given by , where ] .interference cancellation allows receivers to cancel signals that they can decode .consider link .if can decode the signal with the largest received signal , then it can decode it and remove it .it can repeat this process until it decodes its desire message from , unless at some point it gets stuck and can not decode the strongest signal .formally , can decode if ( i.e. 
it can decode in the presence of weaker signals ) and if it can decode for all links with .link is successful if can decode .the following key notion , which was introduced in and extended to arbitrary powers by , plays an important role in our analysis .the _ affectance _ of link caused by another link with a given power assignment vector is defines to be , where .informally , indicates the amount of ( normalized ) interference that link causes at link .it is easy to verify that link is successful if and only if .when the powers of are the same for every , ( i.e. uniform powers ) , we may omit it and simply write .for a set of links and a link , the total affectance caused by is . in the same manner ,the total affectance caused by on the link is .we say that a set of links is _-feasible _ if for all , i.e. every link achieves sinr above the threshold ( and is thus successful even without interference cancellation ) .it is easy to verify that is -feasible if and only if for every .following , we say that a link set is _ amenable _ if the total affectance caused by any single link is bounded by some constant , i.e. , for every .the following basic properties of amenable sets play an important role in our analysis . [ fc : amenable ] ( a ) every feasible set contains a subset , such that is amenable and .+ ( b ) for every amenable set that is -feasible with uniform powers , for every other link , it holds that . we will use a game that is essentially the same as the game of andrews and dinitz , modified only to account for the different models we consider .each link is a player with possible strategies : broadcast at power , or at integer power .a link has utility if it is successful , has utility if it uses nonzero power but is unsuccessful , and has utility if it does not transmit ( i.e. chooses power ) . note that if power control is not available , this game only has two strategies : power and power .let denote the set of possible strategies .strategy profile _ is a vector in , where the component is the strategy played by link . for each link ,let be the function mapping strategy profiles to utility for link as described . given a strategy profile ,let denote the profile without the component , and given some strategy let denote the utility of if it uses strategy and all other links use their strategies from .a pure nash equilibrium is a strategy profile in which no player has any incentive to deviate from their strategy .formally , is a pure nash equilibrium if for all and players . in a mixed nash equilibrium ,every player has a probability distribution over , and the requirement is that no player has any incentive to change their distribution to some .so \geq { \mathop{\mathbb e}}[f_i(\pi ' , a_{-i})] ] .our main claim is that braess s paradox is once again possible : there are networks in which adding power control can create worse equilibria . for illustration of such a network ,[ fig : nash_all](b ) .we first observe the following relation between no - regret solutions with or without power control .[ obs : nash_pc_contain ] every no - regret solution in the uniform setting , is also a no - regret solution in the pc setting .hence , we can not expect the best no - regret solution in the pc setting to be smaller than the worst no - regret solution in the uniform setting . 
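As a concrete (and deliberately generic) picture of what "no-regret behavior" means for this game, the sketch below lets every link run a standard multiplicative-weights (Hedge) learner over its two strategies, stay silent or transmit at the uniform power. It assumes the `Link` class and `successful` predicate from the earlier SINR sketch are in scope; the utility convention (+1 for a successful transmission, -1 for a failed one, 0 for silence), the learning rate, the horizon and the full-information feedback (each link evaluates both of its own strategies against what the others actually played in that round) are illustrative modelling choices, not the specific algorithms analyzed in the papers cited above.

```python
import random

def no_regret_play(links, rounds=2000, eta=0.1, alpha=3.0, beta=1.0, noise=0.0):
    # each link keeps a weight per strategy: index 0 = stay silent, 1 = transmit
    weights = [[1.0, 1.0] for _ in links]
    avg_value = 0.0
    for _ in range(rounds):
        probs = [w[1] / (w[0] + w[1]) for w in weights]
        transmit = [random.random() < p for p in probs]
        active = [l for l, t in zip(links, transmit) if t]
        # value of this round = number of successful transmissions
        avg_value += sum(
            1 for l, t in zip(links, transmit)
            if t and successful(l, active, alpha, beta, noise)
        ) / rounds
        # Hedge update: evaluate both own strategies against the others' actual play
        for i, link in enumerate(links):
            others = [l for j, (l, t) in enumerate(zip(links, transmit))
                      if t and j != i]
            ok = successful(link, others + [link], alpha, beta, noise)
            u = [0.0, 1.0 if ok else -1.0]          # utilities of silent / transmit
            for a in (0, 1):
                weights[i][a] *= (1.0 + eta) ** ((u[a] + 1.0) / 2.0)  # rescaled to [0,1]
    return avg_value   # empirical average number of successful links (achieved capacity)

if __name__ == "__main__":
    links = [Link((0, 0), (1, 0)), Link((2, 0), (3, 0)), Link((4, 0), (5, 0))]
    print("average achieved capacity:", no_regret_play(links))
```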
yet, the paradox still holds .[ thm : pmax_nash_lb ] there exists a configuration of links satisfying for some constant .we now prove that ( as with ic ) that the paradox can not be too bad : adding power control can not cost us more than a constant .the proof is very similar to that of thm .[ thm : nash_ic_upper ] up to some minor yet crucial modifications .[ thm : nash_pc_upper ] for any set of links and some constant .[ cor : pmax_price ] the price of total anarchy under the power control setting with maximum transmission energy is .in this section we consider games in the power control with ic setting where _ transmitters _ can adopt their transmission energy in the range of $ ] and in addition , _ receivers _ can employ interference cancelation .this setting is denote as ( power control+ic ) .we show that braess s paradox can once again happen and begin by comparing the pic setting to the setting of power control without ic and to the most basic setting of uniform powers .[ lemma : pic_pcunilower ] there exists a set of links and constant such that + ( a ) .+ ( b ) .+ moreover , we proceed by showing that pic can hurt the network by more than a constant when comparing pic equilibria to ic equilibria . for an illustration of such a network ,[ fig : nash_all](c ) .[ theorem : ic_pc_lower ] there exists a set of links and constant such that the best pure nash solution with pic is worse by a factor of than the worst no - regret solution with ic .there exists a set of links satisfying that . as in the previous sections ,we show that our examples are essentially tight .[ lem : nash_ic_pc_upper ] for every set of links it holds that there exists a constant such that + ( a ) .+ ( b ) .+ ( c ) . finally , as a direct consequences of our result , we obtain a tight bound for the price of total anarchy in the pic setting .[ cor : pic_price ] for every set of links it holds that the price of total anarchy with pic is .we begin by showing that in certain cases the ability to successfully decode a message at a lower sinr threshold results in _ every _ no - regret solution having lower value than _ any _ no - regret solution at higher . for an illustration of such a network ,[ fig : nash_all](d ) .[ thm : beta_nash_lb ] there exists a set of links and constants such that for some constant .we now show that the gap between the values of no - regret solution for different sinr threshold values is bounded by a constant .[ lem : beta_nash_lb ] for every and every set of links satisfying that for every , it holds that for some constant .in this paper we have shown that braess s paradox can strike in wireless networks in the sinr model : improving technology can result in worse performance , where we measured performance by the average number of successful connections .we considered adding power control , interference cancellation , both power control and interference cancellation , and decreasing the sinr threshold , and in all of them showed that game - theoretic equilibria can get worse with improved technology .however , in all cases we bounded the damage that could be done .there are several remaining interesting open problems .first , what other examples of wireless technology exhibit the paradox ?second , even just considering the technologies in this paper , it would be interesting to get a better understanding of when exactly the paradox occurs .can we characterize the network topologies that are susceptible ?is it most topologies , or is it rare ?what about random wireless networks ? 
finally ,while our results are tight up to constants , it would be interesting to actually find tight constants so we know precisely how bad the paradox can be . | when comparing new wireless technologies , it is common to consider the effect that they have on the capacity of the network ( defined as the maximum number of simultaneously satisfiable links ) . for example , it has been shown that giving receivers the ability to do interference cancellation , or allowing transmitters to use power control , never decreases the capacity and can in certain cases increase it by , where is the ratio of the longest link length to the smallest transmitter - receiver distance and is the maximum transmission power . but there is no reason to expect the optimal capacity to be realized in practice , particularly since maximizing the capacity is known to be np - hard . in reality , we would expect links to behave as self - interested agents , and thus when introducing a new technology it makes more sense to compare the values reached at game - theoretic equilibria than the optimum values . in this paper we initiate this line of work by comparing various notions of equilibria ( particularly nash equilibria and no - regret behavior ) when using a supposedly better " technology . we show a version of braess s paradox for all of them : in certain networks , upgrading technology can actually make the equilibria _ worse _ , despite an increase in the capacity . we construct instances where this decrease is a constant factor for power control , interference cancellation , and improvements in the sinr threshold ( ) , and is when power control is combined with interference cancellation . however , we show that these examples are basically tight : the decrease is at most for power control , interference cancellation , and improved , and is at most when power control is combined with interference cancellation . |
it is well known that the tsallis entropy and fisher information entropy ( matrix ) are very important quantities expressing information measures in nonextensive systems .the tsallis entropy for -unit nonextensive system is defined by - with ^q \:\pi_i d x_i , \label{eq : a2}\end{aligned}\ ] ] where is the entropic index ( ) , and denotes the probability distribution of variables . in the limit of , the tsallis entropy reduces to the boltzman - gibbs - shannon entropy given by the boltzman - gibbs - shannon entropy is extensive in the sense that for a system consisting independent but equivalent subsystems , the total entropy is a sum of constituent subsystems : .in contrast , the tsallis entropy is nonextensive : for , and expresses the degree of the nonextensivity of a given system .the tsallis entropy is a basis of the nonextensive statistical mechanics , which has been successfully applied to a wide class of systems including physics , chemistry , mathematics , biology , and others .the fisher information matrix provides us with an important measure on information .its inverse expresses the lower bound of decoding errors for unbiased estimator in the cramr - rao inequality .it denotes also the distance between the neighboring points in the rieman space spanned by probability distributions in the information geometry .the fisher information matrix expresses a local measure of positive amount of information whereas the boltzman - gibbs - shannon - tsallis entropy represents a global measure of ignorance . in recent years , many authors have investigated the fisher information in nonextensive systems - . in a previous paper ,we have pointed out that two types of _ generalized _ and _ extended _ fisher information matrices are necessary for nonextensive systems .the generalized fisher information matrix obtained from the generalized kullback - leibler divergence in conformity with the tsallis entropy , is expressed by , \label{eq : a4}\end{aligned}\ ] ] where ] expresses the average over the escort probability given by ^q}{c_q^{(n)}},\end{aligned}\ ] ] being given by eq .( [ eq : a2 ] ) . in the limit of , both the generalized and extended fisher information matrices reduce to the conventional fisher information matrix .studies on the information entropies have been made mainly for independent ( uncorrelated ) systems .effects of correlated noise and inputs on the fisher information matrix and shannon s mutual information have been extensively studied in neuronal ensembles ( for a recent review , see ref . ; related references therein ) .it is a fundamental problem in neuroscience to determine whether correlations in neural activity are important for decoding , and what is the impact of correlations on information transmission .when neurons fire independently , the fisher information increases proportionally to the population size . in ensembles with the limited - range correlations ,however , the fisher information is shown to saturate as a function of population size - . 
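Since the q-entropy and the escort average are used repeatedly below, a small numerical sketch may help fix the definitions. The discrete version shown here (probabilities p_i rather than a density) is only illustrative; it checks the q -> 1 Shannon limit and the pseudo-additivity S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B) for independent subsystems, which is the nonextensivity property referred to above.

```python
import numpy as np

def tsallis_entropy(p, q):
    # S_q = (1 - sum_i p_i^q) / (q - 1); reduces to -sum_i p_i log p_i as q -> 1
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def escort(p, q):
    # escort distribution P_i = p_i^q / sum_j p_j^q, used in the q-averages E_q[...]
    p = np.asarray(p, dtype=float)
    w = p ** q
    return w / w.sum()

if __name__ == "__main__":
    q = 0.8
    pa, pb = np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.4])
    joint = np.outer(pa, pb).ravel()          # two independent subsystems
    sa, sb = tsallis_entropy(pa, q), tsallis_entropy(pb, q)
    print(tsallis_entropy(joint, q))                   # S_q of the joint system
    print(sa + sb + (1.0 - q) * sa * sb)               # pseudo-additive sum (equal)
    print(escort(pa, q))
```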
in recent yearsthe interplay between fluctuations and correlations in nonextensive systems has been investigated - .it has been demonstrated that in some globally correlated systems , the tsallis entropy becomes extensive while the boltzman - gibbs - shannon entropy is nonextensive .thus the correlation plays important roles in discussing the properties of information entropies in nonextensive systems .it is the purpose of the present paper to study effects of the spatially - correlated variability on the tsallis entropy and fisher information in nonextensive systems . in sec .2 , we will discuss information entropies of correlated nonextensive systems , by using the probability distributions derived by the maximum entropy method ( mem ) . in sec . 3, we discuss the marginal distribution to study the properties of probability distributions obtained by the mem .previous related studies are critically discussed also . the final sec .4 is devoted to our conclusion . in appendix a , results of the mem for uncorrelated , nonextensive systems are briefly summarized .we consider correlated -unit nonextensive systems , for which the probability distribution is derived with the use of the mem under the constraints given by , \label{eq : c22}\\ \sigma^2 & = & \frac{1}{n } \sum_i e_q\left[(x_i-\mu)^2 \right ] , \label{eq : c23 } \\s \:\sigma^2 & = & \frac{1}{n(n-1)}\sum_i \sum_{j ( \neq i ) } e_q\left[(x_i-\mu)(x_j-\mu ) \right ] , \label{eq : c24}\end{aligned}\ ] ] , and expressing the mean , variance , and degree of the correlated variability , respectively .cases with and arbitrary will be separately discussed in secs .2.1 and 2.2 , respectively . for a given correlated nonextensive system with , the mem with constraints given by eqs .( [ eq : c21])-([eq : c24 ] ) yields ( details being explained in appendix b ) , \label{eq : c5}\end{aligned}\ ] ] with where denotes the beta function and expresses the -exponential function defined by ^{1/(1-q)}. \label{eq : c13}\ ] ] the matrix with elements is expressed by the inverse of the covariant matrix given by with .\hspace{1cm}\mbox{for } \label{eq : d2 } \end{aligned}\ ] ] in the limit of , the distribution reduces to , \label{eq : d3 } \end{aligned}\ ] ] which is nothing but the gaussian distribution for .we have calculated information entropies , by using the distribution given by eq .( [ eq : c5 ] ) . * tsallis entropy * we obtain +\log(r_q^{(2 ) } ) , \hspace{1cm}\mbox{for } \\ & = & \frac{1-c_q^{(2)}}{q-1 } , \hspace{4cm}\mbox{for } \label{eq : d6}\end{aligned}\ ] ] with where is given by eq .( [ eq : c9])-([eq : c11 ] ) . from given by eq .( [ eq : c8 ] ) , we may obtain the dependence of as given by , \hspace{2cm}\mbox{for } \label{eq : d9}\end{aligned}\ ] ] which yields figure [ figh](a ) shows as a function of the correlation for ( values of and are hereafter adopted in model calculations shown in figs .[ figh]-[fige ] ) .we note that tsallis entropy is decreased with increasing absolute value of independently of its sign .* fisher information * by using eqs . 
( [ eq : a4 ] ) and ( [ eq : a5 ] ) for , we obtain the fisher information matrices given by which show that is independent of and that the inverses of both matrices are proportional to .figure [ figj ] shows the dependence of the extended fisher information for , whose inverse is increased ( decreased ) for a positive ( negative ) , depending on a sign of in contrast to .it is possible to extend our approach to the case of arbitrary , for which the mem with the constraints given by eqs .( [ eq : c21])-([eq : c24 ] ) lead to the distribution given by ( details being given in appendix b ) , \label{eq : f1}\end{aligned}\ ] ] with }{\nu_q^{(n)}\sigma^2(1-s)[1+(n-1)s ] } , \\ b & = & - \:\frac{s}{\nu_q^{(n)}\sigma^2(1-s)[1+(n-1)s ] } ,\\ z_q^{(n ) } & = & \frac{(2 \nu_q^{(n ) } \sigma^2)^{n/2 } \ : r_q^{(n ) } } { ( q-1)^{n/2}}\;\ ; \pi_{i=1}^n \:b\left(\frac{1}{2 } , \frac{1}{q-1}-\frac{i}{2 } \right ) , \hspace{1 cm } \mbox{for }\\ & = & ( 2 \pi \sigma^2)^{n/2 } \ : r_q^{(n ) } , \hspace{6 cm } \mbox{for } \\ & = & \frac{(2 \nu_q^{(n ) } \sigma^2)^{n/2 } \ : r_q^{(n ) } } { ( 1-q)^{n/2}}\;\ ; \pi_{i=1}^n \:b\left(\frac{1}{2 } , \frac{1}{1-q}+\frac{(i+1)}{2 } \right ) , \hspace{0.5 cm } \mbox{for } \\ r_q^{(n ) } & = & \ : \{(1-s)^{n-1}[1+(n-1)s ] \}^{1/2 } , \label{eq : f3 } \\\nu_q^{(n ) } & = & \frac{[(n+2)-nq]}{2}. \label{eq : f4}\end{aligned}\ ] ] the matrix is expressed by the inverse of the covariant matrix whose elements are given by .\end{aligned}\ ] ] in the limit of , the distribution given by eq .( [ eq : f1 ] ) becomes the multivariate gaussian distribution given by .\end{aligned}\ ] ] it is necessary to note that there is the condition for a physically conceivable value given by [ see eq .( [ eq : z6 ] ) , details being discussed in appendix c ] where the lower and upper critical values are given by and , respectively . in the case of and , for example , we obtain and , respectively . by using the probability distribution given by eq .( [ eq : f1 ] ) , we have calculated information entropies whose dependences are given as follows .* tsallis entropy * we obtain +\log(r_q^{(n ) } ) , \hspace{1cm}\mbox{for } \\ & = & \frac{1-c_q^{(n)}}{q-1 } , \hspace{4cm}\mbox{for } \label{eq : f5b}\end{aligned}\ ] ] with , \hspace{0.5cm}\mbox{for } \label{eq : f5c}\end{aligned}\ ] ] where the dependence of arises from a factor of in eq .( [ eq : f3 ] ) , and expresses the value of .equation ( [ eq : f5 ] ) yields the dependent given by s^2 , \hspace{0.5cm}\mbox{for } \label{eq : f6}\end{aligned}\ ] ] where stands for the tsallis entropy for .the region where eqs .( [ eq : f5c ] ) and ( [ eq : f6 ] ) hold becomes narrower for larger .the dependence of for is shown in fig . [figh](b ) , where has a peak at and it is decreased with increasing . comparing fig .[ figh](b ) with fig .[ figh](a ) , we notice that -dependence of for is more significant than that for [ eq .( [ eq : f6 ] ) ] .circles in figs . [ figf](a ) and [ figf](b ) show with for and , respectively , which are calculated with the use of the expressions given by eqs .( [ eq : f5b ] ) and ( [ eq : f5 ] ) .they are in good agreement with dashed curves showing exact results which are given by eq .( [ eq : b5 ] ) and shown in figs .[ fige](a ) and [ fige](b ) in appendix a. squares show with calculated by using eqs .( [ eq : f5b ] ) and ( [ eq : f5 ] ) .the tsallis entropy is decreased by an introduced correlation . 
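As a numerical companion to the distributions above, the following sketch implements the q-exponential defined in eq. ( [ eq : c13 ] ) and an unnormalized stand-in for the correlated two-variable density, built from the covariance structure sigma^2 [[1, s], [s, 1]]. The normalization constants, the nu_q factor and the precise q-moment constraints of eqs. ( [ eq : c5 ] ) and ( [ eq : f1 ] ) are not reproduced here; the snippet is only meant to show how the q -> 1 limit recovers the ordinary exponential, and hence the correlated Gaussian of eq. ( [ eq : d3 ] ).

```python
import math
import numpy as np

def exp_q(x, q):
    # q-exponential exp_q(x) = [1 + (1-q) x]^{1/(1-q)}, cut off at zero where the
    # bracket is non-positive; reduces to the ordinary exp(x) in the limit q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    bracket = 1.0 + (1.0 - q) * x
    if bracket <= 0.0:
        return 0.0
    return bracket ** (1.0 / (1.0 - q))

def correlated_q_density(x1, x2, q, sigma=1.0, s=0.0, mu=0.0):
    # unnormalized illustrative density: exp_q of the quadratic form built from
    # the covariance matrix sigma^2 [[1, s], [s, 1]] (prefactors omitted)
    y = np.array([x1 - mu, x2 - mu])
    cov = sigma ** 2 * np.array([[1.0, s], [s, 1.0]])
    return exp_q(-0.5 * y @ np.linalg.inv(cov) @ y, q)

if __name__ == "__main__":
    for q in (0.8, 0.99, 0.999, 1.0):
        print(q, exp_q(-1.3, q))             # approaches exp(-1.3) ~ 0.2725 as q -> 1
    print(correlated_q_density(0.5, -0.2, q=1.2, s=0.4))
```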
because of a computational difficulty , calculations using eqs .( [ eq : f5b ] ) and ( [ eq : f5 ] ) can not be performed for larger than those shown in figs . [ figf](a ) and [ figf](b ) . * fisher information* the generalized and extended fisher information matrices are given by } , \label{eq : f10 } \\\tilde{g}_q^{(n ) } & = & \frac{nq(q+1 ) } { \sigma^2(3-q)(2q-1 ) [ 1+(n-1 ) s]}. \label{eq : f11}\end{aligned}\ ] ] the results for given by eqs . ( [ eq : f10 ] ) and ( [ eq : f11 ] ) are consistent with those derived with the use of the multivariate gaussian distribution . by using the fisher information matrices for , and , given by eqs . ( [ eq : b9a ] ) and ( [ eq : b10a ] ) , we obtain which holds independently of .inverses of the fisher information matrices approach the value of for , and are proportional to for .in particular , they vanish at .these features are clearly seen in fig .[ figd ] where the inverses of fisher information matrices , and , are plotted as a function of for various values .in the present study , we have obtained the probability distributions , applying the mem to spatially - correlated nonextensive systems .we will examine our probability distributions in more detail .the dependences of for given by eq .( [ eq : c5 ] ) with and are plotted in figs . [ figc](a ) and [ figc](b ) , respectively , where is treated as a parameter . when , the distribution is symmetric with respect of for all values .when the correlated variability of is introduced , peak positions of the distribution appear at finite for and 1.0 . in the limit of ( _ i.e. _ no correlated variability ) , given by eq .( [ eq : c5 ] ) becomes ^{1/(1-q ) } , \label{eq : g8}\end{aligned}\ ] ] which does not agree with the exact result ( except for ) , as given by ^{1/(1-q ) } , \label{eq : g } \\ & \neq & p^{(2)}(x_1,x_2),\end{aligned}\ ] ] because of the properties of the -exponential function defined by eq .( [ eq : c13 ] ) : . by using the -product defined by ^{1/(1-q ) } , \end{aligned}\ ] ] we may obtain the expression given by ^{1/(1-q ) } , \label{eq : g16}\end{aligned}\ ] ] which coincides with given by eq .( [ eq : g8 ] ) besides a difference between and . in order to study the properties of the probability distribution of in more details ,we have calculated its marginal probability ( with ) given by ^{1/(1-q)+1/2}. \label{eq : g4}\end{aligned}\ ] ] dashed curves in figs . [ figb](a ) and [ figb](b ) show in linear and logarithmic scales , respectively .the marginal distributions are in good agreement with solid curves showing [ eq .( [ eq : x4 ] ) ] , ^{1/(1-q)}. \label{eq : g7}\end{aligned}\ ] ] in the case of , the distribution given by eq .( [ eq : f1 ] ) yields its marginal distribution ( with ) given by ^{1/(1-q)+1}. \label{eq : g6}\end{aligned}\ ] ] chain curves in figs . [figb](a ) and [ figb](b ) represent , which is again in good agreement with solid curves showing .these results justify , to some extent , the probability distribution adopted in our calculation . 
the marginal distribution for an arbitrary ( with ) is given by ^{1/(1-q)+(n-1)/2 } , \\& \propto & \left [ 1- \frac{(1-q_n ) x_1 ^ 2 } { 2 \nu_n \sigma^2 } \right]^{1/(1-q_n ) } , \label{eq : g11}\end{aligned}\ ] ] with equations ( [ eq : g11])-([eq : g13 ] ) show that in the limit of , we obtain , , and reduces to the gaussian distribution .one of typical microscopic nonextensive systems is the langevin model subjected to multiplicative noise , as given by - here expresses the relaxation rate , denotes a function of an external input , and and stand for magnitudes of multiplicative and additive noise , respectively , with zero - mean white noise given by and with the correlated variability , \delta(t - t'),\\ \langle \xi_i(t)\:\xi_j(t ' ) \rangle & = & \beta^2 [ \delta_{ij } + c_a(1-\delta_{ij})]\delta(t - t'),\\ \langle \eta_i(t)\:\xi_j(t ' ) \rangle & = & 0 , \label{eq : h1}\end{aligned}\ ] ] where and express the degrees of correlated variabilities of additive and multiplicative noise , respectively .the fokker - planck equation ( fpe ) for the probability distribution ( ) is given by \nonumber \\ & + & \frac{\beta^2}{2}\sum_{i}\sum_{j } [ \delta_{ij } + c_a ( 1-\delta_{ij } ) ] \frac{\partial^2}{\partial x_{i } \partial x_{j } } \:p \nonumber \\ & + & \frac{\alpha^2}{2}\sum_{i } \sum_j [ \delta_{ij}+ c_m ( 1-\delta_{ij } ) ] \frac{\partial}{\partial x_{i } } x_i \frac{\partial}{\partial x_j } ( x_j \:p ) , \label{eq : h0}\end{aligned}\ ] ] in the stratonovich representation .for additive noise only ( ) , the stationary distribution is given by ,\end{aligned}\ ] ] where denotes the average of , and expresses the covariance matrix given by .\end{aligned}\ ] ] when multiplicative noise exists , the calculation of even stationary distribution becomes difficult , and it is generally not given by the gaussian .indeed , the stationary distribution for non - correlated multiplicative noise with , and is given by - ^{1/(1-q ) } \:e^{y(x_i ) } , \label{eq : h3}\end{aligned}\ ] ] with the probability distribution given by eq .( [ eq : h3 ] ) for ( ) agrees with that derived by the mem for [ eq .( [ eq : x4 ] ) ] . for , and ( ) , eq .( [ eq : h3 ] ) becomes yielding the fisher information given by where and is the heaviside function .the probability distribution for correlated multiplicative noise ( , ) is also the non - gaussian , which is easily confirmed by direct simulations of the langevin model with . in some previous studies - , the stationary distribution of the langevin model subjected to correlated multiplicative noise with , and assumed to be expressed by the gaussian distribution with the covariance matrix given by .\label{eq : h2}\end{aligned}\ ] ] this is equivalent to assume that & \simeq & \frac{\partial}{\partial x_i } \left [ \langle x_i \rangle \frac{\partial } { \partial x_j } ( \langle x_j \rangle \:p ) \right ] , \nonumber \\& = & \mu_i \mu_j \frac{\partial^2 p } { \partial x_i \partial x_j } , \label{eq : h3b}\end{aligned}\ ] ] in the fpe given by eq .( [ eq : h0 ] ) with and . 
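The statement that the stationary state of the coupled Langevin model with correlated multiplicative noise "is easily confirmed by direct simulations" to be non-Gaussian can be illustrated with a short stochastic integration. The sketch below uses a Heun (predictor-corrector) step, which is consistent with the Stratonovich interpretation adopted above, and Cholesky factors to generate noises with pairwise correlations c_m and c_a; all parameter values, the relaxation rate lam and the constant drive h are arbitrary illustrative choices, and the excess kurtosis printed at the end is just a crude indicator of fat, non-Gaussian tails.

```python
import numpy as np

def corr_chol(n, c):
    # Cholesky factor of the correlation matrix delta_ij + c (1 - delta_ij);
    # requires -1/(n-1) < c < 1, the same range as the physical bound on s above
    return np.linalg.cholesky(np.full((n, n), c) + (1.0 - c) * np.eye(n))

def simulate(n=2, lam=1.0, h=0.0, alpha=0.5, beta=0.5, c_m=0.5, c_a=0.0,
             dt=1e-3, steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    lm, la = corr_chol(n, c_m), corr_chol(n, c_a)
    x = np.zeros(n)
    samples = []
    drift = lambda y: -lam * y + h
    for step in range(steps):
        dw_m = np.sqrt(dt) * (lm @ rng.standard_normal(n))   # multiplicative noise
        dw_a = np.sqrt(dt) * (la @ rng.standard_normal(n))   # additive noise
        # Heun step (Stratonovich-consistent): predictor, then corrector
        x_pred = x + drift(x) * dt + alpha * x * dw_m + beta * dw_a
        x = x + 0.5 * (drift(x) + drift(x_pred)) * dt \
              + 0.5 * alpha * (x + x_pred) * dw_m + beta * dw_a
        if step % 50 == 0:
            samples.append(x.copy())
    return np.asarray(samples)

if __name__ == "__main__":
    xs = simulate()[200:, 0]            # discard an initial transient
    excess_kurtosis = ((xs - xs.mean()) ** 4).mean() / xs.var() ** 2 - 3.0
    print("excess kurtosis of x_1:", excess_kurtosis)   # > 0 signals non-Gaussian tails
```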
by using such an approximation , abbott and dayan ( ad ) calculated the fisher information matrix of a neuronal ensemble with the correlated variability , which is given by } + 2 n k , \nonumber \\ & = & \frac{n}{\sigma_m^2 \mu^2 [ 1+(n-1)c_m]}+ \frac{2n}{\mu^2 } , \label{eq : h4 } \end{aligned}\ ] ] with a spurious second term ( ) , where ^ 2=1/\mu^2 1 < q < 3 q=1.0 0 < q < 1 q=1 q \neq 1 ] and i=1 1 < i \leq n$}\end{aligned}\ ] ] eqs .( [ eq : c23 ] ) and ( [ eq : c24 ] ) lead to , \nonumber \\ & = & \frac{1}{\nu_q^{(n ) } n } \sum_i \left ( \frac{1}{\lambda_i } \right ) , \nonumber \\ & = & \frac{[a+b(n-2)]}{\nu_q^{(n ) } ( a - b)[a+(n-1)b ] } , \label{eq : w3}\\ s\:\sigma^2 & = & \frac{1}{n(n-1 ) } \sum_{i < j}e_q[y_i^2-y_j^2 ] , \nonumber\\ & = & \left ( \frac{1}{\nu_q^{(n ) } n(n-1 ) } \right ) \sum_{i < j}\left ( \frac{1}{\lambda_i}-\frac{1}{\lambda_j } \right ) , \nonumber\\ & = & -\ : \frac{b}{\nu_q^{(n ) } ( a - b)[a+(n-1)b]}. \label{eq : w4 } \end{aligned}\ ] ] from eqs .( [ eq : w3 ] ) and ( [ eq : w4 ] ) , and are expressed in terms of and , as given by }{\nu_q^{(n)}\sigma^2(1-s)[1+(n-1)s ] } , \\ b & = & - \:\frac{s}{\nu_q^{(n)}\sigma^2(1-s)[1+(n-1)s]}.\end{aligned}\ ] ] in order to discuss the condition for a physically conceivable value , we consider the global variable defined by the first and second moments of are given by & = & \frac{1}{n } \sum_i e_q[x_i(t ) ] = \mu(t ) ,\\ \label{eq : z2 } e_q[\ { \delta x(t ) \}^2 ] & = & \frac{1}{n^2 } \sum_i \sum_j e_q[\delta x_i(t)\delta x_j(t)],\\ & = & \frac{1}{n^2 } \sum_i e_q[\ { \delta x_i(t ) \}^2 ] + \frac{1}{n^2 } \sum_i \sum_{j ( \neq i ) } e_q[\delta x_i(t)\delta x_j(t ) ] , \\ & = & \frac{\sigma(t)^2}{n}[1+(n-1)s(t ) ] , \label{eq : z3}\end{aligned}\ ] ] where and .since global fluctuation in is smaller than the average of local fluctuation in , we obtain \leq \frac{1}{n}\sum_i e[\ { \delta x_i(t)\}^2]=\sigma(t)^2 .\label{eq : z4}\end{aligned}\ ] ] equations ( [ eq : z3 ] ) and ( [ eq : z4 ] ) yield }{n } \leq 1.0 , \label{eq : z5}\ ] ] which leads to with and .a. plastino and a. r. plastino , physica a * 222 * , 347 ( 1995 ) . c. tsallis and d. j. bukman , phys .e * 54 * , r2197 ( 1996 ) .a. plastino , a. r. plastino , and h. g. miller , physica a * 235 * , 577 ( 1997 ) . f. pennini , a. r. plastino , and a. plastino , physica a * 258 * , 446 ( 1998 ) .l. borland , f. pennini , a. r. plastino , and a. plastino , eur .j. b. * 12 * , 285 ( 1999 ) .a. r. plastino , m. casas , and a. plastino , physica a * 280 * , 289 ( 2000 ). s. abe , phys .e * 68 * , 031101 ( 2003 ) .calculations of for large with the use of eqs .( [ eq : f5b ] ) and ( [ eq : f5 ] ) need a computer program of the gamma function with a negative ( real ) argument , which is not available in our facility . | we have calculated the tsallis entropy and fisher information matrix ( entropy ) of spatially - correlated nonextensive systems , by using an analytic non - gaussian distribution obtained by the maximum entropy method . effects of the correlated variability on the fisher information matrix are shown to be different from those on the tsallis entropy . the fisher information is increased ( decreased ) by a positive ( negative ) correlation , whereas the tsallis entropy is decreased with increasing an absolute magnitude of the correlation independently of its sign . this fact arises from the difference in their characteristics . 
it implies from the cramr - rao inequality that the accuracy of unbiased estimate of fluctuation is improved by the negative correlation . a critical comparison is made between the present study and previous ones employing the gaussian approximation for the correlated variability due to multiplicative noise . = 1.333 * effects of correlated variability on information entropies + in nonextensive systems * hideo hasegawa _ department of physics , tokyo gakugei university + koganei , tokyo 184 - 8501 , japan _ ( ) pacs number(s ) : 05.70.-a,05.10.gg,05.45.-a _ key words _ : tsallis entropy , fisher information , correlated variability , nonextensive systems |
it is widely believed that the space - time , where physical process and measurements take place , might have a structure different from a continuous and differentiable manifold , when it is probed at the planck length .for example , the space - time could have a foamy structure , or it could be non - commutative in a sense inspired by string theory results or in the sense of - minkowski approach .if this happens in the space - time , in the momentum space there must also be a scale , let say , that signs this change of structure of the space - time , even if the interplay between length and momentum ( ) will presumably change when we approach such high energy scales .one could argue that , if the planck length gives a limit at which one expects that quantum gravity effects become relevant , then it would be independent from observers , and one should look for symmetries that reflect this property .such argument gave rise to the so called dsr proposals , that is , a deformation of the lorentz symmetry ( in the momentum space ) with two invariant scales : the speed of light and ( or ) .in this note , we will discuss this class of deformations of the lorentz symmetry and its realization in the space - time .approaches to the problem inspired by the momentum space formulation have been presented , but our approach is quite different from these because we demand the existence of an invariant measurable physical scale compatible with the deformation of the composition law of space - time coordinates induced by the non linear transformation .it has also been claimed that -minkowski gives a possible realization of the dsr principles in space - time , however the construction is still not satisfactory since the former is only compatible with momentum composition law non symmetric under the exchange of particles labels ( see discussions in ) . in this work we are dealing with non linear realizations of the lorentz algebra which induce symmetric composition law andtherefore it is not compatible with the -minkowski approach .the main results of our studies are : _ i _ ) the strategy of defining a non linear realization of the lorentz symmetry with a consistent vector composition law can not be reconciled with the extra request of an invariant length ( time ) scale ; _ ii _ ) the request of an invariant length forces to abandon the group structure of the translations and leaves a space - time structure where points with relative distances smaller or equal to the invariant scale can not be unambiguously defined . in the next sectionwe will explore the approach to dsr in the momentum space and will implement these ideas in the space - time sector . in the final section conclusion and discussions are presented .in this section we will first review the approach to dsr as a non linear realization of the lorentz transformations in the energy - momentum space , and then try to apply these ideas to the space - time . fora more general review of dsr see for example and references therein .dsr principles are realized in the energy - momentum space by means of a non - linear action of the lorentz group .more precisely , if the coordinates of the physical space are , we can define a non- linear function , where is the space with coordinates , on which the lorentz group acts linearly .we will refer to as a _ classical _ momentum space . 
in terms of the previous variables ,a boost of a single particle with momentum ] to another reference frame , where the momentum of the particle is , is given by \equiv { \cal b } [ p].\ ] ] finally , an addition law ( ) for momenta , which is covariant under the action of , is +f[p_b]\right],\ ] ] and satisfies = { \cal b } [ p_a ] \hat{+ } { \cal b } [ p_b] ] ( or ] such that :x \rightarrow { \cal x} ] its image corresponds to the origin of the _ classical _ space . with the previous definitions we want to add the condition that , under the deformed boosts , an invariant measurable physical scale ( both a time or length scale ) has to exist . in doing that we have in mind that , eventually , this scale is related with the planck length ( or time ) .let us call the vector which defines the invariant scale . by this we mean that is any vector of the form where is any spatial vector with modulus equal to the invariant scale and is the invariant time scale .the invariance condition we impose is that a time ( length ) equal to the invariant scale is not affected by boosts .this corresponds to the physical intuition that any planck - length segment ( whatever his time position ) or planck - time interval ( whatever his space position ) should remain unaffected by a change in reference frame : in the _ classical _ space we have the invariant whose image in the physical space is obviously invariant under the action of deformed boost .this , together with the requirement of invariance under rotations , allows to demonstrate that the above relations have to satisfy and .the above result is valid if we assume the existence of ) only a temporal invariant scale , ) only a spatial invariant scale , ) both a temporal and a spatial invariant scale .for example , if we assume an invariant time scale , the vector ( with arbitrary ) under the action of a boost transformation has to be modified in such a way to keep fixed as well as to leave the casimir invariant .since , in physically interesting cases , depends on the spatial coordinates only via the modulus , we get .we get the interesting result that , assuming only a temporal invariant , all the vectors with time component equal to the invariant quantity ( and any ) keep the modulus of the spatial coordinate unchanged when boosted .clearly this does not imply that any length scale is an invariant one since , in general , with if . the same analysis shows that , if we assume the existence of a length invariant , the casimir invariant implies for any value .the final result is that our invariant vector(s ) has to satisfy the following relation or , equivalently , =g[\hat{\delta}_{p}].\ ] ] the invariant points of standard lorentz transformations are and ( vectors where all the components are zero or infinity ) and we can only use one of this two points to ensure the condition ( [ invarianza ] ) . in dsr in the momentum spacewe have a similar situation : in that case the fixed point at infinity is used to guarantee invariance of ( high ) energy or momentum scales . since we demand our model to be equivalent to usual space - time when the distances ( times ) are much bigger than the invariant scale we expect to approach the identity in this conditions and this implies =\infty ] in contrast with our assumption of invertible . we conclude that , to satisfy the invariance condition , we loose the uniqueness of the neutral element of translations and therefore its group structure . 
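It may help to see the momentum-space construction recalled above in a small numerical form. The sketch below picks one concrete invertible map F, a Magueijo-Smolin-type map p -> p/(1 - l p_0) chosen purely for illustration (the note itself keeps F general), and checks numerically (i) that the deformed composition law is covariant under the deformed boosts and (ii) that the scale p_0 = 1/l, which F sends to infinity, is left invariant. This is exactly the fixed-point-at-infinity mechanism that, as argued in the text, is not available for a minimal length in coordinate space.

```python
import numpy as np

ELL = 1.0   # deformation scale (an arbitrary illustrative value)

def F(p):
    # one possible choice of the nonlinear map to the "classical" space on which
    # the Lorentz group acts linearly (Magueijo-Smolin-type); p = (p0, p1)
    return p / (1.0 - ELL * p[0])

def F_inv(P):
    return P / (1.0 + ELL * P[0])

def lorentz_boost(xi):
    # ordinary linear boost in 1+1 dimensions, rapidity xi
    return np.array([[np.cosh(xi), np.sinh(xi)],
                     [np.sinh(xi), np.cosh(xi)]])

def deformed_boost(p, xi):
    return F_inv(lorentz_boost(xi) @ F(p))

def deformed_sum(pa, pb):
    # composition law chosen to be covariant: F^{-1}[ F(pa) + F(pb) ]
    return F_inv(F(pa) + F(pb))

if __name__ == "__main__":
    pa, pb, xi = np.array([0.3, 0.1]), np.array([0.2, -0.05]), 0.7
    lhs = deformed_boost(deformed_sum(pa, pb), xi)
    rhs = deformed_sum(deformed_boost(pa, xi), deformed_boost(pb, xi))
    print("covariant composition:", np.allclose(lhs, rhs))            # True
    p_near_scale = np.array([1.0 / ELL - 1e-9, 0.4])
    print("boosted p0 near 1/l:", deformed_boost(p_near_scale, xi)[0])  # stays ~ 1/l
```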
, scaledwidth=32.0% ] in figure 1 we give an explicit realization of the temporal part of the function for a vector with a fixed spatial part and a varying temporal component in order to represent it as an ordinary one - dimensional function . as required , we recover the standard lorentz transformation for large times the function becomes the identity for while goes to once the invariant scale is reached .it is clear that both the images of and are zero : we loose the uniqueness ( is not invertible at this points ) and an invertible extension of can not be defined on points in the interval ] or , equivalently , =0 $ ] , . with a speculative attitudewe can imagine that the model suggests that a process of space ( or time ) measurement can never give a result with a precision better than the invariant length since all points that differ by an invariant length are practically indistinguishable . in the particular example of fig .1 , if is a time interval measured in a given space point , we have to conclude that we can not speak of or , equivalently , this can be understood as an indetermination in the measurement of time when the planck scale is reached .therefore we can imagine this signals an intrinsic obstruction to measure a time ( distance ) when it becomes of the order of the invariant scale .two comments are in order here .we have demonstrated the impossibility of implementing a minimal scale ( in space , time or both ) in the space - time by means of a dsr - like approach .however , it is clear that as occurs in dsr in the momentum space it is always possible to map the invariant scale ( space or time ) to infinity in the classical space time and to map the zero of the physical space time to the zero of the classical space time .the resulting space - time will differ from the lorentz invariant one at large scales but it will not suffer the problems we discussed above : it will have a _ maximal scale _ ( and possibly a minimum momentum ) and will mirror the usual dsr in the momentum space . finally , let us say that the statement that the approach with a minimal scale is not possible , but the one with a maximal scale is allowed , can be understood by a dimensional argument . if we assume : _ i _ ) a continuous differentiable manifold structure for the space time , _ ii _ ) the the existence of a length scale , it is always possible to express any quantity depending on the coordinates as a series containing only negative powers of .if we put the extra condition that _ iii _ ) it should exist a smooth limit toward the undeformed space time , it is clear the small limit can not be accepted .the limit of large , instead , is well defined and the interpretation of the scale as a maximal scale becomes clear .these arguments were already discussed in .in this note we have explored a possible scenario for the a space - time with an invariant scale in a dsr based approach .we started constructing a non linear realization of lorentz transformations defining a non linear , invertible map between the physical space and an auxiliary space where the lorentz group acts linearly . 
in doing that we introduce a deformed composition law for vectors in the physical space to guarantee its invariance under boosts .up to this point we can still define a translation operator compatible with the deformed action of boosts , and this translations define a group .then we try to impose the physical condition that some ( small ) measurable physical length ( in space or time ) should remain invariant under boosts . to define the space length we use the standard expression for the modulus of a vector ( again in full analogy with the dsr approach in momentum space ) .we showed that the invariance requirement is incompatible with ) a well defined ( and invertible ) map for all the physical space vectors and ) with the group structure for the translations since the neutral element ( and consequently the inverse of any given translation ) can not be unique .we understand why we encounter differences respect to the dsr approach in momentum space . for the latterthe invariant momentum is realized mapping the physical momentum space up to the maximum momentum ( energy ) to the entire _ classical _ space ( we obtain the invariant scale using the standard lorentz fixed point at infinity ) . in present case instead , we are forced to map the invariant scale to the first lorentz fixed point : the origin of vector space ( recall that both in coordinate and momentum space the lorentz transformations are linear and the only two fixed point are zero and infinity ) .this procedure unavoidably leaves all vectors with spatial or temporal length smaller or equal to the invariant length without counterpart in the _ classical _ space .the main difference of our result with other approaches in the literature , resides in the definition of the composition law ( [ suma ] ) , which has been introduced in order to extend the notion of covariance .we think that the assumption of the standard composition law for vectors in the physical space is not correct .to assume ( [ suma ] ) is an unavoidable step if we want to consistently construct a non linear realization of lorentz invariance . at the end of our constructionwe arrive to inconsistencies .a first possibility is , of course , to reject the idea that a dsr - like transformation can be defined in a coordinate space - time with an invariant scale . indeedthe connection of dsr models ( defined in momentum space ) with coordinate space is unclear and may be this connection will be realized only via a completely different approach . in any case , since the non - linear realizations of lorentz symmetry have been very successful in constructing new versions of particle s momentum space , we think it is worthwhile to explore the same technique further in the attempt to understand the possible structure of space - time . a possible way out is to accept that translations do not form any more a group and interpret the peculiar behavior of the non linear mapping as a clue for a fundamental indetermination at the scale of the invariant length .this interpretation suggests a sort of ( space - time ) uncertainty , something that resembles what is expected to happen in a non commutative space - time .alternatively we can speculate that defects ( as in a worm - hole qg vacuum ) might be present in space - time .y.jack ng , lectures given at 40th winter school of theoretical physics : quantum gravity phenomenology , ladek zdroj , poland , 4 - 14 feb 2004 , _ gr - qc/0405078 _ ; a. perez , class .20 * , 43 ( 2003 ) ; k. noui and p. 
roche , class. quantum grav. *20* , 3175 ( 2003 ) ; y. jack ng , int. j. mod. phys. *d11* , 1585 ( 2002 ) ; g. amelino - camelia , _ gr - qc/0104005 _ ; j. r. ellis , n. e. mavromatos and d. v. nanopoulos , phys. rev. *d63* , 124025 ( 2001 ) ; a. perez and c. rovelli , phys. rev. *d63* , 041501 ( 2001 ) ; l. j. garay , int. j. mod. phys. *a14* , 4079 ( 1999 ) ; j. c. baez , class. quantum grav. *15* , 1827 ( 1998 ) . | in this note we discuss the possibility of defining a space - time with a dsr - based approach . we show that the strategy of defining a non linear realization of the lorentz symmetry with a consistent vector composition law can not be reconciled with the extra requirement of an invariant length ( time ) scale . the latter requirement forces us to abandon the group structure of the translations and leaves a space - time structure where points with relative distances smaller than or equal to the invariant scale can not be unambiguously defined . |
the monte carlo method based on the metropolis algorithm is the most successful and influential stochastic algorithm of the 20th century and has been used in variety of applications not limited to physics .the method is powerful in an exhaustive search of highly multi - dimensional phase space , and , hence , has been routinely used to calculate the thermal averaging of statistical physics . aside from the applications to statistical physics , the metropolis algorithm has been used as a vehicle for global optimization , that is , a task to search for the lowest minimum point in a rugged landscapes in a high dimension .in fact , the simulated annealing ( sa ) based on the metropolis algorithm is the oldest metaheuristics in global optimization . in global optimization, a good balance between a global search ( exploration ) and a local search ( exploitation ) is necessary .since the metropolis algorithm has only the ability to perform a global search , it is usually necessary to augment sa using a more traditional local optimization method to handle realistic problems .later , a method which combines the metropolis algorithm and the gradient - based local search algorithm was proposed .that is the monte carlo minimization ( mcm ) of li and sherga , and the basin - hopping ( bh ) of wales and doye .wales and doye , for example , used their bh algorithm to study the lowest - energy structures of lennard - jones clusters that consist of up to 110 atoms successfully . in this paper , we propose a new variant of the basin - hopping(bh ) algorithm .in contrast to another variant of the bh by leary and doye where only the down - hill search is allowed , we borrow the concept of extremal optimization or thermal cycling , and introduce the process of _ jumping _ to enhance the search of a rugged energy landscape .the basin hopping ( bh ) algorithm ( which is also called the monte carlo plus minimization , mcm ) uses hopping due to monte carlo random walks to explore the phase space , and gradient - based local optimization to locate the local minimum or _ basin _ plus metropolis criterion to accept or reject the move .since the energy landscape explored by the usual monte carlo move and immediate relaxation to a nearby basin of attractor using the gradient method in bh looks like hopping among basin of attractors , the algorithm is termed `` basin hopping '' . in order to enhance the exploration , the monte carlo move in bhconsists of a simultaneous displacement of all particles in the cluster in contrast to the usual monte carlo method in classical statistical mechanics where usually a randomly chosen single particle is displaced .[ fig : energy1 ] the bh has been extensively tested for the simplest benchmark problem for the lennard jones clusters lj where the total energy of -atom cluster is given by where is the distance between two particles and within the cluster . 
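a minimal sketch of this energy function and of the basic bh loop just described (simultaneous random displacement of all particles, gradient-based relaxation to the nearby basin, metropolis acceptance applied to the minimized energies) is given below. it is written in reduced lennard-jones units and is only an illustration, not the implementation of the papers cited here; the step size, temperature, number of steps and the use of scipy's l-bfgs-b minimizer with numerical gradients are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize


def lj_energy(flat_xyz):
    """Total Lennard-Jones energy in reduced units (epsilon = sigma = 1)."""
    pos = flat_xyz.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        energy += np.sum(4.0 * (d**-12 - d**-6))
    return energy


def relax(flat_xyz):
    """Gradient-based relaxation into the basin of attraction."""
    res = minimize(lj_energy, flat_xyz, method="L-BFGS-B")
    return res.x, res.fun


def basin_hopping(n_atoms, n_steps=200, step=0.3, temperature=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x, e = relax(rng.uniform(-1.0, 1.0, 3 * n_atoms))
    best_x, best_e = x.copy(), e
    for _ in range(n_steps):
        # displace *all* particles simultaneously, then relax to the nearby basin
        trial, e_trial = relax(x + rng.uniform(-step, step, x.shape))
        # Metropolis acceptance applied to the *minimized* energies
        if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temperature):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e


if __name__ == "__main__":
    _, energy = basin_hopping(n_atoms=13)
    # the commonly tabulated global minimum for the 13-atom cluster is about -44.33
    print("lowest energy found:", energy)
```

a production study would use analytic gradients and better initial configurations; the sketch only illustrates how hopping, relaxation and the metropolis criterion fit together.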
even for this simple lennard - jones problem, the bh clarified the extremely complex energy landscape for several clusters , and it even failed to locate the lowest - energy structures for several special sizes .for example , the bh could not locate the lowest minimum of lj , lj , and lj easily when an unbiased search that starts from a completely random initial configuration of clusters is used .in fact , in order to find out the lowest - energy structure of lj , wales and doye had to use _ seeding _ to feed the bh the lowest - energy structure of smaller lj or larger lj clusters .the unbiased search has an extremely low probability ( 4% ) of hitting the lowest - energy structure even when large numbers ( 100 times ) of a fairly long run ( 5000 step ) were executed .the same problem occurred for lj , and less severely in lj , lj and lj .difficulty occurs when the energy landscape consists of several large valleys ( funnels ) instead of one , and the funnel corresponding to the lowest - energy minimum is narrow and separated from other funnels by a high barrier .the monte carlo trial move plus local minimization and acceptance / rejection using the metropolis criterion is less powerful and time - consuming to overcome such a large barrier ( fig .recently , a new variant of the basin - hopping algorithm called the `` monotonic sequence basin - hopping algorithm '' ( msbh ) was proposed by leary and doye .their algorithm is essentially the bh at temperature , which allows only downhill moves .when the search is stacked at a local minimum , the program restarts from a new random configuration .therefore , msbh is essentially the multi - start strategy of a greedy search .naturally , the msbh algorithm seems less powerful than the original bh algorithm because it is not equipped with any mechanism to cross the barrier .one of the present authors suggested another extension of the bh by introducing the concept of `` extremal optimization '' ( eo ) proposed by boettcher and percus . in thiseo - based basin - hopping ( eobh ) algorithm , the monte carlo move of only one less - bounded particle within the cluster is attempted , and every move is accepted without using the metropolis criterion .therefore this eobh is essentially the bh at .although the eobh can achieve the crossing of high barriers in contrast to the msbh , it is again less powerful than the original bh because its ability in terms of a local search is less effective though the method is proven to be useful to enumerate all the low - energy structures . the lesson learned from these two previous exercises is that the inclusion of the high - temperature monte carlo move at will enhance the ability of a global search ( exploration ) , but the original metropolis criterion of the monte carlo move should be retained to maintain the efficiency of a local search ( exploitation ) .actually , such a prescription of re - heating or thermal treatment at high temperatures has been repeatedly proposed in the application of the simulated annealing ( sa ) .for example , mbius _ et al _ , introduced `` thermal cycling '' and ingber introduced `` reannealing '' in sa . 
in order to enhance the ability of crossing the high barrier in the bh , we have introduced a re - heating process called `` jumping '' as shown in fig .fig : flow2 ] jumping is the monte carlo move without relaxation ( local minimization ) at , and , hence , it is always accepted .when the usual monte carlo moves are rejected max times , the system is judged to be trapped at the local minimum .then the temperature is raised to , and the monte carlo moves ( jumpings ) are executed jmp times in the search space to allow for the system to escape from the local minimum ( fig .this move is always accepted irrespective of the increase in the energy because of and is called jumping here .the parameter max is used to detect the entrapment .the parameter jmp is used to try to climb up the barrier several times . if the jmp is too large , the algorithm is nothing more than simple random multi - start strategy .it is essential to keep the partial memory of the previous state .so , the jmp should not be too large .we will call this version of bh the `` basin - hopping with occasional jumping '' ( bhoj ) . [fig : energy2 ] now , the exploration of phase space in our bhoj proceeds as follows ( fig .3 ) : the rugged energy landscape is transformed into successive steps of the basin by local minimization .the downhill move is simply the descending of the stairs by hopping .there are , however , two kinds of uphill moves .one is by climbing the stairs by hopping using the metropolis criterion , which is costly and may not be effective to climb up the high barrier because the moves with a large energy difference are rejected by the metropolis criterion .another move is jumping , which does not use local minimization and is always accepted , so the uphill moves by jumping do not use stairs and are simply along the surface of the hill ( fig .3 ) . this jumping must be an efficient way to escape from a local minimum ( valley ) and to explore the next basin of the valley when it is separated by high barriers .in order to test the performance of our modification of the bh with occasional jumping ( bhoj ) , we have calculated the lowest - energy structure of the lennard - jones clusters of a particular size , lj , lj , lj , and lj for which the original bh is not effective .we conducted a 100 independent unbiased search starting from 100 random initial structures with the maximum of 5000 iterations which do not include the jumping process .we also performed the same experiment using the original bh with the same random initial structures .the temperature is fixed at .two additional parameters max and jmp for the bhoj are arbitrarily fixed to max=10 and jmp=3 or 5 or 7. table 1 gives the success rates of 100 unbiased searches .our bhoj could successfully reproduce the lowest - energy structures in the literature .it is apparent that our bhoj performs in general better than the original bh .the success rate increased twice to five times from the original bh to the bhoj .thus , the ability of exploration is in fact enhanced by the introduction of jumping processes . for the sake of comparison, we also showed the results of msbh of leary and the parallel fast annealing ( pfa ) of cai _ et al . _ . .success rates of original bh , msbh , pfa and our bhoj for selected lennard - jones clusters which are notoriously difficult cases to find out the lowest - energy structure . in msbhleary conducted experiments 1000 times while we conducted experiments 100 times for original bh and bhoj . 
[ cols="^,>,>,>,>,>",options="header " , ] table 2 shows the results for lj and lj . for lj we could confirm the new lowest - energy structure found by wales and doye using our bhoj .we could also successfully confirm the lowest energy -1125.493794 cited in found by leary for lj which is lower than the previous record -1125.304876 found by hartke .the new lowest - energies -1132.669966 for lj and -1139.455696 for lj found by hartke were also successfully located by our bhoj though the success rates of these three cases were very low .in this paper we proposed a way to improve the performance of the basin hopping ( bh ) algorithm by introducing the jumping in addition to the hopping .we call this new algorithm as the basin - hopping with occasional jumping ( bhoj ) .the jumping is a process of heating the system and raising the temperature to infinitely high which is attempted when the trajectory in the phase space is trapped at a local minimum . by jumping, the trajectory can climb up high barriers and can explore the next valley .thus the exploratory ability of the algorithm is enhanced .experiments on benchmark problem of the lennard - jones clusters , in particular , for notorious difficult sizes of 75 to 77 particles lj , of 98 particles lj , and of 102 to 104 particles lj reveal that the proposed bhoj is really superior to the original bh .this jumping is easy to implement , and consumes very little cpu resources .any adaptive or scheduled jumping could be easily incorporated .the bhoj with jumping will be helpful to search for the lowest - energy structures of larger clusters and more complex clusters with many body forces .one of the author ( m.i . ) would like to thank department of physics , tokyo metropolitan university for providing him a visiting fellowship .22 i. beichl and f. sullivan , computing in science and engineering * 2 * ( 2000 ) 65 .s. kirkpatrick , c.d .gelatt , m.p .vecchi , science * 220 * ( 1983 ) 671 .wille , chem .* 133 * ( 1987 ) 405 .z. li , h.a .scheraga , proc .* 84 * ( 1987 ) 6611 .wales , j.p.k .doye , j. phys .chem . a * 101 * ( 1997 ) 5111 .leary and j.p.k .doye , phys .e * 60 * ( 1999 ) r6320 .leary , j. global .* 18 * ( 2000 ) 367 .p. boettcher and a. percus , artificial intelligence * 119 * ( 2000 ) 275 .a.mbius , a. neklioudov , a. daz - snchez , k.h .hoffmann , a. fachat , m. schreiber , phys .* 79 * ( 1997 ) 4297 .d. frenkel , b. smit , _ understanding molecular simulation from algorithm to application _ , academic press , 1986 .hoare , p. pal , adv .* 20 * ( 1971 ) 161 .northby , j. chem . phys .* 87 * ( 1987 ) 6166 .deaven , n. tit , j.r .morris , k.m . ho , chem .* 256 * ( 1996 ) 195 .niesse , h.r .mayne , j. chem .* 105 * ( 1996 ) 8428 .doye , m.a .miller , d.j .wales , j. chem . phys .* 111 * ( 1999 ) 8417 .m. iwamatsu , proc .2002 ieee world congr .pp.1180 , ieee press , 2002 .l. ingber , mathl .modelling * 12 * ( 1989 ) 967 .wales , program `` gmin '' , http://www-wales.ch.cam.ac.uk /software.html .wales , `` cambridge cluster database '' , + http://www-doye.ch.cam.ac.uk/jon/structures/lj /tables.150.html ) .w. cai , h. jiang , x. shao , j. chem .* 42 * ( 2002 ) 1099 .m.d . wolf and u. landmann ,a * 102 * ( 1998 ) 6129 .j. pillardy , a. liwo , h.a .scheraga , j. phys .a * 103 * ( 1999 ) 9370 .g. hartke , j. comput . chem . * 20 * ( 1999 ) 1752 .i. rata , a.a .shvartsburg , m. horoi , t. frauenheim , k.w.m .siu , k.a .jackson , phys .* 85 * ( 2000 ) 546 .j. lee , i - h .lee , j. lee , phys .* 91 * ( 2003 ) 080201 .d. romero , c. barrn , s. 
gómez , comput. phys. commun. * 123 * ( 1999 ) 87 . c. barrón , http://www.vcl.uh.edu//lj_cluster/ljpottable.html ( currently down ) . | basin - hopping ( bh ) , or monte - carlo minimization ( mcm ) , is so far the most reliable algorithm in chemical physics for searching for the lowest - energy structure of atomic clusters and macromolecular systems . bh transforms the complex energy landscape into a collection of basins , and explores them by hopping , which is achieved by random monte carlo moves and acceptance / rejection using the metropolis criterion . in this report , we introduce a jumping process in addition to the hopping process in bh . jumping is invoked when the hopping stagnates at a local optimum , and is achieved using monte carlo moves at infinite temperature , i.e. without rejection . our basin - hopping with occasional jumping ( bhoj ) algorithm is applied to lennard - jones clusters of several notoriously difficult sizes . it was found that the probability of locating the true global optimum using bhoj is significantly higher than with the original bh . basin - hopping , lowest - energy structure , lennard - jones cluster 02.60.pn , 02.70.tt , 36.40.mr |
in many applications we know that the data or measurement vectors do not fill the whole vector space but that they are confined to some special regions .often the form of these regions is defined by problem specific constraints .typical examples are color vectors .the values in these vectors represent photon counts and therefore they must be non - negative and color vectors are all constrained to lie in the positive part of the vector space . apart from the measurement vectors we are also interested in transformations acting on these measurements .a typical example in the case of color is the multiplication of the vectors with a global constant .this can describe a global change of the intensity of the illumination or a simple change in the measurement units .it can be shown that many non - negative measurement spaces can be analyzed with the help of lorentz groups acting on conical spaces .details of the mathematical background and investigations of spectral color databases can be found in .similar geometric structures are used in perception - based color spaces such as the standard cielab color system . here the l - part ( related to intensity ) is given by the positive half - axis .the ( a , b)-part describing the chromatic properties of color are often parameterized by polar coordinates where the radial part corresponds to saturation ( given by a finite interval ) and the angular part represents hue .a second example that will be considered comes from low - level signal processing .basic edge - detectors for two - dimensional images come in pairs and the resulting two - dimensional result vectors are often transformed into polar coordinates where the radial part describes the edge - magnitude and the angular part the edge orientation . in many machinelearning approaches the raw magnitude value is further processed by an activation function which maps the non - negative raw output to a finite value .a typical activation function is the hyperbolic tangent function and we see that output of these edge detector systems are located on the unit disk . in the followingwe describe how harmonic analysis ( the generalization of fourier analysis to groups ) can be used to investigate functions defined on special domains with a group theoretical structure .we will present the following results : the investigated functions are defined on a domain with a group of transformations acting on this domain and in this case they can be interpreted as functions on the transformation group .the transformation group defines an ( essentially unique ) transform that shares many essential properties of the ordinary fourier transform .examples of such properties are the simple behaviour under convolutions , preservation of scalar products in the orignal and the transform domain and the close relation to the laplace operator .this will be illustrated for the unit disk and its transformation group .the functions corresponding to the complex exponentials are in this case the associated legendre functions and the ( radial ) fourier transform on the unit disk is the mehler - fock transform ( mft ) .the associated legendre functions are also eigenfunctions of the hyperbolic laplace operator and the mft can therefore be used to study properties related to the heat equation on the unit disk . 
as mentioned before we will illustrate the theory with some examples from color processing and low - level signal processing .we will mainly study probability distributions on the unit disk and show how the mft can be used to investigate textures and to manipulate color properties of images .probability distributions on manifolds and their kernel density estimators have been studied before ( see , for example , , , , and ) .often they are assuming that the manifold is compact . herewe study functions on the unit disk as the simplest example of a non - compact manifold but apart from the riemannian geometry it also has a related group structure which is crucial in the derivation of the method .the mft is a well - known tool in mathematics and theoretical physics but there are very few studies relevant for signal processing .some examples are described in and describes an application of hyperbolic geometry in radar processing .an application to inverse problems is described in . to our knowledgethis is the first application to color signal processing .the structure of the paper is as follows : first we introduce the relevant group and describe how it operates on the unit disk .we also collect some basic facts of the disk - version of hyperbolic geometry and introduce the parametrization of the unit disk by elements of the factor group .next we introduce some facts from the representation theory of the group and its relation to the associated legendre functions .restriction to zonal functions ( that are functions of the radius only ) leads to the mft which is a central result in the harmonic analysis of this group .the theoretical concepts will be illustrated with two examples , one from color science and the other from texture analysis .for the color image we show how the mft can be used to desaturate local color distributions and for the textures images we show how it can be used to characterize the textures in the normalized brodatz texture database described in .no claim is made regarding the performance of the method compared with other methods in color or texture processing .in the following we consider points on the unit disk as complex variables and we introduce the group consisting of the matrices with complex elements : the group operation is the usual matrix multiplication and the group acts as a transformation group on the open unit disk ( consisting of all points with ) as the mbius transforms : the concatenation of two transforms correspond to the multiplication of the two corresponding matrices which gives : for all matrices and all points . 
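for concreteness, the group action and the invariant distance used later in this section can be written out explicitly. the sketch below assumes the standard conventions, namely that an su(1,1) element with entries a, b acts on the disk as z -> (a z + b)/(conj(b) z + conj(a)), and that the hyperbolic distance is d(z, w) = 2 artanh |(z - w)/(1 - z conj(w))|; the factor of two is one common normalization and not necessarily the one adopted in this paper.

```python
import numpy as np


def su11(a, b):
    """Element [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1."""
    assert abs(abs(a) ** 2 - abs(b) ** 2 - 1.0) < 1e-10
    return np.array([[a, b], [np.conj(b), np.conj(a)]])


def moebius(g, z):
    """Action of g on a point z of the open unit disk: z -> (a z + b)/(conj(b) z + conj(a))."""
    a, b = g[0, 0], g[0, 1]
    return (a * z + b) / (np.conj(b) * z + np.conj(a))


def hyperbolic_distance(z, w):
    """Invariant distance on the disk, d = 2 * artanh |(z - w)/(1 - z conj(w))|."""
    return 2.0 * np.arctanh(abs((z - w) / (1.0 - z * np.conj(w))))


if __name__ == "__main__":
    t = 0.7
    g = su11(np.cosh(t / 2), np.sinh(t / 2))            # a boost-like element
    h = su11(np.cosh(0.2), np.sinh(0.2) * np.exp(0.5j))
    z, w = 0.2 + 0.1j, -0.3 + 0.4j
    # the distance is unchanged by the group action
    print(hyperbolic_distance(z, w), hyperbolic_distance(moebius(g, z), moebius(g, w)))
    # concatenation of transforms corresponds to matrix multiplication
    print(moebius(g @ h, z), moebius(g, moebius(h, z)))
```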
in the followingwe will use the notation if we think of the group elements as matrices .if we want to stress their role as elements of a group or if we want to parametrize the mbius transforms then we will often use the symbol .every three - dimensional rotation can be written as a product of three rotations around the coordinate axes .a similar decomposition holds for whose elements are : with three parameters defining subgroups and we write the decomposition as this is known as the cartan or polar decomposition of the group and are the cartan coordinates .matrices in are rotations , leaving the origin fixed and applying a general element in to the origin gives : defining two transformations as equivalent if the parameters are identical , we obtain a correspondence between the points on the unit disk and the equivalence classes of transformations .we write and functions on the unit disk are functions on that are independent of the last argument of the cartan decomposition . for two points define the kernel function as the probability of obtaining the true measurement given that the actual measurement resulted in the point .often it is natural to require for example in the case of invariance against coordinate system changes .the mbius transforms define an invariant , hyperbolic , distance between points and given by and we consider only kernel functions of the form .points on the unit disk are equivalence classes of group elements and in the general case we consider ( ( left - isotropic ) kernels on the full group satisfying it follows that where is the identity element . in the case of the unit diskwe consider elements and and we find the relation between these two decompositions and the difference as ( see , ( vol 1 , page 271 ) ) : we therefore consider only radial kernels of the form : in general we have to compute the function values for every pair of special interest are functions which separate these factors .they are the associated legendre functions ( zonal or mehler functions , , page 324 ) of order and degree defined as ( see eq.([eq - alfundef ] ) ) : with the addition formula ( , page 327 ) where .it seperates dependent terms , ( from data with parameters ) and the the kernel part ( from with ) .this is generalized by the mft , showing that a large class of functions are combinations of the associated legendre functions : [ the : mft ] for a function defined on the interval define its transform as : then can be recovered by the inverse transform : details about the transform , special cases and its applications can be found in ,,, and . in the general group theoretical context the integral in eq.([mft - cos ] )defines a convolution over the subgroup parametrized by the hyperbolic angle in the case of the group this general construction can be applied by using the cartan coordinates of group elements .the analysis of the angular parts lead to ordinary fourier series and for the radial part we find for the mft ( in hyperbolic coordinates ) the integrals ( using eq.([eq - ttrans ] ) characterizing the contribution from datapoint : with .the coefficients are computed from the measurements and the weights are independent of the data .it is also known that the mft preserves the scalar product ( parseval relation ) . 
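a small numerical sketch of the transform pair in theorem [the:mft] is given below. it assumes one common normalization in which the forward transform integrates the radial function against the conical function p with weight sinh(beta), and the factor tau*tanh(pi*tau) sits in the inversion integral; the conical function itself is evaluated through laplace's integral representation of the legendre function. conventions differ between references, so this should be read as an illustration rather than as the exact normalization of this paper.

```python
import numpy as np
from scipy.integrate import quad


def conical(tau, beta):
    """P_{-1/2 + i*tau}(cosh(beta)) via Laplace's integral representation.

    P_nu(z) = (1/pi) * integral_0^pi (z + sqrt(z^2 - 1) cos(phi))^nu dphi for z > 1.
    With nu = -1/2 + i*tau the (real) value equals the integral of
    A**-0.5 * cos(tau * log(A)), where A = cosh(beta) + sinh(beta) * cos(phi).
    """
    def integrand(phi):
        a = np.cosh(beta) + np.sinh(beta) * np.cos(phi)
        return np.cos(tau * np.log(a)) / np.sqrt(a)

    value, _ = quad(integrand, 0.0, np.pi)
    return value / np.pi


def mft_forward(f, tau, beta_max=12.0):
    """F(tau) = integral_0^inf f(beta) P_{-1/2+i tau}(cosh beta) sinh(beta) dbeta."""
    value, _ = quad(lambda b: f(b) * conical(tau, b) * np.sinh(b), 0.0, beta_max, limit=200)
    return value


def mft_inverse(F_vals, taus, beta):
    """f(beta) ~ integral_0^inf tau tanh(pi tau) P_{-1/2+i tau}(cosh beta) F(tau) dtau,
    approximated by the trapezoidal rule on a finite tau grid (slow but transparent)."""
    kernel = np.array([conical(t, beta) for t in taus])
    return np.trapz(taus * np.tanh(np.pi * taus) * kernel * F_vals, taus)


if __name__ == "__main__":
    f = lambda b: np.exp(-b ** 2)              # a rapidly decaying radial profile
    taus = np.linspace(0.0, 10.0, 201)
    F = np.array([mft_forward(f, t) for t in taus])
    for b in (0.3, 0.8, 1.5):
        print(b, f(b), mft_inverse(F, taus, b))    # reconstruction check
```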
if then this can be found in (7.6.16 ) and (7.7.1 ) where the interested reader can find more information on the mft .finally we mention that the conical functions are eigenfunctions of the laplace - operator which is the operator that commutes with the group action .the eigenvalues are ( see chap . 4 for a brief summary of these results ) .in the first example we use the mft to study a simple filter system ( more information can be found in ) .the input of the system are the pixel values on a four - point orbit of the dihedral group , for example the four corner points of a square .the theory of group representations for the dihedral group shows that a natural filter pair consists of the edge filters with coefficients and the raw output values should be transformed into polar coordinates they are the familiar edge magnitude ( independent of dihedral transformation ) and the edge - orientation . following popular practice in machine learning the raw magnitude result converted to a value less than or equal to one with the help of a transfer function .every configuration of a four - pixel distribution results thus in a point on the unit disk . in our first experimentwe compute the probability distribution of such a filter system for each of the 112 images in the normalized brodatz texture database ( described in ) .the kernels used for the density estimators are of the form for which the mft is known .the resulting probability density functions on the disc where then analysed with the help of the mft and the ordinary fourier transform in the angular variable . in the following example we used the orientation invariant component of the angular fft andthen computed its mft - values .next we applied the parseval relation ( eq.([eq : parseval ] ) ) to compute the distance between pairs of mfts .multidimensional scaling was then used to project the distance distributions to the two - dimensional plane . ignoring the orientation information and only using the magnitude values results in a characterization of the textures in terms of overall roughness which is illustrated in figure [ fig : brodatz ] .it turns out that the first axis in the multidimensional scaling mapping is by far the most dominant and we therefore sort the textures using their position on this first axis .the contour plots of the sorted sequence of these pdfs is shown in figure [ fig : brodatz](a ) .the first and last five textures in the sequence are shown in figure [ fig : brodatz](b ) .\(a ) sorted densities \(b ) first and last textures in the sequence in the color image illustration we convert first the original rgb image into its cielab form .the ( a , b ) part of a cielab vector describes the chroma properties of a color and we convert the original ( a , b ) vector to polar coordinates and scale the radial part so that all points are located on the unit disk . for each point in the imagewe then select a window ( of size pixels ) and compute the mft of the distribution of the radial ( saturation ) variable . from the fact that the conical functions are eigenfunctions of the laplace operator it follows that we can simulate a kind of heat - flow on these probability distributions of the saturation values . in the mft spacethe `` high - frequency '' components with big values are faster suppressed due to the negative eigenvalue of the conical functions . 
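a minimal sketch of this transform-domain damping step is given below. it assumes the standard laplace-beltrami eigenvalue -(tau^2 + 1/4) for the conical function of index tau, so that diffusion for a time t multiplies each mft coefficient by exp(-(tau^2 + 1/4) t); this normalization is an assumption, since the paper's own formula is not reproduced here.

```python
import numpy as np


def heat_flow_in_mft_domain(F_tau, taus, t):
    """Damp the MFT coefficients F(tau) of a radial density by a diffusion time t.

    Assumes the Laplace-Beltrami eigenvalue -(tau**2 + 1/4) for the conical
    function of index tau, so every coefficient decays as exp(-(tau**2 + 1/4) * t);
    large-tau ("high frequency") components are suppressed fastest.
    """
    return np.asarray(F_tau) * np.exp(-(np.asarray(taus) ** 2 + 0.25) * t)


# intended use together with the transform sketch above (names from that sketch):
#   F_smooth = heat_flow_in_mft_domain(F, taus, t=0.1)
#   smoothed = [mft_inverse(F_smooth, taus, b) for b in np.linspace(0.0, 3.0, 61)]
#   new_saturation = value of b at which `smoothed` is largest (the mode)
```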
in the experiment we computed first the probability distributions of the local saturation values .we then used the mft to transform these original distributions and then we used a sequence of increasing time - intervals to simulate the dissipation of the distributions . due to the form of the conical functions we expect that image points with concentrated high saturation values should survive longest whereas low saturation values and inhomogenous distributions should disappear faster .after the operation in the transform domain we transformed the new function back and we detected the mode ( saturation with the highest probability value ) .this mode - saturation value is used as the new saturation at this point , the angular - hue and the intensity value of the cielab l - component are copied from the original and the color vectors are transformed back into rgb - form .the result of a sequence of sixteen such time - intervals is shown in figure [ fig : pepmon ] . herethe first fifteen use equal time - increments whereas the last image corresponds to a time - interval that is twice as long as the previous one .we observed that many processes produce signals that are located on the unit disk .apart from the property that the outpout vectors are points on the unit disk it is also important that operations on the disk have a meaningful interpretation . in the case of the edge filter systems these operations are changes in orientation and edge - magnitude . for the chroma - vectors the corresponding operations are hue and saturation changes .the next step is the observation that functions on the unit disk have an interpretation as functions on the lorentz group for this group there is a ( more or less ) unique transform with similar properties as the ordinary fourier transform .this transform is the mehler - fock transform and the functions that correspond to the complex exponentials are the conical functions .based on the properties of the mft we showed that ( a ) processing can be done in the original or the transform domain ( b ) the relation between the mft and group convolution leads to a separation of the kernel density estimator into data - dependent and kernel - dependent terms ( c ) the mft preserves geometry between the original and the transform domain and we can therefore estimate similarity between signals in either space and ( d ) we can use the eigenfunction property of the laplacian to study methods for smoothing the data . the detailed study of numerical aspects such as approximation errors , sampling properties , and comparison to other methods is left for future investigations .e.a chevallier , f.b barbaresco , and j.a .angulo , `` probability density estimation on the hyperbolic space applied to radar processing , '' in _geometric science of information_. 2015 , vol .9389 of _ lncs _ , pp . 753761 , springer verlag . | many stochastic processes are defined on special geometrical objects like spheres and cones . we describe how tools from harmonic analysis , i.e. fourier analysis on groups , can be used to investigate probability density functions ( pdfs ) on groups and homogeneous spaces . we consider the special case of the lorentz group su(1,1 ) and the unit disk with its hyperbolic geometry , but the procedure can be generalized to a much wider class of lie - groups . we mainly concentrate on the mehler - fock transform which is the radial part of the fourier transform on the disk . 
some of the characteristic features of this transform are its relation to group convolutions , the isometry between signal and transform space , its relation to the laplace - beltrami operator , and its relation to group representation theory . we will give an overview of these properties and their applications in signal processing . we will illustrate the theory with two examples from low - level vision and color image processing . |
in the companion paper we compare the force networks in tapped systems by using relatively simple measures : probability density functions ( pdfs ) for normal and tangential forces between the particles , correlation functions describing positional order of the considered particles , as well as possible correlations of the emerging force networks .these classical measures are supplemented by analysis of cluster sizes and distributions at different force levels ( i.e. , by considering the part of the force network that only includes contacts involving forces exceeding a threshold ) .these results have uncovered some differences between the force networks in the considered systems .for example , we have found that the number of clusters as a function of the force level is heterogeneous in the tapped systems under gravity , with different distributions deeper in the samples compared to the ones measured closer to the surface .however , some of the differences remain unclear .for example , tapped disks exposed to different tap intensities that lead to the same ( average ) packing fraction , are found to have similar pdfs and similar cluster size distributions , although it is known that there are some differences in the geometrical properties of the contact networks in these systems . in the present paper ,we focus on a different approach , based on persistence analysis .this approach has been successfully used to explain and quantify the properties of force networks in the systems exposed to compression .in essence , persistence analysis allows to quantify the force network ` landscapes ' in a manner that is global in character , but it still includes detailed information about the geometry at all force levels . the global approach to the analysis of force networks makes it complementary to other works that have considered in detail the local structure of force networks , and attention to geometry distinguishes this approach from network type of analysis .we will use persistence analysis to compare the force networks between the systems of disks exposed to different tapping intensities , as well as to discuss similarities and differences between the systems of disks and pentagons . as we will see , some differences between the considered networks that could not be clearly observed ( and even less quantified ) using classical measures become obvious when persistence analysis is used .furthermore , persistence analysis allows for formulating measures that can be used to quantify , in a precise manner , differences in force networks between realizations of a nominally same system .we note that persistence has been used to quantify the features in other physical systems such as isotropically compressed granular media .it was also used to study dynamics of the kolmogorov flow and rayleigh - bnard convection .this paper is organized as follows . in sec .[ sec : methods ] , we discuss briefly the persistence approach and also provide some examples to illustrate its use in the present context . 
in sec .[ sec : results ] , we discuss the outcome of persistence approach and quantify the differences between the considered systems .section [ sec : conclusions ] is devoted to the conclusions and suggestions for the future work .the simulations utilized in this paper are described in detail in ; here we provide a brief overview .we consider tapped systems of disks and pentagons in a gravitational field .the particles are confined in two - dimensional ( 2d ) rectangular box with solid ( frictionless ) side walls , .initially particles are placed at random ( without overlaps ) into the box , and the particles are allowed to settle to create the initial packing .then , vertical taps are applied to each system considered ; we discard the initial taps and analyze the remaining . after each tap, we wait for the particles to dissipate their kinetic energy and achieve a mechanical equilibrium .we record the particles positions and the forces acting between them ; the interactions between the particles and the walls are not included . for more direct comparison ,the forces are normalized by the average contact force .in addition to discussing the influence of particle shape , we consider two different tapping intensities , ( called ` high ' and ` low ' tap in what follows ) that lead to the same packing fractions for disks ( ( low ) and ( high ) , where is the disk radius and the acceleration of gravity ) .we also discuss the influence of gravitational compaction , and for this purpose we consider ` slices ' of the systems , particle diameters thick : bottom slice positioned deep inside the domain , and the top slice close to the surface .see for more details .we are interested in understanding the geometry exhibited by force networks .their complete numerical representation contains far too much information . with this in mind, we make use of the tools from algebraic topology , in particular homology , to reduce this information by counting simple geometric structures . in the two dimensional setting of interest in the present context , fixing a magnitude , , of the force and considering the particles which interact with a force at or above yields a 2d topological space , .two simple geometric properties of are the number of components ( clusters ) , , and the number of loops ( holes ) , . in it is shown that even though we are counting very simple geometric objects , by varying the threshold , the set of betti numbers and provides novel distinctions between the behavior of the above mentioned systems .however , there is an obvious limitation to just using the betti numbers to describe a system .consider two different thresholds and assume that the values of the betti numbers are the same .does this mean that the geometric structures , e.g. components and loops , are the same at these two thresholds , or have some components or loops disappeared and been replaced by an equal number of different components or loops ?this distinction can not be determined from the betti number count alone . to provide more complete description , we make use of a relatively new algebraic topological tool called _ persistent homology_. 
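before turning to persistence diagrams, the fixed-threshold counting described above can be made concrete. the sketch below treats the thresholded force network simply as a graph (particles as vertices, contacts with force at or above the threshold as edges): the number of components follows from a union-find structure, and the number of independent loops from the cycle rank, edges - vertices + beta_0. this is a simplified stand-in for the 2d complex analyzed in the paper, so the loop count should be read as illustrative.

```python
def betti_numbers(contacts, threshold):
    """beta_0 (clusters) and beta_1 (independent loops) of the part of a contact
    network whose normalized forces are at or above `threshold`.

    contacts: iterable of (i, j, force) tuples with particle labels i, j.
    Only particles incident to at least one retained contact are counted, and
    for a graph beta_1 = edges - vertices + beta_0 (the cycle rank).
    """
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v

    edges = 0
    for i, j, force in contacts:
        if force < threshold:
            continue
        parent.setdefault(i, i)
        parent.setdefault(j, j)
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
        edges += 1

    beta0 = len({find(v) for v in parent})
    beta1 = edges - len(parent) + beta0
    return beta0, beta1


if __name__ == "__main__":
    # toy network: a square with one weak diagonal, plus a separate strong contact
    toy = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, 1.2), (3, 0, 1.1), (0, 2, 0.6), (4, 5, 2.5)]
    for thr in (0.5, 1.0, 2.0):
        print(thr, betti_numbers(toy, thr))    # -> (2, 2), (2, 1), (2, 0)
```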
in the context of the 2d systems that we are considering here , it is sufficient to remark that to each force network landscape persistent homology assigns two _ persistence diagrams _ , and , such as those shown in fig .[ fig : diagrams ] .each persistence diagram consists of a collection of pairs of points where , the birth , indicates the threshold value at which a geometric structure ( a component / cluster for or a loop for ) first appears and , the death , indicates the threshold value at which the geometric structure disappears . in this paperwe measure the geometry of the part of the contact network with force interactions greater than a given threshold , and thus .the value is called the _ lifespan_. note that the component represented by the point ` dies ' when it merges with some other component with the birth coordinate larger or equal to .in particular , the single generator in with death coordinate , see fig .[ fig : diagrams](a ) , represents the component that contains the strongest force ` chain ' in the system : the one that formed at the highest force level .note that it has both the highest birth value and the longest lifespan .more detailed interpretation of in 1d can be found in , while a rigorous presentation for 2d is given in .+ the loop structure of a force network is described by .a loop in the network is a closed path of the edges connecting centers of the particles .similarly to , the point indicates that a loop appears in the part of the network exceeding the force threshold .this loop is present for all the values of the threshold in ] , where indicates an average over all time origins , and is used to normalize so that . auto - correlation curves for that while for high tapping there is no observable correlation between taps , for low tapping there is a clear correlation for up to taps .this correlation is consistent with the structure of the distance matrices for low tapping , see fig .[ fig : disks_bottom_heat_hist](a , b ) .the results for auto - correlation functions are very similar .we can go further and compute the instantaneous cross - correlation between and , defined as [ tp({{\mathsf{pd}}}_1)(t)-\langle tp({{\mathsf{pd}}}_1 ) \rangle ] \rangle}{\sigma_{\phi}\sigma_{tp({{\mathsf{pd}}}_1)}},\ ] ] where indicates the variance of the variable .we find significant correlation between these two quantities : reaches the values of for normal and tangential forces ( here means perfect correlation and lack of correlation ) . 
to rationalize the correlation of and , we recall the results obtained by considering the systems of disks exposed to compression . in that system it was found that for monodisperse disks , which are more likely to crystallize ( therefore having larger ) , there is a larger number of points in , consistently with the results presented here . we note that in it was found that a larger number of points in occurred for strong forces ( with the idea that strong loops form at the boundaries of the fault zones separating crystalline regions ) ; we expect a similar reason for the larger values of in the present setting . to conclude this section , we note that persistence analysis allows one to clearly identify differences between the systems of disks exposed to different tapping intensities that lead to the same ( average ) packing fraction : these differences are particularly clear when considering the structure of loops . the differences are apparent for the averaged persistence diagrams , but they are even more prominent when considering individual taps and their variability . this variability is much stronger for the systems exposed to low tapping . as already noted in the context of the results shown in fig . [ fig : disks_bottom_heat_hist](c , d ) , the differences between different realizations for low tapping may be as large as the differences between low - and high - tapping realizations .
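the per-tap summaries used in this section are simple to compute once the diagrams are available: lifespans are birth minus death (the filtration is by decreasing force, so birth >= death), the total persistence is their sum, and the tap-to-tap autocorrelation is the usual normalized covariance over time origins. the sketch below assumes each diagram is given simply as an array of (birth, death) pairs; the fabricated data in the usage example are placeholders, not simulation output.

```python
import numpy as np


def lifespans(diagram):
    """diagram: array of shape (n_points, 2), columns (birth, death), birth >= death."""
    d = np.asarray(diagram, dtype=float)
    return d[:, 0] - d[:, 1]


def total_persistence(diagram):
    """Sum of the lifespans of all points in the diagram."""
    return float(np.sum(lifespans(diagram)))


def autocorrelation(series, max_lag):
    """Normalized autocorrelation over all time origins, with rho(0) = 1."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    denom = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / denom for k in range(max_lag + 1)])


def instantaneous_cross_correlation(a, b):
    """Zero-lag correlation between two per-tap series (1 = perfect, 0 = none)."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # fabricated diagrams, one per tap, just to exercise the functions
    diagrams = [np.column_stack([rng.uniform(1.0, 3.0, 40), rng.uniform(0.0, 1.0, 40)])
                for _ in range(200)]
    tp = [total_persistence(d) for d in diagrams]
    phi = rng.normal(0.86, 0.002, 200)        # stand-in for a packing-fraction series
    print(autocorrelation(tp, 5))
    print(instantaneous_cross_correlation(phi, tp))
```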
[fig : diagrams_all](c ) and ( d ) .+ + we proceed by discussing the source of the differences between the disk- and pentagon - based systems considered so far .first , we focus on the distributions of birth times. figsures [ fig : disks_pents_birth0.1 ] and [ fig : disks_pents_birth0 ] show the corresponding results .the only difference between these figures is that in fig .[ fig : disks_pents_birth0.1 ] we consider only the points with the lifespan larger than , while in fig .[ fig : disks_pents_birth0 ] we consider all the points .the reason for showing both figures is that the differences between the two provide additional information about the points with short lifespan .considering components for normal forces , parts ( a ) in these two figures , we observe that birth times capture some differences between the two systems that were not obvious when considering distances .there are more points in for disks that are born around , and more points in for pentagons born at larger forces .this is consistent with the pdfs for disks and pentagons shown in fig .10 of . for the tangential forces , parts ( b ), we do not see much if any difference in the birth times .regarding loops , the parts ( c ) and ( d ) of figs .[ fig : disks_pents_birth0.1 ] and [ fig : disks_pents_birth0 ] , one consistent observation is that there are more points in for disks than for pentagons for the whole range of forces considered .moreover , for disks loops start appearing at higher force level than for pentagons .the differences between these two figures show how many of the points have a short lifespan ; these differences are particularly interesting for loops , parts ( c ) and ( d ) : we note a significantly larger number of points for pentagons at small birth times , suggesting that loops for pentagon - based systems form at very small or vanishing force , consistently with the discussion in .this finding holds both for loops formed by normal and tangential forces .+ figure [ fig : lifespans ] presents distributions of the lifespans for disks and pentagons . from diagrams we conclude that for both disks and pentagons , the dominant number of components is characterized by rather short lifespans .we also observe a cross - over ( more pronounced for tangential forces ) between disk and pentagon distributions , although the difference is not large .we note that the lifespans larger than are more probable for pentagons than for disks .therefore , the components live longer for pentagon - based system in particular when tangential forces are considered . to use the landscape analogy , this result says that mountain peaks in the tangential force network are more pronounced for pentagon - based systems .observe from fig .[ fig : lifespans](c , d ) that the lifespan curves are similar to the birth time curves shown in fig .[ fig : disks_pents_birth0 ] .this is because for both disks and pentagons , most of the loops disappear very close to the zero force level , and thus the death time provides no additional information .figure [ fig : totalpers ] shows the total persistence , , that to a large degree summarizes many of the findings discussed so far .we recall that corresponds to the sum of the lifespans , see sec [ sec : methods ] , so considering the results shown in this figure together with the ones shown in fig .[ fig : lifespans ] is useful . 
for diagrams , there is only a minor difference between disks and pentagons in the normal force network ; however , for tangential forces , there are significant differences .this reflects the larger lifespans of the components for pentagon - based system . for ,the differences are very obvious for both normal and tangential forces , and in contrast to results , here we find that the distribution of is shifted to larger values and is much broader for disk - based systems .figure [ fig : totalpers ] shows clearly significant differences in the structure of force networks in the systems of tapped disks and pentagons .pentagon systems tend to form new components ( clusters ) at higher force levels and these endure longer before they merge , in comparison to disk - based ones .this is particularly evident for the tangential force network .in contrast , loops are formed at relatively low force levels in pentagon - based systems .hence , one could expect that the clusters that form at higher force levels are more stretched ( because they do not contain loops ) for pentagons . since most loops persist down to zero force levels ,the for pentagons is significantly lower than for disks .in the present paper , we discuss and describe properties of force networks in tapped particulate systems of disks and pentagons .our analysis is based on persistent homology that allows to precisely measure and quantify a number of properties of these networks .the persistence diagrams record the distribution and connectivity of the features ( components , loops ) that develop in the force landscape as the force threshold is decreased .these diagrams can then be analyzed and compared by a number of different means , some of them described and used in the present work .one of the considered concepts is the distance between the persistence diagrams that allows for their direct comparison .the comparison can be carried out on the level of individual diagrams , allowing to compare between different configurations of nominally the same system , between different parts of a given system , or between completely different systems .in addition , one can compare the distributions of the distances .these comparisons allow us to identify , in a precise manner , the differences between persistence diagrams , and therefore force networks .in addition to distances , we have defined and used other measures , such as birth times , showing at which force level features appear ; lifespans , showing how long the features persist as force threshold is modified ; and finally total persistence to describe essentially how ` mountainous ' the force landscape considered is .the listed measures were computed both for components / clusters that could be in a loose sense related to force chains , and for loops that could be related to ` holes ' in between the force chains . 
the use of the outlined measures has allowed us to identify a number of features of force networks .we use these measures , for example , to identify and explain the differences between the systems of disks exposed to different tapping intensities that lead to ( on average ) the same packing fraction .in addition to identifying the differences between these systems , the implemented measures have also shown that the systems of disks , when exposed to low tapping intensity , evolve in a nontrivial manner , with the subsequent taps possibly correlated to the preceding ones .we have shown that the oscillations in the measures built upon persistence diagrams are correlated with small oscillations in the packing fraction .more generally , the finding is that if the system is tapped strongly and therefore the force network is rebuilt from scratch at each tap , the resulting force networks are similar ; however , under low tapping regime , the system ( and the resulting force network ) appears to be stuck in a certain state , and jumps out of it only infrequently .this nontrivial finding and its consequences will be explored in more detail in our future works .another comparison that we carried out involves tapped systems of disks and pentagons .one important finding here is that the differences between disks and pentagons are significant when the structure of loops is considered : presence of loops is much more common for the systems of disks than for pentagons , independently of whether normal or tangential forces are considered . on the other hand ,the differences between the persistence diagrams based on components / clusters are minor and relatively difficult to identify .therefore , the force networks that form in tapped systems of disks and pentagons are similar when only components are considered , but significantly different when loops are included . on a more general note , it should be emphasized that the measures described here allow to directly compare force networks , and quantify the differences .for example , we can now quantify in precise terms variability of force networks between one system and another .this ability opens the door for developing more elaborate comparisons , measures , and also connections between the force network properties and mechanical response on the macroscale .furthermore , the analysis that we presented here can be easily applied to the three dimensional systems , where any direct visualization may be difficult .our future research will proceed in this direction . | in the companion paper , we use classical measures based on force probability density functions ( pdfs ) , as well as betti numbers ( quantifying the number of components , related to force chains , and loops ) , to describe the force networks in tapped systems of disks and pentagons . in the present work , we focus on the use of persistence analysis , that allows to describe these networks in much more detail . this approach allows not only to describe , but also to quantify the differences between the force networks in different realizations of a system , in different parts of the considered domain , or in different systems . we show that persistence analysis clearly distinguishes the systems that are very difficult or impossible to differentiate using other means . 
one important finding is that the differences in force networks between disks and pentagons are most apparent when loops are considered : the quantities describing properties of the loops may differ significantly even if other measures ( properties of components , betti numbers , or force pdfs ) do not clearly distinguish the investigated systems . |
in the last decade , quantile regression has attracted considerable attention .there are two major reasons for such popularity .the first is that quantile regression estimation can be robust to non - gaussian or heavy - tailed data .in addition , it includes the commonly used least absolute deviation ( lad ) method as a special case .the second is that the quantile regression model allows practitioners to provide more easily interpretable regression estimates obtained via various quantiles ] .we name them quantile correlation ( qcor ) and quantile partial correlation ( qpcor ) . based on these two measures ,we propose the quantile partial autocorrelation function ( qpacf ) and the quantile autocorrelation function ( qacf ) to identify the order of the qar model and to assess model adequacy , respectively .it is noteworthy that the application of qcor and qpcor is not limited to qar models .they can be used broadly as the classical correlation and partial correlation measures in various contexts .the rest of this article is organized as follows .section 2 introduces qcor and qpcor .furthermore , the asymptotic properties of their sample estimators are established .section 3 obtains qpacf and its large sample property for identifying the order of qar model .in addition , the autoregressive parameter estimator and its asymptotic distribution are demonstrated .moreover , qacf and its resulting test statistics , together with their asymptotic results , are provided to examine the model adequacy .section 4 conducts simulation experiments to study the finite sample performance of the proposed methods , and also presents an empirical example to demonstrate usefulness .finally , we conclude the article with a brief discussion in section 5 .all technical proofs of lemmas and theorems are relegated to the appendix .for random variables and , let be the unconditional quantile of and be the quantile of conditional on .one can show that is independent of , i.e. with probability one , if and only if the random variables and are independent , where is the indicated function .this fact has been used by and , and it also motivates us to define the quantile covariance given below . for , define where the function .subsequently , the quantile correlation can be defined as follows , where . in the simple linear regression with the quadratic loss function, there is a nice relationship between the slope and correlation .hence , it is of interest to find a connection between the quantile slope and .to this end , consider a simple quantile linear regression , ,\ ] ] in which one attempts to approximate by a linear function ( see ) , where ] and =0 ] , , ^ 2-[\textrm{qcov}_{\tau}\{y , x\}]^2,\ ] ] -\sigma_x^2\cdot\textrm{qcov}_{\tau}\{y , x\},\ ] ] and ,\ ] ] where is defined as in the previous subsection . then , we obtain the following result .[ thm1 ] suppose that and there exists a such that the density is continuous and the conditional density is uniformly integrable on ] , and . as a result, we obtain the estimate of , and denote it by .we next estimate the quantile partial correlation .let based on equation ( [ eqd ] ) , the sample quantile partial correlation is defined as where . 
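as a computational reference point for the estimators used in this and the previous section, here is a minimal sketch of the sample quantile correlation of section 2.1. it assumes the conventions psi_tau(w) = tau - I(w < 0), qcov estimated by the sample average of psi_tau(y_i - q_hat) (x_i - x_bar) with q_hat the sample tau-quantile of y, and normalization by sqrt((tau - tau^2) var(x)); these are stated as assumptions, since the paper's own displays are not reproduced here.

```python
import numpy as np


def psi(w, tau):
    """psi_tau(w) = tau - I(w < 0)."""
    return tau - (np.asarray(w) < 0).astype(float)


def sample_qcor(y, x, tau):
    """Sample quantile correlation between y and x at quantile level tau.

    Assumed conventions: qcov = mean( psi_tau(y - q_hat) * (x - mean(x)) ), with
    q_hat the sample tau-quantile of y, normalized by sqrt((tau - tau^2) * var(x)).
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    q_hat = np.quantile(y, tau)
    qcov = np.mean(psi(y - q_hat, tau) * (x - x.mean()))
    return qcov / np.sqrt((tau - tau ** 2) * x.var())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y = 0.8 * x + rng.standard_t(df=3, size=5000)    # heavy-tailed errors
    for tau in (0.25, 0.5, 0.75):
        print(tau, round(sample_qcor(y, x, tau), 3))
```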
to investigate the asymptotic property of ,denote the conditional density of given and the conditional density of given and by and , respectively .in addition , let , , , ] , , , ^ 2-\{e[\psi_{\tau}(y-\theta_2^{\prime}\mathbf{z}^*)x]\}^2,\ ] ] -\sigma_{x|\mathbf{z}}^2\cdot e[\psi_{\tau}(y-\theta_2^{\prime}\mathbf{z}^*)x],\ ] ] and )^2}{4\sigma_{x|\mathbf{z}}^6 } -\frac{\sigma_{25}\cdot e[\psi_{\tau}(y-\theta_2^{\prime}\mathbf{z}^*)x]}{\sigma_{x|\mathbf{z}}^4 } + \frac{\sigma_{24}}{\sigma_{x|\mathbf{z}}^2 } \right],\ ] ] where , , , and are defined as in the previous subsection .then , we have the following result . [ thm2 ]suppose that , , , , , and there exists a such that and are uniformly integrable on ] .in addition , assume that the random vector has a joint density .we then have that =f_{y^*}(0)\cdot e[x\mathbf{z}^*|y^*=0] ] , and \{e[\mathbf{z}^*\mathbf{z}^{*\prime}|y^*=0]\}^{-1} ] and ] .subsequently , the rest of quantities involved in , , , , , and can be , respectively , estimated by , , , ^ 2-[\widehat{\textrm{qcov}}_{\tau}\{y^*,x\}]^2 ] . following the box - jenkins classical approach, we next introduce the qpacf of a time series to identify the order of a qar model , and then propose using the qacf of residuals to assess the adequacy of the fitted model . for the positive integer , let , , and ] , then , and for .the above lemma indicates that the proposed qpacf plays the same role as that of pacf in the classical ar model identification . in practice, one needs the sample estimate of qpacf . to this end , let and .according to ( [ seqd ] ) , we obtain the estimation for , and we term it the sample qpacf of the time series . to study the asymptotic property of , we introduce the following assumption , which is similar to condition a.3 in .[ assum3 ] , ^ 2>0 ] .furthermore , let by , the random variable is independent of for any , and for .let be the conditional density of on the -field , and .moreover , let ] , ] , and then , we obtain the asymptotic result given below .[ thm3 ] for , if , and assumption [ assum3 ] is satisfied , then and to estimate in the above theorem , we first apply the method to obtain the estimation of given below . where is the estimated quantile of and is the bandwidth selected via appropriate methods ( e.g. , see ) .afterwards , we can use the sample averaging to approximate , , , , , and by replacing their , , and , respectively , with , and .accordingly , we obtain an estimate of , and denote it as . in sum , we are able to use the threshold values to check the significance of . to demonstrate how to use the above theorem to identify the order of a qar model , we generate the observations from where is the standard normal cumulative distribution function , , and is an sequence with uniform distribution on ] , ] , ] , see . in addition , the application of the proposed correlations to the quantile regression model with autoregressive errors is worth further investigation .clearly , the contribution of the proposed measures is not limited to those two models .for example , variable screening and selection ( e.g. , ; ) in quantile regressions are other important topics for future research . 
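As an informal illustration of the order-identification procedure of Section 3 (the coefficient functions of the authors' own simulation design are not legible in this extraction, so a plain AR(1) series is used instead), the sketch below computes the sample QPACF lag by lag: at lag k, y_t is quantile-regressed on an intercept and the intermediate lags, y_{t-k} is least-squares regressed on the same regressors, and the quantile correlation of the two residual parts is formed. For a first-order process the values beyond lag 1 should be close to zero at every quantile level.

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def sample_qpacf(y, max_lag, tau):
    """Sample quantile partial autocorrelation function (a sketch, not the authors' code)."""
    y = np.asarray(y, float)
    n = len(y)
    vals = []
    for k in range(1, max_lag + 1):
        resp = y[k:]                                           # y_t
        lagk = y[:n - k]                                       # y_{t-k}
        zs = np.column_stack([np.ones(n - k)] +
                             [y[k - j:n - j] for j in range(1, k)])  # intercept + intermediate lags
        theta2 = QuantReg(resp, zs).fit(q=tau).params          # quantile regression of y_t on zs
        theta1 = np.linalg.lstsq(zs, lagk, rcond=None)[0]      # least squares of y_{t-k} on zs
        score = tau - (resp - zs @ theta2 < 0).astype(float)
        resid = lagk - zs @ theta1
        vals.append(np.mean(score * resid) /
                    np.sqrt((tau - tau ** 2) * np.mean(resid ** 2)))
    return np.array(vals)

rng = np.random.default_rng(1)
y = np.zeros(3000)
for t in range(1, 3000):
    y[t] = 0.5 * y[t - 1] + rng.normal()
for tau in (0.2, 0.5, 0.8):
    print(tau, np.round(sample_qpacf(y, 5, tau), 3))
```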
in sum , this paper introduces valuable measures to broaden and facilitate the use of quantile models .for , denote the function ] .let and .then , by and , \\ & = -e[\xi \psi_{\tau}(y_0)]+e[(y_0-\xi)i(0>y_0>\xi)]+e[(\xi - y_0)i(0<y_0<\xi)]\\ & = e[(y_0-\xi)i(0>y_0>\xi)]+e[(\xi - y_0)i(0<y_0<\xi)].\end{aligned}\ ] ] note that both and are nonnegative random variables , and is a continuous random variable .thus , with probability one , , which implies .let and .\ ] ] since the random vector has a joint density , we apply similar techniques to those in the proof of lemma [ lem1 ] to show that =0 , \hspace{5 mm } e[\psi_{\tau}(y^*)\mathbf{z}]=\mathbf{0},\ ] ] and the values of , and are unique and satisfy =\mathbf{0},\ ] ] where is a zero vector . from , if , then =0 ] , , and according the law of large numbers, we then have that we next consider the numerator in . for the sake of simplicity ,let , , and , where is defined in the proof of lemma 2 , and and are defined in subsections 2.1 and 2.2 , respectively . under the theorems assumptions , we employ similar techniques to those used in the proof of lemma [ lem1 ] and given in to show that there exists a unique such that =\mathbf{0} ] are defined in subsection 2.2 .this , together with , results in subsequently , by , , the central limit theorem , and the cramer - wold device , we obtain that \end{array}\right ) \rightarrow_d n(0,\sigma_2),\ ] ] where and , , and are defined as in subsection 2.2 .finally , following the delta method ( * ? ? ?* chapter 3 ) , we complete the proof .we next study the numerator of .let , and , where is the vector defined in the proof of lemma 3 , and and are defined in subsection 3.1 .it is noteworthy that the series is fitted by model with order and the true parameter vector .accordingly , and the parameter estimate of is .then , using in the proof of theorem [ thm4 ] , we obtain that \}^{-1 } \cdot\frac{1}{n}\sum_{t = k+1}^n\psi_{\tau}(e_{t,\tau } ) \mathbf{z}^*_{t , k-1}+o_p(n^{-1/2}).\ ] ] applying a similar approach to that used in obtaining , and then using the above result , we further have that y_{t - k}\\ & = -\frac{1}{n}\sum_{t = k+1}^n \int_0^{(\widetilde{\theta}_2-\theta_2)^{\prime}\mathbf{z}^*_{t , k-1 } } f_{t-1}(s)dsy_{t - k}+o_p(n^{-1/2})\\ & = -(\widetilde{\theta}_2-\theta_2)^{\prime } \cdot\frac{1}{n}\sum_{t = k+1}^n f_{t-1}(0)y_{t - k}\mathbf{z}^*_{t , k-1}+o_p(n^{-1/2})\\ & = -a_1^{\prime}\sigma_{31}^{-1 } \cdot\frac{1}{n}\sum_{t = k+1}^n\psi_{\tau}(e_{t,\tau } ) \mathbf{z}^*_{t , k-1}+o_p(n^{-1/2 } ) , \end{split}\end{aligned}\ ] ] where and are defined as in subsection 3.1 .subsequently , using similar techniques to those for obtaining and the result from equation ( [ thm3_eq2 ] ) , we obtain that y_{t - k}\\ & = \frac{1}{n}\sum_{t = k+1}^n \psi_{\tau}(e_{t,\tau } ) [ y_{t - k}-a_1^{\prime}\sigma_{31}^{-1}\mathbf{z}^*_{t , k-1}]+o_p(n^{-1/2 } ) .\end{split}\end{aligned}\ ] ] equations and ( [ thm3_eq4 ] ) , together with the central limit theorem for the martingale difference sequence , complete the proof of the asymptotic normality of . from lemma [ lem3 ] , we also have that . for any , denote where . 
applying and techniques similar to those in the proof of theorem 3.1 in , we can show that \mathbf{v}+o_p(1).\end{aligned}\ ] ] note that is a convex function with respect to .by , we then have the bahadur representation as follows , \}^{-1}\cdot \frac{1}{\sqrt{n}}\sum_{t = p+1}^n\psi_{\tau}(e_{t,\tau}^*)\mathbf{z}^*_{t , p}+o_p(1).\ ] ] this , in conjunction with the central limit theorem and the cramer - wold device , completes the proof . without loss of generality, we assume that is observable .then for .we first consider the term in . by the ergodic theorem and the fact that , we can show that and where is defined in subsection 3.2 .we next consider the numerator of .using the fact that , we obtain + o_p(n^{-1})\\ & = \frac{1}{n}\sum_{t = k+1}^n \psi_{\tau}(y_t-\widetilde{{\mbox{\boldmath{}}}}^{\prime}(\tau)\mathbf{z}^*_{t , p})e_{t - k,\tau } \\ & \hspace{5mm}-(\widetilde{{\mbox{\boldmath{}}}}(\tau)-{{\mbox{\boldmath{}}}}(\tau))^{\prime } \cdot\frac{1}{n}\sum_{t = k+1}^n \psi_{\tau}(y_t-\widetilde{{\mbox{\boldmath{}}}}^{\prime}(\tau)\mathbf{z}^*_{t , p})\mathbf{z}^*_{t - k , p } + o_p(n^{-1/2 } ) .\end{split}\end{aligned}\ ] ] applying similar techniques to those used in obtaining , we are able to show that {t - k,\tau } \\ & = -\sigma_{51,k}\sigma_{41}^{-1}\cdot\frac{1}{n}\sum_{t = k+1}^n\psi_{\tau}(e_{t,\tau})\mathbf{z}^*_{t , p}+o_p(n^{-1/2}),\end{aligned}\ ] ] where is defined in subsection 3.1 and $ ] . in addition , using similar techniques to those in obtaining and the above result , we further obtain that +o_p(n^{-1/2}).\end{aligned}\ ] ] analogously , we can verify that the above results , together with , ( [ thm5_eq2 ] ) , and the fact that , imply +o_p(n^{-1/2}),\ ] ] and +o_p(n^{-1/2}),\ ] ] where and are defined in subsection 3.2 .subsequently , applying the central limit theorem for the martingale difference sequence and the cramer - wold device , we complete the proof .the sample qpacf of daily closing prices on the nasdaq composite and the sample qacf of residuals from the fitted models for , 0.5 , and 0.8 .the dashed lines in the left and right panels correspond to and , respectively.,title="fig : " ] the sample qpacf of daily closing prices on the nasdaq composite and the sample qacf of residuals from the fitted models for , 0.5 , and 0.8 .the dashed lines in the left and right panels correspond to and , respectively.,title="fig : " ] the sample qpacf of daily closing prices on the nasdaq composite and the sample qacf of residuals from the fitted models for , 0.5 , and 0.8 .the dashed lines in the left and right panels correspond to and , respectively.,title="fig : " ] | in this paper , we propose two important measures , quantile correlation ( qcor ) and quantile partial correlation ( qpcor ) . we then apply them to quantile autoregressive ( qar ) models , and introduce two valuable quantities , the quantile autocorrelation function ( qacf ) and the quantile partial autocorrelation function ( qpacf ) . this allows us to extend the classical box - jenkins approach to quantile autoregressive models . specifically , the qpacf of an observed time series can be employed to identify the autoregressive order , while the qacf of residuals obtained from the fitted model can be used to assess the model adequacy . we not only demonstrate the asymptotic properties of qcor , qpcor , qacf , and pqacf , but also show the large sample results of the qar estimates and the quantile version of the ljung - box test . 
simulation studies indicate that the proposed methods perform well in finite samples, and an empirical example is presented to illustrate their usefulness. _keywords and phrases:_ autocorrelation function; box-jenkins method; quantile correlation; quantile partial correlation; quantile autoregressive model |
bandwidth selection is a key issue in kernel density estimation that has deserved considerable attention during the last decades .the problem of selecting the most suitable bandwidth for the nonparametric kernel density estimator introduced by and is the main topic of the reviews of , and , among others .comprehensive references on kernel smoothing and bandwidth selection include the books by , and .bandwidth selection is still an active research field in density estimation , with some recent contributions like and in the last years . + kernel density estimation has been also adapted to directional data , that is , data in the unit hypersphere of dimension .due to the particular nature of directional data ( periodicity for and manifold structure for any ) , the usual multivariate techniques are not appropriate and specific methodology that accounts for their characteristics has to be considered .the classical references for the theory of directional statistics are the complete review of and the book by .the kernel density estimation with directional data was firstly proposed by , studying the properties of two types of kernel density estimators and providing cross - validatory bandwidth selectors .almost simultaneously , provided a similar definition of kernel estimator , establishing its pointwise and consistency .some of the results by were extended by , who studied the estimation of the laplacian of the density and other types of derivatives . whereas the framework for all these references is the general -sphere , which comprises as particular case the circle ( ) , there exists a remarkable collection of works devoted to kernel density estimation and bandwidth selection for the circular scenario .specifically , presented the first plug - in bandwidth selector in this context and derived a selector based on mixtures and on the results of for the circular asymptotic mean integrated squared error ( amise ) .recently , proposed a product kernel density estimator on the -dimensional torus and cross - validatory bandwidth selection methods for that situation .another nonparametric approximation for density estimation with circular data was given in and .in the general setting of spherical random fields derived an estimation method based on a needlet basis representation .+ directional data arise in many applied fields . for the circular case ( ) a typical example is wind direction , studied among others in , and .the spherical case ( ) poses challenging applications in astronomy , for example in the study of stars position in the celestial sphere or in the study of the cosmic microwave background radiation . finally , a novel field where directional data is present for large is text mining , where documents are usually codified as high dimensional unit vectors .for all these situations , a reliable method for choosing the bandwidth parameter seems necessary to trust the density estimate .+ the aim of this work is to introduce new bandwidth selectors for the kernel density estimator for directional data .the first one is a rule of thumb which assumes that the underlying density is a von mises and it is intended to be the directional analogue of the rule of thumb proposed by for data in the real line .this selector uses the amise expression that can be seen , among others , in .the novelty of the selector is that it is more general and robust than the previous proposal by , although both rules exhibit an unsatisfactory behaviour when the reference density spreads off from the von mises . 
to overcome this problem , two new selectors based on the use of mixtures of von mises for the reference densityare proposed .one of them uses the aforementioned amise expression , whereas the other one uses the exact mise computation for mixtures of von mises densities given in .both of them use the expectation - maximization algorithm of to fit the mixtures and , to select the number of components , the bic criteria is employed .these selectors based on mixtures are inspired by the earlier ideas of , for the multivariate setting , and for the circular scenario .+ this paper is organized as follows .section [ kdebwd : sec : kdedir ] presents some background on kernel density estimation for directional data and the available bandwidth selectors .the rule of thumb selector is introduced in section [ kdebwd : sec : ruleofthumb ] and the two selectors based on mixtures of von mises are presented in section [ kdebwd : sec : mixtures ] .section [ kdebwd : sec : comparative ] contains a simulation study comparing the proposed selectors with the ones available in the literature .finally , section [ kdebwd : sec : data ] illustrates a real data application and some conclusions are given in section [ kdebwd : sec : conclusions ] .supplementary materials with proofs , simulated models and extended tables are given in the appendix .denote by a directional random variable with density .the support of such variable is the -dimensional sphere , namely , endowed with the lebesgue measure in , that will be denoted by .then , a directional density is a nonnegative function that satisfies .also , when there is no possible confusion , the area of will be denoted by where represents the gamma function defined as , .+ among the directional distributions , the von mises - fisher distribution ( see ) is perhaps the most widely used .the von mises density , denoted by , is given by where is the directional mean , the concentration parameter around the mean , stands for the transpose operator and is the modified bessel function of order , this distribution is the main reference for directional models and , in that sense , plays the role of the normal distribution for directional data ( is also a multivariate normal conditioned on ; see ) . a particular case of this density sets , which corresponds to the uniform density that assigns probability to any direction in .+ given a random sample from the directional random variable , the proposal of for the directional kernel density estimator at a point is where is a directional kernel ( a rapidly decaying function with nonnegative values and defined in ) , is the bandwidth parameter and is a normalizing constant .this constant is needed in order to ensure that the estimator is indeed a density and satisfies that as usual in kernel smoothing , the selection of the bandwidth is a crucial step that affects notably the final estimation : large values of result in a uniform density in the sphere , whereas small values of provide an undersmoothed estimator with high concentrations around the sample observations . on the other hand ,the choice of the kernel is not seen as important for practical purposes and the most common choice is the so called von mises kernel . 
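Since the estimator with the von Mises kernel is a mixture of von Mises-Fisher densities with concentration 1/h^2 centred at the observations, it can be evaluated directly. The sketch below is our own code, not the authors' implementation; it works for any dimension q, uses the exponentially scaled Bessel function to avoid overflow at small bandwidths, and builds a circular (q = 1) sample from angles for the final illustration.

```python
import numpy as np
from scipy.special import ive

def directional_kde(x, data, h):
    """Von Mises kernel density estimate on the unit q-sphere (a sketch).

    The estimate is the mixture (1/n) sum_i C_q(1/h^2) exp(x'X_i / h^2) of
    von Mises-Fisher densities centred at the observations, evaluated with the
    exponentially scaled Bessel function ive for numerical stability.
    """
    x, data = np.atleast_2d(x), np.atleast_2d(data)
    q = data.shape[1] - 1
    kappa = 1.0 / h ** 2
    # C_q(kappa) * exp(kappa), written with ive(nu, kappa) = iv(nu, kappa) * exp(-kappa)
    scaled_const = kappa ** ((q - 1) / 2) / (
        (2 * np.pi) ** ((q + 1) / 2) * ive((q - 1) / 2, kappa))
    dots = x @ data.T                                  # inner products x' X_i, shape (m, n)
    return scaled_const * np.mean(np.exp(kappa * (dots - 1.0)), axis=1)

# circular example (q = 1): points on the unit circle built from angles
rng = np.random.default_rng(2)
theta = rng.vonmises(mu=0.0, kappa=4.0, size=500)
data = np.column_stack([np.cos(theta), np.sin(theta)])
ang = np.linspace(-np.pi, np.pi, 5)
grid = np.column_stack([np.cos(ang), np.sin(ang)])
print(directional_kde(grid, data, h=0.3))
```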
its name is due to the fact that the kernel estimator can be viewed as a mixture of von mises - fisher densities as follows : where , for each von mises component , the mean value is the -th observation and the common concentration parameter is given by .+ the classical error measurement in kernel density estimation is the distance between the estimator and the target density , the so called integrated squared error ( ise ) . as this is a random quantity depending on the sample , its expected value , the mean integrated squared error ( mise ) ,is usually considered : {\right]}}={\mathbb{e}{\left[}{\int_{\omega_{q } } { \left(\hat f_h({\mathbf{x}})-f({\mathbf{x}})\right)}^2\,\omega_{q}(d { \mathbf{x}})}{\right]}},\end{aligned}\ ] ] which depends on the bandwidth , the kernel , the sample size and the target density . whereas the two last elements are fixed when estimating a density from a random sample , the bandwidth has to be chosen ( also the kernel , although this does not present a big impact in the performance of the estimator ) .then , a possibility is to search for the bandwidth that minimizes the mise : to derive an easier form for the mise that allows to obtain , the following conditions on the elements of the estimator ( [ kdebwd : kernel_directional ] ) are required : 1 .extend from to by for all , where denotes the euclidean norm .assume that the gradient vector and the hessian matrix exist , are continuous and square integrable .[ kdebwd : cond : d1 ] 2 .assume that is a bounded and integrable function such that , , for .[kdebwd : cond : d2 ] 3 .assume that is a positive sequence such that and as .[kdebwd : cond : d3 ] the following result , available from , provides the mise expansion for the estimator ( [ kdebwd : kernel_directional ] ) .it is worth mentioning that , under similar conditions , and also derived analogous expressions .[ kdebwd : dir : prop:3 ] under conditions [ kdebwd : cond : d1][kdebwd : cond : d3 ] , the mise for the directional kernel density estimator ( [ kdebwd : kernel_directional ] ) is given by where , , and this results leads to the decomposition , where amise stands for the asymptotic mise .it is possible to derive an optimal bandwidth for the amise in this sense , , that will be close to when is small enough .[ kdebwd : dir : cor:1 ] the amise optimal bandwidth for the directional kernel density estimator ( [ kdebwd : kernel_directional ] ) is given by }^{\frac{1}{4+q}},\label{kdebwd : dir : cor:1:1}\end{aligned}\ ] ] where .unfortunately , expression ( [ kdebwd : dir : cor:1:1 ] ) can not be used in practise since it depends on the curvature term of the unknown density .the first proposals for data - driven bandwidth selection with directional data are from , who provide cross - validatory selectors .specifically , least squares cross - validation ( lscv ) and likelihood cross - validation ( lcv ) selectors are introduced , arising as the minimizers of the cross - validated estimates of the squared error loss and the kullback - leibler loss , respectively .the selectors have the following expressions : where represents the kernel estimator computed without the -th observation .see remark [ kdebwd : rem:3 ] for an efficient computation of .+ recently , proposed a plug - in selector for the case of circular data ( ) for the estimator with the von mises kernel .the selector of uses from the beginning the assumption that the reference density is a von mises to construct the amise .this contrasts with the classic rule of thumb selector of , which 
supposes at the end ( _ i.e. _ , after deriving the amise expression ) that the reference density is a normal .the bandwidth parameter is chosen by first obtaining an estimation of the concentration parameter in the reference density ( for example , by maximum likelihood ) and using the formula }^\frac{1}{5}.\end{aligned}\ ] ] note that the parametrization of has been adapted to the context of the estimator ( [ kdebwd : kernel_directional ] ) by denoting by the inverse of the squared concentration parameter employed in his paper . + more recently , proposed a selector that improves the performance of allowing for more flexibility in the reference density , considering a mixture of von mises .this selector is also devoted to the circular case and is mainly based on two elements .first , the amise expansion that derived for the circular kernel density estimator by the use of fourier expansions of the circular kernels .this expression has the following form when the kernel is a circular von mises ( the estimator is equivalent to consider , and as the inverse of the squared concentration parameter in ( [ kdebwd : kernel_directional ] ) ) : }^2\int_0^{2\pi } f''(\theta)^2\,d\theta+\frac{{\mathcal{i}}_0\big(2h^{-1/2}\big)}{2n\pi{\mathcal{i}}_0{\left(h^{-1/2}\right)}^2}. \label{kdebwd : dimarzio}\end{aligned}\ ] ] the second element is the expectation - maximization ( em ) algorithm of for fitting mixtures of directional von mises .the selector , that will be denoted by , proceeds as follows :1 . use the em algorithm to fit mixtures from a determined range of components .choose the fitted mixture with minimum aic .3 . compute the curvature term in ( [ kdebwd : dimarzio ] ) using the fitted mixture and seek for the that minimizes this expression , that will be .using the properties of the von mises density it is possible to derive a directional analogue to the rule of thumb of , which is the optimal amise bandwidth for normal reference density and normal kernel .the rule is resumed in the following result .[ kdebwd : prop : rot ] the curvature term for a von mises density is }.\end{aligned}\ ] ] if is a suitable estimator for , then the rule of thumb selector for the kernel estimator ( [ kdebwd : kernel_directional ] ) with a directional kernel is }^{\frac{1}{4+q}}.\end{aligned}\ ] ] if is the von mises kernel , then : }n}\right]}^\frac{1}{5 } , & q=1,\\ \displaystyle{\left[\frac{8\sinh^2(\hat\kappa)}{\hat\kappa{\left[(1 + 4\hat\kappa^2)\sinh(2\hat\kappa)-2\hat\kappa\cosh(2\hat\kappa)\right]}n}\right]}^\frac{1}{6 } , & q=2,\\ \displaystyle{\left[\frac{4\pi^\frac{1}{2}{\mathcal{i}}_{\frac{q-1}{2}}(\hat\kappa)^2}{\hat\kappa^{\frac{q+1}{2}}{\left[2q{\mathcal{i}}_{\frac{q+1}{2}}(2\hat\kappa)+(2+q)\hat\kappa{\mathcal{i}}_{\frac{q+3}{2}}(2\hat\kappa)\right]}n}\right]}^\frac{1}{4+q } , & q\geq3 .\end{array}{\right.}\label{kdebwd : rot}\end{aligned}\ ] ] the parameter can be estimated by maximum likelihood . in view of the expression for in ( [ kdebwd : rot ] ) ,it is interesting to compare it with when .as it can be seen , both selectors coincide except for one difference : the term in the sum in the denominator of .this `` extra term '' can be explained by examining the way that both selectors are derived . 
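As a small illustration of how the rule of thumb is applied, the sketch below implements the spherical case q = 2 of the display above, which is the case that survives the extraction legibly: the concentration of a reference von Mises-Fisher density is estimated by maximum likelihood, by solving the usual resultant-length equation numerically, and plugged into the closed-form bandwidth. The sample generator at the end is only a quick way of producing concentrated directions on the sphere.

```python
import numpy as np
from scipy.special import ive
from scipy.optimize import brentq

def kappa_ml(data):
    """Maximum likelihood concentration of a fitted von Mises-Fisher density,
    solving the resultant-length equation A_p(kappa) = ||sample mean|| numerically."""
    p = data.shape[1]                                   # ambient dimension q + 1
    rbar = np.linalg.norm(data.mean(axis=0))
    a_p = lambda k: ive(p / 2, k) / ive(p / 2 - 1, k)   # Bessel ratio, the scaling cancels
    return brentq(lambda k: a_p(k) - rbar, 1e-6, 1e3)

def h_rot_sphere(data):
    """Rule-of-thumb bandwidth for the spherical case q = 2 with the von Mises kernel,
    transcribed from the display in the text; the reference density is a fitted vMF."""
    n, k = len(data), kappa_ml(data)
    num = 8.0 * np.sinh(k) ** 2
    den = k * ((1.0 + 4.0 * k ** 2) * np.sinh(2.0 * k) - 2.0 * k * np.cosh(2.0 * k)) * n
    return (num / den) ** (1.0 / 6.0)

# quick concentrated sample on the sphere (a projected normal around the north pole)
rng = np.random.default_rng(3)
z = rng.normal(size=(400, 3)) + np.array([0.0, 0.0, 4.0])
data3 = z / np.linalg.norm(z, axis=1, keepdims=True)
print("h_ROT =", h_rot_sphere(data3))
```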
whereas the selector derives the bandwidth supposing that the reference density is a von mises when the amise is already derived in a general way , the selector uses the von mises assumption to compute it .therefore , it is expected that the selector will be more robust against deviations from the von mises density .+ figure [ kdebwd : fig : vs ] collects two graphs exposing these comments , that are also corroborated in section [ kdebwd : sec : comparative ] .the left plot shows the mise for and for the density , where ] ) .when ] , which indexes the reference density .right plot : logarithm of , , and their corresponding mise for different values of , with .[kdebwd : fig : vs],title="fig : " ] . left plot : logarithm of the curves of , and for sample size .the curves are computed by monte carlo samples and is obtained exactly .the abscissae axis represents the variation of the parameter } ] is computed using }}&=\sum_{j=1}^m p_j \frac{c_q(\kappa_j)c_q{\left(1/h^2\right)}}{c_q\big(||{\mathbf{x}}/h^2+\kappa_j{\boldsymbol\mu}_j||\big)}.\end{aligned}\ ] ] [ kdebwd : rem:3 ] by the use of similar techniques , when the kernel is von mises , the lscv selector admits an easier expression for the cv loss that avoids the calculation of the integral of : }-\frac{c_q(1/h^2)^2}{nc_q(2/h^2)}.\end{aligned}\ ] ] based on the previous result , the philosophy of the emi selector is the following : using a suitable pilot parametric estimation of the unknown density ( given by algorithm [ kdebwd : algo : nm ] ) , build the exact mise and obtain the bandwidth that minimizes it .this is summarized in the following procedure .[ kdebwd : algo : emi ] consider the von mises kernel and let be a random sample of a directional variable . 1 .compute a suitable estimation using algorithm [ kdebwd : algo : nm ] .2 . obtain .the em algorithm of , implemented in the ` r ` package ` movmf ` ( see ) , provides a complete solution to the problem of estimation of the parameters in a mixture of directional von mises of dimension .however , the issue of selecting the number of components of the mixture in an automatic and optimal way is still an open problem .+ the propose considered in this work is an heuristic approach based on the bayesian information criterion ( bic ) , defined as , where is the log - likelihood of the model and is the number of parameters .the procedure looks for the fitted mixture with a number of components that minimizes the bic .this problem can be summarized as the global minimization of a function ( bic ) defined on the naturals ( number of components ) .+ the heuristic procedure starts by fitting mixtures from to , computing their bic and providing , the number of components with minimum bic .then , in order to ensure that is a global minimum and not a local one , neighbours next to are explored ( _ i.e. _ fit mixture , compute bic and update ) , if they were not previously explored .this procedure continues until has at least neighbours at each side with larger bics .a reasonable compromise for and , checked by simulations , is to set and . in order to avoid spurious solutions , fitted mixtures with any removed .the procedure is detailed as follows .[ kdebwd : algo : nm ] let be a random sample of a directional variable with density . 1. set and as the user supplies , usually .[ kdebwd : algo : nm:1 ] 2 . for varying from to , [ kdebwd : algo : nm:2 ] 1 .estimate the -mixture with the em algorithm of and 2 .compute the bic of the fitted mixture .3 . 
set as the number of components of the mixture with lower bic .[ kdebwd : algo : nm:3 ] 4 .if , set and turn back to step [ kdebwd : algo : nm:2 ] .otherwise , end with the final estimation .[ kdebwd : algo : nm:4 ] other informative criteria , such as the akaike information criterion ( aic ) and its corrected version , aicc , were checked in the simulation study together with bic .the bic turned out to be the best choice to use with the ami and emi selectors , as it yielded the minimum errors .along this section , the three new bandwidth selectors will be compared with the already proposed selectors described in subsection [ kdebwd : subsec : bwsels ] . a collection of directional models , with their corresponding simulation schemes ,are considered .subsection [ kdebwd : subsec : dirmods ] is devoted to comment the directional models used in the simulation study ( all of them are defined for any arbitrary dimension , not just for the circular or spherical case ) .these models are also described in the appendix . + for each of the different combinations of dimension , sample size and model , the mise of each selector was estimated empirically by monte carlo samples , with the same seed for the different selectors .this is used in the computation of , where is obtained as a numerical minimization of the estimated mise .the calculus of the ise was done by : simpson quadrature rule with discretization points for ; rule with nodes for and monte carlo integration with sampling points for ( same seed for all the integrations ) . finally , the kernel considered in the study is the von mises .the first models considered are the uniform density in and the von mises density given in ( [ kdebwd : dir : cq ] ) .the analogous of the von mises for axial data ( _ i.e. _ , directional data where ) is the watson distribution : where .this density has two antipodal modes : and , both of them with concentration parameter .a further extension of this density is the called small circle distribution : where , and . for the case , this density has a kind of modal strip along the -sphere . +a common feature of all these densities is that they are rotationally symmetric , that is , their contourlines are -spheres orthogonal to a particular direction .this characteristic can be exploited by means of the so called tangent - normal decomposition ( see ) , that leads to the change of variables where is a fixed vector , ( measures the distance of from ) , and is the semi - orthonormal matrix ( and , with the -identity matrix ) resulting from the completion of to the orthonormal basis .the family of rotationally symmetric densities can be parametrized as where is a function depending on a vector parameter and such that is a density in , for all .using this property , it is easy to simulate from ( [ kdebwd : rotsym ] ) . 1 .sample from the density .[kdebwd : algo : rotsym:1 ] 2 .sample from a uniform in .[kdebwd : algo : rotsym:2 ] 3 . 
is a sample from .[kdebwd : algo : rotsym:3 ] step [ kdebwd : algo : rotsym:1 ] can always be performed using the inversion method .this approach can be computationally expensive : it involves solving the root of the distribution function , which is computed from an integral evaluated numerically if no closed expression is available .a reasonable solution to this ( for a fixed choice of and ) is to evaluate once the quantile function in a dense grid ( for example , points equispaced in ) , save the grid and use it to interpolate using cubic splines the new evaluations , which is computationally fast . extending these ideas for rotationally symmetric models ,two new directional densities are proposed .the first one is the directional cauchy density , defined as an analogy with the usual cauchy distribution as where is the mode direction and the concentration parameter around it ( gives the uniform density ) .this density shares also some of the characteristics of the usual cauchy distribution : high concentration around a peaked mode and a power decay of the density .the other proposed density is the skew normal directional density , where is the skew normal density of with location , scale and shape that is truncated to the interval .the density is inspired by the wrapped skew normal distribution of , although it is based on the rotationally symmetry rather than in wrapping techniques .a particular form of this density is an homogeneous `` cap '' in a neighbourhood of that decreases very fast outside of it .+ non rotationally symmetric densities can be created by mixtures of rotationally symmetric .however , it is interesting to introduce a purely non rotationally symmetric density : the projected normal distribution of . denoted by , the corresponding densityis where , , and .sampling from this distribution is extremely easy : just sample and then project to by .+ the whole collection of models , with densities in total , are detailed in table [ kdebwd : tab : models ] in appendix [ kdebwd : ap : models ] .figures [ kdebwd : fig : circ ] and [ kdebwd : fig : sph ] show the plots of these densities for the circular and spherical cases . for the circular case , the comparative study has been done for the models described in figure [ kdebwd : fig : circ ] ( see table [ kdebwd : tab : models ] to see their densities ) , for the circular selectors , , , , , and and for the sample sizes , , and . due to space limitations ,only the results for sample size are shown in table [ kdebwd : tab : cir ] , and the rest of them are relegated to appendix [ kdebwd : ap : tables ] .+ in addition , to help summarizing the results a ranking similar to ranking b of will be constructed .the ranking will be computed according to the following criteria : for each model , the bandwidth selectors considered are sorted from the best performance ( lowest error ) to the worst performance ( largest error ) .the best bandwidth receives points , the second and so on .these points , denoted by , are standardized by and multiplied by the relative performance of each selector compared with the best one .in other words , the points of the selector , if is the best one , are .the final score for each selector is the sum of the ranks obtained in all the twenty models ( thus , a selector which is the best in all models will have points ) . 
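The simulation recipe above is easy to code. The following sketch, an illustration under our own choices rather than the authors' implementation, draws the cosine t by grid-based inversion of its distribution function (the interpolation shortcut suggested in the text), draws xi uniformly on the (q-1)-sphere, and assembles x = t mu + sqrt(1 - t^2) Gamma xi with a semi-orthonormal Gamma spanning the orthogonal complement of mu. The example takes g(t) proportional to exp(kappa t), which on the sphere S^2 corresponds to a von Mises-Fisher density; in other dimensions the change-of-variables weight has to be folded into g.

```python
import numpy as np

def sample_tangent_normal(g, mu, size, rng, n_grid=2048):
    """Sample a rotationally symmetric directional density about mu via the
    tangent-normal decomposition x = t * mu + sqrt(1 - t^2) * Gamma @ xi.

    g : unnormalised density of the cosine t = x'mu on [-1, 1]; the caller must
        include the change-of-variables weight (1 - t^2)^(q/2 - 1) when q != 2.
    The cosine is drawn by grid-based inversion of its distribution function.
    """
    mu = np.asarray(mu, float)
    q = mu.size - 1
    tg = np.linspace(-1.0, 1.0, n_grid)
    cdf = np.cumsum(g(tg))
    cdf = cdf / cdf[-1]
    t = np.interp(rng.uniform(size=size), cdf, tg)
    # semi-orthonormal Gamma whose columns span the orthogonal complement of mu
    gamma = np.linalg.svd(np.eye(mu.size) - np.outer(mu, mu))[0][:, :q]
    xi = rng.normal(size=(size, q))
    xi /= np.linalg.norm(xi, axis=1, keepdims=True)     # uniform on the (q-1)-sphere
    return t[:, None] * mu + np.sqrt(1.0 - t ** 2)[:, None] * (xi @ gamma.T)

# on S^2 the weight is constant, so g(t) = exp(kappa * t) gives a von Mises-Fisher sample
rng = np.random.default_rng(4)
x = sample_tangent_normal(lambda t: np.exp(5.0 * t), np.array([0.0, 0.0, 1.0]), 1000, rng)
print("mean cosine with the modal direction:", x[:, 2].mean())
```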
with this ranking , it is easy to group the results in a single and easy to read table .+ in view of the results , the following conclusions can be extracted .firstly , performs well in certain unimodal models such as m3 ( von mises ) and m6 ( skew normal directional ) , but its performance is very poor with multimodal models like m15 ( watson ) . in its particular comparison with , it can be observed that both selectors share the same order of error , but being better in all the situations except for one : the uniform model ( m1 ) .this is due to the `` extra term '' commented in section [ kdebwd : sec : ruleofthumb ] : its absence in the denominator makes that faster than when the concentration parameter and , what is a disadvantage for , turns out in an advantage for the uniform case . with respect to and , although their performance becomes more similar when the sample size increases , something expected , seems to be on average a step ahead from , specially for low sample sizes . among the cross - validated selectors , performs better than , a fact that was previously noted by simulation studies carried out by and .finally , presents the most competitive behaviour among the previous proposals in the literature when the sample size is reasonably large ( see table [ kdebwd : tab : rankcirsph ] ) .+ the comparison between the circular selectors is summarized in the scores of table [ kdebwd : tab : rankcirsph ] .for all the sample sizes considered , is the most competitive selector , followed by for all the sample sizes except , where is the second .the effect of the sample effect is also interesting to comment .for , and perform surprisingly well , in contrast with , which is the second worst selector for this case .when the sample size increases , and have a decreasing performance and stretches differences with , showing a similar behaviour .this was something expected as both selectors are based on error criteria that are asymptotically equivalent .the cross - validated selectors show a stable performance for sample sizes larger than .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] + is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] + is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] + is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] + is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] is drawn .[ kdebwd : fig : circ],title="fig:",scaledwidth=22.5% ] .comparative study for the circular case , with sample size .columns of the selector represent the , with bold type for the minimum of the errors .the standard 
deviation of the errors is given between parentheses. [kdebwd:tab:cir] | new bandwidth selectors for kernel density estimation with directional data are presented in this work. these selectors are based on asymptotic and exact error expressions for the kernel density estimator combined with mixtures of von mises distributions. the performance of the proposed selectors is investigated in a simulation study and compared with other existing rules for a large variety of directional scenarios, sample sizes and dimensions. the selector based on the exact error expression turns out to have the best behaviour of the studied selectors in almost all situations. this selector is illustrated with real data for the circular and spherical cases. *keywords:* bandwidth selection; directional data; mixtures; kernel density estimator; von mises. |
the subject of this work is mathematical modeling of state - of - charge in simple battery cells , such as a non - rechargeable 3 volts lithium coin battery .the goal is to understand the response of the battery , and ultimately to predict battery lifetime , as energy is consumed under a given discharge usage pattern .the main incentive for our work is the battery usage in wireless sensor networks and similar internet - of - things systems .these networks consist of inter - connected low - cost nodes , equipped with basal sensors , computer , radio and a battery , expected to run for many years under very low intensity loads and short dutycycles . within the tight cost restrictions typical of such systems , methods or techniques providing `` battery - charge indicators ''do not seem to be within reach currently . to make progress in this directionit is essential to address the problem of predicting battery life .our paper intends to cover some of the required modeling groundwork and discuss analysis of linear , nonlinear and stochastic aspects of modeling battery cells .mathematical modeling of batteries has developed over several decades along with the growth of new battery technologies and materials . yet, there has been relatively little in - depth study of widely available , inexpensive coin cell batteries and on special load characteristics including short load periods .primarily , lithium and lithium - ion battery models have been developed within electrochemical engineering .two recent survey and review works represent the state - of - art of modeling based on the fundamental principles of electrochemistry , and emphasize the wide range of scales involved .the temporal and spatial scales of the physics and chemistry of the battery range from macroscopic level all the way down to the atomistic level . the tutorial review by landstorfer and jacob provides a framework of non - equilibrium thermodynamics as the foundation for studying the electrode , electrolyte and interface reactions in great detail .the review work by ramadesigan _ , summarizes the literature on such models and , in addition , brings a systems engineering approach applied to li - ion batteries .this type of model can be said to begin with the pseudo - two - dimensional ( p2d ) model of doyle _ , which leads to a coupled system of non - linear pdes . more generally , coupled systems of equations with complex boundary conditions are derived , which connect charge concentrations with transport and kinetics of reactant species .several approaches have been proposed to simplify the resulting sets of equations and allow for numerical computations , see e.g. . in this workwe use the different mathematical approach to battery modeling developed in communication engineering for computer science applications , see e.g. jongerden and haverkort . 
where chemical engineering modeling typically begins with a detailed scheme of reactions and mechanisms in the various phases and interfaces of the cell , these models view the battery as a generic device subject to some fundamental principles .the main focus of the modeling changes and is now rather the response of the battery to external load .a typical purpose is load scheduling to optimize battery utilization .important aspects of battery behavior from this point of view are the rate - capacity effect and charge recovery .quoting : _ the former refers to the fact that a lower discharge rate is more efficient than a higher ; more charge can be extracted from the battery before reaching a given cut - off value .the latter refers to the fact that an intermittent discharge is more efficient than a continuous one . because of these effects , different battery loads that use the same total charge do not result in the same device lifetime . _the most basic of these methods use linear odes and gradually build complexity by using pdes and other means .a further direction is stochastic modeling using markov chain dynamics .the aim of our work is to investigate to what extent these simplified battery models are able to capture important aspects of battery behavior , and to get new insights by developing the mathematical models further .of special interest is the essential response of the battery to deterministic or random on - off discharge patterns typical for batteries in wireless sensor networks , and the ability of a cell to recover charge during operation .charge recovery is believed to depend on a number of internal mechanisms , such as convection , diffusion , migration , and charge - transfer . in the modeling work we put special emphasis on separating the roles of diffusion , which is the motion of electroactive species in the presence of a concentration gradient , and migration , the motion of charged species in an electric field .our starting point is the kinetic battery model of manwell and mcgowen , which describes the joint evolution of available charge and bound charge over time .charge recovery in this framework consists in the continuous transition of bound charge to available charge .as observed in and further investigated in , more general spatial versions of these models are related to the diffusion model of rakhmatov and vrudhula , and leads to a class of second order diffusion equations with robin type boundary conditions .quoting , the linear dynamics of the kinetic battery model is _ useful in getting an intuitive idea of how and why the recovery occurs but it needs a number of additions to be useful for the types of batteries used in mobile computing_. using experimental data from ni - mh batteries , proposed a modified , non - linear , factor in the flow charge and discussed related stochastic versions of the model . in an effort to compare more systematically linear and nonlinear dynamics in kinetic and stochastic battery models , we study discrete time markov chains with nonlinear jump probabilities derived from a simplified charge transport scenario . by a scaling approximationwe obtain in the limit a deterministic nonlinear ode , with explicit solutions which , in principle , can be compared to those of the linear approach .the unifying mathematical aspect in our analysis is the representation of remaining and available capacities as time - autonomous systems . 
in section 2 ,following preliminaries on capacities , internal charge recovery and discharge profiles , we consider the kinetic battery model and discuss a variation .then we set up an extended version of the spatial kinetic battery model with a finite number of serial compartments , derive the spatially continuous limiting pde , and give a probabilistic representation of the solution of the pde in terms of brownian motion with drift reflected at the boundaries on both sides of a finite interval .the solution represents the capacity storage of a battery and these tools allow us to study the balance of available and remaining stored capacity .section 3 studies a nonlinear , stochastic markov chain model and its deterministic ode approximation . with proper choice of nonlinear dynamics for charge recovery due to transfer , diffusion and migration effects, we then propose a somewhat wider class of nonlinear ode of potential use for battery modeling . as a consequence ,it is possible to study performance measures such as battery life , delivered capacity and gain , and to compare and optimize the performance of batteries .we consider a primary ( non - rechargeable ) battery cell consisting of two electrodes , anode and cathode , linked by an electrolyte .the cell contains a certain amount of chemically reactive material which is converted into electrical energy by an oxidation reaction at the anode .primary lithium batteries have a lithium anode and may have soluble or solid electrolytes and cathodes .the mass of material involved in the battery reaction yields a higher concentration of electrons at the anode , and hence by faraday s first law the transfer of a proportional quantity of electrical charge .this determines a terminal voltage between the pair of electrodes . by closinga wired circuit between the terminals a current of electrons will start moving through the wire from anode to cathode where they react with a positively charged reactant , manifesting the ability of the battery to drive electric current .the intensity of the current depends on the total resistance along the wire . inside the batterythe movement of charge - carriers forms a corresponding ionic current , which is controlled by a variety of mechanisms , among them migration of ions , diffusion of reactant species , and charge - transfer reaction .migration is generated by an electric potential gradient ( electric field ) and convective diffusion by the concentration gradient .conductivity arises from the combination of migration and diffusion .charge - transfer reactions take place when migrating ions are transferred from the electrolyte bulk through the anode surface .it is sometimes helpful to keep track of proper units .the battery has a given voltage in volts [ v ] and a theoretical capacity in ampere hours [ ah ] , representing the entire storage of chemically reactive material in the cell .we write [ ah ] for the nominal capacity of a fully loaded cell , where .this is the amount of electric charge which is delivered if the cell is put under constant , high load and drained until a predefined cut - off voltage is reached .measuring time in hours [ h ] we write for the consumed capacity [ ah ] and for the remaining capacity [ ah ] , at time . 
here , is an increasing function , typically continuous with the slope representing intensity of the current .it is also plausible to let the discharge function have jumps to be interpreted as spikes of charge units being released point - wise to the device driven by the battery . asan additional level of generality it is straightforward to consider defined on a probability space and subject to a suitable distributional law of a random process .the instantaneous discharge current [ a ] at is the derivative and the average discharge current [ a ] is the quantity , assuming this limit exists .we are interested in the behavior of the battery cell when exposed to the accumulated discharge process , in particular regarding at time } , \quad u(0)=n\\[1 mm ] \widetilde u(t)&=u(t)/n= \mbox{state of charge at time },\quad \widetilde u(0)=1\\[1 mm ] e(t)&=\mbox{voltage [ v ] at time },\quad e(0)=e_0 .\end{aligned}\ ] ] while there is no obvious method of observing state - of - charge empirically , voltage is accessible to measurements at least in principle . to describe typical voltage, we imagine that a fully charged battery at time is connected to a closed circuit at constant discharge current [ a ] .the result is an instant voltage drop from to a new level at approximate voltage , where is an internal resistance [ ohm ] of the cell .as long as charge is consumed , the state - of - charge will then begin to decline over time accompanied by a subsequent change of voltage .if after a period of discharge the current is disconnected and the battery temporarily put to rest , then the voltage increases .first , instantly , by the amount and then over the course of the off - period at some rate due to recovery effects inside the cell. the resulting voltage versus time curve extended over a longer time span would typically stay nearly constant or exhibit slow decline over most of the active life of the battery followed by a steeper decent until a cut - off level is reached , beyond which the cell is considered to be non - operational . to relate state of charge and voltage we recall that the actual load a time is given by and apply the equilibrium nernst equation , ch . 1, to obtain where is the ideal gas constant , is absolute temperature , is the valency of the battery reactant ( for lithium ) , is faraday s constant , and dimensions are such that is measured in volts . the internal resistance , however , may have a more complex origin arising from a series of resistances in the electrodes and electrolyte .electrical circuit models use equivalent electrical circuits to capture such current - voltage characteristics . in this direction , consider a hybrid model of kinetic battery and electrical circuit models to fascilitate the derivation of voltage in terms of exponential and polynomial functions of state of charge . 
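The voltage behaviour described above can be made concrete with a small numerical sketch. The code below is an illustration rather than a calibrated model: the open-circuit voltage is taken to follow a Nernst-type logarithm of the state of charge (the exact argument of the logarithm in the text is not legible, so the common form ln(u/(1-u)) is assumed), and the ohmic term I R_int reproduces the instant drop when a load is connected and the instant rise when it is removed. The values of e0 and r_int are placeholders.

```python
import numpy as np

R_GAS, TEMP, FARADAY, Z_VAL = 8.314, 298.0, 96485.0, 1   # J/(mol K), K, C/mol, valency (lithium)

def terminal_voltage(soc, current, e0=3.0, r_int=10.0):
    """Terminal voltage sketch: a Nernst-type open-circuit term minus the ohmic drop.

    The argument ln(soc / (1 - soc)) stands in for the relation in the text, whose exact
    form is not legible here; e0 [V] and r_int [ohm] are illustrative placeholder values.
    """
    soc = np.clip(soc, 1e-6, 1.0 - 1e-6)
    nernst = (R_GAS * TEMP) / (Z_VAL * FARADAY) * np.log(soc / (1.0 - soc))
    return e0 + nernst - current * r_int

# instant drop when a 10 mA load is connected at 80 % state of charge, and the rest value
print(terminal_voltage(0.8, 0.010), terminal_voltage(0.8, 0.0))
```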
building on this approach , fits empirical battery data in order to evaluate battery models for wireless sensor network applications .the general principle for charge recovery is that of balancing the discharge rate by a positive drift of the available capacity due to the release and transport of stored charges .such effects should exist as long as the theoretical capacity of the cell has not yet been fully consumed , that is .the first recovery mechanism to take into account is ( solid - state ) diffusion of charge carriers caused by the build - up of a concentration gradient in the electrolyte during discharge .the drift of the process is convective flow and a diffusion coefficient controls random variations around the main direction of transport .diffusion transport of charge carriers might be a slow process which persists even if the load is removed and the battery put to rest , and runs until charge concentrations have reached local equilibrium .another mechanism for gaining capacity due to recovery is migration of charge - carriers caused by the electric field , as an action of a potential gradient .the strength of this effect should increase with the gap between maximal and actual capacity .it appears reasonable to assume that the effect of migration is ongoing whether the battery is under load or at rest .the final aspect of recovery we wish to include in the modeling scenario is charge transfer , meaning the transfer of charges from electrolyte through an interface to the terminal electrode .a simplified approach for this effect is that of a friction mechanism , such that a fraction of recovered charges are actually effectuated proportional to the applied load current , either instantaneous current or average current over long time .the battery models we study are introduced in relation to an arbitrary accumulated discharge function . to engage in a more detailed analysis of battery performancewe consider three stylized examples of , which represent typical discharge patterns for the intended usage of the battery ._ constant current . _the first such pattern is that of draining the battery at a constant current which remains the same over the entire battery life until the cell is emptied . clearly , and ._ deterministic on - off pattern . _the second example is relevant for the case when we know in advance both the amount of work the battery is supposed to power and the scheduled timing of loads . for such caseswe consider a deterministic pulse - train which consists of a periodic sequence of cycles of equal length .each cycle begins with an active on - period during which a pulse of constant load is transmitted , followed by an off - period of rest and no load .specifically we assume that a current [ a ] is drawn continuously during each on - period of length followed by a dormant off - period of length .hence the cycle duration is and the dutycycle is given by the fraction .we introduce so that if belongs to an on - period and for in an off - period .then the consumed capacity is and is a piecewise continuous function with non - decreasing rate .the average discharge rate equals ._ random discharge pattern . _our third stylized example of discharge mechanisms represents the case where no information except average load is available in advance of battery operation .in this situation we consider completely random discharge with the load to be drawn from the battery per time unit scattered independently and uniformly random in the sense of the poisson process . 
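The three stylized discharge patterns are straightforward to generate numerically. The sketch below, our own code, returns the accumulated discharge Lambda(t) for the constant-current and pulse-train cases, and the jump times of a completely random discharge in which charge units of an assumed size delta are released at the points of a Poisson process with rate equal to the average current divided by delta, so that the average discharge current matches the other two patterns.

```python
import numpy as np

def discharge_constant(t, i_avg):
    """Accumulated discharge Lambda(t) under a constant current i_avg."""
    return i_avg * t

def discharge_on_off(t, i_on, t_on, t_off):
    """Accumulated discharge under the periodic pulse train: current i_on during
    on-periods of length t_on, zero during off-periods of length t_off."""
    d = t_on + t_off
    full, rest = np.divmod(t, d)
    return i_on * (full * t_on + np.minimum(rest, t_on))

def discharge_poisson(t_max, i_avg, delta, rng):
    """Jump times of a completely random discharge: charge units of size delta
    released at the points of a Poisson process with rate i_avg / delta (a sketch)."""
    n_jumps = rng.poisson(i_avg / delta * t_max)
    return np.sort(rng.uniform(0.0, t_max, n_jumps))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 6)
print(discharge_constant(t, 0.02))
print(discharge_on_off(t, i_on=0.08, t_on=0.25, t_off=0.75))   # dutycycle 0.25, average 0.02
print(len(discharge_poisson(10.0, 0.02, 0.001, rng)), "jumps")
```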
herewe take , where denotes a standard poisson process on the half line with constant intensity .this amounts to saying that the battery is drained from energy in small jumps of charge which occur interspaced by independent and exponentially distributed waiting times with expected value .again the average discharge current is .the kinetic battery model , originally introduced for lead acid batteries in , takes the view that remaining capacity of the battery is split in two wells , or compartments , one representing available charge and the other bound charge .discharge is the consumption of available charge and charge recovery is the flow of matter from the bound well to the available one .the available capacity in this model is precisely the amount of charge in the available well as function of time .hence we call , the bound charge and decompose remaining capacity as , , . with the use of the fraction , ,the two wells are assigned a measure of height given by and .the principle of the kinetic battery model is that bound charge becomes available at a rate which is proportional to the height difference .once available , no charges return to the bound state .thus , dy(t ) & = -k\big(\frac{y(t)}{1-c}-\frac{u(t)}{c}\big)\,dt,\quad & y(0)=t - n , \end{array } \right .\label{kibameqnsyst}\ ] ] where is a reaction parameter . by assumption , , .it is convenient therefore to consider the pair . with as an alternative parameter , in ( [ kibameqn ] ) , charge recovery as represented by the factor may be seen as the combination of a migration effect due to the term together with ( negative ) drift . to clarify these connections , ref. studied a reweighted version of the model . in this paperwe wish to develop these ideas further and hence consider the closely related reweighted model where the parameter controls charge recovery due to migration .the case reflects migration of additional charge from the bound well adding to the internal charge recovery .the case represents loss of recovery due to migration .the linear system ( [ kibameqnplus ] ) is readily solved as for later reference we note that the relevant version of ( [ kibameqnsyst ] ) for this extended case is dy(t ) & = -k_c(cy(t)-(1-c)u(t)+p(n - u(t)))\,dt . \end{array } \right.\ ] ] we emphasize that the discharge profile is arbitrary for this version of the kinetic battery model .for example , with the random discharge pattern discussed in section [ sec : dischargeprofile ] the integral in ( [ kibameqnplussol ] ) is a stochastic poisson integral and the the corresponding solution is a random process , which may be written where the sum extends over all jumps in ] of the one - dimensional line .initially , the electroactive species are uniformly distributed over space . over time , the concentration of electroactive species at time and distance from the electrode at develop following fick s laws of diffusion with suitable boundary conditions at both electrode endpoints and . in , comparing kinetic and diffusion battery models , jongerden and haverkort introduced the idea of placing a finite number of charge compartments in series along the spatial range and letting charges move between adjacent components according to eq.([kibameqnsyst ] ) .discharge occurs at the anode which is located in one end point of the spatial interval . 
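Before turning to the spatial versions, the basic two-well dynamics can be explored with a simple forward-Euler integration. The sketch below follows eq. ([kibameqnsyst]) with the initial split u(0) = N, y(0) = T - N stated in the text; the parameter values are illustrative only, and the reweighted migration term of eq. ([kibameqnplus]) could be added to the flow in the same way.

```python
import numpy as np

def kibam_simulate(i_of_t, n0, t_tot, c, k, t_max, dt=1e-3):
    """Forward-Euler integration of the kinetic battery model:
    du = -i(t) dt + k (y/(1-c) - u/c) dt,   dy = -k (y/(1-c) - u/c) dt,
    with u(0) = n0 (available) and y(0) = t_tot - n0 (bound), as in the text.
    Stops when the available charge u reaches zero."""
    u, y = n0, t_tot - n0
    ts, us, ys = [0.0], [u], [y]
    for s in range(1, int(t_max / dt) + 1):
        t = s * dt
        flow = k * (y / (1.0 - c) - u / c)      # bound-to-available recovery flow
        u += (-i_of_t(t) + flow) * dt
        y += -flow * dt
        ts.append(t); us.append(u); ys.append(y)
        if u <= 0.0:
            break
    return np.array(ts), np.array(us), np.array(ys)

# illustrative, uncalibrated values: 0.23 Ah theoretical and 0.15 Ah nominal capacity
n0, t_tot = 0.15, 0.23
ts, us, ys = kibam_simulate(lambda t: 0.02, n0, t_tot, c=n0 / t_tot, k=0.5, t_max=20.0)
print("lifetime under a constant 20 mA load ~", round(ts[-1], 2), "h")
```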
starting from a state of fully charged compartments a spatial charge profile develops over time and determines the pace at which the battery is drained .this system is shown to match the diffusion model when the diffusion equations for are discretized over an equally sized partition of ] , modeling in time slot }\\ x_n&=\mbox{available capacity [ ah ] in time slot . } \end{aligned}\ ] ] we assume that and are integer multiples of and that all jumps are of size .all jumps in the first coordinate are downwards .jumps in the second are allowed to be both up and down as long as .the battery is discharged randomly at constant current with probability per slot . letting where is a sequence of i.i.d random variables with , it follows that is the accumulated discharge at slot and is the remaining charge in the battery after slots .the expected discharge rate is .given the sequence as input we model as a markov chain modulated by .all jumps down of are inherited from the discharge profile and follow those of .jumps up will occur according to a markovian dynamics chosen so as to reflect the recovery properties of the battery . in slot the transition probabilities depend on the current state and the current discharge information stored in . in an attempt to model charge recovery the battery cellis thought to consist of a randomly structured , electroactive material which allows transport of charge carrying species through the electrolyte by liquid or solid state diffusion .internal charge recovery relies on access to transportation channels of enough connectivity to allow the material to pass from one node to the other .also , these channels must be `` activated '' by a sufficient amount of previous discharge events . to try to describe such a system ,we introduce conditional on , the updates and are assumed to have poisson distributions , such that for given nonnegative parameters and and is binomially sampled from , so that in case there is no discharge in slot , that is , then the battery cell is able to recover one unit of charge if both and . hence the dynamics specified by this recursive relation is that , given , if then the transition in slot is and if then together these relations define a bivariate markov chain model with dynamics specified by and , typically , initial condition .we obtain a drift function and a variance function for the markov chain from and the drift function suggests a relevant , approximating ode for the markov chain . to formalize this limit procedureit is convenient to introduce a scaling parameter . at scaling level number of slots per unit time is and the discharge current jump size is rather than .consider and define for continuous time , }.\ ] ] similarly , let be the scaled discharge process with replaced by and put } , \quad v^{(m)}(t)=t-\lambda^{(m)}(t).\ ] ] then }z_i = \frac{q\delta[mt]}{m}+\sqrt{\frac{q(1-q)\delta^2}{m } } \frac{1}{\sqrt{m}}\sum_{k=1}^{[mt]}\frac{z_i - q}{\sqrt{q(1-q)}}.\ ] ] by the functional central limit theorem we may introduce a wiener process and for large view as an approximation of the continuous time remaining capacity function , in the sense where and the approximation error is of the order .moreover , by considering the differential change of over a time interval where , and as above we obtain a deterministic limit equation for large by applying a diffusion approximation with the diffusion term of magnitude . to simplify notation we put then where is another wiener process . 
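the passage from the slot-based chain to its drift ode can be illustrated with a deliberately simplified, one-dimensional stand-in (it is not the bivariate chain defined above): a capacity-like quantity loses a jump of size 1/m with probability q per slot, and regains one with a probability given by an ad hoc recovery function, with the unscaled jump size taken equal to one. as the number of slots per unit time m grows, the simulated path concentrates around the solution of x' = -q + r(x), which is the deterministic part of the approximation used here.

```python
import numpy as np

def jump_chain_path(m, t_max, q, recovery_rate, x0, rng):
    """Generic illustration of the scaling step: a capacity-like chain that
    loses one jump of size 1/m with probability q per slot and regains one
    with probability recovery_rate(x) per slot, run at m slots per unit time."""
    n_slots = int(m * t_max)
    x = np.empty(n_slots + 1)
    x[0] = x0
    for n in range(n_slots):
        down = rng.random() < q
        up = rng.random() < recovery_rate(x[n])
        x[n + 1] = x[n] + (up - down) / m
    return np.arange(n_slots + 1) / m, x

def ode_limit(t_grid, q, recovery_rate, x0):
    """Euler solution of the limiting drift equation x' = -q + r(x)."""
    x = np.empty_like(t_grid)
    x[0] = x0
    for j in range(len(t_grid) - 1):
        dt = t_grid[j + 1] - t_grid[j]
        x[j + 1] = x[j] + dt * (-q + recovery_rate(x[j]))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rec = lambda x: 0.3 * (1.0 - np.exp(-(2.0 - x)))   # stronger recovery for a larger gap
    for m in (20, 200, 2000):                           # finer slots -> smoother paths
        t, x = jump_chain_path(m, t_max=3.0, q=0.5, recovery_rate=rec, x0=2.0, rng=rng)
        x_det = ode_limit(t, q=0.5, recovery_rate=rec, x0=2.0)
        print(f"m={m:5d}   max |X - x_det| = {np.max(np.abs(x - x_det)):.3f}")
```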
since and have simultaneous jumps , and are dependent with a non - zero covariance . as , the stochastic differential equations for and simplify and become the ordinary differential equation & v_t'=-\delta q,&\qquad v_0=t .\end{array}\ ] ] recalling , the solution is and where hence and so thus , in analogy with the results obtained for the kinetic battery models in the previous section , it follows that is an autonomous system from which we can read off the available capacity as a function of remaining capacity . in the markov chain model of section [ sec : markov ] , we considered random discharge at current with probability per slot , which converged by the law of large numbers to a constant discharge pattern , , under the approximation scheme of section [ sec : approxmarkov ] .an alternative would be to run the markov chain relative to a given discharge sequence , such that in the scaling limit emerges an on - off discharge pattern as discussed in section [ sec : dischargeprofile ] .we recall that is the piecewise constant indicator function which is one during periods when the battery is under load and zero otherwise and that is the remaining capacity at time .as the degree of resolution increases by taking we then expect the process of nominal capacity , , to converge to a deterministic limiting function , which solves the ordinary differential equation then and so , by introducing a generating factor defined by hence we can solve for and obtain with this may be written which is a closed form solution for capacity in terms of the given discharge profile , coded by and with parameters and , and the additional battery parameters and . we have found a deterministic function which satisfies the ordinary differential equation ( [ eq : conttimeapproxode ] ) , in the limit of the scaled markov chain with constant discharge rate .now we introduce the random quantities in order to study the fluctuations of the scaled markov chain around its deterministic limit . with and as in section [ sec : conttimeapprox ] we obtain for large , where .it follows by a taylor expansion of that in the scaling limit we therefore expect that the fluctuation process converges to a diffusion process , such that this stochastic differential equation for can be seen as a generalized ornstein uhlenbeck process with time - inhomogeneous drift and variance coefficients and , which is also modulated by an additional , random , drift . to derive the capacity dynamics in ( [ markovode ] ) we were guided by a markov chain argument based on a simplified view of charge transport inside the battery cell . in this final sectionwe mention a more general class of deterministic non - linear recovery models for the dynamics of nominal capacity and state of charge .again we write for nominal capacity and for the remaining capacity in the cell as functions of time . here and is the discharge process with average discharge current .the general principle for charge recovery is that of balancing the discharge rate in by a positive drift of the nominal capacity due to the release and transport of stored charges .it is reasonable that such effects are proportional to the applied average load and exist as long as the theoretical capacity of the cell has not yet been fully consumed , that is .furthermore , the gain in capacity due to recovery depends on the migration of charge - carriers , and the strength of this effect should increase with the gap between maximal and actual capacity . 
hence , to capture charge recovery in the framework of one - dimensional differential equations ,we may let ] be non - decreasing functions with , , and consider equations of the generic shape where is battery life defined as the maximal time for which .the special case of ( [ genericode ] ) where and is a positive constant which measures the strength of migration of charges due to the existing electric field in the battery , is given by because of the separation of variables we may write for example , taking the simplest case , equivalently , it is immediate in this model that the pair is autonomous so that available capacity can be viewed as a function of remaining capacity only . moreover , the capacity dynamics is load - invariant in the sense that the curve does not depend on .in other words , batteries drained using different choices of will exhibit different battery life , longer the smaller intensity of the current , but the used capacity at end of life will be the same in each case . in the framework of the nonlinear ode approach we have obtained , just as for the linear models studied previously , a phase plane relation for the capacity dynamics .via state - of - charge we may proceed as before to modeling the corresponding voltage . without going into detailsthis will give us some capacity threshold below which the battery is no more functioning .then the unused capacity that remains in the battery at the end of its life time is the unique solution of .thus , the delivered capacity is and the gained capacity is .our model allows for some qualitative conclusions about these quantities as well as numerical studies of special cases .various discharge processes can be compared against experimental data and relevant parameters estimated . to give an indication of this type of work , figure [ fig : dischargeprofile ]shows the result of repeated independent simulations based on ( [ expode ] ) with and a random poisson discharge process as described in section [ sec : dischargeprofile ] .the parameters used for these particular simulations are chosen arbitrarily such that the output appears to mimic that of a real battery .the upper panel shows the phase plane traces of the resulting solutions .the lower panel shows the corresponding random paths of the state - of - charge as function of time .also in the lower panel we have superimposed ( solid red curve ) the state - of - charge for the case of constant discharge with the same average load , now given by the explicit solution of equation ( [ expode ] ) with .i. kaj , v. konane , analytical and stochastic modelling of battery cell dynamics ._ proceed . 19th intern .analytic and stochastic modelling techn . and appl . _ k. al - begain , d. fiems , and j .-vincent ( eds . ) : asmta 2012 , lecture notes in computer science , vol 7314 ( 2012 ) , 240254 .v. ramadesigan , p.w.c northrop , s. de , s. sananthagopalan , r.d .braatz and v.r .subramanian , modeling and simulation of lithium - ion batteries from a systems engineering perspective .j. electrochem .* 159*:3 ( 2012 ) , r31-r45 . c. rohner , l.m .feeney , and p. gunningberg , evaluating battery models in wireless sensor networks .11th international conference on wired / wireless internet communication ( wwic ) , lecture notes in computer science vol 7889 , 2013 . v. subramaniam , v. boovaragavan , v. ramadesigan , m. arabandi , mathematical model reformulation for lithium - ion battery simulations : galvanostatic boundary conditions . j. electrochem .soc . 
156:4 (2009), a260-a271.
svensson, the term structure of interest rate differentials in a target zone, theory and swedish data. seminar paper 466, institute for international economic studies, stockholm university, 1990. | in this paper we review several approaches to mathematical modeling of simple battery cells and develop these ideas further with emphasis on charge recovery and the response behavior of batteries to a given external load. we focus on models which use few parameters and basic battery data, rather than detailed reaction and material characteristics of a specific battery cell chemistry, starting with the coupled linear ode dynamics of the kinetic battery model. we show that a related system of pde with robin-type boundary conditions arises in the limiting regime of a spatial kinetic battery model, and provide a new probabilistic representation of the solution in terms of brownian motion with drift reflected at the boundaries on both sides of a finite interval. to compare linear and nonlinear dynamics in kinetic and stochastic battery models we study markov chains with states representing the available and remaining capacities of the battery. a natural scaling limit leads to a class of nonlinear odes, which can be solved explicitly and compared with the capacities obtained for the linear models. to indicate the potential use of the modeling we briefly discuss comparison of discharge profiles and effects on battery performance. battery lifetime; state-of-charge; charge recovery; probabilistic solution of pde; robin boundary condition; nonlinear odes |
the dimension of the multiwire chambers deployed in modern high energy physics experiments is usually large conforming to the scale of experimental setup .the electrostatic instability in such chambers may be crucial when the amplitude of the oscillation caused by the action of electrostatic force alone or combined with the gravity becomes comparable to the electrode spacings .the study of the wire deflection in such a geometry is usually a complex affair since an interplay between several physical forces determines the wire stability .the approximation of constant or linear dependence of the force on the wire deflection is not adequate to solve for the differential equation governing the wire dynamics because all the wires in the chamber move in a collective way influencing each other giving rise to a nonlinear effect .since the exact solutions for the differential equation involving the nonlinear force are no longer known , it has to be solved numerically . of various methods of estimating the electrostatic sag from the differential equation ,only the linear and iterative methods have been attempted in several geometries . in these works ,the electrostatic force has been estimated from the 2d field calculation which differs significantly from 3d solutions .owing to the 2d nature of the problem , the sag is normally overestimated due to the fact that the whole length of the wire is considered to be at maximum sag . in this work ,an accurate 3d computation of electrostatic field has been carried out through the use of a nearly exact boundary element method ( nebem ) which has yielded precise force estimation . in order to reduce complexity ,only the normal component of the field has been considered in the calculation .the deflection of each segment has been assumed to be very small in comparison to its length .the calculation has been carried out for a geometry similar to that of rich detector in alice .the anode plane consists of gold - tungsten wires with m diameter with pitch mm .the upper cathode plane is made of copper - berrylium wires with diameter m and pitch mm while the lower one is a uniform conducting plate .the separation of upper and lower cathodes from the anode are respectively mm and mm and length of the detector in z - direction is cm .the anode plane is supplied with high voltage w.r.t . the cathode planes .the second order differential equation in an equilibrium state of the wire can be written as where , are the electrostatic and gravitational forces per unit length while the stringing tension of the wire . using three point finite difference formula , it can be rewritten as .(\delta z)^2\ ] ] where , and represent the deflections of respective segments . the electrostatic force on the -th segment has been computed using nebem solver for the given 3d geometry .the required sag due to the action of either of the electrostatic and gravitational forces or combined may be obtained from this equation .thus the set of equations for the segments on a wire can be represented as where is the tridiagonal coefficient matrix whose inverse has been calculated following standard numerical receipe . 
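for completeness, a compact sketch of this step is given below: it assembles the three-point system for a wire clamped at both ends under a transverse load f(z) per unit length and solves it with the standard tridiagonal (thomas) algorithm. the tension, wire dimensions and material density in the example are illustrative stand-ins, and only the purely gravitational case, for which the parabolic solution q z (l - z) / (2 t) with maximum sag q l^2 / (8 t) is classical, is used as a check.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (standard Thomas algorithm)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def wire_sag(load_per_length, tension, length, n_seg=200):
    """Deflection of a wire fixed at both ends under a transverse load f(z)
    [N/m], from the three-point discretisation of T y'' = -f(z)."""
    dz = length / (n_seg + 1)
    z = np.linspace(dz, length - dz, n_seg)          # interior nodes only
    f = load_per_length(z)
    a = np.full(n_seg, -1.0)
    b = np.full(n_seg, 2.0)
    c = np.full(n_seg, -1.0)
    d = f * dz**2 / tension
    return z, thomas_solve(a, b, c, d)

if __name__ == "__main__":
    # gravitational load on a 1.3 m gold-tungsten wire of 20 um diameter
    rho, r, L, T = 19.3e3, 10e-6, 1.3, 0.45          # kg/m^3, m, m, N (illustrative tension)
    q = rho * 9.81 * np.pi * r**2                    # weight per unit length
    z, y = wire_sag(lambda s: np.full_like(s, q), tension=T, length=L)
    print("max FDM sag        [um]:", 1e6 * y.max())
    print("analytic q L^2/(8T) [um]:", 1e6 * q * L**2 / (8 * T))
```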
in the present work ,five anode wires have been considered with discretization of linear segments while that of the cathode plate has been .it should be noted that no plates on the sides of the chamber have been taken into account .the calculation procedure has been validated by calculating wire sag due to gravitational force and comparing with the analytic solution for gravitational force only as where and , , are the length , radius and density of the wire respectively .the results has been illustrtaed in fig.[fig : gravsagandcath ] which has demonstrated the validity of the method . gravitational sag of central anode and cathode wires , scaledwidth=45.0% ]the normal electric field components acting on the anode and cathode wire segments for anode voltage of have been plotted in fig.[fig : normalef ] .the field component on each segment has been calculated from the vectorial addition of field components at four radial locations on the segment periphery .the wire sag at the centre due to electrostatic force following the solution of tridiagonal matrix equation [ eqn.[eqn : mateq ] ] has been shown as a function of anode voltage in fig.[fig : wiresag ] for anode and cathode wires separately .it is evident from the result that the sag in the anode wire changes more rapidly than the cathode wires .r0.5 the central wire in the anode plane has been found to undergo more deflection in comparison to the edge wires .the calculation of for wire sags in this chamber has reported less deflection in comparison to our result . in ,an additional restoring electrostatic force has been considered to be operational when the wire gets deflected which in turn has helped to reduce the wire sag . in our calculation , no such dynamic consideration of the electrostatic force with the wire deflection has been incorporated . to reproduce the actual wire sags ,an iterative process can be carried out each time calculating the electrostatic force due to new position of the deflected wire .using the nebem solver , the electrostatic field could be accurately calculated for the three dimensional geometry of multiwire rich chamber .an fdm approach to compute the wire sag has been developed and validated for the case of gravitational sag calculation . in the present calculation ,no restoring effect of electrostatic force has been considered unlike the earlier work which has led to larger sag estimates .the restoring force aspect will be implemented in future by iterative technique to estimate a realistic wire sag in this chamber . | a numerical method of determining the wire sag in a multiwire proportional chamber used in rich by solving the second order differential equation which governs the wire stability has been presented . the three point finite difference method ( fdm ) has generated a tridiagonal matrix equation relating the deflection of wire segments to the force acting on it . the precise estimates of electrostatic force has been obtained from accurate field computation using a nearly exact boundary element method ( nebem ) solver . |
we address one of the fundamental problems in uncertainty quantification ( uq ) : the mapping of the probability distribution of a random variable through a nonlinear function .let us assume that we are concerned with a specific physical or engineering model which is computationally expensive .the model is defined by the map .it takes a parameter as input , and produces an output , . in this paperwe restrict ourselves to a proof - of - principle one - dimensional case .let us assume that is a random variable distributed with probability density function ( pdf ) .the uncertainty quantification problem is the estimation of the pdf of the output variable , given .formally , the problem can be simply cast as a coordinate transformation and one easily obtains where is the jacobian of .the sum over all such that takes in account the possibility that may not be injective .if the function is known exactly and invertible , eq.([py ] ) can be used straightforwardly to construct the pdf , but this is of course not the case when the mapping is computed via numerical simulations .several techniques have been studied in the last couple of decades to tackle this problem .generally , the techniques can be divided in two categories : intrusive and non - intrusive .intrusive methods modify the original , _ deterministic _ , set of equations to account for the stochastic nature of the input ( random ) variables , hence eventually dealing with stochastic differential equations , and employing specific numerical techniques to solve them .classical examples of intrusive methods are represented by polynomial chaos expansion , and stochastic galerkin methods .on the other hand , the philosophy behind non - intrusive methods is to make use of the deterministic version of the model ( and the computer code that solves it ) as a black - box , which returns one deterministic output for any given input .an arbitrary large number of solutions , obtained by sampling the input parameter space , can then be collected and analyzed in order to reconstruct the pdf .the paradigm of non - intrusive methods is perhaps best represented by monte carlo ( mc ) methods : one can construct an ensemble of input parameters ( typically large ) distributed according to the pdf , run the corresponding ensemble of simulations , and process the outputs .mc methods are probably the most robust of all the non - intrusive methods .their main shortcoming is the slow convergence of the method , with a typical convergence rate proportional to . for many applications quasi - monte carlo ( qmc ) methods now preferred to mc methods , for their faster convergence rate .in qmc the pseudo - random generator of samples is replaced by more uniform distributions , obtained through so - called quasi - random generators .it is often said that mc and qmc do not suffer the ` curse of dimensionality' , in the sense that the convergence rate ( but not the actual error ! ) is not affected by the dimension of the input parameter space .therefore , they represent the standard choice for large dimensional problems . on the other hand , when the dimension is not very large , collocation methods are usually more efficient . collocation methods recast an uq problem as an interpolation problem . 
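to fix ideas, the brute-force monte carlo approach can be written in a few lines. in the sketch below the map f is an arbitrary inexpensive stand-in for an expensive model, the input is uniform on [-1, 1], and the printed numbers display the slow decay of the error (roughly proportional to n^{-1/2}) that motivates the collocation approach discussed next.

```python
import numpy as np

def empirical_cdf(samples, y_grid):
    """Monte Carlo estimate of the output CDF: fraction of samples <= y."""
    samples = np.sort(samples)
    return np.searchsorted(samples, y_grid, side="right") / len(samples)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    f = lambda x: np.sin(3 * x) + 0.5 * x**2          # stand-in for an expensive model
    y_grid = np.linspace(-1.5, 2.0, 400)

    # brute-force reference CDF with a very large sample
    x_ref = rng.uniform(-1.0, 1.0, 2_000_000)
    cdf_ref = empirical_cdf(f(x_ref), y_grid)

    # convergence of the plain Monte Carlo estimate with the sample size
    for n in (100, 1_000, 10_000, 100_000):
        err = []
        for _ in range(20):                            # repeat to average the random error
            x = rng.uniform(-1.0, 1.0, n)
            err.append(np.sqrt(np.mean((empirical_cdf(f(x), y_grid) - cdf_ref) ** 2)))
        print(f"N = {n:6d}   mean L2 CDF error = {np.mean(err):.4f}")
```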
in collocation methods ,the function is sampled in a small ( compared to the mc approach ) number of points ( ` collocation points ' ) , and an interpolant is constructed to obtain an approximation of over the whole input parameter space , from which the pdf can be estimated .the question then arises on how to effectively choose the collocation points .recalling that every evaluation of the function amounts to performing an expensive simulation , the challenge resides in obtaining an accurate approximation of with the least number of collocation points .+ as the name suggests , collocation methods are usually derived from classical quadrature rules .the type of pdf can guide the choice of the optimal quadrature rule to be used ( i.e. , gauss - hermite for a gaussian probability , gauss - legendre for a uniform probability , etc . ) . furthermore ,because quadratures are associated with polynomial interpolation , it becomes natural to define a global interpolant in terms of a lagrange polynomial .also , choosing the collocation points as the abscissas of a given quadrature rule makes sense particularly if one is only interested in the evaluation of the statistical moments of the pdf ( i.e. , mean , variance , etc . ) .on the other hand , there are several applications where one is interested in the approximation of the full pdf .for instance , when is narrowly peaked around two or more distinct values , its mean does not have any statistical meaning . in such casesone can wonder whether a standard collocation method based on quadrature rules still represents the optimal choice , in the sense of the computational cost to obtain a given accuracy . from this perspective, a downside of collocation methods is that the collocation points are chosen a priori , without making use of the knowledge of acquired at previous interpolation levels .for instance , the clenshaw - curtis ( cc ) method uses a set of points that contains nested subset , in order to re - use all the previous computations , when the number of collocation points is increased .however , since the abscissas are unevenly spaced and concentrated towards the edge of the domain ( this is typical of all quadrature rules , in order to overcome the runge phenomenon ) , it is likely that the majority of the performed simulations will not contribute significantly in achieving a better approximation of .stated differently , one would like to employ a method where each new sampling point is chosen in such a way to result in the fastest convergence rate for the approximated , in contrast to a set of points defined a priori .as a matter of fact , because the function is unknown , a certain number of simulations will always be redundant , in the sense that they will contribute very little to the convergence of .the rationale for this work is to devise a method to minimize such a redundancy in the choice of sampling points while achieving fastest possible convergence of .clearly , this suggests to devise a strategy that chooses collocation points _ adaptively _ , making use of the knowledge of the interpolant of , which becomes more and more accurate as more points are added .a well known adaptive sampling algorithm is based on the calculation of the so - called hierarchical surplus ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? 
?* see e.g ) .this is defined as the difference , between two levels of refinement , in the solution obtained by the interpolant .although this algorithm is quite robust , and it is especially efficient in detecting discontinuities , it has the obvious drawback that it can be prematurely terminated , whenever the interpolant happens to exactly pass through the true solution on a point where the hierarchical surplus is calculated , no matter how inaccurate the interpolant is in close - by regions ( see figure [ fig : hierarchical_example ] for an example ) .the goal of this paper is to describe an alternative strategy for the adaptive selection of sampling points .the objective in devising such strategy is to have a simple and robust set of rules for choosing the next sampling point .the paper is concerned with a proof - of - principle demonstration of our new strategy , and we will focus here on one dimensional cases and on the case of uniform only , postponing the generalization to multiple dimensions to future work .it is important to appreciate that the stated goal of this work is different from the traditional approach followed in the overwhelming majority of works that have presented sampling methods for uq in the literature .indeed , it is standard to focus on the convergence of the nonlinear unknown function , trying to minimize the interpolation error on , for a given number of sampling points . on the other hand, we will show that the convergence rates of and of its cumulative distribution function can be quite different .our new strategy is designed to achieve the fastest convergence on the latter quantity , which is ultimately the observable quantity of an experiment .the paper is organized as follows . in section 2we define the mathematical methods used for the construction of the interpolant and show our adaptive strategy to choose a new collocation points . in section 3we present some numerical examples and comparisons with the clenshaw - curtis collocation method , and the adaptive method based on hierarchical surplus .finally , we draw our conclusions in section 4 .in section 3 , we compare our method with the cc method , which is the standard appropriate collocation method for a uniform . here , we recall the basic properties of cc , for completeness .the clenshaw - curtis ( cc ) quadrature rule uses the extrema of a chebyshev polynomial ( the so - called ` extrema plus end - points ' collocation points in ) as abscissas .they are particularly appealing to be used as collocation points in uq , because a certain subset of them are nested .specifically , they are defined , in the interval ] . calculate the interpolant on the grid .define and add them on the grid calculate the interpolant on the new grid calculate the hierarchical surplus on the last two entries of and store them in the vector find the largest hierarchical surplus in , remove it from and remove the corresponding from append the two neighbors to and add them to the grid we use a multiquadric biharmonic radial basis function ( rbf ) with respect to a set of points , with , defined as : where are free parameters ( referred to as shape parameters ) . 
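a minimal sketch of this interpolation step is the following. it assumes the common multiquadric form phi_k(x) = sqrt(1 + ((x - x_k)/c_k)^2), uses a single shape parameter c for all nodes for simplicity, and interpolates an arbitrary steep test function; the weights come from the symmetric linear system described next.

```python
import numpy as np

def rbf_weights(x_nodes, f_nodes, c):
    """Solve the symmetric interpolation system A w = f for a multiquadric
    basis phi_k(x) = sqrt(1 + ((x - x_k)/c)^2) with a common shape parameter c."""
    r = x_nodes[:, None] - x_nodes[None, :]
    A = np.sqrt(1.0 + (r / c) ** 2)
    return np.linalg.solve(A, f_nodes)

def rbf_eval(x, x_nodes, w, c):
    """Evaluate the interpolant g(x) = sum_k w_k phi_k(x)."""
    r = x[:, None] - x_nodes[None, :]
    return np.sqrt(1.0 + (r / c) ** 2) @ w

if __name__ == "__main__":
    f = lambda x: np.tanh(10 * x)                      # steep test function
    x_nodes = np.linspace(-1.0, 1.0, 15)
    c = 0.4                                            # shape parameter (tuning knob)
    w = rbf_weights(x_nodes, f(x_nodes), c)
    x_fine = np.linspace(-1.0, 1.0, 1001)
    g = rbf_eval(x_fine, x_nodes, w, c)
    print("max interpolation error:", np.max(np.abs(g - f(x_fine))))
    print("error at the nodes     :",
          np.max(np.abs(rbf_eval(x_nodes, x_nodes, w, c) - f(x_nodes))))
```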
the function is approximated by the interpolant defined as the weights are obtained by imposing that for each sampling point in the set , namely the interpolation error is null at the sampling points .this results in solving a linear system for of the form , with a real symmetric matrix .we note that , by construction , the linear system will become more and more ill - conditioned with increasing , for fixed values of .this can be easily understood because when two points become closer and closer the corresponding two rows in the matrix become less and less linearly independent . to overcome this problem one needs to decrease the corresponding values of . in turns, this means that the interpolant will tend to a piece - wise linear interpolant for increasingly large .we focus , as the main diagnostic of our method , on the cumulative distribution function ( cdf ) , which is defined as where . as it is well known , the interpretation of the cumulative distribution function is that , for a given value , is the probability that .of course , the cdf contains all the statistical information needed to calculate any moment of the distribution , and can return the probability density function , upon differentiation .moreover , the cdf is always well defined between 0 and 1 .the following two straightforward considerations will guide the design of our _ adaptive selection strategy_. a first crucial point , already evident from eq .( [ py ] ) , is whether or not is bijective .when is bijective this translates to the cdf being continuous , while a non - bijective function produces a cdf which is discontinuous .it follows that intervals in where is constant ( or nearly constant ) will map into a single value ( or a very small interval in ) where the cdf will be discontinuous ( or ` nearly ' discontinuous ) .secondly , an interval in with a large first derivative of will produce a nearly flat cdf .this is again clear by noticing that the jacobian in eq .( [ py ] ) ( in one dimension ) is in the denominator , and therefore the corresponding will be very small , resulting in a flat cdf .+ loosely speaking one can then state that regions where is flat will produce large jumps in the cdf and , conversely , regions where the has large jumps will map in to a nearly flat cdf . from this simple considerations one can appreciate how important it is to have an interpolant that accurately capture both regions with _ very large _ and _ very small _ first derivative of . moreover , since the cdf is an integrated quantity , interpolation errors committed around a given will propagate in the cdf for all larger values .for this reason , it is important to achieve a global convergence with interpolation errors that are of the same order of magnitude along the whole domain .+ the adaptive section algorithm works as follows .we work in the interval ] , and we compare our results against the clenshaw - curtis , and the hierarchical surplus methods . we denote with the interpolant obtained with a set of points ( hence the iterative procedure starts with ) . a possible way to construct the cdf from a given interpolant would be to generate a sample of points in the domain ] where to compute . in the followingwe will use , in the evaluation of the cdf , a grid in with points equally spaced in the interval ] .we define the following errors : where denotes the l norm .it is important to realize that the accuracy of the numerically evaluated cdf will always depend on the binning of , i.e. 
the points at which the cdf is evaluated . as we will see in the following examples , the error saturates for large , which thus is an artifact of the finite bin size .we emphasize that , differently from most of the previous literature , our strategy focuses on converging rapidly in , rather than in .of course , a more accurate interpolant will always result in a more accurate cdf , however the relationship between a reduction in and a corresponding reduction in is not at all trivial .this is because the relation between and is mediated by the jacobian of , and it also involves the bijectivity of .+ finally , we study the convergence of the mean , see equation [ eq : mean ] , and the variance , which is defined as these will be calculated by quadrature for the cc methods , and with an integration via trapezoidal method for the adaptive methods .+ we study two analytical test cases : * case 1 : ; * case 2 : ; and two test cases where an analytical solution is not available , and the reference will be calculated as an accurate numerical solution of a set of ordinary differential equations : * case 3 : lotka - volterra model ( predator - prey ) ; * case 4 : van der pol oscillator . while case 1 and 2 are more favorable to the cc method , because the functions are smooth and analytical , hence a polynomial interpolation is expected to produce accurate results , the latter two cases mimic applications of real interest , where the model does not produce analytical results , although might still be smooth ( at least piece - wise , in case 4 ) . in this case is a bijective function , with one point ( ) where the first derivative vanishes .figure [ fig : case_1_f ] shows the function ( top panel ) and the corresponding cdf ( bottom panel ) , which in this case can be derived analytically .hence , we use the analytical expression of cdf to evaluate the error . the convergence of and is shown in figure [ fig : case_1_err ] ( top and bottom panels , respectively ) . here and in all the following figures blue squares denote the new adaptive selection method , red dots are for the cc methods , and black line is for the hierarchical surplus method .we have run the cc method only for ( i.e. the points at which the collocation points are nested ) , but for a better graphical visualization the red dots are connected with straight lines .one can notice that the error for the new adaptive method is consistently smaller than for the cc method . from the top panel , one can appreciate the saving in computer power that can be achieved with our new method .although the difference with cc is not very large until , at there is an order of magnitude difference between the two .it effectively means that in order to achieve the same error , the cc method would run at least twice the number of simulations .the importance of focusing on the convergence of the cdf , rather than on the interpolant , is clear in comparing our method with the hierarchical surplus method .for instance , for , the two methods have a comparable error , but our method has achieved almost an order of magnitude more accurate solution in . effectively , this means that our method has sampled the new points less redundantly . 
in this case is an anti - symmetric function with zero mean .hence , any method that chooses sampling points symmetrically distributed around zero would produce the correct first moment .we show in figure [ fig : case_1_sigma ] the convergence of , as the absolute value of the different with the exact value , in logarithmic scale .blue , red , and black lines represent the new adaptive method , the cc , and the hierarchical surplus methods , respectively ( where again for the cc , simulations are only performed where the red dots are shown ) .the exact value is .as we mentioned , the cc method is optimal to calculate moments , since it uses quadrature .although in our method the error does not decrease monotonically , it is comparable with the result for cc . in this casethe function is periodic , and it presents , in the domain ] . clearly , the solution of the model depends on the input parameter .we define our test function to be the result of the model for the population at time : the resulting function , and the computed cdf are shown in figure [ fig : case_3_f ] ( top and bottom panel , respectively ) .we note that , although can not be expressed as an analytical function , it is still smooth , and hence it does not present particular difficulties in being approximated through a polynomial interpolant .indeed the error undergoes a fast convergence both for the adaptive methods and for the cc method ( figure [ fig : case_3_err ] ) .once again , the new adaptive method is much more powerful than the cc method in achieving a better convergence rate , and thus saving computational power , while the hierarchical surplus method is the worst of the three .convergence of and are shown in figures [ fig : case_3_mu ] and [ fig : case_3_sigma ] , respectively .similar to previous cases , the cc presents a monotonic convergence , while this is not the case for the adaptive methods . only for ,the cc method yields much better results than the new method .our last example is the celebrated van der pol oscillator , which has been extensively studied as a textbook case of a nonlinear dynamical system . in this respectthis test case is very relevant to uncertainty quantification , since real systems often exhibit a high degree of nonlinearity .similar to case 3 , we define our test function as the output of a set of two odes , which we solve numerically with matlab .the model for the van der pol oscillator is : the initial conditions are , .the model is solved for time ] , it ranges between 50 and 250 .the function and the corresponding cdf are shown in figure [ fig : case_4_f ] .this function is clearly much more challenging than the previous ones .it is divided in two branches , where it takes values and , and it presents discontinuities where it jumps from one branch to the other .correspondingly , cdf presents a flat plateau for , which is the major challenge for both methods . in figure[ fig : case_4_err ] we show the errors and .the overall convergence rate of the cc and the new method is similar . for this case ,the hierarchical surplus method yields a better convergence , but only for . as we commented before, the mean has no statistical meaning in this case , because the output is divided into two separate regions . 
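the construction of this last test case can be sketched as follows. the snippet uses the standard form of the van der pol equation, x'' - mu (1 - x^2) x' + x = 0, with the damping parameter mu playing the role of the uncertain input; the parameter range, final time and initial conditions are illustrative choices and not necessarily those used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp_output(mu, t_final=20.0):
    """Integrate the van der Pol oscillator x'' - mu (1 - x^2) x' + x = 0
    and return the position at t_final (one scalar output per parameter)."""
    rhs = lambda t, y: [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]
    sol = solve_ivp(rhs, (0.0, t_final), [1.0, 0.0], rtol=1e-6, atol=1e-9)
    return sol.y[0, -1]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # uncertain damping parameter, uniform over an illustrative interval
    mu_samples = rng.uniform(0.5, 3.0, 200)
    outputs = np.array([vdp_output(mu) for mu in mu_samples])

    # empirical CDF of the output; flat stretches of the CDF correspond
    # to gaps in the attainable output values
    y_sorted = np.sort(outputs)
    cdf = np.arange(1, len(y_sorted) + 1) / len(y_sorted)
    for k in range(0, len(y_sorted), len(y_sorted) // 6):
        print(f"y = {y_sorted[k]: .3f}   cdf = {cdf[k]:.2f}")
```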
the convergence for is presented in figure [ fig : case_4_sigma ].

we have presented a new adaptive algorithm for the selection of sampling points for non-intrusive stochastic collocation in uncertainty quantification (uq). the main idea is to use a radial basis function as interpolant, and to refine the grid on points where the interpolant presents large and small first derivatives.

in this work we have focused on 1d and uniform probability, and we have shown four test cases, encompassing analytical and non-analytical smooth functions, which are prototypes of a very wide class of functions. in all cases the new adaptive method improved the efficiency both of the (non-adaptive) clenshaw-curtis collocation method and of the adaptive algorithm based on the calculation of the hierarchical surplus (note that the method used in this paper is a slight improvement of the classical algorithm). the strength of our method is the ability to select a new sampling point making full use of the interpolant resulting from all the previous evaluations of the function, thus seeking the fastest possible convergence rate for the cdf. we have shown that there is no one-to-one correspondence between a reduction in the interpolation error and a reduction in the cdf error. for this reason, collocation methods that choose the distribution of sampling points a priori can perform poorly in attaining a fast convergence rate in the cdf error, which is the main goal of uq. moreover, in order to maintain the nestedness of the collocation points the cc method requires a larger and larger number of simulations when moving from one level to the next, in contrast with our new method, where one can add one point at a time.

we envision many possible research directions to further investigate our method. the most obvious is to study multi-dimensional problems. we emphasize that the radial basis function is a mesh-free method, and as such we anticipate that this will largely alleviate the curse of dimensionality that afflicts other collocation methods based on quadrature points (however, see for methods related to the construction of sparse grids, which have the same aim). moreover, it will be interesting to explore the versatility of rbf in what concerns the possibility of choosing an optimal shape parameter. recent work investigated the role of the shape parameter in interpolating discontinuous functions, which might be very relevant in the context of uq, when the continuity of the target function cannot be assumed a priori. finally, a very appealing research direction would be to simultaneously exploit quasi-monte carlo and adaptive selection methods for extremely large dimension problems.

a. a. and c. r. are supported by fom projects no. 12cser058 and 12pr304, respectively. we would like to remember dr.ir. j.a.s. witteveen (2015) for the useful discussions we had about uncertainty quantification.
figure captions: the first figure shows an example in which the interpolant (in black) goes exactly through the red straight line at the sampled points; calculating the piece-wise linear interpolant between two, three, and five points would result in a null hierarchical surplus on these points. the remaining figures show, for each of the four test cases, the convergence of the two error measures (top and bottom panels) as a function of the number of sampling points; blue squares: new adaptive selection method, red dots: clenshaw-curtis, black curve: adaptive method based on hierarchical surplus. | we present a simple and robust strategy for the selection of sampling points in uncertainty quantification. the goal is to achieve the fastest possible convergence in the cumulative distribution function of a stochastic output of interest. we assume that the output of interest is the outcome of a computationally expensive nonlinear mapping of an input random variable, whose probability density function is known. we use a radial function basis to construct an accurate interpolant of the mapping. this strategy enables adding new sampling points one at a time, adaptively. this takes into full account the previous evaluations of the target nonlinear function. we present comparisons with a stochastic collocation method based on the clenshaw-curtis quadrature rule, and with an adaptive method based on hierarchical surplus, showing that the new method often results in a large computational saving. |
over one hundred years ago , the italian economist vilfredo pareto made one of the first empirical studies of the distribution of wealth by undertaking a careful study of land ownership in italy , switzerland and germany .in the course of this study , he plotted the fraction of economic agents with land holdings worth more than as a function of .his studies led him to believe that this function , which we shall denote by has a universal form . by its definition , it is clear that is monotone non - increasing , and that and .what pareto observed is that is approximately equal to one for all less than a certain cutoff value denoted by , and decays as a power law for .that is , pareto s empirical observations led him to conclude that it is approximately true that the power is usually called the pareto exponent . to put pareto s observations in modern terms , we may note that is the cumulative distribution function ( cdf ) of economic agents , ordered by wealth .let us denote the corresponding probability density function ( pdf ) of agents by , but we shall adopt the convention of normalizing to the total number of economic agents , rather than to unity , so that is the total number of agents with wealth in ] of the agents would possess the same fraction of the total wealth . in real economies, this curve always lies below the diagonal , more like the orange curve in fig .[ fig : pareto - lorenz ] . to prove that the lorenz curve can never pass above the diagonal ,note first that so the slope of the lorenz curve is given by which is the ratio of the wealth corresponding to that point on the curve to the average wealth .it follows that where we have abused notation , in that is now considered a function of .we see that when , and that when , and moreover that is concave up .it follows that is bounded above by the diagonal .equations ( [ eq : dl ] ) and ( [ eq : df ] ) relate the quantities plotted in the lorenz curve to , and we may also note that thus , given and , we see that any of the quantities , , and can be derived from any other , so they all contain equivalent information which is to say that they all contain essentially complete information about the distribution of wealth in a society .in fact , actual data for wealth distributions in the world today is very scant , and economists have to content themselves with much coarser characterizations of wealth inequality than the quantities described above .one of the most popular of these is due to the italian statistician and sociologist corrado gini , who also worked roughly contemporaneously with pareto and lorenz . the _ gini coefficient _ is defined as the ratio of the shaded area in fig .[ fig : pareto - lorenz ] , lying between the diagonal segment and the lorenz curve .consequently , when everybody has equal wealth ; the limit describes the approach to complete oligarchy . in terms of the quantities defined above , the gini coefficient is given by using eqs .( [ eq : dl ] ) and ( [ eq : df ] ) , we can change integration variables from to to find alternatively , changing the order of integration yields where we have changed the remaining variable of integration from to in the rightmost expression .equation ( [ eq : gini1 ] ) indicates that , where the angle brackets denote an average over the pdf .likewise , eq . ( [ eq : gini2 ] ) indicates that . 
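for a finite sample of agents these quantities reduce to simple sums. the sketch below computes the empirical lorenz curve and the standard sample estimate of the gini coefficient, and checks the two limiting cases (equal wealth, a single oligarch) together with an arbitrary heavy-tailed sample.

```python
import numpy as np

def lorenz_curve(wealth):
    """Return (F, L): cumulative fraction of agents (poorest first) and the
    corresponding cumulative fraction of the total wealth."""
    w = np.sort(np.asarray(wealth, dtype=float))
    F = np.arange(1, len(w) + 1) / len(w)
    L = np.cumsum(w) / np.sum(w)
    return F, L

def gini(wealth):
    """Sample Gini coefficient: twice the area between the diagonal and the
    Lorenz curve (0 = perfect equality, -> 1 = complete oligarchy)."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = len(w)
    ranks = np.arange(1, n + 1)
    return np.sum((2 * ranks - n - 1) * w) / (n * np.sum(w))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    equal = np.full(10_000, 1.0)
    pareto_like = rng.pareto(1.5, 10_000) + 1.0        # heavy-tailed wealth sample
    oligarchy = np.zeros(10_000)
    oligarchy[0] = 1.0
    for name, sample in [("equal", equal), ("pareto", pareto_like), ("oligarchy", oligarchy)]:
        print(f"{name:10s}  G = {gini(sample):.3f}")
```

for a pareto law with exponent alpha the exact value is 1/(2 alpha - 1), so the sample value close to 0.5 obtained above for alpha = 1.5 is the expected one.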
from these fundamental relationships between the gini coefficient and the functions introduced earlier , we see that is a quadratic functional of , and we may sometimes emphasize this functional dependence by writing it as ] with respect to , which is the analog of the gradient in function space .this is defined by noting that , for any sufficiently well behaved function , - g[p ] \right)\ ] ] is a linear functional of . as such , by the riesz representation theorem , it can be expressed as the inner product of and a quantity which we shall denote by ; that is , for all sufficiently well behaved functions , the relation - g[p ] \right)\ ] ] defines the frchet derivative , , where the parentheses on the left indicate the standard inner product .this may be thought of as the infinite - dimensional version of for a scalar function .applying the above definition to eq .( [ eq : gini2 ] ) , it is a straightforward calculation to find .\label{eq : frechet}\ ] ] now , if the pdf , , changes in time , it will cause a change in the gini coefficient given by which is an infinite - dimensional version of the ordinary chain rule , in the next two sections , we shall consider two different dynamical equations for , and use them along with eq .( [ eq : dgdt ] ) to place bounds on .as noted in the introduction , though pareto s law is over a century old , an explanation for it in terms of microeconomic exchange relations between individual agents is still elusive .the general idea that simple rules for asset exchange might be used to explain wealth distributions appears to be due to angle in 1986 .such models have come to be called _ asset - exchange models _ ( aems ) and they typically involve binary transactions between agents involving some increment of wealth , with rules for which agent gains it and which agent loses it . the first work applying the mathematical methods of kinetic theory to such models is the paper of ispolatov , krapivsky and redner in 1998 .they considered an aem model in which the agent who loses the wealth is selected with even odds , and in which was proportional to the wealth of the losing agent .writing in a popular article in 2002 , hayes noted that in an economy governed by this model , no agent would willingly trade with a poorer agent , and therefore would try to use deception to trade only with wealthier agents .for this reason , he named the model of ispolatov et al . the _ theft - and - fraud model _( tfm ) . 
in the same articlementioned above , hayes proposed a variant of the tfm , in which the losing agent is still selected with even odds , but in which is proportional to the wealth of the poorer agent , rather than that of the losing agent .he referred to this as the _ yard - sale model _( ysm ) and noted that it describes an economy in which rich and poor will have no strategic reason not to trade with each other .the first kinetic theoretical description of the ysm was given by boghosian in 2014 .in analogy with , he derived a boltzmann equation for the ysm , but then went on to show that in the limit of small and frequent transactions , this reduced to a certain nonlinear , nonlocal fokker - planck equation .he presented numerical evidence indicating that the ysm by itself exhibits `` wealth condensation , '' in which all the wealth ends up in the hands of a single oligarch .he also showed that , when supplemented with a simple model for redistribution , the ysm yields pareto - like wealth distributions , including a cutoff at low wealth and an approximate power law at large wealth , very reminiscent of eq .( [ eq : parteopdf ] ) .the ysm can be described by a very simple algorithm .the version of the algorithm we shall use here is completely equivalent to that described by boghosian , though we state it in the following slightly different fashion : choose two agents from the population at random to engage in a financial transaction .call them agent 1 and agent 2 , and denote their respective wealth values by and .the amount of wealth that will be transferred from agent 1 to agent 2 in this transaction is then , where is sampled from a _ symmetric _ probability density function denoted by .note that , because is symmetric , the two agents both have even odds of winning and losing .it may seem odd that an algorithm which gives even odds of winning to both agents engaging in a transaction would cause wealth to concentrate , but this is indeed the case .this was demonstrated by extensive numerical simulations in , and there it was conjectured that the time - asymptotic state of is a generalized function , with support at , zeroth and first moments given by and respectively , and possibly divergent higher moments .this corresponds to an oligarchical state with . in this paper, we confirm that conjecture by demonstrating that the gini coefficient is a monotone increasing lyapunov functional of the boltzmann equation for the ysm , and that it reaches a maximum value of in the above - described oligarchical state .the boltzmann equation of the yard - sale model of asset exchange may be written + \frac{1}{n}\int_0^{\frac{w}{1+\beta}}dx \ ; p(x ) \left[p(w-\beta x)-\frac{1}{1+\beta}p\left(\frac{w}{1+\beta}\right)\right ] \right\ } , \label{eq : boltzmann}\ ] ] where is the symmetric distribution described in the last section .this was derived by boghosian using arguments similar to the derivation of the molecular boltzmann equation and also using a master equation approach , and so we shall not re - derive it in this paper . instead, we shall demonstrate that the gini coefficient is a lyapunov function for this equation .that is , we shall show that is monotone non - decreasing as a consequence of the above dynamics for .the lyapunov function of the molecular boltzmann equation is traditionally called boltzmann s `` function . 
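before turning to the proof, the monotone growth of the gini coefficient under these dynamics can be observed directly in an agent-based simulation. in the sketch below the symmetric density for the transacted fraction is taken, for concreteness, to be uniform on a small interval [-beta_max, beta_max], and the population size, the number of transactions and beta_max are arbitrary illustrative choices; total wealth is conserved exactly, yet the gini coefficient drifts towards one.

```python
import numpy as np

def gini(w):
    """Sample Gini coefficient of the wealth vector w."""
    w = np.sort(w)
    n = len(w)
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * w) / (n * np.sum(w))

def yard_sale(n_agents=500, n_steps=1_000_000, beta_max=0.1, seed=6):
    """Agent-based yard-sale dynamics: in each transaction two random agents
    stake a fraction beta (symmetric around zero) of the POORER agent's
    wealth; positive beta moves wealth from agent 1 to agent 2. Both agents
    have even odds, yet the Gini coefficient drifts upward."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)                      # equal initial wealth
    history = []
    for step in range(n_steps):
        i, j = rng.integers(n_agents), rng.integers(n_agents)
        if i == j:
            continue
        beta = rng.uniform(-beta_max, beta_max)
        dw = beta * min(w[i], w[j])            # stake limited by the poorer agent
        w[i] -= dw
        w[j] += dw
        if step % 100_000 == 0:
            history.append((step, gini(w)))
    return w, history

if __name__ == "__main__":
    w, history = yard_sale()
    for step, g in history:
        print(f"after {step:8d} transactions:  G = {g:.3f}")
```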
''adopting that nomenclature , we will show that the gini coefficient is an function for the yard - sale model boltzmann equation .our first task will be to substitute eq .( [ eq : boltzmann ] ) into eq .( [ eq : dgdt ] ) , and demonstrate that thereby obtained is greater than or equal to zero .we partition this task by rewriting the above as where we have defined , \label{eq : boltzmann1}\ ] ] and .\label{eq : boltzmann2}\ ] ] we shall now show that the corresponding rates of increase of the gini coefficient given by eq .( [ eq : dgdt ] ) , are separately greater than or equal to zero for .we first consider .combining eqs .( [ eq : frechet ] ) , ( [ eq : boltzmann1 ] ) and ( [ eq : dgdtj ] ) , we have \int_{-1}^1d\beta \;\eta(\beta ) \left [ p(w)-\frac{1}{1+\beta}p\left(\frac{w}{1+\beta}\right ) \right]\\ & = & \frac{2}{w}\int_{-1}^1d\beta \;\eta(\beta)\int_0^\infty dw\ ; wp(w ) -\frac{2}{w}\int_{-1}^1d\beta \;\eta(\beta)\int_0^\infty dw\ ; w\frac{1}{1+\beta}p\left(\frac{w}{1+\beta}\right)\\ & & + \frac{2}{nw}\int_{-1}^1d\beta \;\eta(\beta ) \int_0^\infty dw\;\frac{1}{1+\beta}p\left(\frac{w}{1+\beta}\right)\int_0^w dx\ ; p(x)(w - x)\\ & & -\frac{2}{nw}\int_{-1}^1d\beta \;\eta(\beta ) \int_0^\infty dw\;p(w)\int_0^w dx\ ; p(x)(w - x).\end{aligned}\ ] ] use the change of variables in the second and third terms , and then change the name of the integration variable from back to to obtain .\end{aligned}\ ] ] the first term above vanishes because the integrand is odd in , integrated from to .the second term is nonnegative because is a non - decreasing function of , as follows from thus we have demonstrated that we next consider . combining eqs .( [ eq : frechet ] ) , ( [ eq : boltzmann2 ] ) and ( [ eq : dgdtj ] ) , we have \\ & & \int_0^{\frac{w}{1+\beta}}dy\ ; p(y ) \left[p(w-\beta y)-\frac{1}{1+\beta}p\left(\frac{w}{1+\beta}\right)\right]\\ & = & -\frac{2}{nw}\int_{-1}^1d\beta \;\eta(\beta ) \int_0^\infty dw\;w \int_0^{\frac{w}{1+\beta}}dy\ ; p(y)p(w-\beta y ) \\ & & + \frac{2}{nw}\int_{-1}^1d\beta \;\eta(\beta)\frac{1}{1+\beta } \int_0^\infty dw\;w \int_0^{\frac{w}{1+\beta}}dy\ ; p(y)p\left(\frac{w}{1+\beta}\right ) \\ & & + \frac{2}{n^2w}\int_{-1}^1d\beta \;\eta(\beta ) \int_0^\infty dw\ ; \int_0^w dx\ ; p(x)(w - x ) \int_0^{\frac{w}{1+\beta}}dy\ ; p(y)p(w-\betay ) \\ & & -\frac{2}{n^2w}\int_{-1}^1d\beta \;\eta(\beta)\frac{1}{1+\beta } \int_0^\infty dw \int_0^w dx\ ; p(x)(w - x ) \int_0^{\frac{w}{1+\beta}}dy\ ; p(y)p\left(\frac{w}{1+\beta}\right).\end{aligned}\ ] ] in the first and third terms , swap the order of integration so that is outermost , and then make the substitution . 
in the second and fourth terms , use the change of variable , and then swap the order of integration so that is outermost .the result is the integral over can be performed for the first two terms , whereupon they cancel .now swap the order of integration in the remaining two integrals so that is outermost , followed by , and then to obtain \\ & & + \frac{2}{n^{2}w } \int_{-1}^{+1 } d\beta \ ; \eta ( \beta ) \int_{0}^{\infty } dx \ ; p(x ) \int_{\frac{x}{1+\beta}}^{\infty } dy \ ;p(y ) \left [ \beta \int_{y}^{\infty } du \ ; p(u)y - \beta \int_{y}^{\infty } du \ ; p(u)u \right].\end{aligned}\ ] ] in the second term above , note that .the term with then vanishes upon integration over , leaving us with .\end{aligned}\ ] ] next note that because and , it follows that so the integral over from to can be split into one from to plus another from to , resulting in \\ & = & + \frac{2}{n^{2}w } \int_{-1}^{+1 } d\beta \ ; \eta(\beta ) \int_{0}^{\infty } dx \ ; p(x ) \int_{0}^{\frac{x}{1+\beta } } dy \ ; p(y)\\ & & \;\;\;\;\;\;\;\;\;\ ; \left [ \int_{\frac{x}{1+\beta}}^{x-\beta y } du \ ; p(u)((x-\beta y)-u ) + \int_{y}^{\frac{x}{1+\beta } } du \; p(u)(u - y ) \right].\end{aligned}\ ] ] the integrands are now manifestly positive , and we may conclude the above demonstration is admittedly tedious. there may be a shorter route to this result , but as of this writing we have not been able to find one .combining eqs .( [ eq : dgdt1geq0 ] ) and ( [ eq : dgdt2geq0 ] ) , we have and so the gini coefficient is proven to be a lyapunov functional of the boltzmann equation .in earlier work , boghosian demonstrated that in the limit of small , the boltzmann equation , eq .( [ eq : boltzmann ] ) , reduces to a fokker - planck equation of the form where and is a constant . from its method of derivation, one would expect that is also a lyapunov functional of this equation . herewe demonstrate this directly . combining eqs .( [ eq : frechet ] ) , ( [ eq : dgdt ] ) and ( [ eq : fp ] ) yields \frac{\partial^2}{\partial w^2}\left(\gamma \frac{w^2}{2}c(w)p(w)\right)\\ & = & -\frac{2}{w}\int_0^\infty dw \ ; \left[-1+\frac{1}{n}\int_0^w dx\ ; p(x)\right ] \frac{\partial}{\partial w}\left(\gamma \frac{w^2}{2}c(w)p(w)\right ) \\ & = & \frac{2}{w}\int_0^\infty dw \ ; \frac{1}{n}p(w)\gamma \frac{w^2}{2}c(w)p(w)\\ & = & \frac{\gamma}{nw}\int_0^\infty dw \ ; [ p(w)]^2 c(w)\geq 0,\end{aligned}\ ] ] where we have integrated by parts .the gini coefficient is thus proven to be a lyapunov functional of the fokker - planck equation .it is well known that the equilibrium solution of the molecular boltzmann equation , namely the maxwell - boltzmann distribution , may be found by setting the variation of boltzmann s function to zero , under the constraints of fixed mass , momentum and energy .we may attempt an analogous computation here , but , as conjectured by boghosian , the equilibrium solution of the boltzmann equation for the yard - sale model without redistribution is a singular generalized function , .it is zero for , has zeroth moment and first moment , and its higher moments may not exist .it is only when redistribution is included that steady - state solutions similar to the pareto distribution , eq .( [ eq : parteopdf ] ) , are obtained .still , it is instructive to see if the variational approach will yield this singular generalized function , so we turn our attention to that problem in this last section . 
using eq .( [ eq : gini2 ] ) for the gini coefficient , $ ] , we introduce the lagrange multipliers and to enforce the constraints respectively .we obtain - \lambda\int_0^\infty dw\ ; p(w ) - \mu\int_0^\infty dw\ ; p(w ) w\right ) \nonumber\\ & = & \frac{2}{w}\left[-w+\frac{1}{n}\int_0^w dx\ ; p(x)(w - x)\right ] - \lambda - \mu w , \label{eq : var}\end{aligned}\ ] ] where we have used eq . ( [ eq : frechet ] ) .interpreting this in a weak sense , we take the zeroth moment of this with respect to . using eqs .( [ eq : af ] ) , ( [ eq : gini1 ] ) and ( [ eq : gini2 ] ) , we obtain next , differentiating eq .( [ eq : var ] ) yields - \mu \nonumber\\ & = & \frac{2}{w}\left[-1+f(w)\right ] - \mu \nonumber\\ & = & -\frac{2}{w}a(w ) - \mu , \label{eq : var1}\end{aligned}\ ] ] where we have used eq .( [ eq : af ] ) . taking the first moment of this with respect to , and using eq .( [ eq : gini2 ] ) , we obtain also , taking the limit as of eq .( [ eq : var1 ] ) yields eqs .( [ eq : mom0 ] ) , ( [ eq : mom1 ] ) and ( [ eq : momlim ] ) can be solved simultaneously to yield and , most importantly , .so , under the dynamics of the yard - sale model , increases in time and asymptotically approaches the value one , corresponding to a state of `` perfect oligarchy . '' finally , note that differentiating eq .( [ eq : var1 ] ) one more time immediately yields , valid everywhere expect , as expected .we have proven that the gini coefficient is a lyapunov function of the boltzmann equation for the yard - sale model of asset exchange , as well as the fokker - planck equation obtained in the limit of small transaction sizes .we have also shown that the equilibrium distribution , obtained in the time - asymptotic limit , is zero for all , and corresponds to .as noted earlier , it is only when the yard - sale model is supplemented with a mechanism for redistribution that steady states similar to the pareto distribution are found . with redistribution , however , the gini coefficient is clearly no longer a lyapunov functional , since it would be possible to begin with a higher concentration of wealth than that obtained in the time - asymptotic limit. it may be possible to find a different lyapunov functional for the boltzmann or fokker - planck equations with a redistribution term , but we leave this as a topic for future study . | in recent work , boltzmann and fokker - planck equations were derived for the `` yard - sale model '' of asset exchange . for the version of the model without redistribution , it was conjectured , based on numerical evidence , that the time - asymptotic state of the model was oligarchy complete concentration of wealth by a single individual . in this work , we prove that conjecture by demonstrating that the gini coefficient , a measure of inequality commonly used by economists , is an function of both the boltzmann and fokker - planck equations for the model . |
astronomical images obtained form the ground suffer from serious degradation of resolution , because the light passes through a turbulent medium ( the atmosphere ) before reaching the detector .a number of methods was developed to alleviate this phenomenon and multiple special - case solutions were implemented as well ( e.g. ) , but none of them is able to provide a perfect correction and restore the diffraction - limited ( dl ) image .therefore , the best observatories , in terms of angular resolution , are the spaceborne ones , since they are limited only by the dl .recently , there is an increasing interest in the attempts to overcome the dl boundary .devices which can give such possibility are called the quantum telescopes ( qt ) .first qts will probably work in the uv , optical and ir bands , mainly because of the speed , maturity and reliability of the detectors .latest progress in adaptive optics ( ao ) , especially so called extreme ao , makes the use of qts realistic also in ground - based observatories . in this letterthe general idea of quantum telescopes is considered . in particularwe refer to the setup proposed by , since , to our knowledge , it is the only existing detailed description of a qt . in fig .[ fig : qtscheme ] we propose an upgraded version of this setup . according to , each photon coming from the extended source triggers a signal by qnd detection and gets cloned . the coincidence detector controlled by the triggeris turned on for a short period and registers the clones .after that it is quickly turned off , so that it receives only a small fraction of spontaneous emission from the cloning medium and virtually no clones from other photons . if the source is too bright and emits too many photons per unit of time , a gray filter should be installed . as a result , a set of clones is produced and registered .the centroid position of the clones cloud is used to add 1 adu . ] at the corresponding position of the high resolution output image .the exposure time has to be much longer than in classic telescopes ( ct ) , since ( a ) in most cases a narrowband filter has to be applied and ( b ) the qnd detection efficiency is much below 100% .below we present the results of our detailed simulations of the qt system , discuss the feasibility of building such a device and predict its expected performance . to our knowledge , this is the first paper describing the detailed simulations of a qt of any design , as this is a preliminary concept . in our simulations as input images we used parts of real images obtained by the hubble space telescope ( hst ) .such images are optimal for the purpose of qt testing because this observatory is working at its dl . we cropped and decimated them to 200 pixels to speed up the computations .for the comparison , we also simulated the process of digital image formation in the case of a ct ( for a review on high angular resolution imaging see ) using the same images ( fig .[ fig : obscuredpupil]e ) .we assumed a real telescope for which the pupil is obscured by a secondary mirror and its truss ( spider ) .we assumed a similar size of a secondary mirror and truss as it is installed on the hst ( fig .[ fig : obscuredpupil]a ) . in the case of a ct , we sampled the counts ( photons ) from the original image ( fig .[ fig : obscuredpupil]d ) and distributed them according to the airy diffraction pattern decimated to 61x61 pixels ( fig .[ fig : obscuredpupil]b ) . 
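A sketch of the classical-telescope (CT) image formation described above: photon origins are sampled from the reference image treated as a probability map, and each photon is then displaced by an offset drawn from a discretized Airy pattern standing in for the obscured-pupil PSF on a 61x61 grid. The unobscured Airy form, the kernel scale and the photon budget are placeholder assumptions; the actual HST-like pupil with secondary mirror and spider is not reproduced here.

```python
import numpy as np
from scipy.special import j1

rng = np.random.default_rng(0)

def airy_kernel(size=61, scale=6.0):
    """Discretized Airy pattern I(r) ~ (2 J1(x)/x)^2 on a size x size grid.
    'scale' sets how many Airy units span the kernel (placeholder value)."""
    c = size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(xx - c, yy - c) * scale / c
    r[c, c] = 1e-12                      # avoid 0/0 at the centre
    psf = (2.0 * j1(r) / r) ** 2
    psf[c, c] = 1.0                      # limiting value of (2 J1(x)/x)^2 as x -> 0
    return psf / psf.sum()

def classic_telescope(ref_img, n_photons=200_000, kernel=None):
    """Sample photon origins from the reference image and scatter each one
    according to the PSF, accumulating counts on the detector grid."""
    if kernel is None:
        kernel = airy_kernel()
    h, w = ref_img.shape
    p_src = (ref_img / ref_img.sum()).ravel()
    src = rng.choice(h * w, size=n_photons, p=p_src)
    ys, xs = np.unravel_index(src, (h, w))
    k = kernel.shape[0]
    off = rng.choice(k * k, size=n_photons, p=kernel.ravel())
    dys, dxs = np.unravel_index(off, (k, k))
    dys -= k // 2
    dxs -= k // 2
    img = np.zeros((h, w))
    yy = np.clip(ys + dys, 0, h - 1)
    xx = np.clip(xs + dxs, 0, w - 1)
    np.add.at(img, (yy, xx), 1)
    return img

ref = np.zeros((200, 200)); ref[90:110, 90:110] = 1.0        # toy reference image
ct_img = classic_telescope(ref, n_photons=50_000)
```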
for the simulation of a qt, each photon from the reference image was converted to cloned photons ( fig .[ fig : obscuredpupil]c ) .for we assumed the poissonian distribution .the clones were arranged using the gaussian profile ( see fig . 2 therein for justification ; we assumed = 10 ) centered around the photon s position . to include the effects of spontaneous emission, on the top of it we added counts distributed equally within the coincidence detector plane and governed by the poissonian distribution .the mean noise level was tuned to achieve a given signal - to - noise ratio ( snr ; see exemplary simulated exposure in fig .[ fig : drawing]b ) .the assessment of the snr was based on the comparison of the number of cloned photons with the number of counts originating from the spontaneous emission within the circular aperture of 3 radius ( i.e. within the aperture , for which nearly all the clones are received ) .it follows the approach of snr derivation presented in . in the next stage of simulations of the qt image formation , we computed the centroid of clones employing the matched filtering approach .as the cloned photons exhibit gaussian spatial distribution , in our calculations the image registered on the coincidence detector was first convolved with the gaussian ( = 10 ) and then the centroid was obtained from the position of the maximum value of such a filtered image ( fig .[ fig : drawing]c ) . = (, ) ) generate generate clones in the around ( , ) compute paste to (, ) we ran the simulations for different numbers of clones , reaching also very high numbers ( up to the expected value of ~10k , see for justification ) .the mean level of the poissonian noise of spontaneous emission was set so that snr was : 3/1 , 2/1 , 1 , 1/2 , 1/3 , 1/4 , 1/5 , 1/6 , 1/7 , 1/8 , 1/9 , 1/10 , 1/11 , 1/12 , 1/13 and 1/14 .such a selection of snr includesa value of 1/7.3 which was assessed for qt in .the quality of the simulated qt outcomes was assessed by two indicators : peak signal - to - noise ratio ( psnr ) and mean centroid error ( rms value ) . for 16-bit pixel representationthe psnr measure is defined as follows : where and denotes the intensity of pixel at ( ) in respectively qt simulated image and reference high - fidelity image .the psnr and the mean centroid errors are depicted in fig . [ fig : surfs ] .the place where both dependencies meet each other was retrieved and presented in fig .[ fig : betterworse ] .it shows the minimal requirements for the clones number and the snr level , which should be satisfied to achieve the resolution enhancement in qt .this curve can be treated as a first guidance for selection of qt parameters . as seen from figs .[ fig : surfs ] and [ fig : betterworse ] , assuming snr 7.3 , in average clones per 1 detected photon are necessary to produce an image sharper than ct .for clones the image is virtually ideally restored , even for the lowest values of snrs .the noticeable improvement of the outcome with higher clones count is related to the generally better estimation of the centroid when using the mf approach .for very small number of clones , the photons distribution is dominated by isolated photon - detection events and the mf output is strongly dependent on actual arrangement of registered clones .in contrast , for higher numbers of clones , the gaussian becomes more uniform and therefore , the mf provides much more reliable estimations even if the snr remains the same . 
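The clone-centroiding step can be sketched directly: a Poisson number of clones is scattered with a Gaussian profile (sigma = 10 px, as in the text) around the true photon position, spontaneous-emission counts are added as a uniform Poisson background, and the matched filter is a Gaussian convolution followed by an argmax. The psnr_16bit helper implements the 16-bit PSNR definition quoted above; the detector grid size, clone budget and background level are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

SIGMA = 10.0           # clone-cloud width in pixels, as in the text
GRID  = 201            # coincidence-detector size (assumed)

def detect_one_photon(true_xy, n_clones_mean=1000, noise_mean_per_px=0.02):
    """One qnd-triggered exposure: clones around true_xy plus a uniform
    Poisson background from spontaneous emission."""
    frame = np.zeros((GRID, GRID))
    n_clones = rng.poisson(n_clones_mean)
    xy = rng.normal(loc=true_xy, scale=SIGMA, size=(n_clones, 2))
    ij = np.clip(np.rint(xy).astype(int), 0, GRID - 1)
    np.add.at(frame, (ij[:, 1], ij[:, 0]), 1)          # frame[y, x]
    frame += rng.poisson(noise_mean_per_px, size=frame.shape)
    return frame

def matched_filter_centroid(frame):
    """Convolve with the known Gaussian profile and take the maximum."""
    score = gaussian_filter(frame.astype(float), SIGMA)
    j, i = np.unravel_index(np.argmax(score), score.shape)
    return np.array([i, j])                              # (x, y) estimate

def psnr_16bit(test, ref):
    """Peak signal-to-noise ratio for 16-bit pixel representation."""
    mse = np.mean((test.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** 16 - 1) ** 2 / mse)

true_xy = np.array([120.0, 80.0])
est = matched_filter_centroid(detect_one_photon(true_xy))
print("centroid error:", np.linalg.norm(est - true_xy), "px")
```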
in fig .[ fig : outputimages ] we show exemplary resulting qt images for various snrs and clones counts .the improvement in the image quality is easily noticeable as both parameters increase .below we summarize and discuss the feasibility of the qts as emerging from the most recent literature . a constant progress in the qnd increases the qt feasibility .there remains a possibility that some other kind of discriminator of spontaneous emission will be found .possibly qnd does not require to hold the photon in a resonant cavity for several hundreds microseconds , as it was claimed before . according to , in the case of the cavity - free qnd the photons momentum change is also negligible .however , this setup is suspected to produce false detections ( < 5% ) and can not be miniaturized .authors are presently working on a version easier to miniaturize ( keyu xia priv . com . ) .we see no strong limitations for the size of the device if it could be placed e.g. at the ` cassegrain ` or ` coud ` plane optically conjugated to the pupil plane of the optical system near the focus ( fig .[ fig : focus ] ) .another technological issue concerning the qt is the spontaneous emission from the process of cloning ( for details see ) .as we showed , in our model , for a sufficiently large number of clones this problem could be overcome .however , in the general case of the qt some more sophisticated methods might be needed ( e.g. different geometrical setup or filters ) .given the recent progress in cloning , it might also become possible to clone photons from a wider wavelength span and at a greater amount .in conclusion , the general idea of the quantum telescopes seems feasible and in the near future it might be possible to construct a technology demonstrator . in this paper we presented the updated schematic `` toy model '' of a qt system and the first quantitative results of simulations of such an imager , aiming at predicting the conditions which would guarantee its optimal performance .we found that it is generally more important to provide more clones than to reduce the noise background .given the predicted snr = 7.3 , a satisfactory results are obtained from less than 60 clones . using 10k clones ,the signal is almost perfectly restored for any noise level .we encourage the interest and discussion on the idea of the qt , since even a small increase in the resolution might lead to a major breakthrough in astronomical imaging . * funding . *marian smoluchowski research consortium matter energy future from know .polish national science center grant number umo-2012/07/b/ st9/04425 .polish national science center , grant no .umo-2013/11/n / st6/03051 .poig.02.03.01 - 24 - 099/13 grant : geconii - upper silesian center for computational science and engineering .v. m. tikhomirov , `` dissipation of energy in isotropic turbulence '' , _ selected works of a. n. kolmogorov _ ( springer science + business media , 1993 ) http://dx.doi.org/10.1007/978-94-011-3030-1_47 [ online version ] beckers , j. m. , `` adaptive optics for astronomy principles , performance , and applications '' , annual review of astronomy and astrophysics * 31 , * 13 - 62 ( 1993 ) http://adsabs.harvard.edu/abs/1993ara%26a..31...13b [ online version ] tonry , j. ; burke , barry e. ; schechter , paul l , `` the orthogonal transfer ccd '' , publications of the astronomical society of the pacific * 109 , * 1154 - 1164 ( 1997 ) http://adsabs.harvard.edu/abs/1997pasp..109.1154t [ online version ] roggemann , michael c. ; welsh , byron m. 
; fugate , robert q. , `` improving the resolution of ground - based telescopes '' , reviews of modern physics . * * 69,**issue 2 , 437 - 505 ( 1997 ) http://adsabs.harvard.edu/abs/1997rvmp...69..437r [ online version ] hecquet , j. ; coupinot , g. , `` a gain in resolution by the superposition of selected recentered short exposures '' , journal of optics * 16 , * 21 - 26 ( 1985 ) http://adsabs.harvard.edu/abs/1985jopt...16...21h [ online version ] buie , m. w. and grundy , w. m. and young , e. f. and young , l. a. and stern , s. a. , `` pluto and charon with the hubble space telescope .ii . resolving changes on pluto s surface and a map for charon '' , the astronomical journal , * 139 , * , 1128 - 1143 ( 2010 ) http://adsabs.harvard.edu//abs/2010aj....139.1128b [ online version ] v. korkiakoski , c. vrinaud , `` extreme adaptive optics simulations for the european elt '' , frontiers in optics , optics & photonics technical digest ( 2009 ) https://www.osapublishing.org/abstract.cfm?uri=aopt-2009-aotud1 [ online version ] kellerer , a. , `` beating the diffraction limit in astronomy via quantum cloning '' , astronomy & astrophysics , * 561 , * i d .( 2014 ) http://adsabs.harvard.edu/abs/2014a%26a...561a.118k [ online version ] xia , keyu ; johnsson , mattias ; knight , peter l. ; twamley , jason , `` cavity - free scheme for nondestructive detection of a single optical photon '' , phys .116 , 023601 ( 2016 ) http://journals.aps.org/prl/abstract/10.1103/physrevlett.116.023601 [ online version ] mutchler , m. ; beckwith , s. v. w. ; bond , h. ; christian , c. ; frattare , l. ; hamilton , f. ; hamilton , m. ; levay , z. ; noll , k. ; royle , t. , `` hubble space telescope multi - color acs mosaic of m51 , the whirlpool galaxy '' , bulletin of the american astronomical society * 37 , * p.452 ( 2005 ) http://adsabs.harvard.edu/abs/2005aas...206.1307m [ online version ] guerlin , c. ; bernu , j. ; delglise , s. ; sayrin , c. ; gleyzes , s. ; kuhr , s. ; brune , m. ; raimond , j.- . ;haroche , s. , `` progressive field - state collapse and quantum non - demolition photon counting '' , nature * 448 , * 889 - 893 ( 2007 ) http://adsabs.harvard.edu/abs/2007natur.448..889g [ online version ] lamas - linares , a. ; simon , c. ; howell , j. c. ; bouwmeester , d. , `` experimental quantum cloning of single photons '' , science * 296 , * 712 - 714 ( 2002 ) http://adsabs.harvard.edu/cgi-bin/bib_query?arxiv:quant-ph/0205149 [ online version ] barbieri , m. ; ferreyrol , f. ; blandino , r. ; tualle - brouri , r. ; grangier , ph . ,`` nondeterministic noiseless amplification of optical signals : a review of recent experiments '' , laser physics letters * 8 , * issue 6 , article i d . 417 ( 2011 )http://adsabs.harvard.edu/abs/2011laphl...8..417b [ online version ] ralph , t. c. and lund , a. p. , `` nondeterministic noiseless linear amplification of quantum systems '' , american institute of physics conference series * 1110 , * p. 155 - 160( 2009 ) http://adsabs.harvard.edu/abs/2009aipc.1110..155r [ online version ] s. e. derenzo , woon - seng choong and w. w. moses , `` fundamental limits of scintillation detector timing precision '' , physics in medicine and biology * 59 , * 3261 - 3286 ( 2014 ) http://dx.doi.org/10.1088/0031-9155/59/13/3261 [ online version ] p. grangier , j. a. levenson and jean - philippe poizat , `` quantum non - demolition measurements in optics '' , nature * 396 , * 537 - 542 ( 1998 ) http://www.nature.com/nature/journal/v396/n6711/abs/396537a0.html [ online version ] harvey , j. e. 
and ftaclas , c. , `` diffraction effects of telescope secondary mirror spiders on various image - quality criteria '' , applied optics .* 34 , * 6337 - 6349 ( 1995 ) http://adsabs.harvard.edu/abs/1995apopt..34.6337h [ online version ] kellerer , a. , `` beating the diffraction limit in astronomy via quantum cloning ( corrigendum ) '' , astronomy & astrophysics , * 582 , * i d . c3 ( 2015 ) http://www.aanda.org/articles/aa/abs/2015/10/aa22665e-13/aa22665e-13.html [ online version ] c. w. helstrom , d. w. fry , l. costrell and k. kandiah , `` statistical theory of signal detection '' , international series of monographs in electronics and instrumentation , vol .9 , ( pergamon press , oxford ; new york ) , 2nd edition ( 1968 ) http://www.sciencedirect.com/science/book/9780080132655 [ online version ] woodward , p.m. , `` probability and information theory with applications to radar '' , ( norwood , ma : artech house ) , 2nd edition ( 1980 ) http://www.sciencedirect.com/science/book/9780080110066 [ online version ] de martini , f. ; sciarrino , f. , `` investigation on the quantum - to - classical transition by optical parametric amplification : generation and detection of multiphoton quantum superposition '' , optics communications * 337 , * 4452 ( 2015 ) http://www.sciencedirect.com/science/article/pii/s0030401814007640 [ online version ] nogues , g. ; rauschenbeutel , a. ; osnaghi , s. ; brune , m. ; raimond , j. m. ; haroche , s , `` seeing a single photon without destroying it '' , nature * 400 , * 239 - 242 ( 1999 ) http://adsabs.harvard.edu/abs/1999natur.400..239n [ online version ] | quantum telescope is a recent idea aimed at beating the diffraction limit of spaceborne telescopes and possibly also other distant target imaging systems . there is no agreement yet on the best setup of such devices , but some configurations have been already proposed . + in this letter we characterize the predicted performance of quantum telescopes and their possible limitations . our extensive simulations confirm that the presented model of such instruments is feasible and the device can provide considerable gains in the angular resolution of imaging in the uv , optical and infrared bands . we argue that it is generally possible to construct and manufacture such instruments using the latest or soon to be available technology . we refer to the latest literature to discuss the feasibility of the proposed qt system design . copyright : the optical society ( osa publishing ) 2016 . published in optics letters vol . 41 no . 6 ( 2016 ) + |
in computer graphics programming the standard framework for modeling points in space is via a projective representation .so , for handling problems in three - dimensional geometry , points in euclidean space are represented projectively as rays or vectors in a four - dimensional space , the additional vector is orthogonal to , , and is normalised to 1 , . from the definition of it is apparent that is the projective representation of the origin in euclidean space .the projective representation is _ homogeneous _, so both and represent the same point .projective space is also not a linear space , as the zero vector is excluded . given a vector in projective space , the euclidean point is then recovered from the components of define a set of homogeneous coordinates for the position .the advantage of the projective framework is that the group of euclidean transformations ( translations , reflections and rotations ) is represented by a set of linear transformations of projective vectors .for example , the euclidean translation is described by the matrix transformation this linearisation of a translation ensures that compounding a sequence of translations and rotations is a straightforward exercise in projective geometry .all one requires for applications is a fast engine for multiplying together matrices .the main operation in projective geometry is the _ exterior product _, originally introduced by grassmann in the nineteenth century .this product is denoted with the wedge symbol .the outer product of vectors is associative and totally antisymmetric .so , for example , the outer product of two vectors and is the object , which is a rank-2 antisymmetric tensor or _ bivector_. the components of are the exterior product defines the _ join _ operation in projective geometry , so the outer product of two points defines the line between them , and the outer product of three points defines a plane . in this scheme a line in three dimensionsis then described by the 6 components of a bivector .these are the plcker coordinates of a line .the associativity and antisymmetry of the outer product ensure that which imposes a single quadratic condition on the coordinates of a line .this is the plcker condition .the ability to handle straight lines and planes in a systematic manner is essential to practically all graphics applications , which explains the popularity of the projective framework .but there is one crucial concept which is missing .this is the euclidean _ distance _ between points .distance is a fundamental concept in the euclidean world which we inhabit and are usually interested in modeling . but distance can not be handled elegantly in the projective framework , as projective geometry is non - metrical .any form of distance measure must be introduced via some additional structure .one way to proceed is to return to the euclidean points and calculate the distance between these directly .mathematically this operation is distinct from all others performed in projective geometry , as it does not involve the exterior product ( or duality ) .alternatively , one can follow the route of classical planar projective geometry and define the additional metric structure through the introduction of the _ absolute conic _ . butthis structure requires that all coordinates are complexified , which is hardly suitable for real graphics applications .in addition , the generalisation of the absolute conic to three - dimensional geometry is awkward . 
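A short sketch of the projective machinery in use: the 4x4 matrix of a Euclidean translation acting on homogeneous coordinates, recovery of the Euclidean point by dividing out the fourth component, and the Pluecker coordinates of the join of two points as the antisymmetrized outer product, for which the single quadratic Pluecker condition holds identically.

```python
import numpy as np

def translation_matrix(a):
    """4x4 matrix performing the Euclidean translation x -> x + a
    on homogeneous coordinates (x1, x2, x3, 1)."""
    T = np.eye(4)
    T[:3, 3] = a
    return T

def to_homogeneous(x, w=1.0):
    return np.append(w * np.asarray(x, float), w)   # any overall scale represents the same point

def to_euclidean(X):
    return X[:3] / X[3]

def pluecker_line(P, Q):
    """Antisymmetric outer product L = P ^ Q of two homogeneous points;
    its 6 independent components are the Pluecker coordinates of the line."""
    L = np.outer(P, Q) - np.outer(Q, P)
    d = L[3, :3]                              # direction part: q - p (for normalised points)
    m = L[:3, :3][[1, 2, 0], [2, 0, 1]]       # moment part: p x q, read off the 3x3 block
    return d, m

x = np.array([1.0, 2.0, 3.0])
a = np.array([0.5, -1.0, 2.0])
print(to_euclidean(translation_matrix(a) @ to_homogeneous(x, w=2.0)))   # -> x + a, scale drops out

d, m = pluecker_line(to_homogeneous([0, 0, 1]), to_homogeneous([1, 1, 1]))
print(np.dot(d, m))   # quadratic Pluecker condition: identically zero for a genuine line
```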
there is little new in these observations .grassmann himself was dissatisfied with an algebra based on the exterior product alone , and sought an algebra of points where distances are encoded in a natural manner .the solution is provided by the _conformal model _ of euclidean geometry , originally introduced by mbius in his study of the geometry of spheres .the essential new feature of this space is that it has mixed signature , so the inner product is not positive definite .in the nineteenth century , when these developments were initiated , mixed signature spaces were a highly original and somewhat abstract concept . today, however , physicists and mathematicians routinely study such spaces in the guise of special relativity , and there are no formal difficulties when computing with vectors in these spaces . as a route to understanding the conformal representation of points in euclidean geometrywe start with a description of the _stereographic projection_. this map provides a means of representing points as null vectors in a space of two dimensions higher than the euclidean base space .this is the conformal representation .the inner product of points in this space recovers the euclidean distance , providing precisely the framework we desire .the outer product extends the range of geometric primitives from projective geometry to include circles and spheres , which has many applications .the conformal model of euclidean geometry makes heavy use of both the interior and exterior products . as such, it is best developed in the language of _ geometric algebra _ a universal language for geometry based on the mathematics of _ clifford algebra_ .this is described in section [ sga ] .the power of the geometric algebra development becomes apparent when we discuss the group of conformal transformations , which include euclidean transformations as a subgroup .as in the projective case , all euclidean transformations are linear transformations in the conformal framework . 
furthermore , these transformations are all _ orthogonal _ , and can be built up from primitive reflections .the join operation in conformal space generalises the join of projective geometry .three points now define a line , which is the circle connecting the points .if this circle passes through the point at infinity it is a straight line .similarly , four points define a sphere , which reduces to a plane when its radius is infinite .these new geometric primitives provide a range of intersection and reflection operations which dramatically extend the available constructions in projective geometry .for example , reflecting a line in a sphere is encoded in a simple expression involving a pair of elements in geometric algebra .working in this manner one can write computer code for complicated geometrical operations which is robust , elegant and highly compact .this has many potential applications for the graphics industry .the stereographic projection provides a straightforward route to the principle construction of the conformal model the representation of a point as a null vector in conformal space .the stereographic projection maps points in the euclidean space to points on the unit sphere , as illustrated in figure [ fstereog ] .suppose that the initial point is given by , and we write where is the magnitude of the vector , .the corresponding point on the sphere is where is the unit vector perpendicular to the plane defining the south pole of the sphere .the angle , is related to the distance by which inverts to give the stereographic projection maps into , where is the south pole of .we complete the space by letting the south pole represent the point at infinity .we therefore expect that , under euclidean transformations of , the point at infinity should remain invariant .is mapped to the unit sphere . given a point in we form the line through this point and the south pole of the sphere .the point where this line intersects the sphere defines the image of the projection.,width=340 ] we now have a representation of points in with unit vectors in the space . but the constraint that the vector has unit magnitude means that this representation is not homogeneous .a homogeneous representation of geometric objects is critical to the power of projective geometry , as it enables us to write the equation of the line through and as clearly , if satisfies this equation , then so to does . to achieve a homogeneous representation we introduce a further vector , , which has _ negative signature _, we also assume that is orthogonal to and .we can now replace the unit vector with the vector , where the vector satisfies so is _null_. this equation is homogeneous , so we can now move to a homogeneous encoding of points and let and represent the _ same _ point in . multiplying the vector in equation ( [ ex1 ] ) by establish the conformal representation the vectors and extend the euclidean space to a space with two extra dimensions and signature .it is generally more convenient to work with a null basis for the extra dimensions , so we define these vectors satisfy the vector is now which defines our standard representation of points as vectors in conformal space . 
given a general , unnormalised null vector in conformal space , the standard form of equation ( [ eptmap ] ) is recovered by setting this map makes it clear that the null vector now represents the point at infinity .in general we will not assume that our points are normalised , so the components of are homogeneous coordinates for the point .the step of normalising the representation is only performed if the actual euclidean point is required .given two null vectors and , in the form of equation ( [ eptmap ] ) , their inner product is this is the key result which justifies the conformal model approach to euclidean geometry .the inner product in conformal space encodes the _ distance _ between points in euclidean space .this is why points are represented with null vectors the distance between a point and itself is zero . since equation ( [ econfinner ] ) was appropriate for normalised points , the general expression relating euclidean distance to the conformal inner product is this is manifestly homogeneous in and .this formula returns the dimensionless distance . to introduce dimensionsone requires a fundamental length scale , so that is the dimensionless representation of the position vector .appropriate factors of can then be inserted when required . an orthogonal transformation in conformal spacewill ensure that a null vector remains null .such a transformation therefore maps points to points in euclidean space .this defines the full conformal group of euclidean space , which is isomorphic to the group .conformal transformations leave angles invariant , but can map straight lines into circles .the euclidean group is the subgroup of the conformal group which leaves the euclidean distance invariant .these transformations include translations and rotations , which are therefore linear , orthogonal transformations in conformal space .the key to developing simple representations of conformal transformations is geometric algebra , which we now describe .the language of geometric algebra can be thought of as clifford algebra with added geometric content .the details are described in greater detail elsewhere , and here we just provide a brief introduction .a geometric algebra is constructed on a vector space with a given inner product .the _ geometric _ product of two vectors and is defined to be associative and distributive over addition , with the additional rule that the ( geometric ) square of any vector is a scalar , if we write we see that the symmetric part of the geometric product of any two vectors is also a scalar .this defines the inner product , and we write the geometric product of two vectors can now be written where the exterior product is the antisymmetric combination under the geometric product , orthogonal vectors anticommute and parallel vectors commute .the product therefore encodes the basic geometric relationships between vectors .the totally antisymmetrised sum of geometric products of vectors defines the exterior product in the algebra .once one knows how to multiply together vectors it is a straightforward exercise to construct the entire geometric algebra of a vector space .general elements of this algebra are called _ multivectors _ , and they too can be multiplied via the geometric product . the two algebras which concern us in this paper are the algebras of conformal vectors for the euclidean plane and three - dimensional space . 
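The distance-encoding property is easy to check numerically. The sketch below embeds Euclidean points as null vectors in a five-dimensional space of signature (4,1), using the common normalisation X = x + (1/2)|x|^2 n_inf + n_0, which may differ from the convention above by overall scale factors; the mixed-signature inner product of two embedded points then returns minus one half of the squared Euclidean distance, and every embedded point squares to zero.

```python
import numpy as np

# Basis order: (e1, e2, e3, e_plus, e_minus); metric signature (+, +, +, +, -)
G = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

E_PLUS  = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
E_MINUS = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
N_INF = E_MINUS + E_PLUS          # null vector representing the point at infinity
N_0   = 0.5 * (E_MINUS - E_PLUS)  # null vector representing the origin

def ip(A, B):
    """Inner product in the mixed-signature conformal space."""
    return A @ G @ B

def embed(x):
    """Conformal (null-vector) representation of a Euclidean point x in R^3."""
    x = np.asarray(x, float)
    return np.concatenate([x, [0.0, 0.0]]) + 0.5 * (x @ x) * N_INF + N_0

x, y = np.array([1.0, -2.0, 0.5]), np.array([3.0, 0.0, 4.0])
X, Y = embed(x), embed(y)
print(ip(X, X))                                   # ~0: points are null vectors
print(-2.0 * ip(X, Y), np.sum((x - y) ** 2))      # equal: the inner product encodes distance
```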
for the euclidean plane ,let denote an orthonormal basis set , and write , .the full basis set is then .these generators satisfy where algebra generated by these vectors consists of the set scalars are assigned grade zero , vectors grade one , and so on .the highest grade term in the algebra is the pseudoscalar , this satisfies the steps in this calculation simply involve counting the number of sign changes as a vector is anticommuted past orthogonal vectors .this is essentially how all products in geometric algebra are calculated , and it is easily incorporated into any programming language . in even dimensions , such as the case here , the pseudoscalar anticommutes with vectors , for the case of the conformal algebra of euclidean three - dimensional space , we can define a basis as , with , , and a basis set for three - dimensional space .the algebra generated by these vectors has 32 terms , and is spanned by the dimensions of each subspace are given by the binomial coefficients .each subspace has a simple geometric interpretation in conformal geometry .the pseudoscalar for five - dimensional space is again denoted by , and this time is defined by in five - dimensional space the pseudoscalar commutes with all elements in the algebra .the signature of the space implies that the pseudoscalar satisfies so , algebraically , has the properties of a unit imaginary , though in geometric algebra it plays a definite geometric role . in a general geometric algebra , multiplication bythe pseudoscalar performs the duality transformation familiar in projective geometry .suppose that the vectors and represent two lines from a common origin in euclidean space , and we wish to reflect the vector in the hyperplane perpendicular to .if we assume that is normalised to , the result of this reflection is this is the standard expression one would write down without access to the geometric product . but with geometric algebra at our disposal we can expand into the advantage of this representation of a reflection is that we can easily chain together reflections into a series of geometric products .so two reflections , one in followed by one in , produce the transformation but two reflections generate a rotation , so a rotation in geometric algebra can be written in the simple form where the tilde denotes the operation of _ reversion _ , which reverses the order of vectors in any series of geometric products .given a general multivector we can decompose it into terms of a unique grade by writing where denotes the grade- part of .the effect of the reverse on is then the geometric product of an even number of positive norm unit vectors is called a rotor .these satisfy and generate rotations .a rotor can be written as where is a bivector .the space of bivectors is therefore the space of generators of rotations .these define a _ lie algebra _, with the rotors themselves defining a _ lie group_. the action of this group on vectors is defined by equation ( [ erotn ] ) , so both and define the same rotation .transformations of conformal vectors which leave the product of equation ( [ eucdist ] ) invariant correspond to the group of euclidean transformations in . to simplify the notation, we let denote the geometric algebra of a vector space with signature . the euclidean spaces of interest to us therefore have the algebras and associated with them .the corresponding conformal algebras are and respectively .each euclidean algebra is a subalgebra of the associated conformal algebra . 
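For ordinary vectors the sandwich -n a n (with n a unit vector) expands, via n a = n.a + n^a, to a - 2(a.n)n, the familiar reflection in the hyperplane orthogonal to n. The sketch below uses this vector form to check numerically that reflecting first in n and then in m rotates a vector by twice the angle between the mirrors, about the axis n x m, which is the content of the rotor formula with R = mn.

```python
import numpy as np

rng = np.random.default_rng(3)

def reflect(a, n):
    """Reflection of a in the hyperplane perpendicular to the unit vector n.
    This is the vector form of the geometric-algebra sandwich -n a n."""
    return a - 2.0 * np.dot(a, n) * n

def rodrigues(a, axis, angle):
    """Rotate a about the unit vector 'axis' by 'angle' (right-hand rule)."""
    return (a * np.cos(angle)
            + np.cross(axis, a) * np.sin(angle)
            + axis * np.dot(axis, a) * (1.0 - np.cos(angle)))

def unit(v):
    return v / np.linalg.norm(v)

n, m = unit(rng.normal(size=3)), unit(rng.normal(size=3))
a = rng.normal(size=3)

double_reflection = reflect(reflect(a, n), m)            # -m(-n a n)m = (mn) a (nm)
axis = unit(np.cross(n, m))
angle = 2.0 * np.arccos(np.clip(np.dot(n, m), -1.0, 1.0))
print(np.allclose(double_reflection, rodrigues(a, axis, angle)))   # True
```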
the operations which mainly interest us here are translations and rotations .the fact that translations can be treated as orthogonal transformations is a novel feature of conformal geometry .this is possible because the underlying orthogonal group is non - compact , and so contains null generators . to see how this allows us to describe a translation ,consider the rotor where is a vector in the euclidean space , so that .the bivector generator satisfies so is null .the taylor series for therefore terminates after two terms , leaving the rotor transforms the null vectors and into and as expected , the point at infinity remains at infinity , whereas the origin is transformed to the vector .acting on a vector we similarly obtain combining these results we find that which performs the conformal version of the translation .translations are handled as rotations in conformal space , and the rotor group provides a double - cover representation of a translation .the identity ensures that the inverse transformation in conformal space corresponds to a translation in the opposite direction , as required .similarly , as discussed above , a rotation in the origin in is performed by , where is a rotor in .the conformal vector representing the transformed point is this holds because is an even element in , so must commute with both and .rotations about the origin therefore take the same form in either space . but suppose instead that we wish to rotate about the point .this can be achieved by translating to the origin , rotating , and then translating forward again . in terms of result is the rotation is now controlled by the rotor , where the conformal model frees us up from treating the origin as a special point .rotations about any point are handled with rotors in the same manner .similar comments apply to reflections , though we will see shortly that the range of possible reflections is enhanced in the conformal model .the euclidean group is a subgroup of the full conformal group , which consists of transformations which preserve angles alone .this group is described in greater detail elsewhere .the essential property of a euclidean transformation is that the point at infinity is invariant , so all euclidean transformations map to itself .in the conformal model , points in euclidean space are represented homogeneously by null vectors in conformal space . as in projective geometry ,a multivector encodes a geometric object in via the equations one result we can exploit is that is unchanged if , where is a rotor in .so , if a geometric object is specified by via equation ( [ ecf1 ] ) , it follows that we can therefore transform the object with a general element of the full conformal group to obtain a new object . as well as translations and rotations , the conformal group includes dilations and inversions , which map straight lines into circles .the range of geometric primitives is therefore extended from the projective case , which only deals with straight lines . the first case to consider is a pair of null vectors and .their inner product describes the euclidean distance between points , and their outer product defines the bivector the bivector has magnitude which shows that is _ timelike _ , in the terminology of special relativity .it follows that contains a pair of null vectors .if we look for solutions to the equation the only solutions are the two null vectors contained in .these are precisely and , so the bivector encodes the two points directly . 
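The claim that a translation acts as a linear, orthogonal transformation on conformal vectors can be checked without implementing the rotor algebra: since X(x) -> X(x+a) extends to a linear map of the five-dimensional space, its matrix can be recovered from the images of five embedded points in general position, after which it (i) translates any other embedded point correctly, (ii) preserves the mixed-signature inner product, and (iii) leaves the point at infinity fixed. The n_inf / n_0 normalisation repeats the one used in the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

G = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])
N_INF = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
N_0   = 0.5 * np.array([0.0, 0.0, 0.0, -1.0, 1.0])

def embed(x):
    x = np.asarray(x, float)
    return np.concatenate([x, [0.0, 0.0]]) + 0.5 * (x @ x) * N_INF + N_0

a = np.array([0.7, -1.2, 2.0])                       # translation vector

# Build the 5x5 matrix from the images of five embedded points in general position.
pts = rng.normal(size=(5, 3))
A = np.column_stack([embed(p) for p in pts])
B = np.column_stack([embed(p + a) for p in pts])
T = B @ np.linalg.inv(A)

y = rng.normal(size=3)
print(np.allclose(T @ embed(y), embed(y + a)))       # linearity: translates any point
print(np.allclose(T.T @ G @ T, G))                   # orthogonality w.r.t. the (4,1) metric
print(np.allclose(T @ N_INF, N_INF))                 # the point at infinity is fixed
```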
in the conformal model ,no information is lost in forming the exterior product of two null vectors .frequently , bivectors are obtained as the result of intersection algorithms , such as the intersection of two circles in a plane .the sign of the square of the resulting bivector , , defines the number of intersection points of the circles . if then defines two points , if then defines a single point , and if then contains no points .given that bivectors now define pairs of points , as opposed to lines , the obvious question is how do we encode lines ?suppose we construct the line through the points . a point on the lineis then given by the conformal version of this line is and any multiple of this encodes the same point on the line .it is clear , then , that a conformal point is a linear combination of , and , subject to the constraint that .this is summarised by so it is the _ trivector _ which represents a line in conformal geometry .this illustrates a general feature of the conformal model geometric objects are represented by multivectors one grade higher than their projective counterpart .the extra degree of freedom is absorbed by the constraint that .now suppose that we form a general trivector from three null vectors , this must still encode a conformal line via the equation .in fact , encodes the _ circle _ through the points defined by , and . to see why ,consider the conformal model of a plane .the trivector therefore maps to a _ dual _ vector , where and is the pseudoscalar defined in equation ( [ eps2 ] ) .we see that so is a vector with positive square .such a vector can always be written in the form where is the conformal vector for the point .the dual version of the equation is which reduces to since satisfies , equation ( [ ecirc ] ) states that the euclidean distance between and is equal to the constant .this clearly defines a circle in a plane .furthermore , the radius of the circle is defined by where the minus sign in the final expression is due to the fact that the 4-vector has negative square .this equation demonstrates how the conformal framework allows us to encode dimensional concepts such as radius while keeping multivectors like as homogeneous representations of geometric objects .the equation for the radius of the circle tells us that the circle has infinite radius if this is the case of a straight line , and this equation can be interpreted as saying the line passes through the point at infinity .so , given three points , the test that they lie on a line is this test has an important advantage over the equivalent test in projective geometry .the degree to which the right - hand side differs from zero directly measures how far the points are from lying on a common line .this can resolve a range of problems caused by the finite numerical precision of most computer algorithms .numerical drift can only affect how near to being straight the line is . 
when it comes to plotting, one can simply decide what tolerance is required and modify equation ( [ elntst ] ) to read where .a similar idea can not be applied so straightforwardly in the projective framework , as there is no intrinsic measure of being ` nearly linearly dependent ' for projective vectors .next , suppose that and define two vectors in , both as rays from the origin , and we wish to find the angle between these .the conformal representation of the line can be built up from where and .similarly the line in the direction is represented by .we can therefore write so that if we let angle brackets denote the scalar part of the multivector we see that where is the angle between the lines .this is true for lines through the origin , but the expression in terms of and is unchanged under the action of a general rotor , so applies to lines meeting at any point , and to circles as well as straight lines .similar considerations apply to circles and planes in three dimensional space .suppose that the points define a plane in , so that an arbitrary point in the plane is given by the conformal representation of is where _ etc ._ , and varying and , together with the freedom to scale , now produces general null combinations of the vectors , , and . the equation for the planecan then be written so , as one now expects , it is 4-vectors which define planes .if instead we form the 4-vector then in general defines a sphere . to see why ,consider the dual of , where is now given by equation ( [ eps3 ] ) .again we find that , and we can write the equation is now equivalent to , which defines a sphere of radius , with centre . the radius of the sphere is defined by the sphere becomes a flat plane if the radius is infinite , so the test that four points lie on a common plane is as with the case of the test of collinearity , this is numerically well conditioned , as the deviation from zero is directly related to the curvature of the sphere through the four points .the conformal model extends the range of geometric primitives beyond the simple lines and planes of projective geometry .it also provides a range of new algorithms for handling intersections and reflections .first , suppose we wish to find the intersection of two lines in a plane .these are defined by the trivectors and .the intersection , or meet , is defined via its dual by the star , , denotes the dual of a multivector , which in geometric algebra is formed by multiplication by the multivector representing the join . for the case of two distinct lines in a plane ,their join is the plane itself , so the dual is formed by multiplication by the pseudoscalar of equation ( [ eps2 ] ) .the intersection of two lines therefore results in the bivector this expression is a bivector as it is the contraction of a vector and a trivector . as discussed in section [ sprims ] , a bivector can encode zero , one or two points , depending on the sign of its square .two circles can intersect in two points , for example , if they are sufficiently close to one another .two straight lines will also intersect in two points , though one of these is at infinity . in three dimensionsa number of further possibilities arise .the intersection of a line and a sphere or plane also results in a bivector , where is now the grade-5 pseudoscalar in the conformal algebra for three - dimensional euclidean space. 
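In the conformal model of the plane (signature (3,1)) the circle through three points can be extracted from the dual picture: the dual of the trivector A ^ B ^ C is the direction orthogonal, with respect to the mixed-signature metric, to the three embedded points; normalised so that s.n_inf = -1 it takes the form C(c) - (1/2) rho^2 n_inf, so its Euclidean components give the centre and s.s gives rho^2. The sketch below does this with a null-space computation and shows the radius diverging as the points approach collinearity, mirroring the well-conditioned straightness test discussed above; the n_inf / n_0 normalisation is again the common one and may differ from the paper's by scale factors.

```python
import numpy as np

G = np.diag([1.0, 1.0, 1.0, -1.0])           # basis (e1, e2, e+, e-)
N_INF = np.array([0.0, 0.0, 1.0, 1.0])
N_0   = 0.5 * np.array([0.0, 0.0, -1.0, 1.0])

def embed(x):
    x = np.asarray(x, float)
    return np.concatenate([x, [0.0, 0.0]]) + 0.5 * (x @ x) * N_INF + N_0

def circle_through(p, q, r):
    """Centre and radius of the circle through three points in the plane,
    via the dual vector of the conformal trivector P ^ Q ^ R."""
    M = np.vstack([embed(p) @ G, embed(q) @ G, embed(r) @ G])
    _, _, vt = np.linalg.svd(M)
    s = vt[-1]                                # null direction: s is orthogonal to P, Q and R
    s = s / (-(s @ G @ N_INF))                # normalise so that s . n_inf = -1
    centre = s[:2]
    radius = np.sqrt(s @ G @ s)
    return centre, radius

print(circle_through([1, 0], [0, 1], [-1, 0]))         # unit circle: centre ~ (0, 0), radius ~ 1
for eps in (0.1, 0.01, 0.001):
    _, rad = circle_through([-1, 0], [0, eps], [1, 0])  # nearly collinear points
    print(eps, rad)                                     # radius grows like 1/(2 eps): a straight line in the limit
```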
the bivector encodes the fact that a line can intersect a sphere in up to two places .if is a flat plane and a straight line , then one of the intersection points will be at infinity .more complex is the case of two planes or spheres . in this caseboth geometric primitives are 4-vectors , and .their intersection results in the trivector this trivector encodes a line .this one single expression covers a wide range of situations , since either plane can be flat or spherical .if both planes are flat their intersection is a straight line with . if one or both of the planes are spheres , their intersection results in a circle .the sign of the square of the trivector immediately encodes whether or not the spheres intersect . if the spheres do not intersect , as there are no null solutions to when . as well as new intersection algorithms ,the application of geometric algebra in conformal space provides a highly compact encoding of reflections . as a single example, suppose that we wish to reflect the line in the plane . to start with ,suppose that the point of intersection is the origin , with a straight line in the direction , and the plane defined by the points , and the origin .in this case we have the result of reflecting in is a vector in the direction , where the conformal representation of the line through the origin in the direction is . in terms of their conformal representations , we have so far this is valid at the origin , but conformal invariance ensures that it holds for all intersection points .this expression for finds the reflected line in space without even having to find the intersection point .furthermore , the line can be curved , and the same expression reflects a circle in a plane. we can also consider the case where the plane itself is curved into a sphere , in which case the transformation of equation ( [ eplnrf ] ) corresponds to an _ inversion _ in the sphere .inversions are important operations in geometry , though perhaps of less interest in graphics applications . to find the reflected line for the spherical caseis also straightforward , as all one needs is the formula for the tangent plane to the sphere at the point of intersection .this is where is the point where the line intersects the sphere .the plane can then be used in equation ( [ eplnrf ] ) to find the reflected line .conformal geometry provides an efficient framework for studying euclidean geometry because the inner product of conformal vectors is directly related to the euclidean distance between points .the true power of the conformal framework only really emerges when the subject is formulated in the language of geometric algebra .the geometric product unites the outer product , familiar from projective geometry , with the inner product of conformal vectors .each graded subspace encodes a different geometric primitive , which provides for good data typing and aids writing robust code .furthermore , the algebra includes multivectors which generate euclidean transformations .so both geometric objects , and the transformations which act on them , are contained in a single , unified framework .the remaining step in implementing this algebra in applications is to construct fast algorithms for multiplying multivectors .much as projective geometry requires a fast engine for multiplying matrices , so conformal geometric algebra requires a fast engine for multiplying multivectors .one way to achieve this is to encode each multivector via its matrix representation . 
for the important case of this representation consists of complex matrices .but in general this approach is slower than algorithms designed to take advantage of the unique properties of geometric algebra .such algorithms are under development by a number of groups around the world .these will doubtless be of considerable interest to the graphics community .cd would like to thank the organisers of the conference for all their work during the event , and their patience with this contribution .cd is supported by an epsrc advanced fellowship .lasenby and j. lasenby .surface evolution and representation using geometric algebra . in r.cippola , a. martin , editors , _ the mathematics of surfaces ix : proceedings of the ninth i m a conference on the mathematics of surfaces _ , pages 144168 .london , 2000 . | projective geometry provides the preferred framework for most implementations of euclidean space in graphics applications . translations and rotations are both linear transformations in projective geometry , which helps when it comes to programming complicated geometrical operations . but there is a fundamental weakness in this approach the euclidean distance between points is not handled in a straightforward manner . here we discuss a solution to this problem , based on conformal geometry . the language of geometric algebra is best suited to exploiting this geometry , as it handles the interior and exterior products in a single , unified framework . a number of applications are discussed , including a compact formula for reflecting a line off a general spherical surface . * conformal geometry , euclidean space + and geometric algebra * chris doran , anthony lasenby and joan lasenby cambridge university , uk keywords : geometric algebra , clifford algebra , conformal geometry , projective geometry , homogeneous coordinates , sphere geometry , stereographic projection |
gene regulatory networks play a central role in cellular function by translating genotype into phenotype . by dynamically controlling gene expression , gene regulatory networks provide cells with a mechanism for responding to environmental challenges . therefore , creating accurate mathematical models of gene regulation is a central goal of mathematical biology . delay in protein production can significantly affect the dynamics of gene regulatory networks . for example , delay can induce oscillations in systems with negative feedback , and has been implicated in the production of robust , tunable oscillations in synthetic gene circuits containing linked positive and negative feedback . indeed , delayed negative feedback is thought to govern the dynamics of circadian oscillators , a hypothesis experimentally verified in mammalian cells . in genetic regulatory networks , noise and delay interact in subtle and complex ways . delay can affect the stochastic properties of gene expression and hence the phenotype of the cell . it is well known that noise can induce switching in bistable genetic circuits ; the infusion of delay dramatically enhances the stability of such circuits and can induce an analog of stochastic resonance . variability in the delay time ( distributed delay ) can accelerate signaling in transcriptional signaling cascades . given the importance of delay in gene regulatory networks , it is necessary to develop methods to simulate and analyze such systems across spatial scales . in the absence of delay , it is well known that chemical reaction networks are accurately modeled by ordinary differential equations ( odes ) in the thermodynamic limit , i.e. when molecule numbers are sufficiently large . when molecule numbers are small , however , stochastic effects can dominate . in this case , the chemical master equation ( cme ) describes the evolution of the probability density function over all states of the system . gillespie s stochastic simulation algorithm ( ssa ) samples trajectories from the probability distribution described by the cme . while exact , the cme is difficult to analyze and the ssa can be computationally expensive . to address these issues , a hierarchy of coarse - grained approximations of the ssa has been developed ( see figure [ f : hierarchy ] ) . spatially discrete approximations , such as -leaping and -leaping , trade exactness for efficiency . at the next level are chemical langevin equations ( cles ) , which are stochastic differential equations of dimension equal to the number of species in the biochemical system . cles offer two advantages . first , unlike the ssa , the well - developed ideas from random dynamical systems and stochastic differential equations apply to cles . second , it is straightforward to simulate large systems using cles . finally , in the thermodynamic limit , one arrives at the end of the markovian hierarchy : the reaction rate equation ( rre ) . [ figure [ f : hierarchy ] : hierarchy of coarse - grained approximations of the ssa ; see section [ sec : pf ] . 
]the markovian hierarchy above ( no delay ) is well - understood , but a complete analogue of the markovian theory does not yet exist for systems with delay .the ssa has been generalized to a delay version - the dssa - to allow for both fixed and variable delay .some analogues of -leaping exist for systems with delay ; see _ e.g. _ -leaping .several methods have been used to formally derive a delay chemical langevin equation ( dcle ) from the delay chemical master equation ( dcme ) ; see section [ sec : discuss ] for details .brett and galla use the path integral formalism of martin , siggia , rose , janssen , and de dominicis to derive a dcle approximation without relying on a master equation .the brett and galla derivation produces the ` correct ' dcle approximation of the underlying delay birth - death ( dbd ) process in the sense that the first and second moments of the dcle match those of the dbd process .however , their derivation has some limitations ( see section [ sec : discuss ] ) . in particular, it gives no rigorous quantitative information about the distance between the dbd process and the dcle . in this paper, we establish a rigorous link between dbd processes and dcles by proving that the distance between the dbd process and the correct approximating dcle process converges to zero as system size tends to infinity ( as measured by expectations of functionals of the processes ) .in particular , this result applies to all moments .it is natural to express distance in terms of expectations of functionals because the dbd process is spatially discrete while the correct dcle produces continuous trajectories ( see figure [ fig : intro ] ) .further , we prove that both processes converge weakly to the thermodynamic limit .finally , we quantitatively estimate the distance between the dbd process and the correct dcle approximation as well as the distance of each of these to the thermodynamic limit .all of these results hold for both fixed delay and distributed delay ( see figure [ fig : schematic]a ) .the correct dcle approximation is distinguished within the class of gaussian approximations of the dbd process by the fact that it matches both the first and second moments of the dbd process .as we will see , it performs remarkably well at moderate system sizes in a number of dynamical settings : steady state dynamics , oscillatory dynamics , and metastable switches .we will demonstrate via simulation and argue mathematically using characteristic functions that no other gaussian process with appropriately scaled noise performs as well . in the following, the term ` dcle ' shall refer specifically to the dcle derived by brett and galla and expressed by , unless specifically stated otherwise .we prove our mathematical results in the supplement .genetic regulatory networks may be simulated using an exact dssa to account for transcriptional delay . herewe provide a heuristic derivation of a related dcle , and show that in a number of concrete examples it provides an excellent approximation of the system ( see figure [ fig : schematic ] ) .these simulations raise the following questions : is the dcle approximation valid in general ? 
can the expected quality of the approximation be quantified in general ?we answer these questions mathematically in section [ sec : main ] .we will adopt the following notation for reactions with delay , here denotes the rate of the reaction , the dashed arrow indicates a reaction with delay , and is a probability measure that describes the delay distribution .solid arrows indicate reactions without delay .accounts for the lag between the initialization of transcription and the production of mature product . in a system with distributed delay, different production events can have different delay times ; the order of the output process may therefore not match that of the input process .( b e ) simulated gene regulatory network motifs : a transcriptional cascade ( b ) , oscillators ( c ) and metastable systems ( d e ) . ]first we consider a transcriptional cascade with two genes that code for proteins and .protein is produced at a basal rate ; production of is induced by the presence of .the state of the system is represented by an ordered pair .note that we use and to denote both protein names and protein numbers .the reactions in the network , and the associated state change vectors , are given by this system can be simulated exactly using the dssa : suppose the state of the system and the reactions in the queue are known at time ( the queued reactions can be thought of as the `` input process '' ; see figure [ fig : schematic]a ) , and that the delay kernel is supported on a finite interval ] ( ) .we first approximate the number of reactions that produce ( eq . ( [ reac : type3 ] ) ) that will be completed within the interval ] must have been initiated at some time within ] into intervals of length .the ( random ) number of reactions completed within ] may be approximated by a poisson random variable with mean summing over , the ( random ) number of reactions completed within ] no longer follows a poisson distribution with mean dependent only on the state of the system at time . on the other hand ,if is too small for a given , then the poisson distribution can not be approximated by a normal distribution . in order to rigorously derive the langevin approximation andestimate the distance between the dbd and dcle processes , we will have to take a careful limit by relating to ( with as ) .we describe the proper scaling in section [ sec : main ] .applied to the transcriptional cascade , theorem [ thm : dssa_dcle ] asserts that provided scales correctly with , the distance between the dbd process and the process described by eq .converges to zero as ( as measured by expectations of functionals of the processes ) .theorem [ thm : dssa_therm ] asserts that the dbd process then converges weakly to the thermodynamic limit given by eq . as .moreover , when is correctly scaled with respect to , theorem [ thm : tube ] provides explicit bounds for the probabilities that the dbd and dcle processes deviate from a narrow tube around the solution of eq . .in the previous example , the deterministic system has a fixed point .the time series for the stochastic system , therefore , stay within a small neighborhood of this fixed point ( see inset , fig .[ fig : ff_cross ] ) . 
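a minimal ( barrio - style ) dssa sketch of this queueing procedure for the cascade is given below : x is produced at a basal rate and degraded , while production of y is induced by x and completes only after a fixed delay tau , implemented with a queue of pending completion times . the saturating induction propensity and all parameter values are illustrative assumptions , since the concrete propensities are not reproduced here .

```python
# dssa sketch for the two - gene transcriptional cascade with a fixed delay on
# the production of y . propensity forms and parameters are illustrative assumptions .
import numpy as np

def dssa_cascade(N=100, T=60.0, tau=5.0, seed=4,
                 alpha=1.0, beta=3.0, k=0.5, gx=1.0, gy=1.0):
    rng = np.random.default_rng(seed)
    t, x, y = 0.0, 0, 0
    queue = []                                    # completion times of pending y productions
    while t < T:
        a = np.array([alpha * N,                  # production of x ( no delay )
                      gx * x,                     # degradation of x
                      N * beta * (x / N) / (k + x / N),   # delayed production of y
                      gy * y])                    # degradation of y
        a0 = a.sum()
        dt = rng.exponential(1.0 / a0)
        if queue and queue[0] <= t + dt:
            t = queue.pop(0)                      # a queued y production completes first
            y += 1
            continue
        t += dt
        j = rng.choice(4, p=a / a0)               # which reaction initiates
        if j == 0:
            x += 1
        elif j == 1:
            x -= 1
        elif j == 2:
            queue.append(t + tau)                 # initiation only ; completion is delayed
        else:
            y -= 1
    return x, y

if __name__ == "__main__":
    x, y = dssa_cascade()
    print("copy numbers at the end of the run : x =", x, " y =", y)
```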
in the next example , we show that the dcle approximation remains excellent even when the deterministic dynamics are non - trivial .we consider a degrade - and - fire oscillator for which the deterministic system has a limit cycle .the dcle correctly captures the peak height and the inter - peak times for the dssa realization of the degrade and fire oscillator , in addition to statistics such as the mean and variance .the approximation does not break down at small instantaneous protein numbers .indeed , the mathematical theory developed in this work makes an important point : protein concentrations at any particular time do not limit the quality of the dcle approximation ( in the presence of delay , or otherwise ) .instead , the quality of the dcle approximation depends on the latent parameter .theorem [ thm : tube ] makes this more precise : if one fixes the allowable error in the approximation of the dbd process by the dcle process , then the time during which the approximation error stays smaller than increases with ., the dcle approximation given by eq. closely matches dssa with respect to spike height distribution ( b ) and interspike interval distribution ( c ) .in contrast , removing delay from the diffusion term results in a poor approximation of these distributions , as shown by the sizable shifts affecting the red curves .( d ) and ( e ) illustrate mean repressor protein level and repressor protein variance , respectively , as functions of system size .. provides a good approximation for all simulated values of while the performance of eq .improves as increases .the quantity represents protein number , not protein concentration .parameter values are , , , , , .a soft boundary was added at to ensure positivity . ]the degrade and fire oscillator depicted schematically in figure [ fig : schematic]c consists of a single autorepressive gene and corresponds to the reaction network the production rate is given by , where is the propensity function the enzymatic degradation rate is given by .here ; is the michaelis - menten constant , the maximal enzymatic degradation rate , and the dilution rate coefficient . in the thermodynamic limit, the system is modeled by the delay differential equation as before , denotes the concentration of protein .we model the formation of functional repressor protein using distributed delay ( described by the probability measure ) ; this delayed negative feedback can induce oscillations .figure [ fig : df_oscillator]a depicts a sample realization of the stochastic version of the degrade and fire oscillator ( the finite system size regime ) generated by dssa .the dcle approximation is in this case given by ^{\frac{1}{2 } } { \mathrm{d } } w_{t}. \end{aligned}\ ] ] figure [ fig : df_oscillator ] illustrates that eq . provides a good approximation of the dbd dynamics , even when system size is relatively small . 
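the following python sketch integrates a dcle of this type for a degrade - and - fire oscillator with distributed delay : the delayed repression is obtained by convolving the past trajectory with a discretized gamma delay kernel , and the same delayed propensity enters both the drift and the diffusion term . the hill - type production , the michaelis - menten degradation and all parameter values are assumptions chosen only to produce oscillations , not the values used in the original simulations .

```python
# dcle sketch for a degrade - and - fire oscillator with distributed delay .
import numpy as np

def gamma_kernel(mean, cv, dt, cutoff):
    """discretized gamma delay density with given mean and coefficient of variation."""
    shape = 1.0 / cv**2
    scale = mean / shape
    s = np.arange(dt, cutoff, dt)
    w = s**(shape - 1.0) * np.exp(-s / scale)
    return w / (w.sum() * dt)                       # normalize so that sum(w) * dt = 1

def dcle_degrade_and_fire(N=100, T=200.0, dt=5e-3, seed=3,
                          beta=6.0, c0=1.0, h=4.0, vmax=3.0, km=0.5, delta=0.1,
                          tau_mean=10.0, tau_cv=0.2):
    rng = np.random.default_rng(seed)
    kern = gamma_kernel(tau_mean, tau_cv, dt, cutoff=3 * tau_mean)
    lag = len(kern)
    n = int(T / dt)
    x = np.zeros(n + 1)
    for i in range(n):
        past = x[max(i - lag, 0):i][::-1]           # x(t - s) for s = dt , 2 dt , ...
        x_del = np.dot(kern[:len(past)], past) * dt # convolution with the delay kernel
        prod = beta / (1.0 + (x_del / c0)**h)       # delayed auto - repression
        deg = vmax * x[i] / (km + x[i]) + delta * x[i]
        x[i + 1] = max(x[i] + (prod - deg) * dt
                       + np.sqrt((prod + deg) * dt / N) * rng.standard_normal(), 0.0)
    return x

if __name__ == "__main__":
    x = dcle_degrade_and_fire()
    half = x[len(x) // 2:]
    print("max and min over the last half of the run :", round(half.max(), 3), round(half.min(), 3))
```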
at system size ,the spike height distribution and interspike interval distribution obtained using the dssa ( black dots in figure [ fig : df_oscillator]b[fig : df_oscillator]c ) are nearly indistinguishable from those obtained using eq .( black curves in figure [ fig : df_oscillator]b[fig : df_oscillator]c ) .further , we see a close match with respect to mean repressor protein level and repressor protein variance across a range of system sizes ( figure [ fig : df_oscillator]d[fig : df_oscillator]e ) .interestingly , the dcle approximation is very good even though the protein number approaches zero during part of the oscillation .this illustrates a central feature of the theory : the quality of the dcle approximation is a function of a latent parameter , not of the number of molecules present at any given time .the exact form of the diffusion term is crucial to the accuracy of the dcle approximation . if we remove delay from the diffusion term in eq ., we obtain ^{1/2 } { \mathrm{d } } w_{t}. \end{aligned}\ ] ] at system size ( red curves in figure [ fig : df_oscillator]b[fig : df_oscillator]c ) , dsde produces dramatically different results from those generated by the correct dcle approximation . the performance of eq. improves as increases ( figure [ fig : df_oscillator]d[fig : df_oscillator]e ) .this is expected , as both eq . andeq . converge weakly to eq . as .understanding metastability in stochastic systems is of fundamental importance in the study of biological switches .while metastability is well understood mathematically in the absence of delay , understanding the impact of delay on metastability remains a major theoretical and computational challenge .we examine two canonical examples to show that the dcle can be used to study the impact of delay on metastability : a positive feedback circuit and a co - repressive genetic toggle switch . and a hill coefficient of .the bottom row corresponds to a delay and .the tail of the hitting times distribution becomes longer with both increasing delay and increasing hill coefficient .the dcle captures the lengthening of the tail due to both effects .parameter values are , , , , , . ]the simplest metastable system consists of a single protein that drives its own production ( figure [ fig : schematic]d ) .the chemical reaction network is given by with for the propensity in the thermodynamic limit , the dynamics of this model are described by the dde here represents protein concentration and is the hill coefficient . in the thermodynamic limit , there are two stable stationary states , and , as well as an unstable stationary state .these states satisfy .in the stochastic ( finite ) regime , the stationary states and become metastable .we simulate the metastable dynamics using dssa and the dcle approximation given in this case by ^{\frac{1}{2 } } \mathrm{d } w_{t}. \end{aligned}\ ] ] figure [ fig : pf_hitting ] displays hitting time distributions for the dssa simulations ( black curves ) and eq .( blue curves ) . 
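a hedged sketch of how such hitting - time distributions can be sampled from the dcle is given below : trajectories are started in a neighborhood of the high state and the clock is stopped when they first enter a neighborhood of the low state , in the sense made precise in the next paragraph . the hill propensity and all numerical values are illustrative assumptions chosen so that the deterministic limit is bistable .

```python
# hitting - time sketch for the delayed positive - feedback switch ( dcle version ) .
import numpy as np

def hitting_time(N=20, dt=1e-2, tau=1.0, seed=None, t_max=3000.0,
                 alpha=0.1, beta=2.0, k=1.0, h=4.0, gamma=1.0,
                 x_start=2.0, x_stop=0.2):
    rng = np.random.default_rng(seed)
    lag = int(tau / dt)
    buf = [x_start] * (lag + 1)          # constant history at the high state
    x, t = x_start, 0.0
    while t < t_max:
        x_del = buf[0]                   # x( t - tau )
        prod = alpha + beta * x_del**h / (k**h + x_del**h)
        deg = gamma * x
        x = max(x + (prod - deg) * dt
                + np.sqrt((prod + deg) * dt / N) * rng.standard_normal(), 0.0)
        buf.append(x); buf.pop(0)
        t += dt
        if x < x_stop:                   # entered the neighborhood of the low state
            return t
    return np.nan                        # no transition observed within t_max

if __name__ == "__main__":
    times = np.array([hitting_time(seed=s) for s in range(5)])
    times = times[np.isfinite(times)]
    print("transitions observed :", times.size,
          "  mean hitting time :", times.mean() if times.size else "n/a")
```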
a hitting time is defined as follows : we choose neighborhoods and of and , respectively .we start the clock when a trajectory enters from the right .the clock is stopped when that trajectory first enters .a hitting time is the amount of time that elapses from clock start to clock stop .we see that for no delay ( figure [ fig : pf_hitting ] , top ) and fixed delay ( bottom ) , the dcle approximation accurately captures the hitting time distributions for hill coefficients increasing from to .hence , the dcle approximation accurately captures the rare events associated with a spatially - discrete delay stochastic process .this is significant because dsdes are more amenable to large deviations theoretical analysis than their spatially - discrete counterparts . hitting times increase dramatically as the delay increases from to , in accord with earlier analysis .a dramatic increase is also seen as the hill coefficient increases .this is due to the fact that the potential wells around and deepen as increases . and fall back into the same neighborhood before transitioning into a neighborhood of the stable point .the two right panels illustrate trajectories that leave a neighborhood of and transition to a neighborhood of before falling back into the first neighborhood .top panels correspond to dssa ; bottom panels correspond to dcle .cartoons of typical trajectories corresponding to failed transitions ( left ) and successful transitions ( right ) are shown for the dcle process .plots are shown for .the values of the other parameters are , , . ] the co - repressive toggle switch ( figure [ fig : schematic]e ) is a two - dimensional metastable system described in the thermodynamic limit by the ddes [ e : toggle_dde ] the measure describes the delay associated with production in this symmetric circuit .. has two stable stationary points and separated by the unstable manifold associated with a saddle equilibrium point . in the stochastic ( finite system size ) regime ,the stable stationary points become metastable . in this regimea typical trajectory spends most of its time near the metastable points , occasionally moving between them .figure [ fig : toggle ] displays density plots corresponding to trajectories that either successfully transition between metastable states ( four panels on the right ) or make failed transition attempts ( four panels on the left ) . even for the moderate system size , the density plots generated by dssa ( top four panels ) closely match those generated by the dcle approximation in eq .( bottom four panels ) . 
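the following two - dimensional dcle sketch mimics this setting : each protein represses the other after a fixed delay , and the script reports the fraction of time spent with one species dominant together with the number of observed switching events . the repression propensity and the parameter values are illustrative assumptions for a symmetric , bistable circuit .

```python
# two - dimensional dcle sketch for the co - repressive toggle switch .
import numpy as np

def dcle_toggle(N=40, T=3000.0, dt=1e-2, tau=1.0, seed=6,
                beta=2.0, k=1.0, h=4.0, gamma=1.0):
    rng = np.random.default_rng(seed)
    lag, n = int(tau / dt), int(T / dt)
    x = np.empty(n + 1); y = np.empty(n + 1)
    x[0], y[0] = 2.0, 0.1                      # start near the x - dominant state
    switches, state = 0, +1                    # +1 : x dominant , -1 : y dominant
    for i in range(n):
        xd = x[max(i - lag, 0)]; yd = y[max(i - lag, 0)]
        px = beta / (1.0 + (yd / k) ** h)      # production of x , repressed by delayed y
        py = beta / (1.0 + (xd / k) ** h)      # production of y , repressed by delayed x
        x[i + 1] = max(x[i] + (px - gamma * x[i]) * dt
                       + np.sqrt((px + gamma * x[i]) * dt / N) * rng.standard_normal(), 0.0)
        y[i + 1] = max(y[i] + (py - gamma * y[i]) * dt
                       + np.sqrt((py + gamma * y[i]) * dt / N) * rng.standard_normal(), 0.0)
        if state * (x[i + 1] - y[i + 1]) < -0.5:   # moved to the other metastable state
            switches += 1
            state = -state
    print("fraction of time with x dominant :", round(np.mean(x > y), 3))
    print("number of switching events :", switches)

if __name__ == "__main__":
    dcle_toggle()
```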
given the importance of rare events throughout stochastic dynamics , it is encouraging that the dcle approximation captures their statistics well .the simulations thus far described suggest that the dcle closely approximates the dbd process provided that scales properly with .we next provide mathematical statements that make this observation precise .we prove that the distance between the dbd process and the approximating dcle ( as measured by expectations of functionals of the processes ) converges to zero as the system size ( theorem [ thm : dssa_dcle ] ) .in particular , theorem [ thm : dssa_dcle ] implies that the dcle may be used to approximate all moments of the dbd process .further , we then prove that the dbd and dcle processes both converge weakly to the thermodynamic limit ( theorem [ thm : dssa_therm ] ) .theorem [ thm : dssa_therm ] strengthens a result of schlicht and winkler that establishes convergence of the first moment of the dbd process .theorem [ thm : tube ] quantitatively bounds the probabilities that the dbd and dcle processes deviate from a narrow tube around the solution of the deterministic thermodynamic limit .we first precisely describe the general setting and then state our theorems .all proofs are provided in the supplement .consider a system of biochemical species and possible reactions .we are interested in describing the dynamics as a function of a latent system parameter , the system size .let denote the state of the system at time .each reaction is described by the following : 1 . a propensity function .the firing rate of reaction is given by .2 . a state - change vector .the vector describes the change in the number of molecules of each species that results from the completion of a reaction of type .3 . a probability measure supported on ] , then proceed as follows . if , then move to time and set .if , then move to time , set , and put the state change vector into a queue along with the designated time of reaction completion , .if a reaction from the past is set to complete in ] into that possess limits from the left .our main results quantify the behavior of as .we make the following regularity assumptions on the propensities . 1 .[ i : propen_reg ] the functions have continuous derivatives of order .[ i : propen_supp1 ] there exists a compact set such that for all .[ i : propen_supp2 ] for all , only if all coordinates of the vector are nonnegative ; otherwise .the correct dcle approximation of is given for by where is a -dimensional vector of independent standard brownian motions and is given by let denote the stochastic process described by eq . and let denote the probability measure on realizations associated with this process .theorem [ thm : dssa_dcle ] controls the distance between and .[ thm : dssa_dcle ] assume that the propensities satisfy ( [ i : propen_reg])([i : propen_supp2 ] ) .fix .for every continuous observable , we have as .theorem [ thm : dssa_therm ] establishes weak convergence of the scaled process to the thermodynamic limit governed by the delay reaction rate equations [ thm : dssa_therm ] assume that the propensities satisfy ( [ i : propen_reg])([i : propen_supp2 ] ) .fix .for every continuous observable , we have as , where denotes the solution of eq . 
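the structure of the dcle above translates directly into a generic euler - maruyama integrator : each reaction contributes its state - change vector times the delayed propensity to the drift , and an independent noise term scaled by the square root of the same delayed propensity and by n to the power -1/2 . the small demo system at the bottom of the sketch is an assumed two - species example , not one taken from the text , and each delay measure is taken as a point mass ( a fixed delay ) .

```python
# generic euler - maruyama integrator for a dcle with fixed per - reaction delays .
import numpy as np

def dcle_em(fs, vs, taus, b0, N, T, dt, seed=7):
    """fs : propensity functions on the concentration vector ; vs : state - change
       vectors ; taus : fixed delays ( point - mass delay measures )."""
    rng = np.random.default_rng(seed)
    n, d = int(T / dt), len(b0)
    lags = [int(t / dt) for t in taus]
    B = np.zeros((n + 1, d)); B[0] = b0
    for i in range(n):
        drift = np.zeros(d)
        noise = np.zeros(d)
        for f, v, lag in zip(fs, vs, lags):
            b_del = B[max(i - lag, 0)]          # delayed state ( constant history before t = 0 )
            rate = max(f(b_del), 0.0)
            drift += np.asarray(v) * rate
            noise += np.asarray(v) * np.sqrt(rate) * rng.standard_normal()
        B[i + 1] = np.maximum(B[i] + drift * dt + noise * np.sqrt(dt / N), 0.0)
    return B

if __name__ == "__main__":
    # assumed demo : x made at a constant rate , x converted to y with a delayed
    # propensity , y degraded linearly .
    fs = [lambda b: 1.0, lambda b: 0.8 * b[0], lambda b: 0.5 * b[1]]
    vs = [(1, 0), (-1, 1), (0, -1)]
    taus = [0.0, 2.0, 0.0]
    B = dcle_em(fs, vs, taus, b0=(0.0, 0.0), N=100, T=40.0, dt=1e-3)
    print("final state :", np.round(B[-1], 3))
```

driving each reaction with its own brownian motion , as above , is equivalent in distribution to sampling a single multivariate gaussian increment with the correlation matrix used in the discretization that follows .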
.crucial to the proofs of theorems [ thm : dssa_dcle ] and [ thm : dssa_therm ] are the discretization of the time interval ] into subintervals of length .it is crucial that scales with in the right way .the quantitative pathwise controls in theorem [ thm : tube ] hold if .we define an euler - maruyama discretization of . for ,define . for integers and , define recursively by where \delta , \\ a_{2 } & = \frac{\sqrt{\delta}}{\sqrt{n}}\eta , \end{aligned}\ ] ] and is a mean multivariate gaussian random variable with correlation matrix defined as for , define by linearly interpolating between and .theorem [ thm : tube ] asserts that realizations of stay close to those of with high probability .[ thm : tube ] suppose that .there exist constants and such that where is a system constant defined by and is the ` discretized ' norm differential equations ( sdes ) are one of our main tools for modeling noisy processes in nature . interactions between the components of a system or network are frequently not instantaneous .it is therefore natural to include such delay into corresponding stochastic models .however , the relationship between delay sdes ( dsdes ) and the processes they model has not been fully established .delay stochastic differential equations have previously been formally derived from the delay chemical master equation ( dcme ) . unlike the chemical master equation , however , the dcme is not closed ; this complicates the derivation of dsde approximations .closure in this context means the following : let denote the probability that the stochastic system is in state at time .the dcme expresses the time derivative of in terms of joint probabilities of the form - the probability that the system is in state at time and was in state at time , where is the delay .the one - point probability distribution is therefore expressed in terms of two - point joint distributions , resulting in a system that is not closed .timescale separation assumptions have been used to close the dcme .if the delay time is large compared to the other timescales in the system , one may assume that events that occur at time are decoupled from those that occur at time and close the dcme by assuming the joint probabilities may be written as products : having closed the dcme , one may then derive dsde approximations as well as useful expressions for autocorrelations and power spectra .approximations of dsde type have also been derived using system size expansions such as van kampen expansions and kramers - moyal expansions for both fixed delay and distributed delay .brett and galla use the path integral formalism of martin , siggia , rose , janssen , and de dominicis to derive the delay chemical langevin equation ( dcle ) without relying on the dcme . using this formalism , a moment generating functionalmay be expressed in terms of the system size parameter and the sampling rate . in the continuous - time limit , , the dcle may be inferred from the moment generating functional .however , the brett and galla derivation has some limitations .first , the limit can not be taken without simultaneously letting .intuitively , this is because as , the gaussian approximation to the poisson distribution with mean breaks down unless the parameter simultaneously diverges to infinity .second , the derivation gives no quantitative information about the distance between the dcle and the original delay birth - death ( dbd ) process . 
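the gaussian increment used in the euler - maruyama discretization above can be sampled in two equivalent ways : reaction by reaction , or directly from a multivariate normal whose correlation matrix is built from the state - change vectors and the ( delayed ) propensities as sigma = v^T diag(f) v . the small numerical check below verifies that the two constructions have the same covariance ; the reaction set is an assumed example .

```python
# equivalence of the per - reaction noise construction and the multivariate
# gaussian increment with correlation matrix sigma = v^T diag(f) v .
import numpy as np

rng = np.random.default_rng(0)
v = np.array([[1, 0], [-1, 1], [0, -1]], dtype=float)   # 3 reactions , 2 species
f = np.array([2.0, 1.5, 0.5])                           # propensities at the ( delayed ) state

# construction 1 : independent standard normal per reaction
xi = rng.standard_normal((100000, 3))
samples1 = (xi * np.sqrt(f)) @ v                        # sum_j v_j sqrt(f_j) xi_j

# construction 2 : multivariate normal with covariance sigma
sigma = v.T @ np.diag(f) @ v
samples2 = rng.multivariate_normal(np.zeros(2), sigma, size=100000)

print("empirical covariance , construction 1 :\n", np.round(np.cov(samples1.T), 3))
print("empirical covariance , construction 2 :\n", np.round(np.cov(samples2.T), 3))
print("target sigma :\n", sigma)
```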
in this paper , we address these shortcomings .we prove rigorously that the dbd process can be approximated by a class of gaussian processes that includes the dcle .in particular , we establish that for most biophysically relevant propensity functions , the dcle process will approximate all moments of the dbd process .the rigorous proof includes bounds on the quality of the approximation in terms of the time for which the approximation is desired to hold and the characteristic protein number ( see theorem [ thm : tube ] ) .the error bounds also indicate that the quality of the dcle approximation worsens with increasing upper bounds on the reaction propensity functions and state - change vectors .physically , this means that high reaction rates and reactions that cause large changes in the protein populations are detrimental to the quality of the dcle approximation .the dcle is one of many gaussian processes that approximate the dbd process . among all gaussian approximations with noise components that scale as ,the dcle is optimal because it is the only such approximation that exactly matches the first and second moments of the dbd process .we formally justify this assertion in the supplement using characteristic functions .as our simulations of the degrade and fire oscillator demonstrate , the dcle can significantly outperform other gaussian approximations at moderate system sizes .nevertheless , the quantitative tube estimates in theorem [ thm : tube ] apply to _ any _ gaussian approximation of the dbd process provided the noise scales as .this is significant because it is often advantageous to use linear noise approximations of the dcle .delay appears in the drift component of a linear noise approximation but not in the diffusion component .linear noise approximations are therefore easier to analyze than their dcle counterparts . in particular ,elements of the theory of large deviations for markovian systems can be extended to sdes with delay in the drift . for metastable systems ,our simulations indicate that the dcle captures both temporal information ( such as hitting times for the positive feedback model ; see fig .[ fig : pf_hitting ] ) and spatial information ( such as densities for trajectories corresponding to failed and successful transitions ; see fig . [fig : toggle ] ) .this suggests that dcle approximations may be used to study rare events for biochemical systems that exhibit metastability .we have shown that the dcle provides an accurate approximation of a number of stochastic processes . although we chose gene regulatory networks in our examples ,the theory is applicable to general birth - death processes with delayed events .sdes , and the chemical langevin equation in particular , are fundamental in modeling and understanding the behavior of natural and engineered systems .we therefore expect that the dcle will be widely applicable when delays impact system dynamics .consider a system of biochemical species and possible reactions .we are interested in describing the dynamics as a function of a latent system parameter , the system size .let denote the state of the system at time .each reaction is described by the following : 1. a propensity function .the firing rate of reaction is given by .2 . a state - change vector .the vector describes the change in the number of molecules of each species that results from the completion of a reaction of type .3 . a probability measure supported on ] , then proceed as follows . 
if , then move to time and set .if , then move to time , set , and put the state change vector into a queue along with the designated time of reaction completion , .if a reaction from the past is set to complete in ] into that possess limits from the left .our main results quantify the behavior of as .we make the following regularity assumptions on the propensities . 1 .[ i : propen_reg ] the functions have continuous derivatives of order .[ i : propen_supp1 ] there exists a compact set such that for all .[ i : propen_supp2 ] for all , only if all coordinates of the vector are nonnegative ; otherwise .the correct dcle approximation of is given for by where is a -dimensional vector of independent standard brownian motions and is given by let denote the stochastic process described by eq . and let denote the probability measure on realizations associated with this process .theorem [ thm : dssa_dcle ] controls the distance between and .[ thm : dssa_dcle ] assume that the propensities satisfy ( [ i : propen_reg])([i : propen_supp2 ] ) .fix . for every continuous observable , we have as .theorem [ thm : dssa_therm ] establishes weak convergence of the scaled process to the thermodynamic limit governed by the delay reaction rate equations [ thm : dssa_therm ] assume that the propensities satisfy ( [ i : propen_reg])([i : propen_supp2 ] ) .fix .for every continuous observable , we have as , where denotes the solution of eq . .crucial to the proofs of theorems [ thm : dssa_dcle ] and [ thm : dssa_therm ] are the discretization of the time interval ] into subintervals of length .it is crucial that scales with in the right way .the quantitative pathwise controls in theorem [ thm : tube ] hold if .we define an euler - maruyama discretization of . for ,define . for integers and , define recursively by \delta + \frac{\sqrt{\delta}}{\sqrt{n}}\eta , \ ] ] where is a mean multivariate gaussian random variable with correlation matrix defined as for , define by linearly interpolating between and . theorem [ thm : tube ] asserts that realizations of stay close to those of with high probability .[ thm : tube ] suppose that .there exist constants and such that where is a system constant defined by and is the ` discretized ' norm [ thm : dssa_therm ] establishes weak convergence of the scaled process to the thermodynamic limit .we prove theorem [ thm : dssa_therm ] in two steps .first , we show that the family of measures is a tight family on . by the prohorov theorem ( see _ e.g. _ ) , the family is then relatively compact in the space of probability measures .second , we consider finite sequences of times in ] with and . for , define for functions and in , define } { \left\lvert z_{1 } ( t ) - z_{2 } ( \gamma ( t ) ) \right\rvert } { \leqslant}{\varepsilon}\big\}.\end{aligned}\ ] ] let be a sequence of probability measures on .we recall a characterization of tightness for . for and , define where the infimum is taken over partitions of ] . using lemma [ lemma : a1 ] and lemma [ lemma : a2 ] , we have \right)}{\left ( \frac{2 \cdot \max{\ { |f_i|_\infty \}}}{|f_{i_{0}}|_\infty}\right)^{2n \delta \cdot \max{\ { |f_i|_\infty \ } } } } \leq e^{-2 c_4 n \delta \max_{i } |f_i|_\infty } { \mathrel{=\mkern-4.2mu\raise.095ex\hbox{}}}e^{-c_5 n^{1-\alpha}}.\ ] ] [ dcle : supp : exponentials_are_close ] for every constant , for large enough , ~d\mu_j(s ) \right ] \right ) \right.\\ - \left . 
\exp\left ( n\delta \left [ \sum_{j=1}^m \left ( e^{\mathrm{i}\mathbf{r}v_j / n}-1\right ) \int_{0}^{\infty } [ f_j(b_n(t -s ) ) + c\delta\zeta ]~d\mu_j(s ) \right ] \right ) \right|\\ \hfill \leq 2c \delta^2 \zeta |\mathbf{r}| \left(\sum_{j = 1}^m |v_j|\right ) + \frac{c}{2 } |\mathbf{r}|^2 \delta^3 \zeta \left ( \sup_{j } m |f_j|_\infty |v_j|\right)^2 . \end{aligned}\ ] ] ~d\mu_j(s ) \right ] \right ) \right.\\ - \left .\exp\left ( n\delta \left [ \sum_{j=1}^m \left ( e^{\mathrm{i}\mathbf{r}v_j / n}-1\right ) \int_{0}^{\infty } [ f_j(b_n(t -s ) ) + c\delta\zeta ] ~d\mu_j(s ) \right ] \right ) \right| \\\leq \left| \exp\left ( n\delta \sum_{j = 1}^m \left\{\int_{0}^{\infty } \left[f_j(b_n(t -s ) ) - c\delta \zeta\right]~ d\mu_j(s)\right\ } \left ( \frac{\mathrm{i } \mathbf{r } v_j}{n } - \frac{(\mathbf{r}v_j)^2}{n^2 } + \mathcal{o}(n^{-3})\right ) \right)\right.\\ \left . - \exp\left ( n\delta \sum_{j = 1}^m \left\{\int_{0}^{\infty } \left[f_j(b_n(t -s ) ) + c\delta \zeta\right]~ d\mu_j(s)\right\ } \left ( \frac{\mathrm{i } \mathbf{r } v_j}{n } - \frac{(\mathbf{r}v_j)^2}{n^2 } + \mathcal{o}(n^{-3})\right ) \right ) \right|\\ \leq \left| n\delta \sum_{j = 1}^m\left ( \frac{\mathrm{i}\mathbf{r}v_j}{n } - \frac{(\mathbf{r}v_j)^2}{n^2 } + \mathcal{o}(n^{-3})\right ) \left ( \int_{0}^{\infty } 2 c\delta \zeta ~ d\mu_j(s)\right ) \right|\\ + \frac{1}{2 } \left| n^2\delta^2 \left\ { \sum_{j = 1}^m \left ( \frac{\mathrm{i}\mathbf{r}v_j}{n } - \frac{(\mathbf{r}v_j)^2}{n^2 } + \mathcal{o}(n^{-3})\right)^2\right.\right.\\ \left.\left . \left\{ \int_{0}^\infty \left[(f_j(b_n(t - s ) ) - c \delta \zeta)^2 -(f_j(b_n(t - s ) ) + c \delta \zeta)^2 \right ] d\mu_j(s ) \right\ } \right\ } \right|\\ \leq 2c \delta^2 \zeta |\mathbf{r}| \left ( \sum_{j = 1}^{m } |v_j|\right ) + \frac{c}{2}|\mathbf{r}|^2 \delta^3 \zeta \left ( \sum_{j = 1}^m \int_{0}^\infty |v_j f_j(t - s)|~ d\mu_j(s ) \right)^2 \end{aligned}\phantom{\hfill}\ ] ] [ dcle : prop:3 ] define . for and , we have - e\left [ \exp\left ( n \delta \sum_{j = 1}^m\left ( e^{\mathrm{i}\mathbf{r } v_j /n } -1\right ) \int_{0}^{\infty } f_j ( b_n(t_0 -s ) ) ~d\mu(s ) \right ) \right ] \right|\\ \leq 5 |\mathbf{r}| \delta^2 \zeta |f'|_\infty + o(\delta^{2+\epsilon } ) \end{aligned}\ ] ] where since , we may also take corollary [ c : c_jump_bound ] gives .define a random variable as follows : abusing notation , we write for throughout the proof .one can see that pointwise , and by the dominated convergence theorem we can directly obtain that \to e[\varphi] ] the bound on the right is smaller than an analogous computation can be used to show that - e[e^{\mathrm{i}\mathbf{r } \varphi_{n\delta\zeta}/n}]| \leq 4 \zeta e^{-c_4 \zeta n^{1 - \alpha}}. ] for .let be the random variable that defines the change to a process over the interval ] . on expanding the term term using a taylor series expansion , we can approximate the expression \ ] ] by \ ] ] with an error that is .however , this is the characteristic function for the gaussian random variable \delta + \frac{\sqrt{\delta}}{\sqrt{n } } \eta \end{aligned}\]][dcle : supp : e : sigma ] where is a mean multivariate gaussian random variable with correlation matrix , with [ dcle : supp : defn : dcle ] for , define . for , define recursively as \delta + \frac{\sqrt{\delta}}{\sqrt{n}}\eta\ ] ] where is a mean 0 multivariate gaussian random variable with correlation matrix defined as the next proposition estimates the probability of finding the processes and outside a tube around of radius greater than . 
as the proposition shows , the gaussian process has a smaller tail ( of the order ) than the birth - death process ( which has a tail of order ) .let denote the increment to the process in the time interval ] since for , it follows that we now bound the error on the interval .12 & 12#1212_12%12[1][0] link:\doibase 10.1111/j.1365 - 2958.2010.07111.x[ * * , ( ) ] link:\doibase 10.1073/pnas.0503858102 [ * * , ( ) ] , \doibase http://dx.doi.org/10.1016/0065-2571(65)90067-1 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/s0960-9822(03)00534-7 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.102.068105 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/s0960-9822(03)00494-9 [ * * , ( ) ] * * , ( ) link:\doibase 10.1038/nature07389 [ * * , ( ) ] link:\doibase 10.1038/nature07616 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/s0006-3495(02)75249-1 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/j.jtbi.2004.04.006 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/j.cell.2010.12.019 [ * * , ( ) ] link:\doibase 10.1073/pnas.0913317107 [ * * , ( ) ] link:\doibase 10.1038/ncomms1422 [ * * ( ) , 10.1038/ncomms1422 ] link:\doibase 10.1371/journal.pone.0002972 [ * * , ( ) ] link:\doibase 10.1103/physreve.80.031129 [ * * , ( ) ] ( ) * * , ( ) * * , ( ) link:\doibase 10.1038/nature02298 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/s0006-3495(01)75949-8 [ * * , ( ) ] * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) link:\doibase 10.1103/physrevlett.111.058104 [ * * , ( ) ] link:\doibase 10.1142/s0219493705001389 [ * * , ( ) ] link:\doibase 10.1080/07362990500397715 [ * * , ( ) ] link:\doibase 10.1371/journal.pcbi.1002264 [ * * , ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-33645429016&partnerid=40&md5=47e48b2d9e17bcc889478b564877d5e0 [ * * , ( ) ] link:\doibase 10.1137/060666457 [ * * , ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-31544465969&partnerid=40&md5=2871efe61543fb533e53de65f2d40e04 [ * * ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-34250754425&partnerid=40&md5=143efd47f6b1d958a31e5de896e1bd59 [ * * ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-0035933994&partnerid=40&md5=0cbe82982da7f4d67dd3f6288097f278 [ * * , ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-42449102270&partnerid=40&md5=ae704eb25ba6e4bbaa98a59b09e43f09 [ * * ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-33847220720&partnerid=40&md5=104123affd3b345af8c2d2bdef599b73 [ * * ( ) ] link:\doibase 10.1007/s00285 - 008 - 0178-y [ * * , ( ) ] link:\doibase 10.1103/physrevlett.110.250601 [ * * , ( ) ] http://www.scopus.com/inward/record.url?eid=2-s2.0-84877749614&partnerid=40&md5=2fea8984888ce36cf51e70c2f0a9509a [ * * ( ) ] link:\doibase 10.1371/journal.pcbi.0020117 [ * * , ( ) ] * * , ( ) `` , '' ( ) `` '' link:\doibase 10.1002/9780470316658 [ _ _ ] , wiley series in probability and mathematical statistics : probability and mathematical statistics ( , , ) pp . , link:\doibase 10.1103/physrevlett.90.020601 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.87.250602 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/j.cam.2006.02.063 [ * * , ( ) ] , link:\doibase 10.1103/physreve.80.021909 [ * * , ( ) ] link:\doibase 10.1103/physreve.84.021128 [ * * , ( ) ] ( ) link:\doibase 10.1002/9780470316962 [ _ _ ] , ed . , wiley series in probability and statistics : probability and statistics ( , , ) pp . , | delay is an important and ubiquitous aspect of many biochemical processes . 
for example , delay plays a central role in the dynamics of genetic regulatory networks as it stems from the sequential assembly of first mrna and then protein . genetic regulatory networks are therefore frequently modeled as stochastic birth - death processes with delay . here we examine the relationship between delay birth - death processes and their appropriate approximating delay chemical langevin equations . we prove that the distance between these two descriptions , as measured by expectations of functionals of the processes , converges to zero with increasing system size . further , we prove that the delay birth - death process converges to the thermodynamic limit as system size tends to infinity . our results hold for both fixed delay and distributed delay . simulations demonstrate that the delay chemical langevin approximation is accurate even at moderate system sizes . it captures dynamical features such as the spatial and temporal distributions of transition pathways in metastable systems , oscillatory behavior in negative feedback circuits , and cross - correlations between nodes in a network . overall , these results provide a foundation for using delay stochastic differential equations to approximate the dynamics of birth - death processes with delay . |
late time accelerated expansion of the universe was indicated by measurements of distant type ia supernovae ( sne ia ) . this was confirmed by observations of cosmic microwave background ( cmb ) anisotropies by the wilkinson microwave anisotropy probe ( wmap ) , and the large - scale structure in the distribution of galaxies observed in the sloan digital sky survey ( sdss ) .it is not possible to account for this phenomenon within the framework of general relativity containing only matter .therefore , a number of models containing `` dark energy '' have been proposed as the mechanism for the acceleration .there are currently many dark energy models , including cosmological constant , scalar field , quintessence , and phantom models .however , dark energy , the nature of which remains unknown , has not been detected yet .the cosmological constant , which is the standard candidate for dark energy , can not be explained by current particle physics due to its very small value , and it is plagued with fine - tuning problems and the coincidence problem .an alternative method for explaining the current accelerated expansion of the universe is to extend general relativity to more general theories on cosmological scales . instead of adding an exotic component such as a cosmological constant to the right - hand side ( i.e. , the energy - momentum tensor ) of einstein s field equation , the left - hand side ( i.e. , the einstein tensor , which is represented by pure geometry ) can be modified .typical models based on this modified gravity approach are models and the dvali gabadadze porrati ( dgp ) model ( for reviews , see ) . in models , the scalar curvature in the standard einstein hilbert gravitational lagrangian is replaced by a general function .by adopting appropriate function phenomenologically , models can account for late - time acceleration without postulating dark energy .the dgp model is an extra dimension scenario . in this model ,the universe is considered to be a brane ; i.e. , a four - dimensional ( 4d ) hypersurface , embedded in a five - dimensional ( 5d ) minkowski bulk . on large scales , the late - time acceleration is driven by leakage of gravity from the 4d brane into 5d spacetime .naturally , there is no need to introduce dark energy . on small scales ,gravity is bound to the 4d brane and general relativity is recovered to a good approximation . according to various recent observational data including that of type ia supernovae , it is possible that the effective equation of state parameter , which is the ratio of the effective pressure to the effective energy density , evolves from being larger than ( non - phantom phase ) to being less than ( phantom phase ) ; namely , it has currently crossed ( the phantom divide ) . models that realize the crossing of the phantom divide have been studied . on the other hand , in the original dgp model and a phenomenological extension of the dgp model described by the modified friedmann equation proposed by dvali and turner , the effective equation of state parameter never crosses the = line . in this paper, we develop the `` phantom crossing dgp model '' by further extending the modified friedmann equation by dvali and turner . in our model ,the effective equation of state parameter of dgp gravity crosses the phantom divide line , as indicated by recent observations .this paper is organized as follows . in the next section ,we summarize the original dgp model , and check the behavior of the effective equation of state . 
in section [ sec:3 ] , we describe the modified friedmann equation by dvali and turner , and we also demonstrate that the effective equation of state does not cross the = line in this framework . in section [ sec:4 ] , we construct `` the phantom crossing dgp model '' by extending the modified friedmann equation proposed by dvali and turner .we show that the effective equation of state parameter of our model crosses the phantom divide line , and investigate the properties of our model .finally , a summary is given in section [ sec:5 ] .the dgp model assumes that we live on a 4d brane embedded in a 5d minkowski bulk .matter is trapped on the 4d brane and only gravity experiences the 5d bulk .the action is where the subscripts ( 4 ) and ( 5 ) denote quantities on the brane and in the bulk , respectively . ( ) is the 5d ( 4d ) planck mass , and represents the matter lagrangian confined on the brane .the transition from 4d gravity to 5d gravity is governed by a crossover scale . on scales larger than , gravity appears 5d . on scales smaller than , gravity is effectively bound to the brane and 4d newtonian dynamics is recovered to a good approximation . is the single parameter in this model . assuming spatial homogeneity and isotropy , a friedmann - like equation on the brane is obtained as where is the total cosmic fluid energy density on the brane . represents the two branches of the dgp model .the solution with is known as the self - accelerating branch . in this branch ,the expansion of the universe accelerates even without dark energy because the hubble parameter approaches a constant , , at late times . on the other hand, corresponds to the normal branch. this branch can not undergo acceleration without an additional dark energy component .hence in what follows we consider the self - accelerating branch ( ) only . for the second term on the right - hand side of eq .( [ dgp_fri ] ) , which represents the effect of dgp gravity , the effective energy density is and the effective pressure is where , the differential of the hubble parameter with respect to the cosmological time . using eqs .( [ rho_eff ] ) and ( [ p_eff ] ) , the effective equation of state parameter of dgp gravity is given by vs. redshift .the red ( solid ) , green ( dashed ) , blue ( dotted ) lines represent the cases for , and , respectively ( corresponding to , and , respectively ) .[ fig : dgp_wrc ] ] fig .[ fig : dgp_wrc ] shows the behavior of the effective equation of state of dgp gravity versus the redshift for , and . assuming that the total cosmic fluid energy density of eq .( [ dgp_fri ] ) contains matter and radiation , from eq .( [ rch0omegam ] ) , these values of correspond to , and , respectively . where is the normalized energy density of matter and is the radiation on the brane ; i.e. , and .( , ) .the subscripts designate the present value .the effective equation of state of dgp can also be exactly expressed in terms of the energy densities of matter and radiation , and . 
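the behaviour shown in fig . [ fig : dgp_wrc ] can be reproduced numerically from the relations above : assuming , as in the standard dgp relations , that the effective density scales as h / r_c and obeys the usual continuity equation , one has w_eff = -1 - (1/3) d ln h / d ln a , with omega_rc fixed by flatness as ( 1 - omega_m - omega_r )**2 / 4 . the python sketch below evaluates this for illustrative parameter values .

```python
# effective equation of state of the self - accelerating dgp branch .
import numpy as np

def hubble_dgp(a, om, orad):
    orc = (1.0 - om - orad) ** 2 / 4.0
    return np.sqrt(orc) + np.sqrt(orc + om * a ** -3 + orad * a ** -4)   # H / H0

def w_eff_dgp(z, om=0.30, orad=8.0e-5, eps=1e-4):
    a = 1.0 / (1.0 + z)
    dlnH_dlna = (np.log(hubble_dgp(a * (1 + eps), om, orad))
                 - np.log(hubble_dgp(a * (1 - eps), om, orad))) / (2 * eps)
    return -1.0 - dlnH_dlna / 3.0

if __name__ == "__main__":
    for z in (0.0, 0.5, 1.0, 3.0, 1000.0):
        print(f"z = {z:7.1f}   w_eff = {w_eff_dgp(z):+.3f}")
    # the printed values stay above -1 at all redshifts , i.e. no phantom crossing .
```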
in realistic ranges of the energy density , and , the value of the effective equation of state can not be less than or equal to .that is , the effective equation of state never crosses the phantom divide line in the original dgp model .dvali and turner phenomenologically extended the friedmann - like equation ( eq .( [ dgp_fri ] ) ) of the dgp model .this model interpolates between the original dgp model and the pure model with an additional parameter .the modified friedmann - like equation is for , this agrees with the original dgp friedmann - like equation , while leads to an expansion history identical to that of cosmology .differentiating both sides of eq .( [ dt_fri ] ) with respect to the cosmological time , we obtain the following differential equation . where a dot indicates the derivative respect to the cosmological time .the quantity is the total cosmic fluid pressure on the brane . for the second term on the right - hand side of eq .( [ dt_fri ] ) , which represents the effect of dgp gravity , the effective energy density is and from eq .( [ hdot2 ] ) the effective pressure is .\label{p_dt}\ ] ] from eqs .( [ rho_dt ] ) and ( [ p_dt ] ) , the effective equation of state parameter of the dgp model extended by dvali and turner is given by fig .[ fig : dt_wa ] shows a plot of the behavior of the effective equation of state of the dgp model by dvali and turner versus the redshift for , and ( assuming ) ., vs. redshift of the dgp model extended by dvali and turner for , and ( top to bottom ) assuming .[ fig : dt_wa ] ] in general , for equation of state , the energy density varies as .this leads to the following proportional relation . at the same time , from eq .( [ rho_dt ] ) , we find . in the radiation - dominated epoch , from the proportional relation on hubble parameter , we obtain the following relation . as compared the right - hand side of eq .( [ rad1 ] ) to that of eq .( [ rad2 ] ) , during the earlier radiation - dominated epoch ( ) , the effective equation of state can also be represented with . in the same way , during the matter - dominated epoch ( ) , from the proportional relation on hubble parameter , at the present time , is a stationary value close to . from these results , in the case of , the effective equation of state in all era , even in the radiation - dominated epoch .that is to say , the component of the dgp gravity works as the driving force of the accelerated expansion of the universe in all epochs .on the other hand , for , there is era when the effective equation of state becomes .that is , the dgp gravity does not drive the accelerated expansion in all epochs .the case of corresponds to the original dgp model described in the previous section .thus , in the original dgp model , the effective equation of state in the radiation - dominated epoch . andafter the radiation - dominated epoch , becomes .in other words , the dgp gravity acts as the driving force of the accelerated expansion just after the radiation - dominated epoch .however , when is positive , the effective equation of state will exceed at all times . for negative , is always less than . in the case of , is constantly .based on this analysis , crossing of the phantom divide does not occur in the dgp model extended by dvali and turner .we propose the `` phantom crossing dgp model '' that extends the modified friedmann equation ( eq .( [ dt_fri ] ) ) proposed by dvali and turner .our model can realize crossing of the phantom divide line for the effective equation of state of the dgp gravity . 
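both the constant - alpha behaviour just described and the promotion of alpha to a scale - factor dependent exponent announced above can be explored with a single numerical sketch . in units h0 = 1 the modified friedmann equation is taken here as h**2 - K(a) h**alpha(a) = omega_m a**-3 + omega_r a**-4 , with K(a) = r_c**( alpha(a) - 2 ) and r_c fixed by h(a=1) = 1 ; the effective density is the second term on the left , and w_eff follows from the continuity equation . this reconstruction of the stripped formulas , and all parameter values , are assumptions made for illustration .

```python
# effective equation of state for the extended ( dvali - turner type ) friedmann
# equation , including the scale - factor dependent exponent alpha(a) = beta - a .
import numpy as np

OM, ORAD = 0.30, 8.0e-5

def hubble(a, alpha_of_a):
    p = alpha_of_a(a)
    rc = (1.0 - OM - ORAD) ** (1.0 / (alpha_of_a(1.0) - 2.0))
    K = rc ** (p - 2.0)
    R = OM * a ** -3 + ORAD * a ** -4
    lo = K ** (1.0 / (2.0 - p)) if p > 0 else 1e-8       # physical branch lies above this
    hi = max(1.0, np.sqrt(R / (1.0 - K))) + 1.0
    for _ in range(80):                                   # bisection for h(a)
        mid = 0.5 * (lo + hi)
        if mid ** 2 - K * mid ** p < R:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def w_eff(z, alpha_of_a, eps=1e-4):
    a = 1.0 / (1.0 + z)
    def ln_rho(aa):
        p = alpha_of_a(aa)
        rc = (1.0 - OM - ORAD) ** (1.0 / (alpha_of_a(1.0) - 2.0))
        return p * np.log(hubble(aa, alpha_of_a)) + (p - 2.0) * np.log(rc)
    d = (ln_rho(a * (1 + eps)) - ln_rho(a * (1 - eps))) / (2 * eps)
    return -1.0 - d / 3.0

if __name__ == "__main__":
    for alpha in (1.0, 0.5, 0.0, -1.0):                   # constant - alpha cases
        print("alpha = %+.1f : w_eff(0) = %+.3f , w_eff(3) = %+.3f"
              % (alpha, w_eff(0.0, lambda a: alpha), w_eff(3.0, lambda a: alpha)))
    crossing = lambda a: 0.5 - a                          # beta = 0.5 , a - dependent exponent
    zs = np.linspace(0.0, 1.0, 101)
    ws = np.array([w_eff(z, crossing) for z in zs])
    print("alpha = 0.5 - a : w_eff(0) = %+.3f , crosses -1 near z = %.2f"
          % (ws[0], zs[np.argmin(np.abs(ws + 1.0))]))
```

for positive constant alpha the printed w_eff stays above -1 , for negative alpha it stays below -1 , alpha = 0 gives exactly -1 , and only the scale - factor dependent exponent produces a crossing at low redshift , in line with the discussion above .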
as mentioned in the previous section , the effective equation of state parameter of the dgp model by dvali and turner ,takes the value of over for positive , and it is less than for negative . when , becomes . on the basis of these results , we consider a model in which varies being positive to being negative . to keep the model as simple as possible , we make the following assumption , where is the scale factor ( normalized such that the present day value is unity ) .the quantity is a constant parameter . in the periodwhen the scale factor is less than the parameter ( ) , the effective equation of state exceeds .at the point when the scale factor equals , ( ) , the equation of state s value will be . in the period when the scale factor exceeds the parameter ( ) ,the equation of state will be less than . in this way , crossing of the phantom divide is realized in our model .replacing by in eq .( [ dt_fri ] ) , the friedmann - like equation in our model is given by differentiating both sides of eq .( [ hirano_fri ] ) with respect to the cosmological time , the following differential equation is obtained . for the second term on the right - hand side of eq .( [ hirano_fri ] ) representing the effect of dgp gravity , the effective energy density is and from eq .( [ hirano_hdot2 ] ) , the effective pressure is .\label{p_hirano}\ ] ] using eqs .( [ rho_hirano ] ) and ( [ p_hirano ] ) , the effective equation of state of our model is given by , vs. redshift .the red ( solid ) , green ( dashed ) , blue ( dotted ) lines represent the cases of , and , respectively ( assuming ) .[ fig : dgp_hirano],width=445 ] depicted in fig .[ fig : dgp_hirano ] near recent epochs .[ fig : dgp_hirano_kakudai],width=445 ] fig .[ fig : dgp_hirano ] shows a plot of the effective equation of state of our model versus the redshift ( see also fig .[ fig : dgp_hirano_kakudai ] which shows an enlarged view of this diagram ) .our model is an extension of the dgp model and realizes crossing of the phantom divide .the effective equation of state of models for and ( assuming ) crosses the phantom divide line when the redshift and , respectively .we find that the smaller the parameter is , the older epoch crossing of the phantom divide occurs in . is not necessarily equal to the scale factor at the time of crossing the phantom divide , even though eq .( [ beta ] ) is assumed . in the eq .( [ hirano_fri ] ) , the value of that is the power index of varies with respect to time . as the power index of the differential equation changes with time , furthermore , in parallel , the differential equation is solved with respect to time .hence , the time lag occurs , the scale factor at the time of crossing the phantom divide is more than the value of . in a way similar to the derivation of eq .( [ wa_rad ] ) , we represent the effective equation of state of phantom crossing dgp model with . in the radiation - dominated epoch , the scale factor is taken to be in comparison with the value of .therefore , as in eq .( [ beta ] ) , during the radiation - dominated epoch ( ) , the effective equation of state is approximately that is , in the case of , the effective equation of state in all era , including the radiation - dominated epoch . on the other hand , for , there is era when the effective equation of state becomes . and absolute value of the effective pressure ( note that ) of our model vs. 
redshift , for ( ) = ( ) .[ fig : rhop_hirano ] ] fig .[ fig : rhop_hirano ] shows the effective energy density and absolute value of the effective pressure ( note that ) of our model for ( ) = ( ) versus the redshift , normalized such that the effective energy density is unity at the time of phantom crossing .it shows that the absolute value of the effective pressure exceeds the effective energy density at the time of crossing of the phantom divide .the recent observational data for type ia supernovae show that crossing of the phantom divide line occurs at a redshift . in our model , for ( when ) , crossing of the phantom divide occurs at .+ in a proposed model in which the phantom divide is crossed at , ( ) = ( ) , we investigate and show the property of phantom crossing dgp model .relative to that of a constant expansion cosmology , vs. the redshift . models and parametersare ( from top to bottom ) : ( 1 ) model , = 0.30 ; ( 2 ) phantom crossing dgp model , = 0.50 , = 0.30 ; ( 3 ) dgp model by dvali and turner , = 0.50 , = 0.30 ; ( 4 ) original dgp model , = 0.30 .[ fig : snia ] ] , matter , and dgp gravity , vs. the redshift in the phantom crossing dgp model with the proposed parameter ( ) = ( ) .[ fig : omega ] ] fig .[ fig : snia ] shows the distance modulus relative to that of a constant expansion cosmology , versus the redshift .that is , when is positive , cosmic expansion is accelerating .the distance modulus is defined by where is the hubble free luminosity distance given by being the hubble constant in units of .we adopt . in fig .[ fig : snia ] , models and parameters are ( from top to bottom ) : ( 1 ) model , = 0.30 ; ( 2 ) phantom crossing dgp model , = 0.50 , = 0.30 ; ( 3 ) dgp model by dvali and turner , = 0.50 , = 0.30 ; ( 4 ) original dgp model , = 0.30. phantom crossing dgp model can realize late - time acceleration of the universe very similar to that for model , without dark energy .[ fig : omega ] shows the normalized energy density of radiation , matter , and dgp gravity versus the redshift in the phantom crossing dgp model with the proposed parameter ( ) = ( ) .where , . is the effective energy density of dgp gravity defined by eq .( [ rho_hirano ] ) .we find that the universe is dgp gravity - dominated near recent epochs .therefore , in the phantom crossing dgp model , the late - time acceleration is driven by the effect of dgp gravity ., vs. the redshift .models and parameters are same as fig . [fig : snia ] .[ fig : weff ] ] fig .[ fig : weff ] shows the effective equation of state versus the redshift .models and parameters are same as fig .[ fig : snia ] .only our phantom crossing dgp model can realize crossing of the phantom divide line at as indicated by recent observations .* we confirmed that the effective equation of state does not cross the phantom divide line in the original dgp model . + * we also demonstrated that crossing of the phantom divide does not occur in the dgp model by dvali and turner . +* we constructed the phantom crossing dgp model .this model realizes crossing of the phantom divide .we found that the smaller the value of the new introduced parameter is , the older epoch crossing of the phantom divide occurs in .our model can realize late - time acceleration of the universe very similar to that of model , without dark energy , due to the effect of dgp gravity . in the proposed model ( ( ) = ( ) ) , crossing of the phantom divide occurs at as indicated by recent observations .riess , et al .j. * 116 * , 1009 ( 1998 ) s. 
perlmutter , et al ., astrophys .j. * 517 * , 565 ( 1999 ) r. knop , et al ., astrophys .j. * 598 * , 102 ( 2003 ) a.g .riess , et al ., astrophys .j. * 607 * , 665 ( 2004 ) a.g .riess , et al ., astrophys .j. * 659 * , 98 ( 2007 ) p. astier , et al .astrophys . * 447 * , 31 ( 2006 ) g. miknaitis , et al ., astrophys .j. * 666 * , 674 ( 2007 ) w.m .wood - vasey , astrophys .j. * 666 * , 694 ( 2007 ) j.a .frieman , et al .j. * 135 * 338 ( 2008 ) d.n .spergel , et al ., astrophys .* 148 * , 175 ( 2003 ) e. komatsu , et al .[ wmap collaboration ] , astrophys .* 180 * , 330 ( 2009 ) m. tegmark , et al .[ sdss collaboration ] , astrophys .j. * 606 * , 702 ( 2004 ) m. tegmark , et al .d * 69 * , 103501 ( 2004 ) b. ratra , p.j.e .peebles , phys .d * 37 * , 3406 ( 1988 ) p.j.e .peebles , b. ratra , astrophys .j. * 325 * , l17 ( 1988 ) r.r .caldwell , phys .b * 545 * , 23 ( 2002 ) r.r .caldwell , m. kamionkowski , n.n .weinberg , phys .lett * 91 * , 071301 ( 2003 ) k. hirano , k. kawabata , z. komiya , astrophys .space sci . * 315 * , 53 ( 2008 ) j.g .hartnett , k. hirano , astrophys .space sci . * 318 * , 13 ( 2008 ) k. hirano , et al . ,proceedings of 59th yamada conference `` inflating horizons of particle astrophysics and cosmology '' ( universal academy press ) , p.219 ( 2005 ) z. komiya , k. kawabata , k. hirano , h. bunya , n. yamamoto , j. korean astron .soc . * 38 * , 157 ( 2005 ) z. komiya , k. kawabata , k. hirano , h. bunya , n. yamamoto , astron .astrophys . * 449 * , 903 ( 2006 ) s. nojiri , s.d .odintsov , int .phys . * 4 * , 115 ( 2007 ) s. nojiri , s.d .odintsov , arxiv:0801.4843 [ astro - ph ] .s. nojiri , s.d .odintsov , arxiv:0807.0685 [ hep - th ] .dvali , g. gabadadze , m. porrati , phys .b * 485 * , 208 ( 2000 ) c. deffayet , phys .b * 502 * , 199 ( 2001 ) c. deffayet , g.r .dvali , g. gabadadze , phys .d * 65 * , 044023 ( 2002 ) k. koyama , gen .. grav . * 40 * , 421 ( 2008 ) .u. alam , v. sahni , a.a .starobinsky , jcap * 0406 * , 008 ( 2004 ) s. nesseris , l. perivolaropoulos , jcap * 0701 * , 018 ( 2007 ) p.u .yu , phys .b * 643 * , 315 ( 2006 ) h.k .jassal , j.s .bagla , t. padmanabhan , astro - ph/0601389 .s. nojiri , s.d .odintsov , phys .b * 562 * , 147 ( 2003 ) k. bamba , c.q .geng , phys .b * 679 * , 282 ( 2009 ) k. bamba , c.q .geng , s. nojiri , s.d .odintsov , phys .d * 79 * , 083014 ( 2009 ) g. dvali , m. turner , astro - ph/0301510 .k. nozari , m. pourghassemi , jcap * 0810 * , 044 ( 2008 ) k. nozari , n. behrouz , t. azizi , b. fazlpour , prog .. phys . * 122 * , 735 ( 2009 ) a. lue , r. scoccimarro , g.d .starkman , phys .d * 69 * , 124015 ( 2004 ) a. lue , phys. rep . * 423 * , 1 ( 2006 ) w.l .freedman , et al ., astrophys .j. * 553 * , 47 ( 2001 ) | we propose a phantom crossing dvali gabadadze porrati ( dgp ) model . in our model , the effective equation of state of the dgp gravity crosses the phantom divide line . we demonstrate crossing of the phantom divide does not occur within the framework of the original dgp model or the dgp model developed by dvali and turner . by extending their model , we construct a model that realizes crossing of the phantom divide . we find that the smaller the value of the new introduced parameter is , the older epoch crossing of the phantom divide occurs in . our model can account for late - time acceleration of the universe without dark energy . we investigate and show the property of phantom crossing dgp model . 
( snr ) is an important index for wireless communication systems . in wireless communication systems , it is the most significant to achieve high capacity . in general , it is necessary and sufficient for achieving high capacity to increase snr under the condition that the width of the frequency band is constant .similary , the performance of wireless communication is evaluated in bit error rate ( ber ) .however , these two are not independent , and it is known that ber decreases as snr increases . as a wireless communication system ,we focus on a code division multiple access ( cdma ) system , in particular , an asynchronous cdma system .it is one of the multiple access systems with which many people can communicate each other at the same time . in cdma systems , spreading sequences are utilized as codes to multiplex . each user is assigned a different code and uses it to modulate and demodulate his signal . in cdma systems ,many methods have been proposed to increase snr .the one of such methods is based on the blind multiuser detection . on the other hand , improving the receiver with the application of digital implementation of ica and maximum likelihood ( ml ) estimation are also efficient .however , in particularly , ml estimation method needs a large amount of calculations . on the contrary , to increase snr, the representative approach is to improve spreading sequences .the current spreading sequence of 3 g cdma systems is the gold code .it is known that the gold code is optimal in all the binary spreading sequences as well as the kasami sequence . to explore a better sequence , in and ,the use of chaotic spreading sequences has been proposed .these chaotic spreading sequences are multivalued sequences , not binary ones , and are obtained from chaotic maps .examples of such spreading sequences have been given in - .however , the spreading sequences whose snr is maximum in all the spreading sequences are not yet obtained . in , the approach to obtain the capacity of spreading sequences has been proposed .however , the sequence achieving maximal capacity has not been suggested . to achieve the maximal capacity, we have to derive the practical spreading sequences whose interference noise is minimal . in general , crosscorrelation is treated as a basic component of interference noise , and autocorrelation is related to synchronization at the receiver side and the fading noise .thus , it is desirable that the first peak of crosscorrelation and the second peak in autocorrelation should be kept low .however , sarwate has shown that there is an avoidable limitation trade - off as a relation between lowering crosscorrelation s peak and autocorrelation s second peak . from this result , it is impossible that both of the peaks of crosscorrelation and autocorrelation are zero .welch shows that the maximum value of crosscorrelation has a universal lower bound . from this result , it is impossible that the maximum value of crosscorrelation is zero in some situations . therefore , it is not straightforward to derive practical spreading sequences whose snr is high . in , sarwate has shown two kinds of characterized sequences on his limitation .one kind is a set of sequences whose periodic crosscorrelation is always zero .we call them sarwate sequences .the other kind is a set of sequences whose periodic autocorrelation is always zero except for only one point , that is , frank - zadoff - chu ( fzc ) sequences . 
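the defining property of the second kind of sequence can be checked directly : the sketch below builds a standard zadoff - chu sequence ( an instance of the fzc construction , for odd length n and a root m coprime to n ) and verifies that its periodic autocorrelation vanishes at every nonzero shift . the particular length and root are illustrative choices .

```python
# periodic autocorrelation of a zadoff - chu ( fzc - type ) sequence .
import numpy as np

def zadoff_chu(M, N):
    n = np.arange(N)
    return np.exp(-1j * np.pi * M * n * (n + 1) / N)      # odd - length form

def periodic_autocorr(s):
    N = len(s)
    # circular autocorrelation for all shifts via the fft
    return np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(s))) / N

if __name__ == "__main__":
    N, M = 63, 2
    r = periodic_autocorr(zadoff_chu(M, N))
    print("autocorrelation at zero shift      :", round(abs(r[0]), 6))
    print("largest autocorrelation , shift > 0 :", np.max(np.abs(r[1:])))
```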
in , the extended set of the fzc sequences ,the fzc sequence families are proposed .they have three parameters and their snr , autocorrelation and crosscorrelation has been investigated . in this paper, we define the weyl sequence class , which is a set of sequences generated by the weyl transformation .this class belongs to the extended fzc sequence families and includes the sarwate sequences .the sequence in the weyl sequence class has a desired property that the order of crosscorrelation is low .we evaluate the upper bound of crosscorrelation and construct the optimization problem : minimize the upper bound of crosscorrelation . from the problem ,we derive optimal spreading sequences in the weyl sequence class .we show snr of them in a special case and compare them with other sequences in bit error rate .in this section , we define the weyl sequence class and show their properties . let be the length of spreading sequences .we define the weyl sequence as the following formula where and are real parameters . from the above definition, we can assume that the parameters and satisfy and .the sequences whose is an irrational number are used in a quasi - monte carlo method .we apply this sequence to a spreading sequence .then , the weyl spreading sequence is defined as where is the number of the user and is the unit imaginary number . in cdma systems ,the value of has no effects to signal to noise ratio ( snr ) since is united to the phase term of the signal .thus , we set .we call the class which consists of weyl spreading sequences as the weyl sequence class .note that this class is similar to the fzc sequence families .the -th element of the fzc sequence families is defined as where is an integer that is relatively prime to such that and and are any real numbers .the triple specifies the set of sequences .when the triple is , we obtain the element of the fzc sequence the weyl sequence class is obtained when the triple is and .note that is not always an integer .thus , the weyl sequence class belongs to the extended fzc sequence families whose is a real number . the element of the weyl sequence class , has a desired property that crosscorrelation is low .we define the periodic correlation function and aperiodic correlation function as where and is the conjugate of .the correlation functions and have been studied in . when , and are periodic and aperiodic crosscorrelation functions .it is necessary for achieving high snr to keep the value of the crosscorrelation functions , and low for all . the absolute value of crosscorrelation functions and have the common upper bound that as the equality is attained if and only if where . from the above result, obeys similarly , obeys .thus , the upper bound of and is independent of . forthe general spreading sequences , due to the central limit theorem ( clt ) , the crosscorrelations and become large as becomes large .for this reason , compared to the general spreading sequences , the weyl spreading sequence is expected to have low crosscorrelation .in this section , we consider an asynchronous binary phase shift keying ( bpsk ) cdma system . our goal is to derive the spreading sequences whose interference noise is the smallest in the weyl sequence class .let , and be the number of users , the durations of the symbol and each chip , respectively . in this situation , the user despreads the spreading sequences with the spreading sequence of the user , .the symbols denote bits which the user send .the transmitted signal of the user has time delay . 
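before completing the description of the asynchronous model, the low-crosscorrelation property claimed above can be checked numerically. a minimal sketch, assuming the weyl sequence x_n = {n rho + x_0} (fractional part) and spreading element w_n = exp(2 pi i x_n); the length, the values of rho and the normalisation by N are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def weyl_spreading_sequence(rho, x0, N):
    # assumed form: x_n = frac(n * rho + x0), w_n = exp(2*pi*i*x_n)
    x = np.mod(np.arange(N) * rho + x0, 1.0)
    return np.exp(2j * np.pi * x)

def periodic_crosscorrelation(s, t):
    # theta_{s,t}(l) = (1/N) sum_n s[n] * conj(t[(n+l) mod N])
    N = len(s)
    return np.array([np.sum(s * np.conj(np.roll(t, -l))) for l in range(N)]) / N

N = 31
w1 = weyl_spreading_sequence(0.30, 0.0, N)
w2 = weyl_spreading_sequence(0.70, 0.0, N)
print(np.abs(periodic_crosscorrelation(w1, w2)).max())   # about 1/N for this pair

rng = np.random.default_rng(0)
b1, b2 = rng.choice([-1.0, 1.0], (2, N))
print(np.abs(periodic_crosscorrelation(b1, b2)).max())   # typically much larger, of order 1/sqrt(N)

for this pair the weyl crosscorrelation stays of order 1/N at every lag, while the random binary pair scatters around 1/sqrt(N), in line with the central-limit argument above.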
from , we assume that time delay is distributed in and satisfies , where is an integer .then , the interference noise between the user and the user , is obtained as , \label{eq : correlation } \end{split}\ ] ] where is the phase of user s carrier . with eq .( [ eq : bound1 ] ) , the absolute value of the interference noise is evaluated as thus , we have shown that the upper bound of interference noise between two sequences is inversely proportional to . to reduce the interference noise ,it is necessary to reduce . to eliminate the absolute value function, we introduce the distance between the phases and .the distance we propose here is given by note that this satisfies the axiom of distance , and if we regard in the same light as . from eq .( [ eq : d ] ) , we rewrite eq .( [ eq : bound2 ] ) without any absolute value as we should take into account the whole interference noise in the users .the whole interference noise is written as with eq .( [ eq : bound3 ] ) , it is clear that has the upper bound : thus , we minimize eq .( [ eq : whole ] ) and obtain the problem this problem is equivalent to that we minimize the sum of the upper bound of .thus , the crosscorrelation among all the users is expected to be always low when we solve this problem . from eq .( [ eq : d_prop1 ] ) , it is clear that .then , in the problem , we count two times the same distance .thus , we obtain the equivalent problem it is not clear if the objective function of the problem is convex since the form of function is complicated . to eliminate the function , we introduce slack variables for .then , the problem is rewritten as without loss of generality , we assume . then , the problem can be rewritten as notice that the objective function and the inequality constraints of the problem are convex . it has been known that convex programming can be solved with the kkt conditions . to write such conditions ,we define the variable vector as where , , and is the transpose of . from the kkt conditions ,the solution is a global solution of if satisfies the following equation : where and the lagrange multipliers and are non - negative real numbers .they have to satisfy the following conditions : in the appendix a , we prove that the global optimal solutions and are given by where is a real number . from the above result, the optimal spreading sequence of the user , is given by for a real number .this is equivalent to the following spreading sequences : where which satisfies when .this sequence is obtained from the triple and in the fzc sequence families .if , and , where is an even number , then our sequences are equivalent to the song - park ( sp ) sequences ( is not used ) .if and , our sequences are equivalent to the sarwate sequences .in the previous section , we fix the number of users and derive optimal spreading sequences in the weyl spreading sequence class .this spreading sequence is expected to be useful when the number of the users is fixed . 
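the character of this solution can be illustrated numerically: for a fixed number of users, spreading the parameters at regular intervals keeps the worst pairwise crosscorrelation small, whereas a random assignment occasionally places two users close together in [0, 1) and that pair dominates the interference. the element form exp(2 pi i n rho_k) with rho_k = k/K below is an assumed stand-in for the elided formulas, and N = 31, K = 8 are illustrative.

import numpy as np
from itertools import combinations

def weyl_seq(rho, N):
    # assumed element form exp(2*pi*i*n*rho), initial phase set to zero
    return np.exp(2j * np.pi * np.arange(N) * rho)

def max_cross(s, t):
    # largest absolute periodic crosscorrelation over all lags, normalised by N
    N = len(s)
    return max(abs(np.sum(s * np.conj(np.roll(t, -l)))) / N for l in range(N))

def worst_pair(rhos, N):
    seqs = [weyl_seq(r, N) for r in rhos]
    return max(max_cross(a, b) for a, b in combinations(seqs, 2))

N, K = 31, 8
print(worst_pair([k / K for k in range(K)], N))          # regular spacing: about 1/N

rng = np.random.default_rng(1)
for _ in range(3):
    print(worst_pair(rng.uniform(0.0, 1.0, K), N))       # random assignments: usually much worse

this is the sense in which the global solution is efficient when the number of users is fixed.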
however , the number of users in a channel changes as time passes .thus , in this section , we fix the maximum number of users in a channel , and assign the to users .the spreading sequences assigned to the user is expressed as note that optimal spreading sequence of the problem ( ) is expressed as in particular , when and , this spreading sequence is equivalent to the sequence shown by sarwate .this spreading sequence has the one that the periodic crosscorrelation function is always 0 , that is , for all .note that in all the .thus , we define as , that is , we consider the following sequences in this section , we assume that is a random variable and is uniformly distributed in . in the next section ,we consider how to assign in a systematic approach .the expression of snr of the user is obtained in as where \\ & + |c_{i , k}(l - n+1)|^2 + |c_{i , k}(l)|^2 \\ & + \operatorname{re}[c_{i , k}(l ) \overline{c_{i , k}(l+1 ) } ] + |c_{i , k}(l+1)|^2 \ } , \end{split } \label{eq : snr}\ ] ] is the energy per data bit and is the power of gaussian noise .snr is the ratio between the variance of a desired signal and the one of a noise signal . in the appendix b , with spreading sequence , we prove that snr of the user is given by where equation ( [ eq : upper ] ) is obtained when the ratio is close to , that is , the number of users is sufficiently large . from eqs .( [ eq : upper ] ) and ( [ eq : upper_sub ] ) , the spreading sequence has different snr in .thus , some users have high snr and other users have low snr .the lower bound of is this section , we consider how to assign to the each users .let us consider the spreading sequences from eq .( [ eq : global_solution ] ) , it is demanded that we assign at regular interval .however , these sequences can not be used if the number of users changes .thus , we have to make the rule to assign when the number of users changes .we give the rule to assign in the situation that the number of users monotonic increases . from the demanded of the problem ,it is desirable that we assign to each users at regular interval .thus , it is appropriate to assign them at nearly regular interval in every number of users .we apply the van der corput sequence to the method to assign since the sequence is a regular interval sequence in some situations . for example, the van der corput sequence is obtained as in particular , when we take the first eight elements out from and sort them , we obtain the sequence this sequence is a regular interval sequence . we can consider as a nearly regular interval sequence . when the length of spreading sequences equals , where is an integer , is rewritten in terms of .for example , when , the sequence is obtained as thus , we propose that we use the -th element of as , that is , the spreading sequences are expressed as where is the -th element of .in this section , we simulate an asynchrnous cdma communication system and discuss the performance of the spreading sequences obtained by eqs .( [ eq : optimal_seq ] ) ( [ eq : sarwate_seq ] ) ( [ eq : vander_seq ] ) .we use two parameters and , which are independent of the number of users and depend on .we consider a bpsk model .the channel has a additional white gaussian noise ( awgn ) and no fading signals .we measure the average bit error rate that where is the trial numbers , is the -th trial number and is the bit error rate of the user at the -th trial . 
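before listing the remaining simulation choices, the assignment rule just described can be made concrete. a minimal sketch of the base-2 van der corput sequence (the radical inverse); starting the index at 0 reproduces the regular-interval property of the first eight elements quoted above. the final comment describes the assumed use of the sequence, since the exact scaling is elided in the text.

def van_der_corput(n, base=2):
    # radical inverse: reflect the base-`base` digits of n about the radix point
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

first_eight = [van_der_corput(i) for i in range(8)]
print(first_eight)           # [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
print(sorted(first_eight))   # regular spacing 1/8: 0.0, 0.125, 0.25, ..., 0.875

# assumed use: the k-th admitted user takes the k-th element of the sequence as its
# parameter (rescaled by the sequence length when that length is a power of two),
# so the parameters stay nearly evenly spread as users join one by one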
in each trial , we choose randomly the parameter the delay , the symbol and the phase and the initial elements of sequence .this section consists of three subsections . in the first subsection , we compare the spreading sequences obtained by eq .( [ eq : sarwate_seq ] ) and ( [ eq : optimal_seq ] ) with other sequences , the gold codes , the optimal chebyshev spreading sequences and the fzc sequence families .in particular , we choose the triple as the parameters of the fzc sequence families .this triple is shown in as the optimal parameters when with being the length . in the second subsection , we compare the spreading sequences obtained by eq .( [ eq : sarwate_seq ] ) and ( [ eq : optimal_seq ] ) with one obtained by eq .( [ eq : vander_seq ] ) .we compare the random assigning approach with the systematic approach . in the final subsection , we compare the spreading sequences obtained by eq .( [ eq : sarwate_seq ] ) and ( [ eq : optimal_seq ] ) with the sp sequences .we consider the following spreading sequences : the former sequence is obtained from eq .( [ eq : sarwate_seq ] ) and the latter sequence is obtained from eq .( [ eq : optimal_seq ] ) . in this section ,the `` weyl '' spreading sequence is and the `` optimal '' one is .note that the optimal spreading sequences are different in the number of users .the `` upper bound '' is obtained from eq .( [ eq : upper ] ) .figure 1 shows the relation between the number of users and bit error rate ( ber ) when and (db ) . in this figure, we set , which is independent of the number of users .the ber of the weyl spreading sequences is lower than the one of the gold codes and the optimal chebyshev spreading sequences .however , it is higher than one of the fzc sequence families .the upper bound is established when the number of the users is larger than 20 . on the other hands ,the ber of the global solution of the problem , the optimal weyl sequences is the lowest .these sequences are dramatically efficient when the number of the users is fixed .figure 2 shows the relation between the (db ) and bit error rate ( ber ) when .we used the two cases for , and .the former parameter is independent of and the latter parameter is dependent on . 
in this figure ,the ber of the weyl spreading sequences is lower than one of the fzc sequence families .this result shows that ber of the weyl spreading sequences is changed when the value of is varied .however , the ber of the optimal sequence , which is the global solution of , is independent of .( db ) , =31,width=288 ] ( db ) and ber : , =31,width=288 ] in section v , we discussed how to assign the element to the user and proposed the method to assign .we set the length and the parameter .we compare two types of weyl spreading sequences , whose is randomly assigned to user and whose is orderly assigned as the -th element of the van der corput sequence .note that the first elements of the van der corput sequence are used as when the number of the users is .figure 3 shows the relation between the number of users and ber when (db ) .the ber of the spreading sequences obtained by eq .( [ eq : vander_seq ] ) is lower than one of the spreading sequences to which we randomly assign .the weyl spreading sequences will have better performance if we successfully assign ., width=288 ] the song - park ( sp ) sequences have been proposed in .we set the length and the number of the users .then , the maximum number of the users of the sp sequences is 14 .thus , we compare them with four types of the weyl spreading sequences . we choose the parameters as .the parameter represents the situation where the maximum number of the users is 14 .thus , the weyl spreading sequences have the same feature to the sp sequences .figure 4 shows the relation between the (db ) and ber .the ber of the weyl spreading sequences whose is higher than one of the sp sequences .however , the ber of the weyl spreading sequences whose and one of the sp sequences are the same .further , the ber of the weyl spreading sequences whose is the same ber in different .this result and the result of subsection ( a ) could suggest that the optimal parameter depends on , and .the ber of the global solutions is lowest and each ber of them is the same in and . by taking into account these ,we conclude that the ber of the global solution is independent of .( db ) and ber : , ,width=288 ]in this paper , we have defined the weyl sequence class and shown the features of the sequences in the class .we have constructed the optimization problem : minimize the upper bound of the absolute value of the whole interference noise and derive the global solutions . from this solution, we can derive other sequences , the sarwate s sequences , the sp sequences .the global solution is dramatically efficient when the number of the users is fixed .we have evaluated their snr and shown the simulation results . in the global solution of the problem ,the parameter is any real number .however , its ber depends on when we let the maximum number of users as or other number , not .the remained issue is to investigate the optimal and how to assign successfully to the user .[ [ section ] ] in this appendix , we prove that the global optimal solutions of , and are given by where is a real number .+ since the problem is a convex programming , it is necessary and sufficient for the global solution to satisfy the kkt conditions , eqs .( [ eq : kkt1_1])-([eq : kkt1_3 ] ) .+ when satisfies eq . 
, it is clearly that since and .we let .thus , it is sufficient to consider only two kinds of the lagrange multipliers , and .they satisfy the following equation which is obtained from eq .( [ eq : kkt1_1 ] ) : where have in the -th element and 0 in the others and have in the -th element and 0 in the others . from eq .( [ eq : kkt2 ] ) , we consider two vector equations .one is the first -dimensional vector equation of eq .( [ eq : kkt2 ] ) and the other is the last -dimensional vector equation . they are expressed as then , we define as note that since . from the definition of , only depends on the absolute value of difference , .we therefore rewrite as the variable has the property that this result is obtained from the definition of .we consider the two types of : is an odd number or is an even number .for all and , , and satisfy either only or .they satisfy we consider the -th element of the left side of eq . from eq . , for all the integers and , the term in summation of the left side of eq . equals . from the above proof, all the lagrange multipliers satisfy eq . .the lagrange multipliers , and satisfy when , they satisfy and .thus , we set similar to the case that is an odd number , we consider the -th element of left side of eq . . the terms of the difference equaling vanish .therefore , we obtain thus , we have proven that eq .( [ eq : la2 ] ) equals to 0 .it is clearly that the left side of eq . equals when .when , it follows that thus , for all the integer and , eq . is satisfied .+ from the proofs a and b , we have proven that the existence of the lagrange multipliers which satisfy eq. therefore , and are the global solutions of the problem ( ) .[ [ section-1 ] ] in this appendix , with the spreading sequences , we prove that snr of the user is given by where we assume that the element is a random variable uniformly distributed in .this assumption is fullfilled when the ratio is close to , that is , the number of users is sufficiently large since snr is not the reciprocal of the average of the interference noise over the users .however , with the spreading sequences , they are equivalent when the number of users equals ( see eq .( [ eq : first_r2 ] ) ) .thus , it is conceivable that the assumption is established when the ratio is close to .the correlation function of the spreading sequences in eq .( [ eq : optimal_seq ] ) is where and thus , we obtain the squared absolute value of : on the other hand , the following relations are satisfied : \\ = & \frac{n\left\{\cos\left(2 \pi \left(\gamma + \frac{\sigma_k}{n}\right)\right ) + \cos\left(2 \pi \left(\gamma + \frac{\sigma_i}{n}\right)\right)\right\}}{2\left(1-\cos\left(2 \pi \frac{\sigma_k - \sigma_i}{n}\right)\right ) } , \end{split}\ ] ] and \\ = & \sum_{l=0}^{n-1}\operatorname{re}\left[c_{i , k}\left(l\right)\overline{c_{i , k}\left(l+1\right)}\right ] .\label{eq : c_2 } \end{split}\ ] ] in the above equations , we used the assumption . from eqs .( [ eq : c_1])-([eq : c_2 ] ) , in eq .( [ eq : snr ] ) is given by when we calculate the sum of eq .( [ eq : r_ik ] ) , the first term of it is given by the integer is a random variable and satisfies when .thus , is expressed as and we can treat as a random variable instead of .the integer is uniformly distributed in .thus , the average of eq .( [ eq : first_r ] ) is where is the average over . in , it is shown that thus , eq . ( [ eq : first_r2 ] ) is equivalent to from the above result , we obtain the following relation the average of the second term of the sum of eq . 
is given by note that it is clear that therefore , eq . ( [ eq : term2 ] )is rewritten as thus , the sum of the average of the second term in eq .( [ eq : r_ik ] ) is written as similarly , we obtain the average of the sum of the third term of eq .( [ eq : r_ik ] ) : finally , we obtain the following relation from eq . and eq . , we arrive at snr of the user with the spreading sequence where of the authors , hirofumi tsuda , would like to thank for advise of prof .nobuo yamashita and dr .shin - itiro goto .99 c. e. shannon , `` a mathematical theory of communication , '' _ bell system technical journal _ ,volume 27 , issue 3 , 379 - 423 ( 1948 ) .s. verd and s. shamai , `` spectral efficiency of cdma with random spreading . ''_ ieee transactions on information theory _ 45.2 ( 1999 ) : 622 - 640. j. proakis , `` digital communications .1995 '' , mcgraw - hill , new york .r. steele and l. hanzo , `` mobile radio communications '' , second and third generation cellular and watm systems : 2nd .ieee press - john wiley , 1999 .m. honig , u. madhow and s. verdu , `` blind adaptive multiuser detection . ''_ ieee transactions on information theory _ 41.4 ( 1995 ) : 944 - 960 .t. ristaniemi and j. joutsensalo , `` advanced ica - based receivers for block fading ds - cdma channels '' , _ signal processing _ , 82.3 ( 2002 ) : 417 - 431 .a. hyvrinen , j. karhunen , and e. oja , `` independent component analysis '' , vol .john wiley sons , 2004 .s. verdu , `` minimum probability of error for asynchronous gaussian multiple - access channels '' , _ ieee transactions on information theory _ , 32.1 ( 1986 ) : 85 - 96 .r. gold , `` optimal binary sequences for spread spectrum multiplexing ( corresp . ) . ''_ ieee transactions on information theory _ , 13.4 ( 1967 ) : 619 - 621 . t. kasami , `` weight distribution formula for some class of cyclic codes . ''_ coordinated science laboratory report _r-285 ( 1966 ) .g. heidari - bateni and c. d. mcgillem , `` a chaotic direct - sequence spread - spectrum communication system . ''_ ieee transactions on communications _ ,42.234 ( 1994 ) : 1524 - 1527 .halle , c.w .wu , m. itoh and l.o .chua , `` spread spectrum communication through modulation of chaos . '' _ international journal of bifurcation and chaos _ , 3.02 ( 1993 ) : 469 - 477. y. soobul , k. chady and h. c.s .rughooputh , `` digital chaotic coding and modulation in cdma . ''_ africon conference in africa , 2002 .ieee africon .6th_. vol .2 . ieee , 2002 .chen , k. yao , k. umeno and e. biglieri , `` design of spread - spectrum sequences using chaotic dynamical systems and ergodic theory . ''_ ieee transactions on circuits and systems i : fundamental theory and applications _ , 48.9 ( 2001 ) : 1110 - 1114 .k. umeno and k. kitayama .`` spreading sequences using periodic orbits of chaos for cdma . ''_ electronics letters _ 35.7 ( 1999 ) : 545 - 546. g. mazzini , r. rovatti and g. setti , `` interference minimisation by autocorrelation shaping in asynchronous ds - cdma systems : chaos - based spreading is nearly optimal . ''_ electronics letters _ , 35.13 ( 1999 ) : 1054 - 1055 . g. mazzini , g. setti , and r. rovatti , `` chaotic complex spreading sequences for asynchronous ds - cdma .i. system modeling and results . ''_ ieee transactions oncircuits and systems i : fundamental theory and applications _ ,44.10 ( 1997 ) : 937 - 947 .r. riccardo , g. mazzini and g. 
setti , `` on the ultimate limits of chaos - based asynchronous ds - cdma - i : basic definitions and results '' , _ ieee transactions on circuits and systems _i , 51.7 ( 2004 ) : 1336 - 1347 .d. v. sarwate , `` bounds on crosscorrelation and autocorrelation of sequences '' , _ ieee transactions on information theory _ , 25.6 ( 1979 ) : 720 - 724. l. r. welch , `` lower bounds on the maximum cross correlation of signals '' , _ ieee transactions on information theory _ , 20.3 ( 1974 ) : 397 - 399. r. frank , s. zadoff and r. heimiller , `` phase shift pulse codes with good periodic correlation properties ( corresp . ) . '' _ ire transactions on information theory _ 8.6 ( 1962 ) : 381 - 382. d. chu , `` polyphase codes with good periodic correlation properties ( corresp . ) . ''_ ieee transactions on information theory _ 18.4 ( 1972 ) : 531 - 532 .j. oppermann and b. s. vucetic , `` complex spreading sequences with a wide range of correlation properties . ''_ ieee transactions on communications _ , 45.3 ( 1997 ) : 365 - 375. h. weyl .ber die gleichverteilung von zahlen mod .ann , 77:313 - 352 , 1916 .( in german ) . j. dick and f. pillichshammer , `` digital nets and sequences : discrepancy theory and quasi - monte carlo integration '' . cambridge university press , 2010 . k. umeno , `` spread spectrum communications based on almost periodic functions '' _ ieice technical report _, nlp 2014 - 101 , pp .11 - 16 ( 2014)(in japanese ) d. v. sarwate and m. b. pursley , `` crosscorrelation properties of pseudorandom and related sequences . ''_ proceedings of the ieee _ , 68.5 ( 1980 ) : 593 - 619 . m. b. pursley , `` performance evaluation for phase - coded spread - spectrum multiple - access communication .i - system analysis . '' _ ieee transactions on communications _ , 25 ( 1977 ) : 795 - 799. w. kuhn and a. w. tucker , `` nonlinear programming '' , in j. neyman ( ed . ) , proceedings of the second berkley symposium on mathematical statistics and probability ( university of california press , berkley , ca ) , pp .481 - 492 , 1951 .s. r. park , l. song and s. yoon , `` a new polyphase sequence with perfect even and good odd cross - correlation functions for ds / cdma systems . '' _ ieee transactions on vehicular technology _, 51.5 ( 2002 ) : 855 - 866 . j.g .van der corput , `` verteilungsfunktionen i und ii '' .wet_. , 38 ( 1935 ) , p. 813 .e. r. hansen , `` a table of series and products . ''prentice hall series in automatic computation , englewood cliffs : prentice hall , 1975 1 ( 1975 ) , eq .( 24.1.2 ) . | this paper shows an optimal spreading sequence in the weyl sequence class belonging to the extended frank - zadoff - chu ( fzc ) sequence family . the sequences in this class have the desired property that the order of crosscorrelation is low . we evaluate the upper bound of crosscorrelation in the class and construct the optimization problem . then , we derive the optimal spreading sequences as the global solution of the problem and show their snr in a special case . from this result , we propose how the initial elements are assigned , that is , how spreading sequences are assigned to each users . we also numerically compare our spreading sequences with other ones , the gold codes , the sequences of the fzc sequence families , the optimal chebyshev spreading sequences and the sp sequences in bit error rate . submitted paper spread spectrum communication , nonlinear programming , spreading sequence , signal to noise ratio , bit error rate |
clustering analysis ( or hca ) is an extensively studied field of unsupervised learning . very useful in dimensionality reduction problems , we will study ways of using this clustering method with the aim of reducing ( or removing ) the need for human intervention .this problem of human intervention stems from the fact that hca is used when we do not know the correct number of clusters in our data ( otherwise we might use , say , k - means ) .while the ability to cluster data with an unknown number of clusters is a powerful one , we often need a researcher to interpret the results - or cutoff the algorithm - to recover a meaningful cluster number .while our motivation stems from dna micro - array data and gene expression problems , these methods can apply to any similarly structured scenario . specifically, we will analyze different existing automated methods for cutting off hca and propose two new ones . in section ii we will discuss background material on hca and the existing methods and in section iii we will present some technical details on these methods and introduce our own .section 4 will contain results on simulated and actual data , and section 5 will examine data sampling procedures to improve accuracy .hierarchical clustering , briefly , seeks to pair up data points that are most similar to one another . with the agglomerative ( or bottom - up ) approach , we begin with data points forming singleton clusters .for each point , we measure the distance between it and its neighbors . the pair with the shortest distance between them is taken to form a new cluster .we then look at the distance between the points remaining and the newly formed cluster , and again pair off the two with shortest distance ( either adding to our 2-cluster , or forming another one ) .this process is repeated until we have a single cluster with points ( regardless of the absolute distance between points ) .naturally , this is a very good dimensionality reduction algorithm .unfortunately , it keeps going until we ve flattened our data to 1 dimension . in cases where in truth we have clusters ,this is problematic .the results of a hca are often expressed as a dendrogram , a tree - like graph that contains vital information about the distances measured in the clustering and the pairings generated .an example of a dendrogram can be seen in figure [ fig:1 ] .briefly , horizontal lines denote pairings , and the height of those lines represent the distance that needed to be bridged in order to cluster the points together .that is , the smaller the height ( or jump ) of a pairing , the closer the points were to begin with .our goal is to find a way to say , once the full tree is made , jumps beyond this point are not reasonable " , and we can cutoff the algorithm , keeping only those clusters generated before that point .the problem of cutting off a dendrogram is one that researchers encounter often , but there are no reliable automated methods for doing it .often , the gap statistic is the only proposed automated method , as in . as such, many researchers will inspect the finished dendrogram and manually select a cutoff point , based on their own judgment .apart from the obviously slow nature of this exercise , there is also the question of human error to consider - as well as bias . 
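as a concrete illustration of the procedure just described, the sketch below builds an agglomerative hierarchy with average linkage and a euclidean distance measure (the setting used throughout this paper) and reads off the sequence of merge heights, i.e. the jumps of the dendrogram; the two-gaussian toy data are an illustrative assumption.

import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),     # toy data: two well-separated blobs
               rng.normal(6.0, 1.0, size=(50, 2))])

# agglomerative hierarchy, average linkage, euclidean distance
Z = linkage(X, method="average", metric="euclidean")

# the third column of the linkage matrix holds the merge heights in the order the
# clusters were formed; the last, largest jump is the one joining the final two clusters
jumps = Z[:, 2]
print(jumps[-5:])

cutting the tree just below the first "unreasonable" jump is exactly the decision the methods discussed below try to automate.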
in cases where the cutoff is not easily determined , two different researchers may arrive at different conclusions as to the correct number of clusters - which could both be incorrect .algorithmic approaches aim to eliminate this , and range from simpler methods to more complex ones .an excellent summary of existing methods is given in , which is in fact referenced in .the latter , more importantly , develops the gap statistic .we will present the technical aspects in section iii , but we quickly discuss some properties here .first , the gap statistic is one of few methods that is capable of accurately estimating single clusters ( in the case where all our data belongs to one cluster ) , a situation often undefined for other methods .while it is rather precise overall , it requires the use of a reference distribution " , which must be chosen by the researcher .they put forward that the uniform distribution is in fact the best choice for unimodal distributions .a powerful result , it is still limited in other cases , and thus many researchers still take the manual approach. however , it generally outperforms other complex methods , as such we focus on the gap statistic . on the other side of the complexity spectrum, we have variants of the elbow method " .the elbow method looks to explain the variance in the data as a function of the number of clusters we assign .the more clusters we assign , the more variance we can explain .however , the marginal gain from adding new clusters will begin to diminish - we choose this point as the number of clusters .a variant of this method , often applied to dendrograms , looks for the largest acceleration of distance growth .while this method is very flexible , it can not handle the single - cluster case .we will look at both the elbow method variant and the gap statistic , as well as our own 2 methods we are presenting in this paper . while there are many other methods to compare them to, the gap statistic is quite representative of a successful ( if more complex ) solution - and tends to outperform the other known methods .the elbow method represents the more accepted simple approaches . in all tests in this paper, we use an agglomerative hierarchy , with average linkage and euclidean distance measure .the gap statistic is constructed from the within - cluster distances , and comparing their sum to the expected value under a null distribution .specifically , as given in , we have for clusters - \log w_k\ ] ] where , with , that is , we are looking at the sum of the within - cluster distances , across all clusters .computationally , we estimate the gap statistic and find the number of clusters to be ( as per ) where is the standard error from the estimation of .as mentioned , considers both a uniform distribution approach and a principal component construction . in many cases , the uniform distribution performs better , and this is the one we will use .this variant of the elbow method , which looks at the acceleration , is seen in . 
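(the acceleration-based variant is described next.) before that, the gap-statistic computation sketched above can be made concrete. the sketch below cuts the same average-linkage hierarchy at k clusters, draws B uniform reference data sets over the bounding box of the data, and applies the usual smallest-k rule gap(k) >= gap(k+1) - s_{k+1}; the within-cluster dispersion (sum over clusters of the mean pairwise squared distance) and the decision rule follow the standard prescription of the gap-statistic paper, filling in details elided above, and the three-blob toy data are illustrative.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def log_wk(X, labels):
    # log of the pooled within-cluster dispersion W_k
    D2 = squareform(pdist(X)) ** 2
    total = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) > 1:
            total += D2[np.ix_(idx, idx)].sum() / (2.0 * len(idx))
    return np.log(total)

def gap_statistic(X, k_max=8, B=20, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = X.min(axis=0), X.max(axis=0)
    Z = linkage(X, method="average", metric="euclidean")
    gaps, s = [], []
    for k in range(1, k_max + 1):
        obs = log_wk(X, fcluster(Z, k, criterion="maxclust"))
        ref = []
        for _ in range(B):
            Xb = rng.uniform(lo, hi, size=X.shape)        # uniform reference over the bounding box
            Zb = linkage(Xb, method="average", metric="euclidean")
            ref.append(log_wk(Xb, fcluster(Zb, k, criterion="maxclust")))
        ref = np.array(ref)
        gaps.append(ref.mean() - obs)
        s.append(ref.std() * np.sqrt(1.0 + 1.0 / B))
    for k in range(1, k_max):                             # smallest k with gap(k) >= gap(k+1) - s_{k+1}
        if gaps[k - 1] >= gaps[k] - s[k]:
            return k
    return k_max

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(50, 2)) for m in (0.0, 6.0, 12.0)])
print(gap_statistic(X))       # should recover the 3 blobs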
a straightforward method, we simply look at the acceleration in jump sizes .so given the set of distances from our clustering , the acceleration can be written as we choose our number of clusters as the jump with the highest acceleration , giving us } \left\ { d_i-2d_{i-1}-d_{i-2 } \right\}.\ ] ] while very simple and very fast , this method will never find the endpoints , ie , the singleton clusters and the single -element cluster cases .the first method we propose is based on the empirical distribution of jump sizes .specifically , we look to the mode of the distribution , denoted , adjusted by the standard deviation ( ) .our motivation is that the most common jump size likely does not represent a good cutoff point , and we should look at a higher jump threshold . as such, we take the number of clusters to be where is a parameter that can be tuned .naturally , tuning would require human intervention or a training data set .as such , we set it to somewhat arbitrarily .our second method is even simpler , but is surprisingly absent from the literature .inspired by the elbow method , we look at the maximum jump difference - as opposed to acceleration . our number of clustersis then given by }\left\ { d_i - d_{i-1 } \right\}.\ ] ] this method shares the elbow method s drawback that it can not solve the single cluster case ( though it can handle singleton clusters ) , but we thought it prudent to examine as the literature seemed to focus on acceleration and not velocity .we present results of simulations on several numbers of true clusters , drawn from a -dimensional normal distribution .each cluster is comprised of samples .we are most interested in tracking the success rate and the _ error size given an incorrect estimate_. that is , how often can we correctly estimate the number of clusters and when we ca nt , by how much are we off ?formally , this is given by and we chose this measure of the error to avoid under - estimating error size ( which would happen in a method that is often correct ) .the data used was drawn from a standard normal distribution , with cluster centers at , shown in figure [ fig:2 ] . in the case of cluster , the first is taken , for clusters , the first two , and so on .we present the results of the methods on simulations below in tables i - iii , with the best results in bold . .average cluster numbers over runs . [ cols="<,>,>,>,>",options="header " , ] while the maximum difference method seems robust in the face of different sampling , it seems that this exercise has revealed some instability in the mode method , which has reverted back to a lackluster performance . to a much lesser extent ,the elbow method is having some more trouble as well .it seems more likely that the choice of sampling parameters could be the cause of the clustering in the biobase data in figure [ fig:6 ] .more generally , we should look into determining optimal mixing parameters and and/or their impact on these methods .this mixing method does appear to perform better for the gene expression set than the previous choice of mixing parameters , which seems to confirm our hypothesis that there is perhaps an oversampling effect that can come into play , or something along those lines which much be explored more fully .we have developed two new empirical methods for clustering data in a hierarchical framework . 
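as a compact recap, the sketch below applies the three simple rules compared in this paper — the acceleration-based elbow variant, the mode method and the maximum-difference method — to the merge heights of an average-linkage hierarchy on a two-cluster toy set. the exact estimator of the mode and the value of the tuning parameter are elided above, so the histogram mode, alpha = 3, and the reading "count the merges whose jump exceeds mode + alpha * sigma, plus one" are assumptions for illustration only.

import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 2)),      # two gaussian blobs (illustrative)
               rng.normal(7.0, 1.0, size=(60, 2))])
n = len(X)
jumps = linkage(X, method="average", metric="euclidean")[:, 2]   # merge heights d_1 ... d_{n-1}

# elbow variant: cut at the merge with the largest second difference (acceleration) of the heights
k_elbow = n - np.argmax(np.diff(jumps, 2)) - 2

# maximum difference: cut at the merge with the largest first difference of the heights
k_maxdiff = n - np.argmax(np.diff(jumps)) - 1

# mode method (assumed reading): count merges whose jump exceeds the histogram mode
# of the jump distribution by alpha standard deviations
alpha = 3.0                                              # illustrative value
hist, edges = np.histogram(jumps, bins=30)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
k_mode = int(np.sum(jumps > mode + alpha * jumps.std())) + 1

print(k_elbow, k_maxdiff, k_mode)                        # with well-separated blobs all three report 2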
while our methods are substantially faster than the existing gap statistic , we insist that they do not handle the single - cluster case .in other cases , our maximum difference method is at least comparable to the gap statistic and outperforms the elbow method .in addition , the use of the data mixing procedure presented can greatly improve performance ( especially for the mode method ) , leading to the maximum difference method outperforming the other 3 .lastly , these methods can be implemented in a few lines of code and should allow researchers to quickly utilize them at little computational cost . in the futurewe hope to study the possibility of finding optimal mixing numbers and and the impact of the choice of these parameters of our results .hopefully , they are related to the instability detected in the mode method when using in our mixing procedure . | we propose two new methods for estimating the number of clusters in a hierarchical clustering framework in the hopes of creating a fully automated process with no human intervention . the methods are completely data - driven and require no input from the researcher , and as such are fully automated . they are quite easy to implement and not computationally intensive in the least . we analyze performance on several simulated data sets and the biobase gene expression set , comparing our methods to the established gap statistic and elbow methods and outperforming both in multi - cluster scenarios . shell : bare demo of ieeetran.cls for journals clustering , hierarchy , dendrogram , gene expression , empirical . |
since the confirmation by wilkinson microwave anisotropy probe ( wmap ) observations of the cosmic microwave background that the large scale ( in comoving units ) auto - correlation function of temperature fluctuations is close to zero [ fig . 16 , ( orthogonally projected spatial scale ) ; fig . 1 of ( non - projected spatial scale ) ] and difficult to reconcile with a standard cosmic concordance model ( 7 of ; though see also ) there has recently been considerable observational interest in understanding multiply connected models of the universe .the poincar dodecahedral space ( pds ) has been proposed as a better model of the 3-manifold of comoving space rather than an `` infinite '' flat space ( e.g. ; though see also ) .curvature estimates are consistent with the model : analysis of the 3-year wmap data results in best estimates of the total density parameter ( when combined with hst key project on data ) and ( when combined with supernova legacy survey data ) , or together with the third , fourth and fifth acoustic peaks , as estimated from their observations near the south galactic pole using the arcminute cosmology bolometer array receiver ( acbar ) , is obtained , consistently with that expected from the pds analyses , which require positive curvature in this range of values .it has also recently become noticed that global topology in a universe containing density perturbations can , in principle , have at least some effect on the metric , even though the effect is expected to be small . at the present epoch , in the case of a model of fundamental lengths which are slightly unequal by a small fraction , a new term appears in the acceleration equation , causing the scale factor to accelerate or decelerate in the directions of the three fundamental lengths in a way that tends to equalise them . in this context , the properties and implications of the twin paradox of special relativity in a multiply connected space need to be correctly understood .it has already been shown that resolving the twin paradox of special relativity in a multiply - connected `` minkowski '' space - time implies new understanding of the paradox relative to the case in simply connected minkowski space . moreover , it is known that , at least in the case of a static space with zero levi - civita connection , multiple connectedness implies a favoured space - time splitting .this should correspond to the comoving reference frame .this could be of considerable importance to the standard cosmological model , since it would provide a novel physical ( geometrical ) motivation for the existence of a favoured space - time foliation , i.e. the comoving coordinate system .the difference between the twin paradox of special relativity in a multiply connected space relative to that in a simply connected space is that in a multiply connected space , the two twins can move with constant relative speed and meet each other a second time , _ without requiring any acceleration_. the paradox is the apparent symmetry of the twins situations despite the time dilation effect expected due to their non - zero relative speed .it is difficult to understand how one twin can be younger than the other why should moving to the left or to the right be somehow favoured ? 
does the time dilation fail to occur ?as shown by several authors , the apparent symmetry is violated by the fact that ( at least ) one twin must identify the faces of the fundamental domain of the spatial 3-manifold _ non - simultaneously _ , and has problems in clock synchronisation . suggested that the apparent symmetry is also violated by an asymmetry between the homotopy classes of the worldlines of the two twins . here , we reexamine this suggestion . the multiply connected space version of the twins paradox is briefly reviewed in [ s - paradox ], the question of clarifying the asymmetry is described in [ s - asymmetry ] , and a multiply connected space - time , with a standard minkowski covering space - time , and the corresponding space - time diagrams , are introduced in [ s - st - diagrams ] . in [ s - worldlines - etc ] ,the worldlines of the two twins , projections from space - time to space , homotopy classes and winding numbers are expressed algebraically . in [ s - results ] , the resulting projections of the twins paths from space - time to space are given and their homotopy classes inferred .we also calculate whether or not the generator of the multiply connected space and the lorentz transformation commute with one another , in [ s - lorentz ] .discussion suggesting why this result differs from that of is presented in [ s - discuss ] , including a brief description in [ s - tperiodic ] of how the non - favoured twin can choose a time - limited instead of a space - limited fundamental domain of space - time .conclusions are summarised in [ s - conclu ] . a thought experiment to develop physical intuition of the non - favoured twin s understanding of the spatial homotopy classesis given in appendix [ s - stretchable - cord ] .for a short , concise review of the terminology , geometry and relativistic context of cosmic topology ( multiply connected universes in the context of modern , physical cosmology ) , see ( slightly outdated , but sufficient for beginners ) .there are several in - depth review papers available and workshop proceedings are in and the following articles , and . comparisons and classifications of different _ observational _ strategies ,have also been made .we set the conversion factor between space and time units to unity throughout : .the first articles presenting the special relativity twins paradox in a multiply connected space were apparently and .there are several more recent articles , including e.g. and .consider the following description . in a one - dimensional ,multiply connected , locally lorentz invariant space , one twin moves to the left and one to the right in rockets moving at constant relative speed to one another .the two twins meet twice , at two distinct space - time events . at the earlier space - time event ,the two twins are of equal ages . at the later space - time event, each twin considers the other to be younger due to lorentz time dilation .however , this later space - time event is a single space - time event each twin has undergone physical aging processes . 
if necessary, each twin could carry an atomic clock in order to more precisely measure proper time than with biological clocks .so there can only be one ordinal relation between the two twins ages at the second space - time event : either the leftward moving twin is younger , or the rightward moving twin is younger , or the two twins are of equal age .which is correct ?there is no acceleration ( change in velocity ) by either twin , so the usual explanation of the paradox ( in simply connected space ) is invalid .however , in this case , the situation is , or at least seems to be , perfectly symmetrical .it would be absurd for either `` leftwards '' or `` rightwards '' movement to yield a younger age . on the other hand , time dilation implies that the `` other '' twin must `` age more slowly '' .the twins physically meet up ( for an instant ) at the second space - time event , which is a single location in space - time , so the `` first '' twin objectively measures that the `` other '' twin is younger , so the two twins can not be equally aged at the second space - time event .but then which twin is `` the first '' and which is `` the other '' ?this question illustrates the apparent symmetry of the situation and the apparent implication that `` time dilation fails '' . alternatively ,if the situation is _ not _ symmetrical and time dilation occurs as is expected , then what breaks the apparent symmetry ? why should leftwards movement by favoured relative to rightward movement , or vice - versa ?which is correct : is the situation symmetrical with a failure of time dilation , or is the situation asymmetrical with time dilation taking place ? in , , and was shown that the apparent symmetry in the question as stated above is not mathematically ( physically ) possible .there is a hidden implicit assumption related to the usual intuitive error common to beginners in special relativity : the assumption of absolute simultaneity .the necessary asymmetry can be described in different ways .one way of explaining the asymmetry is as follows .one twin is able to consistently synchronise her clocks by sending photons in opposite directions to each make a loop around the universe and observing their simultaneous arrival time , and the other twin measures a delay between receiving the two photons ( or coded signal streams ) and is forced to conclude that something is asymmetrical about the nature of her `` inertial '' reference frame .carrying out this test requires a metrical measurement , that of proper time intervals . here , in order to examine s suggestion that there is a homotopy asymmetry , i.e. an asymmetry of topology rather than an asymmetry depending on the metric , it is easier to first explain the asymmetry of the apparently symmetrical paradox in a more geometric way , similar to the presentation in , but with some additional figures . and coordinate system , made multiply connected in each spatial section at constant time via the generator ( translation ) .a twin moving to the right at constant relative velocity ( hereafter , the `` rightmoving twin '' ) has worldline and simultaneity axis where , defining her coordinate system .space - time points a and a are identical space - time events under the generator , i.e. the single event may in general be called a. similarly , space - time points b , b and b are a single space - time event b. 
space - time events c and d are distinct from each other and from a and b ; c and d occur at the same spatial location for the rightmoving twin .the worldlines of the leftmoving and rightmoving twins are and respectively .[ f - spacetimeone ] , width=302 ] figure [ f - spacetimeone ] shows a standard minkowski space as covering space for simplicity with only one spatial dimension , for two twins moving with constant velocity relative to one another , hereafter , the `` leftmoving '' and `` rightmoving '' twins respectively . specifically , we set the rightmoving twin moving at a velocity , in units of the space - time conversion constant , towards the right .as a covering space , this space is a standard locally and globally lorentz invariant space - time .we choose a generator which favours , arbitrarily , but without loss of generality , the leftmoving twin .this generator , , a translation of constant length with generates the quotient , multiply connected space , , where is the group generated by , i.e. .this arbitrary choice reveals where an implicit assumption was made in the presentation of the paradox above : a generator matching space - time events in a way that preserves time unchanged in one reference frame , or in other words , a generator which `` is simultaneous '' in one reference frame , is not simultaneous in other frames .hence , symmetry is not possible . , but shown in the rest frame of the rightmoving twin .the identities due to the generator remain correct : a and b , _ even though they are non - simultaneous _ ; could be described as a non - simultaneous generator in the rightmoving twin s reference frame .the worldline of the leftmoving twin is mapped by the generator to a physically equivalent copy , , running from a to b .figure [ f - cylindertwo ] helps show this is possible by using our three - dimensional intuition .[ f - spacetimetwo ] , width=302 ] the generator identifies points ( in three - dimensional space , these would be faces of the fundamental domain rather than points ) in a spatial section at any given time : a and b .the rightmoving twin has and axes different from those of her leftmoving twin , in order to preserve lorentz invariance .she disagrees with the leftmoving twin about simultaneity of events , finding , e.g. that space - time event a occurs before space - time event a , space - time event b occurs before space - time event b , etc .this is shown in fig .[ f - spacetimetwo ] , where is the doppler boost .so far , this is identical to the situation in simply connected minkowski space - time , until we realise that both twins must agree that a and that b ., embedded in 3-d euclidean space and projected onto the page , or informally , `` rectangle a rolled up into a cylinder and stuck together to make it multiply connected '' .the leftmoving twin s worldline is shown .[ f - cylinderone ] , width=132 ] , embedded in 3-d euclidean space and projected onto the page , or informally , `` rectangle a rolled up and stuck together to make it multiply connected '' . 
_ the spatial boundaries of this region , and , are offset by a time interval a before being matched : the result is not a cylinder ._ note that in space - time , there are two geodesics joining space - time events a and : one at constant spatial position ( appearing vertical here ) , and one at constant time ( appearing nearly horizontal , but sloped at a moderate angle in this projection ) , and similarly for b and .however , only one of these two geodesics the vertical ( timelike ) one can be a worldline of a physical ( non - tachyonic ) particle , so there is no causality violation . a similar diagram to this onehas earlier been published in figure 5b in .the rightmoving twin s worldline from a to b is shown .[ f - cylindertwo ] , width=94 ] while both twins agree that a , they _disagree _ as to whether or not these are simultaneous events . using the terminology of , the leftmoving twin is able to synchronise clocks , while the rightmoving twin is _ unable to synchronise clocks_. writing the lorentz transformation from to as a matrix , the generator , initially expressed in the first twin s coordinates in eq .( [ e - generator ] ) , can be rewritten using the second twin s coordinates as an intuitive , geometric way of describing this is that according to the rightmoving twin , _ after cutting the covering space , non - simultaneous points are pasted together_. figure [ f - cylinderone ] shows the cylinder `` cut and pasted together '' out of a space - time region with constant space and time boundaries according to the leftmoving twin .note that a trapezium in fig .[ f - spacetimeone ] , e.g. a would serve just as well as the rectangle a for this `` rolling up '' process . as long as there are boundaries of constant time ,the result of identifying the other two sides of the trapezium is a cylinder .this trapezium , a , is particularly interesting when we shift to the reference frame of the rightmoving twin in fig .[ f - spacetimetwo ] , since the boundaries a and a now become spatial boundaries : cutting and pasting from the rightmoving twin s point of view must still identify identical space - time events to one another : _ either _ identifying a to a , or a to a , will correctly apply the isometry to the covering space and `` paste '' together our spatially finite interval in order to obtain a manifold without any boundaries .( the time domain extends infinitely from this rectangle in both the negative and positive directions . )so , one option for embedding this identification in 3-d euclidean space and projecting it onto the page would be to use the same trapezium .this corresponds to the rightmoving twin s intuition of identifying `` two spatial points '' to one another while trying to ignore the nature of space - time as a two - dimensional continuum : the set of points along the line segment a constitute a single `` spatial point '' , while the set of points along the line segment a constitute a single `` spatial point '' , a `` spatial point '' is really a worldline it is not just a single point , it s a curve in space - time . however , rather than identifying the `` spatial borders '' of a to one another , it is helpful to follow the rightmoving twin s naive intuition even further. let us try to cut out and then paste together the space - time region with both constant space boundaries and constant time boundaries , i.e. the region a .the result is shown in fig .[ f - cylindertwo ] ( cf . 
fig .5b in ) .this clearly shows the non - simultaneity of the cutting / pasting process for the rightmoving twin .the rectangle in space - time has to be given a time mismatch when it s pasted together .this visually illustrates the error in the twins paradox in a multiply connected space ( as described in [ s - paradox ] ) : _ the implicit assumption of absolute simultaneity_. if we implicitly assume absolute simultaneity , then we implicitly assume that there is no inconsistency in supposing that both twins can identify spatial boundaries without any time offsets .however , lorentz invariance is inconsistent with absolute simultaneity ; hence , the asymmetry : at most one twin can simultaneously identify spatial boundaries .of course , neither the leftmoving twin nor the rightmoving twin are necessarily favoured .a complete , precise statement of the problem needs to arbitrarily favour one twin over the other : either the leftmoving or the rightmoving twin may be chosen , but one of them must be chosen to be favoured in order for the space - time to be self - consistent .this also illustrates the potential interest for cosmology : several authors note the existence of a favoured inertial reference frame implied by the multiple - connectedness of a ( static ) space - time whose covering space - time is minkowski and that this could provide a physical motivation for the comoving coordinate system .is the apparent ( erroneous ) symmetry of the two twins situations broken by asymmetry between the homotopy classes of the two twins projected worldlines ( `` spatial paths '' ) in some way ?in it was suggested that the twin who simultaneously identifies spatial boundaries ( in this paper , the leftmoving twin ) has a spatial path of zero winding index , while the rightmoving twin has a spatial path of non - zero winding index .we consider the worldlines of the two twins between the two space - time events a and b , using figs [ f - spacetimeone ] and [ f - spacetimetwo ] and eqs ( [ e - generator ] ) , ( [ e - lorentz ] ) and ( [ e - generator - moving ] ) .a superscript `` l '' or `` r '' is used to denote which reference frame is being used for expression in a particular coordinate system . the leftmoving twin s worldline , can be written in her own reference frame as using her proper time ( see fig .[ f - spacetimeone ] ) .a copy of the leftmoving twin s worldline as mapped by can be written , using eq .( [ e - generator ] ) , as we can write this in the rightmoving twin s frame using eq .( [ e - lorentz ] ) : the rightmoving twin s worldline , can be written in her own reference frame as using her own proper time ( see fig .[ f - spacetimetwo ] ) , or equivalently in the leftmoving twin s frame , using and eq .( [ e - lorentz ] ) , as state that `` the asymmetry between [ the two twins ] spacetime trajectories lies in a topological invariant of their spatial geodesics , namely the homotopy class '' . 
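before examining that suggestion, a quick symbolic check makes the metric asymmetry quantitative. the sketch below assumes the standard parametrisations p(tau) = (0, tau) for the leftmoving twin and q(tau) = (beta gamma tau, gamma tau) in the leftmoving twin's coordinates, with fundamental length l; these explicit forms stand in for the elided equations above. it verifies that the pair of events identified by the generator is simultaneous only in the leftmoving twin's frame, and compares the proper time each twin accumulates between the meetings a and b.

import sympy as sp

beta, ell = sp.symbols('beta ell', positive=True)
gamma = 1 / sp.sqrt(1 - beta**2)

def boost(x, t):
    # leftmoving twin's coordinates -> rightmoving twin's coordinates
    return gamma * (x - beta * t), gamma * (t - beta * x)

# the generator identifies (x, t) with (x + ell, t): simultaneous for the leftmoving twin
print(sp.simplify(boost(ell, 0)[1] - boost(0, 0)[1]))    # -beta*gamma*ell: not simultaneous for the other twin

# second meeting b: the rightmoving twin reaches x = ell (= x = 0 on the quotient) at t = ell/beta
tB = ell / beta
tau_left = tB                                            # proper time of the twin at rest in these coordinates
tau_right = sp.sqrt(tB**2 - ell**2)                      # proper time along the moving worldline from a to b
print(sp.simplify(tau_right / tau_left))                 # sqrt(1 - beta**2) = 1/gamma < 1

so the identification that is simultaneous for the leftmoving twin carries a time offset for the rightmoving twin, and between the two meetings the rightmoving twin accumulates less proper time by the usual factor 1/gamma: in this construction time dilation does not fail, and the symmetry is broken exactly as described above.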
in order to consider spatial geodesics in the two twins reference frames , we need to project from space - time to space in each of their reference frames .let us write these projections and algebraically in the leftmoving and rightmoving twins frames in the most obvious way : and using eqs ( [ e - generator ] ) and([e - generator - moving ] ) , the generator can then be rewritten in _ projected _ form as and in the two twins frames respectively .the two spaces which space - time is projected to are clearly both equivalent to , where the group is defined by the generator or for the leftmoving or rightmoving twin respectively .the length ( circumference ) of is or respectively .it is important to note here that _ in the absence of metric information , the two twins do not know anything more than this about the nature of their respective spaces , and they do not know how their spaces relate to spatial sections , i.e. subspaces of the covering space . _in fact , metric information gives an important difference between these two ( in this paper , one - dimensional ) spaces : the rightmoving twin s constant time hypersurface has a time mismatch after `` going around '' the universe once .however , here the question is to examine the _ topological _ nature of the twins spatial paths in the _ absence _ of metric information .the space - time point of view which the rightmoving twin would have , if she had metric information available ( e.g. by sending out photons and measuring proper time intervals ) , is irrelevant . similarly to , we write that two loops in at a are homotopic to one another , `` if they can be _ continuously _ deformed into one another , i.e. that there exists a continuous map '' satisfying as in , we will assume that no proof is needed that , and we define the winding number as the `` the number of times a loop rolls around '' the circle . since the aim is to see if the leftmoving and rightmoving twins points of view can be distinguished in order to determine who is favoured , we need indices on the winding number to indicate whether it refers to the worldline of the leftmoving or rightmoving twin , and from whose point of view . again , we use the superscript `` l '' or `` r '' to denote which reference frame is being used . a twin s point of view of her own winding number is denoted by an empty subscript , and the subscript `` other '' is used for a twin s point of view of the other twin s winding number .that is , the leftmoving twin measures her own winding number and the other twin s winding number , and the rightmoving twin measures her own winding number and the other twin s winding number . in each case , the winding number is positive in the direction of motion of the `` other '' twin . , including the worldlines and of the leftmoving and rightmoving twins respectively , corresponding to and in the covering space - time as shown in figure [ f - spacetimeone ] , and showing their projections from space - time into space under the projection given in eq .( [ e - phi - l ] ) .arrows indicate increasing proper time along each worldline .the worldlines are given algebraically in [ s - worldlines ] .[ f - cylinderone_homotopy ] , width=226 ] , including the worldlines of the leftmoving and rightmoving twins respectively .these correspond to and in the covering space - time as shown in fig .[ f - spacetimetwo ] . 
also shown are their projections from space - time into space under the projection given in eq . ( [ e - phi - r ] ) . arrows indicate increasing proper time along each worldline . the worldlines are given algebraically in [ s - worldlines ] . [ f - cylindertwo_homotopy ] , width=226 ] figure [ f - cylinderone_homotopy ] shows that projects to a point , and projects to a closed loop . as stated in , the former path has a zero winding index , while the second has a unity winding index : from the point of view of the leftmoving twin , there is a difference in the two twins winding numbers . however , we have not yet checked if the winding numbers are relative or absolute . we evaluate the winding numbers as follows . using the two twins worldlines from the leftmoving twin s point of view , eqs ( [ e - p - l ] ) and ( [ e - q - l ] ) , and projecting them with eq . ( [ e - phi - l ] ) , we have and since from eq . ( [ e - generator - proj ] ) , clearly and in the sense of eq . ( [ e - defn - homotopy ] ) , so that the winding index of the leftmoving twin is zero , i.e. and that of the rightmoving twin is unity , figure [ f - cylindertwo_homotopy ] shows that projects to a point , and projects to a closed loop . the former path has a zero winding index , while the latter has a unity winding index : from the point of view of the rightmoving twin , there is a difference in the two twins winding numbers . however , if each twin assumes that she is stationary and that the other is moving away , as we have assumed throughout , then they also disagree regarding the winding numbers : neither simultaneity nor winding number is absolute . we evaluate this algebraically as follows . using the two twins worldlines from the rightmoving twin s point of view , eqs ( [ e - pg - r ] ) and ( [ e - q - r ] ) and projecting them with eq . ( [ e - phi - r ] ) , we have and eq . ( [ e - q - projected ] ) shows that in this case , the rightmoving twin has winding index zero : since from eq . ( [ e - generator - moving - proj ] ) , eq . ( [ e - pg - r - projected ] ) clearly gives using the sign convention which we use globally here for the or coordinates , the winding index of the leftmoving twin in this case would be . more usefully , using the winding number convention described in [ s - winding - defn ] , i.e. the winding number is positive in the direction of motion of the `` other '' twin , this becomes in summary , from eqs . ( [ e - nl ] ) , ( [ e - nl - other ] ) , ( [ e - nr ] ) and ( [ e - nr - other ] ) , we have clearly , the homotopy classes of the twins worldlines do not reveal the asymmetry of the system : each twin considers herself to have winding index and the other twin to have .
also suggested in their eqs ( 6)(8 ) that the generator does not commute with the lorentz transformation .here we calculate this explicitly .equation ( [ e - lorentz ] ) gives applying the generator in the appropriate reference frame , eq .( [ e - generator - moving ] ) , gives \nonumber \\ & = & \gamma [ x -\beta t + l , t -\beta ( x + l ) ] .\label{e - gofl - two } \end{aligned}\ ] ] equation ( [ e - generator ] ) gives the lorentz transformation , eq .( [ e - lorentz ] ) , gives \nonumber \\ & = & \gamma [ x -\beta t + l , t - \beta ( x+l ) ] \nonumber \\ & = & g \circ { \cal l } ( ( x , t ) ) , \label{e - gofl - is - lofg}\end{aligned}\ ] ] where the last equality uses eq .( [ e - gofl - two ] ) .these last two equations give this result is in contrast to s suggestion that .it is not obvious why our results differ . here, we consider to be a holonomy transformation on the covering minkowski space , independently of any particular coordinate representation . as in , we consider to be a transformation `` from one frame of space - time coordinates to another system '' .the results summarised in [ s - results - both ] differ from the conclusion in , where it was pointed out that since winding indices are topological invariants , `` neither change of coordinates or reference frame ( which ought to be continuous ) can change [ the winding indices ] values '' , i.e. the rightmoving twin and the leftmoving twin should agree that the leftmoving twin has a zero winding index and the rightmoving twin a unity winding index . , but showing that the two worldlines together form a single closed loop in space - time .neither worldline constitutes a closed curve alone , prior to projection .[ f - cylinderone_easy ] , width=170 ] this argument does not take into account the nature of the projection from space - time to space .the argument is correct in that the topologically invariant nature of winding indices is valid in _ space - time _ , but is not necessarily valid in the worldlines _ after projection from space - time to space_. the projection from an -dimensional manifold to an -dimensional manifold does not ( in general ) conserve all topological properties of subspaces : while a continuous projection conserves continuity , it does not conserve non - continuity .for example , consider a discontinuous path in the leftmoving twin s reference frame , where is a step function , i.e. a function discontinuous for some values .this projects under to a continuous , closed loop . similarly in higher dimensions : consider , which can be continuously deformed into something resembling a mess of string with the two ends tied together and not touching itself anywhere. project this from euclidean 3-space into the euclidean 2-plane .the projection will ( in general ) be a graph with many nodes , not .in fact , the worldlines ( e.g. those labelled 1 , 2 , 3 , 4 in fig. 2 in ) are all open curves in space - time , i.e. each of them is in the same homotopy class as a single point .figures [ f - cylinderone_homotopy ] and [ f - cylindertwo_homotopy ] show that these open curves only become either a loop or a point after projection .the projection can convert a discontinuity into a continuity ( or a continuum of points into a single point ) . 
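as a brief aside , the commutation property calculated above ( eq . ( [ e - gofl - is - lofg ] ) ) can also be checked symbolically . the following minimal sympy sketch assumes , consistently with the composition worked out above , that the generator maps ( x , t ) to ( x + l , t ) when written in the leftmoving twin s frame and maps ( x , t ) to ( x + gamma l , t - gamma beta l ) when written in the rightmoving twin s frame ; the function names are our own illustrative choices .

import sympy as sp

x, t, l, beta = sp.symbols('x t l beta', real=True)
gamma = 1 / sp.sqrt(1 - beta**2)

def lorentz(p):
    # boost from the leftmoving to the rightmoving twin's coordinates
    X, T = p
    return (gamma * (X - beta * T), gamma * (T - beta * X))

def g_left(p):
    # generator written in the leftmoving twin's frame
    X, T = p
    return (X + l, T)

def g_right(p):
    # generator written in the rightmoving twin's frame (form implied above)
    X, T = p
    return (X + gamma * l, T - gamma * beta * l)

lhs = lorentz(g_left((x, t)))   # L(g((x, t)))
rhs = g_right(lorentz((x, t)))  # g(L((x, t))), with g expressed in the boosted frame
print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])   # expected output: [0, 0]

both components of the difference simplify to zero , i.e. the boost and the holonomy commute once the generator is expressed in the appropriate frame , in agreement with eq . ( [ e - gofl - is - lofg ] ) .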
in space - time , it is the union of the two twins worldlines ( two different paths in space - time from space - time event a to space - time event b ) which forms a closed curve , not either worldline alone .this is shown in fig .[ f - cylinderone_easy ] .this is why a `` change of coordinates or reference frame '' using a continuous function _ can _ change the winding index values of two worldlines , if those two worldlines are projections from -space - time to -space , considered in two different reference frames . nevertheless , with the use of metrical information and space - time rather than just space , it is interesting to note some different possibilities of what `` space '' could correspond to if thought of as a subspace of the covering space - time .let us now use our knowledge of metrical information for considering the nature of `` space '' , i.e. spacelike sections of space - time , for the two twins ., again in the reference frame of the rightmoving twin , showing part of a hypersurface at constant time for the rightmoving twin as thick horizontal line segments .space - time events c and c are identical to one another ( the generator identifying equal space - time events is illustrated ) ; space - time events e , e and e are identical to one another .a is a single space - time event , c is a single space - time event , and e is a single space - time event .the fundamental domain ( fd ) can be written as a space - limited region with unconstrained time ( shaded with a backslash `` '' ) and a time offset when matching boundaries .it can also be written as a _time_-limited region with unconstrained space ( shaded with a forward slash `` / '' ) and a space offset when matching boundaries . within this fd ,consider an observer at rest in this frame , capable of making metric measurements , located at space - time event a and shown in the covering space at a .for this observer , the events c and e happen `` over there and simultaneously to a '' .the observer will arrive at those spatially distant events without any motion , simply by waiting .see fig .[ f - cylindertwo_tperiodic ] for a cylindrical representation of the time - limited fd .[ f - spacetimetwoconst ] , width=302 ] a spacelike section of space - time at constant time for the leftmoving twin connects with itself and is clearly . and [ s - tperiodic ] .c and c are a single space - time location ; e and e are a single space - time location .time boundaries are identified to one another with a spatial offset .some of the multiple topological images of the worldlines of the leftmoving and rightmoving twins are labelled as previously , i.e. and respectively , as well as .the positive spatial and time directions are shown as and respectively .[ f - cylindertwo_tperiodic ] , width=302 ] in contrast , a spacelike section of space - time at constant time for the rightmoving twin does not connect with itself : there is a time offset of after one loop . for a spacelike section to connect with itself, it would need to be at non - constant time .this is not a problem for topological measurements in the absence of metrical information : the rightmoving observer who sends out slow moving probes equipped with neither clocks nor rods and having speeds which can not be precisely calibrated will assume that space is . 
in the covering space ,this could be most simply interpreted as replacing eq .( [ e - phi - r ] ) by a projection from to a subspace defined : in the rightmoving twin s reference frame , or equivalently , in the leftmoving twin s reference frame .in other words , the subspace of correspond to the rightmoving twin s interpretation of space from topological measurements alone would most easily be modelled as a spatial hypersurface of constant time for the leftmoving twin .while the rightmoving ( non - favoured ) twin making only topological measurements does not _ know _ that her `` space '' consists of a set of space - time points at differing times , the nature of what would be her constant time hypersurface , if she were able to determine it , reveals an interesting property of a multiply connected space - time for a non - favoured observer : _ the fundamental domain can be chosen by matching time boundaries instead of space boundaries ._ figures [ f - spacetimetwoconst ] and [ f - cylindertwo_tperiodic ] illustrate this situation .the fundamental domain of this space - time can either be described as the region expressed as the direct product of a spatial interval as the first parameter and a time interval as the second , or as again with space first and time second .the former , , is the `` obvious '' fundamental domain , satisfying where the generator is easiest to use if expressed in the rightmoving twin s reference frame , i.e. as in eq .( [ e - generator - moving ] ) , and , , etc .the latter , , is the more surprising fundamental domain , satisfying again where is best expressed using eq .( [ e - generator - moving ] ) .what is surprising is that a verbal description of could be `` time is periodic for the non - favoured twin '' .however , this description omits important information ; a more complete description would be `` time is periodic , but with a spatial offset , for the non - favoured twin '' .in other words , the ability to choose a time - limited fundamental domain does _ not _ mean that events periodically repeat themselves from the rightmoving twin s point of view , since the generator of the periodicity does _ not _ yield an offset by a vector . what it yields is a space - time offset by ( an integer multiple of ) the vector .we could also say that for a non - favoured twin , the space - time periodicity is `` diagonal '' to the space - time axes , i.e. an asymmetry in the twin paradox of special relativity in a multiply connected space is less obvious than in a simply connected space , since neither twin accelerates .it was already known that the asymmetry required is the fact that ( at least ) one twin must identify space - time events non - simultaneously and has problems in clock synchronisation . 
here, space - time diagrams and the corresponding algebra have been presented in order to examine whether or not the homotopy classes of the twins worldlines provide another asymmetry .they show that homotopy classes ( numbers of windings ) do _ not _ show which of the two twins of the twin paradox has a preferred status , contrary to what was previously suggested : each twin finds her own spatial path to have zero winding index and that of the other twin to have unity winding index ( in the direction of travel of the other twin ) .although the twins apparent symmetry is broken by the need for the non - favoured twin to non - simultaneously identify spatial domain boundaries , and by the non - favoured twin s problems in clock synchronisation ( provided that she has precise clocks ) , the non - favoured twin _ can not _ detect her disfavoured state by measuring the topological properties of the two twins worldlines in the absence of precise metric measurements with clocks or rods . on the other hand , a non - favoured twin capable of making precise metric measurements will notice many surprising properties of space - time .for example , a non - favoured twin could define the fundamental domain of space - time to be where the first dimension is space and the second is time . in words , time would be `` periodic with period and a spatial offset when matching time boundaries '' as an alternative explanation to having space be `` periodic with period and a time offset when matching space boundaries '' .we thank agnieszka szaniewska for a careful reading and useful comments on this paper . st.b .acknowledges suport form polish ministry of science grant no nn203 394234 .r. , lustig s. , steiner f. , 2005a , cqg , 22 , 3443 , http://arxiv.org/abs/astro-ph/0504656 [ ] r. , lustig s. , steiner f. , 2005b , cqg , 22 , 2061 , http://arxiv.org/abs/astro-ph/0412569 [ ] j. d. , levin j. , 2001 , phys .a , 63 , 044104 , http://arxiv.org/abs/gr-qc/0101014 [ ] v. , roukema b. f. , eds , 2000 , `` cosmological topology in paris 1998 '' paris : blanlil & roukema , http://arxiv.org/abs/astro-ph/0010170 [ ] c. h. , stewart d. r. , 1973 , phys .d , 8 , 1662 g. , 2004 , mnras , 348 , 885 , http://arxiv.org/abs/astro-ph/0310207 [ ] j. , 2005 , arxiv preprints , http://arxiv.org/abs/astro-ph/0503014 [ ] m. , luminet j. , 1995 , phys ., 254 , 135 , http://arxiv.org/abs/gr-qc/9605010 [ ] r. j. , 1990 , european journal of physics , 11 , 25 j. , roukema b. f. , 1999 , in nato asic proc . 541 : theoretical and observational cosmology topology of the universe : theory and observation .p. 117 , http://arxiv.org/abs/astro-ph/9901364 [ ] j. , weeks j. r. , riazuelo a. , lehoucq r. , uzan j. , 2003 , nature , 425 , 593 , http://arxiv.org/abs/astro-ph/0310253 [ ] j .- p ., 1998 , acta cosmologica , xxiv-1 , 105 , http://arxiv.org/abs/gr-qc/9804006 [ ] a. , jaffe a. , 2007 , physical review letters , 99 , 081302 , http://arxiv.org/abs/astro-ph/0702436 [ ] p. c. , 1983 , am . j. phys . , 51 ,791 p. c. , 1986 , am .j. phys . , 54 , 334 m. j. , gomero g. i. , 2004 , braz. j. phys . , 34 , 1358 , http://arxiv.org/abs/astro-ph/0402324 [ ] c. l. , ade p. a. r. , bock j. j. , bond j. r. , brevik j. a. , contaldi c. r. , daub m. d. , dempsey j. t. , goldstein j. h. , et al .2008 , arxiv e - prints , 801 , http://arxiv.org/abs/0801.1491 [ ] b. f. , 2000 , bull .astr . soc .india , 28 , 483 , http://arxiv.org/abs/astro-ph/0010185 [ ] b. f. 
, 2002 , in marcel grossmann ix conference on general relativity , eds v.g .gurzadyan , r.t .jantzen and r. ruffini , world scientific , singapore observational approaches to the topology of the universe .p. 1937 , http://arxiv.org/abs/astro-ph/0010189 [ ] b. f. , bajtlik s. , biesiada m. , szaniewska a. , jurkiewicz h. , 2007 , a&a , 463 , 861 , http://arxiv.org/abs/astro-ph/0602159 [ ] b. f. , buliski z. , szaniewska a. , gaudin n. e. , 2008 , a&a , 486 , 55 , http://arxiv.org/abs/0801.0006 [ ] b. f. , lew b. , cechowska m. , marecki a. , bajtlik s. , 2004 , a&a , 423 , 821 , http://arxiv.org/abs/astro-ph/0402608 [ ] d. n. , bean r. , dor o. , nolta m. r. , bennett c. l. , hinshaw g. , jarosik n. , komatsu e. , page l. , peiris h. v. , verde l. , barnes c. , halpern m. , hill r. s. , kogut a. , et al .2007 , apjsupp , 170 , 377 , http://arxiv.org/abs/astro-ph/0603449 [ ] d. n. , verde l. , peiris h. v. , komatsu e. , nolta m. r. , bennett c. l. , halpern m. , hinshaw g. , jarosik n. , kogut a. , limon m. , meyer s. s. , page l. , tucker g. s. , weiland j. l. , wollack e. , wright e. l. , 2003 , apjsupp , 148 , 175 , http://arxiv.org/abs/astro-ph/0302209 [ ] g. d. , 1998 , cqg , 15 , 2529 j .- p ., lehoucq r. , luminet j .-, 1999 , in proc .of the xix texas meeting , paris 1418 december 1998 , eds .e. aubourg , t. montmerle , j. paul and p. peter , article n 04/25 new developments in the search for the topology of the universe , http://arxiv.org/abs/gr-qc/0005128 [ ] j .- p . ,luminet j .-, lehoucq r. , peter p. , 2002, 23 , 277 , http://arxiv.org/abs/physics/0006039v1 [ ] o. , 2004 , submitted to found ., http://arxiv.org/abs/gr-qc/0403111 [ ]since we are interested in topology , suppose that neither twin has _precise _ measuring rods or clocks , though both twins may have _ very approximate _ methods of measuring metric properties .neither twin is aware that when she completes a loop of the universe , she may detect a time offset .however , both twins have read history books and are aware of claims that `` space is multiply connected '' , so they attempt to verify this experimentally .we can imagine that at event a , the two twins instantaneously create a highly stretchable physical link between them , such as a light - weight string or cord of negligible mass and extremely high strength against breaking .as they move apart , the cord stretches , preserving the topological properties of space connectedness , while ignoring time .we could alternatively imagine that one of the twins leaves behind a `` trail '' of some sort , e.g. like the vapour trail of an aircraft visible by human eye from the ground .however , in this case , we have to be careful to avoid thinking of the particles in the vapour trail as being at rest in any particular frame , since otherwise we favour one of the two twins arbitrarily .now consider the state of the cord `` at '' event b , when the two twins meet up again and join the two ends of the cord together .b is a single space - time event .even though the two twins disagree about space - time coordinates of the event , they agree that it is a single event and agree that they have physically joined the two ends of the cord . 
clearly the cord now forms a closed loop , of winding index one .firstly , it is misleading because event b is just one space - time point among a whole set of space - time points where the particles constituting the cord are located , but `` the state of the cord '' is only of interest at this point of the discussion in the local neighbourhood of event b. it is difficult to avoid intuitively thinking of `` the state of the cord '' , i.e. of the state of a spatially extended object `` at '' the _ time _ of the event b , which is wrong , because it assumes simultaneity . a better way of thinking of the cord is presented below . secondly , it is misleading because the word `` at '' suggests a mono - valued time coordinate for a single event .the reality is that just as in a multiply connected space , a single ( physical ) spatial point exists at many spatial points in the covering space , the situation is similar in a multiply connected space - time : a twin ( observer ) finds that a single space - time event exists at many ( in general ) non - simultaneous space - time points in the covering space - time .one twin happens to be favoured and finds that the multiple space - time copies of a single event are simultaneous , but the other twin , moving with a different velocity , has a generator which is `` diagonal '' to her space - time axes . from the leftmoving twin s point of view , in fig .[ f - spacetimeone ] , the cord can always be considered as a simultaneous object , i.e. a series of successive `` snapshots '' of the cord consist of horizontal line segments joining and , starting at a and sliding up to a final state of which for the leftmoving twin , is the state of the cord `` at '' the time of space - time event b. the rightmoving twin s point of view is similar , except that as can be seen in fig .[ f - spacetimetwo ] , `` simultaneous '' snapshots of the cord , i.e. horizontal line segments joining and , starting at a and sliding upwards , have a problem when the right - hand end of the cord arrives at b . at this point , the left - hand end of the cord has not yet arrived at b according to the righmoving twin s notion of simultaneity .however , b and b are a single physical space - time event : the space - time event of joining the two ends of the cord together occurs _ non - simultaneously _ according to the rightmoving twin .this is intuitively difficult to imagine .one way that the rightmoving twin could think about this could be that `` as '' the cord slides up from event a , it tilts in some way so that when / where the two ends of the cord are joined at b , the cord can be imagined as stretched along the line segment along a series of space - time events which are non - simultaneous .this requires the use of some arbitrary affine parameter to define `` as '' for the rightmoving twin , i.e. a parameter that is something like time but is not physical time .of course , the simplest option for this parameter is the leftmoving twin s time coordinate , but this does not make it any easier for the rightmoving twin to develop her intuition about it . if we consider the cord to be `` stretched '' rather than `` unrolled '' , so that the parts of the cord closest to each twin are ( nearly ) stationary with respect to that twin , and if the cord is created containing a mix of isotopes of radioactive elements in initially known proportions , then at event b , the proper times at the two ends of the cord will be measurable by measuring the isotopal mixes . 
in this case, both twins will agree that not only the rightmoving twin has aged less , but also that the end of the cord `` held '' by the rightmoving twin is younger than the end of the cord `` held '' by the leftmoving twin .so although the joined - up cord forms a single closed loop , its non - simultaneous nature is revealed by the discordant ages ( isotope mixes ) of the two ends that are joined up at b. this is dependent on the thought experimental setup requiring the cord to be locally ( nearly ) at rest with respect to each twin , i.e. the cord is `` stretched '' . with a different experimental setup for the behaviour of the cord ,the aging of the cord occurs differently , and can be calculated by studying the worldlines of the particles composing the cord .now that we have some way of seeing either twin s way of thinking of this closed loop from b to b , whose path through space ( projection of worldline to a spacelike hypersurface ) does this loop represent ?each twin considers herself to be stationary , and the other twin to be moving `` rightwards '' or `` leftwards '' , respectively .so each twin considers her own path through space to be a single point a path of zero winding index and that of the other twin to be a closed loop a path of winding index unity represented by the cord .each twin considers the _ other _ twin to have pulled and/or stretched the cord so that eventually the two ends could be joined , not herself .this is just an intuitive way of thinking of the projections described above : an observer in a spatially multiply connected , locally lorentz space - time , who is unable to make high - precision spatial and temporal measurements but can measure topological properties of space is unable to use the homotopy class of her spatial path ( projected worldline ) to detect the fact that she is either a favoured or a non - favoured observer .of course , if we understand the full nature of this space - time , then we can note that the nature of `` the cord '' for at least one of the twins is a cross - section through space - time at non - constant time , as noted above .this is necessary in order for event b to be a single event in space - time .it can also to help to remember a key idea in resolving the `` pole in the barn paradox '' of simply connected minkowski space : neither a pole nor the door - to - door path of a barn is a one - dimensional object both are _ two - dimensional space - time objects_. the `` length '' of any such object depends on the choice of the reference frame , or in other words , the choice of spacelike cross - section .yet another useful way of thinking of a pole is as a `` worldplane '' a collection of worldlines .our ordinary intuition of a pole as a one - dimensional object is due to our implicit assumption of absolute simultaneity .we can think of the cord stretched between the two twins and joined up at event b to be the entire filled - in area of the triangle in figs [ f - spacetimeone ] and [ f - spacetimetwo ] a two - dimensional space - time object . depending on various possiblethought experimental setups for creating / producing / stretching the cord , various sets of worldlines for the particles composing the cord are possible , but in each case would fill in this triangle . | in a multiply connected space , the two twins of the special relativity twin paradox move with constant relative speed and meet a second time without acceleration . 
the twins situations appear to be symmetrical despite the need for one to be younger due to time dilation . here , the suggestion that the apparent symmetry is broken by homotopy classes of the twins worldlines is reexamined using space - time diagrams . it is found that each twin finds her own spatial path to have zero winding index and that of the other twin to have unity winding index , i.e. the twins worldlines relative homotopy classes are symmetrical . although the twins apparent symmetry is in fact broken by the need for the non - favoured twin to non - simultaneously identify spatial domain boundaries , the non - favoured twin _ can not _ detect her disfavoured state if she only measures the homotopy classes of the two twins projected worldlines , contrary to what was previously suggested . we also note that for the non - favoured twin , the fundamental domain can be chosen by identifying time boundaries ( with a spatial offset ) instead of space boundaries ( with a temporal offset ) . reference systems time cosmology : theory |
one of the main goals of percolation theory in recent decades has been to understand the geometric structure of percolation clusters .considerable insight has been gained by decomposing the incipient infinite cluster into a _backbone _ plus _ dangling bonds _ , and then further decomposing the backbone into _ blobs _ and _ red bonds _ . to define the backbone , one typically fixes two distant sites in the incipient infinite cluster , and defines the backbone to be all those occupied bonds in the cluster which belong to trails between the specified sites .the remaining bonds in the cluster are considered dangling .similar definitions apply when considering spanning clusters between two opposing sides of a finite box ; this is the so - called _ busbar _ geometry .the bridges in the backbone constitute the red bonds , while the remaining bonds define the blobs . at criticality ,the average size of the spanning cluster scales as , with the linear system size and the fractal dimension .similarly , the size of the backbone scales as , and the number of red bonds as . while exact values for and are known ( see ) , this is not the case for . in , it was shown that coincides with the so - called monochromatic path - crossing exponent with . an exact characterization of in terms of a second - order partial differential equation with specific boundary conditions was given in , for which , unfortunately , no explicit solution is currently known .the exponent was estimated in using transfer matrices , and in by studying a suitable correlation function via monte carlo simulations on the torus . in this paper , we consider a natural partition of the edges of a percolation configuration , and study the fractal dimensions of the resulting clusters .specifically , we classify all occupied bonds in a given configuration into three types : branches , junctions and nonbridges .a bridge is a _ branch _ if and only if at least one of the two clusters produced by its deletion is a tree .junctions are those bridges which are not branches .deleting branches from percolation configurations produces _ leaf - free _ configurations , and further deleting junctions from leaf - free configurations generates bridge - free configurations .these definitions are illustrated in fig .[ fig : diagram ] .it is often useful to map a bond configuration to its corresponding baxter - kelland - wu ( bkw ) loop configuration , as illustrated in fig .[ fig : diagram ] .the loop configurations are drawn on the medial graph , the vertices of which correspond to the edges of the original graph .the medial graph of the square lattice is again a square lattice , rotated .each unoccupied edge of the original lattice is crossed by precisely two loop arcs , while occupied edges are crossed by none .the continuum limits of such loops are of central interest in studies of scharmm lwner evolution ( sle ) . at the critical point , the mean length of the largest loop scales as , with the hull fractal dimension .a related concept is the accessible external perimeter .this can be defined as the set of sites that have non - zero probability of being visited by a random walker which is initially far from a percolating cluster .the size of the accessible external perimeter scales as with . in two dimensions , coulomb - gas arguments the following exact expressions for , , and where for percolation the coulomb - gas coupling . 
] .we note that the magnetic exponent , the two - arm exponent satisfies , and that for percolation the thermal exponent .the two - arm exponent gives the asymptotic decay of the probability that at least two spanning clusters join inner and outer annuli ( of radii o(1 ) and respectively ) in the plane .we also note that and are related by the duality transformation .the most precise numerical estimate for currently known is .we study critical bond percolation on the torus , and show that as a consequence of self - duality the density of bridges and nonbridges both tend to 1/4 as .using monte carlo simulations , we observe that despite the fact that around 43% of all occupied edges are branches , the fractal dimension of the leaf - free clusters is simply , while their hulls are governed by .by contrast , the fractal dimension of the bridge - free configurations is , and that of their hulls is .[ fig : configuration ] shows a typical realization of the largest cluster in critical square - lattice bond percolation , showing the three different types of bond present . in more detail ,our main findings are summarized as follows . 1 .the leading finite - size correction to the density of nonbridges scales with exponent , consistent with .it follows that the probability that a given edge is not a bridge but has both its loop arcs in the same loop decays like as .the leading finite - size correction to the density of junctions also scales with exponent , while the density of branches is almost independent of system size .the fractal dimension of leaf - free clusters is , consistent with for percolation clusters .the hull fractal dimension for leaf - free configurations is , consistent with .the fractal dimension for bridge - free clusters is consistent with , and we provide the improved estimate .the hull fractal dimension for bridge - free configurations is , consistent with .the remainder of this paper is organized as follows .section [ model_algorithm_quantities ] introduces the model , algorithm and sampled quantities .numerical results are summarized and analyzed in section [ results ] .a brief discussion is given in section [ discussion ] .we study critical bond percolation on the square lattice with periodic boundary conditions , with linear system sizes , 16 , 24 , 32 , 48 , 64 , 96 , 128 , 256 , 512 , 1024 , 2048 , and 4096 . to generate a bond configuration, we independently visit each edge on the lattice and randomly place a bond with probability . for each system size, we produced at least independent samples ; for each we produced more than independent samples .a _ leaf _ in a percolation configuration is a site which is adjacent to precisely one occupied bond . given a percolation configuration we generate the corresponding _ leaf - free _ configuration via the following iterative procedure , often referred to as _burning_. for each leaf , we delete its adjacent bond . if this procedure generates new leaves , we repeat it until no leaves remain .the bonds which are deleted during this iterative process are precisely the branches defined in section [ introduction ] .the bridges in the leaf - free configurations are the junctions . deleting the junctions from the leaf - free configurations then produces bridge - free configurations .the algorithm we used to efficiently identify junctions in leaf - free configurations is described in sec .[ algorithm ] . given an arbitrary graph , the bridges can be identified in time . 
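as a concrete illustration of the burning procedure just described , the following python sketch strips branches iteratively from a bond configuration . the data layout ( each occupied bond stored as a frozenset of its two sites , with a map from sites to incident bonds ) and the function name are our own choices for exposition , not the implementation used for the simulations reported here .

from collections import defaultdict

def burn(bonds):
    # bonds : iterable of occupied bonds, each given as frozenset({site_a, site_b});
    # returns (leaf_free_bonds, branches)
    incident = defaultdict(set)
    for b in bonds:
        for site in b:
            incident[site].add(b)
    occupied = set(bonds)
    branches = set()
    leaves = [s for s, nb in incident.items() if len(nb) == 1]
    while leaves:
        s = leaves.pop()
        if len(incident[s]) != 1:
            continue                      # site may no longer be a leaf
        (b,) = incident[s]
        occupied.discard(b)
        branches.add(b)
        for site in b:
            incident[site].discard(b)
            if len(incident[site]) == 1:  # deleting a branch may expose a new leaf
                leaves.append(site)
    return occupied, branches

# tiny example : a triangle with a three-bond tail attached at site 0 ; the tail burns away
triangle = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 0})]
tail = [frozenset({0, 3}), frozenset({3, 4}), frozenset({4, 5})]
leaf_free, branches = burn(triangle + tail)
print(len(leaf_free), len(branches))     # -> 3 3

the bonds collected in branches are exactly those removed by burning , and the set returned alongside them is the leaf - free configuration ; identifying the junctions within it requires a separate bridge test , as discussed in sec . [ algorithm ] .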
rather than applying such graph algorithms to identify the junctions in our leaf - free configurations however , we took advantage of the associated loop configurations .these loop configurations were also used to measure the observable , defined in section [ measured quantities ] .consider an edge which is occupied in the leaf - free configuration , and denote the leaf - free cluster to which it belongs by . in the planar case ,it is clear that will be a bridge iff the two loop segments associated with it belong to the same loop .more generally , the same observation holds on the torus provided does not simultaneously wind in both the and directions .if does simultaneously wind in both the and directions , loop arguments may still be used , however the situation is more involved .it clearly remains true that if the two loop segments associated with belong to different loops , then is a nonbridge .suppose instead that the two loop segments associated with belong to the same loop , which we denote by .deleting breaks into two smaller loops , and .for each such loop , we let and denote the winding numbers in the and directions , respectively , and we define . as we explain below , the following two statements hold : 1 .[ bridge test ] if or , then is a bridge .[ nonbridge test ] if and , then is a nonbridge . as an illustration , in fig .[ fig : diagram ] edge 1 is a junction while edge 2 is a nonbridge , despite both of them being bounded by the same loop .edge 1 can be correctly classified using statement [ bridge test ] , while edge 2 can be correctly classified using statement [ nonbridge test ] . by making use of these observations ,all but very few edges in the leaf - free clusters can be classified as bridges / nonbridges .we note that in our implementation of the above algorithm , the required values can be immediately determined from the stored loop configuration without further computational effort . for the small number of edges to which neither of the above two statements apply ,we simply delete the edge and perform a connectivity check using simultaneous breadth - first search .this takes time per edge tested .we now justify statement [ bridge test ] . in this case , the loop is contained in a simply - connected region on the surface of the torus .the cluster contained within the loop is therefore disconnected from the remainder of the lattice , implying that is a bridge .edge 1 in fig .[ fig : diagram ] provides an illustration .finally , we justify statement [ nonbridge test ] . in this case , and either both wind in the direction , or both in the direction ( one winds in the positive sense , the other in the negative sense ) .suppose they wind in the direction .it then follows from the definition of the bkw loops that there can be no -windings in the cluster . by assumptionhowever , does contain an -winding , so it must be the case that belongs to a winding cycle in that winds in the direction .the edge is therefore not a bridge .edge 2 in fig .[ fig : diagram ] provides an illustration . from our simulations , we estimated the following quantities . 1 .the mean density of branches , junctions , and nonbridges .2 . the mean size of the largest cluster 3 . 
the mean size of the largest leaf - free cluster 4 .the mean size of the largest bridge - free cluster 5 .the mean length of the largest loop , , for the loop configuration associated with leaf - free configurations 6 .the mean length of the largest loop , , for the loop configuration associated with bridge - free configurations we note that fewer samples were generated for and than for other the quantities .in sections [ bond density ] , [ fractal dimension of clusters ] , [ fractal dimension of loops ] , we discuss least - squares fits for , , and , , , .the results are presented in tables [ tab : bond_density ] , [ tab : fd_clusters ] and [ tab : fd_loops ] . in section [ fitting methodology ] , we first make some comments on the anstze and methodology used. let ( ) denote the mean density of occupied edges whose two associated loop segments belong to the same ( distinct ) loop(s ) . from lemma [ loop lemma ] in appendix [ loop lemma appendix ] , we know that for bond percolation on we have for all . in the plane however , an edge is a bridge iff the two associated loop segments belong to the same loop .we therefore expect that both and should converge to 1/4 as .furthermore , there is a natural interpretation of the quantity .as noted in section [ algorithm ] , if the two loop segments associated with an edge belong to different loops , then that edge can not be a bridge .this implies that is equal to the probability of the event that `` a given edge is not a bridge but has both its loop arcs in the same loop '' .let us denote this event by .studying the finite - size behaviour of will therefore allow us to study the scaling of .since , it follows that is also equal to .armed with the above observations , we fit our monte carlo data for the densities , and to the finite - size scaling ansatz we note that since for all , the finite - size corrections of should be equal in magnitude and opposite in sign to the finite - size corrections of . since , the latter should be positive and the former negative .finally , we note that the event essentially characterizes edges which _ would _ be bridges in the plane , but which are prevented from being bridges on the torus by windings . by construction, branches always have at least one end attached to a tree , suggesting that they can not be _ trapped _ in winding cycles in this way .this would suggest that it should be that contributes the leading correction of away from its limiting value of 1/4 .the observables , , , are expected to display non - trivial critical scaling , and we fit them to the finite - size scaling ansatz where denotes the appropriate fractal dimension . as a precaution against correction - to - scaling terms that we failed to include in the fit ansatz , we imposed a lower cutoff on the data points admitted in the fit , and we systematically studied the effect on the value of increasing .generally , the preferred fit for any given ansatz corresponds to the smallest for which the goodness of fit is reasonable and for which subsequent increases in do not cause the value to drop by vastly more than one unit per degree of freedom . in practice , by `` reasonable '' we mean that , where df is the number of degrees of freedom . in all the fitsreported below we fixed , which corresponds to the exact value of the sub - leading thermal exponent . 
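to make the fitting procedure concrete , the sketch below performs a weighted least - squares fit of a density to a simple one - correction ansatz of the form rho(l) = rho_0 + b l^(-y) , with a lower cutoff on the admitted system sizes and a chi^2 per degree of freedom diagnostic . the data are synthetic placeholders rather than our simulation results , and this is an illustration of the procedure only , not the exact ansatz used for the results quoted below .

import numpy as np
from scipy.optimize import curve_fit

def ansatz(L, rho0, b, y):
    return rho0 + b * L**(-y)

# synthetic placeholder data standing in for measured densities and error bars
L = np.array([16, 32, 64, 128, 256, 512, 1024, 2048], dtype=float)
rng = np.random.default_rng(1)
sigma = 2e-4 * np.ones_like(L)
rho = 0.25 + 0.06 * L**(-0.5) + rng.normal(0.0, sigma)

Lmin = 32                        # lower cutoff on data points admitted in the fit
mask = L >= Lmin
popt, pcov = curve_fit(ansatz, L[mask], rho[mask], sigma=sigma[mask],
                       absolute_sigma=True, p0=(0.25, 0.1, 0.5))
chi2 = np.sum(((rho[mask] - ansatz(L[mask], *popt)) / sigma[mask])**2)
dof = mask.sum() - len(popt)
print('estimates :', popt)
print('one-sigma errors :', np.sqrt(np.diag(pcov)))
print('chi2 / dof :', chi2 / dof)

in practice the fit is repeated for increasing cutoff , and the stability of the estimates together with chi^2 per degree of freedom is monitored , as described above .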
leaving free in the fits of and we estimate .we therefore conjecture that , which we note is precisely equal to the two - arm exponent .we comment on this observation further in section [ discussion ] . for by contrast, we were unable to obtain stable fits with free .fixing , the resulting fits produce estimates of that are consistent with zero .in fact , we find is consistent with 0.21405018 for all . this weak finite - size dependence of is in good agreement with the arguments presented in section [ fitting methodology ] .all the fits for , and gave estimates of consistent with zero .we therefore set identically in the fits reported in table [ tab : bond_density ] . from the fits , we estimate , and .we note that within error bars , as expected .the fit details are summarized in table [ tab : bond_density ] .we also note that the estimates of for and are equal in magnitude and opposite in sign , which is as expected given that is consistent with zero for . in fig .[ fig : bond_density ] , we plot , and versus .the plot clearly demonstrates that the leading finite - size corrections for and are governed by exponent , while essentially no finite - size dependence can be observed for ..fit results for , , and . [ cols="<,<,<,<,<",options="header " , ]we have studied the geometric structure of percolation on the torus , by considering a partition of the edges into three natural classes . on the square lattice , we have found that leaf - free configurations have the same fractal dimension and hull dimension as standard percolation configurations , while bridge - free configurations have cluster and hull fractal dimensions consistent with the backbone and external perimeter dimensions , respectively .in addition to the results discussed above , we have extended our study of leaf - free configurations to site percolation on the triangular lattice and bond percolation on the simple - cubic lattice , the critical points of which are respectively 1/2 and 0.24881182(10 ) .we find numerically that the fractal dimensions of leaf - free clusters for these two models are respectively 1.8957(2 ) and 2.5227(6 ) , both of which are again consistent with the known results and for . in both cases ,our data show that the density of branches is again only very weakly dependent on the system size .it would also be of interest to study the bridge - free configurations on these lattices .in addition to investigating the fractal dimensions for cluster size , and in the triangular case also the hull length , it would be of interest to determine whether the leading finite - size correction to is again governed by the two - arm exponent .the two - arm exponent is usually defined by considering the probability of having multiple spanning clusters joining inner and outer annuli in the plane . as noted in section [ fitting methodology ]however , our results show that for percolation , the two - arm exponent also governs the probability of a rather natural geometric event on the torus : the event that a given edge is not a bridge but has both its loop arcs in the same loop .this provides an interesting alternative interpretation of the two - arm exponent in terms of toroidal geometry .let us refer to an edge that is not a bridge but has both its loop arcs in the same loop as a _pseudobridge_. we note that an alternative interpretation of the observation that is that the number of pseudobridges scales as . 
a natural question to ask is to what extent the above results carry over to the general setting of the fortuin - kasteleyn random - cluster model .consider the case of two dimensions once more . in that case , we know that if we fix the edge weight to its critical value and take we obtain the uniform spanning trees ( ust ) model .for this model all edges are branches , and so the leaf - free configurations , which are therefore empty , certainly do not scale in the same way as ust configurations . despite this observation , preliminary simulations and the chayes - machta algorithm for . ]performed on the toroidal square lattice at , , , , , and suggest that , for all ]. it would also be of interest to determine whether the fractal dimensions of cluster size and hull length for bridge - free random cluster configurations again coincide with and when .the authors wish to thank bob ziff for several useful comments / suggestions , and t.g . wishes to thank nick wormald for fruitful discussions relating to the density of bridges .this work is supported by the national nature science foundation of china under grant no .91024026 and 11275185 , and the chinese academy of sciences .it was also supported under the australian research council s discovery projects funding scheme ( project number dp110101141 ) , and t.g .is the recipient of an australian research council future fellowship ( project number ft100100494 ) .the simulations were carried out in part on nyu s its cluster , which is partly supported by nsf grant no .in addition , this research was undertaken with the assistance of resources provided at the nci national facility through the national computational merit allocation scheme supported by the australian government . j.f.w and y.j.d acknowledge the specialized research fund for the doctoral program of higher education under grant no .y.j.d also acknowledge the fundamental research funds for the central universities under grant no . 2340000034 .let ( ) denote the fraction of occupied edges whose two associated loop segments belong to the same ( distinct ) loop(s ) .let denote the number of edges in .since is a cellularly - embedded graph , it has a well - defined geometric dual and medial graph . for any denote its dual by . for ,let be the event that the two loop segments associated with both belong to the same loop , and let be the event that they belong to distinct loops .the key observation is that for any we have to see this , first note that the number of terms on either side of is , and that each term is either 0 or 1 .then note that there is a bijection between the terms on the left- and right - hand sides such that the term on the left - hand side is 1 iff the term on the right - hand side is 1 , as we now describe .let with , and let denote the _ dual _ configuration : include in iff . with the term on the left - hand side corresponding to , associate the term appearing on the right - hand side .this is clearly a 1 - 1 correspondence .let denote the loop configuration corresponding to . by construction , .the loop configuration differs from only in that the loop arcs cross in but cross in .if , then it follows that .the converse holds by duality , and so is established .see fig .[ fig : loops diagram ] for an illustration . | we investigate the geometric properties of percolation clusters , by studying square - lattice bond percolation on the torus . we show that the density of bridges and nonbridges both tend to 1/4 for large system sizes . 
using monte carlo simulations , we study the probability that a given edge is not a bridge but has both its loop arcs in the same loop , and find that it is governed by the two - arm exponent . we then classify bridges into two types : branches and junctions . a bridge is a _ branch _ iff at least one of the two clusters produced by its deletion is a tree . starting from a percolation configuration and deleting the branches results in a _ leaf - free _ configuration , while deleting all bridges produces a bridge - free configuration . although branches account for around 43% of all occupied bonds , we find that the fractal dimensions of the cluster size and hull length of leaf - free configurations are consistent with those for standard percolation configurations . by contrast , we find that the fractal dimensions of the cluster size and hull length of bridge - free configurations are respectively given by the backbone and external perimeter dimensions . we estimate the backbone fractal dimension to be . |
advances in metrology are pivotal to improve measurement standards , to develop ultrasensitive technologies for defence and healthcare , and to push the boundaries of science , as demonstrated by the detection of gravitational waves . in a typical metrological setting, an unknown parameter is dynamically imprinted on a suitably prepared probe .we can think e.g. of a two - level spin undergoing a unitary phase shift . by subsequently interrogating the probeone builds an estimate for the parameter .the corresponding mean - square error can be reduced , for instance , by using uncorrelated identical probes . in that case, scales asymptotically as , which is referred to as the standard quantum limit .however , if those probes were prepared in an entangled state , the resulting uncertainty could be further reduced by an additional factor of , leading to . this ultimate quantum enhancement in resolutionis termed heisenberg limit and incarnates the _ holy grail _ of quantum metrology . in practice, the unitary dynamics of the probe will be distorted by noise , due to unavoidable interactions with its surroundings .unfortunately , the metrological advantage of entangled probes over separable ones vanishes for most types of uncorrelated noise , such as spontaneous emission , depolarizing noise , or phase damping .entanglement may remain advantageous though , provided one gains precise control over the noise strength , and only for limited cases such as time - inhomogeneous phase - covariant noise , transversal noise , or when error - correction protocols may be used .creating entangled states with a large number of particles is anyhow a costly process , limited by technological constraints .furthermore , to fully harness the metrological power of entanglement in presence of noise , collective measurements on all probes at the output would be generally required .this contrasts with the noiseless scenario , in which separable measurements ( i.e. , performed locally on each probe ) suffice to attain the heisenberg scaling .one can try to circumvent these problems by devising an alternative _ sequential _ or ` multi - round ' strategy , in which the parameter - imprinting unitary acts consecutive times on a single probe before performing the final measurement . in absence of noise , this sequential setting is formally equivalent to the parallel one , the only difference being that quantum _ coherence _ takes over the instrumental role of entanglement .the sequential scheme seems more appealing from a practical viewpoint , as only a single probe needs to be addressed in both state preparation and final interrogation .however , the heisenberg scaling of the precision can not be maintained asymptotically in the sequential scenario either , once again due to the detrimental effects of noise .given the severe limitations that environmental disturbance places on quantum - enhanced metrology , for practical purposes it seems best to give up the prospect of super - classical _ asymptotic _ scaling of the resolution and to concentrate instead in using the _ finite _ resources available as efficiently as possible .in this paper , we explore the optimization of phase estimation with a two - level probe , in the presence of _ unital phase - covariant _ noise . 
to that end , in sec .[ sec : noise ] we introduce a simple versatile model in which the noise is intrinsically accounted for : we take the generator of the phase shift to be partly unknown and sample instances of it from some probability distribution .the ensuing average mimics the environmental effects . in sec . [sec : sens ] we calculate the _ quantum fisher information _ ( qfi ) , which can be meaningfully regarded as a quantitative benchmark for the optimal estimation sensitivity , and derive a close - fitting lower bound to it .both quantities grow quadratically for small , reach a maximum at some , and decay to zero as increases further .in particular , we obtain from in terms of parameters directly accessible via process tomography , giving a useful prescription for practical phase estimation with a large guaranteed sensitivity .we do this for any unital phase - covariant qubit channel , hence delivering results widely applicable to a broad range of relevant physical processes , including those in which noise effects of the depolarizing type are dominant , such as spin - lattice relaxation at room temperature . in sec .[ sec : ex ] we then illustrate our results by choosing a specific distribution for the stochastic generator .we compare the qfi in the sequential setting ( with and without passive correlated ancillas ) with the actual phase sensitivity of given feasible measurements . for completeness, we also compute the qfi analytically in a parallel - entangled setting starting from an -qubit ghz state .although the qfi exhibits an asymptotic linear scaling in in such setting , we find that entangled probes may provide no practical advantage when their interrogation is restricted to measurements of local observables on each individual qubit .in fact , in such case the sensitivity for the parallel - entangled strategy reduces to that of the sequential one , where the ` number of probes ' comes to play the role of the ` number of rounds ' .our analysis , summarized in sec .[ sec : d ] , reveals feasible solutions for quantum metrology based on the little - studied sequential paradigm ( possibly supplemented by a passive ancilla ) , robust even under sizeable levels of noise .let us start by introducing our model for ease of illustration . in the sequential scenario ,a two - level probe undergoes a sequence of phase shifts before being interrogated .the generator can be written as , where the axis is sampled from some normalized probability distribution and is the vector of the three pauli matrices .thus the phase - imprinting operation , which transforms the probe state at each step , is the resulting qubit channel is completely positive , trace preserving , _ unital _( i.e. ) , and contractive , i.e. any state will be asymptotically mapped to ; note also that eq .( [ eq : channel ] ) is akin to the classical simulation method of ref . . without loss of generalitywe may take the average rotation axis proportional to , hence restricting to probability distributions with axial symmetry on the bloch sphere , so that our qubit channel is _ phase - covariant _ , as it commutes with .physically , we may think , for instance , of the free evolution of a nuclear spin with gyromagnetic ratio in an external magnetic field pointing along . 
in that case , so that , where is some coarse - grained time resolution , of the same order as the thermal fluctuations of the environment .the interactions with the surrounding nuclei at large temperatures result in random changes of the net direction of the magnetic field on our spin .this gives rise to a relaxation process towards the thermal equilibrium state .if the direction of were kept fixed and the environmental effects were accounted for by a fluctuating magnetic field intensity , the model would realize pure dephasing instead , which is often dominant at short time scales . while the model in eq .( [ eq : channel ] ) can be conveniently adopted as a physical example to focus our analysis , the results derived in this paper will hold for a more general class of channels , namely all the unital phase - covariant qubit channels , whose description is recalled here . given a single - qubit state , any single - qubit channel maps the bloch vector into , where is a real distortion matrix , and is a displacement vector . for the most general unital phase - covariant channel , and channels encode information about not only in the rotation of the bloch ball by a function , but also in its deformation , through the singular values of , namely and the doubly - degenerate , which must satisfy and ( implying ) for the map to be completely positive .( [ eq : phase_covariant ] ) thus generalizes the canonical phase - covariant channels like phase damping , for which . in what followswe shall work with the liouville representation , which acts on vectorizations of any density matrix in the computational basis as , where writes as will now analyze the sensitivity for estimating a phase imprinted by any unital phase - covariant qubit channel , starting with the sequential estimation setting . recall that the estimate is built by measuring some observable on the final state of the probe after successive applications of the channel .we can then gauge the phase sensitivity of as where and .in general , , where is the _ classical _ fisher information of a projective measurement onto the eigenstates of the observable ( assumed non - degenerate ) . in particular , one can verify that equality holds for all the explicit examples presented later in the paper , henceforth we shall simply refer to as the phase sensitivity associated to the ( generally suboptimal ) observable .the qfi , which captures geometrically the rate of change of the evolved probe state under an infinitesimal variation of the parameter , corresponds to the classical fisher information of an _ optimal _ observable ( i.e. ) and is thus a meaningful benchmark for the best estimation protocol .such an optimal observable is diagonal in the eigenbasis of the symmetric logarithmic derivative ( sld ) , defined implicitly by note that the prominent role of the qfi in quantum estimation theory is well established in an asymptotic setting , by virtue of its appearance in the quantum cramr - rao bound , , , which becomes tight in the limit of a large number of independent repetitions .however , the qfi also enters in both the van trees inequality and the ziv - zakai bound , which can provide tighter and more versatile bounds on the mean - square error in the relevant case of finite ( including bayesian settings ) .therefore , we shall adopt the qfi as our main figure of merit , in keeping with the quantum metrology bulk literature ( see e.g. 
for a recent discussion ) and in compliance with the spirit of this paper , which focuses on the use of finite resources to retain quantum enhancements in the estimation sensitivity . we will also assume maximization of over the initial state of the probe unless stated otherwise . to calculate the qfi we make use of the formula where and are , respectively , the eigenvectors and eigenvalues of ( excluding terms with from the sum ) , and we have made the dependence of the qfi on and explicit . by preparing the probe in the optimal , maximally coherent state , we obtain exactly ( eq . [ eq : qfi_formula_general ] ) . the qfi thus grows as for small , reaches a peak at some , and then decays asymptotically to zero . a similar qualitative behaviour has been recently reported under other types of ( non - unital ) noise , such as erasure , spontaneous emission , and phase damping . in order to optimize our sequential protocol , we need a practical way to determine . to that end , we aim to bound the qfi from below . in general , one may use , which is not necessarily tight . luckily , for the family of channels encompassed by our , a tighter bound stemming from , where stands for the operator norm , can be established ( see appendix [ app : bound ] ) . concretely , we will thus define a lower bound satisfying $\leq f_n(\varphi)$ ( eq . [ eq : bound_result ] ) , which closely follows the behaviour of the qfi , as confirmed by our numerical analysis ( cf . [ fig1 ] ) . maximizing yields ( rounded to the nearest integer and using natural logarithm ) . this approximation to the optimal ` sampling time ' only depends on , which is experimentally accessible through process tomography . [ figure [ fig1 ] : ( solid ) and lower bound ( dashed ) versus number of rounds under the channel of eq . for the von mises - fisher distribution , with and , and probe initialized in the state . the maximum of is at , only one round further than the actual maximum of . is discrete and the continuous lines are a guide to the eye . all the quantities plotted are dimensionless . ] it follows that there exist observables with a phase sensitivity of _ at least _ , which could be made to grow above by just running the sequential protocol for iterations . once the precision has been optimized at the single - probe level , it can be scaled up `` classically '' by increasing the number of independent probes . we emphasize that eqs . and apply _ generally _ to any phase - covariant channel preserving the identity . therefore , we have provided useful guidelines for precise phase estimation under a wide class of physically motivated noise models . to illustrate our results , we shall pick the von mises - fisher distribution for our random generator in eq . . this distribution can be seen as the counterpart of a gaussian over the bloch sphere : it becomes uniform for ( i.e. is equally likely to point in any direction , as in a black - box scenario ) , whereas it localizes sharply around the for .
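a rough numerical illustration of these quantities can be put together in a few lines ( this is a sketch written for the present text , not the authors' code ; the sample sizes , function names and finite - difference step are illustrative assumptions ) : the channel is built by monte carlo averaging of rotations with axes drawn from a von mises - fisher distribution , and the qfi of the n - round output state is evaluated through the spectral formula quoted above .

```python
import numpy as np

# Pauli matrices and the maximally coherent probe state (|0> + |1>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PLUS = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)

def sample_vmf_axes(kappa, size, rng):
    """Rotation axes drawn from a von Mises-Fisher distribution with mean direction +z."""
    xi = rng.random(size)
    w = 1.0 + np.log(xi + (1.0 - xi) * np.exp(-2.0 * kappa)) / kappa   # cos(theta)
    alpha = 2.0 * np.pi * rng.random(size)                             # azimuth
    s = np.sqrt(np.clip(1.0 - w**2, 0.0, None))
    return np.stack([s * np.cos(alpha), s * np.sin(alpha), w], axis=1)

def channel(rho, phase, kappa, n_samples=20000, seed=0):
    """Monte Carlo estimate of the noisy phase shift: average of U rho U^dagger over
    unitaries U = exp(-i*phase/2 * n.sigma), with the axis n drawn from vMF(kappa)."""
    rng = np.random.default_rng(seed)
    axes = sample_vmf_axes(kappa, n_samples, rng)                       # shape (n_samples, 3)
    ndots = np.einsum('si,ijk->sjk', axes, np.stack([X, Y, Z]))         # n.sigma per sample
    U = np.cos(phase / 2.0) * np.eye(2) - 1j * np.sin(phase / 2.0) * ndots
    return np.einsum('sij,jk,slk->il', U, rho, U.conj()) / n_samples

def qfi(phase, kappa, n_rounds, dphi=1e-4):
    """QFI via F = 2 sum_ij |<i| d_rho |j>|^2 / (lam_i + lam_j), using a central
    finite difference for d_rho (same random axes at phase +/- dphi)."""
    def evolve(ph):
        rho = PLUS.copy()
        for k in range(n_rounds):
            rho = channel(rho, ph, kappa, seed=k)   # fixed seeds keep the difference smooth
        return rho
    rho = evolve(phase)
    drho = (evolve(phase + dphi) - evolve(phase - dphi)) / (2.0 * dphi)
    lam, vec = np.linalg.eigh(rho)
    F = 0.0
    for i in range(2):
        for j in range(2):
            if lam[i] + lam[j] > 1e-12:
                F += 2.0 * abs(vec[:, i].conj() @ drho @ vec[:, j]) ** 2 / (lam[i] + lam[j])
    return F

for n in (1, 5, 10, 20):
    print(n, qfi(phase=0.1, kappa=5.0, n_rounds=n))
```

the printed values rise roughly quadratically for small n and eventually turn over , consistent with the behaviour described above .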
in appendices [ app : kraus ] and [ app : sequential ] , we provide an explicit operator - sum representation for this choice of , as well as expressions for the resultant , , and . recall that saturating the precision bound set by the qfi requires interrogating the probe in the eigenbasis of the sld . however , the sld basis usually depends explicitly on the unknown parameter ( and , in this case , on the noise parameter ) so that , in practice , one would need to implement adaptive feedforward estimation procedures , or could have to resort to sub - optimal phase - independent estimators . it is therefore particularly interesting to see how the sensitivity of _ accessible _ observables compares with the qfi . we can study , for instance , the phase sensitivity of or , equivalently , that of for any in the equatorial plane . explicit formulas are provided in appendix [ app : sequential ] , and the resulting is plotted alongside in fig . [ fig2 ] ( dashed and solid curves , respectively ) . interestingly , as increases , the sensitivity of is seen to _ oscillate _ regularly between zero and the qfi . this can be intuitively understood if one thinks of the ` dynamics ' of in the bloch sphere : the maximally coherent preparation lies along the equator , on the bloch sphere surface . then , the iterative application of gives rise to a trajectory which inspirals on the equatorial plane as approaches its fixed point at the center of the sphere . this is just a combination of the unitary rotation around the and the loss of purity that results from the average in eq . . the eigenstates of the sld follow the rotation of , whereas our actual measurement basis remains fixed . as a result , the sensitivity of oscillates between and , with the latter saturated when the two bases coincide ( see appendix [ app : sequential ] for a visual description ) . as a comparison , let us now consider the parallel - entangled strategy , i.e. , a single - round protocol starting from an entangled state of the qubits . we will choose the maximally - entangled ghz state as initial preparation . although this may not be optimal for noisy parameter estimation , it comes with the advantage that it keeps a simple form under the local application of our channel on all probes . it also has the same degree of quantum coherence as the single - qubit state , as measured by the . this choice allows us to obtain an analytic expression for the qfi ( see appendix [ app : parallel ] ) , which is plotted in fig . [ fig2 ] ( dotted curve ) along with the qfi of the sequential case . note that , while we use the same notation ` ' for the number of rounds in the sequential case and for the number of probes in the parallel one , these are essentially different resources . one can nonetheless make sense of the comparison between the two metrological settings by invoking their formal equivalence in absence of noise , and by recalling that equals the overall number of interactions with the phase - imprinting channel in both cases . [ figure [ fig2 ] : for : the -round sequential setting with a single - qubit initial state ( solid ) , the -round sequential setting with a passive ancilla and two - qubit initial bell state ( dot - dashed ) , and the parallel - entangled setting with -qubit initial ghz state ( dotted ) . the dashed line amounts to the phase sensitivity of , , and in each of the three settings , respectively . the model parameters are the same as in fig . all the quantities plotted are dimensionless . ]
the resulting parallel qfi exhibits a linear asymptotic scaling with , unlike the sequential setting . however , even if such a large -qubit entangled probe could be prepared , its maximum sensitivity would only be saturated by some phase - dependent _ collective _ measurement on all probes . indeed , in appendix [ app : parallel ] we show that this parallel qfi may be split into two contributions : one , with a profile similar to that of the sequential , stemming from the matrix elements of the output state in the subspace of total angular momentum , and another one , related to the complementary subspace , which depends on the phase through the longitudinal deformation parameter . it is precisely this second contribution which endows the probe with a linearly increasing sensitivity at large . it seems intuitively clear that singling out the relevant information contained in the subspace requires a collective estimator , such as itself . note that such coherent manipulations may be demanding to implement , or even unavailable in case the probes are transmitted to remote stations during the process . alternatively , one could ask about the performance of a collection of accessible _ separable _ measurements , such as , implemented locally on each probe and supplemented by classical communication in the data analysis stage . the corresponding phase sensitivity can also be computed analytically ( see appendix [ app : parallel ] ) , and quite remarkably it turns out to _ coincide _ with that of in the -round setting ( i.e. , the dashed line in fig . [ fig2 ] ) . this behaviour is generic and does not depend on the specific sub - optimal observable measured on each probe . that is , although the parallel - entangled setting can , in principle , outperform the sequential one asymptotically , we find that they may become metrologically equivalent when the probe readout at the output is limited to measuring local observables : this restriction _ de facto _ banishes the asymptotic linear scaling of the precision . it is important to remark that we assumed a particular probe preparation ( ghz states ) . therefore , our observations should not be understood as a general ` no - go ' result , advocating against parallel - entangled estimation strategies in the presence of noise . the general question of whether the gap between sequential and parallel - entangled settings persists when optimal input states and more general separable measurements are considered is definitely worthy of further investigation , although it lies beyond the scope of this paper . finally , as an example of the usefulness of entanglement in a practical sequential scenario , let us supplement the probe with a passive two - level ancilla . specifically , we can prepare the probe - ancilla pair in a bell state ( which has the same amount of coherence as the single - qubit and -qubit ghz states ) and apply the noisy channel only on the first qubit , yielding . we find that , although interrogating probe and ancilla by a separable measurement like reduces once more to the same sensitivity as the single - qubit unassisted scenario ( dashed curve in fig .
[ fig2 ] ) , performing instead a non - separable ( yet manageable ) measurement such as does provide a sizeable increase in phase sensitivity ( see appendix [ app : passive ] ) .in general , one may conclude that using independent probes in a sequential scheme , possibly supplemented by correlated passive ancillas , offers a _ practical advantage _ in noisy parameter estimation , in spite of the potential superiority of parallel - entangled strategies . as we illustrated , acquiring partial information about the geometry of the parameter - imprinting process allows one to optimize the estimation protocol at the single - probe level , by simply adjusting the sampling time or number of rounds .such a sequential estimation protocol relies on the initial amount of `` unspeakable '' coherence , which is a genuinely quantum feature , and is here confirmed as the key resource for estimating parameters encoded in incoherent operations , which include all phase - covariant channels .however , the estimation performance only scales linearly or `` classically '' in the probe size , whereby scaling up the probe size is intended as repeating the optimized sequential procedure times using independent probe qubits , all initialized in a maximally coherent state .nonetheless , at the single - probe level , the sensitivity does scale quadratically in the number of rounds , provided is well below .notably , in the technologically relevant limit of ( e.g. magnetometry in a very weak magnetic field ) , the optimal number of rounds stays fairly large even for relatively low ( see appendix [ app : kraus ] ) , which translates into a very uncertain phase generator . as a result , a quadratic - like scaling of the precision for each individual probecan be maintained up to many iterations , although definitely not asymptotically .an interesting next step could be to extend our analysis to _ multiparameter metrology _ ,e.g. considering the simultaneous estimation of the phase and the noise parameter , or the actual generator . to conclude ,let us remark that , in general , the comparison between different metrological settings is a particularly tricky subject , since all the _ resources _ must be identified and properly accounted for .for instance , in spite of the formal equivalence of sequential and parallel settings in absence of noise , promoting the ` number of rounds ' to the status of resource , at the same level as the ` number of probes ' is probably not fair , since the actual costs of preparation , control and measurement of an additional quantum probe are not comparable to the costs of increasing the sampling time for one additional round .it is surely worthwhile to put different metrological settings under a unified set - up , also including feedback control protocols , so as to carry out an objective bookkeeping of the associated costs. this will be the subject of future work .we acknowledge the european research council ( erc stg gqcop , grant no .637352 ) and the royal society ( grant no .ie150570 ) for financial support .we thank a. datta , r. demkowicz - dobrzaski , a. farace , c. gogolin , m. gu , l. maccone , m. mehboudi , k. macieszczak , k. modi , and m. oszmaniec for useful discussions , and especially j. 
kołodyński for helping us improve the manuscript with his feedback . below we will give further details about the tight - fitting lower bound to the qfi . as stated in the main text , does hold in general although it is not necessarily a tight bound . if the maximization is not restricted to vectorized physical states , but extended to all normalized four - dimensional vectors in liouville space , we would end up calculating the operator norm . this equals the largest eigenvalue of the enclosed matrix . in particular , for the family of channels considered in the main text and represented by , the eigenvalues of are , , and . note that is thus just the prefactor in the expression for in of eq . , which essentially modulates the amplitude of the optimal phase sensitivity . in fig . [ fig : contour ] , we plot as a function of the concentration parameter and the phase . note that for small enough , the optimal number of applications remains on the order of even for very low ( i.e. quasi - uniform distribution for the rotation axis ) . as a result , the sequential setting may exhibit a super - classical scaling in the sensitivity , up to a significantly large number of rounds . the maximum phase sensitivity , given by the qfi , may only be reached when an optimal estimator is measured on the output state of the probe . recall that such an optimal estimator must be diagonal in the eigenbasis of the sld , which reads . we can instead calculate the phase sensitivity of some sub - optimal observable defined as in eq . ( [ eq : cfi_general ] ) . choosing as our estimator yields . the qfi and the phase sensitivity are depicted in fig . [ fig : blocha ] for particular values of and . it can be seen that displays the aforementioned quadratic behaviour followed by an exponential tail - off , whereas oscillates between zero and . this curious behaviour can be understood by visualizing the evolved probe state and the measurement eigenbases of both the sld and the sub - optimal estimator on the equatorial plane of the bloch sphere ( see panels a - d in fig . [ fig : blocha ] ) . here , the initial probe state begins on the at the surface of the sphere .
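the oscillatory behaviour described here and in the next paragraph can be reproduced with a minimal sketch ( written for this text , not the authors' implementation ) that propagates the equatorial bloch vector under a rotation by the phase and a constant shrinking of its length at each round , and compares the qubit qfi of the resulting state with the error - propagation sensitivity of a fixed equatorial measurement ; treating the shrinking factor as phase - independent is an assumption made only for this illustration .

```python
import numpy as np

def bloch_after_n(n, phi, eta):
    """Bloch vector after n rounds: transverse components rotated by phi and shrunk
    by eta at every round (eta treated as phase-independent for this illustration)."""
    r = eta**n
    return np.array([r * np.cos(n * phi), r * np.sin(n * phi), 0.0])

def qfi_equatorial(n, phi, eta, d=1e-6):
    """Qubit QFI of a state with Bloch vector r(phi): |dr|^2 + (r.dr)^2 / (1 - |r|^2)."""
    r = bloch_after_n(n, phi, eta)
    dr = (bloch_after_n(n, phi + d, eta) - bloch_after_n(n, phi - d, eta)) / (2 * d)
    return dr @ dr + (r @ dr) ** 2 / (1.0 - r @ r)

def sensitivity_sigma_y(n, phi, eta, d=1e-6):
    """Error-propagation sensitivity (d<sigma_y>/dphi)^2 / Var(sigma_y) for the fixed
    equatorial measurement discussed in the text."""
    m = lambda p: bloch_after_n(n, p, eta)[1]           # <sigma_y> is the y component
    dm = (m(phi + d) - m(phi - d)) / (2 * d)
    var = 1.0 - m(phi) ** 2                             # <sigma_y^2> - <sigma_y>^2
    return dm**2 / var

eta, phi = 0.97, 0.1
for n in (1, 5, 10, 20, 40, 80):
    print(n, round(qfi_equatorial(n, phi, eta), 3), round(sensitivity_sigma_y(n, phi, eta), 3))
```

the second column ( the qfi ) varies smoothly with n , while the third column oscillates between zero and the qfi , mirroring the periodic realignment of the fixed measurement basis with the rotating optimal one .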
as stated in the main text , the bloch vector of the evolved state _ inspirals _ towards the normalized identity . this is a result of the rotation around the due to the parameter - encoding unitary , and the loss of purity due to the noise . the optimal measurement basis begins on the , parallel to the probe state , and rotates with . meanwhile , the fixed measurement basis lies on the , so that it periodically coincides with the optimal one : when they are parallel , , whereas when they become perpendicular , . the frequency of these oscillations is given approximately by . [ figure [ fig : blocha ] : ( dashed line ) for and . representations of the evolved probe state ( purple arrow ) , optimal measurement basis ( red arrow ) , and sub - optimal measurement basis ( blue arrow ) , as bloch vectors on the equatorial plane are shown in the bottom panels . ( a ) initially , the probe state and both measurements are aligned . ( b ) the phase sensitivity of meets the qfi when the measurement bases realign . ( c ) the phase sensitivity of vanishes whenever the measurement bases become perpendicular . ( d ) otherwise , the phase sensitivity of oscillates between zero and the qfi . note that , as grows , the optimal basis vectors become perpendicular to the probe state vector as they rotate . note as well that the probe state vector gradually shortens due to the loss of purity . all the quantities plotted are dimensionless . ] in this section we will give further details about the performance of the sequential estimation setting when the probe is supplemented with a passive two - level ancilla . recall from the main text that , in this case , we prepare the two qubits in a bell state and proceed to apply sequentially times the phase - imprinting channel as . the resulting ` evolved ' two - qubit state has maximally mixed marginals at any . the corresponding qfi may be readily evaluated to , where . once again , we shall particularize our results choosing the vmf distribution for the stochastic generator ( see fig . [ fig : seq_anc ] ) . in the first place , note that notably outperforms the sensitivity of a single probe . in particular , when a separable estimator such as is considered , the phase sensitivity of the probe - ancilla pair remains upper - bounded by the qfi of a single probe . however , as we can see , when a bell measurement is performed _ jointly _ on the probe and the ancilla , the ensuing sensitivity can come much closer to the ultimate limit set by . [ figure [ fig : seq_anc ] : the qfi for a single two - level probe ( without ancilla ) is included for comparison ( dot - dashed ) . the phase sensitivity of ( dotted ) and ( dashed ) have also been plotted . the vmf distribution is assumed for the stochastic generator of the phase rotations , with and . all the quantities plotted are dimensionless . ] we will now compute the qfi in the parallel setting for a maximally entangled greenberger - horne - zeilinger ( ghz ) state of the probes , . the output state is a type of _ x state _ , that is , a state represented by a matrix with non - zero elements ( with ) only along the main diagonal and in the extreme off - diagonal corners . in particular , we find , where is the hamming weight of , i.e. the number of s in the binary representation of . the over - bar denotes complex conjugation and is given by . then , the qfi ( ) may be calculated using eq . and is given by . the contribution of the first term in eq . to the total sensitivity is qualitatively similar to the qfi in the sequential setting , i.e. it scales quadratically for low and , after peaking , it decays exponentially to zero . however , the second term , contributes with a classical - like increase in , which ultimately yields a linear asymptotic scaling for the overall qfi .
as stated in the main text ,while the first contribution relates to the outermost `` corners '' of the density operator ( i.e. the subspace spanned by and , with total angular momentum ) , the second one is determined by all the matrix elements along the diagonal of the state , such that . in particular , accessing all the relevant information giving rise to the classical - like asymptotic scaling of the sensitivity thus requires one to perform a non - separable measurement capable of telling the subspace from its complementary , such as a measurement of the total angular momentum .analogously to the sequential setting , one can consider the sensitivity of a sub - optimal estimator .in particular , we can compute the phase sensitivity of the separable observable by resorting to eq . and .as pointed out in the main text this yields exactly the same formula of eq . , thus implying that the parallel - entangled and unentangled sequential settings are metrologically equivalent so long as the estimation is constrained to separable measurements .note furthermore that , in both sequential and parallel settings , one could consider alternative observables to with eigenbases on the equatorial plane of the bloch sphere , such as or even some suboptimal yet phase - dependent observable .generically , the oscillatory behavior will remain , but the periodicity and locations of the maxima will change depending on the chosen observable . | the problem of estimating an unknown phase using two - level probes in the presence of unital phase - covariant noise and using finite resources is investigated . we introduce a simple model in which the phase - imprinting operation on the probes is realized by a unitary transformation with a randomly sampled generator . we determine the optimal phase sensitivity in a sequential estimation protocol , and derive a general ( tight - fitting ) lower bound . the sensitivity grows quadratically with the number of applications of the phase - imprinting operation , then attains a maximum at some , and eventually decays to zero . we provide an estimate of in terms of accessible geometric properties of the noise and illustrate its usefulness as a guideline for optimizing the estimation protocol . the use of passive ancillas and of entangled probes in parallel to improve the phase sensitivity is also considered . we find that multi - probe entanglement may offer no practical advantage over single - probe coherence if the interrogation at the output is restricted to measuring local observables . |
supervised classification concerns the task of assigning an object ( or a number of objects ) to one of two or more groups , based on a sample of labelled training data .the problem was first studied in generality in the famous work of , where he introduced some of the ideas of linear discriminant analysis ( lda ) , and applied them to his iris data set . nowadays ,classification problems arise in a plethora of applications , including spam filtering , fraud detection , medical diagnoses , market research , natural language processing and many others .in fact , lda is still widely used today , and underpins many other modern classifiers ; see , for example , and .alternative techniques include support vector machines , tree classifiers , kernel methods and nearest neighbour classifiers .more substantial overviews and in - depth discussion of these techniques , and others , can be found in and .an increasing number of modern classification problems are _ high - dimensional _ , in the sense that the dimension of the feature vectors may be comparable to or even greater than the number of training data points , . in such settings , classical methods such as those mentioned in the previous paragraph tend to perform poorly , and may even be intractable ; for example , this is the case for lda , where the problems are caused by the fact that the sample covariance matrix is not invertible when .many methods proposed to overcome such problems assume that the optimal decision boundary between the classes is linear , e.g. and .another common approach assumes that only a small subset of features are relevant for classification .examples of works that impose such a sparsity condition include , where it is also assumed that the features are independent , as well as and , where soft thresholding is used to obtain a sparse boundary .more recently , and both solve an optimisation problem similar to fisher s linear discriminant , with the addition of an penalty term to encourage sparsity .in this paper we attempt to avoid the curse of dimensionality by projecting the feature vectors at random into a lower - dimensional space .the use of random projections in high - dimensional statistical problems is motivated by the celebrated johnson lindenstrauss lemma ( e.g. * ? ? ?this lemma states that , given , and , there exists a linear map such that for all . in fact , the function that nearly preserves the pairwise distances can be found in randomised polynomial time using random projections distributed according to haar measure as described in section [ sec chooserp ] below .it is interesting to note that the lower bound on in the johnson lindenstrauss lemma does not depend on .as a result , random projections have been used successfully as a computational time saver : when is large compared to , one may project the data at random into a lower - dimensional space and run the statistical procedure on the projected data , potentially making great computational savings , while achieving comparable or even improved statistical performance .as one example of the above strategy , obtained vapnik chervonenkis type bounds on the generalisation error of a linear classifier trained on a single random projection of the data .see also , and for other instances .other works have sought to reap the benefits of aggregating over many random projections .for instance , considered estimating a population inverse covariance ( precision ) matrix using , where denotes the sample covariance matrix and are random projections from to . 
used this estimate when testing for a difference between two gaussian population means in high dimensions , while applied the same technique in fisher s linear discriminant for a high - dimensional classification problem . our proposed methodology for high - dimensional classification has some similarities with the techniques described above , in the sense that we consider many random projections of the data , but is also closely related to _ bagging _ , since the ultimate assignment of each test point is made by aggregation and a vote . bagging has proved to be an effective tool for improving unstable classifiers ; indeed , a bagged version of the ( generally inconsistent ) -nearest neighbour classifier is universally consistent as long as the resample size is carefully chosen ; see . more generally , bagging has been shown to be particularly effective in high - dimensional problems such as variable selection . another related approach to ours is , who consider ensembles of random rotations , as opposed to projections . [ figure [ fig : useless ] : from model 2 in section [ sec tsims ] with dimensions and prior probability . top row : three projections drawn from haar measure ; bottom row : the projections with smallest estimate of test error out of 100 haar projections with lda ( left ) , quadratic discriminant analysis ( middle ) and -nearest neighbours ( right ) . ] one of the basic but fundamental observations that underpins our proposal is the fact that aggregating the classifications of all random projections is not sensible , since most of these projections will typically destroy the class structure in the data ; see the top row of figure [ fig : useless ] . for this reason , we advocate partitioning the projections into non - overlapping blocks , and within each block we retain only the projection yielding the smallest estimate of the test error . the attraction of this strategy is illustrated in the bottom row of figure [ fig : useless ] , where we see a much clearer partition of the classes . another key feature of our proposal is the realisation that a simple majority vote of the classifications based on the retained projections can be highly suboptimal ; instead , we argue that the voting threshold should be chosen in a data - driven fashion in an attempt to minimise the test error of the infinite - simulation version of our random projection ensemble classifier . in fact , this estimate of the optimal threshold turns out to be remarkably effective in practice ; see section [ sec alpha ] for further details . we emphasise that our methodology can be used in conjunction with any base classifier , though we particularly have in mind classifiers designed for use in low - dimensional settings . the random projection ensemble classifier can therefore be regarded as a general technique for either extending the applicability of an existing classifier to high dimensions , or improving its performance . our theoretical results are divided into three parts . in the first , we consider a generic base classifier and a generic method for generating the random projections into and quantify the difference between the test error of the random projection ensemble classifier and its infinite - simulation counterpart as the number of projections increases . we then consider selecting random projections from non - overlapping blocks by initially drawing them according to haar measure , and then within each block retaining the projection that minimises an estimate of the test error .
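a concrete , non - authoritative illustration of this pipeline is sketched below ; the base classifier ( nearest centroid ) , the holdout error estimate and the fixed voting threshold are simple stand - ins for the choices analysed in the paper , and all names are hypothetical .

```python
import numpy as np

def haar_projection(p, d, rng):
    """d x p matrix with orthonormal rows, Haar-distributed, from QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((p, d)))
    return (q * np.sign(np.diag(r))).T          # sign fix makes the distribution exactly Haar

def centroid_fit(xp, y):
    return np.array([xp[y == 0].mean(axis=0), xp[y == 1].mean(axis=0)])

def centroid_predict(cen, xp):
    d0 = ((xp - cen[0]) ** 2).sum(axis=1)
    d1 = ((xp - cen[1]) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

def rp_ensemble(x_tr, y_tr, x_te, d=2, b1=50, b2=20, alpha=0.5, seed=0):
    """b1 blocks of b2 Haar projections; keep the projection with the smallest holdout
    error estimate in each block; classify test points by a vote with threshold alpha."""
    rng = np.random.default_rng(seed)
    n, p = x_tr.shape
    half = n // 2
    votes = np.zeros(len(x_te))
    for _ in range(b1):
        best_err, best_a = np.inf, None
        for _ in range(b2):
            a = haar_projection(p, d, rng)
            cen = centroid_fit(x_tr[:half] @ a.T, y_tr[:half])
            err = np.mean(centroid_predict(cen, x_tr[half:] @ a.T) != y_tr[half:])
            if err < best_err:
                best_err, best_a = err, a
        cen = centroid_fit(x_tr @ best_a.T, y_tr)       # refit on all training data
        votes += centroid_predict(cen, x_te @ best_a.T)
    return (votes / b1 > alpha).astype(int)

# toy data: two gaussian classes in p = 100 dimensions, separated along 3 coordinates
rng = np.random.default_rng(1)
p, n = 100, 200
y = rng.integers(0, 2, n)
shift = np.zeros(p); shift[:3] = 1.5
x = rng.standard_normal((n, p)) + np.outer(y, shift)
pred = rp_ensemble(x[:150], y[:150], x[150:], d=2)
print("test error:", np.mean(pred != y[150:]))
```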
under a condition implied by the widely - used sufficient dimension reduction assumption , we can then control the difference between the test error of the random projection classifier and the bayes risk as a function of terms that depend on the performance of the base classifier based on projected data and our method for estimating the test error , as well as terms that become negligible as the number of projections increases .the final part of our theory gives risk bounds on the first two of these terms for specific choices of base classifier , namely fisher s linear discriminant and the -nearest neighbour classifier .the key point here is that these bounds only depend on , the sample size and the number of projections , and not on the original data dimension .the remainder of the paper is organised as follows .our methodology and general theory are developed in sections [ sec rp ] and [ sec chooserp ] .specific choices of base classifier are discussed in section [ sec base ] , while section [ sec prac ] is devoted to a consideration of the practical issues of the choice of voting threshold , projected dimension and the number of projections used . in section [ sec empirical ] we present results from an extensive empirical analysis on both simulated and real data where we compare the performance of the random projection ensemble classifier with several popular techniques for high - dimensional classification .the outcomes are extremely encouraging , and suggest that the random projection ensemble classifier has excellent finite - sample performance in a variety of different high - dimensional classification settings .we conclude with a discussion of various extensions and open problems .all proofs are deferred to the appendix . finally in this section , we introduce the following general notation used throughout the paper . for a sufficiently smooth real - valued function defined on a neighbourhood of , let and denote its first and second derivatives at , andlet and denote the integer and fractional part of respectively .we start by describing our setting and defining the relevant notation .suppose that the pair takes values in , with joint distribution , characterised by , and , the conditional distribution of , for . for convenience ,we let . in the alternative characterisation of ,we let denote the marginal distribution of and write for the regression function . recall that a _ classifier _ on is a borel measurable function , with the interpretation that we assign a point to class .we let denote the set of all such classifiers . the misclassification rate , or _ risk _ , of a classifier is , and is minimised by the _ bayes _classifier ( e.g. * ? ? 
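before setting up the formal framework , a small self - contained check of the distance - preservation property behind the johnson - lindenstrauss lemma mentioned earlier may be helpful ; the snippet below ( illustrative only ) builds a haar - distributed projection from the qr decomposition of a gaussian matrix and compares pairwise distances before and after projecting .

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 50, 1000, 100

x = rng.standard_normal((n, p))                      # n points in p dimensions
q, r = np.linalg.qr(rng.standard_normal((p, d)))
A = (q * np.sign(np.diag(r))).T                      # d x p Haar projection (orthonormal rows)
xp = np.sqrt(p / d) * x @ A.T                        # rescale so squared norms match in expectation

def pairwise_dist(z):
    m = z.shape[0]
    return np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)[np.triu_indices(m, 1)]

ratio = pairwise_dist(xp) / pairwise_dist(x)
print(ratio.min(), ratio.max())   # all pairwise distances preserved up to a modest distortion
```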
?its risk is ] , the random variables are independent , each having a bernoulli( ) distribution .recall that and are the distribution functions of and , respectively .we can therefore write where here and throughout the proof , denotes a random variable .similarly , it follows that where .writing , we now show that as .our proof involves a one - term edgeworth expansion to the binomial distribution function in , where the error term is controlled uniformly in the parameter .the expansion relies on the following version of esseen s smoothing lemma .* chapter 2 , theorem 2b ) let , , , let be a non - decreasing function and let be a function of bounded variation .let and be the fourier stieltjes transforms of and , respectively .suppose that * and ; * ; * the set of discontinuities of and is contained in , where is a strictly increasing sequence with ; moreover is constant on the intervals for all ; * for all .then there exist constants such that provided that .[ thm esseen ] let , and let and denote the standard normal distribution and density functions , respectively . moreover , for , let and in proposition [ cor edgeworth ] below we apply theorem [ thm esseen ] to the following functions : and [ cor edgeworth ] let and be as in ( [ eq fb1 ] ) and ( [ eq gb1 ] ) .there exists a constant such that , for all , proposition [ cor edgeworth ] , whose proof is given after the proof of theorem [ thm : main ] , bounds uniformly in the error in the one - term edgeworth expansion of the distribution function . returning to the proof of theorem [ thm bvar ], we will argue that the dominant contribution to the integral in ( [ eq rtp ] ) arises from the interval , , where .for the remainder of the proof we assume is large enough that \subseteq ( 0,1) ] , thus , for large , we deduce that as ._ to bound : write , where and by the bound in ( [ eq cont ] ) , given , for all sufficiently large , latexmath:[\ ] ] similarly , but , for $ ] , it follows that now where . by (* theorem 2 , section 40 ) , we have that moreover , finally , by ( [ eq ij1 ] ) , ( [ eq : ij ] ) , ( [ eq ij2 ] ) , ( [ eq ij3 ] ) , ( [ eq ij4 ] ) , ( [ eq ij5 ] ) and ( [ eq ij6 ] ) , it follows that by ( [ eq interval ] ) , ( [ eq s12 ] ) , ( [ eq : q * ] ) , ( [ eq : f*middle ] ) , ( [ eq : phi*middle ] ) , ( [ eq : p*middle ] ) , ( [ eq s1s2 ] ) , ( [ eq : phipend ] ) and ( [ eq s2b1 ] ) , we conclude that ( [ eq fourierbound ] ) holds .the result now follows from theorem [ thm esseen ] , by taking , and in that result .the research of the second author is supported by an engineering and physical sciences research council fellowship .the authors thank rajen shah and ming yuan for helpful comments .bickel , p. j. and levina , e. ( 2004 ) . some theory for fisher s linear discriminant function , ` naive bayes ' , and some alternatives when there are more variables than observations ._ bernoulli _ , * 10 * , 9891010 .hastie , t. , tibshirani , r. , and friedman , j. ( 2009 ) . _ the elements of statistical learning : the elements of statistical learning : data mining , inference , and prediction._. springer series in statistics ( 2nd ed . ) .springer , new york .tibshirani , r. , hastie , t. , narisimhan , b. and chu , g. ( 2002 ) .diagnosis of multiple cancer types by shrunken centroids of gene expression ._ procedures of the natural academy of science , usa _ , * 99 * , 65676572 . 
| we introduce a very general method for high - dimensional classification , based on careful combination of the results of applying an arbitrary base classifier to random projections of the feature vectors into a lower - dimensional space . in one special case that we study in detail , the random projections are divided into non - overlapping blocks , and within each block we select the projection yielding the smallest estimate of the test error . our random projection ensemble classifier then aggregates the results of applying the base classifier on the selected projections , with a data - driven voting threshold to determine the final assignment . our theoretical results elucidate the effect on performance of increasing the number of projections . moreover , under a boundary condition implied by the sufficient dimension reduction assumption , we show that the test excess risk of the random projection ensemble classifier can be controlled by terms that do not depend on the original data dimension . the classifier is also compared empirically with several other popular high - dimensional classifiers via an extensive simulation study , which reveals its excellent finite - sample performance . |
large scale structures like galaxies and clusters of galaxies are believed to have formed by gravitational amplification of small perturbations . for an overview and original references , see , e.g. , .density perturbations are present at all scales that have been observed .understanding the evolution of density perturbations for systems that have fluctuations at all scales is essential for the study of galaxy formation and large scale structures .the equations that describe the evolution of density perturbations in an expanding universe have been known for a long time and these are easy to solve when the amplitude of perturbations is much smaller than unity .these equations describe the evolution of density contrast defined as . here is the density at at time , and is the average density in the universe at that time .these are densities of non - relativistic matter , the component that clusters at all scales and is believed to drive the formation of large scale structures in the universe . once the density contrast at relevant scales becomes large , i.e. , , the perturbation becomes non - linear and coupling with perturbations at other scales can not be ignored .the equations that describe the evolution of density perturbations can not be solved for generic perturbations in this regime .n - body simulations are often used to study the evolution in this regime .alternative approaches can be used if one requires only a limited amount of information and in such a case either quasi - linear approximation schemes or scaling relations suffice . in cosmological n - body simulations ,we simulate a representative region of the universe .this is a large but finite volume and periodic boundary conditions are often used .almost always , the simulation volume is taken to be a cube .effect of perturbations at scales smaller than the mass resolution of the simulation , and of perturbations at scales larger than the box is ignored . indeed, even perturbations at scales comparable to the box are under sampled .it has been shown that perturbations at small scales do not influence collapse of perturbations at much larger scales in a significant manner .this is certainly true if the scales of interest are in the non - linear regime .therefore we may assume that ignoring perturbations at scales much smaller than the scales of interest does not affect results of n - body simulations .perturbations at scales larger than the simulation volume can affect the results of n - body simulations .use of the periodic boundary conditions implies that the average density in the simulation box is same as the average density in the universe , in other words we ignore perturbations at the scale of the simulation volume ( and at larger scales ) .therefore the size of the simulation volume should be chosen so that the amplitude of fluctuations at the box scale ( and at larger scales ) is ignorable .if the amplitude of perturbations at larger scales is not ignorable then clearly the simulation is not a faithful representation of the model being studied .it is not obvious as to when fluctuations at larger scales can be considered ignorable , indeed the answer to this question depends on the physical quantity of interest , the model being studied and the specific length / mass scale of interest as well . the effect of a finite box size has been studied using n - body simulations and the conclusions in this regard may be summarised as follows . 
*if the amplitude of density perturbations around the box scale is small ( ) but not much smaller than unity , simulations underestimate the correlation function though the number density of small mass haloes does not change by much .in other words , the formation of small haloes is not disturbed but their distribution is affected by non - inclusion of long wave modes . * in the same situation , the number density of the most massive haloes drops significantly . * effects of a finite box size modify values of physical quantities like the correlation function even at scales much smaller than the simulation volume . *the void spectrum is also affected by finite size of the simulation volume if perturbations at large scales are not ignorable . *it has been shown that properties of a given halo can change significantly as the contribution of perturbations at large scales is removed to the initial conditions but the distribution of most internal properties remain unchanged .* we presented a formalism for estimating the effects of a finite box size in .we used the formalism to estimate the effects on the rms amplitude of fluctuations in density , as well as the two point correlation function .we used these to further estimate the effects on the mass function and the multiplicity function .* the formalism mentioned above was used to estimate changes in the formation and destruction rates of haloes .* it was pointed out that the second order perturbation theory and corrections arising due this can be used to estimate the effects due to a finite box size .this study focused specifically on the effects on baryon acoustic oscillations . *if the objects of interest are collapsed haloes that correspond to rare peaks , as in the study of the early phase of reionisation , we require a fairly large simulation volume to construct a representative sample of the universe . in some cases , one may be able to devise a method to `` correct '' for the effects of a finite box - size , but such methods can not be generalised to all statistical measures or physical quantities .effects of a finite box size modify values of physical quantities even at scales much smaller than the simulation volume .a workaround for this problem was suggested in the form of an ensemble of simulations to take the effect of convergence due to long wave modes into account , the effects of shear due to long wave modes are ignored here .however it is not clear whether the approach where an ensemble of simulations is used has significant advantages over using a sufficiently large simulation volume .we review the basic formalism we proposed in in 2 .we then extend the original formalism to the cases of non - linear amplitude of clustering and also for estimating changes in skewness and other reduced moments of counts in cells .this is done in 3 . in 4 we confront our analytical models with n - body simulations .we end with a discussion in 5 .initial conditions for n - body simulations are often taken to be a realisation of a gaussian random field with a given power spectrum , for details see , e.g. , .the power spectrum is sampled at discrete points in the space between the scales corresponding to the box size ( fundamental mode ) and the grid size ( nyquist frequency / mode ) . here is the wave vector . in the approach outlined above ,the value of at a given scale is expressed as a combination of the expected value and the correction due to the finite box size . 
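to make this decomposition concrete , the following sketch ( illustrative numbers and an arbitrary normalisation , not the calculation of the paper ) evaluates the variance integral for a power - law spectrum with a top - hat window , once with the lower limit effectively at zero and once starting at the fundamental mode of a finite box , so that the difference plays the role of the correction term :

```python
import numpy as np

def tophat_window(x):
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma2(r, n_index, k_min, k_max, nk=20000):
    """sigma^2(r) = (1 / 2 pi^2) * integral of k^2 P(k) W^2(kr) dk over [k_min, k_max],
    for a power-law P(k) = k^n_index (arbitrary normalisation) and a top-hat window."""
    k = np.linspace(k_min, k_max, nk)
    f = k**2 * k**n_index * tophat_window(k * r) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)) / (2.0 * np.pi**2)   # trapezoid rule

box, n_index = 200.0, -2.0                     # illustrative: a 200 Mpc/h box, n = -2 spectrum
k_box, k_max = 2.0 * np.pi / box, 2.0 * np.pi / 0.4
for r in (2.0, 10.0, 30.0, 50.0):
    full = sigma2(r, n_index, 1e-5, k_max)     # lower limit effectively zero ("infinite" box)
    boxed = sigma2(r, n_index, k_box, k_max)   # modes below the fundamental mode removed
    print(r, round(1.0 - boxed / full, 3))     # fractional correction grows towards the box scale
```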
here is independent of the box size and depends only on the power spectrum and the scale of interest .it is clear than , and , .it can also be shown that for hierarchical models , , i.e. , increases or saturates to a constant value as we approach small . at large scales and have a similar magnitude and the _ rms _ fluctuations in the simulation become negligible compared to the expected values in the model .as we approach small the correction term is constant and for most models it becomes insignificant in comparison with . in models where increases very slowly at small scales or saturates to a constant value, the correction term can be significant at all scales .this formalism can be used to estimate corrections for other estimators of clustering , for example the two point correlation .see for details .the estimation of the _ rms _ amplitude of density perturbations allows us to use the theory of mass function and estimate a number of quantities of interest . for details, we again refer the reader to but we list important points here . *the fraction of mass in collapsed haloes is under - estimated in n - body simulations .this under - estimation is most severe near the scale of non - linearity , and falls off on either side .if we consider fractional under - estimation in the collapsed fraction then this increases monotonically from small scales to large scales . *the number density of collapsed haloes is under - estimated at scales larger than the scale of non - linearity .the maximum in collapsed fraction near the scale of non - linearity leads to a change of sign in the effect of a finite box - size for the number density of haloes at this scale : at smaller scales the number density of haloes is over - estimated in simulations .this can be understood on the basis of a paucity of mergers that otherwise would have led to formation of high mass haloes . *the above conclusions are generic and do not depend on the specific model for mass function .indeed , expressions for both the press - schechter and the sheth - tormen mass functions are given in , and we have also checked the veracity of our claims for the mass function .in this section we outline how the formalism and results outlined above may be used to estimate the effect of a finite box - size on reduced moments . reduced moments like the skewness and kurtosis can be computed using perturbation theory in the weakly non - linear regime .the expected values of the reduced moments are related primarily to the slope of the initial or linearly extrapolated , as all non - gaussianities are generated through evolution of the gaussian initial conditions and the initial characterises this completely .we can use the expression for as it is realised in simulations with a finite box size to compute the expected values of reduced moments in n - body simulations in the weakly non - linear regime . is the expected value of for the given mode , i.e. 
, when there are no box corrections and is the correction term in due to a finite box size . box size effects lead to a change in slope of , and hence the effective value of changes . the last term is the offset in skewness in n - body simulations as compared with the expected values in the model being simulated . we would like to emphasise that this expression is valid only in the weakly non - linear regime . in general we expect to increase as we go to larger scales . thus the skewness is under estimated in n - body simulations and the level of under estimation depends on the slope of as compared to the slope of . in the limit of small scales where is almost independent of scale , we find that the correction is :

$$ \cdots \;+\; \mathcal{O}\!\left(\frac{\partial}{\partial\ln r}\left(\frac{\sigma_1^2}{\sigma_0^2}\right)^{\!2}\right) \;+\; \mathcal{O}\!\left(\frac{1}{\sigma_0^2}\,\frac{\partial\sigma_1^2}{\partial\ln r}\right) $$

here is the index of the initial spectrum we are simulating . for non - power law models this will also be a function of scale . the correction becomes more significant at larger scales and the net effect , as noted above , is to under estimate . similar expressions can be written down for kurtosis and other reduced moments using the approach outlined above . we give the expression for kurtosis below , but do not compute further moments as the same general principle can be used to compute these as well .

$$ \begin{aligned} \cdots^{2} &+ 2\,\frac{\partial^{2}\ln\sigma^{2}}{\partial\ln^{2} r} \\ &\simeq \frac{6071}{1323} - \frac{62}{3}(n+3)\left[1 + \frac{\sigma_1^{2}}{\sigma_0^{2}}\right] + \frac{7}{3}(n+3)^{2}\left[1 - \frac{8}{7}\,\frac{\sigma_1^{2}}{\sigma_0^{2}}\right] \\ &\quad + \mathcal{O}\!\left(\frac{\partial}{\partial\ln r}\left(\frac{\sigma_1^{2}}{\sigma_0^{2}}\right)^{\!2}\right) + \mathcal{O}\!\left(\frac{1}{\sigma_0^{2}}\,\frac{\partial\sigma_1^{2}}{\partial\ln r}\right) \end{aligned} $$

in this section we compare the analytical estimates for finite box size effects for various quantities with n - body simulations . such a comparison is relevant in order to test the effectiveness of approximations made in computing the effects of a finite box size . we have made the following approximations : * effects of mode coupling between the scales that are taken into account in a simulation and the modes corresponding to scales larger than the simulation box are ignored . we believe that this should not be important unless the initial power spectrum has a sharp feature at scales comparable with the simulation size . * sampling of modes comparable to the box size is sparse , and the approximation of the sum over wave modes as an integral can be poor if the relative contribution of these scales to is significant . table 1 gives details of the n - body simulations used in this paper . in order to simulate the effects of a finite box size , we used the method employed by where initial perturbations are set to zero for all modes with wave number smaller than a given cutoff . the initial conditions are exactly the same as the reference simulation in each series in all other respects .
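the truncation procedure just described is straightforward to emulate ; the sketch below ( illustrative parameters , not the initial - condition generator actually used for the runs in table 1 ) builds a gaussian random field for a power - law spectrum on a periodic grid and zeroes all fourier modes below a chosen cutoff wave number :

```python
import numpy as np

def gaussian_field(ngrid, box, n_index, k_cut=None, seed=42):
    """Gaussian random density field with P(k) ~ k^n_index on a periodic grid; if k_cut
    is given, all Fourier modes with |k| < k_cut are set to zero (truncated runs)."""
    rng = np.random.default_rng(seed)
    k1 = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    power = np.where(kmag > 0, kmag, 1.0) ** n_index
    power[kmag == 0] = 0.0                              # no perturbation of the mean density
    delta_k = np.fft.fftn(rng.standard_normal((ngrid,) * 3)) * np.sqrt(power)
    if k_cut is not None:
        delta_k[kmag < k_cut] = 0.0
    return np.real(np.fft.ifftn(delta_k))

box, ngrid = 200.0, 64
full = gaussian_field(ngrid, box, n_index=-2.0)
cut = gaussian_field(ngrid, box, n_index=-2.0, k_cut=8.0 * 2.0 * np.pi / box)
print(full.std(), cut.std())        # the truncated field is missing its large-scale power
```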
for a finite simulation box , there is a natural cutoff at the fundamental wave number and simulations a1 , b1 and c1 impose no other cutoff . these are the reference simulations for the two series of simulations . simulations a2 , b2 and c2 sample perturbations at wave numbers larger than whereas simulations a3 , b3 and c3 are more restrictive with non - zero perturbations above . the cutoff of and corresponds to scales of and grid lengths , respectively . for the c series of simulations , the cutoff of and corresponds to scales of h and h , respectively . the background cosmology was taken to be einstein - desitter for the a and b series simulations . the best fit model from wmap-5 was used for the c series of simulations . in order to ensure that the initial conditions do not get a rare contribution from a large scale mode , we forced while keeping the phases random for modes . we have chosen to work with models where box size effects are likely to be significant , particularly with the larger cutoff in wave number . this has been done to test our analytical model in a severe situation , and also to further illustrate the difficulties in simulating models with large negative indices . we present results from n - body simulations in the following section . [ table 1 : this table lists characteristics of n - body simulations used in our study . here the spectral index gives the slope of the initial power spectrum and the cutoff refers to the wave number below which all perturbations are set to zero : is the fundamental wave mode for the simulation box . all models were simulated using the treepm code . particles were used in each simulation , and the pm calculations were done on a grid . power spectra for both the a and the b series of simulations were normalised to ensure at the scale of grid lengths at the final epoch if there is no box - size cutoff . a softening length of grid lengths was used as the evolution of small scale features is not of interest in the present study . simulations for both the a and the b series were done with the einstein - desitter background and the c series used the wmap-5 best fit ( bf ) model as the cosmological background , as also for the power spectrum and transfer function . ] figure 5 shows the number density of haloes for the three series of simulations as a function of mass of haloes . the haloes have been identified using the friends of friends ( fof ) method with a linking length of in units of the grid length . plotted in the same panels are the expected values computed using the press - schechter mass function with a correction for the finite box size . for each series of simulations , and at each epoch , we fitted the value of to match the simulation with the natural cutoff at the box scale . the same value of is then used for other simulations of the series . we find that the features of the mass function are reproduced correctly by the analytical approximation , namely : * the number density of the most massive haloes declines rapidly as the _ effective _ box size is reduced . * the number density of low mass haloes increases as the _ effective _ box size is reduced . this feature is apparent only at the late epoch .
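a toy version of the comparison behind figure 5 can be written down directly from the press - schechter form ; in the snippet below the power - law sigma ( m ) , its normalisation and the constant variance subtracted to mimic the roughly scale - independent box correction are all illustrative stand - ins rather than the fitted values used in the paper .

```python
import numpy as np

DELTA_C = 1.686                                   # spherical-collapse threshold

def sigma_of_m(m, n_index=-2.0):
    """Power-law rms fluctuation sigma(m) = m**(-(n+3)/6), normalised so that
    sigma = 1 at the (arbitrary) reference mass m = 1."""
    return m ** (-(n_index + 3.0) / 6.0)

def ps_multiplicity(sig):
    """Press-Schechter multiplicity f(sigma) = sqrt(2/pi) (delta_c/sigma) exp(-delta_c^2/2sigma^2)."""
    return np.sqrt(2.0 / np.pi) * (DELTA_C / sig) * np.exp(-DELTA_C**2 / (2.0 * sig**2))

m = np.logspace(-2, 2, 9)                         # masses in units of the reference mass
sig_full = sigma_of_m(m)
# crude finite-box correction: subtract a constant variance from sigma^2
sig_box = np.sqrt(np.clip(sig_full**2 - 0.2**2, 1e-12, None))
for mi, ratio in zip(m, ps_multiplicity(sig_box) / ps_multiplicity(sig_full)):
    print(f"m = {mi:8.2f}   f_box / f_full = {ratio:5.2f}")
```

the printed ratio stays close to unity at low masses and drops well below unity at the high - mass end , which is the qualitative behaviour listed above : the abundance of the most massive haloes is the quantity most strongly suppressed by a finite box .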
in our discussion of analytical estimates of the effects of a finite box size on observable quantities, we have so far omitted any discussion of velocity statistics .the main reason for this is that the power spectrum for velocity is different as compared to the power spectrum for density and one can get divergences for quantities analogous to the second order estimators analogous to for models with .this is due to a more significant contribution of long wave modes to the velocity field than is the case for density .relative velocity statistics are more relevant on physical grounds and we use these for an empirical study of the effects of a finite box size on velocities .it is also important to check whether considerations related to velocity statistics put a stronger constraint on the box size required for simulations of a given model .we measure the radial pair velocity and also the pair velocity dispersion in the simulations used in this work .these quantities are defined as follows : where the averaging is done over all pairs of particles with separation . in practicethis is done in a narrow bin in . here is the scale factor , is the hubble parameter and is the velocity of the particle .similarly , the relative pair velocity dispersion is defined as : where is the relative velocity for a pair of particles , and averaging is done over pairs with separation .dividing by gives us a dimensionless quantity and the usefulness of this is apparent from the following discussion .we have plotted the radial component of pair velocity as a function of scale in the top two rows of figure 6 .panels in these rows show the pair velocity for the different models at early and late epochs . in each panel , we find that the dependence of pair velocity on is very sensitive to the small cutoff used in generating the initial conditions for the simulation .it has been known for some time that is an almost universal function of .this is certainly true in the linear regime where for clustering in an einstein - de sitter universe . 
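for definiteness , a minimal estimator of the dimensionless radial pair velocity defined above is sketched below ; the naive pair loop , units and toy data are illustrative and are not what was used to produce figure 6 .

```python
import numpy as np

def radial_pair_velocity(pos, vel, rbins, a=1.0, hubble=1.0, box=None):
    """Dimensionless radial pair velocity: minus the mean of (dv . dx) / (a H |dx|^2)
    over all pairs whose separation falls in each bin. Naive O(N^2) estimator."""
    npart = len(pos)
    dx = pos[:, None, :] - pos[None, :, :]
    if box is not None:
        dx -= box * np.round(dx / box)            # minimum-image convention for a periodic box
    dv = vel[:, None, :] - vel[None, :, :]
    iu = np.triu_indices(npart, 1)
    sep = np.linalg.norm(dx, axis=-1)[iu]
    vdotx = np.einsum("ij,ij->i", dv[iu], dx[iu])
    h = np.full(len(rbins) - 1, np.nan)
    for i in range(len(rbins) - 1):
        sel = (sep >= rbins[i]) & (sep < rbins[i + 1])
        if sel.any():
            h[i] = -np.mean(vdotx[sel] / (a * hubble * sep[sel] ** 2))
    return h

# toy usage: random positions and small random velocities in a unit periodic box
rng = np.random.default_rng(0)
pos, vel = rng.random((500, 3)), 0.01 * rng.standard_normal((500, 3))
print(radial_pair_velocity(pos, vel, rbins=np.linspace(0.05, 0.5, 6), box=1.0))
```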
in order to exploit this aspect , and also to check whether the relation between and in the weakly non - linear regime is sensitive to the box size , we plot as a function of at the same scale in the last two rows of figure 6 .we find that all runs of a series fall along the same line and variations induced by the finite box size are small even in the non - linear regime .figure 7 shows the relative velocity dispersion as a function of scale ( top two rows ) and also as a function of ( lowest two rows ) .again , we find that although the relative pair velocity dispersion at a given scale is sensitive to the size of the simulation box , its dependence on is not affected by the large scale cutoff .thus we can use estimates of the correction in to get an estimate of corrections in pair velocity statistics .the conclusions of this paper may be summarised as follows : * we have extended our formalism for estimating the effects of a finite box size beyond the second moment of the density field .we have given explicit expressions for estimating the skewness and kurtosis in the weakly non - linear regime when a model is simulated in a finite box size .* we have tested the predictions of our formalism by comparing these with the values of physical quantities in n - body simulations where the large scale modes are set to zero without changing the small scale modes .* we find that the formalism makes accurate predictions for the finite box size effects on the averaged two point correlation function and skewness .* we find that the formalism correctly predicts all the features of the mass function of a model simulated in a finite box size .* we studied the effects of a finite box size on relative velocities .we find that the effects on relative velocities mirror the effects on .it is desirable that in n - body simulations the intended model is reproduced at all scales between the resolution of the simulation and a fairly large fraction of the simulation box .the outer scale up to which the model can be reproduced fixes the effective dynamical range of simulations .one would like be within a stated tolerance of the expected value at this scale .we plot for power law models at the scale , and in the left panel of figure 8 .these are plotted as a function of .it can be shown that this ratio , as also are functions of scale only through the ratio .we find that is large for large negative and decreases monotonically as increases .this ratio is smaller than only for at .the corresponding number for is , and for is clearly , the effective dynamic range decreases rapidly as .this highlights the difficulties associated with simulating such models .similarly , one would like and to be comparable at the scale of non - linearity . from requirements of self similar evolution of power law models in simulations, we find that at the scale of non - linearity is required for the effects of a finite box size to be ignorable .this gives us a lower bound on for any given model .the middle panel shows the required as a function of for , and . here is the asymptotic value of at and is a fairly good approximation at small scales .we find that the required for is more than for at the scale of non - linearity .thus we need a simulation with if we are to probe the strongly non - linear regime ( ) with some degree of confidence .requirements for models with are much more stringent , and for models like even the largest simulations can not be used to study the asymptotic regime . 
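A sketch of the kind of integral behind these estimates is given below: the linearly extrapolated, volume-averaged correlation function for a power-law spectrum, evaluated with and without a small-k cutoff at the fundamental mode of a box. The normalisation, the upper integration limit and the box size are placeholder values, and the standard linear-theory expression xibar(r) = 3/(2 pi^2 r^3) * int dk P(k) (sin kr - kr cos kr)/k is assumed.

```python
import numpy as np

def xibar_powerlaw(r, n_index=-1.0, kmin=1e-4, kmax=100.0, nk=400000, amp=1.0):
    """Volume-averaged linear correlation xibar(r) for P(k) = amp * k**n,
    integrating only over kmin < k < kmax (kmax mimics a resolution scale)."""
    k = np.linspace(kmin, kmax, nk)
    integrand = (np.sin(k * r) - k * r * np.cos(k * r)) * amp * k**n_index / k
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    return 3.0 / (2.0 * np.pi**2 * r**3) * integral

lbox = 200.0                    # box size in the same arbitrary units as r
kf = 2.0 * np.pi / lbox         # fundamental mode of the box
for r in (2.0, 5.0, 10.0, 20.0):
    full = xibar_powerlaw(r)
    boxed = xibar_powerlaw(r, kmin=kf)   # finite box: no power below k_f
    print(f"r = {r:5.1f}   xibar_box / xibar = {boxed / full:.3f}")
```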
to put things in context for the favoured cosmological model , the right panel in figure 8 shows contours of in the plane for the model that best fits the wmap-5 data .we find that in order to ensure that the error in skewness is less than at a scale of h , we need a simulation box of more than h .the required box size is much bigger if the tolerance on error in skewness is smaller .this is a very stringent requirement for simulations of the epoch of reionization where one would like to get the clustering right at the scales of a few mpc .given that the formalism we have proposed works well when compared with simulations , and the fact that calculations in this formalism are fairly straightforward , we would like to urge the cosmological n - body simulations community to make use of this formalism .we would like to request simulators to report the fractional corrections to the linearly extrapolated amplitude of clustering and the fractional correction to skewness across the range of scales of interest .this will enable users of simulations to assess potential errors arising due to a finite simulation volume .numerical experiments for this study were carried out at cluster computing facility in the harish - chandra research institute ( http://cluster.mri.ernet.in ) . this research has made use of nasa s astrophysics data system .we would like to thank t. padmanabhan for useful discussions .we thank the anonymous referee for useful comments and suggestions . | n - body simulations are an important tool in the study of formation of large scale structures . much of the progress in understanding the physics of galaxy clustering and comparison with observations would not have been possible without n - body simulations . given the importance of this tool , it is essential to understand its limitations as ignoring these can easily lead to interesting but unreliable results . in this paper we study the limitations due to the finite size of the simulation volume . in an earlier work we proposed a formalism for estimating the effects of a finite box - size on physical quantities and applied it to estimate the effect on the amplitude of clustering , mass function . here , we extend the same analysis and estimate the effect on skewness and kurtosis in the perturbative regime . we also test the analytical predictions from the earlier work as well as those presented in this paper . we find good agreement between the analytical models and simulations for the two point correlation function and skewness . we also discuss the effect of a finite box size on relative velocity statistics and find the effects for these quantities scale in a manner that retains the dependence on the averaged correlation function . methods : n - body simulations , numerical gravitation cosmology : theory , dark matter , large scale structure of the universe |
the last few years , cooperative transmission has become widely prominent with the increases in the size of communication networks . in wireless networks , the transmitted message from a node is heard not only by its intended receiver , but also by other neighbour nodes and those neighbour nodes can use the received signals to help transmission .they bring a cooperative transmission by acting as relays .the relay channel first introduced by van der meulen in and it consists of a source aiming to communicate with a destination with the help of a relay . in this case, we call the relay channel _ one - way relay channel _ or _ single relay channel_. in , cover and el gamal proposed the fundamental _ decode - forward _ ( df ) and _ compress - forward _ ( cf )schemes for the one - way relay channels which achieve near capacity rates . in df scheme ,the relay decodes the messages from the source and forwards them to the destination . in cf scheme, the relay compresses received signals and forwards the compression indices .it is proved that the df scheme is optimal for these types of channels : for physically degraded relay channels in which the output observed at the receiver is a degraded version of the channel output at the relay , for semi - deterministic channels in which the channel output at the relay is a deterministic function of the channel input at the transmitter and the channel input at the relay . the exact capacity of general relay channels is not known to date , although , there exist tight capacity approximations for a large class of networks , and schemes like df and cf achieve near - capacity rates .the upper bound on capacity is given by the cut - set upper bound and the lower bound is given by chong et al . in .the scheme in is a block - markov transmission scheme that is a combination of the df scheme and the cf scheme .the one - way relay channel can be extended to the _ two - way relay channel _, where a relay helps two users exchange their messages .two types of two - way relay channels can be considered , that is , without a direct link between the two users , and with a direct link between the two users .the former is a suitable model for wired communication and the latter is suitable for wireless communication .applications of relay cooperation can be seen in increasing the capacity , combating the fading effect , mitigating the effects of interference and increasing the physical layer security .however , df scheme has been used in numerous applications , it achieves capacity only in special few cases . all of these approaches are using random gaussian coding which is impractical for implementation .thus , applying df scheme in a practical scenario is interesting .one of the research areas that has such potential is lattice theory .an dimensional lattice in is the set of integer linear combinations of a given basis with linearly independent vectors in . using lattices for communication over the real awgn channel ,has been investigated by poltyrev .in such a communication system , instead of the coding rate and capacity , normalized logarithmic density ( nld ) and generalized capacity have been introduced , respectively . 
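As a small, self-contained illustration of this definition (separate from the constructions used later in the paper), the sketch below enumerates points of a two-dimensional lattice from a generator matrix and quantises an arbitrary vector to the closest lattice point by brute force; the hexagonal basis is just an example.

```python
import numpy as np
from itertools import product

# generator matrix of the hexagonal lattice A2; rows are basis vectors
G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])

# all integer combinations b*G with coefficients in {-3,...,3}
coeffs = np.array(list(product(range(-3, 4), repeat=2)))
points = coeffs @ G

# brute-force closest lattice point (lattice quantiser) for a target vector
target = np.array([0.9, 1.3])
closest = points[np.argmin(np.linalg.norm(points - target, axis=1))]
print("closest lattice point to", target, "is", closest)
```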
using construction d of lattices , the existence of sphere - bound - achieving and capacity - achieving lattices has been proved by forney et al . a capacity - achieving lattice code can be obtained from a capacity - achieving lattice together with a proper shaping region . lattice codes are the euclidean - space analog of linear codes . researchers have also studied practical lattice codes . the search for practical , implementable capacity - achieving lattices and lattice codes started with the proposal of low density parity check ( ldpc ) lattices . in this class of lattices , a set of nested ldpc codes and construction d of lattices are used to generate lattices with sparse parity check matrices . another class of lattices , called low density lattice codes ( ldlc ) , was introduced and investigated in . turbo lattices employed construction d along with turbo codes to achieve capacity gains . low density construction a ( lda ) lattices are another class of lattices with near - capacity error performance and a low - complexity , low - storage decoder . an lda lattice can be obtained from construction a with a non - binary ldpc code as its underlying code . the use of lattice codes in relay networks has received significant attention in recent years , , , , . it was shown in and that lattice codes can achieve the df rates for the relay channel . all of these achievable schemes rely on asymptotic code lengths , which is a drawback in practical implementation . recently , aazhang et al . proposed a practical scheme based on ldlcs , for the real - valued , full - duplex one - way and two - way relay channels . in this work , we propose another class of practical , efficient lattice codes , based on ldpc lattices , for the real - valued , full - duplex one - way and two - way relay channels . the rest of this paper is organized as follows . section [ system_model ] introduces the system models of the one - way and two - way relay channels . section [ lattice ] presents the preliminaries on lattices and lattice codes . in section [ ldpc lattices ] , we introduce ldpc lattices . the encoding and decoding of these lattices are also presented in this section . in section [ shaping_sec ] , we consider the application of the ldpc lattices to power constrained awgn channels by presenting two efficient shaping methods , based on hypercube shaping and nested lattice shaping . in section [ one_way_channel ] , we adapt our shaping algorithms to enable the decomposition of the ldpc lattices into lower - rate components without loss of shaping efficiency . then , we present a practical block markov scheme for the real - valued , full - duplex one - way relay channels , based on ldpc lattices . in section [ two_way_channel_sec ] , we present another decomposition method based on doubly nested ldpc lattices , for the two - way relay channels . finally , in section [ numerical results ] , we examine the practical performance of our proposed schemes . section [ conclusions ] contains the concluding remarks . the relay channel that we consider is a three - terminal relay channel . first , we present the one - way relay channel , depicted in ( a ) . the source transmits a message , which is mapped to a codeword , to both the relay and the destination , and in the next time slot the relay aids the destination by sending part of the information of the previous time slot . we assume a full - duplex relay which can simultaneously transmit and receive messages . for simplicity we suppose real - valued channels .

[ figure : ( a ) the one - way relay channel ; ( b ) the two - way relay channel . ]

let and denote the signals transmitted by the source and the relay . let and denote the signals received at the relay and the destination . the received signals are where , . moreover , , , and are the channel gains between source , relay , and destination , which follow the usual path - loss model . in particular , we take the distance between source and destination to be unity , and to be the distances from the source to the relay and from the relay to the destination , and and to be the corresponding path - loss exponents . we constrain the source and relay transmissions to have average power no greater than and . in general , the capacity of this channel is unknown ; however , the df scheme proposed in achieves the following inner bound : this rate can be achieved via _ block markov encoding _ .
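To make this inner bound concrete, here is a rough numerical sketch for the Gaussian case; it assumes the standard full-duplex decode-forward expression maximised over the correlation coefficient between the source and relay signals, with placeholder channel gains, powers and noise variances, and is not tied to the particular code construction of this paper.

```python
import numpy as np

def awgn_capacity(snr):
    """Capacity of a real-valued AWGN channel in bits per channel use."""
    return 0.5 * np.log2(1.0 + snr)

def df_rate(h_sr, h_sd, h_rd, p_s, p_r, n_r=1.0, n_d=1.0, npts=2001):
    """Decode-forward inner bound for the full-duplex Gaussian one-way relay
    channel, maximised over the source/relay correlation coefficient rho."""
    best = 0.0
    for rho in np.linspace(0.0, 1.0, npts):
        r_relay = awgn_capacity((1.0 - rho**2) * h_sr**2 * p_s / n_r)
        r_dest = awgn_capacity((h_sd**2 * p_s + h_rd**2 * p_r
                                + 2.0 * rho * h_sd * h_rd * np.sqrt(p_s * p_r)) / n_d)
        best = max(best, min(r_relay, r_dest))
    return best

# placeholder geometry: source-destination distance 1, relay halfway,
# amplitude gains 1/d for a path-loss exponent of 2, unit noise power
d = 0.5
rate = df_rate(h_sr=1.0 / d, h_sd=1.0, h_rd=1.0 / (1.0 - d), p_s=10.0, p_r=10.0)
direct = awgn_capacity(10.0)                   # source-destination link alone
print(f"DF inner bound: {rate:.3f} bits/use   direct link only: {direct:.3f}")
```

In this placeholder geometry the bound exceeds the rate of the direct link, which is the gain that the block Markov scheme is meant to realise in practice.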
after decoding the message , the relay re - encodes the message and transmits the corresponding codeword in the next block .the lattice - coding version of block markov encoding is proposed in .it is proved theoretically that it can achieve the decode - and - forward rates .the results of suggest that structured lattice codes may be used to outperform , random gaussian codes in general gaussian networks .the authors of have applied this scheme and designed a family of practically implementable ldlc lattice codes for relay channels . in this paperwe present another family of lattice codes which are amenable to practical implementation in block markov schemes .@#1@@@(#1) next , we present the full - duplex gaussian two - way relay channel , as depicted in ( b ) .the two sources and exchange their messages , which are mapped to codewords and transmitted over the wireless medium .the relay node overhears the noisy superposition of signals transmitted from sources and makes its own transmissions to facilitate communications .this channel can be modeled as follows where the noise components are , and .similar to the one - way relay channel model , , , , , and are the channel gains which follow the usual path - loss model .we constrain the sources , and relay transmissions to have average powers no greater than , and , respectively .the capacity of this channel is unknown , but a df scheme was presented in that achieves rate pairs satisfying ( [ rate_bound2 ] ) , ^{+ } \right\},\end{aligned}\ ] ] in which ^{+}\triangleq \max \left\{x,0\right\} ] or an -cube ^n ] is the generator matrix of in systematic form , is the rank of , and , are identity and all zero square matrices of size , respectively . the practicalencoding and decoding of ldpc lattices , both with linear complexity in the dimension of the lattice , has been addressed in . in this paper , we consider a translated and scaled version of the lattice , generated by ( [ eq12 ] ) , as suggested in and . in the sequel , we present the decoding of these scaled and translated versions of ldpc lattices , which is proposed in and it is obtained by combining the suggested decoding method in , and the decoding of binary ldpc codes .construction and decoding of these new lattices can be done using the following steps .first , convert the codewords of ] be the full - rate power - constrained lattice codeword associated with block , and let ] block .the source and the relay transmit ] during the block . at block , the source sends ] by the factor .then , we use iterative decoder of ldpc lattices to obtain full - rate lattice codeword ] and can be obtained as =\textrm{mod}(\hat{\mathbf{x}}'[t],\mathbf{l},\mathbf{g}_{\lambda}^{-1}) ] from ] . in this case , the decoding error is =\mathbf{x}_r'[t]-\hat{\mathbf{x}}_r'[t] ] by -\mathbf{e}_r[t] ] . 
in the destination ,treat +\mathbf{z}_d[t+1]-h_{rd}\sqrt{p_r}\mathbf{e}_r[t] ] =\left\lfloor \mathcal{d}\left(\textrm{dec}_{\textrm{ldpcl}}\left ( \frac{\mathbf{y}_d[t+1]}{\sqrt{p_r}h_{rd}},\frac{n_d}{\sqrt{p_r}h_{rd } } \right)\right)\mathbf{g}_{\lambda}^{-1}\right\rceil,\end{aligned}\ ] ] where .then , the relay obtains =\textrm{mod}(\tilde{\mathbf{x}}_r'[t],\mathbf{l},\mathbf{g}_{\lambda}^{-1}) ] .the decoding errors are & = & \mathbf{x}_r'[t]- \tilde{\mathbf{x}}_r'[t ] , \nonumber \\\mathbf{e}_{d_1}[t ] & = & \mathbf{x}_r[t ] -\tilde{\mathbf{x}}_r[t].\end{aligned}\ ] ] now , the destination knows the unshaped resolution codeword ] .next , the receiver turns to ] , we know ] from ( [ y_dt2 ] ) to obtain =h_{sd}\sqrt{p_s}\mathbf{x}_v'[t]+\mathbf{e}_{d_2}[t]+\mathbf{z}_d[t],\end{aligned}\ ] ] where =h_{rd}\sqrt{p_r}(\mathbf{e}_{d_1}'[t-1]-\mathbf{e}_r[t-1])+ h_{sd}\sqrt{p_s}\mathbf{e}_{d_1}'[t].\end{aligned}\ ] ] then , we use ] . then , the desired integer vector can be obtained as = \textrm{mod}(\tilde{\mathbf{x}}'[t],\mathbf{l},\mathbf{g}_{\lambda}^{-1}) ] and ] . during the block , the relay transmits nothing and at block , the sources receive the resolution information ] . using ] \oplus m_2 \hat{\mathbf{x}}_{s_2}'[t ] & = & \textrm{dec}_{\textrm{ldpcl}}\left(\frac{\mathbf{y}_r'[t]}{\rho},\frac{n_r}{\rho}\right ) , \quad\,\,\,\label{sum_information1}\\ m_1\hat{\mathbf{b}}_{s_1}'[t]+m_2 \hat{\mathbf{b}}_{s_2}'[t ] & = & \left\lfloor \mathcal{d}\left ( \hat{\mathbf{x}}_r'[t ] \right ) \mathbf{g}_{\lambda}^{-1}\right\rceil,\label{sum_information2}\end{aligned}\ ] ] where function is defined in section [ encoding_oneway ] and =m_1\hat{\mathbf{x}}_{s_1}'[t]+m_2 \hat{\mathbf{x}}_{s_2}'[t] ] to another information vector ] . during block , the relay transmits ] from the received signal to obtain & = & h_{rs_2}\sqrt{p_r}\left[\hat{\mathbf{x}}_{r}^{\prime(r)}[t]-m_2\mathbf{x}_{s_2}[t]-(1,\ldots,1)\right]\nonumber\\ & + & h_{s_1s_2}\sqrt{p_{s_1}}\mathbf{x}_{s_1}'[t+1]+\mathbf{z}_{s_2}[t+1].\end{aligned}\ ] ] then , treating +\mathbf{z}_{s_2}[t+1] ] .let =\mathrm{smod}\left(\left[\tilde{\mathbf{b}}_{s_1}^{\prime ( r)}[t]+m_2\mathbf{b}_{s_2}[t]\right],\mathbf{l}^{(r ) } \right) ] , in which the function , that is defined next , is applied componentwise source obtains the resolution information of source as \right . & - & \left .m_2\mathbf{b}_{s_2}[t]\right ) \pmod*{\mathbf{l}^{(r)}}\nonumber\\ & = & \left[\left(m_1\tilde{\mathbf{b}}_{s_1}[t]+m_2 \tilde{\mathbf{b}}_{s_2}[t]\right)\pmod*{\mathbf{l}^{(r ) } } \right .\nonumber\\ & & - \left. m_2\mathbf{b}_{s_2}[t]-\mathbf{s}_r\mathbf{l}^{(r)}\right]\pmod*{\mathbf{l}^{(r)}}\nonumber\\ & = & m_1\tilde{\mathbf{b}}_{s_1}[t]\pmod*{\mathbf{l}^{(r)}}.\end{aligned}\ ] ] then , it recovers the element of unshaped resolution information ] and +m_2\mathbf{b}_{s_2}[t] ] and =\mathcal{e}(\tilde{\mathbf{b}}_{s_1}^{\prime ( r)}[t]+m_2\mathbf{b}_{s_2}[t]) ] and ] and ] from the original received signal at block ] . 
finally , obtains the shaped and unshaped full - rate information by taking the sum of the resolution and vestigial information as =\tilde{\mathbf{b}}_{s_1}^{(r)}[t]+\tilde{\mathbf{b}}_{s_1}^{\prime(v)}[t] ] , respectively .a symbol error occurs at if .note that the above decoding process is presented for the case that the employed shaping method is hypercube shaping .when the employed shaping is nested lattice shaping , this decoding steps are still valid by changing function into regular modulo operation .this is due to the fact that , the components of lattice vectors , given as the inputs for hypercube shaping and nested lattice shaping , are drawn from different sets . for hypercube and nested lattice shading methods we use the sets and , respectively .in the simulations , , i.e. , we assume of the information integers are zero for resolution and vestigial information vectors .we have used binary ldpc codes with , where and are the codeword length and the dimension of the code , respectively . symbol error rate ( ser ) performance of ldpc lattice codes are plotted against the sum power at source and the relay , i.e. , .we have considered , and .the path loss exponents are , .the variances of the noise at the relay and destination are .the maximum number of iterations in each step of the decoding is assumed to be . since , the encoder and the decoder both know the locations of the zeros in resolution and vestigial information , based on ( [ si2])([bvi ] ) , for following locations we have we estimate and for our scheme , for the case that hypercube shaping is applied .when , all of the elements of the lattice codewords are uniformly distributed over , the average power of is . due to the fact that , the resolution lattice vectors contain more zeros , the average power in this case is less than .we assume that the members of incoming integer vector are uniformly distributed over , for .since for and for , we have for , and for , and from above equations we have put , then we have for , , so . for , .thus , we have considered , for .thus , for the cases that we have employed hypercube shaping , based on ( [ rate ] ) , the corresponding rate is bits / integer . for employing nested lattice shaping , we consider and . then , based on ( [ rate_nested ] ) , the corresponding rate is bits / integer . in order to achieve these rates , according to ( [ rate_bound ] ) , the total required powers are , and , respectively . in, we have presented ser variation versus sum of transmit powers for both nested - lattice shaping and hypercube shaping . in , the implementation of block markov encoding proposed for ldlc lattice codes .we have considered , , and other parameters similar to their corresponding values in .the ser performance of an ldlc lattice code with dimension and rate , which is obtained by employing nested lattice shaping , at is away from its corresponding df inner bound , which is .we observe that the ser performance of an ldpc lattice code of length at is away from its corresponding df inner bound .this is a natural result , due to the better ser performance of ldlcs comparing to ldpc lattice codes , over awgn channels .different decoders have been proposed for ldlcs . 
as far as we know ,the best one is proposed in .the decoding complexity of ldlcs , by using proposed decoder in , is at least times more than the decoding complexity of ldpc lattices .indeed , the decoding complexity of an ldpc lattice of dimension is equivalent to the decoding complexity of an ldlc with dimension .results of show that the increase in the dimension of the lattice can decrease the gap between df bound and the performance curve . using an ldpc lattice code of dimension instead of dimension makes about improvement in the performance . in , we plot the ser versus the sum of transmit powers , for the two - way relay channel . we suppose that the relay is midway between the sources , that is , and .we assume , for and , for .for the resolution lattice we consider .we choose .path - loss exponents are and .the used underlying codes are the same ones that we used for one - way relay channels .depending on the dimension of the lattice and the applied shaping method , our scheme achieves to within of the achievable rate in ( [ rate_bound2 ] ) .in this paper , we present the implementation of block markov encoding using ldpc lattice codes over the one - way and two - way relay channels . in order to apply this scheme, we employ a low complexity decoding method for ldpc lattices .then , for using these lattices in the power constrained scenarios , we propose two efficient shaping methods based on hypercube shaping and nested lattice shaping .we apply different decomposition schemes for one - way and two - way relay channels .the applied decomposition schemes are the altered versions of the applied methods for decomposing ldlcs . due to the lower complexity of decoding ldpc lattices comparing to ldlcs ,the complexity of the proposed schemes in this paper are significantly lower than the ones proposed for ldlcs .simulation results show that ldlcs outperform ldpc lattices in general .however , having lower decoding complexity enables us to increase the dimension of the lattice to compensate the existing gap between the performance of the ldpc lattice codes and the ldlcs .j. n. laneman , d. n. c. tse , and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ ieee trans . on inform .50 , no . 12 , pp .30623080 , dec . 2004 .n. lee and c. wang , `` aligned interference neutralization and the degrees of freedom of the two - user wireless networks with an instantaneous relay , '' _ ieee trans . on commun ._ , vol .61 , no . 9 , pp . 36113619 , sep . | low density parity check ( ldpc ) lattices are obtained from construction d and a family of nested binary ldpc codes . we consider an special case of these lattices with one binary ldpc code as underlying code . this special case of ldpc lattices can be obtained by lifting binary ldpc codes using construction a lattices . the ldpc lattices were the first family of lattices which have efficient decoding in high dimensions . we employ the encoding and decoding of the ldpc lattices in a cooperative transmission framework . we establish two efficient shaping methods based on hypercube shaping and voronoi shaping , to obtain ldpc lattice codes . then , we propose the implementation of block markov encoding for one - way and two - way relay networks using ldpc lattice codes . this entails owning an efficient method for decomposing full - rate codebook into lower rate codebooks . 
we apply different decomposition schemes for one - way and two - way relay channels which are the altered versions of the decomposition methods of low density lattice codes ( ldlcs ) . due to the lower complexity of the decoding for ldpc lattices comparing to ldlcs , the complexity of our schemes are significantly lower than the ones proposed for ldlcs . the efficiency of the proposed schemes are presented using simulation results . relay networks , lattice codes , block markov encoding . |
the constrained voter model has been originally introduced in to understand the opinion dynamics in a spatially structured population of leftists , centrists and rightists . as in the popular voter model , the individuals are located on the vertex set of a graph andinteract through the edges of the graph at a constant rate .however , in contrast with the classical voter model where , upon interaction , an individual adopts the opinion of her neighbor , it is now assumed that this imitation rule is suppressed when a leftist and a rightist interact . in particular, the model includes a social factor called homophily that prevents agents who disagree too much to interact .* model description * this paper is concerned with a natural generalization of the previous version of the constrained voter model that includes an arbitrary finite number of opinions and a so - called confidence threshold . having a connected graph representing the network of interactions ,the state at time is a spatial configuration each individual looks at each of her neighbors at rate one that she imitates if and only if the opinion distance between the two neighbors is at most equal to the confidence threshold . formally , the dynamics of the system is described by the markov generator \end{array}\ ] ] where configuration is obtained from by setting and where means that the two vertices are connected by an edge .note that the basic voter model and the original version of the constrained voter model including the three opinions leftist , centrist and rightist can be recovered from our general model as follows : the main question about the constrained voter model is whether the system fluctuates and evolves to a global consensus or fixates in a highly fragmented configuration . to define this dichotomy rigorously, we say that * fluctuation * occurs whenever and that * fixation * occurs if there exists a configuration such that in other words , fixation means that the opinion of each individual is only updated a finite number of times , therefore fluctuation and fixation exclude each other .we define convergence to a global consensus mathematically as a * clustering * of the system , i.e. , note that , whenever , the process reduces to the basic voter model with instead of two different opinions for which the long - term behavior of the process is well known : the system on lattices fluctuates while the system on finite connected graphs fixates to a configuration in which all the individuals share the same opinion .in particular , the main objective of this paper is to study fluctuation and fixation in the nontrivial case when . * main results * whether the system fluctuates or fixates depends not only on the two parameters but also on the initial distribution .in particular , we point out that , throughout the paper , it will be assumed that the initial distribution is the product measure with constant densities . to avoid trivialities , we also assume that the initial density of each of the opinions is positive : for the constrained voter model on the one - dimensional torus with vertices , the mean - field analysis in suggests that , in the presence of three opinions and when the threshold is equal to one , the average domain length at equilibrium is ^ 2 \ \sim \ \frac{2 \rho_2}{\pi}\ ] ] when the initial density of centrists is small and is large .vzquez et al . 
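A minimal simulation sketch of these dynamics on a one-dimensional torus is given below; continuous time is approximated by picking a uniformly random directed edge at each step, and the number of opinions, the threshold and the lattice size are placeholder values.

```python
import numpy as np

def constrained_voter(n_sites=200, n_opinions=3, threshold=1, steps=200000, seed=0):
    """Constrained voter model on the torus Z/nZ: a site copies a random
    neighbour only if their opinions differ by at most the threshold."""
    rng = np.random.default_rng(seed)
    state = rng.integers(1, n_opinions + 1, size=n_sites)
    for _ in range(steps):
        x = rng.integers(n_sites)                 # site to update
        y = (x + rng.choice((-1, 1))) % n_sites   # one of its two neighbours
        if abs(int(state[x]) - int(state[y])) <= threshold:
            state[x] = state[y]                   # imitation within the confidence threshold
    return state

final = constrained_voter()
print("interfaces remaining:", int(np.sum(final != np.roll(final, 1))))
```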
also showed that these predictions agree with their numerical simulations from which they conclude that , when the initial density of centrists is small , the system fixates with high probability in a frozen mixture of leftists and rightists .in contrast , it is conjectured in based on an idea in that the infinite system fluctuates and clusters whenever , which includes the threshold one model with three opinions introduced in . to explain this apparent disagreement, we first observe that , regardless of the parameters , the system on finite graphs always fixate and there is a positive probability that the final configuration consists of a highly fragmented configuration , thus showing that spatial simulations of the necessarily finite system are not symptomatic of the behavior of its infinite counterpart .our first theorem shows that the conjecture in is indeed correct . [ th : fluctuation ] assume and . then , 1 .the process on fluctuates and clusters . 2 .the probability of consensus on any finite connected graph satisfies the intuition behind the proof is that , whenever , there is a nonempty set of opinions which are within the confidence threshold of any other opinions .this simple observation implies the existence of a coupling between the constrained and basic voter models , which is the key to proving fluctuation .the proof of clustering is more difficult .it heavily relies on the fact that the system fluctuates but also on an analysis of the interfaces of the process through a coupling with a certain system of charged particles .in contrast , our lower bound for the probability of consensus on finite connected graphs relies on techniques from martingale theory .note that this lower bound is in fact equal to the initial density of individuals who are in the confidence threshold of any other individuals in the system .returning to the relationship between finite and infinite systems , we point out that the simulation pictures of figure [ fig : interface ] , which show two typical realizations of the process on the torus under the assumptions of the theorem , suggest fixation of the infinite counterpart in a highly fragmented configuration , in contradiction with the first part of our theorem , showing again the difficulty to interpret spatial simulations .note also that , for the system on the one - dimensional torus with vertices , the average domain length at equilibrium is bounded from below by which , together with the second part of the theorem , proves that the average domain length scales like the population size when and that does not hold . while our fluctuation - clustering result holds regardless of the initial densitiesprovided they are all positive , whether fixation occurs or not seems to be very sensitive to the initial distribution .also , to state our fixation results and avoid messy calculations later , we strengthen condition and assume that the next theorem looks at the fixation regime in three different contexts . [ th : fixation ] assume .then , the process on fixates in the following cases : 1 . and is small enough .2 . is large , and where 3 . 
and and .the first part of the theorem is the converse of the first part of theorem [ th : fluctuation ] , thus showing that the condition is critical in the sense that * when , the one - dimensional constrained voter model fluctuates when starting from any nondegenerate distributions whereas * when , the one - dimensional constrained voter model can fixate even when starting from a nondegenerate distribution .the last two parts of the theorem specialize in two particular cases .the first one looks at uniform initial distributions in which all the opinions are equally likely . for simplicity ,our statement focuses on the fixation region when the parameters are large but our proof is not limited to large parameters and implies more generally that the system fixates for all pairs of parameters corresponding to the set of white dots in the phase diagram of figure [ fig : diagram ] for the one - dimensional system with up to twenty opinions .note that the picture suggests that the process starting from a uniform initial distribution fixates whenever even for a small number of opinions .the second particular case returns to the slightly more general initial distributions but focuses on the threshold one model with four opinions for which fixation is proved when is only slightly less than one over the number of opinions = 0.25 .this last result suggests that the constrained voter model with four opinions and threshold one fixates when starting from the uniform product measure , although the calculations become too tedious to indeed obtain fixation when starting from this distribution .* structure of the paper * the rest of the article is devoted to the proof of both theorems . even though our proof of fluctuation - clustering and fixation differ significantly , a common technique we introduce to study these two aspects for the one - dimensional process is a coupling with a certain system of charged particles that keeps track of the discrepancies along the edges of the graph rather than the actual opinion at each vertex .in contrast , our approach to analyze the process on finite connected graphs is to look at the opinion at each vertex and use , among other things , the optimal stopping theorem for martingales .the coupling with the system of charged particles is introduced in section [ sec : coupling ] and then used in section [ sec : fluctuation ] to prove theorem [ th : fluctuation ] .the proof of theorem [ th : fixation ] is more complicated and carried out in the last five sections [ sec : condition][sec : fixation - particular ] .in addition to the coupling with the system of charged particles introduced in the next section , the proof relies on a characterization of fixation based on so - called active paths proved in section [ sec : condition ] and large deviation estimates for the number of changeovers in a sequence of independent coin flips proved in section [ sec : deviation ] .to study the one - dimensional system , it is convenient to construct the process from a graphical representation and to introduce a coupling between the process and a certain system of charged particles that keeps track of the discrepancies along the edges of the lattice rather than the opinion at each vertex .this system of charged particles can also be constructed from the same graphical representation . 
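To make this coupling concrete before the formal construction, the following sketch maps an opinion configuration to the discrepancies carried by the edges and flags the edges whose discrepancy exceeds the threshold; the number of opinions, the threshold and the segment length are placeholder values.

```python
import numpy as np

def edge_piles(opinions, threshold):
    """Signed discrepancy x(right) - x(left) on each edge of a 1-D opinion
    configuration; edges whose discrepancy exceeds the threshold are frozen."""
    eta = np.diff(opinions)
    frozen = np.abs(eta) > threshold
    return eta, frozen

rng = np.random.default_rng(3)
opinions = rng.integers(1, 5, size=12)        # 4 opinions on a small segment
eta, frozen = edge_piles(opinions, threshold=1)
print("opinions     :", opinions)
print("edge piles   :", eta)
print("frozen edges :", frozen.astype(int))
```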
since the constrained voter model on general finite graphswill be studied using other techniques , we only define the graphical representation for the process on , which consists of the following collection of independent poisson processes : * for each , we let be a rate one poisson process , * we denote by its arrival time . this collection of independent poisson processes is then turned into a percolation structure by drawing an arrow at time and , given a configuration of the one - dimensional system at time , we say that this arrow is * active * if and only if the configuration at time is then obtained by setting an argument due to harris implies that the constrained voter model starting from any configuration can indeed be constructed using this percolation structure and rule . from the collection of active arrows , we construct active paths as in percolation theory .more precisely , we say that there is an * active path * from to , and write , whenever there exist such that the following two conditions hold : 1 . for , there is an active arrow from to at time .2 . for , there is no active arrow that points at .note that conditions 1 and 2 above imply that moreover , because of the definition of active arrows , the opinion of vertex at time originates and is therefore equal to the initial opinion of vertex so we call vertex the * ancestor * of vertex at time . one of the key ingredients to studying the one - dimensional system is to look at the following process defined on the edges : identifying each edge with its midpoint , we set and think of edge as being * empty whenever , * occupied by a pile of particles with positive charge whenever , * occupied by a pile of particles with negative charge whenever .the dynamics of the constrained voter model induces evolution rules which are again markov on this system of charged particles .assume that there is an arrow at time and indicating in particular that there is a pile of particles with positive charge at .then , we have the following alternative : * there is no particle at edge or equivalently in which case the individuals at vertices and already agree so nothing happens .* there is a pile of particles at edge in which case and disagree too much to interact so nothing happens .* there is a pile of particles at in which case in particular , there is no more particles at edge and a pile of particles all with the common charge at edge .similar evolution rules are obtained by exchanging the direction of the interaction or by assuming that we have from which we can deduce the following description : * piles with more than particles can not move therefore we call such piles * blockades * and the particles they contain * frozen * particles .* piles with at most particles jump one step to the left or one step to the right at the same rate one therefore we call the particles they contain * active * particles . 
* when a pile with positive / negative particles jumps onto a pile with negative / positive particles , positive and negative particles *annihilate * by pair which results in a smaller pile of particles all with the same charge .we refer to figure [ fig : particles ] for an illustration of these evolution rules .note that whether an arrow is active or not can also be characterized from the state of the edge process : in particular , active arrows correspond to active piles of particles .the key ingredient to proving fluctuation of the one - dimensional system and estimating the probability of consensus on finite connected graphs is to partition the opinion set into two sets that we shall call the set of * centrist * opinions and the set of * extremist * opinions : note that the assumption implies that the set of centrist opinions is nonempty .note also that both sets are characterized by the properties as shown in figure [ fig : partition ] which gives a schematic illustration of the partition .fluctuation is proved in the next lemma using this partition and relying on a coupling with the voter model . [ lem : fluctuation ] the process on fluctuates whenever and .it follows from that centrist agents are within the confidence threshold of every other individual .in particular , for each pair we have the transition rates and similarly now , we introduce the process since for all the transition rates are constant over all according to , we have the following local transition rate for this new process : using in place of and some obvious symmetry , we also have this shows that the spin system reduces to the voter model. in particular , the lemma directly follows from the fact that the one - dimensional voter model itself , when starting with a positive density of each type , fluctuates , a result proved based on duality in , pp 868869 . [ lem : cluster ] the process on clusters whenever and .the proof strongly relies on the coupling with the voter model in the proof of the previous lemma . to begin with, we define the function which , in view of translation invariance of the initial configuration and the evolution rules , does not depend on the choice of . note that , since the system of charged particles coupled with the process involves deaths of particles but no births , the function is nonincreasing in time , therefore it has a limit : as .now , on the event that an edge is occupied by a pile of at least one particle at a given time , we have the following alternative : * is a blockade . in this case , since the centrist agents are within the confidence threshold of all the other agents , we must have but since the voter model fluctuates , in particular , at least of one of the frozen particles at is killed eventually .* is a live edge . in this case , since one - dimensional symmetric random walks are recurrent , the active pile of particles at eventually intersects another pile of particles , and we have the following alternative : * * the two intersecting piles of particles have opposite charge , which results in the simultaneous death of at least two particles . ** the two intersecting piles have the same charge and merge to form a blockade in which case we are back to the previous case : since the voter model fluctuates , at least one of the frozen particles in this blockade is killed eventually . 
** the two intersecting piles have the same charge and merge to form a larger active pile in which case the pile keeps moving until , after a finite number of collisions , we are back to one of the previous two possibilities : at least two active particles annihilate or there is creation of a blockade with at least one particle that is killed eventually . in either case , as long as there are particles , there are also annihilating events indicating that the density of particles is strictly decreasing as long as it is positive .in particular , the density of particles decreases to zero so there is extinction of both the active and frozen particles : in particular , for all with , we have which proves clustering . + + the second part of the theorem , which gives a lower bound for the probability of consensus of the process on finite connected graphs , relies on very different techniques , namely techniques related to martingale theory following an idea from , section 3 . however , the partition of the opinion set into centrist opinions and extremist opinions is again a key to the proof . [ lem : consensus ] for the process on any finite connected graph , the first step is to prove that the process that keeps track of the number of supporters of any given opinion is a martingale .then , applying the martingale convergence theorem and optimal stopping theorem , we obtain a lower bound for the probability of extinction of the extremist agents , which is also a lower bound for the probability of consensus . for , we set and we observe that letting denote the natural filtration of the process , we also have indicating that the process is a martingale with respect to the natural filtration of the constrained voter model .this , together with , implies that also is a martingale .it is also bounded because of the finiteness of the graph therefore , according to the martingale convergence theorem , there is almost sure convergence to a certain random variable : and we claim that can only take two values : to prove our claim , we note that , invoking again the finiteness of the graph , the process gets trapped in an absorbing state after an almost surely stopping time so we have assuming by contradiction that gives an absorbing state with at least one centrist agent and at least one extremist agent . since the graph is connected , this implies the existence of an edge such that but then we have and showing that is not an absorbing state , in contradiction with the definition of time .this proves that our claim is true .now , applying the optimal stopping theorem to the bounded martingale and the almost surely finite stopping time and using , we obtain from which it follows that to conclude , we observe that , on the event that , all the opinions present in the system after the hitting time are within distance of each other therefore the process evolves according to a voter model after that time . since the only absorbing states of the voter model on finite connected graphs are the configurations in which all the agents share the same opinion , we deduce that the system converges to a consensus .this , together with , implies that this completes the proof of the lemma .the main objective of this section is to prove a sufficient condition for fixation of the constrained voter model based on certain properties of the active paths . [ lem : fixation - condition ] for all , let then , the constrained voter model fixates whenever this is similar to the proof of lemma 2 in and lemma 4 in . 
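A quick Monte Carlo check of this lower bound on a small cycle is sketched below; each run is stopped once no edge can flip any more (a safety cap on the number of steps is assumed to be large enough for absorption), and the lattice size, number of trials and parameters are placeholder values chosen so that the simulation finishes quickly.

```python
import numpy as np

def centrist_density(n_opinions, threshold):
    """Initial density of opinions within the threshold of every opinion,
    under the uniform product measure."""
    ops = range(1, n_opinions + 1)
    return sum(all(abs(i - j) <= threshold for j in ops) for i in ops) / n_opinions

def run_to_absorption(n_sites, n_opinions, threshold, rng, max_steps=500_000):
    """Constrained voter model on a cycle, run until no edge can flip."""
    state = rng.integers(1, n_opinions + 1, size=n_sites)
    for step in range(max_steps):
        x = rng.integers(n_sites)
        y = (x + rng.choice((-1, 1))) % n_sites
        if 1 <= abs(int(state[x]) - int(state[y])) <= threshold:
            state[x] = state[y]
        if step % 500 == 0:                      # cheap absorption test
            gaps = np.abs(np.diff(np.append(state, state[0])))
            if not np.any((gaps >= 1) & (gaps <= threshold)):
                break
    return state

n_opinions, threshold, n_sites, trials = 3, 1, 20, 50
rng = np.random.default_rng(11)
hits = sum(len(set(run_to_absorption(n_sites, n_opinions, threshold, rng).tolist())) == 1
           for _ in range(trials))
print("estimated consensus probability :", hits / trials)
print("martingale lower bound          :", centrist_density(n_opinions, threshold))
```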
to begin with ,we define recursively a sequence of stopping times by setting in other words , the stopping time is the time the individual at the origin changes her opinion .now , we define the following random variables and collection of events : the assumption together with reflection symmetry implies that the event occurs almost surely for some positive integer , which implies that since is the event that the individual at the origin changes her opinion infinitely often , in view of the previous inequality , in order to establish fixation , it suffices to prove that to prove equations , we let be the set of descendants of at time which , due to one - dimensional nearest neighbor interactions , is necessarily an interval and its cardinality , respectively . now , since each interaction between two individuals is equally likely to affect the opinion of each of these two individuals , the number of descendants of any given site is a martingale whose expected value is constantly equal to one .in particular , the martingale convergence theorem implies that therefore the number of descendants of converges to a finite value .since in addition the number of descendants is an integer - valued process , which further implies that , with probability one , finally , we note that , on the event , the last time the individual at the origin changes her opinion is at most equal to the largest of the stopping times for therefore according to .this proves and the lemma .in order to find later a good upper bound for the probability in and deduce a sufficient condition for fixation of the process , the next step is to prove large deviation estimates for the number of piles with particles with a given charge in a large interval . more precisely , the main objective of this section is to prove that for all and all the probability that decays exponentially with . note that , even though the initial opinions are chosen independently , the states at different edges are not independent .for instance , a pile of particles with a positive charge is more likely to be surrounded by negative particles .in particular , the result does not simply follow from large deviation estimates for the binomial distribution .the main ingredient is to first show large deviation estimates for the number of so - called changeovers in a sequence of independent coin flips .consider an infinite sequence of independent coin flips such that where is the outcome : heads or tails , at time .we say that a * changeover * occurs whenever two consecutive coin flips result in two different outcomes .the expected value of the number of changeovers before time can be easily computed by observing that and by using the linearity of the expected value : then , we have the following large deviation estimates for the number of changeovers . [ lem : changeover ] for all , there exists such that to begin with , we let be the time to the changeover and notice that , since all the outcomes between two consecutive changeovers are identical , the sequence of coin flips up to this stopping time can be decomposed into strings with an alternation of strings with only heads and strings with only tails followed by one more coin flip .in addition , since the coin flips are independent , the length distribution of each string is and lengths are independent . 
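The expectation used above is easy to verify numerically: each of the pairs of consecutive flips is a changeover with probability 2p(1-p), so the mean number of changeovers among n flips is 2p(1-p)(n-1). A quick simulation with placeholder values of n and p also illustrates how sharply the count concentrates around this mean.

```python
import numpy as np

def count_changeovers(flips):
    """Number of positions where two consecutive flips differ."""
    return int(np.sum(flips[1:] != flips[:-1]))

rng = np.random.default_rng(5)
p, n, trials = 0.3, 1000, 20000
counts = np.array([count_changeovers(rng.random(n) < p) for _ in range(trials)])

print("empirical mean    :", counts.mean())
print("2*p*(1-p)*(n-1)   :", 2 * p * (1 - p) * (n - 1))
# large-deviation flavour: a 10% excess over the mean is already very rare
excess = np.mean(counts > 1.1 * 2 * p * (1 - p) * (n - 1))
print("fraction of runs exceeding the mean by 10%:", excess)
```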
in particular , is equal in distribution to the sum of independent geometric random variables with parameters and , namely , we have now , using that , for all , and large deviation estimates for the binomial distribution implies that for a suitable constant and all large .similarly , for a suitable and all large . combining , we deduce that taking and observing that , we deduce for a suitable and all large . in particular , for all sufficiently large , for suitable constants and and all sufficiently large . using the previous two inequalities and the fact that the event that the number of changeovers is equal to is also the event that the time to the changeover is less than but the time to the next changeover is more than , we conclude that for all sufficiently large .this completes the proof .+ + now , we say that an edge is of type if it connects an individual with initial opinion on the left to an individual with initial opinion on the right , and let denote the number of edges of type in the interval . using the large deviation estimates for the number of changeovers established in the previous lemma, we can now deduce large deviation estimates for the number of edges of each type . [ lem : edge ] for all , there exists such that for any given , the number of edges and with has the same distribution as the number of changeovers in a sequence of independent coin flips of a coin that lands on heads with probability .in particular , applying lemma [ lem : changeover ] with gives for all sufficiently large .in addition , since each preceding a changeover is independently followed by any of the remaining opinions , combining with large deviation estimates for the binomial distribution , conditioning on the number of edges of type for some , and using that we deduce the existence of such that for all large . similarly , there exists such that for all large .the lemma follows from . + + note that the large deviation estimates for the initial number of piles of particles easily follows from the previous lemma .finally , from the large deviation estimates for the number of edges of each type , we deduce the analog for a general class of weight functions that will be used in the next section to find a sufficient condition for fixation of the constrained voter model . [ lem : weight ] let and assume that with for . for all , there exists such that first , we observe that this , together with lemma [ lem : edge ] , implies that for a suitable constant and all large .in view of lemma [ lem : fixation - condition ] , in order to prove fixation , it suffices to show that the probability of the event in equation , that we denote by , tends to zero as .let be the first time an active path starting from hits the origin , and observe that from which it follows that denote by the initial position of this active path and by the rightmost source of an active path that reaches the origin by time , i.e. , and define .now , note that each blockade initially in the interval must have been destroyed , i.e. 
, turned into a set of active particles through the annihilation of part of the particles that constitute the blockade , by time .moreover , all the active particles initially outside the interval can not jump inside the space - time region delimited by the two active paths implicitly defined in because the existence of such particles would contradict either the minimality of or the maximality of .in particular , on the event , all the blockades initially in must have been destroyed before time by either active particles initially in or active particles resulting from the destruction of the blockades initially in . to estimate the probability of this last event, we first give a weight of to each particle initially active by setting now , since each blockade with frozen particles can induce the annihilation of at least active particles after which there is a set of at most initially frozen particles becoming active , the weight of such an edge is set to , i.e. , the fact that the event occurs only if all the blockades initially in the interval are destroyed by either active particles initially in or active particles resulting from the destruction of the blockades initially in can be expressed as to find an upper bound for the probability of the event on the right - hand side of , we first compute the expected value of the weight function and then use the large deviation estimates proved in lemma [ lem : weight ] .the next lemma gives an explicit expression of the expected value of the weight function and will be used repeatedly in the proofs of our last three theorems in order to identify sets of parameters in which fixation occurs . to prove this lemma as well as the so - called contribution of additional events later , we make use of the identities [ lem : expected - weight ] assume . then , where to begin with , we note that from which we deduce that for all while for .it follows that also , decomposing the second sum in depending on whether the number of particles is larger or smaller than and changing variables , we obtain combining , we obtain which , recalling the notation in , becomes to further simplify the expected value , we note that then , plugging into , we get this completes the proof .+ + to deduce theorem [ th : fixation].a , we observe that using the continuity of together with lemma [ lem : expected - weight ] gives then , using and applying lemma [ lem : weight ] with , we deduce this together with lemma [ lem : fixation - condition ] implies fixation .we now specialize in the case of a uniform initial distribution : , which forces the initial density of each opinion to be equal to . in this case, the expected value of the weight function reduces to the expression given in the following lemma . [ lem : expected - weight - uniform ] assume with .then , according to lemma [ lem : expected - weight ] , we have in other respects , using with together with gives the lemma follows from combining .+ + letting and taking , the lemma implies that in particular , whenever is large and using again lemmas [ lem : fixation - condition ] and [ lem : weight ] as well as , we deduce as in the previous section that fixation occurs under the condition above when the parameters are large enough .this is not exactly the assumption of our theorem .note however that the weight function is defined based on a worst case scenario in which the active particles _ do their best _ to destroy the blockades and to turn as many frozen particles as possible into active particles . 
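before turning to these refinements, the combinatorial data that the weight-function bound is built from can be looked at numerically. the sketch below samples an i.i.d. initial opinion configuration and tabulates the piles of particles it induces, assuming the construction suggested by the discussion above: the edge between two neighbors carries a pile whose size is the opinion difference, with the sign of the difference as its charge, and a pile is a frozen blockade exactly when its size exceeds the confidence threshold. that reading of the construction, as well as the distribution and parameters used, are assumptions made for illustration only.

```python
import random

def pile_statistics(n_sites, rho, threshold, rng):
    """Sample i.i.d. opinions with distribution rho over {1,...,F} and classify the
    induced piles: |o(x+1)-o(x)| particles per edge (assumed construction), active
    when the size is at most the threshold, a frozen blockade when it is larger."""
    F = len(rho)
    o = rng.choices(range(1, F + 1), weights=rho, k=n_sites)
    sizes = [o[x + 1] - o[x] for x in range(n_sites - 1)]
    active_piles = sum(1 for s in sizes if 0 < abs(s) <= threshold)
    blockades = sum(1 for s in sizes if abs(s) > threshold)
    active_particles = sum(abs(s) for s in sizes if 0 < abs(s) <= threshold)
    frozen_particles = sum(abs(s) for s in sizes if abs(s) > threshold)
    return active_piles, blockades, active_particles, frozen_particles

rng = random.Random(2)
# illustrative: five opinions, uniform initial distribution, confidence threshold two
print(pile_statistics(100_000, [0.2] * 5, 2, rng))
```

dividing the returned counts by the number of edges gives monte carlo estimates of the per-edge expectations that enter bounds of the kind proved above.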
to improve the lower bound for the asymptotic critical slope from to ,the idea is to take into account additional events that eliminate some active particles and find lower bounds for their contribution defined as the number of active particles they eliminate times their probability .the events we consider are 1 .* annihilation of active particles * due to the collision of a pile of active particles with positive charge with a pile of active particles with a negative charge , 2 .* blockade formation * due to the collision of two piles of active particles with the same charge and total size exceeding the confidence threshold , 3 .* blockade increase * due to the jump of a pile of active particles with a certain charge onto a blockade with the same charge .more precisely , we introduce the events as well as the three events the next three lemmas give lower bounds for the contribution of these three events . [ lem : contribution-1 ] the contribution of the event satisfies by conditioning on the possible values of , we get in particular , when and we have note that on the event the number of particles that are eliminated is twice the size of the smallest of the two active piles therefore the contribution of this event is given by where the factor 1/3 is the probability that there is an arrow pointing at before there is an arrow pointing at . using obvious symmetry , and , we deduce that this completes the proof . [ lem : contribution-2 ] the contribution of the event satisfies first , we note that taking and in equation gives in addition , the event with replaces a set of active particles by a blockade of size so it induces the annihilation of at least active particles .the contribution of is therefore using again some obvious symmetry together with and and the fact that the contribution above is a function of , we deduce that this completes the proof . [ lem : contribution-3 ] the contribution of the event satisfies assume that and .then , again implies that in addition , the event replaces an active pile of size and a blockade of size with a blockade of size so it induces the annihilation of active particles .the contribution of is therefore where we use that the conditional probability is larger than using symmetry together with and , we get this completes the proof .+ + using again lemma [ lem : fixation - condition ] and the large deviation estimates of lemma [ lem : weight ] , we deduce that the one - dimensional constrained voter model fixates whenever which , together with lemmas [ lem : expected - weight][lem : contribution-3 ] , gives the condition for fixation : letting again and taking , we obtain fixation whenever this gives fixation when is large and which completes the proof of theorem [ th : fixation].b .in this last section , we assume , thus returning to slightly more general initial distributions , but specialize in the system with threshold one and four opinions , which is not covered by part b of the theorem .note that , applying lemma [ lem : expected - weight ] with and , we obtain then , using that , we get in particular , using the same arguments as in the previous two sections , we deduce that the system with threshold one and four opinions fixates whenever to improve this condition to the one stated in the theorem , we follow the same strategy as in the previous section , namely we compute the contribution of the three events . 
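a crude simulation of this particular system is easy to set up and gives a feeling for what fixation looks like in practice: on a large ring with four opinions, confidence threshold one and a uniform initial distribution, the opinion at a tagged site typically stops changing after a fairly short time. a finite ring over a finite horizon is of course only suggestive and proves nothing; all parameter choices below are illustrative.

```python
import random

def last_flip_at_origin(n_sites, n_opinions, threshold, t_max, rng):
    """Run the constrained voter model on a ring and return the time of the last
    opinion change at site 0 (None if it never changes)."""
    o = [rng.randrange(1, n_opinions + 1) for _ in range(n_sites)]
    t, last = 0.0, None
    while t < t_max:
        t += rng.expovariate(n_sites)      # the n_sites ring edges each ring at rate one
        x = rng.randrange(n_sites)         # edge between sites x and x+1
        y = (x + 1) % n_sites
        if o[x] != o[y] and abs(o[x] - o[y]) <= threshold:
            src, dst = (x, y) if rng.random() < 0.5 else (y, x)
            o[dst] = o[src]
            if dst == 0:
                last = t
    return last

rng = random.Random(4)
times = [last_flip_at_origin(2000, 4, 1, 200.0, rng) for _ in range(20)]
print("time of the last flip at the origin in 20 runs:",
      [None if t is None else round(t, 1) for t in times])
```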
using the specific value of the parameters allows us to significantly improve the lower bounds for the contribution of the three events .this is done in the following three lemmas . to simplify the notation, we define and partition the event into two events distinguishing between two types of initial conditions that result in two different contributions : also , we let be the time of the first active arrow that either starts or points at . since two particles are eliminated on , the contribution of the event is given by now , we let be the distance between vertex and the rightmost agent with either initial opinion 1 or initial opinion 3 to the left of , i.e. , and observe that , in order to have a change of opinion at before time , there must be a sequence of at least arrows all occurring before time . summing over all possible positions of this rightmost agent and using that exponentially distributed with rate 4 , we obtain where the first term corresponds to the case where and is obtained by further conditioning on whether the arrow and the reverse arrow occur before time or not . combining our lower bound for the contribution of together with , we obtain repeating the same reasoning for the event gives conditioning on all possible values of where now keeps track of the position of the closest agent with opinion 2 to the left of , which is geometric with parameter , we get therefore , the contribution of the event is bounded by the lemma follows from the combination of and . note that the set of initial configurations on the event is again , we let be the time of the first active arrow that either starts or points at .since two active particles are eliminated on the event , we have using and as in the previous lemma gives this completes the proof .the set of initial configurations is now note that the time of the first active arrow that either starts or points at is now exponentially distributed with rate two .since two active particles are eliminated on , we have using the same reasoning as in but recalling that is now exponentially distributed with parameter two instead of four , we also have from which we deduce that this completes the proof . + + we refer the reader to figure [ fig : contribut ] for a plot of the expected value of and the lower bounds proved in lemmas [ lem : contribution-4][lem : contribution-6 ] with respect to the initial density . using again the same arguments as in the previous two sections, we deduce that the one - dimensional constrained voter model with threshold one and four opinions fixates whenever which , recalling , using lemmas [ lem : contribution-4][lem : contribution-6 ] , and expanding and simplifying the expression above , gives the following sufficient condition for fixation : where to complete the proof of the theorem , we need the following technical lemma . to begin with, we introduce the polynomials and observe that the derivative of can be written as now , it is clear that in other respects , we have showing that is decreasing in the interval . therefore , the polynomial is concave in this interval , from which it follows that combining , we deduce that is decreasing in . to prove that also has a unique root in this interval , we use its monotonicity and continuity , the fact that and the intermediate value theorem .adamopoulos , a. , scarlatos , s. ( 2012 ) .behavior of social dynamical models ii : clustering for some multitype particle systems with confidence threshold .lecture notes in computer science , vol .7495 , pp . 
151160 , springer , heidelberg 2012 . | the constrained voter model describes the dynamics of opinions in a population of individuals located on a connected graph . each agent is characterized by her opinion , where the set of opinions is represented by a finite sequence of consecutive integers , and each pair of neighbors , as defined by the edge set of the graph , interact at a constant rate . the dynamics depends on two parameters : the number of opinions denoted by and a so - called confidence threshold denoted by . if the opinion distance between two interacting agents exceeds the confidence threshold then nothing happens , otherwise one of the two agents mimics the other one just as in the classical voter model . our main result shows that the one - dimensional system starting from any product measures with a positive density of each opinion fluctuates and clusters if and only if . sufficient conditions for fixation in one dimension when the initial distribution is uniform and lower bounds for the probability of consensus for the process on finite connected graphs are also proved . |
biological systems are composed of molecular components and the interactions between these components are of an intrinsically stochastic nature . at the same time , living cells perform their tasks reliably , which leads to the question how reliability of a regulatory system can be ensured despite the omnipresent molecular fluctuations in its biochemical interactions .previously , this question has been investigated mainly on the single gene or molecule species level .in particular , different mechanisms of noise attenuation and control have been explored , such as the relation of gene activity changes , transcription and translation efficiency , or gene redundancy .apart from these mechanisms acting on the level of the individual biochemical reactions , also features of the circuitry of the reaction networks can be identified which aid robust functioning .a prime example of such a qualitative feature that leads to an increased stability of a gene s expression level despite fluctuations of the reactants is negative autoregulation . at a higher level of organization, the specifics of the linking patterns among groups of genes or proteins can also contribute to the overall robustness .in comparative computational studies of several different organisms , it has been shown that among those topologies that produce the desired functional behavior only a small number also displays high robustness against parameter variations . indeed , the experimentally observed networks rank high among these robust topologies . however , these models are based on the deterministic dynamics of differential equations .modeling of the intrinsic noise associated with the various processes in the network requires an inherently stochastic modeling framework , such as stochastic differential equations or a master equation approach .these complex modeling schemes need a large number of parameters such as binding constants and reaction rates and can only be conducted for well - known systems or simple engineered circuits . for generic investigations of such systems ,coarse - grained modeling schemes have been devised that focus on network features instead of the specifics of the reactions involved . to incorporate the effects of molecular fluctuations into discrete models ,a commonly used approach is to allow random flips of the node states .several biological networks have been investigated in this framework and a robust functioning of the core topologies has been identified . however , for biological systems , the perturbation by node state flips appears to be a quite harsh form of noise : in real organisms , concentrations and timings fluctuate , while the qualitative state of a gene is often quite stable . a more realistic form of fluctuations than macroscopic ( state flip ) noise should allow for microscopic fluctuations .this can be implemented in terms of fluctuating timing of switching events .the principle idea is to allow for fluctuations of event times and test whether the dynamical behavior of a given network stays ordered despite these fluctuations . in this workwe want to focus on the reliability criterion that has been used to show the robustness of the yeast cell - cycle dynamics against timing perturbations and investigate the interplay of topological structure and dynamical robustness . 
using small genetic circuits we explore the concept of reliability and discuss design principles of reliable networks .however , biological networks have not been engineered with these principles in mind , but instead have emerged from evolutionary procedures .we want to investigate whether an evolutionary procedure can account for reliability of network dynamics .a number of studies has focused on the question of evolution towards robustness .however , the evolution of reliability against timing fluctuations has not been investigated .first indications that network architecture can be evolved to display reliable dynamics despite fluctuating transmission times has been obtained in a first study in . using a deterministic criterion for reliable functioning , introduced in , it was found that small networks can be rapidly evolved towards fully reliable attractor landscapes .also , if a given ( unreliable ) attractor is chosen as the `` correct '' system behavior , it was shown that with a high probability a simple network evolution is able to find a network that reproduces this attractor reliably , i.e. in the presence of noise . here, we want to use a more biologically plausible definition of timing noise to investigate whether a network evolution procedure can generate robust networks .we focus on the question whether a predefined network behavior can be implemented in a reliable way , just utilizing mutations of the network structure .we use a simple dynamical rule to obtain the genes activity states , such that the dynamical behavior of the system is completely determined by the wiring of the network .a widely accepted computational description of molecular biological systems uses chemical master equations and simulation of trajectories by explicit stochastic modeling , e.g. through the gillespie algorithm .however , this method needs a large number of parameters to completely describe the system dynamics .thus , for gaining qualititative insights into the dynamics of genetic regulatory system it has proven useful to apply strongly coarse - grained models .boolean networks , first introduced by kauffman have emerged as a successful tool for qualitative dynamical modeling and have been successfully employed in models of regulatory circuits in various organisms such as _ d. melanogaster _ , _ s. cerevisiae _a. thaliana _ , and recently _ s. pombe _ . in this class of dynamical models , genes , proteins and mrnaare modeled as discrete switches which assume one of only two possible states .here , the active state represents a gene being transcribed or molecular concentrations ( of mrna or proteins ) above a certain threshold level .thus , at this level , a regulatory network is modeled as a simple network of switches .time is modeled in discrete steps and the state of all nodes is updated at the same time depending only on the state of all nodes at the previous time step according to the wiring of the network and the given boolean function .when such a system is initialized with some given set of node states , it will in general follow a series of state changes until it reaches a configuration that has been visited before ( finite number of states ) .because of the deterministic nature of the dynamics , the system has then entered a limit cycle and repeats the same sequence of states indefinitely ( or keeps the same state in the case of a fixed point attractor ) . 
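the synchronous dynamics just described can be written down in a few lines; a sketch of the update step and of the search for the limit cycle is given below. the rule used here, a node is active in the next step when the weighted input sum is non-negative in the plus/minus-one representation of the states, with ties counted as activating, is an assumed convention chosen to match the threshold-zero rule mentioned later in the text; the three-node example anticipates the repressilator discussed in the following sections.

```python
def synchronous_trajectory(weights, state):
    """Iterate the synchronous threshold dynamics until a state repeats.
    weights[i][j] is the influence of node j on node i; the next state of node i is 1
    when sum_j weights[i][j]*(+1 if node j is on else -1) >= 0 (assumed tie rule).
    Returns (transient, limit_cycle)."""
    n = len(state)
    seen, history, state = {}, [], tuple(state)
    while state not in seen:
        seen[state] = len(history)
        history.append(state)
        state = tuple(
            1 if sum(weights[i][j] * (1 if state[j] else -1) for j in range(n)) >= 0 else 0
            for i in range(n))
    return history[:seen[state]], history[seen[state]:]

# three genes inhibiting each other in a ring (the repressilator of the later sections)
W = [[0, 0, -1],
     [-1, 0, 0],
     [0, -1, 0]]
transient, cycle = synchronous_trajectory(W, (1, 0, 0))
print("transient:", transient)
print("limit cycle (length %d):" % len(cycle), cycle)
```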
in the original boolean modelthere are two assumptions that are clearly non - biological and are thus often criticized : the discrete time which implies total synchrony of all components ; and the binary node states which prohibit intermediate levels and gradual effects. there have been various attempts at loosening these assumptions while keeping the simplicity of the boolean models .it is a clear advantage of boolean models that they operate on a finite state space .the synchronous timing , however , does not hold a similar advantage apart from computational simplicity .models that overcome this synchronous updating scheme have been suggested in a variety of forms . in asynchronous schemes are used in the model of the fruit fly .the simplest asynchronous model keeps the discrete notion of time but lets events happen sequentially instead of simultaneously .a continuous - time generalization of boolean models that is inspired by differential equation models has been suggested in . here , the discreteness of the node states is kept but the dynamics take place in a continuous time . in the limit of infinitesimally small disturbances from synchronous behavior is investigated .this concept of allowing variations from the synchronous behavior will also be used in this work .the principle idea is to use a continuous time description and identify the state of the nodes at certain times with the discrete time steps of the synchronous description .further , an internal continuous variable is introduced for every node and the binary value of the node is obtained from this continuous variable using a threshold function .now a differential equation can be formulated for the continuous variable . andthe corresponding activity state ( , ).,title="fig:",width=151 ] + and the corresponding activity state ( , ).,title="fig:",width=340 ] this is pictured in figure [ fcharging ] . herethe internal dynamics and the resulting activity state of a node with just one input are shown for a given input pattern .the activator of the node is switched on ( say , externally ) at time and stays on until it is switched off at time . in the boolean descriptionwe would say node assumes state at time step 1 and at time step 2 switches to state .node would react by switching to state at step 2 and to at step . in the continuous version ,we implement this by a delay time and a `` charging '' behavior of the concentration value of node , driven by the input variable .as soon as crosses the threshold of , the activity state of switches to .let us formulate the time evolution of a system of such model genes by the set of delay differential equations here , denotes the transmission function of node and describes the effect of all inputs of node at the current time .the parameter sets the time scale of the production or decay process .in general , any boolean functions can be used as transmission function . for simplicity , we choose threshold functions , which have proven useful for the modeling of real regulatory networks .let us use the following transmission function where is the transmission delay time that comprises the time taken by processes such as translation or diffusion that cause the concentration buildup of one protein to not immediately affect other proteins .the interaction weight determines the effect that protein has on protein .an activating interaction is described by , inhibition by . 
if the presence of protein does not affect expression of protein , .the discrete state variable is determined by the continuous concentration variable via a heaviside function $ ] .the threshold value is given by ( this choice is equivalent to the commonly used threshold value of 0 if the activity states are given by instead of the boolean values used here ) . for the simple transmission function given above , equation ( [ eq : ode ] )can be easily solved piecewise ( for every time span of constant transmission function ) , leading to the following buildup or decay behavior of the concentration levels this has the effect of a low - pass filter , i.e. a signal has to sustain for a while to affect the discrete activity state . a signal spike , on the other hand ,will be filtered out . up to nowwe have only introduced a continuous , but still deterministic generalization of the synchronous boolean model .if one now allows noise on the timing delay , the model becomes stochastic and asynchronous . the way we model this stochastic timing is via a signal mechanism .as soon as one node flips its discrete state at , say , , it sends a signal to each node it regulates .this signal affects the input of a regulated node at a later time where is a uniformly distributed random number between and .the random number is chosen for each signal and each link independently , which means that a switching node will affect two regulated nodes at slightly different times .due to the timing perturbations , the network states at exactly integer time do not hold a special significance any more . to overcome this problem, we define a new macro step whenever all discrete node states ( not the concentration levels ) are constant for at least a time span of , which amounts to one discrete time step in the synchronous model .only the system states at these times of extended rest are used in the comparison with the synchronous behavior . this way ,small fluctuations of the signal events are tolerated , but extended times of inactivity of the system must exist and the state of the network at these times must correspond to the respective states under synchronous dynamics .we call network dynamics `` reliable '' if , despite the stochastic effects on the signal transmission times , the network follows the synchronous state sequence . although fluctuations in the exact timing are omnipresent , ordered behavior of the sequence of states can still be realized. an exact definition of the algorithm can be found in the appendix . in a similar model , but with infinitesimal timing perturbations , was used to identify those attractors in random boolean networks that are reliable .most attractors in fact are unreliable , but are irrelevant for the system because of very small basins of attraction .further , it was shown that the number of reliable attractors scales sublinearly with system size , which reconciles the behavior of boolean dynamics with the numbers of observed cell types as was originally proposed in .a similar result was also obtained in a sequential updating scheme .in this section we want to discuss the principle differences behind reliable and unreliable dynamics and show that the same dynamical sequence can be achieved both in a reliable or in an unreliable fashion , depending on the underlying network that drives the dynamics . 
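the continuous-time scheme with noisy transmission delays that was just described can be sketched as an event-driven simulation; the version below is a minimal reading of the model (and of the appendix algorithm), not the authors' code. it assumes the threshold rule of the previous sketch, a threshold of one half on the continuous levels, a transmitted signal arriving after the delay time plus a uniform jitter between zero and the noise amplitude, and a rest period `rest` (taken here as 0.9, roughly one synchronous step; the precise value used in the paper is not reproduced above) for recording macro steps. the very first events are scheduled immediately rather than after one transmission delay, which is a simplification.

```python
import heapq
import math
import random

THRESHOLD = 0.5   # threshold on the continuous levels, as stated in the text

def level_at(x0, target, dt, T):
    """Piecewise-exponential relaxation of eq. (solution): x -> target, time constant T."""
    return target + (x0 - target) * math.exp(-dt / T)

def time_to_threshold(x0, target, T):
    """Time until the level crosses THRESHOLD, or None if it is moving away from it."""
    if (target - THRESHOLD) * (THRESHOLD - x0) <= 0.0:
        return None
    return T * math.log((x0 - target) / (THRESHOLD - target))

def simulate(weights, s_init, tau=1.0, T=0.2, noise=0.05, rest=0.9, t_end=40.0, seed=1):
    """Event-driven sketch of the noisy-delay threshold network.
    weights[i][j] is the influence of node j on node i.  Returns the macro steps:
    network states that remain unchanged for at least `rest` time units."""
    rng = random.Random(seed)
    n = len(s_init)
    s = [1 if v else 0 for v in s_init]            # discrete activity states
    view = [list(s) for _ in range(n)]             # delayed view each node has of its inputs
    x = [float(v) for v in s]                      # continuous concentration levels
    t_mark = [0.0] * n                             # time of last level update per node
    drive = [0.0] * n                              # current relaxation target (0.0 or 1.0)
    version = [0] * n                              # cancels superseded crossing events
    events = []                                    # heap of (time, kind, node, payload)
    switches = [(0.0, tuple(s))]                   # (time, state) after every discrete flip

    def aspired(i):                                # threshold rule, ties activating (assumed)
        total = sum(weights[i][j] * (1 if view[i][j] else -1) for j in range(n))
        return 1.0 if total >= 0.0 else 0.0

    def schedule_crossing(i, now):
        version[i] += 1                            # any pending crossing of node i is stale now
        dt = time_to_threshold(x[i], drive[i], T)
        if dt is not None:
            heapq.heappush(events, (now + dt, 0, i, version[i]))

    for i in range(n):                             # initial mismatches start relaxing at t = 0
        drive[i] = aspired(i)
        schedule_crossing(i, 0.0)

    while events:
        t, kind, i, payload = heapq.heappop(events)
        if t > t_end:
            break
        x[i] = level_at(x[i], drive[i], t - t_mark[i], T)
        t_mark[i] = t
        if kind == 0:                              # threshold crossing: discrete state flips
            if payload != version[i]:
                continue                           # event was "caught" by a later drive change
            s[i] = 1 - s[i]
            switches.append((t, tuple(s)))
            for k in range(n):                     # signal regulated nodes with jittered delay
                if weights[k][i] != 0.0:
                    heapq.heappush(events, (t + tau + noise * rng.random(), 1, k, (i, s[i])))
        else:                                      # a delayed signal arrives: update input view
            j, sj = payload
            view[i][j] = sj
            new_drive = aspired(i)
            if new_drive != drive[i]:
                drive[i] = new_drive
                schedule_crossing(i, t)

    macro = []                                     # states that persist for at least `rest`
    for (t0, state), (t1, _) in zip(switches, switches[1:] + [(t_end, None)]):
        if t1 - t0 >= rest:
            macro.append(state)
    return macro
```

comparing the returned macro-step sequence with the synchronous limit cycle of the previous sketch gives exactly the kind of reliability test used below: a run counts as reliable when the two sequences agree over the whole observation window.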
, the production time constant , and the noise level are chosen for optimal readability of the figures .we stress that all our conclusions also hold for the parameter choice from the results part or for any variation of these values within reasonable bounds . ]we start with the well - known example of two mutually activating genes and model the system according to equations ( [ eq : ode ] ) ( [ solution ] ) .apart from the trivial fixed points ( both on or both off ) this system displays an unreliable attractor as shown in the left panel of figure [ reliableunreliablecomp ] . in the upper part, the synchronous boolean attractor is depicted in a simple pictorial form ( black means active , white inactive ) .below that the continuous variable of both nodes is plotted over time in an example run and it can be seen that because of desynchronization the system can exit the synchronous state sequence .changing just one link and thus creating an inhibiting self - interaction at the first gene ( see right panel of figure [ reliableunreliablecomp ] ) , the dynamics is now driven by this one node loop .the synchronous sequence of the attractor is still the same , but now the fixed points of the old network are no longer fixed points but transient states to this attractor . the asynchronous dynamics ,as shown in the lower part of figure [ reliableunreliablecomp ] now display an ordered behavior that would continue indefinitely . the essential feature that causes these stable oscillationsis the time delay involved . without a time delay , the system would not exhibit stable oscillations in either case but would assume intermediate levels for both nodes .thus , a direct comparison of these dynamics with a stability analysis of ordinary differential equations without delay is not adequate .next , we now want to test the reliability of examples of circuits that can be created artificially .the repressilator is a simple artificially generated genetic circuit implemented in _e. coli _ .consisting of of three genes inhibiting each other in a ring topology ( see upper part of figure [ frep ] ) , this system displays stable oscillations . describing this system using differential equationsit was found that the unique steady state is unstable for certain parameter values and that numerical integration of the differential equations displays oscillatory behavior . also in a stochastic modeling scheme , sustained but irregular oscillations can be observed , which show some resemblance of the experimental time series . to discuss this model system in our framework, the synchronous boolean description has to be analyzed first . here, the three - gene repressilator exhibits two attractors which comprise all eight network states the `` all - active - all - inactive '' ( two states ) and the `` signal - is - running - around '' pattern ( six states ) . in the asynchronous scheme , independently of the initial conditions , the system reaches the second attractor .once the attractor is reached , the system stays in it forever ( i.e. is reliable in our definition ) see figure [ frep ] .this is due to the fact that only a single switching happens at a given time .this is depicted in the lower part of figure [ frep ] by the arrows which are successively active , no two events happening at the same time . 
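the synchronous attractor structure of these inhibitory rings, which the discussion above refers to, can also be enumerated directly. the sketch below lists all attractors of the three- and four-gene rings under the same threshold rule as in the earlier sketches (for a single inhibitory input this rule has no ties, so no extra convention is needed); under that rule the three-gene ring yields the two-state all-active/all-inactive cycle and the six-state travelling-signal cycle, and the four-gene ring yields the two fixed points, three four-state cycles and the two-state all-active/all-inactive cycle that the 0000 initial condition mentioned below belongs to.

```python
from itertools import product

def ring_step(state):
    """Synchronous update of an inhibitory ring: gene i is repressed by gene i-1,
    so it switches on exactly when its only input is off."""
    n = len(state)
    return tuple(0 if state[(i - 1) % n] else 1 for i in range(n))

def attractors(n):
    """Group all 2^n states of the n-gene repressilator into its synchronous attractors."""
    cycles = []
    for start in product((0, 1), repeat=n):
        seen, state = set(), start
        while state not in seen:          # follow the trajectory until a state repeats
            seen.add(state)
            state = ring_step(state)
        cycle, s = [], state              # the first repeated state lies on the cycle
        while True:
            cycle.append(s)
            s = ring_step(s)
            if s == state:
                break
        if frozenset(cycle) not in [frozenset(c) for c in cycles]:
            cycles.append(cycle)
    return cycles

for n in (3, 4):
    found = attractors(n)
    print(n, "genes: attractor lengths", sorted(len(c) for c in found))
```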
, title="fig:",width=151 ] , title="fig:",width=340 ] this picture changes in the case of the four - genes repressilator .the attractor structure is now much more involved , consisting of the two fixed points ( 1010 ) and ( 0101 ) and three attractors with four states each . using the stochastic scheme as before , we find that only the fixed points emerge as reliable attractors of the system .if any state of one of the 4-cycles is prepared as initial condition , the system thus always ends up in one of the two fixed points . in figure [ f4rep ] ,an example run is shown which is initialized with the state ( 1100 ) . without any noise ,the system would follow a four - state sequence consisting of all states where the two active nodes are adjacent and the two inactive nodes are as well . however , if a small perturbation is allowed , the system can exit this attractor as shown in the lower panel of [ f4rep ] . here ,we have drawn two arrows showing two causal events happening at the same time .in fact , there are two independent causal chains in the system dynamics .if these two chains fluctuate in phase relative to each other , they can extinguish each other and drive the system into a fixed point . for an extended discussion of the concept of multiple causal chains ,see . apart from the concept explained in the last section , when a system can accumulate a phase lag through small perturbations in a random walk - like fashion and eventually ends up in a different attractor , there are also examples of systems , in which any small perturbation drives the system away from the current attractor .this is the case if the concentration variables do not reach their maximal value before their input nodes switch their state .this can cause the following node to switch even earlier as compared to the unperturbed timing ( in the limit , this behavior is impossible and each perturbation that is not removed from the system is merely `` neutral '' ) .we show this behavior in figure [ frepunstable ] . here again , the four repressilator is shown , but with the initial state configuration 0000 . without noise, this state belongs to the `` all - active - all - inactive '' attractor and four independent events are happening at each time step .the small stochastic asynchrony in the beginning is amplified and leads to a quick loss of the attractor .the system then enters an intermediate attractor where the neutral perturbation behavior is predominant , because the concentration levels have more time to approach their saturation value .the opposite behavior is also possible , that the system itself prevents divergence of the phases .this can happen if the intermediate system state creates a signal spike ( i.e. a short - term status change of a node ) that itself feeds back to the causal chain . even though the causal chains are independent in synchronous mode , they can be connected through such intermediate states .we want to stress that in the criterion employed in it can not be identified whether an attractor is marginally stable or exhibits such a self - catching behavior .this is a limit of the deterministic criterion that is overcome by the explicit modeling used here . 
in this work, we consider all attractors as `` unreliable '' that can desynchronize so strongly that the system does not maintain a `` rest phase '' in which no switching events occur for an extended time .this includes all marginally stable as well as unstable attractors .we do not distinguish between these in our results as both do not seem suitable for the reliability of a biological system .now that we have introduced the main concepts and ideas surrounding our definition of reliability , we want to turn to the question , whether such a simple model of regulatory networks can be evolved towards realizations displaying reliable dynamics . for this question ,we define the notion of a `` functional attractor '' . as we are dealing with random networks , we need a measure of what the system is supposed to do .thus , we choose one attractor of the starting network as the prototype dynamics that define the desirable dynamical sequence . the functional attractor is determined by running the synchronous model with a randomly chosen initial state until an attractor is found . during the evolution processwe demand each network to reproduce this attractor .this prescription introduces a bias towards attractors with large basins of attraction .however , as the basin of attraction is commonly understood as a measure of the significance of an attractor , this appears to be a natural choice .only unreliable attractors are used as functional attractors , as in the case of reliable attractors , the evolution goal would be achieved before the start of the evolution .the evolution procedure is chosen as a simple version of a mutation and selection process .we start by creating a directed random network with the prescribed number of links ( self - links are allowed ) and determine the functional attractor . during the evolution procedure ,we mutate the current network by a single rewiring of a link , that is , removal of one link and simultaneous addition of a random link between two nodes that are not yet connected .this actually amounts to two operations on the graph , but has the advantage that the connectivity is kept fixed .the fitness of a given network is assessed by comparison of the asynchronous dynamics with the synchronous functional attractor .the initial network state is set according to one randomly chosen state of the synchronous attractor .the concentration levels are initialized to the same value ( either or ) .now the system dynamics is explicitly simulated using the stochastic algorithm introduced above ( details given in the appendix ) .we follow the dynamics for maximally macro time steps ( i.e. rest phases ) .the fitness score is then obtained by dividing the number of steps correctly following the synchronous behavior by this maximal step number .the algorithm used in is an abstraction of the principle that small fluctuations can not on its own drive the system out of its attractor , but only if successively adding up .thus , in a systematic way , for any possible retardation of signal events it is checked whether it persists in the system for a full progression around the attractor .if so , the attractor is marginally stable and can in principle lose its synchrony . while this has the advantage of being a deterministic criterion and thus leads to a noiseless fitness function , there are situations in which this criterion is sufficient but not necessary for reliable behavior . 
by explicitly modeling the time course , as done in this work , only the truly unstable attractors are marked as such .we do not take into account the transient behavior of the system and define only the limit cycle as the functional attractor . as our reliability definition would be trivially fulfilled in the case of fixed points , we use networks exhibiting a limit cycle from the start . in the evolution process , the fitness of a mutant is compared to the fitness that the mother network scored . a network is only selected , if it scores higher than any other network found before during the evolution . as the dynamics is inherently stochastic , the fitness criterion is noisy , too .thus , networks which are not more reliable than the mother network might still be selected in the evolution due to variability in the fitness score .if a network follows the attractor up to a maximum step number , it is said to be `` reliable '' . if during the network process a given number of mutation triesis exceeded , the evolution process is aborted . the maximal number of mutation tries in the evolution is at each step and during the full course of evolution .we later discuss the implications of these fixed parameter settings .we have used the following parameters in the results part .the delay time is set to unity , the buildup time is .maximal noise is .this means , that the impact of any individual perturbation is low and can not itself cause a failure in the fitness test .only if several perturbations consecutively drive the system away from synchronization , the requirement of an extended static period can be missed . .in this example , three steps suffice for stabilization .the structure of each network during the evolution is shown , with the arrows denoting the subsequent step in the evolution . in every step ,one link is lost ( shown in grey color ) and a new link is added ( denoted by the plus sign ) .the change of the state space of the network is given in figure [ fevoexampleattractors].,width=302 ] .for every step , the full attractor landscape is shown .every dot denotes a state , the subsequent state is connected via a line .the limit cycle is shown in the center of each attractor basin .the functional attractor is shown as the upper leftmost attractor in all steps.,width=302 ] in figures [ fevoexamplestructure ] and [ fevoexampleattractors ] we show an example of a typical evolution process for a small network of . during three steps , the network is evolved towards a reliable architecture . the initial network ( upper - left in figure [ fevoexamplestructure ] ) displays three synchronous attractors ( top panel in figure [ fevoexampleattractors ] ) of which the first is chosen as the functional attractor .the structural changes are depicted in figure [ fevoexamplestructure ] by a grey arrow for the removed link and a plus - sign for the newly added link . as is typical for these evolution processes , the attractor landscape is affected dramatically during the evolution . in this example ,only the functional attractor survives the evolution procedure .we have performed the described network evolution for a variety of different network sizes as well as connectivities . 
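the mutation-selection loop described above is straightforward to sketch. the version below is a schematic reading of the procedure: the fitness evaluation (the noisy comparison against the functional attractor) is passed in as a callable rather than spelled out, the per-step and total budgets of attempted mutations are exposed as parameters (20000 per step is the value quoted later in the text for small networks), and whether a rewired link keeps its sign is not specified in the text, so it is drawn at random here.

```python
import random

def rewire_once(W, rng):
    """Remove one existing link and add a link between a currently unconnected pair
    (self-links allowed), keeping the total number of links fixed."""
    n = len(W)
    links = [(i, j) for i in range(n) for j in range(n) if W[i][j] != 0]
    empty = [(i, j) for i in range(n) for j in range(n) if W[i][j] == 0]
    (i0, j0), (i1, j1) = rng.choice(links), rng.choice(empty)
    W2 = [row[:] for row in W]
    W2[i0][j0] = 0
    W2[i1][j1] = rng.choice([-1, 1])       # sign of the new link: an assumption
    return W2

def evolve(W0, fitness, max_tries_per_step=20000, max_total_tries=10**6, seed=0):
    """Hill-climbing sketch: a mutant is kept only if its (noisy) fitness beats the best
    score seen so far; the run stops once a fully reliable network (fitness == 1) is
    found or one of the mutation budgets is exhausted."""
    rng = random.Random(seed)
    best_W, best_score, total = W0, fitness(W0), 0
    while best_score < 1.0 and total < max_total_tries:
        for _ in range(max_tries_per_step):
            total += 1
            candidate = rewire_once(best_W, rng)
            score = fitness(candidate)
            if score > best_score:
                best_W, best_score = candidate, score
                break
        else:
            break                          # no accepted mutation within the allowed tries
    return best_W, best_score
```

with the event-driven simulator sketched earlier, `fitness` would run one (or a few averaged) noisy realizations started from a state of the functional attractor and return the fraction of macro steps, up to the chosen maximum, that match the synchronous sequence.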
for system sizes of and and connectivities between and the ratio of networks that were stabilized during the evolution is shown in figure [ fevolution ] .whenever we plot the ratio of stabilized networks , we have calculated the sample errors by a poissonian error estimate , , where is the obtained ratio from sample runs .one can see that for intermediate connectivities between and , the ratio of stabilized networks is above 80% for all system sizes under investigation .this means , that starting from any random network , in four out of five cases a simple network evolution is able to find a network that displays the same dynamical attractor , but performs it reliably .this result matches a very similar dependence on the connectivity that was found for 16 nodes in the infinitesimal scheme in .it is interesting to note that for lower connectivities , the ratio of stabilized networks decreases significantly . for all system sizes considered, there is a sizable decrease of the stabilization ratio for connectivities below two .this is especially apparent for the large system size .this is due to the `` essentiality '' of the structure on the dynamics .changing a link without destroying the dynamical attractor is less likely for lower connectivities . at higher connectivities the larger number of non - essential links in the system aids evolvability towards reliable dynamics via phenotypically neutral mutations .however , considering large connectivities and large system sizes , the ratio of stabilized networks drops again .along with the increase of attractor lengths with system size that impairs reproducibility of dynamics .thus , we find an area of connectivity between and for which the ratio of stabilized networks is similar for all system sizes considered .the plot in figure [ fevo_steps ] shows the average number of rewiring steps necessary until a stable network realization is found for networks of 32 nodes .for all connectivities , this number is remarkably low , as the evolution procedure basically implements a biased random walk through structure space .this is due to the large variation of the fitness score of a single network . despite the rather small evolutionary pressure, the evolution procedure quickly finds a realization exhibiting reliable dynamics .interestingly , the number of evolution steps does not monotonically grow with the connectivity , but instead drops for connectivities larger than two .this again is an indication that networks with higher connectivities are easier to evolve towards reliability .the ratio of links rewired in the evolution to the total number of links is even monotonically decreasing ( not shown ) .we want to further investigate the dependence on network size by repeating the evolution procedure with system sizes up to .this is shown in figure [ fevolutionncomp ] in a log - linear plot of the ratio of stabilized networks vs. system size .we find that the ability of the process to stabilize a given network decreases with system size .the line in the figure represents a fit of the function with , thus a relatively slow decay with system size .one also has to keep in mind that the fixed set of parameters for the number of attempted mutations per evolution step and the total number of attempted mutations during the evolution reduces the success rate for larger networks . 
for small networks of , 20000 attempted mutations per evolution step suffices for a good estimate of the space of all one - link mutations , but as the number of possible mutations scales with the system size n as , it quickly becomes impossible to check all possibilities .thus , the results in figure [ fevolutionncomp ] underestimate the probability to find a stable instance .we have checked the dependence of the results on the selection parameters ( attempted mutations per evolution step and total number of attempted mutations during evolution ) for selected network sizes and connectivities . in figure [ fevolutionncompparams ]we again show the ratio of stabilized networks against system size , but this time for two different parameter values the original parameter set with ( denoted by ` + ' ) and for an increased value of ( denoted by ` ' ) . for small networks, the value of this parameter does not significantly affect the results , but for , differences can be clearly seen .for the ratio rises from at to at .interestingly , for larger system sizes this effect does not seem to be amplified : for the ratio rises from to .for we plot the dependence of the ratio of stabilized networks on the parameter in figure [ fevolution50nparams ] .the largest parameter value used , is about twice the total number of possible rewirings and should thus suffice .one can see that the decrease in the ratio of successfully evolved networks can be significantly reduced when attempting more mutations per evolution step .this is due to the fact that an enormous number of mutations is possible of which only a small fraction retains the requested dynamical sequence .still , one can deduce from these results that it is harder to stabilize large networks than smaller ones : even though there might be a path to a stable network instance , it may not be practically realizable as the chance to find exactly the right mutations may be too small .however , real world systems display a large amount of modularity that leads to smaller cores of strongly interacting components . we have not taken this into account in our random network approach .we see this as a model for small networks of key generators as were described in recent boolean models of biological systems .the resulting dynamics of the full networks are then influenced by this core without strong feedback .this allows for rather simple expression patterns of the full network without constraints on the network size .we have discussed a simple reliability criterion for biological networks and have applied it to network design features that produce reliable dynamics .we showed that small changes in the network topology can dramatically affect the dynamical behavior of a system and can lead to reliable network dynamics . to investigate how reliability can emerge in real - world systems that have been shaped by evolution, we studied an evolutionary algorithm that selects networks with a prescribed dynamical behavior if they function more reliably than a given mother network .we found that a high ratio of random networks can evolve towards instances displaying reliable dynamics . 
in accordance with other recent work , it was shown that the evolution of network structures can lead to reliable dynamics both with a high probability and within short evolutionary time scales .surprisingly , small connectivities are detrimental to this evolvability .this is counter - intuitive as sparsely connected networks show rather simple dynamics with short attractor lengths .however , at the same time they are difficult to evolve because they have a small structural `` buffer '' of links that can be neutrally rewired without changing the dynamics .this is related to the concepts of `` degeneracy '' and `` distributed robustness '' where additional elements are present in a system that are not strictly necessary for the system s function but have a positive effect on robustness . here, these additional elements are links that are not strictly necessary to perform a specific function . thus, rewiring of these links is possible and allows for a higher probability to find a network with reliable dynamics .we thus find in our framework that high connectivity , although leading to increasing complexity of the dynamics , can be beneficial for the evolution of networks . for larger system sizes the evolvability towards reliable dynamics decreases .this is due to the increasing dynamical complexity of such networks ( longer attractor cycles , more non - frozen nodes ) .our strict criterion requests the reliable reproduction of the exact state sequence for every node , which leads to a more difficult selection process for large system sizes . in summary ,our results suggest that reliability is an evolvable trait of regulatory networks . in the present simple model , reliabilitycan be achieved by topological changes alone and without fine - tuning of parameters .this means that through mutations of the reaction networks , biological systems may have the ability to rapidly acquire the property of reliable functioning in the presence of biochemical stochasticity .the authors would like to thank maria i. davidich for discussions and helpful comments on the manuscript and fabian zhrer for help with the state - space visualization graph .this work was supported by deutsche forschungsgemeinschaft grants bo1242/5 - 1 , bo1242/5 - 2 .the asynchronous algorithm is implemented such that no discretized clock is needed .only those times will be investigated , when changes in the system happen .* : time of the last change of buildup / decay behavior * : concentration level at that time * : flag for current behavior - either buildup ( 1 ) or decay ( 0 ) * : current discrete state of node * : discrete state of node that would result from the current states of all nodes : , the system is initialized by setting all values of discrete states equal to the state given by the discrete initial conditions .the concentration levels are set to the same values ( or ) .the times of the last behavior changes are set to .before the simulation is started , for every node it is checked whether the aspired state differs from the current state .if so , an event is added to the queue q ( sorted by time ) for time , where is a uniformly distributed random number between and .a simple analytical expression can be given for the times when the concentration levels are crossed ( case 1 ) .if , the node will not switch its state because the concentration is moving away from the threshold . 
otherwise , one can calculate the time of the next concentration level to cross the threshold by solving equation ( [ solution ] ) for with : if an event of type 1 happens next , the discrete state of the respective node , , is updated and the effect on other nodes is calculated . for definiteness ,let us assume this crossing takes place at time .if this switch causes the aspired state of another node to switch , an event is sorted into the queue at .when in the queue events for the same node are scheduled to happen at later times , they will be removed .they are thought to have been `` caught '' by the newly added event . in the second case, the concentration level of the node at time is calculated according to equation ( [ solution ] ) and saved as with the new time .the behavior flag is switched to reflect that the node has changed from buildup to decay or vice versa . if the time between any two successive node state changes in the network ( not necessarily of the same node ) is larger than , the node states are recorded and set as a new step to be compared to the synchronous attractor . | the problem of reliability of the dynamics in biological regulatory networks is studied in the framework of a generalized boolean network model with continuous timing and noise . using well - known artificial genetic networks such as the repressilator , we discuss concepts of reliability of rhythmic attractors . in a simple evolution process we investigate how overall network structure affects the reliability of the dynamics . in the course of the evolution , networks are selected for reliable dynamics . we find that most networks can be easily evolved towards reliable functioning while preserving the original function . |
a key unsolved problem in general relativity is to compute the gravitational radiation produced by a small object spiralling into a much larger black hole .this problem is of direct observational relevance .inspirals of compact objects into intermediate mass black holes ( may be observed by ligo and other ground based interferometers ; recent observations suggest the existence of black holes in this mass range .in addition , a key source for the space - based gravitational wave detector lisa is the final epoch of inspiral of a stellar - mass compact object into a massive ( ) black hole at the center of a galaxy . have estimated that lisa should see over a thousand such inspiral events during its multi - year mission lifetime , based on monte carlo simulations of the dynamics of stellar cusps by freitag .observations of these signals will have several major scientific payoffs : * from the observed waveform , one can measure the mass and spin of the central black hole with fractional accuracies of order .the spin can provide useful information about the growth history ( mergers versus accretion ) of the black hole .* likewise , one obtains a census of the inspiralling objects masses with precision , teaching us about the stellar mass function and mass segregation in the central parsec of galactic nuclei .* the measured event rate will teach us about dynamics in the central parsec of galaxies . *the gravitational waves will be wonderful tools for probing the nature of black holes in galactic nuclei , allowing us to observationally map , for the first time , the spacetime geometry of a black hole , and providing a high precision test of general relativity in the strong field regime . to be in the lisa waveband , the mass of the black hole must be in the range .the ratio between the mass of the compact object and is therefore in the range .these systems spend the last year or so of their lives in the very relativistic regime close to the black hole horizon , where the post - newtonian approximation has completely broken down , and emit cycles of waveform . realizingthe above science goals will require accurate theoretical models ( templates ) of the gravitational waves .this is because the method of matched filtering will be used both to detect the signals buried in the detector noise , and to measure the parameters characterizing detected signals .the accuracy requirement is roughly that the template should gain or lose no more than cycle of phase compared to the true waveform over the cycles of inspiral .these sources must therefore be modeled with a fractional accuracy .the past several years have seen a significant research effort in the relativity community aimed at providing these accurate templates . to date, there have been several approaches to generating waveforms .the foundation for all the approaches is the fact that , since , the field of the compact object can be treated as a linear perturbation to the large black hole s gravitational field . on short timescales ,the compact object moves on a geodesic of the kerr geometry , characterized by its conserved energy , -component of angular momentum , and carter constant . over longer timescales ,radiation reaction causes the parameters , and to evolve and the orbit to shrink .the various approaches are : 1 . 
_use of post - newtonian methods : _ fairly crude waveforms can be obtained using post - newtonian methods .these have been used to approximately scope out lisa s ability to detect inspiral events and to measure the waveform s parameters . however , since the orbital speeds are a substantial fraction of the speed of light , these waveforms are insufficiently accurate for the eventual detection and data analysis of real signals .use of conservation laws : _ in this approach one uses the teukolsky - sasaki - nakamura ( tsn ) formalism to compute the fluxes of energy and angular momentum to infinity and down the black hole horizon generated by a a compact object on a geodesic orbit . imposing global conservation of energy and angular momentum ,one infers the rates of change of the orbital energy and angular momentum . for certain special classes of orbits ( circular and equatorial orbits )this provides enough information that one can also infer the rate of change of the carter constant , and thus the inspiralling trajectory .direct computation of the self - force : _ in this more fundamental approach one computes the self - force or radiation - reaction force arising from the interaction of the compact object with its own gravitational field .a formal expression for this force in a general vacuum spacetime in terms of the retarded green s function was computed several years ago .translating this expression into a practical computational scheme for kerr black holes is very difficult and is still in progress .roughly 100 papers devoted to this problem have appeared in the last few years ; see , for example poisson for an overview and references .time - domain numerical simulations : _ another technique is to numerically integrate the teukolsky equation as a 2 + 1 pde in the time domain , and to model the compact object as a finite - sized source .this approach faces considerable challenges : ( i ) there is a separation of timescales the orbital period is much shorter than the radiation reaction timescale .( ii ) there is a separation of lengthscales the compact object is much smaller than the black hole .( iii ) the self - field of the small object must be computed with extremely high accuracy , as the piece of the self - field responsible for the self - force is a tiny fraction ( ) of the divergent self - field .this approach may eventually be competitive with ( ii ) and ( iii ) but is currently somewhat far from being competitive [ flux accuracies are as compared to for ( ii ) ] .a key feature of these systems is that they evolve adiabatically : the radiation reaction timescale is much longer than the orbital timescale , by a factor of the inverse of the mass ratio .this has implications for the nature of the signal .the self - acceleration of the compact object can be expanded in powers of the mass ratio as \;. \label{eq : selfaccel}\ ] ] here and are the dissipative and conservative pieces of the leading - order self - acceleration computed in refs .similarly , and are the corresponding pieces of the first correction to the self - acceleration , which has not yet been computed ( although see ref. for work in this direction ). 
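the expansion referred to above (eq. (selfaccel)), whose explicit form is not reproduced here, is presumably of the schematic form

a^\mu \;=\; \varepsilon\,\bigl[\,a^\mu_{1,\mathrm{diss}} + a^\mu_{1,\mathrm{cons}}\,\bigr]
\;+\; \varepsilon^{2}\,\bigl[\,a^\mu_{2,\mathrm{diss}} + a^\mu_{2,\mathrm{cons}}\,\bigr]
\;+\; O(\varepsilon^{3}),
\qquad \varepsilon \equiv \mu/M ,

with the numerical subscripts counting the order in the mass ratio and "diss"/"cons" separating the dissipative and conservative pieces, as described in the surrounding text; the notation is introduced here only to fix ideas and need not match the paper's.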
the effect of the dissipative pieces of the self force will accumulate secularly , while the effect of the conservative pieces will not .hence the effect of the dissipative pieces on the phase of the orbit will be larger than that of the conservative pieces by a factor of the number of cycles of inspiral , .consider now , for example , the azimuthal phase of the orbit .this can be expanded in powers of the mass ratio using a two - timescale expansion as \ ; , \label{eq : emri_phase}\ ] ] where the leading - order phase is determined by , and the higher order correction to the phase is determined by and . here is a `` slow '' time variable which satisfies .we shall call leading - order waveforms containing only terms corresponding to _adiabatic waveforms_. from eq.([eq : emri_phase ] ) these waveforms are accurate to cycle over the inspiral ( more precisely , the phase error is independent of in the limit ) . to compute adiabatic waveformswe need only keep the dissipative piece of the self - force , and we can discard any conservative pieces . the conservation - law method discussed above yields waveforms that are accurate to this leading order , but is limited in that it can not be used for generic orbits .adiabatic waveforms will likely be sufficiently accurate for detecting inspiral events with lisa .the correction to the phase of the waveform can be estimated using the post - newtonian approximation , and amounts to cycle over the inspiral for typical parameter values . while the post - newtonian approximation is not strictly valid in the highly relativistic regime near the horizon of interest here , it suffices to give some indication of the accuracy of adiabatic waveforms .this computation is described in [ sec : accuracy ] .recently yasushi mino derived a key result that paves the way for computations of adiabatic waveforms for generic inspirals in kerr .mino showed that the time average of the self - force formula of for bound orbits in kerr yields the same result as the gradient of one half the difference between the retarded and advanced metric perturbations .this `` half retarded minus half advanced '' prescription is the standard result for electromagnetic radiation reaction in flat spacetime due to dirac .this prescription was also posited , without proof , for scalar , electromagnetic and gravitational radiation reaction in the kerr spacetime by galtsov .however , prior to mino s analysis , the prescription was not generally thought to be applicable or relevant in kerr. as one might expect , knowledge of the infinite - time - averaged self - force is sufficient to compute the leading - order , adiabatic waveforms , although there are subtleties related to the choice of gauge .the sufficiency can also be established using a two - timescale expansion . in this paperwe apply mino s result to compute an explicit expression for the time - averaged time derivative of the carter constant .we specialize to the case of a particle endowed with a scalar charge , coupled to a scalar field .this computation is useful as a warm up exercise for the more complicated case of a particle emitting gravitational waves .we start in sec .[ sec : scalar ] by describing the model of a point particle coupled to a scalar field , and review how the self - force causes both an acceleration of the particle s motion and also an evolution of the renormalized rest mass of the particle . in sec .[ sec : geosesics ] we review the properties of generic bound geodesic orbits in the kerr spacetime. 
a crucial result we use later is that the and motions are periodic when expressed as functions of a particular time parameter we call mino time , and that the and motions consist of linearly growing terms plus terms that are periodic with the period of the -motion , plus terms that are periodic with the period of the -motion . section [ sec : minoproof ] reviews mino s derivation of the half - retarded - minus - half - advanced prescription for radiation reaction in the adiabatic limit . in sec .[ sec : modes ] we discuss a convenient basis of modes for solutions of the scalar wave equation in kerr , namely the `` in '' , `` out '' , `` up '' and `` down '' modes used by chrzanowski and galtsov .we also review several key properties of these modes including symmetry relations and relations between the various reflection and transmission coefficients that appear in the definitions of the modes .section [ sec : retarded ] reviews the standard derivation of the mode expansion of the retarded green s function , and sec .[ sec : radiative ] reviews the derivation of the mode expansion of the radiative green s function given by galtsov , correcting several typos in galtsov which are detailed in [ sec : typos ] .next , in sec .[ sec : harmonic ] we turn to the source term in the wave equation for the scalar field .we review the derivation of drasco and hughes of the harmonic decomposition of this source term in terms of a discrete sum over frequencies in which harmonics of three different fundamental frequencies occur .we derive expressions for the mode coefficients in the expansion of the retarded field near future null infinity and near the future event horizon .these coefficients are expressed as integrals over a torus in phase space , which is filled ergodically by the geodesic motion .section [ sec : edot ] combines the results of the preceding sections to derive expressions for the time - averaged rates of change of two of the conserved quantities of geodesic motion , namely the energy and the angular momentum .the expressions are derived from the radiative self force , following galtsov .galtsov also shows that identical expressions are obtained by using the fluxes of energy and angular momentum to infinity and down the black hole horizon .[ this result has recently been independently derived for circular , equatorial orbits in ref . .] we also show that the time - averaged rate of change of the renormalized rest mass of the particle vanishes .in section [ sec : kdot ] we derive an expression for the time - averaged rate of change of the carter constant , using the radiative self - force and the mode expansion of the radiative green s function .this expression [ eq .( [ eq : dkdt ] ) below ] is the main new result in this paper .it involves two new amplitudes that are computed in terms of integrals over the torus in phase space , just as for the amplitudes appearing in the energy and angular momentum fluxes . finally , in sec .[ sec : circular ] we show that our result correctly predicts the known result that circular orbits remain circular while evolving under the influence of radiation reaction .this prediction serves as a check of our result . 
as apparent from the above summary, about 25 percent of this paper is new material , and the remaining 75 percent is review .the intent is to give a complete and self - contained treatment of scalar radiation reaction in the kerr spacetime , in a single unified notation , starting with the kerr metric , and ending with formulae for the evolution of all three constants of the motion that are sufficiently explicit to be used immediately in a numerical code .we consider a point particle of scalar charge coupled to a scalar field .we denote by the bare rest mass of the particle , to be distinguished from a renormalized rest mass which will occur below .the particle moves in a spacetime with metric which is fixed ; we neglect the gravitational waves generated by the particle .the worldline of the particle is , where is proper time .the action is taken to be s = - d^4 x ( ) ^2 - d\{_0 - q } .[ eq : action ] varying this action with respect to the worldline yields the equation of motion where is the 4-velocity and is the 4-acceleration .following poisson , we define the renormalized mass by and add the electromagnetic coupling term to the action ( ) , the equation of motion ( [ eq : acc ] ) becomes , where is the faraday tensor .this shows that the renormalized mass ( [ eq : mudef ] ) is the mass that would be measured by coupling to other fields . ]( ) = _ 0 - q .[ eq : mudef ] the equation of motion ( [ eq : acc00 ] ) can then be written as it is also useful to rewrite the definition ( [ eq : mudef ] ) of the renormalized mass in terms of a differential equation for its evolution with time : ( ) = - q u^_. [ eq : dmudt0 ] the phenomenon of evolution of renormalized rest - mass is discussed further in refs . . equations ( [ eq : acc ] ) and ( [ eq : dmudt0 ] ) can also be combined to give the expression f^= u^ _ ( u^ ) = q ^. [ eq : tsf ] for the total self force .the components of parallel to and perpendicular to the four velocity yield the quantity and times the self - acceleration , respectively .varying the action ( [ eq : action ] ) with respect to the field gives the equation of motion ( x ) = t(x ) , [ eq : wave ] where the scalar source is ( x ) = - q _^(4)[x , z ( ) ] .[ eq : sourcedef ] here ^(4)(x , x ) = ^(4)(x - x)/ [ eq : deltafndef ] is the generalized dirac delta function , where .we will assume that there is no incoming scalar radiation , so that the physical solution of the wave equation ( [ eq : wave ] ) is the retarded solution _ret(x ) = d^4 x g_ret(x , x ) t(x ) = - q dg_ret[x , z ( ) ] . [ eq : phiret ] here is the retarded green s function for the scalar wave equation ( [ eq : wave ] ) . the retarded field ( [ eq : phiret ] ) must of course be regularized before being inserted into the expression ( [ eq : acc ] ) for the self - acceleration and into the expression ( [ eq : dmudt0 ] ) for , or else divergent results will be obtained .the appropriate regularization prescription has been derived by quinn .the regularized self - acceleration at the point is a^ ( ) = ( g^ + u^u^ ) , [ eq : selfa1 ] and the regularized expression for is ( ) = - r + q^2 _ 0 _ -^- d u^_g_ret[z(),z( ) ] .[ eq : dmudt1 ] as discussed in the introduction , we specialize in this paper to bound motion about a kerr black hole of mass with . 
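Most of the symbols in the model equations of this section were lost in extraction. As a reading aid, a hedged reconstruction of the key relations is collected below, following the visible structure of the garbled equations and the standard conventions of the cited work of Quinn and Poisson; the normalization of the field term in the action (and hence any factor of $4\pi$ in the wave equation) cannot be recovered from the text, so these equations are a sketch of the likely conventions rather than a statement of the authors' own:

\begin{eqnarray}
\mu(\tau)\, a^{\alpha} &=& q \left( g^{\alpha\beta} + u^{\alpha} u^{\beta} \right) \nabla_{\beta} \Phi ,\\
\mu(\tau) &=& \mu_0 - q \, \Phi[z(\tau)] ,
\qquad
\frac{d\mu}{d\tau} = - q\, u^{\alpha} \nabla_{\alpha} \Phi ,\\
f^{\alpha} &=& u^{\beta} \nabla_{\beta} \left( \mu\, u^{\alpha} \right) = q \, \nabla^{\alpha} \Phi ,\\
\Box \Phi(x) &=& T(x) , \qquad
T(x) = - q \int d\tau \; \delta^{(4)}\!\left[ x , z(\tau) \right] ,\\
\Phi_{\rm ret}(x) &=& \int d^4 x' \, G_{\rm ret}(x , x') \, T(x')
 \;=\; - q \int d\tau \; G_{\rm ret}\!\left[ x , z(\tau) \right] .
\end{eqnarray}

In the adiabatic limit the field appearing in the first three relations is ultimately replaced by the radiative combination $\Phi_{\rm rad} = \frac{1}{2}(\Phi_{\rm ret} - \Phi_{\rm adv})$, as stated earlier. The regularized forms (eq:selfa1) and (eq:dmudt1) and the specialization to Kerr just mentioned are taken up next.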
in this casethe terms in eqs.([eq : selfa1 ] ) and ( [ eq : dmudt1 ] ) involving the ricci tensor vanish .also , as long as the orbit is not very close to the innermost stable orbit , the evolution of the orbit is adiabatic : the orbital evolution timescale is much longer than the orbital timescale .this adiabaticity allows a significant simplification of the formulae ( [ eq : selfa1 ] ) and ( [ eq : dmudt1 ] ) for the self - acceleration and for the rate of change of mass , as shown by mino .namely , these formulae reduce to the simple expressions ( [ eq : acc ] ) and ( [ eq : dmudt0 ] ) , but with replaced by the radiative field here is the advanced solution of eq.([eq : wave ] ) . in sec . [sec : minoproof ] below we review mino s argument , specialized to the scalar case .to summarize , the starting point for our analysis is the self - acceleration expression a^= ( g^ + u^u^ ) _ _ rad .[ eq : selfacc0 ] together with the expression for the evolution of rest mass ( ) = - q u^ _ _ rad .[ eq : dmudt2 ] equations ( [ eq : selfacc0 ] ) and ( [ eq : dmudt2 ] ) are equivalent to the equation f^= u^ _ ( u^ ) = q ^_rad . [ eq : tsf1 ] for the total self - force .this section summarizes the notation and results of drasco & hughes , and also some of the results of mino , for generic bound geodesic orbits in kerr . in boyer - lindquist coordinates , the kerr metric is here and , are the black hole mass and spin parameter . throughout the rest of this paper we use units in which , for simplicity . the square root of the determinant of the metric is = , [ eq :detg ] and the wave operator is given by \phi_{,tt } - \frac{4 a r}{\delta } \phi_{,t\phi } + \left ( \frac{1}{\sin^2 \theta } - \frac{a^2}{\delta } \right ) \phi_{,\phi\phi } \nonumber \\ \lo + ( \delta \phi_{,r})_{,r } + \frac{1}{\sin \theta } ( \sin \theta \phi_{,\theta})_{,\theta}. \label{eq : waveoperator}\end{aligned}\ ] ] the differential operator that appears on the right hand side of this equation is the teukolsky differential operator for spin .we will use later the kinnersley null tetrad , , , , which is given by = _t + _ r + _ , [ eq : vecldef ] = _ t - _ r + _ , [ eq : vecndef ] and = ( i a _ t + _ + _ ) .the corresponding one - forms are = - dt + a ^2 d+ dr , = - dt + d- dr , and = ( - i a dt + d+ i ^2 d ) .the basis vectors obey the orthonormality relations and while all other inner products vanish .the metric can be written in terms of the basis one - forms as g _( n _ ) + 2 m _ ( m_)^*. [ eq : gabformula ] we define the conserved energy per unit rest mass e = - u , [ eq : edef ] the conserved -component of angular momentum divided by l_z = u , [ eq : lzdef ] and carter constant divided by q = u_^2 - a^2 ^2 e^2 + ^2 l_z^2 + a^2 ^2 .[ eq : qdef ] the geodesic equations can then be written in the form ^ 2- \delta\left[r^2 + ( l_z - a e)^2 + q\right ] \equiv v_r(r)\ ; , \label{eq : rdot}\\ % \fl \left(\frac{d\theta}{d\lambda}\right)^2 = q - \cot^2\theta l_z^2 -a^2\cos^2\theta(1 - e^2 ) \equiv v_\theta(\theta)\ ; , \label{eq : thetadot}\\ % \fl \frac{d\phi}{d\lambda } = \csc^2\theta l_z + ae\left(\frac{r^2+a^2}{\delta } - 1\right ) - \frac{a^2l_z}{\delta } \equiv v_\phi(r,\theta)\ ; , \label{eq : phidot}\\ % \fl \frac{dt}{d\lambda } = e\left[\frac{(r^2+a^2)^2}{\delta } - a^2\sin^2\theta\right ] + al_z\left(1 - \frac{r^2+a^2}{\delta}\right ) \equivv_t(r,\theta)\;. \label{eq : tdot}\end{aligned}\ ] ] here is the mino time parameter , related to proper time by d= d. [ eq : minotime ] also these equations define the potentials , , and . 
, , and in ref . .we no not use this notation here since it would clash with the functions , and defined in eq .( ) below . ] sometimes it will be convenient to use instead of the carter constant the quantity k = q + ( l_z - a e)^2 . for convenience we will also call this quantitythe `` carter constant '' . in the schwarzschild limit and given by and .the quantity can be written as k = k^ u_u _ , [ eq : kdef ] where is the killing tensor k^ = 2 m^ ( |m^ ) - a^2 ^2 g^. using the identity ( [ eq : gabformula ] ) this can also be written as k^ = 2 l^ ( n^ ) + r^2 g^. [ eq : kalphabeta ] using the formulae ( [ eq : vecldef ] ) and ( [ eq : vecndef ] ) for the null vectors and together with the definitions ( [ eq : edef ] ) and ( [ eq : lzdef ] ) of and , we obtain from eq .( [ eq : kalphabeta ] ) the following formula for : k = ( ^2 e - a l_z)^2 - u_r^2 - r^2 .[ eq : kformula1 ] solving this for gives u_r = ; [ eq : urformula ] this formula will be useful later . following mino , we parameterize any geodesic by seven parameters : e , l_z , q,_r0,_0,t_0,_0 . here and are the values of and at .the quantity is the value of nearest to for which , where is the minimum value of attained on the geodesic .similarly is the value of nearest to for which , where is the minimum value of attained on the geodesic .this parameterization is degenerate because of the freedom to reparametrize the geodesic via .we discuss this degeneracy further in sec.[sec : degeneracy ] . frequently in this paper we will focus on the _ fiducial geodesic _ associated with the constants , and , namely the geodesic with _r0 = _ 0 = t_0 = _ 0 = 0 .it follows from the geodesic equations ( [ eq : rdot ] ) and ( [ eq : thetadot ] ) that the functions and are periodic .we denote the periods by and , respectively , so r(+ _ r ) = r ( ) , ( + _ ) = ( ) .[ eq : periodic ] using the initial condition we can write the solution to eq .( [ eq : rdot ] ) explicitly as r ( ) = r(- _ r0 ) , [ eq : rmotion ] where the function is defined by _ r_min^r ( ) = .[ eq : hatrdef ] similarly we can write the solution to eq .( [ eq : thetadot ] ) explicitly as ( ) = ( - _ 0 ) , [ eq : thetamotion ] where the function is defined by __ min^ ( ) = .[ eq : hatthetadef ] the functions and are just the and motions for the fiducial geodesic. next , the function that appears on the right hand side of eq .( [ eq : tdot ] ) is a sum of a function of and a function of : v_t(r , ) = v_tr(r ) + v_t ( ) , where and . therefore using we obtain t ( ) = t_0 + _ 0^d \ { v_tr[r( ) ] + v_t[( ) ] } .[ eq : tmotion0 ] next we define the averaged value of to be v_tr = _ 0^_r dv_tr[r ( ) ] = _ 0^_r dv_tr[r ( ) ] . herethe second equality follows from the representation ( [ eq : rmotion ] ) of the motion together with the periodicity condition ( [ eq : periodic ] ) .similarly we define v_t = _ 0^ _ dv_t [ ( ) ] = _ 0^ _ dv_t [ ( ) ] . inserting these definitions into eq .( [ eq : tmotion0 ] ) allows us to write as a sum of a linear term and terms that are periodic : here we have defined the constant = v_tr + v_t , [ eq : gammadef ] and the functions t_r ( ) = _ 0^d \ { v_tr[r( ) ] - v_tr } , [ eq : deltatrdef ] t _ ( ) = _ 0^d \ { v_t[( ) ] - v_t } .[ eq : deltatthetadef ] the key property of these functions is that they are periodic : t_r(+ _ r ) = t_r ( ) , t_(+ _ ) = t _ ( ) ; this follows from the definitions ( [ eq : deltatrdef ] ) and ( [ eq : deltatthetadef ] ) together with the periodicity condition ( [ eq : periodic ] ) . 
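To make the polar half of this machinery concrete, here is a minimal numerical sketch (not the authors' code). It assumes the polar potential visible in eq. (eq:thetadot), V_theta(theta) = Q - L_z^2 cot^2(theta) - a^2 (1 - E^2) cos^2(theta); writing z = cos^2(theta) gives (1 - z) V_theta = beta (z_+ - z)(z_- - z) with beta = a^2 (1 - E^2), and the substitution cos^2(theta) = z_- cos^2(chi) removes the turning-point square roots (this is the same chi variable introduced in the averaging discussion below). Identifying the theta-dependent part of V_t as -a^2 E sin^2(theta) is an assumption about how the split V_t = V_tr + V_ttheta is made, since the explicit split is lost in extraction, and the orbital constants in the example call are placeholders rather than values from the paper.

import numpy as np
from scipy.integrate import quad

def polar_sector(a, E, Lz, Q):
    """Mino-time period Lambda_theta and the theta-average of V_t,theta.

    Assumes a != 0 and E < 1 so that beta = a^2 (1 - E^2) > 0, and
    V_theta(theta) = Q - Lz^2 cot^2(theta) - beta cos^2(theta).
    """
    beta = a**2 * (1.0 - E**2)
    # (1 - z) V_theta = beta z^2 - (Q + Lz^2 + beta) z + Q with z = cos^2(theta);
    # the polar motion has z in [0, z_minus].
    b, c = Q + Lz**2 + beta, Q
    disc = np.sqrt(b**2 - 4.0 * beta * c)
    z_plus = (b + disc) / (2.0 * beta)
    z_minus = (b - disc) / (2.0 * beta)

    # With cos^2(theta) = z_minus cos^2(chi):
    # d(lambda)/d(chi) = [beta (z_plus - z_minus cos^2 chi)]^(-1/2), smooth in chi.
    dlam_dchi = lambda chi: 1.0 / np.sqrt(beta * (z_plus - z_minus * np.cos(chi)**2))

    Lambda_theta = quad(dlam_dchi, 0.0, 2.0 * np.pi)[0]

    # Assumed theta-part of V_t:  -a^2 E sin^2(theta) = -a^2 E (1 - z_minus cos^2 chi).
    Vt_theta = lambda chi: -a**2 * E * (1.0 - z_minus * np.cos(chi)**2)
    avg_Vt_theta = quad(lambda chi: Vt_theta(chi) * dlam_dchi(chi),
                        0.0, 2.0 * np.pi)[0] / Lambda_theta

    return Lambda_theta, avg_Vt_theta

# Placeholder orbital constants in units G = c = M = 1 (illustrative only).
print(polar_sector(a=0.9, E=0.95, Lz=3.0, Q=4.0))

The radial sector is handled analogously with the psi variable defined below; with the corresponding radial averages in hand, the constant Gamma of eq. (eq:gammadef) is simply the sum of the two averaged pieces of V_t.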
we can exhibit the dependence of these functions on the parameters and by substituting the formulae ( [ eq : rmotion ] ) and ( [ eq : thetamotion ] ) for and into eqs .( [ eq : deltatrdef ] ) and ( [ eq : deltatthetadef ] ) .the result is t_r ( ) = t_r(- _ r0 ) - t_r(-_r0 ) , [ eq : deltatrformula ] t _ ( ) = t_(- _ 0 ) - t_(-_0 ) , [ eq : deltatthetaformula ] where the functions and are defined by _ r ( ) = _ 0^d \ { v_tr[r( ) ] - v_tr } , [ eq : hattrdef ] _( ) = _ 0^d \ { v_t[( ) ] - v_t } .[ eq : hattthetadef ] the motion in can be analyzed in exactly the same way as the motion in .first , the function that appears on the right hand side of eq .( [ eq : phidot ] ) is a sum of a function of and a function of : v_(r , ) = v_r(r ) + v _ ( ) , where and . therefore using we obtain ( )0^d \ { v_r[r( ) ] + v_[( ) ] } .[ eq : phimotion0 ] next we define the averaged value of to be v_r = _ 0^_r dv_r[r ( ) ] = _ 0^_r dv_r[r ( ) ] . herethe second equality follows from the representation ( [ eq : rmotion ] ) of the motion together with the periodicity condition ( [ eq : periodic ] ) .similarly we define v _ = _ 0^ _ dv _ [ ( ) ] = _ 0^ _ dv _ [ ( ) ] . inserting these into eq .( [ eq : phimotion0 ] ) and using allows us to write as a sum of a linear term and terms that are periodic : here we have defined the constant _ = v_r + v _ , [ eq : upsilonphidef ] and the functions _ r ( ) = _ 0^d \ { v_r[r( ) ] - v_r } , [ eq : deltaphirdef ] _ ( ) = _ 0^d \ { v_[( ) ] - v _ } .[ eq : deltaphithetadef ] the key property of these functions is that they are periodic : _ r(+ _( ) , _ ( + _ ) = _ ( ) ; this follows from the definitions ( [ eq : deltaphirdef ] ) and ( [ eq : deltaphithetadef ] ) together with the periodicity condition ( [ eq : periodic ] ) . we can exhibit the dependence of these functions on the parameters and by substituting the formulae ( [ eq : rmotion ] ) and ( [ eq : thetamotion ] ) for and into eqs .( [ eq : deltaphirdef ] ) and ( [ eq : deltaphithetadef ] ) . the result is _ r ( ) = _ r(- _ r0 ) - _ r(-_r0 ) , [ eq : deltaphirformula ] _ ( ) = _ ( - _ 0 ) - _ ( -_0 ) , [ eq : deltaphithetaformula ] where the functions and are defined by _ r ( ) = _ 0^d \ { v_r[r( ) ] - v_r } , [ eq : hatphirdef ] _ ( ) = _ 0^d \ { v_[( ) ] - v _ } .[ eq : hatphithetadef ] not all of the parameters , , , , , and that characterize the geodesic are independent .this is because of the freedom to change the dependent variable via . under this change of variablethe parameters and transform as _r0 _ r0 = _ r0 + , [ eq : t1 ] _ 0 _ 0 = _ 0 + .[ eq : t2 ] we can compute how the parameters and transform as follows . combining eqs .( [ eq : tmotion1 ] ) , ( [ eq : deltatrformula ] ) and ( [ eq : deltatthetaformula ] ) gives the following formula for the motion : t = t_0 + + t_r(- _r0 ) - t_r(-_r0 ) + t_(- _ 0 ) - t_(-_0 ) .rewriting the right hand side in terms of , and yields comparing the first and second lines here allows us to read off the value of : _ 0 = t_0 - + t_r(-_r0 - ) - t_r(-_r0 ) + t_(-_0 - ) - t_(-_0 ) .[ eq : t3 ] similarly we obtain _ 0 _ 0 = _ 0 - _ + _ r(-_r0 - ) - _ r(-_r0 ) + _ ( -_0 - ) - _ ( -_0 ) .[ eq : t4 ] in sec .[ sec : dep ] below we will explicitly show that all of the amplitudes and waveforms we compute are invariant under these transformations .we will often encounter functions of mino time that are periodic with period . 
for such functionsthe average over mino time is given by f__= _0^ _ f _ ( ) d .[ eq : thetaaverage ] in order to compute such averages it will be convenient to change the variable of integration from to a new variable .this variable is a generalization of the newtonian true anomaly and is defined as follows .the potential defined by eq .( [ eq : thetadot ] ) can be written in terms of the variable z ^2 as v_(z ) = .[ eq : thetaformula ] here and and are the two zeros of .they are ordered such that .the variable is defined by ^2 ( ) = z_- ^2 , together with the requirement that increases monotonically as increases . from the definition ( [ eq : hatthetadef ] ) of together with the formula ( [ eq : thetaformula ] ) for we get = . therefore the average ( [ eq : thetaaverage ] ) can be written as f__= _ 0 ^ 2 d. [ eq : thetaaverage1 ] a similar analysis applies to averages of functions that are periodic with period .the average over of such a function is f_r _ = _0^_r f_r ( ) d .[ eq : raverage ] now the potential in eq .( [ eq : rdot ] ) can be written as v_r(r ) = ( 1-e^2)(r_1-r)(r - r_2)(r - r_3)(r - r_4 ) , where , , and are the four roots of the quartic , ordered such that . for stable orbitsthe motion takes place in .the orbital eccentricity and semi - latus rectum are defined by r_1 = and r_2 = .we also define the parameters and by r_3 = , r_4 = . then following ref . we define the parameter by ( ) = .[ eq : psidef ] it follows from these definitions that = p ( ) , where ( ) = ^1/2 ^1/2 .the average ( [ eq : raverage ] ) over is therefore f_r _ = _0 ^ 2 d = .[ eq : raverage1 ] note that the viewpoint on the new variables , adopted here is slightly different to that in : here they are defined in terms of the fiducial motions and instead of the actual motions and .the new viewpoint facilitates the computation of amplitudes and fluxes for arbitrary geodesics ; if one is working with the fiducial geodesic the distinction is unimportant .finally suppose we have a function of two parameters and which is biperiodic : the average value of this function is f _ = _ 0^_r d_r _ 0^ _ d_f(_r , _ ) . by combining the results ( [ eq : thetaaverage1 ] ) and ( [ eq : raverage1 ] ) we can write this average as a double integral over the new variables and : f _ = _ 0 ^ 2 d_0 ^ 2 d. [ eq : averageidentity ] this formula will be used in later sections .in this section we review the proof by mino that for computing radiation reaction in the adiabatic limit in kerr , one can use the `` half retarded minus half advanced '' prescription .we specialize to the scalar case .we start by defining some notation for self - forces .suppose that we have a particle with scalar charge at a point in a spacetime with metric .suppose that the 4-velocity of the particle at is , and that we are given a solution ( not necessarily the retarded solution ) of the wave equation ( [ eq : wave ] ) for the scalar field , for which the source is a delta function on the geodesic determined by and .the self - force on the particle is then some functional of , , and , which we write as f^. here we suppress the trivial dependence on .note that this functional does not depend on a choice of time orientation for the manifold , and also it is invariant under .next , we define the retarded self - force as f^_ret= f^ , [ eq : selfret ] where is the retarded solution to the wave equation ( [ eq : wave ] ) using the time orientation that is determined by demanding that be future directed . 
similarly we define the advanced self force by f^_adv= f^ , where is the advanced solution .it follows from these definitions that f^_ret= f^_adv .[ eq : flip ] the derivation of the half - retarded - minus - half - advanced prescription in the adiabatic limit rests on two properties of the self - force .the first property is the fact that the self - force can be computed by subtracting from the divergent field a locally constructed singular field that depends only on the spacetime geometry in the vicinity of the point and on the four velocity at .this property is implicit in the work of quinn and quinn and wald , and was proved explicitly by detweiler and whiting .we can write this property as f^= q ^ ( - _ sing [ p , u^,g _ ] ) .[ eq : property0 ] the second property is a covariance property .suppose that is a diffeomorphism from the manifold to another manifold that takes to .we denote by the natural mapping of tensors over the tangent space at to tensors over the tangent space at .we also denote by the associated natural mapping of tensor fields on to tensor fields on . then the covariance property is f^= ^ * f^. this expresses the fact that the self - force does not depend on quantities such as a choice of coordinates .it follows from the definition ( [ eq : selfret ] ) that the retarded self - force satisfies a similar covariance relation : f^_ret[(p ) , ^ * u^ , ^ * g _ ] = ^ * f^_ret[p , u^ , g _ ] .[ eq : covariance ] next we review a property of bound geodesics in kerr upon which the proof depends .this is the fact that for generic bound geodesics there exists isometries of the kerr spacetime of the form t 2 t_1 - t , 2 _ 1 - , [ eq : isometry ] where and are constants , which come arbitrarily close to mapping the geodesic onto itself . to see this , note that if a geodesic is mapped onto itself by the mapping ( [ eq : isometry ] ) , then the point ] , which must equal ] is the geodesic . now using the fourier series expansion ( [ eq : doublefourierseries ] ) of gives __ mknlm^out[z^ ( ) ] = - _ k,n j__mknlmkn^*e^-i ( _ mkn - _ mkn ) . from this expressionit is clear that averaging over kills all the terms in the sum except , , and we obtain _= - j__mknlmkn^ * = - ( z^out_lmkn)^*. [ eq : zoutuseful ] for the last equation we have used eq .( [ eq : zlmknans ] ) . 
finally substituting this result , together with a similar equation for the `` down '' modes , in to eq .( [ eq : dcaledt1 ] ) we obtain the final result ( [ eq : dcaledt2 ] ) equation ( [ eq : dmudt2 ] ) for the time derivative of the renormalized rest mass can be immediately integrated with respect to between two times and .this yields ( _ 2 ) - ( _ 1 ) = q _rad[z(_2 ) ] - q _rad[z(_1 ) ] , where is the geodesic .now since the zeroth - order orbit is bound , the right hand side of this equation does not contain any secularly growing terms .instead it consists only of oscillatory terms .therefore , over long timescales and in the adiabatic limit ( the regime in which we are working in this paper ) , the rest mass is conserved : = 0 .we now turn to the corresponding computation for the carter constant defined by eq .( [ eq : kdef ] ) .the result is .\label{eq : dkdt}\end{aligned}\ ] ] this expression has the same structure as the expression ( [ eq : dcaledt2 ] ) for the time derivatives of the energy and angular momentum , except that the squared amplitudes and have been replaced with products of the amplitudes and with two new amplitudes and .these new amplitudes are defined by the equation ^ * \nn \\ \mbox { } \times e^{- i m \delta \phi_r(\lambda_r ) } e^{-i m \delta \phi_\theta(\lambda_\theta ) } e^{i \omega_{mkn } \delta t_r(\lambda_r ) } e^{i \omega_{mkn } \delta t_\theta(\lambda_\theta ) } \nn \\ \mbox { } \times \left\ { g_{mkn}[\lambda_r,\lambda_\theta ] r^{\rm out , down}_{\omega_{mkn}lm}[r(\lambda_r)]^ * + g[\lambda_r,\lambda_\theta ] \frac{d r^{\rm out , down}_{\omega_{mkn}lm}}{dr}[r(\lambda_r)]^ * \right\ } , \nonumber \\ \label{eq : tildezlmknformula}\end{aligned}\ ] ] where r^out , down__mknlm(r ) = , g_mkn(_r , _ ) - ( ^2 e - a l_z ) ( ^2 _ mkn- a m ) + 2 i r u_r , [ eq : gmnkdef ] g(_r , _ ) = - i u_r .[ eq : gdef1 ] on the right hand sides of eqs .( [ eq : gmnkdef ] ) and ( [ eq : gdef1 ] ) , it is understood that , and are evaluated at and .also it is understood that is given as a function of by eq.([eq : urformula ] ) , and hence as a function of by using and by resolving the sign ambiguity in eq .( [ eq : urformula ] ) using eq .( [ eq : psidef ] ) .several features of the formulae ( [ eq : dkdt ] ) and ( [ eq : tildezlmknformula ] ) are worth noting : * the formula ( [ eq : tildezlmknformula ] ) for the new amplitudes and . 
has a very similar structure to the formula ( [ eq : zlmknformula ] ) defining the amplitudes and .in particular , the derivation of the formula ( [ eq : changeparameter ] ) for the dependence of the amplitudes on the parameters , , and of the geodesic ( via an overall phase ) carries through as before .this means that the time derivative of the carter constant is independent of these parameters , as expected , since the phase from the untilded amplitudes cancels the phase from the tilded amplitudes in eq.([eq : dkdt ] ) .* although we have no general definition for `` flux of carter '' , presumably the first term in eq .( [ eq : dkdt ] ) corresponds to something like the flux of carter to future null infinity , and the second term to the flux of carter down the black hole horizon .* as was the case for the original amplitudes , to evaluate the double integral ( [ eq : tildezlmknformula ] ) it is convenient to use the identity ( [ eq : averageidentity ] ) to express the integral in terms of the variables and instead of and .taking a time derivative of the expression ( [ eq : kdef ] ) for gives here we have used the killing tensor equation .now substituting the expression ( [ eq : selfacc0 ] ) for the self - acceleration gives \nn \\ \mbox { } & = & \frac{2 q}{\mu } k \frac{d \phi_{\rm rad}}{d \tau } + \frac{2 q}{\mu } k^{\alpha\beta } u_{\alpha } \nabla_\beta \phi_{\rm rad}. \label{eq : dkdtau0}\end{aligned}\ ] ] now the first term here is a total time derivative to leading order in , since is conserved to zeroth order in .hence this term can be neglected for the reason explained in sec.[sec : derivation1 ] .dropping the first term and substituting into eq .( [ eq : used ] ) we get _ t = _ .[ eq : dkdtau1 ] next , we use the explicit expression ( [ eq : phiradformula0a ] ) for the radiation field .this gives .\label{eq : dkdt1}\end{aligned}\ ] ] we now define the amplitudes ^out_lmkn = _ [ eq : tildezoutdef ] and ^down_lmkn = _ .[ eq : tildezdowndef ] substituting these definitions into eq .( [ eq : dkdt1 ] ) gives the formula ( [ eq : dkdt ] ) .therefore it remains only to derive the formula ( [ eq : tildezlmknformula ] ) for the amplitudes and .we will derive the formula for ; the derivation of the formula for is similar .we start by simplifying the differential operator which appears in the definitions ( [ eq : tildezoutdef ] ) and ( [ eq : tildezdowndef ] ) . using the expression ( [ eq : kalphabeta ] ) for the killing tensorwe can write this as k^ u _ _ = ( l^u _ ) n^ _ + ( n^u _ ) l^_+ r^2 . using the definitions ( [ eq : vecldef ] ) and ( [ eq : vecndef ] ) of and ,the definitions ( [ eq : edef ] ) and ( [ eq : lzdef ] ) of and , and the fact that and reduce to and when acting on gives consider now the contribution of the first term in eq.([eq : k3 ] ) to the expression ( [ eq : tildezoutdef ] ) for . using the relation ( [ eq : minotime ] ) between and we can write this as we can integrate this by parts with respect to ; the boundary term which is generated can be neglected for the reason explained in sec .[ sec : derivation1 ] .this gives where we have used from eqs.([eq : minotime ] ) and ( [ eq : kerrmetric ] ) . 
combining this with the result obtained from substituting the second and third terms in eq .( [ eq : k3 ] ) into eq .( [ eq : tildezoutdef ] ) gives ^out_lmkn = _ , [ eq : tildezoutformula1 ] where the functions and are [ cf .( [ eq : gmnkdef ] ) and ( [ eq : gdef1 ] ) above ] g_mkn(_r , _ ) - ( ^2 e -a l_z ) ( ^2 _ mkn - a m ) + 2 i r u_r , [ eq : gmnkdef2 ] g(_r , _ ) = - i u_r .[ eq : gdef2 ] as explained above , it is understood that the functions , and on the right hand sides of eqs.([eq : gmnkdef2 ] ) and ( [ eq : gdef2 ] ) are evaluated at and .also it is understood that is given as a function of by eq.([eq : urformula ] ) , and hence as a function of by using and by resolving the sign ambiguity in eq .( [ eq : urformula ] ) using eq .( [ eq : psidef ] ) .the average over in eq .( [ eq : tildezoutformula1 ] ) can be evaluated using the same techniques as in secs .[ sec : harmonic ] and [ sec : edot ] above . using the definition ( [ eq : pioutdef ] ) of the `` out '' mode function and the definition ( [ eq : omegamkndef ] ) of , the quantity inside the angular brackets in eq .( [ eq : tildezoutformula1 ] ) can be written as _mkn ( , ) , where ^ * \nn \\ \mbox { } \times e^{- i m \delta \phi_r(\lambda_r ) } e^{-i m \delta \phi_\theta(\lambda_\theta ) } e^{i \omega_{mkn } \delta t_r(\lambda_r ) } e^{i \omega_{mkn } \delta t_\theta(\lambda_\theta ) } \nn \\\mbox { } \times \left\ { g_{mkn}[\lambda_r,\lambda_\theta ] r^{\rm out , down}_{\omega_{mkn}lm}[r(\lambda_r)]^ * + g[\lambda_r,\lambda_\theta ] \frac{d r^{\rm out , down}_{\omega_{mkn}lm}}{dr}[r(\lambda_r)]^ * \right\}. \nonumber \\ \label{eq : tildejformula}\end{aligned}\ ] ] this function is biperiodic : hence it can be expanded as a double fourier series , and the average over of is _ = _ 0^_r d_r _ 0^ _ d__mkn(_r , _ ) .[ eq : jav ] substituting eq .( [ eq : tildejformula ] ) into eq .( [ eq : jav ] ) and back into eq .( [ eq : tildezoutformula1 ] ) now gives the formula ( [ eq : tildezlmknformula ] ) .in this section we study the prediction of the formula ( [ eq : dkdt ] ) for the time derivative of the carter constant , for the special case of circular orbits ( orbits of constant boyer - lindquist radius ) .it is known that circular orbits remain circular while evolving under the influence of radiation reaction in the adiabatic regime . also for circular orbitsit is possible to relate the carter constant to the energy and angular momentum of the orbit : specializing eq .( [ eq : kformula1 ] ) to zero radial velocity yields in order for this relation to be preserved under slow evolution of , and , we must have _[ eq : circtocirccheck ] we now verify that our expression ( [ eq : dkdt ] ) for satisfies the condition ( [ eq : circtocirccheck ] ) . 
for circular orbits , the following simplifications apply to the harmonic decomposition of the geodesic source derived in sec.[sec :harmonic ] .first , the functions , , and defined by eqs .( [ eq : deltatrformula ] ) , ( [ eq : hattrdef ] ) , ( [ eq : deltaphirformula ] ) and ( [ eq : hatphirdef ] ) vanish identically .second , the biperiodic function defined in eq.([eq : jomegalmdef ] ) is therefore independent of .it follows that the double fourier series ( [ eq : doublefourierseries ] ) is replaced by the single fourier series j_lm ( _ ) = _ k=-^j_lmk e^-i k _ _ , [ eq : singlefourierseries ] where the coefficients are given by j_lmk = _ 0^ _ d _ e^i k _ _ j_lm ( _ ) .[ eq : jomegalmkdef ] the effect of this change on the subsequent formulae for fluxes and amplitudes in secs .[ sec : edot ] and [ sec : kdot ] can be summarized as : ( i ) remove the sums over ; ( ii ) remove the indices from the amplitudes ; ( iii ) remove the averaging operation from the formulae for the amplitudes ; and ( iv ) evaluate the integrands in the formulae for the amplitudes at .also the formula ( [ eq : omegamkndef ] ) for the frequency is replaced by _mk = m _ + k _ . with these simplifications ,the formula ( [ eq : dcaledt2 ] ) for the time derivatives of energy and angular momentum becomes where or , and the expression in curly brackets means either ( for ) or ( for ) .similarly the expression ( [ eq : dkdt ] ) for the time derivative of the carter constant becomes .\label{eq : dkdtcirc}\end{aligned}\ ] ] comparing eqs .( [ eq : dcaledt2circ ] ) and ( [ eq : dkdtcirc ] ) , we see that the condition ( [ eq : circtocirccheck ] ) will be satisfied as long as the amplitudes satisfy _ lmk^out , down = ( ^2_mk - m a ) z_lmk^out , down .[ eq : amplitudeidentity ] therefore it suffices to derive the identity ( [ eq : amplitudeidentity ] ) .we will derive this identity for the `` out '' modes ; the `` down '' derivation is identical. the particular form of the expressions for the amplitudes that are most useful here are eqs .( [ eq : zoutuseful ] ) and ( [ eq : tildezoutformula1 ] ) : z^out_lmk = - _ , [ eq : zoutuseful1 ] and ^out_lmk = _ .[ eq : tildezoutformula2 ] here we have defined .it follows from eq .( [ eq : gdef2 ] ) together with that the function vanishes , so the formula for simplifies to ^out_lmk = _ .[ eq : tildezoutformula3 ] we now substitute into eq .( [ eq : tildezoutformula3 ] ) the expression ( [ eq : gmnkdef2 ] ) for . the second term in eq .( [ eq : gmnkdef2 ] ) vanishes since , and of all the factors in the first term , only depends on ; the remaining factors are constants and can be pulled out of the average over .this yields _ lmk^out = - ( ^2 _ mk - a m ) _ . comparing this with the formula ( [ eq : zoutuseful1 ] ) for nowyields the identity ( [ eq : amplitudeidentity ] ) .this research was supported in part by nsf grants phy-0140209 and phy-0244424 and by nasa grant nagw-12906 .we thank the anonymous referees for several helpful comments .in this appendix we estimate the accuracy of the adiabatic waveforms , expanding on the brief treatment given in ref .specifically we estimate the magnitude of the correction to the waveform s phase that corresponds to the term in eq.([eq : emri_phase ] ) .we can roughly estimate this term by using post - newtonian expressions for the waveform in which terms corresponding to are readily identified . 
while the post - newtonian approximation is not strictly valid in the highly relativistic regime near the horizon of interest here , it suffices to give some indication of the accuracy of the adiabatic waveforms . in this appendixwe specialize for simplicity to circular , equatorial orbits .we also specialize to gravitational radiation reaction , unlike the body of the paper which dealt with the scalar case .our conclusion is that adiabatic waveforms will likely be sufficiently accurate for signal detection .the procedure we use is as follows .we focus attention on the phase of the fourier transform of the dominant , piece of the gravitational waveform . here is the gravitational wave frequency . nowa change to the phase function of the form is not observable ; the constant corresponds to a change in the time of arrival of the signal .therefore we focus on the observable quantity , and we write this as ( f ) = g(f , m_1,m_2 ) .[ eq : gadef ] here is a function , is the mass of the black hole , and is the mass of the inspiralling compact object , the total mass , and , the reduced mass . throughout the body of the paper we referred to the mass of the inspiralling object as and the mass of the black hole as ; this is valid to leading order in . in this appendixhowever we need to be more accurate , and hence we revert to using and for the two masses . ] .the specific form of obtained from post - newtonian theory is discussed below .we next expand the function as a power series in , at fixed and .the leading order term scales as , and we obtain g(f , m_1,m_2 ) = + g_1(f , m_1 ) + g_2(f , m_1 ) m_2 + [ eq : gexpand ] the first term in this expression corresponds to the term in eq .( [ eq : emri_phase ] ) ; it gives the leading - order , adiabatic waveform .the error incurred from using adiabatic waveforms is therefore g(f , m_1,m_2 ) = g(f , m_1,m_2 ) - = g_1(f , m_1 ) + g_2(f , m_1 ) m_2 + we want to estimate the effects of this phase error .it is useful to split the phase error into two terms : g(f , m_1,m_2 ) = g_1(f , m_1,m_2 ) + g_2(f , m_1,m_2 ) , [ eq : deltagsplit ] where g_1(f , m_1,m_2 ) = g(f , m_1,m_2 ) - [ eq : deltag1def ] and g_2(f , m_1,m_2 ) = - .the phase error corresponds to the error that would be caused by using an incorrect value of the mass of the black hole .the effect of this error on the data analysis would be to cause a systematic error in the inferred best fit value of the black hole mass .however , the fractional error would be of order .this error will not have any effect on the ability of adiabatic waveforms to detect signals when used as search templates .therefore , we will neglect this error , and focus on the remaining error term .we now compute explicitly the phase error that corresponds to the term in eq .( [ eq : deltagsplit ] ) .we use the post-3.5-newtonian expression for , which can be computed from the orbital energy and gravitational wave luminosity via the equation = - 2 . for the energy use the expression given in eq .( 50 ) of ref. , and for the luminosity we use eq.(12.9 ) of ref . . from ref . and from refs . .] this gives ( f ) = 2 f t_c + _ c + ( f ) , [ eq : psi0 ] where and are constants , , and here is euler s constant , is the dimensionless spin parameter of the black hole , and .we have also added to eq .( [ eq : fullphase ] ) the spin - dependent terms up to post-2-newtonian order , taken from eq .( 1 ) of ref. 
, specialized to a non - spinning particle on a circular , equatorial orbit .we now insert the expression ( [ eq : psi0 ] ) for into eqs.([eq : gadef ] ) and ( [ eq : deltag1def ] ) , and integrate twice with respect to to obtain .the result is , \label{eq : deltapsians}\end{aligned}\ ] ] where .now as discussed above , a change to the phase function of the form is not observable .this freedom allows us to set where is a fixed frequency , which amounts to replacing our expression for the phase error with _1(f ) - _ 1(f_0 ) - _ 1(f_0)(f - f_0 ) .we now evaluate the phase error for some typical sources .first , we take , , . for this case the last year of inspiral extends from hz to hz ; the maximum value of is cycles if we take hz . as a second example with a higher mass ratio , we take , , . the last year of inspiral for this case extends from hz to hz ; the maximum value of is cycles if we take hz ( see fig .[ fig : phase ] ) larger . ] . for signals from intermediate mass black holes that may be detectable by ligo , the phase errors are yet smaller .these phase errors are sufficiently large that adiabatic waveforms can not be used as data - analysis templates ; the template will not quite stay in phase with the signal over the entire cycles of inspiral .however , the requirements on detection templates are much less stringent : phase coherence is only needed for weeks , rather than a year . in addition , in the matched filtering search process phase error will tend to be compensated for by small systematic errors in the best - fit mass parameters . because the adiabatic waveforms are _ almost _ good enough for data - analysis templates ( phase errors cycle in the worst cases ), adiabatic waveforms will likely be accurate enough for detection templates both for ground - based and space - based detectors .this appendix lists some of the typos in the paper by galtsov : * the discussion of the range of summation of the indices , given after eq .( 2.5 ) is incorrect. it should be summation , with and rather than and .* in the definition of the variable given just before eq .( 2.7 ) , the factor should be . * in the first of the four equations in eqs . ( 2.9 ) , the term should be . * the right hand side of eq . ( 2.25 ) should be divided by , as already discussed in sec .[ sec : retardedformula ] above . * in eq .( 2.30 ) , the quantity should be replaced by its complex conjugate . * in the first line of eq .( 3.2 ) , the quantity should be replaced by its complex conjugate . * in the second of eqs .( 3.3 ) , the right hand side should read rather than .e. the closing bracket is in the wrong place .* in the second term on the first line of eq .( 3.6 ) , the first factor of should be omitted . * in eq .( 3.9 ) , the argument of the last factor should be rather than .* galtsov s equation ( 4.13 ) for the rate of change of energy or angular momentum of the orbit is correct only for sources which are smooth functions of frequency .however , for bound geodesics , the inner products are sums of delta functions in frequency , so it does not make sense to square these inner products as galtsov does .the correct version of this equation is our eq .( [ eq : dcaledt2 ] ) above .10 lee samuel finn and kip s. thorne . gravitational waves from a compact star in a circular , inspiral orbit , in the equatorial plane of a massive , spinning black hole , as observed by lisa . 
,d62:124021 , 2000 , gr - qc/0007074 .kostas glampedakis and daniel kennefick .zoom and whirl : eccentric equatorial orbits around spinning black holes and their evolution under gravitational radiation reaction ., d66:044002 , 2002 , gr - qc/0203086 .luc blanchet , thibault damour , and gilles esposito - farese .dimensional regularization of the third post - newtonian dynamics of point particles in harmonic coordinates . ,d69:124007 , 2004 , gr - qc/0311052 .luc blanchet , thibault damour , gilles esposito - farese , and bala r. iyer. gravitational radiation from inspiralling compact binaries completed at the third post - newtonian order ., 93:091101 , 2004 , gr - qc/0406012 . | a key source for lisa will be the inspiral of compact objects into massive black holes . recently mino has shown that in the adiabatic limit , gravitational waveforms for these sources can be computed by using for the radiation reaction force the gradient of one half the difference between the retarded and advanced metric perturbations . using post - newtonian expansions , we argue that the resulting waveforms should be sufficiently accurate for signal detection with lisa . data - analysis templates will require higher accuracy , going beyond adiabaticity ; this remains a significant challenge . we describe an explicit computational procedure for obtaining waveforms based on mino s result , for the case of a point particle coupled to a scalar field . we derive an explicit expression for the time - averaged time derivative of the carter constant , and verify that the expression correctly predicts that circular orbits remain circular while evolving under the influence of radiation reaction . the derivation uses detailed properties of mode expansions , green s functions and bound geodesic orbits in the kerr spacetime , which we review in detail . this paper is about three quarters review and one quarter new material . the intent is to give a complete and self - contained treatment of scalar radiation reaction in the kerr spacetime , in a single unified notation , starting with the kerr metric , and ending with formulae for the time evolution of all three constants of the motion that are sufficiently explicit to be used immediately in a numerical code . |
the largest source of information today is the world wide web .the estimated number of documents nears 10 billion .similarly , the number of documents changing on a daily basis is also enormous .the ever - increasing growth of the web presents a considerable challenge in finding novel information on the web .in addition , properties of the web , like scale - free small world ( sfsw ) structure may create additional challenges .for example the direct consequence of the scale - free small world property is that there are numerous urls or sets of interlinked urls , which have a large number of incoming links .intelligent web crawlers can be easily trapped at the neighborhood of such junctions as it has been shown previously .we have developed a novel artificial life ( a - life ) method with intelligent individuals , crawlers , to detect new information on a news web site .we define a - life as a population of individuals having both static structural properties , and structural properties which may undergo continuous changes , i.e. , adaptation .our algorithms are based on methods developed for different areas of artificial intelligence , such as evolutionary computing , artificial neural networks and reinforcement learning .all efforts were made to keep the applied algorithms as simple as possible subject to the constraints of the internet search .evolutionary computing deals with properties that may be modified during the creation of new individuals , called multiplication. descendants may exhibit variations of population , and differ in performance from the others .individuals may also terminate .multiplication and selection is subject to the fitness of individuals , where fitness is typically defined by the modeler .for a recent review on evolutionary computing , see . for reviews on related evolutionary theories and the dynamics of self - modifying systemssee and , respectively .similar concepts have been studied in other evolutionary systems where organisms compete for space and resources and cooperate through direct interaction ( see , e.g. , and references therein . )selection , however , is a very slow process and individual adaptation may be necessary in environments subject to quick changes . the typical form of adaptive learning is the connectionist architecture , such as artificial neural networks .multilayer perceptrons ( mlps ) , which are universal function approximators have been used widely in diverse applications .evolutionary selection of adapting mlps has been in the focus of extensive research . in a typical reinforcement learning ( rl )problem the learning process is motivated by the expected value of long - term cumulated profit .a well - known example of reinforcement learning is the td - gammon program of tesauro .the author applied mlp function approximators for value estimation .reinforcement learning has also been used in concurrent multi - robot learning , where robots had to learn to forage together via direct interaction .evolutionary learning has been used within the framework of reinforcement learning to improve decision making , i.e. , the state - action mapping called policy . in this paperwe present a selection based algorithm and compare it to the well - known reinforcement learning algorithm in terms of their efficiency and behavior . 
In our problem, fitness is not determined by us; it is implicit. Fitness is jointly determined by the ever-changing external world and by the competing individuals together. Selection and multiplication of individuals are based on their fitness values. Communication and competition among our crawlers are indirect: only the first submitter of a document may receive positive reinforcement. Our work differs from other studies that combine genetic, evolutionary, function approximation, and reinforcement learning algorithms in that (i) it does not require an explicit fitness function, (ii) we have no control over the environment, (iii) collaborating individuals use value estimation under 'evolutionary pressure', and (iv) individuals work without direct interaction with each other. We performed realistic simulations based on data collected during an 18-day crawl of the web. We have found that our selection-based weblog update algorithm performs better in a scale-free small world environment than the RL algorithm, even though the reinforcement learning algorithm has been shown to be efficient in finding relevant information. We explain our results by the different behaviors of the algorithms. That is, the weblog update algorithm finds good sources of relevant documents and remains in those regions until better places are found by chance. Individuals using this selection algorithm can quickly collect new relevant documents from already known places because they monitor these places continuously. The reinforcement learning algorithm explores new territories for relevant documents and, if it finds a good place, collects the existing relevant documents there. The continuous exploration of RL means that it finds relevant documents more slowly than the weblog update algorithm does. Also, crawlers using the weblog update algorithm submit a more diverse set of documents than crawlers using the RL algorithm, so there is more new relevant information among the documents submitted by the former than by the latter. The paper is organized as follows. In Section [s:related] we review recent work in the field of web crawling. We then describe our algorithms and the forager architecture in Section [s:architecture]. In Section [s:experiments] we present our experiment on the web and the simulations we conducted, together with their results. In Section [s:discussion] we discuss our results on the different behaviors found for the selection and reinforcement learning algorithms. Section [s:conclusion] concludes the paper. Our work concerns a realistic web environment and search algorithms over this environment; we compare selective/evolutionary and reinforcement learning methods. It seems to us that such studies should be conducted in ever-changing, bustling, fluctuating environments, which justifies our choice of environment. We shall review several of the known search tools, including those that our work is based upon. Readers familiar with search tools used on the web may wish to skip this section. There are three main problems that have been studied in the context of crawlers. Rungsawang et al. and references therein and Menczer studied topic-specific crawlers. Risvik et al.
and references therein address research issues related to the exponential growth of the web .cho and gracia - molina , menczer and edwards et .al and references therein studies the problem of different refresh rates of urls ( possibly as high as hourly or as low as yearly ) .rungsawang and angkawattanawit provide an introduction to and a broad overview of topic specific crawlers ( see citations in the paper ) .they propose to learn starting urls , topic keywords and url ordering through consecutive crawling attempts .they show that the learning of starting urls and the use of consecutive crawling attempts can increase the efficiency of the crawlers .the used heuristic is similar to the weblog algorithm , which also finds good starting urls and periodically restarts the crawling from the newly learned ones .the main limitation of this work is that it is incapable of addressing the freshness ( i.e. , modification ) of already visited web pages .menczer describes some disadvantages of current web search engines on the dynamic web , e.g. , the low ratio of fresh or relevant documents .he proposes to complement the search engines with intelligent crawlers , or web mining agents to overcome those disadvantages .search engines take static snapshots of the web with relatively large time intervals between two snapshots .intelligent web mining agents are different : they can find online the required recent information and may evolve intelligent behavior by exploiting the web linkage and textual information .he introduces the infospider architecture that uses genetic algorithm and reinforcement learning , also describes the myspider implementation of it .menczer discusses the difficulties of evaluating online query driven crawler agents .the main problem is that the whole set of relevant documents for any given query are unknown , only a subset of the relevant documents may be known . to solve this problem he introduces two new metrics that estimate the real recall and precision based on an available subset of the relevant documents . with these metrics search engine and online crawler performancescan be compared .starting the myspider agent from the 100 top pages of altavista the agent s precision is better than altavista s precision even during the first few steps of the agent .the fact that the myspider agent finds relevant pages in the first few steps may make it deployable on users computers .some problems may arise from this kind of agent usage .first of all there are security issues , like which files or information sources are allowed to read and write for the agent .the run time of the agents should be controlled carefully because there can be many users ( google answered more than 100 million searches per day in january - february 2001 ) using these agents , thus creating huge traffic overhead on the internet .our weblog algorithm uses local selection for finding good starting urls for searches , thus not depending on any search engines .dependence on a search engine can be a suffer limitation of most existing search agents , like myspiders .note however , that it is an easy matter to combine the present algorithm with urls offered by search engines .also our algorithm should not run on individual users s computers .rather it should run for different topics near to the source of the documents in the given topic e.g. 
, may run at the actual site where relevant information is stored .risvik and michelsen mention that because of the exponential growth of the web there is an ever increasing need for more intelligent , ( topic-)specific algorithms for crawling , like focused crawling and document classification . with these algorithms crawlers and search enginescan operate more efficiently in a topically limited document space .the authors also state that in such vertical regions the dynamics of the web pages is more homogenous .they overview different dimensions of web dynamics and show the arising problems in a search engine model .they show that the problem of rapid growth of web and frequent document updates creates new challenges for developing more and more efficient web search engines .the authors define a reference search engine model having three main components : ( 1 ) crawler , ( 2 ) indexer , ( 3 ) searcher .the main part of the paper focuses on the problems that crawlers need to overcome on the dynamic web . as a possible solution the authors propose a heterogenous crawling architecture .they also present an extensible indexer and searcher architecture .the crawling architecture has a central distributor that knows which crawler has to crawl which part of the web .special crawlers with low storage and high processing capacity are dedicated to web regions where content changes rapidly ( like news sites ) .these crawlers maintain up - to - date information on these rapidly changing web pages .the main limitation of their crawling architecture is that they must divide the web to be crawled into distinct portions manually before the crawling starts .a weblog like distributed algorithm as suggested here my be used in that architecture to overcome this limitation .cho and garcia - molina define mathematically the freshness and age of documents of search engines .they propose the poisson process as a model for page refreshment .the authors also propose various refresh policies and study their effectiveness both theoretically and on real data .they present the optimal refresh policies for their freshness and age metrics under the poisson page refresh model .the authors show that these policies are superior to others on real data , too .they collected about 720000 documents from 270 sites . although they show that in their database more than 20 percent of the documents are changed each day , they disclosed these documents from their studies .their crawler visited the documents once each day for 5 months , thus can not measure the exact change rate of those documents . while in our work we definitely concentrate on these frequently changing documents .the proposed refresh policies require good estimation of the refresh rate for each document .the estimation influences the revisit frequency while the revisit frequency influences the estimation .our algorithm does not need explicit frequency estimations .the more valuable urls ( e.g. , more frequently changing ) will be visited more often and if a crawler does not find valuable information around an url being in it s weblog then that url finally will fall out from the weblog of the crawler .however frequency estimations and refresh policies can be easily integrated into the weblog algorithm selecting the starting url from the weblog according to the refresh policy and weighting each url in the weblog according to their change frequency estimations .menczer also introduces a recency metric which is 1 if all of the documents are recent ( i.e. 
, not changed after the last download ) and goes to 0 as downloaded documents are getting more and more obsolete . trivially immediately after a few minutes run of an online crawler the value of this metric will be 1 , while the value for the search engine will be lower .edwards et al . present a mathematical crawler model in which the number of obsolete pages can be minimized with a nonlinear equation system .they solved the nonlinear equations with different parameter settings on realistic model data .their model uses different buckets for documents having different change rates therefore does not need any theoretical model about the change rate of pages .the main limitations of this work are the following : * by solving the nonlinear equations the content of web pages can not be taken into consideration .the model can not be extended easily to ( topic-)specific crawlers , which would be highly advantageous on the exponentially growing web , , . * the rapidly changing documents ( like on news sites ) are not considered to be in any bucket , therefore increasingly important parts of the web are disclosed from the searches .however the main conclusion of the paper is that there may exist some efficient strategy for incremental crawlers for reducing the number of obsolete pages without the need for any theoretical model about the change rate of pages .there are two different kinds of agents : the foragers and the reinforcing agent ( ra ) .the fleet of foragers crawl the web and send the urls of the selected documents to the reinforcing agent .the ra determines which forager should work for the ra and how long a forager should work .the ra sends reinforcements to the foragers based on the received urls .we employ a fleet of foragers to study the competition among individual foragers .the fleet of foragers allows to distribute the load of the searching task among different computers .a forager has simple , limited capabilities , like limited number of starting urls and a simple , content based url ordering .the foragers compete with each other for finding the most relevant documents . in this waythey efficiently and quickly collect new relevant documents without direct interaction . at firstthe basic algorithms are presented .after that the reinforcing agent and the foragers are detailed .a forager periodically restarts from a url randomly selected from the list of starting urls .the sequence of visited urls between two restarts forms a path .the starting url list is formed from the first urls of the weblog . in the weblogthere are urls with their associated weblog values in descending order .the weblog value of a url estimates the expected sum of rewards during a path after visiting that url .the weblog update algorithm modifies the weblog before a new path is started ( algorithm [ t : weblog_pseudo ] ) .the weblog value of a url already in the weblog is modified toward the sum of rewards in the remaining part of the path after that url .a new url has the value of actual sum of rewards in the remaining part of the path .if a url has a high weblog value it means that around that url there are many relevant documents .therefore it may worth it to start a search from that url . ' '' '' ' '' '' [ t : weblog_pseudo]*weblog update*. 
the update-rate parameter was set to 0.3 .

algorithm [ t : weblog_pseudo ] ( weblog update ) .
input : the steps of the given path ; the sum of rewards for each step in the given path .
output : starting url list .
method : compute the cumulated sum of the per-step rewards in reverse order , so that each visited url is paired with the reward collected in the remaining part of the path ; for each visited url not yet having a value in the weblog , set its weblog value to that cumulated reward ; for each url already in the weblog , move its weblog value toward that cumulated reward with the update rate 0.3 ; sort the weblog by value in descending order ; truncate the weblog after a fixed number of elements ; form the starting url list from the first elements of the weblog .

without the weblog algorithm the weblog , and thus the starting url list , remains the same throughout the searches . the weblog algorithm is a very simple version of an evolutionary algorithm . here , evolution may occur at two different levels : the list of urls of a forager evolves through the reordering of the weblog , and a forager may multiply , so that its weblog , or part of it , spreads through inheritance . in this way the weblog algorithm incorporates the most basic features of evolutionary algorithms , and this simple form is sufficient to demonstrate our claims . a forager can modify its url ordering based on the reinforcements received for the submitted urls . the ( immediate ) profit is the difference between the rewards and penalties received at any given step . immediate profit is a myopic characterization of a step to a url . foragers have an adaptive continuous value estimator and follow the _ policy _ that maximizes the expected long-term cumulated profit ( ltp ) instead of the immediate profit . such estimators can be realized easily in neural systems . policy and profit estimation are interlinked concepts : profit estimation determines the policy , whereas the policy influences the choices and , in turn , the expected ltp . ( for a review , see . ) here , choices are based on the greedy ltp policy : the forager visits the url that belongs to the _ frontier _ ( the list of linked but not yet visited urls , see later ) and has the highest estimated ltp . in the particular simulation each forager has a probabilistic term-frequency inverse document-frequency ( prtfidf ) text classifier of fixed dimension , generated on a previously downloaded portion of the geocities database . fifty clusters were created from the downloaded documents by boley 's clustering algorithm . the prtfidf classifiers were trained on these clusters plus an additional one representing general texts from the internet . the prtfidf outputs were mapped non-linearly to the interval [ -1,+1 ] by a hyperbolic-tangent function . the classifier was applied to reduce the texts to a small-dimensional representation . the output vector of the classifier for the page of a url is stored for each url ( one of the outputs was dismissed ) ; see algorithm [ t : pageinfo_urlordering_pseudo ] .

algorithm [ t : pageinfo_urlordering_pseudo ] ( page information storage ) .
input : urls of the pages to be stored .
output : the classifier output vectors for the pages of these urls .
method : for each url , take the text of its page , compute the classifier output vector , and store it .

a linear function approximator is used for ltp estimation . its parameters form the _ weight vector _ . the ltp of the document at a url is estimated as the scalar product of the weight vector and the stored classifier output vector . during url ordering the url with the highest ltp estimate is selected . the url ordering algorithm is shown in algorithm [ t : urlordering_pseudo ] .
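to make the bookkeeping concrete , the following is a minimal python sketch of the weblog update step . the update rate of 0.3 and the starting-list size of 10 are taken from the description above ; the data layout ( a plain dict from url to value ) and the truncation size are our own simplifying assumptions , not the paper 's implementation .

```python
def weblog_update(weblog, path_urls, rewards, rate=0.3, max_size=100, n_start=10):
    """Update weblog values from one finished search path.

    weblog    : dict mapping url -> weblog value (estimated future reward)
    path_urls : urls visited along the path, in order
    rewards   : reward received at each step of the path (same length)
    """
    # cumulated sum of rewards in reverse order: reward of the remaining path
    future, remaining = 0.0, []
    for r in reversed(rewards):
        future += r
        remaining.append(future)
    remaining.reverse()

    for url, value in zip(path_urls, remaining):
        if url in weblog:
            # move the old value toward the observed remaining-path reward
            weblog[url] += rate * (value - weblog[url])
        else:
            # new urls enter with the observed remaining-path reward
            weblog[url] = value

    # keep only the best entries, highest value first (truncation size assumed)
    best = sorted(weblog.items(), key=lambda kv: kv[1], reverse=True)[:max_size]
    weblog.clear()
    weblog.update(best)

    # the starting url list is formed from the first n_start entries
    return [url for url, _ in best[:n_start]]
```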
algorithm [ t : urlordering_pseudo ] ( url ordering ) .
input : the set of available urls ; the stored vector representation of the urls .
output : the url with maximum ltp value .
method : for each available url , estimate its ltp from its stored vector ; return the url with maximal ltp .

the weight vector of each forager is tuned by temporal difference learning . let the current url be $u$ , the next url to be visited $u'$ , the output of the classifier for $u$ be $\mathbf{x}(u)$ , and the estimated ltp of a url be $V(u) = \mathbf{w}\cdot\mathbf{x}(u)$ . assume that the immediate profit of stepping from $u$ to $u'$ is $r$ . our estimation is perfect if $V(u) = r + \gamma\,V(u')$ , where future profits are discounted by a factor $\gamma$ with $0 < \gamma < 1$ . the error of the value estimation is $\delta = r + \gamma\,V(u') - V(u)$ ; a fixed discount factor was used throughout the simulations . for each step the weights of the value function were tuned to decrease the error of the value estimation based on the received immediate profit : the $i$-th component of the weight vector was corrected by $\Delta w_i = \alpha\,\delta\,x_i(u)$ with a small , fixed learning rate $\alpha$ . in a stationary environment these modified weights would improve the value estimation ( see , e.g. , and references therein ) . the url ordering update is given in algorithm [ t : urlordering_update_pseudo ] .

algorithm [ t : urlordering_update_pseudo ] ( url ordering update ) .
input : the step for which the reinforcement is received ; the previous step ; the reinforcement received for visiting that url .
output : the updated weight vector .
method : compute the value-estimation error for the step and correct the weight vector accordingly .

without the update algorithm the weight vector remains the same throughout the search . a document or page is possibly relevant for a forager if it is not older than 24 hours and the forager has not marked it previously . algorithm [ t : relevant_pseudo ] shows the procedure for selecting such documents ; the selected documents are sent to the ra for further evaluation .

algorithm [ t : relevant_pseudo ] ( document relevancy at a forager ) .
input : the pages to be examined .
output : the selected pages .
method : maintain the set of previously selected relevant pages ; select all examined pages that are not older than 24 hours and are not contained in that set ; add the selected pages to the set .

during multiplication the weblog is randomly divided into two equal-sized parts ( one for the original and one for the new forager ) . the parameters of the url ordering algorithm ( the weight vector of the value estimation ) are either copied or generated anew at random : if the forager has a url ordering update algorithm then the parameters are copied , otherwise new random parameters are generated , as shown in algorithm [ t : multiplication_pseudo ] .

algorithm [ t : multiplication_pseudo ] ( multiplication ) .
input : the weblog and the url ordering weight vector of the parent forager .
output : the weblog and weight vector of the new forager .
method : randomly select half of the urls and values from the parent 's weblog , delete them from the parent , and give them to the new forager ; if the forager has a url ordering update algorithm , copy the weight vector of the url ordering , else generate a new random weight vector .

a reinforcing agent ( ra ) controls the `` life '' of the foragers : it can start , stop , multiply or delete foragers . the ra receives the urls of documents selected by the foragers , and responds with reinforcements for the received urls . the response is a fixed reward ( a.u . ) for a relevant document and a fixed penalty ( a.u .
) for a document that is not relevant . a document is relevant if it has not yet been seen by the reinforcing agent and it is not older than 24 hours . the reinforcing agent maintains a score for each forager working for it . initially each forager has the same starting score . when a forager sends a url to the ra , the forager 's score is decreased by a fixed amount ; after each relevant page sent by the forager , the forager 's score is increased by a fixed amount ( algorithm [ t : manageurl_pseudo ] ) .

algorithm [ t : manageurl_pseudo ] ( manage received url ) .
input : a url received from a forager .
output : the reinforcement sent to the forager and the updated forager score .
method : maintain the set of relevant pages already seen by the ra ; get the page of the received url ; decrease the forager 's score by the submission cost ; if the page is already in the set or its date is older than 24 hours , send the penalty to the forager ; else add the page to the set , send the reward to the forager , and increase the forager 's score .

when a forager 's score reaches the upper threshold and the number of foragers is smaller than the allowed maximum , the forager is multiplied , that is , a new forager is created with the same algorithms as the original one , but with slightly different parameters . when a forager 's score goes below the lower threshold and the number of foragers is larger than the allowed minimum , the forager is deleted ( algorithm [ t : manageforager_pseudo ] ) . note that a forager can be multiplied or deleted immediately after it has been stopped by the ra and before the next forager is activated .

algorithm [ t : manageforager_pseudo ] ( manage forager ) .
input : the forager to be multiplied or deleted .
output : the possibly modified list of foragers .
method : if the forager 's score has reached the upper threshold and the number of foragers is below the maximum , call the forager 's multiplication routine ( algorithm [ t : multiplication_pseudo ] , which may modify its own weblog ) , create a new forager with the received weblog and weight vector , and reset the scores of the two foragers ; else if the forager 's score is below the lower threshold and the number of foragers is above the minimum , delete the forager .

foragers on the same computer work in time slices , one after the other . each forager works for an amount of time determined by the ra ; then the ra stops that forager and starts the next one it selects . the pseudo-code of the reinforcing agent is given in algorithm [ t : reinforcing_pseudo ] .

algorithm [ t : reinforcing_pseudo ] ( reinforcing agent ) .
input : seed urls .
output : the found relevant documents .
method : start with an empty set of observed relevant pages ; initialize the foragers with the seed urls and set one of them to be the next forager ; repeat : start the next forager , receive the possibly relevant urls , call manage received url ( algorithm [ t : manageurl_pseudo ] ) for each received url , stop the forager when its time period is over , call manage forager ( algorithm [ t : manageforager_pseudo ] ) for this forager , and choose the next forager ; until the time is over .

a forager is initialized with parameters defining the url ordering , and either with a weblog or with a seed of urls ( algorithm [ t : initforager_pseudo ] ) . after its initialization a forager crawls in search paths , that is , after a given number of steps the search restarts , and the steps between two restarts form a path . during each path the forager takes a fixed number of steps , i.e. , it selects the next url to be visited with the url ordering algorithm .
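the score bookkeeping of the reinforcing agent can be sketched as follows . the numerical values of the reward , penalty , submission cost , initial score and the multiplication / deletion thresholds are not preserved in this excerpt , so the constants below are placeholders of our own choosing , and the `forager.multiply()` call is an assumed interface .

```python
class ReinforcingAgent:
    # placeholder constants; the paper's actual values are not reproduced here
    REWARD, PENALTY, COST = 1.0, -0.1, 0.1
    INIT_SCORE, MULTIPLY_AT, DELETE_AT = 10.0, 20.0, 0.0
    MIN_FORAGERS, MAX_FORAGERS = 2, 16

    def __init__(self, foragers):
        self.seen = set()                                  # relevant pages already received
        self.scores = {f: self.INIT_SCORE for f in foragers}

    def manage_url(self, forager, url, page_age_hours):
        """Reward or penalize one submitted url and update the forager's score."""
        self.scores[forager] -= self.COST
        if url in self.seen or page_age_hours > 24.0:
            return self.PENALTY
        self.seen.add(url)
        self.scores[forager] += self.REWARD
        return self.REWARD

    def manage_forager(self, forager):
        """Multiply or delete a forager after its time slice, based on its score."""
        if self.scores[forager] >= self.MULTIPLY_AT and len(self.scores) < self.MAX_FORAGERS:
            child = forager.multiply()        # assumed: splits the weblog, copies/draws weights
            self.scores[forager] = self.INIT_SCORE
            self.scores[child] = self.INIT_SCORE
        elif self.scores[forager] <= self.DELETE_AT and len(self.scores) > self.MIN_FORAGERS:
            del self.scores[forager]
```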
at the beginning of a path a url is selected randomly from the starting url list . this list is formed from the first 10 urls of the weblog . the weblog contains the possibly good starting urls with their associated weblog values in descending order . the weblog algorithm modifies the weblog , and thus the starting url list , before a new path is started . when a forager is restarted by the ra , after the ra has stopped it , the forager continues from the internal state in which it was stopped . the pseudo-code of step selection is given in algorithm [ t : stepselection_pseudo ] .

algorithm [ t : initforager_pseudo ] ( initialization of the forager ) .
input : a weblog or seed urls ; the url ordering parameters .
output : the initialized forager .
method : set the path step counter so that a new path is started ; set the weblog either to the input weblog or put the seed urls into the weblog with zero weblog value ; set the url ordering parameters of the url ordering algorithm .

algorithm [ t : stepselection_pseudo ] ( url selection ) .
input : the set of urls available in this step ; the set of urls visited in this path .
output : the url selected to be visited next .
method : if the path step counter has not reached the path length , select a url by url ordering ( algorithm [ t : urlordering_pseudo ] ) and increase the path step counter ; else call the weblog update ( algorithm [ t : weblog_pseudo ] ) to update the weblog , select a random url from the starting url list , reset the path step counter to 1 , and empty the frontier and visited url sets .

the url ordering algorithm selects the url of the next step from the frontier url set . the selected url is removed from the frontier and added to the visited url set to avoid loops . after downloading the pages , only those urls ( linked from the visited url ) that are not in the visited set are added to the frontier . in each step the forager downloads the page of the selected url and all pages linked from it . it sends the urls of the possibly relevant pages to the reinforcing agent . the forager receives reinforcements for any previously sent but not yet reinforced urls and calls the url ordering update algorithm with the received reinforcements . the pseudo-code of a forager is shown in algorithm [ t : forager_pseudo ] .

algorithm [ t : forager_pseudo ] ( forager ) .
input : the set of urls available in the next step ; the set of urls visited in the current path .
output : the documents sent to the ra ; the modified frontier and visited sets ; the modified weblog and url ordering weight vector .
method : repeat : call url selection ( algorithm [ t : stepselection_pseudo ] ) ; remove the selected url from the frontier and add it to the visited set ; download its page ; add the links of the page that are not yet visited to the frontier ; download the linked pages ; call page information storage ( algorithm [ t : pageinfo_urlordering_pseudo ] ) for the downloaded pages ; call document relevancy ( algorithm [ t : relevant_pseudo ] ) for all downloaded pages and send the selected pages to the reinforcing agent ; receive reinforcements for sent but not yet reinforced pages ; call url ordering update ( algorithm [ t : urlordering_update_pseudo ] ) with the received reinforcements ; until the time is over .

we conducted an 18-day-long experiment on the web to gather realistic data . we used the gathered data in simulations to compare the weblog update ( section [ sss : weblog ] ) and reinforcement learning ( section [ sss : rl ] ) algorithms .
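a single crawling step of a forager , i.e. , greedy selection by the estimated long-term profit followed by the temporal-difference correction of the weight vector , can be sketched as below . the learning rate and discount factor are placeholder values because the numbers used in the experiments are not reproduced in this excerpt ; the linear value estimate follows the scalar-product form described above .

```python
import numpy as np

def select_url(frontier, features, w):
    """Greedy LTP policy: pick the frontier url whose stored classifier
    output vector has the largest scalar product with the weight vector w."""
    return max(frontier, key=lambda u: float(np.dot(w, features[u])))

def td_update(w, x_prev, x_next, profit, alpha=0.05, gamma=0.9):
    """One temporal-difference correction of the linear LTP estimator.

    x_prev, x_next : classifier output vectors of the previous and next url
    profit         : immediate profit (reward minus penalty) for the step
    alpha, gamma   : learning rate and discount factor (placeholder values)
    """
    delta = profit + gamma * float(np.dot(w, x_next)) - float(np.dot(w, x_prev))
    return w + alpha * delta * x_prev
```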
in web experimentwe used a fleet of foragers using combination of reinforcement learning and weblog update algorithms to eliminate any biases on the gathered data .first we describe the experiment on the web then the simulations .we analyze our results at the end of this section .we ran the experiment on the web on a single personal computer with celeron 1000 mhz processor and 512 mb ram .we implemented the forager architecture ( described in section [ s : architecture ] ) in java programming language . in this experiment a fixed number of foragers were competing with each other to collect news at the cnn web site .the foragers were running in equal time intervals in a predefined order .each forager had a 3 minute time interval and after that interval the forager was allowed to finish the step started before the end of the time interval .we deployed 8 foragers using the weblog update and the reinforcement learning based url ordering update algorithms ( 8 wlrl foragers ) .we also deployed 8 other foragers using the weblog update algorithm but without reinforcement learning ( 8 wl foragers ) .the predefined order of foragers was the following : 8 wlrl foragers were followed by the 8 wl foragers .we investigated the link structure of the gathered web pages . as it is shown in fig .[ f : sf ] the links have a power - law distribution ( ) with for outgoing links and for incoming links .that is the link structure has the scale - free property .the clustering coefficient of the link structure is 0.02 and the diameter of the graph is 7.2893 .we applied two different random permutations to the origin and to the endpoint of the links , keeping the edge distribution unchanged but randomly rewiring the links .the new graph has 0.003 clustering coefficient and 8.2163 diameter .that is the clustering coefficient is smaller than the original value by an order of magnitude , but the diameter is almost the same .therefore we can conclude that the links of gathered pages form small world structure . ) . vertical axis : relative frequency of number of edges at different urls ( ) .dots and dark line correspond to outgoing links , crosses and gray line correspond to incoming links ., width=240 ] the data storage for simulation is a centralized component .the pages are stored with 2 indices ( and time stamps ) .one index is the url index , the other is the page index .multiple pages can have the same url index if they were downloaded from the same url .the page index uniquely identifies a page content and the url from where the page was download . at each page download of any foragers we stored the followings ( with a time stamp containing the time of page download ) :1 . if the page is relevant according to the ra then store `` relevant '' 2 .if the page is from a new url then store the new url with a new url index and the page s state vector with a new page index 3 . if the content of the page is changed since the last download then store the page s state vector with a new page index but keep the url index 4 . 
in both previous cases store the links of the page as links to page indices of the linked pages 1 .if a linked page is from a new url then store the new url with a new url index and the linked page s state vector with a new page index 2 .if the content of the linked page is changed since the last check then store the page s state vector with a new page index but same url index for the simulations we implemented the forager architecture in matlab .the foragers were simulated as if they were running on one computer as described in the previous section . during simulations we used the web pages that we gathered previously to generate a realistic environment( note that the links of pages point to local pages ( not to pages on the web ) since a link was stored as a link to a local page index ) : * simulated documents had the same state vector representation for url ordering as the real pages had * simulated relevant documents were the same as the relevant documents on the web * pages and links appeared at the same ( relative ) time when they were found in the web experiment - using the new url indices and their time stamps * pages and links are refreshed or changed at the same relative time as the changes were detected in the web experiment using the new page indices for existing url indices and their time stamps * simulated time of a page download was the average download time of a real page during the web experiment .we conducted simulations with two different kinds of foragers .the first case is when foragers used only the weblog update algorithm without url ordering update ( wl foragers ) .the second case is when foragers used only the reinforcement learning based url ordering update algorithm without the weblog update algorithm ( rl foragers ) .each wl forager had a different weight vector for url value estimation during multiplication the new forager got a new random weight vector .rl foragers had the same weblog with the first 10 urls of the gathered pages that is the starting url of the web experiment and the first 9 visited urls during that experiment . in both cases initially there were 2 foragers and they were allowed to multiply until reaching the population of 16 foragers .the simulation for each type of foragers were repeated 3 times with different initial weight vectors for each forager .the variance of the results show that there is only a small difference between simulations using the same kind of foragers , even if the foragers were started with different random weight vectors in each simulation .table [ t : params ] shows the investigated parameters during simulations . [ cols= " < ,< " , ] from table [ t : data ] we can conclude the followings : * rl and wl foragers have similar download efficiency , i.e. , the efficiencies from the point of view of the news site are about the same .* wl foragers have higher sent efficiencies than rl foragers , i.e. 
, the efficiency from the point of view of the ra is higher .this shows that wl foragers divide the search area better among each other than rl foragers .sent efficiency would be 1 if none of two foragers have sent the same document to the ra .* rl foragers have higher relative found url value than wl foragers .rl foragers explore more than wl foragers and rl found more urls than wl foragers did per downloaded page .* wl foragers find faster the new relevant documents in the already found clusters .that is freshness is higher and age is lower than in the case of rl foragers .[ f : efficiency ] shows other aspects of the different behaviors of rl and wl foragers .download efficiency of rl foragers has more , higher , and sharper peaks than the download efficiency of wl foragers has .that is wl foragers are more balanced in finding new relevant documents than rl foragers .the reason is that while the wl foragers remain in the found good clusters , the rl foragers continuously explore the new promising territories .the sharp peaks in the efficiency show that rl foragers _ find and recognize _ new good territories and then _ quickly collect _ the current relevant documents from there .the foragers can recognize these places by receiving more rewards from the ra if they send urls from these places .the predefined order did not influence the working of foragers during the web experiment . from fig .[ f : efficiency ] it can be seen that foragers during the 3 independent experiments did not have very different efficiencies . on fig .[ f : freshness ] we show that the foragers in each run had a very similar behavior in terms of age and freshness , that is the values remains close to each other throughout the experiments .also the results for individual runs were close to the average values in table [ t : data ] ( see the standard deviations ) . in each individual runthe foragers were started with different weight vectors , but they reached similar efficiencies and behavior .this means that the initial conditions of the foragers did not influence the later behavior of them during the simulations .furthermore foragers could not change their environment drastically ( in terms of the found relevant documents ) during a single 3 minute run time because of the short run time intervals and the fast change of environment large number of new pages and often updated pages in the new site . duringthe web experiment foragers were running in 8 wlrl , 8 wl , 8 wlrl , 8 wl , temporal order . because of the fact that initial conditions does not influence the long term performance of foragers and the fact that the foragers can not change their environment fully we can start to examine them after the first run of wlrl foragers .then we got the other extreme order of foragers , that is the 8 wl , 8 wlrl , 8 wl , 8 wlrl , temporal ordering . for the overall efficiency and behavior of foragers it did not really matterif wlrl or wl foragers run first and one could use mixed order in which after a wlrl forager a wl forager runs and after a wl forager a wlrl forager comes . however , for higher bandwidths and for faster computers , random ordering may be needed for such comparisons .our first conjecture is that selection is efficient on scale - free small world structures .lrincz and kkai and rennie et al . 
showed that rl is efficient in the task of finding relevant information on the web . here we have shown experimentally that the weblog update algorithm , i.e. , selection among starting urls , is at least as efficient as the rl algorithm . the weblog update algorithm finds as many relevant documents as rl does if both download the same number of pages . wl foragers in their fleet select more distinct urls to send to the ra than rl foragers do in their fleet ; therefore there are more relevant documents among those selected by wl foragers than among those selected by rl foragers . also the freshness and age of the found relevant documents are better for wl foragers than for rl foragers . for the weblog update algorithm , the selection among starting urls has no fine-tuning mechanism . throughout its life a forager searches for the same kind of documents it moves in the same ` direction ' in the state space of document states , determined by its fixed weight vector . the only adaptation allowed for a wl forager is to select starting urls from the already seen urls . the wl forager can not modify its ( ` directional ' ) preferences according to the newly found supply of relevant documents , i.e. , it can not turn toward regions where relevant documents are abundant . instead , a wl forager finds good relevant document sources in its own direction and forces its search to stay at those places . by chance the forager can find better sources in its own direction if the search path from a starting url is long enough . fig . [ f : efficiency ] shows that the download efficiency of the foragers does not decrease with the multiplication of the foragers ; therefore the new foragers must find new and good relevant document sources quickly after their appearance . the reinforcement learning based url ordering update algorithm is capable of fine-tuning the search of a forager by adapting the forager 's weight vector . this feature has been shown to be crucial for adapting crawling to novel environments . an rl forager moves in the direction ( in the state space of document states ) where the estimated long-term cumulated profit is the highest . because the local environment of a forager may change rapidly during crawling , it seems desirable that foragers can quickly adapt to the newly found relevant documents . however , relevant documents may appear in isolation , not forming a good relevant document source , or may appear at the wrong url by mistake . this noise of the web can derail the rl foragers from good regions : the forager may `` turn '' into less valuable directions because of its fast adaptation capabilities . our second conjecture is that selection fits sfsw better than rl . we have shown in our experiments that selection and rl have different behaviors . selection selects good information sources , which are worth revisiting , and stays at those sources as long as better sources are not found by chance . rl explores new territories and adapts to those . this adaptation can be a disadvantage when compared with the more rigid selection algorithm , which sticks to good places until ` provably ' better places are discovered . therefore wl foragers , which can not be derailed and stay in the ` niches ' they have found , can find new relevant documents faster in such already known terrains than rl foragers can . that is , freshness is higher and age is lower for relevant documents found by wl foragers than for relevant documents found by rl foragers . also , by finding good sources and staying there , wl foragers divide the search task better than rl foragers do ; this is the
reason for the higher sent efficiency of wl foragers than of rl foragers .we have rewired the network as it was described in section [ ss : real ] . this way a scale - free ( sf ) but not so small world was created . intriguingly , in this sf structure , rl foragers performed better than wl ones .clearly , further work is needed to compare the behavior of the selective and the reinforcement learning algorithms in other then sfsw environments .such findings should be of relevance in the deployment of machine learning methods in different problem domains . from the practical point of view, we note that it is an easy matter to combine the present algorithm with urls offered by search engines .also , the values reported by the crawlers about certain environments , e.g. , the environment of the url offered by search engines represent the neighborhood of that url and can serve adaptive filtering .this procedure is , indeed , promising to guide individual searches as it has been shown elsewhere .we presented and compared our selection algorithm to the well - known reinforcement learning algorithm .our comparison was based on finding new relevant documents on the web , that is in a dynamic scale - free small world environment .we have found that the weblog update selection algorithm performs better in this environment than the reinforcement learning algorithm , eventhough the reinforcement learning algorithm has been shown to be efficient in finding relevant information .we explain our results based on the different behaviors of the algorithms .that is the weblog update algorithm finds the good relevant document sources and remains at these regions until better places are found by chance .individuals using this selection algorithm are able to quickly collect the new relevant documents from the already known places because they monitor these places continuously . the reinforcement learning algorithm explores new territories for relevant documents and if it finds a good place then it collects the existing relevant documents from there .the continuous exploration and the fine tuning property of rl causes that rl finds relevant documents slower than the weblog update algorithm . in our future workwe will study the combination of the weblog update and the rl algorithms .this combination uses the wl foragers ability to stay at good regions with the rl foragers fine tuning capability . 
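the degree-preserving rewiring mentioned above , which destroys the small-world property while keeping the scale-free degree distribution , can be sketched with networkx as follows . for simplicity the sketch works on an undirected view of the link graph , whereas the experiment in section [ ss : real ] permuted the endpoints of the directed links , so this is an illustration rather than a reproduction of the exact procedure .

```python
import networkx as nx

def rewire_and_compare(g, swaps_per_edge=10, seed=0):
    """Degree-preserving rewiring of (an undirected view of) the link graph,
    followed by a comparison of clustering coefficient and mean path length."""
    und = nx.Graph(g)                      # undirected simple view, for illustration
    rewired = und.copy()
    n_swaps = swaps_per_edge * rewired.number_of_edges()
    nx.double_edge_swap(rewired, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)

    def summary(graph):
        # path lengths are measured on the largest connected component
        giant = graph.subgraph(max(nx.connected_components(graph), key=len))
        return {
            "clustering": nx.average_clustering(graph),
            "mean_path_length": nx.average_shortest_path_length(giant),
        }

    return {"original": summary(und), "rewired": summary(rewired)}
```

a strong drop in the clustering coefficient at roughly unchanged path length is the signature of losing the small-world structure while keeping the degree sequence .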
with such a combination , foragers will be able to go to new sources with the rl algorithm and monitor the already found good regions with the weblog update algorithm . we will also study the foragers in a simulated environment which is not a small world . the clusters of a small world environment make it easier for wl foragers to stay at good regions , while the small diameter due to the long-distance links of a small world environment makes it easier for rl foragers to explore different regions . this work will measure the extent to which the different foragers rely on the small world property of their environment . this material is based upon work supported by the european office of aerospace research and development , air force office of scientific research , air force research laboratory , under contract no . fa8655-03-1-3036 . this work is also supported by the national science foundation under grants no . int-0304904 and no . any opinions , findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the european office of aerospace research and development , air force office of scientific research , air force research laboratory .

j. edwards , k. mccurley , and j. tomlin , _ an adaptive model for optimizing performance of an incremental web crawler _ , proceedings of the tenth international conference on world wide web , 2001 , pp . 106-113 .
b. gábor , zs . palotai , and a. lőrincz , _ value estimation based computer-assisted data mining for surfing the internet _ , int . joint conf . on neural networks ( ijcnn 2004 ) , ieee , budapest , hungary , 26-29 july 2004 .
t. joachims , _ a probabilistic analysis of the rocchio algorithm with tfidf for text categorization _ , proceedings of icml-97 , 14th international conference on machine learning , morgan kaufmann , san francisco , 1997 , pp . 143-151 .
k. tuyls , d. heytens , a. nowe , and b. manderick , _ extended replicator dynamics as a key to reinforcement learning in multi-agent systems _ , ecml 2003 , lnai 2837 , springer-verlag , berlin , 2003 .

| in this paper we compare the performance characteristics of our selection based learning algorithm for web crawlers with the characteristics of the reinforcement learning algorithm . the task of the crawlers is to find new information on the web . the selection algorithm , called weblog update , modifies the starting url lists of our crawlers based on the found urls containing new information . the reinforcement learning algorithm modifies the url orderings of the crawlers based on the received reinforcements for submitted documents . we performed simulations based on data collected from the web . the collected portion of the web is typical and exhibits scale-free small world ( sfsw ) structure . we have found that on this sfsw , the weblog update algorithm performs better than the reinforcement learning algorithm . it finds the new information faster than the reinforcement learning algorithm and has a better new information / all submitted documents ratio . we believe that the advantages of the selection algorithm over the reinforcement learning algorithm are due to the small world property of the web . |
the purpose of this work is to develop a positivity-preserving version of the lax-wendroff discontinuous galerkin method for the compressible euler equations on unstructured meshes . the compressible euler equations form a system of hyperbolic conservation laws that can be written as follows :
$$ \frac{\partial}{\partial t}\begin{bmatrix} \rho \\ \rho\,\mathbf{u} \\ \mathcal{E} \end{bmatrix} + \nabla\cdot\begin{bmatrix} \rho\,\mathbf{u} \\ \rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbb{I} \\ \mathbf{u}\left(\mathcal{E} + p\right) \end{bmatrix} = 0 . $$
the _ conserved _ variables are the mass density , $\rho$ , the momentum density , $\rho\mathbf{u}$ , and the energy density , $\mathcal{E}$ ; the _ primitive _ variables are the mass density , $\rho$ , the fluid velocity , $\mathbf{u}$ , and the pressure , $p$ . the energy is related to the primitive variables through the equation of state
$$ \mathcal{E} = \frac{p}{\gamma - 1} + \frac{1}{2}\,\rho\,\|\mathbf{u}\|^{2} , $$
where the constant $\gamma$ is the ratio of specific heats ( aka , the _ gas constant _ ) . the compressible euler equations are an important mathematical model in the study of gases and plasma . attempts at numerically solving these equations have led to a plethora of important historical advances in the development of numerical analysis and scientific computing . the focus of this work is on high-order discontinuous galerkin ( dg ) methods , which were originally developed for general hyperbolic conservation laws by cockburn , shu , and collaborators in a series of papers . the purpose of this section is to set the notation used throughout the paper and to briefly describe the dg spatial discretization . let $\Omega$ be a polygonal domain with boundary $\partial\Omega$ . the domain is discretized via a finite set of non-overlapping elements , $\mathcal{T}_i$ , such that $\overline{\Omega} = \cup_i\,\overline{\mathcal{T}}_i$ . let $P^{\,\deg}$ denote the set of polynomials from $\mathbb{R}^{\dim}$ to $\mathbb{R}$ with maximal polynomial degree $\deg$ . let $\mathcal{W}^{h}$ denote the _ broken _ finite element space on the mesh :
$$ \mathcal{W}^{h} := \left\{ w^h \in \left[ L^{2}(\Omega) \right]^{m_{\mathrm{eq}}} \; : \; w^h\bigl|_{\mathcal{T}_i} \in \left[ P^{\,\deg} \right]^{m_{\mathrm{eq}}} , \ \forall\, \mathcal{T}_i \in \mathcal{T}^{h} \right\} , $$
where $h$ is the mesh spacing . the above expression means that $w^h$ has $m_{\mathrm{eq}}$ components , each of which , when restricted to some element $\mathcal{T}_i$ , is a polynomial of degree at most $\deg$ , and no continuity is assumed across element edges ( or faces in 3d ) . the approximate solution on each element at time $t^n$ is of the form
$$ q^h\bigl(t^n , \mathbf{x}(\xi)\bigr)\Bigl|_{\mathcal{T}_i} = \sum_{\ell = 1}^{L} Q^{(\ell)\,n}_{i}\ \varphi^{(\ell)}(\xi) , $$
where $L$ is the number of legendre polynomials and $\varphi^{(\ell)}$ are the legendre polynomials defined on the reference element $\mathcal{T}_0$ in terms of the reference coordinates $\xi$ . the legendre polynomials are orthonormal with respect to the following inner product :
$$ \frac{1}{|\mathcal{T}_0|}\int_{\mathcal{T}_0} \varphi^{(k)}(\xi)\,\varphi^{(\ell)}(\xi)\ d\xi = \begin{cases} 1 & k = \ell , \\ 0 & k \ne \ell . \end{cases} $$
we note that independent of the polynomial degree , the spatial dimension , and the type of element , the lowest order legendre polynomial is always $\varphi^{(1)} \equiv 1$ . this makes the first legendre coefficient the cell average :
$$ Q^{(1)\,n}_{i} = \frac{1}{|\mathcal{T}_0|}\int_{\mathcal{T}_0} q^h\bigl(t^n , \mathbf{x}(\xi)\bigr)\Bigl|_{\mathcal{T}_i}\ \varphi^{(1)}(\xi)\ d\xi =: \bar{q}^{\,n}_{i} . $$
the most common approach for time-advancing dg spatial discretizations is via explicit runge-kutta time-stepping ; the resulting combination of time and space discretizations is often referred to as the `` rk-dg '' method . the primary advantage of this choice of time stepping is that explicit rk methods are easy to implement , they can be constructed to be low-storage , and a subclass of these methods has the so-called strong stability preserving ( ssp ) property , which is important for defining a scheme that is provably positivity-preserving .
however , there are no explicit runge - kutta methods that are ssp for orders greater than four .the main difficulty with runge - kutta methods is that they typically require many stages ; and therefore , many communications are needed per time step .one direct consequence of the communication required at each rk stage is that is difficult to combine rk - dg with locally adaptive mesh refinement strategies that simultaneously refine in both space and time .the key piece of technology required in locally adaptive dg schemes is local time - stepping ( see e.g. , dumbser et al . ) . local time - stepping is easier to accomplish with a single - stage , single - step ( lax - wendroff ) method than with a multi - stage runge - kutta scheme . for these reasons thereis interest from discontinuous galerkin theorists and practitioners in developing single - step time - stepping techniques for dg ( see e.g. , ) , as well as hybrid multistage multiderivative alternatives . in this work ,we construct a numerical scheme that uses a lax - wendroff time discretization that is coupled with the discontinuous galerkin spatial discretization . in subsequent discussions in this paperwe demonstrate the advantages of switching to single - stage and single - stage time - stepping in regards to enforcing positivity on arbitrary meshes . in simulations involving strong shocks , high - order schemes ( i.e. more than first - order ) for the compressible euler equations generally create nonphysical undershoots ( below zero ) in the density and/or pressure. these undershoots typically cause catastrophic numerical instabilities due to a loss of hyperbolicity .moreover , for many applications these positivity violations exist even when the equations are coupled with well - understood total variation diminishing ( tvd ) or total variation bounded ( tvb ) limiters .the chief goal of the limiting scheme developed in this work is to address positivity violations in density and pressure , and in particular , to accomplish this task with a high - order scheme . in the dg literature ,the most widely used strategy to maintain positivity was developed by zhang and shu in a series of influential papers .the basic strategy of zhang and shu for a positivity - preserving rk - dg method can be summarized as follows : * step 0 . * : : write the current solution in the form where is yet - to - be - determined . represents the unlimited solution .* step 1 . * : : find the largest value of , where , such that satisfies the appropriate positivity conditions at some appropriately chosen quadrature points and limit the solution .* step 2 . * : : find the largest stable time - step that guarantees that with a forward euler time that the cell average of the new solution remains positive .* step 3 . *: : rely on the fact that strong stability - preserving runge - kutta methods are convex combinations of forward euler time steps ; and therefore , the full method preserves the positivity of cell averages ( under some slightly modified maximum allowable time - step ) . 
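step 1 of the rescaling strategy above amounts to shrinking the deviation from the cell average until density and pressure are positive at the chosen quadrature points . a minimal numpy sketch in 2d is given below ; the positivity floor , the bisection loop , and the gas constant value are our own illustrative choices rather than values prescribed by the paper .

```python
import numpy as np

def pressure(q, gamma=1.4):
    """Pressure from 2d conserved variables q = (rho, rho*u, rho*v, energy)."""
    rho, m1, m2, energy = q
    return (gamma - 1.0) * (energy - 0.5 * (m1**2 + m2**2) / rho)

def rescaling_parameter(q_bar, q_pts, eps=1e-12, gamma=1.4):
    """Largest theta in [0, 1] so that q_bar + theta*(q - q_bar) has positive
    density and pressure at every quadrature point q_pts[:, k].  The cell
    average q_bar is assumed to satisfy the positivity conditions already."""
    theta = 1.0
    for k in range(q_pts.shape[1]):
        q = q_pts[:, k]
        # density varies linearly along the segment, so the bound is explicit
        if q[0] < eps:
            theta = min(theta, (q_bar[0] - eps) / (q_bar[0] - q[0]))
        # pressure: the admissible thetas form an interval containing 0;
        # locate its right endpoint by bisection if the current theta fails
        if pressure(q_bar + theta * (q - q_bar), gamma) < eps:
            lo, hi = 0.0, theta
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                if pressure(q_bar + mid * (q - q_bar), gamma) >= eps:
                    lo = mid
                else:
                    hi = mid
            theta = lo
    return theta
```

the returned value plays the role of the cell-wise scaling parameter that multiplies the higher-order moments of the galerkin expansion .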
for a lax - wendroff time discretization ,numerical results indicate that the limiting found in step 1 is insufficient to retain positivity of the solution , even for simple 1d advection .therefore , the strategy we pursue in this work will still contain an equivalent step 1 ; however , in place of step 2 and step 3 above , we will make use of a parameterized flux , sometimes also called a flux corrected transport ( fct ) scheme , to maintain positive cell averages after taking a single time step . in doing so , we avoid introducing additional time step restrictions that often appear ( e.g. , in step 2 . above ) when constructing a positivity - preserving scheme based on runge - kutta time stepping .this idea of computing modified fluxes by combining a stable low - order flux with a less robust high - order flux is relatively old , and perhaps originates with harten and zwas and their _ self adjusting hybrid scheme _ .the basic idea is the foundation of the related _ flux corrected transport _ ( fct ) schemes of boris , book and collaborators , where fluxes are adjusted in order to guarantee that average values of the unknown are constrained to lie within locally defined upper and lower bounds .this family of methods is used in an extensive variety of applications , ranging from seismology to meteorology .a thorough analysis of some of the early methods is conducted in .identical to modern maximum principle preserving ( mpp ) schemes , fct can be formulated as a global optimization problem where a `` worst case '' scenario assumed in order to decouple the previously coupled degrees of freedom .here we do not attempt to use fct to enforce any sort of local bounds ( in the sense of developing a shock - capturing limiter ) , instead we leverage these techniques in order to retain positivity of the density and pressure associated to ; such approaches have recently received renewed interest in the context of weighted essentially non - oscillatory ( weno ) methods . to summarize , our limiting scheme draws on ideas from the two aforementioned families of techniques that are well established in the literature .first , we start with the now well known ( high - order ) pointwise limiting developed for discontinuous galerkin methods , and second , we couple this with the very large family of flux limiters .( developed primarily for finite - difference ( fd ) and finite - volume ( fv ) schemes ) .the compressible euler equations can be written compactly as where the _ conserved variables _ are and the _ flux function _ is where and . the basic positivity limiting strategy proposed in this work is summarized below .some important details are omitted here , but we elaborate on these details in subsequent sections . *step 0 . * : : on each element we write the solution as where is yet - to - be - determined . represents the _ unlimited _ solution . *step 1 . * : : assume that this solution is positive in the mean .that is , we assume for all that and a consequence of these assumptions is that .* step 2 . * : : find the largest value of , where , such that the density and pressure are positive at some suitably defined quadrature points .this step is elaborated upon in [ subsec : mpp - limiter ] .* step 3 . 
* : : construct time - averaged fluxes through the lax - wendroff procedure .that is , we start with the exact definition of the time - average flux : and approximate this via a taylor series expansion around : all time derivatives in this expression are replaced by spatial derivatives using the chain rule and the governing pde .the approximate time - averaged flux is first evaluated at some appropriately chosen set of quadrature points in fact , the same quadrature points as used in step 2 and then , using appropriate quadrature weights , summed together to define a high - order flux , at both interior and boundary quadrature points .thanks to step 2 , all quantities of interest used to construct this expansion are positive at each quadrature point .* step 4 . * : : time step the solution so that cell averages are guaranteed to be positive .that is , we update the _ cell averages _ via a formula of the form where is an outward - pointing ( relative to ) normal vector to edge with the property that is the length of edge , and the numerical flux on edge , , is a convex combination of a high - order flux , , and a low - order flux : the low - order flux , , is based on the ( approximate ) solution to the riemann problem defined by cell averages only , and the `` high - order '' flux , , is constructed after integrating via gaussian quadrature the ( approximate ) riemann solutions at quadrature points along the edge : where are the gaussian quadrature weights for quadrature with points and are the numerical fluxes at each of the quadrature points .note that this sum has only a single summand in the one - dimensional case .the selection of is described in more detail in [ subsec : mpp - limiter ] .this step guarantees that the solution retains positivity ( in the mean ) for a single time step .* step 5 . * : : apply a shock - capturing limiter .the positivity - preserving limiter is designed to preserve positivity of the solution , but it fails at reducing spurious oscillations , and therefore a shock - capturing limiter needs to be added .there are many choices of limiters available ; we use the limiter recently developed in because of its ability to retain genuine high - order accuracy , and its ability to push the polynomial order to arbitrary degree without modifying the overall scheme .* step 6 . * : : repeat all of these steps to update the solution for the next time step .each step of this process is elaborated upon throughout the remainder of this paper .the end result is that our method is the first scheme to simultaneously obtain all of the following properties : * * high - order accuracy . *the proposed method is third - order in space and time , and can be extended to arbitrary order . ** positivity - preserving . *the proposed limiter is provably positivity - preserving for the density and pressure , at a finite set of point values , for the entire simulation . * * single - stage , single - step .* we use a lax - wendroff discretization for time stepping the pde , and therefore we only need one communication per time step . ** unstructured meshes . 
*because we use the discontinuous galerkin method for our spatial discretization and all of our limiters are sufficiently local , we are able to run simulations with dg - fem on both cartesian and unstructured meshes .* * no additional time - step restrictions .* because we do not rely on a ssp runge - kutta scheme , we do not have to introduce additional time - step restrictions to retain positivity of the solution .this differentiates us from popular positivity - preserving limiters based on rk time discretizations .the remainder of this paper has the following structure .the lax - wendroff dg ( lxw - dg ) method is described in [ sec : lwdg ] , where we view the scheme as a method of modified fluxes .the positivity - preserving limiter is described in [ sec : positivity ] , where the discussion of the limiter is broken up into two parts : ( 1 ) the moment limiter ( [ sec : zh - limiter ] ) and ( 2 ) the parameterized flux limiter ( [ subsec : mpp - limiter ] ) . in [ sec : numerical - results ] we present numerical results on several test cases in 1d , 2d cartesian , and 2d unstructured meshes .finally we close with conclusions and a discussion of future work in [ sec : conclusions ] .the lax - wendroff discontinuous galerkin ( lxw - dg ) method serves as the base scheme for the method developed in this work .it is the result of an application of the cauchy - kovalevskaya procedure to hyperbolic pde : we start with a taylor series in time , then we replace all time derivatives with spatial derivatives via the pde .finally , a galerkin projection discretizes the overall scheme , where a single spatial derivative is reserved for the fluxes in order to perform integration - by - parts .we review the lax - wendroff dg scheme for the case of a general nonlinear conservation law that is autonomous in space and time in multiple dimensions .the current presentation illustrates the fact that lax - wendroff schemes can be viewed as a method of modified fluxes , wherein higher - order information about the pde is directly incorporated by simply redefining the fluxes that would typically be used in an `` euler step . ''we consider a generic conservation law of the form where the matrix is diagonalizable for every unit length vector and in the domain of interest .formal integration of over an interval ] such that for all quadrature points and all ] that guarantees for all ] .that is , for all , and ] .finally , we define the scaling parameter for the entire cell as and use this value to limit the higher order coefficients in the galerkin expansions of the density , momentum , and energy displayed in eqns . and .this definition gives us the property that and at each quadrature point .this process is repeated ( locally ) in each element in the mesh . as a side benefit to guaranteeing that the density and pressure are positive, we have the following remark .if and are positive at each quadrature point , then is also positive at each quadrature point .divide by and add to both sides .this concludes the first of two steps for retaining positivity of the solution .we now move on to the second and final step , which takes into account the temporal evolution of the solver .the procedure carried out for the flux limiter presented in this section is very similar to recent work for finite volume as well as finite difference methods . 
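as an illustration of the `` modified flux '' viewpoint described above , the second-order time-averaged flux for a 1d scalar conservation law $q_t + f(q)_x = 0$ can be written down directly by replacing the time derivative through the pde . the sketch below does this for burgers ' flux ; it is our own 1d illustration and not the paper 's third-order , multi-dimensional construction .

```python
def burgers_flux(q):
    return 0.5 * q * q

def burgers_wave_speed(q):        # f'(q)
    return q

def lax_wendroff_flux(q, dqdx, dt):
    """Second-order time-averaged flux for q_t + f(q)_x = 0:
    F ~ f(q) + (dt/2) * f_t, and f_t = f'(q) q_t = -f'(q)^2 * q_x,
    so F ~ f(q) - (dt/2) * f'(q)^2 * q_x."""
    a = burgers_wave_speed(q)
    return burgers_flux(q) - 0.5 * dt * a * a * dqdx
```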
when compared to the finite difference methods ,the main difference in this discussion is that the expressions do not simplify as much because quantities such as the edge lengths must remain in the expressions .this makes them more similar to work on finite volume schemes .overall however , there is little difference between flux limiters on cartesian and unstructured meshes , and between flux limiters for finite difference , finite volume ( fv ) , and discontinuous galerkin ( dg ) schemes .this is because the updates for the cell average in a dg solver can be made to look identical to the update for a fv solver , and once flux interface values are identified , a conservative fd method can be made to look like a fv solver , albeit with a different stencil for the discretization .all of the aforementioned papers rely on the result of perthame and shu , which states that a first - order finite volume scheme ( i.e. , one that is based on a piecewise constant representation with forward euler time - stepping ) that uses the lax - friedrichs ( lxf ) numerical flux is positivity - preserving under the usual cfl condition . similar to previous work , we leverage this idea and incorporate it into a flux limiting procedure . here , the focus is on lax - wendroff discontinuous galerkin schemes . in this work we write out the details of the limiting procedure only for the case of 2d triangular elements .however , all of the formulas generalize to higher dimensions and cartesian meshes .to begin , we consider the euler equations and a mesh that fits the description given in [ sec : dg_spatial ] .after integration over a single cell , , and an application of the divergence theorem , we see that the exact evolution equation for the cell average of the density is given by where is the outward pointing ( relative to ) unit normal to the boundary of . applying to this equation a first - order finite volume discretization using the lax - friedrichs flux yields andthe lax - friedrichs flux is where is an outward - pointing ( relative to ) normal vector to edge with the property that is the length of edge , and the index refers to the solution on edge on the exterior side of .we use a _ global _ wave speed , because this flux defines a provably positivity - preserving scheme .other fluxes can be used , provided they are positivity - preserving ( in the mean ) .recall that the update for the lxw - dg method is given in .the numerical edge flux , , is based on the temporal taylor series expansion of the fluxes .the update for the cell averages takes the form : which in practice needs to be replaced by a numerical quadrature along the edges . applying guassianquadrature along each edge produces the following edge value ( or face value in the case of 3d ) : where and are gaussian quadrature points and weights , respectively , for integration along edge .this allows us to write the update for the cell average in the lax - wendroff dg method in a similar fashion to that of the lax - friedrichs solver , but this time we have higher - order fluxes : next we define a parameterized flux on edge by where ] such that the average pressure as defined by and based on the average state : is positive .note that the average pressure is always positive when , and if the average pressure is positive for , then the average pressure will be positive for any .this is because the pressure is a convex function of . 
*( c ) * rescale the edge values ( when compared to the neighboring elements ) as follows: loop over all elements; for each of the three edges that make up an element, determine the damping coefficients; then update the cell averages using the parameterized fluxes, as well as the high-order moments, where exact integration is replaced by numerical quadrature. [figure omitted: panels (a)-(g), each showing a triangular mesh element with labelled vertices and edges; the tikz source has been removed.] extensions to 2d cartesian, 3d cartesian, and 3d tetrahedral mesh elements follow directly from what is presented here, and require considering flux values along each of the edges / faces of a given element. all of the results presented in this section are implemented in the open-source software package dogpack. in addition, the positivity limiter described thus far is not designed to handle shocks, and therefore an additional limiter needs to be applied in order to prevent spurious oscillations from developing (e.g., in problems that contain shocks but have large densities). there are many options available for this step, but in this work we supplement the positivity-preserving limiter presented here with the recent shock-capturing limiter developed in, in order to navigate shocks that develop in the solution. specifically, we use the version of this limiter that works with the primitive variables, and we set the parameter. in our experience, this limiter with these parameters offers a good balance between damping oscillations and maintaining sharply resolved solutions. additionally, we point out that extra efficiency can be realized by locally storing quantities computed for the aforementioned positivity limiter as well as this shock-capturing limiter. unless otherwise noted, these examples use a cfl number of with a order lax-wendroff time discretization. all of the examples have the positivity-preserving and shock-capturing limiters turned on.
in this section we present some standard one - dimensional problems that can be found in and references therein .these problems are designed to break codes that do not have a mechanism to retain positivity of the density and pressure , but with this limiter , we are able to successfully simulate these problems .our first example is the double rarefaction problem that can be found in .this is a riemann problem with initial conditions given by and .the solution involves two rarefaction waves that move in opposite directions that leave near zero density and pressure values in the post shock regime .we present our solution on a mesh with a course resolution of , as well as a highly refined solution with .our results , shown in figure [ fig:1ddoublemach ] , are comparable to those obtained in other works , however the shock - capturing limiter we use is not very diffusive and therefore there is a small amount of oscillation visible in the solution at the lower resolution .however this oscillation vanishes for the more refined solution .this example is a simple one - dimensional model of an explosion that is difficult to simulate without aggressive ( or positivity - preserving ) limiting .the initial conditions involve one central cell with a large amount of energy buildup that is surrounded by a large area of undisturbed air .these initial conditions are supposed to approximate a delta function of energy .as time advances , a strong shock waves emanates from this central region and they move in opposite directions .this leaves the central post - shock regime with near zero density .the initial conditions are uniform in both density and velocity , with and .the energy takes on the value in the central cell and in every other cell .this problem is explored extensively by sedov , and in his classical text gives an exact solution that we use to construct the exact solution underneath our simulation .we show our solution in figure [ fig:1dsedov ] , and we point out that our results are quite good especially since we use such a coarse resolution of size .[ fig:1dsedov ] here , we highlight the fact that this solver is able to operate on both cartesian and unstructured meshes . we first verify the high - order accuracy of the proposed scheme . for problems wherethe density and pressure are far away from zero , the limiters proposed in this work `` turn off '' , and therefore have no effect on the solution . in order to investigate the effect of this positivity preserving limiter, we simulate a smooth problem where the solution has regions that are nearly zero .this is similar to the smooth test case considered by , and . to this end , we consider initial conditions defined by on a computational domain of \times[0,1] ] . the step is the region \times [ 0,6]$ ]our boundary conditions are transparent everywhere except above the step where they are inflow and on the surface of the step where we used solid wall .our initial conditions have the shock located above the step at .the problem is typically run out to , and if a positivity limiter is not used the solution develops negative density and pressure values , which causes the simulation to fail . the solution shown in figure [ fig : shockdiffstruct ]is run on a cartesian mesh .next , we run the same problem from [ ex : shockdiffstruct ] , but we discretize space using an unstructured triangular mesh with cells . 
the results are shown in figure [ fig : shockdiffunstruct ] , and indicate that the unstructured solver behaves similarly to the cartesian one .this final shock - diffraction test problem is very similar to the previous test problems , however it must be run on an unstructured triangular mesh because the wedge involved in this problem is triangular ( with a degree angle ) .our domain for this problem is given by \times[0,11 ] \setminus [ 0,3.4 ] \cup [ 0,3.4 ] \times \left[\frac{6.0}{3.4}x , 6.0\right].\ ] ] again the boundary conditions are transparent everywhere except above the step where they are inflow and on the surface of the step where they are reflective solid wall boundary conditions .in addition , the initial conditions for this problem are also slightly different that those in the previous example . here, we have a mach shock located above the step at , and undisturbed air in the rest of the domain with and .this problem is run on an unstructured mesh with a total of cells .these results are presented in figure [ fig : shockdiffwedge ] .in this work we developed a novel positivity - preserving limiter for the lax - wendroff discontinuous galerkin ( lxw - dg ) method .our results are high - order and applicable for unstructured meshes in multiple dimensions .positivity of the solution is realized by leveraging two separate ideas : the moment limiting work of zhang and shu , as well as the flux corrected transport work of xu and collaborators .the additional shock capturing limiter , which is required to obtain non - oscillatory results , is the one recently developed by the current authors .numerical results indicate the robustness of the method , and are promising for future applications to more complicated problems such as the ideal magnetohydrodynamics equations .future work includes introducing source terms to the solver , as well as pushing these methods to higher orders ( e.g. , -order ) , but that requires either ( a ) an expedited way of computing higher derivatives of the solution , or ( b ) rethinking how runge - kutta methods are applied in a modified flux framework ( e.g. , ) .p. bochev , d. ridzal , g. scovazzi , and m. shashkov .formulation , analysis and numerical study of an optimization - based conservative interpolation ( remap ) of scalar fields for arbitrary lagrangian - eulerian methods ., 230(13):51995225 , 2011 .b. cockburn , g.e .karniadakis , and c .- w .the development of discontinuous galerkin methods . in _ discontinuous galerkin methods ( newport , ri , 1999 )_ , volume 11 of _ lect .notes comput .sci . eng ._ , pages 350 .springer , berlin , 2000 .m. dumbser , d.s .balsara , e.f .toro , and c .- d .munz . a unified framework for the construction of one - step finite volume and discontinuous galerkin schemes on unstructured meshes ., 227(18):82098253 , 2008 .ruuth and r.j .two barriers on strong - stability - preserving time discretization methods . in _ proceedings of the fifth international conference on spectral and high order methods ( icosahom-01 ) ( uppsala )_ , volume 17 , pages 211220 , 2002 .seal , q. tang , z. xu , and a.j . christlieb .an explicit high - order single - stage single - step positivity - preserving finite difference weno method for the compressible euler equations . ,pages 120 , 2015 .titarev and e.f .ader : arbitrary high order godunov approach . in _ proceedings of the fifth international conference on spectral and high order methods ( icosahom-01 ) ( uppsala ) _ , volume 17 , pages 609618 , 2002 . 
| this work introduces a single - stage , single - step method for the compressible euler equations that is provably positivity - preserving and can be applied on both cartesian and unstructured meshes . this method is the first case of a single - stage , single - step method that is simultaneously high - order , positivity - preserving , and operates on unstructured meshes . time - stepping is accomplished via the lax - wendroff approach , which is also sometimes called the cauchy - kovalevskaya procedure , where temporal derivatives in a taylor series in time are exchanged for spatial derivatives . the lax - wendroff discontinuous galerkin ( lxw - dg ) method developed in this work is formulated so that it looks like a forward euler update but with a high - order time - extrapolated flux . in particular , the numerical flux used in this work is a linear combination of a low - order positivity - preserving contribution and a high - order component that can be damped to enforce positivity of the cell averages for the density and pressure for each time step . in addition to this flux limiter , a moment limiter is applied that forces positivity of the solution at finitely many quadrature points within each cell . the combination of the flux limiter and the moment limiter guarantees positivity of the cell averages from one time - step to the next . finally , a simple shock capturing limiter that uses the same basic technology as the moment limiter is introduced in order to obtain non - oscillatory results . the resulting scheme can be extended to arbitrary order without increasing the size of the effective stencil . we present numerical results in one and two space dimensions that demonstrate the robustness of the proposed scheme . |
shannon s entropy quantifies information .it measures how much uncertainty an observer has about an event being produced by a random system .another important concept in the theory of information is the mutual information .it measures how much uncertainty an observer has about an event in a random system * x * after observing an event in a random system * y * ( or vice - versa ) .mutual information is an important quantity because it quantifies not only linear and non - linear interdependencies between two systems or data sets , but also is a measure of how much information two systems exchange or two data sets share . due to these characteristics, it became a fundamental quantity to understand the development and function of the brain , to characterise and model complex systems or chaotic systems , and to quantify the information capacity of a communication system . when constructing a model of a complex system ,the first step is to understand which are the most relevant variables to describe its behaviour .mutual information provides a way to identify those variables .however , the calculation of mutual information in dynamical networks or data sets faces three main difficulties .mutual information is rigorously defined for random memoryless processes , only .in addition , its calculation involves probabilities of significant events and a suitable space where probability is calculated .the events need to be significant in the sense that they contain as much information about the system as possible .but , defining significant events , for example the fact that a variable has a value within some particular interval , is a difficult task because the interval that provides significant events is not always known .finally , data sets have finite size .this prevents one from calculating probabilities correctly . as a consequence , mutual informationcan often be calculated with a bias , only . in this work ,we show how to calculate the amount of information exchanged per unit of time [ eq .( [ mir_introduction ] ) ] , the so called mutual information rate ( mir ) , between two arbitrary nodes ( or group of nodes ) in a dynamical network or between two data sets .each node representing a d - dimensional dynamical system with state variables .the trajectory of the network considering all the nodes in the full phase space is called `` attractor '' and represented by .then , we propose an alternative method , similar to the ones proposed in refs . , to calculate significant upper and lower bounds for the mir in dynamical networks or between two data sets , in terms of lyapunov exponents , expansion rates , and capacity dimension .these quantities can be calculated without the use of probabilistic measures . as possible applications of our bounds calculation, we describe the relationship between synchronisation and the exchange of information in small experimental networks of coupled double - scroll circuits . in previous works of refs . , we have proposed an upper bound for the mir in terms of the positive conditional lyapunov exponents of the synchronisation manifold . as a consequence , this upper bound could only be calculated in special complex networks that allow the existence of complete synchronisation . 
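A minimal numerical sketch of the mutual information just defined is given below for two measured scalar series: the (x, y) plane is partitioned into n_boxes x n_boxes equal cells and the marginal and joint probabilities are estimated by simple counting. The binning choice and the base-2 logarithm (bits) are conventions of the illustration.

```python
import numpy as np

def mutual_information(x, y, n_boxes):
    """
    Shannon mutual information (in bits) of two scalar time series,
    estimated on a grid of n_boxes x n_boxes equal cells covering the
    projection of the attractor onto the (x, y) plane.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=n_boxes)
    pxy /= pxy.sum()                 # joint probabilities P_XY(i, j)
    px = pxy.sum(axis=1)             # marginal probabilities P_X(i)
    py = pxy.sum(axis=0)             # marginal probabilities P_Y(j)

    mask = pxy > 0                   # convention: 0 * log 0 = 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / np.outer(px, py)[mask]))
```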
in the present work, the proposed upper bound can be calculated to any system ( complex networks and data sets ) that admits the calculation of lyapunov exponents .we assume that an observer can measure only one scalar time series for each one of two chosen nodes .these two time series are denoted by and and they form a bidimensional set , a projection of the `` attractor '' into a bidimensional space denoted by . to calculate the mir in higher - dimensional projections , see supplementary information .assume that the space is coarse - grained in a square grid of boxes with equal sides , so .mutual information is defined in the following way . given two random variables , * x * and * y * , each one produces events and with probabilities and , respectively , the joint probability between these events is represented by .then , mutual information is defined as = } ] , and } ] , decays to zero , the probability of having a point in that is mapped to is equal to the probability of being in times the probability of being in .that is typically what happens in random processes . if the measure is invariant , then =\mu(\sigma_{\omega}) ] , = } ] , and , , and represent the probability of the sampled trajectory points to fall in the line of the grid , in the column of the grid , and in the box of the grid , respectively .due to the time invariance of the set assumed to exist , the probability measure of the non - sampled trajectory is equal to the probability measure of the sampled trajectory . if a system that has a time invariant measure is observed ( sampled ) once every time interval , the observed set has the same natural invariant density and probability measure of the original set . as a consequence ,if has a time invariant measure , the probabilities , , and ( used to calculate ) are equal to , , and . consequently , , , and , and therefore . substituting into eq .( [ original_mir_sampled ] ) , we finally arrive to where between two nodes is calculated from eq .( [ is ] ) .therefore , in order to calculate the mir , we need to estimate the time for which the correlation of the system approaches zero and the probabilities , , of the experimental non - sampled experimental points to fall in the line of the grid , in the column of the grid , and in the box of the grid , respectively .consider that our attractor is generated by a 2d expanding system that possess 2 positive lyapunov exponents and , with .imagine a box whose sides are oriented along the orthogonal basis used to calculate the lyapunov exponents .then , points inside the box spread out after a time interval to along the direction from which is calculated . at , , which provides in eq .( [ t ] ) , since .these points spread after a time interval to along the direction from which is calculated .after an interval of time , these points spread out over the set .we require that for , the distance between these points only increases : the system is expanding .imagine that at , fictitious points initially in a square box occupy an area of .then , the number of boxes of sides that contain fictitious points can be calculated by . 
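Given the mutual information on the epsilon-grid, the MIR follows by dividing by the time T after which the system loses the memory of its initial condition. The rough sketch below reuses the mutual_information helper from the previous snippet and estimates T as the first lag at which the autocorrelation drops below a small threshold; this threshold criterion is only an illustrative proxy for the correlation-decay time discussed in the text, which is of order ln(1/epsilon)/lambda_1 when the largest Lyapunov exponent dominates the spreading of points.

```python
import numpy as np

def decorrelation_time(x, threshold=0.05, max_lag=None):
    """
    First lag at which the normalized autocorrelation of the series x
    falls below `threshold` (an illustrative proxy for the time T after
    which memory of the initial condition is lost).
    """
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    max_lag = max_lag or n // 2
    c0 = np.dot(x, x) / n
    for lag in range(1, max_lag):
        c = np.dot(x[:-lag], x[lag:]) / (n - lag)
        if abs(c / c0) < threshold:
            return lag
    return max_lag

def mutual_information_rate(x, y, n_boxes):
    """MIR estimate: mutual information per decorrelation time (bits/sample)."""
    # mutual_information(x, y, n_boxes) is the helper from the previous sketch
    T = max(decorrelation_time(x), decorrelation_time(y))
    return mutual_information(x, y, n_boxes) / T
```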
from eq .( [ t ] ) , , since .we denote with a lower - case format , the probabilities , , and with which fictitious points occupy the grid in .if these fictitious points spread uniformly forming a compact set whose probabilities of finding points in each fictitious box is equal , then ( ) , , and .let us denote the shannon s entropy of the probabilities , and as , , and .the mutual information of the fictitious trajectories after evolving a time interval can be calculated by . since , and , then . at , we have that and , leading us to .therefore , defining , , we arrive at .we defining as where being the number of boxes that would be covered by fictitious points at time . at time , these fictitious points are confined in an -square box .they expand not only exponentially fast in both directions according to the two positive lyapunov exponents , but expand forming a compact set , a set with no `` holes '' . at , they spread over . using and in eq .( [ d ] ) , we arrive at , and therefore , we can write that to calculate the maximal possible mir , of a random independent process , we assume that the expansion of points is uniform only along the columns and lines of the grid defined in the space , i.e. , , ( which maximises and ) , and we allow to be not uniform ( minimising ) for all and , then }. \label{is_lower}\ ] ] since , dividing by , taking the limit of , and reminding that the information dimension of the set in the space is defined as =}}{\log{(\epsilon)}} ] and , respectively , and eqs .( [ d ] ) and ( [ icl ] ) read and , respectively . from the way we have defined expansion rates, we expect that . because of the finite time interval and the finite size of the regions of points considered , regions of points that present large derivatives , contributing largely to the lyapunov exponents , contribute less to the expansion rates . if a system has constant derivative ( hyperbolic ) and has constant natural measure , then .there are many reasons for using expansion rates in the way we have defined them in order to calculate bounds for the mir .firstly , because they can be easily experimentally estimated whereas lyapunov exponents demand huge computational efforts .secondly , because of the macroscopic nature of the expansion rates , they might be more appropriate to treat data coming from complex systems that contains large amounts of noise , data that have points that are not ( arbitrarily ) close as formally required for a proper calculation of the lyapunov exponents .thirdly , expansion rates can be well defined for data sets containing very few data points : the fewer points a data set contains , the larger the regions of size need to be and the shorter the time is .finally , expansion rates are defined in a similar way to finite - time lyapunov exponents and thus some algorithms used to calculate lyapunov exponents can be used to calculate our defined expansion rates .to illustrate the use of our bounds , we consider the following two bidirectionally coupled maps where ] .using produces already similar results . if , the set might not be only expanding . might be overestimated .results for experimental networks of double - scroll circuits . on the left - side upper corner pictograms represent how the circuits ( filled circles ) are bidirectionally coupled . 
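The order-1 expansion rate, defined from the spreading of points that start inside the same epsilon-box, can be estimated directly from a measured series. The sketch below does this for a scalar series; the choice of epsilon, of the spreading time T, and the restriction to one dimension are assumptions of the illustration.

```python
import numpy as np

def order1_expansion_rate(x, eps, T):
    """
    Order-1 expansion rate of a scalar series x: for each occupied
    eps-box, compare the largest initial distance between the points in
    the box with the largest distance between the same points after T
    steps, then average log(Delta_T / Delta_0) / T over occupied boxes.
    """
    x = np.asarray(x)
    idx = np.floor(x[:-T] / eps).astype(int)        # box label of each point
    rates = []
    for box in np.unique(idx):
        members = np.where(idx == box)[0]
        if len(members) < 2:
            continue                                # need at least one pair
        d0 = x[members].max() - x[members].min()
        dT = x[members + T].max() - x[members + T].min()
        if d0 > 0 and dT > 0:
            rates.append(np.log(dT / d0) / T)
    return np.mean(rates) if rates else 0.0
```

Because only the largest distance within each box matters, this estimate remains usable when the data set contains few points, in line with the motivation given above for preferring expansion rates over Lyapunov exponents.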
as ( green online ) filled circles , as the ( red online ) thick line , and as the ( brown online ) squares , for a varying coupling resistance .the unit of these quantities shown in these figures is ( kbits / s ) .( a ) topology i , ( b ) topology ii , ( c ) topology iii , and ( d ) topology iv . in all figures , increases smoothly from 1.25 to 1.95 as varies from 0.1k to 5k .the line on the top of the figure represents the interval of resistance values responsible to induce almost synchronisation ( as ) and phase synchronisation ( ps).,width=264,height=264 ] has been estimated by the method in ref .since we assume that the space where mutual information is being measured is 2d , we will compare our results by considering in the method of ref . a 2d space formed by the two collected scalar signals . in the method of ref . the phase space is partitioned in regions that contain 30 points of the continuous trajectory . since that these regions do not have equal areas ( as it is done to calculate and ) , in order to estimate we need to imagine a box of sides , such that its area contains in average 30 points . the area occupied by the set is approximately given by , where is the number of occupied boxes .assuming that the 79980 experimental data points occupy the space uniformly , then on average 30 points would occupy an area of .the square root of this area is the side of the imaginary box that would occupy 30 points .so , .then , in the following , the `` exact '' value of the mir will be considered to be given by , where is estimated by .the three main characteristics of the curves for the quantities , , and ( appearing in fig . [ figure4_letter00 ] ) with respect to the coupling strength are that ( i ) as the coupling resistance becomes smaller , the coupling strength connecting the circuits becomes larger , and the level of synchronisation increases followed by an increase in , , and , ( ii ) all curves are close , ( iii ) and as expected , for most of the resistance values , and .the two main synchronous phenomena appearing in these networks are almost synchronisation ( as ) , when the circuits are almost completely synchronous , and phase synchronisation ( ps ) . for the circuits considered in fig .[ figure4_letter00 ] , as appears for the interval ] . within this region of resistance values the exchange of information between the circuits becomes large .ps was detected by using the technique from refs . . to analytically demonstrate that the quantities and can be well calculated in stochastic systems , we consider the following stochastic dynamical toy model illustrated in fig . [ toy_model ] . init points within a small box of sides ( represented by the filled square in fig .[ toy_model](a ) ) located in the centre of the subspace are mapped after one iteration of the dynamics to 12 other neighbouring boxes .some points remain in the initial box .the points that leave the initial box go to 4 boxes along the diagonal line and 8 boxes off - diagonal along the transverse direction .boxes along the diagonal are represented by the filled squares in fig .[ toy_model](b ) and off - diagonal boxes by filled circles . 
at the second iteration ,the points occupy other neighbouring boxes , as illustrated in fig .[ toy_model](c ) , and at the time the points do not spread any longer , but are somehow reinjected inside the region of the attractor .we consider that this system is completely stochastic , in the sense that no one can precisely determine the location of where an initial condition will be mapped .the only information is that points inside a smaller region are mapped to a larger region . at the iteration , there will be boxes occupied along the diagonal ( filled squares in fig .[ toy_model ] ) and ( filled circles in fig .[ toy_model ] ) boxes occupied off - diagonal ( along the transverse direction ) , where for =0 , and for and . is a small number of iterations representing the time difference between the time for the points in the diagonal to reach the boundary of the space and the time for the points in the off - diagonal to reach this boundary .the border effect can be ignored when the expansion along the diagonal direction is much faster than along the transverse direction . at the iteration , there will be boxes occupied by points . in the following calculations we consider that .we assume that the subspace is a square whose sides have length 1 , and that , so . for , the attractor does not grow any longer along the off - diagonal direction .the time , for the points to spread over the attractor , can be calculated by the time it takes for points to visit all the boxes along the diagonal .thus , we need to satisfy . ignoring the 1 appearing in the expression for due to the initial box in the estimation for the value of , we arrive that .this stochastic system is discrete . in order to take into consideration the initial box in the calculation of ,we pick the first integer that is larger than , leading to be the largest integer that satisfies the largest lyapunov exponent or the order-1 expansion rate of this stochastic toy model can be calculated by , which take us to therefore , eq . ( [ tm_t ] ) can be rewritten as .the quantity can be calculated by , with .neglecting and the 1 appearing in due to the initial box , we have that ] , we can write that . placing from eq .( [ tm_t ] ) into takes us to . finally , dividing by , we arrive that \nonumber \\ & = & \log{(2)}(1-r ) .\label{tm_ist}\end{aligned}\ ] ] as expected from the way we have constructed this model , eq .( [ tm_ist ] ) and ( [ tm_ic ] ) are equal and .had we included the border effect in the calculation of , denote the value by , we would have typically obtained that , since calculated considering a finite space would be either smaller or equal than the value obtained by neglecting the border effect . had we included the border effect in the calculation of , denote the value by , typically we would expect that the probabilities would not be constant .that is because the points that leave the subspace would be randomly reinjected back to .we would conclude that .therefore , had we included the border effect , we would have obtained that . 
the way we have constructed this stochastic toy model results in .this is because the spreading of points along the diagonal direction is much faster than the spreading of points along the off - diagonal transverse direction .in other words , the second largest lyapunov exponent , , is close to zero .stochastic toy models which produce larger , one could consider that the spreading along the transverse direction is given by , with ] , and .in order to show that our expansion rate can be used to calculate these quantities , we consider that the experimental system is uni - dimensional and has a constant probability measure .additive noise is assumed to be bounded with maximal amplitude , and having constant density .our order-1 expansion rate is defined as }. \label{define_exp_rates_sup}\ ] ] where measures the largest growth rate of nearby points .since all it matters is the largest distance between points , it can be estimated even when the experimental data set has very few data points . since , in this example , we consider that the experimental noisy points have constant uniform probability distribution , can be calculated by }. \label{define_exp_rates_sup1}\ ] ] where represents the largest distance between pair of experimental noisy points in an -square box and represents the largest distance between pair of the points that were initially in the -square box but have spread out for an interval of time . the experimental system ( without noise )is responsible to make points that are at most apart from each other to spread to at most to apart from each other .this points spread out exponentially fast according to the largest positive lyapunov exponent by substituting eq .( [ delta ] ) in ( [ define_exp_rates_sup1 ] ) , and expanding to first order , we obtain that , and therefore , our expansion rate can be used to estimate lyapunov exponents .as rigorously shown in , the decay with time of the correlation , , is proportional to the decay with time of the density of the first poincar recurrences , , which measures the probability with which a trajectory returns to an -interval after iterations .therefore , if decays with , for example exponentially fast , will decay with exponentially fast , as well .the relationship between and can be simply understood in chaotic systems with one expanding direction ( one positive lyapunov exponent ) . as shown in , the `` local '' decay of correlation ( measured in the -interval )is given by , where is the probability measure of a chaotic trajectory to visit the -interval .consider the shift map .for this map , and there are an infinite number of possible intervals that makes , for a finite .these intervals are the cells of a markov partition . as recently demonstrated by [ p. pinto , i. labouriau , m. s. baptista ] , in piecewise - linear systems as the shift map , if is a cell in an order- markov partition and , then and by the way a markov partition is constructed we have that .since that , we arrive at that , for a special finite time .notice that can be rewritten as .since for this map , the largest lyapunov exponent is equal to , then , which is exactly equal to the quantity , the time interval responsible to make the system to lose its memory from the initial condition and that can be calculated by the time that makes points inside an initial -interval to spread over the whole phase space , in this case ] and represents the connecting adjacent matrix . 
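The shift-map argument can be checked numerically. Taking the map to be the usual doubling map x -> 2x mod 1 (its explicit formula is garbled in this extraction), whose Lyapunov exponent is ln 2, points that start in a single epsilon-interval need about log2(1/epsilon) iterations to cover the unit interval. The cloud size and the coverage criterion in the sketch below are illustrative choices.

```python
import numpy as np

def spreading_time(eps, n_points=20000, coverage=0.95, seed=0):
    """
    Iterations needed for points starting in one eps-interval of the
    doubling map x -> 2x mod 1 to cover `coverage` of the unit interval.
    """
    rng = np.random.default_rng(seed)
    x = 0.3 + eps * rng.random(n_points)        # cloud inside one eps-box
    n_boxes = int(round(1.0 / eps))
    for t in range(1, 200):
        x = (2.0 * x) % 1.0
        occupied = len(np.unique(np.floor(x * n_boxes).astype(int)))
        if occupied >= coverage * n_boxes:
            return t
    return None

# e.g. spreading_time(2**-10) is close to log2(1/eps) = 10 iterations
```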
if node connects to node , then , and 0 otherwise .assume that the nodes are connected all - to - all .then , the positive lyapunov exponents of this network are : and }$ ] , with .assume also that the subspace has dimension and that positive lyapunov exponents are observed in this space and that .substituting these lyapunov exponents in eq .( [ icu_new ] ) , we arrive at we conclude that there are two ways for to increase. either one considers larger measurable subspaces or one increases the coupling between the nodes .this suggests that the larger the coupling strength is the more information is exchanged between groups of nodes . for arbitrary topologies, one can also derive analytical formulas for in this network , since for can be calculated from .one arrives at where is the largest eigenvalue ( in absolute value ) of the laplacian matrix .concluding , we have shown a procedure to calculate mutual information rate ( mir ) between two nodes ( or groups of nodes ) in dynamical networks and data sets that are either mixing , or present fast decay of correlations , or have sensitivity to initial conditions , and have proposed significant upper ( ) and lower ( ) bounds for it , in terms of the lyapunov exponents , the expansion rates , and the capacity dimension . since our upper bound is calculated from lyapunov exponents or expansion rates , it can be used to estimate the mir between data sets that have different sampling rates or experimental resolution ( e.g. the rise of the ocean level and the average temperature of the earth ) , or between systems possessing a different number of events .additionally , lyapunov exponents can be accurately calculated even when data sets are corrupted by noise of large amplitude ( observational additive noise ) or when the system generating the data suffers from parameter alterations ( `` experimental drift '' ) .our bounds link information ( the mir ) and the dynamical behaviour of the system being observed with synchronisation , since the more synchronous two nodes are , the smaller and will be. this link can be of great help in establishing whether two nodes in a dynamical network or in a complex system not only exchange information but also have linear or non - linear interdependences , since the approaches to measure the level of synchronisation between two systems are reasonably well known and are been widely used .if variables are synchronous in a time - lag fashion , it was shown in ref . that the mir is independent of the delay between the two processes .the upper bound for the mir could be calculated by measuring the lyapunov exponents of the network ( see supplementary information ) , which are also invariant to time - delays between the variables . *acknowledgments * m. s. baptista was partially supported by the northern research partnership ( nrp ) and alexander von humboldt foundation .m. s. baptista would like to thank a. politi for discussions concerning lyapunov exponents .rubinger , e.r .v. junior and j.c .sartorelli thanks the brazilian agencies capes , cnpq , fapemig , and fapesp . | the amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems . this quantity , known as the mutual information rate ( mir ) , is calculated from the mutual information , which is rigorously defined only for random systems . moreover , the definition of mutual information is based on probabilities of significant events . 
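The Lyapunov exponents entering these bounds can be computed with the standard QR re-orthonormalization procedure. The sketch below applies it to two bidirectionally coupled doubling maps, used here only as an illustrative stand-in for the coupled maps of the text (whose exact form is garbled in this extraction); for this piecewise-linear example the Jacobian is constant and the exponents can be checked analytically, approximately ln 2 and ln|2(1 - 2*sigma)|.

```python
import numpy as np

def lyapunov_spectrum(jacobian, step, x0, n_iter=100000, n_skip=1000):
    """
    Lyapunov exponents of a 2D map via QR re-orthonormalization.
    `step` advances the state; `jacobian` returns the 2x2 Jacobian there.
    """
    x = np.array(x0, float)
    for _ in range(n_skip):                 # discard the transient
        x = step(x)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        x = step(x)
    return sums / n_iter

# two bidirectionally coupled doubling maps (illustrative choice):
#   x' = 2[(1-s)x + s y] mod 1 ,   y' = 2[s x + (1-s)y] mod 1
s = 0.05
step = lambda v: np.mod(2.0 * np.array([(1 - s) * v[0] + s * v[1],
                                        s * v[0] + (1 - s) * v[1]]), 1.0)
jac = lambda v: 2.0 * np.array([[1 - s, s], [s, 1 - s]])
print(lyapunov_spectrum(jac, step, [0.1, 0.7]))   # ~ [ln 2, ln|2(1-2s)|]
```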
this work offers a simple alternative way to calculate the mir in dynamical ( deterministic ) networks or between two data sets ( not fully deterministic ) , and to calculate its upper and lower bounds without having to calculate probabilities , but rather in terms of well known and well defined quantities in dynamical systems . as possible applications of our bounds , we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators . |
scarce doubts remain that dark energy ( de ) exists . not only snia data indicate an accelerated cosmic expansion ( perlmutter et al . 1997 , 1998 , riess et al .1998 , foley et al . 2007 ) ; also cmb and deep sample data show a clear discrepancy between the total density parameter approaching unity , and the matter density parameter ( see , _e.g. _ , spergel et al .de covers this gap ; its state parameter must approach today , so apparently excluding that de is made of free particles ( : de pressure , energy density ) .the true nature of de is however still elusive ; a false vacuum and a self interacting scalar field are among the most popular hypotheses for it ( wetterich 1988 , ratra & peebles 1988 ) .in this paper we explore some possible consequences of our poor knowledge of de nature .in particular we test the risk that other cosmological parameter estimates are biased by a inadequate parametrization of the de component .we shall see that this risk is real .theoretical predictions had an astonishing success in fitting cmb data .for instance , the sw effect , predicting low data , or primeval compression waves , predicting peaks and deeps , were clearly detected .there is little doubt that we are exploring the right range of models .when we investigate de nature through cmb data , we must bear in mind that they were mostly fixed at a redshift when the very de density should be negligible , and so affects peak and deep positions indirectly , through the values of and ( here / s / mpc is the present hubble parameter ; are the present total , cdm , baryon density parameters ) .later information on de state equation , conveyed by the isw effect , is seriously affected by cosmic variance and often relies on the assumption that a single opacity parameter can account for reionization , assumed to be ( almost ) instantaneous . accordingly ,if we assume dynamical de ( dde ) , due to a scalar field self interacting through a potential cmb data allow to exclude some interaction shape , _e.g. _ ratra peebles ( 1988 ) potentials with significantly large energy scales , but hardly convey much information on potential parameters . in spite of that , when we choose a de potential or a specific scale dependence of the de state parameter we risk to bias the values of other cosmological parameters , sometimes leading to premature physical conclusions .an example is the value of the primeval spectral index for scalar fluctuation . using wmap3 data ( spergel et al .2007 ) and assuming a cosmology , the value is `` excluded '' at the 2 confidence level . on the contrary , colombo & gervasi ( 2007 ) showed that this is no longer true in a dde model based on a sugra potential ( brax & martin 1999 , 2001 ; brax , martin & riazuelo 2000 ) , whose likelihood was the same of .the risk that our poor knowledge of de nature biases parameter determination is even more serious if dm de coupling is allowed . coupled de ( cde ) cosmologies were studied by various authors ( see , _ e.g. _ , wetterich 1995 , amendola 2000 , bento , bertolami & sen 2002 , macci et al .2004 ) .while dde was introduced in the attempt to ease de _ fine tuning _ problems , cde tries to ease the _ coincidence _ problem .let us then parametrise the strength of dm de coupling through a parameter defined below .when is large enough , dm and de scale ( quasi ) in parallel since a fairly high redshift . 
in turnthis modifies the rate of cosmic expansion whenever dm and/or de contributions to the total energy density are non negligible , so that limits on can be set through data .the range allowed ( .12 ; mainini , colombo & bonometto 2005 , mainini & bonometto 2007 , see also majerotto , sapone & amendola 2004 , amendola , campos & rosenfeld , 2006 ) , unfortunately , is so limited that dm and de are doomed to scale differently , but in a short redshift interval .clearly , this spoils the initial motivation of coupling , but , once the genie is outside the lamp , it is hard to put him back inside : even though the coupling solves little conceptual problems , we should verify that no bias arises on the other parameters , for the neglect of s consistent with data .this is far from being just a theoretical loophole , the still unknown physics of the dark components could really imply the presence of a mild dm de coupling , and its discovery could mark a step forwards in the understanding of their nature . herewe test this possibility by performing some numerical experiments .we assume dde and cde due to a sugra potential and use mcmc techniques to fit the following parameter set : , , , , , and ( constant ) ; is the angular size of the sound horizon at recombination ( see however below ) , and are spectral index and amplitude of scalar waves , no tensor mode is considered .the plan of the paper is as follows : in section 2 we discuss how artificial data are built , outlining the models selected , the dde potential used and the sensitivity assumed . in section 3we briefly debate the features of the mcmc algorithm used and illustrate a test on its efficiency , also outlining the physical reasons why some variables are more or less efficiently recovered . in section 4 we discuss the results of an analysis of dde artificial data , against the assumption . in section 5we briefly summarize why and how cde models are built and do the same of sec . 4 for cde models .this section yields the most significant results of this work . in section 6we draw our conclusions .in order to produce artificial data we use camb , or a suitable extension of it ( see below ) , to derive the angular spectra , , for various sets of parameter values .artificial data are then worked out from spectra , according to perotto et al .( 2006) .the analysis we report is however mostly based on a single choice of parameters , wmap5 inspired ( see komatsu et al . , 2008 , spergel et al .2007 ) , in association with three different s .two other parameter choices will also be used : ( i ) to test the efficiency of the mcmc algorithm ; ( ii ) to add results for a still lower value , so strengthening our conclusions .the parameter sets for the most general case and the ( ii ) case are shown in table 1 . 
in these casesde is due to a scalar field self interacting through a sugra potential ( planck mass ) which has been shown to fit cmb data at least as well as ( colombo & gervasi 2007 ) ; as outlined in the table , we input the value of the energy scale the corresponding value is determined by the program itself ; the choice is consistent with colombo & gervasi ( 2007 ) findings and is however scarcely constrained by data .the meaning of the coupling constant is discussed at the beginning of section 5 ..__cosmological parameters for artificial cmb data ._ _ [ cols="^,^,^",options="header " , ] -.8truecm starting from the spectral components and assuming that cosmic fluctuations are distributed according to a gaussian process , we generate _ realizations _ of the coefficients of the spherical harmonic expansions for the temperature and e polarization fields , according to the expressions \sqrt{c_l^{(tt ) } } g_{lm}^{(1 ) } + \sqrt{c_l^{(ee ) } - \left[\left(c_l^{(te)}\right)^2/c_l^{(tt ) } \right ] } g_{lm}^{(2)}~ , \label{alme}\ ] ] where both for any and , are casual variables , distributed in a gaussian way with null averages and unit variance , so that the equality is approached when the number of realizations increases . together with eqs .( [ almt ] ) and ( [ alme ] ) this grants that if averages are taken over an `` infinite '' set of sky realizations .> from a single realization , we can define estimators of the power as taken independently from each other , and at a given are the sum of the squares of gaussian random variables , so they are distributed according to a with , which approaches a gaussian distribution centered around the fiducial , , values , as increases .if we consider and simultaneously , the set of estimators , , follow a wishart distribution ( see , e.g. , percival & brown 2006 ) .we consider here an idealized full sky experiment characterized by : ( i ) finite resolution , that we shall set through a full width half maximum angle , where the antenna sensitivity is reduced to assuming a circularly symmetric gaussian beam profile ; ( ii ) background noise due to the apparatus , that we shall assume to be _ white _ , _i.e. _ with an assigned variance under these simplified assumptions , the characteristics of an experiment are completely defined by the values of and . herewe shall take while and .these can be considered conservative estimates of sensitivity in the forthcoming planck experiment .when the number of the parameters to be determined from a given data set is large , the whole parameter space can not be fully explored within a reasonable computing time .montecarlo markov chain ( mcmc ) algorithms are then used , whose efficiency and reliability have been widely tested . in this work we used the mcmc engine and statistical tools provided by the cosmomc package ( lewis & bridle 2002 ; _cosmologist.info/cosmomc_ ) .this tool set allows us to work out a suitable set of markov chains and analyze their statistical properties , both in the full parameter space , and in lower dimensional subspaces .of particular interest are the _marginalized _ distributions of each parameter , obtained by integrating over the distribution of the other parameters .the above algorithms implement the following steps .( i ) let be a point model in the parameter space .the spectra of such model are computed and their corresponding likelihood is evaluated according to + { \rm const . 
} ~,\ ] ] where stands for either fiducial or realized spectra .the above expression holds for a single or spectrum ; for its generalization to 3 spectra and a discussion of the effects of anisotropic noise , sky cuts , etc .see , e.g. percival & brown ( 2006 ) .( ii ) the algorithm then randomly selects a different point , in parameter space , according to a suitable _ selection function_. the probability of accepting is given by .if is accepted , it is added to the chain , otherwise the multiplicity of increases by 1 .( iii ) the whole procedure is then iterated starting either from or from again , until a stopping condition is met .the resulting chain is defined by , where are the points explored and the multiplicity is the number of times was kept .the whole cycle is repeated until the chains reach a satisfactory _ mixing _ and a good_ convergence_. the first requisite essentially amounts to efficiently exploring the parameter space , by quickly moving through its whole volume .in particular , the fact that the algorithm sometimes accepts points with lower likelihood than the current point , avoids permanence in local minima , while a careful choice of parametrization and of the selection function minimize the time spent exploring degenerate directions .a good convergence instead guarantees that the statistical properties of the chains ( or of a suitably defined subset of the chains ) , correspond to those of the underlying distribution .here we implement the convergence criterion of gelman & rubin ( 1992 ) , based on computation : convergence and mixing are reached when . fulfilling such criterion requires points , in the cases considered here .the algorithm was preliminarily tested by using as true cosmology and the set of parameter values shown in figure [ lcdm ] .such figure reports the likelihood distributions found in this case , both for input parameters ( with asterisk ) and for several derived parameters as well . besides confirming that input values are suitably recovered , mostly with fairly small errors , fig .[ lcdm ] shows a broad non gaussian distribution for values , with little skewness but significant kurtosis .this is a well known consequence of the geometrical degeneracy present in cmb spectra , and clearly shows the difficulty of cmb data to yield results on de nature .the non gaussian behavior is even more accentuated for some derived parameter , as , and the universe age we shall recover analogous features in the next plots ; here we want to stress that they do not derive from fitting a partially unsuitable parameter set .figure [ lcdm ] also shows how errors increase when passing to some secondary parameter , from primary parameters specifically devised to break degeneracies .for instance , the errors on and are and , respectively . 
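Steps (i)-(iii) are the standard Metropolis-Hastings rule: a trial point is accepted with probability min(1, L_trial/L_current), and a rejection increases the multiplicity of the current point. The generic sketch below works with the log-likelihood and a Gaussian selection function whose widths are an illustrative choice; it is not the cosmomc implementation used in the paper.

```python
import numpy as np

def metropolis_chain(log_like, x0, step, n_steps, seed=0):
    """
    Minimal Metropolis-Hastings sampler.
    log_like : callable returning ln L at a parameter point
    x0       : starting point (1d array of parameters)
    step     : proposal widths, one per parameter (the selection function)
    Returns the chain points and their multiplicities.
    """
    rng = np.random.default_rng(seed)
    points, mult = [np.array(x0, float)], [1]
    logL = log_like(points[-1])

    for _ in range(n_steps):
        trial = points[-1] + step * rng.standard_normal(len(x0))
        logL_trial = log_like(trial)
        # accept with probability min(1, L_trial / L_current)
        if np.log(rng.random()) < logL_trial - logL:
            points.append(trial)
            mult.append(1)
            logL = logL_trial
        else:
            mult[-1] += 1        # rejected: current point gains multiplicity
    return np.array(points), np.array(mult)
```

Several such chains are run in parallel and accepted as converged only when a mixing diagnostic such as the Gelman-Rubin statistic is satisfied.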
from these parameters , and derived , whose errors are .the point here is that the primary variable is setting the peak of the _quasi_gaussian last scattering band ( lsb ) .it is then easy to see that so that is apparently independent from and as , in the former equation while , in the latter one and only sets the normalization of the latter term at the r.h.s ., so that as a matter of fact , these equations exhibit a mild dependence of on and , as ] ( here is the present cmb temperature and is number of neutrino families in the radiative background ) .information of and , thence , on can be obtained only if the factor , yielding the evolution of the ratio between de and matter at low redshift , is under control .it is certainly so if a model is assumed ; but , if we keep as free parameter , the uncertainty on it , ranging around 60 reflect the difficulty to obtain the secondary parameters and .more in general , eq . ( [ hoh ] ) shows that cmb data are sensitive to a change of causing a variation of lsb distance . on the contrary , does not discriminate between different yielding the same . incidentally , all that confirms that it is unnecessary to fix with high precision , or to discuss whether it is a suitable indicator of the lsb depth .knowing with approximation is quite suitable to this analysis .let us now discuss what happens if de state equation , in the `` real '' cosmology , can not be safely approximated by a constant to build data , we use then a sugra cosmology and our own extension of camb , directly dealing with a sugra potential both in the absence and in the presence of dm de coupling .artificial data worked out for model a with were then fit to the same parameters as in figure [ lcdm ] .the whole findings are described in table [ tab1 ] ( second column ) . in figure [ dde ]we add further information , comparing the value distributions in fits assuming either a cosmology with or a sugra cosmology , then including as parameter , instead of .the two fits agree well within 1 , among themselves and with input values . in the figurewe show the fiducial case . -.2truecm cccc + parameter & & & & + input value & av . value & av . value & av .value + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + for & & & + for & && + for & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + -.1truecmlet us briefly comment the results in the absence of coupling , when the fitting parameter set includes instead of : ( i ) and are recovered without any bias .( ii ) the same holds for if a numerically refined value of is used ; the expression of , as above outlined , has a precision .errors on , however , are negligible in comparison with errors arising from the scarce knowledge of .as outlined in the previous section , the main problem to fix , , and , resides in the difficulty to determine the state equation of the expansion source since de becomes substantially sub dominant or dominant .this also makes clear why the formal error in the determination of the above three parameters is wider when the fit assumes a constant varying yields an immediate and strong effect on de contributions and state equation , extending up to now ; on the contrary , only huge displacements of the energy scale cause significant variations to de contributions and state equation within the family of sugra cosmologies . 
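The geometrical degeneracy discussed here can be made tangible by computing the distance to the last-scattering band for different constant-w models at fixed omega_m = Omega_m h^2: directions in the (w, h) plane that leave this distance unchanged are nearly indistinguishable in the peak positions. The sketch below evaluates the comoving distance for a flat model; the treatment of radiation through a single omega_r and the specific numbers in the usage comment are illustrative assumptions, not values from the paper.

```python
import numpy as np

C_KM_S = 299792.458                      # speed of light [km/s]

def comoving_distance(z_star, h, omega_m_h2, w, omega_r_h2=4.15e-5):
    """
    Comoving distance (Mpc) to z_star in a flat model with constant dark
    energy state parameter w, at fixed physical matter density omega_m_h2.
    omega_r_h2 approximates photons + massless neutrinos (illustrative).
    """
    om = omega_m_h2 / h**2
    orad = omega_r_h2 / h**2
    ode = 1.0 - om - orad
    z = np.linspace(0.0, z_star, 20001)
    E = np.sqrt(om * (1 + z)**3 + orad * (1 + z)**4
                + ode * (1 + z)**(3.0 * (1.0 + w)))
    integral = np.sum(0.5 * (1.0 / E[1:] + 1.0 / E[:-1]) * np.diff(z))
    return (C_KM_S / (100.0 * h)) * integral

# trading a less negative w against a lower h at fixed omega_m h^2 moves
# (nearly) along the degeneracy direction; compare, for instance,
#   comoving_distance(1090., 0.72, 0.134, -1.0)
#   comoving_distance(1090., 0.62, 0.134, -0.6)
```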
in other terms , when varies , the part of the functional space that covers is not so wide .accordingly , the greater error bars on , , , found when we use as a parameter , do not arise because we fit data to a `` wrong '' cosmology , but because varying leads to a more effective spanning of the functional space of .as a matter of fact , the reliability of the fit must be measured by comparing the likelihood of the best fit parameter sets .likelihood values , given in the next section , do not discriminate between the two fits .otherwise , a direct insight into such reliability is obtained by looking at the width of the marginalized _ a posteriori _ likelihood distributions on primary parameter values .figure [ dde ] then confirms that such distributions , although slightly displaced , have similar width for all primary parameters . a general conclusion this discussion allows to draw is to beware from ever assuming that error estimates on secondary parameters , as , , and , are safein cosmologies including cold dm and de , the equation obeyed by the stress energy tensors of the dark components yields if their pressure and energy densities are , and is the conformal time and dots indicate differentiation in respect to eqs .( [ conti0 ] ) and ( [ conti ] ) state that no force , apart gravity , acts between standard model particles and the dark components .( [ conti0 ] ) is fulfilled if dm and de separately satisfy the eqs . where are traces of the stress energy tensors . can be an arbitrary constant or , _e.g. _ , a function of itself ; when , the two dark components are uncoupled and so are the cases we considered up to here .it is however known ( wetterich 1995 , amendola 2000 , amendola & quercellini 2003 ) that self consistent theories can be built with so that is used to parametrize the strength of dm de coupling . by using mcmc techniques ,we explored cosmologies with and 0.1 ( model a ) ; we also report some results for ( model b ) .we aim to show that , when is excluded from the parameter budget , the values mcmc provide for some parameters can be significantly biased , even though is small . 0.5truecm0.5truecm in table [ tab1 ] ( 3rd4th columns ) we report the results of fitting the parameter set including constant , in place of and .the table allows an easy comparison with the uncoupled case , when all parameters unrelated to de are nicely recovered . on the contrary , already for ,the input value is formally s away from best fit , in the fiducial case , and more than 5 s in some realization . the effect is stronger for , yielding a discrepancy s . that coupling affects the detection of is not casual : in coupled models a continuous exchange of energy between dm and de occurs . 
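For completeness, a toy integration of the coupled dark-matter / dark-energy background is sketched below. The sign conventions, the normalization of the coupling constant C (related to the beta used in the text), and the choice of integrating in e-folds with a simple Euler step are all assumptions of this illustration, not the paper's implementation; the potential V and its derivative dV are supplied by the user (for instance a SUGRA-like form). The sketch exploits the fact that, for a constant coupling of the assumed form, the combination rho_c * a^3 * exp(C*phi) is conserved by the dark-matter continuity equation.

```python
import numpy as np

def coupled_background(V, dV, C, rho_ci, rho_bi, rho_ri, phi_i, dphi_i,
                       N_start=-14.0, n_steps=20000):
    """
    Toy coupled background in units 8*pi*G = 1, with N = ln a (a = 1 today).
    Assumed equations (conventions of this sketch only):
        drho_c/dt + 3 H rho_c = -C rho_c dphi/dt
        d2phi/dt2 + 3 H dphi/dt + dV/dphi = +C rho_c
    which imply rho_c * a^3 * exp(C*phi) = const.  The rho_*i arguments are
    the densities at the initial e-fold N_start; V, dV are callables.
    Returns N, H(N) and the dark-energy state parameter w(N).
    """
    N = np.linspace(N_start, 0.0, n_steps)
    dN = N[1] - N[0]
    a_i = np.exp(N_start)
    K = rho_ci * a_i**3 * np.exp(C * phi_i)     # conserved combination

    phi, y = phi_i, dphi_i                      # y = dphi/dt
    H_out, w_out = np.zeros(n_steps), np.zeros(n_steps)

    for k, Nk in enumerate(N):
        a = np.exp(Nk)
        rho_c = K * a**-3 * np.exp(-C * phi)    # coupled dark matter
        rho_b = rho_bi * (a_i / a)**3           # uncoupled baryons
        rho_r = rho_ri * (a_i / a)**4           # radiation
        rho_phi = 0.5 * y**2 + V(phi)
        H = np.sqrt((rho_c + rho_b + rho_r + rho_phi) / 3.0)

        H_out[k] = H
        w_out[k] = (0.5 * y**2 - V(phi)) / rho_phi

        # advance the field by one e-fold (explicit Euler for clarity)
        phi += (y / H) * dN
        y += ((-3.0 * H * y - dV(phi) + C * rho_c) / H) * dN

    return N, H_out, w_out
```

Even a crude integration of this kind shows that a nonzero C modifies the expansion rate whenever dark matter or dark energy is non-negligible, which is the origin of the parameter biases discussed in this section.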
several parameters exhibit a non gaussian distribution , however .this is visible in figure [ couple ] , where we also provide a comparison between likelihood distributions when ( i ) the fit includes just or , instead , ( ii ) the parameter space includes and .the comparison allows to appreciate that , in the ( i ) case , is apparently much better determined than in the ( ii ) case .appreciable discrepancies concern also the parameter and they reflect onto the derived parameters , and , whose distribution however appears significantly non gaussian .these results deserve to be accompanied by a comparison between the likelihood values in the different cosmologies .the following table shows for the fiducial cases only : the likelihood values for realizations are systematically smaller , as expected , but confirm the lack of significance shown here .given that coupled models have 1 additional parameter over the model , differences indicate that both models are an equally good fit of data .the conclusion is that spectral differences , between input and best fit models , lay systematically below the error size . as a matter of fact, we are apparently meeting a case of degeneracy .this statistical observation is corroborated by a direct insight into angular spectra , provided by figures [ cla ] and [ clb ] .we show the behaviors of the spectra for the model a , with , , , compared with the for the best fitting ( ) model .differences are so small to be hardly visible .figure [ cla ] refers just to anisotropy . in its lower panelthe ratio is compared with wmap5 1 error size ( is the difference between the spectra of the input and best fit models ) .notice that shifts by 1 or 2 units along the abscissa , at large values , would still badly cut off the apparent ratios .clearly , degeneracy can be removed only if the error size is reduced by a factor or more , in the region between the first deep and the second acoustic peak . a further example we wish to add concerns a still smaller coupling intensity , ( model b , table [ tab02 ] ) . at this coupling level ,the mcmc meet all input parameter values within a couple of s .a closer inspection of plots similar to figure [ couple ] , not reported here , shows that the probability of realizations yielding formally more than 3 s away from the true value , is still the genie is still not completely back inside the lamp .cc + & input value & b.f .value + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + ( at ) & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + & +the degeneracy observed in cmb spectra could be broken off through different observables .we plan to deepen this aspect in a forthcoming work , by building detailed data sets accounting from the dependence on cosmology of the expansion rate and growth factor .a first insight into the actual situations can be however gained through an inspection of hubble diagrams and transfer functions . in figure [ hubdia ]we show the redshift dependence of the luminosity distances for models with , 0.05 , 0.1 and compare them with the corresponding best fit models with constant , as well as with scdm models with the same value of .these plots clearly indicate that a fit with snia data would hardly allow any discriminatory signal : discrepancies from constant model increase with ; but , even for the case , they hardly exceed of the difference from scdm . 
in figure [ trans ]we then exhibit the transfer functions ( multiplied by to improve the visibility of details ) for models a. in the cases and , we notice slight displacements for the bao system and the slope .the actual setting of bao s is however subject to non linear effects and residual theoretical uncertainties are wider than the shifts in the plots .the change of slope is also easily compensated by a shift of by , widely within expected observational errors .notice then that the relative position of the input and best fit functions is opposite in the two cases .accordingly , in the intermediate case , not shown in the plot , the overlap is almost exact and the observed scale dependence of the growth factor , at low redshift , does not break the degeneracy at all .the situation is different when is considered . for ,the bao displacement is indeed relevant ; to compensate for the change of slope , we would then require a shift of greater than 0.1 .a numerical analysis can therefore determine at which value , probably intermediate between 0.05 and 0.1 , the cmb spectra degeneracy is broken .the nature of the dark cosmic components is unclear .de could be a self interacting field , yielding a scale dependent state parameter which bias does then arise on cosmological parameter estimates , if performed by assuming ? this question regards also parameters which do not describe de ; their estimate could be biased because the _ true _ model is not directly explored .a first conclusion of this work is that such bias exists but , in the cases we treated , yields acceptable displacements , within 1 .suppose however that future data allow to exclude , within 3 s , when we assume .setting discriminates among inflationary potentials .it would be however legitimate to assess that , at that stage , such conclusions would still be premature .our analysis was then extended to the case of dm de coupling .the idea that dm and de have related origins or arise from the same field has been widely pursued ( see , _ e.g. _ kamenshchik , moschella & pasquier 2001 , bento , bertolami sen 2002 , 2004 ; mainini & bonometto 2004 ; for a review , see copeland , sami & tsujikawa 2006 ) .coupling causes dm de energy exchanges and this option was first explored in the attempt to ease the _ coincidence _ problem .unfortunately , when a dm de coupling , strong enough to this aim , is added to models , predictions disagree with data .this does not forbid , however , that the physics of the dark components includes a weaker coupling or that a stronger coupling is compensated by other features ( see , _ e.g. _ , la vacca et al .2008 ) . in this work we pointed out that a significant degeneracy exists , so that we can find an excellent fit of cmb data for coupled de cosmologies just by using constant uncoupled cosmologies .the fit is so good that even likelihood estimates do not allow to distinguish between the `` true '' cosmology and the best fit constant model , at the present or foreseeable sensitivity levels ._ unfortunately , however , the values obtained for several parameters are then widely different from input ones . _ in fact, if we ignore the coupling degree of freedom , when data are analyzed , we can find biased values for some primary parameter as , for which input values lay s away from what is `` detected '' .also and , which are secondary parameters , are significantly biased . in particular , with a coupling as low as , we found model realisations yielding estimate s away from the input value . 
if we keep to the fiducial case , however , the probability to find its input value is 36.5 for and 26.5 for this outlines the tendency to find smaller values than `` true '' ones . as a matter of fact , however , because of the uncertainty induced by our ignorance on de state equation , the width of errors on secondary parameter significantly exceeds the width for primary ones , and a 5 discrepancy is partially hidden by such ignorance .this agrees with the fact that data analysis shows a significant increase of errors on secondary parameters when the set of models inspected passes from to generic cosmologies .for instance , in the very wmap5 analysis , the error on increases by a huge factor , when one abandons the safety of to explore generic constant models .a general warning is then that errors obtained assuming are to be taken with some reserve .an important question is whether the bias persists when different observables , besides cmb , are used .a preliminary inspection shows that snia data would hardly provide any discrimination . on the contraryan analysis of fluctuation growth can be discriminatory if .08 .an even more discriminatory signal could be found in the dependence of the growth factor is suitably analyzed .observational projects aiming at performing a tomography of weak lensing , like dune euclid ( see , _e.g. _ , refregier et al .2006 , 2008 ) , therefore , can be expected to reduce this degeneracy case . | when cmb data are used to derive cosmological parameters , their very choice does matter : some parameter values can be biased if the parameter space does not cover the `` true '' model . this is a problem , because of the difficulty to parametrize dark energy ( de ) physics . we test this risk through numerical experiments . we create artificial data for dynamical or coupled de models and then use mcmc techniques to recover model parameters , by assuming a constant de state parameter and no dm de coupling . for the de potential considered , no serious bias arises when coupling is absent . on the contrary , , and thence and , suffer a serious bias when the `` true '' cosmology includes even just a mild dm de coupling . until the dark components keep an unknown nature , therefore , it can be important to allow for a degree of freedom accounting for dm de coupling , even more than increasing the number of parameters accounting for the behavior . |
efron and morris ( 1975 ) demonstrated the benefit of simultaneous estimation using a simple example of using the batting outcomes of 18 players in the first 45 at - bats in the 1970 season to predict their batting average for the remainder of the season . essentially , improved batting estimates shrink the observed averages towards the average batting average of all players .one common way of achieving this shrinkage is by means of a random effects model where the players underlying probabilities are assumed to come from a common distribution , and the parameters of this `` random effects '' distribution are assigned vague prior distributions . in modern sabermetrics research, a batting average is not perceived to be a valuable measure of batting performance .one issue is that the batting average assigns each possible hit the same value , and it does not incorporate in - play events such as walks that are beneficial for the general goal of scoring runs .another concern is that the batting average is a convoluted measure that combines different batting abilities such as not striking out , hitting a home run , and getting a hit on a ball placed in - play .indeed , it is difficult to really say what it means for a batter to `` hit for average '' .similarly , an on - base percentage does not directly communicate a batter s different performances in drawing walks or getting base hits .a deeper concern about a batting average is that chance plays a large role in the variability of player batting averages , or the variability of a player s batting average over seasons .albert ( 2004 ) uses a beta - binomial random effects model to demonstrate this point .if a group of players have 500 at - bats , then approximately half the variability in the players batting average is due to chance ( binomial ) variation the remaining half is due to variability in the underlying player s hitting probabilities .in contrast , other batting rates are less affected by chance . for example, only a small percentage of players observed home run rates are influenced by chance much of the variability is due to the differences in the batters home run abilities .the role of chance has received recent attention to the development of fip ( fielding independent pitching ) measures .mccracken ( 2001 ) made the surprising observation that pitchers had little control of the outcomes of balls that were put in - play .one conclusion from this observation is that the babip , batting average on balls put in - play , is largely influenced by luck or binomial variation , and the fip measure is based on outcomes such as strikeouts , walks , and home runs that are largely under the pitcher s control .following bickel ( 2004 ) , albert ( 2004 ) illustrated the decomposition of a batting average into different components and discussed the luck / skill aspect of different batting rates .in this paper , similar decompositions are used to develop more accurate predictions of a collection of batting averages . 
essentially , the main idea is to first represent a hitting probability in terms of component probabilities , estimate groups of component probabilities by means of random effects models , and use these component probability estimates to obtain accurate estimates of the hitting probabilities .sections 3 , 4 , 5 illustrate the general ideas for the problem of simultaneously estimating a collection of `` batting average '' probabilities and section 8 demonstrates the usefulness of this scheme in predicting batting averages for a following season .section 7 illustrates this plan for estimating on - base probabilities .section 8 gives a historical perspective on how the different component hitting rates have changed over time , and section 9 illustrates the use of this perspective in understanding the career trajectories of hitters and pitchers .the fip measure is shown in section 10 as a function of particular hitting rates and this representation is used to develop useful estimates of pitcher fip abilities .section 11 concludes by describing several modeling extensions of this approach .since efron and morris ( 1975 ) , there is a body of work finding improved measures of performance in baseball .tango et al ( 2007 ) discuss the general idea of estimating a player s true talent level by adjusting his past performance towards the performance of a group of similar players and the appendix gives the familiar normal likelihood / normal prior algorithm for performing this adjustment .brown ( 2008 ) , mcshane et al ( 2011 ) , neal et al ( 2010 ) , and null ( 2009 ) propose different `` shrinking - type '' methods for estimating batting abilities for major league batters. similar types of methods are proposed by albert ( 2006 ) , piette and james ( 2012 ) , and piette et al ( 2010 ) for estimating pitching and fielding metrics .albert ( 2002 ) and piette et al ( 2012 ) focus on the problem of simulataneously estimating player hitting and fielding trajectories .albert ( 2004 ) , bickel ( 2004 ) and bickel and stotz ( 2003 ) describe decomposition of batting averages .baumer ( 2008 ) performs a similar decomposition of batting average ( ) and on - base percentage ( ) with the intent of showing mathematically that is more sensitive than to the batting average on balls in - play .the basic decomposition of a batting average is illustrated in figure 1 .suppose one divides all at - bats into strikeouts ( so ) and not - strikeouts ( not so ) . of the ab that are not strikeouts , we divide into the home runs ( hr ) and the balls that are put `` in - play '' .finally , we divide the balls in - play into the in - play hits ( hip ) and the in - play outs ( oip ) .this representation leads to a decomposition of the batting average .we first write the proportion of hits as the proportion of ab that are not strikeouts times the proportion of hits among the non - strikeouts . continuing , if we breakdown these opportunities by , then we write the hit proportion as the proportion of non - strikeouts that are home runs plus the proportion of non - strikeouts that are singles , doubles , or triples . finally , we write the proportion of non - strikeouts that are singles , doubles , or triples as the proportion of non - strikeouts that are not home runs times the proportion of balls in - play ( ) that are hits. putting it all together , we have the following representation of a batting average : where the relevant rates are : * the strikeout rate * the home run rate * the batting average on balls in play rate . 
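written out with the usual season totals ( at - bats ab, hits h, strikeouts so, home runs hr ), symbols we introduce here only to make the identity explicit, the representation reads

\mathrm{AVG}=\frac{H}{AB}
=\Bigl(1-\frac{SO}{AB}\Bigr)\Bigl[\frac{HR}{AB-SO}
+\Bigl(1-\frac{HR}{AB-SO}\Bigr)\frac{H-HR}{AB-SO-HR}\Bigr]
=(1-k_{SO})\bigl[k_{HR}+(1-k_{HR})\,\mathrm{BABIP}\bigr]\ ,

with k_{SO}=SO/AB, k_{HR}=HR/(AB-SO) and \mathrm{BABIP}=(H-HR)/(AB-SO-HR); the factors telescope, so the identity is exact.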
instead of simply recording hits and outs, we are regarding the outcomes of an at - bat as multinomial data with the four outcomes so , hr , hip , and oip .the decomposition of a batting average leads to a multinomial sampling model for the hitting outcomes , and the multinomial sampling leads to an attractive representation of the likelihood of the underlying probabilities .there are four outcomes of an at - bat : so , hr , hip , and oip ( strikeout , home run , hit - in - play , and out - in - play ) .let denote the probability that an at - bat results in a strikeout , let denote the probability a non - strikeout results in a home run , and denotes the probability that a ball - in - play ( not so or hr ) results in an in - play hit .if this player has at - bats , the vector of counts of so , hr , hip , and oip is multinomial with corresponding probabilities , and these expressions are analogous to the breakdowns of the hitting rates . for example , the probability a hitter gets a home run in an at - bat is equal to the probability that the person does not strike out ( ) , times the probability that the hitter gets a home run among all the non - strikeouts ( ) .likewise , the probability a batter gets a hit in - play is the probability he does not strikeout times the probability he does not get a home run in a non - strikeout times the probability he gets a hit in a ball put into play .denote the multinomial counts for a particular player as where is the total number of at - bats .the likelihood of the associated probabilities is given by with some rearrangement of terms , one can show that the likelihood has a convenient factorization : \times \left[p_{hr}^{y_{hr } } ( 1 - p_{hr})^{n - y_{so } - y_{hr}}\right]\\ & & \times \ , \, \left[p_{hip}^{y_{hip } } ( 1 - p_{hip})^{n - y_{so } - y_{hr } - y_{hip}}\right ] \\ & = & l_1 \times l_2\times l_3\end{aligned}\ ] ] from the above representation , we see * is the likelihood for a binomial( ) distribution * is the likelihood for a binomial( ) distribution * is the likelihood for a binomial( ) distribution above we consider the multinomial likelihood for a single player , when in reality we have hitters with unique hitting probabilities . for the player , we have associated probabilities .following the same factorization , it is straightforward to show that the likelihood of the vectors of probabilities , and is given by factorization of the likelihood motivates an attractive way of simultaneously estimating the multinomial probabilities for all players .suppose that one s prior belief about the vectors , , and are independent , and we represent each set of probabilities by an exchangeable model represented by a multilevel prior structure .in particular , suppose that the strikeout probabilities are believed to be exchangeable .one way of representing this belief is by the following mixture of betas model .* are independent , where a density has the form and is the beta function .the parameter is the prior mean and is a `` precision '' parameter in the sense that the prior variance is a decreasing function of . * the beta parameters )are assigned the vague prior similarly , we represent a belief in exchangeability of the home run probabilities in by assigning a similar two - stage prior with unknown hyperparameters , and . 
likewise , an exchangeable prior on the hit - in - play probabilities is assigned with hyperparameters , and .we saw that the likelihood function factors into independent components corresponding to the so , hr , and hip data .since the prior distributions of the probability vectors , , and are independent , it follows that these probability vectors also have independent posterior distributions .we summarize standard results about the posterior of the vector of strikeout probabilities with the understanding that similar results follow for the other two vectors .the posterior distribution of the vector can be represented by the product where is the posterior distribution of the parameters of the random effects distribution , and is the posterior of the probabilities conditional on the random effects distribution parameters .we discuss each distribution in turn .the random effects distribution represents the `` talent curve '' of the players with respect to strikeouts , and the posterior mode of this distribution tells us about the center and spread of this distribution .in particular , represents the average strikeout rate among the players , and the estimated standard deviation measures the spread of this talent curve .given values of the parameters and , the individual strikeout probabilities , ..., have independent beta distributions where is beta with shape parameters and , where and represent the number of strikeouts and at - bats for the player . using this representation , the posterior mean of the strikeout rate for the playeris given by plugging in the posterior estimates for and , we get the posterior estimate of the player striking out : the same methodology was used to estimate the home run probabilities and the hit - in - play probabilities for all players denote the three sets of estimates as , , and , respectively .one can use these estimates to estimate the hitting probabilities using the representation for an individual estimate , by substituting the `` component '' estimates , , and into this expression , we obtained a `` component '' estimate of that we denote by . to illustrate the use of these exchangeable models , we collect hitting data for all players with at least 100 at - bats in the 2011 season .( we used 100 ab as a minimum number of at - bats to exclude pitchers from the sample . ) three exchangeable models are fit , one to the collection of strikeout rates \{ } , one to the collection of home run rates \{ } , and one to the collection of in - play hit rates \{}. table 1 displays values of the random effect parameters and for each of the three fits .the average strikeout rates is about 20% , the average home run rate ( among ab removing strikeouts ) is 3.6% , and the average in - play hit rate is .the estimated values of are informative about the spreads of the associated probabilities .the relatively small estimated value of for reflects a high standard deviation , indicating a large spread in the player strikeout probabilities .the estimated value of for home runs is also relatively small , indicating a large spread in home run probabilities .in contrast , the estimated value of for hits in play is large , indicated that players abilities to get in - play hits are more similar ..estimates of random effect parameters for strikeout data , home run data , and in - play hit data from the 2011 season . 
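once the hyperparameters of a given random effects curve have been fitted, the plug - in posterior mean above is a one - line computation. a minimal sketch follows; the numerical values of the mean eta and precision K are purely illustrative and are not the fitted 2011 values:

def shrink(y, n, eta, K):
    # posterior mean of an individual probability when the population of
    # probabilities follows a beta curve with mean eta and precision K
    return (y + K * eta) / (n + K)

# e.g. a batter with 93 strikeouts in 520 at-bats, hypothetical eta = 0.20, K = 50
print(shrink(93.0, 520.0, 0.20, 50.0))

small K leaves the observed rate nearly untouched, while large K pulls it strongly towards the population mean, which is exactly the pattern discussed here for the three component rates.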
[ cols=">,>,>,>",options="header " , ] his three observed rates are , , and from the fitted model , these observed rates are shrunk or adjusted towards average values using the formulas : ( note that beltran s strikeout and home runs are slightly adjusted towards the average values due to the small estimated values of . in contrast, the consequence of the large estimated is that beltran s in - play hit rate is adjusted about half of the way towards the average value . ) using these estimates , beltran s hit probability is estimated to be which is smaller than his observed batting average of 156 / 520 = 0.300 .much of this adjustment in his batting average estimate is due to the adjustment in beltan s in - play hitting rate .if one is primarily interested in estimating the hitting probabilities , there is a well - known simpler alternative approach based on an exchangeable model placed on the probabilities .if the hitting probability of the player is given by , then one can assume that are distributed from a common beta curve with parameters and , and assign a vague prior .then the posterior mean of is approximated by where the player is observed to have hits in at - bats and and are estimates from this exchangeable model .the following prediction contest is used to compare the proposed component estimates \{ } with the batting average estimates \{ } * first we collect hitting data for all players with at least 100 at - bats in both 2011 and 2012 seasons .* fit the component model on the hitting data for 2011 season get estimates of the strikeout , home run , and in - play hit probabilities for all hitters and use these three sets of estimates to get the component estimates \{}. * use the hit / at - bat data for all players in the 2011 season and the single exchangeable model to compute the estimates . *use both the component estimates and the single exchangeable estimates to predict the batting averages of the players in the 2012 season .let and denote the number of hits and at - bats of the player in the 2012 season .compute the root sum of squared prediction errors for both methods . + the improvement in using the components estimates is .a positive value of indicates that the component estimates are providing closer predictions than the single exchangeable estimates .this prediction contest was repeated for each of the seasons 1963 through 2012 . batting data for all players in seasons and collected , y = 1963 , ... , 2012 .the component and single exchangeable models were each fit to the data in season and the improvement in using the component method over the single exchangeable model in predicting the batting averages in season was computed .figure 2 graphs the prediction improvement as a function of the season .it is interesting that the component estimates were not uniformly superior to the estimates from a single exchangeable model .however the component method appears generally to be superior to the standard method , especially for seasons 1963 - 1980 and 1995 - 2012 .we have focused on the decomposition of an at - bat . in a similar manner , one can decompose a plate appearance as displayed in figure 3 .if we ignore sacrifice hits ( both sh and sf ) , then one can express an on - base percentage as if one combines walks and hit by pitches and defines the `` walk rate '' then one can write where is the batting average. 
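spelled out with explicit totals ( hits h, at - bats ab, walks bb, hit - by - pitches hbp ), again introduced only for clarity, the representation is

\mathrm{OBP}=\frac{H+BB+HBP}{AB+BB+HBP}\ ,\qquad
w=\frac{BB+HBP}{AB+BB+HBP}\ ,\qquad
\mathrm{OBP}=w+(1-w)\,\frac{H}{AB}\ ,

an exact identity once sacrifice hits and sacrifice flies are ignored.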
this representation makes it clear that an obp is basically a function of a hitter ability to draw walks , as measured by the walk rate and his batting average .also , following the logic of the previous section , this representation suggests that one may accurately estimate a player s on - base probability by combining separate accurate estimates of his walk probability and his hitting probability . in thissetting , one can simultaneously estimate on - percentages of a group of players by separately estimating their walk probabilities and their hit probabilities .one represents a probability that a player gets on - base as this suggests a method of estimating a collection of on - base probabilities . 1 .estimate the walk probabilities \{ } by use of an exchangeable model .2 . estimate the hitting probabilities \{ } by use of an exchangeable model .3 . estimate the on - base probabilities by use of the formula where and are estimates of the walk probability and the hit probability for the player .figure 4 demonstrates the value of this method in providing better predictions . as in the `` prediction contest '' of section 5.2, we are interested in predicting the on - base probabilities for one season given hitting data from the previous season .two prediction methods are compared the `` single exchangeable '' method fits one exchangeable data using the on - base fractions , and the `` component '' method separately estimates the walk rates and hitting rates for the players .one evaluates the goodness of predictions by the square root of the sum of squared prediction errors and one computes the improvement in using the component procedure over the single exchangeable method .these methods are compared for 50 prediction contests using data from each of the seasons 1963 through 2012 to predict the on - base proportions for the following season . 
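a compact sketch of the component prediction used in this contest is given below; the hyperparameter values are placeholders standing for whatever the two exchangeable fits return on the season y data, not real fitted numbers:

import numpy as np

def component_obp(bb_hbp, pa, hits, ab, eta_w, K_w, eta_h, K_h):
    # shrink the walk/hbp rate and the batting average separately, then combine
    w_hat   = (bb_hbp + K_w * eta_w) / (pa + K_w)
    avg_hat = (hits   + K_h * eta_h) / (ab + K_h)
    return w_hat + (1.0 - w_hat) * avg_hat

# toy usage for two hypothetical batters
bb_hbp = np.array([70.0, 25.0]); hits = np.array([156.0, 140.0])
ab = np.array([520.0, 600.0]);   pa = ab + bb_hbp
print(component_obp(bb_hbp, pa, hits, ab, eta_w=0.09, K_w=60.0, eta_h=0.27, K_h=300.0))

the single exchangeable benchmark simply applies the same shrinkage formula to the on - base counts directly.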
as most of the points fall above the horizontal line at zero, this demonstrates that the component method generally is an improvement over the one exchangeable method .to obtain a historical perspective of the change in hitting rates , the basic exchangeable model was fit to rates for all batters with at least 100 ab for each of the seasons 1960 through 2012 .for each season , we estimate the mean talent and associated precision parameter the associated estimated posterior standard deviation of the talent distribution is figure 5 displays the pattern of mean strikeout rates for all batters with at least 100 ab .note that the average strikeout rate among batters initially showed a decrease from 1970 through 1980 but has steadily increased until the current season .if we performed fits of the exchangeable model for all pitchers for each season from 1960 through 2012 , one would see a similar pattern in the mean strikeout rates .figure 6 displays the estimated standard deviations of the strikeout abilities of all batters with at least 100 ab across seasons and overlays the estimated season standard deviations of the strikeout abilities of all pitchers .first , note that among batters , the spread of the strikeout abilities shows a similar pattern to the mean strikeout rate there is a decrease from 1970 to 1980 followed by a steady increase to the current day .the spread of strikeout abilities among pitchers shows a different pattern .the standard deviations for pitchers have steadily increased over seasons , and the spread in the talent distribution for pitchers is significantly smaller than the spread of the talents for batters .one way of measuring the effectiveness of a batter or a pitcher is to look at the vector of rates for a particular season . plotting these rates over a player s career , one gainsa general understanding of the strengths of the batter or pitcher and learns when these players achieved peak performances .albert ( 2002 ) demonstrates the value of looking at career trajectories to better understand the growth and deterioration of player s batting abilities .the four observed rates have different averages and spreads , and as we see from figures 5 and 6 , the averages and spreads can change dramatically over different seasons .we use residuals from the predictive distribution to standardize these rates .let denote the number of successes in opportunities for a player in a particular season and suppose the underlying probabilities of the players follow a beta curve with mean and precision .the predictive density of the rate has mean and standard deviation when the exchangeable model is fit , one obtains estimates of the random effects parameters and , and obtains an estimate of the standard deviation .define the standardized residual in the following plots of the standardized residuals of the walk / hit - by - pitch rates , strikeout rates , home run rates , and hit - in - play rates will be displayed to show special strengths of hitters and pitchers .the graphs of the standardized rates are displayed for the careers of mickey mantle in figure 7 and ichiro suzuki in figure 8 .looking at the four graphs of figure 7 in a clockwise manner from the upper - left , one sees * mantle drew many walks / hbp and his walk / hbp rate actually increased during his career .* mantle had an above - average strikeout rate . 
*his home run rate hit a peak during the middle of his career .* his in - play hit rate decreased towards the end of his career .in contrast , by looking at figure 8 , one sees that suzuki had consistent low walk / hbp , strikeout , and home run rates throughout his career .he was especially good in his hit - in - play rate , although there was much variability in these rates and showed a decrease towards the end of his career .these displays of standardized rates are also helpful for understanding the strengths of pitchers in the history of baseball .figures 9 and 10 display the standardized rates for the hall of fame pitchers greg maddux and steve carlton .maddux was famous for his low walk rate and generally low era .looking at the trajectories of his rates in figure 9 , one sees that maddux s best walk rates occurred during the last half of his career .his best strikeout rate , home run rate , and hip rate occurred about 1995 and all three of these rates deteriorated from 1995 until his retirement in 2008 .in contrast , one sees from figure 10 that carlton had a slightly below average walk rate and a high strikeout rate during his career .since all of these rates significantly deteriorated towards the end of his career , perhaps carlton should have retired a few years earlier .based on these graphs , carlton s peak season in terms of performance was about 1980 , the season when the phillies won the world series .recently , there has been an increased emphasis on the use of fielding - independent - performance ( fip ) measures of pitchers .the idea is to construct a measure based on the outcomes such as walks , hit - by - pitches , strikeouts , and home runs that a pitcher directly controls . the usual definition of fip is given by where , , , and are the counts of these different events, is the innings pitched , and is a constant defined to ensure that the average fip is approximately equal to the league era . although is defined in terms of counts , it is straightforward to write it as a function of the four rates , , , and . let denote the count of batters faced , then \end{aligned}\ ] ] substituting these expressions into the formula and ignoring the constant term , the measure is expressed solely in terms of these four rates .although on face value , the measure seems to depend on the sample size ( the number of batters faced ) , the value of cancels out in the substitution .all of the observed rates are estimates of the underlying probabilities of those events .if we take the expression of , ignoring the constant , and replace the rates with probabilities , we get an expression for a pitcher s ability denoted by : using data for a single season , we can use separate exchangeable models to estimate the walk probabilities \{ } , the strikeout probabilities \{ } , the home run probabilities \{ } , and the hit - in - play probabilities \{ } for all pitchers .if we substitute the probability estimates into the formula , we get new estimates at the observed measures for all pitchers in a particular season . based on our earlier work , one would anticipate that our new estimates of ability would be superior to usual estimates in predicting the values of the pitchers in the following season . as in our evaluation of the performance of the improved batting probabilities , the new estimatescan be compared with exchangeable estimates based on the standard representation of the statistic . 
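for reference, the count - based definition alluded to above is the standard one, sketched below; the additive constant is league- and season - dependent, and the value used here is only a typical magnitude:

def fip(hr, bb, hbp, so, ip, c_fip=3.10):
    # standard fielding-independent pitching measure; c_fip is chosen so that
    # the league-average FIP matches the league-average ERA (3.10 is illustrative)
    return (13.0 * hr + 3.0 * (bb + hbp) - 2.0 * so) / ip + c_fip

print(fip(hr=20, bb=45, hbp=5, so=180, ip=200.0))

expressing each count as the number of batters faced times the corresponding rate, the dependence on the number of batters faced cancels, as stated in the text.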
for a given pitcher , suppose one collects the measurement for each inning pitched .if the pitcher pitches for innings , then the measurements can be denoted by and the statistic is simply the sample mean .it is reasonable to assume that is normal with mean and variance , where reflects the variability of the values of within innings .based on this representation , one can estimate the abilities \{ } by use of an exchangeable model where the abilities are assigned a normal curve with mean and standard deviation , and a vague prior is assigned to . by fitting this model , one shrinks the observed values for the pitchers towards an average value . againa prediction experiment is used to predict the values for all pitchers from a season given these measures from the previous season .the `` standard '' method predicts the values using the single exchangeable model , and the `` component '' method first separately estimates the four sets of probabilities with exchangeable models , and then substitutes these estimates in the formula to obtain predictions . as might be expected , the component method results in a smaller prediction error for practically all of the seasons of the study .this again demonstrates the value of this `` divide and conquer '' approach to obtain superior estimates of pitcher characteristics that are functions of the underlying probabilities .in the sabermetrics literature , the regression effect is well known ; to predict a batter s hitting rate for a given season , one takes one s previous season s hitting average and move this estimate towards an average .this paper extends this approach to estimating a batting measure that is a function of different rates .apply the random effects model to get accurate estimates at the component rates for all players , and then substitute these estimates into the function to get improved predictions of the batting measures .this approach was easy to apply for the batting probability and on - base probabilities situations due to the convenient factorization of the likelihood and use of independent exchangeable prior distributions .the choice of a single beta random effects curve was chosen for convenience due to attractive analytical features , but this `` component '' approach can be used for any choice of random effects model .for example , one may wish to use covariates in modeling the probabilities that hitters get a hit on balls put in play .if is the probability that the player gets a hit , then one could assume that are independent from beta distributions where the prior means satisfy the logistic model where is a relevant predictor such as the speed of the ball off the bat .as before , the prior parameters would be assigned a weakly informative prior to complete the model .the fip measure was motivated from the basic observation that a team defense , not just a pitcher , prevents runs , and one wishes to devise alternative measures that isolate a pitcher s effectiveness . in a similar fashion , the goal here is to isolate the different components of a hitter s effectiveness .these component estimates are useful by themselves , but they are also helpful in estimating ensemble measures of ability such as the probability of getting on base .mcshane , b. , braunstein , a. , piette , j. and jensen , s. ( 2011 ) , `` a hierarchical bayesian variable selection approach to major league baseball hitting metrics , '' _ journal of quantitative analysis in sports _ , 7 , issue 2 . 
| standard measures of batting performance such as a batting average and an on - base percentage can be decomposed into component rates such as strikeout rates and home run rates . the likelihood of hitting data for a group of players can be expressed as a product of likelihoods of the component probabilities and this motivates the use of random effects models to estimate the groups of component rates . this methodology leads to accurate estimates of hitting probabilities and good predictions of performance for following seasons . this approach is also illustrated for on - base probabilities and fip abilities of pitchers . |
understanding biodiversity and coevolution is a central challenge in modern evolutionary and theoretical biology . in this context , for some decadesmuch effort has been devoted to mathematically model dynamics of competing populations through nonlinear , yet deterministic , set of rate equations like the equations devised by lotka and volterra or many of their variants .this heuristic approach is often termed as population - level description . as a common feature, these deterministic models fail to account for stochastic effects ( like fluctuations and spatial correlations ) .however , to gain some more realistic and fundamental understanding on generic features of population dynamics and mechanisms leading to biodiversity , it is highly desirable to include internal stochastic noise in the description of agents kinetics by going beyond the classical deterministic picture .one of the main reasons is to account for discrete degrees of freedom and finite - size fluctuations .in fact , the deterministic rate equations always ( tacitly ) assume the presence of infinitely many interacting agents , while in real systems there is a _large _ , yet _finite _ , number of individuals ( recently , this issue has been addressed in refs . ) . as a consequence ,the dynamics is intrinsically stochastic and the unavoidable finite - size fluctuations may have drastic effects and even completely invalidate the deterministic predictions .interestingly , both _ in vitro _ and _ in vivo _ experiments have recently been devoted to experimentally probe the influence of stochasticity on biodiversity : the authors of refs . have investigated the mechanism necessary to ensure coexistence in a community of three populations of _ escherichia coli _ and have numerically modelled the dynamics of their experiments by the so - called ` rock - paper - scissors ' model , well - known in the field of game theory .this is a three - species cyclic generalization of the lotka - volterra model . as a result ,the authors of ref . reported that in a well - mixed ( non - spatial ) environment ( i.e. when the experiments were carried out in a flask ) two species got extinct after some finite time , while coexistence of the populations was never observed .motivated by these experimental results , in this work we theoretically study the stochastic version of the cyclic lotka - volterra model and investigate in detail the effects of finite - size fluctuations on possible population extinction / coexistence . for our investigation , as suggested by the flask experiment of ref . , the stochastic dynamics of the cyclic lotka - volterra model is formulated in the natural language of urn models and by adopting the so - called individual - based description . in the latter , the explicit rules governing the interaction of a _finite number _ of individuals with each other are embodied in a master equation .the fluctuations are then specifically accounted for by an appropriate fokker - planck equation derived from the master equation via a so - called van kampen expansion .this program allows us to quantitatively study the deviations of the stochastic dynamics of the cyclic lotka - volterra model with respect to the rate equation predictions and to address the question of the extinction probability , the computation of which is the main result of this work . 
from a more general perspective, we think that our findings have a broad relevance , both theoretical and practical , as they shed further light on how stochastic noise can dramatically affect the properties of the numerous nonlinear systems whose deterministic description , like in the case of the cyclic lotka - volterra model , predicts the existence of neutrally stable solutions , i.e. cycles in the phase portrait .this paper is organized as follows : the cyclic lotka - volterra model is introduced in the next section and its deterministic rate equation treatment is presented . in section [ stoch_appr ] , we develop a quantitative analytical approach that accounts for stochasticity , a fokker - planck equation is derived from the underlying master equation within a van kampen expansion .this allows us to compute the variances of the agents densities .we also study the time - dependence properties of the system by carrying out a fourier analysis from a set of langevin equations .section [ sect - ext - prob ] is devoted to the computation of the probability of having extinction of two species at a given time , which constitutes the main issue of this work . in the final section ,we summarize our findings and present our conclusions ., , and .the latter may correspond to the strategies in a rock - paper - scissors game , or to different bacterial species .[ cycle ] ] the cyclic lotka volterra model under consideration here is a system where three states , , and cyclically dominate each other : invades , outperforms , and in turn dominates over , schematically drawn in fig .[ cycle ] .these three states , , and allow for various interpretations , ranging from strategies in the rock - paper - scissors game over tree , fire , ash in forest fire models or chemical reactions to different bacterial species .in the latter case , a population of poison producing bacteria was brought together with another being resistant to the poison and a third which is not resistant . as the production of poison as well as the resistance against it have some cost , these species show a cyclic dominance : the poison - producing one invades the non - resistant , which in turn reproduces faster than the resistant one , and the latter finally dominates the poison - producing . in a well - mixed environment , like a flask in the experiments , eventually only one species survives .the populations are large but finite , and the dynamics of reproduction and killing events may to a good approximation be viewed as stochastic . ) , red or medium gray ( ) , and blue or dark gray ( ) . at each time step ,two random individuals are chosen ( indicated by arrows in the left picture ) and react ( right picture ) ., title="fig : " ] ) , red or medium gray ( ) , and blue or dark gray ( ) . at each time step ,two random individuals are chosen ( indicated by arrows in the left picture ) and react ( right picture ) ., title="fig : " ] motivated from these biological experiments , we introduce a stochastic version of the cyclic lotka - volterra model .consider a population of size which is well mixed , i.e. in the absence of spatial structure .the stochastic dynamics used to describe its evolution , illustrated in fig . [ urn ] , is referred to as `` urn model '' and closely related to the moran process . 
at every time step ,two randomly chosen individuals are selected , which may at certain probability react according to the following scheme : with reaction rates , and .we observe the cyclic dominance of the three species .also , the total number of individuals is conserved by this dynamics ; this will of course play a role in our further analysis .we now proceed with the analysis of the deterministic version of the system ( [ react ] ) .this will prove insightful for building a stochastic description of the model , which is the scope of sec .[ stoch_appr ] .the deterministic rate equations describe the time evolution of the densities , , and for the species , , and ; they read where the dot stands for the time derivative . these equationsdescribe a well - mixed system , without any spatial correlations , as naturally implemented in urn models or , equivalently , infinite dimensional lattice systems or complete graphs . in the following , the eqs .( [ re ] ) are discussed and , from their properties , we gain intuition on the effects of stochasticity . already from the basic reactions ( [ react ] ) we have noticed that the total number of individuals is conserved , which is a property correctly reproduced by the the rate equations ( [ re ] ) . setting the total density , meaning the sum of the densities , , and , to unity , we obtain for all times .only two out of the three densities are thus independent , we may view the time evolution of the densities in a two - dimensional phase space .( [ re ] ) together with ( [ total_dens ] ) admit three trivial ( absorbing ) fixed points : ; ; and .they denote states where only one of the three species survived , the other ones died out .in addition , the rate equations ( [ re ] ) also predict the existence of a fixed point which corresponds to a reactive steady state , associated with the coexistence of all three species : to determine the nature of this fixed point , we observe that another constant of motion exists for the rate equations ( [ re ] ) , namely the quantity does not evolve in time .in contrast to the total density ( [ total_dens ] ) , this constant of motion is only conserved by the rate equations but does not stem from the reaction scheme ( [ react ] ) .hence , when considering the stochastic version of the cyclic model , the total density remains constant but the expression ( [ const2 ] ) will no longer be a conserved quantity .the above fixed point ( [ fp - c ] ) and constant of motion ( [ const2 ] ) have been derived and discussed also within the framework of game theory , see e.g. ref . . in fig .[ simplex_as ] , we depict the ternary phase space for the densities , , and : the solutions of the rate equations ( [ re ] ) are shown for different initial conditions and a given set of rates , , and . as the rate equations ( [ re ] ) are nonlinear in the densities , we can not solve them analytically , but use numerical methods .due to the constant of motion , the solutions yield cycles around the reactive fixed point ( thus corresponding to case 3 in durrett and levin s classification ) . the three trivial steady states , corresponding to saddle points within the linear analysis , are the edges of the simplex .the reactive stationary state , as well as the cycles , are neutrally stable , stemming from the existence of the constant of motion ( [ const2 ] ) . especially , the reactive fixed point is a _ center fixed point_. 
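the neutrally stable orbits and the two conserved quantities are easy to check numerically. a minimal sketch for equal reaction rates follows ( with one consistent assignment of the cyclic dominance, since the code is only illustrative ):

import numpy as np
from scipy.integrate import odeint

def cyclic_lv(y, t, k=1.0):
    # deterministic rate equations with equal rates: a invades b, b invades c, c invades a
    a, b, c = y
    return [k * a * (b - c), k * b * (c - a), k * c * (a - b)]

t = np.linspace(0.0, 60.0, 3001)
a, b, c = odeint(cyclic_lv, [0.5, 0.3, 0.2], t).T
print(np.ptp(a + b + c))   # total density stays constant (spread ~ 0)
print(np.ptp(a * b * c))   # for equal rates the product a*b*c is the constant of motion

any initial condition in the interior of the simplex therefore stays on its closed orbit forever in the deterministic description.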
the boundary of the simplex denotes states where at least one of the three species died out ; as cyclic dominance is lost , states that have reached this boundary will evolve towards one of the edges , making the boundary _ absorbing_. .the rate equations predict cycles , which are shown in black .their linearization around the reactive fixed point is solved in proper coordinates ( blue or dark gray ) .the red ( or light gray ) erratic flow denotes a single trajectory in a finite system ( ) , obtained from stochastic simulations .it spirals out from the reactive fixed point , eventually reaching an absorbing state .[ simplex_as ] ] the nonlinearity of eqs .( [ re ] ) induces substantial difficulties in the analytical treatment .however , much can already be inferred from the linearization around the reactive fixed point ( [ fp - c ] ) , which we will consider in the following .we therefore introduce the deviations from the reactive fixed point , denoted as : using conservation of the total density ( [ total_dens ] ) , we can eliminate , and the remaining linearized equations ( [ re ] ) may be put into the form , with the vector and the matrix the reactive fixed point is associated to the eigenvalues of , where is given by oscillations with this frequency arise in its vicinity . in proper coordinates , these oscillations are harmonic , being the solution of , with .for illustration , we have included these coordinates , in which the solutions take the form of circles around the origin , in fig .[ simplex_as ] .the linear transformation is given by the equations are easily solved : eventually , we obtain the solutions for the linearized rate equations : \cos(\omega_0 t)\cr & + \omega_0\bigg\{\frac{1}{k_c}[a(0)-a^*]+\frac{k_b+k_c}{k_bk_c}[b(0)-b^*]\bigg\}\sin(\omega_0 t)\cr \label{re - lin - sol}\end{aligned}\ ] ] where and follow by cyclic permutations .to establish the validity of the linear analysis ( [ a - lin])-([re - lin - sol ] ) , we compare the ( numerical ) solution of the rate equations ( [ re ] ) with the linear approximations ( [ re - lin - sol ] ) . as shown in fig .[ dens_comp ] , when , the agreement between the nonlinear rate equations ( [ re ] ) and the linear approximation ( [ re - lin - sol ] ) is excellent , both curves almost coincide . on the other hand , the nonlinear terms appearing in eqs .( [ re ] ) become important already when and are responsible for significant discrepancies both in the amplitudes and frequency from the predictions of eq .( [ re - lin - sol ] ) .( color online ) the deterministic time - evolution of the density for small and large amplitudes .the prediction ( [ re - lin - sol ] ) , shown in black , is compared to the numerical solution ( red or gray ) of the rate equations ( [ re ] ) . for small amplitudes ( ) ,both coincide .however , for large amplitudes ( ) , they considerably differ both in amplitude and frequency .we used reaction rates . ]we now aim at introducing a measure of distance to the reactive fixed point within the phase portrait . in the next section , this quantity will help quantify effects of stochasticity . as it was recently proved useful in a related context , we aim at taking the structure of cycles predicted by the deterministic equations , see fig .[ simplex_as ] , into account by requiring that the distance should not change on a given cycle . 
motivated from the constant of motion ( [ const2 ] ) ,we introduce with the normalization factor ( see below ) being conserved by the eqs .( [ re ] ) , remains constant on every deterministic cycle .as it vanishes at the reactive fixed point and monotonically grows when departing from it , yields a measure of the distance to the latter . expanding the radius in small deviations from the reactive fixed point results in \cr & + o({\bf x}^2)\quad .\label{r - x}\end{aligned}\ ] ] in the variables , with our choice for , it simplifies to corresponding to the radius of the deterministic circles , which emerge in the variables .the fact that the number of particles is _ finite _ induces fluctuations that are not accounted for by eqs .( [ re ] ) . in the following ,our goal is to understand the importance of fluctuations and their effects on the deterministic picture ( [ re ] ) .we show that , due to the neutrally stable character of the deterministic cycles , fluctuations have drastic consequences . intuitively , we expect that in the presence of stochasticity , each trajectory performs a random walk in the phase portrait , interpolating between the deterministic cycles ( as will be revealed by considering ) , eventually reaching the boundary of the phase space .there , the cyclic dominance is completely lost , as one of the species gets extinct .of the two remaining ones , one species is defeating the other , such that the latter soon gets extinct as well , leaving the other one as the only survivor ( this corresponds to one of the trivial fixed points , the edges of the ternary phase space ) .we thus observe the boundary to be absorbing , and presume the system to always end up in one of the absorbing states .a first indication of the actual emergence of this scenario can be inferred from the stochastic trajectory shown in fig .[ simplex_as ] .in this section , we set up a stochastic description for the cyclic lotka - volterra model in the urn formulation with finite number of individuals . starting from the master equation of the stochastic process , we obtain a fokker - planck equation for the time evolution of the probability of finding the system in state at time .it allows us to gain a detailed understanding of the stochastic system .in particular , we will find that , as anticipated at the end of the last section , after long enough time , the system reaches one of the absorbing states .our main result is the time - dependence of the extinction probability , being the probability that , starting at a situation corresponding to the reactive fixed point , after time two of the three species have died out . it is obtained through mapping onto a known first - passage problem .we compare our analytical findings to results from stochastic simulations . for the sake of clarity and without loss of generality , throughout this section , the case of equal reaction rates is considered .details on the unequal rates situation are relegated to appendix [ app_gen_rates ] .we carried out extensive stochastic simulations to support and corroborate our analytical results .an efficient simulation method originally due to gillespie was implemented for the reactions ( [ react ] ) .time and type of the next reaction taking place are determined by random numbers , using the poisson nature of the individual reactions . 
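a stripped - down version of such a simulation is sketched below for equal rates; the overall time normalization is a simple choice and may differ from the convention used in the text by a factor of order one:

import numpy as np

def gillespie_cyclic_lv(N, t_max, seed=0):
    # Gillespie dynamics for the three cyclic reactions AB->AA, BC->BB, CA->CC
    rng = np.random.default_rng(seed)
    n = np.array([N // 3, N // 3, N - 2 * (N // 3)], dtype=float)  # counts of A, B, C
    t = 0.0
    while t < t_max and np.count_nonzero(n) == 3:
        a, b, c = n / N
        rates = N * np.array([a * b, b * c, c * a])   # propensities of the three reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)             # Poissonian waiting time
        r = rng.choice(3, p=rates / total)            # which reaction fires
        n[r] += 1.0                                   # the dominating species gains one
        n[(r + 1) % 3] -= 1.0                         # the dominated species loses one
    return t, n / N

print(gillespie_cyclic_lv(N=300, t_max=1.0e6))

the loop stops at the first extinction; afterwards the surviving pair would anyway evolve towards one of the single - species absorbing states.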
for the extinction probability , to unravel the universal time - scaling , system sizes ranging from to considered , with sample averages over realizations .let us start with the master equation of the processes ( [ react ] ) .we derive it in the variables , which were introduced in ( [ x ] ) as the deviations of the densities from the reactive fixed point . using the conservation of the total density , , is eliminated and kept as independent variables .the master equation for the time - evolution of the probability of finding the system in state at time thus reads where denotes the transition probability from state to the state within one time step ; summation extends over all possible changes .we choose the unit of time so that , on average , every individual reacts once per time step .+ according to the kramers - moyal expansion of the master equation to second order in , this results in the fokker - planck equation +\frac{1}{2}\partial_i\partial_j[\mathcal{b}_{ij}({\bf x})p({\bf x},t ) ] ~. \label{fokker_planck}\ ] ] here , the indices stand for and ; in the above equation , the summation convention implies summation over them .the quantities and are , according to the kramers - moyal expansion : note that is symmetric . for the sake of clarity , we outline the calculation of : the relevant changes in the density result from the basic reactions ( [ react ] ) , they are in the first reaction , in the second and in the third .the corresponding rates read for the first reaction ( the prefactor of enters due to our choice of time scale , where reactions occur in one unit of time ) , and for the third , resulting in .the other quantities are calculated analogously , yielding + with .van kampen s linear noise approximation further simplifies these quantities . in this approach, the values are expanded around to the first order .as they vanish at the reactive fixed point , we obtain : where the matrix elements are given by the matrix , already given in ( [ a - lin ] ) , embodies the deterministic evolution , while the stochastic noise is encoded in .to take the fluctuations into account within the van kampen expansion , one approximates by its values at the reactive fixed point .hence , we find : and the corresponding fokker - planck equation reads +\frac{1}{2}\mathcal{b}_{ij}\partial_i\partial_jp({\bf x},t ) \quad .\label{gen - f - p}\ ] ] for further convenience , we now bring eq .( [ gen - f - p ] ) into a more suitable form by exploiting the polar symmetry unveiled by the variables * y*. as for the linearization of eqs .( [ re ] ) , it is useful to rely on the linear mapping , with .interestingly , it turns out that this transformation diagonalizes .one indeed finds , with in the variables , the fokker - planck equation ( [ gen - f - p ] ) takes the simpler form ({\bf y},t)\cr & + \frac{1}{12n}[\partial_{y_a}^2+\partial_{y_b}^2]p({\bf y},t)\quad .\label{f - p - y}\end{aligned}\ ] ] to capture the structure of circles predicted by the deterministic approach , we introduce polar coordinates : note that in the vicinity of the reactive fixed point , denotes the distance , which is now a random variable : . 
the fokker - planck equation ( [ f - p - y ] )eventually turns into (r,\phi , t)~ .\cr \label{pol - f - p}\end{aligned}\ ] ] the first term on the right - hand - side of this equation describes the system s deterministic evolution , being the motion on circles around the origin at frequency .stochastic effects enter through the second term , which corresponds to isotropic diffusion in two dimensions with diffusion constant .note that it vanishes in the limit of infinitely many agents , i.e. when .if we consider a spherically symmetric probability distribution at time , i.e. independent of the angle at , then this symmetry is conserved by the dynamics according to eq .( [ pol - f - p ] ) and one is left with a radial distribution function .the fokker - planck equation ( [ pol - f - p ] ) thus further simplifies and reads (r , t)\quad .\label{f_p_r}\ ] ] this is the diffusion equation in two dimensions with diffusion constant , expressed in polar coordinates , for a spherically symmetric probability distribution .this is the case on which we specifically focus in the following . within our approximations around the reactive fixed point ,the probability distribution is thus the same as for a system performing a two - dimensional random walk in the variables .intuitively , such behavior is expected and its origin lies in the neutrally stable character of the cycles of the deterministic solution of the rate equations ( [ re ] ) . the cycling around the reactive fixed point does not yield additional effects when considering spherically symmetric probability distributions . furthermore , due to the existence of the constant of motion ( [ const2 ] ) , the neutral stability does not only hold in the vicinity of the reactive fixed point , but in the whole ternary phase space .we thus expect the distance from the reactive fixed point , a random variable , to obey a diffusion - like equation in the whole phase space ( and not only around the reactive fixed point ) . actually , qualitatively identical behavior is also found in the general case of non - equal reaction rates , , and ( see appendix [ app_gen_rates ] ) , as the constant of motion ( [ const2 ] ) again guarantees the neutral stability of the deterministic cycles . in this subsection , we will use the fokker - planck equation ( [ f_p_r ] ) to investigate the time - dependence of fluctuations around the reactive fixed point .in particular , we are interested in the time evolution of mean deviation from the latter . the average square distance will be found to grow linearly in time , rescaled by the number of individuals , before saturating .the frequency spectrum arising from erratic oscillations around the reactive fixed point is of further interest to characterize the stochastic dynamics : we will show that the frequency ( [ freq_gen_rates ] ) predicted by the deterministic approach emerges as a pole in the power spectrum .( color online ) the averaged square radius as a function of the rescaled time .the blue ( or dark gray ) curve represents a single trajectory , which is seen to fluctuate widely .the linear black line indicates the analytical prediction ( [ r_time_evol ] ) , the red ( or light gray ) one corresponds to sample averages over realizations in stochastic simulations .hereby , we used a system size of . ] being interested in the vicinity of the reactive fixed point , in this subsection we ignore the fact that the boundary of the ternary phase space is absorbing . 
then, the solution to the fokker - planck equation ( [ f_p_r ] ) with the initial condition , where the prefactor ensures normalization , is simply a gaussian : this result predicts a broadening of the probability distribution in time , as the average square radius increases linearly with increasing time : as the time scales linearly with , increasing the system size results in rescaling the time and the broadening of the probability distribution takes longer . to capture these findings, we introduce the rescaled variable . in fig .[ radius_time_evol ] , we compare the time evolution of the squared radius obtained from stochastic simulations with the prediction of eq .( [ r_time_evol ] ) and find a good agreement in the linear regime around the reactive fixed point , where .in fact , for short times , displays a linear time dependence , with systematic deviation from ( [ r_time_evol ] ) at longer times .we understand this as being in part due to the linear approximations used to derive ( [ r_time_evol ] ) , but also , and more importantly , to the fact that so far we ignored the absorbing character of the boundary .we conclude that the latter invalidates the gaussian probability distribution for longer times .this issue , which requires a specific analysis , is the scope of section [ sect - ext - prob ] , where a proper treatment is devised .( color online ) time evolution of the variances of the densities and when starting at the reactive fixed point .the blue ( or dark gray ) curves correspond to a single realization , while the red ( or light gray ) ones denote averages over 1000 samples .our results are obtained from stochastic simulations with a system size of .the black line indicates the analytical predictions . ] from the finding together with the spherical symmetry of ( [ gaussian ] ) , resulting in , we readily obtain the variances of the densities and : according to these results , the average square deviations of the densities from the reactive fixed point grow linearly in the rescaled time , thus exhibiting the same behavior as we already found for the average squared radius ( [ r_time_evol ] ) . in fig .[ fluct ] , these findings are compared to stochastic simulations for small times , where the linear growth is indeed recovered .( color online ) variances of the densities and when starting at a cycle away from the fixed point .the black lines denote our analytical results , while the red ( or gray ) ones are obtained by stochastic simulations as averages over 1000 realizations .they are seen to agree for small times , while for larger ones the stochasticity of the system induces erratic oscillations .these results were obtained when starting from . ]we may consider fluctuations of the densities , , and not only around the reactive stationary state , as obtained in ( [ fluct_dens ] ) , but also as the deviations from the deterministic cycles .we consider the latter in the linear approximation around the fixed point , given by eq .( [ re - lin - sol ] ) .the fluctuations around them are again described by the fokker - planck equation ( [ fokker_planck ] ) with and given by eqs .( [ alpha_b ] ) , but now , , and are the deviations from the deterministic cycle : , where , , and obey eqs .( [ re ] ) and characterize the deterministic cycles given by eq .( [ re - lin - sol ] ) . 
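the comparison shown in fig. [ fluct ] can be reproduced numerically by running many realizations of the urn simulation from the first sketch above and measuring the sample variances of the densities on a common time grid; the linear early-time growth in the rescaled time t/n is the prediction of eq. ( [ fluct_dens ] ), whose prefactor we do not restate here. grid sizes and realization counts below are arbitrary.

```python
import numpy as np

def density_variances(N=500, t_max=30.0, n_real=200, n_grid=60, seed=1):
    """Sample variances of the densities a and b around the reactive fixed
    point, estimated from repeated urn-model runs (uses simulate_clv from the
    sketch above) and reported on a common time grid for comparison with the
    linear prediction."""
    grid = np.linspace(0.0, t_max, n_grid)
    samples = np.empty((n_real, n_grid, 3))
    for r in range(n_real):
        t, traj = simulate_clv(N=N, t_max=t_max, seed=seed + r)
        # piecewise-constant lookup of the trajectory on the common grid
        idx = np.searchsorted(t, grid, side="right") - 1
        samples[r] = traj[idx]
    var_a = samples[:, :, 0].var(axis=0)
    var_b = samples[:, :, 1].var(axis=0)
    return grid, var_a, var_b   # both should grow roughly linearly in t/N at early times
```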
again performing van kampen s linear noise approximation, we obtain a fokker - planck equation of the type ( [ gen - f - p ] ) , now with the matrices note that the entries of the above matrices now depend on time via and . the fokker - planck equation ( [ gen - f - p ] ) yields equations for the time evolution of the fluctuations : they may be solved numerically for , , and , yielding growing oscillations . in fig .[ fluct_as ] we compare these findings to stochastic simulations .we observe a good agreement for small rescaled times , while the oscillations of the stochastic results become more irregular at longer times .+ recently , it has been shown that the frequency predicted by the deterministic rate equations appears in the stochastic system due to a `` resonance mechanism '' .internal noise present in the system covers all frequencies and induces excitations ; the largest occurring for the `` resonant '' frequency predicted by the rate equations . here ,following the same lines , we address the issue of the characteristic frequency in the stochastic cyclic lotka - volterra model . .it agrees with stochastic simulations ( solid ) .the inset shows the erratic oscillations of the density of one of the species for one realization .the system size considered is .[ urn_power_spectrum ] ] in the vicinity of the reactive fixed point , the deterministic rate equations ( [ re ] ) predict density oscillations with frequency given in eq .( [ freq_gen_rates ] ) . for the stochastic model ,we now show that a spectrum of frequencies centered around this value arises . the most convenient way to compute this power spectrum from the fokker - planck equation ( [ gen - f - p ] ) is through the set of equivalent langevin equations : with the white noise covariance matrix : . from the fouriertransform of eq .( [ langevin ] ) , it follows that \cr & = \frac{4}{3n}\frac{1 + 3\omega^2}{(1 - 3\omega^2)^2 } \quad .\end{aligned}\ ] ] the power spectrum has a pole at the characteristic frequency already predicted from the rate equations ( [ re ] ) , . for increasing system size ,the power spectrum displays a sharper alignment with this value . in fig .[ urn_power_spectrum ] we compare our results to stochastic simulations and find an excellent agreement , except for the pole , where the power spectrum in finite systems obviously has a finite value .these results were obtained in the vicinity of the reactive fixed point , where the linear analysis ( [ re - lin - sol ] ) applies . as already found in the deterministic description ( see fig .[ dens_comp ] ) , when departing from the center fixed point , nonlinearities will alter the characteristic frequency .so far , within the stochastic formulation , the fluctuations around the reactive steady state were found to follow a gaussian distribution , linearly broadening in time .however , when approaching the absorbing boundary , the latter alters this behavior , see fig .[ radius_time_evol ] . in the following, we will incorporate this effect in our quantitative description .it plays an essential role when discussing the extinction probability , which is the scope of this subsection .the probability that one or more species die out in the course of time is of special interest within population dynamics from a biological viewpoint . when considering meta - populations formed of local patches , such questions were e.g. raised in ref . . 
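the resonance picture can also be checked directly in simulation: the power spectrum of the density fluctuations of one species, estimated from a single long run of the urn model above, should display a peak near the deterministic frequency. the estimator below is a plain periodogram (no windowing or segment averaging) and uses ordinary, not angular, frequency; its overall normalization convention is arbitrary.

```python
import numpy as np

def power_spectrum(N=1000, t_max=500.0, seed=2):
    """Periodogram of the density fluctuations of one species from a single
    long urn-model run (uses simulate_clv from above).  The sampling step of
    the urn model is dt = 1/N; if extinction occurs before t_max the run is
    simply shorter."""
    t, traj = simulate_clv(N=N, t_max=t_max, seed=seed)
    a = traj[:, 0] - traj[:, 0].mean()      # deviations of the species-A density
    dt = 1.0 / N
    spec = np.abs(np.fft.rfft(a))**2 * dt / len(a)   # up to a normalization convention
    freq = np.fft.rfftfreq(len(a), d=dt)             # ordinary (not angular) frequency
    return freq, spec
```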
here, we consider the extinction probability that , starting at the reactive fixed point , after time two of the three species have died out . in our formulation , this corresponds to the probability that after time the system has reached the absorbing boundary of the ternary phase space , depicted in fig .[ simplex_as ] . considering states far from the reactive fixed point, we now have to take into account the absorbing nature of the boundary .this feature is incorporated in our approach by discarding the states having reached the boundary , so that a vanishing density of states occurs there . as the normalizationis lost , we do no longer deal with probability distributions : in fact , discarding states at the boundary implies a time decay of the integrated density of states [ which , for commodity , we still refer to as ] . ignoring the nonlinearities , the problem now takes the form of solving the fokker - planck equation ( [ f_p_r ] ) for an initial condition , and with the requirement that has to vanish at the boundary .hereby , the triangular shaped absorbing boundary is regarded as the outermost ( degenerate ) cycle of the deterministic solutions . as the linearization in the variables around the reactive fixed point maps the cycles onto circles ( see above ) , the triangular boundary is mapped to a sphere as well .although the linearization scheme on which these mappings rely is inaccurate in the vicinity of the boundary , it is possible to incorporate nonlinear effects in a simple and pragmatic manner , as shown below .however , first let us consider the linearized problem , being a first - passage to a sphere of radius , which is , e.g. , treated in chapter 6 of ref . . the solution is known to be a combination of modified bessel functions of the first and second kind . actually, the laplace transform of the density of states reads where is the diffusion constant ; and denote the bessel function of the first , resp .second , kind and of order zero ; and is the radius of the absorbing sphere .it is normalized at the initial time ; however , for later times , the total number of states will decay in time , as states are absorbed at the boundary .equipped with this result , we are now in a position to calculate the extinction probability . it can be found by considering the probability current at the absorbing boundary , namely : whose laplace transform is given by the extinction probability at time is obtained from by integrating over time until time : .therefore , the laplace transform of reads .again , we notice that we can write this equation in a form that depends on the time only via the rescaled time : hence , for different system sizes , one obtains the same extinction probability provided one considers the same value for the scaling variable ( see fig .[ urn_transition ] ) .we can not solve the inverse laplace transform appearing in eq .( [ ext_prob_u ] ) , but might be expanded according to as considering the first three terms in this expansion , i.e contributions up to , yields the approximate result \quad . \label{approx_ext_prob}\ ] ] numerically , we have included higher terms .the results for contributions up to in ( [ exp_i0 ] ) are shown in fig .[ urn_transition ] .an estimate of the effective distance to the absorbing boundary is determined by plugging either , or , or into eq .( [ r - gen ] ) , which yields . 
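independently of the first-passage calculation, the extinction probability and its scaling in the rescaled time t/n can be estimated directly from simulations, which is also how the simulation points in fig. [ urn_transition ] are obtained. the sketch below records, for several (small, purely illustrative) system sizes, the fraction of realizations in which two species have died out by rescaled time t/n; it reuses simulate_clv from above and is deliberately unoptimized, so the defaults are kept small.

```python
import numpy as np

def extinction_probability(sizes=(30, 60, 120), n_real=200, t_over_n_max=1.2, seed=3):
    """Fraction of realizations in which two of the three species have died
    out by time t, as a function of the rescaled time t/N, estimated from
    repeated urn-model runs started at the reactive fixed point.
    Pure-Python loops, hence slow; keep the defaults small for a quick check."""
    grid = np.linspace(0.0, t_over_n_max, 40)         # grid in t/N
    curves = {}
    for N in sizes:
        ext_rescaled = []
        for r in range(n_real):
            t, traj = simulate_clv(N=N, t_max=t_over_n_max * N, seed=seed + 1000 * N + r)
            final_counts = np.rint(traj[-1] * N).astype(int)
            if np.count_nonzero(final_counts) == 1:   # absorbed: a single species is left
                ext_rescaled.append(t[-1] / N)
            else:
                ext_rescaled.append(np.inf)           # still coexisting at t_max
        ext_rescaled = np.asarray(ext_rescaled)
        curves[N] = np.array([(ext_rescaled <= x).mean() for x in grid])
    return grid, curves    # curves for different N should approximately collapse
```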
from fig .[ urn_transition ] , we observe the extinction probability to be overestimated by ( [ approx_ext_prob ] ) with .this stems from nonlinearities altering the analysis having led to ( [ approx_ext_prob ] ) .however , adjusting on physical grounds , we are able to capture these effects .another estimate of is obtained by considering the expression eq .( [ r - x ] ) , which arises from a linear analysis . asthe extinction of two species requires , in the expression eq .( [ r - x ] ) this leads to the estimate for . the comparison with fig .[ urn_transition ] , shows that ( [ approx_ext_prob ] ) together with is a lower bound of .a simple attempt to interpolate between the above estimates is to consider the mean value of these radii , , which happens to yield an excellent agreement with results from stochastic simulations , see fig .[ urn_transition ] . for the latter ,we have considered systems of , and individuals . rescaling the time according to ,they are seen to collapse on a universal curve .this is well described by eq .( [ approx_ext_prob ] ) with as the radius of the absorbing boundary .( color online ) the extinction probability when starting at the reactive fixed point , depending on the rescaled time .stochastic simulations for different system sizes ( : triangles ; : boxes ; : circles ) are compared to the analytical prediction ( [ ext_prob_u ] ) .the left ( blue or dark gray ) is obtained from , the right one ( red or light gray ) from , and the middle ( black ) corresponds to the average of both : . ] to conclude , we consider the mean time that it takes until one species becomes extinct . from we find for the rescaled mean absorption time which for our best estimate yields a value of .motivated by recent _ in vitro _ and _ in vivo _ experiments aimed at identifying mechanisms responsible for biodiversity in populations of _ escherichia coli _ , we have considered the stochastic version of the ` rock - paper - scissors ' , or three - species cyclic lotka - volterra , system within an urn model formulation .this approach allowed us to quantitatively study the effect of finite - size fluctuations in a system with a large , yet _ finite _ , number of agents .while the classical rate equations of the cyclic lotka - volterra model predict the existence of one ( neutrally stable ) center fixed point , associated with the _ coexistence _ of all the species , this picture is _ dramatically _ invalidated by the fluctuations which unavoidably appear in a finite system .the latter were taken into account by a fokker - planck equation derived from the underlying master equation through a van kampen expansion . within this scheme, we were able to show that the variances of the densities of individuals grow in time ( first linearly ) until extinction of two of the species occurs .in this context , we have investigated the probability for such extinction to occur at a given time . as a main result of this work, we have shown that this extinction probability is a function of the scaling variable . exploiting polar symmetries displayed by the deterministic trajectories in the phase portrait and using a mapping onto a classical first - passage problem, we were able to provide analytic estimates ( upper and lower bounds ) of the extinction probability , which have been successfully compared to numerical computation . from our results, it turns out that the classical rate equation predictions apply to the urn model with a finite number of agents only for short enough time , i.e. 
in the regime . as time increases, the probability of extinction grows , asymptotically reaching for , so that , for finite , fluctuations are _ always _ responsible for extinction and thus dramatically jeopardize the possibility of coexistence and biodiversity .interestingly , these findings are in qualitative agreement with those ( both experimental and numerical ) reported in ref . , where it was found that in a well mixed environment ( as in the urn model considered here ) two species get extinct . while this work has specifically focused on the stochastic cyclic lotka - volterra model , the addressed issues are generic .indeed , we think that our results and technical approach , here illustrated by considering the case of a paradigmatic model , might actually shed further light on the role of fluctuations and the validity of the rate equations in a whole class of stochastic systems .in fact , while one might believe that fluctuations in an urn model should always vanish in the thermodynamic limit , we have shown that this issue should be dealt with due care : this is true for systems where the rate equations predict the existence of an asymptotically stable fixed point , which is always reached by the stochastic dynamics .in contrast , in systems where the deterministic ( rate equation ) description predicts the existence of ( neutrally stable ) center fixed points , such as the cyclic lotka - volterra model , fluctuations have dramatic consequences and hinder biodiversity by being responsible ( at long , yet finite , time ) for extinction of species . in this case , instead of a deterministic oscillatory behavior around the linearly ( neutrally ) stable fixed point , the stochastic dynamics always drives the system toward one of its absorbing states .thus , the absorbing fixed points , predicted to be linearly unstable within the rate equation theory , actually turn out to be the _ only stable fixed points _available at long time .we would like to thank u. c. tuber and p. l. krapivsky for helpful discussions , as well as a. traulsen and j. c. claussen for having made manuscript available to us prior to publication .m.m . gratefully acknowledges the support of the german alexander von humboldt foundation through the fellowship no .iv - scz/1119205 stp .in section [ stoch_appr ] , when considering the stochastic approach to the cyclic lotka - volterra model , for the sake of clarity we have specifically turned to the situation of equal reaction rates , . in this appendix , we want to provide some details on the general case with unequal rates . while the mathematical treatment becomes more involved , we will argue that the qualitative general situation still follows along the same lines as the ( simpler ) case that we have discussed in detail in sec .[ stoch_appr ] .the derivation of the fokker - planck equation ( [ gen - f - p ] ) is straightforward , following the lines of subsection [ master - f - p ] .the matrix remains unchanged and is given in eq .( [ a - lin ] ) , for we now obtain : the corresponding fokker - planck equation reads +\frac{1}{2}\mathcal{b}_{ij}\partial_i\partial_jp({\bf x},t ) \quad .\ ] ] again , we aim at benefitting from the cyclic structure of the deterministic solutions , and perform a variable transformation to , with given in eq .( [ s - matrix ] ) . as already found in subsection [ det - eq ], turns into , such that in the variables the deterministic solutions correspond to circles around the origin . 
for the stochastic part , entering via , a technical difficulty arises . it is transformed into being no longer proportional to the unit matrix as in the case of equal reaction rates .we can do slightly better by using an additional rotation , , with rotation angle this variable transformation leaves invariant , but brings to diagonal form , with unequal diagonal elements .the stochastic effects thus correspond to _anisotropic diffusion_. however , for large system size , the effects of the anisotropy are washed out : the system s motion on the deterministic cycles , described by , occurs on a much faster timescale then the anisotropic diffusion , resulting in an averaging over the different directions . to calculate the time evolution of the average deviation from the reactive fixed point , , we start from the the fluctuations in , which fulfill the equations using eq .( [ r - y ] ) , we obtain the dependence on the deterministic part has dropped out , and the solution to the above equation with initial condition is a linear increase in the rescaled time : \frac{t}{n}\quad,\ ] ] valid around the reactive fixed point . as in the case of equal reaction rates , we have a linear dependence , corresponding to a two - dimensional random walk . as a conclusion of the above discussion , the general case of unequal reaction rates qualitatively reproduces the behavior of the before discussed simplest situation of equal rates ( confirmed by stochastic simulations ) .the latter turns out to already provide a comprehensive understanding of the system . | cyclic dominance of species has been identified as a potential mechanism to maintain biodiversity , see e.g. b. kerr , m. a. riley , m. w. feldman and b. j. m. bohannan [ nature * 418 * , 171 ( 2002 ) ] and b. kirkup and m. a. riley [ nature * 428 * , 412 ( 2004 ) ] . through analytical methods supported by numerical simulations , we address this issue by studying the properties of a paradigmatic non - spatial three - species stochastic system , namely the ` rock - paper - scissors ' or cyclic lotka - volterra model . while the deterministic approach ( rate equations ) predicts the coexistence of the species resulting in regular ( yet neutrally stable ) oscillations of the population densities , we demonstrate that fluctuations arising in the system with a _ finite number of agents _ drastically alter this picture and are responsible for extinction : after long enough time , two of the three species die out . as main findings we provide analytic estimates and numerical computation of the extinction probability at a given time . we also discuss the implications of our results for a broad class of competing population systems . |
optical networks have traditionally employed three main switching paradigms , namely circuit switching , burst switching , and packet switching , which have extensively studied respective benefits and limitations . in order to achieve the predictable network service of circuit switching while enjoying some of the flexibilities of burst and packet switching ,_ dynamic circuit switching _ has been introduced .dynamic circuit switching can be traced back to research toward differentiated levels of blocking rates of calls .today , a plethora of network applications ranging from the migration of data and computing work loads to cloud storage and computing as well as high - bit rate e - science applications , e.g. , for remote scientific collaborations , to big data applications of governments , private organizations , and households are well supported by dynamic circuit switching . moreover , gaming applications benefit from predictable low - delay service provided by circuits , as do emerging virtual reality applications .also , circuits can aid in the timely transmission of data from continuous media applications , such as live or streaming video .video traffic is often highly variable and may require smoothing before transmission over a circuit or require a combination of circuit transport for a constant base bit stream and packet switched transport for the traffic burst exceeding the base bit stream rate .both commercial and research / education network providers have recently started to offer optical dynamic circuit switching services . while dynamic circuit switching has received growing research attention in core and metro networks , mechanisms for supporting dynamic circuit switching in passive optical networks ( pons ) , which are a promising technology for network access , are largely an open research area . as reviewed in section [ lit : sec ] , pon research on the upstream transmission direction from the distributed optical network units ( onus ) to the central optical line terminal ( olt )has mainly focused on mechanisms supporting packet - switched transport . while some of these packet - switched transport mechanisms support quality of service akin to circuits through service differentiation mechanisms , to the best of our knowledgethere has been no prior study of circuit - level performance in pons , e.g. , the blocking probability of circuit requests for a given circuit request rate and circuit holding time . in this article , we present the first circuit - level performance study of a pon with polling - based medium access control .we make three main original contributions towards the concept of efficiently supporting both * * dy**namic * * c**ircuit * * a**nd * * p**acket traffic in the upstream direction on a * pon * , which we refer to as * dycappon * : * we propose a novel dycappon polling cycle structure that exploits the dynamic circuit transmissions to mask the round - trip propagation delay for dynamic bandwidth allocation to packet traffic . * we develop a stochastic knapsack - based model of dycappon to evaluate the circuit - level performance , including the blocking probabilities for different classes of circuit requests .* we analyze the bandwidth sharing between circuit and packet traffic in dycappon and evaluate packet - level performance , such as mean packet delay , as a function of the circuit traffic .this article is organized as follows .we first review related work in section [ lit : sec ] . 
in section [ sec :model ] , we describe the considered access network structure and define both the circuit and packet traffic models as well as the corresponding circuit- and packet - level performance metrics . in section [ dycappon : sec ] , we introduce the dycappon polling cycle structure and outline the steps for admission control of dynamic circuit requests and dynamic bandwidth allocation to packet traffic . in section[ sec : analysis ] we analyze the performance metrics relating to the dynamic circuit traffic , namely the blocking probabilities for the different circuit classes .we also analyze the bandwidth portion of a cycle consumed by active circuits , which in turn determines the bandwidth portion available for packet traffic , and analyze the resulting mean delay for packet traffic . in section [ eval : sec ]we validate numerical results from our analysis with simulations and present illustrative circuit- and packet - level performance results for dycappon .we summarize our conclusions in section [ sec : conclusion ] and outline future research directions towards the dycappon concept .the existing research on upstream transmission in passive optical access networks has mainly focused on packet traffic and related packet - level performance metrics .a number of studies has primarily focused on differentiating the packet - level qos for different classes of packet traffic , e.g. , .in contrast to these studies , we consider only best effort service for the packet traffic in this article . in future work, mechanisms for differentiation of packet - level qos could be integrated into the packet partition ( see section [ dycappon : sec ] ) of the dycappon polling cycle .the needs of applications for transmission with predictable quality of service has led to various enhancements of packet - switched transport for providing quality of service ( qos ) .a few studies , e.g. , , have specifically focused on providing deterministic qos , i.e. , absolute guarantees for packet - level performance metrics , such as packet delay or jitter .several studies have had a focus on the efficient integration of deterministic qos mechanisms with one or several lower - priority packet traffic classes in polling - based pons , e.g., .the resulting packet scheduling problems have received particular attention . generally , these prior studies have found that fixed - duration polling cycles are well suited for supporting consistent qos service .similar to prior studies , we employ fixed - duration polling cycles in dycappon , specifically on a pon with a single - wavelength upstream channel .the prior studies commonly considered traffic flows characterized through leaky - bucket parameters that bound the long - term average bit rate as well as the size of sudden traffic bursts .most of these studies include admission control , i.e. , admit a new traffic flow only when the packet - level performance guarantees can still be met with the new traffic flow added to the existing flows . however , the circuit - level performance , i.e. , the probability of blocking ( i.e. , denial of admission ) of a new request has not been considered .in contrast , the circuits in dycappon provide absolute qos to constant bit rate traffic flows without bursts and we analyze the probability of new traffic flows ( circuits ) being admitted or blocked .this flow ( circuit ) level performance is important for network dimensioning and providing qos at the level of traffic flows . 
for completeness, we briefly note that a pon architecture that can provide circuits to onus through orthogonal frequency division multiplexing techniques on the physical layer has been proposed in .our study , in contrast , focuses on efficient medium access control techniques for supporting circuit traffic .a qos approach based on burst switching in a pon has been proposed in . to the best of our knowledge ,circuit level performance in pons has so far only been examined in for the specific context of optical code division multiplexing .we also note for completeness that large file transmissions in optical networks have been examined in , where scheduling of large data file transfers on the optical grid network is studied , in , where parallel transfer over multiple network paths are examined , and in , where files are transmitted in a burst mode , i.e. , sequentially .sharing of a general time - division multiplexing ( tdm ) link by circuit and packet traffic has been analyzed in several studies , e.g. .these queueing theoretic analyses typically employed detailed markov models and become computationally quite demanding for high - speed links . also , these complex existing models considered a given node with local control of all link transmissions . in contrast , we develop a simple performance model for the distributed transmissions of the onus that are coordinated through polling - based medium access control in dycappon .our dycappon model is accurate for the circuits and approximate for the packet service .more specifically , we model the dynamics of the circuit traffic , which is given priority over packet traffic up to an aggregate circuit bandwidth of in dycappon , with accurate stochastic knapsack modeling techniques in section [ percir : sec ] . in section [ pkt_perf : sec ] , we present an approximate delay model for the packet traffic , which in dycappon can consume the bandwidth left unused by circuit traffic . & transmission rate [ bit / s ] of upstream channel + & transm .rate limit for circuit service , + & number of onus + & one - way propagation delay [ s ] + + & bit rates [ bit / s ] for circuit classes + & aggregate circuit requests arrival rate [ circuits / s ] + & prob . that a request is for circuit type + & mean circuit bit rate [ bit / s ] of offered circuit traf .+ & mean circuit holding time [ s / circuit ] + & offered circuit traffic intensity ( load ) + , & mean [ bit ] and variance of packet size + & packet traffic intensity ( load ) ; is agg .packet + & generation rate [ packets / s ] at all onus + + & total cycle duration [ s ] , constant + & cycle duration ( rand . var . ) occupied by circuit traf .+ & mean per - cycle overhead time [ s ] for upstream + & transmissions ( report transm . 
times , guard times ) + + & state vector of numbers of circuits of class + & aggregate bandwidth of active circuits + & equilibrium probability for active circuits having + & aggregate bandwidth + + & blocking probability for circuit class + & mean packet delay [ s ] + we consider a pon with onus attached to the olt with a single downstream wavelength channel and a single upstream wavelength channel .we denote for the transmission bit rate ( bandwidth ) of a channel [ bits / s ] .we denote [ s ] for the one - way propagation delay between the olt and the equidistant onus .we denote [ s ] for the fixed duration of a polling cycle .the model notations are summarized in table [ not : tab ] .( 160,32 ) ( -10,0)(1,0)190 ( -10,20)(1,0)190 ( -15,21)(0,0)[b]olt ( -15,-1)(0,0)[t]onu ( -5,20 ) ( 1,-3)6.6 ( -10,0)(1,3)6.6 ( 18,0)(1,3)6.6 ( 52,20)(1,-3)6.6 ( 46,0)(1,3)6.6 ( 59,0)(1,3)6.6 ( 107,20)(1,-3)6.6 ( 101,0)(1,3)6.6 ( 112,0)(1,3)6.6 ( 162,20)(1,-3)6.6 ( 156,0)(1,3)6.6 ( 169,0)(1,3)6.6 ( 11,0.75)(0,0)[b] ( 2.5,9.75)(0,0)[b] ( 33,0.75)(0,0)[b] ( 52.5,0.75)(0,0)[b] ( 57,9.75)(0,0)[b] ( 81,0.75)(0,0)[b] ( 107.25,0.75)(0,0)[b ] ( 112.5,9.75)(0,0)[b] ( 137,0.75)(0,0)[b] ( 167.5,9.75)(0,0)[b] ( -3,20)(0,1)10 ( 52,20)(0,1)10 ( 107,20)(0,1)10 ( 162,20)(0,1)10 ( -10,11)(0,0 ) [ ] ( 46,11)(0,0 ) [ ] ( 102,11)(0,0 ) [ ] ( 157,11)(0,0 ) [ ] ( 7,25)(-1,0)10 ( 42,25)(1,0)10 ( 25,25)(0,0)[]cycle , dur . ( 62,25)(-1,0)10 ( 97,25)(1,0)10 ( 77.5,25)(0,0)[]cycle , dur . ( 117,25)(-1,0)10 ( 152,25)(1,0)10 ( 135.25,25)(0,0)[]cycle , dur . for circuit traffic , we consider classes of circuits with bandwidths .we denote [ requests / s ] for the aggregate poisson process arrival rate of circuit requests .a given circuit request is for a circuit of class , with probability .we denote the mean circuit bit rate of the offered circuit traffic by .we model the circuit holding time ( duration ) as an exponential random variable with mean .we denote the resulting offered circuit traffic intensity ( load ) by . for packet traffic ,we denote and for the mean and the variance of the packet size [ in bit ] , respectively .we denote for the aggregate poisson process arrival rate [ packets / s ] of packet traffic across the onus and denote for the packet traffic intensity ( load ) . throughout, we define the packet sizes and circuit bit rates to include the per - packet overheads , such as the preamble for ethernet frames and the interpacket gap , as well as the packet overheads when packetizing circuit traffic for transmission . for circuit traffic , we consider the blocking probability , i.e. , the probability that a request for a class circuit is blocked , i.e. , can not be accommodated within the transmission rate limit for circuit service .we define the average circuit blocking probability as . for packet traffic ,we consider the mean packet delay defined as the time period from the instant of packet arrival at the onu to the instant of complete delivery of the packet to the olt .in order to provide circuit traffic with consistent upstream transmission service with a fixed circuit bandwidth , dycappon employs a polling cycle with a fixed duration [ s ] . 
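for the verifying simulations referred to later, the circuit-request traffic model just described (poisson request arrivals of aggregate rate lambda, class selection with probabilities p_k, exponentially distributed holding times) can be generated with a few lines of python; the class bit rates and probabilities passed to the function are placeholders, since the concrete values of table [ cir:tab ] belong to the evaluation section.

```python
import numpy as np

def generate_circuit_requests(lam, probs, rates_bps, mean_hold, t_end, seed=0):
    """Circuit-request trace for the traffic model described above:
    Poisson arrivals of aggregate rate lam [requests/s], each request of
    class k with probability probs[k] (bit rate rates_bps[k]), and an
    exponentially distributed holding time with mean mean_hold [s].
    Returns a list of (arrival_time, class_index, holding_time) tuples."""
    rng = np.random.default_rng(seed)
    t, requests = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam)           # exponential inter-arrival times
        if t >= t_end:
            break
        k = rng.choice(len(probs), p=probs)       # class of this request
        requests.append((t, k, rng.exponential(mean_hold)))
    return requests

# illustrative call with placeholder values (four classes, multiples of 52 Mb/s)
trace = generate_circuit_requests(lam=0.5, probs=[0.4, 0.3, 0.2, 0.1],
                                  rates_bps=[52e6, 208e6, 624e6, 1248e6],
                                  mean_hold=60.0, t_end=3600.0)
```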
an active circuit with bandwidth is allocated an upstream transmission window of duration in every cycle .thus , by transmitting at the full upstream channel bit rate for duration once per cycle of duration , the circuit experiences a transmission bit rate ( averaged over the cycle duration ) of .we let denote the aggregate of the upstream transmission windows of all active circuits in the pon in cycle , and refer to as the circuit partition duration .we refer to the remaining duration as the packet partition of cycle . as illustrated in fig .[ fig : cycle ] , a given cycle consists of the circuit partition followed by the packet partition . during the packet partition of each cycle, each onu sends a report message to the olt .the report message signals new circuit requests as well as the occupancy level ( queue depth ) of the packet service queue in the onu to the olt .the signaling information for the circuit requests , i.e. , requested circuit bandwidth and duration , can be carried in the report message of the mpcp protocol in epons with similar modifications as used for signaling information for operation on multiple wavelength channels .specifically , for signaling dynamic circuit requests , an onu report in the packet partition of cycle carries circuit requests generated since the onu s preceding report in cycle .the report reaches the olt by the end of cycle and the olt executes circuit admission control as described in section [ ac : sec ] .the onu is informed about the outcome of the admission control ( circuit is admitted or blocked ) in the gate message that is transmitted on the downstream wavelength channel at the beginning of cycle . in the dycappon design ,the gate message propagates downstream while the upstream circuit transmissions of cycle are propagating upstream .thus , if the circuit was admitted , the onu commences the circuit transmission with the circuit partition of cycle . for signaling packet traffic , the onu report in the packet partition of cycle carries the current queue depth as of the report generation instant .based on this queue depth , the olt determines the effective bandwidth request and bandwidth allocation as described in section [ dba : sec ] .the gate message transmitted downstream at the beginning of cycle informs the onu about its upstream transmission window in the packet partition of cycle . as illustrated in fig .[ fig : cycle ] , in the dycappon design , the circuit partition is positioned at the beginning of the cycle , in an effort to mitigate the idle time between the end of the packet transmissions in the preceding cycle and the beginning of the packet transmissions of the current cycle .in particular , when the last packet transmission of cycle arrives at the olt at the end of cycle , the first packet transmission of cycle can arrive at the olt at the very earliest one roundtrip propagation delay ( plus typically negligible processing time and gate transmission time ) after the beginning of cycle .if the circuit partition duration is longer than the roundtrip propagation delay , then idle time between packet partitions is avoided . 
on the other hand , if , then there an idle channel period of duration between the end of the circuit partition and the beginning of the packet partition in cycle .( 160,32 ) ( 0,20)(1,0)160 ( 0,30)(1,0)160 ( 0,5)(0,1)25 ( 160,5)(0,1)25 ( 80,10)(0,1)20 ( 8,20)(0,1)10 ( 12,20)(0,1)10 ( 30,20)(0,1)10 ( 34,20)(0,1)10 ( 76,20)(0,1)10 ( 80,20)(0,1)10 ( 92,20)(0,1)10 ( 96,20)(0,1)10 ( 110,20)(0,1)10 ( 114,20)(0,1)10 ( 142,20)(0,1)10 ( 156,20)(0,1)10 ( 4,25)(0,0 ) [ ] ( 9.5,25)(0,0 ) [ ] ( 21,25)(0,0 ) [ ] ( 31.5,25)(0,0 ) [ ] ( 50,25)(0,0 ) [ ] ( 77.5,25)(0,0 ) [ ] ( 85,25)(0,0 ) [ ] ( 93.5,25)(0,0 ) [ ] ( 102,25)(0,0 ) [ ] ( 111.5,25)(0,0 ) [ ] ( 126,25)(0,0 ) [ ] ( 148,25)(0,0 ) [ ] ( 157.5,25)(0,0 ) [ ] ( 20,15)(-1,0)20 ( 60,15)(1,0)20 ( 40,16)(0,0)[] ( 40,11)(0,0)[]circuit partition ( 100,15)(-1,0)20 ( 140,15)(1,0)20 ( 120,15)(0,0)[]packet partition ( 60,5)(-1,0)60 ( 100,5)(1,0)60 ( 80,5)(0,0)[]cycle duration note that this dycappon design trades off lower responsiveness to circuit requests for the masking of the roundtrip propagation delay. specifically , when an onu signals a dynamic circuit request in the report message in cycle , it can at the earliest transmit circuit traffic in cycle .on the other hand , packet traffic signaled in the report message in cycle can be transmitted in the next cycle , i.e. , cycle .[ fig : cycle_det ] illustrates the structure of a given cycle in more detail , including the overheads for the upstream transmissions .each onu that has an active circuit in the cycle requires one guard time of duration in the circuit partition .thus , with denoting the number of onus with active circuits in the cycle , the duration of the circuit partition is . in the packet partition ,each of the onus transmits at least a report message plus possibly some data upstream , resulting in an overhead of .thus , the overhead per cycle is the resulting aggregate limit of the transmission windows for packets in cycle is if there is little packet traffic , the circuit partition and the immediately following packet transmission phase denoted p1 in fig .[ fig : cyclell ] may leave significant portions of the fixed - duration cycle idle .in such low - packet - traffic cycles , the olt can launch additional polling rounds denoted p2 , p3 , and p4 in fig .[ fig : cyclell ] to serve newly arrived packets with low delay . specifically ,if all granted packet upstream transmissions have arrived at the olt and there is more than time remaining until the end of the cycle ( i.e. , the beginning of the arrival of the next circuit partition ) at the olt , then the olt can launch another polling round . ( 160,32 ) ( 0,0)(1,0)140 ( 0,20)(1,0)140 ( 0,21)(0,0)[b]olt ( 0,-1)(0,0)[t]onu ( 8,20 ) ( 1,-3)6.6 ( 3,0)(1,3)6.6 ( 17,0)(1,3)6.6 ( 38,20)(1,-3)6.6 ( 31,0)(1,3)6.6 ( 45,0)(1,3)6.6 ( 67,20)(1,-3)6.6 ( 60,0)(1,3)6.6 ( 74,0)(1,3)6.6 ( 97,20)(1,-3)6.6 ( 90,0)(1,3)6.6 ( 104,0)(1,3)6.6 ( 119,0)(1,3)6.6 ( 133,0)(1,3)6.6 ( 10,0.75)(0,0)[b] ( 143,0.75)(0,0)[b] ( 26,0.75)(0,0)[b] ( 53,0.75)(0,0)[b] ( 82,0.75)(0,0)[b] ( 112,0.75)(0,0)[b] ( 126,0.75)(0,0)[b]idle ( 10,20)(0,1)10 ( 140,20)(0,1)10 ( 3,11)(0,0 ) [ ] ( 31,11)(0,0 ) [ ] ( 60,11)(0,0 ) [ ] ( 90,11)(0,0 ) [ ] ( 119,11)(0,0 ) [ ] ( 14,11)(0,0 ) [ ] ( 43,11)(0,0 ) [ ] ( 72,11)(0,0 ) [ ] ( 102,11)(0,0 ) [ ] ( 50,25)(-1,0)40 ( 100,25)(1,0)40 ( 76,25)(0,0)[]cycle , fixed duration for each circuit class , the olt tracks the number of currently active circuits , i.e. , the olt tracks the state vector representing the numbers of active circuits . 
taking the inner product of with the vector representing the bit rates of the circuit classes gives the currently required aggregate circuit bandwidth which corresponds to the circuit partition duration for a given limit , of bandwidth available for circuit service , we let denote the state space of the stochastic knapsack model of the dynamic circuits , i.e. , where is the set of non - negative integers . for an incoming onu request for a circuit of class ,we let denote the subset of the state space that can accommodate the circuit request , i.e. , has at least spare bandwidth before reaching the circuit bandwidth limit .formally , thus , if presently , then the new class circuit can be admitted ; otherwise , the class circuit request must be rejected ( blocked ) . with the offline scheduling approach of dycappon, the reported packet queue occupancy corresponds to the duration of the upstream packet transmission windows , requested by onu . based on these requests , and the available aggregate packet upstream transmission window ( [ gp : eqn ] ), the olt allocates upstream packet transmission windows with durations , to the individual onus .the problem of fairly allocating bandwidth so as to enforce a maximum cycle duration has been extensively studied for the limited grant sizing approach , which we adapt as follows .we set the packet grant limit for cycle to if an onu requests less than the maximum packet grant duration , it is granted its full request and the excess bandwidth ( i.e. , difference between and allocated grant ) is collected by an excess bandwidth distribution mechanism . if an onu requests a grant duration longer than , it is allocated this maximum grant duration , plus a portion of the excess bandwidth according to the equitable distribution approach with a controlled excess allocation bound . with the limited grant sizing approach , there is commonly an unused slot remainder of the grant allocation to onus due to the next queued packet not fitting into the remaining granted transmission window .we model this unused slot remainder by half of the average packet size for each of the onus .thus , the total mean unused transmission window duration in a given cycle is this section , we employ techniques from the analysis of stochastic knapsacks to evaluate the blocking probabilities of the circuit class .we also evaluate the mean duration of the circuit partition , which governs the mean available packet partition duration , which in turn is a key parameter for the evaluation of the mean packet delay in section [ delan : sec ] .the stochastic knapsack model is a generalization of the well - known erlang loss system model to circuits with heterogeneous bandwidths . in brief , in the stochastic knapsack model , objects of different classes ( sizes ) arrive to a knapsack of fixed capacity ( size ) according to a stochastic arrival process .if a newly arriving object fits into the currently vacant knapsack space , it is admitted to the knapsack and remains in the knapsack for some random holding time .after the expiration of the holding time , the object leaves the knapsack and frees up the knapsack space that it occupied . 
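both control decisions taken by the olt at the start of a cycle, circuit admission and packet grant sizing, are simple enough to state as code. the admission test below is exactly the condition stated above (the aggregate bandwidth of active circuits plus the requested rate must not exceed the limit for circuit service); the grant-sizing function is only a simplified stand-in for limited grant sizing with excess distribution, since the controlled excess allocation bound of the cited scheme is not reproduced here. class rates and the circuit-service limit are placeholders.

```python
import numpy as np

C_LIMIT = 4e9                                     # circuit-service bandwidth limit [bit/s] (placeholder)
RATES = np.array([52e6, 208e6, 624e6, 1248e6])    # class bit rates [bit/s] (placeholder)

def admit(n_active, k):
    """Admission control for a class-k circuit request: admit iff the
    aggregate bandwidth of the currently active circuits plus the requested
    rate stays within the circuit-service limit."""
    beta = np.dot(n_active, RATES)                # currently occupied circuit bandwidth
    return beta + RATES[k] <= C_LIMIT

def limited_grants(requests_s, g_p, n_onu):
    """Limited grant sizing: each ONU receives min(request, G_p / n_onu);
    the leftover (excess) window is then shared equally among the ONUs whose
    requests were truncated -- a simplified stand-in for the equitable excess
    distribution referenced above."""
    g_max = g_p / n_onu
    grants = np.minimum(requests_s, g_max)
    excess = g_p - grants.sum()
    heavy = requests_s > g_max                    # ONUs whose requests were truncated
    if heavy.any() and excess > 0:
        extra = np.minimum(requests_s[heavy] - grants[heavy], excess / heavy.sum())
        grants[heavy] += extra
    return grants

# example: one class-2 request arrives while two class-0 and one class-1 circuits are active
print(admit(n_active=[2, 1, 0, 0], k=2))
```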
if the size of a newly arriving object exceeds the currently vacant knapsack space , the object is blocked from entering the knapsack , and is considered dropped ( lost ) .we model the prescribed limit on the bandwidth available for circuit service as the knapsack capacity .the requests for circuits of bandwidth , arriving according to a poisson process with rate are modeled as the objects seeking entry into the knapsack .an admitted circuit of class occupies the bandwidth ( knapsack space ) for an exponentially distributed holding time with mean .we denote for the set of states that occupy an aggregate bandwidth , i.e. , let denote the equilibrium probability of the currently active circuits occupying an aggregate bandwidth of . through the recursive kaufman - roberts algorithm , which is given in the appendix ,the equilibrium probabilities can be computed with a time complexity of and a memory complexity of . the blocking probability is obtained by summing the equilibrium probabilities of the sets of states that have less than available circuit bandwidth , i.e. , we define the average circuit blocking probability the performance evaluation for packet delay in section [ pkt_perf : sec ] requires taking expectations over the distribution of the aggregate bandwidth occupied by circuits . in preparation for these packet evaluations, we define ] . in this sectionwe analyze the delay and delay variations experienced by circuit traffic as it traverses a dycappon network from onu to olt .initially we ignore delay variations , i.e. , we consider that a given circuit with bit rate has a fixed position for the transmission of its bits in each cycle .three delay components arise : the `` accumulation / dispersal '' delay of for the bits of circuit traffic that are transmitted per cycle .note that the first bit arriving to form a `` chunk '' of bits experiences the delay at the onu , waiting for subsequent bits to `` fill up ( accumulate ) '' the chunk . the last bit of a chunk experiences essentially no delay at the onu , but has to wait for a duration of at the olt to `` send out ( disperse ) '' the chunk at the circuit bit rate .the other delay components are the transmission delay of and the propagation delay .thus , the total delay is circuit traffic does not experience delay variations ( jitter ) in dycappon as long as the positions ( in time ) of the circuit transmissions in the cycle are held fixed .when an ongoing circuit is closing down or a new circuit is established , it may become necessary to rearrange the transmission positions of the circuits in the cycle in order to keep all circuit transmissions within the circuit partition at the beginning of the cycle and avoid idle times during the circuit partition .adaptations of packing algorithms could be employed to minimize the shifts in transmission positions .note that for a given circuit service limit , the worst - case delay variation for a given circuit with rate is less than as the circuit could at the most shift from the beginning to the end of the circuit partition of maximum duration . inserting the circuit partition duration from ( [ xin : eqn ] ) into the expression for the aggregate limit on the transmission window for packets in a cycle from ( [ gp : eqn ] ) and taking the expectation ] ( [ ebeta : eqn ] ) , i.e. 
, we linearly weigh packet delay metrics with the probability masses for the aggregate circuit bandwidth .we also neglect the `` low - load '' operating mode of section [ lowload : sec ] in the analysis .in the proposed dycappon cycle structure , a packet experiences five main components , namely the reporting delay from the generation instant of the packet to the transmission of the report message informing the olt about the packet , which for the fixed cycle duration of dycappon equals half the cycle duration , i.e. , , the report - to - packet partition delay from the instant of report transmission to the beginning of the packet partition in the next cycle , the queuing delay from the reception instant of the grant message to the beginning of the transmission of the packet , as well as the packet transmission delay with mean , and the upstream propagation delay . in the report - to - packet partition delaywe include a delay component of half the mean duration of the packet partition to account for the delay of the reporting of a particular onu to the end of the packet partition .the delay from the end of the packet partition in one cycle to the beginning of the packet partition of the next cycle is the maximum of the roundtrip propagation delay and the mean duration of the circuit partition .thus , we obtain overall for the report - to - packet partition delay \\ & = & \frac{1}{2 } \left ( \gamma + \e_{\beta } \left[\max \left\ { 2 \tau,\ \frac{\beta \gamma}{c } \right\ } \right ] - \omega_o \right).\end{aligned}\ ] ] we model the queueing delay with an m / g/1 queue . generally , for messages with mean service time , normalized message size variance , and traffic intensity , the m / g/1 queue has expected queueing delay for dycappon , we model the aggregate packet traffic from all onus as feeding into one m / g/1 queue with mean packet size and packet size variance .we model the circuit partitions , when the upstream channel is not serving packet traffic , through scaling of the packet traffic intensity .in particular , the upstream channel is available for serving packet traffic only for the mean fraction of a cycle .thus , for large backlogs served across several cycles , the packet traffic intensity during the packet partition is effectively hence , the mean queueing delay is approximately thus , the overall mean packet delay is approximately consider an epon with onus , a channel bit rate gb / s , and a cycle duration ms .each onu has abundant buffer space and a one - way propagation delay of to the olt .the guard time is and the report message has 64 bytes .we consider classes of circuits as specified in table [ cir : tab ] ..circuit bandwidths and request probabilities for classes of circuits in performance evaluations . [ cols= " < , > , > , > " , ] for the packet traffic , we observe from table [ fig : chi ] a very slight increase in the mean packet delays as the circuit traffic load increases .this is mainly because the transmission rate limit for circuit service bounds the upstream transmission bandwidth the circuits can occupy to no more than in each cycle .as the circuit traffic load increases , the circuit traffic utilizes this transmission rate limit more and more fully . however , the packet traffic is guaranteed a portion of the upstream transmission bandwidth . 
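the circuit-level quantities compared in this section (the per-class blocking probabilities, their average, and the occupancy distribution used for the mean circuit bandwidth) all follow from the kaufman-roberts recursion outlined in the analysis and in the appendix. a direct transcription is sketched below; bandwidths are expressed in integer units (e.g. multiples of 52 mb/s, as in the normalization described in the appendix), and the example loads, class sizes, capacity and the bandwidth-weighted average at the end are placeholders rather than the parameter values of the evaluation.

```python
import numpy as np

def kaufman_roberts(loads, sizes, capacity):
    """Kaufman-Roberts recursion for the stochastic knapsack.
    loads[k]  : offered load of class k in erlangs (arrival rate x mean holding time)
    sizes[k]  : bandwidth of class k in integer units
    capacity  : circuit-service limit in the same integer units
    Returns (q, B): the occupancy distribution q[c], c = 0..capacity, and the
    per-class blocking probabilities B[k]."""
    g = np.zeros(capacity + 1)
    g[0] = 1.0
    for c in range(1, capacity + 1):
        acc = 0.0
        for a, b in zip(loads, sizes):
            if b <= c:
                acc += a * b * g[c - b]
        g[c] = acc / c
    q = g / g.sum()                                   # normalized occupancy probabilities
    B = np.array([q[capacity - b + 1:].sum() for b in sizes])
    return q, B

# illustrative example (placeholders): four classes in units of 52 Mb/s
sizes = np.array([1, 4, 12, 24])
loads = np.array([2.0, 1.0, 0.5, 0.25])               # per-class offered loads [erlang]
q, B = kaufman_roberts(loads, sizes, capacity=96)     # e.g. a limit of 96 units ~ 5 Gb/s
# bandwidth-weighted average blocking (one common weighting; the exact
# definition of the average used in the text is not restated here)
print(B, (loads * sizes * B).sum() / (loads * sizes).sum())
```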
formally , as the circuit traffic load grows large ( ), the mean aggregate circuit bandwidth approaches the limit , resulting in a lower bound for the packet traffic load limit ( [ pimax : eqn ] ) of and corresponding upper bounds for the effective packet traffic intensity and the mean packet delay . in fig .[ fig : cc ] we examine the impact of the transmission rate limit for circuit traffic .we consider different compositions of the total traffic load .we observe from fig .[ fig : cc](a ) that the average circuit blocking probability steadily decreases for increasing . in the example in fig .[ fig : cc ] , the average circuit blocking probability drops to negligible values below 1 % for values corresponding to roughly twice the offered circuit traffic load .for instance , for circuit load , drops to 0.9 % for gb / s .the limit thus provides an effective parameter for controlling the circuit blocking probability experienced by customers . from fig .[ fig : cc](b ) , we observe that the mean packet delay abruptly increases when the limit reduces the packet traffic portion of the upstream transmission bandwidth to values near the packet traffic intensity .we also observe from fig .[ fig : cc](b ) that the approximate packet delay analysis is quite accurate for small to moderate values ( the slight delay overestimation is due to neglecting the low packet traffic polling ) , but underestimates the packet delays for large .large circuit traffic limits give the circuit traffic more flexibility for causing fluctuations of the occupied circuit bandwidth , which deteriorate the packet service . summarizing , we see from fig . [fig : cc](b ) that as the effective packet traffic intensity approaches one , the mean packet delay increases sharply .thus , for ensuring low - delay packet service , the limit should be kept sufficiently below .when offering circuit and packet service over shared pon upstream transmission bandwidth , network service providers need to trade off the circuit blocking probabilities and packet delays . as we observe from fig .[ fig : cc ] , the circuit bandwidth limit provides an effective tuning knob for controlling this trade - off . as a function of packet traffic load . ] the fig .[ fig : lm ] we examine the impact of low - packet - traffic mode polling from section [ lowload : sec ] on the mean packet delay .we observe from fig .[ fig : lm ] that low - packet - traffic mode polling substantially reduces the mean packet delay compared to conventional polling for low packet traffic loads .this delay reduction is achieved by the the more frequent polling which serves packets quicker in cycles with low load due to circuit traffic .we have proposed and evaluated dycappon , a passive optical network that provides dynamic circuit and packet service .dycappon is based on fixed duration cycles , ensuring consistent circuit service , that is completely unaffected by the packet traffic load .dycappon masks the round - trip propagation delay for polling of the packet traffic queues in the onus with the upstream circuit traffic transmissions , providing for efficient usage of the upstream bandwidth .we have analyzed the circuit level performance , including the circuit blocking probability and delay experienced by circuit traffic in dycappon , as well as the bandwidth available for packet traffic after serving the circuit traffic .we have also conducted an approximate analysis of the packet level performance . 
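for completeness, the packet-delay approximation underlying the curves discussed above can be evaluated in a few lines. the sketch follows the delay decomposition of the analysis section (reporting delay, report-to-packet-partition delay, m/g/1 queueing delay with the load scaled by the cycle fraction available to packets, transmission and propagation delay), but it replaces the expectation of max{2 tau, beta gamma / c} by max{2 tau, e[beta] gamma / c} and ignores the low-packet-traffic polling mode, so it should be read as a rough sketch of the approximation rather than its exact form; all numerical defaults are placeholders.

```python
def mean_packet_delay(lam_pkt, mean_pkt_bits, var_pkt_bits, mean_beta_frac,
                      C=10e9, gamma=2e-3, tau=100e-6, omega_o=0.0):
    """Approximate mean packet delay.
    lam_pkt        : aggregate packet arrival rate [packets/s]
    mean_pkt_bits  : mean packet size [bit], var_pkt_bits its variance
    mean_beta_frac : E[beta]/C, mean fraction of a cycle occupied by circuits
    omega_o        : mean per-cycle overhead time [s] (set to 0 here)."""
    x_bar = mean_pkt_bits / C                       # mean packet service time
    cs2 = var_pkt_bits / mean_pkt_bits**2           # squared coefficient of variation
    rho = lam_pkt * x_bar                           # raw packet traffic intensity
    rho_eff = rho / (1.0 - mean_beta_frac)          # load seen during packet partitions
    if rho_eff >= 1.0:
        return float("inf")                         # unstable regime
    d_report = gamma / 2.0                          # generation to report transmission
    d_r2p = 0.5 * (gamma + max(2.0 * tau, mean_beta_frac * gamma) - omega_o)
    d_queue = rho_eff * x_bar * (1.0 + cs2) / (2.0 * (1.0 - rho_eff))   # Pollaczek-Khinchine
    return d_report + d_r2p + d_queue + x_bar + tau

# illustrative call: 1500-byte packets with modest size variance, circuits occupying 30% of a cycle
print(mean_packet_delay(lam_pkt=5e5, mean_pkt_bits=1500 * 8,
                        var_pkt_bits=(1500 * 8)**2 * 0.5, mean_beta_frac=0.3))
```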
through extensive numerical investigations based on the analytical performance characterization of dycappon as well as verifying simulations, we have demonstrated the circuit and packet traffic performance and trade - offs in dycappon .the provided analytical performance characterizations as well as the identified performance trade - offs provide tools and guidance for dimensioning and operating pon access networks that provide a mix of circuit and packet oriented service .there are several promising directions for future research on access networks that flexibly provide both circuit and packet service .one important future research direction is to broadly examine cycle - time structures and wavelength assignments in pons providing circuit and packet service . in particular , the present study focused on a single upstream wavelength channel operated with a fixed polling cycle duration .future research should examine the trade - offs arising from operating multiple upstream wavelength channels and combinations of fixed- or variable - duration polling cycles .an exciting future research direction is to extend the pon service further toward the individual user , e.g. , by providing circuit and packet service on integrated pon and wireless access networks , such as , that reach individual mobile users or wireless sensor networks .further , exploring combined circuit and packet service in long - reach pons with very long round trip propagation delays , which may require special protocol mechanisms , see e.g. , , is an open research direction .another direction is to examine the integration and interoperation of circuit and packet service in the pon access network with metropolitan area networks and wide area networks to provide circuit and packet service . in this appendix , we present the recursive kaufman - roberts algorithm for computing the equilibrium probabilities that the currently active circuit occupy an aggregated bandwidth . forthe execution of the algorithm , the given circuit bandwidths and limit are suitably normalized so that incrementing in integer steps covers all possible combinations of the circuit bandwidth .for instance , in the evaluation scenario considered in section [ eval_setup : sec ] , all circuit bandwidth are integer multiples of 52 mb / s .thus , we normalize all bandwidths by 52 mb / s and for e.g. , gb / s execute the following algorithm for .( the variables , and refer to their normalized values , e.g. , for the gb / s example , in the algorithm below ) .the algorithm first evaluates unnormalized occupancy probabilities that relate to a product - form solution of the stochastic knapsack .subsequently the normalization term for the occupancy probabilities is evaluated , allowing then the evaluation of the actual occupancy probabilities .m. batayneh , d. schupke , m. hoffmann , a. kirstaedter , and b. mukherjee , `` link - rate assignment in a wdm optical mesh network with differential link capacities : a network - engineering approach , '' in _ proc .honet _ , nov .2008 , pp . 216219 .l. qiao and p. koutsakis , `` adaptive bandwidth reservation and scheduling for efficient wireless telemedicine traffic transmission , '' _ ieee trans .vehicular technology _60 , no . 2 ,632643 , feb .m. reisslein , j. lassetter , s. ratnam , o. lotfallah , f. fitzek , and s. panchanathan , `` traffic and quality characterization of scalable encoded video : a large - scale trace - based study , part 1 : overview and definitions , '' arizona state univ . ,tech . rep . , 2002 .k. 
shuaib , f. sallabi , and l. zhang , `` smoothing and modeling of video transmission rates over a qos network with limited bandwidth connections , '' _ int .journal of computer networks and communications _, vol . 3 , no . 3 , pp .148162 , may 2011 .g. van der auwera and m. reisslein , `` implications of smoothing on statistical multiplexing of h. 264/avc and svc video streams , '' _ ieee trans . on broadcasting _ , vol .55 , no . 3 , pp .541558 , sep .2009 .n. charbonneau , a. gadkar , b. h. ramaprasad , and v. vokkarane , `` dynamic circuit provisioning in all - optical wdm networks using lightpath switching , '' _ opt ._ , vol . 9 , no . 2 , pp . 179 190 , 2012a. munir , s. tanwir , and s. zaidi , `` requests provisioning algorithms for dynamic optical circuit switched ( docs ) networks : a survey , '' in _ proc .multitopic conference ( inmic ) _ , dec .2009 , pp .r. skoog , g. clapp , j. gannett , a. neidhardt , a. von lehman , and b. wilson , `` architectures , protocols and design for highly dynamic optical networks , '' _ opt ._ , vol . 9 , no . 3 , pp . 240251 , 2012 .e. van breusegem , j. cheyns , d. de winter , d. colle , m. pickavet , p. demeester , and j. moreau , `` a broad view on overspill routing in optical networks : a real synthesis of packet and circuit switching ? '' _ optical switching and networking _ , vol . 1 , no . 1 , pp .5164 , 2005 .m. mcgarry , m. reisslein , f. aurzada , and m. scheutzow , `` shortest propagation delay ( spd ) first scheduling for epons with heterogeneous propagation delays , '' _ieee j. on selected areas in commun ._ , vol .28 , no . 6 , pp .849862 , aug .a. sivakumar , g. sankaran , and k. sivalingam , `` performance analysis of onu - wavelength grouping schemes for efficient scheduling in long reach - pons , '' _ opt .switching netw ._ , vol . 10 , no . 4 ,pp . 465474 , 2013 .i. tomkos , l. kazovsky , and k .-kitayama , `` next - generation optical access networks : dynamic bandwidth allocation , resource use optimization , and qos improvements , '' _ ieee netw . _ ,26 , no . 2 ,pp . 46 , 2012 .f. zanini , l. valcarenghi , d. p. van , m. chincoli , and p. castoldi , `` introducing cognition in tdm pons with cooperative cyclic sleep through runtime sleep time determination , '' _ opt .switching netw ., in print _ , 2013 .f. aurzada , m. scheutzow , m. herzog , m. maier , and m. reisslein , `` delay analysis of ethernet passive optical networks with gated service , '' _ osa journal of optical networking _ , vol . 7 , no . 1 , pp .2541 , jan . 2008 .f. aurzada , m. scheutzow , m. reisslein , n. ghazisaidi , and m. maier , `` capacity and delay analysis of next - generation passive optical networks ( ng - pons ) , '' _ ieee trans . on communications _ ,59 , no . 5 , pp .13781388 , may 2011 .j. angelopoulos , h .- c .leligou , t. argyriou , s. zontos , e. ringoot , and t. van caenegem , `` efficient transport of packets with qos in an fsan - aligned gpon , '' _ ieee comm . mag ._ , vol .42 , no . 2 ,9298 , feb .c. assi , y. ye , s. dixit , and m. ali , `` dynamic bandwidth allocation for quality - of - service over ethernet pons , '' _ ieee journal on selected areas in communications _21 , no . 9 ,14671477 , nov .2003 .y. luo and n. ansari , `` limited sharing with traffic prediction for dynamic bandwidth allocation and qos provioning over epons , '' _ osa journal of optical networking _ , vol . 4 , no . 9 , pp . 561572 , sep . 2005 .m. radivojevic and p. 
matavulj , `` implementation of intra - onu scheduling for quality of service support in ethernet passive optical networks , '' _ ieee / osa j. lightw ._ , vol . 27 , no . 18 , pp . 40554062 , sep . 2009s. sherif , a. hadjiantonis , g. ellinas , c. assi , and m. ali , `` a novel decentralized ethernet - based pon access architecture for provisioning differentiated qos , '' _ ieee / osa j. of lightwave technology _ , vol .22 , no . 11 , pp . 24832497 , nov .a. shami , x. bai , n. ghani , c. assi , and h. mouftah , `` qos control schemes for two - stage ethernet passive optical access networks , '' _ ieee j. on sel .areas in commun ._ , vol .23 , no . 8 , pp . 14671478 , nov .m. vahabzadeh and a. ghaffarpour rahbar , `` modified smallest available report first : new dynamic bandwidth allocation schemes in qos - capable epons , '' _ optical fiber technolgy _ ,17 , no . 1 , pp .716 , jan .t. berisa , a. bazant , and v. mikac , `` bandwidth and delay guaranteed polling with adaptive cycle time ( bdgpact ) : a scheme for providing bandwidth and delay guarantees in passive optical networks , '' _j. of opt ._ , vol . 8 , no . 4 , pp .337345 , 2009 .y. qin , d. xue , l. zhao , c. k. siew , and h. he , `` a novel approach for supporting deterministic quality - of - service in wdm epon networks , '' _ optical switching and networking _ , vol .10 , no . 4 , pp .378392 , 2013 .l. zhang , e .- s . an , h .-yeo , and s. yang , `` dual deb - gps scheduler for delay - constraint applications in ethernet passive optical networks , '' _ ieice trans ._ , vol .86 , no . 5 , pp . 15751584 , 2003 . f. an , y. hsueh , k. kim , i. white , and l. kazovsky , `` a new dynamic bandwidth allocation protocol with quality of service in ethernet - based passive optical networks , '' in _ proc .iasted woc _ ,vol . 3 , jul .2003 , pp . 165169 .i. hwang , j. lee , k. lai , and a. liem , `` generic qos - aware interleaved dynamic bandwidth allocation in scalable epons , '' _ ieee / osa j. of optical commun . and_ , vol . 4 , no . 2 ,99107 , feb .2012 .n. merayo , t. jimenez , p. fernandez , r. duran , r. lorenzo , i. de miguel , and e. abril , `` a bandwidth assignment polling algorithm to enhance the efficiency in qos long - reach epons , '' _ eu ._ , vol . 22 , no . 1 ,3544 , jan . 2011 .s. de , v. singh , h. m. gupta , n. saxena , and a. roy , `` a new predictive dynamic priority scheduling in ethernet passive optical networks ( epons ) , '' _ opt . switch ._ , vol . 7 , no . 4 , pp .215223 , 2010 .w. kwong , g .- c .yang , and j .-g . zhang , `` prime - sequence codes and coding architecture for optical code - division multiple - access , '' _ ieee transactions on communications _ , vol .44 , no . 9 , pp . 11521162 , 1996 .m. mcgarry , m. reisslein , and m. maier , `` ethernet passive optical network architectures and dynamic bandwidth allocation algorithms , '' _ ieee communication surveys and tutorials _10 , no . 3 , pp . 4660 ,2008 . c. assi , y. ye , s. dixit , and m. ali , `` dynamic bandwidth allocation for quality - of - service over ethernet pons , '' _ ieee journal on selected areas in communications _ , vol .21 , no . 9 ,14671477 , nov .g. kramer , b. mukherjee , s. dixit , y. ye , and r. hirth , `` supporting differentiated classes of service in ethernet passive optical networks , '' _ osa j. of optical netw ._ , vol . 1 , no . 9 , pp .280298 , aug .2002 .f. aurzada , m. levesque , m. maier , and m. 
reisslein , `` fiwi access networks based on next - generation pon and gigabit - class wlan technologies : a capacity and delay analysis , '' _ ieee / acm trans ., in print _ , 2014 .j. coimbra , g. schtz , and n. correia , `` a game - based algorithm for fair bandwidth allocation in fibre - wireless access networks , '' _ optical switching and networking _ , vol . 10 , no . 2 , pp . 149 162 , 2013 . a. dhaini , p .- h . ho , and x. jiang , `` qos control for guaranteed service bundles over fiber - wireless ( fiwi ) broadband access networks , '' _ ieee / osa j. lightwave techn_ , vol . 29 , no . 10 , pp . 15001513 , 2011 .m. maier , n. ghazisaidi , and m. reisslein , `` the audacity of fiber - wireless ( fiwi ) networks , '' in _ proc . of accessnetslecture notes of the institute for computer sciences , social informatics and telecommunications engineering.1em plus 0.5em minus 0.4emspringer , 2009 , vol . 6 , pp .n. moradpoor , g. parr , s. mcclean , and b. scotney , `` iidwba algorithm for integrated hybrid pon with wireless technologies for next generation broadband access networks , '' _ opt. switching ._ , vol . 10 , no . 4 , pp . 439457 , 2013 . a. seema and m. reisslein , `` towards efficient wireless video sensor networks : a survey of existing node architectures and proposal for a flexi - wvsnp design , '' _ ieee comm . surv . & tut . _ , vol . 13 , no . 3 , pp . 462486 , third quarter 2011 .a. mercian , m. mcgarry , and m. reisslein , `` offline and online multi - thread polling in long - reach pons : a critical evaluation , '' _ ieee / osa j. lightwave techn ._ , vol . 31 , no . 12 , pp . 20182228 , jun. 2013 .h. song , b. w. kim , and b. mukherjee , `` long - reach optical access networks : a survey of research challenges , demonstrations , and bandwidth assignment mechanisms , '' _ ieee commun ._ , vol . 12 , no . 1 ,pp . 112123 , 1st quarter 2010 .m. maier , m. reisslein , and a. wolisz , `` a hybrid mac protocol for a metro wdm network using multiple free spectral ranges of an arrayed - waveguide grating , '' _ computer networks _41 , no . 4 , pp . 407433 , mar .m. scheutzow , m. maier , m. reisslein , and a. wolisz , `` wavelength reuse for efficient packet - switched transport in an awg - based metro wdm network , '' _ ieee / osa journal of lightwave technology _ , vol .21 , no . 6 , pp . 14351455 , jun .yang , m. maier , m. reisslein , and m. carlyle , `` a genetic algorithm - based methodology for optimizing multiservice convergence in a metro wdm network , '' _ ieee / osa journal of lightwave technology _ , vol .21 , no . 5 , pp .11141133 , may 2003 .m. yuang , i .-f . chao , and b. lo , `` hopsman : an experimental optical packet - switched metro wdm ring network with high - performance medium access control , '' _ ieee / osa journal of optical communications and networking _ , vol . 2 , no . 2 ,91101 , feb . | dynamic circuits are well suited for applications that require predictable service with a constant bit rate for a prescribed period of time , such as cloud computing and e - science applications . past research on upstream transmission in passive optical networks ( pons ) has mainly considered packet - switched traffic and has focused on optimizing packet - level performance metrics , such as reducing mean delay . this study proposes and evaluates a dynamic circuit and packet pon ( dycappon ) that provides dynamic circuits along with packet - switched service . 
dycappon provides flexible packet - switched service through dynamic bandwidth allocation in periodic polling cycles , and consistent circuit service by allocating each active circuit a fixed - duration upstream transmission window during each fixed - duration polling cycle . we analyze circuit - level performance metrics , including the blocking probability of dynamic circuit requests in dycappon through a stochastic knapsack - based analysis . through this analysis we also determine the bandwidth occupied by admitted circuits . the remaining bandwidth is available for packet traffic and we conduct an approximate analysis of the resulting mean delay of packet traffic . through extensive numerical evaluations and verifying simulations we demonstrate the circuit blocking and packet delay trade - offs in dycappon . dynamic circuit switching ; ethernet passive optical network ; grant scheduling ; grant sizing ; packet delay ; stochastic knapsack . |
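The appendix above describes the recursive kaufman - roberts evaluation of the circuit occupancy probabilities only in words. The following C sketch shows one textbook implementation of that recursion under assumed names: b[k] is the bandwidth of circuit class k in units of the basic 52 Mb/s rate, rho[k] its offered load in Erlangs, and C the normalized circuit-bandwidth limit. It is an illustration of the standard algorithm, not the authors' code, and the numbers in main are made up.

```c
#include <stdio.h>
#include <stdlib.h>

/* Kaufman-Roberts recursion for the stochastic-knapsack occupancy
 * probabilities q[c] and the per-class blocking probabilities block[k].
 * b[k]: class-k bandwidth in basic units, rho[k]: offered load (Erlangs),
 * C: normalized capacity (all names are illustrative assumptions). */
void kaufman_roberts(int K, const int *b, const double *rho, int C,
                     double *q /* length C+1 */, double *block /* length K */)
{
    /* unnormalized occupancy weights g(c) of the product-form solution */
    double *g = calloc(C + 1, sizeof(double));
    g[0] = 1.0;
    for (int c = 1; c <= C; ++c) {
        double s = 0.0;
        for (int k = 0; k < K; ++k)
            if (c >= b[k])
                s += rho[k] * b[k] * g[c - b[k]];
        g[c] = s / c;
    }

    /* normalization term, then the actual occupancy probabilities */
    double G = 0.0;
    for (int c = 0; c <= C; ++c) G += g[c];
    for (int c = 0; c <= C; ++c) q[c] = g[c] / G;

    /* a class-k request is blocked when fewer than b[k] units are free */
    for (int k = 0; k < K; ++k) {
        block[k] = 0.0;
        for (int c = C - b[k] + 1; c <= C; ++c) block[k] += q[c];
    }
    free(g);
}

int main(void)
{
    /* example: circuit classes at 1x, 2x and 4x the 52 Mb/s basic rate,
     * roughly 1 Gb/s of capacity -> C = 19 basic units (made-up loads) */
    int    b[]   = {1, 2, 4};
    double rho[] = {3.0, 1.5, 0.5};
    int    C     = 19;
    double q[20], block[3];
    kaufman_roberts(3, b, rho, C, q, block);
    for (int k = 0; k < 3; ++k)
        printf("class %d blocking = %.4f\n", k, block[k]);
    return 0;
}
```

The per-class blocking probabilities obtained this way are the circuit-level quantities that the stochastic-knapsack analysis in the body relies on.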
consider an equity market with asset capitalizations at time , and with covariance and relative risk rates and , respectively . at any given time , these rates are _ nonanticipative functionals _ of past - and - present capitalizations , ; they are not specified with precision but are , rather , subject to `` knightian uncertainty . '' to wit , for a given collection of nonempty compact and convex subsets on , where is the space of real , symmetric , positive definite matrices , and is the origin in , they are subject to the constraint in other words , the pair must take values at time inside the compact , convex set which is determined by the current location of the asset capitalization process ; but within this range , the actual value is allowed to depend on past capitalizations as well .[ to put it a little differently : the constraint ( [ e.3 ] ) is not necessarily `` markovian , '' as long as the sets in ( [ 1.b ] ) are not singletons . ] under these circumstances , what is the highest return on investment relative to the market that can be achieved using nonanticipative investment rules , and with probability one under all possible market model configurations that satisfy the constraints of ( [ e.3 ] ) ?what are the weights in the various assets of an investment rule that accomplishes this ? answers : subject to appropriate conditions , and \\[-8pt ] \eqntext{i = 1 , \ldots , n , 0 \le t \le t,}\end{aligned}\ ] ] respectively . herethe function ] , and to do this with -probability one , under any model that might materialize .we shall refer to of ( [ 1.7 ] ) as the _ arbitrage function _ for the family of meta - models , and think of it as a version of the arbitrage function studied in fernholz and karatzas ( ) which is `` robust '' with respect to .the quantity of ( [ 1.7 ] ) is strictly positive ; see proposition [ proposition_1 ] below and the discussion following it . on the other hand , the set of ( [ 1.7 ] )contains the number , so clearly if , then for every even for when the infimum in ( [ 1.7 ] ) is attained , as indeed it is in the context of theorem 1 below there exists an investment rule such that holds for every . in other words ,the investment rule leads then to _ strong arbitrage relative to the market _ portfolio in the terminology of here with the extra feature that such arbitrage is now _ robust _ , that is , holds under any possible admissible system or `` model '' that might materialize . if , on the other hand , , then such outperformance of ( equivalently , strong arbitrage relative to ) the market is just not possible over all meta - models . 
in either case ,the highest return on investment relative to the market achievable using ( nonanticipative ) investment rules , is given as .[ remark_1 ] instances of occur , when there exists a constant such that either or holds for every .see the survey paper , examples 11.1 and 11.2 [ as well as , for additional examples ] .[ proposition_1 ] the quantity of ( [ 1.7 ] ) satisfies \\[-8pt ] \eqntext{\displaystyle \mbox{where } \phi(t,\mathbf{x}):= \sup_{\mathcal{m } \in \mathfrak{m } ( \mathbf{x } ) } \biggl ( \frac { \mathbb{e}^ { \mathbb { p}^\mathcal{m } } [ l(t ) x(t ) ] } { x_1 + \cdots+ x_n } \biggr ) .}\end{aligned}\ ] ] furthermore , under conditions ( [ measur])([uppsemicont ] ) , there exists an admissible system such that } { x_1 + \cdots+ x_n } .\ ] ] take an arbitrary element of the set on the right - hand side of ( [ 1.7 ] ) and an arbitrary admissible system .there exists then an investment rule with the inequality valid -a.s . on the strength of ( [ expo ] ) and ( [ 1.6 ] ) ,the process is a -supermartingale ; thus ( [ repr ] ) and ( [ 1.d ] ) lead to \nonumber\\[-8pt]\\[-8pt ] & \ge&\mathbb{e}^ { \mathbb{p}^\mathcal{m } } [ l ( t ) x(t ) ] > 0 .\nonumber\end{aligned}\ ] ] the inequality in ( [ 1.8 ] ) follows now from the arbitrariness of and .the existence of an admissible system that satisfies ( [ 1.88 ] ) follows from theorem 3.4 in , in conjunction with the dynamics of ( [ 1.1 ] ) and ( [ expotoo ] ) .although strong arbitrage relative to the market may exist within the framework of the models studied here ( cf .remark [ remark_1 ] ) , the existence of a strictly positive supermartingale deflator process as in ( [ expo ] ) proscribes scalable arbitrage opportunities , also known as _ unbounded profits with bounded risk _ ( _ upbr _ ) ; this is reflected in the inequality of ( [ 1.8 ] ) .we refer the reader to for the origin of the resulting _ nupbr _ concept , and to for an elaboration of this point in a different context , namely , the existence and properties of the numraire portfolio .finally , let us write ( [ 1.8 ] ) as } { x_1 + \cdots+ x_n } .\hspace*{-10pt}\ ] ] we have for this quantity the interpretation as the `` arbitrage function for the model , '' at least when the matrix in invertible for every and when -martingales can be represented as stochastic integrals with respect to the brownian motion in ( [ 1.1 ] ) .consider now a continuous function with which is of class on and satisfies on this domain the fully nonlinear partial differential inequality ( pdi ) \\[-8pt ] \eqntext{\forall a \in\mathcal{a } ( \mathbf { z } ) .}\end{aligned}\ ] ] we shall denote by the collection of all such continuous functions which are of class on and satisfy ( [ 2.1 ] ) and ( [ 2.2 ] ) .this collection is nonempty , since we can take ; however , need not contain only one element .let us fix an initial configuration and consider any admissible system .applying it s rule to the process in conjunction with ( [ 1.c ] ) and ( [ 1.1 ] ) , we obtain its -semimartingale decomposition as \\[-8pt ] & & { } + \sum_{\nu=1}^n \bigl [ r_\nu(t , \mathfrak{x } ) - u \bigl(t - t , \mathfrak{x}(t ) \bigr ) \widetilde{\vartheta}_\nu(t , \mathfrak{x } ) \bigr ] \,\mathrm{d}w_\nu(t ) .\nonumber\end{aligned}\ ] ] here we have used the notation of ( [ 1.d ] ) , and have set from the inequality of ( [ 2.2 ] ) , coupled with the fact that holds for all , this last expression is clearly not positive . 
as a result , the positive process of ( [ 2.4 ] ) is a -supermartingale , namely , \nonumber\\[-8pt]\\[-8pt ] & = & \mathbb{e}^ { \mathbb{p}^\mathcal{m } } [ l(t)x(t ) | \mathcal{f}(t ) ] \nonumber\end{aligned}\ ] ] holds -a.s . , and ; in particular , \\ & = & \mathbb{e}^ { \mathbb{p}^\mathcal{m } } [ l(t ) x(t ) ] \qquad \forall\mathcal{m } \in\mathfrak{m } ( \mathbf{x } ) .\end{aligned}\ ] ] with the notation of ( [ 1.8 ] ) , we obtain in this manner the following analog of the inequality in proposition [ proposition_1 ] : digging in this same spot , just a bit deeper , leads to our next result ; this is very much in the spirit of theorem 5 in and of section ii.2 in lions ( ) .[ proposition_2 ] for horizon , initial configuration and function in the collection , we have the inequality furthermore , the markovian investment rule generated by this function through \\[-8pt ] & & { } + \frac{z_i } { z_1 + \cdots+ z_n } , \qquad ( t,\mathbf{z } ) \in[0,t ] \times(0 , \infty)^n\nonumber\end{aligned}\ ] ] for each , satisfies for every admissible system the inequality fora fixed initial configuration , an arbitrary admissible model and any function , let us recall the notation of ( [ 2.4 ] ) and re - cast the dynamics of ( [ 2.5 ] ) as here by virtue of ( [ 2.5 ] ) , ( [ 2.6 ] ) and ( [ 1.d ] ) we have written \\[-8pt ] & & { } - \vartheta_\nu(t , \mathfrak{x})\nonumber\end{aligned}\ ] ] for and have introduced in the notation of ( [ 2.7 ] ) the continuous , increasing process the expression of ( [ 2.10 ] ) suggests considering the markovian investment rule as in ( [ 2.11.a ] ) ; then we cast the expression of ( [ 2.10 ] ) as on the strength of ( [ 1.6 ] ) , the value process generated by this investment rule starting with initial wealth , satisfies the equation juxtaposing this to ( [ 2.9 ] ) , and using the positivity of along with the nonnegativity and nondecrease of , we obtain the -a.s .comparison , thus with this leads to ( [ 2.11.b ] ) , in conjunction with ( [ 2.1 ] ) .we conclude from ( [ 2.11.b ] ) that the number belongs to the set on the right - hand side of ( [ 1.7 ] ) , and the first comparison in ( [ 2.8 ] ) follows ; the second is just a restatement of ( [ 1.8 ] ) .suppose that the function of ( [ 1.8 ] ) belongs to the collection .then is the smallest element of ; the infimum in ( [ 1.7 ] ) is attained ; we can take in ( [ 2.11.b ] ) , ( [ 2.11.a ] ) ; and the inequality in ( [ 2.8 ] ) holds as equality , that is , coincides with the arbitrage function _ interpretation : _imagine that the small investor is a manager who invests for a pension fund and tries to track or exceed the performance of an index ( the market portfolio ) over a finite time - horizon .he has to do this in the face of uncertainty about the characteristics of the market , including its covariance and price - of - risk structure , so he acts with extreme prudence and tries to protect his clients against the most adverse market configurations imaginable [ the range of such configurations is captured by the constraints ( [ e.3 ] ) , ( [ 1.b ] ) ] .if such adverse circumstances do not materialize his strategy generates a surplus , captured here by the increasing process of ( [ 2.11 ] ) with , which can then be returned to the ( participants in the ) fund .we are borrowing and adapting this interpretation from .similarly , the markovian investment rule generated by the function in ( [ 2.11.a ] ) , ( [ fg ] ) implements the best possible outperformance of the market portfolio , as in ( [ 2.11.b ] ) .for the 
purposes of this section we shall impose the following _ growth _ condition on the family of subsets of in ( [ e.3.a ] ) , ( [ 1.b ] ) : there exists a constant , such that for all we have we shall also need the following _ strong ellipticity _ condition , which mandates that for every nonempty , compact subset of , there exists a real constant such that [ assa ] there exist a continuous function , a -function , and a continuous square root of , namely such that , with the vector - valued function defined by condition ( [ d.22 ] ) is satisfied , whereas the system of stochastic differential equations , \nonumber\\[-8pt]\\[-8pt] \eqntext{x_i(0)=x_i>0 , i=1,\ldots , n}\end{aligned}\ ] ] has a solution in which the state process takes values in .a bit more precisely , this assumption posits the existence of a markovian admissible system consisting of a filtered probability space , and of two continuous , adapted process and on it , such that under the probability measure the process is -dimensional brownian motion , the process takes values in a.s . and ( [ 1.1 ] ) holds with as in ( [ d.2 ] ) , and with , ( ) .the system of equations ( [ d.1 ] ) can be cast equivalently as \\[-8pt ] & & \hphantom{x_i(t ) \biggl [ } { } + \biggl ( \sum_{j=1}^n \mathbf{a}_{i j } ( \mathfrak { x } ( t ) ) x_j ( t ) d_j h ( \mathfrak { x } ( t ) ) \biggr)\ , \mathrm{d } t \biggr].\nonumber\end{aligned}\ ] ] [ assb ] in the notation of the previous paragraph and under the condition we define on the continuous functions and ] . at this point we shall impose the following requirements on , the family of compact , convex subsets of in ( [ 1.b ] ) , ( [ e.3 ] ) :there exists a constant , such that for all we have the strengthening of the growth condition in ( [ 3.2.gg ] ) , as well as the `` shear '' condition \le c .\ ] ] then the following identity holds -a.s .: for justification of the claims made in this subsection , we refer to section 7 in fernholz and karatzas ( ) , as well as , and in addition , of course , to the seminal work by fllmer ( , ) .the special structure of the filtered measurable space that we selected in this section is indispensable for this construction and for the representation ( [ foell ] ) ; whereas the inequality from condition ( [ thetatrace ] ) is important for establishing the representation of ( [ ttaui ] ) .let us fix then an initial configuration and denote by the collection of stochastic systems that consist of the filtered measurable space , of a probability measure , of an -valued brownian motion under , and of the cordinate mapping process which satisfies -a.s .the system of the stochastic equations in ( [ 1.1.a ] ) here the elements , of the matrix are progressively measurable functionals that satisfy , in the notation of ( [ e.3.a ] ) , as in section [ subsec2.3 ] , we shall denote by the subcollection of that consists of _ markovian _ auxiliary admissible systems , namely , those for which the equations of ( [ aux ] ) are satisfied with and , and with measurable functions and that satisfy the condition , .we invoke the same markovian selection results as in section [ subsec2.3 ] , to ensure that the process is strongly markovian under any given , . 
by analogy with ( [ ttaui ] ) , we consider then for every we have whereas , holds for every .[ remark_7 ] as in section [ subsec2.2 ] , solving the stochastic equation ( [ aux ] ) subject to condition ( [ const1 ] ) amounts to requiring that the process be a local supermartingale , for every continuous which is of class on and has compact support ; here is the nonlinear second - order partial differential operator in ( [ e.3.d ] ) .[ remark_3 ] the total capitalization process satisfies , by virtue of ( [ aux ] ) , the equation , \\\widetilde{n}(\cdot ) & : = & \sum_{\nu=1}^n \int_0^\cdot\biggl ( \sum_{i=1}^n ( x_i ( t ) / x(t ) ) \sigma_{i \nu } ( t , \mathfrak { x } ) \biggr)\ , \mathrm{d } \widetilde{w}_\nu(t ) .\end{aligned}\ ] ] under the measure , the process is a continuous local martingale with quadratic variation from ( [ 3.2.g ] ) , so the total capitalization process takes values in , -a.e . ; here is a one - dimensional -brownian motion .this is in accordance with our selection of the punctured nonnegative orthant in ( [ 1.b ] ) as the state - space for the process under . under this measure , the relative weights , are nonnegative local martingales and supermartingales , in accordance with ( [ mu ] ) , and since these processes are bounded , so they are actually martingales . once any one of the processes [ i.e. , any one of the processes becomes zero , it stays at zero forever ; of course , not all of them can vanish at the same time .in section [ subsec7.1 ] we started with an arbitrary admissible system and produced an `` auxiliary '' admissible system , for which the property ( [ foell ] ) holds .thus , for every we deduce \\[-8pt ] & \hspace*{3pt}\ge & \sup_{\mathcal { m } \in\mathfrak { m } ( \mathbf { x } ) } \biggl ( \frac { \mathbb{e}^ { \mathbb{p}^\mathcal{m } } [ l(t )x(t ) ] } { x_1 + \cdots+ x_n } \biggr)= \phi(t , \mathbf { x } ) .\nonumber\end{aligned}\ ] ] we suppose from now onwards that , for every progressively measurable functional which satisfies we can select a progressively measurable functional with [ see the `` measurable selection '' results in chapter 7 of ] .we introduce now the functional as in ( [ 1.d ] ) and also , by analogy with ( [ ttau.a ] ) , the stopping rule [ cf . , where stopping rules of this type also play very important roles in the study of arbitrage ] .we recall now ( [ const2 ] ) ; on the strength of the requirement ( [ thetatrace ] ) , this gives for every , and , for every . in conjunction with ( [ 3.2.g ] ), we obtain from ( [ ttau111 ] ) that holds for every , and that , holds for every .we deduce for the stopping rules of ( [ ttau1 ] ) and ( [ ttaui1 ] ) the identification .let us fix now a stochastic system as in section [ subsec7.2 ] , pick a progressively measurable functional with for all and select a progressively measurable functional as in ( [ thetaalpha ] ) . for _ this_ and _ this _ , we define by ( [ ttau101 ] ) as well as \\[-8pt ] \eqntext{\mbox{for } 0 \le t < \mathcal{t}}\end{aligned}\ ] ] as in ( [ lambda ] ) , and set in the notation of ( [ ttau1 ] ) .the resulting process is a local martingale and a supermartingale under , and we have , -a.e . we introduce also the sequence of -stopping rules which satisfy and , for every . from novikov s theorem [ e.g. 
, , page 198 ] , is a uniformly integrable -martingale ; in particular , holds for every .thus , the recipe , \qquad a \in\mathcal{f}(s_n)\ ] ] defines a consistent sequence , or `` tower , '' of probability measures on .appealing to the results in , pages 140143 [ see also the appendix of fllmer ( ) ] , we deduce the existence of a probability measure on such that \quad \mbox{holds for every } a \in \mathcal{f}(s_n ) , n \in\mathbb{n}.\hspace*{-25pt}\ ] ] ( here again , the special structure imposed in this section on the filtered measurable space is indispensable . ) therefore , for every we have = \mathbb{e}^\mathbb{q } \bigl [ \lambda ( t ) \cdot \mathbf { 1}_{\{s_n > t\ } } \bigr]\ ] ] by optional sampling , whereas monotone convergence leads to .\ ] ] the following result echoes similar themes in .[ lemma_1 ] the process of ( [ lamda ] ) , ( [ lamda_too ] ) is a -martingale , if and only if we have [ i.e. , if and only if the process never hits the boundary of the orthant , -a.s . ] .if holds , the nonnegativity of and ( [ wald ] ) give \le\mathbb{e}^\mathbb{q } [ \lambda ( t ) ] \qquad \forall t \in(0 , \infty ) .\vadjust{\goodbreak}\ ] ] but is a -supermartingale , so the reverse inequality \le \lambda ( 0 ) = 1 ] holds for all , so is a -martingale .if , on the other hand , is a -martingale , then =1 ] , ; these choices satisfy ( [ 3.2.g ] ) , ( [ thetatrace ] ) . condition ( [ 3.2 ] ) is satisfied in this case automatically ( in fact , with ) , as are ( [ 3.2.gg ] ) and ( [ d.4 ] ) : it suffices to take and , which induces in ( [ d.2 ] ) .these functions are all locally bounded and locally lipschitz continuous on .the hjb equation ( [ 3.9 ] ) satisfied by the arbitrage function becomes , \ ] ] and reduces to the _ linear _ parabolic equation of ( [ 7.25 ] ) for the choice of variances in ( [ volstab ] ) .the reason for this reduction is that the expression on the left - hand side of ( [ 3.11 ] ) is negative , so we have considerable simplification in this case . 
in this example , the arbitrage function can be represented as \ ] ] in terms of the components of the -valued capitalization process .these are now time - changed versions , of the independent squared - bessel processes run with a time change common for all components , and with independent standard brownian motions [ see fernholz and karatzas ( , ) , and pall ( ) for more details ] .for any given investment rule and admissible system , let us consider the quantity this measures , as a proportion of the initial total market capitalization , the smallest initial capital that an investor who uses the rule and operates within the market model , needs to set aside at time in order for his wealth to be able to `` catch up with the market portfolio '' by time , with -probability one .our next result exhibits the arbitrage function of ( [ 1.7 ] ) as the min max value of a zero - sum stochastic game between two players : the investor , who tries to select the rule so as to make the quantity of ( [ 9.1 ] ) as small as possible and `` nature , '' or the goddess tyche herself , who tries to thwart him by choosing the admissible system or `` model '' to his detriment .[ theorem_3 ] under the conditions of theorem [ theorem_1 ] , we have for the quantities of ( [ 9.1 ] ) and ( [ 1.9 ] ) we claim indeed , if the set on the right - hand side of ( [ 9.1 ] ) is empty , we have and nothing to prove ; if , on the other hand , this set is not empty , then for any of its elements the process is a -supermartingale , and therefore ( [ 1.89 ] ) , that is , , still holds and ( [ 9.3 ] ) follows again . taking the infimum with respect to on the left - hand side of ( [ 9.3 ] ) , then the supremum of both sides with respect to , we obtain from ( [ 1.9 ] ) .the quantity is the lower value of the stochastic game under consideration . in order to complete the proof of ( [ 9.2 ] ) it suffices , on the strength of theorem [ theorem_1 ] , to show that the upper value of this game satisfies to see this , we introduce for each given investment rule the quantity that is , the smallest proportion of the initial market capitalization that allows an investor using the rule to be able to `` catch up with the market portfolio '' by time with -probability one , no matter which admissible system ( model ) might materialize .we have clearly which leads to and proves ( [ 9.5 ] ) .let us place ourselves now in the context of theorem [ theorem_2 ] and observe that proposition [ proposition_1 ] , along with proposition [ proposition_2 ] and its corollary , yields by virtue of ( [ 9.3 ] ) for and of ( [ 2.11.b ] ) for . here is the `` least favorable admissible system '' that attains the supremum over in ( [ 1.8 ] ) , and denotes the investment rule of ( [ 2.11.a ] ) with . in this setting , the investment rule attains the infimum in ( [ 9.8 ] ) , and we obtain then \\[-8pt ] \eqntext{\forall \mathcal{m } \in\mathfrak{m } ( \mathbf { x})}\end{aligned}\ ] ] on the strength of ( [ 9.7 ] ) . putting ( [ 9.9 ] ) and ( [ 9.10 ] ) together we deduce the saddle property of the pair .in particular , the investment rule of ( [ 2.11.a ] ) with is seen to be the investor s best response to the least favorable admissible system of proposition [ proposition_1 ] , and vice - versa . 
in this sensethe investor , once he has figured out a least favorable admissible system , can allow himself the luxury to `` forget '' about model uncertainty and concentrate on finding an investment rule that satisfies , as in ( [ 9.9 ] ) , that is , on outperforming the market portfolio with the least initial capital within the context of the least favorable model .we are grateful to professors jaka cvitani , nicolai krylov , mete soner , nizar touzi , hans fllmer , erhan bayraktar , johan tysk , and to the two reviewers , for helpful advice on the subject matter of this paper and for bringing relevant literature to our attention . numerous helpful discussions with drs .robert fernholz , adrian banner , vasileios papathanakos , phi - long nguyen - thanh , and especially with johannes ruf , are gratefully acknowledged . | in an equity market model with `` knightian '' uncertainty regarding the relative risk and covariance structure of its assets , we characterize in several ways the highest return relative to the market that can be achieved using nonanticipative investment rules over a given time horizon , and under any admissible configuration of model parameters that might materialize . one characterization is in terms of the smallest positive supersolution to a fully nonlinear parabolic partial differential equation of the hamilton jacobi bellman type . under appropriate conditions , this smallest supersolution is the value function of an associated stochastic control problem , namely , the maximal probability with which an auxiliary multidimensional diffusion process , controlled in a manner which affects both its drift and covariance structures , stays in the interior of the positive orthant through the end of the time - horizon . this value function is also characterized in terms of a stochastic game , and can be used to generate an investment rule that realizes such best possible outperformance of the market . and . . |
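The display equations of this article did not survive extraction, so the partial differential inequality that the supersolutions U must satisfy (referred to above as ( [ 2.2 ] )) appears only as a fragment. As a hedged, schematic reconstruction of the general shape such an HJB-type inequality takes — matching the surviving "for all a in A(z)" qualifier and the abstract's description, but not claimed to be a verbatim restoration of ( [ 2.1 ] ), ( [ 2.2 ] ) or ( [ 3.9 ] ) — one plausible form is

\[
\frac{\partial U}{\partial \tau}(\tau,\mathbf{z}) \;\ge\;
\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, z_i z_j\, D^{2}_{ij} U(\tau,\mathbf{z})
\qquad \forall\, a \in \mathcal{A}(\mathbf{z}),\quad (\tau,\mathbf{z})\in(0,T]\times(0,\infty)^{n},
\]

together with an initial condition of the type \( U(0,\cdot)\equiv 1 \); the corresponding fully nonlinear HJB equation mentioned in the abstract would then replace the "for all a" inequality by equality with the supremum over \( a \in \mathcal{A}(\mathbf{z}) \).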
collective departure is a decision - making faced by all social species that travel in groups . in this process, an individual generally initiates the movement of the group out of a residence site or towards a new direction .the identity and motivation of this initiator can widely vary according to the social organisation of the considered species . on the one hand ,the leadership is often assumed by a unique or a subset of individuals that monopolise the decisions in hierarchical societies .these individuals can be older , dominant or of a specific sex .these characteristics are generally long - lasting and result in a consistant leadership over time , generally observed in stable and closed groups . on the other hand , the initiators can also be temporarily more motivated due to their physiological state , level of information or position in the group . in these cases , the initiation can be done by any individuals of the group without consistency over time . this mechanism is often present in social species that live in open groups with no consistent membership like bird flocks or fish schools .although each individual can initiate collective movement in these more egalitarian societies , some characteristics may enhance the probability of some members to take the leadership .for example , bold individuals that have a higher tendency to explore new areas will more often lead departures .similarly , group members with higher nutritional needs will be more motivated to initiate movements towards foraging spots .therefore , even in non - hierarchical species , leadership can be heterogeneously distributed among the group members . in this context, we studied the distribution of the leadership in the zebrafish _ danio rerio_. in its natural habits , _ danio rerio _ is a gregarious species that live in small groups ( a few to a dozen individuals ) in shallow freshwaters .it has become a widely studied and well known model organism in genetics and neuroscience but also in ethology . 
in this context ,our goal is to evaluate the presence of leaders or their emergence during successive collective departures .to do so , we observe groups of 2 , 3 , 5 , 7 and 10 zebrafish swimming in an experimental arena consisting of two rooms connected by a corridor .our aim is to measure the number of collective departure from one room to the other that are initiated by each fish .then , we put in relation the propensity of the individuals to lead departures to the number of attempts that they made as well as their swimming speed .fish experiments were performed in accordance with the recommendations and guidelines of the buffon ethical committee ( registered to the french national ethical committee for animal experiments # 40 ) after submission to the state ethical board for animal experiments .the fish were reared in housing facilities zebtec and fed two times a day ( special diets services sds-400 scientific fish food ) .we kept fish under laboratory conditions , , 500 salinity with a 10:14 day : night light cycle .water ph was maintained at 7 and nitrites ( no ) are below 0.3 mg / l .all zebrafish observed in this study were 6 - 12 months old at the time of the experiments .we observed groups of zebrafish swimming in an arena consisting of two square rooms connected by a corridor starting at one corners of each room placed in 100 x 100 x 30 experimental tank ( fig .[ fig : setup ] ) .the walls of the arena were made of white opaque pmma .the water depth was kept at 6 cm during the experiments in order to keep the fish in nearly 2d to facilitate their tracking .one lamp ( 400w ) was placed on the floor at each edge of the tank which is 60 cm above the floor to provide indirect lightning .the whole setup is confined behind white sheets to isolate experiments and homogenize luminosity .a high resolution camera was mounted 1.60 m above the water surface to record the experiment at a resolution of 2048 x 2048 and at 15 frames per second .we observed 12 groups of two , three , five , seven and ten adult laboratory wild - type zebrafish ( _ danio rerio _ ) ab strain during one hour for a total of 60 experiments . before the trials, the fish were placed with a hand net in a cylindrical arena ( 20 cm diameter ) in one of the two rooms . following a 5 minutes acclimatisation period , the camera started recording and the fish were released and able to swim in the experimental arena .after one hour , the fish were caught by a hand net and replaced in the rearing facilities .the videos were analysed off - line by the idtracker software .this multi - tracking software extracts specific characteristics of each individual and uses them to identify each fish without tagging throughout the video .this method avoids error propagation and is able to successfully solves crossing , superposition and occlusion problems .however , the tracking system failed to correctly track one experiment with two fish , one experiment with five fish and two experiments with ten fish . therefore , these four experiments were excluded from our analysis . for all other experiments, we obtained the coordinates of all fish at each time step . 
with these coordinates, we built the trajectories of each fish and computed their position in the arena and their instantaneous speed calculated on three positions and computed as the distance between and divided by 2 time steps ..first , we quantified for all the replicates the total number of collective residence events ( cre ) defined as the whole group resting in one of the two rooms .the number of cre decreases with the size of the groups with a median number of 233 cre for 2 fish to 131 cre for groups of 10 fish ( fig .[ fig : ndepartures]a ) .moreover , we counted the total number of collective departures events ( cde ) defined as the whole group leaving one of the resting sites for the corridor towards the other one .the number of cde also decreases with the size of the group but with a stronger difference between the groups ( from a median number of 212 for two zebrafish to 16 cde for groups of 10 zebrafish ( fig .[ fig : ndepartures]b ) .these observations show that 90% of the cre were followed by a collective departure in groups of 2 fish while only 12% were in groups 10 fish .thus , larger groups were more likely to split into subgroups during departures while small groups of fish swam together most of the time .thanks to the individual tracking of the fish , we were able to determine the identity of the first fish that left a room for all collective departures .thus , for all groups , we computed the proportion of collective departure initiated by each fish and ranked the group members according to the proportion of departure that they led .to characterise the distribution of the leadership among the group members , we ranked each individual according to the proportion of collective departures that they have initiated and compared the distributions of the ranks with two theoretical ones .on the one hand , we simulated a situation where all fish have the same probability ( ) to initiate a departure. on the other hand , we simulated a despotic configuration with a fish that has a 0.9 probability to initiate collective movement while the others have only a chance to start a departure .the experimental data lay between these two extreme scenarios ( fig .[ fig : rankedinitiation ] for 5 fish ) . in groups of five fish ,the 1 ranked fish initiated 45% of the collective departures on average .this value is largely below the 90% observed in the despotic simulation but also higher than the 25% of the uniform repartition .these results highlight an heterogeneous distribution of leadership among group members , some fish have a higher tendency to start departure than others .we observed similar results for groups of 2 , 3 , 7 and 10 fish . ) while the despotic distribution assumed that one fish has a higher probability ( ) to initiate collective departure while other have a small probability to do so ( ) .the results show that the frequency of initiation in groups of zebrafish differs from both simulated distribution highlighting a distributed leadership . ] to determine whether this distributed leadership was related to a different success rate or the number of initiation attempts , we measured the number of time that each fish was the first to exit a resting site independently of its success to be followed by the other group members ( i.e. an _ attempt _ ) .the intra - group proportion of attempts made by each fish shows a continuum from an egalitarian to a more despotic distribution for all group sizes ( fig .[ fig : prop_attempt ] ) . 
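The two reference scenarios described above (equal initiation probability 1/N per fish versus one despotic fish initiating with probability 0.9) can be simulated directly; the C sketch below generates the ranked initiation proportions that the experimental curves are compared against. The residual probability per non-despotic fish is assumed to be 0.1/(N-1), since that value did not survive extraction, and the number of simulated departures is arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>

#define NFISH  5      /* group size (illustrative) */
#define NDEP   1000   /* simulated departures per replicate (arbitrary) */

/* draw the index of the initiating fish given per-fish probabilities p[] */
static int draw_initiator(const double *p, int n)
{
    double u = (double)rand() / RAND_MAX, cum = 0.0;
    for (int i = 0; i < n; ++i) {
        cum += p[i];
        if (u <= cum) return i;
    }
    return n - 1;
}

/* comparator for sorting proportions in descending order */
static int cmp_desc(const void *a, const void *b)
{
    double d = *(const double *)b - *(const double *)a;
    return (d > 0) - (d < 0);
}

/* simulate NDEP departures and return the ranked initiation proportions */
static void ranked_proportions(const double *p, double *ranked)
{
    int count[NFISH] = {0};
    for (int d = 0; d < NDEP; ++d) count[draw_initiator(p, NFISH)]++;
    for (int i = 0; i < NFISH; ++i) ranked[i] = (double)count[i] / NDEP;
    qsort(ranked, NFISH, sizeof(double), cmp_desc);
}

int main(void)
{
    double uniform[NFISH], despotic[NFISH], r1[NFISH], r2[NFISH];
    for (int i = 0; i < NFISH; ++i) uniform[i] = 1.0 / NFISH;
    despotic[0] = 0.9;                              /* despotic leader */
    for (int i = 1; i < NFISH; ++i)
        despotic[i] = 0.1 / (NFISH - 1);            /* assumed residual share */

    ranked_proportions(uniform, r1);
    ranked_proportions(despotic, r2);
    for (int i = 0; i < NFISH; ++i)
        printf("rank %d: uniform %.3f  despotic %.3f\n", i + 1, r1[i], r2[i]);
    return 0;
}
```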
for each group , we compared the distribution of the total number of attempts made by each fish with a theoretical homogeneous distribution . for two fish ,4 dyads out of 11 did not significantly differ from the homogeneous distribution ( test , see table for details ) . in groups of three fish, the hypothesis of homogeneous distribution was not rejected in only one trio . finally , all groups of 5 , 7 and 10 fish significantly differ from the equal distribution of the number of attempts .thus , like the distribution of initiations , the attempts are also heterogeneously distributed among the fish of the group .then , we checked for a potential correlation between the number of initiations and the number of attempts made by each fish .a linear regression shows that the number of initiation is linearly correlated to the number of attempts perform by the fish and that the coefficient of this correlation depends on the group size ( fig .[ fig : attempts]a ) . for groups of two fish ,92% of the attempts made by a fish resulted in a collective departure of the whole group .moreover , the linear relation between the numbers of attempts and initiations highlights that this high success rate is shared by all fish .therefore , the fish leading the largest number of departures were the more active fish .in addition , the proportion of initiation led is proportional to the proportion of attempts performed by each fish ( fig .[ fig : attempts]b ) .these results highlight that all fish have the same success rate in starting a collective departure when they attempt it .thus , the higher status of initiator of some fish is not related to a higher influence on other fish or a better success but on a higher tendency to exit the resting sites .the results for other group sizes are qualitatively similar but differ quantitatively : while the linear relation persists , the success rate for each attempt decrease when the group size increases ( 90% for 3 fish , 66% for 5 fish , 53% for 7 fish and 26% for 10 fish ) .next , we studied the temporal distribution of the leading event to highlight a potential succession of temporary leaders . to do so , we computed the probability of a fish to perform two successive initiations and compared this probability to the proportion of cde that the fish has led .a temporal segregation of the initiators would results in a high probability of successive initiations compared to the proportion of led cde while homogeneously distributed initiations would give similar probabilities of successive initiations and led cde .for all group sizes , we observed a linear correlation between the two proportions ( fig .[ fig : samelead ] ) .thus , the probability of a fish to initiate a cde was not dependent on its status of initiator or follower during the previous cde .finally , we looked at a potential link between the motion characteristics of the individual and the number of attempts that they have made .in particular , we measured the average linear speed of all individual as an indicator of their motility . 
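Before the speed analysis that follows, a minimal numerical sketch of the attempt/initiation relation described above: per-fish success rate and a least-squares slope through the origin, with made-up counts standing in for the real data.

```c
#include <stdio.h>

int main(void)
{
    /* attempts[i] and initiations[i] for each fish of one hypothetical group */
    double attempts[]    = {40, 25, 18, 10,  7};
    double initiations[] = {27, 17, 12,  6,  4};
    int n = 5;

    double sxy = 0.0, sxx = 0.0;
    for (int i = 0; i < n; ++i) {
        /* fraction of this fish's attempts that became collective departures */
        printf("fish %d: success rate %.2f\n", i + 1,
               initiations[i] / attempts[i]);
        sxy += attempts[i] * initiations[i];
        sxx += attempts[i] * attempts[i];
    }
    /* slope of initiations ~ slope * attempts (regression with no intercept) */
    printf("group-level success rate (slope) = %.2f\n", sxy / sxx);
    return 0;
}
```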
there is a positive correlation between the average speed of the individual and the number of attempts that they performed .however , this correlation is only significant for groups of 5 , 7 and 10 fish ( fig .[ figure4]a - f ) .we also compared the intra - group ranking of the fish for the number of initiation with their intra - group ranking for the linear speed .we used the kendall s coefficient to measure the association between the two rankings .the intra - group ranking for the initiation is positively correlated to the intro - group ranking for the linear speed ( fig .[ figure4]c ) for groups of 3 , 5 , 7 and 10 fish .thus , the fish with a highest average speed of its group is more likely to also be the fish that has started the largest number of departures , except for dyads .these results show that the initiation of collective movements is related to the motility of the fish , particularly in large groups .the initiation of collective movement in fish is often reported as a distributed process in which each fish can potentially lead a departure .this distribution of the leadership is particularly suited for large schools of hundreds or thousands of fish that have to detect and avoid attacks from predator coming potentially from any direction .thus , the first fish to spot a predator can start an escaping manoeuvre that will be propagated from neighbour to neighbour in the whole school . in these conditions, it is difficult to imagine a permanent leader monopolising the initiation of collective movement .however , the situation can be quite different in smaller groups of fish .while they still have to avoid predators , the inter - individual differences can be more marked due to the small number of group members .therefore , a heterogeneous distribution of the leadership due to individual characteristic is more likely to appear . here, we showed that the initiation of collective departure is a distributed process among the group members in zebrafish .however , the role of initiator is not homogeneously distributed , with some individuals leading more departures than others . by also measuring the number of attempt made by each fish, we highlighted that this heterogeneous distribution was not the result of a higher success rate of some individual that could have a higher tendency to be followed .on the contrary , the success of the fish to trigger a collective departure was linearly correlated to their number of attempt .therefore , the best initiators do not seem to occupy a particular hierarchical status in the group but are the most motivated individuals to exit the room .in addition , we showed that the initiation process was not temporarily organised with a fish leading the group during a particular time period before being relayed by another fish , but a distributed status during the whole experimental time .finally , we showed that the motility of the fish is a predictor of the tendency of the fish to initiate collective departure .indeed , the intra - group ranking of the fish for average speed of the fish was correlated to its intra - group ranking for departure attempts .thus , these results provide evidence that the emergence of specific initiator of collective movement in zebrafish is more related to individual characteristics of the group members than to a particular status in the group .all authors declare having no competing interests .b.c . performed the data analysis , ran the simulations and wrote the manuscript .a.s . 
and y.c .performed the experiments and collected the experimental data .l.c . developed the video treatment and tracking system .designed , conceived , coordinated the study and revised the manuscript .all authors gave final approval for publication .this work was supported by european union information and communication technologies project assisibf , ( fp7-ict - fet n. 601074 ) .the funders had no role in study design , data collection and analysis , decision to publish , or preparation of the manuscript . | for animals living in groups , one of the important questions is to understand what are the decision - making mechanisms that lead to choosing a motion direction or leaving an area while preserving group cohesion . here , we analyse the initiation of collective departure in zebrafish _ danio rerio_. in particular , we observed groups of 2 , 3 , 5 , 7 and 10 zebrafish swimming in a two resting sites arena and quantify the number of collective departure initiated by each fish . while all fish initiated at least one departure , the probability to be the first one to exit a resting site is not homogeneously distributed with some individuals leading more departures than others . we show that this number of initiation is linearly proportional to the number of attempts performed and that all fish have the same success rate to lead the group out of a resting sites after an attempt . in addition , by measuring the average swimming speed of all fish , we highlight that the intra - group ranking of a fish for its proportion of initiation is correlated to its intra - group ranking in average speed . these results highlight that the initiation of collective departure in zebrafish is a heterogeneously distributed process , even if all individual have the same success rate after attempting a departure . |
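For reference, the rank-association measure mentioned in the results (Kendall's coefficient between the intra-group speed ranking and the initiation/attempt ranking) can be computed as sketched below. This is the plain tau-a variant without tie handling, since the paper does not state which variant was used, and the example rankings are invented.

```c
#include <stdio.h>

/* Kendall's tau-a between two rankings of the same n individuals:
 * tau = (concordant - discordant) / (n(n-1)/2); ties are not handled. */
double kendall_tau(const double *x, const double *y, int n)
{
    int concordant = 0, discordant = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            double s = (x[i] - x[j]) * (y[i] - y[j]);
            if (s > 0) concordant++;
            else if (s < 0) discordant++;
        }
    return (double)(concordant - discordant) / (0.5 * n * (n - 1));
}

int main(void)
{
    /* made-up example: intra-group rank for mean speed and for attempts */
    double speed_rank[]   = {1, 2, 3, 4, 5};
    double attempt_rank[] = {1, 3, 2, 4, 5};
    printf("tau = %.2f\n", kendall_tau(speed_rank, attempt_rank, 5));
    return 0;
}
```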
experimental areas in particle accelerator facilities are notoriously noisy places due to beamline elements like magnets , slits , pumps , power supplies , and fans as well as other assorted electronics equipment that either runs continuously or is intermittently turning on and off .all of the above may cause voltage ripples that significantly compromise photomultiplier tube ( pmt ) current pulses digitized by analog - to - digital converters ( adcs ) .the problem is compounded especially in analysis of data from segmented detectors where total energy and/or direction of each measured particle are derived from smeared adc values of several adjacent detector modules .an experimenter might try to follow the recipes for reduction of pedestal noise couplings by designing recommended magnetic shielding and applying proper electrical grounding principles .but such efforts are in practice usually met with only limited success . in order to minimize the electronic noise arising from `` dirty '' electrical grounds and from cross - talk between adjacent detector adc channels ,quality coaxial cables are used for connecting pmt anode outputs with inputs of fast electronics units .this arrangement though does not protect against stray magnetic fields at low frequencies that cause the so - called `` ground loops '' . while a low frequency noise component can be removed by using isolation transformers and capacitor - coupled inputs to the adcs , that approach is not an option in high - rate environments where ac - coupled devices would produce unacceptable signal distortions and rate - dependent baseline shifts .custom electronic circuits developed to address the problem of low frequency voltage ripples are described in refs .they are designed to provide a correspondence between a simple saw - tooth waveform and an experimental ac - power cycle , digitizing that information and passing it to the data stream for an offline noise correction .another method that uses one or more `` blackened pmts '' ( bpmts ) in parallel with the detector active pmts is described and compared with active ac - noise synchronization circuits in ref .the bpmts are dummy phototubes under high voltage but with no attached detector modules , whose signals are digitized in exactly the same way as those of the active detectors. a drawback / disadvantage of the method is that for complicated experimental layouts operating in noisy environments and with multiple local grounds prevailing in the areas close to beamlines , more than a handful of the bpmts would be necessary to account properly for the correlated noise .moreover , because bpmts should be mechanically and electrically a part of a detector and should be physically close the active pmts , they could be affected by , for example , cherenkov radiation caused by relativistic minimum ionizing particles in photocathode glass windows , destroying the noise correlations . in the analysis that followswe take advantage of the fact that our apparatus , the pibeta detector at psi , consists of individual detectors that are optically isolated from each other . 
therefore, one particular physics trigger will not excite more than a handful of active detectors and the role of a `` blackened pmt '' can be played by different active detector lines in different events .we call our procedure `` the second pedestal noise correction '' .the pibeta apparatus is a large acceptance 3 non - magnetic detector optimized for measurements of electrons and photons in the energy range of 10100 .the heart of the detector is a spherical 240-module pure csi calorimeter that is supplemented with a segmented 9-piece active plastic target , a pair of conventional cylindrical mwpcs , and a cylindrical 20-counter plastic hodoscope for charged particle tracking and identification .all parts of the detector are mounted at the center of a 2.5.6 m steel platform that also carries high voltage power supplies , a detector cooling system , coiled coaxial delay cables on the one side , and fast trigger electronics on the opposite side of the platform .therefore , the pibeta detector is a compact self - contained assembly that can be moved easily from its parking place into a chosen experimental area as a single unit and made operational within a day . due to the detector s proximity to elements of the beamline, our fast electronics are exposed to a significant contaminating electronic hum .[ fig : waveform ] shows a typical baseline of an analog signal that is to be digitized in an adc device , captured on a tektronix tds 744 digital oscilloscope .a snapshot displays ground - loop noise with frequencies of and in a 20 interval .typical peak - to - peak noise amplitude is .pibeta pmt signal outputs are split either directly at the pmt voltage dividers or at passive custom - made analog signal splitters .one branch of a calorimeter analog signal is delayed in coaxial cables and then split again to provide inputs for fastbus adc units and discriminators / tdcs / scalers .the other branch of the pmt signal is connected to analog summing and discriminator modules of the fast trigger electronics .the master trigger outputs are delayed and subsequently used to provide adc gates and tdc start / stop signals .metal frames of the pmt high voltage supplies on one end of the platform and the detector conductive support structure in the center are connected with 10/20 mm thick copper cables to fast trigger electronics grounding points in order to decrease the noise arising from the ground loops .voltage differences between different parts of the detector are measured with a digital voltmeter and after the grounding connections are put in place are reduced to less than 4 . for digitizing fast pmt pulseswe have used a lecroy adc model 1882f unit , a high - density fastbus converter with 96 current - integrating negative inputs .the 1885f unit provided a wide dynamic range with 12-bit data for the resolution of 0.05 / count and less than 1% quantization error .raw pedestals are set at about 200 - 300 channels and in our particular experimental environment had the average root - mean - square ( rms ) widths of - 20 channels .[ fig : baseline ] presents an analog signal of a single csi calorimeter detector on 2 / division and 100 / division scale taken in 6 persistence mode .clearly , with an adc gate width set to 100 , the adc unit will integrate variable amounts of the noise current .a 70 electron or photon will produce a current pulse with an amplitude of .25 and a fwhm width .this current pulse into 50 will integrate to a charge of about 150 , placing it at 3/4 of the full scale . 
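The units in the charge estimates of this and the following sentence did not survive extraction; one reading consistent with all the quoted numbers is mV, ns, Ohm and pC. Under that assumption the figures check out as

\[
Q_{\mathrm{noise}} \simeq \frac{V_{\mathrm{noise}}\, t_{\mathrm{gate}}}{R}
= \frac{5\ \mathrm{mV}\times 100\ \mathrm{ns}}{50\ \Omega} = 10\ \mathrm{pC},
\qquad
\frac{Q_{\mathrm{noise}}}{Q_{\mathrm{signal}}} \approx \frac{10\ \mathrm{pC}}{150\ \mathrm{pC}} \approx 7\%,
\]

and a 150 pC signal at 0.05 pC/count corresponds to roughly 3000 of the 4096 available counts, i.e. about 3/4 of the 12-bit full scale, as stated.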
a common noise with an amplitude 5 present on the baseline will integrate in a 100 gate to a 10 charge , or almost 7% of an actual particle signal . as the energies and/or directions of measured particles are deduced from adcs of a centrally hit calorimeter module and on averagetwo nearest neighbors hit , the resulting uncertainty in energy ( direction ) could add up to 20% .a 190 plastic scintillator counter ( bc-400 ) is placed horizontally above the electronics racks , about 3 m away from the main pibeta detector . by virtue of its position, the counter is shielded from the experimental radiation area by a 5 cm thick lead brick wall as well as by a 50 cm wide concrete wall .operating with a high discriminator threshold it is counting only random cosmic muons at about 1 - 2 frequency , having a stable counting rate for both beam - on and beam - off periods .the discriminated signals from that scintillator counter define our random trigger .the main pibeta detector trigger is defined dynamically in a programmable logic unit . in practice ,the master trigger is a logic or of up to 12 individual triggers .high rate physics triggers in the mix are prescaled , with prescaling factors depending on the trigger type and the beam stopping rate .the random trigger is connected to the trigger mix logic unit and in production runs it is always included among the enabled triggers .a single production run is limited to 200,000 events , yielding typically random triggers per run interspersed among the physics triggers .we note parenthetically that in an off - line analysis adc values for all detectors in random events are written out to a text file .these data sets are used in a geant detector simulation , enabling us to account rigorously for residual adc noise and accidental event pile - ups in the energy spectra .in a pre - production calibration run data are first collected in so - called `` pedestal mode '' with the beam off and with only random trigger enabled in the trigger mix .raw pedestals are automatically calculated at the end of the calibration run .namely , the pedestal histograms are filled with the random trigger events expressed in adc channel numbers , where a subscript labels a detector and an index represents an event number . at the end of the run these histogramsare analyzed for the mean pedestal positions .the mean pedestal positions and widths are saved in the data acquisition database residing completely in shared memory to allow for fast access .`` calibrated adc values '' in the subsequent production runs would then be differences between raw adc readings and the pedestal values retrieved from the database : the analyzer calibration program also calculates the correlation coefficients among all possible detector pairs `` '' that belong to the same detector group , e.g. 
, for 240 calorimeter detectors , 40 plastic veto hodoscope counters , etc .the expression used for calculating the correlation coefficient is : where sums extend over all pedestal events with calibrated adc values less than 20 channels and is the total number of such events .the 20 channel limit corresponds to the half - width at tenth - maximum of the typical noise distribution and discriminated against ( very rare ) cosmic events in our random data set that would smear up our correlation matrix .the correlation coefficient matrix is saved in a text file that is analyzed by an offline computer program to identify the detector noise groups .the program user can select the maximum number of noise groups ( default 10 ) , minimum group size ( default 8) and the minimum correlation coefficient ( default 0.75 ) between the detectors in the same noise group .the offline code combines detectors into noise groups on a trial - and - error basis with the over - all goal of maximizing the average intergroup correlation coefficients , while keeping the number of groups as small as possible and the minimum number of channels in any group above a given limit . with a minimum group size of eight detectorsthe program generated 10 noise groups for the 240 csi channels with an average intergroup correlation coefficient 0.90 and minimum correlation coefficient greater than 0.75 .the noise groups suggested by the program are subsequently hard - coded in the analyzer calibration program .an example of six plastic hodoscope ( pv ) noise groups is given below in a section of c code : .... / * adc groups for common noise suppression * / # define max_noise_group 100 int adc_group[][max_noise_group ] = { / * plastic veto detectors * / { 240,241,242,243,244,245,246,247,248 , 249,250,251,252,253,254,255 , -1 } , { 256,257,258,259 , -1 } , { 260,261,262,263,264 , -1 } , { 265,266,267,268,269,270,271 , -1 } , { 272,273,274,275 , -1 } , { 276,277,278,279 , -1 } , { -1 } , } ; .... where individual pv counters are identified by their signal indices .the value `` '' at the end of each line follows the last detector in every noise group .each detector is included in one and only one noise group .detectors belonging to the same noise group are usually physically close and have their signal and high voltage cables bundled together .the algorithm for calculating of noise - corrected and calibrated adc values resides in the analyzer program with predefined noise groups .the algorithm loops over all adc channels and for every event : 1 . subtracts the raw pedestal values retrieved from the online database from raw adc variables ( `` first pedestal correction '' ) ; 2 . identifies the common noise group that the current detector belongs to ; 3 . finds the channel with the smallest adc value in that group; 4 . finds all the channels in the group under consideration which are within 15 counts above the minimum of the group and calculates their average adc value ; 5 .subtracts that average value ( `` second pedestal correction '' ) from the adc values of all other group members . 
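a minimal c sketch of these five steps ( organized per noise group , which is equivalent to looping over channels ) is given below . it is illustrative only : the function name , the n_adc constant and the array sizes are ours and not the actual analyzer code , while the adc_group[][] layout is the one of the snippet above , with -1 terminating each group . the text subtracts the average from `` all other group members '' ; for simplicity the sketch subtracts it from every member , which also centres the quiet channels near zero .
....
/* illustrative sketch only -- not the actual analyzer code.
   applies the five steps listed above to one event.
   adc_group[][] is laid out as in the snippet above, -1 ends each group. */

#define MAX_NOISE_GROUP 100    /* as in the #define above              */
#define N_ADC           512    /* hypothetical total channel count     */
#define NEAR_MIN        15.0   /* "within 15 counts above the minimum" */

void second_pedestal_correction(double adc[N_ADC],             /* raw adc values of one event     */
                                const double pedestal[N_ADC],  /* raw pedestals from the database */
                                int adc_group[][MAX_NOISE_GROUP],
                                int n_groups)
{
    int g, k, ch;

    /* step 1: first pedestal correction */
    for (ch = 0; ch < N_ADC; ch++)
        adc[ch] -= pedestal[ch];

    /* steps 2-5, group by group */
    for (g = 0; g < n_groups; g++) {
        double min = 1.0e30, sum = 0.0;
        int n = 0;

        /* step 3: smallest corrected adc value in the group */
        for (k = 0; (ch = adc_group[g][k]) >= 0; k++)
            if (adc[ch] < min) min = adc[ch];

        /* step 4: average of the channels within NEAR_MIN counts of that minimum */
        for (k = 0; (ch = adc_group[g][k]) >= 0; k++)
            if (adc[ch] - min < NEAR_MIN) { sum += adc[ch]; n++; }
        if (n == 0) continue;

        /* step 5: subtract this common-noise estimate; the sketch subtracts it
           from every group member, which leaves quiet channels centred near zero */
        for (k = 0; (ch = adc_group[g][k]) >= 0; k++)
            adc[ch] -= sum / n;
    }
}
....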
at the beginning of eachrun the pedestal histograms are reset and at the end of the run they are again analyzed by fitting lineshapes with a gaussian function .the means of the distributions are updated in the database as new pedestal values .[ fig : correlation ] shows a scatter plot of adc values for two different csi detectors acquired with a random trigger .uncorrected adc values are plotted in the top panel showing the correlation in an electronic noise : these two detectors are eventually assigned to the same electronics noise group . using the secondary pedestal correction procedure described above, we obtained a scatter - plot of the corrected adc values presented in the lower panel of fig .[ fig : correlation ] .examples of one - dimensional adc histograms for two different detector types , namely a plastic scintillator counter and one active target segment are shown in figs .[ fig : improve ] and [ fig : target ] , respectively .top panels represent raw adc spectra , bottom panels show the noise suppression in the corrected spectra . in fig .[ fig : improve ] a peak at channel number 6 is inserted by the fitting function in the analyzer program .its position corresponds to 3 standard deviations of the corrected pedestal lineshape . in readout of the adc modules all channels with the values below 3 suppressed by being considered equal to zero and are not written to tape .this zero suppression criterion compresses the size of our data files by an order of magnitude .a comparison between the smearing of raw pedestal data and the spectra obtained after the noise suppression for four different detector types is given in table [ tab1 ] .the table summarizes the maximum and minimum noise group sizes , rms widths of raw and corrected pedestal histograms as well as an important equivalent energy deposition corresponding to the corrected pedestal rms values .we see that the energy equivalents of the rms widths range from 0.13 in csi detectors to 0.40 in the segmented target .the target pedestals are more difficult to correct because of ( a ) the small number of target segments ( 9 ) that form a noise group , and ( b ) the high rate of physics triggers in the target , with up to 1 stopping pions .we have implemented correction of modular detector adc readings for the correlated common noise using the method of common noise detector groups . the reduction in the pedestal width of up to a factor of fiveis achieved with the corrected pedestal rms values as narrow as 1.5 adc channels .this improvement compares favorably with the active circuit noise suppression methods where improvement factors are - 6 . in a representative case of pure csi calorimeterthe procedure reduces the correlated noise contribution to the equivalent energy of 0.13 per detector module . for the energy range of interest , 10100 ,this value is smaller than the photoelectron statistics contribution of and is comparable with the pmt dark current noise that amounts to .1 .this work is supported and made possible by grants from the us national science foundation and the paul scherrer institute .d. poani , d. day , e. frle , r. m. marshall , j. s. mccarthy , r. c. minehart , k. o. h. ziock , m. daum , r. frosch and d. 
renker , psi r-89.01 experiment proposal , ( paul scherrer institute , villigen , 1988 ) ..comparison of the pedestal noise reduction for the various elements of the pibeta detector .the values represent averages in one typical production run with pedestal events .the minimum and maximum number of detectors in one noise group is given in the second column .the energy ( angle ) equivalent of the corrected pedestal rms is listed in the fifth column . | we describe a simple procedure for reducing adc common noise in modular detectors that does not require additional hardware . a method using detector noise groups should work well for modular particle detectors such as segmented electromagnetic calorimeters , plastic scintillator hodoscopes , cathode strip wire chambers , segmented active targets , and the like . we demonstrate a `` second pedestal noise correction '' method by comparing representative adc pedestal spectra for various elements of the pibeta detector before and after the applied correction . pacs numbers : 07.50.h , 07.05.hd , 07.05.kf _ keywords : _ adc pedestals , correlated common noise , daq zero suppression |
_ human migration _ , i.e. the movement of large numbers of people out of , or into specific geographical areas ,is seen as one of the biggest challenges that face the human societies in the 21st century .on one hand , part of the human population has reasons to _ emigrate _ into countries which provide a `` better '' life on the other hand , industrialized countries can not sustain their current situation without the _ immigration _ of people .the real problem arises because the `` demand '' and the `` supply '' side can not be matched .industrialized countries fear that immigrants do not contribute to their further economic growth but , on the contrary , deplete their wealth by taking advantage of a social security , health , and educational system which they did not contribute to .if we move this problem on the more abstract level of a game - theoretical model , we can distinguish between two types of agents : those _ cooperating _ , i.e. being able to integrate in a society and to contribute to a common good , namely economic growth , and those _ defecting _, i.e. without the ability to socially integrate and thus depleting a common good at the cost of the cooperating agents . certainly , based on their past experience , agents can adapt , i.e. they can change their strategy from defection to cooperation and vice versa dependent on the payoff they receive in a given environment .the question for an industrialized country would be then to define an optimal immigration rate that ( a ) does not destroy the common good , and ( b ) allows agents to adapt to the assumed cooperative environment within one or two generations , even if they may have not immigrated with a cooperative strategy .the problems of cooperation and defection and the payoff - dependent adoption of strategies have been discussed in the framework of the prisoner s dilemma ( pd ) and the iterated pd ( ipd ) game ( see sect . [ 2 ] ) . 
with our paper , we add to this framework the ability to migrate between different countries ( `` islands '' ) .our aim is to reveal optimal conditions for the migration of agents such that cooperating strategies can take over even on those islands where they were initially not present .we note that migration was previously studied in a game - theoretical context by different authors .our work differs from these attemts in various respects .first of all , we do not assume that migration is based on the anticipated success this shifts the conceptual problem of the `` outbreak of cooperation '' to proposing rules with non - local information such that two cooperators meet at the same place , from which cooperating clusters can grow .we also do not make migration dependent on local variables such as the number of defectors in the neighborhood or random choices of `` empty '' places .in fact , human migration is rarely targeted at less densely crowded places , it is rather the opposite .further , we do not assume one - shot games such as the pd , but instead consider the ipd in which the number of repeated interaction as well as the mix of up to 8 different strategies plays a crucial role .eventually , we do not use an agent - based model in which update and migration rules are freely defined , to study their impact on computer simulations on a lattice .our approach proposes a population based model in which subpopulations are defined with respect to ( a ) their interaction strategy , and ( b ) their spatial location .the consideration of separated `` islands '' allows a coarse - grained implementation of spatial structures which is in between a lattice or network approach and a meanfield description .it is known that spatial structures have an impact on the outbreak of cooperation , but their influence varies with other degrees of freedom , such as update rules , synchronization , interaction topology , payoff matrix .therefore , in this paper we adopt mostly standard assumptions about the interaction type ( ipd with encounters ) and interaction topology ( panmictic subpopulations ) , strategy adoption ( replication proportional to payoff ) , and migration ( fixed fraction of the population ) . to understand the basic dynamics , we first investigate `` isolated '' islands ( no migration ) to find out about the conditions for the `` outbreak of cooperation '' without external influence .this `` outbreak '' is defined as the critical point ( strategy mix , number of encounters ) beyond which a whole island is going to be occupied by cooperating agents , if agents adopt strategies proportional to their average payoff .then , we add migration between islands to this dynamics to find out under which conditions the outbreak of cooperation can be enhanced .it is important to note that migration does not distinguish between strategies or islands , i.e. there are no better suited strategies for immigration , or bad places with high emigration rates which we consider as artificial assumptions to explain the `` outbreak of cooperation '' . to determine the robustness of our findings, we always consider worst case scenarios , i.e. 
initial settings in which most islands have either an entirely defective subpopulation , or at least a defective majority .we further control for important parameters such as the pool of available strategies , the number of interactions or the number of islands , for which critical conditions are derived .our finding that migration is indeed able to boost the outbreak of cooperation is remarkable both because it is based on minimal assumptions about the dynamics and because it follows from a quite systematic investigation of the underlying conditions .let us investigate a population of agents divided into subpopulations on different islands which imply a coarse - grained spatial structure , i.e. . agents at island are assumed to interact with the other agents on their island ( _ panmictic population _ ) .but , in general we assume that migration between the islands is possible , with the respective migration rates given by .these define the fraction of the agent population at island that migrates to island in a given time interval .figure [ fig : lattice ] shows the case of 3 different locations .( with different strategies ) are spatially distributed on islands from where they can migrate at rates .,width=226 ] our model basically considers two different time scales for interaction and migration . we define a _ generation _ to be the time in which each agent has interacted with all other agents a given number of times , denotes as .thus the total number of interactions during each generation in the panmictic population is roughly . atany given time , agent can interact only with one other agent in a `` two - person game '' .but during one generation , it interacts with all other agents times ( repeated two - person game ) . for the interaction ,we adopt the well known _iterative prisoner s dilemma _ ( ipd ) from game theory . at each encounter between agents and , both have two actions to choose from , either to _ collaborate _ ( ) or to _ defect _ ( ) , _ without knowing _ the action chosen by their counterpart .the outcome of that interaction is described in the following payoff matrix : {ccc } & \textbf{c } & \textbf{d}\\ \textbf{c } & \;(r , r)\ ; & \;(s , t)\ ; \\\textbf{d } & \;(t , s)\ ; & \;(p , p)\ ; \end{array}\ ] ] if both agents have chosen to collaborate , then they both receive the payoff .if one of the agents chose to defect and the other one to cooperate , then the defecting agent receives the payoff and the cooperating the payoff .if both defect , they both receive the payoff . in the special class of _ prisoner s dilemma _ ( pd ) games , the payoffs have to fullfil the following two inequations : which are met by the standard values , , , .this means in a cooperating environment , a defector will get the highest payoff . for ( `` one - shot '' game ) choosing action is unbeatable , because it rewards the higher payoff for agent no matter if the opponent chooses or . 
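for concreteness , a minimal numerical statement of this payoff structure is sketched below ; the values t = 5 , r = 3 , p = 1 , s = 0 are an assumption ( the usual choice in the literature , and the one consistent with the average - payoff matrix used in the appendix ) rather than a quotation from the text .
....
/* sketch only: payoff matrix (row = own action, column = opponent's action,
   0 = cooperate, 1 = defect) and the two pd conditions t > r > p > s and
   2r > t + s.  the numerical values below are assumed, not quoted. */
#include <stdio.h>

int main(void)
{
    const double t = 5.0, r = 3.0, p = 1.0, s = 0.0;
    const double payoff[2][2] = { { r, s },    /* i cooperates */
                                  { t, p } };  /* i defects    */

    int is_pd = (t > r) && (r > p) && (p > s) && (2.0 * r > t + s);
    printf("pd conditions hold: %s\n", is_pd ? "yes" : "no");
    printf("payoff of d against c: %.1f (the largest single payoff)\n", payoff[1][0]);
    return 0;
}
....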
at the same time ,the payoff for _ both agents and _ is maximized when both cooperate .a simple analysis shows that defection is a so - called _ evolutionary stable strategy _ ( ess ) in a one - shot pd .if the number of cooperators and defectors in the population is given by and respectively , then the expected average payoff for cooperators will be .similarly the expected average payoff for defectors will be .since and , is always larger than for a given number , and pure defection would be optimal in a one - shot game .even one defector is sufficient to invade the complete population of cooperators .but in a _ consecutive _ game with short _ memory _ , both agents , by simply choosing , would end up earning less than they would earn by cooperating .thus , the number of games two agents play together becomes important .this makes sense only if the agents can remember the previous choices of their opponents , i.e. if they have a memory of steps .then , for the _ iterated prisoner s dilemma _ ( ipd ) , they are able to develop different _ strategies _ based on their past experiences with their opponents .we note that the ipd game was studied both in the context of a panmictic population ( ) and assuming a spatial population structure . usually , in an ipd only a _ one - step _ memory is taken into accout . based on the known previous choice of its opponent, either or , agent has then the choice between _ eight _ different strategies . following a notation introduced in ,these strategies are coded in a 3-bit binary string ] such that the outbreak of cooperation happens on all islands . denotes the the smallest value of the bandwidth of optimal migration rates shown in fig .[ fig : howto_find_threshold ] . i.e. , for fixed and a given number of islands with the initial conditions specified in eq .( [ eq : threshold ] ) , there exist a tuple of critical values which determine the _ outbreak of cooperation_. the outbreak shown in fig .[ fig : howto_find_threshold](middle ) for is observed for .this leads us to the question , how the threshold value and the critical migration rate change if we change the number of islands .( black line ) and relative effort ( dashed line ) dependent on the number of islands .symbols represent results of numerical calculations .fitted values of eq .( [ eq : nonlin - k ] ) , , .( right ) optimal migration rate dependent on the number of islands .[ fig : migra_vs_k_and_thres_vs_k],title="fig:",width=226 ] ( black line ) and relative effort ( dashed line ) dependent on the number of islands .symbols represent results of numerical calculations .fitted values of eq .( [ eq : nonlin - k ] ) , , .( right ) optimal migration rate dependent on the number of islands .[ fig : migra_vs_k_and_thres_vs_k],title="fig:",width=226 ] the results of exhaustive numerical calculations which had to search for the outbreak of cooperation on the whole parameter space are shown in fig .[ fig : migra_vs_k_and_thres_vs_k](left ) . obviously , for the threshold frequency results from the analysis given in [ 2d ] .if the number of islands increases , has to increase as well , in order to be able to `` export '' cooperating agents while still staying above the theshold .this increase , however , is nonlinear in .precisely , as can be verified in fig .[ fig : migra_vs_k_and_thres_vs_k ] , the values of the constants depend implicitely also on the payoff matrix , eq . 
( [ 2payoff ] ) , which defines the comparative advantage of each strategy for reproduction and invasion , subsequently .plotting we see that for , with every new island added the _ relative effort _ to invade the new one with cooperators _ decreases _ , while for this effort _ increases _ quadratically with . however , the effort of invading other islands with cooperators always stays below the threshold of an isolated island , which is 0.2 .this is a direct consequence of the combined processes of migration and reproduction .newly arriving cooperative agents will reproduce on the other islands , this way reaching the threshold value the before the next generation is ready for migration .hence , _ diversification of the reproduction sites _ for cooperating agents lower the relative effort for the outbreak of cooperation and makes it more favorable . as a second consequence of the nonlinear dependency , the export of cooperation from _one _ island to other islands is only possible for a limited number of islands , because of the two effects already mentioned above .i.e. the `` export '' of cooperating agents and the `` import '' of defecting agents both decrease the fraction of cooperating agents below the threshold frequency . hence , for the two strategy case discussed here , already leads to the collapse of exporting cooperation . the critical migration rate dependent on the number of islands is shown in fig .[ fig : migra_vs_k_and_thres_vs_k](right ) .we recall that this gives the minimal fraction of the population that has to migrate in order to allow the outbreak of cooperation on any island . if it is at the same the maximum fraction to not let cooperation collapse back home , i.e. it is the optimal migration rate ( see fig . [fig : howto_find_threshold](middle ) eventually , one can ask how the outbreak of cooperation is affected if instead of the two strategies tft , all - d three or four strategies are considered . in general , the presence of both all - c and a - tft favors the invasion of all - d at the end , whereas the presence of a - tft favors the invasion of tft thus we can expect that the thresholds are slightly higher or lower in such cases . a more involved discussion is presented in [ divers ] .before summarizing our findings , we wish to comment on a related strand of literature about multi - level selection in populations . 
there , quite similar to the setup presented in our paper , a population consisting of several groups ( subpopulations ) was considered where interaction only occurs between agents of the same subpopulation .the fitness of agents was determined by the payoff obtained from an evolutionary game , simply chosen as a non - iterative prisoner s dilemma , and their reproduction was assumed to be dependent on their payoff according to a moran process .further a stochastic dynamics was considered .in addition to these differences ( pd , moran process , stochastic dynamics ) , also a separate group dynamics was assumed .groups can split when reaching a certain size , which denotes a amplified replication process at the group level .precisely , groups that contain fitter agents reach the critical size faster and , therefore , split more often .this concept leads to selection among groups , although only individuals reproduce .it allows for the emergence of higher level selection ( group ) from lower level reproduction ( agents ) .it was shown that cooperation in such a setting is favored if the size of groups is small whereas the number of groups is large .migration does _ not _ support the outbreak of cooperation in this model because it favors the invasion of defectors rather than of cooperators .in contrast to these investigations , we have contributed to the analysis of evolutionary ( deterministic ) ipd games in two different domains : * for the meanfield limit , represented by a panmictic population where each agent interacts with other agents times , a quantitative investigation of the basins of attraction was provided .dependent on the initial mix of two cooperative ( all - c , tft ) and two defective ( all - d , a - tft ) strategies and the number of interactions , we could derive the critical conditions ( threshold frequencies ) under which the outbreak of cooperation can be expected in a panmictic population . a detailed analysis of the two , three and four strategy cases helped to better understand the contribution of each strategy on the final outcome . * using the `` unpertubed '' ( or meanfield ) limit as a reference point, the impact of migration on the outbreak of cooperation in a spatially distributed system could be quantified .we found that there is a bandwidth of optimal migration rates which lead to the induction ( or `` export '' ) of cooperation to a number of islands which otherwise would converge to defection .we were able to determine the critical conditions which guarantee the outbreak of cooperation in a worst case scenario .remarkably , the relative effort to export cooperation to defecting islands was decreasing with in some range and always stayed below the critical value for an isolated island .i.e. effectively it became easier to turn defecting into cooperating islands , provided the optimal migration rate was reached .it is important to note that the outbreak of cooperation is not enforced by a maximal migration rate but , as stated above , by a small range of optimal migration rates .this is because migration , different from other approaches , is not seen as an unidirectional dynamics , where agents just move to a `` better '' place . 
instead, our model is based on the assumption that it is a _ bidirectional_ process , which we deem a more realistic assumptions when considering a dynamics over many generations .in fact , agents , or their offsprings , which have obtained certain skills or wealth at their immigration country , quite often decide to start new buisness back home at their origin country because their new capabilities give them a considerable advantage there , while it is only a marginal advantage in their immigration country .this assumption is in line with many policies about immigration / education of foreigners in a country , to give indirect support for development and to allow future business with those countries , where some agents with positive experience have resettled . at the end , as we have shown in our paper , it remarkably helps to spread `` cooperation '' globally , if the number of breeding places for such a strategy is increased ( above a minimum threshold ) .this work on the outbreak of cooperation was supported by the cost - action mp0801 _ physics of competition and conflicts_. f.s .announces financial support from the swiss state secretariat for education and research ser c09.0055 .10 urlstyle [ 1]doi:#1 axelrod , r. and hamilton , w. , the evolution of cooperation , _ science _ * 211 * ( 1981 ) 13901396 .bladon , a. j. , galla , t. , and mckane , a. j. , evolutionary dynamics , intrinsic noise and cycles of co - operation , _ physcal review e _ * 81 * ( 2010 ) 066122(112 ) .cohen , m. d. , riolo , r. l. , and axelrod , r. , the emergence of social organization in the prisoner s dilemma : how context - preservation and other factors promote cooperation , technical report 99 - 01 - 002 , santa fe institute ( 1999 ) , working paper .doebeli , m. , blarer , a. , and ackermann , m. , population dynamics , demographic stochasticity , and the evolution of cooperation , _ proc .natl acad .sci , usa _ * 94 * ( 1997 ) 51675171 .fogel , d. b. , on the relationship between the duration of an encounter and evolution of cooperation in the iterated prisoner s dilemma , _ evolutionary computation _ * 3 * ( 1995 ) 349363 .helbing , d. and yu , w. , migration as a mechanism to promote cooperation , _ adv .complex syst _ * 11 * ( 2008 ) 641652 .helbing , d. and yu , w. , the outbreak of cooperation among success - driven individuals under noisy conditions , _ proceedings of the national academy of sciences _ * 106 * ( 2009 ) 3680 .hofbauer , j. and sigmund , k. , _ the theory of evolution and dynamical systems _( cambridge university press , 1988 ) .imhof , l. a. , fudenberg , d. , and nowak , m. a. , evolutionary cycles of cooperation and defection ., _ proceedings of the national academy of sciences of the united states of america _ * 102 * ( 2005 ) 10797800 .jiang , l. , wang , w. , lai , y. , and wang , b. , role of adaptive migration in promoting cooperation in spatial games , _ physical review e _ * 81 * ( 2010 ) 036108 .lozano , s. , arenas , a. , and snchez , a. , mesoscopic structure conditions the emergence of cooperation on social networks , _ plos one _ * 3 * ( 2008 ) e1892 .michael , m. , natural selection and social learning in prisoner s dilemma , _ sociological methods and research _ * 25 * ( 1996 ) 103138 .nowak , m. a. and sigmund , k. , tit for tat in heterogeneous populations , _ nature _ * 355 * ( 1992 ) 250253 .rapoport , a. , prisoner s dilemma : reflections and recollections , _ simulation and gaming _ * 26 * ( 1996 ) 489503 .roca , c. , cuesta , j. , and snchez , a. 
, effect of spatial structure on the evolution of cooperation , _ physical review e _ * 80 * ( 2009 ) 046106 .schweitzer , f. , behera , l. , and mhlenbein , h. , evolution of cooperation in a spatial prisoner s dilemma , _ advances in complex systems _ * 5 * ( 2002 ) 269300 .schweitzer , f. , mach , r. , and m , h. , agents with heterogeneous strategies interacting in a spatial ipd agent s strategies , in _ lecture notes in economics and mathematical systems _lux , t. , reitz , s. , and samanidou , e. , i ( springer , 2005 ) , pp .87102 .szabo , g. , antal , t. , szabo , p. , and droz , m. , spatial evolutionary prisoner s dilemma game with three strategies and external constraints , _ physical rev .* 62 * ( 2000 ) 10951103 .szabo , g. and toke , c. , evolutionary prisoner s dilemma game on a square lattice , _ physical review e _ * 58 * ( 1998 ) 6973 .traulsen , a. and nowak , m. a. , evolution of cooperation by multilevel selection , _ proceedings of the national academy of sciences of the united states of america _ * 103 * ( 2006 ) 109525 .traulsen , a. , sengupta , a. m. , and nowak , m. a. , stochastic evolutionary dynamics on two levels , _ journal of theoretical biology _ * 235 * ( 2005 ) 393401 .traulsen , a. , shoresh , n. , and nowak , m. a. , analytical results for individual and group selection of any intensity ., _ bulletin of mathematical biology _ * 70 * ( 2008 ) 141024 .vainstein , m. h. and arenzon , j. j. , disordered environments in spatial games , _ physical review e _ ( 2002 ) .vega - redondo , f. , _ economics and the theory of games _ , 1st edn .( cambridge university press , new york , 2003 ) .in order to illustrate eq .( [ strat - payoff ] ) , let us look at two agents and .if both of them play strategy , tft , then , during the different `` iterations '' , their choices are as follows : {ccccccccccccccccc } & & 1 & 2 & 3 & 4 & \cdots & n_{g}\\ & \mathrm{payoff}\;\ , a_{i}^{(11 ) } : & r & r & r & r & \cdots & r \\i & ( tft ) : & c & c & c & c & \cdots & c\\ j & ( tft ) : & c & c & c & c & \cdots & c \\ & \mathrm{payoff}\;\ , a_{j}^{(11 ) } : & r & r & r & r & \cdots & r \end{array}\ ] ] consequently , the average payoff for both agents and is . if agent playing tft meets with an agent playing strategy ( all - d ) the series of choices would be : {rrcccccccccccccccc } & & 1 & 2 & 3 & 4 & \cdots & n_{g}\\ & \mathrm{payoff}\;\ , a_{i}^{(12 ) } : & s & p & p & p & \cdots & p \\ i & ( tft ) : & c & d & d & d & \cdots & d \\j & ( all - d ) : & d & d & d & d & \cdots & d \\ & \mathrm{payoff}\;\ , a_{j}^{(21 ) } : & t & p & p & p & \cdots & p \end{array}\ ] ] which leads to an average payoff for agent of /n_{g} ] .if agent playing a - tft interacts with agent playing all - d , the series of choices reads : {rrcccccc } & & 1 & 2 & 3 & 4 & \cdots & n_{g } \\ & \mathrm{payoff}\;\ , a_{i}^{(42 ) } : & p & s & s & s & \cdots & s \\i & ( a - tft ) : & d & c & c & c & \cdots & c \\j & ( all - d ) : & d & d & d & d & \cdots & d \\ & \mathrm{payoff}\;\ , a_{j}^{(24 ) } : & p & t & t & t & \cdots & t \end{array}\ ] ] that means the average payoff for agent is /n_{g} ] .the remaining cases can be explained similarly . only the three more complex cases are given below . 
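( all pairwise averages of this appendix , including the three cases listed next , can also be generated mechanically by simply playing out the encounters ; a small sketch follows , again with the assumed payoff values t = 5 , r = 3 , p = 1 , s = 0 ; for n_g = 4 it reproduces the 2x2 tft / all - d matrix quoted below . )
....
/* sketch: average payoff per encounter a^{(ij)} obtained by playing the n_g
   rounds between two memory-one strategies, as in the series shown in this
   appendix.  strategy behaviour: all-c and all-d are constant; tft starts
   with c and copies the opponent's previous move; a-tft starts with d and
   plays the opposite of the opponent's previous move.  the payoff values
   t = 5, r = 3, p = 1, s = 0 are an assumption. */
#include <stdio.h>

enum { C = 0, D = 1 };
enum { ALL_C, ALL_D, TFT, A_TFT, N_STRAT };

static const double T = 5.0, R = 3.0, P = 1.0, S = 0.0;

/* payoff of my action against the opponent's action */
static double pay(int me, int opp)
{
    if (me == C) return (opp == C) ? R : S;
    else         return (opp == C) ? T : P;
}

static int first_move(int strat)
{
    return (strat == ALL_D || strat == A_TFT) ? D : C;
}

static int reply(int strat, int opp_last)
{
    switch (strat) {
    case ALL_C: return C;
    case ALL_D: return D;
    case TFT:   return opp_last;                  /* copy the opponent      */
    default:    return (opp_last == C) ? D : C;   /* a-tft: do the opposite */
    }
}

int main(void)
{
    const int n_g = 4;
    int i, j, n;

    for (i = 0; i < N_STRAT; i++) {
        for (j = 0; j < N_STRAT; j++) {
            int ai = first_move(i), aj = first_move(j);
            double sum = 0.0;
            for (n = 0; n < n_g; n++) {
                int ai_next, aj_next;
                sum += pay(ai, aj);
                ai_next = reply(i, aj);
                aj_next = reply(j, ai);
                ai = ai_next;
                aj = aj_next;
            }
            printf("%5.2f ", sum / n_g);  /* a^{(ij)}: average payoff of i against j */
        }
        printf("\n");
    }
    return 0;
}
....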
follows from : {rrcccccccccccccccc } & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & \cdots \\ & \mathrm{payoff}\;\ , a_{i}^{(14 ) } : & s & p & t & r & s & p & t & r & \cdots\\ i & ( tft ) : & c & d & d & c & c & d & d & c & \cdots \\j & ( a - tft ) : & d & d & c & c & d & d & c & c & \cdots \\ & \mathrm{payoff}\;\ , a_{j}^{(41 ) } : & t & p & s & r & t & p & s & r & \cdots \end{array}\ ] ] here , the average payoff of agent results from a repeated payoff series , and the value of defines when this series is truncated .similarly , the series for is .thus the first term of , eq . ( [ a - qr ] ) calculates all _ completed _ payoff series ( ) , while the second term calculates the remaining payoffs ( ) . for , we find eventually : {rrcccccccccccccccc } & & 1 & 2 & 3 & 4 & \cdots \\ & \mathrm{payoff}\;\ , a_{i}^{(44 ) } : & p & r & p & r & \cdots \\ i & ( a - tft ) : & d & c & d & c & \cdots \\j & ( a - tft ) : & d & c & d & c & \cdots \\ & \mathrm{payoff}\;\ , a_{j}^{(44 ) } : & p & r & p & r & \cdots \end{array}\ ] ] here , the period of the payoff series is just 2 instead of 4 , and the expression for , eq .( [ a - qr ] ) follows accordingly .we assume that the population consists of only two strategies : and - d , eq .( [ strat ] ) .this is an interesting combination since defection is known to be the only equilibrium in an one - shot pd game , while tft fares well in a repeated pd interaction , given there is a critical number of encounters ( discussed below ) . the average payoff received by agent playing strategy ( ) results from eq .( [ strat - payoff ] ) .choosing we find : = \left[\begin{array}{cc } 3.0 & 0.75 \\ 2.0 & 1.0 \end{array}\right]\ ] ] applying eqs .( [ a - bar ] ) , ( [ a - per ] ) , we find for the total average payoff : on the other hand , it follows from the stationary condition ( [ stat ] ) : the combined eqs .( [ a - tot-2 ] ) , ( [ eq : simul ] ) have to be solved simultaneously for the possible stationary frequencies , . as the resultwe find : solution ( i ) implies that either strategy all - d or tft invades the population completely , which are the two stable attractors of the system in the case of only two strategies .on the other hand , solution ( ii ) describes the coexistence of the two strategies with different frequencies within the total population . in the given case ,it is is an unstable point attractor that separates the basins of two stable attractors , and therefore acts as a separatrix here .i.e. , for an initial frequency of strategy , the dynamics of the system will converge into a stationary state that is entirely dominated by strategy all - d ( where the payoff per agent is ) , whereas in the opposite case the population will entirely adopt strategy tft ( average payoff ) .the threshold value strongly decreases with the number of encounters , as shown in fig .[ fc-3 ] for the case of four strategies .this raises the question about the critical for which tft could invade the whole population .let us compare the average payoffs of the strategies tft and all - d : the respective values of can be calculated from eq .( [ strat - payoff ] ) for different numbers of .we find that only the elements and change with , as follows : with and , we find for that .this implies that for all possible initial frequencies of the two strategies in the population .thus , from the dynamics of eq .( [ a - dyn ] ) ( fitness - proportional selection ) the extinction of the strategy tft results , in agreement with the known results . 
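( as a short worked step , the coexistence point of solution ( ii ) follows from equating the average payoff of tft with that of all - d , which with the matrix given above yields
\[
f^{*}=\frac{a^{(22)}-a^{(12)}}{a^{(11)}-a^{(12)}-a^{(21)}+a^{(22)}}
=\frac{1-0.75}{3-0.75-2+1}=0.2 ,
\]
i.e. the threshold frequency of an isolated island quoted in the main text . )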
for find , , which again implies that for all possible initial frequencies of the two strategies in the population , i.e. the extinction of strategy tft .however , for , we find , . thus , there exist some initial frequencies of the two strategies for which results .i.e. , we can conclude that for the given payoff matrix is the _ threshold value _ for the number of interactions between each two agents that may lead to the invasion of cooperation in the whole population .when strategy all - c is added to the previous strategies tft and all - d , agents playing all - d will benefit from agents playing all - c .therefore , if the frequency of agents playing all - c is increased in the initial population , it can be expected that the basin of attraction of the tft strategy will shrink , while the basin of attraction of the all - d strategy will grow , compared to the case of two strategies , discussed in [ 2d ] .the stationary frequencies have now to be calculated from the following set of coupled equations that result from eqs .( [ a - bar ] ) , ( [ a - per ] ) and the stationary condition ( [ stat ] ) : the matrix elements can be calculated from eq .( [ strat - payoff ] ) .with we find the following stationary solutions : we note that a mean - field analysis of the three - strategy case was also dicussed in and , assuming extensions such as mutations and fluctuations , analysed further in .we consider the deterministc case here . solutions ( i ) imply that either strategy tft or all - d or all - c invades the entire population .we note that only the first two solutions are stable ones , while the last point - attractor , is an unstable one , because any small pertubation ( i.e. the invasion of one defecting agent ) will transfer the cooperating system into a defecting one .the first of the solutions ( ii ) describing coexisting strategies is already known from the investigation in [ 2d ] to be an unstable one . in the absense of the third strategy , it defines the separatrix point .the second solution ( ii ) however is a stable one , indicating that both agents playing tft and all - c can coexist in the panmictic population .note that there is neither a stable nor an unstable coexistence of all three strategies .the separatix that divides the different basins of attraction is now a line in a two - dimensional space of the initial frequencies . butdifferent from [ 2d ] the stationary solutions of eq .( [ separat-3 ] ) do not give further information about the separatices . in order to calculate the different basins of attraction , we therefore have numerically solved eq .( [ a - dyn ] ) for the full range of initial frequencies : , , , and have evaluated the average payoff in the stationary limit . if , obviously the whole population has adopted strategy all - d .similarly , if and , the whole population has adopted strategy tft .however , if and , then there is a coexistence of agents playing strategy tft and all - c . the basins of attraction are shown in fig .[ fig : basin_3 ] for two different values of and . 
in the latter case , results for the payoffs in eq .( [ strat - payoff ] ) .for we can distinguish between three different regions in fig .[ fig : basin_3 ] .region denotes the range of initial frequencies that lead to the adoption of the all - d strategy in the whole population , region denotes the range of initial frequencies that lead to the adoption of the tft strategy in the whole population , while region denotes the range of initial frequencies that lead to the coexistence of both tft and all - c strategies .that lead to a particluar stable solution , eq .( [ separat-3 ] ) . : adoption of all - d strategy in the whole population , : adoption of tft strategy in the whole population , : coexistence of both tft and all - c strategies .( left ) , ( right ) . [fig : basin_3],title="fig:",width=226 ] that lead to a particluar stable solution , eq .( [ separat-3 ] ) . : adoption of all - d strategy in the whole population , : adoption of tft strategy in the whole population , : coexistence of both tft and all - c strategies .( left ) , ( right ) . [fig : basin_3],title="fig:",width=226 ] since regions and both describe the adoption of cooperating strategies in the population , the most interesting line in fig .[ fig : basin_3 ] is the separatrix between region ( all defectors ) and region ( all cooperators ) .we can easily interpret this line based on our previous analysis of the two - strategy case , [ 2d ] .the diagonal in fig .[ fig : basin_3 ] represents , i.e. a population with only two strategies , tft and all - d .thus the separatrix line between regions and starts from the separatrix point , , , eq . ( [ separat-3 ] ) . further below the diagonal ,the frequency of the strategy all - c increases in the initial population , which in turn increases the threshold frequency necessary for the invasion of the tft strategy . in a certain range of frequencies , the separatrix line between defection and cooperation can be described by a linear relation , found numerically as : however , we notice that the separatrix line between regions and never hits the x - axis . for very low values of , i.e. close to the x - axis , it makes a sharp turn towards the origin .this means that for a vanishing initial frequency of all - d there will be no route to the respective attraction region , which is obviously correct .the influence of the parameter is shown by comparing the left ( ) and the right part ( ) of fig .[ fig : basin_3 ] . in the latter casethe basin of attraction ( exclusive domination of strategy all - d ) becomes very small . in order to further quantify the influence of on the dominating strategies in the stationary limit, we have also calculated the relative size of each basin of attraction . if , and denote the area of the regions , and in fig .[ fig : basin_3 ] , the relative sizes are defined as follows : the results are shown in fig .[ fig : bar_3 ] for the two different values of . ) of the basins of attraction shown in fig .[ fig : basin_3 ] . the left bars ( shaded area )refer to ( fig .[ fig : basin_3 ] left ) , while the right bars ( black area ) refer to ( fig .[ fig : basin_3 ] right ) .thus , the change indicates the influence of on the size of the basins of attraction .[ fig : bar_3],width=226 ] in fig .[ fig : bar_3 ] , , eq . 
( [ area ] ) denotes the relative size of the basin of attraction for cooperation resulting from both solutions , domination of tft and coexistence of tft and all - c .as we see , for , the cooperative basin only has about the same size as the basin of attraction for defection , that mean that about half of the possible initial conditions will lead to a population of defecting agents , at the end . only for size of the defection basin becomes insignificant as compared to the cooperative basin .this again explains the role of in influencing cooperation .in section [ sec : threshold ] , we computed the critical conditions for the outbreak of cooperation in the presence of only two strategies , all - d and tft . if we consider the two additional strategies all - c and a - tft , eq .( [ strat ] ) , these critical conditions change dependent on the initial values of the four strategies and their distribution on the different islands .thus , instead of a complete investigations we only discuss the following sample configurations with : these initial conditions imply that .the results of extensive calculations of the relative effort and the critical migration rate are shown in fig .[ fig : thres_and_migr_vs_k_all ] and can be compared to the two strategy case , eq .( [ eq : threshold ] ) . versus number of islands .( right ) migration rate versus number of islands .migration can only promote cooperation if the number of islands is ,title="fig:",width=226 ] versus number of islands .( right ) migration rate versus number of islands .migration can only promote cooperation if the number of islands is ,title="fig:",width=226 ] again , we notice that the relative effort to invade other islands by agents playing tft is non - monotonously dependent on and always stays below the threshold values observed without migration , as given in sect .[ 3.1 ] , [ 3d ] . regarding the impact of the different strategies on the outbreak of cooperation, we see that in the presence of all - c all - d benefits more than tft in terms of payoff which raises the threshold frequency . however , adding a - tft benefits all - d less than adding all - c , which lowers the threshold frequency .this explains why the curve for the _ four _ strategy case is in between the curves corresponding to _ two _ and _ three _ strategies . from fig .[ fig : thres_and_migr_vs_k_all](right ) the optimal migration rate is found to be almost constant for three and four strategies in contrast to the two strategy case . | we consider a population of agents that are heterogeneous with respect to ( i ) their strategy when interacting times with other agents in an iterated prisoners dilemma game , ( ii ) their spatial location on different islands . after each generation , agents adopt strategies proportional to their average payoff received . assuming a mix of two cooperating and two defecting strategies , we first investigate for isolated islands the conditions for an exclusive domination of each of these strategies and their possible coexistence . this allows to define a threshold frequency for cooperation that , dependent on and the initial mix of strategies , describes the outbreak of cooperation in the absense of migration . we then allow migration of a fixed fraction of the population after each generation . 
assuming a worst case scenario where all islands are occupied by defecting strategies , whereas only one island is occupied by cooperators at the threshold frequency , we determine the optimal migration rate that allows the outbreak of cooperation on _ all _ islands . we further find that the threshold frequency divided by the number of islands , i.e. the relative effort for invading defecting islands with cooperators decreases with the number of islands . we also show that there is only a small bandwidth of migration rates , to allow the outbreak of cooperation . larger migration rates destroy cooperation . _ keywords : _ migration , cooperation , iterated prisoners dilemma |
bernstein polynomial basis plays an important role in computer graphics for geometric modeling , curve and surface approximation .some interesting features have been investigated for this basis in the last decades ; for instance , it is proven to be an optimal stable basis among nonnegative bases in a sense described in .also , it provides some optimal shape preserving features .we refer to for detailed properties and applications in computer aided geometric design ( cagd ) .bernstein basis has also been used for the numerical solution of differential , integral , integro - differential and fractional differential equations , see e.g. and the references therein .however , it is not orthogonal and so leads to dense linear systems when using in numerical methods .some numerical approaches implement the orthogonalized bernstein basis .however , as we will see in the next section , it fails to keep some interesting properties of the bernstein basis .another approach uses the dual bernstein polynomials ( dbp ) introduced by juttler in 1998 . to the best of our knowledge, the dbp basis has been only discussed from the cagd point of view ( see the works of lewanowicz and wozny e.g. ) .so it is of interest to explore some new aspects of this basis in order to facilitate the numerical methods for differential equations that are based on bernstein polynomials and to present a method for time fractional diffusion equation in two dimensions .fractional partial differential equations ( fpdes ) have been widely used for the description of some important physical phenomena in many applied fields including viscoelastic materials , control systems , polymer , electrical circuits , continuum and statistical mechanics , etc .the subdiffusion equation is a fpde describing the behavior of anomalous diffusive systems with the probability density of particles diffusing proportional to the mean square displacement with .anomalous diffusion equations have been used for modeling transport dynamics , especially the continuous time random walk , the contamination in porous media , viscoelastic diffusion , etc . for the numerical solution of the one - dimensional problem, we refer to and the references therein .some classic numerical methods for pdes have been developed for the simulation of two - dimensional subdiffusion equation , for example the finite difference schemes , meshless methods , finite element method , alternating direction implicit methods , etc . in this paper , deriving some new aspects of dbps , we present suitable combinations of these functions in order to develop a dual - petrov - galerkin method for solving the following 2d subdiffusion equation ,\label{eq : main}\end{aligned}\ ] ] with the following initial and boundary conditions ,\label{bvs}\end{aligned}\ ] ] where , is the laplacian operator , , is the diffusion coefficient and is the source function . here, denotes the caputo fractional derivative of order , with respect to defined as the main contribution of our work is the development of an accurate bernstein dual - petrov - galerkin method and the application for the numerical simulation of the 2d subdiffusion equation .it is shown the method leads to sparse linear systems . 
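for reference , the model problem is the standard 2d subdiffusion equation with a caputo time derivative of order \(\alpha\in(0,1)\) ; written out in the usual form ( the symbols \(\kappa\) and \(g\) stand for the diffusion coefficient and the initial datum named above , and the unit square is the domain used in the discretization below ) it reads
\[
\frac{\partial^{\alpha}u}{\partial t^{\alpha}}(x,y,t)=\kappa\,\Delta u(x,y,t)+f(x,y,t),\qquad (x,y)\in\Omega=(0,1)^{2},\ t\in(0,T],
\]
\[
u(x,y,0)=g(x,y)\ \text{in}\ \Omega,\qquad u(x,y,t)=0\ \text{on}\ \partial\Omega\times(0,T],
\]
with the caputo derivative defined in the standard way ,
\[
\frac{\partial^{\alpha}u}{\partial t^{\alpha}}(x,y,t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}(t-s)^{-\alpha}\,\frac{\partial u}{\partial s}(x,y,s)\,ds .
\]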
to give a matrix approach of the method , we present some results concerning the dbps including a recurrence formula for the derivative , constructing a new basis using dbps , deriving the operational matrix of differentiation and also providing the transformation matrices between the dbps and the new basis .the paper is organized as follows : section [ sec : bern=000026dual ] presents some new aspects of dbps and provides modal basis functions and the associated transformation matrices between the bases .section [ sec : variationalformulation ] is devoted to the bernstein - spectral formulation of the subdiffusion problem ( [ eq : main])-([bvs ] ) and the stability and convergence results are discussed in section [ sec : error - estimation ] .numerical examples are provided in section [ sec : numerical - examples ] .the paper ends with some concluding remarks in section [ sec : con ] .the bernstein polynomials with degree on the unit interval are defined by the set forms a basis for , the space of polynomials of degree not exceeding .these polynomials possess end - point interpolation property , i.e. , also , the summation is one and the integral over the unit interval is constant , namely the derivative enjoys the three - term recurrence relation where we adopt the convention that for and . as we mentioned in the preceding section ,the bernstein basis is not orthogonal .the corresponding orthogonalized basis , obtained e.g. , by the gram - schmidt process fails to keep some interesting aspects of the original basis .we will not consider this basis in the present work .instead we turn to the dual basis .the dbps are defined as with the coefficients given by it is verified that they satisfying the biorthogonal system [ , theorem 3 ] it is worth noting that less than a quarter of the entries of transformation matrix between the bernstein and dual bernstein basis ] , .\(ii ) for all ] need not to be computed by ( [ eq : dual])-([eq : dualcoefficients ] ) .it especially gives .one may use compact combinations of orthogonal polynomials as a basis in the galerkin methods for boundary value problems which leads to sparse linear systems in some problems ( see e.g. , ) . here, we use this idea for the non - orthogonal bernstein polynomials to present a simple and accurate dual - petrov - galerkin spectral method for two - dimensional subdiffusion equation .compact combinations of the basis functions are referred to as the modal basis functions ( see [ , section 1.3 ] ) .vanishpsi]let be an integer , be the dual bernstein basis and set for where then , the polynomials vanish at 0 and 1 , so the set forms a basis for . by ( [ eq : dual ] ) and ( [ eq : bernbound ] ) , we have from lemma [ dual properties ], we infer by ( [ eq : dualvalue0 ] ) and ( [ eq : dualvalue1 ] ) , we obtain for it is easy to see is linearly independent . since , this set is a basis for .this completes the proof .figure [ fig : dual - bern ] . 
illustrates the dbps and the modal basis functions for .it is seen that the modal basis functions have less values than the corresponding dual functions on the unit interval , expecting less round - off errors .graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig:"]graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig:"]graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig : " ] graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig:"]graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig:"]graphs of dbps ( top ) and the modal basis functions ( bottom).,title="fig : " ] for , consider the and the consisting of dual functions given by ( [ eq : dual ] ) and the modal basis functions given by ( [ eq : compactbasis ] ) , respectively : ^{t},\label{eq : dualbasisvector}\\ & & \mathbf{\psi}(\cdot)=[\psi_{i}\left(\cdot\right):\,0\leq i\leq n-2]^{t}.\label{eq : modalbasisvector}\end{aligned}\ ] ] for simplicity , we ignore the dependence of the vectors on variable .first , note that where ] is given by the dbps is a basis for , so we expand for as integration by parts and ( [ eq : bernderiv ] ) imply that the biorthogonality ( [ eqbiorthogonality ] ) gives now , the result is proved by considering ( [ eq : dualvalue0 ] ) and ( [ eq : dualvalue1 ] ) .the matrix is a sparse matrix of order with for ; for instance , see the matrix given below .set for then , from ( [ eq : p sait to sait ] ) , we infer the following five - term recurrence relation is deduced where we set for and we derive the transformation matrices that map the bernstein and chebyshev coefficients now we derive the transformation matrix that maps the derivative of modal basis functions to dbps .this facilitates the use of galerkin method in the next section . in the following, matrix stands for a matrix with and nonzero diagonals below and above the main diagonal , respectively .[ lemma1forq]let the vectors and be defined as in ( [ eq : dualbasisvector ] ) and ( [ eq : modalbasisvector ] ) , respectively .then , where is an matrix given by combining ( [ eq : saitosaitilde(g ) ] ) with ( [ eq : p sait to sait ] ) , implies to prove that is a matrix , it is sufficient to show that for and for and for by ( [ dual properties ] ) note that s vanish at the boundary values according to proposition [ prop 1 .vanishpsi ] .the proof is complete . to see the sparsity of the transformation matrices , , and for shown in the following .in this section , at first the problem ( [ eq : main])-([bvs ] ) is discretized in time . then we develop a bernstein dual - petrov - galerkin method using ? .the matrix formulation and the error estimations are also provided in this section . consider the subdiffusion equation ( [ eq : main ] ) at as let be an approximation of at for where is the time step length . the time fractional derivative can be approximated by definition ( [ eq : fracdef ] ) and using forward difference for the derivative inside as where and for and .the error is bounded by where the coefficient depends only on .the time discretization ( [ eq:4.1 - 1 ] ) is referred to as l1 approximation ( see e.g. ) . substituting from ( [ eq:4.1 - 1 ] ) into ( [ eq : attimek+1 ] ) and multiplying both sides by and dropping , the following time - discrete scheme is obtained with and is given by the initial condition ( [ iv ] ) with the error for it reads as the boundary conditions for the semidiscrete problem is on . 
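a small , self - contained sketch of this l1 approximation is given below ; the weights \(b_{j}=(j+1)^{1-\alpha}-j^{1-\alpha}\) and the prefactor \(\tau^{-\alpha}/\Gamma(2-\alpha)\) are the standard ones , while the code ( including the test on \(u(t)=t^{2}\) ) is illustrative rather than part of the implementation described in the following sections .
....
/* sketch of the l1 approximation of the caputo derivative at t_{k+1}:
     d^alpha u / dt^alpha  ~=  tau^{-alpha} / gamma(2 - alpha)
                               * sum_{j=0}^{k} b_j * (u^{k+1-j} - u^{k-j}),
   with weights b_j = (j+1)^{1-alpha} - j^{1-alpha}.  here u[] holds the
   history of a scalar quantity at the time levels; in the scheme above the
   same weights multiply the spatial expansion coefficients. */
#include <math.h>
#include <stdio.h>

static double l1_caputo(const double *u, int k, double tau, double alpha)
{
    double acc = 0.0;
    int j;
    for (j = 0; j <= k; j++) {
        double b_j = pow(j + 1.0, 1.0 - alpha) - pow((double)j, 1.0 - alpha);
        acc += b_j * (u[k + 1 - j] - u[k - j]);
    }
    return acc * pow(tau, -alpha) / tgamma(2.0 - alpha);
}

int main(void)
{
    /* quick check on u(t) = t^2, whose exact caputo derivative of order
       alpha is 2 t^{2-alpha} / gamma(3 - alpha) */
    const double alpha = 0.5, tau = 1.0e-3;
    const int nsteps = 1000;
    double u[1001];
    int k;

    for (k = 0; k <= nsteps; k++) {
        double t = k * tau;
        u[k] = t * t;
    }

    double t_final = nsteps * tau;
    double exact = 2.0 * pow(t_final, 2.0 - alpha) / tgamma(3.0 - alpha);
    printf("l1 approx = %.6f, exact = %.6f\n",
           l1_caputo(u, nsteps - 1, tau, alpha), exact);
    return 0;
}
....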
consider the problem ( [ eq : semi - discrete ] ) with and the homogeneous dirichlet boundary conditions .we seek an approximate solution in the sobolev space .a weak formulation of the problem ( [ eq : semi - discrete ] ) is to find such that : let be the space of polynomials over with degree not exceeding and .the galerkin formulation of the ( [ eq : weakform ] ) is to find such that : with being the standard -norm and an interpolation operator .since , and due to ( [ eq : bernbound ] ) , we choose a basis for it by removing the first and last bernstein polynomials of degree , i.e. , ^{t}.\label{eq : bernbasisvector}\end{aligned}\ ] ] using ( [ eq : bernderiv ] ) , it is easy to verify where is a tridiagonal matrix of order and ^{t} ] and ] .it is worth to note that the coefficient matrix of the above system is the same for all time steps and to be evaluated just once for all in terms of the trial vector ( [ eq : bernbasisvector ] ) , and test vector ( [ eq : modalbasisvector ] ) , we may write to facilitate the computations , in what follows , these matrices are related to the transformation matrices introduced in section [ subsec : transformation - matrices - and ] .first , note that by the biorthogonality ( [ eqbiorthogonality ] ) , we have =:\mathbf{\tilde{i}}.\label{eq : itilde}\end{aligned}\ ] ] now from ( [ eq : saitosaitilde(g ) ] ) , and writing as ] is a matrix introduced lemma [ lemma1forq ] .hence , is a pentadiagonal matrix and is the product of a pentadiagonal and a tridiagonal matrix plus a sparse matrix . from lemma [ lemma1forq ] and ( [ eq : amatrix ] ), it is seen that is a seven - diagonal matrix .notice that the solution of linear system ( [ eq : tensorprodsys ] ) requires the matrices and is obtained by a sparse matrix - matrix multiplication ( [ eq : amatrix ] ) and entries of are given by ( [ eq : bentries ] ) . since the coefficient matrix of the linear system ( [ eq : tensorprodsys ] ) remains intact for a fixed , only the rhs vector to be computed for different time steps , up to a desired time .so it is efficient to use a band - lu factorization for solving the system .it is remarkable that for a matrix , the lu - factorization can be done with flops and backward substitutions require flops [ , section 4.3 ] .for the error analysis , we assume the problem ( [ eq : main ] ) to be homogeneous , . for bilinear form in ( [ eq : galerkin ] ) is continuous and coercive in .the existence and uniqueness of the solution for both the weak form ( [ eq : galerkin ] ) and the galerkin form ( [ eq : galerkin ] ) is guarantied by the well - known lax - milgram lemma .we define the following inner product and the associated energy norm on : [ th : stabilityforthe - weak - semidiscrete]the weak form ( [ eq : weakform ] ) is unconditionally stable : let in ( [ eq : weakform ] ) . then , giving ( [ eq : stabilityofsemidiscrete ] ) for by the definition ( [ eq : energynorm ] ) , the schwarz inequality and the inequality .by mathematical induction , assume ( [ eq : stabilityofsemidiscrete ] ) holds for let in ( [ eq : weakform ] ) , i.e. 
it is easy to see that the rhs coefficients in ( [ eq : semi - discrete ] ) are positive , so we obtain , and the proof is done . [ th : convergenceforsemidiscrete ] let be the solution of the equation ( [ eq : main ] ) with the conditions ( [ iv])-([bvs ] ) and let be the solution of the semidiscrete problem ( [ eq : semi - discrete ] ) . then , . the idea of the proof comes from . we first prove . by ( [ eq : main ] ) and ( [ eq : semi - discfork=0 ] ) , we have , in which for , and by using , and ( [ eq : truncationerrormulti ] ) , we get , i.e. , ( [ eq : komaki ] ) holds for . by induction , assume ( [ eq : komaki ] ) holds for . using ( [ eq : main ] ) and ( [ eq : semi - discrete ] ) , we get . for , it reads as , proving ( [ eq : komaki ] ) for , which completes the proof of ( [ eq : komaki ] ) . consider ; then there exists a such that , which gives . now using this along with ( [ eq : komaki ] ) proves ( [ eq : error u(t_k)-u^k ] ) . in order to derive ( [ eq : error u - u^k alpha->1 ] ) , we first prove . by ( [ eq : forj=1 ] ) , the inequality ( [ eq : komaki2 ] ) holds for . assume ( [ eq : komaki2 ] ) holds for . then , from ( [ eq : main ] ) , ( [ eq : semi - discrete ] ) and ( [ eq : truncationerrormulti ] ) , we obtain , so ( [ eq : komaki2 ] ) holds for . from and ( [ eq : komaki2 ] ) , we get ( [ eq : error u - u^k alpha->1 ] ) .

let be the -orthogonal projection operator from into associated with the energy norm defined in ( [ eq : energynorm ] ) . due to the equivalence of this norm with the standard norm , we have the following error estimate ( see relation ( 4.3 ) in ) . the idea of the proof for the following result comes from the paper . let be the solution of the variational formulation ( [ eq : weakform ] ) and be the solution of the scheme ( [ eq : galerkin ] ) , assuming and for some . then , for , , where depends only on . we have by the projection operator . by the definition of the norm ( [ eq : energynorm ] ) , we get . by the weak form ( [ eq : weakform ] ) , the rhs of the above equation is replaced as . subtracting ( [ eq:11 ] ) from ( [ eq : galerkin ] ) , we have , where and . let ; then with , we obtain . as in the proof of theorem [ th : convergenceforsemidiscrete ] , it is first proved by induction that for . then , by using ( [ eq : boundfor b_k ] ) and the projection error ( [ eq : projectionerror ] ) , the desired result is derived . the following theorem is obtained by the triangle inequality along with the inequalities ( [ eq : error u(t_k)-u^k ] ) and ( [ eq : error u^k-(u_n)^k ] ) . let be the solution of the problem ( [ eq : main ] ) with the initial and boundary conditions given by ( [ iv])-([bvs ] ) and let be the solution of the scheme ( [ eq : galerkin ] ) . then , assuming and , we have ; the constants and are independent of , , . it is seen that the method has the so - called spectral convergence in space and order of convergence in time .

here , some numerical experiments are provided to show the accuracy and efficiency of the proposed method . for the computations , we use maple 18 on a laptop with a core i3 1.9 ghz cpu and 4 gb of ram , running the windows 8.1 platform .
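to give a flavour of the time stepping described in the previous section ( the coefficient matrix is fixed , so it is factorized once and only the right - hand side is rebuilt at every step ) , a python sketch is given below . it is not the authors ' maple code ; the banded matrix and the rhs supplier are generic placeholders , and a general sparse lu factorization is used as a stand - in for the band - lu factorization mentioned above .

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 200  # size of a generic banded stand-in for the seven-diagonal matrix
offsets = list(range(-3, 4))
bands = [np.full(n - abs(k), 1.0 if k == 0 else 0.2 / abs(k)) for k in offsets]
A = sp.diags(bands, offsets=offsets, format="csc")

lu = splu(A)  # factorize once; the coefficient matrix never changes

def advance(rhs_at_step, n_steps):
    # march the scheme: at each step only the rhs is reassembled and the
    # precomputed factorization is reused for cheap triangular solves
    solutions = []
    for k in range(n_steps):
        b = rhs_at_step(k)
        solutions.append(lu.solve(b))
    return solutions

# toy right-hand-side supplier, just for illustration
x = np.linspace(0.0, 1.0, n)
sols = advance(lambda k: np.sin(np.pi * x) * (k + 1), n_steps=10)
print(len(sols), sols[-1].shape)
```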
to compute the errors , we use the discrete and errors defined as , respectively , where is the exact solution of the problem ( [ eq : main])-([bvs ] ) , is the approximate solution ( [ eq : approxsol ] ) at , and . also , the convergence rates in space and time are respectively computed by , where is the error , stands for the dimension of the basis and is the time - step size . however , as is common in the literature , we will show the spectral convergence of the proposed method by logarithmically scaled error plots .

[ exa:1 sin(pi*x)sin(pi*y)t^2 ] consider the problem ( [ eq : main ] ) with and the exact solution . table [ tab : spatialrateexample1 ] shows the convergence of the method for , for some fractional orders . also , figure [ fig : ex1 ] demonstrates the logarithmic error plot for . table [ tab : timerate ] illustrates the temporal rate of convergence with .

[ table caption : convergence of the spectral method in terms of some norms with and . ]

[ exa : ex2nosourse ] to see that the method also works in the absence of a source term , consider the problem ( [ eq : main])-([bvs ] ) with the initial condition and no source term . the errors and the rates of convergence are provided in table [ tab : spatialexample2 ] , where the solution with is treated as the exact solution . the errors are reported at . we have used the eight - point gauss - legendre quadrature rule to perform the integrals ( [ eq : integralsoff ] ) on the right - hand side of the linear system ( [ eq : tensorprodsys ] ) . the numerical results confirm the convergence and the accuracy of the method .

in this paper , some new aspects of dual bernstein polynomials have been discussed . suitable compact combinations of these polynomials have been derived for developing a dual - petrov - galerkin variational formulation for the numerical simulation of the two - dimensional subdiffusion equation . it was shown that the method leads to sparse linear systems . illustrative numerical examples have been provided to show the accuracy of the method . it is important to note that the transformation matrices and the operational matrix for differentiation of dual bernstein polynomials obtained in this work can be used similarly for developing bernstein - based dual - petrov - galerkin methods for other fractional partial differential equations on bounded domains .

gao , g.h . , sun , h.w . , sun , z.z . : stability and convergence of finite difference schemes for a class of time - fractional sub - diffusion equations based on certain superconvergence . j. comput . phys . 280 , 510 - 528 ( 2015 )
jani , m. , babolian , e. , javadi , s. , bhatta , d. : banded operational matrices for bernstein polynomials and application to the fractional advection - dispersion equation . numer . algor . , doi : 10.1007/s11075-016-0229-1
maleknejad , k. , basirat , b. , hashemizadeh , e. : a bernstein operational matrix approach for solving a system of high order linear volterra - fredholm integro - differential equations . modelling 55 , 1363 - 1372 ( 2012 )
zhao , y. , zhang , y. , shi , d. , liu , f. , turner , i. : superconvergence analysis of nonconforming finite element method for two - dimensional time fractional diffusion equations . appl . lett . 59 , 38 - 47 ( 2016 )

| in this paper , we develop a bernstein dual - petrov - galerkin method for the numerical simulation of the two - dimensional subdiffusion equation . the equation is first discretized in time using the l1 approximation .
then , a spectral discretization is applied by introducing suitable combinations of dual bernstein polynomials as the test functions and the bernstein polynomials as the trial ones . we derive the exact sparse operational matrix of differentiation for the dual bernstein basis , which provides a matrix - based approach for the spatial discretization . it is also shown that the proposed method leads to banded linear systems that can be solved efficiently . finally , the stability and convergence of the proposed method are discussed theoretically . some numerical examples are provided to support the theoretical claims and to show the accuracy and efficiency of the method . |