synchronization of nonlinear oscillators is widely studied in physical and biological systems for underlying interests ranging from novel communications strategies to understanding how large and small neural assemblies efficiently and sensitively achieve desired functional goals .the analysis of biological systems may , beyond their intrinsic interest , often provide physicists with novel dynamical systems possessing interesting properties in their component oscillators or in the nature of the interconnections .we have presented our analysis of the experimental synchronization of two biological neurons as the electrical coupling between them is changed in sign and magnitude .subsequent to that analysis we have developed computer simulations of the dynamics of the neurons which are based on conductance based hodgkin - huxley ( hh ) neuron models .these numerical simulations quantitatively reproduced the observations in the laboratory .the study of isolated neurons from the stomatogastric ganglion ( stg ) of the california spiny lobster _ panulirus interruptus _ using tools of nonlinear time series analysis shows that the number of active degrees of freedom in their membrane potential oscillations typically ranges from three to five .the appearance of low dimensional dynamics in this biological system led us to develop models of its action potential activity which are substantially simpler than the hh models we and others have used to describe these systems .we adopted the framework established by hindmarsh and rose ( hr ) in which the complicated current voltage relationships of the conductance based models are replaced by polynomials in the dynamical variables , and the coefficients in the polynomials are estimated by analyzing the observed current / voltage curves for the neurons under study .building on biological experiments and on numerical analysis of models for the oscillations of isolated neurons , we have constructed low dimensional analog electronic neurons ( ens ) whose properties are designed to emulate the membrane voltage characteristics of the individual neurons . using these simple, low dimensional ens we report here their synchronization and regularization properties , first when they are coupled electrically as the sign and magnitude of the coupling is varied , and then when they are coupled by excitatory and inhibitory chemical synapses .we have also studied the behavior of an hybrid system , i.e. , one en and one biological neuron coupled electrically . as our models were developed on data acquired from biological neurons in synaptic isolation , the results we present here on pairs of interacting ens and hybrid systems serve to provide further confirmation of the properties of those model neurons , numerical and analog .we have studied and built three dimensional and four dimensional models of hr type having the form where and are the constants which embody the underlying current and conductance based dynamics in this polynomial representation of the neural dynamics . is the membrane voltage in the model , represents a `` fast '' current in the ion dynamics , and we choose , so is a `` slow '' current . 
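The symbols and coefficient values of the polynomial model above did not survive this copy. For orientation only, a minimal numerical sketch of a four-dimensional Hindmarsh-Rose-type system of the kind described is given below; the fast-subsystem coefficients are the standard textbook Hindmarsh-Rose values, and the fourth (slower) equation and all remaining constants are illustrative placeholders rather than the values used in the EN circuits.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 4D Hindmarsh-Rose-type model.  The classic 3D HR coefficients
# (3, 1, 5, 4, 1.6) are used for the fast subsystem; the fourth, even slower
# variable w, its coupling into the fast subsystem, and the time constants are
# placeholders, since the paper's exact values were lost in transcription.
I_DC = 3.0      # external DC current (a main control parameter in the text)
R_Z  = 0.006    # time scale of the slow current z
R_W  = 0.002    # time scale of the slower process w (roughly 3x slower than bursts)

def hr4(t, state):
    x, y, z, w = state                       # membrane voltage, fast, slow, slower
    dx = y + 3.0 * x**2 - x**3 - z + I_DC
    dy = 1.0 - 5.0 * x**2 - y - 0.1 * w      # w weakly modulates burst length
    dz = R_Z * (4.0 * (x + 1.6) - z)
    dw = R_W * (3.0 * (y + 1.0) - w)
    return [dx, dy, dz, dw]

sol = solve_ivp(hr4, (0.0, 5000.0), [-1.0, 0.0, 2.5, 0.0],
                t_eval=np.linspace(0.0, 5000.0, 50000), rtol=1e-8)
x_trace = sol.y[0]                           # spiking-bursting voltage trace
```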
taken alone the first three equations of the model can reproduce several modes of spiking - bursting activity observed in stg cells .the first three equations were used in analog realization for our earlier experiments with 3d ens the equation for represents an even slower dynamical process ( ) , and it is included because a slow process such as the calcium exchange between intracellular stores and the cytoplasm was found to be required in hodgkin - huxley modeling to fully reproduce the observed chaotic oscillations of stg neurons .both the three dimensional and four dimensional models have regions of chaotic behavior , but the four dimensional model has much larger regions in parameter space where chaos occurs , presumably for many of the same reasons the calcium dynamics gives rise to chaos in hh modeling .the calcium dynamics is an additional degree of freedom with a time constant three times slower than the characteristic bursting times . in our analog circuit realization of the en model we used and .the implementation of these constants in analog circuits always has about 5% tolerance in the components .the main parameters we used in controlling the modes of spiking and bursting activity of the model are the dc external current and the time constants and of the slow variables .figure [ 4dphases ] shows a chaotic time series of the four variables using the parameters above .note how modulates the length of the bursts in .each local minimum in the global oscillations of coincides with a short burst period .the complexity achieved by the addition of can be observed in the projections of space to various three - dimensional spaces , and respectively , as shown in figure [ 4dphases ] .table 1 presents the lyapunov exponents calculated from the vector field of equation ( [ eq : hrmodel ] ) for the 3d and 4d ens .a positive lyapunov exponent is present in both models , indicating conclusively that they are oscillating chaotically . from this spectrum of lyapunov exponents, we can evaluate the lyapunov dimension which is an estimate of the fractal dimension of the strange attractor for the ens .the lyapunov dimension is defined by finding that number of lyapunov exponents satisfying then is defined as for each en is displayed in the last column table 1 . [ cols="^,^,^,^,^,^",options="header " , ] table 1 : lyapunov exponents and lyapunov dimension calculated from the vector field ( equation ( [ eq : hrmodel ] ) ) for the 3d and 4d electronic neuron models . as a reminder to the reader : the sum of all lyapunov exponents must be negative , and this is so for our results . also , one lyapunov exponent must be 0 as we are dealing with a differential equation .we designed and built an analog electronic circuit which integrates equation ( [ eq : hrmodel ] ) .we chose to build an analog device instead of using numerical integration of the mathematical model on the cpu of a pc or on a dsp board because digital integration of these equations is a slow procedure associated with the three different time scales in the model .furthermore , a digital version of an en requires digital to analog and analog to digital converters to connect the model to biological cells .analog circuits are small , simple and inexpensive devices ; it is easy to connect them to a biological cell , as we discuss below ( see also ) . 
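For reference, the Lyapunov-dimension definition used for Table 1 above lost its symbols in this copy; it is the standard Kaplan-Yorke estimate, restated here with the exponents ordered from largest to smallest.

```latex
% Kaplan--Yorke (Lyapunov) dimension: with exponents ordered
% \lambda_1 \ge \lambda_2 \ge \dots, take the largest K for which the
% partial sum is still non-negative, then interpolate into the next exponent.
\sum_{i=1}^{K} \lambda_i \ge 0, \qquad
D_L \;=\; K + \frac{\sum_{i=1}^{K} \lambda_i}{\left|\lambda_{K+1}\right|}
```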
in a practical sensenearly an unlimited number of them can work together in real - time experiments .finally , looking ahead to the construction of real - time networks of large number of these neurons , analog implementation is a necessity .the block diagram of the analog circuit we use to represent the three dimensional and the four dimensional ens is shown in figure ( [ 4dcircuit ] ) .it includes four integrators indicated by , two multipliers , two adders , and two inverters .we used off - the - shelf general purpose operational amplifiers , national instruments model tl082 , to build the integrators , adder and inverter and used analog devices model ad633 as analog multipliers .the integrator at the top of the diagram receives all components of , e.g. , , etc .it has an additional input ( called ) which can be used for connections with other circuits or neurons .the integrators invert the sign of their input , so the output signal will be multiplied by a time constant chosen to make the en oscillate on the same time scale as the biological neurons . the signal is used to generate the nonlinear functions and and these values go to the second and third integrators .similarly , the other integrators generate voltages proportional to , , and .a renormalization was made in the rest of the time constants in the circuit to make .note that this rescaling is responsible for the different amplitudes in the numerical ( figure [ 4dphases ] ) and analog ( figures [ free ] , [ posel ] , [ negel ] , [ exchem ] , [ inchem ] ) experiments .this circuit design allows us to easily switch from a three dimensional to a four dimensional model of the neuron .we can connect or disconnect one wire , indicated as point a in figure [ 4dcircuit ] , to enable or disable the circuit block shown in the rectangle with a dashed outline . in equation([eq: hrmodel ] ) this corresponds to setting in the equation .the block indicated as na in figure [ 4dcircuit ] is an adjustable nonlinear amplifier .we use it to rescale and change the shape of the output signal .it can shrink or stretch different parts of the waveform , change the amplitude and move the trace as a whole up or down .this shape adjustment is particularly important in experiments with groups of biological and electronic neurons interconnected with each other .living neurons , even taken from the same biological structure , may generate considerably different waveforms .the relative size of spikes and the interburst hyperpolarization is variable from cell to cell . in our circuitswe can precisely adjust the waveform of the en to be very close to that of each biological neuron in our experiments .another reason to use circuits with variable waveforms is that it opens up the possibility of studying how the action potential waveforms affect the interactions among the neurons , electronic and biological .indeed , the ability to vary the details of the waveforms provides an interesting handle on design of biometric circuitry for a variety of applications .in living nervous systems one finds three general types of synaptic connections among neurons : ohmic electrical connections ( also called gap junctions ) and two types of chemical connections , excitatory and inhibitory . 
for our studies of the interconnections among ens and among ens and biological neurons , we built electronic circuits to emulate excitatory and inhibitory synaptic connections as well as the ohmic electrical connections .the stg neural circuits are dominated by inhibitory interconnections and by ohmic electrical connections .we now describe how we implemented each , and then we turn to the results of our synchronization experiments with these network connections .we implemented an electrical synapse between the ens by injecting into one of the neurons ( ) a current proportional to the voltage difference between the two membrane potentials of the ens and into the other neuron ( ) injecting the same current but with the opposite strength .the current into is + while we chose the dimensionless synaptic strength in the range . ] of figure [ mapel ] we show examples of the time series for the membrane potentials of the two neurons in figures [ posel ] and [ negel ] .* when the two neurons are uncoupled and display independent chaotic oscillations as shown in figure [ free ] .* for small , positive coupling , regions of nearly independent chaotic spiking - bursting activity are observed as well as some regions of synchronized bursting activity as shown in figure [ posel](b ) where we set .there is a small range of in which intermittent anti - phase bursting behavior can be found .the burst length in this case is kept nearly regular from burst to burst as shown in figure [ posel](a ) . * for the behavior is still chaotic for the two neurons but most of the bursts are synchronized as shown in figure [ posel](c ) where we set . * from the bursting activity becomes regular going from a region in which there is partial synchronization ( spikes not synchronized ) , as shown in figure [ posel](d ) where we set , to a region of total synchronization ( bursts and spikes synchronized ) , shown in figure [ posel](e ) where we set . * from there is total synchronization in the spiking - bursting activity , and the oscillations are chaotic as shown in figure [ posel](f ) where we set .+ results for * for negative coupling , the oscillations are predominantly chaotic and the hyperpolarizing regions , where the membrane voltage is quite negative , of the signals are all in anti - phase .the average burst length decreases as the coupling becomes stronger as shown in figure [ negel ] . for a small range of very long burstswere observed as shown in figure [ negel](a ) . and provide quantitative measures of the synchronization between two ens . in our report on the experimental work with two biological cells ,the results for and can be seen in figure 5 of that paper .note that , as in the case of coupled biological neurons , we have here a bifurcation between positive and negative electrical coupling . in the experimental work on electrically coupled biological neurons a value for the external coupling serves to null out the natural coupling of about that amount , so the figures here and in the earlier paper are to be compared by sliding here to there . both in the biological and electronic experiments , the sharp phase transition from very small for positive coupling to large , nearly constant values is associated with the rather rapid change from nearly and then fully synchronous behavior for positive couplings to out - of - phase oscillations for negative couplings . 
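As a concrete illustration of the electrical (gap-junction) coupling just described, the sketch below injects equal and opposite currents, proportional to the membrane-voltage difference, into two copies of the single-cell model sketched earlier (hr4). The coupling strength g_el stands in for the dimensionless strength swept in figure [mapel]; like the earlier sketch, this is an assumption-laden stand-in rather than the analog circuit itself.

```python
from scipy.integrate import solve_ivp

# Two model neurons coupled by an ohmic electrical synapse: neuron 1 receives
# +g(V2 - V1) and neuron 2 receives the opposite current, as described above.
# hr4() is the illustrative single-cell model sketched earlier.
def coupled_electrical(t, state, g_el):
    n1, n2 = state[:4], state[4:]
    i_12 = g_el * (n2[0] - n1[0])        # current injected into neuron 1
    d1, d2 = hr4(t, n1), hr4(t, n2)
    d1[0] += i_12                        # +g(V2 - V1) into cell 1
    d2[0] -= i_12                        # -g(V2 - V1) into cell 2
    return d1 + d2

# example sweep point: g_el = 0.2, starting from slightly different states
y0 = [-1.0, 0.0, 2.5, 0.0, -1.1, 0.1, 2.4, 0.0]
sol = solve_ivp(coupled_electrical, (0.0, 5000.0), y0, args=(0.2,), rtol=1e-8)
```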
the and curves in the paper on coupled biological neurons shows far fewer points and consequently less detail that our curves for coupled 4d ens .clearly this is because of the resolution in the biological experiments and the difficulty in performing experiments at such closely chosen values of . at this timethe details of behavior revealed in the present experiments on ens have not been verified in the biological setting .one should view our figure [ mapel ] and figure 5 of as in excellent qualitative agreement .we have observed the behavior of two four dimensional ens coupled with _identical _ chemical synapses .two electrical versions of chemical synapses were built with identical parameters and then used to coupled two four dimensional ens . in the equations ,equation ( [ chemsynapse ] ) , we integrate to represent the chemical synapse , all parameters were set equal in the two connecting circuits .we then varied in each chemical synapse over the range for an excitatory synapse , namely , and over the range for an inhibitory synapse , namely , .the other parameters were held fixed at , and . in figure [ mapchem ]we collect the statistical results , expressed in our usual quantities and , for both excitatory and inhibitory synaptic connections .negative values of represent inhibitory connections .this , perhaps apparently peculiar , method of presentation allows us to see immediately the relationship between excitatory and inhibitory interconnections . as earlier with electrical couplings we provide a verbal description of each region of behavior over the whole range of .we show examples of the time series for the membrane potential of the two neurons in figure [ exchem ] and figure [ inchem ] . when coupled with implementations of excitatory chemicalsynapses the ens displayed the following behaviors : * when the two neurons are uncoupled and display independent chaotic oscillations as shown in figure [ free ] . * for positive coupling a transition from the chaotic behavior to regular spiking / burstingis observed . for small coupling the independent chaotic spiking / bursting activity of the uncoupled neuronsis replaced by a behavior in which most of the bursts are synchronized , but the oscillations are still chaotic as shown in figure [ exchem](a ) for .as is increased all the bursts become synchronized , and the activity becomes periodic as shown in figure [ exchem](b ) for . * for the bursts remain synchronized and get longer , but there are no longer any spikes during the bursts as shown in figure [ exchem](c ) for .finally we report on our experiments with an electronic version of an inhibitory chemical synapse .this inhibitory synaptic coupling occurs in the lobster pyloric cpg as well as many other cpgs , and we have suggested that inhibitory chemical coupling will lead to regularization of the chaotic oscillations of the individual neurons .* for small the oscillations are still chaotic , but all of the hyperpolarizing regions of the membrane voltages are in anti - phase as shown in figure [ inchem](a ) for .* when the oscillations become periodic , and all the hyperpolarizing regions are in out - of - phase as shown in figure [ inchem](b ) .* for the out - of - phase behavior of the hyperpolarizing regions remains , but the oscillations are chaotic again as shown in figure [ inchem](c ) for . * for the oscillations regularize again , and the behavior is periodic with out - of - phase bursting as shown in figure [ inchem](d ) for and in figure [ inchem](e ) for . 
* for the oscillations are chaotic and long out - of - phase bursts are observed as shown in figure [ inchem](f ) for .the only experiments we know which relate to these observations on two chemically coupled ens are not a precise match , but bear noting .r. elson has isolated a pair of lp and pd neurons from the lobster pyloric cpg ; these have mutual inhibitory coupling .elson varied the strength of the chemical coupling using neuromodulators and making measurements at four values of over a nominal rage of 20 ns to 60 ns .he observed only the behavior reported in the penultimate item of our experiments on inhibitory coupling .unfortunately , control of the identity of the mutual inhibitory couplings was not possible , nor was it possible for us to directly compare the calibration of elson s indication of the magnitude of with our own choices in using ens .to date then , we have no direct laboratory evidence on synchronization of biological neurons mutually coupled with chemical synapses .this is in contrast to our observations on electrically coupled biological neurons .this represents an interesting opportunity for biological experiments which may be directly compared to our results using ens .we have previously reported experiments on replacing the ab neuron from the pyloric cpg in its interaction with an isolated pair of pd neurons with a * three dimensional * en .for completeness in light of the work reported in this paper , we carried out an experiment in which one of our four dimensional neurons was coupled bidirectionally to one of the pd neurons in the ab / pd pacemaker group of the pyloric cpg .the full description of the methods used in the biological preparation will appear elsewhere , but here we quite briefly summarize those points important to the main thrusts of this article .these experiments were carried out on one of the two pyloric dilator ( pd ) neurons from the pyloric central pattern generator ( cpg ) of the lobster stomatogastric ganglion ( stg ) .the stg of the california spiny lobster , _ panulirus interruptus , _ was removed using standard procedures and pinned out in a dish lined with silicone elastomer and filled with normal lobster saline .the stg was isolated from its associated anterior ganglia , which provide activating inputs , by cutting the stomatogastric nerve .two glass microelectrodes were inserted in the soma of the pd neuron : one for intracellular voltage recording and another one for current injection .the voltage signals were digitized at 10000 samples / sec .the two pd neurons remained coupled to each other and to the autonomously bursting interneuron ( ab ) by their natural electrical synapses , but were isolated from the rest of the cpg by blocking chemical input synapses with picrotoxin .the artificial electrical coupling was provided by injecting in the en and in the pd opposite currents .more details of the experimental setup can be found in .the membrane voltage of the en was reshaped to make its amplitude ratio in spiking / bursting mode , its total amplitude and its voltage offset similar to those of the pd neuron . only electrical coupling , positive and negative ,is reported here .we connected the neurons with the analog electrical synapse and observed their spiking - bursting behavior as shown in figure [ hybrid ] . 
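For the chemical coupling used in the two-EN experiments above (equation [chemsynapse]), the parameter names and values were also lost in this copy. Below is a minimal first-order kinetic model of a graded chemical synapse of the kind commonly used in STG modeling, offered as a hedged sketch rather than the authors' circuit: setting the reversal potential v_rev above (below) the cell's operating voltage range makes the connection excitatory (inhibitory), and g_syn plays the role of the strength swept in figure [mapchem].

```python
import numpy as np

# First-order kinetic model of a graded chemical synapse.  All constants below
# are placeholders; only the structure (an activation variable driven by the
# presynaptic voltage, gating a current toward a reversal potential) is the
# standard form the text refers to.
TAU_S   = 20.0     # synaptic activation time constant
V_TH    = -0.5     # presynaptic half-activation voltage
V_SLOPE = 0.2      # activation slope

def s_inf(v_pre):
    """Steady-state synaptic activation as a sigmoid of presynaptic voltage."""
    return 1.0 / (1.0 + np.exp(-(v_pre - V_TH) / V_SLOPE))

def chemical_synapse(s, v_pre, v_post, g_syn, v_rev):
    """Return (ds/dt, postsynaptic current) for one graded chemical synapse."""
    ds_dt = (s_inf(v_pre) - s) / TAU_S
    i_syn = g_syn * s * (v_rev - v_post)   # excitatory if v_rev is depolarized
    return ds_dt, i_syn
```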
when uncoupled , the neurons had independent spiking / bursting activity as shown in figure [ hybrid](b ) .for large enough negative coupling the neurons are synchronized and fire out - of - phase as shown in figure [ hybrid](a ) . for positive coupling the neurons show synchronized bursting activity as shown in figure [ hybrid](c ) . for this value of bursts are synchronized but not the spikes .this result is in agreement with the experiments made with a pair of electrically coupled ens , as we discussed above , as well as for a pair of living stg neurons .the ens described in this paper are simple analog circuits which integrate four dimensional differential equations representing fast and slow subcellular processes that give rise to the characteristic spiking and spiking - bursting behavior of cpg neurons .the single neurons can be easily set into a chaotic regime that reproduces the irregular firing patterns observed in biological neurons when isolated from the rest of the cpg .this study comprises : ( a ) two electrically coupled ens and ( b ) two ens connected with excitatory and inhibitory chemical synapses .these two types of connections exist in almost all known cpgs .the range of observations summarized in figures [ mapel ] and [ mapchem ] shows the rich behavior and complexity of these minimal network configurations .it indicates how small changes in the coupling conductance can drive the cells into completely different regimes .in particular , some of our experiments predict the appearance of chaotic out - of - phase synchronization for different coupling configurations .these results are displayed in figures [ negel]c and [ inchem]f . in general , the experiments with the ens contribute directly to our understanding of the origin of regularization of individually chaotic neurons through cooperative activity .how complicated should one require a model neuron to be ?in our view the answer depends on the neural function one wishes to represent .the analysis of the electrical activity of isolated neurons from the lobster pyloric cpg indicates that the number of active degrees of freedom is not very large , ranging from three to five in various environments , and this suggests a very simple representation in terms of dynamical equations .our analysis of much richer hodgkin - huxley models of these individual neural oscillators also indicates that in the regime of biological operation , the number of active degrees of freedom is equally small . on this basiswe developed the hindmarsh - rose type models of these neurons both in numerical simulation and in analog electrical circuitry .this paper has moved that inquiry about the complexity of representation for the components of a biologically realistic neural network to another level .here we have investigated whether the simplified neural models , when coupled together in small networks , but in biologically realistic manners , can reproduce our observations biological neurons alone . 
the striking result of the observations presented here , when the experimental setup matches that of the biological networks , is that the observed behavior of the ens also matches .further , using our ens , we are able to make distinct predictions about the behavior of biological or hybrid ( biological and en ) networks in settings not yet investigated .our experiments on coupled biological neurons and ens provide further ground for testing the validity of numerical and electronic models of individual neural behavior as well as presenting interesting new examples of coupled nonlinear oscillators .hybrid circuits with biological and electronic neurons coupled together are a powerful mechanism to understand the modes of operation of cpgs .the hybrid system constitutes an easy way to change the connectivity and global topology of the cpg .the roles of intrinsic dynamics of the neurons and the synaptic properties of the network in rhythm generation can be easily studied with these hybrid configurations ) .there are previous efforts studying electronic neurons alone and in conjunction with biological neurons .an early example is the work of yarom where a network of four oscillators , realized as an analog circuit , was interfaced with an olivary neuron in a slice preparation .yarom studied the response of the olivary neuron when it received oscillating electrical input from the network .there was no feedback from the biological neuron to the network he constructed .le masson , et al developed a digital version of a neuron comprising a hodgkin - huxley ( hh ) model of various pyloric cpg neurons with three compartments and eight different ion channels which ran on a dsp board located on the bus of a personal computer .they connected this model into a variety of different configurations of subcircuits of the pyloric cpg replacing at various times the lp , a pd or a py neuron .using this ` hybrid ' setup they verified that many aspects of the pyloric rhythm are accurately reproduced when their dsp based neuron replaces one of the biological neurons in their system . in subsequent work , this group has developed vlsi devices for integrating the hh models and has utilized them in mixed circuits ( ens and biological neurons ) , replacing the dsp version of the conductance models in their biological preparations .the complexity of these ens has not been needed in our modeling nor in the further experiments on their interaction with each other as reported here .we have not found any reports in the literature on the mutual interaction of these analog vlsi neural circuits .there are two interesting directions to which the results reported here may point : * \(1 ) biologically realistic neural networks of much greater size than the elementary ones investigated here may be efficiently investigated numerically or in analog circuitry using the realistic , but simple hr type models .the integration of the model equations is no challenge to easily available computing power and large networks should be amenable to investigation and analysis . * \(2 ) the networks investigated here are subcircuits of a biological circuit of about fifteen neurons which has the functional role of a control system : commands are presented from other ganglia of the lobster and this pyloric circuit must express voltage activity to the muscles to operate a pump for shredded food passing from the stomach to the digestive system .many other functions are asked of biological neural networks . 
using the full richness of hh models forthe component neurons may seem attractive at one level , but the results presented here suggest that many interesting questions may be asked of those networks using the simplified component neurons studied here .r.d . pinto was supported by the brazilian agency fundao de amparo pesquisa do estado de so paulo - fapesp .pv acknowledges support from mec . partial support for this work came from the u.s department of energy , office of science , under grants de - fg03 - 90er14138 and de - fg03 - 96er14592 .we also acknowledge the many conversations we have had with ramon huerta , rob elson and allen selverston on the dynamics of cpg neurons .renaud - le masson , s. , e. marder , g. le masson , and l. abbott , `` hybrid circuits of interacting computer models and biological neurons '' , _ neural information processing systems 5 _ , pp .813 - 819 san mateo , ca .morgan kaufmann publishers , ( 1993 ) .le masson , g. , s. le masson , and m. moulins , `` from conductances to neural network properties : analysis of simple circuits using the hybrid network method '' , _ prog .biophys . molec .biol . _ * 64 * , 201 - 220 ( 1995 ) .laflaquiere , a. , le masson , s. , dom , j.p . , and le masson , g. , `` accurate analog vlsi model of calcium - dependent bursting neurons '' , ( 1997 ieee international conference on neural networks .proceedings ( cat .no.97ch36109 ) , proceedings of international conference on neural networks ( icnn97 ) , houston , tx , usa , 9 - 12 june 1997 . )new york , ny , usa : ieee , 1997 . p.882 - 7 vol.2 . 4 vol .xlvi+2570 pp . ; dupeyron , d. , le masson , s. , deval , y. , le masson , g. , dom , j .-, `` a bicmos implementation of the hodgkin - huxley formalism , '' ( proceedings of the fifth international conference on microelectronics for neural networks and fuzzy systems .microneuro96 , proceedings of fifth international conference on microelectronics for neural networks , lausanne , switzerland , 12 - 14 feb .los alamitos , ca , usa : ieee comput .soc . press , 1996 .p.311 - 16 .x+370 pp ; le masson , s. , deval , y. , le masson , g. , tomas , j. , dupeyron , d. , dom , j.p , `` a bicmos asic for modeling biological neurons '' , ( proceedings of the fourth international conference on microelectronics for neural networks and fuzzy systems , proceedings of the fourth international conference on microelectronics for neural networks and fuzzy systems , turin , italy , 26 - 28 sept .los alamitos , ca , usa : ieee comput .soc . press , 1994 .p.202 - 6 .xi+457 pp .an excellent introduction to the issues involved , the approximations employed , the methods of implementation as well as to the results of some experiments can be found in the dissertation of g. le masson available online at .
We report on experimental studies of synchronization phenomena in a pair of analog electronic neurons (ENs). The ENs were designed to reproduce the observed membrane voltage oscillations of isolated biological neurons from the stomatogastric ganglion of the California spiny lobster _Panulirus interruptus_. The ENs are simple analog circuits which integrate four-dimensional differential equations representing fast and slow subcellular mechanisms that produce the characteristic regular/chaotic spiking-bursting behavior of these cells. In this paper we study their dynamical behavior as we couple them in the same configurations used for their counterpart biological neurons. The interconnections we use for these neural oscillators are both direct electrical connections and excitatory and inhibitory chemical connections, each realized by analog circuitry and suggested by biological examples. We provide quantitative evidence that the ENs and the biological neurons behave similarly when coupled in the same manner: each displays well-defined bifurcations in its mutual synchronization and regularization. We also report briefly on an experiment coupling biological neurons to a four-dimensional EN, which provides further ground for testing the validity of our numerical and electronic models of individual neural behavior. Our experiments as a whole present interesting new examples of regularization and synchronization in coupled nonlinear oscillators.
humans can effortlessly and rapidly recognize surrounding objects , despite the tremendous variations in the projection of each object on the retina caused by various transformations such as changes in object position , size , pose , illumination condition and background context .this invariant recognition is presumably handled through hierarchical processing in the so - called ventral pathway .such hierarchical processing starts in v1 layers , which extract simple features such as bars and edges in different orientations , continues in intermediate layers such as v2 and v4 , which are responsive to more complex features , and culminates in the inferior temporal cortex ( it ) , where the neurons are selective to object parts or whole objects . by moving from the lower layers to the higher layers , the feature complexity , receptive field size and transformation invariance increase , in such a way that the it neurons can invariantly represent the objects in a linearly separable manner .another amazing feature of the primates visual system is its high processing speed . the first wave of image - driven neuronal responses in it appears around 100 ms after the stimulus onset .recordings from monkey it cortex have demonstrated that the first spikes ( over a short time window of 12.5 ms ) , about 100 ms after the image presentation , carry accurate information about the nature of the visual stimulus .hence , ultra - rapid object recognition is presumably performed in a feedforward manner . moreover , although there exist various intra- and inter - area feedback connections in the visual cortex , some neurophysiological and theoretical studies have also suggested that the feedforward information is usually sufficient for invariant object categorization . appealed by the impressive speed and performance of the primates visual system ,computer vision scientists have long tried to `` copy '' it .so far , it is mostly the architecture of the visual system that has been mimicked .for instance , using hierarchical feedforward networks with restricted receptive fields , like in the brain , has been proven useful . in comparison , the way that biological visual systems learn the appropriate features has attracted much less attention .all the above - mentioned approaches somehow use non biologically plausible learning rules .yet the ability of the visual cortex to wire itself , mostly in an unsupervised manner , is remarkable . here, we propose that adding bio - inspired learning to bio - inspired architectures could improve the models behavior . to this end , we focused on a particular form of synaptic plasticity known as spike timing - dependent plasticity ( stdp ) , which has been observed in the mamalian visual cortex .briefly , stdp reinforces the connections with afferents that significantly contributed to make a neuron fire , while it depresses the others .a recent psychophysical study provided some indirect evidence for this form of plasticity in the human visual cortex . 
in an earlier study , it is shown that a combination of a temporal coding scheme where in the entry layer of a spiking neural network the most strongly activated neurons fire first with stdp leads to a situation where neurons in higher visual areas will gradually become selective to complex visual features in an unsupervised manner .these features are both salient and consistently present in the inputs .furthermore , as learning progresses , the neurons responses rapidly accelerates .these responses can then be fed to a classifier to do a categorization task . in this study , we show that such an approach strongly outperforms state - of - the - art computer vision algorithms on view - invariant object recognition benchmark tasks including 3d - object and eth-80 datasets .these datasets contain natural and unsegmented images , where objects have large variations in scale , viewpoint , and tilt , which makes their recognition hard , and probably out of reach for most of the other bio - inspired models .yet our algorithm generalizes surprisingly well , even when `` simple classifiers '' are used , because stdp naturally extracts features that are class specific .this point was further confirmed using mutual information and representational dissimilarity matrix ( rdm ) .moreover , the distribution of objects in the obtained feature space was analyzed using hierarchical clustering , and objects of the same category tended to cluster together .the algorithm we used here is a scaled - up version of the one presented in .essentially , many more c2 features and iterations were used .our code is available upon request .we used a five - layer hierarchical network , largely inspired by the hmax model ( see fig .[ model_figure ] ) .specifically , we alternated simple cells that gain selectivity through a sum operation , and complex cells that gain shift and scale invariance through a max operation .however , our network uses spiking neurons and operates in the temporal domain : when presented with an image , the first layer s cells , detect oriented edges and the more strongly a cell is stimulated the earlier it fires .these spikes are then propagated asynchronously through the feedforward network .we only compute the first spike fired by each neuron ( if any ) , which leads to efficient implementations .the justification for this is that later spikes are probably not used in ultra - rapid visual categorization tasks in primates .we used restricted receptive fields and a weight sharing mechanism ( i.e. convolutional network ) . in our model ,images are presented sequentially and the resulting spike waves are propagated through to the layer , where stdp is used to extract diagnostic features . more specifically , the first layer s cells detect bars and edges using gabor filters . herewe used convolutional kernels corresponding to gabor filters with the wavelength of 5 and four different preferred orientations ( ) .these filters are applied to five scaled versions of the original ) .hence , for each scaled version of the input image we have four maps ( one for each orientation ) , and overall , there are 4 20 maps of cells ( see the maps of fig .[ model_figure ] ) . evidently , the cells of larger scales detect edges with higher spatial frequencies while the smaller scales extract edges with lower spatial frequencies . 
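A minimal sketch of the S1 stage just described is given below: four Gabor orientations with a wavelength of about 5 pixels applied to five scaled copies of a grayscale image, yielding 4 x 5 = 20 maps. The kernel size, Gaussian envelope, and the particular scale factors are assumptions; only the wavelength and the number of orientations and scales come from the text.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import zoom

ORIENTATIONS = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
SCALES       = [1.0, 0.71, 0.5, 0.35, 0.25]   # placeholder scale factors
WAVELENGTH   = 5.0

def gabor_kernel(theta, lam=WAVELENGTH, sigma=2.0, size=7):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                        # zero-mean, edge-selective kernel

def s1_maps(image):
    """Return {(scale, orientation): |Gabor response|} for a grayscale image."""
    maps = {}
    for s in SCALES:
        scaled = zoom(np.asarray(image, dtype=float), s, order=1)
        for theta in ORIENTATIONS:
            maps[(s, theta)] = np.abs(convolve2d(scaled, gabor_kernel(theta),
                                                 mode='same'))
    return maps
```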
indeed , instead of changing the size and spatial frequency of gabor filters, we are changing the size of input image .this is a way to implement scale invariance at a low computational cost .each cell emits a spike with a latency that is inversely proportional to the absolute value of the convolution .thus , the more strongly a cell is stimulated the earlier it fires ( intensity - to - latency conversion , as observed experimentally ) . to increase the sparsity at a given scale and location ( corresponding to one cortical column ) , only the spike corresponding to the best matching orientationis propagated ( i.e. a winner - take - all inhibition is employed ) . in other word , for each position in the four orientation maps of a given scale , the cell with highest convolution value emits a spike and prevents the other three cells from firing . for each map , there is a corresponding map .each cell propagates the first spike emitted by the cells in a square neighborhood of the map which corresponds to one specific orientation and one scale ( see the maps of fig . [ model_figure ] ) . cells thus execute a maximum operation over the cells with the same preferred feature across a portion of the visual field , which is a biologically plausible way to gain local shift invariance .the overlap between the afferents of two adjacent cells is just one row , hence a subsampling over the maps is done by the layers as well .therefore , each map has fewer cells than the corresponding map . features correspond to intermediate - complexity visual features which are optimum for object classification .each feature has a prototype cell ( specified by a - synaptic weight matrix ) , which is a weighted combination of bars ( cells ) with different orientations in a square neighborhood .each prototype cell is retinotopically duplicated in the five scale maps ( i.e. weight - sharing is used ) . within those maps, the cells can integrate spikes only from the four maps of their corresponding processing scales .this way , a given feature is simultaneously explored in all positions and scales ( see maps of fig .[ model_figure ] with same feature prototype but in different processing scales specified by different colors ) .indeed , duplicated cells in all positions of all scale maps integrate the spike train in parallel and compete with each other . the first duplicate reaching its threshold , if any , is the winner .the winner fires and prevents the other duplicated cells in all other positions and scales from firing through a winner - take - all inhibition mechanism .then , for each prototype , the winner cell triggers the unsupervised stdp rule and its weight matrix is updated .the changes in its weights are applied over all other duplicate cells in different positions and scales ( weight sharing mechanism ) .this allows the system to learn frequent patterns , independently of their position and size in the training images .the learning process begins with features initialized by random numbers drawn from a normal distribution with mean and std , and the threshold of all cells is set to 64 ( ) . 
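The intensity-to-latency conversion, the per-location winner-take-all over orientations, and the C1 first-spike (max) pooling described above can be sketched as follows for one processing scale. The neighborhood size and the non-overlapping subsampling are simplifications (the text specifies a one-row overlap between adjacent C1 cells).

```python
import numpy as np

def to_latencies(resp, eps=1e-9):
    """Stronger S1 responses fire earlier: latency ~ 1/|response|.
    resp is a (4, H, W) array of Gabor magnitudes, one map per orientation."""
    lat = 1.0 / (resp + eps)
    # local winner-take-all across orientations: only the best-matching
    # orientation at each position keeps a finite latency (i.e. fires)
    winners = resp.argmax(axis=0)
    mask = np.arange(resp.shape[0])[:, None, None] == winners[None, :, :]
    return np.where(mask, lat, np.inf)

def c1_first_spike(latencies, n=2):
    """C1 cells propagate the earliest spike in an n x n S1 neighborhood
    (a max over responses, i.e. a min over latencies), with subsampling."""
    o, h, w = latencies.shape
    h2, w2 = h // n, w // n
    trimmed = latencies[:, :h2 * n, :w2 * n].reshape(o, h2, n, w2, n)
    return trimmed.min(axis=(2, 4))
```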
through the learning process , a local inhibition between different prototype cells is used to prevent the convergence of different prototypes to similar features : when a cell fires at a given position and scale , it prevents all the other cells ( independently of their preferred prototype ) from firing later at the same scale and within a neighborhood around the firing position .thus , the cell population self - organizes , each cell trying to learn a distinct pattern so as to cover the whole variability of the inputs .moreover , we applied a k - winner - take - all strategy in layer to ensure that at most two cells can fire for each processing scale .this mechanism , only used in the learning phase , helps the cells to learn patterns with different real sizes . without it ,there is a natural bias toward small " patterns ( i.e. , large scales ) , simply because corresponding maps are larger , and so likeliness of firing with random weights at the beginning of the stdp process is higher . a simplified version of stdp is used to learn the weights as follows : where and respectively refer to the index of post- and presynaptic neurons , and are the corresponding spike times , is the synaptic weight modification , and and are two parameters specifying the learning rate .note that the exact time difference between two spikes ( ) does not affect the weight change , but only its sign is considered .these simplifications are equivalent to assuming that the intensity - to - latency conversion of cells compresses the whole spike wave in a relatively short time interval ( say , ms ) , so that all presynaptic spikes necessarily fall close to the postsynaptic spike time , and the time lags are negligible .the multiplicative term ensures the weights remain in the range [ 0,1 ] and maintains all synapses in an excitatory mode .the learning phase starts by which is multiplied by 2 after each 400 postsynaptic spikes up to a maximum value of . a fixed ratio ( -4/3 ) is used .this allows us to speed up the convergence of features as the learning progresses .initiation of the learning phase with high learning rates would lead to erratic results . for each prototype, a cell propagates the first spike emitted by the corresponding cells over all positions and processing scales , leading to the global shift- and scale - invariant cells ( see the layer of fig .[ model_figure ] ) .to study the robustness of our model with respect to different transformations such as scale and viewpoint , we evaluated it on the _ 3d - object _ and _ eth-80 _ datasets .the 3d - object is provided by savarese et al . 
at cvglab , stanford university .this dataset contains 10 different object classes : bicycle , car , cellphone , head , iron , monitor , mouse , shoe , stapler , and toaster .there are about 10 different instances for each object class .the object instances are photographed in about 72 different conditions : eight view angles , three distances ( scales ) , and three different tilts .the images are not segmented and the objects are located in different backgrounds ( the background changes even for different conditions of the same object instance ) .figure [ objects ] presents some examples of objects in this dataset .the eth-80 dataset includes 80 3d objects in eight different object categories including apple , car , toy cow , cup , toy dog , toy horse , pear , and tomato .each object is photographed in 41 viewpoints with different view angles and different tilts .figure s1 in supplementary information provides some examples of objects in this dataset from different viewpoints . for both datasets ,five instances of each object category are selected for the training set to be used in the learning phase .the remaining instances constitute the testing set which is not seen during the learning phase , but is used afterward to evaluate the recognition performance .this standard cross - validation procedure allows to measure the generalization ability of the model beyond the specific training examples .note that for 3d - object dataset , the original size of all images were preserved , while the images of eth-80 dataset are resized to pixels in height while preserving the aspect ratio .the images of both datasets were converted to grayscale values . [ cols="^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] in an other experiment , we analyzed the class dependency of the features for our model . to this end , the 50 most informative features , when classifying a specific class against all the other classes ,are selected by employing the mutual information technique . in other words , for each class , we selected those 50 features which have the highest activity for samples of that class and have less activity for other classes .afterwards , the number of common features among the informative features of each pair of classes are computed as provided in table [ table_example2 ] . on average , there are only about 5.4 common features between pairs of classes .although there are some common features between any two classes , their co - occurrence with the other features help the classifier to separate them from each other . in this way , our model can represent various object classes with a relatively small number of features .indeed , exploiting the intermediate complexity features , which are not common in all classes and are not very rare , can help the classifier to discriminate instances of different classes . 
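Returning to the learning rule introduced just before the dataset description: its equation lost its symbols in this copy. It is the simplified STDP of Masquelier and Thorpe (2007), in which only the sign of the spike-time difference matters and the multiplicative w(1 - w) factor keeps each weight in [0, 1]. A sketch follows; the initial value and cap of the learning rate are placeholders, while the doubling every 400 postsynaptic spikes and the fixed a-/a+ = -4/3 ratio come from the text.

```python
import numpy as np

A_PLUS_INIT, A_PLUS_MAX, RATIO = 2.0**-6, 2.0**-2, -4.0 / 3.0   # initial/cap assumed

class SimplifiedSTDP:
    def __init__(self):
        self.a_plus = A_PLUS_INIT
        self.post_spikes = 0

    def update(self, w, t_pre, t_post):
        """Update a weight vector after a postsynaptic spike at t_post, given
        presynaptic spike times t_pre (np.inf for afferents that did not fire)."""
        ltp = t_pre <= t_post                        # pre fired before (or with) post
        a = np.where(ltp, self.a_plus, RATIO * self.a_plus)
        w = np.clip(w + a * w * (1.0 - w), 0.0, 1.0) # w(1 - w) keeps weights in [0, 1]
        self.post_spikes += 1
        if self.post_spikes % 400 == 0:              # accelerate learning over time
            self.a_plus = min(2.0 * self.a_plus, A_PLUS_MAX)
        return w
```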
in a previous study , it has been shown that using the hmax model with random dot patterns in the layer can reach a reasonable performance , comparable to the one obtained with random patches cropped from the training images .it seems that this is due to the dependency of hmax to the application of a powerful classifier .indeed , the use of both random dot or randomly selected patches transform the images into a complex and nested feature space and it is the classifier which looks for a complex signature to separate object classes .the deficiencies emerge when the classification problem gets harder ( such as invariant or multiclass object recognition problems ) and then even a powerful classifier is not able to discriminate the classes . here, we show that the superiority of our model is due to the informative feature extraction through a bio - inspired learning rule . to this end , we have compared the performances on 3d - object dataset obtained with random features versus stdp features , as well as a very simple classifier versus svm . to generate random features ,we have set the weight matrix of each feature of our model with random values .first , we have computed the mean and standard deviation ( std ) ( ) of the number of active ( nonzero ) weights in the features learned by stdp .second , for each random feature , the number of active weights , , is computed by generating a random number based on the obtained mean and std .finally , a random feature is constructed by uniformly distributing the randomly generated values in the weight matrix .in addition , we designed a simple classifier comprised of several one - versus - one classifiers .for each binary classifier , two subset of features with high occurrence probabilities in one of the two classes are selected . in more details , to select suitable features for the first class , the occurrence probabilities of features in this class are divided by the corresponding occurrence probabilities in the second class .then , a feature is selected if this ratio is higher than a threshold .the optimum threshold value is computed by a trial and error search in which the performance over the training samples is maximized . to assign a class label to the input test sample, we performed an inner product on the feature value and feature probability vectors .finally , the class with the highest probability is reported to the combined classifier .the combined classifier selects the winner class based on a simple majority voting . for 500 random features , using the svm and the simple classifier , our model reached classification performances of 71% and 21% on average , respectively .whereas , for the learned features , both the svm and simple classifiers attained reasonable performances of 96% and 79% , respectively . based on these results , it can be concluded that the features obtained through the bio - inspired unsupervised learning projects the objects into an easily separable space , while the feature extraction by selection of random patches ( drawn from the training images ) or by generation of random patterns leads to a complex object representation .position and scale invariance in our model are built - in , thanks to weight sharing and scaling process .conversely , view - invariance must be obtained through the learning process . 
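The random-feature control described above can be sketched as follows: each random prototype receives roughly as many nonzero weights as the STDP-learned prototypes (the count drawn from the fitted mean and standard deviation), scattered uniformly over the weight matrix. The function and variable names here are ours, not the authors'.

```python
import numpy as np

def random_prototypes(learned, rng=np.random.default_rng(0)):
    """learned: list of 2-D weight matrices produced by STDP training."""
    counts = [int((w > 0).sum()) for w in learned]
    mu, sd = np.mean(counts), np.std(counts)
    randoms = []
    for w in learned:
        n_active = max(1, int(rng.normal(mu, sd)))        # active weights per feature
        proto = np.zeros(w.size)
        idx = rng.choice(w.size, size=min(n_active, w.size), replace=False)
        proto[idx] = rng.random(len(idx))                 # uniform random weights
        randoms.append(proto.reshape(w.shape))
    return randoms
```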
here , we used all images of five object instances from each category ( varied in all dimensions ) to learn the visual features , while images of all other object instances of each category were used to test the network .hence , the model was exposed to all possible variations during the learning to gain view - invariance . moreover , near or opposite views of the same object shares some features which are suitable for invariant object recognition .for instance , consider the overall shape of a head , or close views of a bike wheel which could be a complete circle or an ellipse .regarding the fact that stdp tends to learn more frequent features in different images , different views of an object could be invariantly represented based on more common features .our model appears to be the best choice when dealing with few object classes , but huge variations in view points .as pointed out in previous studies , both hmax and deepconvnet models could not handle these variations perfectly .conversely , our model is not appropriate to handle many classes , which requires thousands of features , like in the imagenet contest , because its time complexity is roughly in , where is the number of features ( briefly : since the number of firing neurons per image is limited , if the number of features is doubled , reaching convergence will take roughly twice as many images , and the processing time for each of them will be doubled as well ) .for example , extracting 4096 features in our model , the same number of features in deepconvnet , would take about 67 times it took us to extract 500 .however , parallel implementation of our algorithm could speed - up the computation time by several orders of magnitude . even in this case, we do not expect to outperform the deepconvnet model on the imagenet database , since only the shape similarities are taken into account in our model and the other cues such as color or texture are ignored .importantly , our algorithm has a natural tendency to learn salient contrasted regions , which is desirable as these are typically the most informative .most of our features turned out to be class - specific , and we could guess what they represent by doing the reconstructions ( see fig . [ features ] and fig .since each feature results from averaging multiple input images , the specificity of each instance is averaged out , leading to class archetypes .consequently , good classification results can be obtained using only a few features , or even using ` simple ' decision rules like feature counts and majority voting ( here ) , as opposed to a ` smart classifier ' such as svm .there are some similarities between stdp - based feature learning , and non - negative matrix factorization , as first intuited in , and later demonstrated mathematically in . within both approaches ,objects are represented as ( positive ) sums of their parts , and the parts are learned by detecting consistently co - active input units .our model could be efficiently implemented in hardware , for example using address event representation ( aer ) . 
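The complexity class quoted earlier in this passage was dropped from this copy; the stated factor of about 67 for going from 500 to 4096 features is consistent with a cost growing quadratically in the number of features N (twice as many images to converge, each twice as expensive):

```latex
% Quadratic scaling in the number of features N reproduces the quoted figure.
T(N) \propto N^{2}
\quad\Longrightarrow\quad
\frac{T(4096)}{T(500)} = \left(\frac{4096}{500}\right)^{2} \approx 67
```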
with aer ,the spikes are carried as addresses of sending or receiving neurons on a digital bus .time ` represents itself ' as the asynchronous occurrence of the event .thus the use of stdp will lead to a system which effectively becomes more and more reactive , in addition to becoming more and more selective .furthermore , since biological hardware is known to be incredibly slow , simulations could run several order of magnitude faster than real time .as mentioned earlier , the primate visual system extracts the rough content of an image in about 100ms .we thus speculate that some dedicated hardware will be able to do the same in the order of a millisecond or less .recent computational , psychophysical , and fmri experiments demonstrate that the informative intermediate complexity features are optimal for object categorization tasks .but the possible neural mechanisms to extract such features remain largely unknown .the hmax model ignores these learning mechanisms and imprints its features with random crops from the training images , or even uses random filters .most individual features are thus not very informative , yet in some cases , a ` smart ' classifier such as svm can efficiently separate the high - dimensional vectors of population responses .many other models use supervised learning rules , sometimes reaching impressive performance on natural image classification tasks .the main drawback of these supervised methods , however , is that learning is slow and requires numerous labeled samples ( e.g. , about 1 million in ) , because of the credit assignment problem .this contrasts with humans who can generalize efficiently from just a few training examples .we avoid the credit assignment problem by keeping the features fixed when training the final classifier ( that being said , fine - tuning them for a given classification problem would probably increase the performance of our model ; we will test this in future studies ) .even if the efficiency of such hybrid unsupervised - supervised learning schemes has been known for a long time , few alternative unsupervised learning algorithms have been shown to be able to extract complex and high - level visual features ( see ) .finding better representational learning algorithms is thus an important direction for future research and seeking for inspiration in the biological visual systems is likely to be fruitful .we suggest here that the physiological mechanism known as stdp is an appealing start point . considering the time relation among the incoming inputs is an important aspect of spiking neural networks .this property is critical to promote the existing models from static vision to continuous vision .a prominent example is the trace learning rule , suggesting that the invariant object representation in ventral visual system is instructed by the implicit temporal contiguity of vision .also , in various motion processing and action recognition problems , the important information lies in the appearance timing of input features .our model has this potential to be extended for continuous and dynamic vision something that we will further explore .to date , various bio - inspired network architectures for object recognition have been introduced , but the learning mechanism of biological visual systems has been neglected . 
in this paper , we demonstrate that the association of both bio - inspired network architecture and learning rule results in a robust object recognition system .the stdp - based feature learning , used in our model , extracts frequent diagnostic and class specific features that are robust to deformations in stimulus appearance .it has previously been shown that the trivial models can not tolerate the identity preserving transformations such as changes in view , scale , and position . to study the behavior of our model confronted with these difficulties , we have tested our model over two challenging invariant object recognition databases which includes instances of 10 different object classes photographed in different views , scales , and tilts .the categorization performances indicate that our model is able to robustly recognize objects in such a severe situation .in addition , several analytical techniques have been employed to prove that the main contribution to this success is provided by the unsupervised stdp feature learning , not by the classifier . using representational dissimilarity matrix ,we have shown that the representation of input images in layer are more similar for within - category and dissimilar for between - category objects . in this way , as confirmed by the hierarchical clustering , the objects with the same category are represented in neighboring regions of feature space . hence , even if using a simple classifier , our model is able to reach an acceptable performance , while the random features fail .we would like to thank mr .majid changi ashtiani at the math computing center of ipm ( http://math.ipm.ac.ir/mcc ) for letting us to perform some parts of the calculations on their computing cluster .we also thank dr .reza ebrahimpour for his helpful discussions and suggestions .10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 s. thorpe , d. fize , c. marlot , et al . ,speed of processing in the human visual system , nature 381 ( 6582 ) ( 1996 ) 520522 .i. biederman , recognition - by - components : a theory of human image understanding ., psychological review 94 ( 2 ) ( 1987 ) 115 .j. j. dicarlo , d. zoccolan , n. c. rust , how does the brain solve visual object recognition ? , neuron 73 ( 3 ) ( 2012 ) 415434 .p. lennie , j. a. movshon , coding of color and form in the geniculostriate visual pathway ( invited review ) , journal of the optical society of america a 22 ( 10 ) ( 2005 ) 20132033 .a. s. nandy , t. o. sharpee , j. h. reynolds , j. f. mitchell , the fine structure of shape tuning in area v4 , neuron 78 ( 6 ) ( 2013 ) 11021115 .k. tanaka , h .- a .saito , y. fukada , m. moriya , coding visual images of objects in the inferotemporal cortex of the macaque monkey , journal of neurophysiology 66 ( 1 ) ( 1991 ) 170189 .c. p. hung , g. kreiman , t. poggio , j. j. dicarlo , fast readout of object identity from macaque inferior temporal cortex , science 310 ( 5749 ) ( 2005 ) 863866 .n. c. rust , j. j. dicarlo , selectivity and tolerance ( `` invariance '' ) both increase as visual information propagates from cortical area v4 to it , journal of neuroscience 30 ( 39 ) ( 2010 ) 1297812995 .h. liu , y. agam , j. r. madsen , g. kreiman , timing , timing , timing : fast decoding of object information from intracranial field potentials in human visual cortex , neuron 62 ( 2 ) ( 2009 ) 281290 .w. a. freiwald , d. y. tsao , functional compartmentalization and viewpoint generalization within the macaque face - processing system , science 330 ( 6005 ) ( 2010 ) 845851 .f. anselmi , j. 
lecun , unsupervised learning of invariant feature hierarchies with applications to object recognition , in : ieee conference on computer vision and pattern recognition ( cvpr ) , 2007 , pp .http://dx.doi.org/10.1109/cvpr.2007.383157 [ ] .h. goh , n. thome , m. cord , j .- h .lim , learning deep hierarchical visual feature coding , neural networks and learning systems , ieee transactions on pp ( 99 ) ( 2014 ) 11 .http://dx.doi.org/10.1109/tnnls.2014.2307532 [ ] .t. masquelier , relative spike time coding and stdp - based orientation selectivity in the early visual system in natural continuous and saccadic vision : a computational model . , journal of computational neuroscience 32 ( 3 ) ( 2012 ) 42541 . http://dx.doi.org/10.1007/s10827-011-0361-9 [ ] .p. fldik , learning invariance from transformation sequences , neural computation 3 ( 1991 ) 194200 .escobar , g. s. masson , t. vieville , p. kornprobst , action recognition using a bio - inspired feedforward spiking network , international journal of computer vision 82 ( 3 ) ( 2009 ) 284301 .here we provide the results of feature analysis techniques such as rdm and hierarchical clustering on eth-80 dataset for both hmax and our model .some sample images of eth-80 dataset are shown in fig .[ eth_samples ] . in fig .[ rdms_supliment_tims ] and fig .[ rdms_supliment_hmax ] the rdms of features of our model and hmax in eight view angels are presented , respectively .it can be seen that our model can better represent classes with high shape similarities such as tomato , apple , and pear or cow , horse , and dog with respect to the hmax model .also , the hierarchical clustering of whole training data based on their representations on feature spaces of our model and hmax are demonstrated in fig .[ cluster_tim_eth80 ] and fig.[cluster_hmax_eth80 ] , respectively . as for the 3d - object dataset, hmax feature extraction leads to a nested representation of different object classes which causes a poor classification accuracy . hereagain a huge number of images which belong to different classes are assigned to a large cluster with lower than 0.14 internal dissimilarities .on the other hand , our model has distributed images of different classes in different regions of feature space .note that the largest cluster of our model includes the instances of tomato , apple , and pear classes which their shapes are so similar .
the retinal image of surrounding objects varies tremendously due to changes in position, size, pose, illumination, background context, occlusion, noise, and nonrigid deformations. despite these huge variations, our visual system is able to recognize any object invariantly in just a fraction of a second. to date, various computational models have been proposed to mimic the hierarchical processing of the ventral visual pathway, with limited success. here, we show that combining a biologically inspired network architecture with a biologically inspired learning rule significantly improves the model's performance on challenging invariant object recognition problems. our model is an asynchronous feedforward spiking neural network. when the network is presented with natural images, the neurons in the entry layers detect edges, and the most strongly activated ones fire first, while neurons in the higher layers are equipped with spike timing-dependent plasticity. these neurons progressively become selective to visual features of intermediate complexity that are appropriate for object categorization. the model is evaluated on the _3d-object_ and _eth-80_ datasets, two benchmarks for invariant object recognition, and is shown to outperform state-of-the-art models, including deepconvnet and hmax. this demonstrates its ability to accurately recognize different instances of multiple object classes even under varying appearance conditions (different views, scales, tilts, and backgrounds). several statistical analysis techniques are used to show that our model extracts class-specific and highly informative features.

*keywords:* view-invariant object recognition, visual cortex, stdp, spiking neurons, temporal coding
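to make the learning mechanism concrete, the sketch below shows a simplified stdp update of the kind often paired with single-spike temporal coding: afferents that fired at or before the postsynaptic spike are potentiated, the others are depressed. the multiplicative form and the learning rates are illustrative assumptions and are not necessarily the exact rule or parameters of the model described above.

```python
import numpy as np

def stdp_update(weights, pre_spike_times, post_spike_time,
                a_plus=0.004, a_minus=0.003):
    """simplified stdp: potentiate afferents that spiked at or before the
    postsynaptic spike, depress all others (illustrative parameter values).

    weights          : (n_afferents,) array in [0, 1]
    pre_spike_times  : (n_afferents,) spike times (np.inf if no spike)
    post_spike_time  : scalar time of the postsynaptic spike
    """
    potentiate = pre_spike_times <= post_spike_time
    # multiplicative update keeps the weights inside [0, 1]
    dw = np.where(potentiate,
                  a_plus * weights * (1.0 - weights),
                  -a_minus * weights * (1.0 - weights))
    return np.clip(weights + dw, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = rng.uniform(0.2, 0.8, size=10)
    pre = rng.uniform(0.0, 20.0, size=10)   # spike latencies in ms
    w_new = stdp_update(w, pre, post_spike_time=10.0)
    print(np.round(w_new - w, 4))
```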
the singular value decomposition of a real matrix is a factorization of the form , where is an orthogonal matrix , is a rectangular diagonal matrix with non-negative real numbers on the diagonal , and is an orthogonal matrix . the diagonal entries of are known as the _ singular values _ of . the columns of are the _ left-singular vectors _ of , while the columns of are the _ right-singular vectors _ of . if is symmetric , the singular values are given by the absolute values of the eigenvalues , and the singular vectors are just the eigenvectors of . here , and in the sequel , whenever we write _ singular vectors _ , the reader is free to interpret this as left-singular vectors or right-singular vectors , provided the same choice is made throughout the paper . consider a real ( deterministic ) matrix with singular values and corresponding singular vectors ; we will call the data matrix . in general , the vector is not unique . however , if has multiplicity one , then is determined up to sign . an important problem in statistics and numerical analysis is to compute the first singular values and vectors of . in particular , the largest few singular values and corresponding singular vectors are typically the most important . among others , this problem lies at the heart of principal component analysis ( pca ) , which has a very wide range of applications ( for many examples , see and the references therein ) , and of the closely related low rank approximation procedure often used in theoretical computer science and combinatorics . in applications , are typically large and is small , often a fixed constant . a problem of fundamental importance in quantitative science ( including pure and applied mathematics , statistics , engineering , and computer science ) is to estimate how a small perturbation of the data affects the spectrum . this problem has been discussed in virtually every textbook on quantitative linear algebra and numerical analysis ( see , for instance , ) . a basic model is as follows . instead of , one needs to work with , where represents the perturbation matrix . let denote the singular values of with corresponding singular vectors . we consider the following natural questions . is a good approximation of ? [ quest : weyl ] is a good approximation of ? these two questions are addressed by the davis-kahan-wedin sine theorem and weyl's inequality . let us begin with the first question in the case when . a canonical way ( coming from the numerical analysis literature ; see for instance ) to measure the distance between two unit vectors and is to look at , where is the angle between and , taken in $ [ 0 , \pi/2 ] $ . it has been observed by numerical analysts ( in the setting where is deterministic ) for quite some time that the key parameter to consider in the bound is the gap ( or separation ) between the first and second singular values of . the first result in this direction is the famous davis-kahan sine theorem for hermitian matrices ; the non-hermitian version was proved later by wedin . throughout the paper , we use to denote the spectral norm of a matrix ; that is , is the largest singular value of . [ wedin ] theorem [ wedin ] is trivially true when , since the sine is always bounded above by one . in other words , even if the vector is not uniquely determined , the bound is still true for any choice of . on the other hand , when , the proof of theorem [ wedin ] reveals that the vector is uniquely determined up to sign . theorem [ wedin ] is a simple corollary of a more general result ( theorem v.4.4 ) originally due to wedin .
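as a quick numerical illustration of the two classical bounds discussed here (the sine theorem stated above and weyl's inequality stated below), the snippet builds a matrix with a well-separated top singular value, adds a small perturbation, and compares the measured quantities with the bounds. the sizes and singular values are arbitrary; the comparison uses one standard form of the sine bound, ||E|| / (sigma_1 - sigma_2), whose exact constant and gap convention vary between statements.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 150

# data matrix with prescribed singular values (illustrative), so that the
# gap sigma_1 - sigma_2 is clearly separated
U, _ = np.linalg.qr(rng.normal(size=(m, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
sigma = np.concatenate([[10.0, 6.0], np.linspace(3.0, 0.5, n - 2)])
A = U @ np.diag(sigma) @ V.T

E = 0.05 * rng.normal(size=(m, n))            # small perturbation
norm_E = np.linalg.norm(E, 2)                 # spectral norm of the noise

_, s, Vt = np.linalg.svd(A)
_, s_tilde, Vt_tilde = np.linalg.svd(A + E)

# Weyl-type bound: every singular value moves by at most ||E||
print("max singular-value shift:", np.max(np.abs(s_tilde - s)), "  ||E|| =", norm_E)

# sine-theorem-type bound for the top right-singular vector
cos_t = min(1.0, abs(float(Vt[0] @ Vt_tilde[0])))   # abs handles the sign ambiguity
print("sin(theta_1) =", np.sqrt(1.0 - cos_t**2),
      "  gap-based bound ~", norm_E / (sigma[0] - sigma[1]))
```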
we present a proof below for completeness . more generally , one can consider approximating the -th singular vector or the space spanned by the first singular vectors ; naturally , in these cases , one must consider the gaps . question [ quest : weyl ] is addressed by weyl's inequality . in particular , weyl's perturbation theorem gives the following deterministic bound for the singular values ( see theorem iv.4.11 of , for a more general perturbation bound due to mirsky ) . [ weyl's bound ] [ theorem : weyl ] for more discussion concerning general perturbation bounds , we refer the reader to and references therein . we now pause for a moment to prove theorem [ wedin ] . if , the theorem is trivially true since the sine is always bounded above by one . thus , assume . by theorem [ theorem : weyl ] , we have , and hence the singular vectors and are uniquely determined up to sign . by another application of theorem [ theorem : weyl ] , we obtain . rearranging the inequalities , we have . therefore , applying theorem v.4.4 again , we conclude that , and the proof is complete . let us now focus on the matrices and . it has become common practice to assume that the perturbation matrix is random . furthermore , researchers have observed that data matrices are usually not arbitrary ; they often possess certain structural properties , and among these one of the most frequently seen is having low rank ( see , for instance , and references therein ) . the goal of this paper is to show that in this situation one can significantly improve classical results like theorems [ wedin ] and [ theorem : weyl ] . to give a quick example , let us assume that and are matrices and that the entries of are independent and identically distributed ( iid ) random variables with zero mean , unit variance ( which is just a matter of normalization ) , and bounded fourth moment . it is well known that in this case , with high probability ( here we use to denote a term which tends to zero as tends to infinity ; see , for instance , chapter 5 of ) . thus , the above two theorems imply [ wedin - cor ] for any , with probability , and . among other things , this shows that if one wants accuracy in the first singular vector computation , needs to satisfy . we present the results of a numerical simulation for being a matrix of rank 2 when , , and , where is a random bernoulli matrix ( its entries are iid random variables that take values with probability ) . the results , shown in figure [ young ] , turn out to be very different from what predicts . it is easy to see that for the parameters and , corollary [ wedin - cor ] does not give a useful bound ( since ) . however , figure [ young ] shows that , with high probability , , which means approximates with a relatively small error .

figure [ young ] : is plotted for a deterministic matrix with rank ( for the top panel and for the bottom panel ) and bernoulli noise , evaluated from samples ( top panel ) and samples ( bottom panel ) ; in both panels , the largest singular value of is taken to be .

trying to explain the inefficiency of the davis-kahan-wedin bound in the above example , the second author was led to the following intuition .
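the simulation described above can be reproduced in a few lines. the dimensions, signal strengths, and the deliberately small gap below are illustrative stand-ins for the original parameter values, but they exhibit the same phenomenon: the classical gap-based bound is vacuous while the measured error stays small.

```python
import numpy as np

def sin_angle(u, v):
    """sine of the angle between two unit vectors, ignoring the sign ambiguity."""
    c = min(1.0, abs(float(u @ v)))
    return float(np.sqrt(1.0 - c * c))

rng = np.random.default_rng(0)
n, r = 1000, 2
sig = [200.0, 190.0]                              # strong signal, small gap

# rank-2 deterministic data matrix
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
V, _ = np.linalg.qr(rng.normal(size=(n, r)))
A = U @ np.diag(sig) @ V.T

E = rng.choice([-1.0, 1.0], size=(n, n))          # Bernoulli noise, ||E|| ~ 2*sqrt(n)

u1 = np.linalg.svd(A)[0][:, 0]
u1_tilde = np.linalg.svd(A + E)[0][:, 0]
norm_E = np.linalg.norm(E, 2)

print("empirical sin(theta)          :", round(sin_angle(u1, u1_tilde), 3))
print("classical bound ||E||/(s1-s2) :", round(norm_E / (sig[0] - sig[1]), 3))
```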
_ if has rank , all actions of focus on an -dimensional subspace ; intuitively then , must act like an -dimensional random matrix rather than an -dimensional one . _

this means that the _ real dimension _ of the problem is , not . while it is clear that one cannot automatically ignore the ( rather wild ) action of outside the range of , this intuition , if true , would show that what really matters in or is , the rank of , rather than its size . if this is indeed the case , one may hope to obtain a bound of the form for some constant ( with some possible corrections ) . this is much better than when has low rank , and it explains the phenomenon arising from figure [ young ] . in , the second author managed to prove under certain conditions . while the right-hand side is quite close to the optimal form in , the main problem here is that on the left-hand side one needs to square the sine function . the bound for with was done by an inductive argument and was rather complicated . finally , the problem of estimating the singular values was not addressed at all in . in this paper , by using an entirely different ( and simpler ) argument , we are going to remove the unwanted squaring effect . this enables us to obtain a near optimal improvement of the davis-kahan-wedin theorem . one can easily extend the proof to give a ( again near optimal ) bound on the angle between the two subspaces spanned by the first few singular vectors of and their counterparts of . ( this is the space one often actually cares about in pca and in low rank approximation procedures . ) finally , as a by-product , we obtain an improved version of weyl's bound , which also supports our _ real dimension _ intuition . our results hold under very mild assumptions on and . as a matter of fact , in the strongest results , we will not even need the entries of to be independent . as an illustration , let us first state a result in the case that is a matrix and is a bernoulli matrix ( the entries are iid bernoulli random variables , taking values with probability ) . [ theorem : main1 ] let be a bernoulli random matrix and fix . then there exist constants ( depending only on ) such that the following holds . let be a matrix with rank satisfying and .
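the intuition can be checked numerically: the norm of the noise restricted to a fixed pair of low-dimensional subspaces is of order the square root of the rank, while its unrestricted norm is of order the square root of the dimension. the construction below (random frames standing in for the column and row spaces of a rank-r signal) is purely illustrative and is not the construction used in the proofs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 1000, 2

E = rng.choice([-1.0, 1.0], size=(n, n))          # Bernoulli noise, ||E|| ~ 2*sqrt(n)

# random r-dimensional orthonormal frames standing in for the column and
# row spaces of a rank-r data matrix
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
V, _ = np.linalg.qr(rng.normal(size=(n, r)))

full_norm = np.linalg.norm(E, 2)
restricted_norm = np.linalg.norm(U.T @ E @ V, 2)   # r x r compression of the noise

print("||E||        ~", round(full_norm, 1), " (scales like sqrt(n))")
print("||U^T E V||  ~", round(restricted_norm, 1), " (scales like sqrt(r), i.e. O(1) here)")
```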
then , with probability at least , notice that the assumptions on are normalized ( as we assume that the variance of the entries in is one ) .if the error entries have variance , then we need to scale accordingly by replacing by ; thus , the assumptions become weaker as decreases . for the singular values ,a good toy result is the following [ thm : probweyl0 ] let be an bernoulli random matrix and fix . then there exists a constant ( depending only on ) such that the following holds .let be an matrix with rank satisfying .then with probability at least it may be useful for the reader to compare these new bounds with the bounds obtained directly from the davis - kahan - wedin sine theorem and weyl s inequality ( see corollary [ wedin - cor ] ) .both theorems above are corollaries of much more general statements , which we describe in the next sections .in the literature , there are many models of random matrices .we can capture almost all natural models by focusing on a common property .[ def : concentration ] we say the random matrix is -concentrated if for all unit vectors , and every , the key parameter is . it is easy to verify the following fact , which asserts that the concentration property is closed under addition . [ fact1 ] if is -concentrated and is -concentrated , then is -concentrated for some depending on .furthermore , the concentration property guarantees a bound on . a standard net argument ( see lemma [ lemma : net ] ) shows [ fact2 ] if is -concentrated then there are constants such that . for readers not familiar with random matrix theory , let us point out why the concentration property is expected to hold for any natural model. if is random and is fixed , then the vector must look random .it is well known that in a high dimensional space , a random vector , with very high probability , is nearly orthogonal to any fixed vector .thus , one expects that very likely , the inner product of and is small . definition [ def : concentration ] is a way to express this observation quantitatively .it turns out that all random matrices with independent entries satisfying a mild condition have the concentration property .this class covers virtually all examples one sees in practice .in particular , lemma [ lemma : bernoulli ] shows that if is a bernoulli random matrix , then is -concentrated , and with high probability .a convenient feature of the definition is that independence between the entries is not a requirement .for instance , it is easy to show that a random orthogonal matrix satisfies the concentration property .we continue the discussion of the -concentration property ( definition [ def : concentration ] ) in section [ sec : concentration ] .let us state an extension of theorem [ theorem : main1 ] .[ thm : main ] assume that is -concentrated for a trio of constants , and suppose has rank .then , for any , with probability at least [ remark : boundone ] using fact [ fact2 ] , one can replace on the right - hand side by , which yields that with probability at least however , we prefer to state our theorems in the form of theorem [ thm : main ] , as the bound , in many cases , may not be optimal .another useful corollary of theorem [ thm : main ] is the following .for any constant there are constants and such that if , then with probability at least . 
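as a brief numerical aside, the concentration property of definition [ def : concentration ] can be probed empirically for the bernoulli model: for fixed unit vectors, the bilinear form of the noise should have a sub-gaussian tail ( gamma = 2 ), which is the content of the bernoulli lemma quoted later in the text. the sizes and the standard hoeffding-type comparison bound 2 exp(-t^2 / 2) below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 200, 2000

x = rng.normal(size=n); x /= np.linalg.norm(x)    # fixed unit vectors
y = rng.normal(size=n); y /= np.linalg.norm(y)

samples = np.empty(trials)
for k in range(trials):
    E = rng.choice([-1.0, 1.0], size=(n, n))      # fresh Bernoulli matrix each trial
    samples[k] = x @ E @ y

for t in (1.0, 2.0, 3.0):
    empirical = float(np.mean(np.abs(samples) > t))
    bound = 2.0 * np.exp(-t**2 / 2.0)             # Hoeffding bound for n^2 bounded terms
    print(f"t = {t}:  empirical tail {empirical:.4f}   sub-gaussian bound {bound:.4f}")
```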
the first term on the right - hand side corresponds to the conjectured optimal bound .the second term is necessary .if , then the intensity of the noise is much stronger than the strongest signal in the data matrix , so would corrupt completely . thus in order to retain crucial information about , it seems necessary to assume . we are not absolutely sure about the necessity of the third term , but under the condition , this term is superior to the davis - kahan - wedin bound .we are able to extend theorem [ thm : main ] in two different ways .first , we can bound the angle between and for any index .second , and more importantly , we can bound the angle between the subspaces spanned by and , respectively . as the projection onto the subspaces spanned by the first few singular vectors ( i.e. low rank approximation ) plays an important role in a vast collection of problems ,this result potentially has a large number of applications .we are going to present these two results in the next section . to conclude this section ,let us mention that related results have been obtained in the case where the random matrix contains gaussian entries . in , r. wang estimates the non - asymptotic distribution of the singular vectors when the entries of are iid standard normal random variables .recently , allez and bouchaud have studied the eigenvector dynamics of when is a real symmetric matrix and is a symmetric brownian motion ( that is , is a diffusive matrix process constructed from a family of independent real brownian motions ) .our results also seems to have a close tie to the study of spiked covariance matrices , where a different kind of perturbation has been considered ; see for details. it would be interesting to find a common generalization for these problems .first , we consider the problem of approximating the -th singular vector for any . in light of the davis - kahan - wedin result and theorem [ thm : main ] , it is natural to consider the gap [ thm : general ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .then , for any , with probability at least in the next theorem , we bound the largest principal angle between for some integer , where is the rank of .let us recall that if and are two subspaces of the same dimension , then the ( principal ) angle between them is defined as where denotes the orthogonal projection onto subspace .[ thm : subspace ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .then , for any , with probability at least where and are the -dimensional subspaces defined in .it remains an open question to give an efficient bound for subspaces corresponding to an arbitrary set of singular values .however , we can use theorem [ thm : subspace ] repeatedly to obtain bounds for the case when one considers a few intervals of singular values . for instance , by applying theorem [ thm : subspace ] twice , we obtain assume that is -concentrated for a trio of constants .suppose has rank , and let be integers .then , for any , with probability at least where let for any subspace , let denote the orthogonal projection onto .it follows that , where denotes the identity matrix . 
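as an aside, the principal-angle definition recalled above can be evaluated directly in a few lines: for two subspaces of equal dimension with orthonormal bases, the cosines of the principal angles are the singular values of the product of the bases, and the spectral norm of the difference of the orthogonal projections equals the sine of the largest angle. the matrices below are illustrative only.

```python
import numpy as np

def sin_largest_principal_angle(U, V):
    """U, V: (n, j) orthonormal bases. Returns sin of the largest principal angle."""
    smallest_cos = np.clip(np.linalg.svd(U.T @ V, compute_uv=False)[-1], 0.0, 1.0)
    return float(np.sqrt(1.0 - smallest_cos**2))

rng = np.random.default_rng(3)
n, r, j = 500, 4, 2
Ua, _ = np.linalg.qr(rng.normal(size=(n, r)))
Va, _ = np.linalg.qr(rng.normal(size=(n, r)))
A = Ua @ np.diag([400.0, 300.0, 80.0, 50.0]) @ Va.T
E = rng.choice([-1.0, 1.0], size=(n, n))

U_j = np.linalg.svd(A)[0][:, :j]                 # span of first j left singular vectors of A
U_j_tilde = np.linalg.svd(A + E)[0][:, :j]       # same for A + E

print("sin(largest angle)   :", sin_largest_principal_angle(U_j, U_j_tilde))
# equivalent definition through orthogonal projections
P, Pt = U_j @ U_j.T, U_j_tilde @ U_j_tilde.T
print("||P - P~|| (spectral):", np.linalg.norm(P - Pt, 2))
```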
by definition of the subspaces , we have thus , by , we obtain theorem [ thm : subspace ] can now be invoked to bound and , and the claim follows .finally , let us present the general form of theorem [ thm : probweyl0 ] for singular values .[ thm : probweyl ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .then , for any , with probability at least and with probability at least notice that the upper bound for given in involves . in many situations , the lower bound incan be used to provide an upper bound for .we now briefly give an overview of the paper and discuss some of the key ideas behind the proof of our main results .for simplicity , let us assume that and are real symmetric matrices .( in fact , we will symmetrize the problem in section [ sec : prelim ] below . )let be the eigenvalues of with corresponding ( orthonormal ) eigenvectors .let be the largest eigenvalue of with corresponding ( unit ) eigenvector .suppose we wish to bound ( from theorem [ thm : main ] ) .since it suffices to bound for .let us consider the case when . in this case, we have since and , we obtain thus , the problem of bounding reduces to obtaining an upper bound for and a lower bound for the gap .we will obtain bounds for both of these terms by using the concentration property ( definition [ def : concentration ] ) .more generally , in section [ sec : prelim ] , we will apply the concentration property to obtain lower bounds for the gaps when , which will hold with high probability .let us illustrate this by now considering the gap .indeed , we note that applying the concentration property , we see that with probability at least . as , we in fact observe that thus , if is sufficiently large , we have ( say ) with high probability . in section [ sec : proof ], we will again apply the concentration property to obtain upper bounds for terms of the form . at the end of section [ sec :proof ] , we combine these bounds to complete the proof of theorems [ thm : main ] , [ thm : general ] , [ thm : subspace ] and [ thm : probweyl ] . in section [ sec :concentration ] , we discuss the -concentration property ( definition [ def : concentration ] ) . in particular , we generalize some previous results obtained by the second author in . finally , in section [section : app ] , we present some applications of our main results . among others ,our results seem useful for matrix recovery problems .the general matrix recovery problem is the following . is a large matrix .however , the matrix is unknown to us .we can only observe its noisy perturbation , or in some cases just a small portion of the perturbation .our goal is to reconstruct or estimate an important parameter as accurately as possible from this observation .furthermore , several problems from combinatorics and theoretical computer science can also be formulated in this setting .special instances of the matrix recovery problem have been investigated by many researchers using spectral techniques and combinatorial arguments in ingenious ways .we propose the following simple analysis : if has rank and , then the projection of on the subspace spanned by the first singular vectors of is close to the projection of onto the subspace spanned by the first singular vectors of , as our new results show that and are very close .moreover , we can also show that the projection of onto is typically small .thus , by projecting onto , we obtain a good approximation of the rank approximation of . 
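the symmetrization used above is easy to verify numerically: the eigenvalues of the block matrix built from a rectangular matrix are plus and minus its singular values, padded with zeros. the small example below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 4
A = rng.normal(size=(m, n))

# symmetric dilation of A: nonzero eigenvalues are +/- the singular values of A
M = np.zeros((m + n, m + n))
M[:m, m:] = A
M[m:, :m] = A.T

eigvals = np.sort(np.linalg.eigvalsh(M))
svals = np.linalg.svd(A, compute_uv=False)
reference = np.sort(np.concatenate([svals, -svals, np.zeros(m - n)]))

print("eigenvalues of the dilation :", np.round(eigvals, 4))
print("+/- singular values of A    :", np.round(reference, 4))
```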
in certain cases ,we can repeat the above operation a few times to obtain sufficient information to recover completely or to estimate the required parameter with high accuracy and certainty .in this section , we present some of the preliminary tools we will need to prove theorems [ thm : main ] , [ thm : general ] , [ thm : subspace ] , and [ thm : probweyl ] .to begin , we define the symmetric block matrices and we will work with the matrices and instead of and .in particular , the non - zero eigenvalues of are and the eigenvectors are formed from the left and right singular vectors of .similarly , the non - trivial eigenvalues of are ( some of which may be zero ) and the eigenvectors are formed from the left and right singular vectors of . along these lines , we introduce the following notation , which differs from the notation used above .the non - zero eigenvalues of will be denoted by with orthonormal eigenvectors , such that let be the orthonormal eigenvectors of corresponding to the -largest eigenvalues . in order to prove theorems [ thm : main ] , [ thm : general ], [ thm : subspace ] , and [ thm : probweyl ] , it suffices to work with the eigenvectors and eigenvalues of the matrices and .indeed , proposition [ prop : sine ] will bound the angle between the singular vectors of and by the angle between the corresponding eigenvectors of and .[ prop : sine ] let and be unit vectors .let be given by then since , we have thus , and the claim follows. we now introduce some useful lemmas .the first lemma below , states that if is -concentrated , then is -concentrated , for some new constants and .[ lemma : tilde ] assume that is -concentrated for a trio of constants .let and .then for all unit vectors , and every , let be unit vectors in , where and .we note that thus , if any of the vectors are zero , follows immediately from .assume all the vectors are nonzero .then thus , by , we have and the proof of the lemma is complete. we will also consider the spectral norm of .since is a symmetric matrix whose eigenvalues in absolute value are given by the singular values of , it follows that we introduce -nets as a convenient way to discretize a compact set .let .a set is an -net of a set if for any , there exists such that .the following estimate for the maximum size of an -net of a sphere is well - known ( see for instance ) .[ lemma : net ] a unit sphere in dimensions admits an -net of size at most lemmas [ lemma : r - norm ] , [ lemma : largest ] , and [ lemma : j - largest ] below are consequences of the concentration property . [ lemma : r - norm ] assume that is -concentrated for a trio of constants .let be a matrix with rank .let be the matrix whose columns are the vectors . then , for any , clearly is a symmetric matrix .let be the unit sphere in .let be a -net of .it is easy to verify ( see for instance ) that for any symmetric matrix , for any fixed , we have by lemma [ lemma : tilde ] . since , we obtain [ lemma : largest ] assume that is -concentrated for a trio of constants .suppose has rank .then , for any , with probability at least . in particular ,if , then with probability at least . if , in addition , , then for with probability at least .we observe that by lemma [ lemma : tilde ] , we have for every , and follows .if , then the bound can be obtained by taking in .assume . 
taking in yields for with probability at least .using the courant minimax principle , lemma [ lemma : largest ] can be generalized to the following .[ lemma : j - largest ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .then , for any , with probability at least . in particular , with probability at least .in addition , if , then for with probability at least .it suffices to prove .indeed , the bound follows from by taking , and follows by taking .let be the unit sphere in . by the courant minimax principle , thus, it suffices to show for all .let be a -net of . by lemma [ lemma : net ] , .we now claim that indeed , fix a realization of .since is compact , there exists such that . moreover , there exists such that .clearly the claim is true when ; assume .then , by the triangle inequality , we have and follows . applying and lemma [ lemma : tilde ] , we have and the proof of the lemma is complete .we will continually make use of the following simple fact : section is devoted to theorems [ thm : main ] , [ thm : general ] , [ thm : subspace ] , and [ thm : probweyl ] . to begin , define the subspace let be the orthogonal projection onto .[ lemma : proj_bound ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .then with probability at least . consider the event by lemma [ lemma : j - largest ] ( or lemma [ lemma : largest ] in the case ) , holds with probability at least .fix . by multiplying on the left by and on the right by , we obtain since .thus , on the event , we have we conclude that , on the event , and the proof is complete .[ lemma : uproj ] assume that is -concentrated for a trio of constants .suppose has rank , and let be an integer .define to be the matrix with columns .then , for any , with probability at least define the event by lemmas [ lemma : r - norm ] , [ lemma : j - largest ] , and [ lemma : proj_bound ] , it follows that fix .we multiply on the left by and on the right by to obtain we note that and where is the diagonal matrix with the values on the diagonal . for the right - hand side of , we write , where is the matrix with columns and is the orthogonal projection onto .thus , on the event , we have here we used the fact that is a sub - matrix of and hence combining the above computations and bound yields on the event .we now consider the entries of the diagonal matrix .on , we have that , for any , by writing the elements of the vector in component form , it follows that and hence on the event .since this holds for each , the proof is complete . with lemmas [ lemma : proj_bound ] and [ lemma : uproj ] in hand , we now prove theorems [ thm : main ] , [ thm : general ] , [ thm : subspace ] , and [ thm : probweyl ] . by proposition [ prop :sine ] , in order to prove theorems [ thm : main ] and [ thm : general ] , it suffices to bound because are formed from the left and right singular vectors of and .we write where is the orthogonal projection onto .then applying the bounds obtained from lemmas [ lemma : proj_bound ] and [ lemma : uproj ] ( with ) , we obtain with probability at least we now note that the correct absolute constant in front can now be deduced from the bound above and proposition [ prop : sine ] .the lower bound on the probability given in can be written in terms of the constants by recalling the definitions of and given in lemma [ lemma : tilde ] . 
we again write where is the orthogonal projection onto .then we have that for any , we have that moreover , from lemmas [ lemma : proj_bound ] and [ lemma : uproj ] , we have with probability at least and with probability at least .the proof of theorem [ thm : general ] is complete by combining the bounds above for .however , and are formed from the left and right singular vectors of and . to avoid the dependence on both the left and right singular vectors, one can begin with and consider only the coordinates of which correspond to the left ( alternatively right ) singular vectors . by then following the proof for only these coordinates, one can bound the left ( right ) singular vectors by terms which only depend on the previous left ( right ) singular vectors . ] .as in the proof of theorem [ thm : main ] , the correct constant factor in front can be deduced from proposition [ prop : sine ] .define the subspaces by proposition [ prop : sine ] , it suffices to bound .let be the orthogonal projection onto . by lemmas [ lemma : proj_bound ] and [ lemma : uproj ], it follows that with probability at least on the event where holds , we have by the triangle inequality and the cauchy - schwarz inequality .thus , by , we conclude that on the event where holds .the claim now follows from proposition [ prop : sine ] .the lower bound follows from lemma [ lemma : j - largest ] ; it remains to prove .let be the matrix whose columns are given by the vectors , and recall that is the orthogonal projection onto .let denote the unit sphere in . then for , we multiply on the left by and on the right by to obtain here we used and the fact that . therefore, we have the deterministic bound by the cauchy - schwarz inequality , it follows that by the courant minimax principle , we have thus , it suffices to show that with probability at least .we decompose and obtain thus , by lemma [ lemma : r - norm ] and , we have with probability at least , and the proof is complete .in this section , we give examples of random matrix models satisfying definition [ def : concentration ] . [ lemma : bernoulli ] there exists a constant such that the following holds .let be a random bernoulli matrix .then and for any fixed unit vectors and positive number , the bounds in lemma [ lemma : bernoulli ] also hold for the case where the noise is gaussian ( instead of bernoulli ) . indeed , when the entries of are iid standard normal random variables , has the standard normal distribution .the first bound is a corollary of a general concentration result from .it can also be proved directly using a net argument .the second bound follows from martingale different sequence inequality ; see also for a direct proof with a more generous constant .we now verify the -concentration property for slightly more general random matrix models .we will discuss these matrix models further in section [ section : app ] . in the lemmas below, we consider both the case where is a real symmetric random matrix with independent entries and when is a non - symmetric random matrix with independent entries .[ lemma : conc - sym ] let be a real symmetric random matrix where is a collection of independent random variables each with mean zero .further assume with probability , for some .then for any fixed unit vectors and every we write as the right side is a sum of independent , bounded random variables , we apply hoeffding s inequality ( ( * ? ? 
?* theorem 2 ) ) to obtain here we used the fact that because are unit vectors .since each has mean zero , it follows that , and the proof is complete .[ lemma : conc - nonsym ] let be a real random matrix where is a collection of independent random variables each with mean zero . further assume with probability , for some .then for any fixed unit vectors , and every the proof of lemma [ lemma : conc - nonsym ] is nearly identical to the proof of lemma [ lemma : conc - sym ] . indeed , follows from hoeffding s inequality since can be the written as the sum of independent random variables ; we omit the details .many other models of random matrices satisfy definition [ def : concentration ] .if the entries of are independent and have a rapidly decaying tail , then will be -concentrated for some constants .one can achieve this by standard truncation arguments .for many arguments of this type , see for instance . as an example, we present a concentration result from when the entries of are iid sub - exponential random variables .[ lemma : conc - sub ] let be a real random matrix whose entries are iid copies of a sub - exponential random variable with constant , i.e. for all .assume has mean 0 and variance 1 .then there are constants ( depending only on ) such that for any fixed unit vectors and any , one has finally , let us point out that the assumption that the entries are independent is not necessary .as an example , we mention random orthogonal matrices . for another example, one can consider the elliptic ensembles ; this can be verified using standard truncation and concentration results , see for instance and ( * ? ? ?* chapter 5 ) .the matrix recovery problem is the following : is a large unknown matrix .we can only observe its noisy image , or in some cases just a small part of it .we would like to reconstruct or estimate an important parameter as accurately as possible from this observation . consider a deterministic matrix be a random matrix of the same size whose entries are independent random variables with mean zero and unit variance . for convenience, we will assume that , for some fixed , with probability .suppose that we have only partial access to the noisy data .each entry of this matrix is observed with probability and unobserved with probability for some small . we will write if the entry is not observed .given this sparse observable data matrix , the task is to reconstruct . the matrix completion problem is a central one in data analysis , and there is a large collection of literature focusing on the low rank case ; see and references therein .a representative example here is the netflix problem , where is the matrix of ratings ( the rows are viewers , the columns are movie titles , and entries are ratings ) . in this section , we are going to use our new results to study this problem .the main novel feature here is that our analysis allows us to approximate _ any given column ( or row ) _ with high probability .for instance , in the netflix problem , one can figure out the ratings of any given individual , or any given movie . in earlier algorithms we know of, the approximation was mostly done for the frobenius norm of the whole matrix .such a result is equivalent to saying that a _ random _ row or column is well approximated , but can not guarantee anything about a specific row or column . 
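a minimal sketch of this column-by-column recovery idea is given below: rescale the observed entries by 1/p so that the sparse matrix is an unbiased estimate of the full noisy matrix, then project any desired column onto the span of its top singular vectors. the dimensions, rank, sampling rate, and signal scales are illustrative choices, and the precise conditions under which this works are the subject of the analysis that follows.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, p = 600, 3, 0.3

# low-rank ground truth A (scales chosen so the signal dominates the noise)
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
V, _ = np.linalg.qr(rng.normal(size=(n, r)))
A = U @ np.diag([2000.0, 1500.0, 1000.0]) @ V.T

noise = rng.normal(size=(n, n))                  # entrywise observation noise
mask = rng.random((n, n)) < p                    # each entry observed with probability p

# rescaled observations: an unbiased estimate of A (unobserved entries set to 0)
B = np.where(mask, (A + noise) / p, 0.0)

# project any chosen column of B onto the span of the top-r left singular vectors of B
Ub = np.linalg.svd(B)[0][:, :r]
col = 17                                          # any fixed column index
estimate = Ub @ (Ub.T @ B[:, col])

rel_err = np.linalg.norm(estimate - A[:, col]) / np.linalg.norm(A[:, col])
print("relative error on column", col, "=", round(rel_err, 3))
```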
without loss of generality , we assume is a square matrix .the rectangular case follows by applying the analysis below to the matrix defined in .we assume that is large and asymptotic notation such as will be used under the assumption that .let be a deterministic matrix with rank where are the singular values with corresponding singular vectors .let be iid indicator random variables with .the entries of the sparse matrix can be written as where it is clear that the are independent random variables with mean 0 and variance .this way , we can write in the form , where is the random matrix with independent entries .we assume ; in fact , our result works for being a negative power of .let and consider the subspace spanned by and spanned by , where ( alternatively is the -th singular vector of ( alternatively ) .fix any and consider the -th columns of and .denote them by and , respectively .we have notice that is efficiently computable given and .( in fact , we can estimate very well by the density of , so we do nt even need to know . ) in the remaining part of the analysis , we will estimate the three error terms on the right - hand side .[ lemma : projection ] let be a random vector in whose coordinates are independent random variables with mean 0 , variance at most , and are bounded in absolute value by .let be a fixed subspace of dimension and be the projection of onto .then where are absolute constants . the first term is bounded from above by .the second term has the form , where is the random vector with independent entries , which is the -th column of .notice that entries of are bounded ( in absolute value ) by with probability .applying lemma [ lemma : projection ] ( with the proper normalization ) , we obtain since . by setting , implies that , for any , with probability at least . to bound , we appeal to theorem [ thm : subspace ] .assume for a moment that is -concentrated for some constants .let .then it follows that , for any , with probability at least where is an absolute constant . since it remains to bound .we first note that . by talagrand s inequality ( see or ( * ? ? ?* theorem 2.1.13 ) ) , we have in addition , thus , we conclude that with probability at least . [theorem : recovery ] assume that has rank and with probability .assume that is -concentrated for a trio of constants .let be an arbitrary index between and , and let and be the -th columns of and .let be an integer , and let be the subspace spanned by the first singular vectors of .let be the singular values of .assume . then , for any , with probability at least where and is an absolute constant .as this theorem is a bit technical , let us consider a special , simpler case .assume that all entries of are of order and .thus , any column has length .assume furthermore that and for some .then our analysis yields
matrix perturbation inequalities, such as weyl's theorem (concerning the singular values) and the davis-kahan theorem (concerning the singular vectors), play essential roles in quantitative science; in particular, these bounds have found application in data analysis as well as in related areas of engineering and computer science. in many situations, the perturbation is assumed to be random, and the original matrix has certain structural properties (such as having low rank). we show that, in this scenario, classical perturbation results, such as those of weyl and davis-kahan, can be improved significantly. we believe many of our new bounds are close to optimal, and we also discuss some applications.
entanglement , first recognized as the characteristic trait of quantum mechanics , has long been used as the main indicator of the quantumness of correlations . indeed , as shown in ref. , for pure-state computation , exponential speed-up occurs only if entanglement grows with the size of the system . however , the role played by entanglement in mixed-state computation is less clear . for instance , in the so-called deterministic quantum computation with one qubit ( dqc1 ) protocol , quantum speed-up can be achieved using factorized states . as shown in ref. , the speed-up could be due to the presence of another quantifier , the so-called quantum discord , which is defined as the difference between two quantum analogs of the classical mutual information . the relationship between entanglement and quantum discord is not completely understood , since they seem to capture different properties of the states . in ref. , it is shown that even if quantum discord and entanglement are equal for pure states , the mixed states maximizing discord in a given range of classical correlations are actually separable . the relation between discord and entanglement has been discussed in refs. , and an operational meaning in terms of state merging has been proposed in . recently , the use of quantum discord has been extended to multipartite states . a measure of genuinely multipartite quantum correlations has been introduced in . in ref. , an attempt to generalize the definition of quantum discord to multipartite systems based on a collective measure has been proposed . in ref. , the authors proposed different generalizations of quantum discord depending on the measurement protocol performed . entanglement in multipartite systems has been shown to obey monogamy in the case of qubits and continuous variables . monogamy means that if two subsystems are highly correlated , the correlation between them and other parties is bounded . it has been proved that , unlike entanglement , quantum discord is in general not monogamous , and it has also been suggested , based on numerical results , that w states are likely to violate monogamy . in this brief report , using the koashi-winter formula , we prove that , for pure states , quantum discord and entanglement of formation obey the same monogamy relationship , while , for mixed states , distributed discord exceeds distributed entanglement . then , we give an analytical proof of the violation of monogamy by all w states . furthermore , we suggest the use of the interaction information as a measure of monogamy for the mutual information . finally , as a further application of the koashi-winter equality , we prove the conjecture on upper bounds of quantum discord and classical correlations formulated by luo _ et al . _ for rank 2 states of two qubits .

in classical information theory , the mutual information between parties and is defined as , where is the shannon entropy and is the conditional shannon entropy of after has been measured . an equivalent formulation can be obtained using bayes' rule , and classically the two expressions coincide .
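as a sanity check of this classical identity, the short snippet below computes both expressions for a joint distribution of two bits (the probabilities are made up) and confirms that they agree; as discussed next, their quantum counterparts do not.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# joint distribution p(a, b) of two correlated bits (illustrative numbers)
p_ab = np.array([[0.40, 0.10],
                 [0.05, 0.45]])

p_a = p_ab.sum(axis=1)
p_b = p_ab.sum(axis=0)
h_a, h_b, h_ab = shannon(p_a), shannon(p_b), shannon(p_ab.ravel())

# conditional entropy H(A|B) = sum_b p(b) H(A | B = b)
h_a_given_b = sum(p_b[j] * shannon(p_ab[:, j] / p_b[j]) for j in range(2))

print("H(A) - H(A|B)        =", round(h_a - h_a_given_b, 6))
print("H(A) + H(B) - H(AB)  =", round(h_a + h_b - h_ab, 6))   # identical classically
```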
on the other hand ,if we try to quantize these quantities , replacing probabilities with density matrices and the shannon entropy with the von neumann entropy , their counterparts differ substantially .the quantum mutual information is defined as where is the von neumann entropy and are the reduced states after tracing out party , while the quantized version of measures the classical part of the correlations and it is given by ,\label{clas}\ ] ] with the conditional entropy defined as , and where is the density matrix after a positive operator valued measure ( povm ) has been performed on . in some cases , orthogonal measurements are enough to find the maximum in eq .( [ clas ] ) .quantum discord is thus defined as the difference between and : .\ ] ] quantum discord can be considered as a measure of how much disturbance is caused when trying to learn about party when measuring party , and has been shown to be null only for a set of states with measure zero . both classical correlations and quantum discord are asymmetric under the exchange of the two sub - parties ( i.e. , and ) .while is invariant under local unitary transformations and can not increase under local operations and classical communication , is not monotonic under local operations .for instance , in , it is shown how to create quantum correlations under the action of local noise .given a measure of correlation , monogamy implies a tradeoff on bipartite correlations distributed along all the partitions ( ) : coffman , kundu , and wootters showed that this property applies to three - qubit states once the square of the concurrence ( ) plays the role of .the extension of the proof to -partite ( ) qubit systems has been given in ref .as pointed out in , however , entanglement of formation does not satisfy the criterion given in eq .( [ monog ] ) . then, even if people usually refers to entanglement as a monogamous quantity , it would be worth paying attention to the entanglement monotone in use . in trying to apply this property to quantum discord , prabhu _ et al ._ showed that monogamy is obeyed if and only if the interrogated interaction information is less than or equal to the unmeasured interaction information .then , the authors found through numerical simulations that the subset of w states are not monogamous , in contrast with greenberger - horne - zeilinger ( ghz ) states , which can be monogamous or not . here , we prove that , for pure states , the monogamy equations for quantum discord and for entanglement of formation coincide .let us consider the pure tripartite state .quantum discord of any of the couples of sub - parties is given by , where , while .as shown in ref . , the following relationships between conditional entropies and entanglement of formation ( ) holds : this formula allows one to write using eq .( [ kw ] ) , monogamy equation is then equivalent to where has been employed .the equality of conservation law for distributed entanglement of formation and quantum discord , even if not associated to monogamy , was already noticed by fanchini _et al . _because of this equivalence , the violation of eq .( [ entmon ] ) by w states , whose numerical evidence has been given in ref . , admits an analytical proof .let us recall that , apart from local operations , a generic pure state of three qubits , belonging to the ghz class , can be written as .the family of w states is obtained fixing . 
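as a brief aside before returning to the three-qubit analysis, the definitions given at the beginning of this section can be evaluated numerically. the sketch below estimates the quantum mutual information and the discord of a two-qubit state by scanning projective measurements on the second qubit; restricting to orthogonal measurements, the werner-state example, and the grid resolution are all illustrative assumptions of the sketch (as noted above, orthogonal measurements are often, but not always, sufficient).

```python
import numpy as np

def von_neumann(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

def partial_trace(rho, keep):
    """partial trace of a two-qubit density matrix; keep = 0 (A) or 1 (B)."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def discord_measuring_B(rho, n_grid=60):
    """discord of A given projective measurements on B, by brute-force scan."""
    rho_a, rho_b = partial_trace(rho, 0), partial_trace(rho, 1)
    mutual = von_neumann(rho_a) + von_neumann(rho_b) - von_neumann(rho)
    best_j = -np.inf
    for theta in np.linspace(0.0, np.pi, n_grid):
        for phi in np.linspace(0.0, 2.0 * np.pi, n_grid):
            v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
            cond = 0.0
            for w in (v, np.array([-np.conj(v[1]), np.conj(v[0])])):
                pi_b = np.kron(np.eye(2), np.outer(w, w.conj()))   # measure qubit B
                p = np.real(np.trace(pi_b @ rho))
                if p > 1e-12:
                    rho_a_post = partial_trace(pi_b @ rho @ pi_b, 0) / p
                    cond += p * von_neumann(rho_a_post)
            best_j = max(best_j, von_neumann(rho_a) - cond)
    return mutual - best_j

# example: Werner state  rho = w |Phi+><Phi+| + (1 - w) I/4
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = 0.7 * np.outer(phi_plus, phi_plus) + 0.3 * np.eye(4) / 4
print("quantum discord (approx.) =", round(discord_measuring_B(rho), 4))
```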
as shown by coffman , kundu , and wootters , w states have zero three - tangle; that is , they obey to show that eq .( [ concw ] ) implies , it is enough to note that is a concave function of , since $ ] , where is the binary entropy , and both and admit values between and .then , if we apply the mapping from to to the three elements of eq .( [ concw ] ) , we find if ( i.e. for biseparable states ) and otherwise . as noticed in ref . , ghz states can be monogamous or not .actually , a numerical analysis shows that about half of them do not respect monogamy . to see a transition from observation to violation of monogamy , we consider the family of states .note that is the maximally entangled ghz state , while coincides with the maximally entangled w state . for , qubit is factorized , and ( [ discmon ] ) becomes an equality . in fig .[ figure ] , is plotted as a function of for different values of . as expected , for any , there is a threshold for above which the states are monogamous . , as quantified by , as a function of for different values of ( see the main text ) .states are monogamous where the respective curves are positive .black solid line is for , black dashed line is for , orange ( gray ) solid line is for , and orange ( gray ) dashed line is for .according to the analytical proof , for w states ( ) is always positive ( i.e. , these states are never monogamous ) . for ghz states , there exists a threshold value of above which monogamy is satisfied .this threshold goes to zero for vanishing , since in this case qubit becomes factorized and all the related entanglement quantifiers vanish as well.,title="fig:",width=302 ] + once the assumption of pure state is relaxed , eq .( [ kw ] ) becomes an inequality : .then , , or , using the subadditivity of von neumann entropy , thus , for mixed states , monogamy of quantum discord has a stricter bound than monogamy of entanglement .the search for monogamy of correlations can be extended to .it is actually easy to note that , for pure tripartite states , monogamies of quantum discord and classical correlations are complementary , that is . to prove it , it is sufficient to observe that mutual information obeys the generalization of eq .( [ minfmon ] ) to mixed states presents some interesting aspects .we have where is called correlation information . in the language of density matrices , it can be defined as where are all the possible strings containing integer numbers between and , with for any , and counts the length of each string .for instance , for a bipartite system , coincides with the ordinary mutual information , and in the tripartite case .it can be checked that , for odd , for any pure state . in classical information theory ,the interaction information has been introduced with the aim of measuring the information that is contained in a given set of variables and that can not be accounted for considering any possible subset of them .it should then measure genuine -partite correlations .actually , can be negative .thus , according to the criteria given , for instance , in ref . , it can not be used as a correlation measure .then , its meaning is widely debated .equation ( [ minfxmon ] ) suggests that it plays the role played by the tangle in the distribution of , since it is invariant under index permutation , and it can be called a `` mutual information tangle . '' when is negative , it quantifies the lack of monogamy of the mutual information . 
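The vanishing of the three-tangle for W states, i.e. the Coffman–Kundu–Wootters relation saturating with equality, can be checked numerically. The sketch below computes the tangle of A with the pair (BC) for the pure W state (4 det rho_A) and the two pairwise squared concurrences via the Wootters formula; the qubit-ordering convention (qubit 0 as the most significant bit) is my own.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence_2q(rho):
    """Wootters concurrence of a two-qubit (possibly mixed) state."""
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def ptrace_qubits(rho, keep, n):
    """Reduce an n-qubit density matrix to the qubits listed in `keep`."""
    r = rho.reshape([2] * (2 * n))
    m = n
    for q in sorted([q for q in range(n) if q not in keep], reverse=True):
        r = np.trace(r, axis1=q, axis2=q + m)
        m -= 1
    d = 2 ** len(keep)
    return r.reshape(d, d)

# |W> = (|100> + |010> + |001>)/sqrt(3), qubit 0 = most significant bit
w = np.zeros(8)
w[[4, 2, 1]] = 1 / np.sqrt(3)
rho = np.outer(w, w)

C2_A_BC = 4 * np.real(np.linalg.det(ptrace_qubits(rho, [0], 3)))      # tangle of A with (BC)
C2_AB = concurrence_2q(ptrace_qubits(rho, [0, 1], 3)) ** 2
C2_AC = concurrence_2q(ptrace_qubits(rho, [0, 2], 3)) ** 2
print(C2_A_BC, C2_AB + C2_AC)   # both ~0.889, so the three-tangle of the W state vanishes
```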
as shown by prabhu _, monogamy of discord relies on the relationship between and its interrogated version . , it has been conjectured that , given a bipartite state , defined in the hilbert space , the following upper bounds for quantum discord and classical correlations could exist : , \\{\cal j}_{a , b}&\le & \min [ s(\varrho_{a}),s(\varrho_{b})].\end{aligned}\ ] ] it is trivial to prove the existence of such an upper bound for entanglement of formation , its definition being based on the convex roof construction .if inequality ( [ discmix ] ) were an equality , it would be easy to extend the proof to discord .actually , inequality ( [ discmix ] ) is telling us that distributed discord could exceed distributed entanglement , and these upper bounds could be violated . while a a partial proof of the conjecture has been given in ref . using the language of quantum operations , a full proof for the case of rank 2 states of two qubits can be given using the koashi - winter formula . by applying a purification procedure ,we add an ancillary hilbert space and write a pure tripartite state such that . since has rank 2 , is a three - qubit state . as a consequence of eq .( [ kw ] ) , we have then , inequalities and are immediately verified .let us now separately discuss the cases and .in the first case , we only need to prove . in ref . , using the invariance under index permutation of the three tangle introduced by coffman , kundu , and wootters , we proved that , for the case of three qubits , if , then .this chain rule implies , and then as we wanted to prove .assuming now , we are left to show that .writing explicitly , we use the chain rule to write and to obtain this ends the proof .we have studied the monogamy properties of pure tripartite state .we have shown that quantum discord and entanglement of formation obey the same monogamy relationship . applying this equivalence to the case of three qubits ,we have shown , by analytical demonstration , that , for all the w states , quantum discord is not monogamous , in contrast with ghz states , where discord can be monogamous or not . in an example , we have shown the transition from monogamy to absence of monogamy for a subfamily of ghz states .the equivalence between quantum discord and entanglement of formation concerning monogamy raises a subtle question that it is worth considering . while people usually claim that , for qubits , entanglement is monogamous , all we know is that there exists an entanglement monotone ( the square of the concurrence ) that is in fact monogamous . by analogy , we can say that the results of ref . do not exhaust the search for monogamy of quantum discord and other correlations , where monogamous monotone indicators could be found . using the connection between discord end entanglement of formation, we have also shown that in the case of rank 2 states of two qubits , as conjectured by luo _et al . _ , quantum discord and classical correlations are bounded from above by the single - qubit von neumann entropies . a full proof can not be given because , for mixed states , the equality of conservation law for distributed entanglement of formation and quantum discord is broken .g. adesso and f. illuminati , new j. phys . *8 * , 15 ( 2006 ) ; g. adesso , a. serafini , and f. illuminati , phys . rev .a * 73 * , 032345 ( 2006 ) ; t. hiroshima , g. adesso , and f. illuminati , phys . rev . lett . * 98 * , 050503 ( 2007 ) .s. campbell , t. j. g. apollaro , c. di franco , l. banchi , a. cuccoli , r. vaia , f. 
plastina , and m. paternostro , arxiv:1105.5548 ; f. ciccarello and v. giovannetti , arxiv:1105.5551 ; a. streltsov , h. kampermann , and d. bruss , phys . rev . lett . * 107 * , 170502 ( 2011 ) . a. acín , a. andrianov , l. costa , e. jané , j. i. latorre , and r. tarrach , phys . rev . lett . * 85 * , 1560 ( 2000 ) ; w. dür , g. vidal , and j. i. cirac , phys . rev . a * 62 * , 062314 ( 2000 ) ; a. acín , d. bruss , m. lewenstein , and a. sanpera , phys . rev . lett . * 87 * , 040401 ( 2001 ) .
in contrast with entanglement , as measured by concurrence , in general , quantum discord does not possess the property of monogamy , that is , there is no tradeoff between the quantum discord shared by a pair of subsystems and the quantum discord that both of them can share with a third party . here , we show that , as far as monogamy is considered , quantum discord of pure states is equivalent to the entanglement of formation . this result allows one to analytically prove that none of the pure three - qubit states belonging to the subclass of w states is monogamous . a suitable physical interpretation of the meaning of the correlation information as a quantifier of monogamy for the total information is also given . finally , we prove that , for rank 2 two - qubit states , discord and classical correlations are bounded from above by single - qubit von neumann entropies .
let us consider the initial - boundary value problem for the korteweg - de vries ( kdv ) equation with a decaying initial datum existence and uniqueness of real - valued , classical solutions can be proved via the inverse scattering transform , introduced by green , gardner , kruskal and miura in their seminal work .the long time behavior of these last ones has been extensively investigated in the literature ( ) .the solutions are known to eventually decompose into a certain number of solitons , travelling to the right , plus a radiation part , propagating to the left . in this paperwe wish to consider the so - called _ soliton region _, formed by those points of the -plane satisfying , for some fixed constant . in order to detail more about existing results ,let us recall that the solutions of ( [ cp ] ) are uniquely individuated by the scattering data of the operator associated with the initial datum .these last ones consist of a finite number of eigenvalues , , with , of the corresponding norming constants , and of the reflection coefficient .the long time asymptotics of solutions of ( [ cp ] ) in the soliton region reads as follows here the phase - shifts are given by }.\end{aligned}\ ] ] the term in ( [ nuova_sol ] ) is know to be small for large , its magnitude depending on the smoothness and decay properties of .this formula was established by hirota , tanaka and wadati and toda independently , for vanishing reflection coefficient .the general case was first treated by tanaka and shabat ( ) .more recently , grunert and teschl proved such asymptotic behavior for initial data with lower regularity .their approach relies on the steepest descent analysis of riemann - hilbert problem [ cesare ] ( see next section ) via a decomposition of the nonanalytic reflection coefficient into an analytic approximant and a small rest . in this paperwe examine the same riemann - hilbert problem via the modern dbar method , introduced by miller and mclaughlin in and . in particular , we establish a better estimate of for a larger class of initial data . our main result is the following [ hauptsatz ] let the reflection coefficient associated to the initial datum belong to , for some integer .assume that and its first derivatives tend zero at .moreover , let belong to the wiener algebra on the real line .that is , assume that this last one is the image , via fourier transform , of some function in .fix .then there exists a constant such that for all sufficiently large and .notice that these hypotheses are satisfied by all initial data admitting moments ( ) : theorem [ hauptsatz ] is achieved via a careful treatment of the imaginary part of the phase defined in ( [ def_phi ] ) .after the standard procedure , the jump ( [ salto ] ) of the riemann - hilbert problem [ cesare ] across the real axis is decomposed and displaced partly below and partly above it .accordingly , develops a non - vanishing real part , providing a decay of the decomposed jumps towards the identity matrix .the novelty here is that subsequently we reconsider the oscillations originating from the imaginary part of . from their analysis ,we extract additional information about the decay of the error term , corresponding exactly to our improvement of the estimates . 
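The leading term of the soliton-region asymptotics described above is a superposition of single solitons travelling with speeds 4\kappa_j^2, each carrying an asymptotic phase shift. Since the explicit formula and phase shifts are not legible in the extracted text, the sketch below takes the decay rates \kappa_j and the shifts as given inputs and only evaluates the resulting soliton train; the sign convention depends on which form of the KdV equation is used and is flagged in a comment.

```python
import numpy as np

def soliton_region_profile(x, t, kappas, phases):
    """
    Leading-order soliton-region approximation: a superposition of single solitons
    2*kappa_j^2 * sech^2( kappa_j * (x - 4*kappa_j^2*t - delta_j) ).
    kappas: decay rates kappa_j (eigenvalues -kappa_j^2 of the Schroedinger operator);
    phases: asymptotic phase shifts delta_j, treated here as given inputs.
    For the convention u_t - 6 u u_x + u_xxx = 0 the solitons carry the opposite sign.
    """
    u = np.zeros_like(x, dtype=float)
    for kappa, delta in zip(kappas, phases):
        u += 2.0 * kappa**2 / np.cosh(kappa * (x - 4.0 * kappa**2 * t - delta))**2
    return u

x = np.linspace(0, 400, 2000)
print(soliton_region_profile(x, t=20.0, kappas=[1.0, 0.7], phases=[0.0, 1.5]).max())
```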
to our best knowledge ,the idea of this last step is new in the literature .+ further advantages of our approach are the following .first of all , our analysis requires less sophisticated technical means , employing basically calculus at an undergraduate level and the van der corput lemma . using them we provide simpler and more explicit expressions for ( see formulas ( [ ciliegio ] ) , ( [ saldini ] ) and the following ones in section [ nuova_ossessione ] ) .these ones can be easily employed for a more detailed , long time asymptotic expansion including higher order corrections .moreover , they might turn out to be useful for the analysis of analogue riemann - hilbert problems beyond the framework of integrability .this a current research interest of ours .let us fix an integer and a constant .we assume for the remaining part of the paper and prove theorem [ hauptsatz ] . the hypotheses on the reflection coefficientare also understood to hold , without further recalling them .analogously to , we produce the solution of kdv corresponding to the initial datum - or equivalently , to the associated scattering data , and - via the following [ cesare ] find a function meromorphic away from the real axis , with simple poles at , satisfying : i _ `` jump condition '' . for every onehas where `` residue condition '' for .`` symmetry condition '' `` normalization condition '' here the phase is given by the solution of the system ( [ cp ] ) at an arbitrary time is then recovered by the formula the right - hand side being computed from the expansion proof of theorem [ hauptsatz ] consists in subsequent reformulations of the riemann - hilbert problem , till obtaining a convenient , equivalent integral equation defined on the plane . in this sectionwe remove the jump of across the real axis , exchanging it for some non - analytic behaviour on a strip around this last one .let us fix the parameter we define the following regions with and we will indicate the corresponding reflected strips w.r.t .the real axis ( see figure [ figura_striscie ] ) .let us also fix the notation for the remaining part of the paper .we wish to consider the following non - analytic extension of the reflection coefficient to the whole upper - half plane : }\cdot \chi{\left(}\frac{v}{\delta } { \right)}.\end{aligned}\ ] ] here is chosen as follows is by no means unique .any other smooth function with analogue `` cut - off '' properties would do . ] } } & 1\leq v < 2\\ 0 & v \geq 2 \end{array } \right.\end{aligned}\ ] ] notice that vanishes on . moreover , there exists a constant such that where these are the main properties motivating our choice of such extension . using ( [ def_r ] ) , one can decompose the jump matrix as follows here and mimicking the classical nonlinear steepest descent method ( , ) , we introduce riemann - hilbert problem [ cesare ] for is then equivalent to the following [ dbar_problem_per_m_tilde ] find a two dimensional , vector - valued function continuous on and differentiable with continuity as a function of two real variables away from the real axis , such that * i. * : : `` -condition '' in particular , is holomorphic on * ii . * : : `` residue condition '' .the vector - valued function has simple poles at , where it satisfies for .* iii . * : : `` symmetry condition '' * iv . 
*: : `` normalization condition '' [ remark_claudia ] the solution of the meromorphic -problem needs actually to be differentiable also on the real axis , although possibly not with continuity .this is easily deduced , using ( [ diseg_r ] ) , from the representation this is nothing else than the generalization of the cauchy integral formula for smooth functions . here is understood to be a small , compact rectangle containing the point , which for our purposes is chosen on the real line .our next goal is to remove the poles from vector . to this purpose ,we introduce in this section a model riemann - hilbert problem .an explicit expression of its solution wo nt be necessary for our analysis , but only some of its elementary properties concerning regularity and asymptotic behavior .these last ones are provided by proposition [ roseto ] .find a two times two matrix valued meromorphic function whose only poles are simple and lie in , satisfying : ii _ : : `` residue condition '' for .iii _ : : `` symmetry condition '' iv _ : : `` normalization condition '' for some constant possibly depending on and .let us remark that one can not normalize imposing it to approach the identity matrix at infinity .no solution would then exist for a set of exceptional points which accumulate in the neighborhood of the peaks of the solitons as ( see , chap .38 for more details about this kind of issues ) .normalization ( [ norm_matricial ] ) will do for our purposes .all the information we need about the model riemann - hilbert problem is contained in the following [ roseto ] there exists a unique solution to the model riemann - hilbert problem .this last one has the form }\end{aligned}\ ] ] where and the functions and and their derivative w.r.t . are bounded in the whole -plane .the constant is determined by conditions and , and has the following asymptotic behaviour here is a positive constant and },\quad\quad\quad j=1,2,\ldots , m.\end{aligned}\ ] ] ansatz ( [ aneins ] ) and ( [ anzwei ] ) follow from points and of the model riemann - hilbert problem .the `` residue conditions '' translate , concerning the vector , into a system of linear equations here indicates the m - dimensional column vector whose entries are all one .the matrices and , respectively symmetric and diagonal , are given by the constants s are defined as follows : they vary between zero and for and real .the matrix is easily proved to be positive definite . consequently also is , for all and real , and the system ( [ sist_orig ] ) has a unique solution .now , by elementary calculations the inverse of is shown to be entry - wise bounded as the entries of the diagonal matrix vary between zero and .this proves that also is bounded .differentiating ( [ sist_orig ] ) w.r.t . , one obtains where a solution for this system exists and is unique . rewriting ( [ sist_orig ] ) as the right - hand side is evidently bounded .so , in view of ( [ collegamento ] ) , also is .this results then into boundedness for .the function can also be treated similarly .the residue conditions yield in this case the system here is defined as above and now , from the `` residue conditions '' and from ( [ aneins ] ) one has so that both and its derivative w.r.t . are bounded on the whole -plane . the same is then proved for the vector , via arguments analogue to the ones above .finally , the asymptotic estimate ( [ clas]-[sic ] ) is a classical result , already available in . 
we now wish to estimate the discrepancy between and the solution of the ( matricial ) model riemann - hilbert problem . to this purpose ,let us introduce the `` error vector '' }^{-1}. \end{aligned}\ ] ] we start with a characterization of its following directly from the meromorphic -problem for .it consists in the following find a 2-dimensional vector - valued function continuous on the whole complex plane and differentiable with continuity ( as a function of two real variables ) on , such that * i _ * : : `` -condition '' . for all one has where ^{t\phi(w ) } & 0\end{array } { \right)}}\ , [ m(w)]^{-1 } & \mbox { if } { \text{im}}(w)\geq 0,\\ m(w)\,\scriptsize{{\left(}\begin{array}{cc } 0 & [ { \overline{\partial}}r(-w)]e^{-t\phi(w ) } \\ 0 & 0 \end{array } { \right)}}\ , [ m(w)]^{-1 } & \mbox { if } { \text{im}}(w)\leq 0 . \end{array}\right.\normalsize\end{aligned}\ ] ] * iii * _ : : `` symmetry condition '' .the vector - valued function is an even function * iv * _ : : `` normalization condition '' . from conditions ( [ resuno]-[norm_matricial ] ) on deduces via simple arguments of complex analysis that from formula ( [ def_e ] ) , it is then not evident that is smooth in the origin .this follows indeed from property ( [ simm_e ] ) together with the observation that is differentiable in zero ( see remark [ remark_claudia ] ) .let us now define the operator as follows }{\left(}w { \right)}:= -\frac{1}{\pi } \iint_{{\mathbb{r}}^2 } \frac{{\mathbf{e}}(s)b(s)}{w - s}{\textrm{d}}a(s).\end{aligned}\ ] ] the smooth -problem above is easily shown to be equivalent to the following integral equation ( see and for further details ) .this is the final reformulation of the riemann - hilbert problem [ cesare ] , on which we will perform our analysis starting from the next section . in this sectionwe study existence and uniqueness of a solution for equation ( [ mostar]-[sarajevo ] ) .[ proposizione_principale ] there exists a constant , depending on , such that for all and all .fix such and and let belong to . by elementary algebraic manipulations one obtains on the matrix vanishes , because does . on onehas in view of this last one , of ( [ diseg_r ] ) and of ( [ norm_matricial ] ) one also has on this last one further simplifies to it follows that }^{-\frac{1}{2}}}_{{l^2{\left(}{\mathbb{r } } , { \textrm{d}}a { \right ) } } } { \textrm{d}}b\\ & \leq \frac{c_2}{2 } \cdot\sqrt[4]{\frac{\pi^3}{3\delta}}e^{-c_0\delta t}\int_{\delta}^{2\delta } \frac{{\textrm{d}}b}{\sqrt{\abs*{b - v}}}\\ & \leq c_3 \cdot\sqrt[4]{\delta}\ , e^{-c_0\delta t}.\end{aligned}\ ] ] let us now consider the region . in view of ( [ stima_insieme ] ) one has }^{-\frac{1}{2}}}_{{l^2{\left(}{\mathbb{r } } , { \textrm{d}}a { \right)}}}\nonumber\\ & \leq c_5 \cdot \int_0^{\delta } \frac{b^{n-1 } e ^{-c_0 tb}}{\sqrt[4]{tb}\sqrt{\abs*{b - v } } } { \textrm{d}}b \nonumber \\ & \leq \frac{c_6}{t^{n-\frac{1}{2}}}.\end{aligned}\ ] ] the regions and are treated analogously .this completes the proof .a direct consequence of the analysis above is the following , fundamental [ corollario_sano ] the integral equation ( [ mostar]-[sarajevo ] ) has a unique solution in , whenever is sufficiently large and is greater or equal than . moreover , for such solution , one has uniformly with respect to . 
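The corollary rests on inverting the operator equation E = (1,0) + J[E] by a Neumann series once the operator-norm estimate holds. The sketch below, under the assumption that the discretized operator has small norm (as it does for large t in the argument above), illustrates that inversion on a generic stand-in kernel; the actual dbar/Cauchy-transform kernel involving the model Riemann–Hilbert solution is not reproduced here.

```python
import numpy as np

def neumann_solve(apply_J, f0, tol=1e-12, max_iter=200):
    """
    Solve E = f0 + J[E] by the Neumann series E = sum_k J^k[f0],
    which converges whenever the operator norm of J is below 1.
    """
    E = f0.copy()
    term = f0.copy()
    for _ in range(max_iter):
        term = apply_J(term)
        E += term
        if np.max(np.abs(term)) < tol:
            break
    return E

# toy discretization: a small-norm matrix standing in for the dbar/Cauchy operator
rng = np.random.default_rng(0)
n = 200
K = 0.002 * rng.standard_normal((n, n))         # spectral norm << 1 mimics the large-t regime
f0 = np.zeros(n)
f0[0] = 1.0                                     # plays the role of the constant vector (1, 0)
E = neumann_solve(lambda v: K @ v, f0)
print(np.max(np.abs(E - np.linalg.solve(np.eye(n) - K, f0))))   # agreement with a direct solve
```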
in view of ( [ stima_inf ] ) , one can invert the operator by means of neumann series .this easily yields both parts of the thesis .this last result guarantees that the integral equation ( [ mostar]-[sarajevo ] ) is an equivalent characterization of the error vector introduced in ( [ def_e ] ) .such an equivalence will be exploited in order to extract as much explicit information as possible about its asymptotic behavior . from the integral equation ( [ mostar ] - [ sarajevo ] )it is easy to deduce that the error vector has the following asymptotic expansion where here is understood to approach infinity along the imaginary axis . in this regime ,the nonalaytic vector coincides with .so ( [ def_e ] ) yields plugging ( [ norm_matricial ] ) and ( [ drina ] ) into ( [ costarica ] ) gives for the first component of the vector defined in ( [ tisa ] ) . plugging this last identity into ( [ sava ] ) ,one obtains comparing with ( [ nuova_sol ] ) and recalling ( [ clas ] ) yields for some constant . estimating the magnitude of this quantity will complete the proof of theorem [ hauptsatz ] . to that purpose, we will need the following [ ionio ] the solution of the integral equation ( [ mostar]-[sarajevo ] ) is differentiable with respect to for all positive and all sufficiently large .there exists a constant , dependent on , such that for all sufficiently large and all .the -norm above is understood to be computed w.r.t .the complex variable . solving the integral equation ( [ mostar]-[sarajevo ] ) by neumann series , one obtains }(w;x , t).\end{aligned}\ ] ] for each term of the series on the right hand side , the recursive formula }(w;x , t ) = { \mathbb{j}}_x{\left\ { } \newcommand{\rb}{\right\}}{\mathbb{j}}^{n-1}{\left[}(1,0 ) { \right]}(w;x , t ) \rb + { \mathbb{j}}{\left\ { } \newcommand{\rb}{\right\}}{\frac{\partial}{\partial x}}{\mathbb{j}}^{n-1}{\left[}(1,0 ) { \right]}(w;x , t ) \rb , & & n\geq 1 , \end{aligned}\ ] ] holds , where }(w ) : = -\frac{1}{\pi}\iint_{{\mathbb{r}}^2}\frac{{\left[}{\frac{\partial}{\partial x}}b(s;x , t){\right]}{\mathbf{e}}(s)}{w - s } { \textrm{d}}a(s ) & & { \mathbf{e}}\in { \textrm{l}^\infty ( \mathbb{r}^2 ) } .\end{aligned}\ ] ] by analogue calculations as in the proof of theorem [ proposizione_principale ] , ( [ fortuna ] ) yields the estimate }(w;x , t)}_{\infty } \leq n\cdot { \left(}\frac{c}{t^{n-\frac{1}{2 } } } { \right)}^ n , & & x\geq c_0 t\end{aligned}\ ] ] valid for some constant , all positive integers and all sufficiently large .as a consequence , }(w;x , t)\end{aligned}\ ] ] converges uniformly in this -region , and coincides with the derivative with respect to of the right - hand side of ( [ fleons ] ) .this gives differentiability of the left - hand side and estimate ( [ gemona ] ) .the main point here is to control the first term in the right - hand side of ( [ ciliegio ] ) . using ( [ sila ] ), we get for this one the expression }{\textrm{d}}a(s).\end{aligned}\ ] ] taking the derivative under the sign of integral and applying lemma [ ionio ] and corollary [ corollario_sano ] one obtains where }\rb { \textrm{d}}a(s ) .\end{aligned}\ ] ] now , in view of lemma [ roseto ] , one can determine a positive constant such that for . 
so that }{\textrm{d}}b\\ & \leq c_2 \int_{0}^{2\delta } b^n{\left[}\frac{1}{\sqrt{tb } } + \frac{1}{{\left(}tb { \right)}^{\frac{3}{2 } } } { \right]}e^{-c_0 tb } { \textrm{d}}b\\ & = \frac{c_2}{t^{n+1}}\int_0^{+\infty } { \tilde{b}}^n { \left(}\frac{1}{\sqrt{{\tilde{b } } } } + \frac{1}{{\tilde{b}}^{\frac{3}{2 } } } { \right)}e^{-c_0 { \tilde{b } } } { \textrm{d}}{\tilde{b}}\,\ , \leq \,\, c_3 t^{-n-1}\end{aligned}\ ] ] substiting in ( [ celoria ] ) one obtains where the asymptotic estimate is to be understood as uniform with respect to greater or equal than . again by means of ( [ banda_semplice ] ) one easily determines a positive constant such that for all suffciently large and greater or equal than .we then turn to analyse , in this same -regime , an explicit expression for the integrand above is provided by where and the function was defined in ( [ aneins ] ) .one can then rewrite ( [ brasile ] ) as follows put from the original definition ( [ def_phi ] ) of , separating real and imaginary parts , }.\end{aligned}\ ] ] in view of our assumptions on the reflection coefficient , there exists a function such that substituting in ( [ kolmogorov ] ) and using fubini s theorem , one obtains then }e^ { 2ita{\left[}4a^2 + { \left(}{\frac{x}{t}}- 12 b^2 { \right)}- \frac{\rho}{2 t } { \right ] } } { \textrm{d}}a \rb { \textrm{d}}\rho\end{aligned}\ ] ] put }e^{ 2ita{\left[}4a^2 + { \left(}{\frac{x}{t}}- 12 b^2 { \right)}- \frac{\rho}{2 t } { \right ] } } { \textrm{d}}a . \end{aligned}\ ] ] to study this integral we use the van der corput lemma ( see , pp 334 ) .we obtain in this way that there exists a ( universal ) constant such that }}_{l^1({\mathbb{r}},{\textrm{d}}a ) } \rb\end{aligned}\ ] ] from explicit expression ( [ aneins ] ) one deduces the existence of a constant such that uniformly for real and .it immediately follows that and that }}_{l^1({\mathbb{r}},{\textrm{d}}a ) } & \leq c_{7}{\left[}\int_{-\infty}^{+\infty } e^{-24tba^2}{\textrm{d}}a + \int_{-\infty}^{+\infty } tba^2 e^{-24tba^2 } { \textrm{d}}a { \right]}\\ & = c_{7}{\left[}\frac{1}{\sqrt{tb}}\int_{-\infty}^{\infty}e^{-24{\tilde{a}}^2}{\textrm{d}}{\tilde{a}}+ \frac{1}{\sqrt{tb}}\int_{-\infty}^{+\infty } { \tilde{a}}^2 e ^{-24{\tilde{a}}^2 } { \textrm{d}}{\tilde{a}}{\right]}\,\leq \,\frac{c_{8}}{\sqrt{tb}}.\end{aligned}\ ] ] via ( [ bombieri ] ) this yields by elementary estimates in ( [ cicerone ] ) , then , here the constant is understood to depend also on the -norm of .this treatment of integral was inspired by , lemma 5.1 . finally , substituting according to this last one in ( [ tricomi ] ) , one obtains plugging this inequality and ( [ cile ] ) into ( [ saldini ] ) gives the thesis .research partially supported by the austrian science fund ( fwf ) under grant no .y330 and by the indam group for mathematical physics .the author wishes to thank gerald teschl , ira egorova and spyros kamvissis for valuable discussions .its , a.r . , asymptotic behavior of the solution of the nonlinear schr'odinger equation , and isomonodromic deformations of systems of linear differential equations , _ dokl .sssr _ 261 ( 1981 ) 14 - 18 ( in russian ) ; _ soviet ._ 24 ( 1982 ) , 452 - 456 ( in english ) .mclaughlin , kt - r ., miller p.d . ,the dbar steepest descent method and the asymptotic behavior of polynomials orthogonal on the unit circle with fixed and exponentially varying nonanalytic weights , _ imrp int ._ 2006 , art .i d 48673 , 177 ..
we address the problem of long - time asymptotics for the solutions of the korteweg - de vries equation under low regularity assumptions . we consider decreasing initial data admitting only a finite number of moments . for the so - called `` soliton region '' , an improved asymptotic estimate is provided , in comparison with the one in . our analysis is based on the dbar steepest descent method proposed by p. miller and k. t.-r . mclaughlin .
the grist project ( ) is enabling astronomers and the public to interact with the grid projects that are being constructed worldwide , and bring to flower the promise of easy , powerful , distributed computing .our objectives are to understand the role of service - oriented architectures in astronomical research , to bring the astronomical community to the grid particularly teragrid , and to work with the national virtual observatory ( nvo ) to build a library of compute - based web services . the scientific motivation for grist derives from creation and mining of wide - area federated images , catalogs , and spectra. an astronomical image collection may include multiple pixel layers covering the same region on the sky , with each layer representing a different waveband , time , instrument , observing condition , etc .the data analysis should combine these multiple observations into a unified understanding of the physical processes in the universe .the familiar way to do this is to cross - match source lists extracted from different images .however , there is growing interest in another method of federating images that reprojects each image to a common set of pixel planes , then stacks images and detects sources therein .while this has been done for years for small pointing fields , we are using the teragrid to perform this processing over wide areas of the sky in a systematic way , using ( pq ) survey data .we expect this `` hyperatlas '' approach will enable us to identify much fainter sources than can be detected in any individual image ; to detect unusual objects such as transients ; and to deeply compare ( e.g. , using principal component analysis ) the large surveys such as sdss , 2mass , dposs , etc .( williams et al .2003 ) .grist is helping to build an image - federation pipeline for the palomar - quest synoptic sky survey ( djorgovski et al .2004 ) , with the objectives of mining pq data to find high redshift quasars , to study peculiar variable objects , and to search for transients in real - time ( mahabal et al .our pq processing pipeline will use the teragrid for processing and will comply with widely - accepted data formats and protocols supported by the vo community .the grist project is building web and grid services as well as the enabling workflow fabric to tie together these distributed services in the areas of data federation , mining , source extraction , image mosaicking , coordinate transformations , data subsetting , statistics histograms , kernel density estimation , and r language utilities exposed by services ( graham et al .2004 ) , and visualization .composing multiple services into a distributed workflow architecture , as illustrated in figure [ fig : workflow ] , with domain experts in different areas deploying and exposing their own services , has a number of distinct advantages , including : * proprietary algorithms can be made available to end users without the need to distribute the underlying software . *software updates done on the server are immediately available to all users .* a particular service can be used in different ways as a component of multiple workflows . *a service may be deployed close to the data source , for efficiency .interactive deployment and control of these distributed services will be provided from a workflow manager . 
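To illustrate how such a service composition might look from a client, here is a small sketch that chains two Grist-style services (mosaicking, then source extraction) over HTTP. The endpoints, payload fields and response keys are hypothetical placeholders: the actual Grist service interfaces are not specified in the text above.

```python
import requests

# Hypothetical endpoints for two Grist-style services (placeholders, not real URLs).
MOSAIC_URL = "https://example.org/grist/mosaic"
EXTRACT_URL = "https://example.org/grist/extract"

def run_workflow(image_urls, band):
    """Chain two services: build a mosaic from the input images, then extract sources."""
    mosaic = requests.post(MOSAIC_URL,
                           json={"images": image_urls, "band": band},
                           timeout=600).json()
    sources = requests.post(EXTRACT_URL,
                            json={"mosaic_url": mosaic["result_url"]},
                            timeout=600).json()
    return sources["catalog_url"]

# print(run_workflow(["https://example.org/pq/tile1.fits"], band="z"))
```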
we expect to use nvo services for data access images , catalogs , and spectra as well as the nvo registry for service discovery .as described in section [ sec : workflow ] , much of the pipeline and mining software for grist will be built in the form of web services .one of the reasons for building services is to be able to use them from a thin client , such as a web browser .however , for such services to be able to process private data or use high - end computing , there must be strong authentication of the user . the vo and grid communities are converging around the idea of x.509 certificates as a suitable credential for such authentication .however , most astronomers do not have such a certificate , and we do nt want to make them go through the trouble of getting one unless it is truly necessary .therefore , we are building services with `` graduated security '' , meaning not only that small requests on public data are available anonymously and simply , but also that large requests on private data can be serviced through the same interface . however in the latter case , a certificate is necessary .thus the service `` proves its usefulness '' with a simple learning curve , but requires a credential to be used at full - strength ( see illustration in figure [ fig : grad_sec ] ) .a key science - driven workflow we are constructing is illustrated in the schematic in figure [ fig : pq - pipeline ] .the primary objectives are to search for high redshift quasars and optical transients in data from the palomar - quest sky survey .the pipeline begins by federating multiwavelength datasets , and matching objects detected with the _ z _ filter with catalogs at other frequencies .cluster analysis performed on the resulting color - color plots ( e.g. , _i - z _ vs._ z - j _ ) yield new quasar candidates , and outliers may indicate the presence of other objects of interest .single epoch transients are indicated by objects that are detected in one filter but not others .an object that is detected in the reddest filter is of special interest since it could be a highly obscured object or a high redshift quasar . for multi - epoch transient search , illustrated in the lower part of figure [ fig : pq - pipeline ] ,we compare new data with a database of past epochs to detect new transients or other variable objects .as described above , a primary objective of the pq survey is the fast discovery of new types of transient sources by comparing data taken at different times .such transients should be immediately re - observed to get maximum scientific impact , so we are experimenting with `` dawn processing '' on the teragrid , meaning that data is streamed from the telescope to the compute facility as it is taken ( rather than days later ) .the pipeline itself is being built with streaming protocols so that unknown transients ( e.g. 
, newly identified variables or asteroids ) can be examined within hours of observation with a view to broadcasting an email alert to interested parties .grist is developing a library of interoperable grid services for astronomical data mining on the teragrid , compliant with grid and vo data formats , standards , and protocols .for ease of use , grist services are built with graduated security , requiring no more formal authentication than is appropriate for a given level of usage .grist technology is part of a palomar - quest data processing pipeline , under construction , to search for high red - shift quasars and optical transients .more information on grist can be found on our project web site at .
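The "graduated security" idea described above can be sketched as a simple authorization policy: small requests on public data are served anonymously, while large requests or requests on private data require an authenticated X.509 identity. The threshold, exception name and function below are illustrative only and are not the actual Grist implementation.

```python
# Schematic graduated-security check (illustrative, not the Grist code):
# small public-data requests are anonymous; large or private requests need an X.509 credential.

MAX_ANONYMOUS_PIXELS = 4_000_000   # illustrative threshold

class AuthorizationError(Exception):
    pass

def authorize(request_pixels, data_is_private, client_certificate=None):
    if not data_is_private and request_pixels <= MAX_ANONYMOUS_PIXELS:
        return "anonymous"
    if client_certificate is None:
        raise AuthorizationError("request requires an X.509 credential")
    # certificate validation (chain, expiry, VO membership) would go here
    return "authenticated"

print(authorize(1_000_000, data_is_private=False))       # served anonymously
# authorize(50_000_000, data_is_private=False)            # raises without a certificate
```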
the grist project is developing a grid - technology based system as a research environment for astronomy with massive and complex datasets . this knowledge - extraction system will consist of a library of distributed grid services controlled by a workflow system , compliant with standards emerging from the grid computing , web services , and virtual observatory communities . this new technology is being used to find high redshift quasars , study peculiar variable objects , search for transients in real time , and fit sdss qso spectra to measure black hole masses . grist services are also a component of the `` hyperatlas '' project to serve high - resolution multi - wavelength imagery over the internet . in support of these science and outreach objectives , the grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access , federation , mining , subsetting , source extraction , image mosaicking , statistics , and visualization .
complex network tools have been used in different fields from social , technological and biological networks , to analyze the structure and dynamics of the network .different centrality measures have been used to find the important vertices in a network e.g. , degree centrality , eigenvector centrality , katz centrality , vertex betweenness centrality , edge betweenness centrality , closeness centrality , etc . among them ,the simplest measure is the degree centrality .degree of a vertex is the number of edges connected to it .pagerank is another important measure and is used in network tasks like , information retrieval , link prediction and community detection . in 1998 , brin and page developed the pagerank algorithm to rank websites in their search engine .pagerank score , , of a vertex in an undirected and unweighted network , is defined as , where is the damping factor , also known as teleportation factor , is the adjacency matrix corresponding to the network , is the degree of the vertex and is the intrinsic , non - network contribution .many researchers have studied the significance of in pagerank . however , till now , no study has been taken place for the same for . in many casesthe value of has been taken as .localization of eigenvector is a common phenomenon in most of the networks and it is frequently suggested that the main cause of localization is due to the presence of high degree vertices .localization means that most of the weight of the centrality accumulates in a few number of vertices .recently , empirical studies in pagerank suggest that pagerank does not show localization . in this paper ,we show that depending upon intrinsic and non - network contribution , pagerank value changes . this article has been organized in the following way : in chapter [ aproof ], we show that if the intrinsic and non - network contribution of the vertices are the same or proportional to their degrees , then the pageranks remain same or proportional to the degree of the vertices .we also show that if the intrinsic and non - network contribution of the vertices is zero then pagerank values become proportional to the degrees . in chapter[ numerical ] , our study on simulated and empirical networks support our findings . on the contrary to the common beliefwe show that , for some networks , pagerank centrality can be localized and the value of can effect in the localization of the pagerank centrality . chapter [ conclusion ] concludes with some discussion .consider a simple undirected connected network of size , with adjacency matrix .the pagerank centrality of a vertex in the network is defined as : where is the damping factor ( google search engine uses ) , denotes the degree of vertex , and is the intrinsic and non - network contribution to the vertex . in matrix form ,the equation ( [ equ1 ] ) can be written as : where is the diagonal matrix with elements and .equation ( [ equ2 ] ) can be written as : since equation ( [ equ3 ] ) can be written as the sum of an infinite series : now , is a column stochastic matrix . since is undirected and connected is diagonalizable .one can write as a linear combination of the eigenvectors of , as then the equation ( [ equ5 ] ) becomes let be the eigenvalues of corresponding to the eigenvectors , respectively .then after some simple calculation , we get since , , we get therefore , the pagerank vector of a network is the linear sum of eigenvectors of . 
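A minimal numpy sketch of the defining equation x_i = beta_i + alpha * sum_j A_ij x_j / k_j may be useful here; it solves the fixed point directly as x = (I - alpha A D^{-1})^{-1} beta. The symbol `beta` is my name for the intrinsic, non-network contribution (the symbol used in the paper is not reproduced above), and alpha = 0.85 is the conventional damping value rather than one taken from the text.

```python
import numpy as np

def pagerank_with_intrinsic(A, beta, alpha=0.85):
    """
    Solve x = beta + alpha * A D^{-1} x for an undirected, unweighted network,
    where D is the diagonal matrix of degrees and beta is the intrinsic,
    non-network contribution of each vertex.
    """
    k = A.sum(axis=1)
    M = A / k[None, :]                      # column-stochastic matrix A D^{-1}
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * M, beta)

# small toy network: a path 0-1-2-3 plus the chord 1-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (1, 3)]:
    A[i, j] = A[j, i] = 1
k = A.sum(axis=1)

x_deg = pagerank_with_intrinsic(A, beta=k)   # intrinsic contribution proportional to degree
print(x_deg / k)                             # constant ratio: PageRank proportional to degree
```

The constant ratio printed at the end is exactly the behaviour proved in the next section: when beta is proportional to the degrees, the PageRank vector is proportional to the degree vector.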
if the value of is proportional to degrees , then equation ( [ equ6 ] ) becomes : where is the leading eigenvector of .note that is proportional to the degree of the network .therefore , equation ( [ equ10 ] ) reduces to hence , pagerank centrality is proportional to the degree of the network .pagerank can also be quantified successively with an initial estimation of from equation ( [ equ1 ] ) or ( equivalently equation ( [ equ2 ] ) ) .equation ( [ equ2 ] ) will be , where is any initial value of usually taken .if the intrinsic and non - network contribution is zero , then from equation ( [ remark2 ] ) , we get in this case , we take any non - zero vector as will not contribute anything .after iterative steps we have now , writing as a linear combination of the eigenvectors of , we get for some appropriate choice of .then since for all , we get in the limit . as and are constants and is proportional to degree , it follows that pagerank centrality is proportional to the degree centrality .we explore four different values of intrinsic and non - network contribution , . 1 . is inversely proportional to the degree of vertex .2 . all are equal to one . is proportional to the degree of vertex . is proportional to the square of the degree of vertex . in all the above cases ,we have taken . our numerical simulation on erds - rnyi random network ,we have observed that if the value of is proportional to the inverse of the degree of vertex then pagerank score increases for a few number of low degree vertices and for other , pagerank is stable ( see figure [ fig1](a ) ) . here` stable ' means , whether the pagerank scores are positively correlated with the degree . if is equal or proportional to the degrees of the vertices then pagerank centrality is proportional to the degree of the vertices ( see figure [ fig1](c ) ) .pagerank values are stable when is either equal to one or proportional to the square of the degree of vertex ( see figure [ fig1](b ) and figure [ fig1](d ) ) . in comparison with random network , small - world and scale - free network are stable with any value of that we have considered ( see figure [ fig2 ] and figure [ fig3 ] ) .however , in real - world networks , the results are not consistent for the above mentioned cases . in blog network, we have observed that pagerank score fluctuates when is inversely proportional to the degree of vertex ( see figure [ figblog](a ) ) .when all are equal to one , pagerank value fluctuates but less than that in the previous case ( see figure [ figblog](b ) ) .when is proportional to the degree of vertex , and pagerank score correlate positively ( see figure [ figblog](c ) ) .pagerank values are stable when is proportional to the square of the degree of vertex ( see figure [ figblog](d ) ) . in p2p ( peer - to - peer ) network ,we have seen high fluctuation of pagerank value for some high degree vertices when is inversely proportional to the degree of a vertex ( see figure [ figp2p](a ) ) .similar observation is also found when all are equal to one ( see figure [ figp2p](b ) ) .when is proportional to the degree of vertex , pagerank score is also proportional to the degree of vertex ( see figure [ figp2p](c ) ) .pagerank score is stable when is proportional to the square of the degree of vertex ( see figure [ figp2p](d ) ) . for email network ,we have observed strong fluctuation of pagerank value when is inversely proportional to the degree of vertex ( see figure [ figemail](a ) ) . 
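The numerical experiments described above can be reproduced in outline with the successive-substitution form of the equation and the four choices of intrinsic contribution (1/k, 1, k, k^2). The Erdős–Rényi edge probability used in the paper is not reproduced in the text, so the value 0.01 below is a placeholder; the correlation with degree printed at the end is the "stability" diagnostic discussed above.

```python
import numpy as np
import networkx as nx

def pagerank_iterative(A, beta, alpha=0.85, n_iter=200):
    """Successive substitution for x = beta + alpha * A D^{-1} x."""
    k = A.sum(axis=1)
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        x = beta + alpha * (A @ (x / k))
    return x

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)                      # placeholder parameters
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # giant component
A = nx.to_numpy_array(G)
k = A.sum(axis=1)

scenarios = {"1/k": 1 / k, "1": np.ones_like(k), "k": k, "k^2": k**2}
for name, beta in scenarios.items():
    x = pagerank_iterative(A, beta)
    print(name, np.corrcoef(x, k)[0, 1])     # correlation of PageRank with degree
```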
when all are equal to one , we have seen less fluctuation of pagerank value than that in the first case ( see figure [ figemail](b ) ) .pagerank value also becomes proportional to the vertex degree when is proportional to the same ( see figure [ figemail](c ) ) .when is proportional to the square of the degree of vertex , higher degree vertices hold higher pagerank values ( see figure [ figemail](d ) ) . in pgp ( pretty good privacy )network , some high degree vertices fluctuate in pagerank values when is inversely proportional to the degree of a vertex ( see figure [ figpgp](a ) ) .similar result has been found when all are equal to one ( see figure [ figpgp](b ) ) .when is equal or proportional to the degree of vertex , pagerank value is proportional to the same ( see figure [ figpgp](c ) ) .pagerank value increases with the increment of degree of vertices when is proportional to the square of the degree of vertex ( see figure [ figpgp ] ) .internet network is comparatively stable than other networks when is inversely proportional to the degree of vertex and when all are equal to one ( see figure [ figas](a ) and figure [ figas](b ) ) . when is equal or proportional to the degree of vertex , pagerank value is also proportional to the same ( see figure [ figas](c ) ) .when is proportional to the square of the degree of vertex , high degree vertices take the high pagerank score ( see figure [ figas](d ) ) .* localization of pagerank centrality : * the extent of localization can be measured by calculating inverse participation ratio ( ipr ) .the ipr of a pagerank vector is calculated as : when ipr is of order as , the pagerank vector is localized .if ipr , is delocalized .we observe that random and small - world network do not show localization of pagerank centrality .in these two networks , the lowest iprs observe when is inversely proportional to the degree of vertex and the highest iprs observe when is the square of the degree of the vertices ( see figure [ ipr ] ( a ) and [ ipr ] ( b ) ) .however , in scale - free network the pagerank is localized and the highest ipr is observed when is proportional to the square of the degree of a vertex and the lowest ipr is observed when all are equal to one ( see figure [ ipr ] ( c ) ) . in blogs network , the pagerank is localized and the highest ipr is observed when the value of is inversely proportional to the degree of the vertices .the lowest value of ipr is seen when is proportional to the degree of the vertex ( see figure [ ipr ] ( d ) ) .the pagerank is not localized in p2p and email networks . in both of these networks ,the highest value of iprs are observed when is proportional to the square of the degree of vertex and the lowest ipr is observed when all are equal to one ( see figure [ ipr ] ( e ) and [ ipr ] ( f ) ) .the pagerank is localized in pgp network and the highest value ipr is observed when is proportional to the square of the degree and the lowest ipr is seen when is proportional to the degree of the vertex ( see figure [ ipr ] ( g ) ) . 
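For reference, a small sketch of the inverse participation ratio used above follows; since the paper's own normalisation is not reproduced in the extracted text, the common convention of first L2-normalising the vector and then summing the fourth powers is assumed here.

```python
import numpy as np

def inverse_participation_ratio(x):
    """
    IPR under the convention Y = sum_i x_i^4 for an L2-normalised vector x.
    Y of order 1 indicates localization on a few vertices; Y of order 1/N indicates delocalization.
    """
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    return float(np.sum(x**4))

print(inverse_participation_ratio(np.ones(1000)))     # 1/N = 0.001 (delocalized)
print(inverse_participation_ratio(np.eye(1000)[0]))   # 1.0 (fully localized)
```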
in internet network , pagerank centrality is localized .the highest value of ipr is observed when is the square of the degree of the network and the lowest ipr is observed when is proportional to the degree of the vertex ( see figure [ ipr ] ( h ) ) .in this work , we have shown that pagerank centrality depends upon intrinsic , non - network contribution of the vertices .if the intrinsic and non - network contribution of the vertices is proportional to the degree then pagerank score becomes proportional to the degree of the vertices .we have also shown that if the intrinsic and non - network contribution of pagerank centrality is zero , then it also becomes proportional to the degree of the network .our numerical simulation of three network models ( erds - rnyi random network , watts - strogatz small - world network and barabsi - albert scale - free network ) shows that the pagerank scores are more resilient in the intrinsic and non - network contribution in small - world and scale - free networks than in the random network . among all real - world networks studied here ,blog , p2p , e - mail and pgp , pagerank score fluctuates heavily when intrinsic and non - network contribution are either inversely proportional to the degree or equal to one .when the value of is proportional to degree , pagerank keeps the same proportional value .high degree vertices possess high pagerank value when the is proportional to the square of the degree .internet network shows very small fluctuation of pagerank score when is either inversely proportional or equal to one and for other two values of show similar result as other real networks . on the contrary to the common beliefwe show that pagerank centrality can show localization for some networks and the extent of localization depends upon the intrinsic and non - network contribution of the pagerank centrality . in synthetic networks, we have seen that the value of ipr increases with the increase in the value of .however , in the real - world networks studied here , no such correlation is found . in blog network, ipr attains the highest value when is inversely proportional to the degree and for other networks the highest value of ipr was observed when is proportion to the square of the degree . in a nutshell , we have observed that the intrinsic and non - network contribution plays an important role in pagerank centrality calculation . here , we have considered the underlying undirected structure of all the networks .the giant component ( i.e. , the largest connected sub - network ) has been considered when the network is not connected .* three simulated networks : * erds - rnyi random network is constructed with 1000 vertices and two vertices are connected with probability . in watts - strogatz small - world network ,the number of vertices is , average degree of initial regular graph is and the rewiring probability is .barabsi - albert scale - free network is generated with vertices and size of the seed network .a new vertex is added with the existing vertices .* blogs network : * here vertices are the blogs and an edge represents a hyperlink between two blogs .the number of vertices and edges are 1222 and 16714 , respectively .the data were downloaded from konect and used in * gnutella peer - to - peer ( p2p ) network : * gnutella is a system where individual can exchange files over the internet directly without going through a web site . 
here , peer - to - peer means persons to persons .the vertices are hosts in the gnutella network topology and the edges represent connection between them .the number of vertices are 22663 and number of edges are 54693 .p2p network data were downloaded from snap and used in . *e - mail network : * e - mail network was constructed based on the exchange of mails between the members of the university of rovira i virgili ( tarragone ) . herethe vertices are the users , and two users are connected by an edge if one has sent or received email from other .the number of vertices and edges are 1133 and 5451 , respectively .the data were downloaded from and used in * pretty good privacy network : * here the vertices are the users of pretty good privacy ( pgp ) algorithm and edges are the interactions between them .the number of vertices and edges are 10680 and 24316 , respectively .the data were downloaded from and used in .* internet network : * the vertices in this network are autonomous systems ( collection of computers and routers ) and the edges are the routes taken by the data traveling between them .the number of vertices and edges are 22963 and 48436 , respectively .internet data were downloaded from and used in .the author thanks anirban banerjee and abhijit chakraborty for critical reading and suggestion .author also thanks prof .asok kumar nanda for helping to prepare the manuscript .author gratefully acknowledges the financial support from csir , india through a doctoral fellowship .
pagerank centrality is used by google for ranking web - pages to present search results for a user query . here , we have shown that the pagerank value of a vertex also depends on its intrinsic , non - network contribution . if the intrinsic , non - network contributions of the vertices are proportional to their degrees or are zero , then their pagerank centralities become proportional to their degrees . some simulations and empirical data are used to support our study . in addition , we have shown that localization of pagerank centrality depends upon the same intrinsic , non - network contribution .
the temporal nature of interference in wireless networks depends on the underlying traffic as well as the subcarrier allocations of neighbouring base stations ( which usually employ multicarrier systems like ofdm ) . in practice ,due to the bursty nature of data traffic and uncoordinated subcarrier allocations across base stations , the resulting interference at the physical layer tends to be bursty .in addition to the potential for harnessing such burstiness , feedback from the receivers is another resource available in wireless networks . with these motivations , in this paperwe study parallel ( multicarrier ) interference channels with bursty interference links and output feedback from the receivers . in and ,the problem of harnessing bursty interference was studied for a single carrier setup without feedback .a multicarrier version of was studied in . to study benefits of feedback, considered a single carrier setup with bursty interference and output feedback from the receivers . in ,bursty interference was modeled using a bernoulli random state ( instantiated i.i.d . over time ) and a complete capacity characterization was given for the linear deterministic setup . in this paper , we study the multicarrier version of _ i.e. , _ output feedback in multicarrier systems with bursty interference . since developed optimal single carrier schemes , a natural question arises in the multicarrier version : is it always optimal to treat each subcarrier _ separately _ and just copy the optimal scheme in on each subcarrier ? as the following example illustrates , such a separation may not be always optimal . [ [ toy - example ] ] toy example + + + + + + + + + + + consider two parallel symmetric -user linear deterministic interference channels ( ldics ) as shown in figure [ fig : toy_example ] .the first subcarrier has one direct link ( ) and one interfering link ( , hence ) and the second subcarrier has one direct link and three interfering links ( ) .causal output feedback is available from the receivers to the respective transmitters .bernoulli random states ] indicate the presence of interference in the first and second subcarrier respectively and are instantiated i.i.d .( over time ) from an arbitrary joint distribution . for this example, we assume the expectation of both the states to be . subcarriers . ]our goal here is to find the maximum achievable symmetric rate .using the optimal single carrier schemes in , we can achieve symmetric rate from the first subcarrier and symmetric rate from the second subcarrier .summing these rates , we can achieve a total symmetric rate .now , we will show that rate is achievable by coding _ across _ the subcarriers rather than treating the subcarriers separately .we use a block based pipelined scheme ( block length ) as follows .the transmitters always send fresh symbols in the first subcarrier ( -symbols for and -symbols for as shown in figure [ fig : toy_example ] ) . in the first subcarrier , for sufficiently large , with high probability ( w.h.p . ) only -symbols in a block get interfered at ( and -symbols at ) . at the end of a block , due to feedback from , knows exactly which of its transmitted -symbols caused interference at ( since the same state variable ] ( in the second subcarrier ) over the next time slots .due to bursty interference , w.h.p .only of these linear combinations appear at ; but this is sufficient to decode -symbols constituting the linear combinations . 
using these -symbols can now recover all the interfered -symbols in the previous block and hence achieve rate from the first subcarrier ( same for due to symmetry ) . for the remaining levels in the second subcarrier ,the following is done : lowest levels are not used ( = d_3[t]=0 ] where is a finite field .the received signal in subcarrier at is given by : = \mathbf{g}_j^{q_j - n_j}\mathbf{x}^{(i)}_j[t ] + s_j[t ] \mathbf{g}_j^{q_j - k_j}\mathbf{x}^{(i')}_j[t ] \end{aligned}\ ] ] where is a shift matrix in the terminology of deterministic channel models , ] denotes the transmitted signal on subcarrier of user , and parameters and represent the direct and interfering link strengths in subcarrier .figure [ fig : system_model ] shows the channel model for subcarrier . without loss of generality , we assume and let denote the normalized strength of the interfering signal in subcarrier .for every time instant , it is convenient to consider a subcarrier as indexed levels of bit pipes ; each bit pipe carries a symbol from . in ldsetup : and represent direct and interfering link strengths .presence of interference at time index is determined by bernoulli random variable ] , such that |^2 \leq 1 ] is gaussian noise . as in the ld setup , ] ( takes values in ) .the bernoulli random variables , s_2[t],\ldots s_m[t]\} ] instantiated i.i.d . over time . in this paper, we restrict the analysis to joint distributions with the same marginal probabilities for every ] .the transmitters are assumed to know the above statistics , but are limited to causal information on the interference realizations in the subcarriers ( through feedback ) .[ [ rate - requirements ] ] rate requirements + + + + + + + + + + + + + + + + + we consider the same rate requirements for both ld and gn setups .base station intends to send message to over time slots ( time index ) .rate ( corresponding to ) is considered achievable if the probability of decoding error is vanishingly small as .the capacity region for in the ld setup is given by the following rate inequalities , where and , and .as shown in figure [ fig : crossover ] , the shape of the capacity region depends on the value of .an intuitive interpretation of comes from our inner bounds ; implies there are enough levels in subcarriers with ( very strong interference ) to recover the interfered signals for subcarriers with ( weak interference ) and ( strong interference ) . and .the dashed line representing inequality ( [ eq : ob_3 ] ) is active only when .symmetric capacity ( ) for and is given by and respectively . ]the details of the rate tuples marked in figure [ fig : crossover ] are listed below : + : + + + + + + + : .the following rate inequalities are outer bounds on achievable in the gn setup . where and , and . in the gn setup , assuming , and ( rational ) , where denotes the symmetric capacity and .[ cor : separate ] in the ld setup , for achieving symmetric capacity , treating subcarriers separately is optimal when all or all ( and for the degenerate case of ) . for the remaining cases , coding across subcarriers achieves symmetric capacity .similarly , in the gn setup , treating subcarriers separately is gdof optimal when all or all .in this section , we focus on proofs of outer bounds in the ld and gn setups . 
we refer to outer bounds ( [ eq : causal_ld ] ) and ( [ eq : causal_gn ] ) as causal outer bounds as they account for the causal knowledge of subcarrier interference states at the transmitter , for , the symmetric capacity stems from causal outer bound ( [ eq : causal_ld ] ) ; hence the subscript `` c '' for causal . ] . for proving these causal outer bounds ,we use a subset entropy inequality by madiman and tetali which we describe in section [ sec : madiman_tetali ] , prior to the proofs .then we introduce some additional notation in section [ sec : add_notation ] followed by outer bound proofs for the ld setup ( section [ sec : ld_ob ] ) and gn setup ( section [ sec : gn_ob ] ) .we now describe a subset entropy inequality by madiman and tetali .consider a hypergraph where is a finite ground set and is a collection of subsets of .a function is called a fractional partition of if it satisfies the following condition . with the above definition , the subset entropy inequality can now be stated as follows , where is a fractional partition and the above inequality holds for any collection of jointly distributed random variables .the differential entropy version of the above inequality has the same form . to use these inequalities in our setups ,we first choose a suitable fractional partition as explained below .for , let = \left ( s_1[t],\ ; s_2[t],\ldots s_m[t ] \right ) = \mathbf{s} ] is governed by the joint probability distribution =\mathbf{s}) ] ) is chosen as set .now , we define a fractional partition as follows .=\mathbf{s}_e ) } { p } \label{eq : fractional_cover_setup}\end{aligned}\ ] ] where and denotes the joint state where only the subcarriers whose index is in set face interference .the fractional partition condition holds as follows .=\mathbf{s}_e ) } { p } = \frac{\mathbb{e}(s_j[t])}{p}=1\end{aligned}\ ] ] in section [ sec : ld_causal ] , we demonstrate the application of inequality ( [ eq : madiman_tetali ] ) , in conjunction with the fractional partition defined in ( [ eq : fractional_cover_setup ] ) , for proving outer bound ( [ eq : causal_ld ] ) .similarly , in section [ sec : causal_gn ] , for proving outer bound ( [ eq : causal_gn ] ) we use the differential entropy version of inequality ( [ eq : madiman_tetali ] ) with the same fractional partition . for notational convenience ,we use indicator functions and to denote the absence and presence of interference in subcarrier when the joint state realization across subcarriers is =\mathbf{s } \in \{0,1\}^m ] .* = ( \mathbf{y}^{(i)}_1[t],\;\mathbf{y}^{(i)}_2[t],\ldots \mathbf{y}^{(i)}_m[t]) ] : received signal ( across subcarriers ) for at time when =\mathbf{s} ] and ] . * ] .* }^{(i)}[1],\ ; \mathbf{v}_{\mathbf{s}[2]}^{(i)}[2 ] , \ldots \mathbf{v}_{\mathbf{s}[t]}^{(i)}[t]) ] : interfering signals ( across subcarriers ) at when all its subcarriers face interference at time index .this is equivalent to ] : received signal ( across subcarriers ) at when all its subcarriers are interference free at time index .this is equivalent to ] .* = ( y^{(i)}_1[t],\ ; y^{(i)}_2[t],\ldots y^{(i)}_m[t]) ] : received signal ( across subcarriers ) for at time when =\mathbf{s} ] and ] .* =(z^{(i)}_1[t],\ ; z^{(i)}_2[t],\ldots z^{(i)}_m[t]) ] : receiver noise in interfered subcarriers for at time index when =\mathbf{s} ] : receiver noise in interference free subcarriers for at time index when =\mathbf{s} ]_ i.e. 
, _ interfering signal ( if present ) plus noise , across subcarriers , for at time index when =\mathbf{s} ] : interfering signal plus noise in interfered subcarriers for at time index when =\mathbf{s} ] ) . *}^{(i)}[1 ] \oplus \mathbf{z}^{(i)}[1 ] , \ ; \mathbf{v}_{\mathbf{s}[2]}^{(i)}[2 ] \oplus \mathbf{z}^{(i)}[2 ] , \ldots \mathbf{v}_{\mathbf{s}[t]}^{(i)}[t ] \oplus \mathbf{z}^{(i)}[t]) ] : interfering signal plus noise ( across subcarriers ) at when all its subcarriers face interference at time index .this is equivalent to \oplus \mathbf{z}^{(i)}[t] ] : received signal ( across subcarriers ) at when all its subcarriers are interference free at time index .this is equivalent to ] .see appendix [ sec : ld_ob_r1 ]. using fano s inequality for , for any , there exists a large enough such that ; | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t ] ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}_{\mathbf{s}}^{(1)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber \\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]= \mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } n_j \mathbb{i}_{j \not \in \mathbf{s } } + \max(n_j , k_j ) \mathbb{i}_{j \in \mathbf{s } } \nonumber \\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{j=1}^{m } \sum _ { \mathbf{s } } n_j \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \mathbb{i}_{j \not \in \mathbf{s } } + \max(n_j , k_j ) \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \mathbb{i}_{j \in \mathbf{s } } \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) n_j + p\max(n_j , k_j ) \nonumber \\ & \quad- \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\end{aligned}\ ] ] =\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) \nonumber\\ & \stackrel{(a)}\leq n \sum_{j=1}^{m } ( 1-p ) n_j + p\max(n_j , k_j ) - p \sum_{t=1}^{n } h ( \tilde{\mathbf{v}}^{(1)}[t ] | w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } )\nonumber\\ & = n \sum_{j=1}^{m } n_j + ( \max(n_j , k_j)-n_j ) p - p \sum_{t=1}^{n } h ( \tilde{\mathbf{v}}^{(1)}[t ] | w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) \label{eq : causal_1}\end{aligned}\ ] ] where ( a ) follows by using ( [ eq : madiman_tetali ] ) for the fractional partition defined in ( [ eq : fractional_cover_setup ] ) . 
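as a sanity check on the fractional partition used in step ( a ) , the following short sketch ( not part of the proof ) verifies numerically that the weights p ( s[t ] = s ) / p sum to one over all joint states containing a fixed subcarrier j , for a joint distribution with equal marginals ; the particular mixture distribution chosen below is only an illustration .

```python
# numerical check (a sketch, not part of the proof) of the fractional-partition property
# used in step (a): the weights beta(s) = P(S[t] = s) / p sum to one over all joint states
# s that contain a fixed subcarrier j, whenever every subcarrier has marginal p.  the joint
# distribution below (a mixture of fully correlated and independent states) is only an
# illustrative choice.
from itertools import product

M, p, lam = 3, 0.4, 0.3          # number of subcarriers, marginal probability, mixture weight

def joint_pmf(s):
    """Mixture: with prob lam all subcarriers share one Bernoulli(p) state; with prob
    1 - lam the states are independent Bernoulli(p)."""
    corr = p if all(s) else (1 - p) if not any(s) else 0.0
    iid = 1.0
    for b in s:
        iid *= p if b else (1 - p)
    return lam * corr + (1 - lam) * iid

states = list(product([0, 1], repeat=M))
assert abs(sum(joint_pmf(s) for s in states) - 1.0) < 1e-12   # valid pmf

for j in range(M):
    total = sum(joint_pmf(s) / p for s in states if s[j] == 1)
    print(f"subcarrier {j}: sum of beta(s) over states containing j = {total:.6f}")   # 1.000000
```

the check succeeds for any joint distribution whose marginals all equal p , which is all that inequality ( [ eq : madiman_tetali ] ) requires of the weights here .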
using fano s inequality for , for any , there exists a large enough such that ; , \mathbf{y}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t ] ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) ( h ( \mathbf{y}_{\mathbf{s}}^{(2)}[t],\mathbf{y}_{\mathbf{s}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf { y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \quad - h ( \mathbf{y}_{\mathbf{s}}^{(2)}[t ] , \mathbf{y}_{\mathbf{s}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , w^{(2 ) } , \mathbf{s}[t]= \mathbf{s } ) ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}_{\mathbf{s}}^{(2)}[t ] , \mathbf{y}_{\mathbf{s}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \hat{\mathbf{x}}^{(2)}[t ] , \mathbf{v}_{\mathbf{s}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \hat{\mathbf{x}}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s})\nonumber\\ & = \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) \nonumber\\ & \leq \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) \nonumber\\ & = \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] | \mathbf{v}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) \label{eq : causal_2}\end{aligned}\ ] ] using inequalities ( [ eq : causal_1 ] ) and ( [ eq : causal_2 ] ) , | \tilde{\mathbf{v}}^{(1)}[t ] , w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) \nonumber\\ & \leq n \sum_{j=1}^{m } n_j + ( \max(n_j , k_j)-n_j ) p + p \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] | \tilde{\mathbf{v}}^{(1)}[t ] ) \nonumber\\ & \leq n \sum_{j=1}^{m } n_j + ( \max(n_j , k_j)-n_j ) p + p \sum_{t=1}^{n } \sum_{j=1}^{m } ( n_j - k_j)^{+}\nonumber\\ & = n p \delta + n \sum_{j=1}^{m } ( 1+p ) n_j % \sum_{i=1}^{m } \max(n_i , k_i ) \nonumber \\ & \quad + \max(n_i - k_i,0 ) - 2n_i\nonumber\\\end{aligned}\ ] ] where .the bound on follows by symmetry , and this completes the proof of outer bound ( [ eq : causal_ld ] ) .the above proof demonstrates a connection between subset entropy inequalities and bursty interference in multicarrier systems . in , we demonstrated a similar connection by using a sliding window subset entropy inequality to show tight outer bounds for the case without feedback ( in multicarrier systems with bursty interference ) .+ see appendix [ sec : ld_ob_r1_r2_sum ] .see appendix [ sec : gn_r1_ob ] . 
using fano s inequality for , for any , there exists a large enough such that ; | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t ] ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}^{(1)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]= \mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}^{(1)}[t ] |\mathbf{s}[t]=\mathbf{s } ) - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]= \mathbf{s } ) \nonumber\\ & \stackrel{(a)}\leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{y}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]= \mathbf{s } ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{z}_{\mathbf{s}^c}^{(1)}[t ] | \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}_{\mathbf{s}}^{(1)}[t ] , w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } )\nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , \mathbf{s}[t]=\mathbf{s } ) - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^m \mathbb{i}_{j \not \in \mathbf{s } } \log(\pie ) \nonumber \\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}_{\mathbf{s}}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } )- n m ( 1-p)\log(\pi e ) \nonumber \\ & \stackrel{(b)}\leq n \sum_{j=1}^{m } ( 1-p ) 
\log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + nm \log ( \pi e ) \nonumber\\ & \quad - p \sum_{t=1}^{n } h ( \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | w^{(1 ) } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) - n m ( 1-p)\log(\pi e ) \nonumber\end{aligned}\ ] ] \oplus \mathbf{z}^{(1)}[t ] | w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1}\oplus \mathbf{z}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) - n m ( 1-p)\log(\pi e ) \label{eq : gn_causal_a}\end{aligned}\ ] ] where ( a ) follows from the proof of ( [ eq : gn_looser_r1 ] ) ( see appendix [ sec : gn_r1_ob ] ) and ( b ) follows by using the differential entropy version of inequality ( [ eq : madiman_tetali ] ) for the fractional partition defined in ( [ eq : fractional_cover_setup ] ) . using fano s inequality for , for any , there exists a large enough such that ; , \mathbf{y}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t ] ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \mathbf{y}^{(2)}[t ] , \mathbf{y}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \mathbf{v}^{(1)}_{\mathbf{s}}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}_{\mathbf{s}}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \quad + \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \mathbf{z}^{(1)}_{\mathbf{s}^c}[t ] | \hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}_{\mathbf{s}}^{(1)}[t ] , \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \mathbf{v}_{\mathbf{s}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}_{\mathbf{s}}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) i(w^{(2 ) } ; \hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \quad -\sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) 
h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , w^{(2 ) } , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & = \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) - 2 nm \log(\pi e ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum _ { \mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) - 2 nm \log(\pi e ) \nonumber\\ & = \sum_{t=1}^{n } h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) - 2 nm \log(\pi e ) \nonumber\\ & = \sum_{t=1}^{n } h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{v}^{(1)}_{1:t-1 } \oplus \mathbf{z}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) - 2 nm \log(\pi e ) \label{eq : gn_causal_b}\end{aligned}\ ] ] using inequalities ( [ eq : gn_causal_a ] ) and ( [ eq : gn_causal_b ] ) , \oplus \mathbf{z}^{(1)}[t ] | w^{(1 ) } , \mathbf{v}^{(1)}_{1:t-1}\oplus \mathbf{z}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } ) - n m ( 1-p)\log(\pi e ) \nonumber \\ & \quad + p\sum_{t=1}^{n } h(\hat{\mathbf{x}}^{(2)}[t]\oplus \mathbf{z}^{(2)}[t ] , \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] | \mathbf{v}^{(1)}_{1:t-1 } \oplus \mathbf{z}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) - 2pnm \log(\pi e)\nonumber \end{aligned}\ ] ] \oplus \mathbf{z}^{(2)}[t ] | \tilde{\mathbf{v}}^{(1)}[t ] \oplus \mathbf{z}^{(1)}[t ] , \mathbf{v}^{(1)}_{1:t-1 } \oplus \mathbf{z}^{(1)}_{1:t-1 } , \mathbf{s}_{1:t-1 } , w^{(1 ) } ) - pnm \log(\pi e)\nonumber \\ &\leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + p\sum_{t=1}^{n } \sum_{j=1}^m h ( g_{d , j}x_j^{(2)}[t ] + z_j^{(2)}[t ] | g_{i , j}x_j^{(2)}[t ] + z_j^{(1)}[t ] ) - pnm \log(\pi e)\nonumber \\ & \leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + p\sum_{t=1}^{n } \sum_{j=1}^m \log \left ( \pi e \left ( 1 + \frac{|g_{d , j}|^2}{1 + |g_{i , j}|^2 } \right ) \right ) - pnm \log(\pi e ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p\log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + p \log \left ( 1 + \frac{|g_{d , j}|^2}{1 + |g_{i , j}|^2 } \right ) \nonumber\\ & = n \left ( ( 1+p ) \sum_{j=1}^{m } \log \left ( 1+|g_{d , j}|^2 \right ) + p \delta_{g } \right ) % --------------\end{aligned}\ ] ] where . the bound on follows by symmetry , and this completes the proof of outer bound ( [ eq : causal_gn ] ) .+ see appendix [ sec : ob_sum_gn ] .in this section , we focus on schemes for achieving the symmetric capacity in the ld setup ( see appendix [ sec : d1d2_ach ] and [ sec : q1q2_ach ] for achievability of remaining corner points in figure [ fig : crossover ] ) . 
in section [ sec : single_carrier ] , we briefly review the single carrier schemes in and describe a _ bursty relaying _ technique ( used in our multicarrier schemes ) . in section [ sec : separability ] , we mention the cases where treating subcarriers separately is optimal ( _ i.e. , _ simply copying the optimal single carrier scheme on each subcarrier leads to the symmetric capacity ) . for the remaining cases ,we propose multicarrier schemes ( covered in sections [ sec : helping_mechanism ] and [ sec : insufficient_help ] ) , which employ a helping mechanism where some _ helper _ levels in subcarriers with are used to recover interfered signals in subcarriers with . for ( section [ sec : helping_mechanism ] ) ,the helping mechanism is optimal ; whereas for ( section [ sec : insufficient_help ] ) the helping mechanism is run in parallel with the single carrier schemes to achieve symmetric capacity . after describing our multicarrier schemes , in section [ sec : ld_inner_examples ] we provide some illustrative examples . the single carrier version of our setup ( _ i.e. , _ ) was studied in . for notational consistency, we use ( subcarrier index ) in stating the results from .we simply restate below the schemes in for the regimes and ; but for the regime we mention a slightly different scheme that makes describing our multicarrier schemes in sections [ sec : helping_mechanism ] and [ sec : insufficient_help ] more convenient .[ [ regime - alpha_1-leq-1 ] ] regime + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for this regime , the symmetric capacity is . to achieve this , a two phase scheme ( same for and )is used as briefly described below has slight variation from this scheme . for details ,see . ]( see for details ) : * phase : transmitters in phase at time index send fresh symbols on all levels .if there is no interference at time index ( occurs w.p . ) , all symbols can be decoded at the intended receiver and both transmitters stay in phase for time index . if there is interference ( occurs w.p . ) , only the bottom symbols get interfered at a receiver and the transmitters transition to phase for time index . * phase : transmitters send the past interference ( obtained from receiver feedback ) on the top levels and fresh symbols on the remaining levels . both transmitters transition to phase for the next time index after phase .figure [ fig : markov_chain ] shows the underlying markov chain for this scheme . .] [ [ regime-1alpha_1-leq-2 ] ] regime + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for this regime , the symmetric capacity is . to achieve this , a two phase scheme is used as briefly described below ( see for details ) : * phase : transmitters in phase at time index send fresh symbols on the top levels and the bottom levels are not used . if there is no interference at time index ( occurs w.p . ) , all symbols can be decoded at the intended receiver and both transmitters stay in phase .if there is interference at time index ( occurs w.p . ) , only symbols get interfered at a receiver and the transmitters transition to phase for time index .* phase : transmitters send fresh symbols in the top levels . in the next levels ( below the top levels ) , the interfering symbols ( obtained through receiver feedback ) from the previous time index are sent .the remaining levels in the bottom are not used . 
both transmitters transition to phase for the next time index after phase .the underlying markov chain in this scheme is same as the one in figure [ fig : markov_chain ] .[ [ regime - alpha_1 - 2-bursty - relaying ] ] regime ( bursty relaying ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for this regime , the symmetric capacity is . in ,this is achieved using a markov chain based scheme similar to the ones described above . for convenience in describing our multicarrier schemes ,we derive a block version of the scheme in as follows . in each block of duration , transmitters send fresh symbols on the top levels and never use the bottom levels . since the bottom levels are never used , the fresh symbols from the top levels are always received interference free .this realizes rate . from the levels in the middle ( below the top levels ) , we realize an additional rate over two blocks as follows . for the first block , creates linear combinations from fresh symbols and sends these linear combinations in the middle levels . for large enough , w.h.p . receives such linear combinations . decodes the constituent fresh symbols from these linear combinations and sends them to ( through feedback ) . now creates new linear combinations from these symbols and sends them in the middle levels during the next block . w.h.p . receives such linear combinations and decodes all the constituent symbols .this leads to an additive rate of at ( and similarly at ) . in the remainder of this paper, we refer to this technique ( for middle levels in subcarriers with ) as _ bursty relaying _ since - pair effectively acts as a relay for - and vice versa .figure [ fig : bursty_relaying ] illustrates this technique of bursty relaying .adding the rate from bursty relaying in middle levels and rate from the top levels , we achieve rate .middle levels ( below the top levels ) when . as shown , receives linear combinations in symbols during a block of duration .it decodes and sends the constituent symbols to which again creates linear combinations from these symbols . in the next block, receives linear combinations from and decodes the constituent symbols . ] using outer bounds ( [ eq : causal_ld ] ) and ( [ eq : ob_3 ] ) for ld setup and achievability rates for the single carrier schemes in , the following can be easily verified . * for _ i.e. , _ when interference is either never present or always present , the symmetric capacity can be achieved by treating the subcarriers separately . * for , when all subcarriers have , the symmetric capacity can be achieved by treating the subcarriers separately . * for , when all subcarriers have , the symmetric capacity can be achieved by treating the subcarriers separately . hence , the subcarriers are _ separable _ in the above cases . when we have subcarriers with as well as subcarriers with ( and ) , we employ coding across subcarriers ( through a helping mechanism described in the next subsection ) to achieve symmetric capacity ; we assume such a _ mixed _ collection of subcarriers in describing our multicarrier schemes in sections ( [ sec : helping_mechanism ] ) and ( [ sec : insufficient_help ] ) .when , .we will now describe the achievability of using a block based scheme . in each block of duration , fresh symbols are sent in the following levels ( same for both transmitters by symmetry ) : * all levels for subcarriers with .* top levels for subcarriers with . 
andthe following levels are not used : * bottom levels of subcarriers with . *bottom levels of subcarriers with . because of the above choices , in every block ( for large enough ) : * in subcarriers with , w.h.p . fresh symbols get interfered . * in subcarriers with , w.h.p . symbols get interfered . * in subcarriers with , the top fresh symbols are always received interference free . in total , each receiver needs to recover interfered symbols in each block .this recovery is done in a pipelined fashion in the next block using a _ helping mechanism _ described below .[ [ helping - mechanism ] ] helping mechanism + + + + + + + + + + + + + + + + + we will use the term _ helper levels _ for the middle levels ( below the top levels ) in subcarriers with ; hence helper levels in total .after each block , due to feedback from , knows exactly which of its transmitted symbols caused interference at .the number of such symbols , as described above , is w.h.p .equal to . now creates linear combinations of these symbols and sends the linear combinations on any of the helper levels in the subsequent block . w.h.p . of such linear combinations are received at .this is sufficient to recover all the interfered symbols at in the previous block .as all the interfered symbols in a block are recovered using the above mechanism , we realize rate . if , some of the helper levels are still available ; precisely of them .we realize an additional rate of from such leftover helper levels using the bursty relaying scheme described in section [ sec : single_carrier ] . adding the rate from the leftover helper levels to , we achieve the symmetric capacity .when , .before we proceed to the details , we give a high level idea of the scheme as follows .simply copying the scheme for in section [ sec : helping_mechanism ] does not work for this case since there are not enough helper levels ( ) compared to the number of levels facing interference ( ) .the trick in this case is to _ help as much as possible_. for each subcarrier with , we select _ helped _ levels ; these levels face interference and the interfered symbols are recovered using the helping mechanism described in section [ sec : helping_mechanism ] . the total number of helped levels equals the number of helper levels ( ) . for the remaining interfered levels in subcarriers with ,we run the optimal single carrier scheme ( with a slight modification ) in parallel with the helping mechanism . adding the rates from the single carrier schemes and the helping mechanism , we achieve the symmetric capacity .this high level idea can also be illustrated by rewriting as shown below . where is the total number of helped levels , and for subcarriers ( with ) being helped the effective direct and interfering link strengths are and .the last two terms in ( [ eq : rate_split ] ) come from the optimal single carrier schemes for ( that run in parallel with the helping mechanism ) .we now describe the achievability of in detail . in subcarriers with ,the transmitters always send fresh symbols in the top levels and never use the bottom levels .this realizes rate . 
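the helping mechanism described above is , at its core , a supply and demand bookkeeping across consecutive blocks . the sketch below ( my own abstraction , not the paper 's construction ) makes that explicit ; the per - subcarrier counts of interfered levels and helper levels , the marginal p and the block length are illustrative inputs , since their exact expressions in terms of ( n_j , k_j ) are those of the omitted formulas , and the subcarrier states are drawn independently for simplicity .

```python
# bookkeeping sketch of the helping mechanism (my own abstraction, not the paper's
# construction).  the per-subcarrier counts of interfered levels and helper levels, the
# marginal p and the block length are illustrative inputs; in the paper they are fixed by
# (n_j, k_j).  subcarrier states are drawn independently here for simplicity.
import numpy as np

rng = np.random.default_rng(1)

def one_block(block_len, p, interfered_levels, helper_levels):
    # demand: symbols hit by interference on the helped subcarriers during block b
    demand = sum(lv * (rng.random(block_len) < p).sum() for lv in interfered_levels)
    # supply: helper-level linear combinations that reach the unintended receiver in
    # block b+1 (they ride on the interfering link, so only slots with state 1 count)
    supply = sum(lv * (rng.random(block_len) < p).sum() for lv in helper_levels)
    return demand, supply

block_len, p = 5_000, 0.5
interfered_levels = [1, 2]    # two helped subcarriers, e.g. one weak and one strong
helper_levels = [4]           # one very-strong-interference subcarrier offering 4 helper levels
d, s = zip(*[one_block(block_len, p, interfered_levels, helper_levels) for _ in range(200)])
print(f"avg demand/slot = {np.mean(d)/block_len:.2f}, avg supply/slot = {np.mean(s)/block_len:.2f}")
print("supply covered demand in a fraction", np.mean(np.array(s) >= np.array(d)), "of the blocks")
```

with at least as many helper levels as helped levels the supply of linear combinations keeps up with the demand in essentially every block , and any strict surplus ( here one level ) is the leftover helper capacity that the bursty relaying technique exploits .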
for each subcarrier with , we assign a non - negative integral value with the following constraints : ( ) for and for , ( ) .simply put , denotes the number of helped levels in a subcarrier and the total number of such levels equals the number of helper levels available in subcarriers with .having fixed for each subcarrier with , we now describe the modifications needed in the optimal single carrier scheme for parallel execution with the helping mechanism .[ [ modification - for - alpha_j-1 ] ] modification for + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the bottom levels ( of the direct link ) are selected as helped levels as shown in figure [ fig : parallel](a ) and interfered symbols in these levels are recovered using the helping mechanism described in section [ sec : helping_mechanism ] . for the modified single carrier scheme ,phase remains the same as in and the modification is only in phase . for illustration purposesconsider that in phase for a subcarrier with , sends fresh symbols ] . if there is no interference , all the fresh symbols are received and the transmitters stay in phase . if there is interference , the transmitters transition to phase . in the scheme in ,all interfering symbols were sent on the top levels in phase ; in the modified scheme the transmitters just send the top interfering symbols in the top levels as shown in figure [ fig : parallel](a ) . in the remaining levels ,fresh symbols are sent ( starred symbols in figure [ fig : parallel](a ) ) . ignoring the bottom levels , the resulting system of linear equations at the receivers is exactly the same as in with direct link strength and interfering link strength .thus at end of phase , is able to decode ( interfered symbols in phase ) and ( fresh symbols in phase ) . to decode interfered symbols in the helped levels ,the helping mechanism is used ( which collects all interfered symbols in helped levels during a block of duration and enables their recovery in the next block ) .so effectively , the rate obtained from a subcarrier with is .[ [ modification - for - alpha_j-1 - 1 ] ] modification for + ++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the case is just an aggregated version of the simple case . for this simple case , either or . if , we use the helping mechanism to recover the interfered symbols . if , there are no helped levels and we simply use the scheme for in .[ [ modification - for-1alpha_j-2 ] ] modification for + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the top levels ( of the direct link at the receiver ) are selected as helped levels as shown in figure [ fig : parallel](b ) .again , phase remains the same as in and the modification is only for phase . for illustration purposes , consider that in phase for a subcarrier with , sends fresh symbols ] on the top levels .the bottom levels are not used .if there is no interference , all the fresh symbols are received and the transmitters stay in phase . if there is interference , the transmitters transition to phase . in phase of the scheme in ,the bottom levels were not used and the interfering symbols in phase were sent on the levels above the unused levels . 
in the modified scheme , the transmitters send only interfering symbols ( from phase ) on the levels above the unused levels in the bottom .these interfering symbols correspond to the levels below the top levels in the direct link at the receiver as shown in figure [ fig : parallel](b ) . in the remaining levels ,fresh symbols are sent ( starred symbols in figure [ fig : parallel](b ) ) . ignoring the helped levels , the resulting system of linear equations at the receivers is exactly the same as in with direct link strength and interfering link strength .thus at end of phase , is able to decode ( interfered symbols in phase ) and ( fresh symbols in phase ) . to decode interfered symbols in the helped levels ,the helping mechanism is used ( which collects all interfered symbols in helped levels during a block of duration and enables their recovery in the next block ) .so effectively , the rate obtained from a subcarrier with is .taking into account the above modifications and adding the rates across subcarriers we achieve rate .the toy example in section [ sec : introduction ] considered two subcarriers with , , and ( and ) .as illustrated in the toy example , the middle level in the second subcarrier helped in recovering interfered symbols in the first subcarrier .with reference to our achievability scheme for , the middle level in the second subcarrier is a helper level ( green level in figure [ fig : toy_example_revisited](a ) ) whereas the ( only ) level in the first subcarrier is a helped level ( red level in figure [ fig : toy_example_revisited](a ) ) .since there is only one helped level and one helper level , and . to illustrate ideas behind our achievability schemes for and , we slightly modify the toy example as described below . and ( c ) example [ ex : delta_pos ] . ,title="fig : " ] and ( c ) example [ ex : delta_pos ] . ,title="fig : " ] and ( c ) example [ ex : delta_pos ] ., title="fig : " ] [ ex : delta_neg ] compared to the original toy example , we have modified only the first subcarrier . for this case, there are two levels in the first subcarrier which face may interference but there is only one helper level ( green level in figure [ fig : toy_example_revisited](b ) ) available in the second subcarrier . hence and .we help the bottom level in the first subcarrier ( as we did in the original toy example ) and by simply copying the scheme in the original toy example we achieve rate . for the top level in the first subcarrier ( gray level in figure [ fig : toy_example_revisited](b ) ) ,we use the optimal single carrier scheme for and achieve additional rate . in this example , it is easy to see that the helping mechanism and the single carrier scheme can be executed in parallel .[ ex : delta_pos ] compared to the original toy example , we have modified only the second subcarrier such that it has one extra middle level ( blue level in figure [ fig : toy_example_revisited](c ) ) . for this case , and . the helping mechanism is used as in the original toy example to achieve rate .additional rate is achieved using the bursty relaying technique for the extra middle level in the second subcarrier ( blue level in figure [ fig : toy_example_revisited](c ) ) .in this section , we first describe tight outer bounds ( described below ) followed by tight inner bounds ( in sections [ sec : gdof_inner_pos ] and [ sec : gdof_inner_neg ] ) on the gdof for gn setup . as mentioned in section [ sec : main_results ] , for the gdof analysis we assume , and . 
we assume a rational to simplify the achievability schemes ( described in sections [ sec : gdof_inner_pos ] and [ sec : gdof_inner_neg ] ) . with the above assumptions ,the gdof for gn setup is defined as follows , where is the symmetric capacity . from outer bounds ( [ eq : causal_gn ] ) and ( [ eq : ob_sum_gn ] ) for the gn setup , we have bounds on as follows , where . using ( [ eq : c_sym_bound ] ) , the following outer bound on gdof holds , where .+ in the remainder of this section , we describe achievability schemes ( inner bounds ) which achieve outer bound ( [ eq : gdof_ob ] ) .the schemes for the gdof setting mimic the achievability schemes for symmetric capacity in the ld setup by using techniques from .hence , the scheme for ( section [ sec : gdof_inner_pos ] ) in the gdof setting mimics the scheme for in ld setup and the scheme for ( section [ sec : gdof_inner_neg ] ) mimics the scheme for in ld setup .we use a block based scheme ( block size ) which mimics the scheme for in section [ sec : helping_mechanism ] for ld setup . for convenience in describing our scheme , we will work with the following _ real _ channel ( the achievable rate for the complex channel in gn setup is just twice the achievable rate for this channel ) . & = \sqrt{snr}\ ; x^{(i)}_j[t ] + \left ( s_j[t ] \right ) \sqrt{inr_j}\ ; x^{(i')}_j[t ] + z^{(i)}_j[t ] \label{eq : gdof_real_channel}\end{aligned}\ ] ] where ,\ ; x^{(i')}_j[t ] \in \mathbb{r} ] and \sim \mathcal{n}(0,1)$ ] .similar to the analysis in , we consider where and are positive integers .furthermore , is such that is an integer ( always possible since all are rational ) . by letting grow to infinity, we get a sequence of snrs that approach infinity . using ( [ eq : gdof_snr ] ) , the received signal in ( [ eq : gdof_real_channel ] ) can be rewritten as follows . & = q^m x^{(i)}_j[t ] + \left ( s_j[t ] \right ) q^{m\beta_j } x^{(i')}_j[t ] + z^{(i)}_j[t ] \label{eq : gdof_q_channel}\end{aligned}\ ] ] following , we will express positive real signals in -ary representation using -ary digits ( which we will refer to as `` qits '' , similar to ) . to mimic the achievability scheme for in ld setup ( section [ sec : helping_mechanism ] ) , we use the following structure for the input signals ( we drop the time index for convenience ) . * for , \end{aligned}\ ] ] where and for the remaining , .[ eq : struct_2 ] * for , \label{eq : struct_1}\end{aligned}\ ] ] where for . * for , \label{eq : struct_1_2}\end{aligned}\ ] ] where and for the remaining , + .the structure ( _ i.e. , _ non - zero qits ) used is same as in the scheme for ld setup ( section [ sec : helping_mechanism ] ) .the restrictions on the values taken by non - zero qits arises from techniques in ( these simplify the analysis by preventing carry overs when signals interfere , see for details ) . in the absence of noise , it is easy to see the similarities between the ld setup and above setup ; qits in a signals are similar to _ levels _ in the ld setup .the following example makes this similarity more precise for the case of subcarriers with . in a subcarrier with ,the received signal at after interference ( in the absence of noise ) is as follows . + \left [ x^{(i')}_{j , m\beta_j } \ ; x^{(i')}_{j , m\beta_j-1 } \ ; \ldots \ ; x^{(i')}_{j , m+1 } \ ; 0\;0\;\ldots 0\;.\;0 \;0 \right]_q \end{aligned}\ ] ] clearly , the top qits of the direct signal ( _ i.e. 
, _ ) are interference free in the above scenario and by doing a modulo operation at the receiver , one can completely recover the direct signal . even in the presence of noise , due to bounded variance of the noise, the higher qits can be decoded with negligible probability of error ( as ) .having shown the similarity between ld setup and the above setup in the absence of noise , we now describe the rates that we can achieve from the subcarriers in the gdof setting .[ [ beta_j - leq-1 ] ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this case , over a block only qits in the direct signal are interfered .assuming we are able to recover all ( except ) interfering qits ( using the helping mechanism described for below ) , we can achieve the following rate : the above rate follows directly from the analysis in .[ [ beta_j - leq-2 ] ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this case , over a block only qits in the direct signal are interfered .assuming we are able to recover all ( except ) interfering qits ( using the helping mechanism described for below ) , we can achieve the following rate : [ [ beta_j-2 ] ] + + + + + + + + + + + + + + + + + + + + + + + + + the top qits in the subcarriers with are always received interference free . so from them we can achieve rate : we now describe the helping mechanism for the gdof setting . for removing the interfering qits for subcarriers with in the previous block , we need to use _ helper _ qits in subcarriers with ; these are the middle qits below the top qits . since , we have sufficient number of such helper qits to recover all interfering qits in subcarriers with .the helping mechanism is same as described for the ld setup ( with minor changes for the -ary setup ) . from the leftover helper qits, we can achieve an additional rate using the bursty relaying technique .summing the rates for all subcarriers we have the following inner bound ( a factor of is included to account for the complex channel ) . so , where ( a ) follows from large enough .since the inner bound on gdof matches the outer bound , we have a tight result when . as in the case of in section [ sec : gdof_inner_pos ] , we focus on the real channel in ( [ eq : gdof_q_channel ] ) for our achievability scheme . the scheme for this casemimics the achievability of symmetric capacity in ld setup for by using the techniques from .since we have already illustrated the usage of techniques from ( for the case ) in mimicking the ld setup schemes for the gdof setting , we will briefly sketch the inner bound for . following the strategy of _ helping as much possible _ for the case in ld setup, we use the middle qits ( below the top qits ) in subcarriers with as helper qits .all the helper qits are used to recover interference in helped qits in subcarriers with ( each subcarrier with has helped qits and ) .so we get the following rates from subcarriers : * for * for * for it should be noted that due to noise , some of the interfering qits in phase ( of the single carrier scheme executed in parallel with the helping mechanism ) may not be decoded correctly at ( after feedback ) and this may affect the recovery of qits in phase .however , it can be shown that such an _ error propagation _leads to reduction ( compared to the case without noise ) in the achievable rate for a subcarrier . combining the rates from all subcarriers, we have the following bound ( factor of included for the complex channel ) . 
where ( a ) follows from .now , we have the following bound on the gdof ; where ( a ) follows from large enough .the above inner bound matches outer bound ( [ eq : gdof_ob ] ) when and this completes the gdof characterization .the work was supported in part by nsf awards 1136174 and 1314937 . additionally , we gratefully acknowledge support by intel and verizon . 99 n. khude , v. prabhakaran and p. viswanath , `` opportunistic interference management , '' in proc .inf . theory ( isit ) , 2009 .n. khude , v. prabhakaran and p. viswanath , `` harnessing bursty interference , '' in proc .information theory workshop ( itw ) , 2009 .s. avestimehr , s. diggavi and d. tse , `` wireless network information flow : a deterministic approach , '' ieee trans .theory , vol .1872 - 1905 , apr . 2011 .m. madiman and p. tetali , `` information inequalities for joint distributions , with interpretations and applications , '' ieee trans .theory , vol .2699 - 2713 , june 2010 . j. jiang , n. marukala and t. liu , `` symmetrical multilevel diversity coding with an all - access encoder , '' in proc .inf . theory ( isit ) , pp .1662 - 1666 , 2012 .wang , c. suh , s. diggavi and p. viswanath , `` bursty interference channel with feedback , '' in proc .theory ( isit ) 2013 .extended version available : https://sites.google.com/site/ihsiangw/isit13preprintburstyic s. mishra , i .- h . wang and s. diggavi , `` opportunistic interference management for multicarrier systems , '' in proc .inf . theory ( isit ) 2013 .extended version available : http://arxiv.org/abs/1305.2985 s. a. jafar and s. vishwanath , `` generalized degrees of freedom of the symmetric gaussian k user interference channel , '' ieee trans .theory , vol .56 , no . 7 , pp . 3297 - 3303 , jul .2010 . using fano s inequality for , for any , there exists a large enough such that ; | \mathbf{s}[t ] ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h(\mathbf{y}^{(1)}[t]| \mathbf{s}[t]= \mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } n_j \mathbb{i}_{j \not \in \mathbf{s } } + \max(n_j , k_j ) \mathbb{i}_{j \in \mathbf{s } } \nonumber\\ & = n\sum_{j=1}^{m } n_j + p ( \max(n_j , k_j)-n_j ) \nonumber\\ & = n p\delta + n\sum_{j=1}^m n_j(1+p ) - ( n_j - k_j)^{+}p\end{aligned}\ ] ] where .the outer bound on follows by symmetry and this completes the proof of outer bound ( [ eq : ob_4 ] ) . 
using fano s inequality for and , for any , there exists a large enough such that ; | \mathbf{s}[t ] ) + \sum_{t=1}^{n } h ( \hat{\mathbf{x}}^{(2)}[t ] |\mathbf{v}^{(1)}_{\mathbf{s}[t]}[t ] , \mathbf{s}[t ] ) \nonumber \\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h(\mathbf{y}^{(1)}[t ] | \mathbf{s}[t]=\mathbf{s } ) + \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \hat{\mathbf{x}}^{(2)}[t ] | \mathbf{v}^{(1)}_{\mathbf{s}[t]}[t ] , \mathbf{s}[t]=\mathbf{s } ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } n_j \mathbb{i}_{j \not \in \mathbf{s } } + \max(n_j , k_j ) \mathbb{i}_{j \in \mathbf{s } } + \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } n_j \mathbb{i}_{j \not \in \mathbf{s } } + ( n_j - k_j)^{+ } \mathbb{i}_{j \in \mathbf{s } } \nonumber\\ & = \sum_{t=1}^{n } \sum_{j=1}^{m } n_j ( 1-p ) + \max(n_j , k_j ) p + \sum_{t=1}^{n } \sum_{j=1}^{m } n_j ( 1-p ) + ( n_j - k_j)^{+ } p \nonumber \\ & = np\delta + 2n \sum_{j=1}^{m } n_j \end{aligned}\ ] ] where .this completes the proof of outer bound ( [ eq : ob_3 ] ) . using fano s inequality for , for any , there exists a large enough such that ; , \mathbf{y}^{(2 ) } [ t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , w^{(1 ) } , w^{(2 ) } , \mathbf{s}_{1:n }\right ) \nonumber\\ & = h \left ( \mathbf{y}^{(1)}_{1:n } , \mathbf{y}^{(2)}_{1:n } | w^{(2 ) } , \mathbf{s}_{1:n } \right ) - \sum_{t=1}^{n } h \left ( \mathbf{z}^{(1)}[t ] , \mathbf{z}^{(2 ) } [ t ] | \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , w^{(1 ) } , w^{(2 ) } , \mathbf{s}_{1:n }\right ) \nonumber\\ & = h \left ( \mathbf{y}^{(1)}_{1:n } , \mathbf{y}^{(2)}_{1:n } | w^{(2 ) } , \mathbf{s}_{1:n } \right ) - \sum_{t=1}^{n } h \left ( \mathbf{z}^{(1)}[t ] , \mathbf{z}^{(2 ) } [ t ] \right ) \nonumber\\ & = h \left ( \mathbf{y}^{(1)}_{1:n } , \mathbf{y}^{(2)}_{1:n } | w^{(2 ) } , \mathbf{s}_{1:n } \right ) - 2n m \log \left ( \pi e \right ) \nonumber\\ & = \sum_{t=1}^{n } h \left ( \mathbf{y}^{(1)}[t ] , \mathbf{y}^{(2)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{y}^{(2)}_{1:t-1 } , w^{(2 ) } , \mathbf{s}_{1:n } \right ) - 2n m \log \left ( \pi e \right ) \nonumber\\ & \leq \sum_{t=1}^{n } h \left ( \mathbf{y}^{(1)}[t ] , \mathbf{y}^{(2)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{y}^{(2)}_{1:t-1 } , w^{(2 ) } , \mathbf{s}[t ] \right ) - 2 n m \log \left ( \pi e \right ) \nonumber\\ & = \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h \left ( \mathbf{y}^{(1)}[t ] , \mathbf{y}^{(2)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{y}^{(2)}_{1:t-1 } , w^{(2 ) } , \mathbf{s}[t]=\mathbf{s } \right ) - 2 n m \log \left ( \pi e \right ) \nonumber\\ & \leq \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^m \mathbb{i}_{j \not \in \mathbf{s } } \left ( \log \left ( \pi e \left ( 1 + |g_{d , j}|^2 \right ) \right ) + \log \left ( \pi e \right ) \right ) + \mathbb{i}_{j \in \mathbf{s } } \log \left ( ( \pi e)^2 \left ( 1 + |g_{d , j}|^2 + |g_{i , j}|^2 \right ) \right ) - 2 n m \log \left ( \pi e \right ) \nonumber\\ & = n \sum_{j=1}^m ( 1-p ) \log \left ( 1 + |g_{d , j}|^2\right ) + p \log \left ( 1 + |g_{d , j}|^2 + |g_{i , j}|^2 \right ) \end{aligned}\ ] ] the bound on follows by symmetry and this completes the proof of outer bound ( [ eq : ob_r^i_gn ] ) .we also prove a looser bound on as shown below ( the proof for this looser bound is used 
in the proof of outer bounds ( [ eq : causal_gn ] ) and ( [ eq : ob_sum_gn ] ) ) .using fano s inequality for , for any , there exists a large enough such that ; |\mathbf{y}^{(1)}_{1:t-1 } , w^{(2 ) } , w^{(1 ) } , \mathbf{s}_{1:n } ) \nonumber\\ & \leq h(\mathbf{y}^{(1)}_{1:n}| \mathbf{s}_{1:n } ) - \sum_{t=1}^{n } h(\mathbf{y}^{(1)}[t]|\mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , w^{(2 ) } , w^{(1 ) } , \mathbf{s}_{1:n } ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n}| \mathbf{s}_{1:n } ) - \sum_{t=1}^{n } h(\mathbf{z}^{(1)}[t]|\mathbf{y}^{(2)}_{1:t-1 } , \mathbf{y}^{(1)}_{1:t-1 } , w^{(2 ) } , w^{(1 ) } , \mathbf{s}_{1:n } ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n}| \mathbf{s}_{1:n } ) - \sum_{t=1}^{n } h(\mathbf{z}^{(1)}[t ] ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n}| \mathbf{s}_{1:n } ) -n m \log(\pi e ) \nonumber\\ & \leq \sum_{t=1}^{n } h(\mathbf{y}^{(1)}[t]| \mathbf{s}[t ] ) - n m \log(\pi e ) \nonumber\end{aligned}\ ] ] =\mathbf{s } ) h(\mathbf{y}^{(1)}[t]| \mathbf{s}[t]= \mathbf{s } ) - n m \log(\pi e ) \nonumber\\ & \leq \left ( \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } \log(\pi e ( 1+|g_{d , j}|^2 ) ) \mathbb{i}_{j \not \in \mathbf{s } } + \log \left ( \pi e \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \right ) \mathbb{i}_{j \in \mathbf{s } } \right ) - n m \log(\pie ) \nonumber\\ & = \left ( \sum_{t=1}^{n } \sum_{j=1}^{m } ( 1-p ) \log \left ( \pi e \left ( 1+|g_{d , j}|^2 \right ) \right ) + p \log \left ( \pi e \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right )\right ) \right ) - n m \log(\pi e ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i ,j}| \right ) ^2 \right ) \label{eq : gn_looser_r1}\end{aligned}\ ] ] as mentioned above , this is a looser bound compared to ( [ eq : ob_r^i_gn ] ) , but the above proof is used in proving outer bounds ( [ eq : causal_gn ] ) and ( [ eq : ob_sum_gn ] ) .using fano s inequality for and , for any , there exists a large enough such that ; , \mathbf{y}^{(2)}[t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{y}^{(2)}_{1:t-1 } , \mathbf{s}_{1:n } , w^{(1 ) } , w^{(2 ) } ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n } | \mathbf{s}_{1:n } ) + h ( \mathbf{y}^{(2)}_{1:n } | \mathbf{y}^{(1)}_{1:n } , \mathbf{s}_{1:n } , w^{(1 ) } ) - \sum_{t=1}^{n } h ( \mathbf{z}^{(1)}[t ] , \mathbf{z}^{(2 ) } [ t ] | \mathbf{y}^{(1)}_{1:t-1 } , \mathbf{y}^{(2)}_{1:t-1},\mathbf{s}_{1:n } , w^{(1 ) } , w^{(2 ) } ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n } | \mathbf{s}_{1:n } ) + h ( \mathbf{y}^{(2)}_{1:n } | \mathbf{y}^{(1)}_{1:n } , \mathbf{s}_{1:n},w^{(1 ) } ) - 2 n m \log(\pi e ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n } | \mathbf{s}_{1:n } ) + h ( \hat{\mathbf{x}}^{(2)}_{1:n } \oplus \mathbf{z}^{(2)}_{1:n } | \mathbf{y}^{(1)}_{1:n } , \mathbf{s}_{1:n } , w^{(1 ) } ) - 2 n m \log(\pi e ) \nonumber\\ & = h(\mathbf{y}^{(1)}_{1:n } | \mathbf{s}_{1:n } ) + h ( \hat{\mathbf{x}}^{(2)}_{1:n } \oplus \mathbf{z}^{(2)}_{1:n } | \mathbf{v}^{(1)}_{1:n}\oplus \mathbf{z}^{(1)}_{1:n } , \mathbf{s}_{1:n } , w^{(1 ) } ) - 2 n m \log(\pi e ) \nonumber\\ & \stackrel{(a ) } \leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + h ( \hat{\mathbf{x}}^{(2)}_{1:n } \oplus \mathbf{z}^{(2)}_{1:n } | \mathbf{v}^{(1)}_{1:n}\oplus \mathbf{z}^{(1)}_{1:n } , \mathbf{s}_{1:n } , w^{(1 ) } ) - n m \log(\pi e ) \nonumber\\ & 
\leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) h ( \hat{\mathbf{x}}^{(2)}[t ] \oplus \mathbf{z}^{(2)}[t ] | \mathbf{v}_{\mathbf{s}}^{(1 ) } [ t]\oplus \mathbf{z}^{(1 ) } [ t ] , w^{(1 ) } , \mathbf{s}[t]=\mathbf{s } ) - n m \log(\pi e ) \nonumber\\ & \leq n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + \sum_{t=1}^{n } \sum_{\mathbf{s } } \mathbb{p}(\mathbf{s}[t]=\mathbf{s } ) \sum_{j=1}^{m } \log(\pi e ( 1+|g_{d , j}|^2 ) ) \mathbb{i}_{j \not \in \mathbf{s } } + \log \left ( \pi e \left ( 1 + \frac{|g_{d , j}|^2}{1 + |g_{i , j}|^2 } \right ) \right ) \mathbb{i}_{j \in \mathbf{s } } - n m \log(\pi e ) \nonumber\\ & = n \sum_{j=1}^{m } ( 1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) \nonumber\\ & \quad + n \sum_{j=1}^{m } ( 1-p)\log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \frac{|g_{d , j}|^2}{1 + |g_{i , j}|^2 } \right ) \nonumber\\ % & = n \sum_{j=1}^{m } 2(1-p ) \log \left ( 1+|g_{d , j}|^2 \right ) + p \log \left ( 1 + \left ( |g_{d , j}| + |g_{i , j}| \right ) ^2 \right ) + p \log \left ( 1 + \frac{|g_{d , j}|^2}{1 + |g_{i , j}|^2 } \right ) \nonumber\\ & = n \left ( 2 \sum_{j=1}^{m } \log \left ( 1+|g_{d , j}|^2 \right ) + p \delta_g\right ) \end{aligned}\ ] ] where ( a ) follows from the proof of ( [ eq : gn_looser_r1 ] ) ( see appendix [ sec : gn_r1_ob ] ) and .as shown in figure [ fig : crossover ] , these corner points appear when .we will describe the achievability of and achievability of follows by symmetry .the achievability of is similar to achieving ( described in section [ sec : helping_mechanism ] ) ; with a slight modification for subcarriers with .the additive term appears in because of bursty relaying in the leftover helper levels ( in number ) . for , to achieve , we use an asymmetric version of bursty relaying as follows : in every block sends linear combinations of fresh symbols in the leftover helper levels . receives such linear combinations in every block ; it recovers the constituent symbols and forwards them to . in the next block, creates linear combinations of the constituent symbols sent by and sends them on its leftover helper levels . receives of these linear combinations and thus recovers the constituent symbols .so compared to , now gains an additional rate but loses not using its leftover helper levels for its own messages ; it just uses them to relay messages for . ] rate .this completes the achievability of .both and are achieved using a separation based scheme ( _ i.e. , _ no coding across subcarriers ) .we first describe the achievability of ; achievability of follows by symmetry . in can rewrite rate as follows . also , from the single carrier schemes in , the following rate tuples are achievable for a single carrier setup :
We study parallel symmetric two-user interference channels in which the interference is bursty and feedback is available from the respective receivers. The presence of interference in each subcarrier is modeled as a memoryless Bernoulli random state. The states across subcarriers are drawn from an arbitrary joint distribution with the same marginal probability for each subcarrier and are instantiated i.i.d. over time. For the linear deterministic setup, we give a complete characterization of the capacity region. For the setup with Gaussian noise, we give outer bounds and a tight generalized degrees-of-freedom characterization. We propose a novel helping mechanism which enables subcarriers in the very strong interference regime to help in recovering interfered signals for subcarriers in the strong and weak interference regimes. Depending on the interference and burstiness regime, the inner bounds either employ the proposed helping mechanism to code across subcarriers or treat the subcarriers separately. The outer bounds demonstrate a connection to a subset entropy inequality of Madiman and Tetali.
the gamma model of parallel programming was introduced by jean - pierre bentre and daniel le mtayer in ( see also for a more accessible account of the model and for a recent account of it ; also see for a thorough presentation of the field of multiset processing ) . at the time of its introduction ,parallel programming as a mental activity was considered more difficult ( one might also say cumbersome ) than sequential programming , something that even today it is still valid to a certain degree .bentre and le mtayer designed gamma in order to ease the design and specification of parallel algorithms .thus , making a parallel programming task easier compared to previously available approaches .gamma was inspired by the chemical reaction model . according to this metaphor, the state of a system is like a chemical solution in which _ molecules _ , that is , processes , can interact with each other according to _ reaction rules _ , while all possible contacts are possible since a magic hand stirs the solution continuously . in gamma solutionsare represented by_ multisets _( i.e. , an extension of sets that is based on the assumption that elements may occur more than one time , see for more details ) .processes are elements of a multiset and the reaction rules are multiset rewriting rules .thus , gamma is a formalism for multiset processing .the chemical abstract machine ( or cham for short ) is a model of concurrent computation developed by grard berry and grard boudol .the cham is based on the gamma model and was designed as a tool for the description of concurrent systems . basically , each cham is a chemical solution in which floating molecules can interact with each other according to a set of reaction rules .in addition , a magical mechanism stirs the solution so as to allow possible contacts between molecules .as is evident , a number of interactions happen concurrently .informally , a process is a program in execution , which is completely characterized by the value of the _ program counter _ ,the contents of the processor s registers and the process _ stack _ ( see for an overview ) . naturally , two or more processes may be initiated from the same program ( e.g. , think of a web browser like firefox ) . in general ,such processes are considered as distinct execution sequences .although , they have the same text section , the data sections will vary .interestingly , one can view the totality of processes that run in a system at any moment as a multiset that evolves over time , which is built _ on - the - fly _ and may be viewed as the recorded history of a system .however , there is a problem that is not covered by this scheme and which i plan to tackle here : the processes are not identical , but _similar _ and so i need a more expressive mathematical formalism to describe running processes .fuzzy set theory was introduced by lotfi asker zadef in 1965 ( see for a thorough presentation of the theory of fuzzy sets ) .fuzzy set theory is based on the denial of the idea that an element either belongs or does not belong to some set .instead , it has introduced the notion of a _ membership degree _ according to which an element belongs to a fuzzy subset of an ordinary or _crisp _ set .usually , this degree is a real number between zero and one ( i.e. , ) .for example , i may say that the degree to which belongs to the fuzzy subset is . 
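To make the membership-degree idea concrete, the following minimal Python sketch represents a fuzzy subset of a small universe by its membership function; the universe and the degree values are invented for illustration and are not taken from the text.

```python
# A fuzzy subset of a crisp universe is given by a membership function
# that maps every element to a degree in [0, 1].
universe = {"a", "b", "c"}

# Hypothetical membership degrees; a degree of 0.0 still means
# "belongs to the fuzzy subset with degree zero", as discussed above.
membership = {"a": 1.0, "b": 0.4, "c": 0.0}

def mu(x):
    """Membership degree of x in the fuzzy subset."""
    return membership.get(x, 0.0)

# The usual Zadeh operations combine membership functions pointwise.
def fuzzy_union(mu1, mu2):
    return lambda x: max(mu1(x), mu2(x))

def fuzzy_intersection(mu1, mu2):
    return lambda x: min(mu1(x), mu2(x))

print(mu("b"))   # 0.4
```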
according to fuzzy set theory, when an element belongs to a fuzzy subset with membership degree equal to zero , this does not imply that the element does not belong to a set . on the contrary , it simply means that the element belongs to the fuzzy subset with degree that is equal to zero .formally , given a universe a fuzzy subset is characterized by a function ] . by applying the _ airlock _ constructor`` '' to some molecule and a solution , one gets the new molecule .a specific rule has the following form where and are molecules . also , there are four general transformation rules : 1 . the _ reaction _ rule where an instance of the right - hand side of a rule replace the corresponding instance of its left - hand side . in particular , if there is a rule and if the and are instances of the and the by a common substitution , then \rightarrow[m_{1}^{\prime } , m_{2}^{\prime},\ldots , m_{l}^{\prime}].\ ] ] 2 .the _ chemical _ rule specifies that reactions can be performed freely within any solution : where is the multiset sum operator .3 . according to the _ membrane _ rule sub - solutions can freely evolve in any context : \rightarrow[c(s')]},\ ] ] where is molecule with a hole in which another molecule is placed .note that solutions are treated as megamolecules .the _ airlock _ rule has the following form : \uplus s\leftrightarrow [ m \lhd s].\ ] ]at any moment any operating system is executing a number of different processes . quite naturally , many different processes are instances of the same program . for example , in a multi - user environment , different users may run different instances of the same web browser .cases like this can be naturally modeled by _multisets _ , which from a generalization of sets .any multiset may include more than one copy of an element .typically , a multiset is characterized by a function , where is the set of natural numbers including zero and means that multiset contains copies of .one may say that the elements of a multiset are tokens of different types .thus , a multiset consisting of two copies of `` a '' and three copies of `` b '' can be viewed as a structure that consists of two tokens of type `` a '' and three tokens of type `` b '' ( see for a thorough discussion of this idea ) .when dealing with an operating system , one may argue that types represent the various programs available to users and tokens represent the various processes corresponding to these programs .although it does make sense to view programs as types and processes as tokens , still not all tokens are identical .for example , different people that use a particular web browser view different web pages and have different numbers of tabs open at any given moment .thus , we can not actually talk about processes that are identical , but we can surely talk about processes that are similar .the next question that needs to be answered is : how can we compare processes ?an easy solution is to define _ archetypal _ processes and then to compare each process with its corresponding archetypal process .the outcome of this procedure will be the estimation of a similarity degree .but what exactly is an archetypal process ?clearly , there is no single answer to this question , but roughly one could define it as follows : assume that is an executable program for some computational platform ( operating system , cpu , etc . ) , then an archetypal process of is the program running with minimum resources . 
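Since the discussion above models running processes as tokens whose types are programs, a snapshot of a system is naturally a crisp multiset over the program names. Below is a small Python sketch of this view (the program names and counts are made up); the multiset sum at the end is the operation the chemical rule uses when two solutions are mixed.

```python
from collections import Counter

# A crisp multiset over the set of programs: each program is a type,
# each running process is a token of that type.
running = Counter({"firefox": 3, "ls": 1, "sshd": 2})

def multiplicity(program):
    """Characteristic function of the multiset: number of tokens of this type."""
    return running[program]

# Multiset sum, as used when two solutions are combined.
other = Counter({"firefox": 1, "vim": 2})
mixed = running + other
print(mixed)   # Counter({'firefox': 4, 'sshd': 2, 'vim': 2, 'ls': 1})
```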
here the term _ minimum resources_ means that the archetypal process consumes the least possible resources ( e.g. , memory , cpu time , etc . ) .let be some web browser , then an archetypal process of is the web browser which when starts shows a blank page in only one tab / window .similarly , an archetypal process for the unix ` ls ` command is the command running without arguments in an empty directory .assume that and are two processes initiated from the same binary .assume also that is an archetypal process and that is a method that measures the similarity degree of some process to .in different words , means that is similar with to a degree equal to .if denotes degree to which the two processes and are similar , then suppose that one has selected a number of criteria to choose and specify archetypal processes .let denote the set of all possible archetypal processes for a particular system and a particular set of criteria .without loss of generality , we can think that the elements of are the names of all programs that can possibly be executed on a system .for instance , for some typical unix system , the set may contain the names of programs under /usr / bin , /usr / local / bin , /opt / sfw / bin , etc .suppose that and that at some moment , are processes initiated from program .then this situation can naturally be modelled by fuzzy multisets , that is , multisets whose elements belong to the set to some degree .a fuzzy multiset is characterized by a higher - order function \rightarrow\mathbb{n}) ] . thus , in general , at any given moment the processes running in a system can be described by a multiset of pairs , where denotes the membership degree . however , such a structure reflects what is happening in a system a given moment .thus , to describe what is going on in a system at some time interval we need a structure that can reflect changes as time passes .the most natural choice that can solve this problem is a form of a _ set through time_. bill lawvere was the first to suggest that sheaves can be viewed as continuously varying sets ( see and for a detailed account of this idea ) . since in this particular casei am interested in _ fuzzy multisets continuously varying through time _, it seems that sheaves are not the structures i am looking for .however , as i will show this is not true .but before i proceed , it is more than necessary to give the general definition of a sheaf .the definition that follows has been borrowed from ( readers not familiar with basic topological notions should consult any introductory textbook , e.g. , see ) : let be a topological space and its collection of open sets . a sheaf over is a pair where is a topological space and is a local homeomorphism ( i.e. , each has an open neighborhood in that is mapped homeomorphically by onto , and the later is open in ) .let us now construct a fuzzy multiset through time .suppose that \rightarrow\mathbb{n} ] also characterizes the same fuzzy multiset . 
and if this function characterizes some fuzzy multiset, then it is absolutely reasonable to say that the graph of this function characterizes the same fuzzy multiset. Let , , where is a set of indices, be the graphs of all functions that characterize fuzzy multisets. In addition, assume that each is an open set of a topological space. Obviously, it is not difficult to define such a topological space. For example, it is possible to define a metric between points and and from this to define a metric topology. Having defined a topology on $\times\mathbb{N}$ . If is a fuzzy subset of , then is a fuzzy subgroup of if ; by replacing with some other fuzzy t-norm, one gets a more general structure called a T-fuzzy group. Also, by following this line of thought it is possible to define more complex fuzzy algebraic structures (e.g., fuzzy rings, etc.). But this is not the only way to fuzzify a group. Indeed, M. Demirci and J. Recasens proposed in a fuzzy subgroup to be a group , where denotes the degree to which . An even fuzzier algebraic structure is one where both the elements belong to the underlying set to a degree and the result of the operation is the "real" result up to some degree. Assume that is a group and that ( ), and it is denoted by . Similarly, one can define fuzzy $n$-ary relations. A fuzzy labeled transition system (FLTS) over a crisp set of actions is a pair consisting of * a set of states; * a fuzzy relation called the _fuzzy transition relation_. If the membership degree of is , "indicates the strength of membership within the relation". We write to denote that the plausibility degree to go from state to state by action is . More generally, if , then is called a _derivative_ of with plausibility degree equal to . As in the crisp case, it is quite reasonable to ask when two states in an FLTS should be considered equivalent to some degree. For example, what can be said about the two FLTSs depicted in Figure [similar:fltses]? In order to be able to answer this question we need to define the notion of similarity between FLTSs. [Figure [similar:fltses]: two small FLTSs; the left one has transitions labelled a, b, c with plausibility degrees 0.50, 0.75 and 0.80, while the right one has transitions labelled a, a, b, c with plausibility degrees 0.65, 0.55, 0.80 and 0.80.] Assume that is an FLTS and that is a fuzzy binary relation over . Then is called a _strong fuzzy simulation over with simulation degree_ , denoted by , if, whenever , if , then there exists such that , , and . We say that _strongly fuzzily simulates_ with degree , , if there is a strong fuzzy bisimulation such that . Some strong fuzzy bisimulations are constructed from other, simpler ones, as the proofs of the following propositions show. [sfb:2] Suppose that each , , is a strong fuzzy bisimulation with simulation degree . Then the following fuzzy relations are all strong fuzzy bisimulations: 1. For the identity relation it holds that and for all and , where . In addition, it holds that , which trivially shows that is a strong fuzzy bisimulation. 2. In order to show that is a strong fuzzy bisimulation we need to show that the inverse of (i.e.
, ) is a strong fuzzy bisimulation , which is obvious from definition [ sfbs:1 ] .3 . recall that if and are two strong fuzzy bisimulations with similarity degrees and , respectively , then is a new fuzzy binary relation such that .\ ] ] obviously , , which shows exactly what we were looking for .assume that and are two strong fuzzy bisimulations .then their union as fuzzy binary relations is defined as follows ..\ ] ] from this it not difficult to see that ,\ ] ] proves the simplest case . from thisit is not difficult to see why the general case holds .the following statement reveals some properties of the strong fuzzy bisimulation . 1 . is an equivalence relation ; 2 . is a strong fuzzy bisimulation .1 . recall that a fuzzy binary relation is a _ fuzzy equivalence relation _ if it is reflexive , symmetric , and transitive . for reflexivity, it is enough to consider the identity relation , which is a strong fuzzy bisimulation . for symmetryit is enough to say that given a strong fuzzy bisimulation , its inverse is also a strong fuzzy bisimulation .finally , for transitivity it is enough for say that the relational composition of two strong fuzzy bisimulations is also a strong fuzzy bisimulation .this is a direct consequence of proposition [ sfb:2 ] .[ [ fuzzy - x - machines ] ] fuzzy -machines + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + -machines are a model of computation that has been introduced by samuel eilenberg .roughly , given an arbitrary set and a family of relations where , an -machine of type is an automaton over the alphabet .although a labeled transition system is not an automaton ( e.g. , there are no terminal states ) , still it is very easy to define automata using the data of a labeled transition system as a starting point .thus , a fuzzy automaton is actually a special case of a fuzzy labeled transition system that includes a set of initial states and a set of final states . given a fuzzy automaton over an alphabet ( a set ) , a partial functioncan be defined where is the free monoid with base and is the concatenation of strings and . note that is the _ left multiplication _ and it is defined as .thus , is the inverse of the left multiplication . now , by replacing each edge of an automaton with an edge the result is a new automaton which is a fuzzy -machine .note that the type of such a machine is .obviously , one can construct an -machine even from an flts , but the result will not be a _ machine _ since it will not have initial and terminal states .this view is correct when one has in mind the classical view of a machine as a conceptual device that after processing its input in a _finite _ number of operations terminates .interestingly , there are exceptions to this view that are widely used even today .for example , an operating system does not cease to operate and those who stop unexpectedly are considered failures .thus , one can assume that a machine will not terminate to deliver a result but instead it delivers ( partial ) results as it proceeds ( see for more details ) .on the other hand if states are elements of some fuzzy subset , then we can say that there is a termination degree associated with each computation .in other words , a computation may not completely stop or it may stop at some unexpected moment .this is a novel idea , since the established view is that a computation must either stop or it will loop forever .ideas like this one could be used to model the case where an external agent abruptly terminates a computation . 
however, a full treatment of these and other similar ideas is out of the scope of this paper .roughly , a fucham can be identified with a solution with fuzzy molecules and a set of fuzzy reaction rules . a solution with fuzzy moleculescan be modelled by a fuzzy multiset , while fuzzy reaction rules can be described by fuzzy transitions . before presenting a formal definition of fuchamslet us informally examine whether it makes sense to talk about solutions with fuzzy molecules and about fuzzy reaction rules . to begin with consider the following concrete chemical reaction rule : according to the `` traditional '' view , two hydrogen molecules react with one oxygen molecule to create two water molecules .a fuzzy version of this reaction rule should involve fuzzy molecules and it should be associated with a plausibility degree .this is justified by the _fact _ that molecules of some chemical element or compound are not identical but rather similar to a degree with an ideal molecule of this particular element or compound .in other words , not all hydrogen and oxygen molecules that react to create water are identical .for example , think of deuterium and tritium as `` hydrogen '' molecules up to some degree that react with oxygen to produce heavy water , tritiated water and/or partially tritiated water , that is , water up to some degree .thus , the `` water '' molecules produced when millions of `` hydrogen '' molecules react with oxygen molecules are not identical but just similar ( if , in addition , the reader considers hydrogen peroxide , that is , , then things will get really _ fuzzy _ ) . obviously , the higher the similarity degree , the more likely it is that the reaction will take place . andthis is the reason one must associate with each reaction rule a plausibility degree .although these ideas may seem unnatural , still g.f .cerofolini and p. amato have used fuzziness and linear logic to propose an axiomatic theory for general chemistry . in particular , they have developed ideas similar to ours in order to explain how chemical reaction take place , which means that my proposal , which i call the _ fuzzy chemical metaphor _ , is not unnatural at all .the fuzzy chemical metaphor is essentially identical to the chemical metaphor , nevertheless , it assumes that molecules of the same kind are similar and not identical .solutions of fuzzy molecules may react according to a number of fuzzy reaction rules , whereas each rule is associated with a feasibility degree that specifies how plausible it is that a particular reaction will take place .a fucham is an extension of the ( crisp ) cham that is build around the fuzzy chemical metaphor . like its crisp counterpart, any fucham may have up to four different kinds of transformation rules that are described below .[ [ fuzzy - reaction - rules ] ] fuzzy reaction rules + + + + + + + + + + + + + + + + + + + + assume that we have a solution with fuzzy molecules that are supposed to react according to some fuzzy reaction rule . then the reaction will take place only when the similarity degree of each molecule is greater or equal to the feasibility of the particular reaction rule . 
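As a rough illustration of this feasibility condition (and of the selection of the really applicable rule formalized in the definitions that follow), here is a Python sketch. The molecules, similarity degrees, and feasibility degrees are invented placeholders echoing the water example above, and the matching of molecules to archetypes is deliberately naive.

```python
# A fuzzy reaction rule: archetypal reactants, products, and a feasibility degree.
# Molecules in a solution are (archetype_name, similarity_degree) pairs.
rules = [
    {"lhs": ["H2", "H2", "O2"], "rhs": ["H2O", "H2O"], "feasibility": 0.7},
    {"lhs": ["H2", "O2"],       "rhs": ["H2O2"],       "feasibility": 0.5},
]

solution = [("H2", 0.9), ("H2", 0.8), ("O2", 0.75)]

def potentially_applicable(rule, molecules):
    """A rule is potentially applicable when each archetype on its left-hand side
    is matched by a molecule whose similarity degree is at least the rule's
    feasibility degree."""
    pool = list(molecules)
    for archetype in rule["lhs"]:
        match = next((m for m in pool
                      if m[0] == archetype and m[1] >= rule["feasibility"]), None)
        if match is None:
            return False
        pool.remove(match)
    return True

candidates = [r for r in rules if potentially_applicable(r, solution)]
# Among the potentially applicable rules, the one with the highest feasibility
# degree is the really applicable one.
really_applicable = max(candidates, key=lambda r: r["feasibility"], default=None)
print(really_applicable["rhs"] if really_applicable else None)   # ['H2O', 'H2O']
```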
[feasible : rule ] assume that and are archetypal molecules .then is an _ ideal _ fuzzy reaction rule with feasibility degree that describes how likely it is that molecules may react to create molecules .suppose that is an instance of to degree , that is , the molecule is similar to with degree equal to .then the following fuzzy reaction \overset{}{\underset{\lambda}{\rightarrow } } [ m_{1}^{\prime},\ldots , m_{l}^{\prime}]\ ] ] is _ feasible _ with feasibility degree equal to if the similarity degree of a molecule depends on the similarity degrees of the atoms that make up this particular molecule .the most natural choice is to assume that it is the minimum of these degrees .it is quite possible to have a situation where the same reacting molecules may be able to yield different molecules , something that may depend on certain factors . in different words, we may have a solution where a number of different fuzzy reaction rules are _ potentially applicable_. in this case , the reaction rule with the highest feasibility degree is _ really applicable_. assume that is a solution for which the following reaction rules are potentially applicable and that , are the similarity degrees of the actual molecules that are contained in .then the really applicable rule is the one that satisfies the conditions of definition ( [ feas : eq ] ) and whose feasibility degree is the largest among the feasibility degrees of all potentially applicable rules . using the perl pseudo - code of algorithm [ alg:1 ], one can compute the really applicable and the potentially applicable rules . [ [ the - fuzzy - chemical - rule ] ] the fuzzy chemical rule + + + + + + + + + + + + + + + + + + + + + + + mixing up two solutions and yields a new solution that contains the molecules of both and .in other words , is the sum of and , or more formally note that in order to find the sum of two or more fuzzy multisets we work as in the crisp case ( see ) . nonetheless , because of restriction [ feas : eq ] , the fuzzy chemical rule takes the following form : note that this restriction applies to all other general rules . [[ fuzzy - airlock - rule ] ] fuzzy airlock rule + + + + + + + + + + + + + + + + + + the airlock rule creates a new molecule that consists of a solution and a molecule .therefore , one needs to define a similarity degree for solutions in order to be able to estimate the similarity degree of the new molecule .suppose that is a process solution represented by a fuzzy multiset .the similarity degree of to a solution that contains only prototype molecules is given by : if is a fuzzy solution and a fuzzy molecule , then \uplus s\overset{}{\underset{\lambda}{\leftrightarrow } } [ m\lhd s]}.\ ] ] [ [ fuzzy - membrane - rule ] ] fuzzy membrane rule + + + + + + + + + + + + + + + + + + + suppose that denotes the similarity degree of molecule . then the fuzzy membrane rule is formulated as follows : \overset{}{\underset{\lambda}{\rightarrow } } [ c(s')]}.\ ] ]the cham has been used to describe the operations of various process calculi and algebras , which have been proposed to describe concurrency . since the cham is a special case of the fucham, one can use the fucham to describe the operational semantics of any such formalism .nevertheless , this is almost meaningless as there is no reason to use a hammer to hit a nail ! 
on the other hand , it would make sense to ( try to ) describe the operational semantics of a formalism that has been designed to describe concurrency in a vague environment .the truth is that there is no process algebra or process calculus that is built around the fundamental notion of vagueness .therefore , it is not possible to perform a similar exercise for the fucham . a different way to demonstrateits expressive power is to take an existing cham , which has been designed to describe some crisp process calculus or algebra , and then to try to fuzzify the description and , consequently , to come up with a description of a ( hypothetical ? ) fuzzy process calculus or algebra .naturally , this approach does not lead to a full - fledged theory of vague or imprecise concurrency theory , but rather it can be considered as an exercise in defining behaviors from models . forthis exercise i will use the -calculus and the corresponding cham .the -calculus is a mathematical formalism especially designed for the description of _ mobile processes _ , that is , processes that live in a virtual space of linked processes with mobile links .the -calculus is a basic model of computation that rests upon the primitive notion of _ interaction_. it has been argued that interaction is more fundamental than reading and writing a storage medium , thus , the -calculus is more fundamental than turing machines and the -calculus ( see and the references herein for more details ) . in the -calculus processes are described by process expressions that are defined by the following abstract syntax : if , then is the null process that does nothing .in addition , denotes an _ action prefix _ that represents either sending or receiving a message , or making a silent transition : the expression behaves just like one of the s , depending on what messages are communicated to the composite process ; the expression denotes that both processes are concurrently active ; the expression means that the use of the message is restricted to the process ; and the expression means that there are infinitely many concurrently active copies of .as it stands the only way to introduce fuzziness in the -calculus is to assume that action prefixes are fuzzy .usually , it is assumed that there is an infinite set of names from which the various names are drawn . in our case, it can be assumed that names are drawn from $ ] . in other words ,a name would be a pair which will denote that the name used is similar to the prototype with degree equal to .skeptic readers may find this idea strange as an is always an and nothing more or less . indeed , this is true , nevertheless , if we consider various drawn from different ( computer ) fonts , then one is more than some other . to fully understand this idea ,consider the sequence of letters in figure [ typo : sim ] borrowed from .clearly , the rightmost symbol does not look like an a at least to the degree the second and the third from the left look like an a. i call this kind of similarity _ typographic similarity _ , for obvious reasons .thus , one can say that names are typographically similar .berry and boudol provide two encodings of the -calculus , but for reasons of simplicity i will consider only one of them .the following rules can be used to describe the functionality of the -calculus . 
[Table: the CHAM rules encoding the π-calculus; the original two-column listing of the rules is not reproduced here.] Note that the first four rules are common to the two encodings of Berry and Boudol. Unfortunately, these rules do not describe summation. However, one can imagine that a sum is an inactive megamolecule that changes to a simpler molecule in a single step. Of course, this is a crude idea and not a bulletproof solution, so I will not say more on the matter. In order to proceed with this exercise, it is necessary to fuzzify the encoding presented above. Basically, the reaction and α-conversion rules are the most problematic rules. A fuzzification of these rules can be obtained by attaching to each rule a plausibility degree. In the first case, it is reasonable to demand that the similarity degrees of and are the same and at the same time greater than the feasibility degree, and also that the difference of the similarity degrees of and is not greater than the plausibility degree. In other words, the reaction is feasible if and . Similarly, the α-conversion is plausible only if . Because of this definition, it is necessary to define the notion of fuzzy structural congruence. One option is to use a slightly modified version of the definition provided in . The slight modification involves α-conversion plus . From the discussion so far it should be clear that if and only if . With these redefinitions it is not difficult to go "back" to a fuzzy version of the π-calculus.

I have tried to merge fuzziness with concurrency theory. The rationale of this endeavor is that one can view processes as being similar and not just identical or completely different. In order to build a model of concurrency in a vague environment, I have introduced fuzzy labeled transition systems and proved some important properties. In passing, I have defined fuzzy X-machines and discussed some interesting ideas. Then I introduced fuCHAMs as a model of concurrent computation in a fuzzy environment where fuzzy processes interact to perform a computation according to some fuzzy reaction rules. The model was used to devise a toy process calculus which is a fuzzy extension of the π-calculus. The next step is to use the ideas developed here to develop real fuzzy process calculi and algebras and thus to broaden the study of concurrency. In addition, these ideas may form the basis for developing fuzzy programming languages and maybe fuzzy computers. I would like to thank Athanassios Doupas for his help during the early stages of this work.

Alessandra Di Pierro, Chris Hankin, and Herbert Wiklicky. In Frank S. de Boer, Marcello M. Bonsangue, Susanne Graf, and Willem-Paul de Roever, editors, _Formal Methods for Components and Objects, 4th International Symposium, FMCO 2005, Amsterdam, The Netherlands, November 1-4, 2005, Revised Lectures_, number 4111 in Lecture Notes in Computer Science, pages 388-407. Springer-Verlag, Berlin, 2006.
Fuzzy set theory opens new vistas in computability theory, and here I show this by defining a new computational metaphor: the fuzzy chemical metaphor. This metaphor is an extension of the chemical metaphor. In particular, I introduce the idea of the state of a system as a solution of fuzzy molecules, that is, molecules that are not just different but rather similar, which react according to a set of fuzzy reaction rules. These notions are made precise by introducing fuzzy labeled transition systems. Solutions of fuzzy molecules and fuzzy reaction rules are used to define the general notion of a fuzzy chemical abstract machine, which is a _realization_ of the fuzzy chemical metaphor. Based on the idea that these machines can be used to describe the operational semantics of process calculi and algebras that include fuzziness as a fundamental property, I present a toy calculus that is a fuzzy equivalent of the π-calculus.
consider the following situation .alice writes a message to bob consisting of the numbers of several bank accounts to which bob has to send some money .she writes in a hurry ( she just got to know that the transfers are urgent if they do not want to pay delay punishment , but currently she has little time ) .therefore her characters are not very well legible , so bob may misread some numbers .however , there are some rules for the possible mistakes , e.g. , a may be thought to be a but never a .this relation between the possible digits need not be symmetric : it is possible that a is sometimes read as a but a may not be decoded as a . these rules of possible confusions are known both by alice and by bob . asalice is aware of the possibility that bob misread her message , later in the day she sends another message to bob , the goal of which is to make bob certain whether he read ( decoded ) the first message correctly or not . if he did he can transfer the money with complete confidence that he sends it to the right accounts .if he did not he will know that he does not know the account numbers correctly and so he better wait and pay the punishment than transfer the money to the wrong place .the second message will be received by bob correctly for certain , but it uses an expensive device , e.g. , alice sends it as an sms from another country after she has arrived there .( now we understand why she was in a hurry : she had to arrive to the airport in time . ) for some reason , every character sent from this foreign country costs a significant amount of money for her .so she wants to send the shortest possible message that makes it sure ( here we insist on zero - error ) that bob will know whether his decoding of the original handwritten message was error - free or not .the problem is to determine the best rate of communication over the second channel as the length of the original message received tends to infinity . in section [ sec : dil ]we describe the abstract communication model for this scenario and show that the best achievable rate is a parameter of an appropriate directed graph .we will see that this parameter of a directed graph is a generalization of the parameter ( of an undirected graph ) called witsenhausen rate .( this means that we also obtain a new interpretation of witsenhausen rate . ) in section [ sec : bounds ] we investigate the relationship with other graph parameters .these include sperner capacity and the dichromatic number .the former is a generalization of shannon s graph capacity to directed graphs .though originally defined to give a general framework for some problems in extremal set theory ( see ) , sperner capacity also has its own information theoretic relevance , see .the dichromatic number is a generalization of the chromatic number to directed graphs introduced in . using the above mentioned relations we determine our new parameter for some specific directed graphs .in section [ sec : comp ] we consider a compound channel type version of the problem parallel to . in section [ sec : extr ]some connections to extremal set theory are pointed out . in section [ sec :ambit ] we will consider the setup where the requirement is more ambitious and we want that alice s second ( the error - free but expensive ) message make bob able to decode the original message with zero - error . (that is , he will know the message itself not only the correctness or incorrectness of his original decoding of alice s handwriting . 
)we will see that this setting leads to the witsenhausen rate of an undirected graph related to the problem .this gives a further new interpretation of witsenhausen rate .all logarithms are meant to be of base .the abstract setting for our communication scenario is the following .we have a source whose output is sent through a noisy channel .( this belongs to alice s handwriting . )the input and output alphabets of this channel are identical and they coincide with the output alphabet of the source . it is known how the noisy channel can deform the input , in particular we know what ( input ) letters can become a certain , possibly different ( output ) letter on the other side .( we always assume though that every letter can result in itself , that is get through the noisy channel without alteration . )later another message is sent ( by the same sender ) to the same receiver .this second message is sent via a noiseless channel and its goal is to make zero - undetected - error decoding possible , i.e. , after having received this second message the receiver should be able to decide whether it decoded the first message correctly .the use of the noiseless channel is expensive , so the second message should be kept as short as possible .let the shortest possible message that satisfies the criteria have length when characters of the source output are encoded together .let denote the noisy channel .the efficiency of the communication is measured by the quantity that we call the dilworth rate of the noisy channel .( for an explanation of the name see remark [ rem : name ] in subsection [ subsect : dilwdef ] . ) [ rem : cvsr ] note the special feature of the problem that we characterize a channel by a rate , that is with a parameter that , unlike channel capacity , we want to be as small as possible .the reason is that we measure the reliability of a channel not by the amount of information it can safely transfer but with the amount of information needed _ to be added _ for making the communication reliable . the relevant properties of are described by a directed graph having the ( common input and output ) alphabet as its vertex set and the following edge set .an ordered pair of two letters forms a directed edge of if and only if and the output of can be when it is fed by at the input .[ rem : ve ] as usual we will use to denote the vertex set and to denote the edge set of a directed graph .we will use similar notation for undirected graphs that we always consider to be the same as a _ symmetrically directed graph_. in such a graph an ordered pair of two vertices forms a directed edge if and only if the reversely ordered pair is also present in the digraph as a directed edge .we will use the term _ oriented graph _ for directed graphs that do not contain any edge together with its reversed version .that is is an oriented graph if implies .as it is also customary , the term _ digraph _ will be used as a synonym for `` directed graph '' . to express as a graph parameter we need the following notion . 
the and product of two directed graphs and is defined as follows .the vertex set of is the direct product and vertex sends a directed edge to iff either and or and or and .the -th and power of a digraph , denoted by is the -wise and product of digraph with itself .observe that this graph exponentiation extends to sequences of letters the relation between individual letters and expressing that feeding to the noisy channel may result in observing letter at the output .a sequence of letters at the input of can result in another such sequence at the output of if at each coordinate the character in the first sequence can result in the corresponding character of the second sequence .( this includes the possibility that the character does not change when sent through . )[ rem : ao ] the terminology of graph products is not completely standardized .the and product we just defined is also called normal product , strong direct product , or strong product .we follow the paper of alon and orlitsky when use the name and product , because we find this name informative .a similar remark applies to the or product that we will introduce later in definition [ defi : or ] . recall that the chromatic number of a graph is the minimal number of colors that suffice to color the vertices of so that adjacent vertices get different colors .if is a digraph , its chromatic number is understood to be the chromatic number of the underlying undirected graph .[ prop : chi ] it is easy to see that the above limit always exists .( the reason is the submultiplicative behaviour of the chromatic number under the and product ) . * proof .* alice and bob can agree in advance in a proper coloring of with colors .alice can send bob the color of the vertex belonging to the original source output using bits .bob compares this to the color of the vertex representing the sequence he obtained as a result of his decoding .if the latter color is identical to the one alice has sent him , then he can be sure that his decoding was error - free .this is because any other sequence that could result in his decoded sequence is adjacent ( in ) to this decoded sequence , so its color is different . on the other hand , if alice sent a shorter message through the noiseless channel , then she could not have distinct messages and thus there must exist two adjacent vertices in that are encoded to the same codeword by alice ( for the noiseless channel ) .then one of the two sequences represented by these two adjacent vertices could result in the other one , while this other one could also result in itself .thus bob can not make the difference between these two sequences , one of which is the correct source output sequence while the other one differs from it .so receiveing this message bob could not be sure whether his decoding was error - free or not . 
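As a small illustration of the protocol in the proof above, the sketch below (Python; the confusion relation is an invented toy example, not one from the text) builds the confusability digraph of a channel, forms its $t$-th AND power, and produces a proper coloring of the underlying undirected graph by a greedy heuristic. Alice then needs roughly $\log_2$ (number of colors) noiseless bits per block of $t$ source letters; note that the greedy count only upper-bounds the chromatic number used in the proof.

```python
from itertools import product

# Confusion relation of the noisy channel: input letter -> letters it may be
# read as (every letter can also come through unchanged).  Toy example only.
confusable = {"0": {"8"}, "8": set(), "1": {"7"}, "7": set()}
alphabet = sorted(confusable)

# Directed edge (a, b): a != b and an input a may be received as b.
edges = {(a, b) for a in alphabet for b in confusable[a] if a != b}

def and_power(t):
    """Vertices and edges of the t-th AND power of the confusability digraph:
    (u, v) is an edge iff the sequences differ and, in every coordinate,
    the letters are equal or joined by an edge."""
    verts = list(product(alphabet, repeat=t))
    E = {(u, v) for u in verts for v in verts
         if u != v and all(x == y or (x, y) in edges for x, y in zip(u, v))}
    return verts, E

def greedy_coloring(verts, E):
    """Proper coloring of the underlying undirected graph (upper bound on chi)."""
    undirected = {frozenset(e) for e in E}
    color = {}
    for v in verts:
        used = {color[u] for u in color if frozenset((u, v)) in undirected}
        color[v] = next(c for c in range(len(verts)) if c not in used)
    return max(color.values()) + 1

verts, E = and_power(2)
print(greedy_coloring(verts, E))   # number of colors Alice and Bob agree on
```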
the right hand side expression in proposition [ prop : chi ] can be considered as a digraph parameter that we will call the dilworth rate of the digraph .[ defi : dilrate ] for a directed graph we define its ( logarithmic ) _ dilworth rate _ to be the non - logarithmic dilworth rate is {\chi(\vec{g}^{\wedge t}}).\ ] ] obviously , [ rem : name ] let be the directed graph on vertices with a single directed edge .if we consider the vertices of as characteristic vectors of subsets of a -element set then can be interpreted as the asymptotic exponent of the minimum number of antichains ( sets of pairwise incomparable elements ) in the boolean lattice of these subsets that can cover all the subsets .the exact value of this minimum number is given ( easily ) by a special case of what is called the `` dual of dilworth s theorem '' ( also called mirsky s theorem , see ) .this connection to dilworth s celebrated result is the reason for calling our new parameter dilworth rate .note that the name sperner capacity was picked by the authors of for analoguous reasons : the sperner capacity of the digraph has a similar relationship with sperner s theorem . the and product is also defined for undirected graphs .considering undirected graphs as symmetrically directed graphs the definition is straightforward .witsenhausen considered the `` zero - error side - information problem '' that led him to introduce the quantity that is called the witsenhausen rate of ( the undirected ) graph .it is straightforward from the definitions that if is a symmetrically directed graph and is the underlying undirected graph ( that we consider equivalent ) , then .thus dilworth rate is indeed a generalization of witsenhausen rate to directed graphs .[ rem : nrfam ] we note that nayak and rose defines what they call `` the witsenhausen rate of a set of directed graphs '' .though formally this gives the dilworth rate of a directed graph , the focus of is elsewhere .when its motivating setup results in a family consisting of a single digraph , then this digraph is symmetrically directed .( see also theorem [ thm : wm ] in section [ sec : comp ] . )sperner capacity was introduced by gargano , krner and vaccaro .traditionally this parameter is defined by using the or product .[ defi : or ] the or product of directed graphs and has vertex set and sends a directed edge to iff either or .the -th or power is the -wise or product of digraph with itself .let denote the complete directed graph on vertices , that is the one we obtain from a(n undirected ) complete graph when substituting each of its edges by the two oriented edges and .the ( directed ) complement of a digraph is the directed graph on vertex set having edge set now we note the straightforward relation of the and and or powers that . the ( logarithmic ) sperner capacity of digraph is defined ( see ) as where denotes the _ symmetric clique number _ , that is the cardinality of the largest symmetric clique in digraph : the size of the largest set where for each both and are edges of . using the above relation of the and and or products , sperner capacity ( of the complementary graph )can also be defined as where stands for the independence number ( size of the largest edgeless subset of the vertex set ) of graph .this is the definition given in .( the authors of call this value the sperner capacity of . 
) when is an undirected ( or symmetrically directed ) graph , then , the _ shannon capacity _ of graph ( see ) .we will need a sort of probabilistic refinement of our capacity - like parameters called their `` within - a - type '' versions , see .first we need the concept of -typical sequences , cf . .[ defi : peps ] let be a finite set , a probability distribution on , and .a sequence in is said to be -typical if for every we have , where .we denote the set of -typical sequences in by . when we also write for .if we say that the _ type _ of is . let stand for either or .for a directed ( or undirected ) graph and we denote by ] .let be either of the following graph parameters of the directed graph : independence number , clique number , chromatic number , clique cover number ( which is the chromatic number of the complementary graph ) , symmetric clique number , or transitive clique number .( the latter is the size of the largest subset of the elements of which can be linearly ordered so that if precedes then the oriented edge is present in . )let the asymptotic parameter be defined as while stands for .[ defi : captyp ] the parameter of a digraph within a given type is the value we note that for several of the allowed choices of and we obtain a graph parameter that already exists in the literature . for example , when and the power we look at is the or power , we get sperner capacity within a given type , that has an important role in the main results of the papers .if we choose and the or power , we obtain the functional called graph entropy , which is defined in and has several nice properties , see , as well as important applications , see e.g. .when but the exponentiation is the and power , then we arrive to the within a type version of dilworth rate .the special case of this for an undirected graph was already known under name `` complementary graph entropy '' that could justifiably be called `` witsenhausen rate within a given type '' .this parameter was introduced by krner and longo and further investigated by marton .although this within - a - type version of witsenhausen s invariant was introduced earlier than the non - probabilistic version ( cf . ) , for the sake of consistancy we denote it by . note that marton proved the important identity where is the entropy of the probability distribution .this holds for any probability distribution on .along the same lines one can also prove the following theorem .we give its proof for the sake of completeness .we will use the notion of fractional chromatic number in the proof .let denote the set of independent sets in .a function is a fractional coloring of if for every vertex we have , that is the sum of the weights puts on independent sets containing is at least .( a proper colorig is also a fractional coloring : the color classes get weight , the other independent sets get weight . )the fractional chromatic number is , that is the minimum ( taken over all fractional colorings ) of the total weight put on independent sets by a fractional coloring .( formally we should write infimum but it is known that the minimum is always attained . see the book for a detailed account on fractional graph parameters . )we will need the following properties of the fractional chromatic number .[ defi : vtrans ] a directed graph is vertex - transitive if for any two vertices it admits an automorphism that maps to . if is a vertex - transitive graph , then ( for a proof see , proposition 3.1.1 on page 41 . 
) for every graph we have {\chi(f^{\odot t}})=\lim_{t\to\infty}\sqrt[t]{\chi_f(f^{\odot t}}).\ ] ] the latter follows from lovsz s result stating that and the obvious inequality that holds for all ( finite simple ) graphs .[ thm : sumhp ] let be a directed graph and an arbitrary fixed probability distribution on . then * proof .* note that by the well - known ( and more or less trivial ) inequality for every graph , we have .clearly , this relation also holds if we have a directed graph in place of the undirected graph .this is straightforward since and are defined to be identical to the corresponding parameter of the underlying undirected graph .it is also well - known ( cf .e.g. ) that .the last two relations immediately give for the reverse inequality let us fix a sequence of probability distributions on the vertex set of our graph so that and notice that is a vertex - transitive graph , since every sequence forming an element of can be transformed into any other such sequence by simply permuting the coordinates .thus using standard techniques of the method of types , cf . we can already state our lower bound on .we need the fact that fixing the length the number of distinct types of a sequence over some fixed alphabet is only a polynomial funcion of ( cf . the type counting lemma 2.2 in ) , while the parameters we investigate are asymptotic exponents of some graph parameters that grow exponentially as tends to infinity . with this in mindwe can write [ thm : lowerb ] * proof .* using the above equalities , we obtain this gives the lower bound in the statement . note that sperner capacity is unkown for many graphs , so the lower bound above usually does not give a known numerical value . still , there are some examples of graphs where sperner capacity is known and is non - trivial .a basic example is the cyclically oriented triangle , or more generally , any cyclically oriented cycle .first we formulate a consequence of the above formula .[ cor : vtrans ] if is a vertex - transitive digraph then * proof .* let denote the uniform distribution on the vertex set of .if is vertex - transitive then by symmetry and .combining these equalities with theorem [ thm : lowerb ] we obtain and thus the statement . now we will use this corollary to determine the dilworth rate of the cyclically oriented -length cycle for every .note that the complement of a cyclically oriented cycle is a cyclically oriented cycle of the same length together with all diagonals as bidirected ( or equivalently , undirected ) edges . for there are no diagonals , so the cyclic triangle is isomorphic to its complement .the sperner capacity of the cyclic triangle was determined in , cf .also , and its value is .this result was generalized by alon , who proved that the sperner capacity of a digraph is bounded from above by where and stand for the maximum outdegree and maximum indegree of , respectively .the indegree and outdegree of a vertex is the number of edges at that are oriented towards or outwards , respectively . for a further generalization of alon s result . ) on the other hand , sperner capacity is bounded from below by ( the logarithm of ) the transitive clique number , the number of vertices in a largest transitively directed complete subgraph , denoted by .( this is an easy observation which implies that substituting by in the definition of sperner capacity gives the same value , i.e. it gives an alternative definition of sperner capacity , see and also proposition 4 and the remark following it in . 
)note that a transitively directed complete subgraph meant here is not necessarily induced .it is allowed that some reverse edges are also present on the same subset of vertices .[ cor : ck ] the dilworth rate of the cyclically oriented -cycle is * proof .* let the directed complement of be denoted by . since , alon s above mentioned result implies that the sperner capacity of is at most .it is easy to see that , so the lower bound mentioned above is also .since the above two bounds coincide , the sperner capacity of is equal to . using that is vertex transitive corollary [ cor : vtrans ] implies the statement . that corollary [ cor : ck ] shows that the dilworth rate is a true generalization of witsenhausen rate since if .[ defi : acyc ] call a subset of the vertex set of a directed graph acyclic if it induces an acyclic subgraph .the latter means that there is no oriented cycle on these vertices .the acyclicity number of a directed graph is the number of vertices in a largest acyclic subset of . note that unlike for a transitive clique we do not allow reverse edges in an acyclic subgraph .let be an odd number .the following tournaments ( oriented complete graphs ) are also generalizations of the cyclic triangle .let and is an edge iff for some .( figure 1 shows the tournament . ) note that it holds for every directed graph that reversing all of its edges does not change the value of either its sperner capacity or of its dilworth rate .this implies that if is a tournament then we have and . by the four values are equal . .] [ cor : tourn ] for all odd integers we have * proof .* we know that holds for all directed graphs ( cf . and also the discussion before corollary [ cor : ck ] for an equivalent statement concerning the complementary digraph ) .this gives by ( see the note before stating the corollary ) alon s result can be applied implying that our lower bound is sharp .since is vertex - transitive we can apply corollary [ cor : vtrans ] to complete the proof . observe that corollary [ cor : tourn ] shows not only that the value of the dilworth rate of an oriented graph may differ from the witsenhausen rate of the underlying undirected graph , but also that the difference can be arbitrarily large .indeed , denoting the complete graph on vertices by we have for every . the left hand side of the inequality is bounded above by , while the right hand side goes to infinity with .now we show that the ( logarithm of the ) _ dichromatic number _ defined in is an upper bound on the dilworth rate .[ def : dic ] the _ dichromatic number _ of a directed graph is the minimum number of acyclic subsets that cover . a partition of into acyclic subsets will be called a _ directed coloring _ or _dicoloring_. we note that an undirected edge ( meaning a bidirected edge ) is considered to be a -length cycle , therefore its two endpoints can not be both contained in an acyclic set .this shows that for undirected ( equivalently , symmetrically directed ) graphs the dichromatic number is equal to the chromatic number .we do not use the term `` acyclic coloring '' , because it is already used for a completely different concept , see . [ thm : upb ] for any directed graph * proof .* let us fix a directed coloring of digraph consisting of acyclic subsets ( `` color classes '' ) .for each let denote the color class that contains . 
now consider .it has ^t ] as a strengthening of the previous theorem we will show that we can also write a natural fractional relaxation of the dichromatic number on the right hand side above .( we could not prove this right away , as the weaker statement will be used in the proof . ) to prove this stronger statement we need some preparation , in particular we will use the following observations .first note , that holds for any digraph .this is simply because independent sets in are special acyclic sets , so any proper coloring of is also a directed coloring of .[ prop : submult ] the dichromatic number is submultiplicative with respect to the and product , i.e. in particular , ^t.\ ] ] a straightforward consequence of proposition [ prop : submult ] is that the limit {\chi_{\rm dir}(f^{\wedge t})} ] contains a directed cycle .( recall that ] contradicting the assumption that is acyclic .the above implies that is a directed coloring of .as it uses colors the statement is proved .[ lem : nemno ] for any digraph and positive integer we have ^k.\ ] ] * proof .* fix an arbitrary positive integer .we can write {\chi(\vec{f}^{\wedge mk})}\\ = & \lim_{m\to\infty}\sqrt[k]{\sqrt[m]{\chi([\vec{f}^{\wedge k}]^{\wedge m})}}\\ = & \sqrt[k]{\lim_{m\to\infty}\sqrt[m]{\chi([\vec{f}^{\wedge k}]^{\wedge m})}}\\ = & \sqrt[k]{r_{\rm d}(\vec{f}^{\wedge k } ) } , \end{array}\ ] ] that implies the statement. [ prop : asympt ] for any digraph we have {\chi_{\rm dir}(\vec{f}^{\wedge t})}=r_{\rm d}(\vec{f}).\ ] ] * proof . * by we have{\chi_{\rm dir}(\vec{f}^{\wedge t})}\le \lim_{t\to\infty}\sqrt[t]{\chi(\vec{f}^{\wedge t})}=r_{\rm d}(\vec{f}).\ ] ] for the reverse inequality we can write {r_{\rm d}(\vec{f}^{\wedge t})}\le\lim_{t\to\infty}\sqrt[t]{\chi_{\rm dir}(\vec{f}^{\wedge t})},\ ] ] where the equality follows by noticing that lemma [ lem : nemno ] above is valid for all positive integers and the inequality is a consequence of applied for . [ def : fracdic ] let the set of subsets of the vertex set inducing an acyclic subgraph in a digraph be .a function is called a _fractional directed coloring _ ( or fractional dicoloring ) if for we have the _ fractional dichromatic number _ of is where the minimum is taken over all fractional directed colorings of .note the obvious inequality for any digraph .we will need the following lemma .[ lem : dicfsub ] for any digraphs and we have * proof .* let and be optimal fractional directed colorings of and , respectively .we use the observation , already verified in the proof of proposition [ prop : submult ] , stating that if and then the direct product is in , i.e. induces an acyclic subdigraph in .now give the following weights to the acyclic sets of .if has a product structure , i.e. for some and , then let .if is not of this form , then let .for any we have thus is a fractional dicoloring of .now we have this completes the proof . 
[ cor : powfsub ] for any digraph and any positive integer we have ^t.\ ] ] also need the following result .[ prop : asyfrac ] for any digraph we have {\chi_{{\rm dir},f } ( \vec{f}^{\wedge t})}=r_{\rm d}(\vec{f}).\ ] ] for the proof we need some preparation .a hypergraph consists of a vertex set and an edge set , where the elements of are subsets of .a covering of hypergraph is a set of edges the union of which contains all elements of .let denote the minimum number of edges in a covering of .a fractional covering of a hypergraph is a function satisfying for every that .the fractional covering number is where the minimization is over all fractional covers . clearly , .lovsz proved in ( cf . also ) that where , that is the cardinality of a largest edge in . for a directed graph let where , i.e. it consists of the acyclic subsets of vertices in .it is straightforward that and while .thus the above result implies that * proof of proposition [ prop : asyfrac ] * we have {\chi_{{\rm dir},f}({\vec{f}}^{\wedge t})}\le \lim_{t\to\infty}\sqrt[t]{\chi_{\rm dir}({\vec{f}}^{\wedge t})}=r_{\rm d}({\vec{f}}) ] of our graph , that cover all vertices in ^{\wedge 2}) ] thus by lemma [ lem : nemno ] we get .let us denote the vertices of by in their cyclic order , so that in we have and the unique outneighbor of is .( that is , the outdegree vertex is , and thus the outdegree vertices in are and , while and have outdegree . ) the following six subsets of induce acyclic subgraphs of ^{\wedge 2}) ] the pair does not send an edge to vertex , therefore the corresponding set of vertices induces an acyclic subgraph in ^{\wedge 2} ] .then there are two -length source outputs , that is two sequences in ^{\wedge t}) ] and for which alice sends the same message when encoding either of them for the noiseless channel .the adjacency of and in ^{\wedge t} ] then alice can make bob able to decide for sure what the original message was. indeed , fix a proper coloring of ^{\wedge t} ] colors in advance that is known by both parties .encode each color by a ( distinct ) sequence of ^{\wedge t})\rceil] is connected to all those sequences that could result in the same sequence when sent through what can result in , all these sequences have a different color than in our coloring of ^{\wedge t} ] as stated . not every graph can appear as the closure graph of some directed graph .let be a(n undirected ) bipartite graph with . then can not be the closure graph of any directed graph .* let be the closure graph of a directed graph .observe that if has an edge connecting two vertices that were not adjacent ( in either direction ) in , then is contained in a triangle in .let be a bipartite graph with more edges than vertices .by bipartiteness contains no triangle , so if it is a closure graph of some graph , then is just a directed version of .if any vertex have indegree at least in that would generate a triangle in , so the closure graph could not be itself .since the sum of indegrees equals the number of edges , we can not avoid having a vertex with indeegree two if for any finite simple undirected graph , there exists a directed graph such that contains as an induced subgraph . * proof .* let be an arbitrary finite simple undirected graph .for every edge consider a new vertex .we add the oriented edges and to our graph .now delete the edges of thus obtaining a graph on vertex set containing only the oriented edges leading to some vertex .it is straightforward to see , that contains graph as an induced subgraph . 
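the construction in the last proof is easy to make explicit. the following sketch is purely illustrative; the function name and the edge-list representation are ours, not the paper's. it replaces every edge {u,v} of an undirected graph by a new vertex receiving one arc from u and one from v, and checks the two properties used in the proof: every arc points into a new vertex and every new vertex has indegree exactly two. by the proposition, the closure graph of the resulting digraph then contains the original graph as an induced subgraph.

```python
def orient_into_new_vertices(n, edges):
    """Build the digraph of the proof: each undirected edge {u, v} of a graph on
    vertices 0..n-1 is deleted and replaced by a fresh vertex w receiving the
    two arcs u -> w and v -> w."""
    arcs, next_vertex = [], n
    for (u, v) in edges:
        w = next_vertex
        next_vertex += 1
        arcs += [(u, w), (v, w)]
    return next_vertex, arcs          # total number of vertices, list of arcs

# example: the 4-cycle on vertices 0, 1, 2, 3
n_total, arcs = orient_into_new_vertices(4, [(0, 1), (1, 2), (2, 3), (3, 0)])

indegree = {v: 0 for v in range(n_total)}
for (_, v) in arcs:
    indegree[v] += 1
assert all(indegree[v] == 0 for v in range(4))           # original vertices receive no arcs
assert all(indegree[v] == 2 for v in range(4, n_total))  # each new vertex has indegree two
```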
a general problem concerning the dilworth rate is to determine it for specific directed graphs. since the analogous problem is difficult and mostly open for the related notions of shannon and sperner capacity, as well as for the witsenhausen rate, we cannot expect this problem to be easy. nevertheless, we have seen some digraphs for which it was solvable (at least when using some non-trivial results already established for sperner capacity). still, there are some directed graphs for which determining the dilworth rate seems particularly interesting. [ prob : altot ] what is the dilworth rate of the graph we presented in subsection [ subsec : dichrom ]? recall that tournaments play a special role in our setting, because they are exactly those oriented graphs whose complement is also an oriented graph (that is, one without bidirected edges). so it may be of particular interest how their dilworth rate behaves. is there a tournament for which is strictly smaller than ?
useful discussions with imre csiszár are gratefully acknowledged.
j. körner, coding of an information source having ambiguous alphabet and the entropy of graphs, in: _ transactions of the 6th prague conference on information theory, etc. _, 1971, academia, prague, (1973), 411-425.
g. simonyi, graph entropy: a survey, in: _ combinatorial optimization _, (w. cook, l. lovász, p. seymour eds.), dimacs series in discrete mathematics and theoretical computer science, volume 20, ams, providence, ri, 1995, 399-441.
we investigate a communication setup where a source output is first sent through a free noisy channel and an additional codeword is later sent through a noiseless but expensive channel. with the help of the second message the decoder should be able to decide with zero error whether its decoding of the first message was error-free. this scenario leads to the definition of a digraph parameter that generalizes witsenhausen's zero-error rate to directed graphs. we investigate this new parameter for some specific directed graphs and explore its relations to other digraph parameters such as sperner capacity and the dichromatic number. when the original problem is modified to require zero-error decoding of the whole message, we arrive back at the witsenhausen rate of an appropriately defined undirected graph.
keywords: zero-error, graph products, sperner capacity, dichromatic number, witsenhausen rate
in these lectures we explore the subject of nonequilibrium dynamics . before getting into any kind of detaillet us first establish what we mean by a nonequilibrium system .this is best done by taking stock of our understanding of an equilibrium system .consider the canonical ( boltzmann ) distribution for a systems with configurations labelled each with energy : where .the task is to calculate the partition function from which all thermodynamic properties , in principle , can be computed .the distribution ( [ pcan ] ) applies to systems in thermal equilibrium _i.e. _ free to exchange energy with an environment at temperature .it can easily be generalised to systems free to exchange particles , volume etc but it always relies on the concept of the system being at equilibrium with its environment .if one were interested in dynamics , for example to simulate the model on a computer , one might choose transition rates between configurations to satisfy where is the transition rate from configuration to .condition ( [ db ] ) is known as detailed balance and guarantees ( under the assumption of ergodicity to be discussed below ) that starting from some nonequilibrium initial condition the system will eventually reach the steady state of thermal equilibrium given by ( [ pcan ] ) .we will discuss further this dynamical relaxation process and the properties of the steady state endowed by the detailed balance condition in section [ sec : theory ] . for the moment we notethat a system relaxing to thermal equilibrium is one realisation of a nonequilibrium system . in recent yearssuch relaxation dynamics have been of special interest , for example , in the study of glassy dynamics whereby , on timescales realisable in experiment ( or simulation ) , the system never reaches the equilibrium state and it is a very slowly evolving nonequilibrium state that is observed .this is sometimes referred to as ` off - equilibrium ' dynamics .also let us mention the field of domain growth whereby an initially disordered state is quenched ( reduced to a temperature below the critical temperature for the ordered phase ) and relaxes to an ordered state through a process of coarsening of domains .the interesting physics lies in the scaling regime of the coarsening process which is observed before the equilibrium ( ordered ) state is reached .the other meaning of nonequilibrium refers to a system that reaches a steady state , but not a steady state of thermal equilibrium .examples of such nonequilibrium steady states are given by driven systems with open boundaries where a mass current is driven through the system .thus the system is driven by its environment rather being in thermal equilibrium with its environment .a pragmatic definition of a nonequilibrium system that encompasses all of the scenarios above is as a model defined by its dynamics rather than any energy function _i.e. _ the configurations of the model are sampled through a local stochastic dynamics which _ a priori _ does not have to obey detailed balance .these notes broadly follow the four lectures given at the summer school .in addition a tutorial class was held to explore points left as exercises in the lectures . 
in the presentnotes these exercises are included in a self - contained form that should allow the reader to work through them without getting stuck or else leave them for another time and continue with the main text .the notes are structured as follows : in section 2 we give an overview of two simple models that we are mainly concerned with in these lectures . in section3 we then set out the general theory of the type of stochastic model we are interested in and point out the technical difficulties in calculating dynamical or even steady - state properties .section 4 is an interlude in which we introduce , in a self - contained way , a mathematical tool the -deformed harmonic oscillator that will prove itself of use in the final two sections . in section 5we present the solution of the partially asymmetric exclusion process and amongst other things how the phase diagram ( figure [ fig : taseppd ] ) is generalised . in section[ sec : sbac ] we discuss the exact solution of a stochastic ballistic annihilation and coalescence model .in this work we will focus on two exemplars of nonequilibrium systems : the partially asymmetric exclusion process and a particle reaction model .these models have been well studied over the years and a large body of knowledge has been built up .we introduce the models at this point but will come back to these models in more detail in sections [ sec : pasep ] and [ sec : sbac ] in which we summarise some recent analytical progress .the asymmetric simple exclusion process ( asep ) is a very simple driven lattice gas with a hard core exclusion interaction .consider particles on a one - dimensional lattice of length say .at each site of the lattice there is either one particle or an empty site ( to be referred to as a vacancy or hole)there is no multiple occupancy .the dynamics are defined as follows : during each time interval each particle has probability of attempting a jump to its right and probability of attempting a jump to its left ; a jump can only succeed if the target site is empty . in this work will be concerned with the case so that we obtain continuous time dynamics with particles attempting hops to the right with rate 1 and hops to the left with rate . in this limitno two particles will jump at the same time .the other limit of would correspond to fully parallel dynamics where all particles attempt hops at the same time .this type of dynamics is employed ( for ) in traffic flow modelling ( see schadschneider this volume ) . to complete the specification of the dynamics we have to fix the boundary conditions .it turns out these are of great significance .the following types of boundary conditions have been considered : periodic : : in this case we identify site with site .the particles then hop around a ring and the number of particles is conserved .open boundaries : : in this case particles attempt to hop into site with rate .a particle at site leaves the lattice with rate .thus the number of particles is not conserved at the boundaries .one can also understand these boundary conditions in terms of a site 0 being a reservoir of particles with fixed density and site being a reservoir with fixed density . we shall primarily be concerned with these boundary conditions because , as first pointed out by krug , they can induce phase transitions .the dynamics is illustrated in figure [ fig : pasep ] .closed boundaries : : in this case the boundaries act as reflecting walls and particles can not enter or leave the lattice . 
thus one has a zero current of particles in the steady state and the system obeys detailed balance. we shall not be interested in this case .infinite system : : finally we could consider our lattice of size as a finite segment of an infinite system .the macroscopic properties of the asep in which we are interested are the steady - state current ( flux of particles across the lattice ) and density profile ( average occupancy of a lattice site ) .these can be expressed in terms of the binary variables , where if site is occupied by a particle and if site is empty , and which together completely specify a microscopic configuration of the system .then , the density at site is defined as where the angular brackets denote an average over all histories of the stochastic dynamics .one can think of this average as being an average over an ensemble of systems all starting from the same initial configuration at time 0 .let us consider for the moment the totally asymmetric dynamics ( _ i.e. _ no backward particle hops are permitted ) .one can use the ` indicator ' variables intuitively to write down an equation for the evolution of the density .for example , if we choose a non - boundary site ( so that we can avoid prescribing any particular boundary conditions ) we obtain note that gives the probability that site is occupied and site is empty .thus , since particles hop forward with rate 1 , first term on the right hand side gives the rate at which particles enter site and the second term gives the rate at which they leave site .if one needs more convincing of the argument used to obtain ( [ ev1 ] ) consider what happens at site in an infinitesimal interval : \tau_i(t ) + \left [ 1-\tau_i(t ) \right ] \tau_{i-1}(t ) & { \ensuremath{\mathrm{d}}}t \\[0.2ex ] \tau_{i}(t)\tau_{i+1}(t ) & { \ensuremath{\mathrm{d}}}t \end{array } \right .\;.\ ] ] the first equation comes from the fact that with probability ( dropping terms of ) , neither of the sites or is updated and therefore remains unchanged . the second andthe third equations correspond to updating sites and respectively . if one averages ( [ rule ] ) over the events which may occur between and and all histories up to time one obtains ( [ ev1 ] ) .the same kind of reasoning allows one to write down straight away an equation for the evolution of . or for any other correlation function .these equations are exact and give in principle the time evolution of any correlation function .however , the evolution ( [ ev1 ] ) of requires the knowledge of which itself ( [ ev2 ] ) requires the knowledge of and so that the problem is intrinsically an -body problem in the sense that the calculation of any correlation function requires the knowledge of all the others .this is a situation quite common in equilibrium statistical mechanics where , although one can write relationships between different correlation functions , there is an infinite hierarchy of equations which in general makes the problem intractable .for the case of open boundary conditions the number of particles in the system is not conserved .the evolution equations ( [ ev1]-[ev2 ] ) are then valid everywhere in the bulk but they are modified at the boundary sites . 
for example ( [ ev1 ] ) becomes in the steady state , where the time derivatives of correlation functions are zero , ( [ ev1],[ev3],[ev4 ] ) can be rewritten as a conserved current these equations simply express that for the density to be stationary one must have the current into any site equal to the current out of that site .for example , is the probability that the leftmost site is empty multiplied by the rate at which particles are inserted onto it , and hence yields the current of particles entering the system .similarly is the rate at which particles leave the system . to motivate the study of this modellet us consider a few applications .first we may consider the asep as a very simplistic model of traffic flow with no overtaking ( more realistic generalisations are discussed by schadschneider , this volume ) .joining many open boundary systems together into a network forms a very simple model for city traffic .such a network appears to represent faithfully real traffic phenomena such as jamming .a major motivation for the study of the asep is its connection to interface dynamics and hence to the kpz equation ( noisy burgers equation ) .the kpz equation is a stochastic non - linear partial differential equation that is central to much of modern statistical physics . in these lectureswe do not have time to develop the theory of the kpz equation , rather we refer the reader to . herewe just make clear the connection to a particular interface growth model . the asymmetric exclusion model may be mapped exactly onto a model of a growing interface in dimensions as shown in figure [ fig : singlestep ] .the mapping is obtained by associating to each configuration of the particles a configuration of an interface : a particle at a site corresponds to a downwards step of the interface height of one unit whereas a hole corresponds to an upward step of one unit .the heights of the interface are thus defined by the dynamics of the asymmetric exclusion process in which a particle at site may interchange position with a neighbouring hole at site , corresponds to a growth at a site which is a minimum of the interface height _i.e. _ if then a growth event turns a minimum of the surface height into a maximum thus the asep maps onto what is known as a single - step growth model meaning that the difference in heights of two neighbouring positions on the interface is always of magnitude one unit .since whenever a particle hops forward the interface height increases by two units , the velocity at which the interface grows is related to the current in the asymmetric exclusion process by periodic boundary conditions for the particle problem with particles and holes correspond to an interface satisfying , _ i.e. _ to helical boundary conditions with an average slope .the case of open boundary conditions corresponds to special growth rules at the boundaries . because of this equivalence, several results obtained for the asymmetric exclusion process can be translated into exactly computable properties of the growing interface .one can also go on to list further applications . indeed early applications concerned biophysical problems such as single - filing constraint in transport across membranes and the kinetics of biopolymerisation .let us now discuss some interesting results that have been derived over the last decade for the open boundary system .for the moment we present them without proof , although we shall recover some of them in the analysis of later sections . 
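before turning to the exact results it is useful to have a direct numerical handle on the open-boundary model. the following python sketch is an illustrative aside, not part of the original lectures; the random-sequential update scheme and the variable names are our choices. it simulates the totally asymmetric process with injection rate alpha and extraction rate beta and measures the current through a bulk bond together with the mid-lattice density; deep in the low-density phase these should come out close to alpha(1-alpha) and alpha respectively, up to finite-size and sampling errors.

```python
import random

def tasep_steady_state(L=100, alpha=0.25, beta=0.75, sweeps=200000, warmup=50000):
    """Random-sequential Monte Carlo for the totally asymmetric exclusion process with
    open boundaries: injection rate alpha at the left, extraction rate beta at the right,
    bulk hops to the right with rate 1."""
    tau = [0] * L                       # occupation numbers tau_i
    hops, density, measured = 0, 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L + 1):          # one sweep = L+1 attempted moves = one unit of time
            i = random.randint(0, L)    # i = 0 entry bond, i = L exit bond, otherwise bulk bond
            if i == 0:
                if tau[0] == 0 and random.random() < alpha:
                    tau[0] = 1                          # injection
            elif i == L:
                if tau[L - 1] == 1 and random.random() < beta:
                    tau[L - 1] = 0                      # extraction
            elif tau[i - 1] == 1 and tau[i] == 0:
                tau[i - 1], tau[i] = 0, 1               # bulk hop across bond (i-1, i)
                if i == L // 2 and sweep >= warmup:
                    hops += 1
        if sweep >= warmup:
            density += tau[L // 2]
            measured += 1
    return hops / measured, density / measured

current, rho = tasep_steady_state()
print(current, rho)   # expect roughly alpha*(1-alpha) = 0.1875 and rho close to alpha = 0.25
```

the discretisation is crude (one attempted move per bond per unit time) but it is sufficient to see the phases and the boundary-controlled density profiles discussed next.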
in figure[ fig : taseppd ] we present the phase diagram ( corresponding to the limit ) for the totally asymmetric exclusion process _i.e. _ when as predicted within a mean - field approximation .it should be noted that we consider the steady state in the limit , thus it is implicit that we have taken the limit first .note there are three phases by phase it is meant a region in the phase diagram where the current has the same analytic form ; phase boundaries divide regions where the current and bulk density ( defined as the mean occupancy of a site near the centre of the lattice as ) have different forms : for and , for and , for and , consider the high density phase . in this phasethe exit rate is low and the bulk density is controlled by the right boundary .one can think of this as queue of traffic formed at a traffic light that does nt let many cars through at a time .the low density phase is the opposite scenario where the left boundary controls the density through a low input rate of particles .one can think of this as cars being let onto an empty road only a few at a time .finally in the maximal current phase the current of particles is saturated _i.e. _ increasing or any further does not increase the current or change the bulk density . in this phase the density of particles decays from its value at the left boundary to its value in the bulk as where is the distance from the left boundary .this phase diagram of the model has many interesting features .firstly we have both continuous ( from high density or low density to maximal current ) and discontinuous ( from low density to high density ) phase transitions even though it is a one - dimensional system . in the former transitions there are discontinuities in the second derivative of the current . in the latteralthough the current is continuous across the phase transition its first derivative and the bulk density jump . along the transition line one has coexistence between a region of the high density phase adjacent to the right boundary and a region of the low density phase adjacent to the left boundary .these results are very suggestive that the current can be thought of as some free energy .we shall see later through an exact solution how this can be quantified. also note that throughout the maximal current phase one has long - range , power - law correlations functions _i.e. _ long - range correlations are generic .this contrasts with usual behaviour in equilibrium systems where power - law correlations are seen at non - generic , critical points ( an exception is the kosterlitz - thouless phase that may emerge at low temperature in two - dimensional equilibrium systems ) .we now have a look at reaction - diffusion systems wherein particles can react with each other to form some by - product or more simply annihilate each other . well - studied reaction systems are the first reaction signifies that when two particles meet they react and annihilate each other , the second that the two reagents coalesce . 
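both reactions are straightforward to simulate directly; the sketch below is an illustrative aside, not part of the original notes, and puts annihilating random walkers on a ring so that the t^{-1/2} density decay quoted a little further on can be checked numerically. the coalescing version is obtained by keeping one of the two particles instead of removing both.

```python
import random

def annihilating_walkers(L=10000, rho0=0.2, t_max=2000.0):
    """Diffusing particles on a periodic 1d lattice of L sites; a particle hopping onto
    an occupied site annihilates together with the particle already there (A + A -> 0)."""
    occupied = set(random.sample(range(L), int(rho0 * L)))
    t, next_sample, samples = 0.0, 1.0, []
    while occupied and t < t_max:
        t += 1.0 / len(occupied)                  # every particle hops at rate 1
        x = random.choice(tuple(occupied))
        y = (x + random.choice((-1, 1))) % L
        occupied.remove(x)
        if y in occupied:
            occupied.remove(y)                    # annihilation
        else:
            occupied.add(y)
        if t >= next_sample:
            samples.append((t, len(occupied) / L))
            next_sample *= 2
    return samples

for t, rho in annihilating_walkers():
    print(t, rho)       # at late times rho decays roughly as t**(-1/2)
```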
these systems model molecular reactions and the dynamics of laser induced quasiparticles known as excitons .however the particles in this type of system can also represent composite objects such as aggregating traffic jams or , as we shall see , domain walls in phase - ordering kinetics .consider first the reaction .clearly the steady state of such a system is the uninteresting dead state where all particles ( except perhaps one ) have been annihilated .the true interest lies in the late time regime wherein the system may enter a _ scaling _ regime . by thisit is meant that the system is statistically invariant under rescaling by a typical length scale that depends on time . in this casethe length scale can be taken as the typical distance between particles . for diffusing particles ,known results are that for the typical distance between particles grows as or equivalently the density of a particles decreases as .we shall recover this result in the next section .the result differs from a naive mean - field description obtained by assuming which would predict .a similar density decay law holds for the process and it has been shown that the two processes are basically equivalent .in addition to having a range of applications , diffusive reaction systems have served as prototypes for the development of a variety of theoretical tools including field - theoretic techniques and the renormalisation group and exact methods in one dimension .one can also consider _ ballistic _ particle motion . in this class of models ,particles move deterministically with constant velocity and on meeting have some probability of coalescing into one particle or annihilating .a seminal paper by elskens and frisch that considered the one - dimensional deterministic case where right moving and left moving particles always annihilate on contact , showed the decay depends on the initial densities of the two species .in particular , only if there are equal initial densities of the right - moving and left - moving particles .ballistic models apply to chemical reactions when the inter - reactant distance is less than what would be the mean free path of particles .such models may also be applied to various domain growth problems and as we shall describe in section 6 , the smoothing of a growing interface .having now given a rough overview of some of the interest in low dimensional nonequilibrium systems , the rest of these lectures will be aimed at a given a deeper understanding of how these phenomena come about in some of the simple models we have discussed . in this section we give a brief introduction to stochastic processes focussing on particular aspects that will be relevant to the models to be studied in detail in later sections .we consider a system that can be in a finite number of microscopic configurations and whose configuration changes according to transition rates .the simplest example is of a particle performing a random walk in continuous time on a one - dimensional lattice : the particle hops to the right with rate and to the left with rate .( the meaning of a ` rate ' is that in an infinitesimal time interval an event happens with probability . ) since probability is conserved we can write a continuity equation for the probability which is referred to as ` the master equation ' the first two terms on the right hand side represent ` rates in ' from other configurations ( _ i.e. 
_ different positions of the particle ) ; the last term represents the rate out of the given configuration .note that the master equation is linear in the probabilities . for the case of the random walker ,the master equation ( [ rw ] ) can be solved analytically : it is a diffusion equation in discrete space but continuous time .if we consider a periodic lattice of sites ( site is identified with site 1 ) the preferred method is to calculate the fourier modes .defining yields from ( [ rw ] ) thus the steady state corresponds to the mode ( ) .the eigenvalues yield decay times of each mode . for finite and large the equilibration time goes like where the dynamic exponent as expected of a diffusion process .random walker on the 1d lattice review the solution of equation ( [ rw ] ) using fourier modes described above . show that if the initial condition is the probability distribution at time is given by where is given by ( [ lambdak ] ) if you are unfamiliar with the discrete fourier transform , you should first show that for and hence where is as given by equation ( [ pfourier ] ) .consider now the same random walk problem but on an infinite rather than periodic lattice .show from equation ( [ rw ] ) that the generating function obeys f(z , t)\ ] ] and hence \exp[\sqrt{pq}(z+z^{-1})t]\;. \label{ex : rwgen}\ ] ] given that the generating function of modified bessel functions of the first kind is defined as \;,\ ] ] show by expanding ( [ ex : rwgen ] ) that the general solution for the random walk problem is \sum_{\ell = -\infty}^{\infty } \left ( \frac{p}{q } \right)^{\!\frac{x-\ell}{2 } } p(\ell , 0 ) i_{x-\ell}(2 \sqrt{pq}\ , t ) \;.\ ] ] some of the properties of the modified bessel functions can be gleaned by considering the special case of a symmetric random walker that begins at the origin , _ i.e. _ the case and .then , .show then that the property immediately follows , as does the fact that .also , show from ( [ rw ] ) that the bessel functions satisfy the differential equation \;.\ ] ] more generally we consider ` many - body ' problems .for example a system of many particles on a lattice obeying stochastic dynamical rules . for concretenesswe consider the simple example of the one - dimensional ising model in which at each site there is a spin and periodic boundary conditions are imposed . as mentioned in the introduction the dynamics of an equilibrium modelis usually chosen to satisfy detailed balance . in the case of the ising model under spin - flip dynamics , whereby each spin can flip with a certain rate according to the directions of the neighbouring spins , the detailed balance condition on the spin - flip rates of spin at site reads \label{isingdb}\ ] ] ( we have taken ) . in _ glauber_ dynamics ( [ isingdb ] ) is satisfied by choosing } { 2}\ ] ] as is easily verified .then , in the limit the spin flips for a down - spin ( ) at site occur with the rates indicated in figure [ isingt=0 ] and similarly for up - spins at . note the final transition where the spin flips against both its neighbours is forbidden .thus domains of aligned spins can not break up .the domain walls between aligned domains _i.e. _ neighbouring pairs of spins or make random walks on the bonds of the original lattice and annihilate when they meet . 
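the zero-temperature rules just described take only a few lines to simulate, and doing so makes the coarsening picture very concrete. the sketch below is illustrative only (the update scheme and variable names are ours): it evolves a ring of ising spins with the t = 0 glauber rates of figure [ isingt=0 ], so that a spin always aligns with two agreeing neighbours, flips with rate 1/2 when its neighbours disagree, and never flips against both. the recorded domain-wall density should fall off roughly as t^{-1/2}.

```python
import random

def glauber_T0(L=10000, sweeps=4096):
    """Zero-temperature Glauber dynamics of the 1d Ising chain on a ring.
    Domain walls (bonds joining opposite spins) diffuse and annihilate on meeting."""
    s = [random.choice((-1, 1)) for _ in range(L)]
    wall_density = lambda: sum(s[i] != s[(i + 1) % L] for i in range(L)) / L
    t, next_sample, samples = 0, 1, []
    while t < sweeps:
        for _ in range(L):                        # one Monte Carlo sweep = one unit of time
            i = random.randrange(L)
            h = s[i - 1] + s[(i + 1) % L]         # sum of the two neighbouring spins
            if h != 0:
                s[i] = 1 if h > 0 else -1         # align with agreeing neighbours (rate 1)
            elif random.random() < 0.5:
                s[i] = -s[i]                      # neighbours disagree: flip with rate 1/2
        t += 1
        if t == next_sample:
            samples.append((t, wall_density()))
            next_sample *= 2
    return samples

for t, nw in glauber_T0():
    print(t, nw)        # the domain-wall density decays roughly as t**(-1/2)
```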
as noted above ,this process of diffusing ` particles ' ( here the domain walls ) that annihilate on meeting is sometimes denoted .the solution for the mean domain length has been known for some time basically it can be solved by considering the random walk performed by the length of a domain ( distance between neighbouring particles ) . to see the solution for the mean domain size within the kinetic ising model consider the master equation which can be written where is a configuration of the ising spins and is the same configuration with spin flipped .the number of domain walls is given by so to study the coarsening process , in which domain walls are eliminated and aligned domains increase in size , one considers the equal time correlation function where the average indicates an average over initial conditions and possible histories of the stochastic dynamics .it can be shown that obeys a discrete diffusion equation with boundary condition .the fact that we obtain a closed set of equations for two point correlation functions is a very happy property that allows one to solve easily for these correlation functions basically ( [ ceq ] ) is an equation of the same form as ( [ rw ] ) but with a boundary condition corresponding to a source of walkers at site 0 : see for details .note that the method generalises to finite temperature but not to higher dimensions .actually higher point correlation functions are more difficult to calculate ; nevertheless it has been shown that the one - dimensional glauber model can be turned into a free fermion problem which then implies , in principle at least , that any correlation function can be calculated .remarkably , the full domain size probability distribution has been explicitly calculated for the one - dimensional -state potts model , which includes the ising model as the case .( note that a calculation such as exercise 2 only calculates the first moment of the domain sizes ) .the dynamics of domain walls in the -state potts model , represented as particles , corresponds to diffusion and , on meeting , reaction according to {ll } a \quad \mbox { with probability}\quad ( q-2)/(q-1)\\ \emptyset \quad \mbox{with probability}\quad 1/(q-1 ) \end{array } \right.\ ] ] to understand this equation note that when a domain is elimated each of the neighbouring domains may be in one of states so that they have probability of being in the same state and therefore coalescing .thus for we recover and for ( infinite state potts model ) we recover .time dependence of the 1d ising model the mean magnetisation in the 1d ising model is defined as where is the number of spins on the lattice , is the set of all possible spin combinations and is the value of the spin at site , in configuration .show for general single spin - flip dynamics as described by ( [ kineticisingmaster ] ) that the magnetisation obeys the differential equation under glauber dynamics defined by equation ( [ glauberrates ] ) show that and hence that ) ] . under zero - temperature glauber dynamics ,the evolution of is described by the set of difference equations ( [ ceq ] ) which has the stationary solution .verify this solution , and explain its physical meaning . 
to obtain the time - dependence of introduce a function through .the time - evolution is governed by equation ( [ rw ] ) with , and subject to the boundary condition .this boundary condition can be satisfied through an appropriate choice of the initial conditions in the ` unphysical ' region ( this is equivalent to using the method of images ) . using the general solution ( [ ex : rwsoln ] ) andthe properties of the modified bessel functions derived in exercise 1 , show that if .hence show that \ ] ] is the solution for the spin correlation function that has and with the initial spin orientations uncorrelated , _i.e. _ .show that this implies that for these initial conditions , 1\!\!1 ] yields also the hamiltonian ( [ ho ] ) becomes the operators are , of course , raising and lowering operators .more generally they are bosonic operators , the defining property of which is the commutation relation ( [ boson ] ) . in the number ( or energy )basis they obey thus in this basis and the hamiltonian is diagonal .the number labels the energy eigenstates .the quantised energy excitations obey bose statistics and the operators ( [ energybasis ] ) create and annihilate bosons respectively .the hermite polynomials come into play when we consider the projection of an energy eigenstate onto the position eigenbasis defined by then where is the hermite polynomial of degree .we now consider a ` deformation ' of ( [ boson ] ) by introducing a parameter observe that taking recovers bosonic operators whereas gives fermionic operators .thus it was originally hoped that the -deformed operators might describe some new excitations interpolating between fermions and bosons .the operators and now operate on basis vectors ( with ) as follows : thus , which is the analogue of the energy operator , is diagonal in this basis the eigenstates of the ` position operator ' are given by finally , we identify the -hermite polynomials as using ( [ adqaction],[aqaction ] ) and ( [ qherm ] ) it is straightforward to derive a recurrence relation this can be compared to the usual recurrence relation for hermite polynomials to make the connection between -hermite and ordinary hermite polynomials we would like the recurrence relations to coincide in the limit .the prescription is in which both the hermite polynomial and the independent variable are transformed .the factors of that appear can be traced back to the fact that we set when doing the quantum mechanics , but we take in the -deformed case . explicit formul for can be found using a generating function technique , the details of which differ slightly depending on whether or . herewe discuss in detail the case .first we define a generating function for the -hermite polynomials where have introduced a ` -factorial ' notation defined through we later encounter products of these factorials for which we use a standard shorthand : the -factorial in ( [ eqn : gdef ] ) is introduced for convenience as will now become apparent .we obtain a functional relation for by multiplying both sides of equation ( [ qhreccur ] ) by and performing the required summations : it is convenient to parameterise the ` position ' eigenstates by an angle where and , and to replace by ( a function of ) . 
then ( [ eqn : gfuncrel ] ) becomes since we can iterate to obtain where we have used the -factorial notation ( [ qfacgen ] ) .note that is just which we are free to set to .the infinite product has a well - known and easy to verify series representation valid for , from which , with a little effort , we may extract the form of . expanding both sides of ( [ eqn : gnice ] ) in and comparing coefficients we find }{0pt}{}{\,n\,}{\,k\,}_{\!q } } } e^{i(n-2k)\theta}\ ] ] where }{0pt}{}{\,n\,}{\,k\,}_{\!q}}} ] .verify also that when and converges , it can also be written as hint : this is most easily achieved by checking that both the series and product representations of satisfy the functional relation and that they are numerically equal for a special value of .finally , show that where is the generating function of -hermite polynomials ( [ eqn : gdef],[eqn : gnice ] ) .we now consider in more detail the partially asymmetric exclusion process ( pasep ) introduced in section [ sec : asepintro ] . recall that we discussed some of the properties of the special case where the reverse hop rate was set to zero and in particular we presented a phase diagram for the model ( figure [ fig : taseppd ] ) .we now describe how one obtains such a phase diagram exactly for general .we do this by relating the steady - state probability distribution of the model to products of matrices which can be mapped onto the -deformed harmonic oscillator ladder operators . in this sectionwe review the matrix approach to finding the steady state of the model .the approach has been the subject of lectures at a previous summer school in this series and the reader is referred there for further details . herethe bare essentials of the method are outlined .consider first a configuration of particles and its steady - state probability .we use as an ansatz for an ordered product of matrices where if site is occupied and if it is empty . to obtain a probability ( a scalar value ) from this matrix product ,we employ two vectors and in the following way : note that here the matrices , bra and ket vectors are in an auxiliary space and are not related to the space of configurations and the probability ket vector we considered earlier .the factor is included to ensure that is properly normalised .this latter quantity , analogous to a partition function as was discussed in section [ sec : theory ] , has the following simple matrix expression through which a new matrix is defined : note that if and do not commute is a function of both the number and position of particles on the lattice , as expected for a non - trivial steady state .the algebraic properties of the matrices can be deduced from the master equation for the process .it can be shown that sufficient conditions for equation ( [ eqn : pansatz ] ) to hold are in this formulation steady state correlation functions are easily expressed .for example , the mean occupation number ( density ) of site may be written as to get a feel for why the conditions ( [ decom][dv ] ) give the correct steady state consider the current of particles between sites and . in the matrix product formulation the current becomes now using ( [ decom ] ) and the fact that we find we see that , as is required in the steady state , the current is independent of the bond chosen . 
to see that the current into site one reduces to the same expression we use ( [ ew ] ) also the current out of site reduces to the same expression using ( [ dv ] ) note that the fact that all the above expressions for the current are equivalent is a necessary condition for the matrix formulation to be correct .a sufficient condition is to check that the master equation ( [ meqn : solveme ] ) is satisfied for all configurations .the algebra ( [ decom][dv ] ) allows one to do this systematically .our task now is to evaluate the matrix products in the above expressions for , and by applying the rules ( [ decom][dv ] ) . in the case treated by using ( [ decom ] ) repeatedly to ` normal - order ' matrix products : that is , to obtain an equivalent sum of products in which all matrices appear to the left of any matrices .for example , consider powers of then finding a scalar value would be straightforward using ( [ ew ] ) and ( [ dv ] ) : the difficulty with this approach lies in the combinatorial problem of finding the coefficients .this was solved for in . however , the solution ( and the problem of actually calculating the current and correlation functions ) for arbitrary remained open for some time although the phase diagram was conjectured . recently the problem has been overcome by making the connection with the -hermite polynomials discussed in section [ sec : qdhp ] . to make the connection with the -deformed harmonic oscillator ,let us define one finds that the algebraic rules ( [ decom][dv ] ) reduce to thus and are related to the -bosonic operators discussed in section [ sec : qdhp ] ; such a relationship was first pointed out in . also we see that and are eigenvectors of the -bosonic operators .such eigenvectors are known as coherent states ( see exercise 3 ) . in the oscillator s `` energy '' eigenbasis ( [ energybasis ] ) one can check that they have components for the moment we consider , so that the vectors ( [ eqn : vwofn ] ) are normalisable .we introduced earlier a matrix which appears in the expressions for the mean particle density and current .we now see that this matrix can be written as a linear combination of the identity and the `` position '' operator : as we saw in section [ sec : qdhp ] the eigenstates of the oscillator in the co - ordinate representation are continuous -hermite polynomials. clearly , the eigenvectors of are the same as those for and therefore knowledge of them permits diagonalisation of .let us illustrate how to apply the procedure to obtain an expression for the normalisation when .first we insert a complete set of states into the expression for the normalisation ( [ eqn : zdef ] ) : by design , the matrix is acting on its eigenvectors , so using ( [ eqn : cofx ] ) and we obtain now we know the form of and in the basis ( [ eqn : vwofn ] ) .therefore we insert a complete set of the basis vectors in ( [ eqn : zint ] ) to find observe that the final sum in this equation is nothing but ( [ eqn : gdef ] ) , the generating function of the -hermite polynomials ! thus ,when and , we may write where is given by expression ( [ eqn : gnice ] ) . 
putting all this together , we arrive at an exact integral form for the normalisation ^n g(\theta , w ) g(\theta , v)\ ] ] which , written out more fully and using the notation ( [ qfacgen ] ) , reads ^n \frac{(e^{2i\theta},e^{-2i\theta};q)_\infty } { ( ve^{i\theta},ve^{-i\theta},we^{i\theta},we^{-i\theta};q)_\infty } \;.\ ] ] when or equation ( [ eqn : zint : q<1:short ] ) is not well - defined because does not converge when .rather than finding a representation of the quadratic algebra ( [ decom][dv ] ) that does not suffer from this problem , one can simply analytically continue the integral ( [ eqn : zint : q<1:long ] ) to obtain when or takes on a value greater than one .since a contour integral is defined by the residues it contains , to analytically continue the integral one simply has to follow the poles of the integrand as they move in and out of the original integration contour ( see exercise 4 ) .one can also apply the procedure of the previous section to find an integral representation of the normalisation for the case of .the only difference is that we must use the polynomials .it turns out that this amounts to substituting by with the limits on now running from to and using the weight function ( [ eqn : nu : q>1 ] ) ^n g(u , w ) g(u , v ) \;.\ ] ] it is possible to derive an alternative expression for which takes the form of a finite sum rather than an integral and is valid for _ all _ values of the model parameters .such an expression implies the solution of the combinatorial problem of reordering operators under the rule ( [ decom ] ) .in the formula ( eq .55 in that paper ) was derived by explicitly evaluating the integral ( [ eqn : zint : q<1:short ] ) using further properties of -hermite polynomials and their generating functions . the phase diagram for the partially asymmetric exclusion processis now obtained by calculating the current through the relation and the above exact expressions for the normalisation ..[table : forward]the forms of the particle current in the forward biased phases . [ cols="^,^,^ " , ] for the case of _ forward bias _ ( ) in which particle hops are biased from the left boundary ( where particles are inserted ) to the right boundary ( where they are removed ) one expects a nonvanishing current in the thermodynamic limit .this is calculated by applying the saddle - point method to the integral ( [ eqn : zint : q<1:long])see exercise 4 .it turns out that for large then in the thermodynamic limit .phase transitions arise from different asymptotic forms of the current due to the analytic continuation ( see exercise 4 ) for the integral ( [ eqn : zint : q<1:long ] ) .ultimately one finds the expressions presented in table [ table : forward ] and thence the phase diagram shown in figure [ fig : paseppd ] .note that the current as given by ( [ jz ] ) is a ratio of two partition functions of size and . in equilibriumstatistical mechanics ( in the large limit ) this ratio is equivalent to the fugacity ( within the grand canonical ensemble ) . recalling that and the chemical potential is equivalent to the gibbs free energy per particle , gives credence to our claim in section 2.1.4 that the current acts as a free energy for the system .the phase diagram for the current in the forward - bias regime of the pasep . 
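the asymptotic currents of table [ table : forward ] can be checked against small systems without any of the q-oscillator machinery: for a handful of sites the full master equation can be written down and its stationary null vector extracted numerically. the sketch below is an illustrative aside; it uses numpy, defaults to the totally asymmetric case q = 0, and is a brute-force check rather than the matrix-product calculation. it measures the current through a bulk bond; the finite-size values drift towards alpha(1-alpha), beta(1-beta) or 1/4 as the number of sites grows.

```python
import itertools
import numpy as np

def pasep_current(N=8, alpha=0.6, beta=0.6, q=0.0):
    """Exact stationary current of the open exclusion process on N sites, obtained from the
    null vector of the master-equation generator.  Forward hop rate 1, backward hop rate q,
    injection rate alpha on the left, extraction rate beta on the right."""
    configs = list(itertools.product((0, 1), repeat=N))
    index = {c: k for k, c in enumerate(configs)}
    M = np.zeros((len(configs), len(configs)))
    def add_rate(c, c_new, rate):                 # transition c -> c_new with the given rate
        M[index[c_new], index[c]] += rate
        M[index[c], index[c]] -= rate
    for c in configs:
        if c[0] == 0:
            add_rate(c, (1,) + c[1:], alpha)                        # injection at the first site
        if c[-1] == 1:
            add_rate(c, c[:-1] + (0,), beta)                        # extraction at the last site
        for i in range(N - 1):
            if c[i] == 1 and c[i + 1] == 0:
                add_rate(c, c[:i] + (0, 1) + c[i + 2:], 1.0)        # forward hop
            elif c[i] == 0 and c[i + 1] == 1 and q > 0:
                add_rate(c, c[:i] + (1, 0) + c[i + 2:], q)          # backward hop
    eigval, eigvec = np.linalg.eig(M)
    p = np.real(eigvec[:, np.argmin(np.abs(eigval))])               # stationary null vector
    p = p / p.sum()
    i = N // 2                                                      # current through a bulk bond
    return sum(p[k] * (c[i] * (1 - c[i + 1]) - q * (1 - c[i]) * c[i + 1])
               for k, c in enumerate(configs))

print(pasep_current(alpha=0.6, beta=0.6))   # maximal-current phase: tends to 1/4
print(pasep_current(alpha=0.2, beta=0.9))   # low-density phase: tends to alpha*(1-alpha) = 0.16
```

such an exact enumeration is limited to very small lattices (2^N configurations), which is precisely why the matrix-product and integral representations derived above are needed for the thermodynamic limit.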
]we note that this phase diagram has a very similar form to that of figure [ fig : taseppd ] which applied for the special case of total asymmetry .that is , one finds the same three phases , namely a ( i ) high density , ( ii ) low density and ( iii ) maximal current phase . as discussed in section [ sec : asepintro ] the transition between the low and high density phases ( i ) and ( ii ) is first order and those to the maximal current phase ( iii ) are second order .note from table [ table : forward ] that , for example , the first derivative of the current is discontinuous at the transition between the low and high density phases .similar methods can be used to find the density profiles in the pasep : the details of the calculations and results are presented in .it turns out that the low- and high - density phases subdivide into three subphases for nonzero less than one . in each of the low density subphases ,the bulk density is ; however what distinguishes the subphases is that the decay of the density profile from the right boundary takes a different form in each .the same is true of high - density phase , except that and that the density decay is at the left boundary .evaluation of the current in the pasep with show that the integral representation of the normalisation for , equation ( [ eqn : zint : q<1:long ] ) , can be recast as a contour integral where is the unit circle centred on the origin , the large number of singularities in the integrand at the origin makes this a difficult integral to evaluate exactly using , _e.g. _ , the residue theorem .instead , we shall use this contour integral to obtain the currents in the pasep for large system size .first , we will apply the saddle - point method to the above integral which is valid for the range .the idea is to deform the contour of integration such that it passes through a saddle point in ] ) .show that this saddle point is at and that the path of steepest descent ( at least near ) is parallel to the imaginary axis . also , show that .insert taylor expansions of and ( to second order ) about the saddle point into ( [ ex : oint ] ) to obtain you should now convince yourself that the deformation of the integration contour to pass through along the path of steepest descent in ] in which ] reduces to the free energy functional known from equilibrium statistical physics and so the nonlocal functional would seem to generalise a well - known equilibrium concept .an interesting kind of boundary - induced phase transition , manifesting spontaneous symmetry breaking , is found when the asymmetric exclusion process is generalised to two oppositely moving species of particle : one species is injected at the left , moves rightwards and exits at the right ; the other species is injected at the right , moves leftwards and exits at the left .intuitively one can picture the system as a narrow road bridge : cars moving in opposite directions can pass each other but can not occupy the same space .the model has a left - right symmetry when the injection rates and exit rates for the two species of particles are symmetric .however for low exit rates ( ) this symmetry is broken and the lattice is dominated by one of the species at any given time .this implies that the short time averages of currents and bulk densities of the two species of particles are no longer equal . over longer timesthe system flips between the two symmetry - related states . 
in the the mean flip time between the two states has been calculated analytically and shown to diverge exponentially with system size .thus the ` bridge ' model provided a first example of spontaneous symmetry breaking in a one dimensional system . in the models discussed so far the open boundariescan be thought of as inhomogeneities where the order parameter ( particle density ) is not conserved .inhomogeneities which conserve the order parameter can be considered on a periodic system .indeed a single defect bond on the lattice ( through which particles hop more slowly ) is sufficient to cause the system to separate into two macroscopic regions of different densities : a high density region which can be thought of as a traffic jam behind the defect and a low density region in front of the defect . herethe presence of the drive appears necessary for the defect to induce the phase separation .moving defects ( _ i.e. _ particles with dynamics different from that of the others ) have also been considered and exact solutions obtained . for the simple case of a slowly moving particle the phenomenon of a queue of particles forming behind it has been shown to be analogous to bose condensation .in section [ sec : sbacintro ] we introduced two reaction systems , namely annihilation ( ) and coalescence ( ) .as noted in section [ sec : sbacintro ] , it is well known that annihilation and coalescence reactions are equivalent when the reactant motion is diffusive and hence lead to the same density decay in one dimension .we now consider in detail the distinct case of ballistic reactant motion by which it is meant that particles move with some fixed velocity .we study a class of models that comprise an arbitrary combination of annihilation and coalescence of particles with two ( conserved ) velocities and stochastic reaction dynamics and which was solved in . as we shall see , an exact solution is possible by virtue of a matrix product method in which the matrices involved can be written in terms of the raising and lowering operators of the -deformed harmonic oscillator .a related , but distinct , ballistic reaction model ( not contained within the class discussed here ) had also revealed a connection between ballistic reaction systems and -deformed operator algebras .we now define the class of models to be considered . at time reactants are placed randomly on a line .each particle is assigned a velocity ( right - moving ) or ( left - moving ) with probability and respectively .particles move ballistically until two collide , at which point one of four outcomes follows , see figure [ fig : outcomes ] : the particles pass through each other with probability ; the particles coalesce into a left ( right ) moving particle with probability ( ) ; the particles annihilate with probability . here is the probability that some reaction occurs .as an example of an application of this model consider the identification of right- and left - moving particles with the edges of terraces as illustrated in figure [ fig : rsos ] .if new particles are added to the system in such a way that they only stick to the sides , and the rate of particle addition is taken to infinity , one obtains ballistic motion of terrace edges .when two edges meet , a terrace is completed which corresponds to annihilation of approaching particles .hence the relevant parameters of the general annihilation coalescence model are . 
the scattering reaction , which occurs with rate , corresponds to the possibility of a new terrace being formed when two edges meet .the case of deterministic annihilation was studied by elskens and frisch . hereparticles always annihilate on contact .this case was found to exhibit a density decay that depends on time as , but only if the initial densities of the two particle species ( left- and right - moving ) are the same . in the following analysis of the more general class of reaction systems, we find that such a result persists if two ` effective ' initial densities ( to be defined below ) are equal .furthermore , we will see that the introduction of stochasticity into the reaction dynamics _i.e. _ the parameter , induces a transition in the density decay form .our aim is to calculate the density decay .to do this we shall consider without loss of generality a right moving test particle as illustrated in figure [ fig : traj ] .we define as the probability that the test particle survives up to a time respectively . from figure[ fig : traj ] one can see that the initial spacing of the particles on the line does not affect the _ sequence _ of possible reactions for any given particle , in particular for the test particle ( we return to this point later ) .also note that after a given time , the test particle may only have interacted with the particles initially placed within a distance ( and to the right ) of the chosen particle .these two facts imply that the survival probability can be expressed in terms of two independent functions .the first is , the probability that initially there were exactly particles in a region of size .the second is , the probability that the test particle survives reactions with the particles initially to its right , and depends only on the sequence of the particles .explicitly , thus the problem is reduced to two separate combinatorial problems of calculating and .note that the choice of allows one to consider a model defined on a lattice or on the real line . in the present work we assume particlesare initially placed on a line with nearest - neighbour distances chosen independently from an exponential distribution with unit mean .then we can use a well - known result we now show that the second problem ( calculation of ) can be formulated within a matrix product approach . asan example consider a test particle encountering the string of reactants depicted in figure [ fig : traj ] .we claim that the probability of the test particle surviving through this string may be written as where are matrices and , vectors with scalar product .thus we write , in order , a matrix ( ) for each right ( left ) moving particle in the initial string .the conditions for an expression such as ( [ eqn : fex ] ) to hold for an arbitrary string are , in fact , rather intuitive ) \\ \label{eqn : wdef } \langle w| l & = & \langle w| ( q+p\eta_r ) \\r |v\rangle & = & |v\rangle\;. \label{eqn : vdef}\end{aligned}\ ] ] the condition ( [ eqn : rl ] ) just echoes the reactions that occur in figure [ fig : outcomes ] _ i.e. _ after an interaction between a right - moving and left - moving particle there are four possible outcomes ( see figure [ fig : outcomes ] ) corresponding to the four terms on the right hand side of ( [ eqn : rl ] ) with probabilities given by the respective coefficients . 
using ( [ eqn : rl ] ) , any initial matrix product such as ( [ eqn : fex ] ) can be reduced to a sum of terms of the form corresponding to all possible final states ensuing from the initial string and with coefficients equal to the probabilities of each final state .these final states give the possible sequences of particles that the right - moving test particle will encounter .the test particle will survive such a final state and pass through the left - moving particles with probability . the condition ( [ eqn : vdef ] )ensures that this probability is obtained for each possible final state _i.e._ finally the condition ( [ eqn : wdef ] ) , along with , ensures that once a right - moving particle passes through all the left - moving particles in the string it no longer plays a role .thus ( [ left ] ) becomes note that the reason for a matrix product formulation is different from the pasep .there the steady state probability of any configuration , for arbitrary system size , could be written as a matrix product . herethe probability of a particle surviving some arbitrary sequence of particles can be written as a matrix product .the above approach relies on an important property of the system which is invariance of a reaction sequence with respect to changes of initial particle spacings . to understand this , consider again figure [ fig : traj ] . by altering the initial spacings of the particles , the absolute times at which trajectories intersect and reactions may occur ( if the reactants have survived ) may be altered .for example , by increasing the spacing between the fifth and sixth particles , the trajectories of the third and fourth particles can be made to intersect first .however as we have already seen , for any particle , the _ order _ of intersections it encounters does not change and so the final states and probabilities are invariant .this invariance is manifested in the matrix product by the fact that the order in which we use the reduction rule ( [ eqn : rl ] ) is unimportant _i.e. _ matrix multiplication is associative .thus , it is the invariance with respect to initial spacings that allows the system to be solved by using a product of matrices .we now proceed to evaluate . averaging ( [ eqn : fex ] ) over all initial strings of length , recalling that and are the probabilities that a particle is assigned velocity respectively , yields to evaluate we first write in these equationswe have introduced two new parameters which we call effective initial particle densities and whose ratio turns out to be an important parameter of the model. one can verify from ( [ eqn : rl][eqn : vdef ] ) that satisfy a -deformed harmonic oscillator algebra thus ( [ eqn : f ] ) can be written as ^n | v \rangle\;. \label{eqn : fmat}\ ] ] the vectors and are eigenvectors of , and as in section [ sec : pasep ] , they can be explicitly calculated . at this pointone can see that we have precisely the same mathematical structure as we did when solving for the steady state of the pasep compare ( [ eqn : ho][eqn : coherent ] ) with ( [ pasepqho1][pasepqho3 ] ) . we just need to carry out the same procedure of diagonalising a matrix that is linear combination of the identity operator and the position operator of a -deformed harmonic oscillator . 
rather than repeat the details here we go directly to a discussion of the phase diagram .deterministic ballistic annihilation and random walks the elskens - frisch model of deterministic ballistic annihilation is a special case of the stochastic model described above that has the parameters . for the casewhere particles are initially placed on all sites of a lattice with equal probability of right- and left - moving particles , write down all possible initial conditions that would allow a right - moving test particle to survive through one , two and three sites .hence relate the survival probability of the test particle after sites to the probability of a random walker not returning to the origin after steps .hint : devise a criterion for the survival of the test particle that involves the relative number of left- and right - moving particles for a given initial condition .construct the matrices and and the vectors and and explain how the resulting expression for ( with ) is related to the random walk problem .we consider the long - time density decay which is obtained from the survival probabilities , for right and left moving test particles through ( can be deduced from through the left - right symmetry of the model . ) the matrix product calculation for implies that the density decays as in this equation is the fraction of particles remaining once no more reactions are possible ( _ i.e. _ all particles moving in the same direction ) .we now list the results the residual density when .( results for one be deduced using the left - right symmetry of the model . ) for \ ] ] for } { ( ct)^{1/2}}\ ] ] for }{(ct)^{3/2}}\nonumber\end{gathered}\ ] ] for we now consider the phase diagram figure [ fig : sbacpd ] .this is spanned by the values of ( the ratio of effective densities ) and ( the stochasticity parameter ) : parameterises all information concerning the asymmetry between the right and left moving particles whereas parameterises the level of stochasticity .thus there is universality of ballistic annihilation and coalescence : for a generic choice of , defining a particular annihilation - coalescence model , the same four decay regimes are found by varying the initial densities or stochasticity parameter . in this way the universality can be considered as a ` law of corresponding states ' .the line ( equal effective densities ) is non - generic since a single , power law , decay regime is found .the decay does not depend on the stochasticity .this special line was found by elskens and frisch for the case of deterministic annihilation ( ) and we have shown that this special line is present for all combinations of annihilation and coalescence .this phase can be understood through the picture of .density fluctuations in the initial conditions lead to trains of left- and right - moving particles : in a length the excess particle number is which yields the density decay . at long times ,the train size is large and so a particle in one train encounters many particles in the other and will eventually react making the parameter irrelevant .the second phase found in ( labelled the elskens - frisch phase in the diagram ) is seen to persist for nonzero and in this phase the two particle species decay at equal rates .the two new phases that arise in the full model for ( _ i.e. 
_ as a consequence of randomness in the reaction dynamics) have the contrasting property that two particle species decay at _unequal_ rates leaving a non-zero population of left-moving particles. a simple example of non-equal decays is the case ( ): then left-moving particles do not decay but simply absorb the right-moving particles with probability , giving . our results show that, in general, increasing leads to a non-trivial transition at to a regime where the two species have different decay forms.

in these lectures we have tried to give a flavour of the collective phenomena exhibited by simple low-dimensional systems with stochastic dynamics. specifically, we have seen that the partially asymmetric exclusion process exhibits a number of phase transitions (of both first and second order) and long-range correlations even in one dimension. we have also shown how similar mathematical techniques can be applied in the distinct context of a particle reaction system to obtain an exact solution. we hope that the reader with a background in equilibrium statistical physics is pleasantly surprised by the diversity of phenomena that emerge in low-dimensional nonequilibrium systems. we have also endeavoured to illustrate how novel analytical techniques are being developed for these systems. through the inclusion of background material and the exercises, we hope also to have inspired confidence in the reader to approach the literature and make an active contribution to this fertile field.

*acknowledgments*: we would like to thank our collaborators f. colaiori, f. essler and y. kafri, with whom the work of section 5 and section 6 was carried out, and also m. j. e. richardson, who contributed to the early development of that work.

g. m. schütz, _exactly solvable models for many-body systems far from equilibrium_, in _phase transitions and critical phenomena_, vol. *19*, c. domb and j. l. lebowitz (eds.), (academic press, london, 2000).
in these lectures we give an overview of nonequilibrium stochastic systems. in particular we discuss in detail two models, the asymmetric exclusion process and a ballistic reaction model, that illustrate many general features of nonequilibrium dynamics: for example, coarsening dynamics and nonequilibrium phase transitions. as a secondary theme we shall show how a common mathematical structure, the q-deformed harmonic oscillator algebra, serves to furnish exact results for both systems. thus the lectures also serve as a gentle introduction to things q-deformed.

keywords: nonequilibrium dynamics, stochastic processes, phase transition, asymmetric exclusion process, reaction kinetics. pacs: 02.50.-r, 05.40.-a, 05.70.fh
, the time at which the signal is to be estimated.,scaledwidth=48.0% ] estimation theory is the science of determining the state of a system , such as a dice , an aircraft , or the weather in boston , from noisy observations . as shown in fig .[ classes ] , estimation problems can be classified into four classes , namely , prediction , filtering , retrodiction , and smoothing . for applications that do not require real - time data , such as sensing and communication , smoothing is the most accurate estimation technique .i have recently proposed a time - symmetric quantum theory of smoothing , which allows one to optimally estimate classical diffusive markov random processes , such as gravitational waves or magnetic fields , coupled to a quantum system , such as a quantum mechanical oscillator or an atomic spin ensemble , under continuous measurements . in this paper ,i shall demonstrate in more detail the derivation of this theory using a discrete - time approach , and how it closely parallels the classical time - symmetric smoothing theory proposed by pardoux .i shall apply the theory to the design of homodyne phase - locked loops ( pll ) for narrowband squeezed optical beams , as previously considered by berry and wiseman .i shall show that their approach can be regarded as a special case of my theory , and discuss how their results can be generalized and improved .i shall also discuss the weak value theory proposed by aharonov _et al . _ in relation with the smoothing theory , and how their theory may be regarded as a smoothing theory for quantum degrees of freedom . in particular , the smoothing quasiprobability distribution proposed in ref . is shown to naturally arise from the statistics of weak position and momentum measurements .this paper is organized as follows : in sec .[ classical ] , pardoux s classical time - symmetric smoothing theory is derived using a discrete - time approach , which is then generalized to the quantum regime for hybrid classical - quantum smoothing in sec . [ hybrid ] .application of the hybrid classical - quantum smoothing theory to pll design is studied in sec .[ adaptive ] .the relation between the smoothing theory and aharonov _ et al . _s weak value theory is then discussed in sec .[ conclusion ] concludes the paper and points out some possible extensions of the proposed theory .consider the classical smoothing problem depicted in fig . [ classical_smoothing ] .let }\end{aligned}\ ] ] be a vectoral diffusive markov random process that satisfies the system it differential equation where is a vectoral wiener increment with mean and covariance matrix given by the superscript denotes the transpose .the vectoral observation process satisfies the observation it equation where is another vectoral wiener increment with mean and covariance matrix given by for generality and later purpose , and are assumed to be correlated , with covariance define the observation record in the time interval as the goal of smoothing is to calculate the conditional probability density of , given the observation record in the time interval .it is more intuitive to consider the problem in discrete time first .the discrete - time system and observation equations ( [ system ] ) and ( [ observ ] ) are the observation record } & \equiv { \left\{\delta y_{t_0},\delta y_{t_0+\delta t},\dots,\delta y_{t-\delta t}\right\}}\end{aligned}\ ] ] also becomes discrete .the covariance matrices for the increments are and the increments at different times are independent of one another . 
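As a concrete toy instance of the system/observation pair just written down, the following sketch integrates a scalar Ornstein-Uhlenbeck signal and its noisy observation increments with the Euler-Maruyama rule. All parameter values and symbol names (k, sigma, c, r) are illustrative placeholders rather than quantities taken from the text, and the system and observation noises are kept independent for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative scalar model:  dx = -k x dt + sigma dW,   dy = c x dt + dV
k, sigma = 1.0, 0.5          # drift rate and system noise strength (placeholders)
c, r = 2.0, 0.1              # observation gain and observation noise intensity
dt, n_steps = 1e-3, 20_000

x = np.zeros(n_steps + 1)
dy = np.zeros(n_steps)
for n in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()        # system Wiener increment, variance dt
    dV = np.sqrt(r * dt) * rng.standard_normal()    # observation Wiener increment, variance r dt
    x[n + 1] = x[n] - k * x[n] * dt + sigma * dW    # Euler-Maruyama step of the system equation
    dy[n] = c * x[n] * dt + dV                      # observation increment over [t, t + dt)

print("sample variance of x:", x.var())
print("stationary OU value sigma^2/(2k):", sigma**2 / (2 * k))
```

A record generated this way is the kind of input that the filtering and smoothing recursions below operate on.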
because and are proportional to , one should keep all linear _ and _ quadratic terms of the wiener increments in an equation according to it calculus when taking the continuous time limit . with correlated and , it is preferable , for technical reasons , to rewrite the system equation ( [ discrete_system ] ) as },\end{aligned}\ ] ] where can be arbitrarily set because the expression in square brackets is zero .the system equation becomes }\delta t+ d(x_t , t)\delta y_t \nonumber\\&\quad + b(x_t , t)\delta w_t - d(x_t , t)\delta v_t.\end{aligned}\ ] ] the new system noise is \delta t.\end{aligned}\ ] ] the covariance between the new system noise and the observation noise is }\delta t,\end{aligned}\ ] ] and can be made to vanish if one lets the new equivalent system and observation model is then } \nonumber\\&\quad + b(x_t , t)\delta u_t , \label{new_system}\\ \delta y_t & = c(x_t , t)\delta t + \delta v_t , \label{new_observ}\end{aligned}\ ] ] with covariances }\delta t , \\ { \left\langle\delta v_t\delta v_t^t\right\rangle } & = r(t)\delta t , \\ { \left\langle\delta u_t\delta v_t^t\right\rangle } & = 0.\end{aligned}\ ] ] the new system and observation noises are now independent , but note that becomes dependent on . according to the bayes theorem ,the smoothing probability density for can be expressed as } ) & = \frac{p(\delta y_{[t_0,t-\delta t]}|x_\tau ) p(x_\tau ) } { p(\delta y_{[t_0,t-\delta t ] } ) } , \label{bayes } \\p(\delta y_{[t_0,t-\delta t ] } ) & = \int dx_\tau p(\delta y_{[t_0,t-\delta t]}|x_\tau ) p(x_\tau),\end{aligned}\ ] ] where and is the _ a priori _ probability density , which represents one s knowledge of absent any observation. functions of are assumed to also depend implicitly on . splitting } ] in terms of }) ] can be determined from the system equation ( [ new_system ] ) and is equal to , due to the markovian nature of the system process .so } ) & = \int dx_t p(x_{t+\delta t}|x_t,\delta y_{t } ) p(x_t|\delta y_{[t_0,t ] } ) , \label{ck}\end{aligned}\ ] ] which is a generalized chapman - kolmogorov equation . is }^{-1 } \delta z_t\bigg\ } , \label{gauss_system}\end{aligned}\ ] ] where }.\end{aligned}\ ] ] next , write }) ] using the bayes theorem as } ) & = p(x_t|\delta y_{[t_0,t-\delta t]},\delta y_t ) \nonumber\\ & = \frac{p(\delta y_t|x_t,\delta y_{[t_0,t-\delta t]})p(x_t|\delta y_{[t_0,t-\delta t ] } ) } { \int dx_t ( \textrm{numerator } ) } \nonumber\\ & = \frac{p(\delta y_t|x_t)p(x_t|\delta y_{[t_0,t-\delta t ] } ) } { \int dx_t ( \textrm{numerator } ) } , \label{bayes2}\end{aligned}\ ] ] where } ) = p(\delta y_t|x_t) ] in eq .( [ multitime ] ) can be expressed as },\delta y_{[\tau , t-\delta t ] } ) & = p(\delta y_{t-\delta t}|x_{[\tau , t-\delta t]},\delta y_{[\tau , t-2\delta t ] } ) \nonumber\\&\quad\times p(x_{[\tau , t-\delta t]},\delta y_{[\tau , t-2\delta t ] } ) . \label{multitime2}\end{aligned}\ ] ] using the markovian property of the observation process , },\delta y_{[\tau , t-2\delta t ] } ) = p(\delta y_{t-\delta t}|x_{t-\delta t } ) , \label{markov_observe}\end{aligned}\ ] ] which can be determined from the observation equation ( [ new_observ ] ) and is given by eq .( [ gauss_observ ] ) . 
applying eqs .( [ multitime ] ) , ( [ markov_system ] ) , ( [ multitime2 ] ) , and ( [ markov_observe ] ) repeatedly , one obtains } ) & = \int dx_t \int dx_{t-\delta t } p(x_t|x_{t-\delta t},\delta y_{t-\delta t } ) \nonumber\\&\quad\times p(\delta y_{t-\delta t}|x_{t-\delta t } ) \nonumber\\&\quad\times \int dx_{t-2\delta t } p(x_{t-\delta t}|x_{t-2\delta t},\delta y_{t-2\delta t } ) \nonumber\\&\quad\times p(\delta y_{t-2\delta t}|x_{t-2\delta t})\dots\nonumber\\&\quad\times \int dx_\tau p(x_{\tau+\delta t}|x_\tau,\delta y_\tau ) \nonumber\\&\quad\times p(\delta y_{\tau}|x_\tau)p(x_\tau ) . \label{expand}\end{aligned}\ ] ] comparing this equation with eq .( [ retro ] ) , can be expressed as defining the unnormalized retrodictive likelihood function at time as one can derive a linear backward stochastic differential equation for by applying it calculus backward in time to eq .( [ retro2 ] ) .the result is which is the adjoint equation of the forward dmz equation ( [ zakai ] ) , to be solved backward in time in the backward it sense , defined by with the final condition the adjoint equation with respect to a linear differential equation is defined as where is a linear operator and is the adjoint of , defined by with respect to the inner product after solving eq .( [ zakai ] ) for and eq .( [ backward_zakai ] ) for , the smoothing probability density is since and are solutions of adjoint equations , their inner product , which appears as the denominator of eq .( [ smooth ] ) , is constant in time .the denominator also ensures that is normalized , and and need not be normalized separately .the estimation errors depend crucially on the statistics of .if any component of , say , is constant in time , then filtering of that particular component is as accurate as smoothing , for the simple reason that must be the same for any , and one can simply estimate at the end of the observation interval ( ) using filtering alone .this also means that smoothing is not needed when one only needs to detect the presence of a signal in detection problems , since the presence can be regarded as a constant binary parameter within a certain time interval . in general , however , smoothing can be significantly more accurate than filtering for the estimation of a fluctuating random process in the middle of the observation interval .another reason for modeling unknown signals as random processes is robustness , as introducing fictitious system noise can improve the estimation accuracy when there are modeling errors .if , , and are gaussian , one can just solve for their means and covariance matrices , which completely determine the probability densities .this is the case when the _ a priori _ probability density is gaussian , and the means and covariance matrices of , , and can then be solved using the linear mayne - fraser - potter ( mfp ) smoother .the smoother first solves for the mean and covariance matrix of using the kalman filter , given by with the initial conditions at determined from .the mean and covariance matrix of are then solved using a backward kalman filter , with the final condition and . 
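For a linear Gaussian toy model, the forward/backward construction described here can be coded in a few lines. The sketch below is an illustration under assumptions of my own (a scalar Ornstein-Uhlenbeck signal with placeholder parameters and a crude Euler discretisation), not the authors' implementation. The forward pass is a standard Kalman filter; the backward pass is written directly in information form (precision J and information vector z), which also sidesteps the infinite initial covariance mentioned in the next paragraph; and the last two lines carry out the combination of the forward estimate with the backward likelihood, as described below.

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative scalar model and its crude discretisation (placeholder values)
k, sigma, c, r = 1.0, 0.5, 2.0, 0.1
dt, N = 1e-2, 2000
F, Q = 1.0 - k * dt, sigma**2 * dt           # x_{n+1} = F x_n + w_n,  var(w_n) = Q
H, R = c * dt, r * dt                        # dy_{n+1} = H x_{n+1} + v_{n+1},  var(v) = R

# simulate a trajectory and its observation increments
x = np.zeros(N + 1); dy = np.zeros(N + 1)
x[0] = np.sqrt(sigma**2 / (2 * k)) * rng.standard_normal()
for n in range(N):
    x[n + 1] = F * x[n] + np.sqrt(Q) * rng.standard_normal()
    dy[n + 1] = H * x[n + 1] + np.sqrt(R) * rng.standard_normal()

# forward Kalman filter: p(x_n | dy_1..dy_n) = N(mf, Pf)
mf = np.zeros(N + 1); Pf = np.zeros(N + 1)
mp = np.zeros(N + 1); Pp = np.zeros(N + 1)
mf[0], Pf[0] = 0.0, sigma**2 / (2 * k)       # a priori mean and variance
for n in range(N):
    mp[n + 1], Pp[n + 1] = F * mf[n], F**2 * Pf[n] + Q
    K = Pp[n + 1] * H / (H**2 * Pp[n + 1] + R)
    mf[n + 1] = mp[n + 1] + K * (dy[n + 1] - H * mp[n + 1])
    Pf[n + 1] = (1.0 - K * H) * Pp[n + 1]

# backward filter in information form: p(dy_{n+1}..dy_N | x_n) ~ exp(-J x^2 / 2 + z x)
J = np.zeros(N + 1); z = np.zeros(N + 1)     # J = 1/Pb, z = mb/Pb (Pb starts out infinite)
for n in range(N - 1, -1, -1):
    Ju = J[n + 1] + H**2 / R                 # fold in the observation dy_{n+1}
    zu = z[n + 1] + H * dy[n + 1] / R
    A = 1.0 / Q + Ju                         # marginalise x_{n+1} through the dynamics
    J[n] = F**2 * (1.0 / Q - 1.0 / (Q**2 * A))
    z[n] = F * zu / (Q * A)

# Mayne-Fraser-Potter combination of forward estimate and backward likelihood
Ps = 1.0 / (1.0 / Pf + J)
ms = Ps * (mf / Pf + z)

print("rms filtering error:", np.sqrt(np.mean((mf - x)**2)))
print("rms smoothing error:", np.sqrt(np.mean((ms - x)**2)))
```

For a fluctuating signal in the interior of the observation interval the smoothing error printed at the end should come out noticeably below the filtering error, in line with the general remarks above.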
in practice, the information filter formalism should be used to solve the backward filter , in order to avoid dealing with the infinite covariance matrix at .finally , the smoothing mean and covariance matrix are note that and are the mean and covariance matrix of a likelihood function and not those of a conditional probability density , so to perform optimal retrodiction ( ) one should still combine and with the _ a priori _ values .consider the problem of waveform estimation in a hybrid classical - quantum system depicted in fig .[ hybrid_smooth ] .the classical system produces a vectoral classical diffusive markov random process , which obeys eq .( [ system ] ) and is coupled to the quantum system . the goal is to estimate via continuous measurements of both systems .this setup is slightly more general than that considered in ; here the observations can also depend on .this allows one to apply the theory to pll design for squeezed beams , as considered by berry and wiseman , and potentially to other quantum estimation problems as well .the statistics of are assumed to be unperturbed by the coupling to the quantum system , in order to avoid the nontrivial issue of quantum backaction on classical systems . for simplicity ,in this section we neglect the possibility that the system noise driving the classical system is correlated with the observation noise , although the noise driving the quantum system can still be correlated with the observation noise due to quantum measurement backaction .just as in the classical smoothing problem , the hybrid smoothing problem is solved by calculating the smoothing probability density . because a quantum system is involved , one may be tempted to use a hybrid density operator to represent one s knowledge about the hybrid classical - quantum system .the hybrid density operator describes the joint classical and quantum statistics of a hybrid system , with the marginal classical probability density for and the marginal density operator for the quantum system given by } , \\\hat\rho(\tau ) & = \int dx_\tau \hat\rho(x_\tau),\end{aligned}\ ] ] respectively .the hybrid operator can also be regarded as a special case of the quantum density operator , when certain degrees of freedom are approximated as classical .unfortunately , the density operator in conventional predictive quantum theory can only be conditioned upon past observations and not future ones , so it can not be used as a quantum version of the smoothing probability density . the classical time - symmetric smoothing theory , as a combination of prediction and retrodiction , offers an important clue to how one can circumvent the difficulty of defining the smoothing quantum state . 
again casting the problem in discrete time , and defining a hybrid effect operator as , which can be used to determine the statistics of future observations given a density operator at , },\end{aligned}\ ] ]one may write , in analogy with eq .( [ twofilter ] ) , } ) & = \frac{p(x_\tau,\delta y_{\textrm{future}}|\delta y_{\textrm{past } } ) } { p(\delta y_{\textrm{future}}|\delta y_{\textrm{past } } ) } \nonumber\\ & = \frac{{\operatorname{tr}}[\hat e(\delta y_{\textrm{future}}|x_\tau ) \hat\rho(x_\tau|\delta y_{\textrm{past } } ) ] } { \int dx_\tau { \operatorname{tr}}[\hat e(\delta y_{\textrm{future}}|x_\tau ) \hat\rho(x_\tau|\delta y_{\textrm{past } } ) ] } , \label{two_qfilter}\end{aligned}\ ] ] where is the analog of the filtering probability density and is the analog of the retrodictive likelihood function .one can then solve for the density and effect operators separately , before combining them to form the classical smoothing probability density .since the hybrid density operator can be regarded as a special case of the density operator , the same tools in quantum measurement theory can be used to derive a filtering equation for the hybrid operator .first , write }) ] as } ) & = \int dx_t \mathcal k(x_{t+\delta t}|x_t)\hat\rho(x_t|\delta y_{[t_0,t ] } ) , \label{qck}\end{aligned}\ ] ] where is a completely positive map that governs the markovian evolution of the hybrid state independent of the measurement process . equation ( [ qck ] ) may be regarded as a quantum version of the classical chapman - kolmogorov equation . for infinitesimal , }_{x= x_{t+\delta t}}.\end{aligned}\ ] ]the hybrid superoperator can be expressed as } \nonumber\\&\quad + \frac{1}{2}\sum_{\mu,\nu}{\frac{\partial ^2}{\partial x_\mu\partial x_\nu } } { \left[{\left(bqb^t\right)}_{\mu\nu}\hat\rho(x)\right ] } , \label{super_l}\end{aligned}\ ] ] where governs the evolution of the quantum system , governs the coupling of to the quantum system , via an interaction hamiltonian for example , and the last two terms governs the classical evolution of .next , write }) ] using the quantum bayes theorem as } ) & = \hat\rho(x_t|\delta y_{[t_0,t-\delta t]},\delta y_t ) \nonumber\\ & = \frac{\mathcal j(\delta y_t|x_t)\hat\rho(x_t|\delta y_{[t_0,t-\delta t ] } ) } { \int dx_t{\operatorname{tr}}(\textrm{numerator})}. \label{qbayes}\end{aligned}\ ] ] the measurement superoperator , a quantum version of , is defined as for infinitesimal and measurements with gaussian noise , the measurement operator can be approximated as ,\end{aligned}\ ] ] where is a vectoral observation process , is a vector of hybrid operators , generalized from the purely quantum operators in ref . so that the observations may also depend directly on the classical degrees of freedom , and is assumed to be positive . to cast the theory in a form similar to the classical one , perform unitary transformations on and , where is a unitary matrix , and rewrite the measurement operator as is a generalization of in the classical case , and is again a positive - definite matrix that characterizes the observation uncertainties and is real and symmetric with eigenvalues .note that is defined as the adjoint of each vector element , and is defined as the matrix transpose of the vector . 
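To make the alternating structure of the recursion concrete (propagate with the map K, update with the measurement superoperator J, renormalise), here is a minimal purely quantum example with no classical signal attached: a resonantly driven, decaying two-level system whose emission is monitored by homodyne detection, integrated with a simple Euler scheme. The system, the parameter values and the variable names are my own illustrative choices; the stochastic master equation used is the standard diffusive (Belavkin-type) form that the continuous limit leads to, not a scheme prescribed in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

gamma, eta, Omega = 1.0, 1.0, 2.0            # decay rate, detector efficiency, Rabi drive
dt, n_steps = 1e-4, 50_000

sm = np.array([[0, 1], [0, 0]], complex)     # sigma_minus in the (|g>, |e>) basis
sx = np.array([[0, 1], [1, 0]], complex)
H = 0.5 * Omega * sx                          # driving Hamiltonian (hbar = 1)
c = np.sqrt(gamma) * sm                       # measurement / collapse operator

def dissipator(c, rho):                       # Lindblad term D[c]rho
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def innovation_term(c, rho):                  # c rho + rho c^dag - <c + c^dag> rho
    s = c @ rho + rho @ c.conj().T
    return s - np.trace(s) * rho

rho = np.array([[0, 0], [0, 1]], complex)     # start in the excited state
record, sx_traj = [], []
for n in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    dy = np.sqrt(eta) * np.trace(c @ rho + rho @ c.conj().T).real * dt + dW
    rho = rho + (-1j * (H @ rho - rho @ H) + dissipator(c, rho)) * dt \
              + np.sqrt(eta) * innovation_term(c, rho) * dW
    rho = 0.5 * (rho + rho.conj().T)          # keep Hermitian against round-off
    rho = rho / np.trace(rho).real            # guard the normalisation numerically
    record.append(dy)
    sx_traj.append(np.trace(rho @ sx).real)

print("trajectory-averaged <sigma_x>:", np.mean(sx_traj))
```

Attaching a classical signal amounts to letting the Hamiltonian or the measurement operator depend on x and propagating one such conditional state per value of x (or per particle in a particle filter), which is, roughly speaking, the content of the hybrid operator introduced above.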
for example , the evolution of })$ ] can thus be calculated by iterating the formula } ) \nonumber\\ & = \int dx_t \mathcal k(x_{t+\delta t}|x_t ) \frac{\mathcal j(\delta y_t|x_t)\hat\rho(x_t|\delta y_{[t_0,t-\delta t ] } ) } { \int dx_t{\operatorname{tr}}(\textrm{numerator})}.\end{aligned}\ ] ] taking the continuous time limit via it calculus and defining the conditional hybrid density operator at time as one obtains } , \label{qks}\end{aligned}\ ] ] where } , \\d\eta_t & \equiv dy_t - \frac{dt}{2}{\langle\hat c+\hat c^\dagger\rangle}_{\hat f}\end{aligned}\ ] ] is a wiener increment with covariance matrix , h.c .denotes the hermitian conjugate , and the initial condition is the _ a priori _ hybrid density operator .equation ( [ qks ] ) is a quantum version of the ks equation ( [ kushner ] ) and can be regarded as a special case of the belavkin quantum filtering equation .a linear version of the ks equation for an unnormalized is and the normalization is }.\end{aligned}\ ] ] equation ( [ qzakai ] ) is a quantum generalization of the dmz equation ( [ zakai ] ) . taking a similar approach to the one in sec .[ retro_smooth ] and using the quantum regression theorem , one can express the future observation statistics as } \label{qretro}\\ & = \int dx_t { \operatorname{tr}}\bigg[\int dx_{t-\delta t } \mathcal k(x_t|x_{t-\delta t } ) \nonumber\\&\quad\cdot \mathcal j(\delta y_{t-\delta t}|x_{t-\delta t } ) \nonumber\\&\quad\cdot \int dx_{t-2\delta t } \mathcal k(x_{t-\delta t}|x_{t-2\delta t } ) \nonumber\\&\quad\cdot \mathcal j(\delta y_{t-2\delta t}|x_{t-2\delta t})\dots\nonumber\\&\quad\cdot \int dx_\tau \mathcal k(x_{\tau+\delta t}|x_\tau ) \mathcal j(\delta y_{\tau}|x_\tau)\hat\rho(x_\tau)\bigg ] , \label{qexpand}\end{aligned}\ ] ] which are analogous to eq .( [ retro ] ) and eq .( [ expand ] ) , respectively . comparing eq .( [ qretro ] ) with eq .( [ qexpand ] ) , and defining the adjoint of a superoperator as , such that } & = { \operatorname{tr}}{\left\{{\left[\mathcal o^*\hat e(x)\right]}\hat\rho(x)\right\}},\end{aligned}\ ] ] the hybrid effect operator can be written as the operation may also be regarded as a hybrid superoperator on a hybrid operator , and is the adjoint of , defined by with respect to the hilbert - schmidt inner product }. \label{inner}\end{aligned}\ ] ] one can then rewrite eqs .( [ qretro ] ) , ( [ qexpand ] ) , and ( [ hybrid_effect ] ) more elegantly as in the continuous time limit , a linear stochastic differential equation for the unnormalized effect operator can be derived .the result is to be solved backward in time in the backward it sense , with the final condition equation ( [ backward_qzakai ] ) is the adjoint equation of the forward quantum dmz equation ( [ qzakai ] ) with respect to the inner product defined by eq .( [ inner ] ) .it is a generalization of the classical backward dmz equation ( [ backward_zakai ] ) . finally , after solving eq .( [ qzakai ] ) for and eq .( [ backward_qzakai ] ) for , the smoothing probability density is } { \int dx{\operatorname{tr}}[\hat g(x,\tau)\hat f(x,\tau)]}. 
\label{qsmooth}\end{aligned}\ ] ] the denominator of eq .( [ qsmooth ] ) ensures that is normalized , so and need not be normalized separately .table [ analogs ] lists some important quantities in classical smoothing with their generalizations in hybrid smoothing for comparison .[ cols="<,<,<,<",options="header " , ] to solve eqs .( [ qzakai ] ) , ( [ backward_qzakai ] ) , and ( [ qsmooth ] ) , one way is to convert them to equations for quasiprobability distributions .the wigner distribution is especially useful for quantum systems with continuous degrees of freedom .it is defined as where and are normalized position and momentum vectors .it has the desirable property which is unique among generalized quasiprobability distributions . the smoothing probability density given by eq .( [ qsmooth ] ) can then be rewritten as where and are the wigner distributions of and , respectively .equation ( [ wigner_smooth ] ) resembles the classical expression ( [ smooth ] ) with the quantum degrees of freedom and marginalized .if is nonnegative and the stochastic equations for and converted from eqs .( [ qzakai ] ) and ( [ backward_qzakai ] ) have the same form as the classical dmz equations given by eqs .( [ zakai ] ) and ( [ backward_zakai ] ) , the hybrid smoothing problem becomes equivalent to a classical one and can be solved using well known classical smoothers .for example , if and are gaussian , is also gaussian , and their means and covariances can be solved using the linear mfp smoother described in sec .consider the pll setup depicted in fig . [ pll ] .the optical parametric oscillator ( opo ) produces a squeezed vacuum with a squeezed quadrature and an antisqueezed quadrature .the squeezed vacuum is then displaced by a real constant to produce a phase - squeezed beam , the phase of which is modulated by , an element of the vectoral random process described by the system it equation ( [ system ] ) .the output beam is measured continuously by a homodyne pll , and the local - oscillator phase is continuously updated according to the real - time measurement record .the use of pll for phase estimation in the presence of quantum noise has been mentioned as far back as 1971 by personick .wiseman suggested an adaptive homodyne scheme to measure a constant phase , which was then experimentally demonstrated by armen _ et al . _for the optical coherent state .berry and wiseman and pope _ et al . _ studied the problem with being a wiener process .berry and wiseman later generalized the theory to account for narrowband squeezed beams ._ et al.__also studied the problem for the case of being a gaussian process , but the squeezing model considered in refs . is not realistic . 
using the hybrid smoothing theory developed in sec .[ hybrid ] , one can now generalize these earlier results to the case of an arbitrary diffusive markov process and a realistic squeezing model .let be the hybrid density operator for the combined quantum - opo - classical - modulator system .the evolution of the opo below threshold in the interaction picture is governed by } , \\\hat h_0 & = -i\frac{\hbar\chi}{2 } { \left(\hat a\hat a-\hat a^\dagger\hat a^\dagger\right ) } \\ & = \frac{\hbar\chi}{2}{\left(\hat q\hat p + \hat p \hat q\right)},\end{aligned}\ ] ] where is the annihilation operator for the cavity optical mode , and and are the antisqueezed and squeezed quadrature operators , respectively , defined as with the commutation relation & = i.\end{aligned}\ ] ] the classical phase modulator does not influence the evolution of the opo , so but it modulates the opo output . in this case is where is the transmission coefficient of the partially reflecting opo output mirror , , and the symbol and sign conventions here roughly follows those of refs . . to ensure the correct unconditional quantum dynamics ,the hamiltonian should be changed to ( ref . , sec .11.4.3 ) },\end{aligned}\ ] ] in order to eliminate the spurious effect of the displacement term in on the opo .after some algebra , the forward stochastic equation for the wigner distribution becomes } \nonumber\\&\quad -{\left[{\left(\chi -\frac{\gamma}{2}\right)}{\frac{\partial } { \partial q}}{\left ( qf\right)}+{\left(-\chi -\frac{\gamma}{2}\right)}{\frac{\partial } { \partial p}}{\left(pf\right)}\right ] } \nonumber\\&\quad + \frac{\gamma}{4}{\left({\frac{\partial^2 f}{\partial q^2}}+{\frac{\partial^2f}{\partial p^2}}\right)}\bigg\ } \nonumber\\&\quad + dy_t \bigg[\sin(\phi-\phi_t ' ) { \left(2b+\sqrt{2\gamma}q + \sqrt{\frac{\gamma}{2}}{\frac{\partial } { \partial q}}\right ) } \nonumber\\&\quad + \cos(\phi-\phi_t ' ) { \left(\sqrt{2\gamma}p+\sqrt{\frac{\gamma}{2}}{\frac{\partial } { \partial p}}\right)}\bigg]f . \label{wigner_zakai}\end{aligned}\ ] ] this is precisely the classical dmz equation ( [ zakai ] ) with correlated system and observation noises .the equivalent classical system equations are then and the equivalent observation equation is where and are independent wiener increments with covariance . and , which appear in both the system equation and the observation equation , are simply quadratures of the vacuum field , coupled to both the cavity mode and the output field via the opo output mirror. equations ( [ equiv_system ] ) and ( [ equiv_observ ] ) coincide with the model of berry and wiseman in ref . when is a wiener process , and eq .( [ wigner_zakai ] ) is the continuous limit of their approach to phase estimation .this approach can also be regarded as an example of the general method of accounting for colored observation noise by modeling the noise as part of the system .if , is an additive white gaussian noise , and the model is reduced to that studied in refs . . 
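In the white-observation-noise limit just mentioned, the adaptive homodyne loop reduces to a familiar classical tracking problem and a few lines of code suffice to see it work. The sketch below is an illustration, not the scheme of the references: the phase is taken to be a Wiener process, the homodyne increment is modelled with unit gain as dy = sin(phi - Phi) dt + dV, the local-oscillator phase Phi is slaved to the current estimate, and an extended Kalman filter does the tracking. All gains and noise levels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

kappa = 1.0           # phase diffusion: dphi = sqrt(kappa) dW   (placeholder)
r = 0.05              # observation noise intensity               (placeholder)
dt, n_steps = 1e-3, 100_000

phi, m, P = 0.0, 0.0, 1.0      # true phase, EKF mean, EKF variance
sq_err = 0.0
for n in range(n_steps):
    Phi = m                                        # feedback: LO phase follows the estimate
    dV = np.sqrt(r * dt) * rng.standard_normal()
    dy = np.sin(phi - Phi) * dt + dV               # homodyne increment (illustrative gain = 1)

    h1 = np.cos(m - Phi)                           # linearised observation slope (= 1 at lock)
    K = P * h1 / r
    m += K * (dy - np.sin(m - Phi) * dt)           # EKF mean update
    P += (kappa - (P * h1)**2 / r) * dt            # EKF variance (Riccati) update

    phi += np.sqrt(kappa * dt) * rng.standard_normal()   # true phase random walk
    sq_err += (phi - m)**2

print("rms tracking error :", np.sqrt(sq_err / n_steps))
print("EKF steady sqrt(P) :", np.sqrt(P))          # should approach (kappa*r)**0.25
```

Replacing the real-time filter by an off-line forward-backward smoother of the kind described earlier is what gives the additional accuracy discussed in the following paragraphs.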
in that case , it is desirable to make follow as closely as possible , so that can be approximated as and the kalman filter can be used if is gaussian .provided that eq .( [ approx ] ) is valid , one should make the conditional expectation of , given by for phase - squeezed beams , it also seems desirable to make close to in order to minimize the magnitude of .equation ( [ lo_phase ] ) may not provide the optimal in general , however , as it does not necessarily minimize the magnitude of or the estimation errors .the optimal control law for should be studied in the context of control theory . while needs to be updated in real time and must be calculated via filtering, the estimation accuracy can be improved by smoothing .the backward dmz equation for is the adjoint equation with respect to eq .( [ wigner_zakai ] ) , given by } \nonumber\\&\quad + \frac{\gamma}{4}{\left({\frac{\partial^2 g}{\partial q^2}}+{\frac{\partial^2 g}{\partial p^2}}\right)}\bigg\ } \nonumber\\&\quad + dy_t \bigg[\sin(\phi-\phi_t ' ) { \left(2b+\sqrt{2\gamma}q - \sqrt{\frac{\gamma}{2}}{\frac{\partial } { \partial q}}\right ) } \nonumber\\&\quad + \cos(\phi-\phi_t ' ) { \left(\sqrt{2\gamma}p-\sqrt{\frac{\gamma}{2}}{\frac{\partial } { \partial p}}\right)}\bigg]g , \label{wigner_bzakai}\end{aligned}\ ] ] and the smoothing probability density is given by eq .( [ wigner_smooth ] ) .the use of linear smoothing for the case of being a gaussian process and being a white gaussian noise has been studied in refs .practical strategies of solving eqs .( [ wigner_zakai ] ) and ( [ wigner_bzakai ] ) in general are beyond the scope of this paper , but classical nonlinear filtering and smoothing techniques should help .one can also use the hybrid smoothing theory to study the general problem of force estimation via a squeezed probe beam and a homodyne pll , by modeling the phase modulator as a quantum mechanical oscillator instead and combining the problem studied in this section with the force estimation problem studied in ref .previous sections focus on the estimation of classical signals , but there is no reason why one can not apply smoothing to quantum degrees of freedom as well , as shown in fig . [ quantum_smoothing ] .first consider the predicted density operator at time conditioned upon past observations , given by },\end{aligned}\ ] ] where the classical degrees of freedom are neglected for simplicity .the predicted expectation of an observable , such as the position of a quantum mechanical oscillator , is } = \frac{{\operatorname{tr}}[\hat o\hat f(\tau)]}{{\operatorname{tr}}[\hat f(\tau)]}. \label{predict}\end{aligned}\ ] ] one may also use retrodiction , after some measurements of a quantum system have been made , to estimate its initial quantum state before the measurements , using the retrodictive density operator defined as }.\end{aligned}\ ] ] the retrodicted expectation of an observable is } = \frac{{\operatorname{tr}}[\hat g(\tau)\hat o]}{{\operatorname{tr}}[\hat g(\tau)]}. \label{retrodict}\end{aligned}\ ] ] causality prevents one from going back in time to verify the retrodicted expectation , but if the degree of freedom with respect to at time is entangled with another `` probe '' system , then one can verify the retrodicted expectation by measuring the probe and inferring .the idea of verifying retrodiction by entangling the system at time with a probe can also be extended to the case of smoothing , as proposed by aharonov __ . 
in the middle of a sequence of measurements ,if one weakly couples the system to a probe for a short time , so that the system is weakly entangled with the probe , and the probe is subsequently measured , the measurement outcome on average can be characterized by the so - called weak value of an observable , defined as } { { \operatorname{tr}}[\hat g(\tau)\hat f(\tau)]}.\end{aligned}\ ] ] the weak value becomes a prediction given by eq .( [ predict ] ) when future observations are neglected , such that , and becomes a retrodiction given by eq .( [ retrodict ] ) when past observations are neglected and there is no _ a priori _ information about the quantum system at time , such that .when and are incoherent mixtures of eigenstates , the weak value becomes and is consistent with the classical time - symmetric smoothing theory described in sec .[ classical ] .hence , the weak value can be regarded as a quantum generalization of the smoothing estimate , conditioned upon past and future observations .one can also establish a correspondence between a classical theory and a quantum theory via quasiprobability distributions .given the smoothing probability density in terms of the wigner distributions in eq .( [ wigner_smooth ] ) , one may be tempted to undo the marginalizations over the quantum degrees of freedom and define a smoothing quasiprobability distribution as where and are the wigner distributions of and , respectively .intriguingly , , being the product of two wigner distributions , can exhibit quantum position and momentum uncertainties that violate the heisenberg uncertainty principle .this has been shown in ref . , when the position of a quantum mechanical oscillator is monitored via continuous measurements and smoothing is applied to the observations . from the perspective of classical estimation theory , it is perhaps not surprising that smoothing can improve upon an uncertainty relation based on a predictive theory .the important question is whether the sub - heisenberg uncertainties can be verified experimentally . 
argues that it can be done only by bayesian estimation , but in the following i shall propose another method based on weak measurements .it can be shown that the expectation of using is } { { \operatorname{tr}}[\hat g(\tau)\hat f(\tau ) ] } = { \operatorname{re } } { } _ { \hat g}{\langle\hat q\rangle}_{\hat f},\end{aligned}\ ] ] which is the real part of the weak value , and likewise for , so the smoothing position and momentum estimates are closely related to their weak values .more generally , consider the joint probability density for a quantum position measurement followed by a quantum momentum measurement , conditioned upon past and future observations : } , \label{joint_measurement } \\\mathcal c & \equiv \int dy_q dy_p { \operatorname{tr}}\big[\hat g(\tau)\hat m_p(y_p)\hat m_q(y_q ) \nonumber\\&\quad\times \hat f(\tau)\hat m_q^\dagger(y_q)\hat m_p^\dagger(y_p)\big],\end{aligned}\ ] ] where the measurement operators }{|q\rangle}{\langleq| } , \\ \hat m_p(y_p ) & = \int dp { \left(\frac{\epsilon_p}{2\pi}\right)}^{\frac{1}{4 } } \exp{\left[-\frac{\epsilon_p}{4}(y_p - p)^2\right]}{|p\rangle}{\langlep|}\end{aligned}\ ] ] are assumed to be gaussian and backaction evading .after some algebra , }\tilde p(q , p ) , \label{convolution}\\ \tilde p(q , p ) & \equiv \frac{1}{2\pi\mathcal c } \int du dv\exp{\left(-\frac{\epsilon_q u^2+\epsilon_p v^2}{8}\right ) } \nonumber\\&\quad\times { \left\langlep+\frac{v}{2}\right|}\hat g(\tau){\left|p-\frac{v}{2}\right\rangle}\exp(ivq ) \nonumber\\&\quad\times { \left\langleq-\frac{u}{2}\right|}\hat f(\tau){\left|q+\frac{u}{2}\right\rangle}\exp(ipu).\end{aligned}\ ] ] from the perspective of classical probability theory , eq .( [ convolution ] ) can be interpreted as the probability density of noisy position and momentum measurements with noise variances and , when the measured object has a classical phase - space density given by . in the limit of infinitesimally weak measurements , , and , can be obtained approximately from an experiment with small and by measuring for the same and and deconvolving eq .( [ convolution ] ) . in practice , and only need to be small enough such that .this allows one , at least in principle , to experimentally demonstrate the sub - heisenberg uncertainties predicted in ref . in a frequentist way , not just by bayesian estimation as described in ref .note , however , that can still go negative , so it can not always be regarded as a classical probability density .this underlines the wave nature of a quantum object and may be related to the negative probabilities encountered in the use of weak values to explain hardy s paradox .in conclusion , i have used a discrete - time approach to derive the classical and quantum theories of time - symmetric smoothing .the hybrid smoothing theory is applied to the design of pll , and the relation between the proposed theory and aharonov _ et al . _s weak value theory is discussed .possible generalizations of the theory include taking jumps into account for the classical random process and adding quantum measurements with poisson statistics , such as photon counting .potential applications not discussed in this paper include cavity quantum electrodynamics , photodetection theory , atomic magnetometry , and quantum information processing in general . 
on a more fundamental level, it might also be interesting to generalize the weak value theory and the smoothing quasiprobability distribution to other kinds of quantum degrees of freedom in addition to position and momentum , such as spin , photon number , and phase .a general quantum smoothing theory would complete the correspondence between classical and quantum estimation theories .discussions with seth lloyd and jeffrey shapiro are gratefully acknowledged . this work is financially supported by the keck foundation center for extreme quantum information theory . h. l. van trees , _ detection , estimation , and modulation theory , part i _ ( wiley , new york , 2001 ) ; _ detection , estimation , and modulation theory , part ii : nonlinear modulation theory _( wiley , new york , 2002 ) ; _ detection , estimation , and modulation theory , part iii : radar - sonar processing and gaussian signals in noise _ ( wiley , new york , 2001 ) .p. warszawski , h. m. wiseman , and h. mabuchi , * 65 * , 023802 ( 2002 ) ; p. warszawski and h. m. wiseman , j. opt .b : quant .semiclass .opt . * 5 * , 1 ( 2003 ) ; * 5 * , 15 ( 2003 ) ; n. p. oxtoby , p. warszawski , h. m. wiseman , he - bi sun and r. e. s. polkinghorne , * 71 * , 165317 ( 2005 ) .v. p. belavkin , radiotech .elektron . * 25 * , 1445 ( 1980 ) ; v. p. belavkin , in _ information complexity and control in quantum physics _ , edited by a. blaquire , s. diner , and g. lochak ( springer , vienna , 1987 ) , p. 311 ; v. p. belavkin , in _stochastic methods in mathematics and physics _ , edited by r. gielerak and w. karwowski ( world scientific , singapore , 1989 ) , p. 310 ; v. p. belavkin , in _ modeling and control of systems in engineering , quantum mechanics , economics , and biosciences _ , edited by a. blaquire ( springer , berlin , 1989 ) , p. 245 .a. barchielli , l. lanz , and g. m. prosperi , nuovo cimento , * 72b * , 79 ( 1982 ) ; found .* 13 * , 779 ( 1983 ) ; h. carmichael , _ an open systems approach to quantum optics _ ( springer - verlag , berlin , 1993 ) .s. m. barnett , d. t. pegg , j. jeffers , o. jedrkiewicz , and r. loudon , * 62 * , 022313 ( 2000 ) ; s. m. barnett , d. t. pegg , j. jeffers , and o. jedrkiewicz , * 86 * , 2455 ( 2001 ) ; d. t. pegg , s. m. barnett , and j. jeffers , * 66 * , 022106 ( 2002 ) .
classical and quantum theories of time-symmetric smoothing, which can be used to optimally estimate waveforms in classical and quantum systems, are derived using a discrete-time approach, and the similarities between the two theories are emphasized. application of the quantum theory to homodyne phase-locked loop design for phase estimation with narrowband squeezed optical beams is studied. the relation between the proposed theory and aharonov _et al._'s weak value theory is also explored.
the study of gross properties of bulk self - gravitating objects using thomas - fermi model has been discussed in a very lucid manner in the text book on statistical physics by landau and lifshitz . in this bookthe model has also been extended for the bulk system with ultra - relativistic electrons as one of the constituents .analogous to the conventional white dwarf model , these electrons in both non - relativistic and ultra - relativistic cases provide degeneracy pressure to make the bulk system stable against gravitational collapse .the results are therefore an alternative to lane - emden equation or chandrasekhar equation .the mathematical formalism along with the numerical estimates of various parameters , e.g. , mass , radius , etc . for white dwarf starsmay be taught as standard astrophysical problems in the master of science level classes for the students having astrophysics and cosmology as spacial paper .the standard version of thomas - fermi model has also been taught in the m.sc level atomic physics and statistical mechanics general classes .more than two decades ago , bhaduri et.al . developed a formalism for thomas - fermi model for a two dimensional atom .the problem can also be given to the advanced level m.sc .students in the quantum mechanics classes . the numerical evaluation of various quantities associated with the two dimensional atomsare also found to be useful for the students to learn numerical techniques , computer programing along with the physics of the problem . the work of bhaduri et.al .is an extension of standard thomas - fermi model for heavy atoms into two - dimensional scenario . a two - dimensional version of thomas - fermi model has also been used to study the stability and some of the gross properties of two - dimensional star cluster . in our opinionthis is the first attempt to apply thomas - fermi model to two - dimensional gravitating object .however , to the best of our knowledge the two - dimensional generalization of thomas - fermi model to study gross properties of bulk self - gravitating objects , e.g. , white dwarfs has not been reported earlier .this problem can also be treated as standard m.sc level class problem for the advanced level students with astrophysics and cosmology as special paper . in this articlewe shall therefore develop a formalism for two dimensional version of thomas - fermi model to investigate some of the gross properties of two - dimensional hypothetical white dwarf stars .the work is essentially an extension of the standard three - dimensional problem which is discussed in the statistical physics book by landau and lifshitz .the motivation of this work is to study newtonian gravity in two - dimension .analogous coulomb problem with logarithmic type potential has been investigated in an extensive manner . however , the identical problem for gravitating objects has not been thoroughly studied ( except in ) .one can use this two - dimensional gravitational picture as a model calculation to study the stability of giant molecular cloud during star formation and also in galaxy formation .the article is arranged in the following manner . in the next sectionwe have developed the basic mathematical formalism for a two - dimensional hypothetical white dwarf star . in section - iii , we have investigated the gross properties of white dwarf stars in two - dimension . in section - iv ,the stability of two - dimensional white dwarfs with ultra - relativistic electrons as one of the constituents have been studied . 
finally in the last section we have given the conclusion of this work .in this section we start with the conventional form of poisson s equation , given by ( see also ) where is the gravitational potential , is the surface density of matter and for the two - dimensional scenario has to be expressed in its two - dimensional form .let us assume that the bulk object in two dimensional geometry is an hypothetical white dwarf star for which the inward gravitational pressure is balanced by the outward degeneracy pressure of the two dimensional electron gas .the mass of the object is coming from the heavy nuclei distributed in two - dimensional bounded geometry . now the fermi energy for a two dimensional electron gas is given by where and are respectively the fermi momentum and the mass of the electrons .further , the number of electrons per unit surface area is given by hence the electron fermi energy is let is the baryon mass per electron , then it can very easily be shown that where is the mass density ( mass per unit area ) of the matter . then following ,the thomas - fermi condition is given by since is a function of radial coordinate , the chemical potential and the matter density also depend on the radial coordinate . then replacing the gravitational potential with the electron fermi energy from eqn.(6 ) , the poisson s equation ( eqn.(1 ) )can be written as assuming circular symmetry i.e. , independent of coordinate , replacing by from eqn.(5 ) and expressing in two - dimensional polar form , we have from eqn.(7 ) substituting , where may be called the scaled radial coordinate with the constant we can write down the poisson s equation in the following form it is quite obvious that in this case the solution is given by where is the ordinary bessel function of order zero .the surface of the two - dimensional bulk object is obtained from the first zero (say ) of the bessel function , i.e. , hence the radius of the object is given by .further , at the centre , i.e. , for , therefore the solution can also be written in the form hence the central density is given by therefore we can also write gives the variation of matter density with circular symmetry , the poisson s equation in polar coordinate can also be written in the following form let us now integrate this differential equation with respect to r from , which is the centre to , the surface of the bulk two dimensional stellar object. then we have where the mass of the object . now expressing in terms of and using the solution for ( eqn.(14 ) ), we finally get where we have used the standard relation .therefore this transcendental equation ( eqn.(20 ) ) can be solved numerically to get mass - radius relation for the two - dimensional stellar objects in thomas - fermi model in the non - relativistic scenario . in fig.(1 ) , we have shown graphically the form of mass radius relation for such objects .for the sake of illustration we have chosen the central density in such a manner that the maximum mass of the object is .the average density can also be obtained from the definition which can be expressed in terms of central density , and is given by this section following we have considered a two - dimensional hypothetical white dwarf star composed of massive ions and degenerate ultra - relativistic electron gas in two - dimensional geometrical configuration .the stability of the object is governed by electron degeneracy pressure . 
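The scaled quantities characterising the non-relativistic solution can be checked directly with scipy once one accepts, as above, that the profile follows the Bessel function J0 with the surface at its first zero. The short sketch below works entirely in the dimensionless radial variable; it is an illustrative check rather than a reproduction of the mass-radius curve of fig. (1), and converting back to physical units requires the constant defined in the scaling of eq. (9).

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

x1 = jn_zeros(0, 1)[0]                     # first zero of J0: the scaled stellar radius
print("scaled radius x1 =", x1)            # ~ 2.404826

x = np.linspace(0.0, x1, 201)
profile = j0(x)                            # scaled density profile rho(x)/rho_c
print("radius where the density drops to half the central value:",
      x[np.argmin(np.abs(profile - 0.5))])

# 2-D mass integral in scaled units: int_0^{x1} J0(x) x dx = x1 * J1(x1)
print("scaled mass integral x1*J1(x1) =", x1 * j1(x1))

# average over the disc: <rho>/rho_c = 2 J1(x1) / x1
print("average/central density =", 2.0 * j1(x1) / x1)
```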
in the ultra - relativistic scenariothe energy of an electron is given by the usual expression where is the velocity of light and is the electron momentum .the fermi energy for this degenerate electron gas is then given by with , the electron fermi momentum . hence the number density ( the surface value ) for ultra - relativistic electron gas can be written in the following form therefore the electron fermi energy where and is the mass density for such ultra - relativistic matter . then following eqn.(7 ) , we have the thomas - fermi equation in two - dimension in the ultra - relativistic electron gas scenario where is the scaled radial coordinate and the constant unfortunately the non - linear differential equation given by eqn.(27 ) can not be solved analytically . to obtain its numerical solutionwe have used the standard four point runge - kutta numerical technique and a code is written in fortran 77 to solve eqn.(27 ) using the initial conditions ( i ) , at ( which indicates the origin of the two - dimensional white dwarf ) , which is the maximum value of , then as a consequence ( ii ) , the other initial condition . the surface of this bulk two - dimensional objectis then obtained from the boundary condition , with the scaled radius parameter . from the numerical solution of eqn.(27 ) with the initial conditions ( i ) and ( ii ) , the actual value of the radius can be obtained . as a cross check ,we have used mathematica in linux platform and got almost same result . in our formalism , instead of solving for the gravitational potential in two - dimension , we have solved numerically for the electron chemical potential with the initial conditions ( i ) and ( ii ) . the initial condition ( i ) depends on the central density , which we have supplied by hand as input . in the non - relativistic case by trial and error methodwe have fixed the value g to get .whereas in the ultra - relativistic electron gas scenario , we have started from g and gone upto g .since we are solving numerically for and the central density is supplied by hand , the singularity at the origin ( ) does not appear in our analysis .singularity at the origin also do not appear when lane - emden equation ( newtonian picture ) and tov equation ( general relativistic scenario ) are solved numerically by supplying the central density by hand . in this casethe mass - radius relation can also be derived from eqn.(17 ) with replaced by from eqn.(6 ) , the thoma - fermi condition , and is given by unlike the non - relativistic scenario , in this case the derivative term at the surface has to be obtained from the numerical solution of thomas - fermi equation ( eqn.(27 ) ) . in fig.(2 ) we have shown the graphical form of mass radius relation for the ultra - relativistic case .the qualitative nature of fig.(1 ) and fig.(2 ) are completely different . unlike the non - relativistic situation , where the mass radius relation is obtained only for a particular central density , in this case each point on the m - r curve corresponds a particular central density .this is obvious from the initial condition ( i ) , where is the given central density . since the central density has a fixed value in the non - relativistic scenario , the mass of the object increases with the increase of its radius . to make these two figures more understandable , in fig.(3 ) andfig.(4 ) we have plotted the variations of mass and radius respectively for the object against the central density in the ultra - relativistic scenario . 
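The shooting procedure described here (fix the central value, integrate outward with a Runge-Kutta scheme, stop where the scaled chemical potential first vanishes) is easy to reproduce with scipy. In the sketch below, the right-hand side `source` is only a stand-in with the Lane-Emden-like quadratic form one expects when the surface density scales as the square of the Fermi energy; the exact coefficients and sign convention must be taken from eq. (27), so the printed numbers illustrate the structure of the calculation rather than the figures of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def source(chi):
    # stand-in for the right-hand side of the scaled equation (27); replace as needed
    return -chi**2

def rhs(x, y):
    chi, dchi = y
    # chi'' + chi'/x = source(chi); the 1/x term is regularised at the origin,
    # where the series expansion gives chi''(0) = source(chi0)/2
    curv = source(chi) - (dchi / x if x > 1e-12 else 0.5 * source(chi))
    return [dchi, curv]

def surface(x, y):                      # event: chi = 0 marks the stellar surface
    return y[0]
surface.terminal = True
surface.direction = -1

def solve_star(chi0, x_max=1e3):
    sol = solve_ivp(rhs, (1e-8, x_max), [chi0, 0.0], events=surface,
                    rtol=1e-8, atol=1e-10)
    if sol.t_events[0].size == 0:
        return None, None               # no surface found within x_max
    x_R = sol.t_events[0][0]            # scaled radius
    dchi_R = sol.y_events[0][0][1]      # chi'(x_R), which enters the mass-radius relation
    return x_R, dchi_R

for chi0 in (0.5, 1.0, 2.0, 4.0):       # stands in for the central density supplied by hand
    x_R, dchi_R = solve_star(chi0)
    if x_R is None:
        print(f"chi0 = {chi0}: no surface found")
        continue
    print(f"chi0 = {chi0:4.1f}   scaled radius = {x_R:9.4f}   chi'(surface) = {dchi_R:10.5f}")
```

As in the text, each central value produces its own radius and surface gradient, which is why every point of the ultra-relativistic mass-radius curve corresponds to a different central density.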
in fig.(3 ) we have plotted the mass of the object expressed in terms of solar mass , with the central density of the object expressed in terms of normal nuclear density . in fig.(4 ) we have shown the variation of radius of the object with central density . from the study of density profiles of the object in the ultra - relativistic scenario ,we have noticed that for the low value of the central density , the numerical solution of eqn.(27 ) shows that the matter density vanishes or becomes negligibly small for quite large radius value , whereas for very high central density , the matter density vanishes very quickly .therefore in this model the stellar objects with very low central density are large in size and since the matter density is low enough , the mass is also quite small . on the other hand for the large values of central density , the objects are small in size , i.e. , they are quite compact in size but massive enough .this study explains the nature of variations of mass and radius with central density for the ultra - relativistic case .further in ultra - relativistic scenario the average density of matter inside the bulk two - dimensional stellar object is given by which exactly like the non - relativistic case also depends on the central density and the radius of the object .however , unlike the non - relativistic situation , here the average density depends on the surface value of the gradient of electron fermi energy , which has to be obtained numerically from the solution of eqn.(27 ) .in conclusion we would like to comment that although the formalism developed here is for a hypothetical stellar object , which is basically an extension of standard three dimensional problem discussed in the book by landau & lifshitz , we strongly believe that it may be considered as interesting post - graduate level problem for the physics students , including its numerical part .this problem can also be treated as a part of dissertation project for post graduate physics students .the study of thomas - fermi model in two - dimension for self - gravitating objects may be used for model calculations of star formation and galaxy formation from giant gaseous cloud .finally , we believe that the problem solved here has some academic interest . l.d .landau and e.m lifshitz , statistical physics , part - i , butterworth / heinemann , oxford , ( 1998 ) pp .shapiro and s.a .teukolsky , black holes , white dwarfs and neutron stars , john wiley and sons , new york , ( 1983 ) pp .61 and 188 .h.q . huang and k , n , yu , stellar astrophysics , springer , ( 1998 ) , pp . 337 .r.k bhaduri , s das gupta and s.j lee , am .j. phys . ,* 58 * ( 1990 ) 983 .sanudo , a.f .pacheco , revista mexicana de fisica , * e53 * , ( 2006 ) 82 .sanu engineer , k. srinivasan and t. padmanabhan , astrophys .* 512 * , ( 1999 ) 1 .m. abramowitz and i.a stegun ( ed . ) , handbook of mathematical functions , dovar publication , ny , ( 1972 ) pp .
in this article we have solved a hypothetical problem related to the stability and gross properties of two-dimensional self-gravitating stellar objects using the thomas-fermi model. the formalism presented here is an extension of the standard three-dimensional problem discussed in the book on statistical physics, part-i, by landau and lifshitz. further, the formalism presented in this article may be considered as a class problem for post-graduate level students of physics or may be assigned as a part of their dissertation project.
recall the classical linear transient 2nd - order convection - diffusion problem for an unknown scalar function on a bounded domain : {rcll } \partial_{t}u - \varepsilon \delta u + { \boldsymbol \beta}\cdot \operatorname{\bf grad } u & = & f & \text{in } \omega\;,\\ u & = & 0 & \text{on } \partial\omega\;,\\ u(0 ) & = & u_{0}\;. \end{array}\end{gathered}\ ] ] here , and in the remainder of the paper , stands for a smooth vector field . for encounter a singularly perturbed boundary value problem , whose _ stable _ discretization has attracted immense attention in numerical analysis , see and the many references cited therein .the boundary value problem turns out to be a member of a larger family of 2nd - order boundary value problems ( bvp ) , which can conveniently be described using the _ calculus of differential forms_. it permits us to state the generalized transient 2nd - order convection - diffusion problems as {rcll } \ast \partial_t \omega(t ) - \varepsilon ( -1)^{l } d\ast d \omega(t ) + \ast l_{\boldsymbol \beta } \omega ( t ) & = & \varphi \quad & \text{in } \omega\subset\mathbb{r}^{n}\;,\\ \imath^{\ast}\omega & = & 0 & \text{on } \partial\omega\;,\\ \omega(0 ) & = & \omega_{0}\;. \end{array}\end{gathered}\ ] ] these are bvps for an unknown time dependent -form , , on the domain .the symbol stands for a hodge operator mapping an -form to an -form , and denotes the exterior derivative .dirichlet boundary conditions are imposed via the trace of the -form .we refer to ( * ? ? ?* sect . 2 ) and ( * ? ? ?2 ) for more details and an introduction to the calculus of differential forms in the spirit of this article . by denote the lie derivative of for the given velocity field ( * ? ? ?* ch . 5 ) .it provides the convective part of the differential operator in .details will be given in sect .[ sec : extr - contr - lie ] below . in two and three dimensionsdifferential forms can be modelled by means of functions and vector fields through so - called vector proxies , see ( * ? ? ?7 ) , ( * ? ? ?* table 2.1 ) , and ( * ? ? ?* table 2.1 ) to indicate that a form is mapped to its corresponding euclidean vector proxy ] .thus , for , in the case of induced by the euclidean metric on , the following vector analytic incarnations of are obtained .the solution -form is replaced with an unknown vector field or function , _ cf . _* sect . 2 ) .* for we recover . * in the case we arrive at a convection - diffusion problem {rcll } \partial_{t}{{\mathbf{u}}}+ \varepsilon{\operatorname{\bf curl}}{\operatorname{\bf curl}}{{\mathbf{u}}}- { \boldsymbol \beta}\times { \operatorname{\bf curl}}{{\mathbf{u}}}+ \operatorname{\bf grad}({{\mathbf{u}}}\cdot{\boldsymbol \beta } ) & = & { { \mathbf{f } } } & \text{in } \omega\;,\\ { { \mathbf{u}}}\times{{\mathbf{n}}}&= & 0 & \text{on } \partial\omega\;,\\ { { \mathbf{u}}}(0 ) & = & { { \mathbf{u}}}_{0}\;. \end{array } \end{gathered}\ ] ] * the corresponding boundary value problems for vector proxies of 2-forms read {rcll } \partial_{t}{{\mathbf{u}}}-\varepsilon{\operatorname{\bf grad}}{{\mathsf{div}}}{{\mathbf{u}}}+ { \boldsymbol \beta}({{\mathsf{div}}}{{\mathbf{u } } } ) + \operatorname{\bf curl}({{\mathbf{u}}}\times{\boldsymbol \beta } ) & = & { { \mathbf{f } } } & \text{in } \omega\;,\\ { { \mathbf{u}}}\cdot{{\mathbf{n}}}&= & 0 & \text{on } \partial\omega\;,\\ { { \mathbf{u}}}(0 ) & = & { { \mathbf{u}}}_{0}\;. 
\end{array } \end{gathered}\ ] ] * for the diffusion term vanishes and we end up with pure convection problems for a scalar density {rcll } \partial_{t}u + { { \mathsf{div}}}({\boldsymbol \beta}u ) & = & f & \text{in } \omega\;,\\ { { \mathbf{u}}}(\cdot,0 ) & = & { { \mathbf{u}}}_{0}\ ; , \end{array } \end{gathered}\ ] ] where we assume on though equivalent to the vector proxy formulations very much conceal the common structure inherent in - .thus , in this article , we consistently adopt the perspective of differential forms .thinking in terms of co - ordinate free differential forms offers considerable benefits as regards the construction of structure preserving spatial discretizations . by now thisis widely appreciated for boundary value problems for , where the so - called discrete exterior calculus , or , equivalently , the mimetic finite difference approach , or discrete hodge - operators have shed new light on existing discretizations and paved the way for new numerical methods .all these methods have in common that -forms are approximated by -co - chains on generalized triangulations of the computational domain .this preserves co - homology and , thus , plenty of algebraic properties enjoyed by the solutions of the continuous bvp carry over to the discrete setting , see , e.g. , ( * ? ? ?moreover , simplicial and tensor product triangulations allow the extension of co - chains to discrete differential forms , which furnishes structure preserving finite element methods .also in this paper the focus will be on galerkin discretization by means of discrete differential forms and how it can be used to deal with lie derivatives . in light of the success of discrete differential forms, it seems worthwhile exploring their use for the more general equations .this was pioneered in and this article continues and extends these considerations .revealing structure is not our only motivation : as already pointed out the discretization of has been the object of intense research , but and have received scant attention , though , is relevant for numerical modelling , e.g. , in magnetohydrodynamics .apart from standard galerkin finite element methods and , we would like to mention as attempts to devise a meaningful discretization of the transport term in .the first paper relies on a lagrangian approach along with a divergence preserving projection , which can be interpreted as an interpolation onto co - chains .the second article employs edge degrees of freedom ( 1-co - chains ) combined with a special transport scheme . 
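the identification of the convective term for 1-forms with the vector proxy expression grad(u·β) − β × curl u, as it appears in the boundary value problem above, can be checked symbolically. the following sketch is ours and not part of the original text; it verifies, for smooth fields in two dimensions, that cartan's homotopy formula d(i_β ω) + i_β(dω) reproduces the standard coordinate expression for the lie derivative of a 1-form, using python/sympy.

# sketch (ours): for a 1-form w = u1 dx + u2 dy in 2d, check that cartan's formula
# d(i_b w) + i_b(dw) equals the coordinate formula b^j d_j u_i + u_j d_i b^j,
# i.e. the vector proxy grad(u.b) - b x curl(u) used in the 1-form problem above.
import sympy as sp

x, y = sp.symbols('x y')
u1, u2, b1, b2 = (sp.Function(n)(x, y) for n in ('u1', 'u2', 'b1', 'b2'))

curl_u = sp.diff(u2, x) - sp.diff(u1, y)          # scalar curl in 2d: dw = curl_u dx^dy
i_b_w = b1*u1 + b2*u2                             # contraction i_b w (a 0-form)
# i_b(dx^dy) = b1 dy - b2 dx, so cartan's formula gives, componentwise,
cartan = (sp.diff(i_b_w, x) - b2*curl_u,          # dx-component: grad(u.b) - b x curl u
          sp.diff(i_b_w, y) + b1*curl_u)          # dy-component

# independent coordinate formula for the lie derivative of a 1-form
coord = (b1*sp.diff(u1, x) + b2*sp.diff(u1, y) + u1*sp.diff(b1, x) + u2*sp.diff(b2, x),
         b1*sp.diff(u2, x) + b2*sp.diff(u2, y) + u1*sp.diff(b1, y) + u2*sp.diff(b2, y))

assert all(sp.simplify(a - b) == 0 for a, b in zip(cartan, coord))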
both refrain from utilizing exterior calculuswe stress that this article is confined to the derivation and formulation of fully discrete schemes for on a single _ fixed _ triangulation of .principal attention is paid to the treatment of the convective terms .the discussion centers on structural and algorithmic aspects , whereas rigorous numerical analysis of stability and convergence analysis will be addressed in some forthcoming work , because it relies on certain technical results that must be established for each degree of the form separately .the plan of the paper is as follows : we first review the definition of lie derivatives and material derivatives for differential forms and give a short summary on discrete forms .next , we present a semi - lagrangian approximation procedure for material derivatives of arbitrary discrete -forms .afterwards we present an eulerian treatment of convection of discrete forms .these two sections focus on the formulation of fully discrete schemes .they are subsequently elaborated for the eddy current model in moving media as an example for non - scalar convection - diffusion , see also .finally we present numerical experiments for convection of -forms in dimensions .they give empiric hints on convergence and stability .we introduce a space - time domain ] . given a vector field and initial values the associated flow ^ 2 \mapsto \mathbb r^n ] . in this case hence is the inverse of : for convenience we abbreviate .it follows directly from that the jacobian solves : we write for the space of differential -forms on ( cf .* chapter v , section 3 ) ) .differential -forms can be seen as continuous additive mappings from the set of compact oriented , piecewise smooth , -dimensional sub - manifolds of into the real numbers .the spaces can be equipped with the -norm , see ( * ? ? ?2.2 ) . for euclidean hodge operatorthis agrees with the usual -norm of the vector proxies .although it is very common to regard the mapping as integration we adopt the notation of a duality pairing instead of . only for we will use the integral symbol-forms can be identified with real valued functions .recall the definition of the directional derivative for a smooth function : the lie derivative of -forms generalizes this .note that the point evaluation of -forms is replaced with evaluation on -dimensional oriented sub - manifolds .thus the lie derivative of a -form is in terms of the pullback defined by we can also write following the extrusion is the union of flux lines emerging at running from to ( figure [ fig : extrusion ] ) .the orientation of induces an orientation of the extrusion such that the boundary is : plugging this into the definition of the lie derivative ( [ eq : liedefinition ] ) we get by means of stokes theorem the contraction operator is defined as the limit of the dual of the extrusion : and we recover from ( [ eq : lieextrusion ] ) cartan s formula ( * ? ? 
?5.3 ) for the lie derivative : for -forms the second term vanishes , for -forms the first one .[ t][b] [ t][b] with respect to velocity field .,title="fig : " ] for time dependent -forms the limit value : corresponding to is the rate of change of the action of -forms in moving media , hence a material derivative .we deduce : since the exterior derivative and the lie derivative commute we have : as a consequence closed forms remain closed when they are advected by the material derivative .if and then this property is preserved : .[ remark : closed ] for -forms with with vector proxy formulas and give a general convective term while for -forms with vector proxy the convective term reads for closed -forms , with vector proxy , which satisfies , we find we rely on a version of discrete exterior calculus spawned by using discrete differential forms .we equip with a simplicial triangulation .the sets of subsimplices of dimension are denoted by and have cardinality . in words , is the set of vertices of , the set of edges , etc .we write for the space of whitney -forms defined on , see ( * ? ? ?23 ) and ( * ? ? ?* sect . 3 ) .from the finite element point of view a whitney -form is a linear combination of certain basis functions associated with subsimplices of .the basis functions have compact support and vanish on all cells that have trivial intersection with subsimplex .locally , e.g. in each cell with , the basis functions can be expressed in terms of the barycentric coordinate functions and whose gradients .the barycentric coordinate functions are associated with vertices of cell , e.g. .if denotes the index set of vertices of subsimplex then the construction of these spaces ensures that the traces on element boundaries are continuous .the evaluation maps , are the corresponding degrees of freedoms , which give rise to the usual nodal interpolation operators , also called derham maps ( * ? ? ?3.1 ) , discrete differential forms are available for more general triangulations ( * ? ? ?the whitney forms feature piecewise linear vector proxies , but can be extended to schemes with higher polynomial degree ( * ? ? ?for the sake of simplicity we are not going to discuss those .there are two basically different strategies to discretize material derivatives .we either approximate the difference quotient ( [ eq : matdefinition ] ) directly or use the formulation ( [ eq : material_deriv ] ) in terms of partial time derivative of forms and lie derivative .the former policy is referred to as lagrangian and in our setting of a single fixed mesh it is known as _ semi - lagrangian_. it is explored in this section .the latter is called _ eulerian _ and reduces to a method of lines approach based on a spatial discretization of the lie derivatives .its investigation is postponed to sect .[ sec : euler - appr - discr ] .we begin the study of the semi - lagrangian method with a look at the pure transport problem : given some and find , such that here and below we assume in the following that on the boundary of the normal components of the vector field vanish , hence .this is a restrictive but very common assumption for semi - lagrangian methods . 
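before turning to the variational formulations, the semi-lagrangian idea is easiest to picture for 0-forms (scalar functions): the new nodal value is obtained by tracing the characteristic backward over one time step and interpolating the old solution at the characteristic foot. the following one-dimensional sketch is ours and only illustrates this mechanism; it assumes a uniform periodic grid, a prescribed velocity β(x), explicit euler for the characteristics and piecewise linear interpolation, rather than the finite element setting of the text.

# minimal 1d semi-lagrangian advection of a scalar (0-form); a sketch, assuming
# a uniform periodic grid and a constant-in-time velocity field beta(x).
import numpy as np

def semi_lagrangian_step(u, xgrid, beta, dt):
    # one direct semi-lagrangian step: u_new(x_i) = u_old(phi_{-dt}(x_i))
    dx = xgrid[1] - xgrid[0]
    period = xgrid[-1] - xgrid[0] + dx
    feet = xgrid - dt * beta(xgrid)                     # backward euler characteristic feet
    feet = xgrid[0] + np.mod(feet - xgrid[0], period)   # wrap into the periodic domain
    return np.interp(feet, xgrid, u, period=period)     # linear interpolation of old solution

# usage: advect a gaussian bump with constant velocity 0.5 up to t = 0.5
xgrid = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (xgrid - 0.3) ** 2)
for _ in range(100):
    u = semi_lagrangian_step(u, xgrid, lambda x: 0.5 * np.ones_like(x), dt=0.005)
# the bump should now be centred near x = 0.8, up to interpolation smearing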
only a few semi - lagrangian methods like ellam can handle a non - vanishing velocity field on the boundary .there are two different variational formulations for the transport problem .the _ direct _variational formulation reads : given and initial data find , such that the _ adjoint _ variational formulation for reads : given and initial data find , such that [ lem:31 ] the direct ( [ def : direct_var ] ) and the adjoint ( [ def : weak_var ] ) variational formulations are equivalent .let and . since and : and the equivalence follows .[ cor : duallie ] under the assumption : replacing the limits in ( [ def : direct_var ] ) and ( [ def : weak_var ] ) with a finite difference quotient yields semi - discrete timestepping schemes : given and find , such that and given and find , such that restricting the semi - discrete formulation to the discrete spaces , we end up with the following direct and adjoint schemes : given and find , such that and find , such that a significant advantage of the semi - lagrangian approach is the preservation of closeness ( see remark [ remark : closed ] ) .this property is important in many physical applications .the adjoint semi - lagrangian timestepping fulfils this in a weak sense .a form is weakly closed if .for the adjoint semi - lagrangian timestepping boils down to as the exterior derivative and pullback commute we conclude with .hence is weakly closed if is weakly closed .[ remark : weakclosed ] [ remark : stab_const_cont ] another advantage of semi - lagrangian methods is the straightforward stability analysis for the homogeneous transport problem . the corresponding direct scheme reads : given , find testing with we immediately get the stability estimate : with . for and euclidean hodge the concrete constants can be deduced from vector proxy representations of the pullback .the stability estimates then read : where is the euclidean matrix norm .stability will depend in general on the distortion effected by . for even the usual assumption does not guarantee stability .it is important to note that both the stability estimates and the preservation of weak closedness as previously stated assume exact evaluation of the bilinear forms and . any non - trivial problem will require the use of additional approximations .we propose the following three approximation steps . 1. first we use some numerical integrator to determine approximations to the exact evolutions of vertices .note that the direct scheme requires to solve backward in time , while for the adjoint scheme we calculate forward in time .the simplest numerical integrator for this first step is the forward euler scheme , which gives .then we approximate the evolution of arbitrary points as the convex combination of the approximated evolutions of surrounding vertices : this yields a piecewise linear , continuous approximation of the exact evolution .+ note that for small time steps and lipschitz continuous the approximating flow maps the simplicial triangulation onto a new simplicial triangulation , such that each subsimplex of has an image in .finally , we apply the nodal interpolation operators of discrete forms ( * ? ? 
?3.3 ) to map the transported discrete forms and onto the space of discrete forms .let us illustrate these three steps for the case of -forms and -forms : to determine now the interpolant of a discrete transported -form , by linearity it is enough to consider the basis functions .since for all vertices and the matrix operator with entries maps the expansions coefficients of to the coefficients of .this means that in each time step we not only need to determine the points but also the location within the triangulation . to find the element , in which is located , we trace the path of the trajectory from one element to the next . based on this datathe matrix entries ( [ def : p0_entries ] ) can be assembled element by element ( see fig . [fig : transport_vertex ] for the direct scheme ) . , we move backward along the trajectory starting from and identify the crossed elements and . in this case and are the only non - zero entries in the row of . ]the advantage of the second approximation step , the linear interpolation , is elucidated by the treatment of -forms .the interpolation of a transported discrete is determined by the interpolation of transported basis forms and the condition for all edges .this defines a matrix , mapping the expansion coefficients of to those of .the matrix entries are line integrals along the straight line from to , if is the edge connecting the vertices and ( see fig .[ fig : transport_edge ] for the direct scheme ) .( black curved line ) is approximated by a straight line (black dashed line ) . in the casedepicted here all basis functions associated with edges of elements , and yield a non - zero entry . ] to determine the entries of the -th row , we trace the straight line from to and calculate for each crossed element the line integrals for the attached basis functions. if e.g. the line crosses an element with edge from point to point ( see fig . [fig : sketch_edge ] for the direct scheme ) , then the element contribution to is : }b_{i'}^1 = \lambda_{j'}(a)\lambda_{k'}(b)- \lambda_{k'}(a)\lambda_{j'}(b)\,,\end{aligned}\ ] ] where and are the terminal vertices of edge . ] .^{l-1 } \wedge \gamma\ , , \end{aligned}\ ] ] one is now tempted to use the right hand side of to define approximations for the convection terms .the difficulty here are the face integrals , which are not well - defined for . motivated by finite volume approximations , we replace in the case in the face integrals with a flux function depending on the values and on adjacent elements .we use the characterization ( [ eq : liedefinition ] ) of the lie derivative to determine such consistent flux functions .[ def : bd ] let be lipschitz continuous , and then is the variational _ central _ finite difference , with we could have used one - sided finite differences as well .next we show that the limit exists also for discrete differential forms .[ lemma : standard1 ] the limit exists , and we have the representation with upwind and downwind traces and and the pointwise limit . 
is the index set of faces , that are not on the boundary of .it s cardinality is .here we used let us first note that for all the pointwise limit exists .we decompose the central difference into a sum of upwind and downwind finite differences : and look first at the limit for the upwind finite difference .we split the integration over the domain in a sum of integrals over patches with smooth integrand : and distinguish three cases , see fig .[ fig : limit_pic ] : * and : * and : this implies first that .if elements and share a face it is clear that and we have and by the definition of contraction . * since by assumption is bounded on we get : after summation over all elements we obtain : for the upwind finite difference .a similar argument for the downwind finite difference shows : combining these two results for upwind and downwind finite difference we get the assertion . .in the case of the upwind finite difference we have , that discrete -forms are discontinuous on edges of the blue triangulation .their pullbacks are discontinuous on edges of the red dashed triangulation , the image of . ]the careful examination of the proof of lemma [ lemma : standard1 ] shows that the existence of the limit can be shown for quite general approximation spaces for differential forms .we used only the continuity inside each element . for the sake of completenesswe rewrite here the standard convection bilinear form for vector proxies or in the case . denotes the normal vector of a face .discrete whitney -forms are continuous . hence in this casethe face integrals in vanish and we obtain the standard bilinear form for scalar convection : for discrete -forms we have both volume and face contributions : and similar for discrete forms : for discrete -forms we get : for -forms in 3d the limit corresponds to the non - stabilized discontinuous galerkin method for the advection problem from . in the lowest order case this is identical with the scheme for discrete -forms .the limit for the upwind finite difference yields the stabilized discontinuous galerkin methods with upwind numerical flux . in practice we will use local quadrature rules to evaluate the volume and face integrals in .this will not destroy the consistency if the quadrature rules are first order consistent . in the definition of from def .[ def : bd ] we may also appeal to lemma [ lem:31 ] and shift the difference quotient onto the test function .this carries over to the limit and yields two different eulerian timestepping methods for problem that , in analogy to sect .[ sec : deriv_slg ] , we may call `` direct '' and `` adjoint '' . the direct standard eulerian timestepping scheme for convection - diffusion of differential formsis : given and , find such that : the adjoint standard eulerian timestepping is : given and , find such that : now we use the definition of lie - derivatives as limit of a difference quotient to define a proper interpolation operator for lie - derivatives of discrete forms .let be a subsimplex of the triangulation .then both limit values , the limit from the upwind direction and the limit from the downwind direction exist for lipschitz continuous .the continuity of traces of discrete differential forms ensures that integrals are well - defined .the existence of upwind and downwind limits then follows from the existence of the limits for locally linear , which in turn can be calculated explicitly by distinguishing different geometric situations . we skip the tedious details .observe that in general e.g. 
for -forms a short calculation shows and with upwind and downwind traces and . only for cases where is continuous in a neighbourhood of subsimplex upwind and downwindlimit agree . since and correspond to integral evaluation of the usual interpolation operators , we could either use the upwind or the downwind limit to define discrete lie - derivatives .we will use the upwind limit for the direct formulation and the downwind limit for the adjoint formulation .this is motivated by taking the cue from semi - lagrangian schemes .let be lipschitz continuous and .further let be the subsimplices corresponding to the degrees of freedom of basis functions then is the upwind interpolated discrete lie - derivative .analogously the downwind interpolated lie - derivative is it follows from the cartan formula that with the exterior derivatives can be evaluated exactly for discrete differential forms .thus , the contraction operators are the only approximations . but for locally constant discrete forms these approximations are exact and one can prove consistency once one has interpolation estimates for discrete differential forms .this is outside the scope of this paper .we finally want to stress that it is important to treat the limits and for as limits of co - chains rather than integrals with certain integrands that are defined as pointwise limits .first , these pointwise limits are not well - defined since discrete forms are in general discontinuous across element boundaries .second , even for special cases where the pointwise limit exist we can have to understand this , we may consider the 2d example depicted in fig . [ fig : limit_example ] with constant .we take such that for all edges except the edge between vertices and and calculate the projection of the lie - derivative onto edge between vertices and .then it is clear that and therefor on the other hand , one can easily show that : hence since an explicit calculation of the upwind limits might be expensive for arbitrary , we have to find consistent approximations .the preceding discussion shows that this must be done very carefully .replacing the integration simply with some quadrature will not yield a consistent approximation .we combine the upwind discretization with an implicit euler method and state the implicit upwind methods for convection - diffusion for discrete differential forms .the implicit direct upwind eulerian timestepping scheme for the variational formulation of convection - diffusion of differential forms is : given and , find such that : the adjoint upwind eulerian timestepping scheme for the adjoint variational formulation of convection - diffusion of differential forms is : given and , find such that : in contrast to the semi - lagrangian schemes and , in and we face a non - symmetric algebraic system in each time step , whose iterative solution can be challenging .we therefor consider a second kind of upwind eulerian scheme , that treats the convection in an explicit fashion . 
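in algebraic terms, each step of these implicit upwind eulerian schemes amounts to one (in general non-symmetric) linear solve. the sketch below is ours and purely schematic: M, A and C denote the mass matrix, the stiffness matrix of the diffusion term and a matrix discretizing the lie-derivative term, all assumed to come from a finite element assembly that is not shown here.

# schematic implicit euler step for the spatially discretized problem
#   M du/dt + eps*A u + C u = b   (our notation), i.e. solve
#   (M + dt*(eps*A + C)) u_new = M u_old + dt*b   in each time step.
import scipy.sparse.linalg as spla

def implicit_euler_step(M, A, C, b, u_old, dt, eps):
    # the convection matrix C makes the system non-symmetric, as noted in the text
    lhs = (M + dt * (eps * A + C)).tocsc()
    rhs = M @ u_old + dt * b
    return spla.spsolve(lhs, rhs)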
the direct semi - implicit upwind eulerian timestepping scheme for the variational formulation of convection - diffusion of differential forms is : given and , find such that : the adjoint upwind eulerian timestepping scheme for the variational formulation of convection - diffusion of differential forms is : given and , find such that : these semi - implicit schemes do not coincide with the usual method of lines approach .nevertheless there are two heuristic arguments that could justify the semi - implicit schemes .note first that for discrete -forms the semi - lagrangian and the semi - implicit eulerian scheme are identical , if we use a time step length and explicit euler for solving the characteristic equation .the direct semi - lagrangian scheme for discrete -forms was , written here for vector proxies and for and : find such that while the semi - implicit eulerian scheme was : find such that with on the other hand and by the time step condition : hence we have the identity : the second argument , that supports the definition of semi - implicit schemes applies to forms of any degree and follows from the identity : hence we can derive the semi - implicit schemes and from the semi - lagrangian schemes and in replacing e.g. the analysis of a semi - implicit discontinuous galerkin scheme for scalar non - linear convection - diffusion can be found in . for the linear case this scheme is modulo certain quadrature rules identical with the semi - implicit euler scheme for -forms .there is also a very close link between both the semi - lagrangian and semi - implicit eulerian methods and arbitrary lagrangian - eulerian methods ( ale ) .ale methods are operator splitting methods , that treat the diffusion part in a lagrangian and the convection part in an eulerian fashion . after a certain number of lagrangian iterationssteps an eulerian iteration step maps the mesh function on the distorted mesh back to the initial mesh .the semi - lagrangian and semi - implicit eulerian methods apply this mapping in each time step . in many problemsthese remap operations should preserve certain properties of the mesh function , e.g. volume or closedness .it is the discrete differential form approach , that naturally guaranties such properties for the methods presented here and for the ale method in .in this section we will derive two different convection - diffusion equations for the electromagnetic part of magnetohydrodynamics ( mhd ) models .the first one is an equation for the magnetic field and requires the adjoint methods while the second one for the magnetic vector potential can be solved via the direct methods . commonly in mhd applications ,one neglects the displacement current .this reduced model , called eddy current model , is a system of equations for the magnetic field , the electric field , the magnetic field density , the current density and external source : a uniformly positive conductivity will be assumed in the sequel . to rewrite the eddy current model in terms of a material derivative we substitute and add to faraday s law ( [ eq : fala ] ) .this leaves the solution unchanged , since .hence we end up with the system : next we state the two different variational formulations , _ cf . _2.3 ) . for simplicitywe impose homogeneous electric boundary conditions on . 
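returning briefly to the semi-implicit schemes introduced above: their algebraic form differs from the previous sketch only in that the convection matrix is moved to the right-hand side, so that for ε > 0 only a symmetric system has to be solved per step. this again is our schematic notation, with M, A and C as in the previous sketch.

# schematic semi-implicit step: convection explicit, diffusion implicit,
#   (M + dt*eps*A) u_new = (M - dt*C) u_old + dt*b.
import scipy.sparse.linalg as spla

def semi_implicit_step(M, A, C, b, u_old, dt, eps):
    lhs = (M + dt * eps * A).tocsc()        # symmetric, so cg-type solvers apply
    rhs = (M - dt * C) @ u_old + dt * b
    return spla.spsolve(lhs, rhs)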
testing ( [ eq : fl ] ) with , integration by parts yields : we eliminate using ohm s law and end up with the following variational formulation : seek such that : in order to eliminate as well we use corollary [ cor : duallie ] and get or , equivalently this variational formulation can be approximated using a semi - lagrangian framework yielding an algebraic system : where and are the mass matrices and , .the evaluation of the right hand side requires the calculation of the matrix from .lemma ( [ lemma : commute ] ) shows that the solution of ( [ eq : h_timestepping ] ) is weakly closed if the initial data is weakly closed and . alternatively , we may use one of the implicit eulerian schemes or where is the stiffness matrix for either the standard or the downwind discretization . while these schemes will not preserve the weakly closed property , the semi - implicit upwind eulerian scheme with algebraic system does , due to the interpolation based construction . here , for simplicity , we assume homogeneous magnetic boundary conditions on .the ansatz and solves faraday s law ( [ eq : fl ] ) .we then multiply ampere s law with a test form and integrate by parts : the material laws ( [ eq : mat_sigma ] ) and ( ) eliminate and and we get the -based variational formulation : find such that : the algebraic system for the direct semi - lagrangian scheme then reads as while the systems for the implicit and semi - implicit eulerian schemes are : and here and are the stiffness matrices for and , . corresponds either to the standard or the upwind eulerian discretization of the convection part .we finally we present a few numerical examples that illustrate the performance of the methods derived here . we take and look at {rcll } \partial_t \ast \omega(t ) - \varepsilon ( -1)^{l } d\ast d \omega(t ) + l_{\boldsymbol \beta } \ast\omega ( t ) & = & \varphi \quad & \text{in } \omega\subset\mathbb{r}^{2}\;,\\ \imath^{\ast}\omega & = & 0 & \text{on } \partial\omega\;,\\ \omega(0 ) & = & \omega_{0 } & \text{in } \omega \ ; \end{array}\end{gathered}\ ] ] for 1-forms . in vector proxy notion with this reads {rcll } \partial_{t}{{\mathbf{u}}}+ \varepsilon { \operatorname{\bf curl}}\text{curl } { { \mathbf{u}}}+ { \boldsymbol \beta}({{\mathsf{div}}}{{\mathbf{u } } } ) + { \operatorname{\bf curl}}({{\mathbf{u}}}\times { \boldsymbol \beta } ) & = & { { \mathbf{f } } } & \text{in } \omega\;,\\ { { \mathbf{u}}}\cdot{{\mathbf{n}}}&= & 0 & \text{on } \partial\omega\;,\\ { { \mathbf{u}}}(0 ) & = & { { \mathbf{u}}}_{0 } & \text{in } \omega\ ; , \end{array}\end{gathered}\ ] ] with , and , .we approximate by discrete -forms on a triangular mesh .we study the adjoint semi - lagrangian and eulerian methods , , and for this boundary value problem .* experiment i. * in the first experiment we take in problem the domain ^ 2 $ ] , the divergence free velocity and then with neither the convection nor the diffusion part dominates and we expect for all three schemes first order convergence in the mesh size , if the time step is of order .the convergence plots in figures [ fig : convplot1 ] and [ fig : convplot2 ] show the -error at time and confirm this for different ratios .* experiment iii .* next , we examine the stability properties of the three different schemes for dominating convection , meaning that we choose in problem very small . 
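the convergence rates referred to in these experiments are usually quantified as empirical orders of convergence between successive dyadic mesh refinements. the helper below is ours, and the error values in it are placeholders for illustration only, not the measured errors of the experiments.

# empirical order of convergence p = log(e_coarse / e_fine) / log(2)
import math

def empirical_orders(errors):
    return [math.log(ec / ef) / math.log(2.0) for ec, ef in zip(errors[:-1], errors[1:])]

l2_errors = [4.1e-2, 2.2e-2, 1.1e-2, 5.4e-3]   # hypothetical l2 errors at the final time
print(empirical_orders(l2_errors))              # values close to 1 indicate first order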
while for all three schemes are stable for small and larger cfl - numbers ( see fig .( [ fig : convplot1 ] ) and ( [ fig : convplot2 ] ) ) we encounter this for small only for the semi - lagrangian and the implicit eulerian scheme ( see fig .( [ fig : stab1 ] ) and ( [ fig : stab2 ] ) ) . as expected in the convection dominated case, one has to use either the semi - lagrangian or fully implicit eulerian scheme .* experiment iv . *another question that we address concerns the stabilization for the eulerian schemes . for the implicit timestepping schemes we need to solve in each time step a stationary convection - diffusion problem .it is well - known that in the scalar case non - stabilized methods would produce highly oscillating solutions for convection - diffusion problems with dominating convection . to study this issue, we consider the stationary convection diffusion problem for -forms and : {rcll } { { \mathbf{u}}}+ \varepsilon { \operatorname{\bf curl}}\text{curl } { { \mathbf{u}}}+ { \operatorname{\bf grad}}({\boldsymbol \beta } \cdot { { \mathbf{u } } } ) - { { \mathbf{r}}}\boldsymbol \beta \text{curl } { { \mathbf{u}}}&= & { { \mathbf{f } } } & \text{in } \omega=[-1,1]^2 \;,\\ { { \mathbf{u}}}\cdot{{\mathbf{n}}}&= & 0 & \text{on } \partial\omega\;,\\ { { \mathbf{u}}}(0 ) & = & { { \mathbf{u}}}_{0}\;. \end{array}\end{gathered}\ ] ] we could either use the standard approximation or the upwind interpolated discrete lie - derivative in the discretization of .again , we choose and the divergence free velocity . a. bossavit .discretization of electromagnetic problems : the `` generalized finite differences '' . in w.h.a .schilders and w.j.w .ter maten , editors , _ numerical methods in electromagnetics _ , volume xiii of _ handbook of numerical analysis _ , pages 443522 .elsevier , amsterdam , 2005 .h. de sterk .multi - dimensional upwind constrained transport on unstructured grids for `` shallow water '' magnetohydrodynamics . in _ proceedings of the 15th aiaa cfd conference , jun 11 - 14 , anaheim , ca_. aiaa , 2001 .hyman and m. shashkov .mimetic finite difference methods for maxwell s equations and the equations of magnetic diffusion . in f.l .teixeira , editor , _ geometric methods for computational electromagnetics _ , volume 32 of _ pier _ , pages 89121 . emw publishing , cambridge , ma , 2001 .
we consider generalized linear transient convection-diffusion problems for differential forms on bounded domains in $\mathbb{r}^{n}$. these involve lie derivatives with respect to a prescribed smooth vector field. we construct both new eulerian and new semi-lagrangian approaches to the discretization of the lie derivatives in the context of a galerkin approximation based on discrete differential forms. details of implementation are discussed, as well as an application to the discretization of the eddy current equations in moving media.
the motivation for this project is to design a novel optical system for quasi - real time alignment of tracker detector elements used in high energy physics ( hep ) experiments .fox - murphy _ et.al ._ from oxford university reported their design of a frequency scanned interferometer ( fsi ) for precise alignment of the atlas inner detector .given the demonstrated need for improvements in detector performance , we plan to design an enhanced fsi system to be used for the alignment of tracker elements in the next generation of electron positron linear collider ( ilc ) detectors .current plans for future detectors require a spatial resolution for signals from a tracker detector , such as a silicon microstrip or silicon drift detector , to be approximately 7 - 10 . to achieve this required spatial resolution, the measurement precision of absolute distance changes of tracker elements in one dimension should be on the order of 1 .simultaneous measurements from hundreds of interferometers will be used to determine the 3-dimensional positions of the tracker elements .we describe here a demonstration fsi system built in the laboratory for initial feasibility studies .the main goal was to determine the potential accuracy of absolute distance measurements ( adm s ) that could be achieved under controlled conditions .secondary goals included estimating the effects of vibrations and studying error sources crucial to the absolute distance accuracy .a significant amount of research on adm s using wavelength scanning interferometers already exists . in one of the most comprehensive publications on thissubject , stone __ describe in detail a wavelength scanning heterodyne interferometer consisting of a system built around both a reference and a measurement interferometer , the measurement precisions of absolute distance ranging from 0.3 to 5 meters are 250 nm by averaging distance measurements from 80 independent scans .detectors for hep experiment must usually be operated remotely for safety reasons because of intensive radiation , high voltage or strong magnetic fields .in addition , precise tracking elements are typically surrounded by other detector components , making access difficult . for practical hep application of fsi , optical fibers for light delivery and returnare therefore necessary .we constructed a fsi demonstration system by employing a pair of single - mode optical fibers of approximately 1 meter length each , one for transporting the laser beam to the beam splitter and retroreflector and another for receiving return beams .a key issue for the optical fiber fsi is that the intensity of the return beams received by the optical fiber is very weak ; the natural geometrical efficiency is for a measurement distance of 0.5 meter . 
in our design, we use a gradient index lens ( grin lens ) to collimate the output beam from the optical fiber .we believe our work represents a significant advancement in the field of fsi in that high - precision adm s and vibration measurements are performed ( without a _ priori _ knowledge of vibration strengths and frequencies ) , using a tunable laser , an isolator , an off - the - shelf f - p , a fiber coupler , two single - mode optical fibers , an interferometer and novel fringe analysis and vibration extraction techniques .two new multiple - distance - measurement analysis techniques are presented , to improve precision and to extract the amplitude and frequency of vibrations .expected dispersion effects when a corner cube prism or a beamsplitter substrate lies in the interferometer beam path are confirmed , and observed results agree well with results from numerical simulation .when present , the dispersion effect has a significant impact on the absolute distance measurement .the limitations of our current fsi system are also discussed in the paper , and major uncertainties are estimated .the intensity of any two - beam interferometer can be expressed as where and are the intensities of the two combined beams , and are the phases . assuming the optical path lengths of the two beams are and , the phase difference in eq .( 1 ) is , where is the optical frequency of the laser beam , and c is the speed of light . for a fixed path interferometer , as the frequency of the laseris continuously scanned , the optical beams will constructively and destructively interfere , causing `` fringes '' .the number of fringes is where is the optical path difference between the two beams , and is the scanned frequency range .the optical path difference ( opd for absolute distance between beamsplitter and retroreflector ) can be determined by counting interference fringes while scanning the laser frequency .a schematic of the fsi system with a pair of optical fibers is shown in fig.1 .the light source is a new focus velocity 6308 tunable laser ( 665.1 nm 675.2 nm ) . a high - finesse ( ) thorlabs sa200 f- p is used to measure the frequency range scanned by the laser . the free spectral range ( fsr ) of two adjacent f - p peaksis 1.5 ghz , which corresponds to 0.002 nm .a faraday isolator was used to reject light reflected back into the lasing cavity .the laser beam was coupled into a single - mode optical fiber with a fiber coupler .data acquisition is based on a national instruments daq card capable of simultaneously sampling 4 channels at a rate of 5 ms / s / ch with a precision of 12-bits .omega thermistors with a tolerance of 0.02 k and a precision of 0.01 are used to monitor temperature .the apparatus is supported on a damped newport optical table . in order to reduce air flow and temperature fluctuations ,a transparent plastic box was constructed on top of the optical table .pvc pipes were installed to shield the volume of air surrounding the laser beam . inside the pvc pipes ,the typical standard deviation of 20 temperature measurements was about .temperature fluctuations were suppressed by a factor of approximately 100 by employing the plastic box and pvc pipes .the beam intensity coupled into the return optical fiber is very weak , requiring ultra - sensitive photodetectors for detection . considering the limited laser beam intensity and the need to split into many beams to serve a set of interferometers , it is vital to increase the geometrical efficiency . 
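to get a feeling for the numbers involved, the fringe count N = (opd)·Δν/c can be evaluated directly from the quoted tuning range of the laser, and the 1.5 ghz free spectral range of the f-p can be converted into the quoted 0.002 nm. the short sketch below is ours; the distance of 0.45 m is an example value chosen to match the order of magnitude of the distances measured later in the text.

# back-of-the-envelope numbers (ours) for the quoted laser and f-p
c = 2.998e8                       # speed of light, m/s
lam1, lam2 = 665.1e-9, 675.2e-9   # full tuning range of the laser, m
dnu = c / lam1 - c / lam2         # scanned frequency range, ~6.7e12 hz

distance = 0.45                   # example beamsplitter-retroreflector distance, m
opd = 2.0 * distance              # double-pass optical path difference
print("fringes over a full scan:", opd * dnu / c)                  # ~2.0e4 fringes

fsr_nu = 1.5e9                    # free spectral range of the f-p, hz
lam_mean = 0.5 * (lam1 + lam2)
print("fsr in wavelength [nm]:", 1e9 * fsr_nu * lam_mean**2 / c)   # ~0.002 nm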
to this end, a collimator is built by placing an optical fiber in a ferrule ( 1 mm diameter ) and gluing one end of the optical fiber to a grin lens .the grin lens is a 0.25 pitch lens with 0.46 numerical aperture , 1 mm diameter and 2.58 mm length which is optimized for a wavelength of 630 nm .the density of the outgoing beam from the optical fiber is increased by a factor of approximately 1000 by using a grin lens .the return beams are received by another optical fiber and amplified by a si femtowatt photoreceiver with a gain of .for a fsi system , drifts and vibrations occurring along the optical path during the scan will be magnified by a factor of , where is the average optical frequency of the laser beam and is the scanned frequency . for the full scan of our laser , .small vibrations and drift errors that have negligible effects for many optical applications may have a significant impact on a fsi system . a single - frequency vibration may be expressed as , where , and are the amplitude , frequency and phase of the vibration respectively . if is the start time of the scan , eq .( 2 ) can be re - written as /c \end{array } \eqno{(3)}\ ] ] if we approximate , the measured optical path difference may be expressed as \times \\ \sin[\pi f_{vib}(t+t_0)+\phi_{vib } ] \end{array } \eqno{(4)}\ ] ] where is the true optical path difference without vibration effects .if the path - averaged refractive index of ambient air is known , the measured distance is .if the measurement window size is fixed and the window used to measure a set of is sequentially shifted , the effects of the vibration will be evident .we use a set of distance measurements in one scan by successively shifting the fixed - length measurement window one f - p peak forward each time .the arithmetic average of all measured values in one scan is taken to be the measured distance of the scan ( although more sophisticated fitting methods can be used to extract the central value ) . for a large number of distance measurements , the vibration effects can be greatly suppressed .of course , statistical uncertainties from fringe and frequency determination , dominant in our current system , can also be reduced with multiple scans .averaging multiple measurements in one scan , however , provides similar precision improvement as averaging distance measurements from independent scans , and is faster , more efficient , and less susceptible to systematic errors from drift . in this way, we can improve the distance accuracy dramatically if there are no significant drift errors during one scan , caused , for example , by temperature variation .this multiple - distance - measurement technique is called slip measurement window with fixed size , shown in fig.2 .however , there is a trade off in that the thermal drift error is increased with the increase of because of the larger magnification factor for a smaller measurement window size . in order to extract the amplitude and frequency of the vibration , another multiple - distance - measurement technique called slip measurement window with fixed start pointis shown in fig.2 . 
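the magnification of vibrations can be reproduced with a few lines of simulation from the relations above: sweep the optical frequency, let the retroreflector oscillate, count fringes in sliding windows of fixed size, and convert each fringe count back into a distance. the sketch below is ours; the vibration amplitude, frequency, scan parameters and window length are illustrative values of the same order as those quoted elsewhere in the text, not the actual analysis code.

# toy simulation (ours) of the 'slip measurement window with fixed size' technique
import numpy as np

c = 2.998e8
nu0 = 4.474e14                     # optical frequency at the start of the scan, hz
scan_rate = 6.7e12 / 20.0          # hz of optical frequency per second (20 s full scan)
d0, a_vib, f_vib = 0.45, 0.3e-6, 3.2   # true distance [m], vibration amplitude [m], frequency [hz]

def opd(t):                        # optical path difference including the vibrating mirror
    return 2.0 * (d0 + a_vib * np.sin(2.0 * np.pi * f_vib * t))

def fringe_phase(t):               # interference phase in cycles: nu(t) * opd(t) / c
    return (nu0 + scan_rate * t) * opd(t) / c

t_start = np.linspace(0.0, 10.0, 2000)          # start times of 2000 sliding windows
t_win = 8.0                                     # fixed window length, s
fringes = fringe_phase(t_start + t_win) - fringe_phase(t_start)
dnu_win = scan_rate * t_win                     # frequency range covered by one window
d_meas = 0.5 * c * fringes / dnu_win            # distance extracted from each window
print("peak-to-peak spread of measured distances [um]:", 1e6 * np.ptp(d_meas))
# the vibration shows up magnified by roughly nu0 / dnu_win in the extracted distances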
in eq .( 3 ) , if is fixed , the measurement window size is enlarged one f - p peak for each shift , an oscillation of a set of measured values reflects the amplitude and frequency of vibration .this technique is not suitable for distance measurement because there always exists an initial bias term including which can not be determined accurately in our current system .the typical measurement residual versus the distance measurement number in one scan using the above technique is shown in fig.3(a ) , where the scanning rate was 0.5 nm / s and the sampling rate was 125 ks / s .measured distances minus their average value for 10 sequential scans are plotted versus number of measurements ( ) per scan in fig.3(b ) . the standard deviations ( rms ) of distance measurements for 10 sequential scansare plotted versus number of measurements ( ) per scan in fig.3(c ) .it can be seen that the distance errors decrease with an increase of .the rms of measured distances for 10 sequential scans is 1.6 if there is only one distance measurement per scan ( ) .if and the average value of 1200 distance measurements in each scan is considered as the final measured distance of the scan , the rms of the final measured distances for 10 scans is 41 nm for the distance of 449828.965 , the relative distance measurement precision is 91 ppb .some typical measurement residuals are plotted versus the number of distance measurements in one scan( ) for open box and closed box data with scanning rates of 2 nm / s and 0.5 nm / s in fig.4(a , b , c , d ) , respectively .the measured distance is approximately 10.4 cm .it can be seen that the slow fluctuations of multiple distance measurements for open box data are larger than that for closed box data .the standard deviation ( rms ) of measured distances for 10 sequential scans is approximately 1.5 if there is only one distance measurement per scan for closed box data . by using multiple - distance - measurement technique ,the distance measurement precisions for various closed box data with distances ranging from 10 cm to 70 cm collected in the past year are improved significantly , precisions of approximately 50 nanometers are demonstrated under laboratory conditions , as shown in table 1 .all measured precisions listed in the table 1 .are the rms of measured distances for 10 sequential scans .two fsi demonstration systems , air fsi and optical fiber fsi , are constructed for extensive tests of multiple - distance - measurement technique , air fsi means fsi with the laser beam transported entirely in the ambient atmosphere , optical fiber fsi represents fsi with the laser beam delivered to the interferometer and received back by single - mode optical fibers ..distance measurement precisions for various setups using the multiple - distance - measurement technique .[ cols="^,^,^,^,^ " , ] based on our studies , the slow fluctuations are reduced to a negligible level by using the plastic box and pvc pipes to suppress temperature fluctuations .the dominant error comes from the uncertainties of the interference fringes number determination ; the fringes uncertainties are uncorrelated for multiple distance measurements . 
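since the dominant fringe-counting errors are uncorrelated between windows, the gain from averaging should scale roughly with the square root of the number of measurements per scan. the two-line check below is ours; it confirms that the quoted 1.6 μm single-measurement spread, the 41 nm averaged spread, the 1200 measurements per scan and the 91 ppb relative precision are mutually consistent, reading the quoted distance as 449828.965 μm (our assumption about the unit).

import math
print(1.6e-6 / math.sqrt(1200))     # ~4.6e-8 m, i.e. ~46 nm, close to the observed 41 nm
print(41e-9 / 449828.965e-6)        # ~9.1e-8, i.e. the quoted 91 ppb relative precision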
in this case, averaging multiple distance measurements in one scan provides a similar precision improvement to averaging distance measurements from multiple independent scans , but is faster , more efficient and less susceptible to systematic errors from drift .but , for open box data , the slow fluctuations are dominant , on the order of few microns in our laboratory .the measurement precisions for single and multiple distance open - box measurements are comparable , which indicates that the slow fluctuations can not be adequately suppressed by using the multiple - distance - measurement technique .a dual - laser fsi system intended to cancel the drift error is currently under study in our laboratory ( to be described in a subsequent article ) . from fig.4(d ) , we observe periodic oscillation of the distance measurement residuals in one scan , the fitted frequency is hz for the scan .the frequency depends on the scanning rate , . from eq.(4 ) , it is clear that the amplitude of the vibration or oscillation pattern for multiple distance measurements depends on $ ] .if , are constant values , it depends on the size of the distance measurement window .subsequent investigation with a ccd camera trained on the laser output revealed that the apparent hz vibration during the 0.5 nm / s scan arose from the beam s centroid motion . because the centroid motion is highly reproducible ,we believe that the effect comes from motion of the internal hinged mirror in the laser used to scan its frequency .the measurable distance range is limited in our current optical fiber fsi demonstration system for several reasons . for a given scanning rate of 0.25 nm / s ,the produced interference fringes , estimated by , are approximately 26400 in a 40-second scan for a measured distance ( ) of 60 cm , that is fringes / s , where is the scanned frequency and is the speed of light .the currently used femtowatt photoreceiver has 3 db frequency bandwidth ranging from 30 - 750 hz , the transimpedance gain decreases quickly beyond 750 hz .there are two ways to extend the measurable distance range .one straightforward way is to extend the effective frequency bandwidth of the femtowatt photoreceiver ; the other way is to decrease the interference fringe rate by decreasing the laser scanning rate .there are two major drawbacks for the second way ;one is that larger slow fluctuations occur during longer scanning times ; the other is that the laser scanning is not stable enough to produce reliable interference fringes if the scanning rate is lower than 0.25 nm / s for our present tunable laser .in addition , another limitation to distance range is that the intensity of the return beam from the retroreflector decreases inverse - quadratically with range .in order to test the vibration measurement technique , a piezoelectric transducer ( pzt ) was employed to produce vibrations of the retroreflector . for instance , the frequency of the controlled vibration source was set to hz with amplitude of . 
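the bandwidth limitation can be made explicit from the numbers above: 26400 fringes in a 40-second scan correspond to a fringe rate already close to the 750 hz upper edge of the photoreceiver's 3 db band, which is what caps the measurable distance at a given scanning rate. a short check (ours):

fringes, scan_time = 26400.0, 40.0
print(fringes / scan_time)   # ~660 fringes per second, vs. the 30-750 hz photoreceiver band
# doubling the distance or the scanning rate would push the fringe rate beyond the band edge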
for distance measurements in one scan ,the magnification factor for each distance measurement depends on the scanned frequency of the measurement window , , where , is the average frequency of the laser beam in the measurement window , scanned frequency ghz , where i runs from 1 to , shown in fig.5(a ) .the distance measurement residuals for 2000 distance measurements in the scan are shown in fig.5(b ) , the oscillation of the measurement residuals reflect the vibration of the retroreflector .since the vibration is magnified by a factor of for each distance measurement , the corrected measurement residuals are measurement residuals divided by the corresponding magnification factors , shown in fig.5(c ) . the extracted vibration frequencies and amplitudes using this technique are hz , , respectively , in good agreement with expectations .another demonstration was made for the same vibration frequency , but with an amplitude of only nanometers .the magnification factors , distance measurement residuals and corrected measurement residuals for 2000 measurements in one scan are shown in fig.6(a ) , fig.6(b ) and fig.6(c ) , respectively .the extracted vibration frequencies and amplitudes using this technique are hz , nanometers .in addition , vibration frequencies at 0.1 , 0.5 , 1.0 , 5 , 10 , 20 , 50 , 100 hz with controlled vibration amplitudes ranging from 9.5 nanometers to 400 nanometers were studied extensively using our current fsi system .the measured vibrations and expected vibrations all agree well within the 10 - 15% level for amplitudes , 1 - 2% for frequencies , where we are limited by uncertainties in the expectations .vibration frequencies far below 0.1 hz can be regarded as slow fluctuations , which can not be suppressed by the above analysis techniques . for comparison ,nanometer vibration measurement by a self - aligned optical feedback vibrometry technique has been reported .the vibrometry technique is able to measure vibration frequencies ranging from 20 hz to 20 khz with minimal measurable vibration amplitude of 1 nm . our second multiple - distance - measurement technique demonstrated above has capability to measure vibration frequencies ranging from 0.1 hz to 100 hz with minimal amplitude on the level of several nanometers , without a _ priori _ knowledge of the vibration strengths or frequencies .dispersive elements such as a beamsplitter , corner cube prism , etc . in the interferometer can create an apparent offset in measured distance for an fsi system , since the optical path length of the dispersive element changes during the scan . the small opd change caused by dispersionis magnified by a factor of and has a significant effect on the absolute distance measurement for the fsi system . the measured optical path difference be expressed as where and refer to the opd at times t and t0 , respectively , and are the wavelength of the laser beam at times t and t0 , c is the speed of light , d1 and d2 are true geometrical distances in the air and in the corner cube prism , and are the refractive index of ambient atmosphere and the refractive index of the corner cube prism for , respectively . 
the measured distance , where is the average refractive index around the optical path .the sellmeier formula for dispersion in crown glass ( bk7) can be written as , where , the beam wavelength is in unit of microns , , , , , , .if we use the first multiple - distance - measurement technique described above to make 2000 distance measurements for one typical scan , where the corner cube prism is used as retroreflector , we observe a highly reproducible drift in measured distance , as shown in fig.7 , where the fitted distance drift is microns for one typical scan using a straight line fit .however , there is no apparent drift if we replace the corner cube prism by the hollow retroreflector .numerical simulations have been carried out using eq.(5 ) and eq.(6 ) to understand the above phenomena .for instance , consider the case d1 = 20.97 cm and d2 = 1.86 cm ( the uncertainty of d2 is 0.06 cm ) , where the first and the last measured distances among 2000 sequential distance measurements are denoted and , respectively . using the sellmeier equation ( eq.(6 ) ) for modeling the corner cube prism material ( bk7 ) dispersion , we expect = 373.876 microns and = 367.707 microns .the difference between and is microns which agrees well with our observed microns drift over 2000 measured distances .the measured distance shift and drift strongly depend on d2 , but are insensitive to d1 .a change of 1 cm in d1 leads to a 3-nanometer distance shift , but the same change in d2 leads to a 200-micron distance shift . if a beamsplitter is oriented with its reflecting side facing the laser beam , then there is an additional dispersive distance shift .we have verified this effect with 1-mm and a 5-mm beam splitters .when we insert an additional beamsplitter with 1 mm thickness between the retroreflector and the original beamsplitter in the optical fiber fsi system , we observe a 500 microns shift on measured distance if is fixed consistent with the numerical simulation result . for the 5-mm beam splitter ( the measured thickness of the beam splitteris mm ) , the first 20 scans were performed with the beamsplitter s anti - reflecting surface facing the optical fibers and the second 20 scans with the reflecting surface facing the optical fibers .the expected drifts ( ) for the first and the second 20 scans from the dispersion effect are 0 and microns , respectively .the measured drifts by averaging measurements from 20 sequential scans are microns and microns , respectively .the measured values agree well with expectations .in addition , the dispersion effect from air is also estimated by using numerical simulation .the expected drift ( ) from air dispersion is approximately -0.07 microns for an optical path of 50 cm in air , this effect can not be detected for our current fsi system .however , it could be verified by using a fsi with a vacuum tube surrounding the laser beam ; the measured distance with air in the tube would be approximately 4 microns larger than for an evacuated tube . 
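the size of the glass dispersion effect can be estimated from the sellmeier equation alone. the coefficients used below are the standard published schott bk7 values, inserted here as our assumption since they are not reproduced in the text; the quantity that matters for a frequency-scanned measurement is essentially the group index n_g = n − λ dn/dλ of the prism glass, and (n_g − n) times the glass path is of the same order as the few-hundred-micron offsets quoted above.

# refractive index and group index of bk7 over the scan range (ours)
import math

B = (1.03961212, 0.231792344, 1.01046945)        # schott bk7 sellmeier coefficients (assumed)
C = (0.00600069867, 0.0200179144, 103.560653)    # in micron^2

def n_bk7(lam_um):
    lam2 = lam_um * lam_um
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

def n_group(lam_um, h=1.0e-4):
    dndl = (n_bk7(lam_um + h) - n_bk7(lam_um - h)) / (2.0 * h)
    return n_bk7(lam_um) - lam_um * dndl

for lam in (0.6651, 0.6702, 0.6752):             # scan end points and centre, microns
    print(lam, n_bk7(lam), n_group(lam))
# n is about 1.514 while n_g is about 1.534 over the scan range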
in summary , dispersion effects can have a significant impact on absolute distance measurements , but can be minimized with care for elements placed in the interferometer or corrected for , once any necessary dispersive elements in the interferometer are understood .some major error sources are estimated in the following ; \1 ) error from uncertainties of fringe and scanned frequency determination .the measurement precision of the distance ( the error due to the air s refractive index uncertainty is considered separately below ) is given by . where , , , , , are measurement distance , fringe numbers , scanned frequency and their corresponding errors . for a typical scanning rate of 0.5 nm / s with a 10 nm scan range , the full scan time is 20 seconds .the total number of samples for one scan is 2.5 ms at a sampling rate of 125 ks / s .there is about a 4 sample ambiguity in fringe peak and valley position due to a vanishing slope and the limitation of the 12-bit sampling precision .however , there is a much smaller uncertainty for the f - p peaks because of their sharpness .thus , the estimated uncertainty is for one full scan for a magnification factor .if the number of distance measurements , the distance measurement window is smaller , the corresponding magnification factor is , where , is the average frequency of the laser beam , ghz .one obtains , .\2 ) error from vibrations . the detected amplitude and frequency for vibration ( without controlled vibration source )are about 0.3 and 3.2 hz .the corresponding time for sequential distance measurements is 5.3 seconds .a rough estimation of the resulting error gives for a given measured distance meters .\3 ) error from thermal drift .the refractive index of air depends on air temperature , humidity and pressure ( fluctuations of humidity and pressure have negligible effects on distance measurements for the 20-second scan ) .temperature fluctuations are well controlled down to about (rms ) in our laboratory by the plastic box on the optical table and the pipe shielding the volume of air near the laser beam . fora room temperature of 21 , an air temperature change of will result in a 0.9 ppm change of air refractive index .for a temperature variation of in the pipe , distance measurements , the estimated error will be , where the magnification factor .the total error from the above sources , when added in quadrature , is , with the major error sources arising from the uncertainty of fringe determination and the thermal drift .the estimated relative error agrees well with measured relative spreads of in real data for measured distance of about 0.45 meters . besides the above error sources, other sources can contribute to systematic bias in the absolute differential distance measurement .the major systematic bias comes from the uncertainty in the fsr of the f - p used to determine the scanned frequency range .the relative error would be if the fsr were calibrated by a wavemeter with a precision of .a wavemeter of this precision was not available for the measurements described here .the systematic bias from the multiple - distance - measurement technique was also estimated by changing the starting point of the measurement window , the window size and the number of measurements , the uncertainties typically range from 10 to 50 nanometers . 
systematic bias from uncertainties in temperature , air humidity and barometric pressure scalesare estimated to be negligible .an optical fiber fsi system was constructed to make high - precision absolute distance and vibration measurements .a design of the optical fiber with grin lens was presented which improves the geometrical efficiency significantly .two new multiple - distance - measurement analysis techniques were presented to improve distance precision and to extract the amplitude and frequency of vibrations .absolute distance measurement precisions of approximately 50 nm for distances ranging from 10 cm to 70 cm under laboratory conditions were achieved using the first analysis technique . the second analysis technique measures vibration frequencies ranging from 0.1 hz to 100 hz with minimal amplitude of a few nanometers .we verified an expected dispersion effect and confirmed its importance when dispersive elements are placed in the interferometer .major error sources were estimated , and the observed errors were found to be in good agreement with expectation .fox - murphy , d.f .howell , r.b .nickerson , a.r .weidberg , `` frequency scanned interferometry(fsi ) : the basis of a survey system for atlas using fast automated remote interferometry '' , nucl .inst . meth .a383 , 229 - 237(1996 ) american linear collider working group(161 authors ) , `` linear collider physics , resource book for snowmass 2001 '' , prepared for the department of energy under contract number de - ac03 - 76sf00515 by stanford linear collider center , stanford university , stanford , california .hep - ex/0106058 , slac - r-570 299 - 423(2001 ) p. a. coe , `` an investigation of frequency scanning interferometry for the alignment of the atlas semiconductor tracker '' , doctoral thesis , st .peter s college , university of oxford , keble road , oxford , united kingdom , 1 - 238(2001 )
in this paper , we report high - precision absolute distance and vibration measurements performed with frequency scanned interferometry using a pair of single - mode optical fibers . the absolute distance was determined by counting the interference fringes produced while scanning the laser frequency . a high - finesse fabry - perot interferometer ( f - p ) was used to determine the frequency changes during scanning . two multiple - distance - measurement analysis techniques were developed to improve the distance precision and to extract the amplitude and frequency of vibrations . under laboratory conditions , a measurement precision of 50 nm was achieved for absolute distances ranging from 0.1 meters to 0.7 meters by using the first multiple - distance - measurement technique . the second analysis technique has the capability to measure vibration frequencies ranging from 0.1 hz to 100 hz with amplitudes as small as a few nanometers , without a priori knowledge .
the determination of the geometric parameters of extrasolar planets has an essential role in inferring their densities and hence their compositions , masses and ages . this information leads to refinements of the models of planetary systems and yields important constraints on planet formation . during the last several years there has been a sharp rise in the detections of transiting extra - solar planets ( teps ) , mainly by the wide - field photometric variability surveys : ( i ) ground - based observations such as superwasp , hatnet , ogle - iii , tres , etc . ; ( ii ) space missions such as corot and kepler . the kepler mission produced a sharp increase in the number of exoplanet candidates , dozens of which have already been confirmed . the space - based missions , especially kepler , have the photometric ability to detect even transiting terrestrial - size planets . the recently increasing number of planet - candidate discoveries and the increasing precision of the observations allow the investigation of fine effects such as : ( a ) rotational and orbital synchronization and alignment ; ( b ) zonal flows and violent atmospheric dynamics due to the large temperature contrast between the day - sides and night - sides of the planets ; ( c ) departures from sphericity of the planets and spin precession ; ( d ) rings and satellites ; ( e ) stellar spots ; ( f ) dependence of the derived planetary radius on the limb - darkening coefficients ; ( g ) irradiation by the parent star , planet transmission spectra , atmospheric lensing due to atmospheric refraction , rayleigh scattering , cloud scattering , refraction and molecular absorption of starlight in the planet atmosphere , etc . the study of such fine effects requires very precise methods for the determination of the parameters of the planetary systems from the observational data . recently , we established that the synthetic light curves generated by the widely used codes for planet transits deviated from the expected smooth shape . this motivated us to search for a new approach to the direct problem solution of the planet transits . we managed to realize this idea successfully for the case of a particular orbital inclination and a linear limb - darkening law . this paper presents a continuation of the new approach for arbitrary orbital inclinations and arbitrary limb - darkening laws . the global parameters of the configurations of teps are different from those of eclipsing binary systems . the main geometric difference is that the radii of the two components of teps are very different , while those of ebs are comparable . as a result , almost all models based on numerical integration over the stellar surfaces of the components give numerical errors , especially around the transit center . overcoming this problem required specific approaches and models for the study of teps . the first solution of the direct problem of the planet transits is that of mandel & agol ( 2002 ) . they derived analytical formulae describing the light decrease due to the covering of the stellar disk by a dark ( opaque ) planet in the cases of quadratic and nonlinear limb - darkening laws . the formulae of this solution ( hereafter the m&a solution ) contain several types of special functions ( the beta function , appell's hypergeometric function , the gauss hypergeometric function and the complete elliptic integral of the third kind ) .
to generate synthetic transits created idl and fortran codes occultsmall ( for small planet ) , occultquad ( for quadratic limb - darkening law ) and occultnl ( for nonlinear limb - darkening laws ) that are based on numerical calculations of the special function values .these codes are widely used by many investigators for analysis of observed transits and improved later by different authors . in the meantime analytical solution for the particular case of total transit and uniform stellar disk ( i.e. neglecting the limb - darkening effect ) , made calculation of the secondary planet eclipses while studied the problem of eccentric orbits .the second direct problem solution for the planet transits was made by .he derived analytical formulae for the computation the light curves of planet transits for arbitrary limb - darkening laws .this approach is similar to that of the kopal s functions and the derived formulae contain different special functions ( elliptic integrals of the first , second and third order ) .most of the codes for inverse problem solutions of planet transits are based on the solution .for instance , the recent packages tap and autokep . estimated the fit quality of the inverse problem corresponding to the solution .it should be noted that the inverse problem solution of the planet transits is not a trivial task .the known codes for stellar eclipses are not applicable for the analysis of most planet transits due to the non - effective convergence of the differential corrections in cases of observational precisions poorer than 1/10 the depth of planet transit .ebop is the only model for ebs , which heavily modified version jktebop can be applied successfully to teps .recently , solutions of the whole inverse problem based on simultaneous modeling of photometric and spectral data of exoplanets performed using the markov - chain monte carlo ( mcmc ) code were proposed ( * ? ? ?* ; * ? ? ?* etc . ) .but their subroutines for fitting of the transits also use the solution for quadratic limb - darkening law .for instance , the subroutine exofast_occultquad of the package exofast is a new improved version of occultquad .an opportunity to generate synthetic transit light curve as well as to search for fit to own observational data is provided from the website exoplanet transit database ( etd ) .it is based on the occultsmall routine of the solution that uses the simplification of the planet trajectory as a straight line over the stellar disk .let s consider configuration from a spherical planet with radius orbiting a spherical star with radius on circle orbit with radius , period and initial epoch .let s the line - of - sight is inclined at an angle to the orbital plane of the planet .usually , it is assumed that the limb - darkening of the main - sequence stars may be represented by the linear function \ ] ] where is the light intensity at the center of the stellar disk depending on the stellar temperature , and is the angle between the normal to the current point of the stellar surface and the line of sight . 
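written out explicitly , the standard form of the linear limb - darkening law described above is

$ I(\mu) \;=\; I(0)\,\bigl[\,1 - u\,(1-\mu)\,\bigr] , \qquad \mu = \cos\gamma , $

where $I(0)$ is the intensity at the center of the stellar disk , $u$ is the linear limb - darkening coefficient and $\gamma$ is the angle between the normal to the current point of the stellar surface and the line of sight ; the symbol names are ours , since the original notation did not survive .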
found that the more accurate limb - darkening functions are the quadratic law \ ] ] and `` nonlinear '' law \ ] ] which is a taylor series in to fourth order in 1/2 .square - root law is proposed by \ ] ] while proposed logarithmic law for early stars .\ ] ] our solution can be applied for arbitrary limb - darkening law where is an arbitrary function of .the possibility to use an arbitrary limb - darkening law is one of the main advantages of our approach .the luminosity of the planetary system at phase out - of - transit is where is the stellar luminosity and is the planet luminosity .the phase is calculated by the period and the initial epoch .the planet luminosity is variable out of the eclipse ( because more of the day - side of the planet is visible after the occultation while more of the night - side of the planet is visible before the transit ) . due to the relatively small size and low temperature of the planet, it can be assumed that its disk is uniform and its luminosity does not change during the transit , i.e. where depends on the planet temperature .the luminosity during the transit is where is the light decrease due to the covering of the star by the planet .it might be expressed in the form where the integration is on the stellar area covered by the planet .hence , the solution of the direct problem for the planet transit is reduced to a calculation of a surface integral .it is not a trivial task due to the nonuniform - illuminated stellar disk .if we assume the out - of - transit flux to be =1 , then its decreasing during the transit is described by the expression for the next considerations we use coordinate system whose origin coincides with the stellar center .the axis is along the line - of - sight and the plane coincides with the visible plane .we choose the axis to be along the projection of the normal to the orbit on the visible plane .taking into account that the stellar isolines of equal light intensity are concentric circles with radius we may calculate the light decrease as a sum ( integral ) of the contributions of differential uniformly - illuminated arcs with central angles 2 and area ( fig . [ fig01 ] ) . in this waywe transform the surface integral ( [ eq11 ] ) to a linear one where the integrand of our main equation ( [ eq13 ] ) is different from that of the main equation ( 2 ) of the solution . as a result , the methods of numerical calculation of these integrals are different .the integration limits and are the extremal radii of the stellar isolines that are covered by the planet at orbital phase .these limits depend on the configuration parameters .it is appropriate to assume the separation as a size unit and to work with dimensionless quantities : relative radius of the planet ; relative radius of the star ; relative radius of the stellar isoline .then the equation for the light decrease during the transit can be rewritten in the following form : }\ ] ] where ^ 4\ ] ] assuming the center of the transit to be at phase 0.0 we consider only the phase interval ( 0 , 0.5 ) because in the case of spherical star and planet the transit light curve in the range ( -0.5 , 0 ) is symmetric to that in the range ( 0 , 0.5 ) . 
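for concreteness , the limb - darkening laws listed above can be written as small functions of mu ; the functional forms below are the standard ones ( linear , quadratic , the four - parameter `` nonlinear '' law , square - root and logarithmic ) , the last function stands for the arbitrary user - supplied law that the new approach admits , and the coefficient values in the example call are arbitrary placeholders rather than fitted values .

```python
import numpy as np

# Intensity normalised to the disk-centre value I(0); mu = cos(gamma).

def linear(mu, u):
    return 1.0 - u * (1.0 - mu)

def quadratic(mu, a, b):
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

def nonlinear(mu, c1, c2, c3, c4):
    # Four-parameter law: an expansion in powers of mu**(1/2) to fourth order.
    return 1.0 - sum(c * (1.0 - mu ** (k / 2.0))
                     for k, c in enumerate((c1, c2, c3, c4), start=1))

def square_root(mu, c, d):
    return 1.0 - c * (1.0 - mu) - d * (1.0 - np.sqrt(mu))

def logarithmic(mu, e, f):
    return 1.0 - e * (1.0 - mu) - f * mu * np.log(mu)

def arbitrary(mu, func):
    """Any user-supplied function of mu -- the case the new approach allows."""
    return func(mu)

# example: intensity profile across the disk for a quadratic law
mu = np.linspace(0.01, 1.0, 5)
print(quadratic(mu, a=0.30, b=0.20))   # placeholder coefficients
```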
for a circular orbit the coordinates ( in units of the orbital separation ) of the planet center at a given phase , and the coordinates of the intersection points of a stellar brightness isoline with the planet limb ( fig . [ fig01 ] ) , follow from simple geometry after introducing the appropriate designations . further we derive the expressions for the arc angle and for the integration limits in equation ( [ eq16 ] ) for different combinations of the geometric parameters and at different phases . three ranges of the orbital inclination have to be distinguished .

the transit is partial at all phases if the orbital inclination is in the first range . the phases of the outer contacts between the star and the planet mark the beginning and the end of the transit , and the partial transit occupies the whole phase range from the transit center to the outer contact . for the sake of brevity we do not write the phase dependence explicitly in what follows .
( a.1 ) if the first condition is fulfilled ( fig . [ fig01 ] ) , the light decrease is calculated by ( [ eq16 ] ) with the limits and the arc - angle expression given in table [ tab01 ] ( case a.1 ) .
( a.2 ) if the second condition is fulfilled , the integral in ( [ eq16 ] ) can be presented as a sum of two integrals ( fig . [ fig02 ] ) whose limits and arc - angle expressions are given in table [ tab01 ] ( case a.2 ) .
( a.3 ) if the third condition is fulfilled , the integral ( fig . [ fig03 ] ) is a sum of three integrals whose limits and arc - angle expressions are given in table [ tab01 ] ( case a.3 ) .

if the orbital inclination is in the second range , the transit develops from partial to total . in this case the planet does not cover the star center at any phase ( fig . [ fig04 ] ) . the phases of the inner contacts between the star and the planet occur during the planet entrance and during the planet exit . in the outer phase range the transit is partial and the geometry is similar to case a ; in the phase range around the transit center the transit is total .
( b.1 ) if the corresponding condition is fulfilled , the integral is calculated by ( [ eq16 ] ) with the limits and the arc - angle expression given in table [ tab01 ] ( case b.1 ) .
( b.2 ) otherwise the integral is a sum of three integrals whose attributes are given in table [ tab01 ] ( case b.2 ) .

if the orbital inclination is in the third range , the transit develops from partial to total and finally the planet covers the stellar center . the phases at which the planet limb touches the stellar center lie before and after the transit center . in the phase range closest to the contacts the transit is partial and the geometry is similar to case a : if the first condition is fulfilled , the integral is calculated by ( [ eq16 ] ) with the limits and the arc - angle expression given in table [ tab01 ] ( case a.1 ) ; if the second condition is fulfilled , the integral is presented as a sum of two integrals whose attributes are the same as those of case a.2 ( table [ tab01 ] ) ; if the third condition is fulfilled , the integral is presented as a sum of three integrals whose attributes are the same as those of case a.3 ( table [ tab01 ] ) . in the intermediate phase range the transit is total , outside the stellar center , and the geometry is similar to case b : if the corresponding condition is fulfilled , the integral is calculated by ( [ eq16 ] ) with the limits and the arc - angle expression given in table [ tab01 ] ( case b.1 ) ; otherwise the integral is presented as a sum of three integrals whose attributes are the same as those of case b.2 ( table [ tab01 ] ) . in the phase range around the transit center the planet covers the stellar center and there are three subcases , depending on the characteristic phase defined by the corresponding condition .
( c.1 ) in the first subcase , within the corresponding phase range the integral is presented as a sum of six integrals
( fig . [ fig06 ] ) whose attributes are given in table [ tab01 ] ( case c.1 ) .
( c.2 ) in the second subcase , within the corresponding phase range the integral is presented as a sum of six integrals ( fig . [ fig07 ] ) whose attributes are given in table [ tab01 ] ( case c.2 ) .
( c.3 ) in the third subcase , the condition is fulfilled in the whole phase range around the transit center and the integral is presented as a sum of six integrals ( fig . [ fig06 ] ) whose attributes are the same as those of case c.1 .

finally , it should be noted that for the arbitrary limb - darkening law ( [ eq06 ] ) the stellar luminosity l is given by an integral that has an analytical solution for all known limb - darkening functions . in particular , for the widely used limb - darkening laws ( equations ( [ eq01 ] ) to ( [ eq05 ] ) ) the stellar luminosity is given by closed - form expressions . the integral in equation ( [ eq16 ] ) , however , can not be solved analytically , and that is why we had to carry out a numerical solution of this integral . for this reason we wrote the code tac - maker ( transit analytical curve ) , whose input parameters are :
* radius of the orbit ;
* period and initial epoch ;
* radius of the star ;
* radius of the planet ;
* orbital inclination ;
* temperature of the star ;
* temperature of the planet ;
* coefficients of the limb - darkening ;
* step in phase ;
* parameter of precision of the numerical calculation of the integrals .
the code tac - maker provides the possibility to choose the limb - darkening law from a list of the known widespread functions ( equations 1 - 5 ) or to supply an arbitrary function . this is a significant advantage of the proposed approach , as even now the accuracy of the most used quadratic limb - darkening law is worse than the photometric precision achieved by kepler for large planets . moreover , the code tac - maker allows one to obtain the stellar limb - darkening coefficients from the transit solution and to compare them with the theoretical values calculated for different temperatures , surface gravities , metal abundances and micro - turbulence velocities . the code is written in the python 2.7 language with a graphical user interface . at the very beginning the code checks which condition ( case a , case b , or case c ) is satisfied for the given combination of configuration parameters ( stellar and planet radii and orbital inclination ) . after that the code calculates the characteristic phases for the respective case . further , the code performs the numerical calculation of the integral ( [ eq16 ] ) for the current phase and the chosen limb - darkening function using the scipy package . finally , the code repeats the procedure for each phase of the corresponding phase ranges . the output is written to a data file ( phase , flux ) . the code allows one to search for solutions of observed transits by trial and error , varying the input parameters . the input data file may contain magnitudes or fluxes , versus time or phase . an estimate of the fit quality is the calculated value of chi - squared . moreover , the two plots of the current solution , showing the observational data with the synthetic transit as well as the corresponding phase distribution of the residuals , allow fast finding of a good fit .
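as a concrete illustration of the reduction of the surface integral to a one - dimensional integral over the stellar brightness isolines , the sketch below evaluates the fractional light loss at a single projected star - planet separation with scipy , for a quadratic limb - darkening law . it is a schematic re - implementation of the idea rather than the tac - maker code itself : the half - angle of the covered arc of each isoline follows from the law of cosines , all lengths are in units of the stellar radius , and the parameter values are illustrative .

```python
import numpy as np
from scipy.integrate import quad

def intensity(r, a=0.3, b=0.2):
    """Quadratic limb darkening; r = isoline radius in units of the stellar radius."""
    mu = np.sqrt(max(1.0 - r * r, 0.0))
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

def half_angle(r, d, p):
    """Half-angle of the arc of the isoline of radius r covered by a planet of
    radius p whose centre projects at distance d from the stellar centre."""
    if r <= p - d:                 # isoline entirely inside the planet disk
        return np.pi
    if r <= d - p or r >= d + p:   # isoline not touched by the planet
        return 0.0
    cos_alpha = (d * d + r * r - p * p) / (2.0 * d * r)
    return np.arccos(np.clip(cos_alpha, -1.0, 1.0))

def flux_decrease(d, p):
    """Fractional loss of stellar light at projected separation d."""
    covered, _ = quad(lambda r: 2.0 * half_angle(r, d, p) * intensity(r) * r,
                      max(d - p, 0.0), min(d + p, 1.0))
    total, _ = quad(lambda r: 2.0 * np.pi * intensity(r) * r, 0.0, 1.0)
    return covered / total

# planet of 0.1 stellar radii at three projected separations (placeholder values)
for d in (0.9, 0.5, 0.0):
    print(f"d = {d:3.1f}  delta F = {flux_decrease(d, 0.1):.5f}")
```

repeating the call for the projected separation at each orbital phase yields the synthetic transit curve , in the spirit of equation ( [ eq16 ] ) .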
to validate our approach we used comparison with the wide - spread solution , particularly we compared the synthetic light curves generated by the code tac - maker and those produced by the code occultnl for the same configuration parameters and the same limb - darkening law .for this purpose we applied the freely available version of the last code without any changes , particularly with its default precision , while our code worked with the default precision of the scipy package for the numerical calculations of the integrals .we established that for linear limb - darkening law the two synthetic transits coincide ( fig .[ fig08 ] , top ) . however , the detailed review reveals the meandering course of the solution around the smooth course of the tac - maker transit curve ( fig .[ fig08 ] , second panel ) .the phase derivatives of the fluxes ( fig .[ fig08 ] , third panel ) exhibit more clearly which one of the two solutions is precise : the derivative of the tac - maker solution has almost linear course while that of the solution reveals plantigrade shape .this result allows us to assume that our solution of the direct problem for the planet transit in the case of linear limb - darkening law is more accurate than that of . for more detailed comparison of the two approaches we analyzed the differences ( residuals ) between the flux values of the tac - maker solution and the solution for the same configuration parameters and phases .we established that they depend on different parameters ( limb - darkening coefficients , inclination , phase , planet radius , planet temperature ) ., , , , k , ._ top panel _: the coincidence of the two solutions for the linear limb - darkening law and moderate precision of the method ; _ second panel _ : the difference of the two solutions for small part of the transit in big scale ; _ third panel _ : the first derivatives of the two solutions for small part of the transit in big scale ; _ bottom panel _ : the residuals for the chosen part of the transit in big scale . ] for linear limb - darkening law the residuals vary with the phase by oscillating way around level 0 ( fig . [ fig09 ] )which analysis led us to the following conclusions .( a ) : : the frequencies and amplitudes of the oscillating residuals depend on the limb - darkening coefficients ( fig .[ fig09 ] , top ) .the residuals are zero for that means that the solution is precise for uniform star .+ and different linear limb - darkening coefficients ; _ bottom _ for different orbital inclinations and ( all residuals oscillate around level 0 but are shifted vertically for a good visibility ) . 
] ( b ) : : the frequencies of the residual oscillations decrease to the central part of the transit for all orbital inclinations , especially for small inclinations ( fig .[ fig09 ] , bottom ) .( c ) : : the amplitudes of the residuals are bigger for partial transits than for total ones ( fig .[ fig09 ] , bottom ) .( d ) : : the residual values increase with the planet radius ( as it should be expected ) .we present comparison with those nonlinear limb - darkening laws which are available in the code occultnl .( 1 ) : : the comparison of the synthetic light curves generated for the same configuration parameters and quadratic limb - darkening law revealed that those produced by the code occultnl exhibited ( fig .[ fig10 ] ) : ( i ) small - amplitude oscillations similar to those for the linear limb - darkening law ; ( ii ) the fluxes into the transit are systematically smaller than ours ( the underestimation increases with the coefficient of the quadratic term ) .( 2 ) : : the comparison of the synthetic light curves generated for the same configuration parameters and squared root limb - darkening law revealed that those produced by the code occultnl suffered from the same disadvantages as those for quadratic limb - darkening law ( fig .[ fig10 ] ) . ; _ bottom _ panel : residuals for the code tac - maker corresponding to different values of . ] the detailed comparison ( validation ) of our approach with the wide - spread method demonstrated that the difference between them did not exceed for the considered three types limb - darkening laws .this leads to the conclusion that for variety of model parameters the new approach gives reasonable and expected synthetic transits which practically coincide with those of the solution .there is a possibility to increase the precision of the synthetic transits generated by the code occultnl ( e. agol , private communication ) .its convergence criterion is the maximum change to be less than the product of two multipliers , the factor ( is an integer ) and the transit depth .we established that the decreasing of from its default value leads to the following effects ( fig .[ fig10 ] , top and middle panels ) : ( i ) decreasing of the amplitudes of the oscillating residuals ; ( ii ) increasing of their frequencies ; ( iii ) decreasing of the underestimation of the fluxes for the whole transit . on the other hand the default value of is ( as it is defined for the package scipy ) .its increasing leads to some small imperfections of the numerical calculations made by our code ( fig . [ fig10 ] , bottom ) . the two considered approaches of the direct problem solution lead to different definitions of the precision parameters of the corresponding codes .therefore , it is not reasonable to compare the precision of the generated synthetic transits for equal values of the parameters and .though , we established that the two methods allow one to reach the desired ( practically arbitrary ) precision of calculations by appropriate choice of the corresponding precision parameter . moreover , for the best precisions , the differences between the transit fluxes generated with the two codes are below .the computational speed is another important characteristic of the codes for transit synthesis , especially when they are used for the solution of the corresponding inverse problem . 
in principle, this speed depends on two parameters : time precision ( step in phase ) and space precision ( size of the differential element of the integrals ) .the thorough comparison of the computational speeds of different codes for the solution of the same problem requires these codes to be written in the same languages , to be run on the same computers and for the equal model parameters .that is why it is not reasonable to compare the computational speed of our code with the others .nevertheless , we made some tests which revealed the following results .( a ) : : the reducing of ( i.e. the increasing of the precision of the synthetic transit ) by one order leads to 3 - 5 times increase of the computational time while the reducing of from to ( with five orders ) requires around 270 times longer computational time for the code occultnl . (b ) : : the decreasing of from to ( with seven orders ) requires around 15 times longer computational time .( c ) : : the formal comparison of the absolute computational speeds of the python code tac - maker and idl code occultnl revealed that our code is slightly faster for high - precision calculations while for the low - precision calculations the code occultnl is faster than the code tac - maker . from these results as well as from the consideration that each idl code is faster than its python version we conclude that the code tac - maker possesses good computational speed for the high - precision calculations .( d ) : : the new subroutine exofast_occultquad is around 2 orders faster than its progenitor occultquad .moreover , we established that exofast_occultquad ( with idl , fortran and python versions ) is considerably improved version of occultquad not only concerning the computational speed but also concerning the precision ( we had found some bugs of occultquad ) . although the computational speed of exofast_occultquad is higher than those of tac - maker and occultnl , it should be remembered that exofast_occultquad can produce transits only for quadratic limb - darkening law .the previous transit solutions consider the planet as a black lens and the corresponding codes do not fit the planet temperature . as a resultonly a formal parameter , the equilibrium temperature of the planet depending on the stellar heating ( i.e. , on stellar temperature and planet distance ) , can be calculated ( out of the procedure of transit modelling ) .the increasing precision of the observations raises the problem of the planet temperature , i.e. to study the effect of the planet temperature on the transit and to search for a possibility to determine this physical parameter from the observations .the new approach allowed such tests to be directly carried out because is an input parameter of its own ( unlike the previous methods ) . as a result , we established that increasing of the planet temperature from 0 k to 1000 k causes the decreasing of the transit depth up to while the increasing of the planet temperature from 0 k to 2000 k leads to shallower transit up to .the contribution of the planet temperature on the transit depth increases both with the ratio and limb - darkening coefficients .moreover , it rapidly increases with wavelength .the obtained estimations revealed that the effect of the planet temperature is lower than the precision of the present optical photometric observations , even than that of the _ kepler _ mission .hence , the determination of the real planet temperature from the observed transit in optics is postponed for the near future . 
however , the sensibility of the observations at longer wavelengths to the planet emission is higher .recently , there have been reports that the observed ir fluxes of some hot jupiter systems during occultation are higher than those corresponding to their equilibrium temperatures .these results imply that the determination of the planet temperature from the observed transits is forthcoming . until such observational precisionis reached the code tac - maker could be used for theoretical investigations of the effects of different thermal processes ( as ohmic heating , tidal heating , internal energy sources , etc . ) on the planet transits .although the analytical solutions of the problems are very useful , the modern astrophysical objects and configurations are quite complex to allow analytical descriptions . instead of analytical solutions we are now able to use fast numerical computations .this paper presents a new solution of the direct problem of the transiting planets .it is based on the transformation of the double integrals describing the light decrease during the transit to linear ones .we created the code tac - maker for generation of synthetic transits by numerical calculations of the linear integrals .the validation of our approach was made by comparison with the results of the wide - spread method for modelling of planet transits .it was demonstrated that our method gave reasonable and expected synthetic transits for linear , quadratic and squared - root limb - darkening laws and arbitrary combinations of input parameters .the main advantages of our approach for the planet transits are : ( 1 ) : : it gives a possibility , for the first time , to use an arbitrary limb - darkening law ) of the host star ; ( 2 ) : : it allows acquisition of the stellar limb - darkening coefficients from the transit solution and comparison with the theoretical values of ; ( 3 ) : : it gives a possibility , in principle , to determine the planet temperature from the observed transits .our estimations reveal that the effect of the non - zero planet temperature to the transit depth is lower than the precision of the present optical photometric observations .however , the higher sensibility of the observations at longer wavelengths ( ir ) to the planet emission implies that the determination of the planet temperature from the observed transits is forthcoming .these properties of our approach and the practically arbitrary precision of the calculations of the code tac - maker reveal that our solution of the planet transit problem is able to meet the challenges of the continuously increasing photometric precision of the ground - based and space observations .we plan to build an inverse problem solution for the planet transit ( for determination of the configuration parameters ) on the basis of our direct problem solution and by using the derived simple analytical expressions in this paper to obtain initial values of the fitted parameters .the code tac - maker is available for free download from the astrophysics source code library or its own site .the research was supported partly by funds of projects do 02 - 362 , do 02 - 85 , and ddvu 02/40 - 2010 of the bulgarian scientific foundation .we are very grateful to eric agol , the referee of the manuscript , for the valuable recommendations and useful notes .alonso r. , brown t. m. , torres g. , latham d. w. , sozzetti a. , mandushev g. , belmonte j. a. charbonneau d. et al ., 2004 , apj , 613 , l153 arnold l. , schneider j. 
, 2006 , dies.conf , 105 baglin a. , auvergne m. , boisnard l. , lam - trong t. , barge p. , catala c. , deleuil m. , michel e. et al . , 2006 , in 36th cospar scientific assembly .held 16 - 23 july 2006 , in beijing , china .bakos g. , noyes r. w. , kovcs g. , stanek k. z. , sasselov d. d. , domsa i. , 2004 , pasp , 116 , 266 batygin , k. , stevenson , d. j. , bodenheimer , p. h. , 2011 , apj 738 , 1 bodenheimer , p. , lin , d. , mardling , r. , 2001 , apj 548 , 466 bodenheimer , p. ; laughlin , g. ; lin , d. , 2003 , apj 592 , 555 borucki w. j. , koch d. , basri g. , batalha n. , brown t. , caldwell d. , caldwell j. , christensen - dalsgaard j. et al . , 2010 , sci , 327 , 977 borucki w. j. , koch d. g. , brown t. m. , basri g. , batalha n. m. , caldwell d. a. , cochran w. d. , dunham e. w. et al . , 2010 ,apj , 713l , 126 burrows a. , rauscher e. , spiegel d. s. , menou k. , 2010 , apj , 719 , 341 carter j. a. , winn j. n. , 2010 , apj , 716 , 850 claret a. , 2000 , a&a , 363 , 1081 claret a. , 2004 , a&a , 428 , 1001 collier cameron a. , wilson d. m. , west r. g. , hebb l. , wang x .- b . , aigrain s. , bouchy f. , christian d. j. , et al .2007 , mnras , 380,1230 croll b. , jayawardhana r. , fortney j. j. , lafrenire d. , albert l. , 2010 , apj , 718 , 920 cody a. m. , sasselov d. d. , 2002 , apj , 569 , 451 diaz - cordoves j. , gimenez a. , 1992 , a&a , 259 , 227 dittmann j. a. , close l. m. , green e. m. , fenwick m. , 2009 , apj , 701 , 756 dunham e. w. , borucki w. j. , koch d. g. , batalha n. m. , buchhave l. a. , brown t. m. , caldwell d. a. , cochran w. d. et al . , 2010 , apj , 713 , l136 eastman j. , gaudi s. , agol e. , 2013 , pasp , 125 , 83 etzel p. b. , 1975 , mst , 1 etzel p. b. , 1981 , psbs.conf , 111 gazak j. z. , johnson j. a. , tonry j. , dragomir d. , eastman j. , mann a. w. , agol e. , 2012 , adast , 2012 , 30 gibson n. p. , aigrain s. , pollacco d. l. , barros s. c. c. , hebb l. , hrudkov m. , simpson e. k. , skillen i. et al. , 2010 , mnras , 404 , l114 gillon m. , demory b .- o . , triaud a. h. m. j. , barman t. , hebb l. , montalbn j. , maxted p. f. l. , queloz d. et al . , 2009 , a&a , 506 , 359 gimnez a. , 2006 , a&a , 450 , 1231 guillot , t. , showman , a. p. , 2002, a&a 385 , 156 hbrard g. , dsert j .- m . , daz r. f. , boisse i. , bouchy f. , lecavelier des etangs a. , moutou c. , ehrenreich d. et al . , 2010 , a&a , 516 , a95 hubbard w. b. , fortney j. j. , lunine j. i. , burrows a. , sudarsky d. , pinto p. , 2001, apj , 560 , 413 hui l. , seager s. , 2002 , apj , 572 , 540 jackson b. , barnes r. , greenberg r. , 2008 , mnras , 391 , 237 jackson b. , miller n. , barnes r. , raymond s. , fortney j. , greenberg r. , 2010 , mnras , 407 , 910 kipping d. m. , 2008 , mnras , 389 , 1383 kipping d. m. , 2010 , mnras , 407 , 301 kipping d. , bakos g. , 2011 , apj , 733 , 36 kjurkchieva d. p. , dimitrov d. p. , 2012 ,iaus , 282 , 474 klinglesmith d. a. , sobieski s. , 1970 , aj , 75 , 175 knutson h. a. , charbonneau d. , allen l. e. , fortney j. j. , agol e. , cowan n. b. , showman a. p. , cooper c. s. et al . , 2007 , natur , 447 , 183 koch d. g. , borucki w. j. , rowe j. f. , batalha n. m. , brown t. m. , caldwell d. a. , caldwell j. , cochran w. d. et al . , 2010 , apj , 713 , l131 kopal z. , 1950 , harci , 454 , 1 latham d. w. , borucki w. j. , koch d. g. , brown t. m. , buchhave l. a. , basri g. , batalha n. m. , caldwell d. a. et al . , 2010 , apj , 713 , l140 mandel k. , agol e. , 2002 , apj , 580 , l171 miller n. , fortney j. j. , jackson b. 
, 2009 , apj , 702 , 1413 pl , a. , bakos g. . , torres g. , noyes r. w. , fischer d. a. , johnson j. a. , henry g. w. , butler r. p. et al . , 2010 , mnras , 401 , 2665 penev k. , jackson b. , spada f. , thom n. , 2012 , apj 751 , 96 penev k. , sasselov d. , 2011 , apj 731 , 67 poddan s. , brt l. , pejcha o. , 2010 , newa , 15 , 297 pollacco d. l. , skillen i. , collier cameron a. , christian d. j. , hellier c. , irwin j. , lister t. a. , street r. a. et al . , 2006 , pasp , 118 , 1407 pollacco d. l. , skillen i. , collier cameron a. , loeillet b. , stempels h. c. , bouchy f. , gibson n. p. hebb , l. et al .2008 , mnras , 385 , 1576 popper d. m. , etzel p. b. , 1981 , aj , 86 , 102 rabus m. , alonso r. , belmonte j. a. , deeg h. j. , gilliland r. l. , almenara j. m. , brown t. m. , charbonneau d. et al . , 2009 , a&a , 494 , 391 seager s. , malln - ornelas g. , 2003 , apj , 585 , 1038 seager s. , sasselov d. d. , 2000 , apj , 537 , 916 seager s. , whitney b. , 2000 , aspc , 212 , 232 showman a. , guillot t. , 2002 , a&a 385 , 166 southworth j. , 2008 , mnras , 386 , 1644 southworth j. , maxted p. f. l. , smalley b. , 2004a , mnras , 351 , 1277 southworth j. , maxted p. f. l. , smalley b. , 2004b , mnras , 349 , 547 steffen j. h. , fabrycky d. c. , ford e. b. , carter j. a. , dsert j .- m . , fressin f. , holman m. j. , lissauer j. j. et al . , 2012 , mnras , 421 , 2342 udalski a. , paczynski b. , zebrun k. , szymanski m. , kubiak m. , soszynski i. , szewczyk o. , wyrzykowski l. et al . , 2002 , aca , 52 , 1 winn j. n. , albrecht s. , johnson j. a. , torres g. , cochran w. d. , marcy g. w. , howard a. w. , isaacson h. et al ., 2011 , apj , 741 , l1 wu y. , lithwick y. , 2012 , astro - ph : arxiv:1210.7810
we present a new solution of the direct problem of planet transits based on the transformation of double integrals to single ones . on the basis of our direct problem solution we created the code tac - maker for rapid and interactive calculation of synthetic planet transits by numerical computation of the integrals . the validation of our approach was made by comparison with the results of the widely used mandel & agol ( 2002 ) method for the cases of linear , quadratic and square - root limb - darkening laws and various combinations of model parameters . for the first time our approach allows the use of an arbitrary limb - darkening law of the host star . this advantage , together with the practically arbitrary precision of the calculations , makes the code a valuable tool that faces the challenges of the continuously increasing photometric precision of the ground - based and space observations .
keywords : methods : analytical ; methods : numerical ; planetary systems ; binaries : eclipsing
web applications constitute valuable up - to - date tools in a number of different cases .one such case is their use in the management of environmental problems so as to protect civilians from any unfortunate consequences that these problems can cause . their evolution , therefore , has been especially important in many cases , one of them being in the development of systems of administration of the air quality .the right of accessing environmental information has been enacted in european level through appropriate legislation , which are incorporated in the relevant greek legislation see - .nowadays , the combination of telecommunications and new technologies create a framework for developing such systems increasingly sophisticated - .that is just diffusion of environmental information and public access which was attempted effectively through the system codenamed eap ( laboratory of atmospheric pollution and environmental physics ) in western macedonia .it is developed for the first time in 2002 , providing the possibility for direct information to the public about the air quality , as it was recorded in the four atmospheric measurement stations established in the capitals of countries kozani , florina , kastoria and grevena though an appropriate web - site , as well as sms , with the possibility for extension of stations and also the historical measurements privilege . for every station a previous and current index of pollution appears ( in a scale 1 - 10 ) with an appropriate colour scale .the system was expanded and upgraded in may 2010 , which consists in transferring data , the way of presentation as well as the amount of information provided .specifically it is recommended : a ) the combine use of different methods of transportation in real or almost real time data of terminal stations measurements to a central base station b ) the environmental information is promoted to the internet , with a properly designed dynamic website with enabled navigating of google map , , . in this paperthe novelty of information system eap is presented , in a dynamic , easily accessible and user - friendly manner .it consists of a structured system that users have access to and they can manipulate thoroughly , as well as of a system for accessing and managing results of measurements in a direct and dynamic way .it provides updates about the weather and pollution forecast for the next few days ( based on current day information ) in western macedonia .these forecasts are displayed through dynamic - interactive web charts and the visual illustration of the atmospheric pollution of the region in a map using images and animation images .moreover , there is the option to view historical elements .an additional new function is the use of online reports to monitor , analyze , control and processing measurements , historical data and statistics of each station in real time over the internet .this function focuses on designing an effective and user - friendly process . finally , the management system of measurement stations , the administrator has the ability to dynamically create , modify and delete objects , points and information of each station on the googlemap . 
in this way the processing ( update , delete ,add ) of points is easier .the a.q.m.e.i.s .application has been developed using open source software tools like html , javascript , php and mysql .html is the language for the internet interface design .the goal for html was to create a platform - independent language for constructing hypertext documents to communicate multimedia information easily over the internet .javascript is a client - side scripting language that provides powerful extensions to the html used to compose web pages and is mainly utilised for checking and validating web form input values to make sure that no invalid data are submitted to the server .php is the most popular server - side programming language for use on web servers .any php code in a requested file is executed by the php runtime , usually to create dynamic web page content .it can also be used for command - line scripting and client - side gui applications .php is a cross platform programming language and can be used with many relational database management systems ( rdbms) . mysql is a high - performance , multi - thread , multi - user rdbms and is built around a client - server architecture .also mysql uses the structured query language standard giving a great deal of control over this relational database system .finally , apache server is responsible for expecting requests from various programs - users and then serve the pages , according to the standards set by the protocol http ( hypertext transfer protocol ) .in this section the user interface and functions of this application are described .there are three ( 3 ) levels of user access ( groups of users ) . on the first level the user has the ability to be informed in real time about the weather conditions , the air pollution and air pollution indices in an area of interest using google map.the second level is for authorized users only , who can be informed analytically through reports about measurements of a specific time period .the third level is for the administrator , who has access to all information and who also inserts , updates or deletes data from the database .the administrator can also interfere dynamically and manage all the information of the googlemap .the online web station reports is a new online web feature which offers to the approved members of the application to monitor , analyze , check and process the measurements using statistics of each station in real time .furthermore , the ability of pumping previous measurements is given ; a function that did not exist in the web application until a short time ago .the login is achieved through the use of a special personal password that is given to the members by the support group of the web application .the users input the password and after validation , they can perform a number of available functions in a safe and user - friendly online web environment .more specifically the feature offers the following functions to its members : presentation of daily , weekly , monthly and according to the user s choice values either for a chosen station of all its measured data or for specifically chosen measured parameters ( sensors ) , with simultaneous calculation and presentation of maximum , minimum , average values , sum , number and percentage of measurements in a table or the ability of output of data in ms excel .the functions of the new web features are described in detail .there are four categories of online web station reports , i.e. 
daily , weekly , monthly and custom .each report is displayed in three parts ( forms ) . in the first form , by choosing a stationthe image is displayed as well as various information about the specific station ( fig .1 ) . if the user does not choose a station , an error report appears . by clicking on next , the second form of the report appears in which the user can choose which measure fields to be shown , as well as the measure time interval ( 5min or 60min ) and the specific date that those measurements were taken ( fig .2 ) .the dates differentiate according to the report category that the user will choose ; more specifically , there are : a ) daily report : the current date appears .b ) weekly : the first day of the current week is set as the starting date .c ) monthly : the first day of the current month is set as the starting date .d ) periodical : the current dates are set as the dates ( from - to ) with the ability of changing the spaces ( from - to ) by the user if the user does not choose any measure fields or chooses date in which there are no figures reported , then the system will display an error message . by clicking on report the algorithm moves to the last tab of the report where a table of contents appears in a dynamic way with information about the hour , the measure fields , measurement results as well as other statistical data ( fig .3 ) . also , the algorithm calculates and displays the number of measurements ( e.g. 100 measurements were found ) , the current page and the total number of pages ( e.g. page 1 from 12 ) .depending on the number of records , an equal number of pages is created .the application can display 25 measurements per page .also the users can move to any page they wish so as to have access to any measurement of interest .every form of measurement also has a status field ( numbers 0 , 1 or 2 ) .this table is used to check the validity of the measurements of a field . in this way , if there are no results for a specific date in one field , then the indication no data is displayed .all checks are made based on the status field .if , however the measurements in a field are wrong for a specific date due to various factors , then the indication offscan is displayed . in a similar manner, checks rely on the status field . for every measure field in a specific momentthe following statistics are taken into account a ) average , b ) minimum value , c ) date and time that minimum value was found , d ) the number of records of the minimum value , e ) maximum value , g ) the date and time of record of the maximum value , h ) the total number of measurements , i ) the % percentage . by pressing excel the measurements of the station can be displayed on excel form , which the user is able to open or save it for later use ( fig .4 ) .another innovation of the web application is the dynamic management of the measurements from each station in a simple way using an online geographical web interface .the administrator using this specific feature ( management of the measurement stations using googlemaps ) can insert , delete and modify easily and simply data having as a purpose the dynamic renewal on the googlemap .this gives the advantage to the administrator to use the specific feature as a platform of visualization of information without having to write on their own not even a line of code for this purpose .moreover , an important element of the feature is the easy expansion and integration n ( n = count ) measurement stations on the interactive googlemap. 
station management is made through the interactive interface of googlemap ; the administrator of this application can insert , delete and modify dynamically a certain point ( station ) in an area ( according to geographical latitude and longitude ) . to insert a certain station in the map ,the following actions are required : a ) the insertion of municipality of choice : the user chooses through a list the one which the station belongs to ; b ) the insertion of the type of station of measurement : the user chooses if the station is meteorological or one that measures pollution or both .all the data is stored in the application mysql database . as a last step, the administrator sets the name of the station , the longitude and latitude , municipality , address , description , type of station and the image of the station .( fig . 5 ) .next , all information are stored into the database and are retrieved from there to be displayed dynamically ( both the points and information ) on the googlemap . on the map userscan see meteorological information as well as information about pollution from various stations and areas . for every station a previous and current index of pollution appears ( in a scale 1 - 10 ) with an appropriate corresponding colour scale . by clicking on each point of the station the information ( i.e. online measurements , air pollution indices for the previous and current day , general information about the station )is displayed .the user may also activate or deactivate one or more points on the map . to achieve the dynamic update of the measurement stations on the googlemap, the file airlab_markers.php is called .it is responsible for the creation and update of the xml file .more specifically , the data of the application , i.e. the name and the measurements of the station , their geographical position where they belong , the general information with the representative photograph of each station , even the representation symbol , are retrieved in xml structure , submitting the appropriate preset sql question to the database , via the corresponding code of the php page .the xml file has an element ( root - top level element ) and especially the <markers > </markers>. the remaining elements are nested to this . 
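in the production system this marker document is generated by the php script airlab_markers.php ; purely as an illustration of the same logic , the sketch below builds an equivalent <markers> xml document in python from a small in - memory database . the column names follow the points table described further below in the text , while the sqlite back end and the demo row are assumptions made only so that the example runs on its own .

```python
import sqlite3  # stand-in for the MySQL database used by the real system
import xml.etree.ElementTree as ET

# Demo schema and row: column names follow the `points` table described in
# the text (title, lat, log, address, description, thumb); values are fictitious.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (title TEXT, lat REAL, log REAL, "
             "address TEXT, description TEXT, thumb TEXT)")
conn.execute("INSERT INTO points VALUES (?, ?, ?, ?, ?, ?)",
             ("demo station", 40.30, 21.79, "kozani",
              "meteorological and pollution station", "demo.jpg"))

def build_markers_xml(connection):
    """Emit a <markers> document with one <marker> element per station row."""
    root = ET.Element("markers")
    rows = connection.execute(
        "SELECT title, lat, log, address, description, thumb FROM points")
    for title, lat, lon, address, description, thumb in rows:
        ET.SubElement(root, "marker", {
            "title": title, "lat": str(lat), "lng": str(lon),
            "address": address, "description": description, "image": thumb,
        })
    return ET.tostring(root, encoding="unicode")

print(build_markers_xml(conn))
```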
for the appropriate structure of the xmlfile , there is an additional code in the airlab_markers.php file .all necessary checks about the validity of the data then take place .the algorithm was realized by php scripts and the specific feature that was developed , is supported by the internet explorer , firefox , opera and google chrome browsers .an additional new feature is the weather forecast with the aid of dynamic web charts , is committed to deliver the most reliable , accurate weather information possible .it provides free , real - time and online weather information for the web users with the state - of - the - art technology monitors conditions and forecasts in the area of western macedonia in the next few days ( fig .the information is produced in a high - end server in eap / wmaqis and is read and stored in a database and it appears in the internet with the form of dynamic web graphs ( fig.7 ) .meteorological parameters are temperature , humidity , wind speed , wind direction , accumulated precipitation , mixing height and total solar radiation .php scripts retrieves from the mysql database the 24 average hourly values for each meteorological parameter ( except for the accumulated precipitation , for which a total of 6 hours is taken into account ) on each location in western macedonia .next , the information is displayed with a graph .the user then can choose a location to see the weather forecast . by choosing history ,the previous meteorological measurements and figures are displayed using graphs .another important part of the a.q.m.e.i.s .application is the atmospheric pollution forecast of the pm10 ( particulate matter ) concerning the next few days in western macedonia .our application displays dynamically these regions in a map using images ; according to pollution percentages in a certain region , the corresponding colour scale is represented denoting the levels of pollution .choosing region , pollutant agent , source of emission and date , pollution for the previous and current dates , as well as the ones of the next three days are displayed .this part of the application uses javascript , while a very small part of the code was written in php ( dates management ) .the air pollution model produces image files ( xxx.jpg ) in the hard disc of the server in which javascript searches and then displays . in cases where not enough environmental data exist for a certain date ,an image appears entitled pollution image display unavailable. the necessary validation integrity checks of the dates are also made ( from - to ) . finally choosing history the user can see older images of pollution rates in a certain region of western macedonia .choosing movement , a javascript algorithm is executed animating the pollution illustrations in the area of interest .( fig 9 ) .mysql is a very fast and powerful database management system . allows the user to store , search , sort and recall data efficiently .this application stores all information in the mysql database , so that they can be retrieved dynamically every time they are needed .the architecture of this database consists of a total of 23 tables .8 tables ( s001t05 , s001t60 , s002t05 , s002t60 , s003t05 , s003t60 , s004t05 , s004t60 ) include measurements collected from the stations ; they are used in reports and in the dynamic system for monitoring the air pollution , via an interactive chart .s00x refers to station x and t05 or t60 refer to a measure time interval ( i.e. 
an average 5 or 60 minute measurement ) .the primary key is date_time , while the rest of the fields ( value1 up to value32 ) store meteorological and environmental data .tables city_info , points_categories and points are used for the management of stations that use google map . in particular , city_info stores the town in which each point ( station ) is located , along with additional information .fields include id ( main key ) , title , en_title , lat , log .table points_categories stores the station s category ( fields : id is the primary key , title and en_title ) . on the tablepoints points are set up , along with additional information .these fields are id ( primary key ) , category , city , en_city , address , title , description , lat , log , thumb , image .finally 12 tables ( mamyntaio , mflorina , mgrevena , mkastoria , mkkomi , mkozani , mnpedio , mpetrana , mpontokomi , mptl , mservia , msiatista ) which store weather forecast information are used .every table represents a certain location in western macedonia .fields date , hour have been set up as primary key denoting solely each record .the rest ( wdir , temp , rhum , tempscr , rhumscr , tsr , netr , sens , evap , wstar , zmix , ustar , lstar , rain and snow ) are meteorological parameters .tables about pollution forecasts do not exist , all measurements are stored in the hard disc of the server as image files .a sample table of weather forecast has the following form : mkozani ( date , hour , wdir , temp , rhum , tempscr , rhumscr , tsr , netr , sens , evap , wstar , zmix , ustar , lstar , rain and snow . )the proposed a.q.m.e.i.s .application is part of a system - air quality monitoring network , which was developed in the laboratory of atmospheric pollution and environmental physics of technological education institute of western macedonia , to monitor the air quality in western macedonia area , with industrial focus on the region of prolemais - kozani basin .this system was co - financed by the teiwm , regional operational programm 2000 - 2006 western macedonia and recently by the municipality of kozani .the architecture of this system is constituted by five terminal stations , which collect environmental information , the central station and a web server .different technologies ( adsl , gprs , ethernet ) are used to transfer the data to the central station .the data are sent every half an hour to the main station which collects the complete set of data and transfers them to the web server every sixty minutes , where under the application proposed in this paper provides meteorological , environmental , weather and air pollution forecast data in west macedonia area .further details on the design of the above mentioned air quality monitoring network can be obtained from , , .an operational monitoring , as well as high resolution local - scale meteorological and air quality forecasting information system(a.q.m.e.i.s . ) for western macedonia , hellas , has been developed and is operated by the laboratory of atmospheric pollution and environmental physics / tei western macedonia . in this paper the novelty of information system is presented , in a dynamic , easily accessible and user - friendly manner . 
the application is developed using state of the art web technologies ( ajax , google maps etc ) and under the philosophy of the open source software that gives the ability to users / authors to update / enrich the code so that their augmentative needs are met .design of a webbased information system for ambient air quality data in west macedonia , greece , a.g .triantafyllou , v. evagelopoulos , e.s .kiros , c. diamantopoulos , 7th panhellenic ( international ) conference of meteorology , climatology and atmospheric physics , nicosia 28 - 30 september , 2004 .council directive 90/313/eec of 7 june 1990 on the freedom of access to information on the environment .council directive 92/72/eec of 21 september 1992 on air pollution by ozone .council directive 1999/30/ec of 22 april 1999 relating to limit values for sulphur dioxide , nitrogen dioxide and oxides of nitrogen , particulate matter and lead in ambient air .council directive 96/62/ec of 27 september 1996 on ambient air quality assessment and management .council directive 90/313/ec in expanding the level of access in information karatzas k. and j. kukkonen , 2009 , cost action es0602 workshop proceedings `` quality of life information services towards a sustainable society for the atmospheric environment '' ed .k. karatzas and j. kukkonen .schimak g. , 2003 : environmental data management and monitoring system uwedat .environmental modelling and software 18 , 573 - 580 .xuan zhu , allan p. dale , 2001 : javaahp : a webbased decision analysis tool for natural resource and environmental management , environmental modelling and software 16 , 251 - 262 . atmospheric pollution - atmospheric boundary layer , advances in measurements technologies , a.g .triantafyllou , ed .tei of western macedonia , kozani 2004 , pp.256 .triantafyllou , v. evagelopoulos , s. zoras , design of a webbased information system for ambient air quality data , journal of environmental management , 80(3 ) , 230 - 236 ( 2006 ) .triantafyllou , j. skordas , ch.diamantopoulos , e. topalis , `` the new dynamic air quality information systemand its application in west macedonia , greece '' , p.4 , 4th macedonian enviromental conference , thessaloniki , greece , march 2011 .ioannis skordas , george f. fragulis , athanasios g. triantafyllou , `` eairquality : a dynamic web application for evaluating the air quality index for the city of kozani , hellas '' , pci 2011 15th , panhellenic conference on informatics , 30 september - 02 october 2011 , kastoria , greece .http://www.airlab.edu.gr/ beginning web programming with html , xhtml , and css , jon duckett , wiley publishing , 2008 .core php programming 3rd edition , leon atkinson with zeev suraski , pearson education , inc , publishing as prentice hall professional technical reference , 2004 mysql : the complete reference , vikram vaswani , mcgraw - hill brandon a. nordin , 2004 .apache : the definitive quide 3rd edition , ben laurie and peter laurie , oreilly , 2003 .triantafyllou a.g ., krestou a. , hurley p. and thatcher m. , `` an operational high resolution localscale meteorological and air quality forecasting system for western macedonia , greece : some first results '' , proceedings of the 12th international conference on environmental science and technology ( cest2011),8 10 september 2011 , pp.a1904:1911
an operational monitoring system , as well as a high resolution local - scale meteorological and air quality forecasting information system for western macedonia , hellas , has been developed and has been operated by the laboratory of atmospheric pollution and environmental physics / tei of western macedonia since 2002 , with continuous improvements . in this paper the novel features of the information system are presented ; it makes its data available in a dynamic , easily accessible and user - friendly manner . it consists of a structured system that users can access and navigate thoroughly , as well as of a facility for accessing and managing measurement results in a direct and dynamic way . it provides updates on the weather and pollution forecasts for the next few days ( based on current - day information ) for western macedonia . these forecasts are displayed through dynamic , interactive web charts and through a visual illustration of the atmospheric pollution of the region on a map using static and animated images .
six years after the financial crisis of 2007 - 2008 , millions of households worldwide are still struggling to recover from the aftermath of those traumatic events .the majority of losses are indirect , such as people losing homes or jobs , and for the majority , income levels have dropped substantially . for the economy as a whole , and for households and for public budgets , the miseries of the market meltdown of 2007 - 2008 are not yet over.as a consequence , a consensus for the need for new financial regulation is emerging .future financial regulation should be designed to mitigate risks within the financial system as a whole , and should specifically address the issue of systemic risk ( sr ) .sr is the risk that the financial system as a whole , or a large fraction thereof , can no longer perform its function as a credit provider , and as a result collapses . in a narrow sense , it is the notion of contagion or impact from the failure of a financial institution or group of institutions on the financial system and the wider economy .generally , it emerges through one of two mechanisms , either through interconnectedness or through the synchronization of behavior of agents ( fire sales , margin calls , herding ) .the latter can be measured by a potential capital shortfall during periods of synchronized behavior where many institutions are simultaneously distressed .measures for a potential capital shortfall are closely related to the leverage of financial institutions .interconnectedness is a consequence of the network nature of financial claims and liabilities .several studies indicate that financial network measures could potentially serve as early warning indicators for crises .in addition to constraining the ( potentially harmful ) bilateral exposures of financial institutions , there are essentially two options for future financial regulation to address the problem : first , financial regulation could attempt to reduce the financial fragility of `` super - spreaders '' or _ systemically important financial institutions _ ( sifis ) , i.e. limiting a potential capital shortfall .this can achieved by reducing the leverage or increasing the capital requirements for sifis .`` super - spreaders '' are institutions that are either too big , too connected or otherwise too important to fail .however , a reduction of leverage simultaneously reduces efficiency and can lead to pro - cyclical effects .second , future financial regulation could attempt strengthening the financial system as a whole .it has been noted that different financial network topologies have different probabilities for systemic collapse . in this sensethe management of sr is reduced to the technical problem of re - shaping the topology of financial networks .the basel committee on banking supervision ( bcbs ) recommendation for future financial regulation for sifis is an example of the first option .the basel iii framework recognizes sifis and , in particular , global and domestic systemically important banks ( g - sibs or d - sibs ) .the bcbs recommends increased capital requirements for sifis the so called `` sifi surcharges '' .they propose that sr should be measured in terms of the impact that a bank s failure can have on the global financial system and the wider economy , rather than just the risk that a failure could occur .therefore they understand sr as a global , system - wide , loss - given - default ( lgd ) concept , as opposed to a probability of default ( pd ) concept . 
instead of using quantitative models to estimate sr, basel iii proposes an indicator - based approach that includes the size of banks , their interconnectedness , and other quantitative and qualitative aspects of systemic importance .there is not much literature on the problem of dynamically re - shaping network topology so that networks adapt over time to function optimally in terms of stability and efficiency .a major problem in nw - based sr management is to provide agents with incentives to re - arrange their local contracts so that global ( system - wide ) sr is reduced .recently , it has been noted empirically that individual transactions in the interbank market alter the sr in the total financial system in a measurable way .this allows an estimation of the marginal sr associated with financial transactions , a fact that has been used to propose a tax on systemically relevant transactions .it was demonstrated with an agent - based model ( abm ) that such a tax the _ systemic risk tax _ ( srt ) is leading to a dynamical re - structuring of financial networks , so that overall sr is substantially reduced . in this paperwe study and compare the consequences of two different options for the regulation of sr with an abm . as an example for the first option we study basel iii with capital surcharges for g - sibs and compare it with an example for the second option the srt that leads to a self - organized re - structuring of financial networks .a number of abms have been used recently to study interactions between the financial system and the real economy , focusing on destabilizing feedback loops between the two sectors .we study the different options for the regulation of sr within the framework of the crisis macro - financial model . in this abm , we implement both the basel iii indicator - based measurement approach , and the increased capital requirements for g - sibs .we compare both to an implementation of the srt developed in .we conduct three `` computer experiments '' with the different regulation schemes .first , we investigate which of the two options to regulate sr is superior .second , we study the effect of increased capital requirements , the `` surcharges '' , on g - sibs and the real economy .third , we clarify to what extend the basel iii indicator - based measurement approach really quantifies sr , as indented by the bcbs .the basel iii indicator - based measurement approach consists of five broad categories : size , interconnectedness , lack of readily available substitutes or financial institution infrastructure , global ( cross - jurisdictional ) activity and `` complexity '' .as shown in [ indicator ] , the measure gives equal weight to each of the five categories .each category may again contain individual indicators , which are equally weighted within the category .p5 cm p7 cm p3 cm category ( and weighting ) & individual indicator & indicator weighting + cross - jurisdictional activity ( 20% ) & cross - jurisdictional claims & 10% + & cross - jurisdictional liabilities & 10% + size ( 20% ) & total exposures as defined for use in the basel iii leverage ratio & 20% + interconnectedness ( 20% ) & intra - financial system assets & 6.67% + & intra - financial system liabilities & 6.67% + & securities outstanding & 6.67% + substitutability / financial institution infrastructure ( 20% ) & assets under custody & 6.67% + & payments activity & 6.67% + & underwritten transactions in debt and equity markets & 6.67% + complexity ( 20% ) & notional amount of over - the - 
counter ( otc ) derivatives & 6.67% + & level 3 assets & 6.67% + & trading and available - for - sale securities & 6.67% here below we describe each of the categories in more detail . this indicator captures the global `` footprint '' of banks . the motivation is to reflect the coordination difficulty associated with the resolution of international spillover effects .it measures the bank s cross - jurisdictional activity relative to other banks activities .here it differentiates between [ ( i ) ] cross - jurisdictional claims , and cross - jurisdictional liabilities .this indicator reflects the idea that size matters .larger banks are more difficult to replace and the failure of a large bank is more likely to damage confidence in the system . the indicator is a measure of the normalized total exposure used in basel iii leverage ratio .a bank s systemic impact is likely to be related to its interconnectedness to other institutions via a network of contractual obligations .the indicator differentiates between [ ( i ) ] in - degree on the network of contracts out - degree on the network of contracts , and outstanding securities .the indicator for this category captures the increased difficulty in replacing banks that provide unique services or infrastructure .the indicator differentiates between [ ( i ) ] assets under custody payment activity , and underwritten transactions in debt and equity markets .banks are complex in terms of business structure and operational `` complexity '' . the costs for resolving a complex bank is considered to be greater .this is reflected in the indicator as [ ( i ) ] notional amount of over - the - counter derivatives level 3 assets , and trading and available - for - sale securities .the score of the basel iii indicator - based measurement approach for each bank and each indicator , e.g. 
cross - jurisdictional claims , is calculated as the fraction of the individual banks with respect to all banks and then weighted by the indicator weight ( ) .the score is given in basis points ( factor ) where is the set of indicators and the weights from [ indicator ] .p3 cm p3 cm p3 cm p6 cm bucket & score range & bucket thresholds & higher loss absorbency requirement ( common equity as a percentage of risk - weighted assets ) + 5 & d - e & 530 - 629 & 3.50% + 4 & c - d & 430 - 529 & 2.50% + 3 & b - c & 330 - 429 & 2.00% + 2 & a - b & 230 - 329 & 1.50% + 1 & cutoff point - a & 130 - 229 & 1.00% in the basel iii `` bucketing approach '' , based on the scores from [ score ] , banks are divided into four equally sized classes ( buckets ) of systemic importance , seen here in [ buckets ] .the cutoff score and bucket thresholds have been calibrated by the bcbs in such a way that the magnitude of the higher loss absorbency requirements for the highest populated bucket is 2.5% of risk - weighted assets , with an initially empty bucket of 3.5% of risk - weighted assets .the loss absorbency requirements for the lowest bucket is 1% of risk - weighted assets .the loss absorbency requirement is to be met with common equity .bucket five will initially be empty .as soon as the bucket becomes populated , a new bucket will be added in such a way that it is equal in size ( scores ) to each of the other populated buckets and the minimum higher loss absorbency requirement is increased by 1% of risk - weighted assets .we use an abm , linking the financial and the real economy .the model consists of banks , firms and households .one pillar of the model is a well - studied macroeconomic model , the second pillar is an implementation of an interbank market . in particular , we extend the model used in with an implementation of the basel iii indicator - based measurement approach and capital surcharges for sibs .for a comprehensive description of the model , see .the agents in the model interact on four different markets .[ ( i ) ] firms and banks interact on the credit market , generating flows of loan ( re)payments .banks interact with other banks on the interbank market , generating flows of interbank loan ( re)payments .households and firms interact on the labour market , generating flows wage payments .households and firms interact on the consumption - goods market , generating flows of goods .banks hold all of firms and households ?cash as deposits .households are randomly assigned as owners of firms and banks ( share - holders ) .agents repeat the following sequence of decisions at each time step : firms define labour and capital demand , banks rise liquidity for loans , firms allocate capital for production ( labour ) , households receive wages , decide on consumption and savings , firms and banks pay dividends , firms with negative cash go bankrupt , banks and firms repay loans , illiquid banks try to rise liquidity , and if unsuccessful , go bankrupt . 
households which ownfirms or banks use dividends as income , all other households use wages .banks and firms pay of their profits as dividends .the agents are described in more detail below .there are households in the model .households can either be workers or investors that own firms or banks .each household has a personal account at one of the banks , where index represents the household and index the bank .workers apply for jobs at the different firms .if hired , they receive a fixed wage per time step , and supply a fixed labour productivity .a household spends a fixed percentage of its current account on the consumption goods market .they compare prices of goods from randomly chosen firms and buy the cheapest .there are firms in the model .they produce perfectly substitutable goods . at each time step firm computes its own expected demand and price .the estimation is based on a rule that takes into account both excess demand / supply and the deviation of the price from the average price in the previous time step .each firm computes the required labour to meet the expected demand . if the wages for the respective workforce exceed the firm s current liquidity , the firm applies for a loan .firms approach randomly chosen banks and choose the loan with the most favorable rate .if this rate exceeds a threshold rate , the firm only asks for percent of the originally desired loan volume . based on the outcome of this loan request , firms re - evaluate the necessary workforce , and hire or fire accordingly .firms sell the produced goods on the consumption goods market .firms go bankrupt if they run into negative liquidity .each of the bankrupted firms debtors ( banks ) incurs a capital loss in proportion to their investment in loans to the company .firm owners of bankrupted firms are personally liable .their account is divided by the debtors _ pro rata_. they immediately ( next time step ) start a new company , which initially has zero equity . their initial estimates for demand and price set to the respective averages in the goods market .the model involves banks .banks offer firm loans at rates that take into account the individual specificity of banks ( modeled by a uniformly distributed random variable ) , and the firms creditworthiness .firms pay a credit risk premium according to their creditworthiness that is modeled by a monotonically increasing function of their financial fragility .banks try to grant these requests for firm loans , providing they have enough liquid resources .if they do not have enough cash , they approach other banks in the interbank market to obtain the necessary amount .if a bank does not have enough cash and can not raise the total amount requested on the interbank market , it does not pay out the loan .interbank and firm loans have the same duration .additional refinancing costs of banks remain with the firms . each time step firms and banks repay percent of their outstanding debt ( principal plus interest ) .if banks have excess liquidity they offer it on the interbank market for a nominal interest rate .the interbank relation network is modeled as a fully connected network and banks choose the interbank offer with the most favorable rate .interbank rates offered by bank to bank take into account the specificity of bank , and the creditworthiness of bank .if a firm goes bankrupt the respective creditor bank writes off the respective outstanding loans as defaulted credits .if the bank does not have enough equity capital to cover these losses it defaults . 
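a minimal python sketch of this write - off step is given below ; the record and attribute names ( firm_loans , equity ) are illustrative assumptions , and the pro - rata recovery from the personally liable owner s account is omitted for brevity .

    def write_off_firm_default(firm, banks):
        # creditor banks absorb the loss of a defaulted firm; a bank whose
        # equity cannot cover the loss defaults as well (names are placeholders,
        # recovery from the owner's account is ignored here)
        defaulted_banks = []
        for bank in banks:
            loss = bank.firm_loans.pop(firm.id, 0.0)   # outstanding exposure to the firm
            if loss == 0.0:
                continue
            bank.equity -= loss
            if bank.equity < 0.0:
                defaulted_banks.append(bank)           # handled by the interbank cascade described next
        return defaulted_banks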
following a bank default an iterative default - event unfolds for all interbank creditors .this may trigger a cascade of bank defaults . for simplicity s sake, we assume a recovery rate set to zero for interbank loans . this assumption is reasonable in practice for short term liquidity .a cascade of bankruptcies happens within one time step .after the last bankruptcy is resolved the simulation is stopped . in the abm ,we implement the size indicator by calculating the total exposures of banks .total exposure includes all assets of banks excluding cash , i.e. loans to firms and loans to other banks .interconnectedness is measured in the model by interbank assets ( loans ) and interbank liabilities ( deposits ) of banks . as a measure for substitutability we use the payment activity of banks .the payment activity is measured by the sum of all outgoing payments by banks . in the modelwe do not have cross - jurisdictional activity and banks are not engaged in selling complex financial products including derivatives and level 3 assets .we therefore set the weights for global ( cross - jurisdictional ) activity and complexity to zero .banks in the model have to observe loss absorbency ( capital ) requirements according to basel iii .they are required to hold 4.5% of common equity ( up from 2% in basel ii ) of risk - weighted assets ( rwas ) .rwas are calculated according to the standardized approach , i.e. with fixed weights for all asset classes . as fixed weights we use 100% for interbank loans and commercial loans .we define equity capital of banks in the model as common equity . in the model banksare allocated to the five buckets shown in [ buckets ] based on the scores obtained by [ score ] .banks have to meet additional loss absorbency requirements as shown in [ buckets ] .the score is calculated at every time step and capital requirements must be observed before providing new loans .the srt is implemented in the model as described in , for details see [ srt ] and .the srt is calculated according to [ srteq_simple ] and is imposed on all interbank transactions .all other transactions are exempted from srt . before entering a desired loan contraction , loan requesting bank obtains quotes for the srt rates from the central bank , for various potential lenders .bank chooses the interbank offer from bank with the smallest total rate .the srt is collected into a bailout fund .below we present the results of three experiments .the first experiment focuses on the performance of different options for regulation of sr , the second experiment on the consequences of different capital surcharges for g - sibs , and the third experiment on the effects of different weight distributions for the basel iii indicator - based measurement approach . in order to understand the effects of basel iii capital surcharges for g - sibs on the financial system and the real economy , and to compare it with the srt , we conduct an experiment where we consider three cases : [ ( i ) ] financial system regulated with basel ii , financial system regulated with basel iii , with capital surcharges for g - sibs , financial system with the srt . 
with basel ii we require banks to hold 2% of common equity of their rwas .rwas are calculated according to the standardized approach .the effects of the different regulation policies on total losses to banks ( see [ risk_measures ] ) are shown in [ fig : regulation_type](a ) .clearly , with basel ii ( red ) fat tails in the loss distributions of the banking sector are visible .basel iii with capital surcharges for g - sibs slightly reduces the losses ( almost not visible ) .the srt gets almost completely rid of big losses in the system ( green ) .this reduction in major losses in the financial system is due to the fact that with the srt the possibility of cascading failure is drastically reduced .this is shown in [ fig : regulation_type](b ) where the distributions of cascade sizes ( see [ risk_measures ] ) for the three modes are compared . with the basel regulation policies we observe cascade sizes of up to banks , while with the srtthe possibility of cascading failure is drastically reduced .clearly , the total transaction volume ( see [ risk_measures ] ) in the interbank market is not affected by the different regulation policies .this is seen in [ fig : regulation_type](c ) , where we show the distributions of transaction volumes . in [ fig : regulation_type](d ) we show the _ sr - profile _ of the financial system given by the rank - ordered debtranks of all banks .the sr - profile shows the distribution of systemic impact across banks in the financial system .the bank with the highest systemic impact is to the very left .obviously , the srt drastically reduces the systemic impact of individual banks and leads to a more homogeneous spreading of sr across banks .basel iii with capital surcharges for g - sibs sees a slight reduction in the systemic impact of individual banks .next , we study the effect of different levels of capital surcharges for g - sibs . herewe consider three different settings : [ ( i ) ] capital surcharges for g - sibs as specified in basel iii ( [ buckets ] ) , double capital surcharges for g - sibs , threefold capital surcharges for g - sibs . with larger capital surcharges for g - sibs , we observe a stronger effect of the basel iii regulation policy on the financial system .clearly , the shape of the distribution of losses is similar ( [ fig : surcharges](a ) ) .the tail of the distributions is only reduced due to a decrease in efficiency ( transaction volume ) , as is seen in [ fig : surcharges](c ) .evidently , average losses are reduced at the cost of a loss of efficiency by roughly the same factor .this means that basel iii must reduce efficiency in order to show any effect in terms of sr , see [ discussion ] . in [ fig : surcharges](d ) we show the sr - profile for different capital surcharges for g - sibs .clearly , the systemic impact of individual banks is also reduced at the cost of a loss of efficiency by roughly the same factor . in [ fig : surcharges](b ) we show the cascade size of synchronized firm defaults . herewe see an increase in cascading failure of firms with increasing capital surcharges for g - sibs .this means larger capital surcharges for g - sibs can have pro - cyclical side effects , see [ discussion ] . finally , we study the effects of different weight distributions for the basel iii indicator - based measurement approach . 
in particular , we are interested in whether it is more effective to have capital surcharges for `` super - spreaders '' or for `` super - vulnerable '' financial institutions .a vulnerable financial institution in this context is an institution that is particularly exposed to failures in the financial system , i.e. has large credit or counterparty risk ( cr ) .holding assets is generally associated with the risk of losing the value of an investment . whereas a liability is an obligation towards a counterparty that can have an impact if not fulfilled . to define an indicator that reflects sr and identifies `` super - spreaders '' we set the weight on intra - financial system liabilities to 100% . for an indicator that reflectscr of interbank assets and identifies vulnerable financial institutions of we set the weight on intra - financial system assets to 100% .weights on all other individual indicators are set to zero .specifically , we consider three different weight distributions .[ ( i ) ] weights as specified in basel iii ( [ indicator ] ) , indicator weight on interbank liabilities set to 100% , indicator weight on interbank assets set to 100% . to illustrate the effects of different weight distributions, we once again use larger capital surcharges for g - sibs , as specified in basel iii . to allow a comparison between the different weight distributions , we set the level of capital surcharges for each weight distribution to result in a similar level of efficiency ( credit volume ) .specifically , for the weight distribution as specified in basel iii ( [ indicator ] ) we multiply the capital surcharges for g - sibs as specified in basel iii ( [ buckets ] ) by a factor , for indicator weight on interbank liabilities by and for indicator weight on interbank assets by . in [ fig : weights](b ) we show that efficiency , as measured by transaction volume in the interbank market , is indeed similar for all weight distributions . in [ fig : weights](a ) we show the losses to banks for the different weight distributions .clearly , imposing capital surcharges on `` super - spreaders '' ( yellow ) does indeed reduce the tail of the distribution , but does not , however , get completely rid of large losses in the system ( yellow ) .in contrast , imposing capital surcharges on `` super - vulnerable '' financial institutions ( red ) shifts the mode of the loss distribution without reducing the tail , thus making medium losses more likely without getting rid of large losses . to illustrate the effect of the different weight distributions , we show the risk profiles for sr and cr .risk profiles for sr show the systemic impact as given by the debtrank ( banks are ordered by , the most systemically important being to the very left ) .risk profiles for cr show the distribution of _ average vulnerability _ , i.e. a measure for cr in a financial network , for details see [ vulnerability_section ] .the corresponding sr- and cr - profiles are shown for the different weight distributions in [ fig : weights](c ) and [ fig : weights](d ) .imposing capital surcharges on `` super - spreaders '' leads to a more homogeneous spreading of sr across all agents , as shown in [ fig : weights](c ) ( yellow ) . 
whereas capital surcharges for `` super - vulnerable '' financial institutions leads to a more homogeneous spreading of cr ( [ fig : weights](d ) ( red ) ) .interestingly , homogeneous spreading of cr ( sr ) leads to a more unevenly distributed sr ( cr ) , as seen by comparing [ fig : weights](c ) and [ fig : weights](d ) .we use an abm to study and compare the consequences of the two different options for the regulation of sr .in particular we compare financial regulation that attempts to reduce the financial fragility of sifis ( basel iii capital surcharges for g - sibs ) with a regulation policy that aims directly at reshaping the topology of financial networks ( srt ) .sr emerges in the abm through two mechanisms , either through interconnectedness of the financial system or through synchronization of the behavior of agents .cascading failure of banks in the abm can be explained as follows .triggered by a default of a firm on a loan , a bank may suffer losses exceeding its absorbance capacity .the bank fails and due to its exposure on the interbank market , other banks may fail as well .cascading failure can be seen in [ fig : regulation_type](b ) .basel iii capital surcharges for g - sibs work in the abm as follows .demand for commercial loans from firms depends mainly on the expected demand for goods .firms approach different banks for loan requests .if banks have different capital requirements , banks with lower capital requirements may have a higher leverage and provide more loans .effectively , different capital requirement produce inhomogeneous leverage levels in the banking system . imposing capital surcharges for g - sibsmeans that banks with a potentially large impact on others must have sizable capital buffers . with capital surcharges for g - sibs ,non - important banks can use higher leverage until they become systemically important themselves .this leads to a more homogenous spreading of sr , albeit that reduction and mitigation of sr is achieved `` indirectly '' .the srt leads to a self - organized reduction of sr in the following way : banks looking for credit will try to avoid this tax by looking for credit opportunities that do not increase sr and are thus tax - free . as a result ,the financial network rearranges itself toward a topology that , in combination with the financial conditions of individual institutions , will lead to a new topology where cascading failures can no longer occur .the reduction and mitigation of sr is achieved `` directly '' by re - shaping the network . from the experiments we conclude that basel iii capital surcharges for g - sibs reduce and mitigate sr at the cost of a loss of efficiency of the financial system ( [ fig : surcharges](a ) and [ fig : surcharges](c ) ) .this is because overall leverage is reduced by capital surcharges . on the other hand ,the srt keeps the efficiency comparable to the unregulated financial system .however it avoids cascades due to the emergent network structure .this means potentially much higher capital requirements for `` super - spreaders '' do address sr but at the cost of efficiency .sr through synchronization emerges in the abm in the following way : similarly to the root cause of cascading failure in the banking system , an initial shock from losses from commercial loans can lead to a synchronization of firm defaults .if banks are able to absorb initial losses , the losses still result in an implicit increase in leverage . 
in order to meet capital requirements ( basel iii ), the bank needs to reduce the volume of commercial loans .this creates a feedback effect of firms suffering from reduced liquidity , which in turn increases the probability of defaults and closes the cycle .this feedback effect is inherent to the system and can be re - inforced by basel iii capital surcharges for g - sibs . in the experiments we see that this feedback effect gets more pronounced with larger capital surcharges for g - sibs ( [ fig : surcharges](b ) ) .here we see an increase in cascading failure of firms with increasing capital surcharges for g - sibs .this means capital surcharges for g - sibs can have pro - cyclical side effects .basel iii measures systemic importance with an indicator - based measurement approach .the indicator approach is based on several individual indicators consisting of banks assets and liabilities . by studying the effect of different weight distributions for the individual indicators, we find that basel iii capital surcharges for g - sibs are more effective with higher weights on liabilities ( [ fig : weights](a ) ) . this means quite intuitively that asset - based indicators are more suitable for controlling credit risk and liability based indicators are more suitable for controlling systemic risk .since the indicators in the basel proposal are predominantly based on assets , the basel iii indicator - based measurement approach captures predominantly credit risk and therefore does not meet it s declared objective . to conclude with a summary at a very high level , the policy implications obtained with the abm are[ ( i ) ] re - shaping financial networks is more effective and efficient than reducing leverage , capital surcharges for g - sibs ( `` super - spreaders '' ) can reduce sr , but must be larger than specified in basel iii to have an measurable impact , and thus cause a loss of efficiency .basel iii capital surcharges for g - sibs can have pro - cyclical side effects .debtrank is a recursive method suggested in to determine the systemic relevance of nodes in financial networks .it is a number measuring the fraction of the total economic value in the network that is potentially affected by a node or a set of nodes . denotes the interbank liability network at any given moment ( loans of bank to bank ) , and is the capital of bank . if bank defaults and can not repay its loans , bank loses the loans . if does not have enough capital available to cover the loss , also defaults .the impact of bank on bank ( in case of a default of ) is therefore defined as \quad.\ ] ] the value of the impact of bank on its neighbors is .the impact is measured by the _ economic value _ of bank . for the economic value we use two different proxies .given the total outstanding interbank exposures of bank , , its economic value is defined as to take into account the impact of nodes at distance two and higher , this has to be computed recursively . if the network contains cycles , the impact can exceed one . to avoid this probleman alternative was suggested in , where two state variables , and , are assigned to each node . is a continuous variable between zero and one ; is a discrete state variable for three possible states , undistressed , distressed , and inactive , .the initial conditions are , and ( parameter quantifies the initial level of distress : ] .after we add a specific liability , we denote the liability network by where is the kronecker symbol . 
the marginal contribution of the specific liability on the expected systemic loss is where is the debtrank of the liability network and the total economic value with the added liability .clearly , a positive means that increases the total sr .finally , the marginal contribution of a single loan ( or a transaction leading to that loan ) can be calculated .we denote a loan of bank to bank by . the liability network changes to since and can have a number of loans at a given time , the index numbers a specific loan between and .the marginal contribution of a single loan ( transaction ) , is obtained by substituting by in [ marginal_effect ] . in this wayevery existing loan in the financial system , as well as every hypothetical one , can be evaluated with respect to its marginal contribution to overall sr .the central idea of the srt is to tax every transaction between any two counterparties that increases sr in the system .the size of the tax is proportional to the increase of the expected systemic loss that this transaction adds to the system as seen at time .the srt for a transaction between two banks and is given by \quad .\label{srteq_simple } \end{gathered}\ ] ] is a proportionality constant that specifies how much of the generated expected systemic loss is taxed . means that 100% of the expected systemic loss will be charged . means that only a fraction of the true sr increase is added on to the tax due from the institution responsible . for details , see . using debtrankwe can also define an _average vulnerability _ of a node in the liability network . recall that is the state variable , which describes the health of a node in terms of equity capital after time steps ( , i.e. all nodes are either undistressed or inactive ) .when calculating for containing only a single node , we can define as the health of bank in case of default of bank note that is necessary to simulate the default of every single node of the liability network separately to obtain .with we can define the average vulnerability of a node in the liability network as the average impact of other nodes on node where is the number of nodes in the liability network .we use the following three observables : ( 1 ) the size of the cascade , as the number of defaulting banks triggered by an initial bank default ( ) , ( 2 ) the total losses to banks following a default or cascade of defaults , , where is the set of defaulting banks , and ( 3 ) the average transaction volume in the interbank market in simulation runs longer than time steps , where represents new interbank loans at time step .monica billio , mila getmansky , andrew w lo , and loriana pelizzon .econometric measures of connectedness and systemic risk in the finance and insurance sectors ._ journal of financial economics _ , 1040 ( 3):0 535559 , 2012. sheri markose , simone giansante , and ali rais shaghaghi . financial network of us cds market : topological fragility and systemic risk . _ journal of economic behavior & organization _ , 830 ( 3):0 627646 , 2012 .sebastian poledna , stefan thurner , j. 
doyne farmer , and john geanakoplos .leverage - induced systemic risk under basle ii and other credit risk policies ._ journal of banking & finance _ , 42:0 199212 , 2014 .sebastian poledna , jos luis molina - borboa , serafn martnez - jaramillo , marco van der leij , and stefan thurner .the multi - layer network nature of systemic risk and its implications for the costs of financial crises ._ journal of financial stability _ , 20:0 7081 , 2015 .domenico delli gatti , mauro gallegati , bruce c greenwald , alberto russo , and joseph e stiglitz .business fluctuations and bankruptcy avalanches in an evolving network economy ._ journal of economic interaction and coordination _ , 40 ( 2):0 195212 , 2009 .stefano battiston , domenico delli gatti , mauro gallegati , bruce greenwald , and joseph e stiglitz .liaisons dangereuses : increasing connectivity , risk sharing , and systemic risk ._ journal of economic dynamics and control _ , 360 ( 8):0 11211141 , 2012 . d. delli gatti , e. gaffeo , m. gallegati , g. giulioni , and a. palestrini ._ emergent macroeconomics : an agent - based approach to business fluctuations_. new economic windows .springer , 2008 .isbn 9788847007253 .rama cont , amal moussa , and edson santos . network structure and systemic risk in banking systems . in jean - pierre fouque and joseph a langsam ,editors , _ handbook of systemic risk _ , pages 327368 .cambridge university press , 2013 .stefano battiston , michelangelo puliga , rahul kaushik , paolo tasca , and guido caldarelli .debtrank : too central to fail ? financial networks , the fed and systemic risk ._ scientific reports _ , 20 ( 541 ) , 2012 .
in addition to constraining bilateral exposures of financial institutions , there are essentially two options for future financial regulation of systemic risk ( sr ) : first , financial regulation could attempt to reduce the financial fragility of global or domestic systemically important banks ( g - sibs or d - sibs ) , as for instance proposed in basel iii . second , future financial regulation could attempt to strengthen the financial system as a whole . this can be achieved by re - shaping the topology of financial networks . we use an agent - based model ( abm ) of a financial system and the real economy to study and compare the consequences of these two options . by conducting three `` computer experiments '' with the abm we find that re - shaping financial networks is more effective and efficient than reducing leverage . capital surcharges for g - sibs can reduce sr , but must be larger than those specified in basel iii in order to have a measurable impact , which in turn causes a loss of efficiency . basel iii capital surcharges for g - sibs can have pro - cyclical side effects .
the success of sat technology in practical applications is largely driven by _ incremental solving_. sat solvers based on conflict - driven clause learning ( cdcl ) gather information about a formula in terms of learned clauses .when solving a sequence of closely related formulae , it is beneficial to keep clauses learned from one formula in the course of solving the next formulae in the sequence .the logic of quantified boolean formulae ( qbf ) extends propositional logic by universal and existential quantification of variables .qbf potentially allows for more succinct encodings of pspace - complete problems than sat .motivated by the success of incremental sat solving , we consider the problem of incrementally solving a sequence of syntactically related qbfs in prenex conjunctive normal form ( pcnf ) . building on search - based qbf solving with clause and cube learning ( qcdcl ) , we present an approach to incremental qbf solving , which we implemented in our solver .different from many incremental sat and qbf solvers , allows to add clauses to and delete clauses from the input pcnf in a stack - based way by _push _ and _ pop _ operations . a related stack - based framework was implemented in the sat solver . a solver api with _push _ and _ pop _ increases the usability from the perspective of a user .moreover , we present an optimization based on this stack - based framework which reduces the size of the learned clauses .incremental qbf solving was introduced for qbf - based bounded model checking ( bmc ) of partial designs .this approach , like ours , relies on selector variables and assumptions to support the deletion of clauses from the current input pcnf .the quantifier prefixes of the incrementally solved pcnfs resulting from the bmc encodings are modified only at the left or right end .in contrast to that , we consider incremental solving of _ arbitrary _ sequences of pcnfs . for the soundnessit is crucial to determine which of the learned clauses and cubes can be kept across different runs of an incremental qbf solver .we aim at a general presentation of incremental qbf solving and illustrate problems related to clause and cube learning .our approach is _ application - independent _ and applicable to qbf encodings of _ arbitrary _ problems .we report on experiments with constructed benchmarks .in addition to experiments with qbf - based conformant planning using , our results illustrate the potential benefits of incremental qbf solving in application domains like synthesis , formal verification , testing , planning , and model enumeration , for example .we introduce terminology related to qbf and search - based qbf solving necessary to present a general view on incremental solving . for a propositional variable , or a _ literal _ , where denotes the variable of .a _ clause _ ( _ cube _ ) is a disjunction ( conjunction ) of literals .a _ constraint _ is a clause or a cube .the empty constraint does not contain any literals . a clause ( cube ) is _ tautological _ ( _ contradictory _ ) if and .a propositional formula is in _ conjunctive ( disjunctive ) normal form _ if it consists of a conjunction ( disjunction ) of clauses ( cubes ) , called cnf ( dnf ) . for simplicity ,we regard cnfs and dnfs as sets of clauses and cubes , respectively . a quantified boolean formula ( qbf ) is in _ prenex cnf ( pcnf ) _ if it consists of a quantifier - free cnf and a _ quantifier prefix _ with where are _ quantifiers _ and are _ blocks _( i.e. 
sets ) of variables such that and for , and .the blocks in the quantifier prefix are _ linearly ordered _ such that if . the linear ordering is extended to variables and literals : if , and , and if for literals and .we consider only _ closed _ pcnfs , where every variable which occurs in the cnf is quantified in the prefix , and vice versa .a variable is _ universal _ , written as , if and _ existential _ , written as , if .a literal is universal if and existential if , written as and , respectively .an _ assignment _ is a mapping from variables to the truth values _ true _ and _ false_. an assignment is represented as a set of literals such that , for , if is assigned to false ( true ) then ( ) .pcnf under an assignment _ is denoted by ] by the usual simplifications of boolean algebra and superfluous quantifiers and blocks are deleted from the quantifier prefix of ] is the formula obtained from under the assignment defined by the literals in .the _ semantics _ of closed pcnfs is defined recursively .the qbf is satisfiable and the qbf is unsatisfiable .the qbf is satisfiable if ] are satisfiable .the qbf is satisfiable if ] are satisfiable .a pcnf is _ satisfied under an assignment _ if = \top ] .satisfied and falsified clauses are defined analogously .given a constraint , for denotes the set of universal and existential literals in .for a clause , _ universal reduction _ produces the clause ._ q - resolution _ of clauses is a combination of resolution for propositional logic and universal reduction . given two non - tautological clauses and and a pivot variable such that and and .let be the _ tentative q - resolvent _ of and .if is non - tautological then it is the _ q - resolvent _ of and and we write .otherwise , and do not have a q - resolvent .given a pcnf , a _q - resolution derivation _ of a clause from is the successive application of q - resolution and universal reduction to clauses in and previously derived clauses resulting in .we represent a derivation as a directed acyclic graph ( dag ) with edges ( 1 ) if and ( 2 ) and if .we write if there is a derivation of a clause from .otherwise , we write .q - resolution is a sound and refutationally - complete proof system for qbfs .a _ q - resolution proof _ of an unsatisfiable pcnf is a q - resolution derivation of the empty clause .we briefly describe search - based qbf solving with conflict - driven clause learning and solution - driven cube learning ( qcdcl ) and related properties . in the context of _ incremental _ qbf solving , clause and cube learning requires a special treatment , which we address in section [ sec_inc_satisfiability ] .given a pcnf , a qcdcl - based qbf solver successively assigns the variables to generate an assignment .if is falsified under , i.e. = \bot ] , then a new learned _ cube _ is constructed based on the following _ model generation rule _ , _ existential reduction _ and _ cube resolution_. [ def_model ] given a pcnf , an assignment such that = \top ] is defined similarly to pcnfs .analogously to clause derivations , we write if there is a derivation of a cube from the pcnf . during a run of a qcdcl - based solver the learned constraints can be derived from the current pcnf .[ prop_lcl_lcu_properties ] let be the acnf obtained by qcdcl from a pcnf .it holds that ( 1 ) and ( 2 ) .proposition [ prop_lcl_lcu_properties ] follows from the correctness of constraint learning in _ non - incremental _ qcdcl .that is , we assume that the pcnf is not modified over time. 
however , as we point out below , in _ incremental _ qcdcl the constraints learned previously might no longer be derivable after the pcnf has been modified .[ def_constraint_correct ] given the acnf of the pcnf , a clause ( cube ) is _ derivable _ with respect to if .otherwise , if , then is _ non - derivable_. due to the correctness of model generation , existential / universal reduction , and resolution , constraints which are derivable from the pcnf can be added to the acnf of , which results in a satisfiability - equivalent ( ) formula . [ prop_learning_satequiv ] let be the acnf of the pcnf .then ( 1 ) and ( 2 ) .we define _ incremental qbf solving _ as the problem of solving a sequence of using a qcdcl - based solver .thereby , the goal is to not discard all the learned constraints after the pcnf has been solved .instead , to the largest extent possible we want to re - use the constraints that were learned from in the process of solving the next pcnf . to this end , the acnf of for , which is maintained by the solver ,must be initialized with a set of learned clauses and a set of learned cubes such that , and proposition [ prop_learning_satequiv ] holds with respect to .the sets and contain the clauses and cubes that were learned from the previous pcnf and potentially can be used to derive further constraints from .if and at the beginning , then the solver solves the pcnf _incrementally_. for the first pcnf in the sequence , the solver starts with empty sets of learned constraints in the acnf .each pcnf for in the sequence has the form .the cnf part of results from of the previous pcnf in the sequence by addition and deletion of clauses .we write , where and are the sets of deleted and added clauses .the quantifier prefix of is obtained from of by deletion and addition of variables and quantifiers , depending on the clauses in and .that is , we assume that the pcnf is closed and that its prefix does not contain superfluous quantifiers and variables . when solving the pcnf using a qcdcl - based qbf solver , learned clauses and cubes accumulate in the corresponding acnf .assume that the learned constraints are derivable with respect to .the pcnf is modified to obtain the next pcnf to be solved .the learned constraints in and might become non - derivable with respect to in the sense of definition [ def_constraint_correct ] .consequently , proposition [ prop_learning_satequiv ] might no longer hold for the acnf of the new pcnf if previously learned constraints from and appear in and . in this case , the solver might produce a wrong result when solving .assume that the pcnf has been solved and learned constraints have been collected in the acnf .the clauses in are deleted from to obtain the cnf part of the next pcnf .if the derivation of a learned clause depends on deleted clauses in , then we might have that but . in this case , is non - derivable with respect to the next pcnf . hence must be discarded before solving starts so that in the initial acnf . 
otherwise ,if then the solver might construct a bogus q - resolution proof for the pcnf and , if is satisfiable , erroneously conclude that is unsatisfiable .[ ex_clause_delete_incorrect_clause ] consider the pcnf from example [ ex_running ] .the derivation of the clause shown in fig .[ fig_derivations ] depends on the clause .we have that .let be the pcnf obtained from by deleting .then because is the only clause which contains the literal .hence a possible derivation of the clause must use .however , no such derivation exists in .there is no clause containing a literal which can be resolved with to produce after a sequence of resolution steps .consider the pcnf with which is obtained from by _ only adding _ the clauses , but not deleting any clauses . assuming that for all in the acnf , also .hence all the learned clauses in are derivable with respect to the next pcnf and can be added to the acnf .like above , let be the acnf of the previously solved pcnf .dual to clause deletions , the addition of clauses to can make learned cubes in non - derivable with respect to the next pcnf to be solved .the clauses in are added to to obtain the cnf part of .an initial cube has been obtained from a model of the previous pcnf , i.e. = \top ] with respect to the next pcnf because of an added clause ( and hence also ) such that \not = \top ] , and hence .the cube is also derivable since .assume that the clause is added to resulting in the unsatisfiable pcnf .now is non - derivable with respect to since = \bot ] . for , we have = \top ] .the definition of assumptions can be applied recursively to the pcnf ] , since is the first block in the quantifier prefix of ] under a set of assumptions when later solving ] , otherwise if \not = \top ] .however , for the cube derived from it holds that .the assignment is a model of .let be the initial cube generated from .then is derivable with respect to . in practice , qcdcl - based solvers typically store only the learned cubes , which might be a small part of the derivation dag , and no edges .therefore , checking the cubes in a traversal of is not feasible . even if the full dag is available , the checking procedure is not optimal as pointed out in example [ ex_cube_check_overapprox ] .furthermore , it can not be used to check cubes which have become non - derivable after cleaning up by proposition [ prop_cleaned_up_cubes_sound ] .hence , it is desirable to have an approach to checking the derivability of _ individual _ learned cubes which is independent from the derivation dag . to this end, we need a condition which is sufficient to conclude that some _ arbitrary _ cube is derivable with respect to a pcnf , i.e. to check whether . however , we are not aware of such a condition . 
as an alternative to keeping the full derivation dag in memory ,a _ fresh _selector variable can be added to _ each _ newly learned initial cube .similar to selector variables in clauses , these variables are transferred to all derived cubes .potentially non - derivable cubes are then disabled by assigning the selector variables accordingly .however , different from clauses , it must be checked _explicitly _ which initial cubes are non - derivable by checking the condition in definition [ def_model ] for all initial cubes in the set of learned cubes .this amounts to an asymmetric treatment of selector variables in clauses and cubes .clauses are added to and removed from the cnf part by _ push _ and _ pop _ operations provided by the solver api .this way , it is known precisely which clauses are removed .in contrast to that , cubes are added to the set of learned cubes on the fly during cube learning . moreover , the optimization based on the temporal ordering of selector variables from the previous section is not applicable to generate shorter cubes since cubes are not associated to stack frames . due to the complications illustrated above , we implemented the following simple approach in to keep only initial cubes . every initial cube computed by the solveris stored in a linked list of bounded capacity , which is increased dynamically .the list is separate from the set of learned clauses .assume that a set of clauses is added to the cnf part of the current pcnf to obtain the cnf part of the next pcnf .all the cubes in the current set of learned cubes are discarded . for every added clause and for every initial cube , it is checked whether the assignment given by is a model of the next pcnf .initial cubes for which this check succeeds are added to the set of learned cubes in the acnf of the next pcnf after existential reduction has been applied to them .if the check fails , then is removed from .it suffices to check the initial cubes in only with respect to the clauses , and not the full cnf part , since the assignments given by the cubes in are models of the _ current _ pcnf . in the end ,the set contains only initial cubes all of which are derivable with respect to the acnf . if clauses are removed from the formula , then by proposition [ prop_cleaned_up_cubes_sound ] variables which do not occur anymore in the formula are removed from the initial cubes in . in the incremental qbf - based approachto bmc for partial designs , all cubes are kept across different solver calls under the restriction that the quantifier prefix is modified only at the left end .this restriction does not apply to incremental solving of pcnf where the formula can be modified arbitrarily .the api of provides functions to manipulate the prefix and the cnf part of the current pcnf .clauses are added and removed by the _ push _ and _ pop _ operations described in section [ sec_cnf_stack ] .new quantifier blocks can be added at any position in the quantifier prefix .new variables can be added to any quantifier block .variables which no longer occur in the formula and empty quantifier blocks can be explicitly deleted .the quantifier block containing the frame selector variables is invisible to the user .the solver maintains the learned constraints as described in sections [ sec_clause_del ] and [ sec_clause_add ] without any user interaction .the _ push _ and _ pop _ operations are a feature of . 
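as an illustration of the intended workflow , the following python sketch uses a hypothetical wrapper around the solver ; the class and method names ( IncrementalQBF , new_block , add_clause , push , pop , solve ) are stand - ins chosen for this example and are not the actual c api of the solver .

    # hypothetical binding, not shipped with the solver
    from qbf_wrapper import IncrementalQBF

    solver = IncrementalQBF()
    solver.new_block('forall', [1, 2])      # leftmost universal block
    solver.new_block('exists', [3, 4])      # existential block

    solver.add_clause([1, 3])               # clauses of the first PCNF
    solver.add_clause([-2, 4])
    print(solver.solve())                   # solve the first PCNF

    solver.push()                           # open a new frame on the clause stack
    solver.add_clause([-1, -3, -4])         # clauses added on top of the previous PCNF
    print(solver.solve())                   # learned constraints are kept where sound

    solver.pop()                            # remove the frame; its clauses are disabled
    print(solver.solve())                   # back to the first PCNF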
additionally , the api supports the manual insertion of selector variables into the clauses by the user .similar to incremental sat solving , clauses can then be enabled and disabled manually by assigning the selector variables as assumptions via the api . in this case , these variables are part of the qbf encoding and the optimization based on the frame ordering presented in section [ sec_clause_del ] is not applicable . after a pcnf has been found unsatisfiable ( satisfiable ) under assumptions where the leftmost quantifier block is existential ( universal ) , the set of relevant assumptions which were used by the solver to determine the result can be extracted .& _ discard lc _ & _ keep lc _ & _diff.(% ) _ + : & & & -10.88 + : & 3,833,077 & 2,819,492 & -26.44 + : & 139,036 & 116,792 & -16.00 + : & 8,243 & 6,360 & -22.84 + : & 99.03 & 90.90 & -8.19 + : & 28.56 & 15.74 & -44.88 + & _ discard lc _ & _ keep lc _ & _diff.(% ) _ + : & & & -14.40 + : & & & -3.62 + : & 117,019 & 91,737 & -21.61 + : & 10,322 & 8,959 & -13.19 + : & 100.15 & 95.36 & -4.64 + : & 4.18 & 2.83 & -32.29 + & _ discard lc _ & _ keep lc _ & _diff.(% ) _ + : & & & -86.62 + : & 186,237 & 15,031 & -91.92 + : & 36,826 & 1,228 & -96.67 + : & 424 & 0 & -100.00 + : & 21.94 & 4.32 & -79.43 + : & 0.75 & 0.43 & -42.66 + & _ discard lc _ &_ keep lc _ & _diff.(% ) _ + : & & & -77.94 + : & 103,330 & 8,199 &-92.06 + : & 31,489 & 3,350 & -89.37 + : & 827 & 5 & -99.39 + : & 30.29 & 9.78 & -67.40 + : & 0.50 & 0.12 & -76.00 + to demonstrate the basic feasibility of general incremental qbf solving , we evaluated our incremental qbf solver based on the instances from _qbfeval12 second round ( sr ) _ with and without preprocessing by .we disabled the sophisticated dependency analysis in terms of dependency schemes in and instead applied the linear ordering of the quantifier prefix in the given pcnfs . for experiments, we constructed a sequence of related pcnfs for _ each _ pcnf in the benchmark sets as follows .given a pcnf , we divided the number of clauses in by 10 to obtain the size of a slice of clauses .the first pcnf in the sequence contains the clauses of one slice .the clauses of that slice are removed from .the next pcnf is obtained from by adding another slice of clauses , which is removed from .the other pcnfs in the sequence are constructed similarly so that finally the last pcnf contains all the clauses from the original pcnf . in our tests , we constructed each pcnf from the previous one in the sequence by adding a slice of clauses to a new frame after a _ push _ operation .we ran on the sequences of pcnfs constructed this way with a wall clock time limit of 1800 seconds and a memory limit of 7 gb .tables [ tab_experiments_eval2012 ] and [ tab_experiments_eval2012_pop ] show experimental results on sequences of pcnfs and on the reversed ones , respectively .to generate , we first solved the sequence and then started to discard clauses by popping the frames from the clause stack of depqbf via its api . in one run ( _discard lc _ ) , we always discarded all the constraints that were learned from the previous pcnf so that the solver solves the next pcnf ( with respect to table [ tab_experiments_eval2012_pop ] ) starting with empty sets of learned clauses and cubes . in another run ( _keep lc _ ) , we kept learned constraints as described in sections [ sec_clause_del ] and [ sec_clause_add ] . 
this way ,70 out of 345 total pcnf sequences were fully solved from the set _qbfeval12-sr _ by both runs , and 112 out of 276 total sequences were fully solved from the set _ qbfeval12-sr - bloqqer_. the numbers of assignments , backtracks , and wall clock time indicate that keeping the learned constraints is beneficial in incremental qbf solving despite the additional effort of checking the collected initial cubes . in the experiment reported in table [ tab_experiments_eval2012 ] clausesare always added but never deleted to obtain the next pcnf in the sequence .thereby , across all incremental calls of the solver in the set _qbfeval12-sr _ on average 224 out of 364 ( 61% ) collected initial cubes were identified as derivable and added as learned cubes . forthe set _ qbfeval12-sr - bloqqer _ , 232 out of 1325 ( 17% ) were added .related to table [ tab_experiments_eval2012_pop ] , clauses are always removed but never added to obtain the next pcnf to be solved , which allows to keep learned cubes based on proposition [ prop_cleaned_up_cubes_sound ] . across all incremental calls of the solver in the set _qbfeval12-sr _ on average 820 out of 1485 ( 55% ) learned clauses were disabled and hence effectively discarded because their q - resolution derivation depended on removed clauses .for the set _ qbfeval12-sr - bloqqer _ , 704 out of 1399 ( 50% ) were disabled .we presented a general approach to incremental qbf solving which integrates ideas from incremental sat solving and which can be implemented in any qcdcl - based qbf solver .the api of our incremental qbf solver provides _ push _ and _ pop _ operations to add and remove clauses in a pcnf .this increases the usability of our implementation .our approach is application - independent and applicable to arbitrary qbf encodings .we illustrated the problem of keeping the learned constraints across different calls of the solver . to improve cube learning in incremental qbf solving ,it might be beneficial to maintain ( parts of ) the cube derivation in memory .this would allow to check the cubes more precisely than with the simple approach we implemented .moreover , the generation of proofs and certificates is supported if the derivations are kept in memory rather than in a trace file .dual reasoning and the combination of preprocessing and certificate extraction are crucial for the performance and applicability of cnf - based qbf solving .the combination of incremental solving with these techniques has the potential to further advance the state of qbf solving .our experimental analysis demonstrates the feasibility of incremental qbf solving in a general setting and motivates further applications , along with the study of bmc of partial designs using incremental qbf solving .related experiments with conformant planning based on incremental solving by showed promising results .further experiments with problems which are inherently incremental can provide more insights and open new research directions .audemard , g. , lagniez , j.m ., simon , l. : improving glucose for incremental sat solving with assumptions : application to mus extraction . in : jrvisalo , m. , van gelder , a. ( eds . ) sat .lncs , vol .7962 , pp .309317 . springer ( 2013 ) cashmore , m. , fox , m. , giunchiglia , e. : planning as quantified boolean formula . in : raedt , l.d . ,bessire , c. , dubois , d. , doherty , p. , frasconi , p. , heintz , f. , lucas , p.j.f .frontiers in artificial intelligence and applications , vol .242 , pp . 
217222 .ios press ( 2012 ) hillebrecht , s. , kochte , m.a . ,erb , d. , wunderlich , h.j . , becker , b. : accurate qbf - based test pattern generation in presence of unknown values . in : macii ,e. ( ed . ) date .. 436441 . eda consortium san jose , ca , usa / acm dl ( 2013 ) lonsing , f. , egly , u. : incremental qbf solving by depqbf ( extended abstract ) . in : the 4th international congress on mathematical software , icms 2014 , seoul ,korea , august 2014 . proceedings .lncs , vol .springer ( 2014 ) , to appear .lonsing , f. , egly , u. , van gelder , a. : efficient clause learning for quantified boolean formulas via qbf pseudo unit propagation . in : jrvisalo ,m. , van gelder , a. ( eds . ) sat .lncs , vol . 7962 , pp .100115 . springer ( 2013 ) marin , p. , miller , c. , becker , b. : incremental qbf preprocessing for partial design verification - ( poster presentation ) . in : cimatti , a. , sebastiani , r. ( eds . ) sat .lncs , vol . 7317 , pp .473474 . springer ( 2012 ) niemetz , a. , preiner , m. , lonsing , f. , seidl , m. , biere , a. : resolution - based certificate extraction for qbf - ( tool presentation ) . in : cimatti ,a. , sebastiani , r. ( eds . ) sat .lncs , vol . 7317 , pp .430435 . springer ( 2012 ) silva , j.p.m ., lynce , i. , malik , s. : conflict - driven clause learning sat solvers . in : biere ,a. , heule , m. , van maaren , h. , walsh , t. ( eds . ) handbook of satisfiability , faia , vol .ios press ( 2009 ) zhang , l. , malik , s. : towards a symmetric treatment of satisfaction and conflicts in quantified boolean formula evaluation . in : hentenryck , p.v .( ed . ) cp .lncs , vol . 2470 , pp .springer ( 2002 )
we consider the problem of incrementally solving a sequence of quantified boolean formulae ( qbf ) . incremental solving aims at using information learned from one formula in the process of solving the next formulae in the sequence . based on a general overview of the problem and related challenges , we present an approach to incremental qbf solving which is application - independent and hence applicable to qbf encodings of arbitrary problems . we implemented this approach in our incremental search - based qbf solver and report on implementation details . experimental results illustrate the potential benefits of incremental solving in qbf - based workflows .
analysis of network data is important in a range of disciplines and applications , appearing in such diverse areas as sociology , epidemiology , computer science , and national security , to name a few .network data here refers to observed edges between nodes , possibly accompanied by additional information on the nodes and/or the edges , for example , edge weights .one of the fundamental questions in analysis of such data is detecting and modeling community structure within the network .a lot of algorithmic approaches to community detection have been proposed , particularly in the physics literature ; see for reviews .these include various greedy methods such as hierarchical clustering ( see for a review ) and algorithms based on optimizing a global criterion over all possible partitions , such as normalized cuts and modularity .the statistics literature has been more focused on model - based methods , which postulate and fit a probabilistic model for a network with communities .these include the popular stochastic block model , its extensions to include varying degree distributions within communities and overlapping communities , and various latent variable models .the stochastic block model is perhaps the most commonly used and best studied model for community detection . for a network with nodes defined by its adjacency matrix , this model postulates that the true node labels are drawn independently from the multinomial distribution with parameter , where for all , and is the number of communities , assumed known .conditional on the labels , the edge variables for are independent bernoulli variables with = p_{c_i c_j},\ ] ] where ] , where s are node degree parameters which satisfy an identifiability constraint .if the degree parameters only take on a discrete number of values , one can think of the degree - corrected block model as a regular block model with a larger number of blocks , but that loses the original interpretation of communities . in the bernoulli distribution for replaced by the poisson , primarily for ease of technical derivations , and in fact this is a good approximation for a range of networks .fitting block models is nontrivial , especially for large networks , since in principle the problem of optimizing over all possible label assignments is np - hard . in the bayesian framework , markov chain monte carlo methods have been developed , but they only work for networks with a few hundred nodes .variational methods have also been developed and studied ( see , e.g. , ) , and are generally substantially faster than the gibbs sampling involved in mcmc , but still do not scale to the order of a million nodes .another bayesian approach based on a belief propagation algorithm was proposed recently by decelle et al . 
, and is comparable to ours in theoretical complexity , but slower in practice ; see more on this in section [ secsimulations].=1 in the non - bayesian framework , a profile likelihood approach was proposed in : since for a given label assignment parameters can be estimated trivially by plug - in , they can be profiled out and the resulting criterion can be maximized over all label assignments by greedy search .the same method is used in to fit the degree - corrected block model .the speed of the profile likelihood algorithms depends on exactly what search method is used and the number of iterations it is run for , but again these generally work well for thousands but not millions of nodes .a method of moments approach was proposed in , for a large class of network models that includes the block model as a special case .the generality of this method is an advantage , but it involves counting all occurrences of specific patterns in the graph , which is computationally challenging beyond simple special cases. some faster approximations for block model fitting based on spectral representations are also available , but the properties of these approximations are only partially known .profile likelihood methods have been proven to give consistent estimates of the labels when the degree of the graph grows with the number of nodes , under both the stochastic block models and the degree - corrected version . to obtain `` strong consistency '' of the labels , that is , the probability of the estimated label vector being equal to the truth converging to 1 , the average graph degree has to grow faster than , where is the number of nodes . to obtain `` weak consistency, '' that is , the fraction of misclassified nodes converging to 0 , one only needs .asymptotic behavior of variational methods is studied in and , and in this belief propagation method is analyzed for both the sparse [ and the dense ( ) regimes , by nonrigorous cavity methods from physics , and a phase transition threshold , below which the labels can not be recovered , is established . in fact , it is easy to see that consistency is impossible to achieve unless , since otherwise the expected fraction of isolated nodes does not go to 0 . the results one can get for the sparse case , such as , can only claim that the estimated labels are correlated with the truth better than random guessing , but not that they are consistent . in this paper , for the purposes of theory we focus on consistency and thus necessarily assume that the degree grows with .however , in practice we find that our methods are very well suited for sparse networks and work well on graphs with quite small degrees .our main contribution here is a new fast pseudo - likelihood algorithm for fitting the block model , as well as its variation conditional on node degrees that allows for fitting networks with highly variable node degrees within communities .the idea of pseudo - likelihood dates back to , and in general amounts to ignoring some of the dependency structure of the data in order to simplify the likelihood and make it more tractable. 
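Before turning to the algorithm, it may help to fix ideas with a short sketch of how a network is generated under the stochastic block model described above: labels are drawn from a multinomial distribution and, conditional on the labels, edges are independent Bernoulli variables with block-dependent probabilities. This is an illustrative sketch of the model, not the authors' code, and the parameter values in the last line are only examples.

import numpy as np

def sample_sbm(n, pi, P, seed=0):
    # labels ~ Multinomial(pi); edge (i, j) ~ Bernoulli(P[c_i, c_j]);
    # the adjacency matrix is symmetric with no self-loops.
    rng = np.random.default_rng(seed)
    labels = rng.choice(len(pi), size=n, p=pi)
    probs = np.asarray(P)[np.ix_(labels, labels)]
    upper = np.triu(rng.random((n, n)) < probs, k=1).astype(int)
    A = upper + upper.T
    return A, labels

A, c = sample_sbm(200, pi=[0.5, 0.5], P=[[0.10, 0.02], [0.02, 0.10]])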
the main feature of the adjacency matrix we ignore here is its symmetry ; we also apply block compression , that is , divide the nodes into blocks and only look at the likelihood of the row sums within blocks .this leads to an accurate and fast approximation to the block model likelihood , which allows us to easily fit block models to networks with tens of millions of nodes .another major contribution of the paper is the consistency proof of one step of the algorithm .the proof requires new and somewhat delicate arguments not previously used in consistency proofs for networks ; in particular , we use the device of assuming an initial value that has a certain overlap with the truth , and then show the amount of overlap can be arbitrarily close to purely random . finally , we propose spectral clustering with perturbations , a new clustering method of independent interest which we use to initialize pseudo - likelihood in practice . for sparse networks ,regular spectral clustering often performs very poorly , likely due to the presence of many disconnected components .we perturb the network by adding additional weak edges to connect these components , resulting in regularized spectral clustering which performs well under a wide range of settings .the rest of the paper is organized as follows .we present the algorithms in section [ secalgorithms ] , and prove asymptotic consistency of pseudo - likelihood in section [ secconsist ] .the numerical performance of the methods is demonstrated on a range of simulated networks in section [ secsimulations ] and on a network of political blogs in section [ secexample ] .section [ secdiscuss ] concludes with discussion , and the contains some additional technical results .the joint likelihood of and could in principle be maximized via the expectation maximization ( em ) algorithm , but the e - step involves optimizing over all possible label assignments , which is np - hard .instead , we introduce an initial labeling vector , , which partitions the nodes into groups .note that for convenience we partition into the same number of groups as we assume to exist in the true model , but in principle the same idea can be applied with a different number of groups ; in fact dividing the nodes into groups with a single node in each group instead gives an algorithm equivalent to that of .the main quantity we work with are the block sums along the columns , for , .let .further , let be the matrix with entries given by let be the row of , and let be the column of .let and .our approach is based on the following key observations : for each node , conditional on labels with : are mutually independent ; , a sum of independent bernoulli variables , is approximately poisson with mean . with true labels unknown, each can be viewed as a mixture of poisson vectors , identifiable as long as has no identical rows . by ignoring the dependence among , using the poisson assumption , treating as latent variables , and setting , we can write the pseudo log - likelihood as follows ( up to a constant ) : a pseudo - likelihood estimate of can then be obtained by maximizing .this can be done via the standard em algorithm for mixture models , which alternates updating parameter values with updating probabilities of node labels .once the em converges , we update the initial block partition vector to the most likely label for each node as indicated by em , and repeat this process for a fixed number of iterations . 
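As a concrete illustration of the block compression step, the following sketch computes the column block sums for a given initial labeling; the function name and the dense-matrix representation are illustrative choices.

import numpy as np

def block_sums(A, labels, K):
    # b_{ik}: number of edges from node i to the nodes currently labeled k,
    # i.e. the row sums of the adjacency matrix within the K column blocks
    # induced by the initial labeling.
    labels = np.asarray(labels)
    n = A.shape[0]
    B = np.zeros((n, K))
    for k in range(K):
        B[:, k] = A[:, labels == k].sum(axis=1)
    return B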
for any labeling ,let , if , and .we suppress the dependence on whenever there is no ambiguity .the details of the algorithmic steps can be summarized as follows . _the pseudo - likelihood algorithm ._ initialize labels , and let , , , , and . then repeat times : compute the block sums according to ( [ eqbikdef ] ) . using current parameter estimates and , estimate probabilities for node labels by given label probabilities , update parameter values as follows : return to step 2 unless the parameter estimates have converged .update labels by and return to step 1 .update as follows : .in practice , in step 6 we only include the terms corresponding to greater than some small threshold .the em method fits a valid mixture model as long as the identifiability condition holds , and is thus guaranteed to converge to a stationary point of the objective function .another option is to update labels after every parameter update ( i.e. , skip step 4 ) .we have found empirically that the algorithm above is more stable , and converges faster . in general, we only need a few label updates until convergence , and even using ( one - step label update ) gives reasonable results with a good initial value. the choice of the initial value of , on the other hand , can be important ; see more on this in section [ secinitvalue ] . for networks with hub nodes or those with substantial degree variability within communities ,the block model can provide a poor fit , essentially dividing the nodes into low - degree and high - degree groups .this has been both observed empirically and supported by theory .the extension of the block model designed to cope with this situation , the degree - corrected block model , has an extra degree parameter to be estimated for every node , and writing out a pseudo - likelihood that lends itself to an em - type optimization is more complicated .however , there is a simple alternative : consider the pseudo - likelihood conditional on the observed node degrees . whether these degrees are similar or not will not then matter , and the fitted parameters will reflect the underlying block structure rather than the similarities in degrees . the conditional pseudo - likelihood is again based on a simple observation : if random variables are independent poisson with means , their distribution conditional on is multinomial . applying this observation to the variables , we have that their distribution , conditional on labels with and the node degree , is multinomial with parameters , where . the conditional log pseudo - likelihood ( up to a constant )is then given by and the parameters can be obtained by maximizing this function via the em algorithm for mixture models , as before .we again repeat the em for a fixed number of iterations , updating the initial partition vector after the em has converged .the algorithm is then the same as that for unconditional pseudo - likelihood , with steps 2 and 3 replaced by : based on current estimates and , let given label probabilities , update parameter values as follows : we now turn to the question of how to initialize the partition vector .note that the full likelihood , pseudo - likelihoods and , and other standard objective functions used for community detection such as modularity can all be multi - modal . 
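The E- and M-steps above can be written compactly when the Poisson mixture is parameterized directly by the K x K matrix of block-sum means (called Lam below). The following sketch is an illustrative re-implementation of those updates under that parameterization, not the authors' code; it assumes block_sums from the earlier sketch is used to recompute B after each label update.

import numpy as np
from scipy.special import logsumexp

def pl_em(B, pi, Lam, n_iter=50):
    # B: n x K block sums for the current labeling; pi: class proportions;
    # Lam[l, k]: Poisson mean of b_{ik} for a node i in class l.
    pi = np.asarray(pi, dtype=float)
    Lam = np.asarray(Lam, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior label probabilities tau[i, l] (up to constants)
        logp = np.log(pi)[None, :] + B @ np.log(Lam).T - Lam.sum(axis=1)[None, :]
        tau = np.exp(logp - logsumexp(logp, axis=1, keepdims=True))
        # M-step: update class proportions and Poisson means
        pi = tau.mean(axis=0)
        Lam = (tau.T @ B) / tau.sum(axis=0)[:, None]
    return pi, Lam, tau

# label update after convergence: e = tau.argmax(axis=1), then recompute B.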
the numerical results in section [ secsimulations ] suggest that the initial value can not be entirely arbitrary , but the results are not too sensitive to it .we will quantify this further in section [ secsimulations ] ; here we describe the two options we use as initial values , both of which are of independent interest as clustering algorithms for networks .one of the simplest possible ways to group nodes in a network is to separate them by degree , say by one - dimensional -means clustering applied to the degrees as in .this only works for certain types of block models , identifiable from their degree distributions , and in general -means does not deal well with data with many ties , which is the case with degrees .instead , we consider two - dimensional -means clustering on the pairs , where is the number of paths of length 2 from node , which can be obtained by summing the rows of . a more sophisticated clustering scheme is based on spectral properties of the adjacency matrix or its graph laplacian .let be diagonal matrix collecting node degrees .a common approach is to look at the eigenvectors of the normalized graph laplacian , choosing a small number , say , corresponding to largest ( in absolute value ) eigenvalues , with the largest eigenvalue omitted ; see , for example , .these vectors provide an -dimensional representation for nodes of the graph , on which we can apply -means to find clusters ; this is one of the versions of spectral clustering , which was analyzed in the context of the block model in .we found that this version of spectral clustering tends to do poorly at community detection when applied to sparse graphs , say , with expected degree .the -dimensional representation seems to collapse to a few points , likely due to the presence of many disconnected components .we have found , however , that a simple modification performs surprisingly well , even for values of close to 1 .the idea is to connect all disconnected components which belong to the same community by adding artificial `` weak '' links . to be precise ,we `` regularize '' the adjacency matrix by adding multiplied by the adjacency matrix of an erdos renyi graph on nodes with edge probability , where is a constant .we found that , empirically , works well for the range of considered in our simulations , and that the results are essentially the same for all thus we make the simplest and computationally cheapest choice of , adding a constant matrix of small values , namely , where is the all - ones -vector , to the original adjacency matrix .the rest of the steps , that is , forming the laplacian , obtaining the spectral representation and applying -means , are performed on this regularized version of .we note that to obtain the spectral representation , one only needs to know how the matrix acts on a given vector ; since , the addition of the constant perturbation does not increase computational complexity .we will refer to this algorithm as spectral clustering with perturbations ( scp ) , since we perturb the network by adding new , low - weight `` edges . ''by consistency we mean consistency of node labels ( to be defined precisely below ) under a block model as the size of the graph grows . for the theoretical analysis , we only consider the case of communities .we condition on the community labels , that is , we treat them as deterministic unknown parameters . 
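Returning to the clustering scheme just described, the sketch below is a minimal version of spectral clustering with perturbations: a small constant of order (average degree)/n is added to every entry of the adjacency matrix, the nodes are embedded with leading eigenvectors of the normalized matrix, and k-means is applied to the embedding. The constant alpha, the dense eigendecomposition, and the choice of simply taking the K eigenvectors of largest eigenvalue are illustrative simplifications of the procedure described above.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_perturbed(A, K, alpha=0.25):
    # regularize the adjacency matrix by a constant perturbation, then embed
    # and cluster; for large sparse networks an iterative eigensolver
    # (e.g. scipy.sparse.linalg.eigsh) would be used instead of eigh.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    dbar = A.sum() / n                       # average degree
    Ar = A + alpha * dbar / n                # constant "weak edge" perturbation
    d = Ar.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = Dinv @ Ar @ Dinv
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    X = vecs[:, -K:]                         # K leading eigenvectors
    return KMeans(n_clusters=K, n_init=10).fit_predict(X)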
for simplicity , here we consider the case of balanced communities , each having nodes .an extension to the unbalanced case is provided in the supplementary material .the assumption of balanced communities naturally leads us to use the class prior estimates in ( [ eqcpl1stepiter ] ) .we call this assumption ( e ) ( for equal class sizes ) : assume each class contains nodes , and set . without loss of generality, we can take for . as an intermediate step in proving consistency for the block model introduced in section [ secintro ] ,we first prove the result for a _ directed _ block model .recall that for the ( undirected ) block model introduced earlier , one has in the directed case , we assume that all the entries in the adjacency matrix are drawn independently , that is , we will use different symbols for the adjacency and edge - probability matrices in the two cases .this is to avoid confusion when we need to introduce a coupling between the two models . in both cases ,we have assumed that diagonal entries of the adjacency matrices are also drawn randomly ( i.e. , we allow for self - loops as valid within - community edges ) .this is convenient in the analysis with minor effect on the results .the directed model is a natural extension of the block model when one considers the pseudo - likelihood approach ; in particular , it is the model for which the pseudo - likelihood assumption of independence holds .it is also a useful model of independent interest in many practical situations , in which there is a natural direction to the link between nodes , for example , in email , web , routing and some social networks .the model can be traced back to the work of holland and leinhardt and wang and wong in which it has been implicitly studied in the context of more general exponential families of distributions for directed random graphs .our approach is to prove a consistency result for the directed model , with an edge - probability matrix of the form note that the only additional restriction we are imposing is that has the same diagonal entries .both and depend on and can in principle change with at different rates .this is a slightly different parametrization from the more conventional , where ( and ) do not depend on , and .we use this particular parametrization here because we only consider the case , and it makes our results more directly comparable to those obtained in the physics literature , for example , .a coupling between the directed and the undirected model that we will introduce allows us to carry the consistency result over to the undirected model , with the edge - probability matrix asymptotically , the two edge - probability matrices have comparable ( to first order ) expected degree and out - in - ratio ( as defined by ) , under mild assumptions .the average degrees for and are and , respectively .the latter is as long as .the condition is satisfied as soon as the average degree of the directed model has sublinear growth : .the same holds for out - in - ratios . 
for our analysis, we consider an e - step of the cpl algorithm .it starts from some initial estimates , and of parameters , and , together with an initial labeling , and outputs the label estimates ,\ ] ] where are the elements of the matrix obtained by row normalization of ^t ] be the binary entropy function .let us also consider the collection of estimates which have the same ordering as true parameters , then , we have the following result .[ thmcpldir ] assume , and let .let the adjacency matrix be generated according to the directed model ( [ eqdirmod ] ) with edge - probability matrix ( [ eqedgeprobdir ] ) , and assume .then , there exists a sequence such that and \le\exp \bigl ( { -n\bigl[h ( \gamma)-\kappa_{\gamma}(n)\bigr ] } \bigr),\ ] ] where = o(1) ] , we obtain , and . since by assumption and , it follows that .then , ( [ eqcpl1simp ] ) is equivalent to .recalling that , we can write the condition as and . let be the set of all with , that is , for , let be the fraction of mismatches over community .note that the overall mismatch is .\ ] ] since we are focusing on , we are concerned with . in a slight abuse of notation , in ( [ eqoverallmis ] ) is in fact an upper bound on the mismatch ratio as defined in ( [ eqmisddef ] ) , since here we are using a particular permutation the identity .let us define , for and , then we have where the inequality is due to treating the ambiguous case as error .we now set out to bound this in probability .let us start with a tail bound on for fixed and .[ lemxitailbound ] for any and ] .note that | \le\max(\alpha _ { ij},1-\alpha_{ij } ) \le1 ] where is the binary entropy function , and is as defined in the statement of the theorem .( see lemma 6 in the supplementary material for a proof . )applying lemma [ lemnisdtailbound ] with and the union bound , we obtain \\ & & \qquad \le \exp \bigl\ { { m \bigl [ 2h(\gamma ) -e { \bar{p}}_1(0 ) u_n\log u_n+ 2\kappa _ { \gamma}(n ) \bigr ] } \bigr\}.\end{aligned}\ ] ] pick such that it follows , using , that \le\exp\bigl\ { -\bigl[h(\gamma ) - \kappa_{\gamma}(n)\bigr ] n\bigr\}.\ ] ] by symmetry the same bound holds for .it follows from ( [ eqoverallmis ] ) that the same holds for .this completes the proof of theorem [ thmcpldir ] .recall that and are the adjacency matrices of the undirected and directed cases , respectively .let us define , , as we did in the directed case , but based on instead of . for example , .our approach is to introduce a _ deterministic coupling _ between and , which allows us to carry over the results of the directed case .let {ij } = \cases { 0 , & \quad , \cr 1 , & \quad otherwise.}\ ] ] in other words , the graph of is obtained from that of by removing directions .note that which matches the relation between ( [ eqedgeprobdir ] ) and ( [ eqedgeprobundir ] ) . from ( [ eqcoupleaat ] ), we also note that let us now upper - bound in terms of . based on ( [ eqcoupleineq ] ) , only those that are equal to to the upper bound .more precisely , let , and take from now on .then we further notice that . to simplify notation ,let us define where the dependence on is due to being derived from [ recall that .thus we have shown recall from definition ( [ eqdefagam ] ) that [ lemaistailbound ] fix . 
for , we have = { \mathbb{p}}\bigl [ { { \widetilde{a}}_{*i}}(\sigma ) > ( 1 + { \varepsilon } ) { a_\gamma}\bigr ] \le\exp \biggl\ { { -\frac{{\varepsilon}^2}{1+{\varepsilon}/3 } } { a_\gamma}\biggr\}.\ ] ] the equality of the two probabilities follows by symmetry .let us prove the bound for .we apply bernstein inequality .note that = \sum_{j \in{\mathcal{s}}_{11 } } { \mathbb{e}}[{\widetilde{a}}_{ij}]+ \sum _ { j \in{\mathcal{s}}_{12 } } { \mathbb{e}}[{\widetilde{a}}_{ij } ] \\ & = & \sum_{j \in{\mathcal{s}}_{11 } } \frac{a}{m } + \sum _ { j \in{\mathcal{s}}_{12 } } \frac{b}{m } = a\gamma+ b(1-\gamma ) = { a_\gamma}.\end{aligned}\ ] ] since , we obtain \le\exp \biggl ( { -\frac{t^2}{2(\mu+ t/3 ) } } \biggr).\ ] ] setting completes the proof . from ( [ eqxiaxitineq ] ), it follows that which is the logical or .this can be seen ( as usual ) by noting that if the rhs does not hold , then , implying . translating to indicator functions , averaging over ( i.e. , applying ) , we get where , and similarly for . note that and , while not independent , have the same distribution by symmetry , so we can focus on bounding one of them .the key is that each one is a sum of i.i.d .terms , for example , .we have a bound on from lemma [ lemnisdtailbound ] .we can get similar bounds on the -terms . to start , let ,\qquad { \bar{q}}_1(r ) = \frac1m\sum_{i=1}^mq_i(r),\ ] ] similar to ( [ eqpirdef ] ) , and note that these quantities too are independent of the particular choice of .[ lemqtailbound ] for , \le\exp \bigl ( { - e m{\bar{q}}_1(r ) u \log u } \bigr).\ ] ] follows from lemma [ lempoitail ] in the , by noting that is an independent sequence of bernoulli variables .the same bound holds for .recall the definition of from ( [ eqpirdef ] ) . using ( [ eqnqineq ] ) and lemmas [ lemnisdtailbound ] and [ lemqtailbound ], we get \biggr ] \\ & & \qquad \le{\mathbb{p}}\biggl [ \sup_{\sigma\in\sigma^\gamma } \frac1 m { \widetilde{n}}_{n,1}(\sigma;r ) \ge e u_n{\bar{p}}_1(r ) \biggr]\\ & & \qquad\quad{}+ 2 { \mathbb{p}}\biggl [ \sup_{\sigma\in\sigma^\gamma } \frac1m{\widetilde{q}}_{n,1 * } ( \sigma;r/2 ) \ge e v_n{\bar{q}}_1(r ) \biggr ] \\ & & \qquad \le \exp \bigl\ { { m \bigl [ 2h(\gamma ) -e { \bar{p}}_1(r ) u_n\log u_n+ 2\kappa _ { \gamma}(n ) \bigr ] } \bigr\}\\ & & \qquad\quad{}+ 2\exp \bigl\ { { m \bigl [ 2h(\gamma ) -e { \bar{q}}_1(r ) v_n\log v_n+ 2\kappa _ { \gamma}(n ) \bigr ] } \bigr\}\end{aligned}\ ] ] as long as . now ,take , so that lemma [ lemaistailbound ] implies now , in lemma [ lemxitailbound ] , take .note that the assumption implies .in addition as before .thus , the chosen is valid for lemma [ lemxitailbound ]. furthermore , .hence , the lemma implies ^ 2\frac { ( a - b)^2}{a+b } } \biggr\}.\ ] ] pick and such that the rest of the argument follows as in the directed case .this completes the proof of theorem [ thmcplundir ] .the proposed pseudo - likelihood algorithms provide fast and accurate community detection for a range of settings , including large and sparse networks , contributing to the long history of empirical success of pseudo - likelihood approximations in statistics . 
for the theoretical analysis ,we did not focus on the convergence properties of the algorithms , since standard em theory guarantees convergence to a local maximum as long as the underlying poisson or multinomial mixture is identifiable .the consistency of a single iteration of the algorithm was established for an initial value that is better than purely arbitrary , as long as , roughly speaking , the graph degree grows , and there are two balanced communities with equal expected degrees .the theory shows that this local maximum is consistent , and unique in a neighborhood of the truth , so in fact there is no need to assume that em has converged to the global maximum , an assumption which is usually made in analyzing em - based estimates . the theoretical analysis can be extended to the general two - community model with possibly unbalanced communities , as detailed in the supplementary material .extending our argument to more than two communities also seems possible , but that would require extremely meticulous tracking of a large number of terms which we did not pursue .we conjecture that additional results may be obtained under weaker assumptions if one focuses simply on estimating the parameters of the block model rather than consistency of the labels , just like one can obtain results for a labeling correlated with the truth ( instead of consistent ) under weaker assumptions discussed in remark [ weakerassum ] .for example , in a very recent paper , results are obtained under very weak assumptions for the mean squared error of estimating the block model parameter matrix ( which in itself does not guarantee consistency of the labels ) .while the primary interest in community detection is estimating the labels rather than the parameters , we plan to investigate this further to see if and how our conditions can be relaxed .while in theory any `` reasonable '' initial value guarantees convergence , in practice the choice of initial value is still important , and we have investigated a number of options empirically .spectral clustering with perturbations , which we introduced primarily as a method to initialize pseudo - likelihood , deserves more study , both empirically ( e.g. , investigating the optimal choice of the tuning parameter ) , and theoretically .this is also a topic for future work .here is a lemma which we used quite often in proving consistency results in section [ secproofs ] .[ lempoitail ] consider to be independent bernoulli variables with = p_i ] and .then , for any , we have we apply a direct chernoff bound .let . then , by a result of hoeffding ( also see ) , for any convex function . letting , we obtain for , where we have used .the rhs is the chernoff bound for a poisson random variable with mean , and can be optimized to yield letting for and noting that , we get which is the desired bound .we would like to thank roman vershynin ( mathematics , university of michigan ) for highly illuminating discussions .
many algorithms have been proposed for fitting network models with communities , but most of them do not scale well to large networks , and often fail on sparse networks . here we propose a new fast pseudo - likelihood method for fitting the stochastic block model for networks , as well as a variant that allows for an arbitrary degree distribution by conditioning on degrees . we show that the algorithms perform well under a range of settings , including on very sparse networks , and illustrate on the example of a network of political blogs . we also propose spectral clustering with perturbations , a method of independent interest , which works well on sparse networks where regular spectral clustering fails , and use it to provide an initial value for pseudo - likelihood . we prove that pseudo - likelihood provides consistent estimates of the communities under a mild condition on the starting value , for the case of a block model with two communities .
sensor networks can be described as a collection of wireless devices with limited computational abilities which are , due to their ad - hoc communication manner , vulnerable to misbehavior and malfunction .it is therefore necessary to support them with a simple , _ computationally friendly _ protection system . due to the limitations of sensor networks, there has been an on - going interest in providing them with a protection solution that would fulfill several basic criteria .the first criterion is the ability of self - learning and self - tuning .because maintenance of ad hoc networks by a human operator is expected to be sporadic , they have to have a built - in _ autonomous _mechanism for identifying user behavior that could be potentially damaging to them .this learning mechanism should itself minimize the need for a human intervention , therefore it should be self - tuning to the maximum extent .it must also be computationally conservative and meet the usual condition of high detection rate .the second criterion is the ability to undertake an action against one or several misbehaving users .this should be understood in a wider context of co - operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior .such a co - operation should have a low message complexity because both the bandwidth and the battery life are of scarce nature .the third and last criterion requires that the protection system does not itself introduce new weaknesses to the systems that it should protect .an emerging solution that could facilitate implementation of the above criteria are artificial immune systems ( ais ) .ais are based on principles adapted from the human immune system ( his ) ; the basic ability of his is an efficient detection of potentially harmful foreign agents ( viruses , bacteria , etc . ) .the goal of ais , in our setting , is the identification of nodes with behavior that could possibly negatively impact the stated mission of the sensor network .one of the key design challenges of ais is to define a suitable set of efficient genes .genes form a basis for deciding whether a node misbehaves . they can be characterized as measures that describe a network s performance from a node s viewpoint . 
given their purpose , they must be easy to compute and robust against deception .misbehavior in wireless sensor networks can take upon different forms : packet dropping , modification of data structures important for routing , modification of packets , skewing of the network s topology or creating ficticious nodes ( see for a more complete list ) .the reason for sensors ( possibly fully controlled by an attacker ) to execute any form of misbehavior can range from the desire to save battery power to making a given wireless sensor network non - functional .malfunction can also be considered a type of unwanted behavior .the human immune system is a rather complex mechanism able to protect humans against an amazing set of extraneous attacks .this system is remarkably efficient , most of the time , in discriminating between _ self _ and _ non - self _ antigens .a non - self antigen is anything that can initiate an immune response ; examples are a virus , bacteria , or splinter .the opposite to non - self antigens are self antigens ; self antigens are human organism s own cells .the process of t - cells maturation in thymus is used as an inspiration for learning in ais .the maturation of t - cells ( detectors ) in thymus is a result of a pseudo - random process . after a t - cellis created ( see fig . [fig : det_gen ] ) , it undergoes a censoring process called _ negative selection_. during negative selection t - cells that bind self are destroyed .remaining t - cells are introduced into the body .the recognition of non - self is then done by simply comparing t - cells that survived negative selection with a suspected non - self .this process is depicted in fig .[ fig : non_self_det ] .it is possible that the self set is incomplete , while a t - cell matures ( tolerization period ) in the thymus .this could lead to producing t - cells that should have been removed from the thymus and can cause an autoimmune reaction , i.e. it leads to _false positives_. a deficiency of the negative selection process is that alone it is not sufficient for assessing the damage that a non - self antigen could cause .for example , many bacteria that enter our body are not harmful , therefore an immune reaction is not necessary .t - cells , actors of the adaptive immune system , require co - stimulation from the innate immune system in order to start acting .the innate immune system is able to recognize the presence of harmful non - self antigens and tissue damage , and signal this to certain actors of the adaptive immune system .the random - generate - and - test approach for producing t - cells ( detectors ) described above is analyzed in .in general , the number of candidate detectors to the self set size needs to be exponential ( if a matching rule with fixed matching probability is used ) .another problem is a consistent underfitting of the non - self set ; there exist `` holes '' in the non - self set that are undetectable . in theory , for some matching rules, the number of holes can be very unfavorable . 
in practical terms , the effect of holes depends on the characteristics of the non - self set , representation and matching rule .the advantage of this algorithm is its _ simplicity _ and good experimental results in cases when the number of detectors to be produced is fixed and small .a review of other approaches to detector computation can be found in .a sensor network can be defined in graph theoretic framework as follows : a sensor network is a net where are the set of nodes and edges at time , respectively .nodes correspond to sensors that wish to communicate with each other .an edge between two nodes and is said to exist when is within the radio transmission range of and vice versa .the imposed symmetry of edges is a usual assumption of many mainstream protocols .the change in the cardinality of sets can be caused by switching on / off one of the sensors , failure , malfunction , removal , signal propagation , link reliability and other factors .data exchange in a point - to - point ( uni - cast ) scenario usually proceeds as follows : a user initiated data exchange leads to a route query at the network layer of the osi stack .a routing protocol at that layer attempts to find a route to the data exchange destination .this request may result in a path of non - unit length .this means that a data packet in order to reach the destination has to rely on successive forwarding by intermediate nodes on the path .an example of an on - demand routing protocol often used in sensor networks is dsr .route search in this protocol is started only when a route to a destination is needed .this is done by flooding the network with rreq control packets .the destination node or an intermediate node that knows a route to the destination will reply with a rrep control packet .this rrep follows the route back to the source node and updates routing tables at each node that it traverses .a rerr packet is sent to the connection originator when a node finds out that the next node on the forwarding path is not replaying . at the mac layer of the osi protocol stack , the medium reservation is often contention based . in order to transmit a data packet , the ieee 802.11 mac protocol uses carrier sensing with an rts - cts - data - ack handshake .should the medium not be available or the handshake fails , an exponential back - off algorithm is used .this is combined with a mechanism that makes it easier for neighboring nodes to estimate transmission durations .this is done by exchange of duration values and their subsequent storing in a data structure known as network allocation vector ( nav ) . with the goal to save battery power ,researchers suggested , a sleep - wake - up schedule for nodes would be appropriate .this means that nodes do not listen continuously to the medium , but switch themselves off and wake up again after a predetermined period of time .such a sleep and wake - up schedule is similarly to duration values exchanged among nodes .an example of a mac protocol , designed specifically for sensor networks , that uses such a schedule is the s - mac . a sleep and wake - up schedulecan severely limit operation of a node in _ promiscuous mode_. in promiscuous mode , a node listens to the on - going traffic in the neighborhood and collects information from the overheard packets .this technique is used e.g. in dsr for improved propagation of routing information .movement of nodes can be modeled by means of a mobility model .a well - known mobility model is the _ random waypoint model _ . 
in this model ,nodes move from the current position to a new randomly generated position at a predetermined speed . after reaching the new destinationa new random position is computed .nodes pause at the current position for a time period before moving to the new random position . for more information on sensor networks ,we refer the reader to .motivated by the positive results reported in we have undertaken a detailed performance study of ais with focus on sensor networks .the general conclusions that can be drawn from the study presented in this document are : 1 . given the ranges of input parameters that we used and considering the computational capabilities of current sensor devices , we conclude that ais based misbehavior detection offers a decent detection rate . 2 .one of the main challenges in designing well performing ais for sensor networks is the set of `` genes '' .this is similar to observations made in . 3 .our results suggest that to increase the detection performance , an ais should benefit from information available at all layers of the osi protocol stack ; this includes also detection performance with regards to a simplistic flavor of misbehavior such as packet dropping .this supports ideas shortly discussed in where the authors suggest that information available at the application layer deserves more attention .we observed that somewhat surprisingly a gene based purely on the mac layer significantly contributed to the overall detection performance .this gene poses less limitations when a mac protocol with a sleep - wake - up schedule such as the s - mac is used .it is desirable to use genes that are _ `` complementary '' _ with respect to each other .we demonstrated that two genes , one that measures correct forwarding of data packets , and the other one that indirectly measures the medium contention , have exactly this property .we only used a single instance of learning and detection mechanism per node .this is different from approach used in , where one instance was used for each of possible neighbors .our performance results show that the approach in may not be feasible for sensor networks .it may allow for an easy sybil attack and , in general , instances might be necessary , where is the total number of sensors in the network . instead , we suggest that flagging a node as misbehaving should , if possible , be based on detection at several nodes .7 . only less than 5% detectors were used in detecting misbehavior .this suggests that many of the detectors do not comply with constraints imposed by the communications protocols ; this is an important fact when designing ais for sensor networks because the memory capacity at sensors is expected to be very limited .the data traffic properties seem not to impact the performance .this is demonstrated by similar detection performance , when data traffic is modeled as constant bit rate and poisson distributed data packet stream , respectively .we were unable to distinguish between nodes that misbehave ( e.g. deliberately drop data packets ) and nodes with a behavior resembling a misbehavior ( e.g. 
drop data packets due to medium contention ) .this motivates the use of danger signals as described in .the approach applied in does , however , not completely fit sensor networks since these might implement only a simplified version of the transport layer .in our approach , each node produces and maintains its own set of detectors .this means that we applied a direct one - to - one mapping between a human body with a thymus and a node .we represent self , non - self and detector strings as bit - strings .the matching rule employed is the _ r - contiguous bits matching rule_. two bit - strings of equal length match under the r - contiguous matching rule if there exists a substring of length at position in each of them and these substrings are identical .detectors are produced by the process shown in fig .[ fig : det_gen ] , i.e. by means of negative selection when detectors are created randomly and tested against a set of self strings .each antigen consists of several genes ._ genes _ are performance measures that a node can acquire locally without the help from another node . in practical termsthis means that an antigen consists of genes ; each of them encodes a performance measure , averaged in our case over a time window .an antigen is then created by concatenating the genes . when choosing the correct genes , the choice is limited due to the simplified osi protocol stack of sensors . for example, mica2 sensors using the tinyos operating system do not guarantee any end - to - end connection reliability ( transport layer ) , leaving only data traffic at the lower layers for consideration .let us assume that the routing protocol finds for a connection the path from the source node to the destination node , where and .we have used the following _ genes _ to capture certain aspects of mac and routing layer traffic information ( we averaged over a time period ( window size ) of 500 seconds ) : 1 . *mac layer : * 2 .ratio of complete mac layer handshakes between nodes and and rts packets sent by to .if there is no traffic between two nodes this ratio is set to ( a large number ) .this ratio is averaged over a time period .a complete handshake is defined as a completed sequence of rts , cts , data , ack packets between and .3 . ratio of data packets sent from to and then subsequently forwarded by to . if there is no traffic between two nodes this ratio is set to ( a large number ) .this ratio is computed by in promiscuous mode and , as in the previous case , averaged over a time period .this gene was adapted from the watchdog idea in .time delay that a data packet spends at before being forwarded to .the time delay is observed by in promiscuous mode . if there is no traffic between two nodes the time delayis set to zero .this measure is averaged over a time period .this gene is a quantitative extension of the previous gene .* routing layer : * 6 . the same ratio as in # 2 but computed separately for rerr routing packets. 7 . the same delay as in # 3 but computed separately for rerr routing packets .the gene # 1 can be characterized as mac layer quality oriented _ it indirectly measures the medium contention level_. the remaining genes are watchdog oriented .this means that they more strictly fit a certain kind of misbehavior .the gene # 2 can help detect whether packets get correctly forwarded ; the gene # 3 can help detect whether forwarding of packets does not get intentionally delayed . 
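To make the detection machinery concrete, the following sketch implements the r-contiguous matching rule and the generate-and-test negative selection described above; bit-strings are lists of 0/1, and the cap on the number of random candidates is an added safeguard for the sketch, not part of the original scheme.

import random

def r_contiguous_match(x, y, r):
    # True iff the equal-length bit-strings x and y agree on at least r
    # contiguous positions.
    run = 0
    for a, b in zip(x, y):
        run = run + 1 if a == b else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r, seed=1, max_tries=100000):
    # candidate detectors are generated randomly; any candidate that matches a
    # self antigen is censored, the remaining candidates become detectors.
    rng = random.Random(seed)
    detectors, tries = [], 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_non_self(antigen, detectors, r):
    # detection: an antigen is flagged as non-self if any detector matches it.
    return any(r_contiguous_match(d, antigen, r) for d in detectors)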
as we will show later , in the particular type of misbehavior ( packet dropping ) that we applied ,the first two genes come out as `` the strongest '' .the disadvantage of the watchdog based genes is that due to limited battery power , nodes could operate using a sleep - wake - up schedule similar to the one used in the s - mac .this would mean that the node has to stay awake until the node ( monitored node ) correctly transmits to .the consequence would be a longer wake - up time and possible restrictions in publishing sleep - wake - up schedules .in the authors applied a different a set of genes , based only on the dsr routing protocol .the observed set of events was the following : a = rreq sent , b = rrep sent , c = rerr sent , d = data sent and ip source address is not of the monitored ( neighboring ) node , e = rreq received , f = rrep received , g = rerr received , h = data received and the ip destination address is not of the monitored node .the events d and h take into consideration that the source and destination nodes of a connection might appear as misbehaving as they seem to `` deliberately '' create and delete data packets .then the set of their four genes is as follows :number of e over a time period .2 . number of ( e*(a or b ) ) over a time period .number of h over a time period .4 . number of ( h*d ) over a time period .the time period ( window size ) in their case was 10s ; * is the kleene star operator ( zero or more occurrences of any event(s ) are possible ) .similar to our watchdog genes , these genes impose additional requirements on mac protocols such as the s - mac .their dependence on the operation in promiscuous mode is , however , more pronounced as a node has to continuously observe packet events at all monitored nodes .the research in the area of what and to what extent can be or should be locally measured at a node , is independent of the learning mechanism used ( negative selection in both cases ) .performance of an ais can partly depend on the ordering and the number of used genes .since longer antigens ( consisting of more genes ) indirectly imply more candidate detectors , the number of genes should be carefully considered .given genes , it is possible to order them in different ways . in our experience ,the rules for ordering genes and the number of genes can be summed up as follows : 1 ) keep the number of genes small . in our experiments, we show that with respect to the learning mechanism used and the expected deployment ( sensor networks ) , 2 - 3 genes are enough for detecting a basic type of misbehavior .2 ) order genes either randomly or use a predetermined fixed order . defining a utility relation between genes , and ordering genes with respect to it can , in general , lead to problems that are considered intractable .our results however suggest , it is important to understand relations between different genes , since genes are able to complement each other ; this can lead to their increased mutual strength . on the other hand, random ordering adds to robustness of the underlying ais . 
for an attacker ,it is namely more difficult to deceive , since he does not know how genes are being used .it is currently an open question , how to impose a balanced solution .3 ) genes can not be considered in isolation .our experiments show , when a detector matched an antigen under the -contiguous matching rule , usually this match spanned over several genes .this motivates design of matching rules that would not limit matching to a few neighboring genes , offer more flexibility but still require that a gene remains a partly atomic unit . learning and detectionis done by applying the mechanisms shown in figs .[ fig : det_gen ] and [ fig : non_self_det ] .the detection itself is very straightforward . in the learning phase , a misbehavior - free period ( see on possibilities for circumventing this problem ) is necessary so that nodes get a chance to learn what is the normal behavior . when implementing the learning phase , the designer gets to choose from two possibilities : 1 ) learning and detection at a node get implemented for each neighboring node separatelythis means that different antigens have to get computed for each neighboring node , detector computation is different for each neighboring node and , subsequently , detection is different for each neighboring node .the advantage of this approach is that the node is able to directly determine which neighboring node misbehaves ; the disadvantage is that instances ( is the number of neighbors or node degree ) of the negative selection mechanism have to get executed ; this can be computationally prohibitive for sensor networks as can , in general , be equal to the total number of sensor .this allows for an easy sybil attack in which a neighbor would create several identities ; the node would then be unable to recognize that these identities belong to the same neighbor .this approach was used in .2 ) learning and detection at a node get implemented in a single instance for all neighboring nodes .this means a node is able to recognize anomaly ( misbehavior ) but it may be unable to determine which one from the neighboring nodes misbehaves .this implies that nodes would have to cooperate when detecting a misbehaving node , exchange anomaly information and be able to draw a conclusion from the obtained information .an argument for this approach is that in order to detect nodes that misbehave in collusion , it might be necessary to rely to some extent on information exchange among nodes , thus making this a natural solution to the problem .we have used this approach ; a post - processing phase ( using the list of misbehaving nodes ) was necessary to determine whether a node was correctly flagged as misbehaving or not .we find the second approach to be more suited for wireless sensor networks .it is namely less computationally demanding .we are unable , at this time , to estimate the frequency of a complete detector set computation .both approaches can be classified within the four - layer architecture ( fig .[ fig : arch ] ) that we introduced in .the lowermost layer , data collection and preprocessing , corresponds to genes computation and antigen construction .the learning layer corresponds to the negative selection process .the next layer , local and co - operative detection , suggests , an ais should benefit from both local and cooperative detection .both our setup and the setup described in only apply _ local _ detection .the uppermost layer , local and co - operative response , implies , an ais should also have the capability to 
undertake an action against one or several misbehaving nodes ; this should be understood in a wider context of co - operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior .to our best knowledge , there is currently no ais implementation for sensor networks taking advantage of this layer ._ which is the correct one ? _an interesting technical problem is to tune the parameter for the -contiguous matching rule so that the underlying ais offers good detection and false positives rates .one possibility is a lengthy simulation study such as this one . through multiparameter simulationwe were to able to show that offers the best performance for our setup . in experimented with the idea of `` growing '' and `` shrinking '' detectors ; this idea was motivated by .the initial for a growing detector can be chosen as , where is the detector length .the goal is to find the smallest such that a candidate detector does not match any self antigen .this means , initially , a larger ( more specific ) is chosen ; the smallest that fulfills the above condition can be found through binary search . for shrinking detectors, the approach is reciprocal .our goal was to show that such growing or shrinking detectors would offer a better detection or false positives rate .short of proving this in a statistically significant manner , we observed that the growing detectors can be used for self tuning the parameter .the average value was close to the determined through simulation ( the setup in that case was different from the one described in this document ) .our experiments show that only a small number of detectors get ever used ( less than 5% ) .the reason is , they get produced in a random way , not considering structure of the protocols .for example , a detector that is able to detect whether i ) data packets got correctly transmitted and ii ) 100% of all mac layers handshakes were incomplete is superfluous as this case should never happen . in ,the authors conclude : _`` ... uniform coverage of non - self space is not only unnecessary , it is impractical ; non - self space is too big''_. application driven knowledge can be used to set up a rule based system that would exclude infeasible detectors ; see for a rule based system aimed at improved coverage of the non - self set . in , it is suggested that unused detectors should get deleted and the lifetime of useful detectors should be extended . in a companion paper , we have reviewed different types of misbehavior at the mac , network and transport layers of the osi protocol stack .we note that solutions to many of these attacks have been already proposed ; these are however specific to a given attack . additionally , due to the limitations of sensor networks , these solutions can not be directly transfered . the appeal of ais based misbehavior detection rests on its simplicity and applicability in an environment that is extremely computationally and bandwidth limited .misbehavior in sensor networks does not have to be executed by sensors themselves ; one or several computationally more powerful platforms ( laptops ) can be used for the attack . on the other hand , a protection using such more advanced computational platforms is , due to e.g. 
the need to supply them continuously with electric power , harder to imagine .it would also create a point of special interest for the possible attackers .the purpose of our experiments was to show that ais are a viable approach for detecting misbehavior in sensor networks .furthermore , we wanted to cast light on internal performance of an ais designed to protect sensor networks .one of our central goals was to provide an in - depth analysis of relative usefulness of genes ._ definitions of input and output parameters : _ the input parameters for our experiments were : parameter for the -contiguous matching rule , the ( desired ) number of detectors and misbehavior level .misbehavior was modeled as random packet dropping at selected nodes .+ the performance ( output ) measures were arithmetic averages and 95% confidence intervals of detection rate , number of false positives , real time to compute detectors , data traffic rate at nodes , number of iterations to compute detectors ( number of random tries ) , number of non - valid detectors , number of different ( unique ) antigens in a run or a time window , and number of matches for each gene .the detection rate is defined as , where is the number of detected non - self strings and is the total number of non - self strings . a false positive in our definitionis a string that is not self but can still be a result of anomaly that is identical with the effects of a misbehavior .a non - valid detector is a candidate detector that matches a self string and must therefore be removed .the number of matches for each gene was evaluated using the -contiguous matching rule ; we considered two cases : i ) two bit - strings get matched from the left to the right and the first such a match will get reported ( matching gets interrupted ) , ii ) two bit - strings get matched from the left to the right and all possible matches will get reported .the time complexity of these two approaches is and , respectivelly ; , where is the bitstring length .the first approach is exactly what we used when computing the real time necessary for negative selection , the second approach was used when our goal was to evaluate relative usefulness of each gene .* scenario description : * we wanted to capture `` self '' and `` non - self '' packet traffic in a large enough synthetic static sensor network and test whether using an ais we are able to recognize non - self , i.e. 
misbehavior .the topology of this network was determined by making a _ snapshot _ of 1,718 mobile nodes ( each with 100 m radio radius ) moving in a square area of 2,900m,950 m as prescribed by the random waypoint mobility model ; see figure [ fig : topo](a ) .the motivation in using this movement model and then creating a snapshot are the results in our previous paper that deals with structural robustness of sensor network .our preference was to use a slightly bigger network than it might be necessary , rather than using a network with unknown properties .the computational overhead is negligible ; simulation real time mainly depends on the number of events that require processing .idle nodes increase memory requirements , but memory availability at computers was in our case not a bottleneck .we chose source and destination pairs for each connection so that several alternative independent routes exist ; the idea was to benefit from route repair and route acquisition mechanisms of the dsr routing protocol , so that the added value of ais based misbehavior detection is obvious .we used 10 cbr ( constant bit rate ) connections .the connections were chosen so that their length is hops and so that these connections share some common intermediate nodes ; see figure [ fig : topo](b ) . for each packet received or sent by a nodewe have captured the following information : ip header type ( udp , 802.11 or dsr in this case ) , mac frame type ( rts , cts , data , ack in the case of 802.11 ) , current simulation clock , node address , next hop destination address , data packet source and destination address and packet size . _encoding of self and non - self antigens : _ each of the five genes was transformed in a 10-bit signature where each bit defines an interval of a gene specific value range .we created self and non - self antigen strings by concatenation of the defined genes .each self and non - self antigen has therefore a size of 50 bits .the interval representation was chosen in order to avoid carry - bits ( the gray coding is an alternative solution ) ._ constructing the self and non - self sets : _ we have randomly chosen 28 non - overlapping 500-second windows in our 4-hour simulation . in each 500-second window self and non - self antigensare computed for each node .this was repeated 20 times for independent glomosim runs . _misbehavior modeling : _ misbehavior is modeled as random data packet dropping ( implemented at the network layer ) ; data packets include both data packets from the transport layer as well as routing protocol packets . that should get dropped will simply not be inserted into the ip queue ) ; we have randomly chosen 236 nodes and these were forced to drop of data packets .however , there were only 3 - 10 nodes with misbehavior and with a _ statistically significant _ number of packets for forwarding in each simulation run ; see constraint c2 in section [ sec : eval ] ._ detection : _ a neighboring node gets flagged as misbehaving , if a detector from the detector set matches an antigen .since we used a single learning phase , we had to complement this process with some routing information analysis .this allowed us to determine , which one from the neighboring nodes is actually the misbehaving one . in the future, we plan to rely on co - operative detection in order to replace such a post - analysis ._ simulation phases : _ the experiment was done in four phases . 
1 .20 independent glomosim runs were done for one of misbehavior levels and `` normal '' traffic .normal means that no misbehavior took place .2 . self and non - self antigen computation ( encoding ) .the 20 `` normal '' traffic runs were used to compute detectors . given the 28 windows and 20 runs , the sample size was 20 = 560 , i.e. detectors at each node were discriminated against 560 self antigens .4 . using the runs with misbehavior levels ,the process shown in fig .[ fig : non_self_det ] was used for detection ; we restricted ourselves to nodes that had in both the normal and misbehavior traffic at least a certain number of data packets to forward ( packet threshold ) .the experiment was then repeated with different , desired number of detectors and misbehavior level .the parameters for this experiment are summarized in fig .[ fig : exp - parameters2 ] .the injection rate and packet sizes were chosen in order to comply with usual data rates of sensors ( e.g. 38.4kbps for mica2 ; see ) .we chose the glomosim simulator over other options ( most notably ns2 ) because of its better scaling characteristics and our familiarity with the tool .when evaluating our results we define two additional constraints : 1 .we define a node to be detected as misbehaving if it gets flagged in at least 14 out of the 28 possible windows .this notion indirectly defines the time until a node is pronounced to be misbehaving .we call this a _ window threshold_. 2. a node has to forward in average at least packets over the 20 runs in both the `` normal '' and misbehavior cases in order to be included into our statistics .this constraint was set in order to make the detection process more reliable .it is dubious to flag a neighboring node of as misbehaving , if it is based on `` normal '' runs or runs with misbehavior , in which node had no data packets to forward ( he was not on a routing path ) .we call this a _ packet threshold _ ; was in our simulations chosen from ._ example : _ for a fixed set of input parameters , a node forwarded in the `` normal '' runs in average 1,250 packets and in the misbehavior runs ( with e.g. level 30% ) 750 packets .the node would be considered for misbehavior detection if , but not if . in other words ,a node has to get a chance to learn what is `` normal '' and then to use this knowledge on a non - empty packet stream .the results related to computation of detectors are shown in figure [ fig:300 m ] . in our experimentswe have considered the desired number of detectors to be max .4,000 ; over this threshold the computational requirements might be too high for current sensor devices .we remind the reader , each time the parameter is incremented by , the number of detectors should double in order to make these two cases comparable .figure [ fig:300m](a ) shows the real time needed to compute the desired set of detectors .we can see the real time necessary increases proportionally with the desired number of detectors ; this complies with the theoretical results presented in .figure [ fig:300m](b ) shows the percentage of non - valid detectors , i.e. 
candidate detectors that were found to match a self string ( see figure [ fig : det_gen ] ) .this result points to where the optimal operation point of an ais might lie with respect to the choice of parameter and the choice of a fixed number of detectors to compute .we remind the reader , the larger is the parameter the smaller is the probability that a detector will match a self string .therefore overhead connected to choosing the parameter prohibitively small should be considered when designing an ais .figure [ fig:300m](c ) shows the total number of generate - and - test tries needed for computation of detector set of a fixed size ; the 95% confidence interval is less than . in figure[ fig : perform](a ) we show the dependence of detection ratio on the packet threshold .we conclude that except for some extremely low threshold values ( not shown ) the detection rate stays constant .this figure also shows that when misbehavior level was set very low , i.e. 10% , the ais struggled to detect misbehaving nodes .this is partly a result of our coarse encoding with only 10 different levels . at the 30 and 50% misbehaving levels the detection rate stays solid at about 70 - 85% .the range of the 95% confidence interval of detection rate is 3.8 - 19.8% .the fact that the detection rate did not get closer to 100% suggests , either the implemented genes are not sufficient , detection should be extended to protocols at other layers of the osi protocol stack , a different ordering of genes should have been applied or our ten level encoding was too coarse .it also implicates that watchdog based genes ( though they perfectly fit the implemented misbehavior ) should not be used in isolation , and in general , that the choice of genes has to be very careful. figure [ fig : perform](b ) shows the impact of on detection rate . when the ais performs well , for the detection rate decreases .this is caused by the inadequate numbers of detectors used at higher levels of ( we limited ourselves to max .4,000 detectors ) . figure [ fig : perform](c ) shows the number of false positives .we remind that in our definition false positives are both nodes that do not drop any packets and nodes that drop packets due to other reasons than misbehavior . in a separate experiment we studied whether the 4-hour ( 560 samples ) simulation time was enough to capture the diversity of the self behavior .this was done by trying to detect misbehavior in 20 independent misbehavior - free glomosim runs ( different from those used to compute detectors ) .we report that we did not observe a single case of an autoimmune reaction . in fig .[ fig : det_used1](a ) we show the total number of runs in which a node was identified as misbehaving . the steep decline for values ( in this and other figures ) documents that in these cases it was necessary to produce a higher number of detectors in order to cover the non - self antigen space .the higher the , the higher is the specificity of a detector , this means that it is able to match a smaller set of non - self antigens . in fig .[ fig : det_used1](b ) and ( c ) we show the number of detectors that got matched during the detection phase ( see fig .[ fig : non_self_det ] ) . 
fig .( b ) shows the number of detectors matched per run , fig .( c ) shows the number of detectors matched per window .( b ) is an upper estimate on the number of unique detectors needed in a single run .given that the total number of detectors was 2,000 , there were less than 5% detectors that would get used in the detection phase .the tight confidence intervals only for . ] for the number of unique detectors matched per window ( see fig .( c ) ) is a direct consequence of the small variability of antigens as shown in fig .[ fig : det_used2](a ) . fig .[ fig : det_used2](a ) shows the number of unique antigens that were subject to classification into self or non - self .the average for is about 1.5 .this fact does not directly imply that the variability of the data traffic would be inadequate .it is rather a direct consequence of our choice of genes and their encoding ( we only used 10 value levels for encoding ) .[ fig : det_used2](b ) shows the number of matches between a detector and an antigen in the following way .when a detector under the -contiguous matching rule matches only a single gene within an antigen , we would increment the `` single '' counter .otherwise , we would increment the `` multiple '' counter .it is obvious that with increasing , it gets more and more probable that a detector would match more than a single gene .the interesting fact is that the detection rate for both and is about 80% ( see fig .[ fig : perform](a ) ) and that the rate of non - valid detectors is very different ( see fig .[ fig:300m](b ) ) .this means that an interaction between genes has positively affected the later performance measure , without sacrificing on the former one .this leads to a conlusion that genes should not be considered in isolation . fig .[ fig : det_used2](c ) shows the performance of gene # 1 .the number of matches shows that this gene contributed to the overall detection performance of our ais .[ fig : det_used3](a - c ) sum up performance of the five genes for different values of .again , an interesting fact is the contribution of gene # 1 to the overall detection performance .the usefulness of gene # 2 was largely expected as this gene was tailored for the kind of misbehavior that we implemented .the other three genes came out as marginally useful .the importance of the somewhat surprising performance of gene # 1 is that it can be computed in a simplistic way and does not require continuous operation of a node . in an additional experiment, we examined the impact of data traffic pattern on the performance .we used two different data traffic models : the constant bit rate ( cbr ) and a poisson distributed data traffic . in many scenarios ,sensors are expected to take measurements in constant intervals and , subsequently , send them out for processing .this would create a constant bit rate traffic .poisson distributed traffic could be a result of sensors taking measurements in an event - driven fashion .for example , a sensor would take a measurement only when a target object ( e.g. a person ) happens to be in its vicinity .the setup for this experiment was similar to that presented in fig .[ fig : exp - parameters2 ] with the additional fact that the data traffic model would now become an input parameter . with the goal to reduce complexity of the experimental setup , we fixed and we only considered cases with and detectors . 
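the two traffic models just mentioned can be sketched in a few lines ; the code below generates packet arrival times for a cbr source and for a poisson source of the same mean rate . this is an illustration only : the rate , window length and function names are placeholders and are not taken from the actual simulation setup of fig . [ fig : exp - parameters2 ] .

```python
import random

def cbr_arrivals(rate_pps, duration_s):
    """constant bit rate source: one packet every 1/rate_pps seconds."""
    period = 1.0 / rate_pps
    return [k * period for k in range(int(duration_s * rate_pps))]

def poisson_arrivals(rate_pps, duration_s, seed=None):
    """poisson source: exponentially distributed inter-arrival times with
    the same mean rate as the cbr source."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_pps)
        if t >= duration_s:
            return arrivals
        arrivals.append(t)

# placeholder rate of 1 packet per second over one 500 s window
print(len(cbr_arrivals(1.0, 500.0)))              # exactly 500 packets
print(len(poisson_arrivals(1.0, 500.0, seed=1)))  # about 500 packets on average
```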
in order to match the cbr traffic rate , the poisson distributed data traffic model had a mean arrival expectation of packet per second ( ) .as in the case with cbr , we computed the detection rate and the rate of false positives with the associated arithmetic averages and confidence intervals . the results based on these two traffic models were similar , actually , we could not find the difference between them to be statistically significant .this points out that the detection process is robust against some variation in data traffic .this conclusion also reflects positively on the usefulness of the used genes .more importantly , it helped disperse our worries that the results presented in this experimental study could be unacceptably data traffic dependent .in the authors introduced an ais based misbehavior detection system for ad hoc wireless networks .they used glomosim for simulating data traffic , their setup was an area of 800 m with 40 mobile nodes ( speed 1 m / s ) of which 5 - 20 are misbehaving ; the routing protocol was dsr .four genes were used to capture local behavior at the network layer .the misbehavior implemented is a subset of misbehavior introduced in this paper ; their observed detection rate is about 55% .additionally , a co - stimulation in the form of a danger signal was used in order to inform nodes on a forwarding path about misbehavior , thus propagating information about misbehaving nodes around the network . in authors describe an ais able to detect anomalies at the transport layer of the osi protocol stack ; only a wired tcp / ip network is considered .self is defined as normal pairwise connections .each detector is represented as a 49-bit string .the pattern matching is based on r - contiguous bits with a fixed . discusses a network intrusion system that aims at detecting misbehavior by capturing tcp packet headers .they report that their ais is unsuitable for detecting anomalies in communication networks .this result is questioned in where it is stated that this is due to the choice of problem representation and due to the choice of matching threshold for -contiguous bits matching . to overcome the deficiencies of the generate - and - test approach a different approachis outlined in .several signals each having a different function are employed in order to detect a specific misbehavior in sensor wireless networks .unfortunately , no performance analysis was presented and the properties of these signals were not evaluated with respect to their misuse .the main discerning factor between our work and works shortly discussed above is that we carefully considered hardware parameters of current sensor devices , the set of input parameters was designed in order to target specifically sensor networks and our simulation setup reflects structural qualities of such networks with regards to existence of multiple independent routing paths . in comparison to we showed that in case of static sensor networks it is reasonable to expect the detection rate to be above 80% .although we answered some basic question on the suitability and feasibility of ais for detecting misbehavior in sensor networks a few questions remain open .the key question in the design of ais is the quantity , quality and ordering of genes that are used for measuring behavior at nodes . 
to answer this question a detailed formal analysis of communications protocolswill be needed .the set of genes should be as `` complete '' as possible with respect to any possible misbehavior .the choice of genes should impose a high degree of sensor network s survivability defined as _the capability of a system to fulfill its mission in a timely manner , even in the presence of attacks , failures or accidents _it is therefore of paramount importance that the sensor network s mission is clearly defined and achievable under normal operating conditions .we showed the influence and usefulness of certain genes in order to detect misbehavior and the impact of the parameter on the detection process .in general , the results in fig . [fig : det_used3 ] show that gene # 1 and # 2 obtained of all genes the best results , with gene # 2 showing always the best results .the contribution of gene # 1 suggests that observing the mac layer and the ratio of complete handshakes to the number of rts packets sent is useful for the implemented misbehaviour .gene # 2 fits perfectly for the implemented misbehavior .it therefore comes as no surprise that this gene showed the best results in the detection process .the question which remains open is whether the two genes are still as useful when exposed to different attack patterns .it is currently unclear whether genes that performed well with negative selection , will also be appropriate for generating different flavors of signals as suggested within the _ danger theory _it is our opinion that any set of genes , whether used with negative selection or for generating any such a signal , should aim at capturing intrinsic properties of the interaction among different components of a given sensor network .this contradicts approaches applied in where the genes are closely coupled with a given protocol .the reason for this statement is the _ combined performance _ of gene # 1 and # 2 .their interaction can be understood as follows : data packet dropping implies less medium contention since there are less data packets to get forwarded .less data packets to forward on the other hand implies easier access to the medium , i.e. the number of complete mac handshakes should increase .this is an interesting _ complementary _ relationship since in order to deceive these two genes , a misbehaving node has to appear to be correctly forwarding data packets and , at the same time , he should not significantly modify the `` game '' of medium access .it is improbable that the misbehaving node _ alone _ would be able to estimate the impact of dropped packets on the contention level .therefore , he lacks an important feedback mechanism that would allow him to keep the contention level unchanged . for that, he would need to act in collusion with other nodes .the property of complementarity moves the burden of excessive communication from normally behaving nodes to misbehaving nodes , thus , exploiting the ad hoc ( local ) nature of sensor networks .our results thus imply , _ a `` good '' mixture of genes should be able to capture interactions that a node is unable to influence when acting alone . 
_it is an open question whether there exist other useful properties of genes , other than complementarity .we conclude that the random - generate - and - test process , with no knowledge of the used protocols and their behavior , creates many detectors which might turn out to be superfluous in detecting misbehavior .a process with some basic knowledge of protocol limitations might lead to improved quality of detectors . in the authors stated that the random - generate - and - test process _`` is inefficient , since a vast number of randomly generated detectors need to be discarded , before the required number of the suitable ones are obtained '' . _our results show that at , the rate of discarded detectors is less than .hence , at least in our setting we could not confirm the above statement .a disturbing fact is , however , that the size of the self set in our setting was probably too small in order to justify the use of negative selection .a counter - balancing argument here is the realistic setup of our simulations and a decent detection rate .we would like to point out that the fisher iris and biomedical data sets , used in to argue about the appropriateness of negative selection for anomaly detection , could be very different from data sets generated by our simulations .our experiments show that anomaly ( misbehavior ) data sets based on sensor networks could be in general very sparse .this effect can be due to the limiting nature of communications protocols .since the fisher iris and biomedical data sets were in not evaluated with respect to some basic properties e.g. degree of clustering , it is hard to compare our results with the results presented therein . in order to understand the effects of misbehavior better ( e.g. the propagation of certain adverse effects ), we are currently developing a general framework for ais to be used within the jist / swans network simulator .this work was supported by the german research foundation ( dfg ) under the grant no .sz 51/24 - 2 ( survivable ad hoc networks sane ) .
a sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions . these devices ( nodes ) are expected to operate autonomously , be battery powered and have very limited computational capabilities . this makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem . in this document we discuss performance of artificial immune systems ( ais ) when used as the mechanism for detecting misbehavior . we show that ( i ) mechanism of the ais have to be carefully applied in order to avoid security weaknesses , ( ii ) the choice of genes and their interaction have a profound influence on the performance of the ais , ( iii ) randomly created detectors do not comply with limitations imposed by communications protocols and ( iv ) the data traffic pattern seems not to impact significantly the overall performance . we identified a specific mac layer based gene that showed to be especially useful for detection ; genes measure a network s performance from a node s viewpoint . furthermore , we identified an interesting complementarity property of genes ; this property exploits the local nature of sensor networks and moves the burden of excessive communication from normally behaving nodes to misbehaving nodes . these results have a direct impact on the design of ais for sensor networks and on engineering of sensor networks . = 1.0 sensor networks , ad hoc wireless networks , artificial immune systems , misbehavior .
in markov chain monte carlo ( mcmc ) , one simulates a markov chain and uses sample averages to estimate corresponding means of the stationary distribution of the chain .mcmc has become a staple tool in the physical sciences and in bayesian statistics . when sampling the markov chain , the transitions are driven by a stream of independent random numbers .in this paper , we study what happens when the i.i.d . random numbers are replaced by deterministic sequences , or by some dependent values . the motivation for replacing i.i.d . points is that carefully stratified inputs may lead to more accurate sample averages .one must be cautious though , because as with adaptive mcmc , the resulting simulated points do not have the markov property .the utmost in stratification is provided by quasi - monte carlo ( qmc ) points .there were a couple of attempts at merging qmc into mcmc around 1970 , and then again starting in the late 1990s .it is only recently that significant improvements have been reported in numerical investigations .for example , tribble reports variance reductions of several thousand fold and an apparent improved convergence rate for some gibbs sampling problems .those results motivate our theoretical work .they are described more fully in the literature survey below . to describe our contribution ,represent mcmc sampling via for , where is a nonrandom starting point and .the points belong to a state space .the function is chosen so that form an ergodic markov chain with the desired stationary distribution when independently . for a bounded continuous function ,let and .then as . in this paper , we supply sufficient conditions on and on the deterministic sequences so that holds when those deterministic sequences are used instead of random ones .the main condition is that the components of be taken from a completely uniformly distributed ( cud ) sequence , as described below .ours are the first results to prove that deterministic sampling applied to mcmc problems on continuous state spaces is consistent . in practice, of course , floating point computations take place on a large discrete state space . but invoking finite precision does not provide a satisfying description of continuous mcmc problems . in a finite state space argument ,the resulting state spaces are so big that vanishingly few states will ever be visited in a given simulation .then if one switches from to to bit representations , the problem seemingly requires vastly larger sample sizes , but in reality is not materially more difficult . to avoid using the finite state shortcut ,we adopt a computational model with infinite precision . as a side benefit ,this paper shows that the standard practice of replacing genuine i.i.d .values by deterministic pseudo - random numbers is consistent for some problems with continuous state spaces .we do not think many people doubted this , but neither has it been established before , to our knowledge .it is already known from roberts , rosenthal and schwartz that , under certain conditions , a geometrically ergodic markov chain remains so under small perturbations , such as rounding . 
that work does not address the replacement of random points by deterministic ones that we make here .there have been a small number of prior attempts to apply qmc sampling to mcmc problems .the first appears to have been chentsov , whose work appeared in 1967 , followed by sobol in 1974 .both papers assume that the markov chain has a discrete state space and that the transitions are sampled by inversion .unfortunately , qmc does not usually bring large performance improvements on such unsmooth problems and inversion is not a very convenient method .chentsov replaces i.i.d .samples by one long cud sequence , and this is the method we will explain and then adapt to continuous problems .sobol uses what is conceptually an matrix of values from the unit interval .each row is used to make transitions until the chain returns to its starting state .then the sampling starts using the next row .it is like deterministic regenerative sampling .sobol shows that the error converges as in the very special case where the transition probabilities are all rational numbers with denominator a power of .these methods were not widely cited and , until recently , were almost forgotten , probably due to the difficulty of gaining large improvements in discrete problems , and the computational awkwardness of inversion as a transition mechanism for discrete state spaces .the next attempt that we found is that of liao in 1998 .liao takes a set of qmc points in ^d ] are denoted by .two points with for define a rectangle ] for short .the indicator ( or characteristic ) function of a set is written .we assume the reader is familiar with the definition of the ( proper ) riemann integral , for a bounded function on a finite rectangle \subset{\mathbb{r}}^d ] , for .let ^d ] is , just as we would use in plain monte carlo .the difference is that in qmc , distinct points are chosen deterministically to make the discrete probability distribution with an atom of size at each close to the continuous ^d ] is the star discrepancy of in dimension is ^d}}|\delta({\mathbf{a}};{\mathbf{x}}_1,\ldots,{\mathbf{x}}_n)|.\ ] ] for , the star discrepancy reduces to the kolmogorov smirnov distance between a discrete and a continuous uniform distribution .a uniformly distributed sequence is one for which as .if are uniformly distributed then provided that is riemann integrable . under stronger conditions than riemann integrability, we can get rates of convergence for qmc .the koksma hlawka inequality is is the total variation of in the sense of hardy and krause . for properties of and other multidimensional variation measures ,see . equation ( [ eq : khbound ] ) gives a deterministic upper bound on the integration error , and it factors into a measure of the points quality and a measure of the integrand s roughness .there exist constructions where holds for any .therefore , functions of finite variation can be integrated at a much better rate by qmc than by mc .rates of convergence of , where denotes the smoothness of the integrand which can therefore be arbitrarily large , can also be achieved .equation ( [ eq : khbound ] ) is not usable for error estimation .computing the star discrepancy is very difficult , and computing is harder than integrating .practical error estimates for qmc may be obtained using randomized quasi - monte carlo ( rqmc ) . 
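as a small numerical illustration of the star discrepancy defined above , the sketch below evaluates the one - dimensional case , where it coincides with the kolmogorov - smirnov distance between the empirical distribution of the points and the uniform distribution , using the standard closed form over the sorted points . the van der corput points and the sample size are illustrative choices .

```python
import random

def star_discrepancy_1d(points):
    """one-dimensional star discrepancy of points in [0, 1]."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def van_der_corput(i, base=2):
    """i-th point of the van der corput sequence (radical inverse of i)."""
    v, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        v += digit / denom
    return v

n = 1024
qmc_points = [van_der_corput(i) for i in range(1, n + 1)]
mc_points = [random.random() for _ in range(n)]
print(star_discrepancy_1d(qmc_points))   # small, of order log(n)/n for van der corput points
print(star_discrepancy_1d(mc_points))    # typically larger, of order n**(-0.5)
```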
in rqmc each ^d ] random variables based on either discrepancy measures over rectangles or on spectral measures .those conditions are enough to prove convergence for averages of riemann integrable functions , but not for lebesgue integrable functions . as a result ,ordinary monte carlo with pseudo - random numbers is also problematic for lebesgue integrable functions that are not riemann integrable . in the markov chain context, we need a lesser known qmc concept as follows .a sequence ] random variables into the desired nonuniformly distributed quantities , as well as the function of those quantities whose expectation we seek . in some problems, we are unable to find such transformations , and so we turn to mcmc methods .suppose that we want to sample for a density function defined with respect to lebesgue measure on . for definiteness , we will seek to approximate . in this section ,we briefly present mcmc . for a full description of mcmc , see the monographs by liu or robert and casella . in an mcmc simulation ,we choose an arbitrary with and then for update via where ^d ] .it begins with a proposal taken from a transition kernel . with genuinely random proposals ,the transition kernel gives a complete description .but for either quasi - monte carlo or pseudo - random sampling , it matters how we actually generate the proposal .we will assume that ] is a generator for the distribution on if when ^d ] be a generator for the transition kernel with conditional density . the metropolis hastings sampler has where and the mis update is a special case of the metropolis hastings update in which does not depend on .the rwm update is a special case of the metropolis hastings update in which for some generator not depending on .let with and . to construct the systematic scan gibbs sampler ,let be a -dimensional generator of the full conditional distribution of given for all .this gibbs sampler generates the new point using ^d ] .the systematic scan gibbs sampler has where , for , }}({\mathbf{u}}_j)\ ] ] and } = ( \phi_1({\mathbf{x}},{\mathbf{u}}),\ldots,\phi_{j-1}({\mathbf{x}},{\mathbf{u } } ) , x_{j+1},\ldots , x_d) ] .there are many other slice samplers .see .it is elementary that implies .it is more usual to use , but our setting simplifies when we assume is updated first .we generate our random variables as functions of independent uniform random variables .the generators we consider require a finite number of inputs , so acceptance - rejection is not directly covered , but see the note in section [ sec : conclusions ] .for an encyclopedic presentation of methods to generate nonuniform random vectors , see devroye . 
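a minimal sketch of a metropolis - hastings update written in the update - function form used above , with u in [0,1]^d : the first d - 1 coordinates drive the proposal generator and the last coordinate decides acceptance . the standard normal target ( known only up to its normalizing constant ) , the random - walk proposal by inversion and the step size are illustrative assumptions , not the examples treated later in the paper .

```python
import math
import random
from statistics import NormalDist

INV_PHI = NormalDist().inv_cdf
STEP = 2.0                        # illustrative proposal step size

def unnormalized_pi(x):
    """target density, known only up to its constant (standard normal here)."""
    return math.exp(-0.5 * x * x)

def phi(x, u):
    """one random-walk metropolis step driven by u = (u1, u2) in [0,1]^2."""
    u1 = min(max(u[0], 1e-12), 1.0 - 1e-12)   # keep the generator inside the open interval
    y = x + STEP * INV_PHI(u1)                # proposal by inversion
    accept = u[1] < unnormalized_pi(y) / unnormalized_pi(x)
    return y if accept else x

x, total, n = 0.0, 0.0, 50_000
for _ in range(n):
    x = phi(x, (random.random(), random.random()))
    total += x * x
print(total / n)   # estimate of the stationary second moment, which equals 1 here
```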
here , we limit ourselves to inversion and some generalizations culminating in the rosenblatt chentsov transformation introduced below .we will not need to assume that can be sampled by inversion .we only need inversion for an oracle used later in a coupling argument .let be the cdf of , and for define take and , using extended reals if necessary .then has distribution on when ] is where and if ^s ] is the finite sequence , with , where and for .the rosenblatt chentsov transformation starts off using and inversion to generate and then it applies whatever generators are embedded in with the innovations , to sample the transition kernel .the transition function need not be based on inversion .in this section , we prove sufficient conditions for some deterministic mcqmc samplers to sample consistently .the same proof applies to deterministic pseudo - random sampling .first , we define consistency , then some regularity conditions , and then we give the main results .our definition of consistency is that the empirical distribution of the mcmc samples converges weakly to .[ def : consist ] the triangular array for in an infinite set consistently samples the probability density function if holds for all bounded continuous functions .the infinite sequence consistently samples if the triangular array of initial subsequences with for does . in practice, we use a finite list of vectors and so the triangular array formulation is a closer description of what we do .however , to simplify the presentation and avoid giving two versions of everything , we will work only with the infinite sequence version of consistency .triangular array versions of cud sampling for discrete state spaces are given in .it suffices to use functions in a convergence - determining class .for example , we may suppose that is uniformly continuous , or that } ] . here, we define some assumptions that we need to make on the mcmc update functions .[ def : coupleregion ] let ^d ] . the mcmc is _ regular _ ( _ for bounded continuous functions _ )if the function is riemann integrable on ^{d(m+1)} ] , where is either ] of rectangles ] converges to }\pi({\mathbf{x}})\,{{d}}{\mathbf{x}} ] , define .next , we present a theorem from alsmeyer and fuh on iterated random mappings .the step iteration , denoted , is defined by and for .[ thm : ifmalsfuh ] let the update function be jointly measurable in and with ^d } \log(\ell({\mathbf{u}}))\,{{d}}{\mathbf{u}}<0 ] .assume that there is a point with ^d } \log^+ ( d(\phi({\mathbf{x}}',{\mathbf{u}}),{\mathbf{x}}'))\,{{d}}{\mathbf{u}}<\infty ] have asymptotically the correct proportion of points .once again , similar arguments apply for bounded continuous functions of .although not explicitly stated there , the proof of , theorem 1 , shows the existence of sequences for which for all , where is a constant independent of and . unfortunately, no explicit construction of such a sequence is given in .then for any sequence of natural numbers with we obtain that in theorem [ thm : withhomecouple ] , we assumed that the coupling region is jordan measurable in theorem [ thm : withiterfunction ] , we do not have a coupling region , but still have an analogous assumption , namely that the sets are jordan measurable . 
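returning to the rosenblatt - chentsov construction above , the sketch below writes out the inverse rosenblatt transformation for the simplest nontrivial case , a bivariate normal with correlation rho : the first uniform coordinate is pushed through the inverse marginal cdf of the first component and the second through the inverse conditional cdf of the second component given the first . the gaussian example and the value of rho are assumptions made for illustration ; in the rosenblatt - chentsov transformation a map of this kind generates the starting point , after which the remaining d - tuples of the driving sequence are fed to the transition function .

```python
import math
from statistics import NormalDist

INV_PHI = NormalDist().inv_cdf
RHO = 0.8                      # illustrative correlation

def inverse_rosenblatt(u):
    """map u = (u1, u2) in (0,1)^2 to a draw from the bivariate normal."""
    x1 = INV_PHI(u[0])                                       # marginal of x1
    x2 = RHO * x1 + math.sqrt(1.0 - RHO**2) * INV_PHI(u[1])  # conditional of x2 given x1
    return x1, x2

print(inverse_rosenblatt((0.5, 0.5)))     # (0.0, 0.0): the componentwise medians
print(inverse_rosenblatt((0.975, 0.5)))   # roughly (1.96, 1.57)
```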
a condition on which guarantees that is jordan measurable is given in section [ sec : gibbsexample ] .theorem [ thm : withhomecouple ] used coupling regions .these are somewhat special .but they do exist for some realistic mcmc algorithms .let be the update for the metropolized independence sampler on obtaining the proposal , where generates samples from the density , which are accepted when assume that the importance ratio is bounded above , that is , suppose also that there is a rectangle \subset[0,1]^{d-1} ] is a coupling region .the set has positive jordan measure .suppose that .then and so , regardless of .[ lem : slicecouple ] let be a density on a bounded rectangular region \subset{\mathbb{r}}^s ] be the domain of the inversive slice sampler .let for ^{s+1} ] , then . if , then and are in the set ] is ] . one could revise theorem [ thm : withhomecouple ] to include couplings that happen within some number of steps after happens . in this case, it is simpler to say that the chain whose update comprises two iterations of the inversive slice sampler satisfies theorem [ thm : withhomecouple ] . for a chainwhose update is just one iteration , the averages over odd and even numbered iterations both converge properly and so that chain is also consistent .alternatively , we could modify the space of values so that all ] .another transformation for the same distribution is and changing the conditional distribution of given on a set of measure leaves the distribution of unchanged . but for this version , we find can be discontinuous on more than a set of measure and so this inverse rosenblatt transformation of is not regular . in practice , of course , one would use the regular version of the transformation . butpropagating riemann integrability to a function built up from several other functions is not always straightforward .the core of the problem is that the composition of two riemann integrable functions need not be riemann integrable . as an example ,consider thomae s function on , where it is assumed that and in the representation have no common factors .this is continuous except on and so it is riemann integrable .the function is also riemann integrable . but for , which is famously not riemann integrable .the class of riemann integrable functions , while more restrictive than we might like for conclusions , is also too broad to use in propagation rules . first , we show that the acceptance - rejection step in metropolis hastings does not cause problems with riemann integrability .[ lem : accrej ] let and suppose that , and are real - valued riemann integrable functions on ^k ] define then is riemann integrable on ^{k+1} ] .riemann integrability of gives .similarly , .therefore , )=0 ] of into congruent subcubes ( whose boundaries overlap ) . then ] .it is enough to consider the components one at a time and in turn to show and are riemann integrable .however , as the example with thomae s function shows , even the indicator function of an interval applied to a riemann integrable function can give a non - riemann integrable composite function .we may avoid truncation by employing bounded continuous test functions .we will use the following simple corollary of lebesgue s theorem .[ lem : ctsofriemann ] for and , let be riemann integrable functions from ^k ] .let be a continuous function from ^k ] .because is continuous , . buttherefore , and so is riemann integrable by lebesgue s. we can also propagate riemann integrability through monotonicity . 
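the coupling argument behind the metropolized independence sampler example above can also be illustrated numerically : two copies of the sampler driven by the same uniforms coalesce the first time a proposal is accepted by both copies , and because the importance ratio is bounded this happens for a set of driving values of positive volume . the normal target , the laplace proposal and the starting points below are illustrative assumptions , not the setting of the lemmas above .

```python
import math
import random

def log_w(x):
    """log importance ratio log( pi(x) / p(x) ) up to an additive constant,
    for a standard normal target and a standard laplace proposal; it is
    bounded above, which is what the coupling argument requires."""
    return -0.5 * x * x + abs(x)

def laplace_inv_cdf(u):
    """inversion for the laplace proposal density p(x) = exp(-|x|)/2."""
    return math.log(2.0 * u) if u < 0.5 else -math.log(2.0 * (1.0 - u))

def mis_step(x, u1, u2):
    """one metropolized independence sampler step driven by (u1, u2)."""
    u1 = min(max(u1, 1e-12), 1.0 - 1e-12)
    y = laplace_inv_cdf(u1)
    accept = u2 < math.exp(min(0.0, log_w(y) - log_w(x)))
    return y if accept else x

x, x_other = -5.0, 7.0             # two copies started far apart
for i in range(1, 1001):
    u1, u2 = random.random(), random.random()
    x, x_other = mis_step(x, u1, u2), mis_step(x_other, u1, u2)
    if x == x_other:
        print("copies coupled after", i, "steps")
        break
```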
if is a monotone function from to and is the indicator of an interval , then is the indicator of an interval too , and hence is riemann integrable , when that interval is of finite length .[ lem : rosenblatt ] let be the cdf of and for , let be the conditional cdf of given .suppose that the cdfs are continuous functions of and that the quantile functions are continuous in \times{\mathbb{r}}^{j-1} ] , for , and so is .this latter only depends on , for , and so we write it as . for ,let .the set is the interval ] is jordan measurable because is .the boundary of is contained within the intersection of the graph of and the boundary of ^{k+1} ] , started with a riemann integrable function and continued via the metropolis hastings update .let be defined in terms of the proposal function ^{d-1}\to{\mathbb{r}}^s ] used in the rosenblatt chentsov transformation .we only need to show that is a riemann integrable function of ^{d(m+1)} ] , hence it is riemann integrable . now suppose that is a riemann integrable function on ^{dm} ] , so it ignores its last arguments .let be the proposal , .this is a continuous function of two riemann integrable functions on ^{d(m+1)-1} ] , and so is riemann integrable .then is a riemann integrable function on ^{dm+d} ] we have .therefore , there is a such that for all , where ^{dm}\dvtx \|{\mathbf{u}}^\ast-{\mathbf{v}}\|_{l_2 } < \delta\} ] .therefore , we have ^{dm}\dvtx d(\phi_m({\mathbf{x}},{\mathbf{u}}),\phi_m(\widehat{{\mathbf{x}}},{\mathbf{u } } ) ) = c \ } \bigr ) = 0,\ ] ] for any .the set of points where is discontinuous is given by ^{dm}\dvtx \forall\delta > 0\ \exists{\mathbf{v}},{\mathbf{v } } ' \in n_\delta({\mathbf{u } } ) \mbox { such that } \\ & & \hspace*{4.5pt } d(\phi_m({\mathbf{x}},{\mathbf{v}}),\phi_m(\widehat{{\mathbf{x}}},{\mathbf{v } } ) ) > \gamma^m \mbox { and } d(\phi_m({\mathbf{x}},{\mathbf{v}}'),\phi_m(\widehat{{\mathbf{x}}},{\mathbf{v } } ' ) ) \le \gamma^m \}.\end{aligned}\ ] ] as and ^{dm}\dvtx d(\phi_m({\mathbf{x}},{\mathbf{u}}),\phi_m(\widehat{{\mathbf{x}}},{\mathbf{u } } ) ) < \gamma^m \} ] are the same distribution , in that they can not be distinguished with positive probability from any countable sample of independent values .riemann integrals are usually defined for ^d ] or .these latter theories are designed for bounded functions . in monte carlo simulations ,sometimes values are produced .these end points can be problematic with inversion , where they may yield extended real values , and hence good practice is to select random number generators supported in the open interval .for our gibbs sampler example with the probit model , we required .this was necessary because otherwise the values might fail to belong to .our slice sampler example had equal to the bounded rectangle ] or other bounded test functions .also , the chain will not get stuck at an unbounded point .we have demonstrated that mcqmc algorithms formed by metropolis hastings updates driven by completely uniformly distributed points can consistently sample a continuous stationary distribution .some regularity conditions are required , but we have also shown that those conditions hold for many , though by no means all , mcmc updates . the result is a kind of ergodic theorem for qmc like the ones in and for finite state spaces . 
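as a concrete , if toy , illustration of the kind of algorithm covered by these results , the sketch below advances a chain through an update function applied to the previous state and the next point of the driving sequence , and the same code runs whether that sequence consists of i.i.d . pseudo - random numbers or of deterministic points . the ar(1) gaussian chain sampled by inversion ( d = 1 ) is chosen only because its stationary mean is known to be zero , and the additive ( weyl ) sequence is a stand - in that is not claimed to be completely uniformly distributed ; neither is one of the constructions studied in this paper .

```python
import random
from statistics import NormalDist

RHO, SIGMA = 0.5, 1.0          # illustrative parameters of the toy chain
PHI_INV = NormalDist().inv_cdf

def phi(x, u):
    """one transition of the toy chain, driven by a single uniform u."""
    u = min(max(u, 1e-12), 1.0 - 1e-12)   # keep the driver inside the open interval
    return RHO * x + SIGMA * PHI_INV(u)

def chain_average(f, driving_sequence, x0=0.0):
    """average f along the chain x_i = phi(x_{i-1}, u_i)."""
    x, total, n = x0, 0.0, 0
    for u in driving_sequence:
        x = phi(x, u)
        total += f(x)
        n += 1
    return total / n

n = 10_000
iid = (random.random() for _ in range(n))
weyl = (((i + 1) * 0.7548776662466927) % 1.0 for i in range(n))  # deterministic stand-in

print(chain_average(lambda x: x, iid))    # both estimate the stationary mean (zero)
print(chain_average(lambda x: x, weyl))
```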
when rqmc is used in place of qmc to drive an mcmc simulation , then instead of cud points , we need to use weakly cud points .these satisfy for all and all .our version of mcmc above leaves out some methods in which one or more components of are generated by acceptance - rejection sampling because then we can not assume . a modification based on splicing i.i.d . ] . for integers , let .if are completely uniformly distributed , then ^{dk} ] .let be the volume of .for integers , define on ^{rdk} ]. therefore , too , for otherwise some rectangle would get too few points .now , we prove the main theorems from section [ sec : consistency ] .proof of theorem [ thm : withhomecouple ] pick . now let and for define the sequence as the rosenblatt chentsov transformation of .suppose that is regular and for a bounded rectangle \subset{\mathbb{r}}^s ] .then where and for , notice that ] because ) ] when ^{d(m+1)} ] let }({\mathbf{x}}) ] and } ] or = \varnothing ] otherwise .let be such that and .now take .for large enough , we can take .then has volume at most .thus , implies that , which in turn implies that , and so .therefore , we have a similar argument shows that .combining the three bounds yields establishing consistency when the gibbs sampler is regular .since the result holds trivially for the function , the result follows .the coupling region in theorem [ thm : withhomecouple ] was replaced by a mean contraction assumption ^d } \log(\ell({\mathbf{u}}))\,{{d}}{\mathbf{u } } < 0 $ ] in theorem [ thm : withiterfunction ] . this way we obtain ( possibly different )coupling type regions for each .we remedy this situation by letting depend on , which in turn requires us to use a stronger assumption on the cud sequence , namely , that .we thank seth tribble , erich novak , ilya m. sobol and two anonymous reviewers for helpful comments .gelman , a. and shirley , k. ( 2010 ) .inference from simulations and monitoring convergence . in _ handbook of markov chain monte carlo : methods and applications_. ( s. brooks , a. gelman , g. jones and x .-meng , eds . ) 131143 . chapman and hall / crc press , boca raton , fl .lecuyer , p. and lemieux , c. ( 1999 ) .quasi - monte carlo via linear shift - register sequences . in _ proceedings of the 1999 winter simulation conference _( p. a. farrington , h. b. nembhard , d. t. sturrock and g. w. evans , eds . ) 632639 .ieee press , piscataway , nj . owen , a. b. ( 1995 ) .randomly permuted -nets and -sequences . in _monte carlo and quasi - monte carlo methods in scientific computing _ ( h. niederreiter and p. jau - shyong shiue , eds . ) 299317 .springer , new york .owen , a. b. ( 2005 ) .multidimensional variation for quasi - monte carlo . in_ contemporary multivariate analysis and design of experiments : in celebration of prof .kai - tai fang s 65th birthday _ ( j. fan and g. li , eds . ) .world sci .hackensack , nj .
the random numbers driving markov chain monte carlo ( mcmc ) simulation are usually modeled as independent random variables . tribble [ markov chain monte carlo algorithms using completely uniformly distributed driving sequences ( 2007 ) stanford univ . ] reports substantial improvements when those random numbers are replaced by carefully balanced inputs from completely uniformly distributed sequences . the previous theoretical justification for using anything other than i.i.d . points shows consistency for estimated means , but only applies for discrete stationary distributions . we extend those results to some mcmc algorithms for continuous stationary distributions . the main motivation is the search for quasi - monte carlo versions of mcmc . as a side benefit , the results also establish consistency for the usual method of using pseudo - random numbers in place of random ones .
micro - swimmers are of general interest lately , motivated by both engineering and biological problems . they can be remarkably subtle as was illustrated by e. m. purcell in his famous talk on life at low reynolds numbers " where he introduced a deceptively simple swimmer shown in fig .[ fig : purcell ] .purcell asked `` what will determine the direction this swimmer will swim ? ''this simple looking question took 15 years to answer : koehler , becker and stone found that the direction of swimming depends , among other things , on the stroke s _ amplitude _ : increasing the amplitudes of certain small strokes that propagate the swimmer to the right result in propagation to the left .this shows that even simple qualitative aspects of low reynolds number swimming can be quite un - intuitive .purcell s swimmer made of three slender rods can be readily analyzed numerically by solving three coupled , non - linear , first order , differential equations . however , at present there appears to be no general method that can be used to gain direct qualitative insight into the properties of the solutions of these equations .our first aim here is to to describe a geometric approach which allows one to describe the qualitative features of the solution of the swimming differential equations without actually solving them .our tools are geometric .the first tool is the notion of curvature borrowed from non - abelian gauge theory .this curvature can be represented graphically by landscape diagrams such as figs .[ fig : optstrkhh],[fig : fphieta=2],[fig : purcellcurvx ] which capture the qualitative properties of general swimming strokes .we have taken care not to assume any pre - existing knowledge about gauge theory on part of the reader .rather , we have attempted to use swimming as a natural setting where one can build and develop a picture of the notions of non - abelian gauge fields .purcell s original question , `` what will determine the direction this swimmer will swim ? ''can often be answered by simply looking at such landscape pictures .our second tool is a notion of metric and curvature associated with the dissipation .the `` dissipation curvature '' can be described as a landscape diagram and it gives information on the geometry of `` shape space '' .this gives us useful geometric tools that give qualitative information on the solutions of rather complicated differential equations .we begin by illustrating these geometric methods for the symmetric version of purcell s swimmer , shown in fig .[ fig : symmetric - p ] .symmetry protects the swimmer against rotations so it can only swim on a straight line .this makes it simple to analyze by elementary analytical means . in particular , it is possible to predict , using the landscape portraits of the swimming curvature fig .[ fig : optstrkhh ] , which way it will swim .in this ( abelian ) case the swimming curvature gives full quantitative information on the swimmer .we then turn to the non - abelian case of the usual purcell s swimmer which can also rotate .there are now several notions of swimming curvatures : the rotation curvature and the two translation curvature .the translation curvatures are non - abelian .this means that they give precise information of small strokes but this information can not be integrated to learn about large strokes. this can be viewed as a failure of stokes integration theorem .nevertheless , as we shall explain , they do give lots of qualitative information about large strokes as well . 
.the location of the swimmer in the plane is determined by two position coordinates and one orientation .the lengths of each arm is and the length of the body .,title="fig:",width=302 ] +purcell s swimmer , which was invented as the simplest animal that can swim that way " , is not simple to analyze . a variant of it that is simple to analyze is shown in the fig [ fig : symmetric - p ] . and the length of the body .the position of the swimmer is denoted by . , title="fig:",width=8 ] + the swimmer has four arms , each of length and one body arm of length , ( possibly of zero length ) .the swimmer can control the angles and the arms are not allowed to touch .both angles increase in the counterclockwise direction , .being symmetric , this swimmer can not rotate and can swim only in the `` body '' direction .it falls into the class of `` simple swimmers '' which includes the `` three linked spheres '' of najafi and glolestanian and the pushmepullyou , whose hydrodynamics is elementary because they can not turn .let us first address purcell s question `` what will determine the direction this swimmer will swim ? '' for the stroke shown in fig . [fig : optstrkhh ] . in the stroke ,the swimmer moves the two arms backwards together and then bring them forward one by one .the first half of the cycle pushes the swimmer forward and the second half pulls it back .which half of the cycle wins ? to answer that , one needs to remember that swimming at low reynolds numbers relies more on effective anchors than on good propellers .since one needs twice the force to drag a rod transversally than to drag it along its axis , an open arm acts like an anchor .this has the consequence that rowing with both arms , in the same direction and in phase , is _ less effective _ than bringing them back out of phase .the stroke actually swims backwards .this reasoning also shows that the swimmer is sisyphian : it performs a lot of forward and backward motion for little net gain .the swimming equation at low reynolds number is the requirement that the total force ( and torque ) on the swimmer is zero .the total force ( and torque ) is the sum of the forces ( and torques ) on the four arms and body . for the symmetric swimmer ,the torque and force in the transversal direction vanish by symmetry .the swimming equation is the condition that the force in the body - direction vanishes .this force depends linearly on the known rate of change of the controls , and the unknown the velocity of the `` body - rod '' .it gives a linear equation for the velocity .for slender arms and body , the forces are given by cox slender body theory : the element of force , , acting on a segment of length located at the point on the slender body is given by where is a unit tangent vector to the slender - body at and its velocity there . 
is the viscosity and the slenderness ( the ratio of length to diameter ) .the force on the a - th arm depends linearly on the velocities of the controls and swimming velocity .for example , the force component in the x - direction on the a - th arm takes the form _f_aj^x_j+f_a^xxx where are functions of the controls , given by elementary integrals [ eq : f_jk ] f_a^xx= k ( ^2_a-2 ) _ 0^ds , f_aj^x=2 k_j _ 0^s ds similar equations hold for the left arm and the body .the requirement that the total force on the swimmer vanishes gives a linear relation between the variation of the controls and the displacement where [ eq : a - def ] a(,)= , = as one expects , the body is just a `` dead weight '' and a trim swimmer with is best ..,title="fig:",width=453 ] + the notation stresses that the differential displacement does not integrate to a function of the controls , .this is the essence of swimming : fails to return to its original value with .for this reason , swimming is best captured not by the differential one - form but by the differential two - form and is the rational function given in eq .( [ eq : a - def ] ) . is commonly known as the curvature and its surface integral for a region enclosed by a curve gives , by stokes , the distance covered in one stroke .it gives complete information on both the direction of swimming and distance .the total curvature associated with the full square of shape space , is 0 by symmetry . the total positive curvature associated with the triangular half of the square, is 0.274 .this means that swimmer can swim , at most , about a quarter of its arms length in a single stroke . is a differential two form and as such is assigned a numerical value only when in comes with a region of integration . by itself, it has no numerical value . to say that the curvature is large at a given point in shape , or control , space requires fixing some a - priori measure .for example one can pick the flat measure for in which case gives numerical values for the curvature .however , one can pick instead the flat measure for one gets a different function .a natural measure on shape space is determined by the dissipation .we turn to it now .the power of swimming at low reynolds numbers is quadratic in the driving : where is a function on shape space and we use the summation convention where repeated indices are summed over .this suggests the natural metric in shape space is , in either coordinate systems , in particular , the associated area form is d_1d_2= d_1d_2 the curvature can now be assigned a natural numerical value [ eq : numerical - curvature ] 2 a^2(,)= 2 a^2(,)_1_2 each arm of the symmetric purcell swimmer dissipate energy at the rate [ eq : dissipation - arm ] -_0^df(s)vds & = & -k_0^ ( ( tv)^2 -2 vv)ds + & = & -k_0^ ( x^2 - 2 s^2 _j^2 ^2_j-2(s _ j_j - x)^2 ) ds + & = & 3 ( 3x^2 + 2 _ j^2 + 6 _ j x ) and the total energy dissipation by the arms is evidently [ eq : dissipation ] 3 ( 6x^2 + 2 ( _ 1 ^ 2+_2 ^ 2 ) + 6 ( _ 1+_2 ) x ) in a body - less swimmer , , this is also the total dissipation , and we consider this case from now on . since we are interested in the metric up to units , we shall henceforth set . ] . plugging the swimming equation , eq . ([ eq : dx ] ) gives : [ eq : metric ] g()=a(,0 ) ( cc 5 - 2_2 ^ 2 + _ 1 ^ 2 & _ 1 _ 2 +_ 1 _ 2 & 5 - 2_1 ^ 2 + _ 2 ^ 2 + ) , _ j=_j and is given in eq .( [ eq : a - def ] ) . 
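to make the stokes relation above concrete : the net displacement of a closed stroke is the line integral of the connection around the stroke , which equals the surface integral of the curvature f = d1 a2 - d2 a1 over the enclosed region . the sketch below ( python , with a placeholder connection ; the swimmer s rational function a(phi1 , phi2) of eq . ( [ eq : a - def ] ) would be substituted ) checks this equivalence numerically for a square stroke .

```python
import numpy as np

# placeholder connection (A1, A2); substitute the swimmer's A(phi1, phi2) here
def A1(p1, p2):
    return np.sin(p2) * np.cos(p1)

def A2(p1, p2):
    return -np.sin(p1) * np.cos(p2)

def stroke_displacement(a, n=4001):
    """line integral of A around the square stroke [-a, a]^2, counterclockwise"""
    t = np.linspace(-a, a, n)
    dX = np.trapz(A1(t, -a * np.ones_like(t)), t)      # bottom edge, phi1 increasing
    dX += np.trapz(A2(a * np.ones_like(t), t), t)      # right edge,  phi2 increasing
    dX -= np.trapz(A1(t, a * np.ones_like(t)), t)      # top edge,    phi1 decreasing
    dX -= np.trapz(A2(-a * np.ones_like(t), t), t)     # left edge,   phi2 decreasing
    return dX

def curvature_flux(a, n=400, h=1e-5):
    """surface integral of F = d1 A2 - d2 A1 over the same square (Stokes)"""
    p = np.linspace(-a, a, n)
    P1, P2 = np.meshgrid(p, p, indexing="ij")
    F = (A2(P1 + h, P2) - A2(P1 - h, P2)) / (2 * h) \
        - (A1(P1, P2 + h) - A1(P1, P2 - h)) / (2 * h)
    return np.sum(F) * (p[1] - p[0]) ** 2

a = 0.8
print(stroke_displacement(a), curvature_flux(a))   # the two agree in the abelian case
```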
in particular, is a smooth function on shape space while is singular at the boundaries .one can now meaningfully plot the curvature which is shown in fig .[ fig : optstrkhh ] efficient swimming covers the largest distance for given energy resource and at a given speed .alternatively , it minimizes the energy needed for covering a given distance at a given speed . fixing the speed for a given distance is equivalent to fixing the time . in this formulationthe variational problem takes the form of a problem in lagrangian mechanics of minimizing the action where q is a lagrange multiplier . is given in eq .( [ eq : dx ] ) . this can be interpreted as a motion of a a charged particle on a curved surface in an external magnetic field .conservation of energy then says that the solution has constant speed ( in the metric ) . for a closed path ,the kinetic term is then proportional to the length of the path and the constraint is the flux enclosed by it .thus the variational problem can be rephrase geometrically as the isoperimetric problem " : find the shortest path that encloses the most flux .the charged particle moves on a curved surface .how does this surface look like ? from the dissipation metric we can calculate , using brioschi formula , the gaussian curvature ( not to be confused with ) of the surface . a plot of it is given in fig .[ fig : gausscurv ] . + inspection of fig .[ fig : optstrkhh ] suggest that pretty good strokes are those that enclose only one sign of the curvature .the actual optimal stroke can only be found numerically .it is plotted in figure [ fig : optstrkhh ] .the efficiency for this stroke is about the same as the efficiency of the purcell s swimmer for rectangular strokes of , but less than the optimally efficient strokes found in .cox theory does not allow the arms to get too close .how close they are allowed to get depends on the slenderness .the smallest angle allowed must be such that . as the optimal stroke gets quite close to the boundary , with it can be taken seriously only for sufficiently slender bodies with , which is huge .the optimal stroke is therefore more of mathematical than physical interest .one can use a refine slender body approximation by taking high order terms in cox s expansion for the force .this will leave the structure without changes , but will made eq .[ eq : f - yosi ] and eq .[ eq : metric ] much more complicated . from a mathematical point of viewit is actually quite remarkable that a minimizer exists . by thiswe mean that the optimal stroke does not hit the boundary of shape space where cox theory is squeezed out of existence .this can be seen from the following argument .inspection of eq .( [ eq : numerical - curvature ] ) and eq .( [ eq : metric ] ) shows that the curvature vanishes linearly near the boundary of shape space ( this is most easily seen in the coordinates ) .suppose now that the optimal path ran along the boundary .shifting the path a distance away from the boundary would shorten it linearly in while the change in the flux integral will be only quadratic .this shows that the path that hits the boundary can not be a minimizer .purcell swimmer can move in either direction in the plane and can also rotate . 
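as an aside on the computation just described : the gaussian curvature of a two - dimensional metric e du^2 + 2f du dv + g dv^2 can be obtained symbolically from the brioschi formula , which needs only e , f , g and their first and second derivatives . the snippet below is a generic sketch of that computation in sympy ( the round sphere serves as a sanity check ; the swimmer s dissipation metric of eq . ( [ eq : metric ] ) would be substituted for e , f , g ) .

```python
import sympy as sp

def brioschi_gaussian_curvature(E, F, G, u, v):
    """Gaussian curvature of the metric E du^2 + 2F du dv + G dv^2 (Brioschi formula)."""
    Eu, Ev = sp.diff(E, u), sp.diff(E, v)
    Gu, Gv = sp.diff(G, u), sp.diff(G, v)
    Fu, Fv = sp.diff(F, u), sp.diff(F, v)
    Evv, Guu, Fuv = sp.diff(E, v, 2), sp.diff(G, u, 2), sp.diff(F, u, v)
    M1 = sp.Matrix([[-Evv/2 + Fuv - Guu/2, Eu/2, Fu - Ev/2],
                    [Fv - Gu/2,            E,    F        ],
                    [Gv/2,                 F,    G        ]])
    M2 = sp.Matrix([[0,    Ev/2, Gu/2],
                    [Ev/2, E,    F   ],
                    [Gu/2, F,    G   ]])
    return sp.simplify((M1.det() - M2.det()) / (E*G - F**2)**2)

u, v, R = sp.symbols('u v R', positive=True)
# sanity check: a sphere of radius R has constant Gaussian curvature 1/R^2
K_sphere = brioschi_gaussian_curvature(R**2, 0, (R*sp.sin(u))**2, u, v)
print(K_sphere)   # R**(-2)
```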
since the euclidean group is not abelian ( rotations and translations do not commute ) the notion of `` swimming curvature '' that proved to be so useful in the abelian case needs to be modified .as we shall explain , landscape figures can be used to give qualitative geometric understanding of the swimming and in particular can be used to answer purcell question `` what will determine the direction this swimmer will swim ? '' . however , unlike the abelian case , the swimming curvature does not give full quantitative information on the swimming and one can not avoid solving a system of differential equations in this case if one is interested in quantitative details . the location and orientation of the swimmer ( in the lab frame ) shall be denoted by the triplet where is the orientation of the swimmer , see fig .[ fig : purcell ] , and are cartesian coordinates of the center of the `` body . '' .] we use super - indices and greek letters to designate the response while lower indices and roman characters designate the controls .the common approach to low reynolds numbers swimming is to write the equations of motion in a fixed , lab frame .we first review this and then describe an alternate approach where the equations of motions are written in a frame that instantaneously coincides with the swimmer . by general principles of low reynolds numbers hydrodynamicsthere is a linear relations between the change in the controls increases counterclockwise and clockwise . ] and the response [ eq : linear - response ] -.1em.8ex-.6emdx^=a^_j d_j , ( summation over repeated indices implied . ) note that since there are two controls , while for the three responses .the response coefficients are functions of both the control coordinates and the location coordinates of the swimmer in the lab .however , in a homogeneous medium it is clear that can only be a function of the orientation .moreover , in an isotropic medium it can only dependent on the orientation through [ eq : ra ] ^_i ( , ) = r^ ( ) a^_i ( ) ; r^ ( ) = ( * 20c 1 & 0 & 0 + 0 & & - + 0 & & + ) in the lab frame , the nature of the solution of the differential equations is obscured by the fact that one can not determine independently for different points on the stroke ( because of the dependence on ) .the coefficients may be viewed as the transport coefficients in a rest frame that instantaneously coincides with the swimmer .they play a key role in the geometric picture that we shall now describe . in the frame of the swimmer onehas [ eq : linear - response - non - ab ] -.1em.8ex-.6emdy^= a^_j d_j which is an equation that is fully determined by the controls. the price one pays is that the coordinates can not be simply added to calculate the total change in a stroke , since one has to consider the changes in the reference frame as well . 
in order to do that, must be viewed as ( infinitesimal ) elements of the euclidean group [ eq : euclidean ] e(y^)= ( ccc y^0 & y^0 & y^1 + - y^0 & y^0 & y^2 + 0 & 0 & 1 + ) the composition of along a stroke is a matrix multiplication [ eq : matrix - prod ] e()= _ e(dy^ ( ) ) the product is , of course , non commutative .we denote generators of translations and rotations by [ eq : translations ] e^0= ( ccc 0 & 1 & 0 + -1 & 0 & 0 + 0 & 0 & 0 + ) , e^1= ( ccc 0 & 0 & 1 + 0 & 0 & 0 + 0 & 0 & 0 + ) , e^2= ( ccc 0 & 0 & 0 + 0 & 0 & 1 + 0 & 0 & 0 + ) they satisfy the lie algebra [ e^0,e^]=-^0e^,=0 where is the completely anti - symmetric tensor .one can write eq .( [ eq : linear - response - non - ab ] ) concisely as a matrix equation [ eq : matrix - form ] -.1em.8ex-.6emdy = a_j d_j,-.1em.8ex-.6emdy= y^e^ , a_j = a_j^e^ where and are matrices ( summation over repeated indices implied ) . ,plotted with the flat measure on ,title="fig:",width=453 ] + plotted using the measure induced by dissipation , title="fig:",width=377 ] + once the ( six ) transport coefficients are known , one can , in principle , simply integrate the system of three , first order , non - linear ordinary differential equations , eq .( [ eq : linear - response ] ) . this can normally be done only numerically .numerical integration is practical and useful , but not directly insightful .we want to describe tools that allow for a qualitative understating swimming in the plane without actually solving any differential equation .low reynolds numbers swimmers perform lots of mutually cancelling maneuvers with a small net effect .the swimming curvature measure only what fails to cancel for infinitesimal strokes . since reversinga loop reverses the response , it is natural to expect , that for a closed ( square ) loop is proportional to the area form . integrating eq .( [ eq : matrix - form ] ) around a closed infinitesimal loop gives y = d_1d_2 where = _ 1 a_2- _ 2 a_1 -[a_1,a_2 ] , = ^e^,_j= and are matrices . has the structure of curvature of a non - abelian gauge field . in coordinates, this reads ^= _ 1 a^_2- _ 2 a^_1 + ^0( a^0_1 a^_2 - a^0_2 a^_1 ) , in the lab coordinates one has , of course , x^=^d_1d_2 , ^= r^ ( ) ^ ( ) , the curvature is abelian when the commutator vanishes .this is the case in eq .( [ eq : numerical - curvature ] ) and it is also the case for the rotational curvature .the abelian curvature gives full information on the swimming of finite stroke by simple application of stokes formula .this is , unfortunately , not the case in the non - abelian case .one can not reconstruct the translational motion of a large stroke from the infinitesimal closed strokes because stokes theorem only works for commutative coordinates and are not .+ for purcell swimmer , although explicit , is rather complicated . since the dissipation metric is complicated too , we give two plots of : figs .( [ fig : fphieta=2 ] , [ fig : purcellcurvx],[fig : purcellcurvy ] ) give the curvature relative to the flat measure on , and describe how far the swimmer swims for small strokes . in figs .( [ fig : fphinorm ] , [ fig : purcellcurvxnorm],[fig : purcellfynorm ] ) the curvature is plotted relative to the dissipation measure and it displays the energy efficiency of strokes . 
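the group - composition bookkeeping of eq . ( [ eq : matrix - prod ] ) is simple to mirror in code : body - frame increments are composed as matrices , not added componentwise . a minimal sketch ( the two increments below are made up , just to exhibit the non - commutativity ; the matrix conventions are the generic ones for planar rigid motions , not necessarily those of eq . ( [ eq : euclidean ] ) ) :

```python
import numpy as np

def se2_matrix(dtheta, dx, dy):
    """Homogeneous 3x3 matrix for a small planar rotation + translation."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

def compose_stroke(increments):
    """Compose body-frame increments (dtheta, dx, dy) into a net lab-frame motion."""
    g = np.eye(3)
    for dtheta, dx, dy in increments:
        g = g @ se2_matrix(dtheta, dx, dy)   # right-multiplication: increment in the current body frame
    theta = np.arctan2(g[1, 0], g[0, 0])
    return theta, g[0, 2], g[1, 2]

# the same two increments applied in opposite order give different net motions
a = (0.3, 0.1, 0.0)   # rotate a little, then step along the body axis
b = (0.0, 0.0, 0.2)   # step sideways in the body frame
print(compose_stroke([a, b]))
print(compose_stroke([b, a]))
```

reversing the order of the same two increments changes the net translation , which is precisely why stokes integration fails for the translational part of the motion .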
using cox theory , in a manner analogous to what was done in for the symmetric swimmer , one can calculate explicitly the force ( and torque ) on the -th rod in the form f_a^= f_aj^_j + f_a^ x^ where are explicit and relatively simple functions of the controls ( compare with eq .( [ eq : f_jk ] ) ) . the swimming equation are then given by _a f_a^= ( _ af_aj^)_j + ( _ a f_a^ ) x^=0 this reduces the problem of finding the connections to a problem in linear algebra .formally ^_j=(_a f_a^)^-1 ( _ af_aj^ ) where the bracket on the left is interpreted as a matrix , with entries , and the inverse means an inverse in the sense of matrices .although this is an inverse of only a matrix the resulting expressions are not very insightful .we spare the reader this ugliness which is best done using a computer program . picking the center point of the body as the reference fiducial point is , in the terminology of wilczek and shapere a choice of gauge .this particular choice is nice because it implies symmetries of the connection .observe first that the interchange corresponds to a rotation of the swimmer by . plugging this in eq .( [ eq : ra ] ) one finds [ eq : symmetries - r ] a^_1 ( _ 1,_2)=\ { ll + a^_2 ( -_2,-_1 ) , & + -a^_2 ( -_2,-_1 ) , & + .this relates the two half of the square divided by the diagonal .a second symmetry comes from the interchange corresponding to the reflection of the swimmer around the central vertical of the middle link . some reflection shows then that [ eq : symmetries-2 ] a^_1 ( _ 1,_2)=\ { ll + a^_2 ( _ 2,_1 ) , & + -a^_2 ( _ 2,_1 ) , & + .this relates the two halves of the square divided by the diagonal .the symmetries can be combined to yield the result that and are anti - symmetric and is symmetric under inversion a^0_j()=-a^0_j(- ) , a^1_j()=a^1_j(- ) , a^2_j()=-a^2_j(- ) the rotational motion of purcell swimmer , in any finite stroke , is fully captured by the abelian curvature ^0 = ^0=_1 a^0_2- _ 2 a^0_1 this reflects the fact that rotations in the plane are commutative . the symmetry of eq .( [ eq : symmetries - r ] ) implies [ eq : symmetries - r - d ] ( _ 2a^0_1 ) ( _ 1,_2)= ( _ 1a^0_2 ) ( -_2,-_1 ) and this says that _ ani - symmetric _ under reflection in the diagonal .similarly , eq . ( [ eq : symmetries-2 ] ) implies [ eq : symmetries-2 ] ( _ 2a^0_1 ) ( _ 1,_2)= -(_1a^0_2 ) ( _ 2,_1 ) , and this says that is _ symmetric _ about the line .( [ fig : fphieta=2 ] ) is a plot of the curvature and it clearly has the requisite symmetries . the total curvature associated with the full square of shape space vanishes ( by symmetry ) . for , one can see in fig .[ fig : fphieta=2 ] three positive islands surrounded by three negative lakes .the total curvature associated with the three islands is quite small , about 0.1 .this means that purcell swimmer with turns only a small fraction of a circle in any full stroke . 
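the reduction to linear algebra described above , solving the zero - total - force condition for the rigid - body velocity , can be sketched as follows ( python / numpy ; the resistance matrices here are invertible placeholders standing in for the per - rod slender - body integrals , so only the structure a(phi) = -m(phi)^{-1} n(phi) and the lab - frame integration are illustrated ) :

```python
import numpy as np

def resistance_matrices(phi):
    """Placeholder shape-dependent resistance matrices.

    M (3x3) multiplies the unknown body velocity (theta_dot, x_dot, y_dot);
    N (3x2) multiplies the control rates (phi1_dot, phi2_dot).  In the actual
    problem both come from summing the per-rod slender-body force integrals.
    """
    p1, p2 = phi
    M = np.diag([3.0 + np.cos(p1 - p2), 4.0, 5.0])          # made-up, invertible
    N = np.array([[np.sin(p1), -np.sin(p2)],
                  [np.cos(p1),  np.cos(p2)],
                  [0.2,         -0.2      ]])
    return M, N

def connection(phi):
    """Local connection A(phi): body velocity = A(phi) @ phi_dot."""
    M, N = resistance_matrices(phi)
    return -np.linalg.solve(M, N)            # 3x2 matrix of transport coefficients

def integrate_stroke(phi_of_t, t_grid):
    """Integrate d(theta, x, y) = R(theta) A(phi) dphi along a stroke (lab frame)."""
    state = np.zeros(3)                       # (theta, x, y)
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        phi0, phi1 = phi_of_t(t0), phi_of_t(t1)
        body = connection(0.5 * (phi0 + phi1)) @ (phi1 - phi0)   # (dtheta, dx, dy) in the body frame
        c, s = np.cos(state[0]), np.sin(state[0])
        state += np.array([body[0], c * body[1] - s * body[2], s * body[1] + c * body[2]])
    return state

stroke = lambda t: np.array([0.8 * np.cos(t), 0.8 * np.sin(t)])   # a closed circular stroke
print(integrate_stroke(stroke, np.linspace(0.0, 2 * np.pi, 2000)))
```

a closed stroke returns phi to its starting value while the integrated ( theta , x , y ) in general does not ; that residue is the swimming .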
shown with the the tam and hosoi optimal distance stroke .the curvature is given relative to the flat measure in ., title="fig:",width=529 ] + .the curvature is given relative to the flat measure in ., width=453 ] the curvatures corresponding to the two translations of a swimmer with are shown in figs .( [ fig : purcellcurvx],[fig : purcellcurvy],[fig : purcellcurvxnorm],[fig : purcellfynorm])(here we use for comparison with ) .the symmetries of the figures are a consequence of eqs .( [ eq : symmetries - r],[eq : symmetries-2 ] ) .form the first we have [ eq : symmetries-3 ] ( _ 2a^_1 ) ( _ 1,_2)=- ( _ 1a^_2 ) ( -_2,-_1 ) , = 1,2 which implies that are _ symmetric _ under reflection in the diagonal . similarly , from the eq .( [ eq : symmetries-2 ] ) we have [ eq : symmetries-4 ] ( _ 2a^_1 ) ( _ 1,_2)=\ { ll + ( _ 1a^2_2 ) ( _ 2,_1 ) , & + -(_1a^1_2 ) ( _ 2,_1 ) , & + .this says that _ symmetric _ and _ anti - symmetric _ under reflection in the diagonal .the curvatures for the translations is non - abelian and can not be used to _ calculate _ the swimming distances for _ finite _ strokes because the stokes theorem fails .the landscape figures for the translational curvatures provide precise information on the swimming distance for infinitesimal strokes .they are then also useful to characterize small strokes .the question is how small is small ? for a stroke of size , the controls are of size and the swimming distance measures by the curvature is .the error in this has terms .] of the form .this suggest that the relative error in the swimming distance as measured a finite stroke is of the order .hence , a stroke is small provided .clearly , a purcell swimmer swims substantially less than an arm length as the arm moves .this says that and so strokes of the order of a radian can be viewed as small strokes .the x - curvature is symmetric under inversion ^1()= ^1(- ) since both and are antisymmetric under inversion , one sees that the non - abelian part of the x - curvature is of order near the origin . the x - translational curvature , which is non - zero near the origin , is almost abelian for small strokes .the y - curvature , in contrast , is anti - symmetric under inversion ^12()= -^2(- ) and so vanishes linearly at the origin .the non - abelian part is also anti - symmetric under inversion , and it too vanishes linearly .the y - curvature is therefore _ not _ approximately abelian for small strokes , but it is small . the swimming direction can be easily determined for those strokes that live in a region where the translational curvature has a fixed sign . this answers purcell s question for many strokes .strokes the enclose both signs of the curvature are subtle .purcell s swimmer can reverse its direction of propagation by increasing the stroke amplitude .this can be seen from the landscape diagram , fig .( [ fig : purcellcurvx ] ) : small square strokes near the origin sample only slightly negative curvature .as the stroke amplitude increases the square gets larger and begins to sample regions where the curvature has the opposite sign , eventually sampling regions with substantial positive curvature .the curvature landscapes are useful when one wants to search for optimal strokes as they provide an initial guess for the stroke .( this initial guess can then be improved by standard optimization numerical methods . ) for example , tam and hosi looked for strokes that cover the largest possible distance . 
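before turning to the optimization , the amplitude - driven reversal can be mimicked numerically : integrate a curvature landscape over square strokes of growing size and watch the sign of the enclosed flux change once the stroke starts to sample the opposite - sign regions . the landscape below is a toy stand - in ( weakly negative near the origin , a positive island farther out ) , not the actual x - translation curvature of fig . ( [ fig : purcellcurvx ] ) .

```python
import numpy as np

def toy_curvature(p1, p2):
    """Toy landscape: weakly negative near the origin, a positive island farther out."""
    return -0.05 * np.exp(-(p1**2 + p2**2)) + 0.2 * np.exp(-((p1 - 1.5)**2 + (p2 - 1.5)**2))

def flux_over_square(a, n=300):
    p = np.linspace(-a, a, n)
    P1, P2 = np.meshgrid(p, p, indexing="ij")
    return np.sum(toy_curvature(P1, P2)) * (p[1] - p[0])**2

for a in (0.3, 0.8, 1.5, 2.2):
    print(a, flux_over_square(a))   # the sign of the enclosed flux flips as the amplitude grows
```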
for strokes near the origin , a local optimizer is the stroke that bounds the approximate square blue region in fig . [ fig : purcellcurvx ] ( in this case ) . the curvature normalized by dissipation , fig . [ fig : purcellcurvxnorm ] , gives a guide for finding efficient small strokes . caution is needed , since while the displacement can be approximated from the enclosed surface area , the energy dissipation is proportional to the stroke s length and not to the stroke s area . in regimes where the gaussian curvature of the dissipation ( fig . [ fig : gausscurvpurcell ] ) is positive , it is possible to have strokes of small length which bound a large area . in the case of purcell s swimmer , this suggests two possible regimes : around the origin and the positive curvature island in the upper left ( lower right ) corner of fig . [ fig : gausscurvpurcell ] . the optimizer near the origin is the optimal stroke found in , while the optimizer in the upper left corner , although more efficient ( note the values of in fig . [ fig : purcellfynorm ] ) , is of mathematical interest only , since it lies near the boundary , where the first order slender body approximation eq . ( [ eq : cox ] ) is valid only for extremely slender bodies .
we develop a qualitative geometric approach to swimming at low reynolds number which avoids solving differential equations and instead uses landscape figures of two notions of curvature : the swimming curvature and the curvature derived from dissipation . this approach gives complete information for swimmers that swim on a line without rotations and gives the main qualitative features for general swimmers that can also rotate . we illustrate this approach for a symmetric version of purcell s swimmer which we solve by elementary analytical means within slender body theory . we then apply the theory to derive the basic qualitative properties of purcell s swimmer .
in the last few decades the advances in the observational cosmology have led the field to its `` golden age . ''cosmologists are beginning to nail down the basic cosmological parameters , and have started asking questions about the nature of the initial conditions provided by inflation , which apart from solving the flatness and horizon problem , also gives a mechanism for producing the seed perturbations for structure formation , and other testable predictions .the main predictions of a canonical inflation model are : ( i ) spatial flatness of the observable universe , ( ii ) homogeneity and isotropy on large scales of the observable universe , ( iii ) nearly scale invariant and adiabatic primordial density perturbations , and ( iv ) primordial perturbations to be very close to gaussian .the cosmic microwave background ( cmb ) data from the wilkinson anisotropy probe ( wmap ) , both temperature and polarization anisotropies , have provided hitherto the strongest constraints on these predictions .there is no observational evidence against simple inflation models .non - gaussianity from the simplest inflation models that are based on a slowly rolling scalar field is very small ; however , a very large class of more general models with , e.g. , multiple scalar fields , features in inflaton potential , non - adiabatic fluctuations , non - canonical kinetic terms , deviations from bunch - davies vacuum , among others ( * ? ? ?* for a review and references therein ) generates substantially higher amounts of non - gaussianity .the amplitude of non - gaussianity constrained from the data is often quoted in terms of a non - linearity parameter ( defined in section [ model ] ) .many efficent methods for evealuating bispectum of cmb temperature anisotropies exist . so far , the bispectrum tests of non - gaussianity have not detected any significant in temperature fluctuations mapped by cobe and wmap .different models of inflation predict different amounts of , starting from to , above which values have been excluded by the wmap data already . on the other hand ,some authors have claimed non - gaussian signatures in the wmap temperature data .these signatures can not be characterized by and are consistent with non - detection of .currently the constraints on the come from temperature anisotropy data alone . by also having the polarization information in the cosmic microwave background, one can improve sensivity to primordial fluctuations .although the experiments have alrady started characterizing polarization anisotropies , the errors are large in comparison to temperature anisotropy .the upcoming experiments such as planck will characterize polarization anisotropy to high accuracy .are we ready to use future polarization data for testing gaussianity of primordial fluctuations ?do we have a fast estimator which allows us to measure from the combined analysis of temperature and polarization data ? 
in this paperwe extend the fast cubic estimator of from the temperature data and derive a fast way for measuring primordial non - gaussianity using the cosmic microwave background temperature and polarization maps .we construct a cubic statistics , a cubic combination of ( appropriately filtered ) temperature and polarization maps , which is specifically sensitive to the primordial perturbations .this is done by reconstructing a map of primordial perturbations , and using that to define our estimator .we also show that the inverse of the covariance matrix for the optimal estimator is the same as the product of inverses we get in the fast estimator .our estimator takes only operations in comparison to the full bispectrum calculation which takes operations . here refers to the total number of pixels . for planck , andso the full bispectrum analysis is not feasible while ours is .the harmonic coefficients of the cmb anisotropy can be related to the primordial fluctuation as : where and are the harmonic coefficients of the primordial curvature perturbations and the primordial isocurvature perturbations respectively at a given comoving distance ; is the radiation transfer function of either adiabatic or isocurvature perturbations ; x refers to either t or e ; and is the bessel function of order . a beam function and the harmonic coefficient of noise instrumental and observational effects .( [ phi_alm ] ) is written for flat background , but can easily be generalized . any non - gaussianity present in the primordial perturbations or , can get transfered to the observed cmb i.e. , via eq .( [ phi_alm ] ) . due to the smallness of isocurvature contribution over the curvature perturbation we will drop the isocurvature contribution from eq .( [ phi_alm ] ) and further we will use a popular and simple non - gaussianity model given by where is the linear gaussian part of the perturbations , and is a non - linear coupling constatnt characterizing the amplitude of the non - gausianity .the bispectrum in this model can be written as : the above form of the bispectrum is specific to the model chosen and so in general the constrains on do not necessarily tell us about non - gaussianity of other models .this model , for instance , fails completely when non - gaussianity is localized in a specific range in space , the case that is predicted from inflation models with features in inflaton potential . even for the simplest inflation models based on a slowly rolling scalar field ,the bispectrum of _ inflaton perturbations _ yields a non - trivial scale dependence of , although the amplitude is too small to detect . 
on the other hand the bispectrum of _ curvature perturbations _contains the contribution after the horizon exit , which is non - zero even when inflaton perturbations are exactly gaussian .this contribution actually follows eq .( [ phi_bispec ] ) .curvaton models also yield the bispectrum in the form given by eq .( [ phi_bispec ] ) .reconstruction of primordial perturbations from the cosmological data allows us to be more sensitive to primordial non - gaussianity , which is important because non - gaussianity in the cosmic microwave background data does not necessarily imply the presence of primordial non - gaussianity .the method of reconstruction from temperature data is described in and that from the combined analysis of temperature and polarization data is described in , where we reconstruct the perturbations using an operator .these operators are given by : where is the power spectrum of the primordial curvature perturbations , and is the noise covariance matrix .the primordial perturbation then can be estimated as : figure 1 shows the improvement in reconstruction due to additional information from the cmb polarization .we can construct a quantity that is analogous to the ksw fast estimator of from temperature data but generalize it to include polarization data as well .this quantity has a simple interpretation in terms of the tomographic reconstruction of the primoridal potential described in .it is the radial integral of a cubic combination of the scalar potential reconstructions , the b terms with the analogous a term . as in , we form a cubic statistics given by where , and is a fraction of sky .index and can either be or .we find ( see appendix a ) that reduces to where is : 1 when , 6 when , and 2 otherwise ; is the theoretical bispectrum for , and is given by : where since is proportional to , an unbiased estimator of can be written as : the most time consuming part of the calculation is the harmonic transformation necessary for eqs .( [ b ] ) and([a ] ) .the fast estimator defined above takes only operations times the number of sampling points for , which is of the order 100 .hence this is much faster than the full bispectrum analysis , which scales as . here is the number of pixels . in the next sectionwe will show that this fast unbiased estimator is also optimal , by proving equivalence with a known optimal but slow estimator. in their recent paper babich and zaldarriaga have found an optimal estimator for which minimizes the expected given by this optimal estimator is where the index and runs over , unlike in the fast estimator case where and run over all the 8 combinations . in appendixb we show that the inverse of the covariance matrix for the bz estimator is the same as the product of inverses we get in the fast estimator , eq . [ fastestim ] ; hence our estimator is optimalto test optimality of our fast estimator we use eqs .( [ fnl_estimate ] ) and ( [ s_prim ] ) to measure from simulated gaussian skies .the error bars on are then derived from monte carlo simulations of our fast estimator .the simulated errors are compared with the cramer rao bound , where is a fraction of sky , and is the fisher matrix given by : where is : 1 when , 6 when , and 2 otherwise .we have used model with , and a constant scalar spectral index . 
since the contribution to the integral in eqs .[ b ] and [ a ] comes mostly from the decoupling epoch ( for our simulations ) , our integration limits are mpc and mpc , with the sampling mpc .refining the sampling of the integral by a factor of 2 does not change our results significantly .the results are summarized in figure [ fig3 ] .we separately explore the effects of excluding foreground contaminated sky regions and noise degradation due to the instrument .for definiteness we have used the published planck noise amplitudes which are described in table 2 , though we ignore the effect of the scanning strategy and noise correlations .we shall come back to this point in [ sec : conclusions ] .we find that the low modes of the polarization bispectrum are contaminated significantly when .this is illustrated by the lower right panel of figure [ fig3 ] , where we sum over all the modes for contribution to the bispectrum ( because the temperature bispectrum is not contaminated by the sky cut ) , while varying for the other terms ( ) .our results show that one may simply remove the contamination in the polarization bispectrum due to the sky cut by removing ( when ) without sacrificing sensitivity to .the contamination appears to be less when noise is added , as the dominant constraint still comes from the temperature data which are insensitive to the contamination due to the sky cut .we can explain this behaviour simply in terms of the coupling between spherical harmonic modes induced by the sky cut .the key observation is that the low polarization spectrum is very small in the theoretical model we chose , but if it could be observed it would add information to the estimator .therefore , for a noiseless experiment , an optimal estimator designed for full sky work will assign a large weight to the low polarization modes .since the low spectrum rises very steeply towards higher a sky cut creates power leakage from higher to lower .this biases the low spectrum significantly and this bias is amplified by the large coefficient the estimator assigns to these modes . in a realistic experiment including noise , the low polarization modes are difficult to measure , since noise dominates .accordingly , the estimator assigns small weights to the low polarization modes and the sky cut has a much less prominent effect .our fast estimator takes only operations times the number of sampling points for the integral over , which is of the order 100 .hence this is much faster than the full bispectrum analysis as discussed in , which goes as . here is the number of pixels . 
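a schematic of how the cubic statistic of eqs . ( [ s_prim ] ) , ( [ b ] ) and ( [ a ] ) is assembled in practice , using healpy for the spherical harmonic transforms . everything experiment - specific , the filter functions alpha_l(r) and beta_l(r) built from the radiation transfer function , the c_l used for inverse - variance filtering , and the radial quadrature , is assumed to be precomputed and is represented by placeholder arrays ; the sketch is temperature - only and shows only the structure ( one pair of harmonic transforms per radial shell , hence the favourable scaling ) , not a production pipeline .

```python
import numpy as np
import healpy as hp

def fast_cubic_statistic(sky_map, cl, alpha_lr, beta_lr, r_grid, r_weights):
    """KSW-style cubic statistic  S = int r^2 dr int dOmega A(r,n) B(r,n)^2.

    sky_map            : observed temperature map
    cl                 : power spectrum used for inverse-variance filtering
    alpha_lr, beta_lr  : arrays of shape (n_r, lmax+1), placeholders for alpha_l(r), beta_l(r)
    r_grid, r_weights  : radial quadrature nodes and weights (assumed precomputed)
    """
    nside = hp.get_nside(sky_map)
    lmax = alpha_lr.shape[1] - 1
    alm = hp.map2alm(sky_map, lmax=lmax)
    inv_cl = np.zeros_like(cl)
    inv_cl[cl > 0] = 1.0 / cl[cl > 0]
    pix_area = hp.nside2pixarea(nside)
    S = 0.0
    for i, (r, w) in enumerate(zip(r_grid, r_weights)):
        A_map = hp.alm2map(hp.almxfl(alm, alpha_lr[i] * inv_cl), nside)
        B_map = hp.alm2map(hp.almxfl(alm, beta_lr[i] * inv_cl), nside)
        S += w * r**2 * np.sum(A_map * B_map**2) * pix_area
    return S

# illustrative call with synthetic inputs (all numbers made up)
nside, lmax, n_r = 64, 128, 8
cl = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2
test_map = hp.synfast(cl, nside, lmax=lmax)
alpha = np.ones((n_r, lmax + 1))           # stand-ins for the true alpha_l(r), beta_l(r)
beta = np.ones((n_r, lmax + 1))
r = np.linspace(13000.0, 15000.0, n_r)
w = np.full(n_r, r[1] - r[0])
print(fast_cubic_statistic(test_map, cl, alpha, beta, r, w))
```

the estimate of f_nl is then s divided by a normalization built from the theoretical bispectrum , as in eq . ( [ fnl_estimate ] ) .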
for planckwe expect , so performing 100 simulations using 50 cpus takes only 10 hours using our fast estimator , while we estimate it would take approximately years to do the brute force bispectrum calculation using the same platform ..planck noise properties assumed for our analysis[tbl-2 ] [ cols="^,^,^,^,^,^,^,^ " , ] +starting with the tomographic reconstruction approach we have found a fast , feasible , and optimal estimator of , a parameter characterizing the amplitude of primordial non - gaussianity , based on three - point correlations in the temperature and polarization anisotropies of the cosmic microwave background .using the example of the planck mission our estimator is faster by factors of order than the estimator described by babich and zaldarriaga , and yet provides essentially identical error bars .the speed of our estimator allows us to study its statistical properties using monte carlo simulations .we have explored the effects of instrument noise ( assuming homogeneous noise ) , finite resolution , as well as sky cut .we conclude that our fast estimator is robust to these effects and extracts information optimally when compared to the cramer rao bound , in the limit of homogeneous noise .we have uncovered a potential systematic effect that is important for instruments measuring polarization with extremely high signal - to - noise on large scales .the inevitable removal of contaminated portions of the sky causes any estimator based on the pseudo - bispectrum to be contaminated by mode - to - mode couplings at low .we have demonstrated that by simply excluding low polarization modes from the analysis removes this systematic error with negligible information loss .it has been shown that inhomogeneous noise causes cubic estimators based on the psudo - bispectrum with a flat weighting to be significantly suboptimal .a partial solution to this problem has been found by , where a linear piece has been added to the estimator in addition to the cubic piece .it should be straightforward to apply their method to our estimator .finally , our reconstruction approach may be extended to find fast estimators for higher order statistics , for example trispectrum based estimators of .this is the subject of ongoing work .we acknowledge stimulating discussions with michele liguori .ek acknowledges support from an alfred p. sloan fellowship .bdw acknowledges support from the stephen hawking endowment for cosmological research .some of the results in this paper have been derived using the cmbfast package by uros seljak and matias zaldarriaga and the healpix package .this work was partially supported by the national center for supercomputing applications under tg - mca04t015 and by university of illinois .we also utilized the teragrid cluster ( www.teragrid.org ) at ncsa .bdw and apsy s work is partially supported by nsf ast o5 - 07676 and nasa jpl subcontract 1236748 .natexlab#1#1 acquaviva , v. , botrolo , n. , matarrese , s. , & riotto , a. 2003 , nucl .b , 667 , 119 albrecht , a. & steinhardt , p. j. 1982 , phys .lett . , 48 , 1220 , d. , creminelli , p. , & zaldarriaga , m. 2004 , journal of cosmology and astro - particle physics , 8 , 9 , d. & zaldarriaga , m. 2004 , , 70 , 083005 bardeen , j. m. , steinhardt , p. j. , & turner , m. s. 1983 , phys .d. , 28 , 679 , n. , komatsu , e. , matarrese , s. , & riotto , a. 2004 , , 402 , 103 , r. , dunkley , j. , & pierpaoli , e. 2006 , , 74 , 063503 , c. l. , halpern , m. , hinshaw , g. , jarosik , n. , kogut , a. , limon , m. , meyer , s. s. 
, page , l. , spergel , d. n. , tucker , g. s. , wollack , e. , wright , e. l. , barnes , c. , greason , m. r. , hill , r. s. , komatsu , e. , nolta , m. r. , odegard , n. , peiris , h. v. , verde , l. , & weiland , j. l. 2003 , , 148 , 1 , p. , hansen , f. k. , liguori , m. , marinucci , d. , matarrese , s. , moscardini , l. , & vittorio , n. 2006 , , 369 , 819 . 2006 , , 369 , 819 , g. & szapudi , i. 2006 , , 647 , l87 , x. , easther , r. , & lim , e. a. 2006 , arxiv astrophysics e - prints , astro - ph/0611645 chiang , l. y. , naselsky , p. d. , & coles , p. 2004, astrophys .j , 602 , 1 chiang , l. y. , p. d , n. , verkhodanov , o. v. , & way , m. j. 2003 , astrophys .j , 590 , l65 , p. , nicolis , a. , senatore , l. , tegmark , m. , & zaldarriaga , m. 2006 , journal of cosmology and astro - particle physics , 5 , 4 , p. , senatore , l. , zaldarriaga , m. , & tegmark , m. 2006 , arxiv astrophysics e - prints , astro - ph/0610600 falk , t. , rangarajan , r. , & srednicki , m. 1993 , astrophys .j. , 403 , l1 gangui , a. , lucchin , f. , matarrese , s. , & mollerach , s. 1994 , astrophys . j. , 430 , 447 , k. m. , hivon , e. , banday , a. j. , wandelt , b. d. , hansen , f. k. , reinecke , m. , & bartelmann , m. 2005 , , 622 , 759 guth , a. h. 1981 , phys .d , 23 , 347 guth , a. h. & pi , s. y. 1982 , phys .lett . , 49 , 1110 hawking , s. w. 1982 , phys .b. , 115 , 295 , g. , nolta , m. r. , bennett , c. l. , bean , r. , dore , o. , greason , m. r. , halpern , m. , hill , r. s. , jarosik , n. , kogut , a. , komatsu , e. , limon , m. , odegard , n. , meyer , s. s. , page , l. , peiris , h. v. , spergel , d. n. , tucker , g. s. , verde , l. , weiland , j. l. , wollack , e. , & wright , e. l. 2006 , arxiv astrophysics e - prints , g. , spergel , d. n. , verde , l. , hill , r. s. , meyer , s. s. , barnes , c. , bennett , c. l. , halpern , m. , jarosik , n. , kogut , a. , komatsu , e. , limon , m. , page , l. , tucker , g. s. , weiland , j. l. , wollack , e. , & wright , e. l. 2003 , , 148 , 135 , n. & komatsu , e. 2006 , , 73 , 083007 , a. , spergel , d. n. , barnes , c. , bennett , c. l. , halpern , m. , hinshaw , g. , jarosik , n. , limon , m. , meyer , s. s. , page , l. , tucker , g. s. , wollack , e. , & wright , e. l. 2003 , , 148 , 161 komatsu , e. , kogut , a. , nolta , m. n. , bennett , c. l. , halpern , m. , hinshaw , g. , jarosik , n. , limon , m. , meyer , s. s. , page , l. , spergel , d. n. , tucker , g. s. , verde , l. , wollack , e. , & wright , e. l. 2003 , astrophys . j. , 148 , 119 komatsu , e. n. & spergel , d. n. 2001 , phys . rev .d , 63 , 063002 komatsu , e. n. , spergel , d. n. , & wandelt , b. d. 2005 , astrophys .j. , 634 , 14 komatsu , e. n. , wandelt , b. d. , spergel , d. n. , banday , a. j. , & gorski , k. m. 2002 , astrophys .j. , 566 , 19 kovac , j. m. , leitch , e. m. , pryke , c. , carlstrom , j. e. , halverson , n. w. , & holzapfel , w. l. 2002 , nature , 420 , 772 larson , d. l. & wandelt , b. d. 2004 , astrophys .j. , 613 , l85 linde , a. d. 1982 , phys .lett . , b108 , 389 , d. h. & rodrguez , y. 2005 , physical review letters , 95 , 121302 , d. h. , ungarelli , c. , & wands , d. 2003 , , 67 , 023503 maldacena , j. 2003 , j. high energy phys ., 05 , 013 , t. e. , ade , p. a. r. , bock , j. j. , bond , j. r. , borrill , j. , boscaleri , a. , cabella , p. , contaldi , c. r. , crill , b. p. , de bernardis , p. , de gasperis , g. , de oliveira - costa , a. , de troia , g. , di stefano , g. , hivon , e. , jaffe , a. h. , kisner , t. s. , jones , w. c. , lange , a. 
e. , masi , s. , mauskopf , p. d. , mactavish , c. j. , melchiorri , a. , natoli , p. , netterfield , c. b. , pascale , e. , piacentini , f. , pogosyan , d. , polenta , g. , prunet , s. , ricciardi , s. , romeo , g. , ruhl , j. e. , santini , p. , tegmark , m. , veneziani , m. , & vittorio , n. 2006 , , 647 , 813 mukhanov , v. f. , feldman , h. a. , & brandenberger , r. h. 1992 , phys ., 215 , 203 mukherjee , p. & wang , y. 2004 , astrophys .j. , 613 , 51 , l. , hinshaw , g. , komatsu , e. , nolta , m. r. , spergel , d. n. , bennett , c. l. , barnes , c. , bean , r. , dore , o. , halpern , m. , hill , r. s. , jarosik , n. , kogut , a. , limon , m. , meyer , s. s. , odegard , n. , peiris , h. v. , tucker , g. s. , verde , l. , weiland , j. l. , wollack , e. , & wright , e. l. 2006 , apjs , in press ( astro - ph/0603450 ) , h. v. , komatsu , e. , verde , l. , spergel , d. n. , bennett , c. l. , halpern , m. , hinshaw , g. , jarosik , n. , kogut , a. , limon , m. , meyer , s. s. , page , l. , tucker , g. s. , wollack , e. , & wright , e. l. 2003 , , 148 , 213 , d. s. & bond , j. r. 1990 , , 42 , 3936 . 1991 , , 43 , 1005 sato , k. 1981 , phys ., 99b , 66 seljak , u. & zaldarriaga , m. 1996 , astrophys .j , 469 , 437 , k. m. & zaldarriaga , m. 2006 , arxiv astrophysics e - prints , astro - ph/0612571 , d. n. , bean , r. , dore , o. , nolta , m. r. , bennett , c. l. , hinshaw , g. , jarosik , n. , komatsu , e. , page , l. , peiris , h. v. , verde , l. , barnes , c. , halpern , m. , hill , r. s. , kogut , a. , limon , m. , meyer , s. s. , odegard , n. , tucker , g. s. , weiland , j. l. , wollack , e. , & wright , e. l. 2006 , apjs , in press ( astro - ph/0603449 ) , d. n. , verde , l. , peiris , h. v. , komatsu , e. , nolta , m. r. , bennett , c. l. , halpern , m. , hinshaw , g. , jarosik , n. , kogut , a. , limon , m. , meyer , s. s. , page , l. , tucker , g. s. , weiland , j. l. , wollack , e. , & wright , e. l. 2003 , , 148 , 175 starobinsky , a. a. 1982 , phys .b. , 117 , 175 , r. 2006 , arxiv astrophysics e - prints , astro - ph/0608116 , l. , wang , l. , heavens , a. f. , & kamionkowski , m. 2000 , , 313 , 141 vielva , p. , gonzalez , e. m. , barreiro , r. b. , sanz , j. l. , & cayon , l. 2004 , astrophys .j , 609 , 22 , l. & kamionkowski , m. 2000 , , 61 , 063504 yadav , a. p. s. & wandelt , b. d. 2005 , phys .d , 70 , 123004we derive an expectation value of the cubic statistics given by eq .[ s_prim ] using the form of b and a as given by eq .[ b ] and [ a ] which simplifies to where is the angular bispectrum , and the cmb bispectrum and can be averaged as above due to isotropy . in deriving have also used : the theoretical primordial angular bispectrum can be written as : where using the above form of the theoretical bispectrum , further simplifies to now since , where , which is 1 when , 6 when , and 2 otherwise .in this appendix we prove the equivalence between the optimal estimator given by and our fast estimator eqn .( [ eq : est2 ] ) this is analogous to the temperature - only case . 
on the left hand side and over all the 8 possible ordered combinations , while for the estimator on the right hand side and run only over the four unordered combinations .we prove the equivalence between the optimal estimator and the fast estimator for only one combination of and , as the proof is same for all the combinations.the covariance matrix * cov * is obtained in terms of , , and ( as in equation 7 in babich and zaldarriaga ( 2004 ) ) by applying wick s theorem , the covariance matrix above has the form where , , run over , which after simplification gives : where are the elements of the matrix .the above matrix is nothing but , after the additional permutations with fixed and are summed up .for example , for the index runs over the set .this completes the proof .
measurements of primordial non - gaussianity ( ) open a new window onto the physics of inflation . we describe a fast cubic ( bispectrum ) estimator of , using a combined analysis of temperature and polarization observations . the speed of our estimator allows us to use a sufficient number of monte carlo simulations to characterize its statistical properties in the presence of real world issues such as instrumental effects , partial sky coverage , and foreground contamination . we find that our estimator is optimal , where optimality is defined by saturation of the cramer rao bound , if noise is homogeneous . our estimator is also computationally efficient , scaling as compared to the scaling of the brute force bispectrum calculation for sky maps with pixels . for planck this translates into a speed - up by factors of millions , reducing the required computing time from thousands of years to just hours and thus making estimation feasible for future surveys . our estimator in its current form is optimal if noise is homogeneous . in future work our fast polarized bispectrum estimator should be extended to deal with inhomogeneous noise in an analogous way to how the existing fast temperature estimator was generalized .
we consider stochastic simulation ( or monte carlo simulation ) of the solution of a dirichlet boundary value problem of a time - independent schrdinger equation with constant potential . for positive potentials this equationis called the yukawa equation or the linearized poisson boltzmann equation . for zero potentialthe equation is known as the laplace equation . for negative potentialit is known as the helmholtz equation .the connection between the dirichlet boundary value problems and the brownian motion date back to kakutani , who provided a stochastic representation of the solution of the laplace equation with dirichlet boundary conditions in terms of the exit locations of a brownian motion .later , this connection has been extended for the schrdinger equation , cf . and references therein .these stochastic representations for constant non - zero potentials include the exit time of the brownian motion in addition to the exit location ; and in the case of non - constant potential , the stochastic representation depends on the entire path of the brownian motion up to the time it exits the domain .stochastic representations provide monte carlo simulation methods for the solutions of the boundary value problems .these methods are especially attractive in high dimensions where deterministic methods are typically extremely costly .the obvious idea to use the stochastic representations is to simulate brownian particles on the domain with a fine time - mesh .this is , however , very costly computationally . in order to avoid the simulation of the trajectory of the brownian particles very precisely ,different algorithms have been proposed . in this paper, we consider different walk on spheres algorithms that simulate the brownian motion only on successive spheres in the domain .if one only needs to simulate the exit location , and not the exit time , of the brownian motion , one can use the very efficient classical walk on spheres ( wos ) algorithm due to muller .unfortunately , the stochastic representation involving only the exit locations of brownian motion corresponds precisely the laplace equation .this is our motivation to transform the constant - potential schrdinger equation into a laplace equation : to make the classical wos algorithm applicable . also , this transformation should be of interest in its own right . the transformation , the so - called duffin correspondence , removes the constant potential in the schrdinger equation with the cost of adding one extra dimension to the boundary value problem .the idea of the correspondence is due to duffin , where the correspondence was used for the yukawa equation , i.e. for the case of positive constant potential , on the plane .this was later extended to general euclidean spaces in . in this paper , we extend the duffin correspondence to cover also the helmholtz case , i.e. negative constant potential , in general euclidean spaces .let us note that there are already efficient modified wos algorithms that simulate the exit time ( or its laplace transform ) and the exit location of the brownian motion that can be used for the constant - potential schrdinger equation studied here .indeed , such stochastic simulation algorithms have been studied excessively ; cf . , just to mention few. 
basically , in a modified wos algorithm that simulates the laplace transform of the exit time and the exit location of the brownian particle , one needs to keep track of a multiplicative weight for the simulated brownian particle .we call this algorithm the weighted walk on spheres ( wwos ) and recall it in section [ sect : wos - algorithms ] .if the constant potential is negative , then the weight of the brownian particle can be reinterpreted as independent exponential killing of the particle .we call this algorithm the killing walk on spheres ( kwos ) and recall it in section [ sect : wos - algorithms ] . therefore , we admit that there are already efficient algorithms for the problem studied here. however , it is our opinion that our duffin correspondence wos algorithm ( dwos ) is of comparable efficiency to the known modified wos algorithms and has the advantage of being both simpler to comprehend and easier to implement than the modified wos algorithms known so far . indeed , dwos algorithm is simply the classical wos algorithm with an added dimension and multiplicatively modified boundary data : there is no need to keep track of any weight or killing .moreover , if the wos algorithm is already implemented , the dwos algorithm does not need implementation : it is simply the wos algorithm with different input .the rest of the paper is organized as follows . in section [sect : prelim ] , we lay the setting and recall the connection between the dirichlet boundary value problems and the brownian motion . in section [ sect : duffin ] , we prove our main result , the duffin correspondence , and the stochastic representation of the solutions of the constant - potential schrdinger equation without the stopping time distribution .section [ sect : wos - algorithms ] is devoted to the different wos algorithms and their implementations . in section [sect : comparisons ] , we provide examples and comparisons of the different wos algorithms . finally , in section [ sect : conclusions ] we draw some some conclusions on the performance of the dwos algorithm .let be a domain ( i.e. , open and connected ) satisfying assumption [ ass ] below .let .denote by the laplacian with respect to the variable .we consider the dirichlet - type boundary value problem of the schrdinger equation with constant potential : here is ( continuous and ) bounded on .the case corresponds to the yukawa equation , or the linearized poisson boltzmann equation .the case is the laplace equation .the case is the helmholtz equation . the required regularity conditions for the domain of a helmholtz yukawa type dirichlet boundary value problem to admit a unique bounded ( strong ) solution are best expressed by using probabilistic tools and the brownian motion : let be a standard -dimensional brownian motion .let be the first exit time of the brownian motion from the domain , i.e. , here , as always , use the normal convention that the following assumptions on the domain are always in force , although not explicitly stated later : [ ass ] 1 .the domain is _ wiener regular _ ,i.e. , \quad\mbox{for all } y\in\partial d.\ ] ] 2 .the domain is _ wiener small _ , i.e. , = 1 \quad\mbox{for all } x\in d.\ ] ] 3 . finally , we assume , by using the terminology of chung and zhao , that the domain is _ gaugeable _ , i.e. , < \infty.\ ] ] [ rem : ass ] 1 . all domains with piecewise boundary are wiener regular .2 . if any projection of the domain on any subspace , is bounded , then is wiener bounded .3 . 
for ,the gauge condition [ ass](iii ) is vacuous ; for it is essential .+ actually , it follows from ( * ? ? ?* theorem 4.19 ) that the gauge condition [ ass](iii ) is satisfied if and only if , where is the principal dirichlet eigenvalue of the negative half - laplacian , i.e. , for the boundary value problem admits only the trivial solution . by the rayleigh krahn inequality where is the smallest positive zero of the bessel function equality is attained in if and only if is a ball ; e.g. for the unit ball we have this relation is pronounced later in formula .numerical approximations of are give in table [ tab : rayleigh ] .+ .numerical approximations of the rayleigh faber krahn constant for . [ cols="^,^",options="header " , ] the following stochastic representation of the bounded solutions to the boundary value problem is well - known .see , e.g. , ( * ? ? ?* chapter 4 ) or ( * ? ? ? * chapter 4 ) : [ pro : kakutani ] the boundary value problem admits a unique bounded solution given by .\ ] ]associated with the helmholtz or the yukawa boundary value problem with , there is a classical laplace boundary value problem with on an extended domain : indeed , define by set and finally , denote , and set with this notation , consider the following family of laplace boundary value problems indexed by : [ thm : duffin ] let be fixed .then is the unique bounded solution to the helmholtz or yukawa boundary value problem if and only if is the unique bounded solution to the laplace equation .the yukawa case , , was shown in ( for general ) and in the original paper by duffin ( for ) .let us consider the helmholtz case , .the proof that if and only if is straightforward and can be done exactly as in the yukawa case . also , it is straightforward to see that satisfies assumptions [ ass](i ) and [ ass](ii ) if and only if satisfies assumptions [ ass](i ) and [ ass](ii ) .the essential difference to the yukawa case is that now is unbounded in the co - ordinate .consequently , the boundary data is not bounded .this is where we need the gauge condition [ ass](iii ) .indeed , we can approximate the solution in ] for given values of , where and , and then use the identity and the elementary properties of the expected value to obtain the approximation -{\mathbb{p}}\left[\tau_1\le t_j\right]\right).\ ] ] 2 .one may use the formula to tabulate values of the function for various values of , and then use the table and appropriate interpolation to approximate the function for an arbitrary value of .the first approach was used in examples of section [ sect : comparisons ] .approximations for function , for various values of and are illustrated in figure [ fig : psifun ] ( cf .the figure in for an illustration of for positive values of ) . for for , where on the top ( left ) .the function , , where the dimension are , where on the top ( right ) ., title="fig:",width=207 ] for for , where on the top ( left ) .the function , , where the dimension are , where on the top ( right ) . ,title="fig:",width=207 ] note that is the extension of the -function , that appears in , but with different parametrization .now we are ready to give the weighted walk on spheres ( wwos ) algorithm : the approximation for is , by proposition [ pro : kakutani ] , here is the exit time for the each individual particle and .the individual particle exit locations and weights are generated by algorithm [ alg : dwos ] below : [ alg : wwos ] fix a small parameter .1 . initialize : , 2 . 
while : 1 .set .sample independently from the unit sphere ( by using algorithm [ alg : uniform - sphere ] ) .3 . set and .here is given by for and for .3 . when : 4 .set to be the orthogonal projection of to .return and . for the yukawa case , the weight loss of the particle can be interpreted as independent exponential killing of the particle .see , or for details .consequently , the wwos algorithm [ alg : wwos ] can be reinterpreted as killing walk on spheres ( kwos ) .our estimator for is where , are independent simulations of the trajectories brownian particles starting from point , and the set contains the particles that are not killed ; is the termination - step time of the algorithm .the individual particles are generated by algorithm [ alg : kwos ] below .[ alg : kwos ] fix a small parameter .1 . initialize : .2 . while : 1 . set .2 . kill the particle with probability .if the particle is killed , the algorithm terminates and returns .sample independently from the unit sphere ( by using algorithm [ alg : uniform - sphere ] ) .4 . set .3 . when : 4 .set to be the orthogonal projection of to .return .in this section , we give examples to motivate our algorithms and to illustrate their potential applications .the examples were computed by using a straightforward implementation algorithms in one and two - dimensional settings and chosen from the point of view of visualization .all computations were performed on a macbook air laptop .wolfram mathematica 10.2 and basic implementations of the algorithms with no performance optimizations were used .obviously , the stochastic approaches presented here are more attractive in higher dimensions , where many deterministic simulation methods are not available , or lead into excessive computation times .also , it should be noted that the algorithms in this paper are particularly suitable for parallel computation , as simulated paths are independent from each other .let $ ] and consider the differential equation with boundary values and .then the exact solution to the boundary value problem is given by the exact solution as well as its approximations with the dwos algorithm , where the problem is first lifted into dimension two by using the duffin correspondence , and with the wwos algorithm are illustrated in figure [ fig : onedim ] .( dashed ) and its approximations with dwos ( left ) and wwos ( right ) algorithms ., title="fig:",width=207 ] ( dashed ) and its approximations with dwos ( left ) and wwos ( right ) algorithms . , title="fig:",width=207 ] next , we consider two simple boundary value problems of the equation on polygonal domains in the plane : 1 .the trapezoidal domain defined by the points and , where the boundary values are given by and with ( cf .example 4.4 and figure 2 of ) .2 . the non - convex l - shaped domain defined by the points and , where boundary values are given by the non - continuous function and .we compute an approximation to the solution of the above boundary value problems by using both the dwos algorithm and the wwos algorithm . for dwos ,the problem is lifted to the dimension three .the dwos and wwos on the domain are illustrated in figure [ fig : wos23diml ] .approximations to the solutions are illustrated in figure [ fig : sol - trape ] and figure [ fig : sol - l ] .our experiments suggest that wwos algorithm is more stable than dwos and thus a smaller number of simulations are required for a comparable result . with dwos ( left ) and wwos ( right ) algorithms . 
in dwos , the problem is first lifted into dimension three by using the duffin correspondence ; the wwos algorithm is purely two - dimensional , but it requires computation of the weight function ( caption of figure [ fig : wos23diml ] ) . figures [ fig : sol - trape ] and [ fig : sol - l ] show approximations of the solution obtained with dwos ( left ) and wwos ( right ) on the trapezoidal and l - shaped domains , with the boundary values given by the functions stated above . it is clear that the kwos algorithm is better than the wwos algorithm , when it is applicable ( i.e. , in the yukawa case ) . our experiments in section [ sect : comparisons ] suggest that , at least in low dimensions , the wwos algorithm is more stable than the dwos algorithm . consequently , it seems that the dwos algorithm is best used in high dimensions , where adding one extra dimension should not make much difference . however , the dwos algorithm is an extension , not a modification , of the classical wos algorithm , i.e. , if one has wos already implemented , implementing dwos is simply a matter of giving different input parameters to the wos algorithm .
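to make the killing variant concrete , the sketch below implements the kwos idea for the yukawa problem on the unit disk . the boundary data , the disk geometry , the scaling of the equation as (1/2) Δu = λu , and the d = 2 survival probability 1 / I_0( sqrt(2λ) r ) ( the classical expected value of exp(−λτ) for brownian motion leaving a ball of radius r ) are our own illustrative assumptions and are not taken from the parametrization of the ψ-function used above .

```python
import numpy as np
from scipy.special import i0  # modified Bessel function I_0


def kwos_disk(x0, lam, n_paths=10000, eps=1e-3, rng=None):
    """Killing walk on spheres (KWOS) sketch for the Yukawa problem
    (1/2)*Laplace(u) = lam*u on the unit disk, u = g on the boundary.
    Estimates u(x0) as the average of g at the exit points of the
    surviving particles; killed particles contribute zero."""
    rng = np.random.default_rng() if rng is None else rng

    def g(y):                                   # assumed boundary data (illustration only)
        return 1.0 + y[0]

    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        alive = True
        while alive:
            r = 1.0 - np.linalg.norm(x)         # distance to the boundary of the unit disk
            if r <= eps:
                break
            # survival probability of Brownian motion leaving a ball of radius r
            # under exponential killing at rate lam (dimension 2): 1 / I_0(sqrt(2*lam)*r)
            if rng.random() > 1.0 / i0(np.sqrt(2.0 * lam) * r):
                alive = False                   # particle killed, contributes 0
                break
            theta = 2.0 * np.pi * rng.random()  # uniform point on the circle of radius r
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
        if alive:
            x = x / np.linalg.norm(x)           # project onto the boundary
            total += g(x)
    return total / n_paths
```

the only tuning parameters are the shell width eps and the number of simulated particles ; as noted above , the paths are independent , so the loop over particles parallelizes trivially .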
we show that a constant - potential time - independent schrödinger equation with dirichlet boundary data can be reformulated as a laplace equation with dirichlet boundary data . with this reformulation , which we call the duffin correspondence , we provide a classical walk on spheres ( wos ) algorithm for monte carlo simulation of the solutions of the boundary value problem . we compare the obtained duffin wos algorithm with existing modified wos algorithms .
all musical pieces are similar , but some are more similar than others . apart from being an infinite source of discussion ( `` haydn is just like mozart no , he s not ! '' ) , such similarities are also crucial for the design of efficient music information retrieval systems .the amount of digitized music available on the internet has grown dramatically in recent years , both in the public domain and on commercial sites .napster and its clones are prime examples .websites offering musical content in some form or other ( mp3 , midi , ) need a way to organize their wealth of material ; they need to somehow classify their files according to musical genres and subgenres , putting similar pieces together .the purpose of such organization is to enable users to navigate to pieces of music they already know and like , but also to give them advice and recommendations ( `` if you like this , you might also like '' ) .currently , such organization is mostly done manually by humans , but some recent research has been looking into the possibilities of automating music classification .a human expert , comparing different pieces of music with the aim to cluster likes together , will generally look for certain specific similarities .previous attempts to automate this process do the same . generally speaking, they take a file containing a piece of music and extract from it various specific numerical features , related to pitch , rhythm , harmony etc .one can extract such features using for instance fourier transforms or wavelet transforms .the feature vectors corresponding to the various files are then classified or clustered using existing classification software , based on various standard statistical pattern recognition classifiers , bayesian classifiers , hidden markov models , ensembles of nearest - neighbor classifiers or neural networks .for example , one feature would be to look for rhythm in the sense of beats per minute .one can make a histogram where each histogram bin corresponds to a particular tempo in beats - per - minute and the associated peak shows how frequent and strong that particular periodicity was over the entire piece . in we see a gradual change from a few high peaks to many low and spread - out ones going from hip - hip , rock , jazz , to classical .one can use this similarity type to try to cluster pieces in these categories .however , such a method requires specific and detailed knowledge of the problem area , since one needs to know what features to look for .our aim is much more general .we do not look for similarity in specific features known to be relevant for classifying music ; instead we apply a general mathematical theory of similarity .the aim is to capture , in a single similarity metric , _ every effective metric _ : effective versions of hamming distance , euclidean distance , edit distances , lempel - ziv distance , and so on .such a metric would be able to simultaneously detect _ all _ similarities between pieces that other effective metrics can detect .rather surprisingly , such a `` universal '' metric indeed exists .it was developed in , based on the `` information distance '' of .roughly speaking , two objects are deemed close if we can significantly `` compress '' one given the information in the other , the idea being that if two pieces are more similar , then we can more succinctly describe one given the other . 
here compression is based on the ideal mathematical notion of kolmogorov complexity , which unfortunately is not effectively computable .it is well known that when a pure mathematical theory is applied to the real world , for example in hydrodynamics or in physics in general , we can in applications only approximate the theoretical ideal .but still the theory gives a framework and foundation for the applied science .similarly here .we replace the ideal but noncomputable kolmogorov - based version by standard compression techniques .we lose theoretical optimality in some cases , but gain an efficiently computable similarity metric intended to approximate the theoretical ideal .in contrast , a later and partially independent compression - based approach of for building language - trees while citing is by _ ad hoc _ arguments about empirical shannon entropy and kullback - leibler distance resulting in non - metric distances . earlier research has demonstrated that this new universal similarity metric works well on concrete examples in very different application fields the first completely automatic construction of the phylogeny tree based on whole mitochondrial genomes , and a completely automatic construction of a language tree for over 50 euro - asian languages .other applications , not reported in print , are detecting plagiarism in student programming assignments , and phylogeny of chain letters .in this paper we apply this compression - based method to the classification of pieces of music .we perform various experiments on sets of mostly classical pieces given as midi ( musical instrument digital interface ) files .this contrasts with most earlier research , where the music was digitized in some wave format or other ( the only other research based on midi that we are aware of is ) .we compute the distances between all pairs of pieces , and then build a tree containing those pieces in a way that is consistent with those distances . first ,as proof of principle , we run the program on three artificially generated data sets , where we know what the final answer should be .the program indeed classifies these perfectly .secondly , we show that our program can distinguish between various musical genres ( classical , jazz , rock ) quite well .thirdly , we experiment with various sets of classical pieces .the results are quite good ( in the sense of conforming to our expectations ) for small sets of data , but tend to get a bit worse for large sets .considering the fact that the method knows nothing about music , or , indeed , about any of the other areas we have applied it to elsewhere , one is reminded of dr johnson s remark about a dog s walking on his hind legs : `` it is not done well ; but you are surprised to find it done at all . ''the paper is organized as follows .we first give a domain - independent overview of compression - based clustering : the ideal distance metric based on kolmogorov complexity , and the quartet method that turns the matrix of distances into a tree . in section [ secdetails ]we give the details of the current application to music , the specific file formats used etc . 
in section [ secresults ]we report the results of our experiments .we end with some directions for future research .each object ( in the application of this paper : each piece of music ) is coded as a string over a finite alphabet , say the binary alphabet .the integer gives the length of the shortest compressed binary version from which can be fully reproduced , also known as the _ kolmogorov complexity _ of .`` shortest '' means the minimum taken over every possible decompression program , the ones that are currently known as well as the ones that are possible but currently unknown .we explicitly write only `` decompression '' because we do not even require that there is also a program that compresses the original file to this compressed version if there is such a program then so much the better . technically , the definition of kolmogorov complexity is as follows .first , we fix a syntax for expressing all and only computations ( computable functions ) .this can be in the form of an enumeration of all turing machines , but also an enumeration of all syntactically correct programs in some universal programming language like java , lisp , or c. we then define the kolmogorov complexity of a finite binary string as the length of the shortest turing machine , java program , etc . in our chosen syntax .which syntax we take is unimportant , but we have to stick to our choice .this choice attaches a definite positive integer as the kolmogorov complexity to each finite string . though defined in terms of a particular machine model ,the kolmogorov complexity is machine - independent up to an additive constant and acquires an asymptotically universal and absolute character through church s thesis , and from the ability of universal machines to simulate one another and execute any effective process .the kolmogorov complexity of an object can be viewed as an absolute and objective quantification of the amount of information in it .this leads to a theory of _ absolute _ information _ contents _ of _ individual _ objects in contrast to classic information theory which deals with _ average _ information _ to communicate _ objects produced by a _random source_. so gives the length of the ultimate compressed version , say , of .this can be considered as the amount of information , number of bits , contained in the string .similarly , is the minimal number of bits ( which we may think of as constituting a computer program ) required to reconstruct from . in a way expresses the individual `` entropy '' of minimal number of bits to communicate when sender and receiver have no knowledge where comes from .for example , to communicate mozart s `` zauberflte '' from a library of a million items requires at most 20 bits ( ) , but to communicate it from scratch requires megabits . for more details on this pristine notion of individual informationcontent we refer to the textbook .as mentioned , our approach is based on a new very general similarity distance , classifying the objects in clusters of objects that are close together according to this distance . in mathematics ,lots of different distances arise in all sorts of contexts , and one usually requires these to be a ` metric ' , since otherwise undesirable effects may occur .a metric is a distance function that assigns a non - negative distance to any two objects and , in such a way that 1 . only where 2 . ( symmetry ) 3 . 
( triangle inequality ) a familiar example of a metric is the euclidean metric , the everyday distance between two objects expressed in , say , meters .clearly , this distance satisfies the properties , , and ( substitute amsterdam , brussels , and chicago . )we are interested in `` similarity metrics '' .for example , if the objects are classical music pieces then the function if and are by the same composer and otherwise , is a similarity metric , albeit a somewhat elusive one .this captures only one , but quite a significant , similarity aspect between music pieces . in , a new theoretical approach to a wide class of similarity metricswas proposed : the `` normalized information distance '' is a metric , and it is universal in the sense that this single metric uncovers all similarities simultaneously that the metrics in the class uncover separately .this should be understood in the sense that if two pieces of music are similar ( that is , close ) according to the particular feature described by a particular metric , then they are also similar ( that is , close ) in the sense of the normalized information distance metric .this justifies calling the latter _ the _ similarity metric .oblivious to the problem area concerned , simply using the distances according to the similarity metric , our method fully automatically classifies the objects concerned , be they music pieces , text corpora , or genomic data .more precisely , the approach is as follows .each pair of such strings and is assigned a distance there is a natural interpretation to : if , say , then we can rewrite where is the information in about satisfying the symmetry property up to a logarithmic additive error .that is , the distance between and is the number of bits of information that is not shared between the two strings per bit of information that could be maximally shared between the two strings .it is clear that is symmetric , and in it is shown that it is indeed a metric .moreover , it is universal in the sense that every metric expressing some similarity that can be computed from the objects concerned is comprised ( in the sense of minorized ) by .it is these distances that we will use , albeit in the form of a rough approximation : for we simply use standard compression software like ` gzip ' , ` bzip2 ' , or ` compress ' . 
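a minimal sketch of this compressor - based approximation , using python's bz2 module , is given below ; the particular normalized form ( the so - called normalized compression distance ) and the choice of bzip2 are illustrative assumptions on our part rather than the exact parametrization used in this paper .

```python
import bz2


def c(data: bytes) -> int:
    """Length in bytes of the bzip2-compressed version of data."""
    return len(bz2.compress(data))


def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings: an
    approximation of the normalized information distance in which the
    ideal Kolmogorov complexity is replaced by a real compressor."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


def distance_matrix(files):
    """All-pairs distance matrix for a list of preprocessed music files."""
    data = [open(f, "rb").read() for f in files]
    n = len(data)
    return [[ncd(data[i], data[j]) for j in range(n)] for i in range(n)]
```

in practice one would compute and cache c ( x ) once per file , since the distance matrix needs all pairs .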
to compute the conditional version , we use a sophisticated theorem , known as `` symmetry of algorithmic information '' in says so to compute the conditional complexity we can just take the difference of the unconditional complexities and .this allows us to approximate for every pair .our actual practice falls short of the ideal theory in at least three respects : \(i ) the claimed universality of the similarity distance holds only for indefinitely long sequences .once we consider strings of definite length , the similarity distance is only universal with respect to `` simple '' computable normalized information distances , where `` simple '' means that they are computable by programs of length , say , logarithmic or polylogarithmic in .this reflects the fact that , technically speaking , the universality is achieved by summing the weighted contribution of all similarity distances in the class considered with respect to the objects considered .only similarity distances of which the complexity is small ( which means that the weight is large ) with respect to the size of the data concerned kick in .\(ii ) the kolmogorov complexity is not computable , and it is in principle impossible to compute how far off our approximation is from the target value in any useful sense .\(iii ) to approximate the information distance in a practical sense we use the standard compression program bzip2 .while better compression of a string will always approximate the kolmogorov complexity better , this is , regrettably , not true for the ( normalized ) information distance .namely , using ( [ eq.condition ] ) we consider the difference of two compressed quantities .different compressors may compress the two quantities differently , causing an increase in the difference even when both quantities are compressed better ( but not both as well ) . in the normalized information distancewe also have to deal with a ratio that causes the same problem .thus , a better compression program may not necessarily mean that we also approximate the ( normalized ) information distance better .this was borne out by the results of our experiments using different compressors . despite these caveats it turns out that the practice inspired by the rigorous ideal theory performs quite well .we feel this is an example that an _ ad hoc _ approximation guided by a good theory is preferable above _ ad hoc _ approaches without underlying theoretical foundation .the above approach allows us to compute the distance between any pair of objects ( any two pieces of music ) .we now need to cluster the objects , so that objects that are similar according to our metric are placed close together .we do this by computing a phylogeny tree based on these distances . such a phylogeny tree can represent evolution of species but more widely simply accounts for closeness of objects from a set with a distance metric . such a tree will group objects in subtrees : the clusters . to find the phylogeny treethere are many methods .one of the most popular is the quartet method .the idea is as follows : we consider every group of four elements from our set of elements ( in this case , musical pieces ) ; there are such groups . from each group we construct a tree of arity 3 , which implies that the tree consists of two subtrees of two leaves each .let us call such a tree a _quartet_. 
there are three possibilities denoted ( i ) , ( ii ) , and ( iii ) , where a vertical bar divides the two pairs of leaf nodes into two disjoint subtrees ( figure [ figquart ] ) .the cost of a quartet is defined as the sum of the distances between each pair of neighbors ; that is , .for any given tree and any group of four leaf labels , we say is with if and only if the path from to does not cross the path from to .note that exactly one of the three possible quartets for any set of 4 labels must be consistent for any given tree .we may think of a large tree having many smaller quartet trees embedded within its structure ( figure [ figquartex ] ) .the total cost of a large tree is defined to be the sum of the costs of all consistent quartets .first , generate a list of all possible quartets for all groups of labels under consideration .for each group of three possible quartets for a given set of four labels , calculate a best ( minimal ) cost , and a worst ( maximal ) cost .summing all best quartets yields the best ( minimal ) cost .conversely , summing all worst quartets yields the worst ( maximal ) cost .the minimal and maximal values need not be attained by actual trees , however the score of any tree will lie between these two values . in order to be able to compare tree scores in a more uniform way , we now rescale the score linearly such that the worst score maps to 0 , and the best score maps to 1 , and term this the _ normalized tree benefit score _ . the goal of the quartet method is to find a full tree with a maximum value of , which is to say , the lowest total cost .this optimization problem is known to be np - hard ( which means that it is infeasible in practice ) but we can sometimes solve it , and always approximate it .the current methods in are far too computationally intensive ; they run many months or years on moderate - sized problems of 30 objects .we have designed a simple method based on randomization and hill - climbing .first , a random tree with nodes is created , consisting of leaf nodes ( with 1 connecting edge ) labeled with the names of musical pieces , and non - leaf or _internal _ nodes labeled with the lowercase letter `` n '' followed by a unique integer identifier .each internal node has exactly three connecting edges . for this tree , we calculate the total cost of all consistent quartets , and invert and scale this value to find .typically , a random tree will be consistent with around of all quartets .now , this tree is denoted the currently best known tree , and is used as the basis for further searching .we define a simple mutation on a tree as one of the three possible transformations : 1 . a _ leaf swap _ , which consists of randomly choosing two leaf nodes and swapping them .2 . a _ subtree swap _ , which consists of randomly choosing two internal nodes and swapping the subtrees rooted at those nodes .3 . a _ subtree transfer _ , whereby a randomly chosen subtree ( possibly a leaf ) is detached and reattached in another place , maintaining arity invariants .each of these simple mutations keeps invariant the number of leaf and internal nodes in the tree ; only the structure and placements change .define a full mutation as a sequence of at least one but potentially many simple mutations , picked according to the following distribution .first we pick the number of simple mutations that we will perform with probability . 
for each such simple mutation , we choose one of the three types listed above with equal probability .finally , for each of these simple mutations , we pick leaves or internal nodes , as necessary .notice that trees which are close to the original tree ( in terms of number of simple mutation steps in between ) are examined often , while trees that are far away from the original tree will eventually be examined , but not very frequently .so in order to search for a better tree , we simply apply a full mutation on to arrive at , and then calculate .if , then keep as the new best tree . otherwise , try a new different tree and repeat .if ever reaches , then halt , outputting the best tree .otherwise , run until it seems no better trees are being found in a reasonable amount of time , in which case the approximation is complete .note that if a tree is ever found such that , then we can stop because we can be certain that this tree is optimal , as no tree could have a lower cost .in fact , this perfect tree result is achieved in our artificial tree reconstruction experiment ( section [ sect.artificial ] ) reliably in less than ten minutes . for real - world data, reaches a maximum somewhat less than , presumably reflecting inconsistency in the distance matrix data fed as input to the algorithm , or indicating a search space too large to solve exactly . on many typical problems of up to 40 objectsthis tree - search gives a tree with within half an hour . for large numbers of objects , tree scoring itselfcan be slow ( as this takes order computation steps ) , and the space of trees is also large , so the algorithm may slow down substantially . for larger experiments, we use a c++/ruby implementation with mpi ( message passing interface , a common standard used on massively parallel computers ) on a cluster of workstations in parallel to find trees more rapidly .we can consider the graph of figure [ figprogress ] , mapping the achieved score as a function of the number of trees examined .progress occurs typically in a sigmoidal fashion towards a maximal value .a problem with the outcomes is as follows : for natural data sets we often see some leaf nodes ( data items ) placed near the center of the tree as singleton leaves attached to internal nodes , without sibling leaf nodes .this results in a more linear , stretched out , and less balanced , tree .such trees , even if they represent the underlying distance matrix faithfully , are hard to fully understand and may cause misunderstanding of represented relations and clusters . 
to counteract this effect , and to bring out the clusters of related items more visibly , we have added a penalty term of the following form : for each internal node with exactly one leaf node attached , the tree s score is reduced by 0.005 .this induces a tendency in the algorithm to avoid producing degenerate mostly - linear trees in the face of data that is somewhat inconsistent , and creates balanced and more illuminating clusters .it should be noted that the penalty term causes the algorithm in some cases to settle for a slightly lower score than it would have without penalty term .also the value of the penalty term is heuristically chosen .the largest experiment used 60 items , and we typically had only a couple of orphans causing a penalty of only a few percent .this should be set off against the final score of above 0.85 .another practicality concerns the stopping criterion , at which value we stop .essentially we stopped when the value did nt change after examining a large number of mutated trees .an example is the progress of figure [ figprogress ] ,initially , we downloaded 118 separate midi ( musical instrument digital interface , a versatile digital music format available on the world - wide - web ) files selected from a range of classical composers , as well as some popular music .each of these files was run through a preprocessor to extract just midi note - on and note - off events .these events were then converted to a player - piano style representation , with time quantized in second intervals .all instrument indicators , midi control signals , and tempo variations were ignored . for each track in the midi file , we calculate two quantities : an _ average volume _ and a _ modal note_.the average volume is calculated by averaging the volume ( midi note velocity ) of all notes in the track .the modal note is defined to be the note pitch that sounds most often in that track .if this is not unique , then the lowest such note is chosen .the modal note is used as a key - invariant reference point from which to represent all notes .it is denoted by , higher notes are denoted by positive numbers , and lower notes are denoted by negative numbers .a value of indicates a half - step above the modal note , and a value of indicates a whole - step below the modal note .the tracks are sorted according to decreasing average volume , and then output in succession . for each track , we iterate through each time sample in order , outputting a single signed 8-bit value for each currently sounding note .two special values are reserved to represent the end of a time step and the end of a track .this file is then used as input to the compression stage for distance matrix calculation and subsequent tree search .with the natural data sets of music pieces that we use , one may have the preconception ( or prejudice ) that music by bach should be clustered together , music by chopin should be clustered together , and so should music by rock stars .however , the preprocessed music files of a piece by bach and a piece by chopin , or the beatles , may resemble one another more than two different pieces by bach by accident or indeed by design and copying .thus , natural data sets may have ambiguous , conflicting , or counterintuitive outcomes . in other words ,the experiments on actual pieces have the drawback of not having one clear `` correct '' answer that can function as a benchmark for assessing our experimental outcomes . 
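the preprocessing just described can be sketched roughly as follows . the input format ( one list of ( onset , duration , pitch , velocity ) tuples per track , in seconds and midi note numbers ) is an assumed intermediate representation produced by a midi parser , and the 0.05 s quantization step and the two reserved marker values are illustrative guesses , not the exact constants used in our experiments .

```python
def preprocess(tracks, dt=0.05):
    """Convert parsed MIDI note events into the player-piano style byte
    string described above.  Each track is a list of
    (onset_seconds, duration_seconds, pitch, velocity) tuples."""
    END_OF_STEP, END_OF_TRACK = 127, 126       # reserved marker values (illustrative)
    out = bytearray()

    def avg_volume(track):                     # average MIDI note velocity of the track
        return sum(v for *_, v in track) / max(len(track), 1)

    # tracks are sorted by decreasing average volume, then output in succession
    for track in sorted(tracks, key=avg_volume, reverse=True):
        if not track:
            continue
        counts = {}
        for _, _, pitch, _ in track:
            counts[pitch] = counts.get(pitch, 0) + 1
        # modal note: the pitch that sounds most often (lowest one on ties)
        modal = min(p for p, c in counts.items() if c == max(counts.values()))
        t_end = max(t + d for t, d, _, _ in track)
        for step in range(int(t_end / dt) + 1):
            t = step * dt
            for onset, dur, pitch, _ in track:
                if onset <= t < onset + dur:            # note currently sounding
                    out.append((pitch - modal) & 0xFF)  # signed offset from the modal note
            out.append(END_OF_STEP)
        out.append(END_OF_TRACK)
    return bytes(out)
```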
before describing the experiments we did with midi files of actual music , we discuss three experiments that show that our program indeed does what it is supposed to do at least in artificial situations where we know in advance what the correct answer is .the similarity machine consists of two parts : ( i ) extracting a distance matrix from the data , and ( ii ) constructing a tree from the distance matrix using our novel quartet - based heuristic .* testing the quartet - based tree construction : * we first test whether the quartet - based tree construction heuristic is trustworthy : we generated a random ternary tree with 18 leaves , and derived a distance metric from it by defining the distance between two nodes as follows : given the length of the path from to , in an integer number of edges , as , let except when , in which case .it is easy to verify that this simple formula always gives a number between 0 and 1 , and is monotonic with path length .given only the matrix of these normalized distances , our quartet method exactly reconstructed represented in figure [ figarttreereal ] , with . * testing the similarity machine on artificial data : * given that the tree reconstruction method is accurate on clean consistent data , we tried whether the full procedure works in an acceptable manner when we know what the outcome should be like : we randomly generated 22 separate 1-kilobyte blocks of data where each byte was equally probable and called these _ tags_. each tag was associated with a different lowercase letter of the alphabet .next , we generated 80-kilobyte files by starting with a block of purely random bytes and applying one , two , three , or four different tags on it . applying a tag consists of ten repetitions of picking a random location in the 80-kilobyte file , and overwriting that location with the universally consistent tag that is indicated .so , for instance , to create the file referred to in the diagram by `` a '' , we start with 80 kilobytes of random data , then pick ten places to copy over this random data with the arbitrary 1-kilobyte sequence identified as tag _a_. similarly , to create file `` ab '' , we start with 80 kilobytes of random data , then pick ten places to put copies of tag _ a _ , then pick ten more places to put copies of tag _ b _ ( perhaps overwriting some of the _ a _ tags ) . because we never use more than four different tags , and therefore never place more than 40 copies of tags , we can expect that at least half of the data in each file is random and uncorrelated with the rest of the files .the rest of the file is correlated with other files that also contain tags in common ; the more tags in common , the more related the files are .the resulting tree is given in figure [ figtaggedfiles ] ; it can be seen that clustering occurs exactly as we would expect .the score is 0.905 .* testing the similarity machine on natural data : * we test gross classification of files based on markedly different file types . here, we chose several files : 1 .four mitochondrial gene sequences , from a black bear , polar bear , fox , and rat .four excerpts from the novel _ the zeppelin s passenger _ by e. phillips oppenheim 3 .four midi files without further processing ; two from jimi hendrix and two movements from debussy s suite bergamasque 4 . two linux x86 elf executables ( the _ cp _ and _ rm _ commands ) 5 . 
two compiled java class files .as expected , the program correctly classifies each of the different types of files together with like near like .the result is reported in figure [ figfiletypes ] with equal to 0.984 . before testing whether our program can see the distinctions between various classical composers ,we first show that it can distinguish between three broader musical genres : classical music , rock , and jazz .this should be easier than making distinctions `` within '' classical music .all musical pieces we used are listed in the tables in the appendix . for the genre - experiment we used 12 classical pieces ( the small set from table [ tableclassicalpieces ] , consisting of bach , chopin , and debussy ) , 12 jazz pieces ( table [ tablejazzpieces ] ) , and 12 rock pieces ( table [ tablerockpieces ] ) .the tree that our program came up with is given in figure [ figgenres ] .the score is 0.858 .the discrimination between the 3 genres is good but not perfect .the upper branch of the tree contains 10 of the 12 jazz pieces , but also chopin s prlude no . 15 and a bach prelude .the two other jazz pieces , miles davis `` so what '' and john coltrane s `` giant steps '' are placed elsewhere in the tree , perhaps according to some kinship that now escapes us but can be identified by closer studying of the objects concerned . of the rock pieces ,9 are placed close together in the rightmost branch , while hendrix s `` voodoo chile '' , rush `` yyz '' , and dire straits `` money for nothing '' are further away . in the case of the hendrix piecethis may be explained by the fact that it does not fit well in a specific genre .most of the classical pieces are in the lower left part of the tree .surprisingly , 2 of the 4 bach pieces are placed elsewhere .it is not clear why this happens and may be considered an error of our program , since we perceive the 4 bach pieces to be very close , both structurally and melodically ( as they all come from the mono - thematic `` wohltemperierte klavier '' ) .however , bach s is a seminal music and has been copied and cannibalized in all kinds of recognizable or hidden manners ; closer scrutiny could reveal likenesses in its present company that are not now apparent to us .in effect our similarity engine aims at the ideal of a perfect data mining process , discovering unknown features in which the data can be similar . in table[ tableclassicalpieces ] we list all 60 classical piano pieces used , together with their abbreviations .some of these are complete compositions , others are individual movements from larger compositions .they all are piano pieces , but experiments on 34 movements of symphonies gave very similar results ( section [ secsymphonies ] ) .apart from running our program on the whole set of 60 piano pieces , we also tried it on two smaller sets : a small 12-piece set , indicated by ` ( s ) ' in the table , and a medium - size 32-piece set , indicated by ` ( s ) ' or ` ( m ) ' .the small set encompasses the 4 movements from debussy s suite bergamasque , 4 movements of book 2 of bach s wohltemperierte klavier , and 4 preludes from chopin s opus 28 .as one can see in figure [ figsmallset ] , our program does a pretty good job at clustering these pieces .the score is also high : 0.958 .the 4 debussy movements form one cluster , as do the 4 bach pieces .the only imperfection in the tree , judged by what one would intuitively expect , is that chopin s prlude no . 
15 lies a bit closer to bach than to the other 3 chopin pieces .this prlude no 15 , in fact , consistently forms an odd - one - out in our other experiments as well .this is an example of pure data mining , since there is some musical truth to this , as no .15 is perceived as by far the most eccentric among the 24 prludes of chopin s opus 28 . the medium set adds 20 pieces to the small set : 6 additional bach pieces , 6 additional chopins , 1 more debussy piece , and 7 pieces by haydn .the experimental results are given in figure [ figmediumset ] .the score is slightly lower than in the small set experiment : 0.895 .again , there is a lot of structure and expected clustering .most of the bach pieces are together , as are the four debussy pieces from the suite bergamasque .these four should be together because they are movements from the same piece ; the fifth debussy item is somewhat apart since it comes from another piece .both the haydn and the chopin pieces are clustered in little sub - clusters of two or three pieces , but those sub - clusters are scattered throughout the tree instead of being close together in a larger cluster .these small clusters may be an imperfection of the method , or , alternatively point at musical similarities between the clustered pieces that transcend the similarities induced by the same composer .indeed , this may point the way for further musicological investigation .figure [ figlargeset ] gives the output of a run of our program on the full set of 60 pieces .this adds 10 pieces by beethoven , 8 by buxtehude , and 10 by mozart to the medium set .the experimental results are given in figure [ figlargeset ] .the results are still far from random , but leave more to be desired than the smaller - scale experiments . indeed , the score has dropped further from that of the medium - sized set to 0.844 .this may be an artifact of the interplay between the relatively small size , and large number , of the files compared : ( i ) the distances estimated are less accurate ; ( ii ) the number of quartets with conflicting requirements increases ; and ( iii ) the computation time rises to such an extent that the correctness score of the displayed cluster graph within the set time limit is lower than in the smaller samples .nonetheless , bach and debussy are still reasonably well clustered , but other pieces ( notably the beethoven and chopin ones ) are scattered throughout the tree .maybe this means that individual music pieces by these composers are more similar to pieces of other composers than they are to each other ?the placement of the pieces is closer to intuition on a small level ( for example , most pairing of siblings corresponds to musical similarity in the sense of the same composer ) than on the larger level .this is similar to the phenomenon of little sub - clusters of haydn or chopin pieces that we saw in the medium - size experiment . 
finally , we tested whether the method worked for more complicated music , namely 34 symphonic pieces .we took two haydn symphonies ( no .95 in one file , and the four movements of 104 ) , three mozart symphonies ( 39 , 40 , 41 ) , three beethoven symphonies ( 3 , 4 , 5 ) , of schubert s unfinished symphony , and of saint - saens symphony no .the results are reported in figure [ figsymphonies ] , with a quite reasonable score of 0.860 .our research raises many questions worth looking into further : * the program can be used as a data mining machine to discover hitherto unknown similarities between music pieces of different composers or indeed different genres . in this mannerwe can discover plagiarism or indeed honest influences between music pieces and composers .indeed , it is thinkable that we can use the method to discover seminality of composers , or separate music eras and fads .* a very interesting application of our program would be to select a plausible composer for a newly discovered piece of music of which the composer is not known .in addition to such a piece , this experiment would require a number of pieces from known composers that are plausible candidates .we would just run our program on the set of all those pieces , and see where the new piece is placed .if it lies squarely within a cluster of pieces by composer such - and - such , then that would be a plausible candidate composer for the new piece . *each run of our program is different even on the same set of data because of our use of randomness for choosing mutations in the quartet method. it would be interesting to investigate more precisely how stable the outcomes are over different such runs .* at various points in our program , somewhat arbitrary choices were made .examples are the compression algorithms we use ( all practical compression algorithms will fall short of kolmogorov complexity , but some less so than others ) ; the way we transform the midi files ( choice of length of time interval , choice of note - representation ) ; the cost function in the quartet method .other choices are possible and may or may not lead to better clustering .ideally , one would like to have well - founded theoretical reasons to decide such choices in an optimal way .lacking those , trial - and - error seems the only way to deal with them . *the experimental results got decidedly worse when the number of pieces grew .better compression methods may improve this situation , but the effect is probably due to unknown scaling problems with the quartet method or nonlinear scaling of possible similarities in a larger group of objects ( akin to the phenomenon described in the so - called `` birthday paradox '' : in a group of about two dozen people there is a high chance that at least two of the people have the same birthday ) .inspection of the underlying distance matrices makes us suspect the latter . *our program is not very good at dealing with very small data files ( 100 bytes or so ) , because significant compression only kicks in for larger files .we might deal with this by comparing various sets of such pieces against each other , instead of individual ones .we thank john tromp for useful discussions .99 d. benedetto , e. caglioti , and v. loreto .language trees and zipping , _ physical review letters _ , 88:4(2002 ) 048702 .algorithm makes tongue tree , _ nature _ , 22 january , 2002 .bennett , p. gcs , m. li , p.m.b .vitnyi , and w. 
zurek .information distance , _ ieee transactions on information theory _ , 44:4(1998 ) , 14071423 .d. bryant , v. berry , p. kearney , m. li , t. jiang , t. wareham and h. zhang .a practical algorithm for recovering the best supported edges of an evolutionary tree .11th acm - siam symposium on discrete algorithms _, 287296 , 2000 .w. chai and b. vercoe .folk music classification using hidden markov models ._ proc . of international conference on artificial intelligence _ , 2001 .r. dannenberg , b. thom , and d. watson .a machine learning approach to musical style recognition , _ proc .international computer music conference _ , pp .344 - 347 , 1997 .m. grimaldi , a. kokaram , and p. cunningham .classifying music by genre using the wavelet packet transform and a round - robin ensemble .technical report tcd - cs-2002 - 64 , trinity college dublin , 2002 .t. jiang , p. kearney , and m. li . a polynomial time approximation scheme for inferring evolutionary trees from quartet topologies and its application ._ siam j. computing _ , 30:6(2001 ) , 19421961 .m. li , j.h .badger , x. chen , s. kwong , p. kearney , and h. zhang . an information - based sequence distance and its application to whole mitochondrial genome phylogeny , _ bioinformatics _ , 17:2(2001 ) , 149154 .m. li and p.m.b .algorithmic complexity , pp .376382 in : _ international encyclopedia of the social & behavioral sciences _ , n.j .smelser and p.b .baltes , eds . ,pergamon , oxford , 2001/2002 .m. li , x. chen , x. li , b. ma , p. vitnyi .the similarity metric , _ proc .14th acm - siam symposium on discrete algorithms _, 2003 .m. li and p.m.b ._ an introduction to kolmogorov complexity and its applications _ , springer - verlag , new york , 2nd edition , 1997 .p. scott .music classification using neural networks , 2001 .+ http://www.stanford.edu/class/ee373a/musicclassification.pdf shared information distance or software integrity detection , computer science , university of california , santa barbara , http://dna.cs.ucsb.edu/sid/ g. tzanetakis and p. cook , music genre classification of audio signals , _ ieee transactions on speech and audio processing _ , 10(5):293302 , 2002.the 60 classical pieces used ( ` m ' indicates presence in the medium set , ` s ' in the small and medium sets ) [ cols="<,<,<",options="header " , ]
we present a fully automatic method for music classification , based only on compression of strings that represent the music pieces . the method uses no background knowledge about music whatsoever : it is completely general and can , without change , be used in different areas like linguistic classification and genomics . it is based on an ideal theory of the information content in individual objects ( kolmogorov complexity ) , information distance , and a universal similarity metric . experiments show that the method distinguishes reasonably well between various musical genres and can even cluster pieces by composer .
nowadays , many engineering applications can be posed as convex quadratic problems ( qp ) .several important applications that can be modeled in this framework such us model predictive control for a dynamical linear system and its dual called moving horizon estimation , dc optimal power flow problem for a power system , linear inverse problems arising in many branches of science or network utility maximization problems have attracted great attention lately . since the computational power has increased by many orders in the last decade ,highly efficient and reliable numerical optimization algorithms have been developed for solving the optimization problems arising from these applications in very short time .for example , these hardware and numerical recent advances made it possible to solve linear predictive control problems of nontrivial sizes within the range of microseconds and even on hardware platforms with limited computational power and memory .the theoretical foundation of quadratic programming dates back to the work by frank & wolfe . after the publication of the paper many numerical algorithmshave been developed in the literature that exploit efficiently the structure arising in this class of problems .basically , we can identify three popular classes of algorithms to solve quadratic programs : active set methods , interior point methods and ( dual ) first order methods ._ active set methods _ are based on the observation that quadratic problems with equality constraints are equivalent to solving a linear system .thus , the iterations in these methods are based on solving a linear system and updating the active set ( the term active set refers to the subset of constraints that are satisfied as equalities by the current estimate of the solution ) .active set general purpose solvers are adequate for small - to - medium scale quadratic problems , since the numerical complexity per iteration is cubic in the dimension of the problem .matlab s _ quadprog _ function implements a primal active set method .dual active set methods are available in the codes ._ interior point methods _ remove the inequality constraints from the problem formulation using a barrier term in the objective function for penalizing the constraint violations .usually a logarithmic barrier terms is used and the resulting equality constrained nonlinear convex problem is solved by the newton method . since the iteration complexity grows also cubically with the dimension , interior - point solvers are also the standard for small - to - medium scale qps .however , structure exploiting interior point solvers have been also developed for particular large - scale applications : e.g. several solvers exploit the sparse structure of the quadratic problem arising in predictive control ( cvxgen , forces ) .a parallel interior point code that exploits special structures in the hessian of large - scale structured quadratic programs have been developed in . _first order methods _ use only gradient information at each iterate by computing a step towards the solution of the unconstrained problem and then projecting this step onto the feasible set .augmented lagrangian algorithms for solving general nonconvex problems are presented in the software package lancelot .for convex qps with simple constraints we can use primal first order methods for solving the quadratic program as in . in this casethe main computational effort per iteration consists of a matrix - vector product . 
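as a small illustration of such a primal first order method , the sketch below applies projected gradient steps to a qp with a box feasible set ; the step size 1/l , with l the largest eigenvalue of the hessian , is a generic textbook choice , and the code is not duquad's implementation .

```python
import numpy as np


def projected_gradient_qp(Q, q, lb, ub, x0=None, max_iter=500, tol=1e-8):
    """Minimize 0.5*x'Qx + q'x subject to lb <= x <= ub with the
    projected gradient method."""
    n = q.size
    x = np.clip(np.zeros(n) if x0 is None else x0, lb, ub)
    L = np.linalg.eigvalsh(Q).max()              # Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = Q @ x + q
        x_new = np.clip(x - grad / L, lb, ub)    # gradient step + projection on the box
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

apart from the one - time eigenvalue computation , each iteration costs a single matrix - vector product plus a clipping operation , which is what makes such schemes attractive on hardware with limited resources .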
when the projection on the primal feasible set is hard to compute , an alternative to primal first order methods is to use the lagrangian relaxation to handle the complicated constraints and then to apply dual first order algorithms for solving the dual .the computational complexity certification of first order methods for solving the ( augmented ) lagrangian dual of general convex problems is studied e.g. in and of quadratic problems is studied in . in these methods the main computational effort consists of solving at each iteration a lagrangian qp problem with simple constraints for a given multiplier , which allows us to determine the value of the dual gradient for that multiplier , and then update the dual variables using matrix - vector products .for example , the toolbox fiordos auto - generates code for primal or dual fast gradient methods as proposed in .the algorithm in dualizes only the inequality constraints of the qp and assumes available a solver for linear systems that is able to solve the lagrangian inner problem .however , both implementations do not consider the important aspect that the lagrangian inner problem can not be solved exactly in practice .the effect of inexact computations in dual gradient values on the convergence of dual first order methods has been analyzed in detail in .moreover , most of these papers generate approximate primal solutions through averaging . on the other hand , in practiceusually the last primal iterate is employed , since in practice these methods converge faster in the primal last iterate than in a primal average sequence .these issues motivate our work here . _contributions_. in this paper we analyze the computational complexity of several ( augmented ) dual first order methods implemented in duquad for solving convex quadratic problems . contrary to most of the results from the literature , our approach allows us to use inexact dual gradient information ( i.e. it allows to solve the ( augmented ) lagrangian inner problem approximately ) and therefore is able to tackle more general quadratic convex problems and to solve practical applications .another important feature of our approach is that we provide also complexity results for the primal latest iterate , while in much of the previous literature convergence rates in an average of primal iterates are given .we derive in a unified framework the computational complexity of the dual and augmented dual ( fast ) gradient methods in terms of primal suboptimality and feasibility violation using inexact dual gradients and two types of approximate primal solutions : the last primal iterate and an average of primal iterates . from our knowledgethis paper is the first where both approaches , dual and augmented dual first order methods , are analyzed uniformly .these algorithms are also implemented in the efficient programming language c in duquad , and optimized for low iteration complexity and low memory footprint .the toolbox has a dynamic matlab interface which make the process of testing , comparing , and analyzing the algorithms simple .the algorithms are implemented using only basic arithmetic and logical operations and thus are suitable to run on low cost hardware .the main computational bottleneck in the methods implemented in duquad is the matrix - vector product .therefore , this toolbox can be used for solving either qps on hardware with limited resources or sparse qps with large dimension ._ contents_. the paper is organized as follows . 
in section [ sec_pf ]we describe the optimization problem that we solve in duquad . in section [ sec_duquad ]we describe the the main theoretical aspects that duquad is based on , while in section [ numerical_tests ] we present some numerical results obtained with duquad . _notation_. for denote the scalar product by and the euclidean norm by .further , ] its distance . for a matrix we use the notation for the spectral norm .in duquad we consider a general convex quadratic problem ( qp ) in the form : where is a convex quadratic function with the hessian , , is a simple compact convex set , i.e. a box ] . *fgm * : in the fast gradient method for smooth convex problems , where and . in this case we get a particular version of nesterov s accelerated scheme that updates two sequences and has been analyzed in detail in .* * fgm** : in fast gradient algorithm for smooth convex problems with strongly convex objective function , with constant , we choose for all . in this casewe get a particular version of nesterov s accelerated scheme that also updates two sequences .the convergence rate of algorithm * fom*( ) in terms of function values is given in the next lemma : [ lemma_sublin_dfg ] for smooth convex problem assume that the objective function is strongly convex with constant and has lipschitz continuous gradient with constant .then , the sequences generated by algorithm * fom*( ) satisfy : where , with the optimal set of , and is defined as follows : thus , algorithm * fom * has linear convergence provided that .otherwise , it has sublinear convergence .in this section we describe an inexact dual ( augmented ) first order framework implemented in duquad , a solver able to find an approximate solution for the quadratic program . for a given accuracy , is called an -_primal solution _ for problem if the following inequalities hold : the main function in duquad is the one implementing the general algorithm * fom*. note that if the feasible set of is simple , then we can call directly * fom*( ) in order to obtain an approximate solution for . however , in general the projection on is as difficult as solving the original problem . in this casewe resort to the ( augmented ) dual formulation for finding an -primal solution for the original qp .the main idea in duquad is based on the following observation : from we observe that for computing the gradient value of the dual function in some multiplier , we need to solve exactly the inner problem ; despite the fact that , in some cases , the ( augmented ) lagrangian is quadratic and the feasible set is simple in , this inner problem generally can not be solved exactly .therefore , the main iteration in duquad consists of two steps : * step 1 * : for a given inner accuracy and a multiplier solve approximately the inner problem with accuracy to obtain an approximate solution instead of the exact solution , i.e. : in duquad , we obtain an approximate solution using the algorithm * fom*( ) .from , lemma [ lemma_sublin_dfg ] we can estimate tightly the number of iterations that we need to perform in order to get an -solution for : the lipschitz constant is , the strong convexity constant is ( provided that e.g. ) and ( the diameter of the box set ) .then , the number of iterations that we need to perform for computing satisfying can be obtained from . *step 2 * : once an -solution for was found , we update at the outer stage the lagrange multipliers using again algorithm * fom*( ) . 
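before detailing the multiplier update in step 2 , the following sketch shows how step 1 might look when the inner lagrangian problem is solved with a fast gradient method over the box . for illustration only , it assumes linear equality constraints gu + g = 0 handled by the multiplier λ , so that the ( augmented ) lagrangian gradient in u is qu + q + g^t ( λ + ρ ( gu + g ) ) , with ρ = 0 recovering the ordinary lagrangian ; this is a generic accelerated projected gradient scheme , not duquad's exact code .

```python
import numpy as np


def inner_fgm(Q, q, G, g, lam, rho, lb, ub, u0, max_iter=200, tol=1e-6):
    """Step 1 sketch: approximately minimize the (augmented) Lagrangian
        L_rho(u, lam) = 0.5*u'Qu + q'u + lam'(Gu + g) + 0.5*rho*||Gu + g||^2
    over the box lb <= u <= ub with Nesterov's fast gradient method."""
    L = np.linalg.eigvalsh(Q + rho * G.T @ G).max()   # Lipschitz constant of grad_u L_rho
    u = y = np.clip(u0, lb, ub)
    t = 1.0
    for _ in range(max_iter):
        grad = Q @ y + q + G.T @ (lam + rho * (G @ y + g))
        u_new = np.clip(y - grad / L, lb, ub)         # projected accelerated gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = u_new + ((t - 1.0) / t_new) * (u_new - u)
        if np.linalg.norm(u_new - u) <= tol:
            return u_new
        u, t = u_new, t_new
    return u   # approximate inner solution; G @ u + g approximates the dual gradient
```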
note that for updating the lagrange multipliers we use instead of the true value of the dual gradient , an approximate value given by : . in it has been proved separately , for dual and augmented dual first order methods , that using an appropriate value for ( depending on the desired accuracy that we want to solve the qp ) we can still preserve the convergence rates of algorithm * fom*( ) given in lemma [ lemma_sublin_dfg ] , although we use inexact dual gradients . in the sequel ,we derive in a unified framework the computational complexity of the dual and augmented dual ( fast ) gradient methods . from our knowledge , this is the first time when both approaches , dual and augmented dual first order methods , are analyzed uniformly .first , we show that by introducing inexact values for the dual function and for its gradient given by the following expressions : then we have a similar descent relation as in given in the following lemma : given such that holds , then based on the definitions we get the following inequalities : - d_\rho(\lambda ) \leq\ !l_{\text{d } } \|\mu - \lambda\|^2 + \ !2 { \epsilon_{\text{in}}}\quad \forall \lambda,\mu \!\in\ ! { \mathbb{r}^}p.\end{aligned}\ ] ] from the definition of , and it can be derived : which proves the first inequality . in order to prove the second inequality ,let be a fixed primal point such that .then , we note that the nonnegative function has lipschitz gradient with constant and thus we have : taking now and using , then we obtain : furthermore , combining with and we have : using the relation we have : which shows the second inequality of our lemma .this lemma will play a major role in proving rate of convergence for the methods presented in this paper .note that in enters linearly , while in enters quadratically in the context of augmented lagrangian and thus in the sequel we will get better convergence estimates than those in the previous papers . in conclusion , for solving the dual problem in duquad we use the following inexact ( augmented ) dual first order algorithm : recall that satisfying the inner criterion and .moreover , is chosen as follows : * * dgm * : in ( augmented ) dual gradient method , where for all , or equivalently for all , i.e. the ordinary gradient algorithm . * * dfgm * : in ( augmented ) dual fast gradient method , where and , i.e. a variant of nesterov s accelerated scheme .therefore , in duquad we can solve the smooth ( augmented ) dual problem either with dual gradient method * dgm * ( ) or with dual fast gradient method * dfgm * ( is updated based on ) .recall that for computing in duquad we use algorithm * fom*( ) ( see the discussion of step 1 ) . when applied to inner subproblem , algorithm * fom*( ) will converge linearly provided that .moreover , when applying algorithm * fom*( ) we use warm start : i.e. we start our iteration from previous computed . combining the inexact descent relation with lemma [ lemma_sublin_dfg ]we obtain the following convergence rate for the general algorithm * dfom*( ) in terms of dual function values of : [ lemma_sublin_dfg_inexact ] for the smooth ( augmented ) dual problem the dual sequences generated by algorithm * dfom*( ) satisfy the following convergence estimate on dual suboptimality : where recall and is defined as in .note that in ( * ? ? 
?* theorem 2 ) , the convergence rate of * dgm * scheme is provided in the average dual iterate and not in the last dual iterate .however , for a uniform treatment in theorem [ lemma_sublin_dfg_inexact ] we redefine the dual final point ( the dual last iterate when some stopping criterion is satisfied ) as follows : {{\mathcal{k}}_d} ] , then we observe that .thus , we have . using the definition of , we obtain : using and the bound for the values and from in the previous inequality , we get : it remains to estimate the primal suboptimality .first , to bound below we proceed as follows : {\mathcal{k } } \rangle \nonumber\\ & \le f(\hat{u}^k_{\epsilon } ) + { \lvert\lambda^*\rvert } { \lvertg\hat{u}^k_{\epsilon}+g - \left[g\hat{u}^k_{\epsilon } + g\right]_{\mathcal{k}}\rvert}\nonumber\\ & = f(\hat{u}^k_{\epsilon } ) + r_d \ ; \text{dist}_{\mathcal{k } } \left(g\hat{u}^k_{\epsilon } + g\right ) .\end{aligned}\ ] ] combining the last inequality with , we obtain : secondly , we observe the following facts : for any , and the following identity holds : based on previous discussion , and , we derive that taking now , and using an inductive argument , we obtain : provided that . from , and, we obtain that the average primal iterate is -primal optimal .further , we analyze the primal convergence rate of algorithm * dfgm * in the average primal iterate : let be some desired accuracy and be the primal average iterate given in , generated by algorithm * dfgm * , i.e. algorithm * dfom*( ) with for all , using the inner accuracy from . then , after number of outer iterations given in , is -primal optimal for the original qp .recall that we have defined .then , it follows : for any we denote and thus we have {\mathcal{k}_d} ] for all , we obtain : {\mathcal{k}_d } ) \right)\right\| \\ & = \left\| \sum\limits_{j=0}^k \frac{\theta_j}{s_k } \left ( \bar{\nabla } d_{\rho}(\mu^j ) - 2 l_{\text{d}}(z^j-[z^j]_{\mathcal{k}_d } ) \right)\right\| \\ & \overset{\eqref{feasibility_aux1}}{= } \frac{l_\text{d}}{s_k}{\lvertl^{k}-l^0\rvert } \le \frac{4l_\text{d}}{(k+1)^2 } { \lvertl^k -l^{0}\rvert}.\end{aligned}\ ] ] taking in lemma [ th_tseng_2 ] and using that the two terms and are positive , we get : thus , we can further bound the primal infeasibility as follows : therefore , using and from , it can be derived that : further , we derive sublinear estimates for primal suboptimality .first , note the following relations : summing on the history and using the convexity of , we get : using in lemma [ th_tseng_2 ] , and dropping the term , we have : moreover , we have that : now , by choosing the lagrange multiplier and in , we have : on the other hand , we have : {\mathcal{k } } \rangle\nonumber\\ & \le f(\hat{u}^k_{\epsilon } ) + r_d \ ; \text{dist}_{{\mathcal{k}}}(g \hat{u}^k_{\epsilon } + g).\end{aligned}\ ] ] taking and from , and using , we obtain : finally , from , and , we get that the primal average sequence is primal optimal . in conclusion , in duquad we generate two approximate primal solutions and for each algorithm * dgm * and * dfgm*. from previous discussion it can be seen that theoretically , the average of primal iterates sequence has a better behavior than the last iterate sequence . 
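to illustrate how the two steps fit together , here is a schematic c sketch of an outer loop of the kind just analyzed , producing both the last and the average primal iterate . the prototype inner_solve stands for the inner * fom * solver sketched earlier , the inequality - constrained case with multipliers kept in the nonnegative orthant is assumed , a plain running average is used instead of the weighted average appearing in the analysis , and all names and the exact cone handling are ours rather than duquad's .

#include <math.h>
#include <stdlib.h>

/* approximate minimizer of the (augmented) lagrangian over the box U for a
   fixed multiplier lam, up to accuracy inner_tol; u holds the warm-start point
   on entry and the approximate minimizer on exit (e.g. the fom sketch above). */
void inner_solve(int n, int p, const double *lam, double inner_tol, double *u);

/* schematic inexact dual (fast) gradient loop for constraints G u + g <= 0,
   with G stored row-major (p-by-n).  Ld is a lipschitz constant of the dual
   gradient (problem dependent, see the text); fast = 0 gives a dgm-like
   update, fast = 1 a dfgm-like (nesterov) update of the multipliers. */
void dfom(int n, int p, const double *G, const double *g,
          double Ld, double inner_tol, int iters, int fast,
          double *lam, double *u_last, double *u_avg)
{
    double *y = calloc(p, sizeof *y);      /* extrapolated multiplier */
    double *prev = calloc(p, sizeof *prev);
    double t = 1.0;

    for (int j = 0; j < p; j++) y[j] = lam[j];
    for (int i = 0; i < n; i++) u_avg[i] = 0.0;

    for (int k = 0; k < iters; k++) {
        /* step 1: inexact primal solution for the current multiplier */
        inner_solve(n, p, y, inner_tol, u_last);

        /* running (uniform) average of the primal iterates */
        for (int i = 0; i < n; i++)
            u_avg[i] += (u_last[i] - u_avg[i]) / (k + 1);

        /* step 2: dual update with the inexact dual gradient G*u_last + g */
        for (int j = 0; j < p; j++) {
            double r = g[j];
            for (int i = 0; i < n; i++) r += G[j * n + i] * u_last[i];
            prev[j] = lam[j];
            lam[j] = y[j] + r / Ld;            /* ascent step on the dual */
            if (lam[j] < 0.0) lam[j] = 0.0;    /* multipliers stay nonnegative */
        }

        if (fast) {                             /* nesterov extrapolation */
            double tn = 0.5 * (1.0 + sqrt(1.0 + 4.0 * t * t));
            for (int j = 0; j < p; j++)
                y[j] = lam[j] + ((t - 1.0) / tn) * (lam[j] - prev[j]);
            t = tn;
        } else {
            for (int j = 0; j < p; j++) y[j] = lam[j];
        }
    }
    free(y);
    free(prev);
}

the warm start mentioned above corresponds to passing the previous primal iterate as the starting point of inner_solve , and in practice the outer loop would stop on the primal suboptimality and infeasibility criteria rather than on a fixed iteration count .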
on the other hand , from our practical experience( see also section [ numerical_tests ] ) we have observed that usually dual first order methods are converging faster in the primal last iterate than in a primal average sequence .moreover , from our unified analysis we can conclude that for both approaches , ordinary dual with and augmented dual with , the rates of convergence of algorithm * dfom * are the same .in this section we derive the total computational complexity of the algorithmic framework * dfom*. without lose of generality , we make the assumptions : however , if any of these assumptions does not hold , then our result are still valid with minor changes in constants .now , we are ready to derive the total number of iterations for * dfom * , i.e. the total number of projections on the set and of matrix - vector multiplications and . [ in : th_last ] let be some desired accuracy and the inner accuracy and the number of outer iterations be as in . by setting and assuming that the primal iterate is obtained by running the algorithm * fom*( ) , then ( ) is ( ) primal optimal after a total number of projections on the set and of matrix - vector multiplications and given by : from lemma [ lemma_sublin_dfg ] we have that the inner problem ( i.e. finding the primal iterate ) for a given can be solved in sublinear ( linear ) time using algorithm * fom*( ) , provided that the inner problem has smooth ( strongly ) convex objective function , i.e. has ( ) .more precisely , from lemma [ lemma_sublin_dfg ] , it follows that , regardless if we apply algorithms * dfgm * or * dgm * , we need to perform the following number of inner iterations for finding the primal iterate for a given : combining these estimates with the expressions for the inner accuracy , we obtain , in the first case , the following inner complexity estimates : multiplying with the number of outer iterations from and minimizing the product over the smoothing parameter ( recall that and ) , we obtain the following optimal computational complexity estimate ( number of projections on the set and evaluations of and ) : which is attained for the optimal parameter choice : using the same reasoning for the second case when , we observe that the value is also optimal for this case in the following sense : the difference between the estimates obtained with the exact optimal and the value are only minor changes in constants .therefore , when , the total computational complexity ( number of projections on the set and evaluations of and ) is : in conclusion , the last primal iterate is -primal optimal after ( ) total number of projections on the set and of matrix - vector multiplications and , provided that ( ) .similarly , the average of primal iterate is -primal optimal after ( ) total number of projections on the set and of matrix - vector multiplications and , provided that ( ) .moreover , the optimal choice for the parameter is of order , provided that .let us analyze now the computational cost per inner and outer iteration for algorithm * dfom*( ) for solving approximately the original qp : * inner iteration * : when we solve the inner problem with the nesterov s algorithm * fom*( ) , the main computational effort is done in computing the gradient of the augmented lagrangian defined in , which e.g. 
has the form : in duquad these matrix - vector operations are implemented efficiently in c ( matrices that do not change along iterations are computed once and only is computed at each outer iteration ) .the cost for computing for general qps is .however , when the matrices and are sparse ( e.g. network utility maximization problem ) the cost can be reduced substantially .the other operations in algorithm * fom*( ) are just vector operations and thus they are of order .thus , the dominant operation at the inner stage is the matrix - vector product .* outer iteration * : when solving the outer ( dual ) problem with algorithm * dfom*( ) , the main computational effort is done in computing the inexact gradient of the dual function : the cost for computing for general qps is .however , when the matrix is sparse , this cost can be reduced .the other operations in algorithm * dfom*( ) are of order .thus the dominant operation at the outer stage is also the matrix - vector product .[ fig : gprof_n150_dfgm_case1 ] displays the result of profiling the code with gprof . in this simulation , a standard qp with inequality constraints and dimensions and solved by algorithm * dfgm*. the profiling summary is listed in the order of the time spent in each file .this figure shows that almost all the time for executing the program is spent in the library module _ math - functions.c_. furthermore , _ mtx - vec - mul _ is by far the dominating function in this list .this function is multiplying a matrix with a vector , which is defined as a special type of matrix multiplication . in conclusion ,in duquad the main operations are the matrix - vector products .therefore , duquad is adequate for solving qp problems on hardware with limited resources and capabilities , since it does not require any solver for linear systems or other complicating operations , while most of the existing solvers for qps from the literature implementing e.g. active set or interior point methods require the capability of solving linear systems . on the other hand, duquad can be also used for solving large - scale sparse qp problems since the iterations are very cheap in this case ( only sparse matrix - vector products ) .duquad is mainly intended for small to medium size , dense qp problems , but it is of course also possible to use duquad to solve ( sparse ) qp instances of large dimension .the duquad software package is available for download from : + + and distributed under general public license to allow linking against proprietary codes .proceed to the menu point `` software '' to obtain a zipped archive of the most current version of duquad . the users manual and extensive source code documentation are available here as well .+ an overview of the workflow in duquad is illustrated in fig .[ fig : duquad_workflow ] .a qp problem is constructed using a matlab script called _test.m_. 
then , the function _duquad.m _ is called with the problem as input ; it is regarded as a preprocessing stage for the online optimization . the binary mex file is then called , with the original problem and the extra information as input . the _ main.c _ file of the c code includes the mex framework and converts the matlab data into c format . furthermore , the converted data is bundled into a c struct and passed as input to the algorithm that solves the problem .
we plot in fig . [ fig : qp_cpu ] the average cpu time for several solvers , obtained by solving random qps with equality constraints for each dimension , with the stopping criteria requiring primal suboptimality and primal infeasibility below the chosen accuracy . in both algorithms * dgm * and * dfgm * we consider the average of iterates . in the case of algorithm * dgm * , at each outer iteration the inner problem is solved with a fixed inner accuracy . for algorithm * dfgm * we consider two scenarios : in the first one , the inner problem is solved with a fixed inner accuracy , while in the second one we use the theoretical inner accuracy from the analysis . we observe a good behavior of algorithm * dfgm * , comparable to cplex and gurobi .
we plot in fig . [ fig : comparison_dfo ] the number of iterations of algorithms * dgm * and * dfgm * in the primal last and average iterates for random qps with inequality constraints of variable dimension . we choose the same type of stopping criteria , with primal suboptimality and infeasibility below the chosen accuracy . from this figure we observe that the number of iterations does not vary much across test cases and depends only mildly on the problem dimension . finally , we observe that dual first order methods usually perform better in the primal last iterate than in the average of primal iterates .
[ fig : comparison_dfo : number of iterations for * dgm * and * dfgm * in primal last / average of iterates for different test cases of the same dimension ( left ) and of variable dimension ( right ) . ]
in this paper we present the solver duquad , specialized for solving general convex quadratic problems arising in many engineering applications . when it is difficult to project on the primal feasible set , we use the ( augmented ) lagrangian relaxation to handle the complicated constraints and then apply dual first order algorithms , based on inexact dual gradient information , for solving the corresponding dual problem . the iteration complexity analysis is based on two types of approximate primal solutions : the primal last iterate and an average of primal iterates . we provide computational complexity estimates on the primal suboptimality and feasibility violation of the generated approximate primal solutions . these algorithms are implemented in the programming language c in duquad and optimized for low iteration complexity and low memory footprint . duquad has a dynamic matlab interface which makes testing , comparing , and analyzing the algorithms simple . the algorithms use only basic arithmetic and logical operations and are therefore suitable for running on low - cost hardware . we show that , when an approximate solution is sufficient for a given application , there exist problems for which some of the implemented algorithms obtain the solution faster than state - of - the - art commercial solvers .
one of the factors of the enormous success of our societies is our ability to cooperate . while in most animal species cooperationis observed only among kin or in very small groups , where future interactions are likely , cooperation among people goes far beyond the five rules of cooperation : recent experiments have shown that people cooperate also in one - shot anonymous interactions and even in large groups .this poses an evolutionary puzzle : why are people willing to pay costs to help strangers when no future rewards seem to be at stake ?a growing body of experimental research suggests that cooperative decision - making in one - shot interactions is most likely a history - dependent dynamic process . _dynamic _ because time pressure , cognitive load , conceptual priming of intuition , and disruption of the right lateral prefrontal cortex have all been shown to promote cooperation , providing direct evidence that automatic actions are , on average , more cooperative than deliberate actions ._ history - dependent _ because it has been found that previous experience with economic games on cooperation and intuition interact such that experienced subjects are less cooperative than inexperienced subjects , but only under time pressure and that intuition promotes cooperative behavior only among inexperienced subjects with above median trust in the setting where they live .while this latter paper also shows that promoting intuition versus reflection has no effect among experienced subjects , its results are inconclusive with regard to people with little trust in their environment , due to the limited number of observations . more generally , the limitation of previous studies is that they have all been conducted in developed countries and so they do not allow to draw any conclusions about what happens among people from a societal background in which they are exposed to frequent non - cooperative acts .two fundamental questions remain then unsolved .what is the effect of promoting intuition versus deliberation among people living in a non - cooperative setting ?how does this interact with previous experience with economic games on cooperative decision - making ?the first question is particularly intriguing since , based on existing theories , several alternatives are possible .the social heuristics hypothesis ( shh ) , introduced by rand and colleagues to explain the intuitive predisposition towards cooperation described above `` posits that cooperative decision making is guided by heuristic strategies that have generally been successful in one s previous social interactions and have , over time , become internalized and automatically applied to social interactions that resemble situations one has encountered in the past .when one encounters a new or atypical social situation that is unlike previous experience , one generally tends to rely on these heuristics as an intuitive default response .however , through additional deliberation about the details of the situation , one can override this heuristic response and arrive at a response that is more tailored to the current interaction '' .then , according to the shh , inexperienced subjects living in a non - cooperative setting should bring their non - cooperative strategy ( learned in the setting where they live ) in the lab as a default strategy .these subjects are then predicted to act non - cooperatively both under time pressure , because they use their non - cooperative default strategy , and under time delay , because defection is optimal 
in one - shot interactions . however , this is not the only possibility .several studies have shown that patients who suffered ventromedial prefrontal cortex damage , which causes the loss of emotional responsiveness , are more likely to display anti - social behavior .these findings support the interpretation that intuitive emotions play an important role in pro - social behavior and form the basis of haidt s social intuition model ( sim ) according to which moral judgment is caused by quick moral intuitions and is followed ( when needed ) by slow , ex post facto , moral reasoning . while the sim does not make any prediction on what happens in the specific domain of cooperation , it would certainly be consistent with a general intuitive predisposition towards cooperation , mediated by positive emotions , and independent of the social setting in which an individual is embedded .a third alternative is yet possible .motivated by work suggesting that people whose self - control resources have been taxed tend to cheat more and be less altruistic , it has been argued that self - control plays an important role in overriding selfish impulses and bringing behavior in line with moral standards .this is consistent with kohlberg s rationalist approach , which assumes that moral choices are guided by reason and cognition : as their cognitive capabilities increase , people learn how to take the other s perspective , which is fundamental for pro - social behavior .this rationalist approach makes the explicit prediction that promoting intuition always undermine cooperation . in sum, the question of how promoting intuition versus reflection affects cooperative behavior among people living in a non - cooperative setting is far from being trivial and , based on existing theories , all three possibilities ( positive effect , negative effect , no effect ) are , a priori , possible . concerning previous experience on economic games , while the sim and the rationalist approach do not make any prediction about its role on cooperative decision - making among people living in a non - cooperative setting, the shh predicts that it has either a null or a positive effect driven by intuitive responses . this because experienced participants , despite their living in a non - cooperative setting , _ might _ have internalized a cooperative strategy to be used only in experiments . of course , the shh does not predict that a substantial proportion of subjects have _ in fact _ developed such a context - dependent cooperative intuition - and this is why the predicted effect is _ either _ positive _ or _ null . in the former case, however , the shh predicts that the positive effect should be driven by intuitive responses , since the shh assumes that experience operates primarily through the channel of intuition .here we report on an experiment aimed at clarifying these points .we provide evidence of two major results : ( i ) promoting intuition versus reflection has no effect on cooperation among subjects living in a non - cooperative setting and with no previous experience with economic games on cooperation ; ( ii ) experienced subjects are more cooperative than inexperienced subjects , but only when acting under time pressure . 
taken together, these results suggest that cooperation is a learning process , rather than an instinctive impulse or a self - controlled choice , and that experience operates primarily via the channel of intuition .in doing so , they shed further light on human cooperative decision - making and provide further support for the social heuristics hypothesis .we have conducted an experiment using the online labor market amazon mechanical turk ( amt ) recruiting participants only from india .india is a particularly suited country to hire people from for our purpose : if , as many studies have confirmed , good institutions are crucial for the evolution of cooperation , and if , as many scholars have argued , corruption and cronyism are endemic in indian society , then residents in india are likely to have very little trust on strangers and so they are likely to have internalized non - cooperative strategies in their every - day life .one study confirms this hypothesis , by showing that spiteful preferences are widespread in the village of uttar pradesh and this ultimately implies residents inability to cooperate . at the same time , according to demographic studies on amt population , india is the second most active country on amt after the us , which facilitates the procedure of collecting data .participants were randomly assigned to either of two conditions : in the _ time pressure _ condition we measured intuitive cooperation ; in the _ time delay _ condition we measured deliberate cooperation . as a measure of cooperation, we adopted a standard two - person prisoner s dilemma ( pd ) with a continuous set of strategies .specifically , participants were given an endowment of , and asked to decide how much , if any , to transfer to the other participant .the amount transferred would be multiplied by 2 and earned by the other participant ; the remainder would be earned by themselves , but without being multiplied by any factor .each participant was informed that the other participant was facing the same decision problem .participants in the time pressure condition were asked to make a decision within 10 seconds and those in the time delay condition were asked to wait for at least 30 seconds before making their choice . after making their decision , participants had to answer four comprehension questions , after which they entered the demographic questionnaire , where , along with the usual questions , we also asked `` to what extent have you previously participated in other studies like to this one ( e.g. , exchanging money with strangers ) ? '' using a 5 point likert - scale from `` never '' to `` several times '' .as in previous studies , we used the answer to this question as a measure of participant s previous experience with economic games on cooperative decision - making .as in these studies , we say that a subject is _ inexperienced _ if he or she answered `` never '' to the above question .in the supplementary online material we also report the results of a pilot , in which we measured participants level of experience by asking them to report the extent to which they had participated in _exactly _ the same task before .although the use of the word `` exactly '' may lead to confusion , with some minor differences in the details , our main results are robust to the use of this measure ( see supplementary online material for more details ) . after collecting the results ,bonuses were computed and paid on top of the participation fee ( ) . 
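restating the rules of the game in symbols may help ( the notation is ours , not part of the instructions shown to participants ) : let \(E\) denote the endowment and \(t_i \in [0, E]\) the amount transferred by participant \(i\) . participant \(i\) then earns
\[
\pi_i \;=\; E - t_i + 2\,t_j ,
\]
so transferring nothing maximizes one's own payoff whatever the other participant does , while mutual full transfer gives each participant \(2E\) instead of \(E\) ; this is the sense in which defection is payoff - maximizing in the one - shot game , as noted earlier .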
no deception was used .a total of 949 subjects participated in our experiment . taken globally ,results contain a lot of noise , since only 449 subjects passed the comprehension questions . herewe restrict our analysis to subjects who passed all comprehension questions and we refer the reader to the supplementary online material for the analysis of those subjects who failed the attention check .we include in our analysis also subjects who did not obey the time constraint in order to avoid selection problems that impair causal inference .first we ascertain that our time manipulation effectively worked . analyzing participants decision times , we find that those in the time delay condition ( ) took , on average , 45.64 seconds to make their decision , while those under time pressure took , on average , only 20.04 seconds .thus , although many subjects under time pressure did not obey the time constraint , the time manipulation still had a substantial effect .participants under time pressure transferred , on average , 27.93% of their endowment , and those under time delay transferred , on average , 28.57% of their endowment .linear regression using time manipulation as a dummy variable confirms that the difference is not statistically significant ( coeff , ) , even after controlling for age , sex , and level of education ( coeff ) .we note that restricting the analysis to subjects who obeyed the time constraint leads to qualitatively equivalent results ( 26.11% under time pressure vs 29.02% under time delay , coeff , ) .thus , promoting intuition versus reflection does not have any effect on cooperative decision - making . _ en passant _ , we note that these percentages are far below those observed among us residents in a very similar experiment .more precisely , in this latter paper , us residents started out with a endowment and were asked to decide how much , if any , to give to the other person .as in the current study , the amount transferred would be multiplied by 2 and earned by the other player .strictly speaking , these two experiments are not comparable for three reasons .first , in there was no time manipulation ; second , the initial endowments were different ; third , stakes used in the experiment in the us did not correspond to the same stakes in indian currency . according to previous research ,these differences are minor .indeed , recent studies have argued that stakes do not matter as long as they are not too high and that neutrally framed pds give rise to a percentage of cooperation sitting between that obtained in the time pressure condition and that obtained in the time delay condition . thus comparing the percentage of cooperation in the current study ( 28% ) with that reported in ( 52% ) supports our assumption that the average indian sample is particularly non - cooperative or , at least , less cooperative than the average us sample .next we investigate our main research questions .figure 1 summarizes our results , providing visual evidence that ( i ) promoting intuition versus reflection has no significant effect on cooperation among inexperienced subjects ; that ( ii ) experienced subjects cooperate more than inexperienced subjects , but only when acting under time pressure . 
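for concreteness , the linear regressions referred to throughout this section have the form ( our notation ) :
\[
\text{transfer}_i \;=\; \beta_0 + \beta_1\,\text{pressure}_i + \gamma^{\top} x_i + \varepsilon_i ,
\]
where \(\text{pressure}_i\) is the dummy equal to 1 in the time pressure condition and 0 in the time delay condition , and \(x_i\) collects the socio - demographic controls ( age , sex , level of education ) when they are included ; the regressions involving experience use the experience dummy , or the likert - scale level of experience , as the independent variable within each time condition .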
more specifically , we find that inexperienced subjects under time pressure ( ) transferred , on average , of their endowment while those under time delay ( ) transferred , on average , of their endowment .the difference is not significant ( coeff , ) , even after controlling for all socio - demographic variables ( coeff , ) .thus , promoting intuition versus reflection has no effect on cooperation among inexperienced subjects living in a non - cooperative setting .this finding is robust to controlling for people who did not obey the time constraint ( coeff , ) and to controlling for people who obeyed the time contraint ( coeff , ) . to explore the interaction between experience and cooperative behavior , as in previous studies , we separate subjects into experienced and inexperienced .this procedure comes from the observation that , although the level of experience is a categorical variable , the association between participant s objective level of experience and their answer to our question is objective only in case of inexperienced subjects .linear regression predicting cooperation using experience as a dummy variable confirms that experience with economic games on cooperation favors the emergence of cooperative choices , but only among people in the time pressure condition ( time pressure : coeff , ; time delay : coeff , ) .these results are robust to including control on the socio - demographic variables ( time pressure : coeff , ; time delay : coeff , ) .our main results are also robust to using non - parametric tests , such as wilcoxon rank - sum : the rate of cooperation of inexperienced subjects is not statistically distinguishable from the rate of cooperation of inexperienced subjects acting under time delay both when they act under time pressure ( ) and under time delay ( ) ; and the rate of cooperation of inexperienced subjects is significantly smaller than that of experienced subjects , but only among those acting under time pressure ( time pressure : ; time delay : ) . for completeness , we also report the results of linear regression predicting cooperation using level of experience as independent variable .we find that level of experience has a marginally significant positive effect on cooperation among subjects acting under time pressure ( coeff , ) and has no effect on cooperation among subjects acting under time delay ( coeff = , ) .also these results are robust to including control on all the socio - demographic variables ( time pressure : coeff , ; time delay : coeff , ) .the increase of cooperation from inexperienced subjects to experienced subjects seems to be driven by participants under time pressure who did _ not _ obey the time constraint .specifically , linear regression predicting cooperation using experience as a dummy variable yields non - significant results in case of participants who obeyed the time pressure condition ( without control : coeff , ; with control : coeff , ) and significant results in case of participants who did not obey it ( without control : coeff , ; with control : coeff , ) .this is not surprising and it is most probably due to noise generated by a combination of two factors : the set of people who obeyed the time pressure contraint is very small ( for instance , only 14 inexperienced people obeyed the time constraint ) and it is more likely to contain people who did not understand the decision problem but passed the comprehension questions by chance ( which we estimated to be 5% of the total .see supplementary online material ) . 
[ figure 1 : previous experience with economic games on cooperation has a positive effect on cooperation , but only among participants in the time pressure condition . ]
we have shown that ( i ) promoting intuition via time pressure versus promoting deliberation via time delay has no effect on cooperative behavior among subjects resident in india with no previous experience with economic games on cooperation , and that ( ii ) experience has a positive effect on cooperation , but this effect is significant only among subjects acting under time pressure . our results have several major implications , the first of which is to provide further support for the social heuristics hypothesis ( shh ) . introduced in order to organize the growing body of literature providing direct and indirect evidence that , on average , intuitive responses are more cooperative than reflective responses , the shh contends that people internalize strategies that are successful in their everyday social interactions and then apply them to social interactions that resemble situations they have encountered in the past . thus , when they encounter a new or atypical situation , people tend to rely on these heuristics and use them as intuitive responses . deliberation can override these heuristics and adjust behavior towards one that is more tailored to the current interaction . as such , the shh makes a prediction that has not been tested so far : inexperienced subjects living in a non - cooperative setting should act non - cooperatively both under time pressure , because they use their non - cooperative default strategy ( learned in the setting where they live ) , and under time delay , because defection is optimal in one - shot interactions . our results support this prediction . besides this prediction , the shh is also consistent with an interaction between level of previous experience with economic games on cooperation , time pressure , and cooperation in one - shot interactions : experienced people , despite living in a non - cooperative setting , _ might _ have internalized a cooperative strategy to be used only on amt . the shh does not predict that a substantial proportion of experienced people have _ in fact _ developed this context - dependent intuition for cooperation , but it is certainly consistent with a positive effect of experience on cooperation driven by intuitive responses . our results provide evidence for this phenomenon . as mentioned in the introduction , kohlberg's rationalist approach makes the explicit prediction that promoting intuition should always undermine cooperation . thus our results support the shh over kohlberg's rationalist approach . of course , this does _ not _ imply that the rationalist approach should be completely rejected : it is indeed supported by many experimental studies involving pro - social behaviors other than cooperation . if anything , our results point out that different pro - social behaviors may emerge from different cognitive processes . classifying pro - social behaviors in terms of the processes involved is an important direction for future research towards which , to the best of our knowledge , only one recent study has attempted a first step .
supporting the shh, our results suggest that economic models of human cooperation should start taking dual processes and individual history into account .indeed , virtually all major models of human cooperation are static and decontextualized and only a handful of papers have recently attempted a first step in the direction of taking dual processes into account . we believe that extending these approaches to incorporate also individual history could be a promising direction for future research .our findings go beyond the mere support of the shh . our cross cultural analysis , although it is formally not correct , shows that residents in india are , on average , less cooperative than us residents .the difference is so large ( 28% vs 52% ) that it is hard to explain it by appealing to minor differences in the experimental designs and so it deserves to be commented .one possibility , supported by the experimental evidence that good institutions are crucial in promoting cooperation and the evidence that india struggles on a daily basis to fight corruption in politics at both the national and local levels , is that residents in india may have internalized non cooperative behavior in their everyday life ( because cooperation is not promoted by their institutions ) and they tend to apply it also to the new situation of a lab experiment .one far - reaching consequence of this interpretation is that the role of local institutions may go far beyond regularizing behavior .if institutions do not support cooperative behavior , selfishness may even get internalized and applied to atypical situations where people rely on heuristics .while this interpretation is supported by a recent study showing that norms of cooperation learned in one experiment spill over to subsequent experiments where there are no norms , we recommend caution on our interpretation , since our results do _ not _ show directly that inexperienced residents in india are less cooperative than us residents _ because _ they are embedded into a society whose institutions do not promote cooperative behavior .however , we believe that this is a fundamental point that deserves to be rigorously addressed in further research .interestingly , we have shown that experienced residents in india are significantly more cooperative than inexperienced ones .this correlation appears to be even more surprising if seen in light of recent studies reporting that experience has a _ negative _ effect on cooperation among residents in the us .although the sign of these effects are different , they share the property that they are driven by intuitive responses .thus they are in line with the shh , which assumes that experience operates mainly through the channel of intuition , but it does not make any prediction about the sign of the effect of experience , which may ultimately depend on a number of factors . 
while it is relatively easy to explain a negative effect of experience with economic games on cooperation , by appealing to learning of the payoff - maximizing strategy , explaining a positive effect is harder . one possibility is that experienced subjects have learned cooperation in iterated games , where it might be strategically advantageous , and tend to apply it also in one - shot games . another possibility is that turkers are developing a feeling of community that may favor the emergence of pro - social preferences . understanding what mechanisms can promote the emergence of cooperation from a non - cooperative setting is certainly a fundamental topic for further research .
microecon ._ * 3 * , 34 - 68 .fudenberg d , levine dk .2012 timing and self - control ._ econometrica _ * 80 * , 1 - 42 .dreber a , fudenberg d , levine dk , rand dg .2014altruism and self - control ._ available at ssrn : http://ssrn.com / abstract=2477454_. peysakhovich a , rand dg . in press .habits of virtue : creating norms of cooperation and defection in the laboratory ._ this supplementary information is divided in three sections : in the first section we report more details about the statistical analysis ; in the second section we report full instructions of our experiment . in the third sectionwe report the results of our pilot ( containing a methodological error in the measure of participants level of experience ) . with some differences in the details ,these results are in line with those reported in the main text and thus provide further evidence in support of our main result that experienced subjects cooperate more than inexperienced subjects and that experience operates primarily through the channel of intuition .we start by reporting the socio - demographics of participants who passed the comprehension questions .data are summarized in the table .we remind that participants level of experience was measured using a 5-point likert - scale from 1=never to 5=several times .the socio - demographic statistics show that the majority of subjects in the time pressure condition did not obey the time constraint .this is likely due to the fact that reading the instructions of the decision problem takes about six seconds and so participants had only 4 seconds to understand the problem and make their decision . however , the mean decision time of participants in the time pressure condition was much smaller than the mean decision time of participants in the time delay condition , providing evidence that the time manipulation still had a substantial effect . [ cols="<,^,^",options="header " , ]finally we analyze subjects who failed the attention test . indeed , the large number of participants who failed the comprehension questions ( about half of the total number ) is worrisome that a substantial fraction of those who passed the comprehension questions may have passed them just by chance .although this rate of passing is not much lower than that in similar studies conducted in the us ( in a very similar experiment , conducted with american subjects and published in , 32% of subjects did not pass the comprehension questions ) it could potentially generate noise in our data .indeed , subjects who failed the attention test played essentially at random ( time pressure : average transfer = 47.20% ; time delay : average transfer = 46.35% ) . to exclude this possibility ,we analyse failers responses to the comprehension questions .we start with the first comprehension question , which asked `` what is the choice by you that maximizes your outcome ? 
'' .participants could choose any even amount of money from 0c to 20c , for a total of 11 possible choices .figure 1 reports the distribution of responses of people who failed at least one comprehension questions .the distribution is clearly tri - modal , with 60% of responses equally distributed among the two extreme choices ( full cooperation and full defection ) and the mid - point ( transfer half ) .the remaining 40% is equally distributed among all other choices .consequently , assuming that confused participants respond to the first comprehension question according to this distribution , the probability that a confused participant pass the first comprehension question by chance is equal to 1/5 .next we analyze the responses to the second comprehension question , which asked `` what is the choice by you that maximizes the other participant s outcome ? '' .figure 2 reports the distribution of answers .also in this case we find a tri - modal distribution , although this time the correct answer appeared with higher frequency ( about 50% ) . assuming that a confused participant answered this question according to this distribution , we conclude that a confused participant had probability about 1/10 to pass the first two comprehension questions by chance .then we analyze the responses to the third comprehension question , which asked `` what is the choice by the other participant that maximizes your outcome ? ''figure 3 reports the distribution of answers .this time the distribution is essentially bi - modal , with about 3/5 of people answering correctly . assuming that confused participants answered according to this distribution , we conclude that a confused participant had probability about 3/50 to pass the first three comprehension questions by chance .now , it is impossible to have a clue of what the proportion of confused participants who passed the fourth comprehension question by chance is .the analysis above suggests that 3/50 is an upper bound of the probability that a confusing subject passed all comprehension questions by chance .interestingly , 88% of subjects who answered 20c in the third comprehension and failed the fourth comprehension question , answered 20c also to the fourth comprehension question , which asked `` what is the choice by the other participant that maximizes the other participant s outcome ? '' .this suggests that the actual probability of passing all comprehension questions by chance is much lower that 3/50 . in any case , although our results do not allow to make a precise estimation of noise , the proportion of people who passed the comprehension questions by chance is likely below 5% of the total , suggesting that noise is a minor problem in our data .participants were randomly assigned to either the time pressure condition or the time delay condition . in both conditions , after entering their worker i d , participants were informed that they would be asked to make a choice in a decision problem to be presented later and that comprehension questions would be asked .participants were also informed that the survey ( which was made using the software qualtrics ) contained a skip logic which would automatically exclude all participants failing any of the comprehension questions . 
specifically , this screen was as follows : _ welcome to this hit ._ _ this hit will take about five minutes .for the participation to this hit , you will earn 0.50 us dollars , that is , about 31 inr .you can also earn additional money depending on the decisions that you and the other participants will make ._ you will be asked to make one decision .there is no incorrect answer .however : _ _ important : after making the decision , to make sure you understood the decision problem , we will ask some simple questions , each of which has only one correct answer . if you fail to correctly answer any of those questions , the survey will automatically end and you will not receive any redemption code and consequently you will not get any payment ._ _ with this in mind , do you wish to continue ? _ at this stage , they could either leave the study or continue .those who decided to continue were redirected to an introductory screen where we gave them all the necessary information about the decision problem , but without telling exactly which one it is .this is important in order to have the time pressure and time delay conditions work properly in the next screen .this introductory screen for the participants in the time pressure condition was the following : _ you have been paired with another participant .you can earn additional money depending on the decision you will make in the next screen .you will be asked to make a choice that can affect your and the other participant s outcome .the decision problem is symmetric : also the other participant is facing the same decision problem .after the survey is completed , you will be paid according to your and the other participant s choices ._ _ you will have only 10 seconds to make the choice ._ _ this is the only interaction you have with the other participant .he or she will not have the opportunity to influence your gain in later parts of the hit .if you are ready , go to the next page ._ the introductory screen for the participants in the time delay condition was identical , a part from the fact that the sentence ` you will have only 10 seconds to make the choice ' was replaced by the sentence ` you will be asked to think for at least 30 seconds before making your choice .use this time to think carefully about the decision problem ' .the decision screen was the same in both conditions : _ you and the other participant are both given us dollars .you and the other participant can transfer , independently , money to the each other .every cent you transfer , will be multiplied by and earned by the other participant .every cent you do not transfer , will be earned by you ._ _ how much do you want to transfer ? _ by using appropriate buttons , participants could transfer any even amount of money from to .en passant , we observe that reading the decision screen takes about six seconds and thus participants under time pressure had only about four seconds to make their choice . to assure that time pressure and time delay work properly ,it is necessary that comprehension questions are asked after the decision has been made .thus , right after the decision screen , participants faced the following four comprehension questions . _ what is the choice by you that maximizes your outcome ? __ what is the choice by you that maximizes the other participant s outcome ? _ _ what is the choice by the other participant that maximizes your outcome ?_ _ what is the choice by the other participant that maximizes the other participant s outcome ? 
_ by using appropriate buttons, participants could select any even amount of money from to . participants who failed any of the comprehension questions were automatically excluded from the survey. those who answered all questions correctly entered the demographic questionnaire, where we asked for their age, sex, reason for their choice, and, most importantly, level of experience in these games. specifically, we asked the following question: _ to what extent have you previously participated in other studies like this one (e.g., exchanging money with strangers)? _ answers were collected using a 5-point likert scale from ``1=never'' to ``5=several times''. our pilot experiment was identical to our main experiment, except for the fact that, as a measure of experience, we asked participants to self-report the extent to which they had participated in ``exactly'' the same task before. participants could choose between: never, once or twice, and several times. the use of the word ``exactly'' is problematic, since it might lead to confusion: what does the answer ``exactly the same task'' imply? does it imply that participants have participated in a task with exactly the same instructions (including time constraints), or does it imply that participants have participated in a task containing the same economic game? figure 4 reports the results of the pilot. we observe that, indeed, the details are different: the level of experience seems to have an inverted-u effect on cooperation, which, since it has not been replicated in the main experiment, is probably due to confusion regarding the interpretation of the word ``exactly''. however, as in our main experiment, we find that experienced subjects are significantly more cooperative than subjects with little or no experience, and that this behavioral change is mainly driven by intuitive responses (see figure 5). specifically, linear regression confirms that subjects with little experience are significantly less cooperative than inexperienced subjects both under time pressure (coeff , ) and under time delay (coeff , ); and confirms that experienced subjects are significantly more cooperative than subjects with little experience both under time pressure (coeff , ) and under time delay (coeff , ). moreover, experienced subjects were also significantly more cooperative than inexperienced subjects, both under time pressure (coeff , p ) and under time delay (coeff , p ). thus experience has a significant inverted-u effect on cooperation, where subjects with little experience cooperate the least and experienced subjects the most. the coefficients of the previous regressions suggest that the motivations behind the initial decrease of cooperation, which affects subjects under time pressure and those under time delay to exactly the same extent, are different from the motivations behind the subsequent flourishing of cooperation, which seems to affect subjects under time pressure to a larger extent than those under time delay. to confirm this, we use linear regression to predict decisions among experienced subjects using time pressure as a dummy variable. we find that experienced subjects under time pressure are nearly significantly more cooperative than experienced subjects under time delay (coeff , ).
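as a rough illustration of the regression analyses reported above, the sketch below shows how the comparisons between experience groups and between conditions could be coded. the toy data frame, its column names (transfer, experience, pressure) and the collapsing of the likert scale into three groups are assumptions made for illustration only; this is not the authors' analysis code.

```python
# illustrative sketch of the regressions described above (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy data standing in for the real responses (all values arbitrary)
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "transfer": rng.choice(np.arange(0, 22, 2), n),   # 0-20 cents, even amounts
    "experience": rng.integers(1, 6, n),              # 1 = never ... 5 = several times
    "pressure": rng.integers(0, 2, n),                # 1 = time pressure, 0 = time delay
})

def experience_group(x):
    """collapse the likert scale into the three groups discussed in the text."""
    if x == 1:
        return "inexperienced"
    if x <= 3:
        return "little"
    return "experienced"

df["group"] = df["experience"].apply(experience_group)

# cooperation as a function of experience group, run separately by condition
for cond, sub in df.groupby("pressure"):
    fit = smf.ols("transfer ~ C(group, Treatment('inexperienced'))", data=sub).fit()
    print("time pressure" if cond == 1 else "time delay")
    print(fit.params, fit.pvalues)

# time pressure as a dummy variable among experienced subjects only
experienced = df[df["group"] == "experienced"]
print(smf.ols("transfer ~ pressure", data=experienced).fit().summary())
```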
recent studies suggest that cooperative decision-making in one-shot interactions is a history-dependent dynamic process: promoting intuition versus deliberation typically has a positive effect on cooperation (dynamism) among people living in a cooperative setting and with no previous experience in economic games (history-dependence). here we report on a lab experiment exploring how these findings transfer to a non-cooperative setting. we find two major results: (i) promoting intuition versus deliberation has no effect on cooperative behavior among inexperienced subjects living in a non-cooperative setting; (ii) experienced subjects cooperate more than inexperienced subjects, but only under time pressure. these results suggest that cooperation is a learning process, rather than an instinctive impulse or a self-controlled choice, and that experience operates primarily via the channel of intuition. in doing so, our findings shed further light on the cognitive basis of human cooperative decision-making and provide further support for the recently proposed social heuristics hypothesis. centre for mathematics and computer science (cwi), 1098 xg, amsterdam, the netherlands. department of political sciences, luiss guido carli, 00197, roma, italy. contact author: v.capraro.nl. _forthcoming in proceedings of the royal society b: biological sciences_
in this paper , we consider the problem of the nonparametric density deconvolution of , the density of identically distributed variables , from a sample in the model where the s and s are independent sequences , the s are i.i.d . centered random variables with common density , that is is a noise with known density and known noise level . due to the independence between the s and the s ,the problem is to estimate using the observations with common density the function is often called the convolution kernel and is completely known here . denoting by the fourier transform of ,it is well known that since , two factors determine the estimation accuracy in the standard density deconvolution problem : the smoothness of the density to be estimated , and the one of the error density which are described by the rate of decay of their fourier transforms . in this context ,two classes of errors are usually considered : first the so called ordinary smooth " errors with polynomial decay of their fourier transform and second , the super smooth " errors with fourier transform having an exponential decay . for further references about density deconvolution see e.g. carroll and hall ( 1988 ) , devroye ( 1989 ) , fan ( 1991a , b ) , liu and taylor ( 1989 ) , masry ( 1991 , 1993a , b ) , stefansky ( 1990 ) , stefansky and carroll ( 1990 ) , taylor and zhang ( 1990 ) , zhang ( 1990 ) and cator ( 2001 ) , pensky and vidakovic ( 1999 ) , pensky ( 2002 ) , fan and koo ( 2002 ) , butucea ( 2004 ) , butucea and tsybakov ( 2004 ) , koo ( 1999 ) . the aim of the present paper is to provide a complete simulation study of the deconvolution estimator constructed by a penalized contrast minimization on a model , a space of square integrable functions having a fourier transform with compact support included into ] ( see meyer ( 1990 ) ) , we denote by such a space and consider the collection of linear spaces , with , , and with , as projection spaces .consequently , \},\end{aligned}\ ] ] and the orthogonal projection of on , is given by , with . sincethis orthogonal projection involves infinite sums , we consider in practice , the truncated spaces defined as where is an integer to be chosen later .associated to those spaces we consider the orthogonal projection of on denoted by and given by associate this collection of models to the following contrast function , for belonging to some of the collection since = \langle t , g\rangle, ] ( see ibragimov and hasminskii ( 1983 ) ) .consequently , under ( [ momg ] ) , if , the rate of convergence of is obtained by selecting the space , and thus , that minimizes one can see that if becomes too large , the risk explodes , due to the presence of the second term .hence appears to be the cut between the relevant low frequencies used in the fourier transforms to compute the estimate and the high frequencies which are not used ( and may even degrade the quality of the risk ) .we give the resulting rates in table [ rates ] . for a density satisfying ( [ super ] ) , rates are , in most cases , known to be the optimal one in the minimax sense ( see fan ( 1991a ) , butucea ( 2004 ) , butucea and tsybakov ( 2004 ) ) .we refer to comte et al .( 2005 ) for further discussion about optimality .
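for readers who want a concrete picture of the estimator family discussed here, the following is a minimal numerical sketch of a fourier-cutoff deconvolution estimate on a single projection space. it assumes centered gaussian noise with known standard deviation and a cutoff m fixed by hand; it is an illustration only and does not implement the penalized, data-driven model selection of comte et al. (2005).

```python
# minimal sketch of a fourier-cutoff deconvolution estimator on one projection
# space (illustrative only; the penalized, data-driven choice of m is omitted).
import numpy as np

def deconvolve(z, x_grid, m, sigma_eps):
    """estimate f on x_grid from observations z = x + eps, assuming centered
    gaussian noise eps with known standard deviation sigma_eps. the fourier
    transform of f is estimated by the empirical characteristic function of z
    divided by the noise characteristic function, truncated to |t| <= pi*m,
    and then inverted numerically."""
    t = np.linspace(-np.pi * m, np.pi * m, 2048)      # frequency grid
    ecf = np.exp(1j * np.outer(t, z)).mean(axis=1)    # empirical char. function of z
    noise_cf = np.exp(-0.5 * (sigma_eps * t) ** 2)    # known gaussian noise char. function
    f_hat_ft = ecf / noise_cf                         # estimate of the fourier transform of f
    dt = t[1] - t[0]
    return (np.exp(-1j * np.outer(x_grid, t)) @ f_hat_ft).real * dt / (2 * np.pi)

# example: x standard normal, observed with gaussian noise of standard deviation 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
z = x + 0.5 * rng.standard_normal(2000)
f_est = deconvolve(z, np.linspace(-4, 4, 200), m=2, sigma_eps=0.5)
```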
we consider the problem of estimating the density f of identically distributed variables x_1, ..., x_n, from a sample z_1, ..., z_n where z_i = x_i + ε_i, and ε_i is a noise independent of x_i with known density. we generalize adaptive estimators, constructed by a model selection procedure, described in comte et al. (2005). we study numerically their properties in various contexts and we test their robustness. comparisons are made with respect to deconvolution kernel estimators, misspecification of errors, dependency, ... it appears that our estimation algorithm, based on a fast procedure, performs very well in all contexts. université paris v, map5, umr cnrs 8145. iut de paris v et université d'orsay, laboratoire de probabilités, statistique et modélisation, umr 8628. adaptive estimation. density deconvolution. model selection. penalized contrast. projection estimator. simulation study. data-driven.
recent discoveries have underlined a key role of astrophysics in the study of nature. in this paper we present a potential instrument for measuring high energy photon polarization with a proven detector technique which should allow preparation of a reliable tool for a space-borne observatory. polarization of the photon has played an important role (sometimes even before it was recognized) in physics discoveries such as the famous young's interference experiment, the michelson-morley test of the ether theory, determination of the neutral pion parity and many others, including more recently the spin structure of the nucleon. polarization of the cosmic microwave background (cmb) will likely be a crucial observable for the inflation theory (see planck [sci.esa.int/planck] and bicep [bicepkeck.org] results). during the last decade, observations from the agile [agile.rm.iasf.cnr.it] and fermi-lat [www-glast.stanford.edu] pair production telescopes have enhanced our understanding of gamma (γ) ray astronomy. with the help of these telescopes numerous high energy γ-ray sources have been observed. however, the current measurements are insufficient to fully understand the physics mechanism of such γ-ray sources as gamma ray bursts (grbs), active galactic nuclei (agns), blazars, pulsars, and supernova remnants (snrs). even though both telescopes cover a wide range of energy (from 20 mev to more than 300 gev), neither of them is capable of polarization measurements. medium to high energy photon polarimeters for astrophysics were proposed by the nasa group and recently by the saclay group. both are considering ar(xe)-based gas-filled detectors: the time projection chamber with a micro-well or micromega section for amplification of ionization. in this paper we evaluate the features of an electron-positron pair polarimeter for the full energy range from 20 mev to 1000 mev and then propose a specific design for a polarimeter in the 100 to 300 mev energy range using silicon micro-strip detectors, msds, whose principal advantage with respect to the gas-based tpc is that the spatial and two-track resolution is about five to ten times better. the paper is organized in the following way: in section [sec_moti] we briefly discuss the motivation for cosmic γ-ray polarimetry in the high energy region. section [sec_pol] is devoted to measurement techniques, polarimeters being built and current proposals.
in section [ sec_flux ]we calculate the photon flux coming from the crab pulsar and crab nebula .the design of the new polarimeter and its performance are discussed in the last few sections .there are several recent reviews of photon polarimetry in astrophysics which address many questions which we just briefly touch on in this section .photon polarimetry for energy below a few mev is a very active field of astrophysical research , and some examples of the productive use of polarimetry at these energies include : detection of exoplanets , analysis of chemical composition of planetary atmosphere , and investigation of interstellar matter , quasar jets and solar flares .however , no polarization measurements are available in the medium and high energy regions because of the instrumental challenges .the primary motivation in proposing a polarimeter is our interest in understanding the emission and production mechanisms for polarized rays in pulsars , grbs , and agns by measuring polarization of cosmic rays in this under - explored energy region ( mev ) .additionally , the polarization observations from the rotation - powered and accretion - powered pulsar radiation mechanisms could help to confirm the identification of black hole candidates .polarization measurements could reveal one of the possible effects induced by quantum gravity , the presence of small , but potentially detectable , lorentz or cpt violating terms in the effective field theory .these terms lead to a macroscopic birefringence effect of the vacuum ( see for more information ) .up to now , the highest energy linear polarization measurement has been for grb 061122 in the 250 - 800 kev energy range , and vacuum birefringence has not been observed in that region .therefore , extending polarization sensitivity to higher energies could lead to detection of vacuum birefringence , which would have an extraordinary impact on fundamental physics , or in the case of null detection we could significantly improve the present limits on the lorentz invariance violation parameter .further , according to the observations by the energetic gamma ray experiment telescope ( egret ) [ heasarc.gsfc.nasa.gov/docs/cgro/egret ] , the synchrotron emission of the crab nebula is significant in the energy below 200 mev .additionally , the theoretical studies state that most of the rays coming from the crab nebula around 100 mev may come from its inner knot , so the observations in the neighborhood of 100 mev will help to test this theoretical hypothesis and confirm the emission mechanism .furthermore , the observation of the rays from the crab pulsar provides strong evidence of the location of the ray emitting region as it lies in the center of the nebula .it is also worth mentioning that polarimetry could test the theories assuming existence of axions ( hypothetical particles introduced to solve the strong cp problem of qcd ) .it is interesting that the same axions or axion - like particles can serve as a foundation for a relevant mechanism of sun luminosity .a theoretical study has shown that polarization observations from grbs can be used to constrain the axion - photon coupling : for the axion mass ev .the limit of the coupling scales is ; therefore , the polarimetry of grbs at higher energies would lead to tighter constraints . 
in two of the following subsectionswe will briefly explain how polarization measurements are involved in confirming the emission mechanism and geometry of two above - mentioned sources .pulsars are a type of neutron star , yet they are highly magnetized and rotate at enormous speeds .the questions concerning the way magnetic and electric fields are oriented , how particles are accelerated and how the energy is converted into radio and rays in pulsars are still not fully answered .because of the extreme conditions in pulsars interiors , they can be used to understand poorly known properties of superdense , strongly magnetized , and superconducting matter . moreover , by studying pulsars one can learn about the nuclear reactions and interactions between the elementary particles under these conditions , which can not be reproduced in terrestrial laboratories .particle acceleration in the polar region of the magnetic field results in gamma radiation , which is expected to have a high degree of polarization .depending on the place where the radiation occurs , the pulsar emission can be explained in the framework of a polar cap model or an outer cap model . in both models ,the emission mechanism is similar , but polarization is expected to be dissimilar ; hence , polarimetry could be used to understand the pulsar s emission mechanism .polarization measurements would also help to understand grbs .the grbs are short and extremely bright bursts of rays .usually , a short - time ( from s to about s ) peak of radiation is followed by a long lasting afterglow .the characteristics of the radiation emitted during the short - time peak and during the afterglow are different .the number of high - energy photons which may be detected during the short - time burst phase is expected to be small compared with the one for the long - lived emission .while only about 3% of the energy emitted during the short - time burst is carried by high - energy photons with mev , the high energy photons of the afterglow carry about half of the total emitted energy . therefore , there is a possibility of observing polarization of high energy photons during the afterglow .the emission mechanism of grbs , the magnetic composition , and the geometry and morphology of grb jets are still uncertain but can be at least partly revealed in this way .it is worth noting that several studies have discussed how the degree of polarization , , depends on the grb emission mechanisms . in one example , using monte carlo methods toma showed that the compton drag model is favored when the degree of polarization , and - concerns the synchrotron radiation with the ordered magnetic fields model .moreover , studies by mundell and lyutikov have proven that polarimetry could assist in revealing the geometry of grb jets .several physical processes such as the photoelectric effect , thomson scattering , compton scattering , and electron - positron pair production can be used to measure photon linear polarization .polarimeters based on the photoelectric effect and thomson scattering are used at very low energies .compton polarimeters are commonly used for energies from 50 kev to a few mev . ) above an energy range of a few mev . 
]some of the major achievements in astrophysics that were obtained using polarimetry are : the discovery of synchrotron radiation from the crab nebula ; the study of the surface composition of solar system objects ; the measurement of the x - ray linear polarization of the crab nebula , which is still one of the best measurements of linear polarization for astrophysical sources ; mapping of solar and stellar magnetic fields ; detection of polarization in the cmb radiation ; and analysis of large scale galactic magnetic fields . the measurement of polarization in this high energy ray regime can be done by detecting the electron - positron pairs produced by rays and analysis of a non - uniformaty of event distribution in the electron - positron pair plane angle , as discussed in ref .however , implementation of this technique should consider limitations due to multiple coulomb scatterings in the detector , and there are no successful polarization measurements for astrophysical sources in the energy regime of interest in our paper . a number of missions have included cosmic ray observations , but only a few of them are capable of measuring polarization .the polarimetry measurements were mainly restricted to rays with low energies e 10 mev . as an example, the reuven ramaty high energy solar spectroscopic imager ( rhessi ) [ hesperia.gsfc.nasa.gov/rhessi3 ] , launched to image the sun at energies from 3 kev to 20 mev , was capable of polarimetry up to 2 mev , and the results were successfully used to study the polarization of numerous solar flares .the spi detector international gamma - ray astrophysics laboratory ( integral ) instrument [ sci.esa.int/integral ] has the capability of detecting polarization in the range of 3 kev to 8 mev .it was used to measure the polarization of grb 041219a , and later a high degree of polarization of rays from that source was confirmed .the tracking and imaging gamma ray experiment ( tigre ) compton telescope , which observes rays in the range of mev , can measure polarization up to 2 mev . recently ,morselli proposed gamma - light to detect rays in the energy range 10 mev 10 gev ,and they believe that it will provide solutions to all the current issues that could not be resolved by agile and fermi - lat in the energy range 10 200 mev .it can also determine the polarization for intense sources for the energies above a few hundred mev with high accuracy .in spite of the limitations of the instruments capability , there are numerous polarimetry studies in ray astrophysics , and various proposals have been put forth regarding medium and high energy ray polarimeters .for example , bloser proposed the advanced pair telescope ( apt ) , also a polarimeter , in the 50 mev 1 gev range . 
that proposal uses a gas - based time projection chamber ( tpc ) with micro - well amplification to track the , path .the polarization sensitivity was estimated by using geant4 monte carlo simulations .preliminary results indicated that it will be capable of detecting linearly polarized emissions from bright sources at 100 mev .as an updated version of the apt , hunter suggested the advanced energetic pair telescope ( adept ) for ray polarimetry in the medium energy range ; further , they mentioned that it would also provide better photon angular resolution than fermi - lat in the range of 5 to 200 mev .harpo is a hermetic argon tpc detector proposed by bernard which would have high angular resolution and would be sensitive to polarization of rays with energies in the mev - gev range .a demonstrator for this tpc was built , and preliminary estimates of the spatial resolution are encouraging .currently , the harpo team is finalizing a demonstrator set up to characterize a beam of polarized rays in the energy range of 2 76 mev .observations of the rays from the crab pulsar and crab nebula have been reported in ref . for eight months of survey data with fermi - lat .the pulsar dominates the phase - averaged photon flux , but there is an off - pulse window ( 35% of the total duration of the cycle ) when the pulsar flux is negligible and it is therefore possible to observe the nebular emission . according to the conducted analysis , the spectrum of the crab nebula in the - mev range can be described by the following combined expression : where the quantity is measured in representing the number of photons reaching of the detector area per second , per mev of energy .the energy on the right hand side is measured in gev .the prefactors and are determined by 35% of the total duration of the cycle , while and .the first and second terms on the right hand side , as well as the indices `` sync '' and `` ic '' , correspond to the synchrotron and inverse compton components of the spectrum , respectively . as one can see , these terms have different dependence on the energy since they represent different contributions to the total spectrum . the first part ( the synchrotron radiation )comes from emission by high energy electrons in the nebular magnetic field while the second part is due to the inverse compton scattering of the primary accelerated electrons . for conveniencelet us rewrite the expression for the spectrum of the crab nebula in the form where the energy on both sides is now measured in mev , so and . integrating , for the photon flux above mev coming from the crab nebula we obtain the number giving for the total cycle duration , or .at the same time the averaged spectrum of the crab pulsar is described in as follows : where , and the cut - off energy .as before , the energy on both sides is measured in mev . integrating this expression , for the photon flux above mev coming from the crab pulsar we obtain the number , or .thus , the pulsar s photon flux is twice as intensive as the nebula s. a fast photometer could be insert in the polarimeter instrumentation to collect events and have a temporal tag and consequently distinguish between nebula and pulsar photons , see e.g. 
.we will use the numbers above for an estimation of the polarimeter results at a 100 mev energy cut .for a 500 mev cut the statistics drops by a factor of five ( because is much higher the exponential factor does not play a role ) .it is worth noting that the estimates following from approximately agree with the corresponding estimates made in ( where the formulas ( 1 ) and ( 2 ) describe the synchrotron and inverse compton components of the crab nebula spectrum while the formula ( 3 ) describes the averaged crab pulsar spectrum ) .really , according to , the total crab nebula photon flux above mev is , while for the crab pulsar the value again reads .the photo production of an electron - positron pair in the field of nuclei is a well understood process which was calculated in qed with all details including the effect of photon linear polarization , see e.g. ref .the kinematics and variables of the reactions are shown in fig .[ fig : kinematics ] .the distribution of events over an azimuthal angle of a positron ( electron ) relative to the direction of an incident photon has the following form : + , where is the analyzing power , is the degree of the photon linear polarization , and is the angle of the photon linear polarization vector in the detector coordinate system . in practice , angle be used instead of because at the photon energies of interest the co - planarity angle .pair photo production ( left picture ) and the azimuthal angles in the detector plane from ref .the photon momentum is directed along the z axis .the photon polarization vector is parallel to the x axis .the angle is the angle between the photon polarization plane and the plane constructed by the momentum of the photon and the momentum of the positron ( the electron ) .the angle is called the co - planarity angle .the labels p and n indicate the positions of the crossings of the detector plane by the positron and the electron .the azimuthal angle between the polarization plane and the vector is a directly measurable parameter.,scaledwidth=65.0% ] the value of analyzing power was found to be a complicated function of the event parameters and detection arrangement .the numerical integration of the full expression could be performed for given conditions , see e.g. . in a high energy limit a compact expression for the integrated analyzing power for the pair photo - production from atomic electronwas obtained in ref . .the practical design and the test of the polarimeter for a beam of high energy photons were reported in ref .there we detected both particles of the pair and reconstructed the azimuthal angle of the pair plane ( see fig .[ fig : kinematics ] ) .the analyzing power , averaged over energy sharing between electron and positron and pair open angle of the experimental acceptance , has been found to be 0.116.002 , comparable to a 0.14 value as shown in fig .[ fig : asy_e+e - m ] reproduced from .when the pair components move through the converter , the azimuthal angle built on pair coordinates and pair vertex becomes blurred due to multiple scattering .it is useful to note that the purpose of development in ref . was a polarimeter for an intense photon beam .the thickness of the converter in the beam polarimeter was chosen to be very small to minimize systematics of the measurement of the photon polarization degree .however , the polarimeter could be calibrated by using the highly polarized photon beams produced in the laser - backscattering facilities . 
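the in-line expression for the azimuthal event distribution quoted above did not survive extraction; the standard form used in pair polarimetry, consistent with the three quantities named in the text, is n(φ) ∝ 1 + λ a cos 2(φ − φ0), with λ the degree of linear polarization, a the analyzing power and φ0 the polarization angle. the toy fit below, with arbitrary numbers, sketches how λ could be extracted from a measured φ distribution once a is known from calibration; it is an illustration, not the analysis of the beam polarimeter referenced above.

```python
# toy fit of the cos(2*phi) azimuthal modulation assumed above (numbers arbitrary).
import numpy as np
from scipy.optimize import curve_fit

def modulation(phi, n0, lam_a, phi0):
    # lam_a is the product (polarization degree) x (analyzing power); the two can
    # only be separated if the analyzing power is known from calibration or mc.
    return n0 * (1.0 + lam_a * np.cos(2.0 * (phi - phi0)))

rng = np.random.default_rng(1)
a_true, lam_true, phi0_true = 0.20, 0.5, 0.3            # assumed values for the toy
phi = rng.uniform(0.0, 2.0 * np.pi, 2_000_000)
accept = rng.uniform(0.0, 1.0 + a_true, phi.size) < \
         1.0 + lam_true * a_true * np.cos(2.0 * (phi - phi0_true))
counts, edges = np.histogram(phi[accept], bins=36, range=(0.0, 2.0 * np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(modulation, centers, counts,
                    p0=[counts.mean(), 0.1, 0.0],
                    sigma=np.sqrt(np.maximum(counts, 1.0)))
lam_est = popt[1] / a_true        # divide out the calibrated analyzing power
```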
for the cosmic ray polarimeter, we propose a larger converter thickness and calibration of the device .such an approach is more productive for cosmic rays studies where a relative systematic error on the polarization degree at the level of 3 - 5% is acceptable .let us also note that for the photon beam polarimetry there are additional options such as a coherent pair production in an oriented crystal and a magnetic separation of the pair components used many years ago in nuclear physics experiments .for the space - borne photon investigation , those polarimeters are not applicable for the obvious reasons of the limited angle range for the coherent effects and the large weight and power consumption of the magnetic system .an active converter with a coordinate resolution of a few microns would allow us to construct a dream device , a very efficient polarimeter .a real world active - converter device , a gas - filled tpc , has a spatial resolution of 100 m and much larger two - track resolution of 1.5 - 2 mm ( for a few cm long drift distance ) .such a polarimeter will be a very productive instrument for the photon energy range below 50 mev .however , because of these resolutions , it would be hard to measure the degree of polarization of photons whose energy is bigger than 100 mev .a polarimeter with separation of the converter and pair detector functions could benefit from the high coordinate resolution of the silicon msd of 10 - 15 m , its two - track resolution of 0.2 mm , and flexibility for the distance between a converter and pair hits detector : between them would be a vacuum gap .the key parameters of the polarimeter are the efficiency , , and analyzing power , . herewe outline the analysis of the figure - of - merit , .we will consider a polarimeter as a stack of individual flat cells , each of which is comprised of a converter with a two - dimensional coordinate readout and a coordinate detector for two - track events with no material between them .the thickness of the converter , where the photon produces the electron - positron pair , defines in the first approximation the polarimeter efficiency as follows : the efficiency of one cell is with is the thickness of the converter in units of radiation length .the is the reduction of the photon flux due to absorption in a single cell defined as , where is the thickness of the cell in units of radiation length , and is the number of cells in the device of length and geometrical thickness of the cell .the converter thickness needs to be optimized because above some thickness it does not improve the or the accuracy of the polarimeter result ( see the next section ) . as it is shown in fig .[ fig : asy_e+e - m ] , selection of the symmetric pairs ( ) provides an analyzing power while the averaged over pair energy sharing .however , the value of the is largest when the cut on pair energy sharing is relaxed .the practical case for the energy cut is , which allows us to avoid events with a low value of and most -electron contamination .the average value of for such a range of energy sharing is 0.20 .the energy of the particles and the shower coordinates could be measured by a segmented electromagnetic calorimeter or estimated from the width of the track , which due to multiple scattering is inversely proportional to the particle momentum . 
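the symbols in the efficiency expressions above were lost in extraction; the sketch below therefore uses the standard thin-converter approximation (pair-conversion probability per converter of roughly 7/9 of its thickness in radiation lengths, and exponential attenuation of the incident flux through the material of the preceding cells). it is only an order-of-magnitude illustration with assumed thicknesses, not a reproduction of eq. [eq:eff].

```python
# rough stack-efficiency estimate for an n-cell polarimeter (illustrative only;
# thicknesses in radiation lengths, thin-converter approximation).
import numpy as np

def stack_efficiency(n_cells, t_conv, t_cell):
    """total pair-production efficiency of a stack of identical cells.
    t_conv : converter thickness per cell (radiation lengths)
    t_cell : total material per cell along the photon direction (radiation lengths)"""
    eps_cell = (7.0 / 9.0) * t_conv               # per-cell conversion probability
    flux = np.exp(-t_cell * np.arange(n_cells))   # flux surviving to each cell
    return eps_cell * flux.sum()

# numbers loosely inspired by the msd option discussed in the text: a ~1 mm silicon
# converter plus two ~0.3 mm tracking planes per cell (silicon x0 ~ 93.7 mm), 46 cells.
x0_si = 93.7
print(stack_efficiency(46, t_conv=1.0 / x0_si, t_cell=1.6 / x0_si))
```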
in the photon energy range of interest, both electron and positron will pass through a large number of cells .determination of particle energy based on multiple scattering would provide % relative energy resolution , which is sufficient for the proposed cut .estimation of the particle energy could also be useful for rejection of the hits in msds induced by the -electrons .a monte carlo simulation was used to evaluate the general effects of pair production in the converter and the specific design of the polarimeter .we used a geant3-based mc code to study the photon detection efficiency and electron - positron pair azimuthal distribution in a wide range of the converter thickness up to 10% of radiation length . because both the pair opening angle and multiple scattering are scaled with the photon energy , the distributions are almost energy independent .we present first the results for the 100 mev photon energy for different thicknesses of the converter at a fixed distance of 20 mm between the converter end and the detector .we used a standard geant3 pair production generator for the unpolarized photons and at the conversion point introduced a weighting factor for each event as to simulate a polarization effect on an event - by - event basis .the value of azimuthal angle modulation at the pair production point was fixed at 0.20 , which is the average value of the analyzing power at production over the selected range of particle energies .the pair component propagation was realized in the mc , and the track parameters were evaluated .[ fig : mc100 ] shows the summary of mc results .the apparent optimum converter thickness is close to 1 mm for which the projected is reasonably large and the is close to the saturation limit .however , we are expecting that when -electron hits are included in analysis the optimum thickness for the 100 mev photon case will be smaller and the would be a bit lower .-0.05 in the coordinate detector allows determination of the opening angle between the pair components and the azimuthal angle of the pair plane relative to the lab coordinate system , the main variable for measurement of the photon polarization .such a detector is characterized by the coordinate resolution , , and the minimum two - track distance , , at which coordinates of two tracks could be determined with quoted accuracy .the is typically 2 mm for drift chamber .for tpc with micromega amplification stage and strip - type readout , is about 4 strips or 1.6 mm ( pitch equal to 400 ) . for silicon msd about 0.20 mm ( pitch equal to 50 ) .the opening angle between the pair components is on the order of , where is the photon energy and is the electron rest mass .the events with an opening angle larger than but less than provide most of the analyzing power , as it is shown in ref .it is easy to find that the resulting geometrical thickness of the cell is , whose numerical values are shown in tab .[ tab : cell_thickness ] . 0.1 in .the geometrical cell thickness , , in * cm * for the different detectors and photon energies .[ cols="^,^,^,^",options="header " , ] [ tab : cell_thickness ] for photon energies above 100 mev , the silicon msd is a preferable option because of the limit on the apparatus s total length . indeed , considering a 300 cm total length and an energy of 500 mev , the number of cells is 5 for the drift chamber option , 6 for the tpc / micromega , and 46 for the msd option . 
on another side ,the total amount of matter in the polarimeter should be limited to one radiation length or less , because of significant absorption of the incident photons which will reduce the average efficiency per cell .for example , in the msd option 54% absorption will occur with 46 cells ( 1 mm thickness 2d readout converter detector and two 0.3 mm thickness 1d readout track detectors ) .the detection efficiency ( pair production in the converters ) could be estimated as ( eq . [ eq : eff ] ) , which is about 34% for the selected parameters of the polarimeter . however , the useful statistics result is lower due to a cut on the pair components energies and contributions of the non - pair production processes , especially at low photon energy . for a photon energy of 100 mev ,the obtained efficiency is 0.28% per cell and the overall efficiency for the 46 cell polarimeter is 9% .the efficiency becomes significantly larger ( % ) for a photon energy of 1000 mev .assuming observation of the crab pulsar photon source with a 1 m detector for one year the total statistics of pairs ( above a 100 mev photon energy cut ) was estimated to be .for the projected analyzing power of 0.10 the statistical accuracy of the polarization measurement is for the msd detector option .realization of such high accuracy would require a prior calibration of the polarimeter at a laser - back scattering facility .the detectors of the cell of the msd - based polarimeter include two parts the first for the measurement of the x and y coordinates of each pair component and the second for the measurement of the x and y coordinates of the production vertex .the second part should be done using a two - dimensional readout msd because with a one - dimensional readout the most useful events will have only one coordinate of the vertex .the first part of the cell could be realized with a one- or two - dimensional readout .we consider below the two - dimensional readout for the third plane only .-0.05 in the thickness of the silicon plate is 0.3 ( 1.0 ) mm , and the readout strip pitch is 50(100 ) for the first two ( third ) planes .two first msds are rotated by 90 , so three planes allow determination of the coordinates in two - track events .the third plane will serve as a converter for the next cell .it provides both coordinates of the production vertex , see fig .[ fig : cells ] for a three cell example .the first two msds also serve as veto detector(s ) for the photon converted in the third msd .a proposed 46 cell structure in a 300 cm long polarimeter leads to a cell length 6.5 cm , which allows optimal coverage of a wide range of photon energies .a calorimeter will be used for crude ( 10 - 20% ) measurement of the photon energy ( combined energy of the pair ) .the configuration proposed above called for a 1 mm thickness msd with two - sided readout strips , which is twice as large as the maximum currently available from industry .we are nevertheless expecting that such an advance in technology could be made for the current project . in any casethe 1 mm converter could be replaced by two 0.5 mm converters .the projected results of polarization measurement are shown in fig . 
[fig : projected - result ] .the photon angular resolution of the proposed system could be estimated from the msd spatial resolution and thickness of the cell as 1 mrad for 100 mev photon energy .a ray polarimeter for astrophysics could be constructed using silicon msd technology .each of 46 cells will include one msd with 2d readout of 1 mm thickness and 0.1 mm pitch and two msds with 1d readout of 0.3 mm thickness and 0.05 mm pitch . using a total of 138 m area of msd ( 46 cells ) and readout channels ( assuming a factor of 10 multiplexing ) the polarimeter would provide a device with 9 - 15% photon efficiency , a 0.10 analyzing power , and a 1 mrad angular resolution . in a year - long observation , the polarization of the photons from the crab pulsarwould be measured to 6% accuracy at an energy cut of 100 mev and % accuracy at an energy cut of 1000 mev .the authors are grateful to s.d .hunter for stimulating and fruitful discussion .we would like to acknowledge contributions by v. nelyubin and s. abrahamyan in the development of the mc simulation .this work is supported by nasa award nnx09av07a and nsf crest award hrd-1345219 . t. young , phil .royal society london , * 92 * 12 ( 1802 ) .a. michelson and e. morley , american journal of science * 34 * , 333 ( 1887 ) . c. n. yang , phys .rev . * 77 * , 722 ( 1950 ) ; j. h. berlin and l. madansky , phys .rev . * 78 * , 623 ( 1950 ) . c. aidala , s. bass , d. hasch , and g. mallot , rev .* 85 * , 655 ( 2013 ) , arxiv : hep - ph/1209.2803 .p. f. bloser _ et al ._ , the mega project : science goals and hardware development , new astronomy reviews * 50 * , 619 ( 2006 ) .d. bernard and a. delbart , nucl .instr . meth . a * 695 * , 71 ( 2012 ) ; d. bernard , nucl .instr . meth .a * 729 * , 765 ( 2013 ) , arxiv : astro - ph/1307.3892 .f. lei , a.j . dean and g.l .hills , space science reviews * 82 * , 309 ( 1997 ) .m.l . mcconnell and j.m .ryan , new astronomy reviews * 48 * , 215 ( 2004 ) .m.l . mcconnell and p.f .bloser , chinese journal of astronomy and astrophysics * 6 * , 237 ( 2006 ) .h. krawczynski _ et al ._ , astroparticle physics * 34 * , 550 ( 2011 ) . w. hajdas and e. suarez - garcia , polarimetry at high energies , in photons in space : a guide to experimental space astronomy , eds . m.c.e .huber _ et al ._ , 599 ( springer , 2013 ) .m. pohl , particle detection technology for space - borne astro - particle experiments , arxiv : physics/1409.1823 .m. dovciak _et al . _ ,mnras * 391 * , 32 ( 2008 ) , arxiv : astro - ph/0809.0418 .s. r. kelner , soviet journal of nuclear physics * 10 * , 349 ( 1970 ) ; s.r .kelner , yu.d .kotov , and v.m .logunov , soviet journal of nuclear physics * 21 * , 313 ( 1975 ) .m. l. mcconnell _ et al ._ , in aas / solar physics division meeting 34 , bulletin of the american astronomical society , vol .35 , p. 850( 2003 ) .e. kalemci _et al . _ ,* 169 * , 75 ( 2007 ) , arxiv : astro - ph/0610771 .a. morselli __ , nuclear physics b , proc. supp . * 239 * , 193 ( 2013 ) , arxiv : astro - ph/1406.1071 .et al . _ ,astroparticle physics * 59 * , 18 ( 2014 ) .d. bernard _ et al ._ , harpo : a tpc as a gamma - ray telescope and polarimeter , arxiv : astro - ph/1406.4830 ; d. bernard , harpo : a tpc as a high - performance -ray telescope and a polarimeter in the mev - gev energy range , conseil scientifique du labex p2io , 17 december 2014 ._ , the astrophysical journal * 708 * , 1254 ( 2010 ). f. 
meddi _ et al ._ , publications of the astronomical society of the pacific , volume 124 , issue 195 , pp.448 - 453 ( 2012 ) ; f. ambrosino _ et al ._ , journal of the astronomical instrumentation , volume 2 , issue 1 , i d . 1350006 ; f. ambrosino _ et al ._ , proceedings ot the spie , volume 9147 , i d .91478r 10 pp .r. buehler _ et al ._ , the astrophysical journal * 749 * , 26 ( 2012 ). h. olsen and l. c. maximon , phys .rev . * 114 * , 887 ( 1959 ) ; l. c. maximon and h. olsen , phys . rev . * 126 * , 310 ( 1962 ) .b. wojtsekhowski , d. tedeschi , and b. vlahovic , nucl .instr . meth .a * 515 * , 605 ( 2003 ) .v. boldyshev and y. peresunko , yad .* 14 * , 1027 ( 1971 ) , translation in the soviet journal of nuclear physics , * 14(5 ) * , 576 ( 1972 ) . c. de jager _j. a * 19 * , 275 ( 2004 ) ; arxiv : physics/0702246 .
a high - energy photon polarimeter for astrophysics studies in the energy range from 20 mev to 1000 mev is considered . the proposed concept uses a stack of silicon micro - strip detectors where they play the roles of both a converter and a tracker . the purpose of this paper is to outline the parameters of such a polarimeter and to estimate the productivity of measurements . our study supported by a monte carlo simulation shows that with a one - year observation period the polarimeter will provide 6% accuracy of the polarization degree for a photon energy of 100 mev , which would be a significant advance relative to the currently explored energy range of a few mev . the proposed polarimeter design could easily be adjusted to the specific photon energy range to maximize efficiency if needed .
the past decade has seen a renewed interest in low frequency radio astronomy with a strong focus on cosmology with the highly redshifted 21 cm line of neutral hydrogen .numerous facilities and experiments are already online or under construction , including the giant metre - wave radio telescope ( gmrt ; ) , the low frequency array ( lofar ; ) , the long wavelength array ( lwa ; ) and the associated large aperture - experiment to detect the dark ages ( leda ) experiment , the cylindrical radio telescope ( crt / baoradio , formerly hshs , ) , the experiment to detect the global eor step ( edges ; ) , the murchison widefield array ( mwa ; ) , and the donald c. backer precision array for probing the epoch of reionization ( paper ; ). 21 cm cosmology experiments will need to separate bright galactic and extragalactic foregrounds from the neutral hydrogen signal , which can be fainter by as much as 5 orders of magnitude or more ( see , e.g. , and ) .as such , an unprecedented level of instrumental calibration will be necessary for the detection and characterization of the 21 cm signal . achieving this level of calibration accuracy is complicated by the design choice of many experiments to employ non - tracking antenna elements ( e.g. lwa , mwa , lofar and paper ) .non - tracking elements can provide significant reductions in cost compared to traditional dishes , while also offering increased system stability and smooth beam responses .however , non - tracking elements also present many calibration challenges beyond those of traditional radio telescope dishes .most prominently , the usual approach towards primary beam calibration pointing at and dithering across a well - characterized calibrator source is not possible . instead , each calibrator can only be used to characterize the small portion of the primary beam it traces out as it passes overhead .additionally , the wide fields of view of many elements make it non - trivial to extract individual calibrator sources from the data ( see , e.g. , and for approaches to isolate calibrator sources ) .finally , many of these arrays use dipole and tile elements , the response of which are not easily described by simple analytic functions . in this paper , we present a method for calibrating the primary beams of non - tracking , wide - field antenna elements using astronomical sources .we illustrate the technique using both simulated and observed paper data from a 12 antenna array at the nrao site in green bank , wv .paper is an interferometer operating between 100 and 200 mhz , targeted towards the highly - redshifted 21 cm signal from the epoch of reionization .although the cosmological signal comes from every direction on the sky , an accurate primary beam model will be necessary to separate the faint signal from bright foreground emission . in this work, we use a subset of the brightest extragalactic sources to calibrate the primary beam . because interferometers like paper are insensitive to smooth emission on large scales , these extragalactic sources are easily detectable despite the strongly increasing brightness of galactic synchrotron emission at low radio frequencies . 
only the measured relative flux densities of each sourceare needed to create a beam model , but to facilitate comparision with other catalogs , we use the absolute spectrum of cygnus a from to place our source measurements on an absolute scale .the structure of this paper is as follows : in [ motivation ] we motivate the problem and the need for a new approach to primary beam calibration for wide - field , drift - scanning elements . in [ methods ] we present our technique for primary beam calibration .we show the results of applying the method to simulated and actual observations in [ sims ] and [ data ] , respectively , and we conclude in [ conclusions ] .for non - tracking arrays with static pointings such as paper , every celestial source traces out a repeated source track " across the sky , and across the beam , each sidereal day . the basic relationship between the perceived source flux density ( which we shall call a measurement , ) measured at time and the intrinsic source flux density ( ) is : where is the response of the primary beam toward the time - dependent source location .if the inherent flux density of each source were well - known , it would be straightforward to divide each by to obtain along the source track . to form a complete beam model , onewould then need enough well - characterized sources to cover the entire sky .in the 100 - 200 mhz band , however , catalog accuracy for most sources is lagging behind the need for precise beam calibration , which in turn is necessary for generating improved catalogs . without accurate source flux densities ,both and are unknowns , and equation [ basicmeas ] is underconstrained .the problem becomes tractable if several sources pass through the same location on the sky , and therefore are attenuated by the same primary beam response .however , the density of bright sources at low frequencies is insufficient to relate sources at different declinations .additional information is necessary to break the degeneracy between primary beam response and the inherent flux densities of sources .one way to overcome the beam - response / flux - density degeneracy described above is to assume rotational symmetry in the beam . under this assumption ,each source creates two source tracks across the sky : one corresponding to the actual position of the source , and the other mirrored across the beam center . under this symmetry assumption , tracks overlap at crossing points , " as schematically illustrated in figure [ trackexample ] for two sources : cygnus a and virgo a. and steps in elevation and azimuth , respectively ._ right : _ the tracks of cygnus a and virgo a , and their rotated images overlaid .there are now 6 crossing points .these are the locations where the beam and source parameters can be solved for independently . ] at a crossing point , there are only 3 unknowns , since each source is illuminated by the same primary beam response , . since two circles on the sky cross twice (if they cross at all ) , there will be two independent relations that together provide enough information to constrain both the primary beam response at those points and the flux densities of sources passing through them ( relative to an absolute flux scale ) . by using multiple sources ,it is possible to build a network of crossing points that covers a large fraction of the primary beam .furthermore , such a network allows data to be calibrated to one well - measured fiducial calibrator . 
for observations with the green bank deployment of paper , source fluxesare related to cygnus a from .in the rest of this section , we discuss the algorithmic details of how this approach to primary beam calibration is implemented .these steps are : 1 .extracting measurements of perceived source flux densities versus time from observations ( [ pfluxes ] ) , 2 .entering measurements into a gridded sky and finding crossing points ( [ gridding ] ) , 3 . solving a least - squares matrix formulation of the problem ( [ lstsqs ] ) , and , finally , 4 . deconvolving the irregularly sampled sky to create a smooth beam ( [ deconvolution ] ) .we also discuss prior information that can be included to refine the beam model in [ priors ] .the principal data needed for this approach are measurements of perceived flux density versus time for multiple calibrator sources as they drift through the primary beam . in practice ,any method of extracting perceived individual source flux densities ( such as image plane analysis ) could be used ; the beam calibration procedure is agnostic as to the source of these measurements . in this work ,we use delay / delay - rate ( ddr ) filters to extract estimates of individual source flux densities as a function of time and frequency .the frequency information can be used to perform a frequency - dependent beam calibration if the snr in the observations is high enough .these filters work in delay / delay - rate space ( the fourier dual of frequency / time space ) to isolate flux density per baseline from a specific geometric area on the sky ; for a full description of the technique , the reader is referred to .the first step of our pipeline to produce perceived flux density estimates is to filter the sun from each baseline individually in ddr space .this is done to ensure that little to no signal from the sun , which is a partially - resolved and time - variable source , remains in the data .( in principle , data from different observing runs separated by several months could provide complete sky coverage while avoiding daytime data altogether .we chose to use data from only one 24 hour period to minimize the effect of any long timescale instabilities in the system ) .a markov - chain monte carlo ( mcmc ) method for extracting accurate time- and frequency - dependent source models via self - calibration using ddr - based estimators is then used to model the 4 brightest sources remaining : cygnus a , cassiopeia a , virgo a , and the crab nebula ( taurus a ) .the mcmc aspect of this algorithm iterates on a simultaneous model of all sources being fit for to minimize sidelobe cross - terms between sources . after removing the models of cygnusa , cassiopeia a , virgo a and taurus a from the data , a second pass of the mcmc ddr self - calibration algorithm extracts models of the remaining 22 sources listed in table [ srctable ]. we can increase the signal - to - noise at a crossing point by combining all measurements within a region over which the primary beam response can be assumed to be constant . to define these regions , we grid the sky . 
in this work ,the beam model is constructed on an healpix map with pixels on a side .the choice of grid pixel size is somewhat arbitrary .using a larger pixel size broadens the beam coverage of each source track , creating more crossing points and helping to constrain the overall beam shape .however , when the pixels are too large , each pixel includes data from sources with larger separations on the sky .since the principal tenet of this approach is that each source within a crossing point sees the same primary beam response , excessively large pixels can violate this assumption , resulting in an inaccurate beam model . for paper data , a healpix grid with pixelsis found to be a good balance between these competing factors , as will be explained in [ beamsims ] .for other experiments with narrower , more rapidly evolving primary beams , smaller pixels may be necessary . to introduce our measurements of perceived flux density into the grid , we first recast equation [ basicmeas ] into a discrete form : where and are the respective perceived source flux densities and primary beam responses in the pixel , and is a source index labelling the inherent source flux density , . to generate a single measurement of a source for each pixel , we use a weighted average of all measurements of that source falling with a single pixel .the weights are purely geometric and come from interpolating the measurement between the four nearest pixels .once all the data are gridded , we solve equation [ healmeas ] for all crossing pixels simultaneously . to do this, we set up a linearized least - squares problem using logarithms : because thermal noise in measurements becomes non - gaussian in equation [ logmeas ] , the solution to the logarithmic formulation of the least - squares problem is biased . in [ beamsims ], we investigate the effect of this bias using simulations and find that the accuracy of our results is not limited by this bias , but instead by sidelobes of other sources .therefore , while equation [ basicmeas ] can in principle be solved without resorting to logarithms using an iterative least - squares approach , we find that this is not necessary given our other systematics .once the logarithms are taken , we can construct a solvable matrix equation , which can be expressed generally as : where is a column - matrix of weights , is a column - matrix of measurements , is the matrix - of - condition , describing which measurements are being used to constrain which parameters , and is a column matrix containing all the parameters we wish to measure : the beam responses and the source flux densities . to illustrate the form of this equation , we present the matrix representation of the cygnus a / virgo a system shown in figure [ trackexample ] : on the left - hand side of the equation are the two column matrices and .the weights , , are defined in [ weighting ] . contains logarithms of the perceived source flux density measurements in each pixel , .recall that each corresponds to one source . 
on the right - hand side, the weighting column matrix appears again , followed by the matrix - of - condition , and then , a column matrix containing the parameters we wish to solve for : the primary beam response at the 6 crossing points and the flux densities of virgo and cygnus .the matrix - of - condition , , identifies which sources and crossing points are relevant for each equation .we have schematically divided it : to the left of the vertical line are the indices used for selecting a particular crossing point ; to the right are those for the sources .the first 4 lines represent the two northern cygnus / virgo crossing points , and the next four represent the two southern ones ( which are identical copies of the northern ones ) .finally , the last 4 lines represent the 2 points where virgo crosses itself ; notice that the cygnus source column is blank for these 4 rows .it should be noted that equations [ matrix2 ] and [ matrix1 ] contains no absolute flux density reference . the simplest way to setthis scale is to treat all the recovered flux densities as relative values compared to the flux calibrator ( cygnus a in the case presented here ) .one can then place all the flux densities onto this absolute scale .an equally valid approach is to append an extra equation with a very high weight , which sets the flux calibrator to its catalog flux value . in a least - squares approach ,optimal weights are inversely proportional to the variance of each measurement . to calculate the variance of each measurement , we must propagate the uncertainty in the initial perceived flux density measurements through the averaging and logarithm steps .the noise level in each interferometric visibility is roughly constant , and the ddr filters average over a fixed number of visibilities for each perceived flux density estimate , leading to equal variance at each time sample . to produce optimal weights accounting for the many time samples averaged into each beam pixel and the propagation of noise through the logarithm , each logarithmic measurement in equation [ matrix2 ]should be weighted by : where indexes the time step ( which is generally fast enough to produce many measurements inside a pixel ) , is the geometric sub - pixel interpolation weighting of each measurement , and is the weighted average of all measurements of a source s flux density in pixel . without the square root ,these weights would be proportional to the inverse of the variance in each logarithmic measurement ; the sum over the geometric weights is the standard reduction of variance for a weighted average , and the factor of comes from propagating variances through the logarithm .the square root appears because a least - squares solver using matrix inversion will add in an additional factor of the weight , leading to the desired inverse - variance weighting .other solvers using different methods may require different weights .the solution of equation [ matrix2 ] , the matrix , contains two distinct sets of parameters : the beam responses at crossing points and the flux densities of each source ( modulo an absolute scale ) .there is additional information that may be included to improve the beam model beyond that generated by solving for the responses at crossing points .given the flux - density solutions for each source , each source track now provides constraints on beam pixels that are not crossing points . 
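a compact sketch of how the weighted, log-linear crossing-point system of equation [matrix2] might be assembled and solved is given below; the array layout, the way the absolute flux scale is pinned to a reference source, and all names are illustrative rather than a description of the paper pipeline.

```python
# illustrative solver for the log-linear crossing-point system (not the paper pipeline).
# unknowns: log(beam response) for each crossing pixel and log(flux) for each source.
import numpy as np

def solve_crossing_points(measurements, n_pix, n_src, ref_src=0, ref_flux=1.0):
    """measurements: iterable of (pixel_index, source_index, perceived_flux, weight).
    returns (beam_response[n_pix], source_flux[n_src]); the absolute scale is set by
    pinning source ref_src to ref_flux through a heavily weighted extra equation."""
    rows, rhs, wts = [], [], []
    for p, s, m, w in measurements:
        row = np.zeros(n_pix + n_src)
        row[p] = 1.0                    # coefficient of log b_p
        row[n_pix + s] = 1.0            # coefficient of log s_s
        rows.append(row)
        rhs.append(np.log(m))
        wts.append(w)
    row = np.zeros(n_pix + n_src)       # extra equation fixing the flux scale
    row[n_pix + ref_src] = 1.0
    rows.append(row); rhs.append(np.log(ref_flux)); wts.append(1e6)

    a = np.array(rows); b = np.array(rhs); w = np.sqrt(np.array(wts))
    x, *_ = np.linalg.lstsq(a * w[:, None], b * w, rcond=None)
    return np.exp(x[:n_pix]), np.exp(x[n_pix:])

# example for the schematic cygnus/virgo system of figure [trackexample]:
# 6 crossing pixels, 2 sources; measurements would be filled from the gridded tracks.
# beam, fluxes = solve_crossing_points(meas, n_pix=6, n_src=2, ref_src=0)
```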
by dividing eachperceived flux density source track by the estimated inherent flux density from the least - squares inversion , we produce a track of primary beam responses , with greater coverage than that provided by crossing points alone .we illustrate the difference in coverage in figure [ cross_v_track ] .used to calibrate the paper primary beam .the projection is orthographic with zenith at the center ; dotted lines are and steps in elevation and azimuth , respectively . _right : _ the sky coverage of all 25 source tracks .although the least - squares inversion solves only for the beam response at crossing points , we can include all source track data using the recovered flux density of each source to create a primary beam estimate along the entire track . ]the left hand panel shows the locations of crossing points for the 25 sources used to calibrate the paper beam , listed in table [ srctable ] .the right hand panel shows the increased beam coverage that comes from including non - crossing point source - track data .to create an initial beam model , we average within each pixel the estimated beam responses from each source , weighting by the estimated flux of that source . given equal variance in each initial perceived flux density measurement , this choice of weights is weighting by signal to noise . to produce a model of the beam response in any direction , we must fill in the gaps left by limited sky sampling .we use a clean - like deconvolution algorithm in fourier space to fill in the holes in the beam .we iteratively fit and remove a fraction of the brightest fourier component of our beam until the residuals between the model and the data are below a specified threshold .in addition to measured beam responses derived from source tracks , we add the constraint that the beam must go to zero beyond the horizon . in the deconvolution, each pixel is weighted by the estimate of the beam response in that pixel , reflecting that snr will be highest where the beam response is largest .this beam - response weighting was unnecessary in previous steps , since the least - squares approach solved for each pixel independently .this weighting scheme again represents signal - to - noise weighting , given the equal variance in our initial perceived flux density estimates .the result of the deconvolution is an interpolated primary beam model with complete coverage across the sky . up to this point ,we have made only two fairly weak assumptions about our beam : that it possesses rotational symmetry , and that the response is zero below the horizon . however ,if we have additional prior knowledge about our beam , we can better constrain the final model . in particular, we can use beam smoothness constraints to identify unphysically small scale features introduced by sidelobes in the source extraction or by incomplete sampling in the deconvolution .we choose to incorporate additional smoothness information by filtering our model in fourier space to favor large - scale modes .paper dipoles were designed with emphasis on spatial and spectral smoothness in primary beam shape . 
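As an illustration of the Fourier-domain smoothing just mentioned, the sketch below low-pass filters a gridded beam image by discarding modes beyond a cutoff radius. This is a simplified stand-in for the actual procedure: the cutoff used in this work is tied to the power spectrum of an electromagnetic beam simulation, whereas here it is only a placeholder parameter, and the deconvolution step is assumed to have already produced a filled-in image.

```python
import numpy as np

def smooth_beam_fourier(beam_image, keep_fraction=0.05):
    """Low-pass filter a gridded beam image, retaining only large-scale modes.

    beam_image   : 2-d array of the interpolated beam (e.g. an orthographic
                   projection of the HEALPix model), zero beyond the horizon
    keep_fraction: fraction of the Fourier radius retained; a placeholder for
                   the cutoff derived from the electromagnetic beam model.
    """
    F = np.fft.fftshift(np.fft.fft2(beam_image))
    ny, nx = beam_image.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2.0, x - nx / 2.0)
    cutoff = keep_fraction * min(nx, ny) / 2.0
    F[r > cutoff] = 0.0                     # discard small-scale modes
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```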
to smooth the paper beam model, we choose a cutoff in fourier space that corresponds to the scale at which of the power is accounted for in a computed electromagnetic model of the beam .while such a filter is not necessarily generalizable to other antenna elements , we find it necessary to suppress the substantial sidelobes associated with observations from a 12-antenna paper array that are discussed below .to test the robustness of this approach , we apply it to several simulated data sets .the results of these simulations , with and without gaussian noise , are described below in [ beamsims ] .we also simulate raw visibility data to test the effectiveness of source extraction ; these simulations are discussed in [ vissims ] . the major difference between these two methods of simulationis that the simulation using raw visibility data allows for imperfect source isolation , leading to contamination of the source tracks by sidelobes of other sources .these sidelobes have a significant effect on the final beam model that is derived .we simulate perceived flux density tracks using several model beams of different shapes , including ones with substantial ellipticity and an rotation around zenith .we also input several source catalogs , including a case with jy sources spaced every degree in declination , and a case using the catalog values and sources listed in table [ srctable ] that approximately match the sources extracted from observations . in all combinations of beam models and source catalogs ,we recover the input beam and source values with error .the average error in source flux density is 2.5% .we see no evidence for residual bias , as the distribution of error is consistent with zero mean to within one standard deviation .we also test the effect of adding various levels of gaussian noise to a simulation involving a fiducial beam model and the 25 sources listed in table [ srctable ] .only when the noise level exceeds an rms of 10 jy in each perceived flux density measurement does the mean error in the solutions exceed 10% .as the noise increases beyond this level , there is a general trend to bias recovered flux densities upward ; as mentioned earlier , this bias is introduced by the logarithms in equation [ logmeas ] .however , the expected corresponding noise level of ddr - extracted perceived flux density measurements from our 12-element paper array is jy per sample . 
at a simulated rms noise level of 1 jy, the solutions are recovered with a mean error of .this result validates our previous statement that the bias introduced by the logarithms in equation [ logmeas ] is not a dominant source of error .it is also worth noting that these simulations were used to identify as the best healpix pixel side for our grid ; with too large a pixel size ( ) the model becomes significantly compromised .this results from combining measurements of sources that are subject to significantly different primary beam responses .we choose not to use a smaller pixel size , since it will reduce the fractional sky coverage of our crossing points and increase the computational demand of the algorithm .we also apply this technique to simulated visibilities in order to test the complete analysis pipeline , including the ddr - based estimation of perceived source flux densities .the visibility simulations are implemented in the aipy software toolkit .the simulated observations correspond to actual observations made with the 12-element paper deployment in green bank described in we match the observations in time , antenna position , and bandwidth .we also include the expected level of thermal noise in the simulated visibilities . we simulate perceived " visibilities by attenuating the flux density of each source by a model primary beam . the ddr filters return estimates of perceived source flux density for the input primary beam . in these simulations , we only include bright point sources and a uniform - disk model of the sun . as a result, source extraction is expected to be more accurate in simulation than in real data , since the sources we extract account for of the simulated signal .however , these simulations do provide a useful test of ddr - filter - based source extraction and of the level of contamination from sidelobes . as might be expected ,the estimates of perceived source flux density versus time from simulated visibilities contain structure that is not attributable to the beam .these features are almost all due to sidelobes of other sources ; they persist in real data and are reproducible over many days .figure [ trackcompare ] shows the difference in the source tracks produced for cassiopeia a by each simulation method ( cases 1 and 2 in the figure ) , as well as the track extracted from the observed data described [ data ] ( case 3 ) .we find that the ddr filter only extracts a fraction of the flux density from the crab nebula .this bias is unique to crab , and is most likely a result of the proximity of the sun to crab in these observations , coupled with the limited number of independent baselines in the 12-antenna array .for this reason , we choose to exclude the crab from our final analysis . performing the least - squares inversion described in [ lstsqs ] on the simulated source tracks , we find that the sidelobes introduce errors into estimates of source flux densities .the average error in flux density is 10% , with the largest errors exceeding 20% .the distribution of errors is consistent with zero mean , indicating no strong biasing of the flux densities .we also find that the resultant primary beam model matches the input beam to within 15% percent .however , the model contains small scale variations , as seen in figure [ filter ] . fills in gaps in sky coverage , but leaves small scale structure that is not associated with the primary beam ( gray curve ) . 
by filtering the solution in the fourier domain to incorporate prior knowledge of beam smoothness , one can achieve an improved beam model ( black curve ) . ] using prior knowledge of beam smoothness , we can reject these features as unphysical and apply a fourier - domain filter , as described in [ priors ] .this filter reduces the effect of sidelobes in the source tracks , as they appear on fourier scales that are not allowed by smoothness constraints .even with a relatively weak prior on smoothness ( i.e. , retaining more fourier modes than are present in our input model ) , we substantially reduce the small scale variations in our beam .the effect of the filter on small scale variations is illustrated in figure [ filter ] .the filter also reduces large scale errors in the model , bringing the overall agreement to with 10% of the input beam . in summary , we find that although interfering sidelobes from other sources compromise our source flux density measurements , our approach recovers the input primary beam model at the 15% level . with the introduction of a fourier space filter motivated by prior knowledge of the beam smoothness, the output model improves to within 10% of the input .it is also worth noting that the effectiveness of the beam calibration technique presented here will substantially improve with larger arrays , where sidelobe interference will be reduced and source tracks will more closely resemble the ideal case discussed in [ beamsims ] .in this section , we describe the application of this technique to 24 continuous hours of data taken with the paper array deployed at the nrao site near green bank from july 2 to july 3 , 2009 . at this time ,the array consisted of 12 crossed - dipole elements . only the north / south linear polarization from each dipole was correlated .the data used in this analysis were observed between 123 to 170 mhz ( avoiding rfi ) and were split in 420 channels .the array was configured in a ring of 300 m radius .the longest baselines are 300 m , while the shortest baseline is 10 m ; the configuration is shown in figure [ antpos ] .this configuration gives an effective image plane resolution of .in addition to the ddr - filter algorithm described in [ pfluxes ] , several other pre - processing steps are necessary .per - antenna phase and amplitude calibration are performed by fringe fitting to cygnus a and other bright calibrator sources .we also use a bandpass calibration based on the spectrum of cygnus a. this calibration is assumed to be stable over the full 24 hours .two further steps in the reduction pipeline were first described in : gain linearization to mitigate data quantization effects in the correlator , and rfi excision .new to this work is the use of the cable and balun temperatures to remove time - dependent gains caused by temperature fluctuations .a thirteenth dipole was operated as the gain - o - meter " described in . 
in brief , the gain - o - meter " is an antenna where the balun is terminated on a matched load , rather than a dipole .the measured noise power from this load tracks gain fluctuations in the system .we record the temperature of several components of our gain - o - meter " using the system described in .we find a strong correlation between the absolute gain of our system and the temperature of these components ; this effect is illustrated in figure [ tdepgains ] .we correct for these gain changes by applying a linear correction derived from the measured temperature values .this step is crucial for the success of the beam calibration ; without correcting for them , these gain drifts are indistinguishable from an east / west asymmetry in the primary beam . after these steps , we process the data with the ddr - filtering algorithm to produce estimates of perceived flux density versus time for the 25 sources listed in table [ srctable ] .the first estimate of the paper primary beam derived with this approach is shown in figure [ dirtybeam ] . , dotted lines are and steps in elevation and azimuth , respectively . ]this figure is produced by dividing each source track by an estimate of its inherent flux density produced by the least squares inversion .this transforms each track into an estimate of the primary beam response , which are the added together into a healpix map .there are significant fluctuations from pixel to pixel that are clearly unphysical , and relate to source sidelobes .there are also significant gaps in beam coverage , even with 25 source tracks .figure [ cleanbeam ] show the results after deconvolving the sampling pattern from figure [ dirtybeam ] . , dotted lines are and steps in elevation and azimuth , respectively .this model was produced by deconvolving the sampling pattern from figure [ dirtybeam ] , interpolating over the gaps in declination where there are no strong calibrator sources . ]the beam is now complete across the entire sky , although there is still substantial small - scale structure unrelated to the inherent beam response .we suppress these features in our final model by retaining only the large scale fourier components , as described in [ deconvolution ] .this final model is shown in figure [ filterbeam ] ., the grayscale colors are linear and show the primary beam response , normalized to 1.0 at zenith .solid black lines are contours of constant beam response in 10% increments ; the thick black line is the half - power point .dotted lines are and steps in elevation and azimuth , respectively .this smooth model was produced by retaining only the large scale fourier components from figure [ cleanbeam ] .the cutoff in fourier space was derived from a computed electromagnetic paper beam simulation .this fourier mode selection substantially smooths out variations in the initial solution that are unphysical based on the smoothness of beam response in the simulation . ]it is clear that with our deconvolution and choice of fourier modes , we recover a very smooth , slowly evolving beam pattern , as expected from previous models of the paper primary beam .the measured flux densities of the calibrator sources , produced by the least - squares inversion , are presented in table [ srctable ] .the overall flux scale is normalized to the value of cygnus a reported in .error - bars are estimated by measuring the change in flux necessary to increase between the model and the measured perceived source flux density tracks by 1.0 . 
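The error-bar procedure mentioned above can be sketched as follows, under the assumption that the elided misfit statistic is a chi-squared between the measured perceived-flux track and the model (beam response times a trial flux density) with the quoted equal per-sample variance; the one-sigma interval is then the flux range over which chi-squared rises by 1.0 from its minimum. Names and the scan range are illustrative.

```python
import numpy as np

def flux_error_bar(pflux_meas, beam_model, sigma, s_best, span=0.5, n=2001):
    """Error bar from the delta-chi-squared = 1 criterion (assumed statistic).

    pflux_meas : measured perceived flux densities along the source track
    beam_model : primary beam response at the same track points
    sigma      : assumed per-sample noise level (equal variance)
    s_best     : best-fit inherent flux density from the inversion
    """
    def chi2(s):
        return np.sum(((pflux_meas - s * beam_model) / sigma) ** 2)

    chi2_min = chi2(s_best)
    trial = np.linspace(s_best * (1 - span), s_best * (1 + span), n)
    dchi2 = np.array([chi2(s) for s in trial]) - chi2_min
    inside = trial[dchi2 <= 1.0]          # flux values within delta-chi2 of 1
    return 0.5 * (inside.max() - inside.min())   # symmetric 1-sigma estimate
```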
as noted in [ vissims ] , these flux densities can be compromised by sidelobes of other sources. however , the results are generally accurate to within .we perform several tests to investigate the validity of our results .the comparison of our recovered fluxes to their catalog values shows generally good agreement .as argued in and , there is considerable difficulty in accurately comparing catalogs , particularly at these frequencies .differences in interferometer resolution and synthesized beam patterns can lead to non - trivial disagreement in measured source flux densities .given this fact , and the complicated sidelobes associated with a 12-element array present in our method , we do not find the lack of better agreement with catalog fluxes troubling . to test the stability of our pipeline, we test a separate observation spanning the following day .the beam model produced by this data matches our first model to within 2.5% , and source flux densities remain consistent to 5% .this test confirms our suspicion that our errors are not dominated by random noise .we have presented a new technique for calibrating the primary beam of a wide - field , drift - scanning antenna element .the key inputs to the method are measurements of perceived flux density of individual sources versus time as they drift through the beam . in this paper, we use delay / delay - rate filters as estimators in a self - calibration loop to extract such measurements from raw interferometric data . however , the remainder of our method is agnostic to how these tracks are derived .the only assumption necessary to make this an over - constrained and solveable problem is rotational symmetry in the beam . with this assumption , we create crossing points " where there is enough information for a least - squares inversion to solve for both the inherent flux density of each source and the response of primary beam at the crossing points .we test this approach using simulated tracks of perceived source flux density across the sky and simulated visibilities . using the source tracks, the least - squares inversion reliably recovers the primary beam values and source flux densities in the presence of noise 10 times that present in a 12-element paper array .the simulated visibilities demonstrate that sidelobes of other bright sources are the source of the dominant errors in source extraction when only 12 antennas are used .the presence of these features in the perceived source flux densities limits the accuracy of estimates of inherent source flux densities .however , in simulation , we are able to recover a primary beam model accurate to within 15% percent and source flux densities with an average error of 10% . using prior information regarding beam smoothness , we improve our model to better than 10% accuracy .while these caveats about the effectiveness of the least - squares technique in the presence of sidelobes may seem worrisome , it bears repeating that these are issues with data quality and not with the technique itself .for example , data from a 32-element paper array has shown that ddr filters can extract the sources used in this analysis with little systematic biases .( we do not use this data here do the lack of a temperature record for gain stabilization and the presence of a particularly active sun when the array was operating . 
)therefore , it seems that this technique has significant potential for precise beam calibration on larger arrays .another future goal for this technique is to calibrate the frequency - dependence of the beam . here, we have used our entire bandwidth to improve signal - to - noise in source extractions . for a larger array with higher snr and lower sidelobes ,our perceived source flux density measurements can be cut into sub - bands to look at the beam as a function of frequency .finally , it is possible to forgo the assumption of rotational symmetry altogether and allow for possible north / south variation in the beam .an experiment in which the dipoles are _ physically _ rotated on a daily basis can be used to create the same kind of crossing points , since one has changed the section of the beam each source crosses through .work is progressing on such an experiment using the paper array .the paper project is supported through the nsf - ast program ( award # 0804508 ) , and by significant efforts by staff at nrao s green bank and charlottesville sites .paper acknowledges the significant correlator development efforts of jason r. manley .we also thank our reviewer for their helpful comments .arp acknowledges support from the nsf astronomy and astrophysics postdoctoral fellowship under award ast-0901961 .ra & dec & measured flux density ( jy ) & catalog flux density ( jy ) & name + 1:08:54.37 & + 13:19:28.8 & & 58 , 49 & 3c 33 + 1:36:19.69 & + 20:58:54.8 & & 27 & 3c 47 + 1:37:22.97 & + 33:09:10.4 & & 50 & 3c 48 + 1:57:25.31 & + 28:53:10.6 & & 7.5 , 16 , 23 & 3c 55 + 3:19:41.25 & + 41:30:38.7 & & 50 & 3c 84 + 4:18:02.81 & + 38:00:58.6 & & 60 & 3c 111 + 4:37:01.87 & + 29:44:30.8 & & 204 & 3c 123 + 5:04:48.28 & + 38:06:39.8 & & 85 & 3c 134 + 5:42:50.23 & + 49:53:49.1 & & 63 & 3c 147 + 8:13:17.32 & + 48:14:20.5 & & 66 & 3c 196 + 9:21:18.65 & + 45:41:07.2 & & 42 & 3c 219 + 10:01:31.41 & + 28:48:04.0 & & 30 & 3c 234 + 11:14:38.91 & + 40:37:12.7 & & 21.5 & 3c 254 + 12:30:49.40 & + 12:23:28.0 & & 1100 & vir a + 14:11:21.08 & + 52:07:34.8 & & 74 & 3c 295 + 15:04:55.31 & + 26:01:38.9 & & 72 & 3c 310 + 16:28:35.62 & + 39:32:51.3 & & 49 & 3c 338 + 16:51:05.63 & + 5:00:17.4 & & 325 , 378 , 373 & her a + 17:20:37.50 & -0:58:11.6 & & 180 , 236 , 276 , 215 & 3c 353 + 18:56:36.10 & + 1:20:34.8 & & 680 & 3c 392 + 19:59:28.30 & + 40:44:02.0 & & 10623 & cyg a + 20:19:55.31 & + 29:44:30.8 & & 36 & 3c 410 + 21:55:53.91 & + 37:55:17.9 & & 43 & 3c 438 + 22:45:49.22 & + 39:38:39.8 & & 50 & 3c 452 + 23:23:27.94 & + 58:48:42.4 & & 6230 & cas a
We present a new technique for calibrating the primary beam of a wide-field, drift-scanning antenna element. Drift-scan observing is not compatible with standard beam calibration routines, and the situation is further complicated by difficult-to-parametrize beam shapes and, at low frequencies, by the sparsity of accurate source spectra to use as calibrators. We overcome these challenges by building up an interrelated network of source "crossing points": locations where the primary beam is sampled by multiple sources. Using the single assumption that the beam has 180° rotational symmetry, we achieve significant beam coverage with only a few tens of sources. The resulting network of crossing points allows us to solve for both a beam model and source flux densities referenced to a single calibrator source, circumventing the need for a large sample of well-characterized calibrators. We illustrate the method with actual and simulated observations from the Precision Array for Probing the Epoch of Reionization (PAPER).
recently , economy has become an active research area for physicists .they have investigated stock markets using statistical tools , such as the correlation function , multifractal , spin - glass models , and complex networks . as a consequence , it is now found evident that the interaction therein is highly nonlinear , unstable , and long - ranged .all those companies in the stock market are interconnected and correlated , and their interactions are regarded as the important internal force of the market .the correlation function is widely used to study the internal inference of the market . however , the correlation function has at least two limitations : first , it measures only linear relations , although a linear model is not a faithful representation of the real interactions in general .second , all it says is only that two series move together , and not that which affects which : in other words , it lacks directional information .therefore participants located in hubs are always left open to ambiguity : they can be either the most influential ones or the weakest ones subject to the market trend all along .it should be noted that introducing time - delay can be a good remedy for these limitations .some authors use such concepts as time - delayed correlation and time - delayed mutual information , and these quantities construct asymmetric matrices by preserving directionality . in case that the length of delay can be appropriately determined , one can also measure the ` velocity ' whereby the influence spreads . in this paper , however , we rely on a newly - devised variant of information to check its applicability .information is an important keyword in analyzing the market or in estimating the stock price of a given company .it is quantified in rigorous mathematical terms , and the mutual information , for example , appears as meaningful choice replacing a simple linear correlation even though it still does not specify the direction .the directionality , however , is required to discriminate the more influential one between correlated participants , and can be detected by the transfer entropy ( te ) .this concept of te has been already applied to the analysis of financial time series by marschinski and kantz .they calculated the information flow between the dow jones and dax stock indexes and obtained conclusions consistent with empirical observations .while they examined interactions between two huge markets , we may construct its internal structure among _ all _ participants .let us consider two processes , and .transfer entropy from to is defined as follows : where and represent the states at time of and , respectively .. in terms of relative entropy , it can be rephrased as the _ distance _ from the assumption that has no influence on ( i.e. ) .one may rewrite eq .( [ te1 ] ) as : from the property of conditional entropy. then the second equality shows that te measures the change of entropy rate with knowledge of the process .( [ te2 ] ) is practically useful , since the te is decomposed into entropy terms and there has been already well developed technique in entropy estimation .there are two choices in estimating entropy of a given time series .first , the symbolic encoding method divides the range of the given dataset into disjoint intervals and assign one symbol to each interval .the dataset , originally continuous , becomes a discrete symbol sequence .marschinski and kantz took this procedure and introduced the concept called _ effective transfer entropy_. 
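As a concrete illustration of equations (te1) and (te2) with k = l = 1, the sketch below estimates the transfer entropy T(Y -> X) by symbolic encoding: each series is coarse-grained into a few symbols and the plug-in Shannon entropies of the joint symbol sequences are combined. The number of bins is arbitrary here, and this is only the simpler of the two estimation routes; the analysis in this paper proceeds with the correlation-integral method described next.

```python
import numpy as np

def transfer_entropy_binned(x, y, n_bins=3):
    """Estimate T(Y -> X) with k = l = 1 using symbolic encoding.

    x, y are equal-length 1-d arrays (e.g. log-return series); each value is
    mapped to one of n_bins symbols, and TE is built from plug-in Shannon
    entropies of the joint symbol sequences:
        T = H(x_next, x) - H(x) - H(x_next, x, y) + H(x, y).
    """
    def discretize(s):
        edges = np.linspace(s.min(), s.max(), n_bins + 1)[1:-1]
        return np.digitize(s, edges)

    def entropy(*cols):
        # Plug-in Shannon entropy of the joint distribution of the columns.
        joint = np.stack(cols, axis=1)
        _, counts = np.unique(joint, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    xs, ys = discretize(np.asarray(x)), discretize(np.asarray(y))
    x_next, x_now, y_now = xs[1:], xs[:-1], ys[:-1]
    return (entropy(x_next, x_now) - entropy(x_now)
            - entropy(x_next, x_now, y_now) + entropy(x_now, y_now))
```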
the other choice exploits the generalized correlation integral . prichard and theiler showed that the following holds for data : ,\ ] ] where determines the size of a box in the box - counting algorithm .we define the fraction of data points which lie within of by where is the heaviside function , and calculate its numerical value by the help of the box - assisted neighbor search algorithm after embedding the dataset into an appropriate phase space .the generalized correlation integral of order is then given by notice that is expressed as an averaged quantity along the trajectory and it implies a kind of ergodicity which converts an ensemble average into a time average , .temporal correlations are not taken into consideration since the daily data already lacks much of its continuity .it is rather straightforward to calculate entropy from a discrete dataset using symbolic encoding . butdetermining the partition remains as a serious problem , which is referred to as the _ generating partition problem_. even for a two - dimensional deterministic system , the partition lines may exhibit considerably complicated geometry and thus should be set up with all extreme caution . hence the correlation integral method is often recommended if one wants to handle continuous datasets without over - simplification , and we will take this route .in addition , one has to determine the parameter . in a sense, this parameter plays a role of defining the resolution or the scale of concerns , just as the number of symbols does in the symbolic encoding method . before discussing how to set , we remark on the finite sampling effect : though it is pointed out that the case of does not suffer much from finiteness of the number of data , then the positivity of entropy is not guaranteed instead .thus we choose the conventional shannon entropy , throughout this paper .there have been works done on correcting entropy estimation .these correction methods , however , can be problematic when calculating te , since the fluctuations in each term of eq .( [ te2 ] ) are not independent and should not be treated separately .we actually found that a proper selection of is quite crucial , and decided to inactivate the correction terms here .a good value of will discriminate a real effect from zero . without _ a priori _ knowledge , we need to scan the range of in order to find a proper resolution which yields meaningful results from a time series .for reducing the computational time , however , we resort to the empirical observation that an airline company is quite dependent on the oil price while the dependency hardly appears in the opposite direction . fig .[ fig_te1 ] shows this unilateral effect : the major oil companies , chevron and exxon mobile , have influence over delta airline . from which maximizes the difference between two directions ( fig .[ fig_te2 ] ) , we choose the appropriate scale for analyzing the data . even in observing the temporal evolution, this value gives good discrimination through the whole period . in fig .[ fig_te1 ] , the influence seems reversed on very small length scales .the te , however , is known to increase monotonically under refinement of the partitions in many cases and the refined partition means the small length scale which is covered by the small in the correlation integral method .hence we regard this reversal as a finite sample effect in this paper , but it seems worth looking further into the characteristics of te analysis . 
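For completeness, a minimal sketch of the correlation-integral (q = 1) route: each joint entropy entering equation (te2) is estimated as minus the average logarithm of the fraction of points lying within epsilon of each state vector. A brute-force max-norm neighbour count stands in for the box-assisted search used in practice, the self-pair is retained to avoid log(0), and the choice of epsilon is assumed to follow the scan described above.

```python
import numpy as np

def corr_entropy(data, eps):
    """Shannon (q = 1) entropy estimate from the correlation sum.

    data: (N, d) array of joint state vectors. For each point we compute the
    fraction of points within eps (max-norm box, self included) and return
    minus the average of its logarithm. O(N^2) memory; fine for a sketch.
    """
    dist = np.max(np.abs(data[:, None, :] - data[None, :, :]), axis=2)
    frac = (dist <= eps).mean(axis=1)
    return -np.mean(np.log(frac))

def transfer_entropy_ci(x, y, eps):
    """T(Y -> X) with k = l = 1 built from correlation-integral entropies."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x_next, x_now, y_now = x[1:], x[:-1], y[:-1]
    col = lambda *a: np.stack(a, axis=1)
    return (corr_entropy(col(x_next, x_now), eps) - corr_entropy(col(x_now), eps)
            - corr_entropy(col(x_next, x_now, y_now), eps)
            + corr_entropy(col(x_now, y_now), eps))
```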
and we set in eq .( [ te1 ] ) since other values does not make significant differences .this study deals with the daily closure prices of 135 stocks listed on new york stock exchange ( nyse ) from 1983 to 2003 ( trading days , trading day ) , obtained through the website .we select stocks which is listed on nyse over the whole periods .the companies in a stock market are usually grouped into business sectors or industry categories , and our data contain 9 business sectors ( basic materials , utilities , healthcare , services , consumer goods , financial , industrial goods , conglomerates , technology ) and 69 industry categories .the following method shows how the information flows between the groups : suppose that we have a time series data , representing the daily closure price of a company at time .a stock market analysis usually prefers treating the log return value : to the original price itself , since it satisfies the additive property : .this log return transformation also make the result invariant under the arbitrary scaling of the input data .therefore , in order to measure the information transfer between two companies , say and , we create the log return time series and from the raw price data. then one can calculate the transfer entropies and between them from the equalities in the section 2 . for obtaining an overview of the market, we consider groups of similar companies .let be a company of the group , and be one of the group .the _ information flow index _ between these two groups is defined as a simple sum : in addition , we define the _ net information flow index _ to measure the disparity in influences of the two groups as : if is positive , we can say that the category influences to the category . we examine the market with two grouping methods .one is business sector , and the other is industry category . grouping into business sectors , however , does not exhibit clear directionality : the influence of the sector just alternates from that of the sector .in other words , the difference between and over the whole period is almost 0 ( zero ) .this unclarity comes from the fact that a business sector contains so many diverse companies that its directionality just cancels out . on the other hand ,if we construct the asset tree through the minimum spanning tree , each business sector forms a subset of the asset tree and the subsets are connected mainly through the hub .then , it can be said that each of the business sectors forms a cluster and there are no significant direct links among them .hence we employ the industry category grouping , more detailed than the business sectors .we have to exclude the categories which contain only one element , and table [ category ] lists the remaining industry categories used in the analysis . as in our previous observation, it is verified again that oil companies and airline companies are related in a unilateral way : the category 20 , major oil & gas , has continuing influence over the category 19 , major airline , during the whole 14 periods under examination ( ) .one can easily find such relations in other categories : for example , the category 20 always influences on the categories 15 ( independent oil&gas ) , 22 ( oil&gas equipment&services ) , and 23 ( oil&gas refining&marketing ) .it also affects the category 27 ( regional airlines ) over 13 periods and maintains its power on the whole market during 11 periods ( fig . 
[ map ] ) .it is well - known that economy greatly depends on the energy supply and price such as oil and gas .transfer entropy analysis quantitatively proves this empirical fact .the top three influential categories ( in terms of periods ) are the categories 10 ( diversified utilities ) , 12 ( electric utilities ) and 20 .all of ten companies in the categories 10 and 12 are again related to the energy industry , such as those for holding , energy delivery , generation , transmission , distribution , and supply of electricity . on the contrary ,an airline company is sensitive to the tone of the market .these companies receive information from other categories almost all the time ( category 19 : 11 periods , category 27 : 12 periods ) . the category 8 ( credit services ) and the category 9 ( diversified computer systems , including only hp and ibm in our data ) are also market - sensitive as easily expected .we calculated the transfer entropy with the daily data of the us market .the concept of transfer entropy provides a quantitative value of general correlation and the direction of information .thus it reveals how the information flows among companies or groups of companies , and discriminates the market - leading companies from the market - sensitive ones .as commonly known , the energy such as natural resources and electricity is shown to greatly affect economic activities and the business barometer .this analysis may be applied to predicting the stock price of a company influenced by other ones . in short , te proves its possibility as a promising measure to detect directional information .we suggest that the merits and demerits of te should be judged in details with respect to those of the classical methods like the correlation matrix theory . over the whole 14 periods .the degree of darkness represents the number of periods when is affected by , and s are left blank .for example , the category 10 affects almost all the other categories and is affected by the categories 12 and 25 in a few periods .the row of a market - leading category is bright on the average , while that of a market - sensitive one is dark.,scaledwidth=100.0% ]
In terms of transfer entropy, we investigate the strength and the direction of information transfer in the US stock market. Through the directionality of the information transfer, the more influential company of a correlated pair can be identified, and the market-leading companies can be selected. Our entropy analysis shows that companies related to energy industries, such as oil, gas, and electricity, influence the whole market.

Keywords: transfer entropy, information flow, econophysics, stock market

PACS: 05.20.Gg, 89.65.Gh, 89.70.+c
it is known that the most important component of an adaptive optics ( ao ) system is the corrector , which is the deformable mirror ( dm ) .it can be a continuous face sheet or a surface formed by mirror segments . in the former the mirror boundaryis fixed and voltage applied to any one actuator influences the neighbouring surface as well . in the case of the micro - machined membrane deformable mirror(mmdm ) [ 2 ] this influence can be as large as 60 [ 3 ] .however , they are still preferred in many ao systems [ 4 - 8 ] because of their low cost and capability of achieving large stroke . a 37 channel mmdm from okotech , netherlands ,is being used by us at the udaipur solar observatory ( uso ) for its solar adaptive optics [ 9 - 11 ] ( sao ) system .although the mmdm is not common in many sao systems , its performance has been validated for solar observations[12 ] . in an ao system , it is necessary to bias the dm in such a way that the stroke can be achieved in both positive and negative directions from the biased position . in general , this can be achieved by applying a constant voltage to all the actuators . in case of the mmdm , application of a constant voltage to all the actuators deforms the mirror to a shape often approximated to be parabolic [ 13 ] . however , such approximations do not hold , at points away from the centre .a dm , whose surface is distorted by applying a uniform set of voltage to its actuators , and placed in the optical setup at an angle to the beam will undoubtedly mimic a tilted parabolic mirror .such a tilted curved / parabolic mirror will induce optical aberrations such as defocus , astigmatism and coma . while defocus can be compensated by moving the imaging system , astigmatism and coma can not be corrected by a simple alignment .as the dm is also being used to compensate a turbulence - induced curvature term , it is necessary to determine the maximum angle of incidence at which the dm can be placed in the optical setup without introducing an additional set of aberrations than that which are inherent to the dm .otherwise , at relatively large angles of incidence , when the dm corrects an atmosphere - induced defocus ( curvature term ) , it will inevitably introduce astigmatism . in order to minimize this astigmatism ,an additional correction is required .this constraints the bandwidth of the system . along with ` setup ' induced aberrations , intrinsic aberrations ( aberrations due to mirror s surface profile )are also very important . as the technical passport of the dm states that the initial figure of the dm is astigmatic upto 1.3 fringes(p - v)[14 ], we would like to study the intrinsic aberrations as well . 
in this paperwe estimate , to a first order , the aberrations introduced in the optical system as a result of folding the beam using a dm with a bias voltage , for various angles of incidence .the rest of the paper is organized as follows .section 2 describes theoretical aspects related to the deformation of the mirror under an external , uniform voltage and in section 3 we present the theoretical investigation on astigmatism and defocus in such a deformed system .section [ simu ] describes the simulations which demonstrate the aberrations introduced when the dm is placed at different angles of incidence .the degradation in the observed image quality with varying curvature for different angles of incidence is discussed in section [ expt ] using an experimental setup .the understandings from the simulations and the observations from the experiment are discussed in section [ discu ] .is the diameter of the surface and is di - electric constant . is the distance between the electrodes ab and cd .thickness of the wafer is . ] in figure 1 , the silicon wafer w , with upper surface polished is place on the + ve electrode ab , which is subjected to potential , while the -ve electrode cd is grounded .when , the wafer w has its undistorted shape , while it gets distorted to the shape , shown in , when .if be the separation between the plates and be the dielectric permittivity of the material , filling the space between the plates , a charge density develops on the plates .the resulting stress is given by thus , if be the young s modulus , be the poisson ratio and be the thickness of the wafer , then the deformation of the upper surface of the wafer is given by ^ 2 \\\textrm{for}\;x = y=0,\quad z(0,0 ) & = & -\frac{p}{64}\frac{3}{2}\frac{(1-{\sigma}^2)}{eh^3}a^4 = -h_0\end{aligned}\ ] ] where is defined as the sag of the membrane .+ the gradient of the surface at any point on the membrane are given by \nonumber \\ z_y & = & \partial z/\partial y = \frac{4h_0}{a^4}\:y\left[a^2-(x^2+y^2)\right ] \label{zderivate}\end{aligned}\ ] ] accordingly , the two curvatures are given for \nonumber\\ k_y = \frac{1}{\rho_{y}}= -\frac{{\partial}^2 z } { { \partial y}^2 } & = & -\frac{4h_0}{a^4}\:\left[a^2-x^2 - 3\:y^2\right ] \label{rxry}\end{aligned}\ ] ] the convention being that the curvature is positive if the surface is convex ( center of curvature is located at i.e below the undistorted surface ) and negative if the surface is concave ( center of curvature is located at i.e above the undistorted surface ) .where , and are the radii of curvature along and direction respectively .the point where or or both equal to zero indicates the point of inflexion .thus , at the center and we can express the equation ( 2 ) as where , . 
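The deflection, slope and curvature relations above can be evaluated directly; the sketch below does so along a diameter. The electrostatic pressure is taken in the usual parallel-plate form P = eps * V^2 / (2 d^2), and because several numerical values are garbled in this copy, the electrode gap, the membrane radius and the units assigned to the quoted constants (E = 165 GPa, sigma = 0.24, h = 2.5 micron) are assumptions for illustration only.

```python
import numpy as np

# Nominal membrane parameters; interpretation of the garbled values is assumed:
# E in Pa, sigma dimensionless, h (wafer thickness), d (electrode gap) and
# a (membrane radius) in metres.
E, sigma, h = 165e9, 0.24, 2.5e-6
d, a = 75e-6, 7.5e-3
eps0 = 8.854e-12

def membrane_profile(V, r):
    """Deflection z(r), slope dz/dr and curvature of the biased membrane.

    Uses P = eps0 * V**2 / (2 * d**2) (assumed parallel-plate form) and the
    clamped-membrane solution quoted in the text:
    z(r) = -h0 * (1 - r**2/a**2)**2,  h0 = 3 P (1 - sigma**2) a**4 / (128 E h**3).
    """
    P = eps0 * V**2 / (2.0 * d**2)
    h0 = 3.0 * P * (1.0 - sigma**2) * a**4 / (128.0 * E * h**3)
    z = -h0 * (1.0 - (r / a) ** 2) ** 2
    dzdr = 4.0 * h0 * r * (a**2 - r**2) / a**4
    curvature = -4.0 * h0 * (a**2 - 3.0 * r**2) / a**4   # k along the diameter
    return z, dzdr, curvature

# Example: sag at the centre for a 200 V bias (h0 scales as V**2).
r = np.linspace(-a, a, 501)
z, slope, curv = membrane_profile(200.0, r)
print("sag h0 =", -z[250])
```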
in the literature , the formula that is traditionally used for the radius of curvature is .this gives a value , which is much larger than the actual radius of curvature at the center , it s magnitude being .this is because on these formula , the circumference of the mirror and the point of sag are fitted to lie on a sphere for which simple geometry gives a radius as given above .the actual values are , however given by equation [ rxry ] , showing that the surface can not be described by a unique radius of curvature .the various aberrations , which appear in the fraunhofer limit are thus to be worked out with being given by equation [ sageqn ] , in which is found to be proportional to , a fact , which we present in our further results .( b ) gradient and ( c ) curvature of the membrane at different points along the diameter for application of different voltages from 0 to 200 v;(d ) elevation , ( continuous line ) , gradient ( dotted line ) and curvature ( dashed line ) , for the application of 200 volts ; vertical dotted lines show the points of inflexion .different parameters for a typical wafer that are being used here are : = 165 , = 0.24 , h=2.5 m , =7.5 .the colors in plots ( a - c ) , black , blue , green , yellow and red represent for voltages 0 , 50 , 100 , 150 , and 200 v , respectively . ]figure show typical manifestations of , inclination and curvature of the membrane and the location of inflexion for typical values of various parameters of equation [ sageqn ] .it is clear that the variations in and at different points on the surface , imply a reflecting surface with varying inclination , which would , in geometrical viewpoint , reflect the light from these points of incidence in different directions , consistent with the laws of reflection .the curvatures and ascribe focusing and defocusing properties of the reflecting membrane surface .figure [ sagimg ] show the surface map of the membrane with the application of voltage ; we fitted this surface with 21 zernike polynomials [ 16 ] , the retrieved coefficients show that the ( defocus ) and ( spherical aberration ) are the dominant terms .figure [ raydia ] illustrates the image formation by a dm .we assume that the dm lies in the plane in its un - deformed state , and the curved profile represents the dm under the influence of uniform voltage .the deformation follows the equation ( 6 ) given in section 2 .let a ray from a point source (- , , ) be incident on the curved mirror at .the light ray reflected by the dm is imaged using a lens of focal length on to a screen ` s ' placed at a distance of from the lens .the lens lies in the plane such that form a right handed system of orthogonal axes .the lens introduces a path length ` lens ' to the ray on propagating through the lens . in order to find the path followed by the rays , we calculate the path length and minimize the same with respect to and as demanded by fermat s principle .the total optical path is given by for a perfectly aligned optical system , we have , so that \nonumber\\ \alpha_{22}&=&{1}/{l}-{2\:\cos\;\theta_1}/{\rho_o}\nonumber\\ \beta_{11}&=&{1}/{l}+{1}/{z_f}-{1}/{f_0}\nonumber\\ \beta_{22}&=&{1}/{z_f}\nonumber\\ \gamma_{11}&=&{-\cos\:\theta_1}/{l},\nonumber\\ \gamma_{22}&=&{-1}/{l } , \quad\delta_{0}={-1}/{z_f}\end{aligned}\ ] ] here , we have neglected the terms contributing to a constant phase and kept only the leading terms that contribute to astigmatism and defocus . in order to find the path of the ray we apply fermat s principle . 
thus keeping ( x , y ) fixed we minimize w.r.t respectively , to yield , which are linear equations in the variables x , y , , , x , y. solving the above linear equations , \,x = \alpha\,x\nonumber\\ y&=&\left[(\alpha_{22}\beta_{11}-{\gamma_{22}}^2)/(\gamma_{22}\delta_o)\right]\,y = \beta\,y \label{lineqnsol}\end{aligned}\ ] ] here , the definitions for , can be found from the pre - factors .these equations relate ( ) with each other . in other words , if the diameter of the lens be much larger than , i.e. the diameter of the dm then for any point of incidence ( ) on the dm , we know the point ( ) , which the ray of light reaches. on defining we can write the coordinates of the point ( ) on the dm to be and .thus , various rays , incident on the circle : on the dm , will reach the point ( ) , where , and .this means that describes an ellipse .thus on making we find all rays will reach where on the screen all the rays will lie inside a ellipse such that the results given in ( 12 ) and ( 13 ) enable us to estimate the astigmatic aberration inside the system . consider a case where on adjusting the screen and making we can make . in this case , the illumination on the screen will be confined only along the axis .the necessary condition for that , i.e requires , as seen from equations ( 11 ) .then on using the definition given in equation ( 9 ) , we find in the limits and , similarly on adjusting we can make . in this case and the illumination on the screen is confined only along the axis . for this to happen , we must have so that for , we get on using the definitions in equations ( 9 ) , the astigmatism in the system is then estimated as the above equation for astigmatism ( ) shows a strong dependence on , it varies very fast for and blows up as ( c.f .table 1 ) .it also shows that varies as . for any value of the patch of light onthe screen lies inside an ellipse , defined by equations ( 12 ) and ( 13 ) .thus , the extent of the patch is defined by averaging over .these give , \end{aligned}\ ] ] the circle of least confusion is located at a point , where is minimum , i.e. at a value of for which , \end{aligned}\ ] ] + these derivatives can be evaluated from equations ( 11 ) on using the definitions given in equation ( 9 ) .we find that in the limit , i.e. for curvatures not too large , as is the case for most practical situations . .dependence of circle of least confusion on angle of incidence [ cols="^,^,^,^,^,^,^,^",options="header " , ] in oc-1 , the dominant aberration is defocus along with a negligible amount of spherical aberration of the 3rd order ( table [ info1 ] ) .these aberrations increase with the increase in voltage as seen from their corresponding zernike coefficients ( cf . ,table [ info1 ] ) . to compensate these aberrations the focal plane positionmust be shifted which is achieved by minimizing the merit function for an optimal focal plane position in zemax .the process of optimization yields a wavefront error which is well within the diffraction limit . 
in oc-2 ,the dominant aberrations were defocus ( z4 ) , astigmatism ( z6 ) along with small amounts of coma ( z7 ) ( table [ info2 ] ) .these aberrations also increase with an increase in voltage .after optimizing the focal plane position , the z4 coefficient reduces considerably , while the coefficient z6 , namely astigmatism , remains unchanged .the overall wavefront quality varies from /12.5 to 6.6 when the voltage changes from 25 - 225 , before optimization .after optimization the values vary from /25 to 2.83 for the same range of voltage .thus at an angle of incidence of 45 , application of any voltage to the dm introduces an additional astigmatism and coma ( cf . , table 3 ) .it is to be noted that here the coefficient of coma is very small in comparison to that of astigmatism .hence , we neglect coma in our further analysis .however , we conjecture that the coefficient of coma is negligible because of the large f - number of the curved dm mirror , this may not be the case with a tilted curved mirror with a smaller f - number , where the coma is equally important as astigmatism . ) .bottom panels : wavefront error ( wfe ) after optimization for optimum focus .relation between applied voltage and astigmatism and defocus are shown in the respective figures .the symbols represent the fitted values based on relations derived empirically.,width=302 ] as a further step , the amount of astigmatism induced by the ( curved ) dm for different angles of incidence was studied , as it is the only term that remains unaltered after optimization . as a matter of conveniencethe astigmatism is expressed as the change in tangential and sagital focal planes rather than zernike coefficients as these values can be compared with those from an optical setup having a test target .figure [ intri ] shows the increase in astigmatism ( change in sagital and tangential focii ) with increase in voltage ( i.e decrease in rc ) for different angles of incidence .dependence of astigmatism and defocus on voltage and angle of incidence is derived empirically .it shows that defocus depends only on the voltage applied , whereas astigmatism depends both on the voltage applied and the angle of incidence .this is in agreement with the theory ( cf ., equations 16 and 19 ) .similarly , the wavefront error of the system at optimum focus varies with angle of incidence and it shows a similar trend as astigmatism . the system shows a considerable amount of wfe for angles of incidence 9 , with the wfe is worse than /10 for any voltage .the technical document of the dm states that in the absence of any voltage the initial mirror figure is astigmatic upto 1.3 fringes . in the simulations ,this intrinsic aberration was modeled by utilizing zernike fringe sag " [ 17 ] .the terms z5 , z6 and , z12 , z13 refer to the first and third order astigmatism , respectively and were adjusted such that the initial difference between the tangential and sagital focus was 6 mm which corresponds to a wavefront error of 0.18 at = 550 nm .the obtained coefficients were assigned to the dm to make the surface astigmatic .the entire exercise as described in section [ simintri ] was repeated , astigmatism and wavefront error were estimated for different angles of incidence . at an angle of incidence of 0 ,the wavefront error remains constant at 0.18 after optimizing the focal plane for different values of rc , which correspond to the intrinsic astigmatism . 
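The Zernike bookkeeping used both for the surface map of section 2 and for the coefficients quoted from the simulations can be sketched as a linear least-squares fit of low-order polynomials on the unit pupil. The ordering below follows a Noll-like convention with the usual normalisation constants omitted, so the exact correspondence with the Z4/Z6/Z7 labels above is an assumption.

```python
import numpy as np

def zernike_fit(x, y, w):
    """Least-squares fit of low-order Zernike terms to a wavefront/surface map.

    x, y: pupil coordinates normalised to the unit disc; w: map values at
    those points. Returns coefficients for piston, tip, tilt, defocus,
    astigmatism, coma and primary spherical (unnormalised polynomials).
    """
    r2 = x**2 + y**2
    basis = np.stack([
        np.ones_like(x),                   # piston
        x,                                 # tip
        y,                                 # tilt
        2.0 * r2 - 1.0,                    # defocus
        x * y,                             # oblique astigmatism
        x**2 - y**2,                       # vertical astigmatism
        (3.0 * r2 - 2.0) * y,              # coma (y)
        (3.0 * r2 - 2.0) * x,              # coma (x)
        6.0 * r2**2 - 6.0 * r2 + 1.0,      # primary spherical
    ], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
    return coeffs
```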
at any other angle of incidencethe wavefront error shows quadratic variation with the increase in rc with a minimum value being around 0.18 .this shows that the astigmatism arising from the surface figure needs to be corrected before using the dm in an optical setup .to compare the results from the zemax simulations , we carried out an experiment with the dm , using an f#15 coude telescope as the light feed . with a set of relay lenses the image was magnified by a factor of 2 . at the focal plane of the f#30 beam an artificial targetwas placed which was illuminated by sunlight . a lens of focal length 200 mm was used to collimate the light modulated by the target .the dm was placed in the collimated beam which reflects the beam towards an imaging lens of focal length 300 mm ( refer figure [ od ] ) .the voltage to the dm was controlled by 2 , 8-bit pci cards whose maximum output voltage to any channel is 5 v. the individual actuators were first assigned a port address as stated in the user manual and the pci output was checked for each channel .a high voltage amplifier consisting of 2 high voltage amplifier boards , boost the signal from the pci cards .each amplifier board contains 20 non - inverting dc amplifiers with a gain of 59 .a high voltage stabilized dc supply was used to power the amplifier boards .the maximum operating voltage of the dm is 162 v which corresponds to 256 digital - to - analog counts ( dac ) .a pixel , cool - snap hq ccd with a pixel size of 6.45 m from roper scientific was used to image the target .the difference between tangential and sagital planes and the circle of least confusion ( optimum focus ) were measured in order to estimate the intrinsic as well as induced aberrations . .top panel show change in focus w.r.t change in voltage .bottom panel show intrinsic astigmatism , as there is no tilt in the dm , astigmatism observed is intrinsic to the dm .solid line represents the linear fit corresponds to the data points.,scaledwidth=70.0% ] for measuring the intrinsic aberration the dm was placed at an angle of 0 similar to oc-1 in figure [ od ] .the target had an l shape pattern and it was observed that the vertical line was focused at one location while the horizontal line at another .this difference in the sagital and tangential focus is caused by astigmatism . to see if this astigmatism changes with the curvature of the dm , a uniform voltage was applied to all 37 actuators of dm from 0 to 225 dacs in steps of 25 dacs . for every voltage set , the tangential, sagital and optimum focal positions ( and ) were measured and the corresponding images were recorded . the difference between the sagital and tangential focal positions do not vary with voltage ( cf . , figure [ expt0 ] ) which is in agreement with the results from the theory and simulation when an intrinsic astigmatism is considered for 0 angle of incidence .the optimum focus changes quadratically with the voltage applied . 220 pixels with a scale of 4.3 micron per pixel.,width=415 ] the presence of an intrinsic astigmatism results in a poor image quality which can be judged visually as shown in figure [ images0 ] which also shows images taken by a plane mirror kept at the same location of the dm .astigmatism essentially arises due to different curvatures along different directions , hence as a first order trial we applied maximum voltage to a specific line of actuators alone and recorded the corresponding image as shown in figure [ images0]c . 
as a performance measurement , contrast of the images was estimated before and after correction for the entire image as well as for a few selected features as shown in figure [ images0]d .although the improvement in image contrast is nominal , it is evident that a specific voltage set is necessary to compensate for the degradation caused by the intrinsic astigmatism .the experiment was repeated by placing the dm at several angles of incidence and the tangential , sagital and optimum focus positions ( and ) were measured and the corresponding images were also recorded .the defocus is quantified by noting the distance by which the screen has to be shifted to reach the position of the circle of least confusion .the astigmatic aberrations is quantified by . both and proportional to , their dependence on are strikingly different as seen from equations 16 and 20 . while the defocus has a weak dependence on , that of has a strong dependence on as seen from the functional forms of and .the results are displayed in figure [ expt45 ] which is in agreement with those from the theory and simulations .interestingly , for angles less than 10 the astigmatism remains nearly a constant with the applied voltage as seen from the theory and simulations .images obtained for different angles of incidence by applying 225 dacs to all the actuators of dm are shown in figure [ images45 ] .a deformable mirror is an important component in an adaptive optics system for compensation of atmospheric turbulence .a simple optical setup was made to study alignment issues and the intrinsic aberrations of a deformable mirror without employing any kind of wavefront sensor .the nature of the surface deformation under the influence of a uniform voltage to all the actuators of deformable mirror is obtained theoretically and given in equation ( 2 ) .the equation is used to calculate the sag of the deformed / curved mirror .it is shown that radius of curvature of the deformable mirror is inversely proportional to the square of the voltage applied .furthermore , we also analytically estimated the defocus and astigmatism due to such a curved mirror , which are shown in equations ( 16 ) and ( 20 ) .simulations were performed with different voltages to the deformable mirror , which is kept at different angles of incidence .it is demonstrated that the estimated error in focus is solely a function of the applied voltage where the former has a quadratic dependence on the latter .the same is seen in astigmatism as well with an additional non - linear dependence on the angle of incidence .simulations also shows that coefficient of coma is negligible in comparison to astigmatism , could be due to the large f - number of dm .similar results were obtained from the experiment , wherein a 37 channel mmdm is placed in the collimated beam at different angles of incidence .the dm when placed at 0 angle of incidence shows a finite amount of intrinsic astigmatism which does not vary significantly with the radius of curvature ( i.e. application of voltage to dm ) ; this is in agreement with the technical report provided by the manufacturer .it is also observed that the optimum focal plane position changes quadratically with voltage and does not exhibit a change of sign which concludes that the dm does not possess any intrinsic curvature that would change sign on application of voltages .we have also shown that the image quality degrades with the application of voltage and the astigmatism increases with the increase in angle of incidence . 
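A sketch of the empirical fits quoted above: with the radius of curvature scaling as 1/V^2, both the optimum-focus shift and the tangential-to-sagital focus separation are regressed against V^2, the latter separately for each angle of incidence so that the angle-dependent growth of the astigmatism (and any voltage-independent intrinsic offset) can be read off. The data containers and variable names are hypothetical.

```python
import numpy as np

def fit_voltage_dependence(volts, defocus, astig_by_angle):
    """Fit the empirical quadratic voltage dependence of defocus and astigmatism.

    volts          : applied bias voltages (or DAC counts)
    defocus        : measured shift of the optimum focal plane at each voltage
    astig_by_angle : dict angle_deg -> array of (tangential - sagital) focus
                     separations at the same voltages
    Returns the quadratic coefficient of the defocus and, per angle, the
    quadratic coefficient and intrinsic offset of the astigmatism.
    """
    v2 = np.asarray(volts, dtype=float) ** 2
    defocus_coeff = np.polyfit(v2, defocus, 1)[0]     # slope of defocus vs V^2
    astig_coeff = {}
    for angle, sep in astig_by_angle.items():
        slope, offset = np.polyfit(v2, sep, 1)
        astig_coeff[angle] = {"slope": slope, "intrinsic_offset": offset}
    return defocus_coeff, astig_coeff
```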
from both , simulations and experiment it is demonstrated that when the dm is kept at angles greater than and voltages are applied to it, there is an induced astigmatism which increases in general with the angle of incidence as is also predicted by the theory .the close correlation between theory and experiment enable us to reach the following conclusion about the performance of dm .it shows that apart from the forces appearing due to charging of the capacitor , there are no other spurious deforming forces in the system and the optics of the dm can thus be regarded to have sufficient reliability .the detailed intensity distribution on the screen can also be computed by using equation ( 9 ) for the phase distortion due to deformation .it is concluded that in order to operate the dm in real - time to correct for atmospheric induced aberrations , it necessary to a ) keep the dm at a very small angle of incidence and b ) to derive a suitable voltage set that will compensate the intrinsic astigmatism as well .this voltage set can be simply added to any constant voltage set required to bias the dm surface , since the intrinsic astigmatism is independent of voltage ; it naturally facilitates using a suitable bias voltage in order to allow the membrane to move in either direction , which is being pursued and will be reported in future . in practical situations , for wavefront correctionthe deformation may not be as simple as considered in the present investigations .however , the present study enables us to ascertain the inherent defects in the system which has to be kept in mind , while using the deformable mirror for phase distortion correction .rohan e. louis is grateful for the financial assistance from the german science foundation ( dfg ) under grant de 787/3 - 1 and the european commission s fp7 capacities programme under grant agreement number 312495 .lijun zhu , pang - chen sun , dirk - uwe bartsch , william r. freeman , and yeshaiahu fainman , `` adaptive control of a micromachined continuous - membrane deformable mirror for aberration compensation '' , appl . opt . * 38 * , 168 - 176 ( 1999 ) .d. s. dayton , s. restaino , j. gonglewski , j. gallegos , s. macdermott , s. browne , s. rogers , m. vaidyanathan , and m. shilko , `` laboratory and field demonstration of a low cost membrane mirror adaptive optics system '' , opt .commun . * 176 * , 339 - 345 ( 2000 ) .oskar van der luehe , dirk soltau , thomas berkefeld , and thomas schelenz `` kaos : adaptive optics system for the vacuum tower telescope at teide observatory '' in _ innovative telescopes and instrumentation for solar astrophysics _ , stephen l. keil , sergey v. avakyan , ed .spie * 4853 * , 187 - 193 ( 2003 ) .goran b. scharmer , peter m. dettori , mats g. lofdahl , and mark shand , `` adaptive optics system for the new swedish solar telescope '' in _ innovative telescopes and instrumentation for solar astrophysics _ , stephen l. keil , sergey v. avakyan , ed .spie * 4853 * , 370 - 380 ( 2003 ) . c. u. keller , claude plymate , and s. m. ammons , `` low - cost solar adaptive optics in the infrared '' proc .spie * 4853 * , in _ innovative telescopes and instrumentation for solar astrophysics _, stephen l. keil , sergey v. avakyan , ed ., 351 - 359 ( 2003 ) .
a deformable mirror ( dm ) is an important component of an adaptive optics system . it is known that an on - axis spherical / parabolic optical component , placed at an angle to the incident beam introduces defocus as well as astigmatism in the image plane . although the former can be compensated by changing the focal plane position , the latter can not be removed by mere optical re - alignment . since the dm is to be used to compensate a turbulence - induced curvature term in addition to other aberrations , it is necessary to determine the aberrations induced by such ( curved dm surface ) an optical element when placed at an angle ( other than ) of incidence in the optical path . to this effect , we estimate to a first order , the aberrations introduced by a dm as a function of the incidence angle and deformation of the dm surface . we record images using a simple setup in which the incident beam is reflected by a 37 channel micro - machined membrane deformable mirror for various angles of incidence . it is observed that astigmatism is a dominant aberration which was determined by measuring the difference between the tangential and sagital focal planes . we justify our results on the basis of theoretical simulations and discuss the feasibility of using such a system for adaptive optics considering a trade - off between wavefront correction and astigmatism due to deformation .
to securing communications through the use of information theoretic notions was started by shannon in the 1940 s . in his papers he related fundamental notions , such as entropy , to the secrecy of cryptographic systems .inspired by the work of shannon , wyner published a paper proving that it was theoretically possible to secure communications solely through the choice of an encoding scheme for a specific channel model .wyner s security model capitalizes on the eavesdropper receiving a noisier copy of what the intended user receives . in general , for communications security, assumptions on the capabilities of eavesdroppers are required to design systems . in wyners paper , the presumption is on the channel noise of the eavesdropper .on the other hand , in contemporary cryptography it is assumed that the eavesdropper is subject to certain computational limitations .it is accepted that many people have attempted to break a cryptosystem , and their best efforts to recover plaintext require an infeasible amount of time . in particular ,the computational complexity of factoring products of primes is unknown , although it is widely accepted to be a hard problem .thus the only guarantee of the security of these systems is that numerous people have attempted to attack the system and have not succeeded .the question for physical layer security is what circumstances will allow similar conclusions about the eavesdropper s capabilities as in cryptography . in physical layer security , channel noise combined with a special encoder provide the security .thus for wyner s method , how can one ensure that the eavesdropper s channel has specific noise characteristics , thus limiting his / her decoding capabilities ? the goal in this paper is to work with a well - studied channel model , and gradually relax restrictions on the eavesdropper s capabilities .as these relaxations are made , the effect on the security of the system will be evaluated .we will show that if the real channel characteristics deviate even slightly from the assumptions , all guarantees of security are lost . since in a practical system , no assumptions ( either on the intended user s or the eavesdropper s channel ) are verifiable , we can not claim security .the intended users rarely have control over the eavesdropper s access ( especially in the wireless setting ) or the eavesdropper s behavior , so basing a security system on questionable assumptions is risky . on the other hand , wyner s basic ideacan be easily combined with cryptographic methods to achieve a secure system based on computationally intractable problems .for example , we can artificially create a channel that is information - theoretically secure under the assumption that the shared key can not be recovered , i.e. , the recovery of the key is computationally intractable .the paper is organized as follows . in sectionii , the channel model is presented .the general results for wiretap coding and achieving secrecy are discussed in section iii .section iv quantifies the amount of secrecy lost when the eavesdropper has better access than anticipated . 
in sectionv , a shared key cryptosystem is presented from mihaljevic , that sets the path for a more general method of combining stochastic encoding with computationally intractable problems .the notation for this paper is as follows .random variables will be represented by uppercase letters ( ) , and their instances by lower case letters ( ) .caligraphic uppercase letters will represent the domain of the corresponding random variable ( ) .since we will consider a sequence of codes , , we will use a subscript to denote the corresponding parameters , i.e. , will have rate .the communication model used for this paper is a broadcast channel with confidential messages ( bcc ) , and depicted in fig .[ fmod ] . in general , the bcc model has confidential messages and public messages .the bcc model is a generalization of the wyner wiretap model . in this paper, there will only be confidential messages .the model in this paper consists of a memoryless concatenated additive white gaussian noise ( awgn ) channel followed by a 2-level and l - level analog to digital ( a / d ) converters on the main and wiretap channels , respectively .( stochastic encoder ) a * _ stochastic encoder _ * , , with rate , input , and output , is a channel with transition probability , that satisfies ; 1 . 2 .if and , then the input to the stochastic encoder is a uniform random variable , , which is the source message , and has entropy the stochastic encoder , with rate , outputs . for our channel model , we let . in this modelthere is a legitimate receiver and an eavesdropper .the legitimate user receives .when , the eavesdropper receives . to distinguish the case when , we will say that the eavesdropper receives .there is independent awgn noise in both the main and wiretap channel .we assume that the eavesdropper has a noise variance greater than that of the legitimate receiver . when just considering the input and output , the channels are both discrete memoryless channels ( dmc ) .the legitimate channel users will construct a security system based on the assumption that the eavesdropper is restricted to a two - level a / d converter , and .the goal in this paper is to quantify the effect of making an incorrect assumption about , and suggest an improvement through the use of a shared key based cryptosystem . in the case , it follows that the main and eavesdropper channels can be individually modeled as binary symmetric channels , , and respectively .the corresponding crossover probabilities are , and where is the cdf for the normal distribution .since , it follows that .this means that the capacity of the wiretap channel is lower than the capacity of the main channel .( stochastically degraded channel ) the channel , , is said to be * _ stochastically degraded _ * with respect to the channel , , if there exists a channel , , such that for every first note , the concatenation of and channels is a channel , where .thus with regards to conditional probabilities , the concatenated and split bscs are equivalent , see fig .[ stocheq ] .it follows that the wiretap channel is stochastically degraded with respect to the main channel .this is important since the results in wyner s wiretap paper pertain strictly to concatenated channels , yet they are extendable to split channels under these conditions .it is clear that we have a channel model where the eavesdropper is `` worse off '' than the intended receiver . nowthe question is how the additional uncertainty in the eavesdropper s channel leads to secrecy . 
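since the exact expressions for the crossover probabilities are not reproduced above , the following sketch only illustrates the kind of computation involved . it assumes antipodal signalling with amplitude a over the two awgn channels followed by a 2-level ( sign ) quantizer , which is an assumption , as the text does not spell out the modulation ; each crossover probability is then a tail probability of the normal distribution , and the difference of binary entropies gives , for this degraded bsc pair , the secrecy capacity discussed in the next section .

import numpy as np
from scipy.stats import norm

def h2(p):
    """binary entropy in bits"""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

# assumption: +/-A signalling, AWGN, then a sign (2-level) quantizer on both channels
A, sigma_m, sigma_w = 1.0, 0.5, 0.8          # illustrative values with sigma_w > sigma_m
p_m = norm.sf(A / sigma_m)                   # main channel crossover probability
p_w = norm.sf(A / sigma_w)                   # wiretap channel crossover probability

# secrecy capacity of a degraded BSC wiretap pair: C_s = h(p_w) - h(p_m)
print("p_m = %.4f, p_w = %.4f, C_s = %.4f bits" % (p_m, p_w, h2(p_w) - h2(p_m)))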
to quantify the ability to send confidential messages over a broadcast channel ,we define first a measure of uncertainty that the eavesdropper has with regards to the original message .( equivocation ) the * _ equivocation _ * of the eavesdropper is defined as in the event , i.e. , provides no information about the original source , the eavesdropper s only available method is to guess according to the distribution of the source message . in designing a code ,the objective is to achieve security through maximizing the equivocation .that is if , then the system is secure .( wiretap code ) a * _ wiretap code _ * , , for a bcc consists of * message set , * stochastic encoder , * decoder , secrecy in these systems is introduced through the use of the stochastic encoder .the encoder will map one message to different outputs such that when the eavesdropper has enough error , any attempt to decode will result in a random message .assume that the main channel is noiseless , and the wiretap channel is a .let .define the stochastic encoder as follows , * or with equal probability * or with equal probability we can calculate the equivocation of this simple coding scheme , which results in \\ \approx & 0.954 \end{aligned}\ ] ] observe in the case we transmit the message without any encoding over the channel , the eavesdropper will have equivocation .thus we have increased the equivocation of the eavesdropper using a wiretap code .increasing equivocation through a random one - to - many mapping is the notion behind using a stochastic encoder for security . in the above example, we presented a rate stochastic encoder that increased the eavesdropper s equivocation .the question remains as to what are the limits with regards to possible rates , and equivocation .first we define requirements for a rate - equivocation pair to be achievable .( achievability ) a rate - equivocation pair , , is * achievable * if there exists a series of wiretap codes , such that 1 . ( * rate * ) 2 . ( * probability of error * ) 3 . ( * equivocation * ) the second achievability condition assures that the legitimate user will be able to decode the confidential message .the third achievability condition is called weak secrecy , and puts a lower bound on the wiretapper s equivocation .now we have the framework to define the secrecy capacity .( secrecy capacity ) the secrecy capacity of a bcc with no common message is thus is the maximum rate at which error free communication is possible over the main channel , with maximum equivocation over the wiretap channel .furthermore the region of achievable rates is characterized by the following generalization of the wyner s results .( region of achievability)(korner and csiszar ) given a bcc with no common message the region , is a closed convex set for which there exist random variables , and , such that and are markov chains .furthermore , the conditional distribution of ( respectively ) given characterizes the main ( respectively wiretap ) channel and 1 . 2 . ] in particular , \ ] ] since the wiretap channel is stochastically degraded with respect to the main channel , and give us that for our channel model , the following corollary holds .[ acst ] ( achievability for stochastically degraded channels)(korner and csiszar ) if the wiretap channel is stochastically degraded with respect to the main channel , then the above region , , simplifies to those pairs such that 1. 
] and {\mathcal{q}} ] is a markov chain , , any a / d(l ) converter with associated partition will have {\mathcal{q}_l } ) \leq i(x;w ) ] is just for a particular a / d(l ) converter where . in short, discretizing corresponds to an adequately chosen a / d(l ) converter , which is precisely .note and are referenced from fig .2 , where we use the notation when .the dependence on is implicit in .observe by the approximation lemma , it will suffice to approximate the equivocation associated with the a / d(l ) by the equivocation given by direct access to the awgn channel .it is simpler to calculate instead of . then using the approximation lemma, we have , [ mel](max equivocation loss ) this follows by direct application of the approximation lemma to the equivocation loss theorem .now we are left with calculating .first we shall calculate the cdf of .\end{aligned}\ ] ] therefore taking the derivative , then by the definition of mutual information , ) , width=288 ] we have calculated the mutual information , and can now use corollary [ mel ] to estimate the loss in equivocation .looking at ( [ mele ] ) we see as the noise power in the wiretap channel grows , that the equivocation loss goes to zero , which is illustrated in fig .3 . on the other hand ,as the noise power in the wiretap channel gets smaller , secrecy is lost . in particular , if the , approximately bits of secrecy is lost per bit transmitted .the equivocation is less than half of the value for the case of a two level a / d converter . in this particular instance , where the eavesdropper is allowed to use arbitrary precision analog to digital converters , it is difficult to guarantee a particular amount of secrecy .the analysis presented here is for a simple model , and is present only to illustrate the difficulty in making assumptions about the physical limitations in channel access of an eavesdropper . in the analysis of more complicated scenarios , especially in the wireless domain , incorrect assumptions onthe eavesdropper will present similar security losses .on the other hand , this does not constitute a proof that physical layer security using wyner s method is not possible .in the previous section , we illustrated what happens if the intended users believe that the eavesdropper uses an a / d(2 ) converter , but in reality the eavesdropper uses an a / d(l ) converter .while this is , in many regards , a toy example , it illustrates that a wrong assumption ( belief ) on the eavesdropper s channel leads to a compromised security system . in other words ,a system is information - theoretically secure only if the assumptions ( made by the intended users ) of the eavesdropper s channel are correct .however , in practice , the intended users do not control the eavesdropper s channel nor behavior .hence the intended users can never verify their assumptions .in a practical setting , apart from using a better a / d converter , the eavesdropper can ; 1 ) use multiple antennas , 2 ) decrease the distance to the transmitter , 3 ) use non - awgn channels , etc ... if the intended users make the wrong guess regarding any of these parameters , they will not achieve a secure system . 
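to make the preceding comparison concrete , the following numerical sketch contrasts the mutual information available to an eavesdropper restricted to a 2-level quantizer with the mutual information available from the unquantized awgn output ; their difference is the quantity used above to bound the loss in equivocation . unit - amplitude antipodal input is assumed here , which the text does not state explicitly , and the noise levels are arbitrary .

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def h2(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def mi_soft(sigma):
    """I(X;W) for equiprobable +/-1 input over AWGN when the raw output W is kept"""
    def integrand(w):
        p0 = norm.pdf(w, 1.0, sigma)
        p1 = norm.pdf(w, -1.0, sigma)
        pw = 0.5 * (p0 + p1)
        val = 0.0
        if p0 > 0.0:
            val += 0.5 * p0 * np.log2(p0 / pw)
        if p1 > 0.0:
            val += 0.5 * p1 * np.log2(p1 / pw)
        return val
    return quad(integrand, -np.inf, np.inf)[0]

def mi_hard(sigma):
    """I(X;Z) when the eavesdropper is restricted to a 2-level (sign) quantizer"""
    return 1.0 - h2(norm.sf(1.0 / sigma))

for sigma in (0.5, 1.0, 2.0):
    print(sigma, mi_soft(sigma) - mi_hard(sigma))   # bound on the equivocation lost per bit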
thus making assumptions about the wiretapper s capabilitiesis difficult .one potential alternative is to combine some of the techniques of modern cryptography with wyner s secrecy coding techniques .we will artificially create random channels , and then use the corresponding stochastic encoding to secure the message .our assumption will be , as in contemporary cryptography , that recovering the preshared key used to create the channel is computationally intractable .we remove the dependence on a true source of randomness . in particular , the stochastic encoder is dependent on a source of true randomness .our solution will be to use general purpose pseudorandom number generators as input to the stochastic encoder and input to create artificial channels . by pseudorandom number generation , we follow the guideline of goldreich , + _ `` loosely speaking , general - purpose pseudorandom generators are efficient deterministic programs that expand short randomly selected seeds into longer pseudorandom bit sequences , where the latter are defined as computationally indistinguishable from truly random sequences by any efficient algorithm . '' _ our general solution strategy is to simulate the main and wiretap channels in the wyner wiretap channel model at the transmitter prior to transmission , see fig .the resulting data is then transmitted over a physical channel with channel coding such that it is assumed the eavesdropper and intended users have perfect copies of the sent data .when we simulate the wiretap channel , it is done in such a way that the intended receiver with knowledge of a preshared key can reverse the simulation .in addition , at no point do we assume a true source of randomness .the first step in designing our system , is to choose the preferred channel types for the main and wiretap channel , where the wiretap channel is stochastically degraded with respect to the main channel .we choose an encoding scheme that will achieve information theoretic security based on the assumption that the real channel model was implemented . up to this point , we are exactly in the framework of wyner s paper .this is where we take a separate path .we use pseudorandom number generation as the source of randomness for the stochastic encoding , and to artificially create the main channel .the pseudorandom noise generators are initialized with a randomly selected seed that only the sender knows . in order to emulate the wiretap channel, we use a general purpose pseudorandom number generator that is seeded with a preshared key . for the wiretap channelwe require that given the preshared key , the intended receiver must be able to reverse the process of emulating the wiretap channel . in this modelthe intended receiver is knowledgeable of the shared key for the wiretap noise , and can remove this noise exactly . the pseudorandom noise used to emulate the main channel , can not be removed , and the intended user decodes using the error correcting code . at this pointthe intended user has recovered the source message . on the other hand , the eavesdropper is knowledgable of the complete design of the system , with the exception of the preshared key .the eavesdropper s attacks will then be based on exploiting the weakness of the pseudorandom number generation to recover the secret key , and seeds .we provide an example that follows the approach of mihaljevic in system design . 
in particular , we assume the following notation : {i=1}^l : \text{plaintext } \\\bold{r}&=[r_i]_{i=1}^{m - l } : \text{bernoulli}(1/2 ) \text{i.i.d .random variables } \\\bold{u } & = [ u_i]_{i=1}^k : \text{public bernoulli}(1/2 ) \text{i.i.d .random variables } \\\bold{s } & = [ s_{i , j}]_{i=1 \ : j=1}^{k \:\quad n } : \text { binary } ( k \times n ) \text{secret key matrix } \\\bold{v } & = [ v_i]_{i=1}^n : \text { unknown bernoulli}(p ) \text{i.i.d .random variables } \\\bold{m } & : \text{invertible mixing matrix } ( m \times m ) \\ f_e & : \text{stochastic encoder from } \ { 0,1 \}^m \rightarrow \{0,1\}^n \\g & : \text{decoder } \\\end{aligned}\ ] ] then encryption and decryption are as follows .the ciphertext , , is calculated as where denotes concatenation .let , be the truncation function which returns the first bits of .decryption of the ciphertext , is shown below , there are three main parts in the design of this system . in particular , we first use concatenation of a random vector and a mixing matrix to increase the entropy of the input to the encoding .this helps protect against known plaintext attacks .then we create a main channel that is a , and we employ a stochastic encoding to correct for the corresponding errors .finally we use a secret key matrix , and public random vector to create a `` noisier '' channel for the eavesdropper .the new system is in fig .[ prng ] . from an information theoretic point of view , since the error correction will compensate for the added randomness , this system is not secure , as the eavesdroppers channel is deterministic . from herewe take the viewpoint of conventional cryptography , that solving for the secret key is computationally intractable . in particular , this is an instance of the learning parity with noise ( lpn ) problem , which is provably np - hard .furthermore the lpn problem is a natural choice for stochastic encoding techniques .this follows since solving for the secret key , , is equivalent to decoding a random linear block code .thus we are combining a computationally hard problem with an information theoretic based security method .we have now made a trade off on assumptions about the eavesdropper s channel , to one on the ability to solve a computationally hard problem .the latter is widely accepted in conventional cryptography .a motivation for this paper , was to create a lightweight cryptographic system based on concepts from physical layer security .two potential issues that will increase the complexity of using this system are ; 1 ) the creation of satisfactory and cryptographically secure random variables , and 2 ) proper choice of coding parameters .creation of cryptographically secure pseudo - random numbers is a difficult problem , and in this security system they are directly applied to plaintext .this leaves an opening for attacks on the pseudo - random number generator ( prng ) .furthermore a source of public randomness is assumed .this leaves the system open to man in the middle attacks .thus we must include measures to guarantee availability and authenticity of the public random vector .in addition , we must choose adequate coding parameters for the length of the vector , and the stochastic encoder , .a thorough analysis should be performed to find a characterization of the eavesdropper s channel with regards to wyner s wiretap model . 
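as a toy illustration of the hardness assumption invoked above , the sketch below generates one lpn - style observation from the quantities in the notation list : the eavesdropper sees the public vector u together with the noisy parities of u against the secret key matrix s , and recovering s from many such pairs amounts to decoding a random linear code . the dimensions and the noise rate are arbitrary placeholders , not parameters proposed in the paper .

import numpy as np

rng = np.random.default_rng(0)
k, n, p = 64, 128, 0.125                         # illustrative dimensions and noise rate

S = rng.integers(0, 2, size=(k, n))              # secret key matrix, shared by the intended users
u = rng.integers(0, 2, size=k)                   # public Bernoulli(1/2) random vector, fresh per use
v = (rng.random(n) < p).astype(int)              # unknown Bernoulli(p) noise vector

# what the eavesdropper learns about the key: u and the noisy parities u*S xor v;
# recovering S from many such (u, parity) pairs is the learning-parity-with-noise problem
noisy_parities = (u @ S + v) % 2
print(noisy_parities[:16])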
in particular , assuming the randomness in is cryptographically secure , we say the main channel is a channel .then we may find a as large as possible and base the construction of on the assumption that the eavesdropper s channel is a .observe choosing large , increases the rate of our stochastic encoder , which reduces the security of our system .thus we must find an upper limit for such that the resulting parameters for and ensure it is computationally infeasible for the eavesdropper to recover the secret key .in this paper , one of the concerns in basing a security system on assumptions about the eavesdropping capabilities of an unintended user has been adressed .an alternative has been presented which uses one of the main principles of wyner s paper , security through stochastic encoding .it is a convenient solution , in the sense that it does not require any changes in infrastructure . in particular, it can be added on at the application layer .it is noted , that this is a toy example , and used to illustrate a point .a valid counter argument to the solution proposed in this paper , is that there is still an assumption made on the eavesdropper s capabilities in using this method .the onus has been transferred from relying on physically acquiring the signal to computational limitations . in regard to this argument , this paper than serves two purpose .first , it shows that care must be taken in assumptions about the eavesdropper .in addition , for a simple case it illustrates how to calculate the effect of incorrect assumptions , and engineer a system within certain secrecy tolerances .the author would mainly like to thank prof .aleksandar kavcic for his patience , and assistance in writing this technical paper. m. j. mihaljevic .an approach for light - weight encryption employing dedicated coding , _globecom _ , ieee , pp .874 - 880 , 2012 i. csiszar and j. korner .broadcast channels with confidential messages ._ information theory , ieee transactions on _ , 24(3):339348 , may 1978 .
this paper highlights security issues that can arise when incorrect assumptions are made about the capabilities of an eavesdropper . in particular , we analyze a channel model based on a split binary symmetric channel ( bsc ) . the corresponding security parameters are chosen based on this channel model and on assumptions about the eavesdropper s capabilities . the restrictions on the eavesdropper s capabilities are then gradually relaxed , and the resulting loss of security is quantified . an alternative is then presented that is based on stochastic encoding and on creating artificially noisy channels through the use of preshared keys . the artificial channel is constructed through a deterministic process that is computationally intractable to reverse .
in the present year 2000 we are celebrating the 100th birthday of quantum theory and the 75th birthday of quantum mechanics .thus it took only 25 years from the first inception of the new theory until its final consolidation .quantum field theory is almost as old as quantum mechanics .but the formulation of a fully consistent synthesis of the principles of quantum theory and classical relativisitic field theory has been a long and agonizing process and , as a matter of fact , has not yet come to a satisfactory end , in spite of many successes . the best approximation to nature in the microscopic and relativistic regime of elementary particle physics which we presently have , the so called _ standard model _, does not yet have the status of a mathematically consistent theory .it may be regarded as an efficient algorithm for the theoretical treatment of certain specific problems in high energy physics , such as the perturbative calculation of collision cross sections , the numerical analysis of particle spectra etc .yet nobody has been able so far to prove or disprove that the model complies with _ all _ fundamental principles of relativistic quantum physics .this somewhat embarassing situation is not so widely known .it is therefore gratifying that the clay mathematics institute has recently drawn attention to it by endowing a price of 1.000.000 ] are given by it then follows from the preceding proposition and the basic relations of tomita takesaki theory that the net on is local ( operators localized in disjoint intervals commute ) , and stable ( being a ground state for ) .thus , given two algebras in suitable relative position , one can construct a full chiral quantum field theory .( in a similar way one sees that any modular intersection fixes a conformally invariant quantum field theory on the compactified light ray s . )local nets on + in order to obtain in a similar manner local nets on higher dimensional space times , one has to proceed from a larger set of algebras in appropriate relative positions . in the case of two dimensional minkowski space , one starts from three algebras , forming two half sided modular inclusions . 
by the preceding proposition onethen has two unitary groups , where are interpreted as light cone coordinates of .these groups are assumed to commute , in this way one obtains a unitary positive energy representation of the translations on which the modular group of acts like a lorentz transformation , one then can proceed as in the preceding case and define algebras for wedge shaped regions of the form , setting the algebras associated with diamonds , cf .figure 3 , are obtained by setting in this way one arrives at a local , poincar covariant net on where describes the vacuum vector .local nets on , + the construction of local nets from a few algebras was recently extended to three and four dimensional minkowski space in .the crucial and difficult step in this approach is the formulation of conditions which guarantee that the modular groups affiliated with the algebras and the underlying vector generate representations of the poincar group .any family of algebras in suitable modular positions fixes a local , net on with vacuum state .we refrain from giving here the precise conditions on the algebras and only note that , in analogy to the cases discussed before , the corresponding modular groups generate representation of with positive energy .the algebras can consistently be assigned to wedge regions in and the local algebras associated with diamonds are defined by taking intersections .the converse of this statement is a well known theorem by bisognano and wichmann , cf .also .these intriguing results should admit a generalization to other space time manifolds , cf .also for a related approach .moreover , they seem to be of relevance for the classification of local nets and are possibly a step towards a novel , completely algebraic approach to the construction of local nets .the preceding account of recent results in aqft illustrates the role of this approach in relativistic quantum field theory : it is a concise framework which is suitable for the development of new constructive schemes , the mathematical implementation of physical concepts and ideas , the elaboration of general computational methods and the clarification of the relation between different theories as well as their structural analysis and classification .so this framework complements the more concrete approaches to relativistic quantum field theory and thereby contributes to the understanding and mathematical consolidation of this important area of mathematical physics .99 a. jaffe , constructive quantum field theory , p. 111 in : _ mathematical physics 2000_ , a. fokas et al .eds . , imperial college press 2000 r. haag and d. kastler , ( 1964 ) 848 d. buchholz and r. haag , ( 2000 ) 3674 d. buchholz , current trends in axiomatic field theory , preprint hep - th/9811233 s. coleman , phys .* d11 * ( 1975 ) 2088 m. lscher , nucl .* b326 * ( 1989 ) 557 k. intriligator and n. seiberg , nucl .b , proc .* 45bc * ( 1996 ) 1 w. driessler and j. frhlich , ann .henri poincar * 27 * ( 1977 ) 221 h .- j .borchers and j. yngvason , ( 1992 ) 15 r. haag , _ local quantum physics _ ,2nd revised ed . , springer ( 1996 ) r.m .wald , _ quantum field theory in curved spacetime and black hole thermodynamics _, chicago university press 1994 r. brunetti , k. fredenhagen and m. khler , ( 1996 ) 633 d. buchholz , o. dreyer , m. florig and s.j .summers , ( 2000 ) 475 c.j .fewster , class . quantum grav .* 17 * ( 2000 ) 1897 h. sahlmann and r. verch , preprint math - ph/0002021 k. fredenhagen and j. hertel , ( 1981 ) 555 r. brunetti and k. 
fredenhagen , ( 2000 ) 623 m. dtsch and k. fredenhagen , ( 1999 ) 71 g. scharf , _ finite quantum electrodynamics : the causal approach _ , 2nd ed ., springer 1995 d. buchholz , phys .lett . * b174 * ( 1986 ) 331 d. buchholz , m. porrmann and u. stein , phys .* b267 * ( 1991 ) 377 d. buchholz , on the manifestations of particles , p. 177in : _ mathematical physics towards the 21st century _ , r.n .sen and a. gersten eds . , ben gurion university of the negev press 1994 d. buchholz , ( 1990 ) 631 m. porrmann , the concept of particle weights in local quantum field theory , phd thesis , universitt gttingen 2000 h. araki and r. haag , ( 1967 ) 77 d. buchholz and m. porrmann , ann .henri poincar * 52 * ( 1990 ) 237 d. iagolnitzer , _ scattering in quantum field theories : the axiomatic and constructive approaches _ , princeton university press 1993 e. witten , adv .* 2 * ( 1998 ) 253 m. bertola , j. bros , u. moschella and r. schaeffer , ads / cft correspondence for n point functions , preprint hep - th/9908140 k .- h .rehren , ann .henri poincar * 1 * ( 2000 ) 607 d. buchholz , j. mund and s.j .summers , transplantation of local nets and geometric modular action on robertson walker space times , preprint hep - th/0011237 h.w .wiesbrock , ( 1993 ) 83 ; erratum : ( 1997 ) 683 h.w .wiesbrock , ( 1997 ) 203 h.w .wiesbrock , ( 1998 ) 269 r. khler and h.w .wiesbrock , modular theory and the reconstruction of four dimensional quantum field theories , preprint , to appear in d. buchholz , c. dantoni and k. fredenhagen , ( 1987 ) 123 j.j .bisognano and e.h .wichmann , ( 1975 ) 985 ; ( 1976 ) 303 h.j .borchers , ( 2000 ) 3604
algebraic quantum field theory is an approach to relativistic quantum physics , notably the theory of elementary particles , which complements other modern developments in this field . it is particularly powerful for structural analysis but has also proven to be useful in the rigorous treatment of models . in this contribution a non technical survey is given with emphasis on interesting recent developments and future perspectives . topics covered are the relation between the algebraic approach and conventional quantum field theory , its significance for the resolution of conceptual problems ( such as the revision of the particle concept ) and its role in the characterization and possibly also construction of quantum field theories with the help of modular theory . the algebraic approach has also shed new light on the treatment of quantum field theories on curved spacetime and made contact with recent developments in string theory ( algebraic holography ) .
collective behaviour of interacting _ intelligent _ agents such as birds , insects or fishes shows highly non - trivial properties and sometimes it seems to be quite counter - intuitive . as well - known , many - body systems having a lot of _ non - intelligent _ elements , for instance , spins ( tiny magnets in atomic scale length ) , particles , random - walkers etc .also show a collective behaviour like a critical phenomenon of order - disorder phase transitions with ` spontaneous symmetry breaking ' in spatial structures of the system .up to now , a huge number of numerical studies in order to figure it out have been done by theoretical physicists and mathematicians .they attempted to describe these phenomena by using some probabilistic models and revealed the ` universality class ' of the critical phenomena by solving the problem with the assistance of computer simulations .of course , the validity of the studies should be checked by comparing the numerical results with the experimental findings . if their results disagree with the empirical data ,the models they used should be thrown away or should be modified appropriately .on the other hand , for the mathematical modeling of many - body systems having interacting _ intelligent _ agents ( animals ) , we also use some probabilistic models , however , it is very difficult for us to evaluate the modeling and also very hard to judge whether it looks like _ realistic _ or not due to a lack of enough empirical data to be compared .one of the key factors for such non - trivial collective behaviour of both _ non - intelligent _ and _ intelligent _ agents is obviously a ` competition ' between several different ( and for most of the cases , these are incompatible ) effects .for instance , the ising model as an example of collective behaviour of _ non - intelingent _ agents exhibits an order - disorder phase transition by competition between the ferromagnetic interactions between ising spins ( ` energy minimization ' ) and thermal fluctuation ( ` entropy maximization ' ) by controlling the temperature of the system . on the other hand , as a simplest and effective algorithm in computer simulations for flocks of _ intellingent _ agents , say , animals such as starlings , the so - called boids founded by reynolds has been widely used not only in the field of computer graphics but also in various other research fields including ethology , physics , control theory , economics , and so on . the boids simulates the collective behaviour of animal flocks by taking into account only a few simple rules for each interacting _ intelligent _ agent . however , there are few studies to compare the results of the boids simulations with the empirical data .therefore , the following essential and interesting queries still have been left unsolved ; * what is a criterion to determine to what extent the flocks seem to be _ realistic _ ? * is there any quantity ( statistics ) to measure the _ quality _ of the artificial flocks ? 
from the view point of ` engineering ' , the above queries are ( in some sense ) not essential because their main goal is to construct a useful algorithm based on the collective behaviour of agents .however , from the natural science view points , the difference between empirical evidence and the result of the simulation is the most important issue and the consistency is a guide to judge the validity of the computer modeling and simulation .recently , ballerini _ at al _ succeeded in obtaining the data for such collective animal behaviour , namely , empirical data of starling flocks containing up to a few thousands members .they also pointed out that the angular density of the nearest neighbors in the flocks is not uniform but apparently biased ( it is weaken ) along the direction of the flock s motion . with their empirical findings in mind , in this paper , we examine the possibility of the boids simulations to reproduce this _ anisotropy _ and we also investigate numerically the condition on which the anisotropy emerges . this paper is organized as follows . in the next section ,we explain the empirical findings by ballerini _et al _ and introduce a key concept _ anisotropy _ and a relevant quantity -value . in section 3 , the boids modeling and setting of essential parameters in our simulations are explicitly explained .the results are reported in section 4 .the last section provides concluding remarks .in this section , we briefly review the measurement of the realistic flocks and the evaluation of the empirical data by ballerini _ et al _ .they measured each bird s position in the flocks of starling ( _ sturnus vulgaris _ ) in three dimension . to get such 3d data , they used ` stereo matching ' which reconstructs 3d - object from a set of stereo photographs . from these data, they calculated the angular density of the nearest neighbours in the flock .they measured the angles ( , ) , where stands for the ` latitude ' of the nearest neighbour for each bird measured from the direction of the motion of the flock , whereas denotes ` longitude ' which specifies the position of the nearest neighbour for each bird around the direction of flock s motion , for all individuals in the flock and made the 2d - map of angular density distribution using the so - called ` mollweide projection ' .their figure clealy shows that the density is not uniform but obviously biased .for instance , we find from the figure that the dinsity around and are extremely low in comparison with the density in the other directions .the property of the biased distribution due to the absence of the birds along the direction of the flock s motion is referred to as _ anisotropy _the main goal of this paper is to reveal numerically that the artificial flock by the boids exhibits the anisotropy as the realistic flock shows . to quantify the degree of the anisotropy, we use a useful statistics ( a kind of ` order parameters ' in the research field of statistical mechanics ) introduced in the following subsections .ballerini _ et al _ also introduced a useful indicator , what we call -value .the -value is calculated according to the following recipe .let be an unit vector pointing in the direction of the -nearest neighbour of the bird and let us define the projection matrix in terms of the as follows . where is the number of birds in the flock .then , the -value is given by where denotes the normalized eigenvector corresponding to the smallest eigenvalue of the projection matrix . 
from the definition ,the coincides with the direction of the lowest density in the flock .the vector appearing in the equation ( [ eq : def_gamma ] ) means the unit vector of flock s motion .the bracket means the average over the ensembles of the flocks .the -value for the uniform distribution of the position for a given vector , namely , the for is easily calculated as where we used .therefore , the distribution of the -nearest neighbours has anisotropic structure when the -value is larger than , namely the condition for the emergence of the anisotropy is explicitly written by by measuring this -value for artificial flock simulations , one can show that the anisotropy also emerges in computer simulations . to put it into other words ,the system of flocks is spatially ` symmetric ' for , whereas the symmetry is ` spontaneously ' broken for .this ` spontaneous symmetry breaking ' is nothing but the emergence of anisotropy . in the following sections , we carry out the boids simulations and evaluate the anisotropy by the -value .then , we find that the ` spontaneous symmetry breaking ' mentioned above actually takes place by controlling the essential parameters appearing in the boids .to make flock simulations in computer , we use the so - called boids which was originally designed by reynolds .the boids is one of the well - known mathematical ( probabilistic ) models in the research fields of cg and animation .actually , the boids can simulate very complicated animal flocks or schools although it consists of just only three simple interactions for each agent in the aggregation : 1 . * cohesion * : making a vector of each agent s position toward the average position of local flock mates .* alignment * : making a vector of each agent s position towards the average heading of local flock mates and keeping the velocity of each agent the average value of flock mates .* separation * : controlling the vector of each agent to avoid the collision with the other local flock mates .it is important for us to bear in mind that ` local flock mates ' mentioned above denotes the neighbours within the range of views for each agent .each agent decides her ( or his ) next migration by compounding these three interactions . in our boids simulations ,each agent is defined as a mass point and specified by a set of 3d - coordinate , an unit vector of motion , and the speed .we define each agent s view as a sphere with a radius without any blind corner .we also define the ` separation sphere ' with a radius and the distance between the nearest neighbours is specified by the length ( see figure [ fig : range ] ) .the interaction of ` separation ' is switched on if and only if the is smaller than the .some other essential parameters appearing in our boids simulations and the setup are also explicitly given as follows .* * field of simulations : * three dimensional open space without any gravity or any air resistance .moreover , there is no wall and no ground surface . * * the number of agents in the flock : * the system size of simulations is * * initial condition on the speed of each agent : * . ** initial condition on the location of each agent : * all agents are distributed in a sphere with radius . ** the shape and the range of each agent s view : * a sphere with radius . * * the shape and the range of separation : * a sphere with radius . 
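since the defining equations are not reproduced above , the sketch below follows the standard reading of the anisotropy statistic ( the gamma value of ballerini et al . ) : the projection matrix is the average outer product of the unit vectors pointing from each agent to its nearest neighbour , the vector w is the eigenvector belonging to the smallest eigenvalue of that matrix , and the statistic is the squared projection of w onto the unit vector of the flock s motion , to be compared with the isotropic reference value 1/3 . the ensemble average over flocks is replaced here by a single snapshot , and the order of the neighbour enters as a parameter .

import numpy as np

def gamma_value(positions, velocities, order=1):
    """anisotropy statistic for one flock snapshot; positions, velocities are (N, 3) arrays"""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    idx = np.argsort(dists, axis=1)[:, order - 1]          # index of each agent's n-th neighbour
    u = positions[idx] - positions
    u /= np.linalg.norm(u, axis=1, keepdims=True)          # unit vectors towards the neighbours
    M = (u[:, :, None] * u[:, None, :]).mean(axis=0)       # 3x3 projection matrix
    eigvals, eigvecs = np.linalg.eigh(M)                   # eigenvalues in ascending order
    w = eigvecs[:, 0]                                      # direction of lowest angular density
    v = velocities.mean(axis=0)
    v /= np.linalg.norm(v)                                 # unit vector of the flock's motion
    return float((w @ v) ** 2)                             # isotropic reference value: 1/3

# sanity check: an isotropic cloud of points should give a value scattering around 1/3
rng = np.random.default_rng(1)
pos = rng.normal(size=(500, 3))
vel = np.tile([1.0, 0.0, 0.0], (500, 1))
print(gamma_value(pos, vel))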
for the above setting of the parameters, we shall implement two types of programming codes .one is a programing code for a single simulation ( ` ss ' for short ) , and another code is for multiple simulations ( ` ms ' for short ) .the ss runs in the gui ( graphical user interface ) and it shows us a shape of the flock in real time , whereas the ms enables us to carry out a number of simulations with different initial conditions . by controling the three essential interactions , namely , ` cohesion ' , ` alignment ' and ` separation ' mentioned above , we obtain four different aggregations having different collective behaviours .each behavour of the aggregations is monitored ( observed ) by the ss . to specify the aggregation process of the flocks ,we define the update rule of the vector of movement for each agent by where means the time step of the update and denotes the -norm of a vector . is a vector pointing to the center of mass from each agent s position . denotes a vector to be obtained by averaging over the velocities of all agents . means a vector pointing to the direction of the movement of each agent to be separated from her ( or his ) nearest neighbouring mate .therefore , the aggregation of the flock is completely specified by the weights of the above vectors , namely , . among all possible combinations of these weights, we shall pick up typical four cases .each property and the shape of each aggregation are explained as follows . * _ case 1 _ ( * crowded aggregation ) * : the aggregation obtained by controlling the interaction of ` cohesion ' much stronger than the others , namely , . * _ case 2 _ ( * spread aggregation ) * : the aggregation obtained by controlling the interaction of ` separation ' much stronger than the others , namely , . * _ case 3 _ ( * synchronized aggregation ) * : the aggregation obtained by controlling the interaction of ` alignment ' much stronger than the others , namely , . * _ case 4 _ ( * flock aggregation ) * : this aggregation obtained by adjusting every interactions appropriately , namely , . using the ms , we simulate each aggregation for times for different initial conditions . in each simulation , we measure the angular distribution and then the value of is calculated .the number of crash is also updated when the coordinate of each agent is identical to ( is shared with ) the other agents .we start each measurement from the time point at which the total amount of the change in every agent s speed is close to through the 80 turns of the update . in the next section ,we explain the details of the result .in association with the each aggregation , we evaluate the -value with standard deviation and the average number of crashes for 200 independent runs of the boids simulations . in following ,we summarize the results . * _ case 1 _ ( * crowded aggregation ) * : the typical behaviour of the flock and the angular distribution are shown in the upper left panel in figure [ fig : fg3 ] and figure [ fig : fg4 ] , respectively .the -value is with standard deviation and the average number of crashes is . * _ case 2 _ ( * spread aggregation ) * : the typical behaviour of the flock and the angular distribution are shown in the upper right panel in figure [ fig : fg3 ] and figure [ fig : fg4 ] , respectively . the -value is with standard deviation and the average number of crashes is . 
*_ case 3 _ ( * synchronized aggregation ) * : the typical behaviour of the flock and the angular distribution are shown in the lower left panel in figure [ fig : fg3 ] and figure [ fig : fg4 ] , respectively .the -value is with standard deviation and the average number of crashes is . *_ case 4 _ ( * flock aggregation ) * : the typical behaviour of the flock and the angular distribution are shown in the lower right panel in figure [ fig : fg3 ] and figure [ fig : fg4 ] , respectively . the -value is with standard deviation and the average number of crashes is . - and -planes ) are shown ., title="fig:",width=264 ] - and -planes ) are shown ., title="fig:",width=264 ] - and -planes ) are shown ., title="fig:",width=264 ] - and -planes ) are shown ., title="fig:",width=264 ] the aggregation of _ case 4 _ ( ` flock aggregation ' ) has the highest -value among the four cases and its angular density clearly shows a lack of nearest neighbours along the direction of flock s motion leading to an anisotropy . for all aggregations except for the _ case 4 _ ,the -values are lower than , namely , they have no anisotropy . from these results, we find that boids computer simulations having appropriate weights shows anisotropy structures as real flocks exhibit .it is also revealed that one can evaluate to what extent an arbitrary flock simulation is close to real flocks through the -value . from the results obtained here , we might have another question , namely , it is important for us to answer the question such as whether the aggregation having a higher -value than the _ case 4 _ seems to be more _ realistic _ than the _ case 4 _ or not . to answer the question ,we carry out the simulations of the flock aggregation which has a higher -value ( _ case 5 _ ) than the _ case 4_. the results are summarized as follows . * _ case 5 _ ( * crowded aggregation ) * : the angular distribution is shown in figure [ fig : fg5 ] . the -value is with standard deviation and the average number of crashes is .-value than the _ case 4_. , width=283 ] obviously , the above aggregation has the highest -value leading to the strongest anisotropy among the five cases .however , it is hard for us to say that it is an _optimal flock _ because the number of cashes is also the highest ( times ) and to make matter worse , the number itself is apparently outstanding .this result tells us that an aggregation having much stronger anisotropic structures is not always a better flock .the above result is reasonably accepted because the -value is calculated from the angular distribution of nearest neighbours without any concept of the _ distance _ between agents .therefore , the flock having a dense network might have highly risks of crashes more than the sparse network . for this reason , in order to judge whether a given aggregation has a better flock behaviour or not , we should use the other criteria which take into account the distance between nearest neighbours . inspired by the empirical data analysis by ballerini _et al _ , we finally calculate the -value as a function of the order of the neighbour . the result is shown in figure [ fig : fg6 ] .-value as a function of the neighbour in the real flock ., width=302 ] in this figure , denotes the order of the neighbour , for instance , or means the nearest neighbour , the next nearest neighbour , respectively .the figure shows a similar behaviour to the corresponding plot in the reference , that is , the -values monotonically decrease as increases and they converge to beyond . 
this result might be a justification to conclude that our boids simulations having appropriate weight vectors actually simulate a _ realistic _ flock .in this paper , we showed that the anisotropy observed in the empirical data analysis also emerges in our boids simulations having appropriate weight vectors . from the -value we calculated, one can judge wheter an optional aggregation behaves like a _ real flock _ or not .the system of flocks is spatially ` symmetric ' for , whereas the symmetry is ` spontaneously ' broken for .we found from the behaviour of ` order parameter ' that this ` spontaneous symmetry breaking ' is nothing but the emergence of anisotropy . as well - known , there are some conjectures on the origin of the emergence of anisotropy .for instance , the effect of bird s vision is one of the dominant hypotheses .in fact , real starlings have lateral visual axes and each of the starlings has a blind rear sector .if all individuals in the flock move to avoid their nearest neighbours which are hidden in their blind sectors , the effect of the blind sector is more likely to be a factor to emerge the anisotropy of nearest neighbours in the front - rear directions .however , our result proved that this hypothesis is _ not always correct _ because agents in our simulation have no blind sector of their views .nevertheless , we found that our flock aggregation has an anisotropy of the nearest neighbours .the result means that the agent s blind as an effect of vision is _ not necessarily required _ to produce the anisotropy and much more essential factor for the anisotropy is _ the best possible combinations of three essential interactions _ in the boids .+ we hope that these results might help us to consider the relevant link between boids simulations and empirical evidence from real world .we were financially supported by grant - in - aid scientific research on priority areas _ ` deepening and expansion of statistical mechanical informatics ( dex - smi ) ' _ of the mext no .one of the authors ( ji ) was financially supported by insa ( indian national science academy ) - jsps ( japan society of promotion of science ) bilateral exchange programme .he also thanks saha institute of nuclear physics for their warm hospitality during his stay in india .m. ballerini , n. cabibbo , r. candelier , a. cavagna , e. cisbani , i. giardina , v. lecomte , a. orlandi , g. parisi , a. procaccini , m. viale and v. zdravkovic , _ interaction rulling animal collective behaviour depends on topological raher than metric distance , evidence from a field study _ , _ proceedings of the national academy of sciences usa _ * 105 * , pp.1232 - 1237 ( 2008 ). a. cavagna , i. giardina , a. orlandi , g. parisi , a. procaccini , m. viale and v. zdravkovic , _ the starflag handbook on collective animal behaviour : part ii , empirical methods _ , _ animal behaviour _ * 76 * ,issue 1 , pp237 - 248 ( 2008 ) .
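for completeness , the following is one minimal reading of the weighted update rule of section 3 : the three steering vectors are computed from the local flock mates inside the view sphere , combined with the weights for cohesion , alignment and separation , and the result is normalized to a unit direction travelled at fixed speed . the exact normalization and speed handling in the ss / ms codes used above may differ ; this sketch only fixes the ideas , and the weights in the final comment are hypothetical .

import numpy as np

def boids_step(pos, vel, w_coh, w_ali, w_sep, r_view=5.0, r_sep=1.0, speed=1.0, dt=1.0):
    """one synchronous update of all agents; pos and vel are (N, 3) arrays"""
    new_vel = vel.copy()
    for i in range(pos.shape[0]):
        d = np.linalg.norm(pos - pos[i], axis=1)
        mates = (d > 0.0) & (d < r_view)                   # local flock mates inside the view sphere
        if not mates.any():
            continue
        v_coh = pos[mates].mean(axis=0) - pos[i]           # towards the average position of mates
        v_ali = vel[mates].mean(axis=0)                    # towards the average heading of mates
        close = (d > 0.0) & (d < r_sep)                    # mates inside the separation sphere
        v_sep = (pos[i] - pos[close]).sum(axis=0) if close.any() else np.zeros(3)
        steer = w_coh * v_coh + w_ali * v_ali + w_sep * v_sep
        norm = np.linalg.norm(steer)
        if norm > 0.0:
            new_vel[i] = speed * steer / norm              # fixed-speed motion along the new direction
    return pos + dt * new_vel, new_vel

# hypothetical usage (the weights actually used for the flock aggregation are not reproduced here):
# pos, vel = boids_step(pos, vel, 1.0, 1.0, 1.5)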
in real flocks , it was revealed that the angular density of nearest neighbors shows a strong _ anisotropic structure _ of individuals by very recent extensive field studies by ballerini et al [ _ proceedings of the national academy of sciences usa _ * 105 * , pp.1232 - 1237 ( 2008 ) ] . in this paper , we show that this empirical evidence in real flocks , namely , the structure of anisotropy also emerges in an artificial flock simulation based on the _ boids _ by reynolds [ _ computer graphics _ * 21 * , pp.25 - 34 ( 1987 ) ] . we numerically find that appropriate combinations of the weights for just only three essential factors of the boids , namely , ` cohesion ' , ` alignment ' and ` separation ' lead to a strong anisotropy in the flock . this result seems to be highly counter - intuitive and also provides a justification of the hypothesis that the anisotropy emerges as a result of self - organization of interacting intelligent agents ( birds for instance ) . to quantify the anisotropy , we evaluate a useful statistics ( a kind of _ order parameters _ in statistical physics ) , that is to say , the so - called -value defined as an inner product between the vector in the direction of the lowest angular density of flocks and the vector in the direction of the moving of the flock . our results concerning the emergence of the anisotropy through the -value might enable us to judge whether an arbitrary flock simulation seems to be _ realistic _ or not . + * keywords * : self - organization , anisotropy , boids , swarm intelligence simulation , collective behaviour +
in recent years , compressive sensing ( cs ) has influenced many fields in signal processing .basically , the theory states that if an unknown signal can be sparsely represented , only a few linear and non - adaptive measurements of the signal suffice to accurately reconstruct it . denoting the measurement vectors by , the measurement process can be compactly written as ^{\top } \bf{s } + \bf{z } = \bf{\phi } \bf{s } + \bf{z},\ ] ] where is the measurement matrix , and constitutes possible sampling errors . due to the reduced dimensionality ,reconstructing from the measurements is ill - posed in general , and can not be done by simply inverting . however, additional model assumptions on may help to find a solution . in this context ,the _ sparse synthesis - approach _ and the _ co - sparse analysis - approach _ have proven extremely useful . in the sparse synthesis approachit is assumed that a signal can be decomposed into a linear combination of only a few columns , called atoms , of a known dictionary with , i.e. with being the sparse coefficient vector .many algorithms for solving the synthesis problem exist , cf . for an extensive overview .the co - sparse analysis approach is a similar looking but yet very different alternative to tackle the cs problem .its underlying assumption is that a signal multiplied by an _ analysis operator _ with results in a sparse vector .if denotes a function that measures sparsity , the analysis model assumption is exploited via the analysis model has proven useful in the field of image reconstruction and we thus restrict ourselves to compressively sensed images here .our approach is motivated by the observation that learning the operator leads to an improved image reconstruction quality , compared to applying a finite difference operator that approximates the image gradient , known as total variation ( tv - norm ) regularization , .in contrast to the task of dictionary learning only a few analysis operator learning algorithms have been proposed in the literature so far , cf . , , , .furthermore , from image denoising it is known that the reconstruction accuracy can be further improved when the dictionary or operator is not only learned on some general and representative training set , but rather directly on the specific signal that has to be reconstructed , .these observations prompted us to combine the image reconstruction performance of the analysis approach together with the accuracy improvement capabilities of a learned operator .the principle of cs relies on the fact that the signal has a sparse representation in a _ given _ basis or dictionary that is universal for the considered signal class of interest . however , such universal dictionaries do not necessarily result in the sparsest possible representation , which is crucial for the recovery success . due to this , in the concept of blind compressive sensing ( bcs ) has been introduced , which aims at simultaneously learning the dictionary and reconstructing the signal , see also and for an extension of this idea .note that all these methods are based on the synthesis model and consider the problem of finding a suitable dictionary , while in this paper we focus on the analysis model .in this work we address the problem of signal reconstruction from compressively sensed data regularized by an adaptively learned analysis operator .the work of hawe _ et al . 
_ , which focuses on learning a global patch based analysis operator from noise free training samples , has already shown the superior performance of a learned operator compared to state - of - the - art analysis and synthesis based regularization , like e.g. k - svd denoising , in the context of classical image reconstruction problems .that is why we extend this idea and build on their work to utilize the learning process to obtain a signal dependent regularization of the inverse problem .since we are dealing with compressive measurements , our approach can be interpreted as an analysis - based bcs problem with no prior knowledge about the operator .we extend the algorithm proposed in , where the operator is learned by a geometric conjugate gradient ( cg ) method on the so - called oblique manifold , to our setting of simultaneous image reconstruction and operator learning .this approach allows us to compensate for various sampling noise models , i.e. gaussian or impulsive noise , by simply exchanging the data fidelity term .to summarize , the advantages of our approach are as follows : ( i ) the learning process allows to adaptively find an adequate operator that fits the underlying image structure .( ii ) there is no necessity to train the operator prior to the reconstruction .( iii ) different noise types are handled by simply exchanging the data fidelity term .our goal is to find a local analysis operator with simultaneously to the signal that has to be reconstructed from the compressive measurements . here, the vector denotes a vectorized image of dimension , with being the width and being the height of the image , respectively , obtained by stacking the columns of the image above each other .note that the analysis operator has to be applied to local image patches rather than to the whole image .we denote the binary matrix that extracts the patch centered at the pixel by .furthermore , practice has shown that the learning process is significantly faster if _ centered _, i.e. zero mean patches are considered .this can be easily incorporated by multiplying the vectorized patch with , where and are the identity operator and the matrix with all elements equal to one , respectively .we employ constant padding at the image borders , i.e. replicating the boundary pixel values . in the end, we globally promote sparsity with an appropriate function and write for the problem of finding a suitable analysis operator where denotes an admissible set , which implies some constraints on to avoid trivial solutions .we follow the considerations of the authors in , demanding that : the rows of have unit euclidean norm , i.e. , for , where denotes the transposed of the -row of . the analysis operator has full rank , i.e. . the mutual coherence of the analysis operator should be moderate .these constraints motivate to consider the set of full rank matrices with normalized columns , which admits a manifold structure known as the oblique manifold here , is the diagonal matrix whose entries on the diagonal are those of .since we require the rows of to have unit euclidean norm , we restrict to be an element of . to enforce the rank constraint ( ii )we employ the penalty function furthermore , the mutual coherence of the analysis operator , formulated in constraint ( iii ) , can be controlled via the logarithmic barrier function of the atoms scalar products , namely considerations concerning the usefulness of these penalty functions can be found in . 
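the explicit expressions for the rank penalty and the logarithmic barrier were lost from this copy of the text ; the sketch below shows one plausible form of each , consistent with the description above and with the operator - learning literature cited ( a scaled negative log - determinant of the gram matrix of the rows for the full - rank constraint , and a log - barrier on the squared scalar products of the unit - norm rows for the coherence constraint ) . the function names and the exact scaling factors are assumptions .

```python
import numpy as np

def normalize_rows(omega):
    """Project a full-rank matrix onto the constraint set by rescaling each
    row to unit Euclidean norm (constraint (i))."""
    return omega / np.linalg.norm(omega, axis=1, keepdims=True)

def rank_penalty(omega):
    """Full-rank penalty (constraint (ii)); assumed log-det form."""
    k, n = omega.shape
    gram = omega.T @ omega / k          # n x n, singular iff omega is rank deficient
    sign, logdet = np.linalg.slogdet(gram)
    return -logdet / (n * np.log(n))

def coherence_penalty(omega):
    """Log-barrier on the pairwise scalar products of the unit-norm rows,
    controlling the mutual coherence (constraint (iii))."""
    gram = omega @ omega.T
    iu = np.triu_indices(omega.shape[0], k=1)
    return -np.sum(np.log(1.0 - gram[iu] ** 2))
```

both penalties diverge exactly when the corresponding constraint is violated ( rank deficiency , respectively two parallel rows ) , which is what makes them usable inside an unconstrained smooth optimization over the manifold .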
to measure the sparsity of the analyzed patches , we use the differentiable sparsity promoting function where is a positive constant and represents the standard basis vector with the same length as .since we are interested in simultaneous operator learning and image reconstruction , we further introduce a data term , which measures the fidelity of the reconstructed signal to the measurements .the choice of depends on the error model , i.e. by using the error is assumed to be gaussian distributed . if the noise is sparsely distributed over the measurements , we set .this error model has also been utilized in to compensate for sparse outliers in the measurements . finally , combining the data term with the constraints and the sparsity promoting function , the augmented lagrangian optimization problem for adaptively learning the analysis operator with simultaneous image reconstruction consists of minimizing the cost subject to with the measurement matrix .the scalar denotes the number of extracted image patches .the parameter weights the fidelity of the solution to the measurements and the parameters control the influence of the two constraints .since the cost function is restricted to a smooth manifold , we follow and employ a conjugate gradient on manifolds approach to solve the optimization problem .the cg approach is scalable and converges fast in practice .it is thus well - suited to handle the high dimensional problem of simultaneous image reconstruction and operator learning .the challenges for developing the cg method are the efficient computation of the riemannian gradient , the step - size and the update directions . to that end, we employ the product manifold structure of considered as a riemannian submanifold of .to enhance legibility in the remainder of this section we denote the oblique manifold by ob .we further denote the tangent space at a point as , with being a tangent vector at . the riemannian gradient at is given by the orthogonal projection of the standard ( euclidean ) gradient onto the tangent space .the orthogonal projection of a matrix onto the tangent space is obtained by . using the product structure and denoting the partial derivatives of by and , respectively , the riemannian gradient of the cost function is in cg methods the updated search directions are linear combinations of the respective gradient and the previous search directions .the identification of different tangent spaces is done by the so - called parallel transport , which transports a tangent vector along a _geodesic _ to the tangent space . in the manifold setting geodesics can be considered as the generalization of straight lines .we denote the geodesic from along the direction as . regarding the product manifoldthe new iterates are computed by where denotes the step size that leads to a sufficient decrease of the cost function .the parallel transport along the geodesics in the product manifold is then given by we use a hybridization of the hestenes - stiefel ( hs ) and the dai yuan ( dy ) formula as motivated in to determine the update of the search direction . 
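the projection , geodesic and step - size formulas referred to above did not survive extraction . the sketch below gives the standard expressions for a product of unit spheres ( the rows of the operator ) , which is what the oblique - manifold description amounts to , together with a plain armijo backtracking loop of the kind used to pick the step size in the next paragraph . this is an illustrative reconstruction under those assumptions , not the authors' implementation ; the armijo constants are illustrative .

```python
import numpy as np

def project_tangent(omega, grad):
    """Orthogonal projection of a Euclidean gradient onto the tangent space
    at omega (unit-norm rows): remove, per row, the component along that row."""
    return grad - np.sum(grad * omega, axis=1, keepdims=True) * omega

def geodesic_step(omega, xi, t):
    """Move each unit-norm row along the great circle of the sphere in the
    direction of the corresponding row of the tangent vector xi."""
    new = omega.copy()
    for i in range(omega.shape[0]):
        h = xi[i]
        nh = np.linalg.norm(h)
        if nh > 1e-12:
            new[i] = np.cos(t * nh) * omega[i] + np.sin(t * nh) * h / nh
    return new

def armijo_step(cost, omega, s, xi_om, xi_s, g_om, g_s,
                t0=1.0, shrink=0.5, c1=1e-4, max_iter=30):
    """Backtracking line search on the product manifold: the operator moves
    along geodesics, the image estimate s along a straight line."""
    f0 = cost(omega, s)
    slope = np.sum(g_om * xi_om) + np.dot(g_s, xi_s)   # <grad, search direction>
    t = t0
    for _ in range(max_iter):
        om_new, s_new = geodesic_step(omega, xi_om, t), s + t * xi_s
        if cost(om_new, s_new) <= f0 + c1 * t * slope:
            break
        t *= shrink
    return t, om_new, s_new
```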
with the shorthand notations and , as well as and the manifold adaptions of these formulas are where denotes the standard inner product in the respective euclidean spaces .with the hybrid update formula the new search directions are given by in our implementation we use the well - known backtracking line search which is adapted to the manifold setting until the armijo condition is met .we name our method analysis blind compressive sensing ( abcs ) and briefly summarize the whole procedure in algorithm 1 . for further details concerning cg - methods on the oblique manifoldthe reader is referred to , , and .* algorithm 1 * abcs * input : * initial operator , noisy measurements , measurement matrix , parameters + * set : * , , , , perform backtracking line search to get step size update to , cf .compute compute , cf .compute new cg - search directions maximum # of iterations * output : * , measure the image reconstruction accuracy we use the peak signal - to - noise ratio and the mean structural similarity index ( _ mssim _ ) , with the same set of parameters as originally suggested and implemented in . throughout our experiments we use a patch size of ,i.e. and set , as larger values of do not enhance the reconstruction quality .we initialized to be a random matrix and normalized the rows to unit norm . with this initialization , convergence to a local minimumwas observed in all our experiments .the parameters for the constraints are set to and .the constant in the sparsity inducing function is chosen as .the parameter takes into account the size of the image as well as the operator size and reads , with adjusted according to the noise level as explained below and a normalization factor .we evaluate our method on the three images _ girl _ ( ) , _ barbara _ ( ) , and _ _ texture _ _ ) ] ( ) .the measurements are obtained by using the real valued noiselet transformation proposed in . in the first experimentwe show the robustness of the abcs algorithm to sampling noise which follows a gaussian distribution . for this purpose ,the measurements have been artificially corrupted by additive white gaussian noise with standard deviation .the data term in reads .we assume the noise level to be known and set . two measurement rates and are considered .table [ tab : denoising ] shows the reconstruction performance for different noise levels . for comparison we used the algorithm of ( nesta ) , with tv - norm regularization and optimized parameters .figure [ fig : res ] shows the reconstructed images from measurements and a noise level of .we also tested the algorithm proposed in ( tval3 ) with different parameters , which achieves results comparable to nesta . due to space limitations , detailed results are not listed here . in all settings ,the same measurements are used ..image reconstruction from measurements corrupted by additive white gaussian noise with standard deviation .the measurement rates are ( top ) and ( bottom ) .achieved psnr in decibels and mssim .[ cols="^,^,^,^,^,^,^,^ " , ] both experiments confirm that the adaptively learned operator leads to an accuracy improvement compared to the reconstruction quality obtained with a fixed finite difference operator . 
in particular , the structures in the _ barbara _ and _ texture _ image are better preserved by abcs .in this article we proposed an analysis based blind compressive sensing algorithm that simultaneously reconstructs an image from compressively sensed data and learns an appropriate analysis operator .this process is formulated as an optimization problem , which is tackled via a geometric conjugate gradient approach that updates both the operator and the image as a whole at each iteration .furthermore , the algorithm can be easily adapted to different noise models by simply exchanging the data fidelity term .
in this work we address the problem of blindly reconstructing compressively sensed signals by exploiting the co - sparse analysis model . in the analysis model it is assumed that a signal multiplied by an analysis operator results in a sparse vector . we propose an algorithm that learns the operator adaptively during the reconstruction process . the arising optimization problem is tackled via a geometric conjugate gradient approach . different types of sampling noise are handled by simply exchanging the data fidelity term . numerical experiments are performed for measurements corrupted with gaussian as well as impulsive noise to show the effectiveness of our method .
the torsional oscillation was discovered by howard and labonte ( 1980 ) as a travelling wave pattern superposed on the differential rotation . the waves propagate from the poles toward the equator and consist of prograde and retrograde zones with respect to the mean differential rotation profile ; the amplitude of the deviations is about 7 m / s . two waves ( two prograde and retrograde belts ) coexist in both hemispheres . the feature was completely unexpected at the time of discovery and several attempts have been made to check and interpret it . the empirical studies basically confirmed the existence of the phenomenon by using either surface measurements ( ulrich , 2001 ) or subsurface detection with gong and mdi data ( howe et al. , 2000 ; komm et al. , 2001 ; zhao and kosovichev , 2004 ) . as a result of these investigations the torsional oscillation can be regarded as a persistent feature which extends down to about 0.92 . the theoretical approaches were motivated by the similarity between the equatorward migrations of the shearing belts and the activity belts . this suggested that the activity cycle ( in particular spörer's law ) must have something to do with the torsional oscillation . the first description was published by yoshimura ( 1981 ) , who suggested a mechanism driven by a lorentz force wave as a by - product of the dynamo wave . recent models take into account the presence of the sunspots . in the model of petrovay and forgács - dajka ( 2002 ) the sunspots modify the turbulent viscosity in the convective zone , which leads to the modulation of the differential rotation . in the model proposed by spruit ( 2003 ) the sunspots exert a cooling effect on the surface , and this temperature variation results in geostrophic flows which would drive the torsional oscillation . the present work was motivated by these recent works , which suggested a possible connection between the sunspots and the torsional oscillation phenomenon . we would like to find any spatial correlation between the torsional wave and any sunspot feature . earlier works also indicated some spatial connections , but e.g. labonte and howard ( 1982 ) averaged the latitudinal magnetic activity distribution over a longer time , whereas zhao and kosovichev ( 2004 ) only indicated the location of the activity belt with no distribution information . the aim of the present work is to follow the temporal variation of the latitudinal distribution of sunspot parameters in comparison with the migration of the torsional waves .
as a first attempt three parameters were chosen for scrutiny . two of them may be trivial : the number and the total area of sunspots . the third parameter , the mean number of sunspots within the groups , characterizes the complexity of the sunspot groups , because it can also be expected that the more complex a sunspot group is , the more efficient is its interconnection with the local velocity fields . the sunspot data were taken from the most detailed sunspot database , the debrecen photoheliographic data ( dpd ) . this is the only material which contains the position and area data for each observable spot , even for the smallest ones , for each day . the temporal coverage of the dpd was partial at the time of this work , so , as a first attempt , we restricted the study to the years 1986 - 1989 ( győri et al. , 1996 ) . the latitudinal distributions were determined in such a way that 1 degree latitudinal stripes were considered , and all the mentioned parameters ( total number and total area of spots , as well as the mean number of spots per group ) were added up for all stripes and three months . a total amount of xxx spots were taken into account in the given period . to compare the obtained distributions with the torsional pattern one has to determine the latitudinal location of the torsional wave , i.e. the latitudes of the prograde / retrograde belts and the shearing zones . for this period the most suitable torsional data were provided by ulrich ( 2001 ) . in his figure 1 the shearing zones were reasonably well recognizable in this period . figure 1 shows the period 1986 - 89 taken from ulrich ( 2001 ) and the most probable lines of the shear zones ( dark regions indicate the prograde belts ) . in our further figures these lines were adopted to mark the migrating shear zones . the most obvious candidate may be the number of sunspots at a certain latitude . the numbers of all spots have been added up in 1 degree wide stripes and 3 - month periods in such a way that each sunspot group was taken into account at the time when it contained the largest number of spots . the resulting distributions were plotted onto the plot of the migrating zones , see figure 2 , where the temporal dimension has been stretched in order to minimize the overlap of the distribution curves . in this approach the sizes of the spots were omitted , only the number of the magnetic flux tubes was considered regardless of their strengths . the next possible candidate is the total area of the spots ( figure 3 ) . the procedure was the same as in the case of the sunspot numbers , all area data were added up by latitude stripes . sunspots were taken into account at the time when the total areas of their groups were the largest during their passage across the solar disc . in this approach the total strength was considered regardless of the size distribution . the third candidate was chosen to check the assumption that in the case of a certain sunspot - torsion interconnection the more complex sunspot groups could exert more efficient impacts on the ambient flows . the number of spots within the groups has been averaged for the latitude stripes and 3 - month periods , the groups being considered at the time of their largest extensions . the resulting distributions are displayed in figure 4 . it is remarkable in figures 2 and 3
that the distributions of both the spot number and the area are positioned such that the peaks of the curves ( if they have unambiguous peaks at all ) are close to the shear zones , but the bulges of the distributions are mostly situated in the faster belts . the curves of the number and the area are mostly different , as was expected , but the mentioned character seems to be prevalent in both cases . this behaviour may be the signature of a really functioning interconnection between the torsional pattern and the magnetic flux ropes . as for the complexity of the groups , no such trend can be recognized , so this feature ( the number of spots per group ) is apparently unimportant from this point of view . the above features are the first ( preliminary ) results of our project and they seem to be encouraging . in future work we intend to extend the temporal domain of the study and also to include some further possible sunspot parameters to reveal the nature of this interaction . the present work was supported by the grants otka t 37725 and esa pecs no. 98017 . one of the authors ( a.l. ) expresses his gratitude for the kind invitation and hospitality of the hvar observatory . * győri , baranyi , t. , csepura , g. , gerlei , o. , ludmány , a. : 1996 , debrecen photoheliographic data for the year 1986 , publ. debrecen obs. 10 , 1 - 61 . * howard , r. , labonte , b. j. : 1980 , _ astrophys. j. _ * 239 * , l33 - l36 . * howe , r. _ et al. _ : 2000 , _ astrophys. j. _ * 533 * , l163 - l166 . * komm , r. w. , hill , f. , howe , r. : 2001 , _ astrophys. j. _ * 558 * , 428 - 441 . * labonte , b. j. , howard , r. : 1982 , _ solar physics _ * 75 * , 161 - 178 . * petrovay , k. , forgács - dajka , e. : 2002 , _ solar physics _ * 205 * , 39 - 52 . * spruit , h. c. : 2003 , _ solar physics _ * 213 * , 1 - 21 . * ulrich , r. k. : 2001 , _ astrophys. j. _ * 560 * , 466 - 475 . * yoshimura , h. : 1981 , _ astrophys. j. _ * 247 * , 1102 - 1112 . * zhao , j. , kosovichev , a. g. : 2004 , _ astrophys. j. _ * 603 * , 776 - 784 .
the torsional oscillation is a well - established observational fact and there are theoretical attempts at its description , but no final explanation has yet been accepted . one of the possible candidates for its cause is the presence of sunspots modifying the streaming conditions . the present work focuses on the temporally varying latitudinal distribution of several sunspot features , such as the spot sizes and spot numbers . these features are different facets of the butterfly diagram . in fact , some weak spatial correlations can be recognized .
many physical and biological systems under multi - component control mechanisms exhibit scale - invariant features characterized by long - range power - law correlations in their output .these scaling features are often difficult to quantify due to the presence of erratic fluctuations , heterogeneity , and nonstationarity embedded in the output signals .this problem becomes even more difficult in certain cases : ( i ) when we can not probe directly the quantity of interest in experimental settings , i.e. , the measurable output signal is a linear or nonlinear function of the quantity of interest ; ( ii ) when measuring devices impose a linear or nonlinear filter on the system s output ; ( iii ) when we are interested not in the output signal but in a specific component of it , which is obtained through a nonlinear transform ( e.g. , the magnitude or the sign of the fluctuations in the signal ) ; ( iv ) when comparing the dynamics of different systems by applying nonlinear transforms to their output signals ; or ( v ) when pre - processing the output signal by means of linear or nonlinear filters before the actual analysis .thus , to understand the intrinsic dynamics of a system , in such cases it is important to correctly analyze and interpret the dynamical patterns in the system s output .conventional two - point correlation , power spectrum , and hurst analysis methods are not suited for nonstationary signals , the statistical properties of which change with time . to address this problem , detrended fluctuation analysis ( dfa ) method was developed to accurately quantify long - range correlations embedded in a nonstationary time series .this method provides a single quantitative parameter the scaling exponent to quantify the scale - invariant properties of a signal .one advantage of the dfa method is that it allows the detection of long - range power - law correlations in noisy signals with embedded polynomial trends that can mask the true correlations in the fluctuations of a signal .recent comparative studies have demonstrated that the dfa method outperforms conventional techniques in accurately quantifying correlation properties over a wide range of scales .the dfa method has been widely applied to dna , cardiac dynamics , human electroencephalographic ( eeg ) fluctuations , human motor activity and gait , meteorology , climate temperature fluctuations , river flow and discharge , electric signals , stellar x - ray binary systems , neural receptors in biological systems , music , and economics . in many of these applicationsthe main problem is to differentiate scaling features in a system s output which are inherent to the underlying dynamics , from the scaling features which are an artifact of nonstationarities or different types of transforms and filters . 
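for readers who want to reproduce the scaling estimates discussed in what follows , a minimal dfa implementation looks as follows . this is a sketch only : non - overlapping boxes , least - squares polynomial detrending of order `order` , and the scaling exponent obtained as the slope of log f ( n ) versus log n ; the box sizes and the white - noise example are illustrative choices , not the ones used in the paper .

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis of a 1-D signal x.
    Returns the fluctuation function F(n) for each box size n in `scales`."""
    y = np.cumsum(x - np.mean(x))               # integrated (profile) signal
    F = []
    for n in scales:
        n_boxes = len(y) // n
        rms = []
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            coeff = np.polyfit(t, seg, order)   # local polynomial trend (DFA-order)
            rms.append(np.mean((seg - np.polyval(coeff, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# example: white noise, for which the expected scaling exponent is 0.5
x = np.random.randn(2 ** 14)
scales = np.unique(np.logspace(1.2, 3.5, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

applying the same routine to a transformed signal ( e.g. its square , cube , or logarithm ) and comparing the fitted exponents is exactly the kind of comparison carried out in the remainder of this section .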
in two previous studieswe have examined how different types of nonstationarities such as superposed sinusoidal and power - law trends , random spikes , cut - out segments , and patches with different local behavior affect the long - range correlation properties of signals .here we use the dfa method to investigate how the scaling properties of noisy correlated signals change under linear and nonlinear transforms .further , ( i ) we test to see under what types of transforms ( filters ) it is possible to derive information about the scaling properties of the signal of interest before the transformation , provided we know the correlation behavior of the transformed ( filtered ) signal , and ( ii ) we probe the `` apparent '' scaling of three common transformation functions after applying the dfa method exponential , logarithmic and polynomial .we also evaluate the limitations of the dfa method under linear and nonlinear transforms . specifically , we consider the following : \(1 ) _ correlation properties of signals after transforms of the type _ : , where is a stationary signal with _ a priori _ known correlation properties .\(i ) _ linear transform _ : .transforms of this type are often encountered in physical systems .for example : ( a ) from the fluctuations in the acceleration of a particle ( measurable quantity ) , one can derive information about how the force ( quantity of interest ) acting on this particle changes in time without directly measuring the force : ; ( b ) in pnp - transistors a difficult to directly measure base ( input ) current ( quantity of interest ) is amplified hundreds of times , so that small fluctuations in may lead to significant ( and measurable ) changes in the collector ( output ) signal ( measurable quantity ) : , and ( c ) changes in the volume ( quantity of interest ) of an ideal gas can be determined from fluctuations in the temperature ( measurable quantity ) provided the pressure is kept constant : .\(ii ) _ nonlinear polynomial transform _ : , where and takes on positive integer values .for example : ( a ) from fluctuations in the current ( measurable quantity ) one can extract information about the behavior of the power lost as heat ( quantity of interest ) in a resistor : ; ( b ) measuring the temperature fluctuations of a radiating body the stefan s law defines the power emitted per unit area : .further , linear and nonlinear polynomial filters are also used to renormalize data series representing an identical quantity measured in different systems before performing correlation analysis , e.g. , ( i ) normalizing heart rate recordings from different subjects to zero mean and unit standard deviation ( linear filters ) , or ( ii ) extracting the absolute value ( nonlinear filter ) of the heartbeat fluctuations in datasets obtained from different subjects . in this studywe consider two examples of nonlinear polynomial filters quadratic and cubic filters which represent the class of polynomial filters with even and odd powers , and we investigate how these filters change the correlation properties of signals .since polynomial filters with even power wipe out the sign information in a signal , we expect quadratic and cubic filters to have a different effect . a recent study by y. ashkenazy _ et al . 
_ shows that the magnitude of a signal ( without sign information ) exhibits different correlation properties from that of the original signal .thus it is necessary to investigate how quadratic and cubic filters change the scaling properties of correlated signals .\(iii ) _ logarithmic filter _ : , is also widely used in renormalizing datasets obtained from different sources before comparative analysis .for example , to compare the dynamics of price fluctuations of different company stocks , which may have a different average price , one often first obtains the relative price returns ] .next , we consider a dfa box starting at the coordinate and ending at , where is proportional to the number of points in the box . for any value of can expand the function in a taylor series : .\label{eqnq1}\end{aligned}\ ] ] since this expansion converges , a finite polynomial function can accurately approximate the exponential function in each dfa box .we note that the dfa- method applied to above polynomial functions gives the scaling exponent ( see ) .thus , for any exponential function we find that the dfa scaling does not depend on the value of the offset parameter and depends only on the order of the polynomial fit in the dfa- procedure [ fig .[ trends3](b ) ] .( ii)_general logarithmic function _ : .first , we substitute the variable by : ] , we find that if , the logarithmic function in all dfa boxes is converging , and thus each box can be approximated by a polynomial function , leading to scaling exponent depending only on the order of the dfa- method [ fig .[ trends2 ] ] . when , for certain values of , the series in eq .( [ eqnq2 ] ) is diverging . since ] .first , we substitute the variable by : ] .we divide the variable in the exponential by , so that ( ) is in the interval $ ] , as considered in sec .[ sectrends ] .the next step of the dfa method is to divide the integrated signal into boxes of length . for dfa-1 ,the squared detrended fluctuation function in the box , , is where the parameters and are obtained by a linear fit to the integrated signal using least squares in the box .these two parameters can be obtained analytically , although their expressions are too long . to obtainthe squared detrended fluctuation function for the entire signal partitioned in non - overlapping boxes of length , we sum over all boxes and calculate the average value : here , the index in the sum ranges from to ( there are boxes of length in the signal of length ) . using the analytical expressions for and , can be presented analytically in the form : due to the complexity of and , the expression of is very complicated .however , as ( and usually , ) , one can expand in powers of to obtain : thus the dfa-1 scaling exponent is ( in agreement with the numerical simulation in sec .[ sectrends ] , fig .[ trends3 ] ) . in general, we can obtain in a similar way that , when dfa- with an order of polynomial fit is used .k. ivanova , t. p. ackerman , e. e. clothiaux , p. ch .ivanov , h. e. stanley , and m. ausloos , j. geophys .res . * 108*,4268 ( 2003 ) .e. koscielny - bunde , a. bunde , s. havlin , h. e. roman , y. goldreich , and h. j. schellnhuber , phys .lett . * 81 * , 729 ( 1998 ) .j. f. eichner , e. koscielny - bunde , a. bunde , s. havlin , and h. j. schellnhuber , phys .e * 68 * , 046133 ( 2003 ) .k. fraedrich and r. blender , phys .* 90 * , 108501 ( 2003 ) .m. pattantyus - abraham , a. kiraly , and i. m. janosi , phys .e * 69 * , 021110 ( 2004 ) .z. siwy , m. ausloos , and k. 
ivanova , phys .e * 65 * , 031907 ( 2002 ) .a. varotsos , n. v. sarlis , and e. s. skordas , phys .e * 67 * , 021109 ( 2003 ) .p. a. varotsos , n. v. sarlis , and e. s. skordas , phys .e * 68 * , 031106 ( 2003 ) .
when investigating the dynamical properties of complex multiple - component physical and physiological systems , it is often the case that the measurable system s output does not directly represent the quantity we want to probe in order to understand the underlying mechanisms . instead , the output signal is often a linear or nonlinear function of the quantity of interest . here , we investigate how various linear and nonlinear transformations affect the correlation and scaling properties of a signal , using the detrended fluctuation analysis ( dfa ) which has been shown to accurately quantify power - law correlations in nonstationary signals . specifically , we study the effect of three types of transforms : ( i ) linear ( ) ; ( ii ) nonlinear polynomial ( ) ; and ( iii ) nonlinear logarithmic [ filters . we compare the correlation and scaling properties of signals before and after the transform . we find that linear filters do not change the correlation properties , while the effect of nonlinear polynomial and logarithmic filters strongly depends on ( a ) the strength of correlations in the original signal , ( b ) the power of the polynomial filter , and ( c ) the offset in the logarithmic filter . we further apply the dfa method to investigate the `` apparent '' scaling of three analytic functions : ( i ) exponential [ , ( ii ) logarithmic [ , and ( iii ) power law [ , which are often encountered as trends in physical and biological processes . while these three functions have different characteristics , we find that there is a broad range of values for parameter common for all three functions , where the slope of the dfa curves is identical . we further note that the dfa results obtained for a class of other analytic functions can be reduced to these three typical cases . we systematically test the performance of the dfa method when estimating long - range power - law correlations in the output signals for different parameter values in the three types of filters and the three analytic functions we consider .
game theory provides a useful framework for describing the evolution of systems consisting of selfish individuals .the prisoner s dilemma game ( pdg ) as a metaphor for investigating the evolution of cooperation has drawn considerable attention . in the pdg, two players simultaneously choose whether to cooperate or defect .mutual cooperation results in payoff for both players , whereas mutual defection leads to payoff gained both . if one cooperates while the other defects , the defector gains the highest payoff , while the cooperator bears a cost .this thus gives a simply rank of four payoff values : .one can see that in the pdg , it is best to defect regardless of the co - player s decision to gain the highest payoff .however , besides the widely observed selfish behavior , many natural species and human being show the altruism that individuals bear cost to benefit others .these observation brings difficulties in evaluating the fitness payoffs for different behavioral patterns , even challenge the rank of payoffs in the pdg .since it is not suitable to consider the pdg as the sole model to discuss cooperative behavior , the snowdrift game ( sg ) has been proposed as possible alternative to the pdg , as pointed out in ref .the main difference between the pdg and the sg is in the order of and , as in the sg .this game , equivalent to the hawk - dove game , is also of much biological interest .however , the original pdg and sg can not satisfyingly reproduce the widely observed cooperative behavior in nature and society .this thus motivates numerous extensions of the original model to better mimic the evolution of cooperation in the real world .since the spatial structure is introduced into the evolutionary games by nowak and may , there has been a continuous effort on exploring effects of spatial structures on the cooperation .it has been found that the spatial structure promotes evolution of cooperation in the pdg , while in contrast often inhibits cooperative behavior in the sg . in recent years ,extensive studies indicate that many real networks are far different from regular lattices , instead , show small - world and scale - free topological properties .hence , it is naturally to consider evolutionary games on networks with these kinds of properties .an interesting result found by santos and pacheco is that scale - free networks provide a unifying framework for the emergence of cooperation " .so far , most studies of evolutionary games over networks are based on static network structure .however , it has been pointed out that the network structure may coevolve with the game , where each individual would choose its co - players to gain more benefits , inducing the evolution of their relationship network .some previous works about weighted networks suggest that it is indeed the traffic increment spurs the evolution of the network to maintain the system s normal and efficient functioning . from this perspective ,in the present paper we propose an evolutionary model with respect to the interplay between the evolutions of the game and the network for characterizing the dynamics of some social and economic systems . in our model , the sg is adopted for its more general representation of the realism and evolutionary cooperative behavior . 
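the payoff symbols and their orderings were lost from this copy of the text ; written out for the row player with the conventional labels ( temptation t , reward r , punishment p , sucker's payoff s , which are an assumption here since the original symbols are missing ) , the two games share the same bimatrix and differ only in the relative position of p and s :

\[
\text{pdg:}\;
\begin{array}{c|cc}
 & \mathrm{C} & \mathrm{D}\\ \hline
\mathrm{C} & R & S \\
\mathrm{D} & T & P
\end{array},
\quad T>R>P>S ;
\qquad
\text{sg:}\;
\begin{array}{c|cc}
 & \mathrm{C} & \mathrm{D}\\ \hline
\mathrm{C} & R & S \\
\mathrm{D} & T & P
\end{array},
\quad T>R>S>P .
\]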
since growth is a common feature among networked systems , we assume that the network continuously grows by adding new agents to the existent network on the basis of the payoff preferential attachment .we focus on the evolution of the network structure together with the emergence and persistence of cooperation .simulation results show that the obtained networks follow a power - law distribution , , with exponent tuned by a model parameter .the average distance of the network scales logarithmically with the network size , which indicates the network has a small - world effect .interestingly , the assortative mixing properties generated by our model demonstrate that the model can well mimic social networks . in parallel , with the extension of the network ,the density of cooperators increases and approaches a stable value , which gives a new explanation for the emergence and persistence of cooperation .we also explore the wealth distribution , where the so - called wealth is the accumulated payoff distribution of each individual .the pareto law is well reproduced by our model . at last , we provide analyses for the obtained scale - free network structures . the paper is arranged as follows . in the following section ,we describe the model in detail , in sec .iii , simulation results and correspondent analytical ones are provided , and in sec .iv , the work is concluded .let us introduce briefly the sg first .consider two drivers are trapped in two side of a snowdrift .each driver has two possible selections , either shoving the snowdrift ( cooperator - c ) or remaining in the car and do nothing ( defect - d ) . if both cooperate , they could be back home on time , so that each will gain a reward of , whereas mutual defection results in still blocked by the snowdrift and each gets a payoff .if only one driver shovels ( takes c ) , then both drivers can be back home .the driver taking d gets home with do nothing and hence gets a payoff , while the driver taking c gains a sucker " payoff of .thus , the rank of four payoff values is .following common practice , the sg is rescaled with , and , where is a tunable parameter ranging from to . hence, the payoffs can be characterized by a single parameter for convenient study .[ 0.80 ] ( color online ) .degree distributions and relevant cumulative degree distributions for different values of are shown in the top panel and the bottom , respectively .the network size is .each distribution is obtained by averaging over distinct simulation .the degree distribution is nearly independent of .,title="fig : " ] our model starts from nodes randomly connected with probability , each of which represents a player ( in the following , we fix and examine that it has no influence on our results in present work ) .initially , the nodes are randomly assigned to be either strategy c or d with percentages . players interact with all their neighbors simultaneously and get payoffs according to the preset payoff parameter .the total payoff of a certain player is the sum over all its encounters .then , every node randomly selects a neighbor at the same time for possible updating its strategy .the probability that follows the strategy of the selected node is determined by the total payoff difference between them , i.e. , },\ ] ] where and are the total payoffs of and at the moment of the encounter . here, characterizes noise " , including bounded rationality , individual trials , errors in decision , _etc_. 
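the adoption probability and the rescaled payoffs above lost their formulas in extraction . a minimal sketch , assuming the standard fermi form for the adoption probability and one common single - parameter rescaling of the sg ( r = 1 , s = 1 - r , t = 1 + r , p = 0 with 0 < r < 1 ) , reads :

```python
import numpy as np

def sg_payoff(s_x, s_y, r):
    """Payoff to player x (1 = cooperate, 0 = defect) against player y,
    using the assumed rescaling R = 1, S = 1 - r, T = 1 + r, P = 0."""
    table = {(1, 1): 1.0, (1, 0): 1.0 - r, (0, 1): 1.0 + r, (0, 0): 0.0}
    return table[(s_x, s_y)]

def adopt_probability(payoff_x, payoff_y, noise):
    """Fermi-like probability that x adopts the strategy of the randomly
    chosen neighbour y, driven by the difference of their total payoffs."""
    return 1.0 / (1.0 + np.exp((payoff_x - payoff_y) / noise))
```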
it should be noted that here plays a different role comparing with the cases of adopting the normalized payoff difference . in parallel , does not play the same role for different network sizes .since in our model , the network size gradually grows , it is not easy to keep the effect of unchanged at every time step . for simplicity ,we fix to during the evolution of the network . here, we adopt the synchronous updating rule . after each step that players update their strategies ,a new individual is added into the network with ( we fix for convenience ) links preferentially attached to existent nodes of higher payoffs , i.e. , where and are the total payoffs of and obtained in the interaction process . is a tunable parameter , which reflects the original payoff values of players when they join into the game system . for simplicity , we set be a constant .the payoff - based preferential selection takes into account the rich gets richer " characteristic and couples the dynamics of the evolutionary game and the evolution of the underlying network . after a new player joins into the network , the new one randomly choose strategy c or d and all old players preserve their strategies for the game in the next round . then , repeat the above procedures , and the network size gradually grows .[ 0.80 ] ( color online ) .average distance as a function of network size .each data point is obtained by averaging over network realizations .these results are independent of parameter .,title="fig : " ] numerical simulations are performed to quantify the structural properties of the obtained networks . in figure 1 , we show the degree distribution and correspondent cumulative degree distribution , in networks of size .the distributions clearly exhibit power - law behaviors , , in a broad range of degrees with a fat tail for very large degrees . besides, for the cumulative degree distribution , a cut - off at very large degrees is observed for each distribution , which corresponds to the fat tail range .the cumulative degree distribution provides a clear picture of the power - law behavior .these results indicate that the empirically observed scale - free structure can be generated from the coupling of the game and the evolution of the network , which may be an explanation for the heterogenous structure of many social and economical networked - systems . moreover , the exponent is a function of , which makes our model more general for mimicking a variety of real networks .we have checked that the parameter has slight effect on , while plays a major role .analytical results of the power - law distribution will be given after the discussion of the correlation between individuals s payoffs and the degrees of nodes occupied by them . 
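the expression for the payoff - based attachment probability in the growth step did not survive extraction ; it is presumably of the standard initial - attractiveness form , with the probability of choosing an existing node proportional to its total payoff plus the tunable offset parameter . the sketch below implements that assumed rule ; the adjacency - matrix representation , the default of two new links , and the helper name are illustrative .

```python
import numpy as np

def attach_new_node(payoffs, adj, alpha, m=2, rng=None):
    """Add one new player and connect it to m existing nodes chosen with
    probability proportional to (payoff + alpha); payoffs has one entry
    per existing node and adj is the (symmetric) adjacency matrix."""
    if rng is None:
        rng = np.random.default_rng()
    weights = np.asarray(payoffs, dtype=float) + alpha
    prob = weights / weights.sum()
    targets = rng.choice(len(payoffs), size=m, replace=False, p=prob)
    new_id = adj.shape[0]
    adj = np.pad(adj, ((0, 1), (0, 1)))        # grow the adjacency matrix by one node
    for t in targets:
        adj[new_id, t] = adj[t, new_id] = 1
    return adj, targets
```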
[ 0.80 ] ( color online ) .assortative mixing coefficient as a function of parameter for different values of .the network size is .each data point is obtained by averaging over network realizations.,title="fig : " ] average path length is a key measure for quantifying the small - world effect , which is widely observed in the real world .the average path length of a network is defined as where is the shortest path length from node to node , and is the network size .we perform simulations on as a function of network size for different values of parameter .each data point is obtained by averaging over different network realizations .figure 2 shows that for all the values of , increases logarithmically with the growth of the network , but the slopes for different show slight differences .these results demonstrate that the small - world effect can be reproduced by the proposed model .another important structural feature useful for measuring the correlation among nodes of a network is the assortative mixing coefficient , or called degree - degree correlation , which is defined as follows : ^ 2}{m^{-1 } \sum_i \frac{1}{2}(j_i^2+k_i^2 ) -[m^{-1}\sum_i \frac{1}{2}(j_i+k_i)]^2},\ ] ] where and are the degrees of the two nodes at the end of the edge , with ( is the total number of edges of the observed graph ) .two main classes of possible correlations have been observed in the real world : assortative behavior if , which indicates that large - degree nodes are preferentially connected with other large - degree nodes , and disassortative if , which denotes that links are more easily built between large - degree nodes and small - degree ones .as demonstrated in ref , almost all social networks show positive values of , while others , including technological and biological networks , show negative .however , the mechanism that leads to the basic difference between these two classes networks remains unclear .we calculate the assortative mixing coefficient to check whether the generated networks by our model are suitable representations of social systems .figure 3 shows as a function of parameter for different values of .one can see that assortative behavior occurs as increases . for each value of , shows slight changes in the cases of low values of , while for large , assortative mixing is enhanced with the same value of .the reported results demonstrate that the networks generated by our model can well capture the key distinguished structural property of social networks .[ 0.80 ] ( color online ) .cooperation density as a function of network size for different with fixing in the top panel and for different with fixing in the bottom panel.,title="fig : " ] so far we have studied the evolution of the underlying network structure influenced by the game .next , we turn to the effect of changing network structure on the cooperative behavior in the game .the key quantity for measuring the cooperation level of the game is the density of cooperators , .figure 4 shows the time series of for different values of and .one can find that after a short period of temporary behavior , reaches a stable value with small fluctuations around the average value .hence , for each pair of and can be calculated by averaging a period time after the system enter a steady state . here , the network size is equal to the evolutionary time step , since each time an individual joins into the system . in fig . 
5 , we report the depending on for different values of .each data point is obtained by averaging over 10 different simulations with an average from to for each simulation .one can see in fig . 5, in the case of , shows no difference for distinct . while for , the lower the value of , the higher the cooperation levelit has been known that scale - free networks favor the emergence and persistence of cooperation .thus , the fact that in our model is larger than that of well - mixed cases is attributed to the emergence of scale - free structural properties .at the very beginning , the network evolves from a core of random - like structure , in which cooperation can not dominate in the game . while as the network gradually grows , power - law degree distribution emerges , which leads to a sharp increase of , as shown in fig . 4 from to .the inhibited cooperation by the increment of is also ascribed to the weakened heterogeneity of degree distribution . as displayed in fig .1 , lower value of corresponds to stronger heterogeneity of degree distribution reflected by the longer fat tail .the above discussion gives a thorough picture that it is the growth and the payoff - based preferential attachment that produce scale - free network structures and meanwhile , the generated heterogeneity of degree distribution effectively promotes the emergence and persistence of cooperation . [ 0.80 ] ( color online ) .cooperator density as a function of payoff parameter , for different .each data point is obtained by averaging over 10 different simulations with an average from to for each simulation.,title="fig : " ] generally speaking , cooperation and defection are prototypical actions in economical systems . hence , evolutionary games may be suitable paradigms for studying and characterizing the phenomena observed in economical systems with players represented by agents .in such systems , a well - known and extensively studied phenomenon is the wealth distribution of agents which follows the pareto law in the high - income group . in order to check the validity of our model for understanding economical behavior, we investigate the wealth distribution by adopting the present evolutionary model , where the wealth of an agent is naturally represented by the accumulated payoff over time steps .figure 6 reports the accumulated distribution of accumulated payoff in the whole population for different model parameters and .one can see that power - law distribution can be observed in a wide range of , while the wealth distributions behave as exponential corrections in the zone of low values of , which is in accordance with the empirical evidence . 
moreover , in the left panel of fig .6 , has strong influence on the exponent of power - law distribution and higher value of corresponds to larger exponent .in contrast , in the right panel , nearly has no effect on the wealth distribution .the correlation between and the exponent of wealth distribution makes our model general for reproducing the empirical observation .[ 0.80 ] ( color online ) .accumulated wealth distribution for different with fixing in the left panel and for different with fixing in the right panel .the so - called wealth is the accumulated payoff over times of each individual .the results are obtained by averaging over network realizations when the network size reaches .the exponents of the power - law distribution in the left panel for and are and , respectively .the empirical data of usa and japan are and , respectively .hence , the real observations can be reproduced by our model by tuning the value of .,title="fig : " ] in the following , we provide some analysis for the scale - free network structure induced by the payoff - preferential attachment via considering the correlation between the accumulated payoff of individuals and their correspondent degrees . as shown in fig .7 , is a good linear function of with slope depending on in the simulations . using the mean - field approximation, a node with degree may have cooperative neighbors and defectors and itself may be cooperator with probability and defector with .thus , at time step , its payoff can be calculated as by substituting the elements of payoff matrix of the sg in eq .( 5 ) , where , , and , eq . (5 ) is simplified to then , we get the accumulated payoff where the approximation is ascribed to the fact that quickly reaches a stable value which is almost independent of ( as shown in fig .4 ) , so that are replaced by for simplicity .the inset of fig .7 illustrates the comparison between the simulation results and the analytical ones on the the normalized slopes vs .the theoretical predictions are calculated by substituting the simulation results of ( in fig .5 ) into eq .the theoretical results are consistent with simulations .[ 0.80 ] ( color online ) .correlation between the accumulated payoff of each individual and its degree for different values of .the results are obtained by averaging over network realizations . shows a linear function of .simulation and analytical results of the normalized slope of each line depending on are displayed in the inset .the network size is .,title="fig : " ] accordingly , considering the relation between the payoff of an individual and its degree in eq .( 6 ) , we give the evolution equation of the degree of a given node thus , we get with .then , in the infinite size , the degree probability distribution can be acquired by with though this expression is somewhat rough because of several approximations , such as , eq .( 9 ) can qualitatively describe the power - law degree distribution of the generated network .in addition , we should mention a network model proposed by dorogovtsev and mendes , which is related to the present work .in such model , by introducing the initial attractiveness " ( ia ) to each node , power - law degree distributions can be generated together with the exponent of the distribution controlled by strength of the ia .very interestingly , the ia plays a significant role in the emergence of the assortative mixing property . 
in our model, the parameter may play the same role as that of the ia in the perspective of assortative feature .the introduction of enlarges the probability of poor players being connected by the new one in the growth process .moreover , there is an approximately positive correlation between the payoff and the degree of a given individual .hence , enhances the connecting probability between small - degree individuals that results in the assortative mixing behavior .however , in our model , the degree distribution is not only controlled by and , but also by , as obtained in eq .our model couples the dynamical process of the sg and the evolution of the network , which leads to the difference between our model and the network model of dorogovtsev and mendes .in summary , we have studied the interplay of the evolutionary game and the relevant network structure .simulation results indicate that both scale - free structural property and high cooperation level result from the interplay between the game and the network .moreover , the resultant networks reproduce some typical features of social networks , including small - world and positive assortative mixing properties .the investigation of the wealth distribution of players indicates the validity of our model in mimicking the dynamical behavior of economical systems .however , some issues still remain unclear and deserve further study , such as the evolution of connections among existing nodes , as discussed in previous works . on the other hand , in the present work, we only consider the case of birth " of new players , which leads to the growth of the network .while , in social systems , death " and aging " are also important events and a previous work has already pointed out that the aging effect plays a significant role in the evolution of network structures .therefore , there is a need to consider the death and aging processes for better characterizing the evolutionary dynamics of social systems in the future study .
we study the interplay between an evolutionary game and the network structure , and show how the dynamics of the game affect the growth pattern of the network and how the evolution of the network influences the cooperative behavior in the game . simulation results show that the payoff - based preferential attachment mechanism leads to the emergence of a scale - free structural property . moreover , we investigate the average path length and the assortative mixing features . the obtained results indicate that the network has small - world and positive assortative mixing behaviors , which are consistent with the observations of some real social networks . in parallel , we find that the evolution of the underlying network structure effectively promotes the cooperation level of the game . we also investigate the wealth distribution obtained by our model , which is consistent with the pareto law observed in real economies . in addition , an analysis of the generated scale - free network structure is provided for a better understanding of the evolutionary dynamics of our model .
in classical coding theory , the _ weight distribution _ of a code is a useful tool for measuring a linear code s performance under maximum likelihood ( ml ) decoding . for codesdecoded using modern high - performance suboptimal decoding algorithms such as sum - product ( sp ) or linear - programming ( lp ) decoding , the _ pseudoweight _ is the appropriate analog of the codeword weight .there are different definitions of pseudoweight for different channels ; one of primary importance is the _ additive white gaussian noise _ ( awgn ) pseudoweight .the pseudoweight distribution considers all codewords in all codes derived from finite covers of the tanner graph , which compete with the codewords to be the best sp decoding solution .the set of pseudocodewords has a succinct characterization in terms of the so - called _ fundamental polytope _ or equivalently , the _ fundamental cone _ . also , pseudocodewords arising from finite covers of the tanner graph were shown to be equivalent to those responsible for failure of lp decoding . while much of the existing work in this area is concerned with performance characterization of particular codes , the performance of _ ensembles _ of _ low - density parity - check _ ( ldpc )codes is also of interest . in ,the growth rate of the weight distribution of irregular ldpc codes was derived , and a numerical technique was presented for its approximate evaluation .it was shown in ( * ? ? ?* corollary 50 ) that -regular ensembles with have a ratio of minimum awgn - pseudoweight to block length which decreases to zero asymptotically as . apart from this result , to the authors knowledge no _ ensemble _ results exist in the literature concerning awgn - pseudoweight . in this paper ,we make a first step in this direction .we define the degree- _ pseudoweight enumerating function _ of a linear block code , and use this concept to find an expression for the growth rate of the awgn - pseudoweight of regular ldpc code ensembles .we also present simulation results for the -regular and -regular ldpc code ensembles .we begin by providing some general settings and definitions . for ,we denote the multinomial coefficient by for with for each and , we denote the multivariate entropy function by all logarithms in the paper are to the base .let be a linear block code of length over the binary field , defined by where is an matrix over called the _ parity - check matrix _ of the code . 
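the displayed formulas for the multinomial coefficient and the multivariate entropy function above did not survive extraction; the helpers below implement the standard definitions commonly used in growth-rate calculations (an implicit remainder class and base-2 logarithms, as stated in the text). treating the notation this way is an assumption about the intended definitions rather than a quotation of them.

....
from math import factorial, log2

def multinomial(n, ks):
    """Multinomial coefficient n! / (k_1! ... k_m! (n - sum k_i)!),
    assuming an implicit last class of size n - sum(ks)."""
    rest = n - sum(ks)
    assert rest >= 0
    denom = 1
    for k in list(ks) + [rest]:
        denom *= factorial(k)
    return factorial(n) // denom

def entropy(ps):
    """Multivariate entropy h(p) = -sum_i p_i log2 p_i
    - (1 - sum_i p_i) log2 (1 - sum_i p_i)."""
    rest = 1.0 - sum(ps)
    terms = [p for p in list(ps) + [rest] if p > 0]
    return -sum(p * log2(p) for p in terms)

# asymptotically, (1/n) log2 multinomial(n, [a_1 n, ..., a_m n]) approaches h(a)
n = 200
print(entropy([0.2, 0.3]), log2(multinomial(n, [40, 60])) / n)
....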
also denote , and for each the _ tanner graph _ of a linear block code over with parity - check matrix is an equivalent characterization of .the tanner graph has vertex set , and there is an edge between and if and only if .we denote by the set of neighbors of a vertex .we next define what is meant by a finite cover of a tanner graph .( ) a graph is a _ finite cover _ of the tanner graph if there exists a mapping which is a graph homomorphism ( takes adjacent vertices of to adjacent vertices of ) , such that for every vertex and every , the neighborhood of is mapped bijectively to .( ) a cover of the graph is said to have degree , where is a positive integer , if for every vertex .we refer to such a cover graph as an -cover _ _ of .let be an -cover of the tanner graph representing the code with parity - check matrix .the vertices in the set are called _ copies _ of and are denoted , where .similarly , the vertices in the set are called _ copies _ of and are denoted , where .less formally , given a code with parity check matrix and corresponding tanner graph , an -cover of is a graph whose vertex set consists of copies of each vertex and copies of each vertex , such that for each , , the copies of and the copies of are connected in an arbitrary one - to - one fashion .for any , an -_cover codeword _ is a labelling of vertices of the -cover graph with values from such that all parity checks are satisfied .we denote the label of by for each , , and we may then write the -cover codeword in vector form as it is easily seen that belongs to a linear code of length over , defined by an parity - check matrix .to construct , for and , , we let , and it may be seen that is the tanner graph of the code corresponding to the parity - check matrix .we next define the concept of _ pseudocodeword _ as follows .let be a linear code of length with parity - check matrix .for any positive integer , a vector of length with nonnegative integer entries is said to be a _degree_- _ pseudocodeword _ of the code if and only if there exists an -cover codeword with for all .the _ pseudoweight _ of a degree- pseudocodeword of the code is equal to the vector , where has entries equal to for each .note that the pseudoweight as defined here corresponds to the `` type '' of a pseudocodeword in the notation of .note also that this notion of pseudoweight is applicable to different channels such as the additive white gaussian noise ( awgn ) channel , binary symmetric channel ( bsc ) or binary erasure channel ( bec ) .the awgn - pseudoweight of a pseudocodeword of length is defined by and its bsc - pseudoweight and bec - pseudoweight are defined in ( see also ( * ? ? ?* section 6 ) ) . the _ fundamental cone _ of the parity - check matrix is equal to the set of vectors such that for all , and [ def : cone ] in , it was shown that if is a binary linear code with an parity - check matrix , then a length- integer vector is a pseudocodeword of if and only if and where denotes the fundamental cone of , and the matrix in ( [ eq : pcw_condition_2_parity ] ) is interpreted over the integers .we next define the concept of pseudoweight enumerating function of a block code .the degree- _ pseudoweight enumerating function _ ( pwef ) of a block code of length is equal to where , and denotes the number of degree- pseudocodewords - cover codewords corresponding to a particular pseudocodeword .] 
of the code with pseudoweight .[ prop : b ] the degree- pwef of the single parity - check ( spc ) code of length is - t^{(m)}({{\mbox{\boldmath } } } ) \label{eq : b_definition}\ ] ] where , , , and for where the set in ( [ eq : t_recursion ] ) is the set of integer vectors satisfying for all , and where is even . in this case is a length- row vector of ones , so we have where ( using definition [ def : cone ] ) is the set of integer vectors which satisfy ( ) : : ( ) : : ( ) : : if there exists with and for , then it is straightforward to check that in ( [ eq : b_definition ] ) , the term \ ] ] takes into account all integer vectors which satisfy conditions ( ) and ( ) , and the term takes into account all those which violate the condition ( ) .in particular , , and .also note that \nonumber \\ - \frac{\partial t^{(m)}({{\mbox{\boldmath }}})}{\partial x_r } \label{eq : b_derivative}\end{gathered}\ ] ] for .for a positive integer , we define a -regular ldpc code ensemble as follows .the tanner graph of an ldpc code from the ensemble consists of variable nodes of degree , and check nodes of degree .the variable and check node sockets are connected by a permutation on the edges of the graph , each permutation being equiprobable .the concept of degree- assignment is defined next .this definition is a generalization of the definition of assignment in ( the definition in corresponds to that of a degree- assignment ) .a _ degree- assignment _ is a labelling of the edges of the tanner graph with numbers from the set .an assignment is said to have _ if edges are labelled for each .an assignment is said to be -_check - valid _ if according to this labelling , every check node recognizes a valid local degree- pseudocodeword . for any positive integer , the growth rate of the degree- awgn - pseudoweight distribution of the -regular ldpc code ensemble sequence defined by \label{eq : growth_rate_result}\ ] ] where denotes the expectation operator over the ensemble , and denotes the number of degree- pseudocodewords of awgn - pseudoweight of a randomly chosen ldpc code in the ensemble .the limit in ( [ eq : growth_rate_result ] ) assumes the inclusion of only those positive integers for which and q q x x q q q q q q q q q$}}})}{\partial q_r } \end{gathered}\ ] ] which is equivalent to \\ - j \log x_{0,r } + ( j-1 ) \log \left ( \frac{q_r}{1-\sum_{s=1}^{m } q_s } \right ) \\ = \lambda \left ( 2r \sum_{s=1}^{m } s q_s - \alpha r^2 \right ) \ ; .\end{gathered}\ ] ] the term in square brackets is equal to zero for each due to ( [ eq : bldx0_q_eqn ] ) ; therefore this simplifies to ( [ eq : lagrange_mult_theorem ] ) for each .note that for the case , the maximization in ( [ eq : optimization_for_awgn_pw ] ) is trivial and therefore the solution may be obtained directly from ( [ eq : f_definition ] ) as \\ - j \alpha \log x_{0 } - ( j-1 ) h(\alpha ) \label{eq : growth_rate_m1}\end{gathered}\ ] ] where is the unique positive real solution to the equation = \\ x_0 \left [ \left ( 1+x_0 \right)^{k-1 } - \left ( 1-x_0 \right)^{k-1 } \right ] \ ; .\label{eq : x0_m1}\end{gathered}\ ] ] note that is simply the growth rate of the weight distribution in this case , originally obtained in .also , this solution may be regarded as a special case of theorem [ thm : growth_rate ] where the solution for via ( [ eq : lagrange_mult_theorem ] ) is redundant .in this section the growth rates of the awgn - pseudoweight of two example ldpc code ensembles are evaluated using the solution of theorem [ thm : growth_rate ] .the growth 
rate curves for the -regular ldpc code ensemble and for the -regular ldpc code ensemble are shown in figures [ cap : gallager_36_ensemble ] and [ cap : gallager_48_ensemble ] respectively .note that for both ensembles .it is worthwhile to note some distictions between the present analysis and that of ( * ? ? ?* corollary 50 ) . in (* corollary 50 ) , it is proved that -regular ensembles with have a ratio of minimum awgn - pseudoweight to block length which decreases to zero asymptotically as .this result is not in conflict with the results of figures [ cap : gallager_36_ensemble ] and [ cap : gallager_48_ensemble ] .the detrimental pseudocodewords of ( * ? ? ?* corollary 50 ) are derived from the `` canonical completion '' ( * ? ? ?* definition 46 ) and , asymptotically , have awgn - pseudoweight _ sublinear _ in the block length therefore , these pseudocodewords do not appear in the present analysis .also , note that the analysis of ( * ? ? ?* corollary 50 ) takes the limit prior to ( or jointly with ) the limit , in contrast to the present analysis which takes the limit for finite . finally ,the result of ( * ? ? ?* corollary 50 ) is concerned with _ minimum _awgn - pseudoweight and not with the multiplicities of the corresponding pseudocodewords .
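as a concrete companion to the definitions earlier in this section, the sketch below checks whether an integer vector is a pseudocodeword (of some cover degree) of a small parity-check matrix and evaluates its awgn-pseudoweight. it assumes the standard fundamental-cone inequalities and the usual awgn-pseudoweight formula, the square of the entry sum divided by the sum of squared entries, since the displayed formulas above are garbled; the toy matrix and test vectors are hypothetical.

....
def in_fundamental_cone(H, w):
    """Fundamental cone inequalities (standard definition, assumed here):
    w_i >= 0 and, for every check j and every variable i involved in j,
    w_i <= sum of w over the other variables of that check."""
    if any(x < 0 for x in w):
        return False
    for row in H:
        nbrs = [i for i, h in enumerate(row) if h == 1]
        s = sum(w[i] for i in nbrs)
        if any(w[i] > s - w[i] for i in nbrs):
            return False
    return True

def is_pseudocodeword(H, w):
    """Integer vector w is a pseudocodeword iff it lies in the fundamental
    cone and H w = 0 when the product is reduced mod 2."""
    parity_ok = all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)
    return parity_ok and in_fundamental_cone(H, w)

def awgn_pseudoweight(w):
    """AWGN pseudoweight (sum w_i)^2 / (sum w_i^2)."""
    return sum(w) ** 2 / sum(x * x for x in w)

# hypothetical toy parity-check matrix with two overlapping checks
H = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
for w in ([1, 1, 0, 1],    # a codeword: pseudoweight equals its Hamming weight
          [2, 1, 1, 2],    # a genuine non-codeword pseudocodeword
          [2, 2, 0, 0]):   # satisfies parity but violates the cone
    print(w, is_pseudocodeword(H, w), round(awgn_pseudoweight(w), 3))
....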
a solution is presented for the asymptotic growth rate of the awgn - pseudoweight distribution of regular low - density parity - check ( ldpc ) code ensembles for a selected graph cover degree . the evaluation of the growth rate requires solution of a system of nonlinear equations in unknowns . simulation results for the pseudoweight distribution of two regular ldpc code ensembles are presented for graph covers of low degree .
elections are a general and widely used framework for preference aggregation in human and artificial intelligence applications .an important negative result from social choice theory , the gibbard - satterthwaite theorem , states that every reasonable election system is manipulable .however , even though every election system is manipulable , it may be computationally infeasible to determine how to manipulate the outcome .bartholdi , tovey , and trick introduced the computational study of the manipulation problem and this began an exciting line of research that explores the computational complexity of different manipulative attacks on elections ( see , e.g. , ) .the notion of single - peaked preferences introduced by black is the most important restriction on preferences from political science and economics and is naturally an important case to consider computationally .single - peakedness models the preferences of a collection of voters with respect to a given axis ( a total ordering of the candidates ) .each voter in a single - peaked election has a single most - preferred candidate ( peak ) on the axis and the farther candidates are to the left / right from her peak the less preferred they are .single - plateauedness extends this to model when each voter has multiple most - preferred candidates that appear sequentially on the axis , but are otherwise single - peaked .this standard model of single - peaked preferences has many desirable social - choice properties . when the voters in an election are single - peaked , the majority relation is transitive and there exist voting rules that are strategy - proof ( i.e. , a voter can not misrepresent her preferences to achieve a personally better outcome ) .single - peakedness for total orders can also have an effect on the complexity of many different election attack problems when compared to the general case .the complexity of manipulative attacks often decreases when the voters in an election are single - peaked , and the winner problems for kemeny , dodgson , and young elections are in p when they are -complete in general .most of the abovementioned research on the computational complexity of manipulation of elections , both for the general case and for single - peaked electorates , has been limited to the assumption that voters have tie - free votes . in many real - world scenariosvoters have votes with ties , and this is seen in the online repository preflib that contains several different preference datasets that contain ties .there are also election systems defined for votes with ties , e.g. , the kemeny rule and the schulze rule .recent work considers the complexity of manipulation for top - order votes ( votes where all of the ties are between candidates ranked last ) .fitzsimmons and hemaspaandra considered the complexity of manipulation , control , and bribery for more general votes with ties , and also the case of manipulation for a nonstandard model of single - peakedness for top - order votes .menon and larson later examined the complexity of manipulation and bribery for an equivalent ( for top orders ) model of single - peakedness .fitzsimmons and hemaspaandra use the model of possibly single - peaked preferences from lackner where a preference profile of votes with ties is said to be single - peaked with respect to an axis if the votes can be extended to tie - free votes that are single - peaked with respect to the same axis . 
menon and larson use a similar model for top orders that they state is essentially the model of single - peaked preferences with outside options . both fitzsimmons and hemaspaandra and menon and larsonfind that these notions of single - peakedness exhibit anomalous computational behavior where the complexity of manipulation can increase when compared with the case of single - peaked total orders .we are the first to study the computational complexity of manipulation for the standard model of single - peaked preferences for votes with ties , and for single - plateaued preferences for votes with ties .in contrast to the recent related work using other models of single - peakedness with ties , we find that the complexity of weighted manipulation for -candidate scoring rules and for -candidate copeland elections for all does not increase when compared to the cases of single - peaked total orders , and that the complexity of weighted manipulation does not increase with respect to the general case of elimination veto elections .we also compare the social choice properties of these different models , and state a surprising result on the relation between the societal axis and the complexity of manipulation for single - peaked preferences .an _ election _ consists of a finite set of candidates and a finite collection of voters .we will sometimes refer to this collection of voters as a _ preference profile_. an _ election system _ is a mapping from an election to a set of winners , which can be any subset of the candidate set ( the nonunique winner model , our standard model ) , or at most a single candidate ( the unique winner model ) .each voter in an election has a corresponding vote ( or _ preference order _ ) over the set of candidates .this is often assumed to be a _ total order _ ,i.e. , a strict ordering of the candidates from most to least preferred .formally , a total order is a complete , reflexive , transitive , and antisymmetric binary relation .we use `` '' to denote strict preference between two candidates . similarly , a _ weak order _ is a total order without antisymmetry , so each voter can rank candidates as tied ( which we will sometimes refer to as indifference ) as long as their ties are transitive .we use `` '' to denote a tie between two candidates .a _ bottom order _ is a weak order where all ties are between top - ranked candidates and a _ top order _ is a weak order where all ties are between bottom - ranked candidates .notice that a total order is a weak order , a top order , and a bottom order , that a top order is a weak order , and that a bottom order is a weak order .throughout this paper we will sometimes refer to weak orders as votes with ties .for some of our results we consider weighted elections , where each voter has an associated positive integral weight and a voter with weight counts as unweighted voters all voting the same .our election systems include scoring rules , elimination veto , and copeland .we define each below and the extensions we use to properly consider votes with ties .given an election with candidates , a scoring rule assigns scores to the candidates using its corresponding -candidate scoring vector of the form where and each .so , when the preferences of a voter are a total order , the candidate ranked in position receives a score of from that voter .below we present examples of scoring rules and their corresponding -candidate scoring vector .* plurality : * with scoring vector . 
* veto : * with scoring vector .* borda : * with scoring vector . *triviality : * with scoring vector . to use a scoring rule to determine the outcome of an election containing votes with tieswe must extend the above definition of scoring rules .we use the definitions of scoring - rule extensions for weak orders from our previous work , which generalize the extensions introduced for top orders from baumeister et al . and from narodytska and walsh which in turn generalizes the extensions used by emerson for borda .given a weak - order vote , we can write it as , where each is a set of tied candidates , ( so in the case of a total order vote each is a singleton ) . for each ,let be the number of candidates strictly preferred to the candidates in .we now state the definitions of each of the four extensions . in example[ ex : scoring - rule - ext ] we present an example of how a given weak - order vote is scored using borda and each of the scoring - rule extensions . * min : * each candidate in receives a score of . *max : * each candidate in receives a score of .* round down : * each candidate in receives a score of .* average : * each candidate in receives a score of for top orders the scoring - rule extensions min , round down , and average are the same as round up , round down , and average used in the work by menon and larson .[ ex : scoring - rule - ext ] given the candidate set and the weak order vote we show the scores assigned to each candidate using borda and each of our extensions .we can write the vote as , so , , and , and , , and .recall that for total orders , the scoring vector for 5-candidate borda is .* borda using min : * , , and . * borda using max : * , , and . * borda using round down : * , , and . * borda using average : * , , and . for elimination veto for total orders ,the veto scoring rule is used , the candidate with the lowest score is eliminated , and the rule is repeated on the remaining votes restricted to the remaining candidates until there is one candidate left .we break ties lexicographically , and for comparison with related work our results for elimination veto use the unique winner model , and for votes with ties we use the min extension .pairwise election systems are one of the most natural cases for considering votes with ties .copeland is an important and well - known election system that is defined using pairwise comparisons between candidates . in a copeland electioneach candidate receives one point for each pairwise majority election with each other candidate she wins and points for each tie ( where and ) . for votes with ties we follow the obvious extensionalso used by baumeister et al . and narodytska and walsh .for copeland elections it will sometimes be easier to refer to the _ induced majority graph _ of an election . 
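before turning to the majority graph, a small sketch of the scoring-rule extensions defined above may help. only min, max and average are implemented, because the exact formula for round down did not survive extraction; the vote and the scoring vector used in the demonstration are hypothetical and are not the ones from example [ ex : scoring - rule - ext ].

....
def group_scores(groups, alphas, extension):
    """Scores for a weak-order vote written as groups of tied candidates,
    most preferred group first.  alphas is the scoring vector (one entry
    per position).  Only the min, max and average extensions are sketched."""
    scores = {}
    placed = 0                      # number of candidates strictly preferred so far
    for group in groups:
        block = alphas[placed:placed + len(group)]   # positions the group occupies
        if extension == 'min':
            s = min(block)
        elif extension == 'max':
            s = max(block)
        elif extension == 'average':
            s = sum(block) / len(block)
        else:
            raise ValueError(extension)
        for c in group:
            scores[c] = s
        placed += len(group)
    return scores

# hypothetical 5-candidate Borda example: vote a > (b ~ c) > (d ~ e)
borda5 = [4, 3, 2, 1, 0]
vote = [['a'], ['b', 'c'], ['d', 'e']]
for ext in ('min', 'max', 'average'):
    print(ext, group_scores(vote, borda5, ext))
....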
given an election induced majority graph is constructed as follows .each vertex in the induced majority graph corresponds to a candidate in , and for all candidates if by majority then there is an edge from to in the induced majority graph .we also will refer to the _ weighted majority graph _ of an election , where each edge from to in the induced majority graph is labeled with the difference between the number of voters that state and the number of voters that state .the computational study of the manipulation of elections was introduced by bartholdi , tovey , and trick , and conitzer , sandholm , and lang extended this to the case for weighted voters and a coalition of manipulators .we define the constructive weighted coalitional manipulation ( cwcm ) problem below .* name : * -cwcm * given : * a set of candidates , a collection of nonmanipulative voters , a collection of manipulative voters , and a preferred candidate .* question : * does there exist a way to set the votes of such that is a winner of under election system ? for the case of cwcm for each of our models of single - peaked preferences we follow the model introduced by walsh where the societal axis is given as part of the input to the problem and the manipulators must state votes that are single - peaked with respect to this axis ( for the corresponding model of single - peakedness ) .( see section [ sec : variants ] for all of the definitions of single - peaked preferences that we use . ) for our np - completeness results we will use the following well - known np - complete problem .* name : * partition * given : * given a collection of positive integers such that . *question : * does there exist a partition of into two subcollections and such that ?we consider four important models of single - peaked preferences for votes with ties . for the following definitions , for a given axis ( a total ordering of the candidates ) and a given preference order , we say that is strictly increasing ( decreasing ) along a segment of if each candidate is preferred to the candidate on its left ( right ) with respect to . 
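returning to the pairwise comparisons introduced above (the single-peakedness definitions continue below), the following sketch computes the weighted majority graph of a profile of weighted weak orders and the resulting copeland scores. representing a weak order by a map from candidates to preference levels is an assumption of the sketch, as are the example profile and the chosen value of alpha.

....
from itertools import combinations

def weighted_majority_graph(candidates, votes):
    """votes: list of (weight, ranking) pairs, where ranking maps each
    candidate to a level (smaller = more preferred, equal levels = tied).
    Returns, for each ordered pair (a, b), the total weight preferring a to b
    minus the total weight preferring b to a."""
    margin = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for weight, rank in votes:
        for a, b in combinations(candidates, 2):
            if rank[a] < rank[b]:
                margin[(a, b)] += weight
                margin[(b, a)] -= weight
            elif rank[b] < rank[a]:
                margin[(b, a)] += weight
                margin[(a, b)] -= weight
    return margin

def copeland_scores(candidates, votes, alpha=0.5):
    """One point per pairwise majority win, alpha points per pairwise tie."""
    margin = weighted_majority_graph(candidates, votes)
    return {a: sum(1.0 if margin[(a, b)] > 0 else alpha if margin[(a, b)] == 0 else 0.0
                   for b in candidates if b != a)
            for a in candidates}

cands = ['a', 'b', 'c']
profile = [(2, {'a': 0, 'b': 1, 'c': 2}),   # weight-2 voter: a > b > c
           (1, {'b': 0, 'a': 1, 'c': 1}),   # weight-1 voter: b > a ~ c
           (1, {'c': 0, 'b': 0, 'a': 1})]   # weight-1 voter: b ~ c > a
print(weighted_majority_graph(cands, profile))
print(copeland_scores(cands, profile))
....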
in figure[ fig : ex - peak ] we present an example of each of the four models of single - peaked preferences , and in figure [ fig : relation ] we show how the four models relate to each other .we now give the definition of the standard model of single - peakedness from black .[ def : standard ] given a preference profile of weak orders over a set of candidates , is single - peaked with respect to a total ordering of the candidates ( an axis ) if for each voter , can be split into three segments , , and ( and can each be empty ) such that contains only the most preferred candidate of , is strictly increasing along and is strictly decreasing along .observe that for a preference profile of votes with ties to be single - peaked with respect to an axis , each voter can have a tie between at most two candidates at each position in her preference order since the candidates must each appear on separate sides of her peak .otherwise the preference order would not be _ strictly _ increasing / decreasing along the given axis .the model of single - plateaued preferences extends single - peakedness by allowing voters to have multiple most preferred candidates ( an indifference plateau ) .this is defined by extending definition [ def : standard ] so that can contain multiple candidates .lackner recently introduced another extension to single - peaked preferences , which we refer to as `` possibly single - peaked preferences '' throughout this paper .a preference profile is possibly single - peaked with respect to a given axis if there exists an extension of each preference order to a total order such that the new preference profile of total orders is single - peaked .this can be stated without referring to extensions , for votes with ties , in the following way .given a preference profile of weak orders over a set of candidates , is possibly single - peaked with respect to a total ordering of the candidates ( an axis ) if for each voter , can be split into three segments , , and ( and can each be empty ) such that contains the most preferred candidates of , is weakly increasing along and is weakly decreasing along .notice that the above definition extends single - plateauedness to allow for multiple indifference plateaus on either side of the peak .so for votes with ties , possibly single - peaked preferences model when voters have weakly increasing and then weakly decreasing or only weakly increasing / decreasing preferences along an axis .another generalization of single - peaked preferences for votes with ties was introduced by cantala : the model of single - peaked preferences with outside options .when preferences satisfy this restriction with respect to a given axis , each voter has a segment of the axis where they have single - peaked preferences and candidates appearing outside of this segment on the axis are strictly less preferred and the voter is tied between them . similar to how single - plateaued preferences extend the standard single - peaked model to allow voters to state multiple most preferred candidates , single - peaked preferences with outside options extends the standard model to allow voters to state multiple least preferred candidates . 
given a preference profile of weak orders over a set of candidates , is single - peaked with outside options with respect to a total ordering of the candidates ( an axis ) if for each voter , can be split into five segments , , , , and ( , , , and can each be empty ) such that contains only the most preferred candidate of , is strictly increasing along and is strictly decreasing along , for all candidates and , states , and for all candidates , states .menon and larson state that the model of single - peakedness for top orders that they use is similar to single - peaked preferences with outside options .it is clear from their paper that for top orders these models are the same .so for the remainder of the paper we will refer to the model used by menon and larson as `` single - peaked preferences with outside options for top orders . '', the solid line is the single - peaked order , the dashed line is the single - plateaued order , the dashed - dotted line is the single - peaked with outside options order , and the dotted line is the possibly single - peaked order . see figure [ fig : relation ] for the relationships between the four models . ]we now state some general observations on single - peaked preferences with ties , including how the models relate to each other , as well as their social - choice properties .it is easy to see that for total - order preferences each of the four models of single - peakedness with ties that we consider are equivalent . in figure[ fig : relation ] we show how each model relates for weak orders , top orders , and bottom orders . given a preference profile of votesit is natural to ask how to determine if an axis exists such that the profile satisfies one of the above restrictions .this is referred to as the consistency problem for a restriction .bartholdi and trick showed that single - peaked consistency for total orders can be determined in polynomial time ( i.e. , is in p ) , and fitzsimmons extended this result to show that single - peaked , single - plateaued , and possibly single - peaked consistency for weak orders is in p. this leaves the consistency problem for single - peaked preferences with outside options , which we will now show to be in p. it is easy to see that given a preference profile of top orders , it is single - peaked with outside options if and only if it is possibly single - peaked .so it is immediate from the result by lackner that shows that possibly single - peaked consistency for top orders is in p , that the consistency problem for single - peaked preferences with outside options for top orders is in p. for weak orders the construction used to show that single - peaked consistency for weak orders is in p by fitzsimmons can be adapted to hold for the case of single - peaked preferences with outside options for weak orders , so this consistency problem is also in p. 
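a small checker for the standard single-peaked model (and its single-plateaued relaxation) with respect to a given axis follows. it encodes the observation above that, read along the axis, the preference levels must strictly improve up to a unique peak and strictly worsen afterwards, so a tie is only possible between two candidates that straddle the peak; with the plateau option, a contiguous block of equally most preferred candidates is allowed at the top. the representation of weak orders as candidate-to-level maps and the example votes are assumptions for illustration.

....
def levels(ranking, axis):
    """ranking maps candidates to levels (smaller = more preferred, equal =
    tied); returns the sequence of levels read along the axis."""
    return [ranking[c] for c in axis]

def is_single_peaked(ranking, axis, plateau=False):
    """Standard single-peakedness of a weak order w.r.t. an axis; with
    plateau=True the check is for the single-plateaued model instead."""
    seq = levels(ranking, axis)
    top = min(seq)
    i = 0
    while i + 1 < len(seq) and seq[i + 1] < seq[i]:      # strictly improving
        i += 1
    j = i
    if plateau:                                          # optional top plateau
        while j + 1 < len(seq) and seq[j + 1] == seq[j] == top:
            j += 1
    if seq[i] != top:
        return False
    while j + 1 < len(seq):                              # strictly worsening
        if not seq[j + 1] > seq[j]:
            return False
        j += 1
    return True

axis = ['v', 'w', 'x', 'y', 'z']
vote1 = {'x': 0, 'w': 1, 'y': 1, 'v': 2, 'z': 3}   # ties straddle the peak: single-peaked
vote2 = {'x': 0, 'y': 0, 'w': 1, 'z': 2, 'v': 3}   # plateau {x, y}: single-plateaued only
print(is_single_peaked(vote1, axis),
      is_single_peaked(vote2, axis),
      is_single_peaked(vote2, axis, plateau=True))
....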
given a preference profile of weak orders it can be determined in polynomial time if there exists an axis such that is single - peaked with outside options with respect to .one of the most well - known and desirable properties of an election with single - peaked preferences is that there exists a transitive majority relation .a majority relation is transitive if for all distinct candidates if and by majority then by majority .when the majority relation is transitive then candidates that beat - or - tie every other candidate by majority exist and they are denoted the weak condorcet winners .this also holds when the voters have single - plateaued preferences .it is not the case that the majority relation is transitive when preferences in an election are single - peaked with outside options or are possibly single - peaked .fitzsimmons points out that for possibly single - peakedness this was implicitly shown in an entry of table 9.1 in fishburn , where a preference profile of top orders that violates single - peaked preferences was described that does not have a transitive majority relation , and this profile is possibly single - peaked ( and single - peaked with outside options ) .cantala also provides an example of a profile that is single - peaked with outside options that does not have a weak condorcet winner , and so does not have a transitive majority relation .for total - order preferences , weighted manipulation for any fixed number of candidates is known to be np for every scoring rule that is not isomorphic to plurality or triviality . for single - peaked total orders ,faliszewski et al . completely characterized the complexity of weighted manipulation for 3-candidate scoring rules and this result was generalized by brandt et al . for any fixed number of candidates . these results both showed that the complexity of weighted manipulation for scoring rules often decreases when voters have single - peaked total orders . for top - order preferences , dichotomy theorems for 3-candidate weighted manipulation for scoring rules using round down , min , and averagewere shown by menon and larson for the general case and for the case of single - peaked preferences with outside options for top orders .menon and larson found that , counterintuitively , the complexity of weighted manipulation for single - peaked preferences with outside options often increases when moving from total orders to top orders .we also mention that for the scoring - rule extension max , we earlier found that the complexity of 3-candidate weighted manipulation for borda increases for possibly single - peaked preferences when moving from total orders to top orders .we show that for the standard model of single - peakedness and for single - plateauedness that the complexity of weighted manipulation for -candidate scoring rules using max , min , round down , and average does not increase when moving from total orders to top orders , bottom orders , or weak orders .the following results are close analogs to the case for single - peaked total orders due to brandt et al . 
.[ lem : sp - first ] if can be made a winner by a manipulation of top - order , bottom - order , or weak - order votes that are single - peaked , single - plateaued , possibly single - peaked , or single - peaked with outside options for a scoring rule using max , min , round down , or average , then can be made a winner by a manipulation ( of the same type ) in which all manipulators rank uniquely first .suppose that a manipulator has a vote of the form where is a weak order over ( where can be empty ) , , each of are nonempty groups of tied candidates , and .we have the following two cases .case 1 : : : if all of the candidates in appear on the same side of the candidates in with respect to the axis ( or is empty ) , then we can change the vote of the manipulator to where and are total orders such that the vote satisfies the given notion of single - peakedness with ties with respect to the axis .case 2 : : : otherwise , we let be the candidates to the left of and be the candidates to the right of with respect to the axis . without loss of generalitylet .we can then change the vote of the manipulator to .notice that in all of the above cases , for all candidates that does not decrease , regardless of which scoring - rule extension is used .so if is a winner with the initial vote , is still a winner in the vote that ranks uniquely first .[ thm : noninc ] let be a scoring vector . if -cwcm is in p for single - peaked total orders , then -cwcm is in p for single - peaked and single - plateaued top orders , bottom orders , and weak orders for all our scoring rule extensions .if the societal axis has in the leftmost or rightmost position , then , by lemma [ lem : sp - first ] , we can assume that all manipulators rank uniquely first .there is exactly one such single - peaked or single - plateaued vote with ties , namely to rank first and strictly rank the remaining candidates according to their position on the axis .so , let the societal axis be .we first consider the p cases for single - peaked total orders described in lemma 6.6 of brandt et al .where for all such that it holds that .the crucial observation in the proof of this lemma is that for every set of single - peaked total order votes , we are in one of the following two cases : 1 . for all , , .2 . for all , , . we will show that this is also the case for single - peaked and single - plateaued orders with ties .this then implies that the optimal vote for all manipulators is ( in case 1 ) or ( in case 2 ) . for the sake of contradiction let be a collection of nonmanipulators and and be integers such that and such that and .we now create an such that and and the argument in the proof of lemma 6.6 in brandt et al . still holds . *first , remove from all voters that have tied for first . 
notice that if we remove all votes with tied forfirst from then does not increase and neither nor decreases .so we can assume that is never tied for first in a vote ( although this does not mean that other candidates can not be tied for first in a vote when we are in the single - plateaued case ) .* we know that is not tied for first ( part of an indifference plateau ) , so is tied with at most one other candidate in each vote .so in every vote break the tie against .notice that this can lower only the score of regardless of whether we are using the min , max , round down , or average scoring - rule extension .let be the total weight of the voters in that rank some candidate in first and in the position of ( in every extension other than round down this means that is ranked ; for round down we mean that is ranked in the group ( i.e. , is followed by groups ) . using a similar argument to the proof of lemma 6.6 from brandt et al . with described above we can reach a contradiction .we now present the remaining p cases , which can be seen as the with - ties analog of the p cases of lemma 6.7 from brandt et al . .we examine each of the cases individually . for convenience ,we normalize the scoring vector so that .if then clearly the optimal action for the manipulators is to vote a total order with uniquely first and the remaining candidates ranked arbitrarily such that the vote is single - peaked .if , then , as pointed out in , lemma 6.6 applies , which has been handled above .the last p - time case is that , , and .notice that when the votes are single - peaked or when they are single - plateaued then and are the only candidates that can occur last in a vote , either uniquely or tied ( except for the case where all candidates are tied , but we can easily ignore these votes since they do not change the relative scores between candidates ) . since we know from lemma [ lem : sp - first ] that all of the manipulators can put strictly ranked first , we can have the following manipulator votes .* .* .* .notice that for all votes with first that . for all votes , or , or and tied for last , in which case ( the worst case occurs with max where they each receive a score of ) .so , . if , is the optimal manipulator vote . otherwise , and is the optimal manipulator vote .we have proven that the above cases hold for single - peaked and for single - plateaued weak orders , and it is clear that the case for single - peaked bottom orders follows from the case of single - peaked total orders ( since they are equivalent ) and it is also clear how to adapt the above arguments to hold for single - peaked top orders , and for single - plateaued top orders and bottom orders .the most surprising result in the work by menon and larson was that the complexity of 3-candidate cwcm for elimination veto for single - peaked preferences with outside options for top orders using min is np - complete , whereas for single - peaked total orders and even for total orders in the general case it is in p. -candidate elimination veto cwcm for total orders is in p in the unique - winner model . -candidate elimination veto cwcm for single - peaked total orders is in p in the unique - winner model . 3-candidate elimination veto cwcm for top orders that are single - peaked with outside options is np - complete in the unique - winner model .menon and larson state this case as a counterexample to the conjecture by faliszewski et al . 
,which states that the complexity for a natural election system will not increase when moving from the general case to the single - peaked case .( though menon and larson do qualify that the conjecture from faliszewski et al .concerned total orders . )however , for the standard model of single - peaked preferences and for single - plateaued preferences , elimination veto cwcm for top orders , bottom orders , and weak orders using min is in p for any fixed number of candidates , thus the counterexample crucially relies on using a nonstandard definition .-candidate elimination veto cwcm for single - peaked and for single - plateaued top orders , bottom orders , and weak orders using min is in p in the unique - winner model .the proof of this theorem follows from a similar argument to the proof of the theorem for single - peaked total orders from menon and larson .this is because that proof follows from the fact that for total orders the candidate eliminated after each round is on the leftmost or rightmost location of the axis , and the reverse of an elimination order is single - peaked with respect to the axis .it is easy to see that both of these statements also hold for single - peaked votes with ties , since the only candidates that can be vetoed in each round are still the leftmost and rightmost candidates on the axis .for single - plateaued preferences it is possible for voters with an indifference plateaued to veto more than two candidates ( after all of the candidates ranked below their indifference plateau have been eliminated ) , but it is easy to see that these votes do not affect which candidate is eliminated since all of the remaining candidates are vetoed by such voters .it is also clear that lemma 12 from coleman and teague still holds which states that if there exists a collection of votes that can induce an elimination order , then this elimination order can be induced by all of the manipulators voting the reverse of the elimination order .so since we know that the reverse of an elimination order is single - peaked with respect to the axis , and that there are only polynomially many possible elimination orders with first , the manipulators simply try each elimination order with first .since the elimination order is always a total order , the above argument clearly holds for single - peaked and single - plateaued top orders , bottom orders , and weak orders .it is known that 3-candidate copeland cwcm for all rational is np - complete for total orders in the nonunique - winner model , and when ( also known as llull ) cwcm is in p for and the cases for remain open .fitzsimmons and hemaspaandra showed that the np - completeness of the 3-candidate case holds for top orders , bottom orders , and weak orders , and menon and larson independently showed the top - order case .fitzsimmons and hemaspaandra also showed that 3-candidate llull cwcm is in p for top orders , bottom orders , and weak orders . recall that weak condorcet winners always exist when preferences are single - peaked with ties , and when they are single - plateaued .so the results that llull cwcm for single - peaked total orders is in p from brandt et al. also holds for the case of single - peaked and single - plateaued top orders , bottom orders , and weak orders .copeland for was shown to be in p by yang for single - peaked total orders . 
[ t : yang ] copeland cwcm for is in p for single - peaked total orders .in contrast , for top orders that are single - peaked with outside options , it is np - complete even for three candidates . 3-candidate copeland cwcm for is np - complete for top orders that are single - peaked with outside options . for single - peaked and single - plateaued weak orders , bottom orders , and top orders we again inherit the behavior of single - peaked total orders .copeland cwcm for is in p for single - peaked and single - plateaued top orders , bottom orders , and weak orders .let be our axis .consider a set of nonmanipulators with single - plateaued weak orders .replace each nonmanipulator of weight with two nonmanipulators and of weight .the first nonmanipulator breaks the ties in the vote in increasing order of and the second nonmanipulator breaks the ties in the vote in decreasing order of , i.e. , if and , then and .note that and are single - peaked total orders and that the weighted majority graph induced by the nonmanipulators after replacement can be obtained from the weighted majority graph induced by the original nonmanipulators by multiplying each weight in the graph by 2 .when we also multiply the manipulator weights by 2 , we have an equivalent copeland cwcm problem , where all nonmanipulators are single - peaked total orders and all manipulators have even weight .suppose can be made a winner by having the manipulators cast single - plateaued votes with ties .now replace each manipulator of weight by two weight- manipulators .the first manipulator breaks the ties in the vote in increasing order of and the second manipulator breaks the ties in the vote in decreasing order of .now the replaced manipulator votes are single - peaked total orders and is still a winner .we need the following fact from the proof of theorem [ t : yang ] : _ if can be made a winner in the single - peaked total order case , then can be made a winner by having all manipulators cast the same p - time computable vote ._ it follows that can be made a winner by having all replaced manipulators cast the same single - peaked total order vote .but then can be made a winner by having all original manipulators cast the same single - peaked total order vote .since this vote is p - time computable , it follows that copeland cwcm for is in p for single - plateaued weak orders , and this also holds for single - peaked weak orders since every single - peaked profile of weak orders is also single - plateaued .it is clear to see that similar arguments hold for single - peaked top orders , and for single - plateaued top orders and bottom orders .the case for single - peaked bottom orders follows from the case for single - peaked total orders .recall from lemma [ lem : sp - first ] that for all our single - peaked models and all our scoring rule extensions , we can assume that all manipulators rank uniquely first .when given an axis where the preferred candidate is in the leftmost or rightmost location , there is exactly one single - peaked total order vote that puts first , namely , followed by the remaining candidates on the axis in order .this is also the case for single - peaked and single - plateaued orders with ties , since is ranked uniquely first and no two candidates can be tied on the same side of the peak .it follows that the weighted manipulation problems for scoring rules for single - peaked total orders and single - peaked and single - plateaued orders with ties are in p for axes where is in the leftmost / rightmost 
position .however , this is not the case when preferences with ties are single - peaked with outside options or possibly single - peaked . in case 1 of the proof of theorem 1 in the work by menon and larson , an axis of is used to show np - hardness for their model .it is attractive to conjecture that for single - peaked and single - plateaued preferences , the less symmetrical ( with respect to ) the axis is , the easier the complexity of manipulation , but surprisingly this turns out to not be the case , even for total orders . for the theorem stated below let and denote the number of candidates to the left and to the right on the axis with respect to the preferred candidate of the manipulators . for single - peaked total orders , single - peaked weak orders , and single - plateaued weak orders , is in p for and np - complete for and for all our scoring rule extensions . : careful inspection of the proof of lemma 6.6 from brandt et al . proves the following refined version of that lemma that takes the axis into account .[ l : sp2p ] let be a scoring rule .if for all such that and , it holds that then -cwcm for single - peaked total orders for this axis is in p. note that lemma [ l : sp2p ] applies in our case , since for all such that and , the following hold .[ cols="^,^,^,^,^ " , ] it follows that cwcm is in p for single - peaked total orders when .it follows from the arguments in the proof of theorem [ thm : noninc ] that cwcm is also in p for single - peaked and single - plateaued orders with ties for . to get np - completeness for the single - peaked and single - plateaued weak order cases for all our scoring rule extensions , we need to modify the proof for single - peaked total orders .careful inspection of the proof of lemma 6.4 from brandt et al . allows us to extract the following reduction from partition to cwcm for total orders that are single - peaked with respect to axis .given an instance of partition , a collection of positive integers such that , we are asking whether there exists a partition of this collection , i.e. , whether there exists a subcollection that sums to .the set of nonmanipulators consists of * one weight voter voting : .* one weight voter voting : .the weights of the manipulators are .if there exists a partition , then let the set of manipulators vote the following .* coalition of weight manipulators vote . *coalition of weight manipulators vote .then , and the scores of the other candidates are lower . for the converse ,note that the optimal total order votes for the manipulators are and .simple calculation shows that in order to make a winner , exactly half of the manipulator weight needs to vote .this corresponds to a partition . for the single - peaked and single - plateaued cases for votes with ties , note that by lemma [ lem : sp - first ], we can assume that all manipulators rank uniquely first .we will look at each of our scoring extensions : max , min , average , and round down . for max, if the manipulators can make a winner , we break the ties of the manipulators such that the resulting votes become total single - peaked orders . will still be a winner in the resulting election .this implies that the construction above also gives np - completeness for single - peaked votes with ties using max . for round down ,the construction above also works .suppose can be made a winner .again , we assume that all manipulators rank uniquely first .now replace all votes with ties ( i.e. 
, , , and ) by .note that and do not decrease , and so is still a winner of the resulting election , in which all manipulators vote a single - peaked total order .this implies that the construction above also gives np - completeness for single - peaked votes with ties using round down .for average , the construction above also works , though the proof needs a little more work .suppose can be made a winner .again , we assume that all manipulators rank uniquely first .so , .the optimal manipulator votes are : , , and .assume that these are the only manipulator votes and let , , and be the total manipulator weight voting each of these three votes , respectively . since is a winner , and .it follows that , , and .standard calculation shows that this implies that , and .thus the weights of the voters voting correspond to a partition . for min , the optimal votes for the manipulators are and .in order to make the argument from the total order case work , we need to adjust the weights of the nonmanipulators a bit .namely , we let the set of nonmanipulators consist of * one weight voter voting : .* one weight voter voting : .the weights of the manipulators are unchanged , i.e. , . note that if half the manipulator weight votes and remaining manipulators vote , then , , and .the scores of and are lower .similarly to the previous cases ,if can be made a winner then standard calculations show that exactly half of the manipulator weight votes , which corresponds to a partition .the work by menon and larson on the complexity of manipulation and bribery for single - peaked preferences with outside options for top orders is the most closely related to this paper . for manipulation , they show that for single - peaked preferences with outside options the complexity often increases when moving from total orders to top orders .they additionally considered a notion of nearly single - peakedness .we instead study the complexity of weighted manipulation for the standard model of single - peaked preferences with ties and for single - plateaued preferences with ties .the focus of our paper is on the computational aspects of models of single - peaked preferences with ties .these models can also be compared based on which social - choice properties that they have , such as the guarantee of a weak condorcet winner .barber compares such properties of the models of single - peaked , single - plateaued , and single - peaked with outside options for votes with ties .since single - peakedness is a strong restriction on preferences , in real - world scenarios it is likely that voters may only have nearly single - peaked preferences , where different distance measures to a single - peaked profile are considered . both the computational complexity of different manipulative attacks and detecting when a given profile is nearly single - peaked have been considered . 
an important computational problem forsingle - peakedness is determining the axis given a preference profile , this is known as its consistency problem .single - peaked consistency for total orders was first shown to be in p by bartholdi and trick .doignon and falmagne and escoffier , lang , and ztrk independently found faster direct algorithms .lackner proved that possibly single - peaked consistency for top orders is in p ( and for local weak orders and partial orders is np - complete ) , and fitzsimmons later showed that single - peaked , single - plateaued , and possibly single - peaked consistency for weak orders is in p.the standard model of single - peakedness is naturally defined for votes with ties , but different extensions have been considered .in contrast to recent work that studies the models of possibly single - peaked and single - peaked preferences with outside options and finds an anomalous increase in complexity compared to the tie - free case , we find that for scoring rules and other important natural systems , the complexity of weighted manipulation does not increase when moving from total orders to votes with ties in the standard single - peaked and in the single - plateaued cases .single - peaked and single - plateaued preferences for votes with ties also retain the important social - choice property of the existence of weak condorcet winners .this is not to say that possibly single - peaked and single - peaked preferences with outside options are without merit , since they both model easily understood structure in preferences .* acknowledgments : * we thank the anonymous referees for their helpful comments .this work was supported in part by nsf grant no .ccf-1101452 and a nsf graduate research fellowship under nsf grant no. dge-1102937 .
single - peakedness is one of the most important and well - known domain restrictions on preferences . the computational study of single - peaked electorates has largely been restricted to elections with tie - free votes , and recent work that studies the computational complexity of manipulative attacks for single - peaked elections for votes with ties has been restricted to nonstandard models of single - peaked preferences for top orders . we study the computational complexity of manipulation for votes with ties for the standard model of single - peaked preferences and for single - plateaued preferences . we show that these models avoid the anomalous complexity behavior exhibited by the other models . we also state a surprising result on the relation between the societal axis and the complexity of manipulation for single - peaked preferences .
the aim of any notation for software system design must be to faithfully capture the structure and behaviour of the proposed system .the aim of any tooling that supports such a notation must be that it is reliable and provides useful functionality .what kind of functionality ?certainly , the notation must be a good vehicle for communication - other people must be able to understand your ideas expressed in the notation .however , communication is not sufficient ; the notation must support other useful tasks that contribute to the development of a system .a design notation must allow a tool to provide feedback on whether the system is likely to work , perhaps even allow a tool to animate parts of the design .in addition , it should be possible to generate parts of a system from its design . in recent yearsthere has been interest in model driven architecture , software factories , tool definition languages and domain specific languages .all of these initiatives aim to provide design notations that achieve the aims outlined above .these technologies fall broadly into two categories : standard vs. extensible languages .the design notation is standard .uml 2.x is an example of this category whereby the notation is standard with some limited scope for extension points . in the case of umlthere is a very large number of different types of design element .each type of element can be stereotyped by defining properties and changing the iconization .this is similar to _ annotations _ recently added to java .the advantages of this approach are that the design notation is completely standard in terms of how it is presented to the user , therefore tools mature quickly , the expertise required to use the technology is relatively low , and information is interoperable between tools .the disadvantage is that the notation designers must preempt all the element types that will be required , leading to notational bloat , and the scope for notational extension is very limited .the design notation is user - defined .there are a number of technologies including microsoft visual studio dsl tools , gmf and metaedit+ that allow a user to define new notation .the extent to which the notation can be extended differs between technologies : some tooling limits the definition through wizards and some allows arbitrary extension through program - level interfaces . in most cases , the underlying model for the design notation ( the so - called meta - model ) is defined by the user and therefore the data representation is tailored to the application domain .an advantage of this approach is that both the notation and underlying data representation are a good fit for the application domain , therefore tooling can take advantage of this by supporting the use of the notation .the main disadvantages of this approach are that the skill levels required to work with technology are relatively high and that the graphical tooling must be developed ( even model - based ) for each new language .is it possible to achieve the advantages of each approach without the disadvantages ?analysis of the use of many design notations leads to the conclusion that many features recur while others differ . 
in most cases there are package - like and class - like elements with association - like relationships between them . variations occur due to different categories of these basic features with their own specific properties . this paper describes an approach to domain specific languages that allows arbitrary extension of a small collection of underlying modelling concepts via access to a self - describing meta - model . it shows that by adding the notion of _ meta - packages _ to the meta - language , tooling can be designed that offers domain specific languages without wholesale definition of a new language ( such as that required by emf and gmf ) . the need for dsl tooling is discussed further by fowler . the result is an extensible modelling notation that reuses the same tooling . the use of meta - package based technology does not require skills in complete language definition and is not limited to the simple property - based extensions of the standards - based approach described above . in addition , by extending the notion of a package slightly , many new language features can be added . meta - packages are implemented in xmf - mosaic and have been used on a number of projects including generating code for telecomms applications . the screen - shots and code in this paper are taken from a tutorial example that is implemented in xmf - mosaic . the book provides a good introduction to meta - modelling concepts . the rest of this paper is structured as follows : section [ sec : example ] provides the motivation for meta - packages using a simple example ; section [ sec : metapackages ] defines meta - packages and how associated tooling accommodates them ; section [ sec : beandsl ] implements the example dsl using meta - packages ; finally , section [ sec : review ] reviews meta - packages , describes some extensions and discusses related systems . design notations should support the generation of source code where this is sensible . the generation of source code from standard modelling notations raises a problem . one of the merits of a modelling notation is that it abstracts away from implementation details ; however , implementation details are required in order to generate the code . in effect , an arbitrary number of sub - categories of modelling element needs to be identified and represented in the design . each category can be defined in terms of properties , structure or behaviour . standard modelling notations often allow categories to be defined using simple properties . however , this does not address the real issue , which is to be able to define new categories of modelling element with their own structure and behaviour . meta - packages support the definition of new element categories by allowing a standard modelling language to be extended at the meta - level . this section motivates the definition of meta - packages using a simple example . java has recently been extended with annotations which can be used to add properties to standard java program elements such as classes . the motivation for annotations has been the need to _ mark up _ java components with static information that can be used by tooling . an example use of annotations is in the implementation of enterprise information systems , where java classes use annotations to define a mapping to relational database tables . for example , the simple model defined in figure [ fig : orderprocessing1 ] may give rise to the following java code :
....
@Table(name="order_table")          // jpa - style annotation names assumed here
public class Order {
  private int id;
  private String address;
  @Column(name="order_id")
  public int getId() { return id; }
  public void setId(int id) { this.id = id; }
  @Column(name="shipping_address")
  public String getAddress() { return address; }
  public void setAddress(String address) { this.address = address; }
}
....
in which instances of the class order are to be persisted in a relational database as rows in the table named order_table with columns order_id and shipping_address . the primary key of this table is order_id . the tool used to represent the model in figure [ fig : orderprocessing1 ] is standard since it provides tools in the palette to create basic modelling elements : package , class , attribute etc . given a model using this tool , there is no way to distinguish between those model elements that will become entity beans and those that will become basic java classes . figure [ fig : beanmodelling ] shows a tool that provides a new category of modelling element : beans . the model for the order processing system now includes two different categories of modelling element : classes and attributes , entitybeans and beanattributes . it is now possible to distinguish between the elements in terms of code generation . in addition , entitybean will have a property that defines the name of the relational database table used to represent its instances . in addition to defining new modelling entities , a dsl has semantics . part of the semantics of a language is the set of rules that govern correct model formation . these are expressed at the meta - package level ; for example , an entitybean is correctly formed when it has at most one bean attribute that is designated as an id ( the primary key in the table ) . figure [ fig : dslsemantics ] shows the result of running the well - formedness checks for the meta - package over the instance shown in figure [ fig : beanmodelling ] . leaves of the tree are constraints that have been applied to the model elements . triangles represent constraints that are satisfied . crosses are constraints that have failed . therefore , in the example model the customer element has oneid ( a single attribute designated as a primary key ) but does not specify a persistent name ( a table in the database ) . meta - packages provide a mechanism for extending the basic language for class - based modelling shown in figure [ fig : orderprocessing1 ] with _ domain specific _ features such as those shown in figure [ fig : beanmodelling ] . tooling for standard modelling detects meta - packages automatically and extends the functionality appropriately . meta - packages are a way of defining semantically rich dsls without having to specify associated tooling . the idea is that there is a single basic meta - package that defines a language . the concepts in the base meta - package can be extended to produce new meta - packages . each meta - package is a language and tooling is written against the base meta - package . since all new meta - packages are extensions of the base , the existing tooling will work with any new language .
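to make this containment rule concrete before the formal xcore definitions below , the following toy sketch ( plain python , not xmf , with made - up class and package names ) models packages , their meta - packages and the _ of _ typing of elements , and checks that every element contained in a package is typed by a concept of that package s meta - package . it deliberately ignores inheritance between meta - packages and everything else in xcore .
....
# toy model (python, not xmf) of the meta-package typing rule described above:
# every element contained in a package must be an instance ('of') of some class
# contained in that package's meta-package.
from dataclasses import dataclass, field

@dataclass
class Clazz:
    name: str

@dataclass
class Element:
    name: str
    of: Clazz                      # the class this element is an instance of

@dataclass
class Package:
    name: str
    meta_package: "Package | None" = None
    classes: list = field(default_factory=list)    # concepts it defines (if it is a meta-package)
    elements: list = field(default_factory=list)   # model elements it contains

def well_formed(pkg: Package) -> bool:
    if pkg.meta_package is None:
        return True
    allowed = {id(c) for c in pkg.meta_package.classes}
    return all(id(e.of) in allowed for e in pkg.elements)

# base meta-package ('xcore'-like) with a single concept, and an extension of it
klass = Clazz("class")
entity_bean = Clazz("entitybean")
xcore = Package("xcore", classes=[klass])
beans = Package("beans", meta_package=xcore, classes=[klass, entity_bean])

order_processing = Package("orderprocessing", meta_package=beans,
                           elements=[Element("order", of=entity_bean)])
print(well_formed(order_processing))    # True: 'order' is typed by a concept of beans
....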
since languages are written at the meta - level , executable meta - modelling allows any new language to have a rich semantics .this section defines meta - packages and is structured as follows : section [ sub : xcore ] defines the base meta - package called xcore ; section [ sub : tools ] defines a tool model for languages over xcore ; section [ sub : diagrams ] defines a model for tool diagrams ; section [ sub : displays ] defines a model for the display elements on diagrams ; finally , section [ sub : mappings ] defines mappings that ensure the diagrams and model elements are synchronized in the tools .meta - packages are defined in a meta - circular language defined in figure [ fig : xcore ] .this language is part of the basic package , called xcore , of the xmf - mosaic modelling tool . in xmf - mosaic , everything is an element . classes are elements with names , constraints , attributes and parents . note that every element has a type ( _ of _ ) which is a class .a package is a class that contains elements and which has a meta - package .the idea of the meta - package association is that each element contained by a package is _ of _ some class contained by its meta - package . by default , the meta - package of all packages is xcore .the semantics of xcore is defined using a collection of constraints . since xcore is self defining these constraints are defines with respect to xcore itself .the following is an example which states that all elements of an enumerated type must be instances of that type and no other instances exist : .... context enum elements->forall(e | e.of = self ) context element of.of = enum implies of.elements->includes(self ) .... xcore supports executable meta - modelling via xmf .the following operations are used in the rest of this paper : .... context class allparents():set(class ) parents->iterate(p p = set { } | p + p.allparents ( ) ) end context package modellingelements():set(class ) elements->select(e | e.iskindof(class ) ) end context element tag(expected : class):string if of = expected then " " else of.name end end .... modelling in xcore - based languages is supported by tooling .figure [ fig : xcore ] shows an example of a modelling tool that consists of a palette on the left - hand side and a diagram containing modelling elements on the right .the palette contains buttons for the modelling elements defined in the modelling language .the tool palette depends on the modelling language s meta - package ; however there is only one tool engine which is parameterized with respect to the meta - package .figure [ fig : toolmodels ] shows the key elements involved in defining a modelling tool .a tool associates a package of modelling elements , a diagram and a palette of button groups .buttons are used to select creation modes .events on the diagram modify diagram elements and events to the package modify modelling elements .daemons on both the diagram and the package ensure that events from the model are propagated to the diagram and vice versa .figure [ fig : toolpalette ] shows the structure of a tool palette .the palette consists of groups of buttons which are selected in order to determine the creation mode on a diagram .each group corresponds to a meta - package .all packages have xcore as a meta - package by default and therefore have the xcore and xmap groups .if the meta - package of the package associated with a tool is p which inherits from xcore then the palette will have groups named xcore , xmap and p. 
this leads to the following constraint : .... context tool palette.groups.name = package.metapackage.allparents().name .... the buttons provided by each group are determined by the language elements defined in the meta - package .a modelling element is a sub - class of package , class or attribute as defined in xcore .therefore : .... context tool package.metapackage.allparents ( ) ->forall(p | palette.groups->exists(g | g.name = p.name and p.modellingelements().name = g.buttons.name ) ) .... each tool manages a diagram that consists of nodes and edges .each node has a display that is used to render the node on the diagram .each edge has a source and a target node and a collection of labels .the diagram model is shown in figure [ fig : diagrammodels ] .the tool requires that the diagram is always synchronized with the package .therefore the following constraint must always hold : .... context tool package.classes()->size = diagram.nodes()->size .... the constraint on edges is less easy to define since attributes may be shown on a diagram within classes or as edges .in addition , inheritance is shown as edges . to make matters more complexthe types of attributes and the target of inheritance edges may be imported from other packages .figure [ fig : displays ] shows some of the display types that can be used to render nodes on diagrams .two basic types of display are shown : boxes and text .package - based diagrams use specializations of these display types to render classes .a class box consists of a name box and several attribute boxes .note that attboxes and namebox are derived attributes since they are just specifically identified components of the displays . both attrbute boxes and name boxeshave some text for the name of the element .they also both have a tag which is used to identify the meta - type of the element if it not class or attribute respectively .an attribute box has an additional text field for the type of the attribute .diagram - model synchronization is handled by a collection of mappings associated with a tool .figure [ fig : mapping ] shows the mapping model for package diagrams .a mapping has the form a_x_b which associates instances of a with instances of b. each mapping class has a constraint that defines the synchronization requirements .many of these constraints state commutative properties of the model .for example : .... context tool mapping.package = package and mapping.diagram = diagram .... each of the mappings inherit from a class onetoone ( not shown ) which requires that all maplets are unique .a mapping instance can reference all of the elements in the domain and range of the mapping and also reference all of the associations ( one of which is itself ) .the following constraints require the onetoone mapping to have unique associations : .... context onetoone domain->forall(d | range->exists(r1 r2 | map(d , r1 ) and map(d , r2 ) implies r1 = r2 ) ) context onetoone range->forall(r | domain->exists(d1 d2 | map(d1,r ) and map(d , r2 ) implies d1 = d2 ) ) .... all of the classes in the package must be shown on the diagram and all class nodes on the diagram must correspond to some class in the package : .... context diagram_x_package package.classes = classbox_x_classes.class context diagram_x_package diagram.classboxes = classbox_x_classes.classbox .... all the attribute edges must be related and must correspond to an attribute . note that not all attributes need to be shown as edges on the diagram : .... 
context diagram_x_package attedge_x_attributes.attedge = diagram.attedges context diagram_x_package attedge_x_attributes.attribute ->subset(package.classes.attributes ) .... the name of a class on a diagram must always be synchronized with the name of the corresponding model element . if the element is directly an instance of class then its tag is empty otherwise the tag must be the name of the meta - type : .... context classbox_x_class classbox.name = class.name context classbox_x_class classbox.tag = class.tag .... all of the attributes in a class box must correspond to some attribute of the associated class .however , the not all attributes need to be displayed in a class box : .... context classbox_x_class attbox_x_attributes.attribute->subset(class.attributes ) context classbox_x_class attbox_x_attributes.attbox = classbox.attboxes .... attribute boxes offer a name , a type and an optional meta - tag .the following constraints require that these are always synchronized ( attribute edge tags are specified using similar constraints ) : .... context attbox_x_attribute attribute.name = attbox.name context attbox_x_attribute attribute.type.name = attbox.type context attbox_x_attribute attribute.tag = attbox.tag .... finally , any attribute edges that are shown on the diagram must correspond to attributes .the source of the edge must be a class box that corresponds to the owner of the attribute and the target of the edge must be a class that corresponds to the type of the attribute .the predicate isclass is not defined but is satisfied isclass(b , c ) when the box b is associated with the class c in the tool : .... context attedge_x_attribute isclass(attedge.sourcebox , attribute.owner ) context attedge_x_attribute isclass(attedge.targetbox , attribute.type ) .... all of the attributes must be shown as either edges or in boxes : .... context diagram_x_package package.classes.attributes = attedge_x_attributes.attribute + classbox_x_classes.attbox_x_attributes.attribute .... other constraints can be defined to require that an attribute edge can not be shown as a boxed attribute .the previous section has defined the meta - package relationship that allows new class - based dsls to be defined by extending meta - concepts .existing tooling mechanisms can accommodate the new dsls without any new code being necessary .this section shows how a new language to support beans for enterprise systems can be defined using the meta - package relationship .the result of the definition is a new tool as shown in figure [ fig : beanmodelling ] .this section is defined as follows : section [ sub : abstractsyntax ] describes the abstract syntax for the dsl ; section [ sub : concretesyntax ] describes how concrete syntax can be defined in a number of ways ; section [ sub : constraintchecking ] describes well - formedness semantics for the dsl ; section [ sub : codegeneration ] defines how java code is generated from the dsl . 
the first step in dsl definition is to define a new meta - package whose modelling concepts extend those of an existing meta - package . figure [ fig : beans ] shows the definition of the package named beans . the concepts are defined as follows :
* a bean container is a specialization of package . the contents of a bean container may be entity beans , and the class beancontainer provides us with a container for bean - specific constraints .
* an entity bean is a class that has a persistas property that is used to name the relational database table used to contain all the instances of the class .
* a bean attribute is a special type of attribute that names the column in the database table used to contain its values . a bean attribute can be tagged as being a primary key in the relational table by setting its isid attribute . a bean attribute may also require a java accessor and updater ; these properties are set via the attribute modifiers of the bean attribute .
the beans package must be designated as a meta - package . this is done by making it _ inherit from _ xcore as shown in figure [ fig : metapackage2 ] . by default , a package inherits from an empty package and is therefore not a meta - package . by making a package inherit from a meta - package , the new package also becomes a meta - package . the package orderprocessing is to be written in the language defined by beans . by default the meta - package of a new package is xcore . figure [ fig : metapackage2 ] shows that the meta - package of orderprocessing has been changed to beans . note that the super - package of orderprocessing has not been changed , so it stays as the default empty package and therefore orderprocessing is not itself a meta - package . a dsl does not require a concrete syntax , but it is generally important for usability to have one . dsls that are defined via meta - packages get a concrete diagram syntax for free . the tooling that is specified in section [ sec : metapackages ] detects the new modelling concepts and provides new palette buttons and labelling as appropriate . in addition to diagram syntax for a language , textual syntax is often useful . xmf has a text language for package definition that supports the declaration of meta - package information . this is exactly the same as the diagram tooling : the same text processing engine is used even though the language has changed via meta - package extension . the following shows how the orderprocessing bean container can be defined using xmf package definition :
....
orderprocessing metaclass beancontainer metapackage beans namedelement isabstract name : string end end order metaclass entitybean identifier metaclass beanattribute : integer end address metaclass beanattribute : string end customer metaclass beanattribute : customer end product metaclass beanattribute : product end end customer metaclass entitybean extends namedelement end product metaclass entitybean extends namedelement amount metaclass beanattribute : integer end end end ....a problem with the basic text definition shown above is that it does not provide support for setting the new meta - properties such as persistas .these can be set independently , but it is much better if an entity can be defined as a modular unit .in addition , the above syntax exposes implementation details about the modelling concepts by requiring their meta - classes to be specified .xmf supports extensible syntax via syntax - classes which are normal classes that define grammars for processing text .this mechanism can be used to define a new concrete syntax for a dsl that translates into the basic definitions given above .the following is a dsl for beans that has been used to define the order processing system : .... orderprocessing entity namedelement name ( name ) : string end entity order(order_table ) [ namedelement ] * identifier(order_id ) : integer address ( shipping_address ) : string customer ( customer_ref ) : customer product ( product_ref ) : product end entity customer extends namedelement entity product extends namedelement amount ( amount ) : integer end end .... the example above hides all of the implementation detail .this is achieved by defining a new syntax class for beancontainer as shown below : .... beancontainer beancontainer : : = n = name es = entity * { [ | let p = < n > metaclass beancontainer metapackage beans end in < es->iterate(e x = [ | p | ] | [ | p.add(<e > ) ; < x > | ] ) > end | ] }. entity : : = n = name p = persist s = super as = att * { [ | let c = < n > metaclass entitybean extends < s > end in c.persistas : = < p.lift ( ) > ; < as->iterate(a x = [ |c | ] | [ | < x>.add(<a > ) | ] ) > end | ] } .persist : : = ' ( ' name ' ) ' .super : : = ' [ ' type ' ] ' .att : : = i = isidn = name p = persist ' : ' t = type { [ | let a = beanattribute(<n.lift()>,<t > ) in a.persistas : = < p.lift ( ) > end | ] } .end end ....constraint checking involves executable modelling ( being able to attach executable predicate expressions to classes as shown in figure [ fig : xcore ] ) .elements are well - formed when they satisfy all of the constraints defined by their class .in addition , containers , such as packages , are well formed when their contents are well - formed . in order for a bean container to be well - formed , all of the persistent elements must specify a name that can be used in the relational database : .... context persistent hasname persistas < > " " fail " must specify a persistent name . "end .... an entity bean is well formed when there is at most one bean attribute that designates a primary key : .... context entitybean oneid not a1,a2 in attributes - > a1 < > a2 and a1.isid and a2.isid end fail " can not have multiple ids . "end .... code generation involves a mapping from a model to source code .this is the essence of mda in which uml models are translated to code .however , uml does not allow access to the meta - level and therefore the scope for extending the modelling language with sophisticated translation mechanisms is limited . 
to translate from elements in figure [ fig : beans ] to the source code shown in section [ sec : example ] we can also use executable meta - modelling technology .a code export operation is defined for each class that is to be translated .xmf provides _ code - template _ technology that makes it easy to generate code .the following template : .... (out , leftmargin ) class < n > { } end .... writes an empty class definition to the output channel , newlines tab to leftmargin . within the java code templateall text is faithfully written to the output except for expressions delimited by < and > .such expressions are evaluated and then written to the output channel . in the example above , the generated class will have a name that is the value of the variable n. within expressions , the use of [ and ] delimiters switch back to literal code and nesting of [ , ] and < , > is permitted .entity beans are translated to code as follows : .... context entitybean code(out : outputchannel ) (out,7 ) (name=" < persistas > " ) public class< name > { < a in attributes when a.iskindof(beanattribute ) do [ private < a.typename ( ) >< a.name > ; ] end ; a in attributes when a.iskindof(beanattribute ) do a.code(out ) end > } end end .... each bean attribute is translated to code as follows : .... context beanattribute code(out : outputchannel ) let name = name.tostring ( ) then name = name.uppercaseinitialletter ( ) in (out,7 ) < if isid then [ ] end > (name=" < persistas > " ) < if self.canget ( ) then self.getcode(out,name,name ) end > < if self.canset ( ) then self.setcode(out,name,name ) end > end end end context beanattribute getcode(out , name , name ) (out,5 ) public < self.typename ( ) > get < name > ( ) { return < name > ; } end end context beanattribute setcode(out , name , name ) (out,5 ) public void set < name>(<self.typename ( ) > < name > ) {this.<name > = < name > ; } end end ....this paper has identified two broad approaches to dsls for design notations : standards based and user defined .standards , notably the uml family , offer mature tooling and interchangeable models , but can be bloated and lack mechanisms for sophisticated extensibility . technologies for user defined notations offer arbitrary flexibility but at a cost of complexity and starting from scratch each time .meta - packages is an meta - modelling based approach to defining languages that allows tooling to be developed that does not require significant modification each time a new language is developed . since the approach uses true meta - modelling , object - oriented techniques can be used to define semantics for the new language features .meta - packages have been specified and a simple example dsl for enterprise systems has been described .meta - packages are an approach rather than a single technology .the key features are a single meta - circular meta - model ( xcore defined here ) , executable modelling , and the meta - package relationship .this combination of features guarantees that any tooling based on the base meta - language will work with any new language that is defined .the xcore language as defined in this paper is rather small .meta - packages are supported by xmf - mosaic where xcore is much larger , however the principles are the same .one important feature supported by xmf - mosaic is the ability for diagrams to render element - nodes and slot - edges .since we advocate a true meta - modelling approach , everything is ultimately an instance of the class element .packages can contain elements . 
if diagrams can render elements and represent slot - values via edges then _any _ package element can be represented on a diagram and related to their owner .this feature guarantees that any language can be supported by tooling that is parameterized with respect to the base language , even if the element is not a specialization of the basic meta - concepts ( package , class , attribute etc ) .for example , classes could be extended to represent components with ports .a port can be represented on a diagram as a basic element with a slot - value edge from the component to the port .xmf - mosaic is available open - source under epl from the ceteva web - site ( http://www.ceteva.com ) .virtually all other tools for dsl definition do not implement the golden braid ( meta - model - instance ) .for example , gmf models are instances of ecore but can not themselves have instances . the same is true of uml and therefore uml tooling and of visual studio dsl tools .the golden braid is a key feature in the meta - package approach since it allows tooling to be defined that is reusable with models that can be extended with their own semantics .n. georgalas , m. azmoodeh , a. clark , a. evans , p. sammut , and j.willans .mda - driven development of standard - compliant oss components : the oss / j inventory case - study . in 2nd european workshop on mda ,kent , uk , 2004 .n. georgalas , m. azmoodeh . using mda in technology - independent specifications of ngoss architectures .first european workshop on model driven architecture with emphasis on industrial application ( mda - ia 2004 ) , enschede , the netherlands ( march 2004 ) .georgalas n. , azmoodeh m. and ou s. model driven integration of standards based oss components .proceedings of the eurescom summit 2005 on ubiquitous services and application , heidelberg , germany ( april 2005 ) .pierre cointe .metaclasses are first class : the objvlisp model . in normanmeyrowitz , editor , proceedings of the conference on object - oriented programming systems , languages and applications , pages 156165 .acm , october 1987 .
domain specific languages are used to provide a tailored modelling notation for a specific application domain . there are currently two main approaches to dsls : standard notations that are tailored by adding simple properties , and new notations that are designed from scratch . there are problems with both of these approaches , which can be addressed by providing access to a small meta - language based on packages and classes . a meta - modelling approach based on _ meta - packages _ allows a wide range of dsls to be defined in a standard way . the dsls can be processed using standard object - based extension at the meta - level , and existing tooling can easily adapt to the new languages . this paper introduces the concept of meta - packages and provides a simple example .
the ability to determine an unknown quantum state produced by some source is central for many applications in quantum information science .the procedure of reconstructing the quantum state , known as quantum state tomography , has therefore been under intense investigations and continues to attract a lot of attention . in the continuous variable regime , and in particular its quantum optical realizations , there are two commonly used approaches to quantum tomography . in optical homodyne tomography , the set of rotated quadratures is measured using balanced homodyne detection , thus allowing one to scan the phase space of the system .the alternative method uses the husimi -function which can be measured using a double homodyne detection scheme , and has the advantage that the reconstruction requires the measurement of only a single observable . both of the above instances fall under the class of gaussian measurements , i.e. , measurements which yield a gaussian measurement outcome distribution whenever the system is initially in a gaussian state .the purpose of this paper is to present a method for investigating whether or not a given set of such gaussian observables is informationally complete , i.e. , allows the reconstruction of an unknown quantum state of the system from the statistics .we consider an -mode electromagnetic field , whose phase space is therefore -dimensional . we show that by measuring a gaussian observable one obtains the values of the weyl transform of the state on a linear subspace of the phase space .therefore , for a set of observables the union of these subspaces needs to be `` sufficiently large '' in order for unique state determination to be possible .in particular , we show that if one does not have access to a single informationally complete gaussian observable , then one necessarily needs infinitely many observables . after these general results we focus on two specific instances .firstly , we investigate single informationally complete gaussian observables in more detail .we show that if we restrict to the smallest possible dimension of the outcome space , then the set of informationally complete gaussian observables is exhausted , up to linear transformations of the measurement outcomes , by gaussian observables which are covariant with respect to phase space translations .furthermore , we show that in a suitable symplectic coordinatization of the phase space , any informationally complete gaussian observable with a minimal outcome space is a postprocessing of the -function .secondly , we study commutative gaussian observables which then include projection valued ( also called sharp ) gaussian observables as special cases .we show that no finite set of such observables is informationally complete . for an arbitrary set of generalized field quadratures , i.e. 
, sharp gaussian observables with one - dimensional outcome space , we prove a further characterization for informational completeness .we also find an interesting connection between the generalized quadratures and informationally complete gaussian phase space observables .finally , we consider a more general scenario where the measurement coupling is represented by a general linear bosonic channel .also in the general case we obtain a characterization for informational completeness and deal explicitly with general covariant phase space observables .the hilbert space of an electromagnetic field consisting of bosonic modes is the -fold tensor product , where each single mode hilbert space is spanned by the number states .the creation and annihilation operators related to the mode are denoted by and . in the coordinate representation where the number states are represented by the hermite functions .the hilbert space of the entire field is then .the states of the field are represented by positive trace one operators acting on , and the observables are represented by positive operator valued measures ( povms ) defined on a -algebra of subsets of some measurement outcome set . in this paper , we will only consider observables taking values in .each observable is thus represented by a map , where is the borel -algebra of and denotes the set of bounded operators on , and which satisfies positivity , normalization and -additivity for any sequence of pairwise disjoint sets where the series converges in the weak operator topology . when a measurement of is performed on the system initially prepared in a state , the measurement outcomes are distributed according to the probability measure } ] and is the covariance matrix whose elements are } ] , we define the observable via and say that is a smearing of .note that for any state the corresponding probability distribution is just a convolution of the original one with the measure , i.e. , .if is gaussian , i.e , with , then the smeared observable is also gaussian and , using the fact that , we find that the parameters of the smeared observable are .since is nonzero everywhere , the smearing is informationally complete if and only if is .we will next construct a unitary measurement dilation for an arbitrary gaussian observable .this is done by first showing that any gaussian observable can be measured by applying a gaussian channel to the field and then performing homodyne detection on the output of the channel . since unitary dilations of gaussian channelsare known , this then allows us to construct the desired measurement dilation .suppose first that we have a gaussian channel determined by the parameters .let be the canonical spectral measure , i.e. , corresponds to multiplication by the indicator function of the set , and define the observable as .the characteristic function of is then where . for any denote so that , and we obtain by eq .. finally , we define the -matrix , the -matrix and the vector by so that the characteristic function can be expressed as in other words , the observable is gaussian .conversely , suppose that we have a gaussian povm determined by the parameters .we need to find parameters of a gaussian channel such that eqs .hold . to this end , first define the matrix by setting and otherwise , and then set so that . define also the matrix via and otherwise , and the vector in a similar manner as and otherwise . 
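as a small numerical aside on the smearing operation introduced earlier in this section : convolving gaussian outcome statistics with a gaussian smearing measure adds the covariance matrices and the displacement vectors . the sketch below ( python / numpy ; the matrices s , k and vectors m , w are arbitrary stand - ins for the covariance and displacement parameters , not values taken from the text ) checks this by sampling .
....
# illustrative check: smearing a gaussian outcome distribution with a gaussian
# kernel adds the covariance matrices and the displacement vectors.
import numpy as np

rng = np.random.default_rng(0)

S = np.array([[1.0, 0.3], [0.3, 0.8]])   # covariance of the unsmeared statistics (assumed)
m = np.array([0.5, -0.2])                # its mean
K = np.array([[0.4, 0.0], [0.0, 0.6]])   # covariance of the smearing measure (assumed)
w = np.array([0.1, 0.0])                 # its mean

n = 200_000
x = rng.multivariate_normal(m, S, size=n)   # outcomes of the original observable
y = rng.multivariate_normal(w, K, size=n)   # independent smearing noise
z = x + y                                   # outcomes of the smeared observable (convolution)

print(np.allclose(z.mean(axis=0), m + w, atol=2e-2))   # mean adds: m + w
print(np.allclose(np.cov(z.T), S + K, atol=2e-2))      # covariance adds: S + K
....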
in order to prove the validity of the complete positivity condition, we note that by our construction this reduces to showing that which is an immediate concequence of and the definitions of the matrices in question . in order to reach the desired unitary dilation , we recall that any gaussian channel can be realized by coupling the -mode system to auxiliary gaussian modes , and then applying two types of unitary operators to the total system : symplectic transformations with , and displacements with ( see fig . [fig : dilation ] ) .the number of auxiliary modes can always be chosen so that , though the optimal choice depends on the details of the channel .the rest of this section is devoted to demonstrating how the parameters of the gaussian channel , and hence the corresponding observable , are determined by the dilation of the channel .input modes are coupled to auxiliary modes by two unitary couplings : a displacement and a symplectic unitary .after the coupling , homodyne detection is performed on output modes , while the other modes are discarded.,width=340 ] now suppose that the auxiliary -mode field is in some gaussian state with covariance matrix and displacement vector . since we are interested only in modes , we must discard the other modes after the interaction .the resulting channel is then given by \ ] ] where ] then reads } = \widehat{\rho}\ , ( { \bf a}_0{\bf p } ) e^{-\frac{1}{4}{\bf p}^t { \bf b}_0 { \bf p } - i { \bf v}^t_0{\bf p}},\ ] ] and by linearity , also holds with replaced by a general trace class operator in which case the measure is complex valued . since the gaussian term on the right - hand - side of eq . is non - zero, we can divide both sides by it .hence , by measuring we are able to determine the values for all in the set which is clearly a subspace of .we are now ready to prove the main general result of this paper . [prop : ic_set_of_gaussian_povms ] a set of gaussian observables is informationally complete if and only if is dense in .if is dense in , then by measuring the observables one can determine the values of on a dense set , so that by the continuity of the weyl transform , is uniquely determined .conversely , assume that is not dense. then there exists a nonempty open set in the complement of the closure of , so by lemma [ lemma : open_set ] , there exists a nonzero whose weyl transform vanishes outside . by and the injectivity of the fourier transform we then have } = 0 ] , . then the triple determines a covariant gaussian observable for which .next we take into account the symplectic structure of the phase space .the change of the canonical coordinates via a symplectic matrix transforms a phase space observable into the phase space observable given by in particular , a covariant gaussian observable parametrized by transforms into the covariant gaussian observable given by , because . since the positivity condition reduces to , we can use williamson s theorem to choose the symplectic matrix such that it diagonalizes , i.e. , , where the symplectic eigenvalues of satisfy . letting ] , such that , and we have in conclusion , we have proved that a gaussian observable is commutative exactly when it is a gaussian smearing of a sharp gaussian observable . since sharp observablesare commutative , we can use the along with condition to characterize sharp gaussian observables . 
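for a finite collection of gaussian observables the density condition above degenerates to a rank condition : a finite union of proper subspaces can never be dense , so a finite set can only be informationally complete if at least one of its matrices has full rank 2n , i.e. if a single member is itself informationally complete . the sketch below ( python / numpy ; the example matrices are illustrative choices , not taken from the text ) applies this check to two quadratures and to a rank - two phase - space - like observable for a single mode .
....
# rank check for finite sets of gaussian observables, per the discussion above:
# informational completeness of a finite set requires some parametrizing matrix
# of full rank 2n (its associated subspace is all of phase space).
import numpy as np

def finite_set_informationally_complete(A_list, n_modes):
    """A_list: matrices with 2*n_modes columns, one per gaussian observable."""
    full = 2 * n_modes
    ranks = [np.linalg.matrix_rank(np.atleast_2d(A)) for A in A_list]
    return ranks, any(r == full for r in ranks)

# single mode: two quadratures each give a one-dimensional subspace, so the pair
# is not informationally complete; a rank-two phase-space observable is.
X  = np.array([[1.0, 0.0]])      # position-like quadrature (assumed parametrization)
P  = np.array([[0.0, 1.0]])      # momentum-like quadrature
PS = np.array([[1.0, 0.0],
               [0.0, 1.0]])      # phase-space-like observable, rank 2

print(finite_set_informationally_complete([X, P], n_modes=1))   # ([1, 1], False)
print(finite_set_informationally_complete([PS], n_modes=1))     # ([2], True)
....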
indeed , since now and the operators are unitary , we have that is sharp if and only if for all .but since is gaussian , this reduces to the condition .this implies , and since by , we have .hence , sharp gaussian observables are parametrized by with .note that this can be obtained from the unbiased sharp gaussian observable by shifting the outcomes with the vector . here`` unbiased '' refers to the fact that for any state with zero expectation for all canonical quadratures , , also the expectation of vanishes .it is known that a single commutative observable is never informationally complete ( * ? ? ?for commutative gaussian observables with minimal outcome space this follows immediately from cor .[ cor : ic_gaussian_povm ] , because we have . for a general commutative gaussian observable this can still be seen directly by noting that the rank of is at most , as can be seen by applying the frobenius inequality ( * ? ? ?* eq . 4.3.3(9 ) , p. 61) : cor .[ cor : ic_finite_gaussian ] then tells that no finite set of commutative gaussian observables is informationally complete .it is worth noting that in the case of a single mode ( ) , and minimal outcome space ( ) we have so that is informationally complete if and only if is noncommutative . in other words ,informational incompleteness is equivalent to the observable being a smearing of a sharp observable .note that this is not true for multiple modes . indeed , using the above result that for commutative observables , we see that if , any given -matrix with , together with a chosen to satisfy , determine a gaussian observable which is informationally incomplete and noncommutative . before proceeding to more specific cases ,we make one more observation .it follows from that a set of commutative gaussian observables with matrices , is informationally complete if and only if the set of the corresponding sharp observables is such . in other words , a set of commutative observablescan always be reduced to a set of sharp observables , without affecting informational completeness .accordingly , we concentrate mostly on sharp observables in the concrete examples .consider now the special case of a one dimensional outcome space . then for any gaussian observable , the matrix is , in fact , a vector in , and we let .the condition is automatically satisfied , so we conclude that every gaussian observable is commutative , and thus a gaussian smearing of the corresponding sharp observable . furthermore ,if is a gaussian observable with matrix , then is a gaussian smearing of with , exactly when .the observable has the characteristic function so that is simply the spectral measure of the selfadjoint operator these operators are sometimes referred to as a generalized field quadratures , and they have also previously been considered in the context of quantum tomography .[ prop : ic_set_of_gaussian_povms ] now reduces to the following result : [ prop : one_dim_case ] let , be gaussian observables with corresponding vectors . then is informationally complete if and only if is dense in the surface of the unit ball of .we now look at two particular examples in the case of a single mode ( ) . by setting and obtain the well known rotated quadrature operators , the corresponding observables being denoted by . from prop .[ prop : one_dim_case ] we immediately see that we can restrict our attention to the values , and that a set of the form where is informationally complete if and only if is dense in .this therefore sharpens the result of , e.g. 
, by proving that density is indeed also necessary .explicitly , the line on which the weyl transform can be determined from the measurement of a quadrature , is given by as a slight modification , we set and so that we obtain the squeezed rotated quadratures .in this case we have so that for any change in the value of the squeezing parameter causes a change in the slope of the line .for instance , we can fix only two values and consider a set of squeezing parameters . by prop .[ prop : one_dim_case ] , the set is informationally complete if and only if is dense in the unit circle , which happens exactly when is dense in . indeed , for each of the four choices of signs , the map bijectively parametrizes the part of the unit circle lying in the interior of the -quadrant .in a similar manner we can consider any finite number of values for .the benefit of adding more rotations is that for a fixed one only has to find a set of squeezing parameters which is dense in some interval ] is a probability measure , then which corresponds to eq . with .hence , in particular , if is an informationally complete phase space observable , then is informationally complete if and only if .we have proved a characterization for the informational completeness of an arbitrary set of gaussian observables . as a consequence , we have shown that unless one has access to a single informationally complete gaussian observable , then one needs infinitely many observables .we have characterized informationally complete gaussian observables which are minimal in the sense of having the smallest possible dimension of the outcome space , as the observables which are bijective linear postprocessing of covariant gaussian phase space observables .we have then developed this further and shown that any minimal informationally complete gaussian observable is actually a postprocessing of the observable for which the outcome distribution is the -function of the state , given a suitable symplectic coordinatization of the phase space .we have also treated commutative gaussian observables separately , and shown that infinitely many such observables are needed in order to reach informational completeness . as a special casewe have characterized informationally complete sets of generalized field quadratures , i.e. , sharp gaussian observables with one - dimensional outcome space , and proved a connection between informationally complete gaussian phase space observables and generalized field quadratures .since gaussian observables can be measured by combining gaussian channels with homodyne detection , we have also studied the natural generalization to the case where the gaussian channel is replaced by a linear bosonic channel .also in this case we have obtained necessary and sufficient conditions for the informational completeness of any set of such observables .js acknowledges support from the academy of finland ( grant no .138135 ) and the italian ministry of education , university and research ( firb project rbfr10coaq ) .jk acknowledges support from the european chist - era / bmbf project cqc , and the project ep / j009776/1 .
we prove necessary and sufficient conditions for the informational completeness of an arbitrary set of gaussian observables on continuous variable systems with finite number of degrees of freedom . in particular , we show that an informationally complete set either contains a single informationally complete observable , or includes infinitely many observables . we show that for a single informationally complete observable , the minimal outcome space is the phase space , and the observable can always be obtained from the quantum optical -function by linear postprocessing and gaussian convolution , in a suitable symplectic coordinatization of the phase space . in the case of projection valued gaussian observables , e.g. , generalized field quadratures , we show that an informationally complete set of observables is necessarily infinite . finally , we generalize the treatment to the case where the measurement coupling is given by a general linear bosonic channel , and characterize informational completeness for an arbitrary set of the associated observables .
in the next years , the first generation of large interferometric gravitational - wave detectors should reach a sensitivity good enough to expect the first direct detection of gw signals . in parallel of the experimental work consisting in operating the detectors at their working point with background noises as small as possible , the future data analysis methods are being prepared for the various expected sources of gw , each of them requesting specific tools . yet , the wiener filtering is used in most of these fields due to its optimal characteristics for signals whose time evolutions are known . indeed , it is not only the filter giving the highest signal - to - noise ratio ( snr ) among all the linear ones but it has also the property of having the lowest false dismissal rate for a given false alarm rate among all filters .conversely , its main drawback is its poor robustness : as soon as the physical signal and the filter do not match exactly , the snr can be dramatically reduced . even if the searched waveform is analytically computed with high precision, it always depends on a vector of parameters whose values are specific to a given source and thus unknown e.g. its mass , the main frequency of emission ... and whose accurate estimation is indeed a major aim of the data analysis .the set of physically possible values for the vector is the continuous parameter space .a given filter can only be efficient i.e. recovering a large fraction of the snr in a restricted region of this space , called the efficiency area .therefore , many different templates must be used in parallel for a matched filtering procedure , in order to cover the whole parameter space .the lattice must be dense enough to ensure a minimal loss in snr for any real signal whose parameters do not exactly match any of the available filters , while the number of templates must be kept small to limit the computing power needed for the analysis .given these two requirements , it clearly appears that the choice of the set of templates is very important as it has major consequences on both the ultimate filtering performances and its feasibility . on the other hand ,tiling is most of the time very difficult as the efficiency area of a filter depends on and on the minimal match see section [ subsection : formalism ] .thus , in the general case , uniform coverage is not possible . in the one - dimensional case ( )the tiling is easy as the efficiency areas are straight segments of various lengths which can be aligned one after the other one see e.g. where the matched filtering with gaussian peak templates ( only depending on their width ) is studied .as soon as , one has to cope with overlaps , holes and borders . the problem of computing a set of templates for matched filtering purpose has already been studied for the compact inspiralling binaries search .the gw signal is accurately estimated thanks to various development methods taylor or pade post - newtonian ( pn ) expansions and both the estimated snr and the event rate make such events good candidates for a first detection . if the spins are neglected and the orbit circular , the parameter space is two - dimensional ( the two masses of the stars ) and the number of templates can be very high if the low mass region is included in . using the formalism defined in ref. ,owen and sathyaprakash present a method to cover this parameter space at the 2 pn order . 
to solve the question of placing new filters with respect to the previous ones, rectangles inscribed inside the efficiency areas are used instead of the real ellipses , bigger but more difficult to place accurately i.e. without holes . in this paper , a different tiling approach is studied for the case .the main idea is to construct the lattice of filters with an iterative method ensuring a ( rather optimal ) _ local _ template placement .the location of the first ellipse center is arbitrary and chosen by the user .then the procedure goes on and new ellipses increasing the coverage of are added one after the other . when it stops , the tiling of the full parameter space is normally complete fully covered but is certainly not optimal from the _ global _ point of view .as described in section [ section : tiling_details ] , the overlapping in the final configuration can be so important that a large fraction of ellipses are in fact completely covered by other ones and thus useless for the detection purposes , while wasting computing time .therefore , a second step is then started to clean the list of templates by selecting these ellipses and erasing the corresponding filters . in the end ,the final set of templates all useful only depends on the position of the first filter in the parameter space from which the locations of all the other ones have been iteratively computed . due to border effects which can not be properly estimated ,it is impossible to define the best initial position .so , a last trick to further decrease the number of templates is to merge some sets of templates corresponding to different starting points and to apply to the full list the reduction procedure previously mentioned . as our teamis mostly involved in the search for gw bursts ( short gw signals usually lasting a few milliseconds for which waveforms can not be predicted as accurately as for coalescing binaries , emitted by e.g. supernovae or the merging phase of compact stars inspiralling one around the other ) , this algorithm was originally developed to search for damped sine - like signals .indeed , such time - behavior can be seen in a large fraction of supernova explosion gw signals computed numerically see e.g. the waveforms of ref. and is also expected to occur when excited black holes come back to equilibrium . the latter case will be used in the following as a benchmark of the tiling method performances ; details are presented in section [ section : damped_sine ] .another interesting feature of this example is that the number of templates remains small even when the allowed mismatch loss of snr is kept very low .thus , it is possible to check the quality of the tiling ( with respect to the prescription originating the tiling method ) and to see how the characteristics of the set of templates evolve when changes . this experience may be very useful for the inspiralling binaries search whose tilings can not be tested so easily yet , they must show similar behaviors .in particular , the most significant result we obtain is that one should be able to reduce the constraint on the template spacing while keeping small the mean false dismissal rate of events , ultimately the only important quantity in the search of rare signals occurring at random time .therefore , a much less numerous set of filters would still be efficient enough , but at a smaller computing cost .after the introduction of some hypothesis and useful notations , the main steps of the tiling method are presented . 
as a linear filteringconsists in correlating the detector output with the corresponding filtering function , one defines the following scalar product to represent the filtering operation : where is the one - sided power spectrum density and the `` '' symbol means fourier transform .the normalization is chosen so that , in case of signal alone , is equal to the snr .if one now assumes that is almost monochromatic signal of frequency , one can show that if the noise spectrum is nearly flat around , eq .( [ eq : inner_product_frequency ] ) becomes approximately equal to the following equation : as the definition of the ambiguity function involves normalized templates see section [ subsubsection : ambiguity ] , the frequency - depending factor vanishes .thus , representing the filtering operation by a scalar product in the time domain is in fact accurate also with a colored noise for signals with a narrow extension in the frequency domain. this will be approximately the case for the damped sine waveforms see ref. for a detailed discussion which will be used to test the tiling algorithm in the next sections .the ambiguity function between two templates et is defined by : the templates are normalized : . is thus a measurement of the closeness between two templates .it can also considered as a way to see how well the template can be used to detect the signal : the ambiguity function is the mean fraction of the optimal snr achieved at a given distance in parameter space .following ref. , if is small enough , the ambiguity function can be approximated by a second order power expansion the first order is null as the expansion is performed around the absolute maximum of . with defining a metric on the parameter space .finally , one defines the minimal match as the lower bound of the recovered snrs , which means that the loss of the snr due to the mismatch between the template and the signal must be kept below .following ref. , one can note that with this definition , a of course _ pessimistic _ estimator of the fraction of false dismissals is : for instance , with , one has .this value of is usually found in the literature as the correspondence with this particular value of is easy .this quantity is the only input parameter of the tiling procedure ; it allows one to define precisely the efficiency area of the template by the following equation : the area of including all the vectors which match this inequality is the inner part of an ellipsoid centered on whose proper volume scales as .the average fraction of recovered snr for physical signals with parameters belonging to is at least equal to . tiling the parameter space consists of finding a set of filters so that the union of the surfaces completely covers .the aim is to achieve this task by using as few templates as possible to keep the computing cost manageable .to do so , one has to solve two related questions : * which tiling algorithm to use ? * how to test its quality ?the main problems encountered by any tiling procedure are : overlapping , gaps , areas lost beyond the physical parameter space ... 
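the ambiguity function and the quadratic mismatch behaviour described above can be probed numerically . the sketch below ( python / numpy ) evaluates the flat - noise , time - domain inner product between normalized damped sine templates h(t ) = exp(-pi f t / q ) sin(2 pi f t ) , assuming the usual convention q = pi f tau , and shows how the mismatch grows with a small frequency offset ; the sampling rate , duration and parameter values are illustrative only .
....
# numerical sketch of the ambiguity function for damped sine templates, using the
# time-domain inner product (flat-noise / narrow-band approximation described above).
import numpy as np

fs = 16384.0                        # sampling rate in hz (assumed)
t  = np.arange(0.0, 1.0, 1.0 / fs)

def template(f, Q):
    h = np.exp(-np.pi * f * t / Q) * np.sin(2.0 * np.pi * f * t)
    return h / np.sqrt(np.dot(h, h))          # normalize: <h, h> = 1

def match(f1, Q1, f2, Q2):
    return np.dot(template(f1, Q1), template(f2, Q2))

f0, Q0 = 1000.0, 10.0
for df in (0.0, 2.0, 4.0, 8.0):
    m = match(f0, Q0, f0 + df, Q0)
    print(f"df = {df:4.1f} hz   match = {m:.5f}   mismatch = {1.0 - m:.2e}")
# for small df the mismatch grows roughly quadratically in df, the behaviour
# assumed in the metric expansion of the ambiguity function above.
....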
the difference between the ellipsoid and the real border of the efficiency area is not really taken into account : in practical cases for one - step searches , is close to 1 and thus the ellipsoid is assumed to be a good approximation of the ambiguity surface .moreover , the overlapping between close ellipses is an advantage to discard such a problem .in fact , the tests of the sets of templates constructed with the tiling method presented below show that the tilings have no significant holes , even with the choice , which validates a posteriori this hypothesis . clearly , the value of from which the ellipsoid approximation of the efficiency area is no longer valid depends on the precise tiling problem considered .it is well - known that the optimal tiling of an infinite plane by identical disks of radius consists in placing their centers on an hexagonal lattice , separated by a distance of see figure [ fig : optimal_circle_tiling ] . in this way, the overlapping is minimized and the ratio between the sum of the disk areas and the surface effectively covered is this value would increase in any real situation due to the border effect .this property of circular tiling is used by the algorithm on the following way .once the location of a template in has been chosen , one computes its efficiency area . through a simple plane transformation, becomes a circle of unit radius and the six neighbor centers are placed on the regular hexagonal lattice previously described .this choice is nearly optimal if the shape of the efficiency areas is slowly varying with respect to their characteristic dimensions . using the inverse transformation ,the six new center positions are given in .then , each ellipse associated with a new center is tested in order to check if it covers a part of not already covered by the previously defined ellipses . if the ellipse is useful , its center is added to the center list and will be used later to place other centers .for example , we have found that with only about 20% of the centers are kept .this iterative algorithm stops either when the full surface of the parameter space is covered or when no new ellipse can be placed any longer. a successfully completed coverage computed with the former procedure can be usually redundant in the sense that a large fraction of ellipses are completely covered by others . to save computing time, useless templates need to be identified and deleted . like for the first step of the algorithm, one has to face the problem that erasing an ellipse is a local operation which can have global consequences : which template should be discarded first ? to answer this question , the following procedure has been set : * for each ellipse belonging to the tiling , one defines its _ utility _ which is the fraction of it covers alone .useless ellipses are thus characterized by . * among those ellipses , the one with the smallest area in the parameter space is dropped .this prescription is consistent with the idea that it is a priori better to keep big ellipses which have a higher potential of coverage .of course this general rule may not always be true but it looks reasonable . 
* utilities are then updated for the surrounding ellipses as they become more `` useful '' locally increases .then , this scheme is iterated until all the remaining ellipses have non zero utilities .this second step of the tiling algorithm is very important : for most of the simulations in the damped sine case , the number of templates is reduced by a factor of about two with respect to the first list . finally , the output of the whole procedure is a set of templates completely covering the parameter space and which depends on only one initial condition , that is the location of the first filter set at the beginning of the tiling generation .once more , this choice may have global consequences on the tiling quality for instance , shifting it horizontally/vertically may cause a column / line of templates to appear or vanish but can not be simply optimized .therefore , tilings starting at different points on the parameter space are computed in parallel ; then , all the lists of templates are merged in a single one .finally , the cleaning procedure is applied to this clearly redundant set of filters to obtain the final lattice of templates .thanks to this last step , its size is decreased by 10 or 15% with respect to the individual coverages in the damped sine case . in the damped sine case, five different tilings have been merged together to compute the final lattice of templates .there seems to be a good compromise between gain in the number of templates and computation time ( more than 80% of the ellipses of the full list are useless and so the cleaning procedure is much longer ) .this last merging step doubles the total computation time of the tiling , i.e. its duration is of the same order of magnitude as the sum of the computation times for the initial tilings .three variables can be used to control the quality of the tiling . * the number of templates . * the ratio between the sum of all the ellipse areas and the parameter space surface . * the ratio between the sum of all the ellipse areas inside and the parameter space surface .one has clearly .these two estimators allow to measure the overlapping between templates and the fraction of extra area outside the parameter space , while can be compared with its estimation computed by integrating the proper volume of the parameter space . due to the imperfections of the real tiling ( overlapping , border effects ... ) and to the fact that the analytical computation of the template numbers is only approximative by principle , it is interesting to check the consistency of the two numbers .an excited black hole , born e.g. after a supernova collapse or the merging of two compact objects , comes back to a stationary state by emitting gw .this emission can be described as a superposition of black hole quasi - normal modes .the dominant mode is expected to be quadrupolar with the longest damping time . in order to limit the number of free parameters defining the matched filtering parameter space, one assumes that after some transitory phase the waveform becomes ( with a proper choice of time origin ) : it was observed that a given couple is connected to one single set of physical parameters , the mass and the reduced angular momentum of the black hole .moreover , the corresponding relation can be expressed analytically with a 10% precision . 
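Before specializing to these ringdown waveforms, the two geometric building blocks of the tiling procedure described above can be sketched as follows: placing the six neighbor centers on the local hexagonal lattice of a template, and the utility-based cleaning pass that drops redundant ellipses. This is a minimal sketch under stated assumptions: the local metric values, the minimal match, and the Monte-Carlo test of coverage are illustrative choices, not the paper's exact bookkeeping.

```python
import numpy as np

# Sketch of (1) hexagonal placement of neighbor centers around a template and
# (2) the utility-based cleaning pass.  g is the local 2x2 metric and mm the
# minimal match, so the efficiency ellipse is  dl^T g dl <= 1 - mm.

def hexagonal_neighbors(center, g, mm):
    """Six neighbor centers on the local hexagonal covering lattice."""
    w, e = np.linalg.eigh(g)                        # g = E diag(w) E^T
    lin = np.diag(np.sqrt(w / (1.0 - mm))) @ e.T    # maps the ellipse to the unit circle
    # optimal covering of the plane by unit disks: centers sqrt(3) apart
    ang = np.pi / 3.0 * np.arange(6)
    circle = np.sqrt(3.0) * np.column_stack([np.cos(ang), np.sin(ang)])
    return center + circle @ np.linalg.inv(lin).T   # back to parameter-space coordinates

def covered_by(template, points):
    """Mask of Monte-Carlo points lying inside one template's efficiency ellipse."""
    center, g, mm = template
    d = points - center
    return np.einsum('ij,jk,ik->i', d, g, d) <= 1.0 - mm

def clean(templates, points):
    """Drop ellipses with zero utility, smallest area first, until none is useless."""
    templates = list(templates)
    while True:
        cover = np.array([covered_by(t, points) for t in templates])
        counts = cover.sum(axis=0)
        useless = [i for i, c in enumerate(cover) if not np.any(c & (counts == 1))]
        if not useless:
            return templates
        # smallest ellipse area corresponds to the largest det(g); drop it and iterate
        drop = max(useless, key=lambda i: np.linalg.det(templates[i][1]))
        templates.pop(drop)
```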
introducing the quality factor , one has in geometrical units ( ) : \end{aligned}\ ] ]the normal mode frequency is , as expected , a decreasing function of the black hole mass , increasing with the rotation parameter as the quality factor , which is also independent of .therefore , detecting such gw signal would give direct physical information on its source .following ref. , two different methods estimating the optimal snr give identical results : * a calculation in the direct space assuming that the noise is white at the oscillation frequency ; * computing the fourier spectrum and approximating it by a dirac at the oscillation frequency .one has finally where is the one - sided power spectrum density of the noise .the proportionality constant depends on the physical process at the origin of the black hole formation . for high mass inspiralling binaries, it can be very large and leads to detections at cosmological distances , while for supernova collapses , it can be high enough to be detected in the local group . using this framework ,it is sufficient to study the two quadratures and to compute the corresponding sets of templates covering the two dimensional parameter space . as in the black hole oscillation case, no correlation is assumed between the oscillation frequency and the quality factor .therefore , is rectangular : \ ; \times \ ; \left [ f_{\text{min } } \ , ; \ , f_{\text{max } } \right ] \ ] ] its borders are chosen in the following way : * for q , by using the black hole normal mode range + * for f , by using the interferometric detector main characteristics + the mismatch between two templates of parameters and ) can be written ) there is a factor 2 between the ellipse coefficients and the metric ones : , and . ] with being the maximal expected loss in snr .the dimensionless coefficients , and only depend on the quality factor .their expressions are the following : * for pure damped cosines : * for pure damped sines : in the former case , assuming that the quality factor is much greater than 1 , the first order expansion of these coefficients in power of gives the results computed in ref. .the hypothesis is not valid in the total range ] with . assuming a uniform distribution of physical signals in this area , it is easy to compute the mean fraction of recovered snr : with , one gets which is very close to the value of achieved in our simulations .it is worth noting that the result is independant of ( and so of the particular efficiency area considered ) , which validates the assumption of computing the mean value of the fraction of snr recovered .one can estimate in the same way : the choice of gives , while with , one has .these values exceed the results of the numerical simulations ( respectively 2.6% and 10.7% ) but are quite comparable .how do these results change with a two - dimensional parameter space which is indeed the situation considered in this article ? by a proper choice of coordinates , one can like for the 1d - case assume that the chosen template is located at and that the efficiency area is defined by the equation of an ellipsoid written in its simplest form : straightforward calculations allow one to compute both the mean fraction of recovered snr and the mean loss of events : like for the one - dimensional case , these expressions do not depend on the particular template and are thus estimators of the mean values of these quantities . 
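These mean values can be checked numerically with a quick Monte-Carlo sketch. It assumes that physical signals are uniformly distributed inside one template's efficiency ellipse, that the mismatch grows quadratically up to 1 - MM at the border, and that the detection volume scales as the cube of the SNR, so the event loss follows the cubic law of the pessimistic estimator mentioned earlier; these modelling choices are assumptions of the sketch, not values quoted from the text.

```python
import numpy as np

# Monte-Carlo sketch of the 2D estimates: mean recovered SNR fraction and mean
# loss of events for signals uniform inside a template's efficiency ellipse.
rng = np.random.default_rng(0)

def mean_recovery(mm, n=200_000):
    r2 = rng.uniform(0.0, 1.0, n)        # r^2 is uniform for points uniform in a disk
    match = 1.0 - (1.0 - mm) * r2        # quadratic mismatch, reaching 1 - MM at the border
    loss = 1.0 - (match ** 3).mean()     # event loss, assuming detection volume ~ SNR^3
    return match.mean(), loss

for mm in (0.90, 0.97):
    avg, loss = mean_recovery(mm)
    print(f"MM = {mm:.2f}: mean recovered SNR = {avg:.4f}, mean event loss = {loss:.4f}")
```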
with ,one gets values higher than those computed numerically : and respectively .the discrepancy between these numbers and those given in the case is higher , even if the tiling studied here is 2-dimensional .the effect of the overlap between close templates for has been checked with the fully simplified model presented in figure [ fig : optimal_circle_tiling ] : efficiency areas are identical disks of a given radius . in this case , a more accurate expression of can be computed analytically by selecting for each physical signal the closest template , i.e. the one allowing to recover the highest fraction of snr . butthe gain is very small : the fraction of false dismissals decreases only by about 1 - 2% and remains in all cases higher than for the 1d - model .in fact , one could not expect much more from this refinement .figure [ fig : optimal_circle_tiling ] shows that only a fraction ( ) of the physical signals are affected by this improvement and that these particular signals have the worst matches .finally , it appears that the good performances in detection efficiency of the template bank are the consequence of two aspects : firstly , the parameter space considered in this study is much more complex than the ideal case of a parameter space where one could use a uniform lattice of templates , and secondly the generated tilings remain redundant despite all the steps used to reduce it as much as possible .there is certainly still room for future improvements for the geometrical tiling algorithm .the comparison of the numerical data with the analytical parameterization computed for the model ( 2d ) is also shown on figure [ fig : tiling_characteristics ] . on this plot , it is clear that it overestimates the snr losses with respect to the simulated tiling performances ; yet , it gives values closer to real data than the curve .thus , the fractions of false dismissals are also higher .this paper presents a method to tile any two dimensional parameter space with a given maximal loss in snr .the template list is build in two steps : the first one is an iterative algorithm which provides a complete coverage of without any area left uncovered .as the computed set of filters can be redundant , a second step including two cleaning steps allows one to significantly decrease the number of templates by dropping those which are useless .this algorithm has been used for the lattice of templates needed to look for damped sine signals with matched filtering .even if the number of filters involved is by some orders of magnitude below those computed for the inspiralling binaries search , this choice of waveform allowed us to strongly test the capability of the algorithm as the ellipse characteristics show large variations in or , equivalently that is curved .moreover , the algorithm using these templates can be directly implemented in an on - line filtering . 
as it is not possible to choose the optimal tiling of a given parameter space , the final configuration of filters is computed by mixing various sets of templates computed with different initial conditions ( the location of the initial filter used to develop the iterative algorithm ) and by applying to the global list the reduction procedure .this kind of average decreases the number of filters by about 15% with respect to an unique procedure .the number of templates finally computed is comparable to the analytical estimations and monte - carlo simulations show that the set of filters fulfills the initial requirement : minimizing the loss in snr in the whole parameter space . in the damped sine case, the study of the sets of templates shows that the effective loss of events ( as a function of ) is much less than its usual estimation : approximately behaves as instead of . with the usual prescription of 10% for the loss, the number of templates can be decreased by a factor close to 4 . seen in a reverse way , for a given prescription ,the snr fraction recovery is on average much higher than : 99.1% for as an example .even if the exponent value depends on the specific waveform details , such a behavior should be expected for the inspiralling binary case .this feature can allow us either to decrease the number of templates ( and then the computing power ) or perhaps obtain a more robust ( with respect to noise for example ) tiling of the parameter space .finally , simple parameterizations allowing one to predict the dependence in of the mean fraction of recovered snr and of the false dismissal fraction are presented .they are derived independently of the particular tiling method studied in this paper and should thus be generic .they do not give the power - law scaling of inferred from the results of our numerical simulations and they both overestimate the losses due to the finite lattice of templates w.r.t .the tilings we generated. it would be interesting to compare this situation with different tilings generated with other algorithms .t. zwerger & e. mller , _ astron ._ * 320 * 209 ( 1997 ) .+ h. dimmelmeier , j.a . font , and e. mller _ astron ._ * 388 * , 917 , ( 2002 ) .+ h. dimmelmeier , j.a . font , and e. mller _ astron ._ * 393 * 523 ( 2002 ) .
Searching for a signal that depends on unknown parameters in a noisy background with matched-filtering techniques always requires analyzing the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank, which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method which allows one to cover the corresponding physical parameter space by a set of ellipses, each associated with a given template. After a description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the de-excitation phase of black holes (the so-called ringdown signals) and are also encountered in some numerically computed supernova signals. First, the number of templates computed by the method is similar to its analytical estimation, despite the overlaps between neighboring templates and the border effects. Moreover, this number is small enough to test for the first time the performance of the set of templates for different choices of the minimal match, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than the minimal match, which dramatically decreases the mean percentage of false dismissals; indeed, it goes well below the value estimated from the minimal match used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to relax the constraint on the minimal match, and hence the number of templates and the computing power, without losing as many events on average as expected. This should be of great interest for the inspiralling-binaries case, where the number of templates can reach some hundreds of thousands for the whole parameter space.
one of the objectives of systems control is _ performance regulation _ , namely the output tracking of a given setpoint reference despite modeling uncertainties , time - varying system s characteristics , noise , and other unpredictable factors having the effects of system - disturbances .a commonly - practiced way to achieve tracking is by a feedback control law that includes an integrator .an integral control alone may have destabilizing effects on the closed - loop system , and hence the controller often includes proportional and derivative elements as well thereby comprising the well - known pid control .recently there has been a growing interest in performance regulation of event - driven systems , including discrete event dynamic systems ( deds ) and hybrid systems ( hs ) , and a control technique has been proposed which leverages on the special structure of discrete - event dynamics .the controller consists of a standalone integrator with an adaptive gain , adjusted in real time as part of the control law .the rule for changing the gain is designed for stabilizing the closed - loop system as well as for simplicity of implementation and robustness to computational and measurement errors .therefore it obviates the need for proportional and derivative elements , and can be implemented in real - time environments by approximating complicated computations by simpler ones . in other words ,the balance between precision and required computing efforts can be tilted in favor of simple , possibly imprecise computations .a key feature of the control law is that it is based on the derivative of the plant function , namely the relation between the system s control parameter and its output , which is computed or estimated by infinitesimal perturbation analysis ( ipa ) .this will be explained in detail in the following paragraphs .ipa is a well - known and well - tested technique for computing sample - performance derivatives ( gradients ) in deds , hs , and other event - driven systems with respect to controlled variables ; see for extensive presentations and surveys .its salient feature is in simple rules for tracking the propagations associated with a gradient along the sample path of a system , by low - cost algorithms .however , this simplicity may come at the expense of statistical unbiasedness of the ipa derivatives . in situationswhere ipa is biased , alternative perturbation - analysis techniques have been proposed , but they may require far - larger computing efforts than the basic ipa ( see ) . for the performance regulation technique described in this paper , it has been shown that ipa need not be unbiased and , as mentioned earlier , its most important requirement is low computational complexity .the control system we consider is depicted in figure 1 . 
assuming discrete time and one - dimensional variables, is the setpoint reference , denotes the time counter , the control variable is the input to the plant at time , is the corresponding output , and is the error signal at time .the control law is defined by the following equation , and we recognize this as an adder , the discrete - time analogue of an integrator , if the gain is a constant that does not depend on .the plant is an event - driven dynamical system whose output is related to its input in a manner defined in the next paragraph , and denoted by the functional term where is a random function .its ipa derivative is used to define the controller s gain via the equation and the error signal is defined as a recursive application of eqs .( 1)-(4 ) defines the closed - loop system . as for the plant , it can have the following form .consider a continuous - time or discrete - time dynamical system whose input is , and its state is for some ; the notation designates continuous time or discrete time .let be a function that is absolutely integrable over finite - length intervals .partition the time - axis into contiguous left - closed , right - open intervals , , called _ control cycles_. suppose that the input to the latter dynamical system has constant values during each interval , and it can be changed only at the boundary of these intervals . in the setting of the system of figure 1 , is the value of the input during , and can be either or where is the duration of .in the case of discrete time , a sum - term of the form replaces the integral .we do not specify how to determine the control cycles , and they can have an a - priori constant length , or their termination can be the result of certain events .for example , let the plant consist of an m / d/1 queue , is the value of the service time during , and is the time - average of the sojourn times of all jobs arriving during .ipa can be used to compute the derivative via a well - known formula .generalizing this example , suppose that the plant - system is a stochastic event - driven system ( deds or hs ) , is a control variable , assumed to have a constant value ( ) during , , is a random function of as indicated by eq .( 2 ) , and its derivative is computed by ipa . later we will be concerned with measurement and computational errors and hence modify eqs .( 2 ) and ( 3 ) accordingly .the development of the proposed regulation technique has been motivated primarily by applications to computer cores , especially regarding regulation of power and instruction - throughput by adjusting the core s clock rate ( frequency ) .concerning throughput , there are no prescriptive , let alone analytic models for the frequency - to - throughput relationship , and a complicated , intractable queueing model has had to be used for simulation . nonetheless a simple ipa algorithm has been developed and used to approximate the sample derivative for determining the integrator s gain via eq .the regulation technique was extensively tested on programs from an industry - based suite of benchmarks , splash-2 , using a detailed simulation platform for performance assessment of computer architectures , manifold .we reported the results in , and deemed them encouraging and meriting a further exploration of the regulation technique . 
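A minimal sketch of the closed loop of Eqs. (1)-(4) may help fix ideas: an integrator whose gain is the reciprocal of an estimated plant derivative. The plant below is a toy static nonlinearity observed with noise, and its noisy analytic derivative stands in for the IPA estimate; the setpoint, noise levels, and projection interval are all illustrative assumptions.

```python
import numpy as np

# Toy closed loop for Eqs. (1)-(4): adaptive-gain integral control where the
# gain is 1 / (estimated plant derivative).  The "IPA" derivative here is just
# the analytic derivative of the toy plant plus noise.
rng = np.random.default_rng(1)

def plant(u):                        # Eq. (2): noisy plant output over one control cycle
    return 50.0 * np.sqrt(u) + rng.normal(0.0, 0.5)

def ipa_derivative(u):               # stand-in for the IPA derivative estimate
    return 25.0 / np.sqrt(u) + rng.normal(0.0, 0.1)

r = 300.0                            # setpoint reference
u = 10.0                             # initial control input
for n in range(12):
    y = plant(u)
    e = r - y                        # Eq. (4): error signal
    a = 1.0 / ipa_derivative(u)      # Eq. (3): adaptive integrator gain
    u = min(max(u + a * e, 1.0), 100.0)   # Eq. (1), projected onto a constraint interval
    print(f"n={n:2d}  u={u:7.2f}  y={y:7.2f}  e={e:7.2f}")
```

The projection onto an interval mirrors the constraint-set remark made later in the convergence discussion, and multiplying the gain by a constant smaller than one, as done with Eq. (12) in the experiments, is a simple way to damp the oscillations such a Newton-like loop can exhibit on noisy plants.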
in the context of ipa research, this regulation technique represents two new perspectives .first , the traditional application of ipa throughout its development has been to optimization , whereas here it is applied in a new way , namely to performance regulation .second , much of the research of ipa has focused on its unbiasedness , whereas here , in contrast , the concern is with fast computation which may come at the expense of accuracy and unbiasedness .the main novelty of the paper as compared to references is in the fact that it concerns not simulation but an actual implementation . in thiswe were facing new challenges associated with real - time measurements , computations , and control .consequently we were unable to control each core separately as in , and hence applied the regulation method to a processor containing multiple cores .furthermore , due to issues with real - time computation , we were forced to take drastically cruder approximations to the ipa derivatives than in , and in fact it seems that we drove the degree of imprecision to the limit .how this worked on application programs will be seen in the sequel . in any event ,the work described here is , to our knowledge , the first implementation ( beyond simulation ) of ipa in a real - time control environment .the rest of the paper is structured as follows .section 2 summarizes relevant convergence results of the regulation technique in an abstract setting .section 3 describes the system under study and its model , presents simulation results on manifold followed by implementation on a state - of - the - art computer processor , and compares the two .section 4 derives some lessons from these results and proposes directions for future research .this section recounts established results concerning convergence of the regulation technique defined by recursive applications of eqs .( 1 ) to ( 4 ) , as summarized in ref .ideally convergence means that hence ( see figure 1 ) .this can be achieved under suitable assumptions ( mentioned below ) if the plant system is time invariant , and hence the function does not depend on . in that casethe control loop comprised of equations ( 1)-(4 ) essentially implements newton s method for solving the equation , for which there are well - known sufficient conditions for convergence .these include situations where the derivative term in eq .( 3 ) is computed in error , for which convergence in the sense of ( 5 ) is ascertained under upper bounds on the magnitude of the error . if the system is time varying , eq .( 5 ) may not hold true , and in this case convergence can be characterized by the equation where depends on a measure of the system s variability . to make matters concretelet be a differentiable function , and suppose that the term in eq .( 2 ) is a functional approximation of . assuming that is differentiable as well, we can view the term as an approximation to in ( 3 ) .however , for reasons that will be seen in the sequel , we add another layer of approximation to , denoted by , so that eq .( 3 ) computes the term . defining and , eqs .( 2 ) and ( 3 ) become and respectively .the regulation technique now is defined by recursive applications of eqs .( 1),(7),(8),(4 ) . 
to analyze its convergence ,suppose first , to simplify the discussion , that there exists a closed , finite - length interval , , such that every point computed by the regulation algorithm is contained in ; this assumption will be removed later .moreover , satisfies the following assumption .\(i ) the function and the functions are differentiable throughout .( ii ) the function is either convex or concave , and monotone increasing or decreasing throughout .( iii ) there exists such that .various ways to relax this assumption will be discussed shortly .the following result was proved in ._ proposition 2.3 and lemma 2.2 in _ : for every , , and , there exist , , and such that , for every interval satisfying assumption 1 and the following two additional conditions : ( i ) , and ( ii ) : 1 ) . if for some , and for , , , and , then 2 ) .if for all , , , and , then in the context of the system considered in this paper , what we have in mind is a situation where the plant is an event - driven system controlled by a real - valued parameter , is an expected - value function defined over a finite horizon ( hence not in steady state and possibly dependent on an initial condition ) , is a sample - based approximation ( possibly biased ! ) of over the control cycle , and is a sample approximation of .a few remarks concerning assumption 1 are due . 1 ) .the differentiability assumption is unnecessary , convexity of and almost - sure differentiability of at a given point suffice .these conditions often arise in the context of ipa . under these weaker assumptions the proofs in be carried out in the context of convex analysis rather than differentiable calculus . 2 ) .the condition that , , can be enforced in the case where is a constraint set for the sequence . in that case( 1 ) would be replaced by where is the projection of onto i , i.e. , the point in closest to .the proof of convergence is unchanged .often the magnitude of the error terms and can be controlled by taking longer control cycles , but there is no way to ensure the inequalities and _ for every _ , which is stipulated as a condition for eq .practically , however , with long - enough control cycles we can expect those inequalities to hold for finite strings of , thus guaranteeing the validity of eq .. if these strings are long enough , equation ( 9 ) would result in approaching 0 at a geometric rate , then periodically jumping away due to the sporadic occurrence of larger errors , but again returning towards 0 rapidly , etc .this behavior has been observed in all of the examples where we tested the regulation technique for a variety of event - driven systems .another source of the jitters described in the previous paragraph is the time - varying nature of the system .this is particularly pronounced in the system tested in this paper , since the workload of programs processed by a core can vary widely in unpredictable ways .nonetheless we shall see that the regulation algorithm gives good results .it is possible to ascertain the assumptions underscoring the analysis in for simple systems ( e.g. 
, tandem queues and some marked graphs ) , but may be impossible to ascertain them for more complicated systems .for instance , it can be impossible to prove differentiability or even convexity of an expected - value function from characterizations of its sample realizations , or bounds on the errors associated with ipa .moreover , some of these assumptions , including convexity or concavity , were made in in order to enable an analysis but may not be necessary .the aforementioned convergence results serve to explain the behavior of the regulation method that was observed in all of our former experiments as well as in those described later in this paper .the system - architecture considered in this paper is based on an out - of - order ( ooo ) core technology whereby instructions may complete execution in an order different from the program order , hence the `` out of order '' designation .this enables instructions execution to be limited primarily by data dependency and not by the order in which they appear in the program .data dependency arises when an instruction requires variables that first must be computed by other instructions .a detailed description of ooo architectures can be found in , while a high - level description is contained in .here we provide an abstract functional and logical description , and refer the reader to for a more - detailed exposition .the functionality of an ooo core is depicted in figure 2 .instructions are fetched sequentially from memory and placed in the instruction queue , where they are processed by functional units , or servers in the parlance of queueing theory . the queue is assumed to have unlimited storage and there is a server associated with each buffer .the processing of an instruction starts as soon as it arrives _ and _ all of its required variables become available .the instruction departs from the queue as soon as its execution is complete _ and _ the previous instruction ( according to the program order ) departs . in the parlance of computer architectures ,an instruction is said to be _ committed _ when it departs from the queue .the instruction - throughput of the core is defined and measured by the average number of instructions committed per second . [ !t ] instructions generally are classified as _ computational instructions _ or _memory instructions_. access times of external , off - chip , memory instructions are one - to - two orders of magnitude longer than those of computational instructions .therefore most architectures make use of a hierarchical memory arrangement where on - chip cache access takes less time than external memory such as dram. first the cache is searched , and if the variable is found there then it is fetched and the instruction is completed .if the variable is not stored in cache ( a situation known as _ cache miss _ ) then it is fetched from external memory ( typically dram ) and placed in the cache , whence it is accessed and the instruction is completed .external memory instructions can be thought of as being placed in a finite - buffer , first - in - first - out queue , designated as the _ memory queue _ in figure 2 . 
when this queue becomes full , the entire memory access , including cache ,is stalled .thus , there are three causes for an instruction to be stalled : a computational or memory instruction waiting for variables computed by other instructions , a memory instruction waiting for the memory queue to become non - full , and any instruction waiting ( after processing ) for the previous instruction to depart from the queue .we point out that instructions involving computation and cache - access are subjected to the core s clock rate , while memory instructions involving external memory , such as dram , are not subjected to the same clock .this complicates the application of ipa and may cause it to be biased . a quantified discrete - event model of this processis presented in the appendix , and a more general description can be found in , which also contains a detailed algorithm for the ipa derivative of the throughput as a function of frequency .we use a cycle - level , full system discrete event simulation platform for multi - core architectures , manifold .the simulated model consists of a 16-core x86 processor die , where each core is in a separate clock domain and can control its own clock rate independently of other cores . for a detailed description of the manifold simulation environment and its capabilities, please see ref . .we simulated two application programs from the benchmark - suite splash-2 , barnes and water - ns .barnes is a computation - intensive , memory - light application while water - ns is memory intensive . for each execution ,all of the 16 cores of the processor run threads of the same benchmark concurrently while each one of them is controlled separately .the control cycle is set to ms for both barnes and water - ns .the frequency range of the cores is set to ] .saturation at the highest level can correspond to a negative offset of the average throughput from its target level , since it indicates that the system may be unable to raise the throughput to a desired level . during the period of frequency saturation indicated in figure 4 ,the throughput shown in figure 3 it more jittery and sporadically attains slightly - lower values than after time 25ms .it also shows these characteristics between the end of the saturation period and time .therefore , the extent of the effects of the saturation on the aforementioned offset of 46.4 mips is not clear. nonetheless we mention this point since it will be more pronounced in some of the results on which we report later . also , we computed the average throughput in the intervals ] , after the jittery behavior of the throughput has somewhat subsided .the results are 1,192.6 mips and 1,192.9 mips , respectively , corresponding to offsets of 7.4 mips and 7.1 mips from the target throughput of 1,200 mips .these results suggest that the frequency saturation plays some role in the larger , 46.4-mips offset that was computed over the interval ] interval is 990.2 mips , corresponding to an offset of 9.8 mips of the throughput from its target value of 1,000 mips .the frequency saturated at its upper limit only at 5 isolated control cycles with minimal effects on the throughput . 
for the throughput target of 800 mips ,the results show a rise in the throughput from its initial value of 679.3 mips to 800 mips in 1.9 ms , or 19 iterations .the average throughput in the interval [ 1.9ms,100ms ] was 839.6 mips , which is 39.6 mips off the target value of 800 mips .there was a considerable frequency saturation at the lowest level of 0.5 ghz , which explains the positive offset .returning to the results for the target level of 1,200 mips , we considered a way to reduce the throughput oscillations and frequency saturation by scaling down the gain in eq .we did this by replacing eq .( 1 ) by the following equation , for a suitably - chosen constant . after some experimentation on various benchmarks ( excluding those tested here ) we chose .the resulting frequencies did not saturate throughout the program s run , and yielded an average throughput of 1,198,5 mips , which is 1.5 mips off the target level of 1,200 mips .though working well for this example , this technique may be problematic when used with an implementation rather than simulation , as will be discussed in the next subsection . for water - ns ,consider first the throughput target of 1,200 mips .simulation results of throughput and frequency are shown in figure [ fig : mipsmanifoldwaternsco10target1200 ] and figure [ fig : freqmanifoldwaternsco10target1200 ] , respectively .we notice greater fluctuations and more saturation than for barnes .in particular , figure [ fig : freqmanifoldwaternsco10target1200 ] shows three distinct periods of frequency saturations at its upper limit of 5.0 ghz , and figure [ fig : mipsmanifoldwaternsco10target1200 ] shows very low throughput during these periods . to explain this , recall that water - ns is a memory - heavy program , and execution times of memory instructions are longer ( typically by one or two orders of magnitude ) than computational instructions . during those periods the instructions of water - ns mainly concern memory access , which are low - throughput instructions .the controller is applying its highest frequency in order to push the throughput to its target value , but that frequency is not high enough to have much effect .this is why the periods of high - limit frequency saturation are characterized by very low throughput .this has a pronounced affect of lowering the average frequency measured during the program s execution .in fact , the throughput obtained from the simulation rises from its initial value of 429 to its target level of 1,200 mips in about 1.8 ms , or 18 control cycles ( this rise is not evident from figure [ fig : mipsmanifoldwaternsco10target1200 ] due to its insufficient granularity ) , and the average throughout from the time the target level is reached ( ) to the end of the program - run ( ) is mips , which is 73.2 mips off the target level of 1,200 mips . despite this offset , we observe that as soon as the program transitions from memory mode to computational mode , as indicated by the end of the saturation periods in figure [ fig : freqmanifoldwaternsco10target1200 ] , the throughput returns quickly to about its target level , as can be seen in figure [ fig : mipsmanifoldwaternsco10target1200 ] . 
for the target throughput of 1,000 mips ,simulation results show the throughput increasing from its initial value of 472.1 mips to the target level on 1,000 mips in 2.3 ms , or 23 control cycles .there is considerable frequency saturation at the high limit of 5.0 ghz .the average throughput in the interval ] is 862.6 , mips , hence meaning an offset of 62.6 mips of the throughput from its target value of 800 mips . all of these results are summarized in table i , showing the offset ( in mips ) of average throughput from target throughput , obtained from manifold simulations of barnes and water - ns with throughput targets of 1,200 , 1,00 , and 800 mips . returning to the throughput target of 1,200 mips , an application of the modified algorithm with in eq .( 12 ) yielded the average throughput of 1,143.6 mips which is 56.4 mips off the target level of 1,200 mips .this is a smaller offset than the 73.2 mips obtained from the unmodified algorithm , and it is explained by the fact that there is still considerable , though less frequency saturation than with the unmodified algorithm . [ !t ] [ !t ] [ !t ] [ ! t ].manifold simulations : offset of average throughput from target levels [ cols="<,<,<,<",options="header " , ] comparing the data summarized in table i and table ii , we that the regulation technique performs slightly better on the haswell implementation platform than on the manifold simulation environment .the reason for this may be due to the fact that in the simulation experiment we regulate the throughput of each core separately , while in the implementation we control the average throughput of all the cores in the processor .this paper describes the testing of an ipa - based throughput regulation technique in multicore processors .the testing was performed on both a simulation environment and an implementation platform . despite crude approximations that have had to be made in the implementation setting , the proposed technique performed slightly better than in the simulation setting .future research will extend the regulation method from a centralized control of a single processor to a distributed control of networked systems .this section provides a quantitative description of the instruction - flow in the ooo - cache high - level model described at the beginning of section iii .denote by , , the instructions arriving at the instruction queue in increasing order .let denote the clock rate , or frequency , and let be the clock cycle . denote by the arrival time of relative to the arrival time of , namely , and let be the clock counter at which arrives .then , denote by the time at which execution of starts , and let denote the time at which execution of ends .we next describe a way to compute .consider first the case were is a computational instruction .if all of its required variables are available at its arrival time then .on the other hand , if has to wait for such variables , let denote the index ( counter ) of the instruction last to provide such a variable , then .next , if is a memory instruction , then is the time it starts a cache access .if the memory queue is not full at time , then .on the other hand , if the memory queue is full at time , let denote the index of the instruction at the head of the queue , then , . 
to compute , consider first the case where is a computational instruction .let denote the number of clock cycles it takes to execute .then , .on the other hand , if is a memory instruction , let denote the number of clock cycles it takes to perform a cache attempt .if the cache attempt is successful and the variable is found in cache , then .if the variable is not in cache , the instruction is directed to the memory queue .its transfer there involves a small number of clock cycles , , hence its arrives at the queue at time .the memory queue is a fifo queue whose service time represents an external - memory access , which is independent of the core s clock .denote by the sojourn time of at the memory queue . then .finally , the departure time of from the instruction queue , denoted by , is . given a control cycle consisting of instructions ,the throughput is defined as . since , we can view the throughput as a function of and denote it by .its ipa derivative , , can be computed by following the above dynamics of the instructions flow .this , and a more detailed discussion of the model , can be found in .y. wardi , x. c. seatzu , chen , and s. yalamanchili , performance regulation of event - driven dynamical systems using infinitesimal perturbation analysis , under review in _ nonlinear analysis : hybrid systems _ , also in arxiv , ref .arxiv:1601.03799v1 [ math.oc ] , 2016 .woo , m. oharat , e. torriet , j. singhi and a. guptat , the splash-2 programs : characterization and methodological considerations , _ proceedings of the isca 22nd annual international symposium on computer architectures , ( isca95 ) _ , santa margherita ligure , italy , 1995 . j. wang , j. beu , r. behda , t. conte , z. dong , c. kersey , m. rasquinha , g. riley , w. song , h. xiao , p. xu , and s. yalamanchili , manifold : a parallel simulation framework for multicore systems , _ proc .ieee international symposium on performance evaluation of systems and software ( ispass ) _, 2014 .x. chen , h. xiao , y. wardi , and s. yalamanchili , throughput regulation in shared memory multicore prtocessors , _ proc .22nd ieee intl .conference on high performance computing ( hipc ) _ , bengaluru , india , december 16 - 19 , 2015 . c. seatzu , and y. wardi , performance regulation via integral control in a class of stochastic discrete event dynamic systems , _ proc .12th ifac - ieee international workshop on discrete event systems ( wodes14 ) _ , paris , france , may 14 - 16 , 2014 . s. browne , j. dongarra , n. garner , g. ho , and p. mucci , a portable programming interface for performance evaluation on modern processors , _ the international journal of high performance computing applications _, volume 14 , number 3 , pp .189 - 204 , fall 2000 .
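The appendix's instruction-flow model described above can be condensed into a short sketch. It keeps only data dependencies, clock-driven execution for computational instructions, and clock-independent latencies for external memory accesses, ignoring memory-queue blocking and cache behaviour; the instruction mix, cycle counts, and latencies are invented for illustration. It reproduces the qualitative point used throughout the paper: for memory-heavy streams the throughput saturates with frequency.

```python
# Condensed sketch of the instruction-flow model: commit time and throughput of
# a short stream as a function of the core frequency f.  Numbers are assumptions.

def throughput(f, instructions):
    """instructions: list of (kind, dep, cost); cost = clock cycles for 'compute',
    seconds of external-memory latency for 'memory'.  Returns instructions/second."""
    tc = 1.0 / f                                   # clock cycle
    end = {}                                       # execution end time of each instruction
    arrival = 0.0
    last_departure = 0.0
    for i, (kind, dep, cost) in enumerate(instructions):
        arrival += tc                              # one instruction fetched per cycle (assumption)
        start = max(arrival, end.get(dep, 0.0))    # wait for the instruction providing the data
        end[i] = start + (cost * tc if kind == "compute" else cost)
        last_departure = max(last_departure, end[i])   # in-order commit
    return len(instructions) / last_departure

stream = [("compute", None, 3), ("compute", 0, 2), ("memory", 1, 2.0e-7),
          ("compute", 2, 4), ("compute", None, 1), ("memory", 4, 1.5e-7)]

for f in (1.0e9, 2.0e9, 4.0e9):
    print(f"f = {f/1e9:.0f} GHz : throughput = {throughput(f, stream)/1e6:6.1f} MIPS")
```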
A new technique for performance regulation in event-driven systems, recently proposed by the authors, consists of an adaptive-gain integral control. The gain is adjusted in the control loop by a real-time estimation of the derivative of the plant function with respect to the control input. This estimation is carried out by infinitesimal perturbation analysis (IPA). The main motivation comes from applications to throughput regulation in computer processors, where, to date, the proposed control technique has been tested and assessed only by simulation. The purpose of this paper is to report on its implementation on real hardware, namely an Intel Haswell microprocessor, and to compare its performance to that obtained from a cycle-level, full-system simulation environment. The intrinsic contribution of the paper to the Workshop on Discrete Event Systems is in describing the process of taking an IPA-based design from simulation to a concrete implementation, thereby providing a bridge between theory and applications.
deep neural networks have recently been applied to various tasks , including image processing , speech recognition , natural language processing , and bioinformatics .although they have simple layered structures of units and connections , they outperform other conventional models by their ability to learn complex nonlinear relationships between input and output data . in each layer ,inputs are transformed into more abstract representations under a given set of the model parameters .these parameters are automatically optimized through training so that they extract the important features of the input data . in other words, it does not require either careful feature engineering by hand , or expert knowledge of the data .this advantage has made deep neural networks successful in a wide range of tasks , as mentioned above . however , the inference provided by a deep neural network consists of a large number of nonlinear and complex parameters , which makes it difficult for human beings to understand it .more complex relationships between input and output can be represented as the network becomes deeper or the number of units in each hidden layer increases , however interpretation becomes more difficult .the large number of parameters also causes problems in terms of computational time , memory and over - fitting , so it is important to reduce the parameters appropriately . since it is difficult to read the underlying structure of a neural network and to identify the parameters that are important to keep , we must perform experimental trials to find the appropriate values of the hyperparameters and the random initial parameters that achieve the best trained result . in this paper , to overcome such difficulties , we propose a new method for extracting a global and simplified structure from a layered neural network ( for example , figure [ fig : exp1 m ] and [ fig : exp3 m ] ) . based on network analysis, the proposed method defines a modular representation of the original trained neural network by detecting communities or clusters of units with similar connection patterns .although the modular neural network proposed by has a similar name , it takes the opposite approach to ours .in fact , it constructs the model structure before training with multiple split neural networks inside it .then , each small neural network works as an expert of a subset task .our proposed method is based on the community detection algorithm . to date , various methods have been proposed to express the characteristics of diverse complex networks without layered structures , however , no method has been developed for detecting the community structures of trained layered neural networks .the difficulty of conventional community detection from a layered neural network arises from the fact that an assumption commonly used in almost all conventional methods does not hold for layered neural networks : to detect the community structure of network , previous approaches assume that there are more intra - community edges that connect vertices inside a community than inter - community edges that connect vertices in mutually different communities .a network with such a characteristic is called assortative .this seems to be a natural assumption , for instance , for a network of relationships between friends . 
in layered neural networks , however , units in the same layer do not connect to each other and they only connect via units in their parent or child layers .this characteristic is similar to that of a bipartite graph , and such networks are called disassortative .it is not appropriate to apply conventional methods based on the assumption of an assortative network to a layered neural network .a basic community detection method that can be applied to either assortative or disassortative networks has been proposed by newman et al . in this paper , we propose an extension of this method for extracting modular representations of layered neural networks . the proposed method can be employed for various purposes . in this paper , we show its effectiveness with the following three applications . 1. * decomposition of layered neural network into independent networks * : the proposed method decomposes a trained neural network into multiple small independent neural networks .in such a case , the output estimation by the original neural network can be regarded as a set of independent estimations made by the decomposed neural networks . in other words, it divides the problem and reduces the overall computation time . in section [ sec : decomposition ] , we show that our method can properly decompose a neural network into multiple independent networks , where the data consist of multiple independent vectors .* generalization error estimation from community structure * : modularity is defined as a measure of the validity of a community detection result .section [ sec : gee ] reveals that there is a correlation between modularity and the generalization error of a layered neural network .it is shown that the appropriateness of the trained result can be estimated from the community structure of the network .* knowledge discovery from modular representation * : the modular representation extracted by the proposed method serves as a clue for understanding the trained result of a layered neural network .it extracts the community structure in the input , hidden , and output layer . in section [ sec : kd ] , we introduce the result of applying the proposed method to practical data .the remaining part of this paper is composed as follows : we first describe a layered neural network model in section [ sec : lnn ] .then , we explain our proposed method for extracting a modular representation of a neural network in section [ sec : communitydetect ] .the experimental results are reported in section [ sec : experiment ] , which show the effectiveness of the proposed method in the above three applications . in section [ sec : discussion ] , we discuss the experimental results . section [ sec : conclusion ] concludes this paper .we start by defining and a probability density function on .a training data set with a sample size is assumed to be generated independently from .let be a function from to of a layered neural network that estimates an output from an input and a parameter . for a layered neural network , , where is the weight of connection between the -th unit in the depth layer and the -th unit in the depth layer , and is the bias of the -th unit in the depth layer .a layered neural network with layers is represented by the following function : where a sigmoid function is defined by the training error and the generalization error are respectively defined by where is the euclidean norm of .the generalization error is approximated by where is a test data set that is independent of the training data set . 
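As a minimal sketch of the network function and the two error measures just defined: sigmoid units in every layer (applying the sigmoid to the output layer as well is an assumption here), training error as the mean squared distance on the training set, and generalization error approximated on an independent test set. The layer widths and random data are illustrative.

```python
import numpy as np

# Sketch of the layered network function and its training / generalization errors.
def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, weights, biases):
    """x: (n_samples, input_dim); weights[d]: (n_d, n_{d+1}); biases[d]: (n_{d+1},)."""
    z = x
    for w, b in zip(weights, biases):
        z = sigmoid(z @ w + b)
    return z

def mean_squared_error(x, y, weights, biases):
    diff = y - forward(x, weights, biases)
    return np.mean(np.sum(diff ** 2, axis=1))

rng = np.random.default_rng(0)
sizes = [5, 5, 5]                                    # input, hidden, output widths (assumption)
weights = [rng.normal(0, 1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0, 1, b) for b in sizes[1:]]
x_train, y_train = rng.random((100, 5)), rng.random((100, 5))
x_test, y_test = rng.random((100, 5)), rng.random((100, 5))
print("training error      :", mean_squared_error(x_train, y_train, weights, biases))
print("generalization error:", mean_squared_error(x_test, y_test, weights, biases))
```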
to construct a sparse neural network, we adopt the lasso method in which the minimized function is defined by where is a hyperparameter .the parameters are trained by the stochastic steepest descent method , where is the training error computed only from the -th sample . here , is defined for training time such that which is sufficient for convergence of the stochastic steepest descent .( [ eq : dw ] ) is numerically calculated by the following procedure , which is called error back propagation : for the -th layer , for , algorithm [ alg_bp ] is used for training a layered neural network based on error back propagation . with this algorithm, we obtain a neural network whose redundant weight parameters are close to zero .randomly sample from uniform distribution on . , , where and is the -th element of -th sample .( 1 ) output calculation of all layers : let be an output of the -th unit in the depth layer . .( 2 ) update weight and bias based on back propagation , where is a small constant . .here we propose a new community detection method , which is applied to any layered neural networks ( figure [ fig : com ] ( a ) ) .the proposed method is an extension of the basic approach proposed by newman et al .it detects communities of assortative or disassortative networks . the key idea behind our method is that the community assignment of the units in each layer is estimated by using connection with adjacent layers .as shown in figure [ fig : com ] ( b ) , a partial network consisting of the connections between every layer and its adjacent layers is represented in the form of two matrices : and .the matrix and represent the connections between two layers of depth and , and two layers of depth and , respectively .an element is defined as if the absolute value of the connection weight between the -th unit in the depth layer and the -th unit in the depth layer is larger than , otherwise , where is called a weight removing hyperparameter . in a similar way , is defined from the connection weight between the -th unit in the depth layer and -th unit in the depth layer . for simplicity, we denote and as and , respectively , in the following explanation .our method is based on the assumption that units in the same community have a similar probability of connection from / to other units .this assumption is almost the same as that in the previous method , except that our method utilizes both incoming and outgoing connections of each community , and it detects communities in individual layers .therefore , the community detection result is derived in a similar way to the previous method , as explained in the rest of this section .as shown on the right in figure [ fig : com ] ( b ) , the statistical model for community detection has three kinds of parameters .the first parameter represents the prior probability of a unit in the depth layer that belongs to the community .the conditional probability of connections for a given community is represented by the second and third parameters and , where represents the probability that a connection to a unit in the community is attached from the -th unit in the depth layer .similarly , represents the probability that a connection from a unit in the community is attached to the -th unit in the depth layer .these parameters are normalized so that they satisfy the following condition : our purpose is to find the parameters that maximize the likelihood of given matrices . 
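The maximization of this likelihood is carried out in the following paragraphs with an EM iteration. The sketch below anticipates it for a single layer in what is, to our reading, the standard form for this kind of mixture model (responsibilities q followed by closed-form updates of the priors and connection probabilities); it should be checked against the update equations derived below. The binary connection matrices are built by thresholding the weights with the weight-removing hyperparameter; the toy weights, the value of the threshold, the number of communities, and the initialization are all assumptions.

```python
import numpy as np

# EM sketch for the community model of one layer.  A[i, k] = 1 if the i-th unit
# of the parent layer connects to the k-th unit of this layer with |w| > theta,
# Ap[k, j] = 1 for connections from this layer to the child layer.  C is the
# number of communities.  Epsilon smoothing and iteration count are assumptions.

def threshold(weights, theta):
    return (np.abs(weights) > theta).astype(int)

def detect_communities(A, Ap, C, n_iter=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]                                    # units in this layer
    q = rng.dirichlet(np.ones(C), size=n)             # responsibilities q[k, c]
    for _ in range(n_iter):
        # M-step: priors and connection probabilities from the current soft assignments
        pi = q.mean(axis=0)
        tau = (A @ q).T + eps
        tau /= tau.sum(axis=1, keepdims=True)
        taup = (q.T @ Ap) + eps
        taup /= taup.sum(axis=1, keepdims=True)
        # E-step: q[k, c] proportional to pi_c * prod_i tau_{c,i}^A[i,k] * prod_j tau'_{c,j}^Ap[k,j]
        logq = np.log(pi + eps) + A.T @ np.log(tau.T) + Ap @ np.log(taup.T)
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1), q

# Toy layer with two clear communities of three units each.
A_toy = threshold(np.array([[2.0, 1.5, 1.2, 0.0, 0.1, 0.0],
                            [1.1, 2.2, 1.7, 0.0, 0.0, 0.1],
                            [0.0, 0.1, 0.0, 1.9, 1.3, 2.1],
                            [0.1, 0.0, 0.0, 1.4, 2.5, 1.8]]), theta=0.5)
Ap_toy = A_toy.T.copy()
print(detect_communities(A_toy, Ap_toy, C=2)[0])      # e.g. [0 0 0 1 1 1] up to relabeling
```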
to solve this problem, we introduce the community assignment , where is the community of the -th unit in the depth layer .the parameters are optimized so that they maximize the likelihood of and : where then , the log likelihood of and is given by here , the community assignment is a latent variable and is unknown in advance , so we can not directly calculate the above .therefore , we calculate the expected log likelihood over instead . where is the number of units in the depth layer . by defining above equation can be rewritten as follows : the parameter represents the probability that the -th unit is assigned to the community . in other words ,the community detection result is given by the estimated .the optimal parameters for maximizing of eq .( [ eq : exploglh ] ) are found with the em algorithm .the parameters with given are recursively optimized .if maximizes , then they satisfy \left [ \prod_j { \tau'_{c , j}}^{b_{k , j } } \right]}{\sum_s \pi_s \left [ \prod_i { \tau_{s , i}}^{a_{i , k } } \right ] \left [ \prod_j { \tau'_{s , j}}^{b_{k , j } } \right ] } , \label{eq : q}\end{aligned}\ ] ] and the denominator and numerator in the last term of eq .( [ eq : qdefine ] ) are given by \left [ \prod_j { \tau'_{g_l , j}}^{b_{l , j } } \right ] \right\}\\ & = & \left\ { \pi_c \left [ \prod_i { \tau_{c , i}}^{a_{i , k } } \right ] \left [ \prod_j { \tau'_{c , j}}^{b_{k , j } } \right ] \right\ } \left\ { \prod_{l\neq k } \sum_s \pi_s \left [ \prod_i { \tau_{s , i}}^{a_{i , l } } \right ] \left [ \prod_j { \tau'_{s , j}}^{b_{l , j } } \right ] \right\},\end{aligned}\ ] ] and \left [ \prod_j { \tau'_{s , j}}^{b_{k , j } } \right],\end{aligned}\ ] ] where is the kronecker delta .therefore , is given by eq .( [ eq : q ] ) .the problem is to maximize of eq .( [ eq : exploglh ] ) with a given under the condition of eq .( [ eq : normalization ] ) .this is solved with the lagrangian undetermined multiplier method , which employs and from eq .( [ eq : lag1 ] ) , the following equations are derived : using eq .( [ eq : exploglh ] ) and eq .( [ eq : lag2 ] ) , we obtain from eq .( [ eq : lag3 ] ) and the condition of eq .( [ eq : normalization ] ) , lagrange s undetermined multipliers are determined , and eq .( [ eq : lag3 ] ) is rewritten as eq .( [ eq : pitau ] ) . from the above theorem ,the optimal parameters and the probability of community assignment for the optimized parameters are recursively estimated based on eq .( [ eq : q ] ) and ( [ eq : pitau ] ) . in this paper ,the community assigned to the -th unit is determined by the that maximizes ( figure [ fig : com ] ( c ) ) . finally , we use the following methods to determine a modular representation of a layered neural network that summarizes multiple connections between the pairs of communities ( figure [ fig : com ] ( d ) ) .+ + * four algorithms for determining bundled connections * : * method : community and have a bundled connection iff there exists at least one connection between the pairs of units .* method : let the number of units in communities and be and , respectively , and let the number of connections between the pairs of units be . communities and have a bundled connection iff holds , where is a threshold .* method : among the bundled connections defined by method , only those that satisfy the following ( 1 ) or ( 2 ) are kept and the others are removed .( 1 ) for any community in the same layer as community , . 
( 2 ) for any community in the same layer as community , .* method : among the bundled connections defined by method , only those that satisfy the above ( 1 ) and ( 2 ) are kept and the others are removed . by these procedures , we obtain the modular representation of a layered neural network .in this section , we show three applications of the proposed method : ( 1 ) the decomposition of a layered neural network into independent networks , ( 2 ) generalization error estimation from a community structure , and ( 3 ) knowledge discovery from a modular representation . herewe verify the effectiveness of the proposed method in the above three applications .the following processing was performed in all the experiments : 1 .the input data were normalized so that the minimum and maximum values were and , respectively .the output data were normalized so that the minimum and maximum values were and , respectively .the initial parameters were independently generated as follows : .the connection matrix if the absolute value of the connection weight between the -th unit in the depth layer and the -th unit in the depth layer is larger than a threshold , otherwise .note that and are used instead of and for stable computation .similarly , is defined from the connection weight between the -th unit in the depth layer and the -th unit in the depth layer .all units were removed that had no connections to other units .5 . for each layer in a trained neural network, community detection trials were performed .we defined the community detection result as one that achieved the largest expected log likelihood in the last of iterations of the em algorithm .6 . in each community detection trial , the initial values of the parameters were independently generated from a uniform distribution on , and then normalized so that eq .[ eq : normalization ] held .7 . in visualization of modular representation ,all communities with no output bundled connections from them were regarded as unnecessary communities . in the output layer ,the communities with no input bundled connections were regarded in the same way as above .the bundled connections with such unnecessary communities were also removed .these unnecessary communities and bundled connections were detected from depth to , since the unnecessary communities in the shallower layers depend on the removal of unnecessary bundled connections in the deeper layers .we show that the proposed method can properly decompose a neural network into a set of small independent neural networks , where the data set consists of multiple independent dimensions . for validation , we made synthetic data of three independent parts , merged them , and applied the proposed method to decompose them into the three independent parts .the method we used to generate the synthetic data is shown in figure [ fig : exp1 ] . in the following , we explain the experimental settings in detail .first , three sets of input data were independently generated .all the sets contained input data with five dimensions , and their values followed : .then , three neural networks were defined , each of which has independent weights and biases . 
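a possible generation procedure for this synthetic data is sketched below; the layer sizes, the zeroing of weak weights and the independent output noise follow the description given in the next paragraph, while the particular distributions and numerical constants are placeholders of our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_block(n_samples, dim=5, weight_cut=1.0, noise_std=0.1):
    """One independent block: 5-dimensional inputs fed through a one-hidden-layer
    network, with weak weights zeroed and noisy outputs.  The normal draws, the
    tanh activation and the noise level are placeholders; only the structure
    follows the description in the text."""
    x = rng.normal(size=(n_samples, dim))
    w1, w2 = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
    w1[np.abs(w1) < weight_cut] = 0.0          # drop connections with |w| < 1
    w2[np.abs(w2) < weight_cut] = 0.0
    b1, b2 = rng.normal(size=dim), rng.normal(size=dim)
    h = np.tanh(x @ w1 + b1)
    y = h @ w2 + b2 + noise_std * rng.normal(size=(n_samples, dim))
    return x, y

# three independent blocks, merged into one 15-dimensional data set
blocks = [make_block(1000) for _ in range(3)]
x_all = np.hstack([x for x, _ in blocks])
y_all = np.hstack([y for _, y in blocks])
print(x_all.shape, y_all.shape)                # (1000, 15) (1000, 15)
```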
in each neural network, all the layers consisted of five units, and the number of hidden layers was set at one. the sets of weights and biases for the first, second and third neural networks are denoted as , , and , respectively. these parameters were randomly generated as follows: for the weights and , the connections with absolute values smaller than one were replaced by . finally, three sets of output data were generated by using the above input data and neural networks by adding independent noise following . the three generated sets of input and output data were merged into one set of data, as shown in figure [fig:exp1]. we trained another neural network with dimensions for input, hidden and output layer using the merged data. then, a modular representation of the trained neural network was made with the proposed method. the results of the trained neural network, its community structure, and its modular representation are shown in figures [fig:exp1o], [fig:exp1c], and [fig:exp1m], respectively. the numbers above the input layer and below the output layer are the indices of the three sets of data. these results showed that the proposed method could decompose the trained neural network into three independent networks. in general, a trained result of a layered neural network is affected by different hyperparameters and initial parameter values. here, we show that the appropriateness of a trained result can be estimated from the extracted community structure by checking the correlation between the generalization error and the modularity. the modularity is defined as a measure of the validity of the community detection result, and it becomes higher with more intra-community connections and fewer inter-community connections. let the number of communities in the network be , and be a matrix whose element is the number of connections between communities and , divided by the total number of connections in the network. the modularity of the network is defined by . this is a measure for verifying the community structure of assortative networks, so it cannot be applied directly to layered neural networks. in this paper, we define a modified adjacency matrix based on the original adjacency matrix of a layered neural network, and use it for measuring modularity. in the modified adjacency matrix, an element indexed by row and column represents the number of common units that connect with both the -th and -th units (figure [fig:modularitycalc]). we set the diagonal elements of the modified adjacency matrix to , so that there are no self-loops. if the data consist of multiple independent sets of dimensions like the synthetic data used in the experiment in section [sec:decomposition], the generalization error is expected to be smaller when the weights of the connections between independent sets of input and output are trained to be smaller. therefore, a higher modularity is expected to indicate a smaller generalization error.
(table [tab:cc]: correlation between the generalization error and the modularity for each hyperparameter setting.)
in the experiment, we iterated the neural network training and community detection from the trained network times using the same data as in the experiment described in section [sec:decomposition].
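a small sketch of this modularity computation is given below; since the exact formula is not reproduced above, the standard newman-style definition is assumed, and the input is the modified adjacency matrix of common units described in the text.

```python
import numpy as np

def modified_adjacency(adj):
    """adj: symmetric 0/1 adjacency matrix of the whole layered network.
    The returned matrix counts, for every pair of units (i, j), the number of
    units connected to both i and j, with the diagonal forced to zero so that
    there are no self-loops, as described in the text."""
    m = adj @ adj
    np.fill_diagonal(m, 0)
    return m

def modularity(m, communities):
    """Newman-style modularity Q = sum_k (e_kk - (sum_l e_kl)^2); the exact
    formula is not spelled out above, so this standard form is an assumption.
    communities[i] is the community index of unit i."""
    total = m.sum()
    q = 0.0
    for c in np.unique(communities):
        idx = communities == c
        e_cc = m[np.ix_(idx, idx)].sum() / total    # within-community weight
        a_c = m[idx, :].sum() / total               # total weight touching c
        q += e_cc - a_c ** 2
    return q

# toy example: two obvious groups of units
adj = np.zeros((6, 6), dtype=int)
adj[:3, :3] = 1
adj[3:, 3:] = 1
np.fill_diagonal(adj, 0)
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(modularity(modified_adjacency(adj), labels), 3))   # 0.5
```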
the generalization error and modularity results for nine pairs of hyperparameters are shown in figure [ fig : exp2 ] , where is the lasso hyperparameter and is the weight removing hyperparameter .for the smaller and , the overall modularities were lower , which indicates that there were more connections between mutually independent neural networks .it was experimentally shown for some hyperparameters that better trained results were obtained ( with smaller generalization errors ) when the trained neural networks had clearer community divisions ( with higher modularity ) .table [ tab : cc ] shows the correlations for given . in order to show that the modular representation extracts the global structure of a trained neural network, we applied the proposed method to a neural network trained with practical data .we used data that represent the characteristics of each municipality in japan .the characteristics shown in table [ tab : notationsdata ] were used as the input and output data , and the data of municipalities that had any missing value were removed .there were training data .before the neural network was trained , all the dimensions for all sets of data were converted through the function of , because the original data are highly biased .the results are shown in figure [ fig : exp3data ] . the trained neural network and the modular representation using the above data are shown in figures [ fig : exp3o ] and [ fig : exp3 m ] , respectively .figure [ fig : exp3 m ] shows , for example , that the number of births , marriages , divorces , and people who engage in tertiary industry work ( a1 ) were inferred from the population aged 65 and older , the number of out - migrants , establishments , employees with employment and so on ( a4 ) . [ fig : exp3o ] from the extracted modular representation , we found not only the grouping of the input and output units , but also the relational structure between the communities of the input , output and hidden layers .for instance , the third community from the right in the depth layer ( b4 ) and the second community from the right in the depth layer ( b3 ) only connected to partial input and output units : they were used only for inferring the number of primary industrial employees and employed workers in their municipalities ( b1 , b2 ) , from the total population , the population under 15 years of age , the population of transference , and the numbers of family workers and executives ( b5 , b6 ) .in this section , we discuss the proposed algorithm from three viewpoints , the community detection method , the validation of the extracted result , and the scalability of our method .firstly , to extract a modular representation from a trained neural network , we employed a basic iterative community detection method for each layer .it is possible to modify this method , for example , by using the weights of connections or the connections in further layers .utilizing the output of each unit might also improve preciseness of the community detection result .the optimization of community detection methods and hyperparameters according to the task is future work .secondly , knowledge discovered from a modular representation depends on both the data and the analyst who utilizes the proposed method .there are both sensitive and robust communities which do and do not depend on them .therefore , it becomes important to separate the essential results from fluctuations .we anticipate that our method will form the basic analytic procedure of such a study .and lastly 
, in this paper, we experimentally evaluated the relationship between modularity and generalization error. it is well known that community detection techniques can be employed for large-scale networks. the analysis of larger datasets with higher dimensions would provide further information on layered neural networks. for such large datasets, it would also be important to evaluate the effectiveness of parallel computation, using the independent neural networks extracted with our proposed method. deep neural networks have achieved a significant improvement in terms of classification or regression accuracy over a wide range of applications by their ability to capture the complex hidden structure between input and output data. however, the discovery or interpretation of knowledge using deep neural networks has been difficult, since their internal representation consists of many nonlinear and complex parameters. in this paper, we proposed a new method for extracting a modular representation of a trained layered neural network. the proposed method detects communities of units with similar connection patterns, and determines the relational structure between such communities. we demonstrated the effectiveness of the proposed method experimentally in three applications. (1) it can decompose a layered neural network into a set of small independent networks, which divides the problem and reduces the computation time. (2) the appropriateness of a trained result can be evaluated by using a modularity index, which measures the validity of a community detection result. and (3) the global relational structure provided by the modular representation serves as a clue for discovering knowledge from a trained neural network. table [tab:notationsdata] shows the notations of the data used in the experiment described in section [sec:kd]. the experimental settings of the parameters are shown in table [tab:notations].
(table [tab:notationsdata]: names and descriptions of the input attributes i1–i24 and the output attributes o1–o18 used in the experiment of section [sec:kd].)
(table [tab:notations]: experimental parameter settings for experiments 1–3, including the numbers of training and test data sets, the lasso hyperparameter, the weight removing hyperparameter, and the bundled connection method. *: the nine parameters shown in the caption of figure [fig:exp2] are used.)
we would like to thank akisato kimura for his helpful comments on this paper.
deep neural networks have greatly improved the performance of various applications including image processing , speech recognition , natural language processing , and bioinformatics . however , it is still difficult to discover or interpret knowledge from the inference provided by a deep neural network , since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers . therefore , it becomes important to establish a new methodology by which deep neural networks can be understood . in this paper , we propose a new method for extracting a global and simplified structure from a layered neural network . based on network analysis , the proposed method detects communities or clusters of units with similar connection patterns . we show its effectiveness by applying it to three use cases . ( 1 ) network decomposition : it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time . ( 2 ) training assessment : the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index . and ( 3 ) data analysis : in practical data it reveals the community structure in the input , hidden , and output layers , which serves as a clue for discovering knowledge from a trained neural network . + _ keywords _ : layered neural networks , network analysis , community detection
underwater exploration represents a very important economic , technologic and scientific challenge .this is closely related to arctic and antarctic offshore resources , pollution monitoring , general oceanographic data collection and , recently , to underwater actuation .due to very large underwater areas and high damping properties of water , application of multiple autonomous underwater vehicles ( auvs ) in cooperative missions seems very promising . for application of auvs in networked or swarm mode ,there is a number of crucial issues : underwater sensing and communication ( s&c ) , cooperation and mission control , design of auv platforms , autonomous behavior and several collective aspects of running multiple auvs . in this work we concentrate on minimalistic local s&c , and related coordination strategies , being motivated by the following reasons . in several past and running projects devoted to underwater swarms , such as aquajelly , angels , cocoro , a number of auv platforms and sensing technologies has been developed .these works indicated two important issues : a successful auv platform needs a dedicated combination of different s&c technologies , moreover capabilities of underwater cooperation depends on the level of embodiment of on - board s&c systems .in several cases , even a simple multi - modal signal system leads to advanced cooperation ( see e.g. the case of multi - agent cooperation ) .since swarm approaches rely primarily on local interactions between auvs , the paper is devoted to local s&c systems ( unmodulated and modulated ir / blue light , rf and electric field ) , which can be used for robot - robot / robot - object detection and provide sub - modal information , such as direction and distances , as well as can be used for analog and digital communication .these systems , developed for each of the platforms is described in sec .[ sec : localcom ] , whereas sec .[ sec : conparison ] provides general overview over different s&c technologies . in sec .[ sec : experiments ] we shortly sketch a few behavioral experiments with these systems and finally in sec . [sec : conclusion ] conclude about useful combinations of s&c and their embodiment for collective underwater systems .in this section we give a short overview over different state - of - the - art s&c systems in an underwater environment , see e.g. - .four systems will be compared : * _ sonar _ : sonic waves travel very well under water and the energy and build - space required for generating and receiving them is very low .this approach is used in so - called acoustic modems .drawbacks of this approach are , firstly , the relatively low sound travel speed of roughly 1500 m ( much slower than any other s&c system ) , and , secondly , multiple reflections causing essential distortions in the signal . * _ radio _ : electromagnetic waves are a standard communication method in air ; its application under water creates several problems . due to water connectivity ,the attenuation of radio waves depends on the used frequency , which in turn results in the size of antenna .high frequencies ( 100 mhz ) only need a small antenna ( m ) while their range is restricted to m. lower frequencies ( 100 khz ) have a long range ( 100 m ) , but need a large antenna ( 100 m ) . *_ optical _ : using light as a communication channel can provide a compact size of transmitting equipment and acceptable range . 
due tothe color dependent attenuation of light in water , the communication range varies between a few centimeters in ir spectra and increases to over a meter by using blue or green light .* _ electric field _ : this is a new communication approach .it bases upon generating and measuring electric fields .the build - size and energy required for this system is very small .unfortunately the attenuation of electric fields is very high , liming the range of this communication channel to less than 1 m. we will discuss this approach in sec .[ sec : elect ] .table [ tab : comparisonofdifferentcommunicationchannels ] shows an overview of the discussed approaches ..comparison of different communication channels [ cols="<,>,>,>",options="header " , ] as a digital communication transceiver , the blue light system needs modulator , amplifier , signal conditioner , and protocol encoder / decoder . the one chip solution can be solved by using a cs 8130 irda chip from cirrus logic .since the blue light system has a directional s&c , two channels are not sufficient for the swarm robot to communicate in every direction .the half duplex behavior of each channel makes one channel unable to be applied for sensing , because the sensing mechanism requires to transmit and to receive the sensing signal in one time .therefore , the position of the transmitter and receiver are swapped with neighboring channels for sensing application . the system must be configured and calibrated for finding the best modulation type for underwater communication and sensing .according to the measurement results , both for communication and sensing , the quadrature amplitude modulation ( qam ) seems to be the best modulation for underwater application .[ fig : chat_blue_light](a ) shows the relation between current sensitivity and communication distance . by using this curve , an active sensing algorithm can be added to the inter - robot communication algorithm by varying the amplification and the sensitivity of the programmable amplifier via software , see fig .[ fig : chat_blue_light](b ) .the robot can approximate the distance with other robots by gradually decreasing the current sensitivity while communicating each other .the developed inter - robot communication algorithm has three phases .first , when two robots are in the communication range , they begin to establish the communication by sending their ids to each other .second , the communicating robots are approximating their distance by gradually decreasing the current sensitivity within the programmable gain amplifier .therefore , after knowing their own position , behavioral or cooperation phased can be performed .a robot will continuously iterate the first phase if there is no other robots in the communication range .hence an obstacle might reflect the transmitted signal and the robot would receive back the first phase communication packet that contains its own i d . after experimenting with optical s&c systems , we implemented another approach , which is inspired by weakly electric fish .these animals are capable of producing an electric field which they can use for localisation and communication . herewe try to use this bio - inspired approach for analog communication and navigation in robot swarms .* electric fields . *electric charges generate electrical fields in their vicinity .electric fields are vector fields . 
for a point charge the field intensity can be calculated at each point as with the permittivities ( vacuum ) and ( relativ ) .the field vectors of multiple point charges follow the superposition principle . in our robotthe electric field is generated by a dipole .the field intensity is proportional to the charge in the electrodes , which themselves are proportional to the applied voltage to the electrodes : with the voltage and the capacity . ** the intensity of an electric field can then be detected by measuring the differential potential between two electrodes in the field .this potential is proportional to the electric field intensity , which itself is proportional to the output voltage of the sender . by modulating the output voltage of the sender, information can be transmitted .* localization .* by using multiple pairs of electrodes in the receiver it is possible to calculate the bearing and distance to the sender .this is achieved by utilizing the drop in field intensity with relation to the distance .a sinus wave is impressed on the sender s electrodes which creates an oscillating electrical field .the field intensity depends mainly on the amplitude and frequency of the output voltage and some environmental conditions .the amplitude in the field intensity at a specific point is proportional to the amplitude of the output voltage : with the amplitude of the output signal and measured input , the frequency and the time . if sender and receiver are approximately in the same plane and the electrodes have the same orientation ( compare fig .[ fig : versuch ] left ) ( [ eq : amplitudesim ] ) can be simplified to : with the frequency dependent proportionality factor and the distance between sender and receiver .measuring the sinus amplitude ( , , , ) at four points with a specific geometrical pattern ( fig .[ fig : versuch ] right ) leads to the following equations : in setting up these equations it is assumed that so that the error in the angle and distance between the different sensors is minimal . in ( [ eq : amplitudessystem ] ) the proportional factor and output amplitude can be eliminated , under the condition of leading to : and * design limitations .* this approach has two design limitations : it requires the sender and receiver electrodes to have the same orientation ( i.e. vertical ) and to be roughly in the same plane ( horizontal to the orientation ) : * the first limitation holds no practical difficulties .our robot maintains a specific orientation , caused by its center of gravity . by placing one of the sender and receiver electrodes on top and one on the bottom of the robotthe orientation is always vertical . *the derivation above is only correct if sender and receiver are on the same plane , which is horizontal to the orientation of their electrodes . in a three dimensional environmentthis is not always true , but usually the working space is wider than it is high , even in a 3d environment .even so , we are working on overcoming these limitations .we are confident that the second can be eliminated by rearranging the receiver electrodes . to overcome the first limitation additional electrodes may be needed .as described in the previous sections , different local s&c systems utilize the same hardware components for sensing and communication .moreover , they use modal and sub - modal approaches , which provide not only message transmission , but also deliver spatial information about position and distances of robots and objects . 
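to make the localization idea concrete, the following sketch estimates bearing and distance from the four measured amplitudes by a coarse grid search; the receiver geometry and the power-law falloff exponent are assumptions of ours, and amplitude ratios are used so that the unknown sender amplitude cancels, as in the derivation above.

```python
import numpy as np

# positions of the four receiver electrode pairs relative to the robot centre
# (a symmetric cross of half-width 5 cm -- the true geometry is not given above)
SENSORS = np.array([[0.05, 0.0], [-0.05, 0.0], [0.0, 0.05], [0.0, -0.05]])
ALPHA = 3.0   # assumed power-law falloff exponent of the field amplitude

def locate(amplitudes,
           distances=np.linspace(0.05, 1.0, 96),
           bearings=np.radians(np.arange(0, 360, 3))):
    """Estimate (bearing in degrees, distance in metres) of the sender from the
    four measured sinus amplitudes by a coarse grid search.  Ratios of the
    amplitudes are compared, so the sender's output amplitude and the
    proportionality factor drop out."""
    meas = np.asarray(amplitudes, float)
    meas_ratio = meas / meas.sum()
    best, best_err = None, np.inf
    for r in distances:
        for phi in bearings:
            sender = r * np.array([np.cos(phi), np.sin(phi)])
            d = np.linalg.norm(SENSORS - sender, axis=1)
            model = d ** (-ALPHA)
            model_ratio = model / model.sum()
            err = np.sum((model_ratio - meas_ratio) ** 2)
            if err < best_err:
                best, best_err = (float(np.degrees(phi)), float(r)), err
    return best

# synthetic test: sender 0.4 m away at a bearing of 30 degrees
true = 0.4 * np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
amps = np.linalg.norm(SENSORS - true, axis=1) ** (-ALPHA)
print(locate(amps))   # approximately (30.0, 0.4)
```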
in this sectionwe describe several behavioral experiments , performed with these systems .one of the implemented scenarios with aquajelly robots had the following form , see fig .[ fig : scenario ] . in water ,a robot sends sequentially in all ir channels its own i d . listening and sending timesare selected as approx .95% listening and 5% sending , so that all robots most of time silently observe the environment .receiving another - than - own i d means meeting another robot , whereas non - id ir light means own reflection from passive objects .granularity of ir channels is enough for rough collision avoidance with objects , e.g. walls of the aquarium .collision avoidance based on digital channels are impossible for more than two robots ( or robot and object ) . in opposite ,blue light channels emit almost all time .since light has additive properties , when two robots meet each other , intensity of light in the point of light - spheres - intersection creates a light gradient and can be locally sensed by both robots .especially interesting is the light gradient when several robots meet each other ; they create complex gradients , which can be used for precise multi - robot navigation .unfortunately , blue - light sensors continuously receive signals from their own light sphere so that no efficient communication is possible in this mode .initially all robots are fully charged .when a robot has a low energy value , it swims up and recharges . with the progress of experiment ,more and more robots swim up for recharging . in this way, several robots meet in the upper part of the aquarium and compete for the docking station , see fig .[ fig : scenario](from left to right ) .since only a robot with lowest energy value should recharge , all robots bilaterally exchange values of their own energy level . the robot with the lowest energy valuecan swim up .this local behavior leads to the following interesting collective behavior . due to light gradient created by many robots ,all robots exert optical pressure `` on each other , and collectively swim down , whereas only one most ' ' hungry " robot swims up and recharges , see fig . [ fig : scenario](right ) . in order to investigate the modulated blue light s&c system , we used underwater submarine toy as a mechanical platform with new electronic components for locomotion , computational and s&c capabilities .this submarine has three degrees of freedom and three actuators for moving forward / backward , turning left / right , and diving up to 1 meter .the necessary modifications of the submarine including the replacement of the original electronic parts with the new designed electronic boards , and drilling some new holes on the robot s body for communication / sensing transducers placements .cortex3 lm3s316 microcontroller with 25mhz of clock frequency , 16 kb of internal flash rom , and 16 kb of ram has been used in the platform .two motor drivers and two navigational sensors are placed on the main board of the electronic platform .the combination of the available pwm output from cortex3 and motor driver perform the ability to control the swimming velocity via software .a digital compass and pressure sensor are added as a three - dimensional orientation sensor ( an low - frequency rf part is foreseen for a backup communication with host ) . 
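the three-phase communication and active-sensing behaviour described above might be sketched as follows; the hardware callbacks and the sensitivity-to-distance calibration table are invented placeholders rather than the actual firmware interface of the platform.

```python
# hypothetical calibration: programmable-amplifier sensitivity level -> largest
# distance (metres) at which a peer is still received at that level
CALIBRATION = {0: 1.00, 1: 0.70, 2: 0.45, 3: 0.25, 4: 0.10}

def communication_step(own_id, received_id, send, set_sensitivity, peer_heard):
    """One round of the sketched protocol for a single robot.
    send(id), set_sensitivity(level) and peer_heard() are assumed hardware
    callbacks; received_id is the id decoded in the last listening slot."""
    if received_id is None:           # phase 1: nobody heard, announce ourselves
        send(own_id)
        return ("searching", None)
    if received_id == own_id:         # own id reflected back -> passive obstacle
        return ("obstacle", None)
    # phase 2: another robot is in range; sweep the sensitivity downwards and
    # remember the tightest range bound at which the peer was still heard
    distance = None
    for level, max_range in sorted(CALIBRATION.items()):
        set_sensitivity(level)
        if peer_heard():
            distance = max_range      # still audible -> peer is within max_range
        else:
            break
    # phase 3: behavioural / cooperative rules would use (received_id, distance)
    return ("robot", distance)

# toy usage with mock callbacks: the peer stays audible down to sensitivity level 2
heard_levels = {0, 1, 2}
state = {"level": 0}
status, dist = communication_step(
    own_id=7, received_id=3,
    send=lambda i: None,
    set_sensitivity=lambda lv: state.update(level=lv),
    peer_heard=lambda: state["level"] in heard_levels)
print(status, dist)    # robot 0.45
```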
during the experiment, robot is deployed into the aquarium fulfilled with several obstacles .the available obstacles and aquarium s walls are used to examine the sensing capability , see fig .[ fig : blue_light_platform](left ) .both types of obstacles have different type of optical characteristic , which create different reflection behavior for the blue light .therefore , white papers can be put outside the aquarium walls to increase the reflection capability of the optical sensor . for testing the communication and active sensing capabilities ,one robot is deployed underwater and a static encoded blue light transceiver is installed on the aquarium wall as a measurement reference point , see fig .[ fig : blue_light_platform](right ) .the static transceiver can illuminate several type of light signals if it receives a specific blue light packet data from the swimming robot .different types of blinking signals are used to examine the functionality of the active sensing capability .this approach underlies several other experiments , where a few passive robots are identified by one active auv as foraging targets .the circuit for electric field communication is very simple .it consists mainly of a digital - analog - converter ( dac ) for the sender and four amplifiers ( op ) with analog - digital - converters ( adc ) for the receiver ( fig .[ fig : esversuch ] ) . * _ sender : _ the output of the 14-bit dac is directly tied to one of the sender electrodes while the other is connected to vcc/2 .the electrodes are in direct contact with the surrounding water .the output of the dac can be set to a voltage between gnd and vcc .this setup allows control of the field intensity and polarity . * _ receiver : _ the receiver has four pairs of electrodes in the water to measure the difference in the potential of the electric field in four places .the electrodes use capacitors as highpass - filters to filter dc signals .the signals are amplified by differential ops ( magnitude 1000 ) and digitized by 14-bit adcs . for the experiments the sender and receiver are put under water ( fig .[ fig : versuch ] right ) .the receiver has a sampling rate of 10 khz .the measured data is transmitted to a pc , where the bearing and distance are calculated . in the experimentthe sender was moved around the receiver and the received data was recorded together with a video tape of the experiment for comparison ( fig . [fig : esversuch ] ) .the true bearing was extracted from the video stream and compared with the from the electrical sensor data calculated bearing .[ fig : vergleichcords ] shows the result . for 50% of the measuring pointsthe error is less than 5 and it exceeds never more than 15. this might be further improved by increasing the magnitude of the amplifiers and reducing the noise through optimized circuits and digital filters .in this work we considered several optical and electric - field - based approaches for sensing and communication within .together with s&c approaches for , such as acoustic and low - frequency rf , they represent the available spectra of s&c technologies for underwater networked and swarm robotics . as indicated in the sec .[ sec : experiments ] and from other performed experiments , these approaches combine communication with localization , distance measurement and object detection . 
in several cases ,such a sub - modal information is available even during communication and can be used for very efficient behavioral strategies .each of the considered s&c system has its own benefits and weaknesses .it seems that no current single system is capable of achieving all the requirements on / , sensing , minimal build space , energy consumption and complexity .therefore the best approach lies in combining several of the available systems , for example in the way shown in fig [ fig : scheme ] .the optical system provides split wave - length dependent channels .it can be used in analog and digital mode with existing control circuits for ir systems ( e.g. irda or different modulations e.g. pcm / qam ) , which with small modifications can be used for green , cyan and blue light .the range and bandwidth are sufficient for local communication .the channel is directional which can be of benefit for swarm - based coordination approaches .additionally the reflection in analog mode can be used for navigation and detection tasks .the electrical sensor is a good supplementary element to directional optics .electric fields - based channels are omni - directional ; hardware required for generation and detection of electric fields utilizes off - the - shelf components and is compact and energy efficient .it can be used to calculate the bearing between sender and receiver , i.e. for self - localization .the range is small but sufficient for .it is also necessary to supplement these s&c systems by acoustic or ultra - low - frequency rf to provide global communication .sonar requires a bit larger hardware equipment than the optical system . with additional components, it can be used for measuring distances to obstacles .rf systems represent a trade - off between the frequency ( i.e. communication distances ) and the size of integrated antennas ( i.e. the size of platform ) .the control circuits are more complex than those for optic or acoustic approaches .since bandwidth for low - frequency rf is not sufficient for application of standard protocols ( e.g. zigbee ) , global rf communication represents some open problems .if more than one receiver is used , the bearing between sender and receiver can be calculated .however this would make the hole system more complex and expensive .comparing acoustic and low - frequency rf approaches for , acoustic one is more favorable due to more less complex hardware .usage of global communication for networked and swarm systems should be reduced to absolute minimum ( see for instance minimalistic approaches for cooperation and decision making , ) .the angels and cocoro projects are funded by the european commission within the work programm future and emergent technologies `` and ' ' cognitive systems and robotics " under the grant agreements no .231845 and 270382 .we want to thank all members of the project for fruitful discussions .
this paper is devoted to local sensing and communication for collective underwater systems used in networked and swarm modes . it is demonstrated that a specific combination of modal and sub - modal communication , used simultaneously for robot - robot and robot - object detection , can create a dedicated cooperation between multiple auvs . these technologies , platforms and experiments are shortly described , and allow us to make a conclusion about useful combinations of different signaling approaches for collective underwater systems .
sensors are sometimes distributed in a regular fashion to monitor an area .we assume that the sensors use wireless communication .most wireless communication protocols allow the sensors to send at arbitrary times . however , this can cause the following _ collision problems _ : if two distinct sensors and send at the same time and is within the interference range of , then frequently hardware limitations prevent from receiving the message of correctly . in addition , if two distinct sensors and send at the same time and a sensor is within interference range of both and , then will not be able to correctly receive either message . in these cases , the sensors and need to resend their messages , which is evidently a waste of energy .let us assume that the sensors have access to the current time , represented by an integer .one can assign each sensor node an integer and set up a periodic schedule such that a node with integer is allowed to broadcast messages at time if and only if . the goal of this paper is to give a convenient combinatorial formulation using lattice tilings that allows one to assign optimal schedules with minimal number of time slots such that no two sensors that are scheduled to broadcast simultaneously have intersecting interference ranges ; we call such schedules _ collision - free_. [ [ related - work . ] ] related work .+ + + + + + + + + + + + + since most communication protocols for wireless sensor networks are probabilistic in nature , there exist few prior works that are directly related to our approach .however , there exist a few notable exceptions that we want to discuss here .suppose for the moment that we are given a finite set of sensors that share the same frequency band for communication .the simplest way to ensure that the communication will be collision - free , is to use a time division multiple access ( tdma ) scheme . hereeach of the sensors is assigned a different time slot and scheduling is done in a round robin fashion . because of its simplicity , this scheme is used in many systems , see e.g. ( * ? ? ?* chapter 3.4 ) . the obvious disadvantage of tdma is that it does not scale : if the number of sensors is large , then the sensors can not communicate frequently enough .the basic tdma scheme does not take advantage of the fact that each sensor typically affects only small number of neighboring sensors by its radio communication .this prompts the question whether one can modify the tdma scheme and find a schedule with time slots that is collision - free . to answer this question , consider a directed graph that has a node for each sensor and an edge from vertex to vertex if and only if is affected by the radio communication of valid schedule with time slots corresponds to a distance-2 coloring with colors , that is , all vertices of distance must be assigned a different color (= time slot ) to avoid collision problems .therefore , the number of time slots of an optimal collision - free schedule coincides with the chromatic number of a distance-2 coloring .the distance-2 coloring problem is also known as the broadcast scheduling problem in the networking community .mccormick has shown that the decision problem whether a given graph has a distance-2 coloring with colors is np - complete .lloyd and ramanathan showed that the broadcast schedule problem even remains np - complete when restricted to planar graphs and time slots . 
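to make the broadcast-scheduling problem concrete, a simple greedy distance-2 colouring heuristic is sketched below; it carries no optimality guarantee, in line with the hardness results just cited, and it treats the interference relation as symmetric for simplicity.

```python
def greedy_distance2_coloring(adj):
    """Greedy heuristic for the broadcast-scheduling (distance-2 colouring)
    problem: vertices within distance two of each other must get different
    colours (time slots).  adj maps every vertex to the set of vertices
    affected by its radio transmissions."""
    colour = {}
    for v in adj:
        neighbours = adj[v] | {u for u in adj if v in adj[u]}
        forbidden = set()
        for u in neighbours:
            if u in colour:
                forbidden.add(colour[u])
            for w in adj[u] | {x for x in adj if u in adj[x]}:   # distance 2
                if w != v and w in colour:
                    forbidden.add(colour[w])
        c = 0
        while c in forbidden:
            c += 1
        colour[v] = c
    return colour

# toy example: five sensors in a row, each interfering only with its neighbours
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_distance2_coloring(adj))   # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```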
due to these intractability results ,much of the subsequent research focused on heuristics for finding optimal schedules ; for instance , wang and ansari used simulated annealing , and shi and wang used neural networks to find optimal schedules .another popular direction of research are approximation algorithms for broadcast scheduling algorithms , see e.g. .[ [ contributions . ] ] contributions .+ + + + + + + + + + + + + + the main contributions of this paper can be briefly summarized as follows ( the terminology is explained in the subsequent sections ) : we develop a method that allows one to derive an optimal collision - free schedule from the tiling of a lattice .our scheme scales to an arbitrary number of sensors ; in fact , we formulate our schedules for an infinite number of sensors .schedules for a finite number of sensors are obtained by restriction , and these schedules remain optimal under very mild conditions ( given in the conclusions ) .our assumption on the set of prototiles ensures that an optimal schedule is obtained regardless of the chosen tiling . in section 4, we show that if our assumption on the set of prototiles is removed , then in general one will not obtain an optimal schedule .we formulate our results for arbitrary lattices in arbitrary dimensions , since the proofs are not more complicated than in the familiar case of the two - dimensional square lattice . for the square lattice ,there are polynomial - time algorithms available to check whether a given prototile can tile the lattice ; thus , despite the fact that finding optimal schedules is np - hard in general , one can use our method to easily construct optimal schedules in the case of a single prototile .this method of creating simple instances of an np - hard problem might be of independent interest .a euclidean lattice is a discrete subgroup of that spans the euclidean space as a real vector space . 
in other words, there exist vectors in that are linearly independent over the real numbers such that and for each vector in there exists an open set containing but no other element of. in particular, the group is isomorphic to the additive abelian group. two examples of lattices in two dimensions are illustrated in figure [fig:lattice].
(figure [fig:lattice]: two two-dimensional lattices, a square lattice and a hexagonal lattice, each shown with its pair of basis vectors.)
our goal is to find a deterministic collision-free periodic schedule for sensors located at the points of a lattice that is optimal in the number of time slots, i.e., no periodic schedule with a shorter period can be found that is collision-free. we call a finite subset of a _prototile_ or a _neighborhood_ of the point if and only if it contains itself.
the particular nature of will be determined for instance by the type of antenna and by the signal strength used by the sensor. the elements in are the sensors affected by wireless communication of the sensor located at the point (that is, only the elements in are within interference range of the sensor located at the point). we will first assume a homogeneous situation, namely the neighborhood affected by communication of the sensor located at a point in is of the form , where the addition denotes the usual addition of vectors in. the set contains , since is contained in. some examples of neighborhoods are given in figure [fig:neighborhoods].
(figure [fig:neighborhoods]: three examples of neighborhoods of a point in the square lattice, including a 3x3 block and a plus-shaped neighborhood; the shaded outline marks the corresponding tile region.)
our schedule will be a deterministic periodic schedule, that is, each sensor is assigned a certain time slot and it is only allowed to send during that time slot. since our schedule is required to be free of collision problems, it follows that the sensors located at distinct points and in cannot broadcast at the same time unless . let denote a subset of. we say that provides a _tiling_ of with neighborhoods (or tiles) of the form if and only if the following two conditions hold: (*t1*) , and (*t2*) for all distinct in. the set contains all the vectors that translate the prototile. condition *t1* says that the whole lattice is covered by the translates of the prototile, when ranges over the elements of. condition *t2* simply says that the translates of the tile do not overlap. the tilings provide us with an elegant means to construct an optimal deterministic schedule. [th:thm1] let be a tiling of a euclidean lattice in with neighborhoods of the form.
then there exists a deterministic periodic schedule that avoids collision problems using time slots. the schedule is optimal in the sense that one can not achieve this property with fewer than time slots. suppose that is the neighborhood of. for in the range , we schedule the sensors located at the points at time. we first notice that each sensor located at a point in is scheduled at some point in time, since by property *t1* of a tiling. seeking a contradiction, we assume that the schedule is not collision-free. this means that at some time in the range there exist sensors located at the positions and with distinct and in such that. however, this would imply that for distinct and in , contradicting property *t2* of a tiling. it follows that our schedule is collision-free. it remains to prove the optimality of the schedule. seeking a contradiction, we assume that there exists a schedule with time slots that is collision-free. this means that for some time slot in the range two elements and of must be scheduled. however, this would imply that the element is contained in both sets and , contradicting the assumption that the schedule with time slots is collision-free. we illustrate some aspects of the proof of the previous theorem in figure [fig:thm1a].
(figure [fig:thm1a]: a tiling of the square lattice by translates of a prototile, illustrating the schedule construction; the right panel additionally shows a dashed translate of the prototile as used in the optimality argument.)
our concept of tiling a lattice with translates of a prototile turned out to be convenient for our purposes. in this section, we relate the tilings of a lattice to tilings of the euclidean space , so that we can benefit from the large number of results that are available in the literature. any tiling of a lattice can be converted into a tiling of as follows. let denote the union of the closed voronoi regions about the points in. then the translates with in yield a tiling of. conversely, any tiling of with translates of a tile consisting of the union of voronoi regions of points in evidently yields a tiling of the lattice in our sense. figure [fig:quasipolyominoes] shows some two-dimensional examples of voronoi regions.
in this section, we relate the tilings of a lattice to tilings of the euclidean space , so that we can benefit from the large number of results that are available in the literature .any tiling of a lattice can be converted into a tiling of as follows .let denote the union of the closed voronoi regions about the points in .then the translates with in yield a tiling of .conversely , any tiling of with translates of a tile consisting of the union of voronoi regions of points in evidently yields a tiling of the lattice in our sense .figure [ fig : quasipolyominoes ] shows some two - dimensional examples of voronoi regions .( 40,40 ) z1 = ( 1,0 ) ; z2 = ( 0,1 ) ; pickup pencircle scaled 1 ; fill ( ( 0,0)(1,0)(1,1)(0,1)cycle ) scaled s shifted ( 6.4*(z1+z2 ) ) withcolor 0.8white ; pickup pencircle scaled 3 ; for k = 1 step s until 3*s : draw k*z1 + 1*z2 scaled 2 ; endfor ; for k = 1 step s until 3*s : draw k*z1+(s)*z2 ; endfor ; for k = 1 step s until 3*s : draw k*z1+(2*s)*z2 ; endfor ; ( 40,40 ) z1 = ( 1,0 ) ; z2 = ( 0.5,0.866 ) ; vardef pnt(expr a , b ) = ( a*z1+b*z2 ) enddef ; pickup pencircle scaled 1 ; fill ( pnt(0,0)pnt(1,0)pnt(1,1)pnt(0,2 ) pnt(-1,2)pnt(-1,1)pnt(0,0)cycle ) scaled ( 0.5*s ) shifted ( z1+pnt(1*s,0.5*s ) ) withcolor 0.8white ; pickup pencircle scaled 3 ; for k = 1 step s until 3*s : draw k*z1 + 1*z2 scaled 2 ; endfor ; for k = 1 step s until 3*s : draw k*z1+(s)*z2 ; endfor ; for k = 1 step s until 3*s : draw ( -s+k)*z1+(2*s)*z2 ; endfor ; the union of voronoi regions about points in a lattice are also known as quasi - polyforms .a quasi - polyform that is homeomorphic to the unit ball in is known as a polyform .the books by grnbaum and shepherd and by stein and szab contain numerous examples of tilings obtained by translating quasi - polyforms ( and especially polyforms ) .the polyforms in the square grid are called polyominoes , the most well - known type of polyforms ; see golomb s book . by abuse of language, we will also refer to a prototile in as a polyomino if the union of the voronoi regions of form a polyomino . a prototile in a lattice that admits a tiling is called _exact_. it is natural to ask the following question : when is a given prototile exact , _i.e. _ , when does there exist a subset of such that the conditions * t1 * and * t2 * are satisfied ?beauquier and nivat gave a simple criterion that allows one to answer * q1 * for polynominos in the square lattice .roughly speaking , their criterion says that if can be surrounded by translates of itself such that there are no gaps or holes , then is exact ; see for details .in particular , it immediately follows that each prototile shown in figure [ fig : neighborhoods ] is exact .algorithmic criteria for deciding the question * q1 * are particularly interesting . 
for polyominoes in the square lattice , one can decide this question in time polynomial in the length of the boundary of the polyomino ( described by a word over the alphabet , which is short for up , down , left , and right ) , as wijshoff and van leeuwen have shown .the characterization of exactness of a polyomino by beauquier and nivat mentioned above leads to an algorithm , where is the length of the word describing the boundary .recently , gambini and vuillon derived an improved algorithm for this problem .less is known for arbitrary ( not necessarily connected ) prototiles in a general lattice .szegedy derived an algorithm to decide whether a prototile in a lattice is exact assuming that the cardinality of is a prime or is equal to 4 .we have seen that the conditions for tiling a lattice with a single prototile are somewhat restrictive .for example , we might want to allow different rotated versions of the tile if the radiation pattern of the antenna used by a sensor is asymmetrical .we might want to consider different tiles corresponding to various different signal strength settings .furthermore , we might want to allow sensors with various different styles of antenna .we can accommodate all these different situations by allowing translates of several prototiles instead of just a single one . in this section , we show that one can still obtain an optimal periodic schedule which guarantees that the schedule is collision - free , as long as sensors of the same type and setting are deployed within each tile and a constraint on the tiles is satisfied .let be a lattice in .let be prototiles in the lattice , that is , is a subset of that contains for .let be pairwise disjoint nonempty subsets of .we say that provide a tiling of with prototiles if and only if the following two conditions are satisfied : for all , we have for all in and in such that .condition * gt1 * ensures that the lattice is covered by translates of the prototiles .condition * gt2 * ensures that two distinct tiles will not overlap .the set contains all vectors that are used to translate the tile , that is , the set contains all shifted versions of that occur in the tiling of . since the sets are pairwise disjoint , it is clear that whenever .condition * gt2 * requires further that the translates of the prototile with elements in do not overlap .we will call a tiling of _ respectable _ if and only if the prototile contains all other prototiles , that is , for .if this is the case , then we call the_ respectable prototile_. suppose that we are given a tiling of respectively with neighborhoods of the form .we will assume that the sensors are deployed in the following fashion : a sensor at location in the neighborhood of an element in affects precisely the neighbors by interference , where is in the range .loosely speaking , condition * d1 * says that all elements in the neighborhood have neighborhood type .[ th : thm2 ] let be a respectable tiling of a euclidean lattice with neighborhoods of the type .suppose that the sensors are deployed according to the scheme * d1*. then there exists a deterministic periodic schedule that avoids collision problems using time slots .the schedule is optimal in the sense that one can not achieve this property with fewer than time slots .the periodic schedule is specified as follows .let . for all in the range , we schedule the elements at time if and only if is contained in the neighborhood .notice that all elements in will be scheduled at some point in time by property * gt1*. 
furthermore , condition * gt2 * ensures that an element in is not scheduled more than once within consecutive time steps .we claim that this schedule is collision - free . seeking a contradiction, we assume that two distinct elements in are scheduled at the same time , but yield a collision problem .in other words , there must exist integers and in the range , an element such that is contained in both and , and elements and with such that .this implies that for , contradicting property * gt2*. therefore , our schedule is collision - free . without loss of generality ,we may assume that the point in has a respectable neighborhood ( otherwise , simply shift the tiling such that this condition is satisfied ) . seeking a contradiction , we assume that there exists a deterministic periodic schedule with time slots that is collision - free .it follows that there must exist two distinct elements and in that are scheduled at the same time .however , this would imply that the element is contained in both and ; thus , , contradicting the fact that the schedule with time slots is collision - free . the previous theorem is a natural generalization of theorem [ th : thm1 ] .a salient feature of theorems [ th : thm1 ] and [ th : thm2 ] is that the optimal schedule is independent of the nature of the tiling of .notice that one can obtain a collision - free periodic schedule even when there does not exist a respectable prototile .in fact , the respectable prototile was only used in the last part of the proof of theorem [ th : thm2 ] to establish the optimality of the schedule .therefore , one might wonder what will happen in the non - respectable case .let us agree on some ground rules .we would like to maintain the fact that for each translated version of a prototile the schedule is the same , as this simplifies configuring the sensor network .however , in the non - respectable case we might have different prototiles of the same size , so we allow that the schedules in the different prototiles can be independently chosen , as long as this does not lead to collision problems. 
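before looking at a non-respectable example, the schedule construction used in the proofs of theorems [th:thm1] and [th:thm2] can be sketched in a few lines for the square lattice; the enumeration order of the prototile's offsets is arbitrary but must be fixed, and the multi-prototile case proceeds analogously with one offset list per prototile, indexed consistently with the respectable prototile.

```python
def build_schedule(prototile, translates):
    """Assign one of n = len(prototile) time slots to every covered lattice
    point, following the construction in the proofs above: the point t + s_i
    (t a translate, s_i the i-th offset of the prototile) broadcasts in slot i.
    For a valid tiling this schedule is collision-free by conditions T1/T2."""
    offsets = list(prototile)          # fixed enumeration s_0, ..., s_{n-1}
    slot = {}
    for t in translates:
        for i, s in enumerate(offsets):
            slot[(t[0] + s[0], t[1] + s[1])] = i
    return slot, len(offsets)

def allowed_to_send(point, time, slot, period):
    """A sensor at `point` may broadcast at integer time `time` iff the time
    is congruent to the point's slot modulo the schedule period."""
    return time % period == slot[point]

# example: 2x2 prototile, translates covering a small window
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
translates = [(2 * i, 2 * j) for i in range(3) for j in range(3)]
slot, period = build_schedule(square, translates)
print(period, slot[(0, 0)], slot[(1, 1)], allowed_to_send((1, 1), 7, slot, period))
# 4 0 3 True
```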
figure [ fig : non - respectable ] shows that the number of time steps in an optimal schedule depends on the chosen tiling when the tiling is non - respectable . ( figure [ fig : non - respectable ] : two different tilings of the square lattice built from the same pair of mirror - image prototiles . ) we have introduced a deterministic periodic schedule for sensors using wireless communication that are placed on the points of a lattice . we have shown that the schedule is optimal assuming that there exists a respectable prototile . a natural question is whether the schedule remains optimal if one restricts the schedule from the lattice to a finite subset of . this question has an affirmative answer if contains a translate of the set , as the latter set consists of the respectable prototile and its neighbors , in which case our optimality proof carries over without change . another natural question is whether one can extend the method to the case of mobile sensors . this question has an affirmative answer .
indeed , one straightforward way is to use our schedule to assign time slots to the locations rather than to the sensors .let us assume that the lattice points are spaced fine enough to ensure that only one sensor is within a voronoi region of a lattice point .if the time slot is assigned to a lattice point , then a sensor within the open voronoi region about can send at time if and only if and the interference range of fits within the tile of .clearly , this yields a collision - free schedule for mobile sensors .however , it should be stressed that there are many other solutions possible , but a comparison of such methods is beyond the scope of this paper .
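the mobile - sensor variant just described admits an equally short sketch . reusing the slot() function from the sketch after theorem [ th : thm2 ] , a sensor simply inherits the slot of the lattice point whose voronoi cell contains it ; for a square lattice this is plain rounding . the check that the interference range fits within the tile is omitted for brevity , and the spacing d is an illustrative assumption .

def nearest_lattice_point(x, y, d=1.0):
    """lattice point whose voronoi cell contains the sensor position;
    for the square lattice with spacing d this is simple rounding."""
    return (round(x / d), round(y / d))

def mobile_slot(x, y, d=1.0):
    # the sensor inherits the time slot assigned to its host lattice point
    return slot(nearest_lattice_point(x, y, d))

print(mobile_slot(2.3, 0.8))   # a sensor near lattice point (2, 1) uses that point's slot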
suppose that wirelessly communicating sensors are placed in a regular fashion on the points of a lattice . common communication protocols allow the sensors to broadcast messages at arbitrary times , which can lead to problems should two sensors broadcast at the same time . it is shown that one can exploit a tiling of the lattice to derive a deterministic periodic schedule for the broadcast communication of sensors that is guaranteed to be collision - free . the proposed schedule is shown to be optimal in the number of time slots . * keywords : * distributed computing , scheduling sensors , lattice tiling , wireless communication .
integral field spectroscopy ( ifs ) provides a spectrum simultaneously for each spatial sample of an extended , two - dimensional field . basically , an ifs is located in the focal plane of a telescope and is composed of an integral field unit ( ifu ) and a spectrograph . the ifu acts as a coupler between the telescope and the spectrograph by optically reformatting a rectangular field into a quasi - continuous pseudo - slit located at the entrance focal plane of the spectrograph . the light from the pseudo - slit is then dispersed to form spectra on the detector , so that a spectrum is obtained simultaneously for each spatial sample within the ifu field . the ifu contains two main optical sub - systems : the fore - optics and the image slicer . the fore - optics introduces an anamorphic magnification of the field with an aspect ratio of 1 onto the set of slicer - mirror optical surfaces . in this way each spatial element of resolution forms a 1 pixels image on the detector ( i.e. the width of each slice corresponds to 2 pixels ) , which ensures correct sampling in the dispersion direction ( perpendicular to the slices ) and prevents under - sampling of the spectra . this anamorphism can be avoided if under - sampled spectra are acceptable to the science case ( for example , the snap project ) or if a spectral dithering mechanism is included in the spectrograph in order to recover from the under - sampling . the image slicer optically divides the ( anamorphic or not ) two - dimensional field into a large number of contiguous narrow sub - images which are re - arranged along a one - dimensional slit at the entrance focal plane of the spectrograph . an image slicer is usually composed of a slicer mirror array located at the image plane of the telescope and associated with a row of pupil mirrors and a row of slit mirrors . the slicer mirror array consists of a stack of several thin spherical mirrors ( called `` slices '' ) which `` slice '' the anamorphic field and form real images of the telescope pupil on the pupil mirrors . the pupil mirrors are disposed along a row parallel to the spatial direction . each pupil mirror then re - images its corresponding slice of the anamorphic field onto its corresponding slit mirror located at the spectrograph's focal plane ( slit plane ) . the slit mirrors are also disposed along a row parallel to the spatial direction . finally , each slit mirror , which acts as a field lens , re - images the telescope pupil ( pupil mirrors ) onto the entrance pupil of the spectrograph . the principle of an image slicer is presented in fig. [ fig : principle ] . as an example , in fig. [ fig : wfe ] we consider the distribution of the theoretical wfe ( t - wfe ) of both the fan - shaped and the classical image slicers . the t - wfe of the fan - shaped image slicer is about 45.5 nm and the t - wfe of the classical image slicer is about 76.1 nm .
it is interesting to note that the fore - optics ( see section [ sect : design ] ) provides a mean t - wfe of about 45.7 nm over the whole fov ( marked by a dashed line in fig. [ fig : wfe ] ) . thus , the fan - shaped image slicer preserves the image quality of the fore - optics over the whole fov ( all channels ) , while the classical image slicer degrades the t - wfe by a factor of two in certain channels . furthermore , the distribution of the t - wfe of the fan - shaped image slicer is sharp , since the range of values is about 8 nm over all channels , which guarantees a negligible level of differential aberration in the field . this article has presented an original image - slicer concept called `` fan - shaped '' . its design delivers good and homogeneous image quality over all ifu elements , and we have successfully applied it to jwst / nirspec . manufacturing aspects have not been discussed here , since performance was the primary concern ; however , further investigations are under way to drastically reduce the cost and manufacturing complexity of such a design while preserving its performance . furthermore , a prototype of the ifs ( ifu and spectrograph ) for the snap application is under development at lam .
integral field spectroscopy ( ifs ) provides a spectrum simultaneously for each spatial sample of an extended , two - dimensional field . it comprises an integral field unit ( ifu ) which slices and re - arranges the initial field along the entrance slit of a spectrograph . this article presents an original ifu design based on the advanced image slicer concept . to reduce optical aberrations , the pupil and slit mirrors are disposed in a fan - shaped configuration , meaning that the angles between the incident and reflected beams on each element are minimized . the fan - shaped image slicer improves the image quality in terms of wavefront error by a factor of two compared with a classical image slicer and , furthermore , guarantees a negligible level of differential aberration in the field . as an example , we present the design lam used for its proposal in response to the nirspec / ifu invitation to tender .
in many physics analyses , one seeks to extract information from a reaction with multi - dimensional kinematics. fits to binned data are often limited by statistical uncertainties ; thus , use of the unbinned maximum likelihood method is generally preferred .this method is well suited to obtaining estimators for unknown parameters in a hypothesis probability density function ( p.d.f . ) .it is also an excellent way of discriminating between hypotheses ; however , it does not provide a means of determining how well the best hypothesis describes the data . in this paper, we describe an _ ad - hoc _procedure for obtaining a goodness - of - fit measurement of a hypothesis whose parameters were determined using an unbinned maximum likelihood fit without having to bin the data .this quantity is not meant to replace the likelihood .the likelihood should still be used to determine which hypothesis best describes the data and to obtain estimators for any unknown parameters , _i.e. _ extract any physical observables .the goodness - of - fit quantity obtained using our method would simply be used to judge how well the best hypothesis describes the data . in our approach ,event - by - event standardized residuals are used to obtain the global .these residuals can also be used for diagnostic purposes to study the goodness - of - fit as a function of kinematics or detector components .determining where in phase space a fit fails to describe the data is vital to understanding what physics has not been accounted for in the p.d.f .identifying regions or components of the detector where a fit fails to describe the data could help diagnose unaccounted for inefficiencies in the acceptance calculation .other authors have also developed methods for obtaining goodness - of - fit quantities from unbinned data .the methods closest to ours are based on the nearest neighbors to a given event .however , there are several other recent articles on other procedures as well . while these and other authors have developed methods for obtaining goodness - of - fit quantities from unbinned maximum likelihood fits , the advantage of our approach lies in the event - by - event residuals .it is also straight - forward to apply our method to most experimental physics analyses .consider a data set composed of total events , each of which is described by coordinates , .the coordinates can be masses , angles , energies , _ etc_. two separate hypotheses have been proposed to describe the data .their probability density functions will be denoted by where are the functional dependencies on the coordinates , , and ( possible ) unknown parameters , , of hypothesis . using these p.d.f.s , unbinned maximum likelihood fitscan easily be performed to obtain estimators , , for the unknown parameters , , of each hypothesis .the likelihoods from these fits , , can be used to determine which hypothesis provides the better description of the data ; however , the following simple question about this hypothesis can not yet be answered : `` how well does it describe the data ? ''the aim of this procedure is to obtain a goodness - of - fit measurement of a hypothesis whose parameters were determined using an unbinned maximum likelihood fit .we first need to define a metric for the space spanned by the `` relevant '' coordinates of , _i.e. _ all coordinates on which the p.d.f has functional dependence .a reasonable choice is to use , where is the maximum possible difference between the coordinates , , of any two events in the sample . 
using this metric , the distance between any two events , ,is given by ^ 2,\ ] ] where the sum is over all relevant coordinates ( discussed above ) .for each event , we then compute the distance , , to the closest event using ( [ eq : dist ] ) .the value of is discussed below .thus , a hypersphere centered at the event with radius contains data events ( excluding the event itself ) . in this way, we can calculate the density of data events at the point . by comparing these density calculations to those predicted by a given hypothesis, we can determine how well the hypothesis describes the data without resorting to binning .for each event , the standardized residual , , for any hypothesis is then calculated according to where is the number of measured ( predicted ) events contained within a hypersphere of radius centered at and is the uncertainty on . for the case where the data set contains no background events ( dealing with backgrounds is discussed in section [ section : example : bkgd ] ) , and , provided is chosen to be large enough for the gaussian approximation to hold . the number of predicted events , , is calculated for hypothesis as ,\ ] ] where the integral in the numerator is over the hypersphere of radius centered at discussed above . in principle , there are cases where the integrals in ( [ eq : method - num - predicted ] ) can be performed analytically ; however , once detector acceptance is included , for which an analytic expression typically is not known , they must be done using the monte carlo technique .for these cases , can be calculated by generating monte carlo events according to phase space which leads to the following approximation of ( [ eq : method - num - predicted ] ) : ,\ ] ] where the heaviside function , , is used to restrict the sum in the numerator to monte carlo events within the hypersphere .the statistical uncertainty in the monte carlo calculation of would be , where is the number of monte carlo events within the hypersphere .we note here that if is small , then the uncertainty is better approximated by .if the hypothesis does in fact provide a good description of the data , then the values of obtained from ( [ eq : chi2-event ] ) will follow a distribution with one degree of freedom . the overall goodness - of - fitcan be obtained as follows : where is the number of free parameters present in the hypothesis p.d.f .hypotheses which describe the data well will yield values of , while those which do not describe the data well will yield larger values .as an example , we will consider the reaction in a single bin , _i.e. _ a single center - of - mass energy and production angle bin ( extending the example to avoid binning in production angle , or , is discussed below ) .the decays to about 90% of the time ; thus , we will assume we have a detector which has reconstructed events .below we will analyze a simulated photoproduction data set .the goal of our model analysis is to extract the polarization observables known as the _ spin density matrix elements _ , denoted by ( discussed below ) . 
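the whole procedure condenses into a short sketch . it assumes an array of data events , an accepted phase - space monte carlo sample and the fitted p.d.f. evaluated on the monte carlo events , so that the ratio of the weighted sum inside each hypersphere to the total weighted sum reproduces the monte carlo estimate of the predicted yield described above . the brute - force pairwise distances are only practical for modest sample sizes , and all names and the default n_c are illustrative assumptions .

import numpy as np

def event_chi2(data, mc, pdf_mc, n_c=100):
    """per-event standardized residuals for an unbinned goodness-of-fit test.
    data   : (n, d) array of the relevant coordinates of the data events
    mc     : (m, d) array of accepted phase-space monte carlo events
    pdf_mc : length-m array, fitted hypothesis p.d.f. evaluated at each mc event"""
    data, mc, pdf_mc = map(np.asarray, (data, mc, pdf_mc))
    n = len(data)
    delta = data.max(axis=0) - data.min(axis=0)          # coordinate ranges of the metric

    def dist2(a, b):                                      # squared distances in the scaled metric
        return (((a[:, None, :] - b[None, :, :]) / delta) ** 2).sum(axis=2)

    dd = dist2(data, data)
    np.fill_diagonal(dd, np.inf)
    r = np.sqrt(np.sort(dd, axis=1)[:, n_c - 1])          # radius holding the n_c nearest events
    dm = np.sqrt(dist2(data, mc))

    chi2 = np.empty(n)
    for i in range(n):
        inside = dm[i] <= r[i]
        k_mc = int(inside.sum())                          # raw mc events in the hypersphere
        n_meas = float(n_c)                               # measured events (no background here)
        n_pred = n * pdf_mc[inside].sum() / pdf_mc.sum()  # predicted events via mc integration
        sigma2 = n_meas + n_pred ** 2 / max(k_mc, 1)      # data + mc statistical uncertainty
        chi2[i] = (n_meas - n_pred) ** 2 / sigma2
    return chi2

for a good hypothesis the sum of the returned values divided by ( number of events minus number of free parameters ) should be close to one , and the individual residuals should follow a chi - square distribution with one degree of freedom .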
in terms of the mass of the system , ,the events were generated according to 3-body phase space weighted by a voigtian ( a convolution of a breit - wigner and a gaussian ) to account for both the natural width of the and detector resolution .for this example , we chose to use mev / c for the detector resolution .the goal of our analysis is to extract the three measurable elements of the spin density matrix ( for the case where neither the beam nor target are polarized ) traditionally chosen to be , and .these can be accessed by examining the distribution of the decay products ( ) of the in its rest frame .for this example , we chose to work in the helicity system which defines the axis as the direction of the in the overall center - of - mass frame , the axis as the normal to the production plane and the axis is simply given by .the decay angles are the polar and azimuthal angles of the normal to the decay plane in the rest frame .the decay angular distribution of the in this frame is then given by we chose to use the values , and for our simulated data set .we will begin by considering the simplest case where the signal is able to be extracted without any background contamination and our detector efficiency is 100% .for this study , a simulated data set of 10,000 events was generated following the criteria described above ( see figure [ fig : gen - no - acc - no - bkgd ] ) . to facilitate the calculation of ( [ eq : method - num - predicted - mc ] ) ,100,000 monte carlo events were generated using the same distribution as the data ; however , the decay distribution was generated according to flat phase space in and .the likelihood function required to extract the spin density matrix elements is simply where is the decay angular distribution defined in ( [ eq : schil ] ) . to obtain estimators for the elements , we minimize which yields the values [ eq : rho - no - acc - no - bkgd ] these values are in good agreement with the values used to generate the data . at this point in our model analysis ,we have extracted the physical observables we are interested in .the spin and parity of the meson are well established ; however , what if we were analyzing a less well - known meson ? in that case , we may not be certain that , _i.e. _ the use of ( [ eq : schil ] ) might not be justified .prior to publishing our results , it would be reassuring to know that the fit performed using ( [ eq : schil ] ) does in fact provide a good description of our data . for this analysis ,the relevant coordinates are the decay angles and , since these are the only kinematic quantities for which the p.d.f . 
defined in ( [ eq : schil ] ) has dependence .the metric defined in ( [ eq : dist ] ) is then given by for this analysis .for each simulated event , we find the distance to the nearest neighbor event , .for now , we ll proceed using , determining what to use for this value is discussed in detail in section [ section : example : nc ] .the phase space monte carlo events we generated are then used to obtain the predicted number densities according to ,\ ] ] using the values given in ( [ eq : rho - no - acc - no - bkgd ] ) .for this example , the 100% acceptance makes explicitly computing the sum in the denominator unnecessary ; however , in the examples that follow it will be required .the standardized residuals , , for this hypothesis are obtained from ( [ eq : chi2-event ] ) as where is obtained from ( [ eq : npi - no - bkgd - no - acc ] ) .figure [ fig : chi2-no - acc - no - bkgd ] shows the , pull and confidence level distributions obtained for our simulated data set .the values are in excellent agreement with the theoretical distributions , _i.e. _ they do follow a distribution .the total is obtained by summing over all events .for this example , the value is 10344 ; thus , ; a clear indicator that this hypothesis provides an excellent description of the data .it is also instructive to examine hypotheses which do not perfectly describe the data .if the likelihood is maximized while requiring , then the spin density matrix elements extracted are and .figure [ fig : chi2-no-1 - 1-no - acc - no - bkgd ] shows the , pull and confidence level distributions obtained for our simulated data set under this hypothesis . since the data were generated with , this fit provides a fair description of the data ; the total . maximizing the likelihood while requiring yields .figure [ fig : chi2-no - off - no - acc - no - bkgd ] shows the , pull and confidence level distributions obtained for our simulated data set under this hypothesis .this fit clearly provides a poor description of the data ; the total .the total values do provide a goodness - of - fit value which accurately indicates how well the fit describes the data .thus , these values can be used to ascertain the quality of the hypotheses .it is also interesting to note that the pull and confidence level distributions are also good indicators .one could plot these quantities _ vs. _ kinematic variables to determine where a given hypothesis fails to describe the data . in this section, we will examine the value of ( the number of nearest neighbor events used to determine the values of the radii ) .figure [ fig : nc_ratio](a ) shows the value obtained for two choices of ( and ) for various event sample sizes .the data sets ( with sample sizes ranging from 50 to 10,000 events ) were all generated from the same parent distributions as in section [ section : example : simple ] .each data set was fit individually to extract the values used to obtain the s .figure [ fig : nc_ratio](a ) clearly shows that the value obtained does depend on the choice of , if is chosen to be greater than about 2% of the total number of events . for values of than 2% of the event sample size , the is quite stable .this behavior is expected .if is large relative to , then the method is averaging over large fractions of phase space ; thus , finer structure in the physics will not be properly accounted for .we also note here that as regardless of the quality of the fit , due to how is calculated ( see ( [ eq : method - num - predicted ] ) ) . 
figure [ fig : nc_ratio](b ) shows the value of obtained for the same two choices of for various event sample sizes for fits to the data which constrain .the values of are again stable when is chosen to be less than 2% of the data ; however , the values depend on the choice of .this behavior is again expected ; a similar effect can be seen in fits to binned data for which the value of depends on the number of bins .the larger the value of , the larger the can be .consider a kinematic region where the number density of the data is high , leading to a small value of .for this case , the largest obtainable value for is thus , there are two competing factors which should be considered when choosing the value of .the ratio of must be small enough to permit a true comparison of the finer structure of the physics ; however , the value of must be large enough such that the relative statistical uncertainties do not forbid larger values of .typically , these considerations will dictate be about 1%-2% of the event sample size .if this results in being less than 50 , _i.e. _ if , then this method will most likely be less effective . of course , this statement is not all encompassing ; in practice , it will depend on the physics .( color online ) events generated without background and with 100% acceptance .the quantities plotted are vs. ( red triangles ) and vs. ( black squares ) . , scaledwidth=45.0% ] it is also useful to examine how changes in the likelihood map onto changes in the values obtained using this method . herewe trace out both goodness - of - fit quantities near the optimal value of .figure [ fig : diff - comp ] shows the changes in and _ vs _ for the data set used in section [ section : example : simple ] ( and ) .clearly , an increase in does corresponds to an increase in we will now extend our example by including detector acceptance .the physics used in this section will be identical to that of section [ section : example : simple ] ; however , we will now include a detector efficiency , , given by we again generated 10,000 data events and 100,000 monte carlo events from the same parent distributions as in section [ section : example : simple ] convolved with the detector acceptance given in ( [ eq : detector - acc ] ) .figure [ fig : gen - acc - no - bkgd ] shows the decay angular distribution of this data set .the effects of the detector acceptance are clearly visible when compared with figure [ fig : gen - no - acc - no - bkgd ] . an unbinned maximum likelihood fit was performed and the following values were obtained for : [ eq : rho - acc - no - bkgd ] to obtain values , we apply the procedure just as in section [ section : example : simple ] .figure [ fig : chi2-acc - no - bkgd ] shows the , pull and confidence level distributions obtained for this simulated data set .the values are again in excellent agreement with the theoretical distributions .the total , indicating that this fit does provide an excellent description of the data ( as it should ) .( color online ) ( radians ) vs : events generated without background and with detector acceptance given by ( [ eq : detector - acc ] ) ., scaledwidth=45.0% ] in many physics analyses , there is a sample of non - interfering background events which can not be separated from the signal .we will now deal with this situation .the signal sample will simply be that of section [ section : example : acc ] ; thus , we will be including detector acceptance effects as well . 
for the background , we chose to generate it according to 3-body phase space weighted by a linear function in and in the decay angles .the number of background events generated , , was 10,000 ( the background events were also subjected to the detector acceptance given in ( [ eq : detector - acc ] ) ) .figure [ fig : decay - bkgd - acc](a ) shows the mass spectrum for all generated events and for just the background .the generated decay angular distributions for all events , along with only the signal and background are shown in figures [ fig : decay - bkgd - acc](b ) , ( c ) and ( d ) .there is clearly no way to separate out the signal events through the use of a cut .the method used to extract the signal events is described in detail in .it accurately preserves all kinematic correlations in the data while separating signal and background events .each event is assigned a signal weight factor , , or equivalently , a background weight factor , .these -factors are then used to weight each event s contribution to the `` log likelihood '' during unbinned maximum likelihood fits to extract physical observables .the method works in a very similar way to the one presented in this paper .a metric is defined in the space of all relevant kinematic variables and the nearest neighbor events ( we chose ) are selected .for this example , the metric defined in ( [ eq : example - metric ] ) can again be used . sinceeach subset of events occupies a very small region of phase space , the distributions can be used to determine each event s -factor while preserving the correlations present in the remaining kinematic variables ( and ) .to this end , unbinned maximum likelihood fits were carried out for each event , using its nearest neighbors , to determine the parameters in the p.d.f . where , parameterizes the signal as a voigtian ( convolution of a breit - wigner and a gaussian ) with mass gev / c , natural width gev / c and resolution .the parameter sets the overall strength of the signal .the background , in each small phase space region , was parameterized by the linear function the -factor for the event was then calculated as where is the event s mass and are the estimators for the parameters obtained from the event s fit .figures [ fig : decay - bkgd - acc](e ) and ( f ) show the extracted signal and background distributions , _i.e. _ they show the events weighted by and respectively .the agreement with the generated distributions ( see figures [ fig : decay - bkgd - acc](c ) and ( d ) ) is very good .we conclude this section by noting that the full covariance matrix obtained from each event s fit , , can be used to calculate the uncertainty in as the likelihood function used to obtain the spin density matrix elements , defined in ( [ eq : log - likelihood ] ) , can be modified to account for the presence of background events as follows : thus , the -factors are used to weight each event s contribution to the likelihood . 
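the event - by - event weights can be sketched as follows : for each event one selects its nearest neighbours in the scaled decay angles , fits their mass spectrum with a signal - plus - linear - background p.d.f. , and evaluates the signal fraction at the event's own mass . to keep the sketch self - contained the voigtian signal is replaced by a gaussian of fixed width , and the mass , resolution , neighbour count and starting values are illustrative assumptions rather than the values used in the paper .

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def q_factor(i, masses, angles, n_q=100, m0=0.78257, sigma_m=0.005):
    """approximate signal probability (q-factor) for event i; masses and angles are numpy arrays."""
    delta = angles.max(axis=0) - angles.min(axis=0)
    d2 = (((angles - angles[i]) / delta) ** 2).sum(axis=1)
    nbrs = np.argsort(d2)[: n_q + 1]                 # the event itself plus its neighbours
    m = masses[nbrs]
    lo, hi = m.min(), m.max()
    mid, width = 0.5 * (lo + hi), hi - lo

    def nll(p):                                      # unbinned negative log-likelihood
        f_sig, slope = p
        f_sig = np.clip(f_sig, 1e-6, 1.0 - 1e-6)
        sig = norm.pdf(m, m0, sigma_m)
        bkg = np.clip(1.0 + slope * (m - mid), 1e-9, None) / width   # approx. normalized linear shape
        return -np.sum(np.log(f_sig * sig + (1.0 - f_sig) * bkg))

    f_sig, slope = minimize(nll, x0=[0.5, 0.0], method="Nelder-Mead").x
    s_i = f_sig * norm.pdf(masses[i], m0, sigma_m)
    b_i = (1.0 - f_sig) * max(1.0 + slope * (masses[i] - mid), 0.0) / width
    return s_i / (s_i + b_i)

the returned value plays the role of the event's signal weight in the modified likelihood above , and one minus it gives the corresponding background weight .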
using the -factors obtained in section [ section : example : q - factors ] , minimizing ( [ eq : log - l - q ] ) yields which are in good agreement with the generated values .the metric used to obtain the nearest neighbor events is again ( [ eq : example - metric ] ) ; however , the number of measured events contained within each hypersphere is now given by with uncertainty where the sums ( ) are over the events used to calculate the event s -factor and is the correlation factor between events and .this factor is equal to the fraction of shared nearest neighbor events used in calculating the -factors for these events .thus , ( [ eq : n_m - error ] ) accounts for both the statistical uncertainty in the signal yield , along with the uncertainty in the signal - background separation , _i.e. _ the uncertainty in the -factors .the number of predicted events , along with its uncertainty , is calculated the same way as in section [ section : example : simple ] and the values are again obtained using ( [ eq : chi2-event ] ) .figure [ fig : chi2-acc - bkgd ] shows the , pull and confidence level distributions obtained for our simulated data set .the values are again in excellent agreement with the theoretical distributions .this indicates that we have properly handled the uncertainties in the signal .the total .calculating the correlation factor , , in ( [ eq : n_m - error ] ) is very cpu intensive due to the multiple loops which need to be performed over the data events . in , we noted that assuming 100% correlation for all -factors provides a reasonable upper limit on the signal yield in any kinematic region . under this assumption , which eliminates much of the bookkeeping and cpu cycles needed to properly calculate ,the uncertainty on becomes where the sum is again over the nearest neighbor events used to calculate .figure [ fig : chi2-cor - acc - bkgd ] shows the , pull and confidence level distributions obtained for our simulated data set under this assumption .the deviations from the theoretical distributions are relatively small .the total . this value is smaller than 1 as expected; however , it is still reasonably close to the value obtained when the errors are handled rigorously .thus , this is a reasonable approximation and could be used effectively in situations where calculating the values of is not possible ( or , perhaps , desirable ) . to extend this example to allow for the case where the data is not binned in production angle, we would simply need to include or in the vector of relevant coordinates , . to perform a full partial wave analysis on the data, we would also need to include any additional kinematic variables which factor into the partial wave amplitudes , _e.g. _ the distance from the edge of the dalitz plot ( typically included in the decay amplitude ) .we would then construct the likelihood from the partial waves and minimize using the -factors obtained by applying our signal - background separation procedure , including the additional coordinates . the estimators from these event - based fitswould then serve as input into a higher dimensional version of the method presented in this example . 
the procedure for obtaining -values ,however , would remain almost unchanged .we would simply need to account for the additional relevant kinematic variables in our metric .if the fit provides a good description of the data , then the should be approximately one and the squares of the standardized residuals should follow a distribution .in this paper , we have presented an _ ad - hoc _procedure for obtaining values from unbinned maximum likelihood fits which does not require binning the data .this makes it very applicable to multi - dimensional problems .we have shown that these values accurately reflect how well a given hypothesis describes the data .this work was supported by grants from the united states department of energy no .de - fg02 - 87er40315 and the national science foundation no . 0653316 through the `` physics at the information frontier '' program .99 j. friedman , phys . rev . *d9 * , 11 ( 1973 ) .mark f. schilling , the annals of statistics , * 11 * , 13 , ( 1983 ) .mark f. schilling , journal of the american statistical association , * 81 * , 799 ( 1986 ) .b. aslan and g. zech , arxiv : hep - ex/0203010 ( 2003 ) .k. kinoshita , arxiv : physics/0312014 ( 2003 ) .r. raja , arxiv : physics/0401133 ( 2003 ) .v. h. regener , phys . rev . * 84 * , 161 ( 1951 ) . k. schilling , p. seyboth and g. wolf , nuclb15 * , 397 ( 1970 ) .m. williams , m. bellis and c. a. meyer , arxiv:0804.3382 ( submitted to nim ) .
a common goal in an experimental physics analysis is to extract information from a reaction with multi - dimensional kinematics . the preferred method for such a task is typically the unbinned maximum likelihood method . in fits using this method , the likelihood is a goodness - of - fit quantity in the sense that it effectively discriminates between available hypotheses ; however , it does not provide any information as to how well the best hypothesis describes the data . in this paper , we present an _ ad - hoc _ procedure for obtaining goodness - of - fit values from unbinned maximum likelihood fits . this method does not require binning the data , making it very applicable to multi - dimensional problems .
in recent literature a lot of attention has been given to systems that are able to sense electromagnetic near fields ( evanescent waves ) and even to `` amplify '' them .the superlens proposed by pendry is one of such systems .his superlens is based on a veselago medium slab .the real parts of the permittivity and the permeability of the veselago slab are both negative at a certain wavelength .thus , the eigenwaves in the slab are _ backward _ waves , i.e. the wave phase and group velocities are antiparallel .this provides negative refraction and focusing of waves in a planar slab , as was outlined by veselago. however , pendry discovered that there was also a possibility to excite surface plasmon - polaritons on the slab surfaces and , due to that , amplify near fields .the slab thickness can be of the order of the wavelength , so that the plasmon - polaritons excited at both sides of the slab are strongly coupled . under certain conditions the plasmon - polariton excited at the back surface of the slab has a much stronger amplitude than that at the front surface .such amplification of evanescent waves is the key principle in subwavelength imaging .known experimental realizations of volumetric artificial materials with negative parameters are highly anisotropic structures that utilize dense arrays of parallel thin conducting wires and variations of split - ring resonators. proposed isotropic arrangements use orthogonal sets of split rings and also three - dimensional arrays of wires . at the same time , there have been achievements in modeling of veselago materials ( and pendry lens ) with the help of -circuits or transmission - line ( tl ) based structures. these networks do not rely on resonant response from particular inclusions , and the period of the mesh can be made very small as compared with the wavelength .these features allow realization of broadband and low - loss devices , which is extremely difficult if resonant inclusions are used .the transmission - line network approach has been successfully realized in one- and two - dimensional networks , but up to now there have been doubts if it is possible to design a three - dimensional ( 3d ) circuit analogy of the veselago medium .the difficulties arise from the fact that such a 3d network requires a common ground connector .any realization of such a ground will effectively work as a dense mesh of interconnected conductors that blocks propagation of the electromagnetic waves practically the same way as a solid metal does ( the structural period must be much less than the wavelength in order to realize an effectively uniform artificial material ) . in this paperwe introduce isotropic three - dimensional transmission - line networks that overcome this difficulty . in the tl - based networks that we study , the electromagnetic energy propagates through tl sections .the inside of every tl section is effectively screened from the inside of the other sections and from the outer space .this can be very naturally imagined with a 3d cubic - cell network of interconnected coaxial cable segments : the inner conductors of the segments are soldered at the network nodes ; the same is done for the outer conductors .the whole system appears as a 3d pipe network where every pipe holds a central conductor and those conductors are crossing at the node points inside the pipes .a loaded tl network can be realized now by placing loading elements inside the `` pipes '' . 
to couple the waves propagating inside the tl sections with the free - space waves one will have to apply a kind of antenna array with every antenna feeding a particular tl segment . when using transmission lines loaded with bulk elements we speak of waves in the meaning of discrete waves of voltages and currents defined at the loading positions .let us note that in the tl sections _ as such _ the usual , forward waves propagate . only because of the loading the discrete voltage and current waves appear as backward ones when appropriate loading impedances are used .while completing this manuscript , we learned about another possible design of a 3d transmission - line analogy of a backward - wave material described in ref .that design is based on kron s formal representation of maxwell s equations as an equivalent electric circuit. in ref .12 only 1d propagation was studied analytically and 3d properties were analyzed numerically .the proposed structure of 3d super - resolution lens consists of two forward - wave ( fw ) regions and one backward - wave ( bw ) region .the 3d forward - wave networks can be realized with simple transmission lines and the 3d backward - wave network with inductively and capacitively loaded transmission lines .one unit cell of the bw network is shown in fig .[ 3d_zy_tl_unit_cell ] ( the unit cell enclosed by the dotted line ) . in the 3d structurethere are impedances _ z_/2 and transmission lines also along the -axis ( not shown in fig .[ 3d_zy_tl_unit_cell ] ) . in view of potential generalizations ,the loads are represented by series impedances _z_/2 and shunt admittances _y _ , although for our particular purpose to realize a backward - wave network , the loads are simple capacitances and inductances .the unit cell of the fw network is the same as in fig .[ 3d_zy_tl_unit_cell ] but without the series impedances and shunt admittance .the equations that will be derived for these structures can be used in various implementations , but this paper will concentrate on the case when and .first we will derive the dispersion relation for a simplified 3d bw network , i.e. we will not take into account the transmission lines .such approximation is possible at low frequencies . without the transmission line segmentsthis derivation is quite simple and can be done by summing up all the currents that flow to the node ( ) and equating this sum with the current flowing to the ground ( through admittance ) , see fig .[ 3d_zy_tl_unit_cell ] .the result is ( u_+1,,+u_,+1,+u _ , , + 1+u_-1,,+u_,-1,+u_,,-1 - 6u _ , , ) = u_,,y simple_1 .we look for a solution of the form ( ) , and if we use , and , ( 1 ) can be reduced to ( e^-jq_x+e^-jq_y+e^-jq_z+e^+jq_x+e^+jq_y+e^+jq_z-6)=y , or ( q_x)+(q_y)+(q_z)=+3 .if we now insert and we get the dispersion relation for the _ lc_-loaded network : ( q_x)+(q_y)+(q_z)=-+3 .simple_3 next we want to take the transmission lines into account. 
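restoring the stripped symbols , the low - frequency dispersion relation just derived appears to read cos(q_x d) + cos(q_y d) + cos(q_z d) = 3 - 1/(2 omega^2 l c) for the purely lc - loaded mesh ( series capacitors c , shunt inductors l to ground ) . the sketch below solves it numerically for a z - directed wave ; this reconstruction of the formula and the component values are assumptions made for illustration , not the values used later in the paper .

import numpy as np

L_LOAD = 10e-9      # shunt inductance to ground (henry) -- illustrative value
C_LOAD = 1.0e-12    # series capacitance (farad)         -- illustrative value
D = 0.01            # mesh period (metre); only the product q*d matters here

def qzd_lc_mesh(freq, qx=0.0, qy=0.0):
    """return q_z * d for the lc-loaded mesh; the result is complex inside a stopband."""
    w = 2.0 * np.pi * freq
    rhs = 3.0 - 1.0 / (2.0 * w**2 * L_LOAD * C_LOAD) - np.cos(qx * D) - np.cos(qy * D)
    return np.arccos(rhs + 0j)          # cos(q_z d) = rhs

freqs = np.linspace(0.2e9, 2.0e9, 400)
qzd = np.array([qzd_lc_mesh(f) for f in freqs])
passband = np.abs(qzd.imag) < 1e-9
print(f"passband starts near {freqs[passband][0]/1e9:.2f} ghz" if passband.any() else "no passband")

in the passband q_z d falls from pi towards zero as the frequency grows , so the phase and group velocities point in opposite directions ; this is the backward - wave behaviour required for the lens .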
the effect of the transmission lines can be derived by first evaluating a part of the three - dimensional network as the one shown in fig .[ 1d_deriv ] and deriving the relation between the current that flows into a node and the voltages of adjacent nodes .if current is flowing towards node ( ) and the current that goes into node ( ) from the left is , then using the abcd - matrix for a transmission line we get : u_1=a_tu_2+b_ti_2 u_1 , i_1=c_tu_2+d_ti_2 i_1 , where ( ccc a_t & b_t + c_t & d_t ) = ( ccc ( k_0d/2 ) & jz_0 ( k_0d/2 ) + jz_0 ^ -1 ( k_0d/2 ) & ( k_0d/2 ) ) .abcd in is the wavenumber of waves in the transmission lines . from and we can solve and as functions of and : i_1= , i_2=.similarly for and we get : i_3= , i_z=. i_z next we can derive two equations for the current flowing through the series impedance and solve from both of them : i_2=z^-1(u_2-u_3)=,u_2= ; i_3=z^-1(u_2-u_3)= i_3 , u_2=. u_2 if we let in both equations be equal , we can solve as a function of and : u_3= .u_3 in order to derive an equation for [ the current that flows into node ( ) from the direction of node ( ) ] as a function of and , we insert into and get i_z =- u_4 . if we use and , then . because of the symmetry we can derive the dispersion relation exactly the same way as in , and for the case , the result is ( q_x)+(q_y)+(q_z)=-3 , disp_rel_bw where s_bw= s_bw , k_bw=- k_bw .to derive the dispersion relation for the forward - wave network , we can use the equations derived for the backward - wave network letting and . this way we get from and the following equations for and : s_fw= , k_fw=-- . fromwe get the dispersion relation : ( q_x)+(q_y)+(q_z)=-3 .disp_rel_fw dispersion curves for backward - wave and forward - wave networks can be plotted if the values of the transmission line parameters and _ l _ and _ c _ are fixed .let us choose the parameters of the tls and the lumped components as : nh , pf , m ( the period of the network ) , ohm , ohm ( characteristic impedances of the tls ) .see figs .[ dispersion_bw ] and [ dispersion_fw ] for examples of dispersion curves when a -directed plane wave is considered ( i.e. ) . , where is the speed of light in vacuum .notice that the bw network supports backward - waves only in the region where 0.32 ghz.98 ghz . above that frequency band and the following stopband ,the bw network works as a normal fw network until the next stopband appears . by tuning the capacitance ( or inductance ) , the stopband between the bw and fw regions shown in fig .[ dispersion_bw ] can be closed , see fig .[ dispersion_cvar]a , where pf . as can be seen from fig .[ dispersion_cvar]b , by changing the value of from this `` balanced '' case , the stopband is formed either by moving the edge of the fw region up ( ) or by moving the edge of the bw region down ( ) .next the characteristic impedance of the backward - wave network is derived .if we assume that the interface between the two networks is in the center of capacitor _ c _ ( see fig . [ 1d_deriv ] , where ) , then we can define the characteristic impedance as z_0,bw==. first we have to express , and ( or optionally ) as functions of and .we can use equations .if we insert into and , we find , and as functions of and . therefore we can present , and simply as : , and . because for a wave moving along the -direction , the characteristic impedance can be expressed as z_0,bw== , z0_bw where a_bw= , b_bw=- , c_bw= , d_bw=- , e_bw= , f_bw=- f_bw . 
to derive the characteristic impedance of the forward - wave network, we can use the equations derived for the backward - wave network if we insert in them .if this condition applies , we get from : z_0,fw== , z0_fw where a_fw = c_fw= , b_fw = d_fw=- , e_fw= , f_fw=- . and can be plotted from and as functions of the frequency if the transmission line parameters and _ l _ and _ c _ are fixed .let us choose the parameters of the tls and the lumped components as : pf , nh , m , , ohm , ohm ( characteristic impedances of the tls ) .see figs .[ z0_fvar_bw ] and [ z0_fvar_fw ] for examples of the characteristic impedances when a -directed plane wave is considered ( i.e. ) .the effect of changing on the characteristic impedance can be seen in fig .[ z0_fvar_bw ] , and the effect of changing on the characteristic impedance is shown in fig .[ z0_fvar_fw ] .notice that for the bw network the characteristic impedance is continuous only in the `` balanced '' case ( pf here ) , because in the stopbands the real part of the impedance is zero .we consider a perfect lens with axis parallel to the -axis . to have perfect imaging , the lens should support all spatial harmonics ( i.e. waves with all possible transverse wavenumbers ) of the source field and for those values of , and should be equal in magnitude but opposite in sign . from fig .[ f_qzvar_bw_and_fw ] we can conclude that the matching of and ( which corresponds to a relative refraction index of ) can be achieved only at one frequency ( depending on the parameters of the forward - wave and backward - wave networks ) . in fig .[ f_qzvar_bw_and_fw ] this frequency is ghz . at this frequency the dispersion curves of the forward - wave and backward - wave networksintersect . in the analytical formthis means that -3=-3 , disp_ideal as can be seen from and .the dispersion curves in fig .[ f_qzvar_bw_and_fw ] are plotted so that and are zero ( a plane wave moving along the -axis ) . from fig .[ f_qzvar_bw_and_fw ] it is seen also that for ghz when .this means that the absolute value of the maximum wavenumber for propagating waves is approximately m .in addition to matching the wavenumbers ( refractive indices ) , to realize an ideal `` perfect lens '' the interfaces between the forward - wave and backward - wave networks should be also impedance - matched .if the two regions would not be matched , reflections from the interface would distort the field patterns both inside and outside the lens . as can be seen from figs .[ z0_fvar_bw ] and [ z0_fvar_fw ] , the characteristic impedances of the backward - wave network and the forward - wave network are about 40 ohms at the frequency where the wavenumbers are matched ( and when pf and ohm ) .notice that the impedances of the transmission lines in the forward - wave network have been lowered from 85 ohm to 70 ohm to achieve impedance matching of the forward - wave and backward - wave networks .next the effect of nonzero on the matching is considered ( at the optimal frequency ) .the minimum and maximum values of can be found from and . from andwe can plot and as functions of the transverse wavenumber ( ) if we fix the frequency .now and are surfaces with variables and . by comparing these surfaces, it was seen that they are practically the same for all possible values of .this happens only at the frequency ghz , where the dispersion curves of the forward - wave and backward - wave networks intersect . 
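for a one - dimensional cut of the network , both the dispersion and the characteristic ( bloch ) impedance follow directly from the abcd matrix of one symmetric unit cell with reference planes at the centre of the series capacitors , i.e. z/2 -- line of length d/2 -- shunt y -- line of length d/2 -- z/2 , as in the derivation above . the sketch below evaluates this cascade ; the component values , line impedance and phase velocity are illustrative assumptions , not the parameters quoted in the text .

import numpy as np

def abcd_series(z):  return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)
def abcd_shunt(y):   return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)
def abcd_line(beta_l, z0):
    c, s = np.cos(beta_l), np.sin(beta_l)
    return np.array([[c, 1j * z0 * s], [1j * s / z0, c]], dtype=complex)

def bloch(freq, L=10e-9, C=1e-12, d=0.01, z0=70.0, v=2e8):
    """bloch wavenumber q*d and bloch impedance of the symmetric loaded-line unit cell."""
    w = 2.0 * np.pi * freq
    z, y = 1.0 / (1j * w * C), 1.0 / (1j * w * L)    # series capacitor, shunt inductor
    beta_l = w * (d / 2.0) / v                        # electrical length of each half line
    cell = (abcd_series(z / 2) @ abcd_line(beta_l, z0) @ abcd_shunt(y)
            @ abcd_line(beta_l, z0) @ abcd_series(z / 2))
    a, b = cell[0, 0], cell[0, 1]
    qd = np.arccos(a)                                 # cos(q d) = a for a symmetric cell
    zb = b / np.sqrt(a * a - 1.0 + 0j)                # bloch impedance, sign fixed below
    return qd, zb if zb.real >= 0 else -zb

print(bloch(1.0e9))   # wavenumber and impedance at 1 ghz for the assumed parameters

as noted above , only in the balanced design does the real part of the impedance stay continuous across the band edge ; sweeping the frequency in this sketch reproduces that qualitative behaviour .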
because the characteristic impedances are functions of the abcd - matrices and ( and is a function of ), we can plot and as functions of if we fix the frequency .now and are surfaces with variables and . by comparing these surfaces, it was seen that they are almost the same ( less than one percent difference ) for all possible values of at ghz .see fig .[ z0diff_ktvar ] for a 2d cut of relative difference between such surfaces ( now and therefore ) .the ideal `` perfect lens '' situation is achieved with ohm , as can be seen also from fig .[ z0diff_ktvar ] .when the frequency deviates from the optimal value ( for which is true ) , the wavenumbers and characteristic impedances are no longer matched .the effect of this is distortion of the image seen in the image plane of the lens .the transmission coefficient of the lens can be solved by considering the incident and reflected fields in the lens system .let us assume that the incident field outside the lens has the unit amplitude , the reflected field outside the lens has amplitude , the incident field inside the lens has amplitude , and the reflected field inside the lens has amplitude ( is the distance from the front edge of the lens ) . from these valueswe can form the following equations ( the length of the bw slab is and the transmission coefficient of the bw slab is , see fig . [ perfect_lens ] ) : 1+r = a+b t_1 , = t_2 , t_lens = ae^-jk_z , bw l+be^+jk_z , bw l t_3 , = .the resulting equation for the transmission coefficient is : t_lens(k_t)=. tthe total transmission from the source plane to the image plane ( see fig . [ perfect_lens ] ) is then ( distance from source plane to lens is and distance from lens to image plane is ) t_tot(k_t)=t_lens(k_t)e^-jk_z , fw(s_1+s_2 ) .t_tot the longitudinal wavenumber as a function of can be found from the dispersion relations .let us choose and so we can plot curves instead of surfaces . as a function of can now be plotted if the frequency is fixed .let us choose the lengths of the lens system as the following : m , m , m. now we can choose the frequency at which we want to calculate .let us study the transmission properties at the matching frequency ghz . from and we can plot the magnitude and phase of as a function of , see fig .[ t_tot ] , case 1 . from fig .[ t_tot ] it is seen that the `` lens '' works quite well for the propagating modes ( m m ) , see fig . [ source_image ] for an example of phase correction in the image plane . according to fig .[ t_tot ] , for evanescent modes ( m m ) the `` lens '' works only in a limited range of , where the absolute value of the transmission coefficient is greater than zero .one can notice that for evanescent modes a mismatch in affects mostly the phase of in the area of propagating waves and a mismatch in the characteristic impedances affects primarily the absolute value of . to improve the effect of evanescent waves enhancement, the characteristic impedances should be matched better in the evanescent wave area of ( i.e. m m ) .there are several ways to achieve a better matching of the characteristic impedances .first , there is of course a possibility to change the impedances of the transmission lines , but this is probably not practically realizable because it would require very accurate manufacturing ( even a very small deviation from the ideal impedance values destroys the effect of growing evanescent waves ) .the tuning of was tested and using the exact impedance required ( see fig . 
[ z0diff_ktvar ] ) , the transmission of evanescent waves was clearly improved .the resonance peaks in fig .[ t_tot ] were moved further away from the center and the absolute value of was larger than or equal to unity approximately for m m ) .second , there is a possibility to change the frequency and study if the impedance matching can be made better that way ( this also means that the matching of wavenumbers is made worse which can also destroy the effect of growing evanescent waves ) .this was tested and the best results were obtained using frequency ghz .the region of transmitted s was again increased , i.e. the resonance peaks in fig .[ t_tot ] were moved further away from the center and the absolute value of was larger than or equal to unity approximately for m m ) . the third way to enhance the growth of evanescent waves is to change the length of the `` lens '' . from itis seen that the growth of evanescent waves is destroyed by the term in the denominator .this term can be made smaller by decreasing the length of the `` lens '' .see fig .[ t_tot ] , case 2 ( is larger than or equal to unity approximately for m m ) and fig .[ source_image_evanescent ] , where the distances equal m , m and m. from fig .[ source_image_evanescent ] one can conclude that there is a significant growth of evanescent waves in the lens . by using the shortened lens _ and _ at the same time tuning the frequencyappropriately , it was seen that the transmission coefficient could be made practically ideal ( i.e. and for all possible values of ) . using the shortened lens ( same values as in fig .[ source_image_evanescent ] ) and frequency ghz , the absolute values of evanescent fields were indeed almost the same in the image plane and in the source plane ( less than one percent difference ) .how to manufacture three - dimensional transmission line networks ? the main problem is the ground plane , which should exist in all three dimensions .one solution would be to use coaxial transmission lines ( regular in the forward - wave network and loaded with lumped _ l_- and _ c_-components in the backward - wave network ) as shown in fig .[ 3d_coax_and_mstrip_unit_cell]a .this structure is realizable , but we propose a simpler structure based on microstrip lines , as presented in fig .[ 3d_coax_and_mstrip_unit_cell]b .the problem with microstrip lines is of course the design of intersections where the transmission lines from six directions meet .this problem can be overcome by using ground planes which have holes in them at these intersection points .this way the conducting strip can be taken through the substrate and thus connection of the vertical conducting strips becomes possible . the proposed structure ( see fig . [3d_coax_and_mstrip_unit_cell]b ) has been simulated in ansoft hfss ( version 9.2.1 ) . 
due to complexity of the structure and the limited calculation power available ,the three - dimensional structure was simulated only near the first lens interface of fig .[ perfect_lens ] .the simulated model had ( ) unit cells in the forward - wave region and unit cells in the backward - wave region .the properties of the transmission lines and lumped elements were the same as in fig .[ f_qzvar_bw_and_fw ] .the edges of the system were terminated with matched loads to prevent reflections ( ohm in the backward - wave region and ohm in the forward - wave region ) .different types of source fields ( plane waves with different incidence angles and a point source ) were tested and in all cases negative refraction was observed at the interface between the forward - wave and backward - wave networks at the expected frequency ( ghz ) .a two - dimensional cut of the proposed structure was simulated as a complete `` lens '' system .again negative refraction was seen at both interfaces , and therefore also focusing of propagating waves was observed .see fig .[ 2dplot ] for the plot of the phase of the electric field in the two - dimensional simulation .the source field is excited at the left edge of the system in fig .[ 2dplot ] .when the field magnitude is plotted and animated as a function of phase , it is clearly seen that the phase propagates to the right in the forward - wave regions and to the left in the backward - wave region .the energy of course propagates to the right in all regions .in this paper we have introduced and studied a three - dimensional transmission - line network which is a circuit analogy of the superlens proposed by pendry .the structure is a 3d - network of interconnected loaded transmission lines .choosing appropriate loads we realize forward - wave ( fw ) and backward - wave ( bw ) regions in the network . the dispersion equations and analytical expressions for the characteristic impedances for waves in fw and bw regions have been derived .a special attention has been given to the problem of impedance and refraction index matching of fw and bw regions . from the derived dispersion equationsit has been seen that there exist such a frequency at which the corresponding isofrequency surfaces for fw and bw regions coincide .theoretically this can provide distortion - less focusing of the propagating modes _ if _ the wave impedances of fw and bw regions are also well matched .impedance matching becomes even more important when the evanescent modes are taken into account . in this paperwe have shown that the wave impedances can be matched at least within 1% accuracy or better if the characteristic impedances of the transmission lines are properly tuned . 
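the transmission result derived above can also be checked with a few lines of code . the sketch uses the familiar symmetric - slab ( fabry - perot ) form , which is what the four equations above for the forward and backward field amplitudes reduce to when both interfaces see the same pair of impedances ; this reconstruction , the impedance values , the wavenumbers and the distances are illustrative assumptions rather than the exact parameters of the paper .

import numpy as np

def t_total(kz_fw, kz_bw, z_fw, z_bw, l, s1, s2):
    """source-plane to image-plane transmission through a bw slab of thickness l
    embedded in a fw mesh (symmetric-slab fabry-perot form, e^{-j k z} convention)."""
    rho = (z_bw - z_fw) / (z_bw + z_fw)            # single-interface reflection coefficient
    ph = np.exp(-1j * kz_bw * l)                   # propagation factor across the slab
    t_slab = (1.0 - rho**2) * ph / (1.0 - rho**2 * ph**2)
    return t_slab * np.exp(-1j * kz_fw * (s1 + s2))

# matched "perfect lens" case: kz_bw = -kz_fw, z_bw = z_fw, l = s1 + s2
kz = 50.0                                           # propagating wave, 1/m
print(abs(t_total(kz, -kz, 40.0, 40.0, 0.10, 0.05, 0.05)))                    # -> 1.0
kappa = 50.0                                        # evanescent wave, decay constant 1/m
print(abs(t_total(-1j * kappa, 1j * kappa, 40.0, 40.0, 0.10, 0.05, 0.05)))    # -> 1.0
# a modest impedance mismatch mainly suppresses the evanescent part of the spectrum:
print(abs(t_total(-1j * kappa, 1j * kappa, 40.0, 44.0, 0.10, 0.05, 0.05)))

the last line illustrates the point made above : for evanescent waves even a small impedance mismatch mainly reduces the magnitude of the transmission coefficient and destroys the enhancement , while for propagating waves it mostly perturbs the phase .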
however , from the practical point of view an accuracy better than 1% becomes hardly realizable .it has been shown that decreasing the thickness of the bw region reduces the negative effect of the impedance mismatch , while the amplification of the evanescent modes is preserved .we have also outlined a couple of prospective designs of the perfect lens discussed in this paper and numerically simulated their performance .this work has been done within the frame of the _ metamorphose _ network of excellence and partially funded by the academy of finland and tekes through the center - of - excellence program .the authors would like to thank dr .mikhail lapine for bringing paper to their attention and for helpful discussions .eleftheriades , a.k .iyer , and p.c .kremer , `` planar negative refractive index media using periodically _ l - c _ loaded transmission lines , '' _ ieee trans .microwave theory and techniques _ , vol .2702 - 2712 , dec . 2002 .a. grbic and g. v. eleftheriades , `` periodic analysis of a 2-d negative refractive index transmission line structure , '' _ ieee trans .antennas and propagation _ , vol .51 , no . 10 , pp .2604 - 2611 , oct .a. grbic and g.v .eleftheriades , `` negative refraction , growing evanescent waves and sub - diffraction imaging in loaded transmission - line metamaterials , '' _ ieee trans . microwave theory and techniques _ ,2297 - 2305 , dec . 2003
an isotropic three-dimensional perfect lens based on cubic meshes of interconnected transmission lines and bulk loads is proposed. the lens is formed by a slab of a loaded mesh placed between two similar unloaded meshes. the dispersion equations and the characteristic impedances of the eigenwaves in the meshes are derived analytically, with an emphasis on generality. this allows the design of transmission-line meshes with desired dispersion properties. the required backward-wave mode of operation in the lens is realized with simple inductive and capacitive loads. an analytical expression for the transmission through the lens is derived and the amplification of evanescent waves is demonstrated. factors that influence the enhancement of evanescent waves in the lens are studied and the corresponding design criteria are established. a possible realization of the structure is outlined.
the paper deals with the estimation problem in the heteroscedastic nonparametric regression model, where the design points are given, is an unknown function to be estimated, is a sequence of centered independent random variables with unit variance, and are unknown scale functionals depending on the design points and the regression function. typically, the notion of asymptotic optimality is associated with the optimal convergence rate of the minimax risk (see e.g. ibragimov, hasminskii, 1981; stone, 1982). an important question in optimality results is to study the exact asymptotic behavior of the minimax risk. such results have been obtained only in a limited number of investigations. as to the nonparametric estimation problem for heteroscedastic regression models, we should mention the papers by efromovich, 2007, efromovich, pinsker, 1996, and galtchouk, pergamenshchikov, 2005, concerning the exact asymptotic behavior of the risk, and the paper by brua, 2007, devoted to efficient pointwise estimation for heteroscedastic regressions. heteroscedastic regression models are widely used in financial mathematics, in particular in calibration problems (see e.g. belomestny, reiss, 2006). an example of heteroscedastic regression models is given by econometrics (see, for example, goldfeld, quandt, 1972, p. 83), where for consumer budget problems one uses a parametric version of the model with the scale coefficients defined through some unknown positive constants. the purpose of this article is to study asymptotic properties of the adaptive estimation procedure proposed in galtchouk, pergamenshchikov, 2007, for which a non-asymptotic oracle inequality was proved for quadratic risks. we will prove that this oracle inequality is asymptotically sharp, i.e. the asymptotic quadratic risk is minimal. this means that the adaptive estimation procedure is efficient under some conditions on the scales which are satisfied in the case considered here. note that in efromovich, 2007, and efromovich, pinsker, 1996, an efficient adaptive procedure is constructed for heteroscedastic regression when the scale coefficient is independent of the regression function. in galtchouk, pergamenshchikov, 2005, for this model the asymptotic efficiency was proved under strong conditions on the scales which are not satisfied in the present case. moreover, in the cited papers the efficiency was proved for gaussian random variables, which is very restrictive for applications of the proposed methods to practical problems. in this paper we modify the risk. we take an additional supremum over the family of unknown noise distributions, similarly to galtchouk, pergamenshchikov, 2006. this modification allows us to eliminate from the risk the dependence on the noise distribution. moreover, for this risk an efficient procedure is robust with respect to changes of the noise distribution. it is well known that to prove asymptotic efficiency one has to show that the asymptotic quadratic risk coincides with the lower bound, which is equal to the pinsker constant.
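As a purely illustrative companion to the model and the adaptive procedure discussed in this paper, the sketch below generates data from a heteroscedastic regression on an equidistant design, projects them on a trigonometric basis, and selects a weight vector from a small family by minimising a penalised empirical cost. The regression function, the scale functional sigma(x, f), the weight family, the noise-level estimate and the penalty coefficient are all simplified assumptions, not the paper's exact construction.

```python
# A simplified numerical sketch of an adaptive weighted least-squares procedure
# on a trigonometric basis.  All concrete choices below (f, sigma, weights,
# penalty) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 501                                    # odd sample size, as in the text
x = np.arange(1, n + 1) / n                # design points x_k = k/n
f_true = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
sigma = lambda t, ft: np.sqrt(0.5 + 0.5 * ft**2)   # scale depending on f (assumed form)
y = f_true(x) + sigma(x, f_true(x)) * rng.standard_normal(n)

# Trigonometric basis: phi_1 = 1, phi_{2k} = sqrt(2) cos(2 pi k x), phi_{2k+1} = sqrt(2) sin(2 pi k x)
def phi(j, t):
    if j == 1:
        return np.ones_like(t)
    k = j // 2
    return np.sqrt(2) * (np.cos(2 * np.pi * k * t) if j % 2 == 0
                         else np.sin(2 * np.pi * k * t))

N = 101                                    # number of basis functions kept
Phi = np.stack([phi(j, x) for j in range(1, N + 1)])   # shape (N, n)
theta_hat = Phi @ y / n                    # empirical Fourier coefficients

# A small family of Pinsker-type weight vectors indexed by a cut-off m
# (a simplified stand-in for the paper's weight family).
cutoffs = [5, 10, 20, 40, 80]
weights = {m: np.maximum(0.0, 1.0 - (np.arange(1, N + 1) / m) ** 2) for m in cutoffs}

noise_level = np.mean((y[1:] - y[:-1]) ** 2) / 2.0     # rough estimate of the mean squared scale
rho = 1.0 / np.log(n)                                  # small penalty coefficient, assumed

def cost(lam):
    # penalised empirical cost: unbiased estimate of the quadratic risk plus a small penalty
    return (np.sum(lam**2 * theta_hat**2 - 2 * lam * (theta_hat**2 - noise_level / n))
            + rho * noise_level * np.sum(lam**2) / n)

best_m = min(cutoffs, key=lambda m: cost(weights[m]))
f_hat = weights[best_m] @ (theta_hat[:, None] * Phi)   # weighted least-squares estimate on the design
print("selected cut-off:", best_m,
      " empirical L2 error:", np.sqrt(np.mean((f_hat - f_true(x)) ** 2)))
```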
in the papertwo problems are resolved : in the first one a upper bound for the risk is obtained by making use of the non - asymptotic oracle inequality from galtchouk , pergamenshchikov , 2007 , in the second one we prove that this upper bound coincides with the pinsker constant .let us remember that the adaptive procedure proposed in galtchouk , pergamenshchikov , 2007 , is based on weighted least - squares estimates , where the weights are proper modifications of the pinsker weights for the homogeneous case ( when ) relative to a certain smoothness of the function and this procedure chooses a best estimator for the quadratic risk among these estimators . to obtain the pinsker constant for the model one has to prove a sharp asymptotic lower bound for the quadratic risk in the case when the noise variance depends on the unknown regression function . in this case , as usually, we minorize the minimax risk by a bayesian one for a respective parametric family . then for the bayesian risk we make use of a lower bound ( see theorem 6.1 ) which is a modification of the van trees inequality ( see , gill , levit , 1995 ). the paper is organized as follows . in section[ sec : ad ] we construct an adaptive estimation procedure . in section [ sec : co ] we formulate principal the conditions .the main results are presented in section [ sec : ma ] .the upper bound for the quadratic risk is given in section [ sec : up ] .in section [ sec : lo ] we give all main steps of proving the lower bound .in subsection [ subsec : tr ] we find the lower bound for the bayesian risk which minorizes the minimax risk . in subsection[ subsec : fa ] we study a special parametric functions family used to define the bayesian risk . in subsection [ subsec : br ]we choose a prior distribution for bayesian risk to maximize the lower bound .section [ sec : np ] is devoted to explain how to use the given procedure in the case when the unknown regression function is non periodic . in section [ sec : cn ] we discuss the main results and their practical importance .the proofs are given in section [ sec : pr ] .the appendix contains some technical results .in this section we describe the adaptive procedure proposed in galtchouk , pergamenshchikov , 2006 . we make use of the standard trigonometric basis in ] denotes the integer part of . to evaluate the error of estimation in the model we will make use of the empiric norm in the hilbert space ] , we define the empiric inner product moreover , we will use this inner product for vectors in as well , i.e. if + and , then the prime denotes the transposition .notice that if is odd , then the functions are orthonormal with respect to this inner product , i.e. for any , where is kronecker s symbol , if and for .[ re.ad.1 ] note that in the case of even , the basis is orthogonal and it is orthonormal except the function for which the normalizing constant should be changed .the corresponding modifications of the formulas for even one can see in galtchouk , pergamenshchikov,2005 . to avoid these complications of formulas related to even , we suppose to be odd .thanks to this basis we pass to the discrete fourier transformation of model : where , , and we estimate the function by the weighted least squares estimator where the weight vector belongs to some finite set from ^n ] .we suppose that the parameters and are functions of , i.e. 
and , such that , & \lim_{{\mathchoice{n\to\infty}{n\to\infty}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,{\varepsilon}_{{\mathchoice{n}{n}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,=\,0 \quad\mbox{and}\quad \lim_{{\mathchoice{n\to\infty}{n\to\infty}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,n^{\nu}\,{\varepsilon}_{{\mathchoice{n}{n}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,=+\infty\ , , \end{array } \right.\ ] ] for any .for example , one can take for where is any nonnegative constant .for each we define the weight vector as here ] .the penalty term we define as where is any slowly increasing sequence , i.e. for any . finally , we set the goal of this paper is to study asymptotic ( as ) properties of this estimation procedure .[ re.ad.3 ] now we explain why does one choose the cost function in the form . developing the empiric quadratic risk for estimate , one obtains it s natural to choose the weight vector for which this function reaches the minimum .since the last term on the right - hand part is independent of , it can be droped and one has to minimize with respect to the function equals the difference of two first terms on the right - hand part .it s clear that the minimization problem cannt be solved directly because the fourier coefficients are unknown.to overcome this difficulty , we replace the product by its asymptotically unbiased estimator ( see , galtchouk , pergamenshchikov , 2007 , 2008 ) . moreover , to pay this substitution , we introduce into the cost function the penalty term with a small coefficient .the form of the penalty term is provided by the principal term of the quadratic risk for weighted least - squares estimator , see galtchouk , pergamenshchikov , 2007 , 2008.the coefficient means , that the penalty is small , because the estimator approximates in mean the quantity asymptotically , as .note that the principal difference between the procedure and the adaptive procedure proposed by golubev , nussbaum , 1993 , for a homogeneous gaussian regression , consists in presence of the penalty term in the cost function .[ re.ad.4 ] as it was noted at remark [ re.ad.2 ] , nussbaum , 1985 , has shown that the weight coefficients of type provide the asymptotic minimum of the mean - squared risk at the regression function estimation problem for the homogeneous gaussian model , when the smoothness of the function is known .in fact , to obtain an efficient estimator one needs to take a weighted least squares estimator with the weight vector , where the index depends on smoothness of function and on coefficients , ( see below ) , which are unknown in our case .for this reason , galtchouk , pergamenshchikov , 2007 , 2008 , have proposed to make use of the family of coefficients , which contains the weight vector providing the minimum of the mean - squared risk .moreover , they proposed the adaptive procedure for which a non - asymptotic oracle inequality ( see , theorem [ th.m.1 ] below ) was proved under some weak conditions on the coefficients .it is important to note that due the properties of the parametric family , the secondary term in the oracle inequality is slowly increasing ( slower than any degree of ) .first we impose some conditions on unknown function in the model .let be the set of -periodic times differentiable functions .we assume that belongs to the following set where denotes the norm in ] , i.e. 
\,:\ , \sum_{{\mathchoice{j=1}{j=1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}^\infty\,a_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\theta^2_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\le r\}\,,\ ] ] where and )^{2i}\,.\ ] ] here is the trigonometric basis defined in .now we describe the conditions on the scale coefficients . * _ for some unknown function \times { { \cal l}}_{{\mathchoice{1}{1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}[0,1 ] \to \bbr_+ ] , the operator \to \bbr ] , i.e. for any from some vicinity of in ] is a bounded linear operator and the residual term , for each ] the operator defined in the condition satisfies the following inequality for any function from ] .moreover , _ [ re.co.1 ] let us explain the conditions .in fact , this is the regularity conditions of the function generating the scale coefficients .condition means that the function should be uniformly integrable with respect to the first argument in the sens of convergence .moreover , this function should be separated from zero ( see inequality ) and bounded on the class ( see inequality ) .boundedness away from zero provides that the distribution of observations is nt degenerate in , and the boundedness means that the intensity of the noise vector should be finite , otherwise the estimation problem hasnt any sens .conditions and mean that the function is regular , at any fixed , with respect to in the sens , that it is differentiable in the frchet sens ( see e.g. , kolmogorov , fomin , 1989 ) and moreover the frchet derivative satisfies the growth condition given by the inequality which permits to consider the example .last the condition is the usual uniform continuity the condition of the function at the function .now we give some examples of functions satisfying the conditions - .we set with some coefficients , , . in this case frchet derivative is given by it is easy to see that the function satisfies the conditions .moreover , the conditions are satisfied by any function of type where the functions and satisfy the following the conditions : * is a \times\bbr\to [ c_{{\mathchoice{0}{0}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,,\,+\infty) ] also , i.e. if an estimator is defined only at the design points , then we extend it as step function onto the interval \scriptstyle[0,x_{1}]\scriptscriptstyle[0,x_{1}]\scriptstyle(x_{k-1},x_k]\scriptscriptstyle(x_{k-1},x_k] ] such that where , all coordinates are , except the i - th equals to . then for any square integrable estimator of and any , where , and , the operator is defined in the condition .* proof * is given in appendix [ subsec : a.2 ] .[ re.tr.1 ] note that the inequality is some modification of the van trees inequality ( see , gill , levit , 1995 ) adapted to the model . in this section we define and study some special parametric family of kernel function which will be used to prove the sharp lower bound .let us begin by kernel functions .we fix and we set where is the indicator of a set , the kernel is such that it is easy to see that the function possesses the properties : moreover , for any and we divide the interval ] give a natural parametric approximation to the function on each subinterval .let be the trigonometric basis in ] , therefore now we have to choose the sequence . 
note that if we put in we can rewrite the inequality as where it is clear that therefore to obtain a positive finite asymptotic lower bound in we have to take the parameter as with some positive coefficient .moreover , the conditions - imply that , for sufficiently large , moreover , taking into account that for sufficiently large we obtain the following condition on where to maximize the function on the right - hand side of the inequality we take defined in . therefore we obtain that where furthermore , taking into account that we get where this means that to obtain in the maximal lower bound one has to take in it is important to note that if one defines the prior distribution in the bayesian risk by formulas , , and , then the bayesian risk would depend on a parameter , i.e. . therefore , the inequality implies that , for any , where the function is defined in for . now to end the definition of the sequence of the random functions defined by and one has to define the sequence .let us remember that we make use of the sequence with the coefficients constructed in for given in and for the sequence given by and for some fixed arbitrary .we will choose the sequence to satisfy the conditions .one can take , for example , + 1 ] of \scriptstyle+\scriptscriptstyle+ ] function such that for and for all , for example , }{[a',b']}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}(z)\ , \d z\,,\ ] ] where is some kernel function introduced in , multiplying the equation by the function and simulating the i.i.d . one comes to the estimation problem of the periodic regression function , i.e. where , and is some sufficiently small parameter .it is easy to see that if the sequence satisfies the conditions , then the sequence satisfies these conditions as well with conclusion , it should be noted that this paper completes the investigation of the estimation problem of the nonparametric regression function for the heteroscedastic regression model in the case of quadratic risk .it is proved that the adaptive procedure satisfies the non asymptotic oracle inequality and it is asymptotically efficient for estimating a periodic regression function . moreover , in section [ sec : np ] we have explained how to apply the procedure to the case of non periodic function .as far as we know , the procedure is unique for estimating the regression function at the model .let us remember once more the main steps of this investigation .the procedure combines the both principal aspects of nonparametric estimation : non asymptotic and asymptotic .non - asymptotic aspect is based on the selection model procedure with penalization ( see e.g. , barron , birg and massart , 1999 , or fourdrinier , pergamenshchikov , 2007 ) .our selection model procedure differs from the commonly used one by a small coefficient in the penalty term going to zero that provides the sharp non - asymptotic oracle inequality . moreover , the commonly used selection model procedure is based on the least - squares estimators whereas our procedure uses weighted least - squares estimators with the weights minimizing the asymptotic quadratic risk that provides the asymptotic efficiency , as the final result . 
from practical point of view, the procedure gives an acceptable accuracy even for small samples as it is shown via simulations by galtchouk , pergamenshchikov , 2008 .to prove the theorem we will adapt to the heteroscedastic case the corresponding proof from nussbaum , 1985 .first , from we obtain that , for any , where setting now with the function defined in , the index defined in , ] and we rewrite as follows with note that we have decomposed the first term on the right - hand of into the sum this decomposition allows us to show that is negligible and further to approximate the first term by a similar term in which the coefficients will be replaced by the fourier coefficients of the function . taking into account the definition of in we can bound as therefore , by lemma [ sec : le.a.1 ] we obtain us consider now the next term .we have now by lemma [ sec : le.a.2 ] and the definition we obtain directly the same property for , i.e. setting and applying the well - known inequality to the first term on the right - hand side of the inequality we obtain that , for any and for any , \label{sec : up.1 - 3 } & + { \widetilde}{\delta}_{{\mathchoice{1,n}{1,n}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+ { \widetilde}{\delta}_{{\mathchoice{2,n}{2,n}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+(1 + 1/\delta)\ , { \widetilde}{\delta}_{{\mathchoice{3,n}{3,n}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,,\end{aligned}\ ] ] where taking into account that and that we can show through lemma [ sec : le.a.3 ] that therefore , the inequality yields and to prove it suffices to show that first , it should be noted that the definition and the inequalities - imply directly moreover , by the definition of in , for sufficiently large , for which we find therefore , by the definition of the coefficients in furthermore , in view of the definition we calculate directly now , the definition of in and the condition imply the inequality .hence theorem [ th.u.1 ] . in this sectionwe prove theorem [ th.m.3 ] .lemma [ sec : le.u.1 ] implies that to prove the lower bounds and , it suffices to show where for any estimator , we denote by its projection onto , i.e. + .since is a convex set , we get now we introduce the following set where are i.i.d . random variables from andthe sequence is given in the condition .therefore , we can write that here the kernel function family is given in in which + 1 \scriptstylen\scriptscriptstylen\scriptstyle\{\zeta^2\ge n\}\scriptscriptstyle\{\zeta^2\ge n\}\scriptstylel\scriptscriptstylel ] indeed , setting we deduce where + 1 ] , + 1-n{\widetilde}{x}_{{\mathchoice{m}{m}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}})/(nh ) \quad \mbox{and } \quad v^*=([n{\widetilde}{x}_{{\mathchoice{m}{m}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+nh]-n{\widetilde}{x}_{{\mathchoice{m}{m}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}})/(nh)\,.\ ] ] therefore , taking into account that the derivative of the function is bounded on the interval $ ] we obtain that taking into account the conditions on the sequence given in we obtain limiting equality which together with implies .now we study the behavior of . due to the inequality we estimate the frchet derivative as consider now the fisrt term on the right - hand side of this inequality .we have we recall that the sequence is defined in .therefore , property implies as to the second term on the right - hand side of , we get similarly , and , by therefore , and the condition implies . 
indeed, by a direct calculation it is easy to see that, for any and for any vector, where the operator is defined in. moreover, we recall that. therefore, taking into account the property, we obtain the claim.

brua, j. asymptotically efficient estimators for a nonparametric heteroscedastic model. _statistical methodology_, accepted for publication, available at _http://hal.archives-ouvertes.fr/hal/-00178536/fr/_

galtchouk, l., pergamenshchikov, s. (2005). efficient adaptive nonparametric estimation in heteroscedastic regression models. preprint of the strasbourg louis pasteur university, irma, 2005/020, available at _http://www.univ-rouen.fr/lmrs/persopage/pergamenchtchikov_

galtchouk, l., pergamenshchikov, s. (2007). adaptive nonparametric estimation in heteroscedastic regression models. sharp non-asymptotic oracle inequalities. preprint of the strasbourg louis pasteur university, irma, 2007/09, available at _http://hal.archives-ouvertes.fr/hal/-00179856/fr/_

galtchouk, l., pergamenshchikov, s. (2008). sharp non-asymptotic oracle inequalities for nonparametric heteroscedastic regression models. _journal of nonparametric statistics_, accepted for publication.
the paper deals with asymptotic properties of the adaptive procedure proposed in the authors' paper of 2007 for estimating an unknown nonparametric regression function. we prove that this procedure is asymptotically efficient for a quadratic risk, i.e. the asymptotic quadratic risk for this procedure coincides with the pinsker constant, which gives a sharp lower bound for the quadratic risk over all possible estimates.
because an utterance is best understood in the context in which it is delivered , its interpreters must be able to identify the relevant context and recognize when it is altered , supplanted or revived .the transient nature of speech makes this task difficult .however , the difficulty is alleviated by the abundance of lexical and prosodic cues available to a speaker for communicating the location and type of contextual change .the investigation of the interaction between these cues presupposes a theory of contextual change in discourse .the theory relating attention , intentions and discourse structure is particularly useful because it provides a computational account of the current context and the mechanisms of contextual change .this account frames the questions i investigate about the correlation between between lexical and prosodic cues .in particular , the theory motivates the selection of the _ cue phrase_ a word or phrase whose relevance is to structural or rhetorical relations , rather than topic and the _ unfilled pause _ ( silent pause ) as significant indicators of discourse structure .to explain the organization of a discourse into topics and subtopics , grosz and sidner postulate three interrelated components of discourse a linguistic structure , an intentional structure and an attentional state . in the _ linguistic structure _ , the linear sequence of utterances becomes hierarchical utterances aggregate into discourse segments , and the discourse segments are organized hierarchically according to the relations among the purposes or _ discourse intentions _ that each satisfies .the relations among discourse intentions are captured in the _intentional structure_. it is this organization that is mirrored by the linguistic structure of utterances .however , while the linguistic structure organizes the verbatim content of discourse segments , the intentional structure contains only the intentions that underlie each segment .the supposition of an intentional structure explains how discourse coherence is preserved in the absence of a complete history of the discourse .rather , discourse participants summarize the verbatim contents of a discourse segment by the discourse intention it satisfies .the contents of a discourse segment are collapsed into an intention , and intentions themselves may be collapsed into intentions of larger scope .the discourse intention of greatest scope is the discourse purpose ( dp ) , the reason for initiating a discourse . within this ,discourse segments are introduced to fulfill a particular discourse segment purpose ( dsp ) and thereby contribute to the satisfaction of the overall dp .a segment terminates when its dsp is satisfied .similarly , a discourse terminates when the dp that initiated it is satisfied .the _ attentional state _ is the third component of the tripartite theory .it models the foci of attention that exist during the construction of intentional structures .the global focus of attention encompasses those entities relevant to the discourse segment currently under construction , while the local focus ( also called the _center_ ) is the currently most salient entity in the discourse segment .the local focus may change from utterance to utterance , while the global focus ( i.e. , current context ) changes only from segment to segment .the linguistic , intentional and attentional components are interrelated . 
in particular , the _ attentional state _ describes the processing of the _ discourse segment _ which has been introduced to satisfy the current _ discourse intention_. the functional interrelation is expressed temporally in spoken discourse the linguistic , intentional and attentional components devoted to one dsp co - occur .therefore , a change in one component reflects or induces changes in the rest .for example , changes ascribed to the attentional state indicate changes in the intentional structure , and moreover , are recognized via qualitative changes in the linguistic structure .it is because of their interdependence and synchrony that i can postulate the hypothesis that co - occurring linguistic and attentional phenomena in spoken discourse cue phrases , pauses and discourse structure and processing are linked .the part of the theory most directly relevant to my investigation are those constructs that model the attentional state .these are the _ focus space _ and the _ focus space stack_. the _ focus space _ is the computational representation of processing in the current context , that is , for the discourse segment currently under construction . within a focus space dwell representations of the entities evoked during the construction of the segment propositions , relations , objects in the world and the dsp of the current discourse segment .a focus space lives on a pushdown stack called the _ focus space stack_. the progression of focus in a discourse is modeled via the basic stack operations pushes and pops applied to the stack elements .for example , _ closure _ of a discourse segment is modeled by _ popping _ its associated focus space from the stack ; _ introduction _ of a segment is modeled by _ pushing _ its associated focus space onto the stack ; _ retention _ of the current discourse segment is modeled by leaving its focus space on the stack in order to add or modify its elements .the contents of a focus space whose dsp is satisfied are accrued in the longer lasting intentional structure .thus , at the end of a discourse the focus space stack is empty while the intentional structure is fully constructed .the focus space model abstracts the processing that all participants must do in order to accurately track and affect the flow of discourse .thus , it treats the emerging discourse structure and the changing attentional foci as publicly accessible properties of the discourse .although the participants themselves may act as if they are manipulating public structures , the informational and attentional properties of a discourse are , in fact , modeled only privately . in explaining certain lexical and prosodic features of discourse , it is often useful to return to these private models . for a speaker s utteranceis conditioned both by the state of the her own model and by her beliefs about those of her interlocutors . the time dependent nature of speech emphasizes the importance of synchronizing private models .lexical and prosodic focusing cues hasten synchronization .in particular , they guide the listeners in updating their models ( among them , the focus space stack ) to reflect the attentional changes already in effect for the speaker . 
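The stack operations just described can be made concrete with a small schematic implementation. In the sketch below the class names, the string representation of discourse segment purposes and the example dialogue are all invented for illustration; only the push/pop behaviour of the four focusing operations follows the description in the text.

```python
# A schematic sketch of a focus-space stack.  Class and method names are
# illustrative; initiate/retain/return/replace follow the informal description above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FocusSpace:
    dsp: str                                             # the discourse segment purpose
    entities: List[str] = field(default_factory=list)    # salient entities, propositions, ...

class FocusStack:
    def __init__(self):
        self.spaces: List[FocusSpace] = []
        self.intentional_structure: List[FocusSpace] = []  # accrues satisfied segments

    def initiate(self, dsp: str):
        """Open a new (embedded) segment: push one focus space."""
        self.spaces.append(FocusSpace(dsp))

    def retain(self, *entities: str):
        """Stay in the current segment: modify the top focus space in place."""
        self.spaces[-1].entities.extend(entities)

    def return_to(self, dsp: str):
        """Close one or more segments until a previously opened segment resurfaces."""
        while self.spaces and self.spaces[-1].dsp != dsp:
            self.intentional_structure.append(self.spaces.pop())

    def replace(self, dsp: str):
        """Close (almost) everything and open a new top-level segment."""
        while self.spaces:
            self.intentional_structure.append(self.spaces.pop())
        self.spaces.append(FocusSpace(dsp))

# Example: give directions, digress about a landmark, return, then change topic.
stack = FocusStack()
stack.initiate("give directions to the station")
stack.retain("the main road", "the second traffic light")
stack.initiate("describe the landmark at the corner")
stack.retain("the red brick bakery")
stack.return_to("give directions to the station")
stack.replace("discuss the train schedule")
print([fs.dsp for fs in stack.spaces])            # -> ['discuss the train schedule']
print(len(stack.intentional_structure), "closed segments")
```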
for my analysis, the most relevant private model belongs to the _ current speaker _ , whose discourse intentions guide , for the moment , the flow of topic and attention in a discourse and whose spoken contributions provide the richest evidence of attentional state .if cue phrases and unfilled pause durations can be shown to correlate with attentional state ( and by definition , the intentional and linguistic structure ) , the attentional state they reveal belongs to the current speaker , and the attentional changes they denote are the ones the speaker makes in her own private model .the theory of the tripartite nature of discourse frames my hypotheses about the correlation of cue phrases , pause duration and discourse structure .the main hypotheses are these : that particular unfilled pause durations tend to correlate with particular cue phrases and that this correlation is occasioned by changes to the attentional state of the discourse participants , or , equivalently , by the emerging intentional structure of a discourse .changes to the attentional state occur at segment boundaries .cue phrases by definition evince these changes they are utterance and segment - initial words or phrases and they inform on structural or rhetorical relations rather than on topic . thus , for cue phrases, the question is not whether they correlate with attentional state , but how . to answer this question, we ask , for each cue phrase ( e.g. , _ now , to begin with , so _ ) , whether it signals particular and distinct changes to the attentional state .the correlation of unfilled pauses with attentional state is less certain because pauses appear at all levels of discourse structure .they are found within and between the smallest grammatical phrase , the sentence , the utterance , the speaking turn and the discourse segment .their correlation is mainly with the cognitive difficulty of producing a phrase or utterance . to link this correlation with the task of producing discourse structure, we must posit a variety of attentional operations with corresponding variability in cognitive difficulty . specifically, we construct the chain of assumptions that : to link unfilled pause duration to discourse structure we must first establish that operations on the attentional state can be distinguished sufficiently to explain the different demands that each operation makes on discourse processing and which , therefore , might be reflected in the duration of segment - initial unfilled pauses .the linking of pause duration to the processing of discourse segments motivates some auxiliary hypotheses that refine notions about the kinds of mental operations sanctioned by the focus space model and about the internal structure of a discourse segment .these auxiliary hypotheses are developed in this section . in the theory of discourse structure ,changes to the attentional state are modeled as operations on the focus space stack .these operations appear reducible to four distinct sequences of stack operations that correspond to four distinct effects on the attentional state , as follows : the arrangement is asymmetrical in that it is possible to pop more than one focus space per operation , but to push only one , as shown in table [ focusspaceops ] ..**the effect of the four focusing operation on the focus spaces ( fs ) in the pushdown focus space stack . 
* * [ cols="<,^,^ , < " , ] longer pauses were positively correlated with the number of segments opened or closed during one focusing operation ( r = .357 , _p_.001 ) .this finding might partially explain the long pauses that appear before a _ replace _, since a _ replace _ is the focusing operation most likely to affect the most focus spaces , by definition , it requires [ almost ] everything to be popped from the focusing before the initiation ( push ) of a new focus space .a correlation of pause duration and the depth of embedding in the linguistic structure ( or equivalently , the number of focus spaces still on the stack ) showed no significant effect on pause duration ( f(1,98 ) = 0.1861 , _p_.7 ) .the directions dialogue contained too few fragment - initial tokens to calculate meaningful statistics about their relation to focusing operations .therefore , the best course was to select from the raw data ( see table [ mean - values ] ) the patterns that were likely candidates for further testing .for example , _ so _ was never associated with an _ initiate _ operation and also was preceded by the smallest mean pause durations ( 0.13 seconds ) .a filled pause , with a similar mean pause duration ( 0.14 seconds ) was primarily associated with _ initiates _ and _ retains _ but never with _ replace_. and , while _ and _ shared the same focusing operations as a filled pause , its mean value for pause duration was more than twice as large ( 0.33 seconds ) .lccccl initial & & & & & + token & initiate&retain&return&replace&all + & 0.43 3 & 0.25 2 & 0.25 2 & & 0.33 7 + * but * & & 0.70 1&0.00 1&0.10 1&0.27 3 + * now * & & & & 0.55 2 & 0.55 2 + * oh * & & 0.00 2 & & & 0.00 2 + * so * & & 0.15 2 & 0.15 2 & 0.05 1 & 0.13 5 + * well * & & & & 0.20 2 & 0.20 2 + * yknow * & 0.40 2 & & & & 0.40 2 + * ordinal * & 0.40 1 & & & & 0.40 1 + * acknow * & 0.10 1 & 0.20 7 & & 0.90 1 & 0.27 9 + * ledgment * & & & & & + * filled * & 0.23 6 & 0.05 4 & 0.00 1 & & 0.14 11 + * pause * & & & & & + * unmarked * & & & & & + _ all _ & 0.32 23&0.21 55&0.26 11&0.65 11&0.29 100 + to compensate for the small sample sizes of the cue phrase data , all explicit lexical markers of structure ( cue phrase , acknowledgment , filled pause ) were collapsed into the category , _ marked_. the data in this category were compared to the data for lexically _ unmarked _ fragments . 
because the longest pauses preceded unmarked _returns _ and _ replacements _ , i predicted that unmarked operations would in general be preceded by longer pauses than marked .the results are in the direction predicted and are summarized in table [ marked - unmarked ] .the average duration for pauses preceding a marked focusing operation was 0.24 seconds ( standard deviation = 0.24 ) , while the average for pauses preceding unmarked operations was 0.33 seconds ( standard deviation = 0.36 ) .statistically this approaches significance ( t(96 ) = 1.58 , _p _ = .12 ) .thus far , analysis of the data identifies significantly longer pauses for the _ replace _ operation than for any other and shows that pause duration is positively correlated with the number of segments affected by one focusing operation .these findings begin to distinguish the focusing operations quantitatively , by number of focus spaces affected , and qualitatively , by whether they occur within an established context ( _ initiate , retain , return _ ) or at its beginning ( _ replace _ ) .although , the raw data in table [ rawdata ] appears to show patterns for specific segment - initial tokens , the number of tokens is insufficient for establishing a correlation between cue phrase and focusing operations , let alone a three - way relationship among cue phrase , pause duration and focusing tasks .the categorical classification present particular problems . for, uncertainties arose even with the application of a classification metric .perhaps these uncertainties should have been incorporated into the coding scheme or perhaps the categorical classifications should have been abandoned in favor of additional and quantifiable acoustical and lexical features .only partial conclusions can be drawn from the data . however , the results are useful toward refining the original hypotheses and determining the content of future research .the distinction between the pause data for marked and unmarked fragments is a case in point . for each focusing operation, the difference between mean pause durations at best only approaches significance ( see table [ 24 [ marked - unmarked ] ) .however , because the values for all focusing operations are always greater for unmarked utterances , a hypothesis is suggested : that , given a speech fragment and the focusing operation it evinces , the preceding unfilled pause will be longer if the fragment is lexically unmarked .lccccl speech & & & & & + fragment&initiate&retain&return&replace&all + & 0.30 13 & 0.17 18&0.13 6 & 0.36 7&0.24 44 + _ unmarked _ & & & & & + _ all _ & 0.32 23&0.21 55&0.26 11&0.65 11&0.29 100 + if this hypotheses is correct , two accounts can be constructed that would jointly predict the appearance of cue phrases . one account emphasizes the processes involved in choosing and communicating the state of global focus . the other emphasizes the mutually recognized ( by speaker and hearers ) attentional and intentional state of the discourse .together they identify the factors that would impel a speaker to precede an utterance with a cue phrase , an unfilled pause or both .if an unfilled pause preceding a lexically unmarked fragment is significantly longer , we might assume that a particular focusing operation is executed in a characteristic amount of time ( given adequate consideration of other contextual features ) .within this time , we might observe silence , a cue phrase or both . 
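The marked/unmarked comparison reported above can be re-derived from the quoted summary statistics alone. In the sketch below the group sizes (44 marked and 54 unmarked fragments) are assumptions chosen so that the degrees of freedom match the reported t(96); the means and standard deviations are taken from the text.

```python
# Two-sample t-test from summary statistics for the marked vs. unmarked pauses.
# Group sizes are assumed (44 marked, 54 unmarked); means and sds are from the text.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=0.33, std1=0.36, nobs1=54,   # unmarked fragments (assumed count)
    mean2=0.24, std2=0.24, nobs2=44,   # marked fragments (assumed count)
    equal_var=True,                    # classical pooled-variance t-test
)
print(f"t({54 + 44 - 2}) = {t_stat:.2f}, p = {p_value:.2f}")
# With these assumed counts the statistic comes out in the same range as the
# reported t(96) = 1.58, p ~ .12; small differences are expected because the
# counts and the rounded summary statistics are only approximate.
```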
because both pause and cue phrase can appear at the same location in a phrase, we ask if their functions are equivalent , or instead , complementary .my hypothesis selects the second option , that they are complementary in the cognitive processing each reflects and in the discourse functions each fulfills . for, if the duration of an unfilled pause is evidence of the difficulty of a cognitive task , a cue phrase is evidence of its partial resolution . as a communicative device, cue phrases are more cooperative than silence . in silence, a listener can only guess at the current contents of the speaker s models .with the uttering of a cue phrase , the listener is at least notified that the speaker is constructing a response .the minimal cue in this regard is the filled pause . _bone fide _ cue phrases , however , herald not only an upcoming utterance , but a particular direction of focus and even a propositional relation between prior and upcoming speech .cue phrases serve not only the listener but also the speaker . because they commit to topic structure , but not to specific referents and discourse entities, they buy additional time for the speaker in which to complete a focusing operation and formulate the remainder of the utterance .the account of the influence of the currently observable state of the discourse rests on two patterns in the data : ( 1 ) the difference in pause durations for marked and unmarked _ initiate_s and _ retain_s is minimal ; and ( 2 ) the difference between marked and unmarked _ return_s and _ replace_s is greater .if these patterns can be shown to be significant , they suggest that remaining in the current context is less costly than returning to a former context , or establishing a new one .the corollary is the claim that an expected focusing operation need not be marked , while an unexpected operation is most felicitous when marked . in other words ,remaining in the current context or entering a subordinate context is expected behavior , while exiting the current context is not .exiting the current context ( focus space ) carries a greater risk of disrupting a mutual view of discourse structures .the extent of risk is assessed for the listener by the difficulty of tracking the change and for the speaker , by the difficulty of executing it .the risk originates in the nondeterministic definitions of _ return _ and _ replace _ operations both contain in their structure one or more pops .in addition , these operations can be confused because both begin identically , with a series of pops . because closing a focus space is a marked behavior , the clues to changing focus are most cooperative if they guide the listener toward re - invoking a prior context ( i.e. , a _ return _ ) or establishing a new one ( replace ) .thus , certain clues are more likely to mark a return to a former context ( e.g. , _ so , anyway , as i was saying _ ) , while others ( _ now _ , the ordinal phrases ) mark a _replace_. the goal of future investigations is to establish the bases for predicting the appearance of particular acoustical and lexical features .the speculations presented in this section provide a theoretical framework .if borne out , they can be re - fashioned as characterizations of the circumstances in which cue phrases and unfilled pauses are most likely to be used .the relationships among cue phrases , unfilled pauses and the structuring of discourse are investigated within the paradigm of the tripartite model of discourse . 
within this model, the postulation of four focusing operations provides an operational framework to which can be tied the discourse functions of cue phrases and the cognitive activity associated with the production of an utterance .especially , the difficulty of utterance production might be explained by the complexity of the co - occurring focusing operation .such a correspondence is , in fact , suggested by the positive correlation of pause duration and the number of focus spaces opened or closed in one operation on the focus space stack .however , because the classification of focusing operations is uncertain , more data and better tests are required to characterize the relationships among the lexical and acoustical correlates of topic and focus .in addition , the aptness of the tripartite model itself is not assured .the idealizations it contains may undergo modification in light of new results , or be augmented by other accounts of discourse processing . on the other hand , the analysis of more quantitative data may confirm the implications of the model , and its appropriateness as the foundation for investigating the lexical and prosodic features of discourse .many thanks to susan brennan who selected and ran the statistical tests on the data and to s. lines for helpful comments . various stages of this work were supervised in turn by chris schmandt and ken haase , both of the m.i.t . media laboratory .pierrehumbert , j. and hirschberg , j. , the meaning of intonation contours in the interpretation of discourse . in _ intentions in communication_. edited by cohen , p. r. , morgan .j. and pollack , m. e. , 1990 , pp .271 - 311 .sidner , c. l. , focusing in the comprehension of definite anaphora . in _ readings in natural language processing_. ed . by grosz , b. j. , sparck - jones , k. and webber , b. l , morgan kaufman publishers , inc . , 1986 ,363 - 394 .sorensen , j. m. and cooper , w. e. , syntactic coding of fundamental frequency in speech production . in _perception and production of fluent speech_. ed . by cole , r. a. , published by lawrence erlbaum , 1980 , pp.399 - 440 .walker , m. a. and whittaker , s. , mixed initiative in dialogue : an investigation into discourse segmentation . in_ proceedings of the 28th annual meeting of the association for computational linguistics _ , 1990 , pp.70 - 79 .
expectations about the correlation of cue phrases, the duration of unfilled pauses and the structuring of spoken discourse are framed in light of grosz and sidner's theory of discourse and are tested on a directions-giving dialogue. the results suggest that cue phrases and discourse structuring tasks may align, and show a correlation between pause length and some of the modifications that speakers can make to discourse structure.
in recent years the area of numerical analysis of stochastic differential equations ( sdes ) has expanded at a fast pace .this interest has been driven by different application areas , such as computational finance , neuroscience or electrical circuit engineering .a large part of research in stochastic numerics has been aimed towards the development and strong and weak convergence analysis of several classes of numerical methods . a further important issue for the investigation of numerical methods consists of examining methods for their ability to preserve qualitative features of the continuous system they are developed to approximate .a linear stability analysis is usually the first step of an analysis in this direction . for thisthe method of interest is applied to a scalar linear test equation and stability conditions on method parameters and step - size are derived and compared with the stability condition for the test equation . in deterministic numerical analysis the underlying idea for a linear stability analysis is based on the following line of reasoning : one linearises and centres a nonlinear ordinary differential equation around an equilibrium , the resulting linear system ( the jacobian of evaluated at equilibrium ) is then diagonalised and the system thus decoupled , justifying the use of the scalar test equation , for the analysis .we refer to , for example , ( * ? ? ?* chapter iv.2 ) for more detail on this procedure .in the stochastic case the same fundamental problem exists : we wish to preserve the qualitative behaviour of solutions of nonlinear stochastic differential equations following discretisation by a numerical method .once again the starting point is a linear stability analysis .we can linearise and centre a nonlinear sde around an equilibrium solution ( see for the corresponding theory ) , and this procedure yields a system of sdes with an -dimensional driving wiener process of the form research on stability analysis for sdes has focused on the scalar linear test equation , where its solution is called geometric brownian motion .this corresponds to considering the linearised sde system when it can be completely decoupled and with one driving wiener process .relevant references are given by , e.g. , .some first explorations of a linear stability analysis for systems of sdes have been made in , and in suitable linear test systems have been derived .the most common and well - known methods to treat sdes numerically are the euler - maruyama approximation and the milstein scheme developed in the last century .these methods and their drift - implicit counterparts have been studied for their stability behaviour , see for example , however always only for a one - dimensional noise term in the test equation .our aim in this article is to investigate the influence of multi - dimensional noise on the mean - square stability behaviour of these numerical methods and to compare their stability behaviour . assuming that the matrices and can be diagonalised and the system above decoupled , we propose a scalar linear equation with an -dimensional wiener process , that is , as a suitable test equation .we consider the -maruyama and the -milstein method and study the asymptotic mean - square stability properties of these methods .we find that the stability conditions for the -milstein method are not only stronger than those for the -maruyama method , but they also become more restrictive when increasing the number of noise terms in the sde . 
in particularwe find it impossible to conclude -stability for the -milstein method , in the sense that we can not define a value or bound for the parameter , such that the method applied to the test equation would be -stable for all _ for all and all parameters , . this indicates that analysing the stability behaviour of numerical methods using test equations with only a one - dimensional wiener process may not provide sufficient insight into the properties of the methods for practical high - dimensional application problems .we then present a modification of the -milstein method , where we introduce partial implicitness involving a further parameter in the hope to have a better control on the stability behaviour of the method .the stability analysis indicates that this is the case , although the stability condition still depends on the number of noise terms .numerical experiments illustrate the influence that the choice of method and method parameters have for practical simulation tasks . in section [ s.cc ]we introduce the linear test equation , the -maruyama method and the -milstein scheme , we give a precise definition of asymptotic mean - square stability and state stability conditions for the test equation in the continuous case .section [ s.dc ] is devoted to the analysis of the stability properties of the methods in terms of the arising stochastic difference equations and we compare the stability regions of both types of methods in section [ s.comp ] . in section [ s.sigma ]we introduce a new class of milstein type methods and analyse their stability behaviour .we illustrate the theoretical findings by some numerical experiments in section [ s.num ] .in this section we introduce the stochastic differential and difference equations , as well as the notions of stability , that we consider in this article .we will be concerned with asymptotic _ mean - square stability _ of the zero solution of a test equation with respect to perturbations in the initial data . as a test equationwe employ the following scalar linear stochastic differential equation with multiplicative noise driven by an -dimensional standard wiener process given on the probability space with a filtration . by we denote expectation with respect to .we assume that the coefficients and , are complex - valued and without loss of generality suppose that the initial value is non - random ( ( * ? ? ?* chapter 4 ) ) .the solution of is a complex - valued stochastic process , where is a vector - valued wiener process in .alternatively this complex - valued sde can be written as 2-dimensional vector sde in the real and imaginary parts .we denote the real and imaginary part of a complex number by and , respectively , denotes the complex conjugate and stands for the absolute value of some . for any non - zero initial value equationhas a non - trivial path - wise unique strong solution which we denote by when we wish to emphasise its dependence on the initial data . for equation obviously admits the _ zero solution _ , which is a steady state solution of the equation . 
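As a numerical illustration of the stability notion introduced next, the following sketch simulates the exact solution of the scalar linear test equation with several multiplicative noise terms and compares a Monte Carlo estimate of the second moment with the closed-form expression. Real coefficients are assumed for simplicity, and the parameter values are illustrative.

```python
# Monte Carlo illustration of mean-square stability for the scalar linear test
# equation with m multiplicative noise terms:  dX = lam*X dt + sum_r mu_r*X dW_r.
# Real coefficients are assumed; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
lam = -2.0
mu = np.array([0.8, 0.6, 0.5])           # three noise intensities (assumed)
m = len(mu)
T, M = 1.0, 200_000                       # final time and number of sample paths

# Exact solution for commutative scalar noise:
#   X(T) = X0 * exp((lam - 0.5*sum(mu^2)) T + sum_r mu_r W_r(T))
W_T = rng.standard_normal((M, m)) * np.sqrt(T)
X_T = np.exp((lam - 0.5 * np.sum(mu**2)) * T + W_T @ mu)    # X0 = 1

ms_estimate = np.mean(X_T**2)
ms_exact = np.exp((2 * lam + np.sum(mu**2)) * T)            # E X(T)^2 for real parameters
print(f"Monte Carlo E|X(T)|^2 = {ms_estimate:.4f}, exact = {ms_exact:.4f}")
print("asymptotically mean-square stable:", 2 * lam + np.sum(mu**2) < 0)
```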
) [ d.stabil ] the zero solution of equation , , is 1 ._ mean - square stable _ ,if for each , there exists a such that whenever and ; 2 ._ asymptotically mean - square stable _ , if it is mean - square stable and if there exists a such that whenever the zero solution is called unstable if it is not stable in the mean - square sense .definition [ d.stabil ] is slightly more general than necessary in the present context , as for the simple linear equation given by we can take arbitrarily large and thus it does not play a significant role .[ l.contprob ] the zero solution of equation is asymptotically mean - square stable if and only if we now discuss the stochastic difference equations that arise by applying the -maruyama method and the -milstein scheme to the scalar test equation .we consider numerical methods for computing approximations of the solution of the test equation at discrete time points ( ) with constant step - size .the -maruyama method applied to the test equation reads where we have replaced the wiener increments by the scaled random variables . hereeach is one of independent sequences of mutually independent standard gaussian random variables , i.e. , each is -distributed .the -milstein method applied to the test equation is given by here we have additionally replaced the multiple wiener integrals as follows : ( i ) if the multiple wiener integral can be replaced by for , and ( ii ) if we use the identity to obtain for the method reduces to the _ ( forward ) euler - maruyama scheme _ , which is explicit in the drift as well as in the diffusion part .for the methods and are drift - implicit .for the scheme is known as _ stochastic trapezoidal rule _ and for we obtain the _ backward euler - maruyama method_. the two last terms in the -milstein method represent a higher order approximation of the diffusion part in equation in the sense of mean - square convergence .we call a method _ mean - square convergent with order _ ( ) if the global error , , satisfies with a positive error constant , which is independent of the step - size .it is well - known that in the case of multiplicative noise the -milstein method is mean - square convergent of order , whereas the -maruyama method is mean - square convergent of order .obviously the stochastic difference equations and admit the zero solution for the initial value , which is a steady state solution as well . for any non - zero initial value , the equations have a unique solution provided .we write again when we want to emphasise that the solution of the difference equations depends on the initial data .[ d.stabildiff ] the zero solution of the difference equations and is 1 . _mean - square stable _ , if for each , there exists a such that whenever and ; 2 . _asymptotically mean - square stable _, if it is mean - square stable and if there exists a such that whenever stability conditions will now involve the coefficients , ) of the test equation , as well as the method parameter and the step - size . 
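The two recurrences can be written compactly as one-step updates and simulated directly. The sketch below assumes real coefficients and illustrative parameter values, and replaces the double Wiener integrals by the increment identities quoted above, which for this scalar commutative noise amounts to the correction 0.5*((sum_r mu_r dW_r)^2 - h sum_r mu_r^2).

```python
# One-step recurrences of the theta-Maruyama and theta-Milstein methods applied
# to the scalar test equation with m noise terms (real coefficients assumed;
# all parameter values are illustrative).
import numpy as np

rng = np.random.default_rng(2)
lam, mu = -2.0, np.array([0.8, 0.6, 0.5])
theta, h, N, M = 0.5, 0.1, 10, 100_000        # method parameter, step size, steps, paths

def simulate(milstein: bool) -> np.ndarray:
    X = np.ones(M)
    s2 = np.sum(mu**2)
    for _ in range(N):
        dW = rng.standard_normal((M, len(mu))) * np.sqrt(h)
        S = dW @ mu                            # sum_r mu_r * Delta W_r
        incr = (1 - theta) * h * lam + S
        if milstein:
            # Milstein correction for commutative scalar noise
            incr += 0.5 * (S**2 - h * s2)
        # the implicit drift term is moved to the left-hand side and solved for X_{n+1}
        X = X * (1 + incr) / (1 - theta * h * lam)
    return X

for name, flag in [("theta-Maruyama", False), ("theta-Milstein", True)]:
    X_end = simulate(flag)
    print(f"{name}: E|X_N|^2 ~ {np.mean(X_end**2):.4e}")
# For comparison, the test equation itself has E|X(t)|^2 = exp((2*lam + sum mu^2) t).
print("exact E|X(T)|^2 =", np.exp((2 * lam + np.sum(mu**2)) * N * h))
```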
it will be useful to describe the stability region of a stochastic differential or difference equation .we follow the presentation in and consider sets of parameters , , for which the zero solutions of the continuous and the discrete equations are asymptotically stable , that is further , we consider the extension of the deterministic notion of a - stability to the mean - square analysis setting and say that a numerical method is _ a - stable _ in mean - square , if whenever the zero solution of is asymptotically mean - square stable , then the same is true for the zero solution of the method for any step - size . using the above definitions of the stability regions , we call the -maruyama or -milstein method a - stable in mean - square if for all we have or the stochastic difference equations and we now derive stability conditions in dependence on the method parameter and the applied step - size , and compare these conditions with those for the continuous problem given in lemma [ l.contprob ] .we start by rearranging the stochastic difference equation into the following one - step recurrence equation .then by squaring and taking expectations we obtain a recurrence equation for the second moments .as mentioned before , we have to assume that to guarantee the existence of a unique solution of the recurrence equations . rearranging equation yields the recurrence where then the recurrence equation for the second moment , using , , and the complex conjugate equation of , reads from we can immediately read off necessary and sufficient stability conditions in terms of the above parameters . [ lemmarstababc ] the zero solution of the recurrence equation , representing a one - step maruyama - type method applied to the test equation , is asymptotically mean - square stable if and only if now rewriting the stability conditions in terms of the parameters and in the test equation , the method parameter and the step - size , using , we obtain the following result .the zero solution of the stochastic difference equation given by the -maruyama method applied to the scalar linear test equation is asymptotically mean - square stable if and only if we note that the first two terms in the left - hand sides of are equal to the left - hand side of , that is they correspond to the stability condition for the continuous problem . now comparing the stability condition for the continuous problem to that of the discrete problem, we immediately obtain from an extension of the result ( * ? ? ?4.1 ) to the case of driven by a multi - dimensional wiener process .[ c.maruyama ] for all it holds that in particular , for the -maruyama method is a - stable in mean - square . for and ,the stability condition for the -maruyama method is satisfied if and only if we now turn to the -milstein method and first follow the same steps as in the previous section to deduce a recurrence equation for the second moments of the solution of and read off the corresponding stability conditions . rearranging the difference equation , we obtain and is given in . note that rewriting the parameter in terms of provides a convenient way to compare the stability conditions for the -maruyama and -milstein method . with analogous calculations as in the previous section and by additionally using and , we find the recurrence for the second moments of the -milstein method as again we can read off from necessary and sufficient stability conditions in terms of the parameters , and . 
[ lemmilstababc ] the zero solution of the recurrence equation given by the -milstein method applied to the test equation , is asymptotically mean - square stable if and only if where .when applied to the linear scalar stochastic differential equation , other variants of one - step maruyama - type methods and one - step milstein - type methods can quite often be rearranged into the recurrence equations and , respectively , with appropriate definitions of the parameters , and .then lemmata [ lemmarstababc ] and [ lemmilstababc ] can be applied to obtain stability conditions interpreted with these definitions of , and .the next corollary follows by rewriting the stability condition in terms of the original parameters using .[ cormilstab ] the zero solution of the stochastic difference equation given by the -milstein method applied to the scalar linear test equation is asymptotically mean - square stable if and only if again the first two terms in the left - hand side of are equal to the left - hand side of .however , when comparing the above condition with the stability condition for the -maruyama method , we see that the left hand side of contains two additional terms , which are always non - negative and depend on the noise intensities ( ) but are independent of the parameter .thus the precise stability region of the -milstein method depends on the noise intensities ( ) .suppose our aim is to use the method parameter to determine the optimal stability region of the -milstein method for a given set of parameters in , that is we aim to find a value of such that the sum of the third to the fifth term in vanish for any step - size .then , assuming that holds , we obtain for every set of parameters a different value for that optimal .we define where for this optimal we immediately obtain from the stability condition for the -milstein method the following corollary . [ c.milstein ] for all it holds that assuming that , for , the stability condition for the -milstein method is satisfied if and only if in particular , condition implies that for and , the stability condition for the -milstein method is satisfied for any step - size . + in the case that , and and , then the zero solution of the -milstein scheme is also unstable only if condition is imposed on the step - size .intrinsic in the concept of -stability is the idea that the property of -stability of a method holds for the whole class of differential equations considered as test equations . in the setting of this articlethis implies that we would need to find a , a value of or a bound on , which is independent of , such that for . in the case of the -maruyama method the corresponding value of is , see corollary [ c.maruyama ] . however ,considering the simple case of multi - dimensional noise terms having equal noise intensities , we find that , at best , we can find an upper bound independent of the given parameters , but depending on the number of noise sources . to see this ,let .then , using the squared stability inequality in the form and in yields thus for example , for one , two and three noise sources ( ) , we can find a as and , respectively .in the one - dimensional noise case this corresponds to the result in ( * ? ? ?but in general there exists no upper bound for such that the -milstein method for any is -stable for the whole class of equations with arbitrary many noise terms . 
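The same Monte Carlo device extends to the θ-Milstein map and makes the extra noise-dependent contributions visible. In the sketch below it is assumed that, for this scalar test equation, the replacement of the double Wiener integrals described above collapses the Milstein correction into ½[(Σ_r μ_r ΔW_r)² − h Σ_r μ_r²]; the names are again illustrative.

```python
import numpy as np

def ms_amplification_milstein(lam, mu, h, theta, n_samples=200_000, seed=2):
    """Monte Carlo estimate of E[A^2] for the theta-Milstein one-step map
    applied to the scalar test equation (real coefficients).  The Milstein
    correction 0.5*((sum_r mu_r dW_r)^2 - h*sum_r mu_r^2) is the form the
    double Wiener integrals take here under the replacements described in
    the text (sketch / assumption)."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    dw = np.sqrt(h) * rng.standard_normal((n_samples, mu.size))
    diffusion = dw @ mu
    milstein = 0.5 * (diffusion ** 2 - h * np.sum(mu ** 2))
    a = (1.0 + (1.0 - theta) * h * lam + diffusion + milstein) \
        / (1.0 - theta * h * lam)
    return np.mean(a ** 2)

# Same parameters as above: the correction only adds non-negative terms to
# E[A^2], so the Milstein estimates always lie above the Maruyama ones.
for theta in (0.0, 0.5, 1.0):
    print(theta, ms_amplification_milstein(-3.0, [1.0, 1.0], h=0.5, theta=theta))
```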
in the general case of the sde it is possible , essentially using the cauchy - schwarz inequality several times , to derive an upper bound for which also only depends on , but as it involves it becomes pointless for large .thus , it appears that the stability condition for the -milstein method becomes very restrictive for an increasing number of noise terms .in this section we aim to illustrate the results of the previous sections by visually comparing the stability regions and .as there are too many parameters in the system to do so in a two - dimensional plot , we only consider the case of the test sde with multi - dimensional noise where all the terms have the same noise intensity , i.e. , , and real - valued coefficients , i.e. , we have .we essentially follow regarding the scaling of the parameters in the plots .thus we set the stability conditions , and become note that for and condition yields the same stability region . to interpret the figures below , observe that given a test equation with parameter values and and , the point corresponds to the choice of step - size . then varying the step - size corresponds to moving along the ray that connects with the origin , where going on this ray in the direction of the origin corresponds to decreasing the step - size . for the scaling is appropriately adapted .figure [ fig.stabregm ] shows the mean - square stability regions of the zero solutions of the sde ( white area with a dashed border ) , the -maruyama approximation ( light - grey area ) and the -milstein approximation ( light dark area to dark - grey area ) for different values of the method parameter and illustrates how the stability region of the -milstein method decreases with and compares to that of the -maruyama method . in particular , one can observe that the mean - square stability regions for the -milstein method are always smaller than the stability region of the -maruyama method . for the -maruyama method the figure illustrates the property of -stability of the method for for all , where as for the -milstein method we can not conclude -stability for a particular and all . in ( * ? ? ?* section 4 ) and ( * ? ? ?* section 3 ) the stability regions of the -maruyama method and the -milstein method have been plotted separately for the scalar sde and real - valued . to emphasise the different stability properties of both methods even in the case of , we provide in figure [ fig.stabrega ] a plot of the stability regions of both methods together for the same setting as in and several values of the method parameter . in these cases the stability regions of the -maruyama method and the -milstein methodcoincide with that of the sde if and , respectively .when using the euler - maruyama method and the standard milstein method , that is taking , the methods that are most often used , then it appears from figure [ fig.stabrega ] that both stability regions are quite small but of similar size .this is already mentioned in . however , when taking mean - square accuracy into account and aiming to reduce numerical costs by using the milstein method with a larger step - size , one might easily be prevented from doing so by the more restrictive stability condition of the milstein method .a brief look at the deterministic case , that is ( ) in , , , etc . , reminds us that it is by making the approximation implicit and the introduction of the method parameter that one can control the stability properties of the method by suitably choosing . 
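The stability-region plots described here can be reproduced, at least qualitatively, with the Monte Carlo helpers sketched above. The fragment below scans a grid in the plane spanned by x = hλ and y = hμ² for a single real noise term (one plausible choice of scaling; the paper's exact scaling may differ) and records where the estimated mean-square amplification factor drops below one.

```python
import numpy as np
# Builds on ms_amplification_maruyama / ms_amplification_milstein above.

def stability_mask(method, theta, xs, ys, h=1.0):
    """Boolean grid marking (x, y) points with estimated E[A^2] < 1."""
    mask = np.zeros((ys.size, xs.size), dtype=bool)
    for j, y in enumerate(ys):
        mu = [np.sqrt(y / h)]
        for i, x in enumerate(xs):
            mask[j, i] = method(x / h, mu, h, theta, n_samples=20_000) < 1.0
    return mask

xs = np.linspace(-4.0, 0.5, 37)
ys = np.linspace(0.0, 4.0, 33)
mar = stability_mask(ms_amplification_maruyama, 0.5, xs, ys)
mil = stability_mask(ms_amplification_milstein, 0.5, xs, ys)
print("stable grid points (Maruyama vs Milstein):", mar.sum(), mil.sum())
```

Shading the two masks reproduces the qualitative picture discussed above: the Milstein region is contained in the Maruyama one. As the closing remark of this section recalls, in the deterministic case it is the drift-implicitness parameter that provides control over such regions, which motivates the construction that follows.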
in this senseit would be useful to develop milstein - type methods that incorporate implicit higher order approximations of the diffusion term to have a method parameter for this purpose .in general , a straightforward introduction of implicitness into approximations of the diffusion term results in the numerical solution becoming unbounded with positive probability , see ( * ? ? ?* chap 1.3.4 ) and for methods avoiding this problem . in this articleour aim is to highlight the effect that an implicit discretisation of the diffusion term can have on the stability properties of the -milstein method and thus we take advantage of the simple structure of the test equation .the latter results in the fact that in each term in the second last sum in the -milstein method the double wiener integral can be replaced by , for .then the second last term in the method can be written as for each and we introduce an implicit approximation with an additional positive method parameter only in the latter term , which does not contain the random variable .thus we propose , for each , to use we emphasise that this represents a ( partial ) implicit approximation of the diffusion term in contrast to well - known approaches to implicit approximations of the drift term . applied to the scalar linear test - equation the --milstein method then takes the form , with the random variables defined as in section [ s.cc ] .the choice of yields the -milstein approximation of the stratonovich - sde . for the stability analysiswe follow the same procedure as in section [ s.dc ] and first rewrite as a one - step recurrence where the parameters and are given by we define and assume that to guarantee the existence of a solution to equation . now squaring and taking the expectation yields where .hence the zero - solution of the stochastic difference equation is asymptotically mean - square stable if and only if the factor on the right hand side of is less than 1 . rewriting this condition in terms of and rearranging, we obtain the following result : [ c.stabcondsigma ] the zero solution of the stochastic difference equation given by the --milstein method applied to the scalar linear test equation is asymptotically mean - square stable if and only if where again .we note that the first terms in the left - hand side of are equal to the left - hand side of .the additional term is negative for , i.e. , when the stability condition for the test equation is satisfied .thus , the stability condition in lemma [ c.stabcondsigma ] is less restrictive than the condition for the -milstein method . denoting by the stability region of the --milstein method , analogous considerations as for corollary [ c.milstein ]yield the next result .[ cormilstein ] define for all it holds that then , assuming that , for the zero solution of the --milstein method is asymptotically mean - square stable if and only if in particular , condition then implies that for and , the stability condition for the -milstein method is satisfied for any step - size . as in section [ s.comp ]we illustrate the stability regions of the --milstein method applied to the test sde with a single noise ( ) in terms of real - valued coefficients . in this casewe can find a bound on as .again , we set and , and the stability inequality in terms of and reads figures [ fig.stabregc ] - [ fig.stabrege ] compare the stability regions for the sde with , the -maruyama method and the --milstein method . in each figurewe have fixed the parameter and only the parameter varies . 
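Before examining those figures further, the following sketch spells out one step of the σ-θ-Milstein recursion as reconstructed from the description above (an illustrative reading, not a verbatim transcription of the scheme): the random part of each diagonal Milstein term stays explicit, while its deterministic part −½hμ_r² is split between the new and the old iterate with weight σ.

```python
import numpy as np

def sigma_theta_milstein_path(lam, mu, x0, h, n_steps, theta, sigma, rng):
    """One trajectory of the sigma-theta-Milstein variant for the scalar
    test equation (real coefficients, illustrative reconstruction): drift
    implicit with weight theta, the deterministic part -0.5*h*sum_r mu_r^2
    of the Milstein correction implicit with weight sigma, everything that
    contains random variables kept explicit."""
    mu = np.asarray(mu, dtype=float)
    s2 = np.sum(mu ** 2)
    denom = 1.0 - theta * h * lam + sigma * 0.5 * h * s2
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = np.sqrt(h) * rng.standard_normal(mu.size)
        d = np.dot(mu, dw)
        numer = (1.0 + (1.0 - theta) * h * lam + d + 0.5 * d ** 2
                 - (1.0 - sigma) * 0.5 * h * s2)
        x[i + 1] = x[i] * numer / denom
    return x

rng = np.random.default_rng(3)
# Increasing sigma enlarges the stability region, as discussed in the text.
path = sigma_theta_milstein_path(-3.0, [1.0, 1.0], 1.0, 0.5, 40, 0.5, 1.0, rng)
```

Setting σ = 0 in this sketch recovers the θ-Milstein recursion above.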
for the plots would correspond to those in figure [ fig.stabrega ] .we can see that the stability region of the --milstein method ( dark - grey area ) is becoming larger when increasing the parameter .moreover , the stability region coincides with the stability region of the test equation ( white area ) if .further , we consider again the sde with identical noise intensities and plot in figure [ fig.stabregf ] the stability regions for ( again the condition below is the same for and ) and corresponding to the scaling and , where condition now reads again the stability region of the --milstein method decreases with growing , as in the case of the -milstein method , which prevents to conclude -stability for a particular value of and all .however , the introduction of the ( partial ) implicitness in the diffusion term clearly provides an improvement of the stability behaviour for the --milstein method .in this section we aim to illustrate the effect that the choice of method and the choice of the method parameters and have on practical simulation runs .we consider the test equation with the following parameters : , , , , and integrate for ] with gridpoints .the parameter varies as .the analytical results in the previous sections also suggest choices of step - sizes , such that the zero solution of any one of the numerical methods for a fixed set of parameters is asymptotically mean - square stable and obviously it is possible to illustrate this by performing numerical experiments with fixed parameter sets and varying the step - size .however , the focus of this article is rather on comparing qualitative properties of _ methods _ than on how individual methods behave for various step - sizes .each of the four figures below consists of three pictures .the underlying idea of a stability analysis of numerical methods is to provide guidance for choosing method parameters and a step - size such that the numerical solution represents a good approximation of the true solution .( note that convergence of a method only guarantees this in the limit for . )thus , we plot in the left picture of each figure the mean - square error between the exact solution and numerical solution for the corresponding method and choice of method parameters .this allows to compare the impact of the choice of method on actually computing solutions of sdes .however , the quantity of interest in the stability definitions [ d.stabil ] and [ d.stabildiff ] are the second moments of the analytical and numerical solutions . therefore the other two pictures present plots of the second moments of the solutions of , , and , in the middle picture estimated from the numerical simulations of , and , in the right picture computed form the analytical solutions .this allows to compare the effect of the choice of method and parameters on the behaviour of the second moments analytically and also , how well this is approximated by the numerical methods . 
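A minimal sketch of how the Monte Carlo second-moment estimates referred to here can be produced is given below; the precise quantities shown in the figures are defined next. The reference curve uses the fact that, for real coefficients, the second moment of the test equation satisfies d/dt E[X²] = (2λ + Σ_r μ_r²) E[X²], the standard geometric-Brownian-motion computation.

```python
import numpy as np
# Builds on theta_maruyama_path from the earlier sketch.

def estimated_second_moment(lam, mu, x0, h, n_steps, theta,
                            n_paths=2000, seed=4):
    """Monte Carlo estimator of E[X_i^2] along the time grid (sketch)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n_steps + 1)
    for _ in range(n_paths):
        acc += theta_maruyama_path(lam, mu, x0, h, n_steps, theta, rng) ** 2
    return acc / n_paths

lam, mu, x0, h, n = -3.0, [1.0, 1.0], 1.0, 0.25, 40
t = h * np.arange(n + 1)
exact = x0 ** 2 * np.exp((2.0 * lam + np.sum(np.square(mu))) * t)
approx = estimated_second_moment(lam, mu, x0, h, n, theta=0.5)
print("max deviation from the reference curve:", np.max(np.abs(approx - exact)))
```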
In detail, the figures present the following quantities, where X(t_i) and X_i denote the values on the grid points of a trajectory of the explicit solution of the test equation with the above parameters and of the numerical trajectory produced by one of the methods, respectively; the last of the three displayed expressions reads E(X_i^2) = S^i · X_0^2 for all i. The first expression represents an estimator for the second moment of the numerical approximation, the second expression is the solution of the deterministic ODE (see the proof of Lemma [l.contprob]) at the discrete time-points, and the last expression is the result of applying the recurrences, where S denotes the factor in front of E(X_i^2) in the right-hand side of the corresponding recurrence equation. The number of trajectories computed for the above quantities is the same for each simulation. Figures [fig.stabresa] to [fig.stabrese] illustrate the behaviour of the methods for the above set of parameters, in particular the same fixed step-size and different choices of θ and σ. The θ-Maruyama method provides reliable approximations for some of the values of θ considered, but not for all of them, while the θ-Milstein method in the same setting does not provide reliable approximations for several of these values of θ; in fact, the corresponding numerical solutions diverge. We can observe the improvement in the stability behaviour when using the σ-θ-Milstein method introduced in the previous section. In contrast to the θ-Milstein method, this method produces reliable approximations for the same choice of θ when σ is also set appropriately. Further, for a suitable choice of σ the σ-θ-Milstein approximation behaves satisfactorily for all values of θ, thus the implicit term in the diffusion approximation effectively stabilises the method. A linear stability analysis has been performed for the θ-Maruyama method and the θ-Milstein method, using a linear test equation with several multiplicative noise terms. We have obtained stability conditions guaranteeing asymptotic mean-square stability of the zero solution of the stochastic difference equations resulting from both types of methods applied to the test SDE. Comparing the stability conditions for the θ-Maruyama and the θ-Milstein method, respectively, it is quite obvious that the latter is more restrictive than the former, due to the fourth and fifth term in the Milstein condition, i.e., the additional noise-dependent contribution, which is always non-negative.
in particular , this term is the more restrictive , the larger the parameters in the diffusion term in are .now , in the case that these parameters are small , that is the so - called small noise case , it is well known from the corresponding mean - square convergence analysis that maruyama - type methods provide sufficiently accurate methods for practicable choices of step - sizes , see .milstein - type methods include higher order approximations of the diffusion term with the aim that they are accurate methods for more practicable choices of step - sizes just in the case that the diffusion term is large .in other words , when dealing with sdes with larger diffusion terms , numerical efficiency considerations would suggest using a milstein - type method with a larger step - size rather than a maruyama - type method with a small step - size to obtain numerical approximations of the solution of an sde with a similar accuracy .however , corollary [ cormilstab ] indicates that in this case there may be a trade - off and one is faced with restrictions on the step - size for the milstein - type method due to stability reasons .further , we have shown that the precise stability region of the -milstein method depends on the number and magnitude of the noise terms , whereas the stability region of the -maruyama method is independent of them .in particular , it is not possible to define a value such that the -milstein method can be called -stable for all for the class of sdes for an arbitrary number of driving wiener processes . for the -maruyama method , as in the deterministic case and the case of the test equation with .we provide a modified milstein - type method with a partially implicit diffusion approximation and demonstrate that the resulting stability behaviour can be controlled more favourably . the results highlight that it is necessary to include multi - dimensional noise into test equations and to study their effects on the practical behaviour of the methods .we thank martin riedler ( heriot - watt university , edinburgh ) and lukasz szpruch ( university of strathclyde , glasgow ) for fruitful discussions .
In this article we compare the mean-square stability properties of the θ-Maruyama and θ-Milstein methods that are used to solve stochastic differential equations. For the linear stability analysis, we propose an extension of the standard geometric Brownian motion as a test equation and consider a scalar linear test equation with several multiplicative noise terms. This test equation allows us to begin investigating the influence of multi-dimensional noise on the stability behaviour of the methods while keeping the analysis tractable. Our findings include: (i) the stability condition for the θ-Milstein method and thus, for some choices of θ, the conditions on the step-size, are much more restrictive than those for the θ-Maruyama method; (ii) the precise stability region of the θ-Milstein method explicitly depends on the noise terms. Further, we investigate the effect of introducing partial implicitness into the diffusion approximation terms of Milstein-type methods, which makes it possible to control the stability properties of these methods with a further method parameter. Numerical examples illustrate the results and provide a comparison of the stability behaviour of the different methods. Keywords: stochastic differential equations, asymptotic mean-square stability, θ-Maruyama method, θ-Milstein method, linear stability analysis. AMS subject classification: 60H10, 65C20, 65U05, 65L20
in the last few years most cosmological parameters have been determined up to a few percent .the values of , , and can now be constrained with an unprecedent degree of accuracy ( see , for example , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* and references therein ) .the next challenge to cosmologists is to test the predictions of cosmological models at a few hundred kpc scales .it turns out that these are just the relevant scales involved in galaxy formation and evolution .galaxy formation and evolution are intriguing open questions whose resolution in connection with the global cosmological model will very likely advance considerably in this decade . even though the field is at its beginnings, the use of numerical methods to study how galaxies are assembled within a cosmological scenario from the field of primordial fluctuations , seems a convenient approach .the main advantage of these approaches ( i.e. , self - consistent hydrodynamical simulations ) , is that the physics is introduced at a very general level , and the system evolves as a consequence .we can follow the evolution of the dynamical and hydrodynamical properties of matter in the universe ; galaxy - like objects ( glos ) appear as a consequence of this evolution . and ,so , the building - up of objects ( cosmic network structure formation at high , collapse , interactions , mergers , accretions ) , as well as their hydrodynamical consequences ( instabilities , gas infall from halos to discs at hundred of kpc scales , gas inflow along discs at tens of kpc scales , shocks , cooling , piling - up of gas necessary for star formation ) , can be accurately followed .we get not only the _ properties _ of objects at any , but also an _ insight _ into the _ physical processes _responsible for their formation and evolution .moreover , numerical hydrodynamical simulations using particles permit very convenient comparisons of glos that form in simulations with observational data .simulations _ directly _ provide us , at each , with the structural and dynamical properties of each individual glo ( position and velocity of each of its particles , gas density and temperature of each of its baryonic constituents ) and with their individual star formation rates histories ( sfrhs ) .chemical abundance and spectrophotometrical data are the current standard to compare models of galaxy formation .it is expected that the next generation of astronomical facilities will make possible a new science : mass measurements for distant galaxies ( see , for example * ? ? ?glos formed in numerical simulations are particularly suited to be compared to this new kind of data .pre - prepared numerical experiments are adequate to describe in detail a particular phase of the formation or evolution of galaxies ( for example , merger events or orbital motions of satellites within halos ) , from initial conditions set by the experimenter .these initial conditions try to model conditions that would have arisen along the evolution of the systems under consideration .they are useful to study basic aspects of the physical processes relevant to evolution .for example , the works by barnes , hernquist and mihos , which have fundamental importance to understand the role played by mergers in galaxy evolution , have been carried out with this technique . 
however , in pre - prepared simulations , contrary to the self - consistent approach , the process under consideration is studied _ in isolation _ , and not in connection with the other relevant processes involved in galaxy formation and evolution (already mentioned ) that , moreover , could interact among themselves in a non - trivial way .these considerations stress the ability of self - consistent hydrodynamical simulations as a tool to learn how galaxies form from the field of primordial fluctuations and evolve into the objects we observe today . to properly handle this problem, a numerical code has to allow for enough mass , time and space resolution as well as a convenient dynamical range. they should be as fast as possible and with memory requirements within the current computer capabilities .a very important issue when designing a numerical code to study galaxy formation and evolution , is to make sure that conservation laws are accurately verified , and , particularly , i ) , that angular momentum is conserved at the scales relevant to disc formation ; otherwise , galaxy disc formation could meet with some difficulties ; and , ii ) , that entropy is conserved in reversible adiabatic processes , because the violation of this physical principle could produce spurious effects at galaxy scales . by the moment , star formation ( sf ) processes have to be modelled , either inspired in kpc or pc scale hydrodynamical simulations or other considerations . the first choice to be made when designing this kind of codes is the gravity solver . among current numerical methods , those that employ adaptive techniques in regions of high density , either from a lagrangian ( as ap3 m , * ? ? ?* ; * ? ? ?* ) or eulerian ( as the art and mlapm codes , * ? ? ?* ; * ? ? ?* ) approach , are the most suitable to meet the requirements of resolution and large dynamical range , accuracy and rapidity . a detailed comparison between ap3m and art codes has been carried out by .they have found out that these codes produce results that are consistent within a 10% , provided that ( is the number of integration steps ; stands for the dynamical range ) .the choice of the gravity solver in a cosmological simulation depends on its purpose . to study galaxy formation and evolution ,lagrangian codes have the advantage over eulerian codes that they permit to go backwards and forwards in time in a very easy way .for example , the constituent particles of a given object can be identified at a given redshift , , and one can then analyze their positions in phase space and the properties of the objects or structures they form at a different redshift , .this is a very convenient method to study evolutionary processes and it motivates our choice of an ap3m - based method as gravity solver for our simulations . to solve the hydrodynamical equations ( and , in general , any hyperbolic system of equations in partial derivatives ), there are also two basic different techniques : i ) , eulerian methods , and , ii ) , lagrangian methods .eulerian methods are based on the so - called godunov algorithm .their new formulations , using adaptive mesh refinements , are particularly well suited to combine with art - like codes when both gravitational and hydrodynamical forces are considered . 
for a comparison of the performances of a number of hydrodynamical codes of both kinds see .most of the lagrangian methods used in astrophysics are based on the sph ( smooth particle hydrodynamics ) technique .given the convenience of this technique to be applied to cosmological simulations , a number of authors developed sph codes to be used in a cosmological context .some of them follow . first used sph techniques in cosmological simulations ( a p3m - sph code ) . , as well as , coupled a sph code to the tree algorithm . modelled their treesph code after and introduced a parallel version of this code , while makes use of a special purpose hardware to compute the gravitational forces by direct summation ( grape ) . coupled sph with a pp algorithm in a code designed to be run on a connection machine , and incorporated in a tree - sph code the so - called terms ( see below ) .gadget uses either a tree scheme or grape , with individual integration timesteps , and both , serial and parallel versions .another parallel tree - sph code is gasoline .as already mentioned , ap3m - based codes are particularly well suited to study galaxy formation through self - consistent cosmological simulations .the first ap3m - sph code was introduced by ( hydra code , see also * ? ? ? carried out a second implementation . in these implementations ,the integration timestep is _ global _( i.e. , at a given time , the same for all particles ) . in cosmological simulations ,however , multiple time scales appear , due to their very large dynamical ranges from very dense volumes to very rarefied zones . to get an accurate enough integration scheme , and , at the same time , to avoid that particles in denser volumes slow down the simulations, it is advantageous to introduce _ individual _ integration timesteps , i.e. , at each time , different timesteps for each particle , depending on the density of the region it samples .this is the optimal design of the code to increase the mass resolution .another shortcoming of conventional sph formulations concerns the entropy violation of the dynamical equations , related to the space dependence of the smoothing length of sph particles , , as noted by some authors .a rigorous formulation of sph requires that additional terms must be included in the particle equations of motion which account for the variability of , usually termed as `` the terms '' . until very recently , they were considered as having a negligible effect on the global dynamics of systems and , therefore , sph codes ignored such additional terms and focused on energy conservation .alternatively , treatments of hydrodynamics based on the lagrange equations can be formulated that are well behaved in their conservation properties of both , energy and entropy , as that introduced recently by .the effects of entropy violation in sph codes are not completely clear and they need to be analyzed in much more detail , specially in simulations where galaxies are formed in a cosmological framework .previous works have analyzed this question in the case of the collapse of isolated objects and have found that , if such correction terms are neglected , the density peaks associated to central cores or shock fronts are overestimated at a level . 
to make up for these shortcomingswhen dealing with problems related to galaxy formation and evolution , we introduce a new code , deva , where gravity is solved by means of an ap3m - like technique , and hydrodynamics with a sph technique , with individual integration timesteps .the space dependence of the sph resolution scale , , has been taken into account , in order to ensure the conservative character of the equations of motion , as long as entropy and energy is considered .another important particularity of deva is the attention paid to angular momentum conservation , a key point to enable disc formation in simulations ( * ? ? ?* ; * ? ? ?* and references therein ) .our choice has been to put the stress into conservation laws rather than into saving cpu time . but saving cpu time has also been one of our concerns , so that the code is fast enough that cosmological self - consistent simulations can be run on a modest computer machine .the paper is organized as follows : is the introduction . in ,the sph method is briefly reviewed and we present the sph equations when the terms are considered . 3 is devoted to the particularities of the deva code , and 4 to test whether the code integrates correctly the hydrodynamical and n - body equations ( standard tests ) . in 5we introduce some self - consistent simulations run with deva , compare to one standard of reference for hydrodynamical simulations in a cosmological framework ( the santa barbara cluster comparison project * ? ? ?* ) , and analyze the effects of the terms in these simulations . finally , in 6 , we give a summary of the work and discuss deva performances .the basic idea of the sph method lies in representing the fluid elements by particles which act as interpolation centers to determine the local value of any macroscopic variable . in order to smooth out local statistical fluctuations ,this interpolation is performed by convolving the field with a smoothing ( or kernel ) function .for example , the smoothed estimate of the local density is where , is the mass of particle , and is the smoothing length for particle , which specifies the size of the averaging volume . ideally , the individual particle smoothing lengths must be updated so that each particle has a constant number of neighbors . by neighborswe mean those particles with distances .such a condition can be exactly implemented by constructing , for each particle , a list of its nearest neighbors .the smoothing length of is then defined to be where is the position vector of particle s most distant neighbor .since each particle has its own value , it is possible to find couples of particles such that is a neighbor of , but is not a neighbor of . in these cases, it is obvious that the reciprocity principleth particle belongs to the neighbor list of the particle , then it is mandatory that , at this same time , the particle belongs to the neighbor list of the particle ] is not satisfied and , therefore , simulations will not conserve momentum . in order to solve this problem, it is necessary to symmetrize the sph equations by using , for example , averaged kernels : \;.\ ] ] a first consequence of the adopted symmetrization procedure is the specific form for the kernel derivatives . 
as a matter of fact ,( [ wij ] ) implies that is a function of three variables : , and .consequently , its gradient is given by : \nonumber \\ + \frac{1}{2}\left[\frac{\partial w(r_{jk},h_j)}{\partial h_{j}}{{\mathbf{\nabla}}}_i h_{j } + \frac{\partial w(r_{jk},h_k)}{\partial h_{k } } { { \mathbf{\nabla}}}_i h_{k}\right]\end{aligned}\ ] ] the first part of eq .( [ nablawij ] ) , which does not involve derivatives of the smoothing lengths , is the usual symmetrized form of .the second part , which involves derivatives of the smoothing lengths , arises because of the spatial and temporal variability of .we shall refer to terms of this type as terms. most implementations of the sph algorithm consider only the first one and neglect the terms .the motion of particle is determined by the momentum and energy equations : where is the local gravitational potential , is the acceleration due to pressure forces , is the acceleration due to viscosity forces , is the specific internal energy , is the pressure ( with being the constant heat ratio ) , and is the power due to non - adiabatic heating or cooling processes .a fully consistent sph expression for pressure forces , satisfying all conservations laws ( including entropy conservation in reversible adiabatic problems ) , was obtained by : using eqs .( [ rho ] ) and ( [ nablawij ] ) to compute , one obtains \frac{{\mbox{\it{\bf r}}}_{ij}}{r_{ij } } \nonumber\\ & -&\frac{1}{4}\tilde{r}_i \sum_j m_j\left(\frac{p_i}{\rho_{i}^{2}}+ \frac{p_j}{\rho_{j}^{2}}\right ) \frac{\partial w(r_{ij},h_i)}{\partial h_{i}}\\ & + & \frac{1}{4}\sum_j \tilde{r_j}\delta_{ij_f } \sum_k \frac{m_j m_k}{m_i } \left(\frac{p_j}{\rho_{j}^{2}}+\frac{p_k}{\rho_{k}^{2}}\right ) \frac{\partial w(r_{jk},h_j)}{\partial h_{j}}\ ; , \nonumber\end{aligned}\ ] ] where .on the other hand , using eqs .( [ rho ] ) and ( [ nablawij ] ) to compute the derivative appearing in eq .( [ dedt ] ) , the energy equation becomes : \frac{\textbf{r}_{ij}\cdot\textbf{v}_{ij}}{r_{ij}}\nonumber\\ & + & \frac{1}{4}\frac{p_i}{\rho_{i}^{2}}\sum_j m_j \left[\frac{\partial w(r_{ij},h_i)}{\partial h_i } \breve{r}_i+ \frac{\partial w(r_{ij},h_j)}{\partial h_j } \breve{r}_j\right ] + { \cal{h}}_i\;,\end{aligned}\ ] ] where note that eqs .( [ accpress ] ) and ( [ energy ] ) have been deduced by using both spatial and time derivatives of the sph density as defined by eq .( [ rho ] ) _ with the symmetrization specified in eq .( 3 ) _ , because compatibility with the conservation laws requires that the sph force and energy equations are evaluated in consistency with the density definition .in the case of deva , this requirement increases the cpu time per integration step .in fact , since the density associated to a particle depends on both and , for , ( i.e. , for its nearest neighbors , see eq .[ rho ] ) , the computation of at a given integration step requires the knowledge of for these nearest neighbors at its beginning .value must be kept fixed all along the integration step in order to avoid violating the reciprocity principle ] this can be achieved either by using the values predicted in the previous integration step or by performing , at each step , a first loop over the particles to compute their values and , once it is over , a second loop to compute their hydrodynamical properties .since we look for a high accuracy rather than a high computational speed , we have adopted this latter possibility . 
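As a concrete illustration of the density estimate and the kernel averaging just described, the following sketch computes ρ_i = Σ_j m_j W_ij with the symmetrized kernel W_ij = ½[W(r_ij, h_i) + W(r_ij, h_j)]. A standard cubic spline kernel and a brute-force neighbour search are used here purely for illustration; the excerpt above does not fix the kernel, and the correction terms to the force and energy equations are not reproduced.

```python
import numpy as np

def w_cubic_spline(r, h):
    """Standard 3-D cubic spline kernel with support 2h (a common SPH
    choice, used here for illustration)."""
    q = r / h
    norm = 1.0 / (np.pi * h ** 3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return norm * w

def sph_density(pos, mass, h):
    """Smoothed density rho_i = sum_j m_j * W_ij with the symmetrized
    (averaged) kernel, restoring the reciprocity discussed in the text
    (sketch; brute-force O(N^2) neighbour loop)."""
    rho = np.zeros(len(mass))
    for i in range(len(mass)):
        r_ij = np.linalg.norm(pos - pos[i], axis=1)
        w_ij = 0.5 * (w_cubic_spline(r_ij, h[i]) + w_cubic_spline(r_ij, h))
        rho[i] = np.sum(mass * w_ij)
    return rho

rng = np.random.default_rng(5)
pos = rng.random((200, 3))
mass = np.full(200, 1.0 / 200)
h = np.full(200, 0.1)   # in practice h_i is set from a fixed neighbour count
print(sph_density(pos, mass, h)[:5])
```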
as usual in sph , to account for dissipation at shocks , the above equations must be completed by adding an artificial viscous pressure term , . when the terms are considered , is added only to the leading term of equations ( [ accpress ] ) and ( [ energy ] ) , that is , those not involving terms : where we have adopted the standard viscous pressure proposed by : where and are constant parameters of order unity , is a softening parameter to prevent numerical divergences , is the local sound speed , and simulation of a system constituted by particles usually requires a computational effort which considerably varies from some regions ( or particles ) to other .for example , regions of high density and submitted to strong shocks need to be simulated with timesteps much shorter than the rest of the system . in the ap3m+sph codes described in the literature , all the particles in the systemare simultaneously advanced at each timestep .the particle needing the highest time resolution determines the timestep length of all the others .consequently , some few particles can slow down the simulation of a system . to make a code more efficient in handling with problems with multiple time scales ,the computational effort must be centered on those particles that require it , avoiding useless computations for the remaining particles . in other words , it is necessary to allow for different timesteps for each particle .a pec ( predict - evaluate - correct ) scheme with individual timesteps has been developed and implemented on our code in the following way : 1 .we enter the step ( which corresponds to the time ) with known positions , velocities , and accelerations , for all the particles .furthermore , any integration scheme with individual timesteps needs some information to identify , at each step , those particles needing a recomputation of their acceleration .this information is stored in two vectors and , where is the time at which the last update of was performed , while is the time at which a recomputation of will be necessary in the future . 2 .a list is constructed with those particles which will be advanced at the current step .such particles are labelled as _active_. obviously , the particle with the smallest prediction time , , must be included in this list , and fixes the timestep of the remaining active particles : since each step requires the update of many auxiliary arrays , it is impractical to advance only a single particle . for this reason , we label as _ active _ all particles within a cubic box around .the size of the activation box is chosen , at each position , so that it contains a small fraction of the total number of particles .3 . for all particles , active ornot , we predict the value of and at 4 .only for active particles , we compute their accelerations and correct and using : where the choice and maintains accuracy to second order both in positions and velocities . in these expressions, represents the time interval elapsed from the last evaluation of to that performed in the current timestep note that , unlike , the value is different for each active particle .we update the global time , as well as the and values of each active particle . 
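The following fragment is a schematic, heavily simplified driver for the predict-evaluate-correct scheme with individual timesteps outlined above. The activation rule, the corrector coefficients, and the acceleration and timestep routines are placeholders chosen for readability, not the actual choices made in DEVA.

```python
import numpy as np

def multistep_advance(state, accel_func, timestep_func, t_end):
    """Schematic PEC loop with individual timesteps.  'state' holds per-
    particle arrays x, v, a, t_last, t_next and the global time t;
    accel_func and timestep_func stand in for the gravity/SPH evaluation
    and the stability criteria (illustrative sketch only)."""
    while state["t"] < t_end:
        t_new = np.min(state["t_next"])     # smallest prediction time fixes the step
        active = state["t_next"] <= t_new   # simplified activation rule
        dt = (t_new - state["t_last"])[:, None]
        # predictor: every particle is drifted to t_new
        x_pred = state["x"] + state["v"] * dt + 0.5 * state["a"] * dt ** 2
        v_pred = state["v"] + state["a"] * dt
        # evaluation and corrector: active particles only
        a_new = accel_func(x_pred, v_pred, active)
        dt_a = dt[active]
        state["x"][active] = x_pred[active]
        state["v"][active] += 0.5 * (state["a"][active] + a_new[active]) * dt_a
        state["a"][active] = a_new[active]
        state["t_last"][active] = t_new
        state["t_next"][active] = t_new + timestep_func(state, active)
        state["t"] = t_new
    return state

def individual_timestep(h, v_sig, accel, eps_grav, c_courant=0.3, c_accel=0.3):
    """Example timestep_func: combines an acceleration criterion,
    dt <= C_a * sqrt(eps/|a|), with a Courant-type criterion,
    dt <= C_c * h / v_sig, where v_sig is a signal speed including sound
    speed and viscous terms (placeholder constants, not the code's values)."""
    a_mag = np.linalg.norm(accel, axis=1)
    dt_acc = c_accel * np.sqrt(eps_grav / np.maximum(a_mag, 1e-30))
    dt_cfl = c_courant * h / np.maximum(v_sig, 1e-30)
    return np.minimum(dt_acc, dt_cfl)
```

In the actual code the active set is enlarged to all particles inside an activation box around the most urgent particle, as described above, which amortises the cost of updating the auxiliary arrays; radiative cooling, by contrast, is applied in the integral form described next and therefore never enters the timestep minimum.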
here , in order to maintain the numerical stability of the ap3 m algorithm , the individual timestep must be smaller than the time scale for significant displacements or changes in velocity due to accelerations : where is the gravitational softening .the above integration scheme may easily be extended to include hydrodynamics .the sph processes involve three new independent variables in addition to those listed in eq .( [ variables ] ) : where is the smoothing length , the specific internal energy , and its derivative . for all particles, we must then predict the value of at and compute , for active particles , both their total acceleration ( and ) as well as their hydrodynamical variables ( , , and ) .these quantities are then used to correct the internal energy of active particles : where the choice maintains accuracy to second order in internal energies .now , the numerical stability requires additional limits on the timestep of each gas particle .a first timestep control is that concerning the time scale for significant displacements or changes in velocity due to accelerations : a second limit on is usually given by a timestep control which combines the courant and the viscous conditions : \;.\label{dtcv}\ ] ] when required , radiative cooling is implemented in an integral form using the fact that , due to the courant condition , the density field is nearly constant over a time - step : where is the power radiated per unit volume and is the change in due to cooling processes .this integral procedure circumvents the need of a control time for cooling , and , hence , it never limits the timestep .the numerical stability of our code requires that cooling effects must be updated at each step for all particles , active or not .otherwise , the simultaneous presence of already cooled and not yet cooled particles in a given object would break the local pressure equilibrium and , as a result , cold particles would fall to the object center causing a non - physical core of very high density ( see .2 for an example ) .fig [ cpu ] shows , for a typical cosmological simulation , the ratio of the cpu time consumed by an algorithm with individual timesteps to that consumed when all particles are simultaneously advanced .we see that the use of individual timesteps typically reduces the cpu time per step in a factor of five . in a pentiumiv 1.7ghz personal computer , the cpu time typically consumed by our code in a cosmological simulation without the terms is : a ) 25 seconds per step in simulations without radiative cooling ( such as the santa barbara cluster test of 5.1 ) , b ) from 25 ( at high redshifts ) to 70 ( at low redshifts ) seconds per step in cosmological simulations with radiative cooling ( such as those of 5.2 ) .these cpu times are increased by about 150% when the terms are taken into account .deva has been applied to different problems with known analytical or numerical solutions .the aim of such simulations was not only to test our code , but also to analyze the effects of the terms included in it .the one - dimensional shock tube problem proposed by has become a standard test of all transport and source terms ( including artificial viscosities ) of hydrodynamic algorithms .it considers a perfect gas distributed on the -axis .a diaphragm at initially separates two regions which have different densities and pressures .all particles are initially at rest . 
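A minimal particle realisation of such an initial state is sketched below; the classical Sod values (ρ, P) = (1, 1) on the left and (0.125, 0.1) on the right with γ = 1.4 are adopted purely as an assumption, since the excerpt does not show the actual numbers, and the density contrast is encoded in the particle spacing so that all particle masses are equal.

```python
import numpy as np

def sod_initial_conditions(n_left=320, gamma=1.4):
    """Equal-mass SPH particles realising a Sod-type shock tube on
    [-0.5, 0.5] (illustrative values; see the hedging note above)."""
    n_right = n_left // 8                   # rho_R/rho_L = 0.125 keeps masses equal
    x = np.concatenate([np.linspace(-0.5, 0.0, n_left, endpoint=False),
                        np.linspace(0.0, 0.5, n_right, endpoint=False)])
    rho = np.where(x < 0.0, 1.0, 0.125)
    pres = np.where(x < 0.0, 1.0, 0.1)
    u = pres / ((gamma - 1.0) * rho)        # specific internal energy
    v = np.zeros_like(x)                    # everything initially at rest
    mass = np.full_like(x, 0.5 / n_left)    # rho_L * 0.5 / n_left per particle
    return x, v, rho, pres, u, mass
```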
At the initial time the diaphragm is broken and both regions start to interact. Nonlinear waves are then generated at the discontinuity and propagate into each region: a shock wave moves from the high-pressure into the low-pressure region, while the associated rarefaction wave moves in the opposite sense. The analytical solution to this problem is known from the literature. In our simulation, we have considered gas particles initially distributed in the interval according to the prescribed left and right states. Dissipational effects, other than those associated with the artificial viscosity (with fixed values of the two viscosity parameters), were ignored, as were gravitational interactions. Fig. [shocktube] shows our results. We see from this figure that our results are in excellent agreement with the analytical solutions. The resulting profiles, both at the shock wave and at the contact discontinuity, are much less rounded than in previous SPH computations, as a result of having used a larger number of particles and, hence, a better resolution. We also note the almost complete suppression of post-shock oscillations in our results. These oscillations can be seen in previous SPH simulations of this problem, especially in the velocity field, while no high-frequency vibrations are perceptible in our results. The weak blip observed in the pressure profile at the contact discontinuity is normal in SPH codes. Such a non-physical blip has been explained as being due to the fact that the smoothed estimate of the pressure is computed from quantities that are discontinuous there. It is then inevitable that the pressure shows some slight perturbation at the contact discontinuity, but this has a negligible effect on the motion. In this test, simulations including the terms gave exactly the same results as those neglecting them. A 3D problem usually considered to test hydrodynamical codes is the adiabatic collapse of a non-rotating gas sphere. This problem has been studied both with a finite-difference method and with SPH simulations. In order to facilitate the comparison of our results with those obtained by these authors, we have taken the same initial conditions: a gas sphere of given radius and total mass, with a prescribed density profile. All the gas particles are initially at rest and have the same specific internal energy. Units were taken so that the relevant constants are normalised.
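A particle realisation of such a configuration can be set up as follows; the commonly used version of this test has density profile ρ(r) ∝ 1/r and a small uniform specific internal energy, which are adopted here as assumptions since the excerpt does not show the paper's actual profile or value.

```python
import numpy as np

def collapse_initial_conditions(n_gas=5000, total_mass=1.0, radius=1.0,
                                u0=0.05, seed=7):
    """Gas sphere with rho(r) proportional to 1/r (so M(<r) = M*(r/R)^2),
    all particles at rest with the same specific internal energy u0, in
    units with G = M = R = 1 (illustrative assumptions; see note above)."""
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(n_gas))   # inverse-transform sampling of M(<r)
    costheta = rng.uniform(-1.0, 1.0, n_gas)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_gas)
    sintheta = np.sqrt(1.0 - costheta ** 2)
    pos = np.column_stack([r * sintheta * np.cos(phi),
                           r * sintheta * np.sin(phi),
                           r * costheta])
    vel = np.zeros_like(pos)
    mass = np.full(n_gas, total_mass / n_gas)
    u = np.full(n_gas, u0)
    return pos, vel, mass, u
```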
initially far from equilibrium , the system collapses converting most of its kinetic energy into heat .a slow expansion follows and , at late times , a core - halo structure develops with nearly isothermal inner regions and the outer regions cooling adiabatically .we show in fig .[ hkfig ] different system profiles at end of the simulation .the solid line represents the numerical solution obtained when the terms have been included , the dashed line represents the numerical solution obtained when these terms have been neglected , and the points represent the numerical solution obtained by .we see that , although the solid and dashed lines are not exactly superposed , both solutions are coincident within the error bars .we can understand why the terms have a negligible effect in the two standard tests reported in this section .the effect of the terms on the thermal energy can be expressed by a time scale defined by where is the ratio of the specific thermal energy of particle to the change in due to the terms when the time elapsed , , is longer than this time scale , the terms will produce a non - negligible effect on the thermal energy .this condition can be expressed as where represents the area contained by the curve between and .when these timescales are computed for the non - adiabatic tests reported in this paper , one obtains ( in the shock tube problem ) , and ( in the collapse of a non - rotating gas sphere ) .the effect of the terms are then expected to be small .testing the effects of the terms on hydrodynamical evolution can be better worked out in isentropic processes .it is not easy , however , to get such kind of processes in simulations of gas evolution because gas easily develops shocks where dissipation must occur .a possibility is considering the adiabatic expansion of a gas sphere from a situation of equilibrium . using expansion rather than collapse has the advantage that shell crossing decreases substantially , so that viscous force terms can be removed and the evolution is isentropic .such simulations as that shown in [ collapse ] lead at late times , , to equilibrium spheres with density profiles as that displayed in fig .[ hkfig ] .we use the adiabatic expansion of such spheres to test the effects of the terms in deva .initial conditions were generated by performing a simulation as that described in [ collapse ] , using different numbers of particles . at , we switched - off its viscous pressures ( by setting ) to ensure that the subsequent evolution conserves the total entropy .the self - gravity was also switched - off at . in absence of gravitational interactions ,this system expands fast and , at , its central density has decreased by a factor of .the evolution from to must conserve both the total energy and entropy .table 1 shows the results for this series of simulations .we see that , when the correction terms are neglected , energy is conserved very accurately but there exists a considerable violation of the total entropy ( about in the time interval we have considered ) . 
in the opposite case , when such correction terms are taken into account , both total energy and total entropy are conserved very accurately ( about ) .these results appear to be independent on the number of particles .we then conclude that the correction terms cure entropy violation , allowing , at the same time , a very good energy conservation .the deva code is particularly well suited to numerically follow , in a cosmological context , the assembly from the field of primordial fluctuations of collapsed objects , such as galaxy clusters or galaxies . to illustrate deva performances in this situation, we briefly analyze some results of self - consistent simulations .self - consistency means that initial conditions are set at high as a montecarlo realization of the field of primordial fluctuations ( i.e. , perturbations , characterized by a spectrum , to a given cosmological model ) , and then they are left to evolve according with newton s laws and the hydrodynamical equations .the santa barbara cluster problem was proposed by to compare the results obtained from different codes .the formation of a x - ray cluster in a cold dark matter ( cdm ) universe has been simulated using most of the hydrodynamic codes available at that time , setting a standard of reference to test newly proposed hydrodynamic codes .the initial conditions of this test correspond to a peak of the density field smoothed with a gaussian filter of radius mpc according to the algorithm of .the perturbation was centered on a periodic cubic region of side mpc .the cosmological scenario is a flat cdm universe with km s mpc for the hubble constant ; for the present - day linear rms mass fluctuation in spherical top hat spheres of radius 16 mpc ; and for the baryon density ( in units of the critical density ) .64 dark matter and 64 baryon particles have been used with a softening length of 20 kpc . to test the influence of the terms ,two different simulations were run with deva . in one of these simulations ,the terms have been considered ( sbgh ) while in the other they have been neglected ( sbnogh ) . in figure[ stbfig ] , the density , temperature and entropy profiles of the cluster are plot .the stars represent the results obtained by jenkins from a high - resolution sph simulation using a parallel version of the hydra code , while the circles represent the results obtained by bryan & norman from a high - resolution adaptive mesh refinement shock - capturing code , samr , .as previously remarked by , we see that the sph and mesh results differ at the central region .this figure also shows the results obtained from our code both when the terms are included ( solid line ) and neglected ( dashed line ) .error bars correspond to the standard deviation of the individual sph data .we see that , now , our results differ slightly depending on whether the terms have been included or not ( the slope of the density profile flattens more rapidly in sbgh than in sbnogh ; the temperature profile is flat in sbgh and decreases within 100 kpc in sbnogh ; the entropy profile is almost flat within 100 kpc in sbgh and decreases continuously in sbnogh ) . 
moreover , when the terms are neglected , we obtain results that are similar to those of previous sph simulations , and very close to jenkins results , obtained with a much higher resolution .when these terms are taken into account , the results are intermediate between previous sph and grid results .this suggests that , at least in part , the difference between the sph and grid results could be due to the non - physical entropy introduced by sph codes .this non - physical entropy is negative and , therefore , it produces objects with a smaller central temperature and a higher central density .particular attention deserves the comparison of deva results with those obtained from the entropy conserving sph - tree formulation by springel & hernquist ( 2002a ) , hereafter s - gadget . in figure [ comparison ]we give a comparison of the santa barbara cluster entropy profiles obtained in the sbgh run and s - gadget , kindly provided by y. ascasibar and g. yepes .both have been run with the same number of particles and gravitational resolution .we see that the agreement is very good within the error bars .so , both techniques compare very well in terms of entropy conservation .the differences found between sbgh and sbnogh simulations can be understood on the basis of the timescale for the terms ( section 4.3 ) .the time integral of ( see eq .[ ts ] ) for this test is now larger than unity ( ) .we now report on several self - consistent simulations run in the context of flat cosmological models with the aim of studying galaxy assembly ( see table 2 for a summary ) .we have considered two different models with parameters whose values are consistent with their recent determinations from observations : simulations ( ) have ( 0.7 ) , ( 0.04 ) , ( 1.00 ) and ( 0.70 ) ( * ? ? ? * ; * ? ? ?* ; * ? ? ?* and references therein ) .both simulations share the same seed for the montecarlo realization of the initial fluctuation field , so that each object formed in one simulation has its counterpart in the other simulation . in each case , we have used dm particles and gas particles , in a periodic box of 10 mpc comoving side .the gravitational softening is kpc , and the minimum allowed smoothing length , , as usual . for each cosmological model , two simulationshave been run that are identical ( they have exactly the same initial conditions and the same values of the cosmological parameters ) except that in one case the terms have been included ( simulations -h and -h hereafter ) , while in the second case these terms have not been taken into account ( simulations -noh and -noh hereafter , see table 2 ) . 
note that in these simulations the cosmological volume is homogeneously sampled , in the sense that no resampling multimass technique has been used to study glo assembly .this work can be considered as an extension of previous works in standard cdm models , where cosmological simulations had been run with a different code based on a different numerical approach with fixed integration timesteps and particle masses .relative to these previous works , deva opens the possibility of considering the effects of the terms at scales of galaxies in self - consistent simulations .moreover , the simulations we report here represent an improvement of the baryonic mass resolution by factors of in the number of baryonic particles sampling a glo of a given total mass .also , the time resolution allowed by the multistep technique has been improved by a factor of in the denser areas of the box .cooling has been implemented as described in 3.4 , where the cooling curve is that from and for an optically thin primordial mixture of h and he ( , ) in collisional equilibrium and in absence of any significant background radiation field . concerning star formation , in this work we report on _ direct _ results ( i.e. , no spectrophotometric ) obtained with the simplest implementation of star formation in the code : through a parameterization , similar to those used by and , see for details , based on the jeans criterion for a collapsing region .gas particles are turned into stars according with an inefficient schmidt - law - like transformation rule ( see * ? ? ?* ; * ? ? ?* ) , where is a dimensionless star - formation efficiency parameter , and is a characteristic time - scale chosen to be equal to the maximum of the local gas - dynamical time , and the local cooling time , .equation ( [ star - rate ] ) implies that the probability that a gas particle forms stars in a time is as usual , we compute at each time step for all eligible gas particles and draw random numbers to decide which particles actually form stars .stellar feedback processes have not been explicitly considered , but a tuning of the efficiency parameters can mimic these feedback effects .galaxy - like objects of different morphologies appear in the simulation : disk - like objects ( dlos ) , early - type - like objects ( etlos ) and irregular objects .dlos contain gas in an extended disk , and most stars in a massive compact central concentration . in simulations with lower values ( not reported here ) , stars form also in the disks , along arms .etlos are very poor in gas and their stellar component have relaxed regular ellipsoidal shapes .irregulars have not defined shapes , and , in most cases , they are the product of a recent merger or interaction event .we note that glos formed in -noh and -h tend to be of later type than their -noh or -h counterparts , because of the lower values the parameters and take in simulations .first analyses of glo formation and evolution in simulations run with the deva code are reported in ; more details will be given elsewhere ( siz el al .2003 , in preparation ) . herewe focus on different aspects related with deva performances .one important issue related to disc formation in hydrodynamical simulations is specific angular momentum conservation at kpc scales . in fig .[ jespmass ] , the specific total angular momentum at is represented versus mass for dlos identified in -noh and -h with 150 km s ( is the disc scalelength , see * ? ? 
one important issue related to disc formation in hydrodynamical simulations is specific angular momentum conservation at kpc scales. in fig. [ jespmass ], the specific total angular momentum at is represented versus mass for dlos identified in -noh and -h with 150 km s ( is the disc scalelength, see * ? ? ? * ). the specific total angular momentum is plotted for dark haloes ( open symbols ), for the inner 83 per cent of the disc gas mass ( i.e., the mass fraction enclosed by in a purely exponential disc; filled symbols ), and for the stellar component of the dlos in our simulations ( starred symbols ). we see that, except for the three most massive objects, is of the same order as , so that these gas particles have collapsed conserving, on average, their angular momentum. moreover, dlos formed in our simulations are inside the box defined by observed spiral discs in this plot. in contrast, is much smaller than either or , meaning that the stellar component in the central parts has formed out of gas that had lost an important amount of its angular momentum in catastrophic events or that had never acquired it. we note that this result is similar to those obtained in simulations run with the code by , even if the codes are different, as explained above, and the global cosmological models are also different. the smoothed estimate of the local gas density given by eq. ( 2 ) and the ensuing formulation of sph equations, symmetrized to ensure the reciprocity principle, has made possible conservation in an axisymmetric potential. this is a delicate and crucial point in sph codes. as stated in .2, its accurate implementation requires that the individual smoothing lengths be completely updated at the beginning of each integration step, which increases the cpu time requirements. this complication can not be avoided, however, when accurate conservation is a key point in the physical processes under study. a different approach to saving cpu time is then necessary. by using different timesteps for each particle, useless computations for particles that do not require high time resolution are avoided. this saves considerable amounts of cpu time. multistepping has allowed us to run these simulations, which have a considerable dynamical range, on a modest computing machine ( a pentium iv 1.7ghz personal computer, see above ). let us now turn to the effects of the terms at kpc scales. as stated in .3, their general effect is to correct the spurious negative entropy introduced in their absence by sph codes, mainly at the central regions of collapsed objects. as a consequence, when the terms are taken into account, dissipation by the gaseous component of a given glo increases, so that it is more disordered or dynamically hotter at its central regions. equilibrium is then attained with lower central baryon concentrations or densities, decreasing the amount of gas infall. but this is not the only effect of the terms on the mass distribution. it is a well known effect that dark matter is pulled in by baryons as they lose their energy and fall onto the central volumes of collapsed configurations. decreasing the amount of gas infall therefore translates into a decrease of the amount of dark matter at the glo centers. as an illustration of this effect on both the gaseous and dark components, in figure [ velcir ] ( upper panel ) we show the circular velocity curves for the most massive etlo formed in -h ( thick lines ) and its counterpart formed in -noh ( thin lines ). in the lower panel, the corresponding circular velocity curves are represented for a dlo formed in -h ( thick lines ) and its counterpart in -noh ( thin lines ).
in these figures, is the radial distance to the glo center of mass, solid lines are the circular velocities, , and short-dashed and long-dashed lines stand for the dark matter ( dm ) and the baryonic ( bar, both stellar and gaseous ) contributions, respectively, given by : with = bar and dm. as can be clearly seen in these figures, the central distributions of both dark matter and baryons are different, and in any case the concentrations are lower when the terms are included. these central concentrations are often quantitatively estimated in the literature through the parameter ( the maximum or peak circular velocity, see * ? ? ? * ). glos in -h or -h have lower values than their -noh and -noh counterparts, respectively. this can be seen in figure [ velcir ] for an etlo and a dlo, but the behavior is general for any glo. another useful parameter to quantitatively characterize circular velocity curves is the logarithmic slope ( ls ), observationally defined for disc rotation curves as the slope of the straight line that fits , in log-log scale, from up to the last measured point in the rotation curve ( * ? ? ? * and references therein ; is the disk scalelength ). lss are a measure of the glo halo compactness at scales of 10 - 30 kpc. as illustrated in figure [ velcir ] for an etlo and a dlo, a tendency for glos to be less compact also at these scales when the terms are taken into account has been found in the simulations. a second effect of the terms is related to the amount of stars formed in a given glo at its formation and all along its evolution until . in these simulations, many stars form at the shock fronts, where gas is compressed to very high densities. softer shocks mean less star formation for the same efficiency parameters. to illustrate this effect, in figure [ massstar ] the ratios of the total stellar masses of the 8 ( 7 ) most massive etlos produced in -h ( -h ) over the total stellar masses of their -noh ( -noh ) counterparts are plotted versus their total virial masses, . we see that, except in one case, these ratios are smaller than unity, as expected. concerning sizes, for etlos we define the intrinsic or 3d cold baryon effective radius, , as the radius of the sphere enclosing half the total etlo mass in cold baryons ( i.e., cold gas or stars ), . this is a measure of the etlo size at the scales of the baryonic objects. in figure [ restar ] we plot for the -h ( -h ) etlos in units of those of their -noh ( -noh ) counterparts. as expected, -h ( -h ) etlos have larger baryonic sizes than their -noh ( -noh ) counterparts, except in one case. finally, shocks heat the gas, producing an extended diffuse hot component. the effect of the terms in this case is to lower the temperature of this diffuse gaseous halo component of glos relative to the case when these terms are not considered. histograms for the temperature distribution of the gaseous component of two of the most massive glos in -h ( thick lines ) and in -noh ( thin lines ) are shown in figure [ temphist ]. to draw these histograms, all the gas particles within a sphere of radius , centered at the glo center of mass, have been considered ( is the radius where the curve of integrated plasma emission luminosity reaches its asymptotic value ). as illustrated in the histograms for these two glos, the temperature distributions of the gaseous component for -h and -noh objects do not differ substantially.
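a minimal sketch of how the three diagnostics used above — the circular-velocity decomposition, its logarithmic slope and the temperature histogram of the gas inside a given radius — could be computed from particle data; it assumes spherically enclosed masses, and the function names and the value of g in kpc (km/s)^2 / msun units are our own choices, not deva's.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def circular_velocity(r_eval, r_part, m_part):
    """v_circ(r) = sqrt(G M(<r) / r) for one component (spherical approximation)."""
    m_enc = np.array([m_part[r_part < r].sum() for r in r_eval])
    return np.sqrt(G * m_enc / r_eval)

def logarithmic_slope(r, v, r_min):
    """Slope of the straight line fitting the rotation curve in log-log scale from r_min outwards."""
    sel = r >= r_min
    slope, _ = np.polyfit(np.log10(r[sel]), np.log10(v[sel]), 1)
    return slope

def gas_temperature_histogram(r_gas, temp_gas, r_x, bins=30):
    """Temperature distribution (in log10 T) of the gas inside a sphere of radius r_x."""
    inside = r_gas < r_x
    return np.histogram(np.log10(temp_gas[inside]), bins=bins)
```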
in both cases, the gas is close to biphasic, but -h objects are slightly colder within than -noh objects. they are also slightly more extended at scales , with the small excess of colder particles placed at the outskirts of the gaseous configurations. the differences found between -h and -noh or between -h and -noh simulations can be understood on the basis of the timescale for the terms ( section 4.3 ). the time integral of ( see eq. [ ts ] ) is now much larger than unity ( ), indicating that the terms can not be neglected in this kind of simulation. as remarked in [ incsph ], the cooling implementation in multistep codes has to be handled with caution: cooling processes need to be updated at each timestep for all particles. otherwise, gas particles involved in shocks suffer from spurious cooling and non-physical cores of high density appear, giving rise to extremely compact objects. as an example, in figure [ restar ] we plot the 3d cold baryon half-mass radii for a variant of -noh, termed -mcool ( see table 2 for details ), where only active particles at each timestep have been allowed to cool. in figure [ restar ] ( starred symbols ) we see that now the effective radii of the most massive objects are significantly smaller than in -noh; the difference becomes less important as the mass of the objects decreases and, as a consequence, the fraction of their constituent particles involved in shocks also decreases. note that, as the comparison of filled points and starred symbols in figure [ restar ] shows, the combined effects of entropy violation and spurious cooling can produce unphysically small objects, a factor of ten smaller, in some cases, than the values found when these effects are circumvented ( i.e., the -h simulation ). we present deva, a multistep ap3m-like sph code designed to study galaxy formation and evolution in connection with the global cosmological model, that uses a formulation of sph equations ensuring both energy and entropy conservation. multistepping is introduced to save cpu time. in self-consistent cosmological simulations multiple time scales appear, due to their large dynamical ranges. to prevent particles in denser zones from slowing down the simulation and, at the same time, to get a properly accurate integration algorithm, it is then advantageous to use individual time steps. a comparison of the cpu time used in a self-consistent cosmological simulation when it is run with a _ global _ timestep or with a _ multistep _ scheme indicates that in the second case results at an equivalent level of accuracy are produced times faster. when a multistep scheme is adopted, a delicate issue in the study of galaxy assembly in a cosmological context is the _ cooling _ implementation. in deva, as no cooling timescale is taken into consideration to fix the individual timestep for each particle, cooling must be calculated in a non-multistep way. otherwise, particles involved in strong shocks would spuriously cool and form dense objects characterized by unphysically small baryon half-mass radii. when writing deva, particular attention was paid to ensuring that the conservation laws of physics ( energy, entropy, momentum ) are correctly implemented in the code, so that they hold at the scales and under the physical conditions relevant for galaxy assembly in a cosmological context. the usual formulations of sph equations focus on energy conservation and they violate, by construction, entropy conservation.
as a consequence ,a negative entropy is numerically produced in shocks that might ( or might not ) spuriously pollute the results , depending on the problem one studies .different authors have addressed the issue of entropy violation in sph codes and have given different alternative formulations of its equations .we have implemented a formulation that considers explicitly the effects of the terms in sph equations . by taking advantage of the structure of the ap3 m algorithm in the neighbor search ,the implementation of terms in the code is simple and noiseless . to test the relevance of entropy violation at shock locations under different physical conditions ,we have studied problems and run simulations ( namely , the adiabatic santa barbara cluster formation test , , and fully self - consistent cosmological simulations to study galaxy formation including cooling an star formation ) using identical initial conditions and two different versions of deva , one that takes into account the terms in sph equations and one that does not take them into account . we show that entropy violation has consequences on the thermodynamical properties of the very central regions of the santa barbara cluster and on the structure at kpc scales of galaxy - like objects ( glos ) formed in simulations , but it does not have any appreciable consequence on the standard non - cosmological tests of hydrodynamical codes . to understand the origins of these different behaviors , a criterion is introduced that allows to elucidate when entropy violation is expected to have appreciable consequences on the results . in standard tests , only moderate shocks and time scales are involved . as a consequence of the non - physical negative entropy numerically produced when the terms are neglected , both glos and the cluster present more concentrated baryon density profiles ( either star or gas ) .concerning the santa barbara cluster test , when the terms are not included , we get results that are similar to those of previous sph simulations that focus on energy conservation . however , when these terms are considered , the results are intermediate between those sph and grid results .for example , the temperature profile is decreasing within about 100 kpc of the cluster center when the simulation is run with standard sph codes ( including deva without terms ) , it increases when it is run with grid codes and it is flat when deva + is used .we would like to note that the accuracy figure of entropy conservation obtained with deva + and the entropy - conserving sph - tree code by springel & hernquist ( s - gadget , 2002a ) compare quite satisfactorily , as indicated by the good agreement between the santa barbara cluster entropy profiles obtained with both codes .in cosmological simulations , negative entropy causes galaxy - like objects ( glos ) to be dynamically less hot and gas infall onto their central regions is artificially increased , causing , also , an increase of the amount of dark matter at the glo centers . 
these results qualitatively agree with those obtained with s - gadget by in their simulation of a cosmological model .no quantitative comparisons are possible by the moment , because a standard of comparison for self - consistent cosmological simulations is unfortunately not available .an important result of this work is that the combined effects of entropy violation and multi - step cooling implementation in cosmological simulations can be particularly dramatic concerning the concentration of mass distribution in the galaxy - like objects they produce .for example , their baryon half - mass radii can be up to a factor of ten smaller than half - mass radii of glos produced in entropy - conserving non - multistep runs with deva . concerning momentum conservation , we have used a formulation of sph equations that is consistent with the smoothed estimate of the local gas density ( eqs ( 1 ) and ( 3 ) ) .equations are symmetrized to ensure that the reciprocity principle holds ( that is , if at a given time the particle belongs to the neighbor list of the particle , then it is mandatory that , at this same time , the particle belongs to the neighbor list of the particle ) , so that momentum and angular momentum are conserved .the implementation of this principle in a sph code increases considerably the cpu time per integration step , because a double loop on gas particles is necessary to evaluate smoothing lengths . to test angular momentum conservation ,we have measured the specific angular momentum of discs formed in self - consistent simulations .it has been found that conservation is good enough to obtain simulated discs with observational counterparts , without any need of previous heating , as already have shown ( see also * ? ? ?the use of a very high number of particles could ensure angular momentum conservation in sph - tree codes . in this paperit has been shown that codes paying a particular attention to the implementation of conservation laws of physics at the scales of interest can attain a good level of accuracy in conservation laws with more limited resources .it is a pleasure to thank a. knebe , m. norman , v. quilis , j. silk , j. sommers - larsen , p. tissera and g. yepes for useful information and discussions on the topics addressed in this paper .particular thanks are due to h.m.p .couchman for making public his ap3 m code , on which the gravitational part of deva is based , and to y. ascasibar and g. yepes for allowing us to use their results on the santa barbara cluster test .this project was partially supported by the mcyt ( spain ) through grant aya-0973 from the programa nacional de astronoma y astrofsica .we also thank the iberdrola foundation for financial support , and the centro de computacin cientfica ( uam , spain ) for computing facilities . 50 alimi j .- m ., serna a. , pastor c. & bernabeu g. 2002 , j. comput .( in press ) ascasibar , y. 2003 , phd thesis , universidad autnoma de madrid avila - reese , v. , & vzquez - semadeni , e. 2001 , , 553 , 645 barnes , j.e .1989 , , 338 , 123 barnes , j.e . , 1992 , , 393 , 484 barnes , j. e. & hernquist , l. e. 1991 , , 370 , l65 barnes , j. e. & hernquist , l. 1992 , , 30 , 705 barnes j. , & hut p. 1986 , 324 , 446 benz , w. & hills , j. g. 1987 , , 323 , 614 bond , j. r. , centrella , j. , szalay , a. s. , & wilson , j. r. 1984 , , 210 , 515 borgani , s. , governato , f. , wadsley , j. , menci , n. , tozzi , p. , quinn , t. , stadel , j. , & lake , g. 2002 , , 336 , 409 bryan , g.l . , norman , m.l . , stone , j.m . 
,cen , r. , & ostriker , j.p .1995 , comput .comm . , 89 , 149 bryan , g.l . , & norman , m.l .1995 , baas , 187 , 9504 couchman , h. m. p. 1991, , 368 , l23 couchman , h.m.p . , thomas , & pierce , f.r . , 1995 , apj , 452 , 797 courteau , s. 1997 , , 114 , 2402 dalcanton , j. j. , spergel , d. n. , & summers , f. j. 1997 , , 482 , 659 dave , r. , dubinski , j. , & hernquist , l. 1997 , new astronomy , 2 , 277 domnguez - tenreiro , r. , serna , a. , siz , a. , & sierra - gonzlez de buitrago , m. m. 2003 , , in press dom ' inguez - tenreiro , r. , tissera , p. b. , & s ' aiz , a.1998 , , 508 , l123 elmegreen , b. 2003 , , in press evrard , a. e. 1988 , , 235 , 911 fall , s.m .1983 , in athanassoula e. , ed , iua symp . 100 , internal kinematics and dynamics of galaxies .reidel , dordrecht , p. 391frenk , c. s. et al . 1999 , , 525 , 554 gingold r. a. & monaghan j. j. 1977 , , 181 , 375 gingold r. a. & monaghan j. j. 1982 , j. comput .46 , 429 godunov , s. k. 1959 , matematicheskii sbornik , 47 , 271 governato , f. , mayer , l. , wadsley , j. , gardner , j.p . , willman , b. , hayashi , e. , quinn , t. , stadel , j. , & lake , g. 2002 , astro - ph/0207044 preprint hawley , j. f. , smarr , l. l. , & wilson , j. r. 1984 , , 277 , 296 hernquist , l. 1993 , , 404 , 717 hernquist , l. & katz , n. 1989 , , 70 , 419 hoffman , y. & ribak , e. 1991 , , 380 , l5 kang , h. , ostriker , j. p. , cen , r. , ryu , d. , hernquist , l. , evrard , a. e. , bryan , g. l. , & norman , m. l. 1994 , , 430 , 83 katz n. 1992 , 391 , 502 katz , n. , weinberg , d. h. , & hernquist , l. 1996 , , 105 , 19 kennicutt , r. 1998 , , 498 , 541 klein , r.i . , fisher , r.t . , mckee , c.f . & truelove , j.k . 1998 , in proceedings of the international conference of numerical astrophysics , ed .s .. m. miyama et al .( boston : kluwer academic ) , p. 131 knebe , a. , green , a. , & binney , j. 2001 , , 325 , 845 knebe , a. , kravtsov , a. v. , gottl " ober , s. , & klypin , a. a.2000 , , 317 , 630 kravtsov , a. v. , klypin , a. a. , & khokhlov , a. m. 1997 , , 111 , 73 kritsuk , a. g. & norman , m. l. 2002 , , 569 , l127 lahav , o. 2002 , in proceedings of the xxxviith rencontres de moriond on the cosmological model , astro - ph/0208297 preprint lahav , o. , bridle , l. , percival , w.j . , et al .2002 , astro - ph/0112162 preprint lucy , l. b. 1977 , , 82 , 1013 mihos , j. c. & hernquist , l. 1994 , , 425 , l13 mihos , j. c. & hernquist , l. 1996 , , 464 , 641 monaghan , j. j. 1992 , , 30 , 543 monaghan j. j. & gingold r. a. 1983 , j. comput .phys . , 52 , 374 monaghan , j. j. & lattanzio , j. c. 1985 , , 149 , 135 navarro , j. f. & white , s. d. m. 1993 , , 265 , 271 nelson , r. p. & papaloizou , j. c. b. 1993 , , 265 , 905 nelson , r. p. & papaloizou , j. c. b. 1994 , , 270 , 1 netterfield , c. b. et al .2002 , , 571 , 604 norman , m. l. & bryan , g.l . 1998 , in proceedings of the international conference of numerical astrophysics , ed .miyama et al .( boston : kluwer academic ) , p. 19padoan , p. , jimenez , r. , nordlund , a. , & boldyrev , s. 2003 , astro - ph/0301026 preprint padoan , p. , juvela , m. , goodman , a. & nordlund , a. 2001 , , 553 , 227 pearce , f. r. & couchman , h. m. p. 1997, new astronomy , 2 , 411 rasio , f. a. & shapiro , s. l. 1991 , , 377 , 559 siz a. , domnguez - tenreiro r. , & serna , a. 2002 , , 281 , 309 siz a. , domnguez - tenreiro r. , & serna , a. 2003 , , in press siz , a. , domnguez - tenreiro , r. , tissera , p. b. , & courteau , s. 2001 , , 325 , 119 serna a. , alimi j .-m . 
, & chize j .- p .1996 , , 461 , 884 silk , j. 2001 , , 324 , 313 sod , g. a. 1978 , j. comput .phys . , 27 , 1 sommer - larsen j. , & dolgov a. 2001 , , 551 , 608 spergel d. n. et al .2003 , astro - ph/0302209 preprint springel , v. & hernquist , l. 2002 , , 333 , 649 springel , v. & hernquist , l. 2002b , astro - ph/0206393 preprint springel , v. , yoshida , n. & white , s. 2001 , new astronomy , 6 , 79 steinmetz , m. 1996 , , 278 , 1005 steinmetz , m. & navarro , j. 1999 , , 513 , 555 teyssier , r. 2002 , a & a , in press thacker , r. j. & couchman , h. m. p. 2000 , , 545 , 728 thomas p. a. 1987 , ph.d .thesis , cambridge univ .thomas p. a. & couchman h. m. p. 1992, , 257 , 11 tissera p. b. , lambas d. g. , & abadi m. g. , 1997 , , 286 , 384 tissera p. b. , & domnguez - tenreiro , r. 1998 , , 297 , 177 tissera p. b. 2001 , , 540 , 384 tissera , p.b ., domnguez - tenreiro , r. , scannapieco , c. , & siz , a. 2002 , mnras , 333 , 327 tucker w. h. 1975 , radiation precesses in astrophysics ( new york , wiley ) vzquez - semadeni , e. , ostriker , e. c. , passot , t. , gammie , c. f. , & stone , j. m. 2000 , protostars & planets iv , ed . v. mannings et al .( tucson : univ . of arizona press ) , in press vedel , h. , hellsten , u. , & sommer - larsen , j. 1999 , , 271 , 743 verheijen , m. , bershady , m. , & andersen d. 2002 , mass galaxies at low and high redshift ( eso workshop , venice ) pp .24 - 26 wada , k. & norman , c. a. 2001 , , 547 , 172 yepes , g. , kates , r. , khokhlov , a. , & klypin , a. 1997 , , 284 , 235
we describe deva , a multistep ap3m - like - sph code particularly designed to study galaxy formation and evolution in connection with the global cosmological model . this code uses a formulation of sph equations which ensures both energy and entropy conservation by including the so - called terms . particular attention has also been paid to angular momentum conservation and to the accuracy of our code . we find that , in order to avoid unphysical solutions , our code requires that cooling processes must be implemented in a non - multistep way . we detail various cosmological simulations which have been performed to test our code and also to study the influence of the terms . our results indicate that such correction terms have a non - negligible effect on some cosmological simulations , especially on high density regions associated either to shock fronts or central cores of collapsed objects . moreover , they suggest that codes paying a particular attention to the implementation of conservation laws of physics at the scales of interest , can attain good accuracy levels in conservation laws with limited computational resources .
the current decade has witnessed a significant amount of effort towards discovering the networks of protein - protein interactions ( interactomes ) in a number of model organisms .these efforts resulted in hundreds of thousands of individual interactions between pairs of proteins being reported .repositories such as the biogrid , intact , mint , dip , bind and hprd have been established to store and distribute sets of interactions collected from high - throughput scans as well as from curation of individual publications .depending on its goals , each interaction database , maintained by a different team of curators located around the world includes and annotates interactions differently .consequently , while many interactions of specific interactomes are shared among databases , no one contains the complete known interactome for any model organism . constructing a full - coverage protein - protein interaction networktherefore requires retrieving and combining entries from many databases .this task is facilitated by several initiatives developed by the proteomics community over the years .the imex consortium was formed to facilitate interchange of information between different primary databases by using a standardized format .the proteomics standards initiative molecular interaction ( psi - mi ) format allows a standard way to represent protein interaction information .one of its salient features is the controlled vocabulary of terms that can be used to describe various facets of a protein - protein interaction including source database , interaction detection method , cellular and experimental roles of interacting proteins and others .the psi - mi vocabulary is organized as an ontology , a directed acyclic graph ( dag ) , where nodes correspond to terms and links to relations between terms .this enables the terms to be related in an efficient and algorithm - friendly manner .consistently annotated datasets are useful for development and assessment of interaction prediction tools .furthermore , such datasets also form the basis of interaction networks , for which numerous analysis tools have been developed . depending on biological aims of a tool ,different entities ( nodes ) and potentially weighted interactions ( edges ) may be preferred .the chance of conflicting predictions from different tools can be reduced by starting from a consistently annotated dataset that faithfully represents all available evidences . to maintain a coherent development of biological understanding ,it is indispensable to keep the reference datasets up - to - date .we examined several primary interaction databases with the aim of constructing non - redundant , consistently annotated and up - to - date reference datasets of physical interactions for several model organisms .unfortunately , the common standard format used by most primary databases still does not allow direct compilation of full non - redundant interactomes .this mainly results from the fact that different primary databases may use different identifiers for interacting proteins and different conventions for representing and annotating each interaction . 
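the psi-mi controlled vocabulary mentioned above is distributed as a machine-readable obo file, so the dag of terms can be loaded with a few lines of code. the sketch below (our own function names; only the is_a relations are read) builds a child-to-parents mapping that later sketches in this section reuse.

```python
import collections

def load_psimi_parents(obo_path):
    """Parse 'is_a' relations from a psi-mi .obo file into a child -> set(parents) map (sketch)."""
    parents = collections.defaultdict(set)
    current = None
    with open(obo_path) as handle:
        for line in handle:
            line = line.strip()
            if line == "[Term]":
                current = None                      # start of a new term stanza
            elif line.startswith("id: MI:"):
                current = line[len("id: "):]        # e.g. "MI:0004"
            elif line.startswith("is_a:") and current is not None:
                parent = line.split()[1]            # "is_a: MI:0400 ! affinity technology"
                parents[current].add(parent)
    return parents

# parents = load_psimi_parents("psi-mi.obo")   # the path to the ontology file is an assumption
```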
combining interaction data from bind, biogrid, corum, dip, hprd, intact, mint, mpact, mppi and ophid, the irefindex database represents a significant advance towards a complete and consistent set of all publicly available protein interactions. apart from being comprehensive and relatively up-to-date, the main contribution of irefindex is in addressing the problem of protein identifiers by mapping the sequence of every interactant into a unique identifier that can be used to compare interactants from different source databases. in a further ` canonicalization ' procedure, different isoforms of the same protein are mapped to the same canonical identifier. by adhering to the psi-mi vocabulary and file format, irefindex provides largely standardized annotations for interactants and interactions. construction of irefindex led to the development of irefweb, a web interface for interactive access to irefindex data. irefweb allows an easy visualization of evidence for interactions associated with user-selected proteins or publications. recently, the authors of irefindex and irefweb published a detailed analysis of agreement between curated interactions within irefindex that are shared between major databases. however, irefindex aims to maintain all information from the original sources; consequently, there will be features one desires to have that may not fit well within its scope. for example, one may wish to treat interactions arising from enzymatic reactions as directed and to be able to selectively include / exclude certain types of reactions such as acetylation. in many cases, the information about post-translational modifications is available directly from source databases, but is not integrated into irefindex. another issue that propagates into irefindex from source databases has to do with protein complexes. some databases represent experimentally observed complexes as interactions with more than two participants, while others expand them into binary interactions using the spoke or matrix model. it was recently observed that this different representation of complexes is responsible for a significant number of disagreements between major databases curating the same publication. from our earlier work, we found that such expanded complexes may lead to nodes with very high degree and often introduce undesirable shortcuts in networks. to fairly treat the information provided by protein complexes, without exaggerating it, it is preferable to replace the expanded interactions, from either the spoke or the matrix model, with a flat list of complex members. additionally, we discovered that the mapping of each protein to a canonical group by irefindex would sometimes place protein sequences clearly originating from the same gene ( for example, differing in one or two amino acids ) into different canonical groups.
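to make the two expansion conventions discussed above concrete, here is a minimal illustration of how an n-member complex is turned into binary interactions under the spoke and matrix models; the member names are purely illustrative.

```python
from itertools import combinations

def spoke_expansion(members, bait):
    """Spoke model: the bait is paired with every other member of the complex."""
    return [(bait, prey) for prey in members if prey != bait]

def matrix_expansion(members):
    """Matrix model: every unordered pair of members becomes a binary interaction."""
    return list(combinations(members, 2))

complex_members = ["A", "B", "C", "D"]
print(spoke_expansion(complex_members, bait="A"))   # 3 spokes
print(matrix_expansion(complex_members))            # 6 pairs
```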
to achieve the goal of constructing non-redundant, consistently annotated and up-to-date reference datasets, we developed a script, called ppitrim, that processes irefindex and produces a consolidated dataset of physical protein-protein interactions within a single organism. ppitrim is written in the python programming language. it takes as input a dataset in irefindex psi-mi tab 2.6 format, with 54 tab-delimited columns ( 36 standard and 18 added by irefindex ). after three major processing steps, it outputs a consolidated dataset, in psi-mi tab 2.6 format, containing only the 36 standard columns. the three processing steps are: ( i ) mapping all interactants to ncbi gene ids and removing all undesired raw interactions; ( ii ) deflating potentially expanded complexes; and ( iii ) consolidating redundant source interactions and resolving annotation conflicts. at each step, ppitrim downloads the files it requires from the public repositories and writes its intermediate results as temporary files. the former is removed because it contains either computationally predicted interactions or interactions verified from the literature using text mining ( i.e. without human curation ). as a first step, the script seeks to map each interactant to an ncbi entrez gene identifier. for most interactants, it uses the mapping already provided by irefindex. in the cases where irefindex provides only a uniprot knowledge base accession, the script attempts to obtain a gene id: the accession is used to query the ncbi entrez gene database for a matching gene record using an eutils interface. if a single unambiguous match is found, the record's gene id is used for the interactant. every mapped gene id is checked against the list of obsolete gene ids, which are no longer considered to have a protein product existing _ in vivo_. the interactants that can not be mapped to valid ( non-obsolete ) gene ids are removed along with all raw interactions they participate in. after assigning gene ids, the script considers the psi-mi ontology terms associated with the interaction detection method, the interaction type and the interactants' biological roles. using the full psi-mi ontology file in open biomedical ontology ( obo ) format, it replaces any non-standard terms in these fields ( labeled mi:0000 ) with the corresponding valid psi-mi ontology terms. the terms marked as obsolete in the psi-mi obo file are exchanged for their recommended replacements. the only exceptions are the interaction detection method terms for hprd ` in vitro ' ( mi:0492, translated from the mi:0045 label in irefindex ) and ` in vivo ' ( mi:0493 ) interactions, which are kept throughout the entire processing. source interactions annotated with a descendant of the term mi:0415 ( enzymatic study ) as their detection method or with a descendant of the term mi:0414 ( enzymatic reaction ) as their interaction type are classified as candidate biochemical reactions. this category also includes any interactions ( including those with more than two interactants ) where one of the interactants has a biological role of mi:0501 ( enzyme ) or mi:0502 ( enzyme target ).
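a minimal sketch of the descendant test behind this classification, assuming the psi-mi dag has been loaded into a child-to-parents mapping as in the earlier sketch; the function names and the exact encoding of the rule are ours, not ppitrim's.

```python
def is_descendant(term, ancestor, parents):
    """True if `term` equals `ancestor` or lies below it in the PSI-MI DAG."""
    stack, seen = [term], set()
    while stack:
        t = stack.pop()
        if t == ancestor:
            return True
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, ()))
    return False

def candidate_biochemical_reaction(detection_method, interaction_type, roles, parents):
    """Mirror of the rule described above for classifying candidate biochemical reactions."""
    return (is_descendant(detection_method, "MI:0415", parents) or
            is_descendant(interaction_type, "MI:0414", parents) or
            "MI:0501" in roles or "MI:0502" in roles)
```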
in the recent months ,the biogrid database has started to provide additional information about the post - translational modifications associated with the ` biochemical activity ' interactions , such as phosphorylation , ubiquitination etc .this information is available from the biogrid datasets in the new tab2 format but is not yet reflected in the psi - mi terms for interaction type provided in the psi - mi 2.5 format or in irefindex . since the post - translational modifications annotated by the biogrid can be directly matched to standard psi - mi terms , the script downloads the most recent biogrid dataset in tab2 format , extracts this information and assigns appropriate psi - mi terms for interaction type to the candidate biochemical reactions from irefindex that originate from the biogrid .any source interaction not classified as candidate biochemical reaction is considered for assignment to the candidate complex categories .this category includes all true complexes ( having edge type ` c ' in irefindex ) , interactions having a descendant of mi:0004 ( affinity chromatography ) as the detection method term or mi:0403 ( colocalization ) as the interaction type , as well as the interactions corresponding to the biogrid s ` co - purification ' category . interactions with interaction type mi:0407 ( direct interaction ) are never considered candidates for complexes . all source interactions not falling into candidate biochemical reaction or candidate complex categories are considered ordinary binary physical interactions . the phase ii script attempts to detect spoke - expanded complexes from ` candidate complex ' interactions and deflate them into interactions with multiple interactants .first , all candidate interactions are grouped according to their publication ( pubmed i d ) , source database , detection method and interaction type .each group of source interactions is turned into a graph and considered separately for consolidation into one or more complexes .when a portion of a group of interactions is deflated , we replace these source interactions by a complex containing all their participants .two procedures are used for consolidation : pattern detection and template matching ( fig .[ fig : complexes ] ) . pattern detection procedure is used only for the interactions from the biogrid . unlike the interactions from the dip ,those interactions are inherently directed since one protein is always labeled as bait and other as prey ( in many cases this labeling is unrelated to the actual experimental roles of the proteins ) . the pattern indicating a possible spoke - expanded complex consists of a single bait being linked to many preys . 
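a minimal sketch of this deflation step: candidate interactions from one group are collected by bait and, once a bait reaches a prey-count threshold ( the thresholds actually used for the different biogrid categories are given just below ), the spokes are replaced by a single record listing all participants. the function and variable names are ours, not ppitrim's.

```python
import collections

def deflate_spokes(pairs, min_preys):
    """Collapse bait -> prey spokes into multi-participant complexes (sketch).

    pairs     : (bait, prey) tuples from one group (same publication, source database,
                detection method and interaction type)
    min_preys : minimum number of preys for a bait to be treated as an expanded complex
    Returns (complexes, leftover_pairs).
    """
    by_bait = collections.defaultdict(set)
    for bait, prey in pairs:
        by_bait[bait].add(prey)
    complexes, leftovers = [], []
    for bait, preys in by_bait.items():
        if len(preys) >= min_preys:
            complexes.append(sorted({bait} | preys))
        else:
            leftovers.extend((bait, p) for p in sorted(preys))
    return complexes, leftovers
```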
since all interactions in the biogrid s co - purification and co - fractionation categories arise from complexes that are spoke - expanded using an arbitrary protein as a bait ( biogrid administration team , private communication ) , a bait linked to two or more preys can in that case always be considered an expanded complex and deflated .such deflated complexes are assigned the edge type code ` g ' .the remainder of the complex candidate interactions from the biogrid were obtained by affinity chromatography and are , in most cases , also derived from complexes .here we adopted a heuristic that a bait linked to at least three preys can be considered a complex .clearly , some experiments involve a single bait being used with many independent preys , in which case this procedure would generate a false complex .therefore , complexes generated in this way are assigned a different edge type code ( ` a ' ) and the user is able to specify specific publications to be excluded from consideration as well as the maximal size of the complex .the second procedure is based on matching each group of candidate interactions to the complexes indicated by other databases ( templates ) , mostly from intact , mint , dip and bind . in this case , the script checks for each protein in the group whether it , together with all its neighbors , is a superset of a template complex .if so , all the candidate interactions between the proteins within the complex are deflated .the neighborhood graph is undirected for all source databases except the biogrid .the new complexes generated in this way are given the code ` r ' .the scripts also attempts to use complexes generated from the biogrid s interactions through a pattern detection procedure as templates , in which case the newly generated complexes have the code ` n ' .any source interactions that can not be deflated into complexes are retained for phase iii .the dag structure of an ontology naturally induces a partial order between the terms : for two terms and , we say that refines ( is smaller , precedes ) if there exists a directed path in the dag from to .two psi - mi terms can be considered compatible if they are comparable , that is , one refines the other .every nonempty collection of terms can be uniquely split into disjoint sets , such that every has a single maximal element ( an element comparable to and not smaller than any other member ) and contains all members of comparable to its maximal element .every subcollection is then consistent because there exists at least one term within it that can describe all its members , while any two members from different subcollections are incomparable .finest consistent term _ of a subcollection is the smallest member of that is comparable to all its members ( it can also be defined as the smallest member of the intersection of the transitive closures of all the members of . ) .if is a total order , where all members are pairwise comparable , the finest consistent term is the minimal term .on the other hand , the minimal term need not exist ( fig .[ fig : example1 ] ) , so that the finest consistent term is higher in the hierarchy and represents the most specific annotation that can be assigned to as a whole . to produce consolidated interactions from a single cluster ,each of its members ( interactions ) is identified with its psi - mi term for information detection method . for every cluster member , the set of all other members with compatible annotations (` compatible set ' ) is computed . 
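the comparability test and the finest consistent term can be sketched directly from these definitions, again assuming the child-to-parents mapping of the psi-mi dag built earlier; upward closures stand in for the transitive closures mentioned above, and the names are ours.

```python
def upward_closure(term, parents):
    """The term itself plus every ancestor reachable through is_a links."""
    out, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(parents.get(t, ()))
    return out

def comparable(a, b, parents):
    """Two terms are compatible if one refines the other."""
    return a in upward_closure(b, parents) or b in upward_closure(a, parents)

def finest_consistent_term(terms, parents):
    """Smallest member of the intersection of the upward closures of all the terms."""
    common = set.intersection(*(upward_closure(t, parents) for t in terms))
    for candidate in common:
        # the finest term refines (or equals) every other member of the intersection
        if all(other in upward_closure(candidate, parents) for other in common):
            return candidate
    return None  # no single finest term exists for this collection
```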
as a special case, the following detection method tags are treated as smaller than any other: ` unspecified method ' ( mi:0686 ), ` in vivo ' and ` in vitro ' ( the latter two are from hprd only ). in this way, non-specific annotations are considered compatible with all other, more specific evidence. compatible sets are further grouped according to their maximal elements. within each group, the union of the compatible sets produces a subcluster. the finest consistent term for each subcluster is found by considering all psi-mi terms on the paths from the subcluster members to its maximum; the search is not restricted to those terms that are within the subcluster ( fig. [ fig : example1 ] ). this definition takes into account that a source database may report an interaction several times for the same publication, using the same or different interaction detection methods. if two databases annotate the same interaction using incompatible terms, this is most likely due to an error or a specific disagreement about the appropriate label, rather than each database reporting a different experiment from the same publication. to test ppitrim, we applied it to the yeast ( _ s. cerevisiae _ ), human ( _ h. sapiens _ ) and fruitfly ( _ d. melanogaster _ ) datasets from irefindex release 8.0-beta, dated jan 19th 2011. when processing the yeast dataset, we accounted for two special cases. first, we specifically removed the genetic interactions reported by because they were not labeled as genetic for all source databases. second, we excluded the dataset by from phase ii and retained all its interactions as binary undirected. the results of applying ppitrim to process irefindex 8.0 are shown in tables [ tbl : mapping2 ] [ tbl : consolidate ]. we chose to standardize proteins using ncbi gene identifiers rather than the irefindex-provided canonical ids ( crogids ) for several reasons. ncbi gene records not only associate each gene with a set of reference sequences, but also include a wealth of additional data ( e.g. a list of synonyms ) and links to other databases such as gene ontology that are important when using the interaction dataset in practice. in addition, gene records are regularly updated and their status evaluated based on new evidence. thus, a gene record may be split into several new records or marked as obsolete if it corresponds to an orf that is known not to produce a protein. for network analysis applications, it is desirable that only the proteins actually expressed in the cell are represented in the network, and hence the gene status provided by ncbi gene is a valuable filtering criterion. however, crogids do have one advantage over ncbi gene ids in that they are protein-based and hence identical protein products of several genes ( like histones ) are clustered together. around 10% of crogids could not be mapped to gene ids even after processing with the ppitrim algorithms. a few interactors ( supplementary table 5 ) have only pdb accessions as their primary ids since their interactions were derived from crystal structures.
in such cases , oftenonly partial sequences of participating proteins are available .these partial sequences can not be fully matched to any uniprot or refseq record and hence are assigned a separate i d .hence , an improvement for our procedure , that would account for this case as well as for those unmapped proteins that differ from canonical sequences only by few amino acids , would be to use direct sequence comparison to find the closest valid reference sequence .this task may not be technically difficult ( a similar procedure was applied by to construct protein databases for mass spectrometry data analysis ) but is beyond the scope of ppitrim , which is intended as a relatively short standalone script . in our opinion , such additional mappings would best be performed at the level of reference sequence databases such as uniprot or refseq , which contain curator expertise to resolve ambiguous cases .protein complexes obtained through chromatography techniques provide information complementary to direct binary interactions . while it is often difficult to determine the exact layout of within - complex pairwise interactions , an identification of an association of several proteins using mass spectroscopy is an evidence for _ in vivo _ existence of that association .unfortunately , in spite of its great importance , the currently available information within irefindex is deficient because of different treatments of complexes by different source databases .our results ( table [ tbl : collapsing ] ) show that the apparently inflated complexity of interaction datasets can be substantially reduced by attempting to collapse spoke - expanded complexes .we feel that the benefits from reduction of interactome complexity outweigh the disadvantages from potentially over deflating interactions .the best way to solve would be at the level of source databases ( biogrid in particular ) , by reexamining the original publications .our complexes from the ` r ' category , where deflated complexes fully agree with an annotated complex from a different database , could serve as a guide in this case .overall , our processing significantly reduced the number of interactions within each of the three datasets considered ( table [ tbl : consolidate ] ) .this indicates a significant redundancy , particularly for protein complexes , original and deflated ( compare table [ tbl : collapsing ] with table [ tbl : consolidate ] ) , and for binary interactions .the directed interactions ( biochemical reactions ) are relatively rarer and largely non - redundant at this stage . given their importance in elucidating biological function , the directed interactions are expected to be discovered more fully with time . upon closer examination ( table [ tbl : conflicts ] ) , it can be seen that most common conflicts arise as instances of few specific labeling disagreements between databases . in many cases ,such disagreements arise from using different sub - terms of affinity chromatography ( see fig . [fig : example1 ] ) and can be resolved by assigning a more general term consistent with both conflicting terms . in many other cases ,the conflicts are due to biogrid internally using a more restricted detection method vocabulary than the imex databases ( dip , intact and mint ) . 
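returning to the unmapped interactors discussed at the start of this section, the improvement suggested there — matching an unmapped or partial sequence to the closest valid reference sequence — could look roughly like the sketch below; difflib's similarity ratio is only a stand-in for a proper alignment score, and the reference dictionary and identifiers are hypothetical.

```python
import difflib

def closest_reference(query_seq, reference_seqs):
    """Return the identifier of the reference sequence most similar to `query_seq` (sketch).

    reference_seqs : hypothetical dict mapping reference ids to amino-acid strings
    """
    best_id, best_score = None, 0.0
    for ref_id, ref_seq in reference_seqs.items():
        score = difflib.SequenceMatcher(None, query_seq, ref_seq, autojunk=False).ratio()
        if score > best_score:
            best_id, best_score = ref_id, score
    return best_id, best_score

# toy example: the second reference differs from the query by a single residue
print(closest_reference("MKTAYIAKQR", {"REF_1": "MKTAYIAKQR", "REF_2": "MKTAYLAKQR"}))
```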
the ppitrim algorithms work best if accurate and fully populated fields for interaction detection method , publication and interaction type are available in its input dataset .this requirement is mostly fulfilled .nevertheless , we have noticed two minor inconsistencies .the first , which will be fixed in a subsequent release of irefindex ( ian donaldson , private communication ) , involves the psi - mi labels for interaction detection method for corum interactions and complexes .these are missing from irefindex although they are present in the original corum source files .the second issue concerns missing or invalid pubmed ids for certain interactions .we found that a number of interactions with missing pubmed ids come from mint . upon inspection of the original mint files, we discovered that in many cases mint supplies a digital object identifier ( doi ) for a publication as its identifier instead of a pubmed i d ( although the corresponding pubmed i d can be obtained from the mint web interface ) .to ensure consistency with other source databases within irefindex , it would be desirable to have the pubmed ids available for these interactions as well . in this paper, we have identified the tasks needed for using combined interaction datasets provided by irefindex as a basis for construction of reference networks and developed a script to process them into consistent consolidated datasets .we see ppitrim as answering a need for a consolidated database and hope that most of the issues that required processing will be eventually fixed in upstream databases and distributed through imex consortium . at this stagewe have not addressed the issue of quality of interactions although such information is available in some databases for some publications . utilizing the quality information in consolidating datasetsdemands a universal data - quality measure that is not yet existent .37 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 de las rivas , j. and fontanillo , c. protein - protein interactions essentials : key concepts to building and analyzing interactome networks ._ plos comput biol _ , 60 ( 6):0 e1000807 , 2010 .stark , c. , breitkreutz , b .-j . , chatr - aryamontri , a. _ et al . _ the biogrid interaction database : 2011 update . _nucleic acids res _ , 390 ( database issue):0 d698704 , 2011 .aranda , b. , achuthan , p. , alam - faruque , y. _ et al ._ the intact molecular interaction database in 2010 . _ nucleic acids res _ , 380 ( database issue):0 d52531 , 2010 .ceol , a. , chatr - aryamontri , a. , licata , l. _ et al ._ , the molecular interaction database : 2009 update ._ nucleic acids res _, 380 ( database issue):0 d5329 , 2010 .salwinski , l. , miller , c. s. , smith , a. j. _ et al . _ the database of interacting proteins : 2004 update . _ nucleic acids res _ , 320 ( database issue):0 d44951 , 2004 .alfarano , c. , andrade , c. e. , anthony , k. _ et al . _ the biomolecular interaction network database and related tools 2005 update . _ nucleic acids res _ , 330 ( database issue):0 d41824 , 2005 .isserlin , r. , el - badrawi , r. a. , and bader , g. d. the biomolecular interaction network database in psi - mi 2.5 . _ database ( oxford ) _ , 2011:0 baq037 , 2011 .keshava prasad , t. s. , goel , r. , kandasamy , k. _ et al ._ 2009 update . _ nucleic acids res _ , 370 ( database issue):0 d76772 , 2009 .cusick , m. e. , yu , h. , smolyar , a. _ et al . _ literature - curated protein interaction datasets . _ nat methods _ , 60 (1):0 3946 , 2009 . orchard , s. , kerrien , s. , jones , p. 
_ et al ._ submit your interaction data the imex way : a step by step guide to trouble - free deposition ._ proteomics _ , 7 suppl 1:0 2834 , 2007 .kerrien , s. , orchard , s. , montecchi - palazzi , l. _ et al ._ broadening the horizon level 2.5 of the hupo - psi format for molecular interactions ._ bmc biol _ , 5:0 44 , 2007 .markowetz , f. and spang , r. inferring cellular networks a review ._ bmc bioinformatics _ , 8 suppl 6:0 s5 , 2007 .gomez , s. m. , choi , k. , and wu , y. prediction of protein - protein interaction networks ._ curr protoc bioinformatics _ , chapter 8:0 unit 8.2 , 2008 .kanaan , s. p. , huang , c. , wuchty , s. _ et al ._ inferring protein - protein interactions from multiple protein domain combinations ._ methods mol biol _ , 541:0 4359 , 2009 .lewis , a. c. f. , saeed , r. , and deane , c. m. predicting protein - protein interactions in the context of protein evolution ._ mol biosyst _ , 60 ( 1):0 5564 , 2010 .chautard , e. , thierry - mieg , n. , and ricard - blum , s. interaction networks : from protein functions to drug discovery . a review ._ pathol biol ( paris ) _ , 570 ( 4):0 32433 , 2009 .przytycka , t. m. , singh , m. , and slonim , d. k. toward the dynamic interactome : it s about time ._ brief bioinform _, 110 ( 1):0 1529 , 2010 .ruepp , a. , waegele , b. , lechner , m. _ et al ._ : the comprehensive resource of mammalian protein complexes2009 . _ nucleic acids res _ , 380 ( database issue):0 d497501 , 2010 .gldener , u. , mnsterktter , m. , oesterheld , m. _ et al ._ : the mips protein interaction resource on yeast . _ nucleic acids res _ , 340 ( database issue):0 d43641 , 2006 .pagel , p. , kovac , s. , oesterheld , m. _ et al . _ the mips mammalian protein - protein interaction database ._ bioinformatics _ , 210 ( 6):0 8324 , 2005 .brown , k. r. and jurisica , i. online predicted human interaction database ._ bioinformatics _ , 210 ( 9):0 207682 , 2005 .razick , s. , magklaras , g. , and donaldson , i. m. : a consolidated protein interaction database with provenance ._ bmc bioinformatics _, 9:0 405 , 2008 .turner , b. , razick , s. , turinsky , a. l. _ et al ._ : interactive analysis of consolidated protein interaction data and their supporting evidence . _ database ( oxford ) _ , 2010:0 baq023 , 2010 .turinsky , a. l. , razick , s. , turner , b. _ et al ._ literature curation of protein interactions : measuring agreement across major public databases ._ database ( oxford ) _, 2010:0 baq026 , 2010 .stojmirovi , a. and yu , y .- k . : analyzing information flow in protein networks ._ bioinformatics _ , 250 ( 18):0 24479 , 2009 . maglott , d. , ostell , j. , pruitt , k. d. _ et al ._ : gene - centered information at ncbi .nucleic acids res _ , 390 ( database issue):0 d527 , 2011 . .the universal protein resource ( uniprot ) in 2010 ._ nucleic acids res _ , 380 ( database issue):0 d1428 , 2010 .smith , b. , ashburner , m. , rosse , c. _ et al . _ the obo foundry : coordinated evolution of ontologies to support biomedical data integration ._ nat biotechnol _ , 250 ( 11):0 12515 , 2007 .tong , a. h. y. , lesage , g. , bader , g. d. _ et al ._ global mapping of the yeast genetic interaction network ._ science _ , 3030 ( 5659):0 80813 , 2004 .collins , s. r. , kemmeren , p. , zhao , x .- c ._ et al ._ toward a comprehensive atlas of the physical interactome of saccharomyces cerevisiae ._ mol cell proteomics _ , 60 ( 3):0 43950 , 2007 .gavin , a .- c . , aloy , p. , grandi , p. 
_ et al ._ proteome survey reveals modularity of the yeast cell machinery ._ nature _ , 4400 ( 7084):0 6316 , 2006 .krogan , n. j. , cagney , g. , yu , h. _ et al . _ global landscape of protein complexes in the yeast saccharomyces cerevisiae ._ nature _ , 4400 ( 7084):0 63743 , 2006 .ashburner , m. , ball , c. a. , blake , j. a. _ et al ._ gene ontology : tool for the unification of biology .the gene ontology consortium ._ nat genet _ , 25:0 2529 , 2000 .alves , g. , ogurtsov , a. y. , and yu , y .- k .: mass - spectrometry based peptide identification web server with knowledge integration ._ bmc genomics _ , 9:0 505 , 2008 .hannich , j. t. , lewis , a. , kroetz , m. b. _ et al ._ defining the sumo - modified proteome by multiple approaches in saccharomyces cerevisiae ._ j biol chem _ , 2800 ( 6):0 410210 , 2005 .croft , d. , okelly , g. , wu , g. _ et al ._ reactome : a database of reactions , pathways and biological processes . _ nucleic acids res _ , 390 ( database issue):0 d6917 , 2011 .blaiseau , p. l. and thomas , d. multiple transcriptional activation complexes tether the yeast activator met4 to dna . _ embo j _ , 170 ( 21):0 632736 , 1998 .this work was supported by the intramural research program of the national library of medicine at the national institutes of health .we thank dr .donaldson for his critical reading of this manuscript and for providing us with the proprietary version of irefindex 7.0 dataset , which was used for initial development of ppitrim .rlp5.0cm > p5.4 cm column & short name & description & example + 1 & uida & smallest gene i d of the interactor a & entrezgene / locuslink:854647 + 2 & uidb & smallest gene i d of the interactor b & entrezgene / locuslink:855136 + 3 & alta & all gene ids of the interactor a & entrezgene / locuslink:854647 + 4 & altb & all gene ids of the interactor b & entrezgene / locuslink:855136 + 5 & aliasa & all canonical gene symbols and integer crogids of interactor a & entrezgene / locuslink : bnr1| icrogid:2105284 + 6 & aliasb & all canonical gene symbols and integer crogids of interactor b & entrezgene / locuslink : myo5| icrogid:3144798 + 7 & method & psi - mi term for interaction detection method & mi:0018(two hybrid ) + 8 & author & first author name(s ) of the publication in which this interaction has been shown & tong ah [ 2002]|tong-2002a-3 + 9 & pmids & pubmed id(s ) of the publication in which this interaction has been shown & pubmed:11743162 + 10 & taxa & ncbi taxonomy identifier for interactor a & taxid:4932(saccharomyces cerevisiae ) + 11 & taxb & ncbi taxonomy identifier for interactor b & taxid:4932(saccharomyces cerevisiae ) + 12 & interactiontype & psi - mi term for interaction type & mi:0407(direct interaction ) + 13 & sourcedb & psi - mi terms for source databases & mi:0000(mpact)|mi:0463(grid)| mi:0465(dip)|mi:0469(intact ) + 14 & interactionidentifier & a list of interaction identifiers & ppitrim : tyugksok231dh3ynsi6gbczjcfe=| mpact:8233|dip : dip-11198e|grid:147506| intact : ebi-601565|intact : ebi-601728| irigid:288990|edgetype : x + 15 & confidence & a list of ppitrim confidence scores & maxsources:2|dmconsistency : full| conflicts : s3oaixt5ta4vvruso1rc1ta9krk= + 16 & expansion & either ` none ' for binary interactions or ` bipartite ' for subunits of complexes & none + 17 & biologicalrolea & psi - mi term(s ) for the biological role of interactor a & mi:0499(unspecified role ) + 18 & biologicalroleb & psi - mi term(s ) for the biological role of interactor b & mi:0499(unspecified role ) + 19 & 
experimentalrolea & psi - mi term(s ) for the experimental role of interactor a & mi:0496(bait)|mi:0498(prey)| mi:0499(unspecified role ) + 20 & experimentalroleb & psi - mi term(s ) for the experimental role of interactor b & mi:0496(bait)|mi:0498(prey)| mi:0499(unspecified role ) + 21 & interactortypea & psi - mi term for the type of interactor a ( either ` protein ' or ` protein complex ' ) & mi:0326(protein ) + 22 & interactortypeb & psi - mi term for the type of interactor b ( always ` protein ' ) & mi:0326(protein ) + 29 & hostorganismtaxid & ncbi taxonomy identifier for the host organism & taxid:4932(saccharomyces cerevisiae ) + 31 & creationdate & date when ppitrim was run & 2011/05/11 + 32 & updatedate & date when ppitrim was run & 2011/05/11 + 35 & checksuminteraction & ppitrim i d for an interaction & ppitrim : tyugksok231dh3ynsi6gbczjcfe= + 36 & negative & always ` false ' & false + . interaction type is also adjusted to mi:0403 as recommended in ` psi-mi.obo ` ; hprd terms are treated as a special case , see main text ; mppi interactions in the human dataset . [ cols="<,<,<,<,<",options="header " , ] 8avruhg76vkifn2czgicnzzr00y= & grid & 14759368 & cft2 , ysh1 , pta1 , mpe1 & part of mrna cleavage / polyadenylation complex ( 4/10 proteins ) .+ 9ys57j / gbrbolnmmimsveonoraa= & grid & 14759368 & nut1 , med7 , med4 , sin4 , srb4 & part of mediator complex .+ ju+eokq6iplh9djkrtgrluvt7vm= & grid , mint & 14759368 & ubp6 , rpt3 , rpn9 , rpt1 , rpn8 , rpn2 , rpn7 , rpn1 & part of proteasome .mint does not contain complexes from the original paper .+ httmhgipyfit2vftrz94uww0rsy= & grid & 16429126 & ioc3 , htb1 , hta2 , hhf2 , isw1 , kap114 , itc1 , rps4a , vps1 , nap1 , rpo31 , isw2 , tbf1 , bro1 , mot1 & part of complex # 99 .+ lnnzfypgshcg7zkkynu6+fsk2eu= & grid & 16429126 & psk1 , nth1 , bmh2 , rtg2 , bmh1 & part of complex # 147 ( two core proteins plus three attachments ) .+ s2i6vrjfmwc6rkkm+oyxwkcg9yq= & grid & 16429126 & rpl4b , mnn10 , mnn11 , hoc1 , mnn9 , anp1 & core complex ( # 111 mannan polymerase ii ) + one attachment protein ( rpl4b ) .+ 1frmaapl2ruoqq202yujg55mafo= & grid , mint & 16554755 & rsm24 , rsm28 , mrps5 , mrp13 , mrps35 , rsm27 , rsm7 , rsm25 , mrps17 , mrps12 , rsm19 , mrp4 & part of complex # 1 .+ 5tbkyomk / g1h3vaqmionuobhhmq= & grid , mint & 16554755 & cft2 , ysh1 , mpe1 , pap1 & part of complex # 18 .+ 9f2dvj2rdgecp53lhonwrmwq14a= & grid , mint & 16554755 & kap95 , rtt103 , vma2 , rai1 , rat1 , rpb2 , srp1 & true experimental association but not part of any derived complex .+ avawv51 + 6fqe3dquygd / xfyrxxe= & grid , mint & 16554755 & rrp42 , rrp45 , rrp6 , csl4 , mpp6 , rrp4 , lrp1 , ddi1 & part of complex # 19 .+ nolewovavmsfrqedksut / mldemc= & grid , mint & 16554755 & cdc3 , shs1 , cdc11 , cdc12 & part of complex # 121 . 
+ wa51i87lj1wgp / eef1ov / yvbw1y= & grid , mint & 16554755 & gtt2 , trx1 , crn1 , ssa3 , ipp1 , cmd1 , trx2 , tdh1 , rpl40b , cdc21 , oye2 & true experimental association but not part of any derived complex+ yn / hqxqvzob5hqrgpzvth28mgsy= & grid , mint & 16554755 & rrp43 , rrp42 , rrp45 , rrp40 , dis3 , rrp6 , rrp4 , lrp1 & part of complex # 19 .+ 1lrk+agi8hpgosagkhdznjwsvti= & grid & 20489023 & rtg3 , rtg2 , tor1 , tor2 , cka2 , myo2 , mks1 , kog1 & true experimental association .+ xwzvxejfgqjkcihjmqvf5gzhjjq= & dip , grid , mint & 20489023 & puf3 , sam1 , gcd6 , spt16 , mtc1 , ygk3 , lsm12 & true experimental association .+ 15vfqtoe5gxgnwpsy3ag0sq6a2u= & grid & 9891041 & ccr4 , hpr1 , paf1 , srb5 , gal11 & not a true complex .this is because of bad annotation of paf1srb5 interaction by the biogrid .completely opposite interpretation was given in the paper .+ d79idtwftaenrh8cq+c8cps389y= & grid & 10329679 & ypt1 , vps21 , ypt7 , gdi1 & true complex .this is the only experiment in the paper .+ ets4cgpheptqjb / fs5qxyzf0ke8= & grid & 11733989 & cdc39 , ccr4 , cdc36 , caf130 , caf40 , caf120 , pop2 , not5 , mot2 & true complex .caf120 is an unusual member that could almost be left out .+ 2koygdwzwywspn5mhk26gccc6lq= & grid & 14769921 & gbp2 , imd3 , tef1 , kem1 , ctk2 , ctk1 , ctk3 & true complex , except that tef1 should be tef2 .this is an error in the irefindex source file ; the biogrid website has the correct assignment . + kd07bbuf07sqy9np3d0lixss / ty= & grid & 15303280 & bud31 , rpl2b , prp19 , cdc13 , atp1 , rps4a , snu114 , mdh1 , mam33 , mrpl3 , mrpl17 , prp8 , prp22 , pab1 , brr2 & true association + zagz / izqker3/ntdlzpedad9cko= & grid & 16179952 & cdc40 , ufd1 , ssm4 , ubx2 & not a true complex , probably due to a typo in annotation .cdc40 can not be found anywhere in the paper and should most likely be cdc48 .+ rdu0dspan0qeadfsu5sv05ifihw= & grid & 16286007 & sin3 , rco1 , rpd3 , ume1 , eaf3 & true complex .+ vqbn3ddwtpgye9dzbatfnqzdfe0= & grid & 16615894 & vps36 , vps25 , vps28 , snf8 & vps28 binds the other three , which form a complex .+ lmdypan9kahbdaslws19x8k7kke= & grid & 20159987 & ubi4 , ufd2 , pex29 , ssm4 & biological association but indicated as ` not a stable complex ' in the paper .+ aakrh6qvahgxgvqhe399+faxpva= & grid & 20655618 & pex13 , pex10 , pex8 , pex12 & association is correct , although mutant strain was used to obtain this particular complex .+ > p12cmr consolidated terms & count mi:0018 ( two hybrid ) , mi:0045 ( experimental interaction detection ) , mi:0398 ( two hybrid pooling approach ) , mi:0399 ( two hybrid fragment pooling approach ) & 3959 mi:0090 ( protein complementation assay ) , mi:0111 ( dihydrofolate reductase reconstruction ) & 2612 mi:0090 ( protein complementation assay ) , mi:0112 ( ubiquitin reconstruction ) & 2077 mi:0004 ( affinity chromatography technology ) , mi:0676 ( tandem affinity purification ) & 1840 mi:0004 ( affinity chromatography technology ) , mi:0007 ( anti tag coimmunoprecipitation ) & 1408 mi:0018 ( two hybrid ) , mi:0045 ( experimental interaction detection ) , mi:0397 ( two hybrid array ) & 1231 mi:0018 ( two hybrid ) , mi:0045 ( experimental interaction detection ) & 954 mi:0018 ( two hybrid ) , mi:0397 ( two hybrid array ) & 914 mi:0045 ( experimental interaction detection ) , mi:0686 ( unspecified method ) & 628 mi:0004 ( affinity chromatography technology ) , mi:0019 ( coimmunoprecipitation ) & 598 mi:0018 ( two hybrid ) , mi:0398 ( two hybrid pooling approach ) & 506 mi:0004 ( affinity chromatography 
technology ) , mi:0007 ( anti tag coimmunoprecipitation ) , mi:0676 ( tandem affinity purification ) & 444 mi:0018 ( two hybrid ) , mi:0045 ( experimental interaction detection ) , mi:0686 ( unspecified method ) & 320 mi:0004 ( affinity chromatography technology ) , mi:0096 ( pull down ) & 217 mi:0415 ( enzymatic study ) , mi:0424 ( protein kinase assay ) & 192 mi:0045 ( experimental interaction detection ) , mi:0081 ( peptide array ) & 150 mi:0045 ( experimental interaction detection ) , mi:0676 ( tandem affinity purification ) & 120 mi:0492 ( in vitro ) , mi:0493 ( in vivo ) & 5739 mi:0018 ( two hybrid ) , mi:0398 ( two hybrid pooling approach ) & 5394 mi:0018 ( two hybrid ) , mi:0492 ( in vitro ) , mi:0493 ( in vivo ) & 2796 mi:0096 ( pull down ) , mi:0492 ( in vitro ) , mi:0493 ( in vivo ) & 2760 mi:0096 ( pull down ) , mi:0492 ( in vitro ) & 2134 mi:0018 ( two hybrid ) , mi:0492 ( in vitro ) & 1658 mi:0018 ( two hybrid ) , mi:0493 ( in vivo ) & 1193 mi:0018 ( two hybrid ) , mi:0397 ( two hybrid array ) & 1045 mi:0096 ( pull down ) , mi:0493 ( in vivo ) & 513 mi:0004 ( affinity chromatography technology ) , mi:0006 ( anti bait coimmunoprecipitation ) & 384 mi:0004 ( affinity chromatography technology ) , mi:0019 ( coimmunoprecipitation ) & 309 mi:0004 ( affinity chromatography technology ) , mi:0007 ( anti tag coimmunoprecipitation ) & 195 mi:0114 ( x - ray crystallography ) , mi:0492 ( in vitro ) & 166 mi:0004 ( affinity chromatography technology ) , mi:0096 ( pull down ) & 161 mi:0047 ( far western blotting ) , mi:0492 ( in vitro ) , mi:0493 ( in vivo ) & 106 mi:0018 ( two hybrid ) , mi:0398 ( two hybrid pooling approach ) & 17738 mi:0018 ( two hybrid ) , mi:0399 ( two hybrid fragment pooling approach ) & 1426
availability : www.ncbi.nlm.nih.gov/cbbresearch/yu/downloads/ppitrim.html
the stability analysis of stochastic switched linear systems has attracted a significant amount of attention in the last two decades . in particular , their mean stability , which requires that the power of the norm of the state variable converges to zero in expectation .some early results on the mean stability of switched linear systems with an independent and identically distributed ( i.i.d . )switching signal can be found in .the stability characterizations of switched linear systems driven by an extension of homogeneous markov processes called semi - markov processes are available in .it is known that homogeneous markov processes having certain irreducibility and recurrence properties and discrete - time i.i.d .stochastic processes are special cases of a more general class of stochastic processes called _ regenerative processes _ .firstly introduced by smith , regenerative processes have found applications especially in queuing systems and network reliability analysis . as we will see later in example [ eg : process ] ,regenerative processes are also suitable to describe a controlled system under periodic maintenance . despite the above facts ,as far as we are aware of , no effort has been made to investigate switched linear systems with a regenerative switching signal .the aim of this paper is to give the characterization of the mean stability of a switched linear system with a regenerative switching signal , which we call a _ regenerative switched linear system_. we show that , if the exponent of the mean stability is even or the system is positive , then the mean stability of the system is characterized by the spectral radius of a matrix . the matrix is obtained as the expected value of the lift of the transition matrix of the system .the proof makes use of a stability - preserving discretization of the system at the embedded renewal process of the underlying regenerative process .the characterization in particular generalizes well - known floquet s theorem for the stability analysis of linear time - periodic systems .this paper is organized as follows . after preparing necessary notations and conventions , in section [ sec : regsystems ] we recall the definition of regenerative processes and then introduce regenerative switched linear systems . then section [ sec : mainresult ] presents the main result of this paper , which is followed by an example .the proof of the main result is given in section [ sec : proof ] . then section [ sec : disc ] discusses the discrete - time case .let be a probability space .for an integrable random variable on its expected value is denoted by ] , is defined as the real vector of length with its elements being the lexicographically ordered monomials indexed by all the possible exponents such that , where .it holds that } { \lvert x^{[m ] } \rvert } = { \lvert x \rvert}^m.\ ] ] we then define } \in \mathbb{r}^{n_m\times n_m} ] for every . for any matrix it holds that } ( ab)^{[m ] } = a^{[m ] } b^{[m]}\ ] ] provided the product is well defined .we also define } \in \mathbb{r}^{n_m \times n_m} ] .it is easy to check that } \bigl(e^{at}\bigr)^{[m ] } = e^{a_{[m]}t}\ ] ] for every .let us first recall the definition of regenerative stochastic processes .throughout this paper we fix an underlying probability space . a stochastic process is called a _ regenerative process _ if there exists a random variable , called a regeneration epoch , such that the following statements hold .* is independent of ; * is stochastically equivalent to . 
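Before moving on, the lifted vectors and matrices introduced at the start of this section can be checked numerically. The sketch below is an illustration of ours, not code from the paper: it builds the degree-m lift x^[m] as the vector of degree-m monomials scaled by square roots of multinomial coefficients (so that |x^[m]| = |x|^m), recovers the induced matrix A^[m] from the identity (Ax)^[m] = A^[m] x^[m] by least squares on random samples, and confirms the multiplicativity (AB)^[m] = A^[m] B^[m]. The scaling convention and the least-squares recovery are our assumptions; the authors' closed-form construction may differ.

```python
import itertools
import math
import numpy as np

def exponents(n, m):
    """All multi-indices (k_1, ..., k_n) with k_1 + ... + k_n = m, in a fixed order."""
    return [k for k in itertools.product(range(m + 1), repeat=n) if sum(k) == m]

def lift(x, m):
    """Degree-m lift x^[m]: degree-m monomials of x, scaled so that |x^[m]| = |x|^m."""
    n = len(x)
    out = []
    for k in exponents(n, m):
        coef = math.factorial(m) / np.prod([math.factorial(ki) for ki in k])
        out.append(math.sqrt(coef) * np.prod([x[i] ** k[i] for i in range(n)]))
    return np.array(out)

def lifted_matrix(A, m, rng):
    """Recover A^[m] numerically from (A x)^[m] = A^[m] x^[m] by least squares."""
    n = A.shape[0]
    xs = [rng.standard_normal(n) for _ in range(3 * len(exponents(n, m)))]
    X = np.column_stack([lift(x, m) for x in xs])
    Y = np.column_stack([lift(A @ x, m) for x in xs])
    return Y @ np.linalg.pinv(X)

rng = np.random.default_rng(0)
x, m = rng.standard_normal(3), 2
print(np.isclose(np.linalg.norm(lift(x, m)), np.linalg.norm(x) ** m))   # True

A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
lhs = lifted_matrix(A @ B, m, rng)
rhs = lifted_matrix(A, m, rng) @ lifted_matrix(B, m, rng)
print(np.allclose(lhs, rhs, atol=1e-6))                                 # True
```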
in the following we quote some consequences of the above definition from . by repeatedly applying the definition , one can obtain a sequence of independent and identically distributed random variables called _cycle lengths _ , which can be used to break into independent and identically distributed _ cycles _ , , .then the stochastic process defined by is called the _ embedded renewal process _ of . throughout this paper , for the sake of convenience , we set and call as the embedded renewal process of .the next example presents a regenerative process that is not a homogeneous markov process is of a systems and control theoretical interest .[ eg : process ] consider a dynamical system with a failure - prone controller .let us model the controlled system as a switched system with the two modes . instead of assuming that the transition of the mode can be described by a homogeneous markov process ( see , e.g. , ) , let us consider the scenario when the controlled system is under periodic maintenance , which is commonly employed in the literature from reliability theory .let the stochastic process represent the times at which a maintenance is performed . for simplicitywe assume that with probability one for every , i.e. , that every maintenance repairs a failure with probability one within a negligible time period .we set .furthermore we assume that equals , where is a constant and are independent and identically distributed random variables . represents the designed period of the maintenance and models its random perturbation .we assume that the length of the time for which the process stays at mode after the reset at follows an exponential distribution with parameter .in other words we are assuming that , on any interval of a sufficiently small length , the probability of the occurrence of a failure is approximately equal to .then is clearly a regenerative process with a regeneration epoch and the embedded renewal process .notice that is not a markov process because the length of the time while mode is active depends on .then we introduce the class of switched linear systems studied in this paper .let be a regenerative process that can take values in a set .let be a family of real matrices indexed by .then we call the stochastic differential equation as a _ regenerative switched linear system_. we assume that is a constant vector .the stability of is defined in the following standard manner .let be a positive integer .* is said to be _ exponentially mean stable _ if there exist and such that \leq \alpha e^{-\beta \mathchangescript{k}{t}}{\lvert x_0 \rvert}^m ] for any .we also introduce the notion of positivity for following .we say that is _ positive _ if implies with probability one for every . for to be positive it is clearly sufficient that all the matrices are metzler , i.e., the off - diagonal entries of each are all nonnegative .however it is not necessary , as illustrated in the following non - trivial example .consider a switched linear system with and since , a simple calculation shows the existence of such that if then , for every , the vector is in the sector .then we construct a regenerative process as follows . 
set and let follow the uniform distribution on ] is schur stable .based on the theorem and continuing from example [ eg : process ] , the next example presents the stability analysis of a linear time - invariant system with a failure - prone controller under periodic maintenance .consider the internally unstable linear time - invariant system with the failure - prone controller where the stabilizing feedback gain is obtained by solving a linear quadratic regulator problem .we assume that the transition between the modes follows the regenerative process described in example [ eg : process ] . with this labeling we have and . for simplicity andalso suppose that each follows uniform distribution on ] ( ) then equations and show } = e^{\bar a_2 \max(0 , r_1-h ) } e^{\bar a_1 \min(r_1,h)}.\ ] ] here we recall that for square matrices with the same dimensions and it holds that using this identity and the independence of and , since follows the exponential distribution with mean , we can show that } ] & = \int_{0.9 t}^{1.1 t } \int_0^\infty e^{\bar a_2\max(0 , t - s ) } e^{\bar a_1\min(t , s ) } e^{-s}ds\,\frac{dt}{0.2 t } \\ & = \frac{5}{t}\int_{0.9 t}^{1.1 t } \int_0^t e^{\bar a_2(t - s ) } e^{(\bar a_1 - i)s } \,ds\,{dt } + \frac{5}{t}\int_{0.9 t}^{1.1 t } \int_t^\infty e^{\bar a_1 t } e^{-s } \,ds\,{dt } \\ & = \frac{5}{t } \begin{bmatrix } i & o \end{bmatrix } \int_{0.9 t}^{1.1 t } \exp\left(\begin{bmatrix } \bar a_2&i\\o&\bar a_1 - i \end{bmatrix}t\right ) \,dt \begin{bmatrix } o \\ i \end{bmatrix } \\ & \hspace{2cm}+ \frac{5}{t}\int_{0.9 t}^{1.1 t } e^{(\bar a_1 - i)t}\,{dt}. \end{aligned}\ ] ] figure [ figure : stability ] shows the graph of the spectral radius of }] ] as varies , width=306 ] as is expected , instability is caused by making the period of the maintenance longer .we can see that , by theorem [ theorem : main ] , is mean square stable if and only if . the computation of the matrix }] ]. therefore , by theorem [ theorem : main ] , the linear time - periodic system is stable in the standard sense if and only if , which is the main consequence of floquet s theory .the proof of theorem [ theorem : main ] is based on the discretization of at the embedded renewal process of the underlying regenerative process . in order to analyze the stability of the discretization , in the next section we first present the stability analysis of discrete - time switched linear systems with i.i.d . parameters .then in section [ subsec : proof ] we give the proof of theorem [ theorem : main ] .let be independent and identically distributed random variables following a distribution on .consider the discrete - time switched linear system where is a constant vector .the mean stability of is introduced as follows .[ defn : sta : sigmamu ] let be a positive integer .* is said to be _ exponentially mean stable _ if there exist and such that \leq \alpha e^{-\beta k } { \lvert x_0 \rvert}^m\ ] ] for all .* is said to be _ stochastically mean stable _ if < \infty ] is schur stable. we shall show the cycle [ [ item : mu : expsta ] [ item : mu : stosta ] [ item : mu : shcsta ] [ item : mu : expsta ] ] .one can easily see [ [ item : mu : expsta ] [ item : mu : stosta ] ] .[ [ item : mu : stosta ] [ item : mu : shcsta ] ] : let denote the trajectory of with the initial state . since the identity shows } ] = e[f_0^{[m]}]e[x(k;x_0)^{[m]}] ] with a corresponding eigenvector .since the set } : x\in\mathbb{r}^n \} ] .multiplying }]^k ] by .therefore , by the triangle inequality and , we can see ] is schur stable. 
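As an aside, the stability test in the maintenance example above can also be evaluated by plain Monte Carlo instead of the closed-form integral. In the sketch below, each sample draws one cycle exactly as in the example (a cycle length uniform on [0.9T, 1.1T] and an exponentially distributed failure time), forms the one-cycle transition matrix, and averages its Kronecker square; an estimated spectral radius below one then indicates mean-square stability, the Kronecker square being a standard second-moment substitute for the symmetric lift when m = 2. The open-loop matrix, feedback gain, failure rate, and jitter are invented placeholders, not the paper's values.

```python
import numpy as np
from scipy.linalg import expm

A_open = np.array([[0.0, 1.0], [2.0, 0.0]])    # unstable open-loop matrix (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[6.0, 5.0]])
A_closed = A_open - B @ K                       # stabilized closed loop (eigenvalues -1, -4)

def mean_square_radius(T, n_samples=5000, failure_rate=1.0, jitter=0.1, seed=0):
    """Monte Carlo estimate of rho(E[Phi kron Phi]) for maintenance period T."""
    rng = np.random.default_rng(seed)
    n = A_open.shape[0]
    acc = np.zeros((n * n, n * n))
    for _ in range(n_samples):
        ell = rng.uniform((1.0 - jitter) * T, (1.0 + jitter) * T)   # cycle length
        r = rng.exponential(1.0 / failure_rate)                      # time to controller failure
        Phi = expm(A_open * max(0.0, ell - r)) @ expm(A_closed * min(ell, r))
        acc += np.kron(Phi, Phi)
    return max(abs(np.linalg.eigvals(acc / n_samples)))

for T in (0.5, 1.0, 2.0, 4.0):
    print(f"T = {T:3.1f}:  estimated spectral radius = {mean_square_radius(T):.3f}")
```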
by , there exist and such that } ] \rvert } \leq \alpha e^{-\beta k } { \lvert x_0^{[m ] } \rvert } = \alpha e^{-\beta k } { \lvert x_0 \rvert}^m\ ] ] for every .we shall show that is exponentially mean stable .let and be arbitrary and write .we consider the two cases [ item : mu : even ] and [ item : mu : invariant ] separately .first assume that is even .take positive constants and such that .then \leq c_2^m e[{\lvert y \rvert}^m_m ] = c_2^m \sum_{i=1}^{n } { \left\lvert e[y_i^m ] \right\rvert}.\ ] ] since the random vector } ]. therefore this inequality together with and shows that is exponentially mean stable .next assume that is positive .notice that , by lemma [ lem : lemma36result ] , without loss of generality we can assume , which implies and hence } \geq 0 ] with probability one .therefore , the schwartz inequality shows \leq c_3 1_{n_m}^\top e[y^{[m ] } ] \leq c_3 { \lvert 1_{n_m } \rvert}\ , { \lvert e[y^{[m ] } ] \rvert} ] is far less than the size of the matrix used in ( * ? ? ?* theorem 5.1 ) . also the proof presented above is simpler than the proof of ( * ? ? ?* theorem 5.1 ) , which needs the approximation of by a sequence of finitely supported probability measures .let be a regenerative switched linear system satisfying the conditions [ item : evenornon - neg ] to [ item : mbdd ] and let be the trajectory of .then the discretized process given by is clearly the solution of the discrete - time system where is defined by .proposition [ prop : stab : mu : char ] immediately gives the next corollary on the stability of .[ cor : char ] the following conditions are equivalent . 1 . is exponentially mean stable . is stochastically mean stable .}] ] .let ] , taking the summation about in yields \sum_{k=0}^\infty e[{\lvert x_d(k ) \rvert}^m ] \leq \int_0^\infty e[{\lvert x(t ) \rvert}^m]\,dt <\infty.\ ] ] therefore is stochastically mean stable because both and ] is schur stable .[ [ item : schsta ] [ item : expsta ] ] :here we employ the idea used in the proof of the sufficiency part for ( * ? ? ?* theorem 2.5 ) .assume that }] ] , where and . this inequality and show the mean exponential stability of .this section briefly discusses the stability characterization of regenerative switched linear systems in discrete - time . let be a regenerative process taking values in a set and defined on the set of nonnegative integers .let be a family of real matrices .consider the discrete - time regenerative switched linear system the exponential and stochastic mean stability of are defined as in definition [ defn : sta : sigmamu ] .in addition to the assumptions [ item : evenornon - neg ] to [ item : mbdd ] we place the next assumption on : 1 .[ item : d : invmbdd ] is invertible for each and the set is bounded . for each define the transition matrix for representing the transition of from to in the same way as we defined for continuous - time regenerative switched linear systems in .the next theorem is a discrete - time counterpart of theorem [ theorem : main ] .the following statements are equivalent . 1 . is exponentially mean stable . is stochastically mean stable .}\bigr]$ ] is schur stable .let be the embedded renewal process of . using [ item : mbdd ] and [ item : d : invmbdd ] we can show the existence of a constant such that , for every , if then .then we can prove the desired equivalence in the same way as in the proof of theorem [ theorem : main ] .the details are omitted .in this paper we investigated the mean stability of regenerative switched linear systems . 
a necessary and sufficient condition for the stability of regenerative switched linear systems established under the assumption that either is even or the system is positive and that the length of each cycle of the underlying regenerative process is essentially bounded . the proof used a discretization of the system at the embedded renewal process of the underlying regenerative process .a numerical example was presented to illustrate the result .j. bertram , p. sarachik , stability of circuits with randomly time - varying parameters , ire trans .circuit theory 6 ( 1959 ) 260270 .x. feng , k. a. loparo , y. ji , h. j. chizeck , stochastic stability properties of jump linear systems , ieee trans .control 31 ( 1992 ) 3853 .o. costa , m. fragoso , stability results for discrete - time linear systems with markovian jumping parameters , j. math .( 1993 ) 154178 .o. l. costa , m. d. fragoso , m. g. todorov , continuous - time markov , springer , 2013 .m. ogura , c. f. martin , http://arxiv.org/abs/1309.2720[stability analysis of positive semi - markovian jump linear systems with state resets ] , siam j. control optim .( accepted ) . arxiv:1309.2720[ math.oc ] d. antunes , j. p. hespanha , c. silvestre , stochastic hybrid systems with renewal transitions : moment analysis with application to networked control systems with delays , siam j. control optim . 51 ( 2013 ) 14811499 .k. sigman , r. wolff , a review of regenerative processes , siam rev .35 ( 1993 ) 269288 .w. l. smith , regenerative , proceedings of the royal soc . a : math .232 ( 1955 ) 631 .j. zhang , b. zwart , steady state approximations of limited processor sharing queues in heavy traffic , queueing syst .60 ( 2008 ) 227246 .d. logothetis , k. trivedi , the effect of detection and restoration times for error recovery in communication networks , j. netw .manag . 5 ( 1997 ) 173195 .r. v. canfield , cost optimization of periodic preventive maintenance , ieee trans .35 ( 1986 ) 7881 .t. nakagawa , periodic and sequential preventive maintenance policies , j. appl .probab . 23 ( 1986 ) 536542 .r. w. brockett , lie theory and control systems defined on spheres , siam j. appl .math . 25 ( 1973 ) 213225 .a. barkin , a. zelentsovsky , method of power transformations for analysis of stability of nonlinear control systems , systems control lett . 3 ( 1983 ) 303310. y. zhang , j. jiang , bibliographical review on reconfigurable fault - tolerant control systems , annual rev . control 32 ( 2008 ) 229252 .p. bolzern , p. colaneri , g. de nicolao , markov jump linear systems with switching transition rates : mean square stability with dwell - time , automatica 46 ( 2010 ) 10811088 .m. mariton , detection delays , false alarm rates and the reconfiguration of control systems , int .j. control 49 ( 1989 ) 981992 .m. ogura , c. f. martin , http://www.sciencedirect.com/science/article/pii/s0024379513004278[generalized joint spectral radius and stability of switching systems ] , linear algebra appl . 439( 2013 ) 22222239 .
this paper investigates the stability of switched linear systems whose switching signal is modeled as a stochastic process called a regenerative process . we show that the mean stability of such a switched system is characterized by the spectral radius of a single matrix , obtained by taking the expectation of the lift of the transition matrix of the system over one cycle of the underlying regenerative process . the characterization generalizes floquet's theorem for the stability analysis of linear time - periodic systems . we illustrate the result with the stability analysis of a linear system with a failure - prone controller under periodic maintenance . keywords : switched linear system , regenerative process , mean stability , periodic maintenance
several studies have shown that measures of trabecular bone micro - architecture and bone strength are correlated . together with loss of bone mass , changes in the trabecular bone micro - architecture occur during ageing , during development of osteopenia and osteoporosis , as well as in connection with immobilisation or space flight , and can lead to an increased risk of bone fracture . the vertebral bodies and the epiphyses and metaphyses of the long bones consist mainly of trabecular bone surrounded by a thin cortical shell . a dramatic change in the state of the trabecular bone leads to an increased fracture risk . bone mineral density ( bmd ) is the most commonly used predictor of bone strength and fracture risk , and also the most commonly used general descriptor of the state of the bone . non - linear relationships have been established between volumetric bmd and compressive bone strength and elastic modulus . however , for trabecular bone , it has been established that a part of the variation in the strength of the bone cannot be explained by bmd alone , but is instead due to the micro - architecture of the trabecular network . for example , a relationship has been established between the mechanical properties of the bone and the shape , orientation , bone trabecular volume fraction , and thickness of the trabeculae . a series of new methodologies based on techniques from nonlinear data analysis has also been introduced in order to study the relationship between the complexity of the trabecular bone network and bone strength . saparin et al . established such a relationship by use of structural measures of complexity based on symbolic encoding . furthermore , several studies using numerical modelling of the trabecular bone and finite element analysis have also confirmed the importance of the trabecular bone micro - architecture to bone strength . in the present study we propose a method of analysis of trabecular bone micro - architecture that is new in two ways . firstly , the analysis is based on high - resolution peripheral quantitative computed tomography ( pqct ) images , since at present it is still not possible to perform micro - ct on humans in vivo and in situ ; in contrast to classical histomorphometry , such images can be obtained in a nondestructive and noninvasive manner , which is preferable for implementation of the method in a clinical setting . secondly , our image analysis uses a new approach called _ long range node - strut analysis _ , which quantifies the apparent nodes and struts of the trabecular network . in contrast to the node - strut analysis of garrahan et al .
, our method emphasises long range connectivity of the trabecular network , over a distance controlled by a parameter in the algorithm . the analysis has the dual aims of describing the shape of the trabecular network and predicting bone strength . the trabecular bone compartment consists basically of two components : bone and marrow . due to the limited resolution of present day ct and peripheral quantitative ct ( pqct ) scanners it is not possible to completely resolve the trabecular bone micro - architecture . this results in variations of the ct values of the trabecular voxels , even if the intrinsic bone density is constant throughout the trabecular network . consequently , our method takes this apparent variation in bone density into consideration rather than segment or binarise the image , which would imply a loss of this additional information . we assume that higher ct values of a given trabecula will , all else being equal , result in a higher compressive bone strength . we note that our method does not require skeletonisation of the image . we apply our method to 2-dimensional pqct images of human proximal tibiae and quantify the trabecular bone micro - architecture at different levels of bone integrity ranging from normal healthy bone to osteoporotic bone ( as assessed by their bmd ) . we compare our results with the node / terminus ratio ( nd / tm ) , computed using traditional histomorphometry performed on bone biopsies obtained at the same location as the pqct scans . the study population comprised 18 women aged 75 to 98 years and 8 men aged 57 to 88 years . at autopsy , the tibial bone specimens were placed in formalin for fixation . for each specimen a pqct image and a bone biopsy were obtained from the same location . for each proximal tibia , an axial qct slice was acquired 17 mm below the tibial plateau with a stratec xct-2000 pqct scanner ( stratec gmbh , pforzheim , germany ) , with an in - plane pixel size of 200 µm x 200 µm and a slice thickness of 1 mm . in some cases , the scans were performed after the biopsies were taken ; therefore , the holes left from the bone biopsy appear in some of the pqct images . a standardised image pre - processing procedure was applied to exclude the cortical shell from the analysis . one of the resulting images is shown in fig . [ fig : tibia ] ; the hole visible on the left of the image is the result of a cylindrical biopsy . cylindrical bone samples with a diameter of 7 mm were obtained 17 mm distal from the centre of the medial facet of the superior articular surface by drilling with a compressed - air - driven drill with a diamond - tipped trephine at either the right or the left proximal tibia . these bone biopsies were embedded undecalcified in methyl methacrylate , cut into 10-µm - thick sections on a jung model k microtome ( r. jung gmbh , heidelberg , germany ) , and stained with aniline blue ( modified masson trichrome ) . the mounted sections were placed in a flat - bed image scanner and 2540 dpi digital 1 bit images of the sections were obtained as previously described in detail . the resulting pixel size is 10 µm x 10 µm . the trabecular bmd of each pqct slice was calculated using a linear relationship derived on the basis of experimental calibration with the european forearm phantom , as described by saparin et al . the trabecular bmds of the slices in this study range from 30 to 150 mg/cm^3 , with a median of 97 mg/cm^3 .
also in the above paper by saparin et al .is a recommended pqct `` cut - off value '' of 275 , corresponding to bmd 25 , to be used for a bone vs. marrow classification .this is the pqct value of the densest non - bone tissue tested , plus the standard deviation of the noise .if the mean bmd is calculated over only those pixels classified as bone , then the resulting bmd values for the 26 images range from 111 to 235mg/ , with a median of 172mg/ .most of this section describes the new image analysis method , _ long range node - strut analysis _ , which includes the new measure , _ mean node strength _ ( ndstr ) .this method was applied to the pqct sections described in section [ sectmaterials ] , after removal of the cortical shell .we also computed a standard measures from the same regions of interest of the same pqct images : trabecular volumetric bone mineral density ( bmd ) , calculated as described by saparin et al . for further comparison , topological 2-dimensional node - strut analysis was performed on the histological sections described in section [ sectmaterials ] , using a custom - made computer program . the trabecular bone profile was iteratively eroded until it was only one pixel thick using a hilditch skeletonisation procedure , and nodes and termini were automatically detected by inspecting the local neighbourhood . if the centre pixel of the neighbourhood is a skeleton pixel and one and only one of the 8 other pixels is a skeleton pixel the centre pixel is classified as a terminus ( tm ) indicating that the strut ends in this pixel .if the centre pixel is a skeleton pixel and three or more of the 8 adjacent pixels are skeleton pixels the centre pixel is classified as a node ( nd ) indicating that two or more struts join in this pixel .compston et al .have argued that the ratio between nodes and termini ( nd / tm ) is an expression of the connectivity of the trabecular network. consequently , only the node - to - terminus ratio ( nd / tm ) from this analysis of the histological sections is used in the present study .we start with the algorithm to find _ strands _ in each image .a strand is a connected trabecular path , i.e. , a chain of one or more struts , with the whole path going in approximately the same direction . in our algorithm, we use eight directions , labelled by the points of the compass : north ( n ) , northeast ( ne ) , east ( e ) , southeast ( se ) , south ( s ) , southwest ( sw ) , west ( w ) , or northwest ( nw ) .`` north '' is the anterior direction , which corresponds to `` up '' in the images shown in this article .node _ is a pixel that is joined to strands in at least three of these eight directions , any two of which are at least 90 degrees apart ( e.g. n , e , and sw ; but not n , ne , and e ) . the _ node strength _ of a pixel is 0 if it is not a node at all , but otherwise depends on the lengths of the strands that meet at the node and the pqct values of the pixels in these strands .these basic definitions could be implemented in a variety of ways , depending for example on the definitions of `` connected '' and `` same direction '' , and on how the pqct values of the pixels are used . in our algorithm ,the first step is to remove marrow from further consideration , by applying a bone threshold filter .the thresholded pqct values will be referred to in this section as _ ct values of bone _, denoted . 
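For reference, the local node/terminus classification described above for the skeletonised histological sections can be written very compactly. The sketch below is a schematic reimplementation, not the custom program used in the study: it assumes a one-pixel-thick boolean skeleton (the Hilditch thinning step itself is omitted; something like skimage.morphology.skeletonize could stand in for it) and counts 8-neighbours to label nodes and termini.

```python
import numpy as np

def classify_skeleton(skel):
    """skel: 2-D boolean array, one-pixel-thick skeleton. Returns (n_nodes, n_termini)."""
    padded = np.pad(skel.astype(int), 1)
    # Count the 8 neighbours of every pixel (padding prevents wrap-around at the border).
    neigh = sum(np.roll(np.roll(padded, di, 0), dj, 1)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))[1:-1, 1:-1]
    nodes = skel & (neigh >= 3)     # three or more skeleton neighbours -> node
    termini = skel & (neigh == 1)   # exactly one skeleton neighbour   -> terminus
    return int(nodes.sum()), int(termini.sum())

# Tiny test pattern: an X made of two diagonal struts crossing at one pixel.
skel = np.zeros((7, 7), dtype=bool)
for i in range(1, 6):
    skel[i, i] = True
    skel[i, 6 - i] = True
nd, tm = classify_skeleton(skel)
print(nd, tm, "nd/tm =", nd / tm)   # 1 node, 4 termini, nd/tm = 0.25
```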
thus if is the pqct value of a pixel , then we choose , corresponding to a bmd value of 24 mg/ .this is the soft tissue threshold used in symbol encoding for complexity measures by saparin et al . the threshold was determined as the pqct value of the densest non - bone tissue tested , plus the standard deviation of the noise ( a calibration protocol for finding the threshold based on a phantom will be developed in the future ) . in order to explain the central algorithm, we begin by considering a fictitious image consisting of just one row .the ct values of the pixels in the row form a sequence we will define a corresponding sequence of _ strand strengths _ , where each describes the pattern of densities to the left of pixel in the row .we begin by setting , and then we recursively define where is a _ transmission constant _ between and . note that if all of the ct values have a common value then since .as increases , approaches an upper bound of .thus , for the simple case of a uniformly dense row , we have defined the strand strength to be times the ct values of bone .this constant can be interpreted as a characteristic length of the method , which we can vary depending on the length of strand that we believe to be most important to the strength of the bone .for the present study , we have chosen , corresponding to a characteristic length of pixels , i.e. 4 mm .for the general case of a variable ct values row , the presence of the transmission constant in the recursive formula ensures that the ct value of pixels closest to the pixel have the greatest effect on .finally , the factor ensures that depends strongly on , so that any weak link in a chain of high - density pixels lowers the strand strength dramatically .the difficulty in generalising this definition to a 2-dimensional image is of course that there are infinitely many directions but only four that are parallel with the pixel grid .we have resolved this in a practical but ad hoc fashion . as already stated , we consider eight directions .consider first the definition of strand strength in the leftwards ( w ) direction .> from pixel ( at row and column ) we consider that there are five length-3 paths leading approximately leftwards : as illustrated in fig .[ fig : path3 ] . .the start pixel is marked in black . ] for each of these paths , we define and as the ct values of the pixels in the path in columns , and .we then apply the recursive formula which is the formula in eq .( [ e : strandstr ] ) , applied twice .( we must now set . )we now have five values , one for each of the five paths , i.e. , .we multiply all but the third ( fig . [fig : path3](iii ) ) of these five values by a _ bending coefficient _ between and , to penalise strands that bend away from the horizontal direction .then the maximum of these five numbers is our leftwards ( w ) strand strength .strand strengths in the other seven directions are determined in a similar manner . 
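The one-dimensional recursion itself is not reproduced in the text above, but its stated limiting behaviour for a uniform row is easy to illustrate. For a constant CT value b the recursion reduces to s_i = b + t * s_(i-1), which rises towards the bound b/(1 - t); if the characteristic length is interpreted as 1/(1 - t), the quoted 4 mm (20 pixels at 0.2 mm per pixel) corresponds to t = 0.95. Both that interpretation and the value of b below are inferences and placeholders of ours, not values stated in the text.

```python
# Uniform-row special case only; the full 2-D algorithm is not reproduced here.
t = 0.95           # assumed transmission constant (inferred from the 4 mm characteristic length)
b = 500.0          # assumed thresholded CT value of bone (arbitrary units)

s = 0.0
limit = b / (1.0 - t)
for i in range(1, 101):
    s = b + t * s
    if i in (5, 20, 50, 100):
        print(f"pixel {i:3d}: strand strength = {s:8.1f}  ({100.0 * s / limit:5.1f}% of the limit {limit:.0f})")
```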
for diagonal directions ,the geometry of the five length-3 paths used in the computation is slightly different , so a different bending coefficient is used .we chose for the horizontal and vertical directions and for the diagonal directions .these constants were chosen so as to give mean strand strengths that are approximately invariant to image rotations by an arbitrary angle , which were verified by numerical experiments on tibial images .+ finally , we calculate the node strength at each pixel .there are possible ways of choosing 3 out of the 8 directions in such a way that each pair of chosen directions makes an angle of at least 90 degrees .for example , e , n , and sw comprise an allowable choice , but e , n , and ne do not . at each pixel , for each allowable choice of three directions , we calculate the minimum of the three strand strengths in these directions , e.g. , . the _ node strength _ is the maximum of the 16 minima subtracted by a minimum strength constant , pixels with a positive node strength are called _nodes_. the mean node strength over the region of interest ( roi ) is called the node strength of the region , the purpose of subtracting the minimum strength constant is to allow us to ignore short `` strands '' , which are often just transverse sections of trabeculae with an apparent width of more than one pixel .the trabeculae in our images have an apparent width of approximately two pixels , or 0.4 mm , which is higher than the true average trabecular width , due to the 1 mm thickness of the ct image and the 0.2 mm pixel size .thus , we wish to ignore strands of bone that are only two pixels long .the strength of such a strand depends on the pqct values of the two pixels . following eq .( [ e : strandstr ] ) , the strength of such a strand with ct value is .based on examination of the resulting node strength images ( such as fig .[ fig : patch ] ( f ) ) , we find ad hoc a minimum strength constant of 225 ( corresponding to a ct value of 500 , eq .( [ e : threshold ] ) , corresponding to a bmd value of 331mg/ , which is higher than the mean ct value for trabecular bone ) .so strands of lower strength than this will be ignored by the algorithm .we note that our complete algorithm involves two thresholding steps : one at the beginning , when pqct values are converted to ct values of bone ; and one at the end , when the minimum strength constant is subtracted .these are not equivalent to one thresholding step with a higher threshold .the purpose of the first threshold filtration is to ignore marrow .the purpose of the second threshold filtration is to ignore short strands , and it can be seen as an alternative to skeletonisation .numerical experiments have shown that relative ( cross - subject ) values of ndstr are stable with respect to the choice of the bone threshold value ( ) and minimum strength constant .specifically , if either of these constants is varied by from the values used in this article , and ndstr is recomputed for all subjects , the resulting ndstr values are very strongly linearly correlated with the original ndstr values ( pearson correlation coefficient in all cases ) .the absolute values of ndstr depend on the choice of these constants : increasing the bone threshold by leads to a mean decrease in ndstr ( percent decrease averaged over all subjects ) , while increasing the minimum strength constant by leads to an average decrease in ndstr of .nevertheless , and as already mentioned , the relative ( cross - subject ) variation of ndstr remains 
constant .a key property of ndstr is that it depends on both the geometry of the trabecular network and the ct values of the trabeculae .ndstr depends linearly on thresholded ct values , _ if _ the ct values of all pixels are varied by the same factor .it follows that images with the same geometry but different bmd will have different ndstr values . on the other hand , two images with the same bmd but different geometrycan have different ndstr values .this is apparent from the description of the method , and confirmed by figs .[ fig4 ] and [ fig5 ] , as discussed below , and also by the results discussed in section [ sectresults ] . to illustrate the analytical method, we now present the results in visual form for an enlarged region near the bottom ( posterior ) of the slice in fig .[ fig : tibia ] .the enlarged region of the original image is shown in fig .[ fig : patch](a ) . parts ( b ) to ( e ) of fig .[ fig : patch ] show directional strand strengths , and part ( f ) shows the final node strength plot .each directional strand strength plot shows the sum of two strand strengths at every pixel , in opposite directions : east / west ; north / south ; northwest / southeast ; and northeast / southwest . in each of the directional strand strength plots ,the strands in the given direction are shown with the highest intensity , but most of the trabeculae are still visible , even if faintly .in contrast , in the node strength plot ( part ( f ) ) , most of the trabeculae are invisible .this is because of the subtraction of the minimum strength constant . in this example , there are almost no nodes in the right half of the image .this correctly describes the micro - architecture of the original image in that region , which contains many trabeculae but few that cross each other to make a lattice - like micro - architecture .the left half of the image contains many nodes .notice that , in the node strength plot , the nodes seem to be thicker than in the original image .this is because the trabeculae in the original image are actually slightly thicker than they appear , the outer pixels being dimmer ( i.e. lower ct values ) and thus not easily registered by the eye . since the outer pixels near the apparent nodes in the original image are almost as well - connected as pixels in the centres of the nodes , they have large node strengths , and are very visible in the node strength plot .the two specimens depicted in figs . [ fig4 ] and [ fig5 ] have comparable bone mineral densities , but their trabecular micro - architecture are visibly different . in fig .[ fig4 ] , the specimen has a trabecular bmd of 107mg/ , which is near the median value ( 97mg/ ) of the specimens in this study .on the left is the original image , and on the right is the node strength plot .notice that there are a lot of nodes in most of the outer areas , with the notable exception of a region near the bottom left .the mean node strength is 71.2 .the specimen in fig .[ fig5 ] has a trabecular bmd of 94 mg/ , which is only 12% lower than that of the specimen shown in fig .[ fig4 ] , but it has substantially fewer nodes . 
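To recap the combination step defined in the preceding paragraphs before turning to the results, the sketch below evaluates the node strength of a single pixel from its eight directional strand strengths: the maximum, over the 16 admissible triples of directions that are pairwise at least 90 degrees apart, of the minimum strand strength within the triple, minus the minimum strength constant and floored at zero. The strand-strength numbers in the example are invented for illustration.

```python
import itertools

DIRS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]   # 45-degree spacing around the compass
MIN_STRENGTH = 225.0                                   # minimum strength constant from the text

def node_strength(strand):
    """strand: dict mapping each of the eight directions to a strand strength."""
    best = 0.0
    for triple in itertools.combinations(range(8), 3):
        # pairwise circular separation of at least 2 steps, i.e. at least 90 degrees
        admissible = all(min((a - b) % 8, (b - a) % 8) >= 2
                         for a, b in itertools.combinations(triple, 2))
        if admissible:                                  # 16 of the 56 triples pass this test
            best = max(best, min(strand[DIRS[i]] for i in triple))
    return max(0.0, best - MIN_STRENGTH)

# A pixel connected strongly in the E, W and S directions only:
example = {"N": 40, "NE": 60, "E": 900, "SE": 80, "S": 700, "SW": 50, "W": 850, "NW": 30}
print(node_strength(example))   # min(900, 850, 700) - 225 = 475
```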
the mean node strength is only 42.2 , which is 40% lower than that for the specimen shown in fig .this reflects the lack of a strong lattice - like micro - architecture in the original image .the long range node strength of each pixel of each image was computed as described in the previous section , and a mean node strength , ndstr , was calculated for each image .we also computed the bone mineral density ( bmd ) from the pqct slices , as explained in section [ sectanalysis ] . a scatter plot showing ndstr versus bmd for all 26 specimens is shown in fig .[ fig : nodesbmd ] .there is a strong positive correlation , which we quantified in three ways .firstly , pearson s correlation coefficient is , indicating a very strong linear correlation .secondly , since the scatter plot clearly suggests a nonlinear relationship , we fitted an exponential curve to the data and then found pearson s correlation coefficient to be .thirdly , the spearman rank correlation is , indicating a very strong correlation .the spearman rank correlation coefficient is a robust nonparametric correlation measure that is appropriate when little is known about the distributions and nature of the correlation between the variables .we also compared ndstr with the node - terminus ratio , nd / tm , in the topological node - strut analysis introduced by garrahan et al . the two measures are similar in philosophy , because they both quantify the nodes in the trabecular network .however , the definition of nd / tm is highly localised : after the skeletonisation process has eroded the trabecular network to a thickness of one pixel , each pixel is classified as node , strut , or terminus depending on its grid of nearest neighbours ( including the original pixel ) . for the present study ,the images used to compute nd / tm have a pixel size of m , so the classification is made on the basis of a 30 m 30 m region .in contrast , node strength is semi - global , taking into account longer strands to a degree controlled by the transmission strength constant . in the present study , with this constant set to , the method has a characteristic length of 4 mm , in the sense described in section [ sectanalysis ] . the node - terminus ratio , nd / tm , was calculated using histomorphometry performed on bone biopsies as described in section [ sectanalysis ] .recall that these biopsies were from the same regions of the same donors as the pqct slices from which ndstr has been computed .pearson s correlation coefficient for the relationship between ndstr and nd / tm is , and the spearman rank correlation coefficient is .we also measured the correlation between nd / tm and trabecular bmd : pearson s correlation coefficient is , and the spearman rank correlation coefficient is .using either measure of correlation , mean node strength is more strongly correlated with trabecular bmd than the node - terminus ratio is correlated with either of these variables . ) . 
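The three correlation measures used above can be reproduced with standard tools; the sketch below runs them on synthetic data (26 made-up (bmd, ndstr) pairs with an exponential trend), since the study's raw values are not listed here. Treating the exponential fit as a Pearson correlation between bmd and log(ndstr) is one common choice and may differ in detail from the fitting procedure actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
bmd = rng.uniform(30.0, 150.0, size=26)                              # mg/cm^3, illustrative
ndstr = 5.0 * np.exp(0.025 * bmd) * np.exp(rng.normal(0.0, 0.1, 26))  # made-up exponential trend

r_linear, _ = stats.pearsonr(bmd, ndstr)          # Pearson on the raw values
r_exp, _ = stats.pearsonr(bmd, np.log(ndstr))     # Pearson after the exponential (log-linear) fit
rho, _ = stats.spearmanr(bmd, ndstr)              # Spearman rank correlation

print(f"pearson (linear)       r   = {r_linear:.2f}")
print(f"pearson (exponential)  r   = {r_exp:.2f}")
print(f"spearman               rho = {rho:.2f}")
```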
]we have introduced a new morphometric measure for characterising the micro - architecture of trabecular bone , _ long range node strength _ , which measures the degree to which a pixel in a 2-dimensional bone image has long - range connectivity in three or more directions , each at least 90 degrees from the others .in addition , we have calculated the mean node strength , ndstr , for each of the 26 bone samples considered in the study .we have found that ndstr has a strong positive correlation with trabecular bmd ( , after exponential transformation ) .furthermore , we have ascertained a strong correlation ( ) between ndstr and the established histomorphometric measure , node - terminus ratio ( nd / tm ) . moreover , qualitative comparison of images with similar trabecular bmd but different mean ndstr ( see figs .[ fig4 ] and [ fig5 ] ) suggest that ndstr successfully quantifies how `` lattice - like '' the micro - architecture is .further studies , including either clinical data or measured bone strength , are needed in order to determine the utility of this measure relative to other existing morphometric measures .such studies could also determine the most useful choice of the parameters that appear in the algorithm : the transmission constant , minimum strength constant , and bending coefficients .for example , these constants could be chosen to maximise correlations with bone strength . in the future ,the sensitivity of the method on these parameters and on different ct settings ( e.g. , ct pixel size , slice thickness , mean ct value of bone ) should be systematically investigated . moreover ,a future study on synthetic data could be used to verify that ndstr measures structure and not just bone volume or mass , and specifically that it finds nodes .also different skeletal sites should be analysed with our approach in order to test its potential to describe structural differences .we would like to note that our algorithm for computing node strength is dependent on pixel size , and is thus unsuitable for absolute comparisons between studies .however , this limitation is not unique to the node strength measure .for example , guggenbuhl _ et al ._ have showed using ct images with different thickness ( 1 mm , 3 mm , 5 mm , and 8 mm ) that the outcome of texture analysis depends substantially on the slice thickness . in the present study we have used pqct equipment with an in - plane pixel size of 200 m 200 m and a slice thickness of 1 mm .we have not conducted a formal investigation into the influence of slice thickness on the long range node strength , but it is fair to assume that the long range node strength is similarly affected by the slice thickness . if further studies were to confirm the practical value of the measure , an implementation could be developed to produce node strengths that are broadly comparable between images with different pixel size . however , the technological development of high - resolution pqct equipment is progressing very quickly , and already high - resolution pqct scanners exist that can image a tibia at an isotropic pixel size of approximately 100 m , which is comparable to the trabecular thickness in the human proximal tibia , thus making the slice thickness less of an issue .some previous studies have investigated the trabecular bone micro - architecture using texture analysis applied on x - ray images of bone . 
however , these techniques are not comparable to the method presented in the present study as they are based on projections of a 3-dimensional trabecular network on a 2-dimensional plane , whereas our method is applied to 2-dimensional sections obtained through the 3-dimensional trabecular network .nevertheless , ranjanomennahary __ very recently showed some significant correlation did exist between radiograph based texture analysis and based unbiased 3-dimensional measures of trabecular micro - architecture ._ compared 3-dimensional measures of micro - architecture based on 3-dimensional synchrotron radiation ct with more than 350 texture parameters obtained from simulated radiographs that were created from the synchrotron radiation ct data sets .they found using multiple regression analysis that a combination of a subset of texture analysis parameters correlated to the 3-dimensional measures of micro - architecture .however , they had to use at least three texture analysis parameters in the correlation with each of the 3-dimensional measures of micro - architecture .investigated ct images of the distal radius using texture analysis . the ct image used by cortet __ used similar pixel size to those used in the present study .they analysed the ct images using the traditional node - strut analysis and texture analysis including the grey level run length method . as illustrated in the present work there is a moderate correlation ( ) between the node - strut analysis and the ndstr measure , which we believe illustrates that the ndstr measure captures somewhat different information from that obtained with the node - strut analysisgrey level run length is based on runs that have exactly the same grey level , and is therefore very sensitive to the choice of discretisation of the grey levels .in contrast , the ndstr measure treats the grey level as a continuous parameter , and thus it is able to detect runs with variable grey levels , and is consequently less sensitive to discretisation choices .another advantage of the method is that the proposed method utilises all of the ct value information in the pqct images , as no binarization of the images is performed . in the present studywe have not applied the developed method to the histological sections directly .the reason for this is that the histological sections only cover a limited area of interest and thereby a limited trabecular length , which renders the ndstr less meaningful .however , this is a limitation of the histological examination procedure where it is only feasible to investigate a smaller sample ( biopsy ) of a larger structure ( the proximal tibia ) and not a limitation of the proposed method .in addition , the histological images are segmented into bone and marrow by their nature and it is thus not possible to assign an intensity value to pixels , analogous to a ct value , which is needed by the algorithm in its present form .however , it is possible to apply the concept of long range node - strut analysis to such binary images but this is outside the scope of the present investigation . in the present study the long range node - strut analysiswas applied to pqct images of the proximal tibia .however , we would like to stress that the method is not limited to this skeletal site and thus can be applied to 2-dimensional ct images obtained from any skeletal site like e.g. the vertebral body or the calcaneous .finally , we note that the general method of long range node - strut analysis provides more than just the mean node strength . 
in the present study , we have focused on mean node strength for simplicity , but the intermediate measures of _ directional strand strength _ , used in the computation of node strength , may be useful in themselves as a measure of directional strength . in the present study the long range node - strut analysis has been applied to 2-dimensional pqct images obtained in the horizontal plane . however , the trabecular micro - architecture of the proximal tibia is mostly isotropic in the horizontal plane , whereas the micro - architecture in the vertical direction is highly anisotropic compared with the horizontal plane . as scanners become more and more prevalent and as the pixel size and imaging capacity of pqct scanners steadily improve , it will be an important future task to generalise the long range node - strut analysis into three dimensions and to apply the technique to 3-dimensional data sets obtained from such equipment . in particular , the directional strand strength could be used to investigate anisotropic differences of the trabecular micro - architecture of such 3-dimensional data sets . the data acquisition parts of this project were made possible by grants from the microgravity application program / biotechnology from the manned spaceflight program of the european space agency ( esa ) ( esa project # 14592 , map ao-99 - 030 ) . the authors would like to thank professor g. bogusch and professor r. graf , center for anatomy , charité berlin , germany , for kindly providing the bone specimens . wolfgang gowin , formerly at campus benjamin franklin , charité berlin , germany , is gratefully acknowledged for preparing the bone specimens and harvesting the bone biopsies . erika may and martina kratzsch , campus benjamin franklin , charité berlin , and inger vang magnussen , university of aarhus , denmark , are acknowledged for their excellent technical assistance in scanning the ct images and preparing the histological sections .
* purpose * we present a new morphometric measure of trabecular bone micro - architecture , called _ mean node strength _ ( ndstr ) , which is part of a newly - developed approach called _ long range node - strut analysis_. our general aim is to describe and quantify the apparent `` lattice - like '' micro - architecture of the trabecular bone network . * methods * similar in some ways to the topological node - strut analysis introduced by garrahan et al . , our method is distinguished by an emphasis on long - range trabecular connectivity . thus , while the topological classification of a pixel ( after skeletonisation ) as a node , strut , or terminus , can be determined from the neighbourhood of that pixel , our method , which does not involve skeletonisation , takes into account a much larger neighbourhood . in addition , rather than giving a discrete classification of each pixel as a node , strut , or terminus , our method produces a continuous variable , _ node strength_. the node strength is averaged over a region of interest to produce the _ mean node strength _ ( ndstr ) of the region . * results * we have applied our long range node - strut analysis to a set of 26 high - resolution peripheral quantitative computed tomography ( pqct ) axial images of human proximal tibiae acquired 17 mm below the tibial plateau . we found that ndstr has a strong positive correlation with volumetric trabecular bone mineral density ( bmd ) . after an exponential transformation , we obtain a pearson s correlation coefficient of . qualitative comparison of images with similar bmd but with very different ndstr values suggests that the latter measure has successfully quantified the prevalence of the `` lattice - like '' micro - architecture apparent in the image . moreover , we found a strong correlation ( ) , between ndstr and the conventional node - terminus ratio ( nd / tm ) of garrahan et al . the nd / tm ratios were computed using traditional histomorphometry performed on bone biopsies obtained at the same location as the pqct scans . * conclusions * the newly introduced morphometric measure allows a quantitative assessment of the long - range connectivity of trabecular bone . one advantage of this method is that it is based on pqct images that can be obtained noninvasively from patients , i.e. without having to obtain a bone biopsy from the patient .
the inverse scattering problem , a core tool of non - destructive evaluation , is an intriguing research topic closely related to human life : it applies not only to physics , engineering , and medical imaging but also to identifying cracks in structures such as concrete walls , machines , or buildings . therefore , it has been investigated extensively , both to propose new algorithms and to test and analyze previously suggested ones . related works can be found in and references therein . however , the inverse scattering problem is difficult enough that few methods have been studied other than reconstruction based on iterative schemes such as the newton - type method , refer to . for algorithms of newton type , if the initial shape is quite different from the unknown target , the reconstruction fails through non - convergence or yields faulty shapes , and even a successful reconstruction can take a great deal of time . therefore , several non - iterative algorithms have been suggested : they can reconstruct a shape quite similar to the target in a short time , and the result can serve as a good initial guess for an iterative method ( see and references therein ) . among them , non - iterative reconstruction algorithms such as kirchhoff and subspace migration have been studied consistently thanks to their better imaging products . however , existing research on these algorithms has been largely heuristic ; that is , it relied mostly on experimental results . although several structures have been studied , the work was based on experiments or statistical approaches and did not reveal the mathematical structure explicitly , refer to and references therein , which made it difficult to explain the results theoretically . recently , the mathematical structure of single- and multi - frequency subspace migration for imaging of small electromagnetic materials has been analyzed by establishing a relationship with bessel functions of integer order in full - view inverse scattering problems , refer to . this work explains why subspace migration is effective and why applying multiple frequencies guarantees better imaging performance than a single frequency . motivated by this work , the analysis was successfully extended to limited - view inverse scattering problems ( see ) . afterwards , a multi - frequency subspace migration weighted by the applied frequency was suggested in to obtain more precise results ; the reason why the weighted algorithm produces better images was demonstrated mathematically , and an analysis of multi - frequency subspace migration weighted by a power of the applied frequency was considered in . that analysis concludes that increasing the power of the applied frequency is not helpful , so that multi - frequency subspace migration weighted by the applied frequency itself is a good imaging algorithm . recently , it has been confirmed that a multi - frequency subspace migration weighted by the logarithmic function of the applied frequency can yield more appropriate imaging results than the one suggested in . however , it is difficult to identify the reason for this better performance through mathematical analysis .
motivated by this difficulty , we derive an indefinite integration of square of bessel function of order zero of the first kind multiplied by the natural logarithmic function .based on this integration , we discover the structure of multi - frequency subspace migration weighted by the logarithmic function of applied frequency by establishing a relationship with bessel function of integer order of the first kind , and provide the reason of better imaging performance .the organization of this study is as follows . in section [ sec2 ], we briefly introduce two - dimensional direct scattering problem and subspace migration .section [ sec3 ] provides a survey on the structures of single- , multi- , and weighted multi - frequency subspace migrations , the derivation of indefinite integration of square of bessel function multiplied by the natural logarithmic function , and the the mathematical analysis on why multi - frequency subspace migration weighted by natural logarithmic function shows better imaging performance than the traditional one . in section [ sec4 ] ,several results of numerical experiments with noisy data are presented in order to support our analysis .finally , a short conclusion is mentioned in section [ sec5 ] .let us consider two - dimensional electromagnetic scattering from a thin , curve - like homogeneous inclusion within a homogeneous space .the latter contains an inclusion denoted as which is localized in the neighborhood of a curve .that is , where the supporting is a simple , smooth curve in , is the unit normal to at , and is a strictly positive constant which specifies the thickness of the inclusion ( small with respect to the wavelength ) , refer to figure [ thininclusion ] . throughout this paper , we denote be the unit tangent vector at .sketch of the thin inclusion in two - dimensional space .,scaledwidth=35.0% ] in this paper , we assume that every material is characterized by its dielectric permittivity and magnetic permeability at a given frequency .let and denote the permittivity and permeability of the embedding space , and and the ones of the inclusion .then , we can define the following piecewise constant dielectric permittivity and magnetic permeability respectively . note that if there is no inclusion , i.e. , in the homogeneous space , and are equal to and respectively . in this paper, we set and for convenience . at strictly positive operation frequency ( wavenumber ) ,let be the time - harmonic total field which satisfies the helmholtz equation similarly , the incident field satisfies the homogeneous helmholtz equation as is usual , the total field divides itself into the incident field and the scattered field , . notice that this unknown scattered field satisfies the sommerfeld radiation condition uniformly in all directions . in this paper, we consider the illumination of plane waves and far fields in free space where is a two - dimensional vector on the unit circle in . for convenience ,we denote to be a discrete finite set of observation directions and be the set of incident directions .the far - field pattern is defined as a function which satisfies as uniformly on and .then , based on , can be written as an asymptotic expansion formula . for and , the far - field pattern can be represented as where is uniform in , , and is a symmetric matrix defined as follows : let and denote unit tangent and normal vectors to at , respectively . then * has eigenvectors and . *the eigenvalue corresponding to is . 
*the eigenvalue corresponding to is . now , we introduce subspace migration for imaging of a thin inclusion ; a detailed description can be found in . in order to introduce it , we generate an $n\times n$ multi - static response ( msr ) matrix , indexed by $j , l=1,\cdots , n$ , whose complex elements are the measured far - field patterns . in order to describe the thin inclusions , two smooth curves are selected : the first is parametrised over $-0.5\leq s\leq0.5$ , and the second is $$\sigma_2={\left\{[s+0.2,\,s^3+s^2 - 0.6]^{t}~:~-0.5\leq s\leq0.5\right\}}.$$ the thickness of the thin inclusions is equally set to . we denote and to be the permittivity and permeability of , respectively , and the parameters , , and are set to and , respectively . since and are set to unity , the applied frequencies read as at wavelength for , which is varied in the numerical examples between and . the incident directions are selected as for $l=1,2,\cdots , n$ , and the total number of directions is chosen as . since , the dominant eigenvectors of are , and has been selected for generating ( [ vecw ] ) . for a more detailed discussion , we recommend a recent work ( section 4.3 ) . in order to show robustness , white gaussian noise with a prescribed signal - to - noise ratio ( snr ) is added to the unperturbed far - field pattern data via the standard matlab command ` awgn ` included in the ` communications system toolbox ` package . for discriminating the non - zero singular values of the msr matrix , a discrimination scheme is applied for each frequency ( see for instance ) . throughout this section , only the case of both permittivity and permeability contrast is considered . in figure [ figure1 ] , imaging results via , , and are exhibited when the thin inclusion is . comparing these results , we can immediately observe unexpected artifacts in the maps of and , but , as expected , they have dramatically disappeared in the map of . ( figure [ figure1 ] : maps of ( left ) , ( center ) , and ( right ) when the thin inclusion is . ) now , let us consider the imaging results of in figure [ figure2 ] . here we observe the same phenomenon as in figure [ figure1 ] . although the proposed algorithm successfully eliminates artifacts , some part of can not be made visible . this is due to the selection of for ; when , it is immensely different from . hence , finding an optimal choice remains an interesting subject . ( figure [ figure2 ] : same as figure [ figure1 ] except that the thin inclusion is . ) now , let us extend the proposed algorithm to the imaging of two ( or more ) different thin inclusions .
for the sake of simplicity , we denote . figure [ figurem1 ] shows maps of , , and for with the same permittivities and permeabilities . it is interesting to observe that , unlike the case of a single inclusion , where almost all unexpected artifacts are eliminated , the proposed imaging functional does not improve on the traditional ones and . hence , further analysis is needed to identify the reason . ( figure [ figurem1 ] : maps of ( left ) , ( center ) , and ( right ) when the thin inclusion is with the same material property . ) for the final example , we consider the imaging of multiple inclusions with different properties and . the numerical results are exhibited in figure [ figurem2 ] . comparing with the results in figure [ figurem1 ] , we observe that , as for the imaging of a single inclusion , almost every artifact has disappeared in the map of while some of them still remain in the maps of and . ( figure [ figurem2 ] : same as figure [ figurem1 ] except with different material properties . ) to conclude this section , let us present some remarks . from the derivation of theorem [ structurewmsm ] , it follows that the number of incident and observation directions and the number of applied frequencies have to be large enough . furthermore , the selection of in ( [ vecw ] ) must follow the unit normal direction on the supporting curve . on the other hand , we point out that if there are two inclusions with the same material property , our analysis and observations are no longer valid . in fact , in the derivation of the asymptotic expansion formula in the presence of multiple inclusions , one must assume that they are well separated from each other . hence , we expect that if the distance between two inclusions is sufficiently large , the result will be good , but this does not guarantee an improvement of the proposed algorithm ; finding a method of improvement is still required . finally , we believe that although the results in this paper do not recover the complex shape of thin inclusions , due to the intrinsic rayleigh resolution limit , they can serve as good initial guesses for a level - set method or a newton - type reconstruction algorithm , refer to and references therein . in the present study , we have proposed a multi - frequency subspace migration weighted by the natural logarithmic function for imaging of thin , crack - like electromagnetic inclusions . this is based on the asymptotic expansion formula of the far - field pattern in the presence of such inclusions and the structure of the msr matrix constructed at multiple frequencies . through careful analysis and numerical experiments , it is confirmed that the proposed method successfully improves on traditional approaches . however , a counter example was discovered when one tries to find the shape of multiple inclusions with the same material properties ; investigating the reason will be interesting future work . furthermore , for achieving the best imaging of inclusions , finding _ a priori _ information about the supporting curve , e.g.
, unit outward normal vector , should be a remarkable research . in this paper , we considered the imaging of inclusions located in the homogeneous space but based some recent works , subspace migration can be applicable for imaging of targets buried in the half - space .hence , an extension to the half - space problem is expected . and ,similarly to , the improvement considered herein can be extended to the limited - view inverse scattering problems .00 r. acharya , r. wasserman , j. stevens , and c. hinojosa , biomedical imaging modalities : a tutorial , comput .medical imag . graph ., 19 ( 1995 ) , 325 .d. lvarez , o. dorn , n. irishina and m. moscoso , crack reconstruction using a level - set strategy , j. comput .( 2009 ) , 57105721 .h. ammari , g. bao , and j. flemming , an inverse source problem for maxwell s equations in magnetoencephalography , siam j. appl ., 62 ( 2002 ) , 13691382 . h. ammari , e. bonnetier and y. capdeboscq , enhanced resolution in structured media , siam j. appl ., 70 ( 2009 ) , 14281452 . h. ammari , j. garnier , h. kang , w .- k .park and k. slna , imaging schemes for perfectly conducting cracks , siam j. appl .math , 71 ( 2011 ) , 6891 .h. ammari , e. iakovleva and d. lesselier , a music algorithm for locating small inclusions buried in a half - space from the scattering amplitude at a fixed frequency , multiscale model .simul . , 3 ( 2005 ), 597628 .h. ammari , h. kang , e. kim , k. louati , m. vogelius , a music - type algorithm for detecting internal corrosion from electrostatic boundary measurements , numer .( 2008 ) , 501528 .h. ammari , h. kang , h. lee and w .- k .park , asymptotic imaging of perfectly conducting cracks , siam j. sci .comput . , 32 ( 2010 ) , 894922 .s. r. arridge , optical tomography in medical imaging , inverse problems , 15 ( 1999 ) , r41r93 . e. beretta and e. francini , asymptotic formulas for perturbations of the electromagnetic fields in the presence of thin imperfections , contemp ., 333 ( 2003 ) , 4963 . g. bao , s. hou and p. li , inverse scattering by a continuation method with initial guesses from a direct imaging algorithm , j. comput .phys . , 227 ( 2007 ) , 755762 . l. borcea , g. papanicolaou and c. tsogka , subspace projection filters for imaging in random media , c. r. mecanique , 338 ( 2010 ) , 390401 . m. burger , b. hackl and w. ring , incorporating topological derivatives into level set methods , j. comput .phys . , 194 ( 2004 ) , 344362 .x. chen and k. agarwal music algorithm for two - dimensional inverse problems with special characteristics of cylinders , ieee trans .antennas propag ., 56 ( 2008 ) , 18081812 .x. chen and y. zhong , music electromagnetic imaging with enhanced resolution for small inclusions , inverse problems , 25 ( 2009 ) , 015008 .m. cheney and d. isaacson , distinguishability in impedance imaging , ieee trans . biomed ., 39 ( 1992 ) , 852860. m. cheney , d. isaacson , and j. c. newell , electrical impedance tomography , siam rev ., 41 ( 1999 ) , 85101 .d. colton and a. kirsch , a simple method for solving inverse scattering problems in the resonance region , inverse problems , 12 ( 1996 ) , 383 .a. j. devaney , super - resolution processing of multi - static data using time - reversal and music , available at http://www.ece.neu.edu/faculty/devaney/ajd/preprints.htm .o. dorn and d. lesselier , level set methods for inverse scattering , inverse problems , 22 ( 2006 ) , r67r131 .t. d. dorney , j. l. johnson , j. v. rudd , r. g. baraniuk , w. w. symes and d. m. 
mittleman , terahertz reflection imaging using kirchhoff migration , opt .( 2001 ) , 15131515 .a. s. fokas , y. kurylev , and v. marinakis , the unique determination of neural currents in the brain via magnetoencephalography , inverse problems , 20 ( 2004 ) , 10671082 .r. griesmaier , reciprocity gap music imaging for an inverse scattering problem in two - layered media , inverse probl .imag . , 3 ( 2009 ) , 389403 . y. a. gryazin , m. v. klibanov , t. r. lucas , two numerical methods for an inverse problem for the 2-d helmholtz equation , j. comput .phys . , 184 ( 2003 ) , 122 - 148 . s. hou , k. huang , k. slna and h. zhao , a phase and space coherent direct imaging method , j. acoust ., 125 ( 2009 ) , 227238 . s. hou , k. slna and h. zhao , imaging of location and geometry for extended targets using the response matrix , j. comput . phys . , 199 ( 2004 ) , 317338 .d. isaacson , distinguishability of conductivities by electric current computed tomography , ieee trans .medical imag . , 5 ( 1986 ) , 9195 .d. isaacson and m. cheney , effects of measurements precision and finite numbers of electrodes on linear impedance imaging algorithms , siam j. appl ., 51 ( 1991 ) , 17051731 .a. ishimaru , t .- k . chan and y. kuga , an imaging technique using confocal circular synthetic aperture radar , ieee trans ., 36 ( 1998 ) , 15241530 .a. ishimaru , c. zhang , m. stoneback and y. kuga , time - reversal imaging of objects near rough surfaces based on surface flattening transform , waves random complex media , 23 ( 2013 ) , 306 - 317 y .- d .joh , y. m. kwon , j. y. huh and w .- k .park , structure analysis of single- and multi - frequency subspace migrations in inverse scattering problems , prog ., 136 ( 2013 ) , 607622 . y .-d . joh , y. m. kwon , j. y. huh and w .- k . park , weighted multi - frequency imaging of thin , crack - like electromagnetic inhomogeneities , poster session , proceeding of progress in electromagnetics research symposium in taipei , ( 2013 ) , 631635 .d . joh and w .- k .park , an optimized weighted multi - frequency subspace migration for imaging perfectly conducting , arc - like cracks , proceedings of the 6th international congress on image and signal processing , 1 ( 2013 ) , 250255 .o. kwon , e. j. woo , j. r. yoon , and j.k .seo , magnetic resonance electrical impedance tomography ( mreit ) : simulation study of algorithm , ieee trans . biomed ., 49 ( 2002 ) , 160167. o. kwon , j. r. yoon , j. k. seo , e. j.woo , and y. g. cho , estimation of anomaly location and size using impedance tomography , ieee trans . biomed .engr . , 50 ( 2003 ) , 8996 . y. m. kwon and w .- k . park , analysis of subspace migration in limited - view inverse scattering problems , appl .lett . , 26 ( 2013 ) , 11071113 . m. l. moran , r. j. greenfield , s. a. arcone and a. j. delaney , multidimensional gpr array processing using kirchhoff migration , j. appl .geophys . ,43 ( 2000 ) , 281295 .park , analysis of a multi - frequency electromagnetic imaging functional for thin , crack - like electromagnetic inclusions , appl ., 77 ( 2014 ) , 3142 .w .- k . park , non - iterative imaging of thin electromagnetic inclusions from multi - frequency response matrix , prog .res . , 106 ( 2010 ) , 225241 .w .- k . park , on the imaging of thin dielectric inclusions buried within a half - space , inverse problems , 26 ( 2010 ) , 074008 .w .- k . park and d. lesselier , electromagnetic music - type imaging of perfectly conducting , arc - like cracks at single frequency , j. 
comput .228 ( 2009 ) , 80938111 .w .- k . park and d. lesselier , fast electromagnetic imaging of thin inclusions in half - space affected by random scatterers , waves random complex media , 22 ( 2012 ) , 323 .k . park and d. lesselier , music - type imaging of a thin penetrable inclusion from its far - field multi - static response matrix , inverse problems , 25 ( 2009 ) , 075002 .w .- k . park and d. lesselier , reconstruction of thin electromagnetic inclusions by a level set method , inverse problems , 25 ( 2009 ) , 085010 .w .- k . park and t. park , multi - frequency based direct location search of small electromagnetic inhomogeneities embedded in two - layered medium , comput ., 184 ( 2013 ) , 16491659 . w. rosenheinrich , tables of some indefinite integrals of bessel functions , 2011 , available at http://www.fh-jena.de/~rsh/forschung/stoer/besint.pdf f. santosa , a level set approach for inverse problems involving obstacles , esaim contr . optim .var . , 1 ( 1996 ) , 1733 . v. sen , m. k. sen and p. l. stoffa , pvm based 3-d kirchhoff depth migration using dynamically computed travel - times : an application in seismic data processing , parallel comput . , 25 ( 1999 ) , 231248 .j. k. seo , o. kwon , h. ammari , and e. j. woo , mathematical framework and anomaly estimation algorithm for breast cancer detection using ts2000 configuration , ieee trans .biomedical engineering , 51 ( 2004 ) , 18981906 .e. taillet , j. f. lataste , p. rivard , and a. denis , non - destructive evaluation of cracks in massive concrete using normal dc resistivity logging , ndt & e int . ,63 ( 2014 ) , 1120 .g. ventura , j. x. xu and t. belytschko , a vector level set method and new discontinuity approximations for crack growth by efg , int .j. numer . meth .engng , 54 ( 2002 ) , 923944 .y. zhong and x. chen , music imaging and electromagnetic inverse scattering of multiply scattering small anisotropic spheres , ieee trans .antennas propag . , 55 ( 2007 ) , 35423549 .
the present study seeks to investigate mathematical structures of a multi - frequency subspace migration weighted by the natural logarithmic function for imaging of thin electromagnetic inhomogeneities from measured far - field pattern . to this end , we designed the algorithm and calculated the indefinite integration of square of bessel function of order zero of the first kind multiplied by the natural logarithmic function . this is needed for mathematical analysis of improved algorithm to demonstrate the reason why proposed multi - frequency subspace migration contributes to yielding better imaging performance , compared to previously suggested subspace migration algorithm . this analysis is based on the fact that the singular vectors of the collected multi - static response ( msr ) matrix whose elements are the measured far - field pattern can be represented as an asymptotic expansion formula in the presence of such inhomogeneities . to support the main research results , several numerical experiments with noisy data are illustrated . multi - frequency subspace migration , weighted by natural logarithmic function , thin electromagnetic inhomogeneities , multi - static response ( msr ) matrix , numerical experiments .
one success of network science has been to identify that some complex systems can be simplified by considering just the topology of the pairwise interactions between their parts . abstracting a complex system as a graph can bring physical insights and predictive power .yet these graphs can still be very complicated .network geometry is an approach which further abstracts the system by modelling the nodes of the network as points in a geometric space .most existing approaches use riemannian spaces , the simplest example of which is euclidean space . random geometric graphs ( rgg ) are graphs embedded in euclidean space .recently there has been much interest in geometric approaches to the study of networks in non - euclidean spaces .embedding in hyperbolic spaces can yield scale free , clustered networks with community structure illustrating the remarkable power that geometric approaches have to recover complex network properties .a well established geometric approach to data analysis is _multidimensional scaling _( mds ) , a technique to give data expressed as distances or similarities a spatial representation . in most mds analysis ,the space used for that spatial representation has been euclidean , and the technique , as usually described , requires a riemannian manifold , where the triangle inequality is maintained .mds has been used in network science , to fit models of rggs to networks from real data , for example , from protein interactions .normally , the mds algorithm takes as an argument pairwise distances between objects , so when applying it to simple networks , where only binary pairwise relations exist , these distances have to be inferred from the network structure . in the simplest euclidean case ( as in ) , the shortest path on the network is used as an estimate for the distances between vertices , from which mds is used to calculate coordinates .once these coordinates have been calculated , a new rgg can be built from them , and if it is similar to the original graph , the initial geometric assumption was a good one . in this paper we will consider networks where each node is associated with a particular time and directed edges between nodes represent causal relations .such a network forms a * directed acyclic graph * ( dag) .instead of embedding a network in geometric space alone , the causal ordering of the nodes in a dag suggests that an embedding in space and time is needed .the causal structure of such a network has the same constraints as the causal structure of spacetime as used in special and general relativity .this suggests that the geometries used in relativity , which are pseudo - riemannian are the appropriate ones to use because of the special properties of a time dimension . in particular , we will consider _ lorentzian spacetimes _ , a special case of pseudo - riemannian manifolds in which there is one time dimension with some number of spatial dimensions .euclidean space , being flat and isotropic , is the simplest riemannian manifold .analogously , flat isotropic spacetime is minkowski spacetime , the simplest lorentzian manifold . 
in this paper , we first generalise classical mds to allow converting distances into coordinates for pseudo - riemannian manifolds . we then show how this allows embedding of dags in minkowski spacetime . to test the method we generate causal sets by sprinkling points uniformly at random into a finite region of minkowski spacetime . we will denote the coordinates of point by , , where is the time coordinate . we then construct the causal set graph by placing an edge from to if and directing that edge from the past to the future . the fact that edges in the graph now represent causal relations illustrates why the graph is necessarily a dag , as closed causal loops are forbidden . figure [ fig : eigenval ] shows the distribution of the eigenvalues for causal sets in , , and dimensions , and for a random dag . in all cases , one large negative eigenvalue is seen , corresponding to the one timelike dimension . for the causal sets , this timelike dimension is the one time dimension in the minkowski spacetime they are embedded in . for the random dag , this corresponds to the time ordering which could be created as a consequence of the acyclic property of this graph . we observe large positive eigenvalues corresponding to the spatial dimensions in each causal set , illustrating that the coarse graining of the causal set does not prevent the mds algorithm from successfully identifying the principal components of the space . for a given number of points , higher dimensionality will mean fewer relations between the causal set's elements , and since these relations are the information used by the mds algorithm , its ability to cleanly pick out dimensions diminishes as increases . however , for the random dag there is no clear separation of large eigenvalues , suggesting there is no natural dimension for an embedding in minkowski spacetime . to use mds on a network we must estimate the separation matrix using the network's structure . for euclidean mds , the separation is always a non - negative number and the shortest path between nodes in the graph is a natural and effective estimator for the distance . however , in minkowski spacetime the separation of points is not always positive , so the number of steps along some path is not going to be a measure of all lorentzian separations . the solution is to estimate spacelike and timelike separations separately when studying a dag . suppose we have two connected nodes in the graph , meaning they are timelike separated . it was conjectured in and later shown in that for timelike separated points and in , the length of the _ longest path _ between two connected nodes , say and , is proportional to their timelike separation ( in the limit of long separations , and where the longest path must respect the edge's direction ) . so in this case we set . finding the distance between spacelike pairs is more challenging and to our knowledge there is no solution as easily calculated as the longest path is for timelike pairs . good approximations are known however , and we will use a very simple one , described in as ` naive spatial distance ' . suppose we have two disconnected points and in , meaning they are spacelike separated . we then look for a pair of nodes and , where is in the future of both and while is in the past of both and , within some maximal distance which is a parameter of the algorithm . in the examples shown here , we used the length of the longest path in the graph as this parameter .
we then chose nodes and so as to minimise the length of the longest path between and ( which are necessarily connected , via paths through and ) . the timelike separation between and is then used as an estimate for the spacelike separation between and . figure [ fig : causal_set ] shows an example of how we estimate timelike and spacelike separations . this estimate is simple and at first appealing , but fails in more than two dimensions for large graphs ( hence the ` naive ' in the name ) . nonetheless we find it is sufficient for our purposes ( in larger graphs too many cases had no two - links and so the spacelike separation could not be calculated ) . this is partly because it is inaccurate only for large causal sets but also because in the mds algorithm each point's coordinates are fixed by many distances , both timelike and spacelike , which limits the effect of noise from some poor estimations of spacelike separations . we can then set for the chosen and . these timelike and spacelike distances define our separation matrix ( where timelike separation has the sign in our conventions ) . finally we use the algorithm described in the previous section to assign coordinates in some -dimensional minkowski spacetime to each vertex in the dag . given a causal set graph , it is possible , in principle and for large enough , to recover all properties of the spacetime ( up to a factor of the density of the sprinkling ) . in minkowski spacetime there is only one parameter to recover , , and this can be estimated for the process described above and for dags in general . our task here is to recover not properties of the manifold in which the nodes are embedded , but to find the full details of that embedding . if the graph was originally made by sprinkling points into , then we know that an exact embedding is possible ( since the original sprinkled coordinates must be a solution ) , and so the embedding algorithm should approximately recover the original coordinates ( up to distance preserving factors ) . if the graph was not , then we may only be able to find an approximate embedding . as is the case for classical mds , our lorentzian mds is guaranteed to recover the coordinates of points when the exact distances are used as the algorithm's input . however , when distances are estimated using graph topology , the pairwise separations will be noisy as they are coarse grained by the discrete graph ( see figure [ fig : m_mds ] ) . to assess the reliability of this algorithm we will test it first on the causal set model described above . we will take the coordinates produced by the lorentzian mds algorithm and use them to rebuild the graph by again placing edges only between timelike separated pairs . if the overlap between the edges of the recreated graph and those of the original graph is high , the embedding is an accurate one , and similarly if the overlap is low the embedding is poor .
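to make the separation estimates above concrete , the sketch below builds a small sprinkled causal set in 1 + 1 dimensional minkowski space and then estimates squared separations from the graph alone : the longest directed path for timelike pairs and the naive two - link construction for spacelike pairs . this is a minimal sketch , not the authors' implementation ; the function and variable names are ours , the proportionality constant between longest path length and proper time is ignored , the maximal - distance parameter mentioned above is omitted , and the brute - force search is only suitable for small graphs .

```python
import numpy as np
import networkx as nx

def longest_path_lengths_from(G, source, topo):
    """Longest directed path length (in edges) from `source` to every node it can reach."""
    dist = {source: 0}
    for u in topo:
        if u not in dist:
            continue
        for v in G.successors(u):
            if dist.get(v, -1) < dist[u] + 1:
                dist[v] = dist[u] + 1
    return dist

def estimate_separations(G):
    """Squared Lorentzian separations for a DAG: negative for timelike (causally
    related) pairs, positive for spacelike pairs via the 'naive' two-link estimate."""
    nodes = list(G.nodes())
    topo = list(nx.topological_sort(G))
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = np.full((n, n), -1, dtype=int)           # longest path u -> v in edges, -1 if unreachable
    for u in nodes:
        for v, d in longest_path_lengths_from(G, u, topo).items():
            L[idx[u], idx[v]] = d
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if L[i, j] > 0 or L[j, i] > 0:       # timelike pair: use the longest path
                t = max(L[i, j], L[j, i])
                S[i, j] = S[j, i] = -float(t) ** 2
            else:                                # spacelike pair: naive spatial distance
                best = None
                for c in range(n):               # c in the common future of i and j
                    if L[i, c] <= 0 or L[j, c] <= 0:
                        continue
                    for d in range(n):           # d in the common past of i and j
                        if L[d, i] <= 0 or L[d, j] <= 0:
                            continue
                        if best is None or L[d, c] < best:
                            best = L[d, c]       # minimise the timelike separation of (d, c)
                S[i, j] = S[j, i] = float(best) ** 2 if best is not None else 0.0
    return nodes, S

# toy example: a causal set sprinkled into a unit box of 1+1 dimensional Minkowski space
rng = np.random.default_rng(1)
pts = rng.random((40, 2))                        # columns: (t, x)
G = nx.DiGraph()
G.add_nodes_from(range(len(pts)))
for a in range(len(pts)):
    for b in range(len(pts)):
        dt = pts[b, 0] - pts[a, 0]
        if dt > abs(pts[b, 1] - pts[a, 1]):      # b lies in the causal future of a
            G.add_edge(a, b)
nodes, S = estimate_separations(G)
```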
as in , we will measure this using the sensitivity ( the fraction of correct edges which were predicted ) and the specificity ( the fraction of correct non - edges which were predicted ) . ( figure : embedding of the top most cited papers in the ` hep - th ` citation network . node size is proportional to the number of citations , and lines correspond to citations amongst these papers . a large group of papers is visible in the middle of the plot , forming a long chain of citations , as well as some more isolated papers on either side . a small number of spacelike citations are visible ( those edges more than from vertical ) because this two - dimensional embedding is not perfect , but only the optimal set of coordinates found by the mds algorithm . ) our mds algorithm is able to find accurate embeddings for randomly sprinkled causal sets . we can now attempt to embed networks formed from real social systems , and here we will use citation networks from the arxiv ( 2003 kdd cup datasets ) and the us supreme court , as well as random dags for comparison . recall that the dimensionality of the embedding is something we can choose , by selecting the largest eigenvalues in the mds algorithm . to measure the effectiveness of the embedding we will compare the original network with a new network generated from the coordinates determined by the mds algorithm . we want to measure the effectiveness of a classifier which predicts edges in the network from the mds coordinates . to do this we will use the established method of the area under the receiver operating characteristic ( roc ) curve , the auc . varying a continuous parameter , the sensitivity and specificity of the embedding are measured and plotted , as in figure [ fig : auc_example ] , and the area under this curve describes the quality of the classifier . the continuous parameter we will vary is the speed of light ( or the speed at which information can be transferred ) in the minkowski space , . previously , we have set , but varying this speed will change which nodes are connected in the new network generated from the mds coordinates . now , nodes and are connected if their coordinates satisfy . for small values of , very few nodes are connected and so the specificity is high ( few false positives ) but the sensitivity is low ( many false negatives ) . for large values of , many nodes are connected and so the reverse is true . ( figure : sensitivity and specificity values for dimensional embeddings of networks . where the curve reaches the top left of the plot we are in the regime ( denoted by the large black point on each curve ) where the trade - off between false positives and false negatives is balanced . the shape of these roc curves measures the effectiveness of the embedding and it is clear that the causal set performs best , the random dag worst and the citation networks in between . ) we will measure the ease of embedding a network by taking the mean of the area under this curve for networks of size - . in the case of the citation networks this was done by randomly sampling intervals of this size from the citation network .
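the sensitivity / specificity sweep just described can be sketched directly . the exact connection criterion was elided above ; the condition used below , that a pair is predicted connected when the squared time difference times the squared speed exceeds the squared spatial distance , is the natural reading and should be treated as an assumption , as should the function and variable names .

```python
import numpy as np

def roc_sweep(coords, true_pairs, c_values):
    """Sensitivity and specificity of links predicted from embedded coordinates as the
    'speed of light' c is varied. coords: (n, d) array with column 0 the time
    coordinate; true_pairs: set of frozenset({i, j}) for the causally related pairs."""
    n = coords.shape[0]
    sens, spec = [], []
    for c in c_values:
        tp = fp = tn = fn = 0
        for i in range(n):
            for j in range(i + 1, n):
                dt2 = (coords[i, 0] - coords[j, 0]) ** 2
                dx2 = np.sum((coords[i, 1:] - coords[j, 1:]) ** 2)
                predicted = c * c * dt2 > dx2          # assumed timelike (connected) condition
                actual = frozenset((i, j)) in true_pairs
                tp += predicted and actual
                fp += predicted and not actual
                tn += (not predicted) and (not actual)
                fn += (not predicted) and actual
        sens.append(tp / (tp + fn) if (tp + fn) else 1.0)
        spec.append(tn / (tn + fp) if (tn + fp) else 1.0)
    fpr = 1.0 - np.array(spec)
    order = np.argsort(fpr)
    auc = np.trapz(np.array(sens)[order], fpr[order])  # area under the ROC curve
    return np.array(sens), np.array(spec), auc
```

the coordinates could be those returned by a lorentzian mds embedding ( a sketch of that step follows the appendix derivation below ) , and the true pairs are the causally related pairs of the original graph .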
in the cases of the causal set graph and the random dags , many instances of the required sizes are stochastically generated . ( figure : the causal sets are sprinkled into -dimensional spacetime , then embedded back into that same dimensional spacetime . their auc values therefore represent the ideal case , where we know an embedding is possible . the random dags show the worst embeddings . their auc values represent the success of an embedding for a graph which has no structure . they are noticeably more embeddable in higher dimensions , which is because there are more degrees of freedom in which to assign coordinates while maintaining the randomly placed links . the bars show means and standard deviations of the auc for random dags of size to . citation networks from the arxiv and the us supreme court fall between these extremes , illustrating the presence of some structure which makes them easier to embed in spacetime . we sampled intervals with between and nodes randomly from each citation network , and the bars show means and standard deviations for the auc in each case . ) finding a good geometric embedding of a network provides a powerful tool for the analysis of that network as it allows standard geometric techniques and intuition to be used . calculations of network properties can be made more efficient ; for example , when finding optimal routes from one node to another , the node coordinates provide local information which can improve routing algorithms . coordinates resulting from geometric embedding also provide a natural visualisation for a network . such visualisations are used in bibliometrics to help identify distinct fields or assist literature reviews . in the case of citation analysis , where we conjecture that the spatial dimensions that result from a geometric approach correspond to similarity in the topic of a paper , our approach yields spatial similarities between papers while accounting for the time difference in their publication . once embedding coordinates are known , the idea that nodes may be ` similar ' can be expressed as nodes being close by using an appropriate metric , which need not be the lorentzian one we used in the construction of the coordinates . two nodes might be spacelike separated , so have no direct links , yet be close in a euclidean sense because mds calculates coordinates globally using information from all vertices and edges . papers could be recommended if spatially close to others a reader has interest in , even if there are no local connections between them , potentially bringing work or authors to the attention of readers who are not aware of them . spatial similarity can also be used to define clusters . the idea of ` centrality ' or importance of a node has a natural representation in terms of the density of points in the geometric neighbourhood of the point linked with the chosen node . another use of this approach is where edges in a network are placed primarily according to some geometric rule but their connections are also governed by some smaller second order effect .
it may only be possible to measure the smaller effect once we have accounted for the primary geometric one by assigning coordinates using lorentzian mds or a similar method . we can see this effect clearly when the geometric embedding is one in real geographic space , such as in , where accounting for geographic distance in phone - call data allows more accurate prediction of the second order effect of shared language . other approaches to geometric embedding exist in the literature . mds is characterised by maintaining global separations between pairs . others , such as isomap , maintain shortest paths between pairs linked by local interactions . that approach may not be as appropriate in pseudo - riemannian geometries . firstly , the shortest path locally may not correspond to the geodesic distance as it does in euclidean space ; as discussed above , in graphs embedded in minkowski space it is the longest path which corresponds to the geodesic . secondly , the idea of local neighbours is less clearly defined if there are different types of separation , or if , as is the case for minkowski space , the number of nearest neighbours diverges . another class of embedding approaches is probabilistic , such as stochastic neighbour embedding . although it is beyond the scope of this work , we do not see why such approaches could not be adapted to pseudo - riemannian manifolds , and the ability to use a mixture of separated images for the same object may prove very useful . finally we note that inserting a metric signature into the equations for classical mds allows it to be used on any metric signature , even though we have focused only on the lorentzian signature here . to our knowledge this pseudo - riemannian output is a new development , although some kinds of manifold learning techniques exist which can take pseudo - riemannian manifolds as their input . we note that when performing the lorentzian network mds algorithm we often find multiple negative eigenvalues , suggesting that embeddings in spaces with more than one timelike dimension are also possible , as are potential embeddings into lorentzian manifolds other than minkowski space , incorporating curvature or preferred directions . j. dall and m. christensen . _ random geometric graphs_. phys . rev . e , 66(1):016121 , july 2002 . m. penrose . _ random geometric graphs_. oxford university press , 2003 . z. xie and t. rogers . _ scale - invariant random geometric graphs_. arxiv:1505.01332 , 2015 . d. krioukov , f. papadopoulos , m. kitsak , a. vahdat , and m. boguna . _ hyperbolic geometry of complex networks_. phys . rev . e , 82(3):036106 , 2010 . d. krioukov , m. kitsak , r. sinkovits , d. rideout , d. meyer , and m. boguna . _ network cosmology_. scientific reports , 2:793 , 2012 . j. r. clough and t. s. evans . _ what is the dimension of citation space ?_ physica a , 448:235247 , 2016 . zhihao wu , giulia menichetti , christoph rahmede , and ginestra bianconi . _ emergent complex network geometry_. scientific reports , 5:10073 , 2015 . trevor f. cox and m. a. a. cox . _ multidimensional scaling , second edition_. crc press , 2000 . d. j. higham , m. rasajski , and n. przulj . _ fitting a geometric graph to a protein - protein interaction network_. bioinformatics , 24(8):10931099 , 2008 . j. r. clough , j. gollings , t. v. loach , and t. s. evans . _ transitive reduction of citation networks_. complex networks , 3(2):189:203 , 2015 . s. w. hawking and g. f. r. ellis . _ the large scale structure of space - time_. cambridge university press , 1973 . g. brightwell and m. luczak .
_ the mathematics of causal sets_. arxiv:1510.05612 , 2015 . roger n. shepard ._ attention and the metric structure of the stimulus space_. journal of mathematical psychology , 1(1):5487 , 1964 .h. minkowski ._ die grundgleichungen fur die elektromagnetischen vorgange in bewegten korpern _ , 1910 .l. bombelli , j. lee , d. meyer , and r. d sorkin ._ space - time as a causal set_. phys .lett . 59(5):521524 , 1987 .l. bombelli and d. meyer ._ the origin of lorentzian geometry_. physics letters a , 141(5 - 6):226228 , 1989 .f. dowker ._ causal sets as discrete spacetime_. contemporary physics , 47(1):19 , 2006 .j. myrheim ._ statistical geometry_. cern2538 , 1978 .b. bollobs and g. brightwell ._ box - spaces and random partial orders_. transactions of the ams , 324(i):5972 , 1991 .g. brightwell and r. gregory ._ structure of random discrete spacetime_. phys .66(3 ) , 1991 . d. rideout and p. wallden ._ spacelike distance from discrete causal order_. classical and quantum gravity , 155013:32 , 2009 .f. dowker , j. henson , and r. d. sorkin ._ quantum gravity phenomenology , lorentz invariance and discreteness_. modern physics letters a , 19(24):18291840 , 2004 .d. d. reid ._ manifold dimension of a causal set : tests in conformally flat spacetimes_. phys .d , ( october 2002):18 , 2003 .j. h. fowler and s. jeon . _ the authority of supreme court precedent_. social networks , 30(1):1630 , 2008 .d. krioukov , f. papadopoulos , a. vahdat , and m. boguna . _ curvature and temperature of complex networks_. phys .e , 80(3):035101 , 2009 .n. j. van eck and l. waltman ._ citnetexplorer : a new software tool for analyzing and visualizing citation networks_. journal of informetrics , 8(4):802 - 823 , 2014 .p. expert , t. s. evans , v. d. blondel , and r. lambiotte ._ uncovering space - independent communities in spatial networks ._ pnas , 108(19):76638 , 2011 .j. b. tenenbaum , v. de silva , and j. c. langford ._ a global geometric framework for nonlinear dimensionality reduction ._ science , 290(5500):231923 , 2000 .s. t. roweis and l. k. saul ._ nonlinear dimensionality reduction by locally linear embedding . _ science , 290(5500):23236 , 2000 .g. e. hinton and s. t. roweis ._ stochastic neighbor embedding_. advances in neural information processing systems , 833840 , 2002 .d. liu , s. trajanovski , and p. van mieghem ._ reverse line graph construction : the matrix relabeling algorithm marinlinga versus roussopoulos s algorithm_. arxiv:1005.0943 , 2010 .r. liu , z. lin , z. su , and k. tang ._ feature extraction by learning lorentzian metric tensor and its extensions_. pattern recognition , 43(10):32983306 , 2010 .following closely the derivation for euclidean mds in we begin with the timelike ( negative ) and spacelike ( positive ) square distances between each pair of points , and wish to derive the inner product matrix , where .we will denote the minkowski separation between and as . as usualwe first fix the coordinates centre of mass , placing it at the origin , such that for each in a -dimensional space . remembering that with a diagonal matrix with in the first row and in every other row ( the minkowski metric ) we have : the last term on the right vanishingif the order of the sums is reversed due to the condition we stipulated above . doing the same summing over as well gives : we can now write down each element in the matrix we are trying to calculate . 
$$\begin{aligned}
b_{ij} &= -\frac{1}{2}\left[ s_{ij} - \mathbf{x}_i^{\mathrm{T}}\mathbf{G}\,\mathbf{x}_i - \mathbf{x}_j^{\mathrm{T}}\mathbf{G}\,\mathbf{x}_j \right] \\
b_{ij} &= -\frac{1}{2}\left[ s_{ij} - \frac{1}{n}\sum_{i=1}^{n} s_{ij} + \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i^{\mathrm{T}}\mathbf{G}\,\mathbf{x}_i - \frac{1}{n}\sum_{j=1}^{n} s_{ij} + \frac{1}{n}\sum_{j=1}^{n}\mathbf{x}_j^{\mathrm{T}}\mathbf{G}\,\mathbf{x}_j \right] \\
b_{ij} &= -\frac{1}{2}\left[ s_{ij} - \frac{1}{n}\sum_{i=1}^{n} s_{ij} - \frac{1}{n}\sum_{j=1}^{n} s_{ij} + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} s_{ij} \right]
\end{aligned}$$
which gives us the matrix from the distances . in the euclidean case all of the eigenvalues of are positive ( or zero ) . in general , the signs of the eigenvalues of the matrix will follow the signature of the metric . for any metric , is symmetric and so can be decomposed into orthogonal eigenvectors and eigenvalues . we aim to find a solution to the equation . trying , where is some real diagonal matrix , gives . assuming that the metric is diagonal , we then have that , and since is real , the signs of the elements of must equal those of .
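to make the derivation above concrete , here is a minimal numerical sketch of the pseudo - riemannian mds step . it assumes the squared separations have already been assembled into a matrix with the timelike - negative sign convention used in this paper ; the variable names are ours , and the eigenvalue selection ( the single most negative eigenvalue for the time direction plus the largest positive ones for space ) is our reading of the text rather than a verbatim reproduction of the authors' code .

```python
import numpy as np

def lorentzian_mds(S, dim=2):
    """Pseudo-Riemannian MDS sketch: double-centre the squared-separation matrix S
    (negative entries = timelike), eigendecompose, and build coordinates from one
    timelike and (dim - 1) spacelike eigenvectors. Returns (coords, eigenvalues)."""
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ S @ J                       # b_ij = -1/2 (s_ij - row mean - col mean + grand mean)
    evals, evecs = np.linalg.eigh(B)           # B is symmetric; eigenvalues in ascending order
    time_axis = [int(np.argmin(evals))]        # most negative eigenvalue -> the timelike direction
    space_axes = [int(i) for i in np.argsort(evals)[::-1][:dim - 1]]
    keep = time_axis + space_axes
    lam = evals[keep]
    X = evecs[:, keep] * np.sqrt(np.abs(lam))  # x = E |Lambda|^{1/2}; the signs of lam give the signature
    return X, lam
```

used together with the separation - estimation sketch given earlier , `X, lam = lorentzian_mds(S, dim=2)` should return a leading negative eigenvalue when the data admit a one - time - dimension embedding , with column 0 of the coordinates playing the role of time .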
geometric approaches to network analysis combine simply defined models with great descriptive power . in this work we provide a method for embedding directed acyclic graphs into minkowski spacetime using multidimensional scaling ( mds).first we generalise the classical mds algorithm , defined only for metrics with a euclidean signature , to manifolds of any metric signature . we then use this general method to develop an algorithm to be used on networks which have causal structure allowing them to be embedded in lorentzian manifolds . the method is demonstrated by calculating embeddings for both causal sets and citation networks in minkowski spacetime . we finally suggest a number of applications in citation analysis such as paper recommendation , identifying missing citations and fitting citation models to data using this geometric approach .
fetal distress is generally used to describe the lack of the oxygen of the fetus , which may result in damage or death if not reversed or the fetus is delivered immediately .thus the emergent action by discriminating the fetal distress is an important issue in the obstetrics . in particular, it is important to discriminate the normal fetal heart rate(fhr ) and two types of fetal distress groups(the presumed distress and the acidotic distress ) and reduce the wrong diagnosis rate in which about 65% of the presumed distress fetus(not serious but in need of careful monitoring ) are diagnosed as the acidotic distress fetus(serious , needing an emergent action ) , experiencing a useless surgical operation . in this paper, we try to discriminate the normal and pathologic fetal heart rate group with a robust and reliable method .+ the cardiovascular system of the fetus is a complex system .the complex signal , the heart rate , from the complex system contains enormous information about the various functions .the incoming information , on which various heart activities are projected , can not be linearly separated into each function .but we may be able to estimate the signal from a pathologic fetal heart which contains information on the weakness or a complete loss of a particular function .based on this concept , multiple time scale characteristic of heart dynamics have received much attention . as a quantitative method ,the multi - scale entropy , which quantifies multi - time scale complexity in the heart rate , was introduced and widely applied to the classification between normal and pathologic groups and also different age groups .it was found that in a specific time scale region , the normal and the pathologic adult heart rate groups and the different age effects on the heart rhythms are significantly distinguished .+ the multiple - scale characteristic in the fetal heart rate is due to the autorhythmicity of its different oscillatory tissues , its interaction with other neural controller and the maternal circulatory system and other neural or hormonal activities , which have a wide range of time scales from secondly to yearly . + in our previous work on the multiple - scale analysis of heart dynamics , we extended the analysis with the time scale to both the event and time scales .we found that the event scale in heart dynamics plays more important role than the time scale in classifying the normal and pathologic adult groups . in this paper , therefore , we will investigate characteristic event or time scales of heart dynamics of the normal and the fetal distress group in order to determine the criteria for classifying the normal and the fetal distress groups .+ previous works on the fetal heart rate were based on the various nonlinear measures such as the multi - scale entropy , approximate entropy , power spectral density and detrended fluctuation analysis .they were able to significantly differentiate the mean of normal and pathologic fhr groups but their classification performance was poor , which prevent practical applications of these methods . 
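the multi - scale entropy mentioned above is usually computed by coarse - graining the rr series over a range of time scales and evaluating the sample entropy of each coarse - grained series . the sketch below follows that commonly used recipe and is only an illustration of the measure referred to in the text , not the implementation used in the cited studies ; the tolerance and scale range are assumed parameter choices .

```python
import numpy as np

def sample_entropy(x, m=2, r=0.1):
    """Sample entropy SampEn(m, r) of a 1-d series, following the standard definition."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def match_count(mm):
        # count template pairs (i != j) whose Chebyshev distance is below r
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d < r) - 1        # subtract the self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2):
    """Coarse-grain the series at each scale (non-overlapping averages) and
    compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()                        # tolerance fixed from the original series (common choice)
    result = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        result.append(sample_entropy(coarse, m=m, r=r))
    return np.array(result)
```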
in this paper , we investigate the typical scale structure of each fhr group by scanning both the event and time scale regions in order to find an appropriate scale region for classifying the normal and the pathologic fetal groups in an optimal way . + we introduce a new analysis method , called the unit time block entropy ( utbe ) , which scans both the event and time scale regions of the heart rate based on symbolic dynamics to find the characteristic scale region of the normal and the pathologic heart rate groups . this method matches the unevenly sampled rr interval data length and the measurement time of the heart rate data set simultaneously , where the rr interval means the time duration between consecutive r waves of the electrocardiogram ( ecg ) . in most previous studies the number of rr interval sequences of all data sets is fixed in spite of a large variability in the measurement time . using the utbe method , we can directly compare the entropy of data sets without any ambiguity caused by the nonstationarity and noise effects that inevitably appear in data sets with different data lengths or measurement times . we find that the normal and the two pathologic fhr groups are reliably discriminated . + in section 2 , we introduce the normal and pathologic fhr data sets and their linear properties . in section 3 , the new analysis method , called unit time block entropy ( utbe ) , is introduced and applied to the fetal heart rate data set . in section 4 , we show that the normal , the presumed distress and the acidotic distress fhr groups can be discriminated through the utbe method in some characteristic scale regions . finally , we end with the conclusion . ( figure 1 : the rr interval time of three fetuses . ( a ) a normal fetus . ( b ) a presumed distress fetus . ( c ) an acidotic distress fetus . ( d ) the mean and standard deviation of normal , presumed and distress rr interval sequences in units of seconds . ( e ) the log - log plot of rr interval acceleration . the log - log distribution of rr acceleration shows a power - law distribution . )
+ the fetal heart rate is acquired from 77 pregnant women who were placed under computerized electronic fetal monitoring during the ante partum and intra partum periods . the fetal heart rate is digitized from the data received from the corometrics 150 model ( corometrics , connecticut , usa ) through the catholic computer - assisted obstetric diagnosis system ( ccaod ; dobe tech , seoul , korea ) . the computerized electronic fetal heart rate monitoring was done on the fetal heart rate data two hours before the delivery , from each pregnant woman without any missing data . first , the 77 pregnant women are divided into two groups : 36 women into the normal fetus group , who showed normal heart rate tracings , and 41 women with abnormal fetal heart rate tracings ( severe variable , late deceleration , bradycardia , or tachycardia , and delivery by caesarean section ) . the heart rate tracing has been done using as a standard criterion to determine the normal or pathologic state of a fetus . after an immediate delivery by caesarean section , the umbilical artery of the fetus is examined . then , the 41 women were further divided into the presumed distress group , with 26 women whose umbilical artery ph was higher than 7.15 , and the acidotic distress group , with 15 women whose umbilical artery ph was lower than 7.15 and whose base excess was lower than . by this umbilical artery test we can retrospectively know whether a fetus was in a dangerous situation or not . in this work , the wrong diagnosis rate is , with 26 women having undergone a useless surgical operation . the improvement of this wrong diagnosis rate through the presurgical classification of the different groups based on the fhr data is the main goal of this work . + in order to treat errors in the measuring equipment and ectopic beats , we selected the interval that did not have any missing data and manually treated the ectopic beats using a linear interpolation ; the treated beats amount to less than of all heart beats selected . we replaced the ectopic beats with resampled data around the vicinity of , using the average sampling frequency of each heart rate sequence . the resampling , which is usually used for even sampling of the unevenly measured heart beat , is applied here only for replacing ectopic beats . however , this method inevitably produces a number of unexpected artificial effects on the original rr sequences ; for example , it can cause distortion of the short term correlation of the heart rate data . thus a new method is developed and applied to overcome these problems , as in the following section . + before applying the nonlinear method , we investigate conventional linear properties such as the mean and the standard deviation and their student t - test between the healthy and pathologic groups ( normal , presumed distress and acidotic distress ) . in fig.1(a)-(c ) , the rr interval times of three fetuses are presented . the normal fetus shows typically small , fast variations , while the pathologic fetuses show relatively slower , larger variations .
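a minimal sketch of the linear - property comparison reported next is given below ; it is an illustration , not the authors' code , and the group arrays are placeholders with assumed values .

```python
# Sketch: per-subject mean and standard deviation of RR intervals, compared
# between two groups with a Student t-test. All data below are placeholders.
import numpy as np
from scipy import stats

def rr_linear_features(rr_sequences):
    """Per-subject mean and standard deviation of RR intervals (seconds)."""
    means = np.array([np.mean(rr) for rr in rr_sequences])
    stds = np.array([np.std(rr) for rr in rr_sequences])
    return means, stds

normal = [np.random.normal(0.42, 0.02, 1000) for _ in range(36)]    # placeholder RR data
presumed = [np.random.normal(0.45, 0.03, 1000) for _ in range(26)]  # placeholder RR data

m_n, s_n = rr_linear_features(normal)
m_p, s_p = rr_linear_features(presumed)
t_mean, p_mean = stats.ttest_ind(m_n, m_p)
t_std, p_std = stats.ttest_ind(s_n, s_p)
print(f"mean RR: p={p_mean:.3g};  std RR: p={p_std:.3g}")
```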
in fig.1(d ) and ( e ) , linear properties such as the mean , the standard deviation and the log - log distribution of rr acceleration from the normal , the presumed distress and the acidotic distress groups are presented .the result of the t - test on the mean and the standard deviation distributions shows that three fhr groups are distinguishable by these two linear properties , resulting in significant p - values for the mean and the standard deviation[p - value(for the mean / the standard deviation ) : the normal and the presumed distress( ) , the normal and the acidotic distress( ) , the presumed distress and the acidotic distress(0.28/0.903 ) ] .although their significant difference in the mean and the standard deviation , their classification performance based on the sensitivity and specificity is poor . in the next section , we will compare the linear properties with the nonlinear ones obtained from the utbe and the multi - scale entropy method . in fig.1(e ), the rr interval acceleration shows a power - law distribution .this suggests that the rr interval acceleration rather than rr interval has a nonlinear characteristics which can be best utilized in classifying the different fhr groups .the investigation of linear properties such as the mean and the standard deviation of the rr sequence shows that the heart dynamics of the normal fetus group is faster than that of the pathologic fetus group , leading to a significant difference in the mean .however , the total measurement time shows a large variability for the 77 subjects studied . in fig.[fig : fig1 ] , the mean and the standard deviation of the measurement time of all rr interval sequence is and the time difference between the shortest one and the longest one is . in treating the rr interval sequence, we typically face this contradictory situation .if we try to match the number of rr interval sequence in advance , the measurement time for all rr interval sequences becomes different .if we try to match the measurement time in advance , the number of rr interval sequence becomes different .this naturally occurs for unevenly sampled data such as the rr interval sequence . in order to solve this problem , we used the unit time block entropy(utbe ) , which simultaneously matches both the measurement time and the number of rr interval sequence for all subjects .the variation of measurement time after fixing the number of rr interval sequence in healthy and pathologic fhr groups .the mean and standard deviation of subjects is (min ) , it shows a large variation . ]the utbe estimates the entropy of the symbolic sequence composed at the specific event and time scale of a heart beat sequence .since it matches both the measurement time and the number of words of the alphabet from the rr sequence , we can make a direct comparison of all involved fhr groups reliably .+ first , we define a word sequence from the rr interval sequence . to construct a word, we use a unit time windowing method , in which each word is constructed in a given unit time window , so the number of symbols involved in each word can be different between windows .+ the rr interval sequence is given by and the rr interval acceleration is by .a word is defined as follows a word composed by a unit time windowing method consists of n(i ) symbols .the number of symbols , n(i ) , is different at each unit time scale , satisfying .a symbol , which is used to construct a word , is 1 for larger than and 0 , otherwise . 
here, is a unit time for constructing a word and is a rr interval threshold or a rr acceleration threshold for binary symbolization .thus a word contains both information about the time scale and the event scale of the heart rate .therefore , it defines a state of specific scale events of the cardiac system during a unit time .the event scale varies with 20 steps from 5% to 95% of the cumulative rank for all rr acceleration values from normal and pathologic hr data set .the time scale varies with 20 steps from 1 second to 10 seconds , which 1 second is sufficiently short to avoid an empty word .+ fig.[fig : fig2 ] briefly illustrates how to compose a word sequence from a rr interval sequence . in the block windowing case ,a word is defined regularly with the block size n=2 and the block is shifted to the one step next . in the unit time windowing case , a word is defined with a unit time and the unit time windows shifted by 0.5 second to define the next word , allowing the window overlap .the window shifting time , 0.5 second , is appropriately chosen to accumulate an enough number of sampled words . in this way, the unit time widowing method can match the number of words and the total measurement time of different rr intervals sets , while the block windowing method can not . through this procedurewe can investigate the complexity of the symbolic rr sequence composed at each time and event scale combination .+ with the above word constructing method , we can compose a word sequence from a rr interval sequence .let be a finite , nonempty set which we refer to as an alphabet .a string or word over w is simply a finite sequence of elements of w. an alphabet w has the following relation , where is the set of all words over w , including the empty word , , and the set of all words over w of length n or less and the set of all words over w of length n. the number of possible words of and are and , respectively . and s is a s - ary number , which in this paper is binary(s=2 ) . andthe number of possible symbols in a word , , varies with the unit time , which is used to construct a word . is the smallest rr interval time and is an integer number . since depends on , which is sensitive on the ectopic beat or short term noisy beats ,noise reduction should be done carefully in preprocessing .the word sequence composed by the unit time windowing method is a natural generalization of one based on the block windowing method .+ unit time windowing and block windowing method : two different ways to compose a word sequence from a rr interval sequence .the upper case presents how to compose a word sequence from a rr interval sequence with the unit time windowing method , on the other hand , the lower case presents how to compose a word sequence with block windowing method . the former focus on unit time to define a word but the latter focus on the number of symbols , in this example, the block size is n=2 , to define a word . ]+ based on this unit time windowing method , we calculate the symbolic entropy , called the utbe , which quantifies complexity of a word sequence at specific event and time scale region .the entropy , , is a function of the time scale and the event scale . where is the estimation of the frequency of word and is the occurrence number of a word and is the total number of sampled words . 
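to make the construction above concrete, the following sketch builds the binary word sequence with a unit time window shifted by 0.5 seconds and computes the block entropy from the empirical word frequencies. the placeholder rr series, the use of the acceleration magnitude as the symbolization variable, and the particular threshold value are our own assumptions for illustration; in the actual analysis the threshold is set by the cumulative rank of the rr acceleration over all data sets.

```python
import numpy as np
from collections import Counter

def utbe(rr, unit_time, threshold, shift=0.5):
    """unit time block entropy (in bits) of an rr-interval sequence.

    rr        : rr intervals in seconds
    unit_time : length of the unit time window (seconds)
    threshold : event-scale threshold applied to the rr acceleration
    shift     : shift between consecutive unit time windows (seconds)
    """
    rr = np.asarray(rr, dtype=float)
    beat_times = np.cumsum(rr)           # occurrence time of each beat
    accel = np.diff(rr)                  # rr acceleration r_{i+1} - r_i
    # binary symbolization; comparing |accel| to the threshold is an assumption
    symbols = (np.abs(accel) > threshold).astype(int)
    sym_times = beat_times[1:]           # time stamp attached to each symbol

    words = []
    t = sym_times[0]
    while t + unit_time <= sym_times[-1]:
        mask = (sym_times >= t) & (sym_times < t + unit_time)
        if mask.any():                   # skip empty windows
            words.append(''.join(map(str, symbols[mask])))
        t += shift

    counts = Counter(words)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return -np.sum(probs * np.log2(probs))

# placeholder rr series (seconds) and an ad hoc threshold, for illustration only
rng = np.random.default_rng(0)
rr = 0.45 + 0.02 * rng.standard_normal(2000)
print(utbe(rr, unit_time=2.8, threshold=0.01))
```

in the analysis below, such a quantity would be evaluated on the 20 x 20 grid of (time scale, event scale) pairs for every subject.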
to calculate the exact probability of a word , infinite words are considered with for s=2 , utbe varies between a lower bound and a upper bound at each time scale .the lower bound occurs for the completely regular case and the upper bound occurs for the completely random case where all words are equally probable .+ in this paper , to construct an appropriate word ensemble from a rr sequence , the unit time scale is chosen up to 5 steps(about 2.8 seconds ) for the utbe . at this time scale ,the largest number of symbols contained in a word from our data set is 8 , which contribute to 2.9% of the population .the number of possible words is and the number of sampled words from our rr interval sequence is 3,313 at this unit time scale . in the followings ,we show that among two scales , the event scale is more significant for classification of healthy and two pathologic fhr groups than the time scale .+ + + in fig.4 , we applied the utbe method to the three fhr groups and show the result of the search for all event and time scale regions . in order to compare utbe distributions between the normal and two pathologic groups , all p - values of the student t - test in the parameter planeare presented in fig .4(a - f ) .since the scale characteristics of three groups are not known a priori , we scanned all event and time scales for optimization of the classification . for the presumed distress and the acidotic distress groups in the fig.[fig : fig4](a ) and fig.[fig : fig4](d ) , the event sequences composed by only the 50 - 80% cumulative rank region in the rr interval acceleration are significantly distinguished over all time scales( ) .this suggests that the presumed distress and the acidotic distress groups have relatively different characteristics scale complexity on this specific rr interval acceleration region .these two groups could not be discerned by their linear properties[the p - value of ( mean / standard deviation):(0.28/0.903 ) ] . for the normal and the acidotic distress groups in fig.[fig : fig4](b ) and fig.[fig : fig4](e ) , the event sequences composed by most rr interval acceleration scales except the 50 - 75% cumulative rank region are distinguished in their complexity( ) , while in event sequences composed by only narrow rr interval scale regions , with 5% and 95% cumulative rank ones , show the difference .it suggests that the rr interval acceleration of the normal and the acidotic distress group contains significant information on different internal dynamics of the cardiac system , while the rr interval does not . for the normal and the presumed distress groups in fig.[fig : fig4](c ) and fig.[fig : fig4](f ) , most rr interval scales could not discriminate these two groups except narrow regions , with 5 - 35% ranks and 90 - 95% cumulative ranks , while a wide rr acceleration scale region below the 50% cumulative rank significantly discriminates these two groups( ) .+ the above results suggest that the normal , the presumed distress and the acidotic distress groups exhibit characteristic scales with typical dynamics in those scale regions .the information on the characteristic event and time scale regions of three different groups can be used to determine the optimal parameters for the classification of different group . 
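the scan of the parameter plane described above amounts to a grid of two-sample student t-tests. a minimal sketch, assuming the utbe values have already been computed for every subject at every (event scale, time scale) pair (here replaced by random placeholder numbers), is the following; only the group sizes match those quoted in section 2.

```python
import numpy as np
from scipy import stats

n_event, n_time = 20, 20          # 20 x 20 grid of event and time scales

# placeholder arrays: utbe[group][subject, event_index, time_index]
rng = np.random.default_rng(1)
utbe = {
    'normal':   rng.normal(7.0, 0.5, size=(36, n_event, n_time)),
    'presumed': rng.normal(6.5, 0.5, size=(26, n_event, n_time)),
    'acidotic': rng.normal(6.0, 0.5, size=(15, n_event, n_time)),
}

def pvalue_map(group_a, group_b):
    """student t-test p-value at every (event, time) cell of the grid."""
    pmap = np.empty((n_event, n_time))
    for i in range(n_event):
        for j in range(n_time):
            _, p = stats.ttest_ind(utbe[group_a][:, i, j],
                                   utbe[group_b][:, i, j])
            pmap[i, j] = p
    return pmap

pmap = pvalue_map('normal', 'acidotic')
best = np.unravel_index(np.argmin(pmap), pmap.shape)
print('most discriminating cell (event, time index):', best, 'p =', pmap[best])
```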
between the two types of scales, the event scale is more effective than the time scale in classifying these groups. when two groups are discriminated in an event scale region, they are also well discriminated over most time scales. these results suggest that the rr interval acceleration contains typical information about the cardiac dynamics of the three groups, whereas the rr interval does not. since slow or fast acceleration of the cardiac system is associated with fetal vagal activity and the maternal-fetal respiratory exchange system, this may provide some clues as to which functional differences of the cardiac system cause the differences between the healthy and pathological groups.

in fig.[fig:fig4](g)-(i), the best sensitivity and specificity in the parameter plane are presented for the four measures of statistics and complexity, for comparison.

the sensitivity measures, when a utbe value is taken as a threshold for classifying two groups (a, b), the percentage of the subjects in group a that are correctly classified by that threshold. the specificity measures the percentage of the subjects in the other group b that are correctly excluded from group a. thus if the sensitivity and the specificity are both 100%, the two groups are completely classified by the given threshold. here, the best sensitivity and specificity are obtained by calculating them at all points of the parameter space, varying the threshold from the minimum to the maximum value of the calculated utbe set; the best pair is determined as the highest values along the diagonal in the sensitivity-specificity plane. as a result, the utbe using the rr interval acceleration as a threshold provides the best performance in the classification of the presumed distress and the acidotic distress, the normal and the acidotic distress, and the normal and the presumed distress groups. surprisingly, for the normal group versus the two types of distress groups, the utbe using the rr interval acceleration threshold completely discriminates the groups, with a sensitivity of 100% and a specificity of 100%, which could not be achieved with the two linear properties or with the utbe using the rr interval threshold. for the presumed distress and the acidotic distress groups, both utbes lead to the same result (sensitivity = 71.4%, specificity = 72%).

[figure 5: (a) the apen with error bars for the different fhr groups. the normal group (bold solid line) has the highest complexity, while the presumed distress (dotted line) and the acidotic distress (bold dotted line) groups have lower apen values at each time scale. (b) the best sensitivity and specificity for all pairs of fhr groups: (94%, 95%) for the normal and the presumed distress groups (solid line), (85%, 86%) for the normal and the acidotic distress groups (bold dotted line), and (48%, 48%) for the presumed distress and the acidotic distress groups (dotted line).]
in this section, we compare the performance of the utbe with that of the multi-scale entropy (mse), which has been widely used in heart rate analysis. the multi-scale entropy calculates the approximate entropy (apen) or the sample entropy of the heart rate data at different scales, measuring the regularity of the data. since the same data length (n = 4061) is used in order to remove the dependency on the data length, the approximate entropy is chosen instead of the sample entropy. first, in order to calculate the apen, the rr sequence of length n (= 4061) is divided into segments of length n and the mean value is calculated for each segment. with the coarse-grained sequence at each scale n, the apen is computed with embedding dimension m = 2 and a fixed delay. in the calculation of the multi-scale entropy, we use two types of r values: one is determined from the original data at n = 1, and the other is variable and determined separately at each scale n. by using the variable r(n) we can remove the effect of the variance change due to the coarse-graining process, where sd(n) denotes the standard deviation of the coarse-grained sequence at scale n. since the multi-scale entropy in the two cases leads to similar results, we present here only the first case. in fig.5(a) and (b), we present the performance of the multi-scale entropy in the classification of the three fetal heart rate groups. in fig.5(a), the best p-values for each pair of groups in the student t-test show that the normal group differs significantly from both the acidotic distress and the presumed distress groups, while p = 0.143 (n = 1) for the presumed distress vs the acidotic distress. this result shows that the mean multi-scale entropy values are significantly different between the normal and the two pathologic groups, but that the mean multi-scale entropy values of the two pathologic groups are not distinguishable. in order to check the possibility of classification, we investigate the sensitivity and specificity as for the utbe. fig.5(b) presents the best sensitivities and specificities selected from the calculation over all scales: the best sensitivity and specificity is (94%, 95%) at the scale n = 4 for the normal and the presumed distress case, (85%, 86%) for the normal and the acidotic distress case, and (48%, 48%) for the presumed distress and the acidotic distress case. the classification performance of the multi-scale entropy is not better than that of the utbe in any of the classification cases for the three groups. this is because the utbe searches all the event and time scales to find the optimal classification of the different characteristics of the healthy and the two pathologic data sets, while the multi-scale entropy searches only the time scale.
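for comparison, a bare-bones version of the multi-scale entropy used above (coarse-graining followed by the approximate entropy with m = 2 and a delay of one sample) might look like the sketch below; the tolerance r is fixed from the original series, one of the two conventions mentioned in the text, and the factor 0.15 is our own choice rather than the value used in the paper.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """approximate entropy of sequence x with embedding dimension m and tolerance r."""
    x = np.asarray(x, float)
    n = len(x)

    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        counts = []
        for t in templates:
            dist = np.max(np.abs(templates - t), axis=1)   # chebyshev distance
            counts.append(np.mean(dist <= r))               # includes self-match
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

def multiscale_entropy(rr, max_scale=10, m=2):
    rr = np.asarray(rr, float)
    r = 0.15 * np.std(rr)            # tolerance fixed from the original data (scale n=1)
    mse = []
    for n in range(1, max_scale + 1):
        # coarse-grain: average over non-overlapping blocks of length n
        trimmed = rr[:len(rr) // n * n].reshape(-1, n).mean(axis=1)
        mse.append(apen(trimmed, m=m, r=r))
    return mse

# short placeholder series for illustration; the paper uses n = 4061 beats
rng = np.random.default_rng(3)
rr = 0.45 + 0.02 * rng.standard_normal(1000)
print(multiscale_entropy(rr, max_scale=4))
```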
[figure 6: (a) the normal and the pathologic fhr groups are distinguished at a specific time and event scale (3 sec, 5% rank); the two groups are divided by the threshold utbe = 7.8 bits. (b) the presumed distress and the acidotic distress fhr groups are divided by the threshold utbe = 1.4 bits. the marker "e" indicates the three fetuses in each group whose umbilical artery ph values are closest to 7.15.]

in this section, using the difference in the characteristic event scale between the normal and the two pathologic groups, we distinguish the normal, the presumed distress and the acidotic distress groups systematically. in fig.4(e) and (f), the normal group is significantly distinguished from the presumed distress group and the acidotic distress group in the lower event scale region. in fig.4(d), the presumed distress group and the acidotic distress group are distinguished in the relatively higher event scale region. with these characteristics, we can therefore construct a strategy to systematically differentiate these groups based on the utbe. first, as in fig.6(a), we separate the normal and the pathologic groups by a utbe threshold of 7.8 bits, determined as the smallest utbe value of the normal group, at the specific time and event scale (3 seconds, 5% rank). with this procedure, we can differentiate the normal and the pathologic groups with 100% accuracy. then, for the fetuses deviating from the normal group, which usually have a lower utbe, we try to separate the presumed distress and the acidotic distress groups. in this case, we set the threshold utbe to 1.4 bits at the time and event scales given by the first step (1 sec) and the tenth step (about 50% rank), respectively. these time and event scales are selected from the scan of the parameter space in fig.4(d), in which the two groups are well distinguished in the scale region above the tenth event scale. with this threshold, 9 acidotic distress fetuses out of 14 are distinguished. however, some ambiguity still remains. the clinical determination of the presumed distress and the acidotic distress fetuses was carried out using the umbilical artery ph. in fig.6(b), we mark with "e" the three fetuses in each group whose umbilical artery ph values are closest to the threshold value of 7.15. the range of umbilical artery ph values measured is from 6.863 to 7.38. the ph values of the six marked fetuses are 7.229, 7.24 and 7.226 in the presumed distress group, and 7.15, 7.16 and 7.144 in the acidotic distress group. since all the pathologic fetuses were delivered by caesarean surgery after several signs of distress (severe variable or late deceleration, bradycardia, or tachycardia), it is not certain whether all the acidotic distress fetuses would have progressed to an emergency state, or whether all the presumed distress fetuses would have remained safe. therefore, if the fetuses marked with "e" are excluded from the analysis, the characteristics of the two groups become clearer. as a result, most acidotic distress fetuses are less complex than most presumed distress fetuses in these time and event scale regions.
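the two-stage rule described above can be written down directly. the utbe values at the two scale combinations are assumed to be precomputed for each fetus; the thresholds of 7.8 bits and 1.4 bits are the ones quoted in the text, and the direction of each comparison follows the observation that the pathologic groups have the lower utbe.

```python
def classify_fetus(utbe_3s_5pct, utbe_1s_50pct,
                   thr_normal=7.8, thr_acidotic=1.4):
    """two-stage classification of a fetus from two precomputed utbe values.

    utbe_3s_5pct  : utbe at time scale 3 s, event scale 5% rank
    utbe_1s_50pct : utbe at time scale 1 s, event scale ~50% rank
    stage 1 separates normal from pathologic; stage 2 separates the
    presumed distress from the acidotic distress fetuses.
    """
    if utbe_3s_5pct >= thr_normal:
        return 'normal'
    if utbe_1s_50pct <= thr_acidotic:
        return 'acidotic distress'
    return 'presumed distress'

# illustrative values only
print(classify_fetus(8.4, 2.0))   # -> 'normal'
print(classify_fetus(6.9, 1.1))   # -> 'acidotic distress'
print(classify_fetus(7.1, 1.9))   # -> 'presumed distress'
```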
in the comparison of three groups , two pathologic groups are less complex than the normal group , while in the comparison of two pathologic groups the acidotic distress group is less complex than the presumed group .we find that in order to distinguish the normal and pathologic fetuses a small event scale(5% rank ) and a relatively large time scale(3 sec ) of fetal heart dynamics is useful . on the other hand , in order to distinguish the presumed distress and the acidotic distress fetuses a large event scale(about 50 % rank ) and a relatively small time scale(1 sec ) of fetal heart dynamics is more appropriate . with these scale regions, we were able to reduce the wrong diagnosis rate from to .in this paper , we investigated the event and time scale structure of the normal and pathologic groups .we also introduced and calculated the utbe method in the appropriate event and time scale region to distinguish the three groups . to extract meaningful information from the data set ,the scale structure of the rr interval acceleration is found to be more helpful than that of the rr interval . in the comparison of the utbe over all event and time scale regions, we found that the normal , the presumed distress and the acidotic distress groups have relatively different event scale structures in the rr interval acceleration . in particular , for the normal and two pathologic groups the utbe from the rr acceleration threshold completely classifies these groups in a chosen scale regions , although both linear properties and utbe using the rr interval threshold performs worse . in the case of the presumed distress and the acidotic distress groups, it also provides better classification performance than other measures .the comparison with the multi - scale entropy also shows that the utbe method performs better .it is due to the fact that the utbe approach searches the event and time scale region , while the multi - scale entropy method searches only the time scale .+ based on the difference in the scale structure , we are able to systematically distinguish three fhr groups .the normal and the pathologic groups are separated in a small event scale(5% rank ) and a large time scale(3 sec ) .then , the fetuses deviating from the normal group are separated into the presumed fetuses and the acidotic fetuses at a relatively large event scale(50% rank ) and a small time scale(1sec ) . from these scale regions ,we reduce the wrong diagnosis rate significantly .+ the results suggest that the utbe approach is useful for finding the characteristic scale difference between healthy and pathologic groups .in addition , we can make a more reliable comparison between all fetuses by simultaneous matching of the measurement time and the number of words .this approach can be applied to the other unevenly sampled data taken from complex systems such as biomedical , meteorological or financial tick data . in this study, we also reconfirm that the more pathological a fetus is the less complex its dynamics , following the pathological order , from the acidotic distress , the presumed distress and to the normal fetus .this indicates that in spite of the peculiarity of the fetal cardiac system , the generic notion of the complexity loss can be applied to the fetal cardiac system .+ but these results come from a retrospective test under the well elaborated condition . 
in a practical view point ,the prospective test is necessary to confirm the selected scale regions and its classification performance .we will further test these results with more subjects for the purpose of practical application of this analysis method . we thank dr . sim and dr.chung in dep . obstetrics and gynecology , medical college , catholic univ . for their assistance on clinical data and helpful comments .this work has been supported by the ministry of education and human resources through the brain korea 21 project and the national core research center program .99 j. altimiras , comp .physiol . a * 124*,447 ( 1999 )s. havlin , s. v. buldyrev , a. bunde _et al . _ , physica a * 273 * , 46 ( 1999 ) m. costa , a. goldberger and c .- k .peng , phys .rev . lett . * 89 * , 068102 ( 2002 ) d. r. chialvo , nature * 419 * , 263 ( 2002 ) m. costa , a. goldberger and c - k .peng , comput .* 29 * , 137 ( 2002 ) m. costa and j. a. healey , comput .cardiol . * 30 * , 705 ( 2003 ) m. costa and a. goldberger and c .- k .peng , phys .e * 71 * , 021906 ( 2005 ) c. e. wood and h. tong , am .j. physiol .* 277 * , r1541 ( 1999 ) u. c. lee , s. h. kim and s. h. yi , phys .rev . e**71 * * , 061917 ( 2005 ) g. magenes , m. g. signorini and d. arduini , proc .the second joint embs / bmes conference , october 23 - 26 , 2002 g. magenes , m. g. signorini and m. ferrario _et al . _ ,the 25th annual international conference of the ieee / embs , sep .17 - 21 , 2003 m. g. signorini , g. magenes and s. cerutti__et al ._ _ , ieee trans . biomed .50 * , 365 ( 2003 ) m. e. d. gomes , h. n. guimaraes , a. l. p. ribeiro and l. a. aguirre , comput .32 * , 481 ( 2002 ) the sensitivity is a true positive rate defined as , and the specificity is a true negative rate defined as .( a : true positive , b : false positive , c : false negative , d : true negative ) .more details are available at http://bmj.bmjjournals.com/ or see t. greenhalgh , bmj * 315 * , 540 ( 1997 ) v. v. nikulin and t. brismar , phys .* 92 * , 089803 ( 2004 )
recently , multiple time scale characteristics of heart dynamics have received much attention for distinguishing healthy and pathologic cardiac systems . despite structural peculiarities of the fetal cardiovascular system , the fetal heart rate(fhr ) displays multiple time scale characteristics similar to the adult heart rate due to the autorhythmicity of its different oscillatory tissues and its interaction with other neural controllers . in this paper , we investigate the event and time scale characteristics of the normal and two pathologic fetal heart rate groups with the help of the new measure , called the unit time block entropy(utbe ) , which approximates the entropy at each event and time scale based on symbolic dynamics . this method enables us to match the measurement time and the number of words between fetal heart rate data sets simultaneously . we find that in the small event scale and the large time scale , the normal fetus and the two pathologic fetus are completely distinguished . we also find that in the large event scale and the small time scale , the presumed distress fetus and the acidotic distress fetus are significantly distinguished . event scale , time scale , symbolization , fetal heart rate , dynamics , complexity 87.19.hh , 87.10.+e , 89.75.-k
differential equations with fractional derivatives are used for modeling non - markovian processes in various areas of science such as physics , biology and economics [ 1 - 5 ] .the methods for analytical solution of ordinary and partial fractional differential equations can be applied to only special types of equations and initial conditions . in practical applicationsthe solutions of the fractional differential equations are determined by numerical methods .the development of efficient methods for numerical solution of fractional differential equations is an active research field .an important approximation for the caputo fractional derivative is the approximation .it has been successfully used for numerical solution of ordinary and partial fractional differential equations .when the function is a smooth function , the accuracy of the approximation is . in previous work we determined the second order approximation by modifying the first three coefficients of the approximation with the value of the riemann zeta function at the point , and we computed the numerical solutions of the fractional relaxation and subdiffusion equations with sufficiently differentiable solutions .the solutions of fractional differential equations often have a singularity at the initial point . in this casethe numerical solutions may converge to the exact solution but their accuracy may be lower than the expected accuracy from the numerical analysis . in this paperwe propose a method for improving the accuracy of the numerical solutions of the fractional relaxation equation where , and the fractional subdiffusion equation ,t>0 , & \\u(x,0)=\sin x,\;u(0,t)=0,\;u(\pi , t)=0 .\end{array } \right .\ ] ] when the solutions of the relaxation and subdiffusion equations are smooth functions the accuracy of the numerical solutions using the approximation are and and the numerical solutions using the modified approximation have accuracy and .the solutions of the fractional relaxation and subdiffusion equations and have differentiable singularities at the initial point . the accuracy of the numerical solutions and of the fractional relaxation equation is and the numerical solutions and of the fractional subdiffusion equation have accuracy .equations and are linear fractional differential equations .the form of the equations allows us to compute the miller - ross derivatives of the solutions at the initial point . in section 3 and section 4we use the fractional taylor polynomials to improve the differentiability properties of the solutions and the accuracy of the numerical methods .the caputo derivative of order , where is defined as when the function is defined on the interval ] with its value at the midpoint of the interval . where and when the function has a continuous second derivative , the accuracy of the approximation is ( ) .the modified approximation for the caputo derivative has coefficients for , where is the riemann zeta function .the modified approximation has accuracy . by approximating the caputo derivative at the point with and we obtain the numerical solutions and of the fractional relaxation equation . in table 1 and table 2we compute the maximum error and the order of the numerical solutions and for the following equations equations and have solutions and .the solution of equation has an unbounded second derivative at . while the numerical solutions of equation converge to the exact solution , their accuracy is smaller than .99 a. cartea , d. 
del castillo - negrete , fractional diffusion models of option prices in markets with jumps .physica a , 374(2 ) ( 2007 ) , 749763 .o. marom , e. momoniat , a comparison of numerical solutions of fractional diffusion models in finance , nonlinear analysis : real world applications , 140(6 ) ( 2009 ) , 34353442 .f , mainardi , fractional relaxation - oscillation and fractional diffusion - wave phenomena , chaos , solitons fractals , 7(9 ) ( 1996 ) , 1461 1477. s. i. muslih , om p. agrawal , d. baleanu , a fractional schrdinger equation and its solution , international journal of theoretical physics , 49(8 ) ( 2010 ) , 17461752 . p. m. lima , n. j. ford , and p. m. lumb , computational methods for a mathematical model of propagation of nerve impulses in myelinated axons , applied numerical mathematics 85 , ( 2014 ) , 3853 .w. deng , c. li , numerical schemes for fractional ordinary differential equations , in numerical modeling , peep miidla ( editor ) .intech ; 2012 .k. diethelm , the analysis of fractional differential equations : an application - oriented exposition using differential operators of caputo type .springer ; 2010 .miller , b. ross , an introduction to the fractional calculus and fractional differential equations .john wiley & sons , new york ; 1993 .i. podlubny , fractional differential equations . academic press , san diego ; 1999 .a. el - ajou , o. arqub , z. zhour and s. momani , new results on fractional power series : theory and applications , entropy , 15 , ( 2013 ) , 5305 - 5323. j. cao , c. xu , a high order schema for the numerical solution of the fractional ordinary differential equations , journal of computational physics , 238(1 ) , ( 2013 ) , 154168 .y. dimitrov , numerical approximations for fractional differential equations , journal of fractional calculus and applications , 5(3s ) , ( 2014 ) , no .22 , 145 .y. dimitrov , a second order approximation for the caputo fractional derivative , arxiv:1502.00719 , ( 2015 ) .m. glsu , y. ztrk , a. anapal , numerical approach for solving fractional relaxation oscillation equation , applied mathematical modelling 37 , ( 2013 ) 59275937 .h. hejazi , t. moroney , f. liu , stability and convergence of a finite volume method for the space fractional advection dispersion equation , journal of computational and applied mathematics , 255 , ( 2014 ) , 684 697 .b.jin , r. lazarov and z. zhou , an analysis of the scheme for the subdiffusion equation with nonsmooth data , arxiv:1501.00253 , ( 2015 ) . c. li , a. chen , j. ye , numerical approaches to fractional calculus and fractional ordinary differential equation , journal of computational physics , 230(9 ) , ( 2011 ) , 3352 3368 .y. lin and c. xu .finite difference / spectral approximations for the time - fractional diffusion equation .journal of computational physics , 225(2 ) , ( 2007 ) , 15331552 .r. lin , f. liu , fractional high order methods for the nonlinear fractional ordinary differential equation , nonlinear analysis : theory , methods & applications , 66(4 ) , ( 2007 ) , 856869 .n. f. martins , m. l. morgado and m. rebelo , a meshfree numerical method for the time - fractional diffusion equation , proceedings of the 13th international conference on computational and mathematical methods in science and engineering , cmmse 2013 , 2427 june , 2013 .yuste , j. quintana - murillo , a finite difference method with non - uniform timesteps for fractional diffusion equations , computer physics communications , 182 , 2594 ( 2012 ) j. 
quintana - murillo , s.b .yuste , a finite difference method with non - uniform timesteps for fractional diffusion and diffusion - wave equations , eur .j. special topics 222 , ( 2013 ) , 19871998 .b.jin , r. lazarov and z. zhou , two fully discrete schemes for fractional diffusion and diffusion - wave equations , arxiv:1404.3800 , ( 2015 ) .z. odibat , n. shawagfeh , generalized taylor s formula , applied mathematics and computation 186 , ( 2007 ) , 286293 . t. j. osler , taylor s series generalized for fractional derivatives and applications , siam j. math .anal . , 2(1 ) , ( 1971 ) , 37-48 .j. e. peari , i. peri , h.m .srivastava , a family of the cauchy type mean - value theorems , journal of mathematical analysis and applications , 306(2 ) , ( 2005 ) , 730739 .m. stynes , j. l. gracia , boundary layers in a two - point boundary value problem with a caputo fractional derivative , comput .methods appl ., 15 ( 1 ) , ( 2015 ) , 7995 .sun , x. wu , a fully discrete scheme for a diffusion wave system , applied numerical mathematics , 56(2 ) , ( 2006 ) , 193209 .j. rena , z .- z .sun , maximum norm error analysis of difference schemes for fractional diffusion equations , applied mathematics and computation , 256 , ( 2015 ) 299314 .trujillo , m. rivero , b. bonilla , on a riemann liouville generalized taylor s formula , journal of mathematical analysis and applications , 231(1 ) , ( 1999 ) , 255265 .p. zhuang , f. liu , implicit difference approximation for the time fractional diffusion equation , journal of applied mathematics and computing , 22(3 ) , ( 2006 ) , 8799 .
the accuracy of the numerical solution of a fractional differential equation depends on the differentiability class of the solution . the derivatives of the solutions of fractional differential equations often have a singularity at the initial point , which may result in a lower accuracy of the numerical solutions . we propose a method for improving the accuracy of the numerical solutions of the fractional relaxation and subdiffusion equations based on the fractional taylor polynomials of the solution at the initial point . + * ams subject classification : * 33f05 , 34a08 , 57r10 + * key words : * fractional differential equation , caputo derivative , numerical solution , fractional taylor polynomial .
understanding the collective behaviour of quantum many - body systems remains a central topic in modern physics , as well as one of the greatest computational challenges in science .quantum monte carlo sampling techniques are capable of addressing a large class of ( unfrustrated ) bosonic and spin lattice models , but fail when applied to other models such as frustrated antiferromagnets and interacting fermions due to the so - called sign problem .variational approaches , on the other hand , are sign - problem free but are typically strongly biased towards specific many - body wavefunctions .an important exception is given by the density matrix renormalization group, a variational approach based on the matrix product state ( mps), which is capable of providing an extremely accurate approximation to the ground state of most one - dimensional lattice models .the success of dmrg is based on the fact that an mps can reproduce the structure of entanglement common to most ground states of one - dimensional lattice models . in order to extend the success of dmrg to other contexts ,new tensor networks generalizing the mps have been proposed .for instance , the multi - scale entanglement renormalization ansatz ( mera), with a network of tensors that extends in an additional direction corresponding to length scales , is particularly suited to address quantum critical systems .most significant has also been the proposal of tensor networks for systems in two and higher dimensions , where the mps becomes inefficient .scalable tensor networks include the projected entangled - pair states peps ( a direct generalization of the mps to larger dimensions ) and higher dimensional versions of the mera. they can be used to address frustrated antiferromagnets and interacting fermions , since they are free of the sign problem experienced by quantum monte carlo approaches . in a tensor network state , the size of the tensors is measured by the bond dimension .this bond dimension indicates how many variational coefficients are used .crucially , it also regulates both the cost of the simulation , which scales as for some large power , and how much ground state entanglement the many - body ansatz can reproduce . in the large regime , peps and meraare essentially unbiased methods , but with a huge computational cost that is often unaffordable .more affordable simulations are obtained in the small regime , but there these methods are biased in favour of weakly entangled phases ( e.g. symmetry - breaking phases ) and against strongly entangled phases ( e.g. spin liquids and systems with a fermi surface ) . identifying more efficient strategies for tensor network contraction ,so that larger values of the bond dimension can be used and the bias towards weakly entangled states is suppressed , is therefore a priority in this research area. refs . , proposed the use of monte carlo sampling as a means to decrease computational costs in tensor network algorithms .[ we note that there are other variational anstze , such as so - called correlated product states , entangled plaquette states , and string - bond states , whose contractibility relies on sampling ; see the introduction of ref . for a review ] . 
in a tensor network approach such as mps , mera or peps ,sampling over specific configurations of the lattice allows to reduce the cost of contractions ( for single samples ) from to , where is significantly smaller than , typically of the order of .needless to say , sampling introduces statistical errors .however , if less than samples are required in order to achieved some pre - established accuracy , then overall sampling results in a reduction of computational costs . the proposal of refs . ,is based on computing the overlap of the tensor network state with a product state ( representing the sampled configuration ) .as such , it can not be directly applied to the mera , because the overlap of a mera with a product state can not be computed efficiently .luckily , as discussed in ref . , a sampling strategy specific to unitary tensor networks ( such as mera and unitary versions of mps and tree tensor networks ) is not only possible , but it actually has several advantages .most notably , sampling takes place over configurations of a reduced , effective lattice ; and it is possible to perform perfect sampling , by means of which uncorrelated configurations are drawn directly according to the correct probability . of a translation invariant lattice made of sites and with periodic boundary conditions .the tensors on each layer are identical and their labels are displayed to the left .the tensors ( green rectangles ) are unitary operators ( acting top - to - bottom ) , ( cyan triangles ) are isometric , and ( red circle ) is a normalized ` wavefunction ' .[ fig_mera],width=245 ] the main goals of this paper are to propose a variational monte carlo scheme for the mera and to demonstrate its feasibility .we also discuss possible future applications .let us briefly list some of the highlights of the approach .( i ) in a lattice of size , the sampled configurations correspond to an effective lattice of size ; in this way , the cost of evaluating the expectation value of a local observable scales just as and not as as in refs . , . ( ii )we employ the perfect sampling strategy of ref . , thus avoiding the loses of efficiency in the markov chain monte carlo of refs . , due to equilibration and autocorrelation times .( iii ) variational parameters are optimized while explicitly preserving the unitary constraints that the tensors in the mera are subjected to .this is accomplished by a steepest descent method within the set of unitary tensors , which is much more robust to statistical noise than the singular value decomposition methods employed in mera algorithms without sampling. we demonstrate the performance of our approach by computing an approximation to the ground state of a finite ising chain with transverse magnetic field . for the binary mera under consideration, sampling lowers the costs of elementary contractions from to .we find that the resulting ( approximate ) ground state energy decreases as the number of samples is increased , thus obtaining a demonstration of principle of the approach .we also notice that the number of samples required to achieve a given accuracy increases as the transverse magnetic field approaches its critical value . 
to our knowledge ,this is the first instance of sampling - based optimization of a relatively complex tensor network .previous similar optimizations included that of an mps, which is a considerably simpler tensor network ( with only three - legged tensors ) , and of tensor networks that under sampling break into smaller , simpler tensor networks ( e.g. into mps , single plaquette states , etc). in more complex tensor networks , such as mera and peps , the optimization becomes much harder due to high sensitivity to statistical noise .thus , for instance , ref. spells out a full variational monte carlo approach for peps but uses an alternative method , not based on sampling , in order to optimize the tensors . indeed , in ref .sampling is only used to aid in the computation of expectation values . here , instead , we use sampling both to optimize the mera and to compute expectation values .we emphasize , however , that our results only demonstrate a gain over optimization schemes based on exact contractions ( i.e. without sampling ) in the low accuracy regime , where only a relatively small number of samples are required .the specific mera ( namely binary mera for a one - dimensional lattice ) and low value of ( ) considered here for illustrative purposes implies that the cost per sample is times smaller than an exact contraction .recall that the statistical error decreases only as with the number of samples .if more than samples are required in order to obtain a sufficiently accurate approximation of the exact contraction , then the sampling scheme may be overall less efficient than the exact contraction scheme .the advantage of sampling over exact contraction schemes is expected to be more evident in mera settings where the cost scales with a larger exponent , and for larger values of .in particular , we envisage that the method described in this paper , possibly with further improvements , will improve the range of applicability of mera in two and higher dimensions .the content of this paper is distributed in five more sections . in sec .[ sec_approach ] , we discuss methods for sampling with the mera . in sec .[ sec_optimize ] we propose an optimization scheme using sampling techniques . in sec .[ sec_application ] we benchmark the approach with the quantum ising model . in sec .[ sec_discussion ] we discuss future applications including extensions to higher dimensions and extracting long - range correlations , before concluding in sec . [ sec_conclusion ] .in this section we explain how to use sampling in order to speed - up the computation of expectation values with the mera .we present both complete and incomplete perfect sampling strategies , building on the proposals of ref . for generic unitary tensor networks .we also discuss the importance of the choice of local basis in sampling .we start by reviewing some necessary background material on the mera . 
of the local operator ( yellow rectangle ) , which acts on ( at most )three neighboring sites .the flipped tensors , on the bottom half of the diagram , are the hermitian conjugates of the respective tensors above .the ` causal cone ' is delimited by the dotted line in the left diagram , corresponding to .the tensors outside the causal cone cancel , significantly simplifying the diagram on the right , corresponding to .[ fig_causal_cone ] ] the mera is a variational wavefunction for ground states of quantum many - body systems on a lattice .the state of a lattice made of sites is represented by means of a tensor network made of two types of tensors , called _disentanglers _ and _ isometries_. the tensor network is based on a real - space renormalization group transformation , known as _ entanglement renormalization _ :disentanglers are used to remove short - range entanglement from the system , whereas isometries are used to coarse - grain blocks of site into single , effective sites .an example of a mera on a periodic 1d lattice with sites is depicted in fig .[ fig_mera ] . this structure is called ` binary ' mera because of the 2-to-1 course - graining transformation in each repeating layer . ascending upwards in the figure , the disentanglers remove short - range entanglement in between each course - graining transformation , implemented by isometries until the remaining hilbert space is small enough to deal with directly with some wavefunction .the mera can also be viewed in the reverse starting from the top of fig .[ fig_mera ] , we descend downwards in a unitary quantum circuit , adding ( initially unentangled ) sites in each layer . for instance , let us flow downwards in fig .[ fig_mera ] . to a three - site system in state , we first add three additional unentangled sites , turning it into a six - site system ; and later we add another six unentangled sites , producing the final twelve - site system . this unitary structure can be exploited when calculating the expectation value of a local operator acting on a few neighboring sites . specifically , all tensors not ` causally ' connected to the few sites supporting cancel , as depicted in fig .[ fig_causal_cone ] .the resulting diagram is significantly simpler and can be interpreted as the expectation value of for a state of an effective lattice made of sites ( see fig .[ fig_sampled_wavefunction ] ) .we emphasize that , by construction , therefore , we can evaluate the expectation value by contracting the tensor network corresponding to .the numerical cost of performing this contraction grows linearly with the number of sites in , and thus only logarithmically with the number of site in the original lattice .the dimension of the hilbert space after each course - graining transformation is an adjustable parameter , the _ bond dimension _ , which plays a central role in the present discussion .increasing the bond dimension implies including a larger fraction of the original hilbert space and leads to greater accuracy , but also requires greater computational resources .optimization algorithms to approximate ground states , and to evaluate local expectation values and correlators , are present in the literature. 
the numerical cost of finding an expectation value or performing a single optimization iteration using the binary mera scales as for a translation invariant system .for more complex mera structures , such as those representing 2d lattices , the power of for the cost increases dramatically .for instance , the 2d mera presented in ref . has a numerical cost of , which on current computers restricts . for many systems ,this does not allow for enough entanglement to accurately describe the ground state , limiting the accuracy of the approach . herewe hope to alleviate this problem by reducing the numerical cost as a function of using monte carlo techniques .we will find that the cost of a single sample scales as for binary 1d mera , compared to the cost of the ` exact ' contraction . and represented by the mera .( a ) tensor network for the state of the original lattice . the causal cone is delimited by a discontinuous line .( b ) tensor network for the state of the effective lattice .sites further from the center effectively represent increasingly large length scales in the original lattice .[ fig_sampled_wavefunction ] ] .the dashed lines indicate the indices that will be sampled . on the right - hand - sidewe explicitly write the tensor contractions outside the causal cone as a sum over a complete , orthonormal set of ` wavefunctions ' ( pink circles ) .monte carlo sampling will be performed over this set .each term of the sum can be expressed as the product .[ fig_mera_structure ] ] our goal is to compute the expectation value by contracting the tensor network corresponding to , see fig .[ fig_causal_cone ] .the first step is to re - express the tensor network contraction as a summation over indices corresponding to the sites of the effective lattice , as shown in fig .[ fig_mera_structure ] .we then get where is an orthonormal basis of product states on the effective lattice .we will approximately evaluate the sum in eq .( [ energy ] ) by using monte carlo sampling over the states .notice that in the effective lattice , the sites away from the support of have undergone one or more course - graining transformations . in other words , sitesfurther from the center represent increasingly larger length - scales .thus , sampling over sites of the effective lattice corresponds to sampling the system _ at different length scales_. this property is reminiscent of global or cluster updates used in existing monte carlo methods to solve critical systems . a nave scheme for approximating the sum in eq .( [ energy ] ) would be to choose at random from , and evaluate according to the tensor networks in fig .[ fig_mera_structure ] .the cost of obtaining a single sample scales as .however , the statistical variance of a sampling scheme can be substantially reduced by implementing importance sampling in this case choosing configurations that are more likely .more precisely , sampling is implemented according to the wavefunction _ weight _ , , which can be calculated efficiently as indicated in fig .[ fig_weight ]. 
we can express eq .( [ energy ] ) in a form more convenient for importance sampling , where note that because the mera is normalized by construction , and therefore the weights sum to one , .however , during sampling only some subset of the configurations are considered .one needs to renormalize the weights accordingly , so that the expectation value is approximated as in the case of mera , and indeed any state that can be written as a unitary quantum circuit , a ` perfect ' sample can be generated according to the probability distribution in a single sweep. this makes markov chain monte carlo unnecessary , simplifying the algorithm and eliminating a source of statistical error ( i.e. autocorrelation effects ) .this is one advantage of this technique over other tensor network sampling methods in the literature. the sample can be constructed by sampling just one index at a time .beginning at the top layer of the mera , and aiming to sample just the first index ( left - most in fig .[ fig_mera_structure ] ( b ) ) , we can construct the one - site reduced density matrix by the tensor contraction in fig .[ fig_weight ] ( b ) .the probability can then be found _ for all possible _ .a value of is then randomly selected according to any complete basis of our choosing .after this selection is made , we can then sample the next ( top - most ) index , according to the conditional weights as calculated by the diagram in fig .[ fig_weight ] ( b ) ( we refer to ref . for further details ) .we continue to sample the state of each site , until we have sampled every site .each of the diagrams in fig . [ fig_weight ] can be calculated with cost , while there are layers to the mera .a single sample can therefore be generated with cost , compared to for the exact contraction .so long as the number of samples is significantly less than , monte carlo sampling will be faster than exact contraction . in practice, we will perform samples in order to get a good estimate of . asthe samples are completely uncorrelated , the variance of \equiv \sum_{\mathbf{r } } p(\mathbf{r } ) \left ( a^{\mathcal{c}}(\mathbf{r})-\bar{a^{\mathcal{c}}}\right)^2\ ] ] can be used to estimate statistical error , / n } , \label{std_error}\ ] ] while ] is smaller or equal than $ ] in eq .( [ std_error2 ] ) , and therefore the numerical accuracy is increased without affecting the computational cost . and ( b ) with incomplete sampling .[ fig_alt_samp_scheme ] ] in general , we are free to choose any complete basis ( or ) from which to draw individual samples .a good choice is one that produces a small statistical error , eqs .( [ std_error],[std_error2 ] ) . intuitively , the goal of importance sampling is to decrease the statistical variance by choosing configurations ( or ) with a large overlap with the state . with this in mind, one could aim to maximize the ` average ' weight , it is easy to show that for a given quantum probability distribution , specified by a density matrix , the above quantity is maximized in the diagonal basis of the density matrix .inspired by this fact , here we choose to sample site in the basis in which the reduced density matrix is diagonal .the density matrices , calculated in fig. 
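a schematic implementation of the perfect sampling procedure and of the resulting monte carlo estimate, with its standard error as in eq. ([std_error]), is given below. the tensor contractions that produce the conditional single-site distributions and the local estimator for each sampled configuration are abstracted into callables, since a full mera contraction is beyond the scope of this sketch.

```python
import numpy as np

def perfect_sample(conditional_dist, n_sites, rng):
    """draw one perfect sample, one effective site at a time.

    conditional_dist(config) must return the probability distribution for the
    next site, conditioned on the already-sampled configuration `config`;
    in the mera this would be obtained by contracting the appropriate
    reduced density matrix (fig. [fig_weight]), which is abstracted away here.
    """
    config = []
    for _ in range(n_sites):
        probs = conditional_dist(tuple(config))
        config.append(rng.choice(len(probs), p=probs))
    return tuple(config)

def estimate(local_estimator, conditional_dist, n_sites, n_samples, seed=0):
    """monte carlo estimate of an observable from perfect (uncorrelated) samples.

    local_estimator(config) should return the local estimator of the observable
    for the sampled configuration; since samples are drawn directly from the
    target distribution, the estimate is a plain average and the standard
    error is sqrt(var / n_samples), as in eq. ([std_error]).
    """
    rng = np.random.default_rng(seed)
    values = np.array([local_estimator(perfect_sample(conditional_dist, n_sites, rng))
                       for _ in range(n_samples)])
    return values.mean(), values.std(ddof=1) / np.sqrt(n_samples)

# toy stand-in: three uncorrelated binary sites and a made-up local estimator
toy_dist = lambda config: np.array([0.7, 0.3])
toy_estimator = lambda config: float(sum(config))
print(estimate(toy_estimator, toy_dist, n_sites=3, n_samples=2000))
```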
here we choose to sample each site in the basis in which its reduced density matrix is diagonal. the one-site density matrices, calculated as in fig. [fig_weight], can be diagonalized with cost . note that the chosen basis will depend on the previously sampled sites, and that the resulting sampling basis is still a complete, orthonormal basis of product states. we have found that this approach can radically increase the average value of the weight, eq. ([eq:weight]), with the effect becoming stronger for larger systems and larger values of . more importantly, we find that the statistical variance in the observables is decreased (see sec. [sec_application_expectation] and fig. [fig_variance]). this technique for selecting the sampling basis is specific neither to unitary tensor networks nor to perfect sampling methods, and could thus be of benefit to other variational quantum monte carlo algorithms. finally, we may wish to compute the expectation value of an operator that is a sum of local terms, such as a hamiltonian made of nearest-neighbor interactions: in this case we sample each local term as indicated previously, noticing that the causal cone of each term depends on the location of the sites of the lattice where the local operator is supported. one can either choose to (uniformly) sample the position in the lattice, or systematically sweep through all the positions. a complete sweep, where each site is visited once, costs .
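referring back to the choice of sampling basis, a minimal sketch of drawing a single site in the eigenbasis of its reduced density matrix is the following; the reduced density matrix itself would come from the mera contraction of fig. [fig_weight], which is not reproduced here.

```python
import numpy as np

def sample_site_in_diagonal_basis(rho, rng):
    """sample one effective site in the eigenbasis of its reduced density matrix.

    rho : d x d one-site reduced density matrix (conditioned on the sites
          sampled so far), as produced by the contraction in fig. [fig_weight].
    returns the sampled eigenvalue index, its probability, and the
    corresponding basis state (a column of the unitary that diagonalizes rho).
    """
    evals, evecs = np.linalg.eigh(rho)           # rho is hermitian
    probs = np.clip(evals.real, 0.0, None)
    probs /= probs.sum()                         # guard against round-off
    k = rng.choice(len(probs), p=probs)
    return k, probs[k], evecs[:, k]

# toy example: a slightly mixed qubit density matrix
rng = np.random.default_rng(4)
rho = np.array([[0.8, 0.1], [0.1, 0.2]])
print(sample_site_in_diagonal_basis(rho, rng))
```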
in the above , and are unitary , while is diagonal and positive , thus representing the singular value decomposition ( svd ) of the derivative . this algorithm finds the unitary tensor that minimizes the trace of its product with the above derivative , called the environment , with the requirement that the product is negative - semidefinite . similar steps apply to and , where the svd ensures that they remain isometric or normalized , respectively . unfortunately , the above scheme is extremely sensitive to the statistical noise inherent to monte carlo sampling , and results in very poor optimization . ideally , we would prefer a method in which the statistical noise is able to average out over many iterations . the most obvious scheme satisfying this requirement is straightforward steepest descent . again , one must ensure that the tensors obey the unitary / isometric constraints characteristic of the mera , so one can utilize the svd to find the unitary tensor closest ( with respect to the ) to the usual downhill update . with this method , the step is given by , where is a number modulating the size of the change at each step . in this paper we avoid using the svd entirely by explicitly remaining in the unitary subspace , along the lines of ref . . we define the tangent vector as the derivative projected onto the tangent space of all unitaries located about , so that the correspondingly displaced matrix is within of a unitary matrix . noting that is anti - hermitian , the update both travels in the direction of the tangent vector and remains precisely unitary . the same approach works for isometric matrices , taking care that , with computational cost scaling similarly to the svd approach as ( see appendix ) . the performance of the algorithm is highly dependent on the behaviour of , as well as the number of monte carlo samples , , taken in each step . simple schemes will keep and constant , which is the approach we take here . on the other hand , one may choose to increase with so that harmful noise is reduced when approaching the optimal solution ; or to decrease with for much the same reason ; or a combination of both . in this section we demonstrate the above techniques with the well - known transverse - field quantum ising model . such a hamiltonian can be expressed as a sum of nearest - neighbour terms . we will pay particular attention to the region around the critical point at , which is the most demanding computationally . for concreteness , we use a three - layer binary mera with periodic boundary conditions , resulting in a lattice of 24 sites at the bottom of the mera structure . however , each of these sites corresponds to a block of physical spins , making a total of 72 spins . we choose this blocking so that for the bond dimension only ever decreases when ascending through the mera . in what follows , we employ incomplete perfect sampling where three sites at the bottom of the mera are contracted exactly . we now analyze the effectiveness of extracting expectation values from the mera using monte carlo sampling . for perfect sampling techniques , the accuracy can be easily extracted from the variance using eq . ( [ std_error ] ) . the scaling of the error in the energy , , is shown explicitly for the critical ( ) system in fig . [ fig_variance ] ( a ) . the variance of the energy estimator as a function of is shown in fig . [ fig_variance ] ( b ) . here we have used mera wavefunctions previously optimized using standard techniques _ without _ monte carlo sampling , that is , sampling is only employed to extract the expectation values .
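before turning to the numerical results , here is a minimal sketch ( not the authors' code ; the helper names and the use of numpy are assumptions ) of the two ingredients just described : drawing a single site in the diagonal basis of its reduced density matrix , and forming the standard - error estimate of eq . ( [ std_error ] ) from uncorrelated samples .

```python
import numpy as np

def sample_site(rho, rng):
    """Draw one outcome from a one-site reduced density matrix `rho` (d x d),
    in the basis in which `rho` is diagonal, mirroring the basis choice
    described in the text. Returns the outcome index, its weight and the
    chosen basis vector (used to condition the next site)."""
    evals, evecs = np.linalg.eigh(rho)
    p = np.clip(evals.real, 0.0, None)
    p = p / p.sum()                      # renormalize against round-off
    k = rng.choice(len(p), p=p)
    return k, p[k], evecs[:, k]

def mean_and_error(samples):
    """Sample mean and one-sigma error, sqrt(Var/N), valid because perfect
    samples are uncorrelated."""
    a = np.asarray(samples, dtype=float)
    return a.mean(), np.sqrt(a.var(ddof=1) / len(a))
```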
[ figure fig_variance : ( a ) the error in the energy as a function of the number of samples follows the classic scaling . ( b ) variance of the energy estimator ( normalized to the expectation value ) for optimized meras with bond - dimension for various values of ( red crosses ) . estimates of the statistical uncertainty are smaller than the symbols . the grey area is eliminated by the inequality of eq . ( [ std_error2 ] ) , bounded by the variance of . for comparison , we include the variances expected from several hypothetical samplings of . variances from sampling the spins in the ( or ) basis are indicated by a blue , dash - dot ( or green , dashed ) line . our numerical results ( red crosses ) show remarkable similarity with sampling in the diagonal basis of ( black , solid line ) . ( c ) the entanglement entropy of . in ( b ) and ( c ) , the critical point at is indicated with a red dotted vertical line . ]
notice that the variance is maximal near the critical point at . as the monte carlo code effectively samples wavefunctions from the reduced three - site ( i.e. nine - spin ) density operator , one would expect the energy variance to increase with the amount of entanglement in the system . for reference we have included the entropy of the three - site density matrix in fig . [ fig_variance ] ( c ) . this entropy mostly
corresponds to the entanglement entropy of three sites with the remainder of the system . [ footnote : and share a symmetry which is broken by the mera wavefunction for , and the _ exact _ ground state should have entanglement entropy as . ] we see a strong correlation between the amount of entanglement and the size of the variance of the energy estimator . let us emphasize that fig . [ fig_variance ] ( b ) shows that our scheme performs significantly better than directly sampling in either the or spin basis . the measured variances are very similar to a diagonal sampling of ( i.e. in its diagonal basis ) . this indicates that the sampling scheme is performing as intended in sec . [ sec_diagonal ] . in general , an accurate representation of wavefunctions with greater amounts of entanglement will require greater bond dimension . these results suggest that the statistical variance generated by this scheme will also increase with the entanglement . to achieve a certain precision in the expectation value of local observables , the number of required samples grows with this variance , and thus with the amount of entanglement and with the minimum suitable value of . therefore , although a _ single _ sample has cost , the _ total _ cost to obtain a certain precision may have some additional dependence on . nevertheless , no additional dependence was clearly manifest in our simulations at fixed . finally , we combine monte carlo sampling with our unitary - subspace steepest descent algorithm to obtain optimized wavefunctions . in fig . [ fig_optimize ] we plot the energy of the mera during the optimization process at and , where the simulation progresses through a range of different numbers of sweeps per optimization step . in all cases the step size is fixed at . we observe that increasing improves the quality of the optimized wavefunction , and for large values of the simulation tends to converge towards the same energy obtained with exact contractions , as expected . like all tensor network optimizations , care must be taken to ensure the wavefunction has fully converged to the lowest energy state . for instance , in fig . [ fig_optimize ] ( a ) we see a plateau in the energy shortly before the 4000th iteration that could be mistaken for convergence ( whereas the simulation is actually navigating a stiff region , i.e. a long narrow valley in the energy landscape ) . non - deterministic features due to statistical fluctuations can also be seen , such as the sudden increase of the energy of the simulation around the 13000th iteration . beyond this , accuracy could be improved by increasing . it should be noted that we have observed that the steepest descent method ( with either exact contractions or sampling ) will not always produce wavefunctions of the same quality as the svd method , as it may be more susceptible to local minima or extreme stiffness . however , accuracy can still be systematically improved by increasing .
[ figure fig_optimize : ( a ) energy of the mera during optimization . each iteration is updated using sweeps , where is in the black crosses , red pluses , green diamonds and blue points respectively . every 100 iterations , we calculate the exact energy corresponding to the current wavefunction , which is plotted here . the solid horizontal line indicates the energy of an optimized mera using exact contractions and steepest descent ( which remains above the true ground state energy , indicated by the dashed line ) . the simulation converges for large , but may need to increase for greater accuracy . ( b ) difference to the above solid line plotted on a logarithmic scale . the difference reduces with increasing , and although statistical fluctuations are decreasing , they remain evident on the logarithmic scale . ]
in the previous section we noted that the statistical uncertainty peaked around the critical point at , where the entanglement is maximal , and one might expect the optimizations to be most difficult around this point . plotted in fig . [ fig_optimize_vs_h ] is the difference in energy between our wavefunctions and the exact , analytic solution for a range of . we observe that the error decreases away from the critical point , and that there is a clear relationship between the quality of the wavefunction and the number of samples per step , .
[ figure fig_optimize_vs_h : the markers indicate the number of monte carlo sweeps taken between updates , for the black cross , red diamond , green circle and blue square , respectively . there is a clear trend for improved ground state energy as increases , and away from the critical point at ( vertical red dotted line ) . ]
there are several possible limiting factors in variational monte carlo optimizations of mera wavefunctions . one must balance the cost of increasing , and the total number of iterations , to produce results of the desired accuracy . on top of this , the ansatz presents a complicated optimization landscape and one must be careful not to get stuck in local minima . there is much scope to improve on the above optimization scheme by using more sophisticated approaches . most obviously , the step size and number of samples performed in each iteration could be adjusted as the simulation progresses . for example , by choosing the step size to decrease as , with fixed , we are guaranteed convergence to some local minimum . equivalently , the noise could be reduced at each step by increasing , or some combination of both .
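as an illustration of such a schedule within the unitary - subspace steepest - descent loop described earlier ( a sketch only , not the authors' code ; `sample_gradient` and the tensor container are hypothetical , and the dense matrix exponential is used for brevity ) :

```python
import numpy as np
from scipy.linalg import expm

def descend(tensors, sample_gradient, step0=0.1, iters=1000, num_samples=10):
    """Skeleton of the noisy steepest-descent loop.

    `tensors` maps names to unitary/isometric matrices; `sample_gradient`
    is a hypothetical routine returning Monte Carlo estimates of the energy
    derivatives from `num_samples` perfect samples."""
    for k in range(1, iters + 1):
        step = step0 / k                        # decaying step size, as discussed above
        grads = sample_gradient(tensors, num_samples)
        for name, grad in grads.items():
            w = tensors[name]
            # anti-Hermitian generator from projecting the (noisy) derivative
            # onto the tangent space of the unitary manifold
            a = grad @ w.conj().T - w @ grad.conj().T
            tensors[name] = expm(-step * a) @ w  # exactly preserves w^dag w = 1
    return tensors
```

a decaying step size of this kind trades early progress for reduced sensitivity to the sampling noise near convergence ; the dense `expm` call can be replaced by the cheaper low - rank construction discussed in the appendix .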
in ref . it was found that using just the sign of the derivative , as well as properties resulting from translational invariance , was sufficient for optimizing a periodic mps with monte carlo sampling . other approaches existing in the literature may result in significant gains , though it should be noted that approaches requiring the second derivative or hessian matrix would increase the order of the numerical cost as a function of , and would have to be made robust to statistical noise . there are several situations where it is most natural to use monte carlo sampling to speed up mera algorithms . let us briefly review them . in this work we have considered in detail the binary mera for a 1d lattice with translation invariance . however , even in a translationally invariant , 1d lattice , one has freedom to choose between a large variety of entanglement renormalization schemes , leading to mera structures with different configurations of isometries and disentanglers .
[ figure fig_other_meras : alternative mera structures , ( a ) ( c ) , including the -to- mera described in ref . ]
the ternary mera , shown in fig . [ fig_other_meras ] ( a ) , has a narrower causal cone , with a width of just two sites , and the traditional algorithms for optimizing it have a cost that scales as . as a result , the ternary mera is sometimes favored over the binary mera . note that this does not necessarily translate into an improved accuracy in expectation values : the ternary mera is in general less accurate than the binary mera for the same value of . it is interesting to note that , in the ternary mera , the perfect sampling algorithm presented here again has cost , while markov chain monte carlo , as well as expectation value and environment estimation , is possible with cost . it is unclear whether , after including autocorrelation effects , the markov chain method performs better , similar , or worse overall compared to the perfect sampling algorithm . another possible 1d mera includes two layers of disentanglers to account for entanglement over larger distances , as depicted in fig . [ fig_other_meras ] ( b ) . this mera has a causal cone that is five sites wide , and traditional algorithms would have numerical cost and require memory , limiting to rather small values . however , a sampling technique will only require time per sample and memory overall , a huge saving . note that the power roughly halves when we change from an exact contraction , which effectively rescales density matrices from higher layers to lower , to a monte carlo scheme which samples wavefunctions from this distribution . the scaling of the computational cost in 2d lattices is even more challenging , mostly because the width of the causal cone ( or number of indices included in a horizontal section of the causal cone ) is much larger .
once again , sampling wavefunctions will require roughly the square root of the number of operations ( and memory ) needed to calculate the exact reduced density matrix . for instance , in the -to- mera presented in ref . , the cost of an exact contraction scales as , while with monte carlo sampling it is possible with just operations per sample ( depicted in fig . [ fig_other_meras ] ( c ) ) . memory might be a limiting factor in 2d mera algorithms , while the temporary memory required for this algorithm is less than that needed to store the disentanglers and isometries . another challenge with mera calculations is the numerical cost of long - range correlations . take for instance the two - site operator , for arbitrary sites and . the cost required to contract the corresponding tensor network within the binary mera scheme can scale as much as , significantly more than the cost for neighbor and next - nearest - neighbor correlations . however , an estimate of the correlator can be obtained using monte carlo sampling at a reduced cost ( per sample ) . in fig . [ fig_long_range ] , we depict the causal cone structure of two single - site operators separated by sites in a binary mera . the cost of monte carlo sampling for is just .
[ figure fig_long_range : causal cone structure of two single - site operators separated by sites in a binary mera ; an exact contraction of the expectation value costs . ]
this technique can be extended to 2d lattice systems , where calculating long - range correlations exactly quickly becomes infeasible , even for modest values of . note again that memory constraints are particularly challenging for 2d mera calculations , and monte carlo sampling can alleviate this burden . we have outlined and tested a scheme for monte carlo sampling with the mera . uncorrelated samples can be efficiently generated directly from the wavefunction overlap probability distribution , without needing to resort to markov chain monte carlo methods . from this , expectation values can be extracted , and we have demonstrated techniques to reduce the statistical error . we have also presented and demonstrated an algorithm to optimize mera wavefunctions using sampled energy derivatives . the numerical results presented here were not intended to be state - of - the - art solutions of the 1d quantum ising model , but rather to demonstrate feasibility and motivate subsequent applications to 2d systems . in general , monte carlo sampling becomes more advantageous for systems with large numbers of degrees of freedom , and we expect 2d mera to be no exception . because the reduction in cost in two ( and higher ) dimensions is so significant , monte carlo techniques are a very attractive way to achieve reasonable values of with current computers . obvious improvements to the code include utilizing symmetries and parallelization on supercomputers , which is straightforward with our perfect sampling algorithm . further research into optimization strategies may lead to other improvements ( e.g.
by reducing the number of iterations or the tendency to find local minima ) . reweighting techniques may make the optimization more efficient when approaching the ground state . the authors would like to thank philippe corboz and anders sandvik for useful discussions . support from the australian research council ( ff0668731 , dp0878830 , dp1092513 ) , the visitor programme at perimeter institute , nserc and fqrnt is acknowledged . in this appendix we explain how to compute , with cost , the isometric matrix , where is an isometric matrix , is a general matrix , and . the naive approach would be to evaluate the matrix and compute its exponential , with cost , before multiplying by . however , noting that the exponent does not have full rank ( the rank is at most ) , we can hope to find a faster method . taking the taylor expansion , we observe that the result can be achieved with a series of multiplications between matrices , and , post - multiplied by either or , requiring total cost . in the binary mera , where isometries are matrices ( i.e. , ) , the cost of this algorithm scales as , compared with the cost of the naive approach . this algorithm becomes particularly important for a tree tensor network and for the mera in two dimensions , where the naive approach becomes more expensive , in powers of , than a sampling ( thus becoming the bottleneck of an optimization based on sampling ) , whereas the above algorithm remains competitive .
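a minimal sketch of this low - rank construction ( an illustration under stated assumptions : matrices are numpy arrays , the step parameter is real , and the fixed truncation order of the taylor series is chosen naively rather than adaptively ) :

```python
import numpy as np

def exp_lowrank_apply(w, g, step=1.0, terms=24):
    """Compute exp(step * (g w^dag - w g^dag)) @ w without forming the full
    n x n exponent.

    w : (n, m) isometry (w^dag w = 1), g : (n, m) general matrix, step real.
    Writing the exponent as a = x y^dag with x = [step*g, -w] and y = [w, step*g]
    (rank <= 2m), each Taylor term a^k w = x (y^dag x)^(k-1) (y^dag w)
    costs only O(n m^2)."""
    x = np.hstack([step * g, -w])
    y = np.hstack([w, step * g])
    yx = y.conj().T @ x                 # (2m, 2m)
    coeff = y.conj().T @ w              # (2m, m)
    out = w.copy()
    fact = 1.0
    for k in range(1, terms + 1):
        fact *= k
        out = out + (x @ coeff) / fact
        coeff = yx @ coeff
    return out
```

the routine reproduces the dense result `expm(step * (g @ w.conj().T - w @ g.conj().T)) @ w` while only ever touching ( n x 2m ) and ( 2m x 2m ) objects ; in practice one might also truncate the series adaptively or re - orthonormalize periodically to counter accumulated round - off .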
monte carlo sampling techniques have been proposed as a strategy to reduce the computational cost of contractions in tensor network approaches to solving many - body systems . here we put forward a variational monte carlo approach for the multi - scale entanglement renormalization ansatz ( mera ) , which is a unitary tensor network . two major adjustments are required compared to previous proposals with non - unitary tensor networks . first , instead of sampling over configurations of the original lattice , made of sites , we sample over configurations of an effective lattice , which is made of just sites . second , the optimization of unitary tensors must account for their unitary character while being robust to statistical noise , which we accomplish with a modified steepest descent method within the set of unitary tensors . we demonstrate the performance of the variational monte carlo mera approach in the relatively simple context of a finite quantum spin chain at criticality , and discuss future , more challenging applications , including two dimensional systems .
the term ` back - reaction ' is often used in cosmology to mean ` the effect that structure has on the large - scale evolution of the universe , and observations made within it ' .implicit within this statement are a number of fundamental problems that have yet to be fully understood .these include : * what is meant by the large - scale expansion of space in an inhomogeneous universe , and how should it be calculated ? * how should we link the large - scale expansion of an inhomogeneous space - time with the observations made within it ? *how can we create relativistic cosmological models sophisticated enough to investigate these problems ?let us now briefly consider each of these points , before moving on to discuss recent attempts to understand them .point 1 above alludes to the fact that in relativistic theories what we mean by the spatial separation of any two astrophysical objects depends on how we choose to foliate the universe with hyper - surfaces of constant time . in a spatially homogeneous universe , or a universe with an irrotational matter content, natural - looking choices might present themselves . in general , however , we should be free to make any number of choices .this then presents a problem : if the distance between any two astrophysical objects is in general foliation dependent , and we have no preferred foliation , then how should we go about defining the rate of change of distance between objects , and hence the expansion of the universe ? in the end , the answer to this question will depend on exactly what one is trying to achieve , and is complicated considerably by the fact that in cosmology one is often interested in non - local averages ( a notoriously difficult concept to define in general relativity ) . below we will consider several different cases of interest .point 2 is a subsequent problem that needs to be addressed , once a concept of ` large - scale expansion ' exists that one is prepared to consider .it is not in general the case that observations made in an inhomogeneous geometry will have a straightforward correspondence with the observables that one would measure in a spatially homogeneous and isotropic universe with the same rate of expansion on large scales .that is , even if one succeeds in finding a good description for the large - scale expansion of the universe , then one still needs to do further work in order to relate this to observations made in the underlying inhomogeneous space - time . 
once again , this is complicated considerably by the fact that we are often interested in the average of observables . this is in general a highly non - trivial problem , and below we will review some recent progress towards understanding it . finally , point 3 is related to the fact that in order to test proposed solutions to the problems posed in points 1 and 2 it is of considerable interest to have cosmological models that are sophisticated enough to allow at least some of the interesting behavior that we expect in general . this is an extremely difficult problem . although many inhomogeneous cosmological solutions to einstein s equations are known , most of these solutions are restricted either because they are required to exhibit a high degree of symmetry , or because they are algebraically special . constructions such as the `` swiss cheese '' models allow some potential progress to be made , but are themselves severely restricted by the boundary conditions at the edge of each `` hole '' . new approaches are required to make further progress in this area , and , once again , we will discuss some recent progress below . in section [ space ] we consider approaches based on averaging over a set of prescribed spatial hyper - surfaces . in section [ spacetime ] we consider approaches based on averaging in four dimensions . section [ models ] contains a discussion of some models that may be of use for studying averaging , and in section [ discussion ] we provide a few closing comments . one way to proceed with the study of back - reaction is to consider the expansion of regions of space in a given foliation . the equations that govern this expansion can then be found , and compared to the friedmann equations . this often leads one to consider the volume - weighted average of quantities such as energy density and pressure . the equations that result are therefore often referred to as the ` averaged field equations ' . while simple , this approach has a number of obvious drawbacks . firstly , it is manifestly not foliation invariant . secondly , there is a freedom in how one chooses to specify that two spatial volumes at different times are the same region .
andthirdly , the averaging of quantities over the spatial volume being considered is often only well defined for scalars .one can specify choices for the first and second of these that may initially appear natural , but that could in the end lead one to consider hyper - surfaces in the inhomogeneous space - time that become arbitrarily , and increasingly , distorted .the third of these problems is of more fundamental difficulty , as tensors can not in general be compared at different points .nevertheless , this approach provides a useful framework to investigate , and can be shown to give a straightforward correspondance to the average of observables in some situations .the most well studied set of averaged equations that result from this approach are those found by buchert after averaging the hamiltonian , raychaudhuri and conservation equations : where and are the scale factor " and average ricci curvature of the region of space being considered , angular brackets denote a volume average throughout that region , and is the back - reaction term that quantifies differences from the friedmann equations that one might otherwise construct from these quantities .these are defined as where and are the expansion and volume - preserving shear of the set of curves orthogonal to the hyper - surfaces containing , and and are the proper time measured along this set of curves and the spatial coordinates in the hyper - surfaces of constant , respectively .the quantity is the value of on some reference hyper - surface ( usually taken to be the one that contains us at present ) .one may note that equations ( [ buchert1])-([buchert3 ] ) do not form a closed set .extra information is therefore required , which can be given by specifying .presumably this requires either extra equations , or some knowledge of the inhomogeneous space - time being averaged .as previously stated , one may also note that the averaging procedure given here by the angular brackets is foliation dependent and only applicable to scalars ( this is particularly problematic for the term in equation ( [ q ] ) , as the evolution equation for will contain tensors ) .finally , while the expansion of the spatial domain may not itself be directly observable , we will explain below that in some cases it can be linked to observables . the term ` observables ' can cover a wide array of different possibilities in cosmology . herewe will mainly be concerned with the luminosity distance - redshift relation .this is itself a direct observable of considerable interest for the interpretation of , for example , supernova observations . beyond this, it is also often required in the interpretation of other observables as it is very often the case that one needs to transform from redshift space " to some concept of position space ( i.e. the position of astrophysical objects on some spatial hyper - surface ) .the usual method for calculating luminosity distances in an inhomogeneous space - time is to first find the angular diameter distance to the emitting object as a function of some affine parameter , measuring distance along past - directed null geodesics .this can be achieved by integrating the sachs optical equations . 
in these equationsthe ricci curvature of the space - time sources the evolution of the expansion of the past - directed null geodesics , and the weyl curvature sources the evolution of their volume - preserving shear ( which itself acts as a source for their expansion ) .the angular - diameter distance can then be straightforwardly related to the luminosity distance , and the redshift can be calculated as a function of the affine distance ( once the world - lines of the objects emitting the radiation have been specified ) .this then provides the luminosity distance as a function of redshift at all points on an observer s past - light cone , provided that geometric optics remains a good approximation , and that the light emitted from the distant object is not obscured by some intermediate matter before it reaches the observer .although the method outlined above is , in general , a complicated problem involving a number of subtleties , it was recently shown by rsnen that progress can be made in space - times that display statistical homogeneity and isotropy on large scales . in this caseone can estimate the average luminosity distance as a function of the average redshift that an observer in such a space - time may expect to reconstruct from observations made over cosmologically interesting distances . assuming that the matter content is irrotational , that the shear in the null trajectories can safely be assumed to be small , that structures evolve slowly , and that hyper - surfaces of constant proper time can also be taken to be the same hyper - surfaces that display statistical homogeneity and isotropy , rsnen made a convincing case that the average luminosity distance - redshift relation in the inhomogeneous space - time should be well approximated by observables calculated in a homogeneous and isotropic model with a scale factor that evolves according to equations ( [ buchert1])-([buchert3 ] ) .an alternative approach to this problem was taken by clarkson and umeh .these authors considered expressions for measures of distance expanded as a power series in redshift , as derived for general space - times by kristian and sachs .they then performed a decomposition into spherical harmonics , and constructed the following deceleration parameter , based on an analogy between the monopole of this expansion and the corresponding relations in a friedmann universe : ,\ ] ] where is the isotropic part of the hubble rate , and subscript ` ' denotes a quantity evaluated at . 
using this expression they could consider the average deceleration within either a region of space , or a region of space - timehowever , for matter obeying the strong - energy condition it can be seen from equation ( [ cu ] ) that the average of will always be non - negative , and so the space - time ( according to this measure ) will always be inferred to be decelerating ( in the absence of ) .this is in contrast to the averaged evolutions possible from equations ( [ buchert1])-([buchert3 ] ) , and at first glance would appear to contradict the results of rsnen described above .in fact , there is no contradiction between these two sets of results .that is , the observable calculated by clarkson and umeh should be expected to be a good approximation to the deceleration that one would infer from observations made within a small region around an observer .this measure is closely related to the acceleration of space within that region , as specified by einstein s equations ( as long as shear is small ) , and not by equations ( [ buchert1])-([buchert3 ] ) .the observational measures considered by rsnen , however , are only expected to approach the evolution described by equations ( [ buchert1])-([buchert3 ] ) when the distances over which observations are made are much larger than the homogeneity scale of the space - time under consideration .this is , of course , the regime in which cosmological observations are usually made .using example space - times it has been explicitly demonstrated that it is entirely possible for a set of observers in a given region of the universe to infer deceleration from clarkson and umeh s measure , while inferring acceleration from buchert s measure .this clearly demonstrates that the acceleration inferred from cosmological observations does _ not _ have to be closely related to the local acceleration of space itself .it also demonstrates that quantities that are uniquely defined in an exactly homogeneous and isotropic universe ( such as ) can bifurcate into multiple different quantities in space - times that are only statistically homogeneous and isotropic , and that in general these new quantities can take very different values from each other .one must therefore proceed with care .an alternative approach to considering the volume weighted average of quantities within 3-dimensional spatial regions is to instead consider averaging geometric quantities within 4-dimensional regions of space - time .such a process is in general difficult to define in a covariant way , and so far has required the application of bi - local operators .these allow tensors to be compared at different points by transporting them along prescribed sets of curves .this then leads to the problems of how the curves in question should be prescribed , and exactly which transport method should be used .various proposals exist as to the best way to address these issues . while complicated , the idea of averaging quantities in 4-dimensional regions of space - time inherently avoids any foliation dependence .these approaches are also often aimed at averaging tensors directly , rather than just scalars .this has obvious advantages for gravitational theories constructed from tensors , such as general relativity .probably the most well known attempt at averaging in space - time , and constructing a set of effective field equations that the averages should obey , is that of zalaletdinov . 
the first step in this approach is to construct the following average for a tensor : where primed coordinates are those used in the 4-dimensional region , which is the averaging domain associated with the point .the quantities are the bi - local operators , which are functions of both and , and the quantity is the volume of . each point , ,is expected to have associated with it its own averaging domain , , which is related to other averaging domains by being transported around the manifold . by applying this averaging technique to the connection , and by using some `` splitting rules '', zalaletdinov is able to use einstein s equations to derive a set of field equations that the averaged connection must obey .the are called the macroscopic field equations , and are written where bars denote averaged quantities , and and and }$ ] , where } \rangle - \langle \gamma^{\alpha}_{\phantom{\alpha } \beta [ \gamma } \rangle \langle \gamma^{\mu}_{\phantom{\mu } \underline{\nu } \sigma ] } \rangle , \end{aligned}\ ] ] and where underlined indices are not included in symmetrization operations .the tensor is known as the 2-point correlation tensor , and obeys its own algebraic and differential constraints . the macroscopic field equations ( [ mg ] )can be used to describe the behavior of a particular inhomogeneous space - time after averaging has been performed , but they can also be used as a set of field equations to which one can look for solutions directly .this latter approach has so far been taken in the cases of macroscopic geometries , , that are spatially homogeneous and isotropic , and geometries that are spherically symmetric and static .this work has allowed some possible behaviors of averaged space - times to be found _ without _ specifying the underlying microscopic geometry .however , it has also so far required a number of assumptions to be made about the correlations that are present .these include the vanishing of the three - point and four - point correlation tensors , and the vanishing of the ` electric ' part of the 2-point correlation tensor .the particular situations in which these assumptions are valid remains to be determined , as is also the case for the assumptions that go into the derivation of the macroscopic field equations ( [ mg ] ) . nevertheless , this is an interesting approach that deserves further study . under the assumption that the macroscopic geometry is spatially homogeneous and isotropic ( that is , after the averaging procedure has been applied , and the averaged " geometry displays these symmetries ) , then coley , pelavas and zalaletdinov find the following to be a solution of the macroscopic field equations ( [ mg]) : ,\ ] ] where is a constant , and where and obeys the friedmann - like equations where is a constant ( not necessarily equal to ) , and where and are the macroscopic energy density and pressure ( obtained after averaging the right - hand side of einstein s equations ) .superficially , the geometry given in equations ( [ mgfrw])-([mgdrho ] ) looks a lot like the spatially homogeneous and isotropic solutions of einstein s equations in the presence of a perfect fluid .there is , however , a very significant difference : the spatial curvature constant that appears in the macroscopic geometry , , is not in general the same as the term that looks like spatial curvature in the friedmann - like equation ( [ mgfried ] ) ( i.e. the one that contains ) .that is , spatially curvature can take different values depending on the situation being considered . 
if one measured the angles at the corners of triangle , and determined the curvature of space in this way , then this would give a different result to that which would be obtained by measuring the recessional velocities of astrophysical objects and inferring the spatial curvature through the dynamical ( friedmann - like ) equation ( [ mgfried ] ) .this behavior is impossible within the spatially homogeneous and isotropic solutions of einstein s equations , and so could provide some potentially observable phenomena that could be used to test this approach .the difference between and is determined by terms that appear in the correlation tensor , , and so by attempting to determine the difference between and observationally we could attempt to constrain , and hence some of the possible effects of averaging .a first step towards investigating this possibility has recently been taken .the authors of this work assume that average observables are determined by null trajectories in the average geometry , as specified in equation ( [ mgfrw ] ) , and that redshifts are represented by the average scale factor , .they then find that luminosity distances are given by the following equation : where the matter content of the macroscopic space - time has been assumed to be well approximated by non - interacting dust and , where , and where the are defined as where is the present energy density in dust , and is the effective energy density in .the expression for luminosity distance given in equation ( [ mgdl ] ) can now be used to interpret cosmological observations , and to obtain constraints on the . using data from the hubble space telescope ( hst) , the wilkinson microwave anisotropy probe ( wmap) , observations of the baryon acoustic oscillations ( baos) , and the union2 and sdss supernova data sets , the parameters , and were constrained to take the values given in table [ table1 ] below .the additional freedom of allowing in this analysis means that the cmb+ is now no longer sufficient to constrain the spatial curvature of the universe significantly .observations of the cmb+ alone are also no longer sufficient to require .this simple extra degree of freedom therefore undermines two of the most important results of modern observational cosmology . by adding further data sets the constraints on and improved , but still remain much weaker than in the standard friedmann models that satisfy einstein s equations .even so , however , it was still found that the results of using all available observables were sufficient to require to high confidence , and that a spatially flat universe was consistent with observations .finally , although the combination of _ some _ data sets excluded the possibility at the 95% confidence level , it was found that the special case was compatible with _ most _ combinations of these data sets . in general one might also consider the possibility of not just allowing and to be different , but also allowing them to functions of scale .such a result might arise , for example , from performing averaging over domains of different sizes , a process which is implicitly carried out when consider different cosmological observables .such a possibility allows for considerable extra freedom .we have so far considered attempts to describe the large - scale behavior of the universe by averaging over regions of space or space - time . 
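before moving on , the qualitative point above , that the curvature inferred geometrically need not equal the curvature appearing in the friedmann - like dynamics , can be illustrated with a toy distance - redshift integration . this is not eq . ( [ mgdl ] ) itself ( which did not survive extraction ) , and the parameter names below are hypothetical :

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, H0=70.0, om_m=0.3, om_lambda=0.7,
                        om_k_dyn=0.0, om_k_geom=0.0):
    """Luminosity distance (Mpc) in a toy model where the 'dynamical' curvature
    entering the Friedmann-like equation and the 'geometrical' curvature used
    to close the distance relation are allowed to differ."""
    E = lambda zp: np.sqrt(om_m * (1 + zp)**3 + om_k_dyn * (1 + zp)**2 + om_lambda)
    dh = C_KM_S / H0
    dc = dh * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]   # comoving distance
    if om_k_geom > 0:
        r = dh / np.sqrt(om_k_geom) * np.sinh(np.sqrt(om_k_geom) * dc / dh)
    elif om_k_geom < 0:
        r = dh / np.sqrt(-om_k_geom) * np.sin(np.sqrt(-om_k_geom) * dc / dh)
    else:
        r = dc
    return (1 + z) * r

# identical expansion history, different geometrical curvature:
print(luminosity_distance(1.0, om_k_geom=0.0), luminosity_distance(1.0, om_k_geom=0.2))
```

in such a toy model , two universes with identical expansion histories but different geometrical curvature give different luminosity distances at the same redshift , which is the kind of effect that the constraints discussed above are able to probe .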
in the end, the particular approach that one should use when performing this type of operation should probably be guided by the phenomena that one is trying to create a model to interpret .different observable phenomena may require different approaches , and so one needs to know the limits of any particular approach , as well as the situations in which it reliably reproduces the required results .for this it is useful to have inhomogeneous cosmological models that are of sufficient generality to allow some of the interesting behavior that is expected in general .such models can then be used to test ideas about averaging , back - reaction , and the large - scale evolution of space .unfortunately it is extremely difficult to construct such models .this does _ not _ mean that there is an absence of any interesting behavior to study , only that we need to become more sophisticated in our model building to quantify and constrain the different possibilities in a reliable way .some of the principal difficulties involved with this are how to model over - dense regions of the universe without having to deal with the rapid formation of singularities , how to introduce structure into the universe without assuming a friedmann background or matching onto a friedmann model at a boundary , and how to allow structure to form on different scales without assuming linearity in the field equations . for further discussion of inhomogeneous cosmological solutions the readeris referred to the contribution to these proceedings by krasiski , and to the comprehensive texts .it is currently almost beyond hope to construct a model that allows for all of the possibilities discussed above , while simultaneously maintaining sufficiently generality to model realistic distributions of matter .we are therefore forced to investigate toy models that we hope may reflect some of the features of the real universe , even if they are not realistic in every way .once toy models have been constructed we can then consider the averaging problem by applying some of the methods discussed above to them , or by fitting or comparing them to friedmann models directly .their existence also makes more advanced models a more realistic proposition .it is for these reasons that it is of interest to consider simple -body solutions of einstein s equations .such solutions , if they can be found , will allow over - dense regions to be studied without rapid collapse occurring , and without recourse to the assumption of a friedmann background or linearity in the gravitational field equations .this will be the subject of section [ bhsec ] .the simplest configuration of bodies that one can imagine is a regularly arranged set of points .although such a configuration limits the behaviors that are possible , it does allow for the most straightforward possible comparison to smoothed - out friedmann - like universes .that is , by ` zooming out ' in order to consider large numbers of points , and by performing some kind of coarse graining or smoothing , one could easily imagine such a situation looking more and more like a spatially homogeneous and isotropic universe , which could then be compared to the friedmann solutions of einstein s equations .regularity of the distribution also provides a limited number of preferred spatial planes and curves that can have their area and length compared to those of the friedmann solutions . herewe will consider spatially closed universes . 
these are known to admit hyper - surfaces of time symmetry at the maximum of expansion of the space - time that allow the constraint equations to be solved in a particularly simple way .the method that we will deploy to ensure that our massive bodies are regularly arranged is to tile the hyper - surface of maximum expansion with a number of regular polyhedra . a mass is then placed at the center of each polyhedron , which by symmetry will be an equal distance from each of its nearest neighbors .these polyhedra will be referred to in what follows as ` cells ' .there are seven such tilings that are possible in three spatial dimensions , as listed in table [ table2 ] . also displayed in this tableare the schlfli symbols of the polychora that these tilings constitute .once the arrangement of masses has been chosen , the geometry of the hyper - surface of maximum - of - expansion can be found . with the exception of the 2-cell ,this has been done for each of the structures described above .the 2-cell is special in that the time - symmetric geometry at the maximum of expansion of this structure is simply a slice through the global schwarzschild solution .the geometry of the full space - time is therefore already known exactly in this case , and is not of cosmological interest here ( by including a non - zero , however , other structures are also possible ) .an illustration of the geometry at the maximum of expansion in the case of the 8-cell and the 120-cell is given in figure [ f1 ] , below .each of the illustrations here corresponds to a single 2-dimensional slice through the 3-dimensional geometry . in the case of the 8-cellthis slice contains 6 masses , while in the 120-cell it contains many more ( although not all 120 ) .the geometry of the 3-space of maximum of expansion in each case is conformally related to the geometry of a 3-sphere , with a scale factor that is a function of position .the distance from the origin in the illustrations in figure [ f1 ] is proportional to this scale factor , and it can be seen that as the number of the masses in the lattice is increased , the bulk of the space approaches homogeneity .it is only in the vicinity of the masses themselves that inhomogeneities exist ( as depicted by the tube - like structures ) .once we have the geometry of the hyper - surface of maximum of expansion , we can take a measure of the scale of the solution , and compare this to the scale of a spatially closed friedmann universe that contains the same amount of `` proper mass '' .the friedmann solutions will , of course , have this mass evenly distributed throughout space , and so by comparing to the scale of the inhomogeneous geometry we can obtain a measure of back - reaction . for the choice of scale in the inhomogeneous space on could choose a number of different measures . 
herewe consider the proper length of the edge of a cell .this corresponds to the scale of curvature for the sphere that appears to emerge when the number of masses becomes large ( as can be seen from the illustration in figure [ 120fig ] ) .the difference in scale in each case is given in the last column of table [ table2 ] .it can be seen that the broad trend is for the scale of the homogeneous and inhomogeneous space - times to approach each other as the number of cells becomes large .however , for only a small number of cells ( to ) the difference in scale can be of order .in any case , this method provides an exact quantification of back - reaction , and provides an arena for testing formalisms designed for more general configurations of energy and momentum .a numerical evolution of the 8-cell has now been performed , and other methods have also been used to address problem of understand the evolution of this type of structure .various approaches to back - reaction and averaging already exist in the literature , but much work remains to be done if we are to fully understand their observational consequences in the real universe .motivation for taking these problems seriously comes from the apparent necessity of including dark energy when we interpret observations within a linearly perturbed friedmann model , as well as the requirement to understand all possible sources of error and uncertainty in precision cosmology . to fully address this problemit is likely that we will need to develop more sophisticated models of inhomogeneous space - times , as well as developing a more sophisticated understanding of averaging in general relativity .research in this area should be considered exceptionally timely , with large amounts of resources currently being invested into observational probes designed to improve our understanding of dark energy , and the universe around us .i acknowledge the support of the stfc .
we introduce the concept of back - reaction in relativistic cosmological modeling . roughly speaking , this can be thought of as the difference between the large - scale behaviour of an inhomogeneous cosmological solution of einstein s equations , and a homogeneous and isotropic solution that is a best - fit to either the average of observables or dynamics in the inhomogeneous solution . this is sometimes paraphrased as ` the effect that structure has of the large - scale evolution of the universe ' . various different approaches have been taken in the literature in order to try and understand back - reaction in cosmology . we provide a brief and critical summary of some of them , highlighting recent progress that has been made in each case .
this paper focuses on a random partial differential equation consisting of a parabolic pde with irregular noise in the drift .formulation , existence ( with uniqueness in a certain sense ) and double probabilistic representation are discussed .the equation itself is motivated by _random irregular media models_. let , be a continuous function and a generalized random field playing the role of a noise .let , \times{\mathbb r}\rightarrow{\mathbb r} ] , a.s . in ,t [ \times{\mathbb r}) ] , a.s . in ,t [ \times{\mathbb r}) ] ( resp ., ) will be extended , without mention , by setting for and for [ resp ., for . will indicate the set of continuous functions defined on the space of real functions with differentiability class .we denote by [ resp . , the space of continuous ( continuous differentiable ) functions vanishing at zero .when there is no confusion , we will also simply use the symbols .we denote by \times{\mathbb r}) ] . , or simply , indicates the space of continuous bounded functions defined on .the vector spaces and are topological frchet spaces , or spaces , according to the terminology of , chapter 1.2 .they are equipped with the following natural topology . a sequence belonging to [ resp . , ] is said to converge to in the [ resp ., sense if ( resp ., and all derivatives up to order ) converges ( resp . ,converge ) to ( resp ., to and all its derivatives ) uniformly on each compact of . we will consider functions \times{\mathbb r } \rightarrow{\mathbb r} ]will be said to converge _ in a bounded way _ to if : * \times{\mathbb r} ] , the composition notation means . for positive integers , will indicate functions in the corresponding differentiability class .for instance , will be the space of functions which are on ( i.e. , once continuously differentiable ) and such that exists and is continuous. will indicate the set of functions such that the partial derivatives of all orders are bounded .if is a real compact interval and ,1[ ] , being two real numbers such that . here , does not necessarily need to be positive .recall that belongs to if clearly , defines a norm on which makes it a banach space . is an f - space if equipped with the topology of convergence related to for each compact interval .a sequence in converges to if it converges according to for every compact interval . we will also provide some reminders about the so - called _ young integrals _ ( see ) but will remain , however , in a simplified framework , as in or .we recall the essential inequality , stated , for instance , in : let be such that . if , then for any \subset i ] , where is a constant not depending on .the bilinear map sending to can be continuously extended to with values in . by definition , that objectwill be called the _ young integral _ of with respect to on .we also denote it . by additivity, we set , for ] .we suppose that , , with , we define . then if , then the result is obvious .we remark that . repeatedly using inequality ( [ fyoung ] ), one can show that the two linear maps and are continuous from to .this concludes the proof of the proposition . by a _mollifier _ , we mean a function ( i.e. 
, a -function such that itself and all its derivatives decrease to zero faster than any power of as ) with .we set .the result below shows that mollifications of a hlder function converge to with respect to the hlder topology .[ pyoungreg ] let be a mollifier and let .we write .then in the topology for any .we need to show that converges to zero .we set .let .we will establish that without loss of generality , we can suppose that .we distinguish between two cases ._ case _ .we have _ case _ . in this case, we have ) is verified with .this implies that which allows us to conclude . for convenience ,we introduce the topological vector space defined by it is also a _ vector algebra _ , that is , is a vector space and an algebra with respect to the _ sum and product of functions_. the next corollary is a consequence of the definition of the young integral and remark [ rexvq7 ] .[ cyoung ] let , with .then is well defined and belongs to . is not a metric space , but an inductive limit of the f - spaces ; the weak version of the banach steinhaus theorem for f - spaces can be adapted .in fact , a direct consequence of the banach steinhaus theorem of , section 2.1 , is the following .[ tbs ] let be an inductive limit of f - spaces and another f - space .let be a sequence of continuous linear operators .suppose that exists for any . then is again a continuous ( linear ) operator .we recall here a few notions related to stochastic calculus via regularization , a theory which began with .we refer to a recent survey paper .the stochastic processes considered may be defined on , { \mathbb r}_+ ] ( resp . , ) , we apply the same convention as was applied at the beginning of previous section for functions .so we extend them without further mention , setting for and for ( resp ., for ) . will denote the vector algebra of continuous processes .it is an f - space if equipped with the topology of u.c.p .( uniform convergence in probability ) convergence . in the sequel, we recall the most useful rules of calculus ; see , for instance , or .the forward symmetric integrals and the covariation process are defined by the following limits in the u.c.p .sense , whenever they exist : & : = & \lim_{\varepsilon\to0 + } c^{\varepsilon}(x , y)_t,\end{aligned}\ ] ] where all stochastic integrals and covariation processes will of course be elements of .if ] and ] .[ r1.1 ] ( a ) ] provided that two of the three integrals or covariations exist . provided that one of the two integrals exists .[ r1.2 ] ( a ) if ] , then is said to be a _zero quadratic variation process_. let , be continuous processes such that has all of its mutual covariations . then ] .a bounded variation process is a zero quadratic variation process .( _ classical it formula_. ) if , then exists and is equal to .\ ] ] if and , then the forward integral is well defined . in this paper ,all filtrations are supposed to fulfill the usual conditions .if } ] is the usual covariation process .we now introduce the notion of dirichlet process , which was essentially introduced by fllmer and has been considered by many authors ; see , for instance , for classical properties . in the present section , will denote a classical -brownian motion .[ d81 ] an -adapted ( continuous ) process is said to be a _ dirichlet process _ if it is the sum of an local martingale and a zero quadratic variation process . 
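as a toy numerical illustration of these regularization limits and of the dirichlet decomposition just defined ( a sketch only , not part of the source ; the discretization parameters are arbitrary ) , the following approximates the covariation of a simulated process given by the sum of a brownian path and a smooth ( hence zero quadratic variation ) term , and checks that it is close to the elapsed time :

```python
import numpy as np

def covariation_eps(x, y, dt, eps_steps):
    """epsilon-regularized covariation [x, y]^eps_t on a grid:
    (1/eps) * integral_0^t (x(s+eps) - x(s)) * (y(s+eps) - y(s)) ds,
    with eps = eps_steps * dt."""
    eps = eps_steps * dt
    dx = x[eps_steps:] - x[:-eps_steps]
    dy = y[eps_steps:] - y[:-eps_steps]
    return np.cumsum(dx * dy) * dt / eps

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
b = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
a = np.sin(2.0 * np.pi * t)        # smooth, zero quadratic variation part
x = b + a                          # toy Dirichlet process started at 0
qv = covariation_eps(x, x, dt, eps_steps=50)
print(qv[-1])                      # should be close to T = 1
```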
for simplicity, we will suppose that a.s .[ r81 ] ( i ) process in the previous decomposition is anadapted process .an -semimartingale is an -dirichlet process .the decomposition is unique .let be of class and let be an -dirichlet process .then is again an -dirichlet process with local martingale part . the class of semimartingales with respect to a given filtration is known to be stable with respect to transformations .remark [ r1.2](b ) says that finite quadratic variation processes are stable through transformations .the last point of the previous remark states that stability also holds for dirichlet processes .young integrals introduced in section [ s2 ] can be connected with the forward and symmetric integrals via the regularization appearing before remark [ r1.0 ] .the next proposition was proven in .let be processes whose paths are respectively in and , with , and .for any symbol , the integral coincides with the young integral .[ rexvq7a ] suppose that and satisfy the conditions of proposition [ pexvq7 ]. then remark [ r1.1](a ) implies that =0 ] when exists for any , we denote it by ] has a continuous version , we say that is a _ finite cubic variation process_. if , moreover , there is a positive sequence converging to zero such that ^{\varepsilon_n } \|_t < + \infty,\ ] ] then we say that is a ( _ strong _ ) _ finite cubic variation process_. if is a ( strong ) finite cubic variation process such that = 0 ] , then and if , the operator can be formally expressed in divergence form as suppose that is locally of bounded variation .we then get since in the weak- topology and is continuous .if has bounded variation , then we have in particular , this example contains the case where for any .suppose that is locally hlder continuous with parameter and that is locally hlder continuous with parameter such that .since is locally bounded , is also locally h " older continuous with parameter .proposition [ pyoungreg ] implies that in and in for every and . since is strictly positive on each compact , in . 
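the function appearing in this discussion and the divergence form alluded to above can be sketched as follows ; this is our reading , under the assumption that the drift is written as $b = \beta'$ and that $\Sigma$ denotes the mollification - defined primitive of $2\beta'/\sigma^2$ :
\[
\Sigma(x) := \lim_{n\to\infty} \int_0^x \frac{2\beta_n'(y)}{\sigma_n^2(y)}\, dy
\quad\text{(limit assumed to exist in } C^0(\mathbb{R})\text{, independently of the mollifier),}
\]
and then , formally ,
\[
L f = \frac{\sigma^2}{2}\, f'' + \beta' f' = \frac{\sigma^2}{2}\, e^{-\Sigma} \bigl( e^{\Sigma} f' \bigr)' .
\]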
by remark [ rexvq7 ] , well defined and locally h " older continuous with parameter .again , the following lemma can be proven at the level of regularizations ; see also lemma 2.6 in .[ t27 ] the unique solution to problem ( [ e2.9 ] ) is given by [ r2.3 ] if and is a classical solution to , then is clearly also a -generalized solution .[ r2.9bis ] given , we denote by the unique -generalized solution to problem ( [ e2.9 ] ) with , .the unique solution to the general problem ( [ e2.9 ] ) is given by we write , that is , the solution with [ r2.9 ] let .there is at most one such that .in fact , to see this , it is enough to suppose that .lemma [ t27 ] implies that consequently , is forced to be zero .this consideration allows us to define without ambiguity , where is the set of all which are -generalized solution to for some .in particular , .a direct consequence of lemma [ t27 ] is the following useful result .[ l28bis ] is the set of such that there exists with in particular , it gives us the following density proposition .[ t29 ] is dense in .it is enough to show that every -function is the -limit of a sequence of functions in .let be a sequence in converging to in .it follows that converges to and .we must now discuss technical aspects of the way and its domain are transformed by .we recall that and that is strictly positive .condition ( [ nonexplo ] ) implies that the image set of is .let be the classical pde operator where is a classical pde map ; however , we can also consider it at the formal level and introduce .[ t212 ] , . . holds if and only if .moreover , we have for every .this follows similarly as for proposition 2.13 of .we will now discuss another operator related to . given a function , we need to provide a suitable definition of , that is , some primitive of .* one possibility is to define that map , through previous expression , for . *otherwise , we try to define it as linear map on . for this , first suppose that is continuous . then integrating by parts , we obtain we remark that the right - hand side of this expression makes sense for any and continuous .we will thus define as follows : one may ask if , in the general case , the two definitions on and on are compatible .we will later see that under assumption [ a0 ] , this will be the case .however , in general , may be empty .thus far , we have learned how to eliminate the first - order term in a formal pde operator through the transformation introduced at ( [ efh ] ) ; when is classical , this was performed by zvonkin ( see ) .we would now like to introduce a transformation which puts the pde operator in a divergence form .let be a pde operator which is formally of type ( [ e2.1 ] ) : we consider a function of class , namely such that according to assumptions ( [ nonexplo ] ) , is bijective on .if there is no drift term , that is , , then we have .[ t216 ] we consider the formal pde operator given by where then : if and only if ; for every , we have .it is practically the same as in lemma 2.16 of .we now give a lemma whose proof can be easily established by investigation .suppose that is a classical pde operator .then is well defined for functions where acts on the second variable . 
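in concrete terms , with the conventions of the previous sketch , the function $h$ of lemma [ t27 ] and the drift - free operator obtained after composing with $h$ are presumably the following ( a sketch , not the paper's literal statement ) :
\[
h(x) = \int_0^x e^{-\Sigma(y)}\, dy , \qquad L h = 0 , \quad h(0)=0 , \quad h'(0)=1 ,
\]
and , writing $\phi = f\circ h^{-1}$ ,
\[
(Lf)\circ h^{-1} = L^0 \phi , \qquad L^0\phi := \frac{\sigma_h^2}{2}\,\phi'' , \qquad \sigma_h := (\sigma h')\circ h^{-1} ,
\]
so that the first - order ( drift ) term disappears after the change of variable .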
given a function \times{\mathbb r}) ] by .[ l28 ] let us suppose that .we set .we define the pde operator by , where is a classical operator acting on the space variable and if and in the classical sense , then .we will now formulate a supplementary assumption which will be useful when we study singular stochastic differential equations in the proper sense and not only in the form of a martingale problem .[ a0 ] let be a topological f - space which is a linear topological subspace of ( or , eventually , an inductive limit of sub - f - spaces ) .the -convergence implies convergence in and , therefore , pointwise convergence .we say that fulfills assumption if the following conditions hold : , which is dense . for every , the multiplicative operator maps into itself .let as defined in lemma [ t27 ] , that is , is such that we recall that solves problem with .we suppose that admits a continuous extension to .let .for every with and so that ] , we have and , where denotes the continuous extension of ( see remark [ r2.9bis ] ) to , which exists by .the set is dense in .[ r2.21a ] let .remark [ r2.9bis ] and point ( iii ) above together imply that extends continuously to .moreover , point ( iv ) above shows that and , where ; in fact , , and ( [ e213a ] ) implies that .point ( i ) above is satisfied if , for instance , the map is closable as a map from to . in that case , may be defined as the domain of the closure of , equipped with the graph topology related to .below , we give some sufficient conditions for points ( iv ) and ( v ) of the technical assumption to be satisfied .we define by the vector space of functions such that .this will be an f - space if equipped with the following topology .a sequence will be said to converge to in if and converges to in .in particular , a sequence converging according to also converges with respect to . on the other hand, and a sequence converging in also converges with respect to .moreover , is dense in because is dense in .[ l2.21 ] suppose that points to of the technical assumption are fulfilled .we suppose , moreover , that : . for every , , , , we have and . is well defined and has a continuous extension to , still denoted by , such that . . is the identity map on for every are injective and points and of the technical assumption are satisfied .the injectivity of follows from point ( e ) .the injectivity of is a consequence of remark [ r2.9bis ] .we prove point ( iv ) .point ( c ) says that .we set , , where , .clearly , and .point ( b ) implies that .hence , and ( iv ) is satisfied .concerning point ( v ) , let with and set .since belongs to by ( c ) , belongs to .point ( i ) of the technical assumption implies that there exists a sequence of functions converging to in the sense and thus also in .let be the sequence of primitives of ( which are of class ) such that .in particular , we have that converges to in the -sense . by ( c ), there exists in which is the limit of in the -sense .observe that because of ( b ) , . on the other hand , in . applying and using ( iii ) of the technical assumption ,we obtain the injectivity of allows us to conclude that .[ r2.21aa ] under the assumptions of lemma [ l2.21 ] , we have : * ; * , in fact , let . without loss of generality, we can suppose that .let and set so that .setting lemma [ t27 ] implies that , where .so .since , it follows that , by additivity .on the other hand , by point ( e ) of lemma [ l2.21 ] .[ 4ex ] we provide here a series of four significant examples when technical assumption is verified. 
we only comment on the points which are not easy to verify .the first example is simple .it concerns the case when the drift is continuous .this problem , to be studied later , corresponds to an ordinary sde where is _ close to divergence type _ , that is , and where is a locally bounded variation function vanishing at zero .the operator is of divergence type with an additional radon measure term , that is , we have . in this case, we have points ( i ) and ( ii ) of the technical assumption are trivial .we have , in fact , defined at point ( iii ) of the technical assumption is such that , where and consequently , the extension of to , always still denoted by the same letter , is given by with and \\[-8pt ] \nonumber \qquad & & \hspace*{11.2 mm } { } \times \biggl ( \ell(0 ) + \int_0^x \ell(y ) \exp \biggl(2\int_0^y\frac{d\beta}{\sigma ^2 } \biggr ) \frac{1}{\sigma^2(y ) } \,d\beta(y ) \biggr ) \biggr\}.\end{aligned}\ ] ] points ( iv ) and ( v ) are seen to be satisfied via lemma [ l2.21 ] .we have .point ( a ) is obvious since and so .let .using lebesgue stieltjes calculus , we can easily show that this shows that and therefore the first part of ( b ) .we remark that we can , in fact , consider because the expression of extends continuously to , which yields the first part of point ( c ) .moreover , inserting the expression for into in ( [ e2.27a ] ) , one shows that .suppose , now , that in expression ( [ e2.27a ] ) , , , .a simple investigation shows that , so the second part of point ( b ) is fulfilled ; point ( d ) is also clear because of ( [ e2.27 ] ) .finally point ( d ) holds because one can prove by inspection that is the identity on .we recall the notation which indicates the topological vector space of locally hlder continuous functions defined on with parameter .we recall that is a vector algebra .suppose that and ( or and ) .remark [ r2.6](d ) implies that also belongs to .we set .technical assumption [ a0 ] is verified for the following reasons . since , belongs to the same space .point ( i ) follows because of proposition [ pyoungreg ] and point ( ii ) follows because is an algebra .corollary [ cyoung ] yields that for every , the function is well defined and belongs to .this shows that can be continuously extended to and point ( iii ) is established .concerning points ( iv ) and ( v ) , we again use lemma [ l2.21 ] .we observe that point ( a ) is obvious since .let .considering as a deterministic process and recalling the definition of as in ( [ e213a ] ) , integration by parts in remark [ r1.1](c ) and proposition [ pexvq7 ] together imply that the first part of point ( b ) follows because of proposition [ pcruley ] .of course , the previous expression can be extended to and this shows the first part of point ( c ) .showing that the second part of point ( c ) of lemma [ l2.21 ] holds consists of verifying that . substituting into the previous expression , through proposition [ pcruley ], we obtain concerning the second part of point ( b ) , let so that .we want to show that coincides with . since , it remains to check that .we recall that twice applying the chain rule of proposition [ pcruley ] and using ( [ e1.1b ] ) , the fact that and integration by parts , we obtain point ( b ) is therefore completely established .point ( d ) follows because in ( [ eimt ] ) , when , it follows that . clearly , as for the previous example , .it remains to show that is the identity map . 
for this, we first remark that in fact , by proposition [ pexvq7 ] and integration by parts contained in remark [ r1.1](c ) , we obtain by the chain rule of proposition [ pcruley ] , we obtain the right - hand side of ( [ eyverif1 ] ) . at this point , by definition , if , we have therefore , ( [ eyverif1 ] ) and proposition [ pcruley ] allow us to conclude that suppose is locally with bounded variation .then the technical assumption is satisfied for , where is the space of continuous real functions , locally with bounded variation , equipped with the following topology .a sequence in converges to if the arguments for proving that the technical assumption is satisfied are similar , but easier , than those for the previous point .young - type calculus is replaced by classical lebesgue stieltjes calculus .in this section , we consider a pde operator satisfying the same properties as in previous section , that is , where and are continuous . in particular, we assume that exists in , independently of the chosen mollifier . then defined by and is a solution to with . here, we aim to introduce different notions of martingale problem , trying , when possible , to also clarify the classical notion . for the next two definitions, we consider the following convention . let equipped with a filtration fulfill the _usual conditions _ ; see , for instance , , definition 2.25 , chapter 1 . [ dmp ] a process is said to solve _ the martingale problem _ related to ( with respect to the aforementioned filtered probability space ) with initial condition , , if is an -local martingale for and . more generally ,for , , we say that solves the martingale problem related to with initial value at time if for every , is an -local martingale .we remark that solves the martingale problem at time if and only if solves the martingale problem at time .[ dsmp ] let be an -classical wiener process .anprogressively measurable process is said to solve _ the sharp martingale problem _ related to ( on the given filtered probability space ) with initial condition , , if for every . more generally ,for , , we say that solves the sharp martingale problem related to with initial value at time if for every , [ r3.1a ] let be an -wiener process . if is continuous , then a process solves the ( corresponding ) sharp martingale problem with respect to if and only if it is a classical solution of the sde for this , a simple application of the classical it formula gives the result .[ r3.2 ] ( i ) in general , does not belong to , otherwise a solution to the martingale problem with respect to would be a semimartingale . according to remark [ rsem ] , this is generally not the case . in , we gave necessary and sufficient conditions on so that is a semimartingale .\(ii ) given a solution to the martingale problem related to , we are interested in the operators and where is the vector algebra of continuous processes . we may ask whether and are closable in and , respectively .we will see that admits a continuous extension to .however , can be extended continuously to some topological vector subspace of , where includes the drift , only when assumption [ a0 ] is satisfied . similarly , as in the case of classical stochastic differential equations, it is possible to distinguish two types of existence and uniqueness for the martingale problem .even if we could treat initial conditions which are random -measurable solutions , here we will only discuss deterministic ones .we will denote by [ resp . the martingale problem ( resp. 
sharp martingale problem ) related to with initial condition the notions will only be formulated with respect to the initial condition at time 0 .[ d112 ] we will say that admits _ strong existence _ if the following holds .given any probability space , a filtration and an -brownian motion , there is a process which solves the sharp martingale problem with respect to and initial condition .[ d113 ] we will say that admits _ pathwise uniqueness _ if the following property is fulfilled .let be a probability space with filtration andbrownian motion .if two processes are two solutions of the sharp martingale problem with respect to and , such that a.s . , then and coincide .[ d114 ] we will say that _ admits weak existence _ if there is a probability space , a filtration and a process which is a solution of the corresponding martingale problem .we say that admits weak existence if admits weak existence for every .[ ( _ uniqueness in law_)][d115 ] we say that has a _ unique solution in law _ if the following holds .we consider an arbitrary probability space with a filtration and a solution of the corresponding martingale problem .we also consider another probability space equipped with another filtration and a solution .we suppose that , and , -a.s .then and must have the same law as a r.v.s with values in ( or ] , moreover , let be a fixed filtered probability space fulfilling the usual conditions . the first result concerning solutions to the martingale problem related to is the following .[ p3.3 ] let a process solves the martingale problem related to with initial condition at time if and only if is a local martingale which solves , on the same probability space , where and where is an -classical brownian motion .let be an -classical brownian motion .if is a solution to equation ( [ e3.3 ] ) , then is a solution to the sharp martingale problem with respect to with initial condition at time .[ r3.4 ] let be a solution to the martingale problem with respect to and set as in point ( i ) above . since is a local martingale , we know from remark [ r81](iv ) that is an -dirichlet process with martingale part in particular, is a finite quadratic variation process with = [ m^x , m^x]_t = \int_0^t \sigma^2(x_s)\,ds.\ ] ] proof of proposition [ p3.3 ] for simplicity , we will set .first , let be a solution to the martingale problem related to .since and , we know that is an -local martingale . in order to calculate its bracket , we recall that and hold by proposition [ t212](a ) .thus , is an -local martingale .this implies that = \int_0^t ( \sigma h')^2 ( h^{-1}(y_s))\,ds = \int_0^t \tilde\sigma_h^2(y_s)\,ds.\ ] ] finally , is a solution to the sde ( [ e3.3 ] ) with respect to the standard -brownian motion given by where is the canonical filtration generated by .now , let be a solution to ( [ e3.3 ] ) and let .proposition [ t212](c ) says that , where we can therefore apply it s formula to evaluate , which coincides with .this gives .\ ] ] using = \tilde\sigma_h^2(y_s)\,ds ] with respect to lebesgue measure , that is , fulfilling the density occupation identity ,\ ] ] for every positive borel function . 
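to fix ideas , the content of proposition [ p3.3 ] and the occupation density identity mentioned at the end of this passage can be summarized as follows ( standard formulations , with the notation of the earlier sketches ) :
\[
Y_t := h(X_t) \quad\text{solves}\quad Y_t = h(x_0) + \int_0^t \tilde\sigma_h(Y_s)\, dW_s , \qquad \tilde\sigma_h = (\sigma h')\circ h^{-1} ,
\]
and a local time $L_t(\cdot)$ with the density occupation property satisfies
\[
\int_0^t \varphi(Y_s)\, d[Y , Y]_s = \int_{\mathbb{R}} \varphi(a)\, L_t(a)\, da
\]
for every positive borel function $\varphi$ .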
trivially has extended local time regularity , at least with respect to .let .suppose for a moment that is a semimartingale in , as is the case , for instance , if is a classical brownian motion .in that case , one would have clearly , the rightmost integral can be extended continuously in probability to any , which implies that also has extended local time regularity related to .we remark that gives general conditions on semimartingales under which is a good integrator , even if is not necessarily a semimartingale in .[ dsol ] let a filtered probability space , a classical -brownian motion and an -measurable random variable .a process will be called a -_solution _ of the sde if : * has the extended local time regularity with respect to ; * ; * is a finite quadratic variation process . [ rsol ] suppose that . if , then a -solution is also a -solution .the previous definition is also new in the classical case , that is , when is a continuous function .a -solution with corresponds to a solution to the sde in the classical sense . on the other hand , a -solution with strictly including is a solution whose local time has a certain additional regularity . even in this generalized framework , it is possible to introduce the notions of _ strong -existence _ , _ weak -existence _ , _ pathwise -uniqueness _ and _ -uniqueness in law_. this can be done similarly as in definition [ d115 ] according to whether or not the filtered probability space with the classical brownian motion is fixed a priori .[ l3.9 ] we suppose that technical assumption is satisfied . if is a solution to a martingale problem related to a pde operator , then it has extended local time regularity with respect to .let .since solves the martingale problem with respect to , setting , it follows that continuity of on implies that can be extended to .now , let and .since equals a local martingale plus , it remains to show that exists for any .integrating by parts , the previous integral ( [ e3.8 ] ) equals .\ ] ] remark [ r1.2](b ) , ( f ) shows that the rightmost term member is well defined .[ l3.10 ] let be a process having extended local time regularity with respect to some -space ( or inductive limit ) .suppose that for fixed , the application is continuous from to .then for every and every , we have where the banach steinhaus - type theorem [ tbs ] implies that for every , is continuous from to .in fact , expression ( [ e3.11 ] ) is the u.c.p .limit of note that is a continuous bilinear map from to .since is continuous , the mapping is also continuous from to . in order to conclude the proof ,we need to check identity ( [ e3.9 ] ) for . in that case , since both sides of ( [ e3.9 ] ) equal we will now explore the relation between the martingale problem associated with and the stochastic differential equations with distributional drift . [ p3.11 ] let .suppose that fulfills technical assumption .let be a filtered probability space fulfilling the usual conditions and let be a classical -brownian motion .if solves the sharp martingale problem with respect to with initial condition , then is a -solution to the stochastic differential equation \\[-8pt ] \nonumber x_0 & = & x_0.\end{aligned}\ ] ] [ r2.21 ] in particular , if is close to divergence type , as in example [ 4ex](ii ) , then is a -solution to the previous equation with .let be a solution to the martingale problem related to .we know , by lemma [ l3.9 ] , that has extended local time regularity with respect to . 
on the other hand , by remark [ r3.4 ] , is a finite quadratic variation process .it remains to show that let and set . by definition of a sharp martingale problem ,we have according to remark [ r2.21a](i ) concerning the continuity of the map , previous expression can be extended to any . by remark [ r2.21a](ii ) , and . replacing this in ( [ e3.13 ] ) ,we obtain since , the proof is complete .[ c3.11 ] let .suppose that fulfills technical assumption .if [ resp . ] admits weak ( resp ., strong ) existence , then the sde also admits weak ( resp ., strong ) existence .the statement concerning strong solutions is obvious .concerning weak solutions , let us admit the existence of a filtered probability space , where there is a solution to the martingale problem with respect to with initial condition . then according to remark [ r3.5 ] , this solution is also a solution to a sharp martingale problem and the result follows . if is some -solution to ( [ e3.12 ] ) , is it a solution to the ( sharp ) martingale problem related to some operator ? this is a delicate question . in the following proposition , we only provide the converse of proposition [ p3.11 ] as a partial answer .[ c3.12 ] suppose that the pde operator fulfills technical assumption .let be a filtered probability space fulfilling the usual conditions and let be a classical -brownian motion .let be a progressively measurable process . solves the sharp martingale problem related to with respect to some initial condition if and only if it is a -solution to the stochastic differential equation \\[-8pt ] \nonumber x_0 & = & x_0 .\end{aligned}\ ] ] [ cswequ ] let .suppose that fulfills technical assumption .then weak existence and uniqueness in law ( resp ., strong existence and pathwise uniqueness ) hold for equation if and only if the same holds for [ resp . ] .proof of proposition [ c3.12 ] suppose that is a -solution to ( [ e3.15 ] ) .then it is a finite quadratic variation process .let .since solves ( [ e3.12 ] ) and always exists by the classical it formula [ see remark [ r1.2](e ) of section [ s1 ] ] , we know that also exists and is equal to therefore , this it formula says that holds . by lemma [ l3.10 ] , the linearity of mapping and ( [ e213a ] ), we obtain this shows that for every . in reality , it is possible to show the previous equality for any .in fact , the left - hand side extends continuously to and even to .the right - hand side is also allowed to be extended to for the following reason . for , let be a sequence of functions in converging to when , according to the topology .in particular , the convergence also holds in .since is continuous with respect to the topology with values in , we have in .finally , u.c.p . because of the extended local time regularity with respect to .we will , in fact , use the validity of ( [ e3.14 ] ) for with and and according to technical assumption [ a0](iv ) , we have . therefore , ( [ e3.14 ] ) gives again using extended local time regularity with respect to and the continuity of , we can state the validity of the previous expression for each with , in particular , for with .but in this case , for any with and , we obtain this shows the validity of the identity in definition [ dsmp ] for and that and . 
if , we replace by in the previous identity and use the fact that for any .it follows that fulfills a sharp martingale problem with respect to .this shows the reversed sense of the statement .the direct implication was proven in proposition [ p3.11 ] .[ ceqstrong ] we suppose that and , or and , with conditions .we set . then equation ( [ e3.15 ] ) admits -strong existence and pathwise uniqueness .the result follows from corollaries [ cswequ ] and [ c3.3ss ] .in this section , we want to discuss the related parabolic cauchy problem with final condition , which is associated with our stochastic differential equations with distributional drift .we will adopt the same assumptions and conventions as in section [ s4 ] .we consider the formal operator , where will hereafter act on the second variable .[ d21 ] let be an element of \times{\mathbb r } ) ] will be said to be a _-generalized solution _ to \\[-8pt ] \nonumber u(t,\cdot ) & = & u^0,\end{aligned}\ ] ] if the following are satisfied : for any sequence in \times{\mathbb r}) ] of class to , , then converges in a bounded way to .[ r21 ] ( a ) is said to solve if there exists such that ( [ f25 ] ) holds .the previous definition depends in principle on the mollifier , but it could be easily adapted so as not to depend on it .the regularized problem admits a solution : if and \times{\mathbb r}) ] of for this , it suffices to apply theorem of .we now state a result concerning the case when the operator is classical .even if the next proposition could be stated when the drift is a continuous function , we will suppose it to be zero .in fact , it will later be applied to .[ p39 ] we suppose that .let \times{\mathbb r } ) , n \in{\mathbb n}, ] .let be a strictly positive real continuous function .suppose that there exist \times { \mathbb r } ) ] in a bounded way , where the function is defined by where is the unique solution ( in law ) to and where is a classical brownian motion on some suitable filtered probability space .[ r310 ] usual it calculus implies that where is the unique solution in law to the problem theorem ( chapter of ) affirms that it is possible to construct a solution ( unique in law ) to the sde ( [ f310quater ] ) resp ., to .suppose that is a classical pde operator .let be bounded and continuous on \times{\mathbb r} ] , . using the engelbert schmidt construction ( see , e.g. , the proof of theorem 5.4 , chapter 5 and 5.7 of ) , it is possible to construct a solution of the sde on some fixed probability space which solves ( [ f310quater ] ) with respect to some classical wiener process .we set for simplicity .the procedure is as follows .we fix a standard brownian motion on some fixed probability space one set is a.s . a homeomorphism on andwe define as the inverse of .a solution will be then given by ; in fact , it is possible to show that the quadratic variation of the local martingale is the brownian motion is constructed a posteriori and is adapted to the natural filtration of by setting so , on the same probability space , we can set , being the inverse of , where .consequently , on the same probability space , we construct , where is the inverse of and . solves equation ( [ f310quinque ] ) with respect to a brownian motion depending on . by construction, the family converges a.s . to . 
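the construction just described can be condensed as follows ( time - homogeneous case , our notation ) : the probabilistic representation of proposition [ p39 ] and the engelbert schmidt time change read , schematically ,
\[
u(s , x) = E\bigl[ u^0\bigl( Y_T^{s , x}\bigr)\bigr] , \qquad dY_t = \tilde\sigma(Y_t)\, dW_t , \quad Y_s = x ,
\]
where a solution can be produced from a brownian motion $B$ started at $x$ through
\[
A_t := \int_0^t \frac{ds}{\tilde\sigma^2(B_s)} , \qquad T_t := \inf\{ r : A_r > t \} , \qquad Y_t := B_{T_t} .
\]
the time - changed brownian motion has quadratic variation $\int_0^t \tilde\sigma^2(Y_s)\, ds$ , which is what identifies it as a weak solution .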
using lebesgue dominated convergence theorems and the bounded convergence of and , we can take the limit when in expression ( [ f311 ] ) and obtain the desired result .[ r311a ] in particular , the corresponding laws of random variables are tight .again , we will adopt the same conventions as in section [ s4 ] .we set . is the classical operator defined at ( [ e2.12 ] ) .let us consider as a formal operator .[ c311 ] let \times{\mathbb r}) ] or , we again set according to the conventions of section .again , we consider as a formal operator .let \times{\mathbb r}) ] to \\[-8pt ] \nonumber u(t,\cdot ) & = & u^0.\end{aligned}\ ] ] moreover , solves \\[-8pt ] \nonumber \tilde{u } ( t,\cdot ) & = & \tilde{u}^0.\end{aligned}\ ] ] in accordance with section [ s4 ] , let be an approximating sequence which is related to .let us consider the pde operators defined at ( [ e2.2 ] ) .let be a sequence in such that , in a bounded way and for which there are classical solutions of we recall that those sequences always exist because of remark [ r21](c ) .we set by lemma , we have where by proposition [ p39 ] , and corollary [ c311 ] , in a bounded way , where this concludes the proof of the proposition .we now discuss how -generalized solutions are transformed under the action of the function introduced at ( [ efk ] ) . a similar result to lemma[ t216 ] for the elliptic case is the following .[ p313 ] for \times{\mathbb r}) ] , let be the unique -generalized solution in \times{\mathbb r}) ] to ; the are smooth on ,\infty [ \times\mathbb{r}^2 ] ; holds for every and every ; the previous lemma allows us to establish the following .[ tdistr ] let be the solution to the martingale problem related to at time and point .suppose that to be of divergence type , having the aronson form .then there is fundamental solution of with the following properties : letting \times{\mathbb r}) ] ; letting \times{\mathbb r}) ] ; it fulfills aronson estimates ; .we recall , by remark [ rcv ] , that where is the fundamental solution associated with the operator , . this , and point ( v ) of lemma [ l5.1 ] , directly imply the validity of the first point .taking into account assumption , aronson estimates for and the fact that result ( ii ) follows easily . with the same conventions as before , we have so , for , \(iii ) follows after integration with respect to and because of lemma [ l5.1](ix ) .[ pdistr1 ] let \times{\mathbb r } ) \cap l^1([0,t ] \times { \mathbb r}) ] be the -generalized solution to , .then : ; is absolutely continuous , and in particular , for a.e . ] , be two continuous random fields a.s . in ,t [ \times{\mathbb r } ) ] ( resp ., ) whose paths are bounded and continuous .let be continuous stochastic processes such that is defined a.s .and assumption is satisfied .let be the random field which is a.s .the -generalized solution to ( [ e3.9quater ] ) .the following then holds : for every .we fix a realization .theorem [ tdistr1 ] says that the unique solution to equation ( [ e3.9quater ] ) is given by where is the density law of the solution to the martingale problem related to at point at time .proposition [ pdistr1](b ) implies that exists and is integrable on ,t [ \times{\mathbb r} ] .we recall , in particular , that is in ,t [ \times{\mathbb r}) ] , is of class . 
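for completeness , the aronson estimates invoked in theorem [ tdistr ] take , in their classical one - dimensional form , the shape below ( constants and normalization are ours ) :
\[
\frac{1}{K\sqrt{t-s}}\, \exp\Bigl( -K\,\frac{(x-y)^2}{t-s}\Bigr)
\;\le\; \Gamma(s , x ; t , y) \;\le\;
\frac{K}{\sqrt{t-s}}\, \exp\Bigl( -\frac{(x-y)^2}{K(t-s)}\Bigr)
\]
for some constant $K \ge 1$ , where $\Gamma$ denotes the fundamental solution of the divergence - form operator .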
for ] ( resp ., ) whose paths are bounded and continuous .we suppose that and that is a ( two - sided ) zero strong cubic variation process such that there are two finite and strictly positive random variables with a.s .let be the random field which is a.s .a -generalized solution to for .we set .then is a ( weak ) solution of the spde .proposition [ pspdes ] says that it will be enough to verify that for every test function and every $ ] . after making the identification ,the previous lemma [ lspdes ] says that since is a zero strong cubic variation process , proposition [ pstacub ] implies that is also a zero strong cubic variation process .then the it chain rule from proposition [ crule ] , applied with and remark [ r1.0 ] say that the right - hand side of previous expression gives this concludes the proof .we would like to thank an anonymous referee and the editor for their careful reading and stimulating remarks .the first named author is grateful to dr .juliet ryan for her precious help in correcting several language mistakes .flandoli , f. , russo , f. and wolf , j. ( 2004 ) . some sdes with distributional drift .ii . lyons zheng structure , it formula and semimartingale characterization ._ random oper .stochastic equations _ * 12 * 145184 .gradinaru , m. , russo , f. and vallois , p. ( 2003 ) .generalized covariations , local time and stratonovich it s formula for fractional brownian motion with hurst index .probab . _ * 31 * 17721820 .
a new class of random partial differential equations of parabolic type is considered , where the stochastic term consists of an irregular noisy drift , not necessarily gaussian , for which a suitable interpretation is provided . after freezing a realization of the drift ( stochastic process ) , we study existence and uniqueness ( in some appropriate sense ) of the associated parabolic equation , and a probabilistic interpretation is investigated .
in compressive imaging , we aim to estimate an image from noisy linear observations , assuming that the image has a representation in some wavelet basis ( i.e. , ) containing only a few ( ) large coefficients ( i.e. , ) . in ( [ eqn : system ] ) , is a known measurement matrix and is additive white gaussian noise .though makes the problem ill - posed , it has been shown that can be recovered from when is adequately small and is incoherent with .the wavelet coefficients of natural images are known to have an additional structure known as _ persistence across scales _ ( pas ) , which we now describe . for 2d images ,the wavelet coefficients are naturally organized into quad - trees , where each coefficient at level acts as a parent for four child coefficients at level .the pas property says that , if a parent is very small , then all of its children are likely to be very small ; similarly , if a parent is large , then it is likely that some ( but not necessarily all ) of its children will also be large .several authors have exploited the pas property for compressive imaging .the so - called `` model - based '' approach is a deterministic incarnation of pas that leverages a restricted union - of - subspaces and manifests as a modified cosamp algorithm .most approaches are bayesian in nature , exploiting the fact that pas is readily modeled by a _ hidden markov tree _ ( hmt ) .the first work in this direction appears to be , where an iteratively re - weighted algorithm , generating an estimate of , was alternated with a viterbi algorithm , generating an estimate of the hmt states .more recently , hmt - based compressive imaging has been attacked using modern bayesian tools .for example , used markov - chain monte - carlo ( mcmc ) , which is known to yield correct posteriors after convergence . for practical image sizes , however , convergence takes an impractically long time , and so mcmc must be terminated early , at which point its performance may suffer .variational bayes ( vb ) can sometimes offer a better performance / complexity tradeoff , motivating the approach in .our experiments indicate that , while indeed offers a good performance / complexity tradeoff , it is possible to do significantly better . in this paper , we propose a novel approach to hmt - based compressive imaging based on loopy belief propagation . for this , we model the coefficients in as conditionally gaussian with variances that depend on the values of hmt states , and we propagate beliefs ( about both coefficients and states ) on the corresponding factor graph .a recently proposed `` turbo '' messaging schedule suggests to iterate between exploitation of hmt structure and exploitation of observation structure from ( [ eqn : system ] ) . for the former we use the standard sum - product algorithm , and for the latter we use therecently proposed _ approximate message passing _( amp ) approach .the remarkable properties of amp are 1 ) a rigorous analysis ( as with fixed , under i.i.d gaussian ) establishing that its solutions are governed by a state - evolution whose fixed points when unique yield the true posterior means , and 2 ) very low implementational complexity ( e.g. 
, amp requires one forward and one inverse fast - wavelet - transform per iteration , and very few iterations ) .we consider two types of conditional - gaussian coefficient models : a bernoulli - gaussian ( bg ) model and a two - state gaussian - mixture ( gm ) model .the bg model assumes that the coefficients are either generated from a large - variance gaussian distribution or are exactly zero ( i.e. , the coefficients are exactly sparse ) , whereas the gm model assumes that the coefficients are generated from either a large - variance or a small - variance gaussian distribution .both models have been previously applied for imaging , e.g. , the bg model was used in , whereas the gm model was used in . although our models for the coefficients and the corresponding hmt states involve statistical parameters like variance and transition probability , we learn those parameters directly from the data .to do so , we take a hierarchical bayesian approach similar to where these statistical parameters are treated as random variables with suitable hyperpriors .experiments on a large image database show that our turbo - amp approach yields state - of - the - art reconstruction performance with substantial reduction in complexity .the remainder of the paper is organized as follows .section [ sec : model ] describes the signal model , section [ sec : alg ] describes the proposed algorithm , section [ sec : sims ] gives numerical results and comparisons with other algorithms , and section [ sec : conc ] concludes ._ notation _ : above and in the sequel , we use lowercase boldface quantities to denote vectors , uppercase boldface quantities to denote matrices , to denote the identity matrix , to denote transpose , and .we use to denote the probability density function ( pdf ) of random variable given the event , where often the subscript `` '' is omitted when there is no danger of confusion .we use to denote the -dimensional gaussian pdf with argument , mean , and covariance matrix , and we write to indicate that random vector has this pdf .we use to denote expectation , to denote the probability of event , and to denote the dirac delta .finally , we use to denote equality up to a multiplicative constant .throughout , we assume that represents a 2d wavelet transform , so that the transform coefficients ^t ] is output at the signal estimate .we now describe how the precisions are learned .first , we recall that describes the apriori precision on the active coefficients at the level , i.e. , on , where the corresponding index set is of size .furthermore , we recall that the prior on was chosen as in ( [ eqn : prior_rhoj ] ) .thus , _ if _ we had access to the true values , then ( [ eqn : spike_slab ] ) implies that which implies ) . ]that the posterior on would take the form of where and .in practice , we do nt have access to the true values nor to the set , and thus we propose to build surrogates from the ssr outputs .in particular , to update after the turbo iteration , we employ and , where and denote the final llr on and the final mmse estimate of , respectively , at the turbo iteration .these choices imply the hyperparameters finally , to perform ssr at turbo iteration , we set the variances equal to the inverse of the expected precisions , i.e. , .the noise variance is learned similarly from the ssr - estimated residual . next , we describe how the transition probabilities are learned . first , we recall that describes the probability that a child at level is active ( i.e. 
, ) given that his parent ( at level ) is active . furthermore , we recall that the prior on was chosen as in ( [ eqn : prior_pij11 ] ) . thus _ if _ we knew that there were active coefficients at level , of which had active children , then the posterior on would take the form of , where and . in practice ,we do nt have access to the true values of and , and thus we build surrogates from the ssr outputs . in particular , to update after the turbo iteration , we approximate by the event , and based on this approximation set ( as in ( [ eqn : kj ] ) ) and .the corresponding hyperparameters are then updated as finally , to perform ssr at turbo iteration , we set the transition probabilities equal to the expected value .the parameters , , and are learned similarly . until now, we have focused on the bernoulli - gaussian ( bg ) signal model ( [ eqn : spike_slab ] ) . in this section ,we describe the modifications needed to handle the gaussian mixture ( gm ) model where denotes the variance of `` large '' coefficients and denotes the variance of `` small '' ones . for either the bg or gm prior, amp is performed using the steps ( [ eqn : xini])-([eqn : cni ] ) . for the bg case ,the functions , , , and are given in ( [ eqn : fn])([eqn : taun ] ) , whereas for the gm case , they take the form ^ 2}{[1+\bar{\tau}_n(\xi , c)]^2 } + \ , c\,\xi^{-1}\ ! f_n(\xi ; c ) \nonumber\label{eqn : gn2}\\[-1.5mm]&&\\[-3.0 mm ] f'_n(\xi;c ) & = & \frac{\bar{\tau}_n(\xi;c)\bar{\alpha}{_{n,\text{\sf l}}}(c)(1+\bar{\tau}_n(\xi;c ) - 2 \xi^2 \bar{\zeta}_n(c ) ) } { [ 1+\bar{\tau}_n(\xi;c)]^2 } \nonumber\\ & & + \frac{\bar{\alpha}{_{n,\text{\sf s}}}(c)(1+\bar{\tau}_n(\xi;c ) + 2 \xi^2 \bar{\zeta}_n(c)\bar{\tau}_n(\xi;c ) ) } { [ 1+\bar{\tau}_n(\xi;c)]^2 } \quad \label{eqn : fnd2}\\ \bar{\tau}_n(\xi , c ) & = & \bar{\beta}_n(c ) \exp(-\bar{\zeta}_n(c)\xi^2),\end{aligned}\ ] ] where likewise , for the bg case , the extrinsic llr is given by ( [ eqn : amp_llr ] ) , whereas for the gm case , it becomes proposed turbo approach to compressive imaging was compared to several other tree - sparse reconstruction algorithms : modelcs , hmt+irwl1 , mcmc , variational bayes ( vb ) ; and to several simple - sparse reconstruction algorithms : cosamp , spgl1 , and bernoulli - gaussian ( bg ) amp .all numerical experiments were performed on ( i.e. , ) grayscale images using a -level 2d haar wavelet decomposition , yielding approximation coefficients and individual markov trees . in all cases , the measurement matrix had i.i.d gaussian entries . unless otherwise specified , noiseless measurements were used .we used normalized mean squared error ( nmse ) as the performance metric .we now describe how the hyperparameters were chosen for the proposed turbo schemes .below , we use to denote the total number of wavelet coefficients at level , and to denote the total number of approximation coefficients . for both turbo - bg and turbo - gm ,the beta hyperparameters were chosen so that , and with , , , and .these informative hyperparameters are similar to the `` universal '' recommendations in and , in fact , identical to the ones suggested in the mcmc work . for turbo - bg ,the hyperparameters for the signal precisions were set to and \!=\![10 , 1 , 1 , 0.1 , 0.1]$ ] .this choice is motivated by the fact that wavelet coefficient magnitudes are known to decay exponentially with scale ( e.g. , ) . 
meanwhile ,the hyperparameters for the noise precision were set to .although the measurements were noiseless , we allow turbo - bg a nonzero noise variance in order to make up for the fact that the wavelet coefficients are not exactly sparse , as assumed by the bg signal model .( we note that the same was done in the bg - based work . ) for turbo - gm , the hyperparameters for the signal precisions were set at the values of for the bg case , while the hyperparameters for were set as and .meanwhile , the noise variance was assumed to be exactly zero , because the gm signal prior is capable of modeling non - sparse wavelet coefficients . for mcmc , the hyperparameters were set in accordance with the values described in ; the values of are same as the ones used for the proposed turbo - bg scheme , while . for vb , the same hyperparameters as mcmcwere used except for and , which were the default values of hyperparameters used in the publicly available code .we experimented with the values for both mcmc and vb and found that the default values indeed seem to work best .for example , if one swaps the hyperparameters between vb and mcmc , then the average performance of vb and mcmc degrade by and , respectively , relative to the values reported in table [ tab : summary ] . for both the cosamp and modelcs algorithms ,the principal tuning parameter is the assumed number of non - zero coefficients . for both modelcs ( which is based on cosamp ) and cosamp itself ,we used the rice university codes , which include a genie - aided mechanism to compute the number of active coefficients from the original image .however , since we observed that the algorithms perform somewhat poorly under that tuning mechanism , we instead ran ( for each image ) multiple reconstructions with the number of active coefficients varying from to in steps of , and reported the result with the best nmse . the number of active coefficients chosen in this manner was usually much smaller than that chosen by the genie - aided mechanism . to implement bg - amp, we used the amp scheme described in section [ sec : mp : ssr ] with the hyperparameter learning scheme described in section [ sec : update ] ; hmt structure was not exploited . for this, we assumed that the priors on variance and activity were identical over the coefficient index , and assigned gamma and beta hyperpriors of and , respectively .for hmt+irwl1 , we ran code provided by the authors with default settings . 
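before turning to the remaining settings and the results , it may help to make explicit the bernoulli - gaussian posterior computations on which bg - amp and the turbo schemes rely ( the scalar functions denoted f_n and g_n in the algorithm section ) . the sketch below is ours : the variable names are invented , real - valued data are assumed , and the parametrization may differ from the paper's own definitions .

import numpy as np

def bg_denoiser(r, c, lam, sig2):
    # posterior computations for the prior (1 - lam) * delta(x) + lam * N(0, sig2),
    # given an amp pseudo-observation r = x + noise with noise variance c
    ext_llr = 0.5 * np.log(c / (sig2 + c)) + 0.5 * r**2 * (1.0 / c - 1.0 / (sig2 + c))
    pi_on = 1.0 / (1.0 + (1.0 - lam) / lam * np.exp(-ext_llr))   # posterior activity probability
    gain = sig2 / (sig2 + c)
    mean_on, var_on = gain * r, gain * c                         # moments given the "active" state
    post_mean = pi_on * mean_on                                  # plays the role of f_n
    post_var = pi_on * (var_on + mean_on**2) - post_mean**2      # plays the role of g_n
    return post_mean, post_var, ext_llr                          # ext_llr: likelihood part of the extrinsic llr

in the turbo loop , ext_llr is the soft activity information that amp would pass to the hidden markov tree , while post_mean and the average of post_var drive the amp iterations ; in the gaussian - mixture variant the point mass at zero is simply replaced by a small - variance gaussian , which only changes the "inactive" likelihood term .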
for spgl1 ,the residual variance was set to , and all parameters were set at their defaults .[ fig : reccomp ] shows a section of the `` cameraman '' image along with the images recovered by the various algorithms .qualitatively , we see that cosamp , which leverages only simple sparsity , and modelcs , which models persistence - across - scales ( pas ) through a deterministic tree structure , both perform relatively poorly .hmt+irwl1 also performs relatively poorly , due to ( we believe ) the ad - hoc manner in which the hmt structure was exploited via iteratively re - weighted .the bg - amp and spgl1 algorithms , neither of which attempt to exploit pas , perform better .the hmt - based schemes ( vb , mcmc , turbo - gm , and turbo - gm ) all perform significantly better , with the turbo schemes performing best .[ b][b][0.8]original [ b][b][0.8]hmt+irwl1 [ b][b][0.8]cosamp [ b][b][0.8]modelcs [ b][b][0.8]bg - amp [ b][b][0.8]spgl1 [ b][b][0.8]variational bayes [ b][b][0.8]mcmc [ b][b][0.8]turbo - bg [ b][b][0.8]turbo - gm + [ c][bc][0.8]image type [ c][c][0.8]average nmse ( db ) [ l][l][0.45 ] cosamp [ l][l][0.45 ] hmt - irwl1 [ l][l][0.45]turbo bg [ l][l][0.45]turbo gm [ c][bc][0.8]image type [ c][c][0.8]average runtime ( sec ) [ l][l][0.45 ] cosamp [ l][l][0.45 ] hmt - irwl1 [ l][l][0.45]turbo bg [ l][l][0.45]turbo gm .nmse and runtime averaged over images . [ cols="^,^,^",options="header " , ] [ c][bc][0.8]number of measurements [ c][c][0.8]average nmse ( db ) [ l][l][0.45 ] cosamp [ l][l][0.45 ] hmt - irwl1 [ l][l][0.45]turbo bg [ l][l][0.45]turbo gm [ c][bc][0.8]number of measurements [ c][c][0.8]average runtime ( sec ) [ l][l][0.45 ] cosamp [ l][l][0.45 ] hmt - irwl1 [ l][l][0.45]turbo bg [ l][l][0.45]turbo gm for a quantitative comparison , we measured average performance over a suite of images in a _ microsoft research object class recognition _ database images extracted from the `` pixel - wise labelled image database v2 '' at _ http://research.microsoft.com / en - us / projects / objectclassrecognition_. what we refer to as an `` image type '' is a `` row '' in this database . ] that contains types of images ( see fig . [fig : categories ] ) with roughly images of each type . in particular, we computed the average nmse and average runtime on a 2.5 ghz pc , for each image type .these results are reported in figures [ fig : nmse ] and [ fig : comptime ] , and the global averages ( over all images ) are reported in table [ tab : summary ] . from the table , we observe that the proposed turbo algorithms outperform all the other tested algorithms in terms of reconstruction nmse , but are beaten only by cosamp in speed . between the two turbo algorithms , we observe that turbo - gm slightly outperforms turbo - bg in terms of reconstruction nmse , while taking the same runtime . in terms of nmse performance ,the closest competitor to the turbo schemes is mcmc , db ( i.e. , better ) at an average runtime of sec ( i.e. , slower ) . ]whose nmse is worse than turbo - bg and worse than turbo - gm .the good nmse performance of mcmc comes at the cost of complexity , though : mcmc is times slower than the turbo schemes .the second closest nmse - competitor is vb , showing performance db worse than turbo - bg and worse than turbo - gm .even with this sacrifice in performance , vb is still twice as slow as the turbo schemes . among the algorithms that do not exploit pas, we see that spgl1 offers the best nmse performance , but is by far the slowest ( e.g. , times slower than cosamp ) . 
meanwhile , cosamp is the fastest , but shows the worst nmse performance ( e.g. , worse than spgl1 ) .bg - amp strikes an excellent balance between the two : its nmse is only away from spgl1 , whereas it takes only times as long as cosamp .however , by combining the amp algorithm with hmt structure via the turbo approach , it is possible to significantly improve nmse while simultaneously decreasing the runtime .the reason for the complexity decrease is twofold .first , the hmt structure helps the amp and parameter - learning iterations to converge faster .second , the hmt steps are computationally negligible relative to the amp steps : when , e.g. , , the amp portion of the turbo iteration takes approximately sec while the hmt portion takes sec .we also studied nmse and compute time as a function of the number of measurements , . for this study , we examined images of type 1 at . in figure[ fig : nmse_varym ] , we see that turbo - gm offers the uniformly best nmse performance across .however , as decreases , there is little difference between the nmses of turbo - gm , turbo - cs , and mcmc .as increases , though , we see that the nmses of mcmc and vb converge , but that they are significantly outperformed by turbo - gm , turbo - cs , and somewhat surprisingly spgl1 .in fact , at , spgl1 outperforms turbo - bg , but not turbo - gm .however , the excellent performance of spgl1 at these comes at the cost of very high complexity , as evident in figure [ fig : comptime_varym ] .we proposed a new approach to hmt - based compressive imaging based on loopy belief propagation , leveraging a turbo message passing schedule and the amp algorithm of donoho , maleki , and montanari .we then tested our algorithm on a suite of natural images and found that it outperformed the state - of - the - art approach ( i.e. , variational bayes ) while halving its runtime .m. f. duarte , m. b. wakin , and r. g. baraniuk , `` wavelet - domain compressive signal reconstruction using a hidden markov tree model , '' in _ proc .speech & signal process ._ , las vegas , nv , apr . 2008 , pp . 51375140 .b. j. frey and d. j. c. mackay , `` a revolution : belief propagation in graphs with cycles , '' in _ adv . in neural inform .processing syst ._ , m. jordan , m. s. kearns , and s. a. solla , eds.1em plus 0.5em minus 0.4em mit press , 1998 .s. som , l. c. potter , and p. schniter , `` on approximate message passing for reconstruction of non - uniformly sparse signals , '' in _ proc . national aerospace and electronics conf . _ ,dayton , oh , jul .2010 .j. k. romberg , h. choi , and r. g. baraniuk , `` bayesian tree - structured image modeling using wavelet - domain hidden markov models , '' _ ieee trans . image process ._ , vol .10 , no . 7 , pp . 10561068 , jul .s. som , l. c. potter , and p. schniter , `` compressive imaging using approximate message passing and a markov - tree prior , '' in _ proc .asilomar conf .signals syst ._ , pacific grove , ca , nov .
we propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2d wavelet transform coefficients of natural images . like other recent works , we model wavelet structure using a hidden markov tree ( hmt ) but , unlike other works , ours is based on loopy belief propagation ( lbp ) . for lbp , we adopt a recently proposed `` turbo '' message passing schedule that alternates between exploitation of hmt structure and exploitation of compressive - measurement structure . for the latter , we leverage donoho , maleki , and montanari s recently proposed approximate message passing ( amp ) algorithm . experiments with a large image database suggest that , relative to existing schemes , our turbo lbp approach yields state - of - the - art reconstruction performance with substantial reduction in complexity .
adaptive network consists of a collection of agents that are interconnected to each other and solve distributed estimation and inference problems in a collaborative manner .two useful strategies that enable adaptation and learning over such networks in real - time are the incremental strategy and the diffusion strategy .incremental strategies rely on the use of a hamiltonian cycle , i.e. , a cyclic path that covers all nodes in the network , which is generally difficult to enforce since determining a hamiltonian cycle is an np - hard problem .in addition , cyclic trajectories are not robust to node or link failure . in comparison ,diffusion strategies are scalable , robust , and able to match well the performance of incremental networks . in adaptive diffusion implementations, information is processed locally at the nodes and then diffused in real - time across the network .diffusion strategies were originally proposed in and further extended and studied in .they have been applied to model self - organized and complex behavior encountered in biological networks , such as fish schooling , bird flight formations , and bee swarming .diffusion strategies have also been applied to online learning of gaussian mixture models and to general distributed optimization problems .there have also been several useful works in the literature on distributed consensus - type strategies , with application to multi - agent formations and distributed processing .the main difference between these works and the diffusion approach of is the latter s emphasis on the role of adaptation and learning over networks . in the original diffusion least - mean - squares ( lms ) strategy , the weight estimates that are exchanged among the nodes can be subject to quantization errors and additive noise over the communication links . studying the degradation in mean - square performance that results from these particular perturbationscan be pursued , for both incremental and diffusion strategies , by extending the mean - square analysis already presented in , in the same manner that the tracking analysis of conventional stand - alone adaptive filters was obtained from the counterpart results in the stationary case ( as explained in ( * ? ? ?* ch . 21 ) ) .useful results along these lines , which study the effect of link noise during the exchange of the weight estimates , already appear for the traditional diffusion algorithm in the works and for consensus - based algorithms in . 
in this paper ,our objective is to go beyond these earlier studies by taking into account additional effects , and by considering a more general algorithmic structure .the reason for this level of generality is because the analytical results will help reveal which noise sources influence the network performance more seriously , in what manner , and at what stage of the adaptation process .the results will suggest important remedies and mechanisms to adapt the combination weights in real - time .some of these insights are hard to get if one focuses solely on noise during the exchange of the weight estimates .the analysis will further show that noise during the exchange of the regression data plays a more critical role than other sources of imperfection : this particular noise alters the learning dynamics and modes of the network , and biases the weight estimates .noises related to the exchange of other pieces of information do not alter the dynamics of the network but contribute to the deterioration of the network performance .to arrive at these results , in this paper , we first consider a generalized analysis that applies to a broad class of diffusion adaptation strategies ( see further ahead ; this class includes the original diffusion strategies and as two special cases ) .the analysis allows us to account for various sources of information noise over the communication links .we allow for noisy exchanges during _ each _ of the three processing steps of the adaptive diffusion algorithm ( the two combination steps and and the adaptation step ) . in this way , we are able to examine how the three sets of combination coefficients in influence the propagation of the noise signals through the network dynamics .our results further reveal how the network mean - square - error performance is dependent on these combination weights . 
following this line of reasoning, the analysis leads to algorithms and further ahead for choosing the combination coefficients to improve the steady - state network performance .it should be noted that several combination rules , such as the metropolis rule and the maximum degree rule , were proposed previously in the literature especially in the context of consensus - based iterations .these schemes , however , usually suffer performance degradation in the presence of noisy information exchange since they ignore the network noise profile .when the noise variance differs across the nodes , it becomes necessary to design combination rules that are aware of this variation as outlined further ahead in section vi - b .moreover , in a mobile network where nodes are on the move and where neighborhoods evolve over time , it is even more critical to employ adaptive combination strategies that are able to track the variations in the noise profile in order to cope with such dynamic environments .this issue is taken up in section vi - c .we use lowercase letters to denote vectors , uppercase letters for matrices , plain letters for deterministic variables , and boldface letters for random variables .we also use to denote conjugate transposition , for the trace of its matrix argument , for the spectral radius of its matrix argument , for the kronecker product , and for a vector formed by stacking the columns of its matrix argument .we further use to denote a ( block ) diagonal matrix formed from its arguments , and to denote a column vector formed by stacking its arguments on top of each other .all vectors in our treatment are column vectors , with the exception of the regression vectors , , and the associated noise signals , , which are taken to be row vectors for convenience of presentation .we consider a connected network consisting of nodes .each node collects scalar measurements and regression data vectors over successive time instants . note that we use parenthesis to refer to the time - dependence of scalar variables , as in , and subscripts to refer to the time - dependence of vector variables , as in .the measurements across all nodes are assumed to be related to an unknown vector via a linear regression model of the form : where denotes the measurement or model noise with zero mean and variance .the vector in denotes the parameter of interest , such as the parameters of some underlying physical phenomenon , the taps of a communication channel , or the location of food sources or predators .such data models are also useful in studies on hybrid combinations of adaptive filters .the nodes in the network would like to estimate by solving the following minimization problem : in previous works , we introduced and studied several distributed strategies of the diffusion type that allow nodes to cooperate with each other in order to solve problems of the form in an adaptive manner .these diffusion strategies endow networks with adaptation and learning abilities , and enable information to diffuse through the network in real - time .we review the adaptive diffusion strategies below . 
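before turning to those strategies, a minimal numerical sketch of the data model just described may help. the network size, regressor statistics and noise variances below are illustrative assumptions, not values taken from this article; the sketch only generates measurements according to the linear model d_k(i) = u_{k,i} w^o + v_k(i).

import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 4                                  # number of nodes and length of the unknown vector
w_o = rng.standard_normal(M)                  # unknown parameter vector w^o (illustrative)
sigma_v2 = 10.0 ** rng.uniform(-3, -2, N)     # model-noise variances, one per node

def generate_snapshot():
    # one time instant: regression rows u_{k,i} and measurements d_k(i) = u_{k,i} w^o + v_k(i)
    U = rng.standard_normal((N, M))
    v = np.sqrt(sigma_v2) * rng.standard_normal(N)
    d = U @ w_o + v
    return U, d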
in ,two classes of diffusion algorithms were proposed .one class is the so - called combine - then - adapt ( cta ) strategy : \\ \end{aligned}\right.\end{aligned}\ ] ] and the second class is the so - called adapt - then - combine ( atc ) strategy : \\ { \bs{w}}_{k , i}&=\sum_{l\in{\mc{n}}_k}a_{2,lk}{\bs{\psi}}_{l , i}\\ \end{aligned}\right.\end{aligned}\ ] ] where the are small positive step - size parameters and the are nonnegative entries of the matrices , respectively .the coefficients are zero whenever node is not connected to node , i.e. , , where denotes the neighborhood of node .the two strategies and can be integrated into one broad class of diffusion adaptation : \\ \label{eqn : idealdiffusionpostdiff } { \bs{w}}_{k , i}&=\sum_{l\in{\mc{n}}_k}a_{2,lk}{\bs{\psi}}_{l , i}\end{aligned}\ ] ] several diffusion strategies can be obtained as special cases of through proper selection of the coefficients .for example , to recover the cta strategy , we set , and to recover the atc strategy , we set , where denotes the identity matrix . in the general diffusion strategy , each node evaluates its estimate at time by relying solely on the data collected from its neighbors through steps and and on its local measurements through step .the matrices , , and are required to be left or right - stochastic , i.e. , where denotes the vector whose entries are all one .this means that each node performs a convex combination of the estimates received from its neighbors at every iteration . the mean - square performance and convergence properties of the diffusion algorithm have already been studied in detail in . for the benefit of the analysis in the subsequent sections , we present below in the recursion describing the evolution of the weight error vectors across the networkto do so , we introduce the error vectors : and substitute the linear model into the adaptation step to find that where the matrix and the vector are defined as : we further collect the various quantities across all nodes in the network into the following block vectors and matrices : then , from , , and , the recursion for the network error vector is given by where each of the steps in involves the sharing of information between node and its neighbors .for example , in the first step , all neighbors of node send their estimates to node .this transmission is generally subject to additive noise and possibly quantization errors .likewise , steps and involve the sharing of other pieces of information with node .these exchange steps can all be subject to perturbations ( such as additive noise and quantization errors ). one of the objectives of this work is to analyze the _ aggregate _ effect of these perturbations on general diffusion strategies of the type and to propose choices for the combination weights in order to enhance the mean - square performance of the network in the presence of these disturbances . to node . ]so let us examine what happens when information is exchanged over links with additive noise .we model the data received by node from its neighbor as where and are noise signals , is a noise signal , and is a scalar noise signal ( see fig . [fig : noise ] ) . observe further that in , we are including several sources of information exchange noise . in comparison ,references only considered the noise source in and one set of combination coefficients ; the other coefficients were set to for and . 
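the general strategy above can be sketched in a few lines of python. the ring topology, step-size and combination matrices below are illustrative assumptions (a_1 and a_2 are taken left-stochastic and c right-stochastic, and setting a_1 = i recovers the atc form); the loop implements the two combination steps and the adaptation step for the perfect-exchange case.

import numpy as np

rng = np.random.default_rng(1)
N, M, mu = 10, 4, 0.01
w_o = rng.standard_normal(M)

# ring network with self-loops (illustrative topology)
adj = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
adj[0, -1] = adj[-1, 0] = 1.0
A1 = np.eye(N)                                # A1 = I recovers the ATC strategy
A2 = adj / adj.sum(axis=0)                    # left-stochastic: columns sum to one
C = adj / adj.sum(axis=1, keepdims=True)      # right-stochastic: rows sum to one

w = np.zeros((N, M))                          # estimates w_{k,i-1}, one row per node
for i in range(2000):
    U = rng.standard_normal((N, M))                          # regressors u_{l,i}
    d = U @ w_o + 0.03 * rng.standard_normal(N)              # measurements d_l(i)
    phi = A1.T @ w                                           # first combination step
    psi = np.empty_like(w)
    for k in range(N):
        e = d - U @ phi[k]                                   # errors d_l(i) - u_{l,i} phi_k
        psi[k] = phi[k] + mu * (C[:, k] * e) @ U             # adaptation with c_{lk}-weighted data
    w = A2.T @ psi                                           # second combination step

print(np.mean(np.sum((w - w_o) ** 2, axis=1)))               # empirical network msd after adaptation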
in other words , these references only considered and the following traditional cta strategy without exchange of the data compare with ; note that the second step in only uses : \\ \end{aligned}\right.\end{aligned}\ ] ] the analysis that follows examines the aggregate effect of all four noise sources appearing in , in addition to the three sets of combination coefficients appearing in . we introduce the following assumption on the statistical properties of the measurement data and noise signals .[ asm : all ] 1 .the regression data are temporally white and spatially independent random variables with zero mean and covariance matrix .2 . the noise signals , , , , and are temporally white and spatially independent random variables with zero mean and ( co)variances , , , , and , respectively .in addition , , , , and are all zero if or .3 . the regression data , the model noise signals , and the link noise signals , , , and are mutually - independent random variables for all and .using the perturbed data , the diffusion algorithm becomes \\ \label{eqn : noisydiffusionpostdiffold } { \bs{w}}_{k , i}&\!=\!\sum_{l\in{\mc{n}}_k}a_{2,lk}{\bs{\psi}}_{lk , i}\end{aligned}\ ] ] where we continue to use the symbols to avoid an explosion of notation . from and ,expressions can be rewritten as \\ \label{eqn : noisydiffusionpostdiff } { \bs{w}}_{k , i}&\!=\!\sum_{l\in{\mc{n}}_k}a_{2,lk}{\bs{\psi}}_{l , i}\!+\!{\bs{v}}_{k , i}^{(\psi)}\end{aligned}\ ] ] where we are introducing the symbols and to denote the aggregate zero - mean noise signals defined over the neighborhood of node : with covariance matrices it is worth noting that and depend on the combination coefficients and , respectively .this property will be taken into account when optimizing over and in a later section .we further introduce the following scalar zero - mean noise signal : for , whose variance is to unify the notation , we define .then , from , , and , it is easy to verify that the noisy data are related via for .continuing with the adaptation step and substituting , we get \end{aligned}\ ] ] then , we can derive the following error recursion for node ( compare with ): where the matrix and the vector are defined as ( compare with and ): we further introduce the block vectors and matrices : and the corresponding covariance matrices for and : then , from , , and , we arrive at the following recursion for the network weight error vector in the presence of noisy information exchange : \!-\!{\bs{v}}_i^{(\psi)}\nonumber\\ { } & = { \mc{a}}_2^{\t}\left[(i_{nm}\!-\!{\mc{m}}{\bs{\mc{r}}}_i')({\mc{a}}_1^{\t}{\wt{\bs{w}}}_{i-1}\!-\!{\bs{v}}_{i-1}^{(w ) } ) \!-\!{\mc{m}}{\bs{z}}_i\right]\!-\!{\bs{v}}_i^{(\psi)}\end{aligned}\ ] ] that is , compared to the previous error recursion , the noise terms in consist of three parts : * is contributed by the noise introduced at the information - exchange step _ before _ adaptation .* is contributed by the noise introduced at the adaptation step .* is contributed by the noise introduced at the information - exchange step _ after _ adaptation .given the weight error recursion , we are now ready to study the mean convergence condition for the diffusion strategy in the presence of disturbances during information exchange under assumption [ asm : all ] . 
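a small sketch of the perturbation model just described may also help. the per-link noise levels below are illustrative assumptions; in a full simulation each link (l, k) would carry its own set of variances, as in the assumption above.

import numpy as np

rng = np.random.default_rng(2)
M = 4
sig_w, sig_psi, sig_u, sig_d = 0.01, 0.01, 0.02, 0.02    # illustrative per-link noise levels

def noisy_reception(w_l, psi_l, u_l, d_l):
    # what node k actually receives from neighbor l: each exchanged quantity is
    # perturbed by zero-mean additive link noise (the four noise sources in the text)
    w_rx   = w_l   + sig_w   * rng.standard_normal(M)     # noisy weight estimate
    psi_rx = psi_l + sig_psi * rng.standard_normal(M)     # noisy intermediate estimate
    u_rx   = u_l   + sig_u   * rng.standard_normal(M)     # noisy regression row
    d_rx   = d_l   + sig_d   * rng.standard_normal()      # noisy scalar measurement
    return w_rx, psi_rx, u_rx, d_rx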
taking expectations of both sides of ,we get where from , , , and , it can be verified that whereas , from and assumption [ asm : all ] , we get \nonumber\\ { } & = -\left(\sum_{l\in{\mc{n}}_k}c_{lk}r_{v , lk}^{(u)}\right)w^o\end{aligned}\ ] ] let us define an matrix that collects all covariance matrices , , weighted by the corresponding combination coefficients , such that its submatrix is . note that itself is _ not _ a covariance matrix because for all .then , from and , we arrive at therefore , using and , expression becomes with a driving term due to the presence of .this driving term would disappear from if there were no noise during the exchange of the regression data . to guarantee convergence of , the coefficient matrix must be stable , i.e. , . since and are right - stochastic matrices , it can be shown that the matrix is stable whenever itself is stable ( see appendix [ app : meanconvergence ] ) .this fact leads to an upper bound on the step - sizes to guarantee the convergence of to a steady - state value , namely , we must have for , where denotes the largest eigenvalue of its matrix argument .note that the neighborhood covariance matrix in is related to the combination weights .if we further assume that is doubly - stochastic , i.e. , then , by jensen s inequality , since ( i ) coincides with the induced -norm for any positive semi - definite hermitian matrix ; ( ii ) matrix norms are convex functions of their arguments ; and ( iii ) by , are convex combination coefficients .thus , we obtain a sufficient condition for the convergence of in lieu of : } } \ ] ] for , where the upper bound for the step - size becomes independent of the combination weights .this bound can be determined solely from knowledge of the covariances of the regression data and the associated noise signals that are accessible to node .it is worth noting that for traditional diffusion algorithms where information is perfectly exchanged , condition reduces to }\end{aligned}\ ] ] for . comparing with, we see that the link noise over regression data reduces the dynamic range of the step - sizes for mean stability . now ,under , and taking the limit of as , we find that the mean error vector will converge to a steady - state value : is well - known that studying the mean - square convergence of a single adaptive filter is a challenging task , since adaptive filters are nonlinear , time - variant , and stochastic systems .when a network of adaptive nodes is considered , the complexity of the analysis is compounded because the nodes now influence each other s behavior . in order to make the performance analysis more tractable , we rely on the energy conservation approach , which was used successfully in to study the mean - square performance of diffusion strategies under perfect information exchange conditions .that argument allows us to derive expressions for the mean - square - deviation ( msd ) and the excess - mean - square - error ( emse ) of the network by analyzing how energy ( measured in terms of error variances ) flows through the nodes . 
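since the explicit stability bounds are garbled in this copy of the text, the helper below only reflects their qualitative content: the familiar bound 2 / lambda_max(r_{u,k}) of the perfect-exchange case is tightened because the regression-noise covariances enter the effective neighborhood covariance. the exact weighting by the coefficients c_{lk} used here is a plausible reconstruction, not a quotation of the original expression.

import numpy as np

def step_size_upper_bound(R_u_list, R_vu_list, c_col):
    # reconstruction of the mean-stability bound for node k (see the caveat above):
    #   R_u_list  : covariances R_{u,l} of the regression data of the neighbors l of node k
    #   R_vu_list : covariances of the regression-noise over the corresponding links (l, k)
    #   c_col     : the combination coefficients c_{lk} for those links
    # with perfect exchange (all R_vu zero, c_kk = 1) this reduces to 2 / lambda_max(R_{u,k})
    R_eff = sum(c * (Ru + Rv) for c, Ru, Rv in zip(c_col, R_u_list, R_vu_list))
    return 2.0 / np.linalg.eigvalsh(R_eff).max()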
from recursion and under assumption[ asm : all ] , we can obtain the following weighted variance relation for the global error vector : \}\\ { } & \quad+\e\|{\mc{a}}_2^{\t}(i_{nm}\!-\!{\mc{m}}{\bs{\mc{r}}}_i'){\bs{v}}_{i-1}^{(w)}\|_{\sigma}^2+\e\|{\bs{v}}_i^{(\psi)}\|_{\sigma}^2 \end{aligned } } \ ] ] where is an arbitrary positive semi - definite hermitian matrix that we are free to choose .moreover , the notation stands for the quadratic term .the weighting matrix in can be expressed as where is given by and denotes a term on the order of . evaluating the term requires knowledge of higher - order statistics of the regression data and link noises , which are not available under current assumptions .however , this term becomes negligible if we introduce a small step - size assumption .[ asm : smallstepsize ] the step - sizes are sufficiently small , i.e. , , such that terms depending on higher - order powers of the step - sizes can be ignored .hence , in the sequel we use the approximation : observe that on the right - hand side ( rhs ) of relation , only the first and third terms relate to the error vector . by assumption[ asm : all ] , the error vector is independent of and .thus , from , the third term on rhs of can be expressed as \cdot\e{\wt{\bs{w}}}_{i-1}\}\nonumber\\ { } & = -2\,{\mathfrak{re}}(z^*{\mc{m}}{\mc{a}}_2\sigma{\mc{a}}_2^{\t}{\mc{a}}_1^{\t}\cdot\e{\wt{\bs{w}}}_{i-1})+o({\mc{m}}^2)\end{aligned}\ ] ] since we already showed in the previous section that converges to a fixed bias , quantity will converge to a fixed value as well when . moreover , under assumption [ asm : all ] , the second , fourth , and fifth terms on rhs of relation are all fixed values .therefore , the convergence of relation depends on the behavior of the first term .although the weighting matrix of is different from the weighting matrix of , it turns out that the entries of these two matrices are approximately related by a linear equation shown ahead in .introduce the vector notation : then , by using the identity , it can be verified from that where the matrix is given by to guarantee mean - square convergence of the algorithm , the step - sizes should be sufficiently small and selected to ensure that the matrix is stable , i.e. , , which is equivalent to the earlier condition . although more specific conditions for mean - square stability can be determined without assumption [ asm : smallstepsize ] , it is sufficient for our purposes here to conclude that the diffusion strategy is stable in the mean and mean - square senses if the step - sizes satisfy or and are sufficiently small .the conclusion so far is that sufficiently small step - sizes ensure convergence of the diffusion strategy in the mean and mean - square senses , even in the presence of exchange noises over the communication links .let us now determine expressions for the error variances in steady - state .we start from the weighted variance relation . in view of, it shows that the error variance depends on the mean error .we already determined the value of in .we continue to use the vector notation and proceed to evaluate all the terms , except the first one , on rhs of in the following . for the _ second _ term, it can be expressed as ^*\sigma\end{aligned}\ ] ] where we used the identity ^*\sigma l\in{\mc{n}}_k ] .we adopt uniform step - sizes , , and uniformly white gaussian regression data with covariance matrices , where are shown in fig . 
[fig : regressor ] .the variances of the model noises , , are randomly generated and shown in fig .[ fig : modelnoise ] .we also use white gaussian link noise signals such that , , and .all link noise variances , , are randomly generated and illustrated in fig .[ fig : linknoise ] from top to bottom .we assign the link number by the following procedure .we denote the link from node to node as , where .then , we collect the links in an ascending order of in the list ( which is a set with _ ordered _ elements ) for each node .for example , for node in fig .[ fig : topology ] , it has links ; the ordered links are then collected in .we concatenate in an ascending order of to get the overall list .eventually , the link in the network is given by the element in the list .nodes.,height=172 ] we examine the simplified cta and atc algorithms in and , namely , no sharing of data among nodes ( i.e. , ) , under various combination rules : ( i ) the relative variance rule in , ( ii ) the metropolis rule in : where denotes the degree of node ( including the node itself ) , ( iii ) the uniform weighting rule : and ( iv ) the adaptive rule in with .we plot the network msd and emse learning curves for atc algorithms in figs .[ fig : msd_atc ] and [ fig : emse_atc ] by averaging over 50 experiments . for cta algorithms , we plot their network msd and emse learning curves in figs . [fig : msd_cta ] and [ fig : emse_cta ] also by averaging over 50 experiments .moreover , we also plot their theoretical results and in the same figures . from fig .[ fig : sim2 ] we see that the relative variance rule makes diffusion algorithms achieve the lowest msd and emse levels at steady - state , compared to the metropolis and uniform rules as well as the algorithm from ( which also requires knowledge of the noise variances ) .in addition , the adaptive rule attains msd and emse levels that are only slightly larger than those of the relative variance rule , although , as expected , it converges slower due to the additional learning step . the value for each entry of the complex parameter is assumed to be changing over time along a circular trajectory in the complex plane , as shown in fig .[ fig : track ] .the dynamic model for is expressed as , where , , and .the covariance matrices are randomly generated such that when , but their traces are normalized to be one , i.e. , , for all nodes. the variances for the model noises , , are also randomly generated .we examine two different scenarios : the low noise - level case where the average noise variance across the network is db and the noise variances are shown in fig .[ fig : hsnr ] ; and the high noise - level case where the average variance is db and the variances are shown in fig .[ fig : lsnr ] .we simulate 3000 iterations and average over 20 experiments in figs .[ fig : track_hsnr ] and [ fig : track_lsnr ] for each case .the step - size is 0.01 and uniform across the network . for simplicity, we adopt the simplified atc algorithm where , and only use the uniform weighting rule . the tracking behavior of the network , denoted as , is obtained by averaging over all the estimates , , across the network . 
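for reference, the two noise-agnostic rules quoted above can be written down directly (both are standard and left-stochastic by construction); the relative-variance and adaptive rules additionally require the link-noise variances, whose exact expressions are not reproduced here.

import numpy as np

def metropolis_weights(adj):
    # metropolis rule: a_{lk} = 1 / max(n_k, n_l) for connected l != k, and a_{kk} absorbs
    # the remainder; the degree n_k counts the node itself, as in the text
    A = adj + np.eye(len(adj))
    deg = A.sum(axis=1)
    W = np.zeros_like(A)
    for k in range(len(A)):
        for l in range(len(A)):
            if l != k and A[l, k]:
                W[l, k] = 1.0 / max(deg[k], deg[l])
        W[k, k] = 1.0 - W[:, k].sum()
    return W                                   # columns sum to one

def uniform_weights(adj):
    # uniform rule: every neighbor (including the node itself) receives weight 1 / n_k
    A = adj + np.eye(len(adj))
    return A / A.sum(axis=0)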
figs .[ fig : track_hsnr ] and [ fig : track_lsnr ] depict the complex plane ; the horizontal axis is the real axis and the vertical axis is the imaginary axis .therefore , for every time , each entry of or represents a point in the plane .when is increasing , moves along the red trajectory ( in ) , along the blue trajectory ( in ) , along the green trajectory ( in ) , and along the magenta trajectory ( in ) . from fig .[ fig : track ] , it can be seen that diffusion algorithms exhibit the tracking ability in both high and low noise - level environments .in this work we investigated the performance of diffusion algorithms under several sources of noise during information exchange and under non - stationary environments .we first showed that , on one hand , the link noise over the regression data biases the estimators and deteriorates the conditions for mean and mean - square convergence . on the other hand, diffusion strategies can still stabilize the mean and mean - square convergence of the network with noisy information exchange .we derived analytical expressions for the network msd and emse and used these expressions to motivate the choice of combination weights that help ameliorate the effect of information - exchange noise and improve network performance .we also extended the results to the non - stationary scenario where the unknown parameter is changing over time .simulation results illustrate the theoretical findings and how well they match with theory .following , we first define the block maximum norm of a vector .[ def : vecblkmaxnorm ] given a vector consisting of blocks , the block maximum norm is the real function , defined as where denotes the standard -norm on . similarly , we define the matrix norm that is induced by the block maximum norm as follows : [ def : blkmaxnorm ] given a block matrix with block size , then denotes the induced block maximum ( matrix ) norm on .[ lemma : blkunitaryinvariant ] the block maximum matrix norm is block unitary invariant , i.e. , given a block diagonal unitary matrix consisting of unitary blocks , where , for any matrix , then where denotes the block maximum matrix norm on with block size .[ lemma : rghtstocmat ] let be a right - stochastic matrix . then , for block size , from definition [ def : blkmaxnorm ] , we get {lk}x_k\|_2}{\max_{k}\|x_k\|_2}\nonumber\\ { } & \le\max_{x\in{\mathbb{c}}^{mn}\backslash\{0\ } } \frac{\max_{l}\sum_{k=1}^{n}[a]_{lk}\|x_k\|_2}{\max_{k}\|x_k\|_2}\nonumber\\ { } & \le\max_{x\in{\mathbb{c}}^{mn}\backslash\{0\ } } \frac{\max_{l}(\sum_{k=1}^{n}[a]_{lk})\cdot\max_k\|x_k\|_2}{\max_{k}\|x_k\|_2}\nonumber\\ { } & \le\max_{x\in{\mathbb{c}}^{mn}\backslash\{0\}}\frac{\max_{l}1\cdot\max_{k}\|x_k\|_2}{\max_{k}\|x_k\|_2}\nonumber\\ { } & = 1\end{aligned}\ ] ] where consists of blocks , and {lk}$ ] denotes the entry of . on the other hand , for any induced matrix norm ,say , the block maximum norm , it is always lower bounded by the spectral radius of the matrix : combining and completes the proof .[ lemma : blockdiagonal ] let be a block diagonal hermitian matrix with block size .then the block maximum norm of the matrix is equal to its spectral radius , i.e. , denote the submatrix on the diagonal of by .let be the eigen - decomposition of , where is unitary and is diagonal .define the block unitary matrix and the diagonal matrix .then , . 
by lemma [ lemma : blkunitaryinvariant ] ,the block maximum norm of with block size is where we used the fact that the induced -norm is identical to the spectral radius for hermitian matrices . on the other hand ,any matrix norm is lower bounded by the spectral radius , i.e. , combining and completes the proof .now we show that the matrix is stable if is stable . for any induced matrix norm , say , the block maximum norm with block size , we have where , from and , and lemma [ lemma : rghtstocmat ] . by and, it is straightforward to see that is block diagonal with block size .then , by lemma [ lemma : blockdiagonal ] , expression can be further expressed as which completes the proof . in appendixi of and the matrix in lemma 2 of are block diagonal , the norm used in these references should simply be replaced by the norm used here and as already done in . ]let us denote the submatrix of by . by assumptions [ asm : all ] and expression , can be evaluated as where , by expressions and , when , expression reduces to when , expression becomes where denotes the kronecker delta function .evaluating the last term on rhs of requires knowledge of the excess kurtosis of , which is generally not available . in order to proceed ,we invoke a separation principle to approximate it as substituting into leads to \end{aligned}\ ] ] from and , we get \end{aligned}\ ] ] substituting into , we obtain \end{aligned}\ ] ] from and , we arrive at expression .l. li and j. a. chambers , `` distributed adaptive estimation based on the apa algorithm over diffusion netowrks with changing topology , '' in _ proc .ieee workshop stat . signal process .( ssp ) _ , cardiff , uk , aug .2009 , pp . 757760 .n. takahashi and i. yamada , `` link probability control for probabilistic diffusion least - mean squares over resource - constrained networks , '' in _ proc .speech , signal process .( icassp ) _ , dallas , tx , mar .2010 , pp . 35183521 .z. towfic , j. chen , and a. h. sayed , `` collaborative learning of mixture models using diffusion adaptation , '' in _ proc .workshop machine learn . signal process .( mlsp ) _ , beijing , china , sept .2011 , pp .j. chen and a. h. sayed , `` diffusion adaptation strategies for distributed optimization and learning over networks , '' to appear in _ ieee trans .signal process .[ also available online at http://arxiv.org/abs/1111.0034 as _ arxiv:1111.0034v2 [ math.oc]_ , oct . 2011 . ]r. abdolee and b. champagne , `` diffusion lms algorithms for sensor networks over non - ideal inter - sensor wireless channels , '' in _ proc .sensor systems ( dcoss ) _ , barcelona , spain , june 2011 , pp .16 .l. xiao , s. boyd , and s. lall , `` a scheme for robust distributed sensor fusion based on average consensus , '' in _ proc .acm / ieee int .sensor networks ( ipsn ) _ , los angeles , ca , apr .2005 , pp . 6370 .d. mandic , p. vayanos , c. boukis , b. jelfs , s. l. goh , t. gautama , and t. rutkowski , `` collaborative adaptive learning using hybrid filters , '' in _ proc .speech , signal process .( icassp ) _ , honolulu , hi , apr .2007 , pp . 921924 .tu and a. h. sayed , `` optimal combination rules for adaptation and learning over netowrks , '' in _ proc .workshop comput .advances multi - sensor adapt . process .( camsap ) _ , san juan , puerto rico , dec .2011 , pp . 317320 .
adaptive networks rely on in - network and collaborative processing among distributed agents to deliver enhanced performance in estimation and inference tasks . information is exchanged among the nodes , usually over noisy links . the combination weights that are used by the nodes to fuse information from their neighbors play a critical role in influencing the adaptation and tracking abilities of the network . this paper first investigates the mean - square performance of general adaptive diffusion algorithms in the presence of various sources of imperfect information exchange , quantization errors , and model non - stationarities . among other results , the analysis reveals that link noise over the regression data modifies the dynamics of the network evolution in a distinct way , and leads to biased estimates in steady - state . the analysis also reveals how the network mean - square performance depends on the combination weights . we use these observations to show how the combination weights can be optimized and adapted . simulation results illustrate the theoretical findings and match well with the derived expressions . diffusion adaptation , adaptive networks , imperfect information exchange , tracking behavior , diffusion lms , combination weights , energy conservation .
relativistic heavy ion reactions exhibit dominant collective flow behaviour , especially at higher energies where the number of involved particles , including quarks and gluons , increases dramatically . at intermediate stagesapproximate local equilibrium is reached , while the initial and final stages may be far out of local equilibrium .also , different stages may have different forms or phases of matter , especially when quark gluon plasma ( qgp ) is formed .the need to describe and match different stages of a reaction was realized by the development of the final freeze - out ( fo ) description in landau s fluid dynamical ( fd ) model .then it was improved by milekhin , and a covariant simple model was given by cooper and frye .in all these models the fo happened when the fluid crossed a hypersurface in the spacetime . at early relativistic heavy ion collisions ,the initial compression and thermal excitation was described by a compression shock in nuclear matter .this was already pointed out by the first publications of w. greiner and e. teller and their colleagues , and the shock took place crossing a spacetime hypersurface ( e.g. a relatively thin layer resulting in a mach cone ) .when sudden large changes happen across a spacetime front the conservation laws and the requirement of increasing entropy should be satisfied : ~=~0\label{nconserve}~;\\ & [ t^{\mu\nu}d\sigma_{\mu}]~=~0\label{tconserve}~;\\ & [ s^{\mu}d\sigma_{\mu } ] ~\geq~0 \label{entropy}\end{aligned}\ ] ] where is the baryon current , is the entropy current , is the energy momentum tensor , which , for a perfect fluid , is given by where is the energy density , is the pressure , is the entropy density , and is the baryon density of matter .these are invariant scalars .the is the normal vector of the transition hypersurface , is the particle four velocity , normalized to .the square bracket means =a_1 - a_0 $ ] , the difference of quantity over the two sides of the hypersurface .the metric tensor is defined as .we will also use the following notations : , is the invariant scalar baryon current across the front , is the generalized specific volume , , and , . for a perfect fluid local equilibrium is assumed , thus the fluid can be characterized by an equation of state ( eos ) , .( [ nconserve],[tconserve ] ) and the eos are 6 equations , and can determine the 6 parameters of the final state , , , , and .later csernai pointed out the importance of satisfying energy , momentum and particle charge conservation laws across such hypersurfaces and generalized the earlier description of taub to spacelike and timelike hypersurfaces ( with spacelike and timelike normals respectively ) .in this situation the matter both before and after the shock was near to thermal equilibrium , and thus the conservation laws led to scalar equations connecting thermodynamical parameters of the two stages of the matter : the generalized _ rayleigh line _ and _ taub adiabat _ : (d\sigma^{\mu } d\sigma_{\mu } ) / [ x ] \ , \ [ p ] = [ ( e+p)x ] / ( x_1 + x_0 ) \ .\label{rayligh - taub}\ ] ] at much higher energies , at the first stages of the collision , the matter becomes transparent and the initial state is very far from thermal equilibrium . for this stageother models were needed to handle the initial development , e.g. refs . . 
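the jump conditions and the generalized rayleigh line and taub adiabat quoted above are partly garbled in this copy of the text; written out with the definitions of this section they read as below. the form of the rayleigh-line prefactor is a reconstruction inferred from the quoted fragments and from the standard spacelike limit, not a verbatim quotation.

\[
\left[\, N^{\mu} d\sigma_{\mu} \,\right] = 0 \ , \qquad
\left[\, T^{\mu\nu} d\sigma_{\mu} \,\right] = 0 \ , \qquad
\left[\, S^{\mu} d\sigma_{\mu} \,\right] \ \ge\ 0 \ ,
\]
\[
j^{2} \;=\; \frac{[\,p\,]\,\left( d\sigma^{\mu} d\sigma_{\mu} \right)}{[\,X\,]} \ , \qquad
[\,p\,] \;=\; \frac{\left[\, (e+p)\,X \,\right]}{X_{1}+X_{0}} \ ,
\]

where the square brackets again denote the change of a quantity across the front and x is the generalized specific volume defined earlier.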
the initial non - equilibrium state in this situation can not be characterized by thermodynamical parameters or an eos , so the previous approach , with the generalized _ rayleigh line _ and _ taub adiabat _ is not applicable . nevertheless , the intermediate ( fluid dynamical ) stage is in equilibrium and has an eos , while the initial state has a well defined energy momentum tensor . in this workwe will demonstrate that the final invariant scalar , thermodynamical parameters can be determined in this situation also from the conservation laws .then , bugaev observed that fo across hypersurfaces with spacelike normals , has problems with negative contributions in the cooper - frye evaluation of particle spectra , thus the fo must yield an anisotropic distribution , which he could approximate with a cut - jttner distribution .this is not surprising as in the rest frame of the front ( rff ) all post fo particles must move `` outwards '' , i.e. is required .this condition is not satisfied by any non - interacting thermal equilibrium distribution , which extend to infinity in all directions even if they are boosted in the rff .subsequently , another analytic form was proposed by csernai and tamosiunas , the cancelling - jttner distribution , which replaced the sharp cutoff by a continuous cutoff , based on kinetic model results .parallel to this development , the fo process was analysed in kinetic , transport approaches , where the fo happened in an outer layer of the spacetime , or in principle it could be extended to the whole fluid ( although , at early moments of a collision / explosion , from the center of the reaction few particles can escape ) .these transport studies also indicated that the post fo distributions may become anisotropic even for fo hypersurfaces with timelike normal [ in short : _ timelike surface _ ] , if the normal , , and the velocity four - vector , , are ( very ) different. these studies led to another fo description , where the initial stages of the collision with strongly interacting matter were described by fluid dynamics , while the final , outer spacetime domain ( or later times ) was described by weakly interacting particle ( and string ) transport models , where the final fo was inherently included , as each particle was tracked , until its last interaction .it is important to mention , that in these approaches , the transition from the fd stage to the molecular dynamics ( md ) or cascade stage happens when the matter crosses a spacetime hypersurface , thus the conservations laws have to be satisfied and the post fo particle phase space distributions have to be used when the post fo distributions become anisotropic . in this work for the first time we present a simple covariant solution for the transition problem and conservation laws for the situations when the matter after the front is in thermal equilibrium ( i.e. 
it has isotropic phase space distribution ) and has an eos , but the matter before the front must not be in an equilibrium state .then we discuss the situation where microscopic models are appended to the fluid dynamical model , which are in , or close to thermal equilibrium , but the eos , is not necessarily known .subsequently , we present the way to generalize the problem to anisotropic matter in final state , which is necessary for fo across spacelike surfaces and also for timelike surfaces if the flow velocity is large in the rest frame of the front ( rff ) .this problem was solved in kinetic approach for the bugaev cut - jttner approach and the csernai - tamosiunas cancelling - jttner approach , by calculating the energy momentum tensors explicitly from the anisotropic phase space distributions , but no general solution is given for post fo matter with anisotropic pressure tensor .the transition hypersurface between two stages of a dynamical development are most frequently postulated , governed by the requirement of simplicity .thus , such a hypersurface is frequently chosen as a fixed coordinate time in a descartian frame , or at a fixed proper time from a spacetime point , although in a general 3 + 1 dimensional system the choice of such a point is not uniquely defined .it is important that the _ transition hypersurface should be continuous _ , ( without holes where conserved particles or energy or momentum could escape through , without being accounted for ) . to secure that one quantity ( e.g. baryon charge ) does not escape through the holes of a hypersurfaceis not sufficient , as other quantities may ( e.g. momentum in case if is different on the two sides of a hole ) .again , to construct such a continuous hypersurface in a general 3 + 1 dimensional system is a rather complex task , although , in 1 + 1 or 2 + 1 dimensions it seems to be easy .both the initial state models and the intermediate stage , fluid dynamical models may be such that the calculation could be continued beyond the point where a transition takes place . then spacetime location of the transition to the next stage can or should be decided , based on a physical condition or requirement , which may be external to the development itself . as a consequence , in some cases the determination of transition surface may be an iterative process . numerically , the extraction of a freeze out ( fo ) hypersurface is by no means trivial .one of us , brs , has recently provided a proper numerical treatment regarding the extraction of fo hypersurfaces in two ( 2d ) , three ( 3d ) and four ( 4d ) dimensions .for instance , in 2d the history , i.e. , the temporal evolution , of a temperature field of a one - dimensional ( 1d ) relativistic fluid can be represented by a gray - level image ( _ cf ._ , fig . 1 ) . in the figure, we use the time and the radius for the temporal and the spatial dimensions , respectively .let bright pixels ( i.e. , picture elements ) refer to high temperatures and dark ones to low temperatures of the fluid .in this example , a 2d freeze - out hypersurface is an iso - therme . in fig . 1.a , we also depict the corresponding co - variant normal vectors . 
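the extraction of such an iso-therme and of its normal vectors can be sketched numerically. the analytic temperature history below and the use of scikit-image's find_contours routine are illustrative choices, and the proper covariant sign conventions described in the text still have to be applied to the resulting normals.

import numpy as np
from skimage import measure

# illustrative temperature history T(t, r): hot at early times and small radii, cooling outwards
t = np.linspace(0.0, 10.0, 200)
r = np.linspace(0.0, 10.0, 200)
T = 200.0 * np.exp(-np.add.outer(t, r ** 2 / 10.0) / 5.0)    # rows: time, columns: radius

T_fo = 100.0                                                  # freeze-out temperature (illustrative)
contours = measure.find_contours(T, T_fo)                     # each contour: (n, 2) array of (t, r) indices

for c in contours:
    seg = np.diff(c, axis=0)                                  # iso-contour segment vectors
    centers = 0.5 * (c[:-1] + c[1:])                          # segment midpoints
    normals = np.stack([-seg[:, 1], seg[:, 0]], axis=1)       # each segment rotated by 90 degrees
    # coordinates are in array-index units; rescale by the grid spacings for physical (t, r),
    # and fix the orientation / covariant bookkeeping following the conventions in the text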
in 2d, the length of each normal vector is equal to the length of each supporting iso - contour vector .each normal vector has its origin at the contra - variant center , , of a given contra - variant iso - contour vector and points to the exterior of the enclosed spacetime region .the latter is also indicated in fig .1.b , where we show that a contra - variant normal vector can be obtained by reflection of the co - variant normal vector at the time axis ( dashed line ) . finally , in fig .1.c we show the contra - variant fo contour vectors with their corresponding contra - variant normal vectors .not all of these contra - variant normal vectors point to the exterior of the enclosed spacetime region .note that the sign conventions of the normals of the transition hypersurface are important , and must be discussed , especially if both timelike and spacelike surfaces are studied . in fact , only the timelike contra - variant normal vectors point outwards , whereas the spacelike contra - variant normal vectors point inwards .if we know the fo hypersurface and the local momentum distribution after the transition the total , measurable momentum distribution can be evaluated by the cooper - frye formula .let us define the contra - variant and co - variant surface normal four - vectors as where in general , as the surface element can be either spacelike ( - ) or timelike ( + ) .we can also introduce a unit normal to the surface as : so that furthermore where for timelike surfaces and for spacelike surfaces . for the frequently used timelike , one - dimensional case . in the general casethe conserved energy - momentum current crossing the surface element is must be continuous across the freeze - out surface , as must the baryon current , where is the invariant scalar baryon charge current .we assume that the initial state , `` '' , and its energy momentum tensor and baryon current before the front is known .we aim for the characteristics of the final state . in totalthere are six unknowns in the equilibrated final state , these are , , and ( here we drop the index `` '' for the final state for shorter notation ) , however the pressure , a function of and , is given by the eos , .knowing and , the eos , and the particular form of the corresponding equilibrated distribution function , the parameters , and , can also be obtained .thus , we have to solve 5 equations : the l.h.s .represents quantities of the initial state of matter and the corresponding conserved quantities are known . equations ( [ eq0],[eq1 ] ) can be solved for in the calculational frame : using now eq .( [ eq1],[eq2],[eq3 ] ) one obtains , and in a similar fashion and this results for , in where , is an invariant scalar , and transforms as the 0-th component of the 4-vector .notice that eq .( [ eq9 ] ) was not used up to this point , thus we can use there results both for the baryon - free and baryon - rich case .we can have an elegant direct solution for the proper energy density , , and pressure , , as both of these quantities are invariant scalars , and we can express these by the covariant , 4-vector equation ( [ em - curr ] ) . 
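the energy-momentum current referred to as eq. ([em-curr]) is elided in this copy; for a perfect-fluid final state it can be reconstructed from the definitions of this section as

\[
A^{\mu} \;\equiv\; T^{\mu\nu} d\sigma_{\nu}
        \;=\; (e+p)\,\left( u^{\nu} d\sigma_{\nu} \right) u^{\mu} \;-\; p\, d\sigma^{\mu} \ ,
\]

and the two invariant scalars built from it, the norm a^{\mu}a_{\mu} and the projection a^{\mu}d\sigma_{\mu}, are the quantities used in the next steps.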
from this 4-vector equationwe can get two invariant scalar equations by ( i ) taking its norm , , and ( ii ) taking its projection to the normal direction , : now expressing from eq .( [ ads ] ) and inserting it to eq .( [ aa ] ) , we obtain our final equation which can be solved straightforwardly if the eos , , is known .the other three elements of the equation , , , and , are known from the normal to the surface and from energy - momentum current from the pre - transition side .then , eqs .( [ eqg2a]-[eqg2b ] ) can be used to determine the final flow velocity . at the end , after all conservation law equations are solved , we have to check the non - decreasing entropy condition ( [ entropy ] ) to see whether the solution is physically possible . if the overall entropy is decreasing after transition that would mean that the hypersurface is chosen incorrectly .one will need to choose more realistic condition for the transition and repeat the calculations .this result can be used both if the initial state is in equilibrium and if it is not . in case of an ideal gas of massless particles after the front , with an eos of , eq .( [ aa2 ] ) leads to a quadratic equation , where , is the energy momentum transfer 4-vector across a unit hypersurface element .if the flow velocity is normal to the fo hypersurface , , then for an initial perfect fluid in the local rest ( lr ) frame the above covariant equation takes a simple form , this has two real roots , ( energy density is conserved ) and which does not correspond to a physical solution , as the energy density should not be negative .if the eos depends on the conserved baryon charge density also , then we must exploit in addition eq .( [ eqn ] ) : and inserting from here to eq .( [ ads ] ) yields where is the generalized specific volume , well known from relativistic shock and detonation theory .this equation provides another equation for as \ , \label{ep2nd}\ ] ] which , together with eq .( [ aa2 ] ) and the eos , , provide three equations to be solved for and .this evaluation of the post fo configuration is in agreement with the theory of relativistic shocks and detonations allowing for both spacelike and timelike fo hypersurfaces .see also .this method of evaluation observables is frequently used at the end of fluid dynamical model calculations ( see e.g. ) .recently a frequently practiced method to describe the final stages of a reaction is to switch the fd model over to a molecular dynamics ( md ) description at a transition hypersurface .this is frequently a fixed time , , or fixed proper time , hypersurface .the generation of the initial state of such an md model is a task , which depends on the constituents of the matter described by the md model .nevertheless , same principles must be satisfied , like the conservation laws , eqs.([nconserve]-[tconserve ] ) .let us assume , although not required by physical laws , that we have thermal equilibrium on both sides of the transition and we know explicitly the corresponding final momentum distribution of particles .then , the fundamental equation to construct the post transition microscopic state , in addition to the conservation laws is the cooper - frye formula , assuming that the local phase space distribution , , is known for all initial components of the md model . if are local equilibrium distributions then ( in principle ) we know the intensive and extensive thermodynamical parameters and the eos of the matter when the md model simulation starts. 
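returning to the massless ideal gas case mentioned above: since the explicit forms of eqs. ([aa]), ([ads]) and ([aa2]) are elided in this copy, the sketch below relies on a reconstruction from the perfect-fluid form of the energy momentum tensor, namely a^{\mu}a_{\mu} = (e^{2}-p^{2})(u\cdot d\sigma)^{2} + p^{2}(d\sigma\cdot d\sigma) and a^{\mu}d\sigma_{\mu} = (e+p)(u\cdot d\sigma)^{2} - p(d\sigma\cdot d\sigma), which combine, for p = e/3, into a quadratic equation for the post-front energy density. the numerical example is illustrative.

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])                 # metric g_{mu nu} = diag(+,-,-,-)

def post_front_energy_density(T_pre, dSigma_cov):
    # energy density behind the front for a massless ideal gas (p = e/3);
    # T_pre is the pre-front T^{mu nu} (it need not describe equilibrium matter),
    # dSigma_cov holds the covariant components of the surface normal dSigma_mu
    A = T_pre @ dSigma_cov                            # A^mu = T^{mu nu} dSigma_nu
    A2 = A @ g @ A                                    # invariant A^mu A_mu
    AdS = A @ dSigma_cov                              # invariant A^mu dSigma_mu
    dS2 = dSigma_cov @ np.linalg.inv(g) @ dSigma_cov  # invariant dSigma^mu dSigma_mu
    # (dS2/3) e^2 + (2/3) AdS e - A2 = 0  (reconstructed relation, see the caveat above)
    roots = np.roots([dS2 / 3.0, 2.0 * AdS / 3.0, -A2])
    return roots.real.max()                           # keep the physical, positive root

# check: perfect fluid at rest with e = 1, p = 1/3 and a purely timelike normal
T0 = np.diag([1.0, 1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0])
print(post_front_energy_density(T0, np.array([1.0, 0.0, 0.0, 0.0])))   # recovers e = 1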
these must not be the same as the ones before the transition hypersurface . in the usual transition from fd to md models ,where the initial state of md is in equilibrium , the eos - s are known on both sides of the transition surface , and thus , both the equations of rayleigh - line and taub - adiabat , eqs .( [ rayligh - taub ] ) , as well as the invariant scalar equations derived here , eqs.([aa],[ads],[aa2],[ep2nd ] ) can be used to determine all parameters of the matter starting the md simulation .these then determine the phase space distributions , of all components of the md simulation .subsequently eq.([cf - f ] ) can be used to generate randomly the initial constituents of the md simulation . as eq.([cf - f ] ) is a covariant equation applicable in any frame of reference , the most straightforward is to perform the generation of particles in the calculational frame of the md model .this transition is by now performed in many hybrid models combining fluid dynamics with microscopic transport models .these models at present are the most effective to describe experimental data and make the need for a modified boltzmann transport equation less problematic . in some cases the first step of the transition , the determination of the parameters of the final state from the exact conservation laws , is dropped with the argument that both before and after the transition the matter has the same constituents and the same eos , thus the all extensive and intensive thermodynamical parameters as well as the flow velocity must remain the same .then , using the intensive parameters the final particle distributions in the cooper - frye formula , eq.([cf - f ] ) , can be directly evaluated in a straightforward way .this procedure is correct , but only if all features of the two states of the matter and their eos are identical . in some cases the pre transition eos assumes effective hadron masses depending on the matter density , while the final eos is that of a hadron ideal gas mixture , but with fixed vacuum masses .this leads to a difference in the eos , thus the above procedure is approximate . in such cases ,the method can be used , but the accurate conservation laws can be enforced by a final adjustment step described in the next subsection . the situation is similar if the constituents and the eos are almost identical before and after the transition , but before the transition a weak or weakening mean field potential or compression energy is taken into account .in addition to the above mentioned approximate methods , even for really identical eos - s across the transition or with generating the final eos parameter based on conservation laws for the final eos , inaccuracies may arise due to other reasons : during the random generation of the initial constituent particles of the md simulation , the exact conservation laws may be violated , due to finite number effects . however , the energy and particle number conservations are usually enforced during the random generation of particles , even if the above procedure of solving the conservation laws beforehand is not fully followed .this is usually the consequence of the fact that the eos of the md model is not necessarily known if the model has complex constituents and laws of motion . 
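a minimal sketch of this generation step, together with the momentum re-balancing correction discussed next, is given below. it draws momenta from an equilibrium (boltzmann-jüttner) distribution in the rest frame of a fluid cell; the temperature and mass are illustrative, the cooper-frye weighting with p^{\mu}d\sigma_{\mu} and the boost with the cell flow velocity are only indicated in comments, and the re-balancing boost simply removes the residual three-momentum of the finite sample.

import numpy as np

rng = np.random.default_rng(4)

def juttner_weight(p, T, m):
    # unnormalized relativistic boltzmann (jüttner) weight p^2 exp(-E/T) for momentum magnitude p
    return p ** 2 * np.exp(-(np.sqrt(p ** 2 + m ** 2) - m) / T)

def sample_momenta(T, m, n):
    # rejection-sample n momentum magnitudes in the rest frame of the cell
    p_max = m + 12.0 * T                                      # proposal cutoff; the tail beyond is negligible
    f_max = juttner_weight(np.linspace(1e-6, p_max, 2000), T, m).max()
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, p_max)
        if rng.uniform(0.0, f_max) < juttner_weight(p, T, m):
            out.append(p)
    return np.array(out)

def remove_residual_momentum(p4):
    # boost the whole sample so its total three-momentum vanishes (final correction step);
    # p4 is an (n, 4) array of four-momenta (E, px, py, pz), particle masses are left unchanged
    P = p4.sum(axis=0)
    beta = P[1:] / P[0]
    b2 = beta @ beta
    if b2 < 1e-14:
        return p4
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = p4[:, 1:] @ beta
    out = p4.copy()
    out[:, 0] = gamma * (p4[:, 0] - bp)
    out[:, 1:] = p4[:, 1:] + ((gamma - 1.0) * bp / b2 - gamma * p4[:, 0])[:, None] * beta
    return out

T_cell, m_pi, n_part = 0.150, 0.140, 1000                     # GeV; illustrative temperature and pion mass
p = sample_momenta(T_cell, m_pi, n_part)
phi = rng.uniform(0.0, 2.0 * np.pi, n_part)
cos_th = rng.uniform(-1.0, 1.0, n_part)
sin_th = np.sqrt(1.0 - cos_th ** 2)
mom = p[:, None] * np.stack([sin_th * np.cos(phi), sin_th * np.sin(phi), cos_th], axis=1)
p4 = np.column_stack([np.sqrt(p ** 2 + m_pi ** 2), mom])
# each particle would still be boosted with the flow velocity of its cell and kept with a
# cooper-frye weight proportional to p^mu dSigma_mu before being handed to the md stage
p4 = remove_residual_momentum(p4)
print(p4[:, 1:].sum(axis=0))                                  # total three-momentum is now ~ 0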
in any case to remedy this random error and make the conservation laws exactly satisfied a final correction step is advisable , and it is not always performed .if the energy and particle number conservations are enforced then , the last variable to balance is the momentum conservation .this regulates the flow velocity of the matter after the transition initiating the md simulation .the energy momentum tensor and baryon current for the generated random set of particle species , " , for each fluid cell ( or group of cells if the multiplicity in a single cell is too low ) can be calculated from the kinetic definition : which , yield the resulting momentum and flow velocity of the matter .this can be used to adjust the flow velocity to achieve exact conservation of momentum , and modify the velocity of generated particles by the required lorentz boost .the other conserved quantities may then be affected also , but an iterative procedure to eliminate the error completely is not crucial as the error can be given quantitatively .if the randomly generated state is not following a thermal equilibrium phase space distribution , , and thus does not have an eos , the above described scalar equations can not be used to generate the initial configuration of the md model .nevertheless , the second step to check the conservation laws with the kinetic definition , and then correct the parameters of the generated particles can be done . for a required level of accuracy in this case an iterative proceduremay be necessary .another , easier way to remedy this problem is to choose the transition hypersurface earlier so that the subsequent matter is still in thermal equilibrium .this can always be done if the requirement of entropy increase is satisfied .we have mentioned that the assumption for having thermal equilibrium in the final state is neither excluded nor required from transport theoretical considerations .however , thermal equilibrium distribution is not possible if we have to describe fo across a spacelike hypersurface ( see the discussion in section [ intro ] . ) in the md model description the final post fo momentum distributions develope a local anisotropy if the fo has locally a preferred direction . unless the unit normal of the fo hypersurface is equal to the local flow velocity of the pre fo matter , there is always a selected spatial direction which is the dominant direction of fothis situation is discussed in several theoretical works , and some general features can be extracted from these studies . in explicit transport models this situationis handled : starting from an equilibrium jttner distribution and considering a momentum dependent escape probability in the collision term , - which reflected the direction of the fo front and the distance from the front , - an anisotropic distribution was obtained ( i.e. 
a distribution , which was anisotropic even in its own lr frame ) .this anisotropic distribution could be approximated with analytic distribution functions : the starting point is the un - cut , isotropic , jttner distribution in the rest frame of the gas ( rfg ) , which is centered around the 4-velocity vector , .this distribution is then cut or cut and smoothed .the resulting distribution has a different new flow velocity , , which is non - zero in rfg , and is pointing in space in the direction of the normal of the fo hypersurface , , labeled by .this defines the local rest ( lr ) frame of the post fo matter .the spatial direction of is not affected by the lorentz transformation from rff to rfg and then to lr , as is the direction of the lorentz transformation from rfg to lr ., must not be parallel to or but these latter velocities can be decomposed to and components with respect to .due to the construction of the cut- or cancelling - jttner distributions or . ] in the general case the boost in the direction leads to a change of the distribution function in the direction , but does not affect the distribution in the direction , or the procedure of cutting or cancelling the distribution in the direction .( the illustration in fig .2a shows the spatial momentum distribution where the boost in the orthogonal direction is already performed . ) in the final lr frame , the matter is characterized by a rather complex energy momentum tensor , inheriting some parameters from the original uncut distribution in rfg , like the temperature and chemical potential , but as the resulting distribution is not a thermal equilibrium distribution , these parameters are not playing any thermodynamical role .one has to determine all parameters numerically from conservation laws ( [ nconserve],[tconserve ] ) , as done in refs . .interestingly , a simplified numerical kinetic fo model led to a fo distribution satisfying the condition for spacelike fo with a smooth distribution function , which is anisotropic ( also in its own lr frame ) and has a symmetry axis pointing in the dominant fo direction .this distribution was then approximated with an analytic , `` cancelling - jttner '' distribution , which can also be used to solve the fo problem .after fo , the symmetry properties of the energy momentum tensor are the same for the cut - jttner and cancelling - jttner cases .the fo leads to an anisotropic momentum distribution and therefore to an anisotropic pressure tensor .the energy momentum tensor is not diagonal in the rfg frame , there is a non vanishing transport term , , in the 2-dimensional plane spanned by the 4-vectors , and .one can , however , diagonalize the energy momentum tensor by making a lorentz boost into the lr frame using landau s definition for the 4-velocity , . in this framethen the energy momentum tensor becomes diagonal , but the pressure terms are not identical , due to the anisotropy of the distribution : here the energy density , , of course must not be the same as in the case of an isotropic , thermal equilibrium post fo momentum distribution .this can be seen from the kinetic definition of the energy momentum tensor as shown in refs . 
.we need the complete post fo momentum distribution and the corresponding energy momentum tensor to determine final observables .this depends on the transport processes at fo , and can not be given in general ; however , due to the symmetries of the collision integral , the symmetries of the energy momentum tensor are the same irrespectively of the ansatz used ( e.g. cut - jttner , cancelling - jttner or some other distribution ) . in kinetic transport approachesthe microscopic escape probability is peaking in the direction of , which yields a distribution peaking in this direction , i.e. yielding the same symmetry properties as the previously mentioned analytic ansatzes .the energy momentum tensor in general takes the form where is the orthogonal projector to , and is the unit 4-vector projection of in the direction orthogonal to , i.e. , where ensures normalization to -1 . in the landau lr frame this returns expression ( [ tdiag ] )the 4-velocity , , and the other parameters of the post fo state of matter , should be determined from the conservation laws ( [ nconserve],[tconserve ] ) .the schematic diagram of the asymmetric distributions and the different reference frames can be seen in fig .[ fab ] .the fo problem was solved for these configurations and ansatzes , by satisfying the conservation laws explicitly for the full energy momentum tensor .we do not have a general eos(s ) that would characterize the connection among , and , furthermore the relation connecting these quantities depends on the 4-vectors and .in addition this connection depends on the details or assumptions of the transport model .the simple models provide examples for such a dependence .if is known , then for baryon free matter we can determine four unknowns : and , an additional parameter of the post fo distribution , from eq .( [ tconserve ] ) .( due to normalization only 3 components of are unknowns . ) for baryon - rich matter we can determine one more unknown parameter , since we have one additional equation , the conservation of baryon charge from eq .( [ nconserve ] ) .the first step of solution can be done similarly to the isotropic case . then in eq .( [ em - curr ] ) the enthalpy will change as , and , plus an additive term will appear , .furthermore , eqs .( [ eq0]-[eq3 ] ) remain of the same form , with , and , plus the additive term will appear in the r.h.s . of eqs.([eq0]-[eq3 ] ) .this additive term will also appear in the expression of after eq .( [ eqg2a ] ) and in the denominator of eq .( [ eqg2b ] ) also .the additional term , , in eq . ( [ em - curr ] ) is orthogonal to ( by definition of ) , so when we calculate the scalar product ( [ aa ] ) their cross term vanishes , so now one can express from eq .( [ ads - p ] ) and inserting it to eq .( [ aa - p ] ) , we obtain that where this equation is not a scalar equation as it dependes on , where the projector is dependent on .these equations are similar to the ones obtained for the isotropic case , however , to solve this last equation we need a more complex relation among , , .as these arise from the collision integral in the bte approach the needed relation may depend on and . on the other hand ,the escape probability may be simple , or may be approximated in a way , which yields an ansatz for this relation with adjustable parameters , and then the problem is solvable .this was the case in refs . 
.the recent covariant formulation of the kinetic freeze - out description indicates that the relation among the different parameters of the anisotropic energy momentum tensor , should be possible to express in terms of invariant scalars , which may facilitate the solution of the anisotropic fo problem .when the adjustable parameters of the post fo matter are determined in this way from the conservation laws , we still need the underlying anisotropic momentum distribution of the emitted particles in order to evaluate the final particle spectra using the cooper - frye formula with this anisotropic distribution function . once again, when all conservation law equations are solved we have to check the non - decreasing entropy condition to see whether the selected fo hypersurface is realistic . in case of an anisotropic final state , due to the increased number of parameters and their more involved relations , the covariant treatment of the problem may not provide a simplification , compared to the direct solution of conservation laws for each component of the energy momentum tensor ( e.g. ) .recent viscous fluid dynamical calculations evaluate the anisotropy of the momentum distribution is in the pre fo viscous flow ( see e.g. . )this anisotropy is governed by the spacetime direction of the viscous transport .the pre and post fo matter may still be different , e.g. the pre fo state may be viscous qgp with current quarks and perturbative vacuum , while post fo we may have a hadron gas or constituent quark gas .the final state will also be anisotropic , not only because of the initial anisotropy but also due to freeze - out .the two physical processes leading to anisotropy are independent , so their dominant directions are in general different . in this casethe general symmetries are uncorrelated and can not be exploited to simplify the description of the transition . due to the change of the matter properties , the conservation laws , eqs .( [ nconserve]-[entropy ] ) , are needed to determine the parameters of the post fo matter before the cooper - frye formula with non - equilibrium post fo distribution is applied to evaluate observables .in this work a new simple covariant treatment is presented for solving the conservation laws across a transition hypersurface .this leads to a significant simplification of the calculation if both the initial and final states are in thermal equilibrium .the same method can also be used for the more complicated anisotropic final state , however , this method is only advantageous if the more involved relations among the parameters of the post fo distribution and the distribution itself is given in covariant form , preferably through invariant scalars .e. molnar , l. p. csernai , v. k. magas , a. nyiri , k. tamosiunas , phys . rev . *c74 * , 024907 ( 2006 ) ; j. phys .g 34 ( 2007 ) 1901 ; e. molnar , l. p. csernai and v. k. magas , acta phys . hung . a * 27 * , 359 ( 2006 ) ; v. k. magas , l. p. csernai and e. molnar , acta phys . hunga * 27 * , 351 ( 2006 ) .v. k. magas , l. p. csernai and e. molnar , eur .phys . j. a * 31 * , 854 ( 2007 ) ; int .j. mod .e * 16 * , 1890 ( 2007 ) ; v. k. magas and l. p. csernai , phys .b * 663 * , 191 ( 2008);l.p .csernai , v.k .magas , e. molnar et al . ,j. c 25 , 65 ( 2005);v.k .magas , l.p .csernai , e. molnar et al .phys . a 749 , ( 2005 ) .note , that we actually use for hypersurface construction in 1 + 1 , 2 + 1 , and 3 + 1 dimensional numerical simulations the corresponding computer codes , i.e. 
, diconex , vesta , and steve , respectively . explains in great detail the extraction of an oriented fo contour which is represented by a set of contra - variant ( so - called `` diconex iso - contour '' ) vectors . in 2d ,the simplices which represent a hypersurface best are line segments , whereas in 3d and 4d they are triangles and tetrahedrons , respectively .in particular , the contra - variant 2d fo contour vectors are oriented counter - clockwise around the enclosed spacetime regions .the co - variant normals of the contra - variant simplices are obtained from calculating the mathematical duals of these simplices with respect to a geometric product ( _ cf ._ , e.g. , ref . ) within the n - dimensional multi - linear space under consideration . note , that the co - variant normal vectors do not depend on any given metric tensor , whereas the contra - variant normal vectors do .bass , a. dumitru , m. bleicher et al .c60 , 021902 ( 1999 ) ; d. teaney , j. lauret and e.v .shuryak , nucl .phys . a 698 , 479 ( 2002 ) ; s.a .bass , t. renk , j. ruppert et al . , j. phys .g 34 , s979 ( 2007 ) ; c. nonaka , m. asakawa and s.a .bass , j. phys .g 35 , 104099 ( 2008 ) ; h. petersen , j. steinheimer , g. burau , m. bleicher , h. stcker , phys . rev .c 78 ( 2008 ) 044901 ; t. hirano and y. nara , phys . rev .c 79 , 064904 ( 2009 ) .
Heavy-ion reactions and other collective dynamical processes are frequently described by different theoretical approaches for the different stages of the process: an initial equilibration stage, an intermediate locally equilibrated fluid-dynamical stage, and a final freeze-out stage. For the last stage the best-known treatment is the Cooper-Frye description, used to generate the phase-space distribution of emitted, non-interacting particles from a fluid-dynamical expansion/explosion, assuming a final ideal-gas distribution or (less frequently) an out-of-equilibrium distribution. In this work we do not aim to replace the Cooper-Frye description, but rather to clarify how to use it, how to choose the parameters of the distribution, and eventually how to choose the form of the phase-space distribution entering the Cooper-Frye formula. Moreover, while the Cooper-Frye formula is used in connection with the freeze-out problem, the discussion of the transition between different stages of the collision applies to other transitions as well. More recently, hadronization and molecular-dynamics models have been matched to the end of a fluid-dynamical stage to describe hadronization and freeze-out. The stages of the model description can be matched to each other on spacetime hypersurfaces (just as through the frequently used freeze-out hypersurface). This work presents a generalized description of how to match the stages of the description of a reaction to each other, extending the methodology used at freeze-out, in a simple covariant form that is easily applicable, in its simplest version, to most applications.
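For orientation, the Cooper-Frye prescription referred to above is conventionally written as

\begin{equation}
  E\,\frac{dN}{d^{3}p} \;=\; \int_{\Sigma} f(x,p)\; p^{\mu}\,d\sigma_{\mu}\,,
\end{equation}

where Sigma is the transition (freeze-out) hypersurface, d\sigma_\mu its normal 4-vector element, and f the post-transition phase-space distribution: an ideal-gas (Jüttner) form in equilibrium, or a non-equilibrium ansatz such as a cut-Jüttner distribution. This is the standard textbook form of the formula.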
complex networks are natural structures for representing many real - world systems .their flexible and adaptive features make them ideal for interrelating various types of information and to update them as time goes by .recently , there has been tremendous interest , from various fields of science , in the study of the statistical properties of such networks . by now , many interesting features of complex networks have been established .so far , the main attention in the study of complex networks has been on the properties of individual nodes and how they connect ( link ) to their nearest neighbors .however , exploring the _ local _ properties of the network outside the nearest neighbor level , has not been well studied .a noteworthy exception is the recent study by girvan and newman .such studies will naturally have to address the cluster ( modular ) structure of the network .to know if a network is modular or not , is important if one tries to assess its robustness and stability , since only a few critical links are responsible for the inter - modular communication in clustered networks .hence to make the network more robust , such critical links have to be identified and strengthen . in this workwe will address the large scale topological structures of networks .this problem will be approached by considering an auxiliary diffusion process on the underlying complex network . as it will turn out ,it is the slowest decaying diffusive eigenmodes that will be of most interest to us , since such modes will contain , as we will see , information about the weakly interacting modules of the network .hence , by studying how an ensemble of random walkers slowly reaches a state of equilibrium , one can learn something about the topology of the network .to study spectral properties of networks is not new ; its variants has previously been applied to random graphs , to social networks ( the correspondence analysis ) , random and small - world networks ( the laplace equation analysis ) , artificial scale - free networks , and community structures .a diffusion approach has also made it to practical applications .for instance , the analysis of a diffusion process lies at the heart of the popular search engine google .the original physical motivation behind the method to be outlined below , was that the relaxation of some arbitrary initial state ( of walkers on the network ) toward the steady state distribution , was expected to be _ fast _ in regions that are highly connected , while _ slow _ in regions that had low connectivity . by definition ,a module is highly connected internally , but has only a small number of links to nodes outside the module .hence , it was reasoned , that by identifying the slowly decaying eigenmodes of the diffusive network process , one should be able to obtain information about the large scale topological structures , like modules , of the underlying complex network .we will now formalize this idea and outline the method in some detail .let us start by assuming that we are dealing with a ( fully connected ) complex network consisting of nodes and characterized by some degree distribution where denotes the connectivity of node .imagine now placing a large number ( ) of random walkers onto this network .the fraction of walkers on node ( relative the total number ) at time we will denote by . 
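As a concrete illustration of this setup, a minimal Monte Carlo sketch (our own, not part of the original analysis) that places M walkers on a small undirected network and lets them hop to random neighbours is given below; the measured occupation fractions play the role of the walker density introduced above.

import numpy as np

rng = np.random.default_rng(0)

def walker_density(A, M=20_000, steps=100):
    """Fraction of M random walkers on each node after `steps` hops.
    A is the (symmetric) adjacency matrix of an undirected network."""
    N = A.shape[0]
    neighbours = [np.flatnonzero(A[i]) for i in range(N)]
    pos = rng.integers(N, size=M)                       # random initial placement
    for _ in range(steps):
        # every walker jumps to a randomly chosen neighbour of its current node
        pos = np.array([rng.choice(neighbours[p]) for p in pos])
    return np.bincount(pos, minlength=N) / M            # occupation fraction per node

# toy example: a triangle 0-1-2 with a dangling node 3 attached to node 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(walker_density(A))    # tends to k_i / sum_j k_j = (0.25, 0.25, 0.375, 0.125)

For a non-bipartite network such as this toy example the occupation fractions settle to values proportional to the node degrees, which anticipates the stationary state discussed below.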
in each time step , the walkers are allowed to move randomly between nodes that are directly linked to each other .since there is no way that a walker can vanish from the network , the total number of walkers must be conserved globally , _i.e. _ one must have at all times . furthermore ,locally a continuity equation must be satisfied ; if we consider , say node , that directly links to other nodes , one must require that ( the balance equation ) : in writing this equation , we have introduced the so - called adjacency matrix defined to be if node and are directly linked to each other , and otherwise , and , we recall , is the degree of node , _i.e. _ its number of nearest neighbors .the first term on the right hand side of eq .( [ eq : cont ] ) describes the flow of walkers into node , while the last term is associated with the out - flow of walkers from the same node . as will be useful later , eq .( [ eq : cont ] ) can be casted into the following equivalent matrix form : where , and is the _ diffusion matrix _ to be defined below .( [ eq : diffusion ] ) should be compared to the continuous diffusion equation for the particle density : , where is the diffusion constant .a complex network obviously has an inherent discreetness , and hence no continuous limit can be taken .however , if the continuous diffusion equation is understood in its discreet form , one is lead to regard eq .( [ eq : diffusion ] ) as the _ master equation _ for the random - walk process taking place on the underlying network . by comparing eqs .( [ eq : cont ] ) and ( [ eq : diffusion ] ) , as well as taking advantage of ( because ) , one is lead to . hence , an equivalent formulation of eq .( [ eq : diffusion ] ) , is where is the _ transfer matrix _ , related to the diffusion matrix by . in component form onethus has . since the transfer matrix , `` transfers '' the walker distribution one time step ahead, it can therefore be thought of as a time - propagator for the process .formally the time development , from some arbitrarily chosen initial state , can be obtained by iteration on eq .( [ eq : transfer ] ) with the result . here means the transition matrix to the power . in general, the transfer matrix ( or equivalently , the diffusion matrix ) will not be symmetric .however , can be related to a symmetric matrix by the following similarity transformation with .thus , and will have the same eigenvalue spectrum , and all eigenvalues ( of ) will be real .it is this eigenvalue spectrum that will control the time - development of the diffusive process through with .it should be noted that since the total number of walkers is conserved at all times , one must have , and that at least one eigenvalue should be one ., must satisfy since . 
]we will here adopt the convention and sort the eigenvalues so that corresponds to the largest eigenvalue , to the next to largest one , and so on .physically the principal eigenvalue , corresponds to a _ stationary state _ where , the diffusive current flowing from node to node is exactly balanced by that flowing from to .the stationary state is unique for single component networks , and is fully determined by the connectivities of the network according to .this can be easily checked by substituting this relation into eq .( [ eq : transfer ] ) .all modes corresponding to eigenvalues , are decaying modes since the time dependence of enters through and one recalls that .notice that represent non - oscillatory modes , while correspond to states where oscillations will take place with time , but this latter possibility will not be considered here .the large scale topology of a given complex network reflects itself in the statistical properties of its diffusion eigenvectors .one such property is the participation ratio ( pr ) , that will be defined below .it quantifies the effective number of nodes participating in a given eigenvector with a significant weight .since the stationary state of a network depends on the connectivities of its nodes , , it is convenient to introduce a normalized eigenvector observe that is nothing but the walker density per link of node .hence , in effect , represents the _ outgoing currents _ flowing from node , along each of its links , toward its neighbors . in the steady state , where , it follows that these currents are all the same for any link in the network .hence , highly connected nodes are not treated differently from less connected nodes .more formally , are the eigenvectors of the transposed transfer matrix corresponding to the same eigenvalue ( of ) . hereone alternatively could have defined ( instead of ) as the transfer matrix from the very beginning .doing so , would have resulted in a master equation of the form ( [ eq : transfer ] ) , but for the currents with corresponding eigenmodes .the physical interpretation of such an alternative equation is as follows : instead of walkers , think of a signal propagating on the underlying network .the signal at node at time is then the average of the signal at the nearest neighbors at time .if the link currents are normalized to unity , _ i.e. _ if in the -norm , then the participation ratio ( pr ) is defined as ^{-1}. \end{aligned}\ ] ] for the stationary state , where all the currents are equal to , one has where we recall that is the total number of nodes in the network . hence , when , the ratio can be regarded as an effective number of nodes participating in the eigenvector with a significant weight should be noted that the participation ratio is a simple and crude measure of size .strictly speaking , for this ratio to be able to say something with confidence about the size of a given module , the main contribution to should come from nodes within that module .for instance , if gets major contributions from current elements of different signs , _i.e. _ from different topological structures , it is not trivial to relate to the size of a single module . 
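A minimal sketch of how the slow eigenmodes and their participation ratios can be computed in practice, assuming the conventions above (transfer matrix T_ij = A_ij / k_j, its symmetrized companion S = K^{-1/2} A K^{-1/2} with the same spectrum, and link currents proportional to the mode amplitude divided by the node degree, normalized to unit 2-norm); the sign-based two-way split at the end is our own crude illustration of how a current vector can be read as a module assignment.

import numpy as np

def diffusion_modes(A):
    """Eigenvalues (sorted, largest first) and unit-norm link-current vectors
    of the diffusion process on a network with adjacency matrix A."""
    k = A.sum(axis=0).astype(float)
    S = A / np.sqrt(np.outer(k, k))        # S = K^{-1/2} A K^{-1/2}, symmetric
    lam, w = np.linalg.eigh(S)             # real spectrum
    order = np.argsort(lam)[::-1]
    lam, w = lam[order], w[:, order]
    c = w / np.sqrt(k)[:, None]            # currents: eigenvector of T divided by k
    c /= np.linalg.norm(c, axis=0)         # unit 2-norm for every mode
    return lam, c

def participation_ratio(c_mode):
    """PR = [ sum_i c_i**4 ]**(-1) for a unit-norm current vector;
    it equals N for the stationary mode, where all currents coincide."""
    return 1.0 / np.sum(c_mode ** 4)

# usage sketch:
# lam, c = diffusion_modes(A)
# pr1 = participation_ratio(c[:, 1])       # slowest decaying mode
# split = c[:, 1] > 0                      # crude two-module assignment by sign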
for all types of complex networks where a modular structure is of interest, it is important to try to quantify the number of modules .such information is of interest since this number may say something about the organizing principles being active in the generating process of the network .however , the measurement of this number is often hampered by statistical uncertainties due to temporary ( non - robust ) topological structures being counted as modules .hence the challenge is to obtain the significant ( or robust ) number of modules that are due to the `` rule of creation '' and not just happened to be there by chance . for this purpose ,it is useful to introduce a null model a randomized version of the network at hand but so that the degree distribution of the original network is not changes .a rough estimate of the number of different modules contained in a network could be given by the number of slowly decaying non - oscillatory modes that have a participation ratio , , significantly exceeding the ( ensemble ) averaged participation ration of the corresponding randomized network , .hence , it is suggested that a module is significant if .we will close this section by noting that the ( outgoing ) currents , , can also be used for vitalization purposes .the basis of this approach is the observation that the outgoing currents on links within a module are almost constant , and the size of this constant depends on to which extent the module participates in the given eigenmode .the constants thus varies from module to module and from eigenmode to eigenmode .hence , by sorting by size , nodes belonging to the same module will be located close to each .during a two year period in the mid 1970 s , w. zachary studies the relations among members of a university karate club during a period of trouble .a serious controversy existed between the trainer of the club ( node in fig .[ fig : zachary]a ) and its administrator ( node ) .ultimately this conflict resulted in the breakup of the club into two new clubs of roughly the same size . in his original study , w. zachary mapped out the strength of friendship between the various members . in our study , however , we will be considering the unweighted version of his network ( see fig .[ fig : zachary]a ) .recently , this same network was used by girvan and newman in their study of community structures .the second network that will be considered is a much larger network taken from the organization of the internet .the internet consists of a large number of individual computers that are identified by their so - called ip - address , and they are usually grouped together in local area networks ( lan ) .such computer networks are connected to one another via routers a complex network linking device .in addition to being able to direct pieces of information to its intended destination , a router also has the ability to determine the best path to a given destination ( routing ) .this is done by keeping available an updated routing table telling the router how to reach certain destinations specified by the network administrator . 
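The degree-preserving null model mentioned above can be generated, for example, by repeated double-edge swaps; the sketch below is our own minimal illustration of that idea (not the implementation used for the results quoted here). A slow mode of the real network would then be called a significant module if its participation ratio exceeds the ensemble-averaged participation ratio of such randomized networks at comparable eigenvalue.

import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Randomize an undirected edge list by double-edge swaps,
    (a-b, c-d) -> (a-d, c-b), which keeps every node degree fixed.
    Swaps that would create self-loops or duplicate edges are skipped."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = set(frozenset(e) for e in edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue
        e1, e2 = frozenset((a, d)), frozenset((c, b))
        if e1 in present or e2 in present:
            continue
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {e1, e2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

# rewired = degree_preserving_rewire(original_edges, n_swaps=10 * len(original_edges))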
in much the same way as computers are being organized into networks , the same is done for routers .if the router network becomes large and coherent enough , it may make out what is called an _ autonomous system _ ( as ) .an as is a connected segment of a network that consists of a collection of subnetworks ( with hosts attached ) interconnected by a set of routes .usually it is required that the subnetworks and the routers should be controlled by a coherent organization like , say , a university , or a medium - to - big enterprise .importantly for the efficiency of the internet , each autonomous system , identified by a unique as number , is expected to present to the other systems a consistent list of destinations reachable through that as , as well as a coherent interior routing plan .the particular network we will be considering is an autonomous system network collected by the oregon views project on january 3 , 2000 . in this networkthe as will act as nodes , while their routing plans , _ i.e. _ with which other as a given one directly shares information , will correspond to the links of the network .in sec . [ sec : method ] , it was argued that from the currents corresponding to the _ slowly _ decaying eigenmodes , one should be able map out the large scale structure of the underlying network .our hypothesis will now be put to the test , and we start with the small friendship network due to zachary .this network is depicted in fig .[ fig : zachary]a , and it is small enough to enable us to see how it all comes about . by visual inspection of this figure , it is apparent that there are , at least , two large scale clusters one corresponding to the trainer of the karate club ( node ) and his supporters , and one to the administrator ( node ) . in fig .[ fig : zachary]a , the karate club members ( according to zachary ) supporting the trainer in the ongoing conflict are marked with squares ( nodes ) , while the followers of the administrator are marked with open circles ( nodes ) .notice that this subdivision was done by zachary , and no sophisticated clustering techniques were applied to obtain these results .we will now see if the same clustering structure can be obtained by the method outlined in sec .[ sec : method ] of this paper .let us start by considering the two most slowly decaying modes of the network , namely and ( with our ordering ) , corresponding to the eigenvalues and , respectively .if one sorts the elements of the current vector by size , and group them according to their signs , one recovers two clusters ( see the abscissa of fig . [fig : zachary]b ) .this grouping , or topological structure , fits nicely with the original assignments made by zachary ( fig .[ fig : zachary]a ) and indicated by the open squares and circles in fig .[ fig : zachary]b .hence it is the trainer - administrator separation of the karate club members that is mapped out by the -currents .observe that node , which has an equal number of links to supporters of the trainer and administrator , has a current that is almost zero .our identification of this node is the only one that differs from that made originally by zachary .interestingly , our classification , including node , fits that previously obtained by girvan and newman in their hierarchical tree clustering approach .if the same procedure was repeated , but for the currents , one would map out other topological features of the network . in fig . 
[ fig : zachary]b , a -plot is presented .such a plot groups the nodes along two axis ; the -axis corresponding to the trainer - administrator axis and the -axis . the most striking feature of fig . [ fig : zachary]b is the group of nodes corresponding to and , _i.e. _ to the following group of nodes . in fig .[ fig : zachary]a , these nodes are located in the upper left corner of the graph .they are in addition to being connected among themselves , only connected to the rest of the network via the trainer .thus , they represent a sub - cluster , and this is indeed what is apparent from fig .[ fig : zachary]b .the participation ratios for the two slowest decaying diffusive modes were found to be and , respectively . as can be seen from fig .[ fig : zachary]b , these participation ratios receive substantial contributions from both positive and negative current elements , _i.e. _ from more then one topological structure .hence , we suspect that these numbers being close to the size of the administrator and trainer `` clan '' ( of size and nodes , respectively ) is somewhat accidental .we will now focus our attention on the autonomous system ( as ) network introduced in the previous section .this network , consisting of nodes and undirected links , is so big that plotting it for the purpose of study its modular structure is not a practical option .motivated by what we found for the small zachary friendship network , we will now go ahead and use somewhat similar techniques for its study . in fig .[ fig : internet ] the participation ratio , , of eigenvectors ( top ) and the eigenvalue density ( bottom ) are plotted as functions of the corresponding eigenvalues .the data for the internet , that is an example of a scale - free network , are displayed together with the data for its randomized counterpart ( the null model ) . from fig .[ fig : internet ] it is apparent that while the density of states is rather similar for these two networks , the participation ratios of the slowly decaying modes , especially for close to , are markedly higher in the internet network than in the accompanying randomized network .these differences signal large scale topological structures that are real and not accidental . for the most slowly decaying diffusive eigenmode ,the participation ratio is ( cf .[ fig : internet]a ) . from fig .[ fig : internet - currents ] , that depicts the currents of the two slowest decaying diffusive eigenmodes , one observes that the main contribution to comes from current elements of the same sign .thus , a module has been detected , and roughly measures its size . from , fig .[ fig : internet - currents ] one also makes the interesting observation that the main contributing nodes to are autonomous systems located in russia ( denoted by open squares in fig .[ fig : internet - currents ] ) .hence , the diffusive mode maps out russia , as a russian module ! in total there are russian nodes in our data set .moreover , from fig . 
[fig : internet - currents ] , modules corresponding to countries like the usa , france and korea are also easily identified .the edges of the as internet network , defined as the nodes corresponding to the most distant values of the current elements , are in this case represented by a russian and a us military site located in the south pacific .these nodes can thus be said to represent the extreme edges of the internet .one has also studies the number of significant modules ( where ) , and found that this number for the as network is roughly .the number of different nodes participating in one or more of these modules was found to be about .hence the modularity of the network is ( at least ) . one common feature of the current - current plots , figs .[ fig : zachary]b and [ fig : internet - currents ] , is their line or star - like structures .such structures are in fact rather generic , and we will here try to explain why . we have argued earlier that current elements should almost be the same within a module .hence , for two nodes and belonging to one and the same module , the fraction is expected to be a constant unique for that module , but independent of the diffusive mode . for two different ( significant ) diffusive modes , say and , the fraction ( for node ) will therefore as a consequence also be constant .thus , under the assumptions made above , one predicts a straight line in a current - current plot ( that pases through the origin ) : , for nodes belonging to the same module . from fig .[ fig : internet - currents ] , this simple argument appears to be valid .that fig .[ fig : zachary]b seems to not pass through the origin , but still being straight lines , we believe is due to the diffusive modes being excited over a non - trivial background in this case . before presenting the conclusions of this paper , we will add a few closing remarks on the numerical implementation of the method .for this work to have any relevance for large real - world networks , a fast , memory saving , and optimized algorithm for the calculation of the largest eigenvalues and the corresponding eigenvectors of a sparse matrix is required .fortunately , such an algorithm has already been implemented and made available _e.g. _ through the trlan software package .this software is optimized for handling large problems , and it can run on large scale parallel supercomputers .we have generalized the normal diffusion process to diffusion on ( discreet ) complex networks . by considering such a process, it has been demonstrated that topological properties like modular structures and edges of the underlying network can be probed .this is achieved by focusing on the slowest decaying eigenmodes of the network .the use of the procedure was exemplified by considering a small friendship network , with known modular structure , as well as a routing network of the internet , where the structure was not so well known . 
For the friendship network the known structure was well reproduced, and the Internet was indeed found to be modular. The detected modules of the Internet were consistent with the geographical location of the nodes, and the individual modules corresponded roughly to the national structure. Interestingly, a political subdivision of the Internet was also one of the predictions of the algorithm presented in this paper; the two most poorly connected nodes of the Internet (its extreme edges) were found to be represented by a Russian and a US military site located in the South Pacific.

Work at Brookhaven National Laboratory was carried out under contract no. DE-AC02-98CH10886, Division of Material Science, U.S. Department of Energy.

Figure captions: Fig. 1 (a) The friendship network of nodes and links (figure after ref. ); open squares and circles denote the supporters, in the ongoing conflict, of the trainer (node ) and the administrator (node ), respectively. (b) The current-current plot maps out the large-scale topology of the network; the dashed lines indicate the lines of zero current. Fig. 2 The participation ratio (top, a) and the eigenvalue density (bottom, b) as functions of the eigenvalue of the transfer matrix, measured in the Internet (filled circles) and in its randomized counterpart (open squares), a random scale-free network; the participation ratio was averaged over eigenvalue bins of size 0.05, excluding the eigenmodes  and  (cf. ref. ). Fig. 3 For each AS, the plotted coordinates are the current components of the two slowest decaying non-oscillatory diffusion modes; the symbols indicate the geographical location of the AS (Russia: squares, France: circles, USA: crosses, Korea: triangles). Note the straight lines corresponding to good country modules.
A diffusion process on complex networks is introduced in order to uncover their large-scale topological structures. This is achieved by focusing on the slowest decaying diffusive modes of the network. The proposed procedure is applied to real-world networks, such as a friendship network of known modular structure and an Internet routing network. For the friendship network, its known structure is well reproduced. In the case of the Internet, where the structure is far less well known, one indeed finds a modular structure, and modules can roughly be associated with individual countries. Quantitatively, the modular structure of the Internet manifests itself in an approximately  times larger participation ratio of its slowest decaying modes compared to the null model, a random scale-free network. The extreme edges of the Internet are found to correspond to Russian and US military sites.
Keywords: complex random networks, network modules, statistical physics
PACS: 89.75.-k, 89.20.Hh, 89.75.Hc, 05.40.Fb
in materials science and engineering , a class of simple models , known as fiber bundle models ( fbm ) , has proven to be very effective in practical applications such as fiber reinforced composites . in this context ,such models have a history that goes back to the twenties , and they constitute today an elaborate toolbox for studying such materials , rendering computer studies orders of magnitudes more efficient than brute force methods . since the late eighties , these models have received increasing attention in the physics community due to their deceivingly simple appearance coupled with an extraordinary richness of behaviors .as these models are just at the edge of what is possible analytically and typically not being very challenging from a numerical point of view so that extremely good statistics on large systems are available , they are perfect as model systems for studying failure phenomena as a part of theoretical physics .fracture and material stability has for practical reasons interested humanity ever since we started using tools : our pottery should be able to withstand handling , our huts should be able to withstand normal weather . as science took on the formwe know today during the renaissance , leonardo da vinci studied five hundred years ago experimentally the strength of wires fiber bundles as a function of their length .systematic strength studies , but on beams , were also pursued systematic by galileo galilei one hundred years later , as was done by edme mariotte ( of gas law fame ) who pressurized vessels until they burst in connection with the construction of a fountain at versailles .for some reason , mainstream physics moved away from fracture and breakdown problems in the nineteenth century , and it is only during the last twenty years that fracture problems have been studied within physics proper .the reason for this is most probably the advent of the computer as a research tool , rendering problems that were beyond the reach of systematic theoretical study now accessible .if we were to single out the most important contribution from the physics community with respect to fracture phenomena , it must be the focus on _ fluctuations _ rather than averages .what good is the knowledge of the average behavior a system when faced with a single sample and this being liable to breakdown given the right fluctuation ?this review , being written by physicists , reflects this point of view , and hence , fluctuations play an important role throughout it .even though we may trace the study of fiber bundles to leonardo da vinci , their modern story starts with the already mentioned work by . in 1945 , daniels published a seminal review cum research article on fiber bundles which still today must be regarded as essential reading in the field . 
in this paper ,the fiber bundle model is treated as a problem of statistics and the analysis is performed within this framework , rather than treating it within materials science .the fiber bundle is viewed as a collection of elastic objects connected in parallel and clamped to a medium that transmits forces between the fibers .the elongation of a fiber is linearly related to the force it carries up to a maximum value .when this value is reached , the fiber fails by no longer being able to carry any force .the threshold value is assigned from some initially chosen probability distribution , and do not change thereafter .when the fiber fails , the force it carried is redistributed .if the clamps deform under loading , fibers closer to the just - failed fiber will absorb more of the force compared to those further away .if the clamps , on the other hand , are rigid , the force is equally distributed to all the surviving fibers .daniels discussed this latter case . a typical question posed and answered in this paperwould be the average strength of a bundle of fibers , but also the variance of the average strength of the entire bundle .the present review takes the same point of view , discussing the fiber bundle model as a _ statistical model ._ only in section v we discuss the fiber bundle model in the context of materials science with all the realism of real materials considered. however , we have not attempted to include any discussions of the many experimental studies that have been performed on systems where fiber bundles constitute the appropriate tool .this is beyond the scope of this statistical - physics based review . after introducing ( section ii ) the fiber bundle model to our readers , in section iii, we present the _ equal load sharing model , _ which was sketched just a few lines back .this seemingly simple model is in fact extremely rich .for example , the load at which catastrophic failure occurs is a _second order critical point _ with essentially all the features usually seen in systems displaying such behavior .however in this case , the system is analytically tractable .in fact , we believe that the equal load sharing fiber bundle may be an excellent system for teaching second order phase transitions at the college level . under the heading of fluctuations , we discuss the burst distribution , i.e. , the statistics of simultaneously failing fibers during loading : when a fiber fails and the force it was carrying is redistributed , one or more other fibers may be driven above their failing thresholds . in this equal load sharing model , the absolute rigidity of the bar ( transmitting forces among the fibers )suppresses the stress fluctuations among the fibers . as such, there is no apparent growth of the ( fluctuation correlation ) length scale .hence , although there are precise recursion relations and their linearized solutions are available near the fixed point ( see sec .iii ) , no straight forward application of the renormalization group techniques has been made to extract the exponents through length scaling . in section iv _ local load sharing _ is discussed .this bit of added realism comes at the added cost that analytical treatment becomes much more difficult .there are , still , a number of analytical results in the literature .one may see intuitively how local load sharing complicates the problem , since the relative positions of the fibers now become important . 
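As a concrete (and deliberately minimal) illustration of why fiber positions matter once the load sharing becomes local, the sketch below implements one common rule on a one-dimensional chain: the load of a failed fiber is split equally between its nearest surviving neighbours, with all of it going to one side at the chain ends. This is a toy implementation of our own, intended only to make the nearest-neighbour redistribution rule explicit.

import numpy as np

def lls_surviving_fraction(thresholds, sigma0):
    """Local-load-sharing chain: each of the N fibers initially carries sigma0.
    When a fiber fails, its accumulated load is transferred to the nearest
    surviving neighbours, half to each side (or all to one side at the ends).
    Returns the fraction of fibers that survive."""
    N = len(thresholds)
    load = np.full(N, float(sigma0))
    alive = np.ones(N, dtype=bool)
    while True:
        failing = np.where(alive & (load > thresholds))[0]
        if failing.size == 0:
            return alive.mean()
        for i in failing:
            alive[i] = False
            left = next((j for j in range(i - 1, -1, -1) if alive[j]), None)
            right = next((j for j in range(i + 1, N) if alive[j]), None)
            if left is None and right is None:
                return 0.0                      # no intact fiber left
            share = load[i] / (2.0 if (left is not None and right is not None) else 1.0)
            if left is not None:
                load[left] += share
            if right is not None:
                load[right] += share
            load[i] = 0.0

# thresholds = np.random.rand(100_000); print(lls_surviving_fraction(thresholds, 0.2))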
under global load sharing ,every surviving fibers gets the same excess force and , hence , where they are do not matter .there are essentially three local load sharing models in the literature .the first one dictates that the _ nearest surviving neighbors _ of the failing fiber absorb its load .then there are softer models " where the redistribution follows a power law in the distance to the failing fiber .lastly , there is the model where the clamps holding the fibers are elastic themselves , and this leads to non - equal redistribution of the forces .section v contains a review of the use of fiber bundle models in applications such as materials science .we discuss fatigue , thermal failure , viscoelastic effects and precursors of global failure .we then go on to review the large field of modeling fiber reinforced composites . herefiber bundle models constitute the starting point of the analysis , which , by its very nature , is rather complex seen from the viewpoint of statistical physics .lastly , we review some applications of fiber bundle models in connection with systems that initially would seem quite far from the concept of a fiber bundle , such as traffic jams .we end this review by a summary with few concluding remarks in section vi .imagine a heavy load hanging from a rigid anchor point ( say , at the roof ) by a rope or a bundle of fibers .if the load exceeds a threshold value , the bundle fails .how does the failure proceeds in the bundle ? unless all the fibers in the bundle have got identical breaking thresholds ( and that never happens in a real rope ) , the failure dynamics proceeds in a typical collective load transfer way .one can assume that in this kind of situation the load is equally shared by all the intact fibers in the bundle .however , the breaking threshold for each of the fibers being different , some fibers fail before others and , consequently , the load per surviving fiber increases as it gets redistributed and shared equally by the rest .this increased load per fiber may induce further breaking of some fibers and the avalanche continues , or it stops if all the surviving fibers can withstand the redistributed load per fiber .this collective or cooperative failure dynamics and the consequent avalanches or bursts are typical for the failure in any many - body system .it captures the essential features of failure during fracture propagation ( recorded by acoustic emissions ) , earthquake avalanches ( main and aftershocks ) , traffic jams ( due to dynamic clustering ) , etc .the model was first introduced in by in the context of textile engineering .since then it was modified a little and investigated , mainly numerically , with various realistic fiber threshold distributions by the engineering community . 
starting from late eighties , physicists took interest in the avalanche distribution in the model and in its dynamics .a recursive dynamical equation was set up for the equal - load - sharing version recently and the dynamic critical behavior is now solved exactly .in addition to the extensive numerical results on the effect of short - range fluctuations ( local load sharing cases ) , some progress with analytical studies have also been made .there are a large number of experimental studies of various materials and phenomena that have successfully been analyzed within the framework of the fiber bundle model .for example , have used the fiber bundle model to propose explanations for changes in fibrious collagen and its relation to neuropathy in connection with diabetes . propose a method to monitor the structural integrity of fiber - reinforced ceramic - matrix composites using electrical resistivity measurements .the basic idea here is that when the fibers in the composite themselves fail rather than just the matrix in which they are embedded , the structure is about to fail .the individual fiber failures is recorded through changes in the electrical conductivity of the material .acoustic emission , the crackling sounds emitted by materials as they are loaded , provide yet another example where fiber bundle models play and important role , see e.g. , .= 1.7 in the simplest and the oldest version of the model is the equal load sharing ( els ) model , in which the load previously carried by a failed fiber is shared equally by all the remaining intact fibers in the system . as the applied load is shared globally , this model is also known as global load sharing ( gls ) model or democratic fiber bundle model . due to the consequent mean - field nature, some exact results could be extracted for this model and this was demonstrated by in a classic work some sixty years ago .the typical relaxation dynamics of this model has been solved recently which has clearly established a robust critical behavior .it may be mentioned at the outset that the els or gls models do not allow for spatial fluctuations ( due to the absolute rigidity of the platform in fig. 1 ) and hence such models belong to the the mean field category of critical dynamics , see e.g. , . fluctuations in breaking time or in avalanche statistics ( due to randomness in fiber strengths ) are of course possible in such models and are discussed in details in this section . a bundle can be loaded in two different ways : strain controlled and force controlled . in the strain controlled method , at each step the whole bundle is stretched till the weakest fiber fails . clearly ,when number of fibers is very large , strain is increased by infinitesimal amount at each step until complete breakdown and therefore the process is considered as a _ quasi - static _ way of loading . 
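Before the force-controlled protocol is described in detail below, a minimal simulation sketch (our own illustration) of quasi-static loading in the equal-load-sharing bundle may be useful: the external force is raised just enough to break the weakest intact fiber, the released load is shared equally by all survivors at fixed force, and the resulting cascade is recorded as one burst. The burst sizes collected this way can later be compared with the asymptotic avalanche statistics discussed further on in this section.

import numpy as np

def els_bursts(thresholds):
    """Quasi-static equal-load-sharing loading of an N-fiber bundle.
    Returns the list of burst sizes (number of fibers failing together)."""
    x = np.sort(np.asarray(thresholds))      # breaking thresholds, ascending
    N = len(x)
    bursts = []
    i = 0                                    # index of the weakest intact fiber
    while i < N:
        force = x[i] * (N - i)               # external force that just breaks fiber i
        j = i
        # at fixed force, fiber j also fails if its threshold is below force/(N - j)
        while j < N and x[j] * (N - j) <= force:
            j += 1
        bursts.append(j - i)
        i = j
    return bursts

rng = np.random.default_rng(1)
sizes = np.array(els_bursts(rng.random(200_000)))   # uniform thresholds on [0, 1]
# The final, macroscopic burst (total collapse beyond the critical load, here
# F_c = N/4 for uniform thresholds) should be discarded when histogramming.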
on the other hand , in the force controlled method , the external force ( load ) on the bundleis increased by same amount at each step until the breakdown .the basic difference between these two methods is that the first method ensures the failure of single fiber ( weakest one among the intact fibers ) at each loading step , while in the second method sometimes none of the fibers fail and sometimes more than one fail in one loading step .let denote the strain of the fibers in the bundle .assuming the fibers to be linearly elastic up to their respective failure point ( with unit elastic constant ) , we can represent the stresses on each of the surviving fibers by the same quantity .the strength ( or threshold ) of a fiber is usually determined by the stress value it can bear , and beyond which it fails .we therefore denote the strength ( threshold ) distribution of the fibers in the bundle by and the corresponding cumulative distribution by .two popular examples of threshold distributions are the uniform distribution and the weibull distribution here is a reference threshold , and the dimensionless number is the weibull index ( fig .[ fig : distributions ] ) . in the strain controlled loading , at a strain ,the total force on the bundle is times the number of intact fibers .the expected or average force at this stage is therefore the maximum of corresponds to the value for which vanishes : here the failure process is basically driven by _fluctuations _ and can be analyzed using extreme order statistics . in the forcecontrolled method , if force is applied on a bundle having fibers , when the system reaches an equilibrium , the strain or effective stress is ( see fig . [fig : fbm - model ] ) }. \label{strain}\ ] ] therefore , at the equilibrium state , eq .( [ load ] ) and eq .( [ strain ] ) are identical .it is possible to construct recursive dynamics of the failure process for a given load and the fixed - point solutions explore the _ average behavior _ of the system at the equilibrium state . fig .[ fig : fbm - model ] shows a static fiber bundle model in the els mode where fibers are connected in parallel to each other ( clamped at both ends ) and a force is applied at one end . at the first step all fibers that can not withstand the applied stress break .then the stress is redistributed on the surviving fibers , which compels further fibers to break .this starts an iterative process that continues until an equilibrium is reached , or all fibers fail .the average behavior is manifested when the initial load is macroscopic ( very large ) .the breaking dynamics can be represented by recursion relations in discrete steps .let be the number of fibers that survive after step , where indicates the number of stress redistribution steps. then one can write .\ ] ] now we introduce , the applied stress and , the surviving fraction of total fibers .then the effective stress after step becomes and after steps the surviving fraction of total fibers is .therefore we can construct the following recursion relations : and at equilibrium and .these equations ( eq .[ rec - x ] and eq .[ rec - u ] ) can be solved at and around the fixed points for the particular strength distribution .let us choose the uniform density of fiber strength distribution ( eq . 
[ uniform ] ) up to the cutoff .then the cumulative distribution becomes .therefore from eq .( [ rec - x ] ) and eq .( [ rec - u ] ) we can construct a pair of recursion relations and this nonlinear recursion equations are somewhat characteristic of the dynamics of fiber bundle models and such dynamics can be obtained in many different ways .for example , the failed fraction at step is given by the fraction of the load shared by the intact fibers at step and for the uniform distribution of thresholds ( fig .[ fig : distributions]a ) , one readily gets eq .( [ rec - u - uniform ] ) . at the fixed pointthe above relations take the quadratic forms and with the solutions and here is the critical value of applied stress beyond which the bundle fails completely . clearly , for the effective stress ( eq .[ sol - x - uniform ] ) solution with sign is the stable fixed point and with sign is the unstable fixed point whereas for fraction of unbroken fibers( eq . [ sol - u - uniform ] ) , it is just the opposite .now the difference behaves like an order parameter signaling partial failure of the bundle when it is non - zero ( positive ) , although unlike conventional phase transitions it does not have a real - valued existence for . fig .[ fig : uso ] shows the variation of , and with the externally applied stress value .one can also obtain the breakdown susceptibility , defined as the change of due to an infinitesimal increment of the applied stress : such a divergence in had already been reported in several studies . to study the dynamics away from criticality ( from below ) , the recursion relation ( eq .[ rec - u - uniform ] ) can be replaced by a differential equation close to the fixed point , + , ( where ) and this gives where ], the normalized density function and the cumulative distribution are given by ( illustrated in fig .[ pratip_fig4 ] ) : and now we introduce the transformed quantities : for an initial stress ( or , ) along with the cumulative distribution given by eq .( [ eq : prob - lin ] ) , the recursion relations ( eq . [ rec - x ] and eq .[ rec - u ] ) appear as : and the fixed point equations , ( eq . [ fix - x - uniform ] and eq .[ fix - u - uniform ] ) , now assume cubic form : where , and consequently each of the recursions ( eq . [ eq : stressrecur - lin ] and eq .[ eq : fracrecur - lin ] ) have three fixed points only one in each case is found to be stable .for the redistributed stress the fixed points are : where and similarly , for the surviving fraction of fibers the fixed points are : where and - 27 \gamma_0 ^ 2 / 2 \over\left [ \left ( \gamma_l^2 - 1 \right )^2 + 6 \gamma_l \gamma_0 \right ] ^{3/2}}. 
\label{eq : theta - defn}\ ] ] of these fixed points and are stable whereas , and , are unstable ( fig .[ pratip_fig5 ] ) .the discriminants of the cubic equations ( eq .[ eq : stressfix - lin ] and eq .[ eq : fracfix - lin ] ) become zero at a critical value ( or , ) of the initial applied stress : \label{eq : stresscrit - lin}\end{aligned}\ ] ] and then each of the quantities and have one stable and one unstable fixed point .the critical point has the trivial lower bound : .the expression of in eq .( [ eq : stresscrit - lin ] ) shows that it approaches the lower bound as which happens for finite values of and when .it follows that the upper bound for the critical point is also trivial : .also , at the critical point we get from eq .( [ eq : phi - defn ] ) and eq .( [ eq : theta - defn ] ) : or , the stable fixed points and are positive real - valued when ; thus the fiber bundle always reaches a state of mechanical equilibrium after partial failure under an initial applied stress . for ( or , ) , and no longer real - valued and the entire fiber bundle eventually breaks down .the transition from the phase of partial failure to the phase of total failure takes place when just exceeds and the order parameter for this phase transition is defined as in eq .( [ order - uniform ] ) : close to the critical point but below it , we can write , from eq .( [ eq : theta - defn ] ) and eq .( [ eq : phi - theta - crit2 ] ) , that : ^{3/2 } } \label{eq : theta - nearcrit}\end{aligned}\ ] ] and the expressions for the fixed points in eq .( [ eq : fracfix - lin1 ] ) and eq .( [ eq : fracfix - lin2 ] ) reduce to the forms : and where is the stable fixed point value of the surviving fraction of fibers under the critical initial stress . therefore , following the definition of the order parameter in eq .( [ eq : order - lin - defn ] ) we get from the above equation : on replacing the transformed variable by the original , eq .( [ eq : order - lin - crit ] ) shows that the order parameter goes to zero continuously following the same power - law as in eq .( [ order - uniform ] ) for the previous case when approaches its critical value from below .similarly the susceptibility diverges by the same power - law as in eq .( [ sus - uniform ] ) on approaching the critical point from below : the critical dynamics of the fiber bundle is given by the asymptotic closed form solution of the recursion ( eq . [ eq : fracrecur - lin ] ) for : { 1 \over t } , \hspace{1.0 cm } t \to \infty , \label{eq : frac - lin - crit - dyn}\ ] ] where and are given in eq .( [ eq : stresscrit - lin ] ) and eq .( [ eq : fracfix - lin12-crit ] ) respectively .this shows that the asymptotic relaxation of the surviving fraction of fibers to its stable fixed point under the critical initial stress has the same ( inverse of step number ) form as found in the case of uniform density of fiber strengths ( eq . 
[ eq : fbm - omori ] ) .we now consider a fiber bundle with a linearly decreasing density of fiber strengths in the interval ] has a maximum which corresponds to the critical value of the initial applied stress .all threshold distributions having this property are therefore expected to lead to the same universality class as the three studied here .if the threshold distribution does not have this property we may not observe a phase transition at all .for example , consider a fiber bundle model with , .here = 1 ] and the characteristic time around the critical threshold .to lowest contributing order in we find +(x_{c}-x)^{2}\label{y}\ ] ] and inserting for from the equation above , and using ( [ twsub ] ) , we find with ^{-1/2}\ln(n).\label{generalk}\ ] ] to show how the magnitude of the amplitude depends on the form of the threshold distribution , we consider a weibull distribution with varying coefficient , and constant average strength . with the average strength equals unity , andthe width takes the value here is the gamma function . using the power series expansion we see how the width decreases with increasing : for the weibull distribution ( eq . [ weibullk ] ) we use eq .( [ generalk ] ) to calculate the amplitude , with the result the last expression for large .comparison between eq .( [ width ] ) and eq .( [ kweibullk ] ) shows that for narrow distributions that narrow distributions give small amplitudes could be expected : many fibers with strengths of almost the same magnitude will tend to break simultaneously , hence the relaxation process goes quicker ..2 in _ ( c ) universality of critical amplitude ratio _ .2 in as function of the initial stress the number of relaxation steps , , shows a divergence at the critical point , both on the pre - critical and post - critical side .this is a generic result , valid for a general probability distribution of the individual fiber strength thresholds . on the post - critical side independent of the system size for large . on the pre - critical sidethere is , however , a weak ( logarithmic ) -dependence , as witnessed by eqs .( ) , ( ) and ( ) .note that the critical amplitude ratio takes the same value for the uniform and the weibull distributions .this shows the universal nature of the critical amplitude ratio , independent of the threshold distribution .note the difference with normal critical phenomena due to the appearance of the in this amplitude ratio here .fiber bundle model captures correctly the non - linear elastic behavior in els mode . in case of strain controlled loading , using the theory of extreme order statistics, it has been shown that els bundles shows non - linear stress - strain behavior after an initial linear part up to which no fiber fails .similar non - linear behavior is seen in the force controlled loading case as well . moreover , from the recursive failure dynamics , the amount of stress drop at the breaking point can be calculated exactly . 
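The fixed points and the critical slowing down derived above are easy to check numerically. The sketch below assumes the uniform threshold distribution normalized to the unit interval, so that the recursion for the surviving fraction takes the form u_{t+1} = 1 - sigma/u_t with critical stress sigma_c = 1/4 and stable fixed point u* = 1/2 + (1/4 - sigma)^{1/2}; both the fixed point and the growth of the number of redistribution steps on approaching sigma_c can be read off directly.

import numpy as np

def iterate_els_uniform(sigma, u0=1.0, tol=1e-12, tmax=10**7):
    """Iterate u_{t+1} = 1 - sigma / u_t (ELS bundle, uniform thresholds on [0, 1]).
    Returns the fixed point (0.0 for complete breakdown) and the step count."""
    u = u0
    for t in range(1, tmax):
        u_new = 1.0 - sigma / u
        if u_new <= 0.0:
            return 0.0, t                    # the whole bundle has failed
        if abs(u_new - u) < tol:
            return u_new, t
        u = u_new
    return u, tmax

sigma_c = 0.25
for eps in (1e-2, 1e-3, 1e-4):
    u_star, steps = iterate_els_uniform(sigma_c - eps)
    # u_star approaches 1/2 + sqrt(eps); the step count grows roughly as eps**(-1/2)
    print(eps, u_star, 0.5 + np.sqrt(eps), steps)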
to demonstrate the scenario we consider an els bundle with uniform fiber strength distribution , having a low cutoff , such that for stresses below the low cutoff , none of the fibers fail .hence , until failure of any of the fibers , the bundle shows linear elastic behavior .as soon as the fibers start to fail , the stress - strain relationship becomes non - linear .this non - linearity can be easily calculated in the els model , using eq .( [ rec - u ] ) for the failure dynamics of the model .fibers are here assumed to be elastic , each having unit force constant , with their breaking strengths ( thresholds ) distributed uniformly within the interval ] beyond which the bundle fails completely . at each fixed point, there will be an equilibrium elongation and a corresponding stress develops in the system ( bundle ) . from eq .( [ nonlinear-1 ] ) , one gets ( for ) also , from the force balance condition , at each fixed - point .therefore , the stress - strain relation for the els model finally becomes : .,width=264,height=188 ] the stress - strain relation in a els bundle is shown in fig .[ fig : stress - strain ] , where the initial linear region has unit slope ( the force constant of each fiber ) .this hooke s region for the stress continues up to the strain value , until which no one of the fibers breaks .after this , nonlinearity appears due to the failure of a few of the fibers and the consequent decrease of .it finally drops to zero discontinuously by an amount \delta f\ge -x_k \delta f < -x_k ] , an increase in the load corresponds to an interval }\label{dx}\ ] ] of fiber thresholds .the expected number of fibers broken by this load increase is therefore note that this number diverges at the critical point , i.e. at the maximum of the load curve , as expected . following the similar method , as in case of uniform distribution , we can determine the asymptotic distribution for large : with a nonzero constant where we have used that at criticality .thus the asymptotic exponent value is universal . for the weibull distribution considered in fig .[ fig : els - per - rmp2 ] we obtain this burst distribution must be given on parameter form , the elimination of can not be done explicitly .the critical point is at and the asymptotics is given by eq .[ dasymp ] , with .if we let the load increase shrink to zero , we must recover the asymptotic power law valid for continuous load increase .thus , as function of , there must be a crossover from one behavior to the other .it is to be expected that for the asymptotics is seen , and when the asymptotics is seen .so far we have discussed in detail the statistical distribution of the _ size _ of avalanches in fiber bundles .sometimes the avalanches cause a sudden internal stress redistribution in the material , and are accompanied by a rapid release of mechanical energy . a useful experimental technique to monitorthe energy release is to measure the acoustic emissions ( ae ) , the elastically radiated waves produced in the bursts .experimental observations suggest that ae signals follow power law distributions . 
what is the origin of such power laws ?can we explain it through a general scheme of fluctuation guided breaking dynamics that has been demonstrated well in els fiber bundle model ?we now determine the statistics of the energies released in fiber bundle avalanches .as the fibers obey hooke s law , the energy stored in a single fiber at elongation equals , where we for simplicity have set the elasticity constant equal to unity .the individual thresholds are assumed to be independent random variables with the same cumulative distribution function and a corresponding density function . _( a ) energy statistics _ let us characterize a burst by the number of fibers that fail , and by the lowest threshold value among the failed fibers .the threshold value of the strongest fiber in the burst can be estimated to be since the expected number of fibers with thresholds in an interval is given by the threshold distribution function as . the last term in ( eq . [ xmax ] ) is of the order , so for a very large bundle the differences in threshold values among the failed fibers in one burst are negligible . hence the energy released in a burst of size that starts with a fiber with threshold given as following the expected number of bursts of size , starting at a fiber with a threshold value in the interval , is where the expected number of bursts with energies less than is therefore with a corresponding energy density explicitly , with ^{\delta}.\label{gn}\end{aligned}\ ] ] here with a critical threshold value , it follows from ( eq . [ en ] ) that a burst energy can only be obtained if is sufficiently large , thus the sum over starts with ] denotes the integer part of ._ ( b ) high energy asymptotics _ bursts with high energies correspond to bursts in which many fibers rupture . in this rangewe use stirling s approximation for the factorial , replace ] .these results agree qualitatively with the recent experimental observations . in order to investigate the fatigue behavior in heterogeneous fiber bundles ,uniform distribution of fiber strengths has been considered .the noise - induced failure probability has the similar form : ] , and analyzed the life time distribution . for , els andlls models have identical gaussian distribution for . for ,lls shows extreme statistics , while els gives gaussian behavior , see e.g. , . also studied similar distribution and their scaling property with load and temperature variations ..3 in creep behavior has been achieved in a bundle of viscoelastic fibers , where a fiber is modeled by a kelvin - voigt element ( see fig .[ fig : kelvin ] ) and results in the constitutive stress - strain relation here is the applied stress , is the corresponding strain , denotes the damping coefficient , and is the young modulus of the fibers . in the equal load sharing mode, the time evolution of the system under a steady external stress can be described by the equation as one can expect intuitively , there is a critical load for the system and eq .( [ eq : eom ] ) suggests two distinct regimes depending on the value of the external load : when is below the critical value eq .( [ eq : eom ] ) has a fixed - point solution , which can be obtained by setting in eq .( [ eq : eom ] ) .\label{eq : stationary}\end{aligned}\ ] ] in this case the strain value converges to when , and no macroscopic failure occurs .but , when , no fixed - point solution exists . 
here , remains always positive , that means in this case , the strain of the system monotonically increases until the system fails globally at a finite time . the solution of the differential equation eq .( [ eq : eom ] ) gives a complete description of the failure process . by separation of variables ,the integral becomes }+c,\label{eq : integ}\end{aligned}\ ] ] where is integration constant . below the critical point the bundle slowly relaxes to the fixed - point value .the characteristic time scale of such relaxation process can be obtained by analyzing the behavior of in the vicinity of . after introducing a new variable as , the differential equation can be written as \delta_0.}\label{eq : delta}\end{aligned}\ ] ] clearly , the solution of eq .( [ eq : delta ] ) has the form } ] , where is the characteristic strain and is the shape parameter . the simulation results ( fig .[ fig : sigma ] ) are in excellent agreement with the analytic results .a slow relaxation following fiber failure can also lead to creep behavior . in this case , the fibers are linearly elastic until they break , but after breaking they undergo a slow relaxation process .therefore when a fiber breaks , its load does not drop to zero instantaneously .instead it undergoes a slow relaxation process and thus introduces a time scale into the system .as the intact fibers are assumed to be linearly elastic , the deformation rate is where denotes the stress , denotes the strain and is the young modulus of the fibers .in addition , to capture the slow relaxation effect , the broken fibers with the surrounding matrix material are modeled by maxwell elements ( fig .[ fig : maxwell ] ) , _ i.e. _ they are assumed as a serial coupling of a spring and a dashpot .such arrangement results in the following non - linear response where is the time dependent stress and is the time dependent deformation of a broken fiber .the relaxation of broken fibers is characterized by few parameters , and , where is the effective stiffness of a broken fiber , the exponent characterizes the strength of non - linearity and is a constant . in equal load sharing mode , when external stress is applied ,the macroscopic elastic behavior of the composite can be represented by the constitutive equation +x_{{\rm b}}(t)p(x(t)).\label{eq : macro}\end{aligned}\ ] ] here is the amount of stress carried by the broken fibers and and denote the fraction of broken and intact fibers at time , respectively . by construction ( fig .[ fig : maxwell ] ) , the two time derivatives have to be always equal now the differential equation for the time evolution of the system can be obtained using eq .( [ eq : macro ] ) , eq .( [ eq : broken ] ) and eq .( [ eq : condit ] ) as \right\ } \nonumber\\ & & = b\left[\frac{\sigma_{{\rm o}}-x\left[1-p(x)\right]}{p(x)}\right]^{m}.\label{eq : eom_max } \end{aligned}\ ] ] similar to the viscoelastic model , two different regimes of can be distinguished depending on the value of : if the external load is below the critical value a fixed - point solution exists which can be obtained by setting in eq .( [ eq : eom_max ] ) .\label{eq : statio_max}\end{aligned}\ ] ] this means that the solution of eq .( [ eq : eom_max ] ) converges asymptotically to resulting in an infinite lifetime of the system . when the external load is above the critical value , the deformation rate remains always positive , resulting in a macroscopic failure in a finite time .now we focus on the universal behavior of the model in the vicinity of the critical point . 
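Before doing so, the two regimes of the viscoelastic (Kelvin-Voigt) bundle described earlier in this section can be illustrated by direct integration of its equation of motion. The sketch below (Python, a simple explicit Euler step; it assumes unit Young modulus, damping beta = 1 and thresholds uniform on (0,1), so that sigma_c = 1/4) relaxes to the fixed point eps* solving sigma_0 = eps(1 - eps) for sub-critical loads, and fails at a finite time t_f for super-critical ones; t_f grows as the load is lowered towards sigma_c, in line with the divergence discussed next.
....
import numpy as np

def evolve(sigma0, beta=1.0, dt=2e-4, t_max=60.0):
    """Euler integration of d(eps)/dt = [sigma0/(1 - P(eps)) - eps]/beta, with P(eps)=eps."""
    eps, t = 0.0, 0.0
    while t < t_max:
        surviving = 1.0 - min(eps, 1.0)
        if surviving <= 0.0:
            return eps, t, True                 # all fibers broken: failure at finite time
        eps += dt * (sigma0 / surviving - eps) / beta
        t += dt
    return eps, t, False                        # relaxing towards a fixed point

for s in (0.20, 0.24, 0.26, 0.30):
    eps, t, failed = evolve(s)
    if failed:
        print(f"sigma0={s:4.2f}: macroscopic failure at t_f ~ {t:6.2f}")
    else:
        eps_star = (1.0 - np.sqrt(1.0 - 4.0 * s)) / 2.0   # stable root of s = eps(1-eps)
        print(f"sigma0={s:4.2f}: eps(t_max)={eps:.4f}, fixed point eps*={eps_star:.4f}")
....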
below the critical point the relaxation of to the stationary solution can be presented by a differential equation of the form where denotes the difference .hence , the characteristic time scale of the relaxation process only emerges if .also , in this case relaxation time goes as when approaching the critical point from below .however , for the situation is different : relaxation process is characterized by , where with .again , close to the critical point , it can also be shown that the lifetime shows power law divergence when the external load approaches the critical point from above the exponent is universal in the sense that it does not depend on the disorder distribution .however it depends on the exponent , which characterizes the non - linearity of broken fibers . as a function of the distance from the critical point , , for two different values of the parameter .the number of fibers in the bundle is . from .,width=226,height=188 ] as a check , numerical simulations have been performed for several different values of the exponent ( fig .[ fig : sigma_maxwell ] ) .the slope of the fitted straight lines agrees well with the analytic predictions ( eq . [ eq : tau_max ] ) .an interesting experimental and theoretical study of fatigue failure in asphalt was performed by .the experimental set - up is shown in fig .[ ah - fig13 ] .the cylindrical sample was subjected to cyclic diametric compression at constant load amplitude , and the deformation as a function of the number of cycles was recorded together with the number of cycles at which catastrophic failure occurs .[ ah - fig15 ] shows deformation as a function of number of cycles for two different load amplitudes and 0.4 . here is the tensile strength of the asphalt .fig.[ah - fig16 ] shows the number of cycles to catastrophic failure as a function of the load amplitude .this curve shows three regimes , the middle one being characterizable by a power law , this is the _ basquin regime _ . find for the asphalt system . as a function of number of cycles , showing both experimental data and theoretical curves based on the fiber bundle model . from .,width=302 ] in order to model the behavior found in figs .[ ah - fig15 ] and [ ah - fig16 ] , introduce equal load - sharing fiber bundle model as illustrated in fig .[ ah - fig13]a .each fiber is subjected to a time dependent load .there are two failure mechanisms present .fiber fails instantaneously at time when for the first time reaches its failure threshold . however , there is also a damage accumulation mechanism characterized by the parameter . in the time interval , fiber accumulates a damage where is a scale parameter and is an exponent to be determined .hence , the accumulated damage is when for the first time exceeds the damage accumulation threshold , , fiber fails . the thresholds and are chosen from a joint probability distribution . make the assumption that this distribution may be factorized , . as a function of load amplitude .the two curves show experimental and fiber bundle model data respectively . from .,width=340 ] in addition to damage accumulation , there is yet another important mechanism that needs to be incorporated in the model : damage _ healing _a time scale is associated with this mechanism , and the els average force - load equation (t)\;,\ ] ] where is the cumulative instantaneous breaking threshold probability , is generalized to \left[1-p_t(x(t))\right]x(t)\;,\ ] ] where . 
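A drastically simplified version of this damage-accumulation mechanism already shows Basquin-type scaling. In the toy calculation below (Python/NumPy; healing and instantaneous breaking are ignored, and all parameter values are arbitrary), every cycle each intact fiber of an ELS bundle carries x = sigma_0 N / N_intact and accumulates the damage a x^gamma; a fiber fails once its accumulated damage exceeds its own random threshold. For thresholds uniform on (0,1) this stripped-down model can be integrated in closed form, N_f = 1/[(gamma + 1) a sigma_0^gamma], i.e. a power law in the load amplitude whose exponent is set by the damage exponent gamma.
....
import numpy as np

GAMMA, A = 4.0, 1e-3          # damage exponent and scale (arbitrary choices)

def cycles_to_failure(sigma0, N=5000, seed=0, max_cycles=10**6):
    rng = np.random.default_rng(seed)
    c_th = rng.random(N)                    # damage thresholds, uniform on (0,1)
    damage = np.zeros(N)
    intact = np.ones(N, dtype=bool)
    for cycle in range(1, max_cycles + 1):
        x = sigma0 * N / intact.sum()       # ELS load per intact fiber
        damage[intact] += A * x**GAMMA      # per-cycle damage grows as a power of the load
        intact &= damage < c_th
        if not intact.any():
            return cycle
    return max_cycles

for s in (0.3, 0.4, 0.5):
    nf = cycles_to_failure(s)
    print(f"sigma0={s:3.1f}  N_f={nf:6d}  analytic ~ {1.0/((GAMMA+1)*A*s**GAMMA):7.0f}")
....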
show that the basquin law ( [ kun_basquin ] ) may be derived analytically from eq.([kun_general_break ] ) leading to .the solid curves in fig .[ ah - fig15 ] are fits of the theoretical curves based on eq.([kun_general_break ] ) to the experimental data for the vs. .likewise , fig .[ ah - fig16 ] shows the fit of the theoretical vs. to the experimental data . for this fit , and .a fundamental question in strength considerations of materials is when does it fail .are there signals that can warn of imminent failure ?this is of uttermost importance in e.g. the diamond mining industry where sudden failure of the mine can be extremely costly in terms of lives .these mines are under continuous acoustic surveillance , but at present there are no tell - tale acoustic signature of imminent catastrophic failure .the same type of question is of course also central to earthquake prediction and initiates the search for precursors of global ( catastrophic ) failure events , see e.g. , , , .the precursor parameters essentially reflect the growing correlations within a dynamic system as it approaches the failure point .as we show , sometimes it is possible to predict the global failure points in advance .needless to mention that the existence of any such precursors and detailed knowledge about their behavior for major catastrophic failures like earthquakes , landslides , mine / bridge collapses , would be of supreme value for our civilization . in this sub - sectionwe discuss some precursors of global failure in els models .we also comment on how can one predict the critical point ( global failure point ) from the precursor parameters ..1 in _ ( a ) divergence of susceptibility and relaxation time _ .1 in and with applied stress for a bundle having fibers .uniform distribution of fiber thresholds is considered and averages are taken over sample .the dotted straight lines are the best linear fits near the critical point.,title="fig:",width=226,height=188 ] and with applied stress for a bundle having fibers .uniform distribution of fiber thresholds is considered and averages are taken over sample .the dotted straight lines are the best linear fits near the critical point.,title="fig:",width=226,height=188 ] as we discussed earlier ( section iii.a ) in case of els fiber bundles the susceptibility ( ) and the relaxation time ( ) follow power laws ( exponent ) with external stress and both diverge at the critical stress .therefore if we plot and with external stress , we expect a linear fit near critical point and the straight lines should touch axis at the critical stress .we indeed found similar behavior ( fig .[ fig : xi - tau ] ) in simulation experiments after taking averages over many sample . for application ,it is always important that such prediction can be done in a single sample . for a single bundle having very large number of fibers , similar response of and been observed .the estimation ( through extrapolation ) of the failure point is also quite satisfactory ( fig .[ fig : xi - tau - single ] ) . and with applied stress for a single bundle having fibers with uniform distribution of fiber thresholds .straight lines represent the best linear fits near the critical point . 
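A minimal single-sample version of these precursors is sketched below (Python/NumPy, thresholds uniform on (0,1), so sigma_c = 1/4). Part (a) measures the relaxation time tau(sigma) as the number of load-redistribution steps needed to reach a fixed point; since tau is expected to diverge as (sigma_c - sigma)^(-1/2), tau^(-2) is fitted by a straight line and extrapolated to zero to estimate sigma_c. Part (b) anticipates the breaking-rate pattern discussed next: for a slightly overloaded bundle the number of fibers failing per step passes through a minimum at a step t_min lying roughly halfway to the collapse step t_f. The stress windows used are arbitrary choices.
....
import numpy as np

rng = np.random.default_rng(3)
N = 10**6
th = rng.random(N)                           # single sample, thresholds uniform on (0,1)

def relax(sigma):
    """Redistribute the load at fixed stress sigma; return (steps, rate, collapsed)."""
    intact = np.ones(N, dtype=bool)
    rate = []
    while intact.any():
        load = sigma * N / intact.sum()
        fail = intact & (th < load)
        if not fail.any():
            return len(rate), rate, False    # fixed point reached
        rate.append(int(fail.sum()))
        intact &= ~fail
    return len(rate), rate, True             # complete collapse

# (a) sub-critical stresses: tau^(-2) should extrapolate linearly to zero at sigma_c
sigmas = np.linspace(0.20, 0.248, 13)
taus = np.array([relax(s)[0] for s in sigmas], dtype=float)
slope, intercept = np.polyfit(sigmas, taus**-2, 1)
print("extrapolated sigma_c ~", -intercept / slope, "(exact value 0.25)")

# (b) slightly over-critical stresses: the breaking-rate minimum sits near t_f / 2
for excess in (0.001, 0.003, 0.005):
    t_f, rate, collapsed = relax(0.25 + excess)
    t_min = int(np.argmin(rate)) + 1
    print(f"excess={excess:.3f}:  t_min={t_min:3d}  t_f={t_f:3d}  t_f/t_min={t_f/t_min:.2f}")
....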
, title="fig:",width=226,height=188 ] and with applied stress for a single bundle having fibers with uniform distribution of fiber thresholds .straight lines represent the best linear fits near the critical point ., title="fig:",width=226,height=188 ] .1 in _ ( b ) pattern of breaking - rate _ .1 in different stress values .circles are for stresses below the critical stress and triangles are for stresses above the critical stress .the simulation has been performed for a single bundle with fibers having uniform distribution of fiber thresholds .the dotted straight line has a slope .,width=226,height=188 ] when we apply load on a material body , it is important to know whether the body can support that load or not .the similar question can be asked in fiber bundle model .we found that if we record the breaking - rate , i.e , the amount of failure in each load redistribution -then the pattern of breaking - rate clearly shows whether the bundle is going to fail or not . for any stress below the critical state , the breaking - rate follows exponential decay ( fig .[ fig : breaking - rate ] ) with the step of load redistribution and for stress values above critical stress it is a power law followed by a gradual rise ( fig .[ fig : breaking - rate ] ) . clearly , at critical stress it follows a robust power law with exponent value that can be obtained analytically from eq .( [ eq : fbm - omori ] ) . as we can see from fig .[ fig : breaking - rate - min ] that when the applied stress value is above the critical stress , breaking - rate initially goes down with step number , then at some point it starts going up and continues till the complete breakdown . that means if breaking - rate changes from downward trend to upward trend - the bundle will fail surely-but not immediately after the change occurs -it takes few more steps and number of these steps decreases as we apply bigger external stress ( above the critical value ) .therefore , if we can locate this minimum in the breaking - rate pattern , we can save the system ( bundle ) from breaking down by withdrawing the applied load immediately .we have another important question here : is there any relation between the breaking rate minimum and the failure time ( time to collapse ) of the bundle ?there is indeed a universal relationship which has been explored recently through numerical and analytical studies : for slightly overloaded bundle we can rewrite eq .( [ finaluni ] ) as where from eq .( [ un ] ) follows the breaking rate has a minimum when which corresponds to when criticality is approached , i.e. when , we have , and thus , as expected .we see from eq .( [ un ] ) that for this is an excellent approximation to the integer value at which the fiber bundle collapses completely .thus with very good approximation we have the simple connection .when the breaking rate starts increasing we are halfway ( see fig .[ fig : breaking - rate - min ] ) to complete collapse ! + vs. step ( upper plot ) and vs. the rescaled step variable ( lower plot ) for the uniform threshold distribution for a bundle of fibers .different symbols are used for different excess stress levels : 0.001 ( circles ) , 0.003 ( triangles ) , 0.005 ( squares ) and 0.007 ( crosses ) ., title="fig:",width=226,height=226 ] vs. step ( upper plot ) and vs. 
the rescaled step variable ( lower plot ) for the uniform threshold distribution for a bundle of fibers .different symbols are used for different excess stress levels : 0.001 ( circles ) , 0.003 ( triangles ) , 0.005 ( squares ) and 0.007 ( crosses ) ., title="fig:",width=226,height=226 ] .1 in _ ( c ) crossover signature in avalanche distribution _ .1 in in els model with uniform fiber strength distribution .here bundle size and averages are taken over sample .two power laws ( dotted lines ) have been drawn as reference lines to compare the numerical results ., width=226,height=226 ] the bursts or avalanches can be recorded from outside -without disturbing the ongoing failure process .therefore , any signature in burst statistics that can warn of imminent system failure would be very useful in the sense of wide scope of applicability . as discussed in section iii.b ,when the avalanches are recorded close to the global failure point , the distribution shows ( fig .[ fig : els - cut - dist ] ) a different power law ( ) than the one ( ) characterizing the size distribution of all avalanches .this crossover behavior has been analyzed analytically in case of els fiber bundle model and similar crossover behavior is also seen in the burst distribution and energy distribution of the fuse model which is an established model for studying fracture and breakdown phenomena in disordered systems .the crossover length becomes bigger and bigger as the failure point is approached and it diverges at the failure point ( eq . [ eq-3b-16 ] ) . in some sense ,the magnitude of the crossover length tells us how far the system is from the global failure point .most important is that this crossover signal does not hinge on observing rare events and is seen also in a single system ( see fig .[ fig3b-3 ] ) . therefore , such crossover signature has a strong potential to be used as a useful detection tool .it should be mentioned that a recent observation suggests a clear change in exponent values of the local magnitude distributions of earthquakes in japan , before the onset of a mainshock ( fig .[ fig : kawamura ] ) .this observation has definitely strengthened the possibility of using crossover signals in burst statistics as a criterion for imminent failure ., much smaller than the average value . from .,width=302 ] as we have seen , fiber bundle models provide a fertile ground for studying a wide range of breakdown phenomena . in some sense , they correspond to the ising model in the study of magnetism . in this section , we will review how the fiber bundle models are generalized to describe composites containing fibers .such composites are of increasing practical importance , see e.g. , fig .[ ah - fig11 ] .the status of modeling fiber reinforced composites has recently been reviewed .these materials consist of fibers embedded in a matrix . 
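Before continuing with composites, the crossover signature described above is easy to reproduce in a quasi-static ELS simulation. The sketch below (Python/NumPy; the sampling window and fit range are illustrative choices) identifies bursts as the intervals between successive records of the breaking force F_k = x_k (N - k) for sorted uniform thresholds, and compares the size distribution of all bursts with that of the bursts starting close to the failure point; the former should show an exponent near the familiar 5/2, the latter an exponent closer to 3/2.
....
import numpy as np

rng = np.random.default_rng(11)
N = 4_000_000
x = np.sort(rng.random(N))                    # thresholds, uniform on (0,1)
F = x * (N - np.arange(N))                    # quasi-static breaking force
prev = np.concatenate(([-np.inf], np.maximum.accumulate(F)[:-1]))
starts = np.flatnonzero(F > prev)             # force records = burst starting points
kc = np.argmax(F)                             # critical (failure) point
starts = starts[starts <= kc]
sizes = np.diff(starts)
start_th = x[starts[:-1]]                     # threshold at which each burst starts

def fitted_exponent(deltas, lo=2, hi=50):
    """Crude power-law exponent of the burst-size histogram (illustrative range)."""
    d = np.arange(lo, hi)
    counts = np.array([(deltas == k).sum() for k in d], dtype=float)
    keep = counts > 0
    return -np.polyfit(np.log(d[keep]), np.log(counts[keep]), 1)[0]

near_failure = sizes[start_th > 0.98 * x[kc]]  # bursts recorded close to breakdown
print("all bursts    : exponent ~", fitted_exponent(sizes))         # compare with 5/2
print("near breakdown: exponent ~", fitted_exponent(near_failure))  # compare with 3/2
....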
during tensile loadingthe main part of the load is carried by the fibers and the strength of the composite is governed to a large extent by the strength of the fibers themselves .the matrix material is chosen so that its yield threshold is lower than that of the fibers which are embedded in it .common materials used for the fibers are aluminum , aluminum oxide , aluminum silica , asbestos , beryllium , beryllium carbide , beryllium oxide , carbon ( graphite ) , glass ( e - glass , s - glass , d - glass ) , molybdenum , polyamide ( aromatic polyamide , aramid ) , kevlar 29 and kevlar 49 , polyester , quartz ( fused silica ) , steel , tantalum , titanium , tungsten or tungsten monocarbide .most matrix materials are resins as a result of their wide variation in properties and relatively low cost .common resin materials are epoxy , phenolic , polyester , polyurethane and vinyl ester .when the composite is to be used under adverse conditions such as high temperature , metallic matrix materials such as aluminum , copper , lead , magnesium , nickel , silver or titanium , or non - metallic matrix materials such as ceramics may be used . when the matrix material is brittle , cracks open up in the matrix perpendicular to the fiber direction at roughly equal spacing . in metallic matrix materials , plasticity sets in at sufficient load .lastly , in polymer matrix composites , the matrix typically responds linearly , but still the fibers carry most of the load due to the large compliance of the matrix .when a fiber fails , the forces it carried are redistributed among the surviving fibers and the matrix . if the matrix - fiber interface is weak compared to the strength of the fibers and the matrix themselves , fractures develop along the fibers .when the matrix is brittle , the fibers bridging the developing crack in the matrix will , besides binding the crack together , lead to stress alleviation at the matrix crack front .this leads to the energy necessary to propagate a crack further increases with the length of the crack , i.e. , so - called _ r - curve behavior _ .when the bridging fibers fail , they typically do so through debonding at the fiber - matrix interface .this is followed by pull - out , see fig .[ ah - fig12 ] .the cox shear lag model forms the basis for the standard tools used for analyzing breakdown in fiber reinforced composites .it considers the elastic response of a single fiber in a homegeous matrix only capable of transmitting shear stresses . by treating the properties of the matrix aseffective and due to the self - consistent response by the matrix material and the rest of the fibers , the cox model becomes a mean - field model .extensions of the cox single - fiber model to debonding and slip at the fiber - matrix interface have been published . in 1961 the single - fiber calculation of coxwas extended to two - dimensional unidirectional fibers in a compliant matrix , i.e. , a matrix incapable of carrying tensile stress , by . in 1967 ,this calculation was followed up by for three - dimensional unidirectional fibers placed in a square or hexagonal pattern .they found the average _ stress intensity factor _( i.e. , the ratio between local stress in an intact fiber and the applied stress ) to be after fibers failing . 
in his ph.d .thesis , extended these calculations to aligned arrays of broken fibers mixed with intact fibers .this approach was subsequently generalized to systems where the matrix has a non - zero stiffness and hence is able to transmit stress .viscoelasticity of the matrix has been included by and . demonstrated that when the fibers respond under global load - sharing conditions , a mean - field theory may be constructed where the breakdown of the composite is reduced to that of the failure of a single fiber in an effective matrix . studied the redistribution of forces onto the neighbors of a single failing fiber within a two - dimensional uni - directional composite using the shear - lag model , finding that within this scheme , the stress - enhancement is less pronounced than earlier calculations had shown . introduced a multi - fiber failure model including debonding and frictional effects at the fiber - matrix interface , finding that the stress intensity factor would decrease with increasing interfiber distance .an important calculational principle , the _ break influence superposition technique _ was introduced by based on the method of in order to handle models with multiple fiber failures .the technique consists in determining the transmission factors , which give the load at a given position along a given fiber due to a unit negative load at the single break point in the fiber bundle .the multiple failure case is then constructed through superposition of these single - failure transmission factors .this method has proven very efficient from a numerical point of view , and has been generalized through a series of later papers , see . , , and analyzed the interaction between multiple breaks in uni - directional fibers embedded in a matrix using a lattice green function technique to calculate the load transfer from broken to unbroken fibers including fiber - matrix sliding with a constant interfacial shear resistance , given by either a debonded sliding interface or by matrix shear yielding .the differential load carrying capacity of the matrix is assumed to be negligible . in the followingwe describe the zhou and curtin approach in some detail .the load - bearing fibers have a strength distribution given by the cumulative probability of failure over a length of fiber experiencing a stress , where where is the weibull index . when a fiber breaks , the load is transferred to the unbroken fibers .we will return to the details henceforth .the newly broken fiber slides relatively to the matrix . the shear resistance provides an average axial fiber stress along the single broken fiber at a distance from the break , where is radius of the fibers and is the axial fiber stress prior to the failure at point .this defines a length scale the total stress change within a distance of the break is distributed to the other fibers .a key assumption in what now follows is that the total stress in each plane is conserved : the stress difference is distributed among the other intact fibers at the same -level . 
in order to set up the lattice green function approach, the system must be discretized .each fiber , oriented in the direction , of length is divided into elements of length .the fibers are arranged on the nodes of a square lattice in the plane so that there is a total of fibers .the lattice constants in the and directions are and , respectively .each fiber is labeled by where .this is shown in fig.[ah - fig14 ] .the stress on fiber in layer along the direction is given by .fiber at layer may be intact .it then acts as a hookean spring with spring constant responding to the stress .if fiber has broken at layer , it carries a stress equal to zero .the third possibility is that fiber has broken elsewhere at , and layer is within the slip zone .it then carries a stress which is the discretization of eq .( [ zhou_curtin_slide ] ) with zero spring constant .each element of fiber has two end nodes associated with it . at all such nodes, springs parallel to the plane are placed linking fiber with its nearest neighbors .these springs have spring constant .the displacement of the nodes is assumed confined to the direction only .zhou and curtin denote the displacement of node connecting element with element of fiber , , and the displacement of node linking element with element of fiber , .the force on element of fiber from element of fiber is the reader should compare the following discussion with that which was presented in section [ sec:4c ] .we now assume that it is only layer that carries any damaged or slipped elements , the rest of the layers are perfect .let .if a force is applied to the nodes , the response is where is the lattice green function .given the displacements from solving this equation combined with eq.([zhou_curtin_force_on_n ] ) , the force carried by each broken element is found .the inverse of the lattice green function is .the elements of are either zero , or , reflecting the status of the springs ; undamaged , slipping or broken . when there are no breaks in layer , we define , and .hence , plays a rle somewhat similar to the matrix defined in eq .( [ m4 ] ) . by combining these definitions ,zhou and curtin find the matrices and have dimension where . by appropriately labeling the rows and columns ,the matrices and may be written where the matrix couples elements _ within _ the layer , where all the damage is located .the matrix couples elements within the rest of the layers . these are undamaged perfect " . the two matrices and provide the cross couplings .the matrix becomes in this representation combining this equation with eq .( [ zhou_curtin_g_and_d ] ) gives where the intact green function may be found analytically by solving eq .( [ zhou_curtin_response ] ) for the intact lattice in fourier space , ^{-1}\;. \nonumber\\\end{aligned}\ ] ] by using that , zhou and curtin find that f^+_{n',0}\;,\ ] ] where and refer to the upper and lower node attached to element is layer . before completing the model , the weibull strength distribution , eqs.([zhou_curtin_cumulative ] ) and ( [ zhou_curtin_phi ] ) , must be discretized .each element is given a maximum sustainable load from the cumulative probability where .the breakdown algorithm proceeds as follows : 1 .a force per fiber set equal to the smallest breaking threshold , , is applied to the system .2 . the weakest fiber or fibers are broken by setting their spring constants to zero . 3 . decrease the stresses in the element below and above the just broken fibers according to eq .( [ zhou_curtin_disc_slip ] ) .4 . 
solve eq .( [ zhou_curtin_dyson ] ) for the layers in which the breaks occurred .calculate the spring displacements in the layers where the breaks occurred using eq .( [ zhou_curtin_u_u ] ) and an effective applied force .the force on each intact spring in such a layer is then .6 . with the new stresses , search for other springs that carry a force beyond their thresholds .if such springs are found , break these and return to ( 2 ) .otherwise proceed .search for the spring which is closest to it breaking threshold .this spring is the one with .increase the load by a factor , where is equal to or somewhat larger than unity .this factor is present to take into account the non - linearities introduced in the system due to the slip of the fibers .proceed until the system no longer can sustain a load . by changing the ratio between the moduli of the springs in the discretized lattice , it is possible to go from fiber bundle behavior essentially evolving according to equal load sharing ( els ) to local load sharing ( lls ) . whereas the computational cost of finite - element calculations on fiber reinforced composites scales with the volume of the composite , the break influence superposition technique and the lattice green function technique scale with the number of fiber breaks in the sample .this translates into systems studied by the latter two techniques can be orders of magnitude larger than the former .after this rather sketchy tour through the use of fiber bundle models as tools for describing the increasing important fiber reinforced composite materials , we now turn to the use of fiber bundle models in non - mechanical settings .the typical failure dynamics of the fiber bundle model captures quite faithfully the failure behavior of several multicomponent systems like the communication or traffic networks .similar to the elastic networks considered here , as the local stress or load ( transmission rate or traffic currents ) at any part of the network goes beyond the sustainable limit , that part of the system or the network fails or gets jammed , and the excess load gets redistributed over the other intact parts .this , in turn , may induce further failure or jamming in the system . because of the tectonic motions stresses develop at the crust - tectonic plate resting ( contact ) regions and the failure at any of these supports induces additional stresses elsewhere .apart from the healing phenomena in geological faults , the fiber bundle models have built - in features to capture the earthquake dynamics . naturally , the statistically established laws for earthquake dynamics can be easily recast into the forms derived here for the fiber bundle models .we will consider here in some more details these three applications specifically .the fiber bundle model has been applied to study the cascading failures of network structures , like erdos - renyi networks , known as er networks and watts - strogatz networks , known as ws networks , to model the overloading failures in power grids , etc . here , the nodes or the individual power stations are modelled as fibers and the transmission links between these nodes are utilized to transfer the excess load ( from one broken fiber or station to another ) .the load transfer of broken fibers or nodes through the edges or links of the underlying network is governed by the lls rule . 
under a non - zero external load , the actual stress of the intact fiber is given by the sum of and the transferred load from neighboring broken fibers .the local load transfer , from broken fibers to intact fibers , depends on the load concentration factor with , where the primed summation is over the cluster of broken fibers directly connected to , is the number of broken fibers in the cluster , and is the number of intact fibers directly connected to .let the external stress be increased by an infinitesimal amount starting from .fibers for which strength break iteratively until no more fibers break . for each increment of , the size of the avalanche is defined as the number of broken fibers triggered by the increment .the surviving fraction of fibers can be written as one can also measure directly the response function , or the generalized susceptibility , denoted as the critical value of the external load , can be defined from the condition of the global breakdown .the critical value and the susceptibility have been calculated numerically for the model under the lls rule on various network structures , such as the local regular network , the ws network , the er network , and the scale - free networks ( fig .[ fig : sigmac ] ) .the results suggest that the critical behavior of the model on complex networks is completely different from that on a regular lattice . more specifically , while for the fbm on a local regular network vanishes in the thermodynamic limit and is described by for finite - sized systems ( see the curve for in fig .[ fig : sigmac ] , corresponding to the ws network with the rewiring probability ) , for _ all _ networks except for the local regular one does not diminish but converges to a nonzero value as is increased .moreover , the susceptibility diverges at the critical point as , regardless of the networks , which is again in a sharp contrast to the local regular network [ see fig . [fig : sigmac](b ) ] .the critical exponent clearly indicates that the fbm under the lls rule on complex networks belongs to the same universality class as that of the els regime although the load - sharing rule is strictly local .the observed variation for is only natural for lls model . in an lls modelif successive fibers fail ( each with a finite probability ) , then the total probability of such an event is as the probability is proportional to the bundle size . if this probability is finite , then ( for any finite , the failure probability of any fiber in the bundle ) . for a failure of fibers , the neighboring intact fibers get the transferred load which if becomes greater than or equal to their strength , it surely fails giving ( see section iv.a ) .the evidence that the lls model on complex networks belongs to the universality class of the els model is also found in the avalanche size distribution : unanimously observed power - law behavior ( a ) for all networks except for the local regular one ( the ws network with ) is in perfect agreement with the behavior for the els case . on the other hand ,the lls model for a regular lattice has been shown to exhibit completely different avalanche size distribution .also one can observe a clear difference in terms of the failure probability defined as the probability of failure of the whole system at an external stress . 
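A rough implementation of this cascade rule is sketched below (Python; the random-graph construction is a quick stand-in for an Erdos-Renyi network, and all parameter values are arbitrary). Every intact fiber carries the bare stress sigma plus, for each adjacent cluster of broken fibers, the equal share sigma k_c/h_c of that cluster's load, where k_c is the cluster size and h_c the number of intact fibers on its boundary; in this sketch a cluster that has lost all intact neighbours simply sheds its load. Sweeping sigma then locates the breakdown point of the sample.
....
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
N, k_mean = 10000, 4
pairs = rng.integers(0, N, size=(N * k_mean // 2, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]          # drop self-loops (multi-edges tolerated)
adj = [[] for _ in range(N)]
for i, j in pairs:
    adj[i].append(j)
    adj[j].append(i)
th = rng.random(N)                                 # thresholds uniform on (0,1)

def surviving_fraction(sigma):
    broken = np.zeros(N, dtype=bool)
    while True:
        load = np.full(N, sigma)
        seen = np.zeros(N, dtype=bool)
        for s in range(N):
            if broken[s] and not seen[s]:
                # BFS over the broken cluster containing s, collecting its intact boundary
                cluster, boundary, queue = [], set(), deque([s])
                seen[s] = True
                while queue:
                    u = queue.popleft()
                    cluster.append(u)
                    for w in adj[u]:
                        if broken[w] and not seen[w]:
                            seen[w] = True
                            queue.append(w)
                        elif not broken[w]:
                            boundary.add(w)
                if boundary:                       # share sigma*k_c equally over h_c neighbours
                    share = sigma * len(cluster) / len(boundary)
                    for w in boundary:
                        load[w] += share
                # clusters with no intact neighbours simply shed their load in this sketch
        newly = (~broken) & (load > th)
        if not newly.any():
            return 1.0 - broken.mean()
        broken |= newly

for s in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"sigma={s:4.2f}  surviving fraction = {surviving_fraction(s):.3f}")
....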
while values for lls on complex networks fall on a common line , lls on regular network shows a distinctly different trend , see also .one can apply the equal load sharing fiber bundle model to study the traffic failure in a system of parallel road network in a city .for some special distributions , like the uniform distribution , of traffic handling capacities ( thresholds ) of the roads , the critical behavior of the jamming transition can be studied analytically .this , in fact , is exactly comparable with that for the asymmetric simple exclusion process in a single channel or road .traffic jams or congestions occur essentially due to the excluded volume effects ( among the vehicles ) in a single road and due to the cooperative ( traffic ) load sharing by the ( free ) lanes or roads in multiply connected road networks ( see e.g. , ) . using fbm for the traffic network ,it has been shown that the generic equation for the approach of the jamming transition in fbm corresponds to that for the asymmetric simple exclusion processes ( asep ) leading to the transport failure transition in a single channel or lane ( see e.g. , ) .let the suburban highway traffic , while entering the city , get fragmented equally through the various narrower streets within the city and get combined again outside the city ( see fig .[ bkc - traffic - model ] ) .if denotes the input traffic current and is the total output traffic current , then at the steady state , without any global traffic jam , . in case below , the global jam starts and soon drops to zero .this occurs if , the critical traffic current of the network , beyond which global traffic jam occurs .let the parallel roads within the city have different thresholds for traffic handling capacity : for the different roads ( the -th road gets jammed if the traffic current per road exceeds ) .initially and increases as some of the roads get jammed and the same traffic load has to be shared equally by a lower number of unjammed roads .next , we assume that the distribution of these thresholds is uniform up to a maximum threshold current ( corresponding to the widest road traffic current capacity ) , which is normalized to unity ( sets the scale for ) .gets fragmented into uniform currents in each of the narrower roads and the roads having threshold current get congested or blocked .this results in extra load for the uncongested roads .we assume that this extra load per unconjested roads gets equally redistributed and gets added to the existing load , causing further blocking of some more roads ., width=240,height=240 ] the jamming dynamics in this model starts from the -th road ( say in the morning ) when the traffic load per city roads exceeds the threshold of that road . 
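For road capacities distributed uniformly on (0,1], the jamming dynamics just described reduces to the same one-variable recursion as for the ELS bundle of section iii.a, namely U_{t+1} = 1 - i/U_t with i the normalized traffic per road, as written down in the next paragraph. Iterating it directly (tiny Python sketch below) shows the unjammed fraction converging to the larger root of U(1 - U) = i below the critical traffic i_c = 1/4, and collapsing to a global jam above it.
....
def final_fraction(i, steps=100000, tol=1e-12):
    """Iterate U_{t+1} = 1 - i/U_t starting from U = 1 (all roads unjammed)."""
    U = 1.0
    for _ in range(steps):
        U_new = 1.0 - i / U
        if U_new <= 0.0:
            return 0.0                      # global jam
        if abs(U_new - U) < tol:
            break
        U = U_new
    return U

for i in (0.10, 0.20, 0.24, 0.25, 0.26, 0.30):
    print(f"i={i:4.2f}  unjammed fraction -> {final_fraction(i):.4f}")
....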
due to this jam ,the total number of unconjested roads decreases and the rest of these roads have to bear the entire traffic load in the system .hence the effective traffic load or stress on the uncongested roads increases and this compels some more roads to get jammed .these two sequential operations , namely the stress or traffic load redistribution and further failure in service of roads continue till an equilibrium is reached , where either the surviving roads are strong ( wide ) enough to share equally and carry the entire traffic load on the system ( for ) or all the roads fail ( for ) and a ( global ) traffic jam occurs in the entire road network system .this jamming dynamics can be represented by recursion relations in discrete time steps .let be the fraction of unconjested roads in the network that survive after ( discrete ) time step , counted from the time when the load ( at the level ) is put in the system ( time step indicates the number of stress redistributions ) .as such , for all and for for any ; for if , and for if .here follows a simple recursion relation the critical behavior of this model remains the same as discussed in section iii.a in terms of and the exponent values remain unchanged : , for all these equal ( traffic ) load sharing models . in the one dimensional lane or roadis possible if , say , the -th site is occupied and the -th site is vacant .the inter - site hopping probability is indicated by ., width=240,height=144 ] in the simplest version of the asymmetric simple exclusion process transport in a chain ( see fig .[ aspe - model ] ) , the transport corresponds to movement of vehicles , which is possible only when a vehicle at site , say , moves to the vacant site .the transport current is then given by where denotes the site occupation density at site and denotes the inter - site hopping probability .the above equation can be easily recast in the form where .formally it is the same as the recursion relation for the density of uncongested roads in the fbm model discussed above ; the site index here in asep plays the role of time index in fbm .such exact correspondence indicates identical critical behavior in both the cases .the same universality for different cases ( different threshold distributions ) in fbm suggests similar behavior for other equivalent asep cases as well . for extension of the model to scale free traffic networks ,see .the earth s outer crust , several tens of kilometers in thickness , rests on tectonic shells . due to the high temperature - pressure phase changes and consequent ionizations of the metallic ores , powerfulmagneto - hydrodynamic convective flows occur in the earth s mantle at several hundreds of kilometers in depth .the tectonic shell , divided into about ten mobile plates , have got relative velocities of the order of a few centimeters per year , see e.g. 
, .the stresses developed at the interfaces between the crust and the tectonic shells during the ( long ) _ sticking _ periods get released during the ( very short ) _ slips _ , causing the releases of the stored elastic energies ( at the fault asperities ) and consequent earthquakes .two well known phenomenologically established laws governing the earthquake statistics are ( a ) the gutenberg - richter law relating the number density ( ) of earthquakes with the released energy greater than or equal to ; and ( b ) the omori law where denotes the number of aftershocks having magnitude or released energy larger than a preassigned small but otherwise arbitrary threshold value . as mentioned already ,because of the tectonic motions , stresses develop at the crust - tectonic plate contact regions and the entire load is supported by such regions .failure at any of these supports necessitates load redistributions and induces additional stresses elsewhere .in fact the avalanche statistics discussed in section iii.b can easily explain the gutenberg - richter law ( eq . [ gr - law ] ) with the identification .similarly , the decay of the number of failed fibers at the critical point , given by ( see e.g. , eq . [eq : fbm - omori ] ) , crudely speaking , gives in turn the omori law behavior ( eq .[ omori - law ] ) for the fiber bundle model , with the identification .for some recent discussions on further studies along these lines , see e.g. , .the stick - slip motion in the burridge - knopoff model ( see e.g. , ) , where the blocks ( representing portion of the solid crust ) connected with springs ( representing the elastic strain developed due to tectonic motion ) are pulled uniformly on a rough surface , has the same feature of stress redistribution as one or more blocks slip and the dynamics was mapped onto a els fiber bundle model by .the power law distributions of the fluctuation driven bursts around the critical points have been interpreted as the above two statistical laws for earthquakes .the fiber bundle model enjoys a rare double position in that it is both useful in a practical setting for describing a class of real materials under real working conditions , and at the same time being abstract enough to function as a model for exploring fundamental breakdown mechanisms from a general point of view .very few models are capable of such a double life .this means that a review of the fiber bundle model may take on very different character depending on the point of view .we have in this review emphasized the fiber bundle model as a model for exploring fundamental breakdown mechanisms .failure in loaded disordered materials is a collective phenomenon .it proceeds through a competition between disorder and stress distribution .the disorder implies a distribution of local strength .if the stress distribution were uniform in the material , it would be the weakest spot that would fail first .suppose now that there has been a local failure at a given spot in the material .the further away we go from this failed region , the weaker the weakest region within this distance will be .hence , the disorder makes local failures repel each other : they will occur as far as possible from each other .however , as the material fails locally , the stresses are redistributed .this redistribution creates hot spots where local failure is likely due to high stresses .since these hot spots occur at the boundaries of the failed regions , the effect of the stress field is an attraction between the local failures .hence 
, disorder and stress has opposite effect on the breakdown process ; repulsion vs. attraction , and this leads to competition between them . since the disorder in the strength of the material , leading to repulsion , dimishes throughout the breakdown process , whereas the stresses create increasingly important hot spots , it is the stress distribution that ends up dominating towards the end of the process .the fiber bundle model catches this essential aspect of the failure process . depending on the load redistribution mechanism , the quantitative aspects change .however , qualitatively it remains the same . in the els case , there are no _ localized _ hot spots , all surviving fibers are loaded the same way .geometry does not enter into the redistribution of forces , and we may say that the hot spots " include all surviving fibers .this aspect gives the els fiber bundle model its mean field character , even though all other fluctuations are present , such as those giving rise to bursts .as shown in section iii , the lack of geometrical aspects in the redistribution of forces in the els model enables us to construct the recursion relations ( e.g. , eq . [ rec - u ] ) which capture well the failure dynamics .we find that the eventual statistics , governed by the fixed points for , the average strength of the bundle , essentially shows a normal critical behavior : order parameter , breakdown susceptibility and relaxation time with and for a bundle of fibers , where subscripts and refers to post and pre critical cases respectively .the statistics of fluctuations over these average behavior , given by the avalanche size distributions with for discrete load increment and for quasi - static load increment in such els cases .the critical stress of the bundle is of course nonuniversal and its magnitude depends on the fiber strength distribution . for the lls model , we essentially find ( see section iv.a ) , the critical strength of the fiber bundle which vanishes in the macroscopic system size limit . the avalanche size distribution is exponential for such cases . for range - dependent redistribution of load ( see section iv.b ) one recovers the finite value of and the els - like mean field behavior for its failure statistics. extensions of the model to capture creep and fatigue behavior of composite materials are discussed in section v.a .precursors of global failure are discussed in section v.b .it appears , a detailed knowledge of the critical behavior of the model can help very precise determination of the global failure point from the well defined precursors .section v.c provides a rather cursory review of models of fiber reinforced composites .these models go far beyond the simple fiber bundle model in complexity and represent the state of the art of theoretical approaches to this important class of materials . 
However, as complicated as these models are, the philosophy of the fiber bundle model is still very much present. Finally, we discussed a few extensions of the model to failures in communication networks, traffic jams and earthquakes in Section V.D. As discussed here in detail, the fiber bundle model not only gives an elegant and profound solution of the dynamic critical phenomena of failure in disordered systems, with the associated universality classes, but also offers the first solution for the entire linear and non-linear stress-strain behavior of a material up to its fracture or rupture point. Although the model was introduced at about the same time as the Ising model for static critical phenomena, it is only now that the full (mean-field) critical dynamics of the fiber bundle model has been solved. Apart from this, as already discussed, several aspects of the fluctuations in this model are now well understood. Even from this specific point of view, the model is not only intuitively very attractive, its behavior is also extremely rich and intriguing. It would be surprising if it did not offer new profound insights into failure phenomena in the future. We thank P. Bhattacharyya and P. C. Hemmer for important collaborations on different parts of this work. We acknowledge financial support from the Norwegian Research Council through grant no. NFR 177958/V30. One of us thanks SINTEF Petroleum Research for partial financial help and moral support toward this work.
The fiber bundle model describes a collection of elastic fibers under load. The fibers fail successively and, for each failure, the load distribution among the surviving fibers changes. Even though very simple, this model captures the essentials of failure processes in a large number of materials and settings. We present here a review of the fiber bundle model with different load redistribution mechanisms from the point of view of statistics and statistical physics rather than materials science, with a focus on concepts such as criticality, universality and fluctuations. We discuss the fiber bundle model as a tool for understanding phenomena such as creep and fatigue, how it is used to describe the behavior of fiber reinforced composites, and how it can model, e.g., network failure, traffic jams and earthquake dynamics.
there are several motives to find the explicit form of conserved densities of differential - difference equations ( ddes ) .the first few conservation laws have a physical meaning , such as conservation of momentum and energy .additional ones facilitate the study of both quantitative and qualitative properties of solutions .furthermore , the existence of a sequence of conserved densities predicts integrability .yet , the nonexistence of polynomial conserved quantities does not preclude integrability .indeed , integrable ddes could be disguised with a coordinate transformation in ddes that no longer admit conserved densities of polynomial type .another compelling argument relates to the numerical solution of partial differential equations ( pdes ) . in solving integrable discretizations of pdes, one should check that conserved quantities indeed remain constant .in particular , the conservation of a positive definite quadratic quantity may prevent nonlinear instabilities in the numerical scheme .the use of conservation laws in pde solvers is well - documented in the literature .several methods to test the integrability of ddes and for solving them are available .solution methods include symmetry reduction and solving the spectral problem on the lattice . adaptations of the singularity confinement approach , the wahlquist - estabrook method , and symmetry techniques allow one to investigate integrability .the most comprehensive integrability study of nonlinear ddes was done by yamilov and co - workers ( see e.g. ) .their papers provide a classification of semi - discrete equations possessing infinitely many local conservation laws . using the formal symmetry approach ,they derive the necessary and sufficient conditions for the existence of local conservation laws , and provide an algorithm to construct them . in , we gave an algorithm to compute conserved densities for systems of nonlinear evolution equations .the algorithm is based on the concept of invariance of equations under dilation symmetries .therefore , it is inherently limited to polynomial densities of polynomial systems .recently an extension of the algorithm towards ddes was outlined in . herewe present a full description of our algorithm , which can be implemented in any computer algebra language .we also provide details about our software package * diffdens.m * in _ mathematica _ , which automates the tedious computation of closed - form expressions for conserved densities . following basic definitions in section 2 ,the algorithm is given in section 3 . to illustrate the method we use the toda lattice , a parameterized toda lattice , and the discretized nonlinear schrdinger ( nls ) equation . in section 4, we list results for an extended lotka - volterra equation , a discretized modified kdv equation , a network equation , and some other lattices .the features , scope and limitations of our code * diffdens.m * are described in section 5 , together with instructions for the user . in section 6, we draw some conclusions .consider a system of ddes which is continuous in time , and discretized in the single space variable , where and are vector dynamical variables with any number of components .for simplicity of notation , the components of will be denoted by etc .we assume that is polynomial with constant coefficients .if ddes are non - polynomial or of higher order in , we assume that they can be recast in the form ( [ multisys ] ) . 
a local _ conservation law _is defined by which is satisfied on all solutions of ( [ multisys ] ) .the function is the _ conserved density _ and is the associated _ flux ._ both are assumed to be polynomials in and its shifts . obviously , and this telescopic series vanishes for a bounded periodic lattice or a bounded lattice resting at infinity . in that case , is constant in time .so , we have a conserved quantity .consider the one - dimensional toda lattice where is the displacement from equilibrium of the unit mass under an exponential decaying interaction force between nearest neighbors . with the change of variables , lattice ( [ orgtoda ] ) can be written in polynomial form system ( [ todalatt ] ) is completely integrable .the first two density - flux pairs can be easily be computed by hand .we introduce a few concepts that will be used in our algorithm .let denote the _ shift - down _ operator and the _ shift - up _ operator .both are defined on the set of all monomials in and its shifts .if is such a monomial then and for example , and it is easy to verify that compositions of and define an _ equivalence relation _ on monomials .simply stated , all shifted monomials are _ equivalent _ , e.g. this equivalence relation holds for any function of the dependent variables , but for the construction of conserved densities we will apply it only to monomials . in the algorithmwe will use the following _ equivalence criterion _ : if two monomials and are equivalent , then ] with the _ main _ representative of an equivalence class is the monomial of that class with as _ lowest _ label on ( or ) . for example , is the main representative of the class with elements etc .lexicographical ordering is used to resolve conflicts .for example , ( not ) is the main representative in the class with elements etc .scaling invariance , which results from a special lie - point symmetry , is an intrinsic property of many integrable nonlinear pdes and ddes . indeed , observe that ( [ todalatt ] ) , and the couples and in ( [ todapair ] ) ( after inserting them in ( [ conslaw ] ) ) , are invariant under the dilation ( scaling ) symmetry where is an arbitrary parameter .stated differently , corresponds to one derivative with respect to denoted by similarly , our three - step algorithm exploits the symmetry ( [ symtoda ] ) to find conserved densities ._ step 1 : determine the weights of variables _ the _ weight _ , of a variable is by definition equal to the number of derivatives with respect to the variable carries .weights are non - negative , rational , and independent of without loss of generality we set the _ rank _ of a monomial is defined as the total weight of the monomial .for now we assume that all the terms ( monomials ) in a particular equation have the same rank .this property is called _ uniformity in rank_. the uniformity in rank condition leads to a system of equations for the unknown weights .different equations in the vector equation ( [ multisys ] ) may have different ranks . requiring uniformity in rank for each equation in ( [ todalatt ] )allows one to compute the weights of the dependent variables .indeed , yields which is consistent with ( [ symtoda ] ) ._ step 2 : construct the form of the density _this step involves finding the building blocks ( monomials ) of a polynomial density with prescribed rank .all terms in the density must have the same rank . 
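The equivalence-class bookkeeping introduced above is simple to implement. The toy sketch below (Python; independent of the Mathematica package described later) stores a monomial as a mapping (component, shift) -> exponent, applies up- and down-shifts, and canonicalizes by shifting the lowest index to n; the lexicographic tie-breaking between different components is omitted for brevity.
....
# A monomial such as u_n * u_{n+2} * v_{n-1} is stored as
#   {('u', 0): 1, ('u', 2): 1, ('v', -1): 1}.

def shift(mono, k):
    """Apply the up-shift operator k times (negative k means the down-shift D)."""
    return {(c, s + k): e for (c, s), e in mono.items()}

def main_representative(mono):
    """Shift the monomial so that its lowest index becomes n (offset 0)."""
    low = min(s for (_, s) in mono)
    return shift(mono, -low)

def equivalent(m1, m2):
    """Two monomials are equivalent iff they differ by an overall shift."""
    return main_representative(m1) == main_representative(m2)

m1 = {('u', -1): 1, ('u', 1): 1}     # u_{n-1} u_{n+1}
m2 = {('u', 1): 1, ('u', 3): 1}      # u_{n+1} u_{n+3}
print(main_representative(m1))       # {('u', 0): 1, ('u', 2): 1}, i.e. u_n u_{n+2}
print(equivalent(m1, m2))            # True
....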
since we may introduce parameters with weights ( see example 6 ), the fact that the density will be a sum of monomials of uniform rank does not necessarily imply that the density must be uniform in rank with respect to the dependent variables .let be the list of all the variables with positive weights , including parameters with weight .the following procedure is used to determine the form of the density of rank : * form the set of all monomials of rank or less by taking all appropriate combinations of different powers of the variables in . * for each monomial in , introduce the appropriate number of derivatives with respect to so that all the monomials exactly have weight . gather in set all the terms that result from computing the various derivatives . *identify the monomials that belong to the same equivalence classes and replace them by the main representatives . call the resulting simplified set , which consists of the _ building blocks _ of the density with desired rank .* linear combination of the elements in with constant coefficients gives the form of polynomial density of rank continuing with ( [ todalatt ] ) , we compute the form of the density of rank from we build next , introduce -derivatives , so that each term exactly has rank thus , using ( [ todalatt ] ) , gather the resulting terms in set since and in , is replaced by and by hence , we obtain linear combination of the monomials in gives the form of the density : _ step 3 : determine the unknown coefficients in the density _the following procedure simultaneously determines the constants and the form of the flux : * compute and use ( [ multisys ] ) to remove all the -derivatives . * regarding ( [ conslaw ] ) , the resulting expression must match the pattern .use the equivalence criterion to modify .the goal is to introduce the main representatives and to identify the terms that match the pattern . *the part that does not match the pattern must vanish identically for any combination of the components of and their shifts .this leads to a _ linear _ system in the unknowns .if has parameters , careful analysis leads to conditions on these parameters guaranteeing the existence of densities .see for a description of this compatibility analysis .* the flux is the first piece in the pattern . ] . in terms of main representatives ,\nonumber \\ & & + c_2 u_{n } u_{n+1 } v_{n } + [ c_2 u_{n-1 } u_n v_{n-1 } - c_2 u_{n } u_{n+1 } v_{n } ] \nonumber \\ & & + c_2 v_{n}^2 + [ c_2 v_{n-1}^2 - c_2 v_{n}^2 ] - c_3 u_n u_{n+1 } v_n - c_3 v_n^2 . \end{aligned}\ ] ] next , group the terms outside of the square brackets and move the pairs inside the square brackets to the bottom . rearrange the latter terms so that they match the pattern $ ] .hence , .\end{aligned}\ ] ] the first piece inside the square brackets determines the terms outside the square brackets must all vanish piece by piece , yielding the solution is since densities can only be determined up to a multiplicative constant , we choose and substitute this into ( [ formrho3toda ] ) and ( [ formflux3toda ] ) .the explicit forms of the density and the flux follow : to illustrate how the algorithm works for ddes with parameters , consider the parameterized toda lattice where and are _ nonzero _ parameters without weight .in it was shown that ( [ partodalatt ] ) is completely integrable if and only if ; but then ( [ partodalatt ] ) is ( [ todalatt ] ) . 
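Before moving on to the parameterized lattice, note that the densities just obtained can be verified numerically: on a periodic chain the fluxes telescope, so the lattice sums of the standard low-rank densities rho^(1) = u_n and rho^(2) = u_n^2/2 + v_n, together with the rank-3 density rho^(3) = u_n^3/3 + u_n(v_{n-1} + v_n) constructed above, must stay constant along solutions. The sketch below (Python, assuming NumPy and SciPy are available; it is not part of the diffdens.m package) integrates the Toda lattice with random initial data and prints these sums at the initial and final times.
....
import numpy as np
from scipy.integrate import solve_ivp

# Periodic Toda lattice:  u_n' = v_{n-1} - v_n,  v_n' = v_n (u_n - u_{n+1}).
N = 32
rng = np.random.default_rng(2)
y0 = np.concatenate([rng.normal(size=N), 0.5 + 0.2 * rng.random(N)])  # u_n, v_n > 0

def rhs(t, y):
    u, v = y[:N], y[N:]
    du = np.roll(v, 1) - v                 # v_{n-1} - v_n
    dv = v * (u - np.roll(u, -1))          # v_n (u_n - u_{n+1})
    return np.concatenate([du, dv])

def densities(y):
    u, v = y[:N], y[N:]
    rho1 = np.sum(u)
    rho2 = np.sum(0.5 * u**2 + v)
    rho3 = np.sum(u**3 / 3.0 + u * (np.roll(v, 1) + v))
    return rho1, rho2, rho3

sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)
for k, (a, b) in enumerate(zip(densities(sol.y[:, 0]), densities(sol.y[:, -1])), 1):
    print(f"rho^{k}: t=0 -> {a:.10f},  t=50 -> {b:.10f}")
....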
using our algorithm , one can easily compute the _ compatibility conditions _ for and , so that ( [ partodalatt ] ) admits a polynomial conserved densities of , say , rank 3 .the steps are the same as for ( [ todalatt ] ). however , ( [ todasystem ] ) must be replaced by a non - trivial solution will exist if and only if analogously , ( [ partodalatt ] ) has density of rank 1 if and density of rank 2 if only when will ( [ partodalatt ] ) have conserved densities of rank 3 : ignoring irrelevant shifts in these densities agree with the results in . in , ablowitz and ladik studied the following integrable discretization of the nls equation : where is the complex conjugate of instead of splitting into its real and imaginary parts , we treat and as independent variables and augment ( [ orgabllad ] ) with its complex conjugate . absorbing in the scale on since we have neither of the equations in ( [ abllad ] ) is uniform in rank . to circumvent this problemwe introduce an auxiliary parameter with weight , and replace ( [ abllad ] ) by this extra freedom allows us to impose uniformity in rank : which yields or , we show how to get the building blocks of the density of rank . in this case and the monomials and have already rank , so no derivatives are needed .the monomials and will have rank after introducing .there is no way for the remaining monomials and to have rank after differentiation with respect to .therefore , they are rejected .the remaining intermediate steps lead to although uniformity in rank is essential for the first two steps of the algorithm , after the second step , we may already set the computations now proceed as in the previous example .the trick of introducing one or more extra parameters with weights can always be attempted if any equation in ( [ multisys ] ) lacks uniformity in rank .we list some conserved densities of ( [ abllad ] ) : \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\ ! & + & c_2 [ { \textstyle{1 \over 3 } } u_n^3 v_{n+1}^3 + u_n u_{n+1 } v_{n+1 } v_{n+2 } ( u_n v_{n+1 } + u_{n+1 } v_{n+2 } + u_{n+2 } v_{n+3 } ) \nonumber \\ \!\!\!\!\!\!\!\!\!\ ! & + & \ !u_n v_{n+2 } ( u_n v_{n+1 } \!+\! u_{n+1 } v_{n+2 } ) \!+\! u_n v_{n+3 } ( u_{n+1 } v_{n+1 } \!+\! u_{n+2 } v_{n+2 } ) \!+\ !u_n v_{n+3}].\end{aligned}\ ] ] our results confirm those in .also , if defined on an infinite interval , ( [ orgabllad ] ) admits infinitely many independent conserved densities . although it is a constant of motion , we can not find the hamiltonian of ( [ orgabllad ] ) , ,\ ] ] for it has a logarithmic term , hu and bullough considered an extended version of the lotka - volterra equation : for ( [ extvolterra ] ) is the well - known lotka - volterra equation , for which the densities were presented in .we computed five densities of ( [ extvolterra ] ) for the case .the first three are : for , we also computed five densities of ( [ extvolterra ] ) .the first three are : we computed four densities of ( [ extvolterra ] ) for to save space we do not list them here .the integrability and other properties of ( [ extvolterra ] ) are discussed in . in , we found the following integrable discretization of the mkdv equation : we computed four densities of ( [ discmkdv ] ) . the first three are : the integrable nonlinear self - dual network equations can be written as : we computed the first four densities of ( [ dual ] ). 
the first three are shabat and yamilov studied the following integrable volterra lattice : with our program we computed the first four densities for this system : in , the following hamiltonian lattice is also listed : four densities of ( [ ravilsys ] ) are : describe the features , scope and limitations of our program * diffdens.m * , which is written in _ mathematica _ syntax .the program has its own _ menu _ interface with a dozen data files .users should have access to _ mathematica_. the code * diffdens.m * and the data files must be in the same directory . after launching _ mathematica _ , type .... in[1]:=< < diffdens.m .... to read in the code * diffdens.m * and start the program . via its menu interface , the program will prompt the user for answers .the density is available at the end of the computations . to view it in standard _notation , type ` rho ` . to display it in a more elegant subscript notation , type ` subscriptform[rho ] ` . to test systems that are not in the menu, one should prepare a data file in the format of the data files that are provided with the code .of course , the name for a new data file should not coincide with any name already listed in the menu , unless one intended to modify those data files .for the parameterized toda lattice ( [ partodalatt ] ) the data file reads : .... ( * start of data file d_ptoda.m with parameters . * ) ( * toda lattice with parameters aa and bb * ) u[1][n_]'[t]:= aa*u[2][n-1][t]-u[2][n][t ] ; u[2][n_]'[t]:= u[2][n][t]*(bb*u[1][n][t]-u[1][n+1][t ] ) ; noeqs = 2 ; name = " toda lattice ( parameterized ) " ; parameters = { aa , bb } ; weightpars = { } ; ( * the user may give the rank of rho * ) ( * and a name for the output file . * ) ( * rhorank = 3 ; * ) ( * myfile = " ptodar3.o " ; * ) ( * the user can give the weights of u[1 ] and u[2 ] , * ) ( * and of the parameters with weight ( if applicable ) . * ) ( * weightu[1 ] = 1 ; weightu[2 ] = 2 ; weight[aa ] = 1 ; * ) formrho = 0 ; ( * the user can give the form of rho .* ) ( * for example , for the density of rank 3 : * ) ( * formrho = c[1]*u[1][n][t]^3+c[2]*u[1][n][t]*u[2][n-1][t]+ c[3]*u[1][n][t]*u[2][n][t ] ; * ) ( * end of data file d_ptoda.m * ) .... a brief explanation of the lines in the data file now follows . `u[i][n_]'[t ] : = ... ; ` give the equation of the system in _ mathematica _ notation .1.5pt ` noeqs = 2 ; ` specify the number of equations in the system .1.5pt ` name = " toda lattice ( parameterized ) " ; ` pick a short name for the system under investigation .the quotes are essential .1.5pt ` parameters = { aa , bb } ; ` give the list of parameters in the system . if none , set _ parameters = \ { } _ .1.5pt ` weightpars = { } ; ` give the list of the parameters that are assumed to have weights .note that weighted parameters are _ not _ listed in _parameters _ , the latter is the list of parameters without weight .1.5pt ` ( * rhorank = 3 ; * ) ` optional .give the desired rank of the density , if less interactive use of the program is preferred ( batch mode ) .1.5pt ` ( * myfile = " ptodar3.o " ; * ) ` optional . 
give a name of the output file , again to bypass interaction with the program .1.5pt ` ( * weightu[1 ] = 1 ; weightu[2 ] = 2 ; * ) ` optional .specify a choice for _ some or all _ of the weights .the program then skips the computation of the weights , but still checks for consistency .particularly useful if there are several free weights and non - interactive use is preferred .1.5pt ` formrho = 0 ; ` if _ formrho _ is set to zero , the program will _ compute _ the form of 1.5pt .... formrho = c[1]*u[1][n][t]^3+c[2]*u[1][n][t]*u[2][n-1][t]+ c[3]*u[1][n][t]*u[2][n][t ] ; .... alternatively , one could give a form of ( here of rank 3 ) .the density must be given in expanded form and with coefficients c[i ] .if form of is given , the program skips both the computation of the weights and the form of the density .instead , the code uses what is given and computes the coefficients c[i ] .this option allows one , for example , to test densities from the literature .1.5pt anything within ` ( * ` and ` * ) ` is a comment and ignored by _mathematica_. once the data file is ready , pick the choice ` tt ) your system ` " in the menu .our program can handle systems of first order ddes that are polynomial in the dependent variables . only one independent variable ( )is allowed .no terms in the ddes should have coefficients that _ explicitly _ depend on or .the program only computes polynomial conserved densities in the dependent variables and their shifts , without explicit dependencies on or .theoretically , there is no limit on the number of ddes . in practice ,for large systems , the computations may take a long time or require a lot of memory .the computational speed depends primarily on the amount of memory . by design ,the program constructs only densities that are uniform in rank .the uniform rank assumption for the monomials in allows one to compute _ independent _ conserved densities piece by piece , without having to split linear combinations of conserved densities . due to the superposition principle ,a linear combination of conserved densities of unequal rank is still a conserved density .this situation arises frequently when parameters with weight are introduced in the ddes .the input systems can have one or more parameters , which are assumed to be nonzero .if a system has parameters , the program will attempt to compute the compatibility conditions for these parameters such that densities ( of a given rank ) exist .the assumption that all parameters in the given dde must be nonzero is essential . as a result of setting parameters to zero in a given system of ddes , the weights and the rank of might change .in general , the compatibility conditions for the parameters could be highly nonlinear , and there is no general algorithm to solve them .the program automatically generates the compatibility conditions , and solves them for parameters that occur linearly .grbner basis techniques could be used to reduce complicated nonlinear systems into equivalent , yet simpler , non - linear systems . for ddes with parameters andwhen the linear system for the unknown coefficients has many equations , the program saves that system and its coefficient matrix , etc ., in the file _worklog.m_. 
independent from the program , the worklog files can later be analyzed with appropriate _ mathematica _functions .the assumption that the ddes are uniform in rank is not very restrictive .if the uniform rank condition is violated , the user can introduce one or more parameters with weights .this also allows for some flexibility in the form of the densities .although built up with terms that are uniform in rank in the dependent variables and parameters , the densities do no longer have to be uniform in rank with respect to the dependent variables alone .conversely , when the uniform rank condition _ is _ fulfilled , the introduction of extra parameters ( with weights ) in the given dde leads to a linear combination of conservation laws , not to new ones . in cases where it is not clearwhether or not parameters with weight should be introduced , one should start with parameters without weight .if this causes incompatibilities in the assignment of weights ( due to non - uniformity ) , the program may provide a suggestion .quite often , it recommends that one or more parameters be moved from the list _ parameters _ into the list _ weightpars _ of weighted parameters . for systems with two or more free weights , the user will be prompted to enter values for the free weights .if only one weight is free , the program will automatically compute some choices for the free weight , and continue with the lowest integer or fractional value .the program selects this value for the free weight ; it is just one choice out of possibly infinitely many .if the algorithm fails to determine a suitable value , the user will be prompted to enter a value for the free weight .negative weights are not allowed .zero weights are allowed , but at least one of the dependent variables must have positive weight .the code checks this requirement , and if it is violated the computations are aborted .note that _ fractional weights _ and densities of _ fractional rank _ are permitted .our program is a tool in the search of the first half - dozen conservation laws .an existence proof ( showing that there are indeed infinitely many conservation laws ) must be done analytically .if our program succeeds finding a large set of independent conservation laws , there is a good chance that the system of ddes has infinitely many conserved densities .if the number of conserved densities is 3 or less , the dde may have other than polynomial conserved densities , or may not be integrable ( in the chosen coordinate representation ) .we offer the scientific community a _ mathematica _ package to carry out the tedious calculations of conserved densities for systems of nonlinear ddes .the code * diffdens.m * , together with several data and output files , is available via internet url .the existence of a large number of conservation laws is an indicator of integrability of the lattice .therefore , by generating the compatibility conditions , one can analyze classes of parameterized ddes and filter out the candidates for complete integrability .nal gkta thanks wolfram research , inc . for an internship .we acknowledge helpful comments from drs .b. herbst , s. mikhailov , w .- h .steeb , y. suris , p. winternitz , and r. yamilov .we also thank g. erdmann for his help with this project .hohler and k. olaussen , int .a 11 ( 1996 ) 1831 .shabat and r.i .yamilov , leningrad math .j. 2 ( 1991 ) 377 .hickernell , stud .appl . math .69 ( 1983 ) 23 .leveque , numerical methods for conservation laws ( birkhuser verlag , basel , 1992 ) .sanz - serna , j. 
comput .( 1982 ) 199 .d. levi and p. winternitz , j. math .34 ( 1993 ) 3713 .d. levi and o. ragnisco , lett .nuovo cimento 22 ( 1978 ) 691 .a. ramani , b. grammaticos and k.m .tamizhmani , j. phys . a : math . gen .25 ( 1992 ) l883 .b. deconinck , phys .lett . a 223 ( 1996 ) 45 .i. cherdantsev and r. yamilov , in : symmetries and integrability of difference equations , eds .d. levi , l. vinet and p. winternitz ( american mathematical society , providence , rhode island , 1996 ) 51 .d. levi and r. yamilov , j. math .38 ( 1997 ) 6648 .v.v . sokolov and a.b .shabat , in : soviet sci .4 ( harwood academic publishers , new york , 1984 ) 221 .r. yamilov , in : proc .workshop on nonlinear evolution equations and dynamical systems ( world scientific publishing , singapore , 1993 ) 423 .. gkta and w. hereman , j. symb .( 1997 ) 591 .. gkta and w. hereman .the program condens.m is available via internet url : http://www.mines.edu / fs_home / whereman/. m.j .ablowitz and j.f .ladik , j. math .17 ( 1976 ) 1011 .ablowitz and j.f .ladik , stud .55 ( 1976 ) 213 .ablowitz and b.m .herbst , siam j. appl .50 ( 1990 ) 339 .hu and r.k .bullough , j. phys .a : math . gen .30 ( 1997 ) 3635 .ablowitz and p.a .clarkson , solitons , nonlinear evolution equations and inverse scattering ( cambridge university press , cambridge , u.k . , 1991 ) .
An algorithm to compute polynomial conserved densities of polynomial nonlinear lattices is presented. The algorithm is implemented in _Mathematica_ and can be used as an automated integrability test. With the code *diffdens.m*, conserved densities are obtained for several well-known lattice equations. For systems with parameters, the code allows one to determine the conditions on these parameters so that a sequence of conservation laws exists.

Keywords: conservation law; integrability; semi-discrete; lattice
the world changes .this basic fact is expressed in the second law of thermodynamics : _ the entropy of a closed system never decreases _ : where the entropy is denoted by . in the 1870s ,ludwig boltzmann provided a description of the entropy relating the states of macro and micro - systems .he argued that any system would evolve toward the macrostate that is consistent with the largest possible number of microstates .the number of microstates and the entropy of the system are related by the fundamental formula : where jk is boltzmann s constant and is the volume of the phase - space that corresponds to the macrostate of entropy .the physical processes in any thermodynamical system are irreversible .consequently , any system always tends to its state of maximum entropy . if we consider the maximal system , the universe , the situation is not so clear . in the past , the universe was hotter and at some point matter and radiation were in thermal equilibrium ( i.e. in a state of maximum entropy ) ; then , how can entropy still increase if it was at a maximum at some past time ?some attemps were made to answer this question , invoking the expansion of the universe : the universe actually began in a state of maximum entropy , but because of the expansion , it was still possible for the entropy to continue growing .the main problem with this type of explanation is that the effects of gravity can not be neglected in the early universe .roger penrose suggested that entropy might be assigned to the gravitational field itself .though locally matter and radiation were in thermal equilibrium , the gravitational field should have been quite far from equilibrium , since gravity is an atractive force and the universe was initially structureless .consequently , the early universe was globally out of equilibrium , being the total entropy dominated by the entropy of the gravitational field . in absence of a theory of quantum gravity , a statistical measure of the entropy of the gravitational field is not possible .the study of the gravitational properties of macroscopic systems through classic general invariants , however , might be a suitable approach to the problem .penrose proposed that the weyl curvature tensor can be used to specify the gravitational entropy .the weyl tensor is a 4-rank tensor that contains the independent components of the riemann tensor not captured by the ricci tensor .it can be considered as the traceless part of the riemann tensor .the weyl tensor can be obtained from the full curvature tensor by substracting out various traces .the riemann tensor has 20 independent components , 10 of which are given by the ricci tensor and the remaining by the weyl tensor .the weyl tensor is given by : \beta}-g_{\beta[\gamma}r_{\delta]\alpha})+\\ \nonumber & + & \frac{2}{(n-1)(n-2 ) } r~g_{\alpha[\gamma}g_{\delta]\beta},\\\end{aligned}\ ] ] where is the riemann tensor , is the ricci tensor , is the ricci scalar , ] . outside this intervalthe horizon does not exist and the singularity is naked . according to the cosmic censorship conjecture , singularities only occur if they are hidden by an event horizon .if this conjecture is valid , the value of the charge has to be restricted . 
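For the Reissner-Nordström case, the ingredients of the estimator can be evaluated in closed form. The sketch below assumes the standard invariants C_{abcd}C^{abcd} = 48(Mr - Q^2)^2/r^8 and R_{abcd}R^{abcd} = (48 M^2 r^2 - 96 M Q^2 r + 56 Q^4)/r^8, together with the ratio P^2 = C^2/K used by Rudjord et al.; these closed forms are an assumption introduced here for illustration, not quoted from the text. Far from the hole, and in the Schwarzschild limit Q -> 0, the ratio tends to P = 1.

....
# Sketch (assumed standard Reissner-Nordstrom invariants, geometric units G=c=1):
# evaluate the Weyl and Kretschmann scalars, the ratio P^2 = C^2/K entering the
# entropy estimator, and the horizon radii r_pm = M +- sqrt(M^2 - Q^2).
import numpy as np

def rn_invariants(r, M, Q):
    weyl2 = 48.0 * (M * r - Q**2) ** 2 / r**8                     # C_abcd C^abcd
    kret = (48.0 * M**2 * r**2 - 96.0 * M * Q**2 * r + 56.0 * Q**4) / r**8
    return weyl2, kret

M, Q = 1.0, 0.6
if abs(Q) > M:
    raise ValueError("naked singularity: |Q| must not exceed M for horizons to exist")
r_plus = M + np.sqrt(M**2 - Q**2)
r_minus = M - np.sqrt(M**2 - Q**2)

for r in (r_minus, r_plus, 5.0, 50.0):
    c2, k = rn_invariants(r, M, Q)
    print(f"r = {r:6.3f}:  P = {np.sqrt(c2 / k):.6f}")
# Far from the hole (and exactly, for Q -> 0) P -> 1, the Schwarzschild value.
....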
, ., width=264,height=188 ] figure [ 2drn ] shows a plot of the entropy density for a fixed value of the charge .the function is not defined at and it tends asymptotically to zero for large values of the radius .this is actually what it is expected of a good description of the entropy density for reissner - nordstrm black holes : can not be defined at the origin because of the presence of a singularity , and goes to zero as the field decreases .furthermore , one of the minima of the function is at where the inner cauchy horizon is located .this horizon is near two relative maxima and an other relative minimum of the function .a possible explanation is that in the case of reissner - nordstrm black hole , near the cauchy horizon , there is a small region of space - time where the curvature is high and potentially unstable against external perturbations , as shown by poisson and israel .the behaviour of the entropy density for r_{\rm{s}}\leq r \leq r_{{\rm l}} r_{\rm{l } } \leq r ] ., .,width=264,height=188 ] just as the schwarzschild solution is the unique vacuum solution of einstein s field equations ( as result of israel s theorem ) , the kerr metric is the unique stationary axisymmetric vacuum solution ( carter - robinson theorem ) .the calculation of the scalars yields : since , the result of the calculation of the entropy in the outer event horizon is equal to the area of this horizon .the metric in this horizon takes the form : }{\rho^2}\sin^2\theta d\chi^2+\rho^2d\theta^2.\end{aligned}\ ] ] the function can be written as , where and are the radii of the inner and outer horizon respectively . from equation[ sup ] , replacing , we obtain : finally , the calculation of the area gives : in the same way we can make the calculation for : the entropy for both horizons takes the simple form : for , the correct schwarzschild limit is obtained . in the case of axisymmetric space - times ,it is not possible to calculate the spatial metric ( see equation [ 3dst ] ) because of the presence of the metric coefficient : as the body is rotating the spatial position of each event in the kerr space - time depends on time .the covariant divergence , then , is obtained from the determinant of the full metric : where is the determinant of the metric and is given by : ^ 2 \sin^2\theta.\ ] ] by introducing equation [ g ] into [ for ] , we calculate the entropy density .the result is : we can see that the entropy density does not depend explicitly on the mass of the black hole .a 3-dimensional plot of equation [ densitykerr ] is shown in figure [ densitykerr1 ] for and different values of the angular momentum .the entropy density is everywhere well defined except for and , where the ring singularity is located . at large distances , , the function goes to zero , as expected . , and .,width=340,height=188 ]the kerr - newman metric of a charged spinning black hole is the most general black hole solution .it was found by newman et al . in 1965 . as in the case of the kerr black hole , in order to avoid the coordinate singularities in the event horizons , we make the following change of coordinates : where : the metric takes the matrix form : here , , and : }{\rho^2}\sin^2\theta.\end{aligned}\ ] ] the kerr - newman solution depends on three parameters : the mass , the angular momentum and the charge .we show , in figure [ rayonkn ] , that the radius of the outer horizon is well defined for ] .this is clearly a combination of the previous cases . 
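As a small check of the horizon structure just described, the following sketch evaluates the Kerr-Newman horizon radii r_pm = M +- sqrt(M^2 - a^2 - Q^2) and the horizon areas A_pm = 4 pi (r_pm^2 + a^2); these are standard closed forms assumed here for illustration. The Kerr and Reissner-Nordström cases are recovered for Q = 0 and a = 0 respectively, and the horizons disappear when a^2 + Q^2 > M^2.

....
# Sketch: Kerr-Newman horizon radii and areas (standard results, geometric units).
# The entropy assigned to each horizon by the estimator discussed above is
# proportional to its area, S_pm = k_s * A_pm, with k_s an undetermined constant.
import numpy as np

def kn_horizons(M, a, Q):
    disc = M**2 - a**2 - Q**2
    if disc < 0:
        return None                        # no horizons: naked singularity
    r_plus, r_minus = M + np.sqrt(disc), M - np.sqrt(disc)
    area = lambda r: 4.0 * np.pi * (r**2 + a**2)
    return r_plus, r_minus, area(r_plus), area(r_minus)

M = 1.0
for a, Q in [(0.0, 0.0), (0.9, 0.0), (0.0, 0.6), (0.6, 0.6), (0.8, 0.7)]:
    res = kn_horizons(M, a, Q)
    if res is None:
        print(f"a={a}, Q={Q}: a^2 + Q^2 > M^2, horizons do not exist")
    else:
        rp, rm, Ap, Am = res
        print(f"a={a}, Q={Q}: r_+={rp:.4f}, r_-={rm:.4f}, A_+={Ap:.4f}, A_-={Am:.4f}")
# a = Q = 0 reproduces Schwarzschild: r_+ = 2M and A_+ = 16*pi*M^2.
....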
, ., width=302,height=226 ] the calculation of the determinant gives : which has the same form as in the kerr space - time . since the kerr - newman space - time is a non - vacuum solution of einstein s field equations because of the presence of an electromagnetic field , the weyl and + kretschmann scalars are not equal .the calculation of the scalar yields : where : and , we compute the entropy density using equation [ for ] .the function we find , however , is singular for certain values of the radius : ,\ ] ] since , the kretschmann scalar , is a polynomial of order six with at least one positive real root .for example , if we solve the equation for , , , and , the roots of the kretschmann scalar are : since the only difference between kerr space - time and kerr - newman space - time is the scalar ( the determinant is the same ) , we conclude that rudjorn et .al.s proposal for the entropy scalar ( see equation [ p1 ] ) can not be used to calculate the entropy density in the kerr - newman space - time .hence , we propose this alternative definition for : for a kerr - newman black hole equation [ p2 ] yields : where : we compute the entropy density using equation [ for ] .the result is : and : in figures [ kn13d ] and [ kn23d ] it is shown the plots of the entropy density of kerr - newman black hole as a function of the radial coordinate and the angular momentum or the charge respectively . in both cases the entropy density diverges for and as expected . , , and .,width=302,height=188 ] , , and .,width=340,height=188 ] figure [ tabla ] shows a table with values of the entropy density at different , in the core of the object and in both horizons .it can be seen that only in the equatorial plane the entropy density diverges , where the ring singularity is located . for ,the entropy density is everywhere well defined .[ cols="^,^,^,^",options="header " , ] in order to check if our definition of the scalar ( equation [ p2 ] ) can be used in other non spherically symmetric space - times , we will test it in kerr black holes .the calculation of the scalar by equation [ p2 ] yields : where : and the entropy density gives : . here , , , and .,width=302,height=188 ] .here , , , and .,width=302,height=188 ] towards the center of the object the entropy density goes to zero , except for and where the ring singularity is located ( see figures [ kerrn1 ] and [ kerrn2 ] ) .we conclude that this new scalar also gives a reasonable description of the gravitational entropy density in the kerr space - time .we remark that in axisymmetric space - times , the vector field could have an angular component : where is a unitary vector . according to equation [ den ] ,the entropy density then takes the form : \right|.\ ] ] the second term of this equation is singular for and , that is , at the poles of both kerr and kerr - newman black holes .the vector field is not regular over the whole horizon and consequently can not be used as a reliable tool to estimate the gravitation entropy .this a direct consequence of the coupling of space and time components in rotating systems .the entropy density for the spherically symmetric compact objects , reissner - nordstrm black holes and wormholes , shows the expected features for a good measure of the gravitational entropy density in terms of rudjord et .al.s proposal : entropy density is everywhere well - defined , except in reissner - nordstrm black holes where the singularity is located , and goes to zero as the field decreases . 
In the case of a simple wormhole space-time, at the throat because of the presence of exotic matter. For axisymmetric space-times, however, Rudjord et al.'s formulation has to be modified because the entropy density presents divergences. In this work we conclude that, in the case of rotating bodies, the entropy density has to be calculated from the full metric form. Furthermore, the scalar is redefined as the Weyl scalar. These changes give a reasonable description of the gravitational entropy density: the function is well defined everywhere except where the space-time is singular, and it tends asymptotically to zero outside the black hole. We remark, however, that classical estimators such as those discussed here should be tested in cosmological scenarios in order to determine their validity in more general space-times. A complete characterization of the gravitational entropy will only be possible with a quantum theory of gravitation, which should be a singularity-free theory. In the meantime, classical estimators can be helpful for some applications. The general validity of the second law of thermodynamics in black hole interiors remains an open issue that will be addressed in a forthcoming work.

Boltzmann, L.: Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respective den Sätzen über das Wärmegleichgewicht. _Wiener Berichte_ *76*, 373-435 (1877)
Pure thermodynamical considerations to describe the entropic evolution of the universe seem to violate the second law of thermodynamics. This suggests that the gravitational field itself has entropy. In this paper we expand recent work done by Rudjord, Grøn and Hervik, where they suggested a method to calculate the gravitational entropy in black holes based on the so-called 'Weyl curvature conjecture'. We study the formulation of an estimator for the gravitational entropy of Reissner-Nordström, Kerr, and Kerr-Newman black holes, and a simple case of a wormhole. We calculate in each case the entropy of both horizons and the interior entropy density. Then, we analyse whether the functions obtained have the expected behaviour for an appropriate description of the gravitational entropy density.
investigating many applied problems , we can consider a second - order parabolic equation with mixed derivatives as the basic equation .an example is diffusion processes in anisotropic media . in desining various approximations for the corresponding boundary - value problems , we focus on the inheritance of the primary properties of the differential problem during the construction of the discrete problem .locally one - dimensional difference schemes are obtained in a simple enough way for second - order parabolic equations without mixed derivatives .mixed derivatives complicate essentially the construction of unconditionally stable schemes of splitting with respect to the spatial variables for parabolic equations with variable derivatives , even for two - dimensional problems . in some problems, it is convenient to use the fluxes ( derivatives with respect to a spatial direction ) as unknow quantities .this idea may be implemented in the most simple manner for one - dimensional problems . to introduce fluxes ,mixed and hybrid finite elements are applied . the original parabolic equation with mixed derivativesmay be written as a system of equations for the fluxes .the basic peculiarity of this system is that the time derivatives for the fluxes in separate equations are interconnected to each other . for the problem in the flux variables ,unconditionally stable schemes with weights are developed .locally one - dimensional schemes are proposed for problems without mixed derivatives .in a bounded domain , the unknown function , , satisfies the equation assume that the coefficients satisfy the conditions for any with constant .consider the boundary value problem for equation ( [ 2.1 ] ) with homogeneous dirichlet boundary conditions and the initial conditions in the form we introduce a vector quantity ( the index denotes transposition ) such that where is a square matrix ( ) with elements . using this notation , equation ( [ 2.1 ] ) may be written as we can write the above problem ( [ 2.3])([2.5 ] ) in the operator form .scalar functions are considered in the hilbert space with the scalar product and norm defined by the rules for vector functions , we use the hilbert space , where taking into account ( [ 2.2 ] ) , we can treat the matrix as a linear , bounded , self - adjoint , and positive definite operator in : where is the identity operator in . suppose ,i.e ., on the set of functions that satisfy the boundary conditions ( [ 2.3 ] ) , for the gradient and divergence operators , we have it follows from this that , i.e. 
, in the above notation ( [ 2.7])([2.9 ] ) , from ( [ 2.3])([2.5 ] ) , we obtain the cauchy problem for the system of operator - differential equations for the problem ( [ 2.1])([2.4 ] ) , the following equation corresponds wich is supplemented by the initial condition ( [ 2.12 ] ) .taking into account that it is possible to eliminate from the system of equations ( [ 2.10 ] ) , ( [ 2.11 ] ) that gives in view of ( [ 2.11 ] ) and ( [ 2.12 ] ) , we put in constructing locally one - dimensional schemes ( schemes based on splitting with respect to spatial directions ) , we focus on the coordinatewise formulation of equations ( [ 2.10 ] ) , ( [ 2.11 ] ) , ( [ 2.14 ] ) ) and ( [ 2.14 ] ) .let then the basic system of equations ( [ 2.10 ] ) , ( [ 2.11 ] ) takes the form the equation ( [ 2.13 ] ) for is reduced to for the flux components ( see ( [ 2.14 ] ) ) , we obtain the equations of the system ( [ 2.19 ] ) are connected with each other , and , moreover , the time derivatives are interconnected .the problem ( [ 2.12 ] ) , ( [ 2.18 ] ) seems to be much easier we have a single equation instead of the system of equations. nevertheless , some possibilities to design locally one - dimensional schemes for the system of equations are still there . herewe present elementary a priori estimates for the solution of the above cauchy problems for operator - differential equations , which will serve us as a checkpoint in the study of discrete problems .multiplying equation ( [ 2.13 ] ) scalarly in by , we obtain taking into account ( [ 2.7 ] ) and we arrive at this inequality implies the estimate for the solution of the problem [ 2.12 ] ) , ( [ 2.18 ] ) .now we investigate the problem ( [ 2.14 ] ) , ( [ 2.15 ] ) . by the properties ( [ 2.7 ] ) of the operator , for , we have in view of ( [ 2.21 ] ) , we define the hilbert space , where the scalar product and norm are multiplying equation ( [ 2.15 ] ) scalarly in by , we obtain in view of we arrive at a priori estimate for the solution of the problem ( [ 2.14 ] ) , ( [ 2.15 ] ) .we conduct a detailed analysis using a model two - dimensional parabolic problem in a rectangle in , we introduce a uniform rectangular grid and let be the set of interior nodes ( ) . on this grid ,scalar grid functions are given .for grid functions , we define the hilbert space with the scalar product and norm to determine vector grid functions , we have two main possibilities. the first approach deals with specifying vector functions on the same grid as it used for scalar functions . the second possibility ,which is traditionally widely used , e.g. , in computational fluid dynamics , is based on the grid arrangement , where each individual component of a vector quantity is referred to its own mesh . herewe restrict ourselves to the use of the same grid for all quantities , in particular , for setting the coefficients .consider approximations for the differential operators we apply the standard index - free notation from the theory of difference schemes for the difference operators : if we set the coefficients of the elliptic operator at the grid points , then more opportunities are available in approximation of operators with mixed derivatives . 
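To make this concrete, the sketch below shows one standard second-order central-difference approximation of a single mixed-derivative term; it is an illustration only (the coefficient and the test function are made up), and the symmetrized variants actually adopted are discussed next.

....
# Sketch: one standard second-order central-difference approximation of the
# mixed-derivative term  d/dx1( k(x1,x2) du/dx2 ), tested on a smooth function.
# (Illustrative only; the coefficient k and the test function are made up.)
import numpy as np

def mixed_derivative(u, k, h1, h2):
    # du/dx2 by central differences (interior points in x2)
    du_dx2 = (u[:, 2:] - u[:, :-2]) / (2.0 * h2)
    flux = k[:, 1:-1] * du_dx2
    # d/dx1 of the flux by central differences (interior points in x1)
    return (flux[2:, :] - flux[:-2, :]) / (2.0 * h1)

def exact(x, y):            # analytic d/dx1( (1+xy) * d/dy sin(x)cos(y) )
    return -(y * np.sin(x) * np.sin(y) + (1.0 + x * y) * np.cos(x) * np.sin(y))

for n in (20, 40, 80):
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.linspace(0.0, 1.0, n + 1)
    h1, h2 = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    u = np.sin(X) * np.cos(Y)
    k = 1.0 + X * Y
    approx = mixed_derivative(u, k, h1, h2)
    err = np.max(np.abs(approx - exact(X[1:-1, 1:-1], Y[1:-1, 1:-1])))
    print(f"n = {n:3d}:  max error = {err:.3e}")
....

The roughly four-fold error reduction per mesh refinement is consistent with second-order accuracy.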
as the basic discretization , we emphasize instead of , we can take their linear combination .in particular , it is possible to put in the general case , we set the introduced discrete operators approximate the corresponding differential operators with the second order : where .we define a grid subset , where the corresponding components of vector quantities are defined .let and for the grid vector variables , instead of two components , we will use four components , putting for the grid functions defined on grids , we define the hilbert spaces , where for the grid vector functions in , we set now we construct the discrete analogs of differential operators introduced according to ( [ 2.8 ] ) , ( [ 2.9 ] ) . using the above difference derivatives in space , we set so that .similarly , we define , where thus for the adjoint operator , we have and the above discrete operators approximate the corresponding differential operators with the first order : for the operator - differential equation ( [ 2.13 ] ) , we put into the correspondence the equation where , e.g , . for equation ( [ 3.14 ] ) , we consider the cauchy problem the construction of the operator is associated with the approximations ( [ 3.1])([3.5 ] ) .the most important properties are self - adjointness and positive definiteness of the operator .the equation ( [ 3.14 ] ) approximates the differential equation ( [ 2.13 ] ) with the second order .the system of equations ( [ 2.10 ] ) , ( [ 2.11 ] ) is attributed to the system for the flux problem ( [ 2.14 ] ) , ( [ 2.15 ] ) , we put into the correspondence the problem similarly to ( [ 2.20 ] ) , we prove the following estimate for the solution of the problem ( [ 3.14 ] ) , ( [ 3.15 ] ) : for the estimate ( [ 2.22 ] ) , we put into the correspondence the estimate for the solution of the problem ( [ 3.23 ] ) , ( [ 3.24 ] ) .we introduce a uniform grid in time with a step and let , . for numerical solving the problem ( [ 3.14 ] ) , ( [ 3.15 ] ) , we apply the standard two - level scheme with weights , where equation ( [ 3.14 ] ) is approximated by the scheme where and , e.g. , . taking into account ( [ 3.15 ] ) , the operator - difference equation ( [ 4.1 ] ) is supplemented with the initial condition the truncation error of the difference scheme ( [ 4.1])([4.3 ] ) is .the study of the difference scheme is conducted using the general theory of stability ( well - posedness ) for operator - difference schemes .let us formulate a typical result on stability of difference schemes with weights for an evolutionary equation of first order . from ( [ 4.4 ] ) , in the standard way, we get the desired stability estimate which may be treated as a direct discrete analogue of the a priori estimate ( [ 2.20 ] ) for the solution of the differential problem ( [ 2.12 ] ) , ( [ 2.18 ] ) . schemes with weights for a system of semi - discrete equations ( [ 3.21 ] ) , ( [ 3.22 ] ) are constructed in a similar way .we put the scheme ( [ 4.7 ] ) , ( [ 4.7 ] ) is equivalent to the scheme ( [ 4.1 ] ) . in view of theorem 1 , it is stable under the restriction , and for the solution of difference problem ( [ 4.2 ] ) , ( [ 4.7 ] ) , ( [ 4.7 ] ) , the a priori estimate ( [ 4.4 ] ) holds .the computational implementation of the unconditionally stable operator - difference schemes ( [ 4.1])([4.3 ] ) for the parabolic equation ( [ 2.1 ] ) with mixed derivatives is based on solving discrete elliptic problems at every time step . 
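The two properties stressed above, self-adjointness and positive definiteness of the discrete operator, are easy to confirm numerically for a model assembly. The sketch below is only an illustration: it uses a constant symmetric coefficient matrix and a plain forward-difference gradient with homogeneous Dirichlet data, not the exact stencils introduced above.

....
# Sketch: assemble a discrete analogue of  A u = -div( K grad u )  with a
# constant coefficient matrix K = [[2, 1], [1, 2]] (made-up values, det K > 0),
# using forward differences and homogeneous Dirichlet data, and check A = A^T, A > 0.
import numpy as np

n = 8                        # subintervals per direction; m = n - 1 interior nodes
m, h = n - 1, 1.0 / n
E = (np.eye(n, m) - np.eye(n, m, k=-1)) / h   # 1-D forward difference (Dirichlet)
I = np.eye(m)
D1 = np.kron(E, I)           # discrete d/dx1 (lexicographic ordering of unknowns)
D2 = np.kron(I, E)           # discrete d/dx2

k11, k22, k12 = 2.0, 2.0, 1.0
A = k11 * D1.T @ D1 + k22 * D2.T @ D2 + k12 * (D1.T @ D2 + D2.T @ D1)

print("symmetric:", np.allclose(A, A.T))
print("smallest eigenvalue:", np.linalg.eigvalsh(A).min())   # strictly positive
....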
for the problem ( [ 3.14 ] ) , ( [ 3.15 ] ) , it seems more convenient to employ additive schemes ( operator - splitting schemes ) that provide the transition to a new time level using simpler problems associated with the inversion of the individual operators rather then their combinations . by the nature of the operators , in this case, we speak of locally one - dimensional schemes .the issues of designing unconditionally stable locally one - dimensional schemes for a parabolic equation without mixed derivatives have been studied in detain . for parabolic equations with mixed derivatives ,locally one - dimensional schemes were constructed in several papers ( see , e.g. , ) .strong results on unconditional stability of operator - splitting schemes can be proved only in a uninteresting case with pairwise commutative operators ( the equation with constant coefficients ) . for our problems ( [ 2.1])([2.4 ] ), the construction of locally one - dimensional schemes requires separate consideration .let us investigate approaches to constructing locally one - dimensional schemes for the problem ( [ 3.23 ] ) , ( [ 3.24 ] ) .the computational implementation of the scheme with weights ( [ 4.9 ] ) , ( [ 4.10 ] ) , which is unconditionally stable for , is associated with solving the system of difference equations for four components of the vector .the equations of this system are strongly coupled to each other , and this interconnection does exist not only for the spatial derivatives ( operators , ) , but also for the time derivatives ( ) .thus , we need to resolve the problem of splitting for the operator at the time derivative , too. the simplest case is splitting of the spatial operator without coupling the time derivatives .such a technique is directly applicable for the construction of locally one - dimensional schemes for parabolic equations without mixed derivatives , where in equation ( [ 2.1 ] ) .assume that i.e. , is the diagonal part of . for numerical solving the problem ( [ 3.23 ] ) , ( [ 3.24 ] ) , we employ the difference scheme , where only the diagonal part of is shifted to the upper time level . in our notation , we set with the initial conditions according to ( [ 4.10 ] ) .the scheme ( [ 4.10 ] ) , ( [ 5.2 ] ) has the first - order approximation in time .it seems more preferable , in terms of accuracy , to apply the scheme that is based on the triangular decomposition of the self - adjoint matrix operator : for the problem ( [ 3.23 ] ) , ( [ 3.24 ] ) , we construct the additive scheme with the splitting ( [ 5.5 ] ) , where the main result is formulated in the following statement .the alternating triangle operator - difference scheme ( [ 4.10 ] ) , ( [ 5.5])([5.7 ] ) belongs to the class of schemes that are based on a pseudo - time evolution process the solution of the steady - state problem is obtained as a limit of this pseudo - time evolution .it has the second - order accuracy in time if , and ony the first order for other values of .hout , k.j . , mishra , c. : stability of the modified craig - sneyd scheme for two - dimensional convection - diffusion equations with mixed derivative term .mathematics and computers in simulation * 81*(11 ) ( 2011 ) 25402548
To solve boundary value problems for parabolic equations with mixed derivatives numerically, the construction of difference schemes of prescribed quality faces essential difficulties. In parabolic problems, some possibilities are associated with the transition to a new formulation of the problem in which the fluxes (derivatives with respect to a spatial direction) are treated as unknown quantities. In this case, the original problem is rewritten as a boundary value problem for a system of equations in the fluxes. This work deals with schemes with weights for parabolic equations written in flux coordinates. Unconditionally stable flux locally one-dimensional schemes of first and second order of approximation in time are constructed for parabolic equations without mixed derivatives. A peculiarity of the system of equations written in flux variables for equations with mixed derivatives is that it contains coupled terms with time derivatives.
in many wireless sensor applications , temperature increase caused by sensor operation has to be carefully managed .for example , wireless sensors implanted in the human body have to be designed such that the temperature due to their operation does not cause any threat for the metabolism .a line of medical research started by pennes in 1948 explores the temperature dynamics due to electromagnetic radiation in conjunction with heat losses to the environment and dissipation of heat in the tissue . in the context of sensors that communicate data ,temperature sensitivity varies depending on the type of tissue . for a given specific tissue, it is recommended that the temperature does not exceed a critical level , in order to prevent damage to the tissue .this necessitates careful scheduling of data transmission .this problem arises in various types of body area sensor networks , see e.g. , and references therein .finally , temperature increase in a sensor is a threat for the proper operation of the hardware itself . in this context , the electric power that feeds the amplifier circuitry has to be carefully scheduled so as to avoid permanent damage in the circuit . in order to obtain design principles with regard to temperature sensitivity of such systems , determining transmission schemes under a safe temperature threshold a useful objective . in this paper, we consider data transmission with energy harvesting sensors under such temperature constraints .data transmission with energy harvesting transmitters has been the topic of recent research . in particular , throughput maximization under offline and online knowledge of the energy arrivalsis considered in these references for single - user and multi - user energy harvesting communication systems . in , this problem is investigated under imperfections such as battery energy leakage , charge / discharge inefficiency , and presence of processing costs . in the current paper , we aim to bridge physical heat dissipation with data transmission in energy harvesting communication systems . when the sole purpose is to maximize the throughput , the transmitter may generate excessive heat while utilizing the energy resource . in a temperature sensitiveapplication , the heat accumulation caused by the transmission power policy has to be explicitly taken into account .in such a case , heat generated in the transmitter circuitry causes a form of information - friction " .we study the effect of this friction " in a deadline constrained communication of an energy harvesting transmitter over an awgn channel . for simplicity, we use transmit power as a proxy for hardware power .that is , we assume that the energy dissipated by the power amplifier dominates other energy sinks in the circuitry . 
more work is needed to understand full implications of communication circuitry s energy in this context .our formulation also relates to in that the cumulative effect of heat generated in the hardware affects the communication performance .we determine the throughput optimal offline power scheduling policy under energy harvesting and temperature constraints .our thermal model is based on a view of the transmitter s circuitry as a linear heat system where transmit power is an input as in , , .we impose that the temperature does not exceed a critical level .consequently , we obtain a convex optimization problem .we solve this problem using a lagrangian framework and kkt optimality conditions .we first derive the structural properties of the solution for the general case of multiple energy arrivals .then , we obtain closed form solutions under a single energy arrival . for the general case ,we observe that the optimal power policy may make jumps at the energy arrival instants , generalizing the optimal policies in . between energy harvests ,the optimal power is monotonically decreasing .we establish for the case of a single energy arrival that the optimal power policy monotonically decreases , corresponding temperature monotonically increases , and both remain constant when the critical temperature is reached .then , we consider the case of multiple energy arrivals .we observe that the properties of the solution for the single energy arrival case are guaranteed to hold only in the last epoch of the multiple energy arrival case . in the remaining epochs, the temperature may not be monotone and the transmitter may need to cool down to create a temperature margin for the future , if the energy harvested in the future is large .we illustrate possible cases and obtain insights regarding the optimal temperature pattern in the multiple energy arrival case .we consider an energy harvesting transmitter node placed in an environment as depicted in fig .[ model ] . the node harvests energy to run its circuitry and wirelesslysend data to a receiver .the received signal , the input , fading level and noise are related as where is additive white gaussian noise with zero - mean and unit - variance . in this paper ,the channel is non - fading , i.e. , .we use a continuous time model : a scheduling interval has a short duration with respect to the duration of transmission and we approximate it as ] , the transmitter decides a feasible transmit power level and bits are sent to the receiver , where the base of is . to be precise, the underlying physical signaling is in discrete time and the scalings in snr and rate due to bandwidth and the base of the logarithm are inconsequential for the analysis . . ] as shown in fig .[ model2 ] , the initial energy available in the battery at time zero is .energy arrivals occur at times in amounts with .we call the time interval between two consecutive energy arrivals an _ epoch_. is the deadline . and are known offline and are not affected by the heat due to transmission .let and be the number of energy arrivals in the interval and by convention we let .power scheduling policy is subject to energy causality constraints as : \end{aligned}\ ] ] in our thermal model , we use the transmit power as a measure of heat dissipated to the environment . in particular , we model the temperature dynamics of the system as follows : where is the transmit power policy and is the temperature at time . 
is the constant temperature of the environment that is not affected by the heating effect due to the transmit power level . and are non - negative constants . represents the cumulative effect of additional heat sources and sinks and it can take both positive and negative values . in the following ,we consider the case of no extra heat source or sink , i.e. , . our thermal model in ( [ thermal ] )is intimately related to the thermal model in where hardware heating is modeled as a first order heat circuit . in particular , thermal dynamics of a power controlled transmitter due to its amplifier power consumption ( see e.g. , ) could be modeled as in ( [ thermal ] ) .we also refer the reader to for a related heating model . our thermal model is also related to the well - known pennes bioheat equation .we assume , for simplicity , that the spatial variation in temperature is not significant and leave the general case of spatial temperature variations as future work. becomes available for data transmission at time . is the deadline . ] from ( [ thermal ] ) , the solution of for any given with the initial condition at time is : by inserting in ( [ four ] ) , we get ( c.f .* eq . ( 3 ) ) ) : temperature should remain below a critical temperature , i.e. , , where we assume that .let us define , which is the largest allowed temperature deviation from the environment temperature .typically , initial temperature is , i.e. , initially the temperature is stabilized at the constant environment temperature . from ( [ ths ] ) , using and , we get the following equivalent condition for the temperature constraint : \end{aligned}\ ] ] note that the temperature constraints in ( [ gg ] ) and the energy causality constraints in ( [ causality ] ) do not interact . due to the heat generation dynamics governed by ( [ thermal ] ), we observe in ( [ gg ] ) that the cost of power increases exponentially in time ( i.e. , the multiplier in front of is exponential in ) while the heat budget also increases exponentially in time ( i.e. , the upper bound on the right hand side of ( [ gg ] ) is exponential in ) .offline throughput maximization problem over the interval ] . note that ( [ opt_prob1 ] ) is a convex functional optimization problem .the lagrangian for ( [ opt_prob1 ] ) is : taking the derivative of the lagrangian with respect to and equating to zero : which gives ^+\end{aligned}\ ] ] in addition , the complementary slackness conditions are : in ( [ rf ] ) and ( [ slck1])-([slck2 ] ) , and are distributions that are allowed to have impulses and their total measure over ] where is the total energy expenditure by the time .the input is for .this problem is in the following form : } \quad & \int_0^d \frac{1}{2}\log\left(1 + p(\tau)\right)d\tau \\ \nonumber \mbox{s.t . }\quad & \frac{d}{dt}t(t ) = f_1(t , b , p ) , \\frac{d}{dt}b(t)= f_2(t , b , p ) \\ \qquad & g_1(t , b , t ) \leq 0 , \g_2(t , b , t ) \leq 0 \label{opt_prob2}\end{aligned}\ ] ] where and while and .note that and do not depend on the input . with these selections ,optimization problem ( [ opt_prob2 ] ) is in the same form as that stated in ( * ? ? ?( 2.1)-(2.6 ) ) . 
in this case, hamiltonian is and the corresponding lagrangian is where and are the co - state trajectories ; and are multiplier functions .we note that pontryagin s maximum principle is necessary and sufficient in this case since ( [ opt_prob2 ] ) is a concave maximization problem .one can derive the equivalence of necessary and sufficient conditions for this optimal control problem to those in ( [ rf ] ) and ( [ slck1])-([slck2 ] ) . in the following ,we proceed with the lagrangian formulation in ( [ lgrng ] ) and the corresponding optimality conditions in ( [ rf ] ) and ( [ slck1])-([slck2 ] ) .in this section , we obtain the structural properties of the optimal power scheduling policy using the optimality conditions . in the following lemmas , refers to the optimal policy and is the resulting temperature unless otherwise stated .we first note that the temperature level never drops below .in particular , if the initial temperature is between and , the temperature at all times will remain between and .[ temp ] whenever the initial temperature is . from ( [ thermal ] ) , since we have whenever . the constraint is satisfied by any feasible policy in ( [ opt_prob1 ] ) .the following lemma states that if the temperature is constant , then the power is constant also ( while it is not true the other way around , see lemma [ piecewise ] ) , and that if the temperature hits the maximum allowed level , then the power must be below a threshold . [ constant] whenever is constant over an interval ] . from ( [ ths ] ) , we have : for , ( [ fin ] ) is a monotone increasing function of . in particular , .now , let us consider the case of constant power levels for the duration of communication , i.e. , over the interval where for all and where is the number of intervals .in this case , we have for : where .hence , the coefficient of in ( [ fin2 ] ) has a negative sign as .this proves that is monotone increasing . to generalize this result for any monotone increasing function , we obtain any monotone increasing simple approximation of , denoted as , such that for all ] and pointwise . by monotone convergence theorem , we have \end{aligned}\ ] ] accordingly , pointwise and we have \end{aligned}\ ] ] since is a monotone increasing piecewise constant function , from the first part of the proof , is monotone increasing , i.e. , . since pointwise , this implies , i.e. , is monotone increasing as well .the next lemma shows that if the temperature remains constant over an interval , then that level could only be or , i.e. , any other temperature can not be a stable temperature .[ const_temp ] if is constant over an interval ] , is strictly monotone increasing , and that the interval ] and otherwise . 
satisfies the energy causality constraint in ( [ opt_prob1 ] ) since uses the same amount of energy as over ] , we have : \end{aligned}\ ] ] using ( [ corin ] ) in ( [ gggg ] ) and since , ( [ gggg ] ) takes the following equivalent form : note that the left hand side of ( [ ffgg ] ) is either monotone increasing or monotone decreasing in as it is a linear function of .since the inequality ( [ ffgg ] ) holds at and as satisfies the temperature constraint at those points , we conclude that satisfies the temperature constraint for all ] includes an energy arrival instant provided that the energy causality constraint is not tight at that instant .next , we show that discontinuities in the power level could only occur in the form of positive jumps , and only at the instances of energy harvests .[ lm1 ] if there is a discontinuity in , it is a positive jump and it occurs only at the energy arrival instants .the temperature is continuous throughout the ] interval . by lemma [ lm1 ] , we can take in the form without loss of optimality , where , , are finitely many lagrange multipliers corresponding to the energy causality constraints at the energy harvesting instants and the deadline , . the next lemma shows , for an arbitrary feasible policy , that if the temperature reaches the critical level at some , then the power just before must be larger than a threshold .[ lm2 ] if for some , then for all sufficiently small .since , we have : we combine ( [ gg ] ) with ( [ kk ] ) to get \end{aligned}\ ] ] which implies in view of the continuity of ( except for the finitely many energy arrival instants ) proved in lemma [ lm1 ] that for all sufficiently small .we next state the continuity of the optimal power policy at points when it hits the critical temperature .[ kkk ] if for some then is continuous at and .the proof follows from lemma [ constant ] and lemma [ lm2 ] and the fact that negative jumps in are not allowed due to lemma [ lm1 ] .next , we show that when the temperature hits the boundary , it has to return to .[ extend ] whenever for some , there exists such that .assume that for some and for all .by lemma [ kkk ] , . from ( [ four ] ) with , the constraint becomes : since in , only energy causality constraint is active and thus for is the piecewise constant monotone power allocation in .on the other hand , satisfies ( [ reef_r2 ] ) with equality for all . therefore , we must have for all for some . however , this contradicts since there can not be a negative jump in by lemma [ lm1 ] .the following lemma identifies the exact conditions where the power makes a jump .if there is a jump in , it occurs only at an energy arrival instant , when the battery is empty and the temperature is strictly below . due to the slackness conditions in ( [ slck1])-([slck2 ] ) , a jump occursif either the battery is empty or the temperature constraint is tight , i.e. , . 
by lemma [ kkk ] , is continuous whenever .therefore , a jump in occurs at an energy arrival instant , when the battery is empty and .we finally remark that energy may have to be wasted as aggressive use of energy may cause temperature to rise above the critical level .in this section , we consider a single epoch where units of energy is available at the transmitter at the beginning .we first develop further structural properties for the optimal power control policy in this specific case and then obtain the solution .the next lemma shows that , if the power falls below a certain threshold at an intermediate point and remains under that threshold until the deadline , then it should remain constant throughout .[ lmn ] if for ] .assume is not constant over ] and otherwise . is both energy and temperature feasible .energy feasibility holds by construction as and have the same energy over ] .the following lemma states that the power has to remain constant at the level when the temperature reaches the critical level .[ multiple_r ] let ] .if , then for all ] .by lemma [ lm1 ] , is continuous and therefore , for all ] .then , the optimal power has the form : where is the unit step function .now , from lemma [ kkk ] , is continuous at and should be chosen accordingly .in particular , .the following lagrange multiplier verifies ( [ kkl ] ) : the corresponding optimal temperature pattern for is : and for .we note that satisfies : so that .hence , monotonically increases till it reaches , which is consistent with lemma [ temperature ] .next , consider the case that .in this case , where and .therefore , the optimal in this case is we also remark that level that satisfies ( [ sol ] ) monotonically increases with . to see this ,we rearrange ( [ sol ] ) as follows : let us define a multi - variable real function as the left hand side of ( [ sol2 ] ) and denote a specific solution as for fixed .it is easy to see that ( [ sol2 ] ) always has a solution for fixed . to see this, we evaluate the derivative with respect to as : that is , is monotone decreasing with . at , while as grows . in view of the continuity of , there exists a such that .additionally , we observe in ( [ sol2 ] ) that for fixed , monotonically increases with .therefore , if , then , due to monotone increasing property with respect to , for . hence , for such that , we have due to monotone decreasing property with respect to .note that the optimal power policies in the energy unconstrained cases in ( [ kkl ] ) and ( [ halfway ] ) have finite energies .if the available energy is larger than the corresponding energy level in ( [ kkl ] ) and ( [ halfway ] ) , then the solution is as in ( [ kkl ] ) and ( [ halfway ] ) .otherwise , the energy constraint is active and the lagrange multiplier is . from ( [ nn ] ), we have : we first note that there is a critical energy level such that if , then constant power policy is optimal .this critical level is : this is the critical level below which the temperature constraint is not tight by the constant power allocation .the expression in ( [ thrtn ] ) is evaluated from ( [ fin ] ) by inserting , and requiring . when , since temperature constraint is never tight . in this case , is the maximum energy level for which a constant power level is optimal .if , is monotone increasing over ] where and have to satisfy : the corresponding lagrange multiplier is .depending on the energy and the critical temperature , the optimal power scheduling policy varies according to the plots in fig .[ reg_o ] . 
for small and fixed or for large and fixed , a constant power policy is optimal . for moderate and large , the optimal power policy is exponentially decreasing and may hit the power level .note that level at which temperature touches the critical level decreases as is decreased and as is increased . in particular , for fixed , the level of is bounded below by the solution for whereas for fixed , goes to as approaches .in this section , we extend the solution to the case of multiple energy arrivals .we start with extending the properties observed for the single energy arrival case when initial temperature is different from .the following lemma generalizes lemmas [ new_lm ] , [ multiple_r ] and [ temperature ] for the case of an arbitrary .[ extension ] assume that the initial temperature is in the range instead of and consider the single energy arrival case : is monotone decreasing .let ] .if , then for all ] , i.e. , the right hand side of ( [ gg2 ] ) is always non - negative . the argument in lemma [ new_lm ] is valid in the presence of the additional term in ( [ gg2 ] ) , and therefore is monotone decreasing .the second claim follows from the argument in lemma [ multiple_r ] .in particular , in addition to lemma [ new_lm ] , lemma [ lmn ] directly extends with the constraint in ( [ gg2 ] ) .hence , the result follows by applying the argument in lemma [ multiple_r ] .finally , is monotone increasing and concave due to the steps followed in lemma [ temperature ] .in particular , if the temperature constraint is tight at , . hence , ( [ sxtn])-([svnt ] ) hold and the temperatureis monotone increasing and concave . if , then due to the energy constraint .note that the temperature decreases in case . as in the single epoch case, we will investigate the solution under special cases .in particular , we will investigate the solution according to the time when the temperature hits the critical level .to this end , we specialize in an interval ] and we let . in this case, the solution of ( [ opt_prob1 ] ) over the interval ] interval .for the following argument , whether equals the original energy arrival amount is not relevant and we leave as arbitrary amounts . to obtain the solution of ( [ opt_prob_relax ] ) using this lagrangian framework ,it is necessary and sufficient to find variables , and such that ^+ , \quad t \in [ \tilde{s}_{i-1},\tilde{s}_{i } ) , \i=1,\ldots , \tilde{n}+1\end{aligned}\ ] ] with the corresponding slackness conditions. therefore , for the ] for some and not both equal to zero .this also holds in a subinterval of an epoch over which . in the following lemma, we show that in such an epoch , the temperature is unimodal .[ unimodal ] if ^{+} ] for some and , the resulting is unimodal over ] , ^+ + bt_e\right ) d\tau + t(t_1 ) \right)\end{aligned}\ ] ] first , we note that when , from ( [ thermal ] ) .hence , it suffices to show that is unimodal when . by evaluating the integral ,we get we claim that in ( [ lbbb ] ) is unimodal for .note that the derivative of is : we let , and concentrate on for .we note that is a strictly monotone decreasing function of for .in particular , we have : thus , is strictly monotone decreasing in . as at , we conclude that the factor in ( [ dfsa ] ) that multiplies can take value at most once . 
in particular , can take positive or negative values at .if it is positive at , it hits value at most once for .if it is negative at , it stays negative throughout .this proves that is unimodal over ] , the temperature can not return to if it hits and falls below .[ new ] if and for some where both and are in ] . by lemma [ kkk ] , .by lemma [ new_lm ] , power is monotone decreasing in an epoch .therefore , if , then and hence for all ] .next , we complete the unimodal structure of the temperature by showing that it has to be monotone decreasing if it hits and falls below .[ new2 ] in an epoch ] . by lemma[ new ] , if , then for all ] . therefore , is monotone decreasing .we next consider epochs ] , there are three possible cases .the first two possibilities are that is monotone increasing or monotone decreasing throughout the epoch .the third possible case is that is monotone increasing in ] for some .otherwise , hits and does not return to if it falls below it due to lemma [ new ] .therefore , if hits in an epoch ] for some . is monotone increasing over , remains at over and is monotone decreasing over .we finally note that if at , then for all $ ] .this follows from lemma [ extend ] . in this case , the temperature constraint is never tight and the optimal power policy is identical to the one in .is decreased , the energy is spent faster subject to energy causality . ] in fig .[ exp2 ] , we plot the optimal energy expenditure for different values of critical temperature level . we observe that as is decreased , the temperature budget shrinks and the temperature constraint becomes more likely to be tight . in this case , energy is spent faster not to create high amounts of heat in the system . in general , there is a tension between causing unnecessary heat in the system and maximizing the throughput .while we have fully characterized this tension in the single energy arrival case , it needs to be further explored in the multiple energy arrivals case . in particular , when a high amount of energy arrives into the system during the progression of communication , the transmitter has to accommodate it by cooling down and creating a temperature margin for future use .while maximizing the throughput generally requires using the energy in the system to the fullest extent , the transmitter may have to waste energy due to the temperature limit .we investigate this tension in numerical examples in the next section .in this section , we provide numerical examples to illustrate the optimal power policy and the resulting temperature profile . for plots in figs .[ model4 ] , [ model5 ] , [ model6 ] and [ model7 ] , we set , , and . therefore , the critical power level is . in figs .[ model4 ] and [ model5 ] , we consider the energy unlimited scenario . in this case , the solution of ( [ sol ] ) is found as . in fig .[ model4 ] , we set and we observe that the optimal power policy is always above the level . in this case ,power strictly monotonically decreases while temperature strictly monotonically increases with temperature touching the critical level at the deadline . in fig .[ model5 ] , we set the deadline as .we calculate that the energy needed to have the power policy in fig .[ model5 ] is . 
in other words ,if the initial energy is then the power policy in fig .[ model5 ] is optimal .we observe that the optimal power level monotonically decreases to the level and remains at that level afterwards .similarly , the temperature level rises to and remains at that level afterwards .note that the throughput and the energy consumption in fig .[ model5 ] are higher with respect to those in fig .[ model4 ] .parallel to this observation , the monotone decrease is sharper in the power policy in fig .[ model5 ] compared to that in fig .[ model4 ] .since the power level has to be stabilized at , the temperature increase cost paid for achieving certain throughput is minimized if energy consumption starts faster and drops later . in fig .[ model6 ] , we set the deadline to and the energy limit to .note that this energy level is slightly less than the energy of the power policy in fig .[ model5 ] , which translates into a right shift of the point .in particular , we calculate as the solution of ( [ bir])-([uc ] ) in this case .similar to the effect of decreasing the deadline observed in the comparison of figs .[ model4 ] and [ model5 ] , we observe that decreasing the energy level yields a _ smoother _ power policy .power level drops to and the temperature hits at a later time and both remain constant afterwards . for the single epoch case . ] in fig .[ model7 ] , we consider the same system as in previous figures with two energy arrivals instead of one and with . in particular , is available initially and arrives at time . in this case, we calculate .the energy causality constraint is tight and the power level makes a jump at the energy arrival instant . note that the temperature is continuous at the energy arrival instant even though its first derivative is not .while the power level has a smooth start , a sharper decrease is observed towards the end since the harvested energy has to be fully utilized . in particular ,the temperature increase before the energy arrival is kept to a minimum level so as to have a higher heat budget for the larger energy that arrives later .the temperature hits at after which the power and temperature both remain constant . for the single epoch case . ] finally in fig .[ model8 ] , we illustrate a curious behavior in the optimal policy . 
for this example , we set , , and .initial energy is and energy arrives at with amount and the deadline is .we observe that energy causality constraint is tight at whereas it is not tight at meaning that some energy is wasted in order not to cause excessive heat .the temperature generated in this throughput optimal power policy first monotonically increases , hits at , remains there till and drops below .we interpret the drop in the temperature in the first epoch as an effort to create temperature margin for the high energy arrival in the next epoch .we calculate as the time after which power level remains at and the temperature remains at .note that under unlimited energy , temperature would hit at .due to the energy scarcity in the first epoch , temperature hits later and drops below .a common behavior we observe in each numerical example is that temperature ultimately increases between two epochs where energy causality constraint is tight .further research is needed to quantify the relations between the amount of temperature generated while performing optimally in terms of throughput .while monotonicity of the temperature is lost when multiple energy harvests exist , we note that monotonicity of the temperature is guaranteed in the last epoch due to lemma [ extension ] . and for the single epoch case .] and at and . ] at and at and . ]we considered throughput maximization for an energy harvesting transmitter over an awgn channel under temperature constraints .we used a linear system model for the heat dynamics and determined the throughput optimal power scheduling policy under a maximum temperature constraint by using a lagrangian framework and the kkt optimality conditions .we determined for the single energy arrival case that the optimal power policy is monotone decreasing whereas the temperature is monotone increasing and both remain constant after the temperature hits the critical level .we then generalized the solution for the case of multiple energy arrivals .while monotonicity of the temperature is lost when multiple energy harvests exist , we observed that the temperature ultimately increases while maximizing the throughput .we also observed that the main impact of the temperature constraints is to facilitate faster energy expenditure subject to energy causality constraints . additionally ,even though using all of the available energy is optimal for throughput maximization only , with temperature constraints , energy may have to be wasted in order not to exceed the critical temperature .q. tang , n. tummala , s. gupta , and l. schwiebert , `` communication scheduling to minimize thermal effects of implanted biosensor networks in homogeneous tissue , '' _ ieee trans . on biomed ._ , vol .52 , pp .12851294 , july 2005 .s. ullah , h. higgins , b. braem , b. latre , c. blondia , i. moerman , s. saleem , z. rahman , and k. kwak , `` a comprehensive survey of wireless body area networks , '' _ jour . medical ._ , vol .36 , pp . 10651094 , june 2012 .a. mutapcic , s. boyd , s. murali , d. atienza , g. de micheli , and r. gupta , `` processor speed control with thermal constraints , '' _ ieee trans .on circ . and sys .-i _ , vol .56 , pp .19942008 , september 2009 .o. ozel , k. tutuncuoglu , j. yang , s. ulukus , and a. yener , `` transmission with energy harvesting nodes in fading wireless channels : optimal policies , '' _ ieee jour . on selected areas in commun ._ , vol . 29 , pp .17321743 , september 2011 .o. ozel , j. yang , and s. 
ulukus , `` optimal broadcast scheduling for an energy harvesting rechargeable transmitter with a finite capacity battery , '' _ ieee trans . on wireless comm ._ , vol . 11 , pp .21932203 , june 2012 .b. devillers and d. gunduz , `` a general framework for the optimization of energy harvesting communication systems with battery imperfections , '' _ jour . of comm . and netw ._ , vol . 14 , pp . 130 139 , april 2012 .k. tutuncuoglu , a. yener , and s. ulukus , `` optimum policies for an energy harvesting transmitter under energy storage losses , '' _ ieee jour . onselected areas in commun ._ , vol .33 , pp .467481 , march 2015 .j. xu and r. zhang , `` throughput optimal policies for energy harvesting wireless transmitters with non - ideal circuit power , '' _ ieee jour . on selected areas in commun ._ , vol .32 , pp . 322332 , february 2014 .o. ozel , k. shahzad , and s. ulukus , `` optimal energy allocation for energy harvesting transmitters with hybrid energy storage and processing cost , '' _ ieee trans . on signal proc ._ , vol .62 , pp .32323245 , june 2014 .
motivated by damage due to heating in sensor operation , we consider the throughput optimal offline data scheduling problem in an energy harvesting transmitter such that the resulting temperature increase remains below a critical level . we model the temperature dynamics of the transmitter as a linear system and determine the optimal transmit power policy under such temperature constraints as well as energy harvesting constraints over an awgn channel . we first derive the structural properties of the solution for the general case with multiple energy arrivals . we show that the optimal power policy is piecewise monotone decreasing with possible jumps at the energy harvesting instants . we derive analytical expressions for the optimal solution in the single energy arrival case . we show that , in the single energy arrival case , the optimal power is monotone decreasing , the resulting temperature is monotone increasing , and both remain constant after the temperature hits the critical level . we then generalize the solution for the multiple energy arrival case .
a cell is a system that transforms nutrients into substrates for growth and division . by assuming that the nutrient flow from the outside of a cell is an energy and material source, the cell can be regarded as a system to transform energy and matter into cellular reproduction .it is important to thermodynamically study the efficiency of this transformation . regarding material transformation ,the yield is defined as the molar concentration of nutrients ( carbon sources ) needed to synthesize a molar unit of biomass ( cell content ) and has been measured in several microbes . as the conversion of nutrients to cell contentis not perfect and material loss to the outside of a cell occurs as waste , the yield is generally lower than unity .the yield also changes with nutrient conditions , and measurements in several microbes show that the yield is maximized at a certain finite nutrient flow rate .the basic logic underlying the optimization of yield at a finite nutrient flow rate rather than at a quasi - static limit is not fully understood .a cell can also be regarded as a type of thermodynamic engine to transform nutrient energy into cell contents . in this case, it is necessary to study the thermodynamic efficiency or entropy production during the process of cell reproduction .the thermodynamic efficiency of metabolism has been measured in several microbes under several nutrient conditions , and westerhoff and others computed it by applying the phenomenological flow - force relationship of the linear thermodynamics to catabolism and anabolism to show that the efficiency is optimal at a finite nutrient flow .although such a phenomenological approach is important for technological application , a physiochemical approach is also necessary to highlight difference between cellular machinery and the carnot engine by characterizing the basic thermodynamic properties in a simple protocell model . indeed ,when viewed as a thermodynamic engine , a cell has remarkable differences from the standard carnot - cycle engine .the cell sits in a single reservoir , without a need to switch contacts between different baths .the cell grows autonomously to reproduce . to consider the nature of such a system , it is necessary to establish the following three points distinguishing the cell from the standard carnot engine .first , cells contain catalysts ( enzymes ) .the enzyme exists only within a compartmentalized cell encapsulated by a membrane and thus enables reactions to convert resources to intracellular components to occur within a reasonable time scale within a cell but not outside the cell . 
without the catalyst , extensive timeis required for the reaction .thus , the reaction is regarded to occur only in the presence of the catalyst .this leads to an intriguing non - equilibrium situation : let us consider the reaction with as the resource , as the product , and as the catalyst .then , under the existence of , the system approaches an equilibrium concentration ratio with /[p ] = \exp(-\beta(\mu_r-\mu_p)) ] , and accordingly .there is no optimal nutrient concentration in this expression because is always positive for any .this is consistent with the explanation mentioned above for eq.([eq : tilde ] ) .if the enzyme abundance is fixed to be independent of the nutrient uptake , the speed of approaching equilibrium is not altered by the nutrient condition ; therefore , the entropy production just increases monotonically because of the cell volume growth .thus far , we considered only entropy production by chemical reactions .in addition , the material flow also contributes to entropy production , which is taken into account here .+ to discuss the flow of nutrients , the dynamics of the nutrient concentration can not be neglected . by including the temporal evolution of the nutrient concentration ,the dynamics of the cellular state are given by where and are the enzyme , membrane precursor , and nutrient concentration , respectively , and .the rate constants and are determined by the standard chemical potential of each chemical .additionally , the nutrient is taken up with rate from the extracellular environment with a concentration .+ entropy production by chemical flow is derived from nutrient uptake and membrane consumption , which ( again by assuming linear nonequilibrium thermodynamics ) are given by and , respectively , where is the material flow of component and is the chemical potential. integration of the terms over a narrow layer having a spatial gradient results in and .we neglect the entropy production of the solvent with the assumption that intra- and extracellular solvent concentrations are identical ) , we adopted entropy production of membrane consumption as a diffusion process . ] .the contribution of dilution of the nutrient resulting from cellular growth is approximated as by using the formula of entropy change resulting from the isothermal expansion of an ideal solution to a terminal volume is per unit mole . because is the volume expansion rate in this context and , the change in entropy density is written as per unit mole .the approximated formula is obtained by expanding into the taylor series and taking the limit of to zero . ] ; for other species , we use the same formula . +we choose that and are equal to unity and that , for the sake of simplicity .indeed , the characteristic behavior of is independent of this choice .then , the fixed - point solutions of eq.([eq:3 ] ) are obtained against two parameters and . from the solution ,the entropy production per unit growth is computed , as shown in fig.[fig : eff_en_3nodes_diff](a ) .we note that here again the minimal is achieved for a finite nutrient uptake , i.e. , under nonequilibrium chemical flow . 
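the qualitative mechanism described above can be explored with a toy numerical realization . the sketch below integrates a three - variable model ( nutrient , enzyme , membrane precursor ) with enzyme - catalyzed reversible reactions , dilution by growth , and nutrient exchange with the environment , and evaluates the reaction entropy production per unit growth at steady state . the rate forms and constants are invented for illustration , so whether the minimum falls at a finite uptake , as reported above , depends on the chosen values .

```python
import numpy as np
from scipy.integrate import solve_ivp

# invented rate constants; the paper derives its rates from standard chemical
# potentials, which are not reproduced here
k1f, k1r = 1.0, 0.2     # nutrient <-> enzyme, catalyzed by the enzyme
k2f, k2r = 1.0, 0.2     # nutrient <-> membrane precursor, catalyzed by the enzyme
D, c = 1.0, 1.0         # nutrient exchange rate and membrane consumption rate

def rhs(t, y, S_ext):
    N, E, M = y
    v1 = E * (k1f * N - k1r * E)     # net flux into the enzyme
    v2 = E * (k2f * N - k2r * M)     # net flux into the membrane precursor
    mu = c * M                       # volume growth rate ~ membrane synthesis
    return [D * (S_ext - N) - v1 - v2 - mu * N,
            v1 - mu * E,
            v2 - c * M - mu * M]

def entropy_per_growth(S_ext):
    """steady-state reaction entropy production divided by the growth rate."""
    sol = solve_ivp(rhs, (0.0, 500.0), [S_ext, 0.1, 0.1], args=(S_ext,), rtol=1e-8)
    N, E, M = sol.y[:, -1]
    fluxes = [(k1f * E * N, k1r * E * E), (k2f * E * N, k2r * E * M)]
    sigma = sum((jp - jm) * np.log(jp / jm) for jp, jm in fluxes)
    return sigma / (c * M)

S_grid = np.linspace(0.05, 3.0, 40)
curve = [entropy_per_growth(S) for S in S_grid]
print("entropy per unit growth is smallest at S_ext ~", S_grid[int(np.argmin(curve))])
```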
in fig.[fig : eff_en_3nodes_diff](b ) , we plotted , the entropy production excluding that derived from the chemical reaction .it increases monotonically with the external nutrient concentration .entropy production is primarily derived from chemical reactions ; therefore , the conclusion of subsection a is unchanged .+ note that the so - called thermodynamic efficiency is defined as where and are the rates of catabolism and anabolism , and and are the affinities of catabolism and anabolism . here, the optimality with regard to entropy production also leads to the optimal thermodynamic efficiency , which , in the present case , is computed by where and are the absolute values of the uptake ( and consumption ) flow of chemical species ( and ) , and is the chemical potential of the chemical species .it is computed by using the chemical potential of nutrient with as the standard chemical potential for the nutrient and as its standard concentration ( the chemical potential for and are computed in the same way ) .this thermodynamic efficiency also takes a local maximum value at a non - zero nutrient uptake rate ( see fig.[fig : thermodynamic_efficiency ] ) .it is worthwhile to check the generality of our result for a system with a large number of chemical species as in the present cell . for this purpose ,we introduce a model given by where the variables , , and denote the concentrations of the nutrient , membrane precursor , and enzymes , respectively , and is the external concentration of the nutrient .each element of the reaction tensor is unity if the reaction of to catalyzed by exists ; otherwise , it is set to zero . here ,the nutrient and the membrane precursor can not catalyze any reaction , whereas the other components form a catalytic reaction network .all chemical reactions are reversible in our model ; therefore is equal to unity if and only if equals unity . for the sake of simplicity, we assume that catalytic capacity , nutrient uptake rate , membrane precursor consumption rate , and the conversion rate from membrane molecule to cell volume are unity .the standard chemical potential for each chemical species is assigned by uniform random numbers within $ ] , whereas is given by accordingly .+ numerical simulations reveal that there again exists an optimal point of for each randomly generated reaction network of .the dependence of on the nutrient concentration is plotted in fig.[fig : n](a ) , overlaid for different networks .although the nutrient concentration to give the optimal value is network - dependent , it always exists at a finite nutrient concentration ; therefore , the entropy production is minimized at a non - zero nutrient concentration . to determine a possible relationship with the optimality of and equilibrium in the presence of a catalyst we also computed the kullback - leibler ( kl ) divergence of the steady state distribution from the equilibrium boltzmann distribution as a function of the external nutrient concentration , expressed as where is the concentration of the th chemical species in the steady state .the kl divergence for each network shows non - monotonic behavior , as shown in fig.[fig : n](b ) .although the optimal nutrient concentration does not agree with the optimum for , each kl divergence decreases in the region where is reduced . 
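for reference , a small helper for the kullback - leibler comparison mentioned above might look as follows ; normalizing the steady - state composition into a distribution and using boltzmann weights exp(-beta mu_i^0) are assumptions about the intended convention , since the exact expression is not reproduced in this text .

```python
import numpy as np

def kl_from_equilibrium(x_steady, mu_standard, beta=1.0):
    """KL divergence of the (normalized) steady-state composition from the
    Boltzmann distribution exp(-beta*mu_i0)/Z; the normalization convention
    is an assumption, not the paper's stated formula."""
    p = np.asarray(x_steady, dtype=float)
    p = p / p.sum()
    q = np.exp(-beta * np.asarray(mu_standard, dtype=float))
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# toy usage: 5 chemical species with random standard potentials in [0, 1],
# as in the randomized networks described above; x_ss stands in for a computed
# steady state
rng = np.random.default_rng(0)
mu0 = rng.uniform(0.0, 1.0, size=5)
x_ss = rng.uniform(0.1, 1.0, size=5)
print("D_KL(steady state || equilibrium) =", kl_from_equilibrium(x_ss, mu0))
```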
in this sense, it is suggested that the reduction of in our model eq.([eq : n_system ] ) is related to the equilibration process of abundant enzymes synthesized as a result of a relatively high rate of nutrient uptake as discussed for eq.([eq:2 ] ) and eq.([eq:3 ] ) .to discuss the thermodynamic nature of a reproducing cell , we have studied simple protocell models in which nutrients are diffused from the extracellular environment and necessary enzymes for the intracellular reactions are synthesized to facilitate chemical reactions , including the synthesis of membrane components , which leads to the growth of cell volume . in the models ,cell growth is achieved through nutrient consumption by the reactions described above .we computed , which is the entropy production per unit cell volume growth and found that the value was minimized at a certain nutrient uptake rate .this optimization stems from the constraint that cells have to synthesize enzymes to facilitate chemical reactions , i.e. , the autopoietic nature of cells . in general , the concentrations of nutrients and membrane components in extracellular environments are different from those in equilibrium achieved in the presence of enzymes , and the intracellular state moves towards equilibrium by synthesizing enzymes to increase the speed of chemical reactions .the equilibration reduces the entropy per unit chemical reaction .however , faster cell volume growth leads to a higher dilution of chemicals ; therefore , faster chemical reactions are required to maintain the steady - state concentration of chemicals . because entropy production by the reaction increases ( roughly linearly ) with the frequency of net chemical reactions, then increases for a higher growth range .thus , the existence of an optimal nutrient content is explained by the requirement for reproduction mentioned in the introduction , i.e. , equilibration of non - equilibrium environmental conditions facilitated by the enzyme , autocatalytic processes to synthesize the enzyme , and cell - volume increase resulting from membrane synthesis .+ in the present model , all chemical components thus synthesized are not decomposed ; they are only diluted .however , each component generally has a specific decomposition time or deactivation time as a catalyst .we can include these decomposition rates , which can also be regarded as diffusion to the extracellular environment with a null concentration .then , the equilibration effect is clearer , although the results regarding optimal nutrient uptake are unchanged .+ in the present study we focused on the entropy production that corresponds to dissipated energy per unit growth . in microbial biology , however , material loss is discussed as biological yield , as mentioned in the introduction , and it is thus reported that the optimal yield is achieved at a certain finite nutrient flow .material loss is not directly included in the present model ; therefore , we can not discuss the yield derived directly from entropy production .however , it may be possible to assume that energy dissipation is correlated with material dissipation .+ for example , the stoichiometry of metabolism is suggested to depend on dissipated energy . here , metabolism consists of two distinct parts : catabolism and anabolism . for catabolism , the energy is transported through energy currency molecules such as atp , nadph , and gtp , which are synthesized from the nutrient molecule . 
in this process, molecular decomposition also occurs , leading to the loss of nutrient molecules .in addition , the abundance of energy - currency molecules and the utilized energy are correlated .hence , for both catabolism and anabolism , the energy dissipation and material loss are expected to be correlated .indeed , a linear relationship between the yield and the inverse of thermodynamic loss ( i.e. , quantity similar to here ) is suggested from microbial experiments .+ considering the correlation between energy and matter , the minimal entropy production at a finite nutrient flow that we have shown here may provide an explanation for the finding of optimal yield at a finite nutrient flow .future studies should examine the relationship between minimal entropy production and optimal yield in the future by choosing an appropriate model that includes atp synthesis and waste products in a cell .currently , although our models are too simple to capture such complex biochemistry in a cell , they should initiate discussion regarding the thermodynamics of cellular growth .the authors would like to thank a. kamimura , n. takeuchi , y. kondo , t. hatakeyama , a. awazu , y. izumida , t. sagawa , and t. yomo for the useful discussion .the present work was partially supported by the platform for dynamic approaches to living system from the ministry of education , culture , sports , science , and technology of japan and the dynamical micro - scale reaction environment project of the japan science and technology agency .optimality at a finite flow rate was also discussed at the so - called weak - coupling regime in the linear non - equilibrium thermodynamics , where a linear relationship between the fluxes and forces , ( with ) is adopted , in which is thermodynamic flow , and the conjugate thermodynamic force , and s are the transport coefficients . here , the degree of coupling is defined by , while the thermodynamic efficiency is defined by the entropy production by the first process divided by the second process , given by this efficiency is known to take a local maximum at a finite flow , if the coupling is weak , i.e. , .this optimality , however , is not related with the optimality discussed here . as discussed , ours is due to the equilibration under the existence of catalysts , whose synthesis speed is essential to give the optimality , in addition to the dilution by the cell - volume increase .second , through straightforward calculation of thermodynamic variables for each reaction , it is shown that the optimality in our case is achieved under the tight - coupling regime i.e. , , where the optimality at a finite flow is not possible in the standard linear thermodynamics .entropy production during isothermal expansion of an ideal solution from the initial volume to a terminal volume is per unit mole . because is the volume expansion rate in this context and , the change in entropy densityis written as per unit mole .the approximated formula is obtained by expanding into the taylor series and taking the limit of to zero .
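the remark above about weak versus tight coupling can be made concrete with a small numerical scan . under the linear flux - force relations J1 = L11 X1 + L12 X2 and J2 = L12 X1 + L22 X2 , the efficiency eta = -J1 X1 / ( J2 X2 ) reduces to eta = -x ( x + q ) / ( q x + 1 ) , where q = L12 / sqrt(L11 L22) is the degree of coupling and x = ( X1 / X2 ) sqrt(L11 / L22) is the reduced force ratio . this reduction is standard linear - thermodynamics algebra rather than a result of this paper ; the sketch below scans x and shows that for |q| < 1 the maximum sits at an interior ( finite - flow ) operating point , while as q approaches 1 it is reached only toward the quasi - static endpoint .

```python
import numpy as np

def efficiency(x, q):
    """eta = -x(x+q)/(qx+1) for reduced force ratio x and degree of coupling q."""
    return -x * (x + q) / (q * x + 1.0)

x = np.linspace(-0.999, 0.0, 2000)      # eta > 0 only for x in (-q, 0)
for q in (0.5, 0.9, 0.99):
    eta = efficiency(x, q)
    k = int(np.argmax(eta))
    print(f"q = {q:.2f}: max efficiency {eta[k]:.3f} at x = {x[k]:.3f}")
# the maximizing x stays strictly inside (-q, 0) for q < 1 (finite flow),
# and moves toward the reversible endpoint x -> -1 as q -> 1
```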
cells generally convert external nutrient resources to support metabolism and growth . understanding the thermodynamic efficiency of this conversion is essential to determine the general characteristics of cellular growth . using a simple protocell model with catalytic reaction dynamics to synthesize the necessary enzyme and membrane components from nutrients , the entropy production per unit cell - volume growth is calculated analytically and numerically based on the rate equation for chemical kinetics and linear non - equilibrium thermodynamics . the minimal entropy production per unit cell growth is found to be achieved at a non - zero nutrient uptake rate , rather than at a quasi - static limit as in the standard carnot engine . this difference appears because the equilibration mediated by the enzyme exists only within cells that grow through enzyme and membrane synthesis . optimal nutrient uptake is also confirmed by protocell models with many chemical components synthesized through a catalytic reaction network . the possible relevance of the identified optimal uptake to optimal yield for cellular growth is also discussed .
in social sciences , people in social organizations ( e.g. , a country or a company ) can be categorized into different rankings of socioeconomic tiers based on factors like wealth , income , social status , occupation , power , etc . in this paper, we will take `` company '' as an example and the internal hierarchical structure of employees in a company can be outlined with _ company organizational chart _ formally .most company organizational charts are usually tree - structure diagrams with ceo at the root , executive vice presidents ( evps ) at the second level and so forth .company organizational chart shows the company internal management structure as well as the relationships and relative ranks of employees with different positions / jobs , which is a common visual depiction of how a company is organized . nowadays , to facilitate the collaboration and communication among employees , a new type of online social networks named _ enterprise social networks _( esns ) has been adopted inside the firewalls of many corporations .a representative example of online esns is yammer .over 500,000 businesses around the world are now using yammer , including 85% of the fortune 500 .yammer provides employees with various enterprise social network services to help them deal with daily work issues and contains abundant heterogeneous information generated by employees online social activities .* problem studied * : company internal organizational chart is usually confidential to the public for the privacy and security reasons . in this paper , we want to infer the organizational chart of a company based on the heterogeneous information in online esns launched in the company , and the problem is formally named as the _ nference of rganization hart _ ( ioc ) problem . to help illustrate the ioc problem more clearly , we also give an example in figure [ fig : example ] , where the left plot is about an online esn adopted in a company and the right plot shows the company s organizational chart . in the esn , users can have different types of social activities , e.g. , follow other users , join groups , write / reply / like posts , etc .meanwhile , in the organizational chart , employees are connected by supervision links from managers to subordinates , who are organized into a rooted tree of depth with ceo `` adam '' at the root . in companies , managerscan usually supervise several subordinates simultaneously , while each subordinate only reports to one single manager .for instance , in figure [ fig : example ] , ceo `` adam '' manages `` bob '' and `` candy '' concurrently , while `` david '' only needs to report to `` bob '' .the ioc problem is an interesting yet important problem . 
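as a concrete illustration of the chart structure just described , the short python sketch below stores supervision links as ( manager , subordinate ) pairs for the adam / bob / candy / david example ( with one made - up extra employee ) , verifies that every employee except the ceo reports to exactly one manager and is connected to the root , and derives each employee's level , with the ceo at level 1 , direct reports at level 2 , and so forth .

```python
# supervision links as (manager, subordinate) pairs; "adam" is the CEO/root
supervision = [("adam", "bob"), ("adam", "candy"),
               ("bob", "david"), ("candy", "eva")]   # "eva" is a made-up employee

def validate_chart(links, root):
    manager_of = {}
    for mgr, sub in links:
        if sub in manager_of:            # a subordinate reports to one manager only
            raise ValueError(f"{sub} has two managers")
        manager_of[sub] = mgr
    if root in manager_of:
        raise ValueError("the root (CEO) must not have a manager")
    # every employee must be reachable from the root, i.e. the chart is one rooted tree
    for emp in manager_of:
        seen, cur = set(), emp
        while cur != root:
            if cur in seen or cur not in manager_of:
                raise ValueError(f"{emp} is not connected to the root")
            seen.add(cur)
            cur = manager_of[cur]
    return manager_of

def level(emp, manager_of, root):
    """CEO is level 1, direct reports level 2, and so on."""
    return 1 if emp == root else 1 + level(manager_of[emp], manager_of, root)

m = validate_chart(supervision, "adam")
print({e: level(e, m, "adam") for e in ["adam", "bob", "candy", "david", "eva"]})
```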
besides inferring company organizational chart ,it can also be applied in other real - world concrete applications : ( 1 ) identifying the command structures of terrorist organizations based on the communication / traffic networks of their members .the command structures of terrorist organizations are usually pyramid diagrams outlining their support systems consisting of the _ leaders _ , _ operational cadre _ , _ active supporters _ and _ passive supporters _ .uncovering their internal operational structure and determining roles of members will be helpful for conducting precise strikes against their key leaders and avoid the tragic events , like 9/11 .( 2 ) inferring the social hierarchies of animals based on their observed interaction networks .many animals ( like , mammals , birds and insect species ) are usually organized into dominance hierarchies .identifying and understanding the organizational hierarchies of animals will be helpful to design and carry out effective conservation measures to protect them .albeit its importance , ioc is a novel problem and we are the first to propose to study it based on online esns .the ioc problem is totally different from existing works : ( 1 ) `` _ _ hierarchy detection in social networks _ _ '' , which only studies the division of the regular users of the social networks into different hierarchies , who are not actually involved in any organizations ; ( 2 ) `` _ _ organizational intrusion _ _ '' , which focuses on attacking organizations and attaining company internal information only ; and ( 3 ) `` _ _ inferring offline hierarchy from social networks _ _ '' , which merely infers fragments of offline hierarchical ties in _ homogeneous _ networks , instead of reconstructing the whole organizational chart .different from all these works , in this paper , we aim at recovering the complete organizational chart of a company ( including both the hierarchical tiers of employees and the supervision links from managers to subordinates ) based on the _ heterogeneous _ information about the employees in online esns. meanwhile , to guarantee the smooth operations of companies , the inferred organizational chart needs to meet certain structural requirements , including both ( 1 ) macro - level depth requirement , and ( 2 ) micro - level width requirement .two classical organizational structures adopted by companies are the vertical structure and the horizontal structure .vertical organizational structure with well - defined chains of command clearly outlines the responsibilities of each employee but will result in delays in information delivery .meanwhile , horizontal organizational structure with flat command system involves everyone in decision making but will lead to difficulties in coordinating the activities of different departments .for instance , based on the input social network shown in plot a of figure [ fig : balance_example ] , we give two extreme cases of the vertical and horizontal organizational charts without depth regulation in plots b and c respectively , both of which will lead to serious management problems for large companies involving tens of thousands employees .proper regulation of the inferred organizational chart s depth ( i.e. , the macro - level depth requirement ) is generally desired . 
on the other hand ,most employees in companies need good supervisors to coach and instruct their daily work , but the number of subordinates each manager can supervise is limited , which can be determined by their management capacities , available time and energy . rationally regulating the allocation of supervision workload among managers ( i.e. , the micro - level width requirement ) can improve the management effectiveness significantly . for instance , in plot d of figure [ fig : balance_example ] , we show an inferred organizational chart with depth regulation but no subordinate allocation regulation . in the plot ,users in esn are stratified into tiers ( which is relatively reasonable compared to the extreme cases in plots b and c ) but the employees management workloads at tier are all assigned to one single manager , which may be beyond his / her management ability . despite its importance and novelty ,the ioc problem is very hard to solve due to the following challenges : * _ regulated social stratification _ : _ effective social stratification _ to partition users into different hierarchical arrangements ( i.e. , identifying the relative manager - subordinate roles of employees ) while meeting the macro - level depth requirement is the prerequisite for addressing the ioc problem . * _ supervision link inference _ : supervision link is a new type of link merely existing from managers to their subordinates .predicting the existence of potential _ supervision links _ with the heterogeneous information in esns is still an open problem . *_ regulated supervision workload allocation _ : to maximize the management effectiveness and efficiency , the number of subordinates each manager can supervise is limited by the management threshold . in other words , supervision links in organizational chart have an inherent _ k - to - one _ constraint . to address all the above challenges ,a new unsupervised organizational chart inference framework named create ( hr covr ) is proposed in this paper .several new concepts ( e.g. , `` _ _ class transcendence social links _ _ '' , `` _ _ matthew effect based constraint _ _ '' and `` _ _ chart depth regulation constraint _ _ '' ) will be introduced and create resolves the _ regulated social stratification _challenge by minimizing the existence of class transcendence social links in esns .create tackles the _ supervision link inference _ challenge by aggregating multiple social meta paths in the esn between to consecutive social hierarchies .finally , create handles the _ regulated supervision workload allocation _ challenge by applying _ network flow _ to match consecutive social hierarchies to preserve the _ k - to - one _ constraint on _ supervision links_. the remaining parts of the paper are organized as follows . 
in section [ sec : formulation ] , we will define some important terminologies and the ioc problem .method create will be introduced in section [ sec : method ] .extensive experiment results are available in section [ sec : experiment ] .finally , we describe the related works in section [ sec : relatedwork ] and conclude this paper in section [ sec : conclusion ] .in this section , we will introduce the formal definitions of `` _ _ heterogeneous social network _ _ '' and `` _ _ organizational chart _ _ '' at first and then define the ioc problem with these two concepts .* definition 1 * ( heterogeneous social networks ) : a _ heterogeneous social network _ can be represented as , where and are the sets of different types of nodes and complex links among these nodes in the network respectively . as introduced in section [ sec : introduction ] , users in online esns ( e.g. , yammer ) have various types of social activities , e.g. , follow other users , join groups , write / reply / like online posts , etc . as a result , yammer can be represented as a _ heterogeneous social network _ , where is the set of user , group and post nodes in and denotes the set of social links among users , join links between users and groups , as well as write , reply and like links between users and posts respectively .* definition 2 * ( organizational chart ) : the _ organization chart _ of a company can be represented as a _ rooted tree _ , where is the set of _ employees _ , denotes the set of directed _ supervision links _ from managers to subordinates in and _ root _ represents the ceo in the company .based on the definitions of _ heterogeneous social network _ and _ organizational chart _ , we can define the ioc problem formally as follows : * definition 3 * ( organizational chart inference ( ioc ) ) : given an online esn launched in a company , the ioc problem aims at inferring the most likely organizational chart of the company , where ( is the user set in network ) .furthermore , considering that the node set as well as the root node in are fixed , the ioc problem actually aims at inferring the most likely _ supervision links _ among employees . the inferred supervision links together with the node set as well as the node can recover the original organizational chart of the companyconsidering that supervision links exist merely between managers and subordinates , we propose to stratify users in enterprise social networks into different _ social classes _ to identify their relative manager - subordinate roles in subsection [ subsec : stratification ] .macro - level depth requirement of the inferred chart is achieved with the _ depth regulation constraint _ in the social stratification objective function .potential supervision links can be inferred between employees in consecutive classes by aggregating social meta paths among employees in the esn in subsection [ subsec : metapath ] . to preserve the _ k - to - one _ constraint on supervision links ( i.e. 
, the micro - level width requirement ) , redundant non - existing supervision links will be pruned in subsection [ subsec : networkflow ] .generally , as shown in figure [ fig : framework ] , framework create has three steps : ( 1 ) regulated social stratification , ( 2 ) supervision link inference , and ( 3 ) regulated social class matching , which will all be introduced in this section .supervision links merely exist between managers and subordinates .division of users into hierarchies to identify their relative manager - subordinate roles can shrink the supervision link inference space greatly .the process of hierarchizing users in online esn is called _ social stratification _ formally .* definition 4 * ( social stratification ) : traditional _ social stratification _ concept used in social science denotes the ranking and partition of people into different hierarchies based on various factors , e.g. , power , wealth , knowledge and importance . in this paper , we define _ social stratification _ as the partition process of users in online esns into different hierarchies according to their management relationships , where managers are at upper levels , while subordinates are at lower levels .the relative stratified levels of users in online esn are defined as their social classes .* definition 5 * ( social class ) : _ social class _ is a term used by social stratification models in social science and the most common ones are the upper , middle , and lower classes . in this paper , we define _ social class _ of users in online esns as their management level in the company , where ceo belongs to social class 1 , evps belong to class 2 , and so forth . in social stratification, users in online esns will be mapped to their _ social classes _ according to mapping : . for each user ,his social class is defined recursively as follows : where represents the direct manager of . in social science ,the working class are eager to get acquainted with and join the upper echelons of their class by either accumulating wealth , imitating their dressing styles , and mimicking their dialect and accents . meanwhile , the upper class are very cohesive and they tend to be friends who share similar background . so is the case for the social links in _ enterprise social networks_. by analyzing the yammer network data , we observe that the probability for users to follow upper - level managers is on average , while that of following subordinates is merely . as a result , in online esns, subordinates tend to follow their managers , while people in management are reluctant to initiate the friendship with their subordinates .based on such an observation , we introduce the concept of _ class transcendence _ social links and propose to stratify users by minimizing the existence of such links in esns .* definition 6 * ( class transcendence social link ) : link ( i.e. , follows ) is defined as a _ class transcendence social link _ in online esn iff and ( where smaller social class denotes upper management level in the organizational chart ) . in social stratification , each introduced class transcendence social link in the result will lead to a _ class transcendence penalty_. let be the _ social stratification _result of all users in the esn .for any directed social link in the esn , the _ class transcendence penalty _ introduced by it can be represented as the _ class transcendence penalty _ introduced by all social links ( i.e. , ) in the esn can be represented as the rich get richer " ( i.e. 
, the matthew effect ) is a common phenomenon in social science literally referring to issues of fame or status as well as cumulative advantage of economic capital . by analyzing the yammer network data, we have similar observations : `` people at higher management level can accumulate more followers easily '' .such an observation provides important hints for inferring users relative management levels according to their in degrees in esn ( i.e. , the number of followers ) . * definition 7 * ( matthew effect based constraint ) : for any two given users and in the network , let and be the follower sets of and in the network respectively .the _ matthew effect based constraint _ on users and can be represented as if . furthermore , to maximize the operation efficiency of companies , the inferred organizational chart needs to meet the _ macro - level depth requirement _ , which can be achieved with the following chart depth regulation constraint .* definition 8 * ( chart depth regulation constraint ) : the _ chart depth regulation constraint_ avoids obtaining organizational chart with too short command chains ( e.g. , the extreme horizontal structure ) and can be represented as where parameter is used to regulate the depth of the chart , whose sensitivity analysis will be given in section [ sec : experiment ] .furthermore , term is also added to the minimization objective function to avoid obtaining charts with too long command chains ( i.e. , the extreme vertical structure ) .based on all the above remarks , the _ optimal _ regulated social stratification of users in esn can be obtained by solving the following objective function : the integer programming objective function can be solved with open source toolkits , e.g. , glpk , pulp , etc ., very easily and the obtained results of variables represent the inferred social classes of users in online esn .it is a challenge to estimate the supervision relations between the esn members in consecutive social classes .here we use the meta paths concept introduced in to identify and evaluate different types relationship in esn . * social meta paths in enterprise social networks * *_ follow _ : user user , whose notation is `` '' or . *follower of follower _ : user user user , whose notation is `` '' or . *_ common followee _: user user user , whose notation is `` '' or . * _ common follower _ : user user user , whose notation is `` '' or . * _ common group membership _ : user group user , whose notation is `` '' or . * _ reply post _ : user post post user , whose notation is `` '' or .* _ like post _ : user post user , whose notation is `` '' or .an existing user intimacy measure , path - sim , based on meta paths was introduced in , which can calculate the propagation probability between users via meta paths in _ undirected homogeneous networks_. 
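since the stratification objective above is an integer program that , as noted , can be handed to solvers such as glpk or pulp , a minimal pulp sketch is given below . because the exact penalty , depth - regulation and matthew - effect expressions are not reproduced in this text , the formulation uses one plausible linearization ( a penalty of max(0 , c_v - c_u) per follow link u -> v , an average - class lower bound alpha for depth regulation , and a small weight on the total class sum to discourage overly long chains ) ; these specifics should be treated as assumptions rather than the authors' objective .

```python
import pulp

# toy inputs: follow links (u follows v) and follower counts in the ESN
users = ["ceo", "evp1", "evp2", "staff1", "staff2"]
follows = [("staff1", "evp1"), ("staff2", "evp1"), ("evp1", "ceo"),
           ("evp2", "ceo"), ("staff2", "ceo"), ("evp1", "staff1")]
followers = {u: sum(1 for (_, v) in follows if v == u) for u in users}
max_class, alpha = len(users), 2.0        # alpha regulates the chart depth

prob = pulp.LpProblem("regulated_social_stratification", pulp.LpMinimize)
c = {u: pulp.LpVariable(f"class_{u}", 1, max_class, cat="Integer") for u in users}
# pen[u, v] >= c[v] - c[u] charges follow links that point "down" the hierarchy
pen = {e: pulp.LpVariable(f"pen_{e[0]}_{e[1]}", lowBound=0) for e in follows}
for (u, v) in follows:
    prob += pen[(u, v)] >= c[v] - c[u]

# Matthew-effect constraint: more followers implies an equal or higher level
for u in users:
    for v in users:
        if followers[u] > followers[v]:
            prob += c[u] <= c[v]

prob += c["ceo"] == 1                                      # root is fixed at class 1
prob += pulp.lpSum(c.values()) >= alpha * len(users)       # depth regulation
prob += pulp.lpSum(pen.values()) + 0.01 * pulp.lpSum(c.values())  # objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({u: int(c[u].value()) for u in users})
```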
to deal with _ directed heterogeneous networks _ , we extend it and introduce a new intimacy measure , dp - intimacy ( directed path - intimacy ) , based on social meta path : where denotes the instance set of meta path going from to in the esn .different social meta paths capture the intimacy between users in different aspects and overall intimacy between users can be obtained by aggregating information from all these social meta paths .let be the intimacy scores between users and calculated based on social meta paths respectively .without loss of generality , we choose logistic function as the intimacy aggregation function , the overall intimacy between users and can be represented as ,\ ] ] where the value of denotes the weight of social meta path and .meta path aggregation based supervision link inference method proposed in previous step calculate the intimacy scores of all the potential links between pairs of social classes .however , to regulate the supervision workload allocation , the number of subordinates each manager can supervise is limited by the management threshold . in this section , we will prune the redundant non - existing supervision links with network - flow based regulated social class matching to preserve the _ k - to - one _ constraint on _ supervision links_. based on the _ social stratification _results , we can stratify all the users into _ social classes _ ] can be represented as set . by aggregating information in various _ social meta paths _ , we can calculate the intimacy scores of all potential _ supervision links _ between consecutive _ social classes _ , which exist between and can be represented as set .links in are associated with certain weights ( i.e. , the calculated intimacy scores ) , which can be obtained with the mapping .users in social classes and ( i.e. , and ) together with all the potential supervision links between them ( i.e. , ) and their intimacy scores ( i.e. , ) can form a weighted bipartite preference graph . *definition 9 * ( weighted bipartite preference graph ) : the _ weighted bipartite preference graph _ between users in and can be represented as .an example of _ weighted bipartite preference graph _ is shown in the upper plot of figure [ fig : network_flow ] . in the example , all the potential supervision links between the upper - level and lower - level individuals are represented as the directed purple lines between them , whose weights are the numbers marked on the lines .each employee in the figure is associated with multiple potential _supervision links _ and the redundant ones can be pruned with the _ network flow _method introduced in the next subsection . based on the _ bipartite preference graph _ , we propose to construct the following _ network flow graph _ first . * definition 10 * ( network flow graph ) : based on bipartite preference graph , the _ network flow graph _ can be represented as .node set includes all nodes in and two dummy nodes : source node and sink node ( i.e. , ) . besides all the links in ,we further add directed links from to all nodes in , as well as those from all nodes in to ( i.e. , ) . only the links in are associated with weights , which can be obtained with mapping .for instance , based on the _ bipartite preference graph _ in the upper plot of figure [ fig : network_flow ] , we can construct its corresponding _ network flow graph _( i.e. 
, the lower plot ) .all the links in the network flow graph are directed denoting the flow direction .* bound constraint of network flow * for each link , we allow a certain amount of flow going through within range ] .a subset of these inferred supervision links will be selected via the regulated social class matching based on the network - flow model to preserve the _ k - to - one _ constraint on supervision links .meanwhile , to demonstrate the effectiveness of create , we compare create with many baseline methods , including both state - of - art and traditional methods in social stratification and organizational chart inference . * social stratification methods * : * _ regulated social stratification _ :regulated social stratification is the first step of create proposed in this paper , which is also named as create for simplicity .create exploits the concept of class transcendence social links and matthew effect based constraint to stratify users in esns into different social classes .in addition , to regulate the depth of inferred social classes about employees , create further adds a _ chart depth regulation constraint _ into the objective function . * _ agony based social division _ : asd is a state - of - art social division method proposed in , which detects the social hierarchies of regular users in general online social networks .asd is not designed for organizational chart inference and does nt consider the matthew effect based constraint nor the chart depth regulation constraint .* organizational chart inference methods * : * _ social stratification + link prediction + matching _ ( create ) : create is the framework proposed in this paper and it has three steps : ( 1 ) regulated social stratification , ( 2 ) link inference and ( 3 ) regulated social class matching . *_ social stratification + link prediction _( create - sl ) : create - sl contains two steps : ( 1 ) _ social stratification _ , and ( 2 ) _ link prediction _ based on accumulated social meta paths . create - sl has no _ matching step _ to keep the micro - level width requirement and the outputs can not meet the _ k - to - one _ constraint . * _ social stratification + matching _( create - sm ) : create - sm contains two steps : ( 1 ) _ social stratification _ , and ( 2 ) social class matching .create - sm has no supervision link prediction step and social links from upper - level social class to the lower - level are regarded as the potential supervision links candidates . *_ social stratification _( create - s ) : create - s is identical to create - sl except that it has no matching step and outputs the all the _ social links _ between sequential hierarchies as the supervision links . *_ traditional unsupervised link prediction methods _: no existing supervised link prediction models can be applied as no labeled supervision link exist . for completeness, we further compare _ our _ with traditional unsupervised baseline methods which include _ common neighbor ( cn ) _ , _ jaccard s coefficient ( jc ) _ and _ adamic adar ( aa ) _ between consecutive stratified social classes .* social stratification evaluation metrics * in the _ social stratification _step , the outputs are the inferred _ social classes _ of all the employees . by comparing them with individuals real - world _ social classes _( i.e. , the ground truth ) , we can calculate the _mean absolute error _ , _ mean squared error _ and _ coefficient of determination _( i.e. , ) of the results .in addition , the ratio of correctly stratified users ( i.e. 
, accuracy ) can also be used to measure the performance .so , the metrics used to evaluate the performance of different social stratification methods include _ mean absolute error _ ( mae ) , _ mean squared error _ ( mse ) , and _ accuracy_. * organizational chart inference evaluation metrics * methods create - sl , create - l and the traditional unsupervised link prediction can only output the confidence scores of all potential supervision links without labels , whose performance will be evaluated by metrics and precision . meanwhile , create and create - sm can output both labels and scores of potential supervision links and , besides and precision , we will also evaluate their performance with precision , recall and f1-score . in social stratification , parameter is applied to maintain the macro - level depth requirement , which can control the depth of the organizational chart . before comparing the performance of create with asd, we will analyze the sensitivity of parameter at first .we select with values in and obtain the accuracy scores achieved by create as shown in figure [ fig : alpha ] ..,scaledwidth=100.0% ] when parameter is very small ( e.g. , from to ) , we observe that it has no effects on the performance of create .the possible reason can be that _matthew effect based constraint _ can already effectively outline the relative hierarchical relationships among users in online esns , the average social class of users obtained based on which is already greater than .when becomes larger ( from to ) , the _ structure regulation constraint _ starts to matter more and the social stratification accuracy goes up steadily and achieve the highest value at , i.e. , the default value of in later experiments .create performs better as increases shows that the _ structure regulation constraint _ can stretch the organizational structure and stratify users in their correct social classes .however , as further increases ( i.e. , from to ) , the accuracy achieved by create decreases dramatically .the reason can be that larger stretches the organizational structure too much and put lots users into the wrong social classes .for example , it is nearly impossible for users to achieve as the average social class , which is actually the largest social class in the sampled fully aligned organizational chart .social stratification results of create and asd are given in figures [ eg_fig9_comp_level]-[eg_fig_comp1 ] , where figure [ eg_fig9_comp_level ] shows the results achieved by create and asd at each social class ( evaluated by precision and recall respectively ) and figure [ eg_fig_comp1 ] shows their overall performance ( evaluated by accuracy , mae , mse and ) .from the microscopic perspective , we observes that create performs better than asd consistently at all social classes .create achieves both precision and recall at social classes , i.e. , create identifies the top management levels of the company correctly .the performance of create at other social classes is also very promising .for instance , the precision scores achieved by create at social classes ( besides ) are either or close to and the recall scores of create at social classes are also very high , which all outperform those of asd with significant advantages . from a macroscopic perspective , the performance of create in stratifying the whole user set in esn is very excellent and much better than that of asd . 
the accuracy , mae , mse and scores achieved by create are , , and respectively , all of which outperform those achieved by asd . for example , the accuracy achieved by create is almost triple that obtained by asd , while the mae and mse obtained by create are merely the and of those achieved by asd . in addition , asd gets negative scores in identifying social classes of users in esns , which indicates that the identified users ' social classes are massively disordered and have no linear correlation with the social class ground truth at all . create has thus proved highly effective in stratifying users in esns , and based on this we further study its performance in inferring the potential supervision links between pairs of consecutive social classes ; the results are evaluated by auc and precision in table [ tab : result1 ] as well as by recall , precision and f1 in figure [ eg_fig_comp2 ] . in table [ tab : result1 ] , we compare create ( with different management thresholds ) with all the other baseline methods , where create ( with parameter and ) performs the best . compared with create - sl ( or create - s ) , create ( or create - sm ) , which has the matching step , can identify supervision links more effectively . for instance , create ( with ) outperforms create - sl by over in auc and in precision , and create - sm ( with parameter ) outperforms create - s with remarkable advantages . this demonstrates that the matching step can effectively prune non - existing supervision links and preserve the micro - level width requirement ( i.e. , the _ k - to - one _ constraint ) . compared with create - sm and create - s , create , which infers the potential supervision links based on heterogeneous information in esns instead of merely regarding the social links as supervision link candidates , achieves much better results . for example , in table [ tab : result1 ] , the auc of create is higher than that of create - sm and create - s , while the precision of create is also roughly higher . in addition , in figure [ eg_fig_comp2 ] , the recall , precision and f1 obtained by create are almost triple those achieved by create - sm . this confirms the argument that heterogeneous information in esns can capture the relationships among colleagues ( especially between managers and subordinates ) . in addition , we also compare create with traditional unsupervised link prediction methods , including cn , jc and aa , and the advantages of create are very obvious according to table [ tab : result1 ] : create outperforms all these unsupervised link prediction methods with significant advantages . in social class matching , the management threshold parameter plays a key role in constraining the number of supervision links connected to each manager . the sensitivity of this parameter is analyzed here , with the results achieved by create ( with different thresholds ) evaluated by different metrics shown in figure [ eg_fig_k ] . a small management threshold ( e.g. , ) limits each manager 's subordinate number to and preserves only the supervision links with extremely high likelihood , but it may miss many promising ones . however , as the threshold increases , more links with high likelihood are preserved and the metric scores increase consistently .
meanwhile , when the threshold goes to , the performance of create degrades dramatically . the possible reason is that , with a larger threshold , each manager can have too many supervision links , which may exceed the number of subordinates they have in the real world . enterprise social networks are important sources of reliable information for employees in companies . ehrlich et al . propose to search for experts in enterprises with both text and social network analysis techniques . they propose to examine the users ' dynamic profile information and get the social distance to the expert before deciding how to initiate the contact . enterprise social networks can bring many benefits to companies , and the motivations for enterprise social network adoption in companies are studied in detail in . users in enterprise social networks connect with and learn from each other through personal and professional sharing . people sensemaking and relationship building on an enterprise social network site are studied by dimicco et al . in . in addition , social connections among users in enterprise social networks usually have multiple facets . propose to study the multiplexity of social connections among users in enterprise social networks , which includes both professional and personal closeness . some work has been done to infer the hierarchies of individuals from social networks . a measure , agony , is proposed in ; by minimizing it , the authors obtain a hierarchy detection method . a random graph model and markov chain monte carlo sampling are proposed by clauset et al . in , which can address the problem of structural inference of hierarchies in networks . maiya et al . propose to identify the hierarchies in social networks that achieve the maximum likelihood in . all the above three papers focus on dividing individuals into different hierarchies only . an offline hierarchical ties inference method has been proposed by jaber et al . in to discover offline links among people based on a time - based model . however , none of these papers can recover the whole organizational chart . cross - social - network studies have become a hot research topic in recent years . kong et al . are the first to propose the concepts of `` anchor links '' , `` anchor users '' , `` aligned networks '' , etc . a novel network anchoring method is proposed in to address the network alignment problem . cross - network heterogeneous link prediction problems are studied by zhang et al . by transferring links across partially aligned networks . besides link prediction problems , jin et al . propose to partition multiple large - scale social networks simultaneously in , and zhang et al . study the community detection problem across partially aligned networks in . zhan et al . analyze the information diffusion process across aligned networks . in this paper , we have studied the organizational chart inference ( ioc ) problem based on heterogeneous online esns . to address the ioc problem , a new chart inference framework , create , has been proposed in section [ sec : method ] . create consists of three steps : ( 1 ) regulated social stratification , ( 2 ) supervision link inference with social meta path aggregation , and ( 3 ) regulated social class matching . experiments on a real - world esn and organizational chart dataset have demonstrated the effectiveness of create .
[sec : ack ]this work is supported in part by nsf through grants cns-1115234 , google research award , the pinnacle lab at singapore management university , and huawei grants .organisation - organizational structure - organisational chart .http://kalyan-city.blogspot.com/2010/06/organisation-organizational-structure.html [ http://kalyan-city.blogspot.com/2010/06/organisation-organizational-structure.html ] .
nowadays , to facilitate communication and cooperation among employees , a new family of online social networks has been adopted in many companies ; these are called `` enterprise social networks '' ( esns ) . esns can provide employees with various professional services to help them deal with daily work issues . meanwhile , employees in companies are usually organized into different hierarchies according to the relative ranks of their positions . the company 's internal management structure can be outlined visually with the organizational chart , which is normally kept confidential from the public due to privacy and security concerns . in this paper , we study the ioc ( inference of organizational chart ) problem , which aims to identify a company 's internal organizational chart based on the heterogeneous online esn launched in it . ioc is very challenging to address as , to guarantee smooth operations , the internal organizational charts of companies need to meet certain structural requirements ( about their depth and width ) . to solve the ioc problem , a novel unsupervised method create ( hr covr ) is proposed in this paper , which consists of three steps : ( 1 ) social stratification of esn users into different social classes , ( 2 ) supervision link inference from managers to subordinates , and ( 3 ) consecutive social class matching to prune the redundant supervision links . extensive experiments conducted on a real - world online esn dataset demonstrate that create can perform very well in addressing the ioc problem .
most of our knowledge of the universe at far infrared wavelengths has been obtained with photoconductive detectors , particularly as used in the infrared astronomy satellite ( iras ) and the infrared space observatory ( iso ) .these detectors have been selected because they provide excellent performance at relatively elevated operating temperatures ( compared with those needed to suppress thermal noise in bolometers ) .similar considerations led to development of high performance photoconductor arrays for the multiband imaging photometer for spitzer ( mips ) , namely a ge : ga array and a stressed ge : ga array operating at 70 and 160 respectively which were built at the university of arizona .to provide complementary measurements at 24 , the instrument also includes a si : as blocked impurity band ( bib ) array , built at boeing north america ( bna ) under contract to the infrared spectrograph ( irs ) team . in the bib or impurity band conduction ( ibc ) architecture ,the high impedance required to minimize johnson noise is provided by a thin , high - purity layer of silicon .the infrared absorption occurs in a second layer , which can be relatively strongly doped . due to the separation of these two functions, the detector layers can be optimized separately .thus , these devices can be designed and built to have fast response , high resistance to cosmic ray irradiation induced responsivity shifts , high quantum efficiency , and good photometric behavior .because the processing necessary for high performance silicon ibc devices has only relatively recently become possible there is relatively little experience with them in space astronomy missions .an early - generation detector array was used in the short wavelength spectrometer ( sws ) in iso .initially , the device showed degradation due to damage by large ionizing particle exposures when the satellite passed through the trapped radiation belts .once the operating conditions were adjusted to minimize these effects , the sws detectors showed the expected virtues of this type of device even though they did not achieve their preflight sensitivity expectations . at wavelengthslonger than 40 , photoconductors are typically built in germanium because of the availability of impurity levels in this material that are much more shallow than those in silicon . achieving the appropriate structure andsimultaneously the stringent impurity control for germanium ibc devices has proven difficult . as a result, all far infrared space astronomy missions have used simple bulk photoconductors .mips uses bulk gallium doped germanium ( ge : ga ) detectors , both unstressed and stressed .in such detectors , the same volume of material determines both the electrical and photo - absorptive properties , making the optimization less flexible than with si ibc detectors .consequently , their behavior retains some undesirable properties that can be circumvented through the more complex architecture of ibc devices . nonetheless , generally satisfactory performance is possible and has been achieved in past space astronomy missions .the 60 and 100 bands in iras utilized 15 ge : ga photoconductors each . 
the detectors were read out with transimpedance amplifiers that used junction field effect transistor ( jfet ) first stages mounted in a way that isolated them thermally .this allowed these transistors to be heated , resulting in low noise and stable operation .detector calibration was maintained by flashing reverse bolometer stimulators mounted in the center of the telescope secondary mirror , and cosmic ray effects were erased by boosting the detector bias to breakdown .the intrinsic performance of the detectors was limited by the johnson noise of the transimpedance amplifier ( tia ) feedback resistors and by other noise sources associated with the readout .the in - flight performance was similar to expectations from pre - flight calibrations .the isophot instrument carried a array of unstressed ge : ga detectors operating from 50 to 105 and a array of stressed devices operating from 120 to 200 .the readout was by a capacitive transimpedance metal oxide semiconductor field - effect transistor ( mosfet ) amplifier whose processing had been adjusted to improve its performance at low temperatures .calibration was assisted with a stimulator built into the instrument , which could be viewed by adjusting the position of a scan mirror . in practice ,the unstressed focal plane never achieved the performance level anticipated from laboratory measurements of its noise equivalent power ( nep ) .the performance of the stressed devices was substantially better , due in part to the relatively large fast response component of these devices ( compared with the slow component ) and their better thermal isolation from the readout amplifiers .the long wavelength spectrometer ( lws ) instrument on iso used a single ge : be detector , five ge : ga detectors , and four stressed ge : ga detectors .the readouts were based on jfets , mounted with thermal isolation and heated to a temperature where they operated with good stability and low noise .the readout circuit was an integrating source follower and neps of were measured in the laboratory .calibration was assisted with built in stimulators that were flashed between spectral scans .on orbit , it was found that frequent small glitches , probably associated with cosmic ray hits , limited the maximum integration times to shorter values than had been anticipated and also required a lower operating voltage . with these mitigations ,the neps were found to be higher in orbit than expected from ground test data . in mips ,the ge : ga detectors are carefully isolated thermally from their readouts and operated at sufficiently cold temperatures that their dark currents are low and stable .the mosfet - based readouts use a specialized foundry process that provides them with good dc stability even at the low operating temperature of .5k .this feature , combined with the capacitive tia ( ctia ) circuit , maintains the detector bias accurately . a scan mirror ( based on a design provided by t. de graauw ) modulates the signals on a pixel so measurements can be obtained from the relatively well - behaved fast component of the detector response .responsivity variations are tracked with the aid of frequent stimulator flashes . 
finally , the instrument operations force observers to combine many short observations of a source into a single measurement .the high level of redundancy in the data helps identify outlier signals and also improves the calibration by simple averaging over variations .the efficacy of this operational approach is confirmed by the on - orbit results .details on the design and construction of mips can be found in .the inflight performance of mips is described by .this paper describes the approaches for reduction and calibration of the mips data .section [ sec_challenge ] details the challenges of using si and ge detectors in a space astronomy mission .section [ sec_design ] gives a summary of the design and operational features of mips that address these challenges .section [ sec_overview ] gives an overview of the three stages of mips data reduction .these stages are discussed in more detail in the following three sections .section [ sec_ramps_to_slopes ] details the processing steps to turn the integration ramps into measured slopes .section [ sec_slope_cal ] discusses the corrections to transform the slopes into calibrated fluxes . section [ sec_redund ] gives a brief overview of the use of the inherent redundancy in the observations to further improve the reduction .section [ sec_inflight ] gives the results of initial testing of these reduction techniques with flight data .finally , section [ sec_summary ] provides a summary .at high backgrounds , such as might be encountered in an airborne instrument , far infrared photoconductors behave relatively well , with rapid adjustment of the detector resistance appropriate to a change in illumination level . as the background is decreased , the adjustment to equilibrium levels occurs in a multistep process with multiple associated time constants as discussed below .thus , the detectors can be used in a straightforward manner at high backgrounds but precautions must be taken at low ones to track the calibration . for a more detailed discussionsee . the fast response component in these detectors results from the current conducted within the detector volume associated with the drift of charge carriers freed by absorption of photons .the speed of this component is controlled by the propagation of a zone boundary with drift velocity , so that the time constant is given by the free carrier lifetime divided by the photoconductive gain .this time is very fast ( microseconds or shorter ) in comparison to normal detection standards .however , as charge moves within the detector , the electrical equilibrium must be maintained .for example , charge carriers generated by photoionization are removed from the detector when they drift to a contact .they are replaced by injection of new charge carriers from the opposite contact , but the necessity for new charge can only be conveyed across the detector at a characteristic time proportional to the `` dielectric relaxation time '' , basically its capacitive or time constant : here , is the dielectric constant of the material and is the mobility for the charge carrier of interest , is the permittivity of free space , is the density of free carriers , and is the charge of the electron .the slow response components arise from this phenomenon .the form of this time constant makes explicit the dependence on illumination level through the density of free charge carriers , . 
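the displayed form of the dielectric relaxation time did not survive in the text above , so the standard expression is reproduced here for reference ; the symbol names ( dielectric constant kappa , free - space permittivity epsilon_0 , carrier mobility mu , free carrier density n , electron charge q ) are chosen here to match the quantities the paragraph lists .

```latex
\tau_{d} \;=\; \frac{\kappa\,\epsilon_{0}}{q\,\mu\,n}
```

with this form , the inverse dependence on the free carrier density n makes the stated scaling with illumination level explicit : lower backgrounds mean fewer free carriers and hence longer dielectric relaxation times .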
in fully illuminated detectors ( for example , the integrating cavities used for the 160 array ) and at the low backgrounds appropriate for space - borne operation , can be tens of seconds . in transverse contact detectors , such as those used for the mips 70 array ,the part of the detector volume near the injecting contact may be poorly illuminated and have large resistance .the detector therefore adjusts to a new equilibrium only at the large dielectric time constant of this layer , which can be hundreds of seconds at low backgrounds .the initial shift of charge in the detector can set up a space charge that reduces the field in the bulk of the device , leading to a reduction of responsivity following the initial fast response . from its appearance on a plot of response versus time ,this behavior is described as `` hook '' response .as the field is restored at a characteristic rate of , the response grows slowly to a new equilibrium value .see for detailed modeling of these effects . in space applications ,ionizing particles such as cosmic rays also affect the calibration of these detectors .the electrons freed by a cosmic ray hit can be captured by ionized minority impurities , reducing the effective compensation and increasing the responsivity .the shifts in detector characteristics can be removed by warming it to a temperature that re - establishes thermal equilibrium , and then cooling it back to proper operating conditions . between such anneal cycles , the responsivity needs to be tracked to yield calibrated data .all successful uses of far infrared photoconductors at low backgrounds have included local relative calibrators of reverse bolometer design that allow an accurately repeatable amount of light to be put on the detector .these stimulators allow frequent measurement of the relative detector responsivity .in general , this strategy is most successful when the conditions of measurement are changed the least to carry out the relative calibration .the mips instrument includes such calibrators , which are flashed approximately every two minutes . based upon data obtained at a proton accelerator and in space ,the average increase in response over a two minute period in the space environment can be 0.5% to 1% , so the calibration interval allows tracking the response accurately .although the detectors in the silicon array are expected to perform well photometrically , the array as a system shows a number of effects that must be removed to obtain calibrated data .the array is operated well below the freezeout temperature for the dopants in the silicon readout ( the readout circuit uses a different foundry process from that developed for the ge detectors ) .therefore , the array must be operated in a continuous read mode to avoid setting up drifts in the outputs that would degrade the read noise .the flight electronics and software are designed to maintain a steady read rate of once per half mips second ( see [ sec_data_collection ] ) . when the array is first turned on, the transient effects of the readout cause a slow drift in the outputs .much of this effect can be removed by annealing the array , which is the standard procedure for starting the mips 24 operations .the array shows an effect termed `` droop . ''the output of the device is proportional to the signal it has collected , plus a second term that is proportional to the average signal over the entire array .in addition , the 24 array has a number of smaller effects ( e.g. , rowdroop , electronic nonlinearities , etc . 
) which are described later in this paper .the design and operation of mips is summarized here , paying special attention to those areas which answer the challenges outlined above and , therefore , produce data that can be reduced successfully .mips has three instrument sections , one for 24 imaging , one for 70 imaging and low resolution spectroscopy , and one for 160 imaging .light is directed into the three sections off a single axis scan mirror .the 24 section uses a pixel si : as ibc array and operates in a fixed broad spectral band extending from 21 to about 27 ( the long wavelength cutoff is determined by the photo - absorptive cutoff of the detector array ) . afterlight enters this arm of the instrument from a pickoff mirror , it is brought to a pupil on a facet of the scan mirror .it is reflected off this mirror into imaging optics that relay the telescope focal plane to the detector array at a scale of per pixel corresponding to a sampling of the point spread function , where is the telescope aperture .the surface area of a 24 pixel is .the field of view provided by this array is .a reverse bolometer stimulator in this optical train allows relative calibration signals to be projected onto the array .the scan mirror allows images to be dithered on the array without the overheads associated with moving and stabilizing the spacecraft .it also enables an efficient mode of mapping ( scan mapping ) in which the spacecraft is scanned slowly across the sky and the scan mirror is driven in a sawtooth waveform that counters the spacecraft motion , freezing the images on the detector array during integrations .the 70 section uses a pixel ge : ga array sensitive from 53 to 107 .a cable failure external to the instrument has disabled half of the array and the following description reflects this situation .the light from the telescope is reflected into the instrument off a second pickoff mirror .it is brought to a pupil at a second facet of the scan mirror and from there passes through optics that bring it to the detector array .for this arm of the instrument , there are actually three optical trains that can relay the light to the array ; the scan mirror is used to select the path to be used for an observation .one train provides imaging over a field , with a pixel scale of corresponding to a sampling of the point spread function .the physical size of a 70 pixel is mm and 2 mm long in the direction of the optical axis .this train provides imaging over a fixed photometric band from 55 to 86 .the scan mirror feeds this mode when it is in position to feed the other two arrays , so imaging can be done on all three arrays simultaneously .a second train also provides imaging in the same band , but with the focal plane magnified by a factor of two to per pixel .this mode is provided for imaging compact sources where the maximum possible angular resolution is desired : the pixel scale corresponds to at the center wavelength of the filter band .the third train brings the light into a spectrometer , with spectral resolution of from . 
in this spectral energy distribution ( sed ) instrument mode, light is directed to a reflective `` slit '' and then to a concave reflective diffraction grating that disperses the light and images the spectrum onto a portion of the 70 array .the slit is 16 pixels long and 2 pixels wide , corresponding to on the sky .the dispersion is 1.73 pixel .reverse bolometer stimulators are provided for calibration , and the scan mirror provides the dithered and scan mapping modes of operation at 70 as have been described for the 24 array .the 160 section shares the pickoff mirror and scan mirror facet with the 24 band .after the light has been reflected off the scan mirror , the telescope focal plane is reimaged and divided , with part going to the si : as array and part going to a stressed ge : ga array , operating in a fixed filter band from 140 to 180 .this array has pixels , arranged to provide an imaging field long in the direction orthogonal to the scan mirror motion with the two rows of detectors spaced such that there is a gap one pixel wide between the two rows .this pixel size provides sampling of the point spread function .the physical size of a 160 pixel is mm .reverse bolometer stimulators are included in the optical train , and the scan mirror provides modes similar to those with the other two arrays .a key aspect of the calibration of the mips ge arrays is the frequent use of stimulators to track responsivity variations .the emitters in these devices are sapphire plates blackened with a thin deposition of bismuth , which also acts as an electrical resistor .the emitters are suspended in a metal ring by nylon supports and their electrical leads .when a controlled current is run through the device , the sapphire plate is rapidly heated by ohmic losses in the metallized layer . the thermal emission is used to track changes in detector response in a relative manner ; hence these devices are described as stimulators rather than calibrators . because of the large responsivity of the detector arrays , it is necessary to operate these stimulators highly inefficiently to ensure accurate control without blinding the detectors .they are mounted inside cavities that are intentionally designed to be inefficient ( e.g. , black walls , small exit holes ) .this allows the stimulators to be run at high enough voltage to be stable and emit at a reasonable effective temperature .the constant - amplitude stimulator flashes provide a means of tracking the responsivity drift inherent in the ge detectors .figure [ fig_stim_repeat ] illustrates the importance of tracking the responsivity variations of ge detectors with as fine a time resolution as feasible .the repeatability of the stimulator measurements is a function of both the background seen by the detector element as well as the amplitude of the stimulator ( stim ) signal above the background .the repeatability of a measurement of the stim signal improves with decreasing background and increasing stim amplitude . for both the 70 and 160 arrays , in ground testing stim amplitudes of greater than dn / s above the background yielded a repeatability of better than % on most backgrounds . setting the stim amplitudes at this level provides a balance between repeatability of the stims and the range of backgrounds accessible to observation without saturation . 
at this level, well over 95% of the sky should be observable without saturating stim flash measurements at both 70 and 160 .additional complications at 160 include a strong illumination gradient in the stim flash illumination pattern from one end of the array to the other as well as a large increase in the responsivity of the array with exposure to cosmic rays .it is not possible to set the stim amplitude at the optimum 7500 dn / s across the whole array due to a factor of four gradient in the stim amplitude across the array .the on - orbit stim amplitude was set to provide an optimal amplitude over the majority of the 160 pixels .the degradation in stim repeatability on the low illumination region can be mitigated by an observing strategy that dithers the image such that the same region of the sky spends equal amounts of time on both regions of the detector .both of the ge arrays show calibration shifts with even small exposure to ionizing radiation .the effects of ionizing particles were tested using characterization arrays ( see [ sec_lab_test ] ) at the university of california , davis accelerator .the proton beam was attenuated to reduce the particle impact rate to a level similar to that expected on orbit .the energy of the particles was such that each impact was strongly ionizing , depositing much more energy in the detector volume than is expected from a typical cosmic ray .thus , these tests served as a worst - case model of the detector response to cosmic - rays on orbit .the detector responsivity slowly increased with time under exposure to the proton beam .the rate of responsivity increase on the 70 array was comparable to that observed under typical illumination conditions ( cf .[ fig_stim_repeat ] ) without the proton beam , suggesting that accumulated transient response from the background and signals inside the cold test chamber and the photon flux at the accelerator contribute similarly to the responsivity increase .if the particle impacts at the accelerator really represent a worst - case scenario , this suggests that the on - orbit responsivity increase of the 70 array may be dominated by photon flux rather than cosmic ray effects .in contrast , the 160 array showed a large responsivity increase with increasing radiation dose .if they are of modest size , such responsivity shifts can be determined and removed during calibration through use of the stimulator observations . however ,when the shifts are large , they are also highly unstable and can result in substantial excess noise .three methods were tested to remove such effects : re - thermalization of the detectors by heating them ( anneals ) , exposing the detectors to a bright photon source , and boosting the detector bias above breakdown .our experiments indicated that the latter two methods produced little benefit .although the instrument design permits use of all three techniques , we remove radiation damage to the ge arrays by periodically thermally annealing the detectors .there are four mips observing modes , all of which have been designed to provide a high level of redundancy to ensure good quality data ( especially for the ge arrays ) .the photometry mode is for point and small sources . 
as an example of the redundancy inherit in mips observations , a visualization of a single photometry mode cycle is shown in fig .[ fig_phot70 ] for 70 .the scan map mode provides efficient , simultaneous mapping at 24 , 70 , and 160 by using a ramp motion for the scan mirror to compensate for continuous telescope motion , effectively freezing images of the sky on the arrays .a visualization of a small portion of a scan leg is shown in fig .[ fig_scanmode ] .the sed mode provides 53 to 107 spectra with a resolution r .the total power mode ( tpm ) is for making absolute measurements of extended emissions .the pacing of the mips data collection is based on a `` mips second . ''a mips second is approximately 1.049 seconds , and has been selected to synchronize the data collection with potential sources of periodic noise , such as the computer clock or the oscillators in the power supplies .to first order , this design prevents the down - conversion of pickup from these potential noise sources into the astronomical signals .the data are taken in data collection events ( dces ) ; at the end of a dce , the array is reset before taking more data .dces are currently limited to 3 , 4 , 10 , or 30 mips seconds for the 24 array and 3 , 10 , or 10 mips seconds for the 70 and 160 arrays . during a dce ,each pixel generates a voltage ramp on the array output , as the charge from incoming photons is accumulated on the input node of its integrating amplifier .these ramps are the basic data collected by all three arrays .the 24 array is non - destructively read out every 1/2 mips second while the 70 and 160 arrays are non - destructively read out every 1/8 mips second .all the samples are downlinked for the 70 and 160 arrays , but this is not possible for the 24 array due to bandwidth restrictions .the 24 array has two data modes , sur and raw .most 24 data are taken in sur mode in which the ramps are fitted to a line , and only the fitted slope and first difference ( the difference between the first two reads in the ramp ) are downlinked .the raw mode downlinks the full 24 ramps , but this mode is used only for engineering observations .there are three natural steps in reducing data from integrating amplifiers : ( 1 ) converting the integration ramps to slopes ; ( 2 ) further time - domain processing of the slope images ; and ( 3 ) processing of dithered images in the spatial domain . 
for detectorsthat do not have time - dependent responsivities , only the first and last steps are usually important .this is strongly not the case for the mips ge arrays and also mildly not so for the mips si array .as a result , mips processing includes all three steps .first , the integration ramps are converted into slopes ( dn / s ) while removing instrumental signatures with time constants on the order of the dce exposure times ( [ sec_ramps_to_slopes ] ) .second , the slopes are calibrated and instrumental signatures with time constants longer than the dce exposure times are removed ( [ sec_slope_cal ] ) .third , the redundancy inherent in the mips observing modes allows a second pass at removing instrumental signatures ( [ sec_redund ] ) .the algorithms used in the first two steps have mostly been determined .the main algorithms used by the third step are being optimized with actual data taken on orbit .portions of the reduction algorithms described in this paper were presented in a preliminary form by .we made extensive use of laboratory testing and theoretical investigations in choosing and ordering the relevant steps .figure [ fig_flowchart ] is a graphical representation of the specific tasks in each of the three processing steps .three versions of the 70 and 160 arrays were constructed : a flight array , a flight spare array , and a characterization array . before integration into the instrument ,the basic performance of the flight and flight spare arrays was measured ( i.e. , read noise , dark current , nep , etc . ) .the characterization arrays were then installed in the two specialized dewars previously used for the flight and flight spare array testing .these arrays are used to determine the detailed behaviors of the 70 and 160 detectors .this knowledge was then used to design observations with the flight arrays to remove specific ge detector effects .the ability to do extensive testing on the characterization arrays has been crucial to the development of the data reduction algorithms for the ge arrays detailed in this paper .in addition to testing at the array level , testing at the instrument level was carried out using the low background test chamber ( lbtc ) .the lbtc was constructed to allow for testing of the full mips instrument and , thus , had a number of independently controlled stimulators including pinhole stimulators providing point sources for testing .the lbtc allowed for the imaging performance of the full instrument to be tested as well as providing for extensive testing of the 24 array .additional details of the laboratory testing can be found in .we also carried out detailed numerical modeling of the behavior of the ge arrays .this modeling allowed effects found in the laboratory testing to be investigated in more detail .for example , the modeling was able to show that the difference in hook behaviors between the 70 and 160 arrays was due to their different illuminations .the numerical modeling was also crucial to the understanding of the behavior of small signals on the detectors .for example , this modeling was able to determine that the stim flash latents ( see [ sec_sflash_latent ] ) were additive , not multiplicative .this understanding then guided the efforts to remove this signal .the first part of the processing fits the ramps to produce slopes for each dce .the processing for the ge ( 70 and 160 ) and si ( 24 ) raw mode data is similar , differing only in the instrumental signatures removed .first , reads which should be rejected from the linear 
fits are identified .reads are rejected if they represent missing data , autoreject reads , or saturated data .second , the ramps are corrected for instrumental effects . these are dark current ( si only ) , rowdroop ( si only ) , droop ( si only ) , electronic nonlinearities , and stim flash latents ( ge only ) .third , jumps in the ramps usually caused by cosmic rays are identified . in the process , reads that are abnormally noisy are identified as noise spikes . finally , all the continuous segments in each ramp are fit with lines and the resulting slopes averaged to produce the final slope for each pixel .an example of a 24 ramp is given in fig .[ fig_24ramp ] .the 70 and 160 ramps are very similar to the 24 ramp , except they do not have droop .the graphical representation of the data processing shown in fig . [ fig_flowchart ] gives the ordering of the reduction steps .the processing for the si sur mode data is necessarily different as the ramps are fit on - board and only the slope and first difference images are downlinked .the following subsections will describe the si and ge raw mode processing followed by a description of the necessary differences for the si sur mode processing .there are two reasons to automatically reject ( autoreject ) reads ; to avoid reset signatures and to not use the ramps beyond 2 mips seconds for stim flash dces .all mips arrays are reset at the beginning of a ramp , and this has been seen to leave a signature in the first few reads .in general , this reset signature only affects the first read .the first read is automatically rejected for all three arrays .this is even true for sur data for which the line fit is done on - board spitzer .the 70 and 160 arrays can be operated with a reset in the middle of the dce to improve performance .when this mode is used , the reset signature has been seen to last for 4 reads and these 4 reads are then automatically rejected . in a stim flash dceonly the first 2 mips seconds of a ramp are valid .after 2 mips seconds , the stim is turned off and after 2.5 mips seconds , a reset is applied .finally , all reads that are below or above the allowed limits for the mips analog - to - digital converters ( adc ) ( soft saturation ) or saturating the 70 and 160 readout circuits ( hard saturation ) are flagged as low or high saturation , respectively .all three mips arrays display nonlinearities that have been traced to the electronics . for the 24 arraythese nonlinearities are mainly due to a gradual debiasing which occurs as charge accumulates in each pixel during an exposure . for the 70 and 160 arrays ,the readout circuits have been constructed to keep the same bias voltage across the detectors even as charge accumulates . nevertheless , electronic nonlinearities arise due to the simplified ctia circuit .the behavior of the electronic nonlinearities was determined from extensive ground - based testing on the flight arrays .for the 24 array , the functional form was characterized from raw mode data ramps ; a typical case is shown in figure [ fig_24ramp ] .the ramps for most of the pixels can be nearly perfectly described by quadratic polynomial fits ; the linear component of the fit gives directly the linearized signal .for the 70 and 160 arrays , the electronic nonlinearities have been shown generally to have a quadratic shape with significant deviations .corrections were tabulated as a lookup table to allow for the semi - arbitrary forms . 
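as an illustration of the 24 micron linearization just described , the sketch below fits a quadratic to a raw - mode ramp and keeps the linear term as the linearized signal . the use of numpy and the synthetic ramp values are illustrative assumptions , not the flight pipeline code .

```python
import numpy as np

def linearize_ramp(times, reads):
    """Fit a quadratic to an integration ramp (DN versus time) and return
    the linear coefficient, which is taken as the linearized slope in DN/s."""
    # np.polyfit returns coefficients from highest to lowest order
    quad_term, slope, offset = np.polyfit(times, reads, 2)
    return slope

# illustrative 30 s DCE: 60 reads every 0.5 s with a mild quadratic bend
t = np.arange(60) * 0.5
ramp = 1000.0 * t - 2.0 * t**2 + 50.0
print(linearize_ramp(t, ramp))  # recovers roughly 1000 DN/s
```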
for the 24 , 70 , and 160 arrays ,the maximum nonlinearity at full well ( adc saturation ) ranges over the array from , , and , respectively .the main reason discontinuities or jumps appear in mips ramps is cosmic rays .cosmic rays strike the ge detectors ( 70 and 160 arrays ) at a rate of one per pixel per twelve seconds .the rate on the si detector ( 24 array ) is much lower , due to its smaller pixels .it is also possible to get a ramp jump due to an anomaly we have termed a readout jump .ground - based testing has shown that the entire output of one of the 32 readouts ( 4 8 pixels ) on the 70 array occasionally jumps up and then jumps back down by the same dn amount approximately 1 second later .jumps in the ramps are detected using a combination of two methods .first , -point differences are constructed from the reads and outliers are flagged as potential ramp jumps using an iterative sigma clipping algorithm .these potential jumps are tested to see if they are noise spikes or actual ramp jumps by fitting lines to the segments on either side of the potential jump .if the two fitted lines imply a jump that is smaller than the expected noise , then the jump is actually a noise spike , not a cosmic ray or readout jump .second , a more sensitive test for ramp jumps is performed .this method works by assuming each read in a ramp segment has a ramp jump after it and fitting lines to the resulting two subsegments on either side .the most significant ramp jump in the segment implied from the two line fits is then tested to see if it is larger than the noise .if so , then this read is labeled as a ramp jump .the process can be repeated on the subsequent ramp segments until no more jumps are found or a preset number of iterations have been performed . as this second method is more sensitive than the first , but significantly more computationally intensive , we combine the two methods to achieve the best sensitivity to ramp jumps with the least computation time .we explored the signatures of cosmic rays in ramps using several hours worth of 70 and 160 array data that were subject to constant illumination .we then extracted those ramps where we detected ramp jumps ( assumed to be due to energetic particle impacts ) and assessed the effects on the ramp after the impact . 
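a minimal sketch of the first , difference - based jump - detection pass described above is given below ; the fixed number of clipping iterations , the 3 - sigma threshold , and the helper structure are simplifications chosen for illustration rather than the actual pipeline parameters .

```python
import numpy as np

def find_ramp_jumps(reads, read_noise, nsigma=3.0, max_iter=5):
    """Flag candidate ramp jumps from point-to-point differences with
    iterative sigma clipping, then confirm each candidate by comparing
    line fits to the ramp segments on either side of it."""
    reads = np.asarray(reads, dtype=float)
    diffs = np.diff(reads)
    good = np.ones(diffs.size, dtype=bool)
    for _ in range(max_iter):                      # iterative sigma clipping
        mu, sigma = diffs[good].mean(), diffs[good].std()
        new_good = np.abs(diffs - mu) < nsigma * max(sigma, 1e-12)
        if np.array_equal(new_good, good):
            break
        good = new_good
    jumps = []
    for i in np.where(~good)[0]:                   # candidate break after read i
        left, right = reads[: i + 1], reads[i + 1 :]
        if left.size < 2 or right.size < 2:
            continue
        x_left = np.arange(left.size)
        x_right = np.arange(right.size) + left.size
        fit_left = np.polyfit(x_left, left, 1)
        fit_right = np.polyfit(x_right, right, 1)
        # discontinuity implied by the two segment fits at the break point
        step = np.polyval(fit_right, left.size) - np.polyval(fit_left, left.size)
        if abs(step) > nsigma * read_noise:
            jumps.append(i + 1)                    # first read after the jump
        # otherwise the outlier is treated as a noise spike, not a jump
    return jumps
```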
on the 70 array we find two main effects : a steepening of the ramp that lasts for a few reads and a persistent responsivity increase of after a big hit ( see fig .[ figure : slope - change - after - radhit ] ) .this is consistent with the slow responsivity increase observed during the radiation run .these results dictate our strategy for dealing with cosmic ray hits on this array : several reads after a hit should be rejected from slope fitting to ensure that the fast transient does not bias the slope measurement , while the small responsivity increase after large hits will be tracked by the stim flash measurements .the 160 array response to cosmic rays is somewhat different .we detected no fast transient within the ramp , but the slope of the ramp after a hit was often different from the slope before the hit .this slope change typically did not persist into the next dce , after a reset had occurred , as shown in figure [ figure : slope - change - after - radhit ] .thus , we were unable to detect a persistent responsivity increase due to particle impacts , in contrast to the accelerator data ( [ sec_anneals ] ) .given that we are unable to predict how the slope will change after a particle hit and that the slope returns to its previous value after the next reset ( usually the next dce ) , the conservative strategy for dealing with particle impacts on this array is to simply ignore all data between a particle hit and the next reset .slopes are determined for each ramp by fitting lines to all the good segments in a ramp .the slope for a ramp is then the weighted average of the slopes of the ramp segments .the weight of each segment is determined from the uncertainty in the segment slope as discussed in the next paragraph .each good segment of a ramp is identified as containing only good reads and not containing any ramp jumps .lines are fit to these segments with the standard linear regression algorithm . calculating the uncertainties on the fitted slope and zero pointis not as straightforward .the uncertainties on each read have both a correlated and random component .the correlated component is due to photon noise , as the reads are a running sum of the total number of photons detected .the random component is the read noise .we have derived equations for the linear fit uncertainties for the correlated component following the work of .the details of this derivation are given in appendix [ app_fit_unc ] .the slope and zero point uncertainties are calculated for the correlated and random read uncertainties separately and then combined in quadrature to get the final uncertainties . the calibration of the 70 and 160 arrays is directly tied to the stim flashes measured approximately every two minutes .the brightness of these stim flashes is set as high as possible to ensure the best calibration ( cf . [ sec_stims ] ) .these stim flashes produce a memory effect , called a stim flash latent , that is persistent for a brief time .intensive measurements of stim flash latents have been performed at the university of arizona on the 70 and 160 characterization arrays .we determined the time constants , amplitudes , variations with the background , and repeatability of the stim flash latents as well as the accuracy of the correction and the effects on the calibration of sources observed during the latent . 
to characterize the decay behavior of the latents, we fit an exponential law to the time signal of each array pixel .each cycle is divided by the stim amplitude value , to have dimensionless data ( fraction signal / stim ) .the function used to fit the latent is a double exponential : where is the time after the stim is turned off , is the background level , and give the component amplitudes , and and give the time constants . at 70 ,only a single exponential is needed ( thus ) .the amplitude is always less than 3% of the stim amplitude , and in most of the cases below 0.5% . the time constant ranges from 5s to 20s . as a function of increasing background , increases and decreases .the latents are repeatable to 15% or better .an example of the stim latent of one pixel on the 70 characterization array is given in fig .[ fig_slatent ] . at 160 ,the latency effect is more pronounced than at 70 .the amplitude is less than 5% of the stim amplitude .the time constant ranges from 5 to 20s .the amplitude is less than 3% .the time constant equals 20s at high background , and is negligible at low background .the amplitude and time constant are almost insensitive to the background .the latents are repeatable to 20% or better .[ fig_slatent ] also gives an example of the stim latent of one pixel on the 160 characterization array .in general , the stim flash latents are negligible seconds after the stim is turned off .in the first 30 seconds , the calibration of a point source might be overestimated by 1% at 70 and 12% at 160 if no correction is applied . to correct for the stim latent contribution to the pixel signal, we apply a time - dependent correction at the ramp level .we subtract the latent contribution , which is obtained by integrating eq .[ eq : latent ] . 
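the displayed form of eq . [ eq : latent ] did not survive in the text , so a reconstruction consistent with the parameters named above is given here ; the symbols ( background b , component amplitudes a_1 and a_2 , time constants tau_1 and tau_2 , with a_2 = 0 at 70 ) are chosen for illustration .

```latex
% assumed double-exponential form of the stim flash latent, eq. [eq:latent]
f_{\mathrm{lat}}(t) = B + A_{1}\,e^{-t/\tau_{1}} + A_{2}\,e^{-t/\tau_{2}}

% latent charge accumulated in a ramp by time t after the stim turns off,
% i.e. the integral that is subtracted from the reads in the correction
\int_{0}^{t}\!\left[f_{\mathrm{lat}}(t') - B\right]dt'
  = A_{1}\tau_{1}\!\left(1 - e^{-t/\tau_{1}}\right)
  + A_{2}\tau_{2}\!\left(1 - e^{-t/\tau_{2}}\right)
```

the second line is one plausible reading of `` integrating eq . [ eq : latent ] '' : only the decaying latent terms , not the background , contribute to the correction applied to each read .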
on pre - flight data ,the amplitude of the latents after correction is reduced by a factor of at 70 and at 160 .the rowdroop effect manifests itself as an additive constant to each individual pixel and is proportional to the sum of the number of counts measured by all pixels on its row , where a row is in the cross - readout direction .this effect is not completely understood , and is similar to ( but separate from ) the droop phenomenon ( see [ sec_droop ] ) .the additive signal imparted to each pixel on a row is constant and exhibits no gradient or dependence with pixel position , thus , it is not related to a charge bleed , or `` mux bleed '' effect .the rowdroop contributes a small amount to the flux of an individual pixel , and will only significantly affect pixels on rows with high - intensity sources .an example of rowdroop from ground - based testing is shown in fig .[ fig_rowdroop ] .using images of pinhole sources obtained in ground testing , we have computed the row droop constant of proportionality , .this is the factor that gives the fraction of the total counts in a row which is the result of row droop and should be subtracted from each pixel in that row .we find that the constant of proportionality for the mips 24 array is .thus , the rowdroop contributes % of the total number of counts on a row .the rowdroop is corrected for on a read - by - read basis .droop is a constant signal added to each pixel by the readouts .the exact cause of droop is unclear .this extraneous signal , akin to a dc offset , is directly proportional to the total number of counts on the entire array at any given time .we have measured the constant of proportionality from ground test data .the droop coupling constant was measured to be , which agrees well with the 0.32 determined by bna .the droop correction algorithm first computes the mean signal on the array , which is then multiplied by the droop coupling constant to derive the droop signal , as given by where is the droop signal , is the signal on each pixel ( comprising both the actual incident flux and the droop ) , is the number of pixels , and is the droop coupling constant .the resultant droop signal is then subtracted from the original signal on each pixel . under normal circumstances ,the uncertainty associated with this process is at the level , limited mainly by the uncertainty on the coupling constant .however , greater uncertainties arise when pixels are saturated ; since adc saturation occurs well before hard detector saturation , droop signal will still accumulate for an incident flux above the adc saturation level . in this case, the actual signal ramp must be extrapolated beyond the saturation point .the droop signal is determined by extrapolating a fit to the unsaturated portion of the ramp . as with the rowdroop correction ,the droop correction is done on a read - by - read basis for raw mode 24 data .dark subtraction is done at each read using a dark calibration image containing the full dark ramp for each pixel .this step serves both to remove the ( small ) dark current contribution and the offset ramp starting points , so that each ramp starts near zero .the majority of the 24 data are taken in the sur mode instead of the raw mode . 
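before turning to the sur mode , the sketch below summarizes the droop and rowdroop corrections described above for a single raw - mode read : an array - wide term proportional to the mean signal and a per - row term proportional to the summed counts in that row are subtracted from every pixel . the coupling constants passed in the usage line are placeholders ( the text only quotes the droop constant as being near 0.32 ) , and the ordering of the two subtractions is an illustrative choice .

```python
import numpy as np

def correct_droop(read, droop_const, rowdroop_const):
    """Remove droop (proportional to the mean signal over the whole array)
    and rowdroop (proportional to the summed counts along each row) from a
    single 24 micron read."""
    read = read.astype(float)
    droop = droop_const * read.mean()                            # one value for the array
    rowdroop = rowdroop_const * read.sum(axis=1, keepdims=True)  # one value per row
    return read - droop - rowdroop

# usage with placeholder coupling constants on a synthetic 128x128 read
img = np.random.poisson(200.0, size=(128, 128)).astype(float)
clean = correct_droop(img, droop_const=0.32, rowdroop_const=0.002)
```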
in the sur mode, a line is fit to the data ramp on - board the spacecraft .the resulting slope and first difference ( difference between the first two reads of the data ramp ) images are downlinked instead of the full ramp .the first difference frame effectively increases the dynamic range of the sur mode as signals that saturate somewhere in the ramp , but after the second read , will have a valid measurement in the first difference frame . to reduce the data downlinked , any first difference value that is from a ramp which does not saturate is set to zero .this increases the compressibility of the first difference frame .there can be degeneracy of sur slope values due to the possibility of saturation .the possible slope value for a given pixel reaches a maximum at full well , the point of adc saturation .after that point , as the data ramp reaches saturation at the last few reads , the slope value will begin to decrease because the on - board sur algorithm does not reject saturated reads . in cases of extreme saturation , the slope becomes quite small , and can eventually become zero if saturation occurs within the first few reads .the first difference value is provided to break this degeneracy .we have employed a conservative threshold value for the first difference , above which a pixel is flagged as being likely saturated .adc saturation occurs at + 32768 dn ( see figure [ fig_24ramp ] for an example of a saturated raw ramp ) . assuming a linear ramp , the first difference for a ramp that just saturates on the final read would be , where is the total number of reads in the data ramp .for example , there are 60 reads in a 30 second dce , yielding an ideal saturation threshold of dn / read . to be more conservative, we actually employ a threshold value of 1000 dn / read for a 30 second exposure time and scale this for other exposure times . since the data ramps are not linear , the actual first difference threshold is larger than our chosen default value , so most cases of saturation will be flagged .the only exception being saturation at the first read , in which case both the slope and the first difference would be zero . for all pixels that have been flagged for saturation , the first difference value should be used in place of the slope . the rowdroop and droop subtraction is done in the same way for sur mode as for raw mode , except that the corrections are performed on the slope and first difference images . because the sur data do not preserve the actual data ramps , the linearity correction made somewhat complicated .nevertheless , the quadratic behavior of the ramps can be used to analytically determine the linearization of the sur slope values .this correction depends on the observed sur slope value , exposure time , and known quadratic nonlinearity .note that saturation invalidates this method , as the sur slope - fitting algorithm does not reject saturated reads . in this case, no linearity correction is applied .the next step in the mips data reduction is to calibrate the slope images while removing instrumental effects with time constants longer than the dce exposure times .the instrumental effects corrected at this stage include latents ( 24 ) , responsivity drift ( 70 and 160 ) , pixel - to - pixel responsivity variations , the telescope illumination pattern , and flux nonlinearities ( 70 and 160 ) . 
the graphical representation of the data processing shown in fig . [ fig_flowchart ] gives the ordering of the reduction steps . since the time dependent responsivity of the ge arrays requires additional calibration steps beyond those usual for more common array detectors , we give the mathematical basis of our ge slope calibration in [ sec_slope_math ] . ignoring the 70 and 160 flux nonlinearities , an uncalibrated slope image can be represented by \[ d(i , j , t_n) = [ f(i , j) o(i , j) + dark(i , j) ] r(i , j , t_n) \] ( [ eq : sciframe ] ) where f(i , j) is the science image of interest , o(i , j) represents the telescope and instrument optics ( the mean of o(i , j) is one ) , dark(i , j) is the dark current , and r(i , j , t_n) is the instantaneous responsivity of the array ; ( i , j ) and t_n represent the pixel coordinates and the time of the dce . calibration involves isolating f(i , j) , the flux from the sky , in the above equation . the term o(i , j) r(i , j , t_n) is the equivalent of a traditional flat - field term . as r(i , j , t_n) is a rather sensitive function of time for the 70 and 160 detectors , however , a global `` flat - field '' can not be determined , but must be derived for each dce separately . the stimulators provide the means to monitor r(i , j , t_n) , and all science observations will be bracketed by stim flashes . stim flash images will be equivalent to science frames with the addition of a stimulator illumination pattern : \[ d_stim(i , j , t_n) = [ f(i , j) o(i , j) + dark(i , j) + s(i , j) ] r(i , j , t_n) \] where s(i , j) is the illumination pattern introduced on the array by the stim flash , with the mean of s(i , j) equal to one . mips observations include the requirement that each stimulator flash will be preceded by a background exposure with the identical telescope pointing ; thus for the stimulator dce at time t_n there exists a background dce taken at time t_n - \epsilon , \[ d(i , j , t_n - \epsilon) = [ f(i , j) o(i , j) + dark(i , j) ] r(i , j , t_n - \epsilon) . \] if we assume that the responsivity of the array does not change dramatically between times t_n - \epsilon and t_n , i.e. , r(i , j , t_n - \epsilon) \approx r(i , j , t_n) , we can construct for each stimulator flash a background subtracted stim flash : \[ sf(i , j , t_n) = d_stim(i , j , t_n) - d(i , j , t_n - \epsilon) \approx s(i , j) r(i , j , t_n) . \] ( [ eq : bksubstim ] ) with background subtracted stim flashes determined from eq . [ eq : bksubstim ] for all stim flashes in the data set , an instantaneous stim can be determined for any time t by interpolation from bracketing stim flashes : \[ sf(i , j , t) = interp [ sf(i , j , t_n) ] \] ( [ eq : inst_stim ] ) where interp [ ] is some interpolating function on background subtracted stims for times t_n bracketing t . analysis of ge characterization array data indicates that a weighted linear fit ( weighted by the uncertainty in the stim flash frames ) to two stim flashes on either side of the data frame ( a total of four stim flashes ) provides the optimal strategy for determining the instantaneous stim amplitude ( repeatability to % on most backgrounds ) . dividing science frames , eq . [ eq : sciframe ] , by the interpolated instantaneous stim , eq . [ eq : inst_stim ] , produces \[ d(i , j , t_n) / sf(i , j , t_n) = [ f(i , j) o(i , j) + dark(i , j) ] / s(i , j) . \] ( [ eq : rescor ] ) while we have removed the time dependent responsivity variation , the data of interest , f(i , j) , are still modified by the optical response and the dark current ; in addition , we have introduced the stimulator illumination pattern into our data . fortunately , since the time dependence has been removed , we can remove these other instrumental signatures through carefully accumulated calibration data . first , the dark correction , dark(i , j) / s(i , j) , can be determined from a sequence of exposures as above , with the additional constraint that the scan mirror be positioned such that no light from the `` sky '' falls on the detector .
thus the data and stim flashes in a dark current data sequence are represented by and r(i , j , t_n),\ ] ] respectively .the dark data are corrected for responsivity variations exactly as described above and the individual frames combined to produce an average dark current , .subtracting this dark current from science frames that have been corrected for responsivity variations , eq .[ eq : rescor ] yields our responsivity , dark corrected science frame .what remains is to correct for the telescope optics , , and the stim illumination pattern , .correcting for the combined illumination pattern of the telescope and stim involves a standard series of mips exposures , i.e. data frames interspersed with stim flashes . as such , they may be represented by equations of the form eq .[ eq : sciframe ] , where the represent dithered images of `` blank '' sky fields . calibrating the sequence by correcting for responsivity variations and dark current as above results in a series of images since by construction , the dithered images of `` smooth '' regions , if a large number of are acquired , they may be median combined to remove point sources ( and extended sources if dithered `` sufficiently '' ) , cosmic rays , etc .resulting in where and are constant regardless of telescope pointing .hence the median only affects the changing sky image as the telescope is dithered .the constant in equation [ eq : illcor ] may be set to one resulting in the illumination correction frame the responsivity corrected , dark subtracted data ( equation [ eq : darkrescor ] ) are now divided by the illumination correction resulting in and we have recovered the quantity of interest , the astronomical sky .suitable observations of standard stars can then be used to convert instrumental counts to physical units ( e.g. janskies ) .the dark , flat field , and illumination correction calibration images described above will be obtained throughout the life of the mission .example preflight calibration images are shown in figs .[ fig_24cal]-[fig_160cal ] .simulations indicate that high s / n flat field and illumination correction images ( rms ) can be obtained with dithered observations of `` smooth '' areas of the sky : dces at 24 and dces at 70 are required . at 160 , the situation is less ideal , with simulations indicating as many as 500 dces may be required to produce flats to better than 1% rms .si ibc arrays are known to have considerable latency , where the signal induced by bright illumination persists after the illumination has terminated .ideally , if one knows the position of a source exposed on the array and the latency decay behavior , these artifacts can be subtracted from an image .we have characterized the latent behavior from ground test data .several different conditions were explored , including varying brightnesses of the illuminating source , varying brightnesses of the background , initial bias boosts , and changing the number of resets via different exposure times .a bias boost can flush out most of the trapped charge , but resets are not nearly as effective .since bias boosts will only be done in the first dce of each observation , we correct for latent residuals in the data processing .the latent decay curve can be described by single exponential , given by where is the slope in the absence of a latent , is the initial value of the latent , and is the latent time constant . 
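Since the latent decay described above is a single exponential, fitting and removing it per pixel is straightforward in principle. The sketch below uses a simple log-linear fit and assumes the latent-free slope is known (e.g. from frames taken long after the bright source); both helpers are illustrative rather than part of the actual pipeline, and the fitted parameters will in practice depend on background level and array position, as discussed below.

```python
import numpy as np

def fit_latent_decay(times, slopes, slope_no_latent):
    """Fit the single-exponential latent model
        slope(t) = slope_no_latent + A0 * exp(-t / tau)
    to a time series of slope measurements from one pixel, using a simple
    log-linear least-squares fit; times are seconds since the bright source
    was removed and slope_no_latent is assumed known (e.g. from late frames)."""
    excess = np.asarray(slopes, dtype=float) - slope_no_latent
    times = np.asarray(times, dtype=float)
    good = excess > 0
    b, a = np.polyfit(times[good], np.log(excess[good]), 1)
    return np.exp(a), -1.0 / b          # A0, tau

def remove_latent(slope_image, A0, tau, t_since_source):
    """Subtract the predicted latent contribution from a later slope image."""
    return slope_image - A0 * np.exp(-t_since_source / tau)
```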
based on the limited ground data , the latent parameters ( and ) appear to be functions of background levels , number of resets and possibly location on the array . in general , the latent contribution is about 1% of the initial source brightness sec after that source has shut off .higher background yield slightly higher values for and lower values of .the value of is in the range of seconds .both the 70 and 160 arrays exhibit nonlinearities that are dependent on the incident point source flux as well as the background .these are termed flux nonlinearities and have been observed in data taken with the characterization array as well as the flight array . as is usual for the ge arrays , each pixel shows flux nonlinearities with a different dependence on source flux and background . correcting for this effectcan be broken into two pieces : ( 1 ) removing the pixel to pixel differences in the nonlinearity followed by ( 2 ) the application of a global nonlinearity correction as a function of the source brightness and background .the pixel to pixel variations in the flux non - linearity may be mapped by analyzing the ratio of two stim flashes , where one is the standard on orbit calibrating stim flash .measured differences in the ratio from pixel to pixel can be used to correct each pixel to the same flux non linearity for the given background and source ( second stim flash ) amplitude .repeating the measurement for a variety of second stim flash amplitudes ( up to saturation for each pixel ) and backgrounds will map out the correction .the second , global , stage of the correction can be characterized by observations of calibration stars with a range of known brightness ratios on similar backgrounds .the combination of these two tasks outlined above should provide a good measurement of the flux nonlinearity correction for a range of backgrounds .this correction will improve continuously during the mission as the range of backgrounds and calibration stars expands .the absolute calibration of mips will rely on a well determined anchor at 10.6 using the fundamental calibrators boo , tau , and gem .three independent methods will be used to extrapolate the calibration at 10.6 to the mips bands : ( 1 ) solar analogs , ( 2 ) a star atmospheric models , and ( 3 ) semi - empirical models of k giants .grids of stars for each method have been observed from the ground and tied to the fundamental calibrators at 10.6 . for the solar analog stars , on orbit observations at 24 , 70 , and 160are being compared with extrapolations of empirical measurements of the sun extrapolated into the mips bands .a grid of a stars has been observed in all three bands on orbit and compared to extrapolations of a star atmosphere models to the mips bands .while the solar analog and a star calibrators will be observed in the mips 160 band , the k giant calibrators will be the only ones detectable at high signal - to - noise in that band .on orbit observations of the k giant calibrators are being compared to theoretical extrapolations of model atmospheres , eg . 
extrapolated to longer wavelengths using the engelke function .absolute flux calibrators will be observed throughout the lifetime of the mission .the last step in the reduction of mips data is to use the redundancy inherent in the observing modes to improve the removal of instrumental signatures .this step is mainly for the 70 and 160 data due to the challenging aspects of ge detector calibration .we define the level of redundancy to be the number of different pixels that measure the same point on the sky .our approach will be to look for known instrumental signatures ( as a function of time ) in the difference between what a particular pixel detects and what all the other pixels detected for the same sky locations .this is possible because the observing strategy has been designed so that each point on the sky will be observed multiple times by different pixels .table [ tab_redund ] shows the minimum level of redundancy for each mips observing mode .many mips observations are taken with multiple cycles resulting in significantly higher redundancies .it is recommended to have a minimum redundancy of four .lccc photometry , compact & 14 & 10 & 2 + photometry , large & 10 & 6 & 1 + photometry , compact super resolution & 14 & 8 & 6 + photometry , large super resolution & 10 & 8 & + scan map , slow & 10 & 10 & 1 + scan map , medium & 10 & 10 & 1 + scan map , fast & 5 & 5 & 0.5 + sed & & 2 & + total power & 1 & 1 & 1 the basic algorithm for using redundancy to refine the instrumental signature removal is as follows . 1 .create a mosaic of all the images in question . during the mosaic creation ,use a sigma rejection algorithm to remove data that are deviant from the majority of the observations .2 . use the mosaic as a `` truth '' image of what each image should have measured .3 . for each pixel , subtract the actual from the `` truth '' measurements to create a measurement of the time history differences .4 . examine the difference time history for known instrumental signatures .while many instrumental signatures could be present , we plan to concentrate on stim latent residuals and systematic differences between `` extended '' sources and point sources .actual on - orbit data will guide the details and number of instrumental signatures that are corrected using redundancy .5 . correct for all instrumental signatures that are found to be significant .iterate steps 1 - 5 until no new significant instrumental signatures are found .the input to this algorithm is calibrated slope images .the output product of this algorithm is enhanced images .a useful side product will be the mosaicked image of the object . to use the redundancy to remove additional instrumental signatures we must first coadd all related observations into a single mosaic . because the mips optical train is made up of purely off - axis reflective elements thereexist scale changes and rotations across the re - imaged focal plane . to coadd images taken at different places on the array , it is crucial to correct the data for these distortions .we used the code v optical models for spitzer / mips to estimate the distortions present in the images from the three mips detectors .the results from code v allow us to determine distortion polynomials which can then be used to correct for the distortions .we estimated the distortions by setting up a grid of equally spaced points in the field of view at a specific scan mirror angle .the chief ray from each object point was traced through the system to where it was imaged on the focal plane . 
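Returning briefly to the redundancy loop enumerated in steps 1-5 above before continuing with the distortion analysis, the iteration can be summarized schematically as below. The three callables stand in for the pipeline-specific mosaicking, mosaic-sampling, and signature-search steps, so the loop is a sketch of the control flow rather than the actual implementation.

```python
import numpy as np

def redundancy_refine(images, make_mosaic, sample_mosaic, find_signatures,
                      max_iter=10):
    """Schematic version of the redundancy loop: build a 'truth' mosaic,
    difference each image against it, search the per-pixel difference time
    histories for known signatures, correct, and repeat.

    images          : list of calibrated slope images (numpy arrays) with
                      associated pointing information.
    make_mosaic     : callable co-adding the images with sigma rejection.
    sample_mosaic   : callable returning, for one image, the mosaic's
                      prediction of what that image should have measured.
    find_signatures : callable returning (corrected_images, found_flag).
    """
    for _ in range(max_iter):
        truth = make_mosaic(images)                                  # steps 1-2
        diffs = [im - sample_mosaic(truth, im) for im in images]     # step 3
        images, found = find_signatures(images, diffs)               # steps 4-5
        if not found:                                                # iterate
            break
    return images, make_mosaic(images)
```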
in a perfect optical systemthe image points would map perfectly from the object with a possible change in magnification .the difference between the ideal location and the actual location is the distortion .for example , figure [ fig_70um_dist ] is a vector plot of the distortions present in the 70 narrow field array .the equally spaced grid of points present the focal plane points and the ends of the vectors correspond to the object points , after a plate scale factor was applied .the difference in the points ( the length of the vector ) is caused by the distortions .lc 24 & 2.84 + 70 wide & 0.2 + 70 narrow & 7.70 + 160 & 7.78 + table [ tab_dist_det ] lists the scale change of the field of view of the different mips arrays . the scale change is defined as ( maximum length of distorted field - minimum length of the distorted field)/(minimum length of the distorted field ) . from a distortion standpoint , it is useful to look closely at individual pixels to see how distortion changes the area imaged on the pixel .figure [ fig_70um_close ] is a plot of a distorted pixel in the 70 narrow field array .one can see that the distorted pixel changes shape from a square to a somewhat trapezoidal shape .the ratio of the distorted to undistorted pixel area is 1.19 .table [ tab_dist_pix ] lists information on how distortion affects the area imaged on individual pixels on the different arrays .the distorted pixel area ratio is defined as ( distorted pixel area)/(undistorted pixel area ) .lccccc 24 & 0.9998 & 0.0282 & 0.9406 & 1.0613 + 70 wide & 1.0027 & 0.0042 & 0.9973 & 1.0148 + 70 narrow & 1.0129 & 0.0664 & 0.8913 & 1.1929 + 160 & 0.9781 & 0.0361 & 0.9007 &1.0137 + following the procedure of converting the pixel coordinates to world coordinates outlined in , the distortion correction is applied to the pixel coordinates before any other transformations .the distortion correction is accounted for by distortion polynomials .the distortion polynomials give the additive correction to map the distorted pixel coordinates , to the distortion corrected pixel coordinates .thus , and , where and the ability to remove additional instrumental signatures is dependent on creating a high resolution mosaicked image .for the mosaicked image to be of sufficient resolution , the mosaicked pixel sizes must be smaller than the original input pixels .while many mosaicking programs compensate for undersampling by making use of dithered data , the mips data are well sampled and do not require this compensation . instead , our focus is co - adding the related calibrated images into a single image without interpolating between pixels .therefore , we always work on the coordinates of the corners of a pixel , transforming them from the input image coordinate system to the output mosaicked coordinate system .the output mosaic image is on a single tangent plane . in the transformation of the pixel corners in the input image pixels to their location on the output mosaic imagethe corners are corrected for distortion , converted to right ascension and declination , and then projected onto the tangent plane defined by the right ascension and declination of the mosaic center .figure [ fig_mosaic ] is an example of three images which overlap each other on the mosaicked plane . 
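A sketch of the coordinate handling described above is given below: the distortion polynomials are applied additively to the pixel (corner) coordinates, and the corrected celestial coordinates are then projected gnomonically onto the tangent plane of the mosaic center. The polynomial form, the coefficient layout, and the function names are assumptions for illustration; the actual coefficients come from the Code V optical model.

```python
import numpy as np

def correct_distortion(x, y, cx, cy):
    """Apply additive distortion polynomials to pixel (corner) coordinates.

    cx, cy are 2-D coefficient arrays such that the correction is
    dx = sum_ij cx[i, j] * x**i * y**j (and similarly for dy)."""
    dx = np.polynomial.polynomial.polyval2d(x, y, cx)
    dy = np.polynomial.polynomial.polyval2d(x, y, cy)
    return x + dx, y + dy

def project_to_mosaic(ra, dec, ra0, dec0):
    """Gnomonic (tangent-plane) projection of corner positions, in radians,
    onto the plane defined by the mosaic center (ra0, dec0)."""
    cos_c = (np.sin(dec0) * np.sin(dec)
             + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec)
           - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return xi, eta
```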
in the process of establishing the location of the image pixel corners on the mosaicked plane ,the link and overlap coverage between the input pixel and the output pixels it falls on is determined .a critical step in removing residual instrumental signatures based on the co - added mosaic image depends on correctly linking each mosaic pixel with each image pixel that overlaps it ( and vice versa ) and accurately determining the degree of overlap .essentially each output sub - sampled mosaic pixel becomes a cube of data , with each plane in this cube representing the information in each overlapping image pixel . the surface brightness and uncertainty associated with each mosaic pixelis found by weighted averaging the overlapping planes of data . in the surface brightness case ,the weighting is based on the overlap coverage and uncertainty associated with the input image pixel .the information in each mosaic pixel is based on multiple observations of a single area on the sky .this redundancy of data can be used to identify cosmic rays or any single image pixel measurement that deviates from the expected mean of multiple observations and expected noise .for example , for a 70 photometry observing mode cycle , if one of the pixels suffers from a much larger stim latent than the other observations , it will stand out and be identified as an outlier . as an outlier , it will not be used in creating the mosaicked image .after all the outliers have been determined , then the links between the output mosaic pixel and the input image pixels are used to tally the number of times an image pixel was flagged as an outlier .if the majority of the time an image pixel was flagged as deviant , then this pixel is flagged in the original data as an outlier .if a sufficiently large number ( about 1% ) of the input image pixels are flagged as outliers then the mosaic step is repeated .the final output is a mosaic image that can then be used as the `` truth '' image of what each image should have measured . following the steps outlined in section [ sec_redund_algorithm ] this truth imageis used to remove residual instrumental signatures .with the successful launch of the spitzer space telescope in august 2003 , these reduction algorithms were tested against mips flight data of astronomical sources . this testing has validated the algorithms described in this paper , but has also shown that a number of modifications will be needed to handle the realities of flight data .the initial results of this testing are summarized here and in , but a full accounting will be a subject of a future paper when the final mips reduction algorithms are known . there were two significant changes in the instrument operations which are not easily correctable by reduction algorithms .the 70 array was found to suffer from a cable short induced sometime between ground testing and flight .this short injects a large amount of noise into one half of the 70 array resulting in a useful array of only pixels .the 160 array was found to suffer from a `` blue - leak '' caused by an unintended reflection from the blocking filter which passes through the bandpass filter .this `` blue - leak '' results in an approximately factor of 15 image leak for stellar sources .this leak means that asteroids are now the primary calibrators for 160 .other than bright stars , the leak signal is below the confusion limit for most science targets as they have much smaller blue/160 ratios . 
At 24, the pre-flight reduction algorithms were found to work well with only three changes needed. First, the row droop correction does not seem necessary, but extensive testing has yet to be completed. Second, an additive offset in the second read of every ramp was found which produced a low-level (1-2%) gradient in final mosaics. A straightforward correction for this has been implemented using raw and SUR data for calibration. Third, scan mirror angle dependent flat fields are needed due to contamination of the scan mirror by small particles. This contamination is seen as dark spots in individual images which move with scan mirror angle, but not with spacecraft offsets. With these three modifications to the preflight algorithms, MIPS is producing high quality 24 images which are well calibrated. At 70, flight data have validated the basic structure of the preflight reduction algorithms, but significant modification is required to account for time dependent behaviors. The stim flash latents were found to grow in amplitude quickly after anneals. On a similar timescale, the residual background time dependence (after correction using the stim flash amplitudes) was seen to grow. These two facts required hand reductions to remove the stim flash latents and background variations to produce good quality mosaics at 70. These two effects are prime candidates for removal using redundancy, but the effectiveness of automatic removal has not been demonstrated yet. At 160, the basic preflight algorithms have been validated by comparison with flight data. Some differences in detector behavior were seen in flight data; for example, the stim flash latents have a faster time constant than in preflight data. At this time, the nonlinearities in the 70 and 160 arrays have not been well enough characterized with flight data to validate this section of the preflight algorithms. Finally, the cosmic ray rate seen in the Ge arrays is about a factor of two above preflight predictions, roughly 1 cosmic ray every 12 seconds. The ramp jump detection has been seen to work well, and line segment fitting removes the majority of the effects of these cosmic rays. Some residual effects remain, and additional characterization may lead to algorithms to remove these additional effects. The effectiveness of the algorithms described in this paper, as well as the design of MIPS, is attested by the point-spread-functions (PSFs) constructed from flight data at 24, 70, and 160 shown in fig. [fig_psf]. These PSFs all clearly have a well-defined first Airy ring, with the 24 PSF also exhibiting a well-defined second Airy ring. All three PSFs are well represented by the predictions of TinyTim models adapted to MIPS.
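As an aside on the cosmic-ray handling mentioned above, the line-segment fitting can be illustrated with a simple per-pixel sketch: first differences above a jump threshold split the ramp into segments, a slope is fit to each segment, and the segment slopes are averaged. The weighting by segment length is a simplification of the full uncertainty propagation discussed in the appendix, and the threshold and function names are assumptions.

```python
import numpy as np

def fit_ramp_segments(reads, read_times, jump_threshold):
    """Fit slopes to the segments of a data ramp between cosmic-ray jumps.

    reads, read_times : 1-D arrays of non-destructive reads (DN) and times (s).
    jump_threshold    : first-difference level (DN) above which a read-to-read
                        step is treated as a cosmic-ray hit.
    Returns the mean of the per-segment slopes weighted by segment length,
    a simplification of the full uncertainty propagation in the appendix."""
    reads = np.asarray(reads, dtype=float)
    read_times = np.asarray(read_times, dtype=float)
    jumps = np.where(np.abs(np.diff(reads)) > jump_threshold)[0] + 1
    edges = np.concatenate(([0], jumps, [len(reads)]))
    slopes, weights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi - lo < 2:                       # a slope needs at least two reads
            continue
        slope, _ = np.polyfit(read_times[lo:hi], reads[lo:hi], 1)
        slopes.append(slope)
        weights.append(hi - lo - 1)           # number of first differences used
    if not slopes:
        return np.nan
    return np.average(slopes, weights=weights)
```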
in addition , there are many papers written using mips flight data for the spitzer special astrophysical journal supplement issue ( 2004 , apjs , 154 ) .this paper has described the preflight data reduction algorithms for all three arrays for the mips instrument on spitzer .these algorithms have been guided by extensive laboratory testing of the si ( 24 ) and ge ( 70 and 160 ) arrays .in addition , numerical modeling of the ge arrays has provided important insights into their behavior .the design and operation of the mips instrument has been summarized to give sufficient background for understanding the data reduction algorithms .the design and operation of the mips instrument is mainly driven by the needs of the ge arrays .as ge detectors display significant responsivity drift over time due mainly to cosmic ray damage , the mips observing modes include frequent observations of an internal illumination source .in addition , most mips operating modes have been designed to provide significant redundancy to increase the robustness of the mips observations against detector effects .the data reduction for the mips arrays is divided into three parts .the first part converts the data ramps into slope measurements and removes detector signatures with time constants less than approximately 10 seconds .these detector signatures at 24 include saturation , dark current , rowdroop , droop , electronic nonlinearities , and cosmic rays . at 70 and 160 ,the detector signatures removed include saturation , electronic nonlinearities , stim flash latents , and cosmic rays .the resulting slopes are determined from linear fits and their uncertainties are computed accounting for both the random and correlated nature of the data ramp uncertainties .the second part of the mips data reduction converts the slopes to calibrated slopes and removes detector signatures with time constants larger than approximately 10 seconds . at 24, this translates to applying a flat field , correcting for object latents , and applying the flux calibration . at 70 and 160, this step includes subtracting the dark , flat fielding using an instantaneous flat field , correcting for the flux nonlinearities , and applying the flux calibration . a flat field specific to each 70 and 160 image is required to correct for the time - dependent responsivity of the ge arrays .it is constructed from the frequent stim flashes and a previously determined illumination correction .the third data reduction step is to use the spatial redundancy inherit in the mips observing modes to improve the removal of instrumental signatures .this step is only applied to the ge data .known instrumental signatures are searched for in the difference between what a specific pixel and what all other pixels from the same sky locations detected . if instrument signatures are detected , they are removed and the process is repeated .this method is iterative in nature and will require care to avoid introducing spurious signals into the data .the design of this portion of the data reduction algorithms is necessarily the least developed because only after spitzer launches will it be known which instrumental signatures are important to correct with this method .finally , initial testing using flight data from mips has validated these data reduction algorithms , but some modification is necessary to account for the realities of flight .a future paper will describe these modifications in detail once they have been devised and tested .we wish to thank j. w. beeman and e. e. 
haller for their contributions to the design and building of the mips instrument .this work was supported by nasa jpl contract 960785 .when a detector is non - destructively read out multiple times before resetting the resulting data ramps represent correlated measurements .this is because measurement is equal to where is number of photons detected in the time between and .this statement ignores the effects of read noise , which produces uncorrelated uncertainties on the measurements .while fitting lines to data with correlations is a complex subject , the form of the correlations in the case of non - destructive readouts allows analytic equations to be derived for the linear fit parameters and uncertainties .we present a derivation of equations for linear fit parameters and their uncertainties for the case of a data ramp with correlated reads and no read noise .this derivation is based on a similar derivation by for nicmos data ramps but is slightly more general . as part of this derivation , it can be seen that the linear fit parameters derived assuming either random or correlated uncertainties are equivalent .this is not the case for the uncertainties on the fit parameters , which is the main motivation for this derivation .the basics of fitting a line to data with random uncertainties are given in .we repeat their results here , as the derivation for correlated uncertainties draws directly from this work . in fitting data to a line of the form the fit parameters and their uncertainties are where is the number of ( ) measurements , is the uncertainty on each measurement of , these equations assume that the measurements of are independent . to determine the linear fit terms for a line fit to correlated data ramps , the assumption that the measurements are independent is not correct .the standard formulae need to be modified to sum over terms that are independent .the modifications start with realizing that is the independent quantity in the absence of read noise .any equation in the standard derivation that relies on the independence of needs to be modified to only depend on .thus , and using a similar derivation , the standard equations ( [ eq_a_random ] & [ eq_b_random ] ) can then used to determine the best fit values of and for the case of correlated uncertainties .in fact , the values of and derived assuming correlated or uncorrelated uncertainties are exactly the same . the differences between the two types of uncertainties arises in determining and . to derive and for a data ramp with correlated measurements we start with equations 6.19 and 6.20 of . converting from to as the independent variable gives \\ & = & \sum_{i=2}^n \left [ \sigma(p_i)^2 \left ( \frac{\partial z}{\partial p_i } \right)^2 \right ] \\\end{aligned}\ ] ] where is either or .the partial derivatives needed are then and thus , the assumption that the uncertainties can be calculated separately for the correlated and random measurement uncertainties was tested via monte carlo simulations .simulations for cases similar to that expected for the 70 array are plotted in figure [ fig_fit_cor_ex ] .as can be seen from these plots , equations [ eq_sigma_a ] and [ eq_sigma_b ] give very good estimates of the actual uncertainties .beeman , j. w. & haller , e. e. 2002 , , 4486 , 209 beichman , c. a. , neugebauer , g. , habing , h. j. , clegg , p. e. , & chester , t. j. 1988 , nasa rp-1190 , vol . 1 bevington , p. r. & robinson , d. k. 
1992 , data reduction and error analysis for the physical sciences ( new york : mcgraw - hill , inc . ) burgdorf , m. j. , et al .1998 , adv . in space research , 21 , 5 church , s. e. , griffin , m. j. , price , m. c. , ade , p. a. , emergy , r. j. , and swinyard , b. m. 1993 , proc .spie , 1946 , 116 cohen , m. , walker , r. g. , barlow , m. j. , & deacon , j. g. 1992 , , 104 , 1650 cohen , m. , witteborn , f. c. , walker , r. g. , bregman , j. d. , & wooden , d. h. 1995 , , 110 , 275 cohen , m. , witteborn , f. c. , carbon , d. f. , davies , j. k. , wooden , d. h. , & bregman , j. d. 1996a , , 112 , 2274 cohen , m. , witteborn , f. c. , bregman , j. d. , wooden , d. h. , salama , a. , & metcalfe , l. 1996b , , 112 , 241 de graauw , t. et al .1996 , , 315 , l49 dierckx , b. , vermeiren , j. , cos , s. , faymonville , r. , and lemke , d. 1992 , proc .esa symposium on photon detectors for space astronomy ( see n94 - 15025 ) , pp .405 - 408 engelke , c. w. 1992 , , 104 , 1248 gordon , k. d. et al .2004 , , 5487 , 177 greisen , e. w. & calabretta , m. r. 2002 , , 395 , 1061 haegel , n. m. , simoes , j. c. , white , a. m. , & beeman , j. w. 1999 , , 38 , 1910 haegel , n. m. , schwartz , w. r. , zinter , j. , white , a. m. , & beeman , j. w. 2001 , , 40 , 5748 heim , g. b. et al .1998 , , 3356 , 985 heras , a. m. , et al . 2000 , experimental astronomy , 10 , 177 hesselroth , t. , ha , e. c. , pesenson , m. , kelly , d. m. , rivlis , g. , & engelbracht , c. w. 2000 , , 4131 , 26 kessler , m. f. et al .1996 , , 315 , l27 krist , j. 1993 , asp conf .ser . 52 : astronomical data analysis software and systems ii , 2 , 536 lemke , d. , et al .1996 , , 315 , l64 low , f. j. , beichman , c. a. , gillett , f. c. , houck , j. r. , neugebauer , g. , langford , d. e. , walker , r. g. , & white , r. h. 1984 , optical engineering , 23 , 122 neugebauer , g. , et al .1984 , , 278 , 1 rieke , g. h. 2002 , detection of light , 2nd edition ( cambridge , england : cambridge university press ) rieke , g. h. , montgomery , e. f. , lebofsky , m. j. , & eisenhardt , p. r. 1981 , , 20 , 814 rieke , g. h. , lebofsky , m. j. , & low , f. j. 1995 , , 90 , 900 rieke , g. h. , et al .2004 , , 154 , 25 schnurr , r. , thompson , c. l. , davis , j. t. , beeman , j. w. , cadien , j. , young , e. t. , haller , e. e. , & rieke , g. h. 1998 , , 3354 , 322 sparks , w. b. 1998 , nicmos instrument science report , space telescope science institute , 98 - 008 swinyard , b. m. et al .1996 , , 315 , 43 swinyard , b. , clegg , p. , leeks , s. , griffin , m. , lim , t. , & burgdorf , m. 2000 , experimental astronomy , 10 , 157 valentijn , e. a. & thi , w. f. 2000 , experimental astronomy , 10 , 215 young , e. t. et al .1998 , , 3354 , 57 young , e. t. et al .2003 , , 4850 , 98
We describe the data reduction algorithms for the Multiband Imaging Photometer for Spitzer (MIPS) instrument. These algorithms were based on extensive preflight testing and modeling of the Si:As (24) and Ge:Ga (70 and 160) arrays in MIPS and have been refined based on initial flight data. The behaviors we describe are typical of state-of-the-art infrared focal planes operated in the low backgrounds of space. The Ge arrays are bulk photoconductors and therefore show a variety of artifacts that must be removed to calibrate the data. The Si array, while better behaved than the Ge arrays, does show a handful of artifacts that also must be removed to calibrate the data. The data reduction to remove these effects is divided into three parts. The first part converts the non-destructively read data ramps into slopes while removing artifacts with time constants of the order of the exposure time. The second part calibrates the slope measurements while removing artifacts with time constants longer than the exposure time. The third part uses the redundancy inherent in the MIPS observing modes to improve the artifact removal iteratively. For each of these steps, we illustrate the relevant laboratory experiments or theoretical arguments along with the mathematical approaches taken to calibrate the data. Finally, we describe how these preflight algorithms have performed on actual flight data.
with the development of electromagnetic and optics technology and the increasing demands in industrial and military applications , scattering from periodic structures have attracted much interest in recent years . a variety of numerical methods including finite difference methods , finite element methods , spectral and spectral element methods , integral equation methods , dirichlet - to - neumann map methods , and mode expansion method have been developed by the engineering community and the applied mathematical community for solving linear diffraction problems from periodic structures .finite difference and finite element methods are easy to implement and result in sparse linear systems , which enable the use of sparse direct solvers .however , these methods typically have significant dispersion errors for high - frequency problems and thus fail to provide accurate solutions .if the medium is piecewise constant , boundary integral equations formulated on the interfaces of multilayer structures or the boundaries of the obstacles are natural and mathematically rigorous . by exploiting high - order quadratures , one obtains much higher efficiency and accuracy than finite difference and finite element methods . for problems with general medium , spectral methods can be employed .they are simple to implement and typically require relatively small number of unknowns to attain a fixed accuracy .we refer to for a brief survey of existing spectral methods for partial differential equations . in this paper , we consider 2d quasi - periodic scattering problems . by introducing transparent boundary conditions ,the problem is governed by a helmholtz equation with variable coefficients defined on rectangular domains .we propose a spectral collocation method and a tensor product spectral method .the first one uses the spectral collocation techniques and the second one , based on separable representations of the differential operator , combines the fourier spectral method and the ultraspherical spectral method . for vertically layered medium case , both methods only require to solve a one dimensional problem .the two spectral methods proposed in this paper can adaptively determine the number of unknowns and approximate the solution to a high accuracy .we remark that the tensor product spectral method leads to matrices that are banded or almost banded . by employing the fast algorithm in , a layered medium approximation problem can be used as a preconditioner for the problem with general media .recently , chebfun software was extended to solve problems in two dimensions , both of which are nonperiodic .we refer to for the summary of the mathematics and algorithms of chebyshev technology for nonperiodic functions .extension to periodic functions in one dimension was proposed in .our spectral methods can be used to solve the problems in two space dimensions , one of which is periodic .implementing our methods with the adaptive strategies of chebfun software is easy .the rest of this paper is organized as follows . in 2 the model problem is formulated and further reduced to a boundary value problem . 
in 3 we present the spectral collocation method for the problem .section 4 is devoted to the tensor product spectral method .numerical examples illustrating the accuracy and efficiency of the methods are reported in 5 .we present brief concluding remarks in 6 .assume that no currents are present and that the fields are source free .then the electromagnetic fields in the whole space are governed by the following time - harmonic maxwell equations ( time dependence ) & & = h , [ me1 ] + & & = -,[me2]where is the imaginary unit , is the angular frequency , is the permeability , is the permittivity , is the electric field and is the magnetic field . in this paper, we consider a simple quasi - periodic scattering model : is constant everywhere ; is periodic in variable of period , invariant in variable , and constant away from the region , i.e. , there exist constants and such that & & ( x , y , z)=^+ , y > 1- , + & & ( x , y , z)=^- , y < -1 . in the tm polarization ,the electric field takes the simpler form the maxwell equations ( [ me1])-([me2 ] ) yield the helmholtz equation : u+ ^2u=0.[reduv1 ] consider the plane wave is incident from the above , where , , and is the angle of incidence with respect to the positive -axis .the incident wave leads to reflected wave and transmitted wave . for , we have , and for , .we are interested in quasi - periodic solution with phase , i.e. , is -periodic in variable . therefore , the reflected and transmitted waves can be written as & & u^r=_j r_j^_jx+_jy , y>1 , + & & u^t=_j t_j^_jx-_jy , y<-1 , where and are unknown complex scalar coefficients and & & _ j=_0+j , + & & _ j=,(_j)0 , + & & _ j= , ( _ j)0.note that if is real , then the s satisfying correspond propagating modes . throughout , we assume that and for all .this assumption excludes the `` resonant '' cases , where waves can propagate along the -axis . for a quasi - periodic function ,define the linear operators and by & & ( sf)(x)=_j_j f_j^_j x,[mcals ] + & & ( t f ) ( x)=_j_j f_j ^_j x,[mcalt]where & & f ( x)=_jf_j ^_j x , + & & f_j = _ 0 ^ 2f ( x)^-_j xx.let denote the unit outward normal .we obtain transparent boundary conditions : [ dtn ] & & _ u - su=-2_0^_0 x-_0,(0,2)\{1 } , + & & _ u - t u=0,(0,2)\{-1}.[dtnn ] the quasi - periodic scattering problem is to solve the helmholtz equation ( [ reduv1 ] ) in the rectangular domain subject to the transparent boundary conditions ( [ dtn])-([dtnn ] ) and the quasi - periodic boundary condition & & u(x+2,y)=^_0 2u(x , y).[qp ] define . then satisfies [ periodic]\ { lll__0 v+^2v=0 , & in & =(0,2)(-1,1 ) , + _ y v-^-_0 xs ( ^_0 xv)=-2_0 ^ -_0 , & on & ( 0,2 ) , + _xt(^_0 xv)=0 , & on & ( 0,2 ) , + v(x+2,y)=v(x , y ) , & in & ^2 , .where the operator is defined by .the solution to ( [ periodic ] ) is unique at all but a discrete set of frequencies when the incident angle is fixed .existence and uniqueness of the solution is strictly proved in dobson by a variational approach . since the operators and given in ( [ mcals])-([mcalt ] ) are defined by infinite series , the computation has to be truncated in practice . 
for simplicity , we truncate and as follows : & & ( s^n f)(x)=_j=1-q^n - q_jf_j^_j x , + & & ( t^n f ) ( x)=_j=1-q^n - q_j f_jt^n ^_j x , where is a sufficiently large integer and is an integer satisfying that and all propagating modes are contained in the middle of the truncated series .next , we propose two spectral methods for the following problem [ tperiodic]\ { lll__0 v+^2v=0 , & in & =(0,2)(-1,1 ) , + _y v-^-_0 xs^n(^_0 xv)=-2_0 ^ -_0 , & on & ( 0,2 ) , + _ y v+^-_0 xt^n(^_0 xv)=0 , & on & ( 0,2 ) , + v(x+2,y)=v(x , y ) , & in & ^2 ..we refer to ( * ? ? ?* section 3.1 ) for the discussion on the existence and uniqueness of the solution of the problem ( [ tperiodic ] ) . in this paper, we focus on the accurate and efficient spectral methods for the problem ( [ tperiodic ] ) .thus we always assume that the discrete problem has a unique solution .we approximates the problem ( [ tperiodic ] ) by fourier discretization in variable and chebyshev discretization in variable .let and with , , and . introduce the first - order fourier differentiation matrix , {i , j=1}^{n} ] , n , & & d_ij^xx = \ { ll -+ , & i = j , + ( -1)^i - j+1 , & ij , . +n , & & d_ij^xx = \ { ll -- , & i = j , + ( -1)^i - j+1 ^ 2 , & ij , .and the chebyshev differentiation matrix , {i , j=0}^m, ] , is the approximate solution at the point , {m=0,n=1}^{m , n} ] .the fourier spectral method finds an infinite vector ^\rmt,\ ] ] such that the fourier expansion of the solution of ( [ pode ] ) is given by .\ ] ] note that we have the first - order differentiation operator is given by ,\ ] ] and the second - order differentiation operator is given by in order to handle variable coefficients of the forms , and in ( [ pode ] ) , we need to represent the multiplication of two fourier series as an operator on coefficients .let ] returns the fourier expansion coefficients of .suppose that is given by its fourier series then the explicit formula for ] is banded with a bandwidth of . 
combining the differentiation and multiplication operators yields \mcald_x^2+\mcalt[b]\mcald_x+\mcalt[c]){\bf w}={\bf f},\ ] ] where and are vectors of fourier expansion coefficients of and , respectively .we need truncate the operator to derive a practical numerical scheme .let be the projection operator satisfying {\bf w}=\l[\begin{array}{cccc } w_{1-q } & w_{2-q } & \cdots & w_{n - q } \end{array}\r]^\rmt.\ ] ] we obtain the following linear system where \mcald_x^2+\mcalt[b]\mcald_x+\mcalt[c].\ ] ] the truncation parameter can be adaptively chosen so that the numerical solution approximates the exact solution to relative machine precision .consider the second order linear ordinary differential equation [ code]w:=a(x)w(x)+b(x)w(x)+c(x)w(x)=f(x),where , , , and are functions defined on ] that represents multiplication of two chebyshev series , and the multiplication operator ] returns the chebyshev expansion coefficients of , and \mcals_{\lambda-1}\cdots\mcals_0{\bf w} ] can be written as : =\frac{1}{2}\l[\begin{array}{ccccc}2a_0 & a_1 & a_2 & a_3 & \cdots\\ a_1 & 2a_0 & a_1 & a_2 & \ddots \\ a_2 & a_1 & 2a_0 & a_1 & \ddots \\ a_3 & a_2 & a_1 & 2a_0 & \ddots \\ \vdots & \ddots & \ddots & \ddots & \ddots \end{array}\r]+\frac{1}{2}\l[\begin{array}{ccccc } 0 & 0 & 0 & 0 & \cdots\\ a_1 & a_2 & a_3 & a_4 & \cdots \\ a_2 & a_3 & a_4 & a_5 & \iddots \\ a_3 & a_4 & a_5 & a_6 & \iddots \\ \vdots & \iddots & \iddots & \iddots & \iddots \end{array}\r].\ ] ] the explicit formula for the entries of ] with look dense ; however , if is approximated by a truncation of its chebyshev or series , then ] , \mcalq_m^\rmt ] .the discrete forms of the transparent boundary conditions are given by [ beta2 ] & & * b*_1 ^ -*a*_1^_=-2_0 ^ -_0 * e*_q^ , + [ gamma2 ] & & * b*_2^+*a*_2^_= * 0*,where & & _= \{_1-q,_2-q,,_n - q } , + & & _= \{_1-q,_2-q,,_n - q } , + & & * b*_1=[0 1 4 ( m-1)^2]^ , + & & * b*_2=[0 1 -4 ( -1)^m(m-1)^2]^ , + & & * a*_1=[1 1 1]^^m , + & & * a*_2=[1 -1 ( -1)^(m-1)]^^m , and is the -th column of the identity matrix .let [ helmfc]*h*= * x*(*qc*)+*i*_n(*qy*)+^2(*q * ) . combining ( [ tpfc ] ) ,( [ beta2 ] ) and ( [ gamma2 ] ) yields the linear system [ globalfc ] * a**v*=*g*,where = [ c*i*_n_1 ^ -__1^ + * i*_n_2^+__2^ + * h * ] , and \in\mbbc^{mn}.\ ] ] for general bivariate function , we can use its low rank approximant , i.e. , sum of functions of the form , where and are univariate functions .an algorithm which is mathematically equivalent to gaussian elimination with complete pivoting can be used to construct low rank approximations . in this case , we have , i.e. , .then the matrix in ( [ helmfc ] ) takes the form reordering the unknowns and equations in ( [ globalfc ] ) , we obtain [ onedfc] [ c*b*_1 ^ -_0*a*_1^ + * b*_2^+_0*a*_2^ + * q*(*y*+^2-|_0|^2*c * ) ] * v*_q = [ c -2_0 ^ -_0 + 0 + * 0 * ] , and [ ofc] [ c*b*_1 ^ -_j - q*a*_1^ + * b*_2^+_j - q*a*_2^ + * q*(*y*+^2-((j - q)^2 + 2_0(j - q)+|_0|^2)*c * ) ] * v*_j=*0*,jq , where is the -th column of . obviously , for . note that the fast algorithm in can be used to solve the linear system ( [ onedfc ] ) , which requires operations . here, is the number of chebyshev points needed to resolve the function .the coefficient matrix resulting from a layered medium approximation problem can be used as a preconditioner for the problem with general medium .the corresponding computational complexity for the preconditioner solve is .we have performed numerical experiments in numerous cases . 
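Although the computations reported in the next section were carried out in MATLAB, the structure of the operators in the periodic (Fourier) direction can be sketched in Python as follows. The mode ordering follows the truncation w_{1-q}, ..., w_{N-q} used above, the quasi-periodic phase shift alpha_0 is simply added to the mode numbers, and the FFT-based construction of the Toeplitz multiplication operator is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def fourier_diff_operator(N, q, order=1, alpha0=0.0):
    """Differentiation operator on the truncated Fourier coefficients
    (w_{1-q}, ..., w_{N-q}) of a quasi-periodic function; it is diagonal,
    with entries (i*(k + alpha0))**order for mode number k."""
    modes = np.arange(1 - q, N - q + 1)
    return np.diag((1j * (modes + alpha0)) ** order)

def fourier_mult_operator(a_samples, N, q):
    """Truncated Toeplitz operator T[a] representing multiplication by a
    2*pi-periodic coefficient a(x), given equispaced samples of a.

    (T[a] w)_m approximates the Fourier coefficient of a(x)*w(x) for mode m,
    i.e. sum_n a_{m-n} w_n, so the (m, n) entry holds a_{m-n}."""
    M = len(a_samples)
    ahat = np.fft.fft(a_samples) / M          # a_k for k = 0, 1, ..., M-1 (mod M)
    modes = np.arange(1 - q, N - q + 1)
    T = np.empty((N, N), dtype=complex)
    for r, m in enumerate(modes):
        for c, n in enumerate(modes):
            T[r, c] = ahat[(m - n) % M]       # valid while |m - n| is resolved
    return T
```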
in this section, we present a few typical results of these experiments to illustrate the accuracy and efficiency of the two spectral methods .all computations are performed with matlab r2012a .the parameters are chosen as , , , .we consider three cases : & & _ 1(x , y)=(y)=1+^+4 , + & & _ 2(x , y)=1+^+4-((x/2 ) ) , + & & _ 3(x , y)=1+^+4-y((x/2 ) ) .the medium characterized by is vertically layered .the bivariate function is of rank .we use the rank approximant obtained by the algorithm proposed in to approximate . in figure [ nr ], we plot the three media and the real parts of the corresponding waves obtained by our spectral methods .we observe that the numerical results obtained by the two methods coincide well for all the three media .we also test the performance of the problem with as a preconditioner for the problem with .the fast algorithm in is used to solve the linear systems ( [ onedfc ] ) and ( [ ofc ] ) .the gmres algorithm is used as the iterative solver .the initial guess is set to be the zero vector .gmres with the layered medium preconditioner obtained a solution with the relative residual norm less than at the iteration , while gmres without preconditioning almost stagnates ; see figure [ conhis ] for the convergence history .we have proposed a spectral collocation method and a tensor product spectral method for solving the 2d quasi - periodic scattering problem . both of the methods can adaptively determine the number of unknowns and approximate the solution to a high accuracy .the tensor product spectral method is more interesting because it leads to matrices that are banded or almost banded , which enable the use of the fast , stable direct solver .based on this fast solver , a layered medium preconditioning technique is used to solve the problem with general media .our methods also apply to solve general partial differential equations in two space dimensions , one of which is periodic .extension of our methods to bi - periodic structure diffraction grating problem is being considered .
We consider the 2D quasi-periodic scattering problem in optics, which is modelled as a boundary value problem governed by the Helmholtz equation with transparent boundary conditions. A spectral collocation method and a tensor product spectral method are proposed to numerically solve the problem on rectangles. The discretization parameters can be adaptively chosen so that the numerical solution approximates the exact solution to a high accuracy. Our methods also apply to general partial differential equations in two space dimensions, one of which is periodic. Numerical examples are presented to illustrate the accuracy and efficiency of our methods. Keywords: Helmholtz equation, transparent boundary condition, spectral method, Chebfun. AMS subject classifications: 65M70, 65T40, 65T50, 78A45.
in recent years a new paradigm for computation has been proposed by craig lent and coworkers , based on the concept of quantum cellular automata ( qca ) .such a concept , although extremely difficult to implement from a technological point of view , has several interesting features that make it worth pursuing .the basic building block is made up of a single cell , containing two electrons that can be localized in four different areas or `` dots , '' located at the vertices of a square , as shown in each of the cells represented in fig .[ eins](a ) .coulomb repulsion forces the two electrons to occupy dots that are aligned along one of the diagonals , and each of the two possible alignments is associated with a logic state . by placing cells next to each other , a wire can be formed ( binary wire ) , along which polarization enforced at one end will propagate , as a consequence of the system of charges relaxing down to the ground state .we can see this also as the logic state of the first cell propagating down the chain until it reaches the last cell .it has been shown that , by properly assembling two - dimensional arrays of cells , it is possible to implement any combinatorial logic function . the basic principle of operation of such circuits is therefore the relaxation of the system to the ground state , thus leading to the often used expression `` ground - state computation '' . even in the case of perfectly symmetric and identical cells , the configuration of the qca circuit may depart from the ground state as a consequence of thermal excitations .if the energy separation between the ground state and the first few excited states is small , their occupancy will be nonnegligible even at low temperatures , and the logic output may be corrupted .a complete understanding of the behavior of qca arrays as a function of temperature is thus essential for any practical application of the qca concept .the problem of errors due to finite temperature operation was first addressed by lent , on the basis of entropy considerations .our approach consists in a detailed study of thermal statistics for qca arrays , retrieving the results of ref. as a special case , and allowing treatment of cells with more than just two states .we have developed both a numerical model , which enables us to study relatively short chains made up of six - state cells in full detail , and an analytical model , which can be used for arbitrarily long chains of two - state cells . in both cases ,we have considered a semiclassical approximation , and computed the probability of the system being in the ground state and that of presenting the correct logic output , i.e. of having the last cell of the chain in the expected logic state . sinceonly one configuration corresponds to the ground state , while several different configurations are characterized by the correct logic output , the probability of having the correct output is always larger than that of being exactly in the ground state . in sec .[ enesec ] we present the cell model we have considered for both approaches and the semi - classical approximation that we have chosen to adopt .we also discuss the structure of the energy spectrum for the excited states of a chain of cells . 
in sec .[ partisec ] we present the procedure that has been followed for the calculation of the partition function with the numerical method and the associated results for the probabilities of correct operation as a function of temperature .the analytical model is described in sec .[ gebbo ] , together with the associated results and a comparison with those from the numerical model .our approach is semi - classical insofar as electrons are treated as classical particles , with the only additional property that they can tunnel between dots belonging to the same cell .this is a reasonable approximation if the tunneling matrix elements between the dots of a cell are small enough to strongly localize the electrons , which therefore behave as well defined particles .our model chains are characterized by two geometrical parameters : is the distance between neighboring cell centers , is the distance between two dots in a cell .we represent the _ driver _ cell , i.e. the cell whose polarization state is externally enforced , with bold lines and indicating only the electron positions ( see fig .[ eins](a ) ) ; _ driven _ cells are represented with solid lines and each dot is indicated with a solid circle if occupied or with an empty circle otherwise .within each cell we consider a uniformly distributed ( per dot , where is the electron charge ) positive background charge , which makes each cell overall neutral and prevents anomalous behaviors in the nearby cells , due to the uncompensated monopole component of the electrostatic field .in particular , the repulsive action of the uncompensated electrons in a driver cell can `` push '' the electrons in the nearby driven cell away , thus leading to the formation of an unwanted state in which electrons are aligned along the side further from the driver cell . in our calculationswe have considered the gaas / algaas material system and assumed a uniform relative permittivity of 12.9 : this is a reasonable approximation , since the permittivity of algaas does not differ significantly from that of the gaas layer , where the electrons are confined . for this studywe have neglected , for the sake of simplicity and of generality , the effects of the semiconductor - air interface and of the metal gates defining the dots , whose rigorous treatment would have required considering a specific layout . for silicon - on - insulator qca cells , materials with quite different permittivities come into play : silicon , silicon oxide and air , but reasonable estimates could be obtained by repeating our calculations with a relative permittivity corresponding to that of silicon oxide , since most of the electric field lines are confined in the oxide region embedding the silicon dots .moreover , estimates of the performance obtained with this approximation would be conservative , since part of the field lines are actually in the air over the device , whose relative permittivity is unitary , thus leading to a stronger electrostatic interaction and therefore to a reduced importance of thermal fluctuations .as we have already stated , the two minimum energy configurations of a cell are those with the electrons aligned along one of the diagonals , since these correspond to the maximum separation between the electrons . 
however , other configurations are also possible , and , depending on intercell spacing , they can appear in the first few excited states of a binary wire .we consider all of the six configurations that can be assumed by two electrons in four dots , excluding only those with both electrons in the same dot , which correspond to too large an energy .we define the two lowest energy configurations ( those with the electrons along the diagonals ) state 1 and state 0 as indicated in fig .[ eins](b ) , while the corresponding polarization values are and , respectively .polarization values are defined as where is the charge in the -th dot , with the first dot being at the top right and the others numbered counterclockwise .configurations with the two electrons along one of the four sides of the cell have higher energies , as stated before , and do not correspond to a well defined logic state .for this reason , we define them as states .the energy is computed as the electrostatic energy of a classical system of charges : since in our model the total charge in each dot is either the background charge ( empty dot ) or the algebraic sum of the background charge and the charge of an electron , it can take on only two values : or , which implies that if we write the interelectronic distance in terms of the ratio and of the configuration , the energy of a binary wire can be written as where is the number of cells between the cell containing dot and the cell containing dot , indicates the sign of , and , indicate the position of dots and inside the corresponding cells . in particular , is equal to 0 if both dots and are on the left side or on the right side of the cell , to -1 if dot is on the right side and dot is on the left side and to 1 if dot is on the left side and dot is on the right one .furthermore , is equal to 0 if both dots and are on the top or on the bottom of a cell , to 1 if one dot is on the top and the other is on the bottom .we have considered a binary wire made up of six cells ( one of which is a driver cell in a fixed polarization state ) with size nm and computed the energy values corresponding to all possible configurations .the values thus obtained have been ordered with the purpose of studying the energy spectra for different parameter choices .let us define . if , i.e. , the interaction between neighboring cells is substantially due to just the dipole component , and a discrete spectrum is observed already for ( see fig .[ zwei ] ) , with clear steps : the ground state corresponds to configurations with all cells in the same logic state : either all 1 or 0 ; the first excited state , for , includes configurations with one `` kink , '' i.e. with one cell flipped with respect to the rest of the chain .higher steps correspond to a larger number of kinks .energy values are expressed with reference to the ground state energy and in kelvin , i.e. as the result of the division of the actual energies in joule by the boltzmann constant . if is decreased , the interaction between neighboring cells is incremented and made more complex , so that states do appear , as shown in fig .[ zwei ] for .the various plateaus start merging and a continuous spectrum is approached . 
in particular , if we decrease while keeping constant , and thereby reducing the separation between neighboring cells , the difference between the energy of the ground state and that of the first excited state is expected to increase as , due to the increased electrostatic interaction .however , this is true only down to a threshold value of , below which the splitting between the first excited state and the ground state starts decreasing , as shown in fig .[ drei ] , where the energy split is plotted as a function of for a cell size of 40 nm , for a wire with 2 ( dotted line ) , 3 ( dashed line ) , and 6 ( solid line ) cells . this sudden change of behavior can be understood on the basis of the previously discussed results : below the threshold value for , the configuration for the first excited state contains a cell in the state , thereby disrupting the operation of the wire and lowering the splitting between the first excited state and the ground state . in the inset of fig .[ drei ] we report the dependence of the splitting between the two lowest energy states on the number of cells .these results are for , i.e. for a condition in which no state appears . once the number of cells is larger than a few units , the splitting quickly saturates to a constant value .this is easily understood if we consider that the first excited state is characterized by the cell at the end of the wire being polarized opposite to the others : the strength of the electrostatic interaction drops quickly along the chain , and hence no significant change is determined by the addition of cells beyond the first five or six .the energy splitting has been computed for a cell size nm and , as can be deduced from eq.([eq1 ] ) , is inversely proportional to .it can thus be increased by scaling down cell dimensions .in order to compute the probabilities , at a finite temperature , for the various configurations , we introduce the partition function of the wire : where is the energy of the -th configuration and , being the boltzmann constant and the temperature .the summation is performed over all configurations with the first cell in a given input logic state .the probability of the entire system being in the ground state can be evaluated by taking the ratio of the boltzmann factor for the ground state to the partition function : where , and the sum extends over all excited states . as already mentioned in the introduction, is not the only quantity of interest . from the point of view of applications, we are mainly interested in knowing the probability of obtaining the correct logic output , which is higher than , because several configurations , besides the ground state , exhibit the correct polarization for the output cell ( the cell at the end of the chain ). we can compute by summing over the probabilities corresponding to all such configurations , that we label with the subscript : we have computed both and as a function of the ratio of the splitting between ground state and first excited state to . 
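A brute-force evaluation of these probabilities is straightforward for short wires. The sketch below enumerates the six configurations per cell, computes the classical electrostatic energy with the dot charges implied by the neutralizing background (+e/2 per dot, so occupied dots carry -e/2), and forms the Boltzmann sums. Lengths are in units of the dot spacing, the prefactor e^2/(4*pi*epsilon*a) is set to one, and all names and conventions (including which diagonal is called state 1) are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from itertools import combinations, product

# Dot positions within a cell, numbered counterclockwise from the top right,
# in units of the dot spacing a; cell centers are spaced by d along x.
DOTS = np.array([[0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5]])
CONFIGS = list(combinations(range(4), 2))        # the six two-electron states
DIAG_1, DIAG_0 = (0, 2), (1, 3)                  # the two "logic" diagonals

def wire_energy(config, d_over_a):
    """Classical electrostatic energy (units of e^2/(4*pi*eps*a)) of a wire
    whose cells occupy the dot pairs listed in config; occupied dots carry
    charge -1/2 and empty dots +1/2 (electron plus +e/2 background per dot)."""
    charges, positions = [], []
    for n, occupied in enumerate(config):
        for k in range(4):
            charges.append(-0.5 if k in occupied else 0.5)
            positions.append(DOTS[k] + [n * d_over_a, 0.0])
    charges, positions = np.array(charges), np.array(positions)
    energy = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            energy += charges[i] * charges[j] / np.linalg.norm(positions[i] - positions[j])
    return energy

def thermal_probabilities(n_cells, d_over_a, beta):
    """Brute-force P(ground state) and P(correct output) for a driven wire;
    the driver is held on DIAG_1 and beta = 1/(kB*T) in the energy units above.
    Assumes the (driver-fixed) ground state is non-degenerate."""
    energies, correct = [], []
    for rest in product(CONFIGS, repeat=n_cells - 1):
        cfg = (DIAG_1,) + rest
        energies.append(wire_energy(cfg, d_over_a))
        correct.append(rest[-1] == DIAG_1)
    energies = np.array(energies) - np.min(energies)
    weights = np.exp(-beta * energies)
    Z = weights.sum()
    return weights[np.argmin(energies)] / Z, weights[np.array(correct)].sum() / Z
```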
the results for a chain of 6 cells are presented in fig .[ vier ] : in the limit all the configurations ( a total of , being the number of cells ) become equally probable and the probability reaches its minimum value .the probability of correct logic output , instead , reaches a minimum value of , as a consequence of the six possible states of the output cell being equally probable .it should be noted that an error probability of a few percent may appear unacceptable for any practical circuit application , but data readout must always be done via some detector , which is characterized by a time constant necessarily longer than the typical settling time of the qca circuit .therefore , each reading will be the result of an averaging procedure , and will be compared to a threshold value . in such a case ,an error probability of a few percent for the output state will lead in most cases to a vanishingly small error probability for the actual output of the readout circuit .as already noted , the number of possible configurations for a circuit with cells ( one of which is assumed to be the driver cell and hence in a given , fixed configuration ) is .thus the cpu time required to explore all such configurations grows exponentially with the number of cells , which limits the length of the binary wires that can be investigated with this approach in a reasonable time down to about ten cells . in order to assess the thermal behavior of long wires , we have developed the approximate analytical approach that will be described in the next section .the development of an analytical model for the investigation of the thermal behavior of a qca chain requires a main simplifying assumption , in order to make the algebraic treatment possible : for each cell we consider only two configurations , the ones corresponding to the logic states 1 and 0 , and , thus , to polarization and . from the discussion in sec .[ enesec ] it is apparent that the larger , the better this approximation is , because the role of the states is reduced .let us consider a generic 1-dimensional chain consisting of cells and introduce the following 1-dimensional ising hamiltonian where for each cell labeled by the index the variable corresponds to the polarization and therefore assumes the two values , and the positive quantity ( which has the dimension of an energy ) is related to the splitting between the ground state and the first excited state energies of an -cell system by .let us point out that there is a twofold degeneracy of the ground state , corresponding to the two configurations and .this degeneracy is removed by enforcing the polarization state of the driver cell , which corresponds to enforcing the configuration of one of the boundary sites ; our conventional choice is . in this case, the lowest energy state corresponds to the configuration .the partition function of the -cell system described by the hamiltonian ( [ hising ] ) with the boundary condition is given , in analogy with eq.([partifun ] ) , by the following expression : where stands for the summation over all possible states , i.e. .this last expression can be written as where . 
in order to compute the r.h.s .( [ vtmp ] ) , the usual procedure consists in introducing the transfer matrix whose eigenvalues are the expression for the matrix is given by it then follows that the partition function ( [ vtmp ] ) reads {11}+[{\cal v}^{n-1}]_{12}= ( e^{\beta j}+e^{-\beta j})^{n-1 } , \label{zetaising}\ ] ] where the subscripts indicate specific elements of the matrix .this explicit formula for the partition function allows us to derive an analytical expression for the probability of the system being in its ground state as a function of the temperature and of the energy splitting between the two lowest states .since the ground state energy for the hamiltonian ( [ hising ] ) is , we obtain finally , we can derive an analytical expression also for the probability of obtaining the correct logic output , in analogy with what we have already done in the numerical case .we need to determine the occupation probability of a generic state with , which corresponds to having the correct output , because the polarization of the cell ( output cell ) is the same as that of the first cell . to this purpose , we evaluate the following `` reduced '' partition function : where again . using the transfer matrix ( [ tram ] ) , it follows that {11}$ ] , and hence {11 } } { [ { \cal v}^{n-1}]_{11}+[{\cal v}^{n-1}]_{12}}= \frac{1}{2}\ , \left[1+\left(\tanh(\beta\delta e/2)\right)^{n-1}\right ] \label{porod2}\ ] ] the above derived analytical expressions have been used to compute and as a function of temperature for a chain of 6 cells , cell size nm , cell separation nm .results are presented with dashed lines in fig .[ fuenf ] , together with those obtained with the numerical technique ( solid lines ) . for temperatures below about 2 k ( those for which reasonably low error probabilities can be achieved ) the analytical model provides values that are in almost perfect agreement with those from the more detailed numerical approach .the situation differs at higher temperatures , because higher energy configurations , containing cells in states , start being occupied and are properly handled by the numerical model , while they are not at all included in the analytical approach . in particular , while for large values of the temperature the numerical tends to , as previously discussed , the analytical approaches the value , because the output cell can be in one of two states with the same probability .analogous considerations can be made for , which becomes extremely small ( ) for higher temperatures in the numerical case , while drops just to in the analytical case , since there are possible configurations . in fig .[ sechs ] , and are reported as a function of the ratio ( with a semilogarithmic scale ) for the analytical model ( thick solid line ) , and for the numerical model with ( thin solid line ) , ( dashed line ) , and ( dotted line ) .as expected , the agreement improves with increasing , because of the reduced relevance of the states . for of the order of a few units ,the error probability becomes very small and the analytical expression can be reliably used to evaluate it . 
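the closed-form expressions just derived are straightforward to evaluate; the sketch below implements them and adds a small bisection that inverts the correct-output probability for temperature, a rough version of the maximum-operating-temperature estimate used in the next part of the text. here delta_e is the splitting between the two lowest states, expressed in kelvin (i.e. already divided by the boltzmann constant), and n counts the cells including the driver; the target probability and the bracketing temperatures are illustrative choices of mine.

```python
import numpy as np

def p_ground_ising(n, delta_e, temperature):
    """ground-state probability of the two-state (Ising) wire approximation."""
    return (1.0 + np.exp(-delta_e / temperature)) ** (-(n - 1))

def p_correct_ising(n, delta_e, temperature):
    """probability of correct logic output: 0.5 * (1 + tanh(dE / 2kT)^(n-1))."""
    return 0.5 * (1.0 + np.tanh(delta_e / (2.0 * temperature)) ** (n - 1))

def max_operating_temperature(n, delta_e, target=0.99, t_lo=1e-3, t_hi=1e3):
    """largest T with p_correct_ising >= target (p_correct decreases with T)."""
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if p_correct_ising(n, delta_e, t_mid) >= target:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return t_lo

# example: a 6-cell wire with a 10 K splitting between the two lowest states
print(p_ground_ising(6, 10.0, 1.0), p_correct_ising(6, 10.0, 1.0))
print(max_operating_temperature(6, 10.0, target=0.99))
```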
in particular , the analytical expression allows us to provide estimates of the maximum operating temperature for a qca chain formed by a given number of cells .we have computed the maximum operating temperature allowing a given correct logic output probability , as a function of the number of cells : results are reported in fig .[ sieben ] for ( solid line ) , ( dashed line ) , ( dotted line ) and cell size nm , intercell separation nm .the maximum operating temperature , for a number of cells above a few tens , drops logarithmically , which leads to a linear behavior in the logarithmic representation of fig .[ sieben ] .we have developed both a numerical and an analytical approach to the investigation of the thermal dependence of qca wire operation .both methods are based on a semiclassical approach , in which electrons are considered as classical particles interacting via the coulomb force , with , however , the possibility of tunneling between the quantum dots belonging to the same cell .the electrostatic energy associated with each configuration has been evaluated and used for the calculation of the occupancies , via the partition function .numerical results have been derived for wires with six - state cells , which realistically reproduce the behavior of qca systems , provided that the confinement in each quantum dot is strong enough .the numerical procedure thus developed is general and is currently being applied to the investigation of thermal limitations for simple logic gates , including the effect of spurious , states .the analytical approach has allowed a detailed analysis of the error probability due to thermal excitations in arbitrarily long wires , generalizing the findings of previous studies , and the possibility of extending it to selected basic gates is being investigated .it is clear from our results that the operating temperature depends on the ratio of the energy splitting to .it could therefore be raised by increasing , which means reducing the dielectric permittivity or scaling down cell dimensions .as already mentioned , the silicon - on - insulator material system offers better perspectives of higher - temperature operation , due to the lower permittivity of silicon oxide .however , scaling down in any semiconductor implementation is limited by the increasing precision requirements , therefore a trade - off between manufacturability and operating temperature has to be accepted .implementations at the molecular level could provide better opportunities , due to the reduced dimensions , but their actual feasibility is still being assessed . c. s. lent , p. d. tougaw , and w. porod , appl .. lett . * 62 * , 714 ( 1993 ) .m. governale , m. macucci , g. iannaccone , c. ungarelli , j. martorell , j. appl .phys . , * 85 * , 2962 ( 1999 ) .p. d. tougaw and c. s. lent , j. appl. phys . * 75 * , 1818 ( 1994 ) . c. s. lent , p. d. tougaw , and w. porod , in `` the proceedings of the workshop on physics and computing , '' nov .17 - 20 , 1994 , dallas , tx , p. 1 .g. iannaccone , c. ungarelli , m. macucci , e. amirante , m. governale , thin solid films , * 336 * , 145 ( 1998 ) .f. e. prins , c. single , f. zhou , h. heidemeyer , d. p. kern , e. plies , nanotechnology , * 10 * , 132 ( 1999 ). m. girlanda , m. governale , m. macucci , g. iannaccone , appl ., * 75 * , 3198 ( 1999 ) .j. d. jackson , _ classical electrodynamics _( wiley , new york , 1962 ) , p. 20 . see e.g. j. r. baxter , _ exactly solved models in statistical mechanics _( academic press , london , 1982 ) .
we investigate the effect of a finite temperature on the behavior of logic circuits based on the principle of quantum cellular automata ( qca ) and of ground state computation . in particular , we focus on the error probability for a wire of qca cells that propagates a logic state . a numerical model and an analytical , more approximate , model are presented for the evaluation of the partition function of such a system and , consequently , of the desired probabilities . we compare the results of the two models , assessing the limits of validity of the analytical approach , and provide estimates for the maximum operating temperature .
a statistical mixture model with weighted components has underlying probability distribution : with and denoting the mixture parameters : the s are positive weights summing up to one , and the s denote the individual component parameters . (appendix [ sec : notations ] summarizes the notations used throughout the paper . )mixture models of -dimensional gaussians are the most often used statistical mixtures . in that case, each component distribution is parameterized by a mean vector and a covariance matrix that is symmetric and positive definite .that is , .the gaussian distribution has the following probability density defined on the support : where denotes the squared mahalanobis distance defined for a symmetric positive definite matrix ( , the precision matrix ) . to draw a random variate from a gaussian mixture model ( gmm ) with components ,we first draw a multinomial variate , and then sample a gaussian variate from . a multivariate normal variate drawn from the chosen component as follows : first , we consider the cholesky decomposition of the covariance matrix : , and take a -dimensional vector with coordinates being random standard normal variates : ^t ] . using legendre transform , we further have the following equivalences of the relative entropy : where is the dual moment parameter ( and ) .information geometry often considers the canonical divergence of eq .[ eq : cd ] that uses the mixed coordinate systems , while computational geometry tends to consider dual bregman divergences , or , and visualize structures in one of those two canonical coordinate systems .those canonical coordinate systems are dually orthogonal since , the identity matrix . for exponential family mixtures with a single component ( , ), we easily estimate the parameter . given independent and identically distributed observations , the maximum likelihood estimator ( mle ) is maximizing the likelihood function : for exponential families , the mle reports a unique maximum since the hessian of is positive definite ( \succ 0 $ ] ) : the mle is consistent and efficient with asymptotic normal distribution : where denotes the fisher information matrix : = \nabla ^2 f(\theta ) = ( \nabla^2 g(\eta))^{-1}\ ] ] ( this proves the convexity of since the covariance matrix is necessarily positive definite . ) note that the mle may be biased ( for example , normal distributions ) . by using the legendre transform ,the log - density of an exponential family can be interpreted as a bregman divergence : table [ tab : duality ] reports some illustrating examples of the bregman divergence exponential family duality ..some examples illustrating the duality between exponential families and bregman divergences.[tab : duality ] [ cols="^,^,^ " , ] * \0 . *initialization * : * * calculate global mean and global covariance matrix : * * , initialize the seed as * assignment * : + with the squared mahalanobis distance : .+ let be the cluster partition : .+ ( anisotropic voronoi diagram ) * \2 . 
*update the parameters * : + + * goto step 1 * unless local convergence of the complete likelihood is reached .update the mixture weights * : .+ * goto step 1 * unless local convergence of the complete likelihood is reached .the -mle++ initialization for the gmm is reported in algorithm [ algo : kmleppgmm ] .* choose first seed , for uniformly random in .* for to * * choose with probability + where .+ * * add selected seed to the initialization seed set : .we instantiate the soft bregman em , hard em , -mle , and -mle++ for the rayleigh distributions , a sub - family of weibull distributions .a rayleigh distribution has probability density where denotes the _ mode _ of the distribution , and the support .the rayleigh distributions form a -order univariate exponential family ( ) .re - writing the density in the canonical form , we deduce that , , , and .thus and .the natural parameter space is and the moment parameter space is ( with ) .we check that conjugate gradients are reciprocal of each other since , and we have ( i.e , dually orthogonal coordinate system ) with and .rayleigh mixtures are often used in ultrasound imageries . following banerjee et al . , we instantiate the bregman soft clustering for the convex conjugate , and . the rayleigh density expressed in the -parameterization yields . expectation .: : soft membership for all observations : + + ( we can use any of the equivalent , or parameterizations for calculating the densities . ) maximization .: : barycenter in the moment parameterization : + the associated bregman divergence for the convex conjugate generator of the rayleigh distribution log - normalizer is this is the itakura - saito divergence is ( indeed , is equivalent modulo affine terms to , the burg entropy ) . 1 . hard assignment .: : + voronoi partition into clusters : + 2 .-parameter update .: : + + go to 1 . until ( local ) convergence is met .weight update .: : + go to 1 . until ( local ) convergence is met .note that -mle does also model selection as it may decrease the number of clusters in order to improve the complete log - likelihood .if initialization is performed using random point and uniform weighting , the first iteration ensures that all voronoi cells are non - empty .a good initialization for rayleigh mixture models is done as follows : compute the order statistics for the -th elements ( in overall -time ) .those pivot elements split the set into groups of size , on which we estimate the mles .the -mle++ initialization is built from the itakura - saito divergence : k - mle++ : * choose first seed , for uniformly random in . * for to * * choose with probability + * * add selected seed to the initialization seed set : .ll : + & inner product ( e.g. 
, for vectors , for matrices ) + & exponential distribution parameterized using the -coordinate system + & support of the distribution family ( ) + & dimension of the support ( univariate versus multivariate ) + & dimension of the natural parameter space + & ( uniparameter versus multiparameter ) + & sufficient statistic ( ) + & auxiliary carrier term + & log - normalizer , log - laplace , cumulant function ( ) + & gradient of the log - normalizer ( for moment -parameterization ) + & hessian of the log - normalizer + & ( fisher information matrix , spd : ) + & legendre convex conjugate + : + & canonical natural parameter + & natural parameter space + & canonical moment parameter + & moment parameter space + & usual parameter + & usual parameter space + & density or mass function using the usual -parameterization + & density or mass function using the usual moment parameterization + : + & mixture model + & closed probability -dimensional simplex + & shannon entropy ( with by convention ) + & shannon cross - entropy + & mixture weights ( positive such that ) + & mixture component natural parameters + & mixture component moment parameters + & estimated mixture + & number of mixture components + & mixture parameters + : + & sample ( observation ) set + & cardinality of sets : for the observations , for the cluster centers + & hidden component labels + & sample sufficient statistic set + & likelihood function + & maximum likelihood estimates + & soft weight for in cluster / component ( ) + & index on the sample set + & index on the mixture parameter set + & cluster partition + & cluster centers + & cluster proportion size + & bregman divergence with generator : + & + & jensen diversity index : + & + : + & average incomplete log - likelihood : + & + & average complete log - likelihood + & + & geometric average incomplete likelihood : + & + & geometric average complete likelihood : + & + & average -means loss function ( average divergence to the closest center ) + & cdric archambeau , john aldo lee , and michel verleysen . on convergence problems of the em algorithm for finite gaussian mixtures . in _european symposium on artificial neural networks ( esann ) _ , pages 99106 , 2003 .arindam banerjee , inderjit dhillon , joydeep ghosh , and srujana merugu .an information theoretic analysis of maximum likelihood mixture estimation for exponential families . in _ proceedings of the twenty - first international conference on machine learning _ , icml , pages 5764 , new york , ny , usa , 2004 .acm .jason v. davis and inderjit s. dhillon .differential entropic clustering of multivariate gaussians . in bernhard scholkopf ,john platt , and thomas hoffman , editors , _ neural information processing systems ( nips ) _ , pages 337344 . mit press , 2006 .kittipat kampa , erion hasanbelliu , and jose principe .closed - form cauchy - schwarz pdf divergence for mixture of gaussians . in _ proceeding of the international joint conference on neural networks ( ijcnn ) _ , pages 2578 2585 , 2011 .michael kearns , yishay mansour , and andrew y. ng .an information - theoretic analysis of hard and soft assignment methods for clustering . in_ proceedings of the thirteenth conference on uncertainty in artificial intelligence _ ,uai , pages 282293 , 1997 .franois labelle and jonathan richard shewchuk .anisotropic voronoi diagrams and guaranteed - quality anisotropic mesh generation . 
in _ proceedings of the nineteenth annual symposium on computational geometry _ , scg 03 , pages 191200 , new york , ny , usa , 2003 .acm .james b. macqueen .some methods of classification and analysis of multivariate observations . in l.m. le cam and j. neyman , editors , _ proceedings of the fifth berkeley symposium on mathematical statistics and probability_. university of california press , berkeley , ca , usa , 1967 .frank nielsen , paolo piro , and michel barlaud .bregman vantage point trees for efficient nearest neighbor queries . in _ieee international conference on multimedia and expo ( icme ) _ , pages 878881 , new york city , usa , june 2009 . ieee .richard nock , panu luosto , and jyrki kivinen .mixed bregman clustering with approximation guarantees . in _ proceedings of the european conference on machine learning and knowledge discovery in databases _ , pages 154169 , berlin , heidelberg , 2008 .springer - verlag .paolo piro , frank nielsen , and michel barlaud .tailored bregman ball trees for effective nearest neighbors . in _european workshop on computational geometry ( eurocg ) _ , loria , nancy , france , march 2009 .
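since the algorithm listing above is interleaved with the notation table and the reference list, a compact sketch of the hard-assignment -mle loop for rayleigh mixtures is collected here. it follows the instantiation given above: sufficient statistic t(x) = x^2, moment parameter eta = 2 sigma^2, assignment by the itakura-saito divergence with an additive -log(weight) term, per-cluster mle update of eta, weight update by cluster proportions, and the order-statistics split as initialization. function names, the empty-cluster handling, and the synthetic example are mine; this is a sketch, not the authors' implementation.

```python
import numpy as np

def itakura_saito(p, q):
    """IS divergence d_IS(p, q) = p/q - log(p/q) - 1 (elementwise)."""
    r = p / q
    return r - np.log(r) - 1.0

def k_mle_rayleigh(x, k, n_iter=50):
    """hard k-MLE for a k-component Rayleigh mixture (sketch).

    x: 1-d array of positive observations; returns (weights, sigmas)."""
    t = x ** 2                                  # sufficient statistic t(x) = x^2
    # initialization by order statistics: split the sorted sample into k blocks
    # and take the per-block MLE of the moment parameter eta = E[x^2] = 2 sigma^2
    eta = np.array([b.mean() for b in np.array_split(np.sort(t), k)])
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # assignment: most likely weighted component, i.e. smallest
        # additively weighted divergence IS(t(x) : eta_j) - log w_j
        cost = itakura_saito(t[:, None], eta[None, :]) - np.log(np.clip(w, 1e-300, None))
        labels = cost.argmin(axis=1)
        # component update: per-cluster MLE in the moment parameterization
        for j in range(k):
            members = t[labels == j]
            if members.size:
                eta[j] = members.mean()
        # weight update: relative proportions of cluster points; an emptied
        # cluster gets weight 0 and is effectively dropped (model selection)
        w = np.bincount(labels, minlength=k) / float(x.size)
    return w, np.sqrt(eta / 2.0)                # eta = 2 sigma^2 for Rayleigh

# example on synthetic data drawn from two Rayleigh components
rng = np.random.default_rng(0)
sample = np.concatenate([rng.rayleigh(1.0, 600), rng.rayleigh(4.0, 400)])
print(k_mle_rayleigh(sample, 2))
```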
we describe -mle, a fast and efficient local search algorithm for learning finite statistical mixtures of exponential families such as gaussian mixture models. mixture models are traditionally learned using the expectation-maximization (em) soft clustering technique, which monotonically increases the incomplete (expected complete) likelihood. given prescribed mixture weights, the hard clustering -mle algorithm iteratively assigns data to the most likely weighted component and updates the component models using maximum likelihood estimators (mles). using the duality between exponential families and bregman divergences, we prove that the local convergence of the complete likelihood of -mle follows directly from the convergence of a dual additively weighted bregman hard clustering. the inner loop of -mle can be implemented using any -means heuristic, such as the celebrated lloyd's batched updates or hartigan's greedy swap updates. we then show how to update the mixture weights by minimizing a cross-entropy criterion, which amounts to setting the weights to the relative proportions of cluster points, and we reiterate the mixture parameter update and mixture weight update processes until convergence. hard em is interpreted as a special case of -mle in which the component update and the weight update are performed successively in the inner loop. to initialize -mle, we propose -mle++, a careful initialization of -mle that probabilistically guarantees a global bound on the best possible complete likelihood. keywords: exponential families, mixtures, bregman divergences, expectation-maximization (em), -means loss function, lloyd's -means, hartigan and wong's -means, hard em, sparse em.
it is well known that the sun is a unique star that strongly affects the geospace environment. any variation of solar activity may greatly affect many aspects of human life, from satellite operations, radio-based communication, and navigation systems to electrical power grids and oil pipelines. therefore, it is very important to understand when and how solar activity will take place in the near future. many people have made such endeavors (hiremath, 2008; hathaway, 2009; strong, julia, & saba, 2009; etc.). however, considerable uncertainties remain (pesnell, 2008) in describing the details of the forthcoming solar cycles. the film *2012* reflects public misgivings about the imminent impact of fierce solar eruptions around the year 2012. there are many methods for predicting solar cycle 24 and beyond. the solar cycle 24 prediction panel in october 2006 made a comprehensive list of prediction methods, including precursor, spectral, climatology, recent climatology, neural network, and physics-based methods (pesnell, 2008). however, the investigation of the periodicity of solar activity is the groundwork for solar cycle prediction. in previous works, many kinds of periodic modes have been found from the analysis of different solar proxies (e.g., yearly averaged sunspot number, cosmogenic isotopes, historic records, radiocarbon records in tree-rings, etc.), for example the 11-yr solar cycle, a 53-yr period (le & wang, 2003), a 78-yr period (wolf, 1862), an 80-90 yr period (gleissberg, 1971), the 65-130 yr quasi-periodic secular cycle (nagovitsyn, 1997), a 100-yr periodic cycle (frick et al., 1997), a 101-yr cycle (le & wang, 2003), a 160-270 yr double century cycle (schove, 1979), and the 203-yr suess cycle (suess, 1980). otaola & zenteno (1983) proposed that long-term cycles in the ranges 80-100 and 170-180 yr certainly exist. the sunspot number is the most commonly predicted index of solar activity. the number of solar flares, the number of coronal mass ejections, and the amount of energy released are well correlated with the sunspot number. the present work applies the annual averaged relative sunspot number (asn) during 1700-2009 to study the long-term variations and to draw some meaningful insights into the forthcoming solar activity. this paper is arranged as follows: section 2 presents the investigation of the periodicity of solar activity and its main features. section 3 gives some implications of the solar cycles, such as the extrapolation of the forthcoming schwabe cycle 24, the distribution of solar flares, and the possible origin of the solar cycles. section 4 contains the conclusions and some discussion. it has been realized for a long time that solar activity is tightly connected with the solar magnetic field. the energy released in solar eruptive processes comes from the magnetic field. hence, variations of the solar magnetic field are a physically reasonable indicator of solar activity. the sunspot number is a perceptible manifestation of the solar magnetic field and can be regarded as a commonly predicted solar activity index. so, in this work, we adopt the asn during 1700-2009 to investigate the periodicity of long-period solar activity. the data set is downloaded from the internet: http://sidc.oma.be/sunspot-data/.
the upper panel in fig.1 presents the profile of asn during 1700 2009 , and the lower panel is a fourier power spectra of the fast fourier transformation ( fft ) from asn . here presents 3 obvious spectral peaks which indicate that the strongest periodicity is occurred at 11 years ( ) , the second strongest periodicity is at 103 years ( ) , and the mildly periodicity is at a period of 51.5 years ( ) . the most remarkable feature in fig.1 is the solar cycles with about 11-yr periods , it is the well - known schwabe cycle . as asn reaches to the minimum 2.9 in 2008 , and then increases slightly to 3.1 in the year of 2009 .after coming into 2010 , it becomes stronger obviously than the past two years , there occurred several goes m - class flares in the first two months .so we may confirm that the end of solar cycle 23 and the start of solar cycle 24 is occurred around 2008 .there are 28 schwabe cycles during 1700 2009 .they are numbered by an international regulation since 1755 .the solar cycle 1 started from the year of 1755 , the solar cycle 23 ended around 2008 . during 1700 1755 , there are 5 solar schwabe cycles .we may numbered them as , , , , and respectively , which presented in the upper panel of fig.1 . in order to investigate the profile of each solar schwabe cycle more detail to get more useful insights, we may define several parameters : \(a ) rise - time ( ) , defined as from the starting minimum to the maximum of a solar cycle in unit of year ; \(b ) decay - time ( ) , defined as from the maximum to the next minimum of a solar cycle in unit of year ; \(c ) the profile of solar schwabe cycles is always asymmetric , i.e. , taking less time to rise to the maximum than reaching to the next minimum .we may define an asymmetric parameter as ( ) . when , the profile is symmetric ; is left asymmetric , is right asymmetric ; \(d ) maximum asn ( ) , which may represents the asn amplitude of each schwabe cycle .table 1 lists all parameters of schwabe cycles since 1700 .the last two lines in the table present the mean value and the standard deviation of above parameters ..list of characteristic parameters of solar schwabe cycles during 1700 2009 . , , , p , , is the start time , rise - time , decay - time , period , asymmetric parameter , and the maximum asn , respectively .av and dev are the averaged value and the standard deviation of the related parameters , respectively .the star marks the bottom of the each secular cycle . 
[ cols="<,^,^,^,^,^,^,^",options="header " , ] table 1 indicates that the rise - time of schwabe cycles is in range of 3 7 years ( the longest rise - time is 7 years in cycle 7 ) , with the averaged length of 4.4 years , and the standard deviation of 1.2 years ; the decay - time of schwabe cycles is in range of 3 11 years ( the longest decay - time is 11 years in cycle 4 ) , with the averaged length of 6.6 years , and the standard deviation of 1.2 years ; the period of schwabe cycles is in range of 9 14 years , with the averaged length of 11 years , and the standard deviation of 1.3 years .the distribution of asn amplitudes of schwabe cycles shows clearly the existence of 103-yr cycle , we call such long - term cycle as secular cycle .gleissberg firstly implied the existence of secular solar cycle , so we also call it as gleissberg cycle ( gleissberg , 1939 ) .in fact , the secular cycle is consistent with the appearance rate of the grand minima , such as the sp minimum around 1500 , the maunder minimum around 1700 ( 1645 1715 , eddy , 1983 ) , the dalton minimum around 1800 ( 1790 1820 ) . from the distribution of asn amplitudes of schwabe cycleswe may also find that around 1900 ( schwabe cycle 14 ) seems also a grand minimum .in other word , each of the grand minima is possibly occurred in a vale between two secular cycles . in order to show clearly the secular cycles, we make a fitted empirical function with sinusoidal shape by using a method similar to the square - least - method : at first we assume the sinusoidal function in following formation : here , represents the time from 1700 ( with unit of year ). then let the following sum to become minimum : ^{2}\rightarrow min.\ ] ] in above equations , and present the values of the fitted empirical function and the observations of asn , respectively .as equation ( 1 ) is a nonlinear function , we could nt obtain the true values of parameters a , b , c , d , and e from the standard square - least - method .however , we may try to list a series of [ a , b , c , d , e ] and calculate the values of , respectively .then find out the minimum value of and the corresponding parameters of [ a , b , c , d , e ] .the thick dot - dashed curve in the upper panel of fig.1 is the fitted empirical function , which can be expressed as : + 0.06y.\ ] ] from the empirical function we may find that the period of long - term variation of asn is 103 years , which is very close to the secular cycle .we may numbered the secular cycles as ( includes the schwabe cycle e a , and 1 5 ) , ( includes the schwabe cycle 6 14 ) , ( includes the schwabe cycle 15 24 ) , and ( after the schwabe cycle 24 ) since 1700 , respectively marked in fig .1 . at presentthe sun is in a vale between and .the last term in the right hand of equ.(3 ) implies that secular cycles have a gradually enhancement , and the sun may have a tendency to become more and more active at the timescale of several hundred years .many previous studies also presented the evidences of the secular cycles ( nagovitsyn , 1997 ; frick et al , 1997 ; le & wang , 2003 ; bonev , penev , & sello , 2004 ; hiermath , 2006 , etc . ) .the grand minima ( e.g. , sp minimum , maunder minimum , dalton minimum , etc ) implies that the sun might have experienced the dearth of activity in its evolutionary history . andthere is no complete consensus among the solar community whether such grand minima are chaotic or regular .this work may confirm the periodicity of the solar secular cycles . 
from the upper panel of fig.1we can not get the obvious evidence of the cycle with period of 51.5 yr . however , the evidence is very strong in the fourier power spectra in lower panel of fig.1 .in fact , from the upper panel of fig .1 we may find something of that , around each peak of the secular cycle , the schwabe cycles are in mildly weak amplitudes in asn .secular cycles seem to segment into two sub - peaks .for example , the solar schwabe cycle a and 1 around the peak of , the solar schwabe cycle 10 around the peak of , and the solar schwabe cycle 20 around the peak of .possibly , these facts are the indicator of the existence of component .le & wang ( 2003 ) investigated the wavelet transformation of asn series from 1700 2002 , and found the evidence of solar cycles with period of 11-yr , 53-yr , and 101-yr . in this work ,the asn series is spanned from 1700 2009 , and we rectify the periods as 11-yr , 51.5-year , and 103-yr .our results are very close to that of le & wang ( 2003 ) .additionally , we find that there is an interesting phenomenon : is obviously departed from any integers ; however , is fitly equal to an integer of 2 .this evidence shows that and are originally connected with each other , and seems to be a second harmonics of . however , there is no such relationships between and , .table 1 presents another interesting feature : most of the asymmetric parameters are less than 1.00 ( there are 22 schwabe cycles with among the total 28 cycles , and the proportion is 78.5% ) , the averaged value of asymmetric parameters is about 0.722 , and this implies that most schwabe cycles are left asymmetric .they have rapidly rising phases and slowly decay phases . andthe cycle evolution can not be modelled by some simple amplitude - modulated sinusoids .such phenomenon is called as waldmeier effect ( 1961 ) . among the total 28 cycles ,there are only 3 schwabe cycles with symmetric profiles , and 3 schwabe cycles with right asymmetric profiles .the symmetric cycles ( no.d , no.5 , and no.16 ) are very close to the vale of the secular cycles .the no.7 cycle is a bizarrerie for the super long rise - time ( 7 years ) and super short decay - time ( 3 years ) with a super large asymmetric parameter ( ) .additionally , there is an anti - correlated relationship between asymmetric parameter and maximum asn among 28 solar schwabe cycles .the correlate coefficient is -0.55 .i.e. the stronger the solar schwabe cycle , the more left asymmetric the cycle profile .it is very important to forecast the forthcoming solar cycles .many people make great efforts on this problem ( pesnell , 2008 ; hiremath , 2008 ; wang et al , 2009 ; strong & saba , 2009 ; hathaway , 2009 ; etc ) . according to the main features of long - term solar activity periodicity, we may also make an extrapolation of the forthcoming solar cycles .firstly , we note that solar schwabe cycle e , 5 , and 14 are located in the bottom of secular cycles , and have rise - time of 46 years . at the same time , the solar schwabe cycle 24 is also possibly seated in a bottom between and .then , it is reasonable to suppose that the rise - time of the solar schwabe cycle 24 will be in the range of 4 6 years , i.e. 
, its maximum will occur around 2012 2014 .the another important fact is that all durations of the three bottom solar schwabe cycle ( cycle e , cycle 5 , and cycle 14 ) are 12 years , which are relatively long duration schwabe cycles .based on the similarity , we may deduce that the length of schwabe cycle 24 will also last for about 12 years , which is longer a bit than the normal periods of solar schwabe cycles . according to variations of the magnitude of solar cycles ( expressed as asn ), we can make some extrapolations for the forthcoming solar cycles . from table 1 , we know that the averaged magnitude of asn among the 28 solar schwabe cycles during 1700 2009 is 105.9 with a standard deviation of 37.8 , and the three bottom cycles ( cycle e , 5 , and 14 ) have the asn magnitude from 47.5 to 63.5 . from the above investigations we know that solar schwabe cycle 24 may also be a bottom cycle, then we have no other reason to suppose that solar schwabe cycle 24 will have an asn magnitude of exceeding the range of 47.5 63.5 , it is a really relative weak solar schwabe cycle . however , this value is lower than the most results listed in the work of pesnell ( 2008 ) , but it is consistent with the result of badalyan , et al ( 2001 ) .2 plotted the extrapolation of the forthcoming solar schwabe cycles by the similarity of the trend - line induced from fig.1 .it shows that the position of solar schwabe cycle 24 is very close to the valley between the secular cycle and , which is very similar to the solar schwabe cycle 5 in the valley between and , and the solar schwabe cycle 14 in the valley between and .so it is very reasonable to extrapolate that the solar schwabe cycle 24 will be similar to that of the solar schwabe cycle 5 and cycle 14 .this similarity implies that the schwabe cycle 24 will be a relatively weak activities .based on these similarities and equ.(1 ) , we may plot the extrapolated asn of the subsequent solar schwabe cycles and the secular cycle in fig.2 . from these extrapolations, we may find that solar schwabe cycle 24 will reach to the apex in the year of about 2012 - 2014 , and this cycle may last for about 12 years or so .it will be a long , but relatively weak schwabe cycle . 
when and what kind of solar flare events will occur is also an intriguing problem .fig.3 presents comparisons between asn and the distributions of the appearance rate of solar flares and the annual averaged solar radio flux ( arbitrary unit ) at frequency of 2.84 ghz around the solar schwabe cycle 23 .the appearance rate of c - class , m - class , and x - class ( limit from x1.0 to x9.9 ) flares are presented by their annul flare numbers , and the appearance rate of the super - flares ( here we define the super - flare as x10.0 ) are presented by their related goes soft x - ray class .the vertical dashed line marks the time of magnetic maximum of solar schwabe cycle 23 .the 2.84 ghz solar radio flux is observed at chinese solar broadband radiospectrometer ( sbrs / huairou ) ( fu et al , 2004 ) .the data set of the annul flare numbers is compiled from goes satellite at soft x - rays ( http://www.lmsal.com/sxt/homepage.html ) .from fig.3 we may find an obvious tendency : the more powerful solar flares are inclined to occur in the period after the maximum of solar schwabe cycle ( presented by the two tilted - dotted lines ) .for example , the asn of solar schwabe cycle 23 reaches to its maximum in about 2000 , and the annual number of c - class flares and m - class flares reach to the maximum in about 2001 , while the annual number of x - class flare reaches to its apex in about 2002 , the most concentration of the super - flares is occurred in 2003 . during the whole schwabe cycle 23 ( 1996 2008 ) there are 12995 c - class flares , 1444 m - class flare , 119 x - class flares , and 6 super - flares . among themthere are 7414 c - class flares ( 57.1% ) , 950 m - class flare ( 65.8% ) , 80 x - class flares ( 67.2% ) and all the super - flares are occurred after the year of asn apex ( 2000 ) of the cycle . in 2005 , which is very close to the magnetic minimum of cycle 23 , there are 18 x - class flares occurred .additionally , the annual averaged solar radio flux ( in panel b ) at frequency of 2.84 ghz are very similar to the profile of the annual c - class flare numbers .generally , the solar radio emission at 2.84 ghz is mainly associated with the non - thermal eruptive processes .it shows that all of the 6 super - flares ( ) occurred in the decay phase of the schwabe cycle 23 , including the the largest flare event ( x28 , 2003 - 11 - 03 ) recorded in noaa so far .these facts imply that the stronger flare events are inclined to occur in the decay phase of the schwabe cycle .however , so far we can not make a doubtless conclusion because we have no enough reliable observation data of the other schwabe cycles .the above characteristics of the solar flare distribution in solar schwabe cycles is also implied the asymmetric properties of the schwabe cycles .however , it is most intricate that most of the powerful flares are occurred not in the rising - phase but in the decay - phase . in this work, we make an assumption that future behaviors of the solar activity in upcoming several decades years can be deduced from the averaged behaviors in the past several hundred years . 
andthis can be accepted because the several hundred - year is so much short related to the life - time of the sun when it is at the main sequence .we may regard the sun as a steady - going system which will run according to its averaged behaviors of its past several hundred years and last for several hundred years again .the time - scales of the solar 11-yr schwabe cycle and 103-yr secular cycles are much shorter than the diffusion time - scale of solar large - scale global magnetic field structures ( about yr , stix , 2003 ) , and much larger than the solar dynamical time - scale ( for example , the 5-min oscillations , etc ) .the origin of these cycles is a remarkable unsolved tantalizing problem .we need to explore some theoretical models which may carry the energy from the interior to the surface and release in solar atmosphere in some periodic forms . presently , there are two kinds of theoretical models for solar cycles , one is turbulent dynamo models , the other is mhd oscillatory models. however , both of them have much difficulties to interpret the main features of solar cycles reliably ( hiremath , 2009 ) . as for the 11-yr schwabe cycle , in spite of much of difficulties , possibly dynamo theory is the most popular hypothesis to explain the generations ( babcock , 1961 ; leighton , 1964 , 1969 ) .this model finds its origin in the tachocline where strong shearing motions occur in the solar internal plasma , store and deform the magnetic field , and can give a semi - empirical model of solar 11-yr cycle and reproduced the well known sunspot butterfly diagrams . at the same time, there are several other approaches .polygiannakis & moussas ( 1996 ) assume that the sunspot - cycle - modulated irradiance component may be caused by a convective plasma current , this current can drive a nonlinear rlc oscillator. this model can describe the shape and the related morphological properties of the solar cycles , and give some reasonable interpretations to the long period inactive maunder minimum .the closeness between the 11-yr schwabe cycle and the 11.86-yr period of jupiter has been noted for a long time . andthis let us to speculate that planetary synodic period may resonate with solar activity ( wood , 1975 ) .grandpierre ( 1996 ) proposed that the planetary tides of co - alignments can drive the dynamo mechanism in the solar interior and trigger solar eruptions .they calculate the co - alignments ( conjunction and opposition ) of the earth , venus , and jupiter and find that their co - alignment periods are in the range of 8.7588 13.625 yr , the averaged value is around 11.2 yr , being very close to the observed schwabe cycle of 11-yr .however , de jager & versteegh ( 2005 ) compare the accelerations due to planetary tidal force , the sun s motion around the gravitational center of the solar system , and the observed acceleration at the level of tachocline , and find that the latter are by a factor of about 1000 larger than the former two , and assert that the planetary tidal force can not trigger the dynamo mechanism observably . however , in my opinion , as the plasma system is always very brittle , even with a very small perturbation , a variety of instabilities are also very easy to develop , accumulate , and trigger a variation as the solar schabe cycle . 
hence , at present stage , it is very difficult to confirm which factor is the real driver of solar schwabe cycles .the most bewildering is the harmonic relationship between the long - term cycles and .this may imply that solar long - term active cycles have wave s behaviors .however , so far , we do nt know more natures of this kind of cycles .the dynamo theory can give some reasonable interpretation to the 11-yr solar schwabe cycle , but it can not present even if a plausible explanation to the 51.5-yr cycle and 103-yr secular cycle so far . in 1943 , alfven assumed that solar large - scale dipole magnetic field has the axis coincide with the rotation axis , and the magnetic disturbance in the interior travels along the field line to the surface at alfven speed .this magnetic disturbance will excite mhd oscillation and transport to the whole sun , the travel time of the mhd oscillation is about 70 80 yr .this time scale is neither agree with the 11-yr period , nor with the time scale of the secular cycles .many people believe that the 11-yr solar schwabe cycle is originated from the tachocline which is a thin shear layer of strong differential rotation motion at the base of the solar convection zone ( dikpati , 2006 ) .helioseismology suggests that the tachocline in thin of the order of 1% of the solar radius ( christensen - dalsgaard & thompson , 2007 ) .it is well - known that the kelvin - helmholtz instability is very easy to develop around a layer with strong shear motion . from the work of gilliland ( 1985 ) , we know that the time scale of kelvin - helmholtz instability for the whole sun is yr. then it is possible to assume that the time scale of kelvin - helmholtz instability around the thin tachocline can be reduced to the order of 100-yr .rashid , jones , & tobias ( 2008 ) pointed out that the evolution of the hydrodynamic instability of the slow tachocline region will occur on timescale of hundred years .however , we need much of investigations to confirm this assumption and to understand the long - term solar cycles .the origin of long - term solar cycles is still a big unsolved problem .from the above analysis and estimations , we obtain the following conclusions : \(2 ) the solar schwabe cycle 24 is in a vale between the two secular cycles ( g3 , and g4 ) , it will be a relatively weak and long active cycle , which may reach to its apex in about 2012 - 2014 and last for about 12 years ; \(3 ) most solar schwabe cycles are left asymmetric , they have rapidly rising phases and slowly decay phases .the most intriguing is that most of the solar powerful flares occurred in the decay - phase of the solar schwabe cycle .\(4 ) the secular cycle is possibly associated with the solar inner large scale motions as well as the dynamo processes can be account for the 11-yr schwabe cycles .however , there are much of works need to do to understand the mechanism of the secular cycles .the author would like to thank the referee s valuable comments on this paper .this work was supported by nsfc grant no .10733020 , 10873021 , cas - nsfc key project ( grant no .10778605 ) , and the national basic research program of the most ( grant no .2006cb806301 ) .babcock , h.w . : 1961 , _ apj _ * 133 * , 572 .badalyan , o.g . ,obridko , v. , & sykora , n.j . : 2001 , _ solar phys _ * 199 * , 421 .bonev , p.p . ,penev , k.m . , & sello , s. : 2004 , _ apj _ * 605 * , l81 .christensen - dalsgaard , j. , & thompson , m.j . : 2007 , in solar tachocline , ed . d.w .hughes , r. rosner , & n.o . 
, weiss ( cambridge university press ) .de jager , c. , & versteegh , g.j.m .: 2005 , _ solar phys _ * 229 * , 175 .dikpati , m. : 2006 , _ adv space res _ * 38 * , 839 .eddy , j.a . : 1983 , _ solar phys _ * 89 * , 195 .frick , p. , galyagin , d. , hoyt , d.v ., et al . : 1997 , _ astron astrophys _ * 328 * , 670 .fu , q.j . ,ji , h. , qin , z.h . , & et al : 2004 , _ solar phys _ * 222 * , 167 .gilliland , r.l . : 1985 , _ apj _ * 290 * , 344 .gleissberg , w. : 1939 , _ observatory _ * 62 * , 158 .gleissberg , w. : 1971 , _ solar phys _ * 21 * , 240 .grandpierre , a. : 1996 , _ astrophys .space sci . _ * 243 * , 393 .hathaway , d.h . : 2009 , _ space sci rev _ * 144 * , 401 .hiremath , k.m . : 2006 , _ astron astrophys _ * 452 * , 591 .hiremath , k.m . : 2008 , _ astrophys space sci _ * 314 * , 45 .hiremath , k.m . : 2009 , arxiv0909.4420[astro - ph.sr ] ., & wang , j.l . : 2003 , _ chin j astron astrophys _ * 3 * , 391 .leighton , r.b . : 1964 , _ apj _ * 140 * , 1547 .leighton , r.b . : 1969 , _ apj _ * 156 * , 1 .nagovitsyn , yu a. : 1997 , _ astron lett _ * 23 * , 742 .otaola , j.aq . , &zenteno , g. : 1983 , _ solar phys _ * 89 * , 209 .pesnell , w.d . : 2008 , _ solar phys _ * 252 * , 209 .polygiannakis , j.m . , & moussas , c.p . : 1996 , _ solar phys _ * 163 * , 193 .rashid , f.q . ,jones , c.a ., & tobias , s.m . : 2008 , _ astron astrophys _ * 488 * , 819 .schove , d.j . : 1979 , _ solar phys _ * 63 * , 423 .stix , m. : 2003 , _ solar phys _ * 212 * , 3 .strong , & saba , j. l.r .: 2009 , _ adv space res _ * 43 * , 756 .suess , h.e . : 1980 , _ radiocarbon _ * 20 * , 200 .waldmeier , m. : 1961 , _ the sunspot activity in the years 1610 - 1960 _ , zurich schulthess and company ag .wang , j.l . , zong , w.g ., le , g.m ., et al : 2009 , _ res .astron astrophys ._ * 9*,133 .wolf , r. : 1862 , _ astro mitt zurich _ , 209 .wood , r.m . : 1975 , _ nature _ * 255 * , 312 .
based on an analysis of the annual averaged relative sunspot number (asn) during 1700-2009, three kinds of solar cycles are confirmed: the well-known 11-yr cycle (schwabe cycle), a 103-yr secular cycle (numbered g1, g2, g3, and g4 since 1700), and a 51.5-yr cycle. from similarities between cycles, an extrapolation of the forthcoming solar cycles is made; it indicates that solar cycle 24 will be a relatively long and weak schwabe cycle, which may reach its apex around 2012-2014 in the valley between g3 and g4. additionally, most schwabe cycles are asymmetric, with rapidly rising phases and slowly decaying phases. comparisons between the asn, the annual flare numbers in different goes classes (c-class, m-class, x-class, and super-flares, where a super-flare is defined as x10.0 or above), and the annual averaged radio flux at a frequency of 2.84 ghz indicate a tendency: the more powerful the flare, the later it takes place after the onset of the schwabe cycle, and the most powerful flares take place in the decay phase of the schwabe cycle. some discussions on the origin of solar cycles are presented.
the general theory of relativity is thought to provide the correct classical description for the interaction of gravity with matter .this description is embodied in a system of partial differential equations on a four dimensional manifold , the so - called _ einstein equations _ , which relate the ricci curvature of an unknown metric to the energy - momentum tensor of matter : to complete the classical picture of a physics based on a collection of fields satisfying a closed system of equations , one must also consider the laws which govern the evolution of the matter fields generating the energy - momentum tensor on the right hand side of .( one important special case is when there is no matter , the so - called vacuum .then with vanishing right hand side is a closed system of quasilinear hyperbolic equations . ) in general , one arrives at quite complicated systems of equations .however , from the perspective of classical physics , all phenomena are in principle described by the solutions of such a system . moreover , for these systems , the initial value problem is natural , just as in classical dynamics .thus , from one point of view , the general theory of relativity is a classical physical theory that can be studied mathematically in parallel with other field theories of nineteenth - century classical physics .indeed , the equations of general relativity exhibit similar local behavior with other equations of evolution as regards , for instance , the issues of local existence and uniqueness of solutions to the initial value problem .when one turns , however , to the initial value problem in the large , the einstein equations present features that have no analogue in other typical equations of mathematical physics .the subject of this talk will be what appears , at least at first sight , as the most pathological of these features , namely the possibility of loss of uniqueness of the solution of the initial value problem _ without loss of regularity ._ this possibility is at the center of what is known as the _ strong cosmic censorship conjecture _ formulated by penrose .the reason why the theory of the initial value problem in the large for the einstein equations is richer than for other non - linear wave equations is that the global geometry of the characteristics is not constrained _ a priori _ by any other structure .this geometry , which corresponds precisely to the conformal geometry for the vacuum equations , is _ a priori _ unknown .it turns out that many features of the initial value problem for hyperbolic equations that one takes for granted actually depend on certain global properties of the geometry of the characteristics ; the question of uniqueness indicated above is one of these .the best way to gain some intuition for what kind of conformal geometric structure develops in the course of evolution in general relativity and what are the implications of this structure is to carefully examine the special solutions of the theory .in fact , almost all conjectures and intuition regarding the theory in the end derives from simple properties of such solutions . 
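for reference, the einstein equations mentioned above take the standard form shown below (written here in geometrized units g = c = 1, a conventional choice of mine); the trace-reversed version is the one that expresses the ricci curvature directly in terms of the energy-momentum tensor.

```latex
% einstein field equations, geometrized units (G = c = 1)
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi\, T_{\mu\nu}
\qquad\Longleftrightarrow\qquad
R_{\mu\nu} = 8\pi\left( T_{\mu\nu} - \tfrac{1}{2}\, T\, g_{\mu\nu} \right)
```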
moreover, since our focus of interest is global _ geometric _ structure , there is no substitute in building intuition than a good pictorial representation .this talk will rely very much on such `` pictures '' .it should be noted , however , that in the spherically symmetric context in which we shall be working , these `` pictures '' , besides conveying intuition , also carry complete and precise information and can be treated on the same level as symbols or formulas .the assumption of spherical symmetry and associated pictorial representations will be carefully discussed in the next section .we will then proceed to examine a series of special solutions which will lead to a particular initial value problem .finally , theorems describing the solutions of the initial value problem will be formulated and their proofs will be discussed . in regard to uniqueness , it turns out that there is always a spacetime which can be uniquely associated to initial data - manifold , along with a symmetric -tensor satisfying the constraint equations that would arise if were to be the second fundamental form of realized as a hypersurface in a ricci flat -manifold . ] .this is the so - called _ maximal domain of development _it is the `` biggest '' spacetime which admits the given initial hypersurface as initial data and is at the same time _ globally hyperbolic _, i.e. all inextendible causal curves intersect the initial hypersurface precisely once .this latter property ensures that the domain of dependence property holds .the question of uniqueness in general relativity is thus the issue of the _ extendiblity _ of this maximal domain of development .if it is extendible , then the solution is not unique .since , as noted earlier , there is really no substitute for a pictorial representation , we defer further discussion of this till later on .the current state of affairs in the theory of quasilinear hyperbolic partial differential equations in several space variables is such that global , large data problems appear beyond reach . for there to be any hope of making headway, it seems that some sort of reduction must be made to a problem where the number of the independent variables is no more than two . for hyperbolic equations of evolution ,such reductions in general are accomplished by considering symmetric solutions , or equivalently , symmetric initial data . in general relativity ,symmetry assumptions are formulated in terms of a group which acts by isometry on the spacetime and preserves all matter .the only -dimensional symmetry group that is compatible with the notion of an isolated gravitating system , i.e. that can act on asymptotically flat spacetimes , is .solutions invariant under such an action are called _ spherically symmetric_. as we shall see below , most of the expected phenomena of gravitational collapse of isolated gravitating systems , and the fundamental questions that these phenomena pose , can be suggested by the spherically symmetric solutions of various einstein - matter systems .moreover , the conformal structure of these solutions , which is the essential ingredient for the phenomena we wish to discuss , can be completely represented on the blackboard .( or on paper ! ) the reason for this is simple : the space of group orbits can be given the structure of a -dimensional lorentzian manifold .restricting to which are _ maximal domains of development _ of initial data , it follows that these can be globally conformally represented as bounded domains in -dimensional minkowski space. 
the images of such representations are called _ penrose diagrams _ ; from these , the conformal geometry can be immediately read off as the characteristics are just the lines at or radians from the horizontal : gain some familiarity with these diagrams , it is perhaps best to begin with -dimensional minkowski space from this point of view , i.e. with the penrose diagram of the maximal domain of development of minkowski initial data .here the diagram is as follows : the referred to above is a function on defined to be a multiple of the square root of the area of the group orbit corresponding to the points of .the line labelled is thus the axis of symmetry .the line labelled future null infinity " is not part of the spacetime but should be thought of as a `` boundary '' at infinity .the same applies to its two endpoints , `` spacelike infinity '' and `` future timelike infinity '' .the latter corresponds to the `` endpoint '' of all inextendible future timelike geodesics .the above minkowski space is of course future causally geodesically complete , i.e. all causal curves can be extended to infinite affine parameter .thus we have the analogue of global existence and uniqueness . that these properties of minkowski space are stable to small perturbations ( _ without _ any symmetry assumptions )is a deep theorem of christodoulou and klainerman .having understood the conformal diagram of minkowski space , we turn to a more interesting solution : the schwarzschild solution .this is actually a one - parameter family of solutions ( the parameter is called mass and denoted by ) which contains minkowski space ( the case where ) .as it is a spherically symmetric vacuum solution , its non - triviality in the case must be generated by topology .any cauchy hypersurface has two asymptotically flat ends and topology .`` downstairs '' , this corresponds to a line with two endpoints , not intersecting an axis of symmetry . for convenience , we will choose a time - symmetric initial hypersurface .the maximal development of this `` schwarzschild '' initial data then looks like : the point depicted where is a minimal surface `` upstairs '' in the initial hypersurface .as the initial hypersurface was chosen to be time symmetric , this minimal surface is also what is called marginally trapped .the `` ingoing '' and `` outgoing '' null cones emanating from this surface , which correspond to the two null rays through `` downstairs '' , can thus not reach `` future null infinity '' .( this is related to the so - called `` singularity theorems '' of penrose . )moreover , all timelike geodesics emanating from the point reach in finite time the curve .this curve is not an axis of symmetry but a singularity !that is to say , there is no extension of the spacetime above through with a continuous lorentzian metric .this singular behavior of the schwarzschild solution may at first appear to be an undesirable feature .indeed , historically , it was first considered exactly as such .but it turns out in fact that the behavior outlined above would provide the `` ideal '' scenario for the end - state of gravitational collapse .this has to do with two specific features of this solution : 1 .labelling the null rays emanating from the minimal surface as , it turns out that there is a class of timelike observers , namely those who do not cross , who can observe for infinite time and whose causal past is completely regular .the singularity is hidden inside a black hole , and is called the event horizon . 
to be more precise about the `` completeness '' property of the region outside the black hole ,fix some outgoing null geodesic which intersects future null infinity and parallel translate a conjugate null vector ( i.e. a null vector in the other direction ) .the affine length of the null curves joining this null geodesic and , where the affine parameter is determined by the aforementioned vector , goes to .in particular , future null infinity can be thought to have infinite affine length .one says that the solution pocesses a `` past complete future null infinity '' or a complete domain of outer communications . 2 .the above spacetime is future inextendible as a metric .thus , according to the discussion in the introduction , this means that the schwarzschild solution is unique even in the class of very low regularity solutions .the significance of this fact will become clear later .at this point it should be noted that while the schwarzschild solution indeed provides intuition about black holes , it can not give insight as to whether these can occur in evolution of data where no trapped surfaces are present initially , i.e. whether the kind of behavior outlined above is related in any way to the endstate of gravitational collapse . that properties 1 and 2 above are indeed general properties of solutions was proven by christodoulou for the spherically - symmetric einstein scalar field equations : for generic solutions of the initial value problem , the penrose diagram obtained by christodoulou is as follows : in , however , christodoulou explicitly constructs solutions with conformal diagram : these are so - called `` naked singularities '' .one might ask why should one consider the coupling with the scalar field . the natural case to consider first , it would seem , is the vacuum . a classical theorem of birkhoff , however , states that the only spherically symmetric vacuum solutions are schwarzschild .thus , matter _ must _ be included to give the problem enough dynamical degrees of freedom in spherical symmetry .the scalar field is in some sense the simplest , most natural choice ., etc ] as far as property 1 is concerned , christodoulou s results are the best evidence yet that this is indeed a property of `` realistic '' graviational collapse .it turns out however that there is another `` competing '' set of evidence that indicates that the behavior of christodoulou s solutions related to property 2 does not represent `` realistic collapse '' .( remember that property 2 is the central question for us , as this is what determines the notion of uniqueness . )this evidence is provided again by the intuition given by special solutions .one may consider the schwarzschild family of solutions as embedded in a larger , 2-parameter family of solutions called the _kerr solutions_. here the parameters are called mass and angular momentum , and schwarzschild corresponds to vanishing angular momentum . for all non - vanishing values of angular momentum ,the internal structure of the black hole is completely different , and , as we shall see momentarily , much more `` problematic '' , as property 2 will fail .thus the introduction of even an arbitrarily small amount of angular momentum a phenomenon that can not be `` seen '' by spherically - symmetric models seems to change everything and cast doubt on the conclusions derived from the spherically symmetric einstein - scalar field model . 
to summarize our `` unhappy '' situation, it seems that the phenomenon which plays a fundamental role in the issue we want to study is incompatible with the assumptions we have to make in order to render it mathematically tractable .it would seem that understanding the black hole region of realistic gravitational collapse using a spherically symmetric model is a lost cause .fortunately , there is a `` solution '' to this problem ! the effect of angular momentum on gravity turns out to be similar to the effect of charge .( as john wheeler puts it , charge is a poor man s angular momentum . ) indeed , there is a very close similarity between the conformal structure of the kerr family and a 2-parameter _ spherically symmetric _ family of solutions to the einstein - maxwell equations : }=0,\ ] ] the so - called _ reissner - nordstrm _ solution . herethe parameters are mass and charge . for retrieves the schwarzschild family , while for one obtains : the singularity of the schwarzschild solution has disappeared ! the above spacetime is completely regular up to the edges .these new edges that `` complete '' the triangle , however , are at a _finite _ distance from the initial data , in the sense that all timelike geodesics joining those edges with the initial hypersurface have finite length .this solution thus has a regular future boundary and is extendible ( in ! ) beyond it .what fails at the boundary of this maximal domain of development of initial data is thus not the regularity of the solution , but rather , global hyperbolicity .any extension of will contain past inextendible causal geodesics not intersecting the intial hypersurface : points in such an extension but not in itself can not be determined by initial data , in the exact same way that the solution to a linear wave equation at the point depicted below , can not be uniquely determined by its values in the shaded set : what has happened in the reissner - nordstrm solution is that the situation depicted above developed , but from an that was complete .the physical interpretation of the above situation is that the classical principle of determinism fails , but without any sort of loss in regularity that would indicate that the domain of the classical theory has been exited .it is in that sense that this kind of behavior is widely considered by physicists to be problematic . on the other hand , numerical calculations ( penrose and simpson ) on the behavior of linear equations on a reissner - nordstrm background indicate that a naturally defined derivative blows up at the cauchy horizon .this was termed the blue - shift effect .thus , penrose argued , the pathological behavior of the reissner - nordstrm solution might be unstable to perturbation . this led him to conjecture , more generally , _ strong cosmic censorship _ for generic initial data , in an appropriate class , the maximal domain of development is inextendible . in view of our discussion in the introduction , this can be thought of as the conjecture that , for generic initial data , the solution is unique wherever it can be defined . of course, the context in which this should be applied to ( i.e. 
what equations , what class of initial data should be considered , etc ) and the notion of extendibility are left open .we will comment more on that later .it might seem at first that the proper setting for discussing the problem of whether cauchy horizons arise in evolution , for generic data in spherical symmetry , is the einstein - maxwell equations .unfortunately , these suffer in fact from the same drawback as the einstein vacuum equations , namely they do not possess the required dynamical degrees of freedom .one necessarily has to include more matter , and again the simplest choice , as in the work of christodoulou , is a scalar field .thus one is easily led to the coupled einstein - maxwell - scalar field system : }=0,\ ] ] it turns out that in spherical symmetry the maxwell part of the equation decouples . since the scalar field carries no charge , a non - trivial maxwell fieldcan only be present if an initial complete spacelike hypersurface has non - trivial topology .in particular , this model is not suitable for considering the formation of black holes , as in the work of christodoulou .thus we will consider the problem where there is already a black hole present initially . to take the simplest possible formulation that captures the essense of the problem at hand, one can prescribe initial data for the system on two null rays , such that one corresponds to the event horizon of a reissner - nordstrm solution , and the other carries `` arbitrary '' matching data : to make the most of the method of characteristics , we introduce null coordinates , i.e. coordinates such that the metric on takes the form . herewe select the -axis to be the event horizon , and the axis to be the conjugate ray on which we presribe our data .the unknowns are then just , and , and the electromagnetic part contributes a constant which is computed from the initial data .to write the equations as a first order system , we define , , , , and also , what one can call the `` renormalized '' hawking mass is defined to be .the renormalized version has the property that it is constant in the reissner - nordsrm solution and coincides with at future null infinity .] , defined by we then have : can now state the theorems of . on the one hand we have : after restricting the range of the coordinate , the penrose diagram of the solution of the i.v.p .described above is as follows : moreover , extends to a function on the cauchy horizon with the property as ( where is the constant value of on the initial _ event _ horizon ) , and the metric can be continuously extended globally across the cauchy horizon .on the other hand , we have : for `` generic '' initial data in the class of allowed initial data , blows up identically on the cauchy horizon .in particular , the solution is inextendible across the cauchy horizon as a metric .thus , strong cosmic censorship is true , according to theorem 2 , if formulated with respect to extendibility in or higher , but false , according to theorem 1 , if formulated with respect to extendibility in ( see for reasons why one might want to require . 
) in any case , the formation of a null `` weak '' singularity indicates a qualitatively different picture of the internal structure of the black hole from any of the previous models described above , and from the original expectations of penrose .the scenario of theorems 1 and 2 was first suggested by israel and poisson who put forth some heuristic arguments .subsequently , a large class of numerics was done for precisely the equations considered here ( see for a survey ) .because of the blow - up in the mass , this phenomenon was termed `` mass inflation '' .it should be noted that the imposition of reissner - nordstrm data on the event horizon is somewhat unnatural if the data is viewed as having arisen from generic data for a characteristic value problem where the -axis is in the domain of outer communications .similar results to the above theorems , however , can in fact be proven for a wide class of data which includes the kind conjectured to arise from the aforementioned problem .these results will appear in .the relavence of such an extension will only become clear , however , if the problem of determining the correct `` generic '' decay on the event horizon is mathematically resolved .our initial data are trapped , i.e. and are both nonpositive .these signs are then preserved in evolution .note that from the equation it follows that is non - increasing in .what gives the analysis of our equations _ in the black hole region _its characteristic flavor is the fact that is potentially infinite when integrated for constant along the whole range of ( it is indeed infinite in initial data , i.e. ) , and that this infinity can appear in the equations ( for instance in ) with either a positive or negative sign , depending on the sign of .note that , by contrast , in the domain of outer communications , this infinity is killed by the term since on outgoing rays . in the domain of development of our initial data , is bounded above by its initial constant value on the event horizon , in view of the signs of and . in the reissner - nordstrm solution, the sign of goes from negative near the event horizon to positive near the cauchy horizon , while remains constant in and thus infinite , accounting for both what is called the infinite red shift near the event horizon ( this makes objects crossing the event horizon slowly disappear to outside observers as they are shifted to the red ) and the infinite blue shift near the cauchy horizon ( this accounts for the instability of the cauchy horizon to _ linear _ perturbations ) . for general solutions of the initial value problem , some of these features of the sign of turn out to be stable , while others do not .in particular , theorems 1 and 2 together imply that the sign must become negative near the cauchy horizon , and not positive ! to attack this initial value problem ,it is clear that the behavior of this sign is the first thing that must be understood .it turns out that before the effects of the linear instability start to play a role , three geometrically distinct regions develop in evolution , a red - shift , no - shift , and stable blue - shift region , characterized by respectively : in the red - shift region , is unbounded as , but it appears with a `` favorable '' sign ( favorable as far as controlling is concerned bigger , and there is an extra on the denominator of . 
] ) , in the no - shift region , is uniformly bounded , and in the stable blue - shift region , grows with but at a rate `` less '' than the growth of certain natural derivative of .these facts allow us to control all quantities reasonably well up until the future boundary of the stable blue - shift region , though completely different arguments must be applied to each subsequent region as it develops in evolution from the previous one .of course , all this work seems only to have pushed forward the problem from the original initial segments to the future boundary of the `` stable blue - shift region '' .but in fact our new `` initial '' conditions on the future boundary of the stable blue - shift region are much more favorable .the stable blue - shift region that has preceded it ensures that has a sufficiently fast decay rate in .( remember that blue - shift regions tend to make smaller , so they are favorable for controlling , but unfavorable for controlling . )once this rate can be shown to be preserved , it follows by integrating that one can bound _ a priori _ away from in its future , and thus prove the existence of the solution up to the cauchy horizon .it is clear from what we have said above that if the unstable region remains a blue - shift region ( see left diagram below ) , there is no problem .( this is of course what happens for the reissner - nordsrm solution itself . ) the danger is if a new red - shift region develops ( see right diagram above ) .it turns out , however , that a new _ a priori _estimate is available in this case , which is independent of the size of .this makes use of the fact that there is a hidden also in the denominator in .( the estimate depends , however , on our knowledge of the new initial condition of and a particular bootstrap assumption on its future behavior ; in particular this estimate does not hold in the original `` red - shift '' region . )this is the last element in the proof of theorem 1 .we now discuss the proof of theorem 2 .as has been noted , at the level of perturbation theory , one does see an instability , caused by the blue - shift region .( requiring some derivative of the scalar field to be positive initially , will decay in , and thus will decay in , at a slower rate than the decay of in , so that the natural derivative . ) on the other hand , the non - linearity tends to diminish this effect , since if the mass indeed increases , in view of theorem 1 , we have a reappearance of a red - shift region ( see diagram on the right above ) .as remarked earlier , this tends to make ( and also ) bigger , and thus smaller .the proof must thus encorporate something beyond `` linear theory arguments '' .this extra ingredient is supplied by a very powerful monotonicity peculiar to black hole interiors , or more specifically , to `` trapped regions '' .integration of and then implies that if and are initially of the same signs coordinate .this condition and the non - vanishing of a particular derivative of at the origin together define the `` generic '' class of initial data to which theorem 2 applies .] , let s say non - negative , then they remain non - negative , and in fact and . 
nowintegration of and using that is also non - increasing in , yields that for , , it should be mentioned that in view of the sign of , both sides of the above inequality are positive .the broad outline of the proof of theorem 2 is as follows : first assume that the spacetime looks very much like reissner - nordstrm .then the linear theory more or less applies , and applying the bounds for in the equation , one obtains , which is a contradiction .thus one is reduced to proving that any spacetime `` quantitatively different '' from reissner - nordstrm must have .it is not possible here to explain precisely what `` quantitatively different '' has to mean . to give a taste of the kind of arguments involved in this `` non - linear part '' of the proof , we will be content to show that assuming only that is bounded below by a positive number plus its reissner - nordstrm value ( this is indeed a quantitative difference ) it follows from that must in fact blow up identically on the cauchy horizon. if is the future boundary of the stable blue - shift region , the fact that implies that given any , a sequence of points can be constructed so that the mass differences are all greater than : but in view of , this mass difference can be added on to yield infinite mass at the point where meets the cauchy horizon .suffice it to say that the non - linear analysis of black hole regions is quite different than the analysis we are used to .which parts of this spherically - symmetric picture generalize and which do not remains to be seen .piotr chruciel _ on the uniqueness in the large of solutions of the einstein s equations ( `` strong cosmic censorship '' ) _ australian national university , centre for mathematics and its applications , canberra , 1991
This talk will describe some recent results regarding the problem of uniqueness in the large (also known as _strong cosmic censorship_) for the initial value problem in general relativity. The interest in the issue of uniqueness in this context stems from its relation to the validity of the principle of determinism in classical physics. As will be clear from below, this problem does not really have an analogue in other equations of evolution typically studied. Moreover, in order to isolate the essential analytic features of the problem from the complicated setting of gravitational collapse in which it arises, some familiarity with the conformal properties of certain celebrated special solutions of the theory of relativity will have to be developed. This talk is an attempt to present precisely these features to an audience of non-specialists, in a way which hopefully will fully motivate a certain characteristic initial value problem for the spherically symmetric Einstein-Maxwell-scalar field system. The considerations outlined here leading to this particular initial value problem are well known in the physics relativity community, where the problem of uniqueness has been studied heuristically and numerically. In , the global behavior of generic solutions to this IVP, and in particular the issue of uniqueness, is completely understood. Only a sketch of the ideas of the proof is provided here, but the reader may refer to for details.
cartographic maps are an important tool for exploring , analyzing and communicating data in their geographic context .effective maps show their main information as prominently as possible .map elements that are not relevant to the map s purpose are abstracted or fully omitted . in _schematic maps _ , the abstraction is taken to `` extreme '' levels , representing complex geographic elements with only a few line segments .in addition to highlighting the main aspects of the map , they are useful to avoid an `` illusion of accuracy '' which may arise when showing data on a detailed map : the schematic appearance acts as a visual cue of distortion , imprecision or uncertainty .however , the low complexity must be balanced with recognizability .correct topological relations and resemblance hence play a key role in schematization .schematic maps also tend to be stylized by constraining the permitted geometry .orientations of line segments are often restricted to a small set , so called _-oriented schematization_. the prototypical example is a schematic transit map ( e.g. the london tube map ) , in which all segments are horizontal , vertical or a -degree diagonal .to be schematized ( switzerland ) .( b ) a -oriented ( rectilinear ) graph is placed on .( c ) a _ simple _ polygon with its boundary constrained to that resembles .( d ) shape is a -oriented schematization of the input . ]a central problem in schematization is the following : given a simple polygon , compute a simple -oriented polygon with low complexity and high resemblance to .typically , one is constrained to optimize the other .formalizing `` high resemblance '' requires the use of similarity measures , each having its own benefits and weaknesses . in this paperwe investigate a discretized approach to schematization . to this endwe overlay a -oriented plane graph on , and require the boundary of to coincide with a simple cycle in ( fig .[ fig : idea ] ) .though it restricts the solution space , this approach also offers some benefits .* the graph can easily model a variety of constraints , possibly varying over the map , or mixing with other types of geometry , such as circular arcs .* discretization promotes the use of collinear edges and provides a uniformity of edge lengths .this provides a stronger sense of schematization and a more coherent `` look and feel '' when schematizing multiple shapes . *combined with the simplicity constraint , the graph enforces a minimal width for narrow strips in , avoiding an undesired visual collapse ( see fig . [ fig : exaggeration ] ) .finally , discretization may be necessary due to the intended use of the schematic shape .examples include computing tile - based cartograms and deciding for a grid map on a connected set of cells that resemble the complete region .these use cases require us to bound not ( only ) the bends of the result , but the number or total size of the enclosed faces .this is also relevant in the context of area - preserving schematization .exaggerates the narrow strip .computed using techniques described in .( b ) result of algorithm by buchin et al . contains a visual collapse . 
]we consider two discretized approaches , as outlined below .note that we focus on grid graphs ( plane graphs with horizontal and vertical edges only ) without constraining the complexity of the result .refer to section [ sec : prelims ] for precise definitions used in this paper .the first approach aims to find a simple cycle in , quantifying resemblance via the frchet distance ( ) .this leads us to the problem statement below .[ prob : smm ] let be a partial grid graph , let be a simple polygon and let . decide whether a simple cycle in exists such that . in section [ sec : proof ]we prove that this problem is np - complete .this proof has implications on several variants of this problem ( section [ sec : corollaries ] ) .in particular , no reasonable approximation algorithm exists , unless p . with the second approach we use the symmetric difference ( ) to quantify resemblance .we now consider the full polygon rather than only its boundary , and look for a connected face set in instead of a cycle .this leads to the following problem : [ prob : cfs ] let be a full grid graph , let be a simple polygon and let . decide whether a connected face set in exists such that . in spite of the symmetric difference being insensitive to matching different parts between polygons, we prove that this is again an np - complete problem in section [ sec : cfs ] .this result is independent of whether we allow holes in the resulting polygon .moreover , the proof readily implies hardness for regular tilings using triangles or hexagons .* schematization . * line and network schematization ( e.g. transit maps ) have received significant attention in the algorithmic literature , e.g. .recently , schematization of geographic regions has gained increasing attention , e.g. .our discretized approach is similar in nature to the octilinear schematization technique of cicerone and cermignani , though simplicity is of no concern in their work .as mentioned in the introduction , the discretized approach offers conceptual advantages over the existing nondiscretized methods .map matching has various applications in geographic information systems , such as finding a driven route based on a road network and a sequence of gps positions . without its simplicity constraint ,it has been studied extensively with various criteria of resemblance , e.g. .alt et al . describe an algorithm that solves nonsimple map matching under the frchet distance in time where is the complexity of and the complexity of .though `` u - turns '' can be avoided , no general simplicity guarantees are possible .similarly , the decision problem for the weak frchet distance can be solved in time .the _ simple _ map matching problem on the other hand has received little attention , although it stands to reason that for many applications a nonselfintersecting result is desired , if the input curve is simple .a full grid graph always admits a solution with hausdorff distance at most and frchet distance at most , where parametrizes a realistic input model .wylie and zhu prove independently that simple map matching under the _ discrete _ frchet distance is np - hard , however , without requiring a simple input curve .a stronger result with a simple input curve follows directly from our proofs .sherette and wenk show that it is np - hard to find a simple curve with bounded frchet distance on a 2d surface with holes or in 3d , but again without requiring a simple input curve . 
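As a concrete illustration of the distance measures involved, the following is a small sketch of the standard dynamic program for the discrete Fréchet distance between two polygonal chains, the variant studied by Wylie and Zhu; it is included only to make the measure tangible and is not the (continuous) map-matching algorithm of Alt et al. discussed above.

```python
from math import hypot, inf

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between non-empty polygonal chains P and Q,
    given as lists of (x, y) points, via the standard O(|P|*|Q|) dynamic
    program (Eiter-Mannila style)."""
    n, m = len(P), len(Q)
    dist = lambda i, j: hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
    ca = [[inf] * m for _ in range(n)]  # ca[i][j]: best coupling cost for the prefixes P[:i+1], Q[:j+1]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                best = 0.0
            elif i == 0:
                best = ca[0][j - 1]
            elif j == 0:
                best = ca[i - 1][0]
            else:
                best = min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1])
            ca[i][j] = max(best, dist(i, j))
    return ca[n - 1][m - 1]

# Example: two short chains at constant vertical offset 1 have distance 1.0.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))
```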
in its dual form ,connected face selection is a specialization of the known np - hard maximum - weight - connected - subgraph problem in which a ( planar ) graph must be partitioned into two disjoint components , and , such that is connected and has maximal total weight .our results readily imply that this dual problem remains np - hard even on a full grid graph if all weights are nonnegative and the size of is given ; this is independent of whether we constrain to be connected .it is also related to the well - studied graph cuts , though focus there lies minimizing the number of cut edges connecting and ( e.g. ) .vertex weights have been included only as part of the optimization criterion ( e.g. ) , the other part still being the number of cut edges .the number of cut edges is not correlated to the complexity of the eventual shape : we can not use this as a trade - off between complexity and resemblance . also , these approaches tend not to require that a partition is connected , and focus on ensuring a certain balance between the two partition sizes . partitioning an unweighted nonplanar graph into connected components that each contain prescribed vertices is known to be np - hard . * polygons . *a polygon is defined by a cyclic sequence of vertices in .each pair of consecutive vertices is connected by a line segment ( an edge ) .a polygon is _ simple _ if no two edges intersect , except at common vertices .we use to refer to the area of polygon .the _ complexity _ of a polygon is its number of edges .we use to refer to the boundary of .unless mentioned otherwise , polygons are assumed to be simple throughout this paper .a straight - line graph is defined by a set of vertices in and edges connecting pairs of vertices using line segments .the graph is _ plane _ if no two edges intersect , except at common vertices . the _ complexity _ of a plane graph is its number of edges .we call a plane graph a _ ( partial ) grid graph _ if all of its vertices have integer coordinates , and all edges are either horizontal or vertical , having length at least _ grid graph is a maximal grid graph ( in terms of both vertices and edges ) within some rectangular region ; all edges have length .a full grid graph represents a tiling of unit squares .unless mentioned otherwise , graphs are assumed to be plane in this paper .a _ cycle _ in a graph is a sequence of vertices such that every consecutive pair as well as the first and last vertex in the sequence are connected via an edge .a cycle is _ simple _ if the sequence does not contain a vertex more than once .a simple cycle in a plane graph corresponds to ( the boundary of ) a simple polygon .the _ bends _ of a cycle are the vertices at which the interior angle is not equal to , that is , those that form corners in the polygon it represents .complexity _ of a cycle is its number of bends .the frchet distance quantifies the ( dis)similarity between two geometric shapes : a high frchet distance indicates a low similarity .we define as the continuous function that maps the unit circle onto the boundary of .let denote the set of all orientation - preserving homeomorphisms on . using to denote the euclidean distance , the frchet distance between two polygons is defined as maximal empty regions of a graph ( i.e. , not containing a vertex in its interior )are referred to as _ faces_. 
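For reference, the two similarity measures in these preliminaries, the Fréchet distance just defined and the symmetric difference defined in the next paragraph, are presumably the following standard notions; here P, Q : S^1 -> R^2 parametrize the polygon boundaries, \mathcal{H} is the set of orientation-preserving homeomorphisms of S^1, and ||.|| is the Euclidean distance.

```latex
% Fréchet distance between polygons P and Q (standard definition,
% assumed to match the stripped formula in the text):
\delta_F(P,Q) \;=\; \inf_{\sigma \in \mathcal{H}} \; \max_{t \in S^1}
      \bigl\| P(t) - Q(\sigma(t)) \bigr\| .

% Symmetric difference (area covered by exactly one of the two polygons),
% together with its decomposition when one side is a set F of pairwise
% disjoint faces, as used later in the reduction:
\delta_\triangle(P,Q) \;=\; \mathrm{area}(P) + \mathrm{area}(Q) - 2\,\mathrm{area}(P \cap Q),
\qquad
\delta_\triangle\Bigl(P,\bigcup_{f \in F} f\Bigr)
   \;=\; \mathrm{area}(P) + \sum_{f \in F}\bigl(\mathrm{area}(f) - 2\,\mathrm{area}(f \cap P)\bigr).
```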
one face , the outer face , is infinite ; all other faces have bounded area .faces are said to be adjacent if they share at least one edge .the dual of graph is defined by a vertex set containing a dual vertex for every face and two dual vertices are connected via a dual edge if the corresponding faces are adjacent .a face set in is said to be _ connected _ if the corresponding induced subgraph of is connected .we call a face set _ simply connected _ if the complement of the face set is also connected .a simply connected face set corresponds to a simple polygon ; a ( nonsimply ) connected face set may have holes .the symmetric difference between two polygons and is defined as the area covered by precisely one of the polygons .that is , the symmetric difference is if is composed of pairwise disjoint regions ( e.g. a face set ) , then we may decompose the above formula into .in this section we consider problem [ prob : smm ] , simple map matching .we prove that this problem is np - complete , as formalized in the following theorem .[ thm : main ] let be a partial grid graph , let be a simple polygon and let .it is np - complete to decide whether contains a simple cycle with .the problem is in np since the frchet distance can be computed in polynomial time and it is straightforward to check simplicity . in this sectionwe prove that the problem is also np - hard .we conclude this section by considering the implications of this result on variants of this problem .we assume in the remainder of this section .de berg and khosravi prove that _ planar monotone 3sat _ is np - complete .that is , decide whether a 3cnf formula is satisfiable , given the following constraints ( fig .[ fig : planar3sat](a ) ) : * clauses are either positive ( only unnegated literals ) or negative ( only negated literals ) ; * a planar embedding for is given , representing variables and clauses as disjoint rectangles ; * variables lie on a single horizontal line ; * positive clauses lie above the variables , negative clauses below ; * links connecting clauses to the variables of their literals are strictly vertical . .gray polygons represent gadgets , interact ingvia shared boundaries .the red lines connect the various gadgets to obtain a simple polygon . ] for our reduction we construct a simple map matching instance a partial grid graph and a polygon that contains a simple cycle with if and only if formula is satisfiable .we use three types of gadgets to represent the variables , clauses and links of : _ variable gadgets _ , _ clause gadgets _ , and_ propagation gadgets_. our clause gadgets have small fixed dimensions and hence can not be stretched horizontally .a single bend for the two `` outer '' edges of a clause is sufficient to ensure this ( see fig . [fig : planar3sat](b ) ) . 
in the upcoming sections we first design the gadgets in isolation before completing the proof using the gadgets .an overview of the eventual result of this construction is given in fig .[ fig : constructionoverview ] .each gadget specifies a _ local graph _ ( a part of ) and a _ local curve _ ( a part of ) .the gadgets interact via vertices and edges shared by their local graphs .there is no interaction based on the local curve : it is used only to force choices in using edges of the local graph .if a cycle exists in the complete graph , a _ local path _ in the local graph must have a frchet distance of at most to the local curve .the local path `` claims '' its vertices and edges : these can no longer be used by another gadget .this results in _ pressure _ on the other gadget to use a different path .a gadget has a number of pressure _ ports_. a port corresponds to a sequence of edges in the local graph that may be shared with another gadget .a port may _ receive _ pressure , indicating that the shared edges and vertices may not be used in the gadget .similarly , it may _ give _ pressure , indicating that the shared edges and vertices may not be used by any adjacent gadget .all interaction between gadgets goes via these ports .the local curves must be joined carefully to ensure that is simple . to this end, each gadget has two curve _ gates _ that correspond to the endpoints of the local curve .later , we show how to connect these gates to create a single simple polygon . in the following paragraphswe describe the three gadgets .in particular , we take note of the _ specification _ of each gadget : its behavior in terms of its ports ; a bounding polygon that contains the local graph and local curve ; the placement of its two gates and its ports .these specifications capture all necessary aspects to complete the reduction in section [ ssec : proof ] . herewe focus on positive clauses ( i.e. , above the variables ) and their edges .gadgets below are defined analogously , by mirroring vertically .we present specifications and constructions mostly visually , using the following encoding scheme .the bounding polygon is given with a black outline ; the ports are represented with thick green lines ; the gates are represented with red dots .the local graph is given with thick light - blue lines ; the local curve is a red line .we indicate various local paths using dark - blue lines ; ports that give pressure for a local path are indicated with an outward arrow .all elements are visualized on an integer grid ( thin gray lines ) , to show that we indeed construct a partial grid graph .all coordinates are an integer multiple of a half : all vertices are placed on vertices of the grid , exactly halfway an edge or exactly in the center of a cell . a clause gadget is illustrated in fig .[ fig : clause_gadget ] .it has fixed dimensions ; the figure precisely indicates its specification as well as its construction . the gadget admits a local pathonly if one of its ports does _ not _ receive pressure .any local path causes pressure on at least one port ; for each port there is a path that causes pressure only on that port .the lack of external pressure on a port indicates that the value of the corresponding variable satisfies the clause . 
there is no local path that avoids all three ports :if all ports receive pressure , none of the variables satisfies the clause and the gadget does not admit a local path .the specification and construction of a variable gadget depend on the number of literals .let denote the maximum of the number of positive and the number of negative literals of the variable .we assume that : otherwise , it does not occur in any clause .its bounding polygon , gates and ports are illustrated in fig .[ fig : var_placement ] for . for higher values of , we increase the width by , to ensure a port of width and a distance of in between ports .the gadget admits exactly two local paths : `` _ _ true _ _ '' and `` _ _ false _ _ '' .ports for positive literals ( top side ) give pressure only with the _ false _ path .ports for negative literals ( bottom side ) give pressure only with the _ true _ path .in other words , a port gives pressure if the variable does _ not _ satisfy the corresponding clause . .the right two figures indicate the two local paths , for . ] a propagation gadget ( shown in fig .[ fig : prop_gadget ] ) connects a port of a variable gadget to a port of a clause gadget .the bounding polygon is a corridor of width with at most one bend : if the link in formula has a bend , then the gadget also has a bend .the corridor can have any integer height greater than .if it has a bend , the corridor spans any integer width at least .the two ports and gates of the gadget are placed as indicated .the local graph and curve are constructed such that it admits only two local paths ; each puts pressure on exactly one port .the gadget does not admit a path if both ports receive pressure .if one port receives pressure , the other must give pressure : it propagates pressure .we are now ready to construct graph and polygon based on formula . fig .[ fig : constructionoverview ] illustrates this construction .first , we place all variable gadgets next to one another , in the order determined by , with a distance of in between consecutive variables . using the -coordinates in the embedding of , we sort the positive clauses to define a _ positive order _ .we place the gadget for clause at a distance above the variables .analogously , we use a _ negative order _ to place the negative clauses below the variables .horizontally , the clause gadgets are placed such that the bottom port lines up with the appropriate port on the variable gadget of the middle literal . finally , we place a propagation gadget for each link in to connect the clause and variable gadgets . by placement of the clauses ,any propagation gadget has height at least and a link without a bend can be represented by a propagation gadget without any bends .as ports are at least distance apart , the width of a propagation gadget with a bend exceeds .a propagation gadget does not overlap other gadgets : the placement of clauses would then imply that the provided embedding for is not planar . we have composed the various gadgets in polynomial timehowever , we do not yet have a simple polygon .we must `` stitch '' the local curves together ( in any order ) to create polygon . to this endwe first define three subcurves : for the variable gadgets ; for the positive clause gadgets and their propagation gadgets ; and for the negative clause gadgets and their propagation gadgets .below is a detailed description of how these are constructed .[ fig : constructionoverview ] visually illustrates the result . 
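Before the stitching of the local curves is described below, it may help to summarize the logical content of the gadget interplay in a few lines; the following is purely an illustrative boolean abstraction of the pressure mechanism (no geometry is modeled), and by design it is nothing more than an evaluator for a monotone 3CNF formula.

```python
def clauses_admit_local_paths(clauses, assignment):
    """Boolean abstraction of the reduction's pressure mechanism (illustration only).

    clauses:    list of 3-tuples of literals, each literal a pair (variable, positive),
                with positive=True for an unnegated literal.
    assignment: dict mapping each variable to True/False, i.e. the chosen
                local path of its variable gadget.

    A variable port gives pressure exactly when the literal is unsatisfied;
    a propagation gadget forwards that pressure to the clause; a clause
    gadget admits a local path iff at least one of its three ports is free
    of pressure. The whole construction admits a simple cycle close to P
    iff every clause gadget admits a local path, i.e. iff the assignment
    satisfies the formula.
    """
    for clause in clauses:
        pressured = [assignment[var] != positive for var, positive in clause]
        if all(pressured):      # all three ports receive pressure
            return False        # this clause gadget admits no local path
    return True

# Tiny example: (x1 or x2 or x3) and (not x1 or not x2 or not x3).
clauses = [((1, True), (2, True), (3, True)),
           ((1, False), (2, False), (3, False))]
print(clauses_admit_local_paths(clauses, {1: True, 2: False, 3: False}))  # True
```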
for first define point and at distance outward from the leftmost and rightmost variable gadget respectively , both at the height of the gates .we connect to the left gate of the leftmost variable , connect the matching gates of consecutive variables , and then connect the right gate of the rightmost variable to .subcurve is constructed by defining a points and , similarly as for , but at the height of the `` positive '' propagation gadgets ; is placed at distance instead .analogous to , we first create the straight traversal from to through all positive propagation gadgets .we then include each positive clause in this subcurve , right before it `` enters '' the propagation gadget for its leftmost literal .this is done by going up starting at distance before this gadget to two above the top side of the clause gadget , and then connecting to its right gate .we go back from its left gate , and go down at distance before the propagation gadget .now , traverses all positive clauses and their propagation gadgets .subcurve is constructed analogous to , though is placed again at distance . by placement of the gadgets , these subcurves are simple polygonal curves between their respective endpoints ( and ) and do not intersect each other . to obtain a single simple polygon , we must now connect the endpoints of the three subcurves .we connect to and to using vertical segments .we connect to , by routing an edge at distance below the lowest negative clause .we now have a simple polygon ; we define as the union of all local graphs and the parts of not contained in a gadget .we now have constructed graph and polygon .we must argue that the complexity is polynomial and that is satisfiable if and only if a simple cycle exists in with .let denote the number of variables , and the number of clauses in .the width of the construction is at most where is the number of occurrences of the ^th^ variable . as ,the width of the construction is .the height of the construction is at most , where is the number of positive clauses and the number of negative clauses ; since , the height is . as all coordinates are required to be an integer multiple of a half , this implies a polynomial bound on the complexity of and .assume that is satisfiable and consider some satisfying assignment .we argue the existence of a simple cycle . for each variable gadgetwe choose either the `` true '' or `` false '' local path , matching the assigned truth value .this gives pressure on a number of propagation gadgets : we choose the only remaining local path for these , causing pressure on the corresponding clauses .for the other propagation gadgets , we choose the path such that it gives pressure at the variable and may receive pressure at the clause . since the truth values of the variables originate from a satisfying assignment , at most two ports of any clause receive pressure .hence , the clause admits a local path as well .we concatenate the local paths with the paths that are used to stitch together the local curves to obtain a simple cycle . by construction at most . now , assume that contains a simple cycle with . by construction ,cycle traverses all gadgets and contains exactly one local path for each gadget .this local path ends at the gates of the gadget and the frchet distance between this local path and the local curve is at most . 
for a variable, this local path corresponds to either the true or false state .this directly yields the truth values of the variables .each clause gadget also has a local path and hence one or more of its ports give pressure . since the propagation gadgets have a local path , the pressure from the clauses results in pressure on a variable gadget .this pressure ensures that a variable that receives pressure from a clause is in a state satisfying the clause .hence , the truth values found from the variables yield a satisfying assignment for formula .this concludes the proof for theorem [ thm : main ] , showing that simple map matching is indeed np - complete .this result and its construction have a number of implications , as discussed below . * approximation . * with the np - hardness result above , we may want to turn to approximation algorithms .however , a simple argument shows that approximation is also np - hard .[ cor : noapprox ] let be a partial grid graph of complexity and let be a simple polygon of complexity .it is np - hard to approximate the minimal of any simple cycle in within any factor of where is a polynomial in and . for an unsatisfiable formulathe minimal frchet distance of a simple cycle in the constructed graph is significantly larger than .suppose that this minimal frchet distance is strictly greater than .any -approximation algorithm for simple map matching is able to decide satisfiability of planar monotone 3sat formulas .thus , unless p , no -approximation algorithm can have polynomial execution time . to determine the exact value of , we wish to determine at which value a local path becomes admissible that does not correspond to the desired behavior of the construction .this occurs at : the clause gadget breaks , admitting a local path that does not pass through any of the ports .however , it is straightforward to lengthen the gadgets to increase the value of . this does not increase the combinatorial complexity of and , thus maintaining a polynomial - size instance .the construction now spans an area by , where and are the number of variables and clauses in formula .thus , we require to encode the coordinates in polynomial space .r0.3 * counting bends . * for schematizationwe do not wish to just find some cycle in , but also optimize or bound its complexity , measured in the number of bends .if is a partial grid graph ( corresponding to rectilinear schematization ) , any bend is either a left or right turn .every rectilinear polygon has a certain _ bend profile _ :the sequence of left and right bends in counterclockwise order along its boundary .the bend profile gives no information about edge lengths : seemingly different polygons have the same bend profile ( see fig . [fig : turnprofile ] ) . unfortunately , even with a given bend profile , the problem remains np - complete .however , in this reduction the bend profile has length proportional to the complexity of the formula .it does not prove that no fixed - parameter - tractable ( fpt ) algorithm exists .[ cor : nobends ] let be a partial grid graph , let be a simple polygon , let and let be a bend profile .it is np - complete to decide whether contains a simple cycle that adheres to with . in all constructions , the bends made by the various local paths are identical .we can easily derive a bend profile that must lead to a simple cycle , if the formula is satisfiable .* area preservation .* suppose we want an area - preserving solution , i.e. 
, the area of cycle must be equal to that of .a simple argument proves that our reduction can be extended to prove that this more constrained problem is also np - complete .[ cor : smm - area ] let be a partial grid graph , let be a simple polygon and let .it is np - complete to decide whether contains a simple cycle with and .[ fig : constructionoverview_complement ] at the end of the paper shows an overview of the modified construction .the key observation is that the original construction is duplicated , but inside and outside the polygon have been inverted .this is achieved by connecting the endpoints of the subcurves , and , slightly differently , i.e. , with those of the duplicate .any solution coincides with outside the gadgets : only the local paths change the area of .hence , any change in area resulting from a local path in one of the gadgets in the one copy can be counteracted by choosing the exact same local path in the other copy .* variants . *finally , there are a number of variants of the problem that can be proven to be np - complete via the same construction . as strict monotonicity in the homeomorphismis not crucial for the reduction , the problem under the _ weak _ frchet distance is also np - complete .the clause gadget admits an extra local path , one that still exhibits the desired behavior .the problem is also np - complete under the _ discrete _ ( weak ) frchet distance , as we may sample the graph and polygon appropriately .not every grid location needs a vertex , preserving the inapproximability result of corollary [ cor : noapprox ] . as all interaction between gadgetsis based on edges , it is also np - complete to determine the existence of an `` edge - simple '' cycle that uses each edge at most once ( but vertices may be used more than once ) .it is not essential in any of the above ( except for area preservation ) for to be a closed curve : the same construction works for open curves , and thus these variants are np - hard as well .with the negative results using the frchet distance , we now consider the symmetric difference via problem [ prob : cfs ] , connected face selection .we prove that this problem is also np - complete , even on a full grid graph , as captured in the following theorem .[ thm : cfs ] let be a full grid graph , let be a simple polygon and let .it is np - complete to decide whether contains a connected face set with .the problem is obviously in np , since the symmetric difference and connectedness can be straightforwardly verified in polynomial time .here we show that it is also np - hard .we conclude this section with some implications of this result. * rectilinear steiner tree . *the rectilinear steiner tree problem is formulated as follows : given a set of points in , is there a tree of total edge length at most that connects all points in , using only horizontal and vertical line segments ? vertices of are not restricted to .this problem was proven np - complete by garey and johnson .hanan showed that an optimal result must be contained in graph corresponding to the arrangement of horizontal and vertical lines through each point in ( see fig . [fig : cfs_sketch](a ) ) .subsequently , this was called the _ hanan grid _ ( e.g. 
) .we call a vertex of a _ node _ if it corresponds to a point in and a _ junction _ otherwise .as the problem is scale invariant , we assume .all edges in must be shorter than : otherwise , the answer is trivial no such tree exists .( dark points ) .( b ) consists of unit square cells : node - cells are colored black , junction - cells white , edge - cells gray ; face - cells are hatched .( c ) `` skeleton '' for polygon .( d ) sketch of polygon constructed on top . ; light blue areas indicate flexible parts in the construction that represent edge lengths . ]we must transform point set into a full grid graph , a polygon and a value .we construct such that each _ cell _ ( face of ) corresponds to a vertex ( _ node - cell _ or _ junction - cell _ ) , an edge ( _ edge - cell _ ) , or a bounded face ( _ face - cell _ ) of ; see fig .[ fig : cfs_sketch](b ) .we then construct polygon by defining a part of the polygon inside all cells , except for the face - cells : does not overlap these . to structure use a _ skeleton _ , a tree spanning the non - face - cells in the dual of ( fig .[ fig : cfs_sketch](c ) ) .recall that the symmetric difference between and a face set may be computed as .as , we define the _ weight _ of a cell in as . hence , the symmetric difference is .we set the desired weight for cell to : * if is a node - cell ; * if is a junction - cell ; * if is an edge - cell , where is the length of the corresponding edge in ; * if is a face - cell .given a desired weight for cell , the area of overlap is .every cell has area of inside ; equals .we call the _ local polygon _ of .we set .that is , the sum of weights is at most .this can be achieved only if the face set contains all node - cells and no face - cell .we design every cell such that the desired weight is achieved . for a face - cell , this is trivial : , hence and we keep disjoint from this cell . for all other cells to cover some fraction of its interior , as dictated by .skeleton dictates how to connect the local polygons ; we ensure that at least the middle of the shared edge ( the _ connector _ ) is covered .a local polygon should never touch the corners of its cell .node- and junction - cells may have up to four neighbors in . covering the four connectors is done with a cross shape , covering of the cell s area . by thickening this cross, we can straightforwardly support with , ensuring not to touch the corners or edges shared with cells that are not adjacent in .a node - cell has weight ; .a junction - cell has weight ; . hence ,both can be represented ( see fig . [fig : cfs_design](a b ) ) .an edge - cell has weight and thus should cope with weights between zero and a half ; lies between and .any edge - cell has degree or in ; if it has degree , the neighboring cells are on opposite sides .hence , can be trivially handled by creating a rectangular shape that touches exactly the necessary connectors . is dealt with by widening this rectangle within the cell ; any intermediate weight is handled by interpolating between these two .this is illustrated in fig .[ fig : cfs_design](c ) . by .( b ) junction - cells are covered for by .( c ) between ( dark blue ) and ( dark and light blue ) of edge - cells are covered by . 
]we now have graph , polygon and a value for .we must prove that the reduction is polynomial and that has a rectilinear steiner tree of length at most , if and only if there is a connected face set in such that .the former is trivial by observing that : has complexity in each of the cells ; computing tree for guiding the construction of can be done with for example a simple breadth - first search .suppose we have a rectilinear steiner tree of length at most in the hanan grid .we construct a face set as the union of all cells corresponding to vertices and edges in . by definition of , this must contain all node - cells and can not contain face - cells . as junction - cellshave no weight , the total weight of is where is the cell of corresponding to edge . by assumption : the total weight is at most .thus , the symmetric difference for is at most .suppose we have a connected face set in such that .the total weight is thus .since face - cells have weight and only node - cells have negative weight , being , this can be achieved only if contains all node - cells and no face - cells .in particular , the sum of the weights over all edge - cells is at most .thus , the subgraph of described by the selected cells must be connected , contain all nodes of , and have total length at most .if this subgraph is not a tree , we can make it a tree , by leaving out edges ( further reducing the total length ) , until the subgraph is a tree . *simply connected .* the same reduction works for a _ simply _ connected face set , as steiner tree can not contain cycles and a simply connected face set in readily implies a tree .let be a full grid graph , let be a simple polygon and let .it is np - complete to decide whether contains a simply connected face set with .the face set obtained from a steiner tree must be simply connected , since can not contain cycles .in addition , a simply connected face set in ( that does not contain a face - cell ) directly describes a tree. * area preservation . * with an area - preservation constraint the problem remains np - complete .we only sketch an argument ; full proof can be found in appendix [ app : cfs - impl ] .this readily implies that variants prescribing the number of faces or total area via a parameter are also np - hard .[ cor : cfs - area ] let be a full grid graph , let be a simple polygon and let .it is np - complete to decide whether contains a ( simply ) connected face set with and .we assume for simplicity that no points in share an - or -coordinate : is then exactly an grid .the number of cells we need to represent a steiner tree spanning is at least and at most .thus , we need , as to allow sufficiently many cells to be selected in the original construction . to achieve the necessary area for , we are going to add more cells to the construction , each with and thus .we add such cells . 
due to their weight, their presence in or absence from face set has no effect on the symmetric difference .we need the new cells to not interfere with the original construction .therefore , they should be adjacent only with one of the node - cells on the boundary of the construction .thus we also add cells weight to separate the new cells from the original cells .[ fig : cfs_sketcharea ] illustrates an overview of this extended construction and its new skeleton .note that the newly added cells have the same weights as the original junction - cells and face - cells ; the construction of thus extends straightforwardly .if we are given some connected face set ( that now uses a fixed number of cells ) that has sufficiently low symmetric difference , we still know that it must span all the node - cells and thus encode a rectilinear steiner tree of sufficiently low length . to transform a rectilinear steiner tree to a connected face set , we must argue that we can always select the correct number of cells .first , let us bound the size of .since edges have in between and , it is easy to derive that the total area of the original construction satisfies ; this simplifies to .the new cells add area .thus , is bounded by the interval $ ] . hence , the number of cells we need to select is more than and strictly less than the number of newly added cells with . therefore , we simply apply the original transformation , but use the new cells as `` overflow '' for any cells in excess of those needed to represent .the above assumes that the eventual area of is an integer ; since it depends on the edge lengths in , it likely is not .this can be remedied by either assuming we are allowed to round the area of , or by letting extend slightly into the outer face to make it integer .note that the weight of the outer face is always infinite for a bounded .alternatively , we can use any weight for node - cells , strictly in between and .more general graph classes ( e.g. plane graphs ) are also np - hard ; this is readily implied by theorem [ thm : cfs ] .finally , the problem remains np - complete for graphs representing hexagonal and triangular tilings ( by combining two triangles into one slanted square ) .schematic maps are an important tool to visualize data with a geographic component .we studied discretized approaches to their construction , by restricting solutions to a plane graph .this promotes alignment and uniformity of edge lengths and avoids the risk of visual collapse .we considered two variants : _ simple map matching _ using the frchet distance and _ connected face selection _ using the symmetric difference .unfortunately , both turn out to be np - complete ; the former is even np - hard to approximate .the proofs readily imply that a number of variants are np - hard as well , even with an area - preservation constraint .it remains open whether ( general ) simple map matching is fixed - parameter tractable .moreover , there is quite a gap between the graph necessary for the reduction and the graphs that would be useful in the context of schematization ( e.g. full grid graphs , triangular tilings ) .are such instances solvable in polynomial time ? for connected face selection , we know hardness on such constructed graphs , but it remains open whether approximation or fpt algorithms are possible . 
in both methods the reduction needs rather convoluted polygons , very unlike the geographic regions that we want to schematize : do realistic input assumptions help to obtain efficient algorithms ? recently , bouts et al . have shown that any full grid graph admits a simple cycle with fréchet distance bounded using a realistic input model , called narrowness . however , this does not readily preclude the decision problem from being np - hard , even for polygons that have bounded narrowness . the fréchet distance is a bottleneck measure and thus results obtained via simple map matching may locally deviate more than necessary , even when minimizing the number of bends . buchin et al . introduced `` locally correct fréchet matchings '' to counteract this flaw of the fréchet distance . can we extend this concept to simple map matching ? the author would like to thank : bettina speckmann and her applied geometric algorithms group , kevin buchin , bart jansen , arthur van goethem , marc van kreveld and aidan slingsby for inspiring discussions on the topic of this paper ; gerhard woeginger for pointing out the exponential bound in corollary [ cor : noapprox ] . the author was partially supported by the netherlands organisation for scientific research project 639.023.208 and marie skłodowska - curie action msca - h2020-if-2014 656741 .
c. wenk , r. salas and d. pfoser . addressing the need for map - matching speed : localizing global curve - matching algorithms . in _ proc . conf . on sci . & stat . database management _ , pages 379 - 388 , 2006 .
to produce cartographic maps , simplification is typically used to reduce complexity of the map to a legible level . with schematic maps , however , this simplification is pushed far beyond the legibility threshold and is instead constrained by functional need and resemblance . moreover , stylistic geometry is often used to convey the schematic nature of the map . in this paper we explore discretized approaches to computing a schematic shape for a simple polygon . we do so by overlaying a plane graph on as the solution space for the schematic shape . topological constraints imply that should describe a simple polygon . we investigate two approaches , _ simple map matching _ and _ connected face selection _ , based on commonly used similarity metrics . with the former , is a simple cycle in and we quantify resemblance via the frchet distance . we prove that it is np - hard to compute a cycle that approximates the minimal frchet distance over all simple cycles in a plane graph . this result holds even if is a partial grid graph , if area preservation is required and if we assume a given sequence of turns is specified . with the latter , is a connected face set in , quantifying resemblance via the symmetric difference . though the symmetric difference seems a less strict measure , we prove that it is np - hard to compute the optimal face set . this result holds even if is full grid graph or a triangular or hexagonal tiling , and if area preservation is required . moreover , it is independent of whether we allow the set of faces to have holes or not .
the workload management task ( work package 1 , or wp1 ) of the eu datagrid project ( also known , and referred to in the following text , as edg ) is mandated to define and implement a suitable architecture for distributed scheduling and resource management in the grid environment . during the first year and a half of the project ( 2001 - 2002 ) , and following a technology evaluation process , edg wp1 defined , implemented and deployed a set of services that integrate existing components , mostly from the condor and globus projects .this was described in more detail at chep 2001 . in a nutshell , the core job submission component of condorg ( ) ,talking to computing resources ( known in datagrid as computing elements , or ces ) via the globus gram protocol , is fundamentally complemented by : * a job requirement matchmaking engine ( called the _ resource broker _ , or rb ) , matching job requests to computing resource status coming from the information system and resolving data requirements against the replicated file management services provided by edg wp2 . *a job logging and book - keeping service ( lb ) , where a job state machine is kept current based on events generated during the job lifetime , and the job status is made available to the submitting user .the lb events are generated with some redundancy to cover various cases of loss .* a stable user api ( command line , c++ and java ) for access to the system .job descriptions are expressed throughout the system using the condor classified ad language , where appropriate conventions were established to express requirement and ranking conditions on computing and storage element info , and to express data requirements .more details on the structure and evolution of these services and the necessary integration scaffolding can be found in various edg public deliverable documents .this paper focuses on how the experience of the first year of operation of the wp1 services on the edg testbed was interpreted , digested , and how a few design _ principles _ were learned ( possibly the hard way ) from the design and implementation shortcomings of the first release of wp1 software .these principles were applied to design and implement the second major release of wp1 software , that is described in another chep 2003 paper ( ) . to illustrate the logical path that leads to at least some of these principles , we start by exploring the available techniques to model the behaviour and throughput of the integrated workload management system , and identify two factors that significantly complicate the system analysis .the workload management system provided by edg - wp1 is designed to rely as much as possible on existing technology .while this has the obvious advantages of limiting effort duplication and facilitating the compatibility among different projects , it also significantly complicates troubleshooting across the various layers of software supplied by different providers , and in general the understanding of the integrated system . also , where negotiations with external software providers could nt reach an agreement within the edg deadlines , some of the interfaces and communication paths in the system had to be adapted to fit the existing external software incarnations . 
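To make the matchmaking step more concrete, the toy sketch below filters a list of advertised computing-element attribute sets by a job's requirement expression and orders the survivors by its ranking expression. This is only an illustration of the idea; it is neither the Condor ClassAd library nor the actual resource broker code, and the attribute names (free_cpus, os) and the ranking rule are invented for the example.

    def match_and_rank(job, computing_elements):
        """Toy matchmaking: keep the CEs satisfying the requirements, order by rank.

        job                -- dict with 'requirements' and 'rank' callables that
                              take a CE attribute dict
        computing_elements -- list of attribute dicts, as published by the
                              information system
        """
        eligible = [ce for ce in computing_elements if job["requirements"](ce)]
        return sorted(eligible, key=job["rank"], reverse=True)

    # hypothetical usage
    job = {
        "requirements": lambda ce: ce["free_cpus"] > 0 and ce["os"] == "linux",
        "rank": lambda ce: ce["free_cpus"],
    }
    ces = [
        {"name": "ce1", "free_cpus": 4, "os": "linux"},
        {"name": "ce2", "free_cpus": 0, "os": "linux"},
    ]
    best = match_and_rank(job, ces)  # -> only ce1 survives the requirements

In the real system the requirement and ranking conditions are written as ClassAd expressions rather than Python callables, and the data requirements are resolved against the replica management services mentioned above.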
to get a useful high - level picture of the integrated workload management system , beyond all these practical constraints , we can model it as a queuing system , where job requests traverse a network of queues , and the service stations " connected to each queue represent one of the various processing steps in the job life - cycle .a few of these steps are exemplified in figure [ fig - netqueue ] .establishing the scale factors for each service in the wp1 system ( e.g. : how many users can a single matchmaking / job submission station serve , how many requests per unit time can a top - level access point to the information system serve , what is the sustained job throughput that can be achieved through the workload management chain , etc . ) is one of the fundamental premises for the correct design of the system .one could expect to obtain this knowledge either by applying queuing theory to this network model ( this requires obtaining a formal representation of all the components , their service time profiles and their interconnections ) or by measuring the service times and by identifying where long queues are likely to build up when a realistic " request load is injected in the system .this information could in principle also be used to identify the areas of the system where improvement is needed ( sometimes collectively called _ bottlenecks _ ) .experience with the wp1 software integration showed that both of these approaches are impractical for either dimensioning the system or ( possibly even more important ) for identifying the trouble areas that affect the system throughput .we identified two non - linear factors that definitely work against the predictive power of queuing theory in this case , and require extra care even to apply straightforward reasoning when bottlenecks are to be identified to improve system throughput .these are the consequence of common programming practice ( and are therefore easy to be found in the software components that we build or are integrating ) and are described in the following section .one of the most common ( and most frustrating , both to developers and to end users ) experiences in troubleshooting the wp1 workload management system on the edg testbed has been the fact that often , perceived _improvements _ to the system ( sometimes even simple bug fixes ) result in a _ decrease _ in the system stability , or reliability ( fraction of requests that complete successfully ) .the cause is often closely related to the known fact that removing a bottleneck , in any flow system , can cause an overflow downstream , possibly close to the _ next _ bottleneck .the complicating factor is that there are at least two characteristics that could ( and possibly still can ) be found in many elements of our integrated workload management queuing network , that can cause problems to appear even very far from the area of the network where an _ improvement _ is being attempted : * _ queues of job requested can form where they can impact on the system load . _ + different techniques can be chosen or needed to pass job requests around . sometimes a socket connection is needed , sometimes sequential request processing ( one request at a time in the system ) is required for some reason , and multiple processes / threads may be used to handle individual requests . 
having a number of tasks ( processes / threads ) wait for a socket queue or a sequential processing slotis one way to queue " requests that definitely generates much extra work for the process scheduler , and can cause any other process served by the same scheduler to be allocated less and less time .queues that are unnecessarily scanned while waiting for some other condition to allow the processing of their element can also impact on the system load , especially if the queue elements are associated to significant amounts of allocated dynamic memory . *_ some system components can enforce hard timeouts and cause anomalies in the job flow . _+ when handling the access ( typically via socket connections ) to various distributed services , provisions typically need to be made to handle all possible failure modes . reasonably " long timeouts are sometimes chosen to handle failures that are perceived to be very unlikely by developers ( failure to establish communication to a local service , for instance ) .this kind of failures , however , can easily materialise when the system resources are exhausted under a stress test or load peak .figure [ fig - fmode ] illustrates how these two effects can conspire to frustrate a genuine effort to remove what seems the limiting bottleneck in the system ( the example in the figure does nor refer to any real case or component ) : removing the bottleneck ( 1 ) causes a request queue to build up at the next station ( 2 ) , and this interferes via the system load to cause hard timeouts and job failures elsewhere ( 3 ) .this example is used to rationalise some of the unexpected reactions that , in many cases , were found while working on the wp1 integrated system .the experience on practical troubleshooting cases similar to this one , while bringing an understanding of the difficulties inherent in building distributed systems , also drove us to formulate some of the principles that are presented in the next section .the attempts at getting a deeper understanding of the edg - wp1 workload management system and their failures led us to formulate a few design principles and to apply them to the second major software release . hereare the principles that descend from the paradigm example described in section [ sec - fmode ] : 1 .* queues of various kinds of requests for processing should be allowed to form where they have a minimal and understood impact on system resources . *+ queues that get ` filled ' in the form of multiple threads or processes , or that allocate significant amounts of system memory should be avoided , as they not only adversely impact system performance , but also generate inter - dependencies and complicate troubleshooting .* limits should always be placed on dynamically allocated objects , threads and/or subprocesses . 
*+ this is a consequence of the previous point : every dynamic resource that gets allocated should have a tunable system - wide limit that gets enforced .* special care needs to be taken around the pipeline areas where serial handling of requests is needed .* + the impact of any contention for system resources becomes more evident near areas of the queuing system that require the acquisition of system - wide locks .so far we concentrated on a specific attempt at modeling and understanding the workload management system that led to an increased attention to the usage of shared resources .there were other specific practical issues that emerged during the deployment and troubleshooting of the system and that led to the awareness of some fundamental design or implementation mistake that was made .here is a short list , where the fundamental principle that should correct the fundamental mistake that was made is listed : 1 .* communication among services should always be reliable : * * always applying double - commit and rollback for network communications . * going through the filesystem for local communications .+ in general , forms of communication that do nt allow for data or messages to be lost in a broken pipe lead to easier recovery from system or process crashes . where network communication is necessary, database - like techniques have to be used .every process , object or entity related to the job lifecycle should have another process , object or entity in charge of its well - being . *+ automatic fault recovery can only happen if every entity is held accountable and accounted for .* information repositories should be minimized ( with a clear identification of authoritative information ) . *+ many of the software components that were integrated in the edg - wp1 solution are stateful and include local repositories for request information , in the form of local queues , state files , database back - ends .only one site with authoritative information about requests has to be identified and kept .monolithic , long - lived processes should be avoided . * + dynamic memory programming , using languages and techniques that require explicit release of dynamically allocated objects , can lead to leaks of memory , descriptors and other resources .experimental , r&d code can take time to leak - proof , so it should possibly not be linked to system components that are long - lived , as it can accelerate system resource starvation . short - lived , easy - to - recover components are a clean and very practical workaround in this casemore thought should be devoted to efficiently and correctly recovering a service rather than to starting and running it . *+ this is again a consequence of the previous point : the capability to quickly recover from failures or interruption helps in assuring that system components ` can ' be short - lived , either by design or by accident .edg - wp1 has been distributing jobs over the edg testbed in a continuous fashion for one and a half years now , with a software solution where existing grid technology was integrated wherever possible .the experience of understanding the direct and indirect interplay of the service components could not be reduced to a simple _ scalability _ evaluation .this because understanding and removing _ bottlenecks _ is significantly complicated by non - linear and non - continuous effects in the system . in this process, few principles that apply to the very complex practice of distributed systems operations were learned the hard way ( i.e. 
not by just reading some good book on the subject ) . edg - wp1 tried to incorporate these principles in its second major software release that will shortly face deployment in the edg testbed .
j. frey , t. tannenbaum , i. foster , m. livny and s. tuecke , `` condor - g : a computation management agent for multi - institutional grids '' , _ proceedings of the tenth ieee symposium on high performance distributed computing ( hpdc10 ) _ , 2001 .
datagrid wp1 members ( c. anglano _ et al . _ ) , `` integrating grid tools to build a computing resource broker : activities of datagrid wp1 '' , presented at the chep 2001 conference , beijing ( p. 708 in the proceedings ) .
application users have now had about a year of experience with the standardized resource brokering services provided by the workload management package of the eu datagrid project ( wp1 ) . understanding , shaping and pushing the limits of the system has provided valuable feedback on both its design and implementation . a digest of the lessons and `` better practices '' that were learned , and that were applied towards the second major release of the software , is given .
we often see published results in the form where and are _ usually _ positive . and with all combinations of signs , see public online tables of deep inelastic scattering results. i want to make clear since the very beginning that it is not my intention to blame experimental or theoretical teams which have reported in the past asymmetric uncertainty , because we are all victims of a bad tradition in data analysis .at least , when asymmetric uncertainties have been given , there is some chance to correct the result , as described in sec .[ sec : thumb ] . since some asymmetric contributions to the global uncertainties almost unavoidably happen in complex experiments , i am more worried of collaborations that never arrive to final asymmetric uncertainties , because i must imagine they have symmetrised somehow the result but , i am afraid , without applying the proper shifts to the ` best value ' to take into account asymmetric contributions , as it will be discussed in the present paper . ] as firstly pointed out in ref . and discussed in a simpler but more comprehensive way in ref . , this practice is far from being acceptable and , indeed , could bias the believed value of important physics quantities .the purpose of the present paper is , summarizing and somewhat completing the work done in the above references , to remind where asymmetric uncertainty stem from and to show why , as they are usually treated , they bias the value of physical quantities , either in the published result itself or in subsequent analyses .once the problems are spotted , the remedy is straightforward , at least within the bayesian framework ( see e.g. , or and for recent reviews ) .in fact the bayesian approach is conceptually based on the intuitive idea of probability , and formally grounded on the basic rules of probability ( what are usually known as the probability ` axioms ' and the ` conditional probability definition ' ) plus logic . within this frameworkmany methods of ` conventional ' statistics are reobtained , as approximations of general solutions , under well stated conditions of validity . instead , in the conventional , frequentistic approach _ad hoc _ formulae , prescriptions and un - needed principles are used , often without understanding what is behind these methods before a ` principle ' there is nothing !the proposed bayesian solutions to cure the troubles produced by the usual treatment of asymmetric uncertainties is to step up from approximated methods to the more general ones ( see e.g. ref . , in particular the top down approximation diagram of fig .2.2 ) . in this paperwe shall see , for example , how and minus log - likelihood fit ` rules ' can be derived from the bayesian inference formulae as approximated methods and what to do when the underlying conditions do not hold .we shall encounter a similar situation regarding standard formulae to propagate uncertainty .some of the issues addressed here and in refs . and have been recently brought to our attention by roger barlow , who proposes frequentistic ways out .michael schmelling had also addressed questions related to ` asymmetric errors ' , particularly related to the issue of weighted averages .the reader is encouraged to read also these references to form his / her idea about the spotted problems and the proposed solutions . in sec .[ sec : propagation ] the issue of propagation of uncertainty is briefly reviewed at an elementary level ( just focusing on the sum of uncertain independent variables i.e. 
no correlations considered ) though taking into account asymmetry in probability density functions ( p.d.f . ) of the _ input _ quantities . in this way we understand what ` might have been done ' ( we are rarely in the positions to exactly know `` what has been done '' ) by the authors who publish asymmetric results andwhat is the danger of improper use of such a published ` best value ' _ as is _ in subsequent analyses .then , sec .[ sec : sources ] we shall see in where asymmetric uncertainties stem from and what to do in order to overcome their potential troubles . this will be done in an exact way and , whenever is possible , in an approximated way . some rules of thumb to roughly recover sensible probabilistic quantities ( expected value and standard deviation ) from results published with asymmetric uncertainties will be given in sec .[ sec : thumb ] .finally , some conclusions will be drawn .determining the value of a physics quantity is seldom an end in itself . in most casesthe result is used , together with other experimental and theoretical quantities , to calculate the value of other quantities of interest . as it is well understood , uncertainty on the value of each ingredientis propagated into uncertainty on the final result .if uncertainty is quantified by probability , as it is commonly done explicitly or implicitly in physics , the propagation of uncertainty is performed using rules based on probability theory .if we indicate by the set ( ` vector ' ) of input quantities and by the final quantity , given by the function of the input quantities , the most general propagation formula ( see e.g. ) is given by ( we stick to continuous variables ) : \cdot f(\mvec x)\ , \mbox{d}\mvec x\ , , \label{eq : prop_general}\ ] ] where is the p.d.f .of , stands for the joint p.d.f . of and the dirac delta ( note the use of capital letters to name variables and small letters to indicate the values that variables may assume ) .the exact evaluation of eq .( [ eq : prop_general ] ) is often challenging , but , as discussed in ref . , this formula has a nice simple interpretation that makes its monte carlo implementation conceptually easy .as it is also well known , often there is no need to go through the analytic , numerical or monte carlo evaluation of eq.([eq : prop_general ] ) , since linearization of around the expected value of ( e[ ] ) makes the calculation of expected value and variance of very easy , using the well known standard propagation formulae , that for uncorrelated input quantities are & \approx & y(\mbox{e}[\mvec x ] ) \label{eq : prop_approx_e } \\\sigma^2(y ) & \approx & \sum_i \left(\left.\frac{\partial y}{\partial x_i } \right|_{\mbox{e}[\mvec x]}\right)^2\ , \sigma^2(x_i)\ , .\label{eq : prop_approx_sigma}\end{aligned}\ ] ] as far as the shape of , a gaussian one is usually assumed , as a result of the central limit theorem .holding this assumptions , ] gives the ` best value ' , and probability intervals , upper / lower limits and so on can be easily calculated .in particular , within the gaussian approximation , the most believable value ( _ mode _ ) , the barycenter of the p.d.f .( _ expected value _ ) and the value that separates two adjacent 50% probability intervals ( _ median _ ) coincide . 
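The Monte Carlo reading of the general propagation formula is easy to state in code: sample the input quantities from their (joint) p.d.f., push every sample through the function y(x), and take the resulting set of values as a numerical representation of f(y), from which expectation, standard deviation, median or any probability interval follow. The sketch below assumes independent inputs and uses an arbitrary skewed-plus-Gaussian input model chosen purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def propagate_mc(y, samplers, n=200_000):
        """Monte Carlo propagation of uncertainty.

        y        -- function of the input quantities
        samplers -- list of functions, each returning n samples of one input
        Returns n samples of Y.
        """
        xs = [draw(n) for draw in samplers]
        return y(*xs)

    # illustrative inputs: one asymmetric (gamma) and one Gaussian quantity
    samples = propagate_mc(
        lambda x1, x2: x1 + x2,
        [lambda n: rng.gamma(shape=2.0, scale=1.0, size=n),
         lambda n: rng.normal(loc=0.0, scale=0.5, size=n)],
    )
    print(samples.mean(), samples.std(), np.median(samples))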
if is asymmetric this is not any longer true and one needs then to clarify what ` best value ' means , which could be one of the above three _ position parameters _ , or something else ( in the bayesian approach ` best value ' stands for expected value , unless differently specified ) . anyhow , gaussian approximation is not the main issue here and , in most real applications , characterized by several contributions to the combined uncertainty about , this approximation is a reasonable one , even when some of the input quantities individually contribute asymmetrically .my concerns in this paper are more related to the evaluation of ] is more sensitive than on the exact shape of or curve . equation ( [ eq : shift_chi2 ] ) has to be taken only to get an idea of the order of magnitude of the effect .for example , in the case depicted in fig [ fig : asymmetric_chi2 ] the shift is 80% of . ] the remarks about misuse of and rules can be extended to cases where several parameters are involved .i do not want to go into details ( in the bayesian approach there is nothing deeper than studying or in function of several parameters .. ] ) , but i just want to get the reader worried about the meaning of contour plots of the kind shown in fig .[ fig : spots ] .another source of asymmetric uncertainties is nonlinear dependence of the output quantity on some of the input in a region a few standard deviations around .this problem has been studied with great detail in ref . , also taking into account correlations on input and output quantities , and somewhat summarized in ref . .let us recall here only the most relevant outcomes , in the simplest case of only one output quantity and neglecting correlations .+ & + & + figure [ fig : quad ] shows a non linear dependence between and and how a gaussian distribution has been distorted by the transformation [ has been evaluated analytically using eq.([eq : prop_general ] ) ] . as a result of the nonlinear transformation , mode , mean ,median and standard deviation are transformed in non trivial ways ( in the example of fig .[ fig : quad ] mode moves left and expected value right ) . in the general casethe complete calculations should be performed , either analytically , or numerically or by monte carlo .fortunately , as it has been shown in ref . , second order expansion is often enough to take into account small deviations from linearity .the resulting formulae are still compact and depend on location and shape parameters of the initial distributions .second order propagation formulae depend on first and second derivatives . in practical cases ( especially as far as the contribution from systematic effectsare concerned ) the derivatives are obtained numerically as } & \approx & \frac{1}{2 } \left(\frac{\delta _ + } { \sigma(x)}+ \frac{\delta _ -}{\sigma(x)}\right ) = \frac{\delta _+ + \delta _ -}{2\,\sigma(x)}\ , , \\ \left.\frac{\partial^2 y}{\partial x^2}\right|_{\mbox{\small e}[x ] } & \approx & \frac{1}{\sigma(x)}\ , \left(\frac{\delta _ + } { \sigma(x)}-\frac{\delta _ -}{\sigma(x)}\right ) = \frac{\delta _ + -\delta _ -}{\sigma^2(x)}\,,\end{aligned}\ ] ] where and now stand for the left and right deviations of when the _ input variable varies by one standard deviation _ around order propagation formulae are conveniently given in ref . in terms of the deviations and are given by } \ , \sigma^2(x ) \\\overline{\delta } & = & \left.\frac{\partial y } { \partial x}\right|_{\mbox{\small e}[x ] } \ , \sigma(x)\,.\end{aligned}\ ] ] ] . 
for that depends only on a single input get : ) + \delta\ , , \label{eq : ey_nonlinear } \\\sigma^2(y ) & \approx & \overline{\delta}^2 + 2\,\overline{\delta}\cdot\delta\cdot s(x)+ \delta^2\cdot\left[{\cal k}(x)-1\right]\ , , \label{eq : sig_nonlinear}\end{aligned}\ ] ] where is the semi - difference of the two deviations and is their average : while and stand for skewness and kurtosis of the input variable. we should not forget that the input quantities could have non trivial shapes . since skewness and kurtosis are related to 3rd and 4th moment of the distribution , eq .( [ eq : sig_nonlinear ] ) makes use up to the 4th moment and is definitely better that the usual propagation formula , that uses only second moments . in ref . approximated formulae are given also for skewness and kurtosis of the output variable , from which it is possible to reconstruct taking into account up to 4-th order moment of the distribution . ] for many input quantities we have ) + \sum_i\delta_i\ , , \label{eq : ey_nonlinear_many } \\\sigma^2(y ) & \approx & \sum_i\sigma^2_{x_i}(y)\ , , \label{eq : sig_nonlinear_many}\end{aligned}\ ] ] where stands for each individual contribution to eq .( [ eq : sig_nonlinear ] ) .the expression of the variance gets simplified when all input quantities are gaussian ( a gaussian has skewness equal 0 and kurtosis equal 3 ) : and , as long as are much smaller that , we get the convenient _ approximated formulae _ ) + \sum_i \delta_i\ , , \label{eq : nl_simple_e } \\\sigma^2(y ) & \approx & \sum_i \overline{\delta}^2_i\,\ , \label{eq : nl_simple}\end{aligned}\ ] ] valid also for other symmetric input p.d.f.s ( the kurtosis is about 2 to 3 in typical distribution and its exact value is irrelevant if the condition holds ). the resulting practical rules ( [ eq : nl_simple_e])([eq : nl_simple ] ) are quite simple : * the _ expected value _ of _ is shifted _ by the sum of the individual shifts , each given by half of the semi - difference of the deviations ; * each input quantity contributes ( in quadrature ) to the combined standard uncertainty with a term which is approximately the average between the deviations .moreover , if there are many contributions to the uncertainty , the final uncertainty will be symmetric and approximately gaussian , thanks to the central limit theorem . finally , and this is often the case that we see in publications , asymmetric uncertainty results from systematic effects .the bayesian approach offers a natural and clear way to treat systematics andi smile at the many attempts of ` squaring the circle ' using frequentistic prescriptions simply because probabilistic concepts are consistently applied to all _ influence quantities _ that can have an effect on the quantity of interest and whose value is not precisely known .therefore we can treat them using probabilistic methods .this was also recognized by metrologic organizations .indeed , there is no need to treat systematic effects in a special way .they are treated as any of the many input quantities discussed in sec .[ ss : nonlinear ] , and , in fact , their asymmetric contributions come frequently from their nonlinear influence on the quantity of interest . 
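The two practical rules just given translate directly into a few lines of code: every input contributes a shift equal to half the semi-difference of its deviations, and a symmetrized term equal to the average of its deviations, combined in quadrature. The function below is a sketch of these approximate formulas, valid when the shifts are small compared with the widths; the numbers in the usage line are invented.

    import math

    def combine_asymmetric(nominal, contributions):
        """Approximate expected value and standard deviation from asymmetric deviations.

        nominal       -- the value at which the asymmetric deviations are quoted
        contributions -- list of (delta_plus, delta_minus) pairs, one per input
        Each pair shifts the expectation by (delta_plus - delta_minus) / 2 and adds
        ((delta_plus + delta_minus) / 2) ** 2 to the variance.
        """
        shift = sum((dp - dm) / 2.0 for dp, dm in contributions)
        var = sum(((dp + dm) / 2.0) ** 2 for dp, dm in contributions)
        return nominal + shift, math.sqrt(var)

    # invented example with two asymmetric contributions around a nominal value
    e_y, sigma_y = combine_asymmetric(5.0, [(0.9, 0.5), (0.7, 0.4)])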
the only word of caution , on which i would like to insist , is to use expected value and standard deviation for each systematic effect .in fact , sometimes the uncertainty about the value of the influence quantities that contribute to systematics is intrinsically asymmetric .i also would like to comment shortly on results where either of the is negative , for example ( see e.g. ref . to have an idea of the variety of signs of ) .this means that that the we are in proximity of a minimum ( or a maximum if were negative ) of the function .it can be shown that eqs .( [ eq : ey_nonlinear])-([eq : sig_nonlinear ] ) hold for this case too . by around its expected value ] . ] for further details about meaning and treatment of uncertainties due systematics and their relations to iso _ type b _uncertainties , see refs . and .having understood what one should have done to obtain expected value and standard deviation in the situations in which people are used to report asymmetric uncertainties , we might attempt to recover those quantities from the published result .it is possible to do it exactly only if we know the detailed contributions to the uncertainty , namely the or log - likelihood functions of the so called ` statistical terms ' and the pairs , together to the probabilistic model , for each ` systematic term ' . however , these pieces of information are usually unavailable .but we can still make some _ guesses _ , based on some rough assumptions , lacking other information : * asymmetric uncertainties in the ` statistical part ' are due to asymmetric or log - likelihood : apply corrections given by eqs .( [ eq : sigma_delta_chi2])([eq : shift_chi2 ] ) ; * asymmetric uncertainties in the ` systematic part ' comes from nonlinear propagation : apply corrections given by eqs .( [ eq : nl_simple_e])([eq : nl_simple ] ) . as a numerical example , imagine we read the following result ( in arbitrary units ) : ( that somebody would summary as ! ) .the only certainty we have , seeing two asymmetric uncertainties with the same sign of skewness , is that _ the result is definitively biased_. let us try to make our estimate of the bias and calculate the corrected result ( that , not withstanding all uncertainties about uncertainties , will be closer to the ` truth ' than the published one ) : 1 .the first contribution gives roughly [ see .( [ eq : sigma_delta_chi2])([eq : shift_chi2 ] ) ] : 2 . for the second contribution we have[ see . eqs .( [ eq : delta_m])([eq : delta_m ] ) , ( [ eq : nl_simple_e])([eq : nl_simple ] ) ] : _ our _ guessed best result would then become notation . in this examplewe would have .personally , i do not think this is a very important issue as long as we know what the quantity means .anyhow , i understand the iso rational , and perhaps the proposed notation could help to make a break with the ` confidence intervals ' . ] ( the exceeding number of digits in the intermediate steps are just to make numerical comparison with the correct result that will be given in a while . ) if we had the chance to learn that the result of eq .( [ eq : esempio_bad_y ] ) was due to the asymmetric fit of fig .[ fig : asymmetric_chi2 ] plus two systematic corrections , each described by the triangular distribution of fig .[ fig:2triang ] , then we could calculate expectation and variance exactly : i.e. , quite different from eq .( [ eq : esempio_bad_y ] ) and close to the result corrected by rule of thumb formulae. 
indeed , knowing exactly the ingredients , we can evaluate from eq.([eq : prop_general ] ) as although by monte carlo .the result is given in fig .[ fig : skewed+2tr ] , from which we can evaluate a mean value of 4.54 and a standard deviation of 1.65 in perfect agreement with the figures given in eqs .( [ eq : exact_corr_e])([eq : exact_corr_sigma ] ) . of fig .[ fig : asymmetric_chi2 ] is the rounded value of 1.54 .replacing 1.5 by 1.54 in eq .( [ eq : exact_corr_sigma ] ) , we get exactly the monte carlo value of 1.65 . ] as we can see from the figure , also those who like to think at best value in term of most probable value have to realize once more that _the most probable value of a sum is not necessarily equal to the sum of most probable values of the addends _ ( and analogous statements for all combinations of uncertainties ) . in the distribution of fig .[ fig : skewed+2tr ] , the mode of the distribution is around 5 . [ note that expected value and variance are equal to those given by eqs .( [ eq : exact_corr_e])([eq : exact_corr_sigma ] , since in the case of a linear combination they can be obtained exactly . ]other statistical quantities that can be extracted by the distribution are the median , equal to 4.67 , and some quantiles ( values at which the cumulative distribution reaches a given percent of the maximum the median being the 50% quantile ) .interesting quantiles are the 15.85% , 25% , 75% and 84.15% , for which the monte carlo gives the following values of : 2.88 , 3.49 , 5.72 and 6.18 . from these valueswe can calculate the _ central _ 50% and 68.3% intervals , ] is almost unavoidable ( i have known physicists convinced and who even taught it ! that the standard deviation only ` makes sense for the gaussian ' and that it was defined via the ` 68% rule ' ) .for this reason , recently i have started to appreciate thinking in terms of 50% probability intervals , also because they force people to reason in terms of better perceived fifty - to - fifty bets .i find these kind of bets very enlighting to show why practically all standard ways ( including bayesian ones ! ) fail to report upper / lower _ confidence limits _ in _ frontier case situations_ characterized by _open likelihoods _ ( see chapter 12 in ref. ) .i like to ask `` please use your method and give me a 50% c.l .upper / lower limit '' , and then , when i have got it , `` are you really 50% confident that the value is below that limit and 50% confident that it is above it? would you equally bet on either side of that limit ? '' . and the supporters of ` objective ' methods are immediately at loss .( at least those who use bayesian formulae realize that there must be some problem with the choice of priors . ) ] which are ] , respectively . again , the information provided by eq .( [ eq : esempio_bad_y ] ) is far from any reasonable way to provide the uncertainty about , given the information on each component . besides the lucky case of this numerical example ( which was not constructed on purpose , but just recycling some material from ref . ) , it seems reasonable that even results roughly corrected by rule of thumb formulae are already better than those published directly with asymmetric result .was , without further information , we could still try to apply some shift to the result , obtaining or depending on some guesses about the source of the asymmetry . in any case , either results are better than ! 
] but the accurate analysis can only be done by the authors who know the details of the individual contribution to the uncertainty .asymmetric uncertainties do exist and there is _ no way to remove them artificially_. if they are not properly treated , i.e. using prescriptions that do not have a theoretical ground but are more or less rooted in the physics community , the published result is biased . instead ,if they are properly treated using probability theory , in most cases of interest the final result is practically symmetric and approximately gaussian , with expected value and standard deviations which take into account the several shifts due to individual asymmetric contributions .note that some of the simplified methods to make statistical analyses had a _raison dtre _ many years ago , when the computation was a serious limitation .now it is not any longer a problem to evaluate , analytically or numerically , integrals of the kind of those appearing e.g. in eqs.([eq : prop_general ] ) , ( [ eq : theta_e ] ) and ( [ eq : theta_sigma ] ) .in the case the final uncertainty remains asymmetric , the authors should provide detailed information about the ` shape of the uncertainty ' , giving also most probable value , probability intervals , and so on .but the best estimate of the _ expected value and standard deviation should be always given _( see also the _ iso guide _ ) . to conclude, i would like to leave the final word to my preferred quotation with whom i like to end seminars and courses on probability theory applied to the evaluation and the expression of uncertainty in measurements : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` although this guide provides a framework for assessing uncertainty , it can not substitute for critical thinking , intellectual honesty , and professional skill .the evaluation of uncertainty is neither a routine task nor a purely mathematical one ; it depends on detailed knowledge of the nature of the measurand and of the measurement .the quality and utility of the uncertainty quoted for the result of a measurement therefore ultimately depend on the understanding , critical analysis , and integrity of those who contribute to the assignment of its value.''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
h1 collaboration , www-h1.desy.de/psfiles/figures/d97-042.tab_correlations.txt ; zeus collaboration , www-zeus.desy.de/physics/sfew/public/sfew_results/published/nc-9900/table-2.dat .
g. d'agostini and m. raso , cern - ep/2000 - 026 , february 2000 [ hep - ex/0002056 ] .
g. d'agostini , _ bayesian reasoning in data analysis : a critical introduction _ , world scientific , 2003 ( book info , including list of contents and hypertexted bibliography , can be found at www.roma1.infn.it//wspc/ ) .
g. d'agostini , in proc . xviii international workshop on maximum entropy and bayesian methods , garching ( germany ) , july 1998 , v. dose et al . ( eds . ) , kluwer academic publishers , dordrecht , 1999 , pages 157 - 170 [ physics/9811046 ] .
the issue of asymmetric uncertainties resulting from fits , nonlinear propagation and systematic effects is reviewed . it is shown that , in all cases , whenever a published result is given with asymmetric uncertainties , the _ value _ of the physical quantity of interest _ is biased _ with respect to what would be obtained by making the best use of all the experimental and theoretical information that contributes to the evaluation of the combined uncertainty . the probabilistic solution to the problem is provided both in exact and in approximated forms .
is limited by noise in channels , but error correction methods can efficiently offset this restriction in both classical and quantum cases . at a simple level , multiple copies of the information can be transmitted , and a majority rule can be applied to discern the correct code , but such coding is neither practical nor efficient .sparse graph coding , such as gallager s low - density parity - check ( ldpc ) codes , offers an efficient alternative that approaches the shannon information limit .fortunately quantum coding and decoding strategies can be constructed from their classical counterparts , but unfortunately this mapping from classical to quantum coding can be problematic due to the requirement that quantum codes satisfy the duality - containing condition .moreover , due to increased challenges posed by these quantum codes , performance improvement requires further progress in the proposed decoding algorithm .the entanglement - assisted ( ea ) stabilizer formalism adds error - free entangled bits as a consumable resource for performing quantum error correction .this ea approach overcomes the duality - containing requirement and thus offers a rich lode of quantum error correction protocols inspired by classical protocols . using the ea approach , modern codes for classical channels , such as sparse codes ,can easily be imported as quantum error - correcting codes .our aim is to improve belief propagation ( bp ) decoding methods so that quantum coding is dramatically improved over existing techniques .specifically our numerical results presented here show that an improved bp method whose heuristical feedback strategies based on exploiting all accessible information from stabilizer measurements , yield a dramatically improved block error rate ( ber ) for any depolarizing channel .our methods should work for any pauli noise channel . for qubits ,a pauli channel is defined by the mapping with the -fold tensor product of single - qubit pauli operators .our interest is focused on memory - less channels wherein the error on each qubit is independent of the error on any other qubit .in particular we consider the depolarizing channel , which is the most - studied case : for fixed channel error probability , the error on qubit is given by with denoting the probability of no error occurring . in quantum settings , due to the inability to measure each and every qubit, syndrome - based decoding is typically chosen .consequently , decoders using pauli channels are generally considered to be hard - decision decoders . 
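For readers who want to experiment with the channel model, a memoryless depolarizing channel on n qubits can be simulated by drawing one Pauli error per qubit. The sketch below assumes the standard parameterization in which X, Y and Z each occur with probability p/3 and the identity with probability 1 - p; it illustrates only the noise model, not the decoder.

    import numpy as np

    def sample_depolarizing_error(n_qubits, p, rng=np.random.default_rng()):
        """Draw an n-qubit Pauli error from a memoryless depolarizing channel.

        Each qubit independently suffers I with probability 1 - p and
        X, Y or Z with probability p / 3 each.
        Returns a string over {'I', 'X', 'Y', 'Z'}.
        """
        paulis = np.array(["I", "X", "Y", "Z"])
        probs = [1.0 - p, p / 3.0, p / 3.0, p / 3.0]
        return "".join(rng.choice(paulis, size=n_qubits, p=probs))

    # e.g. sample_depolarizing_error(7, 0.1) might return 'IIXIIZI'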
in other words , in quantum decoding ,the conventional soft - decision techniques are not applicable when the channel is in the pauli channel model .generally , sparse quantum codes are decoded by using syndrome - based bp decoding algorithms which automatically imply hard - decision decoding .though there is an equivalence between syndrome - based decoding and a posteriori probability decoding ( signal - based decoding ) under this setting , the syndrome - based decoding results in a serious drawback to bp decoder : the symmetric degeneracy error .fortunately , poulin and chung ( pc08 ) propose a solution to the symmetric degeneracy error by using the random perturbation method .motivated by the soft - decision techniques used in classical settings , which provide extra reliable information on the message nodes thereby yielding a better error correcting capacity , we develop a new heuristical feedback adjustment strategy for the standard bp decoder .when used in decoding sparse quantum codes , our method can on one hand solve the symmetric degeneracy error problem , and on the other hand can provide more useful information to the message nodes .the difference between pc08 and our approach is that , in pc08 , they feed back only the syndrome of the decoder output to adjust prior error probability distributions for received qubits .these adjusted distributions are then fed back into the decoder .we significantly improve this protocol by feeding back not just the syndrome but also the values of the frustrated checks on individual qubits of the code and the channel model ; accordingly we introduce a new adjustment strategy .specifically , our approach , which is based not only on syndromes but also on frustrated checks obtained from full stabilizer measurements and the channel model , yields a better ber for the case of depolarizing quantum channels .we provide a detailed description of our basic bp decoder for decoding sparse quantum codes .the basic bp decoder introduced here is inspired by the strategy used for decoding sparse classical quaternary codes under the bp algorithm . using this strategy, we can decode sparse quantum codes directly regardless of whether they arise from classical binary codes or not and regardless of whether they are calderbank - shor - steane ( css ) construction codes or not .in this section , we briefly reprise the essential elements of bp iterative decoding for sparse quantum codes . in subsec .[ subsec : bpdecoding ] we discuss the key idea of standard bp iterative algorithms .then we compare decoding of classical codes vs quantum codes , and introduce the standard bp decoding for quantum codes . 
finally in subsec .[ subsec : decodesparse ] we show how to decode sparse quantum codes in based on the standard bp algorithm .consider a -bit message that is encoded into an -bit codeword , which is then transmitted through a noisy channel .the received message is an error - prone vector over the output alphabet , and there is no guarantee that decoding will reveal the original codeword .however , the codeword can be guessed with a high probability of being correct by maximizing the probability for this codeword based on the observed output vector .unfortunately for a linear block code , encoding information bits into bits allows possible -bit codewords , and calculating conditional probabilities for individual codewords is too expensive in practice .bp algorithms overcome this inefficiency for sparse codes .the strategy is to represent a linear - block classical error - correcting code ( cecc ) by a tanner graph comprising message nodes and check nodes corresponding respectively to received bits and check constraints .then an iterative algorithm recovers the values of received bits as follows . at each round, probabilities are passed from message nodes to check nodes and then from back to .the probabilities from to are computed based on the observed value of the message node and other probabilities passed from the neighboring check nodes to the respective message node .it turns out that , for decoding sparse codes , the bp algorithm can provide a reasonable trade - off between complexity and performance . for a linear block cecc, the code space can be viewed as the orthogonal projection space ( solution space ) of its check matrix . the sender alice transmits her message as a codeword that is encoded according to a specific check ( or generator ) matrix through the channel .when receiver bob obtains the channel output , the received vector may not be the solution vector of the check matrix .therefore bob needs to apply a smart algorithm to recover the codeword efficiently . asthe check matrix is sparse , bob employs the bp algorithm .first , bob measures each bit to obtain its posterior probability distribution .subsequently , he puts these probabilities into the bp decoder , and , based on the constraint that the codeword should be the orthogonal vector of the check matrix , he infers the original message .this procedure extends naturally to sparse quantum codes . in the quantum case ,the stabilizer formalism for a quantum error - correcting code ( qecc ) is useful .the code space within corresponds to the simultaneous eigenspace of all generators of an abelian subgroup alice transmits her message as a codeword , which propagates through the depolarizing quantum channel .due to channel errors , the received codeword may not be the simultaneous eigenspace of . bob measures to obtain the syndrome=0 ] and means . 
for two -qubit pauli operators such as and , we have the messages from check nodes to qubit nodes are denoted which is defined only up to a constant factor .the factor can be fixed by normalization .the messages from qubit nodes to check nodes are similarly denoted as where is the initial probability in the definition of the memoryless channel .then the beliefs are constructed by first initializing , then evaluating according to after the iteration procedure based on eqs .( [ eq : bpequation1 ] ) and ( [ eq : bpequation2 ] ) .in fact , eqs .( [ eq : bpequation1 ] ) and ( [ eq : bpequation2 ] ) define a sum - product iterative procedure for decoding sparse quantum codes .hence this algorithm is also called the sum - product algorithm ( spa ) , which is one of the most important algorithms based on bp . in order to clarify the bp decoding algorithm for sparse quantum codes, we now show how to implement eqs .( [ eq : bpequation1 ] ) and ( [ eq : bpequation2 ] ) in gf(4 ) .there is a convenient isomorphism between the pauli group generated by and the galois field generated by .the isomorphism is explained by the element identification and the operation identification _ _ multiplication__ _ and _ _commutativity__ inner product _ . for the one - qubit case, we have , &=0 \leftrightarrow { \rm tr}(\hat{p } \times \bar{\hat{q } } ) = 0 , \ ; \ { p , q \}=0 \leftrightarrow { \rm tr}(\hat{p } \times \bar{\hat{q } } ) = 1\end{aligned}\ ] ] the isomorphism is readily extended to the -qubit case : &=0 \leftrightarrow { \rm tr}(\bm{u}_p \cdot \bm{v}_q ) = 0,\ ; \ { p , q \}=0 \leftrightarrow { \rm tr}(\bm{u}_p\cdot \bm{v}_q ) = 1.\end{aligned}\ ] ] where ` ' used here ( between two vectors ) is a regular inner product .that is , for and , we have , the addition and multiplication rules of are shown in tables [ table : isomorphictable1 ] and [ table : isomorphictable2 ] , respectively ..addition of gf(4 ) [ cols="^,^,^,^,^",options="header " , ] we can import the strategy used for decoding sparse classical quaternary codes under bp to use for decoding sparse quantum codes .this adaptation to quantum codes is achieved by transforming the check criterion ( [ eq : bpequation1 ] ) from commutativity to trace inner product .for example , suppose .then the check criterion for this check node should be , which is equivalent to =0 ] as the probability for to take the value , with the entry of , and the mapped elements of and in gf(4 ) , see eq .( [ eq : isomorphic1 ] ) . ] and ] denoting an ea qecc that encodes qubits into qubits with the help of ancillary ebits , we construct a simple ea qecc for ] quaternary code with check matrix first we transform to then transform to a set of ( perhaps non - commuting ) generators next we transform into a canonical form by multiplying the third generator by the second generator and by multiplying the fourth generator by the first and second generator .thus , we obtain now , it is easy to check that the first generator anti - commutes with the second generator , and the last two generators commute with each other as well as commuting with the first two generators . according to the ea qecc formalism, this coding scheme needs one ebit to assist encoding .the extended commuting set of generators is for which is just the stabilizer of ] only depends on the left part of , which corresponds to the four transmitted qubits held by the sender .suppose the qubits of ] is designed to correct one error but not two errors .] 
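The commutation test underlying the check-node update can be made concrete by representing every single-qubit Pauli operator by an (x, z) bit pair, in which case the GF(4) trace inner product reduces to the symplectic form x1*z2 + z1*x2 (mod 2): two n-qubit Pauli operators anticommute exactly when this form, summed over the qubits, equals 1. The sketch below uses that representation to compute the syndrome of an error string against a list of stabilizer generators; the generators in the usage lines are invented for illustration and do not belong to any code discussed in the text.

    PAULI_BITS = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}

    def anticommutes(p, q):
        """1 if the n-qubit Paulis p and q anticommute, 0 if they commute."""
        total = 0
        for a, b in zip(p, q):
            xa, za = PAULI_BITS[a]
            xb, zb = PAULI_BITS[b]
            total ^= (xa & zb) ^ (za & xb)
        return total

    def syndrome(error, generators):
        """Measured syndrome: one bit per stabilizer generator."""
        return [anticommutes(error, g) for g in generators]

    # invented generators; an X error on the first qubit flips only the first check
    gens = ["ZZZZIII", "IZZIZZI"]
    print(syndrome("XIIIIII", gens))  # -> [1, 0]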
we now use the standard bp decoder with pc08 s random perturbation . by comparing the error syndrome with the syndrome of the output of the standard bp decoder , the probability distributions of the errors occurring on the qubits connected to the second , third or fourth checks can be reset . for all these cases , the decoder could not yield an appropriate recovery . here , as an example , we use pc08 s random perturbation strategy for the frustrated check and show the corresponding performance in fig . [ fig : syndrome - basedbp ] ( caption : but using the standard bp decoding algorithm replaced by pc08 and applying random perturbation strength to the prior for qubits , and ) . finally , we use our enhanced feedback bp decoding algorithm . by the same method , our feedback strategy can reset the probability distributions of the errors occurring on the qubits connected to the second , third or fourth checks . when the decoder chooses the fourth entry ( ) of the frustrated check and resets according to eq . ( [ eq : resetting2 ] ) , the correct decoding result arises in just a few iterations . we show the performance of our approach in fig . [ fig : stabilizer - basedbp ] ( caption : but using our enhanced feedback iterative decoding algorithm ; here we reset according to eq . ( [ eq : resetting2 ] ) , and in this case only three iterations were required to yield a valid output , which is exactly the error occurring on [ [ 4 , 1;1 ] ] during its transmission ) . in this example , pc08 s method could not help the standard bp decoder to yield a valid output ; however , our enhanced feedback bp decoding algorithm yields a valid output in just a few iterations . in fact , even if the error occurring on the first four qubits is not but , which has the same syndrome as , our decoding output can still recover the transmitted quantum state because , which is just the third generator of the extended stabilizer .
because it is hard to check whether when the number of generators of is large , we choose as the success criterion of the decoding result in our simulations .we have applied our enhanced feedback iterative bp decoding algorithm to a variety of sparse quantum codes , including conventional sparse quantum codes and ea sparse quantum codes over depolarizing channels . in each case , our improved bp decoder yields significantly lower ber over both the standard bp decoder and the bp decoder with pc08 s random perturbation . in the following subsections , we simulate decoding of sparse quantum codes with different code parameters ( including block lengths , rates , and row weight ) under the three decoders to demonstrate the superiority of our approach . conventional sparse quantum codes can be constructed from sparse classical dual - containing codes .one of the most successful dual - containing constructions is the so - called `` construction b '' , which is built as follows .first we take an cyclic matrix with row weight , and define .\ ] ] then we delete some rows from to obtain a matrix with rows . by construction , is dual - containing . a conventional sparse quantum code with length can thus be given according to css construction .our first example is based on this strategy .we first construct a cyclic binary ldpc code [ 63 , 37 ] with row weight based on finite geometries .this code has a cyclic check matrix with independent rows and redundant rows .hence , we can construct a conventional sparse quantum code with , , and .we refer to this ] discussed in sec .[ sec : simpleexample ] .these three codes have different net rate ( including positive , zero and negative net rates ) , different block lengths and different row weight distributions .the first ea sparse quantum code we consider is ] with girth .the classical code ] . the code ] , which is constructed from a binary regular sparse classical code ] , which has a rate , row weight and column weight , was constructed by mackay .as there are pairs of anti - commuting generators and commuting generators in , we obtain an ea sparse quantum code ] , which is constructed from ] code , which has a rate and row weight , was also constructed by mackay . following the construction procedure mentioned in sec .[ sec : simpleexample ] , the number of anti - commuting pairs in is ; thus , we obtain an ea sparse quantum code $ ] . herewe take this code as our third example of ea sparse quantum codes , this time with a negative net rate we name it ea-3 " and show the performance of the three bp decoders when applied to ea-3 over depolarizing channels in fig .[ fig : eaexample3 ] .we evaluate and compare the time consumed by each decoder according to the average number of iterations consumed by one codeword , with the total number of iterations to finish decoding blocks . in our simulation . in figs . 8 and 9 we show the time efficiency ( complexity ) comparison for the three decoders applied to conventional , ea-1 , ea-2 and ea-3 .evidently our algorithm has a smaller anoi than pc08 , which shows improved decoding efficiency when compared to pc08 . from figs .[ fig : conventionalexample]-[fig : averageiteration2 ] , it is evident that our strategy yields a significantly lower ber with a lower anoi for both conventional sparse quantum codes and ea sparse quantum codes . 
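returning to the dual - containing construction b used for the conventional code above , the following toy sketch ( our own ; the length-7 first row and the deleted rows are illustrative and far smaller than the [ 63 , 37 ] matrix used in the paper ) builds h0 = [ c , c^t ] from a cyclic matrix c and checks the self - orthogonality h h^t = 0 ( mod 2 ) that the css construction requires .

import numpy as np

def circulant(first_row):
    # n x n binary circulant matrix whose rows are cyclic shifts of first_row
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)]) % 2

def construction_b(first_row, rows_to_delete=()):
    # "construction b": h0 = [c, c^t] for a cyclic c, then delete some rows
    C = circulant(first_row)
    H0 = np.hstack([C, C.T]) % 2
    keep = [i for i in range(H0.shape[0]) if i not in set(rows_to_delete)]
    return H0[keep, :]

# toy parameters, chosen only for illustration: a length-7 cyclic first row of weight 3
H = construction_b([1, 1, 0, 1, 0, 0, 0], rows_to_delete=(5, 6))
# dual-containing check needed by the css construction: rows are pairwise orthogonal mod 2,
# because c and c^t commute, so c c^t + c^t c = 2 c c^t = 0 mod 2
assert not np.any(H.dot(H.T) % 2)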
as an example of the effectiveness of our new decoder ,consider the case , fig .4 shows that the enhanced - feedback bp decoder yields a db gain over pc08 and an db gain over the standard bp decoder while reducing the anoi from 3.1 for pc08 to 2.2 for our enhanced - feedback bp decoder as seen in fig .8 . for another point of view , in fig .[ fig : eaexample1 ] , we can see that to guarantee ea-1 having a decoding error below , the standard bp decoder allows and pc08 allows .this value is increased dramatically to using our enhanced feedback bp decoder with a similar anoi performance as pc08 . as for the decoding of ea-2 ( fig .[ fig : eaexample2 ] ) , when , the ber performance of pc08 s method is times better than that of the standard bp decoder with times more anoi , and the ber performance of our approach is approximately times better than that of the standard bp decoder with only times more anoi .notably , interesting results appear in the decoding of ea-3 . from fig . [fig : eaexample3 ] and [ fig : averageiteration2 ] we see that pc08 could not significantly improve the standard bp decoder even with high anoi , yet our approach does improve the performance of ea-3 significantly with almost the same anoi as the standard bp decoder excepting the last two nodes .our algorithm is capable of turning the detectable errors into new outputs whose syndromes are identical with corresponding observed syndromes . we refer to these outputs as _ valid _ outputs can be classified into three cases : ( 1 ) ; ( 2 ) but ; ( 3 ) .the first two cases will yield correct outputs that can recover the sent quantum states successfully .however , it is hard to check whether which makes distinguishing cases ( 2 ) and ( 3 ) difficult . therefore , we choose ( 1 ) as the success criterion for the decoding result in our numerical simulation .some detectable errors are turned into undetectable errors by our decoder . in order to show how often our algorithm results in an undetectable error, we classify all the results yielded by our algorithm ( not just valid outputs ) when the original bp protocol fails into three cases : ( 1 ) ; cases ( 2 ) or ( 3 ) ; remain as detectable errors . in fig .[ fig : conventional - proportion ] , we take conventional as an example to depict the number of blocks that fall into each whereas the original bp protocol fails 50 times . for our enhanced bp decoder when applied to conventional while the original bp decoder fails 50 times .the maximum number of iterations is .the maximal number of iterations between each perturbation is .the maximal number of entries traversed by enhanced feedback decoder round down to one fifth of the code length.,width=283 ] .[ fig : conventional - proportion ] it is evident to see from fig .[ fig : conventional - proportion ] that most of the detectable errors will be turned to correct output but not undetectable errors by using our enhanced feedback bp decoder .it is evident from fig .[ fig : conventional - proportion ] that most of the detected errors will be turned to correct output but not un - detected errors by using our enhanced feedback bp decoder .in fact we can take the percentage of detected ( pod ) errors as an important parameter to appraise the decoding capacity of the decoder , since one detected error indicates a failure of the decoder and undetected errors can be viewed as fundamental limitations of the code in some sense . herewe take the ea-1 as an example and suppose this time . 
in this caseour simulation reveals that standard bp decoder fails times when it has simulated blocks , and the number of detected error is so pod .similarly pc08 fails times when it has simulated blocks , and the number of detected errors is also so again pod .on the other hand , our enhanced feedback bp decoder fails times for a simulation of blocks , and only errors are attributed to detected errors so pod= .the pod in our case is much less than for the other two bp decoders .we have developed an enhanced feedback bp decoding algorithm whose feedback adjustment strategy is not based solely on the syndrome but also on the channel model and on the individual values of the entries of the frustrated checks .our approach retains the capability of breaking the symmetric degeneracy while also feeding back extra useful information to the bp decoder .therefore , our feedback adjustment strategy yields a better error correcting capacity with relative less iterations compared to the proposed decoding algorithms for sparse quantum codes . we have considered three cases : the standard bp decoder adapted from classical decoding to decode quantum codes , the superior bp decoder with poulin and chung s random perturbation , and the bp decoder with our new feedback adjustment strategy introduced here .we used the representation of quantum codes to construct our feedback - based algorithm and exploit not just the syndrome but all the measurement resulting from stabilizer measurements .then we used the block error rate ( ber ) and average number of iterations ( anoi ) to demonstrate a dramatic better error correcting capacity improvement with relative less iterations .this result was achieved using new feedback adjustment strategy vs the two alternatives : standard bp decoder and the bp decoder with random perturbation .as shown in section v , the net rate for ea-2 and ea-3 are 0 and respectively .as pointed out by brun , devetak and hsieh , in general , net rates for ea quantum codes can be positive , negative , or zero .the zero rate does not mean that no qubits are transmitted by this code !rather , it implies that a number of bits of entanglement is needed that is equal to the number of bits transmitted . "compared to quantum codes with high net rates , ea quantum codes with zero or negative net rates employ many more physical qubits including a great number of bits of entanglement that being assumed to be error - free , thus these codes can tolerate stronger noise .therefore , it is not surprising that ea-2 and ea-3 greatly increase the cross - over probability of depolarizing channel when compared to quantum codes with similar code length and higher net rates . for comparison , based on construction b " ( same as the construction of conventional ) , we construct a quantum code with a similar code length as ea-2 but having a relative higher net rate .we name this example as ex which has a code length of 800 and a net rate . in fig .[ fig : ex ] , we see that ex has comparable ber performance to other examples presented in existing literatures which have similar parameters as ex but significantly worse than ea-2 . for the standard bp decoder and for our algorithm when applied to ex .the maximum number of iterations is .both the number of iterations between each perturbation and the maximal entries traversed by our decoder are .,title="fig:",width=283 ] +the authors appreciate critical comments by d. 
poulin on an early version of the manuscript .yjw appreciates financial support by the china scholarship council .bcs received financial support from __ i__core , and bmb and xmw received financial support from the 973 project of china under grant no .2010cb328300 , the nsfc - guangdong jointly funded project of china under grant no .u0635003 , and the 111 program of china under grant no .bcs is supported by a cifar fellowship. 1 r. g. gallager , `` low density parity - check codes , '' ire trans .theory , vol .it-8 , pp .21 - 28 , jan .d. j. c. mackay , `` good error - correcting codes based on very sparse matrices , '' _ ieee trans .inform . theory _399 - 431 , march 1999 .w. shor , `` scheme for reducing decoherence in quantum computer memory , '' _ phys . rev .a _ , vol .r2493-r2496 , 1995 .a. m. steane , `` error - correcting codes in quantum theory , '' _ phys ._ , vol .793 - 797 , 1996 .d. gottesman , `` stabilizer codes and quantum error correction , '' ph.d .thesis , california institute of technology , pasadena , ca 1997 ; d. gottesman .( 2007 ) lecture notes : quantum error correction .[ online ] .available : http://www.perimeterinstitute.ca/personal/dgottesman/qecc2007 c. h. bennett , d. p. divincenzo , j. a. smolin and w. k. wootters , `` mixed state entanglement and quantum error correction , '' _ phys .a _ , vol .3824 - 3851 , 1996 .e. knill and r. laflamme , `` a theory of quantum error - correcting codes , '' _ physa. _ , vol .900 - 911 , 1997 .a. calderbank , e. rains , p. shor , and n. sloane , `` quantum error correction via codes over gf(4 ) , '' _ ieee trans .inform . theory _1369 - 1387 , july 1998 .d. j. c. mackay , g. j. mitchison , and p. l. mcfadden , `` sparse - graph codes for quantum error correction , '' _ ieee trans .inform . theory _2315 - 2330 , oct .d. poulin and y. chung , `` on the iterative decoding of sparse quantum codes , '' _ quantum inform ._ , vol . 8 , pp .987 - 1000 , 2008 .d. poulin , j .-p tillich , and h. ollivier , `` quantum serial turbo codes , '' _ ieee trans .inform . theory _6 , pp . 2776 - 2798 , june 2009 . t. a. brun , i. devetak , and m .- h .hsieh , `` correcting quantum errors with entanglement , '' _ science _. 314 , pp . 436 - 439 , october 2006 . m .- h . hsieh , t. a. brun , and i. devetak , `` entanglement - assisted quantum quasi - cyclic low - density parity - check codes , '' _ phys .a _ , vol .79 , 032340 , 2009 .wang , b .-bai , w .- b .zhao , and x .- m . wang , `` entanglement - assisted quantum error - correcting codes constructed from irregular repeat accumulate codes '' _ int .j. quantum inf ._ , vol . 7 , no . 7 , pp . 1373 - 1389 , 2009 . j. preskill , _ lecture notes for physics 219 : quantum computation ( 2001)_. [ online ] .available : www.theory.caltech.edu/people/preskill/ph219 .t. k. moon , _ error correction coding : mathematical methods and algorithms ._ new jersey , u.s.a .: john wiley sons , inc . , 2005 .t. camara , h. ollivier , and j .-tillich , `` a class of quantum ldpc codes : construction and performances under iterative decoding , '' in _ proc .inf . theory _, nice , france , june 2007 , pp .811 - 815 . m. s. leifer and d. poulin , `` quantum graphical models and belief propagation , '' _ annals of physics _ , vol .. 1899 - 1946 , 2008 .a. r. calderbank and p. w. shor , `` good quantum error - correcting codes exist , '' _ phys .a _ , vol .pp . 1098 - 1105 , 1996 .a. shokrollahi , `` ldpc codes : an introduction , '' tech . rep .fremont , ca : digital fountain , inc . , apr . 2 , 2003 .d. j. c. 
mackay , `` good error correcting codes based on very sparse matrices , '' _ ieee trans .inform . theory _2 , pp . 399 - 431 , mar .this regular ldpc check matrix is available at : http:// www.inference.phy.cam.ac.uk/mackay/codes/en/c/816.3.174 .this regular ldpc check matrix is available at : http:// www.inference.phy.cam.ac.uk/mackay/codes/en/c/1920.1280.3.303.gz .r. laflamme , c. miquel , j .-paz , and w. h. zurek , `` perfect quantum error correcting code , '' _ phys ._ , vol .77 , pp .198 - 201 , 1996 .f. r. kschischang , b. j. frey , and h .- a .loeliger , `` factor graphs and the sum - product algorithm , '' _ ieee trans .inform . theory _ , vol .498 - 519 , feb . 2001 .y. kou , s. lin , and m. fossorier , `` low - density parity - check codes based on finite geometries : a rediscovery and new results , '' _ ieee trans .inform . theory _47 , no . 7 , pp . 2711 - 2736 , nov .m. hagiwara and h , imai , `` quantum quasi - cycli ldpc codes '' in _ proc .inform . theory _, nice , france , june 2007 , pp .806 - 810 .m .- h hsieh , w .- t yen , and l - y hsu ,`` high performance entanglement - assisted quantum ldpc codes need little entanglement , '' _ ieee trans .inform . theory _1761 - 1769 , mar .
decoding sparse quantum codes can be accomplished by syndrome - based decoding using a belief propagation ( bp ) algorithm . we significantly improve this decoding scheme by developing a new feedback adjustment strategy for the standard bp algorithm . in our feedback procedure , we exploit much of the information from stabilizers , not just the syndrome but also the values of the frustrated checks on individual qubits of the code and the channel model . furthermore , we show that our decoding algorithm is superior to belief propagation algorithms using only the syndrome in the feedback procedure for all cases of the depolarizing channel . our algorithm does not increase the measurement overhead compared to the previous method , as the extra information comes for free from the requisite stabilizer measurements . index terms : sparse quantum codes , quantum error correction , quantum channels , belief propagation , stabilizers .
the work of wyner led to the development of the notion of secrecy capacity , which quantifies the maximum rate at which a transmitter can reliably send a secret message to a receiver , without an eavesdropper being able to decode it .more recently , researchers have considered secrecy for the two - user broadcast channel , where each receiver acts as an eavesdropper for the independent message transmitted to the other .this problem was addressed in , where inner and outer bounds for the secrecy capacity region were established .further work in studied the multiple - input single - output ( miso ) gaussian case , and considered the general mimo gaussian case .it was shown in that , under an input covariance constraint , both confidential messages can be simultaneously communicated at their respective maximum secrecy rates , where the achievablity is obtained using secret dirty - paper coding ( s - dpc ) . however , under an average power constraint , a computable secrecy capacity expression for the general mimo case has not yet been derived . in principle , the secrecy capacity for this case could be found by an exhaustive search over the set of all input covariance matrices that satisfy the average power constraint . clearly , the complexity associated with such a search and the implementation of dirty - paper encoding and decoding make such an approach prohibitive except for very simple scenarios , and motivates the study of simpler techniques based on linear precoding .while low - complexity linear transmission techniques have been extensively investigated for the broadcast channel ( bc ) without secrecy constraints , e.g. , - , there has been relatively little work on considering secrecy in the design of linear precoders for the bc case . in , we considered linear precoders for the mimo gaussian broadcast channel with confidential messages based on the generalized singular value decomposition ( gsvd ) .it was shown numerically in that , with an optimal allocation of power for the gsvd - based precoder , the achievable secrecy rate is very close to the secrecy capacity region . in this paper , we show that for a two - user mimo gaussian bc with arbitrary numbers of antennas at each node and under an input covariance constraint , linear precoding is optimal and achieves the same secrecy rate region as s - dpc for certain input covariance constraints , and we derive an expression for the optimal precoders in these scenarios .we then use this result to develop a sub - optimal closed - form algorithm for calculating linear precoders for the case of average power constraints .our numerical results indicate that the secrecy rate region achieved by this algorithm is close to that obtained by the optimal s - dpc approach with a search over all suitable input covariance matrices . in section [ secii ], we describe the model for the mimo gaussian broadcast channel with confidential messages and the optimal s - dpc scheme , proposed in . 
in section [ seciii ], we consider a general mimo broadcast channel under a matrix covariance constraint , we derive the conditions under which linear precoding is optimal and achieves the same secrecy rate region as s - dpc , and we find the corresponding optimal precoders .we then present our sub - optimal algorithm for designing linear precoders for the case of an average power constraint in section [ seciv ] , followed by numerical examples in section [ secv ] .section [ secvi ] concludes the paper .* notation : * vector - valued random variables are written with non - boldface uppercase letters ( _ e.g. , _ ) , while the corresponding non - boldface lowercase letter ( ) denotes a specific realization of the random variable .scalar variables are written with non - boldface ( lowercase or uppercase ) letters .the hermitian ( i.e. , conjugate ) transpose is denoted by , the matrix trace by tr ( . ) , and * i * indicates an identity matrix .the inequality ( ) means that is hermitian positive ( semi-)definite .mutual information between the random variables and is denoted by , is the expectation operator , and represents the complex circularly symmetric gaussian distribution with zero mean and variance .we consider a two - receiver multiple - antenna gaussian broadcast channel with confidential messages , where the transmitter , receiver 1 and receiver 2 possess , , and antennas , respectively .the transmitter has two independent confidential messages , and , where is intended for receiver 1 but needs to be kept secret from receiver 2 , and is intended for receiver 2 but needs to be kept secret from receiver 1 .the signals at each receiver can be written as : where is the transmitted signal , and is white gaussian noise at receiver with independent and identically distributed entries drawn from .the channel matrices and are assumed to be unrelated to each other , and known at all three nodes .the transmitted signal is subject to an average power constraint when for some scalar , or it is subject to a matrix power constraint when : where is the transmit covariance matrix , and .compared with the average power constraint , ( [ lin3 ] ) is rather precise and inflexible , although for example it does allow for the incorporation of per - antenna power constraints as a special case .it was shown in that for any jointly distributed such that forms a markov chain and the power constraint over is satisfied , the secrecy rate pair given by is achievable for the mimo gaussian broadcast channel given by ( [ lin1 ] ) , where the auxiliary variables and represent the precoding signals for the confidential messages and , respectively . in , the achievablity of the rate pair ( [ lin4 ] ) was proved .liu _ et al . _ analyzed the above secret communication problem under the matrix power - covariance constraint ( [ lin3 ] ) .they showed that the secrecy capacity region is rectangular .this interesting result implies that under the matrix power constraint , both confidential messages and can be _ simultaneously _ transmitted at their respective maximal secrecy rates , as if over two separate mimo gaussian wiretap channels . to prove this result , liu __ showed that the secrecy capacity of the mimo gaussian wiretap channel can also be achieved via a coding scheme that uses artificial noise and random binning ( * ? ? ? * theorem 2 ) . under the matrix power constraint ( [ lin3 ] ) , the achievablity of the optimal corner point given by ( * ? ? ? 
*theorem 1 ) is obtained using dirty - paper coding based on double binning , or as referred to in , secret dirty paper coding ( s - dpc ) .more precisely , let maximize ( [ lin5 ] ) , and let where and are two independent gaussian vectors with zero means and covariance matrices and , respectively , and the precoding matrix is defined as .one can easily confirm the achievablity of the corner point by evaluating ( [ lin4 ] ) for the above random variables and noting that in ( [ lin1 ] ) , . note that under the matrix power constraint , the input covariance matrix that achieves the corner point in the secrecy capacity region satisfies .the matrix that maximizes ( [ lin5 ] ) is given by \bc^h \bs^{\frac{1}{2}}\end{aligned}\ ] ] where ] that satisfy , where correspond to generalized eigenvalues that are larger or less - than - or - equal - to one , respectively .[ lin_remetx2 ] since the above result holds for any block - diagonal with appropriate dimensions , then for every bc there are an infinite number of matrix power constraints that achieve a block diagonalization and hence allow for an _ optimal _ linear precoding solution . in the following ,we restrict our attention to diagonal rather than block - diagonal matrices , for which a closed form solution can be derived . from theorem [ lin_thm1 ] , we have the following result . [ lin_lem4 ] for any diagonal , the secrecy capacity of the broadcast channel in ( [ lin1 ] ) under the matrix power constraint defined in ( [ lin20])-([lin23 ] ) can be obtained by linear precoding . in particular , = v_1 + v_2\end{aligned}\ ] ] where and are independent gaussian random vectors with zero means and covariance matrices and such that \ ; , \end{aligned}\ ] ] and as before represent independently encoded gaussian codebook symbols corresponding to the confidential messages and , with zero means and covariances and respectively given by \bphi_{\bw}^h \bw \label{lin25a } \\\bs_\bw - \bk_{t\bw}^ * & = \bw\bphi_{\bw}\left [ \begin{array}{cc } \b0 & \b0 \\ \b0 & \bp_2 \end{array } \right ] \bphi_{\bw}^h \bw \ ; . \label{lin25b}\end{aligned}\ ] ] the matrix simultaneously block diagonalizes and , so by theorems [ lin_thm1 ] and [ lin_thm2 ] we know that linear precoding can achieve the secrecy capacity region .the proof is completed in appendix f by showing the equality in ( [ lin25 ] ) , and showing that ( [ lin25a ] ) corresponds to the optimal covariance in ( [ lin8 ] ) . from the proof in appendix f and ( [ lin21])-([lin23 ] ) , we see that under the matrix power constraint given by ( [ lin24 ] ) with diagonal , the general bc is transformed to an equivalent bc with a set of parallel independent subchannels between the transmitter and the receivers , and it suffices for the transmitter to use independent gaussian codebooks across these subchannels . in particular , the diagonal entries of and represent the power assigned to these independent subchannels prior to application of the precoder in ( [ lin23 ] ) do not represent the actual transmitted power , since the columns of are not unit - norm . ] . 
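a numerical sketch of the pencil decomposition that underlies the results above is given below ; the randomly drawn channels , the identity covariance constraint and all variable names are our own assumptions , and the final line uses the characterization of the corner - point rate as the sum of the logarithms of the generalized eigenvalues exceeding one ( cf . the mmse - based wiretap - channel result of bustin et al . cited in the references ) .

import numpy as np
from scipy.linalg import eigh, sqrtm

rng = np.random.default_rng(0)

# illustrative dimensions: nt transmit antennas, receivers with nr1 and nr2 antennas
nt, nr1, nr2 = 4, 3, 2
H1 = rng.standard_normal((nr1, nt)) + 1j * rng.standard_normal((nr1, nt))
H2 = rng.standard_normal((nr2, nt)) + 1j * rng.standard_normal((nr2, nt))
S = np.eye(nt)                          # matrix (covariance) power constraint q_x <= s
S_half = sqrtm(S)

A = np.eye(nt) + S_half.conj().T @ H1.conj().T @ H1 @ S_half
B = np.eye(nt) + S_half.conj().T @ H2.conj().T @ H2 @ S_half

# generalized eigen-decomposition of the pencil (A, B): A c = lam B c,
# with eigenvectors returned B-orthonormal (c^h B c = i), matching the convention in the text
lam, C = eigh(A, B)
deg2 = lam > 1.0                        # subchannels degraded for receiver 2: carry message 1
deg1 = lam <= 1.0                       # remaining subchannels: carry message 2 (or nothing useful)

# corner-point secrecy rate of receiver 1 under the covariance constraint s, in nats
R1_star = float(np.sum(np.log(lam[deg2])))
print(deg2.sum(), "subchannels allotted to message 1, R1* =", round(R1_star, 3), "nats")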
from ( [ lin25 ] ) ,the signals at the two receivers are given by + \bz_1 \\ & = & \bgam_1 \bsig_1 \left [ \begin{array}{c } \bv'_1 \\ \bv'_2 \end{array } \right ] + \bz_1 \\ & = & \bgam_1 \left [ \begin{array}{c } \bsig_{1\rho } \bv'_1 \\ \bsig_{1\bar{\rho } } \bv'_2 \end{array } \right ] + \bz_1 \\ \by_2 & = & \bgam_2 \left [ \begin{array}{c } \bsig_{2\rho } \bv'_1 \\ \bsig_{2\bar{\rho } } \bv'_2 \end{array } \right ] + \bz_2 \ ; , \end{aligned}\ ] ] where are unitary .the confidential message for receiver 1 is thus transmitted with power loading over those subchannels which are degraded for receiver 2 ( ) , while receiver 2 s confidential message has power loading over subchannels which are degraded for receiver 1 ( ) .any subchannels for which the diagonal elements of are equal to those of are useless from the viewpoint of secret communication , but could be used to send common non - confidential messages . from theorem [ lin_thm1 ] , the rectangular secrecy capacity region of the mimo gaussian bc ( [ lin1 ] ) under the matrix power constraint ( [ lin24 ] ) is defined by the corner points where is given by ( [ linap35 ] ) in appendix f. note that we have explicitly written as a function of the diagonal matrix to emphasize that contains the only parameters that can be optimized for .more precisely , since for a given matrix power constraint , and are channel dependent and thus fixed , as shown in ( [ lin21])-([lin22 ] ) .a similar description is also true for .here we propose our sub - optimal closed form solution based on linear precoding for the broadcast channel under the _ average _ power constraint ( [ lin2 ] ) .the goal is to find the diagonal matrix in ( [ lin24 ] ) that maximizes in ( [ lin27 ] ) for a given allocation of the transmit power to message , and that satisfies the average power constraint in ( [ lin28 ] ) . ] noting that can be written as ] and ] is full - rank .following the same steps as in the proof of ( * ? ? ?* lemma 2 ) or ( * ? ? ?b ) , we can convert the case when , , to the case where with the same secrecy capacity region . from ( [ lin10 ] ) and ( [ lin11 ] ) we have \bc^{-1}-\bi\right ] \bs^{-1/2 } \\ & \bg^h\bg= \bs^{-1/2}\left[\bc^{-h}\bc^{-1}-\bi\right ] \bs^{-1/2 } \ ; . 
\end{split}\end{aligned}\ ] ] using ( [ linap7 ] ) and ( [ linap8 ] ) , we have : \bc^h\cdot\left[\bc^{-h}\left[\begin{array}{ccc}\mathbf{\lambda}_1 & 0\\ 0 & \mathbf{\lambda}_2\end{array}\right]\bc^{-1}-\bi\right ] \bs^{-1/2}\right|\nonumber \\ & = \left|\bi+ \left[\begin{array}{ccc } \b0 & \b0\\\b0 & ( \bc_2^h\bc_2)^{-1}\end{array}\right ] \cdot\left[\left[\begin{array}{ccc}\mathbf{\lambda}_1 & 0\\ 0 & \mathbf{\lambda}_2\end{array}\right]-\bc^h\bc\right ] \right| \label{linap9 } \\ & = \left|\left[\begin{array}{ccc}\bi & \b0\\\b0 & ( \bc_2^h\bc_2)^{-1}\mathbf{\lambda}_2\end{array } \right]\right| \label{linap10 } \\ & = \left|(\bc_2^h\bc_2)^{-1}\mathbf{\lambda}_2\right| = \left|(\bc_2^h\bc_2)^{-1}\right|\cdot\left|\mathbf{\lambda}_2\right| \ ; , \label{linap11}\end{aligned}\ ] ] where ( [ linap9 ] ) comes from the fact that .finally , ( [ linap10 ] ) holds since and is block diagonal .similarly , one can show that and \bc^{-1}\right| = \left|(\bc^h\bc)^{-1}\right|\cdot\left|\mathbf{\lambda}_1\right| \cdot\left|\mathbf{\lambda}_2\right| \nonumber\\ & = \left|(\bc_1^h\bc_1)^{-1}\right|\cdot\left|(\bc_2^h\bc_2)^{-1}\right|\cdot\left|\mathbf{\lambda}_1\right| \cdot\left|\mathbf{\lambda}_2\right| \;.\label{linap14}\end{aligned}\ ] ] substituting ( [ linap11 ] ) , ( [ linap13 ] ) and ( [ linap14 ] ) in ( [ linap4 ] ) , we have , and this completes the proof .from ( [ lin10])-([lin11 ] ) , we know that , where represents number of generalized eigenvalues of the pencil ( [ lin9 ] ) that are greater than 1 . from ( [ lin10])-([lin11 ] ) ,we have \bc_1=\mathbf{\lambda}_1 \label{linap1 } \\ & \bc_1^h\left[\bs^{\frac{1}{2}}\bg^h\bg\bs^{\frac{1}{2}}+\bi\right]\bc_1=\bi \;.\label{linap2}\end{aligned}\ ] ] subtracting ( [ linap1 ] ) from ( [ linap2 ] ) , a straightforward computation yields \bs^{\frac{1}{2}}\bc_1= \mathbf{\lambda}_1-\bi \succ\b0 \;.\end{aligned}\ ] ] from ( [ linap3 ] ) , we have \bs^{\frac{1}{2}}\bc_1\succ\b0 ] , where precoding signals and are independent gaussian vectors with zero means and diagonal covariance matrices respectively given by and . in both cases , and the same secrecy rate region is achieved . 1 a. wyner , `` the wire - tap channel , '' _ bell .j. _ , vol .54 , no . 8 , pp . 1355 - 1387 , jan . 1975 .r. liu , i. maric , p. spasojevic , and r. d. yates , discrete memoryless interference and broadcast channels with confidential messages : secrecy rate regions , " _ ieee trans .inf . theory _54 , no . 6 , pp . 2493 - 2512 , june 2008 .r. liu and h. v. poor , secrecy capacity region of a multiple - antenna gaussian broadcast channel with confidential messages , " _ ieee trans .inf . theory _1235 - 1249 , mar .r. liu , t. liu , h. v. poor , and s. shamai , multiple - input multiple - output gaussian broadcast channels with confidential messages , " _ ieee trans .inf . theory _ ,4215 - 4227 , 2010 .q. h. spencer , a. l. swindlehurst , and m. haardt , zero - forcing methods for downlink spatial multiplexing in multiuser mimo channels , " _ ieee trans .signal processing _ , vol .461 - 471 , feb . 2004 .t. yoo and a. goldsmith , on the optimality of multi - antenna broadcast scheduling using zero - forcing beamforming , " _ ieee j. select .areas commun ._ , special issue on 4 g wireless systems , vol .3 , pp.528 - 541 , mar .a. wiesel , y. eldar , and s. shamai , linear precoding via conic optimization for fixed mimo receivers , " _ ieee trans .signal processing _ , vol .161 - 176 , jan . 2006 .a. fakoorian and a. l. 
swindlehurst , `` dirty paper coding versus linear gsvd - based precoding in mimo broadcast channel with confidential messages , '' in _ proc .ieee globecom _ , deca. khisti and g. wornell , `` secure transmission with multiple antennas ii : the mimome wiretap channel , '' _ ieee trans .56 , no . 11 , pp . 5515 - 5532 , 2010 .a. fakoorian and a. l. swindlehurst , `` optimal power allocation for the gsvd based mimo gaussian wiretap channel , '' in _ isit _ , july 2012 .r. bustin , r. liu , h. v. poor , and s. shamai ( shitz ) , `` a mmse approach to the secrecy capacity of the mimo gaussian wiretap channel , '' _ eurasip journal on wireless comm . and net .2009 , article i d 370970 , 8 pages , 2009 .r. a. horn and c. r. johnson , _ matrix analysis _ , university press , cambridge , uk , 1985 .h. weingarten , y. steinberg , and s. shamai ( shitz ) , `` the capacity region of the gaussian multiple - input multiple - output broadcast channel , '' _ ieee trans . inf .9 , pp . 3936 - 3964 , 2006 .d. nion , `` a tensor framework for nonunitary joint block diagonalization , '' _ ieee trans .signal processing _ , vol .4585 - 4594 , oct .s. a. a. fakoorian and a. l. swindlehurst , `` mimo interference channel with confidential messages : achievable secrecy rates and beamforming design , '' _ ieee trans . on inf .forensics and security _ , vol .j. lee , and n. jindal , `` high snr analysis for mimo broadcast channels : dirty paper coding versus linear precoding , '' _ ieee trans .inf . theory _ ,4787 - 4792 , dec .we want to characterize the matrices for which has generalized eigenvectors with orthogonal and . for any positive semidefinite matrix , there exists a matrix such that .more precisely , , where can be any unitary matrix ; thus is not unique .[ lin_remap2 ] let the invertible matrix and the diagonal matrix respectively represent the generalized eigenvectors and eigenvalues of so that \overline{\bc}=\overline{\mathbf{\lambda}}\\ & \overline{\bc}^h\left[\bt^h\bg^h\bg\bt+\bi\right]\overline{\bc}=\bi \ ; , \end{split}\end{aligned}\ ] ] where for a given unitary matrix . by comparing ( [ lin10 ] ) and ( [ linap_ex1 ] ) , one can confirm that and , where and are respectively the generalized eigenvectors and eigenvalues of ( [ linap22 ] ) , as given by ( [ lin10 ] ) .also note that , for any unitary , .thus , finding a such that ( [ linap22 ] ) has orthogonal and ( block diagonal ) is equivalent to finding a , , such that ( [ linap23 ] ) has orthogonal and ( block diagonal ) .the _ if _ part of theorem [ lin_thm2 ] is easy to show .we want to show that if and simultaneously block diagonalizes and , as given by ( [ linshrink1 ] ) such that and , then . from the definition of the generalized eigenvalue decomposition , we have \overline{\bc}= \overline{\bc}^h \left[\begin{array}{ccc}\bi+\bk_{\bh1 } & \b0\\ \b0 & \bi+\bk_{\bh2}\end{array}\right]\overline{\bc}= \left[\begin{array}{ccc}\bd_1 & \b0\\ \b0 & \bd_2 \end{array}\right]\\ & \overline{\bc}^h\left[\bt^h\bg^h\bg\bt+\bi\right]\overline{\bc}=\overline{\bc}^h \left[\begin{array}{ccc}\bi+\bk_{\bg1 } & \b0\\ \b0 & \bi+\bk_{\bg2}\end{array}\right]\overline{\bc}= \left[\begin{array}{ccc}\bi & \b0\\ \b0 & \bi \end{array}\right ] \ ; , \\ \end{split}\end{aligned}\ ] ] from which we have =\left[\begin{array}{ccc}\overline{\bc}_{11 } & \b0\\ \b0 & \overline{\bc}_{22}\end{array}\right ] \;,\ ] ] where the invertible matrix and diagonal matrix are respectively the generalized eigenvectors and eigenvalues of . 
since , then , which shows that corresponds to generalized eigenvalues that are bigger than or equal to one .we have a similar definition for and diagonal matrix , corresponding to . finally , since is block diagonal , then , where is the generalized eigenvector matrix of ( [ linap22 ] ) , is block diagonal as well .this completes the _ if _ part of the theorem . in the following , we prove the _ only if _ part of theorem [ lin_thm2 ] ; _ i.e. , _ we show that if results in ( [ linap22 ] ) having orthogonal and , then there must exist a square matrix such that and and are simultaneously block diagonalized as in ( [ linshrink1 ] ) with and .let have the eigenvalue decomposition , where is unitary and is a positive semidefinite diagonal matrix .also let have the eigenvalue decomposition , where is unitary and is a positive definite diagonal matrix .one can easily confirm that and , where and are respectively the generalized eigenvectors and eigenvalues of ( [ linap22 ] ) . also let be ordered such that ] .this completes the proof .[ lin_remap1 ] by applying the schur complement lemma on ^h\left[\bc_1\quad\bc_2\right]=\left [ \begin{array}{ccc}\bc_1^h\bc_1 & \bc_1^h\bc_2\\ \bc_2^h\bc_1 & \bc_2^h\bc_2\end{array}\right]\ ] ] and recalling the fact that is full - rank , we have that is full rank .similarly , one can show that exists . also , we have .define $ ] , so that ^h\left[\bp^\perp_{\bc_2}\bc_1\quad \bc_2\right]=\left [ \begin{array}{ccc}\bc_1^h\bp^\perp_{\bc_2}\bc_1 & \b0\\ \b0 & \bc_2^h\bc_2\end{array}\right]\;.\ ] ] consequently , we can write \widehat{\bc}^h\end{aligned}\ ] ] and \widehat{\bc}^h \ ; .\end{aligned}\ ] ] in the following we show the achievablity of in ( [ lin19 ] ) .the achievablity of is obtained in a similar manner .since and in theorem [ lin_lem2 ] are independent , from ( [ lin17 ] ) we have recalling ( [ linap8 ] ) , we have where we used remark [ lin_remap1 ] to obtain ( [ linap18 ] ) . 
from ( [ linap15 ] ) , we have \widehat{\bc}^h\cdot\left[\bc^{-h}\left[\begin{array}{ccc}\mathbf{\lambda}_1 & 0\\ 0 & \mathbf{\lambda}_2\end{array}\right]\bc^{-1}-\bi\right ] \right|\nonumber \\ & = \left|\bi+\left[\begin{array}{ccc } \b0 & \b0\\\b0 & ( \bc_2^h\bc_2)^{-1}\end{array}\right ] \cdot\left[\widehat{\bc}^h\bc^{-h}\left[\begin{array}{ccc}\mathbf{\lambda}_1 & 0\\ 0 & \mathbf{\lambda}_2\end{array}\right]\bc^{-1}\widehat{\bc}-\widehat{\bc}^h\widehat{\bc } \right ] \right|\nonumber \\ & = \left|\bi+\left[\begin{array}{ccc } \b0 & \b0\\\b0 & ( \bc_2^h\bc_2)^{-1}\end{array}\right ] \cdot\left[\left[\begin{array}{ccc}\bi & \bn^h\\ \b0 & \bi\end{array}\right ] \left[\begin{array}{ccc}\mathbf{\lambda}_1 & 0\\ 0 & \mathbf{\lambda}_2\end{array}\right]\left[\begin{array}{ccc}\bi & \b0\\ \bn & \bi\end{array}\right ] -\widehat{\bc}^h\widehat{\bc } \right ] \right| \label{linap19}\\ & = \left|\bi+\left[\begin{array}{ccc } \b0 & \b0\\\b0 & ( \bc_2^h\bc_2)^{-1}\end{array}\right ] \cdot\left[\left[\begin{array}{ccc}\mathbf{\lambda}_1+\bn^h\mathbf{\lambda}_2\bn & \bn^h\mathbf{\lambda}_2\\ \mathbf{\lambda}_2\bn & \mathbf{\lambda}_2\end{array}\right ] -\widehat{\bc}^h\widehat{\bc } \right ] \right| \nonumber\\ & = \left|\left[\begin{array}{ccc } \bi & \b0 \\ ( \bc_2^h\bc_2)^{-1}\mathbf{\lambda}_2\bn & ( \bc_2^h\bc_2)^{-1}\mathbf{\lambda}_2\end{array}\right ] \right| \nonumber\\ & = \left|(\bc_2^h\bc_2)^{-1}\mathbf{\lambda}_2\right| = \left|(\bc_2^h\bc_2)^{-1}\right|\cdot\left|\mathbf{\lambda}_2\right| \ ; , \label{linap20}\end{aligned}\ ] ] where in ( [ linap19 ] ) , , and we used the fact that \ , \left[\bp^\perp_{\bc_2}\bc_1 \quad \bc_2\right ] = \left[\begin{array}{ccc}\bi & \b0\\ \bn & \bi\end{array}\right ] \;.\end{aligned}\ ] ] similarly , we have \cdot\left[\left[\begin{array}{ccc}\bi & \bn^h\\ \b0 & \bi\end{array}\right ] \left[\begin{array}{ccc}\bi & \b0\\ \bn & \bi\end{array}\right ] -\widehat{\bc}^h\widehat{\bc } \right ] \right| \nonumber\\ & = \left|\bi+\left[\begin{array}{ccc } ( \bc_1^h\bp^\perp_{\bc_2}\bc_1)^{-1 } & \b0\\\b0 & \b0\end{array}\right ] \cdot\left[\left[\begin{array}{ccc}\bi+\bn^h\bn & \bn^h\\ \bn & \bi\end{array}\right ] -\widehat{\bc}^h\widehat{\bc } \right ] \right| \nonumber\\ & = \left|\left[\begin{array}{ccc } ( \bc_1^h\bp^\perp_{\bc_2}\bc_1)^{-1 } ( \bi+\bn^h\bn ) & ( \bc_1^h\bp^\perp_{\bc_2}\bc_1)^{-1}\bn^h \\ \b0 & \bi\end{array}\right ] \right| \nonumber\\ & = \left|(\bc_1^h\bp^\perp_{\bc_2}\bc_1)^{-1}\right|\cdot\left|\bi+\bn^h\bn\right| \;. \label{linap21}\end{aligned}\ ] ] subsituting ( [ linap18 ] ) , ( [ linap20 ] ) and ( [ linap21 ] ) in ( [ linap17 ] ) , we have , which completes the proof .
we study the optimality of linear precoding for the two - receiver multiple - input multiple - output ( mimo ) gaussian broadcast channel ( bc ) with confidential messages . secret dirty - paper coding ( s - dpc ) is optimal under an input covariance constraint , but there is no computable secrecy capacity expression for the general mimo case under an average power constraint . in principle , for this case , the secrecy capacity region could be found through an exhaustive search over the set of all possible matrix power constraints . clearly , this search , coupled with the complexity of dirty - paper encoding and decoding , motivates the consideration of low complexity linear precoding as an alternative . we prove that for a two - user mimo gaussian bc under an input covariance constraint , linear precoding is optimal and achieves the same secrecy rate region as s - dpc if the input covariance constraint satisfies a specific condition , and we characterize the corresponding optimal linear precoders . we then use this result to derive a closed - form sub - optimal algorithm based on linear precoding for an average power constraint . numerical results indicate that the secrecy rate region achieved by this algorithm is close to that obtained by the optimal s - dpc approach with a search over all suitable input covariance matrices .
the ricci flow has been introduced by r. hamilton with the goal of providing an analytic approach to thurston s geometrization conjecture for three - manifolds . inspired by the theory of harmonic maps , he considered the geometric evolution equation obtained when one evolves a riemannian metric , on a three - manifold , in the direction of its ricci tensor , _i.e. _ in recent years , this geometric flow has gained extreme popularity thanks to the revolutionary breakthroughs of g. perelman , who , taking the whole subject by storm , has brought to completion hamilton s approach to thurston s conjecture .the prominent themes recurring in hamilton s and perelman s works converge to a proof that the ricci flow , coupled to topological surgery , provides a natural technique for factorizing and uniformizing a three - dimensional riemannian manifold into locally homogeneous geometries .this is a result of vast potential use also in theoretical physics , where the ricci flow often appears in disguise as a natural real - space renormalization group flow .non - linear -model theory , describing quantum strings propagating in a background spacetime , affords the standard case study in such a setting .another paradigmatical , perhaps even more direct , application occurs in relativistic cosmology , ( for a series of recent results see also and the references cited therein ) .this will be related to the main topic of this talk , and to motivate our interest in it , let us recall that homogeneous and isotropic solutions of einstein s laws of gravity ( the friedman lemaitre robertson walker ( flrw ) spacetimes ) do not account for inhomogeneities in the universe .the question whether they do _ on average _ is an issue that is the subject of considerable debate especially in the recent literature ( see and follow up references ; comprehensive lists may be found in and ) . 
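for orientation , hamilton s evolution equation ( [ mflow ] ) referred to above , together with the linearized flow that will be used alongside it , can be written in the standard form below ; this is the unnormalized flow , and the linearized equation is understood modulo a deturck ( diffeomorphism ) gauge term , so the paper s own display may differ by such terms .

\begin{align}
\frac{\partial}{\partial \beta}\, g_{ab}(\beta) &= -\,2\, \mathcal{R}_{ab}(\beta)\;, & g_{ab}(\beta=0) &= g_{ab}\;, \\
\frac{\partial}{\partial \beta}\, h_{ab}(\beta) &= \Delta_{L}\, h_{ab}(\beta)\;, & h_{ab}(\beta=0) &= h_{ab}\;,
\end{align}

here \mathcal{R}_{ab}(\beta) is the ricci tensor of the flowing metric and \Delta_{L} is the lichnerowicz - de rham laplacian associated with g_{ab}(\beta) , the operator that also appears in the heat kernels discussed later .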
in any case, a member of the family of flrw cosmologies ( the so called concordance model that is characterized by a dominating cosmological constant in a spatially flat universe model ) provides a successful _ fitting model _ to a large number of observational data , and the generally held view is that the spatial sections of flrw spacetimes indeed describe the _ physical universe _ on a sufficiently large averaging scale .this raises an interesting problem in mathematical cosmology : devise a way to explicitly construct a constant curvature metric out of a _ scale dependent _ inhomogeneous distribution of matter and spatial curvature .it is in such a framework that one makes the basic observation that the ricci flow ( [ mflow ] ) and its linearization , provide a natural technique for deforming , and under suitable conditions smoothing , the geometrical part of scale dependent cosmological initial data sets .moreover , by taking advantage of some elementary aspects of perelman s results , this technique also provides a natural and unique way for deforming , along the ricci flow , the matter distribution .the expectation is that in this way we can define a deformation of cosmological initial data sets into a one - parameter family of initial data whose time evolution , along the evolutive part of einstein s equations , describe the ricci flow deformation of a cosmological spacetime .to set notation , we emphasize that throughout the paper we shall consider a smooth three - dimensional manifold , which we assume to be closed and without boundary .we let and be the space of smooth functions and of smooth fields on , respectively .we shall denote by the group of smooth diffeomorphisms of , and by the space of all smooth riemannian metrics over .the tangent space , , to at can be naturally identified with the space of symmetric bilinear forms over .the hypothesis of smoothness has been made for simplicity .results similar to those described below , can be obtained for initial data sets with finite holder or sobolev differentiability .in such a framework , let us recall that a collection of fields , , , , defined over the three - manifold , characterizes a set , of physical cosmological initial data for einstein equations if and only if the _ matter fields _ verify the weak energy condition , the dominant energy condition , and their coupling with the geometric fields is such as to satisfy the hamiltonian and divergence constraints ; we adopt the summation convention .the nabla operator denotes covariant derivative with respect to the 3metric .the units are such that . ] : here is the cosmological constant , , and is the scalar curvature of the riemannian metric . 
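as a reminder , in one standard set of sign conventions ( which may differ from the paper s own choice ) the hamiltonian and divergence constraints referred to above read :

\begin{align}
\mathcal{R}(g) + k^{2} - k_{ab}\,k^{ab} - 2\Lambda &= 16\pi G\, \varrho\;, \\
\nabla_{b}\,k^{b}_{\;\,a} - \nabla_{a}\,k &= 8\pi G\, j_{a}\;,
\end{align}

where \mathcal{R}(g) is the scalar curvature of the three - metric , k_{ab} the extrinsic curvature with trace k , \Lambda the cosmological constant , and ( \varrho , j_{a} ) the matter and momentum densities ; the second relation is the one whose violation is later packaged into the backreaction covector \psi_{a}(\beta) .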
if such a set of admissible data is propagated according to the evolutive part of einstein s equations , then the symmetric tensor field can be interpreted as the extrinsic curvature and as the mean curvature of the embedding of in the spacetime resulting from the evolution of , whereas and are , respectively , identified with the mass density and the momentum density of the material self gravitating sources on .the averaging procedure described in is based on a smooth deformation of the physical initial data into a one - parameter family of initial data sets with being a parameter characterizing the averaging scale .the general idea is to construct the flow ( [ betadata ] ) in such a way as to represent , as increases , a scale - dependent averaging of , and under suitable hypotheses reducing it to a constant - curvature initial data set where is a constant curvature metric on , is the ( spatially constant ) trace of the extrinsic curvature ( related to the hubble parameter ) , and is the averaged matter density . under the heading of such a general strategyit is easy to figure out the reasons for an important role played by the ricci flow and its linearization . as we shall recall shortly, they are natural geometrical flows always defining a non - trivial deformation of the metric and of the extrinsic curvature . moreover , when global , they posses remarkable smoothing properties . for instance , if , for , the scalar curvature of is , and if there exist positive constants , , , not depending on , such that ,and ,where the trace free part of the ricci tensor , then the solutions of the ( volume normalized ) ricci flow and its linearization exist for all , and the pair , uniformly converges , when , to where is a metric with constant positive sectional curvature , and is some vector field on , possibly depending on . the flow , describing a motion by diffeomorphisms over a constant curvature manifold , can be thought of as representing the smoothing of the geometrical part of an initial data set .+ it is useful to keep in mind what we can expect and what we can not expect out of such a ricci flow deformation of cosmological initial data set .let us start by remarking that the family of data ( [ betadata ] ) will correspond to the initial data for physical spacetimes iff the constraints ( [ constraint1 ] ) and ( [ constraints ] ) hold throughout the deformation .this is a very strong requirement and , if we have a technically consistent way of deforming the matter distribution and the geometrical data , then the most natural way of implementing the constraints is to use them to define scale dependent backreaction fields , describing the non linear interaction between matter averaging and geometrical averaging , _i.e. _ , \;,\label{hamro}\\ \psi _ { a } ( \beta ) & \doteq & j_{a}(\beta ) -(8\pi \,g)^{-1 } \left[\nabla _ { b}k_{\;\,a}^{b}(\beta ) -\nabla _ { a}k(\beta ) \right]\;. \label{divj}\end{aligned}\ ] ] to illustrate how this strategy works , let us concentrate , in this talk , on the characterization of the scalar field , providing the backreaction between matter and geometrical averaging .the covector field can , in principle , be controlled by the action of a diffeomorphism .however , its analysis requires a subtle interplay with the kinematics of spacetime foliations , ( _ i.e. 
_ , how we deal with the lapse function and with the shift vector field in the framework of perelman s approach ) , and will be discussed elsewhere , ( for a pre perelman approach to this issue see ) .+ let us start by observing that the matter averaging flow must comply with the preservation of the physical matter content and must be explicitly coupled to the scale of geometrical averaging . in other words ,if , for some fixed , we consider that part of the matter distribution which is localized in a given region of size , then we should be able to tell from which localized distribution , at , the selected matter content has evolved .a natural answer to these requirements is provided by perelman sbackward localization of probability measures on ricci evolving manifolds .the idea is to probe the ricci flow with a probability measure whose dynamics can localize the regions of the manifold of geometric interest .this is achieved by considering , along the solution of ( [ mflow ] ) , a mapping , in terms of which one constructs on the measure , where is a scale parameter chosen in such a way as to normalize according to the so called _ perelman s coupling_: .it is easily verified that this is preserved in form along the ricci flow ( [ mflow ] ) , if the mapping and the scale parameter are evolved backward in time according to the coupled flows defined by where is the laplacian with respect to the metric , and , are given ( final ) data . in this connection , note that the equation for is a backward heat equation , and as such the forward evolution is an ill - posed problem .a direct way for circumventing such a difficulty is to interpret ( [ ourp ] ) according to the following two - steps prescription : ( i ) evolve the metric , say up to some , according to the ricci flow , ( if the flow is global we may let ) ; ( ii ) on the ricci evolved riemannian manifold so obtained , select a function and the corresponding scale parameter , and evolve them , backward in , according to ( [ ourp ] ) .with these preliminary remarks along the way , let us characterize the various steps involved in constructing the flow ( [ betadata ] ) .( for ease of exposition , we refer to the standard unnormalized flow ; volume normalization can be enforced by a reparametrization of the deformation parameter ) .+ ( _ geometrical data deformation _ ) given an initial data set for a cosmological spacetime , the ricci flow deformation of its geometrical part is defined by the flow , provided by the ( weakly ) parabolic initial value problem ( the ricci flow , proper ) <\infty \;, ] , denotes the lichnerowicz derham laplacian with respect to the variable , the heat kernels and are smooth sections of and , respectively , and finally , is the dirac measure , ( ) on .the dirac initial condition is understood in the distributional sense , _i.e. _ , , for any smooth symmetric bilinear form with compact support , and where the limit is meant in the uniform norm on . note that heat kernels for generalized laplacians , such as , ( smoothly ) depending on a one parameter family of metrics , , are briefly dealt with in .the delicate setting where the parameter dependence is , as in our case , identified with the parabolic time driving the diffusion of the kernel , is discussed in , ( see appendix a , 7 for a characterization of the parametrix of the heat kernel in such a case ) , and in . 
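for the reader s convenience , the coupled backward flow ( [ ourp ] ) has , in perelman s normalization for three dimensions , the familiar form below ; we quote it only as a guide , since the paper s scale parameter and sign conventions may be arranged slightly differently .

\begin{align}
\frac{\partial f(\beta)}{\partial \beta} &= -\,\Delta_{g(\beta)}\, f(\beta) + \left|\nabla f(\beta)\right|^{2} - \mathcal{R}(\beta) + \frac{3}{2\,\sigma(\beta)}\;, & \frac{d\,\sigma(\beta)}{d\beta} &= -1\;,
\end{align}

with these choices the measure d\varpi(\beta) = \left(4\pi\,\sigma(\beta)\right)^{-3/2} e^{-f(\beta)}\, d\mu_{g(\beta)} keeps unit total mass along the ricci flow ( [ mflow ] ) , which is the normalization referred to as perelman s coupling in the text .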
strictly speaking , in all these works , the analysis is confined to the scalar laplacian , possibly with a potential term , but the theory readily extends to generalized laplacians , always under the assumption that the metric is smooth as .finally , the kernels for and can both be normalized , along the ricci flow , over the round ( collapsing ) 3sphere .let be the ricci flow deformation , on ] are smooth sections , ( depending on the geometry of ) , characterizing the asymptotics of the heat kernel . with an obvious adaptation, such an asymptotic behavior also extends to and . with these resultsit is rather straightforward to provide useful characterizations of the ( intrinsic ) backreaction field . for illustrative purposes ,let us assume that and , then from ( [ hamro ] ) and the above proposition we get that \;.\nonumber\end{aligned}\ ] ] if we further assume that the hamiltonian constraint holds at the _ fixed _ observative scale and that $ ] , ( _ i.e. _ , curvature fluctuates around the _ constant curvature background _ ) , then from which it follows that the intrinsic backreaction field is generated by curvature fluctuations around the given background , as expected .m.c . would like to dedicate this paper to t. ruggeri on occasion of his .. th birthday ; he also thanks the organizers for a very stimulating and enjoyable meeting .t.b . acknowledges hospitality at and support from the university of pavia during a working visit .research supported in part by prin grant .i. bakas , _ geometric flows and ( some of ) their physical applications _ , avh conference advances in physics and astrophysics of the 21st century , 6 - 11 september 2005 , varna , bulgaria , arxiv : hep - th/0511057 ( 2005 ). t. buchert , _ on average properties of inhomogeneous fluids in general relativity : 2 .perfect fluid cosmologies _ , gen .* 33 * , 1381 ( 2001 ) .t. buchert , _ dark energy from structure : a status report _( special issue on dark energy ) , in press ; arxiv:0707.2153 ( 2007 ) .t. buchert and m. carfora , _ regional averaging and scaling in relativistic cosmology _ , class .* 19 * , 6109 - 6145 ( 2002 ) .t. buchert and m. carfora , _ cosmological parameters are dressed _ , phys .. lett . *90 * , 31101 - 1 - 4 ( 2003 ) . m. carfora , a. marzuoli , _ model geometries in the space of riemannian structures and hamilton s flow _quantum grav . 5 , 659 - 693 ( 1988 ) .b. chow , s - c .chu , d. glickenstein , c. guenther , j. isenberg , t. ivey , d. knopf , p. lu , l. ni , _ the ricci flow : techniques and applications : part i : geometric aspects _ , math. surveys and monographs vol . *135 * , am .( 2007 ) .ellis and t. buchert , _ the universe seen at different scales _ , phys .. a ( einstein special issue ) * 347 * , 38 ( 2005 ) .d. h. friedan , _ nonlinear models in dimensions _physics * 163 * , no .2 , 318419 ( 1985 ) .
ricci flow deformation of cosmological initial data sets in general relativity is a technique for generating families of initial data sets which could potentially allow one to interpolate between distinct spacetimes. this idea has been around since the appearance of the ricci flow on the scene, but it has been difficult to turn it into a sound mathematical procedure. in this expository talk we illustrate how perelman's recent results in ricci flow theory can considerably improve this situation. from a physical point of view, this analysis can be related to the issue of finding a constant-curvature template spacetime for the inhomogeneous universe, relevant to the interpretation of observational data, and hence bears on the dark energy and dark matter debates. these techniques provide control over curvature fluctuations (intrinsic backreaction terms) in their relation to the averaged matter distribution.
the broadcast nature of wireless transmissions causes co - channel interference and channel contention , which can be viewed as interactions among transceivers .interactions among multiple decision makers can be formulated and analyzed using a branch of applied mathematics called game theory .game - theoretic approaches have been applied to a wide range of wireless communication technologies , including transmission power control for code division multiple access ( cdma ) cellular systems and cognitive radios . for a summary of game - theoretic approaches to wireless networks ,we refer the interested reader to .application - specific surveys of cognitive radios and sensor networks can be found in . in this paper , we focus on potential games , which form a class of strategic form games with the following desirable properties : * the existence of a nash equilibrium in potential games is guaranteed in many practical situations ( theorems [ th : existence_finite ] and [ th : existence_infinite ] in this paper ) , but is not guaranteed for general strategic form games .other classes of games possessing nash equilibria are summarized in and .* unilateral improvement dynamics in potential games with finite strategy sets are guaranteed to converge to the nash equilibrium in a finite number of steps , i.e. , they do not cycle ( theorem [ eq : converge_in_finite_steps ] in this paper ) . as a result, learning algorithms can be systematically designed .a game that does not have these properties is discussed in example [ example2 ] in section [ sec : game ] .we provide an overview of problems in wireless networks that can be formulated in terms of potential games .we also clarify the relations among games , and provide simpler proofs of some known results .problem - specific learning algorithms are beyond the scope of this paper . [ cols="<,<,<,<",options="header " , ] [ tab : system_models ] the remainder of this paper is organized as follows : in sections [ sec : game ] , [ sec : pot ] , and [ sec : learning ] , we introduce strategic form games , potential games , and learning algorithms , respectively .we then discuss various potential games in sections [ ssec : nie ] to [ sec : immobile_sensor ] , as shown in table [ tab : system_models ] .finally , we provide a few concluding remarks in section [ sec : conclusion ] .the notation used here is shown in table [ 210446_20aug14 ] . unless the context indicates otherwise ,sets of strategies are denoted by calligraphic uppercase letters , e.g. , , strategies are denoted by lowercase letters , e.g. , , and tuples of strategies are denoted by boldface lowercase letters , e.g. , .note that is a scalar variable when is a set of scalars or indices , is a vector variable when is a set of vectors , and is a set variable when is a collection of sets .we use to denote the set of real numbers , to denote the set of nonnegative real numbers , to denote the set of positive real numbers , and to denote the set of complex numbers .the cardinality of set is denoted by .the power set of is denoted by . finally , is the indicator function , which is one when is true and is zero otherwise .we treat many system models , as shown in fig .[ fig : system_models ] . in multiple - access channels , as shown in fig .[ fig : system_models ] , multiple transmitters ( txs / users / mobile stations / terminals ) transmit signals to a single receiver ( rx / base station ( bs)/access point ( ap ) ) . in fig .[ fig : system_models ] , represents the link gain from tx to the rx . 
in a network model consisting of tx - rx pairs , as shown in fig .[ fig : system_models ] , each tx transmits signals to rx . in this case , . in a network model consisting of txs shown in fig .[ fig : system_models ] , each tx ( bs / ap / transceiver / station / terminal / node ) interferes with others . in this model , . a `` canonical network model '' , shown in fig .[ fig : system_models ] , consists of clusters that are spatially separated in order for to hold .note that these network models have been discussed in terms of graph structure in .we use to denote a directed link from tx to tx or cluster to cluster .let interference graph be an undirected graph , where the set of vertices corresponds to txs or clusters , and interferes with if , as shown in fig .[ fig : system_models ] , i.e. , where is the transmission power level for every tx and is a threshold of the received power .note that in undirected graph , , for every .we denote the neighborhood of in graph by .we also define , then , .c|x & strategic form game + & finite set of players , + & + & set of strategies for player + & strategy space , + & payoff function for player + & potential function + & best - response correspondence of player + & strategy of player , + & set of probability distributions over + & mixed strategy , + & mixed strategy profile , + & link gain between tx and a single isolated rx in fig .[ fig : system_models ] + & link gain between tx and rx ; in fig .[ fig : system_models ] , and in figs .[ fig : system_models ] and [ fig : system_models ] + & directed link from to + & set of edges in undirected graph + & .neighborhood in graph + & . + & common noise power for every player + & noise power at rx + & noise power at rx in channel + & interference power at rx at channel arrangement + & set of available channels for player + & channel of player + & + & set of available transmission power levels for player + & transmission power level of player as a strategy + & + & identical transmission power level for every player + & transmission power level for player as a constant + & required signal - to - interference - plus - noise power ratio ( sinr ) + [ 210446_20aug14 ]we begin with the definition of a strategic form game and present an example of a game - theoretic formulation of a simple channel selection problem .moreover , we discuss other useful concepts , such as the best response and nash equilibrium .the analysis of nash equilibria in the channel selection example reveals the potential presence of cycles in best - response adjustments . a _ strategic ( or normal ) form game _ is a triplet , or simply , where is a finite set of _ players _ ( decision makers ) , is the set of _ strategies _ ( or actions ) for player , and is the _ payoff _ ( or utility ) function of player that must be maximized .if , we denote the cartesian product by . if , we simply write to denote , and to denote . when , we let denote and denote . for , , , and . [ example1 ] consider a channel selection problem in the tx - rx pair model shown in fig . 
[fig : system_models ] .each tx - rx pair is assumed to select its channel in a decentralized manner in order to minimize the received interference power .the channel selection problem can be formulated as a strategic form game .the elements of the game are as follows : the set of players is the set of tx - rx pairs .the strategy set for each pair , is the set of available channels .the received interference power at rx is determined by a combination of channels , where let be the payoff function to be maximized , i.e. , note that was introduced in , and we further discuss it in example [ example2 ] .the _ _ best - response correspondence _ _ ( or simply , best response ) of player to strategy profile is the correspondence or equivalently , .a fundamental solution concept for strategic form games is the nash equilibrium : a strategy profile is a pure - strategy _ nash equilibrium _ ( or simply a nash equilibrium ) of game if for every and ; equivalently , for every .that is , is a solution to the optimization problem . at the nash equilibrium, no player can improve his / her payoff by adopting a different strategy _ unilaterally _ ; thus , no player has an incentive to unilaterally deviate from the equilibrium .the nash equilibrium is a proper solution concept ; however , the existence of a pure - strategy nash equilibrium is not necessarily guaranteed , as shown in the next example . .a cycle results from the best - response adjustment . ][ example2 ] consider and the arrangement shown in fig .[ fig : loop ] , i.e. , , for every , and , , and .the game does not have a nash equilibrium , i.e. , for every channel allocation , at least one pair has an incentive to change his / her channel .the details are as follows : when all players choose the same channel , e.g. , , every player has an incentive to change his / her channel because for all ; thus , it is not in nash equilibrium . on the contrary , when two players choose the same channel , and the third player chooses a different channel , e.g. , , as shown in fig .[ fig : loop](a ) , , i.e. , pair 2 has an incentive to change its channel from 1 to 2 , and ( [ eq : ne ] ) does not hold . because of the symmetry property of the arrangement in fig .[ fig : loop ] , every strategy profile does not satisfy ( [ eq : ne ] ) .furthermore , the best - response channel adjustments , which will be formally discussed in section [ sec : learning ] , cycle as , , , , , , and , as shown in figs .[ fig : loop](a - f ) . 
the channel allocation game is discussed further in section [ ssec : nie ] .we state key definitions and properties of potential games in section [ ssec : properties ] , show how to identify and design exact potential games in sections [ ssec : identification_exact ] and [ ssec : design ] , and show how to identify ordinal potential games in section [ ssec : identification_ordinal ] .monderer and shapley introduced the following classes of potential games : a strategic form game is an _ exact potential game _ ( epg ) if there exists an _ exact potential function _ such that for every , , and .a strategic form game is a _ weighted potential game _ ( wpg ) if there exist a _ weighted potential function _ and a set of positive numbers such that for every , , and .a strategic form game is an _ ordinal potential game _ ( opg ) if there exists an _ordinal potential function _ such that for every , , and , where denotes the sign function .although the potential function is independent of the indices of the players , reflects any unilateral change in any payoff function for every player .since an epg is a wpg and a wpg is an opg , the following properties of opgs are satisfied by epgs and wpgs .[ th : existence_finite ] every opg with finite strategy sets possesses at least one nash equilibrium ( * ? ? ?* corollary 2.2 ) .[ th : existence_infinite ] in the case of infinite strategy sets , every opg with compact strategy sets and continuous payoff functions possesses at least one nash equilibrium ( * ? ? ?* lemma 4.3 ) .[ th : unique ] every opg with a compact and convex strategy space , and a strictly concave and continuously differentiable potential function possesses a unique nash equilibrium ( * ? ? ?* theorem 2), .the most important property of potential games is _ acyclicity _ , which is also referred to as the finite improvement property .a _ path _ in is a sequence , { \boldsymbol a}[1 ] , \ldots) ] while = { \boldsymbol a}_{-i}[k\!-\!1] ] is an _ improvement path _ if , for every , ) > u_i({\boldsymbol a}[k\!-\!1]) ] to his / her best response ] .note that while the term `` best - response dynamics '' was introduced by matsui , it has many representations depending on the type of game .we also note that best - response dynamics may converge to sub - optimal nash equilibria .by contrast , the following spatial adaptive play can converge to the optimal nash equilibrium . to be precise, it maximizes the potential function with arbitrarily high probability .consider a game with a finite number of strategy sets ._ log - linear learning _ , _ spatial adaptive play _ , and _ logit - response dynamics _ refer to the following update rule : at each step , a player unilaterally changes his / her strategy from ] .thus , is an epg with potential i.e. , the sum of the inverse sinr in the network .note that the above expression is a single carrier version of orthogonal channel selection .menon et al . discussed a waveform adaptation version of that can be applied to codeword selection in non - orthogonal code division multiple access ( cdma ) , and buzzi et al . discussed waveform adaptation .buzzi et al . also discussed an ofdma subcarrier allocation version of .cai et al . discussed joint transmission power and channel assignment utilizing the payoff function ( [ eq : buzzi_utility ] ) of .gllego et al . 
proposed using the network throughput of joint power and channel assignment , as potential , where is the bandwidth of channel , and is the required sinr .it may have been difficult to derive a simple payoff function , and they thus proposed the wlu ( [ eq : wlu2 ] ) of ( [ eq : gallego ] ) .yu et al . and chen et al . considered sensor networks where each rx ( sink ) receives messages from multiple txs ( sensors ) .they proved that a channel selection that minimizes the number of received and generated interference signals is an epg , where the potential is the number of total interference signals .note that the average number of retries is approximately proportional to the number of received interference signals when the probability that the messages are transmitted is very small , as in sensor networks .a simpler and related form of ( [ eq : nie_utility ] ) is detailed in the following discussion . to reduce the information exchange required to evaluate ( [ eq : nie_utility ] ) , yamamoto et al . proposed using the number of received and generated interference sources as the payoff function , where the received interference power is greater than a given threshold , i.e. , this model is sometimes referred to as a `` binary '' interference model in comparison with a `` physical '' interference model . because is a bsi game with , is an epg .when we consider a directed graph , where edges between tx and rx indicate , we denote tx s neighboring rxs by , and rx s neighboring txs by . using these expressions , ( [ eq : yamamoto_utility ] ) can be rewritten to yang et al . discussed a multi - channel version of .in section [ ssec : nie ] , channel allocation games in the tx - rx pair model shown in fig . [fig : system_models ] are discussed .neel et al . considered a different channel allocation game typically applied to channel allocation for aps in the wireless local area networks ( wlans ) shown in fig .[ fig : system_models ] , where each tx selects a channel to minimize the interference from other txs , i.e. , where is the common transmission power level for every tx .note that in this scenario , whereas in the tx - rx pair model shown in fig .[ fig : system_models ] .moreover , note that interference from stations other than the txs is not taken into account in the payoff function .in addition to the tx network model , channel selection can be applied to the canonical network model shown in fig .[ fig : system_models ] . because is a bsi game where , it is an epg with potential which corresponds to the aggregated interference power among txs .neel et al .pointed out that other symmetric interference functions , e.g. , , where is the common bandwidth for every channel , can be used instead of in ( [ eq : neel_utility ] ) .kauffmann et al . discussed essentially the same problem .however , they considered player - specific noise , and derived ( [ eq : neel_utility ] ) by substituting and into ( [ eq : kauffmann_utility ] ) .compared with the payoff function ( [ eq : nie_utility ] ) , ( [ eq : neel_utility ] ) can be evaluated with only local information available at each tx ; however , the transmission power levels of all txs need to be identical .we further discuss this requirement in section [ ssec : non - identical ] .liu and wu reformulated the game represented by ( [ eq : neel_utility ] ) as a cg by introducing virtual resources .further discussion can be found in . 
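to make the structure of these interference games concrete , the following is a minimal python sketch ( not taken from any of the cited works ) of a toy channel selection game of the form of ( [ eq : neel_utility ] ) , with identical transmission power levels and reciprocal link gains ; it checks numerically that the ( negative ) aggregate co-channel interference acts as an exact potential , and that asynchronous best-response updates stop at a pure nash equilibrium , illustrating the finite improvement property . the instance size , gain values and channel set are arbitrary placeholders .

import random

# toy instance: n txs, reciprocal link gains g_ij = g_ji, identical power p
random.seed(1)
n_tx, channels, p = 6, [0, 1, 2], 1.0
g = [[0.0] * n_tx for _ in range(n_tx)]
for i in range(n_tx):
    for j in range(i + 1, n_tx):
        g[i][j] = g[j][i] = random.uniform(0.1, 1.0)

def payoff(i, c):
    # u_i(c) = - co-channel interference received by tx i, cf. (eq:neel_utility)
    return -sum(p * g[i][j] for j in range(n_tx) if j != i and c[j] == c[i])

def potential(c):
    # V(c) = - aggregate co-channel interference, each tx pair counted once
    return -sum(p * g[i][j] for i in range(n_tx)
                for j in range(i + 1, n_tx) if c[i] == c[j])

# numerical check of the exact-potential identity for unilateral deviations
c = [random.choice(channels) for _ in range(n_tx)]
for i in range(n_tx):
    for ch in channels:
        c2 = c.copy()
        c2[i] = ch
        assert abs((payoff(i, c2) - payoff(i, c))
                   - (potential(c2) - potential(c))) < 1e-9

# asynchronous best-response updates: the potential never decreases and the
# play reaches a pure nash equilibrium in finitely many steps
improved = True
while improved:
    improved = False
    for i in range(n_tx):
        best = max(channels, key=lambda ch: payoff(i, c[:i] + [ch] + c[i + 1:]))
        if payoff(i, c[:i] + [best] + c[i + 1:]) > payoff(i, c) + 1e-9:
            c[i] = best
            improved = True
print("nash equilibrium channels:", c, "potential:", round(potential(c), 3))

a log-linear ( logit ) variant of the kind discussed in section [ sec : learning ] would replace the argmax with a boltzmann sampling over the candidate channels , trading convergence speed for the ability to escape poor equilibria .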
to avoid the requirement of identical transmission power levels in ( [ eq : neel_utility ] ), neel proposed using the product of ( constant ) transmission power level and interference as the payoff function , i.e. , because is a bsi game with , is an epg with note that this form of payoff functions was provided by menon et al . in the context of waveform adaptations .this game under frequency - selective channels was discussed by wu et al . .the relationship between ( [ eq : babadi_utility ] ) and its exact potential function ( [ eq : babadi_v ] ) implies that the game with payoff function is a wpg with potential function and in ( [ eq : wpg ] ) , i.e. , the identical transmission power level required in ( [ eq : neel_utility ] ) is not necessarily required for the game to have the fip .this was made clear by bahramian et al . and babadi et al . . as extensions , in , the interference management game on graph structures with the following payoff functionwas discussed : proposed using the expected value of interference in order to manage fluctuating interference .zheng treated dynamical on - off according to traffic variations in .menon et al . showed that a waveform adaptation game where the payoff function is the sinr or the mean - squared error at the rx is an opg .chen and huang showed that a channel allocation game in the tx network model shown in fig .[ fig : system_models ] , or in the canonical network model shown in fig .[ fig : system_models ] , where the payoff function is the sinr or a shannon capacity , is an opg . here, we provide a derivation in the form of channel allocation according to the derivation provided in .a channel selection game with payoff function \label{eq : pin_utility}\end{aligned}\ ] ] is an epg with potential because is a constant in ( [ eq : pin_utility ] ) , by theorem [ eq : ordinal_transform ] , with payoff is an opg with potential . as a result , once again using theorem [ eq : ordinal_transform ] , with payoff is an opg with potential . further discuss , where the active tx set can be stochastically changed .a quite relevant discussion was conducted by song et al .they discussed a joint transmission power and channel assignment game to maximize throughput : where represents throughput depending on sinr .they pointed out that since each user would set the maximum transmission power at a nash equilibrium , is equivalent to the channel selection game .further discussion on joint transmission power and channel assignment can be found in .for the interference graph shown in fig .[ fig : system_models ] , xu et al . proposed using the number of neighbors that select the same channel as the payoff function , i.e. , we would like to point out that ( [ eq : xustsp1_utility ] ) can be reformulated to i.e. , is a bsi game with . thus is an epg .note that this is a special case of singleton cgs on graphs discussed in section [ ssec : aloha ] . as variations of , xu et al . discussed the impact of partially overlapped channels .yuan et al . discussed the variable - bandwidth channel allocation problem .zheng et al . took into account stochastic channel access according to the carrier sense multiple access ( csma ) protocol . discussed a multi - channel version of .liu et al . 
discussed a common control channel assignment problem for cognitive radios , and proposed using for the payoff function so that every player chooses the same channel .this game is similar to the consensus game .channels can be viewed as common resources in the congestion model introduced in section [ sssec : congestion_game ] . in general , throughput when using a channel depends only on the number of stations that select the relevant channel .a cg formulation is thus frequently used for channel selection problems .altman et al . formulated a multi - channel selection game in a single collision domain as a cg . based on a cg formulation , channel selections by secondary stations were discussed in . a channel selection problem in multiple collision domainswas discussed in .iellamo et al . used numerically evaluated successful access probabilities depending on the number of stations in csma / ca as payoff functions . here , we discuss channel selection problems in interference graph , where each node attempts to adjust its channel to maximize its successful access probability or throughput . consider collision channels shared using slotted aloha .each node adjusts its channel to avoid simultaneous transmissions on the same channel because these result in collisions . in this case , when one node exclusively chooses a channel , the node can transmit without collisions .thus , the following payoff function captures the benefit of nodes : is a singleton cg on graphs , and thomas et al . showed that is an opg is equivalent to when setting for every . ] .consider that each node has a transmission probability ( ) .chen and huang proposed using the logarithm of successful access probability , \end{aligned}\ ] ] and proved that is a wpg . here , we provide a different proof .when we consider \nonumber \\ & = - \log ( 1 - x_i ) \log ( x_i ) \nonumber\\ & \hphantom { = { } } - \log ( 1 - x_i ) \textstyle\sum _ { j \neq i } \indicator { c_j = c_i } \indicator { ij \in \mathcal { e } } \log ( 1 - x_j ) , \nonumber\end{aligned}\ ] ] is a bsi game with .thus , is a wpg and , by theorem [ eq : ordinal_transform ] , with payoff is an opg . chen and huang further discussed with player - specific constants and proved that the game is an opg . before concluding this section, we would like to point out the relationship between and cgs .when we assume an identical transmission probability for every , we get i.e. , is a cg on graphs .let the backoff time of player be denoted by ] to maximize the following successful access probability ( minus the cost ) : this is a well - known payoff function .further discussion can be found in . because ( [ eq : saloha_utility ] ) satisfies ( [ eq : twice_difference ] ) , ) , ( u{{\ref{eq : saloha}}}_i ) ) ] and where , and is the number of txs with whom tx establishes ( possibly over multiple hops ) a communication path using bidirectional links . note that when .this game has been shown to be an opg with note that the mathematical representation of using connectivity matrix was first proposed in .komali et al . 
also discussed interference reduction through channel assignment , which is seen as a combination of and a channel assignment version of .they further discussed the impact of the amount of knowledge regarding the network on the spectral efficiency .chu and sethu considered battery - operated stations and formulated transmission power control to prolong network lifetime while maintaining connectivity as an opg .similar approaches can be found in , and the joint assignment of transmission power and channels was discussed in .liu et al . formulated measures for transmission power and sensing range adjustment to enhance energy efficiency while maintaining sensor coverage as an opg .baar et al . formulated a flow and congestion control game , where each user adjusts the amount of traffic flow to enhance where represents the commodity - link cost of congestion .because ( [ eq : basar_utility ] ) is a combination of self - motivated and coordination functions , a game with payoff function is an epg with the learning process of this game was further discussed by scutari et al .other payoff functions for flow control were discussed in .douligeris and mazumdar , and zhang and douligeris introduced an m / m/1 queuing game , where each user transmits packets to a single server at departure rate and adjusts the arrival rate to maximize the `` power '' , which is defined as the throughput divided by the delay , i.e. , where is a factor that controls the trade - off between throughput and delay .note that this game is a cournot game ( see ( [ eq : cournot ] ) ) when for every .gai et al . proved that is an opg . here , we provide a different proof . because a game with payoff function is an epg , by theorem [ eq : ordinal_transform ], is an opg .marden et al . pointed out that the sensor deployment problem ( see and references therein ) , where each mobile node updates its location to forward data from immobile sources to immobile destinations , can be formulated as an epg .since the required transmission power to an adjacent node is proportional to the square of the propagation distance , , in a free - space propagation environment , minimizing the total required transmission power problem is formulated as a maximization problem with global objective if is used as the payoff function of node , is equivalent to the consensus game .a sensor coverage problem is formulated as a maximization problem with global objective in continuous form \ , \dd r,\end{aligned}\ ] ] or in discrete form ,\end{aligned}\ ] ] where is the specific region to be monitored , is an event density function or value function that indicates the probability density of an event occurring at point , $ ] is the probability of sensor to detect an event occurring at , and is the location of sensor . for a summary of coverage problems ,we refer the interested reader to .arslan et al . discussed a game where each mobile sensor updates its location , treated as potential , and proposed assigning a wlu to each sensor , i.e. , where corresponds to the probability that sensor detects an event occurring at alone .further discussion can be found in .we would like to note that has a similar expression with . in the same manner in , drr et al . treated as potential and proposed assigning a wlu \ , \dd r. 
\label{eq : coverage_continuous_utility}\end{aligned}\ ] ] zhu and martnez considered mobile sensors with a directional sensing area .each mobile sensor updates its location and direction .the reward from a target is fairly allocated to sensors covering the target .arsie et al . considered a game where each node attempts to maximize the expected value of the reward . here , each node receives the reward if node is the first to reach point , and the value of the reward is the time until the second node arrives , i.e. , was proved to be an epg . formulated a time slot assignment problem for immobile sensors , which is equivalent to a channel allocation problem , as , where each sensor selects a slot to maximize the area covered only by sensor , i.e. , where is the sensing area covered by sensor .game was proved to be an epg with potential where corresponds to the average coverage . to show the close relationship between the payoff functions ( [ eq : saloha_utility ] ) in the slotted aloha game and ( [ eq : coverage_continuous_utility ] ) in the coverage game , we provide different expressions . using , we get where the surface integral is taken over the whole area .wang et al . further discussed this problem .song et al . applied the coverage game to camera networks .ding et al . discussed a pan - tilt - zoom ( ptz ) camera network to track multiple targets .another potential game - theoretic ptz camera control scheme was proposed in , motivated by natural environmental monitoring .directional sensors were discussed in .the form of payoff functions is similar to ( [ eq : coverage_discrete_utility ] ) . until now, each immobile sensor was assumed to receive a payoff when it covered a target alone .yen et al . discussed a game where each sensor receives a payoff when the number of sensors covering a target is smaller than or equal to the allowable number .since this game falls within a class of cgs , it is also an epg .we have provided a comprehensive survey of potential game approaches to wireless networks , including channel assignment problems and transmission power assignment problems .although there are a variety of payoff functions that have been proven to have potential , there are some representative forms , e.g. , bsi games and congestion games , and we have shown the relations between representative forms and individual payoff functions .we hope the relations shown in this paper will provide insights useful in designing wireless technologies .other problems that have been formulated in terms of potential games are found in routing , bs / ap selection , cooperative transmissions , secrecy rate maximization , code design for radar , broadcasting , spectrum market , network coding , data cashing , social networks , computation offloading , localization , and demand - side management in smart grids .this work was supported in part by jsps kakenhi grant numbers 24360149 , 15k06062 .the author would like to acknowledge dr .takeshi hatanaka at tokyo institute of technology for his insightful comments on cooperative control and ptz camera control .the author acknowledges dr .i. wayan mustika at universitas gadjah mada , and dr .masahiro morikura and dr .takayuki nishio at kyoto university for their comments .h. al - tous and i. barhumi , `` joint power and bandwidth allocation for amplify - and - forward cooperative communications using stackelberg game , '' _ ieee trans ._ , vol .62 , no . 4 , pp .16781691 , may 2013 .t. alpcan and t. 
baar , `` a game - theoretic framework for congestion control in general topology networks , '' in _ proc .41st ieee conf . decision and control ( cdc )_ , vol . 2 , las vegas , nv , dec . 2002 , pp . 12181224 .e. altman , y. hayel , and h. kameda , `` evolutionary dynamics and potential games in non - cooperative routing , '' in _ proc .modeling optim .mobile , ad hoc , wireless netw .( wiopt ) _ , limassol , apr .2007 , pp . 15 .e. altman , a. kumar , and y. hayel , `` a potential game approach for uplink resource allocation in a multichannel wireless access network , '' in _ proc .icst conf .tools ( valuetools ) _ , pisa , italy , oct . 2009 , pp .72:172:9 .g. arslan , m. f. demirkol , and y. song , `` equilibrium efficiency improvement in mimo interference systems : a decentralized stream control approach , '' _ ieee trans .wireless commun ._ , vol . 6 , no . 8 , pp . 29842993 , aug .2007 .t. baar and r. srikant , `` revenue - maximizing pricing and capacity expansion in a many - users regime , '' in _ proc .( infocom ) _ , vol . 1 , new york , ny , jun .2002 , pp . 294301 .m. bennis , s. m. perlaza , p. blasco , z. han , and h. v. poor , `` self - organization in small cell networks : a reinforcement learning approach , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 7 , pp . 32023212 , jul2013 .m. bloem , t. alpcan , and t. baar , `` a stackelberg game for power control and channel allocation in cognitive radio networks , '' in _ proc .workshop game theory commun .( gamecomm ) _ , nantes , france , oct .2007 , pp . 19 .s. buzzi , g. colavolpe , d. saturnino , and a. zappone , `` potential games for energy - efficient power control and subcarrier allocation in uplink multicell ofdma systems , '' _ ieee j. sel .topics signal process . _ ,vol . 6 , no . 2 , pp. 89103 , apr .2012 .y. cai , j. zheng , y. wei , y. xu , and a. anpalagan , `` a joint game - theoretic interference coordination approach in uplink multi - cell ofdma networks , '' _ wireless pers ._ , vol .80 , no . 3 , pp .12031215 , feb .2015 .m. canales , j. ortn , and j. r. gllego , `` game theoretic approach for end - to - end resource allocation in multihop cognitive radio networks , '' _ ieee commun ._ , vol . 16 , no . 5 , pp .654657 , may 2012 .u. o. candogan , i. menache , a. ozdaglar , and p. a. parrilo , `` competitive scheduling in wireless collision channels with correlated channel state , '' in _ proc .game theory netw .( gamenets ) _ , istanbul , may 2009 , pp . 621630 .j. chen , q. yu , p. cheng , y. sun , y. fan , and x. shen , `` game theoretical approach for channel allocation in wireless sensor and actuator networks , '' _ ieee trans .56 , no .23322344 , oct . 2011 .x. chen , x. gong , l. yang , and j. zhang , `` a social group utility maximization framework with applications in database assisted spectrum access , '' in _ proc .( infocom ) _ , apr .2014 , pp . 19591967 .x. chu and h. sethu , `` cooperative topology control with adaptation for improved lifetime in wireless ad hoc networks , '' in _ proc .comput . commun .( infocom ) _ , orlando , fl , mar .2012 , pp . 262270 .k. cohen , a. nedi , and r. srikant , `` distributed learning algorithms for spectrum sharing in spatial random access networks , '' in _ proc .modeling optim .mobile , ad hoc , wireless netw .( wiopt ) _ , may 2015 .h. dai , y. huang , and l. yang , `` game theoretic max - logit learning approaches for joint base station selection and resource allocation in heterogeneous networks , '' _ to appear in ieee j. 
sel .areas commun ._ , 2015 .e. del re , g. gorni , l. s. ronga , and r. suffritti , `` resource allocation in cognitive radio networks : a comparison between game theory based and heuristic approaches , '' _ wireless pers ._ , vol .49 , no . 3 , pp .375390 , may 2009 .m. derakhshani and t. le - ngoc , `` distributed learning - based spectrum allocation with noisy observations in cognitive radio networks , '' _ ieee trans ._ , vol . 63 , no . 8 , pp .37153725 , oct .2014 .b. f. duarte , z. m. fadlullah , a. v. vasilakos , and n. kato , `` on the partially overlapped channel assignment on wireless mesh network backbone : a game theoretic approach , '' _ ieee j. sel .areas commun . _ ,30 , no . 1 ,119127 , jan . 2012 .drr , m. s. stankovi , and k. h. johansson , `` distributed positioning of autonomous mobile sensors with application to coverage control , '' in _ proc .control conf .( acc ) _ , san francisco , ca , jun .2011 , pp . 48224827 .y. gai , h. liu , and b. krishnamachari , `` a packet dropping - based incentive mechanism for m / m/1 queues with selfish users , '' in _ proc .( infocom ) _ , shanghai , apr .2011 , pp . 26872695 .j. r. gllego , m. canales , and j. ortn , `` distributed resource allocation in cognitive radio networks with a game learning approach to improve aggregate system capacity , '' _ ad hoc netw . _ , vol .10 , no . 6 , pp . 10761089 , aug . 2012 .a. ghosh , l. cottatellucci , and e. altman , `` normalized nash equilibrium for power allocation in femto base stations in heterogeneous network , '' in _ proc .modeling optim .mobile , ad hoc , wireless netw . ( wiopt )_ , mumbai india , may 2015 .hao , y .- x. zhang , n. jia , and b. liu , `` virtual game - based energy balanced topology control algorithm for wireless sensor networks , '' _ wireless pers ._ , vol .69 , no . 4 , pp . 12891308 , apr .hao , y .- x .zhang , and b. liu , `` distributed cooperative control algorithm for topology control and channel allocation in multi - radio multi - channel wireless sensor network : from a game perspective , '' _ wireless pers ._ , vol .73 , no . 3 , pp .353379 , dec . 2013 .t. hatanaka , y. wasa , and m. fujita , `` game theoretic cooperative control of ptz visual sensor networks for environmental change monitoring , '' in _ proc .52nd ieee conf .decision and control ( cdc ) _ , firenze , dec .2013 , pp . 76347640 .g. he , l. cottatellucci , and m. debbah , `` the waterfilling game - theoretical framework for distributed wireless network information flow , '' _ eurasip j. wirel .2010 , no . 1 ,113 , jan . 2010 . h.he , j. chen , s. deng , and s. li , `` game theoretic analysis of joint channel selection and power allocation in cognitive radio networks , '' in _ proc .oriented wireless netw .( crowncom ) _ , may 2008 , pp . 15 .m. hong , a. garcia , j. barrera , and s. g. wilson , `` joint access point selection and power allocation for uplink wireless networks , '' _ ieee trans . signal process ._ , vol .61 , no . 13 , pp . 33343347 , jul .2013 . c. ibars , m. navarro , and l. giupponi ,`` distributed demand management in smart grid with a congestion game , '' in _ proc .. smart grid commun .( smartgridcomm ) _ , gaithersburg , md , oct .2010 , pp . 495500 .l. jiao , x. zhang , o .- c .granmo , and b. j. oommen , `` a bayesian learning automata - based distributed channel selection scheme for cognitive radio networks , '' in _ proc .other applications applied intelligent syst . _ , kaohsiung , taiwan , jun .2014 , pp .4857 .b. kauffmann , f. baccelli , a. chaintreau , v. 
mhatre , k. papagiannaki , and c. diot , `` measurement - based self organization of interfering 802.11 wireless access networks , '' in _ proc .ieee int . conf .( infocom ) _ , anchorage , ak , 2007 , pp .14511459 .r. s. komali and a. b. mackenzie , `` analyzing selfish topology control in multi - radio multi - channel multi - hop wireless networks , '' in _ proc .conf . commun .( icc ) _ , dresden , jun .2009 , pp .r. s. komali , r. w. thomas , l. a. dasilva , and a. b. mackenzie , `` the price of ignorance : distributed topology control in cognitive networks , '' _ ieee trans .wireless commun ._ , vol .4 , pp . 14341445 , apr .o. korcak , t. alpcan , and g. iosifidis , `` collusion of operators in wireless spectrum markets , '' in _ proc .modeling optim .mobile , ad hoc , wireless netw .( wiopt ) _ , paderborn , germany , may 2012 , pp .3340 . q. d. la , y. h. chew , and b. h. soong , `` an interference - minimization potential game for ofdma - based distributed spectrum sharing systems , '' _ ieee trans ._ , vol . 60 , no . 7 , pp. 33743385 , sep .2011 .q. d. la , y. h. chew , and b .- h .soong , `` subcarrier assignment in multi - cell ofdma systems via interference minimization game , '' in _ proc .ieee wireless commun .netw . conf .( wcnc ) _ , shanghai , apr .2012 , pp . 13211325 .d. li and j. gross , `` distributed tv spectrum allocation for cognitive cellular network under game theoretical framework , '' in _ proc .new frontiers dynamic spectrum access netw .( dyspan ) _ , bellevue , wa , oct .2012 , pp . 327338 .j. li , w. liu , and k. yue , `` a game - theoretic approach for balancing the tradeoffs between data availability and query delay in multi - hop cellular networks , '' in _ proc .theory and appl .models of comput .( tamc ) _ , beijing , china , may 2012 , pp .487497 .j. li , k. yue , w. liu , and q. liu , `` game - theoretic based distributed scheduling algorithms for minimum coverage breach in directional sensor networks , '' _ int .j. distrib .sens . n. _ , vol .2014 , pp .112 , may 2014 .q. liang , x. wang , x. tian , f. wu , and q. zhang , `` two - dimensional route switching in cognitive radio networks : a game - theoretical framework , '' _ to appear in ieee / acm trans ._ , vol .pp , no . 99 , 2014 .d. ling , z. lu , y. ju , x. wen , and w. zheng , `` a multi - cell adaptive resource allocation scheme based on potential game for icic in lte - a , '' _ int . j. commun ._ , vol . 27 , no . 11 , pp27442761 , nov .2014 .l. liu , r. wang , and f. xiao , `` topology control algorithm for underwater wireless sensor networks using gps - free mobile sensor nodes , '' _ j. netw_ , vol .35 , no . 6 , pp .19531963 , nov .2012 .y. liu , l. dong , and r. j. marks ii , `` common control channel assignment in cognitive radio networks using potential game theory , '' in _ proc .ieee wireless commun .( wcnc ) _ , shanghai , apr .2013 , pp . 315320 .s. maghsudi and s. staczak , `` joint channel selection and power control in infrastructureless wireless networks : a multi - player multi - armed bandit framework , '' _ to appear in ieee trans . veh ._ , vol .pp , no . 99 , 2014 .r. maheshwari , s. jain , and s. r. das , `` a measurement study of interference modeling and scheduling in low - power wireless networks , '' in _ proc .6th acm conf .embedded netw .sensor syst .( sensys ) _ , raleigh , nc , nov .2008 , pp . 141154 .m. mavronicolas , i. milchtaich , b. monien , and k. tiemann , `` congestion games with player - specific constants , '' in _ proc .math . 
found .( mfcs ) _ , czech republic , aug .2007 , pp . 633644 .r. menon , a. b. mackenzie , r. m. buehrer , and j. h. reed , `` a game - theoretic framework for interference avoidance in ad hoc networks , '' in _ proc .ieee global telecommun .( globecom ) _ , san francisco , ca , nov .2006 , pp . 16 .p. mertikopoulos , e. v. belmega , a. l. moustakas , and s. lasaulce , `` distributed learning policies for power allocation in multiple access channels , '' _ ieee j. sel .areas commun ._ , vol . 30 , no . 1 , pp .96106 , jan .2012 .n. moshtagh , r. mehra , and m. mesbahi , `` topology control of dynamic networks in the presence of local and global constraints , '' in _ proc .( icra ) _ , anchorage , ak , may 2010 , pp . 27182723 .i. w. mustika , k. yamamoto , h. murata , and s. yoshida , `` potential game approach for spectrum sharing in distributed cognitive radio networks , '' _ ieice trans .e93-b , no .32843292 , dec . 2010 .j. neel , r. m. buehrer , b. h. reed , and r. p. gilles , `` game theoretic analysis of a network of cognitive radios , '' in _ proc .midwest symp .circuits syst .( mwscas ) _ , vol . 3 , tulsa , ok , aug .2002 , pp .iii409iii412 .j. o. neel , r. menon , a. b. mackenzie , j. h. reed , and r. p. gilles , `` interference reducing networks , '' in _ proc .oriented wireless netw .( crowncom ) _ , orlando , fl , aug .2007 , pp . 96104 .j. o. neel and j. h. reed , `` performance of distributed dynamic frequency selection schemes for interference reducing networks , '' in _ proc .ieee military commun .( milcom ) _ ,washington , dc , oct .2006 , pp . 17 .j. o. neel , `` analysis and design of cognitive radio networks and distributed radio resource management algorithms , '' ph.d .dissertation , virginia polytechnic inst .state univ ., blacksburg , va , sep .2006 .n. nie , c. comaniciu , and p. agrawal , `` a game theoretic approach to interference management in cognitive networks , '' in _ wireless communications , the i m a volumes in mathematics and its applications_. 1em plus 0.5em minus 0.4emnew york , ny : springer new york , 2007 , vol . 143 , pp . 199219 .j. ortn , j. r. gllego , and m. canales , `` joint route selection and resource allocation in multihop wireless networks based on a game theoretic approach , '' _ ad hoc netw ._ , vol . 11 , no . 8 , pp .22032216 , nov .m. peltomki , j .-koljonen , o. tirkkonen , and m. alava , `` algorithms for self - organized resource allocation in wireless networks , '' _ ieee trans . veh ._ , vol .61 , no . 1 ,346359 , jan . 2012 .s. perlaza , e. belmega , s. lasaulce , and m. debbah , `` on the base station selection and base station sharing in self - configuring networks , '' in _ proc . int .icst conf .tools ( valuetools ) _ , pisa , italy , oct . 2009 , pp .71:171:10 .v. ramaswamy , v. reddy , s. shakkottai , a. sprintson , and n. gautam , `` multipath wireless network coding : an augmented potential game perspective , '' _ ieee / acm trans ._ , vol .22 , no . 1 ,217229 , feb .v. reddy , s. shakkottai , a. sprintson , and n. gautam , `` multipath wireless network coding : a population game perspective , '' in _ proc .( infocom ) _ , san diego , ca , mar .2010 , pp . 19 .g. scutari , s. barbarossa , and d. palomar , `` potential games : a framework for vector power control problems with coupled constraints , '' in _ proc. 31st ieee int .speech signal process .( icassp ) _ , vol . 4 ,toulouse , may 2006 , pp .241244 .s. secci , j .- l .rougier , a. pattavina , f. patrone , and g. 
maier , `` peering equilibrium multipath routing : a game theory framework for internet peering settlements , '' _ ieee / acm trans ._ , vol . 19 , no . 2 , pp. 419432 , apr .2011 . c. singh , a. kumar , and r. sundaresan , `` uplink power control and base station association in multichannel cellular networks , '' in _ proc .game theory netw .( gamenets ) _ , istanbul , may 2009 , pp .4351 . y. song , c. zhang , and y. fang , `` joint channel and power allocation in wireless mesh networks : a game theoretical perspective , '' _ ieee j. sel .areas commun ._ , vol . 26 , no . 7 , pp . 11491159 , sep. 2008 .v. srivastava , j. neel , a. b. mackenzie , r. menon , l. a. dasilva , j. e. hicks , j. h. reed , and r. p. gilles , `` using game theory to analyze wireless ad hoc networks , '' _ ieee commun . surveys tuts . _ , vol . 7 , no . 4 , pp .4656 , jan .2005 .r. w. thomas , r. s. komali , a. b. mackenzie , and l. a. dasilva , `` joint power and channel minimization in topology control : a cognitive network approach , '' in _ proc .( icc ) _ , glasgow , jun .2007 , pp . 65386543 .tran , p .-n . tran , and n. boukhatem , `` strategy game for flow / interface association in multi - homed mobile terminals , '' in _ proc .( icc ) _ , cape town , south africa , may 2010 , pp .tseng , f .-chien , d. zhang , r. y. chang , w .- h .chung , and c. huang , `` network selection in cognitive heterogeneous networks using stochastic learning , '' _ ieee commun ._ , vol .17 , no . 12 , pp .23042307 , dec .j. n. tsitsiklis , d. p. bertsekas , and m. athans , `` distributed asynchronous deterministic and stochastic gradient optimization algorithms , '' _ ieee trans .31 , no . 9 , pp . 803812 , sep .1986 .z. uykan and r. jantti , `` joint optimization of transmission - order selection and channel allocation for bidirectional wireless links part i : game theoretic analysis , '' _ ieee trans .wireless commun ._ , vol . 13 , no . 7 , pp . 40034013 , jul. 2014 . , `` joint optimization of transmission - order selection and channel allocation for bidirectional wireless links part ii : algorithms , '' _ ieee trans .wireless commun ._ , vol . 13 , no . 7 , pp .39914002 , jul .2014 .j. wang , y. xu , a. anpalagan , q. wu , and z. gao , `` optimal distributed interference avoidance : potential game and learning , '' _ trans ._ , vol . 23 , no . 4 , pp .317326 , jun . 2012 .q. wang , w. yan , and y. shen , `` -person card game approach for solving set -cover problem in wireless sensor networks , '' _ ieee trans ._ , vol .61 , no . 5 , pp .15221535 , may 2012 . c. wu , h. mohsenian - rad , j. huang , and a. y. wang , `` demand side management for wind power integration in microgrid using dynamic potential game theory , '' in _ proc .ieee global telecommun .workshops ( globecom workshops ) _ , houston , tx , dec .2011 , pp . 11991204 .q. wu , y. xu , j. wang , l. shen , j. zheng , and a. anpalagan , `` distributed channel selection in time - varying radio environment : interference mitigation game with uncoupled stochastic learning , '' _ ieee trans ._ , vol .62 , no . 9 , pp . 45244538 , nov .2013 .y. xu , a. anpalagan , q. wu , l. shen , z. gao , and j. wang , `` decision - theoretic distributed channel selection for opportunistic spectrum access : strategies , challenges and solutions , '' _ ieee commun .surveys tuts ._ , vol .15 , no . 4 , pp . 16891713 , 2013 .y. xu , j. wang , q. wu , a. 
anpalagan , and y .- d .yao , `` opportunistic spectrum access in cognitive radio networks : global optimization using local interaction games , '' _ ieee j. sel. topics signal process . _ ,vol . 6 , no . 2 , pp. 180194 , apr .2012 .y. xu , j. wang , y. xu , l. shen , q. wu , and a. anpalagan , `` centralized - distributed spectrum access for small cell networks : a cloud - based game solution , '' 2015 .[ online ] .available : http://arxiv.org/abs/1502.06670 y. xu , q. wu , l. shen , j. wang , and a. anpalagan , `` opportunistic spectrum access with spatial reuse : graphical game and uncoupled learning solutions , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 10 ,48144826 , oct .y. xu , q. wu , j. wang , l. shen , and a. anpalagan , `` opportunistic spectrum access using partially overlapping channels : graphical game and uncoupled learning , '' _ ieee trans ._ , vol .61 , no . 9 , pp . 39063918 , sep .y. xu , y. zhang , q. wu , l. shen , and j. wang , `` distributed spectrum access for cognitive small cell networks : a robust graphical game approach , '' 2015 .[ online ] .available : http://arxiv.org/abs/1502.06667 k. yamamoto , h. murata , and s. yoshida , `` management of dominant interference in cognitive radio networks , '' in _ proc .21st ieee int .. personal , indoor and mobile radio commun .( pimrc ) workshops _ , istanbul , turkey , sep .2010 , pp . 451455 .d. yang , x. fang , and g. xue , `` channel allocation in non - cooperative multi - radio multi - channel wireless networks , '' in _ proc .( infocom ) _ , orlando , fl , mar .2012 , pp . 882890 .q. yu , j. chen , y. fan , x. shen , and y. sun , `` multi - channel assignment in wireless sensor networks : a game theoretic approach , '' in _ proc .( infocom ) _ , san diego , ca , mar .2010 , pp . 19 .e. zeydan , d. kivanc , and u. tureli , `` cross layer interference mitigation using a convergent two - stage game for ad hoc networks , '' in _ proc .( ciss ) _ , princeton , nj , mar .2008 , pp . 671675 .q. zhang , l. guo , c. sun , x. an , and x. chen , `` joint power control and component carrier assignment scheme in heterogeneous network with carrier aggregation , '' _ iet commun . _ , vol . 8 , no . 10 , pp . 18311836 , jul . 2014 .z. zhang and c. douligeris , `` convergence of synchronous and asynchronous greedy algorithms in a multiclass telecommunications environment , '' _ ieee trans ._ , vol .40 , no . 8 , pp . 12771281 , aug .j. zheng , y. cai , y. liu , y. xu , b. duan , and x. shen , `` optimal power allocation and user scheduling in multicell networks : base station cooperation using a game - theoretic approach , '' _ ieee trans .wireless commun ._ , vol . 13 , no . 12 , pp .69286942 , dec . 2014 .j. zheng , y. cai , x. chen , r. li , and h. zhang , `` optimal base station sleeping in green cellular networks : a distributed cooperative framework based on game theory , '' _ to appear in ieee trans .wireless commun . _ , 2015 .j. zheng , y. cai , y. xu , and a. anpalagan , `` distributed channel selection for interference mitigation in dynamic environment : a game - theoretic stochastic learning solution , '' _ ieee trans ._ , vol .63 , no . 9 , pp . 47574762 , nov .w. zhong , g. chen , s. jin , and k .- k .wong , `` relay selection and discrete power control for cognitive relay networks via potential game , '' _ ieee trans .signal process ._ , vol .62 , no . 20 , pp .54115424 , oct .2014 .w. zhong , y. xu , and h. 
tianfield , `` game - theoretic opportunistic spectrum sharing strategy selection for cognitive mimo multiple access channels , '' _ ieee trans .signal process ._ , vol .59 , no . 6 , pp . 27452759 , jun .2011 .koji yamamoto received the b.e .degree in electrical and electronic engineering from kyoto university in 2002 , and the m.e . and ph.d .degrees in informatics from kyoto university in 2004 and 2005 , respectively . from 2004 to 2005 , he was a research fellow of the japan society for the promotion of science ( jsps ) .since 2005 , he has been with the graduate school of informatics , kyoto university , where he is currently an associate professor . from 2008 to 2009 , he was a visiting researcher at wireless , royal institute of technology ( kth ) in sweden .his research interests include the application of game theory , spectrum sharing , and in - band full - duplex communications .he received the pimrc 2004 best student paper award in 2004 , the ericsson young scientist award in 2006 , and the young researcher s award from the ieice of japan in 2008 .he is a member of the ieee .
potential games form a class of non-cooperative games where unilateral improvement dynamics are guaranteed to converge in many practical cases. the potential game approach has been applied to a wide range of wireless network problems, particularly to a variety of channel assignment problems. in this paper, the properties of potential games are introduced, and games in wireless networks that have been proven to be potential games are comprehensively discussed.
keywords: potential game, game theory, radio resource management, channel assignment, transmission power control
the modelling and simulation of pedestrians and crowds is a consolidated and successful application of research results in the more general area of computer simulation of complex systems .it is an intrinsically interdisciplinary effort , with relevant contributions from disciplines ranging from physics and applied mathematics to computer science , often influenced by ( and sometimes in collaboration with ) anthropological , psychological , sociological studies and the humanities in general .the level of maturity of these approaches was sufficient to lead to the design and development of commercial software packages , offering useful and advanced functionalities to the end user ( e.g. cad integration , cad - like functionalities , advanced visualisation and analysis tools ) in addition to a simulation engine . nonetheless , as testified by a recent survey of the field by and by a report commissioned by the cabinet office by , there is still much room for innovations in models improving their performances both in terms of _ effectiveness _ in modelling pedestrians and crowd phenomena , in terms of _ expressiveness _ of the models ( i.e. simplifying the modelling activity or introducing the possibility of representing phenomena that were still not considered by existing approaches ) , and in terms of _ efficiency _ of the simulation tools .in addition to the above directions , we want to emphasise the fact that one of the sometimes overlooked aspects of a proper simulation project is related to the _ calibration _ and _ validation _ of the results of tools related to the _ synthesis _ of the pedestrians and crowd behaviour in the considered scenario .these phases are essentially related to the availability of proper empirical data about or , at least , relevant to , the considered scenario ranging from the pedestrian demand ( i.e. an origin destination matrix ) , preferences among different alternative movement choices ( e.g. percentage of persons employing stairs , escalators and elevators in a multiple level scenario ) , but also the average waiting times at service points ( i.e. queues ) , the average time required to cover certain paths , the spatial distribution of pedestrians in specific environmental conditions that is required to evaluate the so called `` level of service '' associated to portions of the environment as defined by .these data are results of activities of _ analysis _ , some of which can be fruitfully automated given , on one hand , the wide diffusion of cameras employed for video surveillance of public areas and , on the other , considering the level of maturity of video processing and analysis techniques. an _ integrated approach _ to pedestrians and crowd studies encompasses both the application of analysis and synthesis techniques that , in a virtuous circle , can mutually benefit one from the other , to effectively ( i ) identify , ( ii ) face and ( iii ) provide innovative solutions to challenges towards the improvement of the understanding of crowding phenomena .this kind of interdisciplinary study , in addition to computer science , often employs or directly involves research results in the area of social sciences in general , can be found in the literature .for instance , show how computational fields guiding simulated pedestrians movement in a simulated environment can be automatically derived by video footages of actual people moving in the same space . 
, instead , employ an hydrodynamic model , that ( to a certain extent ) can represent the flow of pedestrians in mass movement situations , to improve the characterisation of pedestrian flows by means of automatic video analysis . from the perspective of offering a useful service to crowd managers an anticipative system integrating computer vision techniques and pedestrian simulation able to suggest crowd management solutions ( e.g. guidance signals ) to avoid congestion situations in evacuation processes .finally , in the authors propose the employment of the _ social force model _ by , probably the most successful example of crowd synthesis model , to support the detection of abnormal crowd behaviour in video sequences .an example of identification of a still not considered phenomenology is related to a work by in which the authors have defined an extension to the social force model by that considers the presence of groups in the simulated population : the motivations and some modelling choices ( i.e. the limited size of considered groups and their spatial arrangement ) are based on actual observations and analyses .a related effort , carried out instead by a research group trying to improve crowd analysis techniques , is described in : in this case , the social force model acts as a sort of predictor block in an automated video analysis pipeline , improving the tracking in case of groups within the observed flow .finally , also focus on groups , as central element of an observation and analysis that also considers psychometric factors .it is important to emphasise that anthropological considerations about human behaviour in crowded environments , such as the analysis of spatial social interaction among people , are growingly considered as crucial both in the computerised analysis of crowds as pointed out by and in the synthesis of believable pedestrian and crowd behaviour , such as in and in .in particular , _ proxemics _ has a prominent role both in the modelling and analysis of pedestrians and crowd behaviour .the term was first introduced by with respect to the study of a type of non verbal behaviour related to the dynamic regulation of interpersonal distance between people as they interact . 
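since the social force model recurs both as a synthesis engine and as a predictor block inside analysis pipelines , a minimal sketch of its update step may help fix ideas ; the python fragment below implements the basic helbing-molnar form ( driving term plus exponential pairwise repulsion ) with illustrative , uncalibrated parameters , and deliberately omits wall terms and the group cohesion extensions discussed above .

import numpy as np

# illustrative parameters: relaxation time [s], repulsion strength [m/s^2],
# repulsion range [m], body radius [m], integration step [s]
TAU, A, B, RADIUS, DT = 0.5, 2.0, 0.3, 0.25, 0.05

def social_force_step(pos, vel, goals, v0=1.3):
    """one explicit euler step for n pedestrians; pos, vel, goals are (n, 2) arrays."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        # driving term: relax towards the desired speed v0 in the goal direction
        e_i = goals[i] - pos[i]
        e_i = e_i / (np.linalg.norm(e_i) + 1e-9)
        acc[i] += (v0 * e_i - vel[i]) / TAU
        # pairwise repulsion decaying exponentially with the gap between bodies
        for j in range(n):
            if j == i:
                continue
            d_vec = pos[i] - pos[j]
            d = np.linalg.norm(d_vec) + 1e-9
            acc[i] += A * np.exp((2 * RADIUS - d) / B) * (d_vec / d)
    vel = vel + DT * acc
    pos = pos + DT * vel
    return pos, vel

# usage: two pedestrians walking towards each other along a corridor axis
pos = np.array([[0.0, 0.0], [5.0, 0.1]])
vel = np.zeros((2, 2))
goals = np.array([[5.0, 0.0], [0.0, 0.0]])
for _ in range(100):
    pos, vel = social_force_step(pos, vel, goals)
print(pos)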
within this context the aim of this paper is to provide a comprehensive framework comprising both the synthesis and analysis of pedestrians and crowd behaviour : in this schema we suggest , on one hand , ways in which the results of the analysis can provide fruitful inputs to modellers and , on the other hand , how results of the modelling and simulation activities can contribute to the ( automated ) interpretation of raw empirical data . the framework will be described in the following section , while an example of unfolding of these conceptual and experimental pathways will be described through the introduction of an adaptive model for group cohesion ( section [ sec : model ] ) that was motivated and that will effectively be calibrated and validated by means of analyses on observed crowd behaviour ( section [ sec : observation ] ) . conclusions and future works will end the paper . a comprehensive framework trying to put together different aspects and aims of pedestrians and crowd dynamics research has been defined in . the central element of this schema is the mutually influencing ( and possibly motivating ) relationship between the above mentioned efforts aimed at synthesising crowd behaviour and other approaches that are instead aimed at analysing field data about pedestrians and crowds in order to characterise these phenomena in different ways . it must be noted , in fact , that some approaches have the goal of producing aggregate level quantities ( e.g. people counting , density estimation ) , while others are aimed at producing finer - grained results ( i.e. tracking people in scenes ) and some are even aimed at identifying some specific behaviour in the scene ( e.g. main directions , velocities , unusual events ) . the different approaches adopt different techniques , some performing a _ pixel level analysis _ , others considering larger patches of the image , i.e. _ texture level analysis _ ; other techniques require instead the detection of proper objects in the scene , a real _ object level analysis _ . from the perspective of the requirements for the synthesis of quantitatively realistic pedestrian and crowd behaviour , it must be stressed that both aggregate level quantities and granular data are of general interest : a very important way to characterise a simulated scenario is represented by the so called fundamental diagram by , that represents the relationship in a given section of an environment between the flow of pedestrians and their density . qualitatively , a good model should be able to reproduce an empirically observed phenomenon characterised by the growth of the flow until a certain density value ( also called the _ critical density _ ) is reached ; then the flow should decrease . however , every specific situation is characterised by a different shape of this curve , a different position of the critical density point and a different maximum flow level ; therefore even relatively `` basic '' counting and density estimation techniques can provide useful information in the case of observations in real world scenarios . density estimation approaches can also help in evaluating qualitatively the patterns of space utilisation generated by simulation models against real data . tracking techniques instead can be adopted to support the estimation of travelling times ( and of the length of the followed path ) by pedestrians . crowd behaviour understanding techniques can help in determining main directions and the related velocities .
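as an illustration of how even basic counting and density estimation can feed the comparison with a fundamental diagram , the following sketch bins per - snapshot ( density , velocity ) measurements and derives the corresponding flow . it is a minimal illustration , not the tooling used in the paper ; function names and the toy data are assumptions .

```python
# a minimal sketch: turning per-snapshot (density, velocity) measurements into an
# empirical fundamental diagram by binning densities and averaging velocities.
# all names and the sample data are illustrative, not taken from the paper.
from collections import defaultdict

def fundamental_diagram(observations, bin_width=0.25):
    """observations: iterable of (density [ped/m^2], mean velocity [m/s]) pairs,
    e.g. one pair per video snapshot. Returns a list of
    (bin_centre_density, mean_velocity, mean_flow) tuples, flow = density * velocity."""
    bins = defaultdict(list)
    for density, velocity in observations:
        bins[int(density / bin_width)].append(velocity)
    diagram = []
    for b in sorted(bins):
        centre = (b + 0.5) * bin_width
        v_mean = sum(bins[b]) / len(bins[b])
        diagram.append((centre, v_mean, centre * v_mean))  # flow in ped/(m*s)
    return diagram

if __name__ == "__main__":
    # toy data with the qualitative shape discussed in the text:
    # velocity decreases with density, flow peaks at a "critical" density.
    toy = [(d / 10.0, max(0.1, 1.4 - 0.35 * d / 10.0)) for d in range(1, 60)]
    for density, velocity, flow in fundamental_diagram(toy):
        print(f"density {density:4.2f}  velocity {velocity:4.2f}  flow {flow:4.2f}")
```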
to complete the initial schema , suggesting that analysis and synthesis should mutually benefit one from the other , we propose here the extension of a different kind of diagram , that can be found in or more recently in , used to discuss the form of inference that can be carried out by using a model as a method of study .starting from portion of the reality that we will call _ target system _ the process of synthesis leads to the definition of a model and its implementation in the form of a simulator .the latter can then be employed ( i.e. executed with specific inputs and parameters ) to carry out a simulation campaign leading to a set of results .the processes of analysis involve , on one hand , raw data that can be acquired through direct observations and controlled experiments . on the other hand ,also simulation results require a process of interpretation in order to be comparable with observed empirical data .when this cycle produces simulation results that , once interpreted and analysed , actually match the empirical data acquired on the field , the defined model can be employed for sake of explanation and prediction .of course it is not immediate to define a model that generates simulation results matching empirical data , especially since models of complex systems are generally characterised by a number of parameters even when the modeller tries to keep the model as simple and elegant as possible .it is this need of actually _ calibrating _ model parameters for achieving a model _ validation _ , as described by , that actually introduces the first type of synergetic collaboration between analysis and synthesis : the analysis of raw data about the simulated phenomenon leads to the possibility of identifying values or at least intervals for model parameters .this is not actually the only case of influence of the analysis on the definition of models : in fact , it is the observation of the system that leads to the identification of phenomenologies that are not currently represented and managed by a model and that can represent the motivations and goals for model innovation .the potential outcomes of the modelling and simulation phases that can have an influence on the analysis activities are related to two categories of contributions : on one hand , the need to create a mechanism for the generation of an observed phenomenon leads to its formalisation , that could be instrumental also in the creation of additional mechanisms for its automated analysis .on the other hand , even long before reaching the necessary level of maturity of the simulator , the modeller and developer of a simulation system actually needs to define and develop metrics , indicators and techniques to evaluate the outcomes of the modelling phases .these by - products of the synthesis activity can also represent a starting point for the actual development of automated analysis approaches .the following sections actually represent an unfolding of this kind of conceptual and experimental process .in particular , section [ sec : model ] introduces a model for pedestrian and group behaviour representing the evolution of a first approach that was started to face issues raised by an unstructured and non - systematic observation of crowd patterns and movements in a real world scenario described by . 
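the calibration step in the cycle just described can be made concrete with a small sketch : candidate parameter values are swept , the simulator is run , simulation results are interpreted into the same observable as the empirical data , and the least discrepant value is retained . the simulator , parameter and data below are placeholders , not the actual model of the following section .

```python
# a schematic sketch of the calibration step discussed above: candidate values of a model
# parameter are swept, the simulator is run, the simulated observable (e.g. velocities at
# a few densities) is compared with the empirical one, and the value with the smallest
# discrepancy is retained. simulator and data below are placeholders.
def calibrate(run_simulation, observed, candidates):
    """run_simulation(param) -> list of simulated values aligned with `observed`."""
    def discrepancy(param):
        simulated = run_simulation(param)
        return sum((s - o) ** 2 for s, o in zip(simulated, observed))
    return min(candidates, key=discrepancy)

if __name__ == "__main__":
    densities = [0.5, 1.0, 1.5, 2.0]                     # ped/m^2
    observed_velocities = [1.30, 1.05, 0.80, 0.55]       # hypothetical empirical values

    def toy_simulator(slowdown):                         # placeholder "model"
        return [max(0.1, 1.4 - slowdown * d) for d in densities]

    best = calibrate(toy_simulator, observed_velocities, [0.3, 0.4, 0.5, 0.6])
    print(f"calibrated slowdown parameter: {best}")
```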
in this line of research ,a model encompassing groups as a fundamental element influencing the overall system dynamics was designed and implemented and a metric for the evaluation of group cohesion ( and , therefore , also its dispersion ) was defined .this metric led to the understanding that the first version of the model was unable to preserve the cohesion of groups and therefore it was instrumental in the realisation of a new model described in that endows pedestrians that are also members of specific types of group with an adaptive mechanism to preserve their cohesion even in difficult situations ( e.g. presence of obstacles or high density environments ) .this research effort also led to the identification of new observations and analyses to back - up simulation results with empirical data ; the dispersion metric was also the starting point for the formal definition of measurements to be executed in the analysis of the acquired raw data .these activities will be described in section [ sec : observation ] .this section introduces a model representing pedestrian behaviour in an environment , considering the impact of the presence of simple and structured groups .the model is characterised by a discrete representation of the environment and time evolution , and it is based on the floor - field mechanism of existing ca approaches .however , the pedestrian behaviour is so articulated , comprising an adaptive mechanism for the preservation of group cohesion , to the point that the model is more properly classified as agent - based .the different elements of the model will now be introduced , then some results of its application to sample simulation scenarios will be given to show the model capabilities and the requirements in terms of empirical data to complete the calibration of the group cohesion adaptive mechanism .a more detailed formal introduction of the model and additional simulation results can be found in .the physical environment is represented in terms of a discrete grid of square cells : where .the size of every cell is according to standard measure used in the literature and derived from empirical observation and experimental procedure shown in and .every cell has a row and a column index , which indicates its position in the grid : .consequently , a cell is also identified by its row and column on the grid , with the following notation : .every cell is linked to other cells , that are considered its neighbours according to the moore neighbourhood .cells are annotated and virtual grids are superimposed on the base environmental representation to endow the environment with the capability to host pedestrian agents and support their perception and action .markers are sets of cells that play particular roles in the simulation. three kinds of marker are defined in the model : ( i ) _ start areas _ , places ( sets of cells ) were pedestrians are generated ; they are characterised by information for pedestrian generation both related to the type of pedestrians and to the frequency of generation . 
in particular, a start area can generate different kinds of pedestrians according to two approaches : ( a ) _ frequency - based generation _, in which pedestrians are generated during all the simulation according to a frequency distribution ; ( b ) _ en - bloc generation _ , in which a set of pedestrians is generated at once in the start area when the simulation starts ; ( ii ) _ destination areas _ , final places where pedestrians might want to go ; ( iii ) _ obstacles _ , non - walkable cells defining obstacles and non - accessible areas . adopting the approach of the _ floor field_ model by , the environment of the basic model is composed also of a set of superimposed virtual grids , structurally identical to the environment grid , that contains different floor fields that influence pedestrian behaviour . the goal of these grids is to support long range interactions by representing the state of the environment ( namely , the presence of pedestrians and their capability to be perceived from nearby cells ) . in this way ,a local perception for pedestrians actually simply consists in gathering the necessary information in the relevant cells of the floor field grids .floor fields are either _ static _ ( created at the beginning and not changing during the simulation ) or _ dynamic _ ( changing during the simulation ) .three floor fields are considered in the model : * the _ path field _ assigned to each destination area : this field indicates for every cell the distance from the destination , and it acts thus as a gradient , a sort of potential field that drives pedestrians towards it ( static floor field ) ; * the _ obstacles field _ , that indicates for every cell the fact that an obstacle or a wall is within a limited distance , being maximum in cells adjacent to an obstacle and decreasing with the distance ( static floor field ) ; * the _ density field _ that indicates for each cell the pedestrian density in the surroundings at the current time - step , analogous to the concept of _ cumulative mean density _ ( cmd ) indicating the density experienced by pedestrians in a portion of the environment , first introduced by and elaborated for discrete environments by ( dynamic floor field ) .simulation time is modelled in a discrete way by dividing time into steps of equal duration : we assume that a pedestrian moves ( at most , since it is possible to decide to stand still ) 1 cell per time step .the average velocity of a pedestrian , which can be estimated in real observations or experiments as performed by in about , will thus determine the duration of the each time step : considering that the size of the cell is , the duration of each time step is .when running a ca - based pedestrian model , three update strategies are possible : * _ parallel _ update , in which cells are updated all together ; * _ sequential _ update , in which cells are updated one after the other , always in the same order ; * _ shuffled sequential update _ , in which cells are updated one after the other , but with a different order every time .the second and third update strategies lead to the definition of asynchronous ca models ( see for a more thorough discussion on types of asynchronicity in ca models ) . 
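as an illustration of the floor fields just described , the following sketch ( a minimal version , not the authors' implementation ) computes a static path field on the discrete grid : every walkable cell receives its distance from the destination cells , obtained by a breadth - first search over the moore neighbourhood ; a weighted ( dijkstra - like ) variant could account for the longer diagonal steps , and the obstacle and density fields can be built analogously .

```python
# a minimal sketch of a static path field on the discrete grid described above: every
# walkable cell gets its distance from the destination cells, computed by breadth-first
# search over the Moore neighbourhood. Grid layout and names are illustrative.
from collections import deque

def path_field(walkable, destinations):
    """walkable: 2D list of booleans (False = obstacle cell);
    destinations: iterable of (row, col) destination cells.
    Returns a 2D list of distances (None for unreachable or obstacle cells)."""
    rows, cols = len(walkable), len(walkable[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in destinations:
        dist[r][c] = 0
        queue.append((r, c))
    moore = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while queue:
        r, c = queue.popleft()
        for dr, dc in moore:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc] and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

if __name__ == "__main__":
    grid = [[True] * 6 for _ in range(4)]
    grid[1][2] = grid[2][2] = False          # a small obstacle
    field = path_field(grid, [(0, 5)])       # destination in the top-right corner
    for row in field:
        print(["x" if d is None else d for d in row])
```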
in crowd simulation ca models , parallel update is generally preferred , as mentioned by , even if this strategy can lead to conflicts that must be solved .nonetheless , we currently adopted a shuffled sequential update scheme for a first evaluation of the group cohesion mechanism without adding additional mechanisms and parameters to be calibrated for the management of potential conflicts between agents movements .we focus on two types of group : simple and structured .simple or informal groups are generally made up of friends or family members and they are characterised by a high cohesion level , moving all together due to shared goals and to a continuous mechanism of adaptation of the chosen paths to try to preserve the possibility to communicate , as discussed by .structured groups , instead , are more complex entities , usually larger than simple groups ( more than 4 individuals ) and they can be considered as being composed of sub - groups that can be , in turn , either simple or structured .structured groups are often artificially defined with the goal of organising and managing a mass movement ( or some kind of other operation ) of a set of pedestrians .groups can be formally described as : , [ ped_1 , \ldots , ped_n ] \right\rangle\ ] ] structured groups include at least one subgroup , while simple groups only comprise individual pedestrians .we will refer to the group an agent _ directly _ belongs to as , that is also the smallest group he belongs to ; the _ largest _ group an agent belongs to will instead be referred to as .it must be noted that only when the agent is member of a simple group that is not included in any structured group .these sets are relevant for the computation of influences among members of groups of different type ( simple or structured ) . in this model, a pedestrian is defined as an utility - based agent with state .functions are defined for utility calculation and action choice , and rules are defined for state - change .pedestrians are characterised as : where : ( i ) is the agent identification number and is the identification number of the group to which the pedestrian directly belongs to ( for pedestrians that are not member of any group this value is null ) ; ( ii ) is defined as : where indicates the current cell in which the agent is located , and is the direction followed in the last movement ; is the set of possible actions that the agent can perform , essentially movements in one of the eight neighbouring cells ( indicated as cardinal points ) , plus the action of remaining in the same cell ( indicated by an ` ' ) : ; is the goal of the agent in terms of destination area .this term identifies the current destination of the pedestrian : in particular , every destination is associated to a particular spatial marker .consequently , is used to identify which path field is relevant for the agent : where is the precise path field associated to and is the path field relevant for the agent .all these elements take part in the mechanism that manages the movement of pedestrians : agents associate a desirability value , a utility to every movement in the cell neighbourhood , according to a set of factors that concur in the overall decision . in algorithm [ alg1 ]the agent life - cycle during all the simulation time is proposed : every time step , every pedestrian perceives the values of path field , obstacle field and density field for all the cells that are in its neighbourhood . 
on the basis of these values and according to different factors , the agent evaluates the different cells around him , associating a utility value to every cell , and selects the action for moving into a specific cell . the use of these parameters , in addition to allowing the calibration and the fine tuning of the model , also supports the possibility of describing and managing different types of pedestrian , or even different states of the same pedestrian in different moments of a single simulated scenario . while the above elements are sufficient to generate a pedestrian model that considers the presence of groups , even structured ones , the introduced mechanisms are not sufficient to preserve the cohesion of simple groups , as discussed in a previous work adopting a very similar approach ( see ) . this is mainly due to the fact that in certain situations pedestrians adapt their behaviour in a more significant way than what is supported by simple and relatively small modifications of the perceived utility of a certain movement . in certain situations pedestrians perform an adaptation that appears , in a much more decisive way , as a _ decision _ : they can suddenly seem to temporarily lose interest in what was previously considered a destination to reach and they instead focus on moving closer to ( or at least not moving farther from ) members of their group , generally whenever they perceive that the distance from them has become excessive . in the following , we will discuss a metric of group dispersion that we adopted to quantify this perceived distance and then we will show how it can be used to adapt the weights of the different components of the movement utility computation to preserve group cohesion . * group dispersion metrics * intuitively , the dispersion of a group can be seen as the degree of spatial distribution of its members . in the area of pedestrian modelling and simulation , the estimation of different metrics for group dispersion has been discussed in , in which different approaches are compared to evaluate the dispersion of groups through their movement in the environment . in particular , two different approaches are compared here : ( i ) dispersion as occupied area and ( ii ) dispersion as distance from the centroid of the group . this topic was also considered in the context of computer vision algorithms such as in , in which however essentially only _ line abreast _ patterns were analysed . therefore we will focus on the former approach . formally , the above introduced notions of group dispersion are defined , for each approach , as follows : with as the area occupied by the group , as the number of its members , as its centroid . the second metric appears much more straightforward when a continuous representation of the environment is possible or at least not in contrast with the adopted modelling approach : in the case of a discrete and relatively coarse discretisation its results are not particularly different from the first metric , but they are sometimes counterintuitive , especially when describing particular group shapes ( e.g.
river - like lanes that are often present in high density situations ) . the first metric defines the dispersion of the group as the portion of space occupied by the group with respect to the size of the group : the first step of the procedure of computation for this metric builds a convex polygon with the minimum number of edges that contains all the vertices ( each representing the position of a pedestrian ) ; the second step computes the area of this polygon . the dispersion value is calculated as the relationship between the polygon area and the size of the group . * utility parameters adaptation*[sec : balance ] the adopted approach is characterised by a trade - off process between the goal attraction value and the intra / inter cohesion value in the utility computation : in the situation in which the spatial dispersion value is low , the cohesion behaviour has to influence the pedestrian 's overall behaviour less than the goal attraction . on the contrary , if the level of dispersion of a group is high , the cohesion component for the members must become more important than the goal attraction . an adaptation of the two parameters in the utility computation is therefore necessary , by means of a function that can be used to formalise these requirements : where , and are the weighting parameters and is another function that works on the value of group dispersion , expressed as the relationship between the area and the size of the group , applying to it the hyperbolic tangent . the value of is a constant that essentially represents a threshold above which the adaptation mechanism starts to become more influential ; after a face validation phase , we set this value to 2.5 , allowing the output of the function to span the range according to all elements in . the hyperbolic tangent approaches value when approaches ( values indicate a high level of dispersion for small - medium size groups ( 1 - 4 members ) ) . a graphical representation of the trade - off mechanism is shown in fig . [ fig : balance ] : red and green boxes represent the progress of parameter and parameter ( is treated analogously ) , respectively . note that an increase of the dispersion value produces an increment of the value and a reduction of the parameter . it must be emphasised that this adaptive balancing mechanism and the current values for its parameters were heuristically established and they actually require a validation ( and plausibly a subsequent calibration ) by comparing results achieved with this configuration against relevant empirical data about group dispersion gathered from actual observations and experiments in controlled situations . this section describes the results of a simulation campaign carried out to evaluate the performance of the above described model ; the campaign had mainly two goals : ( i ) _ validate _ the model , in situations for which the adaptation mechanism was not activated ( i.e.
no simple groups were present in the simulated population ) , ( ii ) _ evaluate _ the effects of the introduction of simple groups , performing a qualitative face validation of the introduced adaptation mechanism considering available video footages of the behaviour of groups in real and experimental situations .the chosen situations are relatively simple ones and they were chosen due to the availability of relevant and significant data from the literature .in particular , we describe here a linear scenario : a _ corridor _ in which we test the capability of the model to correctly reproduce a situation in which two groups of pedestrians enter from one of the ends and move towards the other .this situation is characterised by a counterflow causing local situations of high density and conflicts on the shared space .we essentially evaluate and validate this scenario by means of a fundamental diagram as defined by : it shows how the average velocity of pedestrians in a section ( e.g. one of the ends of a corridor ) varies according to the density of the observed environment . since the flow of pedestrians is directly proportional to their velocity , this diagram is sometimes presented in an equivalent form that shows the variation of flow according to the density . in general, we expect to have a decrease in the velocity when density grows , due to the growing frequency of `` collision avoidance '' movements by the pedestrians ; the flow , instead , initially grows , since it is also directly proportional to the density , until a certain threshold value is reached ( also called _ critical density _ ) , then it decreases .despite being of great relevance , different experiments gathered different values of empirical data : while there is a general consensus on the shape of the function , the range of the possible values has even significant differences from different versions .we investigated this scenario with a significant number of simulations , varying the level of density by adjusting the number of pedestrians present in the environment , so as to analyse different crowding situations .for every scenario , in terms of environmental configuration and level of density , a minimum of 3 and a maximum of 8 simulations were executed , according to the variability of the achieved results ( more simulations were run when the variability was high , generally around levels of density close to the critical thresholds ) .every simulation considered at least 1800 simulation turns , corresponding to 10 minutes of simulated time .the rationale was to observe a good number of complete paths of pedestrians throughout the environment , that was configured to resemble a torus ( e.g. pedestrians completing a movement through it re - entered the scenario from their starting point ) , therefore simulations of situations characterised by a higher density were also set to last longer .as suggested above , we adopted two different experiment settings : in the first one , the individual pedestrians belonging to a given flow ( i.e. all the pedestrians entering the corridor ) are represented as members of a large structured group , but no simple groups are present .this first part of the experimentation was also necessary to perform a proper calibration of the model , for the parameters not involved in simple group modelling . 
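the group - related runs of the second experimental setting rely on the adaptive cohesion mechanism introduced earlier ; the following sketch illustrates it under stated assumptions : dispersion is computed as occupied area over group size , squashed through a hyperbolic tangent , and used to shift weight from the goal attraction component to the cohesion component . the threshold 2.5 is the value quoted in the text , while the remaining functional details and parameter values are assumptions , not the paper's exact formulation .

```python
# an illustrative sketch of the adaptive trade-off between goal attraction and group
# cohesion described above: dispersion (occupied area / group size) is squashed with a
# hyperbolic tangent and used to shift weight from the goal component to the cohesion
# component. the exact functional form and parameters of the paper are not reproduced;
# the threshold 2.5 is the value quoted in the text, the rest is assumed.
import math

def dispersion(area, group_size):
    # "area occupied by the group with respect to the size of the group"
    return area / group_size

def balance(dispersion_value, threshold=2.5, k_goal=1.0, k_cohesion=1.0):
    """returns (goal_weight, cohesion_weight): low dispersion -> goal dominates,
    high dispersion -> cohesion dominates."""
    x = math.tanh(dispersion_value - threshold)      # in (-1, 1)
    share = 0.5 * (x + 1.0)                          # in (0, 1)
    return k_goal * (1.0 - share), k_cohesion * share

if __name__ == "__main__":
    for area in (0.5, 1.5, 3.0, 6.0, 12.0):
        d = dispersion(area, group_size=3)
        g, c = balance(d)
        print(f"area {area:5.1f}  dispersion {d:4.2f}  goal {g:4.2f}  cohesion {c:4.2f}")
```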
in the second experimental setting , we included a variable number of simple groups ( based on the total number of pedestrians in the environment and according to available data on the frequency of groups of different size in a crowd as mentioned in and ) first of all to calibrate and qualitatively validate the adequacy of the adaptation mechanism and then to explore its implications on the overall crowd dynamics .we actually simulated the linear counterflow situation in three different corridors , of growing width : their size is respectively m m ( a ) , m m ( b ) and m m ( c ) .note that the variation in terms of width and height were applied according to the choice of maintaining the total area at a level about m in every scenario .a screenshot of the simulation scenario in low density situations including groups of different size is shown in figure [ fig : screenshot ] .the choice of evaluating the influence of groups in different linear scenarios was also inspired by , in which a comparison in terms of pedestrians flow from experimental data among three corridors of width m , m and m is presented . in this case , authors show that , in conformance with , above a certain minimum of about m , the maximum flow is directly proportional to the width of the corridors .the model was calibrated to achieve , in situations not including simple groups , results in tune with empirical data available from the literature .we will now focus on some partly counterintuitive results in presence of simple groups . to do so, data related to the different types of simulated groups were aggregated , and a comparison among the related fundamental diagrams was performed for the movement of groups in corridor a. as summary , fig .[ fig : av_gruppi ] represents on the same chart all group contributions : the depicted points represent the average flow achieved for that kind of group in the total simulated time , and generally more points are available ( representing different simulation runs ) for the same configuration .the overall flow of individuals is generally higher than that of groups in almost all situations , and in general with the growth of the size of a simple group we observe a decrease of its overall flow .moreover , differences tend to decrease and almost disappear after the critical density ( about pedestrians per square metre ) is reached .we also analysed the effect of the width of the corridor on the flow of groups ( and in general on the overall pedestrian flow ) : figure [ fig : comparison_width ] shows the different fundamental diagrams associated to all simple groups ( irrespectively of their size ) in corridors a , b and c. it is apparent that the critical density moves to higher levels with the growth of the corridor width , in tune with the already mentioned results discussed in . finally , we analysed how the level of group dispersion , computed by means of the same function employed to manage the adaptive mechanism for group cohesion , varies with a changing density in the environment. the motivations of this analysis are twofold : first of all , we wanted to understand if the adaptive mechanism for group cohesion is effective , then we wanted to gather empirical data to understand if it produces _ plausible _ results , in line with _ observed data _ that , at the moment of the simulation campaign , were still not available . 
figure [fig : dispersion_comparison ] shows the variation of the level of dispersion for groups of size in corridors a , b and c : we can conclude that the mechanism for preserving the cohesion of simple groups is actually effective , since the growth of density does not cause a significant growth of the dispersion . on the other hand , at that moment we could not conclude that the model produces realistic results . this section comprises several empirical studies aimed at investigating pedestrian crowd dynamics in the natural context by means of on - field observation . in particular the survey was aimed at studying the impact of grouping and proxemic behaviour on the whole crowd pedestrian dynamics . data analyses were focused on : ( i ) _ level of density and service _ , ( ii ) _ presence of groups _ within the pedestrian flows , ( iii ) group proxemic _ spatial arrangement _ , ( iv ) _ trajectories and walking speed _ of both singles and group members . furthermore the _ spatial dispersion _ of group members while walking was measured in order to propose an innovative empirical contribution for a detailed description of group proxemic dynamics while walking . the survey was performed on the 24^th^ of november 2012 , from about 2:50 pm to 4:10 pm . it consisted of the observation of the bidirectional pedestrian flows within the vittorio emanuele ii gallery , a popular commercial - touristic walkway situated in the milan city centre ( italy ) . the gallery was chosen as a crowded urban scenario , given the large amount of people that pass through it during the weekend for shopping , entertainment and visiting touristic - historical attractions in the centre of milan . the team performing the observation was composed of four people . several preliminary inspections were performed to check the topographical features of the walkway . the balcony of the gallery , which surrounds the inside volume of the architecture at about ten meters in height , was chosen as the location thanks to the possibility ( i ) to position the equipment for video footage from a quasi - zenithal point of view and ( ii ) to avoid as much as possible influencing the behaviour of the observed subjects , thanks to a railing of the balcony partly hiding the observation equipment . the equipment consisted of two professional full hd video cameras with tripods . the existing legislation about privacy was consulted and complied with in order to address ethical issues about the privacy of the people recorded within the pedestrian flows . two independent coders performed the manual data analyses , in order to reduce errors by crosschecking their results . a square portion of the walkway was considered for data analysis : 12.8 meters wide and 12.8 meters long ( 163.84 square meters ) .
in order to perform the data analyses , the inner space of the selected area was discretised into cells by superimposing a grid on the video ( see fig . [ fig : grid ] ) ; the grid was composed of 1024 squares , 0.4 meters wide and 0.4 meters long . the bidirectional pedestrian flows ( from north to south and vice versa ) were manually counted minute by minute : 7773 people passed through the selected portion of the vittorio emanuele ii gallery from 2:50 pm to 4:08 pm . the average level of density within the selected area ( defined as the quantitative relationship between a physical area and the number of people who occupy it ) was estimated considering 78 snapshots of the video footage , randomly selected with a time interval of one minute . the observed average level of density was low ( 0.22 people / square meter ) . although it was not possible to analyse continuous situations of high density , several situations of irregular and locally distributed high density were detected within the observed scenario . according to the highway capacity manual by , the level of density in motion situations was more properly estimated taking into account the bidirectional walkway level of service criteria : counting the number of people walking through a certain unit of space ( meter ) in a certain unit of time ( minute ) . the average flow rate within the observed walkway scenario belongs to a _ b _ level ( 7.78 ped / min / m ) , which is associated with an irregular flow in low - medium density conditions . the second stage of data analysis was focused on the detection of groups within the pedestrian flows , the number of group members and the group proxemic spatial arrangement while walking . the identification of groups in the stream of passersby was assessed on the basis of verbal and nonverbal communication among members : visual contact , body orientation , gesticulation and spatial cohesion among members . to more thoroughly evaluate all these indicators the coder was actually encouraged to rewind the video and take the necessary time to distinguish situations in which different pedestrians simply performed locally ( in time and space ) similar movements , due to the contextual situation , from actual group situations . the whole video was sampled considering one minute every five : a subset of 15 minutes was extracted and 1645 pedestrians were counted ( 21.16% of the total bidirectional flows ) .
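the aggregate measures reported above can be reproduced with a few lines : average density from per - snapshot head counts over the 163.84 square metre area , and flow rate in pedestrians per minute per metre of walkway width for the level of service classification . the counts in the example are hypothetical ; the totals , interval and width are the ones quoted in the text .

```python
# a small sketch reproducing the aggregate measures reported above: average density from
# per-snapshot head counts over the observed area, and flow rate in pedestrians per
# minute per metre of walkway width (the basis of the level-of-service classification).
# the per-snapshot counts are hypothetical; totals, interval and width are from the text.
def average_density(snapshot_counts, area_m2):
    return sum(snapshot_counts) / len(snapshot_counts) / area_m2

def flow_rate_per_metre(total_pedestrians, duration_min, width_m):
    return total_pedestrians / duration_min / width_m

if __name__ == "__main__":
    area = 12.8 * 12.8                        # 163.84 m^2, the monitored portion
    counts = [30, 38, 41, 35, 36]             # hypothetical head counts in 5 snapshots
    print(f"average density: {average_density(counts, area):.2f} ped/m^2")
    # 7773 pedestrians between 2:50 pm and 4:08 pm (78 minutes), 12.8 m wide section;
    # close to the 7.78 ped/min/m level-of-service figure quoted above (up to rounding).
    print(f"flow rate: {flow_rate_per_metre(7773, 78, 12.8):.2f} ped/min/m")
```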
concerning the flow composition ,15.81% of the pedestrians arrived alone , while the 84.19% arrived in groups : 43.65% of groups were couples , 17.14% triples and 23.40% larger groups ( composed of four or five members ) .large structured groups , such as touristic committees , that were present in the observed situation , were analysed considering sub - groups .results about group proxemics spatial arrangement while walking showed that : * 94.43% of couples was characterised by line - abreast pattern while 5.57% by river - like pattern ; * 31.91% of triples was characterised by line - abreast pattern , 9.57% by river - like pattern and 58.51% by v - like pattern ; * 29.61% of four - person groups is characterised by line - abreast pattern , 3.19% by river - like pattern , 10.39% by v - like pattern , 10.39% triads followed by a single person , 6.23% single individual followed by a triad , 7.79% rhombus - like pattern ( one person heading the group , followed by a dyad and a single person ) , 32.47% of the groups split into two dyads .the walking speed of both singles and group members was measured considering the path and the time to reach the ending point of their movement in the monitored area ( corresponding to the centre of the cell of the last row of the grid ) from the starting point ( corresponding to the centre cell of the first row of the grid ) .only the time distribution related to the _ b _ level of service was considered ( as mentioned , the 59% of the whole video footages ) , in order to focus on pedestrian dynamics in situation of irregular flow .a sample of 122 people was randomly extracted : 30 singles , 15 couples , 10 triples and 8 groups of four members .the estimated age of pedestrians was approximately between 15 and 70 ; groups with accompanied children were not taken into account for data analyses . about gender , the sample was composed of 63 males ( 56% of the total ) and 59 females ( 44% of the total ) .differences in age and gender were not considered in this study .the alphanumeric grid was used to track the trajectories of both single and group members within the walkway and to measure the length of their path ( considering the features of the cells : 0.4 m wide , 0.4 m long ) ( see fig . 
[fig : trajectories ] ) .a first analysis was devoted to the identification of the length of the average walking path of singles ( m=13.96 m , .11 ) , couples ( m=13.39 m , 0.38 ) , triples ( m=13.34 m , 0.27 ) and groups of four members ( m=13.16 m , 0.46 ) .then , the two tailed t - test analyses were used to identify differences in path among pedestrian .results showed a significant difference in path length between : singles and couples ( p value.05 ) , singles and triples ( p value.05 ) , singles and groups of four members ( p value.05 ) .no significant differences were detected between path length of couples and triples ( p value.05 ) , triples and groups of four members ( p value.05 ) , couples and groups of four members ( p value.05 ) .the results showed that the path of singles is 4,48% longer than the average path of group members ( including couples , triples and groups of four members ) .the walking speed of both singles and group members was detected considering the path of each pedestrian within the flows and the time to reach the ending point from their starting point .a first analysis was devoted to the identification of the average walking speed of singles ( m=1.22 m / s , 1.16 ) , couples ( m=0.92 m / s , 0.18 ) , triples ( m=0.73 m / s , 0.10 ) and groups of four members ( m=0.65 m / s , 0.04 ) .then , the two tailed t - test analyses were used to identify differences in walking speed among pedestrian .results showed a significant difference in walking speed between : singles and couples ( p value.01 ) , singles and triples ( p value.01 ) , singles and groups of four members ( p value.01 ) , couples and triples ( p value.01 ) , triples and groups of four members ( p value 0.05 ) . in conclusion, the results showed that the average walking speed of group members ( including couples , triples and groups of four members ) is 37.21% lower than the walking speed of singles .the correlated results about pedestrian path and speed showed that in situation of irregular flow singles tend to cross the space with more frequent changes of direction in order to maintain their velocity , avoiding perceived obstacles like slower pedestrians or groups . on the contrary ,groups tend to have a more stable overall behaviour , adjusting their spatial arrangement and speed to face the contextual conditions of irregular flow : this is probably due to ( i ) the difficulty in coordinating an overall change of direction and ( ii ) the tendency to preserve the possibility of maintaining cohesion and communication among members . in order to improve the understanding of pedestrian proxemics behaviour the last part of the study is focused on the dynamic spatial dispersion of group members while walking .the dispersion among group members was measured as the summation of the distances between each pedestrian and the centroid ( the geometrical centre of the group ) all normalised by the cardinality of the group .the centroid was obtained as the arithmetic mean of all spatial positions of the group members , considering the alphanumeric grid . in order to find the spatial positions , the trajectories of the group members belonging to the previous described sample ( 15 couples , 10 triples and 8 groups of four members ) were further analysed . 
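a minimal sketch of the dispersion measure just defined : the centroid is the arithmetic mean of the members' positions and the dispersion is the sum of the member - centroid distances divided by the group cardinality . positions are expressed in metres ( e.g. centres of the 0.4 m cells ) ; names and coordinates are illustrative .

```python
# a minimal sketch of the dispersion measure defined above: the centroid is the mean of
# the members' positions and the dispersion is the sum of member-centroid distances
# divided by the group cardinality. positions are in metres (e.g. cell centres of the
# 0.4 m grid); names and sample coordinates are illustrative.
import math

def centroid(positions):
    n = len(positions)
    return (sum(x for x, _ in positions) / n, sum(y for _, y in positions) / n)

def centroid_dispersion(positions):
    cx, cy = centroid(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)

if __name__ == "__main__":
    couple = [(0.2, 0.2), (0.6, 0.2)]                 # side by side, 0.4 m apart
    triple = [(0.2, 0.2), (0.6, 0.2), (1.0, 0.6)]     # loose v-like arrangement
    print(f"couple dispersion: {centroid_dispersion(couple):.2f} m")
    print(f"triple dispersion: {centroid_dispersion(triple):.2f} m")
```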
in particular , the positions of the group members were detected by analysing the recorded video images every 40 frames ( the time interval between two sampled frames corresponds to about 1.79 seconds , according to the quality and definition of the video images ) , starting from the co - presence of all members on the alphanumeric grid . this kind of sampling made it possible to consider 10 snapshots for each group . a first analysis was devoted to the identification of the average proxemic dispersion of couples ( m=0.35 m , 0.14 ) , triples ( m=0.53 m , 0.17 ) and groups of four members ( m=0.67 m , 0.12 ) . then , two tailed t - test analyses were used to identify differences in proxemic dispersion among couples , triples and groups of four members . results showed a significant difference in spatial dispersion between : couples and triples ( p value.05 ) , couples and groups of four members ( p value.01 ) . no significant differences were detected between triples and groups of four members ( p value 0.05 ) . in conclusion , the results showed that the average spatial dispersion of triples and groups of four members while walking is 40.97% higher than the dispersion of couples . in order to be able to provide useful indications for the calibration of the adaptive simple group cohesion mechanism , we also evaluated the average dispersion of the observed groups in terms of area covered by the group : due to the discretisation of the pedestrian localisation mechanism , we were able to essentially count the cells occupied by a sort of convex hull computed considering the positions of the pedestrians as vertices , analogously to the dispersion metric defined and employed in the simulation model . the results of this operation estimated the dispersion of couples ( ma=0.6 m ) , triples ( ma=0.8 m ) and groups of four ( ma=1.3 m ) . these values are currently being employed to calibrate and validate the simple group cohesion mechanism in conditions of relatively low density and irregular flow . starting from the achieved results about group proxemic dispersion , we finally focused on a quantitative and detailed description of the group spatial layout while walking . the normalised positions of each pedestrian with respect to the centroid and the movement direction were detected by means of a sample of 10 snapshots for each group ( 15 couples , 10 triples and 8 groups of four members ) and then further analysed in order to identify the most frequent group proxemic spatial configurations , taking into account the degree of alignment of each pedestrian ( see figure [ fig : scatter - all ] ) . results showed that couple members tend to walk side by side , aligned to each other with a distance of 0.4 m ( 36% of the sample ) or 0.8 m ( 24% of the sample ) , forming a line perpendicular to the walking direction ( line abreast pattern ) ; triples tend to walk with a line abreast layout ( 13% of the sample ) , with the members spaced 0.60 m apart .
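the area - based dispersion used for calibration can be sketched in a similar way : the convex hull of the members' positions is computed and its area is normalised by the group size . the observation actually counts occupied 0.4 m grid cells , so the continuous - area version below is only an approximation , and the sample coordinates are made up .

```python
# an illustrative sketch of the area-based dispersion used for calibration above: the
# convex hull of the members' positions is computed (monotone chain), its area is taken
# (shoelace formula) and normalised by the group size. the observation actually counts
# occupied 0.4 m grid cells, so this continuous version is only an approximation.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    hull = convex_hull(points)
    if len(hull) < 3:
        return 0.0
    s = sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]))
    return abs(s) / 2.0

def area_dispersion(positions):
    return hull_area(positions) / len(positions)

if __name__ == "__main__":
    group_of_four = [(0.0, 0.0), (0.8, 0.0), (0.8, 0.8), (0.0, 1.2)]   # made-up positions
    print(f"hull area: {hull_area(group_of_four):.2f} m^2")
    print(f"area-based dispersion: {area_dispersion(group_of_four):.2f} m^2 per member")
```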
regarding groups of four members it was not possible to detect a typical spatial pattern : the reciprocal positions of group members appeared much more dispersed than in the case of smaller groups , probably due to the continuous rearrangements in spatial positioning while walking . the results are in line with the previously described spatial arrangements related to the total observed pedestrian flows ( see section [ sec : flow - comp ] ) , representing an innovative contribution to the understanding of group proxemic dynamics in motion situations , once again in situations characterised by a relatively low density and irregular flow . empirical data about high density situations would be necessary to actually tune the mechanism in moderate and high density situations , but in this kind of scenario this observation and analysis framework would very likely be inappropriate and ineffective . a more sophisticated and at least partly automated ( employing computer vision techniques and maybe machine learning approaches to support the detection of groups ) controlled experimental setting ( given the high density situation , some help to the tracking and detection approaches would probably be necessary in terms , for instance , of markers to highlight the heads of group members ) is probably needed to actually face the challenges of acquiring empirical evidence about the behaviour of groups in high density situations . the participation of psychologists in the definition of such an experimental observation setting would also help in managing any kind of tiring and learning effect due to the experimental procedure . the paper discussed an integrated approach to the analysis and synthesis of pedestrian and crowd behaviour , in which the two aspects are actually set in an integrated framework and mutually benefit in different ways . a general schema describing the conceptual pathways connecting modelling and analysis steps was described . a case study in which modelling and simulation approaches , specifically focused on adaptive group behaviour enabling the cohesion of simple groups , produced a research question and some usable techniques for crowd analysis approaches was also introduced . on the other hand , a subsequent observation and analysis about the phenomenology represented in the model was also described . currently the empirical evidence resulting from this analysis activity is being used to validate and calibrate the model for group cohesion . in particular the simulation model correctly reproduces some of the observed phenomena , notably lower walking speeds for groups and a tendency to preserve cohesion ( although this aspect is undergoing a further calibration ) . additional elements that are now being evaluated are related to the capability of the model to generate the spatial patterns resulting from the analysis . moreover , the analysis of the gathered video footage also highlighted additional phenomenologies that are now being more thoroughly analysed : in particular , patterns of `` leader follower '' behaviour within groups were detected , and the introduced simulation model presents all the necessary elements and mechanisms to represent this kind of pattern . future works are also aimed at supporting innovative automated analysis techniques based on computer vision and machine learning approaches . the authors would like to thank milano 's municipality for granting the necessary authorisations to carry out the observation in the vittorio emanuele ii gallery . bandini , s. , rubagotti , f.
, vizzari , g. , shimura , k. , 2011 . an agent model of pedestrian and group dynamics : experiments on group cohesion . in : pirrone , r. , sorbello , f. ( eds . ) , ai*ia . vol . 6934 of lecture notes in computer science . springer , pp . 104 - 116 . federici , m. l. , gorrini , a. , manenti , l. , vizzari , g. , 2012 . data collection for modeling and simulation : case study at the university of milan - bicocca . in : sirakoulis , g. c. , bandini , s. ( eds . ) , acri . vol . 7495 of lecture notes in computer science . springer , pp . 699 - 708 . georgoudas , i. g. , sirakoulis , g. c. , andreadis , i. , 2011 . an anticipative crowd management system preventing clogging in exits during pedestrian evacuation processes . ieee systems journal 5 ( 1 ) , 129 - 141 . leal - taixé , l. , pons - moll , g. , rosenhahn , b. , 2011 . everybody needs somebody : modeling social and grouping behavior on a linear programming multiple people tracker . in : iccv workshops . ieee , pp . 120 - 127 . manzoni , s. , vizzari , g. , ohtsuka , k. , shimura , k. , 2011 . towards an agent - based proxemic model for pedestrian and group dynamics : motivations and first experiments . in : tumer , yolum , sonenberg , stone ( eds . ) , proc . of 10th int . conf . on autonomous agents and multiagent systems innovative applications track ( aamas 2011 ) . pp . 1223 - 1224 . milazzo ii , j. s. , rouphail , n. m. , hummer , j. e. , allen , d. p. , 1999 . quality of service for interrupted - flow pedestrian facilities in highway capacity manual 2000 . transportation research record : journal of the transportation research board 1678 , 25 - 31 . moussaïd , m. , perozo , n. , garnier , s. , helbing , d. , theraulaz , g. , 2010 . the walking behaviour of pedestrian social groups and its impact on crowd dynamics . plos one 5 ( 4 ) , e10047 . http://dx.doi.org/10.1371%2fjournal.pone.0010047 raghavendra , r. , bue , a. d. , cristani , m. , murino , v. , 2011 . abnormal crowd behavior detection by social force optimization . in : salah , a. a. , lepri , b. ( eds . ) , hbu . vol . 7065 of lecture notes in computer science . springer , pp . 134 - 145 . schadschneider , a. , klingsch , w. , klüpfel , h. , kretz , t. , rogsch , c. , seyfried , a. , 2009 . evacuation dynamics : empirical results , modeling and applications . in : meyers , r. a. ( ed . ) , encyclopedia of complexity and systems science . springer , pp . 3142 - 3176 . schultz , m. , rößger , l. , fricke , h. , schlag , b. , 2012 . group dynamic behavior and psychometric profiles as substantial driver for pedestrian dynamics . in : pedestrian and evacuation dynamics 2012 ( ped2012 ) . http://arxiv.org/abs/1210.5553 vizzari , g. , manenti , l. , ohtsuka , k. , shimura , k. , 2012 . an agent - based approach to pedestrian and group dynamics : experimental and real world scenarios . in : proceedings of the 7th international workshop on agents in traffic and transportation . http://www.ia.urjc.es/att2012/papers/att2012_submission_1.pdf wąs , j. , 2010 . crowd dynamics modeling in the light of proxemic theories . in : rutkowski , l. , scherer , r. , tadeusiewicz , r. , zadeh , l. a. , zurada , j. m. ( eds . ) , icaisc ( 2 ) . vol . 6114 of lecture notes in computer science . springer , pp . 683 - 688 . weidmann , u.
, 1993 . transporttechnik der fussgänger - transporttechnische eigenschaften des fussgängerverkehrs ( literaturstudie ) . literature research 90 , institut für verkehrsplanung , transporttechnik , strassen- und eisenbahnbau ivt an der eth zürich . zhang , j. , klingsch , w. , schadschneider , a. , seyfried , a. , 2011 . transitions in pedestrian fundamental diagrams of straight corridors and t - junctions . journal of statistical mechanics : theory and experiment 2011 ( 06 ) , p06004 .
studies related to _ crowds _ of pedestrians , both those of theoretical nature and application oriented ones , have generally focused on either the _ analysis _ or the _ synthesis _ of the phenomena related to the interplay between individual pedestrians , each characterised by goals , preferences and potentially relevant relationships with others , and the environment in which they are situated . the cases in which these activities have been systematically integrated for a mutual benefit are still very few compared to the corpus of crowd related literature . this paper presents a case study of an integrated approach to the definition of an innovative model for pedestrian and crowd simulation ( on the side of synthesis ) that was actually motivated and supported by the analyses of empirical data acquired from both experimental settings and observations in real world scenarios . in particular , we will introduce a model for the adaptive behaviour of pedestrians that are also members of groups , that strive to maintain their cohesion even in difficult ( e.g. high density ) situations . the paper will show how the synthesis phase also provided inputs to the analysis of empirical data , in a virtuous circle . crowd analysis , crowd synthesis , agent based modelling and simulation , groups
energy harvesting communication systems involve transmitters being powered by environmental sources such as solar , vibration , and thermal effects , either alone or as a supplement to the power drawn from a grid . the ability to supply the energy storage units from environmental sources can be very useful in distributed systems such as wireless sensor networks and m2m networks . recent developments in ambient energy harvesting technologies have already resulted in the practical implementation of such systems . dependence on a variable energy source ( as opposed to a constant supply of power ) poses interesting new challenges for the transmission of information . optimal adaptation of transmission rate has been analyzed under various problem formulations - . in , the transmission time minimization problem on an awgn channel , for data packets arriving at arbitrary but known time instants and energy harvests occurring again at arbitrary , known time instants , was formulated and solved . in and , the formulation was extended to a broadcast channel with a static data pool . this was extended to cover the case of new data arriving during transmission in . in , bounds on the capacity of an energy harvesting awgn channel were obtained . the results were extended to a fading channel in . the solution of the transmission completion time minimization problem on a fading channel with a static data pool and known harvest times and channel states was reported in . this paper extends the formulation of by relaxing the static data pool assumption , and its main contribution is to develop an offline solution for the time minimizing packet scheduling problem with a rechargeable transmitter under fading conditions . the solution needs to adjust its transmission power and rate over the course of transmission with respect to packet arrivals , as well as channel state and energy harvests . this will sometimes correspond to lowering the rate ( and therefore the energy per bit ) , to work energy efficiently and prevent premature data queue idleness ; and at other times , to increasing the rate to take advantage of a good channel state , especially when energy is abundant . after making the problem statement precise , we show the uniqueness of its solution through equivalence to a related throughput maximization problem . the solution is first obtained for the equivalent convex problem by the sequential unconstrained minimization technique ( sumt ) , and the optimal allocation for the completion time minimization problem is shown to be achievable through iterative runs of sumt . we begin with the problem statement in the next section . consider point - to - point communication over a fading channel , where transmission is supplied by the harvested energy , arriving at arbitrary instants . following the _ offline _ formulation in , we assume that the transmitter has knowledge of the energy harvests as well as channel states before transmission starts . in contrast to , data packets are allowed to arrive at arbitrary ( known ) times during the course of transmission . the harvested energy is stored in an ( ideal ) battery and immediately becomes available for use by the transmitter . data packets are stored in a data buffer ( of infinite capacity ) . an example sequence of energy and packet arrivals , as well as channel gain changes , is illustrated in fig . [ fig : sm ] .
starting from time ,the amounts of energy and data have become available by time are denoted by and , respectively .any arrival of energy or data or a change in the channel state is called an _ event_. the duration between any two consequent events is called an epoch .the length of epoch is .given an average power constraint and channel gain level during the epoch , we assume rate level of is achievable for a certain tolerable error probability . equivalently the power level used to transmit a codeword at rate is given by : .accordingly , in discrete - time , the received signal is , where and are the input symbol and channel gain , and is zero - mean unit variance gaussian noise . given an average power constraint , capacity of the channel is given by ., are event times ( energy harvests marked as , channel state changes , , or data arrivals ) .the denote inter - event epoch durations . ]we consider packets arriving in a certain time window of size .the problem is to find an allocation of power and rate across time that minimizes the total duration of transmission for all of these packets .an optimal policy should respect causality constraints ( at any time , only the resources available up to that point can be used ) .it immediately follows from the concavity of the rate function that rate ( and power ) should not change within an epoch .so , the search for an optimal schedule can be limited to schedules that keep a constant power level and rate within each epoch .hence , the optimization problem can be written in terms of rates assigned to epochs .note that in the problem formulation below , the solution space is further limited ( w.l.o.g . ) to schedules spanning no more than some epochs .the value of can be set as the number of epochs used by any arbitrary feasible schedule .[ pr : fadeschedulingtmin ] * transmission time minimization of packets on an energy harvesting fading channel : * in pr .[ pr : fadeschedulingtmin ] , denotes the last epoch used in an optimal schedule .state the ( energy and data ) causality constraints , while ensures transmission completion of all data .following , we will exhibit the equivalence of pr .[ pr : fadeschedulingtmin ] to a convex problem , namely problem [ pr : fadeschedulingdmax ] .problem [ pr : fadeschedulingdmax ] aims to find a schedule which minimizes total energy consumption to transmit a given sequence of packets within a given deadline constraint , using the energy harvested during this time .[ pr : fadeschedulingdmax ] * energy consumption minimization of packets on an energy harvesting fading channel : * [ lmm : convexproblem ] problem [ pr : fadeschedulingdmax ] is a convex optimization problem ._ firstly , note that is a strictly convex , monotonically increasing function .furthermore , the constraint set of the problem is defined by non - negative weighted sums of either s and s , each constraint being either convex or linear , respectively .it easily follows ( please see for details ) that the set of feasible allocations form a convex region .finally , as the objective function of the minimization problem is also a non - negative weighted sum of increasing convex functions , we conclude that pr .[ pr : fadeschedulingdmax ] is convex .[ lmm : equivalence ] suppose is the minimum completion time ( obtained by solving pr .[ pr : fadeschedulingtmin ] ) for the sequence of packets arriving by time , .then , for this sequence of events , any solution of pr . 
[ pr : fadeschedulingdmax ] with deadline constraint specified as provides a solution to pr . [ pr : fadeschedulingtmin ] . _ proof . _ let the two schedules be optimal solutions of pr . [ pr : fadeschedulingtmin ] and of pr . [ pr : fadeschedulingdmax ] defined with the deadline , respectively . the energy consumption of the two schedules by the deadline must be the same , since the opposite claim would contradict the optimality of one of them : by definition , the energy-minimizing schedule has used no more energy by the deadline than the time-minimizing one . suppose it used strictly less energy ; then some of the energy available by the deadline is left unused , and it could be spent in the last epoch to increase the rate and complete the transmission earlier , contradicting the minimality of the completion time . hence we have two schedules completing the transmission of the same amount of data at the same time while consuming the same amount of energy , so both are solutions to problems [ pr : fadeschedulingtmin ] and [ pr : fadeschedulingdmax ] . but by convexity pr . [ pr : fadeschedulingdmax ] has a unique solution , hence the two schedules must be the same . ' '' '' + [ corollary : uniqueness ] the solution of pr . [ pr : fadeschedulingtmin ] is unique . _ proof . _ any solution of pr . [ pr : fadeschedulingtmin ] provides a solution to pr . [ pr : fadeschedulingdmax ] ; since pr . [ pr : fadeschedulingdmax ] has a unique solution ( by convexity ) , pr . [ pr : fadeschedulingtmin ] must also have a unique solution .
' '' '' + the _ sequential unconstrained minimization technique ( sumt ) _ is a convenient method for iteratively converging to the optimum of a constrained problem by solving a sequence of unconstrained optimization problems . the unconstrained problems are formed by adding to the objective of the original problem penalty terms corresponding to constraint violations . correspondingly , in our case we obtain pr . [ pr : unconstrained_minimization ] as follows : [ pr : unconstrained_minimization ] * unconstrained minimization problem : * due to constraints , and , penalty terms , , and have been added to the objective function . starting from a point in the exterior of the feasible region and an initial value of the penalty coefficient , we reach the next point by solving the corresponding unconstrained minimization problem . at each sumt iteration , the initial point is moved to the previously computed result . by updating the penalty coefficient such that after iteration , for some growth parameter , we solve a sequence of unconstrained problems with monotonically increasing values of the penalty coefficient . intuitively , this drives the iterates toward the feasible region . as the penalty is zero inside the feasible region , the point at which the algorithm stops is the first feasible point it reaches . it is proved ( see ) that , for a convex objective and penalty terms as defined above , the algorithm converges to the optimum of the original constrained problem as the penalty coefficient goes to infinity . in practice , the iterations are stopped when a chosen stopping criterion is satisfied . in our problem , sumt is initialized with an infeasible allocation ( i.e. , a point in the exterior of the constraint region ) , specifically , transmitting all data at constant rate within the given deadline while disregarding the causality constraints . to ensure fast convergence ( see ) , the penalty coefficient is initialized such that the values of the objective and penalty terms are commensurate , and the penalty terms corresponding to each constraint are scaled such that no single constraint dominates . at each iteration of sumt , the corresponding unconstrained problem is solved by _ newton s method _ , as is standard for the inner iterations of sumt . after the newton step in an inner iteration , the rate allocation vector is updated as $\textbf{r}^{l+1} = \textbf{r}^{l} - [\nabla^2 f(\textbf{r}^l)]^{-1} \nabla f(\textbf{r}^l)$ ; the decrement becoming smaller than a predefined accuracy parameter is the stopping criterion for each inner iteration . by reducing this parameter , the inner optimizations can be made arbitrarily accurate . the convergence rate of these iterations is discussed in the following sections . from lemma 2 , using the optimal value of the completion time as a parameter in pr . 2 would give us an optimal schedule for pr . 1 . of course , this value is not known before solving pr . 1 . the method we use is to approach it iteratively by solving pr . 2 for different values of the deadline and checking the resulting amount of energy consumption . the bisection method is used to monotonically narrow down the interval in which the optimal completion time of pr . [ pr : fadeschedulingtmin ] must lie . since the minimum consumed energy is a monotonically decreasing and continuous function of the deadline , any feasible value of the deadline provides an upper bound on the optimal completion time . in search of upper and lower bounds , the deadline is initialized as the end of the last data arrival epoch , and sumt is run as detailed in section a.
if the resulting optimal energy that sumt returns is too high , it means that transmission can not be completed within this deadline , hence the current value of provides a lower bound . is then is extended by the next epoch length .this procedure is repeated until the total consumed energy returned by sumt goes below the energy harvested by .that value of provides an upper bound .the next deadline is chosen as the average of the upper and lower bounds , and sumt is run again .if the deadline is feasible , it becomes the new upper bound , if not , it becomes the new lower bound , and so on .the iterations are stopped when the difference between the upper and lower bounds goes below , which , provided that the inner optimizations of sumt are also done with sufficient accuracy , sandwiches in an interval of size .the computational complexity is largely imposed by the stopping criteria of newton , sumt and bisection iterations . to compute the overall complexity of proposed scheme ,let us first consider the number of bisections .when bisection iterations begin , the difference between upper and lower bounds on completion time becomes the last epoch length of the most current schedule returned by sumt . in each iteration this interval is halved , so at most bisections are to be performed .for each bisection , sumt makes iterations to converge with a desired accuracy of .the number of newton steps to achieve an accuracy of in the inner newton iterations per each iteration of sumt is upper bounded by , where is the minimum decrement amount of and is the value at the optimal point .this bound follows from the different nature of convergence of newton s iterations for different operating points .it has been shown in that , once the operation point gets sufficiently close to the optimum , convergence rate is quadratic , while it is approximately linear until then . finally , the computational requirements imposed by each newton step , due to the construction and the inversion of a hessian ( where is the number of epochs in the problem ) , is polynomial ( with complexity or as low as with ultimately efficient implementation . )as an example , we consider the event sequence depicted in fig .[ fig : numex ] . the final schedule returned by the proposed algorithmis also shown in the figure .the penalty parameter is initialized as , the growth parameter is set to .the threshold values for newton s method , sumt and bisection are , and , respectively .the algorithm repeats sumt iterations , within which at most 6 newton s steps are repeated , for each of bisection repetitions and terminates within seconds in matlab running on a macbook pro rev .when the newton s and bisection thresholds are raised to and , respectively , the run time reduces to seconds .we believe that optimizing the code over an efficient programming platform can reduce this time significantly .epoch is , the bandwidth is khz , energy harvest amounts and arriving data are marked as and , respectively .( b ) final schedule returned by completion time minimization algorithm . ]a method for solving the offline minimum completion time packet scheduling problem on an energy harvesting fading channel has been developed and demonstrated .the key to the method is exhibiting equivalence to an energy minimization problem which is a convex program . in certain realistic scenarios ,the harvest profile and data arrivals may be known in advance . in that case, the offline solution would apply for a static channel . 
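as an illustration of the overall procedure just described, the following sketch combines an exterior-penalty (sumt-style) inner solver with a bisection on the deadline. it is not the authors' implementation: the rate-power relation is taken as p(r) = (2^{2r} - 1)/h as a stand-in for the paper's normalization, quadratic penalty terms are used, scipy's quasi-newton routine replaces the newton iterations of the text, and `epochs_up_to` is a hypothetical callback that returns the per-epoch data truncated at a trial deadline (the initial bracket is assumed to have been found by extending the deadline epoch by epoch, as above).

```python
# a minimal sketch of the procedure described above, not the authors' code.
import numpy as np
from scipy.optimize import minimize

def power(r, h):
    """power needed to sustain rate r on an epoch with channel gain h (assumed model)."""
    return (2.0 ** (2.0 * r) - 1.0) / h

def penalized_objective(r, lengths, gains, e_cum, b_cum, b_total, mu):
    r = np.maximum(r, 0.0)
    energy_used = np.cumsum(lengths * power(r, gains))
    bits_sent = np.cumsum(lengths * r)
    violations = np.concatenate([
        np.maximum(energy_used - e_cum, 0.0),       # energy causality
        np.maximum(bits_sent - b_cum, 0.0),         # data causality
        [max(b_total - bits_sent[-1], 0.0)],        # completion of all data
    ])
    return energy_used[-1] + mu * np.sum(violations ** 2)

def sumt(lengths, gains, e_cum, b_cum, b_total, mu0=1.0, growth=10.0, outer=8):
    r = np.full(len(lengths), b_total / np.sum(lengths))   # infeasible constant-rate start
    mu = mu0
    for _ in range(outer):                                  # outer sumt loop
        res = minimize(penalized_objective, r,
                       args=(lengths, gains, e_cum, b_cum, b_total, mu))
        r, mu = res.x, mu * growth                          # grow the penalty coefficient
    return np.maximum(r, 0.0)

def is_feasible(r, lengths, gains, e_cum, b_cum, b_total, tol=1e-6):
    energy_used = np.cumsum(lengths * power(r, gains))
    bits_sent = np.cumsum(lengths * r)
    return (np.all(energy_used <= e_cum + tol) and
            np.all(bits_sent <= b_cum + tol) and
            bits_sent[-1] >= b_total - tol)

def min_completion_time(epochs_up_to, t_low, t_high, tol=1e-3):
    """epochs_up_to(T) -> (lengths, gains, e_cum, b_cum, b_total), truncated at T."""
    while t_high - t_low > tol:                    # bisection on the deadline
        t_mid = 0.5 * (t_low + t_high)
        data = epochs_up_to(t_mid)
        r = sumt(*data)
        if is_feasible(r, *data):                  # feasible: the deadline can shrink
            t_high = t_mid
        else:                                      # infeasible: the deadline must grow
            t_low = t_mid
    return t_high
```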
on a fading channel with an ergodic channel state process ,an online algorithm such as waterfilling could run on top of the offline adaptation .when the data and/or harvest arrivals are also unknown , the offline solution here may be combined with a prediction or learning scheme or a simple look - ahead policy .bib k. lin , j. hsu , s. zahedi , d. lee , j. friedman , a. kansal , v. raghunathan , m. srivastava , heliomote : enabling long - lived sensor networks through solar energy harvesting , " in _ proceedings of the 3rd international conference on embedded networked sensor systems ( sensys ) _ , san diego , ca , usa , p. 309, 2005 .j. yang and s. ulukus , optimal packet scheduling in an energy harvesting communication system , " _ ieee trans . on communications _, 60(1):220 - 230 , january 2012 .m. a. antepli , e. uysal - biyikoglu , and h. erkal , optimal packet scheduling on an energy harvesting broadcast link , " to appear on _ ieee journal on selected areas in communications ( jsac ) , special issue on energy - efficient wireless communications _ , 2011 .j. yang , o. ozel and s. ulukus , broadcasting with an energy harvesting rechargeable transmitter , " _ ieee trans . on wireless communications _ , vol.pp , no.99 , pp.1 - 13 , dec .f. m. ozcelik , h. erkal and e. uysal - biyikoglu , optimal offline packet scheduling on an energy harvesting broadcast link , " _ 2011 ieee int .symposium on information theory _ ,st.petersburg , aug .h. erkal , f. m. ozcelik and e. uysal - biyikoglu , optimal offline packet scheduling on an energy harvesting broadcast link,"arxiv:1111.6502v1 , 2011 o. ozel and s. ulukus , information - theoretic analysis of an energy harvesting communication system , " in _ ieee pimrc workshops _ , 2010 .r. rajesh , v. sharma , and p. viswanath , information capacity of energy harvesting sensor nodes , " _ 2011 ieee int .symposium on information theory _ ,st.petersburg , aug .r. rajesh , v. sharma and p. viswanath , capacity of fading gaussian channel with an energy harvesting sensor node , " in _ ieee globecom11 , houston _ , texas , dec .o. ozel , k. tutuncuoglu , j. yang , s. ulukus and a. yener , transmission with energy harvesting nodes in fading wireless channels : optimal policies , " _ ieee jour . onselected areas in communications , 29(8):1732 - 1743 _ , september 2011 .h. erkal , _ optimization of energy harvesting wireless communication systems _ , m.sc .thesis , middle east technical university , dept . of electrical and electronics engineering , ankara , turkey , dec .2011 , available at http://www.eee.metu.edu.tr/~cng/. f. m. ozcelik , _ optimal and implementable transmission schemes for energy harvesting networks _ , m.sc .thesis , metu , ankara , turkey , expected 2012 , working copy available at http://www.eee.metu.edu.tr/~cng/ , last updated on june 2012 .s. boyd and l. vandenberghe , _ convex optimization _ , cambridge university press , 2004 .d. g. luenberger and y. ye , _ linear and nonlinear programming _ , new york : springer , 2008 .p.a.jensen and j.f .bard , _ algorithms for constrained optimization _ , 2003 , available at http://www.me.utexas.edu/~jensen/ormm/supplements/units/nlp_methods/const_opt.pdf .consider two feasible allocation vectors and and let be a linear combination of these .we will show that rate allocation vector is also feasible and consumes an amount of energy less than .first , let us compute the energy consumption of . follows from the strict convexity of function and equality holds only if or . 
with this inequality , the combined allocation consumes no more energy than the corresponding convex combination of the energy consumptions of the two schedules . we are now ready to check the feasibility of this allocation , beginning with the energy causality constraint : the inequality above states that the energy causality constraint is satisfied . similarly , the combined allocation also respects data causality : as the two original schedules are feasible , this implies that the combined allocation satisfies the data causality constraint . in addition , when the transmission ends we have the following , which shows that the transmission completion constraint is satisfied . so , we have shown that the feasible allocations form a convex region , i.e. , any convex combination of two feasible schedules is also feasible . combining this result with the convexity of the objective , we conclude that pr . [ pr : fadeschedulingdmax ] is a convex optimization problem .
the offline problem of transmission completion time minimization for an energy harvesting transmitter under fading is extended to allow packet arrivals during transmission . a method for computing an optimal power and rate allocation ( i.e. , an optimal offline schedule ) is developed and studied . keywords : energy harvesting , completion time , offline schedule , packet scheduling , causality constraints , energy constraint , unconstrained problem , sumt , sequential optimization , complexity .
the unpredictability of turbulence makes a deterministic analysis of the instantaneous velocity field not only impractical , but very nearly impossible .researchers have instead studied the statistical properties of turbulence , which necessarily involves the probabilities of velocities and their fluctuations .although information theory is the natural language for treating these probability distributions , there have been few studies that make use of it . instead, the focus has often been on the moments . in wall - bounded flows , for example, considerable effort has been directed towards determining the mean velocity profile as a universal function of distance from the wall . in other situations ,the fluctuations are of primary interest and the focus has been on the moments of velocity differences : ^n \rangle_x.\ ] ] these velocity differences are thought to represent the characteristic velocity of a turbulent eddy " of size , a concept that goes back to l.f .richardson and his contemporaries .the importance of in turbulence is apparent from its appearance in kolmogorov s law for .this law is derived using several significant assumptions and the navier - stoke s equations and remains one of the only exact solutions in turbulence .it is the starting point for the entire scaling phenomenology of turbulence . herewe propose a different approach . instead of beginning with the moments and kolmogorov - type assumptions , we will focus on the probability distributions used to calculate the , : with these probability distributions, information theory can be used to make quantitative statements about , , the unpredictability of .of course , all the information about the moments is contained in , so our analysis can not be completely unrelated to the traditional theory . in order to gauge the usefulness of this approach ,we apply it to a key turbulence concept .big whirls have little whirls + that feed on their velocity , + and little whirls have lesser whirls + and so on to viscosity .+ the cascade concept captured by this rhyme can be formulated with information theory , because the of the cascade leaves its mark on the probability distributions of the eddies ( whirls ) . in short , the uncertainty in observing a big eddy become a small eddy should be less than the reverse . using experimental observations of ( quasi-)2d turbulent flow ,we confirm the eddy hypothesis without the navier - stoke s equations , scaling arguments or kolmogorov s assumptions , although some of the features of that kind of analysis ( self - similarity ) reappear naturally .a completely new result is the existence and direction of information transfer . before we can come to these conclusions , however, we must review the salient features of turbulence and information theory .two - dimensional ( 2d ) turbulence occurs approximately in nearly all large - scale atmospheric flows due partly to the fact that the thickness of the earth s atmosphere is very small compared to its breadth .this , along with stratification , result in vast regions of the atmosphere where the vertical velocity is negligible compared with the horizontal .for the same reason large scale oceanic flows are also considered two - dimensional .our measurements are made using a soap film , which has an even smaller aspect ratio .the physics of soap films and their usefulness in studying 2d flows and 2d turbulence has already been well documented , but we present some of the essential experimental details below . 
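before turning to the experimental details, it may help to note how the distributions p_l(δv) and the moments s_n(l) introduced above can be estimated from a sampled velocity record; the following minimal sketch (illustrative function names, synthetic data) assumes a one-dimensional record u(x) on a uniform grid:

```python
# a minimal sketch (not the authors' analysis code): estimating the pdf of
# velocity differences du = u(x + l) - u(x) and the structure functions
# s_n(l) = <du**n> from a uniformly sampled velocity record u(x).
import numpy as np

def increments(u, shift):
    """velocity differences over a separation of `shift` grid points."""
    return u[shift:] - u[:-shift]

def structure_function(u, shift, n):
    du = increments(u, shift)
    return np.mean(du ** n)

def increment_pdf(u, shift, bins=64):
    du = increments(u, shift)
    hist, edges = np.histogram(du, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist

# illustrative use on synthetic data standing in for a measured record
u = np.random.randn(100000)
print(structure_function(u, shift=10, n=2))
```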
although we utilize 2d turbulence , our results should extend to 3d turbulence without any significant alteration .the soap solution is a mixture of dawn ( 2 ) detergent soap and water with 10 m hollow glass spheres added for the velocity measurements .figure [ setup ] is a diagram of the experimental setup .the soap film is suspended between two vertical blades connected to a nozzle above and a weight below by nylon fishing wire .the nozzle is connected by tubes to a valve and a reservoir which is constantly replenished by a pump that brings the spent soap solution back up to the reservoir .there is an overflow that leads back to the bottom reservoir so that the height of the top reservoir , and thus the pressure head , is constant. sometimes the pump feeds directly into nozzle , giving results indistinguishable from the reservoir setup .the flow is always gravity - driven .typical centerline speeds are several hundred cm / s with rms fluctuations generally on the order of 10 cm / s .the channel width is usually several cm .the flow velocity is measured using particle image velocimetry ( piv ) .a bright white light source is placed behind the soap film and shines on the soap film at an angle ( so that the light does not directly enter the camera , which is perpendicular ) .the particles scatter more light than the surrounding soap water and are easily distinguished .a fast camera tracks their movement and standard piv techniques are used to calculate the velocity field . because the full velocity field at different times is needed for our analysis , we use piv and can not use , , laser doppler velocimetry .we focus on the wall - normal or horizontal velocity instead of the vertical velocity because it does not suffer from aliasing effects and because changes slightly with ( due to gravitational acceleration ) .we take velocities very near the center of the channel to avoid wall effects and variations in the energy and enstrophy ( norm of vorticity ) injection rates .turbulence in the soap film is generated by inserting a row of rods ( comb ) perpendicular to the film .when this protocol is used we almost always observe the direct enstrophy cascade . in the traditional phenomenology, enstrophy is transferred from some large injection scale to a viscous dissipative scale .the rate of enstrophy transfer is , which should be constant over the inertial range of scales in between and .we introduce this traditional framework in order to contrast it with our own approach , as well as provide evidence to the portential skeptic that we are indeed working with a direct cascade .as is customary we assume that the energy spectrum of velocity fluctuations scales with and the wavenumber , an inverse length , for a certain range of ( inertial range ) . for the direct enstrophy cascade , . in fig .[ spectra ] we plot all of the calculated from .the curves have been normalized using the large length scale and the rms velocity .all of the curves collapse at low and intermediate , a signature of the cascade s self - similarity .the physical process that created this curve is similar in all cases .we now outline our test .consider the diagram of a turbulent cascade shown in fig .[ cascade ] , which is typical of that shown in many turbulence textbooks .the richardson picture involves eddies evolving in both space and time .a large eddy at an early time ( ) will become a small eddy at a later time ( ) .( we also consider the reverse process . 
)we should be more certain about given than given .let us now move on to the precise description .the shannon entropy is central to information theory .we could simply look at the raw probability distributions themselves , but the entropy gives a single number that characterizes how random the distribution is and provides an interpretation in terms of information .for the eddy , the shannon entropy is where the sum is over all possible values of .this is the amount of information we gain from or uncertainty we had prior to measuring .( uncertainty and information are the same in this framework . ) to test a relationship between two eddies , we use a modified form of the entropy : the conditional entropy .this gives us the uncertainty of one eddy given that the other eddy occurred . for the uncertainty of given we write where and are the joint and conditional probabilities respectively .if and are independent , . knowing does nt help us reduce our uncertainty about . if is determined by with absolute certainty , then .so the stronger the relationship , the smaller this quantity is .the take on a continuous range of values , but to calculate probabilities and then estimate entropies , we must bin ( discretize ) the measured data .this is , in fact , unavoidable due to the finite resolution of all measurement apparatuses .we systematically varied the bin size , but as in ref . , we found our results to be extremely robust .none of our results will change except for a vertical shift in all the conditional entropies by the same factor .we use a bin size of , which of course changes size with , but ensures that the number of bins ( and thus the maximum possible value of the conditional entropy ) remains roughly the same .we assert that signifies a cascade from large to small scales , and we use to test this . in principle also depends on , and as indicated in fig .[ cascade ] , but we maximize with respect to and ( experimental parameters ) and take many realizations at different , which is simply a reference marker , so that in the end only depends on .this means that we are formulating richardson s hypothesis at a scale .we note that our approach is similar to work on information transport in spatiotemporal systems . a study that more closely anticipates our own is an information transfer treatment of the goy model .the result of the calculation of is shown in fig .[ cond_entropy_a ] for the same and as in fig .[ spectra ] . clearly is greater than zero in all cases as expected for a direct cascade . in the traditional framework ,a cascade with constant transfer rate is first assumed and then the scalings are derived with the extra assumptions ( universality , etc . ) already mentioned .then if the moments scale as predicted , the cascade is considered to be demonstrated . herewe bypass this sophisticated argumentation and show the cascade straightaway .more can be gleaned from fig .[ cond_entropy_a ] .there appears to be a region of nearly constant value in many of the curves , which is reminiscent of the enstrophy ( energy ) flux which takes on a maximum and constant value in the inertial range equal to the injection rate ( ) .this suggests a connection between max( ) and . estimating from fig .[ spectra ] , we find a general increase of max( ) with .moreover , all of the curves appear to have a similar shape .this is reminiscent of the energy spectra above , and suggestive of an underlying self - similarity . 
indeed by choosing a single and normalizing by and by , we find reasonable collapse as seen in fig .[ cond_entropy_b ] .not only have we rediscovered self - similarity , but also a turbulent length scale . is very close to , but their ratio decreases with as shown in the inset of fig .[ cond_entropy_b ] .now let us establish an interesting corollary .the mutual information is the information shared between two variables . for the large eddies at earlier time ( ) and the small at later time ( ) , we write if we also define a mutual information for and , then we can rewrite as where we have used that , due to the small ( experimentally verified ) .this means that in other words , the information shared between eddies going downscale is more than the reverse .there is net information being transferred downscale , concurrently with the enstrophy .this result should apply generally to both the 2d and 3d direct cascades and even the 2d inverse energy cascade .it rests only on the validity of our information theory expression of richardson s eddy hypothesis , which we have here experimentally verified .the implications of this result are quite powerful .kolmogorov s small scale universality assumption can be expressed as the small scales forgetting " about the large ( forcing ) scales .[ eq : mutinfo ] suggests that this can never be true .so long as there is a cascade , there must be information transfer in the same direction which makes forgetting " impossible .thus intermittency appears to be a necessary feature of all turbulent cascades .we have shown that we can formulate and test richardson s idea of a cascade without using the navier - stoke s equation , scaling arguments or kolmogorov s assumptions .and yet some of the old patterns , such as self - similarity , re - emerged .this work presents an entirely new perspective on the statistics of turbulent velocities and suggests a more suitable framework for understanding intermittency .our application of information theory to turbulence ought to serve as a guidepost for further work .
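for concreteness, the binned estimators used above (shannon entropy, conditional entropy and mutual information) can be sketched as follows; the pairing of large-scale differences at one time with small-scale differences at a later time is assumed to have been done beforehand, and the bin width and synthetic data below are purely illustrative:

```python
# a minimal sketch of the binned estimators used in the text: h(x),
# h(y|x) = h(x,y) - h(x), and i(x;y) = h(y) - h(y|x).
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    pairs = np.stack([x, y], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(y, x):          # h(y | x)
    return joint_entropy(x, y) - entropy(x)

def mutual_information(x, y):
    return entropy(y) - conditional_entropy(y, x)

def binned(values, width):
    return np.floor(values / width).astype(int)

# illustrative use: x ~ large-scale difference at t0, y ~ small-scale at t0 + t
rng = np.random.default_rng(0)
x = rng.normal(size=50000)
y = 0.6 * x + 0.8 * rng.normal(size=50000)
xb, yb = binned(x, 0.1), binned(y, 0.1)
print(conditional_entropy(yb, xb), mutual_information(xb, yb))
```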
turbulence theory is usually concerned with the statistical moments of the velocity and its fluctuations . one could also analyze the underlying probability distributions , which is the purview of information theory . here we use information theory , specifically the conditional entropy , to analyze ( quasi- ) 2d turbulence . we recast richardson 's " eddy hypothesis " , that large eddies break up into small eddies in time , in the language of information theory . in addition to confirming richardson 's idea , we find that self - similarity and turbulent length scales reappear naturally . not surprisingly , we also find that the direction of information transfer is the same as the direction of the cascade itself . consequently , intermittency may be considered a necessary companion to all turbulent flows .
pathogens that diversify and evolve over relatively short time scales are associated with specific features of infectious disease epidemiology .epidemics of acute respiratory infection occuring each winter in temperate climates are caused in general by influenza ( hay 2001 ) and respiratory syncytial viruses ( cane 2001 ) .rotaviruses ( iturriza - gmara 2004 ) are the single most common cause of acute infantile gastroenteritis throughout the world .these viruses undergo antigenic drift and both influenza a and rotaviruses undergo major antigenic shifts when a new virus is introduced into the human population through zoonotic transmission .the dynamics of the epidemic is closely related to the antigenic structure and evolution of the viral population .the spreading of multiple strains may be investigated by means of mathematical models , and four studies have recently uncovered the role of a ` reinfection threshold ' ( gomes _ et al _ 2005 ) in regulating pathogen diversity ( gomes _ et al _ 2002 ; boni _ et al _ 2004 ; abu - raddad & ferguson 2004 ; gkaydin _ et al _ 2005 ) . in spite of the significant differences in model formulations , assumptions , and propositions ,there is a striking convergence at the level of the underlying mechanisms and results : levels of infection increase abruptly as a certain threshold is crossed , increasing the potential for pathogen diversity .this may be achieved by increasing the transmissibility ( modelled by in this work ) , by decreasing the strength of the immunity ( modelled by increasing in this work ) or by increasing the rate at which novel pathogen variants are generated ( not considered in this work ) .the concept of reinfection threshold was introduced in the context of a mean field model for vaccination impact ( gomes _ et al _ 2004 ) and has been the subject of some debate ( breban and blower 2005 ) , on the grounds that it corresponds to a change in the order of magnitude of the equilibrium density of infectives that takes place over a certain range of parameter values , rather than at a well defined threshold .in contrast with other phenomena in epidemic models where the term threshold is widely used , the reinfection threshold does not correspond to a bifurcation , or a phase transition , occurring at a well defined critical value . in section 2, we give a precise definition of the reinfection threshold concept , relating it to a bifurcation that takes place in the limit when the demographic renewal rate tends to zero .thus , the reinfection threshold is a smoothed transition that occurs in systems with low birth and death rates , where a pronounced change in the systems steady state values takes place over a narrow range in parameter space , akin to a smoothed phase transition in finite physical systems around the value of the critical control parameter where a sharp transition occurs in an infinite system .this approach reconciles the two opposing views in the literature , and we shall use the term reinfection threshold throughout the paper in this sense . 
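the smoothed transition described above can be visualized by integrating the single-strain sir model with partial immunity against reinfection (written out later in the text as equations ( [ mf1 ] )); the sketch below uses the demographic and recovery rates quoted later, while the value of the immunity factor and the function names are illustrative assumptions:

```python
# a minimal sketch (illustrative values, not the paper's code): the single-strain
# sir model with reinfection at a rate reduced by sigma, integrated to steady
# state for a sweep of r0 = beta/(tau + mu).  the equilibrium infective density
# rises sharply near r0 ~ 1/sigma, i.e. at the reinfection threshold.
from scipy.integrate import solve_ivp

mu = 1.0 / 80.0        # birth/death rate (per year), life expectancy 80 years
tau = 52.0             # recovery rate (per year), infectious period ~1 week
sigma = 0.25           # assumed reduction factor for reinfection (illustrative)

def siri(t, y, beta):
    s, i, r = y
    lam = beta * i                       # force of infection
    return [mu - lam * s - mu * s,
            lam * s + sigma * lam * r - (tau + mu) * i,
            tau * i - sigma * lam * r - mu * r]

for r0 in [2.0, 3.0, 1.0 / sigma, 5.0, 8.0]:
    beta = r0 * (tau + mu)
    sol = solve_ivp(siri, (0.0, 2000.0), [0.99, 0.01, 0.0],
                    args=(beta,), rtol=1e-8, atol=1e-10)
    print(f"r0 = {r0:4.1f}   equilibrium infective density ~ {sol.y[1, -1]:.2e}")
```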
for the invasion by a pathogen strain of a homogeneously mixed population where another strain circulates, it was recently found that strain replacement is favoured when the transmissibility is below the reinfection threshold , while coexistence is favoured above ( gkaydin _ et al _ 2005 ) .here we consider a model of contact structure in the host population to describe the underlying network for disease transmission , and investigate how this impacts on the outcome of an invasion by a new strain , that starts by infecting a very small fraction of the population .regular lattices are simple models that account for the geographical distribution of the host population through local correlations but are not good descriptions of social contact networks as they neglect mobility patterns . by contrast ,random graphs are simple models of complex networks that account for the mobility of the host population , through the random links or connections , but by neglecting local correlations render these models poor descriptions of real social networks .a more realistic model of social networks was proposed recently by watts and strogatz . in 1998they introduced small - world networks with topologies that interpolate between lattices and random graphs ( watts & strogatz 1998 ) . in these networksa fraction of the lattice links is randomized by connecting nodes , with probability , with arbitrary nodes on the lattice ; the number of links is preserved by removing lattice links when random ones are established . this interpolation is non linear : for a range of the network exhibits small - world behaviour , where a local neighbourhood ( as in lattices ) coexists with a short average path length ( as in random graphs ) .analysis of real networks ( dorogotsev & mendes 2002 ) reveals the existence of small - worlds in many interaction networks , including networks of social contacts . in a recent work the contact patterns for contagion in urban environments have been shown to share the essential characteristics of the watts and strogatz small - world model ( eubank _ et al _ 2004 ) . in this work, we implement the infection dynamics on a small - world network and find that the contact structure of the host population , parametrized by , plays a crucial role in the final outcome of the invasion by a new strain , as departures from the homogeneously mixed regime favour strain replacement and extinction versus coexistence ( section 3 ) . within the scope of an effective mean - field model ( section 4 ), this effect is attributed to a reduction of the effective transmissibility due to screening of infectives , resulting in a translocation of the reinfection threshold to higher transmissibility and thus decreasing the range where strain coexistence is the preferred behaviour .this conclusion seems to contradict the established idea that localized interactions promote diversity , in the sense that the coexistence of several competing strains is favoured with respect to homogeneously mixed models . for the problem that we address here, this idea was confirmed in a recent work ( buckee _ et al _ 2004 ) where strain competition in small - world host networks has been considered . 
in buckee _et al _ , individual based stochastic simulations for a model with short term immunity , and thus high rates of supply of naive susceptibles , have shown that a localized host contact structure favours pathogen diversity , by reducing the spread of acquired immunity throughout the population .our results indicate that when the rate of supply of susceptibles is low enough , the effect of localized interactions reported by buckee _et al _ is superseded by that of the reduction of the effective transmissibility , that places all but the most infectious strains below the reinfection threshold .thus , in the case of long lasting partial immunity , a predominantely local contact structure will contribute to reduce the number of coexisting competing strains in the host population .we consider an antigenically diverse pathogen population with a two - level hierarchical structure : the population accommodates a number of strains ( two in this work ) ; each strain consists of a number of variants ( here conceptualised as many ) .variants in the same strain differ by some average within - strain antigenic distance , , and different strains are separated by some antigenic distance , .antigenic distance is negatively related to cross - immunity and , in some appropriate normalised measure , we may take .these assumptions are consistent with the idea that rapidly evolving rna viruses experience selection as groups of antigenically close strains ( levin _ et al _ 2004 ) .studies of influenza a viral sequence evolution ( plotkin _ et al _ 2002 ; smith _ et al _ 2004 ) provide empirical support of this modelling approach .we consider a community of ( fixed ) individuals , with susceptibles and infectives in circulation .we assume that hosts are born fully susceptible and acquire a certain degree of immunity as they are subsequently infected .individuals are characterized by their present infection status ( healthy or infected by strain ) and their history of past infections ( previously infected by a set of strains ) . in this model the dynamics of a single viral strain , representing a group of cocirculating antigenically close strains , is described by a susceptible - infected - recovered ( sir ) model with partial immunity against reinfection .abu - raddad & ferguson ( 2004 ) have proposed a different modelling approach based on the reduction of a multi - strain system to an ` equivalent ' single strain sir model , with parameters calculated to reproduce the values of the total disease prevalence and incidence of the original multi - strain model . in our model ,the overall performance of a strain depends on a single parameter , the degree of immunity , measured by , for subsequent infections , and the role and meaning of the reinfection threshold ( see below for a definition ) become particularly clear . the densities ( or fractions of the host population ) are denoted by for susceptibles and for infectives .the superscript of denotes the viral strain that currently infects that fraction of the population .the subscripts describe the history of past infections ( in the case of infectives , if different from the current infecting strain ) and , for the model with two viral strains , run over the subsets of , that we will represent by , for the susceptibles . 
for the infectives, takes the values , or .the total fraction of the population infected by a given viral strain is , , , and infection of fully susceptible individuals occurs at a rate per capita ( force of infection ) given by .the parameter is the transmissibility for strain .previous infections by the same strain reduce the transmissibility by a factor while previous infections by a different strain reduce the transmissibility by a factor .constant and equal birth / death rates , , and constant recovery rate , are also assumed .the mean - field equations for the dynamics of the various densities of susceptibles and infectives are where the density of susceptibles that were previously infected with both viral strains is . the parameter is fixed at corresponding to a life expectancy of 80 years and the rate of recovery from infection is also fixed at representing an average infectious period of approximately one week .it is well known ( anderson & may 1991 ) that a given single strain , , persists at endemic equilibrium if or , equivalently , where is the basic reproduction number of strain , the type of endemic equilibrium for the system described by ( [ mf ] ) depends on the values of the transmissibilities , ( or ) . to establish whether two strains cocirculate at equilibrium , we evaluate the eigenvalues of the jacobian at the single strain endemic equilibrium . if at least one of the eigenvalues is positive , the single strain equilibrium is unstable , and either coexistence or the other strain alone is the stable state . applying this to strain 1 , we find that this strain is expected to circulate alone when the pair is in the region marked 1 in figure 1(a , b ) . likewise , strain 2 is expected to circulate alone in region 2 .the regions shaded in grey are then identified as the set of parameter values for which coexistence of both strains is stable , or coexistence regions . for comparison, we include the dashed lines that delimit the wider coexistence regions in the absence of reinfection by the same strain ( ) .the two panels correspond to systems with , and different values of : ( or ) in figure 1(a ) ; and ( or ) in figure 1(b ) .comparing the two panels we make the expected observation that the stability of the coexisting solution is enhanced when the two strains are distantly related .we note the existence of two small ` shoulders ' or ` kinks ' , marked a and b , on the boundaries of the coexistence regions . by numerical inspection ,the kinks are found to be close to this makes intuitive sense .a simple sir model with partial immunity ( where reinfection occurs at a rate reduced by ) and with a low birth and death rate ( as considered here ) predicts a sharp increase in the density of infected individuals at close to ( gomes _ et al _ 2004 , 2005 ) .this region of abrupt change is related to a bifurcation at that occurs when . we shall use the term reinfection threshold for the smoothed transition that takes place at small , as well as for the approximate locus of this crossover .focussing on strain 1 , we expect a steep increase in the infections by strain 1 as increases beyond , enhancing the competitive advantage of strain 1 relative to strain 2 and reducing the coexistence region .two limiting cases deserve special reference : when is as large as , and the coexistence region collapses to the diagonal ; and when as in previously studied models ( may & anderson 1983 ; bremerman & thiemme 1989 ) the coexistence region is maximal . 
hereafter we will restrict ourselves to a symmetrical model , and will denote by the usual symbol for the basic reproduction number , .figure 1(c , d ) shows the stability of the coexistence equilibrium as a function of , measured by the real parts of the seven eigenvalues of the linear approximation of system ( [ mf ] ) at equilibrium for and ( figure 1(c ) ) , ( figure 1(d ) ) .all the eigenvalues have negative real parts , and all but the smallest of them change by several orders of magnitude as crosses the reinfection threshold region around .the plot shows the modulus of the real parts of the seven eigenvalues , in logarithmic scale , versus , and the vertical line through marks the position of the reinfection threshold .also shown by the dashed line is the total density of infectives at the equilibrium , which also increases by two orders of magnitude as crosses the reinfection threshold region . in both caseswe observe that as the transmissibility crosses the reinfection threshold around the overall stability of the coexistence equilibrium increases significantly , as the largest eigenvalues decrease steeply .this effect is achieved by increasing either or , and the system behaviour can be characterised in terms of or of . throughout the rest of this work we fix and vary .the behaviour of the solutions of ( [ mf ] ) for and is shown in figures 2 and 3 .we have taken , and in panels a ) , b ) and c ) , respectively , corresponding to below threshold , threshold and above threshold behaviour for the fixed value of . by threshold we mean , as before , the window of parameter values around that corresponds to the smoothed transition that takes place for small birth and death rate , where the systems coexistence equilibrium densities and stability properties change very rapidly .this is illustrated in the panels d ) , e ) and f ) of figures 2 and 3 , where the dashed line is at and the dotted line indicates the position of the reinfection threshold , for ( figure 2(d ) and 3(d ) ) , ( figure 2(e ) and 3(e ) ) and ( figure 2(f ) and 3(f ) ) .the full line depicts the total equilibrium density of infectives with either one of the two strains as a function of for the ( unique ) stable steady state of the system , corresponding to endemic equilibrium where both viral strains coexist at the same steady state densities .the numerical integration of equations ( [ mf ] ) starts with a single circulating strain ( strain 1 ) , and when the single strain steady state is reached at years a small fraction ( ) of individuals infected with the invading strain ( strain 2 ) is introduced in the system .the two curves in panels a ) , b ) and c ) of figures 2 and 3 represent , in logarithmic scale , the infective densities for each of the strains as a function of time obtained from this numerical integration . the curve that corresponds to the density of infectives carrying strain 2 starts at years with density in all the examples shown . on relatively shorttime scales of a few decades , we find that the reinfection threshold is the boundary between two different regimes .for very similar viral strains , as depicted in figure 2 , the behaviour of the model below threshold is strain replacement , and the behaviour on and above threshold is strain coexistence . for distinct viral strains , as in figure 3, the outcome is always strain coexistence , but the density oscillations are negligible above threshold , and very pronounced below threshold . 
in order to extract epidemiologically significant predictions from the mean field model we have to take into account that the population is discrete , and that densities below a certain lower bound , that depends on the population size , are effectively zero .if we take this effect into account and set the densities to zero when they fall below , we obtain extinction of both strains in the case of figure 3(a ) . for longer time scales, however , the model also predicts coexistence for the case of similar strains below threshold ( figure 2(a ) ) since there are no attractors in the single strain invariant subspaces when the two strains have the same transmissibility .all the solutions that were analysed tend eventually to the coexistence equilibrium , but the transients for similar strains below threshold last for several hundred years , during which the density of infectives carrying strain 1 reaches much smaller values than the cut - off of .this behaviour and the role of the reinfection threshold can be interpreted in terms of the eigenvalues of the coexistence endemic equilibrium represented in figure 1(c , d ) . as increases across the threshold ,the real parts of the eigenvalues decrease steeply , and the imaginary parts vanish . as a resultthe oscillatory component of the system plays an important role below the reinfection threshold and is absent above .this is the basic ingredient of the different invasion dynamics depicted in figures 2 and 3 .the difference in below threshold behaviour between similar and dissimilar strains , strain replacement in the former case and total extinction in the latter may be understood intuitively in terms of the reduced cross - immunity of dissimilar strains . in this case , the small fraction of infectives carrying strain 2 finds a highly susceptible population and generates a large epidemic outbreak , during which strain 1 goes extinct because of the lack of suceptibles .strain 2 quickly exhausts its pool of susceptibles through first infections , and then dies out too as .then , for realistic population sizes , the predictions of the mean - field model are the following. for similar viruses , strain replacement ( drift ) occurs below threshold , and coexistence on and above threshold . for dissimilar viruses, global extinction occurs below threshold while coexistence occurs on and above threshold .it is known ( bayley 1975 ) that fluctuations due to stochastic effects in discrete populations will also favour strain extinction and replacement with respect to deterministic model predictions . a homogeneously mixed stochastic version of model ( [ mf ] ) was considered by gkaydin _et al _ ( 2005 ) for population sizes between and , and , as expected , stochasticity was found to favour strain replacement and extinction both for similar and dissimilar strains .this effect may be , however , greatly enhanced in more realistic descriptions of the host population where the homogeneously mixed assumption is relaxed .in the presence of random long - range links such as those considered in small - world networks the endemic and epidemic thresholds of susceptible - infective - susceptible ( sis ) and sir models may be mapped on to _ mean - field _ site and bond percolation transitions ( grassberger 1983 , dammer & hinrichsen 2003 , hastings 2003 ) . 
in recent works ,the network topology has been considered in the calculation of endemic and epidemic thresholds of sis and sir models ( may & lloyd 2001 , moore & newmann 2000a , 2000b , pastor - santorras & vespigniani 2001a ) and the results revealed a strong dependence of the threshold values on the network size and structure .the contact network topology has also been shown to play an important role in the stationary properties of the endemic state ( pastor - santorras & vespigniani 2001b ) , the short term dynamics of epidemic bursts ( keeling 1999 , kleczkowski & grenfell 1999 , rhodes & anderson 1996 ) , the long term dynamics of childhood diseases ( verdasca _ et al _ 2005 ) and in the estimation of disease parameters from epidemiological data ( meyers _ et al _ 2005 ) .the structure of the network of contacts of the host population is also expected to play a role in the invasion dynamics described in the previous section .this will affect the pattern of strain replacement and as a consequence the evolution of competing multi - strain pathogens . in the following we analyse quantitatively the effects of a network of social contacts with small - world topology on the simple two strain model ( [ mf ] ) .we implemented a discrete version of the model ( [ mf ] ) on a cellular automaton ( square lattice ) with sites and small - world interaction rules ( see the appendix for a description of the algorithm ) .we started by simulating the behaviour of the system for a single strain model , whose mean field equations are simply we found characteristic medium and long - term dynamics related , in a quantitative fashion , to the structure of the network of contacts .in particular , as decreases , the increase in spatial correlations ( i ) decreases the effective transmissibility through the screening of infectives and susceptibles , which in turn increases the value of the transmissibility at the endemic and reinfection thresholds .in addition , the spatial correlations ( ii ) enhance the stochastic fluctuations with respect to the homogeneously mixed stochastic model .this effect is particularly strong at low , where the relative fluctuations are largest and where as a consequence ( iii ) the dependence of the steady state densities on the effective transmissibility predicted by the mean - field equations breaks down .analogous effects , including departures from the mean - field behaviour at the endemic threshold ( persistence transition ) of a sir model with non - zero birth rate have been reported recently ( verdasca _ et al _ 2005 ) . in figure 4we plot the effective transmissibility , ( average density of new infectives per time step divided by the value of for the instantaneous densities ) , at fixed and , well above the reinfection threshold of the mean - field model , as a function of the small - world parameter , .the variation of the effective transmissibility with is due to the clustering of infectives and susceptibles .this is a well known screening effect that results from the local structure ( correlations ) , and has to be taken into account in fittings to effective mean - field models . 
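as an illustration of how such an effective transmissibility can be extracted from simulation output (the array names below are assumptions about how the time series are stored, not the authors' code):

```python
# a sketch of the effective-transmissibility estimate described in the text:
# the average density of new infectives per time step divided by the mass-action
# term for the instantaneous densities.
import numpy as np

def effective_transmissibility(new_cases, s_density, i_density, dt):
    """beta_eff = < new infections per unit time / (s * i) > over the time series."""
    return np.mean(new_cases / np.maximum(s_density * i_density * dt, 1e-12))
```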
in this framework, the reduction of the effective transmissibility represents an increase of the value of at the reinfection threshold .the clustering , and spatial correlations in general , have also drastic consequences on the amplitude and nature of the stochastic fluctuations .enhanced fluctuations will ultimately lead to stochastic extinction as decreases , but before extinction occurs , the fluctuations lead to the appearance of a regime dominated by local structure and correlations , where the mean - field relations among the equilibrium densities break down .this is illustrated in figure 5 where we plot the effective transmission rate ( dots ) as a function of the equilibrium density of infectives ( model parameters as in figure 4 ) . in the same figure we plot the equilibrium value of the density of infectives ( full line ) predicted by the mean - field equations ( [ mf1 ] ) for a value of the transmissibility equal to the effective transmissibility obtained from the simulations .when , away from the network endemic threshold at , the mean - field equilibrium density for the effective transmissibility is in excellent agreement with the simulated equilibrium density . in the region of small ( ) ) , however , the simulated equilibrium densities differ significantly from those calculated using the effective mean - field theory .a departure from the mean - field dependence of the effective transmissibility on the steady state density of infectives implies that the description of the system is no longer possible in terms of effective mean - field theories based on density dynamics , by contrast with the second regime where the contact structure can be implicitely taken into account by effective mean - field models ( section 4 ) .we then focussed on the analysis of the invasion dynamics .we performed individual based simulations starting from an initial condition where the system is close to the steady state for a single resident strain and introduced a small fraction of individuals ( ) infected by the invading strain .the algorithm is a natural extension of the single strain stochastic algorithm for sir dynamics with reinfection on a small - world network of contacts , and a detailed description is given in the appendix .we found that as local effects become important ( low ) , strain replacement ( and also total extinction ) are favoured with respect to the homogeneously mixed regime ( ) .these results are summarized in figure 6 where we plot at crossover between the different regimes , coexistence , replacement and extinction , versus the small - world parameter , , for systems where the competing viral strains are similar , ( figure 6(a ) ) and dissimilar , ( figure 6(b ) .the full line is the boundary between the strain coexistence and the strain replacement regimes , and the dashed line is the boundary between strain replacement and total extinction .the final outcome of an invasion is determined by carrying out a series of simulations , for different values of and , and keping fixed at . for each , twenty ( forty in some cases )invasion simulations are performed for a period of 13.7 years , with invasion at . the fraction of simulations where both the circulating and the invading strain prevail is plotted as a function of , at fixed .this fraction changes rapidly from one to zero as decreases across a small interval , and the boundary of the coexistence regime ( the full line in figure 6(a , b ) is determined as the point where it takes the value . 
a similar analysis yields the boundary that separates the replacement from the total extinction regime ( the dashed line in figure 6(a , b ) .the results show that as decreases the value of at these two boundaries increases exponentially from its value in the homogeneously mixed system , . as a consequence ,the range of parameters in the coexistence regime decreases drastically as decreases , supporting the claim that the structure of the small - world network of contacts hinders , rather than favours , pathogen diversity .this contrasts with the behaviour reported by buckee _et al _ ( 2004 ) for a model of strain evolution including short term host immunity and cross - immunity on a ( static ) small - world network of contacts , where coexistence of competing strains was found to be favoured with respect to the homogeneously mixed model .the different behaviour we report is due to the combined effect of small - world network structure and a low rate of supply of naive susceptibles , and may be understood in terms of the analysis of the single strain stochastic model discussed previously , together with the behaviour of the two - strain mean - field model which , for low , predicts strain coexistence as the outcome of an invasion only when the transmissibility is high enough. recall that in the mean - field model of section 2 ( system ( [ mf ] ) ) when a second strain is introduced in a population with a resident virus , drift occurs below the reinfection threshold for antigenically similar strains ( small ) . when the resident and the invading strains are antigenically distant ( large ) the final outcome is coexistence at and above the reinfection threshold and global extinction below threshold .the results of the individual based simulations show that the enhancement of the stochastic fluctuations due to spatial correlations favours replacement where we would otherwise have coexistence , and global extinction where we would otherwise have replacement .in particular , replacement ( both drift and shift , replacement of dissimilar strains ) becomes a typical outcome at the reinfection threshold , instead of coexistence as in the mean - field model .this may be viewed as a finite size stochastic effect similar to the effect reported in gkaydin _et al _ 2005 for the homogeneously mixed stochastic model , or to what we have also found here for , but much more pronounced . 
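the boundary determination described above (the point where the fraction of coexistence outcomes crosses one half) can be sketched as a simple interpolation; the outcome table below is a made-up placeholder:

```python
# a sketch of the boundary estimation: for a fixed small-world parameter, the
# fraction of invasion runs ending in coexistence is computed as a function of
# r0 and the boundary is taken where this fraction crosses 1/2.
import numpy as np

def coexistence_boundary(r0_values, outcomes):
    """outcomes[i, k] = 1 if run k at r0_values[i] ended in coexistence."""
    frac = outcomes.mean(axis=1)
    above = np.where(frac >= 0.5)[0]       # first r0 with majority coexistence
    if len(above) == 0:
        return None
    j = above[0]
    if j == 0:
        return r0_values[0]
    x0, x1, f0, f1 = r0_values[j - 1], r0_values[j], frac[j - 1], frac[j]
    return x0 + (0.5 - f0) * (x1 - x0) / (f1 - f0)

r0_values = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
outcomes = (np.random.rand(5, 20) <
            np.array([0.0, 0.1, 0.4, 0.9, 1.0])[:, None]).astype(int)
print(coexistence_boundary(r0_values, outcomes))
```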
as we noted previously, the amplitude of the stochastic fluctuations at small values of is greatly enhanced with respect to the amplitude of the stochastic fluctuations at , due to coherent fluctuations of host population clusters. however, the main effect of spatial correlations that accounts for the results of figure 6 is the screening of infectives, which leads to the reduction of the effective transmissibility as shown in figure 4. as decreases, this effect brings a system above the reinfection threshold closer to, or even below, the reinfection threshold, also favouring replacement for small and either replacement or extinction for larger . while the effects of the coherent fluctuations of clusters are important only in a small range of above the single-strain endemic transition, screening of infectives occurs over the whole range of , with the corresponding translocation of the reinfection threshold to larger values of . in particular, for a large range of values of we obtain drift/shift, instead of coexistence as predicted by the well-mixed model for the same disease parameters. many of these effects may be captured by effective mean-field models, as described in the next section. in section 3 we have seen that, for single-strain dynamics, the mean-field equilibrium incidence given by ([mf1]) for the effective transmissibility is in excellent agreement with the results of the simulations for values of the small-world parameter in an interval . in this range, the simulated time series are well approximated by the solutions of an effective mean-field model of the form ([mf]). for the construction of the effective model we assume that the transmissibility is of the form , where is a function, unknown a priori, that represents the screening of infectives. the screening function is obtained from a plot as in figure 4, where the screening effect is quantified for the single-strain dynamics. the relevant range of for this fit is . in the region of above the endemic threshold where we have found significant departures from the mean-field description, one may think that a more general ansatz for the form of the force of infection could lead to a modified effective mean-field model, albeit a more complicated one, that would fit the simulation results. however, it is easy to check that the parametric plot of vs , where is the steady-state average infective density, follows the mean-field relation of the model with linear force of infection whatever the functional form assumed for .
indeed, for any function and rate of infection of the form , the measured from the simulations is , and the curve is the same as the standard mean-field curve for . this means that even in the scope of a more general model with a nonlinear force of infection, the mean-field curve for vs (the full line in figure 5) does not fit the simulation results for in the range . we conclude that the breakdown of the mean-field relations reported in section 3 is an indication of a new regime where spatial correlations are too important for the contact rate between individuals of different classes to be described by the product of the corresponding densities. the construction of effective models in this regime requires the use of pair approximations that take into account the spatial correlations (de aguiar _et al_ 2003) and will be the subject of future work. we performed stochastic simulations of individual-based models on small-world networks to represent host populations where one or two viral strains, each representing a group of antigenically close variants, may be present. in the case of a single viral strain, the node dynamics corresponds to an sir model with reinfection at a reduced rate due to acquired immunity. in the case of two competing strains, the rate of infection by a distinct strain is also reduced due to cross-immunity, which is always weaker than strain-specific immunity. both the single- and double-strain models include crucial ingredients required for realistic modeling of the host population, namely stochasticity, a discrete finite population, and spatial structure given by a plausible contact network. we analysed both the reinfection dynamics of the single-strain model and the invasion dynamics of the two-strain model, taking initial conditions that correspond to the presence of a small number of individuals infected with a pathogen strain in a population where another pathogen strain circulates.
for single - strain dynamics, we found that the major effect of spatial correlations is a decrease in the effective transmissibility through the screening of infectives and susceptibles which in turn increases the value of the transmissibility at the endemic and at the reinfection thresholds .in addition , spatial correlations enhance the amplitude of the stochastic fluctuations with respect to the homogeneously mixed stochastic model .this effect is particularly strong at low , where the relative fluctuations are largest and where the mean - field dependence of the steady state densities on the effective transmissibility breaks down .indeed , we have found a regime at small , where spatial correlations dominate and the mean - field relations break down , as well as a second , wider , regime for larger values of where effective mean - field models are capable of describing the essential effects of the spatial correlations through a reduced effective transmissibility .for the two - strain model , we found that the host contact structure significantly affects the outcome of an attempted invasion by another strain of a population with an endemic resident strain , and by contrast with standard expectations we observed that spatial structuring reduces the potential for pathogen diversity .the simulation results show that the structure of the network of contacts favours strain replacement ( drift / shift ) or global extinction versus coexistence as the outcome of an invasion , with respect to the mean - field and stochastic homogeneously mixed populations .in particular , we find that as the small - world parameter , , decreases at fixed , the value of the strain specific immunity parameter above which coexistence is the typical outcome increases exponentially from its value for the homogeneously mixed system , , supporting the claim that local correlations strongly reduce pathogen diversity .this conclusion is in apparent contradiction with the established idea that localized interactions promote diversity , as confirmed by stochastic simulations for a model with high rates of supply of naive susceptibles . in that modelthe effect of the host contact structure favours pathogen diversity , by suppressing the uniform spread of acquired immunity throughout the population .by contrast , in our model characterised by partial permanent immunity and low rates of supply of naive susceptibles ( through births ) the former effect is superseded by the screening effect .the latter is also due to the clustering of the host population , and results in a reduction of the effective transmissibility , changing the reinfection threshold to higher levels of , and therefore reducing the range where coexistence is the preferred behaviour .the results of our simulations strongly support the conclusion that in systems with a reinfection threshold , due to a low rate of supply of naive susceptibles , the major effect of the host population spatial structuring is the effective reduction of pathogen diversity . the immunity profile ( ferguson & al .2003 ) and the duration of infection ( gog & grenfell 2002 ) have recently been shown to play an important role in shaping the patterns of pathogen evolution in multi - strain models with mutation generated antigenic diversity . 
investigating the additional effect of host population contact structure topology in these models will be the subject of future work .financial support from the portuguese foundation for science and technology ( fct ) under contracts pocti / esp/44511/2002 , pocti / isfl/2/618 and pocti / mat/47510/2002 , and from the european commission under grant mext - ct-2004 - 14338 , is gratefully acknowledged .the authors also acknowledge the contributions of j. p torres and m. simes to test and to improve the code used in the simulations .99 abu - raddad , l j and ferguson , n f ( 2004 ) the impact of cross - immunity , mutation and stochastic extinction on pathogen diversity , _ proc ._ b * 271 * 2431 - 2438 .+ de aguiar , m a m , rauch , e m , bar - yam , y ( 2003 ) mean field approximation to a host - pathogen model , _ physe _ * 67 * 047102 .+ anderson , r a and may , r m ( 1991 ) _ infectious diseases of humans _ , oxford u. p. , oxford .+ bailey , n t j ( 1975 ) _ the mathematical theory of infectious diseases _ , 2nd edition , charles griffin & co , london .+ boni , m f , gog , j r , andreasen , v and christiansen , f b ( 2004 ) influenza drift and epidemic size : the race between generating and escaping immunity , _ theor ._ * 65 * 179 - 191 .+ bremermann , h and thieme , h r ( 1989 ) a competitive - exclusion principle for pathogen virulence , _ j. math. biol . _ * 27 * 179 - 190 .+ buckee , c o f , koelle , k , mustard m j , and gupta , s ( 2004 ) the effects of host contact network structure on pathogen diversity and strain structure , _ proceedings of the national academy of science _ * 101* 10839 - 10844 . + cane , p c ( 2001 ) molecular epidemiology of respiratory syncytial virus , _ rev . med .virol . _ * 11 * 103 - 116 .+ dammer , s m and hinrichsen , h ( 2003 ) epidemic spreading with immunization and mutations , _ phys .e _ * 68 * 016114 .+ dorogotsev , s n and mendes , j f ( 2003 ) _ evolution of networks _, oxford : oxford university press .+ grassberger , p ( 1983 ) on the critical behaviour of the general epidemic process and dynamical percolation _ math .biosci . _ * 63 * 157 - 172 .+ eubank , s , guclu h , anil kumar v s , marathe m v , srinivasan a , torockzcal z and wang n ( 2004 ) modelling disease outbreaks in realistic urban social networks , _ nature _ * 429 * 180 - 184. + ferguson , n m , galvani , a p and bush , r b ( 2003 ) ecological and immunological determinants of influenza evolution , _ nature _ * 422 * 428 - 433 . + gog , j r and grenfell , b t ( 2002 ) dynamics and selection of many - strain pathogens , _ proc .natl acad .usa _ * 99 * 17209 - 17214 .+ gkaydin , d , oliveira - martins , j b , gordo , i and gomes , m g m ( 2005 ) the reinfection threshold regulates pathogen diversity : the unusual patterns of influenza a evolution ( submitted ) .+ gomes , m g m , medley , g f and nokes , d j ( 2002 ) on the determinants of population structure in antigenically diverse pathogens _ proc ._ b * 269 * 227 - 233 .+ gomes , m g m , white , l j and medley , g f ( 2004 ) infection , reinfection , and vaccination under suboptimal immune protection : epidemiological perspectives ._ j. theor .biol . _ * 228 * 539 - 549 .+ gomes , m g m , white , l j and medley , g f ( 2005 ) the reinfection threshold , _( in press ) .+ hastings , m b ( 2003 ) mean - field and anomalous behaviour on a small world network , _ phys .lett . 
_ * 91 * 098701 .+ hay , a j , gregory , v , douglas , a r and lin , y p ( 2001 ) the evolution of human influenza viruses , _ philos ._ b * 356 * 1861 - 1870 . + iturriza - gmara , m , kang , g and gray , j ( 2004 ) rotavirus genotyping : keeping up with an evolving population of human rotaviruses , _* 31 * 259 - 265 .+ keeling , m j ( 1999 ) the effects of local spatial structure on epidemiological invasions _ proc .lond . b _ * 266 * 859 - 867 .+ kleczkowski , a and grenfell , b t ( 1999 ) mean field type of equations for spread of epidemics : the small world model , _ physica a _ * 274 * 355 - 360. + levin , s a , dushoff , j and plotkin , j b ( 2004 ) evolution and persistence of influenza a and other diseases , _ math .biosci . _ * 188 * 17 - 28 .+ may , r m and anderson , r m ( 1983 ) epidemiology and genetics in the coevolution of parasites and hosts , _ proc ._ b * 219 * 281 - 313 .+ may , r m and lloyd , a l ( 2001 ) infection dynamics on scale free networks , _ phys .e _ * 64 * 066112 .+ meyers , l a , pourbohloul , b p , newman m e j , skowronski d m , brunham r c ( 2005 ) network theory and sars : predicting outbreak diversity , _ j. theor ._ * 232 * 71 - 81 .+ moore , c and newmann , m e j ( 2000 ) epidemics and percolation in small world networks , _ phys .e _ * 61 * 5678 - 5682 .+ moore , c and newmann , m e j ( 2000 ) exact solution of site and bond percolation on small world networks , _ phys .e _ * 62 * 7059 - 7064 .+ pastor - santorras , r and vespigniani , a ( 2001 ) epidemic spreading in scale free networks , _ phys .* 86 * 3200 - 3203 .+ pastor - santorras , r and vespigniani , a ( 2001 ) epidemic dynamics and endemic states in complex networks , _ phys .e _ * 63 * 066117 .+ plotkin , j b , dushoff , j and levin , s a ( 2002 ) hemagglutinin sequence clusters and the antigenic evolution of influenza a virus , _ proceedings of the national academy of science _ * 99 * 6263 - 6268 .+ rhodes , c j and anderson , r m ( 1996 ) persistence and dynamics in lattice models of epidemic spread , _ j. theor .* 180 * 125 - 133 .+ smith , d j , lapedes , a s , de jong , j c , bestebroer , t m , rimmelzwaan , g f , osterhaus , a d and fouchier , r a ( 2004 ) mapping the antigenic and genetic evolution of influenza virus , _ science _ * 305 * 371 - 376 .+ verdasca j , telo da gama , m m , nunes , a , bernardino , n r , pacheco , j m and gomes , m c ( 2005 ) recurrent epidemics in small world networks , _ j. theor .* 233 * 553 - 561. 
+ watts , d j and strogatz , s h ( 1998 ) it s a small world , _ nature _ * 392 * 440 - 442 .a community of ( fixed ) individuals comprises , at time , susceptibles and infectives in circulation .hosts are born fully susceptible and acquire a certain degree of immunity as they are subsequently infected .we consider susceptibles and infectives that were previously infected by strain , and .the indices denote previous and current infections as in the text .we consider a cellular automaton ( ca ) on a square lattice of size with periodic boundary conditions .the ( random ) variables at each site may take one of eight values : , , , , , , or .the lattice is full .we account for local interactions / connections with neighbouring sites , with , and long - range interactions / connections , with a small - world probability , .the transmissibility is the sum of the local and long - range rates of transmission .first infection , within - strain and between - strains reinfection , recovery , birth and death occur stochastically , with fixed rates ( , , , , , .the recovery time ( ) sets the time scale . at each monte carlo ( time ) step , site updates are performed following a standard algorithm .the type of event , long or short - range infection , within and between - strains reinfections by strains 1 and 2 , recovery , birth and death , is chosen with the appropriate frequency ( , , , , , , , , ) and then proceeds as follows .long(short)-range first infection by 1 ( 2 ) .a site is chosen at random ; if the site is occupied by an or an other than no action is taken .if the site is occupied by an , one of the other lattice sites ( or of its neighbours for short - range infection ) is chosen at random ; is infected by 1 ( 2 ) iff that site is an ( ) .long(short)-range between strain reinfection by 1 ( 2 ) .a site is chosen at random ; if the site is occupied by an or an other than ( ) no action is taken .if the site is occupied by an ( ) , one of the other lattice sites ( or of its neighbours for short - range infection ) is chosen at random ; ( ) is reinfected by 1 ( 2 ) iff that site is an ( ) .long(short)-range within strain reinfection by 1 ( 2 ) .a site is chosen at random ; if the site is occupied by an or an other than ( ) or no action is taken .if the site is occupied by an ( ) or one of the other lattice sites ( or its neighbours for short - range infection ) is chosen at random ; ( ) or is reinfected by 1 ( 2 ) iff that site is an ( ) .* figure 1 * stability analysis .( a , b ) conditions for strain coexistence or competitive exclusion as described by the two strain mean - field model .the panels correspond to models with and different values of : ( a ) ; ( b ) .the regions shaded in grey are the set of parameter values for which coexistence of both strains is stable , or coexistence regions .the dashed lines delimit the wider coexistence regions in the absence of reinfection by the same strain ( ) . 
in region 12 ) , the single strain endemic equilibrium for strain 1 ( resp .2 ) is stable .( c , d ) modulus of the real parts of the seven eigenvalues of the coexistence equilibrium along the diagonal of ( a , b ) respectively , in logarithmic scale .also shown is the total density of infectives at equilibrium ( dashed line ) , and the position of the reinfection threshold ( dotted line ) .* figure 2 * mean - field new infectives densities for the two strain model of equations ( 1 - 4 ) with .we have taken , and in panels a ) , b ) and c ) , respectively , corresponding to below threshold , threshold and above threshold behaviour for the value of .this is illustrated in the diagrams d ) , e ) and f ) where the dashed line is at and the dotted line indicates the position of the reinfection threshold .the full line depicts the equilibrium total density of infectives , , as a function of for the stable steady state of the system , the endemic equilibrium where both viral strains coexist .* figure 3 * mean - field new infectives densities for . as in figure 2we take , and in panels a ) , b ) and c ) , respectively , corresponding to below threshold , threshold and above threshold behaviour for , as illustrated in the diagrams of panels d ) , e ) and f ) . *figure 4 * effective transmissibility vs the small - world parameter , .the effective transmissibility , , is calculated as the average density of new infectives per time step divided by where and are the instantaneous densities of susceptibles and infectives .the drastic reduction in is due to the clustering of infectives and susceptibles as decreases and the spatial correlations increase .the model is the single strain reinfection model ( [ mf1 ] ) and the parameters are and .the number of nodes is .* figure 5 * effective transmissibility vs the equilibrium infective density , ( dots ) , over the whole range of the small - world parameter , ( dots ) .notice the logarithmic scale in the axis .the full line is the curve calculated from the mean - field single strain reinfection model ( [ mf1 ] ) .the parameters are as in figure 4 .the results show the existence of two regimes : an effective mean - field regime for , where the relations between the equilibrium densities are given by the mean - field equations for the screened value of the transmissibility , and a fluctuation dominated regime for , where the mean - field relations break down .* figure 6 * coexistence versus replacement ( full line ) and replacement versus total extinction ( dashed line ) crossovers in space .the reinfection parameter , , is plotted as a function of the small - world parameter , , at the crossover separating the dynamical regimes of coexistence of two competing viral strains ( above the full line ) , of invasion prevalence ( between the dashed and the full lines ) and of total extinction ( below the dashed line ) .see the text for details .panel a ) corresponds to similar strains , , and plot b ) to dissimilar strains , . 
and as in other simulations .* figure 7 * mean field and individual based description of invasion dynamics .the same initial conditions and the same invasion conditions are used in all cases .( a ) mean - field new infectives densities ( grey line ) for the model of equations ( [ mf ] ) with , , fitted from the simulations of the single strain model for ( other parameters as in figure 2(b ) ) , and time series ( black line ) of the stochastic model for ( other parameters as in figure 2(b ) ) .( b ) we also show , for comparison , the solutions of the standard mean - field model ( grey line ) , where , , are not corrected to take into account the screening. the black line corresponds to the same data as in ( a ) .
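to make the comparison shown in figure 7 concrete, the sketch below (python) first estimates the effective transmissibility from simulated time series exactly as in figure 4 (average density of new infectives per time step divided by the product of the instantaneous susceptible and infective densities), and then integrates a mean-field model with the screened transmissibility. the equations ([mf]) and ([mf1]) are not reproduced in this excerpt, so the right-hand side written here is a generic sir-with-reinfection (siri) form and should be read as an assumption; the screening factor is taken as an externally fitted input (from a figure-4-type plot) rather than derived, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def effective_transmissibility(new_cases, s_density, i_density):
    """lambda_eff as in figure 4: mean new-infective density per time step
    divided by the product of the instantaneous S and I densities."""
    s = np.asarray(s_density); i = np.asarray(i_density); c = np.asarray(new_cases)
    mask = (s > 0) & (i > 0)
    return np.mean(c[mask] / (s[mask] * i[mask]))

def siri_rhs(t, y, lam_eff, sigma, tau, mu):
    """Assumed standard SIR-with-reinfection mean-field form (not the paper's
    own equations, which are given earlier in the document)."""
    s, i, r = y
    ds = mu * (1.0 - s) - lam_eff * s * i
    di = lam_eff * s * i + sigma * lam_eff * r * i - (tau + mu) * i
    dr = tau * i - sigma * lam_eff * r * i - mu * r
    return [ds, di, dr]

def screened_trajectory(lam, f_phi, sigma, tau, mu, y0, t_span):
    """Integrate the effective model with screened transmissibility lam * f(phi),
    where f(phi) is the screening factor read off a figure-4-type curve."""
    return solve_ivp(siri_rhs, t_span, y0,
                     args=(lam * f_phi, sigma, tau, mu), dense_output=True)

# illustrative usage: uncorrected (f_phi = 1) versus screened (f_phi < 1) runs,
# mirroring panels (b) and (a) of figure 7
sol_plain = screened_trajectory(2.0, 1.0, 0.3, 52.0, 1.0 / 80.0, [0.9, 0.01, 0.09], (0, 20))
sol_screened = screened_trajectory(2.0, 0.6, 0.3, 52.0, 1.0 / 80.0, [0.9, 0.01, 0.09], (0, 20))
```

the design choice here is deliberately modest: the screening enters only as a multiplicative reduction of the transmissibility, which is exactly the level of description argued for in section 4; anything beyond that (the fluctuation-dominated regime at small ) requires pair approximations instead.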
we investigate the dynamics of a simple epidemiological model for the invasion by a pathogen strain of a population where another strain circulates . we assume that reinfection by the same strain is possible but occurs at a reduced rate due to acquired immunity . the rate of reinfection by a distinct strain is also reduced due to cross - immunity . individual based simulations of this model on a ` small - world ' network show that the host contact network structure significantly affects the outcome of such an invasion , and as a consequence will affect the patterns of pathogen evolution . in particular , host populations interacting through a small - world network of contacts support lower prevalence of infection than well - mixed populations , and the region in parameter space for which an invading strain can become endemic and coexist with the circulating strain is smaller , reducing the potential to accommodate pathogen diversity . we discuss the underlying mechanisms for the reported effects , and we propose an effective mean - field model to account for the contact structure of the host population in small - world networks . keywords : pathogen diversity , reinfection threshold , spatial structure , complex networks
several criteria to detect non - classicality of quantum states of a harmonic oscillator have been introduced , mostly based on phase - space distributions , ordered moments , or on information - theoretic arguments . at the same time, an ongoing research line addresses the characterization of quantum states according to their gaussian or non - gaussian character , and a question arises on whether those two different hierarchies are somehow linked each other . as a matter of fact ,if we restrict our attention to pure states , hudson s theorem establishes that the border between gaussian and non - gaussian states coincides exactly with the one between states with positive and negative wigner functions .however , if we move to mixed states , the situation gets more involved .attempts to extend hudson s theorem have been made , by looking at upper bounds on non - gaussianity measures for mixed states having positive wigner function . in this framework , by focusing on states with positive wigner function , one can define an additional border between states in the _ gaussian convex hull _ and those in the complementary set of _ quantum non - gaussian states _, that is , states that can not be expressed as mixtures of gaussian states .the situation is summarized in fig .[ f : poswigner ] : the definition of the gaussian convex hull generalizes the notion of glauber s non - classicality , with coherent states replaced by generic pure gaussian states , i.e. squeezed coherent states. quantum non - gaussian states with positive wigner function are not useful for quantum computation , and are not necessary for entanglement distillation , _e.g. _ the non - gaussian entangled resources used in are mixtures of gaussian states . on the other hand , they are of fundamental interest for quantum information and quantum optics . in particular , since no negativity of the wigner function can be detected for optical losses higher than ( or equivalently , for detector efficiencies below ) criteria able to detect quantum non - gaussianity are needed in order to certify that a _ highly non linear _ process ( such as fock state generation , kerr interaction , photon addition / subtraction operations or conditional photon number detections ) has been implemented in a noisy environment , even if no negativity can be observed in the wigner function .different measures of non - gaussianity for quantum states have been proposed , but these can not discriminate between quantum non - gaussian states and mixtures of gaussian states . an experimentally friendly criterion for quantum non - gaussianity , based on photon number probabilities , has been introduced , and then employed in different experimental settings to prove the generation of quantum non - gaussian states , such as heralded single - photon states , squeezed single - photon states and fock states from a semiconductor quantum dot . ] in this paper we introduce a family of criteria which are able to detect quantum non - gaussianity for single - mode quantum states of a harmonic oscillator based on the wigner function . as we already pointed out , according to hudson s theorem ,the only pure states having a positive wigner function are gaussian states .one can then wonder if any bound exists on the values that the wigner function of convex mixtures of gaussian states can take . 
by following this intuition we present several bounds on the values of the wigner function for convex mixtures of gaussian states , consequently defining a class of sufficient criteria for quantum non - gaussianity .in the next section we will introduce some notation and the preliminary notions needed for the rest of the paper . in sec .[ s : criteria ] we will prove and discuss our wigner function based criteria for quantum non - gaussianity and in sec . [s : examples ] we will prove their effectiveness by considering different families of non - gaussian states evolving in a lossy ( gaussian ) channel . we will conclude the paper in sec . [s : conclusions ] with some remarks .throughout the paper we will use the quantum optical terminology , where excitations of a quantum harmonic oscillator are called photons .all the results can be naturally applied to any bosonic continuous - variable ( cv ) system .we will consider a single mode described by a mode operator , satisfying the commutation relation = \mathbbm{1} ] is a thermal state ( and ) .pure gaussian states can be written as , and , according to hudson s theorem , they are the only pure states having a positive wigner function . together with that of a gaussian state , one can define the concept of _ gaussian map _ : a quantum ( completely positive ) map is defined gaussian iff it transforms gaussian states into gaussian states .all unitary gaussian maps can be expressed as , and they correspond to hamiltonian operators at most bilinear in the mode operators .similarly , a generic gaussian map can be decomposed as a gaussian unitary acting on the system plus an ancilla ( the latter prepared in a gaussian state ) , followed by partial tracing over the ancillary mode .another complete description of a cv quantum state may be given in terms of the so - called -function (\alpha) ] . * proof * the multi - index , which labels every gaussian state in the convex mixture , contains the information about the squeezing and displacement .we can then equivalently consider as variables . by exploiting the linearity property of the wigner functionwe obtain (0 ) & = \int d{{\boldsymbol \lambda}}\ : p({{\boldsymbol \lambda } } ) w[|\psi_{\sf g}({{\boldsymbol \lambda}})\rangle\langle\psi_{\sf g}({{\boldsymbol\lambda}})|](0 ) \nonumber \\ & \geq \frac 2\pi \int d{{\boldsymbol \lambda}}\ : p({{\boldsymbol \lambda } } ) \exp \ { -2n(1+n ) \ } \ : , \label{eq : integral}\end{aligned}\ ] ] where inequality ( [ eq : boundpure ] ) has been used . by defining which is a valid probability distribution with respect to the variable , eq .( [ eq : integral ] ) becomes(0 ) \geq \frac 2\pi \int_0^{\infty } dn \ : \widetilde{p}(n ) \ : \exp \ { -2n(1+n)\}.\end{aligned}\ ] ] studying the second derivative of we conclude that the function is _ convex _ in the whole _ physical _ region ( _ i.e. _ ) . as a consequence , where ] . * proof * given a quantum state which can be expressed as a mixture of gaussian state , and a gaussian map ( or a convex mixture thereof ) , the output state can still be expressed as a mixture of gaussian states . as a consequence we can apply to the state the result in proposition [ p : boundmix ] , obtaining the thesis . + proposition [ p : bound2mix ] leads to two corollaries that will be used in the rest of the paper . 
for any single - mode quantum state , the following inequality holds (\beta ) \geq \frac2\pi \exp\ { -2 \bar{n}_\beta ( 1+\bar{n}_\beta)\}\ : , \:\ : \forall \beta \in \mathbbm{c } \ : , \label{eq : bounddisp}\end{aligned}\ ] ] where ] . * proof * the proof follows from proposition [ p : bound2mix ] , by considering the gaussian map .moreover , since the value of the wigner function at the origin is invariant under any squeezing operation , _ i.e. _ (0 ) = w[\varrho](0 ) \:,\end{aligned}\ ] ] one can maximize the rhs of inequality ( [ eq : bound2mix ] ) with regard to the squeezing parameter . + the violation of any of the inequalities presented in the last two propositions and two corollaries provides a _ sufficient _ condition to conclude that a state is quantum non - gaussian .we formalize this by re - expressing the previous results in the form of two criteria for the detection of quantum non - gaussianity .[ c : uno ] let us consider a quantum state and define the quantity = w[\varrho](0 ) - \frac2\pi \exp\{-2 \bar{n}(\bar{n}+1)\ } \ : .\label{eq : delta1}\end{aligned}\ ] ] then , < 0 \:\ :\rightarrow\:\ : \varrho \notin \mathcal{g } , \nonumber \end{aligned}\ ] ] that is , is quantum non - gaussian .[ c : due ] let us consider a quantum state , a gaussian map ( or a convex mixture thereof ) , and define the quantity = w[\mathcal{e}_{\sf g}(\varrho)](0 ) - \frac2\pi \exp\{-2 \bar{n}_\mathcal{e}(\bar{n}_\mathcal{e}+1)\}\ : .\label{eq : delta2}\end{aligned}\ ] ] then , < 0 \rightarrow \ : \varrho \notin \mathcal{g}. \nonumber \ ] ] typically , criterion [ c : uno ] can be useful to detect quantum non - gaussianity of phase - invariant states having the minimum of the wigner function at the origin of phase space . on the other hand , criterion [ c : due ] is of broader applicability . to give two paradigmatic examples, the latter criterion can be useful if : ( i ) the minimum of the wigner function is far from the origin , so that one may be able to violate inequality ( [ eq : bounddisp ] ) by considering displacement operations ; ( ii ) the state is not phase - invariant and presents some squeezing , and thus one may be able to violate inequality ( [ eq : boundsq ] ) by using single - mode squeezing operations .in this section we test the effectiveness of our criteria , by applying them to typical quantum states that are of relevance to the quantum optics community .we shall consider pure , non - gaussian states evolving in a lossy channel , and test their quantum non - gaussianity after such evolution .specifically , we focus on the family of quantum channels associated to the markovian master equation the resulting time evolution , characterized by the parameter , models both the incoherent loss of photons in a dissipative zero temperature environment , and inefficient detectors with an efficiency parameter . the evolved state can be equivalently derived by considering the action of a beam splitter with reflectivity , which couples the system to an ancillary mode prepared in a vacuum state .the corresponding average photon number reads = ( 1-\epsilon)\ : \bar{n}_0 \:,\end{aligned}\ ] ] where ] defined in eq .( [ eq : delta1 ] ) .we will consider different families of states , namely fock states , photon - added coherent states and photon - subtracted squeezed states . 
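before turning to the three families of states, it is instructive to see how the indicator of eq. ([eq:delta1]) is evaluated in the simplest case. the short python sketch below does this for a single-photon fock state sent through the lossy channel. it uses the convention adopted in the text, in which the value of the wigner function at the origin equals 2/pi times the expectation value of the photon-number parity (so that the vacuum gives 2/pi), together with the standard fact that a single photon survives a loss channel of magnitude with probability 1 - ; the resulting state is a mixture of vacuum and one photon, for which both the parity and the mean photon number are elementary.

```python
import numpy as np

def delta1_lossy_single_photon(eps):
    """Indicator of eq. ([eq:delta1]) for a Fock state |1> after losses.

    With transmissivity eta = 1 - eps the state is
    rho = (1-eta)|0><0| + eta|1><1|, so nbar = eta and
    W(0) = (2/pi) * <parity> = (2/pi) * (1 - 2*eta).
    """
    eta = 1.0 - np.asarray(eps, dtype=float)
    w0 = (2.0 / np.pi) * (1.0 - 2.0 * eta)
    bound = (2.0 / np.pi) * np.exp(-2.0 * eta * (1.0 + eta))
    return w0 - bound

if __name__ == "__main__":
    eps = np.linspace(0.0, 0.999, 1000)
    d = delta1_lossy_single_photon(eps)
    # negative values witness quantum non-Gaussianity even where W(0) >= 0,
    # i.e. for eps >= 1/2, where no Wigner negativity can be observed
    print("Delta_1 < 0 for all sampled eps < 1:", bool(np.all(d < 0.0)))
```

the printed check reflects the behaviour discussed below for fig. [f:fock1]: for the single-photon state the indicator stays negative for every value of the loss parameter, although it approaches zero as the losses become complete.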
in section [s:violation2] we will study how to improve the results obtained, by considering the second criterion and thus by studying the non-gaussianity indicator defined in eq. ([eq:delta2]).

[figure f:fock1: (left) the indicator for the first three fock states as a function of the noise parameter; (right) the maximum value of the noise parameter such that the bound ([eq:boundmix]) is violated, as a function of the fock number.]

the behavior of the indicator as a function of the loss parameter for the first three fock states is plotted in fig. [f:fock1] (left panel). one can observe that the criterion works very well for the fock state , which is proven to be quantum non-gaussian for all values of . for the fock states and , a non-monotonic behavior of the indicator is observed as a function of the loss parameter; still, negative values of the non-gaussianity indicator are observed in the region of interest. however, the maximum value of the noise parameter decreases monotonically as a function of , as shown in fig. [f:fock1] (right panel). by increasing the fock number , it settles to the asymptotic value . as one would expect by looking at the bound in eq. ([eq:boundmix]), for high values of the average photon number the criterion becomes practically equivalent to the detection of negativity of the wigner function, and thus the maximum noise corresponds to .

a _photon-added coherent_ (pac) state is defined as . the operation of photon addition has been implemented in different contexts, and in particular the non-gaussianity and non-classicality of pac states have been investigated in .

[figure f:pacs1: (left) the indicator for pac states as a function of and for different values of ; (right) the maximum value of the noise parameter such that the bound ([eq:boundmix]) is violated for the state , as a function of the parameter .]

for a _photon-subtracted squeezed_ (pss) state, obtained by applying photon subtraction to a squeezed vacuum state, the non-gaussianity indicator can be evaluated accordingly. its behavior as a function of and for different values of the squeezing factor is plotted in fig. [f:psss1] (left).

[figure f:psss1: (left) the indicator for pss states as a function of and for different values of ; (right) the maximum value of the noise parameter such that the bound ([eq:boundmix]) is violated for the state , as a function of the initial squeezing parameter.]

in the right panel of fig. [f:psss1] we plot the maximum noise parameter as a function of the squeezing parameter , observing the same behavior obtained for fock and pac states: the value of decreases monotonically with the energy of the state, approaching the asymptotic value . we will now show how the second criterion, which is based on the violation of the inequality ([eq:bound2mix]), can be exploited in order to improve the results shown in the previous section. since in this case one can optimize the procedure over an additional gaussian channel, in general one has . the simplest gaussian maps that one can consider are displacement and squeezing operations; correspondingly, we are going to seek violation of the bounds described by eqs. ([eq:bounddisp]) and ([eq:boundsq]).
as anticipated in sec. [s:criteria], these new criteria are useful for states which are not phase invariant: the paradigmatic examples are states _displaced_ in phase space, that is, having the minimum of the wigner function outside the origin, or states that exhibit squeezing in a certain quadrature. due to this fact, the bounds based on eqs. ([eq:bounddisp]) and ([eq:boundsq]) cannot help in optimizing the results we obtained for fock states. we will then focus on the other classes of states we introduced, that is, pac and pss states.

[figure f:wigpac0: wigner function of the pac state for ; the minimum of the wigner function is not at the origin of phase space, and the state has non-zero first moments.]

by looking at the pac state wigner function in fig. [f:wigpac0], one observes that its minimum is not at the origin of the phase space. moreover, these states have non-zero first moments, implying that one can decrease their average photon number by applying an appropriate displacement. both observations suggest that it is possible to decrease the value of the quantum non-gaussianity indicator defined in eq. ([eq:delta2]) (see eq. ([eq:deltapac])) by means of a displacement operation. to evaluate the indicator according to eq. ([eq:deltapac]) one simply has to evaluate the wigner function of the state at a displaced point of phase space, i.e. , and its average photon number , where , and for , . our goal is then to minimize the indicator over the possible displacement parameters .

[figure f:pacs2: (left) the indicator as a function of the additional displacement parameter , for and different values of the initial parameter ; (right) the optimized non-gaussianity indicator as a function of and for different values of , with the displacement parameter chosen as in eq. ([eq:betaopt]).]

in fig. [f:pacs2] (left) we plot the indicator as a function of for different values of the coherent state parameter and for . we observe that, while for the bound is not always violated, it is possible to find values such that , and thus prove that the state is quantum non-gaussian. unfortunately the optimal value , which minimizes the indicator, cannot be obtained analytically. however, we observed that for large values of and for it can be approximated as in eq. ([eq:betaopt]). the behavior of the optimized indicator as a function of is shown in fig. [f:pacs2] (right), for different values of and fixing as in eq. ([eq:betaopt]). if we compare this with fig. [f:pacs1], not only do we observe an improvement in our capacity to witness quantum non-gaussianity for these states, but we also see that the indicator remains negative for all values of . indeed, numerical investigations seem to suggest that this holds for all the possible values of : we conjecture that any initial pac state remains quantum non-gaussian during the lossy evolution induced by eq. ( ), and that this feature can be captured by our second criterion.
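a numerically oriented reader may prefer to carry out the minimization over the displacement directly on a density matrix written in a truncated fock basis. the sketch below does this with numpy/scipy only, using the displaced-parity representation of the wigner function, W[rho](beta) = (2/pi) Tr[D(-beta) rho D(-beta)^dag Pi], consistent with the convention used throughout the text; the truncation dimension, the grid of (here real) displacements and the toy input state are arbitrary choices of this illustration, not values used in the paper.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(dim):
    # a|n> = sqrt(n)|n-1> in a dim-dimensional truncated Fock basis
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def displacement(beta, dim):
    a = annihilation(dim)
    return expm(beta * a.conj().T - np.conj(beta) * a)

def delta_displaced(rho, beta):
    """Indicator of eq. ([eq:delta2]) for the Gaussian map 'displace by -beta':
    (2/pi)*[<parity> - exp(-2 nbar (1+nbar))] evaluated on D(-beta) rho D(-beta)^dag."""
    dim = rho.shape[0]
    n_op = np.diag(np.arange(dim, dtype=float))
    parity = np.diag((-1.0) ** np.arange(dim))
    d = displacement(-beta, dim)
    rho_b = d @ rho @ d.conj().T
    w0 = (2.0 / np.pi) * np.real(np.trace(rho_b @ parity))
    nbar = np.real(np.trace(rho_b @ n_op))
    return w0 - (2.0 / np.pi) * np.exp(-2.0 * nbar * (1.0 + nbar))

def optimized_delta(rho, betas):
    """Minimum of the indicator over a grid of displacement parameters."""
    return min(delta_displaced(rho, b) for b in betas)

# toy usage: a lossy single-photon state (diagonal in the Fock basis);
# being phase invariant, its optimum sits at beta = 0, as expected
dim, eta = 20, 0.4
rho = np.zeros((dim, dim)); rho[0, 0], rho[1, 1] = 1.0 - eta, eta
print(optimized_delta(rho, np.linspace(-1.0, 1.0, 81)))
```

for a pac state after losses one would first build its fock-basis density matrix (for instance with a dedicated toolbox such as qutip) and then scan the displacement around the value suggested by eq. ([eq:betaopt]); the same scheme also covers the squeezing-optimized bound of eq. ([eq:boundsq]), simply by replacing the displacement operator with expm((s/2)*(a @ a - a.conj().T @ a.conj().T)).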
however, as one can observe from fig. [f:pacs2] (right), the non-gaussianity indicator approaches zero quite fast with both and , and thus it may be more challenging to detect its negativity in an actual experiment for states with a high average photon number and for large losses.

[figure f:wigpss0: wigner function of the pss state for ; the minimum of the wigner function is at the origin of phase space, and the state exhibits squeezing in one of the quadratures.]

just as pac states inherit a displacement in phase space from the initial coherent states, pss states inherit squeezing, as we can observe by looking at the wigner function in fig. [f:wigpss0]. this motivates us to make use of corollary [c:due], and thus optimize the non-gaussianity indicator in eq. ([eq:delta2]) as in eq. ([eq:deltapss]), that is, by considering an additional squeezing operation on the evolved state. as pointed out in the proof of inequality ([eq:boundsq]), the wigner function at the origin is invariant under squeezing operations. hence, the optimal value that minimizes the indicator coincides with the value which minimizes the average photon number of the squeezed state, , where , and for an initial pss state (with a real squeezing parameter ), .

[figure f:psss2: (left) the indicator as a function of the additional squeezing parameter , for and different values of the initial parameter ; (right) the optimized non-gaussianity indicator as a function of and for different values of , with the squeezing parameter given by .]

the behavior of the indicator as a function of the additional squeezing is plotted in fig. [f:psss2]. as we observed in the previous case, the optimized criterion works in cases where the bound ([eq:boundmix]) (corresponding to ) was not violated. moreover, the optimal squeezing value can be evaluated analytically, yielding .

[figure f:psss3: maximum tolerable noise values and , obtained respectively by means of the optimized and non-optimized criteria, for the state , as a function of the initial squeezing parameter .]

the optimized quantum non-gaussianity indicator is plotted in fig. [f:psss2] (right), where we observe that negative values are obtained for large values of losses. however, while for pac states we had evidence that the maximum value of losses is for all possible initial states, this is no longer true for pss states.
the behavior of as a function of is plotted in fig .[ f : psss3 ] , together with the previously obtained .we can notice the big improvement in our detection capability , obtained by exploiting corollary [ c : due ] ; however for large values of we still observe that decreases towards the same limiting value .moreover , as it can be observed in fig .[ f : psss2 ] ( right ) , the indicator approaches zero by increasing and , and thus also in this case it can become challenging to witness quantum non - gaussianity with our methods , in experiments with large values of the initial squeezing and large losses .we have presented a set of criteria to detect quantum non - gaussian states , that is , states that can not be expressed as mixtures of gaussian states .the first criterion is based on seeking the violation of a lower bound for the values that the wigner function can take at the origin , depending only on the average photon number of the state . to verify the effectiveness of the criterion , we considered the evolution of non - gaussian pure states in a lossy gaussian channel , looking for the maximum value of the noise where such bound is violated .we observed that the criterion works well , detecting quantum non - gaussianity in the non - trivial region of the noise parameters where no negativity of the wigner function can be observed .we have also shown how the criterion can be generalized and improved , by optimising over additional gaussian operations applied to the states of interest .notice that in a possible experimental implementation one does not need to perform such additional gaussian operations , such as displacement or squeezing , in the actual experiment .indeed , it suffices to use the data obtained on the state itself , and then apply suitable post - processing to evaluate the optimized non - gaussianity indicator .our criterion , which expresses a sufficient condition for quantum non - gaussianity , shares some similarities with hudson s theorem for pure gaussian states , in the sense that it establishes a relationship between the concept of gaussianity ( combined with classical mixing ) , and the possible values that a wigner function can take .the successful implementation of our criteria corresponds to the measurement of the wigner function at the origin of the phase space which , in turn , corresponds to the ( photon ) parity of the state under investigation .this may be obtained with current technology by direct parity measurement , or by reconstruction of the photon distribution either by tomographic reconstruction or by the on / off method . when the criterion is satisfied, one can confirm that the quantum state at disposal has been generated by means of a highly non - linear process , even in the cases where , perhaps due to inefficient detectors or other types of noise , negativity of the wigner function can not be detected .mgg , tt and msk thank radim filip for discussions .mgap thanks vittorio giovannetti for discussions .mgg acknowledges support from uk epsrc ( ep / i026436/1 ) .t.t . and m.s.k .acknowledge support from the nprp 4- 426 554 - 1 - 084 from qatar national research fund .so and mgap acknowledge support from miur ( firb `` lichis '' no .rbfr10yq3h ) .99 e. p. wigner , phys . rev . * 40 * , 749 ( 1932 ) r. glauber , phys . rev . * 131 * , 2766 ( 1963 ) .e. c. g. sudarshan , phys .* 10 * , 277 ( 1963 ) . c. t. lee , phys .a * 44 * , r2775 ( 1991 ) . c. t. lee , phys .a * 45 * , 6586 ( 1992 ) .arvind , n. mukunda , r. 
simon , phys.rev .a * 56 * , 5042 ( 1997 ) .arvind , n. mukunda , r. simon , j. phys a * 31 * : 565 ( 1998 ) .m. a. marchiolli , v. s. bagnato , y. guimaraes , b. baseia , phys .a * 279 * , 294 ( 2001 ) .m. g. a. paris , phys .a * 289 * , 167 ( 2001 ) .dodonov , j. opt .b * 4 * , r1 , ( 2002 ) .a. kenfack , k. zyczkowski , j. opt .b * 6 * , 396 ( 2004 ). e. v. shchukin , w. vogel , phys . rev . a * 72* , 043808 ( 2005 ) .t. kiesel , w. vogel , v. parigi , a. zavatta , m. bellini , phys . rev . a * 78 * , 021804 r ( 2008 ) . w. vogel , phys .lett . * 100 * , 013605 ( 2008 ) .r. simon , phys .. lett . * 84 * , 2726 ( 2000 ) .lu - ming duan , g. giedke , j.i .cirac , p. zoller , phys .* 84 * , 2722 ( 2000 ) .p. marian , t. a. marian , h. scutaru , phys .lett . * 88 * , 153601 ( 2002 ) .p. giorda , m. g. a. paris , phys .lett . * 105 * , 020503 ( 2010 ) ; g. adesso , a. datta phys . rev . lett . * 105 * , 030501 ( 2010 ) .a. ferraro , m. g. a. paris , phys .lett * 108 * , 260403 ( 2012 ) . c. gehrke , j. sperling , w. vogel , phys .a * 86 * , 052118 ( 2012 ) .d. buono , g. nocerino , v. dauria , a. porzio , s. olivares , m.g.a .paris , j. opt .b * 27 * , 110 ( 2010 ) ; d. buono , g. nocerino , a. porzio , s. solimeno .phys . rev .a * 86 * , 042308 ( 2012 ) .m. g. genoni , m. g. a. paris and k. banaszek , phys .a * 76 * , 042327 ( 2007 ) .m. g. genoni , m. g. a. paris and k. banaszek , phys .a * 78 * , 060303 ( 2008 ) .m. g. genoni and m. g. a. paris , phys .a * 82 * , 052341 ( 2010 ) .m. barbieri , n. spagnolo , m. g. genoni , f. ferreyrol , r. blandino , m. g. a. paris , p. grangier and r. tualle - brouri , phys .a * 82 * , 063833 ( 2010 ) .r. filip , l. mista , jr .lett . * 106 * , 200401 ( 2011 ) .m. jeek , i. straka , m. miuda , m. duek , j. fiuraek , r. filip , phys .107 * , 213602 ( 2011 ) .m. jeek , a. tipsmark , r. dong , j. fiuraek , l. mita , jr ., r. filip , u. l. andersen , phys . rev .a * 86 * , 043813 ( 2012 ) .a. predojevic , m. jezek , t. huber , h. jayakumar , t. kauten , g. s. solomon , r. filip and g. weihs , arxiv:1211.2993 [ quant - ph ] .r. filip , phys .a * 87 * , 042308 ( 2011 ) . v. dauria , c. de lisio , a. porzio , s. solimeno , j. anwar , m. g. a. paris , phys . rev . a * 81 * , 033846 ( 2010 ) .r. l. hudson , rep .* 6 * , 249 ( 1974 ) .f. soto and p. claverie , j. math .* 24 * , 97 ( 1983 ) .a. mandilara , e. karpov and n. j. cerf , phys .a * 79 * , 062302 ( 2009 ) .u. m. titulaer and r. j. glauber , phys . rev . *140 * , 676 ( 1963 ) .a. mari and j. eisert , physlett . * 109 * , 230503 ( 2012 ) .v. veitch , n. wiebe , c. ferrie and j. emerson , new j. phys .* 15 * , 013037 ( 2013 ) .j. heersink , c. marquardt , r. dong , r. filip , s. lorenz , g. leuchs and u. l. andersen , phys .lett . * 96 * , 253601 ( 2006 ) .k. e. cahill and r. j. glauber , phys . rev .* 177 * , 1882 ( 1969 ) .j. eisert and m. m. wolf , _ quantum information with continuous variables of atoms and light _ , edited by n. j. cert , g. leuchs and e. s. polzik ( imperial college press , london , 2007 ) , pp .being the map divisible , then for all , a parameter exists , such that . as a consequence ,if a criterion is violated for the quantum state , the quantum state is quantum non - gaussian for all .a. zavatta , s. viciani and m. bellini , phys .a * 70 * , 053821 ( 2004 ) .a. zavatta , v. parigi and m. bellini , phys . rev .a * 75 * , 052106 ( 2007 ) .v. parigi , a. zavatta , m. s. kim and m. bellini , science * 317 * , 1890 ( 2007 ) .a. zavatta , v. parigi , m. s. kim , h. jeong and m. 
bellini , phys .* 103 * , 140406 ( 2009 ) .m. dakna , t. anhut , t. opatrny , l. knoll and d .-welsch , phys .a * 55 * , 3184 ( 1997 ) ; m. s. kim , e. park , p. l. knight and h. jeong , physa * 71 * , 043805 ( 2005 ) ; s. olivares and m. g. a. paris , j. opt b * 7 * , 616 ( 2005 ) .a. ourjoumtsev , r. tualle - brouri , j. laurat and p. grangier , science * 312 * , 83 ( 2006 ) .k. wakui , h. takahashi , a. furusa and m. sasaki , opt .express * 15 * , 3568 ( 2007 ) .j. s. neergaard - nielsen , b. m. nielsen , c. hettich , k. molmer and e. s. polzik , phys .97 * , 083604 ( 2006 ) .t. gerrits , s. glancy , t. s. clement , b. calkins , a. e. lita , a. j. miller , a. l. midgall , s. w. nam , r. p. mirin and e. knill , phys .a * 82 * , 031802 ( 2010 ) .s. haroche , m. brune and j. m. raimond , j. mod . opt . * 54 * , 2101 ( 2007 ) .s. wallentowitz , w. vogel , phys .a * 53 * , 4528 ( 1996 ) .k. banaszek , k. wodkiewicz , phys .* 76 * , 4344 ( 1996 ) .t. opatrny and d. g. welsch , phys .a * 55 * , 1462 ( 1997 ) ; t. opatrny , d. g. welsch , and w. vogel , phys . rev .a * 56 * , 1788 ( 1997 ) .m. munroe , d. boggavarapu , m.e .anderson , and m. g. raymer , phys .a * 52 * , r924 ( 1995 ) ; y. zhang , k. kasai , and m. watanabe , opt . lett . *27 * , 1244 ( 2002 ) .m. raymer and m. beck , in _ quantum states estimation _ , lect . not .* 649 * ( springer , berlin - heidelberg , 2004 ) .d. mogilevtsev , opt . commun . * 156 * , 307 ( 1998 ) ; acta phys .slovaca * 49 * , 743 ( 1999 ) .a. r. rossi , s. olivares , m. g.a .paris , phys .a * 70 * , 055801 ( 2004 ) ; a. r. rossi , m. g. a. paris , eur .. j. d * 32 * , 223 ( 2005 ) .g. zambra , a. andreoni , m. bondani , m. gramegna , m. genovese , g. brida , a. rossi and m. g. a. paris , phys .* 95 * , 063602 ( 2005 ) ; g. zambra , m. g. a. paris , phys . rev .a * 74 * , 063830 ( 2006 ) .a. allevi , a. andreoni , m. bondani , g. brida , m. genovese , m. gramegna , p. traina , s. olivares , m. g. a. paris , g. zambra , phys .a * 80 * , 022114 ( 2009 ) .l. a. jiang , e. a. dauler , j. t. chang , phys .rev . a * 75 * 062325 ( 2007 ) .a. divochiy et al . , nat* 2 * , 302 ( 2008 ) .d. achilles et al ., opt . lett . * 28 * , 2387 ( 2003 ) , m. j. fitch , b. c. jacobs , t. b. pittman , j. d. franson , phys . rev .a * 68 * , 043814 ( 2003 ) .g. zambra , m. bondani , a. s. spinelli , a. andreoni , rev .instrum . * 75 * , 2762 ( 2004 ) .m. ramilli , a. allevi , a. chmill , m. bondani , m. caccia , a. andreoni , j. opt .b , * 27 * , 852 ( 2010 ) .j. kim , s. takeuchi , y. yamamoto , h.h .hogue , appl .lett . * 74 * , 902 ( 1999 ) .
we introduce a family of criteria to detect quantum non - gaussian states of a harmonic oscillator , that is , quantum states that can not be expressed as a convex mixture of gaussian states . in particular we prove that , for convex mixtures of gaussian states , the value of the wigner function at the origin of phase space is bounded from below by a non - zero positive quantity , which is a function only of the average number of excitations ( photons ) of the state . as a consequence , if this bound is violated then the quantum state must be quantum non - gaussian . we show that this criterion can be further generalized by considering additional gaussian operations on the state under examination . we then apply these criteria to various non - gaussian states evolving in a noisy gaussian channel , proving that the bounds are violated for high values of losses , and thus also for states characterized by a positive wigner function .
the study and understanding of heterogeneously catalyzed reactions is a field that contains an enormous wealth of still unclear or even completely unexplained phenomena .this scenario leads to an exciting and challenging domain for investigation and fundamental research .furthermore , the occurrence of many complex and fascinating physical and chemical phenomena , such as pattern formation and self - organization , regular and irregular kinetic oscillations , propagation and interference of chemical waves and spatio - temporal structures , the transition into chaotic behaviour , fluctuation - induced transitions , irreversible phase transitions ( ipt s ) , etc , has attracted the attention of many scientists .in addition to the basic interest , heterogeneous catalysis is a field of central importance for numerous industrial ( e.g. synthesis of ammonia , sulfuric and nitric acids , cracking and reforming processes of hydrocarbons , etc . ) and practical ( e.g. catalytic control of environmental pollution such as the emission of , , , , etc . ) applications .furthermore , information technology , material science , corrosion , energy conversion , ecology and environmental sciences , etc .are some fields whose rapid growth is somehow based on the recent progress in the study of heterogeneous reactions occurring on surfaces and interfaces .it should be noticed that recent developments of experimental techniques such as scanning tunneling microscopy ( stm ) , low energy electron diffraction ( leed ) , high resolution electron energy loss spectroscopy ( hreels ) , ultraviolet photoelectric spectroscopy ( ups ) , photoelectron emission microscopy ( peem ) , etc . , just to quote few of them , allows the scientists to gather detailed physical and chemical information about surfaces , adsorbates and reaction products . within this context, the stm based measurement of the reaction rate parameters at a microscopic level for the catalytic oxidation of is a clear example of the progress recently achieved .remarkably , the measured parameters agree very well with those previously obtained by means of macroscopic measurements .also , all elementary steps of a chemical reaction have been induced on individual molecules in a controlled step - by - step manner with the aid of stm techniques .furthermore , very recently , the oxidation of on was studied by means of stm techniques inside a high - pressure flow reactor , i.e. under semirealistic conditions as compared with those prevailing in the actual catalytic process .it is interesting to notice that a new reaction mechanism , not observed when the reaction takes place under low pressure , has been identified . due to this stimulating generation of accurate experimental information ,the study of catalyzed reaction systems is certainly a challenging scientific field for the development and application of analytical methods , theories and numerical simulations . 
within this context ,the aim of this report is to review recent progress in the understanding of ipt s occurring in various lattice gas reaction systems ( lgrs ) .it should be noticed that lgrs models are crude approximations of the actual ( very complex ) catalytic processes .however , from the physicist s point of view , the lgrs approach is widely used because it is very useful to gain insight into far - from equilibrium processes .in fact , due to the lack of a well - established theoretical framework , unlike the case of their equilibrium counterpart , the progress of the statistical mechanics of systems out of equilibrium relies , up to some extent , on the study and understanding of simple models .so , in the case of lgrs , one renounces to a detailed description of the catalyzed reaction and , instead , the interest is focused on the irreversible critical behaviour of archetype models inspired in the actual reaction systems . keeping these concepts in mind, the review will be devoted to survey the theoretical development of the field during the last decade and to discuss promising areas for further research as well as open questions . in most cases ,heterogeneously catalyzed reactions proceed according to well - established elementary steps .the first one comprises trapping , sticking and adsorption .gaseous reactant atoms and/or molecules are trapped by the potential well of the surface .this rather weak interaction is commonly considered as a physisorbed precursor state .subsequently , species are promoted to the chemisorbed state where a much stronger interaction potential is activated . particularly important from the catalytic point of viewis that molecules frequently undergo dissociation , e.g. , , , etc , which is a process that frees highly reactive atomic species on the surface .sticking and adsorption processes depend on the surface structure ( both geometric and electronic ) . in some cases , chemisorption of small atoms and molecules may induce the reconstruction of the surface .this effect , coupled to structure dependent sticking coefficients , may lead to the occurrence of collective phenomena such as oscillations .after adsorption , species may diffuse on the surface or , eventually , become absorbed in the bulk . due to collisions between adsorbed species of different kindthe actual reaction step can occur .of course , this step requires that energetic and spatial constraints be fulfilled .the result of the reaction step is the formation of a product molecule. this product can be either an intermediate of the reaction or its final output .the final step of the whole reaction process is the desorption of the products .this step is essential not only for the practical purpose of collecting and storing the desired output , but also for the regeneration of the catalytic active sites of the surface .most reactions have at least one rate limiting step , which frequently makes the reaction prohibitively slow for practical purposes when , for instance , it is intended in an homogeneous ( gas or fluid ) medium .the role of a good solid - state catalyst is to obtain acceptable output rate of the products .reactions occurring in this way are commonly known as heterogeneously catalyzed . at this stage and in order to illustrate the above - mentioned elementary steps , it is useful to point our attention to a specific reaction system . 
For this purpose, the catalytic oxidation of carbon monoxide, namely $2CO + O_2 \rightarrow 2CO_2$, which is likely the most studied reaction system, has been selected. It is well known that this reaction proceeds according to the Langmuir-Hinshelwood mechanism, i.e. with both reactants adsorbed on the catalyst's surface,

$CO(g) + S \rightarrow CO(a)$ ,   (1)

$O_2(g) + 2S \rightarrow 2\,O(a)$ ,   (2)

$CO(a) + O(a) \rightarrow CO_2(g) + 2S$ ,   (3)

where $S$ is an empty site on the surface, while $(a)$ and $(g)$ refer to the adsorbed and gas phases, respectively. The reaction takes place with the catalyst in contact with a reservoir of $CO$ and $O_2$ whose partial pressures are $P_{CO}$ and $P_{O_2}$, respectively. Equation (1) describes the irreversible molecular adsorption of $CO$ on a single site of the catalyst's surface. It is known that, under suitable temperature and pressure reaction conditions, adsorbed $CO$ molecules diffuse on the surface. Furthermore, there is a small probability of $CO$ desorption, which increases as the temperature is raised. Equation (2) corresponds to the irreversible adsorption of $O_2$ molecules; it involves the dissociation of the molecule, and the resulting $O$ atoms occupy two sites of the catalytic surface. Under reaction conditions both the diffusion and the desorption of oxygen are negligible. Due to the high stability of the $O_2$ molecule, the reaction does not proceed in the homogeneous phase, where dissociation is lacking. So, equation (2) dramatically shows the role of the catalyst, which makes the rate-limiting step of the reaction feasible. Finally, equation (3) describes the formation of the product ($CO_2$), which desorbs from the catalyst's surface. This final step is essential not only for collecting the product but also for the regeneration of the catalytically active surface. Assuming irreversible adsorption-reaction steps, as in equations (1-3), it may be expected that in the limit of large $P_{CO}$ and small $P_{O_2}$ (small $P_{CO}$ and large $P_{O_2}$) the surface of the catalyst would become saturated by $CO$ ($O$) species and the reaction would stop. In fact, the surface of the catalyst fully covered by a single type of species, where further adsorption of the other species is no longer possible, corresponds to an inactive state of the system. This state is known as 'poisoned', in the sense that the adsorbed species on the surface are the poison that causes the reaction to stop. Physicists usually refer to such a state (or configuration) as 'absorbing', because a system can be trapped by it forever, with no possibility of escape. These concepts are clearly illustrated in figure 1, which shows plots of the rate of $CO_2$ production ($R_{CO_2}$) and the surface coverages of $CO$ and $O$ ($\theta_{CO}$ and $\theta_{O}$, respectively) versus the normalized $CO$ partial pressure ($y_{CO}$), as obtained using the Ziff-Gulari-Barshad (ZGB) lattice gas reaction model. Details of the ZGB model will be discussed extensively below.
For $y_{CO}$ smaller than a first critical value $y_1$ the surface becomes irreversibly poisoned by $O$ species, with $\theta_{O} = 1$, $\theta_{CO} = 0$ and $R_{CO_2} = 0$. In contrast, for $y_{CO}$ larger than a second value $y_2 > y_1$ the catalyst is irreversibly poisoned by $CO$ molecules, with $\theta_{CO} = 1$, $\theta_{O} = 0$ and $R_{CO_2} = 0$. These poisoned states are absorbing and the system cannot escape from them. However, as shown in figure 1, between these absorbing states there is a reaction window, namely $y_1 < y_{CO} < y_2$, within which a steady state with sustained production of $CO_2$ is observed. It is worth mentioning that, starting from the reactive regime and approaching the oxygen absorbing state, all quantities of interest change smoothly until they adopt the values corresponding to the absorbing state. This behaviour typically corresponds to a second-order irreversible phase transition (IPT). The transition is irreversible because, when the control parameter ($y_{CO}$ in this example) is tuned into the absorbing state, the system becomes trapped there forever. This behaviour is in contrast to that observed for second-order reversible phase transitions, such as the order-disorder transition of the Ising ferromagnet in the absence of an external magnetic field, where it is possible to change reversibly from one phase to the other simply by tuning the control parameter. For second-order IPT's, as in the case of their reversible counterparts, it is possible to define an order parameter, which for the former is given by the concentration of the minority species ($\theta_{CO}$, in the case of the second-order IPT of the catalytic oxidation of $CO$). Furthermore, it is known that this order parameter vanishes according to a power law upon approaching the critical point, so that $\theta_{CO} \propto (y_{CO} - y_1)^{\beta}$, where $\beta$ is the order parameter critical exponent and $y_1$ is the critical point. Remarkably, the behaviour of the system is quite different upon approaching the $CO$ absorbing state from the reactive regime (see figure 1). In this case, all quantities of interest exhibit a marked discontinuity close to $y_2$. This is a typical first-order IPT, and $y_2$ is the coexistence point. Experimental results for the catalytic oxidation of carbon monoxide on single-crystal catalyst surfaces are in qualitative agreement with the simulation results of the ZGB model, as follows from the comparison of figures 1 and 2. A remarkable agreement is the (almost) linear increase in the reaction rate observed when the $CO$ pressure is raised, followed by the abrupt drop of the reactivity when a certain 'critical' pressure is reached.
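As a concrete illustration of how the exponent $\beta$ is typically extracted from stationary simulation data, the following minimal Python sketch performs a log-log fit of the minority-species coverage against the distance to an assumed critical point. The helper name and the numbers in the usage comment are purely illustrative assumptions of this sketch, not measured values from any particular study.

import numpy as np

def beta_estimate(y_values, coverages, y_c):
    """Effective order-parameter exponent from a log-log fit of the minority-species
    coverage versus the distance (y - y_c) to an assumed critical point y_c."""
    y = np.asarray(y_values, dtype=float)
    phi = np.asarray(coverages, dtype=float)
    mask = phi > 0
    slope, _ = np.polyfit(np.log(y[mask] - y_c), np.log(phi[mask]), 1)
    return slope

# Illustrative self-check with synthetic data obeying phi ~ (y - y_c)**0.58
# (0.58 is roughly the order-parameter exponent of directed percolation in 2+1 dimensions):
# y_c = 0.40                                        # arbitrary illustrative value
# y = np.linspace(y_c + 0.005, y_c + 0.05, 10)
# print(beta_estimate(y, (y - y_c) ** 0.58, y_c))   # recovers ~0.58

In practice the quality of such a fit depends strongly on how well the critical point itself is known, which is why the dynamic methods discussed below are usually combined with this kind of stationary analysis.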
in spite ofthe similarities observed , two essential differences are worth discussing : i ) the oxygen - poisoned phase exhibited by the zgb model within the low - pressure regime is not observed experimentally .therefore , one lacks experimental evidence of a second - order ipt .ii ) the phase exhibiting low reactivity found experimentally resembles the state predicted by the zgb model .however , in the experiments the nonvanishing probability prevents the system from entering into a truly absorbing state and the abrupt , ` first - order like ' transition , shown in figure 2 is actually reversible .of course , these and other disagreements are not surprising since the lattice gas reaction model , with a single parameter , is a simplified approach to the actual catalytic reaction that is far more complex .1.5 true cm at this stage it is worth warning the reader that due to the non - hamiltonian nature of the zgb model , as well as all the lattice gas reaction systems that will be treated hereafter , one lacks a thermodynamic quantity , such as the free energy , in order to draw a sharper definition of the order of the transitions .even more , in contrast to their reversible counterpart , the field of ipt s lacks a well - established theoretical framework . also , it is worth mentioning that systems with absorbing states are clearly out of equilibrium .in fact , the transition rate out of the absorbing state is zero , such as those configurations can not fulfill standard detailed balance requirements .therefore , the study of the critical behaviour of these systems must involve ipt s between reactive and absorbing phases . due to these circumstances ,the study of ipt s represents a theoretical challenge within the more general goal of modern physics given by the development of the statistical mechanics of nonequilibrium systems .it should be recognized that the study of absorbing states and ipt s is almost ignored in the courses of statistical mechanics , in spite of the fact that they abound in physics , chemistry , biology and other disciplines including sociology .some typical examples include the spreading of epidemics through a population , the propagation of rumors in a society , the spreading of fire through a forest , coexistence with extinction transitions in prey - predator systems and , of course , catalytic and autocatalytic chemical reactions . 
from a more general point of view, absorbing states are expected to occur in situations where some quantity of interest can proliferate or die out ( e.g the fire in the forest ) , without any possibility of spontaneous generation ( e.g due to the rays of an electrical storm ) .the underlying physics involves the competition between proliferation and death of a certain quantity .proliferation is associated with the active ( reactive ) phase of the system , while inactivity ( poisoning ) characterizes the absorbing phase .there are currently three basic approaches for the theoretical modeling of surface reactions : i ) _ ab - initio _ calculations , ii ) analytic approaches and iii ) stochastic models .the _ ab - initio _ method is usually implemented via density functional theory approaches and due to the huge computational requirements , the calculations are restricted to few atomic layers of the catalysts and a very reduced catalyst s surface ( of the order of ) .this approach frequently allows the study of a single adsorbed species or a reactive pair only .consequently , the study of macroscopic surface systems incorporating statistical effects , as requested for the case of critical phenomena , is impossible at present .so , this approach will not be further discussed here . on the other hand ,stochastic models can account for fluctuations in large systems .so , they are used to deal with a large number of collective phenomena occurring in reaction systems that are not only restricted to ipt s , but also involve spatio - temporal structures , chemical waves , kinetic oscillations , the transition to chaos , etc .. broad experience gained in the treatment of equilibrium systems has shown that monte carlo simulations and renormalization group ( rg ) analysis of classical field - theoretical models are among the most useful tools for the treatment of phase transitions and critical phenomena .a much more reduced experience , obtained during the last decade , indicates that , after some adaptation , similar techniques can also been employed to deal with ipt s .monte carlo ( mc ) simulations of heterogeneously catalyzed reactions can be considered the computational implementation of microscopic reaction mechanisms .in fact , such mechanisms are the ` rules ' of the computer algorithm .of course , the operation of the rules may lead to the development of correlations , while stochastic fluctuations are inherent to the method . for the practical implementation of the mc method , the catalyst s surface is replaced by a lattice .therefore , lattice gas reaction models are actually considered .for this reason , the method often faces the limitations imposed by the size of the lattices used . in some particular cases ,e.g. when studying second - order phase transitions , this shortcoming can be overcome appealing to the well - established finite - size - scaling theory . also , very often one can develop extrapolation methods that give reliable results for the thermodynamic limit , i.e. infinite lattices .another limitation arises when the diffusion rate of the adsorbed species is very large . 
In this case, most of the computational time has to be devoted to the diffusion process, while the quantity of interest, namely the number of reaction events, becomes negligible. This drawback may be overcome by implementing a mixed treatment: a mean-field description of the diffusion and a MC simulation of the reaction. This approach may become an interesting and powerful tool in the near future. MC simulations of dynamic and kinetic processes are often hindered by the fact that the Monte Carlo time, usually measured in Monte Carlo time steps, is only loosely related to the actual physical time. So, direct comparison with experiments becomes difficult. However, very recently a more sophisticated implementation of the MC method has been envisioned, namely the dynamic Monte Carlo (DMC) approach, which incorporates the actual time dependence of the processes and thereby allows direct comparison with experiments. Further developments and applications of the DMC method are a promising field of research. Within this context, the following subsections describe different MC approaches suitable for the study of IPT's in reaction systems, namely the standard ensemble, the constant coverage ensemble and the epidemic method. Furthermore, the standard finite-size scaling theory adapted to the case of second-order IPT's, and its application to MC data, are also discussed. In the standard ensemble the catalyst is assumed to be in contact with an infinitely large reservoir containing the reactants in the gas phase. Adsorption events are treated stochastically, neglecting energetic interactions. The reaction between reactants takes place on the surface of the catalyst, i.e. the so-called Langmuir-Hinshelwood mechanism. After reaction, the product is removed from the surface and its partial pressure in the gas phase is neglected, so that readsorption of the product is not considered. In order to further illustrate the practical implementation of the standard ensemble, let us describe the simulation method for the lattice gas reaction version of the catalytic oxidation of $CO$ (equations (1-3)) according to the ZGB model on the two-dimensional square lattice. The Monte Carlo algorithm is as follows: i) $CO$ or $O_2$ molecules are selected at random with relative probabilities $y_{CO}$ and $y_{O_2}$, respectively. These probabilities are the relative impingement rates of both species, which are proportional to their partial pressures in the gas phase in contact with the catalyst. Due to the normalization $y_{CO} + y_{O_2} = 1$, the model has a single parameter, i.e. $y_{CO}$. If the selected species is $CO$, one surface site is selected at random and, if that site is vacant, $CO$ is adsorbed on it according to equation (1). Otherwise, if that site is occupied, the trial ends and a new molecule is selected. If the selected species is $O_2$, a pair of nearest-neighbour sites is selected at random and the molecule is adsorbed on them, dissociating into two $O$ atoms, only if both sites are vacant, as requested by equation (2). ii) After each adsorption event, the nearest neighbours of the added species are examined in order to account for the reaction given by equation (3). If more than one $[CO(a) + O(a)]$ reacting pair is identified, a single one is selected at random and removed from the surface. The phase diagram of the ZGB model, as obtained using the standard ensemble, is shown in figure 1 and will be further discussed below. Monte Carlo simulations using the constant coverage (CC) ensemble, as early proposed by Ziff and Brosilow, are likely the most powerful method available for the study of first-order IPT's.
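Before turning to the CC method, the standard-ensemble rules just listed can be summarized in a short sketch. The following Python code is a minimal, illustrative implementation of those rules; the lattice size, the absence of a $CO_2$ production counter and the way both freshly adsorbed oxygen atoms are offered a reaction partner are simplifying choices of this sketch, not prescriptions taken from any particular study.

import random

# Minimal sketch of the standard-ensemble ZGB rules (illustrative only).
# Site states: 0 = empty, 1 = adsorbed CO, 2 = adsorbed O.  Periodic square lattice.
L = 64
EMPTY, CO, O = 0, 1, 2
lattice = [[EMPTY] * L for _ in range(L)]

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def react_around(i, j):
    """Step ii): look for CO(a)+O(a) pairs involving the freshly adsorbed species at
    (i, j); if one or more partners exist, a randomly chosen pair is removed (CO2 desorbs)."""
    partner = O if lattice[i][j] == CO else CO
    candidates = [(a, b) for (a, b) in neighbours(i, j) if lattice[a][b] == partner]
    if candidates:
        a, b = random.choice(candidates)
        lattice[i][j] = EMPTY
        lattice[a][b] = EMPTY

def mc_step(y_co):
    """One adsorption trial of step i): CO with probability y_co, O2 otherwise."""
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < y_co:                      # CO trial, equation (1)
        if lattice[i][j] == EMPTY:
            lattice[i][j] = CO
            react_around(i, j)                      # reaction check, equation (3)
    else:                                           # O2 trial, equation (2)
        a, b = random.choice(neighbours(i, j))
        if lattice[i][j] == EMPTY and lattice[a][b] == EMPTY:
            lattice[i][j], lattice[a][b] = O, O
            react_around(i, j)                      # each O atom may react, equation (3)
            if lattice[a][b] == O:
                react_around(a, b)

def coverages(y_co, sweeps=500):
    """Run `sweeps` Monte Carlo sweeps (L*L trials each) and return (theta_CO, theta_O)."""
    for _ in range(sweeps * L * L):
        mc_step(y_co)
    flat = [s for row in lattice for s in row]
    return flat.count(CO) / (L * L), flat.count(O) / (L * L)

Sweeping $y_{CO}$ across the reaction window with a routine of this kind reproduces, qualitatively, the structure of figure 1, although reliable estimates of the transition points of course require much larger lattices and extensive averaging.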
In order to implement the CC method, first a stationary configuration of the system has to be achieved using the standard ensemble algorithm, as described above. For this purpose, one selects a value of the parameter $y_{CO}$ close to the coexistence point but within the reactive regime. After achieving the stationary state, the simulation is switched to the CC method. Then, one has to keep the coverage of the majority species of the corresponding absorbing state as constant as possible around a prefixed value. This goal is achieved by regulating the number of adsorption attempts of the competing species. The ratio between the number of attempts made trying to adsorb a given species and the total number of attempts is the measure of the corresponding effective partial pressure. In this way, in the CC ensemble the coverage assumes the role of the control parameter. In the case of the ZGB model the relevant coverage is $\theta_{CO}$, since $CO$ is the majority species of the nearby absorbing state. So, in order to maintain the coverage close to the desired value $\theta_{d}$, oxygen ($CO$) adsorption attempts take place whenever $\theta_{CO} > \theta_{d}$ ($\theta_{CO} < \theta_{d}$). Let $N_{CO}$ and $N_{O_2}$ be the number of carbon monoxide and oxygen adsorption attempts, respectively. Then, the value of the 'pressure' of $CO$ in the CC ensemble ($y_{CO}^{CC}$) is determined simply as the ratio $y_{CO}^{CC} = N_{CO}/(N_{CO} + N_{O_2})$. Subsequently, the coverage is increased by a small amount, say $\Delta\theta$; a transient period is then disregarded in order to allow the proper relaxation of the system to the new coverage, and finally averages of $y_{CO}^{CC}$ are taken over a certain measurement time. In the original CC algorithm of Brosilow and Ziff the coverage was increased stepwise up to a certain maximum value ($\theta_{max}$) and the set of points ($\theta_{CO}$, $y_{CO}^{CC}$) was recorded. However, it has later been suggested that it is convenient to continue the simulations after reaching $\theta_{max}$ by *decreasing* the coverage stepwise until it returns to a value close to the starting point. In fact, this procedure allows one to investigate possible hysteretic effects at first-order IPT's, which are expected to be relevant, as follows from the experience gained studying their counterparts under equilibrium (reversible) conditions. Since true phase transitions only occur in the thermodynamic limit and computer simulations are always restricted to finite samples, numerical data are influenced by rounding and shifting effects around pseudo-critical points. Within this context the finite-size scaling theory has become a powerful tool for the analysis of numerical results, allowing the determination of critical points and the evaluation of critical exponents. All this experience gained in the study of reversible critical phenomena under equilibrium conditions can be employed in the field of second-order IPT's. In order to perform a finite-size scaling analysis close to a second-order IPT in a reaction system, it is convenient to take the concentration of the minority species on the surface, $\phi$, as the order parameter. By analogy to reversible second-order transitions one assumes that $\phi \propto |y - y_c|^{\beta}$, where $\beta$ is the order parameter critical exponent and $y_c$ is the critical value of the control parameter $y$. Approaching $y_c$ from the reactive regime, the characteristic length scale of the system, given by the correlation length $\xi_{\perp}$, diverges according to $\xi_{\perp} \propto |y - y_c|^{-\nu_{\perp}}$, where $\nu_{\perp}$ is the correlation length exponent in the space direction. For finite samples of linear size $L$ and close to the critical region, the concentration of the minority species will depend on the two competing lengths, namely $\xi_{\perp}$ and $L$, and the scaling hypothesis assumes $\phi(y, L) \propto L^{-\beta/\nu_{\perp}}\, \tilde{f}\!\left(|y - y_c|\, L^{1/\nu_{\perp}}\right)$, where the above power law for $\xi_{\perp}$ has been used and $\tilde{f}$ is a suitable scaling function.
Just at $y_c$ one has $\phi \propto L^{-\beta/\nu_{\perp}}$, while for $L \gg \xi_{\perp}$ the scaling function behaves as $\tilde{f}(x) \propto x^{\beta}$, such that the power law $\phi \propto |y - y_c|^{\beta}$ is recovered in the critical region. As anticipated by the adopted notation, second-order IPT's exhibit spatio-temporal anisotropy, so that the correlation length in the time direction is given by $\xi_{\parallel} \propto |y - y_c|^{-\nu_{\parallel}}$, where $\nu_{\parallel}$ is the corresponding correlation length exponent. Examples of the application of finite-size scaling to various reaction systems can be found in the literature. However, a more accurate method for the evaluation of critical exponents is to combine finite-size scaling of stationary quantities with dynamic scaling, as will be discussed just below. In order to obtain accurate values of the critical point and the critical exponents using the standard ensemble and applying finite-size scaling, it is necessary to perform MC simulations very close to the critical point. However, at criticality and close to it, due to the large fluctuations always present in second-order phase transitions, any finite system will ultimately become irreversibly trapped by the absorbing state. So, the measurements are actually performed within metastable states facing two competing constraints: on the one hand, the measurement time has to be long enough to allow the system to develop the corresponding correlations and, on the other hand, it must be short enough to prevent poisoning of the sample. This shortcoming can be somewhat healed by taking averages over many samples and disregarding those that have been trapped by the absorbing state. However, it is difficult to avoid those samples that are just evolving towards the absorbing state, unless fluctuations are suppressed by comparing two different samples evolving through phase space along very close trajectories. In view of these shortcomings, experience indicates that the best approach to second-order IPT's is to complement finite-size scaling of stationary quantities, as obtained with the standard ensemble, with epidemic simulations. The application of the epidemic method (EM) to the study of IPT's has become a useful tool for the evaluation of critical points and dynamic critical exponents and, eventually, for the identification of universality classes. The idea behind the EM is to initialize the simulation using a configuration very close to the absorbing state. Such a configuration can be achieved by generating the absorbing state using the standard ensemble and, subsequently, removing some species from the centre of the sample, where a small patch of empty sites is left. In the case of the ZGB model this can be done by filling the whole lattice with the poisoning species, except for a small patch. Patches consisting of 3-6 neighbouring empty sites are frequently employed, but it is known that the asymptotic results are independent of the size of the initial patch. Such a patch is the kernel of the subsequent epidemic. After the generation of the starting configuration, the time evolution of the system is followed using the standard ensemble as already described above. During this dynamic process the following quantities are recorded: (i) the average number of empty sites $N(t)$, (ii) the survival probability $P(t)$, which is the probability that the epidemic is still alive at time $t$, and (iii) the average mean square distance $R^2(t)$ over which the empty sites have spread.
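A possible organization of such an epidemic run, built on top of the standard-ensemble routine sketched earlier, is shown below. It assumes that L, lattice, EMPTY and mc_step() from the previous sketch are in scope (they are hypothetical helpers of these sketches, not library code), takes the poisoning species as an argument, measures distances from the centre of the patch while ignoring periodic images, and follows the averaging conventions described in the surrounding text.

# Epidemic-method driver (illustrative sketch). Assumes L, lattice, EMPTY and mc_step()
# from the standard-ensemble sketch above are already defined in this module.

def start_from_poisoned_patch(poison, patch=((0, 0), (1, 0), (0, 1))):
    """Fill the lattice with the poisoning species and open a small empty kernel
    near the centre; returns the centre coordinate."""
    for i in range(L):
        for j in range(L):
            lattice[i][j] = poison
    c = L // 2
    for (di, dj) in patch:
        lattice[(c + di) % L][(c + dj) % L] = EMPTY
    return c

def one_epidemic(y_co, poison, t_max):
    """Single epidemic: time series of the number of empty sites and of their mean
    square distance from the centre (periodic images ignored for simplicity)."""
    c = start_from_poisoned_patch(poison)
    n_t, r2_t = [], []
    for _ in range(t_max):
        for _ in range(L * L):                       # one Monte Carlo time unit
            mc_step(y_co)
        empty = [(i, j) for i in range(L) for j in range(L) if lattice[i][j] == EMPTY]
        n_t.append(len(empty))
        r2_t.append(sum((i - c) ** 2 + (j - c) ** 2 for (i, j) in empty) / len(empty)
                    if empty else 0.0)
        if not empty:                                # trapped again in the absorbing state
            break
    return n_t, r2_t

def averaged_epidemics(y_co, poison, runs=200, t_max=200):
    """N(t) averaged over all runs, survival probability P(t), and R2(t) averaged over
    surviving runs only."""
    N, R2sum, surv = [0.0] * t_max, [0.0] * t_max, [0] * t_max
    for _ in range(runs):
        n_t, r2_t = one_epidemic(y_co, poison, t_max)
        for t, n in enumerate(n_t):
            N[t] += n / runs                         # dead runs simply stop contributing
            if n > 0:
                surv[t] += 1
                R2sum[t] += r2_t[t]
    P = [s / runs for s in surv]
    R2 = [R2sum[t] / surv[t] if surv[t] else 0.0 for t in range(t_max)]
    return N, P, R2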
of course ,each single epidemic stops if the sample is trapped in the poisoned state with and , since these events may happen after very short times ( depending on the patch size ) , results have to be averaged over many different epidemics. it should be noticed that ( ) is averaged over all ( surviving ) epidemics .if the epidemic is performed just at critically , a power - law behaviour ( scaling invariance ) can be assumed and the following anstze are expected to hold , and where , and are dynamic critical exponents .thus , at the critical point log - log plots of , and will asymptotically show a straight line behaviour , while off - critical points will exhibit curvature .this behaviour allows the determination of the critical point and from the slopes of the plots the critical exponents can also be evaluated quite accurately .using scaling arguments , it has been shown that the following relationship holds , allowing the evaluation of one exponent as a function of the other two .the validity of equations ( [ nu ] ) , ( [ so ] ) and ( [ r2 ] ) for second - order ipt s is very well established .furthermore , the observation of a power - law behaviour for second - order ipt s is in agreement with the ideas developed in the study of equilibrium ( reversible ) phase transitions : scale invariance reflects the existence of a diverging correlation length at criticality .it should be mentioned that the em can also be applied to first - order ipt s close to coexistence . however , since it is well known that in the case of first - order reversible transitions correlations decay exponentially , preventing the occurrence of scale invariance , equations ( [ nu ] ) , ( [ so ] ) and ( [ r2 ] ) have to be modified .recently , the following anzats has been proposed \label{anz}\ ] ] where is an effective exponent and sets a characteristic time scale .so , equation ( [ anz ] ) combines a power - law behaviour for with an exponential ( asymptotic ) decay .it should also be mentioned that the whole issue of the occurrence of power - law behaviour at equilibrium first - order transitions is a bit more complex than the simplified arguments used above .for example , scaling behaviour inside the coexistence zone has been observed for the liquid - gas phase transitions using finite systems .however , this scaling disappears on the thermodynamic limit .also , when a first - order line ends in a second - order critical point , the system frequently displays several decades of critical behaviour ( before the exponential roll - off ) even when measurements are performed quite a bit beyond the critical point . in the field of reversible critical phenomena ,the most effective analytical tool for the identification of universality classes is the renormalization group analysis of classical field - theoretical models using coarse - grained ginzburg - landau - wilson free - energy functionals .while reaction systems exhibiting ipt s do not have energy functionals , they can often be treated on the coarse - grained level by means of phenomenological langevin equations .stochastic partial differential equations of this kind are the natural formalism to analyze critical properties of irreversible systems with absorbing states , as will be discussed below . 
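Before turning to that coarse-grained description, it is worth illustrating how the spreading exponents are extracted in practice from epidemic data. The sketch below uses generic exponent names for $N(t) \propto t^{\eta}$, $P(t) \propto t^{-\delta}$ and $R^2(t) \propto t^{z}$, since notational conventions vary between papers, and the hyperscaling check in the final comment is quoted as one common form of the relation mentioned above.

import numpy as np

def loglog_slope(t, y, t_min=10):
    """Least-squares slope of log(y) versus log(t), skipping early times and zero entries."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    mask = (t >= t_min) & (y > 0)
    return np.polyfit(np.log(t[mask]), np.log(y[mask]), 1)[0]

# With N, P, R2 as returned by averaged_epidemics() and t = 1, 2, ..., t_max:
# t     = np.arange(1, len(N) + 1)
# eta   = loglog_slope(t, N)       # N(t)  ~ t**eta
# delta = -loglog_slope(t, P)      # P(t)  ~ t**(-delta)
# z     = loglog_slope(t, R2)      # R2(t) ~ t**z
# One common form of the generalized hyperscaling relation: 4*delta + 2*eta ≈ d*z (d = 2 here).
# Off criticality these log-log plots curve, so scanning the control parameter for the
# straightest plots is the usual way of locating the critical point.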
on the other hand , mean - field modeling using ordinary differential equations ( ode )is a widely used method for the study of first - order ipt s ( see also below ) .further extensions of the ode framework , to include diffusional terms are very useful and , have allowed the description of spatio - temporal patterns in diffusion - reaction systems . however , these methods are essentially limited for the study of second - order ipt s because they always consider average environments of reactants and adsorption sites , ignoring stochastic fluctuations and correlations that naturally emerge in actual systems .the langevin equation for a single - component field can be written as where and are functionals of , and is a gaussian random variable with zero mean , such that the only nonvanishing correlations are .equation ( [ lan ] ) is purely first - order in time since it is aimed to describe critical properties where small - wavenumber phenomena dominate the physical behaviour .the goal of the coarse - grained lagevin representation is to capture the essential physics of microscopic models on large length and time scales .therefore , and are taken to be analytical functions of and its spatial derivatives . in this way ,coarse graining smooths out any nonanalyticity of the underlying microscopic dynamic model . at this stage , it is clear that one has an infinite number of possible analytical terms in and its space derivatives that can be placed on the right hand side term of equation ( [ lan ] ) .following the ideas of landau and lifshitz , it is expected that symmetry considerations would suffice to determine the relevant terms of and .in fact , these functional must include all analytic terms consistent with the symmetry of the microscopic model and no term with lower symmetry can be included .the coefficients of these terms are simply taken as unknown parameters or constants to be determined phenomenologically .this assumption is consistent with the fact that the typical ipt s , wich are intended to be described with equation ( [ lan ] ) , can be characterized by a set of critical exponents defining their universality class , which do not depend on the values of the parameters .depending on the behaviour of the noise amplitude functional , two types of physical systems can already be treated : \a ) systems without absorbing states . in this case a constant value . using renormalization group argumentsit can be shown that higher powers of and its derivatives in are irrelevant for the description of the critical behaviour and therefore , they can be neglected . in this way, the langevin equation has a simple additive gaussian noise of constant amplitude . assuming that can be written as where and are constants , the best known example results in the ginzburg - landau - wilson functional derivative of free energy functional from the ising model , e.g. ,\ ] ] such that the langevin equation becomes the celebrated time dependent ginzburg - landau equationthis equation is very useful for the treatment of equilibrium critical phenomena and has been quoted here for the sake of completeness only .in fact , in this article , our interest is focused on far - from equilibrium systems , for which can not be expressed as a functional derivative . within this contextthe best known example is the kardar - parisi - zhang ( kpz ) equation introduced for the description of the dynamic behaviour of interfaces without overhangs . 
for systems with interfaces , is taken as the height at time of a site of the interface at position .it has been shown that the functional captures the large scale physics of moving interfaces leading to the kpz equation recent studies have demonstrated that the kpz equation holds for the treatment of a wide variety of physical situations including the description of the interfacial behaviour of reactants close to coexistence in systems exhibiting first - order ipt s , for further details see also .\b ) systems with absorbing states . in the langevin equations capable of describing these systems ,the noise amplitude functional must vanish with so that the state is an absorbing state for any functional that has no constant piece and therefore it also vanishes with .it can be demonstrated that , for small enough values of . also , can assume only positive integer and half - odd - integer values .the cases and are those of interest in practice , the former includes the directed percolation ( dp ) universality class ( which is of primary interest in the context of this article because it is suitable for the description of second - order ipt s ) , while the latter includes the problem of multiplicative noise . considering the case of dp , one has to keep in mind that the state must be absorbing , so that both functionals and must vanish as .this constraint implies that terms independent of must can not be considered .now , imposing the condition of symmetry inversion on the underlying lattice ( ) , so that terms containing are forbidden , the langevin equation with the lowest allowed terms in and its derivatives becomes where , and are constants .using renormalization group methods it can be shown that the critical dimension for equation ( [ lan1 ] ) is and that the critical exponents in terms of the first order ( here ) are : dynamic exponent , order parameter critical exponent and correlation length exponent . also , the scaling relation holds , where is the critical exponent of the two - point correlation function .therefore , only two of the exponents are truly independent .notice that equation ( [ scarel ] ) is a nonequilibrium counterpart of the well - known scaling relation valid for a system without absorbing states .apart from the scaling relations given by equation ( [ anzatgdt ] ) and ( [ scarel ] ) , which hold for dynamic and static exponents , the following relationships between exponents have been found to hold : , and .it should also be noticed that ipt s subject to extra symmetries , and thus out of the dp universality class , have been identified in recent years . 
amongthen one has systems with symmetric absorbing states , models of epidemics with perfect immunization , and systems with an infinite number of absorbing states .additional discussions on these topics are beyond the aim of this article since these situations do not appear in the reaction models discussed here .for further details the reader is addressed to the recent review of hinrichsen .the mean field ( mf ) approach to irreversible reactions neglects spatial fluctuations and the effect of noise .so , the actual concentrations of the reactants are replaced by averaged values over the sites of the lattice .differential equations for the time dependence of such averages are derived for specific reaction systems with different degrees of sophistication , for instance involving one or more site correlations .mf equations are useful in order to get insight into the behaviour of reaction systems close to first - order ipt s where spatial fluctuations are expected to be irrelevant . in spite of the restricted application of mf equations to the study of second - order ipt s , it is instructive to discuss the mf predictions of the langevin equation ( [ lan1 ] ) .in fact , replacing the field by the spatial constant and neglecting the noise and the laplacian terms , equation ( [ lan1 ] ) becomes equation ( [ lanmf ] ) has two long - time stationary solutions given by the absorbing state ( ) and the active regime ( ) , wich are stable for and . here plays the role of the control parameter with a critical point at .the order parameter critical exponent that governs the decay of the average density for from the active phase is .the first part of this section will be devoted to describe lattice gas reaction models inspired in actual catalytic reactions , such as the catalytic oxidation of carbon monoxide ( ) and the reaction between nitric oxide and carbon monoxide ( ) .these models exhibit many features characteristic of first - order ipt s that have also been found in numerous experiments , such as hysteretic effects and abrupt changes of relevant properties when a control parameter is tuned around a coexistence point ( see also figure 2 ) , etc . on view of these facts, special attention will be drawn to the discussion of first - order itp s .on the other hand , will be mostly devoted to the discussion of generic models , which are not intended to describe specific reaction systems .most of these models exhibit second - order itp s . as already discussed in the introductory sections ,the catalytic oxidation of carbon monoxide is one of the most studied reaction due to its practical importance and theoretical interest . 
The simplest approach to the catalytic oxidation of $CO$ is the ZGB lattice gas model as described above. The phase diagram of the ZGB model (figure 1) exhibits a second-order IPT at $y_1$ that belongs to the DP universality class. The dynamic critical exponents, as evaluated using epidemic simulations (equations ([nu]), ([so]) and ([r2])), are in excellent agreement with the accepted exponents of the directed percolation universality class in two dimensions. The order parameter critical exponent, as evaluated using the damage spreading technique, is also in excellent agreement with the DP value. More interestingly, close to $y_2$ the ZGB model exhibits a first-order IPT (see figure 1), in qualitative agreement with experiments performed using single-crystal surfaces as catalysts (see figure 2). As mentioned above, the nonvanishing $CO$ desorption rate observed experimentally prevents the actual catalyst system from entering into a truly absorbing state, and the abrupt transition shown in figure 2 is actually reversible. Further experimental evidence for this first-order-like behaviour arises from dynamic measurements exhibiting clear hysteretic effects, as shown in figure 3 for the case of the reaction on a single-crystal surface. [Figure 3: Hysteresis during the catalytic oxidation of $CO$, measured keeping the oxygen pressure constant while the $CO$ partial pressure is varied cyclically (horizontal axis). (a) Hysteresis in the reactant coverage as measured by photoelectron emission microscopy (PEEM) and (b) in the reaction rate. More details in the text; adapted from the reference cited therein.] Figure 3(a) shows hysteresis in the reactant coverage upon cyclic variation of the $CO$ partial pressure (horizontal axis). The vertical axis shows the photocurrent measured in a photoelectron emission microscopy (PEEM) experiment. Notice that a low (negative) photocurrent indicates an oxygen-rich phase (left-hand side of figure 3(a)), while a large photocurrent corresponds to a $CO$-rich phase (right-hand side of figure 3(a)). Also, figure 3(b) shows the hysteresis in the rate of $CO_2$ production (measured using a mass spectrometer) as a function of the $CO$ partial pressure. When the system is in the low $CO$ pressure regime, the surface is mostly covered by oxygen, which corresponds to the 'oxygen side' or monostable region A. Upon increasing the $CO$ pressure the reaction rate also rises until, close to a first threshold pressure, the surface becomes covered with adsorbed $CO$. This '$CO$ side', or monostable state B, corresponds to a low-reactivity regime. Decreasing the $CO$ pressure, the system remains in this monostable region B until, close to a second, lower threshold pressure, it suddenly undergoes a steep transition and rapidly returns to the initial state of high reactivity. Summing up, during the hysteresis loop the system may be in two monostable regions, A and B, separated from each other by a bistable region.
in view of this stimulating experimental evidencelet us review some numerical studies on hysteretic effects close to coexistence .figure 4 shows a plot of versus obtained by means of the ensemble applied to the zgb model and using a relatively small sample ( ) .starting from the stationary value of at , one observes that stepwise increments of cause to increase steadily up to the -dependent upper spinodal point . turning around the spinodal point , further increments of to decrease steadily up to for . at this pointthe growing branch finishes and the subsequent decrease in causes the system to return to the starting point where the decreasing branch of the loop ends .notice that both the growing and decreasing branches describe the same trajectory ( within error bars ) on the ( ) plane .so , it has been concluded that hysteretic effects are not observed using this small lattice . increasing the size of the lattice ( in figure 4(b ) ) , the behaviour of the system changes dramatically . on the one hand ,the spinodal point becomes appreciably shifted ( see inset of figure 4(b ) ) , and on the other hand , hysteretic effects become evident since -growing and -decreasing branches can be clearly distinguished .notice that within a remarkably wide range of values both branches are vertical , and consequently parallel each to other . after increasing the lattice size ( in figure 4(c ) ) only minor changes in , , occur , but a well defined spinodal point and a hysteresis loop can still be observed ( see inset of figure 4(c ) ) .figure 5 shows a plot of the -dependent spinodal points ( ) versus the inverse lattice size ( ) . performing an extrapolation to the infinite size limit yields .this figure should be compared with the value reported by brosilow , , which corresponds to a determination of in a finite lattice of size . also , evans have reported for a finite lattice of size .very recently , an independent estimation given by , which is in excellent agreement with the figures measured using the cc method , has been obtained by means of short - time dynamic measurements .plots of both and versus , also shown in figure 5 , indicate that finite - size effects are negligible for large lattices ( ) .so , the extrapolated estimates are and , respectively .another interesting feature of simulations showing hysteresis is that one can exchange the role of the axis in the following sense : finely tuning the coverage one can induce the system to undergo first - order transitions in parameter space ( in this case ) .hysteretic effects can be further understood after examination of various snapshot configurations as shown in figure 6 .it should be noticed that all of the configurations belong to the coexistence region and , consequently , these states are not allowed when simulating the zgb model using the standard algorithm since right at , displays a discontinuous jump from to ( see figure 1 ) . figure 6(a ) shows a typical configuration corresponding to the spinodal point with . here, one observes that some small but compact clusters have already been nucleated .this configuration is different from those obtained within the reactive regime using the standard algorithm ( not shown here for the sake of space ) that show mostly monomers with .the snapshot configurations shown in figures 6(b ) and ( c ) correspond to the growing branch and have been obtained for and , respectively .it should be noticed that above a single massive cluster has spread within the reactive phase . 
in figure 6(b ) , this massive cluster does not percolate , but increasing percolation along a single direction of the lattice is observed ( figure 6(c ) ) .percolation of the massive cluster along only one direction is observed up to a relatively high coverage ( in figure 6(d ) ) .these values of the are remarkably greater than the percolation threshold of random percolation model , given by .however , the random percolation cluster is a fractal object with fractal dimension , while the cluster is a compact object . dangling ends emerging from the surface of the cluster eventually get in contact causing percolation of such a cluster in both directions of the lattice ( figure 6(e ) ) .it should be noticed that the snapshot configuration of figure 6(d ) corresponds to the growing branch while that of figure 6(e ) , which has been taken after few mcs , corresponds to an effective pressure characteristic of the decreasing branch .therefore , the jump from one branch to the other seems to be accompanied by a change in the configuration of the cluster .it will be interesting to quantitatively study the properties of the interface between the cluster and the reactive phase in order to determine their possible self - affine nature , as well as the interplay between curvature and hysteresis . from the qualitative point of view, the examination of snapshots suggests that both the interface roughness and length of the massive cluster remain almost unchanged for the growing branch .when is further increased , the jump to the decreasing branch is eventually characterized by the onset of percolation along both directions of the lattice and , consequently , the macroscopic length and the curvature of the interface may change .so , the subtle interplay of interfacial properties , such as length , roughness , curvature , etc . 
has to be studied in detail in order to fully understand the hysteresis loop observed using the CC ensemble. It is expected that the longer the interface length of the growing branch, the easier the reaction, so one needs a higher effective $CO$ pressure to keep the coverage constant. In contrast, along the decreasing branch the shorter length of the interface inhibits reactions, so that a greater oxygen pressure (smaller $CO$ pressure) is needed to achieve the desired coverage. These arguments may explain the existence of two branches. [Figure 6: Snapshot configurations obtained with the CC ensemble; $CO$-occupied sites are black while other sites are left white. (a) Snapshot obtained at the spinodal point; (b), (c) and (d) are snapshots obtained along the growing branch at successively higher coverages; (e) snapshot obtained a few Monte Carlo steps after (d), once the system has jumped to the decreasing branch.] The snapshot configurations of figure 6 unambiguously show the coexistence of two phases, namely a $CO$-rich phase dominated by a massive cluster that corresponds to the $CO$-poisoned state, and a reactive phase decorated with small $CO$ islands. It should be noticed that such a clear coexistence picture is only available using the CC ensemble, since coexistence configurations are not accessible with the standard ensemble. It should also be noted that the existence of hysteretic effects hinders the location of the coexistence point using the CC ensemble method. In fact, in the case of hysteresis in thermodynamic equilibrium, the chemical potential at coexistence can be obtained after a proper thermodynamic integration of the growing and decreasing branches of the hysteresis loop. For nonequilibrium systems like the ZGB model, where no energetic interactions are considered, the standard methods of equilibrium thermodynamics are not useful.
in order to overcome this shortcoming a method based onthe spontaneous creation algorithm already used to study different systems , has been proposed .the method implies the study of the stability of the hysteresis branches upon the application of a small perturbation .this can be carried out by introducing a negligible small desorption probability ( ) to the zgb model .it is well known that the first - order nature of the ipt of the zgb model remains if .it has been found that taking both branches of the hysteresis loop collapse into a single one that can be identified as the coexistence point given by .this figure is close to the middle of the hysteresis loop located close to , where the error bars cover both branches of the loop .the value reported by brosilow , which is remarkably close to this figure , has been obtained using the ensemble but neglecting both finite - size and hysteretic effects .regrettably , the size of the lattice used in reference was not reported , and therefore additional comparisons can not be performed . in the seminal paper of ziff , an estimation of the coexistence pointis performed studying the stability of the coexistence phase .this analysis gives , which is also in good agreement with other independent estimates .also , evans have reported based on the analysis of epidemic simulations .the value reported by meakin , , seems to be influenced by metastabilities due to the large lattices used for the standard simulation method .therefore , that figure is a bit larger and may correspond to a value close to the spinodal point .surprisingly , better values , e.g. , can be obtained using very small lattices and the standard algorithm since in such samples metastabilities are short - lived .very recently , hysteresis phenomena have been studied on the basis of a modified version of the zgb model .in fact , it is assumed that the surface of the catalyst has two kinds of inhomogeneities or defects . in type-1 defects , which are randomly distributed on the sample with probability , the desorption of adsorbed proceeds with probability . in type-2 inhomogeneities , which are randomly distributed with probability , the adsorption of oxygen molecules is inhibited and the desorption probability of is given by .furthermore , , where is the desorption probability of on unperturbed lattice sites . also , the diffusion of species is considered with probability , while the probability of other events such as adsorption , desorption and reaction is given by . 
in order to study hyteretic effects the pressureis varied at a constant rate in closed cycles .it is interesting to discus the mechanisms , which are originated by the definition of the model , which may lead to hysteretic effects .in fact , the low desorption probability of prevents the occurrence of a absorbing state and , consequently , the abrupt transition from the high - reactivity state to the low - reactivity regime is reversible .furthermore , the blocking of lattice sites for oxygen adsorption also prevents the formation of the oxygen - poisoned state and the second - order ipt becomes reversible and takes place between an oxygen - rich low - reactivity regime and a high - reactivity state .since escaping from both low - reactivity regimes is quite difficult , the occurrence of hysteresis in dynamic measurements can be anticipated .the reported results obtained by means of simulations ( see figure 7 ) are in qualitative agreement with the experimental findings ( figure 3 ) .figure 7(a ) corresponds to the evolution of oxygen coverage . decreases smoothly from the oxygen - rich state when the pressure is raised .this behaviour resembles the case of the zgb model ( figure 1 ) .however , the abrupt increase in observed when is lowered is due to the fact that the surface is mostly covered by species ( see figure 7(b ) ) , which have low desorption probability .in fact , oxygen can only be adsorbed on a nearest - neighbor pair of vacant sites that may become available with ( roughly ) a low probability of the order of . on the other hand ,the growing branch of ( figure 7(b ) ) is flat and that coverage remains close to zero , for .subsequently , it exhibits an abrupt increase close to , as already expected from the behaviour of the zgb model ( figure 1 ) .the high coverage region of the decreasing branch is due to the low desorption probability of while the subsequent sudden drop is consistent with the abrupt increase in ( figure 7(b ) ) .it should be noticed that the experimental measurement of the photocurrent does not allow one to distinguish from , so figure 3(a ) should be compared with a composition of both figures 7(a ) and 7(b ) .anyway , the experimental findings are nicely ( qualitatively ) reproduced by the simulations .finally , the behaviour of the rate of production ( figure 7(c ) is also in excellent agreement with the experimental data shown in figure 3(b ) . 0.5 true cm as in the experiments ( see figure 3 ), the numerical simulation results shown in figure 7 also allow one to estimate the values of the partial pressure where the transitions from the monostable states a and b to the bistable state , and , respectively take place .therefore , the width of the hysteresis loop is given by .the dependence of on the scan rate of the pressure ( ) has also been measured experimentally and by means of simulations , as shown in figures 8(a ) and 8(b ) , respectively .again , the numerical results are in remarkable qualitative agreement with the experiments . neglecting surface defects and based on mean - field calculations zhdanov and kasemo have stated that the two branches of the hysteresis loop should fall together to the equistability point , provided there occurs a sufficiently rapid nucleation and growth of islands .this result is expected to be valid on the limit where one also should observe . 
as shown in figure 8, the transition points and approach each other and shrinks with decreasing .conclusive evidence on the validity of the mean - field prediction can not be found due to either experimental or numerical limitations to achieve a vanishing small scanning rate .however , it has been suggested that should be finite due to the presence of surface defects , which is neglected in the mean - field treatment . comparing the numerical results obtained applying the cc method to the zgb model with those of the study performed by hua , one has to consider that not only both models are different , but also the data corresponding to the cc ensemble were obtained after a long - time stabilization period and consequently exhibit smaller hysteretic effects , in contrast to the dynamic measurements where the relaxation of the system towards stationary states is not allowed . as anticipated above a truly statecan not be achieved in the experiments due to the nonvanishing probability ( ) . according to the theory of thermal desorption and the experiments , depends on the temperature of the catalysts and the energetic interactions with neighboring adsorbed species through an arrhenius factor .therefore , the magnitude of the abrupt drop in the reaction rate ( see figure 2 ) decreases upon increasing , as shown in figure 9 for the case of the catalytic oxidation of on .furthermore , on increasing the sharp peak of the reaction rate becomes rounded and for high enough temperature , e.g. for in figure 8 , the signature of the first - order transition vanishes .the influence of desorption on the phase diagram of the zgb model has also been studied by means of monte carlo simulations .the simplest approach is just to introduce an additional parameter to the zgb model , given by the desorption probability .as expected , the second - order ipt of the models is not influenced by desorption .however , the first - order ipt actually disappears because due to the finite value of the system can no longer achieve a truly state .however , the first - order nature of the transition remains for very low desorption probabilities , as shown in figure 10 , for . 
on increasing ,the peak of the rate of production becomes shifted and rounded in qualitative agreement with the experiments ( figure 9 ) .1.0 true cm another useful approach to the study of first - order ipt s is to apply the em as described in section .early epidemic studies of the first - order ipt of the zgb model have been performed by evans and miesch .epidemic simulations were started with the surface of the catalysts fully covered by species , except for an empty patch placed at the center of the sample .the time dependence of the number of empty sites ( ) and the survival probability of the patches ( ) were analyzed in terms of the conventional scaling relationships given by equations ( [ nu ] ) and ( [ so ] ) .an interesting feature observed in these earlier simulations was the monotonic decrease of , which can be fitted with an exponent .this result is in marked contrast with the behaviour expected for second - order ipts in the dp universality class , where equation ( [ nu ] ) holds with a positive exponent such as with in two dimensions .furthermore , it has been observed that empty patches have a extremely low survival probability and the data can be fitted using equation ( [ so ] ) with an exponent , i.e a figure much larger than the exponent expected for dp given by .wide experience gained studying * reversible * first - order critical phenomena shows that in this kind of transitions the correlations are short - ranged .therefore , the reported power - law decays of and are certainly intriguing .however , recent extensive numerical simulations performed averaging results over different epidemic runs have changed this scenario , as shown in figure 11 . in fact, data taken for , , and show pronounced curvature with a clear cut - off , departing from a power - law behaviour as described by equation ( [ nu ] ) .so , it has been concluded that the occurrence of power law ( scale invariance ) in the first - order dynamic critical behaviour of the zgb model can safely be ruled out . on the other hand , for , log - log plots of versus exhibit pseudo power - law behaviour over many decades ( , as shown in figure 11 .the effective exponent describing the early time behaviour of is , in agreement with the result reported by evans .however , after a long time , few successful epidemics prevail and the number of empty sites suddenly grows as , indicating a spatially homogeneous spreading .the results shown in figure 11 suggest that instead of the power - law behaviour characteristic of second - order transitions ( equation ( [ nu ] ) ) , the epidemic behaviour close to first - order ipt s could be described by means of a modified ansatz involving a short - time power - law behaviour followed by a long - time exponential decay , as given by equation ( [ anz ] ) .the inset of figure 11 shows a test of equation ( [ anz ] ) , namely a semilogarithmic plot of versus , where has been assumed .the scattering of points for long times is simply due to the reduction of statistics as a consequence of the low survival probability of the initial epidemic patches . 
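A simple way to carry out the kind of test described for equation ([anz]) is to assume the short-time effective exponent and check whether the remaining time dependence is purely exponential. The sketch below takes the fitted quantity to be of the generic form $F(t) \propto t^{-a}\,\exp(-t/\tau)$ (the precise symbols and sign conventions of equation ([anz]) may differ) and extracts the characteristic time $\tau$ from the slope of a semilogarithmic plot; the numbers in the usage comment are illustrative only.

import numpy as np

def tau_from_semilog(t, f, a_eff):
    """Assume F(t) ≈ A * t**(-a_eff) * exp(-t/tau).  Then log(F * t**a_eff) is linear
    in t with slope -1/tau, mirroring the semilogarithmic test described in the text."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    mask = f > 0
    slope, _ = np.polyfit(t[mask], np.log(f[mask] * t[mask] ** a_eff), 1)
    return -1.0 / slope

# Quick self-consistency check with synthetic data (illustrative numbers only):
# t = np.arange(1.0, 2000.0)
# f = t ** -0.7 * np.exp(-t / 300.0)
# print(tau_from_semilog(t, f, 0.7))    # ~300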
summing up ,epidemic studies of the zgb model close to and at coexistence show a pseudo power - law behaviour for short times ( ) that crosses over to an asymptotic exponential decay for longer times .consequently , the absence of scale invariance in the first - order ipt of the zgb model places this kind of transition on the same footing its their reversible counterparts .another interesting scenario for the study of bistable behaviour close to first - order ipt s , observed in catalyzed reactions , is to investigate the properties of interfaces generated during the propagation of reaction fronts .in fact , key experiments have underlined the importance of front propagation for pattern formation in bistable systems , including the formation of labyrinthine patterns , self - replicating spots , target patterns and spiral waves , stationary concentration patterns ( ` turing structures ' ) , etc .furthermore , recent experimental studies of catalytic surface reactions have confirmed the existence of a wealth of phenomena related to pattern formation upon front propagation .the basic requirement for the observation of front propagation is a process involving an unstable phase , which could be displaced by a stable one , leading to the formation of an interface where most reaction events take place .this interesting situation is observed close to first - order ipts , as in the case of the zgb model ( figure 1 ) .in fact , just at the transition point one has a discontinuity in that corresponds to the coexistence between a reactive state with small clusters and a -rich phase , which likely is a large -cluster , as suggested by simulations performed using the cc ensemble ( see the snapshots of figures 6(b)-(e ) ) . between and the upper - spinodal point , the reactive state is unstable and it is displaced by the -rich phase . on the contrary , between the lower spinodal point and the reactive state displaces the -rich phase .this latter case has been studied by evans and ray , who have reported that the reactive regime displaces the -poisoned state , resulting in a propagation velocity normal to the interface .it has been proposed that must vanish as , where both states become equistable , so one has with .the limit of high diffusivity of the reactants can be well described by mean - field reaction - diffusion equations , which give .it is interesting to notice that if diffusion is restricted or even suppressed , simulation results give values of that are also very close to unity , suggesting that this exponent is independent of the surface diffusivity of the reactants . 
for an evolving interface, there is a clear distinction between the propagation direction and that perpendicular to it .so it may not be surprising that scaling is different along these two directions .therefore , an interface lacks self - similarity but , instead , can be regarded as a self - affine object .based on general scaling arguments it can be shown that the stochastic evolution of a driven interface along a strip of width is characterized by long - wavelength fluctuations that have the following time- and finite - size - behaviour where for and for , with .so , the dynamic behaviour of the interface can be described in terms of the exponents and , which are the roughness and growth exponents , respectively .thus , for an infinite system , one has , as .note that is also known as the interface width .it is reasonable to expect that the scaling behaviour should still hold after coarse - graining and passing to the continuous limit .in fact , the dynamics of an interface between two phases , one of which is growing into the other , is believed to be correctly described by simple nonlinear langevin type equations , such as equation ( [ kkppzz ] ) proposed by kardar , parisi and zhang ( kpz ) , the edward - wilkinson ( ew ) equation , and others .as in the case of second - order phase transitions , taking into account the values of the dynamic exponents , evolving interfaces can be grouped in sets of few universality classes , such that interfaces characterized by the same exponents belong to the same universality class . among others ,kpz and ew universality classes are the most frequently found in both experiments and models , including electrochemical deposition , polycrystalline thin - film growth , fire - front propagation , etc . . pointing again our attention to the simulation results of evans and ray ,they have reported that the propagation of the reaction interface , close to , can be described in terms of dynamic scaling arguments , with , i.e. , a figure close to the kpz value ( in dimensions ) .very recently , chvez studied the dynamics of front propagation in the catalytic oxidation of co on by means of a cellular automaton simulation .it is found that the dynamic scaling exponents of the interface are well described by equation ( [ fv ] ) with and .it is also reported that , in the absence of surface diffusion , the interface dynamics exhibits kpz behaviour .based on a variant of the zgb model , goodman have studied the propagation of concentration waves .they reported the observation of trigger waves within the bistable regime of the process , i.e. , close to the first - order ipt .in fact , within this regime one has the coexistence of a stable state with a metastable one . 
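a minimal numerical illustration of the family-vicsek scaling written above is provided by the ballistic deposition model, a standard member of the kpz universality class in 1+1 dimensions; it is not one of the reaction-front simulations cited in the text, but it shows how the growth exponent is extracted in practice. the sketch below grows such an interface, records its width and fits the early-time growth exponent, which should come out close to the kpz value beta = 1/3.

import numpy as np

def ballistic_deposition(L=256, monolayers=1000, seed=1):
    """grow a 1+1 dimensional ballistic-deposition interface and record its width."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=np.int64)
    times, widths = [], []
    for t in range(1, monolayers + 1):
        for _ in range(L):                            # one monolayer = L deposition attempts
            i = rng.integers(L)
            left, right = h[(i - 1) % L], h[(i + 1) % L]   # periodic boundaries
            h[i] = max(left, h[i] + 1, right)         # particle sticks at first contact
        times.append(t)
        widths.append(h.std())                        # interface width w(L, t)
    return np.array(times), np.array(widths)

t, w = ballistic_deposition()
early = slice(1, 50)        # early-time regime, well before saturation at w ~ L^alpha
beta = np.polyfit(np.log(t[early]), np.log(w[early]), 1)[0]
print("estimated growth exponent beta ~ %.2f (kpz prediction 1/3)" % beta)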
at the boundary between the two, the stable state will displace the metastable one and the boundary will move , so this process leads to the propagation of concentration fronts ( trigger waves ) .goodman found that the velocity of the front depends on the diffusion rate of species ( diffusion of oxygen is neglected ) and .the velocity of the front vanishes on approaching the poisoning transition at ( note that the transition point now depends on ) , according to equation ( [ velo ] ) , with , in agreement with the results of evans .while front propagation during the catalytic oxidation of on platinum surfaces has been observed in numerous experiments , the quantitative analysis of data is somewhat restricted by the fact that the fronts are neither flat nor uniformly curved , eventually several of them nucleate almost at the same time and , in most cases , the occurrence of strong interactions between fronts ( ` interference of chemical waves ' ) makes clean interpretations quite difficult . in order to overcome these shortcomings ,haas have studied the propagation of reaction fronts on narrow channels with typical widths of 7 , 14 and 28 .the main advantage of these controlled quasi - one - dimensional geometries is that frequently only a single front propagates along the channel , thus avoiding front interactions .additionally , the narrowness of the channels and the absence of undesired flux at the boundaries lead to practically planar fronts . using this experimental setup ,the front velocity in different channels can be measured as a function of the partial pressure of , keeping the temperature and the oxygen partial pressure constant , as shown in figure 12(a ) . at low valuesonly oxygen fronts are observed .furthermore , their velocity decreases when is increased , reaching a minimum value at a certain critical threshold ( see figure 12(a ) ) .when is further increased a jump is observed- now the front reverses itself into a front and travels in the opposite direction .when is lowered from high values , the fronts become slower and hysteresis is observed ( see the coexistence between oxygen and fronts in figure 12(a ) for ) .finally , at another jump is observed- under these conditions fronts can no longer persist below a quite low ( but nonzero velocity ) and they reverse themselves into fast oxygen fronts ( figure 12(a ) ) .many features of the experiment of haas can be recovered simulating front propagation with the aid of the zgb model on the square lattice with rectangular geometries of sides ( ) .thus is the width of the channel and its length .free boundary conditions were taken along the channel while the opposite ends are assumed to be in contact with oxygen and sources , respectively .if or species are removed from the ends of the channels ( i.e. , the ` sources ' ) , due to the reaction process , they are immediately replaced .the propagation of the concentration profile was studied starting with a sample fully covered by oxygen , except for the first and second columns , which are covered by ( the source ) , and left empty , respectively .the propagation of the oxygen profile was followed using a similar procedure . under these conditionsone always has two competing interfaces along the channel . 
in order to make a quantitative description of the propagation, the concentration profiles of the reactants , and , are measured along the length of the channel in the -direction and averaged over each column of lattice sites of length .then , the moments of the profiles , which in subsequent steps can be used to determine the propagation velocity and the width of the profiles , are also measured .in fact , the moments of order of the profiles can be evaluated according to }{\sum [ \theta(x+1 ) - \theta(x ) ] } .\label{moment}\ ] ] thus , using equation ( [ moment ] ) the velocity of propagation can be obtained from the first moment monte carlo simulation results show that the front propagation velocity depends on both and the channel width , as shown in figure 12(b ) .this figure also shows that the displacement of - and -poisoned channels by the reactive regime stops at certain ( -dependent ) critical values , and , respectively . by means of an extrapolation to the thermodynamic limit it is possible to identify these critical values with the critical points of the zgb model , namely and , respectively .it is also found that close to , when the propagation of the profile ceases , the velocity of the profile undergoes a sharp change .this behaviour can be correlated with the first - order ipt between the stationary reactive regime and the -poisoned state observed in the zgb model at ( see figure 1 ) .so far , the main conclusions that can be drawn from figure 12(b ) can be summarized as follows : a ) there are two critical pressures , and , which depend on the width of the channel , at which propagation of one profile or the other stops ; b ) within these critical values , propagating and profiles coexist ; c ) profiles propagate faster than profiles .all these observations appear in qualitative agreement with the experimental results shown in figure 12(a ) .however , the underlying physics is different : in the simulations the displacement of a poisoned phase by the invading reactive phase takes place within a range of pressures where the latter is unstable , while the former is stable . in contrast , the experiment may show the propagation of coexisting phases within a bistable regime . so far , all those simulations of the zgb model discussed above do not attempt to describe the occurrence of oscillations in the concentration of the reactants and in the rate of production , which are well documented by numerous experiments .in fact , it is well known that the catalytic oxidation of on certain surfaces exhibits oscillatory behaviour , within a restricted range of pressures and temperatures , which is associated with adsorbate - induced surface phase transitions .since the aim of this paper is to describe the irreversible critical behaviour of the reaction , the oscillatory behaviour will not be further discussed .therefore , the interested reader is addressed to recent developments involving the study of numerous lattice - gas models aimed to explain the oscillations observed experimentally . 
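the moment analysis quoted above translates directly into code. following the quoted form of equation (moment), the discrete derivative of the coverage profile theta(x) acts as the weight for the moments, and the propagation velocity follows from the drift of the first moment between successive snapshots; the sigmoidal profile used below is synthetic and only stands in for the measured co or oxygen profiles along the channel.

import numpy as np

def profile_moments(theta, orders=(1, 2)):
    """moments of a concentration profile theta(x) weighted by its discrete
    derivative, following the form of equation (moment) quoted in the text."""
    x = np.arange(len(theta) - 1)
    w = np.diff(theta)                      # theta(x+1) - theta(x)
    norm = w.sum()
    return [float(np.sum(x ** n * w) / norm) for n in orders]

def front_velocity(profiles, dt=1.0):
    """propagation velocity from the drift of the first moment between
    successive snapshots taken every dt monte carlo steps."""
    first = np.array([profile_moments(p, orders=(1,))[0] for p in profiles])
    return np.polyfit(dt * np.arange(len(first)), first, 1)[0]

# synthetic example: a sigmoidal profile drifting along a channel of 200 columns
xs = np.arange(200)
snapshots = [1.0 / (1.0 + np.exp(-(xs - 40 - 0.8 * t) / 4.0)) for t in range(0, 50, 5)]
print("recovered velocity %.3f (input drift 0.8 sites per mcs)"
      % front_velocity(snapshots, dt=5.0))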
since the zgb lattice gas reaction model is an oversimplified approach to the actual processes involved in the catalytic oxidation of co ,several attempts have been made in order to give a more realistic description .some of the additional mechanisms and modifications added to the original model are the following : ( i ) the inclusion of desorption that causes the first - order ipt to become reversible and slightly rounded , in qualitative agreement with experiments ( figures 9 and 10 ) .( ii ) energetic interactions between reactants adsorbed on the catalyst surface have been considered by various authors . in general ,due to these interactions the ipt s become shifted , rounded and occasionally they are no longer observed .( iii ) studies on the influence of the fractal nature of the catalyst are motivated by the fact that the surface of most solids at the molecular level must be considered as a microscopic fractal , such as the case of cataliysts made by tiny fractal matallic cluster dispersed in a fractal support or in a discontinuous thin metal films .the fractal surfaces have been modeled by means of random fractals , such as percolating clusters , , diffusion limited aggregates and also deterministic fractals , such as sierpinsky carpets , , etc . .one of the main findings of all these studies is that the first - order ipt becomes of second - order for dimensions . since in dimensions the zgb modeldoes not exhibit a reaction window , one may expect the existence of a ` critical ' lower fractal dimension capable a sustaining a reactive regime .this kind of study , of theoretical interest in the field of critical phenomena , remains to be addressed .( iv ) also , different kinds of adsorption mechanisms , such as hot - dimer adsorption , local versus random adsorption , nonthermal mechanisms involving the precursor adsorption and diffusion of molecules , the presence of subsurface oxygen , etc . 
have been investigated .( v ) the influence of surface diffusion has also been addressed using different approaches .particularly interesting is the hybrid lattice - gas mean - field treatment developed by evans for the study of surface reactions with coexisting immobile and highly mobile reactants .( vi ) considering the eley - rideal mechanism as an additional step of the set of equations ( [ adco]-[reac ] ) , namely including the following path poisoning of the surface by complete occupation by species is no longer possible preventing the observation of the second - order ipt .( vii ) the influencia of surface defects , which has also been studied , merits a more detailed discussion because , in recent years , considerable attention has been drawn to studies of surface reactions on substrates that include defects or some degrees of geometric heterogeneity , which is not described by any fractal structure , as in the case of item iii ) .interest in these types of substrates is based on the fact that the experiments have shown that inert species capable to block adsorption sites , such a sulfur , deposit on the catalyst surface during the exhaust of the combustion gases .also crystal defects that are formed during the production of the catalyst result in blocked sites .other inhomogeneities consist of crystallographic perturbations but may equally well involve active foreign surface atoms .recently lorenz have performed monte carlo simulations using three different types of defective sites .site-1 , that adsorbed neither nor and site-2 ( site-3 ) that adsorbed ( ) but no ( ) .they found that islands form around each defect near ( ) .the average density of decays as a power - law of the radial distance from the defect ( , ) , and the average cluster size also obeys a power - law with the distance to spinodal point ( ) with exponent .when defects are randomly distributed , with density , decreases linearly according to .this model has also been investigated in the site and pair mean - field approximations .the pair approximation exhibits the same behaviour that the monte carlo simulation .the size of the reactive windows decreases with and the abrupt transition at becomes continuous ( the same behaviour have been reported in a related model ) . however , unlike the analytical results , in the monte carlo simulation there is a critical concentration above which the transition at becomes continuous ( in agreement with previous results ) . in conclusion , various models shown that the presence of defects on the catalytic surface causes the poisoning transition to occur at lower values than on homogeneous surfaces .also , beyond some critical concentration of defects , the first - order ipt of the zgb model becomes second - order .the overall effect of inert sites is to reduce the production of .furthermore , these findings provide an alternative explanation for the absence of a second - order ipt into a -poisoned state observed in the experiments of oxidation ( see figure 2 ) . the catalytic reduction of with various agents , including , , , hydrocarbons , etc ., has been extensively studied on and surfaces , which are the noble metals used in automotive catalytic converters , due to the key role played by emission in air pollution . aside from the practical importance, the catalytic reduction of also exhibits a rich variety of dynamic phenomena including multistability and oscillatory behaviour . 
within this context ,the catalytic reaction between and is the subject of this section .the archetypal model used in most simulations has early been proposed by yaldran and khan ( yk ) . as in the case of the zgb model ,the yk model is also a lattice gas reaction system based on the langmuir - hinshelwood mechanism .the reaction steps are as follows where represents an unoccupied site on the catalyst surface , represents a nearest neighbor ( nn ) pair of such sites , indicates a molecule in the gas phase and indicates an species adsorbed on the catalyst .the reactions given by equations ( [ reco ] ) and ( [ ren ] ) are assumed to be instantaneous ( infinity reaction rate limit ) while the limiting steps are the adsorption events given by equations ( [ noad ] ) and ( [ coad ] ) .the yk model is similar to the zgb model for the reaction , except that the is replaced by , and nn atoms , as well as nn pairs , react . for further details on the yk modelsee .early simulations of the yk model have shown that a reactive window is observed on the hexagonal lattice while such kind of window is absent on the square lattice , pointing out the relevance of the coordination number for the reactivity .therefore , we will first discuss monte carlo simulations of the yk model performed on the hexagonal ( triangular ) lattice .subsequently , results obtained for variants of the yk model that also exhibit reaction windows on the square lattice will be presented .the simulation procedure in the standard ensemble is as follows : let and be the relative impingement rates for and , respectively , which are taken to be proportional to their partial pressures in the gas phase. taking , such normalization implies that the yk model has a single parameter that is usually taken to be . and adsorption events are selected at random with probabilities and , respectively .subsequently , an empty site of the lattice is also selected at random .if the selected species is , the adsorption on the empty site occurs according to equation ( [ coad ] ) .if the selected molecule is , a nn site of the previously selected one is also chosen at random , and if such site is empty the adsorption event takes place according to equation ( [ noad ] ) .of course , if the nn chosen site is occupied the adsorption trial is rejected .after each successful adsorption event all nn sites of the adsorbed species are checked at random for the occurrence of the reaction events described by equations ( [ reco ] ) and ( [ ren ] ) . during the simulations , the coverages with , and ( , and , respectively ) as well as the rate of production of and ( , , respectively ) are measured .the phase diagram of the yk model , shown in figure 13 , is similar to that of the zgb model shown in figure 1 .in fact , in both cases second- and first - order ipt s are observed .however , in contrast to the zgb model where the absorbing ( poisoned ) states are unique , in the case of the yk model such states are mixtures of and as follows from the observation of the left and right sides of the phase diagram , respectively ( figure 13(a ) ) .the ipt observed close to is continuous and therefore of second - order ( see figure 13 ) .more interesting , an abrupt first - order ipt is also observed close to ( figure 1(a ) and ( b ) ) .hysteretic effects close to the first - order ipt of the yk model have been investigated using the cc ensemble ( see ) . 
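the adsorption and reaction rules quoted above translate almost line by line into a monte carlo update. the sketch below is a hedged reading of those rules on a triangular (six-neighbour) lattice with periodic boundaries: co adsorbs on a single empty site, no requires an empty nearest-neighbour pair to dissociate into n and o, and the reactions co + o -> co2 and n + n -> n2 are taken as instantaneous. the co impingement probability used at the end is an illustrative value only; the actual window boundaries should be read off figure 13 and the cited references.

import numpy as np

EMPTY, CO, O, N = 0, 1, 2, 3
NEIGH = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]   # triangular lattice

def neighbours(i, j, L):
    return [((i + di) % L, (j + dj) % L) for di, dj in NEIGH]

def try_react(lat, i, j, L):
    # the freshly adsorbed particle reacts with at most one randomly chosen
    # nearest-neighbour partner; products desorb, leaving two empty sites
    partner = {CO: O, O: CO, N: N}[int(lat[i, j])]
    nn = neighbours(i, j, L)
    np.random.shuffle(nn)
    for a, b in nn:
        if lat[a, b] == partner:
            lat[i, j] = EMPTY
            lat[a, b] = EMPTY
            return

def yk_adsorption_trial(lat, y_co, L):
    i, j = np.random.randint(L), np.random.randint(L)
    if lat[i, j] != EMPTY:
        return
    if np.random.random() < y_co:            # co adsorbs on the single empty site
        lat[i, j] = CO
        try_react(lat, i, j, L)
    else:                                    # no needs an empty nn pair to dissociate
        a, b = neighbours(i, j, L)[np.random.randint(6)]
        if lat[a, b] == EMPTY:
            lat[i, j], lat[a, b] = N, O
            try_react(lat, i, j, L)
            if lat[a, b] == O:               # the o atom may still find a co partner
                try_react(lat, a, b, L)

L, y_co = 32, 0.30                           # y_co is an illustrative value only
lat = np.zeros((L, L), dtype=np.int8)
for _ in range(50 * L * L):                  # 50 monte carlo steps per site (short run)
    yk_adsorption_trial(lat, y_co, L)
print("coverages: co=%.3f o=%.3f n=%.3f empty=%.3f"
      % tuple(np.mean(lat == s) for s in (CO, O, N, EMPTY)))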
for small lattices ( )the relaxation time is quite short , so that hysteretic effects are absent .this result is in agreement with similar measurements of the zgb model ( see figure 4(a ) ) . on increasing the lattice size ,hysteretic effects can be observed even for and they can unambiguously be identified for , as shown in figure 14(a ) . a vertical region located at the center of the loop and slightly above , as well as the upper spinodal point , can easily be observed in figure 14 .furthermore , it is found that while the location of is shifted systematically toward lower values when is increased , the location of the vertical region ( close to the center of the loops ) remains almost fixed very close to .using lattices of size , the hysteretic effects are quite evident ( see figure 14(b ) ) and also , the growing and decreasing branches of the loops are almost vertical .also , the location of these branches depends on the lattice size , as follows from the comparison of figures 14(a ) and ( b ) . versus obtained using the cc ensemble and taking : ( a ) and ( b ) .the arrows indicate the growing branch ( gb ) , the decreasing branch ( db ) , the vertical region(vr ) and the upper spinodal point ( ) .more details in the text.,width=226 ] versus obtained using the cc ensemble and taking : ( a ) and ( b ) .the arrows indicate the growing branch ( gb ) , the decreasing branch ( db ) , the vertical region(vr ) and the upper spinodal point ( ) .more details in the text.,width=226 ] a more quantitative analysis on the behaviour of corresponding to the different branches and the vertical region has also been reported . versus in the growing branch ( ) , decreasing branch ( ) , and the vertical region ( ) .the straight lines correspond to the best fits of the data that extrapolate to .( b ) plots of versus .the straight line corresponds to the best fit of the data that extrapolates to .more details in the text.,width=226 ] versus measured in the growing branch ( ) , decreasing branch ( ) , and the vertical region ( ) .the straight lines correspond to the best fits of the data that extrapolate to .( b ) plots of versus .the straight line corresponds to the best fit of the data that extrapolates to .more details in the text.,width=226 ] figure 15(a ) shows the dependence of the location of the growing branch and the decreasing branch ( and , respectively ) on the inverse of the lattice size .the dependence of at the vertical region ( ) is also shown for the sake of comparison .it has been found that the location of all relevant points , namely with and , depends on the curvature radius ( ) of the interface of the massive cluster in contact with the reactive region .such dependence can be written as follows where is the location of the point under consideration after proper extrapolation to the thermodynamic limit and is an -dependent function .for the vertical region one has and is almost independent of , so , as shown in figure 15(a ) .in contrast , for the db and the gb , is finite and of the order of and , respectively .so , one has while , in agreement with the results shown in figure 15(a ) . the extrapolated points are , and also, and have been . 
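the finite-size extrapolation described above amounts to a linear fit of the measured branch locations against the inverse lattice size. the sketch below performs that fit on synthetic data generated from an assumed law p(l) = p(infinity) + f/l; the numbers are placeholders and not the values plotted in figure 15.

import numpy as np

rng = np.random.default_rng(3)
L = np.array([64, 128, 256, 512, 1024], dtype=float)
# synthetic stand-in for the measured branch locations versus lattice size
p_inf, f = 0.350, 0.5
p_of_L = p_inf + f / L + rng.normal(scale=2e-4, size=L.size)

slope, intercept = np.polyfit(1.0 / L, p_of_L, 1)
print("extrapolated p(L -> infinity) = %.4f, input value %.4f" % (intercept, p_inf))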
on the basis of these results , has been identified as the coexistence point in excellent agreement with an independent measurement , , reported by brosilow and ziff .this result is in contrast with measurements performed with the zgb model .in fact , for the zgb systems the vertical region is not observed while the locations of the growing and decreasing branches are almost independent of the lattice size ( see figure 4 ) .the explanation of the difference observed comparing both models , which may be due to the different behaviour of the interface of the massive cluster , is an interesting open question .it has also been reported that the location of the upper spinodal point depends on the lattice size , as shown in figure 15(b ) .this dependence of is due to local fluctuations in the coverage that take place during the nucleation of the critical cluster .the extrapolation of the data shown in figure 15(b ) gives .furthermore , the coverage at this point is .these results point out that in the thermodynamic limit the spinodal point is very close to coexistence , i.e. , . for the sake of comparisonit is worth mentioning that for the zgb model one has ( see ) .further insight into the first - order ipt of the yk model can be gained performing epidemic studies . however , in this case it is necessary to account for the fact that the poisoned ( absorbing ) state above coexistence is nonunique , since it is due to a mixture of and atoms with coverage and , as shown in figure 13(a ) .so , the starting configuration has to be obtained running the actual dynamics of the system slightly above coexistence until ` natural ' absorbing states suitable for the studies are generated .figure 16 shows results obtained performing epidemic simulations for various values of including , , , as well as a value close to coexistence but slightly inside the active region , namely .= 6.0 cm from figure 16 , it becomes evident that the method is quite sensitive to tiny changes of .the obtained curves are fitted by equation ( [ anz ] ) with , indicating a markedly low survivability of the epidemic patches as compared with the zgb model that gives , as already discussed in .the main finding obtained using epidemic studies is that the occurrence of a power - law scaling behaviour close to coexistence can unambiguously be ruled out .this result is in qualitative agreement with data of the zgb model , see .all these observations are also in agreement with the experience gained studying first - order reversible phase transitions where it is well established that correlations are short ranged , preventing the emergence of scale invariance .it should be mentioned that several mean - field theories of the yk model have been proposed . truncating the hierarchy of equations governing the cluster probabilities at the level , a reasonable estimate of the coexistence point given by obtained on the triangular lattice .however , this approach fails to predict the second - order ipt observed in the simulations ( see figure 13 ) .also , kortlke have derived an accurate prediction of the second - order critical point at using a two - site cluster approximation .the prediction of this approach for the coexistence point is less satisfactory , namely . on the other hand ,very recently an elaborated mean - field theory of the yk model has been developed up to the pair - approximation level that yields a very accurate estimation of the coexistence point , namely . 
as already mentioned above , the behaviour of the yk model on the square lattice is radically different than that observed on the triangular lattice .in fact , in the former * no * reactive stationary state has been observed , as shown in figure 17(a ) . . ( empty triangles ) , ( empty squares ) , and ( solid circles ) .( a ) results obtained neglecting diffusion showing the absence of a reaction window such that the catalyst always remains poisoned by mixtures of adsorbed species .( b ) results obtained considering diffusion . in this casethe yk model exhibits a reaction window .adapted from reference .( c ) results obtained considering the influence of the eley - rideal ( er ) mechanism and the diffusion on . adapted from reference .more details in the text.,width=226 ] . ( empty triangles ) , ( empty squares ) , and ( solid circles ) .( a ) results obtained neglecting diffusion showing the absence of a reaction window such that the catalyst always remains poisoned by mixtures of adsorbed species .( b ) results obtained considering diffusion . in this casethe yk model exhibits a reaction window .adapted from reference .( c ) results obtained considering the influence of the eley - rideal ( er ) mechanism and the diffusion on . adapted from reference .more details in the text.,width=226 ] however , simulation results have shown that diffusion of ( but not of , or ) restores the possibility of a reactive state , as shown in figure 17(b ) .in fact , in this case a second - order ipt is observed close to , while a first - order ipt is found at .also , meng have shown that by adding a new reaction channel to yk model ( equations ( [ noad]-[ren ] ) ) , such that the reactivity of the system becomes enhanced and consequently a reaction window is observed on the square lattice .this window exhibits a second - order ipt close to and a first - order ipt close to .this behaviour is reminiscent of that observed modeling the zgb model , as discussed above . on the other hand , assuming that the dissociation of given by equation ( [ noad ] ) is preceded by a molecular adsorption on a singe site , namely and the yk model also exhibits a reaction window in the square lattice provided that both and desorption are considered .very recently , khan have studied the influence of the eley - rideal ( er ) mechanism ( reaction of molecules with already chemisorbed oxygen atoms to produce desorbing ) on the yk model on the square lattice . in the absence of diffusion ,the added er mechanism causes the onset of a reactive regime at extremely low pressures , i.e. , for . however , considering the diffusion of species , the window becomes considerably wider and the reactive regime is observed up to where a first - order itp is found , as shown in figure 17(c ) .this finding suggests that the incorporation of the er mechanisms does not affect the first - order ipt ( see figure 17(b ) ) .in contrast , the second - order ipt is no longer observed as shown in figure 17(c ) . as in the case of the zgb model ,the bistable behaviour of the yk model close to coexistence provides the conditions for the displacement of reactive fronts or chemical waves . 
within this context , tammaro and evans studied the reactive removal of unstable mixed layers adsorbed on the lattice .furthermore , in order to account for the diffusion of the reactants , the hopping of all adsorbed species ( except for atoms whose mobility is negligible ) has been considered .simulations are started with the surface fully covered by a mixture .this mixture is unstable since the vacation of a single site may produce the dissociation of ( equation ( [ noaddiso ] ) ) and its subsequent reaction with followed by desorption of the products and the generation of empty sites capable of triggering the autocatalytic reaction . due to the high mobility of most adsorbed species , initially an exponential increase in the number of highly dispersed vacancies is observed .thereafter , a reaction front forms and propagates across the surface at constant velocity .it is also interesting to remark that all simulation results are confirmed by an elaborated mean - field treatment of chemical diffusion on mixed layers , incorporating its coverage - dependent and tensorial nature , both of these features reflecting the interference of chemical diffusion of adsorbed species on surface by coadsorbed species .in addition to the zgb and the yk models , numerous lattice gas reaction models have also been proposed attempting to describe catalyzed reaction processes of practical and academic interest . among others ,the dimer - dimer ( dd ) surface reaction scheme of the type has been proposed in order to describe the catalytic oxidation of hydrogen .monte carlo simulations of the dd model have shown the existence of second - order ipt s and a rich variety of irreversible critical behaviour .relevant numerical results have also been qualitatively reproduced by mean - field calculations . on the other hand ,the catalytic synthesis of ammonia from hydrogen and nitrogen on iron surfaces ( ) is among the catalyzed reactions of major economical importance .ever since its discovery and technical realization , the reaction has become the focus of fundamental investigations .very recently , various lattice gas reaction models have been proposed and studied by means of numerical simulations .the existence of ipt s has been observed , as shown in figure 18 for the case of the model proposed by khan and ahmad ( ka ) . herethe langmuir - hinshelwood reaction mechanism is assumed and the control parameter is the partial pressure of .as follows from figure 18 , the ka model exhibits a second - order ( first - order ) ipt close to ( ) , resembling the behaviour of both the zgb and the yk models ( see figures 1 and 13 , respectively ) . . ( open circles ) , ( empty triangle ) ( open squares ) and ( solid circles ) . adapted from reference .,width=226 ] recently , the simulation of reaction processes involving more than two reactants has received growing attention .one example of these processes is a multiple - reaction surface reaction model based upon both the zgb and the dd models .so , this zgb - dd model may be applied to the oxidation of co in the presence of -traces as well as to the oxidation of hydrogen in the presence of co - traces .interest on this model is due to various reasons .for example , the oxidation of hydrogen and carbon monoxide plays a key role in the understanding of the process of hydrocarbon oxidation .in fact , the oxy - hydrogen reaction mechanism contains the chain - branching steps producing o , h and oh - radicals that attack hydrocarbon species . 
also , co is the primary product of hydrocarbon oxidation and it is converted to carbon dioxide in a subsequent slow secondary reaction .furthermore , the zgb - dd model exhibits interesting irreversible critical behaviour with nonunique multi - component poisoned states .there are also models that are not aimed to describe any specific reaction system but , instead , they are intended to mimic generic reactions .typical examples are the monomer - monomer model , the dimer - trimer model , the monomer - trimer model , etc .( see also references .on the other hand , in the literature there is a vast variety of lattice gas reaction models following the spirit of the reaction described in the above subsections .they all exhibit the same type of irreversible critical behaviour at the transition , which is determined by a common feature- the existence of an absorbing or poisoned state , i.e. , a configuration that the system can reach but from where it can not escape anymore , as discussed in detail in section 1.2 .as already discussed , the essential physics for the occurrence of ipt s is the competition between proliferation and death of a relevant quantity .so , it is not surprising that a large number of models satisfying this condition , and aimed to describe quite diverse physical situations , have been proposed and studied . some examples of models exhibiting second - order ipt s are , among others ` directed percolation ' as a model for the systematic dripping of a fluid through a lattice with randomly occupied bonds , the ` contact process ' as a simple lattice model for the spreading of an epidemics , ` autocatalytic reaction - diffusion models ' aimed to describe the production of some chemical species , ` the stochastic game of life ' as a model for a society of individuals , ` forest fire models ' , ` branching annihilating random walkers ' with odd number of offsprings , epidemic spreading without immunization , prey - predator systems , the domany - kinzel cellular automata , etc . for an excellent review on this subject see .the common feature among all these models is that they exhibit second - order ipt s belonging to the universality class of directed percolation ( dp ) , the langevin equation ( ) being the corresponding field - theoretical representation . the robustness of these dp models with respect to changes in the microscopic dynamic rules is likely their most interesting property .such robustness has led janssen and grassberger to propose the so - called dp conjecture , stating that models in the dp universality class must satisfy the following conditions : ( i ) they must undergo a second - order ipt from a fluctuating active state to a unique absorbing state .( ii ) the transition has to be characterized by a positive single - component order parameter .( iii ) only short - range processes are allowed in the formulation of the microscopic dynamic rules .( iv ) the system has neither additional symmetries nor quenched randomness . in spite of the fact that the dp conjecture has not been proved rigorously , there is compelling numerical evidence supporting it .so far , dp appears to be the generic universality class for ipt s into absorbing states , having a status similar to their equilibrium counterpart , namely the venerated ising model .however , despite the successful theoretical description of the dp process , there are still no experiments where the critical behaviour of dp has been observed .therefore , this is a crucial open problem in the field of ipt s . 
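among the models listed above, the contact process is probably the simplest way to see dp behaviour numerically. the sketch below simulates the one-dimensional contact process, started from a fully active lattice, at its accepted critical point lambda_c ~ 3.2978; at criticality the density of active sites should decay as a slow power law with the dp exponent delta ~ 0.16, whereas away from criticality it either saturates or decays exponentially.

import numpy as np

def contact_process(L=500, lam=3.2978, t_max=100.0, seed=2):
    """1d contact process: an active site deactivates at rate 1 and tries to
    activate a random nearest neighbour at rate lam (lam_c ~ 3.2978 in 1d)."""
    rng = np.random.default_rng(seed)
    state = np.ones(L, dtype=bool)
    active = list(range(L))                       # indices of active sites
    t, samples, next_sample = 0.0, [], 1.0
    while t < t_max and active:
        n = len(active)
        t += 1.0 / ((1.0 + lam) * n)              # gillespie-style time increment
        idx = rng.integers(n)
        i = active[idx]
        if rng.random() < 1.0 / (1.0 + lam):      # spontaneous deactivation
            active[idx] = active[-1]              # swap-remove from the active list
            active.pop()
            state[i] = False
        else:                                     # offspring on a random neighbour
            j = (i + (1 if rng.random() < 0.5 else L - 1)) % L
            if not state[j]:
                state[j] = True
                active.append(j)
        if t >= next_sample:                      # log-spaced density samples
            samples.append((t, len(active) / L))
            next_sample *= 1.5
    return samples

for t, rho in contact_process():
    print("t=%7.1f  density=%.4f" % (t, rho))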
for further discussionssee for instance .the study of irreversible critical behaviour in reaction systems has attracted the attention of many physicists and physical - chemists for more than four decades . on the one hand ,second - order ipt s are quite appealing since , like the archetypal case of dp , they are observed in simple models in terms of their dynamic rules .nevertheless , second - order behaviour is highly nontrivial and has not yet been solved exactly , even using minimal models in one dimension .furthermore , the critical exponents are not yet known exactly .field - theoretical calculations and numerical simulations have greatly contributed to the understanding of second - order irreversible behaviour .most systems lie in the universality class of dp , which plays the role of a standard universality class similar to the ising model in equilibrium statistical physics , and the reason for few of the exceptions found are very well understood .the main challenges in the field are , from the theoretical point of view , the achievement of exact solutions even for simple models and , from the point of view of the experimentalists , the realization of key experiments unambiguously showing dp behaviour .on the other hand , the scenario for the state of the art in the study of first - order ipt s is quite different .firstly , there is stimulating experimental evidence of the existence of abrupt ( almost irreversible ) transitions , while hysteretic effects and bistable behaviour resembling first - order like behaviour have been observed in numerous catalyzed reaction experiments .secondly , one still lacks a theoretical framework capable describing first - order ipt s and theoretical efforts are being addressed to the development of mean - field approaches with different degrees of sophistication .the achievement of a theoretical framework enabling the treatment of irreversible critical behaviour and the gathering of further experimental evidence , including the accurate measurement of critical exponents , are topics of high priority that will certainly contribute to the development of a general theory of the physics of far - from equilibrium processes .this work is financially supported by conicet , unlp and anpcyt ( argentina ) .we are grateful with numerous colleagues for stimulating discussions .hilderbrand m , kuperman m , wio h , mikhailov a and ertl g 1999 1475 only few exceptions of dp are known so far .however , they all violate at least one of the essential conditions required for the validity of the dp conjecture , as dicussed in . for some examplessee reference .
an introductory review on the critical behaviour of some irreversible reaction systems is given. the study of these systems has attracted great attention during the last decades due to, on the one hand, the rich and complex underlying physics and, on the other hand, their relevance for numerous technological applications in heterogeneous catalysis, corrosion and coating, the development of microelectronic devices, etc. the review focuses on recent advances in the understanding of irreversible phase transitions (ipts), providing a survey of the theoretical development of the field during the last decade as well as a detailed discussion of relevant numerical simulations. the langevin formulation for the treatment of second-order ipts is discussed. different monte carlo approaches are also presented in detail and the finite-size scaling analysis of second-order ipts is described. special attention is devoted to recent progress in the study of first-order ipts observed upon the catalytic oxidation of carbon monoxide and the reduction of nitrogen monoxide, using lattice gas reaction models. only brief comments are given on other reactions such as the oxidation of hydrogen, ammonia synthesis, etc. also, a discussion of relevant experiments is presented and measurements are compared with the numerical results. furthermore, promising areas for further research and open questions are addressed.
though shannon showed in 1956 that noiseless feedback does not increase the capacity of memoryless channels , feedback s other benefits have made it a staple in modern communication systems .feedback can simplify the encoding and decoding operations and has been incorporated into incremental redundancy ( ir ) schemes proposed as early as 1974 .hagenauer s work on rate - compatible punctured convolutional ( rcpc ) codes allows the same encoder to be used in various channel conditions and uses feedback to determine when to send additional coded bits .the combination of ir and hybrid arq ( harq ) continues to receive attention in the literature and industry standards such as 3gpp .although it can not increase capacity in point - to - point channels , the information - theoretic benefit of feedback for reducing latency through a significant improvement in the error exponent has been well understood for some time .( see , for example , . )recent work casts the latency benefit of feedback in terms of block length rather than error exponent , generating new interest in the practical value of feedback for approaching capacity with a short average block length .polyanskiy et al . provided bounds for the maximum rate that can be accomplished with feedback for a finite block length and also demonstrated the energy - efficiency gains made possible by feedback . in the variable - length feedback with termination ( vlft ) scheme, uses an elegant , single , `` stop feedback '' symbol ( that can occur after any transmitted symbol ) that facilitates the application of martingale theory to capture the essence of how feedback can allow a variable - length code to approach capacity .a compelling example from shows that for a binary symmetric channel with capacity 1/2 , the average block length required to achieve 90% of the capacity is smaller than 200 symbols . for practical systems such as hybrid arq ,the `` stop feedback '' symbol may only be feasible at certain symbol times because these systems group symbols together for transmission in packets , so that the entire packet is either transmitted or not . in , chen et al .used a code - independent rate - compatible sphere - packing ( rcsp ) analysis to quantify the latency benefits of feedback in the context of such grouped transmissions .chen et al . focused on the awgn channel and also showed that capacity can be approached with surprisingly small block lengths , similar to the results of . using the rcsp approach of chen et al .as its foundation , this paper introduces an optimization technique and uses it to explore how closely one may approach capacity with only a handful of incremental transmissions . for a fixed number of information bits and a fixed number of maximum transmissions before giving up to try again from scratch , a numerical optimization determines the block lengths of each incremental transmission to maximize the expected throughput .we consider only and show that this is sufficient to achieve more than 90% of capacity while requiring surprisingly small block lengths similar to those achieved by polyanskiy et al . andchen et al .while rcsp is an idealized scheme , it provides meaningful guidance for the selection of block lengths and the sequence of target decoding error rates , which we call the decoding error trajectory . 
a 1024-state rate - compatiblepunctured tail - biting convolutional code using the block lengths determined by our rcsp optimization technique achieves the rcsp decoding error trajectory and essentially matches the throughput and latency performance of rcsp for transmissions .our results , like those of polyanskiy et al . and chen et al . , assume that the receiver is able to recognize when it has successfully decoded . the additional overhead of , for example ,a cyclic redundancy check ( crc ) has not been included in the analysis .longer block lengths would be required to overcome this overhead penalty .the paper is organized as follows : section [ sec : analysis ] reviews the rcsp analysis .section [ sec : algo ] describes the rcsp numerical optimization used to determine transmission lengths and shows the throughput vs. latency performance achieved by using these transmission lengths for up to six rate - compatible transmissions .this performance is compared with a version of vlft scheme proposed by polyanskiy et al .section [ sec : sims ] introduces the decoding error trajectory and shows how rcsp performance can be matched by a real convolutional code using the transmission lengths identified in the previous section .section [ sec : outage ] shows how the rcsp analysis can be applied to scenarios that involve strict latency and outage probability constraints .section [ sec : conc ] concludes the paper .to review the sphere - packing analysis presented in for a memoryless awgn channel , consider a codebook of size that maps information symbols into a length- codeword with rate .the channel input and output can be written as : where is the output ( received word ) , is the codeword of the message , and z is an -dimensional i.i.d .gaussian vector .let the received snr be and assume without loss of generality that each noise sample has unit variance .the average power of received word is then . as in , the largest possible squared decoding radius assuming that the decoding spheres occupy all available volume is a bounded - distance decoder declares any message within a distance of codeword to be message , a decoding error is declared . because the sum of the squares of the gaussian noise samples obeys a chi - square distribution with degrees of freedom ,the probability of decoding error associated with decoding radius is where the are standard normal distributed random variables with zero mean and unit variance and is the cdf of a chi - square distribution with degrees of freedom .the idea of rcsp is to assume that sphere - packing performance can be achieved by each transmission in a sequence of rate - compatible transmissions .thus the idealized sphere - packing analysis is applied to a modified incremental redundancy with feedback ( mirf ) scheme as described in .mirf works as follows : information symbols are coded with an initial block length . if the receiver can not successfully decode , the transmitter will receive a nack and send extra symbols . the decoder attempts to decode again using all received symbols for the current codeword , i.e. 
, with block length .the process continues for .the decoded block length is and the code rate is .if decoding is not successful after transmissions , the decoder discards the transmissions and the process begins again with the transmitter resending the initial symbols .this scheme with = is standard arq .the squared decoding radius of the cumulative transmission is and the marginal probability of decoding error associated with decoding radius is where the are standard normal distributed random variables with zero mean and unit variance .however , this marginal probability is not what is needed .the probability of a decoding error in the transmission depends on previous error events .indeed , conditioning on previous decoding errors makes the error event more likely than the marginal distribution would suggest .the joint probability is we compute the expected number of channel uses ( i.e. , latency or average block length ) by summing the incremental transmission lengths weighted by the probability of error in the prior cumulative transmission and dividing by the probability of success by the last ( ) transmission ( as in arq ) , according to this expression does not consider delay due to decoding operations .the corresponding throughput is given by the special case of = ( when only the initial transmission of length is ever transmitted ) , mirf is arq . in this casethe expected number of channel uses given by ( [ eqn : lambda_tau ] ) can be simplified as follows ( with ) : which yields an expected throughput of if we fix the number of information bits , ( [ eqn : rt_arq ] ) becomes a quasiconcave function of the initial code rate , allowing the optimal code rate , which maximizes the throughput for a given , to be found numerically .[ fig : latvthroughput_a ] plots the maximum achievable throughput in the ( arq ) rcsp scheme as the red ( diamond markers ) curve . in , chen et al. demonstrated one specific rcsp scheme with ten transmissions that could approach capacity with low latency .specifically , the transmission lengths were fixed to and , while was varied to maximize throughput .this paper builds on the intuition of the arq case presented above .both and the number of transmissions are fixed , and a search identifies the set of transmission lengths that maximizes throughput . we seek to identify approximately how much throughput can be achieved using feedback with a small number of incremental transmissions , specifically .furthermore , we seek insight into what the transmission lengths should be and what decoding error rates allow the sequence of transmissions to be most efficient . for , identifying the transmission lengths which minimize the latency in ( [ eqn : lambda_tau ] ) is not straightforward due to the joint decoding error probabilities in ( [ eqn : pzeta_i ] ). however , the restriction to a small allows exact computation of in mathematica , avoiding the approximations of . to reflect practical constraints ,we restrict the lengths to be integers .the computational complexity of , which increases with the transmission index , forces us to limit attention to a well - chosen subset of possible transmission lengths .thus , our present results may be considered as lower bounds to what is possible with a fully exhaustive optimization .[ fig : latvthroughput_a ] shows the throughput vs. latency performance achieved by rcsp for on an awgn channel with snr 2.0 db . 
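the rcsp quantities defined above are straightforward to evaluate numerically. the sketch below reconstructs the squared decoding radius from the volume argument of the previous section (the explicit constant, r_n^2 = n (1 + snr) 2^(-2k/n), is an assumption since the expression is elided in the text), estimates the joint decoding error probabilities by monte carlo rather than exactly, and combines them into the expected block length and throughput of equations (lambda_tau) and (rt). the incremental block lengths used at the end are illustrative placeholders, not the optimized values of the table.

import numpy as np
from scipy.stats import chi2

def decoding_radius_sq(n, k, snr):
    # assumed reconstruction: 2**k spheres filling the volume of the received-power
    # sphere of squared radius n*(1+snr) gives r_n**2 = n*(1+snr)*2**(-2k/n)
    n = np.asarray(n, dtype=float)
    return n * (1.0 + snr) * 2.0 ** (-2.0 * k / n)

def marginal_error(n, k, snr):
    # P(chi-square with n degrees of freedom exceeds the squared decoding radius)
    return chi2.sf(decoding_radius_sq(n, k, snr), df=n)

def rcsp_latency_throughput(k, increments, snr, trials=20000, seed=0):
    """monte carlo estimate of the mirf latency and throughput under the rcsp
    idealization; the joint error probabilities are estimated, not exact."""
    rng = np.random.default_rng(seed)
    n_cum = np.cumsum(increments)                         # cumulative block lengths
    r_sq = decoding_radius_sq(n_cum, k, snr)
    z2 = rng.standard_normal((trials, int(n_cum[-1]))) ** 2
    fail = np.cumsum(z2, axis=1)[:, n_cum - 1] > r_sq     # failure at each decoding attempt
    joint_fail = np.concatenate(([1.0], np.cumprod(fail, axis=1).mean(axis=0)))
    latency = np.dot(increments, joint_fail[:-1]) / (1.0 - joint_fail[-1])
    return latency, k / latency

snr = 10 ** (2.0 / 10.0)                                  # awgn channel at 2.0 db
k = 64
print("marginal error at selected block lengths:",
      [(n, round(float(marginal_error(n, k, snr)), 3)) for n in (80, 96, 128)])
lat, thr = rcsp_latency_throughput(k, [80, 16, 16, 16, 16], snr)   # placeholder lengths
print("expected block length %.1f symbols, throughput %.3f bits/symbol" % (lat, thr))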
as is increased , each additionalretransmission brings the expected throughput closer to the channel capacity , though with diminishing returns .the points on each curve in fig .[ fig : latvthroughput_a ] represent values of ranging from 16 to 256 information bits .[ fig : latvthroughput_a ] shows , for example , that by allowing up to four retransmissions ( ) with , rcsp can achieve 91% of capacity with an average block length of 102 symbols .similar results are obtained for other snrs .vlft achievability results for the awgn channel based on are shown in fig .[ fig : latvthroughput_b ] for comparison .both the original vlft scheme , in which the transmission may be ended after any symbol , and a constrained version of vlft using the same block lengths and feedback structure as = rcsp are presented .the original vlft closely approaches capacity with a latency on the order of 200 symbols .rcsp is unable to match vlft because the overall rcsp transmission can only be terminated after one of the incremental transmissions completes .if vlft is constrained in the same way , its performance is initially worse than rcsp because random coding does not achieve ideal sphere packing with short block lengths . at an average latency of 200 , constrainedvlft performance becomes similar to the comparable rcsp scheme .the vlft achievability curve evaluates ( * ? ? ? * theorem 10 ) using the upper bound of ( 162 ) with i.i.d .gaussian inputs with average power equal to the power constraint . such codebooks will sometimes violate the 2 db power constraint . to address this ,the average power should be slightly reduced and codebooks violating the power constraint should be purged , which will lead to a small performance degradation .alternatively , codebooks or even codewords can be constrained to meet the power constraint with equality .further analysis of vlft codes more carefully considering the power constraint for the awgn channel will be the subject of future work .rcsp makes the rather optimistic assumption that a family of rate - compatible codes can be found that performs , at each rate , equally well as codes that pack decoding spheres so well that they use all of the available volume .a variety of well - known upper bounds on the packing density indicate that the maximum packing density decreases as the dimension increases ( e.g. , ) , making such codes difficult to find .however , we show in this section that a rate - compatible tail - biting convolutional code can indeed match the performance of rcsp , at least for .we consider two rate / convolutional codes from : a 64-state code with generator polynomial ( ,,)=( ) and a 1024-state code with ( ,,)=( ) , where the generator notation is octal .high rate codewords are created by pseudorandom rate - compatible puncturing of the rate / mother codes .we restrict our attention to tail - biting implementations of these convolutional codes because the throughput efficiency advantage is important for the relatively small block lengths we consider .simulations compare the performance of these two codes in the mirf setting for the awgn channel with snr 2 db , as shown in fig .[ fig : latvthroughput_b ] .the simulations presented here focus on the case ..optimal rcsp transmission lengths for and snr 2 db . [ cols="^,^,^,^,^,^",options="header " , ] [ table : m5_steps ] the transmission lengths used in the simulations are those identified by the rcsp optimization . table [ table : m5_steps ] shows the results of the optimization ( i.e. 
, the set of lengths found to achieve the highest throughput ) .thus our simulations used , , , , .the induced code rates of the cumulative blocks are /= , /= , /= , /= and /= .[ fig : rates_vs_k ] shows these rates as well as the rates for other values of according to the rcsp optimization for .note that for every value of the initial code rate is above the channel capacity of 0.6851 .this is the benefit of feedback : it allows the decoder to capitalize on favorable noise realizations by attempting to decode early , instead of needlessly sending additional symbols .the rcsp optimization also computes the joint decoding error probabilities of , which we call the `` decoding error trajectory '' .if we can find a rate - compatible family that achieves this decoding error trajectory , then we can match the rcsp performance .[ fig : prerrorm5k64 ] shows the = decoding error trajectories for the rcsp cases studied in figs . 1 and 2 ( shown as discrete points in fig .[ fig : prerrorm5k64 ] for each value of ) and for constrained vlft for and 64-state and 1024-state convolutional code simulations for = .the dashed line represents the marginal probability of error for a sphere - packing codebook as in which was recognized in as a tight upper bound for the joint probabilities of error given by .this tight upper bound can serve as a performance goal for practical rate - compatible code design across a wide range of block lengths .while the 64-state code is not powerful enough to match rcsp performance , the 1024-state code closely follows the rcsp trajectory for . thus thereexist practical codes , at least in some cases , that achieve the idealized performance of rcsp . indeed , fig .[ fig : latvthroughput_b ] plots the points of the two convolutional codes , demonstrating that the 1024-state code achieves 91% of capacity with an average latency of 102 symbols , almost exactly coinciding with the rcsp point for and .the convolutional code s ability to match a mythical sphere - packing code is due to maximum likelihood ( ml ) decoding , which has decoding regions that completely fill the multidimensional space ( even in high dimensions ) .these simulation results assume that the receiver is able to recognize when it has successfully decoded .this same assumption is made by the rcsp analysis , the vlft scheme of polyanskiy et al ., and the mirf scheme of chen et al .while this assumption does not undermine the essence of this demonstration of the power of feedback , its practical and theoretical implications must be reviewed carefully , especially when very short block lengths are considered .an important practical implication is that the additional overhead of a crc required to avoid undetected errors will drive real systems to somewhat longer block lengths than those presented here .this will affect the choice of error control code .an important implication is that this analysis can not be trusted if the block lengths become too small .this assumption allows block errors to become block erasures at no cost .consider the binary symmetric channel ( bsc ) : if the block length is allowed to shrink to a single bit , then this assumption turns the zero capacity bsc with transition probability into a binary erasure channel with probability , which has a capacity of instead of zero .both the practical and theoretical problems of this assumption diminish as block length grows .however , a quantitative understanding of the cost of knowing when decoding is successful and how that cost changes with block 
length is an important area for future work .mirf has an outage probability of zero because it never stops trying until a message is decoded correctly .with slight modifications , the mirf scheme and the rcsp transmission length optimization can incorporate strict constraints on latency ( so that the transmitter gives up after transmissions ) and outage probability ( which would then be nonzero ) . to handle these two new constraints ,we restrict to be less than a specified . without modifying the computations of in ( [ eqn : pzeta_i ] ), the optimization is adapted to pick the set of lengths that yields the maximum throughput s.t .when there is a decoding error after the transmission , the transmitter declares an outage event and proceeds to encode the next information bits .this scheme is suitable for delay - sensitive communications , in which data packets are not useful to the receiver after a deadline has passed .the expected number of channel uses is now given by the expected throughput is again given by ( [ eqn : rt ] ) .the purpose of this paper is to bring the information theory of feedback and the communication practice of feedback closer together . beginning with the idealized notion of rate - compatible codes with decoding spheres that completely fill the available volume, the paper eventually demonstrates a convolutional code with performance strikingly similar to the ideal rate - compatible sphere - packing ( rcsp ) codes .an optimization based on rcsp identifies the highest throughput possible for a fixed and .this optimization provides the lengths of the initial and subsequent transmissions and the sequence of decoding error probabilities or `` decoding error trajectory '' that characterize the throughput - maximizing performance .the rcsp decoding error trajectories computed in this paper are tightly bounded by the marginal error probability of sphere packing .designing a code with a similar error trajectory will thus yield comparable latency performance .rcsp predictions and simulation results agree in demonstrating that feedback permits 90% of capacity to be achieved with about 100 transmitted symbols assuming that the decoder knows when it has decoded correctly . however , the implications of this assumption for short block lengths warrant further investigation .vlft performance shows that if the transmission could be stopped at any symbol ( rather than only at the end of each incremental transmission ) capacity is closely approached with an average latency of 200 symbols , but a more careful analysis of vlft in light of the awgn power constraint is warranted .the authors would like to thank yury polyanskiy for helpful conversations regarding the vlft analysis .
recent work by polyanskiy et al . and chen et al . has excited new interest in using feedback to approach capacity with low latency . polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency . this paper uses chen s rate - compatible sphere - packing ( rcsp ) analysis to study what happens when symbols must be transmitted in packets , as with a traditional hybrid arq system , and limited to relatively few ( six or fewer ) incremental transmissions . numerical optimizations find the series of progressively growing cumulative block lengths that enable rcsp to approach capacity with the minimum possible latency . rcsp analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the awgn channel with snr of 2.0 db . the rcsp analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length . though rcsp is an idealization , an example tail - biting convolutional code matches the rcsp decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the awgn channel with snr of 2.0 db . we also show how rcsp analysis can be used in cases where packets have deadlines associated with them ( leading to an outage probability ) .
cooperation is vital for the maintenance of public goods in human societies. but according to darwin's theory of evolution, competition rather than cooperation ought to drive our actions. reconciling this theory with the fact that cooperation is widespread in human societies, as well as with the fact that it is much more common in nature than one might expect, is one of the most persistent challenges in evolutionary biology and the social sciences. past decades have seen the paradigm of punishment rise as one of the more successful strategies by means of which cooperation might be promoted. indeed, punishment is also the principal tool of institutions in human societies for maintaining cooperation and otherwise orderly behavior. however, punishment is costly, and as such it reduces the payoffs of both the defectors and of those who exercise the punishment, hence yielding an overall lower income and acting as a drain on social welfare. thus, understanding the emergence of costly punishment is crucial for the evolution of cooperation. while recent research confirms that punishment is often motivated by negative personal emotions such as anger or disgust, raihani and mcauliffe have also shown that the decision to punish is often motivated by an aversion to inequity rather than by a desire for reciprocity. although prosocial punishment is widespread in nature, it is unlikely that cooperators are willing to commit permanently to punishing wrongdoers. for that, the action is simply too costly, and hence some form of abstinence is likely, also to avoid unwanted retaliation. several research groups have recently investigated these and related upsides and downsides of punishment. for example, it was shown that cooperators punish defectors selectively depending on their current personal emotions, even if the number of defectors is large. more often than not, however, whether or not to punish depends on the whim of the moment and is thus a fairly random event. motivated by these observations, we have recently shown that sharing the effort of punishment in a probabilistic manner can significantly lower the vulnerability of costly punishment and in fact help stabilize costly altruistic strategies.
herewe drop the assumption that cooperators who do punish defectors do so uniformly at random .instead , we account for the diversity in punishment , taking into account the fact that some individuals are more likely to punish , while others punish only rarely .more specifically , we introduce different threshold levels for punishment , which ultimately introduces different classes of cooperators that punish defectors .the assumption of diverse players is not just a realistic hypothesis , but in general it is firmly established that it also has a decisive impact on the evolution of public cooperation .motivated by this fact , we therefore study a spatial public goods game with defectors and different types of punishing cooperators .while previously we have demonstrated the importance of randomly shared punishment , we here approach a more realistic scenario by assuming that each type of cooperators will punish with a different probability .our goal is to determine whether a specific class of punishing cooperators will be favored by natural selection , or whether despite the competition among them synergistic effects will emerge .as we will show , the evolution is governed by a counterintuitive selection mechanism , depending further on the synergistic effects of cooperative behavior . however , before presenting the main results in detail , we proceed by a more accurate description of the studied spatial public goods game with different punishing strategies .we consider a population of individuals who play the public goods game on a square lattice of size with periodic boundary conditions .we assume that the game is contested between classes of cooperators ( , , , ) and defectors ( ) .independently of the class a cooperator belongs to , it contributes an amount to the common pool , while defectors contribute nothing .after the sum of all contributions in the group is multiplied by the enhancement factor , the resulting amount is shared equally among all group members .moreover , cooperators with strategy ( ) choose to punish defectors with a probability if the latter are present .as a result , each defector in the group is punished with a fine , while all the cooperators who participated in the punishment equally shared the associated costs .in particular , each punishing cooperator bears the cost , where and are the number of cooperators and punishers in the group , respectively . we emphasize that a cooperator who decides to punish bears the same cost independently of the class it belongs to .thus , here the strategy only determines how frequently a cooperator is willing to punish defectors .nevertheless , it is worth pointing out that never punish and thus correspond to traditional second - order free - riders because they enjoy the benefits of punishment without contributing to it . on the other extreme ,cooperators belonging to the class punish always when defectors are present in the group . since each player on site with von neumann neighborhood is a member of five overlapping groups of size , in each generation it participates in five public goods games and obtains its total payoff , where is the payoff gained from group .subsequently , a player , having strategy , adopts the strategy of a randomly chosen neighbor with the probability },\ ] ] where denotes the amplitude of noise . 
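before the parameter values are fixed in the next paragraph, a minimal python sketch of one payoff evaluation and one imitation event of this spatial public goods game may help make the model concrete. the fine and cost normalization, the numerical values of the contribution, fine, cost and synergy factor, and the six punishment probabilities are assumptions chosen only for illustration, because the exact expressions are not legible in the extracted text; the imitation step uses the standard fermi rule quoted above.

```python
import math
import random

L = 100                # linear lattice size (illustrative; the study varied it)
R_SYNERGY = 3.5        # multiplication factor r (assumed value for this sketch)
COST = 1.0             # cooperator contribution c (assumed normalization)
FINE = 0.5             # fine imposed on each defector (assumed form)
GAMMA = 0.5            # punishment cost scale (assumed form)
K_NOISE = 0.5          # noise amplitude K of the fermi rule
PUNISH_PROB = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # classes c1..c6; c1 never punishes

# strategy encoding: -1 = defector, i >= 0 = cooperator of class i;
# roughly half the sites start as defectors, as in the text
lattice = [[(random.randrange(6) if random.random() < 0.5 else -1)
            for _ in range(L)] for _ in range(L)]


def group_members(x, y):
    """focal site plus its von neumann neighbours (periodic boundaries)."""
    return [(x, y), ((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]


def group_payoffs(members):
    """payoffs of one five-player public goods game with probabilistic
    punishment.  the normalisation is an assumption: each defector loses FINE,
    and the punishers split a cost GAMMA per defector."""
    payoff = {m: 0.0 for m in members}
    cooperators = [m for m in members if lattice[m[0]][m[1]] >= 0]
    defectors = [m for m in members if lattice[m[0]][m[1]] == -1]
    share = R_SYNERGY * COST * len(cooperators) / len(members)
    for m in members:
        payoff[m] += share - (COST if m in cooperators else 0.0)
    if defectors:
        punishers = [m for m in cooperators
                     if random.random() < PUNISH_PROB[lattice[m[0]][m[1]]]]
        if punishers:
            for d in defectors:
                payoff[d] -= FINE
            cost_each = GAMMA * len(defectors) / len(punishers)
            for p in punishers:
                payoff[p] -= cost_each
    return payoff


def total_payoff(x, y):
    """accumulate the focal player's payoff from the five overlapping groups."""
    return sum(group_payoffs(group_members(cx, cy))[(x, y)]
               for cx, cy in group_members(x, y))


def imitation_step():
    """one elementary event: a random player compares payoffs with a random
    neighbour and adopts its strategy with the fermi probability."""
    x, y = random.randrange(L), random.randrange(L)
    nx, ny = random.choice(group_members(x, y)[1:])
    px, pn = total_payoff(x, y), total_payoff(nx, ny)
    if random.random() < 1.0 / (1.0 + math.exp((px - pn) / K_NOISE)):
        lattice[x][y] = lattice[nx][ny]
```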
without loosing generality and to ensure continuity of this line of research we set , meaning that it is very likely that the better performing players will pass their strategy to their neighbors , yet it is also possible that players will occasionally learn from a less successful neighbor . to conclude the description of this public good game, we would like to emphasize that different classes represent different strategies , as our goal is to explore how the willingness to punish evolves at specific parameter values .the model is studied by means of monte carlo simulations .initially , defectors randomly occupy half of the square lattice , and each type of cooperators randomly of the rest of the lattice . during one full monte carlo step ( mcs ) , all individuals in the population receive a chance once on average to adopt another strategy . depending on the proximity to phase transition points and the typical size of emerging spatial patterns ,the linear system size was varied from to and the relaxation time was varied from to mcs to ensure proper statistical accuracy .the reported fractions of competing strategies were determined in the stationary state when their average values became time - independent .alternatively , we have averaged the outcomes over independent runs when the system terminated into a uniform absorbing state .and the probability to punish , as obtained for a low multiplication factor in the original model proposed in , where a uniform probability to punish was assumed for all cooperators .note that both and have a non - monotonous impact on the fraction of cooperators.,width=264 ] for the sake of comparison , we first present the fraction of cooperators in dependence on the punishment fine and the probability to punish at a low value , as obtained in the original probabilistic punishment model , where cooperators punish uniformly at random .figure [ fig1 ] illustrates that the fraction of cooperators first increases , reaches its maximum , but then again decreases , as the values of and increase along the diagonal on the plane .increasing one of these parameters , while the other is kept constant , returns to the same observation .both and thus have a non - monotonous impact on the fraction of cooperators , which is closely related with the fact that characterizes not only the level of punishment but also its cost .accordingly , too high values of involve too high costs stemming from the act of punishing .it is worth pointing out that , which is used in fig .[ fig1 ] , is a relatively low value of the multiplication factor at which the non - monotonous dependence can still be observed . in comparison with the results obtained for larger values of as used in ref . , however , the current plot features a significantly narrower region where full cooperation is possible when is sufficiently large . similarly , there is a limited region of intermediate values where cooperators that punish severely can beat defectors . based on these observations , in the present modelwe thus explore if there is an evolutionary selection among different punishing strategies as they compete against the defectors simultaneously , or if there is indeed cooperation in the common goal to deter defectors . when they start fighting with defectors simultaneously .panel ( c ) shows an enlarged part of panel ( a ) at low values , when cooperation becomes dominant over defection . 
to present the overall level of cooperation in the population, the cumulative fraction of strategies is also shown ( denoted by ) . for comparison , in panel ( b ) we have also plotted the resulting fraction of cooperator classes when they fight against defectors individually . as in panel ( c ) , panel ( d ) shows an enlarged part of panel ( b ) at a specific interval of .the multiplication factor in all panels is .,width=321 ] for an intuitive overview , we set and investigate how the six types of punishing strategies compete and potentially cooperate with each other in the presence of defectors .the general conclusion , however , is robust and remains valid if we use other values of .using the same as in fig .[ fig1 ] , the panels of fig .[ fig2 ] summarize our main findings .the first panel shows the fractions of strategies in the final state in dependence of the punishment fine when different punishing strategies fight against defectors simultaneously .for clarity , we have also plotted the accumulated fraction of punishing strategies .in contrast to the uniform punishing model , we can see that the total fraction of cooperators should increase monotonously with increasing . as fig .[ fig2](a ) illustrates , cooperators can survive when , and become dominant over ( see also the enlarged part in fig .[ fig2](c ) ) .we should stress , however , that not all types of cooperators can survive at equilibrium , even if cooperators take over the whole population .it turned out that there are some `` weak '' classes of cooperators who go extinct before defectors die out , while other classes of cooperators survive . for a more in - depth explanation, the vitality of punishing classes can be estimated if we let them fight against defectors individually .the outcomes of this scenario are summarized in fig .[ fig2](b ) .results presented in this panel suggest that there are punishing classes who can dominate for all high values , while others become vulnerable as we increase .more interestingly , however , there are mildly punishing strategies who can survive only due to the support of the more successful punishing strategies .for example , for classes and can outperform defectors , while disappear when they fight against defectors individually [ fig .[ fig2](b ) and ( c ) ] .but when all punishing strategies are on the stage then players can survive as well .this effect is more spectacular for the second - order free riding class , who would die out immediately at such a low synergy factor if they face defectors alone .but now , especially at high values , their ratio becomes considerable .this indicates that some less viable classes of cooperators can survive because of the support of more viable punishing strategies via an evolutionary selection mechanism which has a biased impact on the evolution of otherwise competing strategies . to demonstrate the underlying mechanism behind the above observations , we present a series of snapshots of strategy evolutions starting from different prepared initial statesthe comparative analysis is plotted in fig .[ fig3 ] , where all runs were obtained for and . 
in the first row, we demonstrate how the class of punishing strategy can prevail over defectors. initially, only a tiny portion of cooperators is launched in the sea of defectors [the fraction of is , see panel (a)]. still, cooperators can expand gradually and invade the whole available territory [shown in panels (a2) to (a4)]. the second row, which was obtained at the same parameter values, clearly demonstrates the vulnerability of the class against defectors. despite the fact that they occupy the majority of the available room at the beginning, shown in panel (b1), they are gradually crowded out by defector players. the final state, shown in panel (b4), highlights that such a rare punishment activity, represented by class , is ineffective against defectors at the applied synergy factor. the third row, where all previously mentioned strategies are present at the beginning, illustrates a completely different scenario. here we start from a balanced initial state where half of the lattice sites are occupied by and strategies, while the other half is filled by defectors. as panels (c1) to (c4) illustrate, defectors gradually go extinct while the ``weak'' cooperators survive and occupy almost half of the available territory in the final state. we note that there is a neutral drift between punishing strategies in the absence of defectors, which results in a homogeneous state where the probability of arriving at one of the possible final destinations is proportional to the initial portion of a specific class at the time the defectors die out. this evolutionary outcome indicates that although players are, as an isolated strategy, weak against defector players, they can nevertheless survive because of the assistance of the strong strategy, even if the initial fraction of the latter is modest. in the fourth row, however, when we arrange a similar setup but replace the weak players with the likewise weak players, the final state is always the full state. here the presence of strong players does not provide relevant support to players, who therefore die out, and subsequently the system returns to the scenario illustrated in panels (a1) to (a4). (caption of fig. [fig3], obtained for and :) the first row shows the case when just a few cooperators are initially present among defectors. it can be observed that even under such unfavorable initial conditions the strategy can successfully outperform defectors. the second row features a similar experiment with the strategy, which fails to survive among defectors even though the latter are initially in the minority. the third row illustrates cooperation among strategies and , which together dominate the whole population even though alone would fail under the same conditions (see second row). we note that a neutral drift starts when defectors die out, as explained in the main text. the fourth row demonstrates, however, that the cooperation among different punishing strategies illustrated in the third row is rather fragile. if initially the strategy is replaced by the strategy , then the latter simply dies out and subsequently the whole evolution becomes identical to the one shown in the first row, where strategy alone outperforms all defectors. for clarity, the system size employed here is small, with just players. the key point that explains the significantly different trajectories of mildly punishing strategies is the difference in invasion velocities between the competing strategies.
to demonstrate the importance of invasion velocities, we monitor how the fraction of strategies evolves in time when we launch the system from a two - strategy state where both strategies form compact domains .following the previously applied approach illustrated in fig .[ fig3 ] , we compare the strategy invasions between , , and between strategies .the comparison of these different cases is plotted in fig .as expected , both and loose the lonely fight against defectors , while will eventually crowd out defectors .note that there is only a very slight increase during the early stages of the evolutionary process that can be observed for all cases , independently of the final outcome .this is because straight initial interfaces can provide a strong temporary phalanx for every punishing strategy .nevertheless , when this interface becomes irregular due to invasions the individual weakness of and strategies reveals itself . still , there is a significant difference between their trajectories .namely , strategy is able to resist for a comparatively long time , which gives strategy enough time to crowd out defectors .on the other hand , strategy is a too easy prey for defectors , which is why they die out faster than the strategy is able to eliminate all defectors . ultimately thus , strategy can benefit from cooperation with strategy , while strategy is unable to do the same . , and , against defectors in dependence on time .note that initially only one cooperative strategy and defectors are present , using the same initial conditions as illustrated in fig .[ fig3 ] . positive value of indicates the invasion of cooperator strategy while its negative value suggests invasion to the reversed direction .note that while both and strategies ultimately loose their battle , the latter is able to prevail significantly longer .this enables an effective help of strategy when they compete against defectors together , as illustrated in panels ( c1 ) to ( c4 ) in fig .[ fig3].,width=321 ] in the remainder of this work , we focus on the parameter region where cooperators are able to coexist with defectors without applying punishment .namely , if the synergy factor exceeds , then pure cooperators ( cooperators that do not punish ) can survive permanently alongside defectors due to network reciprocity . evidently , the presence of punishers can of course still elevate the overall cooperation level and defectors can be effectively crowded out from the population . herethe main question is thus how the different punishing strategies will share the available space .the results are summarized in the left panel of fig .[ r4 ] , as obtained for the representative value of .it can be observed that , when all the different types of punishing strategies fight against defectors simultaneously , then cooperators can dominate the whole population above a threshold value .however , to evaluate these final outcomes adequately , we need to know the individual relations between each particular cooperative strategy and defectors on a strategy - versus - strategy basis . 
therefore , as for the previously presented low case in fig .[ fig2 ] , in the right panel of fig .[ r4 ] we also show the stationary fractions of different cooperators classes when they compete against defectors individually .results presented in panel ( b ) highlight that too large values could be detrimental for the , and the strategy .this is the so - called `` punish , but not too hard '' effect , where too large costs of sanctioning do more damage to those that execute punishment than the imposed fines do damage to the defectors . a direct comparison with the results presented in panel ( a ) demonstrates clearly that we can observe a similar cooperation among punishing strategies as we have reported before for the low case , in particular because all the mentioned mildly punishing strategies can survive even at a high value . on the other hand, a conceptually different mechanism can be observed in the small region , which is reminiscent of what one would actually expect from a selection process .more specifically , panel ( a ) of fig .[ r4 ] shows that at only strategy survives and coexists with while all the other punishing strategies die out .the latter players are those , who could survive individually with defectors but should die out because of the presence of a more effective ( ) strategy .interestingly , the mentioned selection mechanism can work most efficiently when the leading strategy is less efficient against defectors .right panel of fig .[ r4 ] shows that would be unable to crowd out strategy at these values , while a -free state could be obtained at higher value . in the latter case ,when is too powerful , then this strategy beats defectors too fast which allows other punishing strategies to survive : this is similar to what we have observed in the third row of fig .but when is less effective at smaller values then the presence of surviving players enables players to play out their superior efficiency if comparing to other punishing strategies .thus , depending on the key parameter values , most prominently the multiplication factor and the punishment fine , the different punishing strategies can either cooperate with each other or compete against each other in the spatial public goods game .we have introduced and studied multiple types of punishing strategies that sanction defectors with different probabilities .the fundamental question that we have addressed is whether there exists a selection mechanism which would result in an unambiguous victor when these strategies compete against defectors .we have shown that the answer to this question depends sensitively on the external conditions , in particular on the value of the multiplication parameter .if the public goods game is demanding due to a low value of , then the pure payoff - driven individual selection provides a helping hand to those punishing strategies that would be unable to survive in an individual competition against defectors .in particular , we have demonstrated that the failure or success of a specific punishing strategy could depend sensitively on the relation of invasion velocities between specific punishing strategies and the defectors .accordingly , if the loosing punishing strategy can delay the complete victory of defectors sufficiently long , then a more successful punishing strategy has a chance to wipe out defectors first .this is an example of the cooperation between different punishing strategies . 
on the other hand, in a less demanding environment, characterized by a higher multiplication factor, a different kind of relation can emerge. while the previously summarized cooperation between punishing strategies is still possible, there also exist parameter regions where competition is the dominant mode, and indeed there is always a single and unambiguous victor among the different classes of punishers. interestingly, we have shown that this happens when the fittest punishing strategy is not effective enough to beat defectors completely. instead, by carefully taming the defectors, it helps to reveal the advantages of other punishing strategies. as we have shown, the key point here is again the relation between the invasion velocities. namely, too intensive an invasion decimates defectors too fast, and the advantage of specific punishing classes then remains forever hidden. therefore, contrary to intuitive expectation, the social diversity of cooperators in terms of their relations with defectors could be the result of an effective selection mechanism. we hope that this research will contribute meaningfully to our understanding of the emergence of diversity among competing strategies, as well as to their role in determining the ultimate fate of the population. this work was supported by the fundamental research funds of the central universities of china, the hungarian national research fund (grant k-101490), the slovenian research agency (grant p5-0027), and by the deanship of scientific research, king abdulaziz university (grant 76-130-35-hici).
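as a small supplement to the methods described above, the following driver (which assumes the toy model sketched earlier has been saved as `pgg_sketch.py`, a hypothetical file name) performs full monte carlo steps, in which every player is updated once on average, and records the strategy fractions whose time evolution underlies the invasion-velocity comparison.

```python
from collections import Counter

import pgg_sketch as pgg  # hypothetical module holding the toy model above


def strategy_fractions():
    """fraction of each strategy (-1 = defector, 0..5 = cooperator classes)."""
    counts = Counter(s for row in pgg.lattice for s in row)
    n = pgg.L * pgg.L
    return {s: counts.get(s, 0) / n for s in [-1] + list(range(6))}


def run(mcs_steps=200):
    """one full monte carlo step = L*L elementary imitation events, so every
    player is updated once on average.  reduce L in pgg_sketch for quick runs."""
    history = []
    for _ in range(mcs_steps):
        for _ in range(pgg.L * pgg.L):
            pgg.imitation_step()
        history.append(strategy_fractions())
    return history


if __name__ == "__main__":
    for t, frac in enumerate(run(20)):
        print(t, {k: round(v, 3) for k, v in frac.items()})
```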
inspired by the fact that people have diverse propensities to punish wrongdoers, we study a spatial public goods game with defectors and different types of punishing cooperators. during the game, cooperators punish defectors with class-specific probabilities and subsequently share the associated costs of sanctioning. we show that in the presence of different punishing cooperators the highest level of public cooperation is always attainable through a selection mechanism. interestingly, the selection does not necessarily favor the evolution of punishers who would be able to prevail against the defectors on their own, nor does it always hinder the evolution of punishers who would be unable to prevail on their own. instead, the evolutionary success of punishing strategies depends sensitively on their invasion velocities, which in turn reveals fascinating examples of both competition and cooperation among them. furthermore, we show that under favorable conditions, when punishment is not strictly necessary for the maintenance of public cooperation, the less aggressive, mild form of sanctioning is the sole victor of the selection process. our work reveals that natural strategy selection can not only promote, but sometimes also hinder, competition among prosocial strategies.
the quantum information topics are the subject of active research .the impressive progress have been reached in the quantum cryptography and study of quantum algorithms . due to an impact on security in quantum cryptography ,the quantum cloning is still a significant topic . at the same time, a cloning itself is hardly sufficient for an eavesdropping .no - copying results have been established for pure states as well as for mixed states . in view of such evidences ,the question arose how well quantum cloning machines could work . in effect , the basic importance of the no - cloning theorem is expressed much better in more detailed results , which also give explicit bounds on an amount of the noise .after the seminal work by buek and hillery , many approaches to approximate quantum cloning have been developed . in view of existing reviews , we cite only the literature that is directly connected to our results .an approximate cloning of two prescribed pure states was first considered in ref .this kind of cloning operation is usually referred to as _ state - dependent cloning _ . in general ,various types of state - dependent cloners may be needed with respect to the question of interest .errors inevitably occur already in a cloning of two nonorthogonal states .how close to perfection can a cloning be ?of course , any explicit answer must utilize some optimality criterion. we will refer criterion used in ref . to as the _ absolute error _ .chefles and barnett derived the least upper bound on the global fidelity for cloning of two pure states with arbitrary prior probabilities . the quantum circuit that reaches this upper bound was also constructed .the global fidelity of cloning of several equiprobable pure states was examined in ref . .although cloning problems were mostly analyzed with respect to the fidelity criteria , other measures of closeness of quantum states are relevant .for example , the partial quantum cloning is easier to analyze with respect to the squared hilbert - schmidt distance .one of criteria , _ relative error _ , has been shown to be useful within the b92 protocol emerged in ref . . deriving bounds on the relative errorwas based on the spherical triangle inequality and the notion of the angle sometimes called the _ bures length _ . using this new method ,a cloning of two equiprobable mixed states was studied with respect to the global fidelity .the results of ref . were partially extended to mixed - state cloning . in a traditional approach, the ancilla does not contain _ a priori _ information of state to be cloned just now .a more general case is the scope of the stronger no - cloning theorem .namely , a perfect cloning is achievable , if and only if the full information of the clone has already been provided in the ancilla state alone . in ref . we examined a cloning of finite set of states when the ancilla contains a partial information of the input state .so , the previous result of ref . was extended to both the mixed states and _ a priori _ information . in this paper, we study the relative error of cloning of several mixed states , having arbitrary prior probabilities . _a priori _ information in the ancilla is also assumed . in section ii , the relative error criterion introduced in refs . is extended to the general cloning scenario .we derive the lower bounds on the relative error for cloning of two - state set ( see section iii ) and multi - state set ( see section iv ) . 
in sectionv , the relative error is compared with other optimality criteria .we also build the quantum circuit for cloning of two pure states ( see section vi ) .this circuit reaches the lower bound on the relative error for arbitrary prior probabilities and _ a priori _ knowledge about the input .section vii concludes the paper .the main problem posed formally is this .we have indistinguishable -level systems that which are all prepared in the same state from the known set of density operators on the space .these systems form the register .its initial state is a density operator on the input hilbert space .the prior probabilities of states obey the normalization condition .we aim to get a larger number of copies of the given originals by means of the ancilla whose initial state is according to the input . herewe mean a system composed of extra register and environment .the extra register contains additional -level systems , each is to receive the clone of .if we include an environment space then any deterministic physical operation may be expressed as a unitary evolution .thus , the final state of two registers is the partial trace over environment space \ . \label{bd2}\ ] ] the output is a density operator on the output hilbert space .the actual output must be compared with the ideal output .many measures of distinguishability between mixed quantum states are based on the fidelity .we shall employ the angles and the sine metric .let denote a unique positive square root of .the fidelity between the two density operators and is equal to . in terms of this measure ,the angle ] .we shall now extend the notion of relative error for the set with , when the number of different pairs is equal to .the probability of taking the pair is equal to where .we clearly have and for the set of equiprobable states .to each pair assign the quantity which takes into account that , perhaps , .it is natural to put the weighted average of the quantities ( [ aveid ] ) . *definition 1 . * _ the relative error of cloning of the set is defined by _let the prior probability be value of order for all the states except and .that is , we take for , whence , for the rest pairs .the expression ( [ bd7 ] ) for relative error is simply reduced to . in the same manner ,we can find , when and probabilities except for the states solely .we are interested in a nontrivial lower bound on the relative error ( [ bd7 ] ) .our approach to obtaining the limits utilizes triangle inequalities .following the method , we shall derive the angle relation from which bound on the relative error is simply obtained .it is handy to introduce the angle ] .hence we obtain . due to the property ( [ dp3012 ] ) of distinguishability transfer gate , we have in the third stage , the label in ( [ gdll ] ) runs from to .so , the gate acts on the qubits 1 and 2 , the gate acts on the qubits 2 and 3 , and so on .the total action is described by where the gates are put from right to left with increasing . in ( [ stag3 ] ), the accumulated distinguishability is distributed among the qubits of interest . on fig .1 , the four gates , , and of the third stage are grouped in the right dash box . using the linearity, we see that too . due to , that is actually correct . specifying concrete values of and and herewith the single - qubit gate in eq .( [ datet ] ) , we can optimize either the relative error or the global fidelity . in each case, we superpose the onto the .then after the second stage the qubits of interest lie in the states . 
for the optimality with respect to the relative error , we demand that , whence we get and , from ( [ tildd3 ] ) .the angle between and is equal to , the angle between and is equal to . because unitary transformations preserve angles , the angle between and is equal to within the third stage , the state maps to . by definition , the value is angle between and . since and , we find the needed value .thus , the inequality ( [ theor3 ] ) is saturated too , and the built scheme is really optimal with respect to the relative error .note that and are found as and .but the described geometrical picture is quite sufficient for all the purposes . in the same manner ,the optimization of cloning with respect to the global fidelity would be considered . as result , the generalization of the deterministic cloner of ref . to prior ancillary information can be obtained .we have analyzed a new optimality criterion for the state - dependent cloning of several states with arbitrary prior probabilities and an ancillary information .the notion of the relative error has been extended to the general cloning scenario .the lower bounds on the relative error have been obtained for both the two - state and multi - state cases .the attainability of the derived bounds has been discussed .the quantum circuit for optimal cloning of two pure states with respect to the relative error has been built .our approach is based on the simple geometrical description , which generally clarifies origins of a bound for one or another figure of merit . in principle , the described scheme allows to develop cloning circuit that is optimal with respect to any non - local figure of merit . the scenario with an _ a priori _ information in the ancilla was inspired by the stronger no - cloning theorem .the obtained conclusions on a possible merit of the cloning contribute to this subject .unequal prior probabilities of inputs are usual in communication systems .the examination of mixed - state cloning is needed because all the real devices are inevitably exposed to noise .analysis with respect to the relative error may have potential applications to the problem of eavesdropping in quantum cryptography .let us consider the function , where positive and obey .let \, ] , whence . by calculus, we obtain the extreme value for ] . as is well - known , one- and two - qubit gates are sufficient to implement universal computation . in the context of cloning , the writers of ref . note that only one type of pair - wise interaction is needed .the _ distiguishability transfer gate _ is described by where by the unitarity .it follows from eqs .( [ dp1230 ] ) and ( [ dp3012 ] ) that the operation is hermitian .the action of distiguishability transfer gate on two - qubit register is shown on figure 2 .the corresponding circuit of elements and one - qubit operations is given in ref .
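since the bounds discussed above are phrased in terms of the fidelity, the bures length (angle) and the sine metric, a small numerical sketch of these quantities is given below. it assumes the square-root convention f = tr sqrt( sqrt(rho) sigma sqrt(rho) ) and the angle delta = arccos f, which is how these measures appear to be defined in the text; the example states are illustrative only.

```python
import numpy as np


def sqrtm_psd(a):
    """matrix square root of a positive semidefinite hermitian matrix via its
    eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.conj().T


def fidelity(rho, sigma):
    """fidelity f(rho, sigma) = tr sqrt( sqrt(rho) sigma sqrt(rho) ),
    in the square-root convention assumed here."""
    s = sqrtm_psd(rho)
    return float(np.real(np.trace(sqrtm_psd(s @ sigma @ s))))


def bures_angle(rho, sigma):
    """angle delta = arccos f, the quantity called the bures length above."""
    return float(np.arccos(np.clip(fidelity(rho, sigma), -1.0, 1.0)))


def sine_distance(rho, sigma):
    """sine metric sin(delta) = sqrt(1 - f^2), used as the error measure."""
    f = np.clip(fidelity(rho, sigma), 0.0, 1.0)
    return float(np.sqrt(1.0 - f ** 2))


def dm(psi):
    """density matrix of a pure state vector."""
    psi = np.asarray(psi, dtype=complex).reshape(-1, 1)
    psi = psi / np.linalg.norm(psi)
    return psi @ psi.conj().T


if __name__ == "__main__":
    # two nonorthogonal pure qubit states, a b92-like illustrative example
    theta = np.pi / 8
    rho0 = dm([1.0, 0.0])
    rho1 = dm([np.cos(theta), np.sin(theta)])
    print("fidelity     :", fidelity(rho0, rho1))       # ~cos(theta)
    print("bures angle  :", bures_angle(rho0, rho1))    # ~theta
    print("sine distance:", sine_distance(rho0, rho1))  # ~sin(theta)
```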
the relative error of cloning of quantum states with arbitrary prior probabilities is considered. it is assumed that the ancilla may contain some _ a priori _ information about the input state to be cloned. the lower bound on the relative error for the general cloning scenario is derived. both the two-state and the multi-state cases are analyzed in detail. this figure of merit is compared with other optimality criteria. a quantum circuit for optimal cloning of a pair of pure states is constructed.
several recent efforts have been made to understand the underlying dynamics of electoral systems and opinion formation, from the contrarian effect to the so-called small-world behavior. in the brazilian elections, for instance, a power law was found for the proportional vote. however, establishing a proper description of an electoral process is a hard task, since many factors and interactions appear and several aspects of them must be studied. the statistical characterization of actual processes is an important issue as well, mainly with the increasing availability of vote data. in the present work we combine an analysis of the corporate vote with a study of the statistical properties of the federal mexican elections of 2000 (e-2000), 2003 (e-2003), and 2006 (e-2006). since their distributions are smooth, the existence of an analytical distribution that describes them, and of a model which explains them, are very tempting issues. we were successful in the first topic, but the answers to the second remain open. we find a remarkably good fit of the properly unfolded distribution of votes with a family of distributions obtained in the context of the spectral statistics of complex quantum systems, the so-called daisy models. the process presented here is different from those that appear in, for instance, , since the vote decision is made because the voter belongs to a corporate organization. in the electoral processes referred to, two new features appeared: i) the party that had ruled for around years became the opposition, and ii) the vote data are public and in an electronic format. the latter fact allows an extensive statistical analysis, while the former gave the opportunity to analyze the vote distribution of the hard-core or corporate voters. we shall denote this party as p2, according to the position in which it appears in the database. here we present the vote distribution of this party during the 2000, 2003, and 2006 elections for the presidency and for seats in both chambers. this distribution consists of the histogram of the number of votes per cabin obtained for each party, i.e., in how many cabins there is vote, votes, and so on. the crude data histograms are presented in the next section, and the unfolding procedure together with the theoretical distribution appear in section [unfolding], followed by some remarks and conclusions.
with those remarks , it is clear that the histograms are statistics .the analysis of the link with the geo - economic regions is beyond of the present work but it is of interest . in fig .[ fig:1](a ) and [ fig:1](c ) the histograms for presidential ( red ) , deputies ( black ) and senators ( blue ) for e-2000 ( upper panel ) and e-2006 ( lower panel ) are presented and the intermediate elections for the low chamber in e-2003 ( fig .[ fig:1](c ) in violet ) .we present the crude data with no average of any sort .other parameters of the distributions are presented in table [ table:1 ] being the total number of cabins considered in each process the following : for e-2000 , for e-2003 and for e-2006 .the existence of a smooth distribution that fits the data is a very tempting issue and is the matter of the rest of the present work . .[ cols="<,>,>,>",options="header " , ]a direct comparison with any probabilistic distribution requires of proper normalization and unfolding of the signal , i.e. separate the secular part from the fluctuating one . to consider this procedure is important since in many cases considering the relative variable is not enough because the average could not be constant through the whole set of data . in general , the unfolding procedure is a very delicate task . in the present case no _ a priori _ density can be defined , since it is not clear if the alphabetical order in which the data basis is ordered corresponds to the dynamics of the system . to test the fitting ,two sorts were considered , the original order and a randomly sorted sequence of the votes . in both casesthe histograms are similar but the last one gives much more stable results after the unfolding procedure , as we shall discuss below . in order to fit the experimental data with a probability distribution we treat the number of votes as if they were differences of energies in a quantum spectrum . as in the case of energy levels, we consider the spectrum, , formed by levels where is the number of votes in the cabin and we define .it is costumary to consider the integrated spectral function or integrated density , which counts the number of levels with value equal or less that . stands for the heaveside unitary step function .the integrated density is decomposed into a secular and a fluctuating part .the former part is given by the integral of the correlation function of one point ( see ref . for explanation ) .the sequence is mapped onto the numbers , with .the new variable is the unfolded one which has a constant density and the statistical analysis is performed on it . in the case of quantum systems is estimated applying semiclassical rules .the first term of its expansion is called , in the literature , the thomas - fermi estimate or , the weyl term in the case of billiards . in practical situationsthis function is evaluated via polynomial fitting .the last procedure described was done in the present case since it is the standard in many fields of physics and it is of general applicability .the specific statistic that we analyze corresponds to the nearest neighbor spacing , , defined as and is the unfolded version of the number of votes defined in eq .( [ intvote ] ) .the results for the unfolded votes are shown in figs .[ fig:2 ] and [ fig:3 ] for the deputies , president and senators in 2000 and 2006 elections . 
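a compact python sketch of the unfolding procedure, of the nearest-neighbour spacings, and of the daisy-model distribution used for comparison in the following paragraphs is given below. the synthetic `votes per cabin` array, the polynomial degree of the secular fit, and the binning are placeholders chosen only for illustration; the actual analysis uses the official per-cabin records described above.

```python
import numpy as np
from math import gamma


def unfold(levels, poly_degree=7):
    """map an ordered 'spectrum' onto unit mean spacing by evaluating a smooth
    polynomial fit of the integrated density (the secular part) at each level,
    as described in the text."""
    levels = np.sort(np.asarray(levels, dtype=float))
    counts = np.arange(1, len(levels) + 1)          # integrated density n(x)
    x = (levels - levels.mean()) / levels.std()     # rescale for conditioning
    coeffs = np.polyfit(x, counts, poly_degree)     # smooth secular part
    return np.polyval(coeffs, x)


def nearest_neighbor_spacings(votes):
    """build the spectrum x_i = x_{i-1} + v_i from the per-cabin vote counts,
    unfold it, and return the nearest-neighbour spacings s_i."""
    spectrum = np.cumsum(np.asarray(votes, dtype=float))
    xi = unfold(spectrum)
    return np.diff(xi)


def daisy_pdf(s, r, n=1):
    """n-th neighbour spacing distribution of the rank-r daisy model,
    p(s) = (r+1)^((r+1)n) / gamma((r+1)n) * s^((r+1)n-1) * exp(-(r+1)s)."""
    a = (r + 1.0) * n
    return (r + 1.0) ** a / gamma(a) * s ** (a - 1.0) * np.exp(-(r + 1.0) * s)


def daisy_sample(r, size=10_000, rng=None):
    """generate a rank-r daisy sequence by keeping every (r+1)-th level of a
    poisson spectrum (independent exponential spacings), rescaled to unit mean
    spacing."""
    rng = np.random.default_rng(rng)
    poisson_spacings = rng.exponential(1.0, size * (r + 1))
    levels = np.cumsum(poisson_spacings)[r::r + 1]
    s = np.diff(levels)
    return s / s.mean()


if __name__ == "__main__":
    # synthetic 'votes per cabin' stand-in for the real p2 records
    rng = np.random.default_rng(0)
    votes = rng.gamma(shape=3.0, scale=40.0, size=5000)
    s = nearest_neighbor_spacings(votes)
    hist, edges = np.histogram(s, bins=40, range=(0, 4), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    for r in (1, 2, 3):
        chi2 = np.sum((hist - daisy_pdf(centers, r)) ** 2)
        print(f"rank {r}: squared deviation {chi2:.3f}")
```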
for the sake of clarity we drop the e-2003 analysis. the theoretical comparison is made with the daisy model of rank . this model is constructed by retaining each level from a sequence of levels with a poisson distribution. the resulting sequence has the -nearest-neighbor distribution of the poisson case, but it must be renormalized in order to obtain the nearest-neighbor distribution of the daisy model of rank . the resulting -th neighbor spacing distribution is \[ p^{(n)}_r(s) = \frac{(r+1)^{(r+1)n}}{\Gamma((r+1)n)}\, s^{(r+1)n-1}\exp[-(r+1)s], \label{rdaisy} \] where corresponds to the kind or rank of the family. the rank corresponds to the semi-poisson distribution, which is related to the energy-level distribution of pseudointegrable systems and other systems at criticality, such as the disordered conductor at the anderson transition. a strong relation exists between daisy models and the nearest-neighbor-interaction one-dimensional coulomb gas, where the inverse temperature of the latter model plays the same role as the rank in the former. we do not have an _ a priori _ density with which to compare the vote records with the distribution of eq. ([rdaisy]). hence the present study can be done only for nearest neighbors, , even though an exploration of longer-range correlations is extremely interesting and will be the subject of future work. the model described by eq. ([rdaisy]) with fits the unfolded p2 vote distribution in two regions with two different daisy ranks: the central part is usually fitted by a higher rank and the decay by a lower one. in the case of e-2000 (fig. [fig:2]) the fit is between and , but a remarkable deviation is that the experimental distributions start with a linear growth, as indicated below and with the dashed line in fig. [fig:1](a). this characteristic remains after the unfolding procedure and marks a clear deviation from the behavior; however, if we do not consider these data, the area preservation of the distribution means that the function with fits better. the decay is well fitted by a daisy model in all the cases, as can be seen in the lower panel of fig. [fig:2]. it is important to note that a fit to the weibull/brody distribution is not good, since the decay is clearly exponential, and that only happens in the poissonian case of such a distribution; clearly this is not the case here. in e-2006 a differential vote occurs, since the presidential candidate obtains around fewer votes than p2 obtains for the chambers, as can be seen in table [table:1]. such an event does not happen in the e-2000 process. in this case, the vote distributions for the chambers fit with a for the whole range, the body and the tail (fig. [fig:3]). for the presidential case there exists a general fit with the for the body, but the decay follows a rank . note that the agreement for the presidential case (red histogram) is not as good as in the other cases. the main difference between e-2000 and e-2006 is that p2 arrived at the last process with a deep internal division, as was widely reported in national newspapers. an interesting remark, since the vote distributions fit daisy models of different ranks, is that the parameter plays the role of an inverse temperature when these models are contrasted with the statistical distributions of a -dimensional coulomb gas with logarithmic interactions. in this work we presented an analysis of the vote distribution for one of the corporate parties (named p2 here) in mexico, which has a wide influence across the country since it was in federal power for around years.
by construction, the mexican electoral system admits a straightforward statistical analysis, since the cabins are defined and distributed in such a way that each cabin admits only voters. the crude p2 vote distributions look smooth (fig. [fig:1]) and, after a proper unfolding and normalization, correspond to a probability distribution. comparison of the data with nearest-neighbor daisy models of rank ( ) gives good agreement in all the cases. in the 2000 election the agreement could be better, but the data distribution starts with a linear growth, as indicated in fig. [fig:1](a) with a dashed line to guide the eye. the distributions' tails follow a different daisy model rank. the dynamical meaning of these results is unclear, but the agreement with a daisy model suggests the existence of universal processes therein and not just a fortuitous agreement. in ref. the daisy model of rank fits the distribution of distances for the quasi-optimal path in the traveling salesman problem (tsp). the problem consists in finding the shortest path through cities, visiting each city just once. it is clear that the problem presented here is of the same type. p2 has voters in each region of the country, and they form a truly wide web. how this happens is a matter for future analysis, as is the existence of similar behavior in other corporate parties around the world. this work was partially supported by dgapa-unam project in-104400 and promep 2115/35621.
a. carbone, g. kaniadakis, and a. m. scarfone, eur. phys. j. b *57*, 121 (2007). this reference offers a large but non-exhaustive list of references.
s. fortunato and c. castellano, phys. rev. lett. *99*, 138701 (2007).
c. borghesi and s. galam, phys. rev. e *73*, 066118 (2006). a large number of additional references can be obtained from the science citation index.
gonzález, a. o. sousa and h. j. herrmann, int. j. mod. phys. c *15*, 45 (2004).
r. n. costa filho, m. p. almeida, phys. rev. e *60*, 1067 (1999); physica a *322*, 698 (2003).
a. a. moreira, d. r. paula, r. n. costa filho and j. s. andrade jr., phys. rev. e *73*, 065101(r) (2006).
h. hernández-saldaña, j. flores and t. h. seligman, phys. rev. e *60*, 449 (1999).
`http://www.ife.org.mx` according to the mexican transparency law, the electoral data are public and can be obtained on request. the party p2 is the partido revolucionario institucional (pri).
g. báez, h. hernández-saldaña and r. a. méndez-sánchez, to be submitted to phys. rev. e; e-print `arxiv:physics/0609114`.
being registered in the voters list is almost mandatory in mexico, since the voter card is required as the official id for all practical purposes.
t. guhr, a. müller-groeling, and h. a. weidenmüller, phys. rep. *299*, 190 (1998).
g. casati and t. prosen, phys. rev. lett. *85*, 4261 (2000).
bogomolny, u. gerland, and c. schmit, phys. rev. e *59*, r1315 (1999); eur. phys. j. b *19*, 121 (2001).
u. gerland, ph.d. thesis, university of heidelberg, germany, 1998.
a. m. garcía-garcía and j. wang, phys. rev. e *73*, 036210 (2006).
shklovskii, b. shapiro, b. r. sears, p. lambrianides, and h. b. shore, phys. rev. b *47*, 11487 (1993).
the distribution of votes for one of the corporate parties in mexico during the elections of 2000, 2003 and 2006 is analyzed. after proper normalization and unfolding, the agreement of the vote distributions with those of daisy models of several ranks is good. these models are generated by retaining each level in a sequence which follows a poisson distribution. beyond the fact that the rank daisy model resembles the distribution of the quasi-optimal distances for the traveling salesman problem, no clear explanation exists for this behavior, but the agreement is not fortuitous, and the possibility of a universal phenomenon for the corporate vote is discussed.