we developed the statistical asynchronous regression ( sar ) technique described in this paper as part of a study of relativistic electron conditions at geosynchronous orbit .this part of the earth s radiation belts can evolve on a timescale of hours or even minutes .unfortunately , while individual satellites may make measurements every few seconds , it is difficult to separate the temporal changes from consequences of orbital motion . the easiest way to dothis would be to have continuous measurements at a fixed location , or local time , such as local noon .instead , we have continuous measurements on board moving spacecraft .we can remove the orbital effects if we can map our continuous measurements to local noon at geosynchronous orbit .relativistic electrons in the vicinity of geosynchronous orbit drift around the earth every 5 - 15 minutes under the influence of the local magnetic field .as it happens , these electrons do not follow circular paths like satellite orbits , but rather elliptical paths that depend on the details of the local magnetic field geometry . however , because electron density is a relatively smooth function of altitude near geosynchronous orbit , measurements at different local times are strongly correlated .this correlation is stronger still if we average our data over several drift periods .the strong correlation suggests that we can map our continuous measurements to local noon , if we can determine the right mapping function .sometimes it is possible to determine empirical mappings between measurements at different local times by regression of simultaneous measurements .for example , it is possible to relate measurements made by the goes 8 spacecraft at local dawn ( 0600 ) to goes 9 measurements at local 10 am ( 1000 ) , because whenever goes 8 is at local dawn , goes 9 is at local 10 am . however , it is never the case that goes 8 is at local dawn when goes 9 is at local noon . therefore , we need some method for mapping measurements from anywhere to local noon ( or some other local time of interest ) . 
until recently ,there have been three strategies for resolving this difficulty : interpolate between multiple calibrated spacecraft , use the equation of motion of electrons in model electromagnetic fields to follow particles around geosynchronous orbit , or use some kind of empirical description of the orbital variations .the first approach degrades substantially when only a few spacecraft are available , and fails when only one spacecraft is available .the second approach suffers from the substantial imperfections in our magnetic field models near geosynchronous .the third approach has been applied with encouraging success by _ moorer _ [ 1999 ] , who uses whatever measurements are available to adjust the crresele empirical radiation belt model for best agreement .the sar technique provides us with a more robust approach that can be applied in cases when there is no pre - existing empirical model like crresele .the sar technique calibrates not only between spacecraft and instruments but also between different locations ( local times ) around geosynchronous orbit .one can easily imagine the sar technique as calibrating measurements made by goes 8 at local dawn to measurements made by goes 9 at local noon even though these two spacecraft have never been at these locations simultaneously .additionally , the sar technique is non - parametric because it does not require us to assume a functional form for the mapping between local times .when we have described the sar technique to our colleagues , many have found it novel and challenging to understand , and some have stated that it might be useful in their own work on other problems . for our own purposes , since we have used this technique as the basis of a statistical study of the energetic electrons near geosynchronous orbit , we present this technique to familiarize our audience with the technique and to demonstrate its robustness . as we believe the sar technique has applications beyond the electron radiation belts , we have chosen to dedicate this paper entirely to the technique itself , reserving the radiation belt study to a later publication .in essence , our method provides a means of performing a regression of one time varying quantity against another without requiring simultaneous knowledge of both .we call this the statistical asynchronous regression ( sar ) method , because it allows us to regress against using only the two statistical distributions and .the sar method determines the function by matching the quantiles ( or percentiles ) and of the distributions of x and y for each probability level . a primitive variant of this technique was developed to standardize the calculation of indices at different magnetic observatories ( * ? ? ?* and references therein ) .we also note that a transformation similar to the sar method has been introduced to map non - gaussian random variables onto gaussian ones , with application to the construction of multivariate distribution functions in high - energy particle physics experiments , in the theory of portfolio in finance , and earlier in the treatment of bivariate gamma distributions . 
in statistics ,one method of graphical hypothesis testing is the q - q ( quantile - quantile ) plot , which is essentially a graphical depiction of based on the same principle as the sar method .a linear indicates that the two variables differ only by a scaling and an offset but are otherwise identically distributed .however , in spite of the variety of graphical techniques related to the sar method , none makes use of the plotted , aside from determining whether it is linear .since we are specifically interested in potentially nonlinear , we have developed the sar method as an extension to the q - q plot . under various names , such as _ anchoring _ or the _ equipercentile _ method , psychological and educational testing use the same principle as the sar technique to normalize a new test to a standard score distribution .however , is not explicitly calculated , and the information it contains is typically discarded .additionally , the spearman rank order correlation coefficient touches on the same notion as the sar method .it calculates a linear correlation coefficient between the sorted rank orders of two quantities rather than the quantities themselves ; this coefficient measures the quality of the optimal nonlinear mapping between two simultaneously measured quantities .since we are concerned with comparing quantities not measured simultaneously , we will not make use of the spearman coefficient . in the remainder of this paper, we will provide a description and some limited analysis of the sar method .first , we will describe the technique by parable , using a graphical illustration .next we will provide the formal derivation of the technique .we will provide several examples and a simple recipe for the implementation of the sar technique .then we will address the problems of finite sample size and noisy measurements .finally we will show how we use the sar method to map geosynchronous energetic electron flux from one local time to another .we begin our explanation of the sar technique by taking a step back from space physics to a simpler analogous problem .suppose we have two meteorologists making measurements every other day .one has been measuring his favorite meteorological quantity , and the other has been measuring .unfortunately , owing to an error in scheduling , the two meteorologists have not been making their measurements on the same days .it is therefore impossible for them to plot against and perform a regression .we will show how it is nonetheless possible for them to recover the empirical function .the powerful statistical tool that will make this possible is the fundamental principle that probability is conserved under a change of variables .we will leave the mathematical presentation of this principle to later sections . in, we have plotted the probability density functions ( pdfs ) and along the - and -axes respectively . for clarity, we have plotted upside down and rotated counterclockwise .each density function represents the distribution of observations made by one of the scientists . 
in this example, is distributed uniformly between 1 and 2 , and is distributed as between and .we have also plotted the relational function that provides the change of variables .the shaded area within is the probability that a single measurement of falls between and .similarly , the shaded area within is the probability that a single measurement of falls between and .the conservation of probability is illustrated graphically by the fact that the two shaded regions are equal in area . with any two of these three curves ,it is possible to determine the third .generally , it has been of greater interest to reconstruct knowing and .we , however , are interested in reconstructing knowing only and .the fundamental assumption is that of stationarity : the unknown relationship is the same at all times ; this condition must be met for a statistical approach to be possible .one can reconstruct for each simply by finding the value such that the area inside from to is equal to the area inside from to . inwe demonstrate this cumulative way of looking at the problem . instead of plotting the density functions and , we have plotted the cumulative distribution functions ( cdfs ) and .the cdfs are the integrals from to of and to of , and they correspond to the areas inside and mentioned above . to find the that corresponds to a given in figure [ pedanticcdf ] , one reads from the value on the abscissa up to then horizontally over to the same value of , and back down to the abscissa to find the corresponding .compared to figure [ pedanticpdf ] , this visualization makes it easier to find for a given , but does not provide an obvious representation of . while emphasizing different features of the method , these two graphical representations of the method give identical results .in the following sections , we will provide the formal mathematical treatment of the graphical operations .some of our readers will no doubt be a bit rusty in the manipulation of probabilities .therefore , we have included a thorough treatment of the change of variables theorem in an appendix . here , we begin with the differential form of the change of variables : in order to use this equation, we must determine the sign of . for distributions with only one tail, we can do this rather easily by examining the rare values of and .when the rare values of and fall at the same end of the real number line , is positive .when they fall at opposite ends , is negative .physical insight is also a useful tool in determining the sign of .if we expect larger ( or more positive ) values of to correspond to larger values of , then is positive .if we expect larger values of to correspond to smaller ( or more negative ) values of , then is negative . for , we can integrate ( [ changeofvars ] ) , this equation implicitly defines as the function that provides the matching integration bounds .we recognize these integrals as the cdfs of and , so we can rewrite ( [ uposintegral ] ) as we can invert to arrive at an explicit equation for , this equation represents the mathematical counterpart to the graphical operation described in figure [ pedanticcdf ] , where one moves up from to , then across to , then back down to the corresponding . 
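as a concrete illustration of this explicit formula, the following sketch evaluates u(x) = G^{-1}(F(x)) for an assumed pair of distributions (an exponential x mapped onto a standard gaussian y, chosen for the sketch and not the meteorologists' distributions) and checks that probability is conserved; it is a numerical aside, and the derivation continues with the decreasing case below.

```python
import numpy as np
from scipy import stats

# assumed example: x ~ Exponential(1), y ~ Normal(0, 1), with u increasing
F = stats.expon.cdf          # cdf of x
G_inv = stats.norm.ppf       # inverse cdf (quantile function) of y

def u(x):
    """u(x) = G^{-1}(F(x)): the change of variables carrying the distribution of x onto that of y."""
    return G_inv(F(x))

x = np.linspace(0.1, 5.0, 6)
print(np.round(u(x), 3))
# conservation of probability: P[X <= x0] equals P[Y <= u(x0)] for every x0
print(np.allclose(F(x), stats.norm.cdf(u(x))))    # True
```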
for , we can integrate ( [ changeofvars ] ) , converting this equation to cdfs , we have solving for , we arrive at combining ( [ uforupos ] ) and ( [ uforuneg ] ) we arrive at it is clear , then , that all we need to determine is knowledge of the sign of and either and or and .we summarize the desirable properties of as follows : * it can be arbitrarily nonlinear ; * its determination is not parametric ; * it maps the entire distribution and all of the moments of onto those of ; * it can be determined without simultaneous measurements of and .we now turn to some more sophisticated examples of the sar method .first , we will return to our original meteorological example to demonstrate the sar procedure on analytical functions .then , we will provide a function relating a bimodal distribution to a gaussian .finally , we will demonstrate the method on a stretched exponential and a gaussian .in the example of the meteorologists , illustrated in figures [ pedanticpdf ] and [ pedanticcdf ] , the following analytical functions were used : using ( [ xcdf ] ) and ( [ ycdf ] ) together with ( [ analyticalpdfx ] ) and ( [ analyticalpdfy ] ) , we have inserting ( [ analyticalcdfx ] ) and ( [ analyticalcdfy ] ) into ( [ uforupos ] ) , we see that adding in the proper bounds , we have in our next example , we will show how the sar method easily handles bimodal distributions .we have chosen to be bimodal and to be unimodal .the pdfs are while there is no closed form for , a graphical display can show its qualitative features . shows how the bimodal maps to .the highly nonlinear mapping has a flat spot ( with small but still positive slope ) corresponding to the local minimum in , since . in figure[ bimodalpdf ] , we see how a large range of values near maps to a very narrow range of values near . more generally , the terraced shape of can be seen to generate bimodal or multimodal distributions from unimodal ones . for our final example, we will treat an unusual distribution and an unusual mapping .we consider the case of a stretched exponential mapped to a gaussian . in this case , and are distributed as where , , and are positive real values . using ( [ changeofvars ] ) and assuming , we can write a differential equation for , by our design of ( [ stretchpdf ] ), will cause the two exponentials to drop out of the equation , satisfying the system solving ( [ exponentialsu ] ) for we have which is , in fact , the solution to ( [ exponentialsuprime ] ) and thus of ( [ upowerlaw ] ) .this mapping function is a highly nonlinear power - law . in ,we have depicted the borderline case for , , and . for ,this distribution becomes a stretched exponential , which is a common distribution in real data .while diverges at , the sar method cleanly recovers the mapping function .we are now going to investigate the robustness of the sar method on finite and noisy data sets .so far , we have considered the analytical representations of and . however , in practice , we will only have a finite number of samples of each variable .we can use these samples to construct and and then perform either a tabular or an analytical approximation to ( [ ufinal ] ) .first , we sort the and values .these sorted values give us an approximation to and .for example , if is the smallest value in measurements of , then an estimate of is similarly , we estimate as there are more sophisticated methods of estimating these distributions , such as kernel estimators , if the need arises . 
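the finite-sample recipe spelled out next can be condensed into a few lines; the sketch below implements the increasing case by rank-matching the two sorted samples, using an assumed true mapping u(x) = x^2 and the uniform x of the meteorological example only to check that the mapping is recovered (neither the quadratic form nor the sample sizes come from the text).

```python
import numpy as np

def sar_map(x_samples, y_samples, x_query):
    """tabular SAR estimate of y = u(x) for increasing u: read the empirical
    cdf of x at x_query, then take the matching empirical quantile of y."""
    xs = np.sort(np.asarray(x_samples))
    m = len(xs)
    # empirical cdf value F_hat(x_query), kept strictly inside (0, 1)
    F_hat = np.searchsorted(xs, x_query, side="right") / (m + 1.0)
    # G_hat^{-1}(F_hat): the corresponding quantile of the y sample
    return np.quantile(y_samples, F_hat)

rng = np.random.default_rng(0)
x_obs = rng.uniform(1.0, 2.0, 500)               # one scientist's measurements
y_obs = rng.uniform(1.0, 2.0, 500) ** 2          # the other's, on different days (assumed u(x) = x^2)
grid = np.linspace(1.1, 1.9, 5)
print(np.round(sar_map(x_obs, y_obs, grid), 2))  # close to grid**2
print(np.round(grid ** 2, 2))
```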
henceforth , we will only treat the case , but the interested reader can easily derive the case in a similar fashion . to obtain for a particular , we find such that next we find and such that we then have an estimate of by determining a for each sample of , we achieve a tabular definition of .we have depicted the mapping process and the uncertainty for the bimodal example in .we have chosen artificially small datasets of and to illustrate the estimation effect .the approximate uncertainty in the estimated from ( [ yestimator ] ) is given by we can rewrite ( [ rawdeltayeqn ] ) in terms of as this expression contains a first order estimate of the derivative of which , using ( [ ycdf ] ) , can be expressed in terms of as therefore ( [ deltaing ] ) can be expressed as rewriting ( [ j1condition ] ) and ( [ j2condition ] ) using ( [ fest ] ) and ( [ gest ] ) , we have therefore which leads us to here , accounts for the sampling effect .this relationship implies that in the rarified regions of the distribution , where is small , the estimation error is large .it also suggests that , to first order , increasing the sample size is not as useful in reducing as would be increasing the sample size .however , the uncertainty in is also important because the total uncertainty in - space is . by a derivation similar to that of , we have for a total uncertainty of to improve the overall quality of the reconstruction of , we would like both and to be as large as possible .another consideration for the implementation of the sar method is the effect of noise . until now, we have assumed that there is no noise in our measurements of and .however , in practice , we always encounter noisy data , and we want to be sure that the sar does not become invalid under typically noisy conditions . in a standard regression , where simultaneous values of and are known , a least - squares approach can be used to determine from noisy and . we will attempt to demonstrate the effect of noise on the sar method by simulating a noisy version of the meteorological example .we generate 100 noisy samples from the distributions and given in ( [ analyticalpdfx ] ) and ( [ analyticalpdfy ] ) .the noise distributions are chosen to be unbiased gaussians with standard deviations and for and .for now , we choose and to be 25% of the standard deviations and of and .we can fit the noisy data with .we perform two such fits : a standard least - squares regression on the pairs and least - squares regression on the pairs produced by the sar method described in ( [ yestimator ] ) . for this parametric example, a maximum likelihood estimation of and would probably outperform the least - squares approach , but we will compare to the more familiar regression for this illustration . ideally , and , but , for the noisy data , the two regressions give the sar fit is significantly better than the standard regression . in the future , it would be interesting to study how this depends on the type of noise and the form of . in , we see a graphical depiction of the noisy data and the two fits . 
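the comparison just described can be sketched numerically; since the original distributions and the parametric form of u are not recoverable from the text here, the sketch assumes a power-law truth u(x) = x^2 with x uniform on (1, 2), keeps the 25% noise level stated in the text, and contrasts a standard fit on the noisy simultaneous pairs with a fit on the rank-matched (sar) pairs.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def u_true(x):           # assumed true mapping (placeholder, not from the paper)
    return x ** 2

def model(x, a, b):      # assumed parametric family u_{a,b}(x) = a * x**b
    return a * x ** b

x = rng.uniform(1.0, 2.0, 100)
y = u_true(x)
# unbiased gaussian noise, standard deviation 25% of each sample's spread
xn = x + 0.25 * x.std() * rng.standard_normal(x.size)
yn = y + 0.25 * y.std() * rng.standard_normal(y.size)

# (i) standard least-squares regression on the simultaneous noisy pairs
ab_std, _ = curve_fit(model, xn, yn, p0=[1.0, 1.0])
# (ii) sar-based regression: discard the pairing and match the ranks of the two samples
ab_sar, _ = curve_fit(model, np.sort(xn), np.sort(yn), p0=[1.0, 1.0])

print("true (a, b):", (1.0, 2.0))
print("standard regression:", np.round(ab_std, 3))
print("sar-based regression:", np.round(ab_sar, 3))
```

the exact numbers vary with the seed; the point of the sketch is only to show the mechanics of the two fits, the systematic comparison being the simulation reported in the text.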
both fits lie very close to the true curve compared to the noisy data , however there is a clear improvement with the sar fit .to understand better the effect of noise , we repeat the above simulation 5000 times to obtain a distribution of for each fitting approach .these distributions are plotted in .it is clear that both methods provide biased estimates of .the sar method produces a smaller bias , but we would still like to know how that bias depends on the noise amplitude . we can test this dependence by finding the bias for various noise / signal ratios .we will choose the same for and , such that so far , we have only tested , but now we will test a full range from to . in , we have plotted the median estimated versus .we see that for small noise , the estimate quality is high , but , as approaches 1 , the estimation fails .the estimated by the sar method is generally of higher quality than the estimate from the standard regression .for relatively large noise amplitude neither regression method produces quality estimates of .it is clear that , while the derivation of the sar method assumes noiseless data , our implementation of the sar method is at least as robust to noise as is the traditional least - squares regression .the sar appears to be reliable when the noise amplitude is small compared to the variability of the data sample .finally , we would like to demonstrate the sar method on a real problem from space physics .the goes 8 geosynchronous spacecraft measures , among other things , the flux of electrons with energies above 2 mev .the spacecraft orbits the earth once per day .the electron populations at geosynchronous orbit are organized by the position of the sun relative to the earth , which we identify as local time .owing to the asymmetry of the earth s magnetic field in space , as the spacecraft passes through different local times , it measures slightly different parts of the radiation belts . because the relativistic electron density varies smoothly with altitude and the electrons themselves make slightly elliptical orbits every few minutes , hour averaged fluxes at all locations around geosynchronous orbit are well correlated ; we therefore expect a monotonically increasing function relating fluxes measured at one local time to fluxes measured at another local time .we can estimate the flux at from a measurement made by the spacecraft at if we can determine .the probability distributions of electron measurements at every local time at geosynchronous are relatively stationary in time ; that is , the distribution of measurements in one year is roughly equivalent to the distribution of measurements in any other year . therefore , we can estimate and using historical measurements of and , and we can use the sar method to reconstruct .we will assume is local dawn and is local noon .we have obtained goes 8 measurements for 1998 from cdaweb ( http://cdaweb.gsfc.nasa.gov/ ) .we calculated hourly averages and grouped them into 1-hour bins near local dawn and local noon .this gives us about 360 samples at each location , but none that are simultaneous because the spacecraft is only at one location at a time .because electron measurements tend to be heavily biased toward low values , we will use the complementary cumulative distribution functions and . 
in terms of these functions , for a monotonically increasing , we have shows the constructed and .we can fit both distributions with the same analytical form : assuming an increasing , we use ( [ uforuposcomplements ] ) to arrive at an analytical form for : the non - parametric sar mapping is shown in to be nearly a power - law .we have determined an analytical fit to be this fit is in agreement with the function derived in ( [ nfak ] ) above from the implementation of the sar method using the parameterizations ( [ tre ] ) and ( [ hfs ] ) of the cumulative distributions .the fact that the exponent in ( [ fakal ] ) is very nearly 1 indicates that the densities at dawn and noon change in fixed proportion to each other , even as the radiation belts are filled during geomagnetic activity . if we imagine the electron phase - space structure to be , then we can state the proportionality as this relationship suggests a simple separation of variables : where represents a phase space shape function , and represents the varying global relativistic electron content of the geosynchronous region .the parameterizations ( [ tre ] ) and ( [ hfs ] ) together with the corresponding prediction ( [ nfak ] ) validated by the direct non - parametric implementation of the sar method giving ( [ fakal ] ) suggest in addition a simple and useful representation of the heavy tail structure of the distribution of electron fluxes in terms of stretched exponentials .such distributions have been found to parameterize a large variety of distributions found in nature as well as in social sciences .they present a quasi - stable property and can be shown to be the generic result of the product of random variables in the `` extreme deviation '' regime .so far , we have only determined the mapping from local dawn to local noon .it may also be necessary to allow u(x ) to vary with magnetic activity level .the magnetic indices dst and kp measure the intensity of the magnetospheric ring current and the variability of magnetospheric currents , respectively .we can create different mappings for each of several bins of geomagnetic indices ; such binning would organize the data by the state of the system , reinforcing the assumption that each is monotonic and time invariant . using the sar method , we can find mapping functions from every local time to every other local time , depending on geomagnetic activity , as necessary ; this allows us to reconstruct the flux around the entire orbit at any time based only on the single measurement made by goes 8 .if we produce fluxes around the entire orbit every hour , we can view spatial and temporal variations separately .in particular , if we reconstruct a time series of hourly fluxes at a fixed local time , we can perform various time series analyses that will not be influenced by the spatial variations seen in the measured time - series .this investigation will be reported elsewhere .we have shown that it is possible to accurately determine the function relating two variables even when they are not measured simultaneously . specifically , we were trying to map energetic electron fluxes between different local times at geosynchronous orbit . however , we believe that our solution may be useful to other researchers whose data are not taken simultaneously .we developed a technique , statistical asynchronous regression ( sar ) , that uses the statistical distributions of two variables to determine the unique monotonic function that can map one distribution onto the other . 
because the sar technique only workswhen there is a monotonic relationship between the two quantities , it should only be applied to quantities that are believed to be highly correlated with each other .we caution that the sar technique will produce a relationship for any two quantities , regardless of whether they are actually related .it is particularly inappropriate to use the sar to describe chaotic systems , which generally arise from non - monotonic behaviors .also , when the noise amplitude is a substantial fraction of the data sample variability , we do not expect the sar to give reliable results . to illustrate the sar technique when the two distributions are known analytically, we have provided several examples of common distributions .we have shown that the sar technique can recover the underlying relationship of the two quantities even when one distribution diverges or has more than one local maximum .we have provided a simple algorithm for implementing the sar .we derived simple expressions for the uncertainty in the estimated relationship between the two quantities . to ensure that the technique is robust for noisy data , we have simulated two noisy variables with a known relationship and determined how well the sar technique recovers that relationship ; the sar performs than a least squared error regression , which requires simultaneous measurements of both quantities .while we expect that ultimately most scientists will wish to fit to some parametric form , we feel that it is important that the sar does not require us to assume a parametric form a priori . for those wishing to apply the sar technique to problems where passes through zero ,we offer the following strategy : if the occurs at known and , then the sar technique is perfectly valid in bins of and constrained to be between the zeros of . in this way, the sar would provide a piecewise form of . in closing, we would like to suggest some areas that might benefit from the sar approach . in modeling tectonic deformations ,it is useful to quantify the balance of deformation accommodated by different faults in a complex network . for an individual fault, we often can measure only its length or its offset . relying only on faults with both length and offset knownwould exclude many useful measurements .however , the physics of tectonic deformation leads us to expect a monotonic relationship between fault length and offset . 
in this case, the sar technique would allow us to regress fault length against fault offset , using all of the available measurements .similarly , for individual earthquakes we often know only one of seismic moment and energy released ; the sar technique would allow us to regress all the available measurements rather than only those from earthquakes with both moment and energy known .we hope that the ideas presented here will assist those who need to relate non - simultaneous measurements .the sar method relies heavily on the change of variables theorem from probability theory .the following derivation will be instructive to those not familiar with the manipulation of probabilities .we will use the notational style ] .therefore , we can apply ( [ ycdf ] ) and ( [ udef ] ) to ( [ unegreplacement ] ) to arrive at = 1-g(u(x ) ) .\label{feq1-gu}\ ] ] by differentiating ( [ feqgu ] ) and ( [ feq1-gu ] ) , we arrive at or , equivalently , + + probability is conserved under a change of variables .this is the change of variables theorem , and it is depicted graphically in figure [ pedanticpdf ] .this work was in part funded by igpp grant lanl 1001 and nsf grant .we would like to thank g. reeves and the energetic particles group at los alamos national lab for their insightful critique of preliminary presentations of the sar method .we would also like to thank cdaweb and t. onsager for providing data from the goes 8 spacecraft .we thank v. pisarenko , f. schoenberg , and a. russell for helpful comments on the technique and manuscript .igpp no .5471 .friedel , r.h.w ., g. reeves , d. belian , t. cayton , c. mouikis , a. korth , b. blake , j. fennell , s. selesnick , d. baker , t. onsager , and s. kanekal , a multi - spacecraft synthesis of relativistic electrons in the inner magnetosphere using lanl , goes , gps , sampex , heo , and polar , _ radiation measurements 30 _ , 589 - 597 , 1999 .li , xinlin , d.n .baker , m. temerin , t.e .cayton , e.g.d .reeves , r.a .christensen , j.b .blake , m.d .looper , r. nakamura , and s.g .kanekal , multisatellite observations of the outer zone electron variation during the november 3 - 4 , 1993 magnetic storm , , 14,123 - 14,140 , 1997 .mcguire , r.e .burley , r.m .candey , r.l .kessel , t.j .kovalick , cdaweb and sscweb : enabling correlative international sun - earth - connections science entering the era of image and cluster , spring agu meeting , 2000 .reeves , g. d. , d. n. baker , r. d. belian , j. b. blake , t. e. cayton , j. f. fennell , r. h. w. friedel , m. m. meier , r. s. selesnick , and h. e. spence , the global response of relativistic radiation belt electrons to the january 1997 magnetic cloud , _ geophys .lett . , 25 _ , 3265 - 3268 , 1998 . | we introduce the statistical asynchronous regression ( sar ) method : a technique for determining a relationship between two time varying quantities without simultaneous measurements of both quantities . we require that there is a time invariant , monotonic function y = u(x ) relating the two quantities , y and x. in order to determine u(x ) , we only need to know the statistical distributions of x and y. we show that u(x ) is the change of variables that converts the distribution of x into the distribution of y , while conserving probability . we describe an algorithm for implementing this method and apply it to several example distributions . 
we also demonstrate how the method can separate spatial and temporal variations from a time series of energetic electron flux measurements made by a spacecraft in geosynchronous orbit . we expect this method will be useful to the general problem of spacecraft instrument calibration . we also suggest some applications of the sar method outside of space physics . igpp publication no . |
ever since the proposition of the `` demon '' by maxwell , numerous studies have been conducted on the consistency between the role of the demon and the second law of thermodynamics .bennett resolved the apparent contradiction by considering the logically irreversible initialization of the demon .the key observation here is the so - called landauer principle which states that , in erasing one bit of information from the demon s memory , at least of heat should , on average , be dissipated into the environment with the same amount of work being performed on the demon .piechocinska has proved this principle without invoking the second law in an isothermal process .the essence of consistency between the role of the demon and the second law of thermodynamics can be illustrated by the setup of the szilard engine .suppose that the entire state of the szilard engine and the demon is initially in thermal equilibrium .the demon gains one bit of information on the state of the szilard engine . the engine performs just of work by using this information , before returning to the initial state .the demon then erases the obtained information from its memory .consequently , the entire state returns to the initial equilibrium state .the sum of the work performed on the engine and the demon in a full cycle of the szilard engine is non - negative according to the landauer principle ; thus the szilard engine is consistent with the second law in this situation .however , the landauer principle stated above tells us nothing if the demon is far from equilibrium in the initial and/or final states .further discussions on maxwell s demon involve quantum - mechanical aspects of the demon , and general relationships between the entropy and action of the demon from a quantum information - theoretic point of view . on the other hand ,the relationship between the work ( or heat ) and action of the demon is not yet fully understood from this viewpoint .we stress that is not valid in a general thermodynamic process .jarzynski has proved an irreversible - thermodynamic equality which relates the work to the free energy difference in an arbitrary isothermal process : , where , is the work done on the system , is the difference in the helmholtz free energy between the initial and final states , and is the statistical average over all microscopic paths .note that this equality is satisfied even when the external parameters are changed at a finite rate .it follows from this equality that the fundamental inequality holds . while the original jarzynski equality is classical , quantum - mechanical versions of the jarzynski equalityhave been studied .kim and qian have recently generalized the equality for a classical langevin system which is continuously controlled by a maxwell s demon . in this paper , we establish a general relationship between the work performed on a thermodynamic system and the amount of information gained from it by the demon , and prove the relevant equality and several corollary inequalities which are generalizations of eq .( [ 1 ] ) . with the present setup ,the demon performs a quantum measurement during an isothermal process , selects a sub - ensemble according to the outcome of the measurement , and performs unitary transformations on the system depending on the outcome .we follow the method of ref . 
to characterize the demon only in terms of its action on the system and do not make any assumption about the state of the demon itself .the subsequent results therefore hold true regardless of the state of the demon , be it in equilibrium or out of equilibrium .this paper is constituted as follows . in sec .ii , we formulate a general setup of isothermal processes with maxwell s demon and illustrate the case of a generalized szilard engine . in sec .iii , we derive the generalized jarzynski equality , and new thermodynamic equalities generalizing inequality ( [ 1 ] ) . in sec .iv a , we clarify the property of an effective information content obtained by the demon s measurement . in sec .iv b , we discuss a crucial assumption of the final state of thermodynamic processes , which sheds light on a fundamental aspect of the characterization of thermodynamic equilibrium states . finally , in sec .vii , we conclude this paper .we consider an isothermal process at temperature , in which a thermodynamic system is in contact with an environment at the same temperature , and in which the initial and final states of the entire system are in thermodynamic equilibrium .we do not , however , assume that the states in the course of the process are in thermodynamic equilibrium . we treat the isothermal process as the evolution of thermodynamic system s and sufficiently large heat bath b , which are as a whole isolated andonly come into contact with some external mechanical systems and a demon . apart from the demon, the total hamiltonian can be written as where the time dependence of describes the mechanical operation on s through certain external parameters , such as an applied magnetic field or volume of the gas , and the time dependence of describes , for example , the attachment of an adiabatic wall to s. we consider a time evolution from to , assume , and write and .we consider the simplest isothermal process in the presence of the demon .this process can be divides into the following five stages : _ stage 1._at time , the initial state of s+b is in thermal equilibrium at temperature .the density operator of the entire state is given by note that the partition function of s+b is the product of that of s and that of b : , and the helmholtz free energy of s+b is the sum , where , etc ._ stage 2._from to , system s+b evolves according to the unitary transformation represented by _ stage 3._from to , a demon performs a quantum measurement described by measurement operators on s and obtains each outcome with probability .let be the set of all the outcomes satisfying .suppose that the number of elements in is finite .the process proceeds to _ stage 4 _ only if the outcome belongs to subset of , otherwise the demon discards the sample and the process restarts from _ stage 1 _ ; we calculate the statistical average over subensemble ._ stage 4._from to , the demon performs a mechanical operation on s depending on outcome .let be the corresponding unitary operator on s+b .we assume that the state of the system becomes independent of at the end of this stage , this being a feature characterizing the action of the demon .this stage describes a feedback control by the demon .note that the action of the demon is characterized only by set ._ stage 5._from to , s+b evolves according to unitary operator which is independent of outcome .we assume that s+b has reached equilibrium at temperature by from the macroscopic point of view because the degrees of freedom of b is assumed to far exceed that of s. 
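before the partition-function bookkeeping that follows, it may help to see the five stages in the simplest classical setting, the n-partition szilard engine worked out immediately below; the sketch assumes an error-free projective measurement, ideal quasi-static expansion, and retention of every outcome, so it illustrates the protocol and the bookkeeping of work and information rather than the general quantum derivation.

```python
import numpy as np

kT = 1.0                          # units with k_B * T = 1
n = 5                             # number of partitions inserted in stage 2
p = np.full(n, 1.0 / n)           # stage 3: error-free measurement, outcome k with probability 1/n

# stages 4-5: relocate the k-th box and expand quasi-statically back to the full
# volume; the work done ON the system is W_k = -kT ln n for every outcome, and
# the free energy difference over the complete cycle is zero.
W = np.full(n, -kT * np.log(n))
I = -np.log(p)                    # surprisal of each outcome, <I> = ln n here

print("<W>             :", float(np.sum(p * W)))                # -kT ln n
print("dF - kT <I>     :", float(0.0 - kT * np.sum(p * I)))     # lower bound; saturated here
print("<exp(-W / kT)>  :", float(np.sum(p * np.exp(-W / kT))))  # n, the number of retained outcomes
```

the first two lines show the bound on the work in terms of the information gain being saturated by the ideal engine, and the last line reproduces the error-free special case noted around eq. ( [ 17 ] ), where the exponentiated-work average equals the number of outcomes.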
the partition function of the final hamiltonian is then given by as in _ stage 1 _ , it follows that and , where .we denote as the density operator of the final state . after going through all the stages ,the state of the system changes from to where is given by and is the sum of the probabilities of s belonging to as note that is a nonunitary operator because it involves the action of measurement by the demon .in contrast , in the original setup of jarzynski , there is no demon and hence is unitary . in the case of a generalized szilard engine ,the foregoing general process can be illustrated as follows .we use a box containing a single molecule localized in a region that is sufficiently smaller than the box .this molecule interacts with a heat bath at temperature . and . the conventional szilard engine corresponds to ._ stage 1._the state of the engine and the heat bath is initially in thermodynamic equilibrium ._ stage 2._divide the box into 5 partitions ._ stage 3._a demon measures which partition the molecule is in .this measurement is described by measurement operators . _stage 4._the demon performs operation depending on measurement outcome . if , or , the demon moves the box to the leftmost position . if the outcome is 4 or 5 , the demon discards the sample and the process restarts from _stage 1_. _ stage 5._the box is expanded quasi - statically and isothermally so that the final state of the entire system returns to the initial state from a macroscopic point of view .this process is a unitary evolution of the entire system .see the text for details . ] _ stage 1._a molecule is initially in equilibrium at temperature .let be the density operator of the initial state of the engine and the heat bath ._ stage 2._we divide the box into partitions of equal volume .the state of the molecule then becomes , where represents the state of the molecule in the partition .we set ._ stage 3._a demon performs a measurement on the system to find out where the molecule is .the demon chooses subset ( ) , and the process proceeds to _ stage 4 _ only if outcome belongs to .the state of the system is at the end of this stage ._ stage 4._when the outcome is ( ) , the demon removes all but the box , and then moves the box to the leftmost position .this operation is described by .the state of the molecule after this operation is ._ stage 5._we expand the box quasi - statically and isothermally so that the final state of the entire system returns to the initial state from a macroscopic point of view . here by the last sentence we mean that the expectation value of any macroscopic quantity in the final state is the same as that in the initial state , as will be discussed in detail in sec .iv b. this process is unitary in respect of the molecule and heat bath ; we can thus describe this process by unitary operator .figure [ figure1 ] illustrates these processes for the case of and .when , this process is equivalent to the conventional szilard engine .let us now prove the equality which constitutes the main result of this paper .let be the work performed on s+b during the entire process , and be the respective eigenvalues and eigenvectors of , and and the respective eigenvalues and eigenvectors of .we can then calculate the statistical average of over the subensemble specified by condition as where is given by eq .( [ 8 ] ) . 
making the polar decomposition of as , , where is a unitary operator, we can rewrite eq .( [ 9 ] ) as where is the density operator of the canonical distribution of the final hamiltonian : let be the density operator just before the measurement , and be that immediately after the measurement with outcome .we obtain and ; therefore thus eq . ([ 10 ] ) reduces to where we introduced the notation the parameter describes the measurement error of the demon s measurement , the precise meaning of which is discussed in sec . v. we now _ assume _ that the final state satisfies we discuss in detail the physical meaning and validity of this assumption in sec .iv b. note that if the density operator of the final state is the canonical distribution , i.e. , then the above assumption ( [ assumption ] ) is trivially satisfied . under this assumption ,we finally obtain where .this is the main result of this paper . in a special case in which s are projection operators for all , and , eq .( [ 14 ] ) reduces to where is the number of elements in .note that the right - hand side of eq .( [ 17 ] ) is independent of the details of pre - measurement state .we can apply the generalized jarzynski equality to prove an inequality .it follows from the concavity of the exponential function that we therefore obtain consider the case in which the demon selects single outcome , that is .equation ( [ 9 ] ) then reduces to , where , and inequality ( [ 16 ] ) becomes averaging inequality ( [ 18 ] ) over all , we obtain where describes an effective information content which the demon gains about the system . inequality ( [ 19 ] ) is a generalization of ( [ 1 ] ) and is stronger than ( [ 16 ] ) .it shows that we can extract work larger than from a single heat bath in the presence of the demon , but that we _ can not _extract work larger than .we discuss the physical meaning of and .it can easily be shown that here for all if and only if is proportional to the identity operator , and for all if and only if is a projection operator . in the former case, the demon can gain no information about the system , while in the latter case , the measurement is error - free. let us consider the case of ( ) , where and are projection operators and is a small positive number .then is given by we can therefore say that is a measure of distance between and the projection operator .it follows from ( [ 21 ] ) that where is the shannon information content that the demon obtains : .we now derive some special versions of inequality ( [ 19 ] ) .if the demon does not get information ( i.e. , ) , inequality ( [ 19 ] ) becomes , which is simply inequality ( [ 1 ] ) . on the other hand , in the case of a projection measurement , where , ( [ 19 ] ) becomes .an inequality similar ( but not equivalent ) to this inequality has been proved by kim and qian for a classical langevin system .we next show the physical validity of the assumption ( [ assumption ] ) . in general, the canonical distribution describes the properties of thermodynamic equilibrium states .however , the thermodynamic equilibrium is a macroscopic property characterized only by the expectation values of macroscopic quantities .in fact , to show that a heat bath is in thermodynamic equilibrium , we do not observe each molecule , but observe only macroscopic quantities of the heat bath , for example the center of mass of a `` small '' fraction involving a large number ( e.g. 
) of molecules .thus a density operator corresponding to a thermodynamic equilibrium state is _ not _ necessarily the rigorous canonical distribution ; is too strong an assumption .note that no assumption has been made on the final state in the derivation of the original jarzynski equality without maxwell s demon , so holds for any final state . under the condition that the final state of s+b is in thermodynamic equilibrium corresponding to the free energy from a macroscopic point of view, we can interpret inequality ( [ 1 ] ) , which can be shown from the jarzynski equality , as the thermodynamic inequality for a transition between two thermodynamic equilibrium states , even if is not a rigorous canonical distribution .on the other hand , we have required the supplementary assumption ( [ assumption ] ) for the final state to prove the generalized jarzynski equality with maxwell s demon .the assumption ( [ assumption ] ) holds if for all in , where .we can say that under the assumption ( [ assumption ] ) , is restricted not only in terms of macroscopic quantities , but also constrained so as to meet eq .( [ a ] ) .it appears that the latter constraint is connected with the fact that the system in state and that in state should not be distinguished by the demon .we stress that our assumption ( [ assumption ] ) is extremely weak compared with the assumption . to see this , we denote as the degree of freedom of s+b ( e.g. ) , as the dimension of the hilbert space corresponding to s+b , and as the number of elements in .we can easily show that . on the other hand, holds in a situation that the role of the demon is experimentally realizable . for example , suppose that s is a spin-1 system and b consists of harmonic oscillators , and the demon made up from optical devices performs a projection measurement on s. in this case , and . the rigorous equality holds , in general , if and only if all the matrix elements in coincides with that of , and note that the number of independent real valuables in the density operators is .on the other hand , only equalities in ( [ a ] ) are required to meet the assumption .although it is a conjecture that the assumption ( [ assumption ] ) is virtually realized in real physical situations , we believe that it is indeed realized in many situations . a more detailed analysis on the assumption ( [ assumption ] ) is needed to understand the concept of thermodynamic equilibrium .finally , we consider the case that the assumption ( [ assumption ] ) is not satisfied and holds for all in .we can then estimate the value of as thus the deviation from the generalized jarzynski equation is bounded only by the difference on the left - hand side of eq .( [ b ] ) .in conclusion , we have generalized the jarzynski equality to a situation involving maxwell s demon and derived several inequalities in thermodynamic systems .the demon in our formulation performs a quantum measurement and a unitary transformation depending on the outcome of the measurement .independent of the state of the demon , however , our equality ( [ 17 ] ) establishes a close connection between the work and the information which can be extracted from a thermodynamic system , and our inequality ( [ 19 ] ) shows that one can extract from a single heat bath work greater than due to an effective information content that the demon gains about the system . 
to analyze broader aspects of informationthermodynamic processes merits further study .this work was supported by a grant - in - aid for scientific research ( grant no. 17071005 ) and by a 21st century coe program at tokyo tech , `` nanometer - scale quantum physics '' , from the ministry of education , culture , sports , science and technology of japan .mu acknowledges support by a crest program of jst .24 j. c. maxwell , _ `` theory of heat '' _ ( appleton , london , 1871 ) ._ `` maxwell s demon 2 : entropy , classical and quantum information , computing '' _ , h. s. leff and a. f. rex ( eds . ) , ( princeton university press , new jersey , 2003 ) . c. h. bennett , int .. phys . * 21 * , 905 ( 1982 ) .r. landauer , ibm j. res .* 5 * , 183 ( 1961 ) .b. piechocinska , phys .a * 61 * , 062314 ( 2000 ) .l. szilard , z. phys .* 53 * , 840 ( 1929 ) .w. zurek , e - print : quant - ph/0301076 .s. lloyd , phys .a * 56 * , 3374 ( 1997 ) .g. j. milburn , aus .* 51 * , 1 ( 1998 ) . m. o. scully , phys .lett . * 87 * , 220601 ( 2001 ) .t. d. kieu , phys .lett . * 93 * , 140403 ( 2004 ) .j. oppenheim , m. horodecki , p. horodecki , and r. horodecki , phys .lett . * 89 * , 180402 ( 2002 ) .k. maruyama , f. morikoshi , and v. vedral , phys .a * 71 * , 012108 ( 2005 ) .m. a. nielsen , c. m. caves , b. schumacher , and h. barnum , proc .london a , * 454 * , 277 ( 1998 ) .v. vedral , proc .london a , * 456 * , 969 ( 2000 ) . c. jarzynski , phys .78 * , 2690 ( 1997 ) . c. jarzynski , phys .e * 56 * , 5018 ( 1997 ) .h. tasaki , e - print : cond - mat/0009244 .s. yukawa , j. phys .. jpn . * 69 * , 2367 ( 2000 ) .s. mukamel , phys . rev90 * , 170604 ( 2003 ) .w. deroeck and c. maes , phys .e * 69 * , 026115 ( 2004 ) .k. h. kim and h. qian , e - print : physics/0601085 .e. b. davies and j. t. lewis , commun . math .* 17 * , 239 ( 1970 ) .m. a. nielsen and i. l. chuang , _`` quantum computation and quantum information '' _ ( cambridge university press , cambridge , 2000 ) . | we propose a new thermodynamic equality and several inequalities concerning the relationship between work and information for an isothermal process with maxwell s demon . our approach is based on the formulation la jarzynski of the thermodynamic engine and on the quantum information - theoretic characterization of the demon . the lower bound of each inequality , which is expressed in terms of the information gain by the demon and the accuracy of the demon s measurement , gives the minimum work that can be performed on a single heat bath in an isothermal process . these results are independent of the state of the demon , be it in thermodynamic equilibrium or not . |
network routing problems involve the selection of a pathway from a source to a sink in a network .network routing is encountered in logistics , communications , the internet , mission planning for unmanned aerial vehicles , telecommunications , and transportation , wherein the cost effective and safe movement of goods , personnel , or information is the driving consideration . in transportation science and operations research, network routing goes under the label _ vehicle routing problem _ ( vrp ) ; see bertsimas and simchi - levi ( ) for a survey .the flow of any commodity within a network is hampered by the failure of one or more pathways that connect any two nodes .pathway failures could be due to natural and physical causes , or due to the capricious actions of an adversary .for example , a cyber - attack on the internet , or the placement of an improvised explosive device ( ied ) on a pathway by an insurgent .generally , the occurrence of all types of failures is taken to be probabilistic .see , for example , gilbert ( ) , or savla , temple and frazzoli ( ) who assume that the placement of mines in a region can be described by a spatio - temporal poisson process .the traditional approach in network routing assumes that the failure probabilities are fixed for all time , and known ; see , for example , colburn ( ) .modern approaches recognize that networks operate in dynamic environments which cause the failure probabilities to be dynamic .dynamic probabilities are the manifestations of new information , updated knowledge , or new developments ( circumstances ) ; de vries , roefs and theunissen ( ) articulate this matter for unmanned aerial vehicles .the work described here is motivated by the placement of ied s on the pathways of a logistical network ; see figure [ fig1 ] .our aim is to prescribe an optimal course of action that a decision maker is to take vis - - vis choosing a route from the source to the sink . by optimal action we mean selecting that route which is both cost effective and safe . s efforts are hampered by the actions of an adversary , who unknown to , may place ied s in the pathways of the network . in military logistics, is an insurgent ; in cyber security , is a hacker . s uncertainty about ied presence on a particular routeis encapsulated by s personal probability , and s actions determined by a judicious combination of probabilities and s utilities . for an interesting discussion on a military planner s attitude to risk ,see ( ) who claim that individuals tend to be risk prone when the information presented is in terms of losses , and risk averse when it is in terms of gains .methods for a meaningful assessment of s utilities are not on the agenda of this paper ; our focus is on an assessment of s probabilities , and the unconventional statistical issues that such assessments spawn . to cast this paper in the context of recent work in route selection under dynamic probabilities , we cite ye et al .( ) who consider minefield detection and clearing . 
for these authors ,dynamic probabilities are a consequence of improved estimation as detection sensors get close to their targets .the focus of their work is otherwise different from the decision theoretic focus of ours .we suppose that is a coherent bayesian and thus an expected utility maximizer ; see lindley ( ) .this point of view has been questioned by de vries , roefs and theunissen ( ) who claim that humans use heuristics to make decisions .the procedures we endeavor to prescribe are on behalf of .we do not simultaneously model s actions , which is what would be done by game theorists .rather , our appreciation of s actions are encapsulated via likelihood functions , and modeling socio - psychological behavior via subjectively specified likelihoods is a novel feature of this paper .fienberg and thomas ( ) give a nice survey of the diverse aspects of network routing dating from the 1950s , covering the spectrum of probabilistic , statistical , operations research , and computer science literatures . in thomas and fienberg ( )an approach more comprehensive than that of this paper is proposed ; their approach casts the problem in the framework of social network analysis , generalized linear models , and expert testimonies .we start section [ sec2 ] by presenting a subnetwork , which is part of a real logistical network in iraq , and some ied data experienced by this subnetwork . for security reasons , we are unable to present the entire network and do not have access to all its ied experience .section [ sec3 ] pertains to the decision - theoretic aspects of optimal route selection .we discuss both the nonsequential and the sequential protocols .the latter raises probabilistic issues , pertaining to the `` principle of conditionalization , '' that appear to have been overlooked by the network analyses communities .the material of section [ sec3 ] constitutes the general architecture upon which the material of section [ sec4 ] rests .section [ sec4 ] is about the inferential and statistical matters that the architecture of section [ sec3 ] raises .it pertains to the dynamic assessment of failure probabilities , and describes an approach for the integration of data from multiple sources .such data help encapsulate the actions of , and s efforts to defeat them .the approach of section [ sec4 ] is bayesian ; it entails the use of logistic regression and an unusual way of constructing the necessary likelihood functions .section [ sec5 ] summarizes the paper , and portrays the manner in which the various pieces of sections [ sec3 ] and [ sec4 ] fit together .section [ sec5 ] also closes the paper by showing the workings of our approach on the network of section [ sec2 ] .figure [ fig1 ] is a subnetwork abstracted from a real logistics network used in iraq .the subnetwork has nine nodes , labeled a ( not to be confused with adversary ) to i , and ten links , labeled 1 to 10 .the source node is a and the sink node is i. there are thirteen bridges dispersed over the ten links of figure [ fig1 ] , with link 9 having one bridge , the `` _ _new bridge__. '' this bridge is a mile away from a park , the old city , the bus station , and the mosque .the precise locations of the remaining 12 bridges in the subnetwork are classified .there have been four crossings on the `` new bridge , '' and none of these have experienced an ied attack . 
to plan an optimal route from source to sink , needs to know the probability of experiencing an ied attack on the next crossing on each of the ten links .however , we focus discussion on link 9 , because it is for this link that we have information on the number of previous crossings . to assess the required probabilities , we need to have all possible kinds of information , including that given in table [ tab1 ] , which gives the history of ied placements on the remaining twelve bridges of the subnetwork .the data of table [ tab1 ] , though public , were painstakingly generated via information from multiple sources such as google maps by the so - called process of `` connecting the dots . '' generally ,such data are hard to come by via the public domain .the recently released wikileaks ( ) data has some covariate information on ied experiences in afghanistan. however , there are very few well - defined logistical routes in afghanistan , and those that may be there are not identified in the wikileaks database . furthermore , the covariate information that is available is not of the kind relevant to route selection .thus , for this paper , the wikileaks afghanistan data are of marginal value ..2d1.2d1.2d1.2@ * bridge * & & & & & + aimma & 0 & 0 & 0 & 1 & 0.1 + adhimiya & 0 & 0.25 & 0.75 & 1.5 & 1 + sarafiya & 1 & 0 & 1 & 1 & 0.5 + sabataash & 0 & 1 & 0 & 0.75 & 0.2 + shuhada & 0 & 2 & 0 & 0.75 & 0.1 + ahrar & 0 & 1 & 0 & 1 & 0.75 + sinak & 0 & 0.5 & 0 & 1 & 0.3 + jumhuriya & 0 & 0.1 & 0 & 0.75 & 0.3 + arbataash & 1 & 0 & 3 & 3.5 & 2 + jadriya & 1 & 0 & 5.5 & 5 & 2 + sjadriya & 0 & 0 & 6 & 5.5 & 3 + dora & 1 & 2 & 5 & 4 & 4 + in table [ tab1 ] , the column labeled `` attack '' is 1 whenever the bridge has experienced an attack ; otherwise it is .the other columns give the distance of the bridge , in miles , from population centers like a park , old city , bus station , and mosque .an entry of zero denotes that the bridge is next to the landmark . whereas data on ied attacks tends to be public ( because of press reports ) , data on the number of crossings by convoys , the number of ieds cleared , the composition of the convoys , etc ., remains classified .the three routes suggested by figure [ fig1 ] are as follows : , , and .since ieds are placed by adversaries , is generally uncertain of their presence when planning begins . additionally , there are pros and cons with each route in terms of distance traversed , route conditions ( such as the number of curves and bends , terrain topology ) , proximity to hostile territory , receptiveness of the local population to harbor insurgents , and so on .in actuality , will have access to historical data of the type shown in table [ tab1 ] , and also information about the nature of the cargo , the convoy speed , intelligence about the cunningness and sophistication of the insurgents , the number of previous unencumbered crossings on a link , etc.=-1 s problem is to select an optimal route between the three routes given above .a variant is to specify the optimal route _sequentially_. that is , start by going from a to c via links 1 and 2 , and then , upon arrival at c , make a decision to proceed along link 9 to the sink , or to take the circuitous routes via the links 3 to 8 , and 10 , to get to the sink .similarly , upon arrival at node e , could proceed along link 10 , or via the links and 8 to arrive at the sink . 
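For later reference, the historical record of Table 1 can be laid out as a response vector (the attack indicator) and a covariate matrix of distances to the four landmarks. The short Python sketch below simply encodes the twelve rows as printed above; it assumes that the first numeric column is the attack indicator and that the four distance columns follow the order in which the landmarks are listed in the text (park, old city, bus station, mosque), and it adds an intercept column as a modelling convenience that the paper does not spell out:

import numpy as np

# Table 1 rows: [attack, d_park, d_oldcity, d_busstation, d_mosque]
# (column interpretation assumed from the surrounding description; see lead-in above)
table1 = {
    "aimma":     [0, 0,    0,    1,    0.1 ],
    "adhimiya":  [0, 0.25, 0.75, 1.5,  1   ],
    "sarafiya":  [1, 0,    1,    1,    0.5 ],
    "sabataash": [0, 1,    0,    0.75, 0.2 ],
    "shuhada":   [0, 2,    0,    0.75, 0.1 ],
    "ahrar":     [0, 1,    0,    1,    0.75],
    "sinak":     [0, 0.5,  0,    1,    0.3 ],
    "jumhuriya": [0, 0.1,  0,    0.75, 0.3 ],
    "arbataash": [1, 0,    3,    3.5,  2   ],
    "jadriya":   [1, 0,    5.5,  5,    2   ],
    "sjadriya":  [0, 0,    6,    5.5,  3   ],
    "dora":      [1, 2,    5,    4,    4   ],
}
rows = np.array(list(table1.values()), dtype=float)
y = rows[:, 0]                                          # attack indicator (0/1)
X = np.column_stack([np.ones(len(rows)), rows[:, 1:]])  # intercept + four distances
print(X.shape, int(y.sum()), "attacks among", len(y), "bridges")

These arrays are reused in the sketches accompanying Section 4 below.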
s decision as to which choice to make will be based on s uncertainty of ied presence on the links 3 to 10 , assessed when is at node c and at node e. thus , optimal route selection is a problem of decision under uncertainty .because of the dynamic environment in which convoys operate , s uncertainties change over time . in section [ sec3 ]we prescribe a decision - theoretic architecture for route selection .this requires that assess his ( her ) uncertainties about ied placements , as well as utilities for a successful or failed traversal . since s uncertainties are dynamic , the prescription of section [ sec3 ] is also dynamic ; that is , the selected route is optimal only for an upcoming trip .the main challenge therefore is an assessment of the dynamic probabilities ; see section [ sec4 ] .under the nonsequential protocol , needs to choose , at decision time , from the following : take route ; take route ; or take route .figure [ fig2 ] shows s decision tree for these choices , with each leading to a random node , with each leading to an outcome ( for success ) and ( for failure ) , . here is the event that an ied is not encountered on any link of the route , and the event that an ied is encountered .if is aware of any route clearing activity , then this becomes a part of s covariates used to assess probabilities .the presence of an ied does not necessarily imply an explosion .unexploded ieds cause disruptions , and s aim is to choose that route which minimizes the risk of damage and disruption .s decision tree for nonsequential actions . ] in figure [ fig2 ] , and denote s probabilities for success and failure , and and , s utilities under .the quantities , , , and pertain to ; similarly , for .assessing utilities is a substantive task [ cf .singpurwalla ( ) ] entailing rewards , penalties , and attitudes to risk .this task is not pursued here .however , one often assumes binary loss functions , so that and . per the principle of _ maximization of expected utility _, chooses that for which the expected utility is a maximum .thus , at each , computes , for ,=p_{i}(s)u(d_{i},s)+p_{i}(f)u(d_{i},f),\]]and chooses that which maximizes ] .thus , after three successive failures gives more and more weight to larger values of , suggesting an absence of s complacence with a long series of failures .the specification of the likelihoods as embodied in equations ( [ eq4.4 ] ) and ( [ eq4.5 ] ) is a novel feature of this paper ; it is a possible approach to_ adversarial modeling_. an assessment of the posterior of in the light of known covariates and the historical data is developed in two stages .the challenge here is with the specification of the likelihood ._ stage i : logistic regression for extracting the information in . _information provided by lies in an assessment of the posterior of , where appears in a logistic regression model , with .recall , and are the row of . using standard but computationally intensive simulation procedures , we can obtain the posterior of in light of .denote this posterior as . _stage ii : the likelihood of under and ._ to assess the posterior , invoke bayes law to write is the likelihood of in light of the known and , and is s prior for .note that and are specific to link , whereas is common to all the links of the network .the prior on could be any suitable distribution , such as a beta distribution over .the main theme of stage ii , however , is a development of the likelihood . 
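Stage I above calls for the posterior of the logistic-regression coefficients given the historical matrix; the paper only notes that "standard but computationally intensive simulation procedures" are used (Gibbs sampling appears later, in Section 5.1). As a stand-in, the sketch below draws from that posterior with a random-walk Metropolis sampler, using the X and y arrays built from Table 1 in the previous sketch and independent zero-mean Gaussian priors on the coefficients; the prior standard deviation, proposal step, and iteration counts are illustrative choices, not values from the paper:

import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, X, y, prior_sd=5.0):
    # Log posterior of a Bayesian logistic regression with N(0, prior_sd^2) priors.
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli log-likelihood, logit link
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return loglik + logprior

def metropolis(X, y, n_iter=20000, step=0.3):
    # Random-walk Metropolis over the coefficient vector beta.
    k = X.shape[1]
    beta = np.zeros(k)
    lp = log_posterior(beta, X, y)
    draws = np.empty((n_iter, k))
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(k)
        lp_prop = log_posterior(prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        draws[i] = beta
    return draws[n_iter // 2:]        # drop the first half as burn-in

beta_draws = metropolis(X, y)
print("posterior means of beta:", beta_draws.mean(axis=0).round(2))

The retained draws beta_draws stand in for the posterior of beta given the historical data that Stage II requires.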
whereas likelihoods may be subjectively specified, the conventional method is to invert a probability model by juxtaposing the parameter(s ) and the random variables .this is the strategy we use , but to do so we need a probability model for with and as background information .since depends on , we denote this dependence by replacing with .thus , we seek a probability model for with as a background , namely , ] has probability . however , per the logistic regression model,=\frac{1}{1+\exp ( -% \sum _ { u}^{k}z_{u}\beta _ { u}^{\ast } ) } , \]]where appears as the element of .to summarize , the event =1/[1+\exp ( -\sum z_{u}\beta _ { u}^{\ast } ) ] ] .consequently , a plot of versus provides the required likelihood function . to implement this idea , we sample a from to obtain=\frac{1}{1+\exp ( -% \sum _ { u}^{k}z_{u}\beta _ { u}^{\ast } ) } , \]]and also .a plot of versus is then the likelihood function of in light of and ; see figure [ fig4 ] . with and known .] with the prior on specified , and the likelihood induced via a logistic regression model governing and , the desired posterior be numerically assessed .once the above is done , all the necessary ingredients for obtaining equation ( [ eq4.3 ] ) , which can now be written as at hand .the above expression can be numerically evaluated .for both the nonsequential and sequential protocols wherein the poc is upheld , we need to assess conditional probabilities of the type , where links and are adjacent to each other , andtraversing on precedes that on .there are two possible strategies .the first one is for to subjectively change the assessed by either increasing it because an insurgent might find it easy to populate neighboring links with ieds , or to decrease it if thinks that an insurgent has limited resources for placing ieds .the second approach is less subjective because it incorporates data on ied placements or nonplacements on neighboring links .the idea here is to treat the conditioning event as a covariate , so that the vectors and of sections [ sec4.1 ] and [ sec4.3 ] get expanded by an additional term , as and .correspondingly , the matrix of section [ sec4.1 ] also gets expanded to include an additional column whose term is whenever there has been an ied experience in a preceding link ; otherwise is . with the above in place , a repeat of the exercise described in section [ sec4.3 ] would enable a formal assessment of the conditional probabilities .the only other matter that remains to be addressed pertains to the likelihood of as discussed in section [ sec4.2 ] .since the likelihood is a weight assigned to the posterior of , may either increase the of equations ( [ eq4.4 ] ) and ( [ eq4.5 ] ) , or decrease it depending on what thinks of an insurgent s abilities and resources . 
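The induced likelihood of Figure 4 can be mechanized directly from those posterior draws: for each draw beta*, compute p* = 1/(1+exp(-sum_u z_u beta_u*)) together with the (unnormalized) posterior density at beta*, and smooth the resulting scatter of density against p*. The sketch below is one simple reading of that recipe: it bins the pairs on a grid in p and averages the density within each bin, which plays the role of the smoothed plot. The covariate vector z is taken as an intercept of 1 plus a one-mile distance to each of the four landmarks, matching the later description of the "new bridge"; treating the unnormalized posterior density as the plotted weight is our interpretation of the construction, not a detail stated in the paper:

import numpy as np

# Covariate vector for the forthcoming crossing: intercept plus one mile to each landmark.
z = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
p_star = 1.0 / (1.0 + np.exp(-(beta_draws @ z)))                # p* for every posterior draw
logdens = np.array([log_posterior(b, X, y) for b in beta_draws])
dens = np.exp(logdens - logdens.max())                          # unnormalized posterior density
grid = np.linspace(0.0, 1.0, 101)                               # grid over p
centres = 0.5 * (grid[:-1] + grid[1:])
idx = np.digitize(p_star, grid) - 1
lik_region = np.zeros(len(centres))
for j in range(len(centres)):
    sel = idx == j
    if sel.any():
        lik_region[j] = dens[sel].mean()                        # smoothed density-versus-p* curve
lik_region /= lik_region.max()                                  # overall scale is irrelevant for a likelihood

Here lik_region is the region-wide likelihood of p induced by the historical data and the covariates of the upcoming trip, the second ingredient of equation (4.7).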
would increase if feels that the insurgent s resources are plentiful ; otherwise downgrades .equation ( [ eq4.7 ] ) shows how can assess , the probability of one or more ied placements on link in a unified manner by a systematic application of the bayesian approach .it entails a fusion of information on past ied experience on link ( encapsulated by ) , historical data on ied experience in the region ( encapsulated by the matrix ) , and s subjective views about , encapsulated via the likelihood and the prior .the essence of equation ( [ eq4.7 ] ) is that its right - hand side is the expected value of a weighted prior distribution of .the weighting of the prior is by the product of two likelihoods , one reflecting historical ied experience specific to link , and the other reflecting historical ied experience in the region as well as the relevant covariates specific to the forthcoming trip contemplated by .the entire development being grounded in the calculus of probability is therefore _coherent_. though cumbersome to plough through , there are novel features to the two likelihoods .the first likelihood equations ( [ eq4.4 ] ) and ( [ eq4.5])is an unconventional likelihood for use with bernoulli trials .it is motivated by socio - psychological considerations attributed to both the insurgents who place the ied s , as well as to , who does not become complacent upon a sequence of successful crossings and who upon the occurrence of the first failure adopts the posture of extreme caution . the second likelihood that of figure [ fig4]is induced in an unusual manner by leaning on the posterior distribution of the parameter vector of a logistic regression .the approach of section [ sec4 ] displays the manner in which information from different sources can be fused by decomposing the likelihood of .equation ( [ eq4.7 ] ) shows this .the material of section [ sec4 ] feeds into that of section [ sec3 ] which pertains to sequential and nonsequential decision making under uncertainty .the computational and simulation work spawned by section [ sec4 ] entails logistic regression , generating -dimensional samples from the posterior distribution of , numerically assessing ( [ eq4.6])and numerical integration to obtain ( [ eq4.7 ] ) .none of these pose any obstacles .section [ sec4.4 ] pertains to conditional probabilities .it expands on sections [ sec4.1 ] through [ sec4.3 ] , by treating the conditioning events as covariates .the one major obstacle pertains to the paucity of the data for validating the approach . the required data , namely , , , and , are available to the military logisticians , but are almost always classified .the wikileaks data tend to focus on ied explosions and not on success stories wherein ied s get cleared , similarly with other publicly available data .information that is relevant to constructing the likelihood based on socio - psychological considerations is highly individualized , and perhaps not even recorded .it is _ desirable _ to collect this kind of information via experiments pertaining to the psychology of logisticians and route planners , and also insurgents via what is known as `` red teaming . ''the text of this paper can be seen as a template for addressing network routing in a dynamic environment .the network architecture of figure [ fig1 ] brings out the necessary caveats that problems of this type pose , one such caveat being the caveat of conditionalization , discussed in section [ sec3.2.1 ] .real logistical networks are more elaborate . 
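Numerically, the "expected value of a weighted prior" reading of equation (4.7) is a one-dimensional quadrature over p: prior on p, times the link-specific likelihood of the crossing history, times the region-wide likelihood induced above. The sketch below uses a uniform prior and, as a deliberately simplified stand-in for the weighted Bernoulli likelihood of equations (4.4)-(4.5), a plain Bernoulli term (1-p)^n for the n = 4 attack-free crossings on the "new bridge"; with the paper's actual weights and prior the number would differ (the paper reports 0.306 for this link). It reuses centres and lik_region from the previous sketch:

import numpy as np

n_safe = 4                               # four previous crossings on link 9, none attacked
prior = np.ones_like(centres)            # uniform prior on p over (0, 1)
lik_link = (1.0 - centres) ** n_safe     # simplified stand-in for eqs (4.4)-(4.5)
weight = prior * lik_link * lik_region   # integrand of the weighted prior in eq (4.7)
dp = centres[1] - centres[0]
posterior_p = weight / (weight.sum() * dp)
p_hat = float(np.sum(centres * posterior_p) * dp)
print("assessed P(IED on the next crossing of link 9) ~", round(p_hat, 3))

The assessed value then feeds the decision architecture of Section 3: with route success probabilities built from such per-link assessments, the decision maker simply selects the route maximizing the expected utility p_i(S)u(d_i,S) + p_i(F)u(d_i,F).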
in actual practice the matrix have a very large dimension and thus be unmanageable .however , given the role that plays , one may simply sample from a high dimensional to work with a more manageable matrix .besides a prior for , , all that is required of are the utilities mentioned in section [ sec3 ] .however , these utilities are proxies for costs , and no form of optimization can be achieved without cost considerations .finally , this paper shows how statistical methodologies can be constructively brought to bear in network routing problems which generically belong in the domain of computer science , network analysis , and operations research .we close this paper by illustrating in section [ sec5.1 ] the workings of sections [ sec3 ] and [ sec4 ] by using the data of table [ tab1 ] to assess the probability of encountering an ied on the next crossing on the `` new bridge . '' with respect to the network of figure [ fig1 ] , the data of table [ tab1 ] maps to the matrix of section [ sec4.1 ] , with its column 2 corresponding to column 3 corresponding to and so on , with column 6 corresponding to .a logistic regression model , with , was fitted to the data of table [ tab1 ] using independent gaussian priors with means and standard deviations .this choice of priors is arbitrary .the joint posterior distribution of was obtained via gibbs sampling with 10,000 simulations after a burn - in of 1,000 simulations .the marginal posterior distributions of , and were symmetric looking , but those of and were skewed to the left ; plots of these distributions are not shown .table [ tab2 ] compares posterior means against their maximum likelihood estimates , showing a good agreement between the two , save for ..comparison of bayes versus maximum likelihood estimates [ cols="<,^,^,^,^,^",options="header " , ] about 60 samples from the joint posterior distribution of were generated , and for each sample , the quantity ^{-1}$ ] computed .here , suggesting that the next crossing is to be on the new bridge which is one mile away from all the four city centers of interest .associated with each generated sample is also the probability of the sample ; this is provided by the joint probability density .figure [ fig5 ] shows a plot of the computed quantity mentioned above [ our of section [ sec4.3 ] ] versus the joint probability .a smoothed plot , smoothed by a moving average of five consecutive points , is the monte carlo induced likelihood .since the new bridge has experienced 4 previous crossings and none of these crossings have experienced an ied attack , ; thus , , see equation ( [ eq4.4 ] ) . with the above in place , all the ingredients needed to compute ( [ eq4.7])are at hand , save for the prior . supposing uniform on , we have given by the likes of figure [ fig5 ] .this can be numerically evaluated for a range of , say , , to obtain .similarly , we obtain .the normalizing constant is , giving and .thus , the probability of encountering an ied on the next crossing on the `` new bridge '' is 0.306 . in order to prescribe an optimal route for the network of figure [ fig1 ], we need to calculate the probability of encountering an ied on each of the remaining 9 links of the network in a manner akin to that given above for link 9 , the `` new bridge . 
''this requires that we have the vectors and for each of these links , where is the historic ied experience for a link , and is the vector of covariates associated with the links .this we do not have and are unable to obtain for reasons of security .consequently , and purely with the intent of illustrating how our decision theoretic framework can be put to work , we shall make some meaningful specifications about the s , .these will be based on the relative lengths of each link , relative to the length of link 9 for which has been assessed as 0.306 ; that is , calibrate the required s in terms of .to do the above , we start by remarking that links 1 and 2 are of almost equal length , and are about two - thirds the length of link 9 .links 3 to 8 are of equal length and are about one - fifth the length of link 9 , whereas link 10 is about half the length of link 9 .note that figure [ fig1 ] is not drawn to scale .thus , we set , and .these choices are purely illustrative ; we could have used other methods of scaling such as the logarithmic or the square root .in addition to specifying the s , we also need to specify utilities .for this we propose a utility function of the form for a successful route traversal . here is the number of links in the route , and is a constant which ensures that a successful traversal does not result in a negative utility .specifically , the idea here is that a successful traversal yields a utility of one , but each link in the route contributes to a disutility to which is assigned a weight .choice entails the route and with chosen to be 100 , the utility of a successful traversal on this route will be .similarly , the failure to achieve a successful traversal yields a utility of , yielding a negative utility of , which in the case of route with is . the above choices for utility do not take into consideration things such as composition of the convoys , traversal time , vicinity to hostile territory , costs of disruption , etc . with the above in place , and assuming independence of the ied placement events , it can be easily seen that the expected utilities of choices , , and are 0.414 , 0.361 , and 0.430 , respectively .thus , for the given choices of probabilities and utilities , s optimal route will be , which is . observe that neither the shortest nor the longest routes are optimal .sensitivity of s final choice to values of other than 100 can be explored .for example , were taken to be 10 , then will turn out to be s optimal choice .this is because it turns out the probability of a successful traversal via choices , , and turns out to be rather close to each other , namely , 0.444 , 0.441 , and 0.480 , respectively .this completes our discussion on illustrating the workings of the proposed approach vis - - vis the network of figure [ fig1 ] , and closes the paper .the author was exposed to the ied problem by professors robert koyak , lynn whittaker , and ( col . 
) alejandro hernandez of the naval postgraduate school ( nps ) , in monterey , ca .joshua landon s help with the computations and simulations of section [ sec5.1 ] is deeply acknowledged .anna gordon painstakingly generated the data of table [ tab1 ] , whose source was made available to us by dr .robert bonneau of the air force office of scientific research .the several helpful comments by the referees , the editor , professor fienberg , and the fienberg - thomas paper have enabled the author to cast the problem of route selection in a broader context .work on this paper began when the author was a visitor at nps during the summer of 2008 . | recently , there has been an explosion of work on network routing in hostile environments . hostile environments tend to be dynamic , and the motivation for this work stems from the scenario of ied placements by insurgents in a logistical network . for discussion , we consider here a sub - network abstracted from a real network , and propose a framework for route selection . what distinguishes our work from related work is its decision theoretic foundation , and statistical considerations pertaining to probability assessments . the latter entails the fusion of data from diverse sources , modeling the socio - psychological behavior of adversaries , and likelihood functions that are induced by simulation . this paper demonstrates the role of statistical inference and data analysis on problems that have traditionally belonged in the domain of computer science , communications , transportation science , and operations research . . |
the non - brownian scaling of the mean squared displacement ( msd ) of a diffusing particle of the power - law form is a hallmark of a wide range of anomalous diffusion processes .equation ( [ msd ] ) features the anomalous diffusion coefficient of physical dimension and the anomalous diffusion exponent . depending on its magnitude we distinguish subdiffusion ( ) and superdiffusion ( ) .interest in anomalous diffusion processes was rekindled with the advance of modern spectroscopic methods , in particular , advanced single particle tracking methods .thus , subdiffusion was observed for the motion of biopolymers and submicron tracer particles in living biological cells , in complex fluids , as well as in extensive computer simulations of membranes or structured systems , among others .superdiffusion of tracer particles was observed in living cells due to active motion .anomalous diffusion processes characterised by the msd ( [ msd ] ) may originate from a variety of distinct physical mechanisms .these include a power - law statistic of trapping times in the continuous time random walks ( ctrws ) as well as related random energy models and ctrw variants with correlated jumps or superimposed environmental noise .other models include random processes driven by gaussian yet power - law correlated noise such as fractional brownian motion ( fbm ) or the fractional langevin equation . closely related to these models is the subdiffusive motion on fractals such as critical percolation clusters .finally , among the popular anomalous diffusion models we mention heterogeneous diffusion processes with given space dependencies of the diffusion coefficient as well as processes with explicitly time dependence diffusion coefficients , in particular , the scaled brownian motion ( sbm ) with power - law form analysed in more detail herein .also combinations of space and time dependent diffusivities were investigated .space and/or time dependent diffusivities were used to model experimental results for smaller tracer proteins in living cells and anomalous diffusion in biological tissues including brain matter . in particular, sbm was used to describe fluorescence recovery after photobleaching in various settings as well as anomalous diffusion in various biophysical contexts . in other branches of physics sbmwas used to model turbulent flows observed by richardson as early as 1952 by batchelor . moreover ,the diffusion of particles in granular gases with relative speed dependent restitution coefficients follow sbm .we note that in the limiting case the resulting process is ultraslow with a logarithmic growth of the msd known from processes such as sinai diffusion , single file motion in ageing environments , or granular gas diffusion with constant restitution coefficient . in the followingwe study the ergodic properties of sbm in the boltzmann - khinchin sense , finding that even long time averages of physical observables such as the msd do not converge to the corresponding ensemble average .in particular we compute the ergodicity breaking parameter eb characterising the trajectory - to - trajectory fluctuations of the time averaged msd in the entire range of the scaling exponents , both analytically and from extensive computer simulations .we generalise the results for the ergodic properties of sbm in the presence of ageing , when we start to evaluate the time average the msd a finite time span after the initiation of the system .the paper is organised as follows . 
in section [ sec - observables ]we summarise the observables computed and provide a brief overview of the basic properties of sbm . in section [ sec - model - simul ]we describe the theoretical concepts and numerical scheme employed in the paper .we present the main results for the eb parameter of non - ageing and ageing sbm in detail in sections [ sec - non - aged ] and [ sec - aged ] . in section [ sec - disc ] we summarise our findings and discuss their possible applications and generalisations .we define sbm in terms of the stochastic process where is white gaussian noise with zero mean and unit amplitude . the time dependent diffusion coefficient is taken as where we require the positivity of the scaling exponent , .sbm is inherently out of thermal equilibrium in confining external potentials .let us briefly outline the basic properties of the sbm process .the ensemble averaged msd of sbm scales anomalously with time in the form of equation ( [ msd ] ) . here andbelow we use the standard definition of the time averaged msd ^ 2dt,\ ] ] where is the lag time , or the width of the window slid along the time series in taking the time average ( [ eq - tamsd ] ) .moreover , is the total length of the time series .we denote ensemble averages by the angular brackets while time averages are indicated by the overline .often , an additional average of the form is performed over realisations of the process , to obtain smoother curves . from a mathematical point of view , this trajectory average allows the calculation of the time averaged msd for processes , which are not self - averaging both quantities ( [ eq - tamsd ] ) and ( [ eatamsd ] ) are important in the analysis of single particle trajectories measured in advanced tracking experiments .for sbm the mean time averaged msd ( [ eatamsd ] ) grows as }{(\alpha+1)(t-\delta)}. \label{eq - sbm - tamsd}\ ] ] in the limit , the time averaged msd scales linearly with the lag time , sbm is thus a weakly non - ergodic process in bouchaud s sense : the ensemble and time averaged msds are disparate even in the limit of long observation times , and thus violate the boltzmann - khinchin ergodic hypothesis , while the entire phase space is accessible to any single particle .moreover , the magnitude of the time averaged msd becomes a function of the trace length .analogous asymptotic forms for the mean time averaged msd ( [ eatamsd ] ) are found in subdiffusive ctrw processes and heterogeneous diffusion processes , see also the extensive recent review .note that also much weaker forms of non - ergodic behaviour exist for lvy processes .another distinct feature of weakly non - ergodic processes of the subdiffusive ctrw and heterogeneous diffusion type is the fact that time averaged observables remain random quantities even in the long time limit and thus exhibit a distinct scatter of amplitudes between individual realisations for a given lag time .this irreproducibility due to the scatter of individual traces around their mean is described by the ergodicity breaking parameter where .moreover , we introduced the abbreviations and for the nominator and denominator of eb , respectively .this notation will be used below . 
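The definitions above translate directly into a numerical experiment. Because SBM has independent Gaussian increments whose variance follows from the time-dependent diffusivity, trajectories can be generated exactly (no discretization error), and the time averaged MSD of equation ([eq-tamsd]) is a sliding-window average along each path. The Python sketch below does this and compares the trajectory-averaged result with the closed form of equation ([eq-sbm-tamsd]), reconstructed here from the increment variance; its denominator (alpha+1)(T-Delta) is visible in the text. The MSD convention <x^2(t)> = 2 K_alpha t^alpha is assumed, so that D(t) = alpha K_alpha t^(alpha-1); other prefactor conventions only rescale K_alpha, and the parameter values are arbitrary illustrations:

import numpy as np

rng = np.random.default_rng(1)

def simulate_sbm(alpha, K=0.5, T=1000.0, dt=1.0, n_traj=200):
    # SBM paths on [0, T]: independent Gaussian increments with the exact variance
    # <[x(t2)-x(t1)]^2> = 2*K*(t2**alpha - t1**alpha), which follows from
    # D(t) = alpha*K*t**(alpha-1) and avoids the t -> 0 singularity for alpha < 1.
    t = np.arange(0.0, T + dt, dt)
    var = 2.0 * K * np.diff(t ** alpha)
    steps = rng.standard_normal((n_traj, len(var))) * np.sqrt(var)
    x = np.concatenate([np.zeros((n_traj, 1)), np.cumsum(steps, axis=1)], axis=1)
    return t, x

def tamsd(x, lag, dt=1.0):
    # Time averaged MSD for every trajectory at one lag time.
    k = int(round(lag / dt))
    disp = x[:, k:] - x[:, :-k]
    return np.mean(disp ** 2, axis=1)

alpha, K, T = 0.5, 0.5, 1000.0
t, x = simulate_sbm(alpha, K, T)
for lag in (1.0, 10.0, 100.0):
    sim = tamsd(x, lag).mean()
    theory = 2 * K * (T**(alpha + 1) - lag**(alpha + 1) - (T - lag)**(alpha + 1)) \
             / ((alpha + 1) * (T - lag))
    print(f"lag {lag:6.1f}:  <TAMSD>_sim = {sim:8.3f}   closed form = {theory:8.3f}")

For small lag times the closed form reduces to the linear-in-lag behaviour discussed above, which is the signature of the weak non-ergodicity of SBM.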
for brownian motion in the limit the eb parameter vanishes linearly with in the form in contrast to subdiffusive ctrw and heterogeneous diffusion processes , the eb parameter of sbm vanishes in the limit and in this sense the time averaged observable becomes reproducible .we demonstrate the small amplitude scatter of sbm in figure [ fig - tamsd - aged ] , for a detailed discussion see below .we note that the scatter of the time averaged msd of sbm around the ergodic value becomes progressively asymmetric for smaller values and in later parts of the time averaged trajectories , see fig . 6 of reference .in the following we derive the exact analytical results for the eb parameter of sbm and support these results with extensive computer simulations .moreover we extend the analytical and computational analysis of the eb parameter to the case of the ageing sbm process when we start evaluating the time series at the time after the original initiation of the system at . for several values of the scaling exponents and ageing times .the asymptotic behaviour of equation ( [ eq - aged - sbm - delta-2-t - delta ] ) is shown by the black solid lines .parameters : , , , , and traces are shown.,title="fig:",width=302 ] for several values of the scaling exponents and ageing times .the asymptotic behaviour of equation ( [ eq - aged - sbm - delta-2-t - delta ] ) is shown by the black solid lines .parameters : , , , , and traces are shown.,title="fig:",width=302 ] for several values of the scaling exponents and ageing times .the asymptotic behaviour of equation ( [ eq - aged - sbm - delta-2-t - delta ] ) is shown by the black solid lines .parameters : , , , , and traces are shown.,title="fig:",width=302 ] for several values of the scaling exponents and ageing times .the asymptotic behaviour of equation ( [ eq - aged - sbm - delta-2-t - delta ] ) is shown by the black solid lines .parameters : , , , , and traces are shown.,title="fig:",width=302 ] the time averaged msd of an ageing stochastic process is defined as ^ 2dt\ ] ] and thus again involves the observation time .the properties ageing sbm were considered recently .the mean time averaged msd becomes .\label{eq - aged - sbm - delta-2-t - delta}\end{aligned}\ ] ] the ratio of the aged versus the non - ageing time averaged msd in the limit has the asymptotic form this functional form is identical to that obtained for subdiffusive ctrws and heterogeneous diffusion processes .the factor quantifies the respective depression and enhancement of the time averaged msd for the cases of ageing sub- and superdiffusive sbm .figure [ fig - tamsd - aged ] shows the time averaged msd of individual sbm traces for the case of weak , intermediate , and strong ageing for different values of .we observe that the spread of individual changes only marginally with progressive ageing times .also the changes with the scaling exponent are modest , compare figure [ fig_phi ] . also note that the magnitude of the time averaged msd decreases with for ultraslow sbm at , stays independent on for brownian motion at , and increases with the ageing time for superdiffusive processes at .these trends are in agreement with the theoretical predictions of equation ( [ eq - aged - sbm - delta-2-t - delta ] ) shown as the solid lines in figure [ fig - tamsd - aged ] . 
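Ageing enters such a simulation only through where the averaging window sits: the same trajectories are analysed starting at the ageing time rather than at zero. The sketch below, reusing simulate_sbm from the previous example, measures the aged time averaged MSD and compares its depression or enhancement with the small-lag aged-to-non-aged ratio Lambda = (1 + ta/T)^alpha - (ta/T)^alpha; this ratio follows from integrating the SBM increment variance over the shifted window and has the CTRW-like functional form referred to in the text. Parameter values are again illustrative:

import numpy as np

def aged_tamsd(x, ta, T, lag, dt=1.0):
    # Time averaged MSD evaluated only over the window [ta, ta + T].
    i0 = int(round(ta / dt))
    i1 = int(round((ta + T) / dt))
    k = int(round(lag / dt))
    seg = x[:, i0:i1 + 1]
    disp = seg[:, k:] - seg[:, :-k]
    return np.mean(disp ** 2, axis=1)

alpha, K, T, lag = 0.5, 0.5, 1000.0, 10.0
for ta in (0.0, 1000.0, 10000.0):
    _, x = simulate_sbm(alpha, K, T=ta + T)     # paths long enough to cover [0, ta + T]
    sim = aged_tamsd(x, ta, T, lag).mean()
    ratio = (1 + ta / T) ** alpha - (ta / T) ** alpha   # small-lag aged/non-aged ratio
    print(f"ta = {ta:7.0f}:  <aged TAMSD> = {sim:7.3f}   Lambda = {ratio:5.3f}")

For alpha < 1 the ratio is below unity (ageing depresses the time averaged MSD), it equals one for Brownian motion, and it exceeds one for superdiffusive SBM, in line with the trends seen in figure [fig-tamsd-aged].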
of the relative amplitude of the time averaged msd traces for sbm processes with different scaling exponents as indicated in the panels .as expected , the spread grows and the distribution becomes more leptokurtic at longer lag times . for progressivelylarger values of the scaling exponent the spread of time averaged msd decreases but stays asymmetric with a longer tail at larger values . in particular , for and 2 the shape is almost indistinguishable at , see the bottom right panel .the trace length is and the number of traces used for averaging is .,title="fig:",width=279 ] of the relative amplitude of the time averaged msd traces for sbm processes with different scaling exponents as indicated in the panels .as expected , the spread grows and the distribution becomes more leptokurtic at longer lag times . for progressivelylarger values of the scaling exponent the spread of time averaged msd decreases but stays asymmetric with a longer tail at larger values . in particular , for and 2 the shape is almost indistinguishable at , see the bottom right panel .the trace length is and the number of traces used for averaging is .,title="fig:",width=279 ] of the relative amplitude of the time averaged msd traces for sbm processes with different scaling exponents as indicated in the panels .as expected , the spread grows and the distribution becomes more leptokurtic at longer lag times .for progressively larger values of the scaling exponent the spread of time averaged msd decreases but stays asymmetric with a longer tail at larger values .in particular , for and 2 the shape is almost indistinguishable at , see the bottom right panel .the trace length is and the number of traces used for averaging is .,title="fig:",width=279 ] of the relative amplitude of the time averaged msd traces for sbm processes with different scaling exponents as indicated in the panels . as expected , the spread grows and the distribution becomes more leptokurtic at longer lag times . for progressivelylarger values of the scaling exponent the spread of time averaged msd decreases but stays asymmetric with a longer tail at larger values . in particular , for and 2 the shape is almost indistinguishable at , see the bottom right panel .the trace length is and the number of traces used for averaging is .,title="fig:",width=279 ][ sec - non - aged ] analytically , the derivation of the eb parameter for sbm involves the evaluation of the fourth order moment of the time averaged msd , we use the fundamental property of sbm that and the wick - isserlis theorem for the fourth order correlators .we then obtain the nominator of the eb parameter of equation ( [ eq - eb - via - xi ] ) taking the averages by help of equation ( [ eq - pair - corr ] ) we arrive at ^ 2 . \label{eq - eb - nominator - after - wick - after - averagng}\end{aligned}\ ] ] with the new variable ( assuming ) and by changing the order of integration we find the expression ^ 2 .\label{eq - eb - nominator}\end{aligned}\ ] ] now , the new variables and are introduced . 
substituting equation ( [ msd ] ) into equation([eq - eb - nominator ] ) we obtain .\label{eq - eb - nominator-2}\end{aligned}\ ] ] splitting the double integral over the variable into an integral over a square region and a triangular region yields from the double integrals from the power - law functions in equation ( [ eq - eb - nominator-2 ] ) , via equation ( [ eq - pair - corr ] ) we compute the nominator as ,\end{aligned}\ ] ] in terms of the variable the integral remaining in the last term of this expression can , in principle , be represented in terms of the incomplete beta - function .the denominator of the eb parameter ( [ eq - eb - via - xi ] ) is just the squared time averaged msd given by equation ( [ eq - sbm - tamsd ] ) .we thus arrive at the expression ^ 2.\end{aligned}\ ] ] note that the double analytical integration of equation ( 9 ) in via wolfram mathematica yields a result , that is indistinguishable from equation ( [ eq - eb - sbm - analyt ] ) , as demonstrated by the blue dots in figure [ fig - eb]b . ) and ( [ eq - eb - sbm - analyt1 ] ) are given by the solid coloured lines .data points for different lag times are shown in different colours .the values of eb for ultraslow sbm ( [ eq - eb - usbm ] ) at and at given by equation ( [ eq - eb-1-over-2 ] ) are shown as the bigger black bullets , computed for , , and .the larger orange bullets denote the same limits but without the additive constants to the leading functional dependencies with .parameters : the trace length is , the number of traces used for averaging at each value is .( b ) exact and approximate analytical results for eb .the red , green , and blue curves are the exact evaluations of equation ( [ eq - eb - sbm - analyt ] ) .the dashed curve in the region corresponds to equation ( [ eq - eb - sbm ] ) and the dashed curves for are the results of .the magenta curves in the region are according to the analytical expansion ( [ eq - eb-0-to-05 ] ) for given values .the dark blue data points , coinciding with our exact result ( [ eq - eb - sbm - analyt ] ) , follow from evaluating the double integral in equation ( 9 ) of with mathematica.,width=604 ] we here consider some limiting cases of the eb parameter based on expressions ( [ eq - eb - sbm - analyt ] ) and ( [ eq - eb - sbm - analyt ] ) . in the limit and for leading order expansion in terms of turns into equation ( [ eq - eb - bm ] ) .as it should the sbm process reduces to the ergodic behaviour of standard brownian motion .the general expression for the behaviour of the eb parameter in the range follows from equation ( [ integral ] ) by help of the identity [ equation ( 1.2.2.1 ) in ref . ] that can be checked by straight differentiation . performing this sort of partial integration three timeswe reduce the power of the integrand so that in the limit the integral becomes a converging function . 
in the range we the find exact expression the remaining converging integral can be represented in the limit via the beta function : setting the upper integration limit we obtain then we arrive at the following scaling law for the eb parameter , where the coefficient is given by the scaling form of eb versus of equation ( [ eq - eb-0-to-05 ] ) coincides with that proposed in reference , and it is indeed valid for vanishing and scaling exponents not too close to and , see below .we find in addition that in the region the eb parameter of the sbm process becomes a sensitive function of the lag time , as shown in figure [ fig - eb]a , both from our theoretical results and computer simulations .this means that no universal rescaled variable exists , as is the case for standard brownian motion .the asymptote ( [ eq - eb-0-to-05 ] ) agrees with the result ( 10 ) in in the range of the scaling exponent and for infinitely large values .equation ( [ eq - calfa - coeff ] ) above provides an explicit form for the prefactor . in figure[ fig - eb]b the approximate expansion ( [ eq - eb-0-to-05 ] ) is shown as magenta curve . at realistic values the asymptote ( [ eq - eb-0-to-05 ] ) agrees neither with our exact expression ( [ eq - eb - sbm - analyt ] ) nor with the simulation data .as this demonstrates the exact expression ( [ eq - eb - sbm - analyt ] ) needs to be used a forteriori .the main reason is the finite value used in the simulations : for very small equation ( [ eq - eb-0-to-05 ] ) describes the exact result ( [ eq - eb - sbm - analyt ] ) significantly better ( not shown ) .we note that away from the critical points at and , equation ( [ eq - eb-0-to-05 ] ) returns zero and infinity , respectively ( magenta curves in figure [ fig - eb]b ) . at these pointsspecial care is required when computing in equation ( [ i1-alfa - smaller-05 ] ) , as discussed below . for values of the scaling exponent in the limit of small denominator ( [ eq - eb - sbm - analyt1 ] ) becomes .note that here we need to include two more iterations of the integral in the last term of equation ( [ i1-alfa - smaller-05 ] ) by using equation ( [ eq - prud - formula ] ) .then we arrive at a new integral term that is converging at .thus the nominator ( [ eq - eb - sbm - analyt])after cancellation of the first three orders in the expansion in terms of large to leading order ] .consequently the eb parameter to leading order is independent of the ageing time and follows equation ( [ eq - eb - sbm ] ) as long as .figure [ fig - eb - aged ] shows the simulations results based on the stochastic langevin process of ageing sbm .we find that in the limit of strong ageing , consistent with our theoretical results the eb of ageing sbm indeed approaches the brownian limit ( [ eq - eb - bm ] ) . for weak and intermediate ageing the general eb expression ( [ eq - eq - sbm - final - nominator - aged ] )is in good agreement with the simulations results , compare the data sets in figure [ fig - eb - aged ] .finally figure [ fig - eb - ta ] depicts the graph of eb versus ageing time explicitly , together with the theoretical results ( [ eq - eq - sbm - final - nominator - aged ] ) and ( [ eq - aged - sbm - denomi - via - tau ] ) . we observe that eb decreases with the ageing time and this reduction is particularly pronounced for strongly subdiffusive sbm processes .the latter also feature some instabilities upon the numerical solution of the stochastic equation for long ageing times . 
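The analytical expressions above can also be cross-checked by brute force: generate many trajectories, compute the time averaged MSD of each, and form EB = <(TAMSD)^2>/<TAMSD>^2 - 1. The sketch below does this for several scaling exponents at a fixed ratio of lag time to trace length, reusing simulate_sbm and tamsd from the earlier example, and prints the well-known Brownian reference value 4*lag/(3*T) alongside; trajectory numbers and parameters are modest, illustrative choices, so the estimates carry visible Monte Carlo error:

import numpy as np

def eb_parameter(alpha, K=0.5, T=1000.0, lag=10.0, n_traj=2000):
    # Monte Carlo estimate of EB = <(TAMSD)^2>/<TAMSD>^2 - 1 for SBM.
    _, x = simulate_sbm(alpha, K=K, T=T, n_traj=n_traj)
    d2 = tamsd(x, lag)
    return np.mean(d2 ** 2) / np.mean(d2) ** 2 - 1.0

T, lag = 1000.0, 10.0
print("Brownian reference 4*lag/(3*T) =", 4 * lag / (3 * T))
for alpha in (0.25, 0.5, 0.75, 1.0, 1.5, 2.0):
    print(f"alpha = {alpha:4.2f}:  EB = {eb_parameter(alpha, T=T, lag=lag):.4f}")

Such a direct estimate is useful as an independent check of the exact expressions, in particular near alpha = 1/2 where the approximate asymptotics are least reliable.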
)are represented by the solid lines of the corresponding colour .parameters : , , and .,width=453 ] . analytical results ( [ eq - eq - sbm - final - nominator - aged ] ) and ( [ eq - aged - sbm - denomi - via - tau ] ) for different values are represented by the solid lines .some instabilities in the simulations are visible at long ageing times , in particular for small .parameters : , , and .,width=453 ]we here studied in detail the ergodic properties of sbm with its power - law time dependent diffusivity .in particular , we derived the higher order time averaged moments and obtained the ergodicity breaking parameter of sbm , which quantifies the degree of irreproducibility of time averaged observables of a stochastic process . for the highly non - stationary , out - of - equilibrium sbm process we analysed the eb parameter with respect to the scaling exponent , the lag time , and the trace length .we revealed a non - monotonic dependence .in particular , we showed that there is no divergence at , in contrast to the approximate results of .we also obtained a peculiar dependence for the eb dependence on the trace length , for and for , in agreement with .we also obtained analytical and numerical results for eb for ageing sbm as function of the model parameters and the ageing time .our exact analytical results are fully supported by stochastic simulations .we find that over the range and for the eb dependence on the lag time and trace length involves the universal variable , as witnessed by equation ( [ eq - eb - sbm ] ) . for arbitrary lag times and trace lengths the general result for ageing and non - ageing sbm are , however , more complex , see equations ( [ eq - eb - sbm - analyt ] ) and ( [ eq - eq - sbm - final - nominator - aged ] )these are the main results of the current work . for strongly subdiffusive sbm in the range of exponents ergodic properties are , in contrast , strongly dependent on the lag time .the correct limit of our exact result ( [ eq - eb - sbm - analyt ] ) was obtained for the eb parameter of ultraslow sbm with and for sbm with exponent .although eb has some additional logarithmic scaling at this point , it reveals no divergence as is approached .we are confident that the strategies for obtaining higher order time averaged moments developed herein will be useful for the analysis of other anomalous diffusion processes , in particular for the analysis of finite time corrections of eb for fractional brownian motion or for processes with spatially and temporally random diffusivities .we acknowledge funding from the academy of finland ( suomen akatemia , finland distinguished professorship to rm ) , the deutsche forschungsgemeinschaft ( to agc , ims and ft ) , and the imu berlin einstein foundation ( to avc ) .99 c. bruchle , d. c. lamb , and j. michaelis , single particle tracking and single molecule energy transfer ( wiley - vch , weinheim , germany , 2012 ) ; x. s. xie , p. j. choi , g .- w .li , n. k. lee , and g. lia , annu .biophys . * 37 * , 417 ( 2008 ) .k. burnecki , e. kepten , j. janczura , i. bronshtein , y. garini , and a. weron , biophys .j. * 103 * , 1839 ( 2012 ) ; e. kepten , i. bronshtein , and y. garini , phys .e * 83 * , 041919 ( 2011 ) ; j .- h .jeon , v. tejedor , s. burov , e. barkai , c. selhuber - unkel , k. berg - srensen , l. oddershede , and r. metzler , phys .lett . * 106 * , 048103 ( 2011 ) ; s. m. a. tabei , s. burov , h. y. kim , a. kuznetsov , t. huynh , j. jureller , l. h. philipson , a. r. dinner , and n. f. 
scherer , proc .usa * 110 * , 4911 ( 2013 ) ; a. v. weigel , b. simon , m. m. tamkun , and d. krapf , proc .usa * 108 * , 6438 ( 2011 ) ; c. manzo , j. a. torreno - pina , p. massignan , g. j. lapeyre , jr . , m. lewenstein , and m. f. garcia parajo , phys .x * 5 * , 011021 ( 2015 ) . j. szymanski and m. weiss , phys .lett . * 103 * , 038102 ( 2009 ) ; g. guigas , c. kalla , and m. weiss , biophys .j. * 93 * , 316 ( 2007 ) ; w. pan , l. filobelo , n. d. q. pham , o. galkin , v. v. uzunova , and p. g. vekilov , phys .lett . * 102 * , 058101 ( 2009 ) ; j .- h .jeon , n. leijnse , l. b. oddershede , and r. metzler , new j. phys . * 15 * , 045011 ( 2013 ) .e. yamamoto , t. akimoto , m. yasui , and k. yasuoka , scient .rep . * 4 * , 4720 ( 2014 ) ; g. r. kneller , k. baczynski , and m. pasienkewicz - gierula , j. chem . phys . * 135 * , 141105 ( 2011 ) ; j .- h . jeon , h. martinez - seara monne , m. javanainen , and r. metzler , phys .lett . * 109 * , 188103 ( 2012 ) .a. godec , m. bauer , and r. metzler , new j. phys .* 16 * , 092002 ( 2014 ) ; l .- h .cai , s. panyukov , and m. rubinstein , macromol . * 48 * , 847 ( 2015 ) ; m. j. skaug , l. wang , y. ding , and d. k. schwartz , acs nano * 9 * , 2148 ( 2015 ) .a. caspi , r. granek , and m. elbaum , phys .* 85 * , 5655 ( 2000 ) ; n. gal and d. weihs , phys .e * 81 * , 020903(r ) ( 2010 ) ; d. robert , th .h. nguyen , f. gallet , and c. wilhelm , plos one * 4 * , e10046 ( 2010 ) ; j. f. reverey , j .- h . jeon , m. leippe , r. metzler , and c. selhuber - unkel , sci . rep . * 5 * , 11690 ( 2015 ) .a. v. chechkin , m. hofman , and i. m. sokolov , phys .e * 80 * , 80 , 031112 ( 2009 ) ; v. tejedor and r. metzler , j. phys .a * 43 * , 082002 ( 2010 ) ; m. magdziarz , r. metzler , w. szczotka , and p. zebrowski , phys . rev .e * 85 * , 051103 ( 2012 ) ; j. h. p. schulz , a. v. chechkin , and r. metzler , j. phys .a * 46 * , 475001 ( 2013 ) .t. khn , t. o. ihalainen , j. hyvaluoma , n. dross , s. f. willman , j. langowski , m. vihinen - ranta , and j. timonen , plos one * 6 * , e22962 ( 2011 ) ; b. p. english , v. hauryliuk , a. sanamrad , s. tankov , n. h. dekker , and j. elf , proc .u. s. a. * 108 * , e365 ( 2011 ) .g. guigas , c. kalla , and m. weiss , febs lett .* 581 * , 5094 ( 2007 ) ; n. periasmy and a. s. verkman , biophys . j. * 75 * , 557 ( 1998 ) ; j. wu and m. berland , biophys . j. * 95 * , 2049 ( 2008 ) ; j. szymaski , a patkowski , j gapiski , a. wilk , and r. holyst , j. phys . chem .b * 110 * , 7367 ( 2006 ) ; e. b. postnikov and i. m. sokolov , physica a * 391 * , 5095 ( 2012 ) .l. boltzmann , vorlesungen ber gastheorie ( j. a. barth , leipzip , 1898 ) ; p. ehrenfest and t. ehrenfest , begriffliche grundlagen der statistischen auffassung in der mechanik , in enzyklopdie der mathematischen wissenschaften , vol .4 , subvol . 4 , f. klein and c. mller , editors ( b. g. teubner , leipzig , 1911 ) ; a. i. khinchin , mathematical foundations of statistical mechanics ( dover , new york , 1949 ) .g. sinai , theory prob .* 27 * , 256 ( 1982 ) ; g. oshanin , a. rosso , and g. schehr , phys . rev .* 110 * , 100602 ( 2013 ) ; d. s. fisher , p. le doussal , and c. monthus , phys . rev .e * 64 * , 066107 ( 2001 ) ; a. godec , a. v. chechkin , e. barkai , h. kantz , and r. metzler , j. phys .a * 47 * , 492002 ( 2014 ) .i. m. sokolov , e. heinsalu , p. hnggi , and i. goychuk , europhys .lett . * 86 * , 30009 ( 2009 ) ; m. j. skaug , a. m. lacasta , l. ramirez - piscina , j. m. sancho , k. lindenberg , and d. k. 
schwartz , soft matter * 10 * , 753 ( 2014 ) ; t. albers and g. radons , europhys . lett . * 102 * , 40006 ( 2013 ) . | we examine the non - ergodic properties of scaled brownian motion , a non - stationary stochastic process with a time dependent diffusivity of the form . we compute the ergodicity breaking parameter eb in the entire range of scaling exponents , both analytically and via extensive computer simulations of the stochastic langevin equation . we demonstrate that in the limit of long trajectory lengths and short lag times the eb parameter as function of the scaling exponent has no divergence at and present the asymptotes for eb in different limits . we generalise the analytical and simulations results for the time averaged and ergodic properties of scaled brownian motion in the presence of ageing , that is , when the observation of the system starts only a finite time span after its initiation . the approach developed here for the calculation of the higher time averaged moments of the particle displacement can be applied to derive the ergodic properties of other stochastic processes such as fractional brownian motion . |
next - generation wireless communication systems demand both high transmission rates and a quality - of - service guarantee .this demand directly conflicts with the properties of the wireless medium . as a result of the scatterers in the environment and mobile terminals , signal components received over different propagation paths may add destructively or constructively and cause random fluctuations in the received signal strength .this phenomena , which is called fading , degrades the system performance .multi - input multi - output ( mimo ) systems introduce spatial diversity to combat fading . additionally , taking advantage of the rich scattering environment , mimo increases spatial multiplexing .user cooperation / relaying is a practical alternative to mimo when the size of the wireless device is limited .similar to mimo , cooperation among different users can increase the achievable rates and decrease susceptibility to channel variations .in , the authors proposed relaying strategies that increase the system reliability . although the capacity of the general relay channel problem has been unsolved for over thirty years , the papers and triggered a vast literature on cooperative wireless systems .various relaying strategies and space - time code designs that increase diversity gains or achievable rates are studied in - . as opposed tothe either / or approach of higher reliability or higher rate , the seminal paper establishes the fundamental tradeoff between these two measures , reliability and rate , also known as the diversity - multiplexing tradeoff ( dmt ) , for mimosystems . at high ,the measure of reliability is the diversity gain , which shows how fast the probability of error decreases with increasing .the multiplexing gain , on the other hand , describes how fast the actual rate of the system increases with .dmt is a powerful tool to evaluate the performance of different multiple antenna schemes at high ; it is also a useful performance measure for cooperative / relay systems . on one handit is easy enough to tackle , and on the other hand it is strong enough to show insightful comparisons among different relaying schemes .while the capacity of the relay channel is not known in general , it is possible to find relaying schemes that exhibit optimal dmtperformance .therefore , in this work we study cooperative / relaying systems from a dmt perspective . in a general cooperative / relaying network with multiple antenna nodes , some of the nodes are sources , some are destinations , and some are mere relays .finding a complete dmt characterization of the most general network seems elusive at this time , we will highlight some of the challenges in the paper .therefore , we examine the following important subproblems of the most general network . *_ problem 1 : _ a single source - destination system , with one relay , each node has multiple antennas , * _ problem 2 : _ the multiple - access relay channel with multiple sources , one destination and one relay , each node has multiple antennas , * _ problem 3 : _ a single source - destination system with multiple relays , each node has a single antenna , * _ problem 4 : _ a multiple source - multiple destination system , each node has a single antenna .an important constraint is the processing capability of the relay(s ) .we investigate cooperative / relaying systems and strategies under the full - duplex assumption , i.e. 
when wireless devices transmit and receive simultaneously , to highlight some of the fundamental properties and limitations .half - duplex systems , where wireless devices can not transmit and receive at the same time , are also of interest , as the half - duplex assumption more accurately models a practical system . therefore , we study both full - duplex and half - duplex relays in the above network configurations .the channel model and relative node locations have an important effect on the dmt results that we provide in this paper . in , we investigated _ problem 3 _ from the diversity perspective only .we showed that in order to have maximal mimo diversity gain , the relays should be clustered around the source and the destination evenly .in other words , half of the relays should be in close proximity to the source and the rest close to the destination so that they have a strong inter - user channel approximated as an additive white gaussian noise ( awgn ) channel . only for this clustered case we can get maximal mimo diversity , any other placement of relays results in lower diversity gains .motivated by this fact , we will also study the effect of clustering on the relaying systems listed above .most of the literature on cooperative communications consider single antenna terminals .the dmt of relay systems were first studied in and for half - duplex relays .amplify - and - forward ( af ) and decode - and - forward ( df ) are two of the protocols suggested in for a single relay system with single antenna nodes . in both protocols , the relay listens to the source during the first half of the frame , and transmits during the second half , while the source remains silent . to overcome the losses of strict time division between the source and the relay , offers incremental relaying , in which there is a 1-bit feedback from the destination to both the source and the relay , and the relay is used only when needed , i.e. only if the destination can not decode the source during the first half of the frame . in , the authors do not assume feedback , but to improve the af and df schemes of they allow the source to transmit simultaneously with the relay .this idea is also used in to study the non - orthogonal amplify - and - forward ( naf ) protocol in terms of dmt .later on , a slotted af scheme is proposed in , which outperforms the naf scheme of in terms of dmt .azarian et al . 
also propose the dynamic decode - and - forward ( ddf ) protocol in .in ddf the relay listens to the source until it is able to decode reliably .when this happens , the relay re - encodes the source message and sends it in the remaining portion of the frame .the authors find that ddf is optimal for low multiplexing gains but it is suboptimal when the multiplexing gain is large .this is because at high multiplexing gains , the relay needs to listen to the source longer and does not have enough time left to transmit the high rate source information .this is not an issue when the multiplexing gain is small as the relay usually understands the source message at an earlier time instant and has enough time to transmit .mimo relay channels are studied in terms of ergodic capacity in and in terms of dmtin .the latter considers the naf protocol only , presents a lower bound on the dmt performance and designs space - time block codes .this lower bound is not tight in general and is valid only if the number of relay antennas is less than or equal to the number of source antennas .the multiple - access relay channel ( marc ) is introduced in . in marc ,the relay helps multiple sources simultaneously to reach a common destination .the dmt for the half - duplex marc with single antenna nodes is studied in . in , the authors find that ddf is dmt optimal for low multiplexing gains ; however , this protocol remains to be suboptimal for high multiplexing gains analogous to the single - source relay channel .this region , where ddf is suboptimal , is achieved by the multiple access amplify and forward ( maf ) protocol .when multiple single antenna relays are present , the papers show that diversity gains similar to multi - input single - output ( miso ) or single - input multi - output ( simo ) systems are achievable for rayleigh fading channels .similarly , upper bound the system behavior by miso or simo dmt if all links have rayleigh fading . in other words , relay systems behave similar to either transmit or receive antenna arrays ._ problem 4 _ is first analyzed in in terms of achievable rates only , where the authors compare a two - source two - destination cooperative system with a mimo and show that the former is multiplexing gain limited by 1 , whereas the latter has maximum multiplexing gain of 2 . in the light of the related work described in section [ subsec : relatedwork ], we can summarize our contributions as follows : * we study _ problem 1 _ with full - duplex relays and compare df and compress - and - forward ( cf ) strategies in terms of dmt for both clustered and non - clustered systems .we find that there is a fundamental difference between these two schemes .the cf strategy is dmt optimal for any number of antennas at the source , the destination or the relay , whereas dfis not .* we also study _ problem 1 _ with half - duplex relays .this study reveals that for half - duplex systems we can find tighter upper bounds than the full - duplex dmt upper bounds .moreover , we show that the cf protocol achieves this half - duplex dmt bound for any number of antennas at the nodes .this is the first known result on dmt achieving half - duplex relaying protocols . 
* for _ problem 2we show that the cf protocol achieves a significant portion of the half - duplex dmt upper bound for high multiplexing gains .our results for single antenna marc easily extend to multiple antenna terminals .* we examine _ problem 3 _ and _ problem 4 _ and develop the dmt analysis to understand if the _ network _ provides any mimo benefits .our analysis shows that even for clustered systems with full - duplex relays , all relay systems fall short of mimo , mainly due to multiplexing gain limitations .the same problem persists in cooperative systems with multiple source destination pairs .overall , our work sheds light onto high behavior of cooperative networks as described by the dmt , and suggests optimal transmission and relaying strategies .the paper is organized as follows .section [ sec : systemmodel ] describes the general system model . in section [ sec : preliminaries ] , we give some preliminary information that will be used frequently in the rest of the paper . in section [ sec : multiantennasinglerelay ] we solve the single user , single relay problem with multiple antennas for full - duplex relays , and in section [ sec : multiantennasinglerelayhd ] we solve the same problem for half - duplex relays ( _ problem 1 _ ) .section [ sec : marc ] introduces marc , and suggests an achievable dmt ( _ problem 2 _ ) . in section [ sec : singleantenna ] we study two problems : the two relay system with a single source destination pair ( _ problem 3 _ ) , and the two source two destination problem ( _ problem 4 _ ) .finally , in section [ sec : conclusion ] we conclude .for the most general model all the channels in the system have independent , slow , frequency non - selective , rician fading . for rician fading channels ,the channel gain matrix is written as where , and denote the rician factor , the line of sight component and the scattered component respectively .the dmt for rician channels are studied in detail in . in authors find that for finite rician factor , the channel mean does not affect the dmtbehavior , and the system dmt will be equal to that of a rayleigh fading channel with . on the other hand in the authors study the effect of on miso andsimo dmt when approaches infinity .they find that for large , the system diversity increases linearly with .moreover , when tends to infinity , the diversity gain is infinity for all multiplexing gains up to . based on the above observations , without loss of generality , in this work we assume a discrete approximation to the rician model : if two nodes are apart more than a threshold distance , the line of sight component is too weak and the rician factor can be assumed to be equal to zero . thus the channel gain matrix is distributed as rayleigh , and we say that the nodes are in _ rayleigh zones _ , fig .[ fig : zones](a ) . on the other hand ,if the inter - node distance is less than , the line of sight component in the received signal is strong ; can be assumed to be infinity and the rician distribution approximates a gaussian . in this casewe say that the nodes are in _awgn zones _ , fig .[ fig : zones](b ) . for the rayleigh zone , the channel gain matrix for mimo terminals has independent , identically distributed ( i.i.d. 
) zero mean complex gaussian entries with real and imaginary parts each with variance .the variance is proportional to , where denotes the internode distance , and is the path loss exponent .if nodes and are in the awgn zone , the channel gain matrix from node to has deterministic entries , all equal to and the channel gain matrix has rank 1 .there is also a dead zone around the nodes , which limits the channel gain .depending on the locations of the nodes , the rayleigh or awgn zone assumption results in two important configurations we will consider : clustered and non - clustered . for the clustered system , all the source(s ) and some of the relay(s ) are in the same awgn zone , and the destination(s ) and the remaining relay(s ) are in another awgnzone , but the source cluster and the destination cluster , which are more than the threshold distance apart , are in their rayleigh zones .however , for the non - clustered system , every pair of nodes in the system are in their rayleigh zones .we do not explicitly study the systems in which some nodes are clustered and some are not in this paper , although our results can easily be applied to these cases as well .the relay(s ) can be full - duplex , that is they can transmit and receive at the same time in the same band ( sections [ sec : multiantennasinglerelay ] , and [ sec : singleantenna ] ) , or half - duplex ( sections [ sec : multiantennasinglerelayhd ] and [ sec : marc ] ) .the transmitters ( source(s ) and relay(s ) ) in the systems under consideration have individual power constraints .all the noise vectors at the receivers ( relay(s ) and destination(s ) ) have i.i.d .complex gaussian entries with zero mean and variance 1 . without loss of generalitywe assume the transmit power levels are such that the average received signal powers at the destination(s ) are similar , and we define as the common average received signal to noise ratio ( except for constant multiplicative factors ) at the destination . because of this assumption , for the clustered systems we study in section [ sec : singleantenna ] , the nodes in the source cluster hear the transmitters in their cluster much stronger than the transmitters in the destination cluster , and for all practical purposes we can ignore the links from the destination cluster to the nodes in the source cluster .this assumption is the same as the level set approach of . for non - clustered systems each node can hear all others . all the receivers have channel state information ( csi ) about their incoming fading levels .furthermore , the relays that perform cf have csi about all the channels in the system .this can happen at a negligible cost by proper feedback .we will explain why we need this information when we discuss the cfprotocol in detail in section [ sec : multiantennasinglerelay ] .the source(s ) does not have instantaneous csi .we also assume the system is delay - limited and requires constant - rate transmission .we note that under this assumption , information outage probability is still well - defined and dmt is a relevant performance metric .there is also short - term average power constraint that the transmitters have to satisfy for each codeword transmitted . for more information about the effect of csi at the transmitter(s ) and variable rate transmission on dmtwe refer the reader to .in this section we first introduce the notation , and present some results we will use frequently in the paper . 
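as a rough illustration of the zone - based channel model just described , the following sketch draws a channel gain matrix for a pair of nodes depending on whether they fall in their rayleigh or awgn zones . it is not taken from the paper ; the threshold distance , path loss exponent and all other values are invented for the example .

```python
import numpy as np

def channel_matrix(n_rx, n_tx, distance, d_threshold=1.0, alpha=3.0, rng=None):
    """Illustrative zone-based channel draw (all numbers are assumptions).

    distance > d_threshold : Rayleigh zone, i.i.d. complex Gaussian entries
    with real and imaginary parts of variance distance**(-alpha).
    distance <= d_threshold: AWGN zone, deterministic rank-one matrix."""
    rng = np.random.default_rng() if rng is None else rng
    if distance > d_threshold:                       # Rayleigh zone
        s = np.sqrt(distance ** (-alpha))
        h = s * (rng.standard_normal((n_rx, n_tx))
                 + 1j * rng.standard_normal((n_rx, n_tx)))
    else:                                            # AWGN zone: deterministic, rank 1
        h = distance ** (-alpha / 2) * np.ones((n_rx, n_tx), dtype=complex)
    return h

# example: a 2x2 link between clusters (Rayleigh) and a 2x2 link inside a cluster (AWGN)
h_far = channel_matrix(2, 2, distance=5.0)
h_near = channel_matrix(2, 2, distance=0.5)
print(np.linalg.matrix_rank(h_far), np.linalg.matrix_rank(h_near))   # typically 2 and 1
```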
for notational simplicitywe write , if the inequalities and are defined similarly . in the rest of the paper the identity matrix of size , denotes conjugate transpose , and denotes the determinant operation . to clarify the variables ,we would like to note that denotes the relay , whereas denotes transmission rates ; e.g. will be used for target data rate .let denote the transmission rate of the system and denote the probability of error .then we define multiplexing gain and corresponding diversity as the dmt of an mimo is given by , the best achievable diversity , which is a piecewise - linear function connecting the points , where , .note that . in , the authors prove that the probability of error is dominated by the probability of outage .therefore , in the rest of the paper we will consider outage probabilities only . we knowthat for any random channel matrix of size and for any input covariance matrix of size , combined with the fact that a constant scaling in the transmit power levels do not change the dmt , this bound will be useful to establish dmt results .in a general multi - terminal network , node sends information to node at rate where is the number of channel uses , denotes the message for node at node , and is s estimate at node . then the maximum rate of information flow from a group of sources to a group of sinks is limited by the minimum cut ( * ? ? ?* theorem 14.10.1 ) and we cite this result below . [ thm : cutsetbound ] consider communication among nodes in a network .let and be the complement of in the set .also and denote transmitted signals from the sets and respectively . denotes the signals received in the set . for information rates from node to , there exists some joint probability distribution , such that for all .thus the total rate of flow of information across cut - sets is bounded by the conditional mutual information across that cut - set .we can use the above proposition to find dmt upper bounds .suppose denotes the target data rate from node to node , and is its multiplexing gain , denotes the sum target data rate across cut - set and is its sum multiplexing gain .we say the link from to is in outage if the event occurs .furthermore , the network outage event is defined as which means the network is in outage if any link is in outage .minimum network outage probability is the minimum value of over all coding schemes for the network .we name the exponent of the minimum network outage probability as _ maximum network diversity _ , , where is a vector of all s. then we have the following lemma , which says that the maximum network diversity is upper bounded by the minimum diversity over any cut .[ lemma : cutsetupperbound ] for each , define the maximum diversity order for that cut - set as then the maximum network diversity is upper bounded as we provide the proof in appendix [ app : lemmacutsetbound ] .in addition to lemma [ lemma : cutsetupperbound ] , the following two results will also be useful for some of the proofs .[ thm : minkowski ] for two positive definite matrices and , if is positive semi - definite , then .[ lemma : auxineq]for two real numbers , , where is a non - negative real number , implies , or . 
therefore , for two non - negative random variables and , .the proof follows from simple arithmetic operations , which we omit here .the general multiple antenna , multiple source , destination , relay network includes the multiple antenna relay channel consisting of a single source , destination and relay , as a special case .any attempt to understand the most general network requires us to investigate the multiple antenna relay channel in more detail .therefore , in this section we study _ problem 1 _ , in which the source , the destination and the relay has , and antennas respectively .this is shown in fig .[ fig : multiantennasinglerelay ] .as clustering has a significant effect on the dmt performance of the network , we will look into the non - clustered and clustered cases and examine the dfand cf protocols . in this sectionthe relay is full - duplex , whereas in section [ sec : multiantennasinglerelayhd ] , the relay will be half - duplex . , and antennas respectively.,width=211 ] denoting the source and relay transmitted signals as and , when the system is non - clustered , the received signals at the relay and at the destination are where and are the independent complex gaussian noise vectors at the corresponding node . , and are the , and channel gain matrices between the source and the relay , the source and the destination , and the relay and the destination respectively .[ thm : multiantennasinglerelaynonclus ] the optimal dmt for the non - clustered system of fig .[ fig : multiantennasinglerelay ] , , is equal to and the cf protocol achieves this optimal dmt for any , and . _1 ) upper bound : _ the instantaneous cut - set mutual information expressions for cut - sets and are to maximize these mutual information expressions we need to choose and complex gaussian with zero mean and covariance matrices having trace constraints and respectively , where and denote the average power constraints each node has .moreover , the covariance matrix of and should be chosen appropriately to maximize . then using ( [ eqn : covarianceremoval ] ) to upper bound with and with we can write with ,~~{\ensuremath{\mathbf{h}}}_{sr , d } = \left[\begin{array}{cc } { \ensuremath{\mathbf{h}}}_{sd } & { \ensuremath{\mathbf{h}}}_{rd } \\ \end{array}\right].\label{eqn : h_s_rd}\end{aligned}\ ] ] the above bounds suggest that the csi at the relay does not improve the dmt performance under short term power constraint and constant rate operation .the best strategy for the relay is to employ beamforming among its antennas . for an antenna mimo , with total transmit power , the beamforming gain can at most be , which results in the same dmt as using power .therefore , csi at the relay with no power allocation over time does not improve the dmt , it has the same the dmt when only receiver csi is present .note that , , with and . then using lemma [ lemma : cutsetupperbound ] , one can easily upper bound the system dmt by for a target data rate ._ 2 ) achievability : _ to prove the dmt upper bound of theorem [ thm : multiantennasinglerelaynonclus ] is achievable , we assume the relay does full - duplex cf as we explain below .we assume the source , and the relay perform block markov superposition coding , and the destination does backward decoding .the encoding is carried over blocks , over which the fading remains fixed . in the cf protocolthe relay performs wyner - ziv type compression with side information taken as the destination s received signal . 
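before detailing the cf achievability further , the cut - set upper bound established above can be made concrete numerically . the short sketch below is our own illustration ( not code from the paper ) : it evaluates the piecewise - linear dmt curve of a rayleigh mimo channel and the minimum over the two cuts , the source - side cut behaving like an $m \times (n+k)$ system and the destination - side cut like an $(m+k) \times n$ system , where $m$ , $n$ and $k$ are our labels for the source , destination and relay antenna counts .

```python
import numpy as np

def mimo_dmt(m, n, r):
    """Piecewise-linear DMT d(r) of an m x n Rayleigh MIMO channel:
    the curve connecting the points (k, (m-k)(n-k)), k = 0, ..., min(m, n)."""
    kmax = min(m, n)
    if not 0 <= r <= kmax:
        return 0.0
    k = int(np.floor(r))
    if k == kmax:
        return 0.0
    d_k = (m - k) * (n - k)
    d_k1 = (m - k - 1) * (n - k - 1)
    return d_k + (r - k) * (d_k1 - d_k)          # linear interpolation between integer points

def relay_cutset_dmt(m, n, k, r):
    """Full-duplex cut-set DMT upper bound sketched in the text:
    minimum of the broadcast cut (m x (n+k)) and the multiple-access cut ((m+k) x n)."""
    return min(mimo_dmt(m, n + k, r), mimo_dmt(m + k, n, r))

# example: a 2-antenna source and destination assisted by a single-antenna relay
print([round(relay_cutset_dmt(2, 2, 1, r), 2) for r in np.linspace(0, 2, 9)])
```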
for this operation the relay needs to know all the channel gains in the system . for the cf protocol , as suggested in and , the relay's compression rate has to satisfy ( [ eqn : compconstr ] ) in order to forward reliably to the destination . here denotes the compressed signal at the relay . the destination can recover the source message reliably if the transmission rate of the source is less than the instantaneous mutual information in ( [ eqn : multiantennasinglerelaycfrate ] ) . we assume and are chosen independently , and have covariance matrices and respectively . also , where is a length vector with complex gaussian random entries with zero mean . has covariance matrix , and its entries are independent from all other random variables . we define $l_{s,rd}$ in ( [ eqn : l_s , rd ] ) and $l_{s,rd}^{\prime} \triangleq \left| \mathbf{h}_{s,rd}\,\mathbf{h}_{s,rd}^{\dag}\,\frac{p_s}{m} + \mathbf{i}_{k+n} \right|$ in ( [ eqn : l_s , rd' ] ) . then , to satisfy the compression rate constraint in ( [ eqn : compconstr ] ) , using the csi available to it , the relay ensures that the compression noise variance satisfies a condition set by the ratio $l_{s,rd}/l_{sr,d}$ , and is an matrix with i.i.d . complex gaussian entries . thus , for , is equal to the dmt of a system , , and overall we have the expression stated in theorem [ thm : multiantennasinglerelayclusub ] for and arbitrary and .

in random state half - duplex relay systems , the system state can also be viewed as a channel input . thus , we need to optimize over all joint distributions . using proposition [ thm : cutsetbound ] we have , for some , where is the information rate from to . then for a target data rate we have , and using ( [ eqn : div_hd ] ) we can write , for the multiple antenna , half - duplex relay channel , , where the last inequality follows because is a binary random variable . similarly , . the above two bounds show that random state protocols can at most send one extra bit of information , which does not play a role at high snr . thus , fixed and random state protocols have the same dmt upper bound . to illustrate that cf achieves the dmt in theorem [ thm : multiantennasinglerelayhd_cf ] , we follow the cf protocol of section [ sec : multiantennasinglerelay ] . in the static half - duplex case the relay listens to the source only for a fraction of the time , with . the wyner - ziv type compression rate is such that the compressed signal at the relay can reach the destination error - free in the remaining fraction of time , in which the relay transmits . then , for a fixed , the instantaneous mutual information at the destination is , subject to . note that the above equations incorporate the half - duplex constraint into ( [ eqn : compconstr ] ) and ( [ eqn : multiantennasinglerelaycfrate ] ) . the source and relay input distributions are independent , and is the auxiliary random vector which denotes the compressed signal at the relay and depends on and . more information on cf for the half - duplex case with single antenna nodes can also be found in . we consider and to be i.i.d . complex gaussian with zero mean and covariance matrices , , , and is a vector with i.i.d . complex gaussian entries with zero mean and variance that is independent from all other random variables .
using the definitions of , , , and in ( [ eqn : l_s , d ] ) , ( [ eqn : l_sr , d ] ) , ( [ eqn : l_s , rd ] ) and ( [ eqn : l_s , rd' ] ) we have . thus , using ( [ eqn : compconstrhd ] ) , we can choose the compression noise variance to satisfy ( [ eqn : relaychannelhalfduplexcfnoise ] ) , in which $\hat{n}_r$ is set by the ratio $l_{s,rd}/u$ with $u = l_{s,d}\left(\frac{l_{sr,d}}{l_{s,d}}\right)^{\frac{1-t}{t}}$ , and ( [ eqn : rc_cf ] ) becomes ( [ eqn : rc_hd_cf ] ) . to prove the dmt of ( [ eqn : rc_hd_cf ] ) we follow steps similar to ( [ eqn : multiantennasinglerelaynoncluspout1 ] )-( [ eqn : multiantennasinglerelaynoncluspout2 ] ) . then we have ( [ eqn : sss ] ) , of the form $p\!\left( \cdots < r\log\sqrt[r]{2^k}\,\mathrm{snr} \right) + p\!\left( (1-t)\log l_{sr,d} + t\log l_{s,d} < r\log\sqrt[r]{2^k}\,\mathrm{snr} \right) \overset{(b)}{\dot{=}} \mathrm{snr}^{-d_{\mathcal{c}_s}'(r,t)} + \mathrm{snr}^{-d_{\mathcal{c}_d}'(r,t)} \dot{=} \mathrm{snr}^{-\min\{ d_{\mathcal{c}_s}'(r,t) ,\, d_{\mathcal{c}_d}'(r,t) \}}$ , where ( a ) holds because for any fixed , , and for ( b ) we have used the fact that and , and , and and are of the same form except for power scaling and hence result in the same dmt . as a result , if , then . as the achievable dmt cannot be larger than the upper bound , we conclude that cf achieves the bound in ( [ eqn : rc_dmt_upperbound ] ) for any . thus it also achieves the best upper bound of ( [ eqn : rc_dmt_upperbound_maximized ] ) . if the relay is dynamic , cf can also behave dynamically and will be a function of the csi available at the relay . for dynamic cf we can still upper bound the probability of outage at the destination with ( [ eqn : sss ] ) , which is equivalent to the dmt upper bound for dynamic protocols at high snr . hence , dynamic cf achieves the dynamic half - duplex dmt upper bound .

in this appendix we prove theorem [ thm : halfduplexdmtbound ] . for , ( [ eqn : c_1(t) ] ) can be written as , with , where are independent exponentially distributed random variables with parameter 1 that denote the fading power from source antenna to receive antenna at the destination or at the relay , respectively . let , . then are i.i.d . with probability density function . let denote the outage event for a target data rate . then the probability of outage is , where is the set of real -vectors with nonnegative elements . the outage event is defined as , where without loss of generality we assume and , and is given as . we have because does not change the diversity gain , decays exponentially with if , is for and approaches 1 for at high snr , follows because at high snr converges to , and finally is due to laplace's method . as a result , . to solve the optimization problem of ( [ eqn : g* ] ) , we first solve the subproblems , where . as an example , suppose we want to find . thus we have the following linear optimization problem , which has two solutions at . then for . similarly , we find and . then , which concludes the proof . when and do equal time sharing and , we use corollary [ cor : hd_cf_111 ] to conclude that is achievable , where denotes time sharing . next , after a brief numerical aside , we discuss the case when both sources transmit together .
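as the numerical aside announced above ( our own illustration , not part of the original analysis ) : the outage - exponent arguments used throughout this section , with the outage probability decaying as $\mathrm{snr}^{-d(r)}$ , can be checked by simulation . the sketch below estimates the diversity order of a simple one - input , two - output rayleigh channel at a fixed multiplexing gain from the slope of log - outage versus log - snr ; the snr range and trial count are arbitrary choices .

```python
import numpy as np

def outage_prob(m, n, snr_db, r, trials=100_000, rng=None):
    """Monte Carlo outage probability of an m x n Rayleigh MIMO channel at
    target rate R = r*log2(snr), with equal power per transmit antenna."""
    rng = np.random.default_rng(0) if rng is None else rng
    snr = 10 ** (snr_db / 10)
    rate = r * np.log2(snr)
    outages = 0
    for _ in range(trials):
        h = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
        cap = np.log2(np.real(np.linalg.det(np.eye(n) + (snr / m) * h @ h.conj().T)))
        outages += cap < rate
    return outages / trials

snrs_db = [10, 15, 20]
pouts = [outage_prob(1, 2, s, r=0.5) for s in snrs_db]
slope = np.polyfit([s / 10 for s in snrs_db], [-np.log10(p) for p in pouts], 1)[0]
print(pouts, round(slope, 2))   # the slope tends to d(0.5) = 1 in the high-SNR limit
```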
in the half - duplex marc , when both sources transmit simultaneously and the relay does cf for the signal it receives , similar to the cf discussed in sections [ sec : multiantennasinglerelay ] and [ sec : multiantennasinglerelayhd ] , the information rates satisfy , for independent , , and , subject to , where is the auxiliary random variable which denotes the quantized signal at the relay and depends on and , and is the fraction of time the relay listens . to compute these mutual informations , we assume and are independent complex gaussian with zero mean and variances and respectively , and , where is a complex gaussian random variable with zero mean and variance that is independent from all other random variables . we define $l_{s_1s_2,rd}$ in ( [ eqn : l_s1s2,rd ] ) as a determinant of the form $\left| \mathbf{h}_{s_1s_2,rd}\,(\cdot)\,\mathbf{h}_{s_1s_2,rd}^{\dag} + \left[ \begin{smallmatrix} \hat{n}_r+1 & 0 \\ 0 & 1 \end{smallmatrix} \right] \right|$ , where $\mathbf{h}_{s_1s_2,rd}$ collects the corresponding channel gains . since the relay has the relevant csi , using ( [ eqn : marccompressionconstraint ] ) it can choose the compression noise variance to satisfy . then , to find a lower bound on the achievable dmt , we use the union bound on the probability of outage . for symmetric users with individual target data rates and a target sum data rate , the probability of outage at the destination is . one can prove that the first and second terms and are on the order of at high snr , for any . to see this we write ( [ eqn : rate1 ] ) explicitly as . as the relay compresses both sources together , the compression noise is on the order of and the term does not contribute to the overall mutual information at high snr . the last term in ( [ eqn : marc_pout ] ) can be analyzed similarly to section [ sec : multiantennasinglerelay ] , as this term mimics the behavior of a 2-antenna source , 1-antenna relay and 1-antenna destination system . for , we follow the proof from ( [ eqn : cfproof ] ) . from the first line to the second , we used the fact that , with $\left| \mathbf{h}_{s_1s_2,rd}\,(\cdot)\,\mathbf{h}_{s_1s_2,rd}^{\dag} + \mathbf{i}_2 \right|$ , as a multiple antenna system has higher capacity than a system . using from theorem [ thm : halfduplexdmtbound ] with , , , to maximize over we need to choose , and thus , where denotes simultaneous transmission . to find an upper bound on the achievable dmt we write . combining this with the upper bound in ( [ eqn : marc_upperbound ] ) , and with , we have theorem [ thm : marc_cf_dmt ] .

to provide upper bounds , we will use the cut - set bounds as argued in lemma [ lemma : cutsetupperbound ] . the cut - sets of interest are shown in fig . [ fig : singleantennatworelay ] and denoted as , and . we will see that these are adequate to provide a tight bound . in order to calculate the diversity orders for each cut - set , we write down the instantaneous mutual information expressions given the fading levels . to maximize this upper bound we need to choose , and complex gaussian with zero mean and variances , and respectively , where , and denote the average power constraints each node has . then , where the matrix in ( [ eqn : matrix_a ] ) collects the channel gains , and we used ( [ eqn : covarianceremoval ] ) to upper bound with in ( [ eqn : singleantennatworelaynonclusteredcutset2 ] ) and with in ( [ eqn : singleantennatworelaynonclusteredcutset4 ] ) . for a target data rate , , whereas . then , using lemma [ lemma : cutsetupperbound ] , the best achievable diversity of a non - clustered system is upper bounded by . when the system is clustered , and are larger than the gaussian channel capacities and respectively . then , if .
in other words , it is possible to operate at the positive rate of reliably without any outage , and as increases , the data rate of this bound can increase as without any penalty in reliability . however , this is not the case for any as . combining these results with the upper bound due to , we have . assume the source , and perform block markov superposition coding . after each block , and attempt to decode the source . the destination does backward decoding , similar to the case in section [ sec : multiantennasinglerelay ] . using the block markov coding structure , both and remove each other's signal from their own received signals in ( [ eqn : singanttworelayr1receivedsignal ] ) and ( [ eqn : singanttworelayr2receivedsignal ] ) before trying to decode any information . we choose and independent complex gaussian with zero mean and variances and respectively . we also choose independently with a complex gaussian distribution . then the probability of outage for this system , when the target data rate is , is equal to , where and , as the system is non - clustered . using the fact that and , this outage probability becomes at high snr , which is equivalent to the outage behavior of a ( or ) system . hence , in a non - clustered system , if both relays do df , the dmt in theorem [ thm : singleantennatworelaynonclus ] can be achieved .

to prove that the dmt of theorem [ thm : singleantennatworelayclus ] is achievable , we use the mixed strategy suggested in , in which does df and then the source node and the first relay together perform block markov superposition encoding . similar to the non - clustered case in appendix [ app : singleantennatworelaynonclus ] , we require to decode the source message reliably , and to transmit only if this is the case . we assume that and the destination know if transmits or not . the second relay does cf . to prove the dmt we calculate the probability of outage as . as is clustered with the source , the source to communication is reliable for all multiplexing gains up to 1 ; i.e. , and can be made arbitrarily close to 1 and 0 respectively . therefore , we only need to show that decays at least as fast as with increasing . when decodes the source message reliably , if the target data rate satisfies , subject to , then the system is not in outage . we choose , and independent complex gaussian with variances , and respectively , and , where is an independent complex gaussian random variable with zero mean and variance that is independent from all other random variables . we define $l_{sr_1,r_2d} \triangleq \left| \mathbf{h}_{sr_1,r_2d} \left[ \begin{smallmatrix} p_s & 0 \\ 0 & p_{r_1} \end{smallmatrix} \right] \mathbf{h}_{sr_1,r_2d}^{\dag} + \left[ \begin{smallmatrix} \hat{n}_{r_2}+1 & 0 \\ 0 & 1 \end{smallmatrix} \right] \right|$ ( [ eqn : l_sr1,r2d ] ) and $l_{sr_1,r_2d}' \triangleq \left| \mathbf{h}_{sr_1,r_2d} \left[ \begin{smallmatrix} p_s & 0 \\ 0 & p_{r_1} \end{smallmatrix} \right] \mathbf{h}_{sr_1,r_2d}^{\dag} + \mathbf{i}_2 \right|$ ( [ eqn : l_sr1,r2d' ] ) , where $\mathbf{h}_{sr_1,r_2d}$ is given in ( [ eqn : matrix_a ] ) . using the definitions of , , and from ( [ eqn : l_s , r1 ] ) , ( [ eqn : l_sr1,d ] ) and ( [ eqn : l_sr1r2,d ] ) , with and , the instantaneous mutual information expressions conditioned on the fading levels for the mixed strategy become . the mutual informations in the compression rate constraint of ( [ eqn : singleantennatworelaycompressionconstraint ] ) are . then the compression noise power has to be chosen to satisfy . note that both sides of the above inequality are functions of .
using the csi , the relay will always ensure ( [ eqn : singleantennatworelayclusteredhatn3 ] ) is satisfied .after substituting the value of the compression noise in ( [ eqn : cfrate_with_r1 ] ) we need to calculate .given decodes , this problem becomes similar to _ problem 1 _ , and we can find that when , . finally , as , we say the mixed strategy achieves the dmt bound .the authors would like to thank dr .gerhard kramer , whose comments improved the results , and the guest editor dr .j. nicholas laneman and the anonymous reviewers for their help in the organization of the paper .j. n. laneman , d. n. c. tse , and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ ieee transactions on information theory _ , vol .50 , no . 12 , p. 3062, december 2004 .m. janani , a. hedayat , t. e. hunter , and a. nosratinia , `` coded cooperation in wireless communications : space - time transmission and iterative decoding , '' _ ieee transactions on signal processing _ , vol .52 , no . 2 , p. 362, february 2004 . j. n. laneman and g. w. wornell , `` distributed space - time coded protocols for exploiting cooperative diversity in wireless networks , '' _ ieee transactions on information theory _ ,49 , no . 10 , p. 2415, october 2003 .i. maric , r. d. yates , and g. kramer , `` the discrete memoryless compound multiple access channel with conferencing encoders , '' in _ proceedings of ieee international symposium on information theory _ , 2005 .r. u. nabar , h. bolcskei , and f. w. kneubuhler , `` fading relay channels : performance limits and space - time signal design , '' _ ieee journal on selected areas in communications _ , vol . 22 , no . 6 , p. 1099, august 2004 .a. reznik , s. r. kulkarni , and s. verdu , `` degraded gaussian multirelay channel : capacity and optimal power allocation , '' _ ieee transactions on information theory _ ,50 , no . 12 , p. 3037, december 2004 .j. m. shea , t. f. wong , a. avudainayagam , and x. li , `` reliability exchange schemes for iterative packet combining in distributed arrays , '' in _ ieee wireless communications and networking conference _ , 2003 ,p. 832 .k. azarian , h. el - gamal , and p. schniter , `` on the achievable diversity - multiplexing tradeoff in half - duplex cooperative channels , '' _ ieee transactions on information theory _ , vol .51 , no . 12 , p. 4152, december 2005 .l. sankaranarayanan , g. kramer , and n. b. mandayam , `` hierarchical sensor networks : capacity bounds and cooperative strategies using the multiple - access relay channel model , '' in _ proceedings of first annual ieee communications society conference on sensor and ad hoc communications and networks _ , 2004 ,p. 191 .a. bletsas , a. khisti , d. p. reed , and a. lippman , `` a simple cooperative diversity method based on network path selection , '' _ ieee journal on selected areas in communications _ ,24 , no . 3 , p. 659, march 2006 .a. khoshnevis and a. sabharwal , `` on diversity and multiplexing gain of multiple antennas systems with transmitter channel information , '' in _ proceedings of 42nd allerton conference on communication , control and computing _ , 2004. c. zeng , f. kuhlmann , and a. buzo , `` achievability proof of some multiuser channel coding theorems using backward decoding , '' _ ieee transactions on information theory _ , vol .35 , no . 6 , p. 1160, november 1989 . s. a. jafar and a. j. 
goldsmith , `` isotropic fading vector broadcast channels : the scalar upper bound and loss in degrees of freedom , '' _ ieee transactions on information theory _ , vol . 51 , no . 3 , p. 848, 2005 .melda yuksel [ s98 ] received her b.s .degree in electrical and electronics engineering from middle east technical university , ankara , turkey , in 2001 .she is currently working towards her ph.d .degree at polytechnic university , brooklyn , ny . in 2004 , she was a summer researcher in mathematical sciences research center , bell - labs , lucent technologies , murray hill , nj . melda yuksel is the recipient of the best paper award in the communication theory symposium of icc 2007 .her research interests include communication theory and information theory and more specifically cooperative communications , network information theory and information theoretic security over communication channels .elza erkip [ s93 , m96 , sm05 ] received the ph.d . and. degrees in electrical engineering from stanford university , and the b.s .degree in electrical and electronics engineering from middle east technical university , turkey .she joined polytechnic university in spring 2000 , where she is currently an associate professor of electrical and computer engineering .dr . erkip received the 2004 communications society stephen o. rice paper prize in the field of communications theory and the nsf career award in 2001 .she is an associate editor of ieee transactions on communications , a publications editor of ieee transactions on information theory and a guest editor of ieee signal processing magazine , special issue on signal processing for multiterminal communication systems .she is the technical area chair for the `` mimo communications and signal processing '' track of 41st annual asilomar conference on signals , systems , and computers , and the technical program co - chair of 2006 communication theory workshop .her research interests are in wireless communications , information theory and communication theory . | we consider a general multiple antenna network with multiple sources , multiple destinations and multiple relays in terms of the diversity - multiplexing tradeoff ( dmt ) . we examine several subcases of this most general problem taking into account the processing capability of the relays ( half - duplex or full - duplex ) , and the network geometry ( clustered or non - clustered ) . we first study the multiple antenna relay channel with a full - duplex relay to understand the effect of increased degrees of freedom in the direct link . we find dmt upper bounds and investigate the achievable performance of decode - and - forward ( df ) , and compress - and - forward ( cf ) protocols . our results suggest that while df is dmtoptimal when all terminals have one antenna each , it may not maintain its good performance when the degrees of freedom in the direct link is increased , whereas cf continues to perform optimally . we also study the multiple antenna relay channel with a half - duplex relay . we show that the half - duplex dmt behavior can significantly be different from the full - duplex case . we find that cf is dmt optimal for half - duplex relaying as well , and is the first protocol known to achieve the half - duplex relay dmt . we next study the multiple - access relay channel ( marc ) dmt . 
finally , we investigate a system with a single source - destination pair and multiple relays , each node with a single antenna , and show that even under the idealistic assumption of full - duplex relays and a clustered network , this virtual multi - input multi - output ( mimo ) system can never fully mimic a real mimo dmt . for cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect . cooperation , diversity - multiplexing tradeoff , fading channels , multiple - input multiple - output ( mimo ) , relay channel , wireless networks . |
balanced detection provides a unique tool for many physical , biological and chemical applications . in particular , it has proven useful for improving the coherent detection in telecommunication systems , in the measurement of polarization squeezing , for the detection of polarization states of weak signals via homodyne detection , and in the study of light - atom interactions .interestingly , balanced detection has proved to be useful when performing highly sensitive magnetometry , even at the shot - noise level , in the continuous - wave and pulsed regimes .the detection of light pulses at the shot - noise level with low or negligible noise contributions , namely from detection electronics ( electronic noise ) and from intensity fluctuations ( technical noise ) , is of paramount importance in many quantum optics experiments .while electronic noise can be overcome by making use of better electronic equipment , technical noise requires special techniques to filter it , such as balanced detection and spectral filtering .even though several schemes have been implemented to overcome these noise sources , an optimal shot - noise signal recovery technique that can deal with both technical and electronic noises , has not been presented yet . in this paper, we provide a new tool based both on balanced detection and on the precise calculation of a specific pattern function that allows the optimal , shot - noise limited , signal recovery by digital filtering . to demonstrate its efficiency , we implement pattern - function filtering in the presence of strong technical and electronic noises .we demonstrate that up to 10 db of technical noise for the highest average power of the beam , after balanced detection , can be removed from the signal .this is especially relevant in the measurement of polarization - rotation angles , where technical noise can not be completely removed by means of balanced detectors .furthermore , we show that our scheme outperforms the wiener filter , a widely used method in signal processing .the paper is organized as follows . in section [ sec : pattern ] we present the theoretical model of the proposed technique , in section [ sec : experiment ] we show the operation of this tool by designing and implementing an experiment , where high amount of noise ( technical and electronic ) is filtered . finally in section[ sec : conclusions ] we present the conclusions .to optimally recover a pulsed signal in a balanced detection scheme , it is necessary to characterize the detector response , as well as the `` electronic '' and `` technical '' noise contributions .we now introduce the theoretical framework of the filtering technique and show how optimal pulsed signal recovery can be achieved . to model a balanced detector , see fig .[ fig : expsetup ] , we assume that it consists of 1 ) a polarizing beam splitter ( pbs ) , which splits the and polarization components to two different detectors 2 ) the two detectors pd and pd , whose output currents are directly subtracted , and 3 ) a linear amplifier because the amplification is linear and stationary , we can describe the response of the detector by impulse response functions . if the photon flux at detector is , the electronic output can be defined as where is the electronic noise of the photodiodes , including amplification . here , stands for the convolution of and , i.e. , . for clarity , the time dependence will be suppressed when possible .it is convenient to introduce the following notation : , , and . 
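as a rough numerical illustration of this detector model — with pulse shapes , response functions and noise levels invented purely for the example , not the values of the actual experiment — one can synthesize the electronic output by convolving the photon fluxes of the two polarization components with per - detector impulse responses , subtracting , and adding electronic noise :

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 4, dt)                              # time axis, arbitrary units

# photon fluxes at the two photodiodes: nearly balanced pulses (the small
# imbalance carries the signal of interest)
pulse = np.exp(-((t - 1.0) / 0.2) ** 2)
phi1, phi2 = 0.51 * pulse, 0.49 * pulse

# impulse responses of the two detection chains, slightly mismatched,
# normalized to unit area
f1 = np.exp(-t / 0.050); f1 /= f1.sum() * dt
f2 = np.exp(-t / 0.055); f2 /= f2.sum() * dt

# electronic output s(t) = (f1 * phi1)(t) - (f2 * phi2)(t) + e(t)
rng = np.random.default_rng(1)
e = 0.005 * rng.standard_normal(t.size)              # electronic noise
s = dt * (np.convolve(f1, phi1)[:t.size] - np.convolve(f2, phi2)[:t.size]) + e

# naive estimate of the differential photon number: integrate s over the pulse window
window = (t > 0.5) & (t < 2.5)
n_delta_raw = np.sum(s[window]) * dt
print(n_delta_raw, np.sum(phi1 - phi2) * dt)         # estimate vs. true differential flux
```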
using these new variables ,eq . takes the form from this signal , we are interested in recovering the differential photon number , where is the time interval of the desired pulse , with minimal uncertainty .more precisely , we want to find an estimator $ ] , that is unbiased , and has minimal variance .in order to make unbiased , we realize that it must linearly depend on .this because and are linear in both and . therefore , the estimator must have the form in eq ., refers to as _ pattern function _ , which describes the most general linear estimator .in this work , we will consider three cases : 1 ) a raw estimator , for and 0 otherwise ; 2 ) a wiener estimator , which makes use of a wiener - filter - like pattern function , , where represents the wiener filter in the time domain , and 3 ) a model - based pattern function estimator .notice that both and are defined in , allowing to properly choose a desired pulse . inwhat follows , we explicitly show how to calculate the model - based pattern function estimator .we assume that have known averages ( over many pulses ) , and similarly the response functions have averages .then the average of the electronic output reads as and . in writing eq ., we have assumed that the noise sources are uncorrelated . from thiswe observe that if a balanced optical signal is introduced , i.e. , the mean electronic signal is entirely due to . in orderthat correctly detects this null signal , must be orthogonal to , i.e. our second condition derives from which is in effect a calibration condition : the right - hand side is a uniform - weight integral of , while the left - hand side is a non - uniform - weight integral , giving preference to some parts of the signal .if the total weights are the same , the above gives .we note that this condition is not very restrictive .for example , given , and given up to a normalization , the equation simply specifies the normalization of .notice that the condition given by eq . may still be somewhat ambiguous .if we want this to apply for all possible shapes , it would imply const ., and would make the whole exercise trivial .instead , we make the physically reasonably assumption that the input pulse , with shape is uniformly rotated to give , .similarly , it follows that .we note that this assumption is not strictly obeyed in our experiment and is a matter of mathematical convenience : a path difference from the pbs to the two detectors will introduce an arrival - time difference giving rise to opposite - polarity features at the start and end of the pulse , as seen in fig .[ fig : restech](a ) . a delay in the corresponding response functions is , however , equivalent , and we opt to absorb all path delays into the response functions . in our experimentthe path difference is , implying a time difference of less than 0.2 ns , much below the smallest features in fig .[ fig : restech](a ) . absorbing the constant of proportionality into , we find which is our calibration condition .we consider two kinds of technical noise : fluctuating detector response and fluctuating input pulses .we write the response functions in the form , for a given detector , where the fluctuating term is a stochastic variable .similarly , we write , where is or . by substituting the corresponding fluctuating response functions into eq . , the electronic output signal becomes where is the summed technical noise from both and sources .we note that the optical technical noise , in contrast to optical quantum noise , scales as , so that . 
in passing to the last line we neglect terms on the assumption , .we further assume that and are uncorrelated .we find the variance of the model - based estimator , , is with the first term describing technical noise , and the second one electronic noise . to compare against noise measurements , we transform eq . to the frequency domain .using parseval s theorem , see eq .( [ eq : simp1mwm ] ) , we can write the noise power as our goal is now to find the that minimizes satisfying the conditions in eqs . and , which in the frequency space are the specific form of the solution is given in appendix [ sec : pattesolution ] .at 150 mhz ( blue dashed line ) and the amplified one at 5 mhz ( green solid line ) . for the sake of comparison ,both pulses are normalized.,width=226 ] [ cols="^ " , ] to illustrate the performance of our technique when filtering technical noise , we introduce a high amount of noise about db above the shot noise level at the maximum optical power to the light pulses produced by the aoms . after balancing a maximum of 10db remains in the electronic output , which is then filtered by means of the optimal pattern function technique .we have verified the correct noise filtering by comparing the results with shot - noise limited pulses .for this purpose , we compute , the variance of the optimal estimator for each power , and for each data set , the shot - noise limited and the noisy one .figure [ fig : res3 ] shows the computed noise estimation as function of the optical power for both .notice that the two noise estimations are linear with the optical power .moreover , we observe that both curves agree at , using the ratio of the slopes , which allows us to conclude that , by using this technique , we can retrieve shot - noise limited pulses from signals bearing high amount of technical noise .the experimental setup that we have implemented , see fig .[ fig : expsetup ] , can perform also as a pulsed signal polarimeter .for instance , it is possible to determine a small polarization - rotation angle from a linear polarized light pulse . along these lines ,we make use of three estimators and to determine the amount of noise on the estimation of the polarization - rotation angle . from the obtained results ,we show that the model - based estimator outperforms the other two .we proceed to calculate the noise on the polarization - rotation angle estimation , for this determination we calculate the variance of .we notice that the taylor approximation of the variance of is for small angles , the function is approximately linear on , so the contribution from higher order terms can be disregarded .therefore , the noise on the angle estimation is we can then compute this expression using the three before mentioned estimators . 
for such task we use the experimental data together with an analytical approximation of the derivative , that takes as input the measured data .figure [ fig : resangle ] depicts the noise angle estimation , showing that the optimal pattern function performs better than the other estimators when eliminating the technical noise and reducing the electronic noise .in particular , the based - model estimator surpasses the wiener estimator , which is a widely used method in signal processing .we have studied in theory and with an experimental demonstration , the optimal recovery of light pulses via balanced detection .we developed a theoretical model for a balanced detector and the noise related to the detection of optical pulses .we minimized the technical and electronic noise contributions obtaining the optimal ( model - based ) pattern function .we designed and implemented an experimental setup to test the introduced theoretical model . in this experimental setup , we produced technical noise in a controlled way , and retrieved shot - noise limited signals from signals bearing about 10 db of technical noise after balanced detection .finally , we compare against nave and wiener filter estimation for measuring rotation angles , and confirm superior performance of the model - based estimator .the results presented here might lead to a better polarization - rotation angle estimations when using pulses leading to probe magnetic atomic ensembles in environments with technical noise .this possibility is especially attractive for balanced detection of sub - shot - noise pulses , for which the acceptable noise levels are still lower .we note the inner - product form of parseval s theorem where the functions are the fourier transforms of , respectively . for any stationary random variable , ( if this were not the case , there would be a phase relation between different frequency components , which contradicts the assumption of stationarity ) . from this, it follows that will minimize the noise power ( see eq . ) with respect to the pattern function using the two conditions ( see eq . and eq . ) .we solve this by the method of lagrange multipliers .for this , we write and then solve the equations the first equation reads with formal solution the second and third equations from eq .are the same as eq .and eq . above. the problem is then reduced to finding , which ( through the above ) , make satisfy the two constraints . substituting eq . into eq . andeq . , we find and where with . the solution to the set of eqs . and is then given by it should be noted that quantum noise is not explicitly considered in the model .rather , it is implicitly present in which may differ from their average values due to quantum noise . note that the point of this measurement design is to optimize the measurement of including the quantum noise in that variable .for this reason , it is sufficient to describe , and minimize , the other contributions .the wiener filter estimator can be derived from the frequency domain wiener filter output define as we thank f. wolfgramm , f. martn ciurana , j. p. torres , f. beduini and j. zieliska for helpful discussions .this work was supported by the european research council project `` aqumet '' , the spanish mineco project `` mago '' ( ref . fis2011 - 23520 ) , and by fundaci privada cellex barcelona .y. a. de i. a. 
was supported by the scholarship bes-2009 - 017461 , under project fis2007 - 60179 .bach , `` ultra - broadband photodiodes and balanced detectors towards 100 gbit / s and beyond , '' in `` optics east 2005 , '' ( international society for optics and photonics , 2005 ) , pp . 60,140b60,140b13 .youn , _ measurement of the polarization state of a weak signal field by homodyne detection _( intech , available from : http://www.intechopen.com/books/photodetectors/ , from the book , 2012 ) , chap .17 , pp . 389404 .m. kubasik , m. koschorreck , m. napolitano , s. r. de echaniz , h. crepaz , j. eschner , e. s. polzik , and m. w. mitchell , `` polarization - based light - atom quantum interface with an all - optical trap , '' phys . rev .a * 79 * , 043,815 ( 2009 ) .v. g. lucivero , p. anielski , w. gawlik , and m. w. mitchell , `` shot - noise - limited magnetometer with sub - pt sensitivity at room temperature , '' arxiv * quant - ph * , 1403.7796 ( submitted to phys .a ) ( 2014 ) .n. behbood , f. m. ciurana , g. colangelo , m. napolitano , m. w. mitchell , and r. j. sewell , `` real - time vector field tracking with a cold - atom magnetometer , '' applied physics letters * 102 * , 173,504 ( 2013 ) .h. hansen , t. aichele , c. hettich , p. lodahl , a. i. lvovsky , j. mlynek , and s. schiller , `` ultrasensitive pulsed , balanced homodyne detector : application to time - domain quantum measurements , '' opt .* 26 * , 17141716 ( 2001 ) . p. j. windpassinger , m. kubasik , m. koschorreck , a. boisen , n. kj , e. s. polzik , and j. h. mller , `` ultra low - noise differential ac - coupled photodetector for sensitive pulse detection applications , '' measurement science and technology * 20 * , 055,301 ( 2009 ) .v. ruilova - zavgorodniy , d. y. parashchuk , and i. gvozdkova , `` highly sensitive pump probe polarimetry : measurements of polarization rotation , ellipticity , and depolarization , '' instruments and experimental techniques * 46 * , 818823 ( 2003 ) .t. ezaki , g. suzuki , k. konno , o. matsushima , y. mizukane , d. navarro , m. miyake , n. sadachika , h .- j .mattausch , and m. miura - mattausch , `` physics - based photodiode model enabling consistent opto - electronic circuit simulation , '' in `` electron devices meeting , 2006 .international , '' ( 2006 ) , pp .14 .r. j. sewell , m. koschorreck , m. napolitano , b. dubost , n. behbood , and m. w. mitchell , `` magnetic sensitivity beyond the projection noise limit by spin squeezing , '' phys .* 109 * , 253,605 ( 2012 ) . | we demonstrate a new tool for filtering technical and electronic noises from pulses of light , especially relevant for signal processing methods in quantum optics experiments as a means to achieve the shot - noise level and reduce strong technical noise by means of a pattern function . we provide the theory of this pattern - function filtering based on balance detection . moreover , we implement an experimental demonstration where 10 db of technical noise is filtered after balance detection . such filter can readily be used for probing magnetic atomic ensembles in environments with strong technical noise . |
creep is a major limitation of concrete . indeed , it has been suggested that creep deformations are logarithmic , that is , virtually infinite and without asymptotic bound , which raises safety issues . the creep of concrete is generally thought to be mainly caused by the viscoelastic and viscoplastic behavior of the cement hydrates . while secondary cementitious phases can show viscoelastic behavior , the rate and extent of viscoelastic deformations of such phases are far less significant than those of calcium silicate hydrate ( c s h ) , the binding phase of the cement paste . as such , understanding the physical mechanism of the creep of c s h is of primary importance . despite the prevalence of concrete in the built environment , the molecular structure of c s h has only recently been proposed , which makes it possible to investigate its mechanical properties at the atomic scale . here , relying on the newly available model , we present a new methodology allowing us to simulate the long - term creep deformation of bulk c s h ( at zero porosity , i.e. , at the scale of the grains ) . results show an excellent agreement with nanoindentation measurements .

to describe the disordered molecular structure of c s h , pellenq et al . proposed a realistic model for c s h with the stoichiometry of ( cao)(sio)(h) . we generated the c s h model by introducing defects in an 11 å tobermorite configuration , following a combinatorial procedure . whereas the ca / si ratio in 11 å tobermorite is 1 , this ratio is increased to 1.71 in the present c s h model through randomly introducing defects in the silicate chains , which provides sites for the adsorption of extra water molecules . the reaxff potential , a reactive potential , was then used to account for the reaction of the interlayer water with the defective calcium silicate sheets . more details on the preparation of the model and its experimental validation can be found in ref . and in previous works . we simulated the previously presented c s h model , made of 501 atoms , by molecular dynamics ( md ) using the lammps package . to this end , we used the reaxff potential with a time step of 0.25 fs . prior to the application of any stress , the system is fully relaxed to zero pressure at 300 k . [ figure : shear strain and potential energy with respect to the number of loading / unloading cycles ; the inset shows the shape of the applied shear stress . ] the relaxation of c s h , or of other silicate materials , takes place over long periods of time ( years ) , which prevents the use of traditional md simulations , which are limited to a few nanoseconds . to study the long - term deformations of c s h , we applied a method that has recently been introduced to study the relaxation of silicate glasses . in this method , starting from an initial atomic configuration of glass , formed by rapid cooling from the liquid state , the system is subjected to small , cyclic perturbations of shear stress around zero pressure . for each stress , a minimization of the energy is performed , with the system having the ability to deform ( shape and volume ) in order to reach the target stress . these small perturbations of stress deform the energy landscape of the glass , allowing the system to jump over energy barriers . note that the observed relaxation does not depend on the choice of , provided that this stress remains sub - yield . this method mimics the artificial aging observed in granular materials subjected to vibrations .
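the loop structure of this stress - cycling protocol can be sketched as follows . the sketch is a toy stand - in only : the " system " is a single coordinate in a rough one - dimensional energy landscape , and the local minimization is a placeholder for the atomistic energy minimization performed with reaxff in lammps ; all numbers are arbitrary .

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
amp = rng.uniform(0.5, 1.0, 8)
phase = rng.uniform(0.0, 2.0 * np.pi, 8)

def energy(x, sigma):
    """Toy rough energy landscape, tilted by the applied shear stress sigma."""
    rough = sum(a * np.cos((i + 1) * x + p) for i, (a, p) in enumerate(zip(amp, phase)))
    return rough - sigma * x

def relax(x0, sigma):
    """Local energy minimization at fixed applied stress (stand-in for the
    atomistic minimization used in the actual study)."""
    return minimize(lambda v: energy(v[0], sigma), [x0]).x[0]

sigma_cycle = 0.6                      # amplitude of the cyclic perturbation (arbitrary units)
x = relax(0.0, 0.0)                    # initial relaxation at zero stress
trajectory = []
for cycle in range(100):
    for sign in (+1, -1):              # one loading / unloading cycle around zero stress
        x = relax(x, sign * sigma_cycle)
    x = relax(x, 0.0)
    trajectory.append(x)               # record the configuration after each full cycle
print(trajectory[:3], trajectory[-1])
```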
here , in order to study creep deformation , we add to the previous method a constant shear stress , such that ( see the inset of figure [ fig : method ] ) . when subjected to shear stresses of different intensities , c s h presents a shear strain that ( 1 ) increases logarithmically with the number of cycles ( figure [ fig : method ] ) and ( 2 ) is proportional to the applied shear stress ( see figure [ fig : strain ] ) . [ figure : shear strain with respect to the number of loading / unloading cycles , under a constant shear stress of 1 , 2 , and 3 gpa ; the inset shows the creep modulus with respect to the packing fraction obtained from nanoindentation , compared with the computed value at . ] the creep of bulk c s h can then be described by a simple logarithmic law , where is a constant analogous to a relaxation time and is the creep modulus . a careful look at the internal energy shows that the height of the energy barriers , through which the system transits across each cycle , remains roughly constant over successive cycles . according to transition state theory , which states that the time needed for a system to jump over an energy barrier is proportional to , we can assume that each cycle corresponds to a constant duration , so that a fictitious time can be defined as . we note that the computed creep modulus does not show any significant change with respect to the applied stress . as such , it appears to be a material property that can be directly compared to nanoindentation results extrapolated to zero porosity . as shown in the inset of figure [ fig : strain ] , we observe an excellent agreement , which suggests that the present method offers a realistic description of the creep of c s h at the atomic scale . this result also suggests that , within the linear regime ( i.e. , for sub - yield stresses , when remain constant ) , deformations due to cyclic creep and basic creep , with respect to the number of stress cycles or the elapsed time respectively , should be equivalent . we reported a new methodology based on atomistic simulation , allowing us to successfully observe long - term creep deformations of c s h . creep deformations are found to be logarithmic and proportional to the applied shear stress .
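as an illustration of how the creep modulus can be extracted from such strain - versus - cycle data , the sketch below fits a logarithmic law of the form $\epsilon(n) = (\sigma / c)\,\ln(1 + n / n_0)$ — our assumed form of the law quoted above — to synthetic data ; the numbers are invented and are not the simulation results .

```python
import numpy as np
from scipy.optimize import curve_fit

def log_creep(n_cycles, c_modulus, n0, sigma=1.0):
    """Assumed logarithmic creep law: strain = (sigma / C) * ln(1 + N / N0)."""
    return (sigma / c_modulus) * np.log(1.0 + n_cycles / n0)

# synthetic "measured" strain under a 1 GPa shear stress (illustrative numbers only)
rng = np.random.default_rng(3)
n = np.arange(1, 2001, dtype=float)
true_c, true_n0, sigma = 150.0, 20.0, 1.0            # GPa, cycles, GPa
strain = log_creep(n, true_c, true_n0, sigma) + 2e-4 * rng.standard_normal(n.size)

# fit the two parameters; the creep modulus C is the quantity compared with
# nanoindentation in the text
popt, _ = curve_fit(lambda x, c, n0: log_creep(x, c, n0, sigma), n, strain, p0=[100.0, 10.0])
print(popt)   # recovered [C, N0], close to [150, 20]
```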
the computed creep modulus is found to be independent of the applied shear stress and is in excellent agreement with nanoindentation measurements extrapolated to zero porosity . |
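The creep modulus can then be extracted by fitting the strain-versus-cycle data to a logarithmic law; a common form is gamma(N) = (tau_s / C) ln(1 + N / N0), which is assumed here since the displayed equation did not survive extraction. The snippet below fits synthetic data with scipy's curve_fit; all parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

TAU_S = 2.0  # applied shear stress (GPa), held fixed during the fit

def creep_law(N, C, N0):
    """Assumed logarithmic creep law: shear strain vs. number of stress cycles N."""
    return (TAU_S / C) * np.log(1.0 + N / N0)

# synthetic "measured" strain with a little noise, standing in for the MD output
rng = np.random.default_rng(0)
N = np.arange(1, 2001, dtype=float)
gamma = creep_law(N, 40.0, 5.0) + 1e-4 * rng.standard_normal(N.size)

(C_fit, N0_fit), _ = curve_fit(creep_law, N, gamma, p0=(10.0, 1.0))
print(f"fitted creep modulus C = {C_fit:.1f} GPa, N0 = {N0_fit:.2f} cycles")
```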
the additive white gaussian noise channel is basic to shannon theory and underlies practical communication models .we introduce classes of superposition codes for this channel and analyze their properties .we link theory and practice by showing superposition codes from polynomial size dictionaries with least squares decoding achieve exponentially small error probability for any communication rate less than the shannon capacity .a companion paper , provides a fast decoding method and its analysis .the developments involve a merging of modern perspectives on statistical linear model selection and information theory .the familiar communication problem is as follows .an encoder is required to map input bit strings of length into codewords which are length strings of real numbers , with norm expressed via the power .we constrain the average of the power across the codewords to be not more than .the channel adds independent noise to the selected codeword yielding a received length string .a decoder is required to map it into an estimate which we want to be a correct decoding of .block error is the event , bit error at position is the event , and the bit error rate is .an analogous section error rate for our code is defined below .the reliability requirement is that , with sufficiently large , the bit error rate or section error rate is small with high probability or , more stringently , the block error probability is small , averaged over input strings as well as the distribution of .the communication rate is the ratio of the input length to the codelength for communication across the channel .the supremum of reliable rates is the channel capacity , by traditional information theory as in , , .standard communication models , even in continuous - time , have been reduced to the above discrete - time white gaussian noise setting , as in , .this problem is also of interest in mathematics because of relationship to versions of the sphere packing problem as described in conway and sloane . for practical codingthe challenge is to achieve rates arbitrarily close to capacity with a codebook of moderate size , while guaranteeing reliable decoding in manageable computation time .we introduce a new coding scheme based on sparse superpositions with a moderate size dictionary and analyze its performance .least squares is the optimal decoder .accordingly , we analyze the reliability of least squares and approximate least squares decoders .the analysis here is without concern for computational feasibility . in similar settingscomputational feasibility is addressed in the companion paper , , though the closeness to capacity at given reliability levels is not as good as developed here .we introduce sparse superposition codes and discuss the reliability of least squares in subsection [ sub : spar ] of this introduction .subsection [ sub : decod ] contrasts the performance of least squares with what is achieved by other methods of decoding . 
in subsection[ sub : pracd ] , we mention relations with work on sparse signal recovery in the high dimensional regression setting .subsection [ sub : awgncode ] discusses other codes and subsection [ sub : forneycover ] discusses some important forerunners to our developments here .our reliability bounds are developed in subsequent sections .we develop the framework for code construction by linear combinations .the story begins with a list ( or book ) of vectors , each with coordinates , for which the codeword vectors take the form of superpositions .the vectors which are linearly combined provide the terms or components of the codewords and the are the coefficients .the received vector is in accordance with the statistical linear model where is the matrix whose columns are the vectors and is the noise vector distributed normal( ) . in keeping with the terminology of that statistical setting , the book may be called the design matrix consisting of variables , each with observations , and this list of variables is also called the dictionary of candidate terms .the coefficient vectors are arranged to be of a specified form . for _ subset superposition coding _ we arrange for a number of the coordinates to be non - zero , with a specified positive value , and the message is conveyed by the choice of subset .denote .if is large , it is a _ sparse superposition code_. in this case , the number of terms sent is a small fraction of dictionary size . with somewhat greater freedom, one may arrange the non - zero coefficients to be or times a specified value , in which case the superposition code is said to be _signed_. then the message is conveyed by the sequence of signs as well as the choice of subset . to allow such forms of , we do not in general take the set of permitted coefficient vectors to be closed under a field of linear operations , and hence our linear statistical model does not correspond to a linear code in the sense of traditional algebraic coding theory . in a specializationwe call a _ partitioned superposition code _ , the book is split into sections of size , with one term selected from each , yielding terms in each codeword out of a dictionary of size .likewise , the coefficient vector is split into sections , with one coordinate non - zero in each section to indicate the selected term .optionally , we have the additional freedom of choice of sign of this coefficient , for a signed partitioned code .it is desirable that the section sizes be not larger than a moderate order polynomial in or , for then the dictionary is arranged to be of manageable size .most convenient is the case that the sizes of these sections are powers of two. then an input bit string of length splits into substrings of size .the encoder mapping from to is then obtained by interpreting each substring of as simply giving the index of which coordinate of is non - zero in the corresponding section .that is , each substring is the binary representation of the corresponding index .as we have said , the rate of the code is input bits per channel uses and we arrange for arbitrarily close to . for the partitioned superposition code , this rate is . for specified rate , the codelength .thus , the length and the number of terms agree to within a log factor . with one term from each section , the number of possible codewords is equal to . alternatively ,if we allow for all subsets of size , the number of possible codewords would be , which is of order , for small compared to . 
to match the number of codewords, it would correspond to reducing by a factor of . though there would be the factor savings in dictionary size from allowing all subsets of the specified size ,the additional simplicity of implementation and simplicity of analysis with partitioned coding is such that we take advantage of it wherever appropriate . with signed partitioned codingthe story is similar , now with possible codewords using the dictionary of size .the input string of length , splits into sections with bits to specify the non - zero term and bit to specify its sign . for a rate codethis entails a codelength of .control of the dictionary size is critical to computationally advantageous coding and decoding .possible dictionary sizes are between the extremes and dictated by the number and size of the sections , where is the number of input bits . at one extreme , with section of size ,one has as the whole codebook with its columns as the codewords , but the exponential size makes its direct use impractical .at the other extreme we have sections , each with two candidate terms in subset coding or two signs of a single term in sign coding with ; in which case is the generator matrix of a linear code . between these extremes , we construct reliable , high - rate codes with codewords corresponding to linear combinations of subsets of terms in moderate size dictionaries .design of the dictionary is guided by what is known from information theory concerning the distribution of symbols in the codewords . by analysis of the converse to the channel coding theorem ( as in ) , for a reliable code at rate near capacity , with a uniform distribution on the sequence of input bits , the induced empirical distribution on coordinates of the codeword must be close to independent gaussian , in the sensethat the resulting mutual information must be close to its maximum subject to the power constraint .we draw entries of independently from a normal distribution with mean zero and a variance we specify , yielding the properties we want with high probability .other distributions , such as independent equiprobable , might also suffice , with a near gaussian shape for the codeword distribution obtained by the convolutions associated with sums of terms in subsets of size . 
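To make the partitioned construction concrete, the sketch below builds a Gaussian dictionary, encodes a random bit string into a coefficient vector with one nonzero entry of size sqrt(P/L) per section, and checks empirically that the marginal distribution of a codeword coordinate is close to Gaussian with variance near P, as argued above. All sizes are toy values and the moment-based check is a choice made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
L, B, P, R = 32, 16, 1.0, 0.5            # sections, section size, power, rate (toy values)
K = L * int(np.log2(B))                  # number of input bits
n = int(np.ceil(K / R))                  # codelength in channel uses
X = rng.standard_normal((n, L * B))      # dictionary with i.i.d. unit-variance entries

def encode(bits):
    """Partitioned superposition encoder: each block of log2(B) bits is read as the
    binary label of the nonzero coordinate in its section; nonzero values are
    sqrt(P/L), so the codeword X @ beta has average power close to P."""
    logB = int(np.log2(B))
    beta = np.zeros(L * B)
    for sec in range(L):
        chunk = bits[sec * logB:(sec + 1) * logB]
        idx = int("".join(str(int(b)) for b in chunk), 2)
        beta[sec * B + idx] = np.sqrt(P / L)
    return X @ beta

# marginal distribution of one codeword coordinate over many random inputs
coords = np.array([encode(rng.integers(0, 2, size=K))[0] for _ in range(5000)])
print("variance:", coords.var())          # close to P, up to finite-B sampling of this dictionary
print("kurtosis:", np.mean((coords - coords.mean())**4) / coords.var()**2)  # roughly 3 (near-Gaussian)
```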
for the vectors , the non - zero coefficients may be assigned to have magnitude , which with having independent entries of variance , yields codewords of average power near .there is a freedom of scale that allows us to simplify the coefficient representation .henceforth , we arrange the coordinates of to have variance and set the non - zero coefficients to have magnitude .optimal decoding for minimal average probability of error consists of finding the codeword with coefficient vector of the assumed form that maximizes the posterior probability , conditioning on and .this coincides , in the case of equal prior probabilities , with the maximum likelihood rule of seeking such a codeword to minimize the sum of squared errors in fit to .this is a least squares regression problem , with constraints on the coefficient vector .we show for all , that the least squares solution , as well as approximate least squares solutions such as may arise computationally , will have , with high probability , at most a negligible fraction of terms that are not correctly identified , producing a low bit error rate .the heart of the analysis shows that competing codewords that differ in a fraction of at least terms are exponentially unlikely to have smaller distance from than the true codeword , provided that the section size is polynomially large in the number of sections , where a sufficient value of is determined . for the partitioned superposition codethere is a positive constant such that for rates less than the capacity , with a positive gap not too large , the probability of a fraction of mistakes at least is not more than consequently , for a target fraction of mistakes and target probability , the required number of sections or equivalently the codelength depends only polynomially on the reciprocal of the gap and on the reciprocal of .indeed of order\log ( 1/\epsilon) ] with is the cumulant generating function of a test statistic in our analysis . our first result on the distribution of the number of mistakes is the following . * lemma 1 : * set for an . for approximate least squares with ,the probability of a fraction mistakes is upper bounded by or equivalently , where and is the signal - to - noise ratio .* remark 1 : * we find this lemma 1 to be especially useful for in the lower range of the interval from to .lemma 2 below will refine the analysis to provide an exponent more useful in the upper range of the interval .* proof of lemma 1 : * to incur mistakes , there must be an allowed subset of size which differs from the subset sent in an amount which undesirably has squared distance less than or equal to the value achieved by .the analysis proceeds by considering an arbitrary such , bounding the probability that , and then using an appropriately designed union bound to put such probabilities together .consider the statistic given by .\ ] ] we set a threshold for this statistic equal to .the event of interest is that .the subsets and have an intersection of size and difference of size .given the actual density of is normal with mean and variance and we denote this density . 
in particular, there is conditional independence of and given .consider the alternative hypothesis of a conditional distribution for given and which is normal( ) .it is the distribution which would have governed if were sent .let be the associated conditional density .with respect to this alternative hypothesis , the conditional distribution for given remains normal( ) .that is , .we decompose the above test statistic as \ ] ] .\ ] ] let s call the two parts of this decomposition and , respectively .note that depends only on terms in , whereas depends also on the part of not in .concerning , note that we may express it as where is the adjustment by the logarithm of the ratio of the normalizing constants of these densities .thus is equivalent to a likelihood ratio test statistic between the actual conditional density and the constructed alternative hypothesis for the conditional density of given and .it is helpful to use bayes rule to provide via the equality of and and to interpret this equality as providing an alternative representation of the likelihood ratio in terms of the reverse conditionals for given and .we are examining the event that there is an allowed subset ( with of size and of size ) such that that is less than . for positive indicator of this event satisfies because , if there is such an with negative , then indeed that contributes a term on the right side of value at least . herethe outer sum is over of size . for each such , for the inner sum , we have sections in each of which , to comprise , there is a term selected from among choices other than the one prescribed by . to bound the probability of , take the expectation of both sides , bring the expectation on the right inside the outer sum , and write it as the iterated expectation , where on the inside condition on , and to pull out the factor involving , to obtain that ] . when plugged in above it yields the claimedbound optimized over in ]is bounded by the minimum for in the interval between and of the following \big\}.\ ] ] where * proof of lemma 2 : * split the test statistic where }\ ] ] and }\ ] ] likewise we split the threshold where is negative and is positive .the event that there is an with is contained in the union of the two events , that there is an with , and the event , that .the part has no dependence on so it can be treated more simply .it is a mean zero average of differences of squared normal random variables , with squared correlation .so using its moment generating function , ] , its analysis is much the same as for lemma 1 .we again decompose as the sum , where is the same as before .the difference is that in forming we subtract rather than .consequently , ,\ ] ] which again involves a difference of squares of standardized normals .but here the coefficient multiplying is such that we have maximized the correlations between the and .consequently , we have reduced the spread of the distribution of the differences of squares of their standardizations as quantified by the cumulant generating function .one finds that the squared correlation coefficient is for which .accordingly we have that the moment generating function is \} ] and ] .the smallest such section size rate is where the maximum is for in .this definition has the required invariance to the choice of base of the logarithm , assuming that the same base is used for the communication rate and for the that arises in the definition of . 
in the above ratio the numerator and denominator are both at and ( yielding at the ends ) .accordingly , we have excluded and from the definition of for finite . nevertheless , limiting ratios arise at these ends .we show that the value of is fairly insensitive to the value of , with the maximum over the whole range being close to a limit which is characterized by values in the vicinity of .let near be the solution to * lemma 4 : * the section size rate has a continuous limit which is given , for , by ^ 2 /[8v(1\!+\!v)\log e]}\ ] ] and for by /[2(1\!+\!v)]}\ ] ] where is the signal - to - noise ratio . with replaced by and using log base e , in the case , it is ^ 2 } \ ] ] which is approximately for small positive ; whereas , in the case it is which asymptotes to the value for large .* proof of lemma 4 : * for in we use and the strict positivity of to see that the ratio in the definition of tends to zero uniformly within compact sets interior to .so the limit is determined by the maximum of the limits of the ratios at the two ends . in the vicinity of the left and rightends we replace by the continuous upper bounds and , respectively , which are tight at and , respectively .then in accordance with lhopital s rule , the limit of the ratios equals the ratios of the derivatives at and , respectively .accordingly , where and are the derivatives of with respect to evaluated at and , respectively . to determine the behavior of in the vicinity of and we first need to determine whether the optimal in its definition is strictly less than or equal to . according to our earlier developments that is determined by whether .the right side of this is .so it is equivalent to determine whether the ratio is less than for in the vicinity of and .using lhopital s rule it suffices to determine whether the ratio of derivatives is less than when evaluated at and .at it is /v ] which is less than if and only if . for the cases in which the optimal , we need to determine the derivative of at and .recall that is the composition of the functions and and .we use the chain rule taking the products of the associated derivatives .the first of these functions has derivative which is at , the second of these has derivative which is at , and the third of these functions is which has derivative that evaluates to at and evaluates to ^ 2/[v(1\!+\!v)] ] at , which is again smaller in magnitude than the derivative at , producing the claimed form for for . at equate and see that both of the expressions for the magnitude of the derivative at agree with each other ( both reducing to ) so the argument extends to this case , and the expression for is continuous in .this completes the proof of lemma 3 .while is undesirably large for small , we have reasonable values for moderately large .in particular , equals and , respectively , at and , and it is near for large . as a function of the signal - to - noise ratio .the dashed curve shows at . just below itthe thin solid curve is the limit for large . 
for section size the error probabilities are exponentially small for all and any .the bottom curve shows the minimal section size rate for the bound on the error probability contributions to be less than , with and at ., height=336 ] numerically is of interest to ascertain the minimal section size rate , for a specified such as , for chosen to be a proscribed high fraction of , say , for a proscribed small target fraction of mistakes , say , and for to be a small target probability , so as to obtain ,p[\tilde e_\ell]+p[e_\ell^*]\}\le \epsilon ] and the bound from lemma 2 is used for +p[e_\ell^*] ] and previously discussed , beginning in section 2 .this is near for small .when the condition is satisfied and indeed solves the above equation ; otherwise provides the solution .now . with to integers between and , it is not more than and , with equality at particular near and , respectively .it remains small , with , for .also we have from lemma 2 .consequently , is small for large ; moreover , for near and , it is of order and , respectively , and via the indicated bounds , derivatives at and can be explicitly determined .the analysis in lemma 4 may be interpreted as determining section size rates such that the differentiable upper bounds on are less than or equal to for , where , noting that these quantities are at the endpoints of the interval , the critical section size rate is determined by matching the slopes at . at the other end of the interval ,the bound on the difference has a strictly positive slope at , given by - [ 2vr / a]^{1/2} ] for moderate and small , having is not so important to the exponent , as the positivity of produces a positive exponent even if matches or is slightly greater than . in this regime ,the lemma 1 bound is preferred , where we set without need for .in this section we put the above conclusions together to demonstrate the reliability of approximate least squares .the probability of the event of more than any small positive fraction of mistakes is shown to be exponentially small .recall the setting that we have a random dictionary of sections , each of size .the mapping from -bit input strings to coefficient vectors is as previously described .the set of such vectors are those that have one non - zero coefficient in each section ( with possible freedom for the choice of sign ) and magnitude of the non - zero coefficient equal to .let be the coefficient vector for an arbitrary input .we treat both the case of a fixed input , and the case that the input is drawn at random from the set of possible inputs .the codeword sent is the superposition of a subset of terms with one from each section .the received string is with distributed normal .the columns of are independent and and are known to the receiver , but not . the section size rate is such that . in fashion with shannon theory , the expectations in the following theorem are taken with respect to the distribution of the design as well as with respect to the distribution of the noise ; implications for random individual dictionaries are discussed after the proof .the estimator is assumed to be an ( approximate ) least squares estimator , taking values in and satisfying , with .let denote the number of mistakes , that is , the number of sections in which the non - zero term in is different from the term in .suppose the threshold is not more than .some natural choices for the threshold include , , and . 
for positive .* theorem 5 : * suppose the section size rate is at least , that the communication rate is less than the capacity with codeword length , and that we have an approximate least squares estimator .for between and , the probability ] using the minimum of the bounds from lemmas 1 and 2 .it follows that there is a positive constant , such that for all between and , \le 2l \exp\{-nc\min\{\alpha_0,g(c\!-\!r)\}\}.\ ] ] consequently , asymptotically , taking of the order of a constant times , the fraction of mistakes is of order in probability , provided is at least a constant multiple of .moreover , for any fixed , , and , not depending on , satisfying , and , we conclude that this probability is exponentially small .* proof : * consider the exponent as given at the start of the preceding section .we take a reference for which and for which is at least and at least a multiple of .the simplest choice is , which may be used when is less than a fixed fraction of .then exceeds , taking to be between and .small precision makes for a greater computational challenge .allowance is made for a more relaxed requirement that be less than and less than a fixed fraction of .both of these conditions are satisfied when is less than the value stated for the theorem .accordingly , set ] and ] bound ( the part controlling ] not more than the sum of \ , d^\prime \}\ ] ] and for any choice of between and .for instance one may choose to be half way between and . now if is less than a fixed fraction of , we have arranged for both and to be of order uniformly for .accordingly , the first of the two parts in the bound has exponent exceeding a quantity of order .the second of the two parts has exponent related to a function of the ratio ] , uniformly exponentially small for , with the stated conditions on . with optimized ,let be the minimum of the two exponents from the two terms in the bound on ] goes to zero polynomially in .indeed , for at least a multiple of , and sufficiently small , the bound becomes which with becomes , \le 2(1/l)^{(1/2)(a / r)\tau_v w_v \ell_0 - 1}.\ ] ] it is assured to go to zero with for at least ] is the same for all by exchangeability of the distribution of the columns of .accordingly , it also matches the average probability = \frac{1}{2^k } \sum_u { \mathbb{p}}[e|u] ] , where ] is over the distribution of the noise ) .this ] satisfies the indicated bound , random are likely to behave similarly .indeed , by markov s inequality \ge \tau p_e^{b}\big ] < 1/\tau ] , with probability at least .the manageable size of the dictionary facilitates computational verification by simulation that the bound holds for that . with may independently repeat the generation of a geometric( ) number of times until success .the mean number of draws of the dictionary required for one with the desired performance level is .even with only one draw of , one has with , that \le 2l e^{-(n/2)d_{min}},\ ] ] except for in an event of probability not more than .now ] is exponentially small for most ( again by markov s inequality ) . 
in theory one could expurgate the codebook , leaving only good performing and reassigning the mapping from to , to remove the minority of cases in which > 4l e^{-(n/2)d_{min}} ] for a specific and , to decide whether that should be used .however , it is not practical to do so in advance for all , and it is not apparent how to perform such expurgations efficiently on - line during communications .thus we maintain our focus in this paper on average case error probability , averaging over the possible inputs , rather than maximal error probability .as we have said , for the average case analysis , armed with a suitable decoder , one can check , for a dictionary , whether it satisfies an exponential bound on ] .our current method of proof does not facilitate providing such a direct check .the reason is that our analysis does not exclusively use the distribution of given and ; rather it makes critical use of properties of the joint distribution of and given .likewise , averaging over the random generation of the dictionary , permits a simple look at the satisfaction of the average power constraints . with a randomly drawn , and associated coefficient vector , consider the behavior of the power and whether it stays less than .the event , when conditioning on the input , has exponentially small probability ] is the same for all and hence matches the average ] enjoys the exponential bound , from which , again by applications of markov s inequality , except for in an event of exponentially small probability , for all but an exponentially small fraction of coefficient vectors in .control of the average power is a case in which we can formulate a direct check of what is required of the dictionary , as is examined in appendix b.here we examine the average and maximal power of the codewords . the maximal power has a role in our analysis of decoding .the power of a codeword is its squared norm , consisting of the average square of the codeword values across its coordinates .the terminology _ power _ arises from settings in which codeword values are voltages on a communication wire or a transmission antenna in the wireless case , recalling that power equals average squared voltage divided by resistance . *average power for the signed subset code : * consider first our signed , subset superposition code .each input correspond to a coefficient vector , where for each of the sections there is only one for which is nonzero , and , having absorbed the size of the terms into the , the nonzero coefficients are taken to be .these are the coefficient vectors of our codewords , for which the power is . with a uniform distribution on the binary input sequence of length ,the induced distribution on the sequence of indices is independent uniform on the choices in section , and likewise the signs are independent uniform valued , for . 
fix a dictionary , and consider the average of the codeword powers with this uniform distribution on inputs , by independence across sections , this average simplifies to now we consider the size of this average power , using the distribution of the dictionary , with each entry independent normal( ) .this average power has mean equal to , standard deviation , and distribution equal to { { \mathcal{x}}}_{nn}^2 ] is near for small positive , so that the bound is near , which is for .or we may appeal to the normal approximation for fixed when is large ; the probability is not more than that the dictionary has average power outside the interval formed by the mean plus or minus two standard deviations for instance , suppose and the rate is near the capacity , so that is near , and pick and .then with high probability is not more than times .if the average power constraint is held stringently , with average power to be precisely not more than , then in the design of the code proceed by generating the entries of with power , where is less than .the analysis of the preceding sections then carries through to show exponentially small probability of more than a small fraction of mistakes when as long as is sufficiently close to . * average power for the subset code : * likewise , let s consider the case of subset superposition coding without use of the signs . once again fix and consider a uniform distribution on inputs ; it again makes the term selections independent and uniformly distributed over the choices in each section .now there is a small , but non - zero , average of the terms in each section , and likewise a very small , but non - zero , overall average .we need to make adjustments by these averages when invoking the section independence to compute the average power . indeed , as in the rule that an expected square is the square of the expectation plus a variance , the average power is the squared norm of the average of the codewords plus the average norm squared difference between codewords and their mean .the mean of the codewords , with the uniform distribution on inputs , is , which is a normal( ) random vector of length . by independence of the term selections ,the codeword variance is .accordingly , in this subset coding setting , using the independence of and and standard distribution theory for sample variances , with a randomly drawn dictionary , we have that is times a chi - square random variable with degrees of freedom , plus times an independent chi - square random variable with degrees of freedom .so it has mean equal to and a standard deviation of , which is slightly greater than before .it again yields only a small departure from the target average power , as long as and are large .* worst case power : * next we consider the matter of the size of the maximum power among codewords for a given design .the simplest distribution bound is to note that for each , the codeword is distributed as a random vector with independent normal( ) coordinates , for which is times a chi - square random vector .there are such codewords , with the rate written in nats .we recall the probability bound .accordingly , by the union bound , is not more than except in an event of probability which we bound by , where is the inverse of the function $ ] .this is seen to be of order for small positive and of order for large .consequently , the bound on the maximum power is near rather than . 
according to this characterization , for positive rate communication , with subset superpositions , one can not rely , either in encoding or in decoding , on the norms being uniformly close to their expectation . *individual codeword power : * we return to signed subset coding and provide explicitly verifiable conditions on such that for every subset , the power is near for most choices of signs .the uniform distribution on choices of signs ameliorates between - section interference to produce simplified analysis of codeword power .the input specifies the term in each sections along with the choice of its sign given by in , leading to coefficient vectors equal to at position in section , for .the uniform distribution on the choices of signs leads to them being independently , equiprobable and .now the codeword is given by .it has the property that conditional on and the subset , the contributions for distinct sections are made to be mean zero uncorrelated vectors by the random choice of signs . in particular , again conditioning on the dictionary and the subset , we have that the power has conditional mean which we shall see is close to . the deviation from the conditional mean equals .the presence of the random signs approximately symmetrizes the conditional distribution and leads to conditional variance .now concerning the columns of the dictionary , the squared norms are uniformly close to , since the number of such is not exponentially large .indeed , by the union bound the maximum over the columns , satisfies except in an event of probability bounded by . whence the conditional mean power is not more than uniformly over all allowed selections of term subsets .note here that the polynomial size of makes the small ; this is in contrast to the worst case analysis above were the log cardinality divided by is the fixed rate .next to show that the conditional mean captures the typical power , we show that the conditional variance is small . toward that endwe examine the inner products and their maximum absolute value .consider products of independent standard normals .these have moment generating function equal to .[ this matches the moment generating function for half the difference in squares of independent normals found in section 2 ; to see why note that equals half the difference in squares of and . ]accordingly , for positive , where . as previously discussed , this is near for small and accordingly its inverse function is near for small .the corresponding two - sided bound is . by the union bound, we have that except for dictionaries in an event of probability not more than . recall that the conditional variance of equals . in the likely event that the above bound holds , we have that this conditional variance is not more than .consequently , the conditional distribution of the power given and is indeed concentrated near .accordingly , for each subset , most choices of sign produce a codeword with power near .moreover , for this codeword power property , it is enough that the individual columns of the dictionary have near and near , uniformly over .we thank john hartigan , cong huang , yiannis kontiyiannis , mokshay madiman , xi luo , dan spielman , edmund yeh , john hartigan , mokshay madiman , dan spielman , imre teletar , harrison zhou , david smalling and creighton heaukulani for helpful conversations .barron , a. 
joseph , `` least squares superposition codes of moderate dictionary size , reliable at rates up to capacity , '' _ proc .symp information theory _, austin , texas , jun 13 - 18 , 2010 .benjamini , y. and hochberg , y. `` controlling the false discovery rate : a practical and powerful approach to multiple testing , '' _, 57 , 1995 . g. berrou , a. glavieux , and p. thitimajshima , `` near shannon limit error - correcting coding : turbo codes , '' _ proc .commun _ , geneva , switzerland , may 1993 , pp .1064 - 1070 .r. j. mceliece , d. j. c. mackay , and j - f .cheng , `` turbo decoding as an instance of pearl s belief propagation algorithm , '' _ ieee journal on selected areas in commun _ ,16 , 2 , pp .140 - 152 , feb . 1998 .d. donoho , `` for most large underdetermined systems of linear equations , the minimal l1-norm solution is also the sparsest solution , '' commun .pure and appl .59 , no . 6 , pp .797 - 829 , jun . 2006 .donoho , j. tanner , `` exponential bounds implying construction of compressed sensing matrices , error - correcting codes , and neighborly polytopes by random sampling , '' _ ieee trans .inform . theory _alyson k. fletcher , sundeep rangan , vivek k. goyal , kannan ramchandran , `` denoising by sparse approximation : error bounds based on rate - distortion theory,''__j .signal process _10 , 2006 .w. hoeffding , `` probability inequalities for sums of bounded random variables , '' _ j. american statist ._ , pp.13 - 30 , march , 1963 .hu , h. zhao and h.h .zhou , `` multiple hypothesis testing with groups , '' manuscript .l. jones , `` a simple lemma for optimization in a hilbert space , with application to projection pursuit and neural net training , '' _ annals of statistics _ , vol.20 , pp.608 - 613 , 1992 .wainwright , `` sharp thresholds for high - dimensional and noisy sparsity recovery using -constrained quadratic programming ( lasso ) . ''_ ieee trans .inform . theory _ , vol.55 , no.5 , pp.2183 - 2202 , may 2009 .w. wang , m. j. wainwright , and k. ramchandran , information - theoretic limits on sparse signal recovery : dense versus sparse measurement matrices , _ ieee trans . inform . theory _6 , jun 2010 . | for the additive white gaussian noise channel with average codeword power constraint , new coding methods are devised in which the codewords are sparse superpositions , that is , linear combinations of subsets of vectors from a given design , with the possible messages indexed by the choice of subset . decoding is by least squares , tailored to the assumed form of linear combination . communication is shown to be reliable with error probability exponentially small for all rates up to the shannon capacity . |
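Since the analysis above deliberately sets computational feasibility aside, the least-squares decoder it studies can only be exercised by brute force, but at toy sizes this is enough to see the setup end to end. The sketch below encodes a random message with the (unsigned) partitioned code, passes it through the Gaussian channel, and decodes by exhaustive search over all B^L section choices; the parameter values are arbitrary small numbers chosen only so that the search finishes quickly, and the snippet is an illustration of the problem setup, not a practical decoder.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
L, B, P, R, sigma2 = 4, 8, 4.0, 0.5, 1.0       # 8**4 = 4096 candidate codewords
K = L * int(np.log2(B))                        # input bits
n = int(np.ceil(K / R))                        # codelength
X = rng.standard_normal((n, L * B))

def codeword(indices):
    """Codeword for a given choice of one term per section."""
    beta = np.zeros(L * B)
    beta[np.arange(L) * B + np.asarray(indices)] = np.sqrt(P / L)
    return X @ beta

sent = rng.integers(0, B, size=L)              # the message: one index per section
y = codeword(sent) + np.sqrt(sigma2) * rng.standard_normal(n)

best, best_err = None, np.inf                  # exhaustive least-squares search
for cand in product(range(B), repeat=L):
    err = np.sum((y - codeword(cand)) ** 2)
    if err < best_err:
        best, best_err = np.array(cand), err

print("section mistakes:", np.count_nonzero(best != sent), "out of", L)
# with snr = 4 the capacity is about 1.16 bits per channel use, so R = 0.5 is well
# below it and the decoder is usually correct in this toy run
```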
magnetised turbulence pervades the universe .it is likely to play an important role in the transport of energy , momentum and charged particles in a diverse range of astrophysical plasmas .it is studied with regards to its influence on the generation of magnetic fields in stellar and planetary interiors , small - scale structure and heating of stellar winds , the transport of angular momentum in accretion discs , gravitational collapse and star formation in molecular clouds , the propagation and acceleration of cosmic rays , and interstellar scintillation ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?the effects of magnetised turbulence need to be taken into account when analysing astrophysical observations and also when modelling astrophysical processes .the simplest theoretical framework that describes magnetised plasma turbulence is that of incompressible magnetohydrodynamics ( mhd ) , where the elssser variables are defined as , is the fluctuating plasma velocity , is the fluctuating magnetic field normalized by , is the alfvn velocity based upon the uniform background magnetic field , , is the plasma pressure , is the background plasma density , represents forces that drive the turbulence at large scales and for simplicity we have taken the case in which the fluid viscosity is equal to the magnetic resistivity .energy is transferred to smaller scales by the nonlinear interactions of oppositely propagating alfvn wavepackets .this can be inferred directly from equation ( [ eq : mhd - elsasser ] ) by noting that in the absence of forcing and dissipation , if then any function is an exact nonlinear solution that propagates parallel and anti - parallel to with the alfvn speed .the efficiency of the nonlinear interactions splits mhd turbulence into two regimes .the regime in which the linear terms dominate over the nonlinear terms is known as ` weak ' mhd turbulence , otherwise the turbulence is ` strong ' .in fact , it has been demonstrated both analytically and numerically that the mhd energy cascade occurs predominantly in the plane perpendicular to the guiding magnetic field .this ensures that even if the turbulence is weak at large scales it encounters the strong regime as the cascade proceeds to smaller scales .mhd turbulence in astrophysical systems is therefore typically strong . for strong mhd turbulence, argued that the linear and nonlinear terms in equations ( [ eq : mhd - elsasser ] ) should be approximately balanced at all scales , known as the critical balance condition .consequently , postulated that the wave packets get progressively elongated in the direction of the guide field as their scale decreases ( with the field - parallel lengthscale and field - perpendicular scale related by ) and that the field - perpendicular energy spectrum takes the kolmogorov form .recent high resolution direct numerical simulations with a strong guide field ( ) do indeed verify the strong anisotropy of the turbulent fluctuations , however , the field - perpendicular energy spectrum appears to be closer to ( e.g. , * ? ? ?* ; * ? ? ?? * ; * ? ? ?* ; * ? ? ?a resolution to this contradiction was proposed in .therein it was suggested that in addition to the elongation of the eddies in the direction of the guiding field , the fluctuating velocity and magnetic fields at a scale are aligned within a small scale - dependent angle in the field perpendicular plane , . 
in this modelthe wavepackets are three - dimensionally anisotropic .scale - dependent dynamic alignment reduces the strength of the nonlinear interactions and leads to the field - perpendicular energy spectrum .although the two spectral exponents and are close together in numerical value , the physics of the energy cascade in each model is different .the difference between the two exponents is especially important for inferring the behaviour of processes in astrophysical systems with extended inertial intervals .for example , the two exponents can lead to noticeably different predictions for the rate of turbulent heating in coronal holes and the solar wind ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) .thus , there is much interest in accurately determining the spectral slope from numerical simulations .unfortunately , the reynolds numbers that are currently accessible by most direct numerical simulations do not exceed a few thousand , which complicates the precise identification of scaling exponents .techniques for careful optimisation of the numerical setup and alternative ways of differentiating between the competing theories are therefore much sought after . maximising the extent of the inertial range is often achieved by implementing physically motivated simplifying assumptions . for example , since the turbulent cascade proceeds predominantly in the field - perpendicular plane it is thought that the shear - alfvn waves control the dynamics while the pseudo - alfvn waves play a passive role ( see , e.g. , ) .if one neglects the pseudo - alfvn waves ( i.e. removes the fluctuations parallel to the strong guide field ) one obtains a system that is equivalent to the reduced mhd system ( rmhd ) that was originally derived in the context of fusion devices by and ( see also ) .incompressibility then enables the system to be further reduced to a set of two scalar equations for the elssser potentials , resulting in a saving of approximately a factor of two in computational costs .further computational savings can be made by making use of the fact that the wavepackets are elongated .hence variations in the field - parallel direction are slower than in the field - perpendicular plane and a reduction in the field - parallel resolution would seem possible .indeed , this is widely used as an optimisation tool in numerical simulations of the inertial range of field - guided mhd turbulence ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?the accumulated computational savings can then be re - invested in reaching larger reynolds numbers for the field - perpendicular dynamics . additionally , it is advantageous to seek other ways of probing the universal scaling of mhd turbulence . 
in this workwe point out a rather powerful method , which is based on the fact that there may exist certain quantities in mhd turbulence that exhibit very good scaling laws even for turbulence with relatively low reynolds numbers .the situation here is reminiscent of the well known phenomenon of extended self - similarity in hydrodynamic turbulence .we propose that one such `` stable '' object is the alignment angle between the velocity and magnetic fluctuations , which we measure as the ratio of two specially constructed structure functions .this ratio has been recently measured in numerical simulations in an attempt to differentiate among various theoretical predictions ( ) .also , it has recently been shown by that the same measurement is accessible through direct observations of solar wind turbulence .scale - dependent alignment therefore has practical value : its measurement may provide an additional way of extracting information about the physics of the turbulent cascade from astrophysical observations . in the present work we conduct a series of numerical simulations with varying resolutions and reynolds numbers .we find that as long as the simulations are well resolved , the alignment angle exhibits a universal scaling behavior that is virtually independent of the reynolds number of the turbulence .moreover , we find that the _ length _ of scaling range for this quantity extends to the smallest resolved scale , independently of the reynolds number .this means that although the dissipation spoils the power - law scaling behaviour of each of the structure functions , the dissipation effects cancel when the ratio of the two functions is computed and the universal inertial - range scaling extends deep in the dissipation region .the described method allows the inference of valuable scaling laws from numerical simulations , experiments , or observations of mhd turbulence with limited reynolds number .however , one can ask how well the extended - scaling method can be combined with the previously mentioned optimisation methods relying on the reduced mhd equations and a decreased parallel resolution .we check that reduced mhd does not alter the result .however , when the dissipation region becomes under - resolved ( as can happen , for example , when the field - parallel resolution is decreased ) , the extended scaling of the alignment angle deteriorates significantly .thus the optimisation technique that works well for viewing the inertial range of the energy spectra should not be used in conjunction with the extended - scaling measurements that probe deep into the dissipation region .the remainder of this paper will report the findings of a series of numerical measurements of the alignment angle in simulations with different reynolds numbers and different field - parallel resolutions in both the mhd and rmhd regimes .the aim is to address the need to find an optimal numerical setting for studying strong mhd turbulence and to raise caution with regards to the effects that implementing simplifying assumptions in the numerics can have on the solution and its physical interpretation .we simulate driven incompressible magnetohydrodynamic turbulence in the presence of a strong uniform background magnetic field , .the mhd code solves equations ( [ eq : mhd - elsasser],[eq : div ] ) on a periodic , rectangular domain with aspect ratio , where the subscripts denote the directions perpendicular and parallel to and we take .a fully dealiased 3d pseudospectral algorithm is used to perform the 
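The mechanism behind the proposed measurement, namely that a non-universal dissipative cutoff shared by two structure functions drops out of their ratio, can be illustrated with a purely synthetic example. In the toy below the two "structure functions" are power laws with exponents chosen to give the r^{1/4} alignment scaling discussed later, multiplied by the same artificial cutoff factor; the functional forms are invented for illustration and are not taken from the simulations.

```python
import numpy as np

r = np.logspace(-3, 0, 200)         # separation in units of the outer scale
r_d = 5e-3                          # toy dissipation scale
cutoff = np.exp(-r_d / r)           # the same non-universal dissipative factor in both

S_num = 1.3 * r**0.75 * cutoff      # stands in for <|dv x db|>
S_den = 0.9 * r**0.50 * cutoff      # stands in for <|dv| |db|>
ratio = S_num / S_den               # the cutoff cancels, leaving a clean r**0.25

def local_slope(y):
    """Logarithmic derivative d log(y) / d log(r)."""
    return np.gradient(np.log(y), np.log(r))

i = 5  # a point below the cutoff scale r_d
print("slope of the numerator :", local_slope(S_num)[i])   # spoiled by the cutoff
print("slope of the ratio     :", local_slope(ratio)[i])   # stays at 0.25
```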
spatial discretisation on a grid with a resolution of mesh points .the rmhd code solves the reduced mhd counterpart to equations ( [ eq : mhd - elsasser],[eq : div ] ) in which ( see ) .the domain is elongated in the direction of the guide field in order to accommodate the elongated wavepackets and to enable us to drive the turbulence in the strong regime while maintaining an inertial range that is as extended as possible ( see ) .the random forces are applied in fourier space at wavenumbers , , where we shall take or .the forces have no component along and are solenoidal in the -plane .all of the fourier coefficients outside the above range of wavenumbers are zero and inside that range are gaussian random numbers with amplitudes chosen so that .the individual random values are refreshed independently on average every , i.e. the force is updated approximately times per turnover of the large - scale eddies .the variances control the average rates of energy injection into the and fields .the results reported in this paper are for the balanced case . in all of the simulations performed in this work we will set the background magnetic field in velocity units .time is normalised to the large scale eddy turnover time .the field - perpendicular reynolds number is defined as .the system is evolved until a stationary state is reached , which is confirmed by observing the time evolution of the total energy of the fluctuations , and the data are then sampled in intervals of the order of the eddy turnover time .all results presented correspond to averages over approximately 30 samples .we conduct a number of mhd and rmhd simulations with different resolutions , reynolds numbers and field - parallel box sizes .the parameters for each of the simulations are shown in table [ tab : params ] .ccccccc m1 & mhd & 256 & 256 & 5 & 800 & 1 + m2 & mhd & 512 & 512 & 5 & 2200 & 1 + m3 & mhd & 512 & 512 & 5 & 2200 & 0.1 + m4 & mhd & 512 & 512 & 10 & 2200 & 0.1 + m5 & mhd & 512 & 256 & 10 & 2200 & 0.1 + r1 & rmhd & 512 & 512 & 6 & 960 & 0.1 + r2 & rmhd & 512 & 512 & 6 & 1800 & 0.1 + r3 & rmhd & 256 & 256 & 6 & 960 & 0.1 + [ tab : params ] for each simulation we calculate the scale - dependent alignment angle between the shear - alfvn velocity and magnetic field fluctuations .we therefore define velocity and magnetic differences as and , where is a point - separation vector in the plane perpendicular to . in the mhd casethe pseudo - alfvn fluctuations are removed by subtracting the component that is parallel to the local guide field , i.e. we construct ( and similarly for ) where . in the rmhd case fluctuations parallel to are not permitted and hence the projection is not necessary .we then measure the ratio of the second order structure functions where the average is taken over different positions of the point in a given field - perpendicular plane , over all such planes in the data cube , and then over all data cubes . by definition of the cross product where is the angle between and and the last approximation is valid for small angles .we recall that the theoretical prediction is .figure [ fig : angle_n_re ] illustrates the ratio ( [ eq : angle2 ] ) as a function of the separation for two mhd simulations ( m1 and m2 ) corresponding to a doubling of the resolution from to mesh points with the reynolds number increased from to .excellent agreement with the theoretical prediction is seen in both cases . 
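In a simulation the ratio defined above is estimated directly from the gridded fluctuation fields. The function below is a minimal sketch of that measurement for a periodic data cube with the guide field along z, taking increments along a single perpendicular axis for brevity, whereas the measurements reported here average over the perpendicular plane and over many snapshots; the array layout and names are assumptions.

```python
import numpy as np

def alignment_ratio(vx, vy, bx, by, max_r):
    """Estimate sin(theta_r) = <|dv_perp x db_perp|> / <|dv_perp| |db_perp|> for
    field-perpendicular increments of r = 1..max_r grid cells along the x axis.
    vx, vy, bx, by are 3-D periodic arrays of the fluctuation components in the
    plane perpendicular to the guide field (taken along z)."""
    ratios = []
    for r in range(1, max_r + 1):
        dvx = np.roll(vx, -r, axis=0) - vx
        dvy = np.roll(vy, -r, axis=0) - vy
        dbx = np.roll(bx, -r, axis=0) - bx
        dby = np.roll(by, -r, axis=0) - by
        cross = np.abs(dvx * dby - dvy * dbx)                      # z-component of dv x db
        mags = np.sqrt(dvx**2 + dvy**2) * np.sqrt(dbx**2 + dby**2)
        ratios.append(cross.mean() / mags.mean())                  # ratio of the two averages
    return np.arange(1, max_r + 1), np.array(ratios)

# null test: for independent random fields the ratio shows no trend with r
rng = np.random.default_rng(5)
vx, vy, bx, by = (rng.standard_normal((64, 64, 32)) for _ in range(4))
print(alignment_ratio(vx, vy, bx, by, max_r=8)[1])   # roughly constant, near 2/pi
```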
as the resolution and reynolds number increase , the scale - dependence of the alignment angle persists to smaller scales .indeed , we believe that the point at which the alignment saturates can be identified as the dealiasing scale , corresponding in configuration space to for the simulations , respectively . this is verified in figure [ fig : angle_re ] that shows that alignment is largely insensitive to the reynolds number ( provided that the system is turbulent ) and figure [ fig : angle_n ] that shows that the saturation point decreases by a factor of approximately 2 as the resolution doubles at fixed reynolds number .thus as computational power increases , allowing higher resolution simulations to be conducted , we expect to find that scale - dependent alignment persists to smaller and smaller scales .the fact that even in the lower reynolds number cases scale - dependent alignment is clearly seen over quite a wide range of scales is particularly interesting , as in those cases only a very short inertial range can be identified in the field - perpendicular energy spectrum , making the identification of spectral exponents difficult ( see figure 1 in ) . in the larger cases , we can estimate the inertial range of scales in configuration space to be the range of over which the energy spectrum displays a power law dependence . the field - perpendicular energy spectrum for the case m2 is shown in figure 1 in , with the inertial range corresponding to approximately , i.e. .comparison with figure [ fig : angle_n_re ] shows that a significant fraction of the region over which the scaling is observed corresponds to the dissipative region , i.e. that ratios of structure functions appear to probe deeper than the inertial range that is suggested by the energy spectra .we now consider the effect on the alignment ratio of decreasing the field - parallel resolution .figure [ fig : angle_nz ] shows the results from three mhd simulations ( m3 , m4 & m5 ) for which the field - parallel resolution decreases by a factor of two , twice . as the resolution decreases the extent of the self - similar region diminishes and the scale - dependence of the alignment angle becomes shallower . 
if one were to calculate the slope for the lowest field - parallel resolution case ( m5 ) one would find a scale - dependence that is shallower than the predicted power law exponent of .this may lead one to conclude ( incorrectly ) that scale - dependent alignment is not a universal phenomenon in mhd turbulence .however , the effect is obviously a result of the poor resolution rather than being an attribute of the alignment mechanism itself .finally , we mention that for the three cases illustrated in figure [ fig : angle_nz ] , the field - perpendicular energy spectra ( not shown ) display no appreciable difference .since the reynolds number is moderate the inertial range in -space is quite short .however , when the spectra are compensated with and the former results in a better fit in all cases .this happens for two reasons .first , the stronger deviation from the alignment scaling occurs deeper in the dissipation region , that is , further from the inertial interval where the energy spectrum is measured .second , according to the relationship between the scaling of the alignment angle and the energy spectrum , a noticeable change in the scaling of the alignment angle leads to a relatively small change in the scaling of the field - perpendicular energy spectrum .[ tbp ] [ tbp ]there are two main conclusions that can be drawn from our results .the first is that the measurement of the alignment angle , which is composed of the ratio of two structure functions , appears to display a self - similar region of significant extent , even in the moderate reynolds number case which requires only a moderate resolution .we have checked that plotting the numerator and the denominator of the alignment ratio separately as functions of the increment displays only a very limited self - similar region , from which scaling laws can not be determined . a clear scaling behaviour is also not found when one plots the numerator versus the denominator as is the case in extended self - similarity .the result is interesting in its own right .it also has important practical value as it allows us to differentiate effectively between competing phenomenological theories through numerical simulations conducted in much less extreme parameter regimes than would otherwise be necessary. the result could be especially useful if it extends to ratios of structure functions for which an exact relation , such as the relations , is known for one part , as it would then allow the inference of the scaling of the other structure function .reaching a consensus on the theoretical description of magnetised fluctuations in the idealised incompressible mhd system represents the first step towards the ultimate goal of building a theoretical foundation for astrophysical turbulence .the second main result that can be drawn from our work is that the measurement of the alignment angle appears to probe deep into the dissipation region and hence it is necessary to adequately resolve the small scale physics .as the field - parallel resolution is decreased , numerical errors contaminate the physics of the dissipative range and affect measurement of the alignment angle . 
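For reference, the slopes mentioned in this discussion are what one obtains from a least-squares fit of log(ratio) against log(r) over a chosen range of separations; the helper below sketches such a fit, exercised on synthetic data that follows the predicted r^{1/4} scaling. The fitting range and the synthetic input are arbitrary.

```python
import numpy as np

def fit_slope(r, ratio, r_min, r_max):
    """Least-squares slope of log(ratio) versus log(r) over r_min <= r <= r_max."""
    sel = (r >= r_min) & (r <= r_max)
    slope, _intercept = np.polyfit(np.log(r[sel]), np.log(ratio[sel]), 1)
    return slope

r = np.arange(1.0, 101.0)
ratio = 0.3 * r**0.25                           # synthetic curve with the predicted scaling
print(fit_slope(r, ratio, r_min=2, r_max=50))   # prints 0.25 (up to rounding)
```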
as the decrease in resolution is taken to the extreme , the errors propagate to larger scales and may ultimately spoil an inertial range of limited extent .we propose that similar contamination effects should also arise through any mechanism that has detrimental effects on the dissipative physics .mechanisms could include pushing the reynolds number to the extreme or using hyperdiffusive effects .for example , our results may provide an explanation for the numerical findings by who noticed a flattening of the alignment angle in simulations of mhd turbulence with a reduced parallel resolution and strong hyperdiffusivity .we also point out that the result recalls the phenomenon of extended self - similarity in isotropic hydrodynamic turbulence , which refers to the extended self - similar region that is found when one plots one structure function versus another , rather than as a function of the increment .our finding is fundamentally different however , in the sense that the self - similar region only becomes apparent when one plots ratios of structure functions versus the increment , rather than structure functions versus other structure functions .our result appears to be due to a non - universal features in the amplitudes of the functions , rather than their arguments , cancelling when the ratios are plotted .whether such a property holds for other structure functions in mhd turbulence is an open and intriguing question .this is a subject for our future work .we would like to thank leonid malyshkin for many helpful discussions .this work was supported by the nsf center for magnetic self - organization in laboratory and astrophysical plasmas at the university of chicago and the university of wisconsin - madison , the us doe awards de - fg02 - 07er54932 , de - sc0003888 , de - sc0001794 , and the nsf grants phy-0903872 and ags-1003451 .this research used resources of the argonne leadership computing facility at argonne national laboratory , which is supported by the office of science of the u.s .department of energy under contract de - ac02 - 06ch11357 .benzi , r. , ciliberto , s. , tripiccione , r. , baudet , c. , massaioli , f. & succi , s. 1993 , , 48 , r29 beresnyak , a. 2011 , , 106 , 075001 beresnyak , a. & lazarian , a. 2006 , , 640 , l175 biskamp , d. 2003 , magnetohydrodynamic turbulence ( cambridge university press , cambridge ) boldyrev , s. 2006 , , 96 , 115002 boldyrev , s. , mason , j. & cattaneo , f. 2009 , , 699 , l39 .brandenburg , a. & nordlund a. 2011 , rep .phys . , 74, 046901 chandran , b. d. g. , li , b. , rogers , b. n. , quataert , e. , & germaschewski , k. 2010 , , 720 , 503 chandran , b. d. g. 2010 , , 720 , 548 goldreich , p. & sridhar , s. 1995 , , 438 , 763 goldstein , m. l. , roberts , d. a. , & matthaeus , w. h. 1995 , ann .astrophys . , 33 , 283 grappin , r. & mller , w .- c .2010 , , 82 , 026406 kadomtsev , b. b. & pogutse , o. p. 1974jetp , 38 , 283 kolmogorov , a.n .1941 , dokl .nauk sssr , 32 , 16 ( reprinted in proc .a , 434 , ( 1991 ) 15 ) kraichnan , r. h. 1965 , phys .fluids , 8 , 1385 kulsrud , r. m. 2004 , plasma physics for astrophysics ( princeton university press ) maron , j. , & goldreich , p. 2001 , , 554 , 1175 mason , j. , cattaneo , f. & boldyrev , s. 2006 , , 97 , 255002 mason , j. , cattaneo , f. & boldyrev , s. 2008 , , 77 , 036403 mckee , c. f. & ostriker , e. c. , 2007 , ann . rev .astrophys . , 45 , 565 mller , w .- c . & grappin , r. 2005 , , 95 , 114502 oughton , s. , dmitruk , p. & matthaeus , w. h. 
2004 , physics of plasmas , 11 , 2214 perez , j. c. & boldyrev , s. 2008 , astrophys .j. , 672 , l61 perez , j. c. & boldyrev , s. 2009 , , 102 , 025003 perez , j. c. & boldyrev , s. 2010 , phys .plasmas , 17 , 055903 perez , j.c . ,mason , j. , boldyrev , s. & cattaneo , f. 2011 , in preparation podesta , j. j. & borovsky , j. e. 2010 , phys .plasmas , 17 , 112905 podesta , j. j. , chandran , b. d. g. , bhattacharjee , a. , roberts , d. a. , & goldstein , m. l. 2009 , j. geophys .res . , 114 , a01107 politano , h. & pouquet , a. 1998 , geophys . res .lett , 25 , 273 schekochihin , a. a. & cowley , s. c. 2007 , in magnetohydrodynamics : historical evolution and trends , ( springer , dordrecht ) , 85 strauss , h. 1976 , phys .fluids , 19 , 134 | magnetised turbulence is ubiquitous in astrophysical systems , where it notoriously spans a broad range of spatial scales . phenomenological theories of mhd turbulence describe the self - similar dynamics of turbulent fluctuations in the inertial range of scales . numerical simulations serve to guide and test these theories . however , the computational power that is currently available restricts the simulations to reynolds numbers that are significantly smaller than those in astrophysical settings . in order to increase computational efficiency and , therefore , probe a larger range of scales , one often takes into account the fundamental anisotropy of field - guided mhd turbulence , with gradients being much slower in the field - parallel direction . the simulations are then optimised by employing the reduced mhd equations and relaxing the field - parallel numerical resolution . in this work we explore a different possibility . we propose that there exist certain quantities that are remarkably stable with respect to the reynolds number . as an illustration , we study the alignment angle between the magnetic and velocity fluctuations in mhd turbulence , measured as the ratio of two specially constructed structure functions . we find that the scaling of this ratio can be extended surprisingly well into the regime of relatively low reynolds number . however , the extended scaling becomes easily spoiled when the dissipation range in the simulations is under - resolved . thus , taking the numerical optimisation methods too far can lead to spurious numerical effects and erroneous representation of the physics of mhd turbulence , which in turn can affect our ability to correctly identify the physical mechanisms that are operating astrophysical systems . |
at lattice 2000 we discussed how to include fermionic loops contributions in numerical stochastic perturbation theory for lattice , an algorithm which we will refer to as unspt ( unquenched nspt ) .our main message here is that unquenching nspt results in not such a heavy computational overhead , provided only that an can be implemented in a fairly efficient way . is the main ingredient in constructing the fermion propagator by inverting the dirac kernel order by order . for a discussion of the foundations of unspt we refer the reader to . [cols="<,<,<,<,<",options="header " , ] +the need for an efficient is what forced us to wait for apemille : our implementation mimic , which is based on a plus transpositions , an operation which asks for local addressing on a parallel architecture .unspt has been implemented both in single and in double precision , the former being remarkably robust for applications like wilson loops . to estimate the computational overhead of unquenching nsptone can inspect table [ table:1 ] .we report execution times of a fixed amount of sweeps both for quenched and unquenched nspt . on both columnsthe growth of computational time is consistent with the the fact that every operation is performed order by order . on each row the growth due to unquenchingis roughly consistent with a factor .one then wants to understand the dependence on the volume , which is the critical one , the propagator being the inverse of a matrix : this is exactly the growth which has to be tamed by the .one should compare execution times at a given order on and lattice sizes .note that is simulated on an apemille board ( fpus ) , while on an apemille unit ( fpus ) . by taking this into account one easily understands that is doing its job : the simulation time goes as the volume also for unspt ( a result which is trivial for quenched nspt ) .notice that at this level one has only compared crude execution times : a careful inspection of autocorrelations is anyway not going to jeopardize the picture .as for the dependence on ( number of flavours ) , it is a parametric one : one plugs in various numbers and then proceed to fit the polynomial ( in ) which is fixed by the order of the computation .it is then reassuring to find the quick response to a change in which one can inspect in figure [ fig : nf_change ] ( which is the signal for second order of the plaquette at a given value of the hopping parameter ) .we now proceed to discuss some benchmark computations .a typical one is given by wilson loops . in figure [ fig:5ordplaq ]one can inspect the first five orders .] of the basic plaquette at a given value of hopping parameter , for which analytic results can be found in : going even higher in order would be trivial at this stage , but with no mass counterterm ( see later ) . ] .apart for being an easy benchmark , we are interested in wilson loops for two reasons .first of all we are completing the unquenched computation of the lattice heavy quark effective theory residual mass ( see for the quenched result ) . on top ofthat we also keep an eye on the issue of whether one can explain in term of renormalons the growth of the coefficients of the plaquette .there is a debate going on about that ( see ) , the other group involved having also started to make use of nspt . 
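returning briefly to the inversion of the dirac kernel mentioned above: the propagator is built order by order in the coupling, and the sketch below illustrates the generic recursion on toy dense matrices (we assume the standard expansion of the kernel in powers of the coupling; this is an illustration, not the apemille implementation). in the lattice code the zeroth-order inverse is applied with the fft, because the free operator is diagonal in momentum space.

```python
import numpy as np

def invert_order_by_order(m_orders):
    """given the expansion m = m0 + g*m1 + g^2*m2 + ..., return the
    coefficients s0, s1, ... of the inverse s = m^{-1} to the same order.
    here m0 is inverted directly; on the lattice this step is done with an
    fft, the free dirac operator being diagonal in momentum space."""
    s = [np.linalg.inv(m_orders[0])]
    for n in range(1, len(m_orders)):
        acc = sum(m_orders[k] @ s[n - k] for k in range(1, n + 1))
        s.append(-s[0] @ acc)
    return s

# toy check on random 6x6 matrices: resum both series and verify m * s ~ 1
rng = np.random.default_rng(1)
m_orders = [np.eye(6) + 0.1 * rng.standard_normal((6, 6))] + \
           [0.1 * rng.standard_normal((6, 6)) for _ in range(4)]
g = 0.05
s_orders = invert_order_by_order(m_orders)
m_sum = sum(g**k * mk for k, mk in enumerate(m_orders))
s_sum = sum(g**k * sk for k, sk in enumerate(s_orders))
print(np.max(np.abs(m_sum @ s_sum - np.eye(6))))   # small, of order g**5
```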
in the renormalon frameworkthe effect of can be easily inferred from the -function , eventually resulting in turning the series to oscillating signs .in figure [ fig : mcg2 ] we show the signal for one loop order of the critical mass for wilson fermions ( two loop results are available from ) .the computation is performed in the way which is the most standard in perturbation theory , _i.e. _ by inspecting the pole in the propagator at zero momentum .this is already a tough computation .it is a zero mode , an mass - cutoff is needed and the volume extrapolation is not trivial . on top of that oneshould keep in mind that also gauge fixing is requested . the coefficients which are known analytically can be reproduced .still one would like to change strategy in order to go to higher orders ( which is a prerequisite of all other high order computations ) .the reason is clear : we have actually been measuring the propagator , while the physical information is actually coded in ( one needs to invert the series and huge cancellations are on their way ) .notice anyway that the fact that the critical mass is already known to two - loop makes many interesting computations already feasible .benchmark computations in unspt look promising , since the computational overhead of including fermionic loops contributions is not so huge . this is to be contrasted with the heavy computational effort requested for non perturbative unquenched lattice qcd .this in turn suggests the strategy of going back to perturbation theory for the ( unquenched ) computation of quantities like improvement coefficients and renormalisation constants .the critical mass being already known to two loops , many of these computations are already feasible at order .+ we have only discussed the implementation of the algorithm on the apemille architecture .we can also rely on a implementation for pc s ( clusters ) which is now at the final stage of development .9 f. di renzo , l. scorzato , .t. lippert , k. schilling , f. toschi , s. trentmann , r. tripiccione , .b. alles , a. feo , h. panagopoulos , .f. di renzo , l. scorzato , .see f. di renzo , l. scorzato , and r. horsley , p.e.l .rakow , g. schierholz , .e. follana , h. panagopoulos , ; s. caracciolo , a. pelissetto , a. rago , . | the inclusion of fermionic loops contribution in numerical stochastic perturbation theory ( nspt ) has a nice feature : it does not cost so much ( provided only that an fft can be implemented in a fairly efficient way ) . focusing on lattice , we report on the performance of the current implementation of the algorithm and the status of first computations undertaken . |
[ cols="<,^ , < " , ]there have long been concerns on the connection between skin friction and wall heat - transfer rate ( or simply , heat transfer ) , as macroscopically the two quantities are related respectively to the normal gradient of velocity and temperature , and microscopically they are momentum and energy transports arising from molecular moves and collisions .the most famous result of this subject is the reynolds analogy in the flat plate boundary layer flow problem , where the skin friction and heat transfer were found proportional along the surface .but such simple relationship does not exist for curved surfaces .for example , in flows past circular cylinders or spheres the heat transfer reaches its maximum at the stagnation point and diminishes monotonously downstream while the skin friction is zero at the stagnation point and varies non - monotonically downstream . with regard to the relation between skin friction and heat transfer ,there has not yet been a theory suitable for the curved surfaces in either continuum or rarefied gas flows .separately speaking , the heat transfer in hypersonic flows has received much more attentions than the skin friction due to early engineering requirements .lees first developed theories of the heat transfer to blunt - nosed bodies in hypersonic flows based on laminar boundary layer equations .the method made rational predictions in high reynolds number flows .later fay and riddell gave more detailed discussions on the stagnation point heat transfer and developed prediction formulas with improved accuracy .considering the downstream region of the stagnation point , kemp et al . made an improvement to extend the theory to more general conditions . in practice ,empirical formulas of the heat transfer distribution were also constructed for typical nose shapes such as circular cylinders and spheres .besides above boundary layer analyses in continuum flows , theoretical studies of heat transfer in rarefied gas flows have been carried out by different approaches .cheng accepted the thin viscous shock layer equations and obtained analytical expressions of the stagnation point heat transfer from the boundary layer flow to the free molecular flow .wang et al . presented a theoretical modelling of the non - fourier aeroheating performance at the stagnation point based on the burnett equations .a control parameter was derived as the criterion of the local rarefied gas effects , and formulas based on the parameter were found to correlate the heat transfer both at the stagnation point and in the downstream region .unlike the heat transfer , the skin friction is often neglected in continuum flows over large blunt bodies , for the friction drag is much less than the pressure drag in those conditions .most of the existing studies concerns only about turbulence flows rather than laminar flows .however for wedges or cones with small angles in rarefied gas flows , the skin friction contributes a significant part , as much as , in the total drag .unfortunately , there is still no reliable theory we could use to predict the skin friction over curved surfaces. it will be meaningful if we find a general reynolds analogy by using which we could estimate the skin friction based on available heat transfer prediction formulas , or vice versa . 
in the research of lees and his followers on the aeroheating performance of blunt bodies, the momentum equations were solved coupled with the energy equation of boundary layer flows, which offers a natural starting point for analyzing the relation between the skin friction and the heat transfer. in the present work, the ratio of skin friction to heat transfer along curved surfaces is first discussed based on the self-similar solution of the boundary layer equations. an expression with a simple form is obtained for circular cylinders as a typical example. subsequently, an extended analogy is deduced in the near continuum flow regime by considering the non-linear shear and heat transfer in the burnett approximation, and it is found that the rarefied gas effects on the analogy are characterized by the rarefied flow criterion introduced in our previous study. as a preliminary study, the molecular vibration and chemical reaction effects are not considered in the theoretical analysis. the direct simulation monte carlo (dsmc) method is also used to simulate the present flows in order to validate the theoretical results. fig. [ fig_sketch ] is a sketch illustrating the hypersonic flow over a blunt-nosed cylindrical body or body of revolution; the local coordinate is set on the wall. ( figure [ fig_sketch ] : sketch of the flow and the local wall coordinates. ) in order to seek the self-similar solutions of the boundary layer equations governing the flow around blunt-nosed bodies, lees et al. introduced the coordinate transformation: and the normalizations of velocity and temperature: where for planar bodies and for bodies of revolution. with the transformation, the boundary layer equations were simplified and the self-similar solutions have been obtained under certain conditions. defining the coefficients of skin friction and heat transfer as and , respectively, and with the approximation in the hypersonic limit ( ), we have: from eqs. ( [ eq_transform ] ) and ( [ eq_fg ] ). from the compressible bernoulli equation, the streamwise velocity along the edge of the boundary layer depends mainly on the wall pressure: on the basis of the symmetry and smoothness of the pressure distribution near the stagnation point, eq. ( [ eq_stream_velo ] ) can be linearized to: with and , where the bracketed quantity is evaluated at the stagnation point. in fact, it was found from korobkin's experimental data that the linear variation of with is valid downstream of the stagnation point up to . therefore, in the following analyses, eq. ( [ eq_veltotheta ] ) is accepted not only near the stagnation point, but also in the downstream region. cohen and reshotko numerically solved the self-similar boundary layer equations and calculated . the value depends on and varies with along the surface. actually, according to the calculations of kemp et al., can be taken as a constant along an isothermal wall with a slowly changing curvature radius. moreover, from cohen and reshotko's solutions, the slight variation of is of the same order as that of , and thus it is reasonable to assume a constant as long as the self-similar assumption is satisfied.
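a rough numerical check of this linearization, not taken from the paper: combining a modified-newtonian wall pressure on a circular cylinder with the isentropic (compressible bernoulli) relation along the boundary layer edge, the edge velocity divided by the slope angle stays nearly constant well downstream of the stagnation point. the gas model (perfect gas, gamma = 1.4) and the pressure law are assumptions of this sketch.

```python
import numpy as np

gamma = 1.4
theta = np.radians(np.arange(5, 75, 5))

# modified-newtonian wall pressure, normalised by the stagnation value
p_ratio = np.cos(theta) ** 2

# isentropic edge velocity from the compressible bernoulli equation,
# normalised by the stagnation speed of sound a0:
# u_e/a0 = sqrt( 2/(gamma-1) * (1 - (p/p0)^((gamma-1)/gamma)) )
u_e = np.sqrt(2.0 / (gamma - 1.0) * (1.0 - p_ratio ** ((gamma - 1.0) / gamma)))

# if u_e is linear in theta, u_e/theta is constant
for t, u in zip(np.degrees(theta), u_e / theta):
    print(f"theta = {t:4.0f} deg   u_e/(a0*theta) = {u:.3f}")
```

with these assumptions the ratio changes by only a few percent out to roughly 60-70 degrees, consistent with the extended range of validity of the linear variation quoted above.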
as a result, the expression of in eq. ( [ eq_ratio ] ) can be written as: with the coefficient being independent of the location. the equation indicates that for a variety of nose shapes in hypersonic flows, is proportional to along the windward surface as long as is a constant. thus, we have a more general form of the analogy relation between skin friction and heat transfer which is not restricted to the flat plate flow, compared with the classical reynolds analogy. in order to further calculate the coefficient, the shape-dependent and should be specified. as a typical example, the corresponding values for circular cylinders are presented in the following. first, the pressure distribution can be calculated by the newtonian-busemann theory, and at the stagnation point, we have second, since is regarded as a constant along the surface, we will calculate the value at the stagnation point where for a planar body. then from cohen and reshotko's numerical calculations, was obtained within as plotted in fig. ( [ fig_corre_fg ] ). obviously the points with fall into a straight line, and a linear correlation fits well with the points. ( figure [ fig_corre_fg ] : linear correlation of cohen and reshotko's solutions. ) with eqs. ( [ eq_p_grad ] ) and ( [ eq_corre_fg ] ) applied to eq. ( [ eq_linear_the ] ), we have the explicit relation in eq. ( [ eq_ratio2 ] ), which gives a convenient formula to predict around a circular cylinder in hypersonic flows. comparisons are given in the following with dsmc numerical simulations of nitrogen gas flows past blunt-nosed bodies at hypersonic speeds under , , . the simulations are carried out by using the source code as described in . the molecular vibration and chemical reaction effects are excluded to correspond with the derivations in this paper. ( figure [ fig_mach ] : for circular cylinders under different mach numbers with ; lines: eq. ( [ eq_ratio2 ] ), symbols: dsmc simulations, mach numbers 5, 8, 10, 15, 20, 25. ) ( figure [ fig_temp ] : for circular cylinders at several wall temperature ratios; lines: eq. ( [ eq_ratio2 ] ), symbols: dsmc simulations. ) the numerically computed distributions of along the windward surfaces of circular cylinders are presented in figs. ( [ fig_mach ] ) and ( [ fig_temp ] ). in fig. ( [ fig_mach ] ), the numerical results from different all fit well with eq. ( [ eq_ratio2 ] ), showing a mach number independence, except that the ratio with , the lower limit of hypersonic flows, is slightly lower. meanwhile, the cases with in fig. ( [ fig_temp ] ), as well as a reference result from santos with , indicate that eq. ( [ eq_ratio2 ] ) precisely describes the variation of with . other flow parameters in the simulations are , and are accepted in eq. ( [ eq_ratio2 ] ) for comparisons.
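as a post-processing sketch (the surface data below are made up and the variable names are illustrative; this is not the source code used for the comparisons above): given wall distributions of the skin friction and heat transfer coefficients from a dsmc run over a circular cylinder, the linear analogy can be checked by fitting the ratio against the local slope angle with a line through the origin.

```python
import numpy as np

# made-up stand-in for dsmc surface output on a circular cylinder:
# local slope angle theta (rad), heat-transfer and skin-friction coefficients
theta = np.radians(np.linspace(2.0, 80.0, 40))
ch = np.exp(-0.5 * (theta / 1.2) ** 2)      # smooth, stagnation-peaked heat transfer
cf = 0.9 * theta * ch                        # imposed linear analogy with slope 0.9

ratio = cf / ch

# least-squares fit of ratio = slope * theta (line through the origin)
slope = np.sum(ratio * theta) / np.sum(theta ** 2)
max_dev = np.max(np.abs(ratio - slope * theta))
print(f"fitted slope = {slope:.3f}, max deviation from linearity = {max_dev:.2e}")
```

with real dsmc output the maximum deviation quantifies how well the linear analogy holds along the windward surface.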
the reynolds analogy for the flat plate flow shows that distributes uniformly along the surface, as mentioned. along the zero-thickness leading edge, the ratio can be expressed as: the equation was derived from the free molecular theory, and with , , its range of application can be extended from the free molecular flow to the boundary layer flow. in practice, a finite thickness and a blunt nose always exist in the leading edge of a flat plate, as illustrated in fig. ( [ fig_rey_ana ] ). assuming a cylindrically blunted nose in front of the plate, the ratio at the flat segment can be obtained by taking in eq. ( [ eq_ratio2 ] ) as: values of in eq. ( [ eq_ratio_pi ] ) and for the zero-thickness flat plate are plotted in fig. ( [ fig_rey_ana ] ) against . the difference between them is less than with . this result indicates that the present linear analogy in flows around curved bodies is in fact consistent with the classical reynolds analogy for flat plate flows. ( figure [ fig_rey_ana ] : flat plate, eq. ( [ eq_ratio_plate ] ), and the general analogy, eq. ( [ eq_ratio_pi ] ), with an inset showing the blunted leading edge and its flat segment. ) although explicit expressions are lacking at present due to the mathematical complexity, the existence of the self-similar boundary layer flow can also be expected near the wall surfaces of many other nose shapes as long as the variation of the radius of curvature is slow. flows over two-dimensional wedges with a variety of shapes under are also simulated to extend our discussion, as shown in fig. ( [ fig_shapes ] ). among these results, the shape of the power-law leading edge is expressed as , and in the present simulations . in the simulations, . distributions of along the surfaces of power-law shapes with are both linear in as expected. the lines diverge slightly from each other but still fit with eq. ( [ eq_ratio2 ] ) in general. the discrepancies may be caused mainly by the variation of the wall pressure distributions. the present theory is not suitable for the rectangular cylinder with a round shoulder, since the boundary layer around the body is highly non-self-similar. as presented in fig. ( [ fig_shapes ] ), increases in the vertical windward segment while remains zero. a similar variation can be observed near the stagnation point of the power-law shape with , where the surface is also nearly vertical and increases sharply. however, the linear variation of with is still observed in the round shoulder segment of the rectangular cylinder and in the downstream region of the power-law shape. this phenomenon indicates a possibility that the present linear analogy relation could be extended to wider situations. ( figure [ fig_shapes ] : along surfaces of different leading-edge shapes: wedge, flat-nosed, and three power-law shapes, compared with eq. ( [ eq_ratio2 ] ). )
( figure [ fig_real ] : for a hyperboloid, by blottner, and for a sphere, with dsmc, by holman and boyd. ) furthermore, is also found proportional to when considering real gas effects. blottner calculated the boundary layer equations for the equilibrium air flow over a hyperboloid under . holman and boyd computed the dissociating air flow over a sphere under and ( , where is the mean free path of molecules in the free stream ) with both the dsmc method and the navier-stokes (n-s) equations. results of from both the hyperboloid and sphere cases are shown in fig. ( [ fig_real ] ). although the slopes are much different from that of the above-mentioned calorically perfect gas condition, the linear feature can still be clearly observed. despite the assumptions and simplifications, the linear distribution of on the windward side of curved surfaces is revealed by the theoretical analyses and confirmed by dsmc simulations with different types of shapes. for the circular cylinder cases, with the explicit and available, good agreement is observed between the present analytical prediction and the numerical results. the analyses in the above section are based on the boundary layer assumption, and thus the theory is valid only for continuum flows. when the flow deviates from the continuum regime, the nonequilibrium of molecular collisions causes non-linear shear and heat transfer, and the linear newtonian shear and fourier heat transfer in the n-s equations fail. if the deviation is small, the constitutive relation can be corrected by bringing in the second order shear and heat transfer of the burnett equations, as has been suggested by wang et al. in order to obtain a more general analogy relation covering rarefied flows, the second order shear stress and heat transfer will first be studied in the near continuum regime, and then, based on numerical validation and calibration, the results will be extended to the more rarefied flow regime to calculate . in fact, an early exploration has been carried out in our previous work on the flat plate leading edge flow problem. instead of directly solving the burnett equations, the second order effects can be evaluated from a perturbation point of view, i.e. by using the non-linear constitutive relations of the burnett equations to analyze the flow field features predicted by the first order approximation, for instance the boundary layer theory or the numerical solution of the n-s equations. the original form of the burnett equations can be found in chapman and cowling's derivations. for planar bodies, we take the assumptions , , and a constant wall temperature . besides, in the near continuum regime we have and , and then the burnett shear and heat transfer near a planar curved surface become: as has been testified in flows past the leading edge of flat plates, the above simplifications, although not strict, still reflect the essential features of the non-linear terms. the flow field predicted by the first order continuum theory and the gradients near the wall need to be given before using eq. ( [ eq_burnett ] ) for further discussion. we assume the normalized pressure and heat transfer distribution functions as where and are the pressure and the heat flux at the stagnation point, respectively. both and are functions of and can be obtained with different approaches such as analytical theories or data correlations. taking in the hypersonic limit, eq.
( [ eq_distri ] ) can be transformed to and with the linear analogy equation ( [ eq_linear_the ] ) , the normal gradient of becomes with eqs .( [ eq_p_part ] ) and ( [ eq_part_u ] ) submitted , eq .( [ eq_burnett ] ) becomes : \left(\frac{\mu}{\rho ru_{\infty}}k\frac{\partial t}{\partial y}\right)_w \end{split}\ ] ] where , . in eq .( [ eq_burnett_2 ] ) , expressions of the second order shear stress and heat transfer contain and explicitly , and thus , more meaningfully , we could get the relative magnitudes of the second order effects \frac{\mu_r}{\rho_rr u_{\infty } } \end{split}\ ] ] the subscript represents the quantities under the reference temperature , as in hypersonic flows the reference temperature method is usually employed to present characteristics of boundary layers .here , and the viscosity - temperature power law is applied. then eq .( [ eq_burnett_rate ] ) becomes where , , and are : \left(\frac{t_w+t_0}{4t_0 } \right)^{\omega+1 } \end{split}\ ] ] with and being added to calculate the skin friction and heat transfer , we can introduce a modification to the linear analogy eq .( [ eq_linear_the ] ) : values of and denote the influence of the body shape and vary with , and generally both of them are in the order of unit . for a typical case ofthe hypersonic nitrogen gas flows past a circular cylinder under , with the modified newtonian pressure and the heat transfer fitting formula from beckwith and gallagher accepted in eq .( [ eq_gamma ] ) , an approximation can be taken in the range of as from the taylor series expansion . then in the near continuum regime with , eq .( [ eq_ratio_mod ] ) can be simplified to \theta\ ] ] the first order correction is similar with wang et al.s result in the stagnation point heat transfer problem .it turns out that , is a control parameter of the rarefied gas effects not only at the stagnation point but also in the downstream region . although the correction factor in eq .( [ eq_ratio_wr ] ) is explicit and clear , it will lose its credibility when the rarefaction degree of the flow is sufficiently high , and due to the complexity of the transition flow , the data fitting and calibrating seem still unavoidable to get a practical general analogy .in fact , based on the rarefied flow criterion , a bridge function can be built between the continuum limit eq .( [ eq_linear_the ] ) and the free molecular limit . 
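purely to illustrate the idea of such a bridge (the published form is eq. ( [ eq_bridge_cylinder ] ), given next; the weighting used below is invented for the example and is not the paper's function): one can blend the continuum-limit linear analogy with a free molecular limit through a weight controlled by the rarefaction criterion.

```python
import numpy as np

def bridged_ratio(theta, wr, slope_cont=1.0, ratio_fm=2.0):
    """illustrative bridge between the continuum limit (ratio ~ slope_cont *
    theta) and a free molecular limit (here taken, for the example only, as
    ratio_fm * sin(theta)); the weight w(wr) is an invented placeholder,
    not eq. ( [ eq_bridge_cylinder ] )."""
    w = 1.0 / (1.0 + wr)                 # -> 1 in the continuum limit (wr -> 0)
    return w * slope_cont * theta + (1.0 - w) * ratio_fm * np.sin(theta)

theta = np.radians(np.linspace(0.0, 80.0, 5))
for wr in (0.01, 0.1, 1.0, 10.0):
    print(wr, np.round(bridged_ratio(theta, wr), 3))
```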
for circular cylinders, the bridge function is given by eq. ( [ eq_bridge_cylinder ] ). this function, for different , is plotted in fig. ( [ fig_bridge_circle ] ) and compared with the dsmc simulations of nitrogen gas flows past circular cylinders. it can be seen that the variation of with is no longer linear in the rarefied gas flow regime. ( figure [ fig_bridge_circle ] : versus for circular cylinders at several values of the rarefaction parameter; symbols: dsmc simulations, dashed lines: equation ( [ eq_bridge_cylinder ] ), with the free molecular limit indicated. ) the relation between skin friction and heat transfer for blunt-nosed bodies in hypersonic flows, named the general reynolds analogy, has been investigated in this paper by using theoretical modelling and the dsmc method. first, based on the boundary layer flow properties, the ratio of the skin friction to the heat transfer for a blunt-nosed body was found to be proportional to the local surface slope angle. as a typical demonstration, an explicit expression of the ratio was derived for circular cylinders. numerical calculations also indicated that this characteristic exists for other blunt-nosed shapes and even in chemically reacting flows. second, the analogy in rarefied gas flows was analyzed. in the rarefied flow regime, the deviation from the linear distribution of the ratio was shown to be controlled by the rarefied flow criterion . therefore, a bridge function was constructed based on to describe the analogy in the transition flow regime. this study, combined with our former investigation of flat plate leading edge flows, clarifies the general reynolds analogy in the whole flow regime for both flat plates and blunt-nosed bodies in hypersonic flows. the present general reynolds analogy has potential usefulness in engineering practice. from the analogy relation, the skin friction is related to the heat flux along the surface, or further to the heat flux at the stagnation point and its normalized distribution downstream. as a result, the viscous drag integrated from the skin friction is proportional to the stagnation point heat flux, which suggests that if one of them is known, the other can be obtained immediately. this work was supported by the national natural science foundation of china ( grant no. 11202224 ). the paper benefited greatly from our discussions with associate professor lin bao. lees, l., "laminar heat transfer over blunt-nosed bodies at hypersonic flight speeds," _ journal of jet propulsion _, vol. 26, no. 4, 1956, pp. 259-269. fay, j. a., and riddell, f. r., "theory of stagnation point heat transfer in dissociated air," _ journal of the aerospace sciences _, vol. 25, no. 2, 1958. kemp, n. h., rose, p. h., and detra, r. w., "laminar heat transfer around blunt bodies in dissociated air," _ journal of the aerospace sciences _, vol. 26, no. 7, 1959. murzinov, i. n., "laminar boundary layer on a sphere in hypersonic flow of equilibrium dissociating air," _ fluid dynamics _, vol. 1, no. 2, 1966, pp. 131-133. beckwith, i. e., and gallagher, j. j.
, `` local heat transfer and recovery temperatures on a yawed cylinder at a mach number of 4.15 and high reynolds numbers , '' nasa tr - r104 , may 1962 .cheng , h. k. , `` hypersonic shock - layer theory of the stagnation region at low reynolds number , '' _ proceedings of the 1961 heat transfer and fluid mechanics institute _ , edited by binder , r. c. , and epstein , m. , and mannes , r. l. and yang , h. t. , stanford university press , chicago , 1961 , pp .wang , z. h. , bao , l. , and tong , b. g.,``variation character of stagnation point heat flux for hypersonic pointed bodies from continuum to rarefied flow states and its bridge function study , '' _ science in china series g : physics , mechanics and astronomy _52 , no . 12 , 2009 , pp . 2007 - 2015wang , z. h. , bao , l. , and tong , b. g.,``rarefaction criterion and non- fourier heat transfer in hypersonic rarefied flows , '' _ physics of fluids _ , vol .22 , no . 12 , 2010 , paper 126103 .santos , w. f. , and lewis , m. j. , `` power - law shaped leading edges in rarefied hypersonic flow , '' _ journal of spacecraft and rockets _39 , no . 6 , 2002 , pp . 917925. bird , g. a. , _ molecular gas dynamics and the direct simulation of gas flows _ , oxford univ . press , new york , 1994 .anderson , j. d. , _ hypersonic and high temperature gas dynamics _, mcgraw hill , new york , 2006 , chaps . 2 , 6 .lees , l. , `` hypersonic flow , '' _ journal of spacecraft and rockets _ ,40 , no . 5 , 1955 , pp .korobkin , i ., `` laminar heat transfer characteristics of a hemisphere for the mach number range 1.9 to 4.9 , '' u. s. naval ordnance laboratory , navord report no . 3841 , october 10 , 1954 .cohen , c. b. , and reshotko , e. , `` similar solutions for the compressible laminar boundary layer with heat transfer and pressure gradient , '' naca report 1293 , 1956 .blottner , f. g. , `` finite difference methods of solutions of the boundary - layer equations , '' _ aiaa journal _ , vol . 8 , no . 2 , 1970 , pp . 193205 .holman , t. d. and boyd , i. d. , `` effects of continuum breakdown on hypersonic aerothermodynamics for reacting flow , '' _ physics of fluids _ , vol .23 , no . 2 , 2011 , paper 027101 .chapman , s. , and cowling , t. g. , _ the mathematical theory of non - uniform gases _ , 3rd ed ., cambridge univ . press , cambridge ,england , u.k . , 1970 ,. 280296 .chen , x. x. , wang , z. h. , and yu , y. l. , `` nonlinear shear and heat transfer in hypersonic rarefied flows past flat plates , '' _ aiaa journal _ , ( 2013 ) , accessed april 4 , 2014 .doi : 10.2514/1.j053168 matting , f. w. , `` general solution of the laminar compressible boundary layer in the stagnation region of blunt bodies in axisymmetric flow , '' nasa technical note , d-2234 , 1964 . | in this paper , the relation between skin friction and heat transfer along windward sides of blunt - nosed bodies in hypersonic flows is investigated . the self - similar boundary layer analysis is accepted to figure out the distribution of the ratio of skin friction to heat transfer coefficients along the wall . it is theoretically obtained that the ratio depends linearly on the local slope angle of the wall surface , and an explicit analogy expression is presented for circular cylinders , although the linear distribution is also found for other nose shapes and even in gas flows with chemical reactions . 
furthermore, based on the theoretical modelling of the second order shear and heat transfer terms in the burnett equations, a modified analogy is derived in the near continuum regime by considering the rarefied gas effects, and a bridge function is also constructed to describe the nonlinear analogy in the transition flow regime. finally, the direct simulation monte carlo method is used to validate the theoretical results. the general analogy, which goes beyond the classical reynolds analogy, is applicable to both flat plates and blunt-nosed bodies, in either continuum or rarefied hypersonic flows.
numerically solving the full 3d nonlinear einstein equations is , for several reasons , a daunting task .still , numerical relativity remains the best method for studying astrophysically interesting regions of the solution space of the einstein equations in sufficient detail and accuracy in order to be used to interpret measurements made by the up and coming gravitational wave detectors .even though numerical relativity is almost 35 years old , some of the same problems faced by researchers three decades ago are present today . aside from the computational complexity of implementing a numerical solver for the nonlinear einstein equations , there exist several unsolved problems , including the well - posedness of certain initial value formulations of the einstein equations and the proper choice of gauge .not the least of these problems is numerical stability .a common thread in numerical relativity research over the past three decades is the observation of high frequency ( nyquist frequency ) noise growing and dominating the numerical solution .traditionally , numerical studies have been performed with the initial value formulation of the einstein equations known as the adm 3 + 1 formulation , in which the 3-metric and extrinsic curvature are the dynamically evolved variables . lately , a formulation based on variables in which the conformal factor of the 3-metric and the trace of the extrinsic curvature are factored out and evolved separately has been studied .this conformal - traceless ( ct ) formulation was first introduced by nakamura and shibata and later slightly modified by baumgarte and shapiro .the stability properties of the ct formulation were shown in to be better than those of the adm formulation for linear waves .the improvement in numerical stability in the ct formulation versus the adm formulation was demonstrated in strong field dynamical cases in .a step toward understanding the improved stability properties of the ct formulation was taken in where it was shown by analytically linearizing the adm and ct equations about flat space that the ct system effectively decouples the gauge modes and constraint violating modes .it was conjectured that giving the constraint violating modes nonzero propagation speed results in a stable evolution . here, we take another step towards understanding the improved stability properties of the ct system by performing a von neumann stability analysis on discretizations of both the adm and ct systems .we are led to the von neumann stability analysis by lax s equivalence theorem , which states that given a well posed initial value problem and a discretization that is consistent with that initial value problem ( i.e. , the finite difference equations are faithful to the differential equations ) , then stability is equivalent to convergence . 
here ,the words `` stability '' and `` convergence '' are taken to mean very specific things .convergence is taken to mean pointwise convergence of solutions of the finite difference equations to solutions of the differential equations .this is the _ pice de rsistance _ of numerical relativity .after all , what we are interested in are solutions to the differential equations .stability , on the other hand , has a rather technical definition involving the uniform boundedness of the discrete fourier transform of the finite difference update operator ( see for details ) .in essence , stability is the statement that there should be a limit to the extent to which any component of an initial discrete function can be amplified during the numerical evolution procedure ( note that stability is a statement concerning the finite difference equations , _ not _ the differential equations ) .fortunately , the technical definition of stability can be shown to be equivalent to the von neumann stability condition , which will be described in detail in the next section .while one can not apply lax s equivalence theorem directly in numerical relativity ( the initial value problem well - posedness assumption is not valid for the einstein field equations in that the evolution operator is not , in general , uniformly bounded ) , numerical relativists often use it as a `` road map '' ; clearly consistency and stability are important parts of any discretization of the einstein equations ( curiously , convergence is usually implicitly assumed in most numerical studies ) .code tests , if done at all , usually center around verifying the consistency of the finite difference equations to the differential equations ( as an example of the extents to which some numerical relativists will go to check the consistency of the finite difference equations to the differential equations , see , e.g. , ) . stability , on the other hand , is usually assessed postmortem .if the code crashes immediately after a sharp rise in nyquist frequency noise and/or if the code crashes sooner in coordinate time at higher resolutions , the code is deemed unstable .we suggest that the stability of the code can be assessed before ( and perhaps even more importantly , while ) numerical evolutions take place .as will be seen in the next section , the stability properties of any given nonlinear finite difference update operator depend not only on the courant factor , but also on the values of the discrete evolution variables themselves .therefore , during numerical evolutions of nonlinear problems , as the evolved variables change from discrete timestep to timestep , the stability properties of the finite difference operator change along with them ! ideally , one would want to verify that the finite difference update operator remains stable for _ each _ point in the computational domain _ at each timestep_. 
while the computational expense of this verification would be prohibitive , verification at a reasonably sampled subset of discrete points could be feasible .the remainder of the paper is outlined as follows .section [ sec : vonneumann ] will describe the von neumann stability analysis for a discretization of a general set of nonlinear partial differential equations in one spatial dimension .included will be results from a von neumann stability analysis of the linear wave equation discretized with an iterative crank - nicholson scheme .section [ sec : grflat ] will present the adm and ct formulations of the einstein equations restricted to diagonal metrics and further restricted to dependence on only one spatial variable .the von neumann stability analysis is performed on iterative crank - nicholson discretizations of both formulations with flat initial data .section [ sec : nonlin_anal ] will repeat the von neumann stability analysis from section [ sec : grflat ] with a nonlinear wave solution for initial data .let a set of partial differential equations be given as where is a vector whose components consist of the dependent variables that are functions of the independent variables and ( while we ignore boundary conditions in this paper , as we consider only interior points of equations with finite propagation speeds , the von neumann analysis presented here is easily extended to include boundary treatments ) . now , consider a discretization of the independent variables and : along with a discretization of the dependent variables : furthermore , consider a consistent discretization of the set of differential equations in the form of eq .[ eq : diffeq ] that can be written in the form as we will only be analyzing the iterative crank - nicholson discretization scheme , we assume a two - step method as written in eq .[ eq : finitediffeq ] .however , a von neumann analysis does not depend on this , and could easily be performed for three - step methods such as the leapfrog scheme .now , assume some initial data for the discrete variables is given at time .the amplification matrix at is given by the condition for numerical stability is that the spectral radius of the amplification matrix be 1 or less for each wavenumber .that is , each eigenvalue of the amplification matrix should have a modulus of 1 or less for each discrete mode first , notice that for linear finite difference equations ( eq .[ eq : finitediffeq ] ) , the amplification matrix does _ not _ depend on either the initial data nor on the spatial index .it only depends on the discretization parameters and as well as the mode wavenumber .that is , for the linear case , once one specifies the discretization parameters and , one only needs to verify that the eigenvalues of the amplification matrix have a modulus of less than or equal to 1 for all wavenumbers to insure the numerical stability of a numerical update operator , regardless of initial data .contrast this with the nonlinear case where the amplification matrix is not only a function of discretization parameters , and the mode wavenumber , but also of initial data . 
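the explicit expression in eq. [ eq : ampmatrix ] did not survive extraction; for reference, and up to conventions that may differ from the paper's, the standard frozen-coefficient form for a two-level scheme with update q_j^{n+1} = l_j(q^n) reads, with the sum running over the stencil offsets m,

```latex
G_{\xi} \;=\; \sum_{m} \frac{\partial \mathbf{L}_{j}}{\partial \mathbf{q}^{\,n}_{j+m}}\, e^{\,i m \xi},
\qquad
\max_{k}\bigl|\lambda_{k}(G_{\xi})\bigr| \;\le\; 1 \quad \text{for all } \xi \in [-\pi,\pi].
```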
in this case , not only must one verify that the amplification matrix have a spectral radius of 1 or less for each spatial index ( assuming the initial data depends on , which it will in general ) , but one must carry out this verification at _ every _ time step , as the data will , in general , change with increasing .so , in the nonlinear case , the amplification matrix will depend on both the spatial index and the temporal index . in principle, one must verify that have a spectral radius of 1 or less for each mode wavenumber , for each spatial index and for each time step , in order to be confident that the solutions of the finite difference equations are converging to solutions of the differential equations ( which is , after all , what we are ultimately interested in ) . in ,a von neumann analysis of the advection equation was presented for the iterative crank - nicholson scheme . here ,as a prelude to computing a von neumann analysis for discretizations of the equations of general relativity , we present a von neumann analysis for a 2-iteration iterative crank - nicholson discretization of the wave equation in 1 dimension we write the wave equation in first - order form as in eq . [ eq : diffeq ] .defining , , and ,\ ] ] the wave equation in one spatial dimension becomes .\ ] ] the iterative crank - nicholson discretization procedure begins by taking a ftcs ( forward time centered space ) step which is used to define the intermediate variables : .\ ] ] this intermediate state variable is averaged with the original state variable which , in turn , is used to calculate , the state vector produced from the first iteration of the iterative crank - nicholson scheme : .\ ] ] the averaged state is calculated and used to compute the final state variable .\ ] ] using this second iteration as the final iteration , we can write eq .[ eq : finitediffeq ] explicitly as the three equations where we have denoted as the courant factor .we compute the amplification matrix eq .[ eq : ampmatrix ] , which is independent of the spatial index and the time index : \ ] ] where the components are where .the modulus of the eigenvalues of are easily calculated and found to be ( ignoring the eigenvalue that is identically 1 ) by inspection , the von neumann condition for stability , i.e. for all , is .however , it is instructive to look at the dependence of the stability criterion .figure [ fig : lineareigenvalue ] shows a plot of eq .[ eq : lineareigenvalue ] as a function of for various values of the courant factor .as can be seen , the first eigenmode that goes unstable ( i.e. , has an eigenvalue whose modulus is greater than 1 ) as is increased is the mode .this corresponds to modes of wavelength , i.e. nyquist frequency modes , which are precisely the modes that usually crop up and kill numerical relativity simulations .here , we present the analytic equations for the adm and ct formulations that we will use to study the stability properties of numerical methods in general relativity . we will discretize the general relativity evolution equations using the same discretization used in the previous section for discretizing the scalar wave equation , namely , a 2-iteration iterative crank - nicholson scheme . 
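before moving on, the scalar wave calculation above is easy to reproduce numerically. a minimal sketch follows; the symmetric first-order reduction, the centred spatial differencing and the definition of the courant factor sigma = c dt/dx used here are our own choices (the paper's definitions did not survive extraction), so the stability threshold and the first unstable wavenumber it produces need not coincide with eq. [ eq : lineareigenvalue ] or with the nyquist-mode behaviour described above.

```python
import numpy as np

A_hat = np.array([[0.0, 1.0], [1.0, 0.0]])   # wave equation as q_t = c * A_hat * q_x

def icn_amplification(sigma, xi, n_iter=2):
    """fourier-space amplification matrix of the iterative crank-nicholson
    scheme for the 1-d wave equation in first-order form, with centred
    spatial differences; sigma = c*dt/dx is the courant factor, xi = k*dx."""
    z = 1j * sigma * np.sin(xi) * A_hat      # dt * (c A_hat d/dx) for mode xi
    q = np.eye(2, dtype=complex)
    g = q + z @ q                            # ftcs predictor
    for _ in range(n_iter):
        g = q + z @ (0.5 * (q + g))          # average, then re-evaluate the rhs
    return g

for sigma in (0.5, 1.0, 1.5, 2.0, 2.1):
    rho = max(np.max(np.abs(np.linalg.eigvals(icn_amplification(sigma, xi))))
              for xi in np.linspace(0.0, np.pi, 181))
    print(f"courant factor {sigma:3.1f}: max |eigenvalue| = {rho:.6f}")
```

with these conventions the spectral radius stays at 1 up to a critical courant factor and grows beyond it; both the threshold and the wavenumber at which instability first appears are sensitive to how the first-order reduction and the courant factor are defined, which is why the numbers here should not be compared directly with the figure discussed above.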
before analyzing the stability of the discretization for non - linear waves, we will perform a von neumann analysis for the discretizations of both the adm and ct equations about flat space .the form of the metric that we will use to study stability properties of discretizations of the adm form of the einstein equations is given by where the lapse and metric functions , , and are functions of the independent variables and .to put the evolution equations in first order form ( eq . [ eq : diffeq ] ) , we introduce the extrinsic curvature functions , , and , each of which are also function of the independent variables and . the evolution equations for this adm system is given in the form of eq .[ eq : diffeq ] as where the index pair takes on the values , the index is summed over the values , denotes the trace of the extrinsic curvature , denotes the covariant derivative operator compatible with the 3-metric , and denotes the three components of the 3-ricci tensor , which are given explicitly as throughout this paper , we use the so - called `` 1 + log '' slicing condition in eq .[ eq : adm ] .this local condition on the lapse has been used successfully in several recent applications .moreover , it is a local condition , and thus , the von neumann analysis remains local ( this would be in contrast with a global elliptic condition on the lapse , e.g. maximal slicing , in which the calculation of the sum in the definition of the amplification matrix in eq . [eq : ampmatrix ] would have nonzero global contributions ) .the initial value formulation is completed by specifying initial data that satisfies the hamiltonian and momentum constraints , given respectively by the form of the metric that we will use to study stability properties of the discretizations of the ct form of the einstein equations , as defined in is given by where the lapse , the conformal function , and the conformal metric components , , and are functions of the independent variables and . the determinant of the conformal 3-metric is identically 1 . 
instead of evolving the extrinsic curvature components as in the adm formalism , the extrinsic curvature is split into its trace ( ) and traceless ( ) components : in addition , the conformal connection function , defined by is also treated as an evolved variable .there are therefore 10 evolution equations that are in the form of eq .[ eq : diffeq ] , and given explicitly as where the indices of are raised and lowered with the conformal metric , the index on the covariant derivative operator with respect to the physical metric is raised and lowered with the physical metric , and are the christoffel symbols related to the conformal metric .note that the hamiltonian constraint has been substituted in for the 3-ricci scalar in the equations for and .also , the momentum constraint has been substituted in the equation for .this corresponds to the `` mom '' system from , and the , , system from .note that the ricci components can be written in terms of the conformal ricci components : where the indices of the covariant derivative operator with respect to the conformal metric are raised and lowered with the conformal metric .the conformal ricci components can in turn be written as notice that , although we have imposed planar symmetry , we have expressed the ricci tensor in terms of derivatives of the conformal metric and of the conformal connection function just as is done for full 3-d numerical relativity .as outlined for the scalar wave equation in section [ sec : linearwaveeq ] , we discretize both the adm equations ( eq .[ eq : adm ] ) and the ct equations ( eq .[ eq : ct ] ) using a 2-iteration iterative crank - nicholson scheme .the resulting finite difference equations , even though planar symmetry and a simplified form of the metric is assumed , are still too complicated to perform a von neumann analysis by hand .the complication arises due to the recursive nature of the iterative crank - nicholson method ; a 2-iteration procedure results in the source terms of eqs .[ eq : adm ] and [ eq : ct ] being computed 3 times in recursive succession .we have performed the von neumann analysis of the adm and ct equations in two independent ways . in the first way, we used the symbolical calculation computer program mathematica to explicitly calculate the 2-iteration iterative crank - nicholson update , and then explicitly calculated the amplification matrix eq .[ eq : ampmatrix ] in terms of the initial data variables .we then took these expressions , substituted the initial data about which we want to compute the von neumann analysis , and calculated the eigenvalues to arbitrary precision inside mathematica .the second , independent way of performing the von neumann analysis was to write an evolution code for the finite difference equations .we then input the initial data about which we want to compute the von neumann analysis and computed the derivatives in the definition of the amplitude matrix eq .[ eq : ampmatrix ] using finite differencing ( this amounts to finite differencing the finite difference equations ! ) . the package eispack was then used to compute the eigenvalues of the resulting amplification matrices . 
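a sketch of the second route in generic form (the names are illustrative and the routine below is not the apemille/eispack code: `update` stands for any one-step finite-difference update acting on a periodic grid of states, for example the iterative crank-nicholson update of the adm or ct variables): the jacobian blocks are estimated by central differences, assembled into the amplification matrix of each mode, and their eigenvalues examined; the derivative estimates are sharpened by richardson extrapolation over two step sizes, as the authors describe next.

```python
import numpy as np

def amplification_matrix(update, q0, j, xi, offsets, eps=1e-6):
    """frozen-coefficient amplification matrix at grid point j for mode xi.
    `update` maps the state array q[var, point] to the state one time step
    later; `offsets` lists the stencil offsets m that can influence point j.
    jacobian blocks are richardson-extrapolated central differences."""
    nvar, npts = q0.shape

    def jac_block(m, h):
        block = np.zeros((nvar, nvar))
        for b in range(nvar):
            qp, qm = q0.copy(), q0.copy()
            qp[b, (j + m) % npts] += h
            qm[b, (j + m) % npts] -= h
            block[:, b] = (update(qp)[:, j] - update(qm)[:, j]) / (2.0 * h)
        return block

    g = np.zeros((nvar, nvar), dtype=complex)
    for m in offsets:
        jm = (4.0 * jac_block(m, eps) - jac_block(m, 2.0 * eps)) / 3.0
        g += jm * np.exp(1j * m * xi)
    return g

# usage: spectral radius at a sampled point j and mode xi for the current data
# rho = max(abs(np.linalg.eigvals(amplification_matrix(step, q, j, xi, range(-2, 3)))))
```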
to obtain the highest accuracy possible , whenever calculating the amplification matrix eq .[ eq : ampmatrix ] using finite differencing , we finite difference with multiple discretization parameters , and use richardson extrapolation to obtain values of the derivatives .both methods were used , and found to produce identical results .all results reported in the paper were produced using both methods as described above .we emphasize the use of two independent methods not only to verify our results , but also for more practical reasons : one may eventually want to perform a von neumann analysis for a full 3-d numerical relativity code .the analytic method using a symbolical manipulation package such as mathematica may not be feasible in the near future .it took well over 100 hours on one node of an origin 2000 running mathematica to perform the symbolical calculations needed for the von neumann analysis of a plane symmetric code .it may be several orders of magnitude more expensive to analyze a full 3d evolution update .the finite difference method for computing the amplification matrix is much quicker , and it is reassuring that the finite difference method , used in conjunction with richardson extrapolation , is accurate enough to reproduce the same detailed structures of the eigenvalues of the amplification matrices as doing the analytic calculation . in figure[ fig : adm_flat ] we plot the maximum of the modulus of the eigenvalues ( neglecting eigenvalues that are exactly 1 ) of the amplification matrix of the 2-iteration iterative crank - nicholson discretization scheme of the adm equations from section [ sec : admeq ] using flat space as initial data , namely , , and .we see that the spectral radius of the amplification matrix is less than or equal to 1 for .notice that for the nyquist frequency mode ( ) , when the courant factor is , all eigenvalues of the amplification matrix have a modulus of exactly 1 .for , all nyquist frequency modes are unstable ( i.e. they all have amplification matrices with spectral radii ) . in figure[ fig : ct_flat ] we plot the maximum of the modulus of the eigenvalues ( neglecting eigenvalues that are exactly 1 ) of the amplification matrix of the 2-iteration iterative crank - nicholson discretization scheme of the ct equations from section [ sec : cteq ] , again using flat space as initial data .the resulting plot is identical to that of the adm equations for flat space .the stability properties of the ct equations about flat space are exactly the same as the stability properties of the adm equations about flat space .in this section , we study the stability properties of the adm and ct equations about nonlinear plane waves .we require initial data that corresponds to nonlinear plane waves that satisfy the constraints and takes on the form of our simplified metric eqs .[ eq : admmetric ] and [ eq : ctmetric ] .we choose an exact plane wave solution first given by .the metric is assumed to take the form where , , and . given an arbitrary function , the einstein equations reduce to the following ordinary differential equation for l(v ) : in this paper , we take to be given as eq . 
[ eq : einstein_plane ] is then solved with a 4th order runge - kutta solver .initial data is obtained by setting ( and thus , ) , shown in figure [ fig : mtwdata ] .we present results of a von neumann analysis about the point with .we have investigated different values of and , and find the results presented to be generic for different values of and , as long as we remain in the nonlinear regime , .the main difference between the stability properties of discretizations of the ct and adm systems is shown in figures [ fig : admmedsig ] and [ fig : ctmedsig ] .these show , respectively , plots of the maximum modulus of the eigenvalues ( ignoring eigenvalues that are exactly 1 ) of the amplification matrices for discretizations of the adm and ct systems at .these plots show the range of the courant factor and the range of modes .notice how , for these ranges of courant factor and mode wavenumber , all of the eigenvalues for the ct system are less than or equal to 1 , where as for the adm system , there is always at least one eigenvalue that has a modulus that is greater than 1 . for these ranges of and ,we conclude that the ct system is stable , while the adm system is unstable . by looking at courant factors of values , we see the typical nyquist frequency instability , shown in figures [ fig : admhighsig ] and [ fig : cthighsig ] for the adm system and ct system , respectively . notice in both figures [ fig : admhighsig ] and [ fig : cthighsig ] that the largest modulus of the eigenvalues of the amplification matrices for long wavelength modes ( ) are all greater than .in fact , this is the case for all values of courant factor .this is due to the fact that there is an exponentially growing gauge mode in the analytic solution to the analytic equations .recall that we are using a different gauge choice than that given by the exact solution in eqs .[ eq : nonlin_metric ] - [ eq : nonlin_beta ] . using the gauge choice given in eq .[ eq : adm ] and eq .[ eq : ct ] , there exists an exponentially growing gauge mode in . to take into account equations that admit exponentially growing solutions , the von neumann condition , eq . [ eq : vonneumann_condition ] ,must be modified ( see for details ) . in orderthat finite difference discretizations of equations that admit solutions that have exponentially growing modes remain stable , we must have to verify that the long wavelength ( ) phenomena observed in figures [ fig : admhighsig ] and [ fig : cthighsig ] is simply due to the existence of ( long wavelength ) exponentially growing modes in the analytic solution to the analytic equations , we repeat the calculations leading to figures [ fig : admhighsig ] and [ fig : cthighsig ] , but decrease the discretization parameter by a factor of 2 ( ) .the results for both the adm and ct systems are similar , so we only show the results for the ct system in figure [ fig : cthighsig2 ] .notice that by decreasing by a factor of 2 and holding the courant factor constant , we also decrease by a factor of 2 . by comparing the long wavelength ( ) sections of figures [ fig : cthighsig ] and [ fig :cthighsig2 ] , we indeed see that the difference between the maximum modulus of the eigenvalues of the amplification matrices and 1 decreases by a factor of 2 when is decreased by a factor of 2 . 
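this consistency check can be automated; a small sketch follows (the numbers in the example calls are invented and the tolerance is arbitrary): for an analytic solution with an exponentially growing mode the excess of the spectral radius over 1 should be proportional to the time step, so halving the step at fixed courant factor should roughly halve the excess, while a genuine von neumann instability leaves it essentially unchanged.

```python
import numpy as np

def growth_consistent(rho_dt, rho_dt_half, tol=0.1):
    """rho_dt and rho_dt_half are spectral radii of the amplification matrix
    measured at time steps dt and dt/2 (same courant factor, same mode).
    for a genuine exponentially growing solution the excess over 1 is O(dt),
    so it should drop by about a factor of two; a true von neumann
    instability leaves the excess essentially unchanged."""
    e1, e2 = rho_dt - 1.0, rho_dt_half - 1.0
    if e1 <= 0.0:
        return True
    return abs(e2 / e1 - 0.5) < tol

# long-wavelength gauge mode: excess halves -> consistent with stability
print(growth_consistent(1.0040, 1.0021))   # True
# high-frequency mode: excess does not shrink -> genuine instability
print(growth_consistent(1.0040, 1.0042))   # False
```

in the present case the excess over 1 of the long-wavelength modes does indeed halve when the step is halved.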
therefore , the discretizations of both the adm and ct systems are stable for the long wavelength , exponentially growing gauge modeof course , the maximum eigenvalues for the high frequency sections of figures [ fig : cthighsig ] and [ fig : cthighsig2 ] for do not approach 1 as , which signifies a true von neumann instability for .first , while the results presented in this paper are specific to the discretization method and initial data chosen here , one of the main points of this paper is to show that a von neumann analysis can indeed be applied directly to discretizations of the einstein equations . in the past, particular discretization methods were only analyzed with the von neumann method through simple equations , such as the linear wave equation .here we show that it is possible to carry out a von neumann analysis on discretizations of equations as complicated as the einstein equations .we would like to point out that a von neumann analysis could also be used to test and/or formulate boundary conditions .the stability of outer boundary conditions , as well as the stability of inner boundary conditions for black hole evolutions , could be tested first through a von neumann analysis instead of the traditional ( and painful ) method of coding up an implementation and looking for ( and usually finding ) numerical instabilities during the numerical evolution . in this paper, we have shown that the stability properties , as determined by a von neumann stability analysis , of a common discretization ( a 2-iteration iterative crank - nicholson scheme ) of the adm and ct systems about flat space are similar to the stability properties of the scalar wave equation . however , as we would like to emphasize again , the stability properties of a nonlinear finite difference update operator depend on the values of the discrete evolution variables .therefore , it is not enough to study the stability of numerical relativity codes about flat space . in principle , one must verify the stability of a nonlinear finite difference update operator about every discrete state encountered during the entire discrete evolution process , as argued in section [ sec : vonneumann ] . as a first step in this direction , we studied the stability properties of the adm and ct systems about highly nonlinear plane waves by performing a von neumann analysis for these scenarios .several interesting features presented themselves .the main difference between the stability of the adm and ct systems about the nonlinear wave solution is seen in figures [ fig : admmedsig ] and [ fig : ctmedsig ] .there , we see that , for a wide range of courant factors and wavenumbers ( which includes the mode that is usually the most troublesome in numerical relativity , the nyquist frequency mode whose wavelength is ) , the ct system is stable ( i.e. , the amplification matrix has a spectral radius of 1 or less ) whereas the adm system is unstable ( i.e. 
, the amplification matrix has a spectral radius greater than 1 ) .note that the adm system , in this region of courant factor and wavenumber , has an amplification matrix whose spectral radius is approximately .this is a very small departure from unity .thus , instabilities arising from these modes could take a long time to develop during numerical evolutions .for example , the 2-iteration iterative crank - nicholson update operator for the scalar wave equation from section [ sec : linearwaveeq ] has a spectral radius of for a courant factor , and this update operator can be used to evolve initial data sets many wavelengths before the nyquist frequency instability sets in .again , it must be pointed out that these results are specific to plane wave spacetimes , simplified ( diagonal ) forms of the metric , choice of gauge , choice of initial data used here , and the 2-iteration iterative crank - nicholson scheme .it remains to be seen whether or not these results are generic to other , more general discretizations of the einstein equations and for more general data , such as black hole and neutron star discrete evolutions .the one thing that is certain is that the von neumann analysis can be used as a diagnostic tool for determining the stability of discretizations of nonlinear differential equations as complicated as the einstein equations .we would like to thank josh goldberg , alyssa miller , david rideout , peter saulson , rafael sorkin , and wai - mo suen for encouragement and interesting discussions .this research is supported by nsf ( phy 96 - 00507 and phy 99 - 79985 ) and nasa ( nccs5 - 153 ) . | we perform a von neumann stability analysis on a common discretization of the einstein equations . the analysis is performed on two formulations of the einstein equations , namely , the standard adm formulation and the conformal - traceless ( ct ) formulation . the eigenvalues of the amplification matrix are computed for flat space as well as for a highly nonlinear plane wave exact solution . we find that for the flat space initial data , the condition for stability is simply . however , a von neumann analysis for highly nonlinear plane wave initial data shows that the standard adm formulation is unconditionally unstable , while the conformal - traceless ( ct ) formulation is stable for . |
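as an illustration of the two routes to the amplification matrix used above, the sketch below applies them to the scalar wave equation that serves as the calibration case: a closed-form construction for a linear right-hand side, and finite differencing of the one-step update followed by richardson extrapolation. it is a minimal sketch, not the code used in the paper: the particular 2-iteration iterative crank-nicholson convention, the courant factors and the wavenumbers are assumptions on our part, and for this linear test the finite-difference jacobian is already exact up to round-off, so the richardson step only matters when one linearizes about a nonlinear background such as the plane wave.

```python
import numpy as np

def icn2_update(state, rhs, dt):
    # One step of a 2-iteration iterative Crank-Nicholson scheme (one common
    # convention): each iteration averages the old state with an Euler
    # prediction; the final update evaluates the RHS at the last average.
    s = state
    for _ in range(2):
        s = 0.5 * (state + (state + dt * rhs(s)))
    return state + dt * rhs(s)

def amplification_matrix_fd(update, n, eps):
    # Amplification matrix by central finite differencing of the one-step
    # update operator about the zero state.
    G = np.zeros((n, n), dtype=complex)
    for j in range(n):
        e = np.zeros(n, dtype=complex)
        e[j] = 1.0
        G[:, j] = (update(eps * e) - update(-eps * e)) / (2.0 * eps)
    return G

def spectral_radius(G, tol=1e-12):
    ev = np.abs(np.linalg.eigvals(G))
    ev = ev[np.abs(ev - 1.0) > tol]   # neglect eigenvalues that are (numerically) 1
    return ev.max() if ev.size else 1.0

# Scalar wave equation in first-order form, u_t = Pi, Pi_t = c^2 u_xx, for a
# single Fourier mode exp(i k x); the centered second derivative has the
# symbol -(4/dx^2) sin^2(k dx / 2).
c_w, dx = 1.0, 1.0
for sigma in (0.5, 1.0, 1.5):                  # Courant factor c dt / dx
    dt = sigma * dx / c_w
    for theta in (np.pi / 8.0, np.pi):         # theta = k dx; pi is the Nyquist mode
        om2 = (4.0 / dx ** 2) * np.sin(theta / 2.0) ** 2
        A = np.array([[0.0, 1.0], [-c_w ** 2 * om2, 0.0]], dtype=complex)

        # route 1: closed form; for a linear RHS the scheme above gives
        # G = I + (dt A) + (dt A)^2 / 2 + (dt A)^3 / 4
        dtA = dt * A
        G_exact = np.eye(2) + dtA + dtA @ dtA / 2.0 + dtA @ dtA @ dtA / 4.0

        # route 2: finite differencing of the update map with Richardson
        # extrapolation of two estimates obtained with steps eps and eps / 2
        upd = lambda s: icn2_update(s, lambda x: A @ x, dt)
        G_h = amplification_matrix_fd(upd, 2, 1.0e-4)
        G_h2 = amplification_matrix_fd(upd, 2, 0.5e-4)
        G_fd = (4.0 * G_h2 - G_h) / 3.0

        print("sigma=%.1f  k*dx=%.3f  rho_exact=%.6f  rho_fd=%.6f"
              % (sigma, theta, spectral_radius(G_exact), spectral_radius(G_fd)))
```

as in the text, the two routes agree to the printed precision, and the long-wavelength and nyquist modes can be inspected separately by varying the courant factor and wavenumber loops.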
the accurate numerical simulation of fluid flow in porous media is important in many applications ranging from hydrocarbon production and groundwater flow to catalysis and the gas diffusion layers in fuel cells .examples include the behavior of liquid oil and gas in porous rock , permeation of liquid in fibrous sheets such as paper , determining flow in underground reservoirs and the propagation of chemical contaminants in the vadose zone , assessing the effectiveness of leaching processes and optimizing filtration and sedimentation operations .an important and experimentally determinable property of porous media is the permeability , which is highly sensitive to the underlying microstructure .comparison of experimental data to numerically obtained permeabilities can improve the understanding of the influence of different microstructures and assist in the characterization of the material .before the 1990 s the computational power available was very limited restricting all simulations either to small length scales or low resolution of the microstructure .shortly after its introduction lattice - boltzmann ( ) simulations became popular as an alternative to a direct numerical solution of the stokes equation for simulating fluid flow in complex geometries .historically , the method was developed from the lattice gas automata .in contrast to its predecessor , in the method the number of particles in each lattice direction is replaced with the ensemble average of the single particle distribution function , and the discrete collision rule is replaced by a linear collision operator . in the methodall computations involve local variables so that it can be parallelized easily . with the advent of more powerful computers it became possible to perform detailed simulations of flow inartificially generated geometries , tomographic reconstructions of sandstone samples , or fibrous sheets of paper .the accuracy of simulations of flow in porous media depends on several conditions .these include the resolution of the discretization of the porous medium , proper boundary conditions to drive the flow and to implement the solid structure or the choice of the collision kernel .even though advanced boundary conditions , discretization methods , as well as higher order kernels have been developed and are common in the literature , it is surprising to the authors that they only found limited applications so far . in particular for commercial applicationsa three - dimensional implementation with 19 discrete velocities and a single relaxation time linearized collision operator is still the de - facto standard to calculate stationary velocity fields and absolute permeabilities for porous media . here ,the flow is usually driven by a uniform body force to implement a pressure gradient and solid surfaces are generated by simple bounce back boundary conditions .the present work is motivated by the question whether permeabilities calculated by this standard approach can be considered to be accurate . 
in particular , it is important to understand where the limits of this method are and how the accuracy can be increased .we quantify the impact of details of the implementation by studying 3d poiseuille flow in pipes of different shape and resolution and comparing the simulation results to analytical solutions .this allows to demonstrate how simple improvements of the simulation paradigm can lead to a substantial reduction of the error in the measured permeabilities .these include a suitable choice of the relaxation parameter and the application of the multirelaxation time method in order to ascertain a minimal unphysical influence of the fluid viscosity on the permeability .further , a correct implementation of the body force to drive the flow together with suitable in- and outflow boundaries is mandatory to avoid artifacts in the steady state velocity field .finally , the small compressibility of the fluid requires a proper determination of the pressure gradient in the system .if these details are taken care of , it is shown that the method is well suitable for accurate permeability calculations of stochastic porous media by applying it to discretized micro computer - tomography ( ) data of a fontainebleau sandstone .the boltzmann equation describes the evolution of the single particle probability density , where is the position vector , is the velocity vector , is the time , and is the collision operator . while discretizations on unstructured grids exists , they are not widely used and typically the position is discretized on a structured cubic lattice , with lattice constant .the time is discretized using a time step and the velocities are discretized into a finite set of vectors with , called lattice velocities , where the finite integer varies between implementations . in this workwe exclusively use the so - called d3q19 lattice , where velocities are used in a three dimensional domain . a cubic lattice with basis , is embedded into using the coordinate function to map the lattice nodes to position vectors .the computational domain is a rectangular parallelepiped denoted as where are its dimensionless side - lengths .see fig .[ fig : permcalc ] for a visualization .physical quantities such as pressure or density on the lattice are abbreviated as .we introduce the vector notation , where the components are the probabilities calculated as here , is the finite volume associated with the point and is the volume in velocity space given by lattice velocity .the macroscopic density and velocity are obtained from as where is a reference density . the pressure is given by with the speed of sound discretization of eq .provides the basic system of difference equations in the method with and the initial condition ( for ) .the generally nonlinear collision operator is approximated using the linearization around a local equilibrium probability function , with a collision matrix . the simplest approach to define the collision matrix uses a single relaxation time with time constant , where is the kronecker delta .this single relaxation time ( ) scheme is named after the original work of bhatnagar , gross and krook . within the method , is approximated by a second order taylor expansion of the maxwell distribution , if external forces are absent , the equilibrium velocity is defined as from eq . .as explained further below , and may differ from eq . 
if an external acceleration is present .the numbers are called lattice weights and differ with lattice type , number of space dimensions and number of discrete velocities . see for a comprehensive overview on different lattices .an alternative approach to specify the collision matrix is the multirelaxation time ( ) method .here , a linear transformation is chosen such that the moments represent hydrodynamic modes of the problem .we use the definitions given in , where is the density defined in eq ., represents the energy , with the momentum flux and , with are components of the symmetric traceless stress tensor . introducing the moment vector , , a diagonal matrix , and the equilibrium moment vector , we obtain during the collision step the density and the momentum flux are conserved so that and with .the non - conserved equilibrium moments , , are assumed to be functions of these conserved moments and explicitly given e.g. in .the diagonal element in the collision matrix is the relaxation time moment .one has , because the corresponding moments are conserved , describes the relaxation of the energy and the relaxation of the stress tensor components .the remaining diagonal elements of are chosen as to optimize the algorithm performance .because two parameters and remain free , the multirelaxation time method reduces to a `` two relaxation time '' ( ) method .an alternative implementation can be found in .to apply the method to viscous flow in porous media it is necessary to establish its relations with hydrodynamics .the chapman - enskog procedure shows that density , velocity and pressure fulfill the navier - stokes equations without external forces , with a kinematic viscosity combining eq .and eq . gives because , , and , it follows that .a typical value for the pore diameter in sandstone is , and for water the kinematic viscosity and speed of sound are and , respectively . with typical velocities of order reynolds number is .discretizing with gives then . because for the method is known to be unstable , a direct simulation of water flow in porous media with these parameters is not feasible . to overcome this impasse, one might impose and simultaneously fix and as fluid parameters .the discretization then is and again , a simulation with these parameters is not possible because a typical pore with diameter would have to be represented by nodes , exceeding realistic memory capacities .another way to circumvent these problems is to appeal to hydrodynamic similarity for stationary flows .the simulations in this paper are performed with fluid parameters that represent a pseudofluid with the same viscosity as water , but as the speed of sound .the discretization then is and .a pore of diameter is then represented by nodes and a cubic sample with side - length requires nodes , a manageable system size on parallel computers . an external force , as discussed next , drives the flow such that the velocities are of order .the mach and reynolds numbers in the simulations are and , characterizing a laminar subsonic flow .as long as and hydrodynamic similarity remains valid , we do not expect that the parameters of the pseudofluid will change the permeability estimate .an external acceleration acting on the fluid is implemented by adding two modifications .first , a forcing term written as a power series in the velocity is added to the right hand side of eq . .second , eq . for the equilibrium velocity in eq . 
needs to be modified .the parameters of order 0 , 1 , and 2 in the expansion are , , and .the definition of the velocities and differ with the method used . we present four possible implementations which all assume , since otherwise a source term in the mass balance would have to be taken into account .the sums in this paragraph run from and the quantities , , , , , and are functions of and unless specified otherwise . the first method to implement a body force is referred to as in the remainder of the paper .it uses and a modified definition of and which causes the influence of temporal and spatial derivatives of on the density and momentum changes to vanish . for this method oneobtains , with instead of eq . . a multiscale expansion in time of the resulting discrete equation yields that the macroscopic density and velocity recover the navier - stokes equations with an external body force term .the forcing is applied in two steps during every time step , one half within the collision step by the definition of and the second half within the streaming step by the term . in the case of part which is applied during the collision step is added to the modes with , which represent the momentum flux . the second method ( )is defined by setting so that does not depend on .the full acceleration is applied only within the streaming step through the term .one sets and the macroscopic velocity defined as in eq . .this simplification is useful because it reduces the computational effort , but it is restricted to stationary flows . in our simulations is time independent and we are mainly interested in the permeability and stationary flows so that we have adopted in our simulations below . in macroscopic fields fulfill mass balance , but some additional unphysical terms appear in the momentum balance . herewe assume that all these additional terms are negligible or vanish for stationary flows , because we expect that all spatial gradients are sufficiently small . is intended for constant and uses the same parameters and as .however , the macroscopic velocity is calculated as in eq . .this recovers momentum balance , because unphysical terms either vanish or are negligible , but it does not recover mass balance , which in this case reads the reason is an inaccurate calculation of the macroscopic velocity .the impact of this issue on the simulation results is shown in sec .[ diffperm : sec ] . suggests to incorporate the acceleration not by using the forcing term , but by adding the term to the equilibrium velocity .the macroscopic velocity remains calculated by eq .this is equivalent to using the forcing term with and given by eq . .this implementation leads to the same drawback in the mass balance equation as in .the most common boundary conditions ( ) used jointly within implementations are periodic ( ) and no - slip . when using , fluid that leaves the domain , i.e. , the term in eq .exceeds the computational domain size , enters the domain from the other side .the no - slip , also called simple bounce - back rule ( ) , approximates vanishing velocities at solid surfaces . if the lattice point in eq .represents a solid node , the discrete equation is rewritten as where the probability function is associated with , where is the probability function in opposite direction to .midplane improve the eliminating the zig - zag profile when plotting the mass flow vs. , but yield the same mass flow , see eqs . 
and for their definition , respectively .the scheme depends on viscosity and relaxation time , especially in under - relaxed simulations ( large values of ) .the numerically exact position of the fluid - solid interface changes slightly for different which can pose a severe problem when simulating flow within porous media , where some channels might only be a few lattice units wide .the permeability , being a material constant of the porous medium alone , becomes dependent on the fluid viscosity .as demonstrated below within the method this - correlation is significantly smaller than within .recently , further improvements for no - slip have been discussed .most of these implementations use a spatial interpolation .for example , linearly and quadratic interpolated bounce - back , or multireflection . to calculate boundary effects these methods use multiple nodes in the vicinity of the surface . for this reason these schemes are unsuitable in porous media where some porethroats might be represented by 2 or 3 nodes only .consequently , we use midplane as well as for our simulations . to drive the flow on - site pressure or flux may be used . using them it is possible to exactly set the ideal gas pressure ( or density , see eq . ) or flux on a specific node .thus , creating a pressure gradient by fixing either the pressure or the mass flux at the inlet and outlet nodes are feasible alternatives .the computational domain .the ( porous ) sample is , and the fluid is accelerated in the acceleration zone .two fluid chambers and are used to avoid artifacts . ] the computational domain ( see fig . [fig : permcalc ] ) is composed of three zones : the sample describing the geometry and two chambers ( inlet ) and ( outlet ) , before and after the sample , containing fluid reservoirs . the notation denotes a cross - section , where , , , and represent the cross - sections right before the sample ( ) , the first ( ) , and last ( ) cross - section within the sample , and the cross - section right after the sample ( ) , respectively .every lattice point ( node ) in is either part of the matrix , denoted , or part of the fluid , denoted , so that and .results are presented in the dimensionless quantities where the discretization parameters and are chosen according to the analysis presented in sec .[ simmeth : sec ] . unless otherwise noted , the relaxation time is and for simulations is used .generally , results from simulations are labeled with the superscript `` '' , e.g. , the density .if the results refer to a specific implementation ( or ) they are labeled accordingly , e.g. , or . the fluid is driven using model b. the acceleration is not applied throughout the whole domain but only within the acceleration zone .an acceleration of is used for all simulations .the average for a physical quantity is with the domain and the number of nodes in that domain .the mass flow through a cross - section is given by with being the momentum component in direction of the flow .the mass flow through the whole domain is calibrate the simulation we simulate poiseuille flow in pipes with quadratic cross - section .the simulation parameters are defined by where and .the system dimensions are , with the channel width . according to ref . the analytical solution for the velocity component in flow direction in a pipe with quadratic cross - section is where ,{x_{{2}}}\in[-b/2,b/2]$ ] .the cartesian coordinates and have their origin in the center of the pipe . 
is the pressure gradient in flow direction and the dynamic viscosity .the expression is asymmetric in and .contrary to the no - slip condition the velocities are not zero for finite . to estimate the truncation error we define ,\\\ ] ] and with being the normalized velocities on the wall calculated from eq . . quantifies the truncation error at finite . requiring that the truncation error is at least three to four decades smaller than the velocities in the corners , for example or any other such corner velocity , yields . for all further comparisons with we use .if is chosen too small , a meaningful comparison of the simulation results with the analytical solution is not possible because of the inaccuracies in the numerical evaluation of the analytical solution itself .is the stationary solution for the velocity component in flow direction on a quadratic cross - section in an infinitely long pipe and for a constant pressure gradient .therefore , the simulated and are inspected for convergence at the end of the simulation and the assumption of a constant pressure gradient is checked .we define as the maximum relative change of a quantity during the time and within the computational domain , where is either the velocity or the density . because the pressure is proportional to the density , eq ., the pressure is converged , if the density is sufficiently converged ..[tbl : steady ] maximum relative change of the velocity and density , eq . , during the time when the simulation ended at . is the dimensionless channel width . [cols="^,^,^,^,^,^ " , ] the top plot shows the permeability for different , schemes , the and the sample .the influence of on the permeability is stronger using the .the bottom plot shows the permeability results , for the sample with and the sample with . at extrapolated permeabilities are shown.,title="fig : " ] the top plot shows the permeability for different , schemes , the and the sample .the influence of on the permeability is stronger using the .the bottom plot shows the permeability results , for the sample with and the sample with . at extrapolated permeabilities are shown.,title="fig : " ]our simulation setup of an acceleration zone and in / outlet chambers , together with our approximations of darcy s law , provides a method for permeability calculations .several problems in the numerical implementation and data evaluation were addressed , such as a correct acceleration implementation and an adequate approximation for calculating the pressure gradient .caveats when using simulations to calculate permeabilities have been exposed .we performed detailed studies with different -implementations , i.e. , and , and for various systems to quantitatively determine the accuracy of the calculated velocity field and calculated permeability .we find that for reasonably resolved quadratic pipes , the error of the calculated permeability is below .investigating non - aligned geometries , circular and triangular pipes , the discretization and permeability error is roughly at comparable resolutions . from thiswe infer that permeability calculations in stochastic porous media will have a significantly larger error , because the resolution of pores and pore walls is usually well below the resolution used for our pipe calculations above . 
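for reference, the analytical duct profile used in the comparisons above can be evaluated directly. the sketch below uses a standard series solution for fully developed laminar flow in a square duct (a textbook form, e.g. white's "viscous fluid flow", which may differ in notation from the expression cited in the paper); the half-width, pressure gradient and viscosity are arbitrary illustrative values. it also shows the kind of truncation check discussed above, by monitoring how far the truncated series is from the no-slip condition on one of the walls, and evaluates the closed-form flow rate that serves as a permeability reference.

```python
import numpy as np

def duct_velocity(y, z, a, G, mu, nmax):
    # Series solution for fully developed laminar flow in a square duct
    # -a <= y, z <= a driven by a constant pressure gradient G = -dp/dx.
    # Note the form is asymmetric in y and z, and a truncated sum does not
    # vanish exactly on the walls y = +/- a.
    u = a ** 2 - z ** 2
    for n in range(1, nmax + 1, 2):                     # odd terms only
        kn = n * np.pi / (2.0 * a)
        u = u - (32.0 * a ** 2 / (n ** 3 * np.pi ** 3)) * (-1.0) ** ((n - 1) // 2) \
              * np.cosh(kn * y) / np.cosh(kn * a) * np.cos(kn * z)
    return G / (2.0 * mu) * u

a, G, mu = 0.5, 1.0, 1.0        # half-width, pressure gradient, dynamic viscosity
z_wall = np.linspace(-a, a, 201)
for nmax in (1, 9, 41, 161):
    # truncation check: residual velocity on the wall y = a, compared with the
    # centre velocity (the residual should be several decades smaller)
    resid = np.max(np.abs(duct_velocity(a, z_wall, a, G, mu, nmax)))
    u0 = duct_velocity(0.0, 0.0, a, G, mu, nmax)
    print(f"nmax={nmax:4d}   u(0,0)={u0:.8f}   max |u| on wall y=a: {resid:.3e}")

# Flow rate from the closed-form series, useful as a permeability reference:
# Q = (4 G a^4 / (3 mu)) [1 - (192 / pi^5) sum_{n odd} tanh(n pi / 2) / n^5]
s = sum(np.tanh(n * np.pi / 2.0) / n ** 5 for n in range(1, 400, 2))
Q = 4.0 * G * a ** 4 / (3.0 * mu) * (1.0 - 192.0 / np.pi ** 5 * s)
print("flow rate Q =", Q, "   mean velocity =", Q / (2.0 * a) ** 2)
```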
comparing the two -implementations , and , we find that reduces the dependence of the permeability on the value of substantially .using and a relaxation time tailored to give good results for a specific geometry does not assure reliable results in a stochastic porous medium .for example , we found that and yields the best result for 3d poiseuille flow in a quadratic pipe .however , for this value and results differ by 20% if applied to the fontainebleau sandstone ( see fig . [fig : fntb ] ) .therefore , is suggested to be used for permeability estimates based on simulations .further investigations using the method for flow through stochastic porous media should include a resolution and relaxation time dependent analysis together with an appropriate extrapolation scheme for more reliable permeability estimates .we are grateful to the high performance computing center in stuttgart , the scientific supercomputing center in karlsruhe , and the jlich supercomputing center for providing access to their machines .one of us ( t.z . ) would like to acknowledge partial support from the dfg program exc310 ( simulationstechnik ) .we would like to thank bibhu biswal for fruitful discussions and the sonderforschungsbereich 716 , the dfg program `` nano- and microfluidics '' , and the deutscher akademischer austauschdienst ( daad ) for financial support .n. s. martys , j. g. hagedorn , d. goujon , and j. e. devaney .large scale simulations of single and multi - component flow in porous media . in _ developments in x - ray tomography ii ,proceeding of spie _ , volume 3772 , pp 205213 , 1999 . | during the last decade , lattice - boltzmann ( ) simulations have been improved to become an efficient tool for determining the permeability of porous media samples . however , well known improvements of the original algorithm are often not implemented . these include for example multirelaxation time schemes or improved boundary conditions , as well as different possibilities to impose a pressure gradient . this paper shows that a significant difference of the calculated permeabilities can be found unless one uses a carefully selected setup . we present a detailed discussion of possible simulation setups and quantitative studies of the influence of simulation parameters . we illustrate our results by applying the algorithm to a fontainebleau sandstone and by comparing our benchmark studies to other numerical permeability measurements in the literature . |
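as a concrete illustration of the "standard approach" whose accuracy is examined above - d3q19 lattice, single-relaxation-time (bgk) collisions, simple bounce-back walls and a uniform body force - the following self-contained sketch computes the permeability of a straight square channel from darcy's law. it deliberately uses the simplest variants (full-way bounce-back, the most basic forcing term, no mrt/trt, no in- and outflow chambers), i.e. exactly the kind of implementation whose limitations are discussed in the text; the grid size, relaxation time and acceleration are arbitrary illustrative choices and the code is not the one used for the results above.

```python
import numpy as np

# D3Q19 lattice: rest velocity, 6 axis vectors, 12 face diagonals
c = np.array([(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1) for z in (-1, 0, 1)
              if abs(x) + abs(y) + abs(z) in (0, 1, 2)])
w = np.where((c ** 2).sum(1) == 0, 1 / 3, np.where((c ** 2).sum(1) == 1, 1 / 18, 1 / 36))
opp = np.array([np.where((c == -ci).all(1))[0][0] for ci in c])   # opposite directions

nx, ny, nz = 4, 18, 18      # periodic in x; one-node walls bound the duct in y and z
tau = 0.9                   # BGK relaxation time; nu = (tau - 1/2) / 3 in lattice units
nu = (tau - 0.5) / 3.0
a_x = 1.0e-5                # uniform body acceleration along x (the "pressure gradient")

solid = np.zeros((nx, ny, nz), bool)
solid[:, [0, -1], :] = True
solid[:, :, [0, -1]] = True
fluid = ~solid

def feq(rho, u):
    cu = np.einsum('qa,axyz->qxyz', c, u)
    usq = (u ** 2).sum(0)
    return w[:, None, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

f = feq(np.ones((nx, ny, nz)), np.zeros((3, nx, ny, nz)))   # start from rest

for step in range(4000):
    rho = f.sum(0)
    u = np.einsum('qa,qxyz->axyz', c, f) / rho
    # BGK collision plus the simplest body-force term (half-force corrections
    # and better forcing schemes are among the details discussed in the text)
    fpost = f - (f - feq(rho, u)) / tau \
            + 3.0 * w[:, None, None, None] * rho * a_x * c[:, 0, None, None, None]
    fpost[:, solid] = f[:, solid]            # solid nodes do not collide
    for q in range(19):                      # streaming with periodic wrap
        f[q] = np.roll(fpost[q], shift=tuple(c[q]), axis=(0, 1, 2))
    f[:, solid] = f[opp][:, solid]           # full-way bounce-back at solid nodes

rho = f.sum(0)
ux = (np.einsum('qa,qxyz->axyz', c, f) / rho)[0]
q_darcy = ux[fluid].sum() / solid.size       # volume-averaged (superficial) velocity
k = nu * q_darcy / a_x                       # Darcy permeability in lattice units
b = ny - 2                                   # nominal channel width in nodes
print(f"k = {k:.3f} (lattice units),  k / b^2 = {k / b ** 2:.4f}")
# The duct series solution gives k / b^2 of roughly 0.035; with simple
# bounce-back the effective wall position (and hence the apparent width b)
# depends on tau, which is one of the effects emphasized above.
```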
in a recent paper moitsheki et al argued that a method based on lie algebras is suitable for obtaining the solution to nonlinear ordinary differential equations that appear in simple models for heat transfer .they compared the analytical solutions with other results coming from perturbation approaches like homotopy perturbation method ( hpm ) and homotopy analysis method ( ham) . it is worth noticing that there is an unending controversy between the users of those fashionable perturbation approaches that arose some time ago .the purpose of this paper is to determine the usefulness of the results for the heat transfer systems provided by the lie algebraic method and those perturbation approaches . in sec .[ sec : exact ] we analyze the exact solutions arising from lie algebras , in sec . [ sec : taylor ] we outline the application of the well known taylor series approach , in sec .[ sec : virial ] we derive a simple accurate analytical expressions for one of the models and in sec .[ sec : conclusions ] we summarize our results and draw conclusions .the first example is the nonlinear ordinary differential equation ^{\prime \prime } ( x)+\epsilon u^{\prime } ( x)^{2 } & = & 0 \nonumber \\u(0)=1,\;u(1 ) & = & 0 \label{eq : ex_1}\end{aligned}\ ] ] where the prime denotes differentiation with respect to the variable .this equation is trivial if one rewrites it in the following way ^{\prime } = 0 ] and therefore the approach will be useful for small and moderate values of . as increases the rate of convergence of the taylor series method decreases because the radius of convergence approaches unity from above . however , this example is trivial and of no interest whatsoever for the application of a numerical or analytical method .this reasoning also applies to example ( [ eq : ex_3 ] ) although in this case we do not have an explicit solution but .the example ( [ eq : ex_2 ] ) is more interesting because there appears to be no exact solution , and for this reason we discuss it here .the unknown parameter is and the partial sums for the taylor series about }(x)=\sum_{j=0}^{n}u_{j}(u_{0})x^{j } \label{eq : u_x_series}\ ] ] enable one to obtain increasingly accurate estimates } ] .although the rate of convergence decreases as increases it is sufficiently great for most practical purposes .notice that the ham perturbation corrections for this model are polynomial functions of whereas the hpm has given polynomial functions of either or . however , there is no doubt that the straightforward power series approach is simpler and does not require fiddling with adjustable parameters .the analysis of the nontrivial equations for heat transfer models may be easier if we have simple approximate analytical solutions instead of accurate numerical results or cumbersome perturbation expressions . 
in the case of the models ( [ eq : ex_1 ] ) and ( [ eq : ex_3 ] ) there is no doubt that the exact analytical expressions should be preferred .for that reason , in what follows we concentrate on the seemingly nontrivial model ( [ eq : ex_2 ] ) .we have recently shown that the well known virial theorem may provide simple analytical solutions for some nonlinear problems .in particular , we mention the analysis of a bifurcation problem that appears in simple models for combustion .the only nontrivial problem outlined above is a particular case of nonlinear ordinary differential equations of the form the hypervirial theorem is a generalization of the virial one .if is an arbitrary differentiable weight function , the hypervirial theorem provides the following suitable expression for our problem ( [ eq : gen_nonlin ] ) : ^{\prime } dx & = & w(u(1))u^{\prime } ( 1)-w(u(0))u^{\prime } ( 0 ) \nonumber \\ & = & \int_{0}^{1}\left [ \frac{dw}{du}(u^{\prime } ) ^{2}+w(u)f(u)\right ] dx \label{eq : vt_gen}\end{aligned}\ ] ] in the particular case of the example ( [ eq : ex_2 ] ) we have dx \label{eq : vt_ex_2}\ ] ] when we obtain the virial theorem .here we also consider the even simpler choice that we will call hypervirial although it is just a particular case .since we try the ansatz that satisfies the boundary conditions in equation ( [ eq : ex_2 ] ) .it follows from equation ( [ eq : vt_ex_2 ] ) that the adjustable parameter is a root of when and when .[ fig : ht1 ] shows for some values of and also the accurate result obtained from the taylor series discussed in sec .[ sec : taylor ] .we appreciate that the accuracy of the analytical expression ( [ eq : u_app ] ) decreases as increases .however , if one takes into account the simplicity of equation ( [ eq : u_app ] ) the agreement is remarkable .besides , the hypervirial theorem with proves to be more accurate than the virial theorem .it is curious that there is no such test for the hpm or ham . as a particular example we consider ( the preferred parameter value for both ham and hpm calculations ) . from the partial sums of the taylor series with we obtain . the analytical function ( [ eq : u_app ] ) yields , for and , for that is a reasonable estimate of the unknown parameter .again we see that the hypervirial approach is better than the virial one .[ fig : ht2 ] shows accurate values of given by the taylor series with , our approximate analytical virial expression and equation ( [ eq : u_ex_2 ] ) for .it seems that the accuracy of is somewhat between the ham results of 5th and 10th order . on the other hand , the equation ( [ eq : u_ex_2 ] ) derived bythe lie algebraic method exhibits a wrong behaviour . 
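to make the procedure concrete, the sketch below implements the taylor-series estimation of the unknown initial value u(0) for a boundary-value problem of the form u'' = f(u). since the right-hand side and the boundary conditions of example (eq:ex_2) are not reproduced in this excerpt, the code uses a stand-in nonlinearity f(u) = epsilon*u^4 with fin-type conditions u'(0) = 0 and u(1) = 1 purely for illustration; only the procedure itself (recursive taylor coefficients plus a root search on the partial sums) reflects the method described above.

```python
import numpy as np

def taylor_coeffs(u0, eps, N):
    # Taylor coefficients of u(x) about x = 0 for the stand-in problem
    # u'' = eps * u**4 with u(0) = u0 and u'(0) = 0, using the recurrence
    # u_{j+2} = eps * [u^4]_j / ((j+1)(j+2)), where [u^4]_j is the x^j
    # coefficient of the fourth power of the truncated series.
    u = np.zeros(N + 1)
    u[0] = u0                                   # u'(0) = 0 -> u[1] = 0
    for j in range(N - 1):
        p = np.polynomial.polynomial.polypow(u[:j + 1], 4)
        u[j + 2] = eps * p[j] / ((j + 1) * (j + 2))
    return u

def partial_sum(coeffs, x):
    return np.polynomial.polynomial.polyval(x, coeffs)

def estimate_u0(eps, N, lo=0.0, hi=1.0, tol=1e-12):
    # Bisection on u0 so that the degree-N partial sum satisfies u(1) = 1.
    g = lambda u0: partial_sum(taylor_coeffs(u0, eps, N), 1.0) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

eps = 0.6
for N in (4, 8, 16, 32):
    print(f"N = {N:2d}   u(0) estimate = {estimate_u0(eps, N):.10f}")
```

increasing the truncation order N gives a sequence of estimates of the unknown parameter that settles once the partial sums converge on the interval, which is the behaviour described for the taylor approach above.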
finally , in fig .[ fig : ht2b ] we compare the numerical , virial ( ) and hypervirial ( ) approaches to the function in a wider scale .we conclude that the virial theorem is not always the best choice for obtaining approximate solutions to nonlinear problems .the purpose of this paper has been the discussion of some recent results for the nonlinear equations arising in heat transfer phenomena .the oversimplified models considered here may probably be of no utility in actual physical or engineering applications .notice that the authors did not show any sound application of those models and the only reference is a pedagogical article cited by rajabi et al .however , it has not been our purpose to discuss this issue but the validity of the methods for obtaining exact and approximate solutions to simple nonlinear equations .it seems that the particular application of the lie algebraic method by moitsheki et al has only produced the exact result of a trivial equation and a wrong result for a nontrivial one .therefore , we believe that the authors failed to prove the utility of the technique and it is not surprising that they concluded that their results did not agree with the ham ones ( see fig . [ fig : ht2 ] ) . we have also shown that under certain conditions the well known straightforward taylor series method is suitable for the accurate treatment of such nontrivial equations . it is simpler than both ham and hpm and as accurate as the numerical integration routine built in a computer algebra system .finally , we have shown that the well known hypervirial theorem may provide simple analytical expressions that are sufficiently accurate for a successful analysis of some of those simple models for heat transfer systems .it is surprising that our results suggest that the virial theorem may not be the best choice . | we analyze some exact and approximate solutions to nonlinear equations for heat transfer models . we prove that recent results derived from a method based on lie algebras are either trivial or wrong . we test a simple analytical expression based on the hypervirial theorem and also discuss earlier perturbation results . |
the search for the surface gravity effect of the free translational oscillations of the inner core , the so - called slichter modes , has been a subject of observational challenge , particularly since the development of worldwide data from superconducting gravimeters ( sgs ) of the global geodynamics project .indeed these relative gravimeters are the most suitable instruments to detect the small signals that would be expected from the slichter modes . a first claim by of a triplet of frequencies that he attributed to the slichter modes led to a controversy ( e.g. ) .this detection has been supported by and but has not been confirmed by other authors . have shown it is necessary to consider dynamic love numbers to calculate the slichter mode eigenperiods . latest theoretical computation predicts a degenerate ( without rotation or ellipticity ) eigenperiod of 5.42 h for the seismological reference prem earth model . a more recent study by states that the period could be shorter because of the kinetics of phase transformations at the inner - core boundary ( icb ) .the interest raised by the slichter modes resides in its opportunity to constrain the density jump and the viscosity in the fluid outer core at the icb .the density jump at the icb is a parameter that constrains the kinetic energy required to power the geodynamo by compositional convection .some discrepancies have been obtained for the value of this parameter .on the one hand , by analyzing seismic pkikp / pcp phases , found that it should be smaller than 450 kg / m , later increased to 520 kg / m . on the other hand , using normal modes observation , obtained 820 180 kg / m .such differences in the estimate of the icb density jump have been partially attributed to the uncertainties associated with the seismic noise . a model that satisfies boththe constraints set by powering the geodynamo with a reasonable heat flux from the core , and pkp traveltimes and normal mode frequencies has been proposed by with a large overall density jump between the inner and outer cores of 800 kg / m and a sharp density jump of 600 kg / m at the icb itself . in the followingwe will adopt the prem value of 600 kg / m .the non - detection of the slichter modes raises the question of their expected amplitude , their damping and the possible mechanisms to excite them .a certain number of papers have considered the damping of the inner core oscillation through anelasticity of the inner core and mantle , through viscous dissipation in the outer core or through magnetic dissipation . and have summarized the theoretical q values expected for the slichter mode . have concluded that it should most probably be equal to or larger than 2000 .various sources of excitation have been previously considered .the seismic excitation has been studied by , and .they have shown that earthquakes can not excite the slichter modes to a level sufficient for the sgs to detect the induced surface gravity effect .for instance , even for the 1960 chilean event the induced surface gravity effect does not reach the nanogal level ( 1 ngal nm / s ) .surficial pressure flow acting at the icb and generated within the fluid outer core has been considered by and as a possible excitation mechanism .however , the flow in the core at a timescale of a few hours is too poorly constrained to provide reliable predictions of the amplitude of the slichter modes . investigated the excitation of the slichter modes by the impact of a meteoroid , which they treated as a surficial seismic source . 
for the biggest known past collision associated to the chicxulub crater in mexico with a corresponding moment - magnitude ,the surface excitation amplitude of the slichter mode was barely 0.0067 nm / s 0.67 ngal . nowadays, a similar collision would therefore not excite the slichter modes to a detectable level .the degree - one surface load has also been investigated by .they showed that a gaussian - type zonal degree - one pressure flow of 4.5 hpa applied during 1.5 hour would excite the slichter mode and induce a surface gravity perturbation of 2 ngal which should be detectable by sgs .this determination was based on a purely analytical model of surface pressure . in this paperwe will use hourly surface pressure data provided by two different meteorological centers and show that the surface atmospheric pressure fluctuations can only excite the slichter modes to an amplitude below the limit of detection of current sgs .1.5pt in this section , we consider a spherical earth model , for which the frequencies of the three slichter modes degenerate into a single frequency , and establish a formula for the spectral energy of the amplitude of the mode when it is excited by a surface load .developed in a surface spherical harmonics expansion , a degree - one load contains three terms : where and are the colatitude and longitude , respectively . the green function formalism suited for surface - load problems has been generalized to the visco - elastic case by and has been established for the degree - one slichter mode by . the degree - one radial displacement due to load ( [ load ] ) is given by \nonumber \\ & & \lbrack \int_{-\infty}^{t}e^{i\nu t ' } ( \sigma_{10}(t ' ) \cos\theta + \sigma_{11}^c(t ' ) \sin\theta\cos\phi + { \sigma}_{11}^s(t ' ) \sin\theta \sin\phi ) dt ' \rbrack , \label{radialdisplacement(t)}\end{aligned}\ ] ] and the perturbation of the surface gravity is \nonumber \\ & & \lbrack \int_{-\infty}^{t}e^{i\nu t ' } ( \sigma_{10}(t ' ) \cos\theta + \sigma_{11}^c(t ' ) \sin\theta\cos\phi + { \sigma}_{11}^s(t ' ) \sin\theta \sin\phi ) dt ' \rbrack \nonumber \\ & & [ -\omega^2u(r_s)+\frac{2}{r_s}g_0 u(r_s)+\frac{2}{r_s}p(r_s)].\end{aligned}\ ] ] in the last two equations , and are , respectively , the radial displacement and perturbation of the gravity potential associated to the slichter mode , is the earth radius , and is the complex frequency . as in , we adopt a quality factor and a period h for a prem - like earth s model .the sources of excitation we consider are continuous pressure variations at the surface .a similar problem was treated by and for the atmospheric excitation of normal modes where the sources were considered as stochastic quantities in space and time . as we use a harmonic spherical decomposition of the pressure field ,the correlation in space depends on the harmonic degree , here the degree - one component of wavelength .the correlation in time is performed in the spectral domain . 
as a consequencewe introduce the energy spectrum of the degree - one pressure fluctuations and the energy spectrum of the radial displacement where is the fourier transform of , is the fourier transform of and denotes the complex conjugate .the fourier transform of eq .( [ radialdisplacement(t ) ] ) is \int_{-\infty}^{+\infty } e^{-i\omega t } \lbrack \int_{-\infty}^{t}e^{i\nu_kt ' } \nonumber \\ & & ( \sigma_{10}(t ' ) \cos\theta + \sigma_{11}^c(t ' ) \sin\theta\cos\phi + { \sigma}_{11}^s(t ' ) \sin\theta \sin\phi ) dt ' \rbrack dt \nonumber \\& = & \frac{r_s^2 u(r)}{\omega_0(1+\frac{1}{4q^2 } ) } [ u(r_s)g_0+p(r_s ) ] \frac{\omega_0(1-\frac{1}{4q^2})-\omega+\frac{i}{2q}(\omega-2\omega_0)}{\frac{\omega_0 ^ 2}{4q^2}+(\omega_0-\omega)^2 } \nonumber \\ & & ( { \hat \sigma_{10}}(\omega)\cos\theta + { \hat \sigma_{11}^c}(\omega)\sin\theta \cos\phi + { \hat \sigma_{11}^s}(\omega)\sin\theta \sin\phi ) , \nonumber\end{aligned}\ ] ] and , therefore , we have ^ 2 \nonumber \\ & & \frac{\lbrack \omega_0(1-\frac{1}{4q^2})-\omega \rbrack^2+\frac{(\omega-2\omega_0)^2}{4q^2 } } { \lbrack \frac{\omega_0 ^ 2}{4q^2}+(\omega_0-\omega)^2\rbrack ^2 } s_p(\theta,\phi;\omega ) . \end{aligned}\ ] ] from eq.([exdispl ] ) and ( [ deltag ] ) , we obtain the spectral energy of gravity for the excitation of the slichter mode by a surface fluid layer ^ 2 \nonumber \\ & & \frac{\lbrack \omega_0(1-\frac{1}{4q^2})-\omega \rbrack^2+\frac{(\omega-2\omega_0)^2}{4q^2 } } { \lbrack \frac{\omega_0 ^ 2}{4q^2}+(\omega_0-\omega)^2\rbrack ^2 } s_p(\theta,\phi;\omega).\end{aligned}\ ] ]the operational model of the european centre for medium - range weather forecasts ( ecmwf ) is usually available at 3-hourly temporal resolution , the spatial resolution varying from about 35 km in 2002 to 12 km since 2009 .this is clearly not sufficient to investigate the slichter mode excitation .however , during the period of cont08 measurements campaign ( august , 2008 ) , atmospheric analysis data were provided by the ecmwf also on an hourly basis .cont08 provided 2 weeks of continuous very long baseline interferometry ( vlbi ) observations for the study , among other goals , of daily and sub - daily variations in earth rotation .we take advantage of this higher - than - usual temporal resolution to compute the excitation of the slichter mode by the surface pressure fluctuations . 
to do so, we first extract the degree - one coefficients of the surface pressure during this period , considering both an inverted and a non - inverted barometer response of the oceans to air pressure variations .both hypotheses have the advantage to give simple responses of the oceans to atmospheric forcing , even if at such high frequencies , static responses are known to be inadequate .the use of a dynamic response of the ocean would lead to more accurate results but we would need a forcing ( pressure and winds ) of the oceans at hourly time scales which is not available .the degree-1 surface pressure changes contain three terms : from it , we can estimate the surface mass density by where is the mean surface gravity and compute the energy spectrum as defined in eq.([sp ] ) .the time - variations and fourier amplitude spectra of the harmonic degree - one coefficients , and are plotted in fig.[fig : time_fft_ib ] for the ib and non - ib hypotheses .the power spectral densities ( psds ) computed over the whole period considered here ( august 2008 ) are represented in fig.[fig : mapspsd ] for both oceanic responses .we also consider the ncep ( national centers for environmental prediction ) climate forecast system reanalysis ( cfsr ) model , for which hourly surface pressure is available with a spatial resolution of about 0.3 before 2010 , and 0.2 after .ncep / cfsr and ecmwf models used different assimilation schemes , e.g. 4d variational analysis for ecmwf , and a 3d variational analysis for ncep / cfsr every 6 hours .the temporal continuity of pressure is therefore enforced in the ecmwf model , whereas 6-hourly assimilation steps can sometimes be seen in the ncep / cfsr pressure series , for certain areas and time .the power spectral densities of the degree - one ncep / cfsr surface pressure field computed over august 2008 are represented in fig .[ fig : mapspsdncep ] for both oceanic responses .using equations ( [ deltag ] ) and ( [ exg ] ) we can compute the surface gravity perturbation induced by the slichter mode excited by the degree - one ecmwf and ncep / cfsr surface atmospheric pressure variations during august 2008 .the computation is performed with the eigenfunctions obtained for a spherical , self - gravitating anelastic prem - like earth model as in .we remove the 3 km - thick global ocean from the prem model because the ocean response is already included in the degree - one atmospheric coefficients .the power spectral densities of the surface gravity effect induced by the slichter mode excited by the ecmwf atmospheric data are plotted in figs [ fig : dgib - nib_maps_ecmwf ] for an inverted and a non - inverted barometer response of the oceans .the psd is given in decibels to enable an easy comparison with previous sg noise level studies ( e.g. ) .ncep / cfsr weather solutions give a similar surface excitation amplitude less than -175 db .we also consider the psds of the excitation amplitude at the sg sites djougou ( benin ) and bfo ( black forest observatory , germany ) for both oceanic responses in fig .[ fig : dg_ecmwf_b1dj_ngal ] . according to fig .[ fig : dgib - nib_maps_ecmwf ] the djougou site turns out to be located at a maximum of excitation amplitude in the case of an inverted barometer response of the oceans .bfo is the sg site with the lowest noise level at sub - seismic frequencies .the later noise level is also plotted in fig . [ fig : dg_ecmwf_b1dj_ngal ] . in that figure, we can see that the excitation amplitude at djougou reaches -175 db . 
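the degree-one extraction described above amounts to projecting each gridded pressure field onto the three degree-one basis functions. the sketch below shows this projection by simple quadrature on a regular colatitude-longitude grid; the grid, the synthetic pressure anomaly and the land mask used for the inverted-barometer case are placeholders, and a real application would substitute the hourly ecmwf or ncep/cfsr grids and a proper land-sea mask.

```python
import numpy as np

def degree_one_coeffs(P, theta, phi):
    # Project a gridded field P(theta, phi) (colatitude x longitude) onto the
    # degree-one basis cos(theta), sin(theta)cos(phi), sin(theta)sin(phi).
    # Each basis function has angular norm 4*pi/3, so each coefficient is
    # (3 / 4 pi) times the integral of P * basis over the unit sphere.
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(TH) * dtheta * dphi
    s10 = 3.0 / (4.0 * np.pi) * np.sum(P * np.cos(TH) * dOmega)
    s11c = 3.0 / (4.0 * np.pi) * np.sum(P * np.sin(TH) * np.cos(PH) * dOmega)
    s11s = 3.0 / (4.0 * np.pi) * np.sum(P * np.sin(TH) * np.sin(PH) * dOmega)
    return s10, s11c, s11s

# placeholder grid and synthetic surface-pressure anomaly (Pa); a real run would
# loop over the hourly meteorological grids here
theta = np.linspace(0.5, 179.5, 180) * np.pi / 180.0     # colatitude, cell centres
phi = np.linspace(0.5, 359.5, 360) * np.pi / 180.0
TH, PH = np.meshgrid(theta, phi, indexing="ij")
P = 120.0 * np.cos(TH) - 40.0 * np.sin(TH) * np.cos(PH) + 15.0 * np.random.randn(*TH.shape)

# non-inverted barometer: use the anomaly everywhere
print("non-IB :", degree_one_coeffs(P, theta, phi))

# inverted barometer: one simple implementation of the static response is to
# zero the anomaly over the oceans (more careful treatments redistribute the
# oceanic mean); the mask below is purely illustrative
land = np.abs(np.cos(TH)) > 0.5
print("IB     :", degree_one_coeffs(np.where(land, P, 0.0), theta, phi))
```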
for an undamped harmonic signal of amplitude the psdis defined by where is the number of samples and the sampling interval .consequently , assuming a 15-day time duration with a sampling rate of 1 min , a psd amplitude of -175 db corresponds to a harmonic signal of 0.3 ngal , which is clearly below the 1 ngal detection threshold and below the best sg noise level .a decrease of noise by a factor 3 would be necessary to be able to detect such a sub - nanogal effect . stacking worldwide sgs of low noise levelswould improve the signal - to - noise ratio by a factor . supposing that we had a large number of sg sites with equal noise levels ( same as bfo sg noise level ) , then we would need to stack 10 datasets to improve the snr by 10 db , so as to reach the nanogal level .both ecmwf and ncep atmospheric pressure data lead to a similar , presently undetectable , excitation amplitude for the slichter mode during august 2008 .however , as we have at our disposal 11 years of hourly ncep / cfsr surface pressure field , we can look at the time - variations of the excitation amplitude of the slichter mode over the full period .ncep atmospheric pressure data are assimilated every 6 h introducing an artificial periodic signal . in order to avoid the contamination by this 6 h - oscillation in spectral domain on the slichter mode period of 5.42 h, we need a data length of 2.5 days at least .we consider time - windows of 15 days shifted by 7 days and compute the surface excitation amplitude of the slichter mode at the bfo and djougou superconducting gravimeter sites and at location on earth for which the excitation amplitude is maximum ( fig.[fig : dg_bfo - dj_ncep_11yr ] ) .note that this location of maximum amplitude is also varying in time .we can see that the excitation amplitude is larger than 0.4 ngal at bfo for instance between january and march 2004 and in november 2005 for both oceanic responses .there is also a peak of excitation at djougou in november 2005 and between january and march 2004 but only for an ib - hypothesis .however , during these 11 years between 2000 and 2011 , the maximum surface excitation amplitude stays below 0.7 ngal . as a consequencewe can conclude that the degree - one surface pressure variations are a possible source of excitation for the slichter mode but the induced surface gravity effect is too weak to be detected by current sgs .using a normal mode formalism , we have computed the surface gravity perturbations induced by a continuous excitation of the slichter mode by atmospheric degree - one pressure variations provided by two meteorological centers : ecmwf and ncep / cfsr . 
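the conversion quoted above between a psd level in decibels and an equivalent harmonic amplitude follows from the stated definition, psd = a^2 * n * dt / 4 for an undamped sinusoid of amplitude a sampled n times at interval dt. the short sketch below inverts this relation and reproduces the 0.3 ngal figure for -175 db with 15 days of 1-minute samples; the db reference level, 1 (m/s^2)^2 per hz, is an assumption on our part.

```python
import numpy as np

def harmonic_amplitude_from_psd(psd_db, n_samples, dt):
    # Inverts the definition used above for an undamped harmonic signal of
    # amplitude A estimated from N samples with sampling interval dt:
    #   PSD = A**2 * N * dt / 4      (here in (m/s^2)^2 per Hz)
    psd = 10.0 ** (psd_db / 10.0)
    return np.sqrt(4.0 * psd / (n_samples * dt))

ngal = 1.0e-11                 # 1 nGal = 1e-9 Gal = 1e-11 m/s^2
n = 15 * 24 * 60               # 15 days of 1-minute samples
dt = 60.0                      # sampling interval in seconds
for level_db in (-185.0, -175.0, -165.0):
    a = harmonic_amplitude_from_psd(level_db, n, dt)
    print(f"{level_db:7.1f} dB  ->  {a:.2e} m/s^2  =  {a / ngal:.2f} nGal")
# -175 dB with these parameters gives about 0.31 nGal, i.e. the 0.3 nGal level
# quoted above, well below the ~1 nGal detection threshold discussed in the text.
```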
both inverted and non - inverted barometer responses of the oceans to the atmospheric loadhave been employed .we have shown that the induced surface gravity signal does not reach the nanogal level , which is considered as being the level of detection of present sgs .the surficial degree - one pressure variations are a probable source of excitation of the slichter mode but the weak induced surface amplitude is one additional reason why this translational mode core has never been detected .an instrumental challenge for the future gravimeters would be to further decrease their noise levels .another source of possible excitation that has not been investigated yet is the dynamic response of the oceans .the oceans are known to be a source of continuous excitation of the fundamental seismic modes .so a further study would require to improve the response of the oceans .we would like to thank two anonymous reviewers for their comments on this work .we acknowledge the use of meteorological data of the ecmwf and ncep . courtier , n. , ducarme , b. , goodkind , j. , hinderer , j. , imanishi , y. , seama , n. , sun , h. , merriam , j. , bengert , b. , smylie , d.e .global superconducting gravimeter observations and the search for the translational modes of the inner core , _ phys .earth planet ._ , _ 117 _ , 320 .pagiatakis , s. d. , yin , h. and abd el - gelil , m. ( 2007 ) .least - squares self - coherency analysis of superconducting gravimeter records in search for the slichter triplet , _ phys .earth planet ._ , _ 160 _ , 108 - 123 .rosat , s. , hinderer , j. , crossley , d.j . , rivera , l. ( 2003 ) .the search for the slichter mode : comparison of noise levels of superconducting gravimeters and investigation of a stacking method .earth planet ._ , _ 140 _ ( 13 ) , 183 - 202 .rosat , s. , rogister , y. , crossley , d. et hinderer , j. ( 2006 ) . a search for the slichter triplet with superconducting gravimeters : impact of the density jump at the inner core boundary , _ j. of geodyn ._ , _ 41 _ , 296 - 306 .rosat , s. ( 2007 ) .optimal seismic source mechanisms to excite the slichter mode .int . assoc . of geod .symposia , dynamic planet , cairns ( australia ) , _ vol .130 _ , 571 - 577 , springer berlin heidelberg new york .rosat , s. , sailhac , p. and gegout , p. ( 2007 ) . a wavelet - based detection and characterization of damped transient waves occurring in geophysical time - series : theory and application to the search for the translational oscillations of the inner core , _ geophys . j. int ._ , _ 171 _ , 55 - 70 .tkalcic , h. , kennett , b. l. n. and cormier , v. f. ( 2009 ) . on the inner - outer core density contrast from pkikp / pcp amplitude ratios and uncertainties caused by seismic noise , _ geophys ._ , _ 179 _ , 425 - 443 . . the best sg noise level and the levels corresponding to the 1 ngal and 0.3 ngal signals are indicated.,title="fig:",width=377 ] . the best sg noise level and the levels corresponding to the 1 ngal and 0.3 ngal signals are indicated.,title="fig:",width=377 ] | using hourly atmospheric surface pressure field from ecmwf ( european centre for medium - range weather forecasts ) and from ncep ( national centers for environmental prediction ) climate forecast system reanalysis ( cfsr ) models , we show that atmospheric pressure fluctuations excite the translational oscillation of the inner core , the so - called slichter mode , to the sub - nanogal level at the earth surface . 
the computation is performed using a normal - mode formalism for a spherical , self - gravitating anelastic prem - like earth model . we determine the statistical response in the form of power spectral densities of the degree - one spherical harmonic components of the observed pressure field . both hypotheses of inverted and non - inverted barometer for the ocean response to pressure forcing are considered . based on previously computed noise levels , we show that the surface excitation amplitude is below the limit of detection of the superconducting gravimeters , making the slichter mode detection a challenging instrumental task for the near future . slichter mode ; ecmwf atmospheric model ; ncep / cfsr atmospheric model ; superconducting gravimeters ; surface gravity ; normal mode |
the recent advance of microwave wireless power transfer ( wpt ) technology enables to build wireless powered communication networks ( wpcns ) , where wireless devices ( wds ) are powered over the air by dedicated wireless power transmitters for communications .compared to conventional battery - powered networks , wpcn eliminates the need of manual battery replacement / recharging , which can effectively reduce the operational cost and enhance communication performance . besides , wpcn has full control over its power transfer , where the transmit power , waveforms , and occupied time / frequency dimensions , etc ., are all tunable for providing stable energy supply under different physical conditions and service requirements .this is in vivid contrast to _ energy harvesting _ ( eh ) based approaches , where wds opportunistically harness renewable energy in environment not dedicated to power the wds , e.g. , solar power and ambient radio frequency ( rf ) transmission . because the availability and strength of renewable energy sources are mostly random and time varying , stable and on - demand energy supply to wds is often not achievable with eh - based methods .these evident advantages of wpt over conventional energy supply methods make wpcn a promising new paradigm to the design and implementation of future wireless communication systems with stable and self - sustainable power supplies .current wpt technology can effectively transfer tens of microwatts rf power to wds from a distance of more than meters , while there is still significant room for improving the magnitude and range with future advancement in wpt .this makes wpcn potentially suitable for a variety of low - power applications with device operating power up to several milliwatts , such as wireless sensor networks ( wsns ) and radio frequency identification ( rfid ) networks .currently , commercial wpt - enabled sensors and rfid tags are already in the market . in the future, the extensive applications of wpt - enabled devices may fundamentally reshape the landscape of related industries , such as internet - of - things ( iot ) and machine - to - machine ( m2 m ) communications . as illustrated in fig . [71 ] , without the need to replace energy - depleted sensors in conventional wsn , a wpt - enabled wsn can achieve uninterrupted operation with massive number of sensors powered by fixed energy transmitters and/or a vehicle moving in a planned route used for both wireless charging and data collection . besides , thanks to the more ample power supply from wpt , rfid devices can now expect much longer operating lifetime and afford to transmit actively at a much larger data rate and from a longer distance than conventional backscatter - based rfid communications . despite the potential performance improvement brought by wpcn , building efficient wpcns is a challenging problem in practice . on one hand , the received energy level can be very low at wds located far away from energy transmitters due to significant attenuation of microwave power over distance .this energy near - far effect can cause severe performance unfairness among wds in different locations . on the other hand ,joint design of wireless energy and information transmissions is required in wpcn .first , wireless energy and information transmissions are often related , e.g. 
, a wd needs to harvest enough energy by means of wpt before transmitting data .second , energy transfer may share the common spectrum with communication channel , which can cause co - channel interference to concurrent information transmission . due to the above reasons , novel physical - layer transmission techniques as well as networking protocolsneed to be devised to optimize the performance of wpcns . to tackle the above technical challenges, we provide an overview in this article on state - of - the - art techniques to build an efficient wpcn .specifically , we first introduce the basic components and network models of wpcn .then , we present the key performance enhancing techniques for wpcn based on the introduced system models . atlast , we discuss the extensions and future research directions for wpcn and conclude the article .we present in fig . [ 72 ] some basic building blocks of wpcn . in a wpcn , energy nodes ( ens ) transmit wireless energy to wds in the downlink , and the wds use the harvested energy to transmit their own data to information access points ( aps ) in the uplink . as shown in fig .[ 72](a ) , the ens and aps are in general _separately _ located , but can also be grouped into pairs and each pair of en and ap are _ co - located _ and integrated as a hybrid access point ( hap ) as in fig . [ 72](b ) .the integrated hap makes the coordination of information and energy transmissions in the network easier as compared to separated en and ap , and also helps save the production and operation cost by sharing their communication and signal processing modules .however , it also induces a practical design challenge named _ doubly - near - far _ " problem , where user that is far away from its associated hap ( e.g. , wd in fig . [ 72](b ) ) harvests lower wireless energy in the downlink but consumes more to transmit data in the uplink than that of a user nearer to the hap ( wd ) . as a result , unfair user performance may occur since a far user s throughput can be much smaller than a nearby user .this user unfairness problem can be alleviated in a wpcn with separated ens and aps . as shown in fig .[ 72](a ) , wd harvests less energy than wd because of its larger distance to en , but also consumes less on data transmission due to its smaller distance to ap .furthermore , the circuit structures for energy and information transmissions are rather different . for instance, a typical information receiver can operate with a sensitivity of dbm receive signal power , while an energy receiver needs up to dbm signal power . to maximize their respective operating efficiency , energy andinformation transceivers normally require different antenna and rf systems . therefore , as shown in fig . [ 72](c ) and ( d ) , a practical wpt - enabled wd has two antenna systems , one for harvesting energy and the other for transmitting information . similarly , an hap with co - located energy transmitter and information receiver also needs two sets of antenna systems .energy and information transmissions can be performed either in an _ out - band _ or _ in - band _ manner . as shown in fig . 
[72](c ) , the out - band approach transfers information and energy on different frequency bands to avoid interference .however , energy transmission in general needs to use pseudo - random energy signal , which occupies non - negligible bandwidth , to satisfy the equivalent isotropically radiated power ( eirp ) requirement on its operating frequency band imposed by radio spectrum regulators such as fcc ( federal communications commission ) .to enhance the spectrum efficiency , in - band approach allows the information and energy to be transmitted over the same band . in this case , however , energy transmitters may cause co - channel interference at information receivers , especially when an energy transmitter and an information receiver co - locate at an hap that may receive strong self - interference .a practical solution is to separate energy and information transmissions in different time slots , which , however , reduces the time for information transmission and thus the system throughput . a point to noticeis that a wd can in fact operate in an information / energy full - duplex manner , which is able to transmit information and harvest energy to / from the ap / en ( hap ) in the same band .for instance , when en and ap in fig .[ 72](a ) are well separated , i.e. , energy transmission does not cause strong interference to information decoding , it is feasible for wd to simultaneously receive energy from en and transmit information to ap .in addition , as shown in fig .2(c ) , the information / energy full - duplex operation enables an additional benefit known as _ self - energy recycling _, where a wd can harvest additional rf energy from its own transmitted information signal .evidently , a full - duplex wd can benefit from high loop - link channel gain from its information transmitting antenna to energy receiving antenna .therefore , the receiving antenna should be placed as close to the transmitting antenna as possible , yet without disturbing its radiation pattern . as shown in fig .[ 72](d ) , another promising solution for in - band approach is to use _ full - duplex _hap , which is able to transmit energy and receive information to / from wds simultaneously in the same frequency band .notice that the full - duplex operation of hap is different from that of wd in the sense that the energy transmission can cause severe self - interference to the information decoding . in this case ,a low loop - link channel gain is practically desired to mitigate the harmful self - interference , e.g. , through directional antenna design or large antenna separation .a full - duplex hap can also perform _ self - interference cancelation _ ( sic ) to further reduce the interference power , using analog / digital sic circuitry and hybrid signal processing approaches , etc .although some recent studies suggest that perfect self - interference cancelation in wireless channel is difficult , full - duplex hap has the potential to provide folded spectrum efficiency improvement than conventional half - duplex energy / information transmissions . in fig . [ 73 ] , we present a numerical example comparing the performance of different operating models in wpcn . 
for the simplicity of illustration , we consider a simple wpcn consisting of only one wd , one en and one information ap .specifically , we consider the following six models : * half - duplex information / energy transfer using separated en and information ap or an integrated hap ; * full - duplex information / energy transfer using an hap that can achieve either or db sic .that is , the interference power received by the receiving antenna is further attenuated by or db by analog and/or digital sic techniques before information decoding ; * full - duplex information / energy transfer using separated en and information ap , where the ap can achieve either or db interference cancellation ( ic ) in the received energy signal from the en ( assuming known energy signal waveform at the ap ) . for half - duplex information/ energy transfer , information and energy transmissions can either operate on different frequency bands ( out - band ) or different time slots ( in - band ) , with the same achievable throughput performance . here , we consider only the in - band method to avoid repetition . as shown at the top of fig .[ 73 ] , in the case of separated en and ap , the distance between the en and ap is meters , and the wd locates on the line connecting them . for full - duplex operations , we assume a db loop - link power from the wd s information transmit antenna to its own energy harvesting antenna , such that it can achieve self - energy recycling . on the other hand , the loop - link power from the hap s energy transmitting antenna to its information receiving antenna is assumed to be db . in addition , the transmit power of the en is watt , the carrier frequency is mhz with mhz operating bandwidth , the wireless channels are assumed to follow a free space path loss model , and the receiver noise power spectrum density is dbm / hz .we plot the achievable data rates of different models in bits / sec / hz ( bps / hz ) as the distance between the wd and the en ( or hap ) varies from to meters . for fair comparison between half - duplex and full - duplex operations ,the time allocation ratio between energy and information transmissions is optimized for half - duplex scheme at each of the wd s locations .we can see that the data rates of using an hap quickly degrade as the separation between the hap and wd increases due to the doubly - near - far effect in signal attenuation .using separated en and ap , on the other hand , can achieve a more stable performance under distance variation because both the energy harvested and consumed decrease as the distance between the en and wd increases .when full - duplex method is used , we can see that the data rate of full - duplex hap with db sic strictly outperforms that with half - duplex hap . however , the full - duplex hap with db sic produces close - to - zero data rate even when the distance between wd and hap is moderate , because in this case the residual self - interference overwhelms the received information signal . 
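Before turning to the full-duplex results with separated EN and AP, the sketch below makes the half-duplex baseline of this comparison concrete. It is not the authors' simulation code: it implements the harvest-then-transmit protocol under the free-space path-loss model mentioned in the text, optimizing the energy/information time split by grid search. All numerical values (1 W transmit power, 915 MHz carrier, 1 MHz bandwidth, -160 dBm/Hz noise, 10 m EN-AP separation, 50% RF-to-DC efficiency) are assumptions chosen only for illustration, since the specific parameters of fig. [73] are not reproduced here.

```python
# Illustrative sketch (not the authors' simulation): uplink rate of a half-duplex wireless
# powered link under free-space path loss, with the energy/information time split optimized
# by grid search. All numeric parameter values are assumptions for illustration.
import numpy as np

C = 3e8                        # speed of light (m/s)
F = 915e6                      # carrier frequency (Hz), assumed
BW = 1e6                       # bandwidth (Hz), assumed
P_EN = 1.0                     # EN transmit power (W), assumed
ETA = 0.5                      # RF-to-DC conversion efficiency, assumed
N0 = 10 ** (-160 / 10) / 1e3   # noise PSD (W/Hz), assumed -160 dBm/Hz

def fs_gain(d):
    """Free-space channel power gain at distance d (m)."""
    return (C / (4 * np.pi * F * max(d, 0.1))) ** 2

def rate_half_duplex(d_en, d_ap):
    """Max bits/s/Hz over the split tau: harvest for tau, transmit for 1 - tau."""
    best = 0.0
    for tau in np.linspace(0.01, 0.99, 99):
        e_harv = tau * ETA * P_EN * fs_gain(d_en)   # energy harvested (frame length 1 s)
        p_tx = e_harv / (1 - tau)                   # WD uplink transmit power
        snr = p_tx * fs_gain(d_ap) / (N0 * BW)
        best = max(best, (1 - tau) * np.log2(1 + snr))
    return best

D = 10.0  # EN-AP separation in the separated deployment (m), assumed
for d in [1, 3, 5, 7, 9]:
    r_hap = rate_half_duplex(d, d)        # co-located HAP: doubly near-far
    r_sep = rate_half_duplex(d, D - d)    # separated EN and AP
    print(f"d = {d} m: HAP {r_hap:.2f} bps/Hz, separated {r_sep:.2f} bps/Hz")
```

Running the sketch reproduces the qualitative trend described above: the co-located-HAP rate decays quickly with distance because both the downlink and uplink attenuate, while the separated deployment stays comparatively flat.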
for full - duplex operation with separated en and ap ,the ic capability of the ap is also a critical factor that determines the communication performance .specifically , the full - duplex operation achieves strictly higher data rate than half - duplex scheme when the ap is able to cancel db interference , while achieving lower data rate when the ic capability of the ap is decreased to db .by applying the above basic operating models , we are able to build more complex wpcns with larger number of nodes for various different applications . in practice , the performance of wpcn is fundamentally constrained by the low efficiency and short range of wpt , and also the limited resources for both energy and information transmissions . in this section , we introduce some useful techniques to enhance the performance of wpcn . in particular , we divide our discussions into four parts : energy beamforming , joint communication and energy scheduling , wireless powered cooperative communication , and multi - node cooperation , as illustrated in fig . [ 74 ] .the introduced methods , as well as their combined use , can effectively extend the operating range and increase the capacity of wpcn , making wpcn a viable solution to more extensive applications .for better exposition , we assume information / energy half - duplex operation in this section , while leaving the discussions on the extensions to full - duplex operation in the next section . to achieve efficient energy transfer , wpt generally requires highly directional transmission by using high - gain antennas to focus the energy in narrow energy beams towards the energy receivers ( ers ) . for wpt in fixed line - of - sight ( los ) links , the conventional large aperture antennas , such as dish or horn antennas , could be employed ; whereas for mobile applications with dynamic channel environment , electronically steerable antenna array enabled _ energy beamforming _technique is more suitable to flexibly and efficiently direct wireless energy to ers by adapting to the propagation environment . with energy beamforming , the energy signals at different antennasare carefully weighted to achieve constructive superposition at intended ers . to maximize the received power level, the energy transmitter ( et ) in general requires accurate knowledge of the channel state information ( csi ) , including both magnitude and phase shift from each of the transmit antennas to each receive antenna of different ers . as shown in fig . 
[74](a ) , one method to obtain csi at the et is via forward - link ( from et to er ) training and reverse - link ( from er to et ) feedback .however , different from the conventional channel training design in wireless communication systems , where the major concern is the bandwidth / time used for transmitting training signals , the channel training design for wpt is constrained by the limited energy available at the er to perform channel estimation and send csi feedbacks .intuitively , more accurate csi knowledge can be obtained by the et if the er uses more energy to perform channel estimation and/or feedback .however , the energy cost to the er may eventually offset the energy gain achieved from a more refined energy beamforming used by the et with more accurate channel knowledge .in particular , the energy cost can be prohibitively high for et with a large antenna array , as the channel estimation / feedback overhead increases proportionally to the number of antennas at the et .instead , reverse - link training method is more suitable for estimating large - array csi .specifically , training signals are sent in the reverse direction by the er so that the csi can be directly estimated at the et without any channel estimation or feedback by the er . in this case , the training overhead is independent of the number of antennas at the et .however , the er still needs to carefully design its training strategy , such as the transmit power , duration , and frequency bands , to maximize the _ net _harvested energy , i.e. , the energy harvested at the er less that consumed on sending training signals . besides the limited energy constraint at theer , training design in wpt systems may also be constrained by the limited hardware processing capability of er . for instance , some low - cost wireless sensors may not have adequate baseband processing units to perform conventional csi estimation and/or feedback . to tackle this problem ,new limited feedback methods need to be developed .for instance , one - bit information feedback signal can be sent from the er to indicate the increase or decrease of the received power level during each training interval as compared to the previous one , based on which the et can iteratively update its channel matrix estimation from the feedback using a cutting - plane algorithm .it is proved that this simple channel estimation method can converge to the exact channel matrix after finite number of iterations .communication and energy transfer are often related in a wpcn .on one hand , downlink energy transfer strategy is based on the energy demanded by the wds to satisfy their uplink communication quality requirements . on the other hand ,uplink information transmission is causally constrained by the amount of energy available at each wd after harvesting energy by means of wpt in the downlink .therefore , information and energy transmissions should be jointly scheduled to avoid co - channel interference and optimize the overall system performance . 
as shown in fig .[ 74](b ) , time - frequency resource blocks in a wpcn can be allocated dynamically either to the hap for energy transfer in the downlink or to the wds for information transmission in the uplink , based on a joint consideration of the wireless channel conditions , battery states , communication demands and performance fairness among the wds .for instance , to tackle the doubly - near - far problem , user fairness can be improved by allocating more resource blocks to the far user wd and less to near user wd in fig .it could also occur that no transmission is scheduled at some resource blocks because of the poor wireless channel conditions due to fading .similar dynamic resource allocation method can also be extended to a wpcn with separated en and ap , where the wireless channels for energy and information transmissions are independent . in practice ,real - time information / energy scheduling is a challenging problem because of the time - varying wireless channels and the causal relationship between current wpt and future information transmissions .communication and energy scheduling can also be performed in the spatial domain when en and ap are equipped with multiple antennas .specifically , energy beamforming technique can be used by an en to steer stronger energy beams towards certain users to prioritize their energy demands . at the same time , _ spatial division multiple access _ ( sdma ) along with multi - user detection technique can be used by the ap to allow multiple users to transmit on the same time - frequency resource block . in this case, uplink transmit power control can be applied to balance the throughput performance among all the users . in general , sdma is a more spectrally efficient method than time / frequency - division based multiple access methods . besides , energy beamforming and sdma can be combined with dynamic time - frequency resource allocation to further enhance the system performance in wpcn .in addition to the above techniques , another promising approach is wireless powered cooperative communication , where users are allowed to share their resources , e.g. , energy and time , and communicate to the ap collaboratively . as shown in fig . [ 74](c ) with an hap serving two users , the near user wd with ample energy supply can use part of its energy and transmit time to help relay the data transmission of far user wd . specifically, the relay protocol can be designed to consist of three time slots . in the first time slot, the hap performs wpt and the users harvest energy ; in the second time slot , wd transmits its data to wd for decoding ; in the third time slot , wd encodes wd s message together with its own message and sends to the hap . evidently , wd can benefit from the cooperation due to the shorter communication range compared to the direct communication with the hapmeanwhile , although wd consumes energy and time on helping wd , its data rate loss due to cooperation can also be made up by an overall longer data transmission time .this is because the gain from user cooperation allows the hap to allocate more time for data transmission instead of wpt .besides communication cooperation , users can also perform peer - to - peer _ energy cooperation _, e.g. 
, wd directly transmits its excessive energy to wd .this potential win - win situation makes user cooperation an attractive and low - cost method to improve the overall efficiency of wpcn .the application of wireless powered cooperative communication to a wpcn with separated en and ap is also illustrated in fig . [ 74](c ) .similarly , after harvesting the energy broadcasted by the en , the users can perform communication and/or energy cooperation to improve the performance of each other .in particular , some spectrally efficient cooperative communication methods , such as distributed space - time coding , can be applied when the communication links between the two users are sufficiently reliable . as illustrated in fig . [ 74](d ) , besides the device - to - device ( d2d ) cooperation between wd and wd , the multiple ens and information aps can also cooperate for more efficient energy and information transmissions . specifically , the ens and information aps ( including haps ) are interconnected by wired / wireless backhaul links for exchanging user data and control signals that enable them to operate collaboratively in serving the wds . in the downlink energy transfer , the collaborating ens form a virtual mimo ( multiple - input multiple - output ) system , which is able to perform _ distributed energy beamforming _ to maximize the receive energy level at target wds , e.g. , en , en and hap cooperatively transfer energy to wd . for the uplink information transmission, the collaborating aps essentially form a _ coordinated multi - point _ ( comp ) system , which is able to jointly decode user messages from the signals received across multiple aps , e.g. , ap , ap and hap jointly decode the message of wd .notice that the downlink energy transfer and uplink information transmission can be performed simultaneously on the same frequency band without causing interference , e.g. , concurrent energy transfer to wd and data transmission of wd .this is because the aps can cancel the interference from the energy transfer using the predetermined energy signals informed by the ens .this fully centralized processing scheme can provide significant beamforming gain in the downlink energy transfer and spatial diversity / multiplexing gain in the uplink information transmission. however , its implementation can be very costly in a large - size wpcn due to the practical requirements , such as high computational complexity , large control signaling overhead , heavy backhaul traffic , and accurate multi - node synchronization .in practice , it may be preferable to use a hybrid scheme that integrates both centralized and distributed processing methods to balance between performance and implementation cost .an important problem that directly relates to the long - term performance of a multi - node wpcn , e.g. , average network throughput , is the placement of ens and aps .when the wds are fixed in location , e.g. , a wsn with sensor ( wd ) locations predetermined by the sensed objects , the problem becomes determining the optimal number and locations of ens and aps to satisfy a certain energy harvesting and communication performance requirements .the node placement problem in wpcn is different from that in conventional wireless communication networks , where only information aps are deployed .intuitively , the high energy consumption of a wd that is far from any ap can now be replenished by means of wpt via deploying an en close to the wd . 
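As a toy illustration of the placement problem just introduced, the sketch below exhaustively searches the position of a single EN on a line segment so as to maximize the minimum power harvested by a set of fixed WDs. The path-loss exponent, reference gain, transmit power, and WD locations are arbitrary illustrative values, and a real deployment would jointly optimize EN and AP positions as discussed next.

```python
# Toy single-EN placement: maximize the worst-off WD's harvested power by exhaustive search.
# All parameter values are assumptions chosen for illustration.
import numpy as np

wd_pos = np.array([1.0, 4.0, 9.0])   # fixed WD locations on a 10 m segment (assumed)
P_EN, ALPHA = 1.0, 2.0               # EN transmit power (W) and path-loss exponent (assumed)
G0 = 1e-3                            # channel gain at 1 m reference distance (assumed)

def min_harvested(x):
    d = np.maximum(np.abs(wd_pos - x), 0.5)      # clamp to avoid the near-field singularity
    return np.min(P_EN * G0 * d ** (-ALPHA))     # worst-off WD's received power

candidates = np.linspace(0.0, 10.0, 1001)
best = max(candidates, key=min_harvested)
print(f"best EN position ~ {best:.2f} m, "
      f"worst-case received power {min_harvested(best) * 1e3:.4f} mW")
```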
in general , the placements of ens and aps should be jointly optimized to enhance the performance of a wpcn , such as throughput , device operating lifetime , and deployment cost .besides the discussions in the previous sections , wpcn also entails rich research problems of important applications yet to be studied . in this section ,we highlight several interesting research topics that we deem particularly worth investigating .energy beamforming is a key enabling technique in wpcn .as we have discussed in the previous section , efficient energy beamforming design requires accurate csi at the energy transmitter ( csit ) , which is often not available due to limited energy and/or simplified hardware at ers .besides the introduced reverse - link training and limited feedback methods that reduce the cost on csit estimation , energy beamforming design based on imperfect or statistical csit knowledge is also a practical yet challenging problem .the problem becomes even more challenging when we take into consideration the non - linear energy conversion efficiency of a practical energy receiver , where the conversion efficiency in general increases with the received rf signal power and degrades if the received power is above a certain threshold .meanwhile , the future advance in full - duplex technology is expected to provide folded performance improvement over the conventional half - duplex information / energy transfer method .for instance , a full - duplex hap is able to transfer energy to and at the same time receive the data transmissions from the wds on the same frequency band . as a result , the joint communication /energy scheduling design in wpcn needs to be revised , without the need to allocate orthogonal time / frequency for information and energy transmissions as in half - duplex based systems . for wirelesspowered cooperative communication , a full - duplex wd can transmit its own data to the information ap ( or full - duplex hap ) while receiving concurrent energy transfer from the en ( or full - duplex hap ) and data transmission from its collaborating wd , given that the interference from both information and energy signals can be effectively canceled at the information receiver of the wd .in addition , the two - user wireless powered cooperative communication model can be generalized to multi - user wpcn with a cluster - based structure , where a near user to the hap can act as a relay for a cluster of users .this cluster - based structure can be very useful in a large - size wpcn with many poor direct wd - to - hap links . in this case, the cluster - head nodes will be responsible for coordinating intra - cluster communications , relaying data traffic to / from the hap , and exchanging the control signals with the hap , etc . as a result, some cluster - head nodes may quickly deplete their batteries although they may actually harvest more energy than the other non - cluster - head nodes .possible solutions include using energy beamforming to steer strong energy beams to prioritize the energy supply to cluster - head nodes , or using a hybrid structure that allows opportunistic direct wd - to - hap communication of non - cluster - head nodes to reduce the energy consumption of the cluster - head nodes .energy harvesting methods can be combined with wpt to build a green and self - sustainable wpcn that requires less energy to be drawn from fixed power sources by the ens . as illustrated in fig . [75 ] , energy harvesting techniques can be applied at both the ens and wds . 
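Returning briefly to the non-linear conversion efficiency raised earlier in this section, the sketch below contrasts a linear harvesting model with one possible saturating input-output curve. The logistic form and every parameter in it are assumptions used purely to show the qualitative behaviour described in the text (efficiency grows with input power and then saturates); they are not taken from the article.

```python
# Hedged illustration of a saturating (non-linear) RF energy-harvesting curve versus the
# linear model. The logistic form and its parameters are assumptions for illustration only.
import numpy as np

def harvested_linear(p_in, eta=0.5):
    return eta * p_in

def harvested_nonlinear(p_in, p_sat=5e-3, a=1500.0, b=0.0022):
    # logistic saturation: output approaches p_sat for large input power (all units W)
    logistic = 1.0 / (1.0 + np.exp(-a * (p_in - b)))
    omega = 1.0 / (1.0 + np.exp(a * b))            # removes the zero-input offset
    return p_sat * (logistic - omega) / (1.0 - omega)

for p_in_dbm in [-20, -10, 0, 5, 10]:
    p_in = 10 ** (p_in_dbm / 10) / 1e3
    print(p_in_dbm, "dBm ->",
          f"linear {harvested_linear(p_in) * 1e3:.3f} mW,",
          f"non-linear {harvested_nonlinear(p_in) * 1e3:.3f} mW")
```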
specifically , a wd can opportunistically harvest renewable energy in the environment , such as solar power and ambient rf radiation , and store the energy in a rechargeable battery .on one hand , when the intensity of renewable energy is strong at most wds , ens can turn off energy transfer to avoid waste of energy due to limited battery capacity . on the other hand , conventional wpt can be used to power the wds when effective energy harvesting is not feasible at most wds . in between ,a hybrid power method using both energy harvesting and wpt can be adopted , where ens can perform transmit power control or use energy beamforming to concentrate transmit power to the users who harvest insufficient renewable energy . in a green wpcn with hybrid power sources, the key challenge is to achieve timely switching between different operating modes and design efficient energy transmit strategies , to minimize the energy drawn from fixed power sources while satisfying the given communication performance requirements . in general , the optimal design requires the joint consideration of a number of factors , such as the current and predicted renewable energy intensity , battery state information , and wireless channel conditions , which is still open to future investigation . in practice ,a wpcn is likely to coexist with other existing communication networks , while they can cause harmful co - channel interference to each other when operating simultaneously on the same frequency band . as shown in fig .[ 76 ] , the wpcn can cause interference to the information decoding at wd in an existing communication network . at the same time , the transmission of the ap in the communication network can also cause interference to the information decoding at the hap in the wpcn .notice that , although transmission in the existing communication network produces harmful interference to the hap s information decoding , it also provides additional energy to harvest for the users ( wd and wd ) in the wpcn . given limited operating spectrum , a wpcn should be made cognitive to efficiently share the common frequency band with existing communication networks .in particular , a cognitive wpcn can be either cooperative or noncooperative with the existing communication networks .on one hand , a cooperative wpcn protects the communication of the existing communication networks .similar to the primary / secondary network setup in conventional cognitive radio networks , the wpcn ( secondary network ) designs its transmit strategy to optimize its own performance given that its transmission will not severely degrade the communication performance of the existing communication networks ( primary network ) . on the other hand ,a noncooperative wpcn designs its transmission strategy to optimize its own system performance , while only with a secondary consideration to minimize its impact to the existing communication networks . in practice, some incentive schemes that can promote mutual cooperations may be a promising solution to the coexisting problem between a cognitive wpcn and conventional communication networks .in this article , we have provided an overview on the basic models of wpcn and the corresponding performance - enhancing techniques to build efficient wpcns . 
compared to the battery - powered and environment energy harvesting based communications ,wpcn significantly improves the throughput and reliability of the network .although many techniques introduced for wpcn appear to be similar to those in conventional wireless communication networks , the additional dimension of energy transfer requires more sophisticated system design , and yet also brings in valuable opportunities to solve the fundamental energy - scarcity problem for wireless communications .we foresee that wpcn will be a necessary and important building block for future wireless communication systems to achieve energy self - sustainable device operations .i. krikidis , s. timotheou , s. nikolaou , g. zheng , d. w. k. ng , and r. schober , simultaneous wireless information and power transfer in modern communication systems , " _ ieee commun . mag ._ , vol .52 , no . 11 ,2014 , pp .104 - 110 .a. sabharwal , p. schniter , d. guo , d. w. bliss , s. rangarajan , r .wichman , `` in - band full - duplex wireless : challenges and opportunities , '' _ ieee j. sel .areas commun ._ , vol .32 , no . 9 , sep . 2014 , pp .1637 - 1652 .s. bi and r. zhang , placement optimization of energy and information access points in wireless powered communication networks , " to appear in _ ieee trans .wireless commun ._ , available online at http://arxiv:1505.06530 s. lee and r. zhang , cognitive wireless powered network : spectrum sharing models and throughput maximization , " to appear in _ ieee trans ._ , available online at http://arxiv:1506.05925 | wireless powered communication network ( wpcn ) is a new networking paradigm where the battery of wireless communication devices can be remotely replenished by means of microwave wireless power transfer ( wpt ) technology . wpcn eliminates the need of frequent manual battery replacement / recharging , and thus significantly improves the performance over conventional battery - powered communication networks in many aspects , such as higher throughput , longer device lifetime , and lower network operating cost . however , the design and future application of wpcn is essentially challenged by the low wpt efficiency over long distance and the complex nature of joint wireless information and power transfer within the same network . in this article , we provide an overview of the key networking structures and performance enhancing techniques to build an efficient wpcn . besides , we point out new and challenging future research directions for wpcn . |
the specification defines several services to forward events generated on _ update _ regions to a set of _ subscription _ regions .for example , consider a simulation of vehicles moving on a two - dimensional terrain .each vehicle may be interested in events happening inside its area of interest ( e.g. , its field of view ) that might be approximated with a rectangular region centered at the vehicle position .this kind of problem also arises in the context of massively multiplayer online games , where the game engine must send updates only to players that might be affected by game events , in order to reduce computation cost and network traffic . in this paperwe assume that a region corresponds to a single _ extent _ in terminology ) , that is , a -dimensional rectangle ( -rectangle ) in a -dimensional routing space .spatial data structures that can solve the region intersection problem have been developed over the years ; examples include the - tree and r - tree . however , it turns out that simpler , less efficient solutions are actually preferred in practice and widely deployed in implementations .the reason is that efficient spatial data structures tend to be complex to implement , and therefore their theoretical performance is affected by high constant factors .the increasingly large size of computer simulations employing techniques is posing a challenge to the existing solutions .as the number of regions increases , so does the execution time of the service . given the current trend in microprocessor design where a single cpu contains multiple independent execution units , significant improvements could be achieved if the existing matching algorithms were capable of taking advantage of the computational power provided by multi - core processors .there are two opportunities for parallelizing algorithms .the first is based on the observation that the problem of identifying whether two -rectangles intersect can be reduced to independent intersection problems among one - dimensional segments ( details will be given in section [ sec : ddm - algorithms ] ) .therefore , given an algorithm that can identify intersections among two sets of segments , we can execute instances in parallel , each computing the intersections among the projections of the extents along each dimension .the extent intersections can be easily computed from the segments overlap information .the idea above can be regarded as the `` low hanging fruit '' which is easy to get , but does not solve the problem in the long run .in fact , the number of cores in modern processors is often larger than the number of dimensions of most routing spaces ; this gap is likely to increase ( e.g. , the tilera tile - gx8072 processor offers 72 general - purpose cores on the same chip , connected through an on - chip mesh network ) . here comes the second parallelization opportunity : distribute the regions to the available cores so that each core can work on a smaller problem .this is quite difficult to achieve on the existing algorithms , since they are either inefficient ( and therefore there is little incentive in splitting the workload ) , or inherently sequential ( and therefore there is no easy way to achieve parallelism over the set of extents ) . 
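The per-dimension reduction mentioned above (and revisited in detail later) is simple enough to state in a few lines. The sketch below shows the idea: two d-dimensional extents intersect exactly when their projections overlap in every dimension, so any one-dimensional matching routine can be run independently, and in parallel, per dimension and the per-dimension results combined. The example extents are arbitrary.

```python
# Minimal sketch of the dimension-wise reduction: d-rectangles intersect iff their
# projections overlap on every dimension.
def segments_overlap(a, b):
    """Closed 1-D intervals a = (lo, hi), b = (lo, hi)."""
    return a[0] <= b[1] and b[0] <= a[1]

def extents_overlap(s, u):
    """d-rectangles given as lists of per-dimension (lo, hi) pairs."""
    return all(segments_overlap(si, ui) for si, ui in zip(s, u))

# 2-D example in a [0, 1] x [0, 1] routing space
S1 = [(0.1, 0.4), (0.2, 0.6)]
U1 = [(0.3, 0.7), (0.5, 0.9)]
U2 = [(0.3, 0.7), (0.7, 0.9)]
print(extents_overlap(S1, U1))  # True: projections overlap on both dimensions
print(extents_overlap(S1, U2))  # False: dimension-2 projections are disjoint
```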
in this paperwe describe the algorithm for solving the one - dimensional segment intersection problem .the algorithm uses a simple implementation of the interval tree data structure based on an augmented balanced search tree .experimental performance measures indicate that the sequential version of is competitive in the sequential case with the best algorithm used for , namely sort - based matching .we also observed good scalability of the parallel implementation of on shared - memory architectures .an important feature of is that it can be used to efficiently update overlap information in a dynamic setting , that is , in case extents can be moved or resized dynamically .this paper is organized as follows . in section [ sec : related - work ] we briefly review the state of the art and compare with existing solutions to the matching problem . in section [ sec : ddm - algorithms ]we describe three commonly used algorithms for : brute force , grid - based and sort - based matching . in section[ sec : parallel - ddm ] we describe and analyze its computational cost . in section [ sec : experimental - evaluation ]we experimentally evaluate the performance of the sequential version of compared with brute force and sort - based matching ; additionally , we study the scalability of a parallel implementation of on a multicore processor . finally , conclusions and future works will be discussed in section [ sec : conclusions ] .matching can be considered as an instance of the more general problem of identifying intersecting pairs of ( hyper-)rectangles in a multidimensional metric space . well known space - partitioning data structures such as - trees and r - trees can be used to efficiently store volumetric objects and identify intersections with a query object . however , spatial data structures are quite complex to implement and , although asymptotically efficient , they can be slower than less efficient but simpler solutions in many real - world situations . in authors describe a rectangle - intersection algorithm in two - dimensional space that uses only simple data structures ( arrays ) , and can enumerate all intersections among rectangles time and space .the usage of interval trees for was first proposed in where the authors used a different and more complex data structure than the one proposed here ( see section [ sec : parallel - ddm ] ) . in their case , the performance evaluation on very small instances shows mixed results . is a widely used algorithm for enumerating all intersections among subscription and update extents , with particular emphasis on distributed simulation applications based on the high level architecture ( hla ) specification .first sorts the endpoints , and then scans the sorted set ( details will be given in section [ sec : sort - based ] ) .is extended in to work efficiently on a dynamic scenario where extents can be moved or resized dynamically . despite its simplicity and efficiency , has the drawback that its sequential scan step is intrinsically serial and can not be easily parallelized .this can be a serious limitation when dealing with a large number of extents on multicore processors . in authors propose a binary partition - based matching algorithm that has good performances in some settings , but suffers from a worst case cost of where is the total number of subscription and update regions .moreover , the extension of this algorithm to the dynamic scenario seems impractical .layer et al . 
describe the binary interval search ( bits ) algorithm .bits can be used to efficiently count the number of intersections between two sets and of intervals in time . to do so, bits performs a preprocessing phase in which two sorted arrays and are created in time . contains the starting points of all intervals in , while contains the ending points . the number of intervals in that intersect a given query interval ] , $ ] intersect , that can be done in time using algorithm [ alg : intersect1d ] . for the general case we observe that two -dimensional extents and intersect if and only if all their projections along each dimension intersect . looking again at figure [ fig : ddm_example ] , we see that the projections of and intersect along dimension 1 but not along dimension 2 ; therefore , and can not intersect . on the other hand , the projections of and intersect along both dimensions , and in fact these rectangular regions intersect in the plane . since dealing with segments is easier than dealing with -rectangles , it is common practice in the research community to define efficient algorithms for the one - dimensional case , and use them to solve the general higher dimensional case . according to the discussion above , an algorithm that enumerates all intersections among two sets of and segments in time be immediately extended to an algorithm for -rectangles .for this reason in the rest of this paper we will consider the case only . , let be an intersection matrix * return * the most direct approach for solving the segment intersection problem is region - based matching , also called approach shown in algorithm [ alg : brute - force ] . the algorithm tests all subscription - update pairs , and records intersection information in matrix .the algorithm requires time , and is therefore not very efficient ; despite this , it is appealing due to its simplicity .furthermore , can be trivially parallelized since all iterations are independent ( it is an example of _ embarrassingly parallel _ computation ) .when processors are available , the amount of work performed by each processor is . dimensions . ] the matching algorithm proposed by boukerche and dzermajko is an improved solution to the -rectangle intersection problem .it works by partitioning the routing space into a grid of -dimensional cells .each extent is mapped to the grid cells it overlaps with .the events produced by an update extent are sent to all subscriptions that share at least one cell in common with .the approach is more scalable than ; furthermore , its performance can be tuned by choosing a suitable cell size .unfortunately , it has some drawbacks : matching may report spurious overlaps , that is , may deliver events to subscribers which should not receive them .this situation is illustrated in figure [ fig : ddm_example_grid ] : the extent and share the dashed cell but do not overlap ; therefore , will receive spurious notifications from that will need to be filtered out at the receiving side .the problem of spurious events can be mitigated by applying the brute force algorithm to each grid cell .if the routing space is partitioned into cells and all extents are evenly distributed over the grid , each cell will have subscription extents and update extents .therefore , the brute force approach applied to each cell requires operations ; since there are cells , the overall complexity becomes . 
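A short sketch of one-dimensional grid-based matching as just described: the routing space is split into cells, each extent is mapped to the cells it overlaps, and candidate pairs that share a cell are then checked exactly with the brute-force pairwise test, which removes the spurious matches mentioned above. The number of cells and the example extents are arbitrary.

```python
# Sketch of 1-D grid-based matching over the routing space [0, 1): bin extents into G cells,
# then apply the brute-force overlap test only to pairs sharing a cell.
def cells_of(seg, G):
    lo, hi = seg
    return range(int(lo * G), min(int(hi * G), G - 1) + 1)

def overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def grid_matching(subs, upds, G=10):
    grid = [([], []) for _ in range(G)]            # per-cell subscription / update id lists
    for i, s in enumerate(subs):
        for c in cells_of(s, G):
            grid[c][0].append(i)
    for j, u in enumerate(upds):
        for c in cells_of(u, G):
            grid[c][1].append(j)
    matches = set()
    for s_ids, u_ids in grid:                      # brute force restricted to each cell
        for i in s_ids:
            for j in u_ids:
                if overlap(subs[i], upds[j]):
                    matches.add((i, j))
    return matches

subs = [(0.05, 0.25), (0.60, 0.70)]
upds = [(0.20, 0.40), (0.90, 0.95)]
print(grid_matching(subs, upds))                   # {(0, 0)}: only the first pair overlaps
```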
in conclusion , in the ideal case the matching can reduce the workload by a factor with respect to .unfortunately , when cells are small ( and therefore is large ) each extent is mapped to a larger number of cells , which increases the computation time .the algorithm proposed by raczy et al . is a simple and very efficient solution to the matching problem . , let be an intersection matrix let be a vector with elements insert and in sort in nondecreasing order [ alg : sbm - loop ] extents in overlap extents in overlap * return * in its basic version , is illustrated in algorithm [ alg : sort - based ] .given a set of subscription intervals , and a set of update intervals , the algorithm sorts the endpoints in nondecreasing order in the array .then , the algorithm performs a scan of the sorted vector ; two sets and are used to keep track of the active subscription and update intervals at every point .each time the upper bound of an interval is encountered , the intersection matrix is updated appropriately , depending on whether is a subscription or update extent . as can be seen , uses only simple data structures .if we ignore the time needed to initialize the matrix , algorithm [ alg : sort - based ] requires time to sort the vector , then time to scan the sorted vector . during the scan phase, total time is spent to transfer the information from the sets _ subscriptionset _ and _ updateset _ to the intersection matrix , assuming that the sets above are implemented as bitmaps .the overall computational cost is , and therefore asymptotically not better than ; however , the term comes from simple operations on bitmaps , hence is very efficient in practice . while is very fast , it has the drawback of not being easily parallelizable .in fact , while parallel algorithms for sorting the array are known , the scan step is affected by loop - carried dependencies , since the content of _ subscriptionset _ and _ updateset _ depend on their values at the previous iteration .this dependency can not be easily removed .given the widespread availability of multi- and many - core processors , this limitation can not be ignored . in the next sectionwe introduce the algorithm for computing intersections among two sets of intervals .uses an augmented avl tree data structure to store the intervals .the performance of depends on the number of intersections ; however we will show that is faster than in the scenarios considered in the literature .furthermore , can be trivially parallelized , hence further performance improvements can be obtained on shared - memory multi - core processors .is a matching algorithm for one dimensional segments based on the _ interval tree _ data structure .an interval tree stores a dynamic set of intervals , and supports insertions , deletions , and queries to get the list of segments intersecting with a given interval . different implementations of the interval tree are possible .priority search trees support insertions and deletions in time , and can report all intersections with a given query interval in time .for the experimental evaluation described in section [ sec : experimental - evaluation ] we implemented the simpler but less efficient variant based on augmented avl trees , described in ( * ? ? 
?* chapter 14.3 ) .we did so in order to trade a slight decrease in asymptotic efficiency for a simpler and more familiar data structure .it should be observed that is not tied to any specific implementation of interval tree , therefore any data structure can be used as a drop - in replacement inside the algorithm .each node of the avl tree holds an interval ; intervals are sorted according to their lower bounds , and ties are broken by comparing upper bounds .node includes two additional fields and , representing the maximum value of the upper bound and minimum value of the lower bound , respectively , of all intervals stored in the subtree rooted at .we have chosen avl trees over other balanced search trees , such as red - black trees , because avl trees are more rigidly balanced and therefore allow faster queries .figure [ fig : interval_tree ] shows an example of interval tree with intervals . insertions and deletions are handled with the usual rules of avl trees , with the additional requirement to propagate updates of the and attributes up to the root .since the height of an avl tree is , insertions and deletions in the augmented data structure still require time in the worst case .the storage requirement is .* return * , function , described in algorithm [ alg : query ] , is used to update matrix with all intersections of the update extent with the segments stored in the subtree rooted at node .the function is invoked as .the basic idea is very similar to a conventional item lookup in a binary search tree , the difference being that at each node both the and fields are used to drive the exploration of the tree ; also , the search might proceed on both the left and right child of node .algorithm [ alg : query ] can identify all intersections between and all intervals stored in the tree in time . , let be an intersection matrix * return * the complete matching procedure can now be easily described in algorithm [ alg : itm ] .first , an interval tree is created from the subscription extents in .then , for each update extent , function interval - query is invoked to identify all subscriptions that intersect . [[ asymptotic - running - time ] ] asymptotic running time + + + + + + + + + + + + + + + + + + + + + + + if there are subscription and update extents , the interval tree of subscriptions can be created in time and requires space ; the total query time is , being the number of intersections involving all subscription and all update intervals .note that we can assume without loss of generality that ( if this is not the case , we can switch the role of and ) . [ [ parallelizing ] ] parallelizing + + + + + + + + + + + + + + algorithm [ alg : itm ] can be trivially parallelized , since all queries on are independent .note that function interval - query modifies the intersection matrix passed as parameter ; however , each invocation of interval - query modifies a different column of , therefore no conflicts arise . in section [ sec : experimental - evaluation ] we will illustrate the results of experimental investigations on the scalability of the parallel implementation of . [ [ dynamic - interval - management ] ] dynamic interval management + + + + + + + + + + + + + + + + + + + + + + + + + + + another interesting feature of is that it can easily cope with _ dynamic _ intervals . 
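Before turning to the dynamic case, the sketch below illustrates the query logic at the heart of ITM: an augmented search tree of subscription intervals keyed by lower bound, where each node also stores the maximum upper bound of its subtree. It is deliberately simplified with respect to the paper's implementation, since it performs no AVL rebalancing and omits the min-lower-bound field, and it collects matching indices into a list rather than filling the intersection matrix; it therefore shows only how the max field prunes the search, not the O(log n) height guarantee.

```python
# Simplified, unbalanced sketch of the ITM query structure (not the paper's AVL implementation).
class Node:
    def __init__(self, lo, hi, idx):
        self.lo, self.hi, self.idx = lo, hi, idx
        self.max_hi = hi
        self.left = self.right = None

def insert(root, lo, hi, idx):
    if root is None:
        return Node(lo, hi, idx)
    if (lo, hi) < (root.lo, root.hi):      # sort by lower bound, ties broken by upper bound
        root.left = insert(root.left, lo, hi, idx)
    else:
        root.right = insert(root.right, lo, hi, idx)
    root.max_hi = max(root.max_hi, hi)
    return root

def interval_query(node, lo, hi, hits):
    """Collect indices of all stored intervals intersecting [lo, hi]."""
    if node is None or node.max_hi < lo:   # nothing in this subtree can reach [lo, hi]
        return
    interval_query(node.left, lo, hi, hits)
    if node.lo <= hi and lo <= node.hi:
        hits.append(node.idx)
    if node.lo <= hi:                      # right subtree only holds lower bounds >= node.lo
        interval_query(node.right, lo, hi, hits)

subs = [(0.0, 2.0), (1.0, 4.0), (5.0, 6.0)]
root = None
for k, (lo, hi) in enumerate(subs):
    root = insert(root, lo, hi, k)
for upd in [(1.5, 1.6), (4.5, 5.5)]:
    hits = []
    interval_query(root, *upd, hits)
    print(upd, "intersects subscriptions", hits)
```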
in most applications , extents can move and grow / shrink dynamically ; if an update extent , say , changes its position or size , then it is necessary to recompute column of matrix .the brute force approach applied to alone gives an algorithm , since it is only necessary to identify overlaps between and all subscription segments .an extension of capable of updating intersection information efficiently has been proposed , with an asymptotic cost that depends on various factors ( e.g. , upper bound of the dimension , maximum bound shift in a region modification ) . on the other hand ,we can use two interval trees and , holding the set of update and subscription extents , respectively , to recompute the intersections efficiently .if an update extent is modified , we can identify the subscriptions overlapping in time by performing a query on .similarly , if a subscription extent changes , the list of intersections can be recomputed in time using .maintenance of both and does not affect the asymptotic cost of .the performance of a service can be influenced by many different factors , including : ( _ i _ ) the computational cost of the matching algorithm ; ( _ ii _ ) the memory footprint ; ( _ iii _ ) the communication overhead of the parallel / distributed architecture where the simulation is executed and ( _ iv _ ) the cost of sending and discarding irrelevant events at the destination , if any .the communication overhead depends on the hardware platform over which the simulation model is executed , and also on the implementation details of the communication protocol used by the simulation middleware .therefore , factor ( _ iii _ ) above is likely to equally affect any algorithm in the same way .the cost of discarding irrelevant notifications applies only to approximate matching algorithms , such as matching , that can report spurious intersections ( unless spurious intersections are cleaned up at the sender side ) .the , and algorithms do not suffer from this problem since they never return spurious intersections .besides , in and the authors show that for relevant cases the algorithm has better performance than matching . therefore , in the performance evaluation study we focused on the exact matching algorithms above , where only factors ( _ i _ ) and ( _ ii _ ) should be considered .j. rosenberg , `` geographical data structures compared : a study of data structures supporting region queries , '' _ computer - aided design of integrated circuits and systems , ieee transactions on _ , vol . 4 , no . 1 ,pp . 5367 , 1985 .m. petty and a. mukherjee , `` experimental comparison of d - rectangle intersection algorithms applied to hla data distribution , '' in _ proceedings of the 1997 distributed simulation symposium _, 1997 , pp . 1326 .f. devai and l. neumann , `` a rectangle - intersection algorithm with limited resource requirements , '' in _ proc .10th ieee int . conf . on computer and information technology _ ,cit 10.1em plus 0.5em minus 0.4em washington , dc , usa : ieee computer society , 2010 , pp .23352340 .a. boukerche and c. dzermajko , `` performance comparison of data distribution management strategies , '' in _ proc .5th ieee int .workshop on distributed simulation and real - time applications _ ,ds - rt 01.1em plus 0.5em minus 0.4emwashington , dc , usa : ieee computer society , 2001 , pp .67. d. t. marr , f. binns , d. l. hill , g. hinton , d. a. koufaty , a. j. miller , and m. 
upton , `` hyper - threading technology architecture and microarchitecture , '' _ intel technology journal _ , vol . 6 , no . 1 , feb .2002 .l. bononi , m. bracuto , g. dangelo , and l. donatiello , `` a new adaptive middleware for parallel and distributed simulation of dynamically interacting systems , '' in _ proc .8th ieee int . symp . on distributed simulation and real - time applications_.1em plus0.5em minus 0.4emwashington , dc , usa : ieee computer society , 2004 , pp . 178187 . | identifying intersections among a set of -dimensional rectangular regions ( -rectangles ) is a common problem in many simulation and modeling applications . since algorithms for computing intersections over a large number of regions can be computationally demanding , an obvious solution is to take advantage of the multiprocessing capabilities of modern multicore processors . unfortunately , many solutions employed for the data distribution management service of the high level architecture are either inefficient , or can only partially be parallelized . in this paper we propose the interval tree matching ( itm ) algorithm for computing intersections among -rectangles . itm is based on a simple interval tree data structure , and exhibits an embarrassingly parallel structure . we implement the itm algorithm , and compare its sequential performance with two widely used solutions ( brute force and sort - based matching ) . we also analyze the scalability of itm on shared - memory multicore processors . the results show that the sequential implementation of itm is competitive with sort - based matching ; moreover , the parallel implementation provides good speedup on multicore processors . data distribution management ; high level architecture ; parallel algorithms ; interval tree |
in a multipartite quantum system, any completely positive (cp) map applied locally to one part does not affect the reduced density operator of the remaining part. this fundamental no-go result, called the "no-signalling theorem", implies that quantum entanglement does not enable nonlocal ("superluminal") signaling under standard operations, and is thus consistent with relativity, in spite of the counterintuitive, stronger-than-classical correlations that entanglement enables. for simple systems, no-signaling follows from non-contextuality, the property that the probability assigned to a projector by the born rule, tr(ρP), where ρ is the density operator, does not depend on how the orthonormal basis set containing it is completed. no-signaling has also been treated as a basic postulate from which to derive quantum theory. it is of interest to ask whether and how computation theory, in particular intractability and uncomputability, matters to the foundations of (quantum) physics. such a study, if successful, could potentially allow us to reduce the laws of physics to mathematical theorems about algorithms and thus shed new light on certain conceptual issues. for example, it could explain why stronger-than-quantum correlations that are compatible with no-signaling are nevertheless disallowed in quantum mechanics. one strand of thought leading to the present work, considered earlier by us in ref. , was the proposition that the measurement problem is a consequence of basic algorithmic limitations imposed on the computational power that can be supported by physical laws. in the present work, we would like to see whether no-signaling can also be explained in a similar way, starting from computation-theoretic assumptions. the central problem in computer science is the conjecture that two computational complexity classes, *p* and *np*, are distinct in the standard turing model of computation. *p* is the class of decision problems solvable in polynomial time by a (deterministic) turing machine (tm). *np* is the class of decision problems whose solution(s) can be verified in polynomial time by a deterministic tm. *#p* is the class of counting problems associated with the decision problems in *np*. "-complete" following a class denotes a problem within the class which is maximally hard, in the sense that any other problem in the class can be solved in polynomial time using an oracle that returns solutions of the complete problem in a single clock cycle. for example, determining whether a boolean formula is satisfiable is *np*-complete, and counting its satisfying assignments is *#p*-complete. "-hard" following a class denotes a problem not necessarily in the class, but to which all problems in the class reduce in polynomial time. *p* is often taken to be the class of computational problems that are "efficiently solvable" (i.e., solvable in polynomial time) or "tractable", although there are potentially larger classes that are considered tractable, such as *rp* and *bqp*, the latter being the class of decision problems efficiently solvable by a quantum computer. *np*-complete and potentially harder problems, which are not known to be efficiently solvable, are considered intractable in the turing model. if *p* ≠ *np* and the universe is a polynomial rather than an exponential place, physical laws cannot be harnessed to efficiently solve intractable problems, and *np*-complete problems will be intractable in the physical world.
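A toy example of the verify-versus-search gap behind these definitions: checking a proposed assignment of a boolean formula takes time linear in the formula's size, whereas the naive counting routine below inspects all 2^n assignments. The three-clause formula is an arbitrary illustration.

```python
# Verify vs. exhaustive search/count for boolean satisfiability (toy example).
from itertools import product

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3); literal k means x_k, -k means "not x_k".
cnf = [[1, -2], [2, 3], [-1, -3]]

def verify(assignment, cnf):           # polynomial-time check of a candidate certificate
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause) for clause in cnf)

def count_satisfying(cnf, n):          # the #P-style counting problem: exhaustive, 2^n work
    return sum(verify(dict(zip(range(1, n + 1), bits)), cnf)
               for bits in product([False, True], repeat=n))

print(verify({1: True, 2: True, 3: False}, cnf))    # True: this certificate works
print(count_satisfying(cnf, 3), "of 8 assignments satisfy the formula")
```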
that classical physics supports various implementations of the turing machine is well known .more generally , we expect that computational models supported by a physical theory will be limited by that theory .witten identified expectation values in a topological quantum field theory with values of the jones polynomial that are -hard .there is evidence that a physical system with a non - abelian topological term in its lagrangian may have observables that are * np*-hard , or even # * p*-hard .other recent related works that have studied the computational power of variants of standard physical theories from a complexity or computability perspective are , respectively , refs . and refs . . ref . noted that * np*-complete problems do not seem to be tractable using resources of the physical universe , and suggested that this might embody a fundamental principle , christened the * np*-hardness assumption ( also cf . studies how insights from quantum information theory could be used to constrain physical laws .we will informally refer to the proposition that the universe is a polynomial place in the computational sense ( to be strengthened below ) as well as the communication sense by the expression the world is not hard enough " ( wnhe ) .recently , ref . have posed the question whether nonlinear quantum evolution can be considered as providing any help in solving otherwise hard problems , on the grounds that under nonlinear evolution , the output of such a computer on a mixture of inputs is not a convex combination of its output on the pure components of the mixture .we circumvent this problem here by adopting the standpoint of _ information realism _ , the position that physical states are ultimately information states registered in some way sub - physically but objectively by nature . at this stage , we will not worry about the details except to note an implication for the present situation , which is that from the perspective of ` nature s eye ' , there are no mixed states . therefore , in describing nonlinear physical laws or specifying the working of non - standard computers based on such laws , it suffices for our purpose to specify their action on ( all relevant ) pure state inputs . in ref . , we pointed out that the assumption of wnhe ( and further that of * p * * np * ) can potentially give a unified explanation of ( a ) the observed ` insularity - in - theoryspace ' of quantum mechanics ( qm ) , namely that qm is _ exactly _ unitary and linear , and requires measurements to conform to the born rule ; ( b ) the classicality of the macroscopic world ; ( c ) the lack of quantum physical mechanisms for non - signaling superquantum correlations . 
in ( a ) , the basic idea is that departure from one or more of these standard features of qm seems to invest quantum computers with super - turing power to solve hard problems efficiently , thus making the universe an exponential place , contrary to assumption .the possibility ( b ) arises for the following reason .it is proposed that the wnhe assumption holds not only in the sense that hard problems ( in the standard turing model ) are not efficiently solvable in the physical world , but in the stronger sense that any physical computation can be simulated on a probabilistic tm with at most a polynomial slowdown in the number of steps ( the strong church - turing thesis ) .therefore , the evolution of any quantum system computing a decision problem , could asymptotically be simulated in polynomial time in the size of the problem , and thus lies in * bpp * , the class of problems that can be efficiently solved by a probabilistic tm . assuming * bpp * * bqp *, this suggests that although at small scales , standard qm remains valid with characteristic * bqp*-like behavior , at sufficiently large scales , classical ( ` * bpp*-like ' ) behavior should emerge , and that therefore there must be a definite scale sometimes called the heisenberg cut where the superposition principle breaks down , so that asymptotically , quantum states are not exponentially long vectors . in ref . , we speculate that this scale is related to a discretization of hilbert space .this approach provides a possible computation theoretic resolution to the quantum measurement problem . in ( c ) , the idea is that in a polynomial universe , we expect that phenomena in which a polynomial amount of physical bits can simulate exponentially large ( classical ) correlations , thereby making communication complexity trivial , would be forbidden . in the present work ,we are interested in studying whether the no - signaling theorem follows from the wnhe assumption .the article is divided into two parts : part i , concerned with the computer scientific aspects , giving a complexity theoretic motivation for the work ; part ii , concerned with the quantum optical implementation of a test suggested by part i. in part i , first some results concerning non - standard operations that violate no - signaling and help efficiently solve intractable problems are surveyed , in sections [ sec : srisum ] and [ sec : sripol ] , respectively . then, in section [ sec : sripolynon ] , we introduce the concept of a polynomial superluminal gate , a hypothetical primitive operation that is prohibited by the assumption of no - signaling , but allowed if instead we only assume that intractable problems should not be efficiently solvable by physical computers .we examine the relation between the above two classes of non - standard gates .we also describe a _ constant _ gate on a single qubit or qutrit , possibly the simplest instance of a polynomial superluminal operation .in part ii , first we present a quantum optical realization of the constant gate , and its application to an experiment involving entangled light generated by parametric downconversion in a nonlinear crystal in section [ sec : ent ] .physicists who could not care less about computational complexity aspects could skip directly to this section .they may be warned that the intervening sections of part i will involve mangling qm in ways that may seem awkward , and whose consistency is , unfortunately , not obvious ! 
on the other hand , computer scientists unfamiliar with quantum optics may skip section [ sec : ent ] , which is essentially covered in section [ sec : sriung ] , which discusses quantitative and conceptual issues surrounding the physical realization of the constant gate . finally , we conclude with section [ sec : conclu ] by surveying some implications of a possible positive outcome of the proposed experiment , and discussing how such an unexpected physical effect may fit in with the mathematical structure of known physics . we present a slightly abridged version of the discussions in this work in ref . . even minor variants of qm are known to lead to superluminal signaling . an example is a variant incorporating nonlinear observables , unless the nonlinearity is confined to sufficiently small scales . in this section , we will review the violation of no - signaling due to departures from standard qm via the introduction of ( a ) non - complete schrödinger evolution or measurement , ( b ) nonlinear evolution , ( c ) departure from the born rule . in each case , we will not attempt to develop a non - standard qm in detail , but instead content ourselves with considering simple representative examples . ( a ) _ non - complete measurements or non - complete schrödinger evolution . _ let us consider a qm variant that allows a non - trace - preserving ( and hence non - unitary ) but invertible single - qubit operation of the form : where is a real number . the resultant state must be normalized by dividing it by the normalization factor immediately before a measurement , making measurements nonlinear . given the entangled state that alice and bob share , to transmit a superluminal signal , alice applies either ( where is an integer ) or the identity operation to her qubit . bob s particle is left , respectively , in the state or , which can in principle be distinguished , the distance between the states being greater for larger ( cf . section [ sec : sripol ] ) , leading to a superluminal signal from alice to bob . even at the single - particle level , if the measurement is non - complete , there is superluminal signaling due to a breakdown of non - contextuality coming from the renormalization . as a simple illustration , suppose we are given two observers alice and bob sharing a delocalized qubit , , with eigenstate localized near alice and near bob . with an -fold application of ( which can be thought of as an application of an imaginary phase on alice s side , leading to selective augmentation of amplitude ) on this state , alice produces the ( unnormalized ) state , so that after renormalization , bob s probability of obtaining has changed in a context - dependent fashion from to . by thus nonlocally controlling the probability with which bob finds , alice can probabilistically signal bob superluminally . ( b ) _ nonlinear evolution . _ as a simple illustration of a superluminal gate arising from nonlinear evolution , we consider the action of the nonlinear two - qubit ` or ' gate , whose action in a preferred ( say , computational ) basis is given by : if the two qubits are entangled with other qubits , then the gate is assumed to act in each subspace labelled by states of the other qubits in the computational basis . alice and bob share the entangled state .
to transmit a bit superluminally , alice measures her qubit in the computational basis or the diagonal basis . for a general -qubit state , the states and are neither necessarily mutually orthogonal nor normalized , and with , which is interpreted as . if and are mutually orthogonal , and thus the reduced density operator for the last qubit is diagonal in the computational basis , then , and no such special interpretation is needed . to simulate the production of with standard quantum resources , one first applies a phase gate followed by a hadamard on the last qubit , to obtain the state . measurement on the last qubit in the computational basis yields , and hence in the first register , with probability , which is to say that the simulation of succeeds with probability , irrespective of . similar arguments hold for , etc . therefore , the class of problems efficiently solvable with standard quantum computation augmented by the non - standard family of constant gates is in * bqp * . it is worth noting that the constant gate is quite different from the following two operations that appear to be similar , but are in fact quite distinct . the first operation is a standard quantum mechanical cp map , polynomial and not superluminal ; the second is exponential and consequently superluminal . ( a ) to begin with , a constant gate is not a quantum deleter , in which a qubit is subjected to a _ complete _ operation , specifically a contractive cp map that prepares it asymptotically in a fixed state . the action of a quantum deleter is given by an amplitude damping channel , which has an operator sum representation , respectively in the qubit case or when extended to the qutrit case , with the kraus operators given by eq . ( [ eq : srikjanichwaraa ] ) or ( [ eq : srikjanichwarab ] ) , respectively [ eq : srikjanichwara ] . unlike in the case of , or , there is no actual destruction of quantum information , but its transfer through dissipative decoherence into correlations with a zero - temperature environment . the reduced density operator of bob s entangled system remains unaffected by alice s application of this operation on her system . the deleting action , though nonlinear at the state vector level , nevertheless acts linearly on the density operator . ( b ) next we note that the constant gate is quite different from the ` post - selection ' operation , which is a rank-1 projection . verbally , if the constant gate corresponds to the operation `` for all input states in the computational basis , set the output state to , independently of , except for a global phase '' , where is some fixed state , then post - selection corresponds to the action `` for all input states , if , then discard the branch '' . post - selective equivalents of and are _ followed by renormalization _ .
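the distinction just drawn between a complete deleter channel and a non - complete operation can be checked numerically . the following sketch is an illustration added here ; the non - trace - preserving matrix diag(1 , 3) is an assumed stand - in for the gate of item ( a ) of the previous subsection , whose explicit form is not reproduced in this text . it verifies that alice s local application of the amplitude damping channel leaves bob s reduced density operator untouched , whereas applying an invertible non - trace - preserving map and renormalizing changes it . the comparison between the constant gate and the post - selection operation then continues below .

```python
import numpy as np

# maximally entangled two-qubit state shared by alice (first factor) and bob (second).
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
I2 = np.eye(2)

def bob_marginal(r):
    """partial trace over alice's qubit (the first tensor factor)."""
    return np.trace(r.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# (i) complete operation: amplitude-damping channel applied locally by alice.
g = 0.7
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.array([[0, np.sqrt(g)], [0, 0]])
print(np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, I2))   # kraus operators form a complete set
rho_del = sum(np.kron(K, I2) @ rho @ np.kron(K, I2).conj().T for K in (K0, K1))
print(np.allclose(bob_marginal(rho_del), bob_marginal(rho)))  # bob's reduced state is unaffected

# (ii) non-complete but invertible operation (assumed example matrix, see lead-in):
# amplify one amplitude on alice's side, then renormalise before reading out.
A = np.diag([1.0, 3.0])
rho_nc = np.kron(A, I2) @ rho @ np.kron(A, I2).conj().T
rho_nc /= np.trace(rho_nc)
print(bob_marginal(rho_nc).real)   # no longer I/2 -> alice's choice is visible to bob
```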
in particular , whereas the action of on the first of two particles in the state leaves the second particle in the state , that of leaves the second particle in the state . it is straightforward to see that post - selection is an exponential operation : acting it on the second qubit of in eq . ( [ eq : sat ] ) , and post - selecting on 1 , we obtain the solution to sat in one time - step . the seemingly immediate conclusion from this fact is that the wnhe assumption is not strong enough to derive no - signaling , and would have to be supplemented with additional assumption(s ) , possibly purely physically motivated ones , prohibiting the physical realization of polynomial superluminal gates . an alternative , highly unconventional reading of the situation is that wnhe is a fundamental principle of the physical world , while the no - signaling condition is in fact not universal , so that some polynomial superluminal gates may actually be physically realizable . quite surprisingly , we may be able to offer some support for this viewpoint . we believe that constant gates of the above type can be quantum optically realized when a photon detection is made at a _ path singularity _ , defined as a point in space where two or more incoming paths converge and terminate . in graph - theoretic parlance , a path singularity is a terminal node in a directed graph , of degree greater than 1 . we describe in section [ sec : ent ] an experiment that possibly physically realizes . in principle , a detector placed at the focus of a convex lens realizes such a path singularity . this is because the geometry of the ray optics associated with the lens requires rays parallel to the lens axis to converge to the focus after refraction , while the destructive nature of photon detection implies the termination of the path . although conceptually and experimentally simple , the high degree of mode filtering or spatial resolution that the experiment requires will be the main challenge in implementing it . indeed , we believe this is the reason that such gates have remained undiscovered so far . our argument here has implicitly assumed that * p * and * np * are distinct . if it turns out that they are equal , then even the obviously non - physical operations such as or would be polynomial gates , and the wnhe assumption would not be able to exclude them . nevertheless , the question of existence and testability of certain superluminal gates , which is the main result of this work , would still remain valid and of interest . if polynomial superluminal gates are indeed found to exist ( and given that other superluminal gates do not seem to exist anyway ) , this would give us greater confidence that the two classes are indeed distinct ( or , to be safe , that even nature does not ` know ' that they are equal ! ) and that the assumption of wnhe is indeed a valid and fruitful one . our proposed implementation of , based on the use of entanglement , is broadly related to the type of quantum optical experiments encountered in refs . , and closely related to an experiment performed in innsbruck that elegantly illustrates wave - particle duality by means of entangled light .
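before describing the innsbruck setup , it is worth spelling out the claim made above that post - selection followed by renormalization solves sat in a single step ; the form of the oracle state below is an assumption standing in for eq . ( [ eq : sat ] ) , which is not reproduced in this text .

```latex
% assumed standard form of the oracle state standing in for eq. (eq:sat);
% S = f^{-1}(1) is the set of satisfying assignments of the boolean formula f
\[
  |\Psi\rangle = 2^{-n/2} \sum_{x \in \{0,1\}^n} |x\rangle\, |f(x)\rangle ,
  \qquad
  \Pi_1 = \mathbb{1} \otimes |1\rangle\langle 1| ,
\]
\[
  \frac{\Pi_1 |\Psi\rangle}{\|\Pi_1 |\Psi\rangle\|}
  = \frac{1}{\sqrt{|S|}} \sum_{x \in S} |x\rangle\, |1\rangle ,
  \qquad S = f^{-1}(1) \neq \emptyset .
\]
```

a single application of the projection and the renormalization thus leaves the first register in a uniform superposition over satisfying assignments , from which a computational basis measurement returns a solution ; it is precisely the renormalization step that makes the operation exponential rather than polynomial .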
in the innsbruck experiment , pairs of position - momentum entangled photons are produced by means of type - i spontaneous parametric down - conversion ( spdc ) at a nonlinear source , such as a bbo crystal . the two outgoing conical beams from the nonlinear source are presented ` unfolded ' in figure [ fig : zeiz ] . one of each pair , called the ` signal photon ' , is received by alice , while the other , called the ` idler ' , is received and analyzed by bob . alice s photon is registered by a detector behind a lens . bob s photon is detected after it enters a double - slit assembly . if alice s detector , which is located behind the lens , is positioned at the focal plane of the lens and detects a photon , it localizes alice s photon to a point on the focal plane . by virtue of entanglement , this projects the state of bob s photon to a ` momentum eigenstate ' , a plane wave propagating in a particular direction . for example , if alice detects her photon at , or , bob s photon is projected to a superposition of the parallel modes 2 and 5 , modes 1 and 4 , or modes 3 and 6 . since this can not reveal positional information about whether the particle originated at or , and hence reveals no which - way information about slit passage , _ in coincidence _ with a registration of her photon at a focal plane point the idler exhibits a young s double - slit interference pattern . the patterns corresponding to alice registering her photon at , or will be mutually shifted . bob s observation in his singles counts will therefore not show any sign of interference , being the average of all possible such mutually shifted patterns . the interference pattern is seen by bob in coincidence with alice s detection . ( caption of figure [ fig : zeiz ] : when alice positions her detector in the focal plane of the lens , it projects bob s state into a mixture of plane waves , which produce an interference pattern on bob s screen _ in coincidence _ with any fixed detection point on alice s focal plane . bob s pattern in his _ singles count _ , being the integration of such patterns over all focal plane points , shows no interference pattern . on the other hand , positioning her detector in the imaging plane can potentially reveal the path the idler takes through the slit assembly , and thus does not lead to an interference pattern on bob s screen even in the coincidence counts . ) if her detector is placed at the imaging plane ( at distance from the plane of the lens ) , a click of the detector can reveal the path the idler takes from the crystal through the slit assembly , which therefore can not show the interference pattern even in the coincidence counts . for example , if alice detects her photon at ( resp . , ) , bob s photon is projected to a superposition of the mutually non - parallel modes 4 , 5 and 6 ( resp . , 1 , 2 and 3 ) and , because the double - slit assembly is situated in the near field , can then enter only slit ( resp . , ) . therefore , alice s imaging plane measurement gives path or position information of the idler photon , so that no interference pattern emerges in bob s coincidence counts , and consequently also in his singles counts . this qualitative description of the innsbruck experiment is made quantitative using a simple six - mode model in the next subsection . here the state of the spdc field of figure [ fig : zei ] is modeled by a 6-mode vector : where is the vacuum state , ( resp . , ) are the creation operators for alice s ( resp .
, bob s ) light field on mode , per the mode numbering scheme in figure [ fig : zei ] . the quantity ( ) depends on the pump field strength and the crystal nonlinearity . the coincidence counting rate for corresponding measurements by alice and bob is proportional to the square of the second - order correlation function , and is given by : where represents the positive frequency part of the electric field at a point on alice s focal or imaging plane , and that of the electric field at an arbitrary point on bob s screen . we have : where is the wavenumber , the distance from the epr source to the upper / lower slit on bob s double slit diaphragm ( the length of the segment or ) ; ( resp . , ) is the distance from the lower ( resp . , upper ) slit to . the other two terms in eq . ( [ bobfjeld ] ) , pertaining to the other two pairs of modes , are obtained analogously . we study the two cases , corresponding to alice making a remote position or remote momentum measurement on the idler photons . _ alice remotely measures position ( path ) of the idler . _ suppose alice positions her detector at the imaging plane and detects a photon at or . the corresponding field at her detector is where ( resp . , ) is the path length along any ray path from the source point ( resp . , ) through the lens up to the image point ( resp . , ) . by fermat s principle , all paths connecting a given pair of source and image points are equal . setting in eq . ( [ eq : coinc ] ) , and substituting eqs . ( [ spdc ] ) , ( [ bobfjeld ] ) and ( [ alicefjeldb ] ) in eq . ( [ eq : coinc ] ) , we find the coincidence counting rate for detections by alice and bob to be which is essentially a single slit diffraction pattern formed behind , respectively , the upper and lower slit . the intensity pattern bob finds on his screen in the singles count , obtained by averaging over , is thus not a double - slit interference pattern , but an incoherent mixture of the two single slit patterns . a similar lack of interference pattern is obtained by bob if alice makes no measurement . _ alice remotely measures momentum ( direction ) of the idler . _ alice positions her detector on the focal plane of the lens . if she detects a photon at , or , the field at her detector is , respectively , [ alicefjelda ] where ( resp . , ) is the distance from ( resp . , ) along the path 2 ( resp . , 5 ) through the lens up to the point . the distances along the two paths being identical , . the distances and are defined analogously . substituting eqs . ( [ spdc ] ) , ( [ bobfjeld ] ) and ( [ alicefjelda ] ) in eq . ( [ eq : coinc ] ) , we find that the coincidence counting rate is given by \( \begin{aligned} r_f(z) & \propto \epsilon^2\left[1 + \cos(k\cdot[r_2 - r_5] + \omega_{25})\right] , \label{rama} \\ r_{f^\prime}(z) & \propto \epsilon^2\left[1 + \cos(k\cdot[r_1 - r_4] + \omega_{14})\right] , \label{ramb} \\ r_{f^{\prime\prime}}(z) & \propto \epsilon^2\left[1 + \cos(k\cdot[r_3 - r_6] + \omega_{36})\right] , \label{ramc} \end{aligned} \) [ ram ] where and are fixed for a given point on the focal plane . each equation in eq . ( [ ram ] ) represents a conventional young s double slit pattern . conditioned on alice detecting photons at , bob finds the pattern , and similarly for the points and . in his singles count , bob perceives no interference , because he is left with a statistical mixture of the patterns ( [ rama ] ) , ( [ ramb ] ) , ( [ ramc ] ) , etc . , corresponding to _ all _ points on alice s focal plane illuminated by the signal beam . the experiment proposed here , presented earlier by us in ref .
, is derived from the innsbruck experiment , and is therefore called ` the modified innsbruck experiment ' . it was claimed to manifest superluminal signaling , though it was not clear what the exact origin of the signaling was , and in particular , which assumption that goes into proving the no - signalling theorem was being given up . the modified innsbruck experiment is revisited here in order to clarify this issue in detail in the light of the discussions of the previous sections . this will help crystallize what is , and what is not , responsible for the claimed signaling effect . in ref . , we studied a version of nonlocal communication inspired by the original einstein - podolsky - rosen thought experiment . recently , similar experiments , also based on the innsbruck experiment , have been independently proposed in refs . . ( caption of the figure illustrating the modified setup : the arrangement is the same as in figure [ fig : zei ] , except that bob s photon ( the idler ) , before entering the double - slit assembly , traverses a direction filter that permits only ( nearly ) horizontal modes to pass through , absorbing the other modes at the filter walls . the direction filter acts as a state filter that ensures that bob receives only the _ pure _ state consisting of the horizontal modes . thus if alice makes no measurement or makes a detection at , bob s corresponding photon builds an interference pattern of the modes 2 and 5 in the _ singles _ counts . on the other hand , if alice positions her detector in the imaging plane , she knows the path the idler takes through the slit assembly , and thus no interference pattern is found on bob s screen even in the coincidence counts . ) first we present a qualitative overview of the modified innsbruck experiment . the only material difference between the original innsbruck experiment and the modified version we propose here is that the latter contains a ` direction filter ' , consisting of two convex lenses of the same focal length , separated by a distance . their shared focal plane is covered by an opaque screen , with a small aperture of diameter at their shared focus . we want to be small enough so that only almost horizontal modes are permitted by the filter to fall on the double slit diaphragm . the angular spread ( about the horizontal ) of the modes that fall on the aperture is given by ; we require that , where is the slit separation , to guarantee that only modes that are horizontal or almost horizontal are selected to pass through the direction filter , so as to produce a young s double - slit interference pattern on his screen plane . on the other hand , we do not want the aperture to be so small that it produces significant diffraction , thus : . putting these conditions together , we must have . the ability to satisfy this condition , while preferable , is not crucial . if it is not satisfied strictly , the predicted signal is weaker but not entirely suppressed . the point is clarified further down . if alice makes no measurement , the idler remains entangled with the signal photon , which renders incoherent the beams coming through the upper and lower slits on bob s side , so that he will find no interference pattern on his screen . similarly , if she detects her photon in the imaging plane , she localizes bob s photon at his slit plane , and so , again , no interference pattern is seen .
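as a rough numerical orientation for the aperture condition described above ( the inequality actually intended in the text is not reproduced here , so the following uses standard small - angle estimates , and all numbers are assumptions chosen only for illustration ) , one can check that a comfortable window of admissible aperture sizes exists for typical parameters :

```python
import numpy as np

# rough check of the aperture-size window for the direction filter, using
# standard small-angle estimates; all numbers below are illustrative assumptions.
lam = 810e-9      # wavelength of the down-converted light [m]
f   = 0.5         # focal length of the filter lenses [m]
s   = 200e-6      # slit separation on bob's double slit [m]

# a plane wave tilted by theta focuses a transverse distance f*theta from the
# focus, so an aperture of diameter d transmits angles up to about d/(2f);
# visible double-slit fringes require this spread to stay well below the
# angular fringe period lam/s, while diffraction at the aperture (~lam/d)
# must satisfy the same bound, i.e. d >> s.
d_max = 2 * f * lam / s      # upper bound from direction selectivity
d_min = s                    # lower bound from aperture diffraction
print(f"aperture window: {d_min*1e3:.2f} mm  <<  d  <<  {d_max*1e3:.2f} mm")
```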
thus far , the proposed experiment produces the same effect as the innsbruck experiment . on the other hand , if alice scans the focal plane and makes a detection , she remotely measures bob s corresponding photon s momentum and erases its path information , thereby ( non - selectively ) leaving it as a mixture of plane waves incident on the direction filter . however , only the fraction that makes up the pure state comprising the horizontal modes passes through the filter . diffracting through the double - slit diaphragm , it produces a young s double slit interference pattern on bob s screen . those plane waves coincident with alice detecting her photon away from the focus are filtered out and do not reach bob s double slit assembly . it follows that an interference pattern will emerge in bob s _ singles counts _ , coinciding with alice s detection at or close to . thus alice can remotely prepare inequivalent ensembles of idlers , depending on whether or not she measures momentum on her photon . in principle , this constitutes a superluminal signal . quantitatively , the only difference between the innsbruck and the proposed experiment is that eq . ( [ bobfjeld ] ) is replaced by an expression containing only horizontal modes . as an idealization ( to be relaxed below ) , assuming perfect filtering and low spreading of the wavepacket at the aperture , we have : where now represents the distance from the epr source to the upper / lower slit on bob s double slit diaphragm ( the length of the segment or ) ; ( resp . , ) is the distance from the upper ( resp . , lower ) slit to . _ detection of a signal photon at or near is the only possible event on the focal plane such that bob detects the twin photon at all _ . focal plane detections sufficiently distant from will project the idler into non - horizontal modes that will be filtered out before reaching bob s double - slit assembly . therefore , the interference pattern of eq . ( [ rama ] ) is in fact the only one seen in bob s singles counts . we denote this pattern , which bob obtains conditioned on alice measuring in the focal plane , by . by contrast , in the innsbruck experiment bob in his singles counts sees a statistical mixture of the patterns ( [ rama ] ) , ( [ ramb ] ) , ( [ ramc ] ) , etc . , corresponding to _ all _ points on alice s focal plane illuminated by the signal beam . when alice measures in the imaging plane , as in the innsbruck experiment , bob finds no interference pattern in his singles counts . setting in eq . ( [ eq : coinc ] ) , and substituting eqs . ( [ spdc ] ) , ( [ bobfjeld0 ] ) and ( [ alicefjeldb ] ) in eq . ( [ eq : coinc ] ) , we find the coincidence counting rate for detections by alice and bob to be , which is a uniform pattern ( apart from an envelope due to single slit diffraction , which we ignore for the sake of simplicity ) . it follows that bob s observed pattern in the singles counts conditioned on alice measuring in the imaging plane , , is also the same , i.e. , . our main result is the difference between the patterns and , which implies that alice can signal bob one bit of information across the spacelike interval connecting their measurement events , by choosing to measure her photon in the focal plane or not to measure .
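the difference between the two singles - count patterns can be visualised with a minimal numerical sketch ( illustrative only ; the fringe frequency and the number of sampled focal - plane points are assumptions ) : without the direction filter bob averages many mutually shifted conditional fringe patterns , as in the innsbruck experiment , while with the filter only the pattern conditioned on a detection at the focus survives .

```python
import numpy as np

# singles-count pattern with and without the direction filter (toy model).
z = np.linspace(-1.0, 1.0, 1000)                          # position on bob's screen (arb. units)
phases = np.linspace(0, 2 * np.pi, 50, endpoint=False)    # fringe shifts for different focal-plane points

def fringe(phi):
    # conditional double-slit pattern, cf. eq. (ram); 10 is an assumed fringe frequency
    return 1 + np.cos(10 * z + phi)

p_no_filter = np.mean([fringe(phi) for phi in phases], axis=0)  # shifted fringes average out
p_filter    = fringe(0.0)                                       # only the pattern at the focus survives

print(p_no_filter.std(), p_filter.std())   # ~zero visibility vs. full visibility
```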
in practice , bob would need to include additional detectors to sample or scan the -plane fast enough . this procedure can potentially form the basis for a superluminal quantum telegraph , bringing into sharp focus the tension between quantum nonlocality and special relativity . considering the far - reaching implications of a positive result of the experiment , we may pause to consider the following : whether our analysis so far can be correct , and , under the possibility ( however limited ) that it is , how such a signal may ever arise , in view of the no - signaling theorem . it may be easy to dismiss a proof of putative superluminal communication as ` not even wrong ' , yet less easy to spot where the purported proof fails and to provide a mechanism for thwarting the signaling . for one , the prediction of the nonlocal signaling is based on a model that departs only slightly from our quantum optical model of section [ sec : sridom ] , which explains the original innsbruck experiment quite well . there have been various attempts at proving that quantum nonlocality somehow contravenes special relativity . the author has read some of their accounts , and it was not difficult to spot a hidden erroneous assumption that led to the alleged conflict with relativity . armed with this lesson , the present claim will be different in the following three ways : * _ we discuss in the following section various possible objections to our claim , and demonstrate why each of them fails . _ perhaps they do not cover some erroneous but subtle assumption , but even so , our present exercise could still be instructive in yielding new theoretical insights . for example , a proposal for superluminal communication based on light amplification was eventually understood to fail because it violates the no - cloning theorem , a principle that had not been discovered at the time the proposal was made ( cf . ) . * _ we single out , in the following section , the key assumption responsible for the superluminality _ ( that alice s momentum measurement implements a polynomial superluminal gate ) . this singling out of the non - standard element at play makes it easier for the reader to judge whether the proposal is wrong , not even wrong , or , as we believe is the case , worth testing experimentally . * _ we have furnished computation- and information - theoretic grounds for why superluminal gates could be possible , _ according to which no - signaling could be a nearly - universal - but - not - quite side effect of the computation theoretic properties of physical reality ; elsewhere , we show how the relativity principle could be a consequence of conservation of information . in the last section , we clarify how non - complete measurements , if experimentally validated , could possibly fit in with known physics . there we will argue that they arise owing to the potential fact that practically measurable quantities resulting from quantum field theory are not described by hermitian operators , at variance with a key axiom of orthodox quantum theory . in the next section , we will consider a number of possible objections to our main result , and demonstrate quantitatively why each of them fails .
_ spreading at the direction filter . _ _ alice s focal plane measurement implements a constant gate in the subspace of interest . _ the state ( [ spdc ] ) is now represented in a simple way as the unnormalized state , where for simplicity the vacuum state , which does not contribute to the entanglement related effects , is omitted , and it is assumed that each mode contains at most one pair of entangled photons ( i.e. , no higher excitations of the light field ) . further , because of the direction filter , it suffices to restrict our attention to the state . the projection of onto , where is the subspace spanned by and , can be written as . the kraus operators within form a complete set , since , and so alice s measurement in the position basis does not nonlocally affect bob s reduced density operator , which is proportional to . on the other hand , if alice measures momentum , her measurement is represented by the field operator in eq . ( [ alicefjelda ] ) . we have in the above notation . this is just the polynomial superluminal gate in eq . ( [ eq : srikjanichwara ] ) , with the output basis given by , where is any basis element orthogonal to the vacuum state . we note that there exists an operator that would complete , in that in the space span . however , is necessarily non - physical in the given geometry , since modes 2 and 5 meet only at , where the electric field operator is indeed . we further note that , in spite of the non - completeness of , the structure of in eq . ( [ eq : srispdc ] ) guarantees that is by default normalized , and hence poses no problem with respect to probability conservation . by contrast , bob s measurement is complete ( which rules out bob - to - alice superluminal signaling ) . each element of bob s screen -basis is a possible outcome , described by the annihilation operator approximately of the form , where is the phase difference between the paths 2 and 5 from the slits to a point on bob s screen . this represents a povm of the form . although has the same form as alice s operator , being a kraus operator describing the absorption of two interfering modes at a point , when integrated over his whole ` position basis ' bob s measurement is seen to form a complete set , for , as can be shown , . _ role of the direction filter . _ a simple model of the action of the perfect direction filter is \( d \equiv \sum_{j=2,5} |j\rangle\langle j| + \sum_{j\ne 2,5} |\chi_j\rangle\langle j| \) , where each \( |\chi_j\rangle \) can be thought of as a state orthogonal to all the mode states and to the other such states , that removes the photon from the experiment , for example by reflecting it out or by absorption at the filter . it suffices for our purpose to note that can be described as a local , standard ( linear , unitary and hence complete ) operation . since the structure of qm guarantees that such an operation can not lead to nonlocal signaling , the conclusion is that the superluminal signal , if it exists , must remain _ even if the direction filter is absent _ . we will employ the notation .
to see that the nonlocal signaling is implicit in the state modified by alice s actions even without the application of the filter , we note the following : if alice measures ` momentum ' on the state and detects a signal photon at , she projects the corresponding idler into the state . similarly , her detection of a photon at projects the idler into the state , and her detection at projects the idler into the state . therefore , in the absence of the direction filter , alice s remote measurement of the idler s momentum leaves the idler in an ( assumed uniform for simplicity ) mixture given by . her momentum measurement is non - complete , since the summation over the corresponding projectors ( r.h.s . of eq . ( [ eq : srirhop ] ) ) is not the identity operation pertaining to the hilbert space spanned by the six modes . on the other hand , if alice remotely measures the idler s position , she leaves the idler in the mixture . here again , her position measurement is non - complete , reflected in the fact that the summation over the corresponding projectors ( r.h.s . of eq . ( [ eq : srirhoq ] ) ) is not . since , we are led to conclude that the violation of no - signaling _ is already implicit in the innsbruck experiment _ . yet , since bob measures in the -basis rather than the ` mode ' basis , in the absence of a direction filter , as is the case in the innsbruck experiment , bob s screen will not register any signal , for the following reason . in the case of alice s focal plane measurement , the integrated diffraction - interference pattern corresponding to different outcomes will wash out any observable interference pattern . on the other hand , in the case of alice s imaging plane measurement , each of bob s detections comes from the photon s incoherent passage through one or the other slit , and hence again no interference pattern is produced on his screen . thus , measurement at bob s screen plane without a direction filter will render effectively indistinguishable from . the role played by the direction filter is to prevent modal averaging in the case of alice s momentum measurement , by selecting one set of modes . the filter does not create , but only exposes , a superluminal effect that otherwise remains hidden . _ complementarity of single- and two - particle correlations . _ it is well known that path information ( or particle nature ) and interference ( or wave nature ) are mutually exclusive or complementary . in the two - photon case , this takes the form of mutual incompatibility of single- and two - particle interference , because entanglement can be used to monitor path information of the twin particle , and is thus equivalent to ` particle nature ' . one may thus consider single- and two - particle correlations as being related by a kind of complementarity relation that parallels wave- and particle - nature complementarity . a brief exposition of this idea is given in the following paragraph . for a particle in a double - slit experiment , we restrict our attention to the hilbert space spanned by the states and corresponding to the upper and lower slit .
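the non - completeness claims of the last two paragraphs can be checked in a toy version of the six - mode single - photon model ( the mode groupings below follow the qualitative description of the experiment given earlier ; the normalisations and the uniform weighting are assumptions made for illustration ) .

```python
import numpy as np

# toy check of the non-completeness of alice's remote momentum and position
# measurements in a six-mode single-photon space (assumed mode ordering).
e = np.eye(6)                                 # mode states |1>, ..., |6>

def proj(vec):
    v = vec / np.linalg.norm(vec)
    return np.outer(v, v)

# focal-plane ('momentum') outcomes project the idler onto pairs of parallel modes
P_momentum = [proj(e[1] + e[4]), proj(e[0] + e[3]), proj(e[2] + e[5])]
# imaging-plane ('position') outcomes project it onto the mode triples feeding one slit
P_position = [proj(e[3] + e[4] + e[5]), proj(e[0] + e[1] + e[2])]

for name, P in [("momentum", P_momentum), ("position", P_position)]:
    S = sum(P)
    print(name, "rank", np.linalg.matrix_rank(S), "complete:", np.allclose(S, np.eye(6)))

rho_p = sum(P_momentum) / len(P_momentum)     # idler mixture after a remote momentum measurement
rho_q = sum(P_position) / len(P_position)     # idler mixture after a remote position measurement
print("mixtures differ:", not np.allclose(rho_p, rho_q))
```

the complementarity discussion started in the preceding paragraph resumes below .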
given a density operator , we define the coherence by , a measure of the non - vanishing of cross - terms in the computational basis . the particle is initially assumed to be in the state , and a `` monitor '' , initially in the state , interacts with it by means of an interaction parametrized by a variable that determines the entangling strength of . it is convenient to choose , where cnot is the operation , where is the pauli operator . under the action of , the system particle goes to the state \( \rho_s = {\rm tr}_m [\,\cdot\,] = \tfrac{1}{2}\mathbb{1} + \tfrac{1}{2}\left[ (\cos\theta + i\sin\theta)\cos\theta \, |0\rangle\langle 1| + {\rm c.c.} \right] \) , where \( {\rm tr}_m \) indicates taking the trace over the monitor . applying the above formula for coherence to , we calculate that the coherence is . we let denote the eigenvalues of . quantifying the degree of entanglement by the concurrence , we have . we thus obtain a trade - off between coherence and entanglement given by , a manifestation of the complementarity between single - particle and two - particle interference . in the context of the proposed experiment , this could raise the following purported objection to our proposed signaling scheme : as the experiment happens in the near - field regime , where two - particle correlations are strong , one would expect that bob should not find an interference pattern in his singles counts . yet , contrary to this expectation , eq . ( [ ram ] ) implies that such an interference pattern does appear . the reason is that in the focal plane measurement , alice is able to erase her path information in the subspace , but , by virtue of the associated non - completeness , she does so in _ only one _ way , viz . via the non - complete operation associated with her measurement . if her measurement were _ complete _ , she would erase path information in more than one way , and the corresponding conditional single - particle interference patterns would mutually cancel each other in the singles count . this is clarified in the following section . _ polarization and ` interferometric quantum computing ' . _ -like gates describe the situation where two converging modes at the path singularity have the _ same _ polarization . the quantum optics formalism implies that if the polarizations of the two incoming modes are not parallel when interfering , then the polarization states add vectorially ( that is , superpose ) , with amplitudes being added componentwise along each polarization / dimension , and the resulting probability being the squared magnitude of this vector sum . one can define a corresponding more general constant gate ( a tensor sum of constant gates over the internal dimensions ) , and a correspondingly potentially larger * bqp * . it can be shown that theorem [ thm : bqpc ] still holds . here we will content ourselves with illustrating it by a simple example . suppose we have this ` interferometric quantum computer ' : a -level atom , whose spin part is prepared initially in the state , yielding , i.e.
, a particle is detected with exponentially low probability , and detection leaves the particle in the state . the oracle together with detection at the path singularity is equivalent to the non - complete operation . if the marked state is designated to be a possible solution to a sat problem , the measurement would have to be repeated an exponentially large number of times , or performed once on an exponentially large number of atoms , to detect a possible ` yes ' outcome . either way , the physical situation is compatible with the wnhe assumption , but not with no - signaling . ( we observe that augmenting the detection with a renormalization following the vector addition would in fact implement the post - selection gate . ) finally , let us clarify the sense in which non - complete operations like , of potential physical interest , may _ effectively _ conform to probability conservation . in the modified innsbruck experiment , alice s application of conforms _ exactly _ to probability conservation , because the state in eq . ( [ eq : srispdc ] ) has a schmidt form , with defined in bob s schmidt basis . however , this is not the general situation . in such cases , one seems to find that the spreading of the wavefunction produces a pattern of bright and dark interferometric fringes at and around the path singularity such that , even though locally there is an excess or deficit over the average probability density , there is still an overall conservation of probability across the fringes . this is somewhat comparable to the situation with bob s povm , which , even though locally a -like operation , still yields the identity when integrated over . this conservation mechanism is not applicable to the modified innsbruck experiment , which is performed in the near - field limit , where spreading is minimal and two - particle correlations are strong . however , as noted above , probability conservation is inherently exact for the situation in the experiment , and the mechanism need not be invoked . as an illustration of the mechanism , let the angle at which the two interfering beams of the ` interferometric quantum computer ' converge towards a spatial overlap region be . the fringes are given by a stationary pattern with spatial frequency , where is the spatial separation between two optical elements ( say , mirrors ) that are , respectively , reflecting the beams along the two interferometric arms towards , and the distance from the central point between these mirrors to the center of the region . the width of each fringe is about . now the initial beam width must be of the order of several wavelengths , and the diffractive spread rate of each beam is at least , so that the beam width . thus , the spreading of ( quantum ) waves guarantees that there will always be compensatory fringes , and hence overall conservation of probability , even though locally the dark and bright bands contain less or more than the average probability density .
applied to the above atom interferometer example , the state vector at the interference screen will have the form , with running from to , where is a narrow gaussian - like function centered at . when , one obtains dark fringes with the ` solution ' at exponentially low intensity , as noted above . when , one obtains bright fringes of nearly maximal intensity , diminished by only an exponentially small amount , corresponding , again , to the ` solution ' . thus the interference pattern is a band of bright and dark fringes at spatial frequency , with the bright ones very slightly dimmer than if and had the same polarization , and the dark ones very slightly brighter . considering the far - reaching implications of a positive result of our proposed experiment , even though we have ruled out in section [ sec : sriung ] all the ( as far as we know ) obvious objections , we have to remain open to the possibility that there may be a subtle error , possibly a hidden unwarranted assumption , somewhere in our analysis . in the surprising event that the proposed experiment yields a positive outcome , no - signaling would no longer be a universal condition , and the issue of the ` speed of quantum information ' would assume practical significance . it would also bolster the case for believing that the wnhe assumption is a basic principle of quantum physics , and that considerations of intractability , and by extension uncomputability , can serve as an informal guide to basic physics . physical space would be regarded as a type of information , and physical dynamics as a kind of computation , with physical separation not being a genuine obstacle to rapid communication in the way it would be when seen from the perspective of causality in conventional physics . on the other hand , the barrier between polynomial - time and hard problems would be real , and the physical existence of superluminal signals would thus not be as surprising as that of exponential gates . interestingly , polynomial superluminal operations exist even in classical computation theory . the random access machine ( ram ) model , a standard model in computer science wherein memory access takes exactly one time - step irrespective of the physical location of the memory element , illustrates this idea . rams are known to be polynomially equivalent to turing machines . even granting that the non - complete gate turns out to be physically valid and realizable , there remains another important issue : how would non - completeness fit in with the known mathematical structure of the quantum properties of particles and fields ? we venture that the answer has to do with the nature of , and relationship between , observables in qm on the one hand , and those in quantum optics , and more generally in quantum field theory ( qft ) , on the other hand . it is frequently claimed that qft is just the standard rules of first quantization applied to classical fields , but this position can be criticized . for example , the relativistic effects of the integer - spin qft imply that the wavefunctions describing a fixed number of particles do not admit the usual probabilistic interpretation . again , fermionic fields do not really have a classical counterpart and do not represent quantum observables .
in practice ,measurable properties resulting from a qft are properties of particles of photons in quantum optics .particulate properties such as number , described by the number operator constructed from fields , or the momentum operator , which allows the reproduction of single - particle qm in momentum space , do not present a problem .the problem is the _ position _ variable , which is considered to be a parameter , and not a hermitian operator , both in qft and single - particle relativistic qm , and yet relevant experiments measure particle positions .the experiment described in this work involves measurement of the positions of photons , as for example , alice s detection of photons at points on the imaging or focal plane , or bob s detection at points on the -plane , respectively .there seems to be no way to derive from qft the experimentally confirmed born rule that the nonrelativistic wavefunction determines quantum probabilities of particle positions . in most practical situations , this is really not a problem .the probabilities in the above experiment were computed according to standard quantum optical rules to determine the correlation functions at various orders , which serve as an effective wavefunction of the photon , as seen for example from eqs .( [ eq : coinc ] ) . in qft, particle physics phenomenologists have developed intuitive rules to predict distributions of particle positions from scattering amplitudes in _ momentum _ space . nevertheless , there is a problem in principle , and this leads us to ask whether qft is a genuine quantum theory . if we accept that properties like position are valid observables in qm , the answer seems to be ` no ' .we see this again in the fact that the effective momentum and position observables that arise in the above experiment are not seen to be hermitian operators of standard qm ( cf . note ) .further , non - complete operations like , disallowed in qm , seem to appear in qft .this suggests that it is qm , and not qft , that is proved to be strictly non - signaling by the no - signaling theorem . since nonrelativistic qm and qft are presumably not two independent theories describing entirely different objects , but do describe the same particles in many situations , the relationship between observables in the two theories needs to be better understood .perhaps some quantum mechanical observables are a coarse - graining of qft ones , having wide but not universal validity .for example , alice s detection of a photon at a point in the focal plane was quantum mechanically understood to project the state of bob s photon into a one - dimensional subspace corresponding to a momentum eigenstate .quantum optically , however , this ` eigenstate ' is described as a superposition of a number of parallel , in - phase modes originating from different down - conversion events in the non - linear crystal , producing a coherent plane wave propagating in a particular direction .100 a. einstein , n. rosen , and b. podolsky .is the quantum - mechanical description of physical reality complete ?phys . rev . *47 * , 777 ( 1935 ) .p. h. eberhard , nuovo cimento 46b , 392 ( 1978 ) ; c. d. cantrell and m. o. scully , phys . rep . * 43 * , 499 ( 1978 ) .g. c. ghirardi , a. rimini , and t. weber , lett .nuovo cimento * 27 * , 293 ( 1980 ) .p. j. bussey , phys .lett . * a 90 * , 9 ( 1982 ) .t. f. jordan , phys .lett . * 94*a , 264 ( 1983 ) .a. j. m. garrett , found .phys . * 20 * 381 ( 1990 ) ; a. shimony , in _ proc . of the int .symp . 
on foundations of quantum mech .s. kamefuchi et al .japan , tokyo , 1993 ) ; r srikanth , phys lett . * a 292 * , 161 ( 2002 ) .j. s. bell , physics * 1 * , 195 ( 1964 ) ; j. f. clauser , m. a. horne , a. shimony , and r. a. holt , phys .* 23 * , 880 ( 1969 ) ; a. aspect , p. grangier and g. roger , phys .* 49 * , 91 ( 1982 ) ; w. tittel , j. brendel , h. zbinden and n. gisin , phys .lett * 81 * 3563 ( 1998 ) ; g. weihs , t. jennewein , c. simon , h. weinfurter and a. zeilinger , physlett . * 81 * , 5039 ( 1998 ) ; p. werbos , arxiv:0801.1234 . a. m. gleason .measures on the closed subspaces of a hilbert space . j. math .mech . , * 6 * , 885 ( 1957 ) .a. peres , _ quantum mechanics : concepts and methods _ , ( kluwer , dordrecht , 1993 ) .c. simon , v. buek and n. gisin , phys .87 * , 170405 ( 2001 ) ; _ ibid . _* 90 * , 208902 ( 2003 ) .h. halvorson , studies in history and philosophy of modern physics 35 , 277 ( 2004 ) ; quant - ph/0310101 .g. brassard , h. buhrman and n. linden et al .a limit on nonlocality in any world in which communication complexity is not trivial .phys . rev .* 96 * 250401 ( 2006 ) .r. srikanth .the quantum measurement problem and physical reality : a computation theoretic perspective .aip conference proceedings * 864 * , ( ed .d. goswami ) 178 ( 2006 ) ; quant - ph/0602114v2 . in complexitytheory , * rp * is the class of decision problems for which there exists a probabilistic tm ( a deterministic tm with access to genuine randomness ) such that : it runs in polynomial time in the input size . if the answer is ` no ' , it returns ` no ' . if the answer is ` yes ' , it returns ` yes ' with probability at least 0.5 ( else it returns ` no ' ) . *bqp * is the class of decision problems solvable by a _tm in polynomial time , with error probability of at most 1/3 ( or , equivalently , any other fixed fraction smaller than 1/2 ) independently of input size .m. a. nielsen and i. chuang , _ quantum computation and quantum information _ , ( cambridge 2000 ) .e. witten .quantum field theory and the jones polynomial .phys . * 121 * , 351 ( 1989 ) .m. freedman ._ p_/_pn _ and the quantum field computer .usa * 95 * , 98 ( 1998 ) . c. s. calude, m. a. stay . from heisenberg to goedel via chaitin .44 1053 ( 2005 ) ; quant - ph/0402197 .s. aaronson , quant - ph/0401062 ; _ ibid ._ , quant - ph/0412187 . s. aaronson .np - complete problems and physical reality .acm sigact news * 36 * 30 ( 2005 ) ; quant - ph/0502072 .d. s abrams and s. lloyd .nonlinear quantum mechanics implies polynomial - time solution for np - complete and # p problems .phys . rev .* 81 * , 3992 ( 1998 ) .s. aaronson .the limits of quantum computers . scientific american * 42 * , march 2008 . j. gruska . quantum informatics paradigms and tools for qipc .backaction quantum computing : back action 2006 , iit kanpur , india , march 2006 , ed .d. goswami , aip conference proceedings .110 ( 2006 ) .that is , ` the universe is not hard enough to _ not _ be simulable using polynomial resources ' .the expression is non - technically related to the statement the world is not enough `` ( `` _ orbis non sufficit _ '' ) , the family motto of , as well as a motion picture featuring , a well known anglo - scottish secret agent ! c. h. bennett , d. leung , g. smith , j. a. smolin .can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems ?s. 
weinberg ._ dreams of a final theory _ ( vintage 1994 ) .more formally , * bpp * is the class of decision problems solvable by a probabilistic tm in polynomial time , with error probability of at most 1/3 ( or , equivalently , any other fixed fraction smaller than 1/2 ) independently of input size .a. bassi and g. ghirardi . a general argument against the universal validity of the superposition principle .lett . a 275 ( 2000 ) ; quant - ph/0009020 .r. srikanth .no - signaling , intractability and entanglement .eprint 0809.0600 .n. gisin , helv .physica acta * 62 * , 363 ( 1989 ) ; phys . lett . * a * 143 , 1 ( 1990 ) . j. polchinski .lett . * 66 * , 397 ( 1991 ) .g. svetlichny .nonlinear quantum mechanics at the planck scale .* 44 * , 2051 ( 2005 ) : quant - ph/0410230 .g. svetlichny .informal resource letter nonlinear quantum mechanics .quant - ph/0410036 .g. svetlichny .amplification of nonlocal effects in nonlinear quantum mechanics by extreme localization .quant - ph/0410186 .g. svetlichny .nonlinear quantum gravity .j.geom.symmetry phys .6 ( 2006 ) 118 ; quant - ph/0602012 .g. svetlichny .quantum formalism with state - collapse and superluminal communication .foundations of physics , 28 131 ( 1998 ) ; quant - ph/9511002 .a. sen - de and u. sen .testing quantum dynamics using signaling .a * 72 * , 014304 ( 2005 ) . is the class of decision problems solvable by a turing machine in polynomial ( memory ) space possibly taking exponential time . in complexity theory ,* pp * is the class of decision problems for which there exists a polynomial time probabilistic tm such that : if the answer is ` yes ' , it returns ` yes ' with probability greater than , and if the answer is ` no ' , it returns ` yes ' with probability at most .grover , l. k. quantum mechanics helps in searching for a needle in a haystack .lett . * 79 * , 325 ( 1997 ) . c. bennett , e. bernstein , g. brassard and u. vazirani . strengths and weaknesses of quantum computing .quant - ph/9701001 .how the no - cloning theorem got its name .quant - ph/0205076 .r. srikanth and s. banerjee .an environment - mediated quantum deleter .lett . a * 367 * , 295 ( 2007 ) ; quant - ph/0611161 .r. ghosh and l. mandel , phys .lett . * 59 * , 1903 ( 1987 ) ; p. g. kwiat , a. m. steinberg and r. y. chiao , phys . rev . * a 47 * , 2472 ( 1993 ) ; d. v. strekalov , a. v. sergienko , d. n. klyshko , and y. h. shih , phys . rev . lett . * 74 * , 3600 ( 1995 ) ; t. b. pittman , y. h. shih , d. v. strekalov , and a. v. sergienko , phys .rev . * a 52 * , r3429 ( 1995 ) ; y. -h .kim , r. yu , s. p. kulik , and y. shih , m. o. scully , phys .lett . * 84 * , 1 ( 2000 ) .a. zeilinger .experiment and foundations of quantum physics .phys . * 71 * , s288 ( 1999 ) .b. dopfer .zwei experimente zur interferenz von zwei - photonen zustnden : ein heisenbergmikroskop und pendellsung .thesis ( university of innsbruck , 1998 ) .r. srikanth , pramana * 59 * , 169 ( 2002 ) ; _ ibid ._ quant - ph/0101023 .r. srikanth .quant - ph/9904075 ; _ ibid ._ quant - ph/0101022 .j. g. cramer , w. g. nagourney and s. mzali . a test of quantum nonlocal communication .cenpa annual report ( 2007 ) ; + http://www.npl.washington.edu/npl/int_rep/qm_nl.html and http://faculty.washington.edu/jcramer/nls/nl_signal.htm .r. jensen , staif-2006 proc ., m. el - genk , ed .1409 ( aip , 2006 ) ; http://casimirinstitute.net/coherence/jensen.pdf ( 2006 ) . h. nikoli . 
is quantum field theory a genuine quantum theory ?foundational insights on particles and strings .how the no - cloning theorem got its name .arxiv : quant - ph/0205076 . c. m. chandrashekar , subhashish banerjee , r. srikanth .relationship between quantum walk and relativistic quantum mechanics .eprint arxiv:1003.4656 .we can then define alice s ( remote ) ` momentum observable ' as .interpreted as a quantum field theoretic observable , is non - complete because the projectors to its eigenstates , and do not sum to .similarly , alice s non - complete ` position ' observable is .but note that s projection into the subspace is indeed a complete observable . and are of rank 3 and 2 , respectively , which is smaller than 6 , the dimension of the relevant hilbert subspace .abouraddy , et al .demonstration of the complementarity of one- and two - photon interference .a * 63 * , 063803 ( 2001 ) .s. bose and d. home . generic entanglement generation , quantum statistics , and complementarity .lett . * 88 * , 050401 ( 2002 ) .d. salart , a. baas , c. branciard , n. gisin , and h. zbinden . testing the speed of ` spooky action at a distance ' .nature 454 , 861 ( 2008 ) ; arxiv:0808.331v1 .s. skiena , _ the algorithm design manuel _ , springer ( 1998 ) .h. d. zeh .there is no ' ' first " quantization .a * 309 * , 329 ( 2003 ) ; quant - ph/0210098 .h. nikoli .there is no first quantization - except in the de broglie - bohm interpretation .quant - ph/0307179 .r. j. glauber .the quantum theory of optical coherence .phys . rev .130 , 2529 ( 1963 ) .w. k. wootters .entanglement of formation and concurrence .info . and comput .* 1 * , 27 ( 2001 ) . | we consider the problem of deriving the no - signaling condition from the assumption that , as seen from a complexity theoretic perspective , the universe is not an exponential place . a fact that disallows such a derivation is the existence of _ polynomial superluminal _ gates , hypothetical primitive operations that enable superluminal signaling but not the efficient solution of intractable problems . it therefore follows , if this assumption is a basic principle of physics , either that it must be supplemented with additional assumptions to prohibit such gates , or , improbably , that no - signaling is not a universal condition . yet , a gate of this kind is possibly implicit , though not recognized as such , in a decade - old quantum optical experiment involving position - momentum entangled photons . here we describe a feasible modified version of the experiment that appears to explicitly demonstrate the action of this gate . some obvious counter - claims are shown to be invalid . we believe that the unexpected possibility of polynomial superluminal operations arises because some practically measured quantum optical quantities are not describable as standard quantum mechanical observables . |
The task of recovering an unknown low-rank matrix from a small number of measurements appears in a variety of contexts. Examples are collaborative filtering in machine learning, quantum state tomography in quantum information, the estimation of covariance matrices, and face recognition. If the measurements are linear, the technical problem reduces to identifying the lowest-rank element in an affine space of matrices. In general, this problem is NP-hard, and it is thus unclear how to approach it algorithmically. In the wider field of compressed sensing, the strategy for treating such problems is to replace the complexity measure (here, the rank) with a tight convex relaxation. For many relevant problems it can be rigorously proved that the resulting convex optimization problem has the same solution as the original one, while at the same time admitting an efficient algorithm. The tightest (in a certain sense) convex relaxation of the rank is the _nuclear norm_, i.e. the sum of the singular values. Minimizing the nuclear norm subject to linear constraints is a semidefinite program, and a great number of rigorous performance guarantees have been provided for low-rank reconstruction via nuclear norm minimization. The geometry of convex reconstruction schemes is now well understood (cf. figure [fig:geometry]). Starting with a convex regularizer (e.g. the nuclear norm), geometric proof techniques like Tropp's bowling scheme or Mendelson's small ball method bound the reconstruction error in terms of the descent cone of the regularizer at the matrix that is to be recovered. Moreover, these arguments suggest that the error would decrease if another convex regularizer with a smaller descent cone were used. This motivates the search for new convex regularizers that (i) are efficiently computable and (ii) have a smaller descent cone at particular points of interest. In this work, we introduce such an improved regularizer based on the _diamond norm_. This norm plays a fundamental role in quantum information and operator theory. For this work, it is convenient to also use a variant of the diamond norm that we call the _square norm_. While not obvious from its definition, it has been found that the diamond norm can be computed efficiently by means of a semidefinite program (SDP). Starting from one such SDP characterization, we identify the set of matrices for which the square norm's descent cone is contained in the corresponding cone of the nuclear norm. As a result, low-rank matrix recovery guarantees that have been established by analyzing the nuclear norm's descent cone remain valid for square norm regularization, provided that the matrix of interest belongs to this set. What is more, bearing in mind the reduced size of the square norm's descent cone, we actually expect improved recovery, and our numerical studies indeed show improved performance. Going beyond low-rank matrix recovery, we identify several applications.
In physics, we present numerical experiments showing that the diamond norm offers improved performance for _quantum process tomography_. The goal of this important task is to reconstruct a quantum process from suitable preparations of inputs and measurements on outputs (generalizing quantum _state_ tomography, for which low-rank methods have been studied extensively). We then identify applications to problems from the context of signal processing. These include matrix versions of the _phase retrieval problem_, as well as a matrix version of the _blind deconvolution problem_. Recently, a number of _bi-linear problems_ combined with sparsity or low-rank structures have been investigated in the context of compressed sensing, with first progress on recovery guarantees being reported. The present work can be seen as a contribution to this recent development. We conclude the introduction on a more speculative note. The diamond norm is defined for linear maps taking operators to operators, i.e., for objects that can also be viewed as order-4 tensors. We derive a characterization of those maps for which the diamond norm offers improved recovery, and find that it depends on this order-4 tensorial structure. In this sense, the present work touches on an aspect of the notoriously difficult _tensor recovery problem_ (no canonical approach or reference seems to have emerged yet, but see ref. for an up-to-date list of partial results). In fact, the ``tensorial nature'' of the diamond norm was the original motivation for the authors to consider it in more detail as a regularizer, even though the eventual concrete applications we found do not seem to have a connection to tensor recovery. It would be interesting to explore this aspect in more detail. In this section, we introduce notation and mathematical preliminaries used to state our main results. We start by clarifying some notational conventions. In particular, we introduce certain matrix norms and the partial trace for operators acting on a tensor product space. Moreover, we summarize a general geometric setting for the convex recovery of structured signals. Throughout this work we focus exclusively on finite dimensional, mostly complex, vector spaces, whose elements we denote by lower case Latin letters, e.g. . Furthermore, we assume that each vector space is equipped with an inner product (or simply for short) that is linear in the second argument. Such an inner product induces the Euclidean norm and moreover defines a conjugate linear bijection from the space to its dual: to any vector we associate a dual vector, which is uniquely defined via . The vector space of linear maps from to is denoted by . Its elements, being _operators_, are denoted by capital Latin letters (e.g.
) and often we also refer to them as matrices. When dealing with endomorphisms, we write for the sake of notational brevity. The adjoint of an operator is determined by for all and , and we call an operator self-adjoint, or Hermitian, if . A self-adjoint operator is positive semidefinite if it has a non-negative spectrum. A particularly simple example of such an operator is the identity operator . The set of positive semidefinite operators in forms a convex cone, which we denote by . This cone induces a partial ordering on , and we write if . On we define the Frobenius (or Hilbert-Schmidt) inner product to be , where denotes the trace of an operator. In addition to that, we are going to require three different matrix norms: . The Frobenius norm is induced by the inner product, while the nuclear norm requires the operator square root: for we let be the unique positive semidefinite operator obeying . Note that these norms correspond to the Schatten- , Schatten- and Schatten- norms, respectively. All Schatten norms are multiplicative under taking tensor products. The Frobenius norm is preserved under any re-grouping of indices, the prime example of such an operation being the vectorization of matrices. This fact justifies our convention to extend the notation to the -norms of vectors and (later on) tensors. A crucial role is played by the space of _bipartite operators_, by which we refer to operators that act on a tensor product space . For such operators we define the _partial trace_ as the linear extension of the map given by , where and ; see also figure [fig:tensor_diagram_trw]. Finally, we define our improved regularizer on to be . It is easy to see that this is a norm, and we call it the _square norm_. It will become clear later on that the square norm is closely related to the diamond norm from quantum information theory. As we will discuss in section [sec:notation_maps], , where denotes the so-called Choi-Jamiołkowski isomorphism. Both the square and the diamond norm can be calculated by a semidefinite program (SDP) satisfying strong duality. Also, note that the pair is admissible in the maximization. Inserting it recovers and establishes the bound . This bound plays a crucial role for our results. [Figure [fig:tensor_diagram_trw]: wiring diagrams of a bipartite operator and of its partial trace over the first tensor factor; the drawing code is omitted here.] In this section, we summarize a recent but already widely used geometric proof technique for low-rank matrix recovery. Mainly following ref. , we devote this section to explaining the general reconstruction idea.
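Before moving on to the geometric picture, the partial trace defined above is easy to make concrete numerically. The following numpy sketch is only an illustration; the block index convention (first tensor factor W, row index k*dV + i) and the helper name partial_trace_W are our own assumptions rather than notation from the text.

```python
import numpy as np

def partial_trace_W(X, dW, dV):
    """Trace out the first tensor factor of a bipartite operator X acting on W (x) V.

    Assumed index convention: row/column index k * dV + i, with k labelling W and
    i labelling V, so that X decomposes into dW x dW blocks of size dV x dV.
    """
    blocks = X.reshape(dW, dV, dW, dV)        # axes: (k, i, l, j)
    return np.einsum('kikj->ij', blocks)      # set l = k and sum over k

# sanity check: tr_W(A (x) B) = tr(A) * B
rng = np.random.default_rng(1)
dW, dV = 2, 3
A = rng.standard_normal((dW, dW)) + 1j * rng.standard_normal((dW, dW))
B = rng.standard_normal((dV, dV)) + 1j * rng.standard_normal((dV, dV))
assert np.allclose(partial_trace_W(np.kron(A, B), dW, dV), np.trace(A) * B)
```

The contraction carried out by the einsum call is exactly the operation that the wiring diagram of figure [fig:tensor_diagram_trw] depicts.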
in the setting of convex recovery of structured signals ,one obtains a _ measurement vector _ of a _ signal _ in some vector space via a _ measurement map _ , where represents additive noise in the sampling process . throughout ,we assume linear data acquisition , i.e. , is linear .the goal is to efficiently obtain a good approximation to given and for the case where one only has knowledge about some structure of .of course , it is desirable that the number of measurements required for a successful reconstruction is as small as possible . for several different structures of the signal a general approach of the following formhas proven to be very successful .one chooses a convex function that reflects the structure of and performs the following convex minimization where is some anticipated error bound .next , we give two definitions and a general error bound that has proven to be helpful to find such recovery guarantees .the _ descent cone _ of a convex function is the set of non - increasing directions . from the convexity of the function, it follows that the descent cone is a convex cone .the following definitions can also be found , e.g. , in ref . .[ def : dc ] the _ descent cone _ of a proper convex function at the point is the _ minimum singular value _ of a linear map is the minimal value of taken over all with .restricting this minimization to a cone yields the _ minimum conic singular value_. let be a linear map and be a cone .the minimum singular value of with respect to the cone is defined as [ prop : general_reconstruction ] let be a signal , be a measurement map , a vector of measurements with additive error , and be the solution of the optimization . if then note that the statement in ref. shows this result for real vector spaces only .however , taking a closer look at the proof reveals that it also holds for complex vector spaces as well .we make the following simple but important observation : [ obs : smaller_dc ] the smaller the descent cone the better the recovery guarantee .an important example is low - rank matrix recovery . here, is some matrix with and a low - rank provides structure that allows for reconstruction from a dimension sufficient number of measurements . for this case , choosing to be the nuclear norm has proven very successful , as the nuclear norm is the convex envelope of the matrix rank . in order to give a concrete bound , consider a real matrix and measurements with each being a real random matrix with entries drawn independently from a normalized gaussian distribution. then one can show that ( see , e.g. , ref . ) with probability ( over the random measurements ) . as a consequence ,a number of measurements are enough for a successful reconstruction of the real - valued matrix with high probability .we show that for certain structured recovery problems , replacing the regularizer in a convex recovery by an optimized regularizer can potentially improve performance ; see also figure [ fig : geometry ] . 
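Before turning to the improved regularizer, the Gaussian-measurement baseline just described can be prototyped in a few lines. The following cvxpy sketch is purely illustrative: the toy dimensions, the variable names, the noise tolerance and the choice of the SCS solver are assumptions on our part, not taken from the text.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 20, 2                    # toy matrix size and rank (assumed values)
m = 5 * r * n                   # number of measurements, of the order r * n as in the text

X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
A = rng.standard_normal((m, n, n))                               # Gaussian measurement matrices
y = np.array([np.sum(A[k] * X0) for k in range(m)])              # y_k = <A_k, X0>_F (noiseless)

X = cp.Variable((n, n))
measurements = cp.hstack([cp.sum(cp.multiply(A[k], X)) for k in range(m)])
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),                 # nuclear norm regularizer
                     [cp.norm(measurements - y, 2) <= 1e-6])     # data-fidelity constraint
problem.solve(solver=cp.SCS)
print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```

With a number of measurements on the order of r*n, such a toy run typically recovers the ground truth up to small numerical error, in line with the sampling-rate statement above.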
For the case where the regularizer is the nuclear norm and the improved regularizer is the square norm, we show such an improvement with numerical simulations in section [sec:application_maps]. [Figure [fig:geometry]: geometry of convex recovery, sketching descent cones of different regularizers at the signal together with the kernel of the measurement map and the noise ball; the drawing code is omitted here.] [prop:construction] Let be a convex set and be a compact index set. Moreover, let be a family of upper semi-continuous convex functions. Define another convex function as the point-wise supremum . Then for any , , where is the active index set at , with the convention . By we will denote the cone generated by a set. According to definition [def:dc], we have . Writing the supremum as an intersection yields . By we denote the ball around the origin of radius . Now, consider a non-active index . As is upper semi-continuous, there exists such that for all we have . Hence the set , and thus the corresponding cone in eq. , is the entire space. Therefore, every non-active index can be omitted in the intersection, and the definition of the descent cone of finishes the proof. The square norm is a particular instance of such a supremum over nuclear norms. Thanks to the following nuclear norm bound, proposition [prop:construction] can lead to an improved recovery for any bipartite operator satisfying . Here, we will only need the lower bound on the square norm, but in order to fully relate it to the usual matrix norms, we also provide two upper bounds.
[prop : bounds_on_diamond_norm ] for any our second main result fully characterizes the set of operators satisfying eq . .[ thm : extremality ] let be a bipartite operator .. holds if and only if for now , we content ourselves with sketching the proof idea and present the full proof later . for the case where eq .is satisfied , we exploit it to single out a primal feasible optimal point .exact knowledge of this point together with complementary slackness then allow to severely restrict the range of possible dual optimal points .relation is an immediate consequence of these restrictions . to show the converse, we insert a particular feasible point into the dual sdp of the square norm .. enables us to explicitly evaluate the objective function at this point . doing so yields which in turn implies by weak duality . combining this implication with the converse bound from proposition [ prop : bounds_on_diamond_norm ] establishes , as claimed . as an implication of theorem [ thm : extremality ] and proposition [ prop : construction ]we obtain the following .[ cor : subset ] let satisfy eq . .then where contains all with and being active in the sense that . for instance, setting gives an element of and yields the inclusion for any satisfying eq . .as an immediate application , we will see in the next section that the square norm inherits recovery guarantees from the nuclear norm .in this section we focus on low - rank matrix recovery of hermitian bipartite operators that are either real - valued or complex - valued . as already mentioned in section [ sub : convex_recovery ] , therethe task is to efficiently recover an unknown matrix of low - rank from noisy linear measurements of the form where are the measurement matrices and denotes additive noise in the sampling process . by introducing a measurement map of the form , where denotes the standard basis in , the entire measurement process can be summarized as here, contains all measurement outcomes and denotes the noise vector .if a bound on the noise is available , many measurement scenarios have been identified where estimating by from noisy data of the form stably recovers .note that by employing the well - known sdp formulation of the nuclear norm this optimization can be recast as & { \mathrm{subject\ to } } & \begin{pmatrix } y & -x \\-x{^\dagger } & z \end{pmatrix } \succeq 0 \ , , \\[.9em ] & & y , z \in \operatorname{pos}({{\mathcal{w}}}\otimes { { \mathcal{v } } } ) \ , , \\ & & { { \left\vert { { \mathcal{a}}}(x)-y \right\vert}_{\mathrm{f } } } \leq \eta \ , .\end{array } \label{eq : nnorm_reconstruction}\ ] ] what is more , several of these recovery guarantees can be established using the geometric proof techniques presented in section [ sub : convex_recovery ] . 
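Both the nuclear norm program above and its square-norm analogue presented in the next paragraph are semidefinite programs, so they can be prototyped directly. The following cvxpy sketch of the square-norm-regularized reconstruction is a non-authoritative illustration: the function name, the use of the spectral norm on the partial traces in the objective (chosen to match Watrous's SDP for the diamond norm), and the solver choice are assumptions on our part.

```python
import numpy as np
import cvxpy as cp

def square_norm_reconstruction(A_list, y, dW, dV, eta):
    """Sketch: recover a bipartite operator X on W (x) V from data y_k ~ <A_k, X>
    by square norm minimization, written as the SDP
        min (dim V / 2) * (||tr_W(Y)|| + ||tr_W(Z)||)
        s.t. [[Y, -X], [-X^dag, Z]] >= 0,  ||A(X) - y||_2 <= eta.
    The operator norm on the partial traces is an assumption here (it matches
    Watrous's SDP for the diamond norm); names and solver choice are illustrative.
    """
    D = dW * dV
    X = cp.Variable((D, D), complex=True)
    M = cp.Variable((2 * D, 2 * D), hermitian=True)   # M plays the role of [[Y, -X], [-X^dag, Z]]
    Y, Z = M[:D, :D], M[D:, D:]                       # Y, Z >= 0 follows from M >= 0

    def tr_W(B):   # partial trace over the first tensor factor (sum of diagonal blocks)
        return sum(B[k * dV:(k + 1) * dV, k * dV:(k + 1) * dV] for k in range(dW))

    measured = cp.hstack([cp.trace(A.conj().T @ X) for A in A_list])
    constraints = [M >> 0,
                   M[:D, D:] == -X,
                   cp.norm(measured - y, 2) <= eta]
    objective = (dV / 2) * (cp.sigma_max(tr_W(Y)) + cp.sigma_max(tr_W(Z)))
    cp.Problem(cp.Minimize(objective), constraints).solve(solver=cp.SCS)
    return X.value
```

Recovery guarantees stating when this program performs at least as well as nuclear norm minimization are the subject of the paragraphs that follow.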
for results established that way, combining observation [ obs : smaller_dc ] with corollary [ cor : subset ] allows us to draw the following conclusion .[ imp : inheriting ] for bipartite operators that satisfy , any recovery guarantee for nuclear norm minimization , which is based on the nuclear norm s descent cone , also holds for square norm minimization .this insight indicates that replacing nuclear norm regularization by results in an estimation procedure that performs at least as well whenever .in fact , observation [ obs : smaller_dc ] suggests that it may actually outperform traditional recovery procedures .also , the sdp formulation for the square norm allows one to recast the optimization as & { \mathrm{subject\ to } } & \begin{pmatrix } y & -x \\-x{^\dagger } & z \end{pmatrix } \succeq 0 \ , , \\[.9em ] & & y , z \in \operatorname{pos}({{\mathcal{w}}}\otimes { { \mathcal{v } } } ) \ , , \\ & & { { \left\vert { { \mathcal{a}}}(x)-y \right\vert}_{\mathrm{f } } } \leq \eta \ , , \end{array}\ ] ] which , just like the optimization , is a convex optimization problem that can be solved computationally efficiently . in the remainder of this section, we present three measurement scenarios for which implication [ imp : inheriting ] holds .the first one is a version of ref .* example 4.4 ) which is valid for reconstructing real - valued matrices . in its original formulation with nuclear norm minimization, it follows from combining proposition [ prop : general_reconstruction ] and eq . .[ prop : gaussian_cs ] let be a real valued , bipartite matrix of rank that obeys .also , suppose that each measurement matrix is a real - valued standard gaussian matrix and the overall noise is bounded as . then, noisy measurements of the form suffice to guarantee with probability at least . here , , and denote absolute constants . with high probability ( w.h.p .) , this statement assures _ stable _ recovery , meaning that the reconstruction error scales linearly in the noise bound and inversely proportional to . for the sake of clarity ,we have refrained from providing explicit values for the constants and in proposition [ prop : gaussian_cs ] . however, resorting to tropp s bound on the minimal conical eigenvalue of a gaussian sampling matrix reveals that stably recovering any rank- matrix obeying eq .requires roughly independently selected gaussian measurements .proposition [ prop : gaussian_cs ] is a prime example for a _ non - uniform _ recovery guarantee : for any fixed rank- matrix obeying eq . , randomly chosen measurements of the form suffice to stably reconstruct w.h.p . for some measurement scenarios ,stronger recovery guarantees can be established .called _ uniform _ recovery guarantees , these results assure that one choice of sufficiently many random measurements w.h.p. suffices to reconstruct all possible matrices of a given rank .a uniform recovery statement can be established for the following real - valued measurement scenario : suppose that with respect to an arbitrary orthonormal basis of , each matrix element of is an independent instance of a real - valued random variable obeying = 0 , \quad { \mathbb{e}}\left [ a^2 \right ] = 1 \quad \textrm{and } \quad { \mathbb{e}}\left [ a^4 \right ] \leq f , \label{eq : fourth_moments}\ ] ] where is an arbitrary constant .measurement matrices of this form can be considered as a generalization of gaussian measurement matrices , where each matrix element corresponds to a standard gaussian random variable . in ref . see also refs . 
a uniform recovery guarantee for such measurement matrices has been established by means of the _ frobenius robust rank null space property _* definition 10 ) .such a proof technique is different from the geometric one introduced in section [ sub : convex_recovery ] .however , as laid out in the appendix , some auxiliary statements allow for reassembling technical statements from these works to yield a slightly weaker , but still uniform , statement by means of analyzing descent cones . implication [ imp : inheriting ] is applicable for such a result and yields the following . [prop : fourth_moments ] consider the measurement process described in eq ., where each is an independent random matrix of the form .fix and suppose that . then , w.h.p ., every real - valued matrix of rank at most and obeying can be stably reconstructed from the measurements by means of square norm minimization .here , is a constant that only depends on the fourth - moment bound .we conclude this section with two uniform recovery guarantees for hermitian low - rank matrices from measurement matrices that are proportional to rank - one projectors , i.e. , for some .originally established for nuclear norm minimization in ref . , by using an extension of the geometric proof techniques presented in section [ sub : convex_recovery ] , implication [ imp : inheriting ] is directly applicable to such measurements .[ prop : rank_one ] consider recovery of hermitian rank- matrices that obey from rank - one measurements of the form .let .then stable and uniform recovery guarantees for square norm minimization analogous to proposition [ prop : fourth_moments ] hold if either 1 .the measurements are random gaussian vectors in or 2 .the measurements are vectors drawn uniformly from a complex projective 4-design .once more , and denote absolute constants of sufficient size . in the statement above ,complex projective -design _ is a configuration of vectors which is `` evenly distributed '' on a sphere in the sense that sampling uniformly from it reproduces the moments of haar measure up to order . more precisely , the second statement in proposition [ prop : rank_one ] can be seen as `` partial derandomization '' of the first one we come to three concrete applications concerning linear maps that take operators in to operators in .our reconstruction based on the square norm can be applied to such maps by identifying them with operators in .we start with introducing some relevant notation and explain such an identification , the choi - jamiokowski isomorphism , in more detail .then we present numerical results on retrieval of certain unitary basis changes , quantum process tomography , and blind matrix deconvolution .our square norm is closely related to the diamond norm , which is defined for linear operators that map operators to operators .we call such objects _ maps _ and denote their space by , or simply by .we also denote maps by capital latin letters . concretely , for and we write .a particularly simple example is the identity map which obeys for all .we would like to identify maps in with operators in , for which we have discussed certain reconstruction schemes .for this purpose , we employ a very useful isomorphism , called the _ choi - jamiokowski isomorphism _ . 
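As a concrete preview of this isomorphism, which is defined formally in the next paragraph, the following numpy sketch builds the Choi matrix of a completely positive map specified in Kraus form; the convention C(M) = sum_{ij} M(E_ij) (x) E_ij, the helper names and the toy channel are assumptions made for illustration only.

```python
import numpy as np

def apply_map(kraus, rho):
    """Apply the completely positive map M(rho) = sum_k A_k rho A_k^dag (Kraus form)."""
    return sum(A @ rho @ A.conj().T for A in kraus)

def choi_matrix(kraus, d):
    """Choi matrix in the convention C(M) = sum_{ij} M(E_ij) (x) E_ij (assumed here)."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(apply_map(kraus, E), E)
    return C

# toy example: a single unitary Kraus operator (here a Hadamard) gives a rank-one Choi matrix
d = 2
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
print(np.linalg.matrix_rank(choi_matrix([H], d)))   # prints 1
```

A map with a single (unitary) Kraus operator yields a rank-one Choi matrix, matching the later statement that the Kraus rank of a quantum channel equals the rank of its Choi matrix.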
In order to explicitly define this isomorphism, we fix an orthogonal basis of . This also gives rise to an operator basis , and we define _vectorization_ by the linear extension of . Then the Choi-Jamiołkowski isomorphism is defined by . The resulting operator is called the _Choi matrix_ of . It can be straightforwardly checked that eq. is equivalent to setting . Although not evident from eq. , this isomorphism is actually basis independent. Indeed, it is just an instance of the natural isomorphism . This identification is illustrated in figure [fig:tensor_diagram], and discussed in more detail in the appendix. [Figure [fig:tensor_diagram]: wiring diagrams of a map, of its Choi matrix under the Choi-Jamiołkowski isomorphism, and of the partial trace; the drawing code is omitted here.] Similarly to the definition of the spectral norm, the nuclear norms on and induce a norm on . Perhaps surprisingly, the induced nuclear norm of maps of the form can be computed efficiently, as explained in detail below. This motivates studying the _diamond norm_ , which plays an important role in quantum mechanics and is also the core concept of this work. Using the Choi-Jamiołkowski isomorphism, the diamond norm can indeed be written as , where the square norm was defined variationally in eq. . Hence, for the case of a measurement map , the reconstruction based on the square norm can also be written as \[ \begin{array}{ll} \min & \frac{\dim({{\mathcal{v}}})}{2}\bigl( {{\left\vert \operatorname{tr}_{{\mathcal{w}}}(y) \right\vert}} + {{\left\vert \operatorname{tr}_{{\mathcal{w}}}(z) \right\vert}} \bigr) \\[.5em] {\mathrm{subject\ to}} & \begin{pmatrix} y & -j(m) \\ -j(m){^\dagger} & z \end{pmatrix} \succeq 0 \,, \\[.9em] & y , z \in \operatorname{pos}({{\mathcal{w}}}\otimes {{\mathcal{v}}}) \,, \\ & {{\left\vert {{\mathcal{a}}}(m)-y \right\vert}_{\mathrm{f}}} \leq \eta \,.
\end{array}\ ] ] our problem of retrieval of unitary basis changes is motivated by the phase retrieval problem .retrieving phases from measurements that are ignorant towards them has a long - standing history in various scientific disciplines .a discretized version of this problem can be phrased as the task of inferring a complex vector from measurements of the form where .recently , the mathematical structure of this problem has received considerable attention .one way of approaching this problem is to recast it as a matrix problem which has the benefit that the measurements become linear . indeed , setting and reveals that this `` lifting '' trick allows for re - casting the phase retrieval problem as the task of recovering a hermitian rank - one matrix from linear measurements of the form .recently , ling and strohmer used similar techniques to recast the important problem of self - calibration in hardware devices as the task to recover a non - hermitian rank - one matrix from similar linear measurements . in this section ,we consider the matrix - analogue of such a task and set but keep and as labels .concretely , we consider maps of the form where and are fixed unitaries .note that any such map has a choi matrix of the form which corresponds to an outer product of the form .moreover , unitarity of both and assures that all such maps meet the requirements of theorem [ thm : extremality ] .we aim to numerically recover such maps from two different types of measurements : ( i ) gaussian measurements and ( ii ) structured measurements .the gaussian measurements are given by a measurement map with real and imaginary parts of all of its components drawn from a normal distribution with zero mean and unit variance .in the case of structured measurements , receives rank- inputs and then inner products with regular measurement matrices are measured .more precisely , the measurement map is given by \ , , \end{aligned}\ ] ] where are chosen uniformly from the complex unit sphere .the matrices , on the other hand , are independent instances of the random matrix , where is a fixed , real - valued diagonal matrix and both and are chosen independently from the unique unitarily invariant haar measure over . for our numerical studies ,we restrict ourselves to even dimensions and set .this in particular assures .as we will see , similar types of measurements can be used in quantum process tomography and blind matrix deconvolution . for both measurement setups ,we find that diamond norm reconstruction outperforms nuclear norm reconstruction ; see figure [ fig : uv ] .interestingly , the structured measurements are better than the gaussian measurements for the diamond norm reconstruction , while for the nuclear norm reconstruction we find the converse .finally , we would like to to point out that ling and strohmer introduced a new algorithm dubbed `` sparselift '' to efficiently reconstruct the signals they consider and simultaneously promote sparsity .it is an intriguing open problem to compare the performance of sparselift to the constrained diamond norm minimization advocated here for different types of practically relevant measurement ensembles .we leave this to future work .the problem of reconstructing quantum mechanical processes from measurements is referred to as _ quantum process tomography_. as explained in the next paragraph , quantum processes are described by maps that saturate the norm inequality and thus are natural candidates for diamond norm - based methods .[ [ preliminaries . 
] ] preliminaries .+ + + + + + + + + + + + + + a positive semidefinite operator with unit trace is called a _ density operator _ and a matrix representation is a _density matrix_. the convex space of density operators is denoted by and its elements are referred to as _ quantum states_. the extreme elements of are called _ pure states _ and are given by rank - one operators of the form with -norm normalized _ state vectors _ .observable _ is a self - adjoint operator and the _ expectation value _ of in state is .note that in the case where and are diagonal , corresponds to a classical probability vector and to a random variable also with expectation value .for the following definitions it is helpful to know that quantum systems are composed to larger quantum systems by taking tensor products of operators .a map is called _ completely positive _ if with from eq . .this is the case if and only if for every vector space the map preserves the cone of positive semidefinite operators . is called _ trace preserving _ if for all .the convex space of maps that are both , completely positive and trace preserving is denoted by and its elements are quantum operations as they map density operators to density operators and they are also called _ quantum channels_.the _ kraus rank _ of a quantum channel is the rank of its choi matrix .a channel of kraus rank can be written as where are so - called _ kraus operators _ satisfying , and no other such decomposition has fewer terms .a special role is played by _unitary channels _ , which are channels of unit kraus rank . in this case, the single kraus operator in the kraus representation has to be unitary .unitary quantum channels describe coherent operations in the sense that for isolated quantum systems ( i.e. , systems that are decoupled from anything else ) one can only have unitary quantum channels .quantum channels describing situations where the system is affected by noise have kraus ranks larger than one .in many experimental situations , one aims at the implementation of a unitary channel , but actually implements a channel whose kraus rank is larger than one , but is still approximately low .therefore , process tomography of quantum channels with low kraus rank is an important task in quantum experiments .also , in the context of _ quantum error corretion _, low - rank deviations turn out to have a particularly adverse impact .this underscores the need to design efficient estimation protocols for this case .in the next paragraph , we present numerical results showing that , indeed , replacing the nuclear norm with the diamond norm in a straightforward `` compressive process tomography '' improves the results .we find it plausible that using the diamond norm as a `` drop in replacement '' for the nuclear norm will also lead to improvements in other , more advanced process tomography schemes .for example , kimmel and liu combine compressed process tomography with ideas from _ randomized benchmarking _ .this combination allows recovery using only clifford measurements that are robust to state preparation and measurement ( spam ) errors .their recovery guarantees are based on the geometric arguments presented in section [ sub : convex_recovery ] .it thus seems fruitful to conduct numerical experiments using the diamond norm in their setting .[ [ numerical - results - for - quantum - process - tomography . 
] ] numerical results for quantum process tomography .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the task is to reconstruct from measurements of the form where encodes linear data acquisition , summarizes the measurement outcomes , and represents additive noise .the most general measurements conceivable in this context are so - called _ process povms _ .however , here we consider the case where is given by the preparation of pure states given by state vectors and measurements of observables , where ] , that are an input to the problem , we seek to reconstruct from the circular convolutions of and , where now see also figure [ fig : def_deconv ] .the observations are given by the vectors or , equivalently , by where defines the fourier transform and the hadamard product of vector and .let us denote the -th rows of and by and , respectively .then with the unit rank matrices and .indeed , this is precisely a problem of the form discussed here , with and up to a phase , and can be trivially reconstructed from up to phase .that is to say , a matrix version of blind deconvolution can readily be cast into the form of problems considered in this work .numerically , we find a recovery from few samples and that the diamond norm reconstruction outperforms the nuclear norm based reconstruction from ref . adapted to our setting ; see figure [ fig : deconvolution ] .many practical application of this problem are conceivable : the reconstruction of an unknown drift of a polarization degree of freedom in a channel problem is only one of the many natural ramifications of this setup .in this section , we prove proposition [ prop : bounds_on_diamond_norm ] and an extension of theorem [ thm : extremality ] . in order to do so, we first define a generalization of the sign matrix to matrices that are not necessarily hermitian . this will give rise to the left and right absolute values of arbitrary matrices .then we introduce sdps , complementary slackness , and state the sdp for the square norm in standard form . combining all these concepts, this section cumulates in the proofs of proposition [ prop : bounds_on_diamond_norm ] and theorem [ thm : extremality ] .the singular value decomposition of a matrix is where are unitaries and is positive - semidefinite and diagonal .this decomposition allows one to define a `` sign matrix '' of : [ def : sgn ] for any matrix with singular value decomposition we define its _ sign matrix _ to be .note that the sign matrix is in general not unique , but always unitary and it obeys therefore , indeed generalizes the sign - matrix ( which is defined exclusively for hermitian matrices ) upon right multiplication .the following auxiliary statement will be required later on and follows from a schur complement rule .[ lem : block_matrix_psd1 ] for every , one has semidefinite programs ( sdps ) are a class of optimization problems that can be evaluated efficiently , e.g. by using cvx .[ def : sdp ] a _ semidefinite program _ is specified by a triple , where and are self - adjoint operators and is a hermiticity preserving linear map .with such a triple , one associates a pair of optimization problems : is called _ primal feasible _ if it satisfies eq . and eq . .it is called _ optimal primal feasible _ if , additionally , for in eq .the maximum is attained .similarly , is called _ dual feasible _ if it satisfies eq . 
and _ optimal dual feasible _ , if for the minimum in eq .is attained .sdps that exactly reproduce the problem structure outlined in this definition are said to be in _standard form_. but for specific sdps , equivalent formulations might often be more handy ._ weak duality _ refers to the fact that the value of the primal sdp can not be larger than the value of the dual sdp , i.e. , that for any primal feasible point and dual feasible point .an sdp is said to satisfy _ strong duality _ if the optimal values coincide , i.e. , if for some optimal primal feasible and dual feasible points and it hols that .in fact , from a weak condition , called slater s condition , strong duality follows . [ lem : comp_slack ] suppose that characterizes an sdp that obeys strong duality and let and denote optimal primal and dual feasible points , respectively ( i.e. ) . then the following , somewhat exhaustive , classification of the square norm s sdp will be instrumental later on .[ lem : watrous_sdp ] [ thm : diamond ] let be a bipartite operator .then its square norm can be evaluated by means of an sdp that satisfies strong duality . in standard form , it is given by the block - wise defined matrices where denotes the zero - vector , and , as well as represent zero matrices of appropriate dimension . finally , the map acts as and has an adjoint map given by where , once more , represents a zero - matrix .lemma [ lem : watrous_sdp ] presents an sdp for the square norm in standard form .although this standard form is going to be important for our proofs , it is somewhat unwieldy .fortunately , elementary modifications allow to reduce the sdp to the following pair . & & { \mathrm{subject\ to } } & \begin{pmatrix } { \mathbbm{1}}_{{\mathcal{w}}}\otimes \rho & z \\z{^\dagger } & { \mathbbm{1}}_{{\mathcal{w}}}\otimes \sigma \end{pmatrix } \succeq 0\ , , \hspace{1 cm } \quad \\[1em ] & & & \operatorname{tr}(\rho ) = \operatorname{tr}(\sigma ) = \dim({{\mathcal{v } } } ) \ , , \\[.2em ] & & & \rho,\sigma \in \operatorname{pos}({{\mathcal{v}}})\ , , \\[.2em ] & & & z \in { \operatorname{\mathrm{l}}}({{\mathcal{w}}}\otimes { { \mathcal{v } } } ) \ , \end{array } \\[1em ] \label{eq : watrous_sdp_dual } & \begin{array}{lrll } \textbf{dual:\qquad\quad } & { { \left\vert x \right\vert}_{{\protect\scalebox{0.5}{ } } } } = & \min & \frac{\dim({{\mathcal{v}}})}{2}\bigl({{\left\vert \operatorname{tr}_{{\mathcal{w}}}(y ) \right\vert } } + { { \left\vert \operatorname{tr}_{{\mathcal{w}}}(z ) \right\vert}}\bigr ) \\[.5em ] & & { \mathrm{subject\ to } } & \begin{pmatrix } y & -x \\ -x{^\dagger } & z \end{pmatrix } \succeq 0 \ , , \\[1em ] & & & y , z \in \operatorname{pos}({{\mathcal{w}}}\otimes { { \mathcal{v } } } ) \ , .\end{array}\end{aligned}\ ] ] this simplified sdp pair for the square norm comes in handy for establishing the final claim in proposition [ prop : bounds_on_diamond_norm ] . for hermitian matrices ,the first two bounds presented there were already established in ref .* lemma 7 ) . here , we show that an analogous strategy remains valid for matrices that need not be hermitian .let us start with recalling the variational definition of the square norm : as already mentioned , inserting into eq .establishes the lower bound ( ) .also , a generalized version of hlder s inequality assures for any and . inserting this bound into the variational definition of results in , which is the second bound . 
for the final bound ( ) we consider the simplified version of the square norm s dual sdp .lemma [ lem : block_matrix_psd1 ] assures that setting results in a feasible point of this program .inserting this point into the objective function yields a value of , because .the bound follows from this value and the structure of the optimization problem . in this section , we prove an extension of theorem [ thm : extremality ] .in particular , this more general result relates theorem [ thm : extremality ] to optimal feasible points in watrous sdp from lemma [ lem : watrous_sdp ] .these will contain the generalizations of the sign matrix from definition [ def : sgn ] .[ thm : extremal ] let be a bipartite operator and set .then the points ( [ item : extremal])([item : reduced_eq ] ) are equivalent : [ item : extremal ] satisfies [ item : xsharp ] some of the form is a primal optimal feasible point for watrous sdp from lemma [ lem : watrous_sdp ] . [ item : ysharp ] some of the form is a dual optimal feasible point for watrous sdp from lemma [ lem : watrous_sdp ] .[ item : reduced_propto ] satisfies [ item : reduced_eq ] satisfies similar to the actual sdp , the optimal feasible points presented in theorem [ thm : extremal ] have simplified counterparts that correspond to optimal feasible points of the simplified sdps and . for the sake of completeness , we present them in the following corollary . for any , optimal feasible points of the primal sdp and the dual sdp for the square norm are given by the following . this statement follows straightforwardly from theorem [ thm : extremal ] by considering the reduced formulations and of the sdp from lemma [ lem : watrous_sdp ] . for statements are evident . from now on, we assume that , or , equivalently , .proof of ( [ item : extremal ] ) ( [ item : xsharp ] ) .: : note that by lemma [ lem : block_matrix_psd1 ] .straightforward evaluation of from lemma [ lem : watrous_sdp ] reveals that is indeed a primal feasible point : in order to show optimality , we evaluate the primal sdp s objective function given by in eq . .employing formulas and to express the absolute values of , we obtain by assumption , this is indeed optimal .proof of ( [ item : xsharp ] ) ( [ item : ysharp ] ) and ( [ item : reduced_propto ] ) : : : strong duality of watrous sdp from lemma [ lem : watrous_sdp ] assures that an optimal dual solution exists and that complementary slackness holds . since from eq .does not depend on block off - diagonal terms , optimal feasibility only depends on the block diagonal parts .hence , we write as complementary slackness ( lemma [ lem : comp_slack ] ) implies that and must equal each other . this in turn demands where we have once more employed identities and for to obtain the absolute values of .equality of and in the first two diagonal entries ( also guaranteed by complementary slackness ) furthermore assures hence , and both , ( [ item : ysharp ] ) and ( [ item : reduced_propto ] ) follow .proof of ( [ item : reduced_propto ] ) ( [ item : reduced_eq ] ) : : : let be constants such that taking the trace of both equations and recognizing the nuclear norm reveals that and , similarly , which proves the claimed implication .proof of ( [ item : reduced_eq ] ) ( [ item : extremal ] ) : : : the crucial observation for this implication is that assumption ( [ item : reduced_eq ] ) alone assures that defined in eq . 
with all off - diagonal blocks set to zero is a feasible point of watrous dual sdp , albeit not necessarily an optimal one .this claim is easily verified by direct computation . inserting this dual feasible point into the sdp s objective function results in since every dual sdp corresponds to a constrained minimization , evaluating the dual objective function at any feasible point results in an upper bound on the optimal value . in our case , obtain the upper bound , which together with the converse bound from proposition [ prop : gaussian_cs ] , implies equality between the two .we conclude by mentioning several observations and research directions that may merit further attention . [ [ measurement - errors . ] ] measurement errors .+ + + + + + + + + + + + + + + + + + + in our analysis we considered reconstructed matrices and from eqs . and that are required to be -close to the ideal operator .such a reconstruction stably tolerates additive errors as in eq .as long as they obey . for operators satisfying the extremality we prove that recovery guarantees for are inherited by .a similar situation is true for the reconstruction of maps by means of diamond norm minimization . for the idealized setting of noiseless measurements ( ) ,we demonstrate numerically that often vanishes while is large . a numerical analysis for the noisy case yields similar results as for .for the noisy case the phase transition from having no recovery to almost always recovering the signal up to broadens equally for both diamond and nuclear norm regularization .[ [ partial - derandomizations . ] ] partial derandomizations .+ + + + + + + + + + + + + + + + + + + + + + + + + while initial theoretical results often rely on measurements that follow a gaussian distribution , later on significant effort has been put into derandomizing the measurement process . on the one hand , recovery guarantees for structured measurementswere proven . on the other ,also the distributions form which the measurements are drawn were partially derandomized ( see also section [ sec : application_low_rank ] ) , relying on above mentioned -designs .the later methods rely on an analysis of the measurement map s descent cone .hence , such recovery guarantees for partially derandomized measurements are also inherited by our reconstruction via diamond norm minimization . in a similarsetting , a partial derandomization of the random unitaries used as part of the measurements for the retrieval of unitary basis changes ( section [ sec : uv ] ) and for quantum process tomography ( section [ sec : quantum ] ) seems very promising . here , structural insights on unitary designs could be used in future work .[ [ improvement - from - structured - measurements . ] ] improvement from structured measurements .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we numerically performed the reconstruction of unitary basis changes in section [ sec : uv ] for two different measurement settings : gaussian measurements and certain structured measurements .for the nuclear norm , the reconstruction from gaussian measurements performed slightly better than the one from structured measurements , just as expected .perhaps surprisingly , we observed the converse for the diamond norm reconstructions . 
here , the structure of the measurements seems to be favourable for the reconstruction process .this observation motivates the search for recovery guarantees for diamond norm reconstruction with structured measurements .such structured measurements are also crucial for the quantum process tomography in section [ sec : quantum ] and blind matrix convolution in section [ sec : deconv ] .[ [ cpt - as - a - constraint - in - the - quantum - channel - reconstructions . ] ] cpt as a constraint in the quantum channel reconstructions .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a map is a quantum channel if and only if when aiming at reconstructing quantum channels , these additional constraints can , in principle , be included in the sdps and for the diamond norm and nuclear norm reconstructions .doing so leads to a significant overhead in the numerical reconstruction process .numerically , one can observe that the recovery success of the diamond norm reconstruction is unchanged , while the nuclear norm reconstruction performs significantly better .in fact , it seems to perform roughly as good as the diamond norm reconstruction when these constraints are included in the sdp . in this sense, the cpt structure can be used in the nuclear norm reconstruction at the expense of a longer computation time to reduce the number of measurements , while in the diamond norm reconstruction the cpt structure is already inbuilt .the run - time of the diamond norm reconstruction and the nuclear norm reconstruction are practically the same for a given number of measurements and scales polynomially with the number of constraints .therefore , the diamond norm reconstruction can help to render larger quantum systems accessible to quantum process tomography .we thank c. riofro , i. roth , and d. suess for insightful discussions and s. kimmel and y.k .liu for advanced access to ref .the research in berlin has been supported by the eu ( siqs , aqus , raquel ) , the bmbf ( q.com ) , the dfg project ei 519/9 - 1 ( spp1798 cosip ) and the erc ( taq ) .the work of dg and rk is supported by the excellence initiative of the german federal and state governments ( grants zuk 43 & 81 ) , the aro under contract w911nf-14 - 1 - 0098 ( quantum characterization , verification , and validation ) , the dfg project gro 4334/2 - 1 ( spp1798 cosip ) , and the state graduate funding program of baden - wrttemberg .in this appendix we provide known material to make this work more self contained .we provide sdps for the nuclear norm and the spectral norm , and introduction to tensor products and a basis independent definition of the choi - jamiokowski isomorphism . also , we devote a subsection to low - rank matrix recovery . therewe show how the statements presented in section [ sec : application_low_rank ] can be derived using geometric proof techniques . on the contrary to the other supplementary chapters ,this section does include technical novelties .[ [ basic - concepts - of - multilinear - algebra - and - the - choi - jamiokowski - isomorphism ] ] basic concepts of multilinear algebra and the choi - jamiokowski isomorphism ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ the core objects of this work are tensors of order four and naturally fall into the realm of multilinear algebra . 
herewe give a brief introduction on core concepts of multilinear algebra that can be found in any textbook on that topic .our presentation here is influenced by .let be ( finite dimensional , complex ) vector spaces with associated dual spaces .a function is _ multilinear _ , if it is linear in each .the space of such functions constitutes the _ tensor product _ of and we denote it by . by reflexivity ,the tensor product is the space of all multilinear functions its elementary elements are the _ tensor product _ of vectors which alternatively can be constructed by means of the kronecker product however , such an explicit construction requires explicit choices of bases in . with such a notation, the space of linear maps ( matrices ) corresponds to the tensor product which is spanned by rank - one operators . with this identification , it is straightforward to define the tensor product of and to be analogously to before , the elementary of this space are the _ tensor product _ of maps and . restricting to tensor products of endomorphisms ,i.e. and , the _ partial trace _ ( over the first tensor factor ) for elementary elements to be and extend it linearly to . note that with the identification , corresponds to the natural contraction between and .this is illustrated in figure [ fig : tensor_diagram ] .similarly to , the maps introduced in section [ sec : notation_maps ] can be viewed as elements of the tensor product space which can be seen as a four - linear vector space .there are several equivalent ways to interpret its elements . for the given applications of our work ,we have made heavy use of the _ choi - jamiokowski isomorphism _ which acts on four - linear tensors by permuting tensor factors : applied to the four - linear space of maps we obtain and which are basis independent . consequently the choi - jamiokowski isomorphism is linear bijection from maps to operators its explicit definitions and in the main text are just basis - dependent realization of this more general identification .we illustrated this fact pictorially in figure [ fig : tensor_diagram ] by resorting to _ tensor network _ or _ wiring diagrams _ . our main geometric insight corollary [ cor : subset ] asserts that any square norm descent cone is always contained in the corresponding one of the nuclear norm , provided that the operators in question obey . when applying this idea to low - rank matrix recovery , we started with mentioning proposition [ prop : gaussian_cs ]this is a non - uniform recovery guarantee that is stable towards additive noise .however , with some additional work , corollary [ cor : subset ] allows for stronger conclusions .some of them are summarized in proposition [ prop : fourth_moments ] and proposition [ prop : rank_one ] , respectively . here, we outline how these results are obtained . in section [ sub : convex_recovery ]we introduced widely used geometric proof techniques for low - rank matrix recovery mainly following ref .these aim at recovery of a fixed object of interest and thus it suffices to focus on precisely one descent cone , namely , or , respectively . by taking a closer look at the actual proof techniques most notably mendelson s small ball method , ortropp s bowling scheme one can see that such a restriction to a single object of interest is not necessary .up to our knowledge , this was first pointed out in ref . 
and at the heart of this observation is the following technical statement .[ lem : effective_low_rank ] fix and let be the union of all descent cones anchored in nonzero matrices of rank at most .then , every element obeys for hermitian matrices , a slightly stronger statement of this type was presented in ( * ? ? ?* lemma 10 ) . here wo provide a different proof that does not require hermiticity and exploits a variant of pinching .note that for any it follows from the definition of the schatten- norms that the left hand side of eq .coincides with .using this identity and the decomposition allows us to conclude where we have exploited unitary invariance of schatten- norms and the fact that both and are unitary matrices .it suffices to prove this statement for any fixed descent cone , where has rank at most .let and be the column and row ranges of ( these need not coincide , since need not necessarily be hermitian ) and let be orthogonal projections onto these subspaces .note that if has a singular value decomposition , then and , where is defined component - wise by if and otherwise . introducing orthogonal complements and allows us to define this is an orthogonal projection with respect to the frobenius inner product and obeys by construction .its complement amounts to which obeys .note that this is a straightforward generalization of the -space introduced in ( * ? ? ?* equation ( 2 ) ) to non - hermitian matrices .analogously to there , a decomposition is valid for every and every has rank at most by construction .now choose and note that by definition must be valid for some .combining this with lemma [ thm : pinching ] ( pinching ) assures where we have employed and . also , note that hlder s inequality assures for any and unitary . employing this for , where the sign matrix of was defined in def .[ def : sgn ] , reveals where we have in addition used that has rank at most and frobenius norm smaller than or equal to .combining the bounds and implies since , this bound implies .finally , this relation allows us to infer the result , where we also exploited the fact that has rank at most .lemma [ lem : effective_low_rank ] asserts that any matrix that lies in the nuclear norm s descent cone of any low - rank matrix , is `` effectively '' a low - rank matrix as well .this structural property together with mendelson s small ball method is enough to bound the minimal conic singular value of a measurement map with respect to the union of all possible descent cones .here we provide a particular realization of mendelson s small ball method that is directly applicable to low - rank matrix recovery ( see e.g. ref .* section 4 ) ) .[ thm : mendelson ] let be real subspace of linear maps and let be a measurement map , where each is an independent copy of a random matrix and denotes the standard basis in .also , let , where was defined in lemma [ lem : effective_low_rank ] .then for any , the bound holds with probability at least . here \\w_m \left ( e_r , { { \mathcal{a}}}\right ) & = { \mathbb{e}}\left [ \sup_{y \in e_r } \operatorname{tr } ( h{^\dagger}y ) \right ] \ , , \quad \text{where}\quad h = \frac{1}{\sqrt{m } } \sum_{j=1}^m \epsilon_j a_j\end{aligned}\ ] ] and being a rademacher sequence with equal probability . 
] .thanks to lemma [ lem : effective_low_rank ] and hlder s inequality we can bound in theorem [ thm : mendelson ] by \leq { \mathbb{e}}\left [ \sup_{y \in e_r } { { \left\vert y \right\vert}_\ast } { { \bigl\vert h{^\dagger}\bigr\vert } } \right ] \\ & \leq { \mathbb{e}}\left [ \sup_{y \in e_r } ( 1+\sqrt{2 } ) \sqrt{r } { { \left\vert y \right\vert}_{\mathrm{f } } } { { \left\vert h \right\vert } } \right ] = ( 1+\sqrt{2 } ) \sqrt{r}\ , { \mathbb{e}}\left [ { { \left\vert h \right\vert } } \right ] \ , , \end{aligned}\ ] ] which is much easier to handle . this simplification together with mendelson s small ball method theorem [ thm : mendelson ] and the geometric error bound for convex recovery proposition [ prop : general_reconstruction ] provide a convenient sufficient means to assure that a given measurement process allows for uniform and stable low - rank matrix recovery via nuclear norm minimization : [ sufficient criteria for uniform recovery][prop : sufficient_criteria ] let be a measurement map as defined in theorem [ thm : mendelson ] and fix .suppose that this measurement map obeys for some and also \leq c_2 \sqrt{m / r} ] , where is a constant that only depends on .this in particular assures \leq c_f\sqrt{n } \leq \frac{c_f}{\sqrt{c } } \sqrt{\frac{m}{r}}\ ] ] and we can set , and . choosing the constant in the sampling rate large enough assures that these constants obey for and all the requirements of proposition [ prop : sufficient_criteria ] are met .the claim then follows from applying this statement .we conclude this section with embedding the main results of ref . into this framework .in fact , the entire apparatus presented in this section is a condensed version of the proofs in loc .however , the reader s convenience , we include the corresponding statement here as well .[ cor : derandomizations2 ] consider measurement maps of the form .then the following measurement ensembles meet the requirements of proposition [ prop : sufficient_criteria ] , if restricted to the recovery of hermitian matrices : 1 . and each corresponds to the outer product of a complex standard gaussian vector with itself , 2 . and each is the outer product of a randomly selected element of a complex projective -design .let us start with the gaussian case . in ref .* section 4.1 . )the bounds and \leq c_1 \sqrt{n} ] . also , ref .* proposition 13 ) implies \leq 3.1049 \sqrt { n \log ( 2n ) } \leq \frac{3.1049}{\sqrt{c_{4d } } } \sqrt{\frac{m}{r}},\ ] ] where we have inserted .thus , choosing appropriately and the constant in the sampling rate large enough again assures that the requirements of proposition [ prop : sufficient_criteria ] are met . | in low - rank matrix recovery , one aims to reconstruct a low - rank matrix from a minimal number of linear measurements . within the paradigm of compressed sensing , this is made computationally efficient by minimizing the nuclear norm as a convex surrogate for rank . in this work , we identify an improved regularizer based on the so - called diamond norm , a concept imported from quantum information theory . we show that for a class of matrices saturating a certain norm inequality the descent cone of the diamond norm is contained in that of the nuclear norm . this suggests superior reconstruction properties for these matrices and we explicitly characterize this set . 
we demonstrate numerically that the diamond norm indeed outperforms the nuclear norm in a number of relevant applications : these include signal analysis tasks such as blind matrix deconvolution or the retrieval of certain unitary basis changes , as well as the quantum information problem of process tomography with random measurements . the diamond norm is defined for matrices that can be interpreted as order-4 tensors and it turns out that the above condition depends crucially on that tensorial structure . in this sense , this work touches on an aspect of the notoriously difficult tensor completion problem . |
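as a concrete aside to the multilinear-algebra discussion above, the partial trace over the first tensor factor and a choi-type reshuffling of tensor factors can be written down numerically. the sketch below is only an illustration: the row/column ordering of the kronecker product and the particular index permutation are assumptions of this sketch, and the function names `partial_trace_first` and `choi_reshuffle` are ours, not taken from the text.

```python
import numpy as np

def partial_trace_first(X, d1, d2):
    """Trace out the first tensor factor of an operator X on C^{d1} (x) C^{d2}."""
    X4 = X.reshape(d1, d2, d1, d2)      # indices (i, j, k, l) of X_{(i,j),(k,l)}
    return np.einsum('ijil->jl', X4)    # contract the first-factor indices

def choi_reshuffle(M, d):
    """Permute tensor factors of a d^2 x d^2 matrix viewed as a 4-index tensor.

    A basis-dependent, Choi-type realization of a factor permutation; the
    chosen permutation (swap of the two middle indices) is an assumption.
    """
    M4 = M.reshape(d, d, d, d)
    return np.transpose(M4, (0, 2, 1, 3)).reshape(d * d, d * d)

# sanity check: the partial trace of an elementary (Kronecker) element factorizes
A = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
B = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
assert np.allclose(partial_trace_first(np.kron(A, B), 3, 4), np.trace(A) * B)
```

applying the reshuffling twice returns the original matrix, which checks that the chosen index permutation is an involution, as a swap of tensor factors should be.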
dose distributions of radiotherapy are represented by point doses at orthogonally arranged grids . in treatment - planning practice ,the grid intervals are defined from a physical , clinical , and practical points of view , often resulting in cubic dimensions of a few millimeters .accuracy , efficiency and their balance are essential in practice , for which the pencil - beam algorithm is commonly used .that is mathematically a convolution integral of total energy released per mass ( terma ) with elementary beam - spread kernel , which may be computationally demanding .the grid - dose - spreading ( gds ) algorithm was developed for fast dose calculation of heavy - charged - particle beams in patient body .the gds algorithm employs approximation to extract beam - interaction part from the integral at the expense of distortion of dose distribution for a beam tilted with respect to the grid axes , as originally recognized in ref .the beam - tilting distortion may be generally insignificant when beam blurring is as small as the required spatial resolution , for example , for a carbon - ion beam .in fact , the gds method was successfully incorporated into a clinical treatment - planning system for carbon - ion radiotherapy with vertical and horizontal fixed beams , for which tilting was intrinsically absent . in that particular implementation, a simplistic post process was added to the original broad - beam algorithm so as to spread an intermediate terma distribution uniformly . in general , the spreading kernel could be spatially modulated using the pencil - beam model for more accurate heterogeneity correction .there are two reciprocal approaches for convolution , _i.e. _ to collect doses transferred from nearby interactions to a grid or _ the dose - deposition point of view _ and to spread a terma from an interaction to nearby grids or _ the interaction point of view_. the latter is usually more efficient than the former for three - dimensional dose calculation .the pencil - beam model implicitly assumes homogeneity of the medium within the elementary beam spread .beams that have grown excessively thick in heterogeneous transport are thus incompatible . as a general and rigorous solution , gaussian - beam splitting was proposed , with which overgrown beams are subdivided into smaller ones at locations of large lateral heterogeneity .figure [ fig : split ] demonstrates its effectiveness for a simple density boundary , where the non - splitting beam happened to traverse an edge of a bone - equivalent material while about a half of the split beams traverse the bone - equivalent material .the splitting causes explosive beam multiplication in a shower - like process . in this particular case for example ,the original beam recursively split into 28 final beams . slowing down of dose calculation due to beam multiplication will be a problem in practice .( a ) non - splitting and ( b ) splitting dose calculations with isodose lines at every 10% levels of the maximum non - splitting dose in the cross section , where a proton pencil beam with mev and mm is incident into water with a bone - equivalent material ( ) inserted halfway ( gray area).,width=321 ] in ref . , the beam - splitting method was stated as efficient due to certain `` algorithmic techniques to be explained elsewhere '' , which in fact implied this work to construct a framework , where the gds and beam - splitting methods work compatibly for accurate and efficient dose calculations . 
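to fix ideas about the beam splitting illustrated in figure [ fig : split ], the following toy sketch replaces one gaussian component by three narrower sub-components. this is not the splitting rule of the cited reference; it is a simple moment-matching choice (equal weights, preserved mean and lateral variance) meant only to illustrate the kind of subdivision involved, and all variable names are ours.

```python
import numpy as np

def split_gaussian(weight, mu, sigma, offset_factor=0.5):
    """Split one Gaussian (weight, mean, sigma) into three equally weighted
    sub-Gaussians whose mixture preserves the total weight, mean and variance."""
    d = offset_factor * sigma                            # lateral offset of side beams
    sigma_sub = np.sqrt(sigma**2 - (2.0 / 3.0) * d**2)   # keeps the mixture variance
    return [(weight / 3.0, mu + k * d, sigma_sub) for k in (-1, 0, 1)]

parent = (1.0, 0.0, 5.0)                                 # (weight, mean, sigma) in mm
subs = split_gaussian(*parent)
w = np.array([s[0] for s in subs])
m = np.array([s[1] for s in subs])
v = np.array([s[2]**2 for s in subs])
print(w.sum())                 # ~1: weight is conserved
print((w * m).sum())           # ~0: mean is conserved
print((w * (v + m**2)).sum())  # ~25: second moment (variance) is conserved
```

applied recursively wherever large lateral heterogeneity is detected, such a rule multiplies the beams in a shower-like fashion, which is exactly the efficiency concern addressed by the framework developed here.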
in addition, we will refine the gds algorithm with a fix against the beam - tilting distortion and with the pencil - beam model in the interaction point of view for better heterogeneity correction . although the gaussian - beam approximation may be reasonable for the multiple - scattering effect, two or more gaussian components would improve the accuracy of lateral dose distribution of proton and ion pencil beams .however , such large - sized components are intrinsically incompatible with fine heterogeneity . in addition, it is inconceivable to apply the beam - splitting method for large - sized components to secure practical efficiency. this framework will be applicable not only to broad - beam delivery but also to pencil - beam scanning , where a physical scanned beam may have to be decomposed into virtual elementary beams to address heterogeneity .as this work aims to improve computing methods , we focus on evaluation of efficiency and settlement of the intrinsic artifacts with respect to the ideal beam models that are mathematically given , without repeating experimental assessments of accuracy .we will solve the beam - tilting distortion of the gds algorithm by defining intermediate grids for dose calculation , which are arranged to be normal to the beam - field axes .as shown in figure [ fig : coordinates ] , the original dose grids along numbered axes 1 , 2 , and 3 are defined with basis vectors , , and and intervals , , and . for a given radiation field , the field coordinates , , and with basis vectors , , and are associated , where the origin is at the isocenter and is in the source direction . with lateral margins for penumbra, the normal - grid volume is defined as the supremum of normal rectangular - parallelepiped volume of containing the original grids in the margined field .quadratic projection of the original - grid voxel gives the normal - grid intervals , , and as to approximately conserve the equivalent resolution .normal grids are defined at equally spaced positions for indices ] and $ ] , where is the ceiling function .schematic of grid normalization in cross - section view perpendicular to , where shown are a field ( gray area ) with axes and , margins ( light gray areas ) , original grids with axes and , and normal grids of width and height . ]because the elementary pencil beams are almost parallel to vector , normal - grid doses can be accurately and efficiently calculated with where , , and are the terma , the spread , and the position of normal grid , and is the gaussian distribution of mean and standard deviation whose integral is readily given by the standard error function .dose at original grid is then given by trilinear interpolation , which is repeated for all the original grids in the margined field .in this manner , we only deal with normal grids in the gds algorithm hereafter .beam- spread at grid is given by beam - transport calculation and terma contribution is defined as where is the number of particles that is modeled as a constant , is the dose per in - air fluence that is measured as a depth dose curve , and is the step length in voxel . in the original gds algorithm ,grid terma and spread are intermediately defined with formulas which are straightforward extensions of eqs .( 13 ) and ( 14 ) in ref . . 
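before turning to the shortcomings of this original formulation, the gaussian spreading onto normal grids described above, whose integral over a grid interval is given by the standard error function, can be sketched in one dimension. the grid layout, normalization, and variable names below are assumptions made for illustration only.

```python
import numpy as np
from scipy.special import erf

def spread_terma_1d(grid_centers, dx, terma, mu, sigma):
    """Distribute one terma value over grid intervals of width dx by integrating
    a Gaussian of mean mu and standard deviation sigma over each interval."""
    lo = (grid_centers - 0.5 * dx - mu) / (np.sqrt(2.0) * sigma)
    hi = (grid_centers + 0.5 * dx - mu) / (np.sqrt(2.0) * sigma)
    return 0.5 * terma * (erf(hi) - erf(lo))

centers = np.arange(-20.0, 20.0, 1.0) + 0.5        # e.g. a 1 mm normal grid
dose = spread_terma_1d(centers, dx=1.0, terma=1.0, mu=0.3, sigma=2.5)
print(dose.sum())                                  # close to 1: the terma is conserved
```

in the full algorithm the analogous one-dimensional integrals are taken along the lateral normal-grid axes, and the resulting normal-grid doses are interpolated back to the original grids.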
that would be , however , problematic when the beams are arranged independently to the grids .as shown in figure [ fig : sharing](a ) , only several beams may traverse each voxel .due to interplay between the beam and grid structures , the total beam - path length per voxel and consequently the grid terma largely fluctuates .the fluctuation would be smeared out only with large spreading with . to fully resolve the fluctuation, we will distribute step terma to the nearest four grids in a regulatory manner .as shown in figures [ fig : sharing](b ) and [ fig : sharing](c ) , the and axes comprise the lateral plane , where any beam is now modeled to have cross section of size centered at mid - step point in current voxel at grid .the four voxels that intersect share terma contribution by areal fractions . equation ( [ eq : original ] ) is then modified in such a way that the terma and spread distributions will be formed when all the beams have been processed as where the areal fraction is given by for nearby grid with and or otherwise 0 . essentially , this beam - sharing operation is one form of convolution with a rectangular kernel .we thus modify axial dose spreading at grid in ( [ eq : spreading ] ) to correct the extra rectangular spreading as for which , sufficiently fine grid intervals must be given .schematic of inter - grid beam sharing in the cross - section views with grid points ( + ) , voxel boundaries ( dashed lines ) , beam paths ( arrows ) , mid - step points ( ) , and rectangular ( gray areas ) .one of many beams in ( a ) is focused in ( b ) , where the dotted line indicates the cross section in ( c).,width=321 ] [ [ tilted - pencil - beam ] ] tilted pencil beam + + + + + + + + + + + + + + + + + + we examine the effect of grid normalization using an analytic 150 mev proton pencil beam model with formulated dose per in - air fluence and lateral spread .the beam with rms projected angle mrad and size mm was placed at the isocenter to incident into water in the direction of patient - support angle and gantry angle in the standard coordinate system .the original grids were defined cubically as mm to form a volume of 10 cm 15 cm 10 cm . around the beam axis ,1-cm margins were added to define the normal - grid volume .the dose distributions were calculated using the gds algorithm with and without grid normalization for comparison .[ [ broad - beam - with - heterogeneity ] ] broad beam with heterogeneity + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we examine a broad beam in a heterogeneous medium , which approximates clinical situations .the dose per in - air fluence was modeled based on an experiment , where a carbon - ion beam with incident nucleon kinetic energy mev was broadened by spiral wobbling and was range - modulated by a semi - gaussian filter of mm . in the calculation ,a radiation from a source at height 500 cm was limited by a collimator at height 50 cm to form a 4 cm 4 cm field on the isocenter plane , which yielded original pencil beams . 
in a 20 cm 20cm 10 cm volume gridded at intervals of 1 mm along axes 1 , 2 , and 3 , an 19 cm diameter cylindrical water phantom was defined at the center .the phantom included a 2 cm diameter dense ( ) rod at radius 6 cm .the dosimetric effects of grid normalization and inter - grid sharing were investigated for two gantry angles and with the concurrently rotated phantom so that the rod was always in the middle of the field to see any angular artifact .the computing times were measured using 2.4 ghz intel core2duo processor on apple macbook computer .[ [ tilted - pencil - beam-1 ] ] tilted pencil beam + + + + + + + + + + + + + + + + + + figure [ fig : pencil ] shows the projected dose distributions for the tilted proton pencil beam .the original gds algorithm severely distorted the analytic model that was exactly the dose kernel of the algorithm .the grid normalization greatly reduced the distortion . projected dose at every 10% levels for ( a ) the analytic proton - beam model , ( b ) the original gds calculation , and ( c )the normalized gds calculation.,width=321 ] [ [ heterogeneous - broad - beam ] ] heterogeneous broad beam + + + + + + + + + + + + + + + + + + + + + + + + table [ tab : configurations ] and figures [ fig : broad ] and [ fig : broadp ] show the configurations for broad carbon - ion beam calculations , the computing times , and the dose distribution in the cross - section and along the and axes of the beam field .configurations a and d resulted in severe dose artifacts for the beam grid interplay and for the tilted incidence while the others are hardly distinguishable . from the a b comparison , we find that the inter - grid sharing resolved the interplay artifact and added marginal computing load . from the d e comparison , we find that the grid normalization resolved the angle issue and unexpectedly reduced the computing time . from the b c and e f comparisons , we find that the beam splitting increased the computing time by factors 7.8 and 6.6 although the dosimetric effect was only marginal dose fluctuation from moderate heterogeneity of the round structures ..[tab : configurations ] definitions of calculation configurations a f in combinations of with ( ) or without ( ) tilt , grid normalization , inter - grid sharing , and beam splitting and total computing times for a broad beam in a heterogeneous phantom . 
[ cols="^,^,^,^,^,^",options="header " , ] [ tab : broad ] dose distributions in the cross section for a carbon - ion beam calculated in configurations a f .the solid and dotted lines indicate every 10% dose levels and the field and axes .the light - gray and medium - gray areas indicate the water phantom and the high - density rod.,width=321 ] dose profiles along the ( a ) and ( b ) axes for a carbon - ion beam in configurations a ( solid ) , b ( dashed ) , c ( dotted ) , d ( ) , e ( ) , and f ( ) in units of arbitrary reference dose .,width=321 ]in this work , we introduced grid normalization to successfully resolve the problems with tilted incidence .the interpolation errors are limited by the grid intervals , which may be generally tolerable and controllable because the grid intervals are normally specified by a treatment planner .we found no apparent loss of speed for interpolation .in fact , the areal convolution with normal grids was faster than the volumetric convolution with tilted grids .the beam grid interplay artifact was overlooked in the original formulation of the gds algorithm that was implemented as an extension to the broad - beam algorithm so that the termas and the spreads were calculated exactly at the grids and were free from the interplay .the inter - grid beam sharing , which is essentially rectangular - kernel convolution , has fully resolved the problem . as shown in ref . , the beam splitting resolved the intrinsic problems with the pencil - beam model for fine heterogeneity by dynamic subdivision .this inevitably requires the interaction - point - of - view approach because otherwise it would be difficult to trace histories of the split beams backwards from each dose - deposition grid . since the gds convolution is directly coupled not to individual beams but to resultant terma and spread distributions , beam multiplication due to splitting will not increase the computing time for dose convolution . in other words ,the combination of the gds and beam - splitting methods is a rational consequence .the observed slowing by a factor of several times was mainly attributed to increased transport calculation of the split beams .these slowing factors were comparable to that originally reported , not surprisingly because they in fact used a similar framework of the beam - splitting gds algorithm though without grid normalization nor inter - grid sharing .those examples happened to be of normal incidence and of small interplay and were only intended for a proof of principle of beam splitting .the terma - weighted mean for the gridded beam spread in eq .( [ eq : sigmag ] ) can not be generally valid because the mean spread only approximately represents the contributing gaussian components .it was originally assumed that its variation would be small enough to be handled as locally uniform. however , beam splitting changes the spread abruptly . on the other hand , the beam splitting converts the major part of beam spreads directly into a terma distribution . as a result ,grid - dose spreading handles only the residuals , for which the terma - weighted mean may suffice .a known problem of tilted incidence with the original gds algorithm was naturally resolved by the grid - normalization method without serious loss of accuracy or efficiency .another problem of beam grid interplay artifact was revealed and was resolved by the inter - grid beam - sharing method . 
the beam-splitting method for fine-heterogeneity correction will inevitably multiply the beams to transport and thus will slow down dose calculation. with the gds algorithm, the dose convolution is made only once after all the beams have been transported, which minimizes the impact of the beam multiplication on computing time. in fact, for beams individually split into several tens, the calculation time increased only by several times with the gds. this algorithmic framework will thus enable fast and accurate treatment planning of heavy-charged-particle radiotherapy in the presence of density heterogeneity finer than the size of the intrinsic beam blurring. kanematsu n, endo m, futami y, kanai t, asakura h, oka h, yusa k. treatment planning for the layer-stacking irradiation system for three-dimensional conformal heavy-ion radiotherapy. med phys 2002;29:2823-29. kanematsu n, komori m, yonai s, ishizaki a. dynamic splitting of gaussian pencil beams in heterogeneity-correction algorithms for radiotherapy with heavy charged particles. phys med biol 2009;54:2015-27. pedroni e, scheib s, böhringer t, coray a, grossmann m, lin s, lomax a. experimental characterization and physical modeling of the dose distribution of scanned proton pencil beams. phys med biol 2005;50:541-61. schaffner b, pedroni e, lomax a. dose calculation models for proton treatment planning using a dynamic beam delivery system: an attempt to include heterogeneity effects in the analytical dose calculations. phys med biol 1999;44:27-41. | this work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading ( gds ) and an accurate heterogeneity-correction method referred to as gaussian beam splitting. the original gds algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. use of intermediate grids normal to the beam field has solved the beam-tilting distortion. the interplay of arrangement between beams and grids was found to be another intrinsic source of artifact. inclusion of a rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. this algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. in these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the gds algorithm. the gds and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. proton, ion beam, pencil-beam algorithm, treatment planning, inhomogeneity correction. 87.55.d-, 87.55.kd
in multipath fading channels , using multiple antennas at the transceivers is known to provide large capacity gains .such a capacity gain comes at the cost of high decoding complexity at the receiver .it is known that a high - complexity joint ml decoder can be employed at the receiver to reliably recover the information . on the other hand , the _ linear receivers _ such as the zf and the mmse receiver reduce the decoding complexity trading - off error performance . the integer forcing ( if ) linear receiver has been recently proposed new architecture obtains high rates in mimo fading channels . in this approach, the transmitter employs a layered structure with identical lattice codes for each layer .then each receive antenna is allowed to find an integer linear combination of transmitted codewords .the decoded point will be another lattice point because any integer linear combination of lattice points is another lattice point .this idea has been brought to mimo channels from the compute - and - forward protocol for physical layer network coding . in the mimoif architecture , a filtering matrix and a non - singular integer matrix are needed such that with minimum quantization error at high signal - to - noise ratio ( ) values . the exhaustive search solution to the problem of finding is addressed in .it is prohibitively complex already for real mimo and becomes untractable for complex mimo and beyond . a smart practical method of finding based on hkz and minkowski lattice reduction algorithms has been proposed recently .this provides full receive diversity with much lower complexity in comparison to exhaustive search .the major differences between integer - forcing linear receivers and lattice reduction aided mimo detectors are also presented in . in this paper, we propose a low - complexity method for choosing the above matrices . in ,a -layered scheme is considered with real lattice codebook for each layer .unlike the model there , we work on complex entries and we lift that set - up to complex case . the proposed method is a combination of three low - complexity methods which are based on complex lattice reduction ( clll ) technique for a lattice , and singular value decomposition ( svd ) of matrices . for the mimo channel , we compare the performance ( in terms of ergodic rate and uncoded probability of error ) of the proposed low - complexity solution with the known linear receivers and show that the proposed solution ( _ i _ ) provides a lower bound on the ergodic rate of the if receiver , ( _ ii _ ) outperforms the zf and mmse receivers in probability of error , and ( _ iii _ ) trades off error performance for computational complexity in comparison with exhaustive search and other lattice reduction methods including hkz and minkowski algorithms .the rest of the paper is organized as follows . in section [ sec : back ] , we give a brief background on lattices .we present the problem statement along with the signal model in section [ sec : model ] . in section[ sec : methods ] , we study the solution to the if receiver via two clll algorithms . a complexity comparison for different known approaches is also given in this section . in section [ sec : simulations ] , we show some simulation results on the performance of if receiver in ergodic mimo setting . finally , we present concluding remarks in section [ sec : conclusion ] . _notations_. 
row vectors are presented by boldface letters , and matrices are denoted by capital boldface letters .let be a vector , denotes transposition , and denotes the hermitian transposition .we show the identity and zero matrix as and respectively . for a matrix , the element in the -th row and -th column of will be denoted by .the sets , and ] .a generator matrix for is an complex matrix ^t ] is a factor selected to achieve a good quality - complexity tradeoff .an algorithm is provided in to evaluate a clll - reduced basis matrix of a lattice with a generator matrix .the input of this algorithm is the matrix and a factor , and the outputs of the algorithm are the unimodular matrix and the clll - reduced basis matrix such that .[ sec : model ] a flat - fading mimo channel with transmit antennas and receive antennas is considered . the channel matrix is in , where the entries of are i.i.d . as channel coefficient remains fixed for a given interval ( of at least complex channel uses ) and take an independent realization in the next interval .we use a -layer transmission scheme where the information transmitted across different antennas are statistically independent . for ,the -th layer is equipped with an encoder . this encoder maps a vector message , where is a ring , into a lattice codeword . if denotes the matrix of transmitted vectors , the received signal is given by , where and denotes the average signal - to - noise ratio at each receive antenna .the entries of are i.i.d . and distributed as .we also assume that the channel state information is only available at the receiver. the goal of if linear receiver is to approximate with a non - singular integer matrix .since we suppose the information symbols to be in the ring , we look for an invertible matrix over the ring .thus , we have as is the useful signal component , the effective noise is .in particular , the power of the -th row of effective noise is .hence , we define where and denote the -th row of and , respectively .note that in order to increase the effective signal to noise ratio for each layer , the term has to be minimized for each by appropriately selecting the matrices and .we formally put forth the problem statement below ( see ) : given and , the problem is to find the matrices and ^{n \times n} ] and ] . for ] , we define .further , we expand the term to obtain .note that .therefore , the solutions to the minimization of are also the solutions to the minimization of . towards minimizing , one can immediately recognize that solving minimization of is nothing but finding the shortest vector of .for the matrix in , let denote the clll - reduced generator matrix of . using the short vectors in , we obtain the complex integer matrix with row vectors ] , choose for and put as in .it is important to highlight the difference between algorithm and algorithm . in algorithm , a set of candidates for are chosen via the clll reduction technique on a -dimensional complex lattice . however , in algorithm , we jointly obtain and on a -dimensional complex lattice , and only use as candidates for the rows of .note that vectors delivered from algorithm can be different from that of algorithm , since algorithm attempts to minimize , whereas algorithm attempts to solve minimization of .note that , the clll algorithm guarantees the existence of at least vectors of the lattice such that the vector in the first row of is most likely the shortest vector of the lattice . for large values of , we can write in as , where denotes the -th row of . 
from the svd property, we have and . if for , then the above equation suggests us to select all , , along the direction of , and as short as possible . on the other hand ,if is large for each , and are comparable , then a set of good candidates can come from choosing a complex integer vector along each . *input : * , and . +* output : * a set of candidates for the rows of . 1 .obtain the svd of as .2 . choose for .therefore , another possible choices for integer vectors can be obtained from the rows of as for , where denotes the nearest integer of a real number . till now, we have proposed three different low - complexity algorithms to obtain candidate vectors for the rows of .algorithm gives us choices , algorithm delivers choices , whereas algorithm brings another possible choices .overall , we can use all the candidate vectors for and obtain the corresponding vectors , using the relation in . with this , we have pairs of .now , we proceed to sort these vectors in the increasing order of their values .then , we find the first vectors which form an invertible matrix over the ring .note that all the three algorithms have low computational complexity . as a result ,the combining algorithm has lower computational complexity than the exhaustive search and other lattice reduction algorithms like hkz and minkowski , and hence , the proposed technique is amenable to implementation .we refer to the combined solution as combined clll - svd solution " .it is clear that the combined clll - svd method includes at most a svd algorithm and a matrix inversion with complexity , a -dimensional clll algorithm with complexity following by a sorting algorithm of size with complexity .therefore the complexity of this approach is and it is obviously independent of .the complexity of the proposed exhaustive search in is known to be .the complexity of other reduction algorithms like hkz and minkowski are given in . in table[ table : complexitycompare ] , we compare the complexity of finding matrix for different known methods . [ cols="^,^,^",options="header " , ][ sec : simulations ] [ sec_4 ] ergodic rate of various linear receivers for mimo channel.,title="fig:",width=287 ] in this section , we present simulation results on the ergodic rate ( see or for definition of ergodic rate ) and the probability of error for the following receiver architectures on mimo channel : ( _ i _ ) if linear receiver with exhaustive search , ( _ ii _ ) if linear receiver with minkowski lattice reduction solution , ( _ iii _ ) if linear receiver with combined clll - svd solution , ( _ iv _ ) the zf and mmse linear receiver , and ( _ v _ ) the joint maximum likelihood ( ml ) decoder . for the if receiver with exhaustive search ,the results are presented with the constraint of fixed radius for the exhaustive search .we have not used the radius constraint given in as the corresponding search space increases with .instead , we have used a fixed radius of for all values of . by relaxing this constraint ,we have reduced the complexity of brute force search , noticeably . in fig . [fig : ergodicratecapacity22 ] , we present the ergodic rate of the above listed receivers , wherein , for the case of ml receiver , the ergodic capacity of mimo channel has been presented . the rate for if linear receiver with minkowski lattice reduction solution is cited from . 
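to make the svd-based generation of candidate integer vectors described above concrete, the sketch below rounds (separately in the real and imaginary parts) vectors aligned with the right singular directions of the channel matrix to gaussian integers. the scaling factors applied before rounding, the function names, and the channel model are assumptions of this sketch rather than the exact prescription of the text.

```python
import numpy as np

def gaussian_integer_round(v):
    """Round a complex vector component-wise to the nearest Gaussian integers."""
    return np.round(v.real) + 1j * np.round(v.imag)

def svd_candidates(H, scales=(1.0, 2.0, 3.0)):
    """Return candidate Gaussian-integer row vectors aligned with the right
    singular directions of H (one candidate per direction and scale)."""
    _, _, Vh = np.linalg.svd(H)
    cands = []
    for row in Vh:                       # right singular vectors as row vectors
        for s in scales:
            a = gaussian_integer_round(s * row)
            if np.any(a != 0):           # discard the all-zero rounding
                cands.append(a)
    return cands

rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
for a in svd_candidates(H):
    print(a)
```

in the full receiver one would then pool these candidates with those produced by the reduction-based algorithms, sort them by their effective-noise metric, and retain the first rows that form a matrix invertible over the ring used for the information symbols; that selection step is omitted here.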
we observe that the combined clll - svd solution performs pretty much the same as if receiver with exhaustive search and if based on minkowski lattice reduction solution at low and moderate .also , note that the combined clll - svd and minkowski lattice reduction solutions give lower bounds on the ergodic rate of exhaustive search based if receiver while the latter one is tighter . for the ergodic rate results of the if receiver, we have used matrices which are invertible over ] .note that = \mathcal{s } \oplus 2\mathbb{z}[i] ] and then modulo " operation is performed independently on its in - phase and quadrature component . with this , we get from both the components of .further , we solve the system of linear equations over the ring to obtain the decoded vector .ber for various linear receivers with -qam constellation.,title="fig:",width=287 ] in fig .[ fig : qpsk22 ] , we present the ber results for all the five receiver architectures . note that both the if receiver with combined clll - svd and minkowski lattice reduction solutions outperform the zf and mmse architectures , but trades - off error performance for complexity in comparison with brute force search .in particular , the combined clll - svd approach fails to provide diversity results as that of the exhaustive search and minkowski lattice reduction approaches .this diversity loss is due to the larger value of delivered by the combined clll - svd algorithm in comparison with the optimum solution . for the probability of error results of the if receiver, we have used matrices which are invertible over $ ] .we have observed similar results for mimo channels as well .[ sec : conclusion]in , algorithm along with hkz and minkowski lattice reduction algorithms was employed to find the matrix .algorithm which includes the complex lll method as an alternate technique for the above reduction methods turned out to be not satisfactory in terms of ergodic rate and error performance .hence , we have tried to improve effectiveness of clll algorithm by combining it with other approaches such as algorithms and .we have proposed a low - complexity systematic method called the combined clll - svd algorithm for the mimo if architecture .simulation results on the ergodic rate and the probability of error were also presented to reveal the effectiveness of combined clll - svd solution versus other linear receivers . the proposed combined algorithm trades - off error performance for complexity in comparison with both if receivers based on exhaustive search and minkowski or hkz lattice reduction algorithms .further improvements are required to achieve results which are competitive with if receivers based on exhaustive search and the ones presented in .a. sakzad , e. viterbo , y. hong , and and j. boutros , on the ergodic rate for compute - and - forward , " _ proceeding of international symposium on network coding 2012 ( netcod 2012 ) , mit university , boston , ma , usa_. j. zhan , b. nazer , u. erez , and m. gastpar , `` integer - forcing linear receivers , '' _ information theory proceedings ( isit ) , 2010 ieee international symposium on _ , pp . 10221026 , 2010 .extended version is available at : http://arxiv.org/abs/1003.5966 .w. zhang , s. qiao , and y. wei , `` hkz and minkowski reduction algorithms for lattice - reduction - aided mimo detection , '' _ to appear in ieee trans . on signal processing_.available at : http://ieeexplore.ieee.org/stamp/stamp.jsp?tp==6256756 . 
| integer - forcing ( if ) linear receiver has been recently introduced for multiple - input multiple - output ( mimo ) fading channels . the receiver has to compute an integer linear combination of the symbols as a part of the decoding process . in particular , the integer coefficients have to be chosen based on the channel realizations , and the choice of such coefficients is known to determine the receiver performance . the original known solution of finding these integers was based on exhaustive search . a practical algorithm based on hermite - korkine - zolotareff ( hkz ) and minkowski lattice reduction algorithms was also proposed recently . in this paper , we propose a low - complexity method based on complex lll algorithm to obtain the integer coefficients for the if receiver . for the mimo channel , we study the effectiveness of the proposed method in terms of the ergodic rate . we also compare the bit error rate ( ber ) of our approach with that of other linear receivers , and show that the suggested algorithm outperforms the minimum mean square estimator ( mmse ) and zero - forcing ( zf ) linear receivers , but trades - off error performance for complexity in comparison with the if receiver based on exhaustive search or on hkz and minkowski lattice reduction algorithms . clll algorithm , mimo , linear receivers . |
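for reference, the zero-forcing and mmse linear receivers used as baselines above admit standard closed-form filters. the sketch below assumes the common unit-power signal model y = H x + n with per-antenna snr rho and noise variance 1/rho; these normalizations, and the variable names, are assumptions of this sketch rather than the exact conventions of the text.

```python
import numpy as np

def zf_filter(H):
    """Zero-forcing equalizer; equals (H^H H)^{-1} H^H when H has full column rank."""
    return np.linalg.pinv(H)

def mmse_filter(H, rho):
    """MMSE equalizer for unit-power symbols and noise variance 1/rho."""
    n_t = H.shape[1]
    G = H.conj().T @ H + (1.0 / rho) * np.eye(n_t)
    return np.linalg.solve(G, H.conj().T)

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)          # unit-power QPSK-like symbols
rho = 100.0
n = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) * np.sqrt(0.5 / rho)
y = H @ x + n
print(zf_filter(H) @ y)                                # both estimates are close to x
print(mmse_filter(H, rho) @ y)
```

at high snr the two filters nearly coincide; the difference between them, and between either of them and an integer-forcing receiver, shows up for ill-conditioned channel realizations.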
this article is part of a larger program , which consists in devising and quantitatively analyzing numerical methods to approximate effective coefficients in stochastic homogenization of linear elliptic equations .more precisely we tackle here the case of a discrete elliptic equation with independent and identically distributed coefficients ( see however the end of this introduction for more general statistics ) , and present and fully analyze an approximation procedure based on a monte - carlo method . a first possibility to approximate effective coefficients is to directly solve the so - called corrector equation . in this approach ,a first step towards the derivation of error estimates is a quantification of the qualitative results proved by knnemann ( and inspired by papanicolaou and varadhan s treatment of the continuous case ) and kozlov . in the stochastic case , such an equation is posed on the whole , and we need to localize it on a bounded domain , say the hypercube of side . as shown in a series of papers by otto and the first author , and the first author , there are three contributions to the -error in probability between the true homogenized coefficients and its approximation .the dominant error in small dimensions takes the form of a variance : it measures the fact that the approximation of the homogenized coefficients by the average of the energy density of the corrector on a box fluctuates .this error decays at the rate of the central limit theorem in any dimension ( with a logarithmic correction for ) .the second error is the so - called systematic error : it is due to the fact that we have modified the corrector equation by adding a zero - order term of strength ( as standard in the analysis of the well - posedness of the corrector equation ) .the scaling of this error depends on the dimension and saturates at dimension .it is of higher order than the random error up to dimension .the last error is due to the use of boundary conditions on the bounded domain .provided there is a buffer region , this error is exponentially small in the distance to the buffer zone measured in units of .this approach has two main drawbacks .first the numerical method only converges at the central limit theorem scaling in terms of up to dimension , which is somehow disappointing from a conceptual point of view ( although this is already fine in practice ) .second , although the size of the buffer zone is roughly independent of the dimension , its cost with respect to the central limit theorem scaling dramatically increases with the dimension ( recall that in dimension , the clt scaling is , so that in high dimension , we may consider smaller for a given precision , whereas the use of boundary conditions requires in any dimension ) . based on ideas of the second author in ,we have taken advantage of the spectral representation of the homogenized coefficients ( originally introduced by papanicolaou and varadhan to prove their qualitative homogenization result ) in order to devise and analyze new approximation formulas for the homogenized coefficients in . in particular , this has allowed us to get rid of the restriction on dimension , and exhibit refinements of the numerical method of which converge at the central limit theorem scaling in any dimension ( thus avoiding the first mentioned drawback ) . 
unfortunately , the second drawback is inherent to the type of method used : if the corrector equation has to be solved on a bounded domain , boundary conditions need to be imposed on the boundary . since their values are actually also part of the problem , a buffer zone seems mandatory with the notable exception of the periodization method , whose analysis is yet still unclear to us , especially when spatial correlations are introduced in the coefficients .in order to avoid the issue of boundary conditions , we adopt here another point of view on the problem : the random walk in random environment approach .this other point of view on the same homogenization problem has been analyzed in the celebrated paper by kipnis and varadhan , and then extended by de masi , ferrari , goldstein , and wick .the strategy of the present paper is to obtain an approximation of the homogenized coefficients by the numerical simulation of this random walk up to some large time . as we did in the case of the approach based on the corrector equation ,a first step towards the analysis of this numerical method is to quantify the corresponding qualitative result , namely here kipnis - varadhan s convergence theorem .compared to the deterministic approach based on the approximate corrector equation , the advantage of the present approach is that its convergence rate and computational costs are dimension - independent .as we shall also see , as opposed to the approach based on the corrector equation , the environment only needs to be generated along the trajectory of the random walker , so that much less information has to be stored during the calculation .this may be quite an important feature of the monte carlo method in view of the discussion of ( * ? ? ?* section 4.3 ) .we consider the discrete elliptic operator , where and are the discrete backward divergence and forward gradient , respectively . for all , is the diagonal matrix whose entries are the conductances of the edges starting at , where denotes the canonical basis of .let denote the set of edges of .we call the family of conductances the _ environment_. the environment is random , and we write for its distribution ( with corresponding expectation ) . we make the following assumptions : * the measure is invariant under translations , * the conductances are i. i. d. , * there exists such that almost surely . under these conditions ,standard homogenization results ensure that there exists some _ deterministic _ symmetric matrix such that the solution operator of the deterministic continuous differential operator describes the large scale behavior of the solution operator of the random discrete differential operator almost surely ( for this statement , ( h2 ) can in fact be replaced by the weaker assumption that the measure is ergodic with respect to the group of translations , see ) .the operator is the infinitesimal generator of a stochastic process which can be defined as follows .given an environment , it is the markov process whose jump rate from a site to a neighbouring site is given by .we write for the law of this process starting from .it is proved in that under the averaged measure , the rescaled process converges in law , as tends to , to a brownian motion whose infinitesimal generator is , or in other words , a brownian motion with covariance matrix ( see also for prior results ) .we will use this fact to construct computable approximations of . 
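a minimal sketch of this process, assuming uniformly elliptic i.i.d. conductances (here uniform on [1, 2], an arbitrary admissible choice): from a site the walk waits an exponential time with parameter equal to the sum of the adjacent conductances and then jumps to a neighbour with probability proportional to the conductance of the connecting edge. conductances are drawn lazily, only for edges actually encountered, which reflects the point made earlier that the environment need only be generated along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
edges = {}                                    # lazily generated environment

def conductance(x, y):
    key = (min(x, y), max(x, y))              # an edge is an unordered pair of sites
    if key not in edges:
        edges[key] = rng.uniform(1.0, 2.0)    # i.i.d., bounded away from 0 and infinity
    return edges[key]

def step(x):
    nbrs = []
    for i in range(d):
        for s in (+1, -1):
            z = list(x); z[i] += s
            nbrs.append(tuple(z))
    rates = np.array([conductance(x, y) for y in nbrs])
    p_x = rates.sum()
    wait = rng.exponential(1.0 / p_x)          # holding time at x
    y = nbrs[rng.choice(len(nbrs), p=rates / p_x)]
    return y, wait

x, t = (0, 0), 0.0
while t < 100.0:
    x, dt = step(x)
    t += dt
print(x, len(edges))   # final site and number of edges ever drawn along the trajectory
```

only the conductances of edges touched by the trajectory are stored, so the memory footprint grows with the range of the walk rather than with the size of a pre-generated box.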
as proved in ,this invariance principle holds as soon as ( h1 ) is true , ( h2 ) is replaced by the ergodicity of the measure , and ( h3 ) by the integrability of the conductances . under the assumptions ( h1-h3 ), strengthens this result in another direction , showing that for almost every environment , converges in law under to a brownian motion with covariance matrix .this has been itself extended to environments which do not satisfy the uniform ellipticity condition ( h3 ) , see .let denote the sequence of consecutive sites visited by the random walk ( note that the `` times '' are different in nature for and ) .this sequence is itself a markov chain that satisfies for any two neighbours : = \frac{\omega_{x , y}}{p_\omega(x)},\ ] ] where .we simply write for .let us introduce a `` tilted '' version of the law on the environments , that we write and define by } \\d { \mathbb{p}}(\omega).\ ] ] the reason why this measure is natural to consider is that it makes the environment seen from the position of the random walk a stationary process ( see ( [ defenvpart ] ) for a definition of this process ) .interpolating between two integers by a straight line , we can think of as a continuous function on . with this in mind, it is also true that there exists a matrix such that , as tends to , the rescaled process converges in law under to a brownian motion with covariance matrix .moreover , and are related by ( see ( * ? ? ?* theorem 4.5 ( ii ) ) ) : \ { a_\mathrm{hom}^\mathrm{disc}}.\ ] ] given that the numerical simulation of saves some operations compared to the simulation of ( there is no waiting time to compute , and the running time is equal to the number of steps ) , we will focus on approximating .more precisely , we fix once and for all some with , and define , \qquad \sigma^2 = 2 \xi \cdot { a_\mathrm{hom}^\mathrm{disc}}\xi.\ ] ] it follows from results of ( or ( * ? ? ? * theorem 2.1 ) ) that tends to as tends to infinity .our first contribution is to give a quantitative estimate of this convergence .in particular we shall show that , with i. i. d. coefficients and up to a logarithmic correction in dimension , the difference between and is of order .we now describe a monte - carlo method to approximate . using the definition of the tilted measure ( [ deftdp ] ), one can see that }{t } = \frac{{\mathbb{e}}{\mathbf{e}^\omega}_0[p(\omega ) ( \xi \cdot y(t))^2]}{t { \mathbb{e}}[p]}.\ ] ] assuming that we have easier access to the measure than to the tilted , we prefer to base our monte - carlo procedure on the r. h. s. of the second identity in ( [ enlevetilt ] ) .let be independent random walks evolving in the environments respectively .we write for their joint distribution , all random walks starting from , where stands for .the family of environments is itself random , and we let be the product distribution with marginal . in other words , under , the environments are independent and distributed according to .our computable approximation of is defined by the following step in the analysis is to quantify the random fluctuations of in terms of the number of random walks considered in the empirical average to approximate and .we shall prove a large deviation result which ensures that the -probability that the difference between and exceeds is exponentially small in the ratio .the rest of this article is organized as follows . 
in section [ sec : quantkv ] , which can be read independently of the rest of the paper, we consider a general discrete- or continuous-time reversible markov process. we show that, under additional conditions written in terms of a spectral measure, the statement can be made quantitative. more precisely, kipnis-varadhan's theorem relies on writing the additive functional under consideration as the sum of a martingale plus a remainder. this remainder, after suitable normalization, is shown to converge to in . under our additional assumptions, we give explicit bounds on the rate of decay. in section [ sec : syserr ] , we make use of this result, in the context of the approximation of homogenized coefficients, to estimate the systematic error. the central achievement of this section is to prove that the relevant spectral measure satisfies the conditions of our quantitative version of kipnis-varadhan's theorem. section [ sec : randfluc ] is dedicated to the estimate of the random fluctuations. these are controlled through large-deviation estimates. relying on these results, we give in section [ sec : numtest ] a complete error analysis of the monte-carlo method to approximate the homogenized matrix, which we illustrate by numerical tests. let us quickly discuss the sharpness of these results. if were a periodic matrix ( or even a constant matrix ), the systematic error would also be of order ( without logarithmic correction for ), and the fluctuations would decay exponentially fast in the ratio as well. this shows that our analysis is optimal ( the additional logarithm seems unavoidable for , as discussed in the introduction of ). let us also point out that although the results of this paper are proved under assumptions ( h1)-(h3 ), the assumption ( h2 ) on the statistics of is only used to obtain the variance estimate of ( * ? ? ? * lemma 2.3 ). in particular, ( h2 ) can be weakened as follows : * the distribution of may in addition depend on , * independence can be replaced by finite correlation length, that is, for all , and are independent if , * independence can be replaced by mixing in the sense of dobrushin and shlosman . we refer the reader to work in progress by otto and the first author for this issue. * notation . * so far we have already introduced the probability measures ( distribution of ), ( distribution of ), ( i.i.d. distribution for ), ( tilted measure defined in ( [ deftdp ] ) ), and ( product distribution of with marginal ). it will be convenient to define the product distribution of with marginal . for convenience, we write as a short-hand notation for , for , for , and for . the corresponding expectations are written accordingly, replacing `` p '' by `` e '' with the appropriate typography. finally, we write for the euclidean norm of . kipnis-varadhan's theorem concerns additive functionals of reversible markov processes. it gives conditions for such additive functionals to satisfy an invariance principle. the proof of the result relies on a decomposition of the additive functional as the sum of a martingale term plus a remainder term, the latter being shown to be negligible.
in this section , which can be read independently of the rest of the paper , we give conditions that enable to obtain some quantitative bounds on this remainder term .we consider discrete and continuous times simultaneously .let be a markov process defined on some measurable state space ( here , stands either for or for ) .we denote by the distribution of the process started from , and by the associated expectation .we assume that this markov process is reversible and ergodic with respect to some probability measure .we write for the law of the process started from the distribution , and for the associated expectation . to the markov processis naturally associated a semi - group defined , for any , by .\ ] ] each is a self - adjoint contraction of . in the continuous - time case , we assume further that the semi - group is strongly continuous , that is to say , that converges to in as tends to , for any . we let be the -infinitesimal generator of the semi - group .it is self - adjoint in , and we fix the sign convention so that it is a positive operator ( i.e. , ) . note that in general , one can see using spectral analysis that there exists a projection such that converges to as tends to , . changing to the image of the projection , and for ,one recovers a strongly continuous semigroup of contractions , and one can still carry the analysis below replacing by the image of when necessary . in discrete time , we set .again , is a positive self - adjoint operator on . note that we slightly depart from the custom of defining the generator as in order to match more closely the continuous time situation .we denote by the scalar product in . for any function define the _ spectral measure _ of projected on the function as the measure on that satisfies , for any bounded continuous , the relation the dirichlet form associated to is given by we denote by the completion of the space with respect to this norm , taken modulo functions of zero norm .this turns into a hilbert space , and we let denote its dual .one can identify with the completion of the space with respect to the norm defined by indeed , for all , the linear form has norm , and thus defines an element of ( with norm ) iff is finite .the notion of spectral measure introduced in ( [ defef ] ) for functions of can be extended to elements of .indeed , let be a continuous function such that as .one can check that the map extends to a bounded linear map on .one can then define the spectral measure of projected on the function as the measure such that for any continuous with , ( [ defef ] ) holds . with a slight abuse of notation , for all and , we write for the duality product between and . for any , we define as according to whether we consider the continuous or the discrete time cases . in the continuous case , the meaning of ( [ defzf ] ) is unclear a priori .yet it is proved in ( * ? ? 
?* lemma 2.4 ) that for any the map can be extended by continuity to a bounded linear map on , and moreover , that ( [ defzf ] ) coincides with the usual integral as soon as .the following theorem is due to , building on previous work of .[ kv ] ( i ) for all , there exists , such that defined in satisfies the identity , where is a square - integrable martingale with stationary increments under ( and the natural filtration ) , and is such that : \xrightarrow[t \to + \infty ] { } 0.\ ] ] as a consequence , converges in law under to a gaussian random variable of variance as goes to infinity , and \xrightarrow[t \to + \infty ] { } \sigma^2(f).\ ] ] ( ii ) if , moreover , and , for some , is in , then the process converges in law under to a brownian motion of variance as goes to .* remarks . *the additional conditions appearing in statement ( ii ) are automatically satisfied in discrete time , due to the fact that in this case . in the continuous - time setting and when , the process is almost surely continuous , and is indeed a well - defined random variable . under some additional information on the spectral measure of , we can estimate the rates of convergence in the limits ( [ e : xitendvers0 ] ) and ( [ e : l2conv ] ) . for any and , we say that the spectral exponents of a function are at least if note that the phrasing is consistent , since if for the lexicographical order , and if the spectral exponents of are at least , then they are at least . in , it was found more convenient to consider , instead of ( [ specexp ] ) , a condition of the following form : one can easily check that conditions ( [ specexp ] ) and ( [ specexp2 ] ) are equivalent . indeed , on the one hand, one has the obvious inequality which shows that ( [ specexp2 ] ) implies ( [ specexp ] ) . on the other hand, one may perform a kind of integration by parts , use fubini s theorem : and obtain the converse implication by examining separately the integration over in and in . for all and , we set the quantitative version of theorem [ kv ] is as follows .[ quantkv ] if the spectral exponents of are at least , then the decomposition of theorem [ kv ] holds with the additional property that \= o(\psi_{\gamma , q}(t ) ) \qquad ( t \to + \infty).\ ] ] moreover , }{t } = o(\psi_{\gamma , q}(t ) ) \qquad ( t \to + \infty).\ ] ] in the continuous - time setting , the argument for the first estimate is very similar to the one of ( * ? ?* proposition 8.2 ) , and we do not repeat the details here . it is based on the observation that = 2 \int \frac{1-e^{-\lambda t}}{\lambda^2 t } \ \d e_f(\lambda).\ ] ] one needs to take into account the possible logarithmic terms that appear in ( [ specexp2 ] ) and which are not considered in .some care is also needed because we do not assume that .yet one can easily replace the bound involving the norm of by its norm .the second part of the statement is given by ( * ? ? ?* proposition 8.3 ) .we now turn to the discrete time setting . in this context ,identity ( [ normxi2 ] ) should be replaced by = 2 \int \frac{1-(1-\lambda)^t}{\lambda^2 t } \ \d e_f(\lambda).\ ] ] by definition , , where is the semi - group at time .hence the spectrum of is contained in ] , which is equal to in the proof of ( * ? ? ?* proposition 8.3 ) , is in the present case equal to ] .moreover , the measure defined in ( [ deftdp ] ) is reversible and ergodic for this process ( * ? ? ?* lemma 4.3 ( i ) ) . 
as a consequence ,the operator is ( positive and ) self - adjoint in .the proof of theorem [ syserr ] relies on spectral analysis . for any function ,let be the spectral measure of projected on the function .this measure is such that , for any positive continuous function , one has = \int \psi(\lambda ) \\d e_f(\lambda).\ ] ] for any and , we recall that we say that the spectral exponents of a function are at least if holds .let us define the local drift in direction as = \frac{1}{p(\omega ) } \sum_{|z| = 1 } \omega_{0,z } \\xi \cdot z.\ ] ] as we shall prove at the end of this section , we have the following bounds on the spectral exponents of . [p : specexp ] under assumptions ( h1)-(h3 ) , there exists such that the spectral exponents of the function are at least let us see how this result implies theorem [ syserr ] . in order to do so, we also need the following information , that is a consequence of proposition [ p : specexp ] .[ corpoly ] let \ ] ] be the image of by the semi - group at time associated with the markov chain .there exists such that = \left| \begin{array}{ll } o\big(t^{-2 } \{ \ln}^q(t)\big ) & \text{if } d = 2,\\ o\big(t^{-(d/2 + 1)}\big ) & \text{if } 3 { \leqslant}d { \leqslant}5 , \\ o\big(t^{-4 } \{ \ln}(t)\big ) & \text{if } d = 6,\\ o\big(t^{-4}\big ) & \text{if } d { \geqslant}7 .\end{array } \right.\ ] ] this result is the discrete - time analog of ( * ? ? ?* corollary 1 ) .it is obtained the same way , noting that = \int ( 1-\lambda)^{2 t } \ \d e_f(\lambda),\ ] ] and that the support of the measure is contained in ] it remains to control the last term .in particular , provided we show that = \left| \begin{array}{ll } o \big ( { \ln}^{q}(t ) \big ) & \text{if } d = 2 , \\o \big ( 1 \big ) & \text{if } d > 2 ,\end{array } \right.\ ] ] , , , and imply first that -{\overline}{\sigma}^2 ] for all .in particular the identity above turns into },\ ] ] since }\int \psi_t p \\d \mathbb{p}=\frac{t}{\mathbb{e}[p]}\int \phi_t p \\d \mathbb{p } = 0 ] , is divided in six steps .starting point is the application of the variance estimate of ( * ? ? ?* lemma 2.3 ) to , which requires to estimate the susceptibility of with respect to the random coefficients . in view of it is not surprising that we will have to estimate not only the susceptibility of but also of and of some green s function with respect to the random coefficients . in the first step ,we establish the susceptibility estimate for the green s function . in step 2we turn to the susceptibility estimate for the approximate corrector .we then show in step 3 that , relying on , this implies that the spectral exponents are at least following step is to estimate the susceptibility of . 
in step 5 we show that this , combined with the suboptimal estimates obtained using , allow us to improve the spectral exponents to in the last step , we quickly argue that in turn these spectral exponents yield the optimal and suboptimal estimates which finally bootstrap to the desired estimates of the spectral exponents , and consequently yield the following optimal estimate of } ] .it suffices to show that , for some , { \leqslant}\exp(-n { \varepsilon}^2 / c).\ ] ] chebyshev s inequality implies that , for any , & { \leqslant } & e^{-n { \varepsilon}\lambda } { \mathbb{e}}^\otimes\left[\exp(\lambda({\overline}{p}({\omega}^{(1)})+ \cdots + { \overline}{p}({\omega}^{(n)}))\right ] \\ & { \leqslant } & e^{-n { \varepsilon}\lambda } { \mathbb{e}}\left[\exp(\lambda{\overline}{p}({\omega}))\right]^n.\end{aligned}\ ] ] using a series expansion of the exponential , one can check that there exists such that , for any small enough , { \leqslant}c \lambda^2.\ ] ] as a consequence , for any small enough , { \leqslant}\exp\left ( -n ( \lambda { \varepsilon}- c \lambda^2 ) \right),\ ] ] and for , the latter term becomes .the event can be handled the same way , and we thus obtain ( [ oneside ] ) . what makes the proof of proposition [ gdhatp ] work is the observation ( [ loglapl ] ) that the log - laplace transform of ] , any and , { \leqslant}\frac{c_1}{t^{d/2 } } \exp\left(- \frac{|x|^2}{c_1 t } \right).\ ] ] from theorem [ gauss ] we deduce the following result .[ supexp ] let be given by theorem [ gauss ] .for all , one has < + \infty.\ ] ] let . by theorem [ gauss ] , { \leqslant}c_1 { t^{-d/2 } } \sum_{x \in { \mathbb{z}}^d } e^{-\delta |x|^2/t}.\ ] ] if the sum ranges over all , it is easy to bound it by a convergent integral : by symmetry , the estimate carries over to the sum over all .the same argument applies for the sum over all having exactly one component equal to , and so on .the following lemma contains the required uniform control on the log - laplace transform of .[ grandesdev ] there exist and such that , for any and any , { \leqslant}c_2 \lambda^2.\ ] ] it is sufficient to prove that there exists such that , for any small enough and any , { \leqslant}1+c_3 \lambda^2.\ ] ] we use the series expansion of the exponential to rewrite this expectation as .\ ] ] the term corresponding to is equal to , whereas the term for vanishes .the remaining sum , for ranging from to infinity , can be controlled using corollary [ supexp ] combined with the bound ,\ ] ] which follows from the definition of and jensen s inequality .we are now in position to prove theorem [ randfluc ] .starting point is the inequality \\ { \leqslant}{\mathbb{p}}^\otimes_0\left[a_n(t ) - \sigma_t^2 { \geqslant}{\varepsilon}/ t\right ] + { \mathbb{p}}^\otimes_0\left[a_n(t)(\hat{p}_n^{-1 } - 1 ) { \geqslant}{\varepsilon}/ t\right].\end{gathered}\ ] ] we treat both terms of the r. h. s. separately .for the first one , the key observation is that ( recalling the definition of given in ( [ deftdp ] ) ) = { \tilde}{{\mathbb{p}}}^\otimes_0\left[\frac{(\xi \cdot y^{(1)}(t))^2 + \cdots + ( \xi \cdot y^{(n)}(t))^2}{n t } - \sigma_t^2 { \geqslant}{\varepsilon}/ t\right].\ ] ] let . 
as in the proof of proposition [ gdhatp ] , we bound this term using chebyshev s inequality : \\ & \qquad { \leqslant}{\tilde}{{\mathbb{e}}}^\otimes_0\left [ \exp\left ( \lambda \left(\frac{(\xi \cdot y^{(1)}(t))^2 + \cdots+ ( \xi \cdot y^{(n)}(t))^2}{t } - n \sigma_t^2 \right ) \right ) \right ] \ \exp\left ( -\frac{n \lambda { \varepsilon}}{t } \right ) \\ & \qquad { \leqslant}{{\tilde}{{\mathbb{e}}}_0\left [ \exp\left ( \lambda \left(\frac{(\xi \cdot y(t))^2}{t } - \sigma_t^2 \right ) \right ) \right]}^n \ \exp\left ( -\frac{n \lambda { \varepsilon}}{t } \right ) .\end{split}\ ] ] by lemma [ grandesdev ] , the r. h. s. of ( [ comput1 ] ) is bounded by for all small enough . choosing ( which is small enough for large enough ) , we obtain { \leqslant}\exp\left ( - \frac{n { \varepsilon}^2}{4 c_2 t^2 } \right),\ ] ] as needed .we now turn to the second term of the r. h. s. of ( [ decomp2termes ] ) . from inequality ( [ estim1 ] ) , we infer that there exists such that { \leqslant}\exp\left ( - \frac{n { \varepsilon}^2}{4 c_2 t^2 } \right).\ ] ] since is almost surely bounded by a constant , it is enough to evaluate the probability ,\ ] ] which is controlled by proposition [ gdhatp ] . we have thus obtained the required control of the l. h. s. of ( [ decomp2termes ] ) .the probability of the symmetric event \ ] ] can be handled the same way .in this section , we illustrate on a simple two - dimensional example the sharpness of the estimates of the systematic error and of the random fluctuations obtained in theorems [ syserr ] and [ randfluc ] . in the numerical tests , each conductivity of takes the value or with probability . in this simple case ,the homogenized matrix is given by dykhne s formula , namely ( see for instance ( * ? ? ?* appendix a ) ) . for the simulation of the random walk , we generate and store the environment along the trajectory of the walk . in particular , this requires to store up to a constant times data . in terms of computational cost ,the expansive part of the computations is the generation of the randomness . in particular , to compute one realization of costs approximately the generation of random variables .a natural advantage of the method is its full scalability : the random walks used to calculate a realization of are completely independent .we first test the estimate of the systematic error : up to a logarithmic correction , the convergence is proved to be linear in time . in view of theorem [ randfluc ] , typical fluctuations of are of order no greater than , and thus become negligible when compared with the systematic error as soon as the number of realizations satisfies .we display in table [ tab : syst ] an estimate of the systematic error obtained with realizations .the systematic error is plotted on figure [ fig : syst ] in function of the time in logarithmic scale .the apparent convergence rate ( linear fitting ) is , which is consistent with theorem [ syserr ] , which predicts and a logarithmic correction . for realizations ]we now turn to the random fluctuations of .theorem [ randfluc ] gives us a large deviation estimate which essentially says that the fluctuations of have a gaussian tail , measured in units of .the figures [ fig : histo-1]-[fig : histo-4 ] display the histograms of for and ( with 10000 realizations of in each case ) . as expected , they look gaussian . 
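to make the monte - carlo procedure of this section concrete , the following python sketch ( not part of the original numerical study ) simulates independent random walks among i.i.d. random conductances taking the values alpha and 1 / alpha with probability 1/2 , generating and storing the environment only along each trajectory as described above , and returns the empirical average of the squared displacement in the first coordinate direction rescaled by the final time . the parameter values ( alpha , the number of walks and the final time ) are illustrative , the direction xi = e_1 is fixed for simplicity , and the normalization by the empirical estimate of the mean total conductance used in the paper is omitted for brevity .

```python
import numpy as np

NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # 2d nearest-neighbour steps

def edge_key(x, y):
    # undirected edge between neighbouring sites x and y
    return (x, y) if x <= y else (y, x)

def squared_displacement(t, alpha, rng):
    """one walk of t steps among i.i.d. conductances (alpha or 1/alpha with
    probability 1/2); the environment is drawn and stored only along the
    trajectory, so memory grows at most linearly in t."""
    env, x = {}, (0, 0)
    for _ in range(t):
        weights = []
        for dz in NEIGHBOURS:
            y = (x[0] + dz[0], x[1] + dz[1])
            k = edge_key(x, y)
            if k not in env:
                env[k] = alpha if rng.random() < 0.5 else 1.0 / alpha
            weights.append(env[k])
        p = np.array(weights) / np.sum(weights)    # conductance-weighted jump law
        dz = NEIGHBOURS[rng.choice(4, p=p)]
        x = (x[0] + dz[0], x[1] + dz[1])
    return float(x[0]) ** 2                        # (xi . Y(t))^2 with xi = e_1

def empirical_average(n=1000, t=1000, alpha=4.0, seed=0):
    """average of the rescaled squared displacement over n independent walks
    in independent environments."""
    rng = np.random.default_rng(seed)
    return np.mean([squared_displacement(t, alpha, rng) for _ in range(n)]) / t

if __name__ == "__main__":
    print(empirical_average())
```

as noted in the text , the walks are completely independent , so this loop parallelizes trivially over realizations .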
the authors acknowledge the support of inria through the `` action de recherche collaborative '' disco . this work was also supported by ministry of higher education and research , nord - pas de calais regional council and feder through the `` contrat de projets etat region ( cper ) 2007 - 2013 '' . g.c . papanicolaou and s.r.s . varadhan . boundary value problems with rapidly oscillating random coefficients . in _ random fields , vol . i , ii ( esztergom , 1979 ) _ , volume 27 of _ colloq . math . soc . jános bolyai _ , pages 835 - 873 . north - holland , amsterdam , 1981 . | this article is devoted to the analysis of a monte - carlo method to approximate effective coefficients in stochastic homogenization of discrete elliptic equations . we consider the case of independent and identically distributed coefficients , and adopt the point of view of the random walk in a random environment . given some final time , a natural approximation of the homogenized coefficients is given by the empirical average of the final squared positions rescaled by of independent random walks in independent environments . relying on a new quantitative version of kipnis - varadhan s theorem ( which is of independent interest ) , we first give a sharp estimate of the error between the homogenized coefficients and the expectation of the rescaled final position of the random walk in terms of . we then complete the error analysis by quantifying the fluctuations of the empirical average in terms of and , and prove a large - deviation estimate . compared to other numerical strategies , this monte - carlo approach has the advantage to be dimension - independent in terms of convergence rate and computational cost . * keywords : * random walk , random environment , stochastic homogenization , effective coefficients , monte - carlo method , quantitative estimates . * 2010 mathematics subject classification : * 35b27 , 60k37 , 60h25 , 65c05 , 60h35 , 60g50 . |
of interest to anglers seeking to fill their creels and children seeking to fasten their shoes , a wide audience has found knots compelling from time immemorial . in the scientific community ,knots have been featured in initial formulations of the nature of atoms , ( see a popular historical account in ) , the formulation of certain path integrals , , and also in quantitative biology , where knots have been observed in , , and tied into , dna , , where the space of knots is biologically created and manipulated .knots also have been observed occasionally in proteins , .historically , the classification of knots and study of knot invariants were the first subjects of knot theory , and this remains in the center of attention among knot theorists of mathematical orientation .another fundamental aspect of knot theory is that of knot entropy .physically , this group of problems comes to the fore in the context of polymers and biophysics .mathematically , this issue belongs to both topology and probability theory and seems to remain underappreciated in the mathematics and mathematical physics community .even the simplest question in this area is poorly understood : what is the probability that a randomly closed loop in will be topologically equivalent to plane circle ?in other words , using professional parlance of the field , what is the probability that random loop is a trivial knot ( unknot ) , ?there are , of course , many more questions along the same lines , e.g. , what are probabilities of other more complex knots ? what is the entropic response of a topologically constrained loop to various perturbations , etc .most of what we know about these `` probabilistic topology '' questions is learned from computer simulations . in particular , it has been observed by many authors over the last 3 decades that the trivial knot probability depends on the length of the loop , decaying exponentially with the number of segments in the loop , : for some lattice models this exponential law , in the asymptotics , was also mathematically proven .it was also noticed that the same exponential law , with the same decay parameter , also describes the large asymptotical tail of the abundance of any other particular knot - although for complex knots exponential decay starts only at sufficiently large ( as soon as the given knot can be identified as an underknot ) .an alternative view of formula ( [ eq : triv_p ] ) , useful in the context of thermodynamics , implies that the removal of all knots from the loop is associated with thermodynamically additive ( linear in ) entropy loss of per segment ; in other words , at the temperature , untying all knots would require mechanical work of at least per segment .another manifestation of the importance of the parameter was found in the recent series of works .these works belong to the direction addressing the spatial statistics of polymer loops restricted to remain in a certain topological knot state .it turns out that even for loops with no excluded volume and thus are not self - avoiding , marks the crossover scale between mostly gaussian ( ) and significantly non - gaussian ( ) statistics .indeed , at , locking the loop in the state of an unknot excludes only a small domain of the conformational space which produces only marginal ( albeit non - trivial ) corrections to gaussian statistics - for instance , mean - squared gyration radius of the loop is nearly linear in .by contrast , at , the topological constraints are of paramount importance , making the loop 
statistics very much non - gaussian , and consistent with effective self - avoidance .thus , it seems likely that the parameter might hold the key to the entire problem of knot entropy .we therefore decided to look at this parameter more closely in this paper .present understanding of the values of is quite modest .first , the constant s value was invariably found to be quite large , around for all examined models of `` thin '' loops with no excluded volume , or no self - avoidance .second , it is known that knots are dramatically suppressed for `` thick '' self - avoiding polymers , which means that rapidly increases with the radius of self - avoidance .the latter issue is also closely connected to the probabilities of knots in lattice models , where the non - zero effective self - avoidance parameter is automatically set by the lattice geometry . in the present paper, we will only consider the arguably more fundamental case of loops with no self - avoidance .the starting point of our analysis is the observation that appears to be noticeably different for two standard polymer models for which common sense suggests that they should be equivalent .both models can be called freely - jointed in the sense that they consist of rigid segments with free rotation in the joints . however , in one model all segment vectors are of the same length , while in the other model segment vectors are taken from a gaussian distribution . the motivation to consider the gaussian distributed step vectors comes from the idea of decimation , or renormalization: we can start from the loop with segments of equal length and then group them into blobs of bare segments , each blob having nearly gaussian distributed end - to - end vector . with respect to the knot abundance , the fixed length model was examined in and the gaussian model in .it was noticed that for the gaussian distributed steps was larger than for identical steps , assuming no self - exclusion in both cases .no attention was paid to this observation , possibly because there was no confidence that the observed difference is real , in the context of the numerical error bars in the pertinent measurements .recently , , more detailed data became available which suggest that indeed is different for the two models , with fixed or gaussian distributed steplength .a similar result was independently obtained by vologodskii .latter in this article , we present even better quality data supporting the same observation that is different for these two models .this is a rather disturbing observation .indeed , the idea of universality in polymer physics suggests that there should not be any difference between these two models as far as any macroscopic quantity is concerned .for instance , not only is the mean squared gyration radius the same for both models , but even the distribution of the gyration radii are the same , except far into the tails .in general , the difference between polymer models of this type becomes significant only in the strong stretching regime or at high density . 
even if one takes into account the idea that knots , when present , are most likely localized along the chain , it is unclear how this fact can manifest itself for the loop that has no knots .thinking generally about the loop models with fixed or gaussian steplength , our reaction to this discrepancy is to realize that the major difference between the two freely - jointed loop models is that the gaussian model may have a few unusually long segments , suppressing the ability of other shorter segments to wind around , and thus decreasing the possibility for knots to occur , and accordingly , increasing .thus , the ability to take long strides might account for the comparative slowness of the ensemble of loops with gaussian steplengths to diversify their knot spectrum with increasing .the main goal of the present work is to investigate this conjecture .the plan of the work is as follows . after a brief description of our computational algorithms used to generate closed loops and to identify their topologies ( section [ sec : methods ] ) , we present computational results ( section [ sec : results ] ) on knot abundance for a variety of models differing in the width of their steplength distribution .in addition to the already mentioned loops with fixed , and gaussian - distributed steplengths , in order to look at the even broader distributions which allow for very long segments , we also generated loops with the generalized cauchy - lorentz `` random - flight '' distribution .finally , we include loops of bimodally distributed fixed steplength . in brief ,our results are as follows .first , we confirm the exponential decay law of the unknot probability , formula ( [ eq : triv_p ] ) , across all models examined .second , we find qualitatively that indeed a wider distribution of the segment lengths leads to knot suppression , ie a larger .third , and most unexpectedly , we find that does not show any signs of any singular behavior associated with the divergence of mean squared segment length or any other moment of the segment length distribution .instead , blows up and appears to grow without a bound when the distribution of segment lengths approaches the border of normalizability .all polymer models referenced and employed in this work use a freely jointed model to represent a polymer loop .the polymer is represented by a set of vertices in 3d , with position vectors , where the step between successive vertices is described , . in all models ,we assume that the distribution of segment vectors , which we call , is spherically symmetric and depends only on the steplength , such that we also assume that the mean squared steplength is always the same ( when defined ! - see below ) , we denote it : with this in mind , the simplest measure of the distribution breadth involves higher order moments : where .specifically , we analyzed the following models .the * fixed steplength model * is described by the distribution for this model , of course . 
with segments ,the loop s contour length is obviously and the mean squared gyration radius of the loop is .the * gaussian steplength model * is generated by the distribution \ , \ ] ] in this case , .the contour length of the -segment loop in this model is and the mean squared gyration radius is .the * random flight steplength model * is obtained from the generalized cauchy - lorenz distribution ( also known in the theory of lvy flights ) of the form \right/ \left ( 4 \ell^3 \pi^2 \right ) } { 1+(r/\ell)^{\alpha } } \ , \label{eq : mcl_prob}\ ] ] where the factor in the numerator ensures normalization . here, , a parameter of the distribution , must be greater than , otherwise the normalization integral diverges .nevertheless , it would be fair to speak about a family of random - flight models , parameterized by , instead of just one model .varying allows us to work with a `` tunable '' distribution .these distributions `` fat '' power - law tails lead to diverging moments ( which is why they are used to describe super - diffusive behavior seen in biological foraging , and quite recently , in the diffusion of bank - notes across the united states , ) .specifically , the ensemble - averaged contour length of the loop is well defined only at ( ) , mean squared gyration radius exists at , and only exists at , in which case it is equal to ^ 2 } { \sin\left [ 3 \pi / \alpha \right ] \sin\left [ 7\pi / \alpha \right]}-1 } \ \ \ ( { \rm at } \\alpha > 7 ) \.\ ] ] finally , we also include loops with * bimodally distributed steplength*. for these loops , which means that two possible steplengths , or , occur with probabilities and , subject to the normalization conditions , , and .all bimodally distributed models can be conveniently parameterized by and . for these models, might be very large if is very large and is rather close to unity ( ) .unbiased generation of closed loops is of decisive importance for our work .recently , we gave a detailed review of the existing computational methods to generate statistically representative closed loops ( see the last section of the work ) . in principle , the best way to generate loops is based on the so - called conditional probability method .the idea is that a closed path is generated as a random walk , step by step , except after the completion of steps , the next step , , is generated from the analytically computed probability distribution of the step vector , subject to the condition that after more steps , the walker returns to the starting point .this idea was first suggested and implemented for gaussian distributed steps .recently , we implemented this method for steps of equal length .unfortunately , this method is computationally costly , and appears to be prohibitively difficult to implement for more sophisticated models , such as random flight .we therefore use the simpler method , called the method of triangles . 
this method generates loops of segments with divisible by .it involves creating a set of equilateral triangles , each randomly oriented in 3d space .each triangle is considered a triplet of vectors with zero sum .a random permutation of the edge vectors which make up these triangles , and then connecting all vectors head - to - tail , creates a loop which will be closed , as the bond vectors together have vector sum .of course , this method imposes correlations between segments .we therefore take special care to compare the results of this method with the unbiased generation using the conditional probability method for both gaussian distributed and equal length step models .we found that no appreciable deviations in knot abundance data arise from the imperfection of the triangle method .we therefore use the method of triangles to generate the random flight and bimodal distributed loops , for which no alternative method is available . to avoid even the slightest problems with correlations , implicit in our triplet method , and to ensure that the decay of trivial knot probability is in the exponential regime, we exclude the data from small loops and fit the trivial knot probability on the interval ] , are summarized in figures [ fig : compare_fixed_and_gaussian ] and [ fig : mcl_fit ] .the raw curves of probability given in figure [ fig : compare_fixed_and_gaussian ] clearly show that the odds of finding an unknot in a set of loops get increasingly unfavorable as decreases .that is not unexpected : at smaller , the probability distribution ( [ eq : mcl_prob ] ) acquires an increasingly fat tail , which implies the presence of a fraction of exceptionally long segments , and they of course suppress the chance of knots . at very large , the knot probability for the random flight model appears similar to the data for fixed steplength loops .indeed , as figure [ fig : mcl_fit ] indicates , for the random flight model at very large approaches which is not dramatically different from for the fixed length steps .in fact , the remaining difference might be associated with the fact that even at very large , the random flight model , although it has essentially no very long steps , has some relatively short ones , which might account for the discrepancy in .figures [ fig : compare_fixed_and_gaussian ] and [ fig : mcl_fit ] show further that at , the random - flight loops behave essentially the same way as gaussian steps in terms of .most interestingly , figure [ fig : mcl_fit ] shows no sign of anything unusual happening to at the values of at which various physically important moments of the segment length distribution ( [ eq : mcl_prob ] ) start diverging . for instance , at the mean squared gyration radius diverges , at even the contour length of the loop diverges - and yet none of these facts find any visible reflection on the dependence of on . keeps smoothly increasing with , with a maximum measured value of at .it appears that in fact blows up and goes to infinity as approaches - the border below which the distribution ( [ eq : mcl_prob ] ) is not normalizeable .moreover , as the inset of figure [ fig : mcl_fit ] shows , this divergence is well approximated by the power law dependence of the form , in fitting the data with this power law , we ignored the small irregularities visible around ( or \approx 2.2 ] , with at least loops in each simulation record for , and at least loops in those records used with length .we also consider parameters and in the intervals ] , respectively . 
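as an implementation aside before we discuss the bimodal results , the loop construction used throughout this section is simple to reproduce . the following sketch ( ours , not the authors' production code ) implements the method of triangles described above ; the knot identification step , which requires evaluating a knot invariant such as the alexander polynomial , is not shown .

```python
import numpy as np

def random_triangle(rng):
    """three unit edge vectors of a randomly oriented equilateral triangle;
    by construction they sum to zero."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal frame
    u, v = q[:, 0], q[:, 1]
    return [u, -0.5 * u + 0.5 * np.sqrt(3) * v, -0.5 * u - 0.5 * np.sqrt(3) * v]

def closed_loop(n, rng):
    """closed loop of n segments (n divisible by 3) by the method of triangles:
    pool the edge vectors of n/3 random triangles, permute them randomly and
    connect them head to tail; the bond vectors sum to zero, so the loop
    closes exactly."""
    assert n % 3 == 0
    edges = []
    for _ in range(n // 3):
        edges.extend(random_triangle(rng))
    rng.shuffle(edges)
    return np.cumsum(edges, axis=0)   # vertex positions; the last one is the origin

rng = np.random.default_rng(1)
loop = closed_loop(99, rng)
print(np.allclose(loop[-1], 0.0))     # True: the loop is closed
```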
by symmetry , it is sufficient to look at ; this means , is the fraction of _ shorter _ segments .the raw data from the loops with bimodal steplength are presented in figure [ fig : bimodal_surface ] .this surface plot charts the change in as a function of two parameters of the model , and . below the surfaceis the corresponding contour plot of as a function of these two parameters .the changes in are smooth , and there do not seem to be any singularities in the behavior of .the maximum observed value of occurs at , .this maximum of appears to be rather sharp , small deviations in from lead to smaller values . from the data ,we believe that in this model is maximized when the fraction of shorter segments is . as regards the second degree of freedom , , while our data certainly shows that as the bimodal system approaches the fixed steplength system of section [ fixed_step_loops ] ( ie ) , in the opposite direction of , it is not clear if will continue to increase in an unbounded way or be in some way encumbered . in thisregard our present knot analysis machinery is limited by the relative disparity between segments of different length , and more work needs to be done to elucidate the scaling of in the limit .qualitatively , all of our data are consistent with the idea that what suppresses knots is the presence of a fraction of unusually long segments .one could then hypothesize that might depend on some unique property of the segment length distribution , for instance , , as defined in eq .( [ eq : sigm_def ] ) .this hypothesis is tested in figure [ fig : bimodal_compare_small ] ; the figure indicates that the hypothesis fails. nevertheless , the results presented in this figure are interesting , as they show that large can be achieved by the combination of segment length difference ( ) and segment type fractions .although a functional relationship between and seems evident in our data for random flight loops , the data we have for loops with bimodal distribution of steplength do not suggest a simple single parameter which determines .in a qualitative sense , it does seem that there exists a relationship between and the reach of successive segments within the chain ( as seen in figure [ fig : bimodal_compare_small ] ) .it seems qualitatively clear indeed that knottedness is greatly suppressed by the presence of some very long segments .thus , the slowly decaying tail of the segment length probability distribution , or the presence of a small fraction of very long segments , implies a large , which sounds natural .however , our understanding of these observations beyond the qualitative level is limited .we found that appears to exhibit no singularity associated with divergence of any natural characteristics of the loop , such as its gyration radius or even contour length ; instead , exhibits power law critical behavior when the segment length distribution approaches the boundary of normalizability .in addition , we do not know which property of the segment length distribution determines .we have established that this is not simply the distribution width , and from the results with random flight distribution it seems also clear that it is not based on any finite moments of the distribution .we consider the development of a fuller understanding of the variance of a compelling challenge .the data clearly show that wide variation in the behavior of is possible , including very large values of in certain models .given that plays the role of the cross - over length for the critical behavior of 
topologically constrained loops , we can speculate that the models with very large are in some way similar to the models of self - avoidance in the vicinity of the -point .we think that this analogy deserves very close attention .we acknowledge useful discussion with a.l .efros and a. vologodskii .we express thanks to r. lua for the use of his knot analysis routines .we are grateful for access to the minnesota supercomputing institute s ibm core resources and the university of minnesota physics department linux cluster , resources which made much of the simulation possible .this work was supported in part by the mrsec program of the national science foundation under award number dmr-0212302 .to generate steplengths from the random - flight distribution ( [ eq : mcl_prob ] ) , we first take a random number from the uniform distribution on the interval $ ] and then find steplength as , where the mapping is determined by the equation where is given by eq .( [ eq : mcl_prob ] ) . although a closed - form representation of the right - hand side of eq .( [ eq : int_prob ] ) exists in the form of an incomplete beta function , , we chose to implement the mapping via a numerical interpolation of tabulated values of the integral .as the power - law tail of this distribution is quite fat , particularly at small , accurate representation of the integral becomes challenging at large steplengths .this work is ultimately numerical , and we are forced to truncate the representation of the integral to at some maximum steplength , ie specifying an upper bound on .the slow convergence of the tail is behind the appearance of the few outliers in the data for the random - flight method , figure [ fig : mcl_fit ] .witten e 1989 _ commun .phys . _ * 121 * 351 - 399 dean f , stasiak a , koller t and cozzarelli n 1985 _ j. biol . chem . _* 260 * 4975 rybenkov v , cozzarelli n and vologodskii a 1993 _ proc .usa _ * 90 * 5307 arai y , yasuda r , akashi k , harada y , miyata h , kinosita k and itoh h 1999 _ nature _ * 399 * 446 bao x r , lee h j and quake s r 2003 _ phys .lett . _ * 91 * 265506 flammini a , maritan a and stasiak a 2004 _ biophysical journal _ * 84 * 2968 - 2975 mansfield m l 1994 _ nature structural biology _ * 1 * 213 - 214 taylor w r 2000 _ nature _ * 406 * 916 - 919 zarembinski t i , kim y , peterson k , christendat d , dharamsi a , arrowsmith c h , edwards a m and joachimiak a 2003 _ proteins structure , function , and genetics _ * 50 * 177 - 183 koniaris k and muthukumar m 1991 _ phys . rev .lett . _ * 66 * 2211 deguchi t and tsurusaki k 1997 _ phys . rev .e _ * 55 * 6245 janse van rensburg e j and whittington s g 1990 _ j. phys .a. _ * 23 * 3573 - 3590 klenin k , vologodskii a , anshelevich v , dykhne a and frank - kamenetskii m 1988 _ j biomol struct dyn . _ * 5 * 1173 sumners d w and whittington s g 1988 _ j. phys .a. _ * 21 * 1689 - 1694 mooren t , lua r c and grosberg a y ( editors ) calvo j a , millet k c , rawdon e j and stasiak a 2005 _ `` under - knotted and over - knotted polymers : 1 .unrestricted loops '' _ in _ physical and numerical models in knot theory , including applications to the life sciences , series on knots and everything _ * 36 * 363 - 384 ( world scientific ) matsuda h , yao a , tsukahara h , deguchi t , furuta k and inami t 2003 _ phys . rev .e. 
_ * 68 * 011102 dobay a , dubochet j , millett k , sottas p and stasiak a 2003 _ proc .usa _ * 100 * 5611 vologodskii a v november 5 - 7 , 2004 _ talk given at the pittsburgh meeting of american mathematical society , session on knots and macromolecules _lua r c , moore n t and grosberg a y ( editors ) calvo j a , millet k c , rawdon e j and stasiak a 2005 _ `` under - knotted and over - knotted polymers : 2 .compact self - avoiding loops '' _ in _ physical and numerical models in knot theory , including applications to the life sciences , series on knots and everything _ * 36 * 385 - 398 ( world scientific ) viswanathan g m , afanasyev v , buldyrev s v , havlin s , da luz m g e , raposo e p and stanley h e 2001 _ brazilian journal of physics _ * 31 * 102 levandowsky m , klafter j and white b s 1988 _ bulletin of marine science _* 43 * 758 cole b j 1995 _ anim .behav . _ * 50 * 1317 boyer d , miramontes o , ramos - fernandez g , mateos j l and cocho g 2003 cond - mat/0311252 focardi s , marcellini p and montanaro p 1996 _ journal of animal ecology _ * 65 * 606 viswanathang m , afanasyev v , buldyrev s v , murphy e j , prince p a and stanley h e 1996 _ nature _ * 381 * 413 brockmann d , hufnagel l and geisel t 2006 _ nature _ * 439 * 04292 eric w. weisstein. `` lvy distribution . '' from mathworld a wolfram web resource . ` http://mathworld.wolfram.com/levydistribution.html ` | a veritable zoo of different knots is seen in the ensemble of looped polymer chains , whether created computationally or observed in vitro . at short loop lengths , the spectrum of knots is dominated by the trivial knot ( unknot ) . the fractional abundance of this topological state in the ensemble of all conformations of the loop of segments follows a decaying exponential form , , where marks the crossover from a mostly unknotted ( ie topologically simple ) to a mostly knotted ( ie topologically complex ) ensemble . in the present work we use computational simulation to look closer into the variation of for a variety of polymer models . among models examined , is smallest ( about ) for the model with all segments of the same length , it is somewhat larger ( ) for gaussian distributed segments , and can be very large ( up to many thousands ) when the segment length distribution has a fat power law tail . |
have been extensively studied in the research community over the past few decades .the analysis of clouds and their features is important for a wide variety of applications .for example , it has been used for nowcasting to deliver accurate weather forecasts , rainfall and satellite precipitation estimates , in the study of contrails , and various other day - to - day meteorological applications .yuan et al .have been investigating the clouds vertical structure and cloud attenuation for optimizing satellite links .sky / cloud imaging can be performed in different ways .satellite imagery and aerial photographs are popular in particular for large - scale surveys ; airborne light detection and ranging ( lidar ) data are extensively used for aerial surveys . however, these techniques rarely provide sufficient temporal and/or spatial resolution for localized and short - term cloud analysis over a particular area .this is where ground - based whole sky imagers ( wsis ) offer a compelling alternative .the images obtained from these devices provide high - resolution data about local cloud formation , movement , and other atmospheric phenomena .segmentation is one of the first steps in sky / cloud image analysis .it remains a challenging task because of the non - rigid , feature - less , and poorly - defined structure of clouds , whose shape also changes continuously over time .thus , classical image segmentation approaches based on shape priors are not suitable .furthermore , the wide range of lighting conditions ( direct sunlight to completely covered skies ) adds to the difficulty . as color is the most discriminating feature in sky/ cloud images , most works in the literature use color for cloud segmentation .long et al . showed that the ratio of red and blue channels from rgb color space is a good candidate for segmentation and tuned corresponding thresholds to create binary masks .heinle et al . exploited the difference of red and blue channels for successful detection and subsequent labeling of pixels .liu et al . also used the difference of red and blue channels in their superpixel - based cloud segmentation framework .souza et al . used the saturation ( s ) channel for calculating cloud coverage .mantelli - neto et al . investigated the locus of cloud pixels in the rgb color model .li et al . proposed cloud detection using an adaptive threshold technique in the normalized blue / red channel .yuan et al . proposed a cloud detection framework using superpixel classification of image features . 
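to fix ideas , the following sketch ( illustrative only , and not the code of any of the works cited above ) computes a few of the colour components that recur in this literature , such as the red / blue ratio and the blue - red difference , and shows how a supervised partial least squares ( pls ) regression of the kind proposed later in this paper can map such per - pixel features to a cloud score . the particular feature set , the number of pls components and the 0.5 threshold are assumptions made for the example .

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def color_features(img_rgb):
    """per-pixel colour components often used for sky/cloud discrimination:
    r, g, b, the ratio r/b, the difference b - r and (b - r)/(b + r)."""
    px = img_rgb.reshape(-1, 3).astype(float)
    r, g, b = px[:, 0], px[:, 1], px[:, 2]
    eps = 1e-6
    return np.column_stack([r, g, b, r / (b + eps), b - r, (b - r) / (b + r + eps)])

def train_pls(images, masks, n_components=2):
    """fit a pls regression from colour features to the binary ground truth
    (1 = cloud, 0 = sky); the regression output acts as a per-pixel degree of
    belongingness to the cloud class."""
    X = np.vstack([color_features(im) for im in images])
    y = np.concatenate([m.reshape(-1).astype(float) for m in masks])
    return PLSRegression(n_components=n_components).fit(X, y)

def segment(model, img_rgb, threshold=0.5):
    score = model.predict(color_features(img_rgb)).reshape(img_rgb.shape[:2])
    return score, score > threshold    # soft map and a hard mask
```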
in these existing methods in the literature for cloud segmentation ,the selection of color models and channels has not been studied systematically .many existing approaches use combinations of red and blue channels , which is a sensible choice , because the sky is predominantly blue due to the rayleigh scattering of light at shorter wavelengths .however , we are not aware of any experimental analysis presented regarding the efficacy of these color channels in sky / cloud image segmentation .furthermore , all of the above methods rely on manually - defined parameters and case - based decisions for segmentation .these make the methods somewhat ad - hoc and prone to errors .finally , most of them assign binary labels by design , which further reduces their flexibility and robustness .the motivation of this paper is to propose a robust framework for color - based cloud segmentation under any illumination conditions , including a systematic analysis of color channels .the framework is based on partial least squares ( pls ) regression and provides a straightforward , parameter - free supervised segmentation method .we show that our approach is robust and offers a superior performance across two different databases as compared to current state - of - the - art algorithms .furthermore , it allows annotating each pixel with a degree of _ belongingness _ to the sky or cloud category , instead of the usual binary labeling . in our previous work , we presented an analysis of color channels for sky / cloud images captured by whole - sky cameras , which is an important pre - requisite for better segmentation .the fuzzy c - means clustering method we used in that work however suffers from similar shortcomings as other existing cloud segmentation methods .the main novel contributions of the present manuscript compared to our earlier work include : * introduction of a large public sky / cloud image database with segmentation masks ; * extensive evaluation of color components and selection of appropriate color channels on two different sky / cloud image databases ; * robust learning - based framework for sky / cloud segmentation that outperforms existing methods .the rest of this paper is organized as follows .section [ sec : color - spaces ] introduces the color spaces under consideration and describes the statistical tools used for subsequent evaluation .section [ sec : prob - segment ] discusses the supervised probabilistic segmentation framework .the sky / cloud databases used for evaluation , including our new swimseg database , are presented in section [ sec : database ] .an exhaustive analysis of color channels is performed in section [ sec : results ] .section [ sec : result - segment ] presents the experimental evaluation of the segmentation framework , followed by a discussion of the results in section [ sec : discussion ] .section [ sec : conc ] concludes the paper .in this section , we describe the color models and channels we consider in this paper and present the statistical tools for evaluating their usefulness in sky / cloud image analysis .specifically , we use principal component analysis ( pca ) to check the degree of correlation between the color channels and to identify those that capture the most variance .loading factors from the primary principal component as well as the receiver operating characteristic ( roc ) curve for a simple thresholding applied directly to the color values of a given channel serve as indicators of a channel s suitability for cloud classification .we consider a 
set of color channels and components ( see table [ ps ] ) .they comprise color spaces rgb , hsv , yiq , , different red - blue combinations ( , , ) , and chroma ..color spaces and components used for analysis .[ cols="^,^,^,^,^,^,^,^,^,^,^,^ " , ]there are three primary advantages of our proposed approach compared to other cloud detection algorithms .first , our cloud segmentation framework is not based on any pre - defined assumptions about color spaces and does not place any restrictions on the type of input images .we systematically compare different color channels and identify the most suitable ones .we also explain the reason for their better performance based on rigorous statistical evaluations in two datasets .second , many existing cloud segmentation algorithms rely on a set of thresholds , conditions , and/or parameters that are manually defined for a particular sky / cloud image database .our proposed cloud segmentation approach is entirely learning - based and thus provides a systematic solution to training for a given database .third , conventional algorithms provide a binary output image from the input sky / cloud image .although these binary images are informative in most cases , they lack flexibility and robustness .we have no indication of the effectiveness of the thresholding of the input image . in reality , because of the nature of clouds and cloud images , it is undoubtedly better to employ a _soft _ thresholding approach .our proposed approach achieves a probabilistic classification of cloud pixels .this is very informative as it provides a general sense of _ belongingness _ of pixels to the cloud class .however , as only binary ground - truth images are available , we convert these probability maps into binary images for performing a quantitative evaluation of our algorithm . in our future work, we plan to extend this analysis by creating _probabilistic ground truth _ images , where the ground truth of the input images is generated by aggregating annotations from multiple experts . finally , from the extensive experiments on the two datasets ( cf .table [ scoretable ] ) , we observe that the performance with images from the swimseg database is generally better than for hyta , even though the behavior of color channels is similar in both ( i.e. the same color channels do well for both swimseg and hyta ) .we believe this is because the images in swimseg were captured with a camera that has been calibrated for color , illumination , and geometry .naturally , many challenges remain , for example : how do different weather conditions affect the classification performance ?the weather in singapore is relatively constant in terms of temperature and humidity , with little variation throughout the year .usually it is either partly cloudy , or rainy ( in which case the sky will be completely overcast , making segmentation unnecessary ) . as a result , our database is not suitable for investigating this question .completely overcast conditions can be dealt with by simple pre - processing of the image before segmentation , e.g. 
by calculating the number of clusters , as we have done in our previous work .we have presented a systematic analysis of color spaces and components , and proposed a probabilistic approach using pls - based regression for the segmentation of ground - based sky / cloud images .our approach is entirely learning - based and does not require any manually - defined thresholds , conditions , or parameters at any stage of the algorithm .we also release an extensive sky / cloud image database captured with a calibrated ground - based camera that has been annotated with ground - truth segmentation masks .our future work will include the annotation of a database with probabilistic ground - truth segmentation maps as well as the extension of this method to high - dynamic - range ( hdr ) images .going beyond segmentation , it is also important to classify clouds into different types or estimate cloud altitude and movement , which are both part of our current research .this work is supported by a grant from singapore s defence science & technology agency ( dsta ) . c. papin , p. bouthemy , and g. rochard , `` unsupervised segmentation of low clouds from infrared meteosat images based on a contextual spatio - temporal labeling approach , '' , vol .1 , pp . 104114 , jan .m. mahrooghy , n. h. younan , v. g. anantharaj , j. aanstoos , and s. yarahmadian , `` on the use of a cluster ensemble cloud classification technique in satellite precipitation estimation , '' , vol . 5 , no .5 , pp . 13561363 , oct . 2012 .f. yuan , y. h. lee , and y. s. meng , `` comparison of cloud models for propagation studies in ka - band satellite applications , '' in _ proc .ieee international symposium on antennas and propagation _ , 2014 , pp .383384 .f. yuan , y. h. lee , and y. s. meng , `` comparison of radio - sounding profiles for cloud attenuation analysis in the tropical region , '' in _ proc .ieee international symposium on antennas and propagation _ , 2014 , pp .259260 .t. shiraishi , t. motohka , r. b. thapa , m. watanabe , and m. shimada , `` comparative assessment of supervised classifiers for land use - land cover classification in a tropical region using time - series palsar mosaic data , '' , vol .4 , pp . 11861199 , april 2014 .y. chen , l. cheng , m. li , j. wang , l. tong , and k. yang , `` multiscale grid method for detection and reconstruction of building roofs from airborne lidar data , '' , vol .40814094 , oct .2014 .m. p. souza - echer , e. b. pereira , l. s. bins , and m. a. r. andrade , `` a simple method for the assessment of the cloud cover state in high - latitude regions by a ground - based digital camera , '' , vol .437447 , march 2006 .s. l. mantelli - neto , a. von wangenheim , e. b. pereira , and e. comunello , `` the use of euclidean geometric distance on rgb color space for the classification of sky and cloud patterns , '' , vol .9 , pp . 15041517 , sept . 2010 .s. dev , y. h. lee , and s. winkler , `` systematic study of color spaces and components for the segmentation of sky / cloud images , '' in _ proc .ieee international conference on image processing ( icip ) _ , 2014 , pp .51025106 .m. ester , h .-kriegel , j. sander , and x. xu , `` a density - based algorithm for discovering clusters in large spatial databases with noise , '' in _ proc .2nd international conference on knowledge discovery and data mining _, 1996 , pp . 226231 .f. m. savoy , j. lemaitre , s. dev , y. h. lee , and s. 
winkler , `` cloud - base height estimation using high - resolution whole sky imagers , '' in _ proc .ieee international geoscience and remote sensing symposium ( igarss ) _ , 2015 , pp .16221625 .[ ] soumyabrata dev ( s09 ) graduated summa cum laude from national institute of technology silchar , india with a b.tech . in electronics and communication engineering in 2010 .subsequently , he worked in ericsson as a network engineer from 2010 to 2012 .currently , he is pursuing a ph.d .degree in the school of electrical and electronic engineering , nanyang technological university , singapore . from aug - dec 2015 , he was a visiting student at audiovisual communication laboratory ( lcav ) , cole polytechnique fdrale de lausanne ( epfl ) , switzerland .his research interests include remote sensing , statistical image processing and machine learning . [ ] yee hui lee ( s96-m02-sm11 ) received the b.eng .( hons . ) and m.eng .degrees from the school of electrical and electronics engineering at nanyang technological university , singapore , in 1996 and 1998 , respectively , and the ph.d .degree from the university of york , uk , in 2002 .lee is currently associate professor and assistant chair ( students ) at the school of electrical and electronic engineering , nanyang technological university , where she has been a faculty member since 2002 .her interests are channel characterization , rain propagation , antenna design , electromagnetic bandgap structures , and evolutionary techniques .[ ] stefan winkler is distinguished scientist and director of the video & analytics program at the advanced digital sciences center ( adsc ) , a joint research center between a*star and the university of illinois .prior to that , he co - founded genista , worked for silicon valley companies , and held faculty positions at the national university of singapore and the university of lausanne , switzerland .winkler has a ph.d .degree from the cole polytechnique fdrale de lausanne ( epfl ) , switzerland , and an m.eng./b.eng .degree from the university of technology vienna , austria .he has published over 100 papers and the book `` digital video quality '' ( wiley ) .he is an associate editor of the ieee transactions on image processing , a member of the ivmsp technical committee of the ieee signal processing society , and chair of the ieee singapore signal processing chapter .his research interests include video processing , computer vision , perception , and human - computer interaction . | sky / cloud images captured by ground - based cameras ( a.k.a . whole sky imagers ) are increasingly used nowadays because of their applications in a number of fields , including climate modeling , weather prediction , renewable energy generation , and satellite communications . due to the wide variety of cloud types and lighting conditions in such images , accurate and robust segmentation of clouds is challenging . in this paper , we present a supervised segmentation framework for ground - based sky / cloud images based on a systematic analysis of different color spaces and components , using partial least squares ( pls ) regression . unlike other state - of - the - art methods , our proposed approach is entirely learning - based and does not require any manually - defined parameters . in addition , we release the * * s**ingapore * * w**hole sky * * im**aging * * seg**mentation database ( swimseg ) , a large database of annotated sky / cloud images , to the research community . 
* keywords : * cloud segmentation , whole sky imager , partial least squares regression , swimseg database |
electrical impedance tomography ( eit ) is a recently developed non - invasive imaging technique , where the inner structure of a reference object can be recovered from the current and voltage measurements on the object s surface .it is fast , inexpensive , portable and requires no ionizing radiation . for these reasons, eit qualifies for continuous real time visualization right at the bedside . in clinical eit applications ,the reconstructed images are usually obtained by minimizing the linearized - data - fit residuum .these algorithms are fast and simple . however , to the best of the authors knowledge , there is no rigorous global convergence results that have been proved so far .moreover , the reconstructed images usually tend to contain ringing artifacts .recently , seo and one of the author have shown in that a single linearized step can give the correct shape of the conductivity contrast .this result raises a question that whether to regularize the linearized - data - fit functional such that the corresponding minimizer yields a good approximation of the conductivity contrast .an affirmative answer has been proved in for the continuum boundary data . in the present paper, we shall apply this new algorithm to the real electrode setting and test with standard phantom experiment data .numerical results later on show that this new algorithm helps to improve the quality of the reconstructed images as well as reduce the ringing artifacts .it is worth to mention that our new algorithm is non - iterative , hence , it does not depend on an initial guess and does not require expensive computation .other non - iterative algorithms , for example , the factorization method and the monotonicity - based method , on the other hand , are much more sensitive to measurement errors than our new algorithm when phantom data or real data are applied .the paper is organized as follows . in section [ sec : setting ] we introduce the mathematical setting , describe how the measured data can be collected and set up a link between the mathematical setting and the measured data .section [ sec : algorithm ] presents our new algorithm and the numerical results were shown in section [ sec : num ] .we conclude this paper with a brief discussion in sectionlet describe the imaging subject and be the unknown conductivity distribution inside .we assume that is a bounded domain with smooth boundary and that the function is real - valued , strictly positive and bounded .electrical impedance tomography ( eit ) aims at recovering using voltage and current measurements on the boundary of .there are several ways to inject currents and measure voltages .we shall follow the _ neighboring method _ ( aka adjacent method ) which was suggested by brown and segar in 1987 and is still widely being used by practitioners . in this method ,electrodes are attached on the object s surface , and an electrical current is applied through a pair of adjacent electrodes whilst the voltage is measured on all other pairs of adjacent electrodes excluding those pairs containing at least one electrode with injected current .figure [ fig : m1 ] illustrates the first and second current patterns for a -electrode eit system . at the first current pattern ( figure [ fig : m1]a ) ,small currents of intensity and are applied through electrodes and respectively , and the voltage differences are measured successively on electrode pairs . 
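the neighbouring protocol just described can also be written down explicitly . the following sketch ( with zero - based electrode indices , an assumption of the example ) enumerates , for each adjacent injection pair , the adjacent electrode pairs on which voltages are measured .

```python
def adjacent_protocol(K=16):
    """neighbouring (adjacent) stimulation/measurement pattern: at the j-th
    current pattern the current is driven through electrodes j and j+1 (mod K),
    and voltages are measured on every adjacent pair (k, k+1) that contains
    no driven electrode."""
    patterns = []
    for j in range(K):
        drive = {j, (j + 1) % K}
        meas = [(k, (k + 1) % K) for k in range(K)
                if not ({k, (k + 1) % K} & drive)]
        patterns.append(((j, (j + 1) % K), meas))
    return patterns

pats = adjacent_protocol(16)
print(len(pats), len(pats[0][1]))   # 16 injection patterns, 13 measured pairs each
```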
in general , for a -electrode eit system , at the -th current pattern , by injecting currents and to electrodes and respectively , one gets voltage measurements , where and . note that here and throughout the paper , the electrode index is always considered modulo , i.e. the index also refers to the first electrode , etc .

[ figure [ fig : m1 ] : sketch of the first and second adjacent current patterns for a 16 - electrode eit system ; tikz source omitted . ]

assuming that the electrodes are relatively open and connected subsets of , that they are perfectly conducting and that contact impedances are negligible , the resulting electric potential at the -th current pattern obeys the following mathematical model ( the so - called _ shunt model _ ) : here is the unit normal vector on pointing outward and describes the -th applied current pattern where a current of strength is driven through the -th and -th electrode . notice that satisfy the conservation of charge , and that the electric potential is uniquely determined by ( [ eqn : shunt ] ) only up to the addition of a constant . the voltage measurements are given by . the herein used shunt model ignores the effect of contact impedances between the electrodes and the imaging domain . this is only valid when voltages are measured on small ( see ) and current - free electrodes , so that ( [ eqn : shunt ] ) correctly models only the measurements with . for difference measurements , the missing elements with , on the other hand , can be calculated by interpolation taking into account reciprocity , conservation of voltages and the geometry - specific smoothness of difference eit data , cf . .
for an imaging subject with unknown conductivity , one thus obtains a full matrix of measurements .in difference eit , the measurements are compared with measurements for some reference conductivity distribution in order to reconstruct the conductivity difference .this is usually done by a single linearization step where is the frchet derivative of the voltage measurements we discretize the reference domain into disjoint open pixels and make the piecewise - constant ansatz this approach leads to the linear equation where and the columns of the _ sensitivity matrix _ contain the entries of the measurements and the discretized frchet derivative , resp ., written as long vectors , i.e. , most practically used eit algorithms are based on solving a regularized variant of ( [ eqn : one_step_linearized ] ) to obtain an approximation to the conductivity difference .the popular algorithms noser and greit use ( generalized ) tikhonov regularization and minimize with ( heuristically chosen ) weighted euclidian norms and in the residuum and penalty term .it has been shown in that shape information in eit is invariant under linearization .thus one - step linearization methods are principally capable of reconstructing the correct ( outer ) support of the conductivity difference even though they ignore the non - linearity of the eit measurement process . in the authors developed a monotonicity - based regularization method for the linearized eit equation for which ( in the continuum model ) it can be guaranteed that the regularized solutions converge against a function that shows the correct outer shape . in this section ,we formulate and analyze this new method for real electrode measurements , and in the next section we will apply it to real data from a phantom experiment and compare it with the greit method .the main idea of monotonicity - based regularization is to minimize the residual of the linearized equation ( [ eqn : one_step_linearized ] ) with constraints on the entries of that are obtained from monotonicity tests . for the following ,we assume that the background is homogeneous and that all anomalies are more conductive , or all anomalies are less conductive than the background , i.e. , is constant , and either is an open set denoting the conductivity anomalies , and is the contrast of the anomalies .we furthermore assume that we are given a lower bound of the anomaly contrast , i.e. . for the monotonicity tests it is crucial to consider the measurements and the columns of the sensitivity matrix as matrices and compare them in terms of matrix definiteness , cf . for the origins of this sensitivity matrix based approach .let denote the eit difference measurements written as -matrix , and denote the -th column of the sensitivity matrix written as -matrix , i.e. the -th entry of is given by we then define for each pixel where denotes the matrix absolute value of , and the comparison is to be understood in the sense of matrix definiteness , i.e. holds if and only if all eigenvalues of are non - negative .following we then solve the linearized eit equation ( [ eqn : one_step_linearized ] ) using the monotonicity constraints .we minimize the euclidean norm of the residuum under the constraints that 1 .in the case : , and 2 . in the case : . where , and .for noisy data with this approach can be regularized by replacing with where is the identity matrix . for the implementation of see section [ sec : num ] . 
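the monotonicity constraints above rest on comparisons of symmetric matrices in the sense of definiteness. since the precise inequality defining the constraint bound was lost in the extracted text, the sketch below (python, our own naming) only illustrates the generic building blocks: the matrix absolute value, a positive-semidefiniteness test via eigenvalues, and a bisection that finds the largest scalar for which a caller-supplied definiteness test still holds. the example test at the bottom is purely illustrative and not necessarily the paper's exact definition.

```python
import numpy as np

def matrix_abs(A):
    """matrix absolute value |A| = V |Lambda| V^T of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.abs(w)) @ V.T

def is_psd(A, tol=1e-10):
    """A >= 0 in the sense of matrix definiteness: all eigenvalues non-negative."""
    return np.linalg.eigvalsh((A + A.T) / 2).min() >= -tol

def largest_feasible_beta(test, beta_max=1e3, iters=60):
    """bisection for the largest beta in [0, beta_max] with test(beta) true.

    `test` encodes a pixel-wise definiteness comparison between the measurement
    matrix and the k-th sensitivity matrix; the exact inequality is left as a
    caller-supplied callable, since the symbols were stripped from the text.
    """
    if not test(0.0):
        return 0.0
    lo, hi = 0.0, beta_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if test(mid) else (lo, mid)
    return lo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    DU = rng.standard_normal((32, 32)); DU = (DU + DU.T) / 2   # toy "measurement" matrix
    Sk = rng.standard_normal((32, 32)); Sk = (Sk + Sk.T) / 2   # toy sensitivity matrix
    absDU = matrix_abs(DU)
    # illustrative definiteness test only: |DU| - beta * Sk >= 0
    beta_k = largest_feasible_beta(lambda b: is_psd(absDU - b * Sk))
    print("beta_k (toy example):", beta_k)
```

tests of this kind supply the pixel-wise bounds that enter the constrained minimization described above.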
for the continuum model , and under the assumption that has connected complement , the authors showed that for exact data this monotonicity - constrained minimization of the linearized eit residuum admits a unique solution and that the support of the solution agrees with the anomalies support up to the pixel partition .moreover , also shows that for noisy data and using the regularized constraints , minimizers exist and that , for , they converge to the minimizer with the correct support . since practical electrode measurements can be regarded as an approximation to the continuum model , we therefore expect that the above approach will also well approximate the anomaly support for real electrode data . in the continuum model , the constraints will be zero outside the support of the anomaly and positive for each pixel inside the anomaly .the first property relies on the existence of localized potentials and is only true in the limit of infinitely many , infinitely small electrodes .the latter property is however true for any number of electrodes as the following result shows : [ thm : beta_k ] if , then a. in the case the constraint fulfills , and b. in the case the constraint fulfills . if and then and if and then hence , it suffices to show that holds for all that fulfill a. , or b. .we use the following monotonicity relation from ( * ? ? ?* lemma 3.1 ) ( see also for the origin of this estimate ) : for any vector we have that with . if , then which shows that . if , then which shows that .in this section , we will test our algorithm on the data set ` iirc``data``2006 ` measured by professor eungje woo s eit research group in korea . `iirc ` stands for impedance imaging research center .the data set ` iirc``data``2006 ` is publicly available as part of the open source software framework eidors ( electrical impedance and diffused optical reconstruction software ) .since ` iirc``data``2006 ` is also frequently used in the eidors tutorials , we believe that this is a good benchmark example to test our new algorithm . the data set ` iirc``data``2006 `was collected using the 16-electrode eit system khu mark1 ( see for more information of this system ) .the reference object was a plexiglas tank filled with saline .the tank was a cylinder of diameter 0.2 m with 0.01 m diameter round electrodes attached on its boundary .saline was filled to about 0.06 m depth . inside the tank ,one put a plexiglas rod of diameter 0.02 m .the conductivity of the saline was 0.15 s / m and the plexiglas rod was basically non - conductive .data acquisition protocol was adjacent stimulation , adjacent measurement with data acquired on all electrodes . the data set ` iirc``data``2006 `contains the voltage measurements for both homogeneous and non - homogeneous cases .measurements for the homogeneous case were obtained when the plexiglas rod was taken away ( reference conductivity in this case is 0.15 s / m ) . in the non - homogeneous case , different voltage measurements were measured corresponding to different positions of the plexiglas rod .eidors ( electrical impedance and diffused optical reconstruction software ) is an open source software that is widely used to reconstruct images in electrical impedance tomography and diffuse optical tomography . to reconstruct images with eidors ,one first needs to build an eidors model that fits with the measured data . 
in this paper, we use the same eidors model described in the eidors tutorial web-page: ` http://eidors3d.sourceforge.net/tutorial/eidors_basics/tutorial110.shtml ` figure [ fig:3 ] shows the reconstructed images of the inhomogeneous measurements with different regularization parameters using the eidors built-in command ` inv\_solve `, which follows the algorithm proposed in. we emphasize that figure [ fig:3b ] (with the regularization parameter chosen by default) was already considered in the eidors tutorial web-page; we show it here again in order to compare it easily with the reconstructed images obtained by our new method later on. in the eidors model suggested in the eidors tutorial web-page, the reference body is chosen by default as a disk of diameter m and the default reference conductivity is s/m. however, in the experiment setting, the reference body was a cylinder of diameter m and the reference conductivity was s/m. hence, an appropriate scaling factor should be applied to the measurements to make sure that the eidors model fits these measurements. in the eidors tutorial web-page, the measurements were scaled by multiplying by a factor. in this paper, to increase the precision of the model, we find the best scaling factor, namely the one that minimizes the error between the measured data and the data generated by the eidors model. more precisely, calling ` vh ` the measured data for the homogeneous case and ` vh\_model ` the homogeneous data generated by the eidors model, the best scaling factor is a minimizer of the following problem. for this experiment setting, the best factor is. from now on, by measured data we always refer to the scaled measured data with respect to this best factor. the next step is to recover the missing measurements on the driving electrodes. we follow the result in to obtain an approximation of these missing measurements using interpolation. now we are in a position to minimize the problem ( [ eq : min_lin_res ] ) under the linear constraint ( c1 ) or ( c2 ). to do this, we need to clarify and in the linear constraints. after scaling, the reference conductivity is s/m, and still denotes the plexiglas rod with conductivity s/m. thus,, and is calculated using ( [ betak ] ). in practice, there is no way to obtain the exact value of the matrix in ( [ betak ] ). indeed, what we know is only the measured data, where denotes the noise level. when replacing by the noisy version, it may happen that there is no such that the matrix is still positive semi-definite. therefore, instead of using ( [ betak ] ), we calculate from. here, represents the identity matrix, and is chosen as the absolute value of the smallest eigenvalue of. notice that, in the presence of noise, plays the role of the positive semi-definite matrix. we follow the argument in to calculate. let be the lower triangular cholesky decomposition matrix of, and let be the smallest eigenvalue of the matrix.
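two small computational steps of this passage can be sketched in code: the best scaling factor (read here as an ordinary least-squares fit of the measured homogeneous data to the model data, since the exact objective was stripped from the extracted text) and the shift of a noisy symmetric measurement matrix by the absolute value of its smallest eigenvalue, which makes it positive semi-definite and hence admits the cholesky factorization used in the argument that follows. the python below is a hedged illustration with our own function names, not the authors' implementation.

```python
import numpy as np

def best_scale(vh_measured, vh_model):
    """least-squares scalar s minimizing ||s * vh_measured - vh_model||_2.

    closed form: s = <vh_measured, vh_model> / <vh_measured, vh_measured>.
    (the exact objective in the paper is stripped from the extracted text;
    this is the natural least-squares reading of it.)
    """
    vh_measured = np.asarray(vh_measured, float).ravel()
    vh_model = np.asarray(vh_model, float).ravel()
    return float(vh_measured @ vh_model) / float(vh_measured @ vh_measured)

def psd_shift_and_cholesky(M_noisy):
    """regularize a noisy symmetric matrix so it becomes positive semi-definite.

    delta is the absolute value of the smallest eigenvalue of the symmetrized
    matrix; adding delta times the identity (plus a tiny jitter for numerical
    safety) yields a positive semi-definite matrix with a cholesky factor.
    """
    S = (M_noisy + M_noisy.T) / 2
    delta = abs(np.linalg.eigvalsh(S).min())
    shifted = S + (delta + 1e-8) * np.eye(S.shape[0])
    L = np.linalg.cholesky(shifted)
    return shifted, delta, L

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vh_model = rng.standard_normal(208)
    vh = 1.25 * vh_model + 0.01 * rng.standard_normal(208)   # synthetic "measurement"
    print("recovered scale:", best_scale(vh, vh_model))       # close to 1/1.25 = 0.8
    M = rng.standard_normal((16, 16))
    shifted, delta, L = psd_shift_and_cholesky(M)
    print("shift delta:", delta, "min eig after shift:",
          np.linalg.eigvalsh(shifted).min())
```

the cholesky factor and the smallest eigenvalue computed here are the quantities used in the eigenvalue argument continued below.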
since is negative semi-definite, so is; thus,. arguing in the same manner as in, we get. the minimizer of ( [ eq : min_lin_res ] ) is then obtained using two different approaches: one employs ` cvx ` (figure [ fig:4a ]), a package for specifying and solving convex programs, and the other (figure [ fig:4b ]) uses the ` matlab ` built-in function ` quadprog ` (` trust-region-reflective ` algorithm). we also show the reconstructed result using the built-in function ` inv\_solve ` of eidors (figure [ fig:4c ]) with the default regularization parameter, and with the greit algorithm (figure [ fig:4d ]), to show that scaling the measured data with the best scaling factor slightly improves the reconstructed image. notice that the reconstructed images are strongly affected by the choice of the minimization algorithm, and we will see from figure [ fig:4 ] that the images obtained by ` cvx ` have fewer artifacts than the others. it is worth emphasizing that, although each eidors model is assigned a default regularization parameter, when using the eidors built-in function ` inv\_solve ` one has to choose a regularization parameter manually in order to obtain a good reconstruction (figure [ fig:3 ]), whilst the regularization parameters and in our method are known a priori, provided information on the anomaly conductivity and the reference conductivity is available. besides, if we choose the parameters manually, we obtain even better reconstructed images (figure [ fig:5 ]). runtime comparison (in seconds): ` cvx ` 839.3892; ` quadprog ` (` trust-region-reflective `) 5.4467; eidors (` inv\_solve `) 0.0231; greit 0.0120. last but not least, our new method proves its advantage when there is more than one inclusion (figure [ fig:6 ]). in this paper, we have presented a new algorithm to reconstruct images in eit in the real electrode setting. numerical results show that this new algorithm helps to reduce the ringing artifacts in the reconstructed images. a global convergence result for this algorithm has been proved in for the continuum model. in future work, we shall prove a global convergence result for the shunt model setting as well as reduce the runtime to fit real-time applications. the authors thank professor eungje woo's eit research group for the ` iirc ` phantom data set. mnm thanks the academy of finland (finnish center of excellence in inverse problems research) for financial support of project number 273979. during part of the preparation of this work, mnm worked at the department of mathematics of the goethe university frankfurt, germany. a. adler, j. h. arnold, r. bayford, a. borsic, b. brown, p. dixon, t. j. faes, i. frerichs, h. gagnon, y. gärber, et al.: a unified approach to 2d linear eit reconstruction of lung images. physiological measurement, 30(6):s35, 2009. m. grant and s. boyd. graph implementations for nonsmooth convex programs. in v. blondel, s. boyd, and h. kimura, editors, _ recent advances in learning and control _, lecture notes in control and information sciences, pages 95-110. springer-verlag limited, 2008. | in electrical impedance tomography, algorithms based on minimizing the linearized-data-fit residuum have been widely used due to their real-time implementation and satisfactory reconstructed images. however, the resulting images usually tend to contain ringing artifacts.
in this work, we minimize the linearized-data-fit functional subject to a linear constraint defined by the monotonicity relation in the framework of the real electrode setting. numerical results on standard phantom experiment data confirm that this new algorithm improves the quality of the reconstructed images as well as reduces the ringing artifacts. |
the following attributes all make an optimization problem more difficult : having an objective function with an unknown and possibly large number of local minima , being constrained , having nonlinear constraints , having inequality constraints , having both discrete and continuous variables .unfortunately , faithfully modeling an application tends to introduce many of these attributes . as a result, optimization problems are usually linearized , discretized , relaxed , or otherwise modified to make them feasible according to conventional methods .one of the most exciting prospects of constraint programming is that such difficult optimization problems can be solved without these possibly invalidating modifications .moreover , constraint programming solutions are of known quality : they yield intervals guaranteed to contain all solutions .equally important , constraint programming can prove the absence of solutions . in this paper we only consider the core of the constraint programming approach to optimization , which is to solve a system of nonlinear inequalities : is understood that it may happen that for some pairs and , so that equalities are a special case .if this occurs , then certain obvious optimizations are possible in the methods described here .the ability to solve systems such as ( [ nonlinsys ] ) supports optimization in more ways than one . in the first place, these systems occur as conditions in some constrained optimized problems .moreover , one of could be defined as , where is the objective function and where is a constant . by repeatedly solving such a system for suitably chosen , one can find the greatest value of for which ( [ nonlinsys ] ) is found to have no solution .that value is a lower bound for the global minimum .this approach handles nonlinear inequalities with real variables .it also allows some or all variables to be integer by regarding integrality as a constraint on a real variable .all constraint programming work in this direction has been based on interval arithmetic .the earliest work used a generic propagation algorithm based directly on domain reduction operators for primitive arithmetic constraints .these constraints included defined as for all reals , , and .also included was defined as for all reals , , and .this was criticized in which advocated the use of composite arithmetic expression directly rather than reducing them to primitive arithmetic constraints . in was acknowledged that the generic propagation algorithm is not satisfactory for csps that derive from composite arithmetic expressions .these papers describe propagation algorithms that exploit the structure of such expressions and thereby improve on what is attainable by evaluating such expressions in interval arithmetic .selective initialization was first described in .this was done under the tacit assumption that all default domains are ] is a convenient notation for all non - empty intervals , bounded or not .a _ box _ is a cartesian product of intervals .moore s idea of solving inequalities such as those in equation ( [ nonlinsys ] ) by means of interval arithmetic is at least as important as the subsequent applications of interval constraints to this problem .suppose we wish to investigate the presence of solutions of a single inequality in equation ( [ nonlinsys ] ) in a box .then one evaluates in interval arithmetic the expression in the left - hand side . 
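the optimization-by-infeasibility idea mentioned above — append the inequality "objective value below a threshold" to the system and look for the greatest threshold at which the augmented system provably has no solution — can be made concrete with a small bisection sketch. the infeasibility oracle stands for whatever certified test is used (interval evaluation or constraint propagation); the python below and its toy oracle are our own illustration, not part of the paper.

```python
def lower_bound_on_minimum(proves_infeasible, lo, hi, tol=1e-9):
    """bisection for the greatest threshold phi such that the constraint
    system augmented with  f(x) <= phi  can be *proved* to have no solution.

    `proves_infeasible(phi)` must return True only when infeasibility of the
    augmented system is certain (e.g. established by interval evaluation or
    propagation); any such phi is a valid lower bound on the global minimum
    of f over the feasible set.  assumes the test succeeds at `lo` and fails
    at `hi`, and that a successful proof at phi also succeeds at smaller phi.
    """
    assert proves_infeasible(lo) and not proves_infeasible(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if proves_infeasible(mid):
            lo = mid          # still provably no solution: tighten the bound
        else:
            hi = mid          # proof fails: back off
    return lo

if __name__ == "__main__":
    # toy stand-in oracle: minimize f(x) = x**2 + 1; interval reasoning can
    # prove f(x) <= phi impossible exactly when phi < 1.
    oracle = lambda phi: phi < 1.0
    print(lower_bound_on_minimum(oracle, -10.0, 10.0))   # approaches 1.0 from below
```

with this use of certified infeasibility tests in mind, we return to the interval evaluation of the left-hand side described above.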
as values for the variables one uses the intervals .suppose the result is the interval ] to the box that has the projections & \cap & ( [ e , f]/[c , d ] ) ) \nonumber\\ \varphi([c , d ] & \cap & ( [ e , f]/[a , b ] ) ) \nonumber\\ \varphi([e , f ] & \cap & ( [ a , b]*[c , d ] ) ) \nonumber\end{aligned}\ ] ] here is the function that yields the smallest interval containing its argument .of particular interest is the effect of the dro when all variables have ] .notable exceptions include the constraint ( defined as ) , where the default domain of is ] .a difference with other virtual machines is that a program for the icsp virtual machine is an unordered collection of dros .programs for other virtual machines are ordered sequences of instructions . in those other virtual machines, the typical instruction does not specify the successor instruction . by default this is taken to be the next one in textual order .execution of the successor is implemented by incrementing the instruction counter by one .the simplicity of the instruction sequencing in conventional virtual ( and actual ) machines is misleading .many instruction executions concern untypical instructions , where the next instruction is specified to be another than the default next instruction .examples of such untypical instructions are branches ( conditional or unconditional ) and subroutine jumps . in the icsp virtual machine , the dros are the instructions , and they form an unordered set . instead of an instruction counter specifying the next instruction , there is the active set of gpa containing the set of possible next instructions . instead of an instruction or a default rule determining the next instruction to be executed, gpa selects in an unspecified way which of the dros in the active set to execute . in this way, programs can be declarative : instructions have only meaning in terms of _ what _ is to be computed . _ how _ it is done ( instruction sequencing ) ,is the exclusive task of the virtual machine . equation ( [ nonlinsys ] ) may have multiple occurrences of variables in the same formula .as there are certain advantages in avoiding such occurrences , we rewrite without loss of generality the system in equation ( [ nonlinsys ] ) to the canonical form shown in figure [ singlesys ] . in figure [ singlesys ] , the expressions for the functions have no multiple occurrences of variables . as a result, they have variables instead of , with as in equation ( [ nonlinsys ] ) .this canonical form is obtained by associating with each of the variables in equation ( [ nonlinsys ] ) an equivalence class of the variables in figure [ singlesys ] .this is done by replacing in equation ( [ nonlinsys ] ) each occurrence of a variable by a different element of the corresponding equivalence class .this is possible by ensuring that each equivalence class is as large as the largest number of multiple occurrences .the predicate is true if and only if all its real - valued arguments are equal .an advantage of this translation is that evaluation in interval arithmetic of each expression gives the best possible result , namely the range of the function values . 
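as an illustration of the preceding discussion, the sketch below implements plain interval multiplication and the domain reduction operator for the primitive product constraint; division by an interval containing zero is handled crudely by returning the whole real line, which actual solvers treat with more care. it is a minimal python illustration with our own naming, not the paper's implementation.

```python
import math

def imul(a, b):
    """interval product [a] * [b]."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def idiv(a, b):
    """interval quotient [a] / [b]; returns the whole real line if 0 is in [b]
    (a deliberately crude choice for this sketch)."""
    if b[0] <= 0.0 <= b[1]:
        return (-math.inf, math.inf)
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return None if lo > hi else (lo, hi)      # None signals an empty domain

def dro_product(x, y, z):
    """domain reduction operator for the constraint x * y = z:
       x <- x ∩ (z / y),  y <- y ∩ (z / x),  z <- z ∩ (x * y)."""
    return (intersect(x, idiv(z, y)),
            intersect(y, idiv(z, x)),
            intersect(z, imul(x, y)))

if __name__ == "__main__":
    x, y, z = (1.0, 4.0), (2.0, 3.0), (0.0, 5.0)
    print(dro_product(x, y, z))
    # narrows to x = (1.0, 2.5), y = (2.0, 3.0), z = (2.0, 5.0)
```

besides the product constraint, each primitive relation (sum, power, the all-equal predicate of the canonical form, and so on) has its own analogous reduction operator.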
at the same time, the constraint is easy to enforce by making all intervals of the variables in the constraint equal to their common intersection .this takes information into account from all inequalities .if the system in its original form as in equation [ nonlinsys ] , with multiple occurrences , would be translated to a csp , then only multiple occurrences in a single expression would be exploited at one time . in the coming sections and without loss of generality, we will only consider expressions without multiple occurrences of variables .icsps represent what we _ can _ solve .they consist of atomic formulas without function symbols that , moreover , have efficient dros .equation ( [ nonlinsys ] ) exemplifies what we _ want _ to solve : it consists of atomic formulas typically containing deeply nested terms .[ [ the - tree - form - of - a - formula ] ] the tree form of a formula + + + + + + + + + + + + + + + + + + + + + + + + + + we regard a first - order atomic formula as a tree .the unique predicate symbol is the root .the terms that are the arguments of the formula are also trees and they are the subtrees of the root . if the term is a variable , then the tree only has a root , which is that variable .a term may also be a function symbol with one or more arguments , which are terms . in that case , the function symbol is the root with the argument terms as subtrees . in the tree form of a formulathe leaves are variables . in addition, we label every node that is a function symbol with a unique variable .any constants that may occur in the formula are replaced by unique variables .we ensure that the associated domains contain the constants and are as small as possible .[ [ translating - a - formula - to - an - icsp ] ] translating a formula to an icsp + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the tree form of a formula thus labeled is readily translated to an icsp .the translation has a set of constraints in which each element is obtained by translating an internal node of the tree .the root translates to where is the predicate symbol that is the root and are the variables labeling the children of the root .a non - root internal node of the form translates to , where * is the variable labeling the node * are the variables labeling the child nodes * is the relation defined by iff for all . propagation may terminate with small intervals for all variables of interest .this is rare .more likely , propagation leaves a large box as containing all solutions , if any . to obtain more information about possibly existing solutions ,it is necessary to split an icsp into two icsps and to apply propagation to both .an icsp is a result of splitting an icsp if has the same constraints as and differs only in the domain for one variable , say , .the domain for in is the left or right half of the domain for in .a _ search strategy _ for an icsp is a binary tree representing the result of successive splits .search strategies can differ greatly in the effort required to carry them to completion .the most obvious search strategy is the _ greedy strategy _ : the one that ensures that all intervals become small enough by choosing a widest domain as the one to be split .this is a plausible strategy in the case where the icsp has a few point solutions . in general ,the set of solutions is a continuum : a line segment , a piece of a surface , or a variety in a higher dimensional space that has positive volume . 
in such caseswe prefer the search to result in a single box containing all solutions .of course we also prefer such a box to be as small as possible .the greedy search strategy splits the continuum of solutions into an unmanageably large number of small boxes .it is not clear that the greedy strategy is preferable even in the case of a few well - isolated point solutions . in generalwe need a search strategy other than the greedy one .a more promising search strategy was first described in the ` absolve ` predicate of the bnr prolog system and by , where it is called _ box consistency_. the box consistency search strategy selects a variable and a domain bound .box consistency uses binary search to determine a boundary interval that can be shown to contain no solutions .this boundary interval can then be removed from the domain , thus shrinking the domain .this is repeated until a boundary interval with width less than a certain tolerance is found that can not be shown to contain no solutions .when this is the case for both boundaries of all variables , the domains are said to be _ box consistent _ with respect to the tolerance used and with respect to the method for showing inconsistency .when this method is interval arithmetic , we obtain _ functional box consistency_. when it is propagation , then it is called _ relational box consistency _ .all we need to know about search in this paper is that greedy search and box consistency are both search strategies and that both can be based on propagation .box consistency is the more promising search strategy .thus we need to compare interval arithmetic and propagation as ways of showing that a nonlinear inequality has no solutions in a given box .this we do in section [ psi ] .suppose we have a term that can be evaluated in interval arithmetic .let us compare the interval that is the result of such an evaluation with the effect of gpa on the icsp obtained by translating the term as described in section [ translation ] . to make the comparison possible we define evaluation of a term in interval arithmetic .the definition follows the recursive structure of the term : a term is either a variable or it is a function symbol with terms as arguments .if the term is an interval , then the result is that interval .if the argument is function applied to arguments , then the result is evaluated in interval arithmetic applied to the results of evaluating the arguments in interval arithmetic .this assumes that every function symbol denotes a function that is defined on reals as well as on intervals .the latter is called the _ interval extension _ of the former . for a full treatment of interval extensions , see .the following lemma appears substantially as theorem 2.6 in .[ basic ] let be a term that can be evaluated in interval arithmetic .let the variables of be .let be the variable associated with the root of the tree form of .let be the icsp that results from translating , where the domains of are and where the domains of the internal variables are ] .after applying the dro for that constraint , this domain has become the result of the interval arithmetic operation that obtains the domain for this variable from the domains of the other variables of the constraint .+ according to , every fair sequence of dros converges to the same domains for the variables .these are also the domains on termination of gpa .let us consider a fair sequence that begins with a sequence of dros that mimics the evaluation of in interval arithmetic . 
at the end of this, has the value computed by interval arithmetic .this shows that gpa gives a result that is a subinterval of the result obtained by interval arithmetic .+ gpa terminates after activating the dros in .this is because in the interval arithmetic evaluation of an operation is only performed when its arguments have been evaluated .this means that the corresponding dro only changes one domain .this domain is the domain of a unique variable that occurs in only one constraint that is already in the active set .therefore none of the dro activations adds a constraint to the active set , which is empty after .gpa yields the same result whatever the way constraints are selected in the active set .therefore gpa always gives the result of interval arithmetic evaluation .however , gpa may obtain this result in an inefficient way by selecting constraints that have no effect .this suggests that the active set be structured in a way that reflects the structure of .this approach has been taken in .the proof shows that , if the active set had not contained any of the constraints only involving internal variables , these constraints would have been added to the active set by gpa .this is the main idea of selective initialization . by initializing and ordering the active set in a suitable way and leaving gpa otherwise unchanged, it will obtain the interval arithmetic result with no more operations than interval arithmetic .this assumes the optimization implied by the totality theorem in .a constraint is a _ seed constraint _iff at least one of its variables has a domain that differs from the default domain assigned to that variable .for example , the term translates to an icsp with constraints , , and . when the domains are ] for , , and ; ] for and ; ] , the value in interval arithmetic of . at this stagethe dro for is executed . if , then failure occurs . if , then the domain for is unchanged .therefore , no constraint is added to the active set .termination occurs with nonfailure .there is no change in the domain of any of .the third possibility is that . in this case , the domain for shrinks : the upper bound decreases from to .this causes the constraints to be brought into the active set that correspond to nodes at the next lower level in the tree .this propagation may continue all the way down to the lowest level in the tree , resulting in shrinking of the domain of one or more of .let us compare this behaviour with the use of interval arithmetic to solve the same inequality .in all three cases , gpa gives the same outcome as interval arithmetic : failure or nonfailure .in the first two cases , gpa gives no more information than interval arithmetic .it also does no more work . in the third case, gpa may give more information than interval arithmetic : in addition to the nonfailure outcome , it may shrink the domain of one or more of .this is beyond the capabilities of interval arithmetic , which is restricted to transmit information about arguments of a function to information about the value of the function .it can not transmit information in the reverse direction . to achieve this extra capability , gpa needs to do more work than the equivalent of interval arithmetic evaluation . 
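the three cases discussed above for the root constraint of an inequality tree can be written down directly; only the third case changes a domain and thereby triggers the top-down phase. a minimal sketch (our own naming):

```python
def dro_upper_bound(root_domain, c):
    """domain reduction for the root constraint  v <= c  of an inequality tree.

    three cases, as in the discussion above:
      * the whole domain lies above c       -> failure (empty domain),
      * the whole domain lies at or below c -> nothing changes,
      * otherwise the upper bound is clipped down to c, and this change is
        what triggers the top-down propagation phase.
    """
    lo, hi = root_domain
    if lo > c:
        return None, False            # failure: no value can satisfy v <= c
    if hi <= c:
        return (lo, hi), False        # no change, nothing to propagate
    return (lo, c), True              # shrink; downstream constraints re-activate

if __name__ == "__main__":
    print(dro_upper_bound((2.0, 9.0), 5.0))        # ((2.0, 5.0), True)
    print(dro_upper_bound((2.0, 4.0), 5.0))        # ((2.0, 4.0), False)
    print(dro_upper_bound((6.0, 9.0), 5.0))        # (None, False)
```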
in the above, we have assumed that gpa magically avoids selecting constraints in a way that is not optimal .in such an execution of gpa we can recognize two phases : an initial phase that corresponds to evaluating the left - hand side in interval arithmetic , followed by a second phase that starts with the active set containing only the constraint .when we consider the nodes in the tree that correspond to the constraints that are selected , then it is natural to call the first phase bottom - up ( it starts at the leaves and ends at the root ) and the second phase top - down ( it starts at the root and may go down as far as to touch some of the leaves ) .the bottom - up phase can be performed automatically by the psi algorithm .the start of the top - down phase is similar to the situation that occurs in search . in both search and in the top - downphase a different form of selective initialization can be used , shown in the next section . the bottom - up phase and the top - down phaseare separated by a state in which the active set only contains . for reasons that become apparent in the next section, we prefer a separate treatment of this constraint : not to add it to the active set and to execute the shrinking of the domain for as an extraneous event .this is then a special case of termination of gpa , or its equivalent psi , followed by the extraneous event of shrinking one domain .the pseudo - code for psi algorithm is given in figure [ psialg ] .let the active set be a priority queue in which the constraints are + ordered according to the level they occupy in the tree , + with those that are further away from the root placed nearer to the front of the queue + put only * seed * constraints into + while ( ) + choose a constraint from + apply the dro associated with + if one of the domains is empty , then stop + add to all constraints involving variables whose domains have changed , if any + remove from + the correctness of psi algorithm can be easily deduced from the following theorem .[ modeval ] consider the icsp obtained from the tree of the atomic formula .suppose we modify gpa so that the active set is initialized to contain instead of all constraints only seed constraints .suppose also that the active set is a priority queue in which the constraints are ordered according to the level they occupy in the tree , with those that are further away from the root placed nearer to the front of the queue .then gpa terminates with the same result as when the active set would have been initialized to contain all constraints . as we did before , suppose that in gpa the active set is initialized with all constraints such that the seed constraints are at the end of the active set . applying any dro of a constraint that isnot a seed constraint will not affect any domain .thus , the constraints that are not seed constraints can be removed from the active set without changing the result of gpa . since the gpa does not specify any order , can be ordered as desired . herewe choose to order it in such a way we get an efficient gpa when used to evaluate an expression ( see previous section ) .often we find that after applying gpa to an icsp , the domain for one of the variables , say , is too wide .search is then necessary .this can take the form of splitting on the domain for .the results of such a split are two separate icsps and that are the same as except for the domain of . 
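a runnable version of the psi loop sketched in figure [ psialg ] is given below. the priority ordering by tree depth and the restriction of the initial active set to seed constraints follow the pseudo-code; the concrete representation of constraints and dros is our own simplification, so this is an illustration rather than the authors' implementation.

```python
import heapq

def psi(constraints, domains, is_seed, level_of):
    """propagation by selective initialization (simplified, runnable sketch).

    constraints : list of (name, variables, dro); dro(domains) mutates the
                  domain dict and returns the set of variable names it
                  actually narrowed, or None if a domain became empty.
    domains     : dict variable -> (lo, hi).
    is_seed     : predicate saying whether a constraint is a seed constraint
                  (one of its variables has a non-default domain).
    level_of    : name -> depth of the constraint's node in the expression
                  tree; deeper constraints are dequeued first.
    returns False on failure (an empty domain), True on quiescence.
    """
    counter = 0
    queue, queued = [], set()

    def push(con):
        nonlocal counter
        if con[0] not in queued:
            heapq.heappush(queue, (-level_of(con[0]), counter, con))
            queued.add(con[0])
            counter += 1

    for con in constraints:            # initialise the active set with seeds only
        if is_seed(con):
            push(con)

    while queue:
        _, _, (name, _, dro) = heapq.heappop(queue)
        queued.discard(name)
        changed = dro(domains)
        if changed is None:            # empty domain: the box contains no solution
            return False
        for other in constraints:      # re-activate constraints sharing a changed variable
            if other[0] != name and changed & set(other[1]):
                push(other)
    return True
```

a dro here reports which variable domains it actually narrowed; only constraints sharing one of those variables are re-activated, which is what keeps the bottom-up phase no more expensive than interval arithmetic evaluation.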
in , has as domain the left half of ; in , it is the right half of .however , applying gpa to and entails duplicating work already done when gpa was applied to .when splitting on after termination of the application of gpa to , we have the same situation as at the beginning of the downward phase of applying gpa to an inequality : the active set is empty and an extraneous event changes the domain of one variable to a proper subset .the following theorem justifies a form of the psi algorithm where the active set is initialized with what is typically a small subset of all constraints .[ modsolvegeneral ] let be the tree obtained from the atomic formula .let be the icsp obtained from .let be a variable in .suppose we apply gpa to .after the termination of gpa , suppose the domain of is changed to an interval that is a proper subset of it .if we apply gpa to with an active set initialized with the constraints only involving , then gpa terminates with the same result as when the active set would have been initialized to contain all constraints . to prove theorem [ modsolvegeneral ], we should show that initializing gpa with all constraints gives the same results as when it is initialized with only the constraints involving . since no ordering is specified for the active set of gpa , we choose an order in which the constraints involving are at the end of the active set .because dros are idempotent , all constraints at the front of the active set , different from those involving , do not affect any domain . thus removing them from the active set in the initialization process does not change the fixpoint of the gpa .thus , theorem [ modsolvegeneral ] is proved .we have only considered the application of selective initialization to solve a single inequality .a conjunction of inequalities such as equation ( [ nonlinsys ] ) can be solved by solving each in turn .this has to be iterated because the solving of another inequality affects the domain of an inequality already solved .this suggests performing the solving of all inequalities in parallel .doing so avoids the waste of completing an iteration on the basis of unnecessarily wide intervals .it also promises speed - up because many of the dro activations only involve variables that are unique to the inequality . in the current version of the design of our algorithm, we combine this parallelization with a method of minimizing the complexity usually caused by multiple occurrences of variables .before interval methods it was not clear how to tackle numerically realistic optimization models . only with the advent of interval arithmetic in the 1960s one could for the first time at least say : `` if only we had so much memory and so much time , then we could solve this problem . ''interval arithmetic has been slow in developing . since the 1980s constraint programming has added fresh impetus to interval methods .conjunctions of nonlinear inequalities , the basis for optimization , can be solved both with interval arithmetic and with constraint programming . in this paperwe relate these two approaches .it was known that constraint propagation subsumes interval arithmetic .it was also clear that using propagation for the special case of interval arithmetic evaluation is wasteful . 
in this paperwe present an algorithm for propagation by selective initialization that ensures that propagation is as efficient in the special case of interval arithmetic evaluation .we also apply selective initialization for search and for solving inequalities .preliminary results on a parallel version of the methods presented here suggest that realistic optimization models will soon be within reach of modest computing resources .we acknowledge generous support by the university of victoria , the natural science and engineering research council nserc , the centrum voor wiskunde en informatica cwi , and the nederlandse organisatie voor wetenschappelijk onderzoek nwo .frdric benhamou , frdric goualard , laurent granvilliers , and jean - franois puget .revising hull and box consistency . in _ proceedings of the 16th international conference on logic programming _ , pages 230244 .mit press , 1999 .hickey , m.h .van emden , and h. wu . a unified framework for interval constraints and interval arithmetic . in michael maher and jean - franois puget , editors , _ principles and practice of constraint programming cp98 _ , pages 250 264 .springer - verlag , 1998 .lecture notes in computer science 1520 .van emden .computing functional and relational box consistency by structured propagation in atomic constraint systems . in _ proc .6th annual workshop of the ercim working group on constraints ; downloadable from corr _ , 2001 .van emden and b. moa . using propagation for solving complex arithmetic constraints .technical report dcs - xxx - ir , department of computer science , university of victoria .paper cs.na/0309018 in computing research repository ( corr ) ,september 2003 . | numerical analysis has no satisfactory method for the more realistic optimization models . however , with constraint programming one can compute a cover for the solution set to arbitrarily close approximation . because the use of constraint propagation for composite arithmetic expressions is computationally expensive , consistency is computed with interval arithmetic . in this paper we present theorems that support , selective initialization , a simple modification of constraint propagation that allows composite arithmetic expressions to be handled efficiently . |
dna microarrays allow the comparison of the expression levels of all genes in an organism in a single experiment , which often involve different conditions ( _ i.e. _ health - illness , normal - stress ) , or different discrete time points ( _ i.e. _ cell cycle ) . among other applications ,they provide clues about how genes interact with each other , which genes are part of the same metabolic pathway or which could be the possible role for those genes without a previously assigned function .dna microarrays also have been used to obtain accurate disease classifications at the molecular level . however , transforming the huge amount of data produced by microarrays into useful knowledge has proven to be a difficult key step . on the other hand ,clustering techniques have several applications , ranging from bioinformatics to economy .particularly , data clustering is probably the most popular unsupervised technique for analyzing microarray data sets as a first approach .many algorithms have been proposed , hierarchical clustering , k - means and self - organizing maps being the most known .clustering consists of grouping items together based on a similarity measure in such a way that elements in a group must be more similar between them than between elements belonging to different groups .the similarity measure definition , which quantifies the affinity between pairs of elements , introduces _ a priori _ information that determines the clustering solution .therefore , this similarity measure could be optimized taking into account additional data acquired , for example , from real experiments . some works with_ a priori _ inclusion of bioinformation in clustering models can be found in . in the case of gene expression clustering ,the behavior of the genes reported by microarray experiments is represented as points in a -dimensional space , being the total number of genes , and the number of conditions .each gene behavior ( or point ) is then described by its coordinates ( its expression value for each condition ) .genes whose expression pattern is similar will appear closer in the -space , a characteristic that is used to classify data in groups . in our case , we have used the superparamagnetic clustering algorithm ( spc ) , which was proposed in 1996 by domany and collaborators as a new approach for grouping data sets .however , this methodology has difficulties dealing with different density clusters , and in order to ameliorate this , we report here some modifications of the original algorithm that improve cluster detection .our main contribution consists on increasing the similarity measure between genes by taking advantage of transcription factors , special proteins involved in the regulation of gene expression .the present paper is organized as follows : in section 2 , the spc algorithm is introduced , as well as our proposal to include further biological information and our considerations for the selection of the most natural clusters .results for a real data set , as well as performance comparisons , are presented in section 3 . finally , section 4 is dedicated to a summary of our results and conclusions .a potts model can be used to simulate the collective behavior of a set of interacting sites using a statistical mechanics formalism . in the more general inhomogeneous potts model , the sitesare placed on an irregular lattice .next , in the spc idea of domany _ et al . 
_ , each gene s expression pattern is represented as a site in an inhomogeneus potts model , whose coordinates are given by the microarray expression values . in this way , a particular lattice arrangement is spanned for the entire data set being analyzed .a spin value , arbitrarily chosen from possibilities , is assigned to each site , where corresponds to the site of the lattice .the main idea is to characterize the resulting spin configuration by the ferromagnetic hamiltonian : where the sum goes over all neighboring pairs , and are spin values of site and site respectively , and is their ferromagnetic interaction strength .each site interacts only with its neighbors , however since the lattice is irregular , it is necessary to assign the set of nearest - neighbors of each site using the so - called -mutual - nearest - neighbor criterion .the original interaction strength is as follows : with the average number of neighbors per site and the average distance between neighbors .the interaction strength between two neighboring sites decreases in a gaussian way with distance and therefore , sites that are separated by a small distance have more probability of sharing the same spin value during the simulation than the distant sites . on the other hand , said probability , , also depends on the temperature , which acts as a control parameter . at low temperatures ,the sites tend to have the same spin values , forming a ferromagnetic system .this configuration is preferred over others because it minimizes the total energy . however , the probability of encountering aligned spins diminishes as temperature increases , and the system could experience either a single transition to a totally disordered state ( paramagnetic phase ) , or pass through an intermediate phase in which the system is partially ordered , which is known as the superparamagnetic phase . in the latter case ,varios regions of sites sharing the same spin value emerge .sites within these regions interact among them with a stronger force , exhibiting at the same time weak interactions with sites outside the region .these regions could fragment into smaller grains , leading to a chain of transitions within the superparamagnetic phase until the temperature is so high that the system enters the paramagnetic phase , where each spin behaves independently .this hierarchical subdivision in magnetic grains reflects the organization of data into categories and subcategories .regions of aligned spins emerging during simulation correspond to groups of points with similar coordinates , _i.e. _ , similar gene expression patterns .this subdivision can be simulated , for example , by using the monte carlo approach , by which one can compute and follow the evolution of system properties such as energy , magnetization and susceptibility , while the temperature is modified .in addition , the temperature ranges in which each phase transition takes place can be localized . rather than thresholding the distances between pairs of sites to decide their assignment to clusters , the pair correlation , indicating a collective aspect of the data distribution , is preferred .it can be calculated as follows in this way , is the normalized probability for finding two potts spins and sharing the same value for a given temperature step . 
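the construction of the couplings can be illustrated as follows. the constants in the interaction formula were stripped from the extracted text; the sketch uses the form common in the spc literature, a gaussian decay with the average neighbor distance, normalized by the average number of neighbors per site, and builds the neighbor list with the k-mutual-nearest-neighbor criterion mentioned above. function names and the toy data are ours.

```python
import numpy as np

def mutual_knn(points, k):
    """k-mutual-nearest-neighbour pairs: i and j are neighbours only if each
    is among the k nearest points of the other."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)[:, 1:k + 1]          # skip self (distance 0)
    knn = [set(row) for row in order]
    pairs = [(i, j) for i in range(len(points)) for j in knn[i]
             if j > i and i in knn[j]]
    return pairs, d

def interaction_strengths(points, k):
    """gaussian-decaying couplings between mutual neighbours.

    exact constants were stripped from the extracted text; this uses the form
    common in the spc literature,
        J_ij = (1 / K_hat) * exp(-d_ij**2 / (2 * a**2)),
    with K_hat the average number of neighbours per site and a the average
    neighbour distance.
    """
    pairs, d = mutual_knn(points, k)
    if not pairs:
        return {}
    dists = np.array([d[i, j] for i, j in pairs])
    a = dists.mean()                                    # average neighbour distance
    k_hat = 2.0 * len(pairs) / len(points)              # average neighbours per site
    return {(i, j): np.exp(-d[i, j] ** 2 / (2 * a ** 2)) / k_hat for i, j in pairs}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    expr = rng.standard_normal((50, 18))                # 50 genes x 18 time points
    J = interaction_strengths(expr, k=5)
    print(len(J), "neighbour pairs; example coupling:", next(iter(J.items())))
```

these couplings drive the monte carlo simulation from which the pair correlation just defined is estimated.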
if both spins belong to the same ordered region , their correlation value would be close to one , otherwise their correlation would be close to zero .thus , for each temperature step , two sites are assigned to the same cluster if their correlation exceeds a threshold value of . if a site does not have a single correlation value greater than , it is joined with its neighbor showing the highest value .+ for our spctf algorithm , we also accept sites whose are larger than in order to build a cluster .however , differently from the traditional spc algorithm , if two sites do not reach the value greater than they are not connected . this is because with our data we have found that the original condition led to unnatural growth of some clusters when the temperature is increased .as already mentioned , the data are fragmented in various clusters for each temperature value , and for higher temperatures , the number of clusters increases due to finer and finer segmentation . in order to select the more representative clusters through all temperature steps ,we assign a stability value to each obtained cluster , based on its evolution .we define as the number of temperature steps until the system reaches the paramagnetic phase and as the number of temperature steps a cluster survives , while and are defined as the total number of sites and the number of elements in a given cluster , respectively .we assign a stability parameter to each cluster , as follows : where is the fraction of temperature steps a cluster survives , while is the fraction of total elements belonging to .the advantage of using the stability parameter is that it gives preference to clusters that survive several temperatures , but also have an acceptable number of elements .we added a small positive real number to the denominator in the expression of for the special case when , where belongs to the range $ ] , leading to instead of the infinity .it has been reported that the main drawback of the spc algorithm consists of dealing with data showing regions of different density . in this case , either depending on temperature or the number of neighbors selected , some clusters will easily get prominent whereas the detection of others will be hindered . to overcome this problem ,at least two techniques have been proposed _e.g. _ , sequential superparamagnetic clustering and a modularity approach . our idea is to take advantage of already available biological information to improve lattice connectivity in such a way that biologically significant clusters have more probability of being detected by the algorithm .indeed , at the transcriptional level , the expression of a gene could be promoted / suppressed by the binding of the proteins named transcription factors to specific sequences on the gene promoter region .then , if a group of genes shows the same expression behavior in a microarray experiment , it is quite possible that they are being regulated by a specific transcription factor , forming a group of coregulated genes .thus , available information about which genes are targeted by the same transcription factors may be useful in the detection of groups of genes with similar expression profiles . 
to make effective this idea ,we downloaded from _ www.yeastract.com _ a list of yeast transcription factors that are well documented , andwhenever two neighboring genes are controlled by the same transcription factor , we increased their interaction strength .it is important to note that the list provided by _ www.yeastract.com _includes transcription factors associated with several processes and are not only cell cycle related .the formula that takes this into account replaces eq .( [ eq : js ] ) of the original algorithm , and has the following form : here , is the number of common transcription factors shared by and ( , which varies for each pair of neighboring genes ) , multiplied by a factor which was chosen to be 2.0 after comparing the results obtained with several other values .the selected value has the characteristic of preserving well - defined susceptibility peaks as well as obtaining larger clusters .the objective is to strengthen some connections without preventing the natural fragmentation of clusters caused by the temperature parameter . if two elements do not share a transcription factor , then , recovering the original spc formulatherefore , the modified interaction strength between each site and its neighbors is governed by two aspects : the distance between them , which comes from gene expression values generated through microarray experiments , and the number of transcription factors regulating both genes , obtained from documented biological data . any timetwo genes share a transcription factor , their interaction strength becomes larger , and this favors that the clusters including these sites remain stable for longer temperature ranges , with the corresponding increase of their stability values .we analyzed spellman _ et al . _ microarray data in which gene expression values from synchronized yeast cultures were obtained at various time moments , aiming to identify cell cycle genes .yeast cultures were synchronized by three methods : adding alpha pheromone , which arrests cells in the g1 phase ; using centrifugal elutration for separating small g1 cells ; and using a mutation that arrests cells late in mitosis at a given temperature . combining the three experiments and using fourier and correlation algorithms , spellman _ reported cell cycle regulated genes .the goal was to compare the performance of spc and spc with transcription factors ( spctf ) , which are algorithms that do not make assumptions about periodicity .nonetheless , the overall analysis is time consuming and we only selected the data set treated with the alpha pheromone , available at _ http://cellcycle - www.stanford.edu_. genes with missing values were discarded , leaving an input matrix of genes and time courses that included only of the genes reported by spellman __ . furthermore ,as we do not include the other two synchronization experiments , we expect to loose some of their cell cycle genes .it is worth mentioning that getz __ also analyzed the spellman alpha synchronized set with the spc algorithm .they took genes which have characterized functions and introduced a fourier transform to take into account the oscillatory nature of the cell cycle . 
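since the equation symbols of the modified interaction strength were stripped from the extracted text, the sketch below adopts one plausible reading: the base gaussian coupling is scaled by the factor 2.0 times the number of shared transcription factors whenever that number is positive, and is left unchanged otherwise, which recovers the original spc strength. treat the exact combination rule as an assumption of this illustration; the coupling dictionary is of the kind produced by the earlier sketch.

```python
def tf_weighted_strength(J, shared_tf_count, factor=2.0):
    """transcription-factor-weighted couplings (spctf sketch).

    J               : dict (i, j) -> base coupling from the gaussian formula.
    shared_tf_count : dict (i, j) -> number of documented transcription
                      factors regulating both genes (0 if none).
    assumption: the base coupling is scaled by factor * n_shared whenever the
    two neighbouring genes share at least one transcription factor, and is
    left untouched otherwise (recovering the original spc strength).
    """
    out = {}
    for pair, j_ij in J.items():
        n_shared = shared_tf_count.get(pair, 0)
        out[pair] = j_ij * factor * n_shared if n_shared > 0 else j_ij
    return out

if __name__ == "__main__":
    J = {(0, 1): 0.12, (1, 2): 0.08, (2, 3): 0.05}
    shared = {(0, 1): 2, (2, 3): 0}          # genes 0 and 1 share two documented TFs
    print(tf_weighted_strength(J, shared))
    # {(0, 1): 0.48, (1, 2): 0.08, (2, 3): 0.05}
```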
unlike the approach of getz _ et al. _, in our case we decided not to introduce any considerations about the periodicity of the data, mainly because the time series cover only two cell cycle periods. we obtain compact gene clusters implementing the original spc algorithm and spctf, both with parameter values and. the cluster with the highest stability value contains an extremely large number of elements without a clear biological linkage between them. it is mainly composed of genes whose expression does not change significantly over time, thus it is possible that they are included here for this very reason. we discard this cluster from our analysis, although it could always be taken apart and analyzed again with spctf by choosing the appropriate number of neighbors to obtain more information. to compare both approaches in more detail, it is necessary to correlate each cluster in the spc method with its equivalent in spctf. in order to do this, we calculate the euclidean distance between the mean position vectors of every cluster in each approach, and choose the pairs with the shortest distance between them. (we recall that the mean position vector of a cluster is obtained by averaging each coordinate over all its elements). although different measures could have been used, this one performed adequately, as can be seen in the supplementary information file, where we provide a more detailed comparison between spctf and spc clusters. in table [ tabla1 ], we present the differences in cluster size as well as the hits, i.e. the number of genes reported by spellman _ et al. _ which have been included in the clusters. when going through the spctf approach, one can see that the first largest cluster loses some genes, while the number of elements in the rest of the clusters grows. besides, hits or coincidences with spellman _ et al. _ cell cycle genes in clusters of six or more elements increase by, from to. therefore, we were able to incorporate several genes into these clusters, mainly from outliers. (table [ tabla1 ]: comparison between spc and spctf.) in the following analysis, we focus on clusters of six or more elements, because we are interested in finding groups of several genes sharing the same expression pattern (coregulated genes). results of the comparison for the first most stable clusters, discarding the first one, are shown in fig. [ fig : todos_cc ].
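the cluster matching step described above — pairing every spc cluster with the spctf cluster whose mean expression vector is closest in euclidean distance — can be sketched as follows (python, our own naming and toy data).

```python
import numpy as np

def match_clusters(clusters_a, clusters_b, expression):
    """pair every cluster of run A with the run-B cluster whose mean
    expression vector (centroid) is closest in euclidean distance.

    clusters_a, clusters_b : lists of lists of gene indices.
    expression             : genes x conditions matrix of expression values.
    returns a list of (index_in_a, index_in_b, distance) tuples.
    """
    cent_a = [expression[c].mean(axis=0) for c in clusters_a]
    cent_b = [expression[c].mean(axis=0) for c in clusters_b]
    matches = []
    for ia, ca in enumerate(cent_a):
        dists = [np.linalg.norm(ca - cb) for cb in cent_b]
        ib = int(np.argmin(dists))
        matches.append((ia, ib, float(dists[ib])))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    expr = rng.standard_normal((20, 18))                 # toy 20 genes x 18 time points
    spc_clusters = [[0, 1, 2], [5, 6], [10, 11, 12]]
    spctf_clusters = [[0, 1, 2, 3], [5, 6, 7], [10, 12]]
    for m in match_clusters(spc_clusters, spctf_clusters, expr):
        print(m)
```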
generally, the clusters matched in this way incorporate more elements with spctf, including more cell cycle genes such as those reported by spellman _ et al. _, thus improving the matching. (figure [ fig : todos_cc ] caption: the first most stable clusters, discarding the first one; gray bars correspond to the clusters obtained with the spc algorithm and black bars to the equivalent clusters in spctf; groups tend to increase in size and also in hits with cell cycle genes reported by spellman _ et al. _, with the exception of cluster.) depending on the available information about the genes, we classify the clusters in three groups. the first cluster type, cell cycle genes, cc, corresponds to groups formed in their majority ( ) by already reported cell cycle genes (fig. [ fig : cc_sp ]). the second type, mixed genes, m, contains clusters with non-reported genes as well as already known cell cycle genes (fig. [ fig : m_n ]), and in the third type, no hits, n, we include the clusters that contain only one hit or are entirely composed of genes not previously identified as cell cycle regulated (fig. [ fig : m_n ]). it is worth mentioning that more cell cycle experiments have been done since spellman _ et al. _, and new genes have meanwhile been classified as cell cycle regulated. some of these newly reported cell cycle genes were obtained by cho _ et al. _, pramila _ et al. _, rowicka _ et al. _ and lichtenberg _ et al. _. we analyze our clusters taking now as hits the genes reported either by spellman _ et al. _ or by one of the above mentioned studies. in this way, we gained thirty additional hits in the spc clusters, while in the spctf clusters we have fifty-two extra genes. the results including all the aforementioned cell cycle studies are presented in figs. [ fig : all_studies1][ fig : m_n_all ]. (figure [ fig : all_studies1 ] caption: the most stable clusters, with hits now taken as cell cycle genes reported by all studies; gray bars correspond to the clusters obtained with the spc algorithm and black bars to the equivalent clusters in spctf.) in addition, we analyze the expression profiles of the genes comprising each cluster using the sceptrans tool, and we notice that all the genes grouped in the same cluster have the same expression pattern. this gives us further confidence that our algorithm is grouping data correctly. the expression profiles for a representative member of each cluster type are shown in fig. [ fig : exp_pro ]. we also find two clusters ( and ) that present an oscillating behaviour that is due to an artifact in the manner the microarray experiment was performed, see. in the supplementary information file, we include the list of oscillating genes identified in, and the number of these genes inside each of our first clusters. we also include the expression profiles of these clusters, as well as those of size and which contain hits with cell cycle genes identified by spellman _ et al. _; these clusters also have similar expression profiles but were not further analyzed because of their low number of elements. in the case of gene annotation, it is important to have clusters with many elements to effectively assure that an unknown gene shares the biological function already assigned to the other genes in the same cluster. the cc clusters are almost entirely composed of cell cycle regulated genes reported either by spellman _ et al. _ or by other authors; besides, their expression patterns are similar, which leaves no doubt about their validity.
for the m andn clusters , we know that they are well grouped because their elements share the same expression patterns , but in order to select those of worth for further analysis ( for example in a laboratory experiment ) we analyze them through musa , motif finding using an unsupervised approach algorithm , that can be found at _www.yeastract.com_. this program searches for the most common sequences ( motifs ) in the regulatory region of a set of genes , and compare them to the transcription factor binding sites already described in yeastract database .results of this analysis are shown in table [ tabla2 ] , which includes the quorum or percentage of genes containing a motif in each cluster , and the alignment score , which quantifies the level of similarity between the encountered motif and the known transcription factor associated with it .the clusters that probably would give us the best results would be those associated with cell cycle transcription factors with high percentages and scores .we select in this way , the clusters , , , , and because they have percentages higher than and scores higher than . in order to validate the musa analysis, we also constructed various clusters with sizes ranging from six to thirty - seven genes that were composed by genes selected at random from the original data . when analyzing these random clusters in the same way in musa , we obtain at most two cell cycle transcription factor coincidences .* musa analysis *large amounts of biological information are constantly obtained by throughput techniques and clustering algorithms have taken an important place in the unraveling of this information .however , the clustering analyses offer a difficult challenge because any data set can be grouped in numerous ways , depending on the level of resolution asked for and the applied similarity measure . in this work ,we propose the use of available biological information in order to strengthen the interaction between genes which share a transcription factor involved in any metabolic process , improving the similarity measure .this information is introduced in the natural evolution of the spc algorithm , and in this way , we are able to enhance the creation and endurance of groups of possible coregulated genes . as the network spanned by the transcription factors information connects all genes , clustering directly _ a posteriori _ using only this information in the present case results into a single massive cluster ( see section iv of the supplementary information ) . however , by having the distance play an important weight in the interaction formula , the far - located clusters will not join , despite sharing transcription factors between their genes . with this in mind , we have modified the spc algorithm , and applied both the original and modified spctf algorithm to one of the three spellman _ data sets of the yeast cell cycle .the expression profiles of the genes in all resulting clusters show a similar behavior , but we obtain larger clusters with spctf .we classified them in three types , cc , m , and n , depending on the amount of cell cycle reported elements inside each cluster .with spctf , the cc type clusters increase in size including more cell cycle genes , and for the m and n type clusters , we also looked for common sequences in its regulatory regions and selected various groups worth of further research in order to report possible new cell cycle genes . 
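The downstream filtering of the MUSA output and the random-cluster control described above can be sketched as below. This is not part of the MUSA tool itself; the data layout and the two thresholds (`min_quorum`, `min_score`) are placeholders standing in for the quorum and alignment-score cut-offs reported in Table 2.

```python
import random

def select_candidate_clusters(musa_results, min_quorum=50.0, min_score=0.8):
    """Keep clusters whose best match to a cell-cycle transcription factor passes
    both the quorum (% of genes carrying the motif) and the alignment-score cut-off.
    musa_results maps a cluster id to a list of (tf_name, quorum_percent, score)."""
    return [cid for cid, motifs in musa_results.items()
            if any(q > min_quorum and s > min_score for _tf, q, s in motifs)]

def random_control(all_genes, cluster_sizes, n_trials=100, seed=0):
    """Random gene sets (all_genes is a list of gene ids) with the same sizes as the
    real clusters, to be run through the same motif analysis as a negative control."""
    rng = random.Random(seed)
    return [[rng.sample(all_genes, k) for k in cluster_sizes]
            for _ in range(n_trials)]

# Hypothetical example: cluster 'c7' passes, 'c9' does not (unknown factor, low quorum).
print(select_candidate_clusters({"c7": [("MBP1", 66.7, 0.93)],
                                 "c9": [("XYZ1", 20.0, 0.95)]}))   # ['c7']
```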
as expected , some of these clusters include already known cell cycle genes sharing a transcription factor , _ but more importantly , at the predictive level , they promote the inclusion of new genes with similar expression patterns_. it is also important to note that the modified algorithm can be applied to any data set , and the followed methodology leads to the selection of the potential gene subsets feasible to be experimentally investigated .our work can serve as an example of how the inclusion of available biological information , such as transcription factors , and bioinformatic tools , such as musa , can lead to better and more confident results , aiding in the analysis of data coming from microarray experiments .the authors thank drs . s. ahnert and g. sherlock for useful discussions and comments .we also thank conacyt for providing support for two of the authors ( m.p.m.a . and j.c.n.m . ) andalso the referees for helpful remarks and information . this work was partly supported through the project sep - conacyt-2005 - 49039 .p. t. monteiro , n. d. mendes , m. c. teixeira , s. dorey , s. tenreiro , n. p. mira , h. pais , a. p. francisco , a. m. carvalho , a. b. lourenco , i. sa - correia , a. l. oliveira , and a. t. freitas , nucleic acids res . * 36 * , d132 ( 2008 ) . | in this work , we modify the superparamagnetic clustering algorithm ( spc ) by adding an extra weight to the interaction formula that considers which genes are regulated by the same transcription factor . with this modified algorithm that we call spctf , we analyze spellman _ et al . _ microarray data for cell cycle genes in yeast , and find clusters with a higher number of elements compared with those obtained with the spc algorithm . some of the incorporated genes by using spcft were not detected at first by spellman _ et al . _ but were later identified by other studies , whereas several genes still remain unclassified . the clusters composed by unidentified genes were analyzed with musa , the motif finding using an unsupervised approach algorithm , and this allow us to select the clusters whose elements contain cell cycle transcription factor binding sites as clusters worth of further experimental studies because they would probably lead to new cell cycle genes . finally , our idea of introducing available information about transcription factors to optimize the gene classification could be implemented for other distance - based clustering algorithms . superparamagnetic clustering , similarity measure , microarrays , cell cycle genes , transcription factors . + paper - physa-3 20100901.tex physica a 389(24 ) , 5689 - 5697 ( 2010 ) + doi : 10.1016/j.physa.2010.09.006 |
a quantum finite automaton ( qfa ) is a model for a quantum computer with a finite memory .qfas can recognize the same languages as classical finite automata but they can be exponentially more space efficient than their classical counterparts . to recognize an arbitrary regular language , qfas need to be able to perform general measurements after reading every input symbol , as in .if we restrict qfas to unitary evolution and one measurement at the end of computation ( which might be easier to implement experimentally ) , their power decreases considerably .namely , they can only recognize the languages recognized by permutation automata , a classical model in which the transitions between the states have to be fully reversible .similar decreases of the computational power have been observed in several other contexts .quantum error correction is possible if we have a supply of quantum bits initialized to at any moment of computation ( see chapter 10 of ) . yet, if the number of quantum bits is fixed and it is not allowed to re - initialize them by measurements , error correction becomes difficult .simulating a probabilistic turing machine by a quantum turing machine is trivial if we allow to measure and reinitialize qubits but quite difficult if the number of qubits is fixed and they can not be reinitialized . thus , the availability of measurements is very important for quantum automata . what happens if the measurements are allowed but restricted ?how can we use the measurements of a restricted form to enhance the abilities of quantum automata ?can quantum effects be used to recognize languages that are not recognizable by classical automata with the same reversibility requirements ? in this paper , we look at those questions for measure - many " qfa model by kondacs and watrous .this model allows intermediate measurements during the computation but these measurements have to be of a restricted type .more specifically , they can have 3 outcomes : `` accept '' , `` reject '' , `` do nt halt '' and if one gets `` accept '' or `` reject '' , the computation ends and this is the result of computation .the reason for allowing measurements of this type was that the states of a qfa then have a simple description of the form where is the probability that the qfa has accepted , is the probability that the qfa has rejected and is the remaining state if the automaton has not accepted or rejected .allowing more general measurements would make the remaining state a mixed state instead of a pure state . having a mixed state as the current state of a qfa is very reasonable physically but the mathematical apparatus for handling pure states is simpler than one for mixed states . for this model, it is known that * any language recognizable by a qfa with a probability , is recognizable by a reversible finite automaton ( rfa ) . *the language can be recognized with probability but can not be recognized by an rfa .thus , the quantum automata in this model have an advantage over their classical counterparts ( rfas ) with the same reversibility requirements but this advantage only allows to recognize languages with probabilities at most 7/9 , not with arbitrary .this is a quite unusual property because , in almost any other computational model , the accepting probability can be increased by repeating the computation in parallel . as we see , this is not the case for qfas . 
in this paper , we develop a method for determining the maximum probability with which a qfa can recognize a given language .our method is based on the quantum counterpart of classification of states of a markov chain into ergodic and transient states .we use this classification of states to transform the problem of determining the maximum accepting probability of a qfa into a quadratic optimization problem .then , we solve this problem ( analytically in simpler cases , by computer in more difficult cases ) .compared to previous work , our new method has two advantages .first , it gives a systematic way of calculating the maximum accepting probabilities .second , solving the optimization problems usually gives the maximum probability exactly .most of previous work used approaches depending on the language and required two different methods : one for bounding the probability from below , another for bounding it from above . often , using two different approaches gave an upper and a lower bound with a gap between them ( like vs. mentioned above ) .with the new approach , we are able to close those gaps .we use our method to calculate the maximum accepting probabilities for a variety of languages ( and classes of languages ) .first , we construct a quadratic optimization problem for the maximum accepting probability by a qfa of a language that is not recognizable by an rfa .solving the problem gives the probability .this probability can be achieved for the language in the two - letter alphabet but no language that is no recognizable by a rfa can be recognized with a higher probability .this improves the result of .this result can be phrased in a more general way .namely , we can find the property of a language which makes it impossible to recognize the language by an rfa .this property can be nicely stated in the form of the minimal deterministic automaton containing a fragment of a certain form .we call such a fragment a non - reversible construction " .it turns out that there are many different `` non - reversible constructions '' and they have different influence on the accepting probability .the one contained in the language makes the language not recognizable by an rfa but the language is still recognizable by a qfa with probability .in contrast , some constructions analyzed in make the language not recognizable with probability for any . in the rest of this paper , we look at different non - reversible constructions " and their effects on the accepting probabilities of qfas .we consider three constructions : `` two cycles in a row '' , `` cycles in parallel '' and a variant of the construction .the best probabilities with which one can recognize languages containing these constructions are , and , respectively . the solution of the optimization problem for `` two cycles in a row '' gives a new qfa for the language that recognizes it with probability , improving the result of . again, using the solution of the optimization problem gives a better qfa that was previously missed because of disregarding some parameters .we define the kondacs - watrous ( `` measure - many '' ) model of qfas . [ def1 ]a qfa is a tuple where is a finite set of states , is an input alphabet , is a transition function ( explained below ) , is a starting state , and and are sets of accepting and rejecting states ( ) .the states in and , are called _ halting states _ and the states in are called _ non halting states_. * states of . * the state of can be any superposition of states in ( i. e. 
, any linear combination of them with complex coefficients ) .we use to denote the superposition consisting of state only . denotes the linear space consisting of all superpositions , with -distance on this linear space .* endmarkers .* let and be symbols that do not belong to .we use and as the left and the right endmarker , respectively .we call the _ working alphabet _ of . * transition function . *the transition function is a mapping from to such that , for every , the function defined by is a unitary transformation ( a linear transformation on that preserves norm ) .* computation . *the computation of a qfa starts in the superposition .then transformations corresponding to the left endmarker , the letters of the input word and the right endmarker are applied .the transformation corresponding to consists of two steps .first , is applied .the new superposition is where is the superposition before this step .then , is observed with respect to where , , .it means that if the system s state before the measurement was then the measurement accepts with probability , rejects with probability and continues the computation ( applies transformations corresponding to next letters ) with probability with the system having the ( normalized ) state where .we regard these two transformations as reading a letter . * notation .* we use to denote the transformation consisting of followed by projection to .this is the transformation mapping to the non - halting part of .we use to denote the product of transformations , where is the -th letter of the word .we also use to denote the ( unnormalized ) non - halting part of qfa s state after reading the left endmarker and the word . from the notation it follows that .* recognition of languages .* we will say that an automaton recognizes a language with probability if it accepts any word with probability and rejects any word with probability . for classical markov chains, one can classify the states of a markov chain into _ ergodic _ sets and _ transient _ sets .if the markov chain is in an ergodic set , it never leaves it .if it is in a transient set , it leaves it with probability for an arbitrary after sufficiently many steps .a quantum counterpart of a markov chain is a quantum system to which we repeatedly apply a transformation that depends on the current state of the system but does not depend on previous states. in particular , it can be a qfa that repeatedly reads the same word .then , the state after reading times depends on the state after reading times but not on any of the states before that .the next lemma gives the classification of states for such qfas .[ lemmaaf ] let .there are subspaces , such that and 1 . if , then and , 2 . if , then when . instead of ergodic and transient sets, we have subspaces and .the subspace is a counterpart of an ergodic set : if the quantum process defined by repeated reading of is in a state , it stays in . is a counterpart of a transient set : if the state is , is left ( for an accepting or rejecting state ) with probability arbitrarily close to 1 after sufficiently many s . in some of proofswe also use a generalization of lemma [ lemmaaf ] to the case of two ( or more ) words and : [ akvlemma ] let .there are subspaces , such that and 1 . if , then and and and , 2 . if , then for any , there exists such that . 
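A direct simulation of the measure-many model just defined is short: after each unitary step the amplitude on the accepting and rejecting states is converted into halting probability and the remaining (unnormalised) vector is kept on the non-halting states. The sketch below is plain NumPy; the 4-state automaton at the end is a toy example for demonstration only, not one of the automata constructed in this paper.

```python
import numpy as np

def run_qfa(unitaries, psi0, acc_states, rej_states, word):
    """One run of a measure-many (Kondacs-Watrous) QFA.
    unitaries maps each letter ('^' and '$' used here as end-markers) to a unitary
    matrix; acc_states / rej_states are the index sets of halting states.
    Returns (p_accept, p_reject) for the word wrapped in end-markers."""
    n = len(psi0)
    non_halting = [i for i in range(n) if i not in acc_states and i not in rej_states]
    psi = np.array(psi0, dtype=complex)
    p_acc = p_rej = 0.0
    for letter in ['^'] + list(word) + ['$']:
        psi = unitaries[letter] @ psi                              # unitary part of reading a letter
        p_acc += float(sum(abs(psi[i])**2 for i in acc_states))    # outcome "accept"
        p_rej += float(sum(abs(psi[i])**2 for i in rej_states))    # outcome "reject"
        kept = np.zeros(n, dtype=complex)                          # project onto non-halting states,
        kept[non_halting] = psi[non_halting]                       # unnormalised: weights already counted
        psi = kept
    return p_acc, p_rej

# Toy automaton: q0, q1 non-halting, q2 accepting, q3 rejecting.
theta = np.pi / 8
rot = np.eye(4, dtype=complex)            # reading 'a' rotates within span{q0, q1}
rot[0, 0], rot[0, 1] = np.cos(theta), -np.sin(theta)
rot[1, 0], rot[1, 1] = np.sin(theta),  np.cos(theta)
end = np.zeros((4, 4), dtype=complex)     # '$' swaps q0<->q2 and q1<->q3
end[2, 0] = end[0, 2] = end[3, 1] = end[1, 3] = 1.0
unitaries = {'^': np.eye(4, dtype=complex), 'a': rot, '$': end}
print(run_qfa(unitaries, [1, 0, 0, 0], {2}, {3}, "aa"))   # (0.5, 0.5) = cos^2(pi/4), sin^2(pi/4)
```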
we also use a lemma from .[ bvlemma ] if and are two quantum states and then the total variational distance between probability distributions generated by the same measurement on and is at most but it can be improved to . ]ambainis and freivalds characterized the languages recognized by rfas as follows . [ aftheorem ] let be a language and be its minimal automaton . is recognizable by a rfa if and only if there is no such that 1 . , 2 .if starts in the state and reads , it passes to , 3 . if starts in the state and reads , it passes to , and 4 is neither `` all - accepting '' state , nor `` all - rejecting '' state , an rfa is a special case of a qfa that outputs the correct answer with probability 1 .thus , any language that does not contain the construction of theorem [ aftheorem ] can be recognized by a qfa that always outputs the correct answer .ambainis and freivalds also showed the reverse of this : any language with the minimal automaton containing the construction of theorem [ aftheorem ] can not be recognized by a qfa with probability .we consider the question : what is the maximum probability of correct answer than can be achieved by a qfa for a language that can not be recognized by an rfa ?the answer is : [ t1 ] let be a language and be its minimal automaton . 1 .if contains the construction of theorem [ aftheorem ] , can not be recognized by a 1-way qfa with probability more than .2 . there is a language with the minimal automaton containing the construction of theorem [ aftheorem ] that can be recognized by a qfa with probability . _ proof . _we consider the following optimization problem .* optimization problem 1 . *find the maximum such that there is a finite dimensional vector space , subspaces , such that , vectors , such that and and probabilities , such that and 1 . , 2 . , 3 . .we sketch the relation between a qfa recognizing and this optimization problem .let be a qfa recognizing .let be the minimum probability of the correct answer for , over all words .we use to construct an instance of the optimization problem above with .namely , we look at reading an infinite ( or very long finite ) sequence of letters . by lemma [ lemmaaf ], we can decompose the starting state into 2 parts and .define and .let and be the probabilities of getting into an accepting ( for ) or rejecting ( for ) state while reading an infinite sequence of s starting from the state .the second part of lemma [ lemmaaf ] implies that .since and are different states of the minimal automaton , there is a word that is accepted in one of them but not in the other . without loss of generality , we assume that is accepted if is started in but not if is started in . also , since is not an `` all - accepting '' state , there must be a word that is rejected if is started in the state .we choose and so that the square of the projection ( ) of a vector on ( ) is equal to the accepting ( rejecting ) probability of if we run on the starting state and input and the right endmarker .finally , we set equal to the of the set consisting of the probabilities of correct answer of on the words and , for all .then , condition 1 of the optimization problem , is true because the word must be accepted and the accepting probability for it is exactly the square of the projection of the starting state ( ) to . condition 2 follows from running on a word for some large . 
by lemma [ lemmaaf ] , if for some , .also , , , , is an infinite sequence in a finite - dimensional space .therefore , it has a limit point and there are , such that we have since for , and we have thus , reading has the following effect : 1 . gets mapped to a state that is at most -away ( in norm ) from , 2 . gets mapped to an accepting / rejecting state and most fraction of it stays on the non - halting states . together, these two requirements mean that the state of after reading is at most -away from .also , the probabilities of accepting and rejecting while reading differ from and by at most .let be the probability of rejecting .since reading in leads to a rejection , must be rejected and .the probability consists of two parts : the probability of rejection during and the probability of rejection during .the first part differs from by at most , the second part differs from by at most ( because the state of when starting to read differs from by at most and , by lemma [ bvlemma ] , the accepting probabilities differ by at most twice that ) .therefore , since , this implies . by appropriately choosing , we can make this true for any .therefore , we have which is condition 2 . condition 3 is true by considering .this word must be accepted with probability .therefore , for any , can only reject during with probability and .this shows that no qfa can achieve a probability of correct answer more than the solution of optimization problem 1 .it remains to solve this problem .* solving optimization problem 1 . *the key idea is to show that it is enough to consider 2-dimensional instances of the problem . since , the vectors form a right - angled trianglethis means that , where is the angle between and .let and be the normalized versions of and : , .then , and .consider the two - dimensional subspace spanned by and .since the accepting and the rejecting subspaces and are orthogonal , and are orthogonal .therefore , the vectors and form an orthonormal basis .we write the vectors , and in this basis .the vector is where is the angle between and .the vector is equal to .next , we look at the vector .we fix , and and try to find the which maximizes for the fixed , and . the only place where appears in the optimization problem 1is on the left hand side of condition 1 .therefore , we should find that maximizes .we have two cases : 1 .+ the angle between and is at least ( because the angle between and is and the angle between and is ) .therefore , the projection of to is at most .since is a part of the rejecting subspace , this means that .the maximum is achieved if we put in the plane spanned by and : . + next, we can rewrite condition 3 of the optimization problem as . then, conditions 1 - 3 together mean that to solve the optimization problem , we have to maximize ( [ eq1 ] ) subject to the conditions of the problem . from the expressions for and above, it follows that ( [ eq1 ] ) is equal to first , we maximize .the first term is increasing in , the second is decreasing .therefore , the maximum is achieved when both become equal which happens when .then , both and are .now , we have to maximize we first fix and try to optimize the second term .since ( a standard trigonometric identity ) , it is maximized when and .then , and ( [ eq3 ] ) becomes the first term is increasing in , the second is decreasing .the maximum is achieved when the left hand side of ( [ eq5 ] ) is equal to .therefore , if we denote by , ( [ eq5 ] ) becomes a quadratic equation in : solving this equation gives and .2 . 
.+ we consider . since the minimum of two quantities is at most their average , this is at most since , we have and ( [ eq6 ] ) is at most .this is maximized by .then , we get which is less than which we got in the first case .this proves the first part of the theorem .* construction of a qfa . *this part is proven by taking the solution of optimization problem 1 and using it to construct a qfa for the language in a two - letter alphabet .the state is just the starting state of the minimal automaton , is the state to which it gets after reading , , is the empty word and .let be the solution of ( [ eq5 ] ) .then , , , , and . is the probability of correct answer for our qfa described below .the qfa has 5 states : , and . , .the initial state is .the transition function is to recognize , must accept all words of the form for and reject the empty word and any word that contains the letter . 1 . the empty word .+ the only tranformation applied to the starting state is .therefore , the final superposition is the amplitude of in the final superposition is and the word is rejected with a probability .2 . for .+ first , maps the component to the probability of accepting at this point is .the other component of the superposition , stays unchanged until maps it to the probability of accepting at this point is .the total probability of accepting is by equation ( [ eq6 ] ) , this is equal to .3 . a word containing at least one .+ if is the first letter of the word , the entire superposition is mapped to rejecting states and the word is rejected with probability 1 . otherwise , the first letter is , it maps to .the probability of accepting at this point is . by equation ( [ eq6 ] ) , this is the same as .after that , the remaining component ( ) is not changed by next and mapped to a rejecting state by the first . therefore , the total probability of accepting is also and the correct answer ( rejection ) is given with a probability .we now look at fragments of the minimal automaton that imply that a language can not be recognized with probability more than , for some .we call such fragments non - reversible constructions " .the simplest such construction is the one of theorem [ aftheorem ] . in this section, we present 3 other non - reversible constructions " that imply that a language can be recognized with probability at most , and .this shows that different constructions are non - reversible " to different extent .comparing these 4 `` non - reversible '' constructions helps to understand what makes one of them harder for qfa ( i.e. , recognizable with worse probability of correct answer ) the first construction comes from the language considered in ambainis and freivalds .this language was the first example of a language that can be recognized by a qfa with some probability ( 0.6822 ... ) but not with another ( ) .we find the `` non - reversible '' construction for this language and construct the qfa with the best possible accepting probability .[ tf4 ] let be a language and its minimal automaton . 1 .if contains states , and such that , for some words and , 1. 
if reads in the state , it passes to , 2 .if reads in the state , it passes to , 3 .if reads in the state , it passes to , 4 .if reads in the state , it passes to , 5 .if reads in the state , it passes to + then can not be recognized by a qfa with probability more than .the language ( the minimal automaton of which contains the construction above ) can be recognized by a qfa with probability ._ by a reduction to the following optimization problem .* optimization problem 2 . *find the maximum such that there is a finite - dimensional space , subspaces , such that , vectors , and and probabilities , , , such that 1 . , 2 . , 3 . , 4 .5 . ; 6 . ; 7 . ; 8 . ; 9 . .we use a theorem from .[ t13 ] let be a language and be its minimal automaton .assume that there is a word such that contains states , satisfying : 1 . , 2 .if starts in the state and reads , it passes to , 3 .if starts in the state and reads , it passes to , and 4 . there is a word such that if m starts in and reads y , it passes to , then can not be recognized by any 1-way quantum finite automaton .let be a qfa recognizing .let be state where the minimal automaton goes if it reads in the state . in case when we get the forbidden construction of theorem [ t13 ] . in case when states and are different states of the minimal automaton .therefore , there is a word that is accepted in one of them but not in the other . without loss of generality , we assume that is accepted if is started in but not if is started in .we choose so that the square of the projection of a vector on is equal to the accepting probability of if we run on the starting state and input and the right endmarker .we use lemma [ lemmaaf ] .let be and be for word and let be and be for word .without loss of generality we can assume that is a starting state of .let be the starting superposition for .we can also assume that reading in this state does not decrease the norm of this superposition .we divide into three parts : , and so that and , and . due to is the starting superposition we have (condition 1 ) . since get that (condition 3 ) due to .similarly (condition 4 ) and (condition 2 ) .it is easy to get that (condition 7 ) because reading in the state leads to accepting state .let ( ) be the accepting(rejecting ) probability while reading an infinite sequence of letters in the state .then (condition 5 ) due to and .let ( ) be the accepting(rejecting ) probability while reading an infinite sequence of letters in the state .then (condition 6 ) due to and .we find an integer such that after reading the norm of is at most some fixed . now similarly to theorem [ t1 ] we can get condition 8 : .let , , .we find an integer such that after reading the norm of is at most . since then .therefore , .then due to previous inequalities .now similarly to theorem [ t1 ] we can get condition 9 : . +we have constructed our second optimization problem .we solve the problem by computer . using this solutionwe can easily construct corresponding quantum automaton .= 2.5 in [ tf5 ] let be a language .if there are words such that its minimal automaton contains states satisfying : 1 . if m starts in the state and reads , it passes to , 2 .if m starts in the state and reads , it passes to , 3 . for each the state is not `` all - rejecting '' state , + then can not be recognized by a qfa with probability greater than .2 . there is a language such that its minimal deterministic automaton contains this construction and the language can be recognized by a qfa with probability . 
for ,a related construction was considered in .there is a subtle difference between the two constructions ( the one considered here for and the one in ) .the `` non - reversible construction '' in requires the sets of words accepted from and to be incomparable .this extra requirement makes it much harder : no qfa can recognize a language with the `` non - reversible construction '' of even with the probability . _ proof ._ * impossibility result .* this is the only proof in this paper that does not use a reduction to an optimization problem .instead , we use a variant of the classification of states ( lemma [ akvlemma ] ) directly .we only consider the case when the sets of words accepted from and are not incomparable .( the other case follows from the impossibility result in . )let be the set of words accepted from .this means that for each we have either or . without loss of generalitywe can assume that .now we can choose words such that and .the word exists due to the condition _( c)_. we use a generalization of lemma [ akvlemma ] .[ lemma2 ] let .there are subspaces , such that and 1 . if , then and 2 . if , then for any , there exists a word such that .the proof is similar to lemma [ akvlemma ] .let be a language such that its minimal automaton contains the non reversible construction from theorem [ tf5 ] and be a qfa .let be the accepting probability of .we show that .let be a word such that after reading it is in the state .let , , .we find a word such that after reading the norm of is at most some fixed .( such word exists due to lemma [ lemma2 ] . )we also find words such that , , . because of unitarity of , , on ( part ( i ) of lemma [ lemma2 ] ) , there exist integers such that , .let be the probability of accepting while reading .let be the probabilities of accepting while reading with a starting state and and be the probabilities of accepting while reading with a starting state .let us consider words : + + + + + + + + accepts with probability at least and at most .* proof . *the probability of accepting while reading is .after that , is in the state and reading in this state causes it to accept with probability .the remaining state is .if it was , the probability of accepting while reading the rest of the word ( ) would be exactly .it is not quite but it is close to .namely , we have by lemma [ bvlemma ] , this means that the probability of accepting during is between and .this lemma implies that because of .similarly , because of . finally , we have inequalities : + + + + + + + + + by adding up these inequalities we get .we can notice that .( this is due to the facts that , and . ) hence , . sincesuch words can be constructed for arbitrarily small , this means that does not recognize with probability greater than .* constructing a quantum automaton . * we consider a language in the alphabet such that its minimal automaton has accepting states and rejecting state and the transition function is defined as follows : , , , , , , .it can be checked that this automaton contains the non reversible construction from theorem 4 .hence , this language can not be recognized by a qfa with probability greater than .next , we construct a qfa that accepts this language with such probability .the automaton has states : , , . , .the initial state is the transition function is 1 . the empty word .+ the only tranformation applied to the starting state is .therefore , the final superposition is and the word is accepted with probability 1 . 2 .the word starts with . 
+ reading maps to .therefore , this word is accepted with probability at least .word is in form .the superposition after reading is at this moment accepts with probability and rejects with probability .the computation continues in the superposition clearly , that reading of all remaining letters does not change this superposition . since maps each to an accepting state then rejects this word with probability at most 4 .word starts with . before reading superposition is + _ case 1 ._ . + since then reading maps at least states of to rejecting states .this means that rejects with probability at least + _ case 2 ._ . since then reading maps at least states of to accepting states .this means that accepts with probability at least [ tf2 ] let be a language . 1 . if there are words , , such that its minimal automaton contains states and satisfying : 1 . if m starts in the state and reads , it passes to , 2 .if m starts in the state and reads , it passes to , 3 .if m starts in the state and reads , it passes to an accepting state , 4 . if m starts in the state and reads , it passes to a rejecting state , 5 . if m starts in the state and reads , it passes to a rejecting state , 6 .if m starts in the state and reads , it passes to an accepting state .+ then can not be recognized by a qfa with probability greater than .2 . there is a language with the minimum automaton containing this construction that can be recognized with probability ._ * impossibility result .* the construction of optimization problem is similar to the construction of optimization problem 1 .for this reason , we omit it and just give the optimization problem and show how to solve it .* optimization problem 3 .* find the maximum such that there is a finite dimensional vector space , subspaces , ( unlike in previous optimization problems , and do not have to be orthogonal ) and vectors , such that and and probabilities , such that and 1 . , 2 . , 3 . , 4 . .* solving optimization problem 3 . * without loss of generality we can assume that .then these four inequalities can be replaced with only three inequalities 1 . , 2 . , 3 . . clearly that is maximized by .therefore , we have 1 . , 2 . .next we show that it is enough to consider only instances of small dimension .we denote as .first , we restrict to the subspace generated by projections of and to .this subspace is at most 2-dimensional .similarly , we restrict to the subspace generated by projections of and to .the lengths of all projections are still the same .we fix an orthonormal basis for so that and are both parallel to some basis vectors .then , and where the first two coordinates correspond to basis vectors of and the last two coordinates correspond to basis vectors of .we can assume that and are both non - negative .( otherwise , just invert the direction of one of basis vectors . )let .then , there is ] and .this gives 1 . , 2 . . then after some calculations we get 1 . , 2 . .if we fix and vary , then ( and , hence , ) is maximized by .this means that we can assume and we have 1 . , 2 . .if we consider then .this means that we are only interested in .let and .if we fix and vary , then and are linear functions in and .we consider two cases .+ + _ case 1 ._ .( this gives for each .therefore , in this case we only need to maximize the function . )+ this means that so that , we have this means that ] . )+ this means that is maximized by .therefore , 1 . , 2 . .let be .then ] , then is maximized by .this gives equal to .* construction of a qfa . 
*we consider the two letter alphabet .the language is the union of the empty word and . clearly that the minimal deterministic automaton of contains the non reversible construction from theorem 5 ( just take as , the empty word as and as ) .next , we describe a qfa accepting this language .let be the solution of in the interval $ ] .it can be checked that , , , .the automaton has 4 states : and . , .the initial state is .the transition function is 1 .the empty word .+ the only tranformation applied to the starting state is .therefore , the final superposition is and the word is accepted with probability .2 . .+ after reading the superposition is and word is rejected with probability .3 . .+ after reading the first the superposition becomes at this moment accepts with probability and rejects with probability .the computation continues in the superposition it is easy to see that reading all of remaining letters does not change this superposition .+ therefore , the final superposition ( after reading ) is this means that rejects with probability .4 . .+ before reading the first the superposition is and reading this changes this superposition to this means that accepts with probability finite automata ( qfa ) can recognize all regular languages if arbitrary intermediate measurements are allowed . if they are restricted to be unitary , the computational power drops dramatically , to languages recognizable by permutation automata . in this paper, we studied an intermediate case in which measurements are allowed but restricted to `` accept - reject - continue '' form ( as in ) .quantum automata of this type can recognize several languages not recognizable by the corresponding classical model ( reversible finite automata ) . in all of those cases ,those languages can not be recognized with probability 1 or , but can be recognized with some fixed probability .this is an unusual feature of this model because , in most other computational models a probability of correct answer can be easily amplified to for arbitrary . in this paper, we study maximal probabilities of correct answer achievable for several languages .those probabilities are related to forbidden constructions " in the minimal automaton .forbidden construction " being present in the minimal automaton implies that the language can not be recognized with a probability higher than a certain .the basic construction is one cycle " in figure [ f1 ] .composing it with itself sequentially ( figure [ f4 ] ) or in parallel ( figure [ f5 ] ) gives forbidden constructions " with a smaller probability .the achievable probability also depends on whether the sets of words accepted from the different states of the construction are subsets of one another ( as in figure [ f1 ] ) or incomparable ( as in figure [ f2 ] ) .the constructions with incomparable sets usually imply smaller probabilities .the accepting probabilities quantify the degree of non - reversibility present in the forbidden construction " .lower probability means that the language is more difficult for qfa and thus , the construction " has higher degree of non - reversibility . in our paper, we gave a method for calculating this probability and used it to calculate the probabilities for several constructions " .the method should apply to a wide class of constructions but solving the optimization problems can become difficult if the construction contains more states ( as for language studied in ) . 
in this case, it would be good to have methods for calculating the accepting probabilities approximately .a more general problem suggested by this work is : how do we quantify non - reversibility ? accepting probabilities of qfas provide one way of comparing the degree of non - reversibility in different `` constructions '' .what are the other ways of quantifying it ? and what are the other settings in which similar questions can be studied ?john watrous .space - bounded quantum complexity ._ journal of computer and system sciences _ , 59:281 - 326 , 1999 .( preliminary version in proceedings of complexity98 , under the title `` relationships between quantum and classical space - bounded complexity classes '' . ) | one of the properties of the kondacs - watrous model of quantum finite automata ( qfa ) is that the probability of the correct answer for a qfa can not be amplified arbitrarily . in this paper , we determine the maximum probabilities achieved by qfas for several languages . in particular , we show that any language that is not recognized by an rfa ( reversible finite automaton ) can be recognized by a qfa with probability at most . quantum computation , finite automata , quantum measurement . |
in past decades , there have been tremendous efforts on designing high - order accurate numerical schemes for compressible fluid flows and great success has been achieved .high - order accurate numerical schemes were pioneered by lax and wendroff , and extended into the version of high resolution methods by kolgan , boris , van leer , harten et al , and other higher order versions , such as essentially non - oscillatory ( eno ) , weighted essentially non - oscillatory ( weno ) , discontinuous galerkin ( dg ) methods etc . in the past decades, the evaluation of the performance of numerical scheme was mostly based on the test cases with strong shocks for capturing sharp shock transition , such as the blast wave interaction , the forward step - facing flows , and the double mach reflection .now it is not a problem at all for shock capturing scheme to get stable sharp shock transition .however , with the further development of higher order numerical methods and practical demands ( such as turbulent flow simulations ) , more challenging test problems for capturing multiple wave structure are expected to be used . for testing higher - order schemes , the setting of these cases should be sufficiently simple and easy for coding , and avoid the possible pollution from the boundary condition and curvilinear meshes . to introduce a few tests which can be truthfully used to evaluate the performance of higher - order schemeis the motivation for the current paper .our selected examples include the following : one - dimensional cases , two - dimensional riemann problems , and the conservation law with source terms .for the one - dimensional problems , the first case is a highly oscillatory shock - turbulence interaction problem , which is the extension of shu - osher problem by titarev and toro with much more severe oscillations , and the second one is a large density ratio problem with a very strong rarefaction wave in the solution , which is used to test how a numerical scheme capture strong waves .for the two - dimensional cases , four groups are tested .( i ) hurricane - like solutions , which are highly nontrivial two - dimensional time - dependent solutions with one - point vacuum in the center and rotational velocity field .it is proposed to test the preservation of positivity and symmetry of the numerical scheme .( ii ) the interaction of planar contact discontinuities for different mach numbers .the multidimensional contact discontinuities are the composite of entropy waves and vortex sheets .the simulation of such cases have difficulties due to the strong shear effects .since the large mach number limits for these cases have explicit solutions , they are proposed here in order to check the ability of the current scheme for capturing wave structures of various scales and the asymptotic property .( iii ) interaction of planar rarefaction waves with the transition from continuous fluid flows to the presence of shocks .( iv ) further interaction of planar shocks showing the mach reflection phenomenon .these two - dimensional problems fall into the category of two - dimensional riemann problems proposed in .the two - dimensional riemann problems reveal almost all substantial wave patterns of shock reflections , spiral formations , vortex - shock interactions and so on , through the simple classification of initial data .the rich wave configurations conjectured in have been confirmed numerically by several subsequent works .since the formulation of these problems are extremely simple , there is no 
need of complicated numerical boundary treatment and they are suitable as benchmark tests . the case for the conservation law with source termis also proposed . in order to provide reference solutions for all these test cases .a gas - kinetic scheme will be used to calculate the solutions in this paper .recently , based on the time - dependent flux function of the generalized riemann problem ( grp ) solver , a two - stage fourth - order time - accurate discretization was developed for lax - wendroff type flow solvers , particularly applied for the hyperbolic conservation laws .the reason for the success of a two - stage l - w type time stepping method in achieving a fourth - order temporal accuracy is solely due to the use of both flux function and its temporal derivative . in terms of the gas evolution model , the gas - kinetic scheme provides a temporal accurate flux function as well , even though it depends on time through a much more complicated relaxation process from the kinetic to the hydrodynamic scale physics than the time - dependent flux function of grp . based on this time - stepping method and the second - order gas - kinetic solver ,a fourth - order gas - kinetic scheme was constructed for the euler and navier - stokes equations . in comparison with the formal one - stage time - stepping third - order gas - kinetic solver ,the fourth - order scheme not only reduces the complexity of the flux function , but also improves the accuracy of the scheme , even though the third - order and fourth - order schemes take similar computation cost .the robustness of the fourth - order gas - kinetic scheme is as good as the second - order one .numerical tests show that the fourth - order scheme not only has the expected order of accuracy for the smooth flows , but also has favorable shock capturing property for the discontinuous solutions .this paper is organized as follows . in section 2, we will briefly review the fourth - order gas - kinetic scheme . in section 3 ,we select several groups of problems to show the performance of the scheme .the final conclusion is made in the last section .in this section , we will briefly review our recently developed two - stage fourth - order gas - kinetic scheme .this scheme is developed in the framework of finite volume scheme , and it contains three standard ingredients : spatial data reconstruction , two - stage time stepping discretization , and second - order gas - kinetic flux function .the spatial reconstruction for the gas - kinetic scheme contains two parts , i.e. initial data reconstruction and reconstruction for equilibrium . in this paper , the fifth - order weno method is used for the initial data reconstruction .assume that are the macroscopic flow variables that need to be reconstructed . are the cell averaged values , and are the two values obtained by the reconstruction at two ends of the -th cell . the fifth - order weno reconstruction is given as follows where all quantities involved are taken as and are the nonlinear weights .the most widely used is the weno - js non - linear weights , which can be written as follows where and is the smooth indicator , and the basic idea for its construction can be found in . 
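A component-wise scalar version of the fifth-order WENO-JS reconstruction described above is given below. It is a minimal sketch (no characteristic decomposition, fixed epsilon) meant only to show how the three candidate stencil values, the smoothness indicators and the nonlinear weights combine into the interface value.

```python
import numpy as np

def weno5_js(qm2, qm1, q0, qp1, qp2, eps=1e-6):
    """Fifth-order WENO-JS value at the right interface x_{i+1/2} of cell i,
    built from the five cell averages q_{i-2}, ..., q_{i+2}."""
    # Third-order candidate values on the three sub-stencils.
    p0 = ( 2.0*qm2 - 7.0*qm1 + 11.0*q0 ) / 6.0
    p1 = (    -qm1 + 5.0*q0  +  2.0*qp1) / 6.0
    p2 = ( 2.0*q0  + 5.0*qp1 -      qp2) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13.0/12.0*(qm2 - 2.0*qm1 + q0 )**2 + 0.25*(qm2 - 4.0*qm1 + 3.0*q0)**2
    b1 = 13.0/12.0*(qm1 - 2.0*q0  + qp1)**2 + 0.25*(qm1 - qp1)**2
    b2 = 13.0/12.0*(q0  - 2.0*qp1 + qp2)**2 + 0.25*(3.0*q0 - 4.0*qp1 + qp2)**2
    # Nonlinear weights built from the linear weights (1/10, 6/10, 3/10).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

# Sanity check: for q(x) = x^2 every candidate stencil is exact, so the result
# must equal the point value at the interface x = 0.6 regardless of the weights.
x = np.linspace(0.0, 1.0, 6)                    # interfaces of five cells, dx = 0.2
avgs = (x[1:]**3 - x[:-1]**3) / (3.0 * 0.2)     # exact cell averages of x^2
print(weno5_js(*avgs), x[3]**2)                 # both ~ 0.36
```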
in order to achieve a better performance of the weno scheme near smooth extrema, weno - z and weno - z+ reconstruction were developed .the only difference is the nonlinear weights .the nonlinear weights for the weno - z method is written as ,\end{aligned}\ ] ] and the nonlinear weights for the weno - z+ method is ,\end{aligned}\ ] ] where is the same local smoothness indicator as in , is used for the fifth - order reconstruction , and is a parameter for fine - tuning the size of the weight of less smooth stencils . in the numerical tests , without special statement ,weno - js method will be used for initial data reconstruction .after the initial date reconstruction , the reconstruction of equilibrium part is presented .for the cell interface , the reconstructed variables at both sides of the cell interface are denoted as . according to the compatibility condition , which will be given later, the macroscopic variables at the cell interface is obtained and denoted as .the conservative variables around the cell interface can be expanded as with the following conditions , the derivatives are given by /\delta x.\end{aligned}\ ] ] the two - stage fourth - order temporal discretization was developed for lax - wendroff flow solvers , and was originally applied for the generalized riemann problem solver ( grp ) for hyperbolic equations . in ,multi - stage multi - derivative time stepping methods were proposed and developed as well under different framework of flux evaluation .consider the following time - dependent equation with the initial condition at , i.e. , where is an operator for spatial derivative of flux .the time derivatives are obtained using the cauchy - kovalevskaya method , introducing an intermediate state at , the corresponding time derivatives are obtained as well for the intermediate stage state , then , a fourth - order temporal accuracy solution of at can be provided by the following equation the details of this analysis can be found in .thus , a fourth - order temporal accuracy can be achieved by the two - stage discretization eq . andeq .. consider the following conservation laws the semi - discrete form of a finite volume scheme can be written as where are the cell averaged conservative variables , are the fluxes at the cell interface , and is the cell size . with the temporal derivatives of the flux ,the two - stage fourth - order scheme can be developed .similarly , for the conservation laws with source terms the corresponding operator can be denoted as the two - stage fourth - order temporal discretization can be directly extended for conservation laws with source terms .the two - dimensional bgk equation can be written as where is the gas distribution function , is the corresponding equilibrium state , and is the collision time .the collision term satisfies the compatibility condition where , , is number of internal freedom , i.e. for two - dimensional flows , and is the specific heat ratio . 
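Two small sketches may help here. The first restates the two-stage fourth-order update above on a scalar ODE u' = L(u), for which the time derivative of the operator follows from the chain rule (the Cauchy-Kovalevskaya step); the coefficients written out below are the commonly cited form of this two-stage method and should be checked against the original reference. The second shows one way to recover the flux and its time derivative, which this update needs, when the interface flux is treated as linear in time over the step (as in the gas-kinetic flux evaluation discussed below): integrate it over the half and the full time step and solve the resulting two-by-two system. Both are schematic illustrations, not extracts of the authors' solver.

```python
def s2o4_step(u, dt, L, Lt):
    """One step of the two-stage fourth-order discretization:
         u*      = u^n + (dt/2) L(u^n) + (dt^2/8)  Lt(u^n)
         u^{n+1} = u^n +  dt    L(u^n) + (dt^2/6) [Lt(u^n) + 2 Lt(u*)]
    where Lt denotes the time derivative of L along the solution."""
    f0, ft0 = L(u), Lt(u)
    u_star = u + 0.5*dt*f0 + 0.125*dt*dt*ft0
    return u + dt*f0 + dt*dt/6.0*(ft0 + 2.0*Lt(u_star))

# Model problem u' = -u^2, u(0) = 1, exact solution 1/(1+t); Lt = -2u*u' = 2u^3.
L, Lt = (lambda u: -u**2), (lambda u: 2.0*u**3)
for n_steps in (10, 20, 40):
    dt, u = 1.0/n_steps, 1.0
    for _ in range(n_steps):
        u = s2o4_step(u, dt, L, Lt)
    print(n_steps, abs(u - 0.5))   # error should drop by roughly 16x per halving of dt
```

```python
def flux_and_time_derivative(time_integrated_flux, dt):
    """Recover F(t_n) and dF/dt(t_n) of a numerical flux assumed linear in time,
    F(t_n + t) ~ F_n + t*dF_n, from its integrals over [0, dt/2] and [0, dt]:
        I_half = F_n*dt/2 + dF_n*dt^2/8,    I_full = F_n*dt + dF_n*dt^2/2.
    time_integrated_flux(s) must return the flux integrated over [0, s]; in a
    gas-kinetic scheme this integral comes from moments of the time-dependent
    gas distribution function at the cell interface."""
    I_half = time_integrated_flux(0.5 * dt)
    I_full = time_integrated_flux(dt)
    F_n  = (4.0 * I_half - I_full) / dt
    dF_n = 4.0 * (I_full - 2.0 * I_half) / dt**2
    return F_n, dF_n

# Consistency check with a flux that really is linear in time: F(t) = 2 + 3t.
print(flux_and_time_derivative(lambda s: 2.0*s + 1.5*s**2, dt=0.1))   # ~ (2.0, 3.0)
```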
to update the flow variables in the finite volume framework , the integral solution of bgk equation eq .is used to construct the gas distribution function at a cell interface , which can be written as where is the location of the cell interface , and are the trajectory of particles , is the initial gas distribution function , and is the corresponding equilibrium state .the time dependent integral solution at the cell interface can be expressed as (u)\nonumber\\ + & e^{-t/\tau}g_l[1-(\tau+t)(a_{1l}u+a_{2l}v)-\tau a_l)](1-h(u)).\end{aligned}\ ] ] based on the spatial reconstruction of macroscopic flow variables , which is presented before , the conservative variables and on the left and right hand sides of a cell interface , and the corresponding equilibrium states and , can be determined .the conservative variables and the equilibrium state at the cell interface can be determined according to the compatibility condition eq . as follows the coefficients related to the spatial derivatives and time derivative and in gas distribution function eq . can be determined according to the spatial derivatives and compatibility condition .more details of the gas - kinetic scheme can be found in . as mentioned in the section before , in order to utilize the two - stage temporal discretization , the temporal derivatives of the flux function need to be determined .while in order to obtain the temporal derivatives at and with the correct physics , the flux function should be approximated as a linear function of time within the time interval .let s first introduce the following notation , in the time interval ] .the reflective boundary conditions are imposed for the left and right boundaries ; at the top boundary , the flow values are set as , and at the bottom boundary , they are .the source terms on the right side of the governing equations are . to achieve the temporal accuracy , is given by the cell averaged value and is given by the governing equation , respectively .the uniform meshes with and are used in the computation .the density distributions at and are presented in fig.[rayleigh - taylor1 ] and fig.[rayleigh - taylor2 ] . with the mesh refinement , the flow structures for the complicated flows are observed .it hints that current scheme may be suitable for the flow with interface instabilities as well .in this paper , we select several one - dimensional and two - dimensional challenging problems which can be used to check the performance of higher order numerical schemes .the scheme for providing reference solutions is our recently developed two - stage four order accurate gks scheme .these cases can be used to test the performance of other higher - order schemes as well , which have been intensively constructed in recent years .the test problems are * one - dimensional problems 1 .titarev - toro s highly oscillatory shock - entropy wave interaction ; 2 .large density ratio problem with a very strong rarefaction wave ; * two - dimensional riemann problems 1 .hurrican - like solutions with one - point vacuum and rotational velocity field ; 2 .interaction of planar contact discontinuities , with the involvement of entropy wave and vortex sheets ; 3 .interaction of planar rarefaction waves with the transition from continuous flows to the presence of shocks ; 4 .interaction of planar ( oblique ) shocks . * conservation law with source terms 1 . 
rayleigh - taylor instability the construction of accurate and robust higher - order numerical schemes for the euler equations is related to many numerical and physical modeling issues . based on the exact riemann solution or other simplified approximate riemann solvers ,the traditional higher - order approaches mostly concentrate on the different frameworks , such as dg , finite different , and finite volume , with all kind of underlying data reconstructions or limiters .the higher - order dynamics under the flux function has no attracted much attention .maybe this is one of the reasons for the stagnation on the development of higher - order schemes in recent years . along the framework of multiple derivatives for the improvement of time accuracy of the scheme, it has a higher requirement on the accuracy of the flux modeling than the traditional higher - order methods , because the scheme depends not only on the flux , but also on the time derivative of the flux function .most test cases in this paper are for the time accurate solutions , which require the close coupling of the space and time evolution .these test cases can fully differentiate the performance of different kinds of higher - order schemes .their tests and the analysis of the numerical solutions can guide the further development of higher - order schemes on a physically and mathematically consistent way .the work of j. li is supported by nsfc ( 11371063 , 91130021 ) , the doctoral program from the education ministry of china ( 20130003110004 ) and the science challenge program in china academy of engineering physics .the research of k. xu is supported by hong kong research grant council ( 620813 , 16211014 , 16207715 ) and hkust research fund ( provost13sc01 , irs15sc29 , sbi14sc11 ) .j. glimm , x. ji , j. li , x. li , p. zhang , t. zhang and y. zheng , transonic shock formation in a rarefaction riemann problem for the 2d compressible euler equations , siam journal on applied mathematics , 69 ( 2008 ) 720 - 742 .v. p. kolgan , application of the principle of minimum values of the derivative to the construction of finite - difference schemes for calculating discontinuous solutions of gas dynamics , scientific notes of tsagi , 3 ( 1972 ) 68 - 77 .kreiss , j. lorenz , initial - boundary value problems and the navier - stokes equations .classics in applied mathematics , 47 .society for industrial and applied mathematics ( siam ) , philadelphia , ( 2004 ) .w. e , y. g. rykov , y. g. sinai , generalized variational principles , global weak solutions and behavior with random initial data for systems of conservation laws arising in adhesion particle dynamics .phys . 177 ( 1996 ) 349 - 380 . | there have been great efforts on the development of higher - order numerical schemes for compressible euler equations . the traditional tests mostly targeting on the strong shock interactions alone may not be adequate to test the performance of higher - order schemes . this study will introduce a few test cases with a wide range of wave structures for testing higher - order schemes . as reference solutions , all test cases will be calculated by our recently developed two - stage fourth - order gas - kinetic scheme ( gks ) . all examples are selected so that the numerical settings are very simple and any high order accurate scheme can be straightly used for these test cases , and compare their performance with the gks solutions . 
the examples include highly oscillatory solutions and the large density ratio problem in the one - dimensional case ; hurricane - like solutions , interactions of planar contact discontinuities ( the composite of entropy waves and vortex sheets ) with large mach number asymptotics , interactions of planar rarefaction waves with the transition from continuous flows to the presence of shocks , and other types of interactions of two - dimensional planar waves . the numerical results from the fourth - order gas - kinetic scheme are provided as reference solutions only . these benchmark test cases will help cfd developers to validate and further develop their schemes to a higher level of accuracy and robustness . keywords : euler equations , two - dimensional riemann problems , fourth - order gas - kinetic scheme , wave interactions . |
evolving random graphs have recently attracted attention , see e.g. refs and references therein .this interest is mainly motivated by concrete problems related to the structure of communication or biological networks .experimental data are now available in many contexts . in these examples ,the asymmetry and the evolving nature of the networks are likely to be important ingredients for deciphering their statistical properties .it is however far from obvious to find solvable cases that would possibly account for some relevant features of , say , the regulating network of a genome .although biology has strongly influenced our interest in evolving networks , the model we solve is not based on realistic biological facts but it nevertheless incorporates asymmetry and chronological order .understanding such simple evolving graphs may help understanding biological networks , at least by comparison and opposition .we were initially motivated by the study of the yeast genetic regulatory network presented in ref. .the authors studied in and out degree distributions and discovered a strong asymmetry : a single gene may participate to the regulation of many other genes the law for out - degrees seems to be large , but each genes is only regulated by a few other genes the law for in - degrees seems to have finite moments .this is why we consider oriented evolving random graphs in the sequel .a biological interpretation for the asymmetry is that the few promoter - repressor sites for each gene bind only to specific proteins , but that along the genome many promoter - repressor sites are homologous .however , this does not predict the precise laws .an understanding of the same features from a purely probabilistic viewpoint would be desirable as well .the recent experimental studies dealt with global statistical properties of evolving graphs , i.e. when the evolving network is observed at some fixed time with the ages of different vertices and edges not taken into account .there are simple experimental reasons for that : to keep track of the ages would in many cases dramatically reduce the statistics , and in other cases this information is even not available .our second motivation is a better understanding of the local - in - time statistical properties of evolving networks .this helps dating or assigning likely ages to different structures of the networks .as we shall later see , the global analysis , which is like a time average , gives a distorted view of the real structure of the networks .we shall present a detailed analysis of local - in - time features in our model .the model we study is the natural evolving cousin of the famous erds - renyi random graphs .starting from a single vertex at time , a new vertex is created at each time step so that at time , the size of the system , i.e. 
the number of vertices , is , and new oriented edges are created with specified probabilistic rules .a tunable parameter ranging from to describes asymptotically the average number of incoming edges on a vertex .precise definitions are given in the next section .our main results are the following : from very simple rules , we see an asymmetry emerging .the global in and out degree distributions are different .we also compute the local profiles of in and out degree distributions , and comment on the differences .we make a detailed global analysis for the structure and sizes of the connected components .we use generating function methods to write down a differential equation that implies recursion relations for the distribution of component sizes , see eqs.([cdiff],[crecur ] ) .a salient global feature of the model is a percolation phase transition at a critical value of the average connectivity . below this value, no single component contains a finite fraction of the sites in the thermodynamic limit , i.e. in the large limit .however , a slightly unusual situation occurs in that below the transition the system contains components whose sizes scale like a power of the total size of the graph , see eq.([eq : grosclu ] ) .correspondingly , the probability distribution for component sizes has an algebraic queue , see eq.([asympk ] ) , and its number of finite moments jumps at specific values of the average connectivity . above the transition , this probability distribution becomes defective , but its decrease is exponential , see eq.([pklarge ] ) .the transition is continuous .close to the threshold , the fraction of sites in the giant component the percolation cluster has an essential singularity , see eq.([eq : pof ] ) .we argue that this result is universal , with the meaning used in the study of critical phenomena .the essential singularity at the percolation threshold had already been observed numerically by in a different model which we show to be in the same universality class as ours for the percolation transition , and computed analytically for another class of models in .we then turn to the study of local - in - time profiles of connected components .guided by a direct enumeration based on tree combinatorics , we show that they satisfy recursion relations , and we give the first few profiles ( isolated vertices , pairs , triples ) explicitly .the profile of the giant component is given by a differential equation , from which we extract the singularity in the far past and the critical singularity in the present see eqs([eq : rho_0],[eq : rho_1 ] ) .in particular the giant component invades all the time slices of the graph above the transition .one strange feature of profiles , which would deserve a good explanation , is that in several instances the formal parameter involved in generating functions for global quantities is simply traded for the relative age to obtain interesting local - in - time observables , see eqs.([eq : young],[ddiff ] ) .we have compared our analytical results with numerical simulation whenever possible . while polishing this paper , we became aware of , whose goals overlap partly with ours . 
when they can be compared , the results agreewe construct evolving random graphs with the following rules : + ( i ) we consider a triangular array of independent random variables , , where takes value with probability ] for which .we shall often take the viewpoint that the ( biased ) coin tossings defining are done at time .we shall assume that at large time , with a parameter which we shall identify as half the average connectivity .this choice ensures the convergence of various distributions to stationary measures , most of them being independent of the precise values of the early probabilities . by constructionall edges arriving at a given vertex are simultaneously created at the instant of creation of this vertex . as a consequence ,these graphs are not only oriented but chronologically oriented this is unrealistic from the biological viewpoint .in this section we give the incoming and outgoing edge distributions .+ let be the number of incoming edges at the vertex , and be the number of outgoing edges at this vertex at time .+ let be the number of vertices with incoming edges , and be the number of vertices with outgoing edges at time .we may look either at the edge distributions at a given vertex , or we may look at the edge distributions defined by gathering averaged histograms over whole graphs .the former are specified by their generating functions , where denotes expectation value .it may depend on the specified vertex labeled by .the latter is defined by the generating functions , we remark that this global histogram distribution is the average of the local - in - time quantity . since at time the total number of vertices is , is properly normalized , , to define an averaged probability distribution function , independent of the vertices , for the incoming or outgoing edge variables : _ incoming vertices . _the number of incoming edges at vertex asymptotically possesses a poisson distribution since \simeq_{j\to\infty}\exp({\alpha}(z-1))\ ] ] the convergence of this distribution justifies our choice of asymptotic probabilities .only the vertices whose ages scale with the age of the graph , i.e. with fixed , , give non trivial contributions at large time to the averaged histogram ( [ histo ] ) and this expression may also be retrieved by looking at the evolution equation of . indeed , consider adding the new vertex at time . since the edges are oriented from older to younger vertices , .] we have from the second definition in eq.([histo ] ) .this is equivalent to as at large time , the stationary limit is given by eq.([v- ] ) .this yields a poissonian distribution with probabilities _ outgoing vertices ._ at a given vertex , with fixed , the number of outgoing edges at vertex also have a poisson distribution at large time , \simeq_{t\to\infty } \exp(-{\alpha}\log\sigma(z-1))\ ] ] but with a parameter depending on the age of the vertex .approximating at large time the sum over in eq.([histo ] ) by an integral over gives the histogram distribution : as for incoming vertices , this formula follows from the evolution equation for . indeed , since the numbers of outgoing edges from vertex at time and differ by we have . from definition ( [ histo ] ) this gives where the first term is the contribution of the newly added vertex at time .the stationary limit is given by eq.([v+ ] ) .this is a geometric distribution , slightly larger than the poisson distribution , with probabilities _ mixed distribution_. let be the number of vertices with outgoing and incoming edges at time . 
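Before working out the mixed distribution just introduced, the two marginal laws derived above are easy to check numerically. The short script below is our own illustration (the variable names and the cap min(1, alpha/t) on the earliest attachment probabilities are assumptions, justified by the remark above that the early probabilities do not affect the stationary distributions): it grows the chronologically oriented graph by tossing one biased coin per older vertex at each step, then compares the empirical in-degree histogram with a Poisson law of parameter alpha and the out-degree histogram with the geometric law of mean alpha, written here as alpha^k/(1+alpha)^(k+1) (our normalization of the stationary distribution quoted in the text).

```python
import numpy as np
from math import exp, factorial

def grow_graph(T, alpha, rng):
    """Grow the chronologically oriented graph: when vertex t is added,
    each of the t existing vertices connects to it independently with
    probability ~ alpha/t (asymptotic rule; capped at 1 for the first steps)."""
    in_deg = np.zeros(T, dtype=int)
    out_deg = np.zeros(T, dtype=int)
    for t in range(1, T):
        links = rng.random(t) < min(1.0, alpha / t)   # one biased coin per older vertex
        in_deg[t] = links.sum()                       # edges point from older to newer vertices
        out_deg[:t][links] += 1
    return in_deg, out_deg

rng = np.random.default_rng(0)
alpha, T = 0.6, 20_000
in_deg, out_deg = grow_graph(T, alpha, rng)

for k in range(5):
    poisson = exp(-alpha) * alpha**k / factorial(k)      # predicted in-degree law
    geometric = alpha**k / (1.0 + alpha)**(k + 1)        # predicted out-degree law (mean alpha)
    print(f"k={k}   in: {np.mean(in_deg == k):.4f} vs {poisson:.4f}"
          f"   out: {np.mean(out_deg == k):.4f} vs {geometric:.4f}")
```

The heavier tail of the out-degree law relative to the Poisson in-degree law should already be visible for graphs of a few tens of thousands of vertices.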
as in eq.([histo ] ) , the generating function for the mixed histogram distribution is defined by by construction the outgoing and incoming edges variables are statistically independent for fixed , so that the last expectation values factorize . as above we may derive an evolution equation by evaluating the contribution of the newly added vertex at time yields : its stationary limit is factorized : outgoing and incoming edges are statistically independent at large time .in this section , we present the main relations governing the probability distributions of connected components of the graphs .two vertices belong to the same connected component if they can be joined by a path made of edges , without any reference to orientation .this definition ensures that the property of being in the same connected component is an equivalence relation , but does not ensure that two points in a connected component can be joined by an oriented path . to partly avoid repetitions , the term _ cluster _ is used as a synonymous for _ connected component _ in the sequel .intuitively , the fact that the network is fragmented can be understood as follows : when a vertex is created , it has a finite probability to be isolated , and the probability that none of the vertices connects to vertex is which scales as .this quantity remains finite as long as does .this argument shows that there are isolated vertices in the system .a small extension of the argument shows that there are also finite components and that young vertices are more likely to be in small components than old ones .this will be made more rigourous in the study of profiles , see section [ sec : chropro ] .let be the number of connected components with vertices at time and let be the generating function , by definition , is the number of components and the total number of vertices , at any finite time .let us write an evolution equation for . at time , we add the vertex with label which may then be connected to connected components of size .this creates a new component of size , but also removes components of size .thus , at time we have : with the kronecker symbol .alternatively , as is apparent from this formulation , the transition probability from a given to a given can be given in closed form . to be precise ,the admissible s ( describing the accessible distributions of components at time ) are polynomials with integral non - negative coefficients , whose derivative at have value .now suppose and are admissible . if the difference can not be written as for some set of nonnegative integers , the transition is forbidden .if it can , then the s are uniquely defined and the transition probability is the meaning of this equation is simple . 
at time , the new vertex is added , and for each of the former points a ( biased ) coin is tossed to decide the value of the edge variables .the tossings are independent with the same law , so the probability that the new point does not attach to a given component of size is , and distinct components are independent .hence for each one makes independent bernoulli trials with failure probability , and the transition from requires exactly successes .this shows that the graph evolution is a ( time inhomogeneous ) markov process on the space of components distributions , a fact that we shall use for the purpose of numerical simulations .this explicit representation of the transition probability could be used to average equation ( [ ntbis ] ) .alternatively , one can represent the number of components of size which are connected to the new vertex in terms of the edge variables as =1}^{n_k(t)}[1 - \prod_{j\in[k]}(1-\ell_{j , t+1})]\ ] ] where ] .the total derivative of this functional with respect to is it vanishes if is a solution of eq.([eq : univ ] ) , so we have indeed solved in closed form equation ( [ feq ] ) for small . as this is the domain we are interested in , it is tempting to argue that and should exhibit the same singular behaviour .this turns out to be true , but there are some subtleties because the limit is singular : the size of the domain for which is a uniform approximation to shrinks to .our strategy is to use the invariant ( [ eq : int ] ) for the approximate equation ( [ eq : univ ] ) to derive exact inequalities for .let us observe that the functional ( [ eq : int ] ) is singular at points where where it has a jump of amplitude .with this in mind , we define the functional in this definition , is a priori independent of , the value for which is considered .the functional is a smooth function of on -\infty,{\tau}_c[ ] to ] for every such that .indeed , we know that for ,+\infty[ ] in which we have an absolutely convergent sum of analytic functions analytic of .hence the sum is analytic in as claimed . to resum more explicitly the contribution of all trees of a given size , we need the generating function for labeled trees with given incoming degrees .suppose more generally that we give a weight for each edge leaving vertex ( i.e. connecting to a ) and a weight for each edge entering vertex ( i.e. connecting to a ) .the generating function for weighted trees on vertices factorizes nicely as this generalization of the famous caley tree formula labeled trees on vertices . ]implies it immediately .it seems to be little known , although it is implicit in the mathematical literature .gilles schaeffer provided us with a clean proof using a refined version of one of the standard proofs of the caley tree formula , putting trees on vertices in one to one correspondence with applications from ] fixing and , see e.g. .this formula can be specialized to and for to give if one integrates only over a subset of the s , one gets marginal distributions .for instance , for , we get that in the thermodynamic limit the fraction of sites with age close to that are isolated is . 
for , if we integrate over , we get that the fraction of sites with age close to that are the older vertex of a tree on two vertices is while if we integrate over , we get that the fraction of sites with age close to that are the younger vertex of a tree on two vertices is .the sum , , gives the probability that a site with age close to belongs to a tree on vertices .our explicit representation in terms of trees shows that in this model , and at least for questions concerning connected components , the thermodynamic limit applies not only to the full system , but also to slices of fixed relative age . in the next section we shall study -dependent profiles . for small s, we have done all the integrals and checked the agreement with the value of obtained by the recursion relation . but a general proof valid for all s is lacking . to illustrate the evolving nature of our model , we now determine the local - in - time distribution of the cluster sizes .this means determining for any given age interval what is the proportion of vertices of these ages which belong to clusters of given size .define to be the probability that vertex belongs to a component of size at time .guided by the previous tree representation , we infer that in the thermodynamical limit }\ , p_k(t , t')\simeq t\rho_k(\sigma)d\sigma$ ] , with a deterministic function . by construction , , the total fraction of points that belong to components of size .the reasoning leading to ( [ cdiff ] ) can be generalized : one writes down a recursion relation for and then takes the average , a step justified by the explicit tree representation .the event that vertex belongs to a component of size at time is the exclusive union of several events .\i ) vertex belonged to a component of size at time and this component is not linked to the new vertex .this has probability .\ii ) vertex belonged to a component of size at time , this component is linked to the new vertex , and together with the other components linked to ( say components of size ) , it builds a component of size .this has probability to perform the explicit sum over and the s , we introduce again generating functions and set .this leads to in the large limit this complicated formula simplifies if we use again the hypothesis of self - averaging and asymptotic independence . defining leads to together with the sum rule relating to , this fixes completely the profiles .a relation between and is obtained by integrating ( [ profit ] ) for between and . using the defining equation for , eq.([cdiff ] ) , this leads to .thus , we can summarize our knowledge on component time profiles with the four relations : expansion in powers of leads to recursion relations for the s .they can be shown to be alternating polynomials of degree in with vanishing constant coefficient .the first few polynomials are again , we have checked that for small s the values of as computed from iterated tree integrals and from the generating function coincide , but we have no general proof . 
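The finite-size profiles discussed in this section, as well as the global component structure of the preceding one, can be probed with the same kind of simulation. The sketch below (again our own construction, with the same asymptotic attachment probability alpha/t) only keeps track of the undirected connected components through a union-find structure; it prints the fraction of vertices in the largest component for a few values of alpha and, for the last value, the fraction of isolated vertices in a few relative-age bins, illustrating that young vertices are more likely to be isolated than old ones. The particular values of alpha and the system size are arbitrary; because of the essential singularity at the threshold discussed earlier, the giant component is expected to be only a very small fraction of the graph just above the transition.

```python
import numpy as np

def find(parent, i):
    # path-halving union-find lookup
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def component_roots(T, alpha, rng):
    """Grow the graph but only track undirected connected components."""
    parent = np.arange(T)
    size = np.ones(T, dtype=int)
    for t in range(1, T):
        links = np.flatnonzero(rng.random(t) < min(1.0, alpha / t))
        for j in links:                     # the new vertex merges every component it touches
            rt, rj = find(parent, t), find(parent, int(j))
            if rt != rj:
                if size[rt] < size[rj]:
                    rt, rj = rj, rt
                parent[rj] = rt
                size[rt] += size[rj]
    return np.array([find(parent, i) for i in range(T)])

rng = np.random.default_rng(1)
T = 20_000
for alpha in (0.15, 0.25, 0.40, 0.60):
    roots = component_roots(T, alpha, rng)
    frac = np.bincount(roots).max() / T
    print(f"alpha={alpha:4.2f}   fraction of vertices in the largest component = {frac:.3f}")

# age profile of isolated vertices, for the last value of alpha in the loop above
sigma = (np.arange(T) + 1) / T                         # relative age of each vertex
comp_size = np.bincount(roots, minlength=T)[roots]     # size of the component of each vertex
for lo in (0.0, 0.45, 0.90):
    band = (sigma >= lo) & (sigma < lo + 0.1)
    print(f"sigma in [{lo:.2f},{lo + 0.1:.2f}):  isolated fraction = {np.mean(comp_size[band] == 1):.3f}")
```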
as an application , let us look at the profile of vertices of relative age close to which are the youngest in components of size .these are vertices that created , when they appeared , a component of size this happens with probability which was then left untouched for the rest of the evolution this happens with probability .so the distribution of vertices that are the youngest in their connected ( finite ) component is in this expression , the formal parameter has been replaced by .the giant component , when it exists , also has a well - defined profile to which we turn now .we now determine the profile of the giant component .so let be the fraction of vertices whose ages are between and which belong to the giant component . by definition .we know from the tree representation that for finite clusters the thermodynamic limit applies not only on the full graph , but also on time slices . the giant component is the complement of finite components , so we expect that the density is self - averaging as is the size of the percolating cluster . to derive an equation fixing this density we look for the probability for a site of age not to be in the percolating cluster . on the one hand , by definition of the densitythis probability is ) = 1-\rho_{\infty}(\sigma)\ ] ] on the other hand , this probability may be evaluated by demanding the vertex not to be connected to the older and younger vertices of the percolating cluster : )= \prod_{k < j , k\in[k_\infty]}(1-\frac{{\alpha}}{j } ) \prod_{k > j , k\in[k_\infty]}(1-\frac{{\alpha}}{k})\ ] ] at large time , the first above product converges to and the second to .so we get the relation : as is positive , is an increasing function of .so is decreasing and has a right limit at .unless ( i.e. ) , the integral has a logarithmic divergence , and henceforth .more precisely , this means that the early vertices belong to the giant component with probability . on the other hand , by definition so taking in ( [ profil ] ) we get that this means that the late vertices always belong to the giant component with a non vanishing probability although this probability is exponentially small close to the threshold .hence the giant component invades all time slices above the threshold , and the term percolation transition is appropriate even with this unusual interpretation .our results are illustrated on fig.[fig : profinfin ] .random graphs of size . from top to bottom , the values of are , and .__,scaledwidth=70.0% ] to conclude this discussion , let us observe that can be expressed in terms of the profiles of finite components that we studied before : with defined in eq.([defrhoz ] ) . we end up with yet another equation constraining , if we expand in powers of as and define , we can check that in particular , by differentiation this implies that the resemblance with equations ( [ cdiff ] ) and ( [ crecur ] ) for the function is rather striking , but we have not been able to use this more deeply .one difference is that ( [ cdiff ] ) determines from scratch , whereas ( [ ddiff ] ) does it in a two step process .if is the solution of this equation satisfying , then , and then has to satisfy another difference is that the sequence alternates in sign .again , the formal parameter receives a simple physical interpretation where labels the relative date of birth of vertices .in this study , we have described detailed global and local - in - time features of evolving random graphs with uniform attachment rules . 
concerning global properties , we have shown that the model has a percolation phase transition at .below the transition , the system contains clusters whose sizes scale like . above the transition , a single component , the giant component , grows steadily with time .we have shown that , close to the threshold , the fraction of sites in the giant component has an essential singularity and behaves as .the behaviors below and above the transitions are strongly reminiscent of the two dimensional model , so that our model can be interpreted as some algorithmic equivalent of it , but it seems unlikely that a direct connexion exists .this analogy and further scaling properties we present call for an alternative renormalization group approach to the transition . by describing local - in - time profiles ,we have shown that they offer a more accurate vision of the specificities of evolving graphs .it would be desirable to generalize this approach by answering the following question : assume that a procedure to assign ages to the vertices of some evolving graphs has been given , what informations on the microscopic evolution rules of these graphs can be decoded from the knowledge of local - in - time statistics ?aknowledgements : we thank gilles schaeffer for his clarifying help in tree generating functions and sergei dorogovtsev for his comments and interest .we show how our arguments of section [ sec : clusters ] can be modified to describe the erds - renyi random graph model . in this model ,one starts with points , and any two points are connected by an edge with probability ( so in this model all the points are equivalent ) . then a limit is taken .this famous model describes a static graph , but it can also be rephrased as an evolving graph in the following way : set and suppose that points are added one by one , from to , each new point connecting to any previous one with probability . from this point of view , it can be seen that looking only at the first vertices ( ) amount to look at an erds - renyi random graph with a modified connectivity parameter . to get a recursion relation , we start from an erds - renyi random graph of size with connectivity parameter .we add vertex and connect any older vertex to it with probability , so that the effective connectivity parameter for the graph on vertices is .then the derivation proceeds as before , with the little proviso that in eq.([nmoyen ] ) , on the left - hand side has an effective connectivity parameter instead of .so when we take the thermodynamic limit , nothing changes on the right - hand side of ( [ nmoyen ] ) , but an additional term contributes to the left - hand side . 
if denotes the analog of but for the erds - renyi random graph , then this little modification in the equation has drastic consequences .the new equation has a single solution regular at , namely which is the well - known result .we conclude that our self - averaging hypothesis is valid for the erds - renyi model .this makes it more plausible that it works for our original model as well , a fact also confirmed by numerical simulations .our goal is to prove the following formula and to investigate a few of its consequences .we claim that ^{n_m(t ) } \rangle } \label{mast}\end{aligned}\ ] ] where the contour integral is around the origin .it is a fokker - planck equation for the markov process formed by the s , which could be used to prove systematically that the variables are self - averaging , a task we perform for and at the end of this appendix .we start from eq.([nt ] ) which may be rewritten as to compute the last term we insert the tautological identity in the r.h.s . to get to compute the r.h.s .expectation value we use a contour integral representation of the kronecker symbol this yields the r.h.s .can now be computed using eq.([ndist ] ) and gives eq.([mast ] ) .this computation has a rather simple combinatorial reinterpretation .we set and observe that going from time to , we add vertex and edges from the rest of the graph to .suppose that the component of vertex has size say .this component was build by `` eating '' some components of the graph at time .a component of size is swallowed with probability and survives with probability .on the other hand , if one expands in powers of , the term of degree enumerates all the possibilities to build the component of vertex with the correct probability .defining , summation over gives as obtained previously .the fokker - planck eq.([mast ] ) may also be formulated as a difference equation , similar to a discrete schrodinger equation .let us for instance specify it for with , , a set of complex numbers and let ^{n_k(t ) } \rangle } \nonumber\end{aligned}\ ] ] this parametrization is similar to that used in matrix theory where may be thought of as the trace of the power of matrix whose eigenvalues are the s .the contour integral in eq.([mast ] ) can then be explicitly evaluated by deforming the integration contour to pick the simple pole contributions located at the points .this gives : eq.([mast ] ) or ( [ diffshro ] ) may be used to prove that the numbers are self - averaging .let us choose for example two parameters and .then , only the clusters of size give a non trivial contribution to at large time so that at large time , the difference equation ( [ diffshro ] ) then reduces to a differential equation for , it implies that is linear in at large time which means that is self - averaging and stationary at large time .more precisely , integrating the above equation gives : in agreement with eq.([lesc ] ) .similarly , to prove that possesses a finite self - averaging limit as we choose three parameters , so that only clusters of size two survive in at large time , eq.([diffshro ] ) then gives a first order differential equation for which implies that in agreement with eq.([lesc ] ) .although we do not have a global argument this proof may clearly be extended to recursively prove self - averageness of any by choosing the parameters with .here we present the proof of eq.([gscal ] ) . let us first recall lagrange formula . 
consider a variable defined by the implicit relation for some given analytic function .the solution of this equation is supposed to be unique so that is function of . given another analytic function we look for the taylor series in of .this composed function may be presented as a contour integral : expanding the integrated rational function in taylor series in gives lagrange formula : let .eq.([eqgc ] ) translates into or for and .we now apply lagrange formula with and .this gives with and , this proves eq.([gscal ] ) . | we introduce a new oriented evolving graph model inspired by biological networks . a node is added at each time step and is connected to the rest of the graph by random oriented edges emerging from older nodes . this leads to a statistical asymmetry between incoming and outgoing edges . we show that the model exhibits a percolation transition and discuss its universality . below the threshold , the distribution of component sizes decreases algebraically with a continuously varying exponent depending on the average connectivity . we prove that the transition is of infinite order by deriving the exact asymptotic formula for the size of the giant component close to the threshold . we also present a thorough analysis of aging properties . we compute local - in - time profiles for the components of finite size and for the giant component , showing in particular that the giant component is always dense among the oldest nodes but invades only an exponentially small fraction of the young nodes close to the threshold . michel bauer and denis bernard service de physique thorique de saclay ce saclay , 91191 gif sur yvette , france |
in the last two decades , the phenomenon of chaos synchronization has been extensively studied in coupled nonlinear dynamical systems from both theoretical and application perspectives due to its significance in diverse natural and man - made systems . in particular ,various types of synchronization , namely complete synchronization ( cs ) , phase synchronization ( ps ) , intermittent lag / anticipatory synchronizations and generalized synchronization ( gs ) have been identified in coupled systems .all these types of synchronization have been investigated mainly in identical systems and in systems with parameter mismatch .very occasionally it has been studied in distinctly nonidentical ( structurally different ) systems .but in reality , structurally different systems are predominant in nature and very often the phenomenon of synchronization ( gs ) is responsible for their evolutionary mechanism and proper functioning of such distinctly nonidentical systems .as typical examples , we may cite the cooperative functions of brain , heart , liver , lungs , limbs , etc ., in living systems and coherent coordination of different parts of machines , between cardiovascular and respiratory systems , different populations of species , in epidemics , in visual and motor systems , in climatology , in paced maternal breathing on fetal etc. it has also been shown that gs is more likely to occur in spatially extended systems and complex networks ( even in networks with identical nodes , due to the large heterogeneity in their nodal dynamics ) .in addition , gs has been experimentally observed in laser systems , liquid crystal spatial light modulators , microwave electronic systems and has applications in secure communication devices . therefore understanding the evolutionary mechanisms of many natural systemsnecessitates the understanding of the intricacies involved in the underlying generalized synchronization ( gs ) phenomenon .the phenomenon of gs has been well studied and understood in unidirectionally coupled systems , but still it remains largely unexplored in mutually coupled systems .only a limited number of studies are available on gs in mutually coupled systems even with parameter mismatches and rarely in structurally different dynamical systems with different fractal dimensions .recent investigations have revealed that gs emerges even in symmetrically ( mutually ) coupled network motifs in networks of identical systems , and that it also plays a vital role in achieving coherent behavior of the entire network . as almost all natural networksare heterogeneous in nature , the notion of gs has been shown to play a vital role in their evolutionary mechanisms . 
thus to unravel the role of gs in such large networks , it is crucial to understand the emergence of gs in heterogeneous network motifs composed of distinctly different nonidentical systems .it is to be noted that the notion of ps has been widely investigated in mutually coupled essentially different ( low - dimensional ) chaotic systems , while the notion of gs in such systems has been largely ignored .it is an accepted fact that ps is weaker than gs ( because ps does not add restrictions on the amplitude , and only the locking of the phases is crucial ) .parlitz et al .have shown that in general gs always leads to ps , if one can suitably define a phase variable , and that gs is stronger ( which has been studied in unidirectionally coupled chaotic systems ) .that is , ps may occur in cases where the coupled systems show no gs .further , the transition from ps to gs as a function of the coupling strength has been demonstrated in coupled time - delay systems with parameter mismatch which confirms that ps is weaker than gs . on the other hand , zhang and hu have demonstrated that gs is not necessarily stronger than ps , and in some cases ps comes after gs with increasing coupling strength depending upon the degree of parameter mismatch .they have concluded that ps ( gs ) emerges first for low ( high ) degree of parameter mismatch and that they both occur simultaneously for a critical range of mismatch in low - dimensional systems . in general , the notion of gs and its relation with ps in mutually coupled systems , particularly in distinctly nonidentical systems with different fractal dimensions including time - delay systems , need much deeper understanding . in line with the above discussion , we have reported briefly the existence of gs in symmetrically coupled networks of distinctly nonidentical time - delay systems using the auxiliary system approach in a letter . in this paper, we will provide an extended version of the letter with important additional results and explanations .in particular , in this paper we will demonstrate the occurrence of a transition from partial to global gs in mutually coupled networks of structurally different time - delay systems ( for and ) with different fractal ( kaplan - yorke ) dimensions and in systems with different orders using the auxiliary system approach and the mutual false nearest neighbor ( mfnn ) method .we use the mackey - glass ( mg ) , a piecewise linear ( pwl ) , a threshold piecewise linear ( tpwl ) and the ikeda time - delay systems to construct heterogeneous network motifs . the main reason to consider time - delay systems in this studyis that even with a single time - delay system , one has the flexibility of choosing systems with different fractal dimensions just by adjusting their intrinsic delay alone , which is a quite attracting feature of time - delay systems from modelling point of view .further , time - delay systems are ubiquitous in several real situations , including problems in ecology , epidemics , physiology , physics , economics , engineering and control systems , which inevitably require delay for a complete description of the system ( note that intrinsic delay is different from connection delays which arise between different units due to finite signal propagation time ) . 
in our present work ,we report that there exists a common gs manifold even in networks of distinctly different time - delay systems .in other words , there exists a functional relationship even for systems with different fractal dimensions , which maps them to a common gs manifold .further , we also wish to emphasize that our results are not confined to just scalar one - dimensional time - delay systems alone but we confirm that there exists a similar type of synchronization transitions even in the case of time - delay systems of different orders .particularly , we demonstrate that synchronization phenomenon occurs in a system of a ikeda time - delay system ( first order time - delay system ) mutually coupled with a hopfield neural network ( a second order time - delay system ) , and in a system of a mg time - delay system ( first order time - delay system ) mutually coupled with a plankton model ( a third order system with multiple delays ) to establish the generic nature of our results .stability of gs manifold in unidirectionally coupled systems is usually determined by examining the conditional lyapunov exponents of the synchronization manifold or the lyapunov exponents of the coupled system itself .here , we will estimate the maximal transverse lyapunov exponent ( mtle ) to determine the asymptotic stability of the cs manifold of each of the systems with their corresponding auxiliary systems starting from different initial conditions , which in turn asserts the stability of gs between the original distinctly nonidentical time - delay systems .further , we will also estimate the cross correlation ( cc ) and the correlation of probability of recurrence ( cpr ) to establish the relation between gs and ps ( here gs and ps always occur simultaneously in structurally different time - delay systems ) .cc essentially gives a much better statistical average of the synchronization error , which is being widely studied to characterize cs .further , cpr is a recurrence quantification tool , which effectively characterizes the existence of ps especially in highly non - phase - coherent hyperchaotic attractors usually exhibited by time - delay systems .it is also to be noted that the auxiliary system approach has some practical limitations .this method fails for systems whose dynamical equations are not known .particularly , cs between response and auxiliary systems arises only when their initial conditions are set to be in the same basin of attraction . due to the above limitations of the auxiliary system approach, we have also calculated the mutual false nearest neighbor ( mfnn ) which doubly confirms our results .in addition , an analytical stability condition using the krasovskii - lyapunov theory is also deduced in suitable cases .the remaining paper is organized as follows : in sec .[ sec2 ] , we will describe briefly the notion of structurally different time - delay systems with different fractal dimensions with examples . in secs .[ sec3 ] and [ sec4 ] , we will demonstrate the existence of a transition from partial to global gs in mutually coupled time - delay systems using the auxiliary system approach and the mfnn method , respectively .an analytical stability condition is also deduced using the krasovskii - lyapunov theory in sec .the transition from partial to global gs is demonstrated in mutually coupled time - delay systems with an array and a ring coupling configuration in sec .[ sec5 ] . 
in sec .[ sec6 ] , we will consider mutually coupled time - delay systems with array , ring , global and star configurations and discuss the occurrence of partial and global gs . the above synchronization transition in time - delay systems with different ordersis demonstrated in sec .[ sec7 ] and finally we summarize our results in sec .in this section , we consider structurally different first order scalar time - delay systems with different fractal dimensions . here, structurally different time - delay systems refer to systems exhibiting chaotic / hyperchaotic attractors with different phase space geometry characterized by different degree of complexity . despite the similarity in the structure of the evolution equations ,the nature of chaotic attractors , the number of positive les and their magnitudes characterizing the rate of divergence and the degree of complexity as measured by the kaplan - yorke dimension ( ) of the underlying dynamics are different even for the same value of time - delay because of the difference in the nonlinear functional form . as an illustration ,first let us consider a symmetrically coupled arbitrary network of distinctly nonidentical scalar time - delay systems .then the dynamics of the node in the network is represented as where , and is the number of nodes in the network , s and s are system s parameters , s the time - delays , the nonlinear function of the node is defined as , is the coupling strength and a laplacian matrix which determines the topology of the arbitrary network . for the mg time - delay system ,we choose the nonlinear function defined as and for the pwl system with the nonlinear function given as for the tpwl system we choose the form of the nonlinear function as given by with and for the ikeda time - delay system the nonlinear function is given by hyperchaotic attractors of ( a ) mackey - glass,(b ) piecewise linear ( pwl ) , ( c ) piecewise linear with threshold nonlinearity ( tpwl ) and ( d ) ikeda time - delay systems for the choice of parameters given in table [ table1 ] . 
]the parameter values for the above time - delay systems are chosen throughout the paper as follows : + 1 ) for the mg systems : we choose , , and ; + 2 ) for the pwl systems : we choose , , , and ; + 3 ) the parameter values for the tpwl are fixed as , , , , and and + 4 ) for the ikeda time - delay system we choose , , .the hyperchaotic attractors of the uncoupled mg , pwl , tpwl and ikeda time - delay systems are depicted in figs .[ fig1](a)-(d ) , respectively .the first few largest les of all the above four ( uncoupled ) time - delay systems are shown as a function of the delay time in figs .[ fig2](a)-(d ) .it is also clear from this figure that the number of positive les , and hence the complexity and dimension of the state space , generally increase with the delay time .further , the degree of complexity , measured by their number of positive les of the dynamics ( attractors ) exhibited by all the four systems are distinctly different even for the same value of time - delay .in fact , we have taken different values of delay for each of the systems , as pointed out by the arrows in fig .[ fig2 ] , to demonstrate the existence of suitable smooth transformation that maps the strongly distinct individual systems to a common gs manifold .[ table1 ] we wish to emphasize especially the structural difference , measured by their degree of complexity , between the hyperchaotic attractors of different scalar first order time - delay systems ( fig .[ fig1 ] ) which we have studied in this paper and are detailed in table [ table1 ] : 1 ) the mg system has two positive les with for , 2 ) the pwl time - delay system has three positive les with for , 3 ) the tpwl time - delay system has four positive les with for and 4 ) the ikeda system has five positive les with for .the above facts clearly indicate that the real state space dimension explored by the flow of a time - delay system , which is essentially infinite - dimensional in nature , and the associated degree of complexity are characterized by the form of nonlinearity and the value of delay time irrespective of the similarity of the underlying evolution equations of the scalar first order time - delay systems .first few largest lyapunov exponents of ( a ) mackey - glass,(b ) piecewise linear ( pwl ) , ( c ) piecewise linear with threshold nonlinearity ( tpwl ) and ( d ) ikeda time - delay systems , as a function of the time - delay .arrows point the value of the delay time we have considered in our analysis . ]it has already been shown that the functional relationship between two different systems in gs is generally difficult to identify analytically .however gs in such systems can be characterized numerically using various approaches , namely the mutual false nearest neighbor method , the statistical modeling approach , the phase tube approach , the auxiliary system approach , etc . among all these methodsthe auxiliary system approach is extensively used to detect the presence of gs in unidirectionally coupled systems ( both in numerical and experimental studies due to its simple and powerful implementation ) .abarbanel et al . 
first introduced this approach to characterize and confirm gs in dynamical systems ( when the system equations are known ) .the mathematical formulation of this concept was put forward by kocarev and parlitz for a drive - response configuration in low - dimensional systems .they formulated it as the asymptotic convergence of the response and its auxiliary systems which are identically coupled to the drive system , starting from two different initial conditions from the same basin of attraction .the asymptotic convergence indeed ensures the existence of an attracting synchronization manifold ( cs manifold between the response and auxiliary systems and gs manifold between the drive and response systems ) . in other words ,gs between the drive and the response systems occur only when the response system is asymptotically stable , that is in the basin of the synchronization manifold one requires .now , we will provide an extension of this approach to a network for mutually coupled systems . for simplicity, we consider two mutually coupled nonidentical distinctly different time - delay systems represented by [ eqn2a ] where , ( ) and . correspond to the driving signals .system ( [ eqn2a ] ) is in gs if there exists a transformation such that the trajectories of the systems ( [ eqn2a]a ) and ( [ eqn2a]b ) are mapped onto a subspace ( synchronization manifold ) of the whole state space .that is we also note here that since we are dealing with gs of nonidentical systems with different fractal dimensions , the transformation function refers to a generalized transformation ( not identity transformation ) and also there may exist a set of transformations that maps a given and to different subspaces of .this indicates that the synchronization manifold is such that ] , one requires further , it is worth to emphasize that in ref . , it is reported that a subharmonic entrainment takes place when there exists a relation between the interacting systems , which usually takes place for periodic synchronization with periods , .but in this paper , the synchronization dynamics in all the cases we have considered exhibits chaotic / hyperchaotic oscillations and hence the transformation function refers to the existence of a function ( not a relation ) in our case . therefore the trajectories of eq .( [ eqn2a ] ) starting from the basin of attraction asymptotically reach the synchronization manifold defined by the transformation function , which can be smooth if the systems ( [ eqn2a ] ) uniformly converge to gs manifold ( otherwise nonsmooth ) . the uniform convergence ( smooth transformation )is confirmed by negative values of their local lyapunov exponents of the synchronization manifold .schematic diagrams of the auxiliary system approach for networks of mutually coupled systems .( a , b , d ) linear arrays with , ( c , e ) ring configurations with , and ( f , g ) global and star coupling configurations with . ]now , we will demonstrate the existence of gs in symmetrically coupled arbitrary networks of distinctly nonidentical time - delay systems with different fractal dimensions using the auxiliary system approach .we consider a symmetrically coupled arbitrary network as given in eq .( [ eqn1 ] ) . 
to determine the asymptotic stability of each of the nodes in this network, one can define a network ( auxiliary ) identical to eq .( [ eqn1 ] ) ( starting from different initial conditions in the same basin of attraction ) , whose node dynamics is represented as the parameter values are the same as in eq.([eqn1 ] ) discussed in sec .[ sec2 ] . in the following sections, we will investigate the existence of transition from partial gs to global gs in networks of and systems with a linear array , ring , global and star coupling configurations . to start with , in order to demonstrate the existence of gs between two ( ) mutually coupled nonidentical distinctly different time - delay systems , we consider the network given in fig .[ fig1a](a ) . here 1 and 2 are mutually coupled distinctly nonidentical time - delay systems and 1 and 2 are the associated auxiliary systems .[ fig1a ] also depicts schematically different configurations for , which are discussed later on .the state equations for coupled systems can be represented as now , let us couple the auxiliary system 1 to 2 and 2 to 1 unidirectionally as given in fig .[ fig1a](a ) , such that the systems 1 and 1 are driven by the same signal from system 2 , and systems 2 and 2 are similarly driven by system 1 .the corresponding dynamical equation for the auxiliary systems can be given as we choose the mg time - delay systems ( system 1 and 1 ) with the nonlinear function given in eq .( [ eqn1a ] ) and the pwl systems ( system 2 and 2 ) with the nonlinear function given in eq .( [ eqn1b ] ) .the parameters of both systems are fixed as given in sec .[ sec2 ] ( table [ table1 ] ) and for those parameter values both systems exhibit hyperchaotic attractors [ figs .[ fig1](a ) and [ fig1](b ) ] with two [ fig . [ fig2](a ) ] and three [ fig .[ fig2](b ) ] positive les , respectively . generally , in mutual coupling configuration the systems affect each other and attain a common synchronization manifold simultaneously above a threshold value of the coupling strength .but interestingly in distinctly different coupled time - delay systems with different fractal dimensions , one of the systems first reaches the gs manifold for a lower value of , while the other system remains in a desynchronized state , which we call as a partial gs state . for a further increase of systems organize themselves and enter into a common gs manifold , thereby achieving a global gs . in other words ,when system 1 and 1 are identically synchronized , system 1 is synchronized to a subspace ( synchronization manifold ) of the whole state space of both systems in a generalized sense , which we call as a partial gs .similarly , when the systems 2 and 2 are synchronized identically , then system 2 is synchronized to the common synchronization manifold .this corroborates that both systems 1 and 2 share a common gs manifold in the phase space .thus , when both auxiliary systems are completely synchronized with their original systems for an appropriate coupling strength , then there exists a function that maps systems 1 and 2 to the common ( global ) gs manifold .( a ) mtles and ( b ) cc , cpr of the main and auxiliary systems for two mutually coupled mg - pwl systems as a function of for ( [ 2coup ] ) . 
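The auxiliary-system construction just described is easy to sketch numerically, even though the explicit nonlinear functions, parameter values and coupling terms of the equations above are not legible in this excerpt. The code below is therefore only an illustration under stated assumptions: it uses the standard Mackey-Glass nonlinearity u/(1+u^10) and an Ikeda-type sin(u) nonlinearity as stand-ins for systems 1 and 2, a linear diffusive mutual coupling (consistent with the Laplacian form of eq. (1)), illustrative parameter values, and a plain Euler step with a stored history (the paper integrates the delay equations with a fourth-order Runge-Kutta method). The auxiliary of system 1 receives exactly the same drive from system 2 as system 1 does, and vice versa, so the time-averaged differences |x1 - x1'| and |x2 - x2'| printed at the end play the role of the synchronization errors used to detect partial and global GS.

```python
import numpy as np

# Stand-in nonlinearities: the exact forms and parameters of the systems used
# in the paper are not legible here, so standard textbook choices are used.
def f_mg(u):                 # Mackey-Glass-type delayed feedback
    return u / (1.0 + u**10)

def f_ik(u):                 # Ikeda-type delayed feedback
    return np.sin(u)

def run(eps, a1=0.1, b1=0.2, tau1=20.0, a2=1.0, b2=4.0, tau2=2.0,
        h=0.02, t_end=1000.0, seed=3):
    """Two mutually coupled scalar delay systems plus their auxiliary copies.

    Columns of X: [x1, x2, x1_aux, x2_aux].  x1_aux is driven by x2 exactly as
    x1 is, and x2_aux is driven by x1 exactly as x2 is (auxiliary system
    approach).  A plain Euler step with a stored history is used for brevity."""
    rng = np.random.default_rng(seed)
    n, d1, d2 = int(t_end / h), int(tau1 / h), int(tau2 / h)
    hist = max(d1, d2) + 1
    X = np.empty((hist + n, 4))
    X[:hist] = 0.5 + 0.5 * rng.random(4)               # constant, mutually different initial histories
    for i in range(hist, hist + n):
        x = X[i - 1]
        g1, g2 = f_mg(X[i - 1 - d1, 0]), f_ik(X[i - 1 - d2, 1])     # delayed terms, main systems
        q1, q2 = f_mg(X[i - 1 - d1, 2]), f_ik(X[i - 1 - d2, 3])     # delayed terms, auxiliaries
        X[i, 0] = x[0] + h * (-a1 * x[0] + b1 * g1 + eps * (x[1] - x[0]))
        X[i, 1] = x[1] + h * (-a2 * x[1] + b2 * g2 + eps * (x[0] - x[1]))
        X[i, 2] = x[2] + h * (-a1 * x[2] + b1 * q1 + eps * (x[1] - x[2]))   # same drive x2 as system 1
        X[i, 3] = x[3] + h * (-a2 * x[3] + b2 * q2 + eps * (x[0] - x[3]))   # same drive x1 as system 2
    tail = X[hist + n // 2:]                            # discard transients
    return np.mean(np.abs(tail[:, 0] - tail[:, 2])), np.mean(np.abs(tail[:, 1] - tail[:, 3]))

for eps in (0.0, 0.2, 0.5, 1.0):
    e1, e2 = run(eps)
    print(f"eps={eps:4.2f}   <|x1 - x1'|> = {e1:.3e}   <|x2 - x2'|> = {e2:.3e}")
```

Because the auxiliaries start from initial histories different from those of their main systems, the collapse of one of these errors signals an attracting synchronization manifold for that pair; depending on the parameters, one error may collapse at a weaker coupling than the other, which is the partial-GS scenario described above. A faithful reproduction of fig. [fig5] would replace the Euler step by RK4 and the stand-in nonlinearities and parameters by those of table [table1].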
] in order to characterize the transition from partial to global gs and to evaluate the stability of the cs of each of the main and auxiliary systems , we have calculated the mtles of the main and auxiliary systems which in turn ensure the stability of gs manifold between the original systems .we have also estimated the correlation coefficient ( cc ) of each of the main and the associated auxiliary systems , given by where the brackets indicate temporal average . if the two systems are in cs state , the correlation coefficient , otherwise .further , the existence of ps ( between the main and auxiliary systems ) can be characterized by the value of the index cpr .if the phases of the coupled systems are mutually locked , then the probability of recurrence is maximal at a time and cpr .in contrast , one can expect a drift in the probability of recurrence resulting in low values of cpr characterizing the degree of locking between the coupled systems .the coupled equations ( [ eqn1 ] ) and ( [ eqn2 ] ) are integrated using the runge - kutta fourth order method .the mtles are the largest lyapunov exponents of the evolution equation of .the lyapunov exponents are calculated using j. d. farmer s approach . in fig .[ fig5 ] , we have plotted the various characterizing quantities based on our numerical analysis .the red ( light gray ) continuous line in fig .[ fig5](a ) shows the mtle ( ) of the mg systems and the blue ( dark gray ) dotted line depicts the mtle ( ) of the pwl systems as a function of the coupling strength .figure [ fig5](b ) shows the cc and cpr of the main and auxiliary mg time - delay systems as red ( light gray ) filled and open circles , respectively , and the cc and cpr of the pwl systems are represented by the blue ( dark gray ) filled and open triangles .initially , for , both and are nearly zero , indicating the desynchronized state when both and which confirm that cs ( gs ) is unstable .if we increase the coupling strength , and start to increase towards unity and at , ( ) , where , which confirm the simultaneous existence of gs and ps in the pwl system , while the mg system continues to remain in a desynchronized state ( and ) ( partial gs ) .further , if we increase the coupling strength to a threshold value , a global gs occurs where both and become unity and and become negative .the transition of the mtle of the auxiliary and its original systems from positive to negative values as a function of the coupling strength strongly confirms the existence of an attracting manifold . 
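For completeness, the two indices used throughout this section can be computed directly from scalar time series. CC is the standard correlation coefficient with temporal averages, as written above; for CPR we follow the recurrence-based construction the authors cite: a generalized autocorrelation P(tau) counts the fraction of points of a trajectory that recur within a tolerance epsilon of themselves after a lag tau, and CPR is the normalized cross-correlation of the two mean-removed P curves. The sketch below is ours; in particular, the choice of epsilon as a fixed fraction of each signal's standard deviation and the maximum lag are assumptions, not values taken from the paper.

```python
import numpy as np

def cc(x, y):
    """Cross-correlation coefficient with temporal averages, as defined above."""
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

def generalized_autocorrelation(x, eps, max_lag):
    """P(tau): fraction of points of the trajectory that recur within eps
    of themselves after a lag tau (recurrence-based quantity behind CPR)."""
    n = len(x)
    return np.array([np.mean(np.abs(x[: n - tau] - x[tau:]) < eps)
                     for tau in range(1, max_lag + 1)])

def cpr(x, y, eps_frac=0.1, max_lag=2000):
    """Correlation of probability of recurrence between two scalar series.
    The recurrence threshold (a fraction eps_frac of each signal's standard
    deviation) and max_lag are assumptions of this sketch."""
    p1 = generalized_autocorrelation(x, eps_frac * x.std(), max_lag)
    p2 = generalized_autocorrelation(y, eps_frac * y.std(), max_lag)
    p1, p2 = p1 - p1.mean(), p2 - p2.mean()
    return np.mean(p1 * p2) / (p1.std() * p2.std())
```

Applied to a main variable and its auxiliary partner (for instance the trajectories generated by the previous sketch), CC close to 1 indicates CS of the pair and hence GS of the corresponding system with the common manifold, while CPR close to 1 indicates locking of the recurrence phases, i.e. PS.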
to be more clear , for the value of the coupling strengths in range of global gs ,a negative value of the mtle assures the convergence of the perturbed trajectories in the synchronization manifold ( cs between the main and auxiliary systems and gs manifold between the main systems ) .the convergence corroborates the attracting nature of the synchronization manifold .further , normally one may expect that the systems with lower dynamical complexity will converge to gs manifold first , followed by the system with higher dynamical complexity .but in our studies we encounter a contrary behavior , where the pwl system with three positive les reaches the gs manifold first ( at ) and then the mg system ( with two positive les ) converges to the gs manifold at confirming the existence of a transition from partial to global gs in distinctly nonidentical time - delay systems .( a , b ) the magnitude of difference in the trajectories between the systems ( ) for , and ( c , d ) for for mutually coupled mg and pwl systems . ]( a - c ) the phase portraits of the systems ( , ) , ( , ) and ( , ) for , and ( d - f ) for for . ] we have also numerically computed the synchronization error ( ) and phase projection plots , which are depicted in figs .[ fig4 ] and [ fig3 ] , respectively . in the absence ofthe coupling all the systems evolve with their own dynamics . if we slowly increase the coupling strength the main and the auxiliary pwl systems become completely synchronized for .the synchronization error and the linear relation between the systems in fig .[ fig4](b ) and in fig .[ fig3](b ) ( plotted for ) , respectively , confirm that the systems and are in a cs state , whereas the mg systems remain desynchronized as confirmed by the phase projection [ fig .[ fig3](a ) ] for the same value of the coupling strength .we also note here that for this value of coupling strength the systems and show certain degree of correlation as depicted in fig .[ fig3](c ) .if we increase further , the systems ( ) and ( ) reach the cs manifold ( for ) and one may expect that both and attain the common gs manifold , which we call as global gs . both synchronization errors and become zero as shown in figs .[ fig4](c ) and [ fig4](d ) for .this fact confirms the existence of a global gs state .further , figs .[ fig3](d ) and [ fig3](e ) show a linear relation between the systems ( ) and ( ) , respectively , which additionally confirms the existence of global gs .the degree of correlation in the phase space between the systems and for the global gs state is depicted in fig .[ fig3](f ) for the same value of coupling strength .phase diagram in the ( ) plane for two mutually coupled mg - pwl systems showing partial ( blue / dark gray ) , global ( light gray ) gs and desynchronization ( white ) regimes . 
] to obtain a global picture on the occurrence of transition from partial to global gs between the mg and pwl systems , we have plotted the values of as a 2-parameter diagram in the ( ) plane .we have fixed the parameter values of the mg systems ( as given in sec .[ sec2 ] ) and vary one of the parameter ( ) of the pwl system as a function of the coupling strength ( ) as depicted in fig .the white region indicates the desynchronized state and the blue ( dark gray ) region corresponds to the partial gs region , where only one of the mutually coupled systems has reached the common gs manifold as indicated by the unit value of the .the global gs is represented by light gray where both coupled systems are in gs manifold as confirmed by the unit value of of both systems .we have also analytically investigated the existence of the transition from partial to global gs using the krasovskii - lyapunov functional theory .in this connection , we consider the difference between the state variables ( synchronization error ) of the main and the associated auxiliary systems ( ) . for small values of , the evolution equation for the cs manifold of the mg systems ( ) can be written as \delta_{1\tau_{1 } } \label{eqn6}\ ] ] where and .the cs manifold is locally attracting if the origin of eq .( [ eqn6 ] ) is stable . from the krasovskii - lyapunov theory ,a continuous positive definite lyapunov functional can be defined as where is an arbitrary positive parameter .the lyapunov function approaches zero as .the derivative of the above equation ( [ eqn6a ] ) can be written as \delta^{2}_{1}+\beta_{1 } [ f^{\prime}_{1}(x_{\tau_{1}})]\delta_{1 } \delta_{1\tau_{1}}+\mu\delta^{2}_{1}-\mu\delta_{1\tau_{1}}^{2 } , \label{eqn6b}\ ] ] which should be negative for stability of the cs manifold , that is .this requirement results in a condition for stability as from eq .( [ eqn6c ] ) , one can see that the stability condition depends on the nonlinear function of the individual systems .hence from the form of the nonlinear function of the mg system [ eq .( [ eqn1a ] ) ] , we have consequently , the stability condition becomes , it is not possible to find the exact value of the nonlinear function as it depends on the value of the variable .however , it is possible to find the value of by identifying the maximal value of using the lyapunov - razumikin function ( which is a special class of the krasovskii - lyapunov theory ) . from the hyperchaotic attractor of the mg system , for the chosen parameter values, one can identify the maximum value of .so the stability condition becomes ( a sufficient condition for the global gs ) . from fig .[ fig5 ] , the threshold value of the coupling strength to attain gs in the mg systems is for which the stability condition is satisfied .now , one can find the evolution equation for the cs manifold corresponding to the pwl time - delay system ( ) as \delta_{2\tau_{2}} ] is the state vector , the activation function = \left(f_1\left [ x_1(t)\right],f_2\left [ x_2(t)\right ] , \cdots , f_n\left [ x_n(t)\right]\right)^t$ ] denotes the manner in which the neurons respond to each other . is a positive diagonal matrix , is the feedback matrix , represents the delayed feedback matrix with a constant delay . 
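to make the general class of delayed neural networks above concrete , the following sketch integrates the delayed feedback equation with a tanh activation ( the choice used for the hopfield case considered next ) by a simple euler scheme ; the matrices , the delay and the step size are illustrative placeholders , and the integrator is deliberately simpler than the fourth order runge - kutta method used in the paper .

```python
import numpy as np

def simulate_delayed_network(A, B, C, tau, dt=0.01, steps=20000, x0=None):
    """Euler integration of  dx/dt = -C x + A tanh(x) + B tanh(x(t - tau)),
    where C is a positive diagonal matrix, A the instantaneous feedback
    matrix and B the delayed feedback matrix; a constant history fills
    the delay buffer before t = 0."""
    n = A.shape[0]
    lag = int(round(tau / dt))
    x = np.full(n, 0.1) if x0 is None else np.asarray(x0, dtype=float)
    history = [x.copy() for _ in range(lag + 1)]   # x(t - tau), ..., x(t)
    trajectory = []
    for _ in range(steps):
        x_tau = history[0]
        dx = -C @ x + A @ np.tanh(x) + B @ np.tanh(x_tau)
        x = x + dt * dx
        history.pop(0)
        history.append(x.copy())
        trajectory.append(x.copy())
    return np.array(trajectory)

# illustrative 2-neuron network; parameter values are placeholders,
# not the ones used in the paper
C = np.diag([1.0, 1.0])
A = np.array([[2.0, -0.1], [-5.0, 3.0]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
traj = simulate_delayed_network(A, B, C, tau=1.0)
```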
the general class of delayed neural networks represented by the above eq .( [ eqn9 ] ) unifies several well known neural networks such as the hopfield neural networks and cellular neural networks with delay .the specific set of delayed neural networks ( eq .( [ eqn9 ] ) ) which corresponds to the hopfield neural networks is for the choice of the activation function =\tanh\left [ x(t)\right],\ ] ] and for the value of the matrices next , we will illustrate the existence of global gs via partial gs in a system of mutually coupled ikeda time - delay systems and the hopfield neural network .the cc and cpr between the main and auxiliary systems are depicted in fig .[ fig15 ] .low values of cc and cpr in the absence of coupling indicates the asynchronous behavior of both the systems . upon increasing the coupling strength from zero , the second order system , that is the hopfield neural network , reaches the common synchronization manifold first ( partial gs ) at the threshold value of as indicated by the unit value of the cross correlation coefficient , while the ikeda system remains in its transition state .the simultaneous existence of phase synchronization ( ps ) together with gs is also confirmed by the unit value of at the same .further increase in the coupling strength results in the synchronization ( both gs and ps ) of the ikeda system to the common synchronization manifold for as evidenced from the unit value of and ( fig .[ fig15 ] ) confirming the existence of global gs via partial gs in coupled systems of different orders .cc and cpr of the main and auxiliary systems of a coupled ikeda time - delay system and hopfield neural network . ] cc and cpr of the main and auxiliary systems of a coupled mackey - glass time - delay system and the plankton model ( [ plankton ] ) . ] next , we illustrate the transition from partial to global gs in mutually coupled mg time - delay system , and a third order plankton model with multiple delays .the normalized system of equations of a zoo - plankton model is represented as -xy - l_1xz , \\ \dot{y}=&\,xy - b_2y - l_2yz , \\\dot{z}=&\,-b_1z+l_1x(t-\tau_1)z(t-\tau_3)+l_2y(t-\tau_2)z(t-\tau_3)-n(x+y)z , \label{plankton}\end{aligned}\ ] ] where and are constants .delays and are in general different may also be different , but for simplicity we have considered identical delays , , as studied in ref . here and are the normalized quantities of the density of the susceptible phytoplankton , infected phytoplankton and zooplankton ( predator species ) , respectively .low values of the cc and cpr for between the main and auxiliary systems as shown in fig .[ fig16 ] confirm that the systems evolve independently in the absence of coupling between them .as the coupling strength is increased the plankton model synchronizes first to the common synchronization manifold at as denoted by indicating partial gs ( fig . 
[ fig16 ] ) . ps has also occurred simultaneously at the same threshold value of as indicated by the unit value of . a further increase in leads to global gs , with the mg time - delay system synchronizing to the common synchronization manifold as both and attain unity at . thus , we prove that the transition from partial to global gs is not restricted to first order structurally similar systems alone but is also valid for systems with different orders . in conclusion , we have pointed out the existence of a synchronization transition from partial to global gs in distinctly nonidentical time - delay systems with different fractal dimensions in symmetrically coupled ( regular ) networks with a linear array , and also in ring , global and star configurations , using the auxiliary system approach and the mutual false nearest neighbor method . we have shown that there exists a smooth transformation function , even for networks of structurally different time - delay systems with different fractal dimensions , which maps them to a common gs manifold . we have also found that gs and ps occur simultaneously in structurally different time - delay systems . we have calculated mtles to evaluate the asymptotic stability of the cs manifold of each of the main and the corresponding auxiliary systems ; this , in turn , ensures the stability of the gs manifold between the main systems . in addition , we have estimated the cc and the cpr to characterize the relation between gs and ps . further , to prove the genericity of our results , we have also demonstrated the synchronization transition in systems with different orders , such as the coupled mg and hopfield neural network model and a system of coupled ikeda and plankton models . the analytical stability condition for partial and global gs is deduced using the krasovskii - lyapunov theory . we note that we are now working on the experimental realization of partial and global gs in distinctly nonidentical time - delay systems using nonlinear time - delayed electronic circuits . d. v. senthilkumar acknowledges the support from the serb - dst fast track scheme for young scientists . m. lakshmanan ( m. l. ) has been supported by the dst , government of india sponsored irhpa research project . m. l. has also been supported by a dst ramanna fellowship project and a dae raja ramanna fellowship . 9 abarbanel , h. d. i. , rulkov , n. f. & sushchik , m. m. [ 1996 ] generalized synchronization of chaos : the auxiliary system approach " _ phys . rev . e _ * 53 * , 4528 . abarbanel , h. d. i. , brown , r. , sidorowich , j. j. & tsimring , l. s. [ 1993 ] the analysis of observed chaotic data in physical systems " _ rev . mod . phys . _ * 65 * , 1331 . abraham , e. r. [ 1998 ] the generation of plankton patchiness by turbulent stirring " _ nature _ * 391 * , 577 . acharyya , s. & amritkar , r. e. [ 2013 ] generalized synchronization of coupled chaotic systems " _ eur . phys . j. special topics _ * 222 * , 939 . amritkar , r. e. & rangarajan , g. [ 2006 ] spatially synchronous extinction of species under external forcing " _ phys . rev . lett . _ * 96 * , 258102 . appeltant , l. , soriano , m. c. , van der sande , g. , danckaert , j. , massar , s. , dambre , j. , schrauwen , b. , mirasso , c. r. & fischer , i. [ 2011 ] information processing using a single dynamical node as complex system " _ nature communications _ * 2 * , 468 . blasius , b. , huppert , a. & stone , l.
[ 1999 ] complex dynamics and phase synchronization in spatially extended ecological systems " _ nature ( london ) _ * 392 * , 239 .boccaletti , s. , valladares , d. l. , kurths , j. , maza , d. & mancini , h. [ 2000 ] synchronization of chaotic structurally nonequivalent systems " _ phys .e _ * 61 * , 3712 .brown , r. [ 1998 ] approximating the mapping between systems exhibiting generalized synchronization " _ phys .lett . _ * 81 * , 4835 .chen , j. , lu , j .- a ., wu , x. & zheng , w. x. [ 2009 ] generalized synchronization of complex dynamical networks via impulsive control " _ chaos _ * 19 * , 043119 .dmitriev , b. s. , hramov , a , e. , krasovskii , a. a. , starodubov , a. v. , trubetskov , d. i. & zharkov , y. d. [ 2009 ] first experimental observation of generalized synchronization phenomena in microwave oscillators " _ phys .lett . _ * 107 * , 074101 .earn , d. t. d. , rohani , p. & grenfell , b. t. [ 1998 ] persistence , chaos and synchrony in ecology and epidemiology " _ proc .london , ser .b _ * 265 * , 1471 . farmer , s. f. [ 1998 ] rhythmicity , synchronization and binding in human and primate motor systems " _ j. physiol . _* 509 * , 3 .gakkhar , s. & singh , a. [ 2010 ] a delay model for viral infection in toxin producing phytoplankton and zooplankton system " _ commun .nonlinear sci .* 15 * , 3607 .grenfell , b. t. , bjornstad , o. n. & kappey , j. [ 2001 ] travelling waves and spatial hierarchies in measles epidemics " _ nature ( london ) _ * 414 * , 716 .guan , s. , wang , x. , gong , x. , li , k. & lai , c .- h . [ 2009 ] the development of generalized synchronization on complex networks " _ chaos _ * 19 * , 013130 .guan , s. , gong , x. , li , k. , liu , z. & lai , c .- h . [ 2010 ] characterizing generalized synchronization in complex networks " _ new j. phys . _* 12 * , 073045 .hopfield , j. j. [ 1982 ] neural networks and physical systems with emergent collective computational abilities " _ proc ._ * 79 * , 2554 .hu , a. , xu , z. & guo , l. [ 2010 ] the existence of generalized synchronization of chaotic systems in complex networks " _ chaos _ * 20 * , 013112. hung , y. c. , huang , y. t. , ho , m. c. & hu , c. k. [ 2008 ] paths to globally generalized synchronization in scale - free networks " _ phys .e _ * 77 * , 016202 .ikeda , k. , daido , h. & akimoto , o. [ 1980 ] optical turbulence : chaotic behavior of transmitted light from a ring cavity " _ phys .lett . _ * 45 * , 709 .kocarev , l. & parlitz , u. [ 1996 ] generalized synchronization , predictability , and equivalence of unidirectionally coupled dynamical systems " _ phys .lett . _ * 76 * , 1816 .koronvskii , a , a. , moskalenko , i , o. & hramov , a. e. [ 2011 ] nearest neighbors , phase tubes , and generalized synchronization " _ phys .e _ * 84 * , 037201 .lakshmanan , m. & senthilkumar , d. v. [ 2010 ] _ dynamics of nonlinear time - delay systems _( springer , berlin ) .mackey , m. c. & glass , l. [ 1977 ] oscillation and chaos in physiological control systems " _ science _ * 197 * , 287 .maraun , d. & kurths , j. [ 2005 ] epochs of phase coherence between el nio / southern oscillation and indian monsoon " _ j. geophys .lett . _ * 32 * , l15709 .marwan , n. , romano , m. c. , thiel , m. & kurths , j , [ 2005 ] recurrence plots for the analysis of complex systems , " _ phys ._ * 438 * , 237 - 329 .meng , j. & wang , x. [ 2007 ] robust anti - synchronization of a class of delayed chaotic neural networks " _ chaos _ * 17 * , 023113 .moskalenko , o. i. , koronovskii , a. a. , hramov , a. e. 
& boccaletti , s. [ 2012 ] generalized synchronization in mutually coupled oscillators and complex networks " _ phys . rev .e _ * 86 * , 036216 .moskalenko , o. i. , koronovskii , a. a. & hramov , a. e. [ 2010 ] generalized synchronization of chaos for secure communication : remarkable stability to noise " _ phys .a _ * 374 * , 2925 .murali , k. & lakshmanan , m. [ 1998 ] secure communication using a compound signal from generalized synchronizable chaotic systems " _ phys . lett .a. _ * 241 * , 303 .packard , n. h. , crutchfield , j. p. , farmer , j. d. & shaw , r. s. [ 1980 ] geometry from a time series " _ phys .. lett . _ * 45 * , 712 .parlitz , u. , junge , l. , lauterborn , w. & kocarev , l. [ 1996 ] experimental observation of phase synchronization " _ phys .e _ * 54 * , 2115 .parlitz , u. , junge , l. & kocarev , l. [ 1997 ] subharmonic entrainment of unstable periodic orbits and generalized synchronization " _ phys .lett . _ * 79 * , 3158 .pecora , l. m. , carroll , t. l. , jhonson , g. a. & mar , d. j. [ 1996 ] fundamentals of synchronization in chaotic systems , concepts , and applications . " _ chaos _ * 7 * , 520 .pikovsky , a. s. , rosenblum , m. g. & kurths , j. [ 2001 ] _ synchronization - a unified approach to nonlinear science _( cambridge university press , cambridge , england ) .pyragas , k. [ 1996 ] weak and strong synchronization of chaos " _ phys .e _ * 54 * , 4508(r ) .rogers , e. a. , kalra , r. , schroll , r. d. , uchida , a. , lathrop , d. p. & rajarshi roy [ 2004] generalized synchronization of spatiotemporal chaos in a liquid crystal spatial light modulator " _ phys .* 93 * , 084101 .rulkov , n. f. , sushchik , m. m. , tsimring , l. s. & abarbanel , h. d. i. [ 1995 ] generalized synchronization of chaos in directionally coupled chaotic systems " _ phys .e _ * 51 * , 980 .schfer , c. , rosenblum , m. g. , kurths , j. & abel , h. h. [ 1998 ] heartbeat synchronized with ventilation " _ nature ( london ) _ * 392 * , 239 .schumacher , j. , haslinger , r. & pipa , g. [ 2012 ] statistical modeling approach for detecting generalized synchronization " _ phys .* 85 * , 056215 .sebe , j. y. , van berderode , j. f. , berger , a. j. & abel , h. h. [ 2006 ] inhibitory synaptic transmission governs inspiratory motoneuron synchronization " _ j. neurophysiol _ * 96 * , 391 .senthilkumar , d. v. , lakshmanan , m. & kurths , j. [ 2006 ] phase synchronization in time - delay systems " _ phys .* 74 * , 035205(r ) .senthilkumar , d. v. , lakshmanan , m. & kurths , j. [ 2007 ] transition from phase to generalized synchronization in time - delay systems " _ chaos _ * 18 * , 023118 .senthilkumar , d. v. , suresh , r. , lakshmanan , m. & kurths , j. [ 2013 ] global generalized synchronization in networks of different time - delay systems " _ europhys .lett . _ * 103 * , 50010 .shahverdiev , e. m. & shore , k. a. [ 2009 ] generalized synchronization in laser devices with electro - optical feedback " _ iet optoelectron . _ * 3 * , 274 .shang , y. , chen , m. & kurths , j. [ 2009 ] generalized synchronization of complex networks " _ phys . rev .e _ * 80 * , 027201 .sorino , m. c. , guy van der sande , fischer , i. & mirasso , c. r. [ 2012 ] synchronization in simple network motifs with negligible correlation and mutual information measures " _ physlett . _ * 108 * , 134101 .stein , k. , timmermann , a. & schneifer , n. [ 2011 ] phase synchronization of the el - nio - southern oscillation with the annual cycle " _ phys .lett . _ * 107 * , 128501 .suresh , r. , srinivasan , k. 
, senthilkumar , d. v. , raja mohamed , i. , murali , k. , lakshmanan , m. & kurths , j. [ 2013 ] zero - lag synchronization in coupled time - delayed piecewise linear electronic circuits " _ eur .. j. special topics _ * 222 * , 729 .uchida , a. , mcallister , r. , meucci , r. & rajarshi roy , [ 2003 ] generalized synchronization of chaos in identical systems with hidden degrees of freedom " _ phys .* 91 * , 174101 .van leeuwen , p. , geue , d. , thiel , m. , cysarz , d. , large , s. , romano , m. c. , wessel , n. , kurths , j. & grnemeyer , d. h. [ 2009 ] influence of paced maternal breathing on fetal maternal heart rate coordination " _ porc .sci . usa _ * 106 * , 13661 .zheng , z. , wang , x. & cross , m. c. [ 2002 ] transitions from partial to complete generalized synchronizations in bidirectionally coupled chaotic oscillators " _ phys .e _ * 65 * , 056211 .zheng , z. & hu , g. [ 2000 ] generalized synchronization versus phase synchronization " _ phys . rev .* 62 * , 7882 . | we point out the existence of transition from partial to global generalized synchronization ( gs ) in symmetrically coupled regular networks ( array , ring , global and star ) of distinctly different time - delay systems of different orders using the auxiliary system approach and the mutual false nearest neighbor method . it is established that there exist a common gs manifold even in an ensemble of structurally nonidentical time - delay systems with different fractal dimensions and we find that gs occurs simultaneously with phase synchronization ( ps ) in these networks . we calculate the maximal transverse lyapunov exponent to evaluate the asymptotic stability of the complete synchronization manifold of each of the main and the corresponding auxiliary systems , which , in turn , ensures the stability of the gs manifold between the main systems . further we also estimate the correlation coefficient and the correlation of probability of recurrence to establish the relation between gs and ps . we also deduce an analytical stability condition for partial and global gs using the krasovskii - lyapunov theory . |
even though the set - sharing domain is , in a sense , remarkably precise , more precision is attainable by combining it with other domains .in particular , freeness and linearity information has received much attention by the literature on sharing analysis ( recall that a variable is said to be free if it is not bound to a non - variable term ; it is linear if it is not bound to a term containing multiple occurrences of another variable ) .as argued informally by sndergaard , the mutual interaction between linearity and aliasing information can improve the accuracy of a sharing analysis .this observation has been formally applied in to the specification of the abstract operator for the domain . in his phd thesis , langen proposed a similar integration with linearity , but for the set - sharing domain .he has also shown how the aliasing information allows to compute freeness with a good degree of accuracy ( however , freeness information was not exploited to improve aliasing ) .king has also shown how a more refined tracking of linearity allows for further precision improvements .the synergy attainable from a bi - directional interaction between aliasing and freeness information was initially pointed out by muthukumar and hermenegildo . since then , several authors considered the integration of set - sharing with freeness , sometimes also including additional explicit structural information .building on the results obtained in , and , but independently from , hans and winkler proposed a combined integration of freeness and linearity information with set - sharing .similar combinations have been proposed in . from a more pragmatic point of view ,codish et al . integrate the information captured by the domains of and by performing the analysis with both domains at the same time , exchanging information between the two components at each step .most of the above proposals differ in the carrier of the underlying abstract domain .even when considering the simplest domain combinations where explicit structural information is ignored , there is no general consensus on the specification of the abstract unification procedure . from a theoretical point of view , once the abstract domain has been related to the concrete one by means of a galois connection , it is always possible to specify the best correct approximation of each operator of the concrete semantics .however , empirical observations suggest that sub - optimal operators are likely to result in better complexity / precision trade - offs . as a consequence ,it is almost impossible to identify `` the right combination '' of variable aliasing with freeness and linearity information , at least when practical issues , such as the complexity of the abstract unification procedure , are taken into account . given this state of affairs , we will now consider a domain combination whose carrier is essentially the same as specified by langen and hans and winkler .( the same domain combination was also considered by bruynooghe et al . , but with the addition of compoundness and explicit structural information . )the novelty of our proposal lies in the specification of an improved abstract unification procedure , better exploiting the interaction between sharing and linearity . as a matter of fact, we provide an example showing that all previous approaches to the combination of set - sharing with freeness and linearity are not uniformly more precise than the analysis based on the domain , whereas such a property is enjoyed by our proposal . 
by extending the results of to this combination ,we provide a new abstraction function that can be applied to any logic language computing on domains of syntactic structures , with or without the occurs - check ; by using this abstraction function , we also prove the correctness of the new abstract unification procedure .moreover , we show that the same notion of redundant information as identified in also applies to this abstract domain combination . as a consequence , it is possible to implement an algorithm for abstract unification running in polynomial time andstill obtain the same precision on all the considered observables : groundness , independence , freeness and linearity .this paper is based on ( * ? ? ?* chapter 6 ) , the phd thesis of the second author . in section [ sec : prelims ] , we define some notation and recall the basic concepts used later in the paper . in section [ sec : sfl - domain ] , we present the domain that integrates set - sharing , freeness and linearity . in section [ sec : sfl - asub - comparison ] , we show that is uniformly more precise than the domain , whereas all the previous proposals for a domain integrating set - sharing and linearity fail to satisfy such a property . in section [ sec : sfl - redundant ] , we show that the domain can be simplified by removing some redundant information . in section [ sec : exp - eval ] , we provide an experimental evaluation using the analyzer . in section [ sec : related ] , we discuss some related work .section [ sec : conclusion ] concludes with some final remarks .the proofs of the results stated here are not included but all of them are available in an extended version of this paper .for a set , is the powerset of .the cardinality of is denoted by and the empty set is denoted by .the notation stands for the set of all the _ finite _ subsets of , while the notation stands for .the set of all finite sequences of elements of is denoted by , the empty sequence by , and the concatenation of is denoted by .let denote a possibly infinite set of function symbols , ranked over the set of natural numbers .let denote a denumerable set of variables , disjoint from .then denotes the free algebra of all ( possibly infinite ) terms in the signature having variables in .thus a term can be seen as an ordered labeled tree , possibly having some infinite paths and possibly containing variables : every inner node is labeled with a function symbol in with a rank matching the number of the node s immediate descendants , whereas every leaf is labeled by either a variable in or a function symbol in having rank ( a constant ) .it is assumed that contains at least two distinct function symbols , with one of them having rank .if then and denote the set and the multiset of variables occurring in , respectively .we will also write to denote the set of variables occurring in an arbitrary syntactic object .suppose : and are _ independent _ if ; we say that variable _ occurs linearly in _ , more briefly written using the predication , if occurs exactly once in ; is said to be _ ground _ if ; is _ free _ if ; is _ linear _ if , for all , we have ; finally , is a _ finite term _ ( or _ herbrand term _ ) if it contains a finite number of occurrences of function symbols .the sets of all ground , linear and finite terms are denoted by , and , respectively .a _ substitution _ is a total function that is the identity almost everywhere ; in other words , the _ domain _ of , is finite . 
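the following sketch makes the ground / free / linear classification of finite terms concrete ; the tuple - based term representation and the variable naming convention are our own illustrative assumptions and are not part of the paper .

```python
from collections import Counter

# illustrative term representation: a variable is a string, a compound term is
# a tuple (functor, arg_1, ..., arg_n); constants are 0-ary tuples like ('a',)

def is_var(t):
    return isinstance(t, str)

def term_vars(t):
    """Multiset of variable occurrences in a finite term."""
    if is_var(t):
        return Counter([t])
    counts = Counter()
    for arg in t[1:]:
        counts += term_vars(arg)
    return counts

def is_ground(t):
    return not term_vars(t)

def is_free(t):
    return is_var(t)

def is_linear(t):
    """A term is linear if no variable occurs in it more than once."""
    return all(n == 1 for n in term_vars(t).values())

t = ('f', 'x', ('g', 'y', 'x'))     # f(x, g(y, x))
assert not is_linear(t) and not is_ground(t) and not is_free(t)
```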
given a substitution , we overload the symbol ` ' so as to denote also the function defined as follows , for each term : , we write to denote .note that , for each substitution and each finite term , if , then . if and , then is called a _binding_. the set of all bindings is denoted by .substitutions are denoted by the set of their bindings , thus a substitution is identified with the ( finite ) set we denote by the set of variables occurring in the bindings of .we also define .a substitution is said to be _ circular _ if , for , it has the form where , , are distinct variables .a substitution is in _ rational solved form _ if it has no circular subset .the set of all substitutions in rational solved form is denoted by .a substitution is _ idempotent _ if , for all , we have .equivalently , is idempotent if and only if .the set of all idempotent substitutions is denoted by and .the composition of substitutions is defined in the usual way .thus is the substitution such that , for all terms , and has the formulation as usual , denotes the identity function ( i.e. , the empty substitution ) and , when , denotes the substitution . for each and , the sequence of finite terms converges to a ( possibly infinite ) term , denoted .therefore , the function such that is well defined .note that , in general , this function is not a substitution : while having a finite domain , its `` bindings '' can map a domain variable into a term .however , as the name of the function suggests , the term is granted to be _ rational _ , meaning that it can only have a finite number of distinct subterms and hence , be finitely represented .consider the substitutions note that there are substitutions , such as , that are not idempotent and nonetheless define finite trees only ; namely , .similarly , there are other substitutions , such as , whose bindings are not explicitly cyclic and nonetheless define rational trees that are infinite ; namely , .finally note that the ` ' function is not defined on .an _ equation _ is of the form where . denotes the set of all equations .a substitution may be regarded as a finite set of equations , that is , as the set .we say that a set of equations is in _ rational solved form _ if . in the rest of the paper , we will often write a substitution to denote a set of equations in rational solved form ( and vice versa ) . 
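a small companion sketch , reusing the term helpers above , shows substitution application and the usual composition for finite terms ; it covers only the idempotent , finite - tree case and does not deal with the subtleties of rational solved form discussed in the text .

```python
def apply_subst(sigma, t):
    """Apply a substitution (a dict from variables to terms) to a finite term."""
    if is_var(t):
        return sigma.get(t, t)
    return (t[0],) + tuple(apply_subst(sigma, arg) for arg in t[1:])

def compose(sigma, tau):
    """Composition 'sigma then tau': each binding x -> s of sigma becomes
    x -> s(tau), bindings of tau on fresh variables are kept, and trivial
    bindings x -> x are dropped."""
    composed = {x: apply_subst(tau, s) for x, s in sigma.items()}
    for x, s in tau.items():
        composed.setdefault(x, s)
    return {x: s for x, s in composed.items() if s != x}

sigma = {'x1': ('g', 'x2')}
tau = {'x2': ('a',)}
assert apply_subst(compose(sigma, tau), 'x1') == ('g', ('a',))
```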
as is common in research work involving equality, we overload the symbol ` ' and use it to denote both equality and to represent syntactic identity .the context makes it clear what is intended .let .we assume that any equality theory over includes the _ congruence axioms _ denoted by the following schemata : in logic programming and most implementations of prolog it is usual to assume an equality theory based on syntactic identity .this consists of the congruence axioms together with the _ identity axioms _ denoted by the following schemata , where and are distinct function symbols or : the axioms characterized by schemata ( [ eq - ax : injective - functions ] ) and ( [ eq - ax : diff - funct ] ) ensure the equality theory depends only on the syntax .the equality theory for a non - syntactic domain replaces these axioms by ones that depend instead on the semantics of the domain and , in particular , on the interpretation given to functor symbols .the equality theory of clark , denoted , on which pure logic programming is based , usually called the _ herbrand _ equality theory , is given by the congruence axioms , the identity axioms , and the axiom schema axioms characterized by the schema ( [ eq - ax : occ - check ] ) are called the _ occurs - check axioms _ and are an essential part of the standard unification procedure in sld - resolution .an alternative approach used in some implementations of logic programming systems , such as prolog ii , sicstus and oz , does not require the occurs - check axioms .this approach is based on the theory of rational trees , denoted .it assumes the congruence axioms and the identity axioms together with a _uniqueness axiom _ for each substitution in rational solved form . informally speaking these state that , after assigning a ground rational tree to each variable which is not in the domain , the substitution uniquely defines a ground rational tree for each of its domain variables .note that being in rational solved form is a very weak property .indeed , unification algorithms returning a set of equations in rational solved form are allowed to be much more `` lazy '' than one would expect .we refer the interested reader to for details on the subject . in the sequel we use the expression `` equality theory '' to denote any consistent , decidable theory satisfying the congruence axioms .we also use the expression `` syntactic equality theory '' to denote any equality theory also satisfying the identity axioms .we say that a substitution is _ satisfiable _ in an equality theory if , when interpreting as an equation system in rational solved form , let be a set of equations in an equality theory .a substitution is called a _ solution for in _ if is satisfiable in and ; we say that is satisfiable if it has a solution .if , then is said to be a _ relevant _solution for .in addition , is a _most general solution for in _ if . in this paper ,a most general solution is always a relevant solution of .when the theory is clear from the context , the set of all the relevant most general solutions for in is denoted by .[ ex : equations ] let and then , for any syntactic equality theory , we have . since , then and hence is satisfiable in . 
intuitively ,whatever rational tree is assigned to the parameter variable , there exist rational trees , and that , when assigned to the domain variables , and , will turn into a set of trivial identities ; namely , let and be both equal to the infinite rational tree , which is usually denoted by , and let be the rational tree .thus is a relevant most general solution for in .in contrast , is just a relevant solution for in .also observe that , for any equality theory , so that does not satisfy the occurs - check axioms. therefore , neither nor are satisfiable in the herbrand equality theory .intuitively , there is no finite tree such that .we have the following useful result regarding ` ' and satisfiable substitutions that are equivalent with respect to any given syntactic equality theory .[ prop : rt - preserves - vars - gterms - lterms ] let be satisfiable in the syntactic equality theory and suppose that . then given two complete lattices and , a _ galois connection _ is a pair of monotonic functions and such that the functions and are said to be the abstraction and concretization functions , respectively .galois insertion _ is a galois connection where the concretization function is injective .an _ upper closure operator _( uco ) on the complete lattice is a monotonic , idempotent and extensive for each . ]self - map .the set of all uco s on , denoted by , is itself a complete lattice . for any , the set , i.e., the image under of the lattice carrier , is a complete lattice under the same partial order defined on .given a galois connection , the function is an element of .the presentation of abstract interpretation in terms of galois connections can be rephrased by using uco s . in particular, the partial order defined on formalizes the intuition of an abstract domain being more precise than another one ; moreover , given two elements , their reduced product , denoted , is their on .the set - sharing domain of jacobs and langen , encodes both aliasing and groundness information .let be a fixed and finite set of variables of interest .an element of the set - sharing domain ( a _ sharing set _ ) is a set of subsets of ( the _ sharing groups _ ) .note that the empty set is not a sharing group .[ def : sh ] let be the set of _ sharing groups_. the set - sharing lattice is defined as , ordered by subset inclusion .the following operators on are needed for the specification of the abstract semantics .[ def : aux - funcs - sh ] for each and each , we define the following functions : the _ star - union _function , is defined as in it was shown that the domain contains many elements that are redundant for the computation of the actual _ observable _ properties of the analysis , definite groundness and definite independence .the following formalization of these observables is a rewording of the definitions provided in .[ def : sh - observables ] the _ groundness _ and _ independence _ observables ( on ) are defined , for each , by note that , as usual in sharing analysis domains , definite groundness and definite independence are both represented by encoding possible non - groundness and possible pair - sharing information .the abstract domain is the simplest abstraction of the domain that still preserves the same precision on groundness and independence .[ def : rhopsd ] the operator is defined , for each , by the _ pair - sharing dependency _ lattice is . 
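the following sketch shows plausible implementations of the auxiliary set - sharing operations used throughout : the selection of the sharing groups relevant to a set of variables , the binary union of two sharing sets , and the star - union as a closure under union ; sharing groups are modelled as frozensets , and single - character variable names are used only for brevity .

```python
def rel(vs, sh):
    """Sharing groups in sh that have at least one variable in common with vs."""
    return {g for g in sh if g & vs}

def bin_union(sh1, sh2):
    """Pairwise unions of sharing groups taken from sh1 and sh2."""
    return {g1 | g2 for g1 in sh1 for g2 in sh2}

def star_union(sh):
    """Smallest superset of sh closed under union of its sharing groups."""
    closure = set(sh)
    while True:
        new = {g1 | g2 for g1 in closure for g2 in closure} - closure
        if not new:
            return closure
        closure |= new

sh = {frozenset('xy'), frozenset('yz')}
assert frozenset('xyz') in star_union(sh)
```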
in the following example we provide an intuitive interpretation of the approximation induced by the three upper closure operators of definitions [ def : sh - observables ] and [ def : rhopsd ] .[ ex : con - ps ] let and consider , omitting inner braces .for instance , we will write to denote . ] .then when observing , the only information available is that variable does not occur in a sharing group ; intuitively , this means that is definitely ground .all the other information encoded in is lost ; for instance , in variables and never occur in the _ same _ sharing group ( i.e. , they are definitely independent ) , while this happens in .when observing , it should be noted that two distinct variables occur in the same sharing group if and only if they were also occurring together in a sharing group of , so that the definite independence information is preserved ( e.g. , and keep their independence ) . on the other hand ,all the variables in occur as singletons in whether or not they are known to be ground ; for instance , occurs in although does not occur in any sharing group in . by noting that , it follows that preserves both the definite groundness and the definite independence information of ; moreover , as the inclusion is strict , encodes other information , such as variable covering ( the interested reader is referred to for a more formal discussion ) .one of the key concepts used in for the proofs of the correctness results stated in this paper is that of variable - idempotence .for the interested reader , we provide here a brief introduction to variable - idempotent substitutions , although these are not referred to elsewhere in the paper .the definition of idempotence requires that repeated applications of a substitution do not change the syntactic structure of a term and idempotent substitutions are normally the preferred form of a solution to a set of equations .however , in the domain of rational trees , a set of solvable equations does not necessarily have an idempotent solution ( for instance , in example [ ex : equations ] , the set of equations has no idempotent solution ) . on the other hand ,several abstractions of terms , such as the ones commonly used for sharing analysis , are only interested in the set of variables occurring in a term and not in the concrete structure that contains them .thus , for applications such as sharing analysis , a useful way to relax the definition of idempotence is to ignore the structure of terms and just require that the repeated application of a substitution leaves the set of variables in a term invariant .[ def : vsubst ] a substitution is _ _ variable - idempotent _ _ if and only if for all we have the set of variable - idempotent substitutions is denoted . as any idempotent substitution is also variable - idempotent , we have .consider the following substitutions which are all in .the abstract domain is made up of three components , providing different kinds of sharing information regarding the set of variables of interest : the first component is the set - sharing domain of jacobs and langen ; the other two components provide freeness and linearity information , each represented by simply recording those variables of interest that are known to enjoy the corresponding property .let and be partially ordered by reverse subset inclusion .the abstract domain is defined as and is ordered by , the component - wise extension of the orderings defined on the sub - domains . 
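as a concrete illustration of the carrier just described , the following sketch models an element of the combined domain and its component - wise ordering ; it is only a data - structure sketch and omits the bottom element and the abstract operations defined later .

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SFL:
    """An element of the combined domain: a sharing set plus the sets of
    variables of interest known to be definitely free and definitely linear."""
    sh: frozenset      # set of sharing groups (frozensets of variables)
    free: frozenset    # subset of the variables of interest
    lin: frozenset     # subset of the variables of interest

def leq(d1, d2):
    """Component-wise order: subset inclusion on sh, reverse subset inclusion
    on the freeness and linearity components (larger sets are more precise)."""
    return d1.sh <= d2.sh and d1.free >= d2.free and d1.lin >= d2.lin

def lub(d1, d2):
    """Least upper bound: union of sharing sets, intersection of free/linear sets."""
    return SFL(d1.sh | d2.sh, d1.free & d2.free, d1.lin & d2.lin)
```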
with this ordering, is a complete lattice whose least upper bound operation is denoted by .the bottom element will be denoted by .when the concrete domain is based on the theory of finite trees , idempotent substitutions provide a finitely computable _ strong normal form _ for domain elements , meaning that different substitutions describe different sets of finite trees . in contrast , when working on a concrete domain based on the theory of rational trees , substitutions in rational solved form , while being finitely computable , no longer satisfy this property : there can be an infinite set of substitutions in rational solved form all describing the same set of rational trees ( i.e. , the same element in the `` intended '' semantics ) .for instance , the substitutions for , , , all map the variable into the same infinite rational tree . ideally , a strong normal form for the set of rational trees described by a substitution can be obtained by computing the limit .the problem is that can map domain variables to infinite rational terms and may not be in .this poses a non - trivial problem when trying to define `` good '' abstraction functions , since it would be really desirable for this function to map any two equivalent concrete elements to the same abstract element .as shown in , the classical abstraction function for set - sharing analysis , which was defined only for substitutions that are idempotent , does not enjoy this property when applied , as it is , to arbitrary substitutions in rational solved form . in , this problem is solved by replacing the sharing group operator ` ' of by an occurrence operator , ` ' , defined by means of a fixpoint computation . however , to simplify the presentation , here we define ` ' directly by exploiting the fact that the number of iterations needed to reach the fixpoint is bounded by the number of bindings in the substitution . [def : occ ] for each and , the _ occurrence operator _ is defined as the operator ` ' is introduced for notational convenience only .[ ex : occ ] let then so that , for , , and . as a consequence , supposing that , we obtain . in a similar way , it is possible to define suitable operators for groundness , freeness and linearity . asall ground trees are linear , a knowledge of the definite groundness information can be useful for proving properties concerning the linearity abstraction .groundness is already encoded in the abstraction for set - sharing provided in definition [ def : occ ] ; nonetheless , for both a simplified notation and a clearer intuitive reading , we now explicitly define the set of variables that are associated to ground trees by a substitution in .[ def : gvars ] the _ groundness operator _ is defined , for each , by [ ex : gvars ] consider where then .observe that although .also , although for all . as for possible sharing, the definite freeness information can be extracted from a substitution in rational solved form by observing the result of a bounded number of applications of the substitution .[ def : fvars ] the _ freeness operator _ is defined , for each , by as has no circular subset , implies .[ ex : fvars ] let and consider where then .thus although .also , although . 
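for the reader who prefers executable definitions , the following sketch ( reusing the term and substitution helpers above ) computes the classical set - sharing abstraction for the simple case of an idempotent substitution over finite terms ; the bounded - iteration occurrence operator needed for substitutions in rational solved form is not reproduced here .

```python
def sharing_abstraction(sigma, vi):
    """Classical set-sharing abstraction for an idempotent substitution over
    finite terms: the sharing group induced by a variable v collects the
    variables of interest y such that v occurs in y(sigma); empty groups
    are discarded."""
    groups = set()
    occurring = set()
    for y in vi:
        occurring |= set(term_vars(apply_subst(sigma, y)))
    for v in occurring:
        group = frozenset(y for y in vi
                          if v in term_vars(apply_subst(sigma, y)))
        if group:
            groups.add(group)
    return groups

vi = {'x1', 'x2', 'x3'}
sigma = {'x1': ('f', 'x3', 'x3'), 'x2': ('a',)}
# x2 is ground (it occurs in no group); x1 and x3 share through x3
assert sharing_abstraction(sigma, vi) == {frozenset({'x1', 'x3'})}
```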
as in previous cases ,the definite linearity information can be extracted by observing the result of a bounded number of applications of the considered substitution .[ def : lvars ] the _ linearity operator _ is defined , for each , by in the next example we consider the extraction of linearity from two substitutions .the substitution shows that , in contrast with the case of set - sharing and freeness , for linearity we may need to compute up to applications , where ; the substitution shows that , when observing the term , multiple occurrences of domain variables have to be disregarded .[ ex : lvars ] let and consider where .observe that .this is because , so that and so that does not hold .note also that holds for , , .consider now where then .note that we have although , for all , occurs more than once in the term .the occurrence , groundness , freeness and linearity operators are invariant with respect to substitutions that are equivalent in the given syntactic equality theory .[ prop : iff - rsubst - occ - f - g - l ] let be satisfiable in the syntactic equality theory and suppose that .then moreover , these operators precisely capture the intended properties over the domain of rational trees .[ prop : occ - f - g - l - rt - rsubst ] if and then it follows from ( [ case : rsubst - occ ] ) and ( [ case : rsubst - free ] ) that any free variable necessarily shares ( at least , with itself ) .also , as , it follows from ( [ case : rsubst - ground ] ) , ( [ case : rsubst - free ] ) and ( [ case : rsubst - linear ] ) that any variable that is either ground or free is also necessarily linear .thus we have the following corollary .[ cor : f - g - l - rsubst ] if , then we are now in position to define the abstraction function mapping rational trees to elements of the domain .[ def : abstrsfl ] for each substitution , the function is defined by with definition [ def : abstrsfl ] and proposition [ prop : iff - rsubst - occ - f - g - l ] , one of our objectives is fulfilled : substitutions in that are equivalent have the same abstraction .[ cor : iff - rsubst - same - abstrsfl ]let be satisfiable in the syntactic equality theory and suppose . then .observe that the galois connection defined by the functions and is not a galois insertion since different abstract elements are mapped by to the same set of concrete computation states . to see this it is sufficient to observe that , by corollary [ cor : f - g - l - rsubst ] , any abstract element such that , as is the case for the bottom element , satisfies ; thus , all such s will represent the semantics of those program fragments that have no successful computations .similarly , by letting , it can be seen that , for any such that we have , again by corollary [ cor : f - g - l - rsubst ] , . of course, by taking the abstract domain as the subset of that is the co - domain of , we would have a galois insertion .however , apart from the simple cases shown above , it is somehow difficult to _ explicitly _ characterize such a set . for instance , as observed in , if we have .it is worth stressing that these `` spurious '' elements do not compromise the correctness of the analysis and , although they can affect the precision of the analysis , they rarely occur in practice .the specification of the abstract unification operator on the domain is rather complex , since it is based on a very detailed case analysis . 
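before turning to abstract unification , we note that the groundness , freeness and linearity components can be read off directly in the idempotent , finite - term case ; the following sketch , again reusing the helpers above , is only meant to convey the intuition behind the operators , whose general definitions require a bounded number of applications of the substitution .

```python
def gvars(sigma, vi):
    """Variables of interest bound to ground terms."""
    return {x for x in vi if is_ground(apply_subst(sigma, x))}

def fvars(sigma, vi):
    """Variables of interest still bound to a variable (free)."""
    return {x for x in vi if is_var(apply_subst(sigma, x))}

def lvars(sigma, vi):
    """Variables of interest bound to linear terms."""
    return {x for x in vi if is_linear(apply_subst(sigma, x))}

def alpha_sfl(sigma, vi):
    """Abstraction into the combined domain (idempotent, finite-term case only)."""
    return SFL(frozenset(sharing_abstraction(sigma, vi)),
               frozenset(fvars(sigma, vi)),
               frozenset(lvars(sigma, vi)))
```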
to achieve some modularity , which will also be useful when proving its correctness , in the next definition we introduce several auxiliary abstract operators . [ def : aux - funcs - sfl ] let be finite terms such that . for each define the following predicates : and are _ independent in _ if and only if holds for , where the function yields the set of variables of interest that may share with the given term . for each , the function strengthens the sharing set by forcing the coupling of with . for each and each , as a first correctness result , we have that the auxiliary operators correctly approximate the corresponding concrete properties . [ thm : soundness - of - sfl - preds ] let , and . let also be two finite terms such that . then let and consider the abstract element , where then , by applying definition [ def : aux - funcs - sfl ] , we obtain the following . * does not hold whereas holds . * holds but does not hold . * both and hold whereas does not hold ; note that , in the second case , the two arguments of the predicate do share , but this does not affect the independence of the corresponding terms , because is definitely ground in the abstract element . * let ; then does not hold because ; does not hold because occurs more than once in ; holds , even though occurs twice in , because is definitely ground in ; does not hold because both and occur in term and , as observed in the point above , does not hold . * for the reasons given in the point above , does not hold ; in contrast , holds . * and ; thus , both and may share one or more variables with ; since we observed that and are definitely independent in , this means that the set of variables that shares with is disjoint from the set of variables that shares with . * let ; then an intuitive explanation of the usefulness of this operator is deferred until after the introduction of the abstract operator ( see also example [ ex : amgusfl - cyclicreduce ] ) . we now introduce the abstract operator , specifying how a single binding affects each component of the domain in the context of a syntactic equality theory . [ def : amgusfl ] the function captures the effects of a binding on an element of . let and , where . let also where letting and , we also define
$$
L' \mathrel{\buildrel \mathrm{def} \over =} \bigl( \mathit{VI} \setminus \mathop{\mathrm{vars}}\nolimits(\mathit{sh}') \bigr) \cup F' \cup L'' ,
$$
where
$$
L'' \mathrel{\buildrel \mathrm{def} \over =}
\begin{cases}
L \setminus ( S_x \cap S_t ) , & \text{if ; } \\
L \setminus S_x , & \text{if ; } \\
L \setminus S_t , & \text{if ; } \\
L \setminus ( S_x \cup S_t ) , & \text{otherwise . }
\end{cases}
$$
then
$$
\mathop{\mathrm{amgu}}\nolimits_{S}\bigl( d , x \mapsto t \bigr) \mathrel{\buildrel \mathrm{def} \over =}
\begin{cases}
\bot_{S} , & \text{if } d = \bot_{S} \lor \bigl( T = \mathcal{FT} \land x \in \mathop{\mathrm{vars}}\nolimits(t) \bigr) ; \\
\langle \mathit{sh}' , F' , L' \rangle , & \text{otherwise . }
\end{cases}
$$
then , for all and in the syntactic equality theory , we have we now highlight the similarities and differences of the operator with respect to the corresponding ones defined in the `` classical '' proposals for the integration of set - sharing with freeness and linearity , such as . note that , when comparing our domain with the proposal in , we deliberately ignore all those enhancements that depend on properties that can not be represented in ( i.e. , compoundness and explicit structural information ) . * in the computation of the set - sharing component , the main difference can be observed in the second , third and fourth cases of the definition of : here we omit one of the star - unions even when the terms and possibly share .in contrast , in the corresponding star - union is avoided only when holds .note that when holds in the second case of , then we have ; thus , the whole computation for this case reduces to , as was the case in the previous proposals .* another improvement on the set - sharing component can be observed in the definition of : the operator allows the set - sharing description to be further enhanced when dealing with _ explicitly cyclic bindings _ , i.e. , when .this is the rewording of a similar enhancement proposed in for the domain in the context of groundness analysis .its net effect is to recover some groundness and sharing dependencies that would have been unnecessarily lost when using the standard operators .when , we have . * the computation of the freeness component is the same as specified in , and is more precise than the one defined in . *the computation of the linearity component is the same as specified in , and is more precise than those defined in . in the following examples we show that the improvements in the abstract computation of the sharing component allow , in particular cases , to derive better information than that obtainable by using the classical abstract unification operators .[ ex : amgusfl - both - lin - share ] let and such that by definition [ def : abstrsfl ] , we have , where consider the binding . in the concrete domain ,we compute ( a substitution equivalent to ) , where note that , where , so that the pairs of variables and keep their independence .when evaluating the sharing component of , using the notation of definition [ def : amgusfl ] , we have since both and hold , we apply the second case of the definition of so that finally , as the binding is not cyclic , we obtain .thus captures the fact that pairs and keep their independence .in contrast , since does not hold , all of the classical definitions of abstract unification would have required the star - closure of both and , resulting in an abstract element including , among the others , the sharing group . since , this independence information would have been unnecessarily lost .similar examples can be devised for the third and fourth cases of the definition of , where only one side of the binding is known to be linear .the next example shows the precision improvements arising from the use of the operator .[ ex : amgusfl - cyclicreduce ] let and . 
by definition [ def :abstrsfl ] , we have , where let and consider the cyclic binding .in the concrete domain , we compute ( a substitution equivalent to ) , where note that if we further instantiate by grounding , then variables , and would become ground too .formally we have , where .thus , as observed above , covers , and .when abstractly evaluating the binding , we compute since both and hold , we apply the second case of the definition of , so that thus , as , we obtain note that , in the element ( which is the abstract element that would have been computed when not exploiting the operator ) variable covers none of variables , and . thus , by applying the operator , this covering information is restored . the full abstract unification operator , capturing the effect of a sequence of bindings on an abstract element , can now be specified by a straightforward inductive definition using the operator .[ def : aunifysfl ] the operator is defined , for each and each sequence of bindings , by note that the second argument of is a _ sequence _ of bindings ( i.e. , it is not a substitution , which is a _ set _ of bindings ) , because is neither commutative nor idempotent , so that the multiplicity and the actual order of application of the bindings can influence the overall result of the abstract computation .the correctness of the operator is simply inherited from the correctness of the underlying operator . in particular , any reordering of the bindings in the sequence still results in a correct implementation of .the ` merge - over - all - path ' operator on the domain is provided by and is correct by definition .finally , we define the abstract existential quantification operator for the domain , whose correctness does not pose any problem .[ def : aprojsfl ] the function provides the _ abstract existential quantification _ of an element with respect to a subset of the variables of interest . for each and , the intuition behind the definition of the abstract operator is the following . as explained in section [ sec : prelims ] ,any substitution can be interpreted , under the given equality theory , as a first - order logical formula ; thus , for each set of variables , it is possible to consider the ( concrete ) existential quantification .the goal of the abstract operator is to provide a correct approximation of such a quantification starting from any correct approximation for .let and , so that , by definition [ def : abstrsfl ] , let and consider the concrete element corresponding to the logical formula . note that , where . by applying definition [ def : aprojsfl ], we obtain it is worth stressing that such an operator does not affect the set of the variables of interest .in particular , the abstract element still has to provide correct information about variables and .intuitively , since all the occurrences of and in are bound by the existential quantifier , the two variables of interest are un - aliased , free and linear .note that an abstract _ projection _ operator , i.e. 
, an operator that actually modifies the set of variables of interest , is easily specified by composing the operator with an operator that simply removes , from all the components of _ and _ from the set of variables of interest , those variables that have to be projected out .as we have already observed , example [ ex : amgusfl - both - lin - share ] shows that the abstract domain , when equipped with the abstract mgu operator introduced in section [ subsec : sfl - operators ] , can yield results that are strictly more precise than all the classical combinations of set - sharing with freeness and linearity information . in this sectionwe show that the same example has another interesting , unexpected consequence , since it can be used to formally prove that all the classical combinations of set - sharing with freeness and linearity , including those presented in , are not _ uniformly _ more precise than the abstract domain , which is based on pair - sharing . to formalize the above observation, we now introduce the domain and the corresponding abstract semantics operators as specified in .the elements of the abstract domain have two components : the first one is a set of variables that are known to be definitely ground ; the second one encodes both possible pair - sharing and possible non - linearity into a single relation defined on the set of variables .intuitively , when and occurs in the second component , then and may share a variable ; when occurs in the second component , then may be non - linear .the second component always encodes a symmetric relation ; thus , for notational convenience and without any loss of generality , we will represent each pair in such a relation as the sharing group , which will have cardinality 1 or 2 depending on whether or not , respectively .[ def : asub ] the abstract domain is defined as , where for , let . then the partial order is extended on by letting be the bottom element .let and .then is a shorthand for the condition , whereas is a shorthand for .it is well - known that the domain can be obtained by a further abstraction of any domain such as that is based on set - sharing and enhanced with linearity information .the following definition formalizes this abstraction .[ def : abstrasub ] let .then where the definition of abstract unification in is based on a few auxiliary operators .the first of these introduces the concept of abstract multiplicity for a term under a given abstract substitution , therefore modeling the notion of definite groundness and definite linearity .[ def : chi ] let and let be a term such that .we say that _ occurs linearly ( in ) in _ if and only if holds for , where it is worth noting that , modulo a few insignificant differences in notation , the multiplicity operator defined above corresponds to the abstract multiplicity operator , which was introduced in ( * ? ? ?* definition 3.4 ) and provided with an executable specification in ( * ? ? ?* definition 4.3 ) .similarly , the next definition corresponds to ( * ? ? ?* definition 4.3 ) .[ def : soln ] for each and , where and are such that , the function is defined as follows the next definition corresponds to ( * ? ? ?* definition 4.5 ) .[ def : abstract - composition ] let , where and .then , where we are now ready to define the abstract operator for the domain .this operator can be viewed as a specialization of ( * ? ? 
?* definition 4.6 ) for the case when we have to abstract a single binding .[ def : amguasub ] let and , where .then by repeating the abstract computation of example [ ex : amgusfl - both - lin - share ] on the domain , we provide a formal proof that all the classical approaches based on set - sharing are not uniformly more precise than the pair - sharing domain . [ex : amguasub - both - lin - share ] consider the substitutions and the abstract element as introduced in example [ ex : amgusfl - both - lin - share ] . by definition [ def : abstrasub ] ,we obtain , where when abstractly evaluating the binding according to definition [ def : amguasub ] , we compute the following : where note that and , so that these pairs of variables keep their independence . in contrast , as observed in example [ ex : amgusfl - both - lin - share ] , the operators in will fail to preserve the independence of these pairs .we now show that the abstract domain , when equipped with the operators introduced in section [ subsec : sfl - operators ] , is uniformly more precise than the domain .in particular , the following theorem states that the abstract operator of definition [ def : amgusfl ] is uniformly more precise than the abstract operator .[ thm : amgusfl - uniformly - more - precise - than - amguasub ] let and be such that .let also , where .then similar results can be stated for the other abstract operators , such as the abstract existential quantification and the merge - over - all - path operator .it is worth stressing that , when sequences of bindings come into play , the specification provided in ( * ? ? ?* definition 4.7 ) requires that the _ grounding _ bindings ( i.e. , those bindings such that ) are evaluated before the non - grounding ones .clearly , if we want to lift the result of theorem [ thm : amgusfl - uniformly - more - precise - than - amguasub ] so that it also applies to the operator , the same evaluation strategy has to be adopted when computing on the domain ; this improvement is well - known and already exploited in most implementations of sharing analysis .as done in for the plain set - sharing domain , even when considering the richer domain it is natural to question whether it contains redundancies with respect to the computation of the observable properties .it is worth stressing that the results presented in and can not be simply inherited by the new domain .the concept of `` redundancy '' depends on both the starting domain and the given observables : in the domain both of these have changed .first of all , as can be seen by looking at the definition of , freeness and linearity positively interact in the computation of sharing information : _ a priori _ it is an open issue whether or not the `` redundant '' sharing groups can play a role in such an interaction . secondly ,since freeness and linearity information can be themselves usefully exploited in a number of applications of static analysis ( e.g. , in the optimized implementation of concrete unification or in occurs - check reduction ) , these properties have to be included in the observables .we will now show that the domain can be simplified by applying the same notion of redundancy as identified in .namely , in the definition of it is possible to replace the set - sharing component by without affecting the precision on groundness , independence , freeness and linearity . 
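to make the pair - sharing domain more concrete , the following python sketch ( our own illustrative encoding , not code from the paper ) represents an abstract element as a pair made of a set of definitely ground variables and a set of sharing groups of cardinality one or two , and computes it from a set - sharing description paired with a linearity set , in the spirit of definition [ def : abstrasub ] ; all function and variable names are hypothetical .

```python
from itertools import combinations

def alpha_asub(vars_of_interest, sharing_groups, linear_vars):
    """Abstract a set-sharing description (plus a set of definitely linear
    variables) into a pair-sharing element: a set of ground variables and a
    relation made of sharing groups of cardinality one or two.

    sharing_groups : iterable of frozensets of variable names
    linear_vars    : variables known to be definitely linear
    """
    occurring = set().union(*sharing_groups)
    # a variable of interest occurring in no sharing group is definitely ground
    ground = set(vars_of_interest) - occurring

    rel = set()
    for group in sharing_groups:
        # two distinct variables in the same sharing group may share
        for x, y in combinations(sorted(group), 2):
            rel.add(frozenset({x, y}))
    for x in occurring:
        # a non-ground variable not known to be linear may be non-linear
        if x not in linear_vars:
            rel.add(frozenset({x}))
    return ground, rel

def may_share(rel, x, y):
    return frozenset({x, y}) in rel

def may_be_nonlinear(rel, x):
    return frozenset({x}) in rel

# small example: sh = {{x, y}, {z}}, with x and z definitely linear
ground, rel = alpha_asub(
    {"x", "y", "z", "w"},
    [frozenset({"x", "y"}), frozenset({"z"})],
    {"x", "z"},
)
print(ground)                       # {'w'}: w is definitely ground
print(may_share(rel, "x", "y"))     # True:  x and y may share
print(may_share(rel, "x", "z"))     # False: x and z keep their independence
print(may_be_nonlinear(rel, "y"))   # True:  y may be non-linear
```

as in definition [ def : asub ] , a two - element group records a pair of variables that may share , while a singleton group records a variable that may be non - linear .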
in order to prove such a claim , we now formalize the new observable properties .[ def : sfl - observables ] the ( overloaded ) _ groundness _ and _ independence _observables are defined , for each , by the overloading of working on the domain is the straightforward extension of the corresponding operator on : in particular , the freeness and linearity components are left untouched .[ def : rhopsd - for - sfl ] for each , the operator is defined by this operator induces the lattice .as proved in , we have that ; by the above definitions , it is also clear that ; thus , is more precise than the reduced product .informally , this means that the domain is able to _ represent _ all of our observable properties without precision losses .the next theorem shows that is a congruence with respect to the , and operators .this means that the domain is able to _ propagate _ the information on the observables as precisely as , therefore providing a completeness result .[ thm : psd - precise - for - sfl ] let be such that .then , for each sequence of bindings , for each and , finally , by providing the minimality result , we show that the domain is indeed the generalized quotient of with respect to the reduced product [ thm : psd - aunifysfl : minimality ] for each , let be such that .then there exist a sequence of bindings and an observable property such that as far as the implementation is concerned , the results proved in for the domain can also be applied to .in particular , in the definition of every occurrence of the star - union operator can be safely replaced by the self - bin - union operator . as a consequence ,it is possible to provide an implementation where the time complexity of the operator is bounded by a polynomial in the number of sharing groups of the set - sharing component .the following result provides another optimization that can be applied when both terms and are definitely linear , but none of them is definitely free ( i.e. , when we compute by the second case stated in definition [ def : amgusfl ] ) .[ thm : psd - precise - for - sfl : inner - bin - unions - useless ] let and , where .let , , , , , where , and then it holds therefore , even when terms and possibly share ( i.e. , when ) , by using we can avoid the expensive computation of at least one of the two inner binary unions in the expression for .example [ ex : amgusfl - both - lin - share ] shows that an analysis based on the new abstract unification operator can be strictly more precise than one based on the classical proposal .however , that example is artificial and leaves open the question as to whether or not such a phenomenon actually happens during the analysis of real programs and , if so , how often .this was the motivation for the experimental evaluation we describe in this section .we consider the abstract domain , where the non - redundant version of the domain is further combined , as described in ( * ? ? ?* section 4 ) , with the definite groundness information computed by and compare the results using the ( classical ) abstract unification operator of ( * ? ? ?* definition 4 ) with the ( new ) operator given in definition [ def : amgusfl ] . taking this as a starting point, we experimentally evaluate eight variants of the analysis arising from all possible combinations of the following options : 1 . the analysis can be goal independent or goal dependent ; 2 .the set - sharing component may or may not have widening enabled ; 3 . 
the abstract domain may or may not be upgraded with structural information using the operator ( see and ( * ? ? ?* section 5 ) ) .the experiments have been conducted using the analyzer on a gnu / linux pc system .is a data - flow analyzer for ( constraint ) logic programs performing bottom - up analysis and deriving information on both call - patterns and success - patterns by means of program transformations and optimized fixpoint computation techniques .an abstract description is computed for the call- and success - patterns for each predicate defined in the program .the benchmark suite , which is composed of 372 logic programs of various sizes and complexity , can be considered representative .the precision results for the goal independent comparisons are summarized in table [ tab : oldmodes - vs - newmodes - precision ] . for each benchmark, precision is measured by counting the number of independent pairs as well as the numbers of definitely ground , free and linear variables detected . for each variant of the analysis ,these numbers are then compared by computing the relative precision improvements and expressing them using percentages .the benchmark suite is then partitioned into several precision equivalence classes and the cardinalities of these classes are shown in table [ tab : oldmodes - vs - newmodes - precision ] .for example , when considering a goal independent analysis without structural information and without widenings , the value 5 found at the intersection of the row labeled ` ' with the column labeled ` i ' should be read : `` for five benchmarks there has been a ( positive ) increase in the number of independent pairs of variables which is less than or equal to two percent . ''note that we only report on independence and linearity ( in the columns labeled ` i ' and ` l ' , respectively ) , because no differences have been observed for groundness and freeness .the precision class labeled ` unknown ' identifies those benchmarks for which the analyses timed - out ( the time - out threshold was fixed at 600 seconds ) .hence , for goal independent analyses , a precision improvement affects from 1.6% to 3% of the benchmarks , depending on the considered variant . when considering the goal dependent analyses , we obtain a single , small improvement , so that no comparison tables are included here : the improvement , affecting linearity information ,can be observed when the abstract domain includes structural information . with respect to differences in the efficiency, the introduction of the new abstract unification operator has no significant effect on the computation time : small differences ( usually improvements ) are observed on as many as 6% of the benchmarks for the goal independent analysis without structural information and without widenings ; other combinations register even less differences .we note that it is not surprising that the precision and efficiency improvements occur very rarely since the abstract unification operators behave the same except under very specific conditions : the two terms being unified must not only be definitely linear , but also possibly non - free and share a variable . 
table [ tab : oldmodes - vs - newmodes - precision ] . cardinality of each precision equivalence class for the goal independent analyses ; the four column pairs correspond to the four combinations of the structural information and widening options , the first pair being the analysis without structural information and without widening , and within each pair column ` i ' refers to independent pairs and column ` l ' to linear variables :

prec . class & i & l & i & l & i & l & i & l
improvement > 2% ( i ) & 0 & 2 & 0 & 2 & 0 & 2 & 0 & 2
improvement > 2% ( ii ) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
0 < improvement <= 2% & 5 & 5 & 9 & 6 & 6 & 6 & 12 & 8
same precision & 357 & 355 & 337 & 338 & 366 & 364 & 360 & 361
unknown & 10 & 10 & 26 & 26 & 0 & 0 & 0 & 0

sharing information has been shown to be important for finite - tree analysis . this aims at identifying those program variables that , at a particular program point , can not be bound to an infinite rational tree ( in other words , they are necessarily bound to acyclic terms ) . this novel analysis is irrelevant for those logic languages computing over a domain of finite trees , while having several applications for those ( constraint ) logic languages that are explicitly designed to compute over a domain including rational trees , such as prolog ii and its successors , sicstus prolog , and oz . the analysis specified in is based on a parametric abstract domain , where the component ( the herbrand component ) is a set of variables that are known to be bound to finite terms , while the parametric component can be any domain capturing aliasing , groundness , freeness and linearity information that is useful to compute finite - tree information . an obvious choice for such a parameter is the domain combination . it is worth noting that , in , the correctness of the finite - tree analysis is proved by _ assuming _ the correctness of the underlying analysis on the parameter . thus , thanks to the results shown in this paper , the proof for the domain can now be considered complete . codish et al . describe an algebraic approach to the sharing analysis of logic programs that is based on _ set logic programs _ . a set logic program is a logic program in which the terms are sets of variables and standard unification is replaced by a suitable unification for sets , called _ aci1-unification _ ( unification in the presence of an associative , commutative , and idempotent equality theory with a unit element ) . the authors show that the domain of _ set - substitutions _ , with a few modifications , can be used as an abstract domain for sharing analysis . they also provide an isomorphism between this domain and the set - sharing domain of jacobs and langen . the approach using set logic programs is also generalized to include linearity information , by suitably annotating the set - substitutions , and the authors formally state the optimality of the corresponding abstract unification operator ( lemma a.10 in the appendix of ) . however , this operator is very similar to the classical combinations of set - sharing with linearity : in particular , the precision improvements arising from this enhancement are only exploited when the two terms being unified are definitely independent . as we have seen in this paper , such a choice results in a sub - optimal abstract unification operator , so that the optimality result can not hold . by looking at the proof of lemma a.10 in , it can be seen that the case when the two terms possibly share a variable is dealt with by referring to an example : this one is supposed to show that all the possible sharing groups can be generated . however , even our improved operator correctly characterizes the given example , so that the proof is wrong . it should be stressed that the operator presented in this paper , though remarkably precise , is not meant to subsume all of the proposals for an improved sharing analysis that appeared in the recent literature ( for a thorough experimental evaluation of many of these proposals , the reader is referred to ) .
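several of the arguments in this and the previous sections , in particular the remark that every star - union occurring in the abstract unification operator can be replaced by a self - bin - union so as to obtain a polynomial - time implementation , rest on a handful of standard set - sharing operations . the following python sketch spells them out for reference ; the encoding of sharing groups as frozensets and all the names are our own illustrative choices , and the sketch deliberately omits the freeness and linearity side conditions of definition [ def : amgusfl ] .

```python
def rel(sh, term_vars):
    """Sharing groups of sh that are relevant to a term, i.e., that meet
    the set of variables occurring in the term."""
    tv = set(term_vars)
    return {g for g in sh if g & tv}

def bin_union(a, b):
    """Binary union: all pairwise unions of a group of a with a group of b."""
    return {g1 | g2 for g1 in a for g2 in b}

def star_union(a):
    """Closure of a under union (unions of all non-empty subsets of a);
    its size can be exponential in the size of a."""
    closed = set(a)
    changed = True
    while changed:
        changed = False
        for g1 in list(closed):
            for g2 in list(closed):
                u = g1 | g2
                if u not in closed:
                    closed.add(u)
                    changed = True
    return closed

def self_bin_union(a):
    """bin(a, a): at most |a|**2 groups; on the non-redundant domain it can
    replace the star-union without affecting the observable properties."""
    return bin_union(a, a)

sh = {frozenset({"x"}), frozenset({"x", "y"}), frozenset({"z"})}
print(sorted(map(sorted, star_union(sh))))
print(sorted(map(sorted, self_bin_union(sh))))
```

the quadratic bound on the size of bin ( a , a ) is what makes the polynomial complexity claim above possible , whereas a literal implementation of the star - union may blow up exponentially in the number of sharing groups .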
in particular , it is not difficult to show that our operator is not the optimal approximation of concrete unification . in a very recent paper ,j. howe and a. king consider the domain and propose three optimizations to improve both the precision and the efficiency of the ( classical ) abstract unification operator .the first optimization is based on the same observation we have made in this paper , namely that the independence check between the two terms being unified is not necessary for ensuring the correctness of the analysis .however , the proposed enhancement does not fully exploit this observation , so that the resulting operator is strictly less precise than our operator ( even when the operator does not come into play ) . in fact , the first optimization of is not uniformly more precise than the classical proposals .the following example illustrates this point .[ ex : amguhw - less - precise - than - classical - amgu ] let , and , where .since and are linear and independent , as well as all the classical abstract unification operators will compute , where in contrast , a computation based on , results in the less precise abstract element , where the second optimization shown in is based on the enhanced combination of set - sharing and freeness information , which was originally proposed in . in particular , the authors propose a slightly different precision enhancement , less powerful as far as precision is concerned , which however seems to be amenable for an efficient implementation .the third optimization in exploits the combination of the domain with the groundness domain .in this paper we have introduced the abstract domain , combining the set - sharing domain with freeness and linearity information .while the carrier of can be considered standard , we have provided the specification of a new abstract unification operator , showing examples where this operator achieves more precision than the classical proposals .the main contributions of this paper are the following : * we have defined a precise abstraction function , mapping arbitrary substitutions in rational solved form into their _ most precise _ approximation on ; * using this abstraction function , we have provided the mandatory proof of _ correctness _ for the new abstract unification operator , _ for both finite - tree and rational - tree languages _ ; * we have formally shown that the domain is _ uniformly _ more precise than the domain ; we have also provided an example showing that all the classical approaches to the combinations of set - sharing with freeness and linearity fail to satisfy this property ; * we have shown that , in the definition of , we can replace the set - sharing domain by its non - redundant version . as a consequence ,it is possible to implement an algorithm for abstract unification running in _ polynomial time _ and still obtain the same precision on all the considered observables , that is groundness , independence , freeness and linearity .we recognize the hard work required to review technical papers such as this one and would like to express our real gratitude to the journal referees for their critical reading and constructive suggestions for preparing this improved version . ,gori , r. , hill , p. m. , and zaffanella , e. 2001 .finite - tree analysis for constraint logic - based languages . in _static analysis : 8th international symposium , sas 2001 _ , p. cousot , ed .lecture notes in computer science , vol .springer - verlag , berlin , paris , france , 165184 . , hill , p. m. 
, and zaffanella , e. 1997 .set - sharing is redundant for pair - sharing . in _static analysis : proceedings of the 4th international symposium _ , p. van hentenryck , ed .lecture notes in computer science , vol . 1302 .springer - verlag , berlin , paris , france , 5367 . , hill , p. m. , and zaffanella , e. 2000 .efficient structural information analysis for real clp languages . in _ proceedings of the 7th international conference on logic for programming and automated reasoning ( lpar 2000 ) _ ,m. parigot and a. voronkov , eds .lecture notes in artificial intelligence , vol . 1955 .springer - verlag , berlin , runion island , france , 189206 . , zaffanella , e. , gori , r. , and hill , p. m. 2001 .boolean functions for finite - tree dependencies . in _ proceedings of the 8th international conference on logic for programming , artificial intelligence and reasoning ( lpar 2001 ) _ , r. nieuwenhuis and a. voronkov , eds .lecture notes in artificial intelligence , vol .springer - verlag , berlin , havana , cuba , 579594 . , zaffanella , e. , and hill , p. m. 2000 . enhanced sharing analysis techniques : a comprehensive evaluation . in _ proceedings of the 2nd international acm sigplan conference on principles and practice of declarative programming _ ,m. gabbrielli and f. pfenning , eds .association for computing machinery , montreal , canada , 103114 .freeness , sharing , linearity and correctness all at once . in _ static analysis , proceedings of the third international workshop _ , p. cousot , m. falaschi , g. fil , and a. rauzy , eds .lecture notes in computer science , vol .springer - verlag , berlin , padova , italy , 153164 .an extended version is available as technical report cw 179 , department of computer science , k.u .leuven , september 1993 . ,codish , m. , and mulkers , a. 1994a .abstract unification for a composite domain deriving sharing and freeness properties of program variables . in _ verification and analysis of logic languages , proceedings of the w2 post - conference workshop , international conference on logic programming _, f. s. de boer and m. gabbrielli , eds .santa margherita ligure , italy , 213230 . , codish , m. , and mulkers , a. 1994b . a composite domain for freeness , sharing , and compoundness analysis of logic programs .technical report cw 196 , department of computer science , k.u .leuven , belgium .july . , codish , m. , and mulkers , a. 1995 . abstracting unification : a key step in the design of logic program analyses . in_ computer science today : recent trends and developments _ , j. van leeuwen , ed .lecture notes in computer science , vol .springer - verlag , berlin , 406425 . , dams , d. , fil , g. , and bruynooghe , m. 1993 .freeness analysis for logic programs and correctness ?in _ logic programming : proceedings of the tenth international conference on logic programming _ , d. s. warren , ed .mit press series in logic programming . the mit press , budapest , hungary , 116131 .an extended version is available as technical report cw 161 , department of computer science , k.u .leuven , december 1992 . , mulkers , a. , bruynooghe , m. , garca de la banda , m. , and hermenegildo , m. 1993 . improving abstract interpretations by combining domains . in _ proceedings of the acm sigplan symposium on partial evaluation and semantics - based program manipulation_. acm press , copenhagen , denmark , 194205 . 
also available as technical report cw 162 ,department of computer science , k.u .leuven , december 1992 .abstract interpretation : a unified lattice model for static analysis of programs by construction or approximation of fixpoints . in _ proceedings of the fourth annual acm symposium on principles of programming languages_. acm press , new york , 238252 ., ranzato , f. , and scozzari , f. 1998 .complete abstract interpretations made constructive .in _ proceedings of 23rd international symposium on mathematical foundations of computer science ( mfcs98 ) _ , j. gruska and j. zlatuska , eds .lecture notes in computer science , vol . 1450 .springer - verlag , berlin , 366377 . ,bagnara , r. , and zaffanella , e. 1998 .the correctness of set - sharing . in _static analysis : proceedings of the 5th international symposium _ , g. levi , ed .lecture notes in computer science , vol . 1503 .springer - verlag , berlin , pisa , italy , 99114 . , bagnara , r. , and zaffanella , e. 2003 . on the analysis of set - sharing , freeness and linearity for finite and rational tree languages .tech . rep .2003.08 , school of computing , university of leeds .available at http://www.comp.leeds.ac.uk/research/pubs/reports.shtml .accurate and efficient approximation of variable aliasing in logic programs .in _ logic programming : proceedings of the north american conference _ ,e. l. lusk and r. a. overbeek , eds . mit press series in logic programming .the mit press , cleveland , ohio , usa , 154165 . \1994 . a synergistic analysis for sharing and groundness which traces linearity . in _ proceedings of the fifth european symposium on programming _ , d. sannella , ed .lecture notes in computer science , vol . 788 .springer - verlag , berlin , edinburgh , uk , 363378 .\1994 . depth- sharing and freeness . in _ logic programming : proceedings of the eleventh international conference on logic programming _ , p. van hentenryck , ed .mit press series in logic programming . the mit press , santa margherita ligure , italy , 553568 . \1988 .complete axiomatizations of the algebras of finite , rational and infinite trees . in _ proceedings , third annual symposium on logic in computer science_. ieee computer society press , edinburgh , scotland , 348357 .an application of abstract interpretation of logic programs : occur check reduction . in _ proceedings of the 1986 european symposium on programming _ , b. robinet and r. wilhelm , eds .lecture notes in computer science , vol .springer - verlag , berlin , 327338 .correctness , precision and efficiency in the sharing analysis of real logic languages .ph.d . thesis ,school of computing , university of leeds , leeds , u.k .available at http://www.cs.unipr.it/~zaffanella/. , bagnara , r. , and hill , p. m. 1999 .widening sharing . in _ principles and practice of declarative programming _, g. nadathur , ed .lecture notes in computer science , vol . 1702 .springer - verlag , berlin , paris , france , 414431 . , hill , p. m. , and bagnara , r. 1999 .decomposing non - redundant sharing by complementation . in _static analysis : proceedings of the 6th international symposium _ ,a. cortesi and g. fil , eds .lecture notes in computer science , vol . 1694 .springer - verlag , berlin , venice , italy , 6984 . | it is well - known that freeness and linearity information positively interact with aliasing information , allowing both the precision and the efficiency of the sharing analysis of logic programs to be improved . 
in this paper we present a novel combination of set - sharing with freeness and linearity information , which is characterized by an improved abstract unification operator . we provide a new abstraction function and prove the correctness of the analysis for both the finite tree and the rational tree cases . moreover , we show that the same notion of redundant information as identified in also applies to this abstract domain combination : this allows for the implementation of an abstract unification operator running in polynomial time and achieving the same precision on all the considered observable properties . abstract interpretation ; logic programming ; abstract unification ; rational trees ; set - sharing ; freeness ; linearity . |
the rapid proliferation of smart mobile devices has triggered an unprecedented growth of the global mobile data traffic .hetnets have been proposed as an effective way to meet the dramatic traffic growth by deploying short range small - bss together with traditional macro - bss , to provide better time or frequency reuse . however, this approach imposes a significant challenge of providing expensive high - speed backhaul links for connecting all the small - bss to the core network .caching at small - bss is a promising approach to alleviate the backhaul capacity requirement in hetnets .many existing works have focused on optimal cache placement at small - bss , which is of critical importance in cache - enabled hetnets .for example , in and , the authors consider the optimal content placement at small - bss to minimize the expected downloading time for files in a single macro - cell with multiple small - cells .file requests which can not be satisfied locally at a small - bs are served by the macro - bs .the optimization problems in and are np - hard , and low - complexity solutions are proposed . in ,the authors propose a caching design based on file splitting and mds encoding in a single macro - cell with multiple small - cells .file requests which can not be satisfied locally at a small - bs are served by the macro - bs , and backhaul rate analysis and optimization are considered .note that the focuses of are on performance optimization of caching design . in ,the authors consider caching the most popular files at each small - bs in large - scale cache - enabled small - cell networks or hetnets , with backhaul constraints .the service rates of uncached files are limited by the backhaul capacity . in ,the authors propose a partion - based combined caching design in a large - scale cluster - centric small - cell network , without considering backhaul constraints . in ,the authors consider two caching designs , i.e. , caching the most popular files and random caching of a uniform distribution , at small - bss in a large - scale cache - enabled hetnet , without backhaul constraints .file requests which can not be satisfied at a small - bs are served by macro - bss . in ,the authors consider random caching of a uniform distribution in a large - scale cache - enabled small - cell network , without backhaul constraints , assuming that content requests follow a uniform distribution .note that the focuses of are on performance analysis of caching designs .on the other hand , enabling multicast service at bss in hetnets is an efficient way to deliver popular contents to multiple requesters simultaneously , by effectively utilizing the broadcast nature of the wireless medium . in and ,the authors consider a single macro - cell with multiple small - cells with backhaul costs .specifically , in , the optimization of caching and multicasting , which is np - hard , is considered , and a simplified solution with approximation guarantee is proposed . 
in ,the optimization of dynamic multicast scheduling for a given content placement , which is a dynamic programming problem , is considered , and a low - complexity optimal numerical solution is obtained .the network models considered in do not capture the stochastic natures of channel fading and geographic locations of bss and users .the network models considered in are more realistic and can reflect the stochastic natures of signal and interference .however , the simple identical caching design considered in does not provide spatial file diversity ; the combined caching design in does not reflect the popularity differences of files in each of the three categories ; and the random caching design of a uniform distribution in can not make use of popularity information .hence , the caching designs in may not lead to good network performance . on the other hand , consider analysis and optimization of caching in large - scale cache - enabled single - tier networks .specifically , considers random caching at bss , and analyze and optimize the hit probability .reference considers random caching with contents being stored at each bs in an i.i.d .manner , and analyzes the minimum offloading loss . in ,the authors study the expected costs of obtaining a complete content under random uncoded caching and coded caching strategies , which are designed only for different pieces of a single content .in , the authors consider analysis and optimization of joint caching and multicasting .however , the proposed caching and multicasting designs in may not be applicable to hetnets with backhaul constraints . in summary , to facilitate designs of practical cache - enabled hetnets for massive content dissemination , further studies are required to understand the following key questions . how do physical layer and content - related parameters fundamentally affect performance of cache - enabled hetnets ? how can caching and multicasting jointly and optimally assist massive content dissemination in cache - enabled hetnets ? in this paper , we consider the analysis and optimization of joint caching and multicasting to improve the efficiency of massive content dissemination in a large - scale cache - enabled hetnet with backhaul constraints .our main contributions are summarized below . first , we propose a hybrid caching design with certain design parameters , consisting of identical caching in the macro - tier and random caching in the pico - tier , which can provide spatial file diversity .we propose a corresponding multicasting design for efficient content dissemination by exploiting broadcast nature of the wireless medium . then , by carefully handling different types of interferers and adopting appropriate approximations , we derive tractable expressions for the successful transmission probability in the general region and the asymptotic region , utilizing tools from stochastic geometry .these expressions reveal the impacts of physical layer and content - related parameters on the successful transmission probability . next , we consider the successful transmission probability maximization by optimizing the design parameters , which is a very challenging mixed discrete - continuous optimization problem .we propose a two - step optimization framework to obtain a near optimal solution with superior performance and manageable complexity .specifically , we first characterize the structural properties of the asymptotically optimal solutions . 
then , based on these properties , we obtain the near optimal solution , which achieves better performance in the general region than any asymptotically optimal solution , under a mild condition . finally , by numerical simulations , we show that the near optimal solution achieves a significant gain in successful transmission probability over some baseline schemes .we consider a two - tier hetnet where a macro - cell tier is overlaid with a pico - cell tier , as shown in fig .[ fig : system ] . the locations of the macro - bss and the pico - bssare spatially distributed as two independent homogeneous poisson point processes ( ppps ) and with densities and , respectively , where .the locations of the users are also distributed as an independent homogeneous ppp with density .we refer to the macro - cell tier and the pico - cell tier as the tier and the tier , respectively .consider the downlink scenario .each bs in the tier has one transmit antenna with transmission power ( ) , where .each user has one receive antenna .all bss are operating on the same frequency band of total bandwidth ( hz ) .consider a discrete - time system with time being slotted and study one slot of the network .we consider both large - scale fading and small - scale fading . due tolarge - scale fading , a transmitted signal from the tier with distance is attenuated by a factor , where is the path loss exponent of the tier . for small - scale fading , we assume rayleigh fading channels .let denote the set of files ( e.g. , data objects or chucks of data objects ) in the hetnet .for ease of illustration , assume that all files have the same size .each file is of certain popularity , which is assumed to be identical among all users .each user randomly requests one file , which is file with probability , where .thus , the file popularity distribution is given by , which is assumed to be known apriori .in addition , without loss of generality ( w.l.o.g . ) , assume .the hetnet consists of cache - enabled macro - bss and pico - bss .each bs in the tier is equipped with a cache of size to store different files .assume .each macro - bs is connected to the core network via a wireline backhaul link of transmission capacity ( files / slot ) , i.e. , each macro - bs can retrieve at most different files from the core network in each slot .note that , and reflect the storage and backhaul resources in the cache - enabled hetnet .we are interested in the case where the storage and backhaul resources are limited , and may not be able to satisfy all file requests . in this section ,we propose a joint caching and multicasting design with certain design parameters , which can provide high spatial file diversity and ensure efficient content dissemination . to provide high spatial file diversity , we propose a _ hybrid caching design _ consisting of identical caching in the 1st tier and random caching in the 2nd tier , as illustrated in fig .[ fig : system ] .let denote the set of files cached in the tier . 
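as a concrete illustration of the network model just described , the following python sketch draws the macro - bss , the pico - bss and the users as independent homogeneous ppps on a finite square window and lets each user request one file according to a given popularity distribution ; the window size , the densities and the zipf - like popularity exponent are illustrative values of ours and are not taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_ppp(density, side):
    """Homogeneous PPP of the given density on a side-by-side window:
    a Poisson number of points, placed uniformly at random."""
    n = rng.poisson(density * side * side)
    return rng.uniform(0.0, side, size=(n, 2))

# illustrative parameters (not the values used in the paper)
side = 2.0
lambda_macro, lambda_pico, lambda_user = 1.0, 10.0, 100.0

macro_bs = draw_ppp(lambda_macro, side)
pico_bs = draw_ppp(lambda_pico, side)
users = draw_ppp(lambda_user, side)

# file popularity: a Zipf-like distribution over N files, a_1 >= ... >= a_N
N, gamma = 100, 0.8
pop = np.arange(1, N + 1) ** (-gamma)
pop /= pop.sum()

# each user independently requests one file according to the popularity
requests = rng.choice(N, size=len(users), p=pop)

print(len(macro_bs), len(pico_bs), len(users), requests[:5])
```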
specifically ,our hybrid caching design satisfies the following requirements : ( i ) _ non - overlapping caching across tiers _ : each file is stored in at most one tier ; ( ii ) _ identical caching in the 1st tier _ : each macro - bs stores the same set of ( different ) files ; and ( iii ) _ random caching in the 2nd tier _ : each pico - bs randomly stores different files out of all files in , forming a subset of .thus , we have the following constraint : to further illustrate the random caching in the 2nd tier , we first introduce some notations .we say every different files in form a combination .thus , there are different combinations in total .let denote the set of combinations .combination can be characterized by an -dimensional vector , where indicates that file is included in combination and otherwise .note that there are 1 s in each .denote as the set of files contained in combination . each pico - bs stores one combination at random , which is combination with probability satisfying : ( for the 2nd tier ) .then , based on the insights obtained , we shall focus on reducing complexity while maintaining superior performance . ] denote . to facilitate the analysis in later sections , based on , we also define the probability that file is stored at a pico - bs , i.e. , where denotes the set of combinations containing file .denote .note that and depend on .thus , in this paper , we use and when emphasizing this relation .therefore , the hybrid caching design in the cache - enabled hetnet is specified by the design parameters . to efficiently utilize backhaul links and ensure high spatial file diversity, we only retrieve files not stored in the cache - enabled hetnet via backhaul links .let denote the set of files which can be retrieved by each macro - bs from the core network .thus , we have the following constraint : therefore , the file distribution in the cache - enabled hetnet is fully specified by the hybrid caching design . in this part, we propose a multicasting design associated with the hybrid caching design .first , we introduce the user association under the proposed hybrid caching design . in the cache - enabled hetnet ,a user accesses to a tier based on its desired file .specifically , each user requesting file is associated with the nearest macro - bs and is referred to as a macro - user . while , each user requesting file is associated with the nearest pico - bs storing a combination ( containing file ) and is referred to as a pico - user .the associated bs of each user is called its serving bs , and offers the maximum long - term average receive power for its desired file .note that under the proposed hybrid caching design , the serving bs of a macro - user is its nearest macro - bs , while the serving bs of a pico - user ( affected by ) may not be its geographically nearest bs .we refer to this association mechanism as the _ content - centric association _ in the cached - enabled hetnet , which is different from the traditional _ connection - based association _ in hetnets .now , we introduce file scheduling in the cache - enabled hetnet. 
each bs will serve all the cached files requested by its associated users .each macro - bs only serves at most uncached files requested by its associated users , due to the backhaul constraint for retrieving uncached files .in particular , if the users of a macro - bs request smaller than or equal to different uncached files , the macro - bs serves all of them ; if the users of a macro - bs request greater than different uncached files , the macro - bs will randomly select different requested uncached files to serve , out of all the requested uncached files according to the uniform distribution .we consider multicasting in the cache - enabled hetnet for efficient content dissemination .suppose a bs schedules to serve requests for different files .then , it transmits each of the files at rate ( bit / second ) and over of total bandwidth w using fdma .all the users which request one of the files from this bs try to decode the file from the single multicast transmission of the file at the bs .note that , by avoiding transmitting the same file multiple times to multiple users , this content - centric transmission ( multicast ) can improve the efficiency of the utilization of the wireless medium and reduce the load of the wireless links , compared to the traditional connection - based transmission ( unicast ) . from the above illustration, we can see that the proposed multicasting design is also affected by the proposed hybrid caching design .therefore , the design parameters affect the performance of the proposed joint caching and multicasting design .in this paper , w.l.o.g . , we study the performance of the typical user denoted as , which is located at the origin .we assume all bss are active .suppose requests file .let denote the index of the tier to which belongs , and let denote the other tier .let denote the index of the serving bs of .we denote and as the distance and the small - scale channel between bs and , respectively .we assume the complex additive white gaussian noise of power at .when requests file and file is transmitted by bs , the signal - to - interference plus noise ratio ( sinr ) of is given by when requests file ( ) , let ( ) and ( ) denote the numbers of different cached and uncached files requested by the users associated with bs , respectively .when requests file , let denote the number of different cached files requested by the users associated with bs .note that are discrete random variables , the probability mass functions ( p.m.f.s ) of which depend on , and the design parameters .in addition , if , bs will transmit file for sure ; if , for given , bs will transmit file with probability . given that file is transmitted , it can be decoded correctly at if the channel capacity between bs and is greater than or equal to .requesters are mostly concerned with whether their desired files can be successfully received .therefore , in this paper , we consider the successful transmission probability of a file requested by as the network performance metric . 
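the association and scheduling rules just described can be summarized by the following python sketch , which ignores fading and sinr and only tracks which distinct files every bs multicasts and how the total bandwidth is split among them ; the cache contents , the backhaul cap ` k_backhaul ` and all identifiers are illustrative assumptions of ours .

```python
import numpy as np

rng = np.random.default_rng(1)

def serve_requests(user_pos, user_file, macro_pos, pico_pos,
                   macro_cache, pico_cache, k_backhaul, bandwidth):
    """Content-centric association and multicast scheduling (illustrative).

    macro_cache : set of files stored at every macro-BS (identical caching)
    pico_cache  : list of sets; pico_cache[j] is the combination at pico-BS j
    k_backhaul  : max number of distinct uncached files a macro-BS may fetch
    Returns a dict mapping each loaded BS to (set of multicast files,
    bandwidth per multicast stream under the equal FDMA split).
    """
    macro_load, pico_load = {}, {}
    for u, f in zip(user_pos, user_file):
        carriers = [j for j, cache in enumerate(pico_cache) if f in cache]
        if carriers:
            # pico-cached file: nearest pico-BS whose combination contains it
            d = np.linalg.norm(pico_pos[carriers] - u, axis=1)
            pico_load.setdefault(carriers[int(d.argmin())], set()).add(f)
        else:
            # macro-cached or uncached file: nearest macro-BS
            d = np.linalg.norm(macro_pos - u, axis=1)
            macro_load.setdefault(int(d.argmin()), set()).add(f)

    schedule = {}
    for j, files in macro_load.items():
        cached = files & macro_cache
        uncached = list(files - macro_cache)
        rng.shuffle(uncached)                # uniform choice under the cap
        served = cached | set(uncached[:k_backhaul])
        schedule[("macro", j)] = (served, bandwidth / max(len(served), 1))
    for j, files in pico_load.items():
        schedule[("pico", j)] = (files, bandwidth / max(len(files), 1))
    return schedule
```

the position arrays can be produced , for instance , by the ppp sketch given earlier ; note that the load of a bs is counted in distinct files rather than in users , which is exactly why the analysis below works with the numbers of different cached and uncached files requested at each bs .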
by total probability theorem ,the successful transmission probability under the proposed scheme is given by : where is given by , and and are given by and , respectively .note that in and , each term multiplied by represents the successful transmission probability of file .later , we shall see that under the proposed caching and multicasting design for content - oriented services in the cache - enabled hetnet , the successful transmission probability is sufficiently different from the traditional rate coverage probability studied for connection - oriented services .in particular , the successful transmission probability considered in this paper not only depends on the physical layer parameters , such as the macro and pico bs densities and , user density , path loss exponents and , bandwidth , backhaul capacity and transmit signal - to - noise ratios ( snrs ) and , but also relies on the content - related parameters , such as the popularity distribution , the cache sizes and , and the design parameters . while , the traditional rate coverage probability only depends on the physical layer parameters . in addition, the successful transmission probability depends on the physical layer parameters in a different way from the traditional rate coverage probability .for example , the content - centric association leads to different distributions of the locations of serving and interfering bss ; the multicasting transmission results in different file load distributions at each bs ; and the cache - enabled architecture makes content availability related to bs densities .in this section , we study the successful transmission probability under the proposed caching and multicasting design for given design parameters .first , we analyze the successful transmission probability in the general region .then , we analyze the asymptotic transmission probability in the high snr and user density region . in this part, we would like to analyze the successful transmission probability in the general region , using tools from stochastic geometry .in general , file loads , , , , and sinr are correlated in a complex manner , as bss with larger association regions have higher file load and lower sinr ( due to larger user to bs distance ) . for the tractability of the analysis , as in and , the dependence is ignored . therefore , to obtain the successful transmission probability in , we analyze the distributions of , , , , and the distribution of , separately .first , we calculate the p.m.f.s of and for as well as the p.m.f.s of and for . in calculating these p.m.f.s, we need the probability density function ( p.d.f . ) of the size of the voronoi cell of w.r.t .file .note that this p.d.f .is equivalent to the p.d.f . of the size of the voronoi cellto which a randomly chosen user belongs . based on a tractable approximated form of this p.d.f . 
in , which is widely used in existing literature , we obtain the p.m.f.s of , , and .[ p.m.f.s of , and ] the p.m.f.s of and for and the p.m.f.s of and for are given by =g(\mathcal f_{1,-n}^c , k^c-1),\quad k^c=1,\cdots , k_1^c,\label{eqn : k-1-c}\\ & \pr \left[\overline{k}_{1,n,0}^b = k^b\right]=g(\mathcal f_{1}^b , k^b),\quad k^b=0,\cdots , f_1^b,\label{eqn : k-1-b - bar}\\ & \pr \left[\overline{k}_{1,n,0}^c = k^c\right]= g(\mathcal f_{1}^c , k^c),\quad k^c=0,\cdots , k_1^c,\label{eqn : k-1-c - bar}\\ & \pr \left[k_{1,n,0}^b = k^b\right]=g(\mathcal f_{1,-n}^b , k^b-1),\quad k^b=1,\cdots , f_1^b,\label{eqn : k-1-b } \ ] ] where , and .[ lem : pmf - k - m ] please refer to appendix a. next , we obtain the p.m.f .of for . in calculating the p.m.f . of , we need the p.d.f . of the size of the voronoi cell of w.r.t .file when contains combination . however , this p.d.f .is very complex and is still unknown . for the tractability of the analysis , as in , we approximate this p.d.f . based on a tractable approximated form of the p.d.f . of the size of the voronoi cell to which a randomly chosen user belongs , which is widely used in existing literature. under this approximation, we obtain the p.m.f . of .[ p.m.f . of ]the p.m.f . of for given by \nonumber\\ & = \sum_{i\in \mathcal i_n}\frac{p_i}{t_n}\sum_{\mathcal x\in \left\{\mathcal s \subseteq\mathcal n_{i ,-n } : |\mathcal s|=k^c-1\right\ } } \prod\limits_{m\in \mathcal x}\left(1-\left(1+\frac{a_m\lambda_u}{3.5t_m\lambda_2}\right)^{-4.5}\right)\prod\limits_{m\in { \mathcal n_{i ,-n}\setminus \mathcal x}}\left(1+\frac{a_m\lambda_u}{3.5t_m\lambda_2}\right)^{-4.5},\nonumber\\ & \hspace{12cm}k^c=1,\cdots , k_2^c,\label{eqn : k - pmf}\end{aligned}\ ] ] where .[ lem : pmf - k ] please refer to appendix b. the distributions of the locations of desired transmitters and interferers are more involved than those in the traditional connection - based hetnets .thus , it is more challenging to analyze the p.d.f . of .when is a macro - user , as in the traditional connection - based hetnets , there are two types of interferers , namely , i ) all the other macro - bss besides its serving macro - bs , and ii ) all the pico - bss .when is a pico - user , different from the traditional connection - based hetnets , there are three types of interferers , namely , i ) all the other pico - bss storing the combinations containing the desired file of besides its serving pico - bs , ii ) all the pico - bss without the desired file of , and iii ) all the macro - bss . by carefully handling these distributions, we can derive the p.d.f .of , for and , respectively .then , based on lemma [ lem : pmf - k - m ] and lemma [ lem : pmf - k ] as well as the p.d.f . of , we can derive the successful transmission probability .[ performance ] the successful transmission probability of is given by \pr[\overline{k}_{1,n,0}^b = k^b]f_{1,{k^c+\min\{k_1^b , k^b\}}}\nonumber\\ & + \sum_{n\in \mathcal f_1^b}a_{n } \sum_{k^c=0}^{k_1^c}\sum_{k^b=1}^{f_1^b } \pr [ \overline{k}_{1,n,0}^c = k^c]\pr [ k_{1,n,0}^b = k^b ] \frac{\min\{k_1^b , k^b\}}{k^b } f_{1,{k^c+\min\{k_1^b , k^b\}}}\nonumber\\ & + \sum_{n\in \mathcal f_2^c}a_{n } \sum_{k^c=1}^{k_2^c } \pr[k_{2,n,0}=k^c]f_{2,k^c}(t_n),\end{aligned}\ ] ] where the p.m.f.s of , , and are given by lemma [ lem : pmf - k - m ] and lemma [ lem : pmf - k ] , and are given by and , and is given by .here , and denote the complementary incomplete beta function and the beta function , respectively .[ thm : generalkmulti ] please refer to appendix c. 
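the p.m.f.s in the two lemmas above are , in essence , poisson - binomial distributions written as sums over subsets of files , which is exponential if evaluated literally . the following python sketch computes the same type of distribution , the law of the number of distinct files requested by at least one associated user , with a standard quadratic - time dynamic program , starting from the per - file request probability that survives in the displayed formula ( the usual approximation of the voronoi cell size distribution with parameter 3.5 ) ; the numerical values and all names are illustrative assumptions of ours .

```python
import numpy as np

def per_file_request_prob(a_m, t_m, lambda_u, lambda_2):
    """Probability that file m is requested by at least one user associated
    with the tagged pico-BS, under the 3.5-parameter approximation of the
    Voronoi cell size distribution."""
    return 1.0 - (1.0 + a_m * lambda_u / (3.5 * t_m * lambda_2)) ** (-4.5)

def distinct_file_load_pmf(probs):
    """P.m.f. of the number of distinct files (among the given ones) that are
    requested by at least one associated user: a Poisson-binomial law,
    evaluated by dynamic programming instead of subset enumeration."""
    pmf = np.zeros(len(probs) + 1)
    pmf[0] = 1.0
    for q in probs:
        pmf[1:] = pmf[1:] * (1.0 - q) + pmf[:-1] * q
        pmf[0] *= 1.0 - q
    return pmf

# illustrative numbers (not taken from the paper)
a = np.array([0.30, 0.20, 0.10, 0.05])   # popularities of the other cached files
t = np.array([0.90, 0.70, 0.50, 0.40])   # their caching probabilities
probs = per_file_request_prob(a, t, lambda_u=100.0, lambda_2=10.0)
pmf = distinct_file_load_pmf(probs)
print(probs)
print(pmf, pmf.sum())                    # the p.m.f. sums to one
```

adding one to the resulting load reproduces the kind of conditioning used in the lemmas , where the file requested by the typical user is always counted .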
from theorem [ thm : generalkmulti ] , we can see that in the general region , the physical layer parameters , , , , , , , , and the design parameters jointly affect the successful transmission probability .the impacts of the physical layer parameters and the design parameters on are coupled in a complex manner . in this part , to obtain design insights , we focus on analyzing the asymptotic successful transmission probability in the high snr and user density region . note that in the remaining of the paper , when considering the high snr region , we assume and for some and , and let . on the other hand , in the high user density region where , discrete random variables , and in distribution .define , , and .note that when , and become functions of instead of . from theorem [ thm : generalkmulti ] , we have the following lemma . when and , we have , where and . here , and are given by and , and is given by .please refer to appendix d. [ lem : asym - perf ] note that represents the successful transmission probability for file ( given that this file is transmitted ) , and represents the successful transmission probability for file , in the asymptotic region . for given ,we interpret lemma [ lem : asym - perf ] below .when , the successful transmission probability of file is the same as that of file . in other words , when backhaul capacity is sufficient , storing a file at a macro - bs or retrieving the file via the backhaul link makes no difference in successful transmission probability .when , the successful transmission probability of file is greater than that of file .in other words , when backhaul capacity is limited , storing a file at a macro - bs is better than retrieving the file via the backhaul link .note that is an increasing function ( please refer to appendix e for the proof ) .thus , for any satisfying , the successful transmission probability of file is greater than that of file .that is , a file of higher probability being cached at a pico - bs has higher successful transmission probability . later , in section[ sec : new - opt ] , we shall see that the structure of facilitates the optimization of .next , we further study the symmetric case where in the high snr and user density region . from lemma [ lem : asym - perf ], we have the following lemma . when , and , we have , where and . here , is given by , and , and are given by [ lem : asym - perf - v2 ] please refer to appendix d. from lemma [ lem : asym - perf - v2 ] , we can see that in the high snr and user density region , when , the impact of the physical layer parameters , and , captured by , and , and the impact of the design parameters on the successful transmission probability can be easily separated .later , in section [ sec : new - opt ] , we shall see that this separation greatly facilitates the optimization of .[ fig : verification - kmulti ] plots the successful transmission probability versus the transmit snr and the user density . fig .[ fig : verification - kmulti ] verifies theorem [ thm : generalkmulti ] and lemma [ lem : asym - perf ] ( lemma [ lem : asym - perf - v2 ] ) , and demonstrates the accuracy of the approximations adopted .[ fig : verification - kmulti ] also indicates that provides a simple and good approximation for in the high transmit snr region ( e.g. , db ) and the high user density region ( e.g. 
, ) .in this section , we formulate the optimal caching and multicasting design problem to maximize the successful transmission probability , which is a mixed discrete - continuous optimization problem . to facilitate the solution of this challenging optimization problem in the next section, we also formulate the asymptotically optimal caching and multicasting design problem to maximize the asymptotic successful transmission probability in the high snr and user density region .the caching and multicasting design fundamentally affects the successful transmission probability via the design parameters .we would like to maximize by carefully optimizing .[ performance optimization][prob : opt ] where is given by .note that problem [ prob : opt ] is a mixed discrete - continuous optimization problem with two main challenges .one is the choice of the sets of files and ( discrete variables ) stored in the two tiers , and the other is the choice of the caching distribution ( continuous variables ) of random caching for the 2nd tier .we thus propose an equivalent alternative formulation of problem [ prob : opt ] which naturally subdivides problem [ prob : opt ] according to these two aspects .[ equivalent optimization][prob : opt - eq ] for given , the optimization problem in is in general a non - convex optimization problem with a large number of optimization variables ( i.e. , optimization variables ) , and it is difficult to obtain the global optimal solution and calculate . even given , the optimization problem in is a discrete optimization problem over a very large constraint set , and is np - complete in general . therefore , problem [ prob : opt - eq ] is still very challenging . to facilitate the solution of the challenging mixed discrete - continuous optimization problem, we also formulate the optimization of the asymptotic successful transmission probability given in lemma [ lem : asym - perf ] , i.e. , which has a much simpler form than given in theorem [ thm : generalkmulti ] .equivalently , we can consider the asymptotic version of problem [ prob : opt - eq ] in the high snr and user density region .[ asymptotic optimization][prob : opt - asymp - eq ] the optimal solution to the optimization in is written as and is given by where the optimal solution to the optimization in is written as .the optimal solution to problem [ prob : opt - asymp - eq ] is given by , which is the asymptotic optimal solution to problem [ prob : opt - eq ] ( problem [ prob : opt ] ) .based on lemma 2 in , we know that the optimization in is equivalent to the following optimization for given where the optimal solution is written as .in addition , any in convex polyhedron is an optimal solution to the optimization in , where is given by : the vertices of the convex polyhedron can be obtained based on the simplex method , and any can be constructed from all the vertices using convex combination .thus , when optimizing the asymptotic performance for given , we can focus on the optimization in instead of the optimization in .in this section , we propose a two - step optimization framework to obtain a near optimal solution with manageable complexity and superior performance in the general region . we first characterize the structural properties of the asymptotically optimal solutions . then , based on these properties , we obtain a near optimal solution in the general region . 
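the last step above recovers a combination distribution achieving given per - file caching probabilities from the vertices of a convex polyhedron . for completeness , the following python sketch shows a different , commonly used constructive realization of such marginals , included purely for illustration and not claimed to be the construction used in the paper : the marginals are packed contiguously into unit - length rows and a single uniform offset selects one file per row .

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_combination(T, k):
    """Sample a set of exactly k distinct files such that file n is included
    with probability T[n], assuming 0 <= T[n] <= 1 and sum(T) == k.

    The files are packed contiguously into an interval of length k, viewed as
    k unit-length rows; one uniform offset u picks a point in every row.
    Since each T[n] <= 1, no file can be picked twice."""
    T = np.asarray(T, dtype=float)
    assert np.all((T >= 0.0) & (T <= 1.0)) and np.isclose(T.sum(), k)
    ends = np.cumsum(T)              # right end of each file's interval
    starts = ends - T
    u = rng.uniform()
    chosen = set()
    for row in range(k):
        point = row + u              # the same offset in every unit row
        n = int(np.searchsorted(ends, point, side="right"))
        if n < len(T) and starts[n] <= point < ends[n]:
            chosen.add(n)
    return chosen

# empirical check that the marginals are matched
T = [0.9, 0.6, 0.5, 0.5, 0.3, 0.2]       # sums to 3 (illustrative values)
trials = 20000
counts = np.zeros(len(T))
for _ in range(trials):
    for n in sample_combination(T, 3):
        counts[n] += 1
print(counts / trials)                   # approximately equal to T
```

each sampled set contains k distinct files and file n is included with probability t_n ; this is all the asymptotic objective depends on , since in the high snr and user density region the performance is a function of the caching probabilities rather than of the full combination distribution .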
in this part, we study the continuous part and the discrete part of the asymptotic optimization in problem [ prob : opt - asymp - eq ] , respectively , to obtain design insights into the solution in the general region .as the structure of is very complex , it is difficult to obtain the closed - form optimal solution to the optimization in . by exploring the structural properties of , we know that files of higher popularity get more storage resources . [ structural property of optimization in ] given any satisfying and , if , then .[lem : mono - general - asym ] please refer to appendix e. now , we focus on obtaining a numerical solution to the optimization in . for given satisfying , the optimization in is a continuous optimization of a differentiable function over a convex set . in general , it is difficult to show the convexity of in . a stationary point to the optimization in can be obtained using standard gradient projection methods . here , we consider the diminishing stepsize satisfying and propose algorithm [ alg : local ] . in step 2 of algorithm [ alg : local ] , , where is given by .step 3 is the projection of onto the set of the variables satisfying the constraints in and .it is shown in that in algorithm [ alg : local ] converges to a stationary point of the optimization in as .on the other hand , as illustrated in the discussion of lemma [ lem : asym - perf ] , is actually a cumulative density function ( c.d.f . ) , and is concave in most of the cases we are interested in . if in is concave w.r.t . , the differentiable function is concave w.r.t . , and hence , the optimization in is a convex problem .then , in algorithm [ alg : local ] converges to the optimal solution to the optimization in as . in other words , under a mild condition ( i.e. , is convex ) , we can obtain the optimal solution to the optimization in using algorithm [ alg : local ] .[ alg : local ] next , we consider the symmetric case , i.e. , , in the high snr and user density region . in this case , we can easily verify that ( given in lemma [ lem : asym - perf - v2 ] ) is convex and slater s condition is satisfied , implying that strong duality holds . using kkt conditions , we can obtain the closed - form solution to the optimization in in this case .[ asymptotically optimal solution when for given , when , and , the optimal solution to the optimization in is given by ^+,1\right\ } , \n\in \mathcal f_2^c,\label{eqn : opt - k - infty}\end{aligned}\ ] ] where ^+\triangleq \max\{x,0\} ] .the p.m.f . of depends on the p.d.f . of the size of the voronoi cell of macro - bs ,i.e. , the p.d.f . of the size of the voronoi cellto which a randomly chosen user belongs .thus , we can calculate the p.m.f .of using lemma 3 of as follows therefore , we complete the proof .when typical user requests file , let random variable denote whether file is requested by the users associated with serving pico - bs when pico - bs contains combination . when requests file and serving pico - bs contains combination , we have .thus , we have the probability that pico - bs contains combination is .thus , by the law of total probability , we have thus , to prove , it remains to calculate $ ] .the p.m.f . of depends on the p.d.f . of the size of the voronoi cell of pico - bs w.r.t .file when pico - bs contains combination , which is unknown .we approximate this p.d.f .based on the known result of the p.d.f . of the size of the voronoi cellto which a randomly chosen user belongs . 
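returning to algorithm [ alg : local ] described earlier in this section , its structure , a gradient step followed by a projection onto the feasible set of caching probabilities , with a diminishing stepsize , can be sketched in python as follows . since the objective of the paper is not reproduced here , the sketch maximizes an illustrative concave surrogate of the form sum of a_n f ( t_n ) , and the euclidean projection onto the set of marginals between zero and one summing to the pico cache size is computed by bisection ; all of these are assumptions of ours rather than the exact implementation behind the algorithm .

```python
import numpy as np

def project_capped_simplex(y, k, tol=1e-10):
    """Euclidean projection of y onto {x : 0 <= x <= 1, sum(x) = k},
    by bisection on the dual variable (a water-filling level)."""
    lo, hi = y.min() - 1.0, y.max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.clip(y - mu, 0.0, 1.0).sum() > k:
            lo = mu
        else:
            hi = mu
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def gradient_projection(a, k, f_prime, iters=2000):
    """Maximize sum_n a[n] * F(T[n]) over the capped simplex with a
    diminishing stepsize 1/t: a gradient step followed by a projection,
    mirroring the structure of Algorithm [alg:local]."""
    T = np.full(len(a), k / len(a))          # feasible starting point
    for t in range(1, iters + 1):
        grad = a * f_prime(T)
        T = project_capped_simplex(T + grad / t, k)
    return T

# illustrative concave, increasing surrogate F(x) = x / (1 + c x) on [0, 1]
c = 2.0

def f_prime(x):
    # derivative of the surrogate F
    return 1.0 / (1.0 + c * x) ** 2

a = np.array([0.45, 0.25, 0.15, 0.10, 0.05])     # file popularities
T_star = gradient_projection(a, k=2, f_prime=f_prime)
print(T_star, T_star.sum())   # more popular files receive larger probabilities
```

with a concave objective the iterates converge to the optimum , which matches the remark above that the optimization becomes convex , and the stationary point returned by the algorithm is globally optimal , whenever the underlying function is concave ; the monotone shape of the returned vector also illustrates the structural property that files of higher popularity get more storage resources .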
under this approximation, we can calculate the p.m.f .of using lemma 3 of as followsbased on ( [ eqn : succ - prob - def ] ) , to prove theorem [ thm : generalkmulti ] , we calculate and , respectively . when is a macro - user , as in the traditional connection - based hetnets , there are two types of interferers , namely , i ) all the other macro - bss besides its serving macro - bs , and ii ) all the pico - bss .thus , we rewrite the sinr expression in as follows : where and .next , we calculate the conditional successful transmission probability of file requested by conditioned on when the file load is , i.e. , where is obtained based on ( [ eqn : sinr_v3 ] ) , ( b ) is obtained by noting that , and ( c ) is due to the independence of the rayleigh fading channels and the independence of the ppps . to calculate according to ,we first calculate and , respectively .the expression of is calculated as follows : where is obtained by utilizing the probability generating functional of ppp ( * ? ? ?* page 235 ) , and is obtained by first replacing with , and then replacing with .similarly , the expression of is calculated as follows : substituting ( [ eq : lt_k1_n_m ] ) and ( [ eq : lt_k1_m1 ] ) into ( [ eq : condi_cp_k_m ] ) , we obtain as follows : now , we calculate by first removing the condition of on .note that we have the p.d.f .of as .thus , we have : therefore , by ( [ eqn : succ - prob - def-1 ] ) and by letting in ( [ eq : cp_k_n_m ] ) , we have when is a pico - user , different from the traditional connection - based hetnets , there are three types of interferers , namely , i ) all the other pico - bss storing the combinations containing the desired file of besides its serving pico - bs , ii ) all the pico - bss without the desired file of , and iii ) all the macro - bss .thus , we rewrite the sinr expression in as follows : where is the point process generated by pico - bss containing file combination , is the point process generated by pico - bss containing file combination , , and . due to the random caching policy and independent thinning ( * ? ? ?* page 230 ) , we obtain that is a homogeneous ppp with density and is a homogeneous ppp with density .next , we calculate the conditional successful transmission probability of file requested by conditioned on when the file load is , denoted as .\ ] ] similar to ( [ eq : condi_cp_k_m ] ) and based on ( [ eqn : sinr_v2 ] ) , we have : to calculate according to , we first calculate , and , respectively .similar to ( [ eq : lt_k1_n_m ] ) and ( [ eq : lt_k1_m1 ] ) , we have : substituting ( [ eq : lt_k1_n ] ) , ( [ eq : lt_k1_p1 ] ) and ( [ eq : lt_k1_p2 ] ) into ( [ eq : condi_cp_k ] ) , we obtain as follows : now , we calculate by first removing the condition of on .note that we have the p.d.f .of as , as pico - bss storing file form a homogeneous ppp with density .thus , we have : therefore , by ( [ eqn : succ - prob - def-2 ] ) and by letting in ( [ eq : cp_k_n ] ) , we havewhen , and .when , discrete random variables , and in distribution . thus , when , , , and , we can show and . thus , we can prove lemma [ lem : asym - perf ] .when , , , , and we have : where , and are given by , and . 
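the appendix repeatedly evaluates laplace transforms of ppp shot - noise interference through the probability generating functional . as a numerical check of that template , the following sketch ( single tier , rayleigh fading , path - loss exponent alpha , density lam ; all parameter names are illustrative and this is not the paper's two - tier expression ) evaluates the standard laplace transform and the resulting coverage integral by quadrature .

import numpy as np
from scipy import integrate

def laplace_interference(s, lam, r0, alpha):
    # laplace transform of ppp shot-noise interference with rayleigh fading and
    # path loss x**(-alpha), interferers farther than r0 from the typical user:
    #   L_I(s) = exp(-2*pi*lam * int_{r0}^inf [s*x**(-alpha)/(1+s*x**(-alpha))] * x dx),
    # which is the probability generating functional step used throughout the appendix.
    integrand = lambda x: (s * x ** (-alpha) / (1.0 + s * x ** (-alpha))) * x
    val, _ = integrate.quad(integrand, r0, np.inf, limit=200)
    return np.exp(-2.0 * np.pi * lam * val)

def coverage_single_tier(lam, alpha, tau):
    # single-tier, interference-limited template of the appendix calculation:
    # condition on the serving distance r (pdf 2*pi*lam*r*exp(-pi*lam*r**2)),
    # multiply by the interference laplace transform at tau*r**alpha, then
    # decondition over r.
    def integrand(r):
        return 2 * np.pi * lam * r * np.exp(-np.pi * lam * r ** 2) \
               * laplace_interference(tau * r ** alpha, lam, r, alpha)
    val, _ = integrate.quad(integrand, 0.0, np.inf, limit=200)
    return val

print(coverage_single_tier(lam=1e-4, alpha=4.0, tau=1.0))   # ~0.56 for this setting

the two - tier expressions in the proof of theorem [ thm : generalkmulti ] follow the same template : one such factor is carried per interferer class ( the other macro - bss , the pico - bss storing the requested file , and the pico - bss without it ) together with the noise term , before deconditioning on the serving distance and the file load .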
noting that ( is a constant ) , we can solve integrals in and .thus , by lemma [ lem : asym - perf ] , we can prove lemma [ lem : asym - perf - v2 ] .to prove lemma [ lem : mono - general - asym ] , we first have the following lemma .[ monotonicity of ] is an increasing function of .[ lem : monotonicity - f-2-k ] by replacing with in ( [ eqn : f-2-k - infty ] ) , we have : when and , is a decreasing function of . because , and , and and are decreasing functions of .the integrand is an increasing function of for all .therefore , we can show that is an increasing function of .now , we prove lemma [ lem : mono - general - asym ] .let denote an optimal solution to probelm [ prob : opt - asymp - eq ] .consider satisfying .suppose . based on lemma [ lem : monotonicity - f-2-k ], we have .now , we construct a feasible solution to problem [ prob : opt - asymp - eq ] by choosing , , , and for all .thus , by lemma [ lem : asym - perf ] and the optimality of , we have : since , by ( [ eqn : contradiction - thm2-a ] ) , we have , which contradicts the assumption .therefore , by contradiction , we can prove lemma [ lem : mono - general - asym ] .for given , when , , and , the lagrangian of the optimization in is given by where and are the lagrange multipliers associated with , is the lagrange multiplier associated with , , and .thus , we have since strong duality holds , primal optimal and dual optimal , , satisfy kkt conditions , i.e. , ( i ) primal constraints : , , ( ii ) dual constraints and for all , ( iii ) complementary slackness and for all , and ( iv ) for all . by ( ii ) , ( iii ) , and ( iv ) , when , we have , , and ; when , we have , , , and ; when , we have , , and . therefore , we have . combining , we can prove lemma [ lem : solu - opt - infty ] .by constraints ( [ eqn : cache - constr ] ) and ( [ eqn : backhaul ] ) , we have , and . to prove property ( i ) of theorem [ thm : opt - prop ], it remains to prove .suppose there exists an optimal solution to probelm [ prob : opt - asymp - eq ] satisfying , then we have : now , we construct a feasible solution to problem [ prob : opt - asymp - eq ] , where consists of the most popular files of , , , for all and for all .by lemma [ lem : asym - perf ] , we have : thus , is not an optimal solution , which contradicts the assumption .therefore , by contradiction , we can prove for any optimal solution to probelm [ prob : opt - asymp - eq ] . since , and , we have .therefore , we can prove property ( i ) of theorem [ thm : opt - prop ] .first , we prove that there exists an optimal solution to problem [ prob : opt - asymp - eq ] , such that files in are consecutive . by lemma [ lem : asym - perf ] and shown in the proof of property ,we have : let denote the most ( least ) popular file in .suppose for any optimal solution to probelm [ prob : opt - asymp - eq ] , files in are not consecutive , i.e. , there exists satisfying .now , we can construct a feasible solution to probelm [ prob : opt - asymp - eq ] where files in are consecutive as follows . if , choose , , and for all . by lemma [ lem : asym - perf ], we have : where the inequality is due to and . , choose , , and for all .by lemma [ lem : asym - perf ] , we have : where the inequality is due to and . if , choose , , and for all . 
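the kkt bookkeeping in the proof of lemma [ lem : solu - opt - infty ] lost its symbols in extraction . the following generic template , for a separable concave objective with per - file weight a_n and cache - size budget k ( a reconstruction sketch under these assumptions , not the paper's exact objective ) , shows where the min { [ . ]^+ , 1 } structure comes from :
\begin{aligned}
\max_{p}\;\; \sum_{n} a_n\, g(p_n) \qquad \text{s.t.}\;\; \sum_n p_n = K,\;\; 0 \le p_n \le 1,
\end{aligned}
with g concave and increasing . the lagrangian is
\begin{aligned}
L(p,\nu,\lambda,\mu) = \sum_n a_n g(p_n) - \nu\Big(\sum_n p_n - K\Big) + \sum_n \lambda_n p_n - \sum_n \mu_n (p_n - 1),
\end{aligned}
stationarity gives a_n g'(p_n) = \nu - \lambda_n + \mu_n , and complementary slackness ( \lambda_n p_n = 0 , \mu_n (p_n - 1) = 0 ) leaves
\begin{aligned}
p_n^* = \min\Big\{ \big[\, (g')^{-1}(\nu^*/a_n) \,\big]^+ ,\; 1 \Big\},
\end{aligned}
with the single multiplier \nu^* fixed by the budget constraint , e.g. by bisection .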
by lemma [ lem : asym - perf ], we have : by ( [ eqn : controdiction - thm2-b2-leq ] ) , ( [ eqn : controdiction - thm2-b2-geq ] ) and ( [ eqn : controdiction - thm2-b2-eq ] ) , we know that if or , is not an optimal solution , which contradicts the assumption ; and if , we can always construct an optimal solution , satisfying that files in are consecutive .thus , we can prove that there exists an optimal solution to problem [ prob : opt - asymp - eq ] , such that files in are consecutive .in addition , by ( [ eqn : optimal - value - appendix - b2 ] ) , we know that whether file belongs to or makes no difference in the optimal successful transmission probability. therefore , we can prove the property ( ii ) of theorem [ thm : opt - prop ] .we prove that if , the most popular file belongs to for any optimal solution to problem [ prob : opt - asymp - eq ] .suppose that there exists an optimal solution to problem [ prob : opt - asymp - eq ] , such that the most popular file belongs to .let denote a file in .now , we can construct a feasible solution to probelm [ prob : opt - asymp - eq ] , where , , and for all . by lemma [ lem : asym - perf ], we have : since and , we have .thus , is not an optimal solution to problem [ prob : opt - asymp - eq ] , which contradicts the assumption . by contradiction, we prove that if , the most popular file belongs to for any optimal solution to problem [ prob : opt - asymp - eq ] , and hence in theorem [ thm : opt - prop ] ( ii ) satisfies .we prove that if , the most popular file belongs to for any optimal solution to problem [ prob : opt - asymp - eq ] .suppose that there exists an optimal solution to problem [ prob : opt - asymp - eq ] , such that file belongs to .let denote the most popular file in . based on lemma [ lem : mono - general - asym ], we have for any , and hence .now , we can construct a feasible solution to probelm [ prob : opt - asymp - eq ] , where , , and for all . by lemma [ lem : asym - perf ], we have : since and , we have .thus , is not an optimal solution to problem [ prob : opt - asymp - eq ] , which contradicts the assumption .therefore , we prove that if , the most popular file belongs to for any optimal solution to problem [ prob : opt - asymp - eq ] , and hence in theorem [ thm : opt - prop ] ( ii ) satisfies .x. wang , m. chen , t. taleb , a. ksentini , and v. leung , `` cache in the air : exploiting content caching and delivery techniques for 5 g systems , '' _ communications magazine , ieee _ , vol .52 , no . 2 ,pp . 131139 , february 2014 .k. shanmugam , n. golrezaei , a. dimakis , a. molisch , and g. caire , `` femtocaching : wireless content delivery through distributed caching helpers , '' _ information theory , ieee transactions on _ , vol .59 , no .12 , pp . 84028413 , dec 2013 .j. li , y. chen , z. lin , w. chen , b. vucetic , and l. hanzo , `` distributed caching for data dissemination in the downlink of heterogeneous networks , '' _ ieee transactions on communications _63 , no .10 , pp . 35533568 , oct 2015 .d. liu and c. yang , `` cache - enabled heterogeneous cellular networks : comparison and tradeoffs , '' in _ ieee int .conf . on commun .( icc ) _ , kuala lumpur , malaysia , june 2016 . [ online ] .available : http://arxiv.org/abs/1602.08255 z. chen , j. lee , t. q. s. quek , and m. kountouris , `` cooperative caching and transmission design in cluster - centric small cell networks , '' _ corr _ , vol .abs/1601.00321 , 2016 .[ online ] .available : http://arxiv.org/abs/1601.00321 s. t. ul hassan , m. 
bennis , p. h. j. nardelli , and m. latva - aho , `` caching in wireless small cell networks : a storage - bandwidth tradeoff , '' _ ieee communications letters _pp , no .99 , pp . 11 , 2016 .s. tamoor - ul - hassan , m. bennis , p. h. j. nardelli , and m. latva - aho , `` modeling and analysis of content caching in wireless small cell networks , '' _ corr _ , vol .abs/1507.00182 , 2015 .[ online ] .available : http://arxiv.org/abs/1507.00182 b. zhou , y. cui , and m. tao , `` stochastic content - centric multicast scheduling for cache - enabled heterogeneous cellular networks , '' _ submitted to wireless communications , ieee transactions on _ , vol .abs/1509.06611 , 2015 .[ online ] .available : http://arxiv.org/abs/1509.06611 y. cui , d. jiang , and y. wu , `` analysis and optimization of caching and multicasting in large - scale cache - enabled wireless networks , '' _ corr _ , vol .abs/1512.06176 , 2015 .[ online ] .available : http://arxiv.org/abs/1512.06176 s. singh , h. s. dhillon , and j. g. andrews , `` offloading in heterogeneous networks : modeling , analysis , and design insights , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 5 , pp .24842497 , march 2013 .s. m. yu and s .- l .kim , `` downlink capacity and base station density in cellular networks , '' in _ modeling optimization in mobile , ad hoc wireless networks ( wiopt ) , 2013 11th international symposium on _ , may 2013 , pp . | heterogeneous wireless networks ( hetnets ) provide a powerful approach to meet the dramatic mobile traffic growth , but also impose a significant challenge on backhaul . caching and multicasting at macro and pico base stations ( bss ) are two promising methods to support massive content delivery and reduce backhaul load in hetnets . in this paper , we jointly consider caching and multicasting in a large - scale cache - enabled hetnet with backhaul constraints . we propose a hybrid caching design consisting of identical caching in the macro - tier and random caching in the pico - tier , and a corresponding multicasting design . by carefully handling different types of interferers and adopting appropriate approximations , we derive tractable expressions for the successful transmission probability in the general region as well as the high signal - to - noise ratio ( snr ) and user density region , utilizing tools from stochastic geometry . then , we consider the successful transmission probability maximization by optimizing the design parameters , which is a very challenging mixed discrete - continuous optimization problem . by using optimization techniques and exploring the structural properties , we obtain a near optimal solution with superior performance and manageable complexity . this solution achieves better performance in the general region than any asymptotically optimal solution , under a mild condition . the analysis and optimization results provide valuable design insights for practical cache - enabled hetnets . cache , multicast , backhaul , stochastic geometry , optimization , heterogenous wireless network |
agent - based models represent an efficient way in exploring how individual ( microscopic ) behaviour may affect the global ( macroscopic ) behaviour in a competing population .this theme of relating macroscopic to microscopic behaviour has been the focus of many studies in physical systems , e.g. , macroscopic magnetic properties of a material stem from the local microscopic interactions of magnetic moments between atoms making up of the material .in recent years , physicists have constructed interesting models for non - traditional systems and established new branches in physics such as econophysics and sociophysics .the minority game ( mg ) proposed by challet and zhang and the binary - agent - resource ( b - a - r ) model proposed by johnson and hui , for example , represent a typical physicists binary abstraction of the bar attendance problem proposed by arthur . in mg ,agents repeatedly compete to be in a minority group .the agents have similar capabilities , but are heterogeneous in that they use different strategies in making decisions .decisions are made based on the cumulative performance of the strategies that an agent holds .the performance is a record of the correctness of the predictions of a strategy on the winning action which , in turn , is related to the collective behaviour of the agents .thus , the agents interact through their decision - making process , creation of the record of winning actions , and strategy selection process .interesting quantities for investigations include the statistics of the fraction of agents making a particular choice every time step and the variance or standard deviation ( sd ) of this number .these quantities are related in that knowing the distribution of , one may obtain .the mg , suitably modified , can be used to model financial markets and reproduce stylized facts .the variance , for example , is a quantity related to the volatility in markets .recently , we proposed a theory of agent - based models based on the consideration of decision - making and strategy dynamics .the importance of the strategy selection dynamics has been pointed out by dhulst and rodgers .this approach , which we refer to as the strategy - ranking theory ( srt ) , emphasizes on how the strategies performance ranking pattern changes as the game proceeds and the number of agents using a strategy in a certain rank for making decisions .it is recognized that the srt has the advantages of including tied strategies into consideration and avoiding the troublesome in considering each strategy s performance separately .the theory , thus , represents a generalization of the crowd - anticrowd theory to cases with tied strategies and strategy ranking evolutions two factors that are particularly important in the so - called informationally efficient phase of the mg .the theory has been applied successfully to explain non - trivial features in the mean success rate of the agents in ( i ) mg with a population of non - networked or networked agents , ( ii ) mg with some randomly participating agents , and ( iii ) b - a - r model with a tunable resource level . in this conference paper, we aim to illustrate the basic ideas of srt .in particular , we present results based on srt in evaluating the distribution of and , in the efficient phase of mg in non - networked and networked populations . 
validity of the results of our theory is tested against results obtained by numerical simulations .while the srt was developed within the context of mg , many of the ideas are should also be appliable to a wide range of agent - based models .the basic mg comprises of agents competing to be in a minority group at each time step .the only information available to the agents is the history .the history is a bit - string of length recording the minority ( i.e. , winning ) option for the most recent time steps .there are a total of possible history bit - strings .for example , has possible histories of the winning outcomes : , , and . at the beginning of the game, each agent picks strategies , with repetition allowed .they make their decisions based on their strategies .a strategy is a look up table with entries giving the predictions for all possible history bit - strings .since each entry can either be ` 0 ' or ` 1 ' , the full strategy pool contains strategies .adaptation is built in by allowing the agents to accumulate a merit ( virtual ) point for each of her strategies as the game proceeds , with the initial merit points set to zero for all strategies .strategies that predicted the winning ( losing ) action at a given time step , are assigned ( deducted ) one virtual point . at each turn, the agent follows the prediction of her best - scoring strategy . in case of tied best - scoring strategies, a random choice will be made to break the tie . in the present work, we will focus on the regime where , i.e. , the efficient phase .in mg literature , a parameter is defined with characterizing the efficient phase .features in this regime is known to be dominated by the crowd effect .a quantitative theory in this regime would have to include the consideration of frequently occurred tied strategies into account , as the dynamics in this regime is highly sensitive to the agents strategy selection . in what follows, we introduce the basic physical picture of the strategy ranking theory and apply it to evaluate the distribution in the fraction of agents making a particular choice and the variance from an analytic expression for non - networked and networked populations .to put our discussions into proper context , we will first present the numerical results of the quantities that we are focusing on .let be the fraction of agents taking the action 1 " ( or 0 " ) at time step .as the game proceeds , there will be a time series .we may then analyze these values of by considering the distribution or probability density function , where is the probability of having a value within the interval to . in using the mg for market modelling, can be taken to be the fraction of agents deciding to buy ( or sell ) an asset at time . in the context of the el farol bar attendance problem , be taken to be the fraction of agents attending the bar .note that every realization of the mg may have a different distribution of strategies among the agents and a different initial bit - string to start the game .these details do not affect the main results reported here , especially when we consider cases deep into the efficient phase , i.e. , when . to illustrate the point ,we have carried out detailed numerical simulations for the simplest case of and .figure 1 shows the numerical results ( squares ) of for systems with two different sizes ( and ) , with the aim of emphasizing the size effect on . 
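for readers who want to reproduce fig . 1 qualitatively , the following minimal simulation of the basic mg ( a sketch written for this text , not the authors' original code ; the population size , number of time steps and random seed are arbitrary choices ) records the attendance fraction whose distribution and variance are analysed below .

import numpy as np

def run_mg(n_agents=101, m=1, s=2, n_steps=20000, seed=0):
    # minimal basic minority game: each agent holds s strategies (tables with
    # 2**m entries in {0,1}); strategies gain (lose) one virtual point when
    # they predict the minority (majority) action, and each agent plays her
    # best-scoring strategy with random tie-breaking.
    rng = np.random.default_rng(seed)
    n_hist = 2 ** m
    strat = rng.integers(0, 2, size=(n_agents, s, n_hist))
    points = np.zeros((n_agents, s))
    history = rng.integers(0, n_hist)            # encoded m-bit history
    f = np.empty(n_steps)                        # fraction of agents choosing "1"
    for t in range(n_steps):
        best = np.array([rng.choice(np.flatnonzero(p == p.max())) for p in points])
        actions = strat[np.arange(n_agents), best, history]
        n_ones = actions.sum()
        f[t] = n_ones / n_agents
        winner = 1 if n_ones < n_agents / 2 else 0       # minority action wins
        points += np.where(strat[:, :, history] == winner, 1, -1)
        history = ((history << 1) | winner) & (n_hist - 1)
    return f

f = run_mg()
print(np.var(f))     # variance of the attendance fraction, cf. fig. 2 and eq. (9)
# a histogram of f reproduces the multi-peaked distribution of fig. 1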
notice that the distribution consists of a few peaks ( five peaks for the case of and ) , indicating that as the game proceeds the number jumps among values characterized by these peak values . for larger population ,the peaks are sharper . also shown in fig.1 are the results of the strategy ranking theory ( lines ) .the theoretical results are in reasonably agreement with numerical results .we defer the discussion on obtaining the theoretical results to the next section . besides the typical results shown in fig.1 , we have studied the variance in the following way .we carried out numerical simulations in many realizations using different values of and , with up to and up to .for each run , a value of is obtained . to facilitate comparison with theory, we select those data that are deep in the efficient phase , i.e. , with and plotted them ( black dots ) in fig.2 to show the dependence of on .the data points do not show significant scatter , and essentially fall on a line .also included in the figure are two ( dashed ) lines corresponding to two approximations within the crowd - anticrowd theory .these approximations assume that all the strategies can be ranked at every time step without tied virtual points .one of them assumes that the popularity rankings , i.e. , ranking based on the number of agents using a strategy , of a strategy and its anti - correlated partner are uncorrelated and gives an expression for for cases with as .\ ] ] another approximation is that the ranking of strategies are highly correlated .for example , the anti - correlated partner of the momentarily most - popular strategy is the least - popular one , and so on .this leads to another expression within the crowd - anticrowd theory : .\ ] ] we note that for small values of , the numerical data fall within the two crowd - anticrowd approximations , with neither of the approximations capturing the -dependence of .as will be discussed later , the strategy ranking theory gives an _ analytic _ expression for that captures the -dependence very well in the small regime where the criteria is satisfied to a fuller extent .we proceed to discuss how we could obtain the analytic results shown in figs.1 and 2 , within the strategy ranking theory .details of the theory can be found in .here we briefly summarize the key ideas , with the aim to make the theory physically transparent .we note that in mg and other agent - based models of competing populations , it is the interplay between decision - making , strategy selections , and collective response that leads to the non - trivial and often interesting global behaviour of a system . with this in mind , the strategy performance ranking pattern is of crucial importance . at any time step, the strategies can be classified into ranks , according to the virtual points of the strategies .the momentarily best - performing strategy ( or strategies ) belongs ( belong ) to rank-1 , and so on . 
at the beginning of the game, all strategies are tied that thus they all belong to the same rank .this is also the case when the strategies are all tied during the game .thus , the lower bound of is zero .it is also noted that there are two different kinds of behaviour in the ranking pattern _ after _ a time step : ( i ) the number of different ranks_ increases _ and such a time step is called an even " time step , and ( ii ) the number of different ranks _ decreases _ and such a time step is called an odd " time step .take , for example , a time step at which the strategies are all tied before decision .regardless of the history based on which the agents decide _ and _ the wining outcome after the agents decided , the strategies split into two ranks , i.e. , increases from 0 to 1 after the time step .half of the strategies belong to the better rank and half to the worse rank , as half of the strategies would have predicted the correct winning outcome for the history concerned . generally speaking ,the underlying mechanism for this splitting is that _ there is no registered virtual point or stored information in the strategies for the history concerned_. we call this kind of time steps even " time steps because this is what would happen when the population encounters a history for decision that had occurred an even number of times since the beginning of the game , not counting the one that is currently in use for decisions .the parameter has another physical meaning .it is the number of history bit - strings that have occurred an odd number of times since the beginning of the game , regardless the current history in use for decisions .since there are at most history bit - strings for a given , the upper bound of is .thus we have .therefore , every time step as the game proceeds can be classified as even " or odd " , together with a parameter . for when all the strategies are tied , the time step is necessarily an even time step . for where there are ranks, the time step is necessarily an odd time step since all the histories have occurred an odd number of times , including the current history in use for decisions . noting that the total number of strategies is , there are in general several strategies in a certain ranking . in this way , the theory takes explicit account of cases of tied strategies . for even time steps ( regardless of the value of ), there is no registered virtual points in the strategies for the current history .therefore , _ even time steps are characterized by agents making random decisions _ . using a random walk argument, the distribution is a normal distribution independent of , with a mean and a variance , i.e. , it turns out that the part of the distribution around shown in fig.1 originates from the even time steps . for odd time steps ,there are registered virtual points or stored information in the strategies for the current history .this is the origin of the crowd effect , which is fundamental to the understanding of collective response in the class of agent - based models based on mg . in this case , the momentarily better performing strategies have predicted the correct action in the last occurrence of the current history in use for decision .there will then be more agents using these better - performing strategies for decisions .however , the number is too large , hence forming a crowd , that the winning action in the last occurrence becomes the losing action in this turn .this is the anti - persistent nature or double periodicity of mg . 
using the strategy ranking theory , we know that there are ranks among the strategies for time steps labelled .the ratio of the fractions of strategies in different ranks is given by , which are simply the numbers in the pascal triangles .given that the agents use their best - performing strategy for decision , we can readily count the number of agents using a strategy in a particular rank . as mentioned , the better - performing strategies are more likely to lead to wrong predictions at odd time steps .this can be modelled by a winning probability at odd time steps of the form of for a strategy belonging to rank- , for a given value of . putting the information together, we arrive at the probability density function for . the distribution is given by normal distributions centered at the mean values of with a variance applying eq .( 4 ) to the results for in fig.1 , we immediately identify that the peaks in at and are originated from odd time steps corresponding to and the peaks at and are originated from odd time steps corresponding to .these peaks are more noticeable in fig .1(b ) when the population size is large . in eq .( 5 ) , the binomial coefficients should formally be expressed in terms of gamma functions , so that when the lower index in the coefficient becomes negative , vanishes .this is the case for , and the corresponding distribution will then be very sharp .this is , for example , the case for the sharp peaks at and in fig .( 1 ) . to obtain an expression for the overall , including both even and odd time steps and all possible values of , we need to take a weighted average over the occurrence of odd and even time steps .the resulting expression is ,\ ] ] where the factor is the probability of having history bit - strings occurred an odd number of times .the factor is the probability that given , the time step is odd .applying eq .( 6 ) to the case of , we obtain the results ( lines ) shown in fig . 1 .we note that the expression in eq .( 6 ) is also applicable to , as long as the efficient phase criteria is satisfied .the calculation of the variance follows from the definition where is the mean value of and the average represents a time average . replacing the time average by invoking the probability density function , we have + ( 1-\frac{\kappa}{2^m } ) \sigma_{even}^2 \right\}\\ & \approx & \sum_{\kappa=0}^{2^m } \frac{c_{\kappa}^{2^m}}{2^{2^m } } ( \frac{\kappa}{2^m } ) ( \frac{c_{\kappa-1}^{2 \kappa - 1}}{2^{2\kappa}})^2 \nonumber \\ & = & \sum_{\kappa=0}^{2^m } \frac{c_{\kappa}^{2^m}}{2^{2^m } } ( \frac{\kappa}{2^m } ) \left(\frac{1}{2 } \prod_{q=1}^{\kappa } ( 1 - \frac{1}{2q } ) \right)^{2},\end{aligned}\ ] ] where the approximation is valid for .( 9 ) is an _expression for .the last two expressions are equivalent and one may use whichever convenient in obtaining numerical values from eq . ( 9 ) .several remarks are worth mentioning .firstly , we note that the expression of is closely related to the analytic expression for the winning probability reported in , from which an alternative approach arriving at the same result is possible .secondly , the results from eq . ( 9 ) are plotted ( open squares ) in fig.2 .we note that the strategy ranking theory does capture the -dependence of , with good agreement with numerical simulation results in the range where the criteria is better fulfilled .the success of the theory stems from the inclusion of tied strategies , as each rank typically consists of a number of strategies . 
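eq . ( 9 ) is straightforward to evaluate numerically . the sketch below ( written for this text ) computes both equivalent forms appearing in eq . ( 9 ) and checks that they agree ; the left - hand symbol of eq . ( 9 ) was stripped in extraction , and we read it here as the variance of the attendance fraction , i.e. the square of the standard deviation normalized by the population size , which can be compared directly with the simulation sketch given earlier .

from math import comb, prod

def variance_eq9(m):
    # numerical evaluation of the right-hand side of eq. (9).  the weight of
    # each kappa is C(2**m, kappa)/2**(2**m) times kappa/2**m, and the squared
    # offset of the odd-time-step peaks is written in two equivalent ways in
    # eq. (9); both are computed here as a consistency check.
    h = 2 ** m
    total_a = total_b = 0.0
    for k in range(h + 1):
        weight = comb(h, k) / 2 ** h * (k / h)
        delta_prod = 0.5 * prod(1 - 1 / (2 * q) for q in range(1, k + 1))
        delta_binom = comb(2 * k - 1, k - 1) / 2 ** (2 * k) if k >= 1 else 0.0
        total_a += weight * delta_prod ** 2
        total_b += weight * delta_binom ** 2
    assert abs(total_a - total_b) < 1e-12      # the two forms in eq. (9) agree
    return total_a

for m in range(1, 7):
    print(m, variance_eq9(m))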
in the simplest case of , for example, there are tied strategies in _ every _ time step of the game .the better agreement with numerical results when compared with the crowd - anticrowd approximations is thus an indication of the importance of ( i ) the tied strategies and ( ii ) the time evolution of the ranking pattern from time step to time step . in mg , boththe number of tied strategies , i.e. , number of strategies belonging to the same rank , and the time evolution of strategy ranking pattern can be readily found .thirdly , the result eq .( 9 ) is interesting in that there have been much effort in trying to re - scale numerical results of as a function of the parameter so that results from systems of different values of and can be collapsed onto a single curve .( 9 ) suggests that is a complicated function of , deep in the efficient phase .in particular , as one increases the population size at fixed and small , one should approach the result given by eq.(9 ) assuming a uniform initial distribution of strategies to the agents .it is , in fact , possible to include the effects of a finite population size into the strategy - ranking theory starting from eq .( 8) by incorporating the so - called market impact effects .systems in the real world are characterized by connected agents .the connections are often used for collecting information from the neighbours . recently, several interesting attempts have been made to incorporate information sharing mechanisms among the agents into mg and b - a - r models . as an illustration of the application of srt to networked mg , we focus on the model proposed by anghel __ . as in the mg , anghel _ et al ._ s model features agents who repeatedly compete to be in a minority group .communications between agents are introduced by assuming that the agents are connected by an undirected random network , i.e. , classical random graph , with a connectivity being the probability that a link between two randomly chosen agents exists .the links are used as follows .each agent compares the cumulated performance of his predictor , which is the suggested action from his own best - performing strategy at each time step , with that of his neighbours , and then follows the suggested action of the best performing predictor among his neighbours and himself .the limit of the model reduces to the mg .note that the identity of the best - performing strategy changes over time . for the predictor sperformance is generally _ different _ from the agent s performance .it has been reported that the efficiency of the population as a whole , characterized by either or by the average winning probability per agent per turn , shows a _ non - monotonic _ dependence on the connectivity with the most efficient performance occurring at a small but finite value of .in other words , a small fraction of links is beneficial but too many of them are bad .we have explained the feature successfully within the framework of srt .the most important point is that , from our understanding of the non - networked mg ( e.g. , see fig . 1 ) , the performance of an agent actually depends on how similar the strategies that he is holding , with the best performing ones holding two identical strategies .the links then act in two ways depending on the connectivity . 
for low connectivity ,the links bring the agents with two anti - correlated strategies to have the chance to use other strategies so that these agents will not always join the crowd at odd time steps and hence with their winning probability enhanced . for high connectivity , there are so many links that many agents are linked to the momentarily best - performing predictor or predictors .as discussed in previous section , the higher ranking strategies have a smaller chance of predicting the correct minority outcome .when the connectivity is high , there are many links so that agents have access to strategies that are more likely to lose .this leads to a drop in the average winning probability of the agents .figure 3 shows how the distribution changes with the connectivity at two small values of .the range of small is particularly of interest since for a large population ( ) the non - monotonic feature occurs for .the symbols ( open circles ) give the results from numerical simulations .the peaks of the distribution shifts as is varied . applying srt and incorporating the effects of the presence of links, we found that can again be represented by a weighted sum of distributions characterized by different kinds of time steps .in particular , for , the parameters of the distributions in eq .( 4 ) can be found to be , , and . the variances are given by eq .( 5 ) as and .similarly for , we have and , with the same variances .the values of these parameters are obtained by considering the different winning probabilities of the strategies in different ranks and the change in the number of agents using a strategy of a certain rank due to the presence of the links .the solid lines in fig .3 show the distributions obtained by srt .the theory captures the shifts in with the connectivity .in the present work , we illustrated the basic ideas in constructing a strategy ranking theory for a class of multi - agent models incorporating the effects of tied strategies and strategy selections .we showed how the theory can be applied to mg in the efficient phase to evaluate the distribution in the fraction of agents making a particular decision and the associated variance .in particular , an analytic expression is given for in a non - networked population .the theory is also applied to a version of networked mg in which there exists non - trivial dependence on the performance of the agents as a function of the connectivity . besides and , the theory can also be applied to evaluate other quantities such as the average winning probability of the agents . in closing , while srt is developed with models based on the mg in mind , the general approach , namely that of focusing on the ranking pattern of the strategies and how the pattern evolves in time , should be a key ingredient in the construction of theories for a large class of agent - based models . | the minority game ( mg ) is a basic multi - agent model representing a simplified and binary form of the bar attendance model of arthur . the model has an informationally efficient phase in which the agents lack the capability of exploiting any information in the winning action time series . we illustrate how a theory can be constructed based on the ranking patterns of the strategies and the number of agents using a particular rank of strategies as the game proceeds . the theory is applied to calculate the distribution or probability density function in the number of agents making a particular decision . 
from the distribution , the standard deviation in the number of agents making a particular choice ( e.g. , the bar attendance ) can be calculated in the efficient phase as a function of the parameter specifying the agent s memory size . since situations with tied cumulative performance of the strategies often occur in the efficient phase and they are critical in the decision making dynamics , the theory is constructed to take into account the effects of tied strategies . the analytic results are found to be in better agreement with numerical results , when compared with the simplest forms of the crowd - anticrowd theory in which cases of tied strategies are ignored . the theory is also applied to a version of minority game with a networked population in which connected agents may share information . * paper to be presented in the 10th annual workshop on economic heterogeneous interacting agents ( wehia 2005 ) , 13 - 15 june 2005 , university of essex , uk . * |
estimation of covariance matrices is a crucial component of many signal processing algorithms [ 1 - 4 ] . in many applications ,there is a limited number of snapshots and the sample covariance matrix can not yield the desired estimation accuracy .this covariance matrix estimation error significantly degrades the performance of such algorithms . in some applications ,the true covariance matrix has a specific structure .for example , the array covariance matrix of a linear array with equally spaced antenna elements is a toeplitz matrix when the sources are uncorrelated [ 5 , 6 ] .moreover , in some applications [ 4 , 8 ] , the structure of the problem suggests that the underlying true covariance matrix is the kronecker product of two valid covariance matrices [ 4 , 7 ] .this side information can be leveraged in covariance matrix estimation to improve the estimation quality .for instance , in [ 5 ] a weighted least square estimator for covariance matrices with toeplitz structures was proposed and it was shown that the resulting covariance matrix can enhance the performance of angle estimation algorithms , such as multiple signals classification ( music ) [ 13 ] . in [ 8 ] , covariance matrices with kronecker structure are investigated and a maximum likelihood based algorithm is introduced .in addition , the structure of covariance matrices has been exploited in various doa estimation algorithms , such as the linear structure in [ 9 ] , and the diagonal structure for the covariance matrix of uncorrelated signals in [ 10 ] .recently some research works have focused on the application of sparse signal processing in doa estimation based on the sparse representation of the array covariance matrix .for example , [ 11 ] proposes the idea that the eigenvectors of the array covariance matrix have a sparse representation over a dictionary constructed from the steering vectors . in[ 12 , 14 ] , it is shown that when the received signals are uncorrelated , the array covariance matrix has a sparse representation over a dictionary constructed using the atoms , i.e. the correlation vectors .a similar idea is proposed in [ 15 ] , with the difference that the proposed method does not require choosing a hyper - parameter . in this paper , we focus on the estimation of array covariance matrices with linear structure .first , we show that when the sources are uncorrelated , the array covariance matrix has a linear structure implying that all possible array covariance matrices can be described by a specific subspace . based on this idea ,a subspace - based covariance matrix estimator is proposed as a solution to a semi - definite convex optimization problem .furthermore , we propose a nearly optimal closed - form solution for the proposed covariance matrix estimator .our results show that the proposed method can noticeably improve the covariance matrix estimation quality .moreover , the closed - form solution is shown to closely approach the optimal performance .the system model under consideration is a narrowband array system with antennas .all the signals are assumed to be narrowband with the same center frequency and impinge on the array from the far field .the baseband array output can be expressed as where is the array output vector , is the number of the received signals , is the signal , is the elevation and azimuth arrival angle of the signal , is the baseband array response to signal and * n*(t ) is the noise vector . the baseband array response , , is called the `` steering vector '' [ 13 ] . 
if the received signals are uncorrelated , the covariance matrix can be written as where represents the power of the signal , is the noise variance and is the identity matrix .we define the `` correlation vector '' which belongs to direction as follows where is a linear transformation that converts its matrix argument to a vector by stacking the columns of the matrix on top of one another .consequently , the covariance matrix can be rewritten as therefore , is a linear combination of the correlation vectors of the received signals . according to ( 4 ), lies in the subspace of the correlation vectors .hence , if we build the subspace spanned by all possible correlation vectors , then completely lies in this subspace . for many array structures ,the matrix inherits some symmetry properties .accordingly , the correlation vectors can not span an dimensional space .for example , when the incoming signals are uncorrelated , the covariance matrix of a uniform linear array is a toeplitz matrix [ 5 ] .it is easy to show that all the toeplitz matrices can be described by a dimensional space .the subspace of the correlation vectors can be obtained by constructing a positive definite matrix where ( 5 ) is an element - wise integral .based on ( 5 ) , the subspace dimension of the correlation vectors is equal to the number of non - zero eigenvalues of the matrix .consequently , the subspace of the correlation vectors can be constructed using the eigenvectors which correspond to the non - zero eigenvalues .1 shows the eigenvalues of for a square planar array with 16 elements ( the horizontal and vertical space between the elements is half a wavelength ) .one can observe that the number of non - zero eigenvalues is equal to 49 .therefore , for this array , the subspace of the correlation vectors can be constructed from the 49 eigenvectors corresponding to the non - zero eigenvalues .note that for a 16-element linear array , we observe 31 non - zeros eigenvalues because the covariance matrix is a toeplitz matrix[5 ] .for some array structures such as circular array , we may not observe zero eigenvalues but our investigation has shown that the subspace of the correlation vectors can be effectively approximated using the dominant eigenvectors ( the eigenvectors corresponding to the dominant eigenvalues ) . therefore , if we construct the matrix whose columns form a basis for the correlation vectors subspace, we can rewrite the covariance matrix as hence , we can choose the columns of as the eigenvectors corresponding to the non - zero eigenvalues ( or the dominant eigenvectors ) . by imposing the linear structure constraint ( 6 ) to the covariance matrix estimation problem , we can significantly improve the estimation quality . some workshave studied covariance matrices with linear structures .for example , a weighted least - square estimator was proposed in [ 5 ] based on the linear structure for toeplitz covariance matrices .however , the toeplitz structure is restricted to linear arrays and the resulting matrix is not guaranteed to be positive definite .based on ( 4 ) and ( 6 ) , the estimated covariance matrix should lie in the subspace spanned by the columns of .we are going to estimate which is defined as based on the previous discussion , we propose the following optimization problem where is the sample covariance matrix and is the number of time samples . the matrix is the projection matrix on the subspace that is orthogonal to the subspace of the correlations vectors . 
as such , the first constraint in( 8) ensures that the resulting matrix lies in the correlations vectors subspace .the second constraint guarantees that the resulting matrix is positive definite . note that , ( 8) is a convex optimization problem and can be solved using standard tools from convex optimization .the proposed method imposes the linear structure using the subspace constraint .if the covariance matrix is toeplitz , the subspace constraint enforces the resulting matrix to be toeplitz .however , the proposed algorithm is not limited to toeplitz structures and can be used for any linear structure . the sample covariance matrix ( 9 )can be expressed as the second term on the right hand side of ( 10 ) , , is the unwanted part ( estimation error ) which tends to zero if we have an infinite number of snapshots .the estimation error has some random behavior and can lie anywhere in the entire space . since the first constraint in ( 8) enforces the estimated matrix to lie in the correlation vectors subspace , it is expected to eliminate the component of estimation error which is not in this subspace .the dimension of the correlation vectors subspace is typically smaller than the entire space dimension .for example , for a 30-element uniform linear array , the dimension of the correlation vectors subspace is equal to 59 ; while the entire space dimension is 900 .thus , it is conceivable that the proposed method could yield a much better estimation performance in comparison to the sample covariance matrix ( 9 ) . the proposed optimization problem ( 8) is an dimensional optimization problem .therefore , it may be hard to solve for large arrays . in this section, we derive a closed form near optimal solution which makes our method easy for practical implementation . according to ( 4 ) , the covariance matrix should be in the correlation vectors subspace .we define and as follows thus , is orthogonal to the correlation vectors subspace and contains the desired part. therefore , we rewrite ( 8) as in the proposed estimator ( 8) , we placed the first constraint to suppress the estimation error which does not lie in the correlation vectors subspace . 
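a compact numerical sketch of the whole pipeline of sections ii and iii for a uniform linear array follows ( illustrative parameters , written for this text and not the paper's simulation code ) : build the correlation - vector subspace from a grid of steering vectors , form the sample covariance from snapshots , project onto the subspace as in ( 14 ) , and keep the positive - eigenvalue part as in ( 15 ) .

import numpy as np

def correlation_subspace(M, n_grid=721, rel_tol=1e-6):
    # correlation-vector subspace of section ii for a half-wavelength ula:
    # r(theta) = vec(a(theta) a(theta)^H), accumulate a discretised version of
    # the matrix in (5), and keep eigenvectors whose eigenvalues are not
    # numerically zero.  for an M-element ula the count is 2M-1 (31 for M=16,
    # 59 for M=30), consistent with the toeplitz structure quoted in the text.
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    C = np.zeros((M * M, M * M), dtype=complex)
    for th in thetas:
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(th))
        r = np.outer(a, a.conj()).reshape(-1, order='F')       # vec(a a^H)
        C += np.outer(r, r.conj())
    w, V = np.linalg.eigh(C)
    return V[:, w > rel_tol * w.max()]

def subspace_estimate(R_sample, U):
    # closed-form near-optimal estimator of section iii: project vec(R_hat)
    # onto the correlation-vector subspace as in (14), then keep only the
    # positive-eigenvalue part of the hermitian result as in (15).
    M = R_sample.shape[0]
    v = U @ (U.conj().T @ R_sample.reshape(-1, order='F'))
    R_proj = v.reshape(M, M, order='F')
    R_proj = 0.5 * (R_proj + R_proj.conj().T)
    w, V = np.linalg.eigh(R_proj)
    keep = w > 0
    return (V[:, keep] * w[keep]) @ V[:, keep].conj().T

# usage: 16-element ula, two uncorrelated unit-power sources at -5 and +5 deg,
# 0 db noise, 50 snapshots (all values illustrative)
rng = np.random.default_rng(1)
M, T = 16, 50
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.deg2rad([-5.0, 5.0])))
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
N = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
X = A @ S + N
R_hat = X @ X.conj().T / T
R_true = A @ A.conj().T + np.eye(M)
U = correlation_subspace(M)
R_est = subspace_estimate(R_hat, U)
print(U.shape[1])                                          # 31 = 2M-1 basis vectors
print(np.linalg.norm(R_hat - R_true), np.linalg.norm(R_est - R_true))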
in ( 11 ) , we project the sample covariance matrix to the correlation vectors subspace .thus , we have eliminated the estimation error which does not lie in the correlation vectors subspace .accordingly , we simplify ( 13 ) as follows which has a simple closed form solution where is the number of positive eigenvalues of , are the positive eigenvalues and are their corresponding eigenvectors .actually , we break the primary optimization problem ( 8) into two optimization problems .first , we find a matrix in the correlation vectors subspace which is most close to the sample covariance matrix and the resulting matrix is .in the second step , we find the closest positive semi - definite matrix to and the resulting matrix is given in ( 15 ) .in this section , we provide some simulation results to illustrate the performance of the proposed approach .the examples provided include doa estimation and subspace estimation , which underscores the flexibility of the proposed covariance matrix estimation approach for a broad range of applications .all the curves are based on the average of 500 independent runs .assume a uniform linear array with omnidirectional sensors spaced half a wavelength apart .for this array the correlation vectors subspace is a 19 dimensional space since the covariance matrix is toeplitz .the additive noise is modeled as a complex gaussian zero - mean spatially and temporally white process with identical variances in each array sensor . in this experiment, we compare the performance of music when used with the sample covariance matrix and with the proposed covariance matrix estimation method .we also compare its performance with the sparse covariance matrix representation method [ 14 , 12 ] and the sparse iterative covariance - based estimation approach ( spice ) [ 15 ] .we consider two uncorrelated sources located at and ( is the direction orthogonal to the array line ) and both sources are transmitted with the same power .2 shows the probability of resolution ( the probability that the algorithm can distinguish these two sources ) versus the number of snapshots for one fixed sensor with db .it is clear that using the proposed method leads to significant improvement in performance in comparison to using the sample covariance matrix .spice [ 15 ] is an iterative algorithm , which is based on the sparse representation of the array covariance matrix and requires one matrix inversion in each iteration .one can see that this algorithm fails when we use 20 iterations , however , performs well with 1000 iterations .nevertheless , for practical purposes it is generally computationally prohibitive to perform 1000 matrix inversion operations .in addition , one can observe that the proposed near optimal solution to ( 8) yields a close performance to the optimal solution .3 displays the probability of resolution against for a fixed training size snapshots .roughly , the music algorithm based on the proposed method is 7 db better than the music algorithm based on the sample covariance matrix . 
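for completeness , the music pseudospectrum used in these resolution experiments can be sketched as follows ( the standard formulation , assuming the number of sources is known ; not the authors' exact scoring code ) .

import numpy as np

def music_spectrum(R, n_sources, scan_deg=None):
    # standard music pseudospectrum for a half-wavelength ula: take the noise
    # subspace E_n spanned by the M - n_sources weakest eigenvectors of the
    # covariance estimate and evaluate P(theta) = 1 / ||E_n^H a(theta)||^2.
    # two sources count as "resolved" when two separate peaks appear near the
    # true directions, which is how figs. 2 and 3 are scored.
    if scan_deg is None:
        scan_deg = np.linspace(-90.0, 90.0, 1801)
    M = R.shape[0]
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, :M - n_sources]             # noise subspace
    spec = np.empty(scan_deg.size)
    for i, deg in enumerate(scan_deg):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return scan_deg, spec

# usage with the subspace-based estimate R_est from the previous sketch:
#   deg, p = music_spectrum(R_est, n_sources=2)
#   resolved = two distinct peaks of p appear near -5 and +5 degrees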
in summary ,the proposed method yields notable and promising performance even with a small number of snapshots and at low snr regimes .furthermore , it is easily implementable using the proposed closed - form solution , which consists of a matrix multiplication and eigen - decomposition ., scaledwidth=50.0% ] the estimation of the subspace of the received signals is an important task in many signal processing algorithms .for example , in the eigen - space based beamforming algorithm [ 3 ] , the subspace of the received signals is used to make the beamformer robust against the steering vector mismatch . in the music algorithm ,the subspace of the received signals is used to obtain the noise subspace [ 13 ] .the subspace of the received signals is usually estimated using the dominant eigenvectors of the estimated covariance matrix . in this simulation , we consider three uncorrelated sources located at , and and the sources are received with same signal to noise ratio . to investigate the accuracy of the subspace estimation , we define the distance between two subspaces as follows [ 16 ] : + given two matrices , , the distance between the subspaces spanned by the columns of and is defined as where and are orthonormal bases of the spaces and , respectively .similarly , is an orthonormal basis for the subspace which is orthogonal to and is an orthonormal basis for the subspace which is orthogonal to .in addition , denotes the spectral norm of matrix .4 displays the distance between the true signals subspace and the estimated one as a function of the number of snapshots for db .we construct the signal subspace using the first three eigenvectors .one can observe that the proposed method exhibits a better rate of convergence .in addition , the performance of the closed - form solution closely approaches the optimal solution .in this paper , a subspace method for array covariance matrix estimation was proposed .we have shown that when the received signals are uncorrelated , the covariance matrix lies in the subspace of the correlation vectors .based on this idea , we posed the estimation problem as a convex optimization problem and enforced a subspace constraint .in addition , a near optimal closed - form solution for the proposed optimization problem was derived .a number of numerical examples demonstrated the notable performance of the proposed approach and its applicability to a wide range of signal processing problems , including but not limited to , doa estimation and subspace estimation .in contrast to some of the existing approaches , which suffer from drastic performance degradation with limited data and at low snr regimes , the proposed method showed very graceful degradation in such settings .s. a. vorobyov , _ principles of minimum variance robust adaptive beamforming design _ , elsevier signal processing , invited paper , special issue : advances in sensor array processing , vol .3264 - 3277 , dec .k. yu , m. bengtsson , b. ottersten , d. mcnamara , and p. karlsson , _ modeling of wide - band mimo radio channels based on nlos indoor measurements _ , ieee transactions on vehicular technology , vol . 53 , no . 8 , pp. 655665 , may 2004 .h. li , p. stoica , j. li , _ computationally efficient maximum likelihood estimation of structured covariance matrices _ , ieee transactions on signal processing , vol .1314 1323 , may 1999 .m. rahmani , m. h. 
bastani , _ robust and rapid converging adaptive beamforming via a subspace method for the signal - plus - interferences covariance matrix estimation _ , iet signal processing , vol . 8 , issue 5 , pp .507 520 , july 2014 .j. c. de munck , h. m. huizenga , l. j. waldorp , and r. m. heethaar , _ estimating stationary dipoles from meg / eeg data contaminated with spatially and temporally correlated background noise _, ieee transactions on signal processing , vol .50 , no . 7 , pp . 15651572 , jul .k. werner , m. jansson , p. stoica , _ on estimation of covariance matrices with kronecker product structure _ , ieee transactions on signal processing , vol .478 491 , feb . 2008 .b. ottersten , p. stoica , and r. roy , _ covariance matching estimation techniques for array signal processing applications _ , digital signal processing , vol . 8 , pp . 185210 , 1998 . b. gransson , m. jansson and b. ottersten , _ spatial and temporal frequency estimation of uncorrelated signals using subspace fitting _ , in proc .of 8th ieee signal processing workshop on statistical signal and array processing , corfu , greece , jun .1996 , pp .d. malioutov , m. etin , a. willsky , _ a sparse signal reconstruction perspective for source localization with sensor arrays _ ,ieee trans signal process .53 , pp . 30103022 , 2005 . l. blanco1 , m. njar , _ sparse covariance fitting for direction of arrival estimation _ , eurasip journal on advances in signal processing , 2012 h.l vantrees , _ optimum array processing .part iv of detection , estimation , and modulation theory _ , wiley,2002 j. s. picard , a. j. weiss , _ direction finding of multiple emitters by spatial sparsity and linear programming _ , 9th international symposium on communications and information technology , pp . 1258 - 1262 , 2009 .p. stoica , p. babu , j. li , _ spice : a sparse covariance - based estimation method for array processing _ , ieee trans signal process .629 - 638 , 2011 .p. jain , p. netrapalli , and s. sanghavi , _ low - rank matrix completion using alternating minimization _ , arxiv preprint arxiv:1212.0467 , 2012 . | this paper introduces a subspace method for the estimation of an array covariance matrix . it is shown that when the received signals are uncorrelated , the true array covariance matrices lie in a specific subspace whose dimension is typically much smaller than the dimension of the full space . based on this idea , a subspace based covariance matrix estimator is proposed . the estimator is obtained as a solution to a semi - definite convex optimization problem . while the optimization problem has no closed - form solution , a nearly optimal closed - form solution is proposed making it easy to implement . in comparison to the conventional approaches , the proposed method yields higher estimation accuracy because it eliminates the estimation error which does not lie in the subspace of the true covariance matrices . the numerical examples indicate that the proposed covariance matrix estimator can significantly improve the estimation quality of the covariance matrix . shell : bare demo of ieeetran.cls for journals covariance matrix estimation , subspace method , array signal processing , semidefinite optimization . |
ferdinand magellan s expedition was the first that completed the circumnavigation of our globe during 1519 - 1522 , after discovering the _ strait of magellan _ between the atlantic and pacific ocean in search for a westward route to the `` spice islands '' ( indonesia ) , and thus gave us a first view of our planet earth .five centuries later , nasa has sent two spacecraft of the stereo mission on circumsolar orbits , which reached in 2011 vantage points on opposite sides of the sun that give us a first view of our central star .both discovery missions are of similar importance for geographic and heliographic charting , and the scientific results of both missions rely on geometric triangulation .the twin stereo / a(head ) and b(ehind ) spacecraft ( kaiser et al .2008 ) , launched on 2006 october 26 , started to separate at end of january 2007 by a lunar swingby and became injected into a heliocentric orbit , one propagating `` ahead '' and the other `` behind '' the earth , increasing the spacecraft separation angle ( measured from sun center ) progressively by about per year .the two spacecraft reached the largest separation angle of on 2011 february 6 .a stereo secchi cor1-a / b intercalibration was executed at separation ( thompson et al .thus , we are now in the possession of imaging data from the two stereo / euvi instruments ( howard et al . 2008 ; wlser et al .2004 ) that cover the whole range from smallest to largest stereoscopic angles and can evaluate the entire angular range over which stereoscopic triangulation is feasible .it was anticipated that small angles in the order of should be most favorable , similar to the stereoscopic depth perception by eye , while large stereoscopic angles that are provided in the later phase of the mission would be more suitable for tomographic 3d reconstruction .the first stereoscopic triangulations using the stereo spacecraft have been performed for coronal loops in active regions , observed on 2007 may 9 with a separation angle of ( aschwanden et al .2008 ) and observed on 2007 june 8 with ( feng et al .further stereoscopic triangulations have been applied to oscillating loops observed on 2007 june 26 with a stereoscopic angle of ( aschwanden 2009 ) , to polar plumes observed on 2007 apr 7 with ( feng et al .2009 ) , to an erupting filament observed on 2007 may 19 with ( liewer et al .2009 ) , to an erupting prominence observed on 2007 may 9 with ( bemporad 2009 ) , and to a rotating , erupting , quiescent polar crown prominence observed on 2007 june 5 - 6 with ( thompson 2011 ) .thus , all published stereoscopic triangulations have been performed within a typical ( small ) stereoscopic angular range of , as it was available during the initial first months of the stereo mission .the largest stereoscopic angle used for triangualtion of coronal loops was used for active region 10978 , observed on 2007 december 11 , with a spacecraft separation of ( aschwanden and sandman 2010 ; sandman and aschwanden 2011 ) , which produced results with similar accuracy as those obtained from smaller stereoscopic angles .so there exists also an intermediate rangle of aspect angles that can be used for stereoscopic triangulation . 
however , nothing is known about whether stereoscopy is also feasible at large angles , say in the range of , and how the accuracy of 3d reconstruction depends on the aspect angle , in which range the stereoscopic correspondence problem is intractable , and whether stereoscopy at a maximum angle near is equally feasible as for small separation angles for optically thin structures ( as is the case in soft x - ray and euv wavelengths ) , due to the symmetry of line - of - sight intersections . in this study we are going to explore stereoscopic triangulation of coronal loops in the entire range of and quantify the accuracy and quality of the results as a function of the aspect angle . observations and data analysis are reported in section 2 , while a discussion of the results is given in section 3 , with conclusions in section 4 . [ fig . 1 caption : spacecraft separation angles indicated approximately at the beginning of the years , ranging from in april 2007 to in february 2011 . ] we select stereo observations at spacecraft separation angles with increments of over the range of to , which corresponds to time intervals of about a year during the past mission lifetime 2007 - 2011 . a geometric sketch of the spacecraft positions stereo / a+b relative to the earth - sun axis is shown in fig . 1 . additional constraints in the selection are : ( i ) the presence of a relatively large prominent active region ; ( ii ) a position in the field - of - view of both spacecraft ( since the mutual coverage overlap drops progressively from initially to during the first 4 years of the mission ) ; ( iii ) a time near the central meridian passage of an active region viewed from earth ( to minimize confusion by foreshortening ) ; and ( iv ) the availability of both stereo / euvi / a+b and calibrated soho / mdi data . the selection of 5 datasets is listed in table 1 , which includes the following active regions : ( 1 ) noaa 10953 observed on 2007 april 30 ( also described in derosa et al . 2009 ; sandman et al . 2009 ; aschwanden and sandman 2010 ; sandman and aschwanden 2011 ; aschwanden et al . 2012 ) , ( 2 ) noaa region 10978 observed on 2007 december 11 ( also described in aschwanden and sandman 2010 and aschwanden et al . 2012 , and subject to an ongoing study by alex engell and aad van ballegooijen , private communication ) , ( 3 ) noaa 11010 observed on 2009 jan 12 , ( 4 ) noaa 11032 observed on 2009 nov 21 , and ( 5 ) noaa 11127 observed on 2010 nov 23 . this selection covers spacecraft separation angles of , and . for each of the 5 datasets we stacked the images during a time interval of 20 - 30 minutes in order to increase the signal - to - noise ratio of the euvi images . during the first 3 years ( 2006 - 2008 ) the nominal cadence of 171 images was 150 s , which yields 8 stacked images per 20 minute interval . later in the mission , the highest cadence was chosen for the 195 wavelength , but dropped from 150 s to 300 s due to the reduced telemetry rate at larger spacecraft distances , which yields 6 - 12 stacked images per 30 minute interval .
in one case ( 2010 nov 23 ) the cadence in euvi / a and b are not equal , either due to data loss or different telemetry priorities ( see time intervals and number of stacked images in table 1 ) .the solar rotation during the time interval of stacked image sequences was removed to first order by shifting the images by an amount corresponding to the rotation rate at the extracted subimage centers ..data selection of 5 active regions observed with stereo / euvi and soho / mdi .[ cols= " < , < , < , < , > , > " , ] the first case is active region noaa 10953 ( fig .2 ) , where we display the same 100 loop segments that have been triangulated in an earlier study ( aschwanden and sandman 2010 ) .the spacecraft separation angle is and the almost identical direction of the line - of - sights of both stereo / a and b spacecraft makes it easy to identify the corresponding loops in a and b , and thus the triangulation is very reliable .note that the height range where discernable loops can be traced in the highpass - filtered images is about solar radii ( or mm ) , which is commensurable with the hydrostatic density scale height expected for a temperature of mk that corresponds to the peak sensitivity of the euvi 171 filter .this is particularly well seen in the side view shown in the bottom left panel in fig . 2 .a measurement of the mean misalignment angle averaged over 10 positions of the 100 reconstructed loops with the local magnetic potential field shows a value of ( table 2 ) , similar to earlier work ( aschwanden and sandman 2010 ; sandman and aschwanden 2011 ) .however , forward - fitting of a nonlinear force - free field model reduces the misalignment to , which implies that this active region is slightly nonpotential .the remaining misalignment is attributed to at least two reasons , partially to inadequate parameterization of the force - free field model , and partially to stereoscopic measurement errors due to misidentified loop correspondences and limited spatial resolution .an empirical estimate of the stereoscopic error was devised in aschwanden and sandman ( 2010 ) , based on the statistical non - parallelity of closely - spaced triangulated loop 3d trajectories , which yielded for this case a value of . in summary, we find that this active region is very suitable for stereoscopy , allows to discern a large number ( 100 ) of loops , minimizes the stereoscopic correspondence problem due to the small ( ) spacecraft separation angle , displays a moderate misalignment angle and stereoscopic measurement error ( ) .this well - defined case will serve as a reference for stereoscopy at larger angles . 
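the misalignment figure of merit quoted above is simply the angle between the local direction of a stereoscopically triangulated loop and the direction of the model magnetic field at the same location , averaged over positions along each loop and over all loops . a minimal sketch of such a computation is given below ; the sample vectors , the number of positions per loop , and the folding of angles into the range 0 - 90 degrees are illustrative assumptions , not the actual data or conventions of this study .

```python
import numpy as np

def misalignment_angles(loop_dirs, field_dirs):
    """Angles (deg) between triangulated loop directions and model field directions.

    loop_dirs, field_dirs: arrays of shape (n_points, 3), one row per sampled
    position along the loops; angles are folded into [0, 90] deg because the
    field direction carries no sign (a convention assumed for this sketch).
    """
    t = loop_dirs / np.linalg.norm(loop_dirs, axis=1, keepdims=True)
    b = field_dirs / np.linalg.norm(field_dirs, axis=1, keepdims=True)
    cosang = np.clip(np.abs(np.sum(t * b, axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(cosang))

# toy example with 10 sampled positions (placeholder vectors, not the study's data)
rng = np.random.default_rng(0)
loops = rng.normal(size=(10, 3))
field = loops + 0.2 * rng.normal(size=(10, 3))   # a nearly aligned model field
print("mean misalignment [deg]:", misalignment_angles(loops, field).mean())
```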
.the images are highpass - filtered to enhance loop structures ( middle left and right panels ) .a near - simultaneous soho / mdi magnetogram is shown ( bottom right ) , overlaid with the stereoscopically triangulated loops ( blue curves ) and magnetic field lines computed with a nonlinear force - free model ( red curves ) , viewed from the direction of earth or soho / mdi ( bottom right ) , and rotated by to the north ( bottom left).,scaledwidth=100.0% ] .a soho / mdi magnetogram is shown ( bottom right ) , overlaid with the stereoscopically triangulated loops ( blue curves ) and magnetic field lines computed with a nonlinear force - free model ( red curves).,scaledwidth=100.0% ] .a soho / mdi magnetogram is shown ( bottom right ) , overlaid with the stereoscopically triangulated loops ( blue curves ) and magnetic field lines computed with a nonlinear force - free model ( red curves).,scaledwidth=100.0% ] .a soho / mdi magnetogram is shown , overlaid with the stereoscopically triangulated loops ( blue curves ) and magnetic field lines computed with a potential field model .magnetic field lines have a footpoint threshold of g.,scaledwidth=100.0% ] .a soho / mdi magnetogram is shown , overlaid with the stereoscopically triangulated loops ( blue curves ) and magnetic field lines computed with a potential field model .magnetic field lines have a footpoint threshold of g.,scaledwidth=100.0% ] the second case is active region noaa 10978 ( fig .3 ) , observed on 2007 dec 11 with a spacecraft separation angle of .note that the views from euvi / a and b appear already to be significantly different with regard to the orientation of the triangulated loops , as seen from a distinctly different aspect angle .a set of 52 coronal loops were stereoscopically triangulated in this region ( aschwanden and sandman 2010 ) , a mean misalignment angle of is found for a potential field model , and a reduced value of is found for the force - free model ( table 2 ) , while a stereoscopic error of is estimated ( aschwanden and sandman 2010 ) .thus , the quality of stereoscopic triangulation ( as well as the degree of non - potentiality ) is similar to the first active region , although we performed stereoscopy with a 7 times larger spacecraft separation angle ( ) than before ( ) . apparently , stereoscopy is still easy at such angles , partially helped by the fact that the active region is located near the central meridian ( ) for both spacecraft , which provides an unobstructed view from top down , so that the peripheral loops of the active region do not overarch the core of the active region , where the bright reticulated moss pattern ( berger et al .1999 ) makes it almost impossible to discern faint loops in the highpass - filtered images .the top - down view provides also an optimum aspect angle to disentangle closely - spaced loops , which is an important criterion in the stereosopic correspondence identification .the third case is active region noaa 11010 ( fig. 4 ) , observed on 2009 jan 12 with a near - orthogonal spacecraft separation angle of . 
due to the quadrature of the spacecraft , only a sector of east and west of the central meridian ( viewed from earth )is jointly visible by both spacecraft .this particular active region is seen at a angle by both stereo / a and stereo / b .this symmetric view is the optimum condition to discern a large number of inclined loop segments and to identify the stereoscopic correspondence .we triangulate some 20 loop segments , which appear almost mirrored in the stereo / a and b image due to the east - west symmetry of the magnetic dipole .a mean misalignment angle of with the potential field model is found , and a reduced value of with the force - free field model .an estimate of the statistical ( non - parallelity ) stereoscopic error is not possible due to the small number of triangulated loops .thus , we conclude that stereoscopy is still possible in quadrature . mathematically , the orthogonal projections should yield the most accurate 3d coordinates of a curvi - linear structure , but in practice , confusion of multiple structures with near - aligned projections can cause a disentangling problem in the stereoscopic correspondence identification at this intermediate angle . the fourth case is active region noaa 11032 ( fig. 5 ) , observed on 2009 nov 21 with a large spacecraft separation angle of .stereo / a sees the active region near the east limb from an almost side - on perspective , while stereo / b sees a similar mirror image near the west limb , where confusion near the limb makes the stereoscopic correspondence identification more difficult .we trace some 15 loop segments , but do not succeed in pinning down a larger number of loops , partially because this active region is small and does not exhibit numerous bright loops , and partially because of increasing confusion problems near the limb .we searched for larger active regions over several months around this time , but were not successful due to a dearth of solar activity during this time .we find a misalignment angle of for the potential field , and for the force - free field model , which is still comparable with the previous active regions triangulated at smaller sterescopic angles .thus , stereoscopy seems to be still feasible at such large stereoscopic angles .the last case is active region noaa 11127 ( fig .6 ) , observed on 2010 nov 23 with a very large spacecraft separation angle of , only two months before the two stereo spacecraft pass the largest separation point . at this point , the common field - of - view that is overlapping from stereo / a and b is only the central meridian zone seen from earth ( or the opposite meridian behind the sun ) .stereo / a observes active region noaa 11127 at its east limb , while stereo / b sees it at its west limb , so both spacecraft see only the vertical structure of the active region from a side view ( see fig . 6 top ) .this particular configuration is very unfavorable for stereoscopy .although the vertical structure in altitude can be measured very accurately , the uncertainty in horizontal direction in longitude is very large and suffers moreover the sign ambiguity of positive or negative longitude difference with respect to the limb seen from earth .consequently , we have reliable information on the altitude and latitude of loops , while the longitude is essentially ill - defined . 
in order to reduce the large scatter in the measurement of -coordinates along a loop , introduced by the near - infinite amplification of parallax uncertainties tangentially at the limb , we restrict the general solution of geometric 3d triangulation to planar loops , by applying a linear regression fit of the coordinates . the example in fig . 6 shows that we can trace some ( 5 ) loops in the plane of the sky and have no problem in identifying the stereoscopic counterparts in both stereo / a and b images , but the stereoscopic triangulation is ill - defined at this singularity of the sign change in the parallax effect . the misalignment between the three loop directions and the potential field is , and for the force - free field model is , which indicates that the orientation of the loop planes is less reliably determined . stereoscopic triangulation breaks down at this singularity of separation angles at , although the stereoscopic correspondence problem is very much reduced for the `` mirror images '' , similar to the near - identical images at small separation angles . we now discuss the pros and cons of stereoscopy at small and large aspect angles , which includes quantitative estimates of the formal error of stereoscopic triangulation ( section 3.1 ) , the stereoscopic correspondence and confusion problem ( section 3.2 ) , and the statistical probability of stereoscopable active regions during the full duration of the stereo mission ( section 3.3 ) , all as a function of the stereoscopic aspect angle ( or spacecraft separation angle in the case of the stereo mission ) . stereoscopic triangulation involves a parallax angle around the normal of the epipolar plane . for the stereo mission , the epipolar plane intersects the sun center and the two spacecraft a and b positions , which are separated mostly in east - west direction . no parallax effect occurs when the loop axis coincides with the epipolar plane , i.e. , when the loop axis points in east - west direction . thus , the accuracy of stereoscopic triangulation depends most sensitively on this orientation angle , which we define as the angle between the loop direction and the normal of the epipolar plane ( i.e. , approximately the y - axis of a solar image in north - south direction ) . if the position of a loop centroid can be determined with an accuracy of a half pixel size , the dependence of the stereoscopic error on the orientation angle is then ( aschwanden et al . 2008 ) . thus , for a highly inclined loop that has an eastern and a western footpoint at the same latitude , the stereoscopic error is minimal near the loop footpoints ( pointing in north - south direction ) and at maximum near the loop apex ( pointing in east - west direction ) . in addition , the accuracy of stereoscopic triangulation depends also on the aspect angle ( or spacecraft separation angle ) , which can be quantified with an error trapezoid as shown in fig . 7 . if the half separation angle is defined symmetrically to the earth - sun axis ( z - axis ) , the uncertainty in z - direction is , and in the x - direction ( in the epipolar plane ) it is .
including also a half pixel - size error in the y - direction, we have then a combined error for the 3d position of a triangulated point as , } } } \ .\ ] ] this positional error is symmetric for small and large stereoscopic angles , and has a minimum at an orthogonal angle of .the error is largest in z - direction for small spacecraft separation angles , while it is largest in x - direction for separation angles near . .the uncertainties in x - direction and spacecraft in z - direction depend on the pixel width and half aspect angle .,scaledwidth=50.0% ] to compare the relative importance of the two discussed sources of errors we can evaluate the parameters that increase the individual errors by a factor of two .this is obtained when the orientation angle of a loop segment ( with respect to the east - west direction ) increases from to , or if the spacecraft separation angle changes from the optimum angle to ( or , respectively ) . if stereoscopy at small angles of is attempted , the positional error is about ten - fold ( corresponding to ) , compared with the optimum angle at ( corresponding to ) . for stereo / euvi with a pixel size of mm , this amounts to an accuracy range of mm .the previous considerations are valid for isolated loops that can be unambiguously disentangled in an active region , in both the stereo / a and b images .however , this is rarely the case . in crowded parts of active regions , the correspondence of a particular loop in imagea with the identical loop in image b can often not properly be identified .we call this confusion problem also the _ stereoscopic correspondence problem _ , which appears in every stereoscopic tie - point triangulation method . in order to quantify this source of error, we have to consider the area density of loops and their relative orientation . a top - down view of an active region , e.g. , as seen for small stereoscopic angles by both spacecraft for an active region near disk center ( e.g. 
, fig . 2 ) , generally allows a better separation of individual loops , because only the lowest density scale height is detected ( due to hydrostatic gravitational stratification ) , and neighboring loop segments do not obstruct each other due to the foreshortening projection effect near the footpoints . in contrast , every active region seen near the limb shows many loops at different longitudes , but at similar latitudes , cospatially on top of each other , which represents the most severe confusion problem . thus , we can essentially quantify the degree of confusion by the loop number density per pixel , which approximately scales with the inverse cosine - function of the center - to - limb angle due to foreshortening . in other words , we can define a quality factor for identifying the stereoscopic correspondence of properly disentangled coronal loops , which drops from at disk center to at the limb , where is the center - to - limb angle measured from sun center . [ fig . 8 caption : quality factor of stereoscopic triangulation as a function of the spacecraft separation angle , which is a function of the accuracy of triangulated stereoscopic positions and the stereoscopic correspondence quality factor ; the best quality ( within a factor of 2 ) occurs in the range of . ] the orbits of the stereo mission reduce the overlapping area on the solar surface that can be jointly viewed by both spacecraft a and b linearly with increasing spacecraft separation angle , so that the center - to - limb distance of an active region located on the central meridian ( viewed from earth ) is related to the spacecraft separation angle by , increasing linearly with the separation angle from at the beginning of the mission to at the maximum spacecraft separation angle . the location of an active region at the central meridian provides the best view for both spacecraft , because an asymmetric location would move the active region closer to the limb for one of the spacecraft , and thus would increase the degree of confusion , as we verified by triangulating a number of asymmetric cases . thus , we can express the quality factor of stereoscopic correspondence ( eq . 7 ) by the spacecraft separation angle ( eq . 8 ) and obtain the relationship . defining a quality factor for stereoscopic triangulation by combining the stereoscopic correspondence quality ( eq . 9 ) with the accuracy of stereoscopic positions ( eq . 6 ) , which we may define by the normalized inverse error ( i.e. , ) , we obtain . we plot the functional dependence of this stereoscopic quality factor together with its underlying factors and in fig . 8 , and obtain an asymmetric function of time ( or spacecraft separation angle ) that favors smaller stereoscopic angles . the stereoscopic quality factor is most favorable ( within a factor of 2 ) in the range of , which corresponds to the mission phase between august 2007 and november 2009 . the same optimum range will repeat again at the backside of the sun 5 years later , between august 2012 and november 2014 . from our analysis of 5 active regions spread over the entire spacecraft separation angle range we find acceptable results regarding triangulation accuracy in the range of separation angles of and ( based on acceptable misalignment angles of ) , which coincides with the predicted optimum range of , while stereoscopy definitely breaks down at , as predicted by theory ( fig . 8 and eq . 10 ) .
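since the explicit expressions of eqs . 5 - 10 are not reproduced above , the following sketch only illustrates the qualitative behavior described in the text , using assumed functional forms : a positional error that grows with the inverse sine and inverse cosine of the half separation angle , a correspondence quality that falls off as the cosine of the center - to - limb angle , and the approximation that a central - meridian region is seen at a center - to - limb angle of half the separation angle . the pixel size and the sampled angles are placeholders .

```python
import numpy as np

def positional_error(alpha_deg, pixel=1.0):
    """Assumed error trapezoid: dz ~ p/(2 sin(a/2)), dx ~ p/(2 cos(a/2)), dy ~ p/2."""
    a = np.radians(alpha_deg) / 2.0
    dz = pixel / (2.0 * np.sin(a))
    dx = pixel / (2.0 * np.cos(a))
    dy = pixel / 2.0
    return np.sqrt(dx**2 + dy**2 + dz**2)

def correspondence_quality(alpha_deg):
    """Assumed: q = cos(center-to-limb angle), with that angle ~ alpha/2."""
    return np.cos(np.radians(alpha_deg) / 2.0)

def stereo_quality(alpha_deg, pixel=1.0):
    """Accuracy (normalised to the orthogonal case) times correspondence quality."""
    accuracy = positional_error(90.0, pixel) / positional_error(alpha_deg, pixel)
    return accuracy * correspondence_quality(alpha_deg)

for a in (6, 30, 57, 91, 127, 150, 167, 178):
    print(f"separation {a:3d} deg -> quality {stereo_quality(a):.2f}")
```

with these assumed forms the quality factor peaks below quadrature and falls steeply toward both very small and very large separation angles , reproducing the asymmetry in favor of smaller angles described above .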
during the solar cycle ( thin solid curve ) , the spacecraft separation angle ( dotted curve ) , the stereoscopic accuracy ( dashed curve ) , and the expected number of stereoscopable active regions ( curve with grey area ) as a function of time during a full 16-year mission cycle of the stereo mission.,scaledwidth=90.0% ] there are different factors that affect the quality or feasibility of solar stereoscopy , such as ( i ) the availability of large active regions ( which varies statistically as a function of solar cycle ) , ( ii ) the simultaneous viewing by both spacecraft stereo / a and b ( which depends on the spacecraft separation angle ) , ( iii ) the geometric foreshortening that affects the stereoscopic correspondence problem ( which depends on the center - to - limb distance for each spacecraft view ) , and ( iv ) the time of the central meridian passage of the active region for a viewer from earth ( which determines the symmetry of views for both spacecraft , where minimum confusion occurs in the stereoscopic correspondence identification ) .all but the first factor depend on the spacecraft separation angle , which is a specific function of time for the stereo mission ( with a complete cycle of 16 years ) . in order to assess the science return of the stereo mission or future missions with stereoscopic capabilities , it is instructive to quantify the statistical probability of acceptable stereoscopic results as a function of spacecraft separation angle or time .we already quantified the quality of stereoscopy as a function of the stereoscopic angle in eq .let us define the number probability of existing active regions at a given time with a squared sinusoidal modulation during the solar cycle , where is the maximum number of active regions existing on the total solar surface during the maximum of the solar cycle , is the time of the solar minimum ( e.g. , ) , and yrs the current average solar cycle length .the second effect is the overlapping area on the solar surface that is simultaneously seen by both spacecraft stereo / a and b , which decreases linearly with the spacecraft separation angle from 50% at ( with at the start of spacecraft separation ) to 0% at ( with at maximum separation ) , and then increases linearly again for the next quarter phase of a mission cycle . if we fold the variation of the solar cycle ( eq .11 ) with the triangular stereoscopic overlap area variation together , we obtain a statistical probability for the number of stereoscopically triangulable active regions .however , the number of accurate stereoscopical triangulations scales with the quality factor ( eq . 
10 ) , where the spacecraft separation angle is a piece - wise linear ( triangular ) function of time according to the spacecraft orbit . essentially we are assuming that the probability of successful stereoscopic triangulations at a given time scales with the quality factor or feasibility of accurate stereoscopy at this time . so , we obtain a combined probability of stereoscopically triangulable active regions of . in fig . 9 we show this combined statistical probability of feasible stereoscopy in terms of the expected number of active regions for a full mission cycle of 16 years , from 2006 to 2022 . it shows that the best periods for solar stereoscopy are during 2012 - 2014 , 2016 - 2017 , and 2021 - 2023 . after the stereo mission reached for the first time a full view of the sun this year ( 2011 feb 6 ) , the two stereo a and b spacecraft covered also for the first time the complete range of stereoscopic viewing angles from to . we explored the feasibility of stereoscopic triangulation for coronal loops in the entire angular range by selecting 5 datasets with viewing angles at and . because previous efforts for solar stereoscopy covered only a range of small stereoscopic angles ( ) , we had to generalize the stereoscopic triangulation code for large angles up to . we find that stereoscopy of coronal loops is feasible with good accuracy for cases in the range , a range that is also theoretically predicted by taking into account the triangulation errors due to finite spatial resolution and confusion in the stereoscopic correspondence identification in image pairs , which is hampered by projection effects and foreshortening for viewing angles near the limb . accurate stereoscopy ( within a factor of 2 of the best possible accuracy ) is predicted for a spacecraft separation angle range of . based on this model we predict that the best periods for stereoscopic 3d reconstruction during a full 16-year stereo mission cycle occur during 2012 - 2014 , 2016 - 2017 , and 2021 - 2023 , taking the variation in the number of active regions during the solar cycle into account also . why is the accuracy of stereoscopic 3d reconstruction so important ? solar stereoscopy has the potential to quantify the coronal magnetic field independently of conventional 2d magnetogram and 3d vector magnetograph extrapolation methods , and thus serves as an important arbiter in testing theoretical models of magnetic potential fields , linear force - free field models ( lfff ) , and nonlinear force - free field models ( nlfff ) . a benchmark test of a dozen nlfff codes has been compared with stereoscopic 3d reconstruction of coronal loops and a mismatch in the 3d misalignment angle of has been identified ( derosa et al . 2009 ) , which is attributed partially to the non - force - freeness of the photospheric magnetic field , and partially to insufficient constraints of the boundary conditions of the extrapolation codes . empirical estimates of the error of stereoscopic triangulation based on the non - parallelity of loops in close proximity have yielded uncertainties of . thus the residual difference in the misalignment is attributed to either the non - potentiality of the magnetic field ( in the case of potential field models ) , or to the non - force - freeness of the photospheric field ( for nlfff models ) . we also calculated magnetic potential fields here for all stereoscopically triangulated active regions and found mean misalignment angles of , which improved to for a nonlinear force - free model , which testifies
the reliability of stereoscopic reconstruction for the first time over a large angular range .the only case where stereoscopy clearly fails is found for an extremely large separation angle of ( ) , which is also reflected in the largest deviation of misalignment angles found ( , ) . based on these positive results of stereoscopic accuracy over an extended angular range from small to large spacecraft separation angleswe anticipate that 3d reconstruction of coronal loops by stereoscopic triangulation will continue to play an important role in testing theoretical magnetic field models for the future phases of the stereo mission , especially since stereoscopy of a single image pair does not require a high cadence and telemetry rate at large distances behind the sun .this work is supported by the nasa stereo mission under nrl contract n00173 - 02-c-2035 .the stereo/ secchi data used here are produced by an international consortium of the naval research laboratory ( usa ) , lockheed martin solar and astrophysics lab ( usa ) , nasa goddard space flight center ( usa ) , rutherford appleton laboratory ( uk ) , university of birmingham ( uk ) , max - planck - institut fr sonnensystemforschung ( germany ) , centre spatiale de lige ( belgium ) , institut doptique thorique et applique ( france ) , institute dastrophysique spatiale ( france ) . the usa institutions were funded by nasa ; the uk institutions by the science & technology facility council ( which used to be the particle physics and astronomy research council , pparc ) ; the german institutions by deutsches zentrum fr luft- und raumfahrt e.v .( dlr ) ; the belgian institutions by belgian science policy office ; the french institutions by centre national detudes spatiales ( cnes ) , and the centre national de la recherche scientifique ( cnrs ) . the nrl effort was also supported by the usaf space test program and the office of naval research .# 1 [ aschwanden , m.j . ,wlser , j.p . , nitta , n. , lemen , j. 2008 , , 827 . ][ aschwanden , m.j .2009 , , 31 . ][ aschwanden , m.j . andsandman , a.w .2010 , , 723 . ][ aschwanden , m.j .2012 , sol.phys .( subm . ) , _ a nonlinear force - free magnetic field approximation suitable for fast forward - fitting to coronal loops . i. theory _ , http : www.lmsal.com/~aschwand / eprints/2012_fff1.pdf ] [ aschwanden , m.j . and malanushenko , a .2012 , sol.phys .( subm ) , _ a nonlinear force - free magnetic field approximation suitable for fast forward - fitting to coronal loops .numeric code and tests _ , http://www.lmsal.com/~aschwand/eprints/2012_fff2.pdf ] [ aschwanden , m.j . , wuelser , j .-p . , nitta , n.v . ,lemen , j.r . , schrijver , c.j . ,derosa , m. , and malanushenko , a. 2012 , apj ( subm ) , _ first 3d reconstructions of coronal loops with the stereo a and b spacecraft : iv .magnetic field modeling with twisted force - free fields _ , http://www.lmsal.com/~aschwand/eprints/2012_stereo4.pdf , http://www.lmsal.com/~aschwand/movies/stereo_fff_movies ] [ bemporad , a. 2009 , , 298 . ][ berger , t.e . ,depontieu , b. , fletcher , l. , schrijver , c.j . ,tarbell , t.d . , and title , a.m. 1999 , , 409 . ] [ derosa , m.l . ,schrijver , c.j . ,barnes , g. , leka , k.d . , lites , b.w ., aschwanden , m.j . , amari , t. , canou , a. , mctiernan , j.m . ,regnier , s. , thalmann , j. , valori , g. , wheatland , m.s . ,wiegelmann , t. , cheung , m.c.m . ,conlon , p.a . ,fuhrmann , m. , inhester , b. , and tadesse , t. 2009 , , 1780 . ][ feng , l. , inhester , b. , solanki , s. , wiegelmann , t. 
, podlipnik , b. , howard , r.a . , and wlser , j.p .2007 , , l205 . ][ feng , l. , inhester , b. , solanki , s.k . , wilhelm , k. , wiegelmann , t. , podlipnik , b. , howard , r.a . ,plunkett , s.p . ,wlser , j.p . , andgan , w.q .2009 , , 292 . ][ howard , r.a . , howard , r.a . , moses , j.d . ,vourlidas , a. , newmark , j.s . ,socker , d.g . ,plunkett , s.p . ,korendyke , c.m . ,cook , j.w . , hurley , a. , davila , j.m . and 36 co - authors , 2008 , , 67 . ][ inhester , b. 2006 , arxiv e - print : astro - ph/0612649 . ][ kaiser , m.l . ,kucera , t.a . ,davila , j.m .cyr , o.c ., guhathakurta , m. , and christian , e. 2008 , , 5 . ][ liewer , p.c . , dejong , e.m . ,hall , j.r . , howard , r.a . , thompson , w.t . , culhane , j.l . , bone , l. , van driel - gesztelyi , l. 2009 , , 57 . ][ sandman , a. , aschwanden , m.j . ,derosa , m. , wlser , j.p . , and alexander , d. 2009 , , 1 . ][ sandman , a.w .and aschwanden , m.j .2011 , , 503 . ][ thompson , w.t .2006 , , 791 . ][ thompson , w.t .2011 , , 1138 . ][ thompson , w.t . ,davila , j.m ., st . cyr , o.c . , and reginald , n.l .2011 , , 215 . ][ wlser , j.p ., lemen , j.r ., tarbell , t.d . ,wolfson , c.j . ,cannon , j.c . ,carpenter , b.a . ,duncan , d.w . ,gradwohl , g.s . ,meyer , s.b . ,moore , a.s . , and 24 co - authors , 2004 , spie * 5171 * , 111 . ] | we performed for the first time stereoscopic triangulation of coronal loops in active regions over the entire range of spacecraft separation angles ( , and ) . the accuracy of stereoscopic correlation depends mostly on the viewing angle with respect to the solar surface for each spacecraft , which affects the stereoscopic correspondence identification of loops in image pairs . from a simple theoretical model we predict an optimum range of , which is also experimentally confirmed . the best accuracy is generally obtained when an active region passes the central meridian ( viewed from earth ) , which yields a symmetric view for both stereo spacecraft and causes minimum horizontal foreshortening . for the extended angular range of we find a mean 3d misalignment angle of of stereoscopically triangulated loops with magnetic potential field models , and for a force - free field model , which is partly caused by stereoscopic uncertainties . we predict optimum conditions for solar stereoscopy during the time intervals of 20122014 , 20162017 , and 20212023 . |
entangled states are an essential resource for various quantum information processing tasks . hence , it is required to generate maximally entangled states . however , for a practical use , it is more essential to guarantee the quality of generated entangled states . statistical hypothesis testing is a standard method for guaranteeing the quality of industrial products . therefore , it is much needed to establish the method for statistical testing of maximally entangled states . quantum state estimation and quantum state tomography are known as methods of identifying the unknown state . quantum state tomography has been recently applied to obtain full information of the density matrix . however , if the purpose is testing of entanglement , it is more economical to concentrate on checking the degree of entanglement . such a study has been done by tsuda et al as optimization problems of povm . however , an implemented quantum measurement can not always be regarded as an application of a povm to a single particle system or a multiple application of a povm to single particle systems . in particular , in quantum optics , the following measurement is often realized , which is not described by a povm on a single particle system . the number of generated particles is probabilistic . we prepare a filter corresponding to a projection , and detect the number of particles passing through the filter . if the number of generated particles obeys a poisson distribution , as is mentioned in section [ s2 ] , the number of detected particles obeys another poisson distribution whose average is given by the density and the projection . in this kind of measurement , if no particle is detected , we can not decide whether a particle was not generated or it was generated but did not pass through the filter . if we can detect the number of generated particles as well as the number of passing particles , the measurement can be regarded as the multiple application of the povm . in this case , the number of applications of the povm is the variable corresponding to the number of generated particles . also , we can only detect the empirical distribution . hence , our obtained information can almost be discussed by use of the povm . however , if it is impossible to distinguish the two events due to some imperfections , it is impossible to reduce the analysis of our obtained information to the analysis of povms . hence , it is needed to analyze the performance of the estimation and/or the hypothesis testing based on the poisson distribution describing the number of detected particles . if we discuss the ultimate bound of the accuracy of the estimation and/or the hypothesis testing , we do not have to treat such imperfect measurements . since several realistic measurements have such imperfections , it is very important to optimize our measurement among such a class of imperfect measurements . in this paper , our measurement is restricted to the detection of the number of particles passing through the filter corresponding to a projection . we apply this formulation to the testing of maximally entangled states on two qubit systems ( two - level systems ) , each of which is spanned by two vectors and . since the target system is a bipartite system , it is natural to restrict our measurement to local operations and classical communications ( locc ) . in this paper , for a simple realization , we restrict our measurements to the number of simultaneous detections at both parties of the particles passing through the respective filters .
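the counting model just described can be made concrete with a small simulation : the number of particles detected behind a filter during a time t is drawn from a poisson distribution whose mean is set by the generation rate , the overlap of the state with the filter projection , and ( optionally ) a dark - count contribution . the state , the filter , the rate , and the additive form of the dark - count term below are illustrative assumptions of this sketch , not the exact parameterization of the paper .

```python
import numpy as np

rng = np.random.default_rng(1)

def detected_counts(rho, proj, lam, t, dark=0.0):
    """Sample the number of particles detected behind the filter `proj`.

    Assumed model for this sketch:
    counts ~ Poisson( lam * t * ( tr(rho proj) + dark ) ),
    i.e. Poissonian generation with intensity lam and an additive dark-count rate.
    """
    mean = lam * t * (np.real(np.trace(rho @ proj)) + dark)
    return rng.poisson(mean)

# toy example: a slightly depolarised two-qubit entangled state and the
# local filter |0><0| (x) |0><0| (one of the coincidence vectors)
bell = np.zeros(4); bell[[0, 3]] = 1 / np.sqrt(2)
rho = 0.95 * np.outer(bell, bell) + 0.05 * np.eye(4) / 4
ket00 = np.zeros(4); ket00[0] = 1.0
proj = np.outer(ket00, ket00)
print(detected_counts(rho, proj, lam=1e4, t=1.0, dark=0.01))
```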
we also restrict the total measurement time , and optimize the allocation of the time for each filters at the both parties . as our results , we obtain the following characterizations .if the average number of the generated particles is known , our choice is counting the coincidence events or the anti - coincidence events .when the true state is close to the target maximally entangled state ( that is , the fidelity between these is greater than ) , the detection of anti - coincidence events is better than that of coincidence events .this result implies that the indistinguishability between the coincidence events and the non - generation event loses less information than that between the anti - coincidence events and the non - generation event .this fact also holds even if we treat this problem taking into account the effect of dark counts . in this discussion , in order to remove the bias concerning the direction of the difference , we assume the equal time allocation among the vectors , which corresponds to the anti - coincidence events , and that among the vectors , which corresponds to the coincidence events , where , , , .indeed , barbieri et al proposed to detect the anti - coincidence events for measuring an entanglement witness , they did not prove the superiority of detecting the anti - coincidence events in the framework of mathematical statistics .however , the average number of the generated particles is usually unknown . in this case, we can not estimate how close the true state is to the target maximally entangled state from the detection of anti - coincidence events .hence , we need to count the coincidence events as additional information . in order to resolve this problem, we usually use the equal allocation between anti - coincidence events and coincidence events in the visibility method , which is a conventional method for checking the entanglement .however , since we measure the coincidence events and the anti - coincidence events based on one or two bases in this method , there is a bias concerning the direction of the difference . in order to remove this bias ,we consider the detecting method with the equal time allocation among all vectors and , and call it the modified visibility method . in this paper, we also examine the detection of the total flux , which can be realized by detecting the particle without the filter .we optimize the time allocation among these three detections .we found that the optimal time allocation depends on the fidelity between the true state and the target maximally entangled state .if our purpose is estimating the fidelity , we can not directly apply the optimal time allocation .however , the purpose is testing whether the fidelity is greater than the given threshold , the optimal allocation at gives the optimal testing method .if the fidelity is less than a critical value , the optimal allocation is given by the allocation between the anti - coincidence vectors and the coincidence vectors ( the ratio depends on . 
) otherwise , it is given by the allocation only between the anti - coincidence vectors and the total flux . this fact is valid even if the dark count exists . if the dark count is greater than a certain value , the optimal time allocation is always given by the allocation between the anti - coincidence vectors and the coincidence vectors . further , we consider the optimal allocation among anti - coincidence vectors when the average number of generated particles is known . the optimal allocation depends on the direction of the difference between the true state and the target state . since the direction is usually unknown , this optimal allocation does not seem useful . however , by adaptively deciding the optimal time allocation , we can apply the optimal time allocation . we propose to apply this optimal allocation by use of the two - stage method . further , taking into account the complexity of testing methods and the dark counts , we give a testing procedure of entanglement based on the two - stage method . in addition , proposed designs of experiments were demonstrated by hayashi et al . with two - photon pairs generated by spontaneous parametric down conversion ( spdc ) . in this article , we reformulate the hypothesis testing to be applicable to the poisson distribution framework , and demonstrate the effectiveness of the optimized time allocation in the entanglement test . the construction of this article is as follows . section [ s2 ] defines the poisson distribution framework and gives the hypothesis scheme for the entanglement . section [ s3 ] gives the mathematical formulation concerning statistical hypothesis testing . sections [ s4 ] and [ s5 ] give the fundamental properties of the hypothesis testing : section [ s4 ] introduces the likelihood ratio test and its modification , and section [ s5 ] gives the asymptotic theory of the hypothesis testing . sections [ s6]-[s9 ] are devoted to the designs of the time allocation between the coincidence and anti - coincidence bases : section [ s6 ] defines the modified visibility method , section [ s7 ] optimizes the time allocation when the total photon flux is unknown , section [ s8 ] gives the results with known , and section [ s9 ] compares the designs in terms of the asymptotic variance . section [ s10 ] gives further improvement by optimizing the time allocation between the anti - coincidence bases . the appendices give the details of the proofs used in the optimization . let be the hilbert space of our interest , and be the projection corresponding to our filter . if we assume the generation process at each time to be identical but individual , the total number of generated particles during the time obeys the poisson distribution . hence , when the density of the true state is , the probability of the number of detected particles is given as . in fact , if we treat the fock space generated by instead of the single particle system , this measurement can be described by a povm . however , since this povm does not have a simple form , it is suitable to treat this measurement in the form ( [ 5 - 8 - 1 ] ) . further , if we erroneously detect the particles with the probability , the probability of the number of detected particles is equal to . this kind of incorrect detection is called a dark count . further , since we consider the bipartite case , i.e. , the case where , we assume that our projection has the separable form . in this paper , under the above assumption , we discuss the hypothesis testing when the target state is the maximally entangled state , while usami et al.
discussed the state estimation under this assumption . herewe measure the degree of entanglement by the fidelity between the generated state and the target state : the purpose of the test is to guarantee that the state is sufficiently close to the maximally entangled state with a certain significance .that is , we are required to disprove that the fidelity is less than a threshold with a small error probability . in mathematical statistics , this situation is formulated as hypothesis testing ; we introduce the null hypothesis that entanglement is not enough and the alternative that the entanglement is enough : with a threshold .visibility is an indicator of entanglement commonly used in the experiments , and is calculated as follows : first , a s measurement vector is fixed , then the measurement is performed by rotating b s measurement vector to obtain the maximum and minimum number of the counts , and .we need to make the measurement with at least two bases of a in order to exclude the possibility of the classical correlation .we may choose the two bases and as , for example .finally , the visibility is given by the ratio between and with the respective a s measurement basis .however , our decision will contain a bias , if we choose only two bases as a s measurement basis .hence , we can not estimate the fidelity between the target maximally entangled state and the given state in a statistically proper way from the visibility .since the equation holds , we can estimate the fidelity by measuring the sum of the counts of the following vectors : obeys the poisson distribution with the expectation value , where the measurement time for each vector is .we call these vectors the coincidence vectors because these correspond to the coincidence events .however , since the parameter is usually unknown , we need to perform another measurement on different vectors to obtain additional information .since also holds , we can estimate the fidelity by measuring the sum of the counts of the following vectors : obeys the poisson distribution , where the measurement time for each vector is . combining the two measurements, we can estimate the fidelity without the knowledge of .we call these vectors the anti - coincidence vectors because these correspond to the anti - coincidence events. we can also consider different type of measurement on .if we prepare our device to detect all photons , i.e. , the case where the projection is , the detected number obeys the distribution ) with the measurement time . we will refer to it as the total flux measurement . 
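as a rough illustration of how the two count types combine , the sketch below uses a simplified model in which the summed coincidence counts and the summed anti - coincidence counts are poisson variables with means proportional to the fidelity and to one minus the fidelity , respectively ( the proportionality constants of the actual design are suppressed here ) . the point estimate then follows from the two count rates , and a one - sided test of the hypothesis that the fidelity is below a threshold can be carried out conditionally on the total count , where the anti - coincidence count is binomial . the counts , measurement times , and threshold in the example are placeholders .

```python
import numpy as np
from scipy.stats import binom

def estimate_fidelity(n_c, n_a, t_c, t_a):
    """Point estimate of F from summed coincidence / anti-coincidence counts.

    Simplified model (an assumption of this sketch): n_c ~ Poisson(lam*t_c*F)
    and n_a ~ Poisson(lam*t_a*(1-F)); the unknown flux lam cancels in the ratio.
    """
    rc, ra = n_c / t_c, n_a / t_a
    return rc / (rc + ra)

def test_fidelity(n_c, n_a, t_c, t_a, f0, alpha=0.05):
    """One-sided test of H0: F <= f0 against H1: F > f0.

    Conditionally on the total N = n_c + n_a, the anti-coincidence count is
    binomial; an unusually small n_a is evidence against H0, so H0 is rejected
    when the lower binomial tail probability under F = f0 is at most alpha.
    """
    N = n_c + n_a
    q0 = (1 - f0) * t_a / (f0 * t_c + (1 - f0) * t_a)  # P(anti-coincidence) under F = f0
    p_value = binom.cdf(n_a, N, q0)
    return p_value, p_value <= alpha

n_c, n_a, t_c, t_a = 960, 55, 1.0, 1.0   # placeholder counts and time allocations
print("estimated fidelity:", estimate_fidelity(n_c, n_a, t_c, t_a))
print("p-value and decision for H0: F <= 0.9 :", test_fidelity(n_c, n_a, t_c, t_a, f0=0.9))
```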
in the following ,we consider the best time allocation for estimation and test on the fidelity , by applying methods of mathematical statistics .we will assume that is known or estimated from the detected number .in this section , we review the fundamental knowledge of hypothesis testing for probability distributions .suppose that a random variable is distributed according to a probability measure identified by the unknown parameter .we also assume that the unknown parameter belongs to one of mutually disjoint sets and .when we want to guarantee that the true parameter belongs to the set with a certain significance , we choose the null hypothesis and the alternative hypothesis as then , our decision method is described by a test , which is described as a function taking values in ; is rejected if is observed , and is not rejected if is observed .that is , we make our decision only when is observed , and do not otherwise .this is because the purpose is accepting by rejecting with guaranteeing the quality of our decision , and is not rejecting nor accepting .therefore , we call the region the rejection region . the test can be defined by the rejection region .in fact , we choosed the hypothesis that the fidelity is less than the given threshold as the null hypothesis in section [ s2 ] .this formulation is natural because our purpose is guaranteeing that the fidelity is not less than the given threshold . from theoretical viewpoint ,we often consider randomized tests , in which we probabilistically make the decision for a given data .such a test is given by a function mapping to the interval ] has the minimum at .hence , .it is sufficient to show that if and only if and . by putting , the lhs of ( [ 2 - 24 - 4 ] )is evaluated as since and , if and only if and .define by in fact , when , this value is monotone decreasing concerning . when , this value is .hence , the value coincides with the the value defined by ( [ 2 - 6 - 12 ] ) and ( [ 2 - 6 - 11 ] ) .thus , the relation ( [ 5 - 2 - 1 ] ) follows from the relation we choose such that .then , the above inequality follows from lemma [ le-3 - 12 ] in the following way : [ le-3 - 12 ] any real number and any four sequence of positive numbers , , , and satisfy it is sufficient to show the convexity of implies that hence , the shape of the graph , we can show that the minimum value can be attained by the boundary of .hence the boundary of the convex set is included by the union of the lines .taking the derivative of concerning , we obtain } \frac{t x_i(r)+(1-t)x_j(r ) } { \sqrt{t y_i(r)+(1-t)y_j(r ) } } = z_{i , j}(r).\end{aligned}\ ] ] hence , we obtain ( [ 5 - 5 - 1 ] ). 99 c.h .bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , _ phys .lett . _ * 70 * , 1895 ( 1993 ) .briegel , w. dur , j.i .cirac , and p. zoller , _ phys ._ , * 81 * , 5932 ( 1998 ) . c. w. helstrom , _ quantum detection and estimation theory, academic press ( 1976 ) .m. barbieri , f. de martini , g. di nepi , p. mataloni , g. m. dariano , and c. macchiavello , _ phys ._ , * 91 * , 227901 ( 2003 ) .y. tsuda , k. matsumoto , and m. hayashi .`` hypothesis testing for a maximally entangled state , '' quant - ph/0504203 .p. g. kwiat , e. waks , a. g. white , i. appelbaum , and p.h .eberhard , _ phys .a _ , * 60 * , 773(r ) ( 1999 ) . k. usami , y. nambu , y. tsuda , k. matsumoto , and k. nakamura , `` accuracy of quantum - state estimation utilizing akaike s information criterion , '' _ phys . rev .a _ , * 68 * , 022314 ( 2003 ) . 
| a hypothesis testing scheme for entanglement has been formulated based on the poisson distribution framework instead of the povm framework . three designs were proposed to test the entangled states in this framework . the designs were evaluated in terms of the asymptotic variance . it has been shown that the optimal time allocation between the coincidence and anti - coincidence measurement bases improves the conventional testing method . the test can be further improved by optimizing the time allocation between the anti - coincidence bases . |
complex decision making tasks over a distributed quantum network , a network including entangled nodes , can be analyzed with a quantum game theory approach .quantum games extend the applicability of classical games to quantum networks , which may soon be a reality .quantum game theory imports the ideas from quantum mechanics such as entanglement and superposition , into game theory .the inclusion of entanglement leads to player outcomes that are correlated so that entanglement often behaves like mediated communication between players in a classical game .this can lead to a game that has different nash equilibria with greater payoffs than the classical counterpart .the analysis of quantum games with entanglement can resemble the correlated equilibria of classical games .the entanglement is imposed by a referee , and acts like a contract that can not be broken between the players , and can persist non - locally after the initial entanglement has been performed and communication forbidden .this is in contrast to classical correlated equilibria that rely on communication between the players , whose contracts can be broken , and can not exhibit the non - local behavior associated with quantum mechanics . the correlations produced by entanglement can achieve probability distributions over the payoffs that are not possible in the classical game , even when mixed strategies are used .when interacting with a network , the agents will often have incomplete information about the other nodes .quantum games with incomplete information can be treated within a bayesian approach . with this approach in mind, we are interested in quantized games with classical priors , i.e. a statistical mixture of two quantum games . detailed analysis of bayesian quantum games can potentially lead to applications in quantum security protocols , the development of distributed quantum computing algorithms , or improving the efficiency of classical network algorithms .experiments have begun to demonstrate the results of quantum game theory in nuclear magnetic resonance , quantum circuits in optical , and ion - trap platforms , which , in some cases , i.e. optical , can be easily imagined on a distributed quantum network . to quantize a classical game ,we follow the approach given in the seminal einstein - wilkens - lewenstein scheme .the scheme goes as follows ; both players qubits are initialized to the state , an entangling operation , , is applied , the players apply their strategy choice , , an un - entangling operation is applied , the payoffs are determined from the probability distribution of the final state .this procedure can be encoded in the quantum circuit show in figure [ fig : qpd ] . 
the amount of entanglement that occurs can be varied by varying the parameter in the entangling operation : at maximal entanglement , this operation produces a bell state , and at it is the identity operator . the game is defined by setting the possible strategies of the players . for this we parametrize a single qubit rotation , , with three parameters , in : where $\theta \in [ 0,\pi ] , \phi \in [ 0,2\pi ] , \alpha \in [ 0,2\pi]$ . the outcome of the game is given by : and the average payoff is derived from the expectation values of a measurement performed at the end and the payoff vector . there are four possible outcomes , . correspondence to the classical game is made by associating each outcome with one of the classical strategy choices , such that corresponds to confess ( c ) , and corresponds to defect ( d ) , as is illustrated in the canonical prisoner s dilemma game with the payoff matrix shown in table [ tab : pdmatrix ] . [ table [ tab : pdmatrix ] : payoff matrix of the canonical prisoner s dilemma game . ]
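a compact numerical sketch of this construction is given below . the entangling operation is taken as j ( gamma ) = exp ( i gamma sigma_x ( x ) sigma_x / 2 ) and the strategy as a standard three - parameter single - qubit rotation ; both are common conventions in this literature but are assumptions here , since the explicit matrices above are not reproduced . the prisoner s dilemma payoff entries ( 3 , 0 , 5 , 1 ) are likewise textbook values used only for illustration .

```python
import numpy as np

def strategy(theta, phi, alpha):
    """Assumed three-parameter single-qubit strategy (one common convention)."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2),
                      1j * np.exp(1j * alpha) * np.sin(theta / 2)],
                     [1j * np.exp(-1j * alpha) * np.sin(theta / 2),
                      np.exp(-1j * phi) * np.cos(theta / 2)]])

def entangler(gamma):
    """Assumed convention: J(gamma) = exp(i*gamma/2 * sigma_x (x) sigma_x)."""
    sxsx = np.kron(np.array([[0, 1], [1, 0]]), np.array([[0, 1], [1, 0]]))
    return np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * sxsx

def payoffs(Ua, Ub, gamma, table):
    """Expected payoffs (A, B); table[k] = (a_k, b_k) for |00>, |01>, |10>, |11>."""
    psi0 = np.zeros(4); psi0[0] = 1.0
    J = entangler(gamma)
    psi = J.conj().T @ np.kron(Ua, Ub) @ J @ psi0   # J^dagger (Ua x Ub) J |00>
    probs = np.abs(psi) ** 2
    table = np.asarray(table, dtype=float)
    return float(probs @ table[:, 0]), float(probs @ table[:, 1])

pd_table = [(3, 3), (0, 5), (5, 0), (1, 1)]         # textbook PD values (assumed)
C, D = strategy(0, 0, 0), strategy(np.pi, 0, 0)     # classical confess / defect
Q = strategy(0, np.pi / 2, 0)                       # the "quantum" strategy
print(payoffs(D, D, 0.0, pd_table))                 # no entanglement: (1.0, 1.0)
print(payoffs(Q, Q, np.pi / 2, pd_table))           # maximal entanglement: (3.0, 3.0)
```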
the average payoff for player a in the bayesian game is given by : the b player s average payoff is still calculated according to equation [ eq : payoff ] .the primary solution concept used in game theory is the nash equilibrium .a nash equilibrium is a set of strategies where neither player could benefit by unilaterally deviating .the payoff to the player s at the nash equilibrium represents a stable payoff in a repeated game or large ensemble , because it is self - enforcing .there are refinements to the concept of a nash equilibrium that are used to capture different types of games .relevant to quantum games is the concept of a correlated equilibrium .a correlated equilibrium is a game where the player s strategy choices are correlated in some way , such as reacting to advice or a contract , such that probability distributions are possible that are not in the image of the classical game with mixed strategies .entanglement acts to correlate the player s outcomes in a similar way , except in quantum games the entanglement is imposed by a referee , and once the entanglement is produced , the player s can not break the contract .the method of best responses is used to find the nash equilibria of the game .there have been analytic solutions found for certain cases of quantum games(cite ) , but with the aim to examine a wide range of games , including a bayesian framework and asymmetric payoffs , and for these cases , analytic solutions remain elusive .therefore we adopt a numerical procedure . first , the possible strategies must be chosen .equation [ eq : strat ] represents a completely arbitrary strategy choice .it is instructive to analyze the game with a more discretized set of strategies .we chose a step size for each parameter .then we compile a list that contains all possible combinations of integer multiples of these steps , within the bounds of the parameters : where represents the ith element of the list .this set defines the possible strategies of a game .several of these matrices are redundant , because for example , when , is undefined . to construct the best response list for player a , for each possible strategy choice of playerb , we compute the payoff for player a for each of their possible strategy choices. then we select the elements which have the highest payoff , or best response , .this produces a list of a s best responses to each of b s strategy choices : , where j ranges over all possible strategy choices .then similarly b s best response list is composed, .the nash equilibria are given by the intersection of the best response functions : .for the bayesian game , this procedure is straightforwardly extended to three players .many of the interesting features of the games we examine exist with the stepping parameters , which has a total of 8 unique strategy choices .the majority of the data are presented with these stepping parameters .next we examine the behavior of one game in more detail as we step finer in each of the strategy parameters .this eventually becomes computationally impractical as the step sizes get too small .for example , the stepping parameters yield 1824 unique strategy choices .the code in mathematica with this many strategy choices takes 1 hour to compute all of the nash equilibria for the two - player game for all values of entanglement , making solutions to the bayesian game impractical to find with the current method .here we present the solutions for several two - player games found in the literature and textbook games . 
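the best - response procedure just described can be sketched as follows : build the discretised strategy list from the step sizes , tabulate both players payoffs for every ordered pair of strategies ( e.g. with a payoff routine such as the one sketched after the circuit description above ) , mark each player s best replies , and intersect the two best - response sets . the grid sizes , tolerance , and the 2 x 2 classical check below are placeholders .

```python
import numpy as np
from itertools import product

def strategy_grid(n_theta=3, n_phi=3, n_alpha=3):
    """Discretised strategy list (eq. 4): all combinations of the step values."""
    thetas = np.linspace(0, np.pi, n_theta)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    alphas = np.linspace(0, 2 * np.pi, n_alpha, endpoint=False)
    return list(product(thetas, phis, alphas))   # feed these to the payoff routine

def nash_equilibria(pay_a, pay_b, tol=1e-9):
    """Pure-strategy Nash equilibria of a bimatrix game via best responses.

    pay_a[i, j], pay_b[i, j]: expected payoffs when A plays strategy i and B plays j.
    """
    best_a = pay_a >= pay_a.max(axis=0, keepdims=True) - tol   # A's best replies to each j
    best_b = pay_b >= pay_b.max(axis=1, keepdims=True) - tol   # B's best replies to each i
    return np.argwhere(best_a & best_b)                        # intersection of the two sets

# toy check with the classical 2x2 prisoner's dilemma: the unique equilibrium is (D, D)
pa = np.array([[3, 0], [5, 1]])
pb = np.array([[3, 5], [0, 1]])
print(len(strategy_grid()), "strategies in the coarse grid")
print(nash_equilibria(pa, pb))   # -> [[1 1]]
```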
for the sake of space , we will not discuss the real world interpretation of these games , rather we focus on their mathematical properties . the data are shown in figure [ fig : bayesiandata ] , where a pair of two - player games are presented with their payoff matrices inset in the form of table [ tab : pdmatrix ] . in the case of asymmetric games , in the two - player games , player a is plotted in blue , and player b is plotted in red . the payoffs at the nash equilibria are plotted for both players as a function of the entanglement parameter . the nash equilibria curves come in two types , constant with , or increasing with . it is instructive to compare these to the solutions of the corresponding classical games . in all cases , the values of the payoff at no entanglement , i.e. , match those of the classical game . for games that have only one constant nash equilibrium curve , such as the games labeled ` type b , ' ` deadlock ' , and ` stag hunt , ' there is one nash equilibrium of the classical game and it is pareto optimal . as the entanglement increases , the nash equilibrium vanishes above some critical value . for games that have a nash equilibrium that grows with ( i.e. ` prisoner s dilemma ' ) there is only one nash equilibrium of the classical game but it is not pareto efficient . the nash equilibria for these games also vanish for some critical value of entanglement . the vanishing of the nash equilibrium at a critical entanglement has been compared to a phase transition - like behavior . for games that have two nash equilibria in the classical game , one of them pareto optimal , the pareto optimal solution remains for all values of entanglement , whereas the second nash equilibrium grows and converges with the optimal one at maximal entanglement . in these cases , the pareto optimal nash equilibrium does not vanish at some critical entanglement . for two - player games with no nash equilibrium classically , such as the ` matching pennies ' game , there are no nash equilibria in the corresponding quantum game .
in short , games with one equilibrium seem to lose that equilibrium at a critical entanglement , and in games with two equilibria , those equilibria persist for all values of entanglement . because our methods are numerical , these observations are not tantamount to formal proofs , and counter - examples may be found , but they are suggestive of a deeper structure . the ` da s brother ' is an interesting outlier from this categorization . the classical game has only one nash equilibrium , and it is pareto optimal . however , as the entanglement increases , a second nash equilibrium appears and then converges to the pareto optimal solution as in the case of games with two equilibria mentioned above . additionally , the pareto efficient solution does not vanish at some critical value of entanglement . three - player bayesian games can be composed out of a pair of two - player games . this can be interpreted as the players having incomplete information about their opponents ' payoffs . the solutions to the bayesian games composed of various two - player game combinations are plotted in 3d below the two - player results in figure [ fig : bayesiandata ] . the payoff for only player a is plotted against the entanglement , , and the probability to play with each player , from equation [ eq : bpayoff ] . the bayesian graphs are oriented so that the top two - player game is in the back of the 3d plot with the bottom game in the front , and no entanglement on the right with maximal entanglement on the left . as expected , the and solutions to the bayesian game match the two - player game solutions for players a and b respectively . along the plane , the results match those of the classical game with mixed strategies . in games with the same number of nash equilibria in the two component two - player games , such as in ` deadlock ' vs. ` prisoner s dilemma ' , the solutions at continuously and linearly transform into the solutions at . when there are a different number of nash equilibria in the two component two - player games , the equilibria must vanish , or appear , at some value of . this is similar to the vanishing , or appearance , of nash equilibria at a critical entanglement in the two - player games , only here we also see them in the bayesian game as the degree of incomplete information , , changes . returning to a two - player game , we now examine the game as the discretization of the strategy choices in equation [ eq : space ] becomes finer , approaching the limit of completely arbitrary su(2 ) rotations . the nash equilibria of the ` da s brother ' are now calculated using the stepping parameters , which yield 1824 unique strategy choices . as seen in the left graph of figure [ fig : continuous ] , the space between the two nash equilibria becomes filled with many additional equilibria . the nash equilibria found with form the upper and lower bounds of the new nash equilibria . in the right graph of figure [ fig : continuous ] the strategy parameters of the nash equilibria for the players are compared by plotting vs against each other for each nash equilibrium . this shows that the nash equilibria generally follow the trend . taking a slice of the payoff as a function of entanglement data at , a histogram of the payoffs achieved suggests that there is some structure within the distribution , as shown in the left hand graph of figure [ fig : continuous2 ] .
in this datathe stepping parameters used were , yielding 7968 unique strategy choices .the data suggest that more nash equilibria occur near the original pareto optimal solution that occured with .there may be some indication that the nash equilibria are beginning to converge towards the two original nash equilibria however , as computation with finer steps is impractical with our current numerical method , further study is required .there is also a relationship between the payoff that is rewarded and the strategy choice at each nash equilibrium . in the right side graph of figure[ fig : continuous2 ] we plot the parameter of each nash equilibria against the payoff of player a. when , the payoff is the pareto optimal solution , which is expected from the results with .then , as approaches , the other equilibrium of the original game , the payoff transforms to the payoff of the second equilibria .further study is needed to understand these relationships .the nash equilibria that arise in a quantum game , where entanglement produces correlations in the player s outputs , can be compared to the correlated equilibrium in classical game theory .a correlated equilibrium in classical games can arise when mixed strategies are used and there is communication between the players in the form of advice from a referee or a contract . if players receive some piece of advice , or react in a predetermined way to a random event , they can employ strategies that are correlated with one another and realize self - enforcing equilibria that are different from those in the mixed game without communication .when entanglement produces correlated outcomes for the players , the equilibria produced strongly resemble the correlated equilibria .in contrast to the classical case , the role of advice is played by the initial entanglement . andonce that entanglement is imposed on the players by a referee , it forms an effective contract that can not be broken .the correlations will persist even if the players are not allowed communication after the initial entanglement .in addition , quantum correlations can exhibit probability distributions that are not allowed by classical correlations , and can persist non - locally .the sudden dissapearance of a nash equilibrium as the entanglement is increased suggests that the correlations can benefit the player s up to a point , but when the correlations are too strong , the nash equilibrium no longer occurs. it would be interesting to find examples of classical games where the enforcement of some contract produces a benefit for the players , but if it is enforced too strongly , it ceases to allow a nash equilibrium .in addition , the abrupt changing of the structure of nash equilibrium as a function of the player s incomplete information could strongly effect any protocol on a network where the agents have some uncertainty about the payoffs or players in the game .maitra , a. , _ et al ._ : proposal for quantum rational secret sharing , phys .a 92 , 022305 ( 2015 ) .li , q. , he , y. , and jiang , j .-p . : a novel clustering algorithm based on quantum games , j. phys . a : math . theor .42 , 445303 ( 2009 ) .zableta , o.g . ,barrang , j. p. , and arizmendi c. m. : quantum game application to spectrum scarcity problems , physica a 466 ( 2017 ) .du , j. , li , h. , xu , x. , shi , m. , wu , j. , zhou , x. , and han , r. : experimental realization of quantum games on a quantum computer , phys . rev .lett . , 88 , 137902 ( 2002 ) .prevedel , r. , andre , s. , walther , p. 
, and zeilinger , a. : experimental realization of a quantum game on a one - way quantum computer , new journal of physics 9 , 205 ( 2007 ) .buluta , i. m. ; fujiwara , s. ; hasegawa , s. : quantum games in ion traps , physics letters a 358 , 100 ( 2006 ) .harsanyi , j. c. : games with incomplete information played by bayesian players , mgt .14 , 159 ( 1967 ) .flitney , p. and d. abbott , advantage of a quantum player over a classical one in 2x2 quantum games , proc .a , london , ( 2003 ) .avishai , y. : some topics in quantum games , masters thesis , ben gurion university of the negev , beer sheva , israel ( 2012 ) .du , j. , li , h. , xu , x. , zhou , x. , and han , r. : phase - transition - like behaviour of quantum games , j. phys . a : math .gen 36 p. 6551 - 6562solmeyer , n. , r. dixon , and r. balu , characterizing the nash equilibria of a three - player bayesian quantum game , _ forthcoming ( 2017 ) .auman , r. : subjectivity and correlation in randomized strategies , journal of mathematical economics , 1 , p. 67 - 96 , ( 1974 ) ._ * neal solmeyer * is a physicist the army research laboratory ( adelphi , md ) .he received his ba degrees in physic and philosophy from carleton college ( northfield , mn ) in 2006 , and his phd degree in physics from penn state university ( state college , pa ) in 2013 .his research interests include using rydberg excitations in ensembles of laser cooled and trapped rubidium atoms for the purposes of quantum communication , and applying quantum game theory to distributed quantum networks .biographies and photographs of the other authors are not available . | quantum games with incomplete information can be studied within a bayesian framework . we analyze games quantized within the ewl framework [ eisert , wilkens , and lewenstein , phys rev . lett . 83 , 3077 ( 1999 ) ] . we solve for the nash equilibria of a variety of two - player quantum games and compare the results to the solutions of the corresponding classical games . we then analyze bayesian games where there is uncertainty about the player types in two - player conflicting interest games . the solutions to the bayesian games are found to have a phase diagram - like structure where different equilibria exist in different parameter regions , depending both on the amount of uncertainty and the degree of entanglement . we find that in games where a pareto - optimal solution is not a nash equilibrium , it is possible for the quantized game to have an advantage over the classical version . in addition , we analyze the behavior of the solutions as the strategy choices approach an unrestricted operation . we find that some games have a continuum of solutions , bounded by the solutions of a simpler restricted game . a deeper understanding of bayesian quantum game theory could lead to novel quantum applications in a multi - agent setting . * * * * * neal solmeyer 1 |
numerical methods for approximating variational problems or partial differential equations ( pdes ) with solutions defined on surfaces or manifolds are of growing interests over the last decades .finite element methods , as one of the main streams in numerical simulations , are well established for those problems .a starting point can be traced back to , which is the first to investigate a finite element method for solving elliptic pdes on surfaces . since then , there have been a lot of extensions both in analysis and in algorithms , see for instance and the references therein . in the literature , most of the works consider the _ a priori _ error analysis of various surface finite element methods , and only a few works , up to our best knowledge , take into account the _ a posteriori _ error analysis and superconvergence of finite element methods in a surface setting , see .recently , there is an approach proposed in which merges the two types of analysis to develop a higher order finite element method on an approximated surface , where a gradient recovery scheme plays a key role .gradient recovery techniques , which are important in _ post processing _ solutions or data for improving the accuracy of numerical simulations , have been widely studied and applied in many aspects of numerical analysis . in particular for planar problems ,the study of gradient recovery methods has reached already a mature stage , and there is a massive of works in the literature , to name but only a few .we point out some significant methods among them , like the classical zienkiewicz zhu ( ) patch recovery method , and a later method called polynomial preserving recovery ( ppr ) .the two approaches work with different philosophies in methodology .the former method first locates positions of certain points in the given mesh , and then recovers the gradients themselves at those points to achieve a higher order approximation accuracy , while the latter one first recovers the function values by polynomial interpolations in a local patch at each nodal points , and then takes gradients at the nodal points from the previously recovered functions .both the methods can produce comparable superconvergence results , but do not require the same assumptions on the discretized meshes .gradient recovery methods for data defined on curved spaces have only recently been investigated . in ,several gradient recovery methods have been adapted to a general surface setting for linear finite element solutions which are defined on polyhedrons by triangulation .there a surface is concerned to be a zero level set of a smooth function defined in a higher dimensional space , which is from the point of view of an ambient space of the surface .it has been shown that most of the properties of the gradient recovery schemes for planar problems are maintained in their counterparts for surface problems .in particular , in their implementation and analysis , the methods ask for exact knowledge of the surface , e.g. the nodal points are located on the exact surface , and the tangent spaces or in another word the normal vector field are given . however , this information is usually not available in reality , where we have only the approximations of surfaces , for instance , polyhedrons , splines or polynomial surfaces . 
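to make the contrast with the parametric variant introduced below concrete , the following python sketch illustrates the planar ppr idea in its simplest form : at a node , fit a quadratic polynomial to the finite element nodal values in a local patch by least squares , then differentiate the fit at that node . it is a minimal illustration only , assuming the patch supplies enough nodes for a unique fit ; the actual method includes rules for selecting such patches .

```python
import numpy as np

def ppr_gradient_2d(node, patch_nodes, patch_values):
    """recovered gradient at `node` from a local least-squares quadratic fit
    (planar polynomial preserving recovery, schematic version).

    patch_nodes  : (n, 2) array of mesh nodes in the patch (n >= 6 needed)
    patch_values : (n,)   array of finite element nodal values
    """
    dx = patch_nodes[:, 0] - node[0]
    dy = patch_nodes[:, 1] - node[1]
    # quadratic basis 1, x, y, x^2, xy, y^2 in coordinates centered at the node
    A = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx * dy, dy**2])
    coeff, *_ = np.linalg.lstsq(A, patch_values, rcond=None)
    # the gradient of the fitted polynomial at the node is (coeff[1], coeff[2]);
    # for data sampled from a quadratic this reproduces the exact gradient
    return np.array([coeff[1], coeff[2]])

# tiny check on u(x, y) = x^2 + x*y around the node (0.5, 0.5)
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(12, 2))
vals = pts[:, 0]**2 + pts[:, 0] * pts[:, 1]
print(ppr_gradient_2d((0.5, 0.5), pts, vals))   # close to (1.5, 0.5)
```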
on the other hand , the generalized scheme for gradient recovery with surface elements gives the most competitive results in , including several other methods ,their superconvergence are proved with the assumption that the local patch is on the discretized surfaces , which is restrictive in applications . in the planar case, the condition is also asked for the superconvergence by these methods which have been generalized to a surface setting in , but it is not necessary for the ppr method .this difference gives us the motivation to generalize the ppr method for problems with data defined on manifolds .a follow - up question would be what are the polynomials in the domains of curved manifolds .using the idea from the literature , e.g. , one could consider polynomials locally on the tangent spaces of the manifolds .obviously , a direct generalization of ppr to a manifold setting based on tangent spaces will again fall into the awkward situation : the exact manifold and its tangent spaces are unknown . to overcome these difficulties , we go back to the original definition of a manifold .we take the manifold as patches locally parametrized by euclidean planar domains , but not necessarily by their tangent spaces .this has no interruption for us to define patch - wise polynomials in such planar parameter domains . in this manner, we are able to recover the unknown surfaces from the given sampling points in these local domains , as well as the finite element solutions iso - parametrically .our proposed method is thus called parametric polynomial preserving recovery ( pppr ) which _ does not _rely on the symmetric condition for the superconvergence , just like its genetic father ppr . to this end, it will be revealed that pppr is particularly useful to _ address the issue of unavailable tangent spaces _ , and thus it enables us to solve the open issues in .another benefit of the pppr method for data on a surface is that it is relatively _ curvature stable _ in comparing with the methods proposed in .this is verified by our numerical examples , but a quantitative analysis will be open in the paper . moreover , the original ppr method does not preserve the function values at the nodal points in its pre - recovery step . in this paper , we provide an alternative method which can achieve this goal . with this option , the pppr can not only preserve _ parametric polynomial _ , but also preserve the _ surface sampling points _ and the _ function values _ at the given points simultaneously .that means the given data is invariant during the recovery by using the pppr method .the rest of the paper is organized as follows : section [ sec : background ] gives a preliminary account on relevant differential geometry concepts and an analytic pde problem .section [ sec : spaces ] introduces discretized function spaces and collects some geometric notations used in the paper .section [ sec : pppr ] presents the new algorithms especially the pppr for gradient recovery on manifolds .there we make remarks on the comparison of algorithms and the idea of preserving function values , and provide an argument for its curvature stable property .section [ sec : analysis ] gives a brief analysis of the superconvergence properties of the proposed method .section [ sec : estimator ] analyze the recovery - based _ a posteriori _ estimator using the new gradient recovery operators . 
finally , we present some numerical results and the comparisons with existing methods in section [ sec : numerics ] .we have a proof of a basic lemma in appendix [ appendix ] .we will only show some basic concepts which are relevant to our paper . for a more general overview on the topic of riemannian geometry or differential geometry, one could refer to for instance . in the context of the paper, we shall consider as an oriented , connected , smooth and compact riemannian manifold without boundary , where denotes the riemann metric tensor .the idea we are going to work should be no restriction for general dimensional manifolds , but we will focus on the case of two dimensional ones , which are also called surfaces , in the later applications and numerical examples .our concerns are some quantities which are scalar functions defined on manifolds .first , let us mention the differentiation of a function in a manifold setting , which is called covariant derivatives in general .it is defined as the directional derivatives of the function along an arbitrarily selected path on the manifold here is a tangential vector field .the gradient then is an operator such that it is not harm to think of the gradient as a tangent vector field on the manifold . in a local coordinate, the gradient has the form where is the entries of the inverse of the metric tensor , and denotes the tangential basis .let be a local geometric mapping , then we can rewrite into a matrix form with this local parametrization , that is in , is the pull back of function to the local planar parameter domain , denotes the gradient on the planar domain , is the jacobian matrix of , and on this patch .[ rem : surface_gradient ] is not specified here , and we will make it clear when it becomes necessary later .we actually have a relation that where denotes the moore - penrose inverse of .see ( * ? ? ?* appendix ) for a detailed explanation .note that the parametrization map is not unique , typical ones can be constructed through function graphs which will be used in our later algorithms .we have the following lemma of which the proof is given in appendix [ appendix ] .[ lem : invariant ] the gradient is invariant under different chosen of regular isomorphic parametrization functions . let be the volume form on , and be the tangential bases . for every tangent vector field , , we have a form defined by the interior product of and the volume form through the following way where are indexes with taking out from .the divergence of the vector field then satisfies where denotes the exterior derivative . since both the left hand side and the right hand side of are forms, is a scalar field . using the local coordinates ,we can write the volume form explicitly applying equation , the divergence of the vector field can be computed by it is revealed that the divergence operator is actually the dual of the gradient operator . with the above preparation, we can now given the definition of the laplace - beltrami operator , which is denoted by in our paper , as the divergence of the gradient , that is we mention that if the manifold is a hyper - surface , that is which has co - dimension .the gradient and divergence of the function can be equally calculated through projecting the gradient and divergence of an extended function in ambient space to the tangent spaces of respectively .this type of definitions has been applied in many references which consider problems in an ambient space setting , i.e. 
where and are the extended scalar and vector fields defined in the ambient space of the hypersurface , which satisfies and for all .note that is the gradient operator defined in the ambient euclidean space , is the tangential projection operator and is a unit normal vector field of . it can be showed that the gradient and divergence by projections are independent of the way of the extension of the scalar or vector fields , and they are equivalent to the former definitions . with the generalized notions of the differentiation on manifolds , the function spaces based on manifold domains can be studied analogously to euclidean domains .sobolev spaces on manifolds are one of the mostly investigated spaces , which provide a breeding ground to study pdes .we are interested in numerically approximating pdes of which the solutions are defined on . even though our methods are _ problem independent _ , in this paper , our analysis will be mainly based on the laplace - beltrami operator , and its generated pdes . for the purpose of both analysis and applications, we consider an exemplary problem the _ laplace - beltrami equation _ , that is for a given satisfying to solve the equation where denotes the manifold volume measure .the discretization of a smooth manifold has been widely studied in many settings , especially in terms of surfaces .a discretized surface , in most cases , is a piecewise polynomial surface .one of the most simple case is the polygonal approximation to a given smooth surface , especially with triangulations .finite element methods for triangulated meshes on surfaces have firstly been studied in by using linear elements . in , a generalization of to high order finite element methodis proposed based on triangulated surfaces . in order to have an optimal convergence rates , it is showed that the geometric approximation error and the function approximation error has to be compatible with each other . in fact, the balance of geometric approximation errors and function approximation errors is also the key point in the development of our recovery algorithm . in this paper, we will denote the triangulated surface , where is the set of triangles , and is the maximum diameter .we restrict ourselves to the first order finite element methods , thus the nodes consist of simply the vertices of , which we denote by . in the following , we define transform operators between the function spaces on and function spaces on , where denotes some perturbation of . and its inverse where is a continuous and bijective projection map from to .we have the following lemma with triangulated approximation .[ lem : transform ] for , the transform operators are uniformly bounded between the spaces and as long as the space is compatible with the smoothness of . 
for every , denote . each triangular face of corresponds to a curved triangular face on , and we denote it as . if , every function and its derivatives are uniformly bounded on , then we can always find constants and such that for , using the results in , we have the equivalence of and . that is , there exist positive and uniformly bounded constants and , such that holds on each pair of the triangular faces . since we have , then the estimate , which gives the conclusion . [ rem : transform ] the statement of lemma [ lem : transform ] holds also for higher order continuous piece - wise polynomial approximations of . we give here an assumption on the triangulations of surfaces , which is a common condition to have the so - called supercloseness . [ ass : irregular ] is a quasi - uniform and shape regular triangulation of , and it satisfies the irregular condition ( cf . ( * ? ? ? * definition 2.4 ) , or ( * ? ? ? * definition 3.2 ) ) . for convenience , table [ tab : geometry ] collects some notations used in the paper . in the example , we consider a benchmark problem for the adaptive finite element method for the laplace - beltrami equation on the sphere . we choose the right hand side function such that the exact solution in spherical coordinates is given by ; in case of , it is easy to see that the solution has two singularity points at the north and south poles and the solution is barely in . in fact , . to obtain the optimal convergence rate , the adaptive finite element method ( afem ) is used . different from existing methods in the literature , a recovery - based _ a posteriori _ error estimator is adopted . we start with the initial mesh given as in fig [ fig : sphere_init ] . the mesh is adaptively refined using the dorfler marking strategy with parameter equal to . fig [ fig : sphere_adaptive ] plots the mesh after the 18 adaptive refinement steps . clearly , the mesh successfully resolves the singularities . the numerical errors are displayed in fig [ fig : sphere_err ] . as expected , the optimal convergence rate for the error can be observed . in addition , we observe that the recovery is superconvergent to the exact gradient at a rate of . to test the performance of our new recovery - based _ a posteriori _ error estimator for the laplace - beltrami problem , the effectivity index is used to measure the quality of an error estimator , which is defined by the ratio between the estimated error and the true error . the effectivity index is plotted in fig [ fig : sphere_idx ] . we see that converges asymptotically to , which indicates that the _ a posteriori _ error estimator ( or ) is asymptotically exact . in this example , we consider the following laplace - beltrami type equation on the dziuk surface as in : where . is chosen to fit the exact solution . note that the solution has an exponential peak .
to track this phenomena ,we adopt afem with an initial mesh graphed in fig [ fig : dziuk_init ] .fig [ fig : dziuk_adaptive ] shows the adaptive refined mesh .we would like to point out that the mesh is refined not only around the exponential peak but also at the high curvature areas .fig [ fig : dziuk_err ] displays the numerical errors .it demonstrates the optimal convergence rate in norm and a superconvergence rate for the recovered gradient .the effective index is showed in fig [ fig : dziuk_idx ] , which converges to 1 quickly after the first few iterations .again , it indicates the error estimator ( or ) is asymptotically exact .[ fig : dziuk_init ] [ fig : dziuk_adaptive ] [ fig : dziuk_err ] [ fig : dziuk_idx ]in this paper , we have proposed a curvature stable gradient recovery method ( pppr ) for data defined on manifolds . in comparing with existing methods for data on surfaces in the literature , cf . , the proposed method has several improvements : the first highlight is that it does not require the exact surfaces , which makes it a realistic and robust method for practical problems ; second , it does not need the element patch to be symmetric to achieve superconvergence .third , it is the most curvature stable methods in comparing with the existing methods . aside from that, we have evolved the traditional ppr method ( for planar problems ) to function value preserving at the mean time . by testing with some benchmark examples ,it is quite evident that the proposed method numerically performs better than the methods in the state of the art .we have also shown the capability of the recovery operator for constructing _ a posterior _ error estimator . even though we only develop the methods for linear finite element methods on triangulated meshes , the idea should be applicable to higher order fem on more accurate approximations of surfaces , e.g. piece - wise polynomial surfaces , b - splines or nurbus .we leave this as a potential work for future .gradient recovery has other applications , like enhancing eigenvalues , pre - processing data in image science , simplifying higher order discretization of pdes , or even designing new numerical methods for higher order pdes , and so on .it would also be interesting to further investigate the full usage of the pppr method for problems with solutions defined on a manifold domain .the authors thank dr .pravin madhavan , dr .bjorn stinner and dr .andreas dedner for their kind help and discussions on a numerical example .gd acknowledges support from the austrian science fund ( fwf ) : geometry and simulation , project s11704 .hg acknowledges support from the center for scientific computing from the cnsi , mrl : an nsf mrsec ( dmr-1121053 ) and nsf cns-0960316 for providing computing resources .in general , there are infinitely many isomorphic parametrizations for a given patch .let us pick arbitrarily two of them , which are denoted by respectively , where and are planar parameter domains , then there exist to be a bijective , differentiable mapping , such that .that means for an arbitrary but fixed position , we have and , such that then we have and consequently , for every function we have using chain rule on both sides of the former equation of , then we get which gives the latter equation in since is non - degenerate . using the same process but consider , we can show the reverse implication .thus , we have shown that any two arbitrary parameterizations and lead to the same gradient values at same positions . 
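the invariance argued in the lemma above can also be checked numerically . the sketch below evaluates the surface gradient through the parametric formula ( the jacobian applied to the moore - penrose inverse acting on the gradient of the pulled - back function , cf . remark [ rem : surface_gradient ] ) for two different parametrizations of the same graph patch and compares the results ; the particular test surface , test function and reparametrization are arbitrary choices made only for illustration .

```python
import numpy as np

def surface_gradient(param, u_pullback, s, t, h=1e-6):
    """surface gradient of u at the point param(s, t), evaluated through the
    parametric formula  J (J^T J)^{-1} grad_(s,t)(u o param),
    with all derivatives approximated by central differences."""
    # jacobian of the parametrization, shape (3, 2)
    J = np.column_stack([
        (param(s + h, t) - param(s - h, t)) / (2 * h),
        (param(s, t + h) - param(s, t - h)) / (2 * h),
    ])
    # planar gradient of the pulled-back function, shape (2,)
    g = np.array([
        (u_pullback(s + h, t) - u_pullback(s - h, t)) / (2 * h),
        (u_pullback(s, t + h) - u_pullback(s, t - h)) / (2 * h),
    ])
    return J @ np.linalg.solve(J.T @ J, g)    # J (J^T J)^{-1} g

# test surface: the graph patch z = x^2 + y^2, with u(x, y, z) = x + z on it
def ambient_u(point):
    return point[0] + point[2]

# parametrization 1: directly by (x, y); parametrization 2: a sheared
# reparametrization of the very same patch
param1 = lambda s, t: np.array([s, t, s**2 + t**2])
param2 = lambda s, t: param1(s + 0.3 * t, t)

pt1 = (0.4, 0.2)                               # surface point in chart 1
pt2 = (pt1[0] - 0.3 * pt1[1], pt1[1])          # its preimage in chart 2
g1 = surface_gradient(param1, lambda s, t: ambient_u(param1(s, t)), *pt1)
g2 = surface_gradient(param2, lambda s, t: ambient_u(param2(s, t)), *pt2)
print(g1, g2)                                  # the two vectors should agree
```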
, _ best constants in the sobolev imbedding theorem : the yamabe problem _ , in seminar on differential geometry , vol . 102 of ann . of math .stud . , princeton univ . press , princeton , n.j ., 1982 , pp . 173184 . , _ finite elements for the beltrami operator on arbitrary surfaces _ , in partial differential equations and calculus of variations , vol .1357 of lecture notes in math . ,springer , berlin , 1988 , pp .142155 ., _ nonlinear analysis on manifolds : sobolev spaces and inequalities _ , vol . 5 of courant lecture notes in mathematics , new york university , courant institute of mathematical sciences , new york ; american mathematical society , providence , ri , 1999 .height 2pt depth -1.6pt width 23pt , _ the superconvergent patch recovery and a posteriori error estimates .error estimates and adaptivity _ , internat .methods engrg . , 33 ( 1992 ) ,. 13651382 . | this paper investigates gradient recovery schemes for data defined on discretized manifolds . the proposed method , parametric polynomial preserving recovery ( pppr ) , does not ask for the tangent spaces of the exact manifolds which have been assumed for some significant gradient recovery methods in the literature . another advantage of the proposed method is that it removes the symmetric requirement from the existing methods for the superconvergence . these properties make it a prime method when meshes are arbitrarily structured or generated from high curvature surfaces . as an application , we show that the recovery operator is capable of constructing an asymptotically exact posteriori error estimator . several numerical examples on 2dimensional surfaces are presented to support the theoretical results and make comparisons with methods in the state of the art , which show evidence that the pppr method outperforms the existing methods . .3 cm * ams subject classifications . * primary 65n50 , 65n30 ; secondary 65n15 , 53c99 .3 cm * key words . * gradient recovery , manifolds , superconvergence , parametric polynomial preserving , function value preserving , curvature stable . |
the astrophysically ubiquitous keplerian accretion disks should be unstable and turbulent in order to explain observed data , but are remarkably rayleigh stable .they are found in active galactic nuclei ( agns ) , around a compact object in binary systems , around newly formed stars etc .( see , e.g. , * ? ? ?the main puzzle of accreting material in disks is its inadequacy of molecular viscosity to transport them towards the central object .thus the idea of turbulence and , hence , turbulent viscosity has been proposed .similar issue is there in certain shear flows , e.g. plane couette flow , which are shown to be linearly stable for any reynolds number ( ) but in laboratory could be turbulent for as low as . therefore , linear perturbation can not induce the turbulent viscosity to transport matter inwards and angular momentum outwards , in the keplerian disks .note that the issue of linear instability of the couette - taylor flow ( when accretion disks are the subset of it ) is a century old problem .although in the presence of vertical shear and/or stratification , keplerian flow may reveal rayleigh - taylor type instability ( e.g. ) , convective overstability ( ) and the zombie vortex instability ( ) , we intend here to solve the classic century old problem of the origin of linear instability with the exponential growth of perturbation in purely hydrodynamical rayleigh - stable flows with only radial shear. the convective overstability does not correspond to an indefinitely growing mode and it has some saturation ( ) .in addition , the zombie vortex instability is not sufficient to transport angular momentum significantly in a small domain of study .in fact , all of them could exhibit only smaller shakura - sunyaev viscosity parameter ( ) than that generally required to explain observation .the robustness of our work is that , it can explain the turbulent behavior of any kind of rayleigh - stable shear flows , starting from laboratory to astrophysical flows .while many realistic non - magnetized and keplerian flows could be stratified in both the vertical and radial directions of the disks , it is perhaps impossible to prove that all the non - magnetized accretion disks have significant amount of vertical shear and/or stratification to sustain the above mentioned instabilities .note that indeed many accretion disks are geometrically thin .moreover , the laboratory taylor - couette flows have no vertical shear and/or stratification . in 1991 , with the application of magnetorotational instability ( mri ; * ? ? ?* ; * ? ? ?* ) to keplerian disks , showed that initial weak magnetic field can lead to the perturbations growing exponentially . within a few rotation times, such exponential growth could reveal the onset of turbulence .however , for charge neutral flows mri should not work .note also that for flows having strong magnetic fields , where the magnetic field is tightly coupled with the flow , mri is not expected to work ( e.g. ) .it is a long standing controversy ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?? * ; * ? ? 
?* ) , whether the matter in rayleigh stable astrophysical disks is stable or unstable .the answer has profound significance for our understanding of how stars and planets form .it is argued , however , that some types of rayleigh stable flows certainly can be destabilized .based on ` shearing sheet ' approximation , without and with explicit viscosity , some authors attempted to tackle the issue of turbulence in hot accretion disks .however , other authors argued for limitations in this work .based on the simulations including explicit viscosity , the authors could achieve and concluded that keplerian like flows could exhibit very weak turbulence in the absence of magnetic field . nevertheless , the recent experimental results by clearly argued for the significant level of transport from hydrodynamics alone .moreover , the results from direct numerical simulations and exploration of transient amplification , in otherwise linearly stable flows , with and without noise ( e.g. * ? ? ?* ; * ? ? ?* ) also argued for ( plausible ) hydrodynamic instability and turbulence at low .interestingly , accretion disks have huge ( ) , prompting to the belief that they are hydrodynamically unstable .we show here that linearly perturbed apparently rayleigh stable flows driven stochastically can be made unstable even in the absence of any magnetic field .we also argue , why stochastic noise is inevitable in such flows .they exist in the flows under consideration inherently .we develop our theory following the seminal concept based on fluctuating hydrodynamics of randomly stirred fluid , pioneered by and , which , however , was never applied in the context of accretion flows or other shear flows .this work provides a new path of linear hydrodynamic instability of shear flows , which will have vast applications from accretion disks to laboratory flows , for the first time .the plan of the paper is the following .in the next section , we introduce equations describing the system under consideration . then 3 describes the evolution of various perturbations in stochastically driven hydrodynamic flows .subsequently , we discuss the relevance of white noise in the context of shear flows in 4 . finally we summarize with conclusions in 5 . in appendix, we demonstrate in detail the generation of white noise from random walk , particularly in the present context .the linearized navier - stokes equation in the presence of background plane shear and angular velocity , when being the distance from the center of the system , in a small section approximated as incompressible flow with , has already been established . here , any length is expressed in units of the size of the system in the , the time in units of , the velocity in ( ) , and other variables are expressed accordingly ( see , e.g. 
, , for detailed description of the choice of coordinate in a small section ) .hence , in dimensionless units , the linearized navier - stokes equation and continuity equation ( for an incompressible flow ) can be recasted into the well - known orr - sommerfeld and squire equations , but in the presence of stochastic noise and coriolis force , given by where are the components of noise arising in the linearized system due to stochastic forcing such that , where is a constant for white noise and ; is the -component of velocity perturbation vector and the -component of vorticity perturbation vector .now , we can resort to a fourier series expansion of , and as where can be any one of , and ; and are the wavevector and frequency respectively in the fourier space such that and = .writing down equations ( [ hydroorrv ] ) and ( [ hydroorrzeta ] ) in fourier space by using equation ( [ fourier ] ) , and taking ensemble average , we obtain the equations involving the evolution of mean values of perturbations in the presence of noise as where fourier transformations of are basically multiplied with a random number and on ensemble average it appears to be a constant which is the mean value of the white noise ( we get and when the drift coefficient of the brownian motion or wiener process corresponding to the white noise is zero and nonzero respectively , see appendix for details ) , and , are the fourier transforms of and which are the mean or the ensemble averaged values of and respectively .now let us take the trial solutions , , where are the constant , in general complex , amplitudes of perturbation and , is a vertical wavevector ( one should not confuse this with the shakura - sunyaev viscosity parameter ) .vertical wave vector is chosen since it will be unaffected by shear .this gives ( using equation ( [ fourier ] ) ) .substituting these trial solutions in equations ( [ hydrofourierumean ] ) and ( [ hydrofourierzetamean ] ) , integrating with respect to and we obtain now eliminating and assuming we obtain the dispersion relation if we find any pair of and satisfying equation ( [ vertmeli ] ) for which the imaginary part of positive , then we can say that the mean value of perturbation is unstable .equation ( [ vertmeli ] ) is the hydrodynamic counter part of the dispersion relation obtained due to mri , leading to the avenue of pure hydrodynamic instability . for , from equations ( [ hydroudispersion ] ) and ( [ hydrozetadispersion ] ) , either and both turn out to be zero or there is no instability for non - trivial and .overall , gives rise to stable solutions like the zero magnetic field for mri .figure [ vertm ] shows the ranges of giving rise to linear instability .it is easy to understand that similar results could be obtained with the choice of unequal ensemble averages of white noise in equations ( [ hydroudispersion ] ) and ( [ hydrozetadispersion ] ) and a more general phase difference between and . and the imaginary part of , for vertical perturbation in case i with .we consider , however we obtain almost the same results for other admissible values of . ] 1.5 cm now for a given and , after eliminating from equations ( [ hydroudispersion ] ) and ( [ hydrozetadispersion ] ) , we obtain a dispersion relation between and as which is second order in and hence has two roots and .if we find any pair of and for which the imaginary part of positive , then we can say that the mean value of perturbation is unstable . 
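in practice , the instability criterion stated above reduces to scanning the roots of a low - order polynomial dispersion relation for a positive imaginary part . since the explicit coefficients of equations ( [ vertmeli ] ) and ( [ dispersion ] ) are not reproduced here , the sketch below only illustrates the numerical procedure with a placeholder coefficient function ; the actual coefficients , built from the reynolds number , rotation parameter , wavevector and noise mean , would have to be substituted , and the sign convention for a growing mode follows the statement above and is itself an assumption .

```python
import numpy as np

def unstable_modes(coeffs, wavenumbers, sign=+1):
    """scan a polynomial dispersion relation for growing modes.

    coeffs(k) must return the polynomial coefficients (highest degree first)
    of the dispersion relation in the frequency at wavenumber k, e.g. the
    quadratic relation of eq. (dispersion).  a root with sign * imag > 0 is
    flagged as unstable, following the paper's convention that a positive
    imaginary part of the frequency signals growth of the mean perturbation.
    """
    growing = []
    for k in wavenumbers:
        for root in np.roots(coeffs(k)):
            if sign * root.imag > 1e-12:
                growing.append((k, root))
    return growing

# placeholder coefficient function -- purely hypothetical numbers, shown only
# to illustrate the interface; it is NOT the dispersion relation of the paper
example_coeffs = lambda k: [1.0, 0.5j * k, -0.01 + 0.002j * k**2]
for k, w in unstable_modes(example_coeffs, np.linspace(0.1, 10.0, 5)):
    print(f"k = {k:5.2f}  growing root = {w.real:+.4f} {w.imag:+.4f}j")
```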
for in equation ( [ dispersion ] ) , there is no instability , like the zero magnetic field for mri .@ and the imaginary part of one of the solutions of , for vertical perturbation in case ii .we consider , however we obtain almost the same results for other admissible values of . other solution of is stable.,title="fig : " ] and the imaginary part of one of the solutions of , for vertical perturbation in case ii .we consider , however we obtain almost the same results for other admissible values of .other solution of is stable.,title="fig : " ] + + + + and the imaginary part of one of the solutions of , for vertical perturbation in case ii .we consider , however we obtain almost the same results for other admissible values of . other solution of is stable.,title="fig : " ] and the imaginary part of one of the solutions of , for vertical perturbation in case ii .we consider , however we obtain almost the same results for other admissible values of .other solution of is stable.,title="fig : " ] 1.5 cm in fig . [ unstable ] , for and different values of above a certain value , we show that for keplerian flows , there are modes for which the mean values of perturbation are unstable . if the amplitude of perturbations decreases , the value of increases for any fixed nonzero , leading to a larger range of for instability .however for , i.e. for the white noise with zero mean ( which also corresponds to the hydrodynamic accretion flows without any noise ) , we obtain no such unstable modes .while modes are stable for smaller , with the increase of they become unstable and range of giving rise to instability increases with increasing and for unstable modes arise all the way upto . [cols="^,^ " , ] in fig .[ unstablemsp ] , we show how spherical perturbation modes in keplerian flows vary with .this is very similar to as shown in fig .[ unstablem ] for vertical perturbations , except that the modes are stable for a very small but non - zero , while for vertical perturbation the modes remain unstable for . for plane couette flows , making in equation ( [ dispersionsp ] ), we obtain the corresponding dispersion relation as while the first root always corresponds to the stable mode for a real , the second one will lead to the unstable solution for a negative satisfying . and imaginary part of one of the solutions of , for spherical perturbation in case ii , where .it shows the unstable modes for negative ( drift velocity).,width=321 ] figure [ negmsp ] shows that for spherical perturbations , the keplerian flows remain unstable upto , as shown in fig .[ negm ] for vertical perturbation cases .now we shall discuss that how relevant and how likely the white noise is to be present in shear flows .the rayleigh stable flows under consideration have a background shear profile , with some molecular viscosity however small that may be , and hence some drag ( e.g. , in protoplanetary disks , it could be due to the drag between gas and solid particles ) . for plane couette flow , such shear is driven in the fluids by moving the boundary walls by externally applied force . if the external force ( cause ) is switched off , the shearing motion ( effect ) dies out .similarly , in accretion disks , the central gravitational force plays the role of driving force ( cause ) producing differential velocity ( shear ) in the flow .hence , by fluctuation - dissipation theorem of statistical mechanics ( see , e.g. , * ? ? ?* ; * ? ? 
?* ) , there must be some thermal fluctuations in such flows , with some temperature however low it may be , which cause the fluid particles to have brownian motion . therefore the time variation ( derivative ) of this brownian motion , which is defined as white noise , plays the role of an extra stochastic forcing term in the orr - sommerfeld equations ( equations ( [ hydroorrv ] ) , ( [ hydroorrzeta ] ) ) which is present generically , in particular when perturbation is considered . now , due to the presence of background shear in some preferential direction , it is very likely for the fluid particles to have brownian motion with nonzero drift , however small it may be . the detailed technical description of the generation of white noise ( with zero and nonzero mean ) from brownian motion has been included in the appendix . therefore , if is the random displacement variable of a brownian motion with drift coefficient , its probability density function can be written as , where is the standard deviation of the distribution and the time . taking the stochastic time derivative of , we obtain the white noise process which we denote by ( ) . since the stochastic variable is not differentiable in the usual sense , we consider a finite difference approximation of using a time interval of width as . therefore , the presence of infinitesimal molecular viscosity ( and shear ) , which is there always , would be enough just to give rise to a nonzero ( infinitesimal ) temperature , leading to thermal noise which can do the rest of the job of governing instability . note that a very tiny mean noise strength , due to a tiny asymmetry in the system , is enough to lead to linear instability , as demonstrated in previous sections . here , the externally applied force ( for plane couette flow ) or the force arising due to the presence of a strongly gravitating object ( accretion disk ) introduces the asymmetry in the system , just like , e.g. , the brownian ratchets which have several applications in soft condensed matter and biology ( see , e.g. , ) . the measure of asymmetry and drag determines the value of , which furthermore controls the growth rate of perturbation . the corresponding power spectrum appears to be almost flat / constant ( for ideal white noise it is purely flat ) . although in our chosen shearing box the azimuthal direction is assumed to be periodic , every such small box always encounters drag and hence thermal fluctuation , which assures the presence of nonzero mean noise . as a result , every such shearing box reveals exponential growth of perturbation .
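the claim that the finite - difference derivative of a drifted brownian motion behaves as a ( nearly ) white noise with nonzero mean is easy to check numerically . the python sketch below generates such a noise and verifies that its sample mean is close to the drift , its variance close to the inverse time step times the squared diffusion amplitude , and its power spectrum approximately flat away from the zero - frequency peak contributed by the mean ; all parameter values are arbitrary illustrations .

```python
import numpy as np

rng = np.random.default_rng(1)

m, sigma, dt, n = 0.5, 1.0, 1e-3, 200_000        # illustrative values only

# brownian motion with drift: dX = m dt + sigma dW
increments = m * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.concatenate([[0.0], np.cumsum(increments)])

# finite-difference "white noise"  eta(t) ~ (X(t + dt) - X(t)) / dt
eta = np.diff(x) / dt
print("sample mean     :", eta.mean(), " (drift m =", m, ")")
print("sample variance :", eta.var(), " (sigma^2 / dt =", sigma**2 / dt, ")")

# power spectrum: roughly flat at nonzero frequencies, with a peak at zero
# frequency coming from the nonzero mean of the noise
spectrum = np.abs(np.fft.rfft(eta)) ** 2 / n
print("dc component                   :", spectrum[0])
print("typical nonzero-frequency level:", np.median(spectrum[1:]))
```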
due to background shear and hencedrag , thermal fluctuations arise in these flows which induce brownian motion of the fluid particles and hence stochastic forcing by white noise .therefore the accretion flows , in particular due to perturbation , are inevitably driven by white noise which can not be neglected .it is indeed shown in experiments that the stochastic details decide whether turbulence will spread or eventually decay , which furthermore argues for the determining factor played by stochastic forcing , which we demonstrate here for the first time .since the forcing term in this system is a random variable , the solutions of the perturbations are also random variables and hence have some distributions whose averaged values are investigated . hence , we have shown that even in the absence of magnetic field , accretion disks can be made unstable and plausibly turbulent if they are driven by stochastic noise which is very likely to be present in the disks due to thermal fluctuations .in fact , we argue that neglecting the stochastic noise in accretion flows and any other shear flows is vastly an inappropriate assumption .this is because , some shear is always there ( because those are always driven externally by definition ) , which leads to some temperature ( however be the magnitude ) and a small temperature is enough to reveal stochastic noise , which is the basic building block of our work .hence , the presence of ( asymmetric ) drag and stochastic noise in shearing flows is inherent .hence , this work inevitably presents the origin of pure hydrodynamic instability of rotating shear flows and plane couette flow .therefore , this sheds enormous light on to the understanding of formation of planets and stars .evidently this mechanism works for magnetized shear flows as well , because thermal fluctuations are available there also .for example , a background field of the order of unity with can easily lead to unstable modes of perturbation for in the limit of very large and which is the case in accretion disks . in future, we will report this result in detail .indeed , earlier we studied stochastically driven magnetized flows and showed them to be plausibly unstable and turbulent by calculating the correlation functions of perturbations ( ) .hence the pure hydrodynamic instability explored here is generic .this is , to the best of our knowledge , the first solution to the century old problem of hydrodynamic instability of apparently rayleigh stable flows . in due courses , one has to investigate how exactly the required value of stochastic forcing strength could be arised in real systems and if the growth rates of unstable modes could adequately explain data . in certain cases ,only high reveals instability which might be difficult to achieve in laboratory experiments and numerical simulations as of now .we have assumed here that the white noise has a nonzero mean value .the term _ white noise _ is ambiguous . to shed light on this matter , herewe point out the two definitions of white noise . defines it as + + . . .a stationary random process having a constant spectral density function .. + + defines it as + + we shall say that a process is white noise if its values and are uncorrelated for every and : .. + + the following subsections explore the implications of each definition with respect to the mean of the resulting process .let is an _ ergodic stochastic process _ with the property that it has a constant power spectral density , i.e. 
where is the power spectral density of the random variable and is a constant .then the corresponding autocorrelation function for the process is =\phi_{xx}(\tau)=\alpha \delta(\tau ) , % ~{\rm by~taking~inverse~fourier } \\ % \nonumber \label{autocorr}\end{aligned}\ ] ] by taking inverse fourier transform of , where $ ] denotes the expectation value .now let us assume that is a zero mean white noise process and is a nonzero mean process. then =e[(x(t)+m)(x(t+\tau)+m)]=\alpha\delta(\tau)+m^2 .\\ % \label{brownian}\end{aligned}\ ] ] therefore which is not constant , thus violates the requirement of the white noise process by this definition . representing the papoulis definition of white noise in our notations ,we can write , a stochastic process is called a white noise process if any two distinct random variables of this stochastic process are independent and uncorrelated , i.e. , the autocovariance function when . in mathematical notation , \\\nonumber = e[x(t)]e[x(t+\tau)]-m_tm_{t+\tau } \\ = m_tm_{t+\tau}-m_tm_{t+\tau}=0 , \label{cov}\end{aligned}\ ] ] where and are the corresponding mean values of the random variables and respectively .we can write the second equality in equation ( [ cov ] ) since , for different values of , are independent random variables by definition ._ thus it is not necessary that a white noise process always has to have a zero mean , from papoulis definition_. that is , a stochastic process having nonzero mean can be a white noise process according to this definition ._ in the present work , we have used papoulis definition of white noise which can indeed have a non - zero mean_. now let us explain why we have chosen papoulis definition over brown s definition .let us consider a signal with constant power spectral density .that is , , \label{corrf}\end{aligned}\ ] ] and the fourier transform of is a constant . nowthe _ parseval s theorem _ tells us that where is the fourier transform of . since andconsequently has a constant positive value according to brown s definition of white noise , the equation ( [ energyf ] ) tells us that the total power of the signal is infinity ( see , e.g. , ) . in mathematical terminology, the energy norm of the signal is infinity and hence the function is not -integrable .therefore , driving a system by a stochastic noise with constant power spectral density is same as injecting infinite amount of energy into the system , which is unphysical . in this section ,we outline the derivation of white noise starting from the random walk , via brownian motion .1.5 cm figure [ rwdiagram ] shows an array of positions where etc .and is the spacing between points . 
at each interval of time , ,a hop is made with probability to the right and to the left .the distribution of , of hops to the right , in steps is given by the bernoulli distribution the first moment ( mean ) and the second moment ( variance ) of the bernoulli distribution in equation ( [ bar ] ) is given by a particle that started at and took steps to the right and steps to the left arrives at the position with mean value notice that , if , or equal probability to jump to the right and the left , the average position after steps will remain .the second moment about the mean is given by therefore , from the central limit theorem , the limiting distribution after many steps is gaussian , with the first and second moments just obtained ( in equations ( [ meanposition ] ) and ( [ varposition ] ) ) , given by ^ 2}{8npq}\right\rbrace .\label{distposition}\end{aligned}\ ] ] if we introduce the position and time variables by the relations the moments of are given by the factor in the definition of diffusion coefficient is appropriate for one dimension , and would be replaced by if we consider the random walk in a space of dimension . thus the distribution moves with a driftvelocity and spreads with a diffusion coefficient defined by thus the probability distribution of the displacement of a particle under this random walk is , \label{distdisp}\end{aligned}\ ] ] where . a stochastic process in which the random variables are stationary and independent and have distribution as in equation ( [ distdisp ] ) , is called a brownian motion or wiener process .it is very clear from the equation ( [ driftvel ] ) that , when , then the drift velocity is , which means if some random walk is fully symmetric without any bias , then only we obtain the zero drift velocity of the corresponding brownian motion ( which is known as standard brownian motion in literature ) .however , if some process has any asymmetry ( for example hydrodynamic flows with shear in a particular direction , bulk hydrodynamic flows , flows under gravity etc . ) , the random walk of particles in that process will have some bias ( i.e. ) , which eventually introduces a brownian motion with _ nonzero drift velocity_. if we take stochastic time derivative of a brownian motion or wiener process , we obtain a _ white noise _ process . if is the random displacement variable of a brownian motion with drift velocity , its probability density function can be written as ( using equation ( [ distdisp ] ) ) , \label{browniansupp}\end{aligned}\ ] ] where is the standard deviation of the distribution and the time . taking the stochastic time derivative of , we obtain the white noise process which we denote by ( ) .since the stochastic variable is not differentiable in the usual sense , we consider a finite difference approximation of using a time interval of width as since the stochastic random variables corresponding to a brownian motion process are stationary and independent , from equations ( [ browniansupp ] ) and ( [ whitenoisesupp ] ) we obtain that the white noise process has mean / averaged value and variance . as , the variance , and this white noise tends to the ideal white noise having a constant power spectral density .however , since brownian motion is not differentiable anywhere , the ideal white noise does not exist , as also explained above from the energy norm point of view .now we will show that the white noise defined in equation ( [ whitenoisesupp ] ) satisfies the papoulis definition of white noise , i.e. 
, the process is an uncorrelated stochastic process . to establish this ,let us first note that if and are two random variables of a brownian motion with , then \\\nonumber = e[\lbrace(x(t)-mt)-(x(s)-ms)+(x(s)-ms)\rbrace ( x(s)-ms ) ] \\ \nonumber = e[\lbrace(x(t)-x(s))-(mt - ms)\rbrace(x(s)-ms)]+e[(x(s)-ms)^2 ] \\ = 0+\sigma^2 s=\sigma^2 { \rm min}\lbrace t , s\rbrace . \label{browncov}\end{aligned}\ ] ] the third equality is possible since and are independent random variables for a brownian motion . having the result of equation ( [ browncov ] ) in hand, we now calculate the autocovariance of white noise .it is very easy to verify that the autocovariance function of two random variables and is a linear function in both of its arguments .therefore , \\ \nonumber = \frac{1}{\delta t^2 } \left[c(x(t+\delta t),x(s+\delta t))-c(x(t+\delta t),x(s ) ) \right .\\ \left .-c(x(t),x(s+\delta t))+c(x(t),x(s))\right ] .\label{whitecov}\end{aligned}\ ] ] when , i.e. , then using equation ( [ browncov ] ) , from equation ( [ whitecov ] ) we obtain now let us consider the cases when , i.e. when or . for , equations ( [ browncov ] ) and ( [ whitecov ] ) imply and also for , therefore , i.e. and are uncorrelated .let us define @ for two small values of .,title="fig : " ] + + for two small values of .,title="fig : " ] + + figure [ twodelta ] shows the variation of for two different small values of .the function defined in equation ( [ whitecovdelta ] ) is an approximation of the well known delta function , because as it is seen from fig .also the function satisfies the integral property of the delta function as shown below , hence , when , from equation ( [ whitecovfinal ] ) we obtain therefore , the noise with nonzero mean , obtained from the stochastic time derivative of brownian motion with nonzero drift , is a white noise process according to the papoulis definition and also has the correlators as defined below equation ( [ hydroorrzeta ] ) .the authors acknowledge partial support through research grant no . istc / pph / bmp/0362 .the authors thank amit bhattacharjee of iisc and debashish chowdhury of iit kanpur for discussions related to possible nonzero mean of white noise and the brownian ratchets .thanks are also due to the anonymous referee and ramesh narayan of harvard for suggestions which have helped to improve the presentation of the paper .afshordi , n. , mukhopadhyay , b. , & narayan , r. 2005 , , 629 , 373 avila , m. 2012 , phys .lett . , 108 , 124501 avila , m. _ et al ._ 2011 , science , 333 , 192 balbus , s. a. 2011 , nature , 470 , 475 balbus , s. a. , & hawley , j. f. 1991 , , 376 , 214 balbus , s. a. , hawley , j. f. , & stone , j. m. 1996 , , 467 , 76 barker , a. j. , & latter , h. n. 2015 , mnras , 450 , 21 barkley d. _ et al ._ 2015 , nature , 526 , 550 bottin , s. , & chat , h. 1998 , eur .j. b , 6 , 143 brown , r. g. 1983 , _ introduction to random signal analysis and kalman filtering ._ , john wiley and sons cantwell , c. d. , barkley , d. , & blackburn , h. m. 2010 , phys .flud . , 22 , 034101 chandrasekhar , s. 1960 , proc . nat . acad ., 46 , 53 dauchot , o. , & daviaud , f. 1995 , phys .fluids , 7 , 335 de dominicis , c. , & martin , p. c. 1979 , physa , 19 , 419 dubrulle , b. , dauchot , o. , daviaud , f. , longaretti , p. -y . ,richard , d. , & zahn , j. -p .2005 , phys .fluids , 17 , 095103 dubrulle , b. , marie , l. , normand , c. , hersant , f. , richard , d. , & zahn , j. -p .2005 , a&a , 429 , 1 forster , d. , nelson , d. r. , & stephen , m. j. 1977 , phys . 
rev .a , 16 , 732 fromang , s. , & papaloizou , j. 2007 , a&a , 476 , 1113 gardiner , c. w. 1985 , _ handbook of stochastic methods for physics , chemistry and natural sciences _ , 2nd edition , springer gu , p. -g . ,vishniac , e. t. , & cannizzo , j. k. 2000 , apj , 534 , 380 hawley , j. f. , balbus , s. a. , & winters , w. f. 1999 , , 518 , 394 kliemann , w. , & namachchivaya , s. 1995 , _ nonlinear dynamics and stochastic mechanics _ , crc press kim , w. -t . , & ostriker , e. c. 2000 , apj , 540 , 372 klahr , h. h. , & bodenheimer , p. 2003, , 582 , 869 klahr , h. , & hubbard , a. 2014 , , 788 , 21 kuo , h. h. 1996 , _ white noise distribution theory _ , crc press latter , h. n. 2016 , mnras , 455 , 2608 lesur , g. , & longaretti , p. -y .2005 , a&a , 444 , 25 lin , m .- k . , & youdin , a. n. 2015 , , 811 , 17 luki , b. , jeney , s. , tischer , c. , kulik , a. j. , forr , l. , & florin , e .-2005 , phys .lett . , 95 , 160601 lyra , w. 2014 , , 789 , 77 mahajan , s. m. , & krishan , v. 2008 , , 682 , 602 marcus , p. s. , pei , s. , jiang , c .- h . , & barranco , j. a. 2015 , , 808 , 87 marcus , p. s. , pei , s. , jiang , c .- h . , & hassanzadeh , p. 2013 , phys ., 111 , 084501 miyazaki , k. , & bedeaux , d. 1995 , physica a , 217 , 53 mukhopadhyay , b. 2013 , phys .b , 721 , 151 mukhopadhyay , b. , afshordi , n. , & narayan , r. 2005 , , 629 , 383 mukhopadhyay , b. , & chattopadhyay , a. k. 2013 , j. phys . a , 46 , 035501 mukhopadhyay , b. , mathew , r. , & raha , s. 2011 , njph , 13 , 023029 nath , s. k. , & chattopadhyay a. .k .2014 , phys .e , 90 , 063014 nath , s. k. , & mukhopadhyay , b. 2015 , phys .e , 92 , 023005 nath , s. k. , mukhopadhyay , b. , & chattopadhyay a. .k .2013 , phys .e , 88 , 013010 nelson , r. p. , gressel , o. , & umurhan , o. m. 2013 , mnras , 435 , 2610 paoletti , m. s. , van gils , d. p. m. , dubrulle , b. , sun , c. , lohse , d. , & lathrop , d. p. 2012, a&a , 547 , a64 papoulis , a. 1991 , _ probability , random variables , and stochastic processes , 3rd ed ._ , wcb / mcgraw - hill poor , h. v. 2013 , _ an introduction to signal detection and estimation _ , springer science & business media pringle , j. e. 1981 , , 19 , 137 pumir , a. 1996 , phys .fluids , 8 , 3112 richard , s. , nelson , r. p. , & umurhan , o. m. 2016 , mnras , 456 , 3571 richard , d. , & zahn , j. -p . 1999 ,a&a , 347 , 734 rudiger , g. , & zhang , y. 2001 , a&a , 378 , 302 shakura , n. i. , & sunyaev , r. a. 1973 , a&a , 24 , 337 stoll , m. h. r. , & kley , w. 2014 , a&a , 572 , a77 stoll , m. h. r. , & kley , w. 2016 , arxiv:1607.02322 trefethen , l. n. , trefethen , a. e. , reddy , s. c. , & driscoll , t. a. 1993 , science , 261 , 578 van oudenaarden , a. , & boxer , s. g. 1999 , science , 85 , 1046 umurhan , o. m. , nelson , r. p. , & gressel , o. 2016 , a&a , 586 , a33 velikhov , e. 1959 , j. exp . theor ., 36 , 1398 yecko , p. a. 2004 , , 425 , 385 zhong , w. x. 2006 , _ duality system in applied mechanics and optimal control _ , springer science & business media | we provide the possible resolution for the century old problem of hydrodynamic shear flows , which are apparently stable in linear analysis but shown to be turbulent in astrophysically observed data and experiments . this mismatch is noticed in a variety of systems , from laboratory to astrophysical flows . there are so many uncountable attempts made so far to resolve this mismatch , beginning with the early work of kelvin , rayleigh , and reynolds towards the end of the nineteenth century . 
here we show that stochastic noise , whose inevitable presence should not be neglected in the stability analysis of shear flows , leads to a purely hydrodynamic linear instability in such flows . this explains the origin of the turbulence that has been observed or inferred in astrophysical accretion disks , laboratory experiments and direct numerical simulations . this is , to the best of our knowledge , the first solution to the long - standing problem of the hydrodynamic instability of rayleigh - stable flows . |
a growing tree - like network can model different processes such as a technological or biological systems represented by a set of nodes , where each element in the network can create new elements .innovation and discovery , artistic expression and culture , language structures and the evolution of life can naturally be represented by a branching process in a tree describing a wide range of real - life processes and phenomena .the general branching process is defined mathematicaly as a set of objects ( nodes ) that do not interact and , at each time step , each object can give rise to new objects .in contrast , interacting branching processes are much more interesting and difficult for analysis . a generalized tree with one ( or more ) ancestor(s )have been used to depict evolutionary relationships between interacting nodes such as genes , species , cultures . besides the interaction among nodes, one can consider spatially embedded nodes .the evolution of networks embedded in metric spaces have been attracted much attention . in this workwe study the evolution of a population , i.e. , the number of nodes in the network , influenced by the interaction among existing nodes and confined to a limited area , representing a competition of individuals for resources .we assume that the growing tree is embedded in a metric space and we consider that spatially close nodes , previously placed in the network , will suppress their ability to born new nodes . in other words , overcrowding of nodes will drain the resources and supress the offspring . in our modeleach node lives for three generations .the evolution of the population of nodes is actually determined by two parameters : the minimum distance between any pair of nodes , and the area in which the network is embedded , namely the linear size of the area , . for simplicity , we assume that this area does not change in time .the population evolves in two different regimes . at the initial generations ( time steps ), one can see an exponential evolution , followed by a saturation regime , after a crossover time . in the saturation regime, the size of the network will finally approach some limiting value .the network has even a chance to extinguish if at some moment all its nodes occur in a small area .we investigated this possibility of complete extinction .the term extinction for our model implies the end of evolution and the absence of new generations .the interaction among the nodes inside the radius is defined by a parameter and the value of regulates the population dynamics .our results show that , under certain conditions , the entire population can be led to extinction .this paper is organized as follows . in sec . 2 we present our model details and obtain simple estimates for its growth . in sec .3 we describe the populational evolution .the possibility of extinction for the model embedded in a bounded space is discussed in sec . 4 , and , finally , in sec .v , we summarize the results and present our conclusions .in our model , the population consists of interacting nodes spatially separated by some distance .we start our process from a single root node at time , as one can see in fig .the single root node ( black circle in fig . [ fig1 ] ) , can branch to produce up two new daughter nodes ( dark gray circles ) at future generation , i.e. 
, at the next time step .the position of each new node is randomly chosen inside a circle with a given _ radius _ ( ) centered in the parents positions .the attempt to add a newborn node is refused in the case the chosen position is closer than at distance from other nodes .the attempt to generate offsprings takes place at the next time step after the introduction of a new node in the network and each node can produce daughter nodes only at this time . at the next time step , after three generations , the node is removed from the network . andnew attempts are made each time step . in ,one can see a refused attempt ( blue circle ) due to the proximity to other nodes ( closer than a distance ) . in ,the oldest node is removed and new nodes are created . ] at each time step , each of the nodes previously introduced , attempts to branch , so at each time step a new generation of nodes is born .the nodes are chosen uniformly at random one by one and during a unit of time we update the entire network .the total area of the system is limited , considering that it is natural the introduction of a spatial restriction into the model .the first node is settled as the origin of the space and from the origin we set a maximum length for each spatial coordinate of a two - dimensional space .in other words , the geometric position of each node in the network , for our model , is restricted in the range , .the linear size of the area , , is introduced as a parameter of the model and we assume that this area does not change in time . in our simulations we used open boundary conditions .if one lets the population dynamics evolve embedded in a infinitely large system ( ) , the population always increase in size .the number of new nodes grows very fast as for initial times , and , after certain crossover time , the growth is slower than exponential , as one can see in the fig .[ fig2 ] . .the behavior for the initial time steps , , is also exhibited .data are averaged over 50 samples . ] at this regime the total population as function of the time is , for greater than .we can estimate , very roughly , from and , we have which leads to the estimate at small . our numerical results are considering that , for the estimates of the total population in the saturation regime .we should emphasize that in our model the population is confined into a limited area and it is not possible to grow indefinitely .the general result of our simulations for this model is exhibited in fig .[ fig3 ] , where we consider a two - dimensional space and a sufficiently small value of , in comparison with the linear size of the system .initially , the population grows exponentially and , after certain crossover time , one can see that the population reach an steady state .after the crossover time , is nearly constant .the maximum value of the population is , since we are considering a two - dimensional space for the simulations .the growth of the populational density of _ paramecium _ in laboratory , for instance , is reported to have the same behavior of our model .one can see that in the saturation regime , the total population is smaller than .this is due to the fact that the interaction among the nodes does not allow that each possible offspring may be created at some generation , keeping the total population below this limit . andthe inset shows that even for one sample , the population reaches a constant value after some time . 
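a minimal simulation of this growth rule, written from the verbal description above, is sketched below: daughters are placed uniformly inside a circle of radius R around the parent, an attempt is refused if the candidate position lies within the exclusion distance of an existing node or outside the box, a node breeds only at the step after its birth, and it is removed after three generations. all numerical values (radius, exclusion distance, box size, number of steps) and the placement of the root at the centre of the box are illustrative assumptions, not the authors' code or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
R, DELTA, L, T = 1.0, 0.3, 10.0, 60        # branching radius, exclusion distance, box size, steps (illustrative)

nodes = [(L / 2, L / 2, 0)]                # (x, y, birth time); root placed at the box centre for convenience
history = []

for t in range(1, T + 1):
    positions = np.array([(x, y) for x, y, _ in nodes])
    newborn = []
    for x, y, birth in nodes:
        if birth != t - 1:                 # a node attempts to branch only once, right after its birth
            continue
        for _ in range(2):                 # up to two daughters per node
            r, phi = R * np.sqrt(rng.random()), 2 * np.pi * rng.random()
            px, py = x + r * np.cos(phi), y + r * np.sin(phi)
            if not (0.0 <= px <= L and 0.0 <= py <= L):
                continue                   # candidate falls outside the allowed area: attempt refused
            if np.hypot(positions[:, 0] - px, positions[:, 1] - py).min() < DELTA:
                continue                   # overcrowding: too close to an existing node, attempt refused
            newborn.append((px, py, t))
            positions = np.vstack([positions, (px, py)])
    nodes = [nd for nd in nodes if t - nd[2] < 3] + newborn   # drop nodes older than three generations
    history.append(len(nodes))
    if not nodes:                          # nobody left: the population is extinct
        break

print(history)                             # fast initial growth followed by saturation (or extinction)
```

for a small box or a large exclusion distance the list of nodes can empty out after a few generations, which is precisely the extinction scenario analysed next.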
]for a small population , the possibility of extinction is higher .the network has even a chance to extinguish if the offspring created are too few and , at some moment , all its nodes occur in a small area .it is a well known characteristic that the smaller a population , the more susceptible it is to extinction by various causes .figure [ fig4 ] demonstrates an example of the evolution of the population , which in this case has and .the population rapidly increase and the system enters in the fluctuation regime , for which the population fluctuates around a mean value for a few generations . after some time ,the population decreases and is extinguished . and .] the picture which we observe agrees with traditional views on extinction processes which show `` relatively long periods of stability alternating with short - lived extinction events '' ( d. m. raup ) . the competition for resources combined with the restricted space limits the total population and , for some values of the parameters in the model , the population should finally extinct . in real situations ,extinction may require external factors , an environmental stress or an internal mechanism , such as mutation .one can see an example in the case of extinction of reindeer population in st .matthew island .coast guard released 29 reindeer on the island during world war ii , in 1944 .the reindeer population grows exponentially and , in 1963 , it was about 6,000 animals on the island , 47 reindeer per square mile .the overpopulation , limited food supply and the exceptionally severe winter of 1963 - 1964 significantly affect future offspring . the reindeer population of st .matthew island drops to 42 animals in 1966 and dies off by the 1980s .this kind of extinction may also occur in branching annihilating random walks and other related processes studied in refs . , in which the random processes play the role of an external factor , internal mechanism or an environmental stress that may lead to extinction . in our model ,if we choose a large enough value of the parameter , the population will be small and the number of new nodes after some generations can decrease and , sometimes , vanishes . when the nodes competition increases , the population may decays or even vanishes , as we can see by the rapidly decreasing of the new nodes in fig .[ fig4 ] .we investigated the state of the branching process after a long period of time , generations ( i.e. , time steps ) . for the case when the population always dies off , since no offspring is allowed , for any value of .we simulated our model for different samples and for various values of and . from this datawe obtain the probability of extinction , i.e. , the fraction of samples in which the population dies off before the 10 generation , for different l , as one can see in fig . [ fig5 ] .generations versus for 100 samples and different values of linear size l of the system . ] in fig .[ fig6 ] one can see a diagram where an extinction ( below the curve ) and non - extinction ( above the curve ) regions are shown .each point is fig .[ fig6 ] is defined as follows . 
for a given value of and considering generations, the value of for which the probability of extinction goes to one defines one point in the graphic of fig .our results show that high values of populational density , represented in our model for small and large , can be lead to extinction .this picture is a different representation of the probability of extinction in which we are considering the values of the parameters and corresponding to .plane , where one can see extinction and non - extinction regions . ]we studied the evolution of a population embedded into a restricted space in which the interaction among the population is determined by the relative position of nodes in space .our model generates a competition between species or individuals ( represented by the nodes ) . starting from a single root node and , at each time step, each existent node in the network can branch to produce up to two new daughter nodes at the next generation .the new nodes are not allowed to emerge closer than a certain distance of a pre - existent node , defined by a parameter , i.e. , overcrowding suppresses the `` fertility '' of population .evolutionary processes are usually considered in low dimensions and , for this case , our results do not depend qualitatively on the system s dimension for .we have demonstrated that the embedding of the network into a restricted area , which is natural for general populational evolution , set limits to growth and , for some values of the model s parameters , can result in complete extinction .the simple model we studied can schematically describe a real process in nature .f. l. forgerini would like to thank the fct for the financial support by project no .sfrh / bd/68813/2010 .n. c. would like to thank the brazilian funding agencies capes and cnpq for the financial support .this work was partially supported by projects ptdc / fis/108476/2008 , ptdc / sau - neu/103904/2008 , and ptdc / mat/114515/2009 .30 s. n. dorogovtsev and j. f. f. mendes , _ evolution of networks : from biological nets to the internet and www _ , clarendon press , oxford ( 2002 ) ; s. n. dorogovtsev , _ lectures on complex networks _( oxford university press , oxford , 2010 ) . | we study the competition and the evolution of nodes embedded in euclidean restricted spaces . the population evolves by a branching process in which new nodes are generated when up to two new nodes are attached to the previous ones at each time unit . the competition in the population is introduced by considering the effect of overcrowding of nodes in the embedding space . the branching process is suppressed if the newborn node is closer than a distance of the previous nodes . this rule may be relevant to describe a competition for resources , limiting the density of individuals and therefore the total population . this results in an exponential growth in the initial period , and , after some crossover time , approaching some limiting value . our results show that the competition among the nodes associated with geometric restrictions can even , for certain conditions , lead the entire population to extinction . |
population genetics is concerned with the investigation of the genetic structure of populations , which is influenced by evolutionary factors such as mutation , selection , recombination , migration and genetic drift . for excellent reviews of the theoretical aspects of this field ,see . in this paper ,the antagonistic interplay of mutation and selection shall be investigated , with mutation generating the genetic variation upon which selection can act .pure mutation selection models exclude genetic drift and are therefore deterministic models , and accurate only in the limit of an infinite population size ( for a review , see * ? ? ?a further simplification taken here is to consider only _ haploid _ populations , where the genetic material exists in one copy only in each cell .however , the equations used here to describe evolution apply as well to diploid populations without dominance . for the modelling of the types considered , the _ sequence space approach _is used , which has first been used by to model the structure of the dna , where individuals are taken to be sequences . here, the sequences shall be written in a two - letter alphabet , thus simplifying the full four - letter structure of dna sequences . in this approach, the modelling is based on the microscopic level , at which the mutations occur , hence the mutation process is fairly straightforward to model .however , the modelling of selection is a more challenging task , as selection acts on the phenotype , and the mapping from genotype to phenotype is by no means simple . to this end , the concept of the _ fitness landscape _ is introduced as a function on the sequence space , assigning to each possible genotype a fitness value which determines the reproduction rate .apart from the problem that a realistic fitness landscape would have to be highly complex ( too complex for a mathematical treatment ) , there is also very limited information available concerning the nature of realistic fitness functions .therefore , the modelling of fitness is bound by feasibility , trying to mimic general features that are thought to be essential for realistic fitness landscapes such as the degree of ruggedness .a very common type of fitness functions is the class of permutation - invariant fitness functions , where the fitness of a sequence is determined by the number of mutations it carries compared to the wild - type , but not on the locations of the mutations within the sequence .although this model describes the accumulation of small mutational effects surprisingly well , it is a simplistic model that lacks a certain degree of ruggedness that is thought to be an important feature of realistic fitness landscapes . in this paper ,hopfield - type fitness functions are treated as a more complex model . here , the fitness of a sequence is not only determined by the number of mutations compared to one reference sequence , but to a number of predefined sequences , the _ patterns_. this yields a class of fitness landscapes that contain a higher degree of ruggedness , which can be tuned by the number of patterns chosen .while this can still be treated with the methods used here , it is a definite improvement on the restriction of permutation - invariant fitness functions .particular interest is taken in the phenomenon of mutation driven error thresholds , where the population in equilibrium changes from viable to non - viable within a narrow regime of mutation rates . 
in this paper , a few examples of hopfield - type fitness functions are investigated with respect to the error threshold phenomenon .section [ the mutation selection model in sequence space ] introduces the basic mutation selection model with its main observables . in section[ sequences as types ] , the model is applied to the sequence space approach , formulating the mutation and fitness models explicitly .sections [ lumping for the hopfield - type fitness ] and [ the maximum principle ] present the method , which relies on a lumping of the large number of sequences into classes on which a coarser mutation selection process is formulated .this lumping is necessary to formulate a simple maximum principle to determine the population mean fitness in equilibrium . in section [ error thresholds ] , this maximum principle is used to investigate some examples of hopfield - type fitness functions with respect to the error threshold phenomenon .the model used here ( as detailed below ) is a pure mutation selection model in a time - continuous formulation as used by and , for instance .[ [ population . ] ] population .+ + + + + + + + + + + the evolution of a population where the only evolutionary forces are mutation and selection is considered , thus excluding other factors such as drift or recombination for instance .individuals in the population shall be assigned a type from the finite _ type space _ .the population at any time is described by the _ population distribution _ , a vector of dimension , the cardinality of the type space .an entry gives the fraction of individuals in the population that are of type .thus the population is normalised such that .[ [ evolutionary - processes . ] ] evolutionary processes .+ + + + + + + + + + + + + + + + + + + + + + + the evolutionary processes that occur are birth , death and mutation events .birth and death events occur with rates and that depend on the type of the individual in question , and taken together , they give the effective reproductive rate , or _ fitness _ as .mutation from type to type depends on both initial and final type and happens with rate .these rates are conveniently collected in square matrices and of dimension , where the reproduction or fitness matrix with entries is diagonal . the off - diagonal entries of the mutation matrix are given by the mutation rates , and as mutation does not change the number of individuals , the diagonal entries of are chosen such that , which makes a markov generator .the time evolution operator is given by the sum of reproduction and mutation matrix , .[ [ deterministic - evolution - equation . ] ] deterministic evolution equation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the deterministic limit of an infinite population size , the evolution of the population is governed by the evolution equation {\boldsymbol{p}}(t ) \;,\ ] ] where is the population mean fitness . the term with needed to preserve the normalisation of the population .note that this term makes the evolution equation ( [ evolution equation ] ) nonlinear .[ [ equilibrium . ] ] equilibrium .+ + + + + + + + + + + + the main interest focuses on the equilibrium , i.e. , the behaviour if , which is attained for .all equilibrium quantities shall be denoted by omitting the argument , for instance the equilibrium population distribution is . 
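a concrete feel for this relaxation can be obtained from a toy example. the sketch below uses a hypothetical four - type system with arbitrarily chosen fitness values and nearest - neighbour mutation rates (none of these numbers come from the paper); it integrates the evolution equation with a simple euler scheme and shows that the mean - fitness term keeps the distribution normalised while the population settles into equilibrium.

```python
import numpy as np

# hypothetical 4-type example; fitness values and mutation scheme are arbitrary choices
f = np.array([1.0, 0.6, 0.3, 0.1])            # reproduction rates r_i
mu = 0.2
M = mu * np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], float)      # mutation only between neighbouring types
M -= np.diag(M.sum(axis=0))                   # diagonal chosen so that M is a Markov generator
H = np.diag(f) + M                            # time-evolution operator

p = np.full(4, 0.25)                          # initial population distribution
dt = 0.01
for _ in range(20_000):
    fbar = f @ p                              # population mean fitness
    p += dt * (H @ p - fbar * p)              # evolution equation with the nonlinear fbar term

print("equilibrium distribution:", np.round(p, 4))
print("normalisation:", round(float(p.sum()), 6), "  mean fitness:", round(float(f @ p), 4))
```

the printed mean fitness converges to the quantity that is identified below as the leading eigenvalue of the time - evolution operator.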
in equilibrium ,the evolution equation ( [ evolution equation ] ) becomes an eigenvalue equation for , with leading eigenvalue and corresponding eigenvector .if is irreducible , as shall be assumed throughout , perron - frobenius theory ( see , for instance , * ? ?* appendix ) applies , which guarantees that the leading eigenvalue of is non - degenerate and the corresponding right eigenvector is strictly positive , which implies that it can be normalised as a probability distribution . [[ ancestral - distribution . ] ] ancestral distribution .+ + + + + + + + + + + + + + + + + + + + + + + similarly to the population distribution , there is also another important distribution in this model , namely the ancestral distribution .consider the population at time , but count each individual not as its current type , but as the type its ancestor had at time .thus an entry of the ancestral distribution determines the fraction of the population at whose ancestor at time was of type . in the limit ,this also approaches an equilibrium distribution .as shown by , the equilibrium ancestral distribution can be obtained as a product of the left and right pf ( perron - frobenius ) eigenvectors and of the time - evolution operator as , where is normalised such that . [ [ population - and - ancestral - means . ] ]population and ancestral means .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + any function on the type space given , say , by can be averaged with respect to the population or the ancestral distribution distribution .the population mean of is given by whereas the ancestral mean is note that the time - dependence of the means only comes from the distribution , whereas the function is considered constant in time . in equilibrium , time dependence is again omitted such that the equilibrium population and ancestral means are denoted and , respectively .an important example of the population mean is the population mean fitness from equation ( [ evolution equation ] ) .in the previous section , the types are a rather abstract concept . in order to formulate the particular mutation and fitness models ,they shall now be specified as sequences , mimicking the structure of the dna ( cf .* ) . for simplicity ,only two - state sequences are considered , i.e. , sequences that have at each site one out of two possible entries . however , the method used here can immediately be generalised to a more realistic four - state model ( see * ? ? ? * ) .the types therefore are associated with sequences of fixed length , written in the _ alphabet _ , thus for .this means that there are different sequences , and thus the type space ( or _ sequence space _ ) has cardinality .a simple mutation model that neglects any processes changing the length of the sequence , such as deletions or insertions , is used .mutations are modelled as point processes , where an arbitrary site is switched with rate , such that the mutation rate between sequences that differ only in one particular site is given by .sequences that differ in more than one site can not mutate into one another within a single mutational step .this is known as the _ single step mutation model _, introduced by .the mutation model defines a neighbourhood in the sequence space .a convenient measure for the distance between sequences is the hamming distance , which counts the number of sites at which the sequences and differ . 
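before the mutation matrix is written down explicitly, it may help to see these equilibrium objects in the small hypothetical system used above. the sketch below extracts the leading eigenvalue, the equilibrium population distribution from the right eigenvector, and the ancestral distribution as the normalised product of left and right eigenvectors; all numbers remain illustrative.

```python
import numpy as np

# same illustrative four-type system as in the previous sketch
f = np.array([1.0, 0.6, 0.3, 0.1])
mu = 0.2
M = mu * np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
M -= np.diag(M.sum(axis=0))
H = np.diag(f) + M

vals, right = np.linalg.eig(H)
k = np.argmax(vals.real)                         # Perron-Frobenius (leading) eigenvalue
lam = vals[k].real
p = np.abs(right[:, k].real); p /= p.sum()       # right eigenvector -> equilibrium population distribution

lvals, left = np.linalg.eig(H.T)
z = np.abs(left[:, np.argmax(lvals.real)].real)  # left eigenvector of H
a = z * p; a /= a.sum()                          # ancestral distribution: normalised product z_i p_i

print("leading eigenvalue     :", round(lam, 4))
print("population mean fitness:", round(float(f @ p), 4))   # equals the leading eigenvalue
print("ancestral mean fitness :", round(float(f @ a), 4))   # typically larger than the population mean
```

because the columns of the mutation generator sum to zero, the population mean fitness coincides with the leading eigenvalue, while the ancestral mean weights the fitter types more strongly.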
with this, the mutation matrix is explicitly given as the diagonal entry is chosen such that fulfils the markov condition .a rather simple , though commonly used type of fitness function is the _ permutation - invariant fitness_. there , the fitness of a sequence depends only on the number of mutations it has compared to a reference type , not on their position along the sequence .thus fitness is a function of the hamming distance to the reference sequence , which is usually chosen as the wild - type .the hamming distance to the wild - type is also called the _ mutational distance _ . a non - permutation - invariant fitness that contains some ruggedness , but is simple enough to be dealt with in this framework , is the _ hopfield - type fitness _ , a special type of spin - glass model , which has been introduced by as a model for neural networks . instead of comparing a sequence only to the wild - type , as it is done for the permutation - invariant fitness , the hopfield - type fitness of a sequenceis determined by its hamming distances to reference sequences , the _ patterns _ , .the hopfield - type fitness shall be defined in terms of the _ specific distances _ , which are the hamming distances with respect to the patterns , and thus the fitness is given as note that in the case of a single pattern ( ) , this yields again a permutation - invariant fitness .one problem of the sequence space approach is the large number of types , which grows exponentially with the sequence length , .the time - evolution operator is a matrix of size , and in this set - up one is interested in its leading eigenvalue and the corresponding right and left eigenvectors and .the relevant sequence length depends on the particular application one has in mind , but it is typically rather long . if one aims to model the whole genome of a virus or a bacterium , has to be in the region of , but even a single gene has of the order of base pairs . these values lead to matrices of a size that makes the eigenvalues and eigenvectors inaccessible . for some types of fitness functions ,this problem can be reduced by _ lumping _ together types into _ classes _ of types , and considering the new process on a reduced sequence space , which contains the classes rather than the individual types . under certain circumstances ,mutation is described as a markov process in the emerging lumped process as well , such that this process is accessible to markov process methods , and the framework developed in section [ the mutation selection model in sequence space ] can directly be applied to the lumped system .the lumping of the mutation process is a standard procedure in the theory of markov chains , see also for an application to mutation selection models .this lumping leads to a meaningful mutation selection model on the reduced type space , if all sequences lumped together into one class have the same fitness .it is possible to lump the markov chain given by the mutation matrix with state space with respect to a particular partition , if and only if for each pair the cumulative mutation rates from type into , are identical for all , cf . 
the example shown in figure [ fig lumping ] .( 258,171 ) ( 15 , 5)(0,0) ( 200 , 30)(0,0) ( 23 , 90)(0,0) ( 25 , 40)(0,0) visualisation of the compatibility with lumping : consider two classes and .the mutation rates from the types in to the types in ( given next to the arrows ) are compatible with a lumping with respect to and , because the sum of the mutation rates from type to all types in is given by , which is identical with those from , ., title="fig:",scaledwidth=60.0% ] in this case , the lumped process , with states and mutation rates for any , is again a markov chain ( * ? ? ?* theorem 6.3.2 ) . whereas in the case of a permutation - invariant fitness function ,the lumping procedure is fairly simple , collecting all sequences with the same hamming distance to the wild - type into classes , and considering cumulative mutation rates between these classes , the lumping for the hopfield - type fitness is somewhat more complex .for a two - state model with hopfield fitness , it has been performed for instance by , and this shall be recollected in the remainder of this section .first , the quantities with respect to which the lumping shall be performed must be defined . to this end , consider as an example the case of sequence length with three patterns ( ) , and let the patterns be collected in a matrix , such that the row of is pattern . without loss of generality , the pattern can always be chosen as . let the patterns in this example be given as note that there are only different types of sites ( corresponding to the columns of ) .these are collected in a matrix , the columns of which correspond to the possible types of sites in the matrix of patterns . for the case , this matrix is given as using the column vectors of the matrix , the patterns given in equation ( [ example pattern ] ) can alternatively be expressed as classifying the sites into classes according to which of the column vectors of coincides with the column vector of the patterns at site .let be the index set of sites , with a partition into subsets induced by the patterns such that with patterns can be characterised by the number of sites in each subset . the example patterns ( [ example pattern ] ) can therefore be described by considering only subsequences , the _ partial distances _ of a sequence with respect to the pattern are defined as the hamming distance between and , _ restricted to the subsequence . therefore , they can be written as such that the specific distance with respect to pattern is given by . because the differences between each of the patterns within the subsets of are known ( and recorded in the matrix ), it is sufficient to consider only the partial distances with respect to one pattern , here ; the partial distances with respect to any other pattern can be expressed in terms of the as using the matrix elements of .the specific distance to any pattern can be expressed as where the index set of classes is partitioned into two subsets , with and .hence , by specifying the partial distances with respect to pattern , the specific distances with respect to any pattern are determined , which in turn determine the fitness .this implies that all sequences with the same partial distances have the same fitness .thus the partial distances to pattern , collected in a _ mutational distance _vector , shall be the quantities that label the classes in the lumped system . 
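the lumpability condition can be checked by brute force for a short sequence. the sketch below takes two patterns (the first taken as the all - zero sequence, the second an arbitrary choice), partitions the sites into agreement and disagreement sets, and verifies that every sequence sharing a partial - distance vector has the same number of single - site mutations into each neighbouring class; sequence length, pattern and the implied mutation rate are illustrative only.

```python
import numpy as np
from itertools import product
from collections import defaultdict

N = 6
xi2 = np.array([0, 0, 0, 1, 1, 0])        # second pattern; the first pattern is the all-zero sequence
D1 = np.where(xi2 == 0)[0]                # sites where the two patterns agree
D2 = np.where(xi2 == 1)[0]                # sites where they differ

def klass(seq):
    s = np.asarray(seq)
    return (int(s[D1].sum()), int(s[D2].sum()))   # partial distances (d1, d2) w.r.t. the first pattern

def flips_into_classes(seq):
    """Number of single-site mutations of seq that land in each partial-distance class."""
    counts = defaultdict(int)
    for i in range(N):
        t = list(seq)
        t[i] ^= 1
        counts[klass(t)] += 1             # the cumulative rate into that class is mu times this count
    return dict(counts)

by_class = defaultdict(list)
for seq in product((0, 1), repeat=N):
    by_class[klass(seq)].append(flips_into_classes(seq))

lumpable = all(all(c == counts[0] for c in counts) for counts in by_class.values())
print("number of sequences:", 2 ** N, "  number of classes:", len(by_class))
print("identical cumulative rates within every class:", lumpable)    # True
```

for this pattern the 2^6 = 64 sequences collapse onto (4 + 1)(2 + 1) = 15 classes, which is the kind of reduction that makes the eigenvalue problem of the following sections tractable.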
the relevant partition of the sequence space is given by with , and the reduced sequence space , or _mutational distance space _ , contains the classes .considering again the subsequences , there are possible different ( as takes values from to ) , and there are different subsequences for each . hence , considering all sites , there are , and sequences that are mapped onto each . for the patterns chosen as example ( [ example pattern ] ) , we have , while the full sequence space has dimension . in the single step mutation model ,the only neighbours of a sequence with distance vector lie in the classes , where the are the unit vectors of mutation .thus the only non - zero cumulative mutation rates are as a sequence with has -sites and -sites in , and the single - site mutation rates are for all sites , the cumulative mutation rates are given by irrespective of the particular order within the subsequences . therefore the cumulative mutation rates are the same for all sequences with the same , which is the condition for lumping .the mutation selection model with hopfield - type fitnessis indeed `` lumpable '' with respect to the partition induced by the distance vectors . the mutation selection process on the mutational distance space is described by the lumped time - evolution operator with lumped reproduction and mutation matrices and of dimension .whereas the lumped reproduction matrix is still diagonal and contains the same entries as , i.e. , the off - diagonal entries of the lumped mutation matrix are given by the cumulative mutation rates with unchanged diagonal entries compared to , which still fulfil the markov property , and thus with the cumulative mutation rates from equation ( [ cumulative mutation rates ] ) .for a more general derivation of the lumped reproduction and mutation matrices , see .note that the time - evolution operator acting on describes the evolution of a population under mutation and selection determined by the evolution equation ( [ evolution equation ] ) , and thus the theory developed in section [ the mutation selection model in sequence space ] applies .although the lumping procedure reduces the number of types very efficiently , the evaluation of the eigenvalues and eigenvectors of the time - evolution operator still remains a difficult problem for many applications , due to the size of the eigenvalue problem .if one is interested solely in the equilibrium behaviour of the system , however , it is possible to determine the population mean fitness ( at least asymptotically for large sequence length ) , given by the leading eigenvalue of .this can be done by a simple maximum principle that can be derived from rayleigh s general maximum principle , which specifies that the leading eigenvalue of an matrix can be obtained via a maximisation over , the vector for which the supremum is attained is the eigenvector corresponding to the eigenvalue .the simple maximum principle derived from this guarantees that the population mean fitness can be obtained by maximising a function on the mutational distance space .it can be shown that the maximiser itself is the ancestral mean mutational distance .such a maximum principle has first been derived by for two - state sequences with permutation - invariant fitness .this has been generalised to apply for four - state sequences with permutation - invariant fitness by , and subsequently the restriction to permutation - invariant fitness function has been relaxed by .the results from apply directly to the hopfield - type fitness 
treated here . whereas the original mutation matrix is symmetric, the lumped mutation matrix is no longer symmetric , as different numbers of sequences are lumped into the different classes , therefore giving rise to unequal cumulative forward and backward mutation rates . to derive the maximum principle , it is necessary to symmetrise the mutation matrix . is reversible , i.e. , where is the stationary distribution of the pure mutation process , which is given by the equidistribution of types on , and thus given by the number of sequences that are lumped onto the same mutational distance vector .the reversibility of implies that it can be symmetrised by the means of a diagonal transformation , which yields the symmetrised mutation matrix as with off - diagonal entries using the cumulative mutation rates , this reads as can be seen by using the explicit representation for the cumulative mutation rates from equation ( [ cumulative mutation rates ] ) with the from equation ( [ nd ] ) .because is diagonal , the diagonal entries of the mutation matrix are unchanged , as is diagonal as well , it is not changed by the transformation , and thus this transformation also symmetrises the time - evolution operator such that is symmetric . before symmetrisation , was expressed as the sum of a markov generator and a diagonal remainder . as the transformation does not preserve the markov property ,this is not the case for the symmetrised time - evolution operator in ( [ symmetrised time - evolution operator ] ) .it is however useful to split it up this way . to this end , let where is a ( symmetric ) markov generator and is the ( diagonal ) remainder .the off - diagonal entries of are given by those of from equation ( [ symmetrised mutation rates ] ) , for , whereas the markov property requires as diagonal entries the remainder is given by to deal with the case of infinite sequence length , it will prove useful to use intensively scaled normalised versions of the extensively scaled variables like the mutational distances .the pattern in the hopfield model , previously characterised by the _ number _ of sites in each subset , will now be described by the _ fraction _ of sites in , given by .similarly , we use normalised partial distances where ] is discontinuous at this mutation rate . in the examples shown later in this thesis , the second order error threshold always show an infinite derivative at the critical mutation rates .note that , like phase transitions in physics , these definitions of the error thresholds apply in the strict sense only to a system with infinite sequence length ( ) , for finite sequence lengths , the thresholds are smoothed out due to the lack of non - analyticities . gave a finer classification of different error threshold phenomena .the first order error threshold they called `` fitness threshold '' . here, this term shall include also the second order error threshold , making all error thresholds as defined above fitness thresholds .furthermore , the concept of the `` degradation threshold '' was introduced : * definition ( degradation threshold ) . 
*+ a degradation threshold is an error threshold of first or second order , where the population distribution beyond the critical mutation rate is given by the equidistribution in sequence space .thus here the degradation threshold is a special case of a fitness threshold , going in line with the complete delocalisation of the population in sequence space .note that in the limit of infinite sequence length ( ) , for which the error threshold definitions apply exactly , this equidistribution is reached immediately above , and beyond the threshold the population is insensitive to any further increase in mutation rates . in the case of finite sequence lengths , where the thresholds are smoothed out ,the equidistribution is of course only reached asymptotically .the original error threshold was observed for the single peaked fitness landscape , where a single sequence is attributed a high fitness value , all other sequences are equally disadvantageous ( for a review , see * ? ? ?this is clearly an oversimplification and should not be regarded as anything but a toy model .other fitness landscapes that have been investigated comprise , in the permutation - invariant case , linear and quadratic fitness functions , general functions showing epistasis , and as examples lacking permutation - invariance the onsager landscape , which has nearest neighbour interactions within the sequence , as well as various spin glass landscapes like the hopfield landscape , the sherrington - kirkpatrick spin glass , the nk spin glass , and the random energy model , assigning random fitness values to each sequence .one fitness landscape where an analytical solution can be obtained is the linear fitness ( cf . * ? ? ?* ; * ? ? ?* ; * ? ? ?note that this corresponds to a multiplicative landscape in a set - up using discrete time . for a linear fitness function, there is no error threshold , but the population changes smoothly from localised to delocalised with an increasing mutation rate . 
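the classic threshold for the single peaked landscape is easy to reproduce numerically in the lumped hamming - class picture. the sketch below builds the lumped single - step mutation generator for a binary sequence, using the cumulative rates (N - d) mu for acquiring and d mu for losing a mutation, places a single fitness peak on the mutation - free class, and tracks how the equilibrium population empties out of that class as mu grows; the sequence length and peak height are arbitrary illustrative values.

```python
import numpy as np

N, f0 = 50, 10.0                                   # sequence length and height of the fitness peak (illustrative)
d = np.arange(N + 1)                               # Hamming classes with respect to the fittest sequence
f = np.where(d == 0, f0, 0.0)                      # single-peaked fitness landscape

def peak_fraction(mu):
    """Equilibrium fraction of the population sitting on the fitness peak."""
    M = np.zeros((N + 1, N + 1))
    M[d[1:], d[1:] - 1] = (N - d[:-1]) * mu        # class k -> k+1: N-k unmutated sites can flip
    M[d[:-1], d[:-1] + 1] = d[1:] * mu             # class k -> k-1: k mutated sites can flip back
    M -= np.diag(M.sum(axis=0))                    # Markov generator on the lumped classes
    H = np.diag(f) + M
    vals, vecs = np.linalg.eig(H)
    p = np.abs(vecs[:, np.argmax(vals.real)].real)
    return p[0] / p.sum()

for mu in (0.02, 0.05, 0.10, 0.15, 0.20, 0.30, 0.50):
    print(f"mu = {mu:4.2f}   fraction on the peak = {peak_fraction(mu):.3f}")
```

the occupation of the peak collapses in a narrow window around mu of order f0 / N; for finite N the transition is smoothed, exactly as stated above, and it becomes a genuine non - analyticity only in the limit of infinite sequence length.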
for quadratic fitness functions ,error thresholds only exist for antagonistic epistasis ; they are absent for quadratic fitness functions with synergistic epistasis .these results go in line with those for general epistatic fitness functions .studies using non - permutation - invariant fitness functions generally report the presence of error thresholds .of course the discussion of the error threshold phenomenon is academic if the threshold is an artifact of the model rather than a real biological phenomenon .this issue has been subject to numerous debates , especially because it has first been predicted by a model using the over - simplistic single peaked landscape .however , over the years biologists have accumulated evidence that particularly rna viruses naturally thrive at very high mutation rates , of the order of to per base per replication , corresponding to a genomic mutation rate of about 0.1 to 10 mutations per replication , and a number of studies have reported that populations of rna viruses only survive a moderate increase of their mutation rate , whereas if the mutation rate is increased further , the populations become extinct , for reviews see .this corresponds to the population being pushed beyond the error threshold .it has been suggested to use the error threshold for anti - viral therapies , and in fact , recent experimental results indicate that this is the mechanism via which the broad - spectrum anti - viral drug ribavirin works .this clearly warrants some further investigation of the error threshold phenomenon , which shall be done in the remainder of this section .the original hopfield fitness as introduced by is a quadratic function of the specific distances and reads using normalised specific distances , similarly to the normalised mutational distances .the statistical properties of this landscape have been studied in detail : in the thermodynamic limit , there are global maxima that are associated with the patterns and their complements .in addition to that , the number of local maxima and saddle points grows exponentially with the number patterns , hence the ruggedness of the fitness landscape can be tuned by the number of patterns .most works that have studied a hopfield - type fitness used the original hopfield model , a generalisation was however treated by , using a hopfield - type truncation selection with two patterns .thus it might be interesting and instructive to investigate the threshold behaviour of different kinds of hopfield - type fitness functions .applying criteria for the existence of error thresholds that have been obtained by for permutation - invariant fitness functions to the case of a hopfield - type fitness , it can be shown that for linear hopfield - type fitness functions there are no error thresholds , which is a new result , considering that for all previously investigated hopfield - type fitness functions , the existence of error thresholds was reported .the next step towards more complex fitness functions is to consider quadratic fitness functions , generalising the original hopfield - fitness , which is a particular example for a quadratic function . here, the analysis shall be restricted to a symmetry with respect to the normalised specific distances to the patterns , such that the parameter tunes the linear in relation to the quadratic term , and the sign of the quadratic term determines the _ epistasis _ , a measure for the strength of interaction between sites . 
for a positive quadratic term epistasisis said to be negative or antagonistic , whereas for a negative quadratic term one speaks of positive or synergistic epistasis .the case combined with a positive quadratic term ( i.e. , negative epistasis ) yields the original hopfield fitness . in the case of two patterns , ,the first pattern can be chosen without loss of generality as , such that there is only one pattern to be chosen , usually randomly .the matrix containing the possible types of sites is given by and thus the index set of sites is partitioned into two subsets , , where contains all sites at which both patterns have entry , whereas corresponds to the sites where the two patterns have entries and , respectively .the only quantities characterising the patterns are now the fractions of sites in each partition , and .thus the pattern can be characterised by a single parameter , .each sequence is characterised with respect to the pattern by the partial hamming distances to pattern ( in normalised form ) , and .these vary from ( all entries in ) to ( all entries in ) , completely independently from each other . the specific distances with respect to the patternsare linear combinations of the and given in normalised form by the hopfield - type fitness is defined as an arbitrary function of these patterns , . due to the small number of variables , for the case of two patterns , a lot can be done by analytical treatment . for the quadratic symmetric hopfield - type fitness ( [ quadratic symmetric hopfield - type fitness ] ) with positive epistasis ( i.e. , negative sign of the quadratic term ) and , there are no phase transitions , going in line with the results for permutation - invariant fitness functions , but different from other results for hopfield - type fitness functions . as an example for negative epistasis , consider first the original hopfield fitness ( [ original hopfield fitness ] ) .since for two patterns there are only two variables , it is possible to visualise the fitness landscape in this case .figure [ fig 6.14 ] shows the original hopfield fitness ( [ original hopfield fitness ] ) for the cases and . original hopfield fitness in the case of two patterns with different .,title="fig:",scaledwidth=45.0% ] original hopfield fitness in the case of two patterns with different .,title="fig:",scaledwidth=45.0% ] in the corners of the mutational distance space , one can see the four degenerate maxima .the ancestral mean partial distances , at which the maxima of are positioned , are obtained by considering the derivatives of .they are given by for the case of , which corresponds to two completely uncorrelated patterns , the ancestral mean partial distances are shown in figure [ fig 6.15 ] on the left , alongside the ancestral mean specific distances . ancestral meanpartial distances ( top ) and specific distances ( bottom ) depending on mutation rates . the original hopfield fitness ( [ original hopfield fitness ] ) fortwo patterns has been used .results correspond to uncorrelated patterns ( , left ) and correlated patterns with with ( right ) . ] for low mutation rates , there are two possible solutions for each of the ancestral mean partial distances , and as the maxima are degenerate , in equilibrium , the population will be centred equally around all of them. 
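the way such results are obtained in practice from the maximum principle can be illustrated with a small numerical sketch. two caveats: the explicit formulas are not reproduced in the text above, so both the quadratic hopfield fitness written in terms of the normalised specific distances and the square - root mutational loss term of the binary single - step model are stated here in one common normalisation as assumptions, purely to show the mechanics of the evaluation; the mutation - rate values are arbitrary.

```python
import numpy as np

a = 0.5                                            # fraction of sites on which the two patterns agree
x = np.linspace(1e-4, 1 - 1e-4, 401)
X1, X2 = np.meshgrid(x, x, indexing="ij")          # normalised partial distances

d1 = a * X1 + (1 - a) * X2                         # assumed normalised specific distance to pattern 1
d2 = a * X1 + (1 - a) * (1 - X2)                   # ... and to pattern 2
fitness = 0.5 * ((1 - 2 * d1) ** 2 + (1 - 2 * d2) ** 2)   # assumed normalisation of the Hopfield fitness

def mutational_loss(mu):
    """Assumed loss term of the two-state maximum principle, weighted by the site fractions."""
    g1 = 1 - 2 * np.sqrt(X1 * (1 - X1))
    g2 = 1 - 2 * np.sqrt(X2 * (1 - X2))
    return mu * (a * g1 + (1 - a) * g2)

for mu in (0.2, 0.6, 1.0, 1.5, 2.5):
    i, j = np.unravel_index(np.argmax(fitness - mutational_loss(mu)), X1.shape)
    print(f"mu = {mu:3.1f}   maximiser (x1, x2) ~ ({x[i]:.3f}, {x[j]:.3f})")
```

for small mu the maximiser sits close to one of the four degenerate corners (the grid search simply picks one of them), and it drifts continuously towards (1/2, 1/2) as mu grows, which is the second - order, delocalising behaviour described here; the precise critical value depends on the chosen normalisation and should not be read off from this sketch.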
however , in the approach to equilibrium , the population might well be predominantly concentrated around one of them , depending on initial conditions .the specific distances that are shown correspond to the combination of and , where both are given by the lower branch .other combinations yield similar results . for high mutation rates ,the population is in the mutation equilibrium with , forming a disordered phase . in the limit of low mutation rates , ,the population is always in the vicinity of one of the patterns ( or its complement ) , such that one of the , which is completely random with respect to the other pattern , and thus the other .this is the ordered phase . at the critical mutation rate , there is a second order phase transition between these two phases , which is a fitness as well as a degradation threshold , corresponding to the infinite derivative of both at this mutation rate .as the specific distances are simply superpositions of the partial distances , the phase transitions are also visible in the . in the correlated case ( figure [ fig 6.15 ] , right ) , two second order transitions can be identified . at , has a phase transition , whereas at , has a phase transition .the threshold occurring at the lower mutation rate is only a fitness threshold , whereas the one happening at the higher mutation rate is both a fitness and degradation threshold , leading to a totally random population . for ,the population is in an ordered phase , for , it is in a partially ordered phase , which is ordered with respect to one of the variables , but random with respect to the other . finally , for , the population is the equidistribution in sequence space . here again , for low mutation rates the population is close to one of the patterns , but due to the correlation in the chosen patterns , this leads to a non - random overlap with the other pattern . in the uncorrelated case with ,the two error thresholds coincide , and the partially ordered phase vanishes .now turn to the question how these phase transitions depend on the particular degeneracy of the fitness functions and consider the quadratic fitness function ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis ( i.e. 
, positive quadratic term ) for values of .figure [ fig 6.16 ] shows the fitness landscapes for values of and for uncorrelated patterns and correlated patterns with .quadratic hopfield - type fitness functions ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis and ( top ) and ( bottom ) for an uncorrelated pattern ( left ) and a correlated pattern with ( right).,title="fig:",scaledwidth=47.0% ] quadratic hopfield - type fitness functions ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis and ( top ) and ( bottom ) for an uncorrelated pattern ( left ) and a correlated pattern with ( right).,title="fig:",scaledwidth=43.0% ] quadratic hopfield - type fitness functions ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis and ( top ) and ( bottom ) for an uncorrelated pattern ( left ) and a correlated pattern with ( right).,title="fig:",scaledwidth=47.0% ] quadratic hopfield - type fitness functions ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis and ( top ) and ( bottom ) for an uncorrelated pattern ( left ) and a correlated pattern with ( right).,title="fig:",scaledwidth=43.0% ] as the pictures indicate , the fitness functions ( and thus the behaviour of the system ) with the same are related by symmetry under ( apart from a constant term , which does not influence the dynamics ) .note that in -direction , the fitness function is independent of .this is because in the sum of the specific distances , the term with cancels out , which happens generally in the case of an even number of patterns ( i.e. , odd ) for different variables . because in -direction the fitness is independent of , the solution for the ancestral mean mutational distance is identical with the solution for the original hopfield fitness as given in equation ( [ two - state two - pattern analytical solution ] ) .so for all values of , the phase transition with respect to happens at .for , the solution becomes more complicated , but the inverse function is simpler .it is given by \sqrt{\hat{x}_1(1-\hat{x}_1 ) } } { 2\hat{x}_1 - 1 } \,.\ ] ] the dependence of on the mutation rate is shown in figure [ fig 6.17 ] ( top ) .ancestral mean partial distance ( top ) and specific distances ( thick lines , bottom ) and ( thin lines , bottom ) depending on mutation rates . the quadratic hopfield - type fitness ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis for two patterns and different values of has been used .results correspond to uncorrelated patterns ( left ) and correlated patterns with ( right ) .data are shown for parameter values of ( top to bottom ) .for clarity , only specific distances corresponding to are shown . ] for , the second order phase transition is smoothed out and thus vanishes . note that the ambiguity in the solutions that exists in the case ( cf .figure [ fig 6.15 ] ) , does not exist here , due to the lacking degeneracy of the maxima of the fitness function at and ( cf .figure [ fig 6.16 ] ) . at the bottom ,figure [ fig 6.17 ] shows the specific distances , using the lower branch of the solution for ( as shown in figure [ fig 6.15 ] ) , which show the second order transition in , a fitness threshold . 
with this combination of solutions , for low mutation rates the population is centred around the sequence complementary to pattern . the general picture for the uncorrelated ( ) and correlated ( ) choice of patterns is very similar , apart from issues like the exact location of the thresholds . the behaviour of the system with quadratic symmetric hopfield - type fitness has been investigated for three , four and five patterns . however , due to the complexity of the analysis for a higher number of patterns , the focus is on the case of three patterns . for three patterns , the matrix reads thus there are four describing the patterns , fulfilling , and four variables ] , there are no first order error thresholds for either or . this goes in line with a different sequence becoming optimal at for these values of . however , for , there is an additional line of second order error thresholds that approaches as grows . preliminary results for indicate that in that case the second order line does not occur . it might thus be conjectured that the existence of the second order error threshold line depends on the number of patterns being even or odd ( remember that for it does exist ) . this is an interesting result , as for all previously investigated hopfield - type fitness functions ( which are limited to the original hopfield fitness and a hopfield - type truncation selection as far as the author is aware ) , the existence of error thresholds has been reported . ancestral mean partial and specific distances , and , depending on mutation rates . the original hopfield fitness ( [ original hopfield fitness ] ) for three patterns has been used . results correspond to two typical examples of random , but correlated patterns chosen for sequences of lengths ( left ) , ( middle ) and ( right ) , specified at the top of each graph as . figure [ fig 6.21 ] shows some cases of the ancestral mean partial and specific distances , and , for the original hopfield fitness ( [ original hopfield fitness ] ) with three patterns , which are randomly chosen sequences of finite length . the correlations between the patterns ( and thus the variations of the ) are characteristic of the sequence length . the six cases of patterns shown here are typical examples for the sequence lengths considered . in the case of long sequences ( ) , the deviations of the patterns from the infinite sequence limit are small , and grow with decreasing sequence length . these correlations that are introduced into the system have the same effect as a choice of correlated patterns in the case of two patterns , such that the single critical mutation rate in the case of infinite sequence length is split up into two critical mutation rates , at each of which two of the show threshold behaviour .
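the splitting of the critical mutation rate for randomly chosen finite - length patterns can also be explored by brute force , without the maximum principle , for sequences short enough to enumerate . the following sketch is purely illustrative and rests on assumptions made here , not taken from the text : the fitness is assumed to take the standard hopfield form ( a multiple of the sum of squared overlaps with the stored patterns ) , mutation acts as independent single - site flips at rate mu per site , and the printed observable is the equilibrium mean hamming distance to the first pattern ; all names and constants are placeholders .

```python
import itertools
import numpy as np

def hopfield_fitness(seqs, patterns):
    # Assumed Hopfield-type form: (n/2) times the sum of squared overlaps with
    # the stored patterns (an assumption, not the text's exact definition).
    n_sites = seqs.shape[1]
    overlaps = seqs @ patterns.T / n_sites        # one overlap per pattern
    return n_sites * np.sum(overlaps ** 2, axis=1) / 2.0

def mean_distance_to_pattern(n_sites, patterns, mu):
    # Enumerate all sequences over {-1, +1}^n_sites (feasible only for small n_sites).
    seqs = np.array(list(itertools.product([-1, 1], repeat=n_sites)))
    fitness = hopfield_fitness(seqs, patterns)
    # Single-step mutation generator: rate mu per site between Hamming neighbours.
    hamming = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)
    mutation = mu * (hamming == 1).astype(float)
    np.fill_diagonal(mutation, -mu * n_sites)
    evo = np.diag(fitness) + mutation             # time-evolution operator
    eigvals, eigvecs = np.linalg.eig(evo)
    lead = eigvecs[:, np.argmax(eigvals.real)].real
    pop = np.abs(lead) / np.abs(lead).sum()       # equilibrium population distribution
    dist = (seqs != patterns[0]).sum(axis=1)      # Hamming distance to pattern 1
    return float(pop @ dist)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sites, n_patterns = 8, 2
    patterns = rng.choice([-1, 1], size=(n_patterns, n_sites))  # random, hence correlated
    for mu in np.linspace(0.05, 2.0, 20):
        print(f"mu = {mu:5.2f}   mean distance to pattern 1 = "
              f"{mean_distance_to_pattern(n_sites, patterns, mu):.3f}")
```

scanning the printed values over mu shows the ordered regime ( small mean distance ) giving way to the disordered one ; how sharply this happens depends on the chosen patterns , mirroring the smoothing discussed next .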
for short sequence length ( ) , it can be seen that , particularly at the smaller critical mutation rate , the threshold is smoothed out . ancestral mean partial and specific distances , and , depending on mutation rates . the hopfield - type fitness ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis for three patterns and has been used . results correspond to the patterns used in figure [ fig 6.21 ] . in figure [ fig 6.22 ] , the ancestral mean partial and specific distances , and , corresponding to the same patterns as in figure [ fig 6.21 ] are shown for the quadratic hopfield - type fitness ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis and . for long sequence length , these look very similar to the results for infinite sequence length ( cf . figure [ fig 6.19 ] ) , showing clearly the single first order phase transition . for shorter sequence lengths , they become more and more smoothed out , such that at , roughly only every other pattern that was simulated shows an error threshold , whereas for , in the vast majority of cases , there is no threshold . note that this effect is present even though the finite sequence length was only simulated by choosing the patterns accordingly , so it is a feature of the model with correlated patterns . in this section , the quadratic symmetric hopfield - type fitness given in equation ( [ quadratic symmetric hopfield - type fitness ] ) with negative epistasis ( i.e. , positive quadratic term ) was investigated for a small number of patterns . the results for the original hopfield fitness ( ) have been compared with those for the generalised quadratic hopfield - type fitness ( ) . furthermore , both uncorrelated patterns ( for all ) , corresponding to an infinite sequence length , and correlated patterns ( ) , simulating a finite sequence length , were considered . for two patterns , an analytical treatment was possible , making all values of the accessible , whereas the case of three or more patterns was treated numerically due to the larger number of variables . this means that apart from the uncorrelated choice of pattern ( ) , which was investigated for three , four and five patterns , only some correlated combinations for three patterns with were investigated , some typical examples of which are shown in section [ the case of three patterns ] . the results are summarised as follows :
* original hopfield fitness ( ) :
  * for _ uncorrelated patterns _ , there is one second order error threshold for all at ( investigated ) .
  * for _ correlated patterns _ , there are two second order error thresholds , each for half of the ( investigated ) .
* hopfield - type fitness with :
  * for _ uncorrelated patterns _ , there is a first order error threshold only on a restricted range of ( no first order threshold for ) . for an even number of patterns ( ) , there is an additional second order error threshold for any ( , ) . this error threshold does not exist for the investigated cases of an odd number of patterns ( ) .
  * for _ correlated patterns _ , at , there is one second order threshold ; the other one , which is present for the original hopfield fitness , is smoothed out .
at , there is up to one first order threshold , smoothed out for more strongly correlated patterns ( corresponding to shorter sequence length ) .the evaluation the hopfield - type fitness was limited to the cases of rather small numbers of patterns , simply because an increase in the number of patterns makes the evaluation more complex .however , the hopfield - type fitness was chosen as a potentially realistic fitness because of its ruggedness that can be tuned by the number of patterns chosen .the simple cases considered here probably do not show as high a degree of ruggedness as one would expect for realistic fitness functions .however , the results described here already indicate some features that are common for all numbers of patterns investigated here , and some that depend on whether the number of patterns is odd or even. it would be very interesting to establish whether these results generalise to an arbitrary number of patterns .furthermore , the concept of partitioning the set of sites into subsets , which was introduced to analyse the hopfield - type fitness , is very interesting .one could imagine a different interpretation for this by classifying sites according to the selection strength they evolve under .some of the behaviour identified for the hopfield system could also occur in such a setting : at intermediate mutation rates , partially ordered phases could exist , such that sites that evolve under weak selection have passed their error threshold and the population is in a phase that is disordered with respect to these sites , whereas at sites that are subject to strong selection the order is still maintained .the present work has been concerned with the investigation of a deterministic mutation selection model in the sequence space approach , using a time - continuous formulation .important observables in these mutation selection models are the population and ancestral distributions and , and means with respect to these distributions , in particular the population mean fitness and the ancestral mean genotype . in equilibrium , is given by the right perron - frobenius eigenvector of the time - evolution operator , whereas the ancestral distribution is given by the product of both right and left pf eigenvectors and , .types have been modelled as two - state sequences . as mutation model ,the single step mutation model was used , whereas selection was modelled by hopfield - type fitness functions , using the similarity of a sequence to a number of patterns to determine its fitness .this allows for a more rugged fitness landscape , and the complexity of the fitness can be tuned by the number of patterns .the large number of types that arise in the sequence space approach have been lumped into classes of types , labelled by the partial distances introduced in section [ lumping for the hopfield - type fitness ] as a generalisation of the hamming distance . with this , the maximum principle as developed by can be applied to the case of hopfield - type fitness functions as done in section [ the maximum principle ] , see also , treating two- and four - state sequences . 
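for short sequences , the quantities summarised here can also be computed directly without the lumping , which may help connect the definitions : the equilibrium population distribution is the normalised right perron - frobenius eigenvector of the time - evolution operator , and the ancestral distribution is the normalised elementwise product of the left and right eigenvectors . the sketch below is a minimal illustration under assumptions made here ( a simple placeholder fitness and independent per - site mutation at rate mu ) ; it is not the lumped maximum - principle computation used in the text .

```python
import itertools
import numpy as np

def evolution_operator(fitness, mu, seqs):
    """Time-evolution operator: diag(fitness) plus the single-step (per-site)
    mutation generator at rate mu per site (an assumed parametrisation)."""
    hamming = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)
    mutation = mu * (hamming == 1).astype(float)
    np.fill_diagonal(mutation, -mu * seqs.shape[1])
    return np.diag(fitness) + mutation

def population_and_ancestral(evo):
    """Population = right PF eigenvector; ancestral = product of left and right."""
    w_r, right = np.linalg.eig(evo)
    w_l, left = np.linalg.eig(evo.T)
    r = np.abs(right[:, np.argmax(w_r.real)].real)
    l = np.abs(left[:, np.argmax(w_l.real)].real)
    pop = r / r.sum()
    anc = l * r
    return pop, anc / anc.sum()

if __name__ == "__main__":
    n_sites, mu = 8, 0.3
    seqs = np.array(list(itertools.product([0, 1], repeat=n_sites)))
    # Placeholder fitness (permutation-invariant, decreasing in the Hamming
    # distance to the all-zero sequence); a Hopfield-type fitness would instead
    # depend on the overlaps with several patterns.
    dist = seqs.sum(axis=1)
    fitness = (n_sites - dist).astype(float)
    evo = evolution_operator(fitness, mu, seqs)
    pop, anc = population_and_ancestral(evo)
    print("population mean fitness:", float(pop @ fitness))
    print("ancestral mean fitness :", float(anc @ fitness))
    print("ancestral mean distance:", float(anc @ dist))
```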
in section [ error thresholds ] ,the maximum principle derived in section [ the maximum principle ] was used to investigate the phenomenon of the error threshold .these error thresholds can be detected with the maximum principle , because the delocalisation of the population distribution manifests itself as a jump ( or at least an infinite derivative with respect to the mutation rate ) of the ancestral mean genotype , the maximiser .not all fitness functions give rise to error thresholds , and as the error thresholds were first described for a model with highly unrealistic fitness function , it has been argued that they might be an artifact of this rather than a biologically relevant phenomenon .it is therefore clearly necessary to investigate more complex fitness functions with respect to this phenomenon . here , quadratichopfield - type fitness functions with small numbers of patterns have been investigated . for the original hopfield fitness ,the results for all investigated numbers of patterns are identical . however ,if the fitness differs from the original hopfield fitness , different behaviours are observed for different numbers of patterns . in the case of uncorrelated patterns , corresponding to random patterns chosen for infinite sequence length ,the observed features seem to depend on whether the number of patterns is odd or even .because for correlated patterns , only the cases of two and three patterns were investigated , and found to behave differently , it would be interesting to see how these results generalise to a higher number of patterns . in the original hopfield fitness , error thresholdswere observed for all choices of patterns .this is not true for a generalised hopfield - type fitness .for instance , for a hopfield - type fitness with positive epistasis no thresholds were observed , going in line with the results for permutation - invariant fitness . but also for hopfield - type fitness functions with negative epistasis , there are not necessarily any thresholds , if the fitness deviates too much from the original hopfield fitness , challenging the commonly held notion that more complex fitness functions all tend to display error threshold behaviour .the complexity and ruggedness of the original hopfield fitness have been investigated and found to be good candidates for realistic fitness functions . however , these results do not necessarily transfer to the generalised hopfield - type fitness functions , and therefore it would be very useful to study these properties of a generalised hopfield - type fitness functions to analyse which of these factors are responsible for generating the thresholds .it is my pleasure to thank uwe grimm , michael baake , ellen baake and robert bialowons for helpful discussions and uwe grimm for comments on the manuscript .support from the british council and daad under the academic research collaboration programme , project no 1213 , is gratefully acknowledged .baake , e. , gabriel , w. , 2000 .biological evolution through mutation , selection , and drift : an introductory review . in : stauffer , d. ( ed . ) , annual reviews of computational physics vii .world scientific , singapore , pp .203264 .holland , j. j. , domingo , e. , de la torre , j. c. , steinhauer , d. a. , 1990 .mutation frequencies at defined single codon sites in vesicular stromatitis - virus and poliovirus can be increased only slightly by chemical mutagenesis .journal of virology 64 ( 8) , 39603962 .loeb , l. a. , essigmann , j. m. , kazazi , f. , zhang , j. 
, rose , k. d. , mullins , j. i. , 1999 . lethal mutagenesis of hiv with mutagenic nucleoside analogs . proceedings of the national academy of sciences of the usa 96 , 1492 - 1497 . sierra , s. , dávila , m. , lowenstein , p. r. , domingo , e. , 2000 . response of foot - and - mouth disease virus to increased mutagenesis : influence of viral load and fitness in loss of infectivity . journal of virology 74 ( 18 ) , 8316 - 8323 .
a deterministic mutation selection model in the sequence space approach is investigated . genotypes are identified with two - letter sequences . mutation is modelled as a markov process ; fitness functions are of hopfield type , where the fitness of a sequence is determined by the hamming distances to a number of predefined patterns . using a maximum principle for the population mean fitness in equilibrium , the error threshold phenomenon is studied for quadratic hopfield - type fitness functions with small numbers of patterns . different from previous investigations of the hopfield model , the system shows error threshold behaviour not for all fitness functions , but only for certain parameter values .
mutation selection model , hopfield model , error threshold , maximum principle
index coding ( introduced by birk and kol in 1998 ) , a sender broadcasts messages through a noiseless shared channel to multiple receivers , each knowing some messages a priori , which are known as side information .side information occurs frequently in many communication networks , e.g. , in a web browsers cache .knowing the side information of the receivers , the sender can send coded symbols , known as an index code , in such a way that all of the receivers can decode their requested messages using their side information and the received coded symbols .the aim is to find the shortest ( optimal ) index code .how to optimally design an index code for an arbitrary index - coding instance is an open problem to date . in the literature, various approaches have been adopted to solve the index - coding problem .we broadly classify these approaches into four categories : ( i ) numerical , ( ii ) shannon s random coding , ( iii ) interference alignment , and ( iv ) graph - based .numerical approaches include rank minimization over finite fields ( which is np - hard to compute in general ) , and mathematical optimization programming ( semi - definite programming , linear programming , and integer - linear programming ) .these approaches do not provide much intuition on the interaction between the side - information configuration and the index codes .shannon s random coding approaches require infinitely long message packets .interference - alignment approaches treat index coding as an interference - alignment problem , and construct index codes via two alignment techniques , namely one - to - one alignment and subspace alignment .these alignment techniques have no well - defined algorithms to construct index codes for arbitrary index - coding instances .graph - based approaches provide intuition on the side - information configurations and index codes .these approaches represent index - coding instances by graphs , and construct index codes as functions of the graphs .these graph - based schemes provide linear ( scalar and vector ) index codes .although linear index codes are not always optimal , they have simpler encoding and decoding processes .we classify graph - based approaches into two sub - categories : ( i ) maximum distance separable ( mds ) code based interference alignment approaches , and ( ii ) graph structure based approaches .the mds code based interference alignment approaches construct index codes by treating messages not known to a receiver as interference , and aligning all interference with the help of mds codes .these approaches include the partial - clique - cover scheme and its fractional version , the local - chromatic - number scheme and its fractional version , and the partitioned - local - chromatic - number scheme and its fractional version .graph structure based approaches exploit special graph structures , based on messages known to the receivers that can provide savings on index - coding instances .it has been shown that no structure in an acyclic graph can provide any savings .furthermore , if an arc does not belong to any cycle , then removing it does not change the optimal index code . these observations point to the importance of cycles on index coding. 
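as a concrete point of reference for the numerical approaches mentioned above , recall that the optimal scalar linear index code length over a given field equals the minrank of the side - information digraph ( bar - yossef et al . ) , which can be found by exhaustive search for very small instances . the sketch below is a naive brute force over gf(2 ) written here only for illustration ; the example digraph is arbitrary and is not taken from the paper .

```python
import itertools

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bit-mask."""
    basis = []
    for row in rows:
        cur = row
        for b in basis:
            cur = min(cur, cur ^ b)   # clear the leading bit of b if it is set in cur
        if cur:
            basis.append(cur)
    return len(basis)

def minrank_gf2(n, arcs):
    """Brute-force minrank over GF(2) of a unicast side-information digraph.

    Receiver i requests x_i and knows x_j for every arc (i, j).  A matrix fits
    the digraph if its diagonal entries are 1 and entry (i, j) is 0 whenever
    j != i and (i, j) is not an arc; the minimum GF(2) rank over all fitting
    matrices equals the optimal scalar linear index code length over GF(2).
    """
    free = list(arcs)                                   # positions allowed to be 0 or 1
    best = n
    for choice in itertools.product([0, 1], repeat=len(free)):
        rows = [1 << i for i in range(n)]               # the mandatory diagonal ones
        for bit, (i, j) in zip(choice, free):
            if bit:
                rows[i] |= 1 << j
        best = min(best, gf2_rank(rows))
    return best

if __name__ == "__main__":
    # A small, arbitrary 5-vertex example: vertex i knows the messages of the
    # out-neighbours listed below.
    arcs = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2), (4, 0)]
    print("minrank over GF(2):", minrank_gf2(5, arcs))
```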
in the literature ,only disjoint cycles and cliques , a specific combination of overlapping cycles , have been exploited so far .more precisely , disjoint cycles in digraphs are exploited by the cycle - cover scheme and its fractional version , and disjoint cliques in digraphs are exploited by the clique - cover scheme and its fractional version .overlapping cycles can provide more savings than disjoint cycles .we take a clique as an example . in a clique ,every vertex forms a cycle with any other vertex , and we see overlapping of cycles at every vertex . if we consider only disjoint cycles in the clique , we get an index code strictly longer than that by considering the clique . however , not all forms of overlapping cycles are useful , in the sense that they provide more savings than considering only disjoint cycles and cliques . in this work ,we consider a graph structure based approach , and propose structures of overlapping cycles that can be exploited in graphs to provide potentially more savings than the cycle - cover scheme , the clique - cover scheme , and other existing schemes .the proposed structures are called interlinked - cycle ( ) structures , and they generalize cycles and cliques .furthermore , we define a scheme , called the interlinked - cycle cover ( ) scheme , that constructs index codes based on structures . 1 .we propose a new index - coding scheme ( called the scheme ) that generalizes the clique - cover scheme and the cycle - cover scheme .the new scheme constructs scalar linear index codes .we characterize a class of digraphs ( with infinitely many members ) for which the scheme is optimal ( over all linear and non - linear index codes ) .this means scalar linear index codes are optimal for this class of digraphs .3 . for a class of digraphs, we prove that the scheme performs at least as well as the partial - clique - cover scheme .we conjecture that the result is valid in general .furthermore , we present a class of digraphs where the additive gap between these two schemes grows linearly with the number of vertices in the digraph .4 . for a class of digraphs, we prove that the scheme performs at least as well as the fractional - local - chromatic - number scheme .moreover , we present a class of digraphs where the additive gap between these two schemes grows linearly with the number of vertices in the digraph . 5 .we show that the scheme can outperform all of the existing graph - based schemes and the composite - coding scheme in some examples .we extend the scheme to the fractional- scheme .this modified scheme time - shares multiple structures , and constructs vector linear index codes that can be , for certain digraphs , shorter than the scalar linear index codes obtained from the scheme .consider a transmitter that wants to transmit messages to receivers in a _unicast _ message setting , meaning that each message is requested by only one receiver , and each receiver requests only one message . without loss of generality, let each receiver request message , and possess side information .this problem can be described by a digraph , where , the set of vertices in , represents the receivers .an arc exists from vertex to vertex if and only if receiver has packet ( requested by receiver ) as its side information .the set of the side information of a vertex is , where is the out - neighborhood of in .let and all be ordered sets , where the ordering can be arbitrary but fixed . [ index code ] suppose for all and some integer , i.e. 
, each message consists of bits .given an index - coding problem modeled by , an index code ( , ) is defined as follows : 1 .an encoding function for the source , , which maps to a -bit index for some positive integer .2 . a decoding function for every receiver , , that maps the received index code and its side information to the requested message .[ broadcast rate or index codelength ] the broadcast rate of an index code ( ) is the number of transmitted bits per received message bits at every user , or equivalently the number of transmitted coded symbols ( each of bits ) .this is denoted by , and also referred as the normalized length of the index code .[ optimal broadcast rate ] the optimal broadcast rate for a given index coding problem with -bit messages is , and the optimal broadcast rate over all is defined as .[ maximum acyclic induced sub - digraph ] for a digraph , an induced acyclic sub - digraph formed by removing the minimum number of vertices in , is called a maximum acyclic induced sub - digraph ( mais ) .the order of the mais is denoted as .it has been shown that for any digraph and any message length of -bits , lower bounds the optimal broadcast rate , in this sub - section , we describe the clique - cover , the cycle - cover and the partial - clique - cover schemes in detail .these schemes provide some basic intuitions about our proposed scheme .[ clique ] a clique is a sub - digraph where each vertex has an out - going arc to every other vertex in that sub - digraph .[ clique - covering number , the clique - covering number is the minimum number of cliques partitioning a digraph ( over all partitions ) such that if each clique is denoted as for , then , and ( here a vertex is a clique of size one ) .the clique - covering number is equal to the chromatic number of the underlying undirected graph of the complement digraph , i.e. , , where is the underlying undirected graph of , is the complement digraph of , and denotes the chromatic number of .[ clique - cover scheme ] the clique - cover scheme finds a set of disjoint cliques that provides the clique - covering number , and constructs an index code in which the coded symbol for each of the disjoint cliques is the bit - wise xor of messages requested by all of the vertices in that clique .the clique - cover scheme achieves the following rate : the optimal broadcast rate of an index coding instance is upper bounded by the clique cover number , i.e. , .[ path and cycle ] a _ path _consists a sequence of distinct ( except possibly the first and last ) vertices , say , and an arc for each consecutive pair of vertices for all .we represent a path from the vertex to the vertex in a digraph as . here is the _ first vertex _ and is the _last vertex_. a path with the same first and last vertices is a _ cycle_. [ cycle - covering number , the difference between the total number of vertices in and the maximum number of disjoint cycles in is the cycle - covering number . [ cycle - cover scheme ]the cycle - cover scheme finds a set of disjoint cycles in that provides the cycle - covering number , and constructs an index code that has ( i ) coded symbols for each disjoint cycle ( for a cycle , a set of coded symbols are ) , and ( ii ) uncoded messages which are requested by those vertices not included in any of the disjoint cycles in .the cycle - cover scheme achieves the following rate : the optimal broadcast rate of an index - coding problem is upper bounded by the cycle - covering number , i.e. 
, . a sub - digraph is a -partial clique , where and is the minimum out - degree of . [ partialclique1 ] if , for some positive integer , partition a digraph such that and , and , then $= \sum_{i=1}^{m } ( \kappa(d_i)+1)$ , and the partial - clique number of the digraph is , where the minimum is taken over all partitions . the partial - clique - cover scheme finds a set of disjoint partial cliques in that provides the partial - clique number , and constructs an index code that has ( i ) coded symbols for each disjoint -partial clique with ( a partial clique uses mds codes to generate coded symbols ) , and ( ii ) an uncoded message for each disjoint -partial clique with . the partial - clique - cover scheme achieves the following rate : [ prepbrik ] the optimal broadcast rate of an index coding instance is upper bounded by the partial - clique number , i.e. , the partial - clique - cover scheme performs at least as well as the cycle - cover and the clique - cover schemes , i.e. , this is because the partial - clique - cover scheme includes the cycle - cover scheme or the clique - cover scheme as a special case . by definition , a clique is a -partial clique , and a cycle with vertices is a -partial clique . despite the fact that the partial - clique - cover scheme uses mds codes , which in general require a sufficiently large message length , to construct index codes , one can find mds codes for any cycle and any clique for any . the clique - cover , the cycle - cover and the partial - clique - cover schemes provide scalar linear index codes . we can also construct vector linear index codes by time - sharing all possible cliques , cycles and partial cliques in their respective schemes , and these are called the fractional versions of those schemes . the fractional version can strictly decrease the broadcast rates ( over the non - fractional version ) for some digraphs , e.g. , a 5-cycle . we present an example that illustrates the importance of overlapping cycles on index coding . consider the digraph in fig . [ figmotivationa ] . in , the cycles and overlap at vertex , and some cycles similarly overlap at vertices and . note that . index codelengths for by existing graph - based schemes ( some schemes require a sufficiently large ) are depicted in table [ table:1 ] . ( table caption : index codelengths for the digraph in fig . [ figmotivationa ] from existing schemes . ) the index codelengths provided by the aforementioned existing schemes are strictly greater than , except for the composite - coding scheme . there exist digraphs where the scheme outperforms the composite - coding scheme . for example , the digraph shown in fig . [ 4a ] is a - structure , and it is denoted . an index code from the scheme is , which is of length . from theorem [ theorem3 ] , . for , the index codelength provided by the composite - coding scheme is 3.5 , which is greater than . in this section , firstly , we extend the scheme using time - sharing to code over overlapping structures in a digraph , and to obtain vector linear index codes . secondly , we extend the definition of the structure in such a way that we can extend the scheme to code on extended structures . let denote a sub - digraph induced by a subset of the vertices in . we define a function . the index codelength from the fractional scheme is represented as , and given by the following linear program . here is the power set of .
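written out , the covering linear program above can be handed to any lp solver once the per - sub - digraph codelength is available . the sketch below is a generic version of such a time - sharing lp , with the cost of each subset passed in as a function ; in the fractional scheme that cost would be the codelength of the induced sub - digraph , while with unit cost on cliques the same lp computes the fractional clique - cover number . the subset family , the cost function and the 5 - vertex illustration are all choices made here , not taken from the paper .

```python
import numpy as np
from scipy.optimize import linprog

def fractional_cover_rate(n, subsets, cost):
    """Generic time-sharing (fractional covering) LP:

        minimise    sum_S cost(S) * rho_S
        subject to  sum_{S containing v} rho_S >= 1   for every vertex v,
                    0 <= rho_S <= 1.
    """
    subsets = [tuple(s) for s in subsets]
    c = np.array([cost(s) for s in subsets], dtype=float)
    a_ub = np.zeros((n, len(subsets)))          # coverage written as -sum rho <= -1
    for k, s in enumerate(subsets):
        for v in s:
            a_ub[v, k] = -1.0
    res = linprog(c, A_ub=a_ub, b_ub=-np.ones(n),
                  bounds=[(0.0, 1.0)] * len(subsets), method="highs")
    return res.fun

if __name__ == "__main__":
    # Illustration: a side-information pattern whose 2-vertex cliques are the
    # consecutive pairs of an undirected 5-cycle, each with unit cost.  The LP
    # returns 2.5, whereas any integral clique cover of this instance needs 3.
    n = 5
    cliques = [(i, (i + 1) % n) for i in range(n)]
    print(fractional_cover_rate(n, cliques, cost=lambda s: 1.0))
```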
in the fractional scheme ,each sub - digraph induced by the subset in is assigned a weight $ ] such that the total weight of each message over all of the subsets it belongs to is at least one . in this scheme , is the minimum sum of weights .the scheme is a special case of the fractional scheme where , so .we start with an example that provides an insight to the extension of the structure .consider a digraph that has three cliques , and each of size two .let and be vertex sets of those three cliques respectively .furthermore , for the clique pair ( ) , all vertices in have out - going arcs to all vertices in , and the result follows similarly for clique pairs ( ) and ( ) .this digraph is depicted in fig .[ fig2a ] .one can verify that is a - structure with an inner vertex set .we can not get a - structure in .suppose that we pick , there is no path from vertices and to vertex without passing through the inner vertex or . by symmetry ,choosing any 5 vertices as an inner vertex set will have the same issue .now the scheme gives an index code of length .however , from the coding point of view of an structure , vertices and need not be separated during encoding because they have the same arc sets and , and have arcs to each other .this means we can get another index code by removing , and replacing with , i.e. , of length two . here due to the special connectivity of clique , we have treated it as a single vertex , and used in the code construction . in light of this, we will now extend the definition of an structure to capture cliques with such special configurations . to achieve thiswe define a term called a super - vertex .[ super - vertex ] in a digraph , let be a vertex set where ( i ) all vertices in have arcs to each other , i.e. , they form a clique , and ( ii ) every vertex has the same and the same .such a group of vertices ( all ) is called a super - vertex and denoted as .now we define extended structures and an index - coding scheme for them .[ extended structure ] the extended ( ) structure is defined as an structure that allows super - vertices in its non - inner vertex set .-15pt [ extended scheme ] for any digraph , the extended ( ) scheme finds a set of disjoint structures covering .it then codes each of these structures using the code construction described in the following : * each super - vertex ( non - inner vertices ) is treated as a single vertex during the construction and the encoding process of the structure .* we consider the message requested by the super - vertex to be the xor of all messages requested by the vertices forming the super - vertex .* each of these structures are treated as an structure , and an index code is constructed using the scheme .along with super - vertices , and taking their definition into account , one can prove the validity of the code constructed by the scheme similar to the proof of proposition [ propositionvalidic ] .denote the length of the index code produced by the scheme by . for a digraph ,the index codelength obtained from the scheme is a better upper bound to the optimal broadcast rate than the codelength obtained from the scheme , i.e. , .it follows from the definition of structures that include structures as special cases .graph - based approaches have been shown to be useful for index coding , in which cycles play an important role . prior to this work ,disjoint cycles and disjoint cliques ( including the timeshared version ) were used to construct index codes . 
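the two prior building blocks mentioned here are simple enough to check directly in code . the sketch below verifies one standard choice of xor encodings : a clique is served by a single coded symbol ( the xor of all its messages ) , and a cycle on n vertices by the xor of each consecutive message pair , giving n - 1 symbols and hence a saving of one . the message values and the decoding loop are only for verification and are not taken from the paper .

```python
import random

def encode_clique(msgs):
    """Clique building block: one coded symbol, the XOR of all messages."""
    code = 0
    for m in msgs:
        code ^= m
    return [code]

def encode_cycle(msgs):
    """Cycle building block for v1 -> v2 -> ... -> vn -> v1:
    the n-1 symbols x1^x2, x2^x3, ..., x_{n-1}^x_n (one unit of saving)."""
    return [msgs[k] ^ msgs[k + 1] for k in range(len(msgs) - 1)]

def decode_cycle(code, i, known_next):
    """Receiver i of the cycle knows x_{i+1} (its out-neighbour) and recovers x_i."""
    n = len(code) + 1
    if i < n - 1:
        return code[i] ^ known_next        # (x_i ^ x_{i+1}) ^ x_{i+1} = x_i
    acc = known_next                       # the last receiver knows x_1
    for c in code:
        acc ^= c                           # x_1 ^ (x_1^x_2) ^ ... ^ (x_{n-1}^x_n) = x_n
    return acc

if __name__ == "__main__":
    msgs = [random.randrange(256) for _ in range(5)]     # five 8-bit messages
    code = encode_cycle(msgs)
    assert all(decode_cycle(code, i, msgs[(i + 1) % 5]) == msgs[i] for i in range(5))
    # A clique receiver XORs the single symbol with the messages it already knows.
    assert len(encode_clique(msgs)) == 1
    print("cycle: 5 messages, %d coded symbols; clique: 1 coded symbol" % len(code))
```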
in this work, we attempted to extend the role of cycles on index coding .we took a step further and showed the benefits of coding on interlinked - cycle structures ( one form of overlapping cycles ) .our proposed scheme generalizes coding on disjoint cycles and disjoint cliques . by identifying a useful interlinked - cycle structure, we were able to characterize a class of infinitely many graphs where scalar linear index codes are optimal .for some classes of digraphs , we proved that the scheme performs at least as well as some existing schemes such as the partial - clique - cover scheme and the fractional - local - chromatic - number scheme .furthermore , for a class of digraphs , we proved that the partial - clique - cover scheme and the scheme have linearly - growing additive gap in index codelength with the number of vertices in the digraphs .we proved a similar result for the fractional - local - chromatic - number scheme and the scheme for another class of digraphs .we extended the scheme , to allow time - sharing over all possible structures in digraphs .we also extended the structure to allow super vertices as its non - inner vertices .however , it remains an open problem to identify cycles overlapping in other useful ways .a - structure has some properties captured in the following lemmas , which will be used to prove proposition [ propositionvalidic ] . herewe consider and as any two distinct directed rooted trees present in with the root vertices and respectively .[ lemma1 ] for any vertex , and , the set of leaf vertices that fan out from the common vertex in each tree is a subset of . in a tree ( see fig . [ fig3 ] ) , for any vertex and , let be a set of leaf vertices that fan out from vertex .if vertex , then there exists a path from to in .however , in , there is a path from to .thus in the sub - digraph , a path present in any also present in . ] , we obtain a path from to ( via ) and vice versa ( via ) . as a result , a cycle including non - inner vertices and only one inner vertex ( i.e. , ) exists .this cycle is an i - cycle , and condition 1 ( i.e. , no i - cycle ) for is violated .hence , . in other words , , . and with root vertices and respectively , and a non inner vertex in common .here we have used solid arrow to indicate an arc , and dashed arrow to indicate a path.,height=196 ] [ lemma2 ] for any vertex , and , the out - neighborhood of vertex is same in both trees , i.e. , . here the proof is done by contradiction .let us suppose that .for this proof we refer to fig .this proof has two parts . in the first part, we prove that , and then prove that in the second part .( part 1 ) suppose that . from lemma [ lemma1 ] , is a subset of .now pick a vertex that belongs to such that but ( such exists since we suppose that , and we swap the indices and if ) . in tree , there exists a directed path from vertex , which includes to the leaf vertex .let this path be .similarly , in tree , there exists a directed path from vertex , which does nt include ( since ) , and ends at the leaf vertex .let this path be . in the digraph , we can also obtain a directed path from which passes through ( via ) , and ends at the leaf vertex ( via ) .let this path be .the paths and are different , which indicates the existence of multiple i - paths from to in , this violates the condition 2 for .consequently , .( part 2 ) now we pick a vertex such that , without loss of generality , but ( such exists since we assumed that , and we swap the indices and if ) . 
furthermore , we have two cases for , which are ( case 1 ) , and ( case 2 ) .case 1 is addressed in the first part of this proof . on the other hand , for case 2 ,we pick a leaf vertex such that there exists a path ( see fig . [ fig3 ] ) that starts from followed by , and ends at , i.e. , exists in .a path exists in .thus a path exists in . from the first part of the proof, we have , so . now in , there exists a path from to , which includes followed by a vertex such that and ( as ) , and the path ends at , i.e. , which is different from .note that in trees and , only the root and the leaf vertices are from , so multiple i - paths are observed at from .this violates condition 2 for .consequently , .[ lemma3 ] if a vertex such that , then its out - neighborhood is the same in and in , i.e. , . for any from lemma [ lemma2 ] , for all . since , vertex must have the same out - neighborhood in as well .[ proof of proposition [ propositionvalidic ] ] from , all which are non - inner vertices , can decode their requested messages .this is because the coded symbol is the bitwise xor of the messages requested by and its all out - neighborhood vertices , and any knows messages requested by all of its out - neighborhood vertices as side information . for an inner vertex , rather than analyzing the sub - digraph , we will analyze its tree , and show that it can decode its message from the relevant symbols in .we are able to consider only the tree due to the lemma [ lemma3 ] .now let us take any tree .assume that it has a height where .the vertices in are at various depths , i.e. , from the root vertex .the root vertex has depth zero , and any vertex at depth equal to the height of the tree is a leaf vertex .firstly , in , we compute the bitwise xor among coded symbols of all non - leaf vertices at depth greater than zero , i.e. , .however , in , the message requested by a non - leaf vertex , say , at a depth strictly greater than one , appears exactly twice in ; 1 .once in , where is parent of in tree , and 2 .once in . refer to for mathematical details .thus they cancel out each other while computing in the tree .hence , in the tree , the resultant expression is the bitwise xor of 1 .messages requested by all non - leaf vertices at depth one , and 2 .messages requested by all leaf vertices at depth strictly greater than one .refer to for mathematical details .secondly , in , we compute ( refer to for mathematical details ) which yields the bitwise xor of 1 .the messages requested by all non - leaf vertices at depth one , which are out - neighbors of , 2 .the messages requested by all leaf vertices at depth one , which are also out - neighbors of , and 3 .the message requested by , i.e. , .this is because the message requested by each leaf vertex at depth strictly greater than one in the tree is present in both the resultant terms of and in , thereby canceling out itself in .hence , yields the bitwise xor of and . 
as knows all as side - information , any inner vertex can decode its required message from . the mathematical computations of and in the tree are as follows : where , , and here is obtained because each has only one parent in , and we exclude all whose parent is , and is the bitwise xor of messages requested by all of the leaf vertices not in the out - neighborhood of . if we expand as per the group of vertices according to their depth , we get $\cdots = \bigoplus_{j\in n^+_{t_i}(i)\setminus v_{\mathrm{i}}} x_{j}$ . note that the intermediate terms in cancel out . now substituting of and of in , we get . let be a - structure having an inner - vertex set . now we prove that any non - inner vertices of belonging to an i - path could not contribute to form a cycle including only non - inner vertices of . to show this , we start by picking two inner vertices and , and the i - path . we assume that the i - path includes non - inner vertices , i.e. , . further , let be denoted by . if , then for . now for , we have the following in :
* , and can not contain any vertex in of because of the following :
  1 . if or contains any vertex , then ( part of ) and ( part of or ) form an i - cycle at . the existence of the i - cycle in contradicts the definition of an structure .
  2 . if contains any vertex , then ( part of ) and ( part of ) form an i - cycle at . the existence of the i - cycle in contradicts the definition of an structure .
* one can verify that only the remaining i - paths and can contain vertices in without forming an i - cycle in . now for and , these two i - paths must form a directed rooted tree with the root vertex ( by the definition of an structure ) . thus these two i - paths alone could not form a cycle including only non - inner vertices . furthermore , if and contain some common non - inner vertices , then let a vertex , which is in , be the vertex from where branches to and ( refer to fig . [ abc1 ] ) .
* if they contain some common non - inner vertices , then let be the first vertex where and meet each other ( refer to fig . [ abc2 ] ) . considering only these two i - paths , a cycle including only non - inner vertices can form only if a part of contributes to form a path from to for some . this path is not possible because multiple i - paths would be created from or to in ( contradiction of the definition of an structure ) . in fact , both the i - paths have the same destination , i.e. , , so there must be only one path from to common in both of them ( to avoid any multiple i - paths ) .
for and , if these i - paths contain a non - inner vertex in common , then ( part of ) and ( part of ) forms an i - cycle at ( contradiction to the definition of an structure ) . thus and can not contain any vertex in common except . consequently , considering , and ( along with the assumptions that there are some common non - inner vertices in ( i ) and , and ( ii ) and ) , we end up with the structure as shown in fig . [ abc3 ] , where and . this structure contradicts one of the necessary conditions for any vertex in to contribute to form a cycle including only non - inner vertices in ( the condition involves ( let this vertex be ) and an out - going path from a vertex in ( let this vertex be ) , for some , where ; one can easily verify this necessary condition ) . thus there is no vertex in to contribute to form a cycle including only non - inner vertices in .
due to symmetry ,the result ( any non - inner vertices of belonging to could not contribute to form a cycle including only non - inner vertices of ) implies similarly for non - inner vertices belonging to any of the i - paths in .therefore , there is no cycle among the non - inner vertices in .we first prove one lemma that will help to prove the optimality of the scheme .[ proof of theorem [ theorem3 ] ] we will show that the mais lower bound is tight for all .we denote the digraph which is also an structure by , and consider that it has vertices . for , the digraph contains only one vertex , and . for , we have the following : ( case 1 ) from lemma [ lemma6 ] , any cycle must include at least two inner vertices , or no inner vertex , thus if we remove inner vertices , then the digraph becomes acyclic .thus from theorem [ theorem1 ] , we get it follows from , and that . thus .( case 2 ) a can be viewed in two ways .the first way is considering the whole as a - structure . the second way is considering induced sub - digraphs of which consist of 1 . disjoint cycles together consisting of a total of ( ) non - inner vertices ( if or , then , which is case 1 ) , 2 . disjoint structures each with vertices and inner vertices in such a way that , we consider that each structure is also disjoint from all cycles among non - inner vertices , and 3 .total remaining of non - inner vertices ( which are not included in cycles , or the structures ) .now we will show that both ways of looking at are equivalent in the sense of the index codelength generated from our proposed scheme , and both equal to .we prefer the second way of viewing for our proof since it is easier to find the mais lower bound . for the partitioned ( looking at in the second way ) ,the total number of coded symbols is the summation of the coded symbols for ( i ) each of the disjoint cycles ( each cycle has saving equal to one ) , ( ii ) each of the disjoint structures ( each of the structures has savings equal to ) , and ( iii ) uncoded symbols for the remaining non - inner vertices , i.e. , from and , , thus from both perspectives the code length is the same . now for ( looking at in our second way ) , if we remove one vertex from each of the cycles among non - inner vertices ( removal in total ) , and remove vertices from each of the structures ( ) , i.e. , total removal of , then the digraph becomes acyclic .thus it follows from , , and that .thus . in this section , firstly , for every minimal partial clique with , we prove that there exists an structure within it such that both of the schemes ( partial - clique - cover and ) provide the same savings .secondly , we conjecture that the result is valid in general ( this is the main reason that results the conjecture [ conj1 ] ) . the summary is depicted in table [ table1 ] . finally , we prove the theorem .[ lemmaa1 ] a minimal partial clique with vertices and the minimum out - degree is a cycle , and both the partial - clique - cover scheme and the scheme achieve savings of one .the properties of with and are as follows : in , ( i ) as , the partial - clique - cover scheme provides the savings of one by proposition [ lemm1 ] , ( ii ) further partitioning could not provide any savings because it is a minimal - partial - clique , and ( iii ) there exists at least a cycle because an acyclic digraph should have a sink vertex ( a vertex with out - degree zero ) , and that does not exist since .thus , which is a minimal partial clique , having all of the above properties is a cycle . 
from theorem[ theorem2 ] , a cycle is a - structure , and the scheme provides savings of one by . note that such with is also a clique of order two .[ lemmaa2 ] a minimal partial clique with vertices and the minimum out - degree , is a clique of order , and has the savings of from both the partial - clique - cover scheme and the scheme . in ,if , then each vertex has an out - going arc to every other vertex , which is a clique by definition . thus the partial - clique - cover scheme provides savings of by proposition [ lemm1 ] . from theorem [ theorem2 ] ,a clique is a - structure , and the scheme provides savings of by .[ lemmaa3 ] in any minimal partial clique with vertices and , there exists a - structure , and has the savings of two from both the partial - clique - cover scheme and the scheme .[ def1 ] a path in the forward direction indicates a path from a vertex to any vertex such that , and a path in the reverse direction indicates a path from a vertex to any vertex such that . for simplicity , we refer a _ forward path _ to a path in the forward direction , and a _ reverse path _ to a path in the reverse direction .[ farthest path ] [ def2 ] consider a sequence of vertices labeled in an increasing order such that there are multiple paths from a vertex in the sequence to other vertices in the sequence . among those paths from , the path to the vertex with the largest labelis called the farthest path from vertex . for an example , let be a sequence of vertices in an increasing order , and has paths to vertices .the path from from to is the farthest path from . *a minimal partial clique with number of vertices and has at least one cycle by lemma [ lemmaa4 ] . without loss of generality ,denote the cycle by , and its vertices by . for simplicity ,vertices of are labeled in an increasing order as shown in fig .* for , denotes a path from to including only vertices and arcs of , and denotes a path from to consisting of arcs and vertices outside except vertices and .* for the vertices in ( see fig . [ 2 ] ) , vertex ( the first vertex of ) has only forward paths to other vertices of ( definition [ def1 ] ) , and vertex ( the last vertex of ) has only reverse paths to other vertices of ( definition [ def1 ] ) .let be the first vertex in the vertex sequence that has a reverse path to any vertex in , i.e. , there exist a such that and all vertices in have no reverse path . 1 .there exists at least one cycle , , and 2 . for a vertex ,one of its out - going arcs ( beside the arc in ) contribute to form a path that always returns to some vertex in .in with vertices and , there exists at least one cycle , denoted . this is because if any is acyclic , then it must contain at least one sink vertex , and there can not be sink vertices in since .assume that vertices such that there is an arc from to in .now in , has out - degree , and the next out - going arc of ( other than the arc in ) consider the terminal vertex of the path to be such that .vertex can not be a sink vertex because , and it must contribute to form a path . 
since there are no disjoint cycles ( all cycles are connected , otherwise any two disjoint cycles in provides savings of two and such is not a minimal partial clique with ) , must have a return path to a vertex in .we had considered that vertex of is the first vertex having a reverse path .now if any path meets path at some vertices , then there exists a reverse path from vertex to .this is not possible because such would have been the first vertex in the vertex sequence that has a reverse path ( contradiction ) .if the farthest path from , , meets any path at some vertices , then there exists .this is not possible ; otherwise , path would have been the farthest path ( contradiction ) . if or , then we get a figure - of - eight structure at .two closed paths at will be ( i ) , and ( ii ) . if , then let be the common vertex which is nearest to in these two paths and .now we get a figure - of - eight structure at .two closed paths at will be ( i ) ( part of ) , ( part of ) , and ( ii ) . these closed paths are vertex - disjoint except . [ proof of lemma [ lemmaa3 ] ] the proof is done by the detailed structural analysis of the minimal partial clique .we divide the proof into two parts : in part - i , we prove that has a figure - of - eight structure ( see definition [ fig8 ] ) at a vertex , and in part - ii , we prove there exist a - structure within having a figure - of - eight structure .( part - i ) consider the cycle in has vertices .now based on , there are two cases in , and those are ( i ) with , i.e. , is a clique of size two , and ( ii ) with .we assume and are the two distinct vertices belonging to in such a way that there exist a directed arc from to . in , andthe next out - going arc of goes out of , and contributes to form a path , say that returns to some vertex in by lemma [ lemmaa4 ] .( case ( i ) ) the cycle is a clique of size two . therefore , the path returns to either at or .now we get a figure - of - eight structure at if the path returns to , otherwise it returns to . for the latter case, we get a new cycle that includes the path ( which includes , some vertices other than ) , vertex and the arc from to ( i.e. , arc of the cycle ) .the new cycle has more than two vertices , thus this ends up with case ( ii ) .on the basis where path returns , we have the following sub - cases : returns to ( ii - a ) ( see in fig . [ 1a ] ) providing a figure - of - eight structure , ( ii - b ) ( see in fig .[ 1c ] and [ 1d ] ) , and ( ii - c ) some vertex ( see in fig . [ 1b ] ) . for sub - case ( ii - b ), we have a vertex such that there is a direct arc from to .since there are no disjoint cycles , the next out - going arc of ( besides the arc from to ) contributes to form a path to a vertex in .if has path to , then we get a figure - of - eight at ( shown in fig .[ 1c ] ) , otherwise we have the following : 1 . has a path to some , so this ends up with sub - case ( ii - c ) ( shown in fig . [ 1b ] ) , or 2 . has a path to , but for this case , we have another path ( shown in fig . [ 1d ] ) from to ( beside the direct arc from to , and path including the direct arc from to ) , and in this path , we can repeat sub - case ( ii - b ) by considering the predecessor of in place of . for sub - case( ii - c ) , note that we have a path , which is vertex - disjoint from except the first and the last vertices , which starts from any vertex and returns to some vertex .we start analyzing the sub - case ( ii - c ) in considering vertices in . for the vertices in ( see fig . 
[ 2 ] ) , in a sequential order starting from vertex , we track their out - going paths ( which may include vertices not in ) to other vertices in . for this sub - case , we know that there exist a such that and all vertices in have only forward paths .now we consider out - going paths from ( the paths are always forward paths ) , and get the following subsub - cases : 1 .if the farthest path from is for some , then there exists a figure - of - eight at .two closed paths at will be ( i ) , , and ( ii ) , ( refer fig .if and are vertex - disjoint except , then by recalling the definition of and for any , one can show that the two closed paths are vertex - disjoint except .otherwise , and are not vertex - disjoint except , and by lemma [ prop4 ] , one can find a figure - of - eight at .if the farthest path from is for , then there exists a figure - of - eight at .two closed paths at will be ( i ) , , and ( ii ) , for ( one of the forward paths of ) , ( refer fig . [ 2b ] ) .using lemmas [ prop2 ] and [ prop3 ] , and recalling the definition of and for any , one can show that these two closed paths are vertex - disjoint except .3 . otherwise , the farthest path from is for some . starting from , we have at least two forward paths to some vertices in such that these paths are vertex - disjoint except .we assume these paths are path - a ( the path from to ) and path - b ( the path from to ) .paths and will be path - a and path - b respectively ( refer fig . [ 3 ] ) . now considering the out - going paths from into account ( the paths are always forward paths ) , we get the following subsubsub - cases : 1 .if the farthest path from is for some , then there exists a figure - of - eight at .two closed paths at will be ( i ) path - a , , , and ( ii ) path - b , , ( refer fig . [ 3a ] ) .using lemmas [ prop2 ] and [ prop3 ] , and recalling the definition of and for any , one can show that these two closed paths are vertex - disjoint except .if the farthest path from is for , then there exists a figure - of - eight at .two closed paths at are ( i ) path - a , , , and ( ii ) path - b , , for ( one of the forward paths of ) , ( refer fig . [ 3b ] ) .if and are vertex - disjoint except , then by using lemmas [ prop2 ] and [ prop3 ] , and recalling the definition of and for any , one can show that the two closed paths are vertex - disjoint except .otherwise , and are not vertex - disjoint except , and by lemma [ prop4 ] , one can find a figure - of - eight at .3 . otherwise , the farthest path from is for some .the union of path - b and give new path - a and we update .similarly , the union of path - a and give new path - b and we update ( refer fig .[ 3c ] ) . considering new path - a , new path - b , and updating , we repeat the subsubsub - cases of subsub - case 3 . during this iteration ,if we get subsubsub - case ( a ) or ( b ) , then there is a figure - of - eight . 
otherwise , we have subsubsub - case ( c ) , where strictly increase to a value up to .however , when reaches , we must have either subsubsub - case ( a ) or ( b ) .now for the figure - of - eight structure at in ( see fig .[ fig4 ] ) , consider a vertex in a cycle ( indicated in blue color ) , and a vertex in another cycle ( indicated in red color ) in such a way that both and have direct arcs going to .one of the out - going arcs of must contribute to form a path , say , returning to a vertex in .this is because all other cases are not possible ; * can not return to any vertex in , otherwise a disjoint cycle to will be created .* can return to , but for this case , we have another path from to ( beside path including direct arc from to ) , and in this path , we can repeat the case by considering the predecessor of in place of .thus this case ends up with the same consideration as of the cycle with the vertex having direct arc to the vertex .* must return to some vertex in and due to lemma [ lemmaa4 ] .now in a similarly way , one of the out - going arcs of must also contribute a path to a vertex in other than .rearrange to get the structure in fig .now consider .we can see that any vertex in has only one i - path each to other two vertices in with no i - cycles .thus a - exists in ( for an example , see fig . [ 123 ] ) .consecutively , the scheme provides savings of two by .again , for , the partial - clique - cover scheme provides savings of two by proposition [ lemm1 ] . for all minimal partial cliques that we have analyzed , there exists an ( )- structure within each minimal partial clique having the minimum out - degree .we conjecture that this holds in general ( the following conjecture is not a part of the proof of the theorem [ theorem4 ] , but this provides an intuition about the conjecture [ conj1 ] ) . [ proof of theorem [ theorem4 ] ] given a digraph , if minimal partial cliques , partitioning the digraph to provide partial - clique number , have , then using the scheme on each of the minimal partial cliques , we achieve ( by using lemmas [ lemmaa1 ] , [ lemmaa2 ] and [ lemmaa3 ] , and considering zero savings for from any schemes ) .the scheme may produce a shorter index code by considering a different partitioning of the digraph ( for an example , see fig .[ ex1b ] and [ ex1a ] ) .[ eg3 ] this example illustrates that for the class of digraphs stated in theorem [ theorem4 ] , the scheme performs as least as well as the partial - clique - cover scheme .consider two digraphs that are depicted in fig .the digraph in fig .[ ex1b ] has more savings from the scheme than that obtained from the partial - clique - cover scheme , and the digraph in fig .[ ex1a ] has equal savings from both schemes .[ prop6 ] for any digraph of the class a , the index codelength obtained from the partial - clique - cover scheme and the scheme are and respectively , i.e. , and .if any minimal partial clique in of the class a includes a vertex for any , then .this is because the out - degree of any vertex in is two , i.e. , by construction .now any minimal partial clique without any vertex in has .this is because for any vertex in , only one out - neighbor is in this set , and the rest are in .so , any minimal partial clique in the digraph can have . now by proposition [ lemm1 ] , we know that . 
in , we try to construct a minimal partial clique that has ( we do not need to consider having because of lemma [ lemma10 ] ) by starting from with only one vertex and then adding vertices into its vertex set in such a way that we can obtain .we start from any vertex for an ( we will show a similar result if we start from some vertex ) .let be the vertex set of .now we include both of the two out - neighbor vertices of , i.e. , vertices in in . if we include only one out - neighbor vertex of in , then resulting .the new vertex - induced sub - digraph has , and because . to get a minimum out - degree of two, we must include another vertex in from . by symmetry, it does not make any difference which vertex to add .we arbitrarily choose , for some .now the new vertex - induced sub - digraph has for a .here because .we further include all vertices in in because , and if we include only one of its out - neighbor vertices , then . the new has and .further , including any vertex set in could not increase because any must have ( lemma [ lemma10 ] ) .if we start building with some , we will end up with a sub - digraph ( i ) that includes for some where , or ( ii ) that includes for some where and .altogether , any minimal partial clique with must contain for some where .however , is not the minimal because by simply considering two partial cliques among the vertices in , i.e. , partial cliques with vertex sets and , we get savings of two ( one in each ) . for a minimal partial clique in of the classa , lemma [ lemma10 ] provides , and lemma [ lemma11 ] proves that there exists no minimal partial clique with . thus the minimal partial cliques in a digraph of the class a are only cycles . for this case ,the cycle - cover scheme and the partial - clique - cover scheme for are the same . since any cycle must include two vertices from , . the minimal partial cliques with vertex sets . consider a vertex set with number of vertices . in the digraph , any vertex has only one path each to all other vertices in such that only the first and the last vertices of each of these paths belong to ( i.e. , i - path ) .moreover , there is no i - cycle at any vertex in .thus this forms an structure with inner - vertex set .now from , . thus .a. blasiak , r. d. kleinberg , and e. lubetzky , `` broadcasting with side information : bounding and approximating the broadcast rate , '' _ ieee trans .inf . theory _ ,59 , no . 9 , pp . 58115823 , sept .2013 .s. e. rouayheb , a. sprintson , and c. georghiades , `` on the index coding problem and its relation to network coding and matroid theory , '' _ ieee trans .inf . theory _ ,56 , no .3187 3195 , jul . | we consider a graphical approach to index coding . while cycles have been shown to provide coding gain , only disjoint cycles and cliques ( a specific type of overlapping cycles ) have been exploited in existing literature . in this paper , we define a more general form of overlapping cycles , called the interlinked - cycle ( ) structure , that generalizes cycles and cliques . we propose a scheme , called the interlinked - cycle - cover ( ) scheme , that leverages structures in digraphs to construct scalar linear index codes . we characterize a class of infinitely many digraphs where our proposed scheme is optimal over all linear and non - linear index codes . consequently , for this class of digraphs , we indirectly prove that scalar linear index codes are optimal . 
furthermore , we show that the scheme can outperform all existing graph - based schemes ( including partial - clique - cover and fractional - local - chromatic number schemes ) , and a random - coding scheme ( namely , composite coding ) for certain graphs . index coding problem , unicast , linear index codes , interlinked - cycle cover , optimal broadcast rate . |
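as an illustration of the figure - of - eight test that the proof above relies on ( two closed paths through a vertex that share no vertex other than that vertex ) , a brute - force check for small digraphs can be sketched as follows ; the toy digraph , the helper names and the set - based representation are illustrative choices , not constructions from the paper :

from itertools import combinations

def cycles_through(adj, v):
    """All simple closed paths through v, returned as vertex sets."""
    found = []
    def dfs(node, path):
        for nxt in adj.get(node, ()):
            if nxt == v and len(path) > 1:
                found.append(frozenset(path))   # closed a path back to v
            elif nxt not in path:
                dfs(nxt, path + [nxt])
    dfs(v, [v])
    return found

def has_figure_of_eight(adj, v):
    """True if some pair of closed paths through v is vertex-disjoint except at v."""
    return any((a & b) == {v} for a, b in combinations(cycles_through(adj, v), 2))

# toy digraph: two directed triangles glued at vertex 0
adj = {0: [1, 3], 1: [2], 2: [0], 3: [4], 4: [0]}
print(has_figure_of_eight(adj, 0))   # True

here the two directed triangles glued at vertex 0 form the simplest figure - of - eight ; the enumeration is exponential in general and is meant only for small examples .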
given data , where is the response and is the -dimensional covariate , the goal in many analyses is to approximate the unknown function by minimizing a specified loss function [ a common choice is -loss , . in trying to estimate ,one strategy is to make use of a large system of possibly redundant functions .if is rich enough , then it is reasonable to expect to be well approximated by an additive expansion of the form where are base learners parameterized by . to estimate , a joint multivariable optimization over may be used .but such an optimization may be computationally slow or even infeasible for large dictionaries .overfitting may also result . to circumvent this problem , iterative descentalgorithms are often used .one popular method is the gradient descent algorithm described by , closely related to the method of `` matching pursuit '' used in the signal processing literature [ ] .this algorithm is applicable to a wide range of problems and loss functions , and is now widely perceived to be a generic form of boosting . for the step , ,one solves where ^ 2\ ] ] identifies the closest base learner to the gradient in -distance , where is the gradient evaluated at the current value , and is defined by {f_{m-1}(\mathbf{x}_i ) } = -l'(y_i , f_{m-1}(\mathbf{x}_i)).\ ] ] the update for the predictor of is where is a regularization ( learning ) parameter . in this paper, we study friedman s algorithm under -loss in linear regression settings assuming an design matrix m m ] and .note that under the asserted conditions .to determine the above limit requires first determining when a new direction becomes more favorable . for to be more favorable at , we must have when is odd , or when is even .the following result determines the number of steps to favorability .for simplicity only the case when is odd is considered , but this does not affect the limiting result .[ criticalpoint.twocycle.theorem ] assume the same conditions as theorem [ two.cycle.predictor ] .then becomes more favorable than at step where is the largest odd integer such that where and clearly shares common features with .this is no coincidence .the bounds are similar in nature because both are derived by seeking the point where the absolute gradient - correlation between sets of variables are equal . in the case of two - cycling , this is the singularity point where , and are all equivalent in terms of absolute gradient - correlation .the following result states the limit of the predictor under two - cycling .[ dynamic step.two.cycles ] under the conditions of theorem [ two.cycle.predictor ] , the limit of as at the next critical direction equals ,\ ] ] where , , and .furthermore , , where for each , .this shows that the predictor moves along the combined direction taking a step size that makes the absolute gradient - correlation for equal to that of the active set .theorem [ dynamic step.two.cycles ] is a direct analog of theorem [ dynamic.nu.size ] to two - cycling .not surprisingly , one can easily show that this limit coincides with the lar solution . 
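before turning to the lar correspondence , it may help to fix the componentwise update described above in code ; the sketch below assumes centred response and centred , unit - norm columns and uses generic names , so it is an illustration of the update rule rather than the authors ' implementation :

import numpy as np

def l2boost(X, y, nu=0.1, n_steps=500):
    """Componentwise L2 boosting: at each step descend along the coordinate
    whose column has the largest absolute gradient-correlation with the residual.
    y is assumed centred; coefficients refer to the standardised columns."""
    n, p = X.shape
    Xs = X - X.mean(axis=0)                   # centre columns
    Xs = Xs / np.linalg.norm(Xs, axis=0)      # unit L2 norm per column
    beta = np.zeros(p)
    f = np.zeros(n)
    path = []
    for _ in range(n_steps):
        r = y - f                             # negative gradient of L2 loss
        g = Xs.T @ r                          # gradient-correlations
        j = int(np.argmax(np.abs(g)))         # most favourable coordinate
        step = nu * g[j]                      # shrunken least-squares step
        beta[j] += step
        f += step * Xs[:, j]
        path.append((j, beta.copy()))
    return beta, path

with unit - norm columns the least - squares fit of the residual on column j is exactly its gradient - correlation , which is why the update multiplies g[j] by the learning parameter nu .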
to show this , we rewrite in a form comparable to lar , .\ ] ] recall that lar moves the shortest distance along the equiangular vector defined by the current active set until a new variable with equal absolute gradient - correlation is reached .the term in square brackets above is proportional to this equiangular vector .thus , since is obtained by moving the shortest distance along the equiangular vector such that have equal absolute gradient - correlation , must be identical to the lar solution .analysis of cycling in the general case where the active set is comprised of variables is more complex . in two - cyclingwe observed cycling patterns of the form , but when , 2boost s cycling patterns are often observed to be nondeterministic with no discernible pattern in the order of selected critical directions .moreover , one often observes some coordinates being selected more frequently than others .a study of -cycling has been given by .however , the analysis assumes deterministic cycling of the form which is the natural extension of the two - cycling just studied . to accommodate this framework ,a modified 2boost procedure involving coordinate - dependent step sizes was used .this models 2boost s cycling tendency of selecting some coordinates more frequently by using the size of a step to dictate the relative frequency of selection . under constraints to the coordinate step sizes , equivalent to solving a system of linear equations defining the equiangular vector used by lar , it was shown that the modified 2boost procedure yields the lar solution in the limit .interested readers should consult for details .now we turn our attention to the issue of correlation .we have shown that regardless of the size of the active set a new direction becomes more favorable than the current direction at step where is the smallest integer value satisfying using our previous notation , let and denote the left and right - hand sides of the above inequality , respectively .generally , large values of are designed to hinder noninformative variables from entering the solution path .if requires a large number of steps to become favorable , it is noninformative relative to the current gradient and therefore unattractive as a candidate .surprisingly , however , such an interpretation does not always apply in correlated problems .there are situations where is informative , but can be artificially large due to correlation . to see why , suppose that is an informative variable with a relatively large value of .now , if and are correlated , so much so that , then . hence , and due to . thus , even though is promising with a large gradient - correlation , it is unlikely to be selected because of its high correlation with .the problem is that becomes an unlikely candidate for selection when is close to .in fact , when so that can never become more favorable than when the two values are equal . we have already discussed the condition several times now , and have referred to it as the _repressible condition_. repressibility plays an important role in correlated settings .we distinguish between two types of repressibility : weak and strong repressibility .weak repressibility occurs in the trivial case when .weak repressibility implies that .hence the gradient - correlation for and are equal in absolute value and , and are perfectly correlated .this trivial case simply reflects a numerical issue arising from the redundancy of the and columns of the design matrix . 
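a rough numerical screen for coordinates at or near this condition can be written directly from the definition ( the ratio of gradient - correlations compared against the correlation with the active column ) ; the tolerance and the names below are arbitrary choices , not the authors ' code :

import numpy as np

def repressed_candidates(Xs, r, j, tol=1e-3):
    """Flag coordinates whose gradient-correlation ratio with the active
    coordinate j is numerically equal to their correlation with column j.

    Xs : design matrix with centred, unit-norm columns
    r  : current residual (negative gradient under L2 loss)
    j  : index of the currently selected coordinate
    """
    g = Xs.T @ r                      # gradient-correlations
    corr = Xs.T @ Xs[:, j]            # correlations with the active column
    ratio = g / g[j]                  # gradient-correlation ratios
    flags = {}
    for k in range(Xs.shape[1]):
        if k == j:
            continue
        if abs(ratio[k] - corr[k]) < tol:
            # perfectly correlated columns give the trivial (weak) case
            kind = "weak" if abs(abs(corr[k]) - 1.0) < tol else "strong"
            flags[k] = kind
    return flags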
the stronger notion of repressibility , which we refer to as strong repressibility ,is required to address the nontrivial case in which is repressed without being perfectly correlated with .the following definition summarizes these ideas .[ repressible.def ] we say has the strong repressible condition if and .we say that is ( strongly ) repressed by when this happens . on the other hand , has the weak repressible condition if and are perfectly correlated ( ) and .we present a numerical example of how repressibility can hinder variables from being selected . for our illustrationwe use example ( d ) of section 5 from .the data was simulated according to where , and .the first 15 coordinates of were set to 3 ; all other coordinates were 0 .the design matrix {100\times40}$ ] was simulated according to \\[-8pt ] { \mathbf{x}}_j & = & { \mathbf{z}}_3 + \tau{\boldsymbol{{{\varepsilon}}}}_j,\qquad j = 11,\ldots,15,\nonumber \\ { \mathbf{x}}_j & = & { \boldsymbol{{{\varepsilon}}}}_j,\qquad j > 15,\nonumber\end{aligned}\ ] ] where and were i.i.d. and . in this simulation , only coordinates 1 to 5 , 6 to 10 and 11 to 15 have nonzero coefficients .these -variables are uncorrelated across a group , but share the same correlation within a group . because the within group correlation is high , but less than 1, the simulation is ideal for exploring the effects of strong repressibility .for the first 5 coefficients from simulation : red points are iterations where the descent direction .variables 2 and 3 are never selected due to their excessively large step sizes : an artifact of the correlation between the 5 variables .the last panel ( bottom right ) displays for those iterations where . ]figure [ figure5 ] displays results from fitting algorithm [ a : l2boostpath ] for iterations with .the first 5 panels are the values against the iteration , with points colored in red indicating iterations where and is used generically to denote the current descent direction .notationally , the descent at iteration is along for a step size of , at which point becomes more favorable than and the descent switches to , the next critical direction .the value plotted , , is the step size for .whenever the selected coordinate is from the first group of variables ( we are referring to the red points ) one of the coordinates achieves a small value .however , coordinates and maintain very large values throughout all iterations .this is despite the fact that the two coordinates generally have large values of , especially during the early iterations ( see the bottom right panel ) .this suggests that 1 , 4 and 5 become active variables at some point in the solution path , whereas coordinates 2 and 3 are never selected ( indeed , this is exactly what happened ) .we can conclude that coordinates 2 and 3 are being strongly repressed by .interestingly , coordinate 4 also appears to be repressed at later iterations of the algorithm .observe how its values decrease with increasing ( blue line in bottom right panel ) , and that its values are only small at earlier iterations .thus , we can also conclude that coordinates eventually repress coordinate 4 as well .we note that the number of iterations used in the example is not very large , and if 2boost were run for a longer period of time , coordinates 2 and 3 will eventually enter the solution path ( panels 2 and 3 of figure [ figure5 ] show evidence of this already happening with steadily decreasing as increases ) .however , doing so leads to overfitting and poor test - set performance ( 
we provide evidence of this shortly ) . using different values of did not resolve the problem .thus , similar to the lasso , we find that 2boost is unable to select entire groups of correlated variables . like the lassothis means it also will perform suboptimally in highly correlated settings . in the next sectionwe introduce a simple way of adding -regularization as a way to correct this deficiency .the tendency of the lasso to select only a handful of variables from among a group of correlated variables was noted in . to address this deficiency, described an optimization problem different from the classical lasso framework .rather than relying only on -penalization , they included an additional -regularization parameter designed to encourage a ridge - type grouping effect , and termed the resulting estimator `` the elastic net . '' specifically , for a fixed ( the ridge parameter ) and a fixed ( the lasso parameter ) , the elastic net was defined as to calculate the elastic net , showed that could be recast as a lasso optimization problem by replacing the original data with suitably constructed augmented values .they replaced and with augmented values and , defined as follows : {(n+p)\times1},\qquad { { \mathbf{x}}^*}= \frac{1}{\sqrt{1+\lambda } } \left[\matrix { { \mathbf{x}}\cr \sqrt{\lambda } { \mathbf{i } } } \right]_{(n+p)\times p } = [ { { \mathbf{x}}^*}_1,\ldots,{{\mathbf{x}}^*}_p].\hspace*{-35pt}\ ] ] the elastic net optimization can be written in terms of the augmented data by reparameterizing as . by lemma 1 of , it follows that can be expressed as which is an -optimization problem that can be solved using the lasso .one explanation for why the elastic net is so successful in correlated problems is due to its decorrelation property .let .because the data is standardized such that [ recall ] , we have one can see that is a decorrelation parameter , with larger values reducing the correlation between coordinates . argued that this effect promotes a `` grouping property '' for the elastic net that overcomes the lasso s inability to select groups of correlated variables .we believe that decorrelation is an important component of the elastic net s success .however , we will argue that in addition to its role in decorrelation , has a surprising connection to repressibility that further explains its role in regularizing the elastic net .the argument for the elastic net follows as a special case ( the limit ) of a generalized 2boost procedure we refer to as elasticboost .the elasticboost algorithm is a modification of 2boost applied to the augmented problem .to implement elasticboost one runs 2boost on the augmented data , adding a post - processing step to rescale the coefficient solution path : see algorithm [ a : eboost ] for a precise description . for arbitrarily small ,the solution path for elasticboost approximates the elastic net , but for general , elasticboost represents a novel extension of 2boost .we study the general elasticboost algorithm , for arbitrary , and present a detailed explanation of how imposes -regularization .augment the data .set for .run algorithm [ a : l2boostpath ] for iterations using the augmented data .let denote the -step predictor ( discard for ) .let denote the -step coefficient estimate .rescale the regression estimates : . 
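a minimal sketch of this augmentation step , and of the boosting run on the augmented data , is given below ; the stacking of sqrt(lambda ) times the identity under the design matrix follows the construction above , while the choice of the naive elastic net rescaling 1/sqrt(1+lambda ) and the function names are assumptions of the sketch :

import numpy as np

def augment(X, y, lam):
    """Elastic-net style augmentation: columns of X are assumed centred and
    unit-norm, so the augmented columns are unit-norm as well."""
    p = X.shape[1]
    X_star = np.vstack([X, np.sqrt(lam) * np.eye(p)]) / np.sqrt(1.0 + lam)
    y_star = np.concatenate([y, np.zeros(p)])
    return X_star, y_star

def elastic_boost(X, y, lam, nu=0.1, n_steps=500):
    """Componentwise L2 boosting on the augmented data; coefficients are then
    rescaled back (here by 1/sqrt(1+lam), the naive elastic net scaling)."""
    Xs, ys = augment(X, y, lam)
    beta = np.zeros(X.shape[1])
    f = np.zeros(len(ys))
    for _ in range(n_steps):
        g = Xs.T @ (ys - f)             # gradient-correlations on augmented data
        j = int(np.argmax(np.abs(g)))   # most favourable coordinate
        beta[j] += nu * g[j]
        f += nu * g[j] * Xs[:, j]
    return beta / np.sqrt(1.0 + lam)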
to study the effect has on elasticboost s solution path we consider in detail how effects , the number of steps to favorability [ defined as in but with and replaced by their augmented values and ] .at initialization , the gradient - correlation for is in the special case when , corresponding to the first descent of the algorithm , therefore , , and hence .\ ] ] this equals the number of steps in the original ( nonaugmented ) problem but where is replaced with variables decorrelated by a factor of . for large values of addresses the problem seen in figure [ figure5 ] .recall we argued that can became inflated due to the near equality of with .however , shrinks to zero with increasing , which keeps from becoming inflated .this provides one explanation for role in regularization , at least for the case when is large .but we now suggest another theory that applies for both small and large .we argue that regularization is imposed not just by decorrelation , but through a combination of decorrelation and reversal of repressibility .s role is more subtle than our previous argument suggests . to show this ,let us suppose that near - repressibility holds .we assume therefore that for some small .then , }_{\mathrm{repressibility\ effect } } \\ & & \hphantom{\qquad= } \underbrace{- \log\biggl(1-\frac{r_{j , k}}{\sqrt{1+\lambda}}{\operatorname{sgn}}\biggl(r_{j , k } \biggl[\frac{1}{1+{\delta}}-\frac{1}{\sqrt{1+\lambda}}\biggr ] \biggr)\biggr)}_{\mathrm { decorrelation\ effect}}.\nonumber\end{aligned}\ ] ] the first term on the right captures the effect of repressibility .when is small , plays a crucial role in controlling its size . if , the expression reduces to which converges to as ; thus precluding from being selected [ keep in mind that is divided by , which is negative ; thus . on the other hand , any , even a relatively small value ,ensures that the expression remains small even for arbitrarily small , thus reversing the effect of repressibility . the second term on the right of is related to decorrelation .if ( which holds if is large enough when , or for all if ) , the term reduces to which remains bounded when if . on the other hand , if , the term reduces to which remains bounded if and shrinks in absolute size as increases . taken together , these arguments show imposes -regularization through a combination of decorrelation and the reversal of repressibility which applies even when is relatively small .these arguments apply to the first descent .the general case when requires a detailed analysis of . in general , we break up the analysis into two cases depending on the size of .suppose first that is small .then which is the ratio of gradient correlations based on the original without pseudo - data .if is a promising variable , then will be relatively large , and our argument from above applies . 
on the other handif is large , then the third term in the numerator and the denominator of become the dominating terms and the growth rate of for the pseudo data is for a group of variables that are actively being explored by the algorithm .thus and our previous argument applies .as evidence of this , and to demonstrate the effectiveness of elasticboost , we re - analyzed using algorithm [ a : eboost ] .we used the same parameters as in figure [ figure5 ] ( and ) .we set .the results are displayed in figure [ figure6 ] .in contrast to figure [ figure5 ] , notice that all 5 of the first group of correlated variables achieve small values ( and we confirmed that all 5 variables enter the solution path ) .it is interesting to note that is nearly 1 for each of these variables . ) .now each of the first 5 coordinates are selected and each has values near one . ]( top ) and ( bottom ) based on 250 independent learning samples .the distribution of coefficient estimates are displayed as boxplots ; mean values are given in red . ] to compare 2boost and elasticboost more evenly , we used 10-fold cross - validation to determine the optimal number of iterations ( for elasticboost , we used doubly - optimized cross - validation to determine both the optimal number of iterations and the optimal value ; the latter was found to equal ). figure [ figure7 ] displays the results .the top row displays 2boost , while the bottom row is elasticboost ( fit under the optimized ) .the minimum mean - squared - error ( mse ) is slightly smaller for elasticboost ( 217.9 ) than 2boost ( 231.7 ) ( first panels in top and bottom rows ) .curiously , the mse is minimized using about same number of iterations for both methods ( 190 for 2boost and 169 for elasticboost ) .the middle panels display the coefficient paths .the vertical blue line indicates the mse optimized number of iterations . 
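as an aside , the cross - validated choice of the stopping iteration used in this comparison can be sketched as plain k - fold cross - validation over the iteration number ; the fold count , the names and the assumption of a centred design with no intercept are illustrative :

import numpy as np

def cv_stopping_time(X, y, n_steps=500, nu=0.1, k=10, seed=None):
    """Pick the boosting iteration that minimises K-fold cross-validated test MSE."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    mse = np.zeros(n_steps)
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        Xtr, ytr = X[train], y[train]
        Xte, yte = X[test], y[test]
        beta = np.zeros(X.shape[1])
        f = np.zeros(len(train))
        for m in range(n_steps):
            g = Xtr.T @ (ytr - f)                        # gradient-correlations
            j = int(np.argmax(np.abs(g)))
            step = nu * g[j] / (Xtr[:, j] @ Xtr[:, j])   # componentwise LS step
            beta[j] += step
            f += step * Xtr[:, j]
            mse[m] += np.mean((yte - Xte @ beta) ** 2)   # test error at step m
    mse /= k
    return int(np.argmin(mse)), mse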
in the case of 2boostonly 4 nonzero coefficients are identified within the optimal number of steps , whereas elasticboost finds all 15 nonzero coefficients .this can be seen more clearly in the right panels which show coefficient estimates at the optimized stopping time .not only are all 15 nonzero coefficients identified by elasticboost , but their estimated coefficient values are all roughly near the true value of 3 .in contrast , 2boost finds only 4 coefficients due to strong repressibility .its coefficient estimates are also wildly inaccurate .while this does not overly degrade prediction error performance ( as evidenced by the first panel ) , variable selection performance is seriously impacted .the entire experiment was then repeated 250 times using 250 independent learning sets .figure [ figure8 ] displays the coefficient estimates from these 250 experiments for elasticboost ( left side ) and 2boost ( right side ) as boxplots .the top panel are based on the original sample size of and the bottom panel use a larger sample size .the results confirm our previous finding : elasticboost is consistently able to group variables and outperform 2boost in terms of variable selection .finally , the left panel of figure [ figure9 ] displays the difference in test set mse for 2boost and elasticboost as a function of over the 250 experiments ( ) .negative values indicate a lower mse for elasticboost , which is generally the case for larger .the right panel displays the mse optimized number of iterations for 2boost compared to elasticboost .generally , elasticboost requires fewer steps as increases .this is interesting , because as pointed out , this generally coincides with better mse performance .a key observation is that 2boost s behavior along a fixed descent direction is fully specified with the exception of the descent length , . 
in theorem[ criticalpoint.theorem ] , we described a closed form solution for , the number of steps until favorability , where is the currently selected coordinate direction and is the next most favorable direction .theorem [ criticalpoint.theorem ] quantifies 2boost s descent length , thus allowing us to characterize its solution path as a series of fixed descents where the next coordinate direction , chosen from all candidates , is determined as that with the minimal descent length ( assuming no ties ) .since we choose from among all directions , , and equivalently the step length , can be characterized as measures to favorability , a property of each coordinate at any iteration .these measures are a function of and the ratio of gradient - correlations and the correlation coefficient relative to the currently selected direction .characterizing the 2boost solution path by provides considerable insight when examining the limiting conditions .when , 2boost exhibits active set cycling , a property explored in detail in section [ s : cyclingbehavior ] .we note that this condition is fundamentally a result of the optimization method which drives when is arbitrarily small .this virtually guarantees the notorious slow convergence seen with infinitesimal forward stagewise algorithms .the repressibility condition occurs in the alternative limiting condition .repressibility arises when the gradient correlation ratio equals the correlation .when , is said to be strongly repressed by , and while descending along , the absolute gradient - correlation for can never be equal to or surpass the absolute gradient - correlation for .strong repressibility plays a crucial role in correlated settings , hindering variables from being actively selected .adding regularization reverses repressibility and substantially improves variable selection for elasticboost , an 2boost implementation involving the data augmentation framework used by the elastic net . | we consider , a special case of friedman s generic boosting algorithm applied to linear regression under -loss . we study for an arbitrary regularization parameter and derive an exact closed form expression for the number of steps taken along a fixed coordinate direction . this relationship is used to describe s solution path , to describe new tools for studying its path , and to characterize some of the algorithm s unique properties , including active set cycling , a property where the algorithm spends lengthy periods of time cycling between the same coordinates when the regularization parameter is arbitrarily small . our fixed descent analysis also reveals a _ repressible condition _ that limits the effectiveness of in correlated problems by preventing desirable variables from entering the solution path . as a simple remedy , a data augmentation method similar to that used for the elastic net is used to introduce -penalization and is shown , in combination with decorrelation , to reverse the repressible condition and circumvents s deficiencies in correlated problems . in itself , this presents a new explanation for why the elastic net is successful in correlated problems and why methods like lar and lasso can perform poorly in such settings . . |
kanbur et al ( 2002 ) , hendry et al ( 1999 ) , tanvir et al ( 2004 ) introduced the use of principal component analysis ( pca ) in studying cepheid light curves .they showed that a major advantage of such an approach over the traditional fourier method is that it is much more efficient : an adequate fourier description requires , at best , a fourth order fit or 9 parameters , whilst a pca analysis requires only 3 or 4 parameters with as much as 81 of the variation in light curve structure being explained by the first parameter .later , leonard et al ( 2003 ) used the pca approach to create cepheid light curve templates to estimate periods and mean magnitudes for hst observed cepheids .the purpose of this paper is to apply the pca technique to the study of rr lyrae light curves .the mathematical formulation and error characteristics of pca are given in k02 and will only be summarized here .the data used in this study were kindly supplied by kovacs ( 2002 private communication ) and used in kovacs and walker ( 2001 , hereafter kw ) .these data consist of 383 rrab stars with well observed v band light curves in 20 different globular clusters .kw performed a fourier fit to these data , which , in some cases , is of order 15 .details concerning the data can be found in kw .the data we work with in this paper is this fourier fit to the magnitudes and we assume that the fourier parameters published by kw are an accurate fit to the actual light curves .we start with the data in the form used in kw : a list of the mean magnitude , period and fourier parameters for the v band light curve .the light curve can thus be reconstructed using an expression of the form where is the mean magnitude , , the period , the fourier parameters given in kw .these light curves are then rephased so that maximum light occurs at phase 0 and then rewritten as the are the light curve characteristics entering into the pca analysis ( k02 ) .we then solve equation ( 4 ) of k02 , either after , or before removing an average term from the fourier coefficients in equation ( 2 ) .with pca , the light curve is written as a sum of `` elementary '' light curves , where is the magnitude at time t , etc .are the pca coefficients and the are the elementary light curves at phase or time t. these elementary light curves are not a priori given , but are estimated from the dataset in question .each star has associated with it a set of coefficients and these can be plotted against period just as the fourier parameters in equation ( 1 ) are plotted against period .we also note that the pca results are achieved as a result of the analysis of the _ entire _ dataset of 383 stars whereas the fourier method produces results for stars individually .this feature of pca is particularly useful when performing an ensemble analysis of large numbers of stars obtained from projects such as ogle , macho and gaia .solving equation ( 4 ) of k02 yields the principal component scores and the amount of variation carried by each component .what we mean by this is the following : if we carry out an order pca fit , then pca will assume that all the variation in the dataset is described by components and simply scale the variation carried by each component accordingly .table 1 shows this `` amount of variation '' quantity with and without the average term removed .we see that in the case when we do not remove the average term the first pc explains as much as of the variation in the light curve structure . 
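the pipeline just described ( reconstruct each star from its fourier fit on a common phase grid , rephase so that maximum light is at phase zero , remove the average term , and extract the elementary light curves and scores ) can be sketched as follows ; the fourier convention and the array names are simplified stand - ins rather than the exact forms of equations ( 1)-(3 ) :

import numpy as np

def fourier_curve(A0, A, Phi, phase):
    """Reconstruct m(phase) = A0 + sum_k A_k * sin(2*pi*k*phase + Phi_k)."""
    A, Phi = np.asarray(A), np.asarray(Phi)
    k = np.arange(1, len(A) + 1)
    return A0 + np.sum(A[:, None] * np.sin(2 * np.pi * np.outer(k, phase)
                                           + Phi[:, None]), axis=0)

def pca_of_light_curves(curves, n_components=4):
    """curves: (n_stars, n_phases) array, each row rephased so maximum light
    is at phase 0.  Returns mean curve, elementary light curves, scores and
    the fraction of variation carried by each component."""
    mean_curve = curves.mean(axis=0)
    X = curves - mean_curve                             # remove the average term
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)                     # variation per component
    scores = U[:, :n_components] * s[:n_components]     # PCA coefficients per star
    components = Vt[:n_components]                      # "elementary" light curves
    return mean_curve, components, scores, explained[:n_components]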
in the casewhen we do remove the average term from the fourier coefficients , the first pca coefficient explains as much 81 percent of the variation in light curve structure . in either case , the first four components explain more than of the variation .figures 1 and 2 show some representative light curves from our rrab dataset . in each panel of these two figures , the solid line is the fourier decomposition of order 15 ( that is 31 parameters ) used by kw , whilst the dashed line is a pca generated light curve of order 14 ( that is 15 parameters ) .straightforward light curves such as the one given in the bottom and top left panels of figures 1 and 2 respectively are easily reproduced by our method .the top left panel of figure 1 provides an example of an rrab light curve with a dip and sharp rise at a phase around 0.8 .this is well reproduced by pca .it could be argued that pca does not do as well as fourier in mimicking this feature , for example , in the bottom right panel of figure 2 .however , the difference in the peak magnitudes at a phase of around 0.8 is of the order of 0.02mags .it is also important to remember that the pca method is an ensemble method and analyzes all stars in a dataset simultaneously .with fourier , it is possible to tailor a decomposition to one particular star .this difference can be seen either as a positive or negative point about either technique . given this, we contend that pca does remarkably well in describing the full light curve morphology of rrab stars . on the other hand , the fourier curve in the bottom left panel of figure 2 at this phase is not as smooth as the pca curve .in fact the pca curves do not change much after about 8 pca parameters .even though table 1 implies that the higher order pca eigenvalues are small , we feel justified in carrying out such a high order pca fit because its only after about 8 pca components that the fitted light curve assumes a stable shape .the left panel of figure 3 displays an eighth order pca fit ( 9 parameters , dashed line ) and a fourth order fourier fit ( 9 parameters , solid line ) .the fourier curve still has some numerical wiggles whilst the pca curve is smoother .in addition , the two curves disagree at maximum light .the right panel of figure 3 shows , for the same star , the same order pca curve as the left panel and an eighth order fourier fit ( 17 parameters ) .now the two light curves agree very well .note that in portraying the pca and fourier fits of reduced order in this figure , we simply truncated the original representations to the required level .we suggest that figures 1 - 3 and table 1 provide strong evidence that pca is an _efficient _ way to describe rrab light curve structure without compromising on what light curve features are captured by this description . figures 4 - 6 display plots of the first three pc scores plotted against log period for our sample .the errors associated with these pca scores are discussed in section 4 of k02 and given in equation 6 of that section .the orthogonal nature of these scores may well provide insight into the physical processes causing observable features in the light curve structure .a detailed study of these plots , in conjunction with theoretical models , is left for a future paper .figure 7 graphs v band amplitude against the first pca coefficient ( after averaging ) . 
we see a very tight correlation .since table 1 implies that pca1 explains about of the variation in light curve structure , figure 6 shows that the amplitude is a good descriptor of rrab light curve shape , at least for the data considered in this paper .although the fourier amplitudes are also correlated with amplitude , with pca , we can quantify , very easily , the amount of variation described by each pca component .this has implications for both modeling and observation . on the modeling side , a computer code that can reproduce the observed amplitude at the correct period ,will also do a good job of reproducing the light curve structure . on the observational side, this provides insight into why we can use the amplitude , rather than a full blown pca or fourier analysis , to study the _ general _ trends of light curve structure .this is why comparing theoretical and observational rrab light curves on period - amplitude diagrams works reasonably well , though we caution that a careful analysis should consider the finer details of light curve structure .figures 6 and 7 display plots of the first two pca coefficients and fourier amplitudes , respectively , for our data , plotted against each other . whilst and are correlated with each other , and are not , by construction .a similar situation would occur had we plotted or against .this is another advantage of pca analysis of variable star light curves : the different pca components are orthogonal to each other .a practical advantage of this feature is outlined in the next section .a major goal of stellar pulsation studies is to find formulae linking global stellar parameters such as luminosity or metallicity to structural light curve properties .if we are interested in the band magnitude , then we can write , where , since we do not know the function , we try to estimate it empirically .two different approaches to quantifying light curve structure will , in general , yield different formulations of the function , but if there does exist a true underlying function , then both methods should give similar answers for , given the same input data . with a fourier based method ,the function is related to the fourier amplitudes and phases , , usually with a linear relation . with a pca approach ,we use the pca scores plotted in figures 2 - 4 . hence a pca relation , though also linear , will be different .the nature of pca implies that the error structure in such formulae will be simpler and we quantify this below .both formulations should , of course , give similar numbers for the final estimated value of the physical parameter in question , in this case , .kw used the fourier method and found relations of the form , and , we note that these relations were obtained through an iterative procedure whereby outliers were removed and the relations re - fitted ( kovacs 2004 ) . 
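a relation of the same linear form , with the period and pca scores as regressors , can be fitted by ordinary least squares together with the coefficient covariance needed later for the error budget ; the sketch below uses generic names and the standard ols covariance formula , not the exact fitting procedure of kw :

import numpy as np

def fit_magnitude_relation(M, logP, scores):
    """Least-squares fit of M = c0 + c1*logP + c2*PC1 + ... with standard errors."""
    X = np.column_stack([np.ones_like(logP), logP, scores])
    coef, *_ = np.linalg.lstsq(X, M, rcond=None)
    n, k = X.shape
    resid = M - X @ coef
    sigma2 = resid @ resid / (n - k)                # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # covariance of fitted coefficients
    return coef, np.sqrt(np.diag(cov)), cov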
in this paper , we use the pca method , but also , we use the entire dataset c mentioned in kw , consisting of 383 stars , and fit the relations just once .we do not remove any outliers .this may be why we obtain slightly different versions of the fit using fourier parameters than that published in kw .for ease of comparison , we include in table 2 results obtained using both pca and fourier parameters .this table gives the name for the relation , the independent variables considered and coefficients together with their standard errors .the value of chi - squared in the table is defined as where is the fitted value of and are the number of stars and parameters respectively in the fit .an examination of this table strongly suggests that * \1 ) similar relations to equations ( 4 ) and ( 5 ) between and the pca coefficients exist . *\2 ) we can use an f test ( weisberg 1980 ) to test for the significance of adding a second and then a third pca parameter to the regression .the f statistic we use is where are the residual sum of squares under the null and alternate ( nh and ah ) hypothesis respectively .similarly , and are the degrees of freedom under these two hypotheses . for this problem ,the null hypothesis is that the model with the smaller number of parameters is sufficient whilst the alternative hypothesis is that the model with the greater number of parameters is required . under the assumption of normality of errors , equation( 7 ) is distributed as an , ( weisberg , 1980 , p. 88 ) . applyingthis test implies firstly , that adding the first parameter is a significant addition to and secondly , that adding a second and third parameter , and are also highly significant with a p value less than 0.0004 . in the case of fourier parameters , adding the parameter to is highly significant and adding the parameter to this is also highly significant .however , a formula involving ( ) has a p value of 0.0058 and a formula involving all 3 fourier amplitudes and is not a significant addition to a formula involving . * \3 ) the standard deviation of the fits given in the last column is generally slightly higher for the pca case , when considering similar numbers of parameters .this is perhaps caused by the fact that the different pca components carry orthogonal sets of information . *\4 ) the errors on the coefficients in the pca fits are always significantly smaller .this is an important point when we evaluate the errors on the final fitted value of the absolute magnitude . *\5 ) if we write the absolute magnitude as a function of parameters , , then the error on the absolute magnitude is given by , as table 2 indicates , is always smaller when the are pca coefficients rather than fourier amplitudes .figure 8 and 9 portray graphs of vs and verses respectively .we note that .table 3 presents sample correlation and covariance coefficients between the period and pca parameters and period and fourier parameters .table 3 , and figures 6 and 7 demonstrate that the correlation coefficient amongst any pair of pca coefficients is smaller than between any pair of fourier coefficients .hence the error on the fitted value of , , _ has _ to be smaller when using a pca based formula .we can use table 3 and equation ( 9 ) to formally calculate the error on .table 4 presents these results .the label in the top row of this table ( p1 , f1 , etc ., ) refers to the appropriate relation in table 2 .we see clearly that the pca formulae do better than their fourier counterparts with a similar number of parameters . 
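both the nested - model f statistic of equation ( 7 ) and the propagation of coefficient errors to a fitted magnitude translate directly into code ; the helper names below are generic :

import numpy as np
from scipy import stats

def partial_f_test(rss_null, df_null, rss_alt, df_alt):
    """F statistic for comparing a smaller (null) model against a larger one,
    as in equation (7); returns the statistic and its p-value."""
    F = ((rss_null - rss_alt) / (df_null - df_alt)) / (rss_alt / df_alt)
    p = stats.f.sf(F, df_null - df_alt, df_alt)
    return F, p

def fitted_magnitude_error(coef_cov, x):
    """sigma_M^2 = sum_i sum_j x_i x_j Cov(a_i, a_j), i.e. the quadratic form x' Cov x."""
    return float(np.sqrt(x @ coef_cov @ x))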
when we consider the and variables ,then the `` error advantage '' using a pca based method is a factor of two .this occurs not just because the pca coefficients are orthogonal to each other , but also because the errors on the coefficients in a pca based formula are significantly smaller than in the fourier case .figure 10 displays a plot of the predicted absolute magnitudes obtained using a two parameter ( ) fourier fit and the three parameter ( ) pca fit .the two approaches are displaced from each other because we do not consider the constants in this study . disregarding this, it can be seen that the slope of this plot is 1 : hence the two methods produce similar relative absolute magnitudes .we have shown that the method of pca can be used to study rr lyrae light curves .it has distinct advantages over a fourier approach because * \a ) it is a more efficient way to characterize structure since fewer parameters are needed .a typical fourier fit requires 17 parameters whereas a pca fit may only need 9 . *\b ) using the pca approach , we see clearly why the amplitude is a good descriptor of rrab light curve shape . * \c ) the different pca components are orthogonal to each other whereas the fourier amplitudes are highly correlated with each other .this leads to relations linking light curve structure to absolute magnitude using pca having coefficients with smaller errors and leading to more accurate estimates of absolute magnitudes .this can reduce the formal error , in some cases , by a factor of 2 . in the present formulation of our pca approach ,the input data is a fourier analysis .if these input data , that is the fourier decompositions , contain significant observational errors , the error bars on the resulting principal components will be larger . neither the pca or fourier approach can compensate fully for noisy data . in this sense , the sensitivity of pca to noisy data should be similar to fourier , though the fact that pca is an ensemble approach in which we initially remove an average term does guard against individual points having too much undue influence .as an example , table 4 of kw gives 17 outliers ( in terms of their fourier parameters ) , which kw removed in their analysis relating absolute magnitude to fourier parameters .we do _ not _ remove these outliers , yet , in terms of the final fitted magnitudes presented in figure 10 , pca and fourier produce very similar results .further , even with the inclusion of these 17 stars , the pca method still produces pca coefficients with smaller errors as given in tables 2 and 3 .kanbur et al ( 2002 ) discuss in detail the nice error properties of the pca method as applied to variable stars and give a recipe with which to calculate errors on pca coefficients .their figure 2 , albeit for cepheids , displays error bars on these coefficients .we see that even with noisy data , the progression of pca parameters with period is preserved , though of course , the error bars on the pca coefficients are larger .ngeow et al ( 2003 ) developed a simulated annealing method which can reduce numerical wiggles in fourier decomposition of sparse data .ngeow et al ( 2003 ) give specific examples of how such an approach improves fourier techinues using ogle lmc cepheids .a similar result will hold true for rr lyraes . 
hence this annealing technique couple with a principal component analysis should prove very useful when dealing with noisy rr lyrae data and will be treated in detail in a subsequent paper .our pca results are based on a sample of 383 stars in globular clusters .how transferable are our results and how can our results be used to obtain pc coefficients for a new rr lyrae light curve which appears to be normal ( ie no signs of blazhko effects etc . ) ?our results are transferable to the extent that the original 383 stars are a good representation of the entire population of rrab stars , including variation in metallicity and differences between field and cluster variables .given this caveat , we suggest two methods to reproduce the light curve of a new rrab star .firstly , it is straightforward to include the new star in the pca analysis with the existing dataset .this is our recommended approach and preserves the `` ensemble analysis '' property of our pca method .our second method will be the subject of future paper but briefly it is this .we fit the progression of the pca coefficients with period , such as given in figures 4 and 5 , with simple polynomial functions . as an aside, we remark that figure 4 contains significant scatter , perhaps associated with metallicity , so that it would be best to include metallicity in such polynomial fits . for a new star , we then guess its period and read off , for that period , the value of the pca coefficients . equation ( 3 ) then allows us to generate the light curve .we iterate this until a specified error criterion is satisfied .we can then use existing formulae relating absolte magnitude to light curve structure as defined by pca .this pca template approach has been used , with considerable success , in analysing hst cepheid data ( leonard et al 2003 ) .we note from table 2 that the chi square on the fitted relations are similar for pca and fourier .does this mean that despite the smaller formal errors with pca , both methods ability to predict rrab absolte magnitudes is limited by the intrinsic properties of rrab stars themselves ?to some extent this is true .jurcsik et al ( 2004 ) , in analysing accurate data for 100 rrab stars in m3 , show that for some 16 stars , amongst which there exist some pairs whose absolute mean magnitudes differ by about 0.05 mags ( the accuracy of the photometry is about 0.02mags ) , the fourier parameters and periods are very similar .that is , an empirical method relating absolute magnitude to period and fourier parameters in one waveband could not distinguish between these stars .since , as jurcsik et al ( 2004 ) point out , their data contains a small range of both mass and metallicity , temperature is the only other variable , it may be the case that multiwavelength information is needed .it is worthwhile to investigate how pca fares with this dataset .here we give an outline that suggests that pca can be more efficient at extracting information from the light curve .for the sixteen stars which had differing absolute magnitudes but very similar fourier parameters , we can perform the following procedure : for every pair , , we calculate and where are the fourier amplitudes and are the pca coefficients and are the mean magnitudes . 
in the above, we always take the absolute value of the differences .we need to take fractional changes because the fourier amplitudes and pca coefficients have different ranges .we now plot diff3 against diff1 and diff2 .this is presented in figure 11 , where the open squares are diff1 and the closed squares are diff2 .we see that with pca ( closed squares ) , the differences between light curve structure parameters are greater than with fourier ( open squares ). this could imply that pca can be more efficient though the limitations associated with using a single waveband are still present. a more rigorous , quantitative discussion of this , in a fisher information sense , will be given in a future paper .in other future work we plan to investigate the applicability of this method to light curve structure - metallicity relations , rrc stars and a comparison of observed and theoretical light curves using pca ..percentage of variation explained by pc components [ cols="<,^,^,^,^,^,^",options="header " , ]smk thanks geza kovacs for stimulating discussions and for kindly supplying the rrab dataset .smk thanks d. iono for help writing the pca and least squares program and c. ngeow for help with latex .hm thanks fcrao for providing a summer internship in 2001 when part of this work was completed .we also thank the referee for constructive comments .hendry , m. a. , tanvir , n. a. , kanbur , s. m. , 1999 , asp conf .series , 167 , p. 192kanbur , s. , iono , d. , tanvir , n. & hendry , m. , 2002 , mnras , 329 , 126 jurcsik , j. , benko , j. , bakos , g. , szeidl , b. , szabo , r. , 2004 , apjl , 597 , 49 kovacs , g. , walker , a. r. , 2001 , a&a , 371 , 579 kovacs , g. , 2004 , private communication leonard , d. , kanbur , s.,m . , ngeow , c. , tanvir , n. , 2003 , apj , 594 , 247 ngeow , c. , kanbur , s. m. , nikolaev , s. , tanvir , n. , hendry m. , 2003 , 586 , 959 tanvir , n. r. , hendry , m. a. , kanbur , s. m. , 2004 , in preparation weisberg , s. , 1980 , _ applied linear regression _ , john wiley & sons , ed . | in this paper , we analyze the structure of rrab star light curves using principal component analysis . we find this is a very efficient way to describe many aspects of rrab light curve structure : in many cases , a principal component fit with 9 parameters can describe a rrab light curve including bumps whereas a 17 parameter fourier fit is needed . as a consequence we show show statistically why the amplitude is also a good summary of the structure of these rr lyrae light curves . we also use our analysis to derive an empirical relation relating absolute magnitude to light curve structure . in comparing this formula to those derived from exactly the same dataset but using fourier parameters , we find that the principal component analysis approach has distinct advantages . these advantages are , firstly , that the errors on the coefficients multiplying the fitted parameters in such formulae are much smaller , and secondly , that the correlation between the principal components is significantly smaller than the correlation between fourier amplitudes . these two factors lead to reduced formal errors , in some cases estimated to be a factor of 2 , on the eventual fitted value of the absolute magnitude . this technique will prove very useful in the analysis of data from existing and new large scale survey projects concerning variable stars . rr lyraes stars : fundamental parameters |
a conventional boundary layer theory of fluid flow used for free convective description assumes zero velocity at leading edge of a heated plate .more advanced theories of self - similarity also accept this same boundary condition , , .however experimental visualization definitely shows that in the vicinity of edge the fluid motion exists sb , , .it is obvious from the point of view of the mass conservation law . in the mentioned convection descriptions the continuity equationis not taken into account that diminishes the number of necessary variables .for example the pressure is excluded by cross differentiation of navier - stokes equation component .the consequence of zero value of boundary layer thickness at the leading edge of the plate yields in infinite value of heat transfer coefficient which is in contradiction with the physical fact that the plate do not transfer a heat at the starting point of the phenomenon .the whole picture of the phenomenon is well known : the profiles of velocity and temperature in normal direction to a vertical plate is reproduced by theoretical concepts of prandtl and self-similarity.while the evolution of profiles along tangent coordinate do not look as given by visualisation of isotherms ( see e.g. gdp ) .it is obvious that isotherms dependance on vertical coordinate significantly differs from power low depandance of boundary layer theories . in this articlewe develop the model of convective heat transfer taking into account nonzero fluid motion at the vicinity of the starting edge .our model is based on explicit form of solution of the basic fundamental equations ( navier - stokes and fourier - kirchhoff ) as a power series in dependant variables .the mass conservation law in integral form is used to formulate a boundary condition that links initial and final edges of the fluid flow .we consider a two - dimensional free convective fluid flow in plane generated by vertical isothermal plate of height placed in udisturbed surrounding .the algorithm of solution construction is following .first we expand the basic fields , velocity and temperature in power serious of horizontal variable , it substitution into the basic system gives a system of ordinary differential equations in variable .such system is generally infinite therefore we should cut the expansion at some power .the form of such cutting defines a model .the minimal number of term in the modeling is determined by the physical conditions of velocity and temperature profiles . from the scale analysis of the equations we neglect the horizontal ( normal to the surface of the plate ) component velocity .the minimum number of therms is chosen as three : the parabolic part guarantee a maximum of velocity existence while the third therm account gives us change of sign of the velocity derivative .the temperature behavior in the same order of approximation is defined by the basic system of equations . 
the first term in such expansion is linear in , that account boundary condition on the plate ( isothermic one ) .the coefficient , noted as satisfy an ordinary differential equation of the fourth order .it means that we need four boundary condition in variable .the differential links of other coefficients with add two constants of integrations hence a necessity of two extra conditions .these conditions are derived from conservation laws in integral form .the solution of the basic system , however , need one more constant choice .this constant characterize linear term of velocity expansion and evaluated by means of extra boundary condition . in the second section we present basic system in dimensional and dimensionless forms . by means of cross - differentiationwe eliminate the pressure therm and next neglect the horizontal velocity that results in two partial differential equations for temperature and vertical component of velocity . in the third sectionwe expand both velocity and temperature fields into taylor series in and derive ordinary differential equations for the coefficients by direct substitution into basic system .the minimal ( cubic ) version is obtained disconnecting the infinite system of equations by the special constraint .the fourth and fives sections are devoted to boundary condition formulations and its explicit form in therms of the coefficient functions of basic fields .it is important to stress that the set of boundary conditions and conservation laws determine all necessary parameters including the grasshof anf rayleigh numbers in the stationary regime under consideration .the last section contains the solution in explicit form and results of its numerical analysis .the solution parameters values as the function of the plate height and parameters whivh enter the grasshof number estimation are given in the table form , which allows to fix a narrow domain of the scale parameter being the characteristic linear dimension of the flow at the starting level .let us consider a two dimensional stationary flow of incompressible fluid in the gravity field .the flow is generated by a convective heat transfer from solid plate to the fluid .the plate is isothermal and vertical . in the cartesiancoordinates ( horizontal and orthogonal to the palte) ( vertical and tangent to the palte ) the navier - stokes ( ns ) system of equations have the form .: in the above equations the pressure terms are divided in two parts .the first of them is the hydrostatic one that is equal to mass force , where : is the density of a liquid at the nondisturbed area where the temperature is .the second one is the extra pressure denoted by part of gravity force arises from dependence of the extra density on temperature , is a coefficient of thermal expansion of the fluid . in the case of gases last terms of the above equations represents the friction forces with the kinematic coefficient of viscosity the mass continuity equation in the conditions of natural convection of incompressible fluid in the steady state has the form : the temperature dynamics is described by the stationary fourier - kirchhoff ( fk ) equation : where and are the components of the fluid velocity , - temperature and - pressure disturbances correspondingly and is the thermal diffusivity . from the point of clarity of further transformationswe use the same scale along both variables and .we will return to the eventual difference between characteristic scales in different directions while the solution analysis to be provided . 
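for later reference , the grashof number that fixes the characteristic scale in the rescaled equations below can be tabulated for representative conditions ; the fluid properties used here are illustrative values for air near room temperature and are not taken from the paper :

def grashof(g, beta, dT, length, nu):
    """Grashof number Gr = g * beta * dT * L**3 / nu**2."""
    return g * beta * dT * length**3 / nu**2

# illustrative property values for air (assumed, SI units)
g, beta, nu, kappa = 9.81, 3.4e-3, 1.5e-5, 2.1e-5
dT = 10.0                                  # plate-to-ambient temperature difference, K
for L in (0.001, 0.01, 0.1):               # candidate characteristic lengths, m
    Gr = grashof(g, beta, dT, L, nu)
    print(f"L = {L:6.3f} m   Gr = {Gr:10.3e}   Ra = {Gr * nu / kappa:10.3e}")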
after introducing variables : we obtain in boussinesq approximation ( in all terms besides of buoyancy one we put ) . and fk equation is written as where is a characteristic linear dimension and is characteristic velocity : then , and the grashof number , which after plugging ( takes the form: after cross differentiation of equations ( and ( ) we have : = \ ] ] \label{ns1 + 2}\ ] ] the fk equation rescales as and where next we would formulate the problem of free convection around the heated vertical isothermal plate , dropping the primes . in this casewe assume the angle between the plate and a stream line is small that means a possibility to neglect the horizontal component of velocity of fluid , denoting the vertical component as . in this paperwe restrict ourselves by the assumption that and , that yields = 0 , \label{ns - a}\ ] ] aim of this paper is the theory application to the standard example of a finite vertical plate . having only two basic functions we consider the power series expansions of the velocity and temperature in cartesian coordinates : according to standard boundary conditions on the plate we assume that the both functions tend to zero when ,so we choose for a calculation the variable that has the zero value for nondimentional temperature ( ) .it means that the value of outside of the convective flow tends to substituting expressions ( and ( into the equations ( we take into account the linear independance of monomials that gives a system of coupled nonlinear equations for the coefficients , .... and , , such system is infinite hence for a practical use we need to choose appropriate scheme of closed formulation for finite number of variables .the formulation should be based on physical assumptions for a concrete conditions .we would like to restrict ourselves by the fourth order approximation for both variables that means we neglect higher order terms starting from fifth one .the area of the approximations validity is defined by the comparison of terms in expantions ( and ( as it will be clear from further analysis we should consider the functions : and as variables of the first order , while and to be the second one . from the relations that appear after substitution of ( and ( into ( and ( it follows that and finally from both equations ( ( we obtain the system of equations for the coefficients , , and the first two ( ( arise from fk equation and the rest of them are from the ns one .the system of equations is closed if .it means that the number of equations and the number of unknown functions is the same . in the first approximationthe velocity and temperature are expressed as : from ( ) one has from ( ) it follows that hence ( ) goes to: the equation ( ) reads : this results in the form of the equation ( ) indicates that for unique solution one needs four boundary conditions for given parameters and .apart from such conditions we should also have values for and .so for expicit determination of and we need eight conditions .looking for the boundary conditions let us apply conservation laws of mass , momentum and energy , applying the laws to a control volume ( see fog.1 ) .the first one is the conservation of mass in two dimensions that in steady state looks as : where : is the sum of all lateral surfaces ( fig.1 ) .the mass conservation law in the integral form ( ) is formulated by a division of the surface to two the lower and upper boundaries only . 
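the reduction of the partial differential equations to ordinary differential equations for the coefficient functions can be illustrated with a small computer-algebra sketch. the snippet below is only an illustration of the procedure (substitute truncated series in the horizontal variable and collect powers of x); it uses a simplified stand-in transport equation rather than the paper's full navier-stokes / fourier-kirchhoff system, and the coefficient names a_i(y), b_i(y) are hypothetical.

```python
# illustrative sketch only: reduce a simplified transport equation to odes for the
# series coefficients by collecting powers of x; the paper applies the same
# procedure to the full navier-stokes / fourier-kirchhoff system.
import sympy as sp

x, y = sp.symbols('x y')
a = [sp.Function(f'a{i}')(y) for i in (1, 2, 3)]   # temperature coefficients (hypothetical names)
b = [sp.Function(f'b{i}')(y) for i in (1, 2, 3)]   # velocity coefficients (hypothetical names)

T = sum(ai * x**k for k, ai in enumerate(a, start=1))   # cubic truncation in x
w = sum(bi * x**k for k, bi in enumerate(b, start=1))

# simplified stand-in equation: w * dT/dy = d^2 T / dx^2
residual = sp.expand(w * sp.diff(T, y) - sp.diff(T, x, 2))

# one ode in y per power of x; the infinite hierarchy is "closed" by keeping
# only the lowest powers, exactly as described in the text
for k in range(0, 7):
    coeff = residual.coeff(x, k)
    if coeff != 0:
        print(f'x^{k}:', sp.Eq(coeff, 0))
```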
according to our main assumption about the two-dimensionality of the stream, we neglect the dependence of the variables on the coordinate and we follow the idea of continuity of the velocity field at , hence . for the left side, in the approximations mentioned above, one has ( ): and the outgoing flow is expressed similarly: the mass conservation law yields . the next condition is connected with the conservation of energy in a control volume (an area of unit width, see fig. 1) and arises from the fk equation ( ) by integration over the volume. the left side of the energy conservation equation ( ) is transformed similarly, applying the identities ( ) and ( ). according to our assumptions we keep only the flows across and , and, based on the homogeneity of the problem with respect to the coordinate , we have: to link the incoming fluid temperature with the solution at and with the outgoing fluid (see ) we put , which results in: where . the equation ( ) is an ordinary differential equation of the fourth order, therefore its solution needs four constants of integration. these constants depend on two parameters and , which enter the coefficients of eq. ( ). the function defines the remaining functions and via the relations above. this means that we have six constants determining the solution of the problem, and we also need six corresponding boundary conditions. the temperature values in the vicinity of the boundary edge point are taken as the value -1 (the temperature of the flow incoming from the bottom). in dimensional form the interval under consideration has the characteristic length, which we identify with the parameter used when the dimensionless variables were introduced ( ). let us recall that the scale is connected with a special (local, horizontal) grashof number ( ). the total height of the plate is denoted . for a stationary process the edge conditions may be considered as initial ones for a cauchy problem. having a power series approximation of such conditions, we choose the coefficients of the series using the weierstrass theorem; that is, we equate the coefficients to the scalar products of the initial conditions with orthonormal polynomials on the interval in nondimensional variables. in the approximation by third-power orthogonal polynomials we have: because the nondimensional temperature of the fluid in the lower half plane, according to the above, is , the polynomials are defined as: the normalization for and the orthogonality condition give the link between the constants: .
, which plugging into results in = substituting the result into gives two equations , , which solving and projecting , .yield boundary conditions for the coefficients for temperature expansion : plotting the temperature approximation at the level approximation ] is given by the fig.2 .let us recall that ( see eq .( )) therefore ( ) we will consider as boundary condition for the temperature gradient values on the plate decrease when grows.at the leading edge we pose the condition because the plate lose the contact with the fluid .it gives third boundary condition ( ) phenomenon of free convective heat transfer from isothermal vertical plate ( ) imply that temperature gradient on the plate is negative ( ) and decrease along ( ) .it is also known that velocity profile has maximum at the distance .the extrema for the curve is defined by derivative of as a function of hence the relation indicates that for , and we have two extremal points {\frac{\alpha^{2}}{9\beta^{2}}-\frac{% \gamma}{3\beta}}\text { \ \ \ \ and \ \ \ \ } x_{0}(y)=-\frac{\alpha } { 3\beta}+% \sqrt[2]{\frac{\alpha^{2}}{9\beta^{2}}-\frac{\gamma}{3\beta } } \label{xm0}\ ] ] if notations are chosen to mark maximum position point as while the minimum one is .in the exeptional case of expression simplifies which is positive for second extremum do not exist now ( see fig.3 ) .[ fig : fig-3 ] there is a possibility to choose the value considering the as a conditional boundary of the upward stream.we define hence . at the starting horizontal edge of the vertical platethe vertical velocity component of incoming flow ( ) varies slow so we assume that hence the extrema of the velocity profile ( ) after account of ( ) and ( ) is transformed as , for maximum : {\frac{c_{2}{}^{2}}{\left ( \frac{g_{r}}{2}% c\left ( y\right ) \right ) ^{2}}+\frac{2\gamma}{g_{r}c\left ( y\right ) } } ] .the following identity holds for : suppose there exists a level at which where denotes the boundary layer thickness analog . the equation ( ) is solved with respect to that gives : as function of the problem parameters . then plugging ( ) for the expression for the yields us return to the expression for the temperature ( ) with neglecting the last term in temperature ( the possibility of such assumption will be explained below ) on the level and substitute ( ) and ( ) into it equalizing to the temperature of surraunding . we have : where : from the equation ( ) after plugging ( ) and taking into account ( ) we have the equation was studied recently where the solution was given by +\exp[-\frac{sy}{2}](b_{1}\cos[\frac{\sqrt{3}}{2}sy]% + b_{2}\sin[\frac{\sqrt{3}}{2}sy ] ) , \label{c(y)}\ ] ] where {2\pr g_{r } } \label{s}\ ] ] is expressed via = have also boundary conditions : ( ) solution of the system results in a rather big expression for function of we skip in theis text , going to following approximtion .the explicit form of the equation ( shows that the three last terms have exponential behaviour as function of means that there are three different domains of the fluid flow structure .the first is the starting one where all terms are significant .the leading edge is characterized by two first terms and the medium domain is described by the only first one .we choose the parameter such that it belongs to that medium range .in such conditions where plugging in the form of ( into the table of boundary conditions gives a_1=--b_1+,=-b_2 + -,=. 
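the extremum locations quoted above can be checked numerically. the sketch below assumes the cubic velocity profile w(x) = γx + αx² + βx³, which is the form that reproduces the quoted roots of dw/dx = 0; the numerical coefficient values are placeholders, not the values that follow from the boundary conditions of this paper.

```python
# minimal numerical check of the extremum formula for a cubic velocity profile
# w(x) = gamma*x + alpha*x**2 + beta*x**3; coefficient values are placeholders.
import numpy as np

alpha, beta, gamma = 2.0, -1.5, 0.3        # assumed sample values, not from the paper

def w(xx):
    return gamma * xx + alpha * xx**2 + beta * xx**3

def dw(xx):
    return gamma + 2.0 * alpha * xx + 3.0 * beta * xx**2

disc = alpha**2 / (9.0 * beta**2) - gamma / (3.0 * beta)
x_m = -alpha / (3.0 * beta) - np.sqrt(disc)      # one root of dw/dx = 0
x_0 = -alpha / (3.0 * beta) + np.sqrt(disc)      # the other root

for xe in (x_m, x_0):
    print(f'x = {xe:+.4f}, dw/dx = {dw(xe):+.2e}, w = {w(xe):+.4f}')
# both derivatives vanish to roundoff, confirming the quoted expressions
```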
let us consider the natural approxmation .after substitution of the expression for into next into we have approximate formulas : =-, a_1=,=-,-++. it defines the expression for as the function of parameters ( , the plate height and the new one ( the velocity profile at the level is defined by ( and the parameters values ( : mass conservation equation ( ) after substitution of , and denoting has the form : the only real solution of the equation ( ) value that have physical sense is now we can return to the energy conservation equation ( plugging the boundary conditions for the domain restricted by the plate on interval ( ) .it simpifies the expression for the integral along the plate surface ( heat transfer from the plate on this interval ) .consequently we change to and neglect the integrant oscilations at vicinity of . we estimate the heat flux integral from the plate as and take into account the expressions for parameters yields : . asfurther considerations show , the value of mayby chosen as close to the plate height .chosing and plugging the values of the parameters ( ) {2ra}=\allowbreak 4.\,\allowbreak5782 ] ) .substitution of the grashof number = gives that is represented by the plot . .] in the same condition the temperature profile is defined by the expression ( ) and results in the plot ] to understand the phenomenon it is useful to return to dimensional picture . as a main space scaleit is choosen the parameter ( ) which is connected with the grashof number by ( ) {\frac{1}{% bg\phi } \nu^{2}g_{r}} ]first of all we would stress again that the model we present here have the engieering character of approximations , but include direct possibilities for a development by simple taking next terms of expansions into account .a modification of boundary conditions which would improve the transient regimes at both ends of the y - dependence is also possible .newertheless in this simple modeling we observe some important characteristic features of real convection phenomenon as almost parallel streamlines and isotherms in the stability region ( as , for example in visualizations of interferometric study from ) .it follows from functional parameter behaviour inside the domain and small contribution of cubic therm in the expresion for temperature ( ) .our explicit solution form and parameter values estimation allows to conclude that : \1 .the streamlines and isotherms of the flow are almost paralel to the vertical heating plate surface in the domain of stability , \2 .velocity values of the fluid flow at starting edge of the plate are nonzero , \3 .the set of boundary conditions yields in the complete set of the solution parameter including the local grashof number and hence , the characteristic linear dimension length in normal to plate direction , \4 .the sesults allow to descibed the natural heat transfer phenomenon for given fluid in therms only the temperature difference and the plate heigth which are novel in comparison with former theories .99 y.jaluria.:natural convection heat and mass transfer ; pergamon press , oxford , 1980 .e .. schmidt and w.beckmann , das temperatur- und geschwindigkeitfeld vor einer wrme abgebenden senkrechten plate bei natrlicher konvektion , tech mech .u. thermodynamik , bd.1 , nr .10 , okt.1930 , pp.341 - 349 and bd . 1 , nr.11 , nov .1930 , pp.391 - 406 .h.c.li and g. p. 
peterson, experimental studies of natural convection heat transfer of al2o3/di-water nanoparticle suspensions (nanofluids), hindawi publishing corporation, advances in mechanical engineering, volume 2010, article id 742739 | the model under consideration is based on an approximate analytical solution of the two-dimensional stationary navier-stokes and fourier-kirchhoff equations. the approximations rest on the assumptions typical for natural convection: incompressibility of the fluid and the boussinesq approximation. we also assume that the component ( ) of the velocity orthogonal to the plate is negligibly small. the solution of the boundary problem is represented as a taylor series in the coordinate for the velocity and the temperature, which introduces functions of the vertical coordinate ( ) as coefficients of the expansion. the corresponding boundary problem formulation depends on parameters specific to the problem: the grashof number, the plate height ( ) and the gravity constant. the main result of the paper is the set of equations for the coefficient functions for an example choice of the number of expansion terms. the nonzero velocity at the starting point of the flow appears in such an approach as a development of the conventional boundary layer theory formulation.
we consider canonical hamiltonian systems in the form where is a smooth real - valued function .our interest is in researching numerical methods that provide approximations to the true solution along which the energy is precisely conserved , namely the study of energy - preserving methods form a branch of _ geometrical numerical integration _ , a research topic whose main aim is preserving qualitative features of simulated differential equations . in this context ,symplectic methods have had considerable attention due to their good long - time behavior as compared to standard methods for odes . a related interesting approach based uponexponential / trigonometric fitting may be found in .unfortunately , symplecticity can not be fully combined with the energy preservation property , and this partly explains why the latter has been absent from the scene for a long time . among the first examples of energy - preserving methods we mention discrete gradient schemes which are defined by devising discrete analogs of the gradient function . the first formulae in this class had order at most two but recently discrete gradient methods of arbitrarily high order have been researched by considering the simpler case of systems with one - degree of freedom . here, the key tool we wish to exploit is the well - known line integral associated with conservative vector fields , such us the one defined at , as well as its discrete version , the so called _ discrete line integral_. interestingly , the line integral provides a means to check the energy conservation property , namely & = h\displaystyle \int_0 ^ 1 \nabla^t h(y(t_0+\tau h ) ) j^t \nabla h(y(t_0+\tau h ) ) \d \tau = 0 , \end{array}\ ] ] with , that can be easily converted into a discrete analog by considering a quadrature formula in place of the integral .the discretization process requires to change the curve in the phase space to a simpler curve ( generally but not necessarily a polynomial ) , which is meant to yield the approximation at time , that is , where is the order of the resulting numerical method . 
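the vanishing of the line integral above rests on the skew-symmetry of the structure matrix: the integrand ∇h(y)ᵀ j ∇h(y) is identically zero. a short symbolic check, for an assumed sample hamiltonian (not one of the test problems of this paper), is:

```python
# symbolic check that the integrand of the line integral vanishes identically:
# grad(H)^T J grad(H) = 0 for any H, because J is skew-symmetric.
import sympy as sp

q, p = sp.symbols('q p')
H = p**2 / 2 + sp.cos(q)                  # sample hamiltonian (an assumption, not from the paper)
J = sp.Matrix([[0, 1], [-1, 0]])          # canonical structure matrix
gradH = sp.Matrix([sp.diff(H, q), sp.diff(H, p)])

print(sp.simplify((gradH.T * J * gradH)[0]))   # prints 0
```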
in a certain sense, the problem of numerically solving while preserving the hamiltonian function is translated into a quadrature problem .for example , consider the segment , with ] and weights , having degree of precision .we thus obtain to get the energy conservation property we impose that be orthogonal to the above sum , and in particular we choose ( for the sake of generality we use in place of to mean that the resulting method also makes sense when applied to a general ordinary differential equation ) formula defines a runge kutta method with butcher tableau , where and are the vectors of the abscissae and weights , respectively .the stages are called _ silent stages _ since their presence does not affect the degree of nonlinearity of the system to be solved at each step of the integration procedure : the only unknown is and consequently defines a mono - implicit method .mono - implicit methods of runge kutta type have been researched in the past by several authors ( see , for example , for their use in the solution of initial value problems ) .methods such as date back to 2007 and are called -stage trapezoidal methods since on the one hand the choice , , leads to the trapezoidal method and on the other hand all other methods evidently become the trapezoidal method when applied to linear problems .generalizations of to higher orders require the use of a polynomial of higher degree and are based upon the same reasoning as the one discussed above . up to now, such extensions have taken the form of runge kutta methods .it has been shown that choosing a proper polynomial of degree yields a runge kutta method of order with stages .the peculiarity of such energy - preserving formulae , called hamiltonian boundary value methods ( hbvms ) , is that the associated butcher matrix has rank rather than , since stages may be cast as linear combinations of the remaining ones , similarly to the stages in . as a consequence ,the nonlinear system to be solved at each step has dimension instead of , which is better visualized by recasting the method in block - bvm form . in the case where is not a polynomial ,one can still get a _practical _ energy conservation by choosing large enough so that the quadrature formula approximates the corresponding integral to within machine precision . strictly speaking , taking the limit as leads to limit formulae where the integrals come back into play in place of the sums .for example , letting in just means that the integral in must not be discretized at all , which would yield the _ averaged vector field _method , ( see for details ) . in this paperwe start an investigation that follows a different route . 
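the discrete line integral idea can be made concrete with the simplest choice of curve, the segment joining y_n to y_{n+1}, i.e. the averaged vector field method just mentioned. the sketch below is not the method derived later in this paper; it only illustrates how replacing the integral by a gauss-legendre quadrature that is exact for a polynomial hamiltonian yields energy conservation up to the tolerance of the nonlinear solver. the test hamiltonian, the node count and the fixed-point iteration are implementation assumptions.

```python
# sketch of a discrete-line-integral (averaged vector field) step
#   y_{n+1} = y_n + h * J * sum_l w_l * grad(H)((1 - c_l) * y_n + c_l * y_{n+1}),
# with a gauss-legendre rule that is exact for the quartic test hamiltonian,
# so the energy drift is limited only by the solver tolerance and roundoff.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def H(y):                                   # assumed polynomial test hamiltonian
    q, p = y
    return 0.5 * p**2 + 0.25 * q**4

def gradH(y):
    q, p = y
    return np.array([q**3, p])

# two gauss-legendre nodes on [0, 1]: degree of precision 3, enough for grad(H) here
x, wts = np.polynomial.legendre.leggauss(2)
c, w = 0.5 * (x + 1.0), 0.5 * wts

def step(y, h, tol=1e-14, maxit=100):
    y_new = y.copy()
    for _ in range(maxit):                  # fixed-point iteration; y_{n+1} is the only unknown
        avg = sum(wl * gradH((1.0 - cl) * y + cl * y_new) for cl, wl in zip(c, w))
        y_next = y + h * (J @ avg)
        if np.linalg.norm(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new

y0 = np.array([1.0, 0.5])
y, h, drift = y0.copy(), 0.1, 0.0
for _ in range(2000):
    y = step(y, h)
    drift = max(drift, abs(H(y) - H(y0)))
print(f'max |H(y_n) - H(y_0)| over 2000 steps: {drift:.2e}')
```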
unlike the case with hbvms, we want now to take advantage of the previously computed approximations to extend the class in such a way to increase the order of the resulting methods , much as the class of linear multistep method may be viewed as a generalization of ( linear ) one step methods .the general question we want to address is whether there exist -step mono - implicit energy - preserving methods of order greater than two .clearly , the main motivation is to reduce the computational cost associated with the implementation of hbvms .the purpose of the present paper is to give an affermative answer to this issue in the case .more specifically , the method resulting from our analysis , summarized by formula , may be thought of as a nearly linear two - step method in that it is the sum of a fourth order linear two - step method , formula , plus a nonlinear correction of higher order .the paper is organized as follows . in section [ def_methods ]we introduce the general formulation of the method , by which we mean that the integrals are initially not discretized to maintain the theory at a general level . in this sectionwe also report a brief description of the hbvm of order four , since its properties will be later exploited to deduce the order of the new method : this will be the subject of section [ analysis_sec ] .section [ discr_sec ] is devoted to the discretization of the integrals , which will produce the final form of the methods making them ready for implementation .a few test problems are presented in section [ test_sec ] to confirm the theoretical results .suppose that is an approximation to the true solution at time , where is the stepsize of integration .more precisely , we assume that * with ; * , which means that lies on the very same manifold as the continuous solution .the two above assumptions are fulfilled if , for example , we compute by means of a hbvm ( or an -hbvm ) of order .the new approximation is constructed as follows .consider the quadratic polynomial that interpolates the set of data .expanded along the newton basis defined on the nodes , , , the polynomial takes the form ( for convenience we order the nodes as ) as ranges in the interval ] of length , we have scaled the constants and by a factor two with respect to the values reported in . ]furthermore , it may be shown that the internal stage satisfies the order condition .evidently , the implementation of on a computer can not leave out of consideration the issue of solving the integrals appearing in both equations .two different situations may emerge : * the hamiltonian function is a polynomial of degree . in such a case ,the two integrals in are exactly computed by a quadrature formula having degree of precision .* is not a polynomial , nor do the two integrands admit a primitive function in closed form .again , an appropriate quadrature formula can be used to approximate the two integrals to within machine precision , so that no substantial difference is expected during the implementation process by replacing the integrals by their discrete counterparts .case ( a ) gives rise to an infinite family of runge - kutta methods , each depending on the specific choice ( number and distribution ) of nodes the quadrature formula is based upon ( see for a general introduction on hbvms and for their relation with standard collocation methods ) . 
for example , choosing nodes according to a gauss distribution over the interval ] is entirely contained in a ball which , in turn , is contained in the domain of .is an open simply connected subset of containing while , from the assumption ( ) , decreasing causes the point to approach . ]we set from and we have and hence consequently ( a ) is satisfied by choosing and . concerning ( b ) ,we observe that hence with bounded with respect to . since vanishes with ( see ( [ lcond ] ) ) , we can always tune in such a way that .[ lem2 ] the solution of satisfies . under the assumption ( ) ,may be regarded as a perturbation of system , since and are and close to respectively .. ] since , we can estimate the accuracy of as an approximation of by evaluating its distance from .let be the underlying quadratic curve associated with the hbvm defined by , namely considering that ( see ) from the first equation in and we get & \displaystyle + 8hj \int_0 ^ 1 \nabla^2 h(\tilde \gamma(\tau))\tau(\tau-1 ) \,\d\tau \cdot ( u_1-y_1 ) + o(||u_1-y_1||^2 ) \\[.4 cm ] = & \displaystyle u_2+o(h^5 ) .\end{array}\ ] ] if is small enough , will be inside the ball defined in lemma [ lem1 ] . the lipschitz condition yields ( see ) and hence .the above result states that defines a method of order which is a simplified ( non corrected ) version of our conservative method defined at . in section [ test_sec ] the behavior of these two methods will be compared on a set of test problems .we now state the analogous results for system .[ th1 ] under the assumption ( ) , for small enough , equation admits a unique solution satisfying .consider the solution of system .we have ( see ) and hence , by virtue of , ^t \left[\int_0 ^ 1 ( 2\tau-1)\nabla h(\tilde \gamma(\tau ) ) \ , \d \tau + o(h^5)\right]=o(h^5).\ ] ] since is bounded with respect to , it follows that , in a neighborhood of , system may be regarded as a perturbation of system , the perturbation term being .consider the ball : since , and , this ball is contained in defined in lemma [ lem1 ] and the perturbed function is a contraction therein , provided is small enough .evaluating the right - hand side of at we get which means that property ( b ) listed in the proof of lemma [ lem1 ] , with in place of , holds true for the perturbed function , and the contraction mapping theorem may be again exploited to deduce the assertion .as was stressed in section [ def_methods ] , formula is not operative unless a technique to solve the two integrals is taken into account .the most obvious choice is to compute the integrals by means of a suitable quadrature formula which may be assumed exact in the case where the hamiltonian function is a polynomial , and to provide an approximation to within machine precision in all other cases .hereafter we assume that is a polynomial in and of degree .since has degree two , it follows that the integrand functions appearing in the definitions of and at and have degree and respectively and can be solved by any quadrature formula with abscissae in ] .right picture .roundoff errors may cause a drift of the numerical hamiltonian function ( upper line ) which can be easily taken under control by coupling the method with a costless correction procedure like the one described at .,title="fig:",width=253,height=188 ] produced by methods , with .parameters : stepsize , integration interval $ ] .right picture .roundoff errors may cause a drift of the numerical hamiltonian function ( upper line ) which can be easily taken under control by coupling the 
method with a costless correction procedure like the one described at .,title="fig:",width=253,height=188 ]we have derived a family of mono - implicit methods of order four with energy - preserving properties .each element in the family originates from a limit formula and is defined by discretizing the integral therein by means of a suitable quadrature scheme .this process assures an exact energy conservation in the case where the hamiltonian function is a polynomial , or a conservation to within machine precision in all other cases , as is also illustrated in the numerical tests .interestingly , each method may be conceived as a perturbation of a two - step linear method .l. brugnano , f. iavernaro and d. trigiante , _ analisys of hamiltonian boundary value methods ( hbvms ) : a class of energy - preserving runge - kutta methods for the numerical solution of polynomial hamiltonian dynamical systems _ , ( 2009 ) ( submitted ) ( arxiv:0909.5659 [ arxiv:0909.5659 ] ) .l. brugnano , f. iavernaro and d. trigiante , _ hamiltonian boundary value methods ( energy preserving discrete line integral methods ) _ , jour . of numer .industr . andappl . math . * 5 * , 12 ( 2010 ) , 1737 ( arxiv:0910.3621 [ arxiv:0910.3621 ] ) .l. brugnano , f. iavernaro , d. trigiante , _ the lack of continuity and the role of infinite and infnitesimal in numerical methods for odes : the case of symplecticity _ , applied mathematics and computation ( to appear ) , doi : 10.1016/j.amc.2011.03.022[doi : 10.1016/j.amc.2011.03.022 ] , ( arxiv:1010.4538 ) . | we introduce a family of fourth order two - step methods that preserve the energy function of canonical polynomial hamiltonian systems . each method in the family may be viewed as a correction of a linear two - step method , where the correction term is ( is the stepsize of integration ) . the key tools the new methods are based upon are the line integral associated with a conservative vector field ( such as the one defined by a hamiltonian dynamical system ) and its discretization obtained by the aid of a quadrature formula . energy conservation is equivalent to the requirement that the quadrature is exact , which turns out to be always the case in the event that the hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough . the non - polynomial case is also discussed and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results . ordinary differential equations , mono - implicit methods , multistep methods , canonical hamiltonian problems , hamiltonian boundary value methods , energy preserving methods , energy drift . 65l05 , 65p10 . |
coupled oscillator models exhibit complex dynamics that has been observed in a wide range of different fields, including physical and biological models. synchronization, clustering, chaos and spontaneous switching between different cluster states have all been observed in such systems. other studies have examined coupling between two or more systems that may individually be chaotic, and a wide variety of types of synchronization have been found and analysed; see for example . we examine phase oscillator models, which are appropriate if the coupling between oscillators is weak compared to the attraction onto the limit cycle (e.g. ). although the coupling structure and strength are important for the dynamical behaviour of the system, the exact coupling function (which represents the nonlinearities in the oscillators and in the coupling) has a subtle effect on the collective behaviour of the system. research into the dynamics of coupled nonlinear oscillators has long explored the question ``what is the dynamics of a given system?''. a less frequently asked, but also very interesting, question is ``how can we design a coupled system to have specific dynamics?''. this latter question was considered by , who designed cluster states with a prescribed clustering by giving explicit conditions on the coupling function and its first derivative for a stable cluster state with a specific clustering to exist. they demonstrate specific coupling functions that give stable cluster states for any partition of the oscillators into groups, regardless of the number of oscillators and the size of each cluster. in this paper we go beyond in four ways. firstly, we examine three-cluster states and show that not only stable cluster states, but also cluster states with specific transverse stability properties, can be designed by a suitable choice of coupling function. secondly, we give some results on how the transverse stability can be varied independently of the tangential stability and hence exhibit possible bifurcation scenarios from transversely stable clustering. thirdly, we show examples of how nontrivial cluster states with three inequivalent clusters can be joined into a heteroclinic network. finally, we generalize some of the bifurcation results to more general multi-cluster states with an arbitrary number of clusters. we use a fourier representation of the coupling function associated with a system of oscillators to design general three-cluster states, as in .
the rest of the paper is organized as follows; for the remainder of this section we recall some of the notation and previous results on existence and stability of periodic cluster states .we define a notion of inequivalence of clusters within a cluster state and consider some sufficient conditions for clusters to be inequivalent .section [ sec:3cluster ] recalls and extends some basic results on the appearance of tangentially stable but transversely unstable three - cluster states .we present in theorem [ thm : transstab ] a characterization of transverse stability , and in corollary [ cor : couplingfunction ] a result on transverse bifurcation of three - cluster states .section [ sec : heteroclinic ] presents what we claim is the smallest possible cluster state with three inequivalent nontrivial clusters ( requiring at least oscillators ) and gives some examples how these may be connected into robust attracting heteroclinic networks .finally , section [ sec : conclusion ] discusses some consequences of this work , including a generalization of corollary [ cor : couplingfunction ] . in this paperwe consider phase oscillators that are all - to - all coupled and governed by the following generalization of kuramoto s model system of coupled phase oscillators : where is the phase of the oscillator , and is a -periodic nonlinear _ coupling function _ that we assume is smooth and represented by a truncated fourier series as in : where ( ) and ( ) are the real coefficients and is the number of fourier modes .note that the coupling function derived from weakly coupled nonlinear phase oscillators will typically have several non - zero modes in its fourier series , even if the oscillators are close to hopf bifurcation .conditions on the coupling function and its first derivative that ensure the existence and stability of desired cluster states in the system ( [ eq : coupledoscillatorsystem ] ) are derived in .note that the system is invariant under `` spatial '' symmetries acting by permutation of the components and `` temporal '' symmetries given by for any .we now look at periodic cluster states in a bit more depth .consider a partition into clusters , where ; each of form a cluster of size for and .we say a cluster is a _ multi - cluster _ if .the cluster is said to be _ nontrivial _ if . a periodic orbit of ( [ eq : coupledoscillatorsystem ] ) defines an associated _ clustering _ ] .we say two clusters of $ ] are _ equivalent _ if there is a symmetry in that maps one cluster to the other .otherwise they are said to be _a clustering is said to be _ phase non - degenerate _ if the phase difference between two different clusters is only attained by those clusters ; more precisely we say the cluster phases are _ non - degenerate _ if for all with , , unless and . as a metric on the circle .] as noted in , a sufficient condition for this is that the phase differences are rationally independent of each other and of .the following more general statement follows from the definitions and theorem [ thm : asiso ] ; note that if the phase differences are rationally independent this implies ( c ) .suppose is a cluster state .any of the following is a sufficient condition for all clusters to be inequivalent : * is prime and at least one cluster is nontrivial . *no two clusters have the same size * the clustering is phase non - degenerate * proof : * in case ( a ) , theorem [ thm : asiso ] gives that the only possible factorizations have or .the latter case is ruled out if one of the . 
in case( b ) , this also implies that in theorem [ thm : asiso ] .finally in case ( c ) , if then there will be clusters which can be chosen such that and ( if , otherwise we can choose .this implies that the clustering will be phase degenerate and conversely , phase non - degeneracy implies that and the clusters are inequivalent . given a partition let us now define the subspace in which clustering occurs as follows : = : if there exists such that then because ( [ eq : coupledoscillatorsystem ] ) is equivariant under the action of by permutation of the oscillators and by the circle group ( [ eq : tsymm ] ) , these cluster states are simply fixed point subspaces for groups conjugate to , i.e. with . the system ( [ eq : coupledoscillatorsystem ] ) can be simplified on as follows ; as if is in the cluster then we say and the system reduces to for .the dynamics of a periodic multi - cluster state can then be expressed as for where represents the relative phases of clustering and is the frequency of the periodic orbit .this means that : where .computing the difference between the first equation and the remaining ones gives : .defining as in means that we can rewrite ( [ eq : frequency ] ) as equations ( [ eq : existenceequations ] ) are conditions that that the coupling function should satisfy for existence of such a periodic cluster state .we also refer to for a discussion of stability where it is shown that a cluster state is linearly stable if and only if it is linearly stable in both of the following senses : * * tangential stability : * ( also called inter - cluster stability ) to determine the stability of the state to the change in its phases that respects the clustering we consider the linearized stability for perturbations , and writing , gives the following : where , as stated in , is the matrix : \ ] ] where is the kronecker delta .the matrix has eigenvalues ( including one trivial value ) and for tangential stability we require that all the other tangent eigenvalues have negative real parts .this means that a cluster is tangentially stable if : * * transverse stability : * ( also called intra - cluster stability ) the stability of a periodic state to changes of phases that change the clustering and is obtained by linearizing ( [ eq : coupledoscillatorsystem ] ) about , which gives the following : where ( as stated in ) is given by .\ ] ] this has real eigenvalues that are negative for a transverse stable cluster state ; put otherwise , a cluster state is transversely stable if : where is the number of nontrivial clusters ( i.e. those with more than one oscillator ) .a coupling function associated with a stable cluster state will satisfy ( [ eq : existenceequations ] ) , ( [ eq : tangentstability ] ) , and ( [ eq : transversestability ] ) and we observe that the number of degrees of freedom on choosing the will always be surplus to requirements if enough fourier modes are chosen .note that the above holds for cluster states regardless of whether the clusters are inequivalent or not .if two clusters are equivalent then the transverse eigenvalues for those clusters will be equal ; if the clusters are inequivalent then generically the transverse eigenvalues for those clusters will be unequal .we now derive conditions for tangential and transverse stability of three - cluster states for the system ( [ eq : coupledoscillatorsystem ] ) of globally coupled phase oscillators .consider a periodic state where all clusters are nontrivial ( i.e. 
the clusters have sizes for , where ) .this implies that ; we explore the special case in more detail in section [ subsec : n=6 ] .recall from that the condition for existence of a three - cluster state is : the tangential stability is determined by ( [ eq : tangentstability ] ) , namely where and transverse stability is determined by the eigenvalues : where the multiplicities of are respectively .our first new result is the following sufficient condition for tangential stability of three - clusters : suppose there is a periodic three - cluster state such that for all .then the cluster is tangentially stable with complex contracting eigenvalues .* proof : * if then all terms in ( [ eq : mueqn ] ) are negative and so .moreover , note that ( [ eq : oldnu ] ) can be written in the form .\end{split } \label{eq : newnu}\ ] ] hence if then all terms in ( [ eq : newnu ] ) are positive and so . hence the eigenvalues ( [ eq : tangenteigenvalues2004 ] ) are complex with negative real parts and so the cluster is tangentially stable .the next result demonstrates that the tangential and transverse stability can be set independently of each other .we define without loss of generality ( renumbering the clusters if necessary ) let us assume that and we demonstrate the following : [ thm : transstab ] suppose that is such that there is a periodic three - cluster state with non - trivial clusters such that the clusters are tangentially stable and assume that .then we can classify the transverse stability as follows : * if then ( all clusters unstable ) . *if then and ( one stable cluster ) .* if then and ( two stable clusters ) .* if then ( all clusters stable ) .* proof : * these conclusions follows from noting that the conditions on ensure that ( [ eq : transeigs3 ] ) have zero , one , two or three positive transverse eigenvalues as stated .note that this is independent of the number of oscillators , although if one or more of the clusters are trivial , the transverse exponent for that cluster is not defined .if for any then the state will be at a bifurcation point and the stability is not determined at linear order .theorem [ thm : transstab ] can be used to prove the following corollary about bifurcation of three - cluster states involving changes in transverse stability .[ cor : couplingfunction ] suppose that is such that there is a periodic three - cluster state with non - trivial clusters , such that the clusters are tangentially stable .then there is a parametrized family of coupling functions and parameter values such that for all values of the parameter the cluster state remains with the same phases and tangentially stability and : 1 . if then ( all clusters unstable ). 2 . if then and ( one stable cluster ) .3 . if then and ( two stable clusters ) .4 . if then ( all clusters stable ) .* proof : * for this specific three cluster state with relative phases there will be a such that for all .now consider a smooth compactly supported periodic function such that for all with , and .one can verify that , for all and , and for all .hence the existence condition ( [ eq : existenceequations ] ) and the tangential stability conditions do not depend on , while the cases of theorem [ thm : transstab ] translate into the cases depending on . 
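the stability statements above can be probed numerically by direct simulation. the sketch below integrates the globally coupled system for six oscillators with a truncated-fourier coupling function, groups the phases into clusters, and estimates a transverse exponent from the growth or decay of a small intra-cluster splitting; the fourier coefficients and the sign convention g(θ_i − θ_j) are placeholders, not the specific coupling functions designed later in this paper.

```python
# numerical probe of clustering and transverse stability for the system
#   dtheta_i/dt = omega + (1/N) * sum_j g(theta_i - theta_j)
# the sign convention and the fourier coefficients below are assumptions.
import numpy as np

a = np.array([0.0, -0.3, 0.1, 0.02])     # placeholder cosine coefficients a_1..a_4
b = np.array([-1.0, 0.5, 0.2, -0.05])    # placeholder sine coefficients  b_1..b_4
modes = np.arange(1, len(a) + 1)

def g(phi):
    phi = np.asarray(phi, dtype=float)[..., None]
    return np.sum(a * np.cos(modes * phi) + b * np.sin(modes * phi), axis=-1)

def rhs(theta):
    diff = theta[:, None] - theta[None, :]
    return 1.0 + g(diff).mean(axis=1)        # omega = 1 (assumed)

def integrate(theta, h, steps):              # classical rk4, adequate for a sketch
    for _ in range(steps):
        k1 = rhs(theta)
        k2 = rhs(theta + 0.5 * h * k1)
        k3 = rhs(theta + 0.5 * h * k2)
        k4 = rhs(theta + h * k3)
        theta = theta + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return theta

rng = np.random.default_rng(1)
theta = integrate(rng.uniform(0.0, 2 * np.pi, 6), 0.01, 100_000)   # relax towards an attractor

# group oscillators whose phases agree modulo 2*pi to within a tolerance
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
clusters = []
for i in range(6):
    for cl in clusters:
        if abs(d[i, cl[0]]) < 1e-3:
            cl.append(i)
            break
    else:
        clusters.append([i])
print('cluster sizes:', sorted(len(c) for c in clusters))

# transverse exponent estimate: split one member of a nontrivial cluster by eps
# and fit the growth/decay rate of the intra-cluster phase difference
cl = next((c for c in clusters if len(c) > 1), None)
if cl is not None:
    i, j = cl[0], cl[1]
    eps, h, T = 1e-6, 0.01, 10.0             # keep T short so the splitting stays small
    pert = theta.copy()
    pert[i] += eps
    pert = integrate(pert, h, int(T / h))
    split = np.angle(np.exp(1j * (pert[i] - pert[j])))
    print('estimated transverse exponent:', np.log(abs(split) / eps) / T)
```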
in this sectionwe consider properties of cluster states with nontrivial and inequivalent clusters for , before applying the results from the previous section to give sufficient conditions for a nontrivial stable three cluster state for ( [ eq : coupledoscillatorsystem ] ) . by restricting to phase non - degenerate ( and hence inequivalent ) clusterswe do not consider the cases in theorem [ thm : asiso ] .it does not appear to be easy to characterise the solutions of the system of equation for the fourier coefficients analytically .for recall that ( [ eq : coupledoscillatorsystem ] ) can be written as the symmetries of interchange of the oscillators , , gives nine possible isotropy subgroups corresponding to possible cluster states that are listed in the table [ tab : fixedpointsubspacesofs_6 ] ; each periodic cluster state will reside in precisely one of these invariant subspaces ..conjugacy classes of isotropy subgroups and representative fixed - points subspaces for the action of on the phase space for globally coupled oscillators corresponding to inequivalent clusters .note that represents the trivial group while the number of conjugate groups and the number of point in the group orbit under are given in the last two columns .[ cols="^,^,^ , > , > " , ] left : the inset shows for case 1 the coupling function and a timeseries representing the phase difference of the oscillator relative to the 6th oscillator ; as a function of .observe that the six oscillators synchronize into three clusters for most of the time , but there are short times when the clusters break along a connection .right : although the cycle in case 2 has an additional unstable direction , the trajectory still appears to approach a heteroclinic cycle between three symmetrically related states . in both cases , i.i.d .white noise of amplitude is added to each component of the ode.,title="fig:",width=302 ] left : the inset shows for case 1 the coupling function and a timeseries representing the phase difference of the oscillator relative to the 6th oscillator ; as a function of .observe that the six oscillators synchronize into three clusters for most of the time , but there are short times when the clusters break along a connection .right : although the cycle in case 2 has an additional unstable direction , the trajectory still appears to approach a heteroclinic cycle between three symmetrically related states . in both cases , i.i.d .white noise of amplitude is added to each component of the ode.,title="fig:",width=302 ] the parameters listed as cases 0 , 1 and 2 in table [ tab:3clustereg ] all yield three - cluster states of type with varying numbers of positive transverse eigenvalues .while case 0 gives a stable cluster the dynamics for cases 1 and 2 are more subtle .as illustrated in figure [ fig : examplecase12 ] , a randomly chosen initial condition evolves towards a heteroclinic cycle that connects three symmetrically related periodic cluster states within the same invariant subspace .note that any phase non - degenerate -clustered periodic orbit will have a representative that exists within the subspace where note moreover that this invariant subspace will contain six distinct clustered periodic orbits given by cyclically permuting the phases of the clusters , due to the clusters being inequivalent .more specifically , suppose there is a point on a phase non - degenerate periodic -cluster for some .without loss of generality one can choose (in fact , one can assume that ) and phase non - degeneracy means that . 
as a consequence , and also points on periodic -clusters , where and although these are also within , the phase non - degeneracy means that they are distinct points ; there are three more that are in the same subspace which we write as the relative location of these six equilibria can be seen in figure [ fig:222clusterdiag ] ( left ) calculated using xppaut . for the coupling function in case 1 , table [ tab:3clustereg ] reveals that each of the has a single positive transverse eigenvalue corresponding to instability of one of the clusters but is otherwise stable .the unstable manifold will therefore be contained within a fixed point subspace of symmetry where one of the clusters is broken , but the numerical results in figure [ fig : examplecase12 ] indicate that this unstable manifold is within the stable manifold of one of another . if we write * * * then one can verify that there will be a sequence of connections ( heteroclinic orbit ) ( a ) from to that is transverse within , ( b ) from to that is transverse within and ( c ) from to that is transverse within as show in figure [ fig:222clusterdiag](right ) similarly there is a symmetrically related heteroclinic cycle that connects the remaining equilibria within . turning to case 2 in figure [ fig : examplecase12 ]we note that there are two transversely unstable directions from each of the in figure [ fig:222clusterdiag ] and `` accordingly '' a continuum of directions by which the trajectory can leave a neighbourhood of the .these need no longer be within any of the invariant subspaces . as can be seen for this case , the resulting dynamics `` nonetheless '' seems to return repeatedly to a cycle between the suggesting that the cycle is a milnor attractor .the results in section [ sec:3cluster ] ( concerning clustering behavior and bifurcations ) apply to systems with any number of coupled oscillators , regardless of the size of each cluster. however , these results are essentially local in phase space .more precisely , a given coupling function may admit a variety of different cluster states of varying stability , and there may be constraints on the possible cluster states and/or their stabilities. it would be interesting to understand the nature of such constraints , but we leave this for future work . by considering properties of three - cluster states with equal sized but inequivalent clusters , we find in section [ sec:3cluster ] a new type of robust heteroclinic attractor for oscillators .our results on the existence of robust connections for these heteroclinic attractors still rely on numerical observation of robust connections - it is a challenge to characterise coupling functions that give rise to such cycles in a more analytical ( or geometric ) manner .this does not seem to be an easy task , even if one restricts to cycles with one - dimensional unstable manifolds , i.e. 
between states that have clusterings consisting only of pairings .[ thm : couplingfunctiongen ] suppose that is such that there is a periodic -cluster state with non - trivial clusters of size , such that the clusters are tangentially stable .then there is a parametrized family of coupling functions and with real parameter and parameter values such that for all values of the parameter a nearby cluster state exists , is still tangentially stable and moreover : * proof : * the proof is similar to that of corollary [ cor : couplingfunction ] ; we make use of the fact that the transverse exponent of the cluster can be written in the form .\ ] ] the nontrivial assumption of the clusters mean one can choose a compactly supported perturbation with and so that , and are independent of while . | in this paper we examine robust clustering behaviour with multiple nontrivial clusters for identically and globally coupled phase oscillators . these systems are such that the dynamics is completely determined by the number of oscillators and a single scalar function ( the coupling function ) . previous work has shown that ( a ) any clustering can stably appear via choice of a suitable coupling function and ( b ) open sets of coupling functions can generate heteroclinic network attractors between cluster states of saddle type , though there seem to be no examples where saddles with more than two nontrivial clusters are involved . in this work we clarify the relationship between the coupling function and the dynamics . we focus on cases where the clusters are inequivalent in the sense of not being related by a temporal symmetry , and demonstrate that there are coupling functions that give robust heteroclinic networks between periodic states involving three or more nontrivial clusters . we consider an example for oscillators where the clustering is into three inequivalent clusters . we also discuss some aspects of the bifurcation structure for periodic multi - cluster states and show that the transverse stability of inequivalent clusters can , to a large extent , be varied independently of the tangential stability . |
in most countries, learning the microscopic origin of nature is considered to be an important topic in science education. however, there are not many student experiments or demonstrations which exhibit the existence of individual atoms in a direct and intuitive manner. the cloud chamber experiment is a rare exception; it provides the most direct and intuitive way to convince students of the existence of microscopic particles. beautiful tracks of particles draw the audience's attention and interest. various chambers have been developed and used in the classroom. most simple chambers seem to be based on the results of needels and nielsen and cowan, which use a block of dry ice and a beaker filled with ethanol vapour. the great simplicity of their chamber enables students to make a diy chamber at home or in the classroom. such a chamber works well if one uses a radioactive source. from spring 2008, one of the authors (s.z.) has been working on a classroom experiment program using a cloud chamber. a one-day experiment course has been given for local junior high school students. from spring 2011, some students (the rest of the authors) have joined this project as a part of a super science high school (ssh) activity. the typical setting of a chamber is shown in figure [ typicalchamber ]. a chamber is not so large; the smallest example is a glass laboratory dish with radius 5 cm and height 1 cm. a standard chamber we have used is a round glass container with radius 8.5 cm and 8.0 cm high. such chambers require a _radioactive source._ it is very hard to find particle tracks of background radiation in this kind of chamber; one will find only a few tracks per minute, and only in a very dark room. in japan, people's attention to radioactivity has increased greatly after the fukushima daiichi nuclear disaster following the tohoku earthquake and tsunami on 11 march 2011. a large part of the public unease or fear about radioactivity is due to insufficient education about radioactivity, since the curricula of primary and secondary schools lack serious and extensive study of radioactivity. after the tohoku earthquake, people have been nervous about radioactivity. in particular, parents do not wish their children to join scientific activities using a radioactive source, even a relatively safe one such as a gas mantle with thorium. therefore, we think it is very important to develop a sensitive cloud chamber which _does not require any radioactive source._ the chamber should be simple, small, portable and cheap, since our aim is educational applications such as a student experiment course or a workshop for citizens in the tohoku area. in this article, we present the construction of a cloud chamber that works well without radioactive sources. there is only a minor modification from the typical chamber shown in this section, but its performance is remarkable. we also present the result of a performance test.
as we will see later, the heat sink greatly improves the performance of our chamber. our method has an additional advantage in that the heat sink plays the role of a supporting base for the chamber, which prevents direct contact of the side walls with the refrigerant. clear walls without a bottom and a top are placed on the heat sink. the walls and top are made of square, 5 mm thick acrylic plates. the top plate is removable. a piece of black felt e is placed along three sides of the box; the upper half of it is soaked with ethanol. the total amount of ethanol is about 10 g for each experiment. light from a pc projector is introduced from the face without a piece of black felt. all of the above instruments are placed on a shallow styrofoam tray b. liquid nitrogen can be poured directly into this tray. powdered dry ice can also be used; in this case, the heat sink is placed above the powdered dry ice. in the placement mentioned above, tracks of particles can be observed from above. figure [ fig : working ] is a photo of our cloud chamber in operation. the pc projector is slanted to obtain a fine view of particle tracks. in closing this section, we would like to mention that our result is not new. the advantage of our chamber is good performance in spite of its smallness and simplicity. in fact, various examples of sensitive chambers have been known in japan. very large and expensive chambers for display are available in some science museums. a glass chamber like the one presented in the beginning of this section is also available as a product by rado ltd. the use of a small heat sink with liquid nitrogen was first presented in . another example of a sensitive chamber is given in . surprisingly, the very simple construction presented in the previous section is enough to see many particle tracks without any radioactive sources. an external electric field is also unnecessary. we now describe the operation of the cloud chamber. to begin with, liquid nitrogen was poured into the styrofoam box. our lab was at room temperature, so the temperature of the bottom heat sink rises as the liquid nitrogen evaporates. during the observation, this change of the temperature was measured by a thermocouple. a result is shown in figure [ fig : bottomtemp ]. the obtained data fits well to a quadratic curve. particle tracks begin to appear in a few minutes.
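the quadratic trend of the bottom-plate temperature can be reproduced with a short fitting script; the sample readings below are placeholders standing in for the thermocouple log, not the values measured in this run.

```python
# quadratic fit of bottom-plate temperature against elapsed time
# (sample data are placeholders, not the thermocouple readings from this run)
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10, 12])                    # minutes after pouring liquid nitrogen
temp = np.array([-120, -96, -75, -58, -44, -33, -25])    # degrees celsius, assumed values

coeffs = np.polyfit(t, temp, 2)                          # fit T(t) = c2*t**2 + c1*t + c0
fit = np.poly1d(coeffs)
residual = temp - fit(t)
print('fit coefficients (c2, c1, c0):', np.round(coeffs, 3))
print('rms residual: %.2f degC' % np.sqrt(np.mean(residual**2)))
```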
here we present our result with a video uploaded to youtube. figure [ fig : alphabeta ] is a captured image from the video. most thin and wiggly tracks frequently seen in the chamber, such as the left picture of figure [ fig : alphabeta ], can be considered as those of beta particles. we can see long tracks since the bottom plate is cooled uniformly. some of the long tracks are slightly curved even though no external magnetic field is imposed; we think this is due to multiple collisions with atoms in the air. an alpha particle can also be observed, as a short, thick line such as the right picture in figure [ fig : alphabeta ]. one can see that many droplets are formed along a track and fall down toward the bottom of the chamber. although many tracks are seen in the video, there should be no influence of the fukushima daiichi nuclear disaster, since no apparent rise of the air dose rate has been observed in our city (yokote, akita, japan) since 11th march. obtaining a high quality video is important both for scientific analysis and for publication on the web. the latter is especially important for education, since the lower quality videos found on the web are not enough to convey the beauty and excitement that people feel in this experiment. we find that a digital slr camera (in our case, an eos kiss x3) is very useful for this purpose: it has the larger ccd and better lens required for high quality video. in our experiment, the camera is put directly on the top plate as in figure [ fig : camera ], but the use of a rigid mount is better if available. a quantitative evaluation of the performance of a chamber will be useful for developing new cloud chambers and finding better working conditions. in fact, some test runs tell us that the number of particle tracks seems to differ from run to run. using a recorded video, we counted all tracks seen in the chamber regardless of their length, width and shape; therefore alpha particles, beta particles and other particles are not distinguished. figure [ fig : countpermin ] shows the relation between the count per 10 seconds and the elapsed time. we observe that, although the fluctuation is large, the count is independent of the bottom temperature until it reaches a `critical' value. our observation shows that the fall of the count begins around degrees. [ figure caption fragment: images at , and degrees respectively; black and white are converted. ] while the number of particle tracks does not change below the critical temperature, the number of background droplets affects the visibility of particle tracks. many droplets are seen at lower temperature, as shown in the left of figure [ fig : droplets ], and the contrast of the whole image is reduced at such low temperature. we also found that alpha particle tracks become thinner at lower temperature, as mentioned in . on the other hand, it becomes hard to identify particle tracks at high temperature. therefore, the best temperature range for observation should be the middle region in figure [ fig : countpermin ]. we expect our cloud chamber will be useful for various applications to education, for example a student research project or demonstrations in public. in particular, it will be important to measure numbers of particle tracks in a city where the dose rate is still high. also, although not examined in this paper, the high quality video enables various kinds of computer analysis, such as quantitative evaluation of the amount of condensation or automatic counting of particle tracks.
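a possible starting point for automating such counting is simple frame differencing on the recorded video. the sketch below (opencv 4.x) is only a rough heuristic and is not the procedure used for figure [ fig : countpermin ]; the file name, threshold and minimum blob area are assumptions that would need tuning, and because a track persists over several frames the raw numbers overcount unless detections are linked across frames.

```python
# rough sketch of automatic track counting by frame differencing (opencv 4.x);
# the file name, threshold and minimum blob area are assumed values needing tuning.
# note: a track persists over several frames, so these raw numbers overcount
# unless detections are linked across frames.
import cv2
import numpy as np

cap = cv2.VideoCapture('cloud_chamber.mp4')        # hypothetical file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
counts, prev, frame_idx = [], None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if prev is not None:
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > 50.0]   # ignore tiny droplets
        counts.append((frame_idx / fps, len(blobs)))
    prev, frame_idx = gray, frame_idx + 1
cap.release()

# bin the per-frame detections into counts per 10 seconds, as in the manual analysis
counts = np.array(counts)
if counts.size:
    edges = np.arange(0.0, counts[:, 0].max() + 10.0, 10.0)
    per_bin, _ = np.histogram(counts[:, 0], bins=edges, weights=counts[:, 1])
    print(per_bin)
```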
such analysis will be useful for student research projects. we would like to thank hiroki kanda of tohoku university for useful comments at the beginning of our project, ichiro itoh of yokote seiryo gakuin h.s. for providing a voltage multiplier, and ryoko yamaishi for support in the lab. we also thank kenichiro aoki of keio university for careful reading of the manuscript. this work is supported by the saito kenzo honour fund and the super science high school funding from the japan science and technology agency. andy foland s cloud chamber page http://w4.lns.cornell.edu/~adf4/cloud.html , how to build a cloud chamber ! http://www.youtube.com/watch?v=pewtysxftqk , how to make a diffusion cloud chamber http://www.youtube.com/watch?v=pvcdaa_vvvg , needels t s and neilsen c e 1950 a continuously sensitive cloud chamber rev. 21 976 , cowan e w 1950 a continuously sensitive cloud chamber rev. 21 991 , rado ltd. http://www.kiribako-rado.co.jp/goods-m.html , mori y 1994 let s see positron in a cloud chamber using liquid nitrogen toray science education award 26 18 , hayashi h 2007 a sensitive cloud chamber in magnetic field as a teaching material of atomic physics j. phys. 55 297 , sensitive cloud chamber http://www.youtube.com/watch?v=hcv3fdz1rfk , y mori http://www.youtube.com/watch?v=mwm-wo7dokw , http://www.youtube.com/watch?v=r9swciiqwvm , slatis h 1957 on diffusion cloud chambers nucl. instrum. 1 213 | we present a sensitive diffusion cloud chamber which does not require any radioactive sources. a major difference from a commonly used chamber is the use of a heat sink as its bottom plate. a result of a performance test of the chamber is given. |
today , bianchi type ix enjoys an almost mythical status in general relativity and cosmology , which is due to two commonly held beliefs : ( i ) type ix dynamics is believed to be essentially understood ; ( ii ) bianchi type ix is believed to be a role model that captures the generic features of generic spacelike singularities .however , we will illustrate in this paper that there are reasons to question these beliefs .the idea that type ix is essentially understood is a misconception .in actuality , surprisingly little is known , i.e. , proved , about type ix asymptotic dynamics ; at the same time there exist widely held , but rather vague , beliefs about mixmaster dynamics , oscillations , and chaos , which are frequently mistaken to be facts . there is thus a need for clarification : what are the known facts and what is merely believed about type ix asymptotics? we will address this issue in two ways : on the one hand , we will discuss the main rigorous results on mixmaster dynamics , the ` bianchi type ix attractor theorem ' , and its consequences ; in particular , we will point out the limitations of these results . on the other hand, we will provide the infrastructure that makes it possible to sharpen commonly held beliefs ; based on this framework we will formulate explicit refutable conjectures . historically , bianchi type ix vacuum and orthogonal perfect fluid models entered the scene in the late sixties through the work of belinskii , khalatnikov and lifshitz and misner and chitr .bkl attempted to understand the detailed nature of singularities and were led to the type ix models via a rather convoluted route , while misner was interested in mechanisms that could explain why the universe today is almost isotropic .bkl and misner independently , by means of quite different methods , reached the conclusion that the temporal behavior of the type ix models towards the initial singularity can be described by sequences of anisotropic kasner states , i.e. , bianchi type i vacuum solutions .these sequences are determined by a discrete map that leads to an oscillatory anisotropic behavior , which motivated misner to refer to the type ix models as mixmaster models .this discrete map , the kasner map , was later shown to be associated with stochasticity and chaos , a property that has generated considerable interest and confusion ,see , e.g. , and references therein . a sobering thought : all claims about chaos in einstein s equations rest on the ( plausible ) belief that the kasner map actually describes the asymptotic dynamics of einstein s equations ; as will be discussed below , this is far from evident ( despite being plausible ) and has not been proved so far .more than a decade after bkl s and misner s investigations a new development took place : einstein s field equations in the spatially homogeneous ( sh ) case were reformulated in a manner that allowed one to apply powerful dynamical systems techniques ; gradually a picture of a hierarchy of invariant subsets emerged where monotone functions restricted the asymptotic dynamics to boundaries of boundaries , see and references therein .based on work reviewed and developed in and by rendall , ringstrm eventually produced the first major proofs about asymptotic type ix dynamics .this achievement is remarkable , but it does not follow that all questions are settled . on the contrary , so far nothing is rigorously known , e.g. 
, about dynamical chaotic properties ( although there are good grounds for beliefs ) , nor has the role of type ix models in the context of generic singularities been established .the outline of the paper is as follows . in section[ basic ] we briefly describe the hubble - normalized dynamical systems approach and establish the connection with the metric approach . for simplicitywe restrict ourselves to the vacuum case and the so - called orthogonal perfect fluid case , i.e. , the fluid flow is orthogonal w.r.t . the sh symmetry surfaces; furthermore , we assume a linear equation of state . in section [ subsets ]we discuss the levels of the bianchi type ix so - called lie contraction hierarchy of subsets , where we focus on the bianchi type i and type ii subsets . in section [ nongeneric ]we present the results of the local analysis of the fixed points of the dynamical system and discuss the stable and unstable subsets of these points which are associated with non - generic asymptotically self - similar behavior .section [ maps ] is devoted to a study of the network of sequences of heteroclinic orbits ( heteroclinic chains ) that is induced by the dynamical system on the closure of the bianchi type ii vacuum boundary of the type ix state space ( which we refer to as the mixmaster attractor subset ) .these sequences of orbits are associated with the mixmaster map , which in turn induces the kasner map and thus the kasner sequences .we analyze the properties of non - generic kasner sequences and discuss the stochastic properties of generic sequences . in section [ furthermix ]we discuss the main ` mixmaster facts ' : ringstrm s ` bianchi type ix attractor theorem ' , theorem [ rinthm ] , and a number of consequences that follow from theorem [ rinthm ] and from the results on the mixmaster / kasner map .in addition , we introduce and discuss the concept of ` finite mixmaster shadowing ' . in the subsection` attractor beliefs ' of section [ stochasticbeliefs ] we formulate two conjectures that reflect commonly held beliefs about type ix asymptotic dynamics and list some open issues that are directly connected with these conjectures . in the subsection ` stochastic beliefs ' we address the open question of which role the mixmaster / kasner map and its stochastic properties actually play in type ix asymptotic dynamics .this culminates in the formulation , and discussion , of two ` stochastic ' conjectures . in section [ billiard ]we present the hamiltonian billiard formulation , see or ; we demonstrate that this approach yields a ` dual ' formulation of the asymptotic dynamics .we point out that the billiard approach is a formidable heuristic picture , but fails to turn beliefs into facts .we conclude in section [ concl ] with a discussion of the main themes of this paper . throughout this paperwe use units so that and , where is the speed of light and the gravitational constant .we consider vacuum or orthogonal perfect fluid sh bianchi type ix models ( i.e. , the fluid 4-velocity is assumed to be orthogonal to the sh symmetry surfaces ) with a linear equation of state ; we require the energy conditions ( weak / strong / dominant ) to hold , i.e. , and where , and where and are the energy density and pressure of the fluid , respectively . bywe exclude the special cases and , where the energy conditions are only marginally satisfied . , see . 
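the inequalities referred to above were lost in this transcription. for orientation, a plausible reconstruction for a linear equation of state is sketched below, writing the stripped relation as p = wρ (the notation is an assumption); the two marginally satisfied cases that are excluded would then be w = -1/3 and w = 1, the latter being the stiff fluid discussed next.

```latex
% assumed linear equation of state: p = w \rho with constant w and \rho \ge 0
% weak energy condition:     \rho \ge 0 ,\; \rho + p \ge 0       \Rightarrow  w \ge -1
% strong energy condition:   \rho + p \ge 0 ,\; \rho + 3p \ge 0  \Rightarrow  w \ge -\tfrac{1}{3}
% dominant energy condition: \rho \ge |p|                        \Rightarrow  -1 \le w \le 1
\rho \ge 0\,,\qquad p = w\rho\,,\qquad -\tfrac{1}{3}\le w \le 1\,,
\qquad\text{with the marginal cases } w=-\tfrac{1}{3}\text{ and } w=1 \text{ excluded.}
```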
the case is known as the stiff fluid case , for which the speed of sound is equal to the speed of light .the asymptotic dynamics of stiff fluid solutions is simpler than the oscillatory behavior characterizing the models with range , and well understood .( in the terminology introduced below , the stiff fluid models are asymptotically self - similar . )we will therefore refrain from discussing the stiff fluid case in this paper .] as is well known , see , e.g. , and references therein , for these models there exists a symmetry - adapted frame , [ bixform ] hence , the type ix models naturally belong to the so - called class a bianchi models , see table [ classamodels ] . let where .furthermore , define where denotes the second fundamental form associated with of the sh hypersurfaces .the quantities and can be interpreted as the expansion and the shear , respectively , of the normal congruence of the sh hypersurfaces . in a cosmological contextit is customary to replace by the hubble variable ; this variable is related to changes of the spatial volume density according to .evidently , in bianchi type ix ( and type viii ) there is a one - to - one correspondence between the ` orthonormal frame variables ' ( with ) and ; in particular , the metric is obtained from via .( for the lower bianchi types i , some of the variables are zero , cf .; in this case , the other frame variables , i.e. , , are needed as well to reconstruct the metric ; see for a group theoretical approach . ).the class a bianchi types are characterized by different signs of the structure constants , where is any permutation of .in addition to the above representations there exist equivalent representations associated with an overall change of sign of the structure constants ; e.g. , another type ix representation is . [ cols="^,^,^,^",options="header " , ] for all class a models except type ix the constraint implies that ; in bianchi type ix , however , is possible . for type ix ,define employing and and using that + \delta \geq 0\:,\ ] ] where equality holds iff , we find the function is strictly monotonically increasing along orbits of bianchi type ix . to see this we use and compute \delta\:,\ ] ] where we note that because of the constraints .in combination with it follows that and are _ bounded towards the past_. the right hand side of the reduced system ( [ ixeq ] ) consists of polynomials of the state space variables and is thus a regular dynamical system .solutions of of bianchi types i viii are global in , since implies the bounds and which control the evolution of in .solutions of of bianchi type ix are global towards the past , since is bounded from below ; this follows from , which yields , so that . the decoupled equation for , cf . ,yields that as , because is non - negative .since is bounded as , the asymptotics of can be bounded by exponential functions from above and below .it follows that the equation can be integrated to yield as a function of such that as .in addition to it is useful to also consider an auxiliary equation for the matter quantity , \omega\:.\ ] ] making use of and we conclude that for all orthogonal perfect fluid models with a linear equation of state we have \tau\right) ] ) , continues with a sequence of kasner parameters obtained via , and ends with a minimal value that satisfies , so that ^{-1} ] and its fractional part , i.e. 
, \ : , \quad x_s= \ { { \mathsf{u}}_s\}\:.\ ] ] the number represents the ( discrete ) length of era , which is simply the number of kasner epochs it contains .the final ( minimal ) value of the kasner parameter in era is given by , which implies that era number begins with the map is ( a variant of ) the so - called _ era map _ ; starting from it recursively determines , , and thereby the complete kasner sequence .the era map admits a straightforward interpretation in terms of continued fractions .consider the continued fraction representation of the initial value , i.e. , \:.\ ] ] the fractional part of is ] , i.e. , the continued fraction is purely periodic without any preperiod . ]consequently , the era map becomes periodic ( after the era ) , and we thus obtain a periodic sequence of eras and a periodic kasner sequence . it is straightforward to see that while the period of the era sequence is , the period of the kasner sequence is ; see the examples below . since the set of algebraic numbers of degree two ( or equivalently the set of equations with integer coefficients it is a subset of ) is a countable set , case ( ii ) is also non - generic .an irrational number is called badly approximable if its markov constant , let denote the distance from to the nearest integer , i.e. , .the markov constant of a number is defined as , see .it is known that for all .] is finite .if and only if is badly approximable , then the coefficients ( partial quotients ) in its continued fraction representation are bounded , i.e. , \qquad \text{with}\quad k_i \leq k\:\ , \forall i \tag{\ref{cfki}iii}\ ] ] for some positive constant .consequently , the sequence of eras and the kasner sequence are bounded , i.e. , . obviously , case ( ii ) is a subcase of case ( iii ) .( note , however , that there is probably no relationship between case ( iii ) and algebraic numbers of degree greater than , since it is expected that these numbers are well approximable . )the set of badly approximable numbers are a set of lebesgue measure zero , hence this case is non - generic . if and only if the initial kasner parameter is a well approximable irrational number , then the partial quotients in the continued fraction representation \tag{\ref{cfki}iv}\ ] ] are unbounded ( and we can construct a diverging subsequence from the sequence of partial quotients ) .this is the generic case , and hence generically the kasner sequence is infinite and unbounded . in terms of continued fractions ,the kasner sequence generated by ] , which is the golden ratio , then .it follows that the kasner sequence is also a sequence with period , if = 1 + \sqrt{2} ] generates an era sequence of period and an associated kasner sequence of period .finally let = 1 + \sqrt{3/2 } \simeq 2.2247 ] for even and \simeq 4.4495 ] and its associated era sequence , where ] for all . making use of these concepts ,a formulation of finite mixmaster shadowing is the following : let and . consider the sequence of transitions emanating from an initial kasner point and its taub - adapted tubular neighborhood ( associated with ) .then there exists such that each type ix orbit that is generated by initial data with shadows the finite piece of the sequence of transitions .( a proof of this statement in a slightly different form has been given by rendall .alternatively , one can invoke the regularity of the dynamical system , the center manifold reduction theorem and continuous dependence on initial data . 
) evidently , depends on the choice of and .more importantly , however , depends on the position of is not uniform ; in particular , if we consider a series of initial points that approach one of the taub points , then necessarily converges to zero along this series shadowing is more delicate in the vicinity of a taub point .we will return to this issue in some detail in the next section .finite mixmaster shadowing concerns any generic type ix orbit .let be an -limit point of the type ix orbit ; without loss of generality we may assume that is not one of the taub points .( the existence of such a point is ensured by corollaries [ possessesalpha][taubthenmany ] of section [ furthermix ] . ) for simplicity we assume that is associated with an irrational value of the kasner parameter , which guarantees that the sequence emanating from is an infinite sequence . since is an -limit point of , there exists a sequence of times , ( ) , such that ( ) .therefore , we observe a recurrence of phases , where the orbit shadows with an increasing degree of accuracy , i.e. , shadowing takes place for increasingly longer pieces of the sequence or in ever smaller neighborhoods .( if the kasner parameter of is rational , then the sequence is finite and terminates at a taub point . will shadow this finite sequence recurrently with an increasing degree of precision . )we will return to this issue in some more detail in the subsection ` stochastic mixmaster beliefs ' of section [ beliefs ] .the concept of shadowing leads directly to the concept of _ approximate sequences _ which we introduce next .consider a generic type ix orbit and the function , where the distance is given as .when the orbit traverses a ( sufficiently small ) neighborhood of a fixed point on , the function exhibits a unique local minimum .this is immediate from the transversal hyperbolic saddle structure of the fixed point .( however , the flow in the vicinity of the taub points is more intricate , since these points are not transversally hyperbolic . )it follows that the function can be used to partition into a sequence of segments in a straightforward manner : into segments seems natural ; it is important to note , however , that any definition depends on the formulation of the problem and is to a certain extent arbitrary .for instance , instead of using the minima of one might prefer to analyze the projection of the orbit onto -space and use the extrema of .however , the conclusions drawn from any construction of segments are quite insensitive to the details of the definition . ] the local minima of form an infinite sequence such that as .( this follows directly from corollaries [ possessesalpha ] and [ onethenall ] because has -limit point(s ) on the kasner circle and -limit points on the type ii subset . ) a _ segment _ of is defined to be the solution curve between two consecutive minima , i.e. , the image of the interval ] via the kasner map ; see section [ maps ] ; in particular , is finite .however , whether orbits with this particular past asymptotic behavior really exist is an open problem .* can there exist orbits such that is bounded but contains infinitely many -values ? 
is the kasner sequence generated by a badly approximable number a candidate ?the kasner sequence generated by a badly approximable number is an infinite sequence that is bounded .however , there must be at least one accumulation point of this sequence ; if contains , then must contain the accumulation point of the sequence as well ( since -limit sets are closed ) .if this accumulation point is a well approximable number , can not be bounded ; however , if the accumulation point is a quadratic surd , no inconsistencies arise .hence , a priori , there might exist orbits such that is not finite but still bounded .whether this is indeed the case is doubtful , but hard to exclude a priori .an open problem that might be quite separate from the questions raised above concerns the behavior of all type ix orbits save a set of measure zero .the past attractor of a dynamical system given on a state space is defined as the smallest closed invariant set such that for all apart from a set of measure zero . the past attractor of the type ix dynamical system coincides with the mixmaster attractor ( rather than being a subset thereof ) . why is this a belief and not a fact ?theorem [ rinthm ] implies that ; however , it is believed that .( the usage of the terminology ` mixmaster attractor ' for the set reflects the strong belief in the mixmaster conjecture . )it is difficult to imagine how the mixmaster attractor conjecture could possibly be violated .for instance , it seems rather absurd that the past attractor consists only of ( a subset of ) heteroclinic cycles but there exist no proofs .a closely related belief is the following stronger statement . for almost all bianchi type ix solutions the -limit set coincides with the mixmaster attractor .we use the term ` almost all ' in a noncommittal way without specifying the measure ; recall that the word ` generic ' already has the well - defined meaning of ` not past asymptotically self - similar ' .( the usage of the word ` generic ' is in accord with . )this subsection is concerned with the ( open ) question of which role the mixmaster / kasner map plays in the asymptotic evolution of type ix solutions .the basis for our considerations are the results of section [ maps ] where we discussed the mixmaster / kasner map and the stochastic aspects of ( generic ) kasner sequences .the _ mixmaster stochasticity conjecture _ supposes that these stochastic properties carry over to almost every type ix orbit when represented as an approximate kasner sequence .the approximate kasner sequence associated with a generic type ix orbit admits a stochastic interpretation in terms of the probability distribution associated with the kasner map , cf. section [ maps ] .( this holds with probability one , i.e. , for almost every generic type ix orbit . ) the mixmaster stochasticity conjecture is based on a rather suggestive simple idea : type ix evolution is like trying to follow a path of an ( infinite ) network of paths while the ground is shaking randomly , and where the shaking subsides with time but never stops .a type ix orbit tries to follow a sequence of transitions on the mixmaster attractor while random errors cause the orbit to lose track . 
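as an aside to the heuristic argument being developed here: the kasner map and the era map discussed earlier are simple to iterate numerically. the sketch below (python, illustration only, not code from the paper) implements the standard form of the maps in terms of the kasner parameter u > 1, and prints the era decomposition for the golden-ratio initial value mentioned in the text, whose continued fraction is [1;1,1,1,...].

```python
import math

def kasner_step(u):
    """one application of the kasner map (one epoch to the next), u > 1"""
    return u - 1.0 if u >= 2.0 else 1.0 / (u - 1.0)

def era_decomposition(u0, n_eras=8):
    """era map: era lengths are the partial quotients of the continued
    fraction of u0 (floating-point illustration only)"""
    u, eras = u0, []
    for _ in range(n_eras):
        k = math.floor(u)        # number of kasner epochs in this era
        x = u - k                # fractional part of the kasner parameter
        eras.append(k)
        if x == 0.0:             # rational u: the sequence terminates
            break
        u = 1.0 / x              # the next era starts here
    return eras

# the golden ratio gives the period-1 era sequence 1, 1, 1, ...
print(era_decomposition((1.0 + 5.0 ** 0.5) / 2.0))
```

for a well approximable initial value the partial quotients, and hence the era lengths, are unbounded, which is the generic, stochastic regime discussed above.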
due to the errors , the orbit is incapable of following one particular sequence forever ;after a finite time , the type ix orbit has deviated too much and it leaves the vicinity of the sequence .although , temporarily , the orbit is contained in a neighborhood of a different sequence , it is bound to lose track of that sequence as well eventually . accordinglythe type ix orbit is thrown around in the space of mixmaster sequence with the effect that the type ix orbit inherits the stochastic properties of generic mixmaster sequences . inthe following we paint a heuristic picture that makes this idea a little more concrete .consider a type ix orbit ( approximate sequence of transitions ) and the associated approximate mixmaster sequence .let be the exact mixmaster sequence with and consider the taub - adapted neighborhood of this sequence associated with some prescribed value .finite mixmaster shadowing entails that there exists a finite piece of the approximate sequence that is contained in the prescribed taub - adapted neighborhood of the exact sequence .however , at , the approximate mixmaster sequence leaves the prescribed tolerance interval due to the accumulation of errors .hence , at we reset the system and consider the exact mixmaster sequence with initial data .the approximate mixmaster sequence is contained in the taub - adapted neighborhood of the exact sequence up to . at readjustment becomes necessary .iterating this procedure and concatenating the finite pieces we are able to construct a sequence with the property that the approximate sequence is contained within the taub - adapted neighborhood ( associated with ) of for all .the sequence is a piecewise exact mixmaster sequence ; it is exact in intervals .the length of these intervals grows beyond all bounds as , because shadowing takes place with an increasing degree of accuracy .the error ( of the order ) between the approximate sequence and the piecewise exact sequence results from the accumulation of numerous small errors .this obliterates the deterministic origin of the problem and generates a ` randomness ' that leads to stochastic properties .accordingly , we expect that the exact sequences from which is built constitute a random sample of mixmaster sequences and thus truly reflect the stochastic properties of the mixmaster map . as a consequence , although the sequence is only a piecewise exact mixmaster sequence , it possesses the same stochastic properties as a generic mixmaster sequence . extrapolating this line of reasoningwe are able to complete the argument and find that the approximate sequence itself reflects the stochastic properties of the mixmaster map . to emphasize this aspect of stochasticity of use the term ` randomized approximate sequence ' .some comments are in order . in our discussionwe have assumed implicitly that the approximate sequence we consider can shadow any exact mixmaster sequence for a finite number of transitions only .this is not necessarily the case .a type ix orbit whose -limit set is one of the heteroclinic cycles ( if such an orbit exists ! ) is an obvious counterexample : for every there exists such that is contained in the taub - adapted -neighborhood of the mixmaster sequence associated with the cycle .however , we expect that this type of ` infinite shadowing ' holds at most for orbits of a set of measure zero. a more serious limitation of the intuitive picture that we have sketched is illustrated by the following related example . 
consider a type ix orbit whose -limit set is the heteroclinic cycle depicted in figure [ period1a ] ( where we note again that the existence of such an orbit is not proven ) .for the associated approximate kasner sequence we have as .consider the piecewise exact kasner sequence that is associated with the piecewise exact mixmaster sequence .each piece is generated from a value = [ 1;1,1,1,1,\ldots,1,k^{(i)}_n , k^{(i)}_{n+1},\ldots] ] is such a random sample .even so , the stochastic aspect does not carry over to the approximate sequence , since the approximate sequence leaves the neighborhood of each sequence before that sequence has entered its stochastic regime ( which is characterized by the remainder ] .the hamiltonian for the orthogonal bianchi type ix perfect fluid models is given by where is three - curvature and , cf .the remark following ( where we recall that ) . is the inverse of the so - called minisuperspace metric , i.e. , is a -dimensional lorentzian metric .the gravitational potential is given by where we have set the structure constants to one ; the potential for the fluid is given by ] becomes an infinitely high sharp wall described by an infinite step function that vanishes for and is infinite for .accordingly , only ` dominant ' terms in the potential are assumed to be of importance for the generic asymptotic dynamics , while ` subdominant ' terms , i.e. , terms whose exponential functions can be obtained by multiplying dominant wall terms , are neglected . in the present casethere are three dominant terms in ( which is the minimal set of terms required to define the billiard table ) , the three exponentials .dropping the subdominant terms in the limit leads to an asymptotic hamiltonian of the form + \sum_{a=1}^3 \theta_\infty(-2 w_a(\gamma))\:,\ ] ] where only the three dominant terms appear in the sum .the correspondence between the dynamical systems picture and the hamiltonian picture is easily obtained by noting that the hamiltonian constraint is proportional to the gauss constraint .the dominant terms correspond to the terms , , in ; the subdominant terms are collected in .the non - trivial dynamics described by resides in the variables , i.e. , in the hyperbolic space .it can be described asymptotically as geodesic motion in hyperbolic space constrained by the existence of sharp reflective walls , i.e. , the asymptotic dynamics is determined by the type ix ` billiard ' given in figure [ billiardix ] .based on the heuristic considerations the limiting hamiltonian is believed to describe generic asymptotic dynamics .[ cc][cc] [ cc][cc] [ cc][cc] the hamiltonian picture ( as represented by the billiard ) and the dynamical systems picture ( as represented by the mixmaster attractor ) are ` dual ' to each other . in the hamiltonian picturethe asymptotic dynamics is described as the evolution of a point in the billiard .straight lines ( geodesics ) in hyperbolic space correspond to kasner states .wall bounces correspond to bianchi type ii solutions ; the bounces change the kasner states according to the kasner map .since the billiard picture emphasizes the dynamics of the configuration space variables , one may say that the hamiltonian billiard approach yields a ` configuration space ' representations of the essential asymptotic dynamics . 
in the dynamical systems state space picturethe motion in the hamiltonian billiard picture becomes a ` wall ' of fixed points the kasner circle .the walls in the hamiltonian billiard are translated to motion in the dynamical systems picture bianchi type ii heteroclinic orbits ; these type ii transitions yield exactly the same rule for changing kasner states as the wall bounces : the kasner map .since the variables are proportional to , it is natural to refer to the projected dynamical systems picture as a ` momentum space ' representation of the asymptotic dynamics . summing up , ` walls ' and ` straight line motion ' switch places between the hamiltonian formulation and dynamical systems description of asymptotic dynamics , and the two picturesgive equivalent complementary asymptotic pictures .the above heuristic derivation of the limiting hamiltonian rests on two basic assumptions : it is assumed that is timelike in the asymptotic regime and that one can drop the subdominant terms .these assumptions correspond to assuming that and can be set to zero asymptotically , i.e. , the procedure precisely assumes theorem [ rinthm ] .( the hamiltonian analysis in ( * ? ?? * chapter 2 ) uses the same assumptions , and hence the present discussion is of direct relevance for that work as well . ) that these assumptions are highly non - trivial is indicated by the difficulties that the proof of theorem [ rinthm ] has presented ; cf .an alarming example is bianchi type viii ; in this case the heuristic procedure in this respect leads to exactly the same asymptotic results , but so far there exist no proof that and tend to zero towards the singularity for generic solutions .we elaborate on the differences between type viii and type ix in .moreover , like in the state space picture there exists no proof that all of the possible billiard trajectories are of relevance for the asymptotic dynamics of type ix solutions .for example , there exists a correspondence between periodic orbits in the dynamical systems picture and the hamiltonian billiard picture , and it is not excluded a priori that solutions are forced to one , or several , of these .we emphasize again that all proposed measures of chaos that take billiards as the starting point , see and references therein , rely the conjectured connection between the mixmaster map and asymptotic type ix dynamics .the above discussion shows that there are non - trivial assumptions and subtle phenomena that are being glossed over in heuristic billiard ` derivations . 'nevertheless , we do believe that the billiard procedure elegantly uncovers the main generic asymptotic features , and it might be fruitful to attempt to combine the dynamical systems and hamiltonian picture in order to prove the mixmaster conjectures .a tantalizing hint is that in the billiard picture becomes an asymptotic constant of the motion . rewriting this dimensional constant of the motion in terms of the dynamical systems variables yields \:,\ ] ] where and is a constant .the purpose of this paper is two - fold . on the one hand, we analyze the main known results on the asymptotic dynamics of bianchi type ix vacuum and orthogonal perfect fluid models towards the initial singularity . the setting forour discussion is the hubble - normalized dynamical systems approach , since this is essentially the set - up which has led to the first rigorous statements on bianchi type ix asymptotics .( we choose slightly different variables to emphasize the permutation symmetry that underlies the problem . 
) the main result ( ` mixmaster fact ' ) is theorem [ rinthm ] which is due to ringstrm ; for an alternative proof see .this theorem , in conjunction with an analysis of the mixmaster attractor , leads to a number of further rigorous results which we list as consequences .on the other hand , we draw a clear line between rigorous results ( ` facts ' ) and heuristic considerations ( ` beliefs ' ) .we make explicit that the implications of theorem [ rinthm ] and its consequences are rather limited , in particular , the rigorous results do not give any information on the details of the oscillatory nature of mixmaster asymptotics .the mathematical methods required to obtain proofs about the actual asymptotic mixmaster oscillations are yet to be developed ; it is likely that radically new ideas are needed .this paper provides the infrastructure that might yield the basis for further developments .our framework enables us to transform vague beliefs to a number of specific conjectures that describe the expected ` complete picture ' of bianchi type ix asymptotics .the arguments we give in their favor are based on an in - depth analysis of the mixmaster map and its stochastic aspects in combination with the dynamical systems concept of shadowing .we conclude with a few pertinent comments .first , as elaborated in there exists no corresponding theorem to theorem [ rinthm ] in other oscillatory bianchi models such as bianchi type vi or viii ; this suggests that the situation in the general inhomogeneous case is even more complicated than expected .furthermore , numerical studies are incapable of shedding light on the asymptotic limit .this is mainly due to the accumulation of inevitable random numerical errors that make it a priori impossible to track a particular type ix orbit .finally , although the hamiltonian methods are a formidable heuristic tool , so far this approach has not yielded any proofs about asymptotics .nevertheless , it might prove to be beneficial to explore the possible synergies between dynamical systems and hamiltonian methods . in this paper and in we have encountered remarkable subtleties as regards the asymptotic dynamics of oscillatory singularities ; this emphasizes the importance of a clear distinction between facts and beliefs .we thank alan rendall , hans ringstrm , and in particular lars andersson for useful discussions .we gratefully acknowledge the hospitality of the mittag - leffler institute , where part of this work was completed .cu is supported by the swedish research council .motter and p.s .mixmaster chaos .127 ( 2001 ) .more qualitative cosmology 137 ( 1971 ) .o.i . bogoyavlensky and s.p .novikov singularities of the cosmological model of the bianchi type ix type according to the qualitative theory of differential equations 747 ( 1973 ) .o.i . bogoyavlensky .( springer - verlag , 1985 ) .global dynamics of the mixmaster model .2341 ( 1997 ) . h. ringstrm .curvature blow up in bianchi viii and ix vacuum spacetimes . 713h. ringstrm .the bianchi ix attractor . 405heinzle and c. uggla . a new proof of the bianchi type ix attractor theorem . preprint ( 2009 ) .t. damour , m. henneaux , and h. nicolai .cosmological billiards . r145 ( 2003 ) .h. friedrich , a.d .the cauchy problem for the einstein equations .127 ( 2000 ) .l. andersson , a.d .quiescent cosmological singularities 479 ( 2001 ) .jantzen and c. uggla . the kinematical role of automorphisms in the orthonormal frame approach to bianchi cosmology . 
353( 1999 ) .lin and r.m .proof of the closed - universe - recollapse conjecture for diagonal bianchi type ix cosmologies .3280 ( 2003 ) .heinzle , n. rohr , and c. uggla .matter and dynamics in closed cosmologies .083506 ( 2005 ) .spatially homogeneous dynamics : a unified picture . in _ proc .int . sch . e. fermi " course lxxxvi on gamov cosmology " _ , r. ruffini , f. melchiorri , eds .( north holland , amsterdam , 1987 ) and in _ cosmology of the early universe _ , r. ruffini , l.z .fang , eds .( world scientific , singapore , 1984 ) .arxiv : gr - qc/0102035 .collins and j.m .qualitative cosmology .419 ( 1971 ) .lim , c. uggla , and j. wainwright .asymptotic silence - breaking singularities .2607 ( 2006 ) .j. wainwright , m. j. hancock , and c. uggla .asymptotic self - similarity breaking at late times in cosmology .2577 ( 1999 ) . | we consider the dynamics towards the initial singularity of bianchi type ix vacuum and orthogonal perfect fluid models with a linear equation of state . surprisingly few facts are known about the ` mixmaster ' dynamics of these models , while at the same time most of the commonly held beliefs are rather vague . in this paper , we use mixmaster facts as a base to build an infrastructure that makes it possible to sharpen the main mixmaster beliefs . we formulate explicit conjectures concerning ( i ) the past asymptotic states of type ix solutions and ( ii ) the relevance of the mixmaster / kasner map for generic past asymptotic dynamics . the evidence for the conjectures is based on a study of the stochastic properties of this map in conjunction with dynamical systems techniques . we use a dynamical systems formulation , since this approach has so far been the only successful path to obtain theorems , but we also make comparisons with the ` metric ' and hamiltonian ` billiard ' approaches . |
continued advances in commodity processing and networking hardware make pc ( or workstation ) clusters a very attractive alternative for lattice qcd calculations . indeed , there are quite a few important problems that can be addressed on pc clusters , and many lattice physicists are taking advantage of this opportunity .however , for the most demanding problems in lattice qcd , e.g. dynamical fermion simulations with realistic quark masses , one would like to distribute the global volume over as many nodes as possible , resulting in a very small local volume per node .pc clusters are inadequate to deal with this case because the communications latency inherent in their networking hardware implies that the local volume must not be chosen too small if a reasonable sustained performance is to be achieved . in other words , for typical lattice qcd problemspc clusters do not scale well beyond a few hundred nodes . in custom - designed supercomputers such as qcdoc and apenext , the communications hardware is designed to reduce the latencies and to assist critical operations ( such as global sums ) in hardware . as a result , these machines are significantly more scalable and allow for much smaller local volumes .in addition , they provide low power consumption , a small footprint , and a very low price / performance ratio per sustained mflops . on the downside ,the development effort is considerably higher than for pc clusters , but this effort is amortized by the unique strengths of these machines .the qcdoc hardware has been described in detail in several previous publications , see refs . , therefore we only summarize its most important features here . the qcdoc asic , shown schematically in fig .[ fig : asic ] , was developed in collaboration with ibm research and manufactured by ibm .it contains a standard powerpc 440 core running at 500 mhz , a 64-bit , 1 gflops fpu , 4 mbytes of embedded memory ( edram ) , and a serial communications interface ( scu ) which has been tailored to the particular requirements of lattice qcd .the scu provides direct memory access , single - bit error detection with automatic resend , and a low - latency pass - through mode for global sums . also on the chipare several bus systems , controllers for embedded and external ( ddr ) memory , an ethernet controller , a bootable ethernet - jtag interface , and several auxiliary devices ( interrupt controller , i interface , etc . ) . a picture of one of the first asics , delivered in june of 2003 , is shown in fig .[ fig : asic_closeup ] .the physical design of a large machine is as follows .two asics are mounted on a daughterboard , together with two standard ddr memory modules ( one per asic ) with a capacity of up to 2 gbytes each .the only other nontrivial components on the daughterboard , apart from a few leds , are four physical layer chips for the mii interfaces ( two per asic ) and a 4:1 ethernet repeater which provides a single 100 mbit / s ethernet connection off the daughterboard . a picture of the very first two - node daughterboard is shown in fig .[ fig : db ] .[ fig : db ] a motherboards holds 32 such daughterboards , eight motherboards are mounted in a crate , and a large machine is built from the desired number of crates . 
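since the packaging just described fixes the node count per crate, the size of a large installation follows from simple arithmetic. the snippet below (python, back-of-the-envelope only) combines the packaging numbers with the 1 gflops peak per node; the 10 tflops target is the size of the planned large machines.

```python
asics_per_daughterboard = 2
daughterboards_per_motherboard = 32
motherboards_per_crate = 8

# 2 * 32 * 8 = 512 processing nodes per crate
nodes_per_crate = asics_per_daughterboard * daughterboards_per_motherboard * motherboards_per_crate

peak_per_node_gflops = 1.0      # 64-bit FPU at 500 MHz
target_peak_tflops = 10.0       # one of the planned large machines

nodes_needed = target_peak_tflops * 1000.0 / peak_per_node_gflops
crates_needed = nodes_needed / nodes_per_crate

print(f"{nodes_per_crate} nodes per crate")
print(f"a {target_peak_tflops:.0f} tflops (peak) machine needs about {nodes_needed:.0f} nodes, "
      f"i.e. roughly {crates_needed:.0f} crates")
```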
a picture of a qcdoc motherboard is shown in fig .[ fig : mb ] .the physics communications network of qcdoc is a 6-dimensional torus with nearest - neighbor connections .the two extra dimensions allow for machine partitioning in software so that recabling is not required .a 64-node motherboard has a topology , with three open dimensions and three dimensions closed on the motherboard ( one of which is closed on the daughterboard ) .the scu links run at 500 mbit / s and provide separate send and receive interfaces to the forward and backward neighbors in each dimension , resulting in a total bandwidth of 12 gbit / s per asic ( of which 8 gbit / s will be utilized in a 4-dimensional physics calculation ) .in addition to the physics network , there is an ethernet based network for booting , i / o , and debugging , as well as a global tree network for three independent interrupts . the ethernet traffic from / to each motherboard proceeds at 800 mbit / s to a commercial gbit - ethernet switch tree , a parallel disk system , and the host machine .the latter will be a standard unix smp machine with multiple gbit - ethernet cards .see fig .[ fig : network ] .as of the writing of this article ( september 2003 ) , all major subsystems of the qcdoc asic have been tested in single - daughterboard configurations ( 2 asics per daughterboard ) using a temporary test - jig .this configuration allows non - trivial communications between the two asics in one of the six dimensions ; for the remaining dimensions the asic communicates with itself in a loop - back mode . extensive memory tests with different sized commodity external ddr sdram modules have been done , tests of the 4 mbyte on - chip edram have been performed , and all the dma units have been used .high - performance dirac kernels have been run for wilson and asqtad fermion formulations , confirming the performance figures given in table [ tab : performance ] and ref .no problems with the asic have been found to date . with qcdoc motherboards now in hand for use ( fig .[ fig : mb ] ) , tests of 64 and 128 node machines are imminent . in our test - jig tests ,the asics appear to be working well close to the target frequency of 500 mhz . with a fully populated motherboard , and the more stable electrical environment provided by the motherboard as compared to our simple test - jigs, we will soon be able to test a large number of asics at the 500 mhz design frequency . from preliminary measurements ,the power consumption per node is about 5 w.one of the major goals of the development team was to make qcdoc accessible and deployable to a large scientific community , so every attempt has been made to allow users to use standard software tools and techniques . 
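the wrap-around addressing implied by the 6-dimensional torus described above is easy to picture in a few lines. the sketch below (python, purely illustrative) lists the forward and backward neighbours of a node on a periodic mesh; the extents used in the example describe a hypothetical 128-node partition, not necessarily the actual machine topology, whose exact extents were lost in this transcription.

```python
def torus_neighbors(coord, dims):
    """forward and backward nearest neighbours of `coord` on a periodic
    mesh (torus) whose extent in each dimension is given by `dims`"""
    neighbors = []
    for d in range(len(dims)):
        for step in (+1, -1):
            n = list(coord)
            n[d] = (n[d] + step) % dims[d]   # wrap-around closes the dimension
            neighbors.append(tuple(n))
    return neighbors

# hypothetical 128-node partition: extent 4 in one dimension, 2 in the other five
dims = (4, 2, 2, 2, 2, 2)
print(torus_neighbors((0, 0, 0, 0, 0, 0), dims))
```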
in order to reliably boot ,monitor and diagnose the 10,000 + nodes in a large qcdoc machine , and to allow user code maximum access to the high - performance capabilities of the hardware , we are writing a custom operating system for qcdoc .this is modeled on the qcdsp operating system , which has successfully allowed the use of a machine with a similar number of processing nodes .the qcdoc operating system ( qos ) has been improved in myriad ways , continuing to focus on ease - of - use for the end - user .we now give a more detailed list of the requirements of the operating system and discuss how these features have been implemented .in addition to booting the machine , the qos must diagnose hardware errors and monitor the hardware during user program execution .monitoring is accomplished by regular inspection of on - chip registers which monitor memory and communication link status , along with interrupt handlers which are invoked by changes in hardware state deemed crucial . after booting ,partitions of the machine may be requested for interactive use by a user or the queuing system .each partition will only run a single user program at a time ; all multitasking will be handled by the host front - end . during program execution ,the operating system provides host - qcdoc input / output access using commodity ethernet connections , access to special qcdoc features such as the mesh communication network and on - chip memory and access from qcdoc processing nodes to a parallel disk array .the qos is designed to coordinate the interactions between the host and qcdoc , insulating the user from many qcdoc - specific details while still providing detailed control over their execution environment .our solution is qos , written largely from scratch in c++ , c and some assembler .the software runs on the host computer and the qcdoc nodes and consists of several parts , namely : * the qdaemon on the host , which manages and monitors the entire qcdoc machine .all interactions go through the qdaemon .* the qclient layer on the host , which provides access to the qdaemon for a variety of planned user interfaces , such as the batch system and web applications , as well as the existing command line interface , the qcsh . * the qcsh on the host , a modified version of tcsh , which is currently in use and allows complete control and use of qcdoc .* the boot / run kernels on each node .the qdaemon is the heart of the operating system , and it is designed and implemented using modern c++ techniques , e.g. 
it is heavily templated , it employs posix threads to allow efficient use of a multi - processor front end , its queue calls and object lists are thread - safe , and it can drive multiple physical network interfaces .the single qdaemon on the host uses reliable udp to transport packets between qcdoc and the host , starting with basic udp packets in the boot procedure and adding an rpc protocol to this when the run kernels are loaded .the qdaemon controls the ip addressing scheme to the nodes , does the partitioning that users request , monitors machine status , directs i / o to the host and evokes hardware testing when needed .the qdaemon can be accessed by a root " user for system management and monitoring , but most users will only have access to a limited set of qdaemon s capabilities , those appropriate for the partition where they are running a job .the qdaemon also provides flexible software partitioning of qcdoc .the qclient is a library of interfaces which communicates with the qdaemon and can accept input from a variety of planned user tools . to access the qclient , and hence the qdaemon , access to the host computer must be gained by conventional authentication tools such as ssh or a secure web connection .currently , a command - line interface , the qcsh , communicates with the qdaemon via the qclient library . following the successful qcdsp model ,qcsh is a command - line interface . starting from a standard unix shell , tcsh ,extra built - in commands to control qcdoc have been added .users can use normal shell redirection to control i / o from qcdoc , programs can be executed on qcdoc and backgrounded in qcsh allowing further interaction between the qcsh and qdaemon ( but not qcdoc directly since it is executing a program ) , and standard shell scripts can be used .the special qcdoc commands in the qcsh all begin with the letter q. some examples are : qinit - establish a connection between the qcsh and the qdaemon ; qpartition_create - create a partition of qcdoc ; qboot - boot a qcdoc partition ; qrun - run a user program ; qhelp - display available qcommands .the qdaemon will only permit connections from the local host , which allows a unix socket to be passed to the qdaemon from a user interface like the qcsh .the qdaemon then does user i / o from / to this unix socket and can determine unix user and group identities directly from the host os kernel .this is a major simplification .while the qdaemon will be handling many users access to qcdoc , it does not have to open / close files and concern itself with protections .this is handled by the initial user interface , thereby ensuring correct ownership and permissions .the first software loaded to a qcdoc node is the boot kernel , which is loaded directly to the data and instruction caches of the 440 core via the ethernet - jtag hardware .only the 440 core must be functional for the boot kernel to execute .when execution begins , tests of the on - chip and ddr memory can be done .once the memory is known to be working , the boot kernel enables the standard 100 mbit ethernet port on the qcdoc asic and the run kernel is loaded down .the run kernel handles the activation of machine - global features such as the nearest - neighbor communications network .when these steps complete , which should be on the scale of ten minutes even for a large machine , it is available for general use . 
all of the steps outlined above are implemented and have been extensively used on our first asics .the run kernel provides support for users , including access to qcdoc features via system calls .part of the strategy for using rpc for the host run - kernel communication is to avoid having a separate communications protocol for disk access .we have written an nfs client for the run kernel that uses the standard rpc - based nfs protocol .the client supports two mount points and open / read / write / close functionality .this nfs support has already been tested on qcdoc .while providing a standard user environment for high - performance computing , the run - kernel will not support multi - tasking .we have chosen to keep the run - kernel lean and compact to ensure reliability and to keep the software task bounded . a major issue for a massively parallel computer is the effectiveness of its message passing .in hardware , qcdoc is a nearest - neighbor mesh , the majority of qcd based communications are also nearest - neighbor and consequently qos natively supports nearest - neighbor communications calls .the qos calls are implemented so as not to interfere with the low latency of the qcdoc communications hardware .the qcdoc hardware supports efficient global sums ( done via our pass - through hardware ) , which are also accessible via qos calls .we have also implemented the scidac lattice qcd software committee s qcd message passing ( qmp ) protocol on qcdoc .this protocol supports nearest - neighbor communications , which we efficiently map to the native qos communications calls , as well as arbitrary communications . for the latter , we will implement a manhattan - style routing .a simple ( but not trivial ) consequence of qcdoc using a standard processor from ibm s powerpc line is the availability of a whole arsenal of open source and commercial software tools , most notably the gnu tools and ibm s tools .on qcdoc , users will be able to use the gnu toolset , the closest thing to a standard across computer platforms .additionally , we have access to the high - performance commercial tools from ibm , such as the xlc / xlc compilers which we have seen outperform the gnu compilers on powerpc platforms by as much as a factor of two .we expect that users may do initial code development with the gnu toolset and only compile with the more restrictively - licensed ibm compilers when final performance is an issue .an issue for the larger potential user group for qcdoc is performance of existing physics codes , not written for a qcdoc or qcdsp type of computer architecture .the milc code was an obvious choice to test , since it is one of the major lattice simulation codes and is written in c. the milc code has been run on qcdoc , both the simulator and now the actual asic , concentrating on the asqtad action .unmodified milc code gives performance in the few percent range for small lattice volumes , which was easily improved by a few standard c - code modifications as described in .a summary of the performance for single precision milc code and our double precision assembly code is given in table [ tab : performance ] .l|c|c|c action & vol . & assem . &milc + wilson & & 47% + & & 54% + clover & & 56% + & & 59% + staggered & & 36% & 17% + & & & 21% + asqtad & & 43% & 15% + af & & & 14% + & & & 20% + [ tab : performance ]a 128-node prototype machine is currently being assembled at columbia . 
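the efficiencies in table [ tab : performance ] translate directly into sustained rates once the 1 gflops peak per node is folded in. the snippet below (python, illustration only) does the conversion for a few representative assembly-code figures from the table, using the roughly 10,000-node size of the planned large machines.

```python
peak_per_node_gflops = 1.0      # 64-bit, 500 MHz FPU
nodes = 10_000                  # order of magnitude of a large machine

# representative assembly-code efficiencies from table [ tab : performance ]
efficiency = {"wilson": 0.47, "clover": 0.56, "staggered": 0.36, "asqtad": 0.43}

for action, eff in efficiency.items():
    sustained_tflops = eff * peak_per_node_gflops * nodes / 1000.0
    print(f"{action:9s}: {eff:.0%} of peak -> ~{sustained_tflops:.1f} tflops sustained")
```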
assuming that no major problems are found , two machines of 10 tflops ( peak ) each will be available to ukqcd and the riken - bnl research center by the summer of 2004 .the operating system and user software needed to utilize these machines is progressing in hand with the hardware developments .approval is pending for a 3 tflops ( peak ) machine at columbia and a 20 tflops ( peak ) installation at bnl for the u.s .lattice community .* acknowledgments .* this work was supported in part by the u.s .department of energy , the institute of physical and chemical research ( riken ) of japan , and the u.k .particle physics and astronomy research council . | qcdoc is a massively parallel supercomputer whose processing nodes are based on an application - specific integrated circuit ( asic ) . this asic was custom - designed so that crucial lattice qcd kernels achieve an overall sustained performance of 50% on machines with several 10,000 nodes . this strong scalability , together with low power consumption and a price / performance ratio of $ 1 per sustained mflops , enable qcdoc to attack the most demanding lattice qcd problems . the first asics became available in june of 2003 , and the testing performed so far has shown all systems functioning according to specification . we review the hardware and software status of qcdoc and present performance figures obtained in real hardware as well as in simulation . |
nowadays , it is very clear how special relativity effects influence on measured data .the first celebrated example of this fact was the atmospheric muons decay explanation as a time dilation effect .this is the rossi - hall experiment . considering the mrzke - wheeler synchronization as the natural generalization to accelerated observers of einstein synchronization in special relativity ,we wonder whether mrzke - wheeler effects influence on measured data in nature .this question is also motivated by the fact that recently the twin paradox was completely solved in ( 1 + 1)-spacetime by means of these effects and it is natural to ask for empirical confirmation .of course these effects comprehend the well known special relativistic ones for inertial observers as well as the new ones .these new effects can be seen as corrections of the special relativistic ones due to the acceleration of the involved observer .+ a small deviation towards the sun from the predicted pioneer acceleration : for pioneer 10 and for pioneer 11 , was reported for the first time in .the analysis of the pioneer data from 1987 to 1998 for pioneer 10 and 1987 to 1990 for pioneer 11 made in improves the anomaly value and it was reported to be .this is known as the pioneer anomaly .+ considering that mrzke - wheeler tiny effects are difficult to measure , we careful looked for some observational object for which the searched effect could be appreciable .this search led us to the pioneer 10 .in fact , through a simple analytic formula for the mrzke - wheeler map exact calculation developed in this letter , computing the acceleration difference between the mrzke - wheeler and frenet - serret coordinates for the earth s translation around the sun , we see that this mrzke - wheeler long range effect is between and of the pioneer anomaly value .unfortunately , due to statistical errors in the measured anomaly , it is not possible to confirm the influence of the mrzke - wheeler acceleration effect on the measured pioneer data .moreover , a recently numerical thermal model based on a finite element method has shown a discrepancy of of the actual measured anomaly and due to the mentioned statistical errors , it was concluded there that the pioneer anomaly has been finally explained within experimental error of of the anomaly value : + _ ... to determine if the remaining represents a statistically significant acceleration anomaly not accounted for by conventional forces , we analyzed the various error sources that contribute to the uncertainties in the acceleration estimates using radio - metric doppler and thermal models ... we therefore conclude that at the present level of our knowledge of the pioneer 10 spacecraft and its trajectory , no statistically significant acceleration anomaly exists . _+ although it is tempting to think that the discrepancy found in is due to a long range mrzke - wheeler acceleration effect , it can not be confirmed .we hope that the ideas presented here could encourage other research teams in the search for other observational objects that could finally answer the question posed in this letter . +consider the -spacetime spanned by the vectors with the lorentz metric : respect to the basis .an observer is a smooth curve naturally parameterized with timelike derivative at every instant ; i.e. .we will say a vector is spatial if it is a linear combination of .a spatial vector is unitary if .+ consider a timelike vector in ; i.e. 
.we define the scaled lorentz transformation : where is the orthocronous lorentz boost transformation sending to the unitary vector ; i.e. the original and transformed coordinates are in standard configuration ( , and are colinear with , and respectively where the prime denote the spatial transformed coordinates and the others denote the original spatial coordinates ) . the scaled lorentz transformation has the following properties : + a smooth map is a mrzke - wheeler map of the observer if it verifies : for every real , positive real and unitary spatial vector ( see figure [ mw_coord ] ) .this map , , is clearly an extension of the einstein synchronization convention for non accelerated observers ; i.e. it is the natural generalization of a lorentz transformation in the case of accelerated observers .+ [ mwformula ] consider an observer .then , is a mrzke - wheeler map of the observer such that is a unitary spatial vector . _proof : _ recall that for every such that we have that . this way , because . from the formulait is clear that is smooth . + the last mrzke - wheeler map formula was written for the first time in for -spacetime where it was shown , in this particular case , that it is actually a conformal map .moreover , the twin paradox is solved in -spacetime . in the general case treated here ,the mrzke - wheeler map is no longer conformal .+ as an example , consider the uniformly accelerated observer in -spacetime along the axis : where is its natural parameter and such that is the observer acceleration .its mrzke - wheeler map is : \sigma_{0 } \\ & & + r\cosh \left(\frac{s}{r}\right)\left[\cosh \left(\frac{s}{r}\right ) + \sinh \left(\frac{s}{r}\right)\frac{x}{r}\right]\sigma_{1 } \\ & & + \frac{r}{r}\sinh \left(\frac{r}{r}\right)\left [ y\ \sigma_{2 } + z\ \sigma_{3 } \right ] \\\end{aligned}\ ] ] where . in this example , it is interesting that besides restricted to the plane is a conformal map ( as it was expected from ) , it is also also conformal restricted to the plane .the pioneer 10/11 data is measured from earth s dsn antennas ( deep space network ) and we wonder whether this data is affected by earth s translation around the sun .we comment about earth s rotation at the end of the section .we model the earth s translation as the uniformly rotating observer \ ] ] where is its natural parameter , is its lorentz contraction factor and .its mrzke - wheeler map is : \sigma_{0 } \\ & & + \left [ r\cos\left(\frac{\omega}{ck}r\right ) + x\ \sqrt{\frac{1}{k^{2 } } - \left(\frac{r}{r}\sin\left(\frac{\omega}{ck}r\right)\right)^{2 } } \right]\vec{a}(s ) \\ & & + \frac{1}{k}\ y\ \vec{b}(s ) \\ & & + z\ \sqrt{\frac{1}{k^{2 } } - \left(\frac{r}{r}\sin\left(\frac{\omega}{ck}r\right)\right)^{2}}\ \sigma_{3 } \\\end{aligned}\ ] ] where .we have chosen the framing corresponding to the coordinates such that is the frenet - serret framing of the observer ( see figure [ frenetserret ] ) : this expression was also obtained in in the particular case .it is interesting to notice the oscillatory term of the above map . in order to compare the spatial mrzke - wheeler coordinates with the frenet - serretcoordinates we consider the difference . because and , we have that and restricted to the region we have the approximation : + \ \vec{a}(s ) \\ & & + \ y\\vec{b}(s)+ z\ \sigma_{3 } \\\end{aligned}\ ] ] because the component is zero , we have the following transformation between the spatial mrzke - wheeler coordinates and the frenet - serret coordinates : where . 
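as a quick numerical sanity check on the uniformly rotating observer used above to model the earth's translation , and before the transformation just written is differentiated in the next paragraph , the following sketch verifies that the standard proper - time parameterisation of circular motion is a unit - speed timelike curve and recovers the centripetal scale of the earth's orbit . the parameterisation , the constants ( c , one au , one revolution per year ) and the finite - difference step are assumptions of this sketch and are not a transcription of the letter's notation .

```python
# sanity check on the rotating-observer model: with coordinate time t the
# worldline is x(t) = (c t, R cos(w t), R sin(w t), 0) and proper time s = t/gamma.
import numpy as np

c = 299_792_458.0                      # m/s
R = 1.495978707e11                     # 1 au in m
w = 2.0 * np.pi / (365.25 * 86400.0)   # rad/s, one revolution per year
v = w * R
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)

def O(s):                              # worldline as a function of proper time s
    t = gamma * s
    return np.array([c * t, R * np.cos(w * t), R * np.sin(w * t), 0.0])

ds = 1.0e4                             # a few hours, large enough to avoid round-off
u = (O(ds) - O(-ds)) / (2.0 * ds)      # four-velocity by central differences
norm = (u[0] ** 2 - u[1] ** 2 - u[2] ** 2 - u[3] ** 2) / c ** 2
a = (O(ds) - 2.0 * O(0.0) + O(-ds)) / ds ** 2
print(norm)                            # ~1: the curve is unit-speed and timelike
print(np.hypot(a[1], a[2]))            # ~ gamma^2 w^2 R ~ 5.9e-3 m/s^2 (centripetal scale)
# for comparison, the reported pioneer anomaly discussed above is of order 1e-9 m/s^2.
```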
because the pioneer s velocity and acceleration are very small respect to the natural scale of the problem , differentiating the above expression we have : where is the distance from the sun and is the acceleration . because the recorded pioneer data ( at least for pioneer 10 ) corresponds to the region between and , we can consider that where is the pioneer s distance from the sun and we have the following approximation : where is the pioneer s speed and is the angle between its radius vector from the sun and its velocity vector . computing the acceleration difference between the mrzke - wheeler and frenet - serretcoordinates at the pioneer s maximal speed , we have the result : and we see that it is between and of the measured pioneer anomaly . + the calculated difference points towards the edge when and in the opposite direction when . this would contradict the claim that the anomaly always points towards the sun made in the data analysis and .however , in the data analysis made in , it is claimed that it can not be confirmed whether the anomaly is sunwards , contrary to the earlier claim .+ finally , we would like to comment about a possible numerical analysis on the influence of earth s rotation on the measured data . in order to do so , we define the following framing dependent non abelian product of observers : this product is the generalization of the special relativistic velocities addition and has the following property : this way, the observer is the one we should consider and its mrzke - wheeler map is just the composition of the previously exactly calculated map of the uniformly rotating observer .unfortunately , the map gets really involved and the analysis must be done numerically .an analysis of the parameters involved in the rotation analysis , shows that the magnitude order of the long range mrzke - wheeler acceleration effect coincides with the one of the pioneer anomaly and should also be considered .although after strongly numerical evidence it is tempting to think that the discrepancy of the anomaly value found in is due to a long range mrzke - wheeler acceleration effect described in this letter , due to statistical errors in the measured anomaly it can not be neither confirmed nor neglected .we hope that the ideas presented here could encourage other research teams in the search for other observational objects that could finally answer whether mrzke - wheeler effects influence on measured data in nature .mrzke m , wheeler j , _ gravitation as geometry - i : the geometry of spacetime and the geometrodynamical standard meter _ gravitation and relativity , h .- y .chiu and w. f. hoffmann , eds ., w. a. benjamin , new york - amsterdam ( 1964 ) 40 . | we wonder whether mrzke - wheeler effects influence on measured data in nature . through a formula developed in this letter for the calculation of the mrzke - wheeler map of a general accelerated observer , we study the influence of the mrzke - wheeler acceleration effect on the nasa s pioneer anomaly and found that it is about a fifth of the anomaly value . due to statistical errors in the measured anomaly , it is not possible to neither confirm nor neglect the influence of the mrzke - wheeler acceleration effect on the measured pioneer data . we hope that the ideas presented here could encourage other research teams in the search for other observational objects that could finally answer the question posed in this letter . |
symmetry occurs in many constraint satisfaction problems .for example , in scheduling a round robin sports tournament , we may be able to interchange all the matches taking place in two stadia .similarly , we may be able to interchange two teams throughout the tournament . as a second example , when colouring a graph ( or equivalently when timetabling exams ) , the colours are interchangeable. we can swap red with blue throughout .if we have a proper colouring , any permutation of the colours is itself a proper colouring .problems may have many symmetries at once .in fact , the symmetries of a problem form a group . their action is to map solutions ( a schedule , a proper colouring , etc . ) onto solutions .symmetry is problematic when solving constraint satisfaction problems as we may waste much time visiting symmetric solutions .in addition , we may visit many ( failing ) search states that are symmetric to those that we have already visited .one simple but effective mechanism to deal with symmetry is to add constraints which eliminate symmetric solutions .unfortunately eliminating all symmetry is np - hard in general .however , recent results in parameterized complexity give us a good understanding of the source of that complexity . in this survey paper ,i summarize results in this area . for more background ,see .to illustrate the ideas , we consider a simple problem from musical composition. the all interval series problem ( prob007 in csplib.org ) asks for a permutation of the numbers 0 to so that neighbouring differences form a permutation of 1 to . for , the problem corresponds to arranging the half - notes of a scale so that all musical intervals ( minor second to major seventh ) are covered .this is a simple example of a graceful graph problem in which the graph is a path .we can model this as a constraint satisfaction problem in variables with iff the number in the series is .one solution for is : the differences form the series : .the all interval series problem has a number of different symmetries .first , we can reverse any solution and generate a new ( but symmetric ) solution : second , the all interval series problem has a value symmetry as we can invert values .if we subtract all values in ( 1 ) from , we generate a second ( but symmetric ) solution : third , we can do both and generate a third ( but symmetric ) solution : to eliminate such symmetric solutions from the search space , we can post additional constraints which eliminate all but one solution in each symmetry class . to eliminate the reversal of a solution , we can simply post the constraint : this eliminates solution ( 2 ) as it is a reversal of ( 1 ) . to eliminate the value symmetry which subtracts all values from , we can post : this eliminates solutions ( 2 ) and ( 3 ) . finally , eliminating the third symmetry where we both reverse the solution and subtract it from is more difficult .we can , for instance , post : & \leq_{\rm lex } & [ 10-x_{11 } , \ldots , 10-x_{1}]\end{aligned}\ ] ] note that of the four symmetric solutions given earlier , only ( 4 ) with , and satisfies all three sets of symmetry breaking constraints : ( 5 ) , ( 6 ) and ( 7 ) .the other three solutions are eliminated .we will need some formal notation to present some of the more technical results . 
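before introducing that notation , a small brute - force sketch ( plain python , standard library only ) makes the example above concrete : it enumerates all - interval series for a small n and keeps only the lexicographically smallest member of each symmetry class under reversal , value inversion and their composition . the comparison constraints used here are one plausible concrete reading of the three symmetry breaking constraints ( 5 ) - ( 7 ) discussed above , not a reproduction of the exact posted constraints .

```python
from itertools import permutations

def is_all_interval(x):
    """neighbouring absolute differences form a permutation of 1..n-1."""
    diffs = {abs(a - b) for a, b in zip(x, x[1:])}
    return diffs == set(range(1, len(x)))

def lex_leader(x):
    """keep only the lexicographically smallest member of each symmetry class
    under reversal, value inversion (v -> n-1-v) and their composition."""
    n = len(x)
    rev = x[::-1]
    inv = tuple(n - 1 - v for v in x)
    rev_inv = inv[::-1]
    return x <= rev and x <= inv and x <= rev_inv

n = 8
sols = [p for p in permutations(range(n)) if is_all_interval(p)]
kept = [s for s in sols if lex_leader(s)]
print(len(sols), "solutions,", len(kept), "after symmetry breaking")
```

running the sketch shows the number of surviving series dropping by roughly a factor of four , one representative per symmetry class .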
a _ constraint satisfaction problem _ ( csp ) consists of a set of variables , each with a finite domain of values , and a set of constraints .each _ constraint _ is specified by the allowed combinations of values for some subset of variables .for example , is a binary constraint which ensures and do not take the same values .global constraint _ is one in which the number of variables is not fixed .for instance , the global constraint ,n) ] .this is a conjunction of lex leader constraints , ensuring that , for each : unfortunately , enforcing domain consistency on this global constraint is np - hard .however , this complexity depends on the number of symmetries . breaking all value symmetry is fixed - parameter tractable in the number of symmetries .enforcing domain consistency on ) ] .this ensures that for all : that is , the first time we use is before the first time we use for all .for example , consider the assignment : this satisfies value precedence as 1 first occurs before 2 , 2 first occurs before 3 , etc .now consider the symmetric assignment in which we swap with : this does not satisfy value precedence as first occurs before .a constraint eliminates all symmetry due to interchangeable values . in , we give a linear time propagator for enforcing domain consistency on the constraint . in , we argue that can be derived from the lex leader method ( but offers more propagation by being a global constraint ) .another way to ensure value precedence is to map onto dual variables , which record the first index using each value .this transforms value symmetry into variable symmetry on the .we can then eliminate this variable symmetry with some ordering constraints : in fact , puget proves that we can eliminate _ all _ value symmetry ( and not just that due to value interchangeability ) with a linear number of such ordering constraints .unfortunately , this decomposition into ordering constraints hinders propagation even for the tractable case of interchangeable values ( theorem 5 in ) . indeed , even with _just _ two value symmetries , mapping into variable symmetry can hinder propagation .this is supported by the experiments in where we see faster and more effective symmetry breaking with the global constraint .this global constraint thus appears to be a promising method to eliminate the symmetry due to interchangeable values .a generalization of the symmetry due to interchangeable values is when values partition into sets , and values within each set ( but not between sets ) are interchangeable .the idea of value precedence can be generalized to this case .the global constraint ensures that values in each interchangeable set occur in order .more precisely , if the values are divided into equivalence classes , and the equivalence class contains the values to then ensures for all and . enforcing domain consistency on np - hard in general but fixed - parameter tractable in .another common type of symmetry where we can exploit special properties of the symmetry group is row and column symmetry .many problems can be modelled by a matrix model involving a matrix of decision variables . 
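before the row and column symmetries of such matrix models are developed below , here is a minimal sketch of the value precedence idea introduced above for fully interchangeable values . it is written in plain python ; the function names are illustrative and not taken from any solver api , and for simplicity the check only constrains values that actually occur in the assignment .

```python
def satisfies_precedence(assign):
    """true iff the first occurrence of value v comes before the first
    occurrence of value v+1, for every pair of consecutive values used."""
    first = {}
    for i, v in enumerate(assign):
        first.setdefault(v, i)
    used = sorted(first)
    return all(first[a] < first[b] for a, b in zip(used, used[1:]))

def canonical(assign):
    """relabel interchangeable values by order of first occurrence, which
    yields the unique representative satisfying value precedence."""
    relabel, out = {}, []
    for v in assign:
        relabel.setdefault(v, len(relabel) + 1)
        out.append(relabel[v])
    return out

print(satisfies_precedence([1, 1, 2, 1, 3, 2, 4, 2, 3]))   # True
print(satisfies_precedence([1, 1, 3, 1, 2, 3, 4, 3, 2]))   # False: 3 first occurs before 2
print(canonical([1, 1, 3, 1, 2, 3, 4, 3, 2]))              # [1, 1, 2, 1, 3, 2, 4, 2, 3]
```

the canonical form makes explicit why such constraints are safe : every assignment can be relabelled into exactly one representative that satisfies precedence .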
often the rows and columns of such matrices are fully or partially interchangeable .for example , the equidistant frequency permutation array ( efpa ) problem is a challenging combinatorial problem in coding theory .the aim is to find a set of code words , each of length such that each word contains copies of the symbols 1 to , and each pair of code words is at a hamming distance of apart .for example , for , , , , one solution is : this problem has applications in communications , and is closely related to other combinatorial problems like finding orthogonal latin squares ._ consider a simple matrix model for this problem with a by array of variables , each with domains to .this model has row and column symmetry since we can permute the rows and the columns of any solution .although breaking all row and column symmetry is intractable in general , it is fixed - parameter tractable in the number of columns ( or rows ) . with a by matrix , checking lex leader constraints that break all row and column symmetry is np - hard in general but fixed - parameter tractable in .* proof : * np - hardness is proved by theorem 3.2 in , and fixed - parameter tractability by theorem 1 in . note that the above result only talks about _ checking _ a constraint which breaks all row and column symmetry .that is , we only consider the computational cost of deciding if a complete assignment satisfies the constraint .propagation of such a global constraint is computationally more difficult . just row or column symmetry on their own are tractable to break . to eliminate all row symmetry we can post lexicographical ordering constraints on the rows .similarly , to eliminate all column symmetry we can post lexicographical ordering constraints on the columns .when we have both row and column symmetry , we can post a constraint that lexicographically orders both the rows and columns . this does not eliminate all symmetry since it may not break symmetries which permute both rows and columns .nevertheless , it is more tractable to propagate and is often highly effective in practice .note that can be derived from a _ strict _ subset of the constraints . unfortunately propagating such a constraintcompletely is already np - hard . with a by matrix , enforcing domain consistency on is np - hard in general .* proof : * see threorem 3 in .there are two special cases of matrix models where row and column symmetry is more tractable to break .the first case is with an all - different matrix , a matrix model in which every value is different .if an all - different matrix has row and column symmetry then the lex - leader method ensures that the top left entry is the smallest value , and the first row and column are ordered .domain consistency can be enforced on such a global constraint in polynomial time .the second more tractable case is with a matrix model of a function .in such a model , all entries are 0/1 and each row sum is 1 .if a matrix model of a function has row and column symmetry then the lex - leader method ensures the rows and columns are lexicographically ordered , the row sums are 1 , and the sums of the columns are in decreasing order .domain consistency can also be enforced on such a global constraint in polynomial time .the study of computational complexity in constraint programming has tended to focus on the structure of the constraint graph ( e.g. especially measures like tree width ) or on the semantics of the constraints ( e.g. 
) .however , these lines of research are mostly concerned with constraint satisfaction problems as a whole , and do not say much about individual ( global ) constraints . for global constraints of bounded arity , asymptotic analysis has been used to characterize the complexity of propagation both in general and for constraints with a particular semantics . for example, the generic domain consistency algorithm of has an time complexity on constraints of arity and domains of size , whilst the domain consistency algorithm of for the -ary constraint has time complexity ._ showed that many global constraints like are also intractable to propagate .more recently , samer and szeider have studied the parameterized complexity of the constraint .szeider has also studied the complexity of symmetry in a propositional resolution calculus .see chapter 10 in for more about symmetry of propositional systems .we have argued that parameterized complexity is a useful tool with which to study symmetry breaking . in particular , we have shown that whilst it is intractable to break all symmetry completely , there are special types of symmetry like value symmetry and row and column symmetry which are more tractable to break . in these case ,fixed - parameter tractability comes from natural parameters like the number of generators which tend to be small . in future, we hope that insights provided by such analysis will inform the design of new search methods . for example, we might build a propagator that propagates completely when the parameter is small , but only partially when it is large . in the longer term , we hope that other aspects of parameterized complexity like kernels will find application in the domain of symmetry breaking .crawford , j. , ginsberg , m. , luks , g. , roy , a. : symmetry breaking predicates for search problems . in : proceedings of the 5th international conference on knowledge representation and reasoning , ( kr 96 ) .( 1996 ) 148159 walsh , t. : symmetry within and between solutions . in zhang , b.t . ,orgun , m.a .pricai 2010 : trends in artificial intelligence , 11th pacific rim international conference on artificial intelligence , proceedings .volume 6230 of lecture notes in computer science . , springer ( 2010 ) 1113 katsirelos , g. , walsh , t. : symmetries of symmetry breaking constraints . in coelho ,h. , studer , r. , wooldridge , m. , eds . :ecai 2010 - 19th european conference on artificial intelligence , proceedings .volume 215 of frontiers in artificial intelligence and applications ., ios press ( 2010 ) 861866 gent , i. , walsh , t. : : a benchmark library for constraints .technical report , technical report apes-09 - 1999 ( 1999 ) a shorter version appears in the proceedings of the 5th international conference on principles and practices of constraint programming ( cp-99 ) .pachet , f. , roy , p. : automatic generation of music programs . in jaffar , j. ,ed . : proceedings of fifth international conference on principles and practice of constraint programming ( cp99 ) , springer ( 1999 ) 331345 bessiere , c. , hebrard , e. , hnich , b. , walsh , t. : the complexity of global constraints . in : proceedings of the 19th national conference on ai , association for advancement of artificial intelligence ( 2004 ) bessire , c. , hebrard , e. , hnich , b. , walsh , t. : the tractability of global constraints . in : 10th international conference on principles and practices of constraint programming ( cp-2004 ) .volume 3258 of lecture notes in computer science . 
, springer ( 2004 ) 716720 bessire , c. , hebrard , e. , hnich , b. , kiziltan , z. , walsh , t. : filtering algorithms for the nvalue constraint . in : integration of ai and or techniques in constraint programming for combinatorial optimization problems , 2nd international conference ( cpaior-2005 ) .( 2005 ) downey , r.g . , fellows , m.r . ,stege , u. : parameterized complexity : a framework for systematically confronting computational intractability . in : contemporary trends in discrete mathematics : from dimacs and dimatia to the future .volume 49 of dimacs series in discrete mathematics and theoretical computer science .( 1999 ) 4999 walsh , t. : general symmetry breaking constraints . in benhamou , f. , ed .: principles and practice of constraint programming - cp 2006 , 12th international conference , cp 2006 , nantes , france , september 25 - 29 , 2006 , proceedings .volume 4204 of lecture notes in computer science . , springer ( 2006 ) katsirelos , g. , narodytska , n. , walsh , t. : combining symmetry breaking and global constraints . in oddi ,a. , fages , f. , rossi , f. , eds . : recent advances in constraints , 13th annual ercim international workshop on constraint solving and constraint logic programming , csclp 2008 .volume 5655 of lecture notes in computer science . , springer ( 2008 ) 8498 law , y.c . , lee , j. , walsh , t. , yip , j. : breaking symmetry of interchangeable variables and values . in : 13th international conference on principles and practices of constraint programming ( cp-2007 ) , springer - verlag ( 2007 ) hnich , b. , kiziltan , z. , walsh , t. : combining symmetry breaking with other constraints : lexicographic ordering with sums . in : proceedings of the 8th international symposium on the artificial intelligence and mathematics .( 2004 ) frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , walsh , t. : global constraints for lexicographic orderings . in : 8th international conference on principles and practices of constraint programming ( cp-2002 ) , springer ( 2002 ) puget , j.f .: breaking all value symmetries in surjection problems . in van beek ,: proceedings of 11th international conference on principles and practice of constraint programming ( cp2005 ) , springer ( 2005 ) bessire , c. , hebrard , e. , hnich , b. , kiziltan , z. , quimper , c.g . ,walsh , t. : the parameterized complexity of global constraints . in fox , d. , gomes ,c. , eds . : proceedings of the 23rd national conference on ai , association for advancement of artificial intelligence ( 2008 ) 235240 law , y. , lee , j. : global constraints for integer and set value precedence . in : proceedings of 10th international conference on principles and practice of constraint programming ( cp2004 ) , springer ( 2004 ) 362376 walsh , t. : symmetry breaking using value precedence . in brewka , g. , coradeschi , s. , perini , a. , traverso , p. , eds .: ecai 2006 , 17th european conference on artificial intelligence , august 29 - september 1 , 2006 , riva del garda , italy , including prestigious applications of intelligent systems ( pais 2006 ) , proceedings .volume 141 of frontiers in artificial intelligence and applications ., ios press ( 2006 ) 168172 flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , pearson , j. , walsh , t. : breaking row and column symmetry in matrix models . in : 8th international conference on principles and practices of constraint programming ( cp-2002 ) , springer ( 2002 ) flener , p. , frisch ,a. , hnich , b. , kiziltan , z. , miguel , i. 
, walsh , t. : matrix modelling .technical report apes-36 - 2001 , apes group ( 2001 ) presented at formul01 workshop on modelling and problem formulation , cp2001 post - conference workshop .flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , walsh , t. : matrix modelling : exploiting common patterns in constraint programming . in : proceedings of the international workshop on reformulating constraint satisfaction problems .( 2002 ) held alongside cp-2002 .huczynska , s. , mckay , p. , miguel , i. , nightingale , p. : modelling equidistant frequency permutation arrays : an application of constraints to mathematics . in gent , i. , ed .: principles and practice of constraint programming - cp 2009 , 15th international conference , cp 2009 , lisbon , portugal , september 20 - 24 , 2009 , proceedings .volume 5732 of lecture notes in computer science . , springer ( 2009 ) 5064 katsirelos , g. , narodytska , n. , walsh , t. : on the complexity and completeness of static constraints for breaking row and column symmetry . in cohen ,d. , ed . : principles and practice of constraint programming - cp 2010 , 16th international conference , proceedings .lecture notes in computer science , springer ( 2010 ) flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , pearson , j. , walsh , t. : symmetry in matrix models .technical report apes-30 - 2001 , apes group ( 2001 ) presented at symcon01 ( symmetry in constraints ) , cp2001 post - conference workshop .bessiere , c. , rgin , j. : arc consistency for general constraint networks : preliminary results . in : proceedings of the 15th international conference on ai , international joint conference on artificial intelligence ( 1997 ) 398404 | symmetry is a common feature of many combinatorial problems . unfortunately eliminating all symmetry from a problem is often computationally intractable . this paper argues that recent parameterized complexity results provide insight into that intractability and help identify special cases in which symmetry can be dealt with more tractably . |
the iter device is a tokamak designed to study controlled thermonuclear fusion . roughly speaking ,it is a toroidal vessel containing a magnetised plasma where fusion reactions occur .the plasma is kept out of the vessel walls by a magnetic field which lines have a specific helicoidal geometry. however , turbulence develops in the plasma and leads to thermal transport which decreases the confinement efficiency and thus needs a careful study .plasma is constituted of ions and electrons , which motion is induced by the magnetic field .the characteristic mean free path is high , even compared with the vessel size , therefore a kinetic description of particles is required , see _ dimits _ .then the full 6d vlasov - poisson model should be used for both ions and electrons to properly describe the plasma evolution . however , the plasma flow in presence of a strong magnetic field has characteristics that allow some physical assumptions to reduce the model .first , the larmor radius , i.e. the radius of the cyclotronic motion of particles around magnetic field lines , can be considered as small compared with the tokamak size and the gyration frequency very fast compared to the plasma frequency .thus this motion can be averaged ( gyro - average ) becoming the so - called guiding center motion . as a consequence, 6d vlasov - poisson model is reduced to a 5d gyrokinetic model by averaging equations in such a way the 6d toroidal coordinate system becomes a 5d coordinate system , with the parallel and the perpendicular to the field lines components of the particles velocity , the angular velocity around the field lines and the magnetic momentum depending on the velocity norm , on the magnetic field magnitude and on the particles mass which is an adiabatic invariant . moreover ,the magnetic field is assumed to be steady and the mass of electrons is very small compared to the mass of ions .thus the cyclotron frequency is much faster for electrons than for ions .therefore the electrons are assumed to be at equilibrium , i.e. the effect of the electrons cyclotronic motion is neglected and their distribution is then supposed to be constant in time .the 5d gyrokinetic model then reduces to a vlasov like equation for ions guiding center motion : where is the ion distribution function for a given adiabatic invariant with , velocities and define the guiding center trajectories .if , then the model is termed as conservative and is equivalent to a vlasov equation in its advective form : this equation for ions is coupled with a quasi - neutrality equation for the electric potential on real particles position , with ( with the larmor radius ) : where is an equilibrium electronic density , the electronic temperature , the electronic charge , the boltzmann constant for electrons and the cyclotronic frequency for ions .+ these equations are of a simple form , but they have to be solved very efficiently because of the 5d space and the large characteristic time scales considered .this work is then a contribution in this direction , following _ grandgirard et al _ who develops the gysela 5d code that solves this gyrokinetic model , see and .looking at the model , one notices that the adiabatic invariant acts as a parameter .therefore for each we have to solve a 4d advection equation as accurately as possible but also taking special care on mass and energy conservation , especially in this context of large characteristic time scales . 
the maximum principle that exists at the continuous level for the vlasov equationshould also be carefully studied at discrete level : with the value at in cell at time .there is no physical dissipation process in the gyrokinetic model that might dissipate over / undershoots created by the scheme and the loss of this bounding extrema of the solution at may even eventually crash a simulation .those studies will be achieved in this paper on a relevant reduced model , the 4d drift - kinetic model described in section [ use ] , which has the same structure than equations .the geometrical assumptions of this model for ion plasma turbulence are a cylindrical geometry with coordinates and a constant magnetic field , where is the unit vector in direction .this 4d model is conservative and will be discretized using a conservative semi - lagrangian scheme , the parabolic spline method scheme ( psm , see _ zerroukat et al _ and ) .it is a fourth order scheme which is equivalent for linear advections to the backward semi - lagrangian scheme ( bsl ) currently used in the gysela code ( see _ grandgirard et al _ ) and introduced by _ cheng - knorr _ and _ sonnendrcker et al _ ) .this conservative psm scheme based on the conservative form of the vlasov equation will be described in section [ use ] and properly allows a directional splitting .+ in this paper , the bsl and psm schemes will be detailed with an emphasis on their similarities and differences .we will see that one difference is about the maximum principle .the bsl scheme satisfies it only with a condition on the distribution function reconstruction and the conservative psm scheme does not satisfy it without an extra condition on the volumes conservation in the phase space .the last condition is equivalent to try to impose that the velocity field is divergence free at the discrete level .a scheme is given to satisfy this constraint in the form of an equivalent finite volume scheme .moreover , we have designed a slope limiting procedure , slope limited splines ( sls ) , to get closer to a maximum principle for the discrete solution , by at least diminish the spurious oscillations appearing when strong gradients exist in the distribution function profile .+ the outline of this paper is the following : in section [ section_bslvspsm ] will be recalled some important properties of vlasov equations at the continuous level .then bsl and psm schemes will be described and compared , according to properties of the discrete solutions . in section [ stability ] , a numerical method will be given to improve the respect of the maximum principle by vlasov discrete solutions when using the psm scheme and particularly to keep constant the volume in the phase space .in section [ use ] , practical aspects of the psm scheme use will be described in the context of the 4d drift - kinetic model and at last we will comment on numerical results .let us consider an advection equation of a positive scalar function with an arbitrary divergence free velocity field : with position and the advection velocity field . 
+ the solutions satisfy the maximum principle : for any initial time .+ since , we can also use an equivalent conservative formulation of the vlasov equation : for more details , see _ sonnendrcker _ lecture notes .one obvious property of this conservation law ( reynolds transport theorem ) is to conserve the mass in a lagrangian volume , by integrating the distribution function on each lagrangian volume element : let us introduce the convective derivative , thus becomes : considering a lagrangian motion of an infinitely small volume , we have , thus we obtain : obviously , a divergence free flow conserves a lagrangian volume in its motion .let us consider a vlasov equation in its non conservative form : with a scalar function , position and the advection field .the bsl scheme , see _ sonnendrcker et al _ , is based on the invariance property of function along characteristic curves to obtain values at time from the values at : with the eulerian coordinates and the characteristic curves defined as with the initial position at .let us locate the discrete function values at mesh nodes .we solve the following nonlinear system which is a second order approximation of : with .the function is a reconstruction of the solution according known values at nodes cubic splines basis functions on the domain to obtain the value at , which is not a mesh node in general . [ [ bsl_prop ] ] properties of the bsl scheme + + + + + + + + + + + + + + + + + + + + + + + + + + + + this scheme is formally fourth order in space .it is second order in time using for instance a leap - frog , predictor - corrector or runge - kutta time integration .mass is not conserved by this scheme , because it has no conservative form . however , an approximated maximum principle is satisfied .let us consider the cubic spline interpolation of the distribution function at time , we have for any : it then naturally appears a `` discrete '' maximum principle : comparing with the property , we have here and because the cubic spline reconstruction does not satisfy a maximum principle . if we have a manner to enforce this property to this reconstruction , a maximum principle is granted for the bsl scheme .no directional splitting is allowed since the bsl scheme is based on the non conservative form of the vlasov equation , see .let us consider a vlasov equation in its conservative form : with a scalar function , position and the advection field .notice that with the hypothesis , conservative form and non - conservative form of the vlasov equation are equivalent .the psm scheme , see _ zerroukat et al _ and , is based on the mass conservation property of function in a lagrangian volume to obtain the value at time : with the characteristic curves defined as and with the initial time , and the volume such that defined by the lagrangian motion with the field .the important point is that this conservative formalism properly allows a directional splitting without loosing the mass conservation .indeed , equation may be solved with successive 1d advections still of conservative form : .\ ] ] we then approximate a 1d equation for each direction using the conservation property . 
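to fix ideas before the 1d form of the psm scheme is written below , here is one bsl time step in 1d on a periodic grid , using scipy's cubic spline for the reconstruction of the point values . the advection velocity is taken constant so that the characteristic feet are exact ; in practice they come from a second - order fixed point iteration . this is only an illustrative sketch , not the gysela implementation .

```python
# one backward semi-lagrangian (bsl) step in 1d on a periodic domain:
# f^{n+1}(x_i) = f^n(X(t_n; x_i, t_{n+1})), with a cubic-spline reconstruction.
import numpy as np
from scipy.interpolate import CubicSpline

def bsl_step(f, x, a, dt, L):
    """advect point values f(x) by a constant velocity a over dt."""
    xs = np.append(x, L)                 # periodic spline needs the value repeated at x = L
    fs = np.append(f, f[0])
    spline = CubicSpline(xs, fs, bc_type='periodic')
    feet = (x - a * dt) % L              # backward characteristic feet
    return spline(feet)

L, N = 1.0, 64
x = np.linspace(0.0, L, N, endpoint=False)
f = np.exp(-100.0 * (x - 0.5) ** 2)
f1 = bsl_step(f, x, a=0.3, dt=0.01, L=L)
print(f.max(), f1.max())                 # maxima stay close, but the spline does not enforce a strict bound
```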
omitting subscript ,the psm scheme writes in 1d as follows : with settled as the 1d mesh nodes and the associated foot of the characteristic curve , ] .+ let us define the unknowns of the scheme as the average of in cell and the primitive function with the uniform space step and an arbitrary reference point of the domain and for instance the first node of the grid .therefore , one has to solve a nonlinear system , which is similar to the bsl one , to obtain a discrete solution of equation that writes : with the time step and the uniform space step .+ the computation of the reconstructed primitive function is based on values at mesh nodes : then this set of values is interpolated by cubic splines functions to obtain an approximated value of the primitive function at any point of the domain : [ [ properties - of - the - psm - scheme ] ] properties of the psm scheme + + + + + + + + + + + + + + + + + + + + + + + + + + + + this scheme is formally fourth order in space and strictly equivalent to the bsl scheme for constant linear advection , see .it is second order in time using for instance a leap - frog , predictor - corrector or runge - kutta time integration scheme .mass is exactly conserved by this scheme for each 1d step of the directional splitting .however , no maximum principle does exist for each step even for the exact solution : in general , even if the velocity field is divergence free .let us consider the scheme in dimensions of space : with a scalar function , position and the advection field .let us consider a cell , where the solution is described at time by its average in cell , the psm scheme then writes : we thus obtain the following relation : with the average of the distribution function in the lagrangian volume at time : here clearly appears two conditions , both difficult to satisfy especially in the context of a directional splitting , to have a maximum principle defined as follows : 1 .maximum principle on the distribution function in : 2 .conservation of volumes in the phase space at the discrete level : the first condition is difficult to ensure in general , because a maximum principle should be satisfied for any average of the distribution function on an arbitrary volume . moreover , in the context of a directional splitting , it is impossible to satisfy a maximum principle for a 1d step , because it does not exist at the continuous level since in general .therefore it is probably impossible to recover a maximum principle of the reconstruction after all steps of the directional splitting .+ the second condition is true at the continuous level while , since we have , see equation in section [ subsec : basics ] . 
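before returning to these two conditions , the same constant - velocity setting gives a minimal sketch of the conservative 1d update just described : cell averages are accumulated into the primitive , the primitive is interpolated by a cubic spline , and the new averages are differences of primitive values at the feet of the cell - face characteristics . the boundary treatment of the primitive spline is deliberately simplified here , so this is an illustration of the mechanism rather than the production scheme .

```python
# one conservative psm step in 1d on a periodic domain.
import numpy as np
from scipy.interpolate import CubicSpline

def psm_step(fbar, faces, a, dt, L):
    """advect cell averages fbar on a uniform periodic grid with faces[0..N]."""
    dx = faces[1] - faces[0]
    # primitive F(x_{i+1/2}) = mass of cells 0..i, with F(faces[0]) = 0
    F = np.concatenate(([0.0], np.cumsum(fbar) * dx))
    total = F[-1]                        # total mass over one period
    spline = CubicSpline(faces, F)       # plain spline of the (non-periodic) primitive
    def Fext(xi):                        # extend F to the real line using periodicity of f
        wraps = np.floor((xi - faces[0]) / L)
        return spline(xi - wraps * L) + wraps * total
    feet = faces - a * dt                # feet of the cell-face characteristics
    return np.diff(Fext(feet)) / dx

L, N = 1.0, 64
faces = np.linspace(0.0, L, N + 1)
xc = 0.5 * (faces[:-1] + faces[1:])
fbar = np.exp(-100.0 * (xc - 0.5) ** 2)
f1 = psm_step(fbar, faces, a=0.3, dt=0.01, L=L)
print(fbar.sum(), f1.sum())              # mass is conserved to round-off
```

note that the total mass is conserved to round - off whatever the spline does in the interior , because the update telescopes to a difference of the primitive over one full period .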
as well as for the first condition , in the context of a directional splitting , it is difficult to ensure a constant volume evolution after all steps of the directional splitting , where compressions or expansions of the lagrangian volume occur successively .+ as a consequence , we will propose a form of the conservative psm scheme that does not use a directional splitting .however we will not write the semi - lagrangian form of the psm scheme in dimensions of space because it is costly in computational time , because of the reconstruction step , and it is difficult to handle with arbitrary coordinate systems .the solution we choose is to use an equivalent finite volume form of the psm scheme described in section [ psm_fv ] , which is locally 1d at each face of the mesh .it is therefore possible to design 1d numerical limiters to try to better satisfy the maximum principle condition .moreover , we will show that this form allows an exact conservation of the volumes in the phase space . the maximum principle and therefore the robustness of this schemewill thus be considerably improved .enforcing the first condition on the maximum principle of the distribution function reconstruction can be really costly in computational time . instead of trying to correct the cubic spline reconstruction, we will reduce the spurious oscillations , generated by high order schemes when strong gradients appear in the distribution function profile , by using a classical _ van leer _ like slope limiting procedure , see for instance _ leveque _ .we propose here to measure the gradients in the flow and to add diffusion where they are detected .the diffusion is added by mixing the high order psm flux with a first order upwind flux .the evaluation of the gradient is given by the classical function and we estimate the diffusion needed with a function $ ] based on a minmod like limiter function ( see fig .[ fig : gamma_theta ] ) .the resulting limiter we propose here is called sls ( slope limited splines ) , see for details : where we define as the classical slope ratio of the distribution which depends on the direction of the displacement : however , the classical limiter minmod , where , set to 0 when .that means that the scheme turns to order 1 when an extrema exists , i.e. the slope ratio .these extrema are thus quickly diffused and that leads to loose the benefits of a high order method . for sls , the choice is to let the high - order scheme deal with the extrema and only add diffusion when strong gradients occurs , i.e. the slope ratio .we also introduce a constant k in relation to control the maximum slope allowed without adding diffusion , i.e. mixing with the upwind scheme , see figure [ fig : gamma_theta ] : [ cols="^ " , ] let us set a velocity field such that : let us write the conservative advection equation in polar coordinates : notice that the geometric jacobian for polar coordinates .+ the psm scheme without directional splitting in the finite volume form reads here : with and positioned at cell faces center and with cell of volume and faces areas and .the cell averaged values of used in the scheme are : using the integral form of the fluxes : let us introduce the volumes swept by each cell face in its normal motion in accordance with the _ green _ formula and the way of computation of feet of characteristic curves normal to cell faces , i.e. 
without taking into account the tangential motion at the cell faces or the curvature of the mesh , see fig .[ fig : polvf ] : therefore we obtain : with we here recover a discrete mass conservation formulation . to obtain , and thus preserve a constant function ,it yields : using definitions , we thus obtain a discrete divergence formulation in polar coordinates to be nullified : with the following first order definition of the characteristic curves feet computation : as a conclusion , we have presented a general methodology for any coordinate system to compute the associated discrete divergence free condition , by using the approximation of cell edges by straight lines and by only considering the normal to cell faces motion of the volume as it has to be when invoking the _ green _ formula in this finite volume framework .the discrete divergence formulation is independent of the time integration method .it is a discrete consistent relation for the advection field of the form . it should be satisfied to get the conservation condition on volumes in the phase space , which is necessary to obtain a maximum principle for the psm scheme or actually for any finite volume scheme .this condition is also necessary when using the semi - lagrangian psm scheme with directional splitting as described in section [ section_psm ] as well as when using the finite volume form described in section [ psm_fv ] . in fig .[ nodiv ] , we compare the results of a 4d drift - kinetic benchmark ( see section [ drift ] for details ) obtained with the semi - lagrangian psm scheme [ section_psm ] with an advection field computed : first in such a way the discrete divergence free condition is satisfied and second with an advection field computed by cubic spline interpolation without satisfying this condition . 
[ !htbp ] result at time with ( left ) the advection field computed in a way ( see section [ drift ] ) that satisfy the discrete divergence condition and with ( right ) the advection field computed with cubic splines , which do not satisfy this condition .respecting condition for the advection field not only leads to a better respect of the maximum principle , it is actually necessary to ensure the stability of the scheme .the result in figure [ nodiv ] diverges from realistic physics ., title="fig:",height=170 ] result at time with ( left ) the advection field computed in a way ( see section [ drift ] ) that satisfy the discrete divergence condition and with ( right ) the advection field computed with cubic splines , which do not satisfy this condition .respecting condition for the advection field not only leads to a better respect of the maximum principle , it is actually necessary to ensure the stability of the scheme .the result in figure [ nodiv ] diverges from realistic physics ., title="fig:",height=170 ]this work follows those of _grandgirard et al _ in the gysela code , see and .the geometrical assumptions of this model for ion plasma turbulence are a cylindrical geometry with coordinates and a constant magnetic field , where is the unit vector in direction .the model is the 4d drift - kinetic equations described in_grandgirard et al _ : with and with the electric potential .+ the 4d vlasov equation governing this system , where the ion distribution function is , is the following : this equation is coupled with a quasi - neutrality equation for the electric potential that reads : with and constant in time physical parameters , , and .let us notice that the 4d velocity field is divergence free : because of variable independence and and we have , with and , thus and therefore , one can write an equivalent conservative equation to the preceding vlasov equation : we have obtained a discrete form of the velocity field divergence to nullify ( [ div_disc_pol ] ) , as a necessary condition to obtain a numerical solution with a maximum principle .we saw in ( [ diva ] ) that is satisfied equivalently if ( [ div_rtheta ] ) is satisfied and this is still true at the discrete level ( independence of variables ) .therefore , the velocity field should nullify the discrete polar divergence ( [ div_disc_pol ] ) : with using definitions given in ( [ def_vgc ] ) .let us define the electric potential at the nodes of the mesh , whatever the way it is computed .let us set the following natural finite difference approximation for the velocity field : with this approximated velocity field , the approximation of ( [ divrt_disc ] ) is satisfied .the proof is easy , we just have to put the velocity field in ( [ divrt_disc ] ) to see that all terms annulate each others .notice that the electric potential should be computed at nodes of the mesh to obtain velocities at the center of cell faces and .it is well adapted to the psm schemes , where the displacement should be calculated at cell faces . 
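the construction above can be checked numerically . the following sketch takes an arbitrary potential at the nodes of a uniform polar grid , builds face - centred velocities by finite differences of the potential along each face , and verifies that the net volume flux out of every cell vanishes to round - off . the magnetic field is taken constant and set to 1 , and the particular finite - difference choice below ( radial velocity from the poloidal difference of the potential , poloidal velocity from the radial difference , consistent with the e x b drift ) is one consistent realisation ; signs and the 1/b normalisation only rescale the fluxes and do not affect the check .

```python
import numpy as np

Nr, Nt = 16, 32
rmin, rmax = 1.0, 2.0
dr, dth = (rmax - rmin) / Nr, 2.0 * np.pi / Nt
r_nodes = rmin + dr * np.arange(Nr + 1)            # radii of the mesh nodes r_{i+1/2}
phi = np.random.default_rng(0).standard_normal((Nr + 1, Nt))  # phi at nodes, periodic in theta

# radial face at r_nodes[i], theta-span [theta_j, theta_{j+1}]: v_r = -(1/r) dphi/dtheta
v_r = -(np.roll(phi, -1, axis=1) - phi) / (r_nodes[:, None] * dth)
# poloidal face at theta_j, r-span [r_nodes[i], r_nodes[i+1]]: v_theta = dphi/dr
v_th = (phi[1:, :] - phi[:-1, :]) / dr

# net outward volume flux of each cell (i, j)
flux = (r_nodes[1:, None] * dth * v_r[1:, :]       # outer radial face
        - r_nodes[:-1, None] * dth * v_r[:-1, :]   # inner radial face
        + dr * np.roll(v_th, -1, axis=1)           # theta face at j+1
        - dr * v_th)                               # theta face at j
print(np.abs(flux).max())                           # ~1e-15: discrete divergence free
```

the cancellation is exact cell by cell because each nodal value of the potential enters the four face fluxes of a cell exactly twice with opposite signs , which is the discrete counterpart of the condition derived in the previous section .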
in this section, we will compare the numerical methods on a 4d drift - kinetic benchmark , following the paper _ grangirard et al _ .the model is described in section [ drift4d ] .we will compute the growth of a 4d unstable turbulent mode .the benchmark consists of exciting the plasma mode , with the poloidal mode ( ) and the toroidal mode ( ) .the initial distribution function is the sum of an equilibrium and a perturbation distribution function .the equilibrium distribution function has the following form : and the perturbation with and two exponential functions and with the length of the domain in direction , , , physical constant profiles , see for details .we have set here and . at the beginning of the time step ,the distribution function is known at time , with .the time step is computed at each step with the cfl like condition : with the coefficients , because the flow is highly non - linear in planes thus characteristics should not cross each others during one time step , and because it is linear advection in direction and so characteristics can not cross each others and then we allow a maximum displacement of cells . actually excluding the linear phase , the most restrictive directions for the time step are and , in such a waythis last value ( 8) has a minor importance compare to the leading parameters .+ the operator splitting between the quasi - neutral equation and the vlasov transport equation is made second order using a predictor - corrector scheme in time : 1 .time step computation .quasi - neutral equation solving at using the distribution function ( actually the density ) to obtain the electric potential at time .the advection field is computed with according to equation and using formula .4d vlasov equation solving at with time step to obtain the distribution function at time using the advection field .quasi - neutral equation solving at using the distribution function ( actually the density ) to obtain the electric potential at time .the advection field is computed with according to equation and using formula .4d vlasov equation solving at with time step to obtain the distribution function at time using the advection field . in the two following paragraphs, we describe the schemes for the 4d vlasov equation solving of the algorithm with in the prediction step and in the correction step .[ [ algosl ] ] 4d semi - lagrangian psm sheme with directional splitting + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + * psm 1d advection of in direction with velocity and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain . 
* psm 1d advection of in direction with velocity and time step to obtain .each psm 1d advection is achieved using the standard 1d semi - lagrangian psm scheme as described in section [ section_psm ] .the directional splitting is second order by using a strang like decomposition .since we use here a directional splitting and a second order scheme in time for the computation of the characteristic curves , the volumes are not strictly conserved in the phase space , because the scheme does not satisfy the discrete divergence free condition .[ [ algofv ] ] 4d finite volume form of the psm scheme : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + * psm 1d advection of in direction with velocity and time step to obtain . * psm 1d advection of in direction with velocity and time step to obtain . *psm 2d advection of in each plane with velocities and time step to obtain .* psm 1d advection of in direction with velocity and time step to obtain . *psm 1d advection of in direction with velocity and time step to obtain .each psm 1d advection is achieved using the standard 1d semi - lagrangian scheme as described in section [ section_psm ] .the psm 2d advection in is achieved with the finite volume form as described in section [ psm_fv ] . since we use here the scheme [ psm_fv ], the volumes are strictly conserved in the phase space , because the scheme does satisfy the discrete divergence free condition . even if we use the semi - lagrangian psm 1d advection in and directions , the property is kept because the velocity is constant in these directions .the mesh is cells in directions .boundary conditions are periodic for directions and and _ neumann _ ( ) in and .we first ran the reference test case with the non - conservative backward semi - lagrangian ( bsl ) scheme in section [ sec : bslscheme ] which is currently used in the gysela code .then we ran four test cases to show the influence of each numerical treatment : the standard conservative semi - lagrangian psm scheme in section [ algosl ] with 1d directional splitting ( psm directional splitting 1d ) and the same with the sls limiter ( sls directional splitting 1d ) , the unsplit finite volume form of the psm scheme in section [ algofv ] ( psm finite volume ) and the same with the sls limiter ( sls finite volume ) .+ the computed 4d distribution functions are pictured in fig .[ sol1800 ] at time and in fig .[ sol4400 ] at time .we only present 2d slices of the distribution function at and for a given value of . in these figures stands for direction and for the direction . at initial time , the minimum and maximum values of the distribution function in this slice are and these values should be the same at any time of the computation if the maximum principle would be respected . in fig .[ sol1800 ] , we show pictures of each scheme result at time , which corresponds approximately to the beginning of the non - linear turbulent phase saturation .small structures are appearing and interact with each others .all results are still close qualitatively . 
however , we already see oscillations in the solution obtained with psm ds ( directional splitting ) , where the minimum and maximum values already differ noticeably from those at the initial time . the psm finite volume ( psm fv ) form and the sls ds better preserve these extrema , but only sls finite volume keeps the extrema unchanged up to this time , with otherwise a very similar behaviour of the solution . [ fig . [ sol1800 ] : simulation with 128x256x32x16 cells ; psm directional splitting 1d ( up - left ) , psm finite volume ( up - right ) , sls directional splitting 1d ( down - left ) , sls finite volume ( down - right ) , time = 1800 . ] in fig . [ sol4400 ] , we show pictures of each scheme result at time 4400 , when turbulence is well developed . we see that the standard psm scheme creates a lot of unphysical oscillations ( structures are reaching the boundaries in ) and may crash the computation . the psm fv form and the sls ds better preserve the turbulence structures , but oscillations are still created . the sls finite volume scheme keeps the extrema of the solution reasonably well ( the sls limiter does not provide a strict maximum principle ) and the solution is smooth . we may say that the added diffusion of the limiter helps the scheme to diffuse subgrid structures without creating oscillations . the divergence free property of the finite volume scheme is important to cure the solution from instabilities that can be seen at values close to the average value of ( vertical line at the middle of the pictures in fig .
[ sol4400 ] ) in the sls directional splitting solution compared to the sls finite volume solution . [ fig . [ sol4400 ] : simulation with 128x256x32x16 cells ; psm directional splitting 1d ( up - left ) , psm finite volume ( up - right ) , sls directional splitting 1d ( down - left ) , sls finite volume ( down - right ) , time = 4400 , with all color tables set to the minimum and maximum values at initial time . ] in fig . [ solbsl ] , we see in the reference bsl solution at time 1800 spurious oscillations produced during the reconstruction step of the distribution function , which is the only way the maximum principle can be broken by the bsl scheme : here the extrema are instead of the values at initial time . at time 4400 , we see spurious oscillations as well , but the maximum principle is better satisfied than with the standard psm ds scheme in fig . [ sol4400 ] , because no conservation of volumes in the phase space has to be satisfied , as explained in section [ sec : bslscheme ] . [ fig . [ solbsl ] : reference bsl simulation with 128x256x32x16 cells ; bsl at time=1800 with the real color table values ( left ) and bsl at time=4400 with the color table set to the minimum and maximum values at initial time ( right ) . ] the psm scheme has been successfully integrated in the gysela code and has been tested on 4d drift - kinetic test cases . we first observed experimentally , and then explained in this paper , that the psm scheme can be unstable if no care is taken of a velocity field divergence free condition . the numerical results show that the study of the volume evolution in the phase space is fruitful . notice that this conservative scheme properly allows a directional splitting , in the semi - lagrangian or in the finite volume form , which is not the case with the bsl scheme . the slope limited splines ( sls ) limiter is efficient at cutting off the spurious oscillations of the standard psm scheme , by adding diffusion that helps the scheme to manage small structures below the cell size . of course , the psm scheme should be further validated , as well as its integration in the gysela code using the gyrokinetic 5d model in toroidal geometry .
in particular , regarding the extension to toroidal geometry , the curvature of the mesh couples several directions through the geometrical _ jacobian _ , which makes the divergence free condition more complex , as well as the writing of the quasi - neutral solver and the gyroaverage operator .

v. grandgirard , y. sarazin , p. angelino , a. bottino , n. crouseilles , g. darmet , g. dif - pradalier , x. garbet , ph . ghendrih , s. jolliet , g. latu , e. sonnendrücker , l. villard , global full - f gyrokinetic simulations of plasma turbulence , plasma phys . , volume 49b , pp . 173 - 182 ( december 2007 ) .
v. grandgirard , m. brunetti , p. bertrand , n. besse , x. garbet , p. ghendrih , g. manfredi , y. sarazin , o. sauter , e. sonnendrücker , j. vaclavik , l. villard , a drift - kinetic semi - lagrangian 4d code for ion turbulence simulation , j. comput . physics , pp . 395 - 423 ( 2006 ) .
j. guterl , j .- braeunig , n. crouseilles , v. grandgirard , g. latu , m. mehrenberger , e. sonnendrücker , test of some numerical limiters for the psm scheme for 4d drift - kinetic simulations , inria report 7467 ( november 2010 ) .
f. huot , a. ghizzo , p. bertrand , e. sonnendrücker , o. coulaud , instability of the time splitting scheme for the one - dimensional and relativistic vlasov - maxwell system , j. comput . phys . 185 , issue 2 , pp . 512 - 531 ( 2003 ) .
m. zerroukat , n. wood , a. staniforth , application of the parabolic spline method ( psm ) to a multi - dimensional conservative semi - lagrangian transport scheme ( slice ) , j. comput . phys . , vol . 225 , n. 1 , pp . 935 - 948 ( 2007 ) . | the purpose of this work is the simulation of magnetised plasmas in the iter project framework . in this context , kinetic vlasov - poisson like models are used to simulate core turbulence in the tokamak in a toroidal geometry . this leads to heavy simulations because a 6 - dimensional problem has to be solved , even if it is reduced to 5d in so - called gyrokinetic models . accurate schemes and parallel algorithms need to be designed to carry out these simulations . this paper describes the numerical studies to improve the robustness of the conservative psm scheme in the context of its development in the gysela code . in this paper , we only consider the 4d drift - kinetic model , which is the backbone of the 5d gyrokinetic models and is relevant for building a robust and accurate numerical method . numerical simulation , conservative scheme , maximum principle , plasma turbulence 65m08 , 76m12 , 76n99 |
the benchmarks for optimal performance of heat engines and refrigerators , under reversible conditions , are the carnot efficiency and the carnot coefficient of performance , respectively ; in terms of the ratio of cold to hot reservoir temperatures , \theta = t_c / t_h , these read \eta_{c} = 1 - \theta and \epsilon_{c} = \theta / ( 1 - \theta ) . for finite - time models , such as in the endoreversible approximation and the symmetric low - dissipation carnot engines , the maximum power output is obtained at the so - called curzon - ahlborn ( ca ) efficiency , \eta_{ca} = 1 - \sqrt{\theta} . however , the ca value is not as universal as the carnot efficiency . for small temperature differences , its lower order terms are obtained within the framework of linear irreversible thermodynamics . thus models with tight - coupling fluxes yield \eta_{c}/2 as the efficiency at maximum power . further , if we have a left - right symmetry , then the second - order term \eta_{c}^{2}/8 is also universal . on the other hand , the problem of finding universal benchmarks for finite - time refrigerators is non - trivial . for instance , the rate of refrigeration , which seems a natural choice for optimization , can not be optimized under the assumption of a newtonian heat flow ( i.e. , a heat flow proportional to the temperature difference ) between a reservoir and the working medium . in that case , the maximum rate of refrigeration is obtained as the coefficient of performance ( cop ) vanishes . so instead , a useful target function \chi = \epsilon \dot{q}_c has been used , where \dot{q}_c is the heat absorbed per unit time by the working substance from the cold bath , or the rate of refrigeration , and \epsilon is the cop . the corresponding cop at maximum \chi is found to be \sqrt{1+\epsilon_{c}} - 1 , for both the endoreversible and the symmetric low - dissipation models . so this value is usually regarded as the analog of the ca value , applicable to the case of refrigerators . in any case , the usual benchmarks for optimal performance of thermal machines are decided by recourse to optimization of a chosen target function . the method also presumes a complete knowledge of the intrinsic energy scales , so that , in principle , these scales can be tuned to achieve the optimal performance . in this letter , we present a different perspective on this problem . we consider a situation where we have limited or partial information about the internal energy scales , so that we have to perform an inference analysis in order to estimate the performance of the machine . inference implies arriving at plausible conclusions assuming the truth of the given premises . thus the objective of inference is not to predict the `` true '' behavior of a physical model but to arrive at a rational guess based on incomplete information . in this context , the role of prior information becomes central . in the spirit of bayesian probability theory , we treat all uncertainty probabilistically and assign a prior probability distribution to the uncertain parameters . we define an average or expected measure of the performance , using the assigned prior distribution . the approach was proposed by one of the authors and has since been applied to different models of heat engines . these works show that the ca efficiency can be reproduced as a limiting value when the prior - averaged work or power in a heat cycle is optimized . in particular , for the problem of maximum work extraction from a finite source and sink , the behavior of the efficiency at maximum estimate of work shows universal features near equilibrium , e.g.
it reproduces the universal terms \eta_{c}/2 and \eta_{c}^{2}/8 mentioned above . later we consider an asymptotic range in which the analysis becomes simplified and we observe universal features . now consider two observers and who respectively assign a prior for and . taking the simplifying assumption that each observer is in an equivalent state of knowledge , we can write , where is the prior distribution function , taken to be of the same form for each observer . at a fixed known value of the efficiency , it implies that , where is the appropriate normalization constant . then jeffreys prior for can be argued for , similarly to eq . ( [ choice - of - prior ] ) . now we define the expected value of as , where is given by eq . ( [ defc ] ) . upon integrating the above equation , we get . as with the power output for the engine , the average becomes increasingly small in the asymptotic limit . in the following , we focus on the cop at maximal , in the asymptotic limit . so the maximum of with respect to is evaluated as . the numerical solution for versus one of the limits is shown in fig . [ finite - limits - refri ] .

[ figure [ finite - limits - refri ] : the cop at maximum expected - criterion plotted versus one of the prior limits ( scaled by ) ; the upper and lower curves correspond to varying the two limits , the dashed lines represent the corresponding values , and for larger values the cop approaches the corresponding asymptotic result . in the inset , the cop approaches as the limit takes smaller values . ]

finally , in the asymptotic range , the above expression reduces to a quadratic equation , and the permissible solution ( ) which maximizes is given as . _ uniform prior _ : on the other hand , with the uniform prior , the expected - criterion is given as . upon integrating the above equation , we get . now , we want to estimate the cop at maximum expected - criterion in the asymptotic range . hence , by putting and imposing the asymptotic range , we obtain an equation whose acceptable solution can finally be written in the form of eq . ( [ unicop ] ) . again , we see that in the asymptotic range , the cop is given only in terms of the ratio of the reservoir temperatures . we show in fig . [ fig - fridge ] a comparison amongst the different expressions for the cop at optimized performance versus this ratio .

[ figure [ fig - fridge ] : the cop at optimized performance plotted versus the ratio of reservoir temperatures . the solid curve shows the cop at optimal expected performance when jeffreys prior is assigned and the asymptotic range is applied ; the dashed line represents the interpolation formula for the cop corresponding to the optimum value ; the top , dotted line is the result of the uniform prior , again in the asymptotic range . the inset shows the same three quantities for close - to - equilibrium values . ]

in the near - equilibrium regime , the carnot cop as well as become large in magnitude . one can then write the series expansion for relative to as follows : . in this case , relative to behaves as follows : . according to refs . , close to equilibrium and up to the leading order , behaves as . the optimal behavior is thus reproduced by the use of jeffreys prior , but the uniform prior is not able to generate this dependence . similarly , for large temperature differences , , we get the limiting behavior as , while . the interpolation formula at optimal performance gives . before closing this section , we point out that performing the same analysis in terms of as the uncertain scale , we obtain a similar behavior in the asymptotic range of values , and the same figures of merit , and , are obtained with the choice of jeffreys prior . so far , we have focused on the performance of feynman s ratchet .
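as a hedged numerical aside , the finite - time benchmark quoted above can be checked directly for an endoreversible refrigerator with newtonian heat flows and equal thermal conductances ; this is not the prior - averaged ratchet calculation itself , and the reservoir temperatures below are arbitrary illustrative values . maximising \chi = \epsilon \dot{q}_c over the working - fluid temperature should return a cop very close to \sqrt{1+\epsilon_{c}} - 1 .

```python
# hedged check of the finite-time benchmark quoted in the text: endoreversible
# refrigerator with newtonian heat flows and equal conductances; the cop at the
# maximum of chi = cop * qc_dot should be close to sqrt(1 + eps_carnot) - 1.
import numpy as np

tc, th = 250.0, 300.0                     # illustrative reservoir temperatures
eps_carnot = tc / (th - tc)

def chi_and_cop(t1):
    """chi and cop as functions of the cold-side working-fluid temperature t1 < tc."""
    qc = tc - t1                          # newtonian flow from the cold bath (unit conductance)
    t2 = t1 * th / (2.0 * t1 - tc)        # hot-side temperature fixed by internal reversibility
    qh = t2 - th                          # newtonian flow into the hot bath (unit conductance)
    cop = qc / (qh - qc)
    return cop * qc, cop

t1 = np.linspace(tc / 2.0 + 1e-3, tc - 1e-3, 200001)
chi, cop = chi_and_cop(t1)
best = np.argmax(chi)
print("cop at maximum chi :", cop[best])
print("sqrt(1+eps_c) - 1  :", np.sqrt(1.0 + eps_carnot) - 1.0)
```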
in the following , we wish to point out that the above inference analysis can also be performed on other classes of heat engines / refrigerators . the model which we discuss below is a four - step heat cycle performed by a few - level quantum system ( working medium ) . further , the cycle is accomplished using infinitely slow processes . the particular cycle is the quantum otto cycle . consider a quantum system with hamiltonian , with an eigenvalue spectrum of the form . here is characterised by the energy quantum number and other parameters / constants which remain fixed during the cycle . we assume there are non - degenerate levels . the parameter represents an external control , equivalent to an applied magnetic field for a spin system . initially , the system is in a thermal state at temperature , where , , and the partition function . the quantum otto cycle involves the following steps :

( i ) the system is detached from the hot bath and made to undergo a quantum adiabatic process , in which the external control is slowly changed from the value to . thus the hamiltonian changes from to , with eigenvalues . following the quantum adiabatic theorem , the system remains in the instantaneous eigenstate of the hamiltonian and so the occupation probabilities of the levels remain unchanged . for , this process is the analogue of an adiabatic expansion . the work done _ by _ the system in this stage is equal to the change in mean energy .

( ii ) the system is then put in contact with the cold bath and allowed to relax to the corresponding thermal state at the fixed value of the control , releasing heat to the cold bath .

( iii ) the system is now detached from the cold bath and made to undergo a second quantum adiabatic process ( compression ) , during which the control is reset to its initial value . the work done _ on _ the system in this step is equal to the corresponding change in mean energy .

( iv ) finally , the system is reconnected to the hot bath and thermalises back to the initial state at the fixed value of the control , absorbing heat from the hot bath and closing the cycle .

the expected work per cycle for a given is then given by the prior average . to perform the integration , we write and integrate by parts ; the resulting average work , eq . ( [ wev ] ) , can be written briefly as , where and can be easily identified from eq . ( [ wev ] ) . now we wish to find the efficiency at optimal average work , and so we apply the condition . the resulting equation is , in general , a function of and . however , we are interested in the asymptotic limit of large and vanishing . in this limit , the dominant term in the sum is given by , where is the ground - state energy . therefore , . similarly , in the said limit . finally , using the above limiting forms in eq . ( [ wzero ] ) , we obtain a condition which implies that the expected work becomes optimal at , or at the ca efficiency . we observed in feynman s ratchet that for small temperature differences , the figures of merit at optimal values of and agree with the corresponding expressions at the optimal values of and . the important conditions which hold in this comparison are jeffreys prior as the underlying prior and an asymptotic range of values over which the prior is defined . in contrast , the uniform prior is not able to generate the optimal behavior in the near - equilibrium regime . further , we note that for endoreversible models with a newtonian heat flow between a reservoir and the working medium , the efficiency at optimal power is exactly the ca value . correspondingly , the cop at the optimal - criterion is given by \sqrt{1+\epsilon_{c}} - 1 . in this paper , these values are obtained with an inference - based approach assuming incomplete information in a mesoscopic model of a heat engine . we have also shown that our analysis applies to a broader class of idealized models of heat engines / refrigerators , driven by quasi - static processes .
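a minimal numerical sketch of this kind of argument is given below , specialised to a two - level working medium ( the text above treats non - degenerate levels ) : the hot - stage gap is treated as the uncertain scale with a jeffreys prior on a wide range , the ratio of the two gaps fixes the otto efficiency , and the expected work is maximised over that efficiency . the temperatures , prior range and grid sizes are illustrative choices , not the paper s .

```python
# two-level quantum otto engine with an uncertain energy gap (illustrative sketch,
# not the paper's n-level calculation): average the work over a jeffreys prior for
# the hot-stage gap and locate the efficiency that maximises the expected work.
# for a wide prior range the optimum should land close to the curzon-ahlborn value.
import numpy as np

th, tc = 2.0, 1.0                          # reservoir temperatures (k_b = hbar = 1)
a, b = 1e-4, 1e4                           # prior range for the uncertain gap

def p_exc(x):
    """excited-state population of a two-level system at scaled gap x = gap / temperature."""
    e = np.exp(-x)
    return e / (1.0 + e)

def expected_work(eta, n=20001):
    """jeffreys-prior average of the otto work at fixed efficiency eta = 1 - gap_cold/gap_hot."""
    u = np.linspace(np.log(a), np.log(b), n)          # jeffreys prior is flat in u = ln(gap)
    gap = np.exp(u)
    work = eta * gap * (p_exc(gap / th) - p_exc((1.0 - eta) * gap / tc))
    return np.trapz(work, u) / np.log(b / a)

etas = np.linspace(1e-3, (1.0 - tc / th) - 1e-3, 500)
w_avg = np.array([expected_work(e) for e in etas])
print("efficiency at maximum expected work :", etas[np.argmax(w_avg)])
print("curzon-ahlborn value 1 - sqrt(tc/th):", 1.0 - np.sqrt(tc / th))
```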
here also , the ca efficiency emerges from the use of jeffreys prior , under the given conditions of the model . we conclude with an argument to support why our approach yields the familiar results of finite - time thermodynamics . to exemplify , in the case of feynman s ratchet , the asymptotic range has been considered _ after _ we optimized the expected power output ( in the case of the engine ) over the efficiency . one may consider these two steps in the opposite order , i.e. take the asymptotic range first and then perform the optimization . for that we rewrite eq . ( [ q1dot ] ) as follows : , and define the expected heat flux as . then , in the asymptotic range , we obtain the approximate expression , where is as in eq . ( [ defc ] ) . here we can draw a parallel with a newtonian heat flow , with the same effective heat conductance , between an effective temperature and the temperature of the cold reservoir . then it is easily seen that the maximum of the expected power is obtained at the ca value . similarly , one can argue for the emergence of \sqrt{1+\epsilon_{c}} - 1 in the case of the refrigerator mode , in terms of effective heat flows which are newtonian in nature . interestingly , the above expressions seem to suggest an analogy between the expected mesoscopic model with limited information and a finite - time thermodynamic model with newtonian heat flows . if we compare with the endoreversible models , then we observe that the assumption of a newtonian heat flow goes together with obtaining the ca efficiency at maximum power , and the cop \sqrt{1+\epsilon_{c}} - 1 at the optimum - criterion . we however note that the analogy does not hold in its entirety . the effective temperatures defined above do not have a physical counterpart in the ratchet model , although in the endoreversible picture these denote the temperatures of the working medium while in contact with the hot or cold reservoirs . secondly , the heat conductances need not be equal for the endoreversible model with newtonian heat flows . further , the intermediate temperatures and , as above , are equal in magnitude at the maximum expected power ; however , for the endoreversible model , these temperatures are not equal at maximum power . still , the form of the expressions for the rates of heat transfer does provide a certain insight into the emergence of the familiar expressions for the figures of merit at optimal expected performance within the prior - averaged approach . finally , we close with a few observations on future lines of enquiry . it was seen in fig . 1 that for a specified finite range of the prior , the estimates of efficiency at maximum power are either above or below the estimates in the asymptotic range .
in particular , the estimates are function of the values and .we obtain universal results , dependent on the ratio of reservoir temperatures , only in the asymptotic range .further , the smaller values of the upper limit , overestimate the efficiency ( inset in fig .1 ) whereas the larger values of the lower limit , underestimate the efficiency .an opposite behavior is seen for the refrigerator mode ( fig .moreover , this trend for a chosen mode ( engine / refrigerator ) is specific to the choice of the uncertain variable .thus the trend is reversed , if instead of choosing , we perform the analysis with as the uncertain variable .this behavior is seen in both the engine as well as the refrigerator mode .investigation into the relation between inferences derived from the two choices for the uncertain variable , may yield further insight into the behavior of estimated performance and the approach in general .the point may be appreciated by noting that by specifying a finite - range for the prior we add new information to the probabilistic model . in order that inference may provide a useful and practical guess on the actual performance of the device, this additional prior information has to be related to some objective features of the model .these considerations are relevant for further exploring the intriguing relation between the subjective and the objective descriptions of thermodynamic models .the authors acknowledge financial support from the department of science and technology , india under the research project no .sr / s2/cmp-0047/2010(g ) , titled : `` quantum heat engines : work , entropy and information at the nanoscale '' .y. wang , m. li , z. c. tu , a. c. hernndez , j. m. m. roco , coefficient of performance at maximum figure of merit and its bounds for low - dissipation carnot - like refrigerators , phys .e 86 ( 2012 ) 011127 .y. hu , f. wu , y. ma , j. he , j. wang , a. c. hernndez , j. m. m. roco , coefficient of performance for a low - dissipation carnot - like refrigerator with nonadiabatic dissipation , phys .e 88 ( 2013 ) 062115 . | we estimate the performance of feynman s ratchet at given values of the ratio of cold to hot reservoir temperatures ( ) and the figure of merit ( efficiency in the case of engine and coefficienct of performance in the case of refrigerator ) . the latter implies that only the ratio of two intrinsic energy scales is known to the observer , but their exact values are completely uncertain . the prior probability distribution for the uncertain energy parameters is argued to be jeffreys prior . we define an average measure for performance of the model by averaging , over the prior distribution , the power output ( heat engine ) or the -criterion ( refrigerator ) which is the product of rate of heat absorbed from the cold reservoir and the coefficient of performance . we observe that the figure of merit , at optimal performance close to equilibrium , is reproduced by the prior - averaging procedure . further , we obtain the well - known expressions of finite - time thermodynamics for the efficiency at optimal power and the coefficient of performance at optimal -criterion , given by and respectively . this analogy is explored further and we point out that the expected heat flow from and to the reservoirs , behaves as an effective newtonian flow . we also show , in a class of quasi - static models of quantum heat engines , how ca efficiency emerges in asymptotic limit with the use of jeffreys prior . |
much interest has been aroused in wormholes since the morris - thorne article .these act as tunnels from one region of spacetime to another , possibly through which observers may freely traverse .wormhole physics is a specific example of solving the einstein field equation in the reverse direction , namely , one first considers an interesting and exotic spacetime metric , then finds the matter source responsible for the respective geometry . in this manner, it was found that these traversable wormholes possess a peculiar property , namely exotic matter , involving a stress - energy tensor that violates the null energy condition .in fact , they violate all the known pointwise energy conditions and averaged energy conditions , which are fundamental to the singularity theorems and theorems of classical black hole thermodynamics . the weak energy condition ( wec )assumes that the local energy density is positive and states that , for all timelike vectors , where is the stress energy tensor . by continuity ,the wec implies the null energy condition ( nec ) , , where is a null vector .violations of the pointwise energy conditions led to the averaging of the energy conditions over timelike or null geodesics .for instance , the averaged weak energy condition ( awec ) states that the integral of the energy density measured by a geodesic observer is non - negative , i.e. , , where is the observer s proper time .although classical forms of matter are believed to obey these energy conditions , it is a well - known fact that they are violated by certain quantum fields , amongst which we may refer to the casimir effect .pioneering work by ford in the late 1970 s on a new set of energy constraints , led to constraints on negative energy fluxes in 1991 .these eventually culminated in the form of the quantum inequality ( qi ) applied to energy densities , which was introduced by ford and roman in 1995 .the qi was proven directly from quantum field theory , in four - dimensional minkowski spacetime , for free quantized , massless scalar fields and takes the following form in which , is the tangent to a geodesic observer s wordline ; is the observer s proper time and is a sampling time .the expectation value is taken with respect to an arbitrary state .contrary to the averaged energy conditions , one does not average over the entire wordline of the observer , but weights the integral with a sampling function of characteristic width , .the inequality limits the magnitude of the negative energy violations and the time for which they are allowed to exist .the basic applications to curved spacetimes is that these appear flat if restricted to a sufficiently small region .the application of the qi to wormhole geometries is of particular interest .a small spacetime volume around the throat of the wormhole was considered , so that all the dimensions of this volume are much smaller than the minimum proper radius of curvature in the region .thus , the spacetime can be considered approximately flat in this region , so that the qi constraint may be applied .the results of the analysis is that either the wormhole possesses a throat size which is only slightly larger than the planck length , or there are large discrepancies in the length scales which characterize the geometry of the wormhole .the analysis imply that generically the exotic matter is confined to an extremely thin band , and/or that large red - shifts are involved , which present severe difficulties for traversability , such as large tidal forces .due to 
these results , ford and roman concluded that the existence of macroscopic traversable wormholes is very improbable ( see for an interesting review ) .it was also shown that , by using the qi , enormous amounts of exotic matter are needed to support the alcubierre warp drive and the superluminal krasnikov tube . however , there are a series of objections that can be applied to the qi .firstly , the qi is only of interest if one is relying on quantum field theory to provide the exotic matter to support the wormhole throat .but there are classical systems ( non - minimally coupled scalar fields ) that violate the null and the weak energy conditions , whilst presenting plausible results when applying the qi . secondly , even if one relies on quantum field theory to provide exotic matter , the qi does not rule out the existence of wormholes , although they do place serious constraints on the geometry .thirdly , it may be possible to reformulate the qi in a more transparent covariant notation , and to prove it for arbitrary background geometries . more recently , visser _ et al _ , noting the fact that the energy conditions do not actually quantify the `` total amount '' of energy condition violating matter , developed a suitable measure for quantifying this notion by introducing a `` volume integral quantifier ''. this notion amounts to calculating the definite integrals and , and the amount of violation is defined as the extent to which these integrals become negative .although the null energy and averaged null energy conditions are always violated for wormhole spacetimes , visser _ et al _ considered specific examples of spacetime geometries containing wormholes that are supported by arbitrarily small quantities of averaged null energy condition violating matter .it is also interesting to note that by using the `` volume integral quantifier '' , extremely stringent conditions were found on `` warp drive '' spacetimes , considering non - relativistic velocities of the bubble velocity .as the violation of the energy conditions is a problematic issue , depending on one s point of view , it is interesting to note that an elegant class of wormhole solutions minimizing the usage of exotic matter was constructed by visser using the cut - and - paste technique , in which the exotic matter is concentrated at the wormhole throat . using these thin - shell wormholes ,a dynamic stability was analyzed , either by choosing specific surface equations of state , or by considering a linearized stability analysis around a static solution .one may also construct wormhole solutions by matching an interior wormhole to an exterior vacuum solution , at a junction surface .in particular , a thin shell around a traversable wormhole , with a zero surface energy density was analyzed in , and with generic surface stresses in .a similar analysis for the plane symmetric case , with a negative cosmological constant , is done in .a general class of wormhole geometries with a cosmological constant and junction conditions was analyzed by debenedictis and das , and further explored in higher dimensions .a particularly simple , yet interestingly enough , case is that of a construction of a dust shell around a traversable wormhole . 
the null energy condition violation at the throat is necessary to maintain the wormhole open , although the averaged null energy condition violating matter can be made arbitrarily small .thus , in order to further minimize the usage of exotic matter , one may impose that the surface stress energy tensor obeys the energy conditions at the junction surface . for a pressureless dust shell ,one need only find the regions in which the surface energy density is non - negative , to determine the regions in which all of the energy conditions are satisfied .the plan of this paper is as follows : in section ii , we present a specific spacetime metric of a spherically symmetric traversable wormhole , in the presence of a generic cosmological constant , analyzing the respective mathematics of embedding .we verify that the null energy and the averaged null energy conditions are violated , as was to be expected . using the `` volume integral quantifier '' and considering the specific example of the ellis `` drainhole '', we verify that the construction of this spacetime geometry can be made with arbitrarily small quantities of averaged null energy condition violating matter . in section iii, we present the unique exterior vacuum solution . in section iv , we construct a pressureless dust shell around the interior wormhole spacetime , by matching the latter to the exterior vacuum solution .we also deduce an expression governing the behavior of the radial pressure across the junction surface . in sectionv , we find regions where the surface energy density is positive , thereby satisfying all of the energy conditions at the junction , in order to further minimize the usage of exotic matter . in section vi ,specific dimensions of the wormhole , namely , the throat radius and the junction interface radius , and estimates of the total traversal time and maximum velocity of an observer journeying through the wormhole , are also found by imposing the traversability conditions .finally , we conclude in section vii .consider the following static and spherically symmetric line element ( with ) where and are arbitrary functions of the radial coordinate , , and is the cosmological constant . is called the redshift function , for it is related to the gravitational redshift .we shall see ahead that this metric corresponds to a wormhole spacetime , so that can be denoted as the form function , as it determines the shape of the wormhole .the radial coordinate has a range that increases from a minimum value at , corresponding to the wormhole throat , to , where the interior spacetime will be joined to an exterior vacuum solution .consider , without a significant loss of generality , an equatorial slice , , of the line element ( [ metricwormhole ] ) , at a fixed moment of time and a fixed .the metric is thus reduced to , which can then be embedded in a two - dimensional euclidean space , .the lift function , , is only a function of , i.e. , .thus , identifying the radial coordinate , , of the embedding space with the slice considered of the wormhole geometry , we have the condition for the embedding surface , given by to be a solution of a wormhole , the radial coordinate has a minimum value , , denoted as the throat , which defined in terms of the shape function is given by at this value the embedded surface is vertical , i.e. , . 
as in a general wormhole solution , the radial coordinate is ill - behaved near the throat , but the proper radial distance , , is required to be finite throughout spacetime . this implies that the condition is imposed . furthermore , one needs to impose that the throat flares out , which mathematically entails that the inverse of the embedding function , , must satisfy at or near the throat . this flaring - out condition is given by , implying that at the throat , or , we have the important condition . we will see below that this condition plays a fundamental role in the analysis of the violation of the energy conditions . using the einstein field equation , , in an orthonormal reference frame with the following set of basis vectors , the stress energy tensor components of the metric ( [ metricwormhole ] ) are given by the energy density , the radial pressure of eq . ( [ tau ] ) , and the lateral pressure

p_{t}(r) = \frac{1}{8\pi}\left[ \left(\frac{\lambda}{3}r^2 - \frac{m}{r}\right) \left(\phi'' + (\phi')^2 + \frac{\phi'}{r}\right) - \frac{(1+r\phi')}{2r^3}\left(m'r - m - \frac{2\lambda}{3}r^3\right) + \lambda \right] \, . \label{p}

here is the energy density , is the radial pressure , and is the pressure measured in the lateral directions , orthogonal to the radial direction . one may readily verify that the null energy condition ( nec ) is violated at the wormhole throat . the nec states that , where is a null vector . in the orthonormal frame , , we have the condition of eq . ( [ necthroat ] ) . due to the flaring out condition of the throat deduced from the mathematics of embedding , i.e. , eq . ( [ flarecondition ] ) , we verify that at the throat , and due to the finiteness of , from eq . ( [ necthroat ] ) we have . matter that violates the nec is denoted as exotic matter . in particular , one may consider a specific class of form functions that impose that eq . ( [ metricwormhole ] ) is asymptotically flat . this condition is reflected in the embedding diagram as in the limit , i.e. , as . in this case we verify that the averaged null energy condition ( anec ) , defined as , is also violated . here is an affine parameter along a radial null geodesic . carrying out an identical computation as in , we have . one may also consider the `` volume integral quantifier '' , as defined in , which provides information about the `` total amount '' of anec violating matter in the spacetime . taking into account eq . ( [ necthroat ] ) and performing an integration by parts , the volume integral quantifier can be cast as a single radial integral from the throat out to the matching radius . consider , for simplicity , the ellis `` drainhole '' solution ( also considered in ) , which corresponds to choosing a zero redshift function , , and the form function of eq . ( [ homerform ] ) . suppose now that the wormhole extends from the throat , , to a radius situated at . evaluating the volume integral , one deduces a closed expression , and taking the limit as , one verifies that the integral vanishes . thus , as in the examples presented in , with the form function of eq . ( [ homerform ] ) , one may construct a traversable wormhole with arbitrarily small quantities of anec violating matter . the exotic matter threading the wormhole extends from the throat at to the junction boundary situated at , where the interior solution is matched to an exterior vacuum spacetime . in general , the solutions of the interior and exterior spacetimes are given in different coordinate systems . therefore , to distinguish between both spacetimes , the exterior vacuum solution , written in the coordinate system , is given by . if , the solution is denoted the schwarzschild - de sitter spacetime .
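as a brief aside before completing the description of the exterior vacuum solution , the `` arbitrarily small '' claim above can be checked numerically in a simplified setting . the explicit form function of eq . ( [ homerform ] ) is not repeated here , so the sketch below assumes the textbook ellis drainhole choice b(r) = r_0^2 / r with zero redshift function and zero cosmological constant , and uses the standard morris - thorne expressions for the energy density and radial pressure ; it is a hedged illustration , not the paper s exact computation .

```python
# illustrative check (standard morris-thorne expressions with zero redshift
# function and zero cosmological constant; the paper's exact form function and
# notation are not reproduced): for the ellis drainhole b(r) = r0**2 / r,
# integrate (rho + p_r) over the volume between the throat r0 and the matching
# radius a.  the integral is negative (nec violated) but tends to zero as a -> r0.
import numpy as np

def nec_volume_integral(r0, a, n=200000):
    r = np.linspace(r0, a, n)
    b = r0**2 / r                      # ellis drainhole form function
    db = -r0**2 / r**2                 # b'(r)
    rho = db / (8.0 * np.pi * r**2)    # energy density (geometric units, phi = 0)
    p_r = -b / (8.0 * np.pi * r**3)    # radial pressure (phi = 0)
    integrand = (rho + p_r) * 4.0 * np.pi * r**2
    return np.trapz(integrand, r)

r0 = 1.0
for a in (10.0, 2.0, 1.1, 1.01, 1.001):
    print(f"a = {a:7.3f}   integral of (rho + p_r) dV = {nec_volume_integral(r0, a):+.6f}")
# the printed values are negative but approach zero as the matching radius a -> r0,
# i.e. the total amount of nec-violating matter can be made arbitrarily small.
```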
for , we have the schwarzschild - anti de sitter spacetime , and of course the specific case of reduces to the schwarzschild solution , with a black hole event horizon at . note that the metric ( [ metricvacuum ] ) is asymptotically de sitter if , as , or asymptotically anti - de sitter if , as . for the schwarzschild - de sitter spacetime , , if , then the factor possesses two positive real roots ( see for details ) , and , corresponding to the black hole and the cosmological event horizons of the de sitter spacetime , respectively . in this domain we have and . considering the schwarzschild - anti de sitter metric , with , the factor has only one real positive root , , corresponding to a black hole event horizon , with ( see for details ) . we shall match eqs . ( [ metricwormhole ] ) and ( [ metricvacuum ] ) at a junction surface , , situated at . in order for these line elements to be continuous across the junction , we consider the following transformations , where and correspond to the exterior and interior cosmological constants , respectively , which we shall assume continuous across the junction surface , i.e. , . we shall consider that the junction surface is a timelike hypersurface defined by a parametric equation of the form . here are the intrinsic coordinates on , and is the proper time as measured by a comoving observer on the hypersurface . the intrinsic metric to is given by . note that the junction surface , , is situated outside the event horizon , i.e. , , to avoid a black hole solution . using the darmois - israel formalism , the surface stresses at the junction interface are given by , with . here and are the surface energy density and the tangential surface pressure , respectively . the surface mass of the thin shell is given by . the total mass of the system , , is provided by the following expression . in particular , considering a pressureless dust shell , , from eq . ( [ surfpressure ] ) we have the following constraint , which restricts the values of the redshift parameter , . eliminating the factor containing the form function in eq . ( [ surfenergy ] ) by using eq . ( [ zetarestriction ] ) , the surface energy density is then given by the relationship of eq . ( [ sigma ] ) , with . this expression will be analyzed in the following section . it is also of interest to obtain an equation governing the behavior of the radial pressure at the junction boundary in terms of the surface stresses , given by , where and are the radial pressures acting on the shell from the exterior and the interior . equation ( [ pressurebalance ] ) relates the difference of the radial pressure across the shell to a combination of the surface stresses , and , and the geometrical quantities . note that for the exterior vacuum solution . thus , for the particular case of a dust shell , , eq . ( [ pressurebalance ] ) reduces to a single relation determining the interior radial pressure at the junction radius .
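as a hedged sketch of the static darmois - israel computation used here , the snippet below evaluates the surface energy density and tangential surface pressure of a shell at radius a for generic stand - ins of the metric functions : an exterior 1 - 2m/a - \lambda a^2/3 , an interior 1 - b(a)/a - \lambda a^2/3 with the ellis - type b(r) = r_0^2/r as an example , and a redshift function entering only through its derivative at the junction . these are the textbook static thin - shell formulas , not necessarily in the paper s exact notation or parametrization .

```python
# hedged sketch of the static thin-shell (darmois-israel) surface stresses; the
# metric functions below are generic stand-ins, not the paper's exact expressions.
import numpy as np

def sigma_and_pressure(a, M, lam, r0, dphi):
    """surface energy density and tangential pressure of a static shell at r = a."""
    f_ext  = 1.0 - 2.0*M/a - lam*a**2/3.0          # exterior metric function
    df_ext = 2.0*M/a**2 - 2.0*lam*a/3.0            # its radial derivative
    b      = r0**2 / a                             # example form function b(r) = r0^2 / r
    g_int  = 1.0 - b/a - lam*a**2/3.0              # interior metric function
    sf, sg = np.sqrt(f_ext), np.sqrt(g_int)
    sigma = -(sf - sg) / (4.0*np.pi*a)             # lanczos equation for the energy density
    press = (df_ext/(2.0*sf) + sf/a - dphi*sg - sg/a) / (8.0*np.pi)   # tangential pressure
    return sigma, press

# example: a shell at a = 3 r0 around an ellis-type interior matched to schwarzschild
s, p = sigma_and_pressure(a=3.0, M=0.4, lam=0.0, r0=1.0, dphi=0.0)
print(f"surface energy density = {s:+.5f},  tangential pressure = {p:+.5f}")
# for a pressureless (dust) shell one would instead solve press = 0 for dphi, which
# plays the role of the constraint on the redshift parameter quoted in the text.
```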
in principle , by taking the limit ( note , however , that ) , the `` total amount '' of energy condition violating matter of the interior solution may be made arbitrarily small . thus , in the spirit of further minimizing the usage of exotic matter , we shall find regions where the energy conditions are satisfied at the junction surface . for the specific case of a dust shell , this amounts to finding regions where is satisfied . this shall be done for the schwarzschild spacetime , , the schwarzschild - de sitter solution , , and the schwarzschild - anti de sitter spacetime , . in the analysis that follows we shall only be interested in positive values of . for the schwarzschild spacetime , , one needs to impose that , so that eq . ( [ zetarestriction ] ) reduces to . the right hand term is positive , which implies that the redshift parameter is always positive , . equation ( [ sigma ] ) reduces to eq . ( [ schwsigma ] ) . to analyze this relationship , we shall define a new dimensionless parameter , . therefore , eq . ( [ schwsigma ] ) may be rewritten in the compact form of eq . ( [ schwarzsigma ] ) , with given by . equation ( [ schwarzsigma ] ) is depicted in fig . 1 . in the interval , we verify that for , implying a positive energy density , , and thus satisfying all of the energy conditions . for , a boundary surface , , is given at . a positive surface energy density is verified in the following region , i.e. , .

[ figure 1 : the surface energy density at the junction for the schwarzschild case , plotted in terms of the dimensionless parameter defined in the text ; see text for details . ]

for the schwarzschild - de sitter spacetime , , consider the definitions of the dimensionless parameters and . then eq . ( [ zetarestriction ] ) takes the form . we verify that the redshift parameter is null , , if the factor is zero , i.e. , , which is represented in fig . 2 . for , we have , or ; for , we have , or . only the region below the solid curve , given by , is of interest . the surface energy density , eq . ( [ sigma ] ) , as in the previous case , may be rewritten in the compact form of eq . ( [ sigmasds ] ) , with given by , with . to analyze eq . ( [ sigmasds ] ) , consider a null surface energy density , , i.e. , , so that we deduce the relationship , for . for the particular case of , depicted in fig . 2 , eq . ( [ sigmasds ] ) reduces to , which is null for , i.e. , ; positive for , i.e. , ; and negative for , i.e. , . thus the energy conditions are satisfied in the region to the right of the curve , , depicted in fig . 2 . in the interval , , we verify that for . for , a non - negative surface energy density is then verified for . the particular case of is depicted in fig . 2 . the energy conditions are also satisfied to the right of the respective curve and to the left of the solid line , .

[ figure 2 : parameter regions for the schwarzschild - de sitter case ; only the region below the solid line is of interest , and the energy conditions are obeyed to the right of each respective dashed curve ; see text for details . ]

for the schwarzschild - anti de sitter spacetime , , consider the parameters and . equation ( [ zetarestriction ] ) takes the form of eq . ( [ zetarestrictionsads ] ) . the right hand side of eq . ( [ zetarestrictionsads ] ) is always positive , so that the restriction is imposed . the surface energy density , eq . ( [ sigma ] ) , is given in the compact form of eq . ( [ sigmasads ] ) , with given by . consider a null surface energy density , , i.e. , , so that from eq . ( [ sigmasads ] ) we have the relationship , for . for the particular case of , eq . ( [ sigmasads ] ) takes the form , which is null for , i.e. , ; positive for , i.e. , ; and negative for , i.e. , .
only the region to the left of the solid curve , given by , is of interest . for , a non - negative surface energy density is given for and . for , this holds for . for , we have for .

[ figure 3 : parameter regions for the schwarzschild - anti de sitter case ; only the region to the left of the solid curve is of interest . for the cases of and , the energy conditions are satisfied to the right of the respective dashed curves and to the left of the solid line , while for the specific case of they are obeyed above the respective curve ; see text for details . ]

in this section we shall consider the traversability conditions required for the traversal of a human being through the wormhole , and consequently determine specific dimensions for the wormhole . specific cases for the traversal time and velocity will also be estimated . in this section we shall insert to aid us in the computations . consider the redshift function given by , with . thus , from the definition of , the redshift function , in terms of , takes the following form , with . with this choice of , can also be defined as . the case of corresponds to the constant redshift function , so that . if , then is finite throughout spacetime and in the absence of an exterior solution we have . as we are considering a matching of an interior solution with an exterior solution at , it is also possible to consider the case , imposing that is finite in the interval . one of the traversability conditions required is that the acceleration felt by the traveller should not exceed earth s gravity . consider an orthonormal basis of the traveller s proper reference frame , , given in terms of the orthonormal basis vectors of eqs . ( [ basisvectors ] ) of the static observers by a lorentz transformation , i.e. , , where , with being the velocity of the traveller as he / she passes , as measured by a static observer positioned there . thus , the traveller s four - acceleration expressed in his proper reference frame , , yields the following restriction : . the condition is immediately satisfied at the throat , . from eq . ( [ travellergravity ] ) , one may also find an estimate for the junction surface , . consider that the dust shell is placed in an asymptotically flat region of spacetime , so that . we also assume that the traversal velocity is constant , , and non - relativistic , . taking into account eq . ( [ redshift ] ) , from eq . ( [ travellergravity ] ) one deduces . considering the equality case , one has . providing a value for , one may find an estimate for . for instance , considering that , one finds that . another of the traversability conditions required is that the tidal accelerations felt by the traveller should not exceed the earth s gravitational acceleration . the tidal acceleration felt by the traveller is given by , where is the traveller s four velocity and is the separation between two arbitrary parts of his body . note that is purely spatial in the traveller s reference frame , as , so that . for simplicity , assume that along any spatial direction in the traveller s reference frame . thus , the constraint provides the following inequalities :

\bigg| \, \cdots \, - \frac{\phi'}{2r^2}\left(m'r - m - \frac{2\lambda}{3}r^3\right) \bigg| \, \big|\eta^{\hat{1}'}\big| \, c^2 \leq g_{\oplus} \, , \label{radialtidalconstraint}

\bigg| \frac{\gamma^2}{2r^3}\left[ \left(\frac{v}{c}\right)^2 \left(m'r - m - \frac{2\lambda}{3}r^3\right) + 2r^2 \left(\frac{\lambda}{3}r^2 - \frac{m}{r}\right)\phi' \right] \bigg| \, \big|\eta^{\hat{2}'}\big| \, c^2 \leq g_{\oplus} \, . \label{lateraltidalconstraint}
the radial tidal constraint , eq . ( [ radialtidalconstraint ] ) , constrains the redshift function , and the lateral tidal constraint , eq . ( [ lateraltidalconstraint ] ) , constrains the velocity with which observers traverse the wormhole . at the throat , or , and taking into account eq . ( [ redshift ] ) , we verify that eq . ( [ radialtidalconstraint ] ) reduces to , or , considering the equality case , to . from this relationship one may find estimates for the junction interface radius , , and the wormhole throat , . from eq . ( [ tidalrestriction ] ) one deduces . using eqs . ( [ equalitycase2 ] ) and ( [ equalitycase ] ) , one may find an estimate for the throat radius by providing a specific value for . for instance , considering and equating eqs . ( [ equalitycase2 ] ) and ( [ equalitycase ] ) , one finds . if , we verify that the embedding diagram flares out very slowly , so that from eq . ( [ r0 ] ) may be made arbitrarily small . nevertheless , using specific examples of the form function , for instance eq . ( [ homerform ] ) , we will assume the approximation . using the above value of , from eq . ( [ r0 ] ) we find . one may use the lateral tidal constraint , eq . ( [ lateraltidalconstraint ] ) , to find an upper limit of the traversal velocity , ; evaluated at , we find . the traversal times as measured by the traveller and by an observer situated at a space station , which we shall assume rests just outside the junction surface , are given respectively by , where is the proper radial distance . since we have chosen and , we can use the following approximations . for instance , considering the maximum velocity , , with , the traversal through the wormhole can be made in approximately a minute .
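the specific estimates quoted above are not repeated here , so the following sketch only illustrates the type of order - of - magnitude arithmetic involved : the throat radius , junction radius and traveller size below are assumed values chosen for illustration , and the lateral tidal bound is taken in the standard morris - thorne form with the dimensionless form - function factor set to order unity .

```python
# illustrative order-of-magnitude estimates in the spirit of the traversability
# analysis above; all input values are assumptions, not the paper's numbers.
import math

g_earth = 9.81           # m / s^2
c       = 2.998e8        # m / s
eta     = 2.0            # m, size of the traveller
r0      = 1.0e5          # m, assumed throat radius
a       = 1.0e7          # m, assumed junction (shell) radius

# lateral tidal constraint at the throat, non-relativistic limit, with |b'(r0)-1| ~ 1:
# v^2 |eta| / (2 r0^2) <= g_earth   ->   v_max ~ r0 * sqrt(2 g_earth / |eta|)
v_max = r0 * math.sqrt(2.0 * g_earth / eta)
v_max = min(v_max, 0.1 * c)          # keep the traversal safely non-relativistic

# traversal time through the wormhole region (in through one mouth, out the other)
tau = 2.0 * a / v_max
print(f"maximum velocity from the lateral tidal bound ~ {v_max:.3e} m/s")
print(f"traversal time at that velocity              ~ {tau:.1f} s")
```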
| firstly , we review the pointwise and averaged energy conditions , the quantum inequality and the notion of the `` volume integral quantifier '' , which provides a measure of the `` total amount '' of energy condition violating matter . secondly , we present a specific metric of a spherically symmetric traversable wormhole in the presence of a generic cosmological constant , verifying that the null and the averaged null energy conditions are violated , as was to be expected . thirdly , a pressureless dust shell is constructed around the interior wormhole spacetime by matching the latter geometry to a unique vacuum exterior solution . in order to further minimize the usage of exotic matter , we then find regions where the surface energy density is positive , thereby satisfying all of the energy conditions at the junction surface . an equation governing the behavior of the radial pressure across the junction surface is also deduced . lastly , taking advantage of the construction , specific dimensions of the wormhole , namely , the throat radius and the junction interface radius , and estimates of the total traversal time and maximum velocity of an observer journeying through the wormhole , are also found by imposing the traversability conditions . |
evolutionary games have recently received ample attention in the physics community , as it became obvious that methods of statistical physics can be used successfully to study also interactions that are more complex than just those between particles .broadly classified as statistical physics of social dynamics , these studies aim to elevate our understanding of collective phenomena in society on a level that is akin to the understanding we have about interacting particle systems . within the theoretical framework of evolutionary games ,the evolution of cooperation is probably the most interesting collective phenomenon to study .several evolutionary games constitute so - called social dilemmas , the most prominent of which is the prisoner s dilemma game , and in which understanding the evolution of cooperation still a grand challenge .regardless of game particularities , a social dilemma implies that the collective wellbeing is at odds with individual success .an individual is therefore tempted to act so as to maximize her own profit , but at the same time neglecting negative consequences this has for the society as a whole .a frequently quoted consequence of such selfish actions is the `` tragedy of the commons '' .while cooperation is regarded as the strategy leading away from the threatening social decline , it is puzzling why individuals would choose to sacrifice some fraction of personal benefits for the wellbeing of society . according to nowak , five rules promote the evolution of cooperation .these are kin selection , direct and indirect reciprocity , network reciprocity , and group selection .recent reviews clearly attest to the fact that physics - inspired research has helped refine many of these concepts .in particular evolutionary games on networks , spurred on by the seminal discovery of spatial reciprocity , and subsequently by the discovery that scale - free networks strongly facilitate the evolution of cooperation , are still receiving ample attention to this day .one of the most recent contributions to the subject concerns the assignment of cognitive skills to individuals that engage in evolutionary games on networks .the earliest forerunners to these advances can be considered strategies such as `` tit - for - tat '' and pavlov , many of which were proposed already during the seminal experiments performed by axelrod , and which assume individuals have cognitive skills that exceed those granted to them in the framework of classical game theory .it has recently been shown , for example , that incipient cognition solves several open question related to network reciprocity and that cognitive strategies are particularly fit to take advantage of the ability of heterogeneous networks to promote the evolution of cooperation .here we build on our previous work , where we have presented the idea that not strategies but rather emotions could be the subject of imitation during the evolutionary process .it is worth noting that the transmissive nature of positive and negative emotional states was already observed in , where it was concluded that humans really do adjust their emotions depending on their contacts in a social network .moreover , the connection between intuition and willingness to cooperate was also tested in human experiments .it therefore is of interest to determine how the topology of the interaction network affects the spreading of emotions , which may in turn determine the level of cooperation . 
in the context of games on lattices, we have shown that imitating emotions such as goodwill and envy from the more successful players reinstalls imitation as a tour de force for resolving social dilemmas , even for games where the nash equilibrium is a mixed phase .we have also argued that envy is an important inhibitor of cooperative behavior .we now revisit the snowdrift , stag - hunt and the prisoner s dilemma game on random graphs and scale - free networks , with the aim of determining the role of interaction heterogeneity within this framework .we focus on sympathy and envy as the two key emotions determining the emotional profile of each player , and we define them simply as the probability to cooperate with less and more successful opponents , respectively . strategies thus become link - specific rather than player - specific , whereby the level of cooperation in the population can be determined by the average number of times players choose to cooperate .interestingly , in agreement with a recent experiment , we find that network reciprocity plays a negligible role .the outcome on regular random graphs is the same as reported previously for the square lattice , leading to the conclusion that the ability of cooperators to aggregate into spatially compact clusters is irrelevant .only when degree heterogeneity is introduced to interaction networks , we find that the evolution of emotional profiles changes .as we will show , homogeneous networks lead to fixations that are characterized by high sympathy and high envy , while heterogeneous networks lead to low or modest sympathy and low envy .network heterogeneity thus alleviates a key impediment to higher levels of cooperation on lattices and regular networks , namely envy , and by doing so opens the possibility to much more cooperative states even under extremely adverse conditions . from a different point of view, it can be argued that some topological features of interaction networks in fact determine the emotional profiles of players , and they do so in such a way that cooperation is the most frequently chosen strategy .the remainder of this paper is organized as follows .first , we describe the mathematical model , in particular the protocol for the imitation of emotional profiles as well as the definition of social dilemmas on networks .next we present the main results , whereas lastly we summarize and discuss their implications .the traditional setup of an evolutionary game assumes players occupying vertices of an interaction network . moreover ,each player having a pure strategy , cooperates or defects with all neighbors independently of their strategy and payoff . here , instead of pure strategies, we introduce an emotional profile ] interval , where denotes the average degree of players .subsequently , every payoff value is updated by considering the proper neighborhoods of a player and the actual emotional parameters .importantly , after the accumulation of new payoffs , player can not imitate a pure strategy from player but only its emotional profile , i.e. 
, the and/or value .imitation is decided so that a randomly selected player first acquires its payoff by playing the game with all its neighbors , as defined by the interaction network .note that is thus the degree of player .next , one randomly chosen partner of , denoted by , also acquires its payoff by playing the game with all its neighbors .player then attempts to imitate the emotional profile of players with the probability \}$ ] , where determines the level of uncertainty by strategy adoptions .the latter can be attributed to errors in judgment due to mistakes and external influences that affect the evaluation of the opponent . without loss of generality we set , implying that better performing players are readily imitated , but it is not impossible to adopt the strategy of a player performing worse . importantly , since the emotional profile consist of two parameters , two random numbers are drawn to enable independent imitation of and .this is vital to avoid potential artificial propagations of freak ( extremely successful ) pairs .technically , pairs were available at the start of the evolutionary process .finally , after each imitation the payoff of player is updated using its new emotional profile , whereby each full monte carlo step involves all players having a chance to adopt the emotional profile from one of their neighbors once on average .prior to presenting the result of this model , it is important to note that there will almost always be a fixation of pairs , i.e. , irrespective of and only a single pair will eventually spread across the whole population .once fixation occurs the evolutionary process stops .the characteristic probability of encountering cooperative behavior in the population , which is equivalent to the stationary fraction of cooperators in the traditional version of the game , can then be determined by means of averaging over the final states that emerge from different initial conditions .exceptions to single pair fixations are likely to occur for strongly heterogeneous networks , where more than one pair can survive around strong hubs .this effect is more pronounced in the harmony game ( hg ) quadrant , but becomes negligible in the prisoner s dilemma parametrization of the game . in casemore than a single pair does survive , we present in what follows the average over several independent realizations . for the monte carlo simulations, we have used players and up to full steps , and we have averaged over independent runs .we start by presenting results obtained on a regular random graph with , as it is a natural extension of a simple square lattice population which we have considered before in . importantly , while the degree distribution remains uniform , other topological features , like the presence of shortcuts or the emergence of a nonzero clustering coefficient , change significantly .previous works on games using pure strategies highlighted that these details may play a significant role by the evolution of cooperation in social dilemmas .figure [ rrg ] depicts the color map encoding the fixation values of ( left ) and ( right ) on the parameter plane . from the presented results it followsthat if the governing social dilemma is of the snowdrift type , players will always ( never ) cooperate with their neighbors provided their payoff is lower ( higher ) . 
in the prisoner s dilemma quadrant, we can observe either complete dominance of defection regardless of the status of the opponents , or the same situation as in the snowdrift quadrant provided is not too negative . for the stag - hunt and the harmony gamethe outcome is practically identical as obtained by means of the traditional version of the two games . in general , however , both color profiles differ only insignificantly from the ones we have reported in ( see figs . 2 and 3 there ) for the square lattice .this leads to the conclusion that the structure of interactions does not play a prominent role as long as the degree of all players is uniform .this leads to suspect that the heterogeneity of interactions might play a pivotal role .we therefore depart from the regular random graph to random graphs with different degree distributions , as depicted in fig .we consider four different types of random graphs with gaussian distributed degree , yet with increasing variance . according to the legend of fig .[ dist ] , the random graph with is thus the least heterogeneous ( only degrees , and are possible ) , while the random graph with is the most heterogeneous .increasing gradually the variance from to thus enables us to monitor directly the consequences of heterogeneity stemming from the interaction network .color maps encoding the fixation values of and for the four different random graphs are depicted in fig .[ randoms ] . by following the plots from left to right, it can be observed that as the heterogeneity of the interaction network increases , the fixation of profiles of both and change . by focusing on the snowdrift and the prisoner s dilemma quadrant, there is a gradual shift from high- low- emotional profiles to low- and high- values as heterogeneity increases .accordingly , taking into account also results presented in fig .[ rrg ] , we conclude that homogeneous interaction networks promote emotions like sympathy and envy ( and ) , while heterogeneous interaction network prefer indifference and servility ( and ) .it is worth highlighting that these emotional profiles emerge completely spontaneously based on payoff - driven imitation .the change is thus brought about exclusively by the heterogeneity of the interaction network .it is possible to take a step further in terms of the heterogeneity of the interaction network by considering scale - free networks .we therefore make use of the standard model proposed by barabsi and albert .results presented in fig .[ sf ] further support our arguments , as the region of low and moderate values extends further into the snowdrift quadrant , while at the same time low values vanish more and more from both the snowdrift and the prisoner s dilemma quadrant . as before ,the harmony games and the stag - hunt quadrant remain relatively unaffected , which corroborates the fact that the proposed shift from the imitation of strategies to the imitation of emotional profiles affect predominantly the social dilemma games .it is also worth reminding that on scale - free networks the fixation may not be unique because different hubs can sustain their own micro - environment independently from the other hubs .we therefore depict an average over several independent realizations to arrive at representative results . 
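To reproduce this kind of heterogeneity sweep one needs interaction networks with a fixed mean degree but increasing degree variance. The sketch below uses standard networkx generators as stand-ins for the graphs described above; the exact construction used for the published figures is not specified here, so the parameter values and the clean-up of self-loops and parallel edges are illustrative choices.

....
import networkx as nx
import numpy as np

def regular_random_graph(n, k, seed=0):
    # homogeneous case: every player has exactly k neighbors
    return nx.random_regular_graph(k, n, seed=seed)

def gaussian_degree_graph(n, k_mean, sigma, seed=0):
    """Random graph whose degrees are drawn from a Gaussian with mean k_mean and
    standard deviation sigma (configuration model); larger sigma means a more
    heterogeneous network."""
    rng = np.random.default_rng(seed)
    degrees = np.clip(np.rint(rng.normal(k_mean, sigma, n)), 1, n - 1).astype(int)
    if degrees.sum() % 2:              # the configuration model needs an even degree sum
        degrees[0] += 1
    g = nx.Graph(nx.configuration_model(degrees, seed=seed))   # drop parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))
    return g

def scale_free_graph(n, m, seed=0):
    # strongly heterogeneous case: Barabasi-Albert preferential attachment
    return nx.barabasi_albert_graph(n, m, seed=seed)
....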
in order to obtain an understanding of the preference for low- and high- values , as it is exerted by heterogeneous interaction networks ,it is of interest to examine the time evolution of and values , as depicted in fig .[ fixation ] .the figure shows the probability of any given pair in the population at different times increasing from top left to bottom right .it can be observed that high- high- combinations die out first .these players cooperate with both their more and less successful opponents , and they do so with a high probability . in agreement with well - known results concerning the evolution of cooperation in spatial social dilemmas ,the bulk of cooperators is always the first to die out . only after their arrangement in suitable compact domains the cooperators can take advantage of network reciprocity and prevail against defectors . in our case , however ,this does not happen , i.e. , the `` always cooperate '' players never recover .instead , the evolution proceeds by eliminating also all pairs which contain moderate and high values , until finally the only surviving low- profiles are left to compete .however , preserving at least some form of cooperation may yield an evolutionary advantage , and thus ultimately the low- high- emotional profile emerges as the only remaining .notably , the described scenario is characteristic only for heterogeneous networks . for homogeneous networks the differences between players are more subtle , andindeed it is not at all obvious that cooperating with the more successful neighbors would confer an evolutionary advantage .accordingly , high- profiles are not viable and die out .cooperation can thrive only on the expense of high values , as reported already in .importantly though , given an appropriately heterogeneous interaction network , the low- high- emotional profile can be very much beneficial for the global cooperation level . to support this statement, we present in fig .[ cooperate ] the average frequency of cooperation as obtained on the regular random graph ( left ) and the scale - free network ( right ) .note that the former in general represent homogeneous graphs .the comparison reveals that a much higher cooperation level can be sustained , especially in the snowdrift quadrant , if the dominating emotions are neither sympathy nor envy .to confirm this further , we have manually imposed a high- low- emotional profile on the scale - free network . while this profile is optimal for homogeneous networks ( compare also fig .[ cooperate ] ( left ) with fig .4 in ) the outcome on heterogeneous networks is disappointing , yielding no more than a modest cooperation level of in the most challenging snowdrift and prisoner s dilemma regions .this imposes another interesting conclusion , namely , if the emotional profiles of players can evolve freely as dictated by payoff - driven imitation microscopic dynamics , then the topology `` selects '' the optimal profile in order to produce the highest attainable cooperation level .lastly , it is instructive to explore how the low- high- emotional profile actually works on scale - free networks .a visualization is possible by measuring separately the average willingness to cooperate for players who have different degree .since the payoff of every player is obtained from the pairwise interaction constituted by each individual link , a higher degree therefore in general leads to a higher payoff and also a higher `` social prestige '' . 
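The degree-resolved measurement just mentioned can be sketched as follows; the profile convention (first entry: probability to cooperate with a less successful neighbor, second entry: with a more successful one) and the function name are ours.

....
import numpy as np

def cooperation_by_degree(graph, profiles, payoffs):
    """Average willingness to cooperate of players grouped by their degree.
    For every neighbor the relevant probability is chosen by comparing payoffs,
    and the per-player averages are then pooled within each degree class."""
    per_degree = {}
    for x in graph.nodes:
        probs = [profiles[x][1] if payoffs[y] > payoffs[x] else profiles[x][0]
                 for y in graph.neighbors(x)]
        if probs:
            per_degree.setdefault(graph.degree[x], []).append(np.mean(probs))
    return {k: float(np.mean(v)) for k, v in sorted(per_degree.items())}
....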
as fig .[ degree ] illustrates , players with low degree will dominantly cooperate with their opponents that have a higher degree and thus most likely a higher payoff . in other words , they can use the `` -part '' of their emotional profile .this act of cooperation , however , is unilateral because the hubs rarely compensate it . due to low values of cooperation with the less successful playersis strongly suppressed .what is more , while players with a higher degree also cooperate with the more successful opponents ( they have the same emotional profile and hence the same high ) , this action is very rare given that there are simply not many who would be superior .it is sad but still true that the hubs with the highest degree very rarely cooperate in the stationary state . despite this rather unfriendly behavior of the `` leaders '' ,the average cooperation level is still acceptable and in fact remarkably high even under adverse conditions ( e.g. , and ) , but this is exclusively because the inferior players do their best and virtually always cooperate towards their superiors .we have shown that high levels of cooperation can evolve amongst self - interested individuals if instead of strategies they adopt simple emotional profiles from their neighbors .since the imitation was governed solely by the payoffs of players , we have made no additional assumptions concerning the microscopic dynamics .the later has been governed by the traditional `` follow the more successful '' rule , which we have implemented with some leeway due to the fermi function . starting from an initial configuration with all possible emotional profiles , we have determined the one that remains after sufficiently long relaxation ( only in the harmony game quadrant , if staged on heterogeneous networks , the fixation may not be unique ) .we have found that the fixation depends not only on the parametrization of the game , but even more so on the topology of the interaction network .more precisely , the topology - induced heterogeneity of players has been identified as the most important property . if players were staged on a network where their degree was equal , then independently of other topological properties of the network the fixation occurred on emotional profiles characterized by high and low values in the interesting payoff region . in agreement with the definition of and ,these are players characterized by high sympathy but also high envy .this profile is also in agreement with the one reported earlier in for the square lattice . 
on heterogeneous networks , however , the fixation is most likely to be on low or moderate values of and high values of .accordingly , we have the prevalence of low sympathy ( charity , goodwill ) for those that are doing worse , but also little envy ( high servility , proneness to brownnose or `` suck up '' ) towards those that are doing better .noteworthy , although we have not presented actual results , the application of payoffs normalized with the degree of players returns the same results as observed on homogeneous networks .this observation is in agreement with our preliminary expectation because it is well established that the scale - free topology introduces a strong heterogeneity amongst players , but also that this effect is effectively diminished by applying normalized payoffs or degree - sensitive cost .accordingly , in the latter case players become `` equal '' , which results in the selection of emotional profile we have recorded for regular graphs .this observation strengthens further our argument that indeed solely the heterogeneity of players is crucial for the selection of the dominant emotional profile .we thus may argue that on heterogeneous networks each `` dictators dream '' profile can evolve via a simple evolutionary rule .the majority may not be happy about it because the combination of moderate and high values is not necessarily the most coveted personality profile .yet as our study shows , it does have its social advantages .namely , in the absence of envy or in the presence of servility the cooperation level in the whole population can be maintained relatively very high , even if the conditions for the evolution of cooperation are extremely adverse ( high , low ) . in this sense , we conclude that charity and envy are easily outperformed by competitiveness and proneness to please the dominant players , and that indeed this profile emerges completely spontaneous .put differently , it can be argued that it is in fact chosen by the heterogeneity amongst players that is introduced by an appropriate interaction network .we would also like to emphasize that the discussed `` emotional profiles '' do not necessarily cover the broader psychological interpretation of the term .we have used this terminology to express the liberty of each individual to act differently towards different partners in dependence on the differences in social rank ( or success ) , which traditional strategies in the context of evolutionary games do not allow . as such , and in the absence of considering further details determining our personality ,our very simple model naturally can not be held accountable for describing actual human behavior .instead , it reveals the topology of interactions as a crucial property that determines the collective behavior of a social network . according to our observations ,it is indeed the heterogeneity of the interaction network that is key in determining our willingness to help others .finally , we emphasize that in the present model cooperation is maintained without reciprocity .the mechanism at work here is very different from those discussed thoroughly in previous studies . 
unlike direct and indirect reciprocity , network reciprocity , or even reputation , punishment and reward , which are all deeply routed in the fact that neighboring cooperators will help each other out while at the same time neighboring defectors will craft their own demise , herethe nature of links determines the winner .it may well happen that cooperation and defection occur along the same link , yet still the status of the population as a whole is very robust .what players really share is the way how to behave towards each other under different circumstances , which is determined within the framework of an emotional profile .+ + this research was supported by the hungarian national research fund ( grant k-101490 ) , china s education ministry via the humanities and social sciences research project ( 11yjc630208 ) , and the slovenian research agency ( grant j1 - 4055 ) . | we show that the resolution of social dilemmas on random graphs and scale - free networks is facilitated by imitating not the strategy of better performing players but rather their emotions . we assume sympathy and envy as the two emotions that determine the strategy of each player by any given interaction , and we define them as probabilities to cooperate with players having a lower and higher payoff , respectively . starting with a population where all possible combinations of the two emotions are available , the evolutionary process leads to a spontaneous fixation to a single emotional profile that is eventually adopted by all players . however , this emotional profile depends not only on the payoffs but also on the heterogeneity of the interaction network . homogeneous networks , such as lattices and regular random graphs , lead to fixations that are characterized by high sympathy and high envy , while heterogeneous networks lead to low or modest sympathy but also low envy . our results thus suggest that public emotions and the propensity to cooperate at large depend , and are in fact determined by the properties of the interaction network . |
the amount and the timing of appearance of the transcriptional product of a gene is mostly determined by regulatory proteins through biochemical reactions that enhance or block polymerase binding at the promoter region ( ) . considering that many genes code for regulatory proteins that can activate or repress other genes , the emerging picture is conveniently summarized as complex network where the genes are the nodes , anda link between two genes is present if they interact .the identification of these networks is becoming one of the most relevant task of new large - scale genomic technologies such as dna microarrays , since gene networks can provide a detailed understanding of the cell regulatory system , can help unveiling the function of previously unknown genes and developing pharmaceutical compounds .different approaches have been proposed to describe gene networks ( see ( ) for a review ) , and different procedures have been proposed ( ) to determine the network from experimental data .this is a computationally daunting task , which we address in the present work .here we describe the network via deterministic evolution equations ( ) , which encode both the strenght and the direction of interaction between two genes , and we discuss a novel reverse engineering procedure to extract the network from experimental data .this procedure , though remaining a quantitative one , realizes one of the most important goal of modern system biology , which is the integration of data of different type and of knowledge obtained by different means .we assume that the rate of synthesis of a transcript is determined by the concentrations of every transcript in a cell and by external perturbations . the level of gene transcripts is therefore seen to form a dynamical system which in the most simple scenario is described by the following set of ordinary differential equations ( ) : where is a vector encoding the expression level of genes at times , and a vector encoding the strength of external perturbations ( for instance , every element could measure the density of a specific substance administered to the system ) . in this scenario the gene regulatory network is the matrix ( of dimension ) , as the element measures the influence of gene on gene , with a positive indicating activation , a negative one indicating repression , and a zero indicating no interaction .the matrix ( of dimension ) encodes the coupling of the gene network with the external perturbations , as measures the influence of the -th perturbation on the -th gene .a critical step in our construction is the choice of a linear differential system .even if a such kind of model is based on particular assumptions on the complex dynamics of a gene network , it seem the only practical approach due to the lack of knowledge of real interaction mechanism between thousands of genes. even a simple nonlinear approach would give rise to an intractable amount of free parameters .however , it must also be recognized that all other approaches or models have weakness points .for instance , boolean models ( which have been very recently applied to inference of networks from time series data , as in ( ) , strongly discretize the data and select , _ via _ the use of an arbitrary threshold , among active and inactive gene at every time - step .dynamical bayesian models , instead , are more data demanding than linear models due to their probabilistic nature. 
moreover , their space complexity grows like ( at least in the famous reveal algorithm by k.p .murphy ( ) ) , which makes this tool suitable for small networks . the linear model of eq .( [ eq - cont ] ) is suitable to describe the response of a system to small external perturbations .it can be recovered by expanding to first order , and around the equilibrium condition , the dependency of on and , .stability considerations ( must not diverge in time ) require the eigenvalues of to have a negative real part .moreover it clarifies that if the perturbation is kept constant the model is not suitable to describe periodic systems , like cell cycles for example , since in this case asymptotically approaches a constant .unfortunately data from a given cell type involve thousands of responsive genes .this means that there are many different regulatory networks activated at the same time by the perturbations , and the number of measurements ( microarray hybridizations ) in typical experiments is much smaller than .consequently , inference methods can be successful , but only if restricted to a subset of the genes ( i.e. a specific network ) ( ) , or to the dynamics of genes subsets .these subsets could be either gene clusters , created by grouping genes sharing similar time behavior , or the modes obtained by using singular value decomposition ( svd ) . in these casesit is still possible to use eq .( [ eq - cont ] ) , but must be interpreted as a vector encoding the time variation of the clusters centroids , or the time variation of the characteristics modes obtained via svd . in this paperwe present a method for the determination of the matrices and starting from time series experiments using a global optimization approach to minimize an appropriate figure of merit . with respects to previous attempts ,our algorithm as the uses explicitly the insight provided by earlier studies on gene regulatory networks ( ) , namely , that gene networks in most biological systems are sparse . 
in order to code such type of features the problem itself must be formulated as mixed - integer nonlinear optimization one ( ) .moreover our approach is intended to explicitly incorporate prior biological knowledge as , for instance , it is possible to impose that : if it is known that gene inhibits ( does not influence , activates , influences ) gene .this means that the optimization problem is subject to inequality and/or equality constraints .summing up the characteristics of the problem we must solve : high dimensionality , mixed integer , nonlinear programming problem for the exact solution of which no method exists .an approximate solution can be found efficiently using a global optimization techniques ( ) based on an intelligent stochastic search of the admissible set .as consequence of the optimization method used , there is no difficulties to integrates different time series data investigating the response of the same set of genes to different perturbations , even if different time series are sampled at different ( and not equally spaced ) time points .the integration of different time series is a major achievement , as it allows for the joint use of data obtained by different research groups.we believe that the integration of multiple time - series dataset in unveiling a gene network is a topic of great interest as focused in recently published papers ( ) .we illustrate and test the validity of our algorithm on computer simulated gene expression data , and we apply it to an experimental gene expression data set obtained by perturbing the sos system in bacteria _the simplest assumption regarding the dynamical response of gene transcripts ( intially in a steady state , for ) , to the appearance of an external perturbation at time is given by eq .( [ eq - cont ] ) . since the state of the system measured at discrete times , , it useful to consider the discrete form of eq .[ eq - cont ] . where is a matrix with dimension , and is a function of the perturbations , namely here we have assumed , for simplicity sake , , but the generalization to the most general case is straightforward . in particular , for constant one gets . due to the presence of noisethe measured do not coincide with the true values expected to satisfy eq .( [ eq - cont ] ) . if we for simplicity observed samples affected by independent , zero mean additive noise , namely , the matrices ruling the dynamics of eq .( [ eq - cont ] ) can be found by requiring the minimization of a suitably defined _cost function_. under the simplifying assumption of a constant external perturbation , previous works have been focused on the determination of and ( from which and can be retrieved ) , as in ( ) .the matrices and have been assumed to be those minimizing the cost function eq .[ eq - cont ] can be written as standard linear least squares estimation problem for , whose solution can be found by computing the pseudoinverse of a suitable matrix , providing that number of observations is sufficiently high : . 
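For reference, the pseudoinverse solution used by the previous approaches can be written in a few lines; the discretized model is x_{k+1} = M x_k + c, with the effect of a constant perturbation absorbed into c. The variable names are ours, and the data are assumed to be uniformly sampled.

....
import numpy as np

def fit_discrete_model(X):
    """Least-squares estimate of M and c in  x_{k+1} = M x_k + c  from a time
    series X of shape (n_times, n_genes), using the Moore-Penrose pseudoinverse."""
    Z = np.hstack([X[:-1], np.ones((X.shape[0] - 1, 1))])   # regressors [x_k, 1]
    theta = np.linalg.pinv(Z) @ X[1:]                       # shape (n_genes + 1, n_genes)
    M, c = theta[:-1].T, theta[-1]
    return M, c
....

The continuous matrices would then be recovered from M and c, for instance via a matrix logarithm or a bilinear transformation, with the caveats on sparsity discussed below.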
in the present analysis we introduce a new reverse engineering approach to determine the matrices, which turns out to be more efficient and flexible than previous ones. our approach is based on the following considerations, which have not been taken into account in previous works: * each gene expression time series can in principle be scanned both forward and backward in time, exploiting the _ time-reversibility _ of the dynamics. * there is biological evidence suggesting that the interaction matrix is sparse ( ). for this reason any reverse engineering algorithm has to be able to capture the proper sparsity of the gene regulatory network. * in many situations there is prior biological information about bounds on the numerical value of some specific entries of the matrices. such bounds must be taken into account in the solution procedure. as a consequence of this we use as a cost function the reduced chi-square defined as follows: where . \label{eq - mcost} note that the discrete quantities entering the cost function can be obtained from the continuous matrices by appropriate numerical approximation algorithms for eqs. ( [ eq - discrete - matrix ] ). the remaining quantity denotes the standard deviation of the independent, additive noise affecting the dataset. a direct optimization over the dynamics and input matrices of eq. ( [ eq - cont ] ) is the main improvement of the proposed approach with respect to previous ones. this is the only way that enables us to incorporate the sparseness requirement and any available biological priors. it is clear that sparseness is destroyed by the exponentiation and integration involved in the continuous-discrete transformation of the problem, in the same way that simple bounds on the elements are transformed into highly complex nonlinear relations on the discrete matrices. the price paid for the flexibility of the approach is the computational effort required for every evaluation of the error function. this places a strong premium on the efficiency of the optimization algorithm. in eq. ( [ eq - mcost ] ) the two contributions in square brackets account for the forward and backward propagation, respectively, and thus implement the time reversibility of the dynamics. moreover, the sparsity of the gene network is taken into account via the number of degrees of freedom ( d.o.f. ), defined in terms of the number of free parameters, the number of equations ( constraints ) and the number of matrix elements fixed to zero. the generalization of the algorithm to the case in which there are several different time series, corresponding to the response of the same set of genes to similar and/or different perturbations, is straightforward. in this case the cost function to be minimized is simply the sum of the single-series contributions; here we have assumed the noise level to depend on the time series. it is clearly possible, however, to introduce a time and even a gene dependence of the noise. we detail now our procedure to find the sparse matrices minimizing the cost function, which is in general a formidable task. the first difficulty is the determination of the number of non-vanishing elements ( or equivalently the number of d.o.f. ). having determined this number, the problem is still very complicated, since there are many different ways of choosing these elements out of all the candidate entries. for typical values of the parameters the number of possible combinations is astronomically large, so that any kind of exhaustive algorithmic procedure is precluded.
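A plausible rendering of the forward-backward cost function is sketched below. Because the published equation is not reproduced in the text, the exact normalization and the degrees-of-freedom counting shown here (number of equations minus the free parameters left after freezing entries to zero) are assumptions; for brevity the cost is also evaluated directly on the discrete pair (M, c) rather than on the continuous matrices.

....
import numpy as np

def reduced_chi_square(M, c, X, sigma, n_zeroed, n_perturbations=1):
    """Reduced chi-square combining forward and backward propagation of the
    discrete model x_{k+1} = M x_k + c through the series X (n_times x n_genes)."""
    n_times, n_genes = X.shape
    Minv = np.linalg.inv(M)
    fwd = X[1:] - (X[:-1] @ M.T + c)          # forward prediction errors
    bwd = X[:-1] - (X[1:] - c) @ Minv.T       # backward prediction errors
    chi2 = np.sum(fwd ** 2 + bwd ** 2) / sigma ** 2
    n_equations = 2 * (n_times - 1) * n_genes
    n_free = n_genes * (n_genes + n_perturbations) - n_zeroed
    dof = n_equations - n_free
    return chi2 / dof
....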
a practical approach to solve this formidable problem, at least approximately, is to resort to global optimization techniques based on a stochastic search of the admissible set; for a comprehensive review of such methods one can see ( ). we have tackled this problem via the implementation of the most classical of such methods: a simulated annealing procedure ( ), based on a monte carlo dynamics. for each possible value of the number of parameters, the algorithm searches for the matrices with that total number of non-zero elements which minimize the cost function of eq. ( [ eq - mcost ] ), as discussed below. we then easily determine the optimal number of parameters and the corresponding minimizing matrices, which are our best estimates of the true matrices. in order to determine the matrices with a given total number of non-zero parameters which minimize the cost function, our simulated annealing procedure starts with two random matrices with that number of non-vanishing parameters, and changes the elements of these matrices according to two possible monte carlo moves. one move is the variation of the value of a non-vanishing element of the two matrices; the other consists in setting to zero a previously non-zero element, and to a random value a previously zero element. each move, which involves a variation of the cost function, is accepted with a probability that depends on this variation and on an external temperature-like parameter. as in standard optimization by annealing, we start from a high value of this parameter, of the order of the cost function value, and then slowly decrease it towards zero. in the limit of an infinitesimally slow decrease the algorithm is able to retrieve the true minimum of the cost function, while for faster cooling rates estimates of the real minimum are recovered. as the monte carlo moves attempt to change the values of the elements of the matrices, it is easy to introduce biological constraints on their values, as we will show in a following example. the algorithm requires the evaluation of the cost function, which is a time-consuming operation as the computation of the discrete matrix and of its inverse is required. we have implemented this algorithm in c++ making use of the gnu scientific library, www.gsl.org. in this section, we illustrate our reverse engineering algorithm with three examples. the validity of our algorithm and of other known ones is evaluated by comparing the exact dynamical matrices with their best estimates obtained via the reverse engineering procedure. to this end, we have introduced a scoring parameter defined as the norm of the difference between the estimated and the exact matrix divided by the norm of the exact matrix. clearly this parameter is non-negative, vanishing if and only if the estimate coincides with the true matrix. since it is a measure of a relative error it has no upper bound, but the estimate becomes unreliable when the relative error becomes large. this parameter allows for a faithful evaluation of the quality of the reverse engineering approach, as it summarizes the comparison of all retrieved elements with their true values. we discuss three applications. first, we show how our algorithm works when applied to a single time series. in this case one can show that our cost function, which takes into account both the forward and the backward propagation, is more effective in determining the structure of the gene network than the usual cost function of eq. ( [ eq - cost ] ), which only considers the forward propagation. the second example shows how we can easily take into account the presence of different time series, while the last example shows how biological priors can be included.
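The annealing search described above can be sketched as follows. The move mixture, the geometric cooling schedule and the Metropolis acceptance rule are illustrative choices standing in for the unspecified numerical settings of the published implementation; `cost_fn` is any callable mapping a candidate pair of matrices to a cost, for instance a wrapper around the reduced chi-square sketched earlier.

....
import numpy as np

def anneal_sparse(cost_fn, shape_A, shape_B, n_nonzero, n_steps=100000,
                  T0=1.0, cooling=0.9999, step=0.1, rng=None):
    """Simulated-annealing search for sparse matrices (A, B) with a prescribed
    number of non-zero entries, minimizing cost_fn(A, B).  Two moves are used:
    (i) perturb a randomly chosen non-zero entry, (ii) relocate a non-zero entry
    onto a currently zero position."""
    rng = rng or np.random.default_rng()
    n_a = shape_A[0] * shape_A[1]
    n_tot = n_a + shape_B[0] * shape_B[1]
    flat = np.zeros(n_tot)
    flat[rng.choice(n_tot, size=n_nonzero, replace=False)] = rng.normal(size=n_nonzero)
    unpack = lambda v: (v[:n_a].reshape(shape_A), v[n_a:].reshape(shape_B))
    energy, T = cost_fn(*unpack(flat)), T0
    for _ in range(n_steps):
        trial = flat.copy()
        nz, z = np.flatnonzero(trial), np.flatnonzero(trial == 0)
        if rng.random() < 0.5 or len(z) == 0:          # move (i): change a value
            trial[rng.choice(nz)] += rng.normal(0.0, step)
        else:                                          # move (ii): relocate an entry
            trial[rng.choice(z)], trial[rng.choice(nz)] = rng.normal(), 0.0
        e_new = cost_fn(*unpack(trial))
        if e_new < energy or rng.random() < np.exp(-(e_new - energy) / T):
            flat, energy = trial, e_new
        T *= cooling                                   # slow geometric cooling
    return unpack(flat) + (energy,)
....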
before discussing the examples we shortly describe the procedure used to generate the synthetic dataset . in order to generate a synthetic dataset one must construct the matrices and , from which it is possible to generate the noiseless time - series .hence , one gets for where are i.i.d .random variables with standard deviation .while there are no constraints on , must be a sparse random matrix whose complex eigenvalues have negative real part .the generation of proceeds according the following steps .first , we generate a block diagonal matrix , whose blocks are antisymmetric matrices with diagonal elements and off diagonal elements , or negative real elements . by direct constructionsall of the eigenvalues of the matrix have negative real part .then we generate a series of random unitary matrices , with only off - diagonal not vanishing entries , and compute the matrices , all of them sharing the spectrum of . clearly , as grows , the number of vanishing entries ( the sparsity ) of decreases .we fix as the matrix characterized by the desired number of vanishing elements . by choosing typical values of and is possible to control the time scale of the relaxation process of the system following the application of the perturbation .synthetic time - series with elements measured at = 20 equally spaced time - points . ]let us consider a simulated time - series with measured at equally - spaced time - points , as shown in fig .this dataset is generated by starting from a sparse gene network ( with only out of non - zero elements ) , a constant perturbation and a sparse external perturbation - coupling matrix with a single not vanishing entry .the white noise is characterized by a standard deviation measured in units of the mean absolute value of the expression levels of all genes .in particular the value has been used .we have applied our algorithm to this dataset . to this end , we have minimized the reduced chi - square , defined in eq .( [ eq - rcs ] ) , for different values of the number of parameters ( i.e. of the number of degrees of freedom ) . fig .[ fig2 ] shows that has a non - monotonic dependence on the number of parameters .this feature is a signature of the fact that both networks with few or with many connections are bad descriptions of the actual gene regulatory system .accordingly , our best estimate of the number of not vanishing parameters is , where has its minimum , and the corresponding minimizing matrices ( with non zero entries ) and ( with non zero elements ) are our best estimates of the actual gene network encoding matrix and of the matrix .the main panel ( inset ) show the dependence of ( of the minimum of the cost function ) on the number of not vanishing parameters , as determined by our algorithm when applied to the time - series shown in fig .the fluctuations are due to the probabilistic nature of the monte carlo minimization procedure .the quantity varies non - monotonically with , and has a minimum with parameters . ]the estimators assume the values and .these values indicate that , when applied to this small dataset , our algorithm is able to retrieve to a very good approximation , and with a comparatively larger error . 
for comparison , we have also obtained the matrices and which exactly minimize via a linear algebraic approach , and retrieved the corresponding continuous matrices via the use of the bilinear transformation , obtaining the scores and .these numbers prove that by exploiting the time reversibility of the equation of motion , and the sparseness of the gene network , is it possible to estimate the parameters of the network with a greater accuracy , as also shown in fig .[ fig3 ] where we plot the best estimates obtained by both methods versus their true values : in the case of perfect retrieval all of the points should lie on the line .+ we plot here the values of the element of the estimated matrices , obtained both with the linear algebraic approach and with our algorithm , versus their true value .ideally , the points should line on the dotted line . ]there are two major problems encountered when trying to infer a gene network via the analysis of time - series data .the first one is that there are usually to few time - points with respect to the large number of genes .the second one is associated to the fact that , when the system responds to an external perturbation , only the expression of the genes directly or indirectly linked to that perturbation changes , i.e. , only a specific sub - network of the whole gene network is _ activated _ by the external perturbation .while through the study of the time - series it is possible to learn something about the regulatory role of the responding genes , nothing can be learnt about the regulatory role of the non - responding genes .these problems can be addressed by using gene network retrieval procedures which are able to simultaneously analyze different time - series ( ) , particularly if these measure the response of the system to different perturbations , as we expect different perturbations to activate different genes .our reverse engineering approach naturally exploits the presence of multiple time series by requiring the minimization of eq .( [ eq - min - molti ] ) . herewe study the network discussed in the previous example by adding to the time - series shown in fig .[ fig1 ] , other ones generated by the application of two different perturbations . for sake of simplicity all time - series are measured at equally - spaced time - points , but with an elapsing time between two consecutive data points depending on the particular time - series .hence that the problem can not be reduced to the one of a single average time - series by exploiting the linearity of eq .( [ eq - cont ] ) . as the number of time - series increases , our determination of the gene network becomes more and more accurate .for instance , while by means of a single perturbation we obtain ( ) , by using two time - series we obtain ( ) , and by using three time series we get ( ) .dependence of on the fraction of priors , as obtained by analyzing one or two time - series .the scoring parameter decreases as the number of priors increases , indicating that a better estimate of the gene network is recovered . 
] + + as the traditional approach to research in molecular biology has been an inherently local one, examining and collecting data on a single gene or a few genes at a time, there are now many pairs of genes which are known to interact in a specific way, or not to interact at all. this information is nowadays easily available by consulting public databases such as gene ontology. here we show that it is possible to integrate this non-numerical information into our reverse engineering approach, improving the accuracy of the retrieved network. to this end we consider again the same gene network, but we introduce constraints on a fraction of randomly selected elements of the two matrices. as our retrieval procedure tries to exchange vanishing and non-vanishing elements, we introduce the constraints as follows: if an element is zero in the exact matrices then we set it to zero and never try to set it to a non-zero value; on the contrary, if the element is different from zero, its value is free to change and we never try to set it to zero. by using this approach we ensure that our best estimates of the matrices are consistent with the prior knowledge. in order to highlight the improvement that can be obtained via the use of biological priors, we consider the same gene network and perturbations of examples 1 and 2, but we corrupt the noiseless dataset by adding a noise ( see eq. ( [ eq - noise ] ) ) with a higher standard deviation than before. due to the high value of the noise the linear algebraic approach is no longer able to recover the gene network matrix, as shown by its poor score. we show in fig. [ fig_priors ] the dependence of the scoring parameter on the fraction of randomly selected elements fixed either to zero or to non-zero, both for the case in which only one or two perturbations have been used in the retrieval procedure. as expected, the error decreases as the number of priors increases, showing that as more biological knowledge on the system of interest becomes available the reliability of our reverse engineering approach improves. we applied our algorithm to a nine-gene network, part of the sos network in _ e. coli_. the genes are reca, lexa, ssb, recf, dini, umudc, rpod, rpoh and rpos, and the time series used consists of six time measurements ( in triplicate ) of the expression level of these genes following treatment with norfloxacin, a known antibiotic that acts by damaging the dna. the time series is the same used in ref. ( ), and experimental details can be found there. there are 90 unknowns to be determined, as the dynamics matrix is 9 x 9 and the perturbation-coupling vector has length 9. since the number of measured time points is small, the experimental data allow for the writing of far fewer equations than unknowns, while a literature survey ( ) suggests that there are a substantial number of connections between the considered genes ( including the self-feedback ). as in previous works, we are therefore forced to use an interpolation technique to add new time measurements, creating a denser time series. when applied to this dataset, our algorithm found that the cost function is minimized by a dynamics matrix and a perturbation vector whose non-vanishing entries are given in table [ table1 ]. in the literature, a number of connections between the nine considered genes are known, including the self-feedback; we are able to recover many of these connections.
regarding the interaction with norfloxacin , our algorithm found that primary target is , as expected .l|ccccccccc||c & reca & lexa & ssb & recf & dini & umudc & rpod & rpoh & rpos & + + reca & -1.68 & - & -0.36 & 1.81 & 1.05 & 0.84 & - & - & -0.59 & 0.71 + + lexa & -0.11 & -1.56 & 0.59 & 0.58 & 0.40 & - & -0.34 & - & - & 0.13 + + ssb & -0.47 & 1.82 & -2.83 & - & 0.60 & - & 0.96 & -1.71 & 1.29 & - + + recf & 0.68 & 0.42 & - & -0.93 & -0.52 & -0.40 & -0.30 & 1.13 & - & 0.38 + + dini & 1.18 & 0.72 & 0.39 & -0.96 & -1.71 & 0.42 & - & - & - & 0.34 + + umudc & 0.47 & -0.63 & -0.39 & -0.64 & 0.19 & -0.65 & 0.11 & - & 0.53 & - + + rpod & -0.06 & -0.28 & - & 0.36 & - & - & -0.22 & - & - & 0.40 + + rpoh & - & - & -1.10 & 1.60 & -0.32 & 0.92 & - & -3.46 & 1.46 & - + + rpos & -0.39 & -0.43 & - & - & 0.18 & 0.92 & 0.26 & 0.82 & -0.72 & -0.11 +in the framework of a linear deterministic description of the time evolution of gene expression levels , we have presented a reverse engineering approach for the determination of gene networks .this approach , based on the analysis of one or more time - series data , exploits the time - reversibility of the equation of motion of the system , the sparsity of the gene network and previous biological knowledge about the existence / absence of connections between genes . by taking into account this informationthe algorithm significatively improves the level of confidence in the determination of the gene network over previous works .the drawback of our procedure is the computational cost , which at the moment limits the applicability of the algorithm to a small number of genes / clusters .there are two time - consuming procedures .one is the transformation of the continuous matrix in the discrete matrix , which we have been avoided by using the bilinear transformation , but whose validity breaks down as the time interval between two consecutive measurements increases .the second one , which at the moment is the most expensive in time , is the computation of the inverse matrix , which we accomplish through the so - called lu decomposition whose computational cost is .alternative methods for exploiting the reversibility of the dynamics should therefore by devised for applications with a larger number of genes .we thank d. di bernardo for rousing our interest in this subject , and for helpful discussions . | the important task of determining the connectivity of gene networks , and at a more detailed level even the kind of interaction existing between genes , can nowadays be tackled by microarraylike technologies . yet , there is still a large amount of unknowns with respect to the amount of data provided by a single microarray experiment , and therefore reliable gene network retrieval procedures must integrate all of the available biological knowledge , even if coming from different sources and of different nature . in this paper we present a reverse engineering algorithm able to reveal the underlying gene network by using time - series dataset on gene expressions considering the system response to different perturbations . the approach is able to determine the sparsity of the gene network , and to take into account possible _ a priori _ biological knowledge on it . the validity of the reverse engineering approach is highlighted through the deduction of the topology of several _ simulated _ gene networks , where we also discuss how the performance of the algorithm improves enlarging the amount of data or if any a priori knowledge is considered . 
we also apply the algorithm to experimental data on a nine gene network in _ escherichia coli_. |
the allen brain atlas ( aba , ) put neuroanatomy on a genetic basis by releasing voxelized , _ in situ _ hybridization data for the expression of the entire genome in the mouse brain ( ) .these data were co - registered to the allen reference atlas of the mouse brain ( ara , ) .about 4,000 genes of special neurobiological interest were proritized . for these genesan entire brain was sliced coronally and processed ( giving rise to the coronal aba ) . for the rest of the genomethe brain was sliced sagitally , and only the left hemisphere was processed ( giving rise to the sagittal aba ) .+ from a computational viewpoint , gene - expression data from the the aba can be studied collectively , thousands of genes at a time . indeedthe collective behaviour of gene - expression data is crucial for the analysis of , in which the brain - wide correlation between the aba and cell - type - specific microarray data was studied .these microarray data characterize the transcriptome of different cell types , microdissected from the mouse brain , and collated in .however , for a given cell characterized in this way , it is not known where other cells of the same type are located in the brain .a linear model was proposed in ( see also ) , and used to estimate the region - specificity of cell types by linear regression with positivity constraint .the model was fitted using the coronal aba only , which allowed to obtain brain - wide results . however, this restriction implies that only one ish expression profile per gene was used to fit the model .this poses the problem of the error bars on the results of the model .since all the ish data in the aba were co - registered to the voxelized ara , so that data for the sagittal and coronal atlas can be treated computationally in the same way .however , the aba does not specify from which cell type(s ) the expression of each gene comes . + * gene expression energies from the allen brain atlas .* in the aba , the adult mouse brain is partitioned into cubic voxels of side 200 microns , to which ish data are registered for thousands of genes . for computational purposes , these gene - expression data can be arranged into a voxel - by - gene matrix . for a cubic labeled , the _ expression energy _ of the gene is a weighted sum of the greyscale - value intensities evaluated at the pixels intersecting the voxel : the analysis of is restricted to digitized image series from the coronal aba , for which the entire mouse brain was processed in the aba pipeline ( whereas only the left hemisphere was processed for the sagittal atlas ) .+ * cell - type - specific transcriptomes and density profiles . 
* on the other hand , the cell - type - specific microarray reads collated in ( for different cell - type - specific samples studied in ) can be arranged in a type - by - gene matrix denoted by , such that and the columns are arranged in the same order as in the matrix of expression energies defined in eq .[ expressionenergy ] .+ we proposed the following linear model in for a voxel - based gene - expression atlas in terms of the transcriptome profiles of individual cell types and their spatial densities : where the index denotes labels cell type , and denotes its ( unknown ) density at voxel labeled .the values of the cell - type - specific density profiles were computed in by minimizing the value of the residual term over all the ( positive ) density profiles , which amounts to solving a quadratic optimization problem ( with positivity constraint ) at each voxel .these computations can be reproduced on a desktop computer using the matlab toolbox ( bgea ) . for other applications of the toolbox see ( marker genes of brain regions ) , for co - expression properties of some autism - related genes , and for computations of stereotactic coordinates ) .the optimization procedure in our model is deterministic . on the other hand ,decomposing the density of a cell type into the sum of its mean and gaussian noise is a difficult statistics problem ( see ) .some error estimates on the value of were obained in using sub - sampling techniques ( i.e. sub - sampling the data repeatedly by keeping only a random 10% of the coronal aba ) .this induced a ranking of the cell types based on the stability of the results against sub - sampling .however , the 10 % fraction is arbitrary ( even though it is close to the fraction of the genome covered by our coronal data set ) .+ , defined in eq .[ meanfittingdef ] , for medium spiny neurons , labeled in our data set .the restriction to the left hemisphere comes from the use we made of sagittal image series , which cover the left hemisphere only.,scaledwidth=99.0% ] in the present work we simulated the variability of the spatial density of cell types by integrating the digitized sagittal image series into the data set . 
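A minimal sketch of the per-voxel fit is given below, assuming the voxel-by-gene matrix E and the type-by-gene matrix C defined above are available as numpy arrays; scipy's non-negative least-squares routine is used here as a stand-in for the quadratic program with positivity constraint solved in the published work.

....
import numpy as np
from scipy.optimize import nnls

def fit_cell_type_densities(E, C):
    """Estimate non-negative cell-type densities rho(v, t) from the linear model
    E(v, g) ~ sum_t rho(v, t) C(t, g); each voxel is an independent problem."""
    n_voxels, n_types = E.shape[0], C.shape[0]
    rho = np.zeros((n_voxels, n_types))
    for v in range(n_voxels):
        # minimize || C.T rho_v - E_v ||^2  subject to  rho_v >= 0
        rho[v], _ = nnls(C.T, E[v])
    return rho
....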
for gene labeled ,the aba provides expression profiles , where is the number of image series in the aba for this gene .hence , instead of just one voxel - by - gene matrix , the aba gives rise to a family of voxel - by - gene matrices , with voxels belonging to the left hemisphere .a quantity computed from the coronal aba can be recomputed from any of these matrices , thereby inducing a distribution for this quantity .this is a finite but prohibitively large number of computations , so we took a monte carlo approach based on random choices of images series , described by the following pseudo - code : + the larger is , the more precise the estimates for the distribution of the spatial density of cell types will be .the only price we have to pay for th e integration of the sagittal aba is the restriction of the results to the left hemisphere in step 2 of the pseudo - code .the average density across random draws of image series for cell type labeled reads : }(v ) .\label{meanfittingdef}\ ] ] ) , for medium spiny neurons , labeled , based on random draws .the right - most peak , corresponding to the striatum , is well - decoupled from the others , furthermore the other peaks are all centered close to zero ( making most of them almost invisible ) .medium spiny neurons have percent of their densities supported in the striatum , without any region gathering more than 5 percent of the signal in any of the random draws.,scaledwidth=110.0% ] a heat map of this average for medium spiny neurons ( extracted from the striatum ) is presented on fig .[ meanfittings ] .it is optically very similar to the ( left ) striatum , which allows the model to predict that medium spiny neurons are specific to the striatum ( which confirms prior neurobiological knowledge and therefore serves as a proof of concept for the model ) .+ to compare the results to classical neuroanatomy , we can group the voxels by region according to the ara . since the number of cells of a given type in an extensive quantity , we compute the fraction of the total density contributed by a given brain region denoted by ( see the legend of fig .[ fittingdistrs ] for a list of possible values of ) : }(t ) = \frac{1}{\sum_{v\in\mathrm{left\;hemisphere}}\rho_{[i],t}(v ) } \sum_{v\in v_r } \rho_{t,[i]}(v ) .\label{fittingdistrdef}\ ] ] we can plot the distribution of these values for a given cell type and all brain regions ( see fig . [ meanfittings ] for medium spiny neurons , which gives rise to the best - decoupled right - most peak in the distribution of simulated densities ) .moreover , we estimated the densities of the contribution of each region in the coarsest version of the ara to the total density of each cell type in the data set .for most cell types , this confirms the ranking of cell types by stability obtained in , but based on error bars obtained from the same set of genes in every fitting of the model ( see the accompanying preprint for exhaustive results for all cell types in the panel ) .the most stable results against sub - sampling tend to correspond to cell types for which the anatomical distribution of results is more peaked .the present analysis can be repeated when the panel of cell - type - specific microarray expands .9 p. grange , j.w .bohland , b.w .okaty , k. sugino .h. bokil , s.b .nelson , l. ng , m. hawrylycz and p.p .mitra , _ cell - type based model explaining coexpression patterns of genes in the brain _ , proceedings of the national academy of sciences 2014 111 ( 14 ) 53975402 .p. 
grange , j.w .bohland , b.w .okaty , k. sugino .h. bokil , s.b .nelson , l. ng , m. hawrylycz and p.p .mitra , _ cell - type - specific transcriptomes and the allen atlas ( ii ) : discussion of the linear model of brain - wide densities of cell types _ , .y. ko , s.a .ament , j.a .eddy , j. caballero , j.c .earls , l. hood and n.d . price ( 2013 ) ,_ cell - type - specific genes show striking and distinct patterns of spatial expression in the mouse brain _ , proceedings of the national academy of sciences , 110 ( 8) , 30953100 .tan , l. french and p. pavlidis ( 2013 ) , _ neuron - enriched gene expression patterns are regionally anti - correlated with oligodendrocyte - enriched patterns in the adult mouse and human brain , _ frontiers in neuroscience , 7 . p. grange , m. hawrylycz and p.p .mitra ( 2013 ) , _ computational neuroanatomy and co - expression of genes in the adult mouse brain , analysis tools for the allen brain atlas _ , quantitative biology , 1(1 ) : 91100 .( doi ) 10.1007/s40484 - 013 - 0011 - 5 .p. grange and p.p .mitra ( 2012 ) , _ computational neuroanatomy and gene expression : optimal sets of marker genes for brain regions _ , ieee , in ciss 2012 , 46th annual conference on information science and systems ( princeton ) . | the allen brain atlas ( aba ) of the adult mouse consists of digitized expression profiles of thousands of genes in the mouse brain , co - registered to a common three - dimensional template ( the allen reference atlas ) . this brain - wide , genome - wide data set has triggered a renaissance in neuroanatomy . its voxelized version ( with cubic voxels of side 200 microns ) can be analyzed on a desktop computer using matlab . on the other hand , brain cells exhibit a great phenotypic diversity ( in terms of size , shape and electrophysiological activity ) , which has inspired the names of some well - studied cell types , such as granule cells and medium spiny neurons . however , no exhaustive taxonomy of brain cells is available . a genetic classification of brain cells is under way , and some cell types have been characterized by their transcriptome profiles . however , given a cell type characterized by its transcriptome , it is not clear where else in the brain similar cells can be found . the aba can been used to solve this region - specificity problem in a data - driven way : rewriting the brain - wide expression profiles of all genes in the atlas as a sum of cell - type - specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain . however , the estimated brain - wide densities of 64 cell types published recently were based on one series of co - registered coronal _ in situ _ hybridization ( ish ) images per gene , whereas the online aba contains several image series per gene , including sagittal ones . in the presented work , we simulate the variability of cell - type densities in a monte carlo way by repeatedly drawing a random image series for each gene and solving optimization problems . this yields error bars on the region - specificity of cell types . + _ prepared for the international conference on mathematical modeling in physical sciences , 5th-8th june 2015 , mykonos island , greece . _ |
numerical relativity has made enormous progress within the last few years .many numerical relativity groups now have sufficiently stable and accurate codes which can simulate the inspiral , merger , and ringdown phases of binary black hole coalescence .similarly , significant progress has been made in the numerical simulation of stellar gravitational collapse and there now seems to be a much better understanding of how supernova explosions happen .all these processes are among the most promising sources of gravitational radiation and therefore , there is significant interest in using these numerical relativity results within various data analysis pipelines used within the gravitational wave data analysis community . a dialog between numerical relativists and data analysts from the ligo scientific collaboration ( lsc ) was recently initiated in november 2006 through a meeting in boston .it seems appropriate to continue this dialog at a more concrete level , and to start incorporating numerical relativity results within various data analysis software .the aim of this document is to suggest formats for data exchange between numerical relativists and data analysts .it is clear that there are still outstanding conceptual and numerical issues remaining in these numerical simulations ; the goal of this document is not to resolve them .the goal is primarily to spell out the technical details of the waveform data so that they can be incorporated seamlessly within the data analysis software currently being developed within the lsc .the relevant software development is being carried out as part of the lsc algorithms library which contains core routines for gravitational wave data analysis written in ansi c89 , and is distributed under the gnu general public license .the latest version of this document is available within this library .the remainder of this document is structured as follows : section [ sec : multipoles ] describes our conventions for decomposing the gravitational wave data in terms of spherical harmonics , section [ sec : format ] specifies the data formats for binary black hole simulations , and finally section [ sec : openissues ] enumerates some open issues in binary black hole simulations which could be topics of further discussion between data analysts and numerical relativists .the output of a numerical relativity code is the full spacetime of a binary black hole system . on the other hand ,what is required for gravitational wave data analysis purposes is the strain , as measured by a detector located far away from the source .the quantity of interest is therefore the gravitational wave metric perturbation in the wave - zone , where and are space - time indices .we always work in the transverse traceless ( tt ) gauge so that all information about the metric perturbation is contained in the tt tensor , where and are spatial indices .the wave falls off as where is the distance from the source : here is a transverse traceless tensor and is the total mass of the system ; this approximation is , naturally , only valid far away from the source .there are different methods for extracting from a numerical evolution .one common method is to use the complex weyl tensor component which is related to the second time derivative of .another method is to use the zerilli function which approximates the spacetime in the wave - zone as a perturbation of a schwarzschild spacetime . 
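When the wave is extracted via the Weyl scalar, the strain mode is obtained by two time integrations, and the choice of integration constants then matters, as the text notes below. The following sketch shows one common, simple-minded way of doing this (trapezoidal integration followed by removal of a fitted linear drift); sign and normalization conventions differ between groups and are not fixed here.

....
import numpy as np

def strain_from_psi4(t, psi4_lm):
    """Two time integrations of a (complex) Psi4 mode to obtain the strain mode,
    followed by removal of a fitted linear drift caused by the unknown
    integration constants."""
    def cumtrapz(y):
        out = np.zeros_like(y)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
        return out
    h = cumtrapz(cumtrapz(psi4_lm))
    drift = (np.polyval(np.polyfit(t, h.real, 1), t)
             + 1j * np.polyval(np.polyfit(t, h.imag, 1), t))
    return h - drift
....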
for our purposes , it is not important how the wave is extracted , and different numerical relativity groups are free to use methods they find appropriate .the starting point of our analysis are the multipole moments of and it is important to describe explicitly our conventions for the multipole decomposition .in addition to these multipole moments , we also request the corresponding values of or the zerilli function in the formats described later .let be a cartesian coordinate system in the wave zone , sufficiently far away from the source .let , and denote the spatial orthonormal coordinate basis vectors . given this coordinate system ,we define standard spherical coordinates where is the inclination angle from the -axis and is the phase angle . at this point , we have not specified anything about the source .in fact , the source could be a binary system , a star undergoing gravitational collapse or anything else that could be of interest for gravitational wave source modeling . in a later sectionwe will specialize to binary black hole systems and suggest possibilities for some of the various choices that have to be made . however , as far as possible , these choices are eventually to be made by the individual source modeling group .we break up into modes in this coordinate system . in the wave zone , the wave will be propagating in the direction of the radial unit vector in the transverse traceless gauge , has two independent polarizations where and are the usual basis tensors for transverse - traceless tensors in the wave frame it is convenient to use the combination , which is related to by two time derivatives as where is the weyl tensor and denote abstract spacetime indices .if we denote the unit timelike normal to the spatial slice as and the promotions of to the full spacetime as , then the null tetrad adapted to the constant spheres is where , , , and is the complex conjugate of . ] it can be shown that can be decomposed into modes using spin weighted spherical harmonics of weight -2 : the expansion parameters are complex functions of the retarded time and , if we fix to be the radius of the sphere at which we extract waves , then are functions of only .the explicit expression for the spin weighted spherical harmonics in terms of the wigner -functions is where ^{1/2}}{(\ell + m -k)!(\ell - s - k)!k!(k+s - m ) ! } \times \left(\cos\left(\frac{\iota}{2}\right)\right)^{2\ell+m - s-2k}\left(\sin\left(\frac{\iota}{2}\right)\right)^{2k+s - m}\ ] ] with and . for reference , the mode expansion coefficients are given by if is used for wave extraction , then is given by two time integrals of the corresponding mode of . in this case , it is important that the information provided contains details about how the integration constants are chosen .we define and as it is these modes of and that we suggest to be provided as functions of time in units of .let us now specialize in simulations of binary black hole coalescence .a numerical relativity simulation has many parameters that need to be specified , and several of them may not be directly relevant to the data analysis problem .we need to specify which parameters of the numerical simulation will be significantly useful for the astrophysics of a binary black hole system in a circular orbit . 
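before turning to those parameters , a concrete illustration of the mode decomposition above may be useful . the spin - weight -2 harmonics can be evaluated directly from the wigner d - function sum quoted above ; as a check , the sketch below reproduces the familiar result that the ( 2,2 ) harmonic is proportional to ( 1 + cos iota )^2 exp ( 2 i phi ) . the overall phase convention should be treated as an assumption to be checked against the conventions adopted by each numerical relativity group .
....
import math

def wigner_d(l, m, s, iota):
    """wigner d-function d^l_{m s}(iota), using the factorial sum quoted above."""
    k_min = max(0, m - s)
    k_max = min(l + m, l - s)
    pref = math.sqrt(math.factorial(l + m) * math.factorial(l - m)
                     * math.factorial(l + s) * math.factorial(l - s))
    total = 0.0
    for k in range(k_min, k_max + 1):
        denom = (math.factorial(l + m - k) * math.factorial(l - s - k)
                 * math.factorial(k) * math.factorial(k + s - m))
        total += ((-1) ** k / denom
                  * math.cos(iota / 2.0) ** (2 * l + m - s - 2 * k)
                  * math.sin(iota / 2.0) ** (2 * k + s - m))
    return pref * total

def sYlm_minus2(l, m, iota, phi):
    """spin-weight -2 spherical harmonic _{-2}Y_{lm} (assumed phase convention)."""
    return (math.sqrt((2 * l + 1) / (4.0 * math.pi)) * wigner_d(l, m, 2, iota)
            * complex(math.cos(m * phi), math.sin(m * phi)))
....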
for our purposes ,a single numerical waveform is defined by at least seven parameters : the mass ratio and the three components of the individual spins and .these parameters will be referred to as the `` metadata '' for a waveform ; more parameters can be added as necessary .we use the convention that denotes the larger of the two masses so that . the choice of precisely how , and the spins are calculated is left up to the individual numerical relativity groups .in addition , the start frequency of the waveform , in units of , is an important parameter relevant for data analysis . for a given value of the mass ,this gives the physical start frequency of the waveform , and this will need to be less than the lower cut - off frequency relevant for a particular detector .for example , is an appropriate value for the initial ligo detectors . for the waveform data itself , we suggest that the data for a single mode be written as a plain text file in three columns for the time , and respectively . for a given simulation , numerical groups may wish to decide the maximum value of to which they will provide the waveform . from the data analysis standpoint, it is most useful that for every , waveforms are provided for all values of , irrespective of any symmetries that may be present in the simulation .if there are certain modes which , due to small amplitude , can not be accurately determined , these can be set to zero .numerical groups often extract the waveform from the simulation at several different radii and then use richardson extrapolation to determine the most accurate waveform . for data analysis purposes , we do not consider waveforms from different radii as distinct , and would prefer only the most accurate determination of the waveform from any given simulation .it is natural to use the total mass of the binary as the unit for the time and strain columns .however , there can be subtleties in the choice of . it could be the adm mass of the spacetime , an approximation to the adm mass measured at the wave - extraction sphere , or it could be the sum of the individual masses . again , the choice is left up to the numerical relativity group which produced the waveform , and it depends on whatever best represents the time coordinate and the scale of in the particular simulation . for data analysis purposes , we would prefer the sampling in time to be uniform .if the result of a simulation , or set of simulations , yields a waveform sampled non - uniformly , we ask that the nr group perform an interpolation to give a uniformly sampled waveform .a sampling rate of is usually sufficient for our purposes , but this is not a requirement .the strain multiplied by the distance will also be in units of the total mass of the binary .there can be any number of comment lines at the top of the file and it is envisioned that the details of the simulation and the mode contained in the file will be held in the comment lines .this could be an example of a data file : .... # numerical waveform from ....
# equal mass , non spinning , 5 orbits , l = m=2 # time hplus hcross 0.000000e+00 1.138725e-02 -8.319811e-04 2.000000e-01 1.138725e-02 -1.247969e-03 4.000000e-01 1.138726e-02 -1.663954e-03 6.000000e-01 1.138727e-02 -2.079936e-03 8.000000e-01 1.138728e-02 -2.495913e-03 1.000000e-00 1.138728e-02 -2.911884e-03 1.200000e+00 1.138729e-02 -3.327850e-03 1.400000e+00 1.138730e-02 -3.743807e-03 1.600000e+00 1.138731e-02 -4.159757e-03 1.800000e+00 1.138733e-02 -4.575696e-03 2.000000e+00 1.138734e-02 -4.991627e-03 2.200000e+00 1.138735e-02 -5.407545e-03 2.400000e+00 1.138737e-02 -5.823452e-03 2.600000e+00 1.138739e-02 -6.239345e-03 2.800000e+00 1.138740e-02 -6.655225e-03 3.000000e+00 1.138752e-02 -7.071059e-03 3.200000e+00 1.138754e-02 -7.486903e-03 3.400000e+00 1.138757e-02 -7.902739e-03 ...... .... the metadata information for the different datafiles will be stored in a separate file .this metadata can contain ( at least ) two sections , one for the simulation metadata , and the other listing the filenames which correspond to the various modes of the waveform .there will be a separate metadata file for each simulation .this could be an example of a metadata file : .... [ metadata ] simulation - details = nrfile.dat nr - group = friendlynrgroup email = myemail.edu mass - ratio = 1.0 spin1x = 0.0 spin1y = 0.0 spin1z = 0.5 spin2x = 0.0 spin2y = 0.8 spin2z = 0.0 freqstart22 = 0.1 [ ht - data ] 2,2 = example1_22.dat 2,1 = example1_21.dat2,0 = example1_20.dat 2,-1 = example1_2 - 1.dat 2,-2 = example1_2 - 2.dat .... it would be desirable if the waveform data are reproducible at a later date if necessary . for this purpose , the numerical relativity groups can submit a file with the parameters of the simulation .there is no requirement on the format of this file. the ` simulation - details ` line will contain the name of the file describing the parameters used to describe the nr simulation .in addition , we ask for the numerical relativity groups to provide a ` nr - group ` name and a contact ` email ` .the remainder of the entries in the ` [ metadata ] ` section describe the physical parameters of the waveform . to begin with , for non - spinning waveforms ,the only required parameter is the mass ratio . for waveforms with spin , the initial spins of the two black holes ( in the co - ordinates discussed in the previous section )must also be specified .the start frequency of the 2 - 2 mode of the waveform in units of is denoted by .we emphasize that whenever necessary , we will add more parameters such as , for example , the eccentricity of the orbit , or start frequencies of other modes etc .lines starting with a or will be taken to be comment lines and there can be an arbitrary number of comment lines .the section ` [ ht - data ] ` contains one line for each mode .these give the file names containing the corresponding modes .the filenames can be specified as relative paths to the data files starting from the location of the metadata file .thus , if the datafiles are stored in a sub - directory called ` data ` , then the metadata file would read : .... [ ht - data ] 2,2 = data / example1_22.dat 2,1 = data / example1_21.dat 2,0 = data / example1_20.dat 2,-1 = data / example1_2 - 1.dat 2,-2 = data / example1_2 - 2.dat .... if the waveforms have been calculated using , then for cross - checking purposes , we request that datafiles containing the real and imaginary parts of are also provided in the same format as for the waveforms , i.e. 
three columns which are respectively time , real part of and imaginary part of .again there can be an arbitrary number of comment lines , but in this case there does not need to be a metadata file .this data can be referenced in the metadata file in an additional section : .... [ psi4-data ] 2,2 = data / example1_psi4_22.dat 2,1 = data / example1_psi4_21.dat 2,0 = data / example1_psi4_20.dat 2,-1 = data / example1_psi4_2 - 1.dat 2,-2 = data / example1_psi4_2 - 2.dat .... similarly , if the waveforms were extracted using the zerilli formalism , a section ` [ zerilli - data ] ` would be added . to summarize , the numerical relativity groups are asked to submit a tarball containing the following information for each simulation : 1 .: : [ required ] the data files for and one for each mode of the simulation .2 . : : [ required ] the meta - data file .3 . : : [ optional ] data files for the functions ( e.g. or the zerilli function ) which were used to construct and .4 . : : [ required ] parameter file for reproducing the waveform .we now list some open issues for binary black hole simulations which could be topics for further discussion .we associate the coordinate system with the binary system as follows .the orbital plane of the binary at is taken to be the - plane with the -axis in the direction of the orbital angular momentum .* is this the best choice of the -axis ?would it be better to choose , say , the spin of the final black hole as the -axis ?the decision will be determined by requirements of simplicity and having as few modes to work with as possible . *the orbital plane is unambiguous when the black holes are non - spinning but it can be ambiguous in many situations , especially when the spin of the black holes causes the orbital plane to precess significantly . in such cases ,it is left to the numerical relativity group to decide what the best choice of the `` orbital plane '' is . what is the right choice for parameters such as the individual masses , the total mass and the spins ?here are some possibilities : * and could be the parameters appearing in the initial data construction .alternatively , for non spinning black holes , they could be the irreducible masses of the two horizons : where is the horizon area . for spinning black holesit could be given by the christodoulou formula : where is an appropriately defined spin for the individual black holes .the calculation of is again left up to the numerical relativity group .* for the total mass , is it better to use the sum of individual horizon masses ( including the effect of angular momentum ) , or could it be the total adm mass or rather , an approximation to the adm mass calculated at the sphere where the waves are extracted ? this could be specified in the metadata file , for example through the additional lines + .... madm = 1.0 mchristodoulou = 0.97 .what is the best choice for the radiation extraction sphere ? * how far away do we need to take the sphere ?is it sufficient to take the sphere to be a coordinate sphere , or do we need some further gauge conditions ? how important is the choice of initial data ? 
* clearly , the initial data used in almost all current simulations do not exactly represent real astrophysical binary black hole systems .is this deviation important for gravitational wave detection ?how large are the error - bars on the numerical results ?* is there a reliable way to estimate the systematic errors due to finite resolution effects , different gauge choices , wave extraction methods etc ?numerical relativity groups are welcome to raise any other issues that might be important .we are grateful to various numerical relativists for numerous discussions and suggestions . in particular, we would like to thank the following people for valuable inputs to this document : peter diener , sascha husa , luis lehner , lee lindblom , carlos lousto , hiroyuki nakano , harald pfeiffer , luciano rezzolla , and erik schnetter . | this document suggests possible data formats to further the interaction between gravitational wave source modeling groups and the gravitational wave data analysis community . the aim is to have a simple format which is nevertheless sufficiently general , and is applicable to various kinds of sources including binaries of compact objects and systems undergoing gravitational collapse . .... dear colleagues , with recent advances in the fields of numerical simulations of gravitational - wave sources and gravitational wave detection , we have reached a time when closer collaborations will benefit both fields . close interactions between these fields may enhance the chances of detecting gravitational waves , and enable us to better understand the physics and astrophysics involved . we would like to develop a uniform interface to public waveforms produced by the source - modeling community that could be used by the ligo scientific collaboration ( lsc ) , and other detector groups . in this document , we suggest a simple format for the waveforms . the software tools designed around this format , by the lsc , will be released under the gpl . we expect that data analysis groups will use the public waveforms in their analyses when appropriate . while this interface document proposes technical standards for numerical relativity waveforms , we believe that the ability to extract the best astrophysical information from nr waveforms in gravitational wave searches will depend not just on adopting standards , but on the ability of the gravitational wave detector communities and the numerical relativity communities to interact closely and develop a sufficiently detailed understanding of each other 's technical methods and limitations . to effectively use nr waveforms , gravitational wave scientists will need to understand the physical limitations and subtleties of numerical data , and to effectively produce waveforms , numerical relativists will need to understand the instrumental limitations and subtleties of gravitational wave interferometers . we hope that this opens the way to deeper interactions between the gw and nr communities and look forward to closer collaborations that may develop between the two communities as we explore the gravitational - wave universe together . the ligo scientific collaboration .... |
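on the data - analysis side , the [ metadata ] / [ ht - data ] layout proposed above is close enough to the ini format that a submission could be read with python's standard configparser . the sketch below is only meant to show how little glue code the format requires ; the comment prefixes , the literal section names and the relative - path handling are assumptions about how such files would actually be written , not part of the specification .
....
import configparser
import os
import numpy as np

def load_nr_submission(metadata_path):
    """read a metadata file in the proposed [metadata]/[ht-data] layout.

    returns (params, modes): params is a dict of simulation metadata and
    modes maps (l, m) -> (t, h_plus, h_cross) arrays.
    assumes '#' and '%' comment prefixes and relative data-file paths.
    """
    cfg = configparser.ConfigParser(comment_prefixes=('#', '%'),
                                    inline_comment_prefixes=('#', '%'))
    cfg.read(metadata_path)
    params = dict(cfg['metadata'])
    base = os.path.dirname(os.path.abspath(metadata_path))
    modes = {}
    for key, fname in cfg['ht-data'].items():
        l, m = (int(x) for x in key.split(','))
        t, hp, hc = np.loadtxt(os.path.join(base, fname.strip()),
                               comments='#', unpack=True)
        modes[(l, m)] = (t, hp, hc)
    return params, modes
....
a tarball packaged as suggested above , with the metadata file at its top level and relative paths to the mode files , would then load without any further format - specific parsing code .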
in recent decades , relaying transmission as well as network coding have attracted increasing attention as these two techniques can well exploit cooperative diversity / network coding gain to improve network performance in terms of metrics of interest - .two - way relay channel , a typical application which jointly utilizes relays and network coding , has been extensively studied in - , where the throughput of dnc are studied in and the rate region of pnc are studied in .further , green communication has received increasing attention , as it introduces novel solutions to greatly reduce energy consumptions in communication systems designs . in the literature ,numerous works studied reducing energy usage while still satisfying the qos requirement for various types of communication networks , e.g. , investigated an energy - aware transmission strategy in a multi - user multi - relay cellular network and discussed various energy - aware scheduling algorithms in wireless sensor networks . in this work , we are motivated to analytically analyze the energy usage of pnc and the superposition - coding based dnc ( spc - dnc ) .we then find the decision criterion in selecting pnc or spc - dnc in terms of minimizing energy usage for each channel realization .further , a pnc / spc - dnc switching strategy is designed to smartly select the energy - efficient strategy under fading channel realizations , with the qos requirement still being satisfied .to better compare the two strategies , we focus on the end - to - end symmetric throughput scenario . however , our analysis can be naturally extended to asymmetric throughput case , and is omitted here due to the limited scope of this work .in this work , a three - node , two - way relaying network over fading channels is studied . in this twrn , the two source nodes , and want to exchange data through the aid of the relay node , .all nodes work in half - duplex mode and can not transmit and receive simultaneously . the direct link between the two sourcesis assumed to be unavailable .the channel power gains of the - ( ) link is denoted as and that of the - is .the noise at each node is assumed to be additive white gaussian noise with zero mean and unit variance . in this work ,we aim to minimize average energy usage for a twrn subject to a symmetric end - to - end rate requirement from both sources , which might be required by video / audio applications .the two considered strategies are pnc and spc - dnc , which consist of two phases , including the multi - access uplink phase and the network - coded broadcasting downlink phase , as shown in fig .[ fig : system ] . and is decoded at relay and forwarded to both users . in spc - dnc , both and are decoded at relay and then combined together before broadcasting on the downlink. it is assumed that and are of the same length.,width=453 ] to minimize energy usage , we shall firstly review the energy usage of the two strategies , followed by the determination of the criterion rule in selection .finally , the pnc / spc - dnc switching scheme is presented with the designed iterative algorithm .in this section , we shall firstly discuss the energy usage of the pnc and spc - dnc schemes separately .we then move on to find the rule in scheme selection for energy usage minimizing .pnc is consisted of two phases . 
in the uplink phase , the two source nodes transmit and simultaneously to the relay node and the relay node decodes a function message and then forward it to both sources on the downlink .as each source has complete prior knowledge of its own transmitted message , it can subtract this message and then decodes the message from the other source . in , it is found that the achievable pnc uplink rate is given by , where is the receive at each source node .the required power at to support transmit rate on the uplink is therefore given by , and the total required power on the uplink then is on the downlink , the relay node broadcasts the decoded function message to both source nodes and the minimal power required to support broadcast rate is given by , where follows from that the broadcast rate is determined by the minimum channel gain of all source nodes .the spc - dnc scheme time shares the traditional multi - access uplink phase and the network coded broadcasting over the downlink . on the uplink , assuming that , from , the messages from should be decoded first to minimize sum power consumption and the power of each source node is given by , and the minimal sum power required is given by where we define and to simplify notation .on the downlink , the relay node also transmits the combined messages from the uplink and the transmit power required is identical to that given in ( [ eq : pnc_3 ] ) and is omitted here .given both the power consumption for pnc and spc - dnc , we are interested in comparing them in terms of energy usage , given the same transmit rate requirement , the rule on selection of pnc and spc - dnc are hence presented in theorem [ the ] .[ the ] given the channel realization and the uplink rate , pnc consumes less energy than spc - dnc iff the following inequality holds , it is observed that on the downlink both pnc and spc - dnc consumes the same energy given the same broadcast rate .hence we only need to compare the energy consumed by pnc or spc - dnc on the uplink .suppose the transmit rate over the uplink from both sources are , we have hence if ( [ eq : con_switch ] ) holds , we have and concludes that pnc is more energy - efficient than spc - dnc and should be selected for the given channel realization and transmit rate .otherwise , spc - dnc uses less energy and is preferred .further , from theorem [ the ] we can readily arrive at lemma [ lem ] as follows , [ lem ] if all the channel gains are equal , pnc is more energy - efficient than spc - dnc if the following inequality holds true and less energy - efficient otherwise . the proof is omitted here as ( [ eq : lem2 ] ) can be readily obtained by solving a quadratic equation from ( [ eq : the ] ) . based on the observations above , it is therefore concluded that , pnc is beneficial under relatively high data requirements in terms of energy usage . on the other hand ,spc - dnc is preferred for energy usage reduction in low - snr regime .it is noted that both spc - dnc and pnc consumes the same amount of energy on the downlink with the same broadcast rate .however , as observed , on the uplink , both strategies may consume different amounts of energy under channel variations .therefore , it is promising to further reduce energy usage by smartly switching among the multi - access uplink transmission of spc - dnc and the uplink of pnc under different channel realizations to minimize total power consumption , given the qos requirement is met . in this sense , we define , i.e. 
, pnc or spc - dnc are selected based on their power usage to reduce energy usage . in addition , we denote and as the time fraction assigned to the uplink and downlink transmission , respectively . the associated optimization problem , termed as * p1 * , is therefore formulated as follows , subject to the following constraints , where and are the averaged minimal sum power usage on the uplink and that on the downlink over the distribution of the associated channel gain distributions . ( [ eq : opt_con_up ] ) and ( [ eq : opt_con_down ] ) are the rate requirements for the uplink and downlink . ( [ eq : opt_con_timesplit ] ) is the physical time resource splitting constraint . note that * p1 * is not a convex optimization problem , due to the quadratic terms as well as the term , which is not convex , as the minimum of two convex functions may not be convex . instead , we can solve * p1 * by first assuming that only pnc / dnc is used and then iteratively updating the transmit scheme in terms of energy usage for each channel gain realization . in each iterative step , given the transmit rate allocated to each channel realization from the last step , the more energy - efficient strategy ( pnc or spc - dnc ) is adopted and hence the energy usage is reduced at each step . the iteration ends when no additional energy reduction is attainable . in this sense , the switching scheme must use less energy than employing only pnc or spc - dnc . to be specific , in each step , the transmission strategy for each realized channel gain is determined and the associated optimization is referred to as * sub - p1*. given the analysis above , the steps of the algorithm are hence presented as follows . 1 . input : all possible , and ( a predefined threshold ) 2 . initialization : solve * sub - p1 * by assuming that only spc - dnc / pnc is employed in transmission and obtain the total energy usage in the initialized step . 3 . iteration : compare the energy used in pnc and spc - dnc given the rate allocated in the last iteration under all possible channel realizations , and use the strategy with the minimal energy usage to replace the strategy employed in the last iteration and rerun * sub - p1 * and obtain the associated total energy usage . 4 . go to step 3 ) if and go to step 5 ) otherwise . 5 . output : , the optimal time splitting , the best strategy for each channel realization , the associated allocated rate as well as the transmit power level in solving * sub - p1 * in the iteration . for clarity , a flowchart of the iterative algorithm is also plotted , as shown in fig . [ fig : flowchart ] note that in each iteration step , the energy usage is reduced . in addition , the total energy usage is lower bounded by zero in nature . taking these two facts into account , the convergence of the iterative algorithm is guaranteed and at least some local - optimal point is achieved . further , it is worth noting that , in our implementation , the algorithm converges within tens of iterations and is hence promising in practice . it is also noted that at each iteration of the iterative algorithm , * p1 * reduces to * sub - p1 * where spc - dnc / pnc is selected and specified for each channel realization . it will be shown that * sub - p1 * is an equivalent convex optimization problem , followed by the analysis based on the karush - kuhn - tucker ( kkt ) conditions , which solves * sub - p1 * efficiently . to be specific , in * p1 * is replaced by in * sub - p1*.
in addition , to circumvent the difficulty of the quadratic terms , we define ( ) and ( ) .hence , * sub - p1 * can be formulated as follows , subject to the following constraints , where and are the the linear transformations of the perspectives of the corresponding convex functions and hence preserve convexity . the lagrangian function associated with * sub p1 *is hence given by , the associated kkt conditions are then derived as follows , after some arithmetic operations , the optimal power allocations for the uplink is then derived as follows , where the downlink optimal power allocations with respect to the channel gains can be similarly derived from ( [ eq : downlink_opt ] ) and is presented below . for the optimal time splitting , it is noted that the closed - form solutions are not tractable as the associated kkt conditions are transcendental equations . however, numerical algorithms can be applied to find the optimal and .henceforth , * sub - p1 * can be solved efficiently by kkt conditions and in each iteration of the presented algorithm the global optimal solution is obtained and hence we argue that the proposed algorithm in sec .v above leads to a sub - optimal solution .in this section , numerical results are presented to verify our findings . in the considered setting , noise at each node is assumed to be gaussian with zero mean and unit variance and all links are assumed to be rayleigh fading channels with unity link gain on average .the reciprocity of the associated uplink and downlink channels is assumed .the average total energy usage of each scheme is obtained by averaging over independent realizations of link gains and the average symmetric end - to - end rate requirement on both sides is in the unit of bit / s / hz . as observed in fig .[ fig : all ] , pnc performs better than spc - dnc with relatively high data rate requirement and worse with low data rate requirement . in addition, it is observed that the optimal switching scheme outperforms solely pnc and spc - dnc schemes for all data rate requirements and the performance gain achieved is hence demonstrated , validating the superiority of our designed switching scheme .in this work , we studied a three - node , two - way relaying system .our aim was to minimize average total energy usage for a twrn by switching between pnc and spc - dnc , while satisfying the qos requirement .to this end , we analytically derived the optimal selection criterion for spc - dnc and pnc for each channel gain realization and the associated optimal problem to minimize energy usage by switching between spc - dnc and pnc was formulated and solved .the performance gain of the designed adaptable pnc / dnc switching scheme , over the schemes by only employing spc - dnc or pnc , was validated by numerical results . c. fragouli , d. katabi , a. markopoulou , m. medard , and h. rahul , `` wireless network coding : opportunities & challenges , '' in _ proc .ieee military communi .( milcom07)_. 1em plus 0.5em minus 0.4emieee , 2007 , pp .18 .r. devarajan , c. s. jha , u. phuyal , and v. k. bhargava . energy - aware resource allocation for cooperative cellular network using multi - objective optimization approach " ._ ieee trans .wireless communi .5 , pp . 17971807 , 2012 . 
| in this work , we consider an energy minimization problem with network coding over a typical three - node , two - way relaying network ( twrn ) in a wireless fading environment , where the two end nodes require the same average exchange rate , no lower than a predefined quality - of - service ( qos ) constraint . to simplify the discussion , the selected network coding modes only include physical - layer network coding ( pnc ) and superposition - coding - based digital network coding ( spc - dnc ) . we first analyze their energy usage and then propose an optimal strategy , which can be implemented by switching between pnc and spc - dnc for each channel realization . an iterative algorithm is then presented to find the optimal power allocations as well as the optimal time splitting for both uplink and downlink transmissions . the numerical study validates the performance improvement of the newly developed strategy . network coding , two - way , resource allocation , switching . |
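a minimal sketch of the switching loop described above : for each fading realization the cheaper of the two uplink strategies is kept , the rate and time allocation is re - solved , and the loop stops once the energy reduction falls below the predefined threshold . the energy functions and the sub - p1 solver are deliberately left as callables supplied by the user , since their closed forms depend on the rate expressions and the kkt solution that are not reproduced here ; the sketch only captures the structure of the iteration .
....
def switching_schedule(channels, uplink_energy_pnc, uplink_energy_dnc,
                       solve_sub_p1, eps=1e-6, max_iter=100):
    """iterative pnc / spc-dnc selection, following the steps listed above.

    channels        : list of per-realization channel gain tuples
    uplink_energy_* : callables (gains, rate) -> uplink energy for that scheme
    solve_sub_p1    : callable (channels, choice) -> (rates, total_energy),
                      standing in for the kkt-based solution of sub-p1
    """
    # step 2: initialize with a single strategy everywhere (here: spc-dnc)
    choice = ['dnc'] * len(channels)
    rates, energy = solve_sub_p1(channels, choice)

    for _ in range(max_iter):
        # step 3: per realization, keep whichever strategy is cheaper
        # at the rates allocated in the previous iteration
        choice = ['pnc' if uplink_energy_pnc(g, r) < uplink_energy_dnc(g, r)
                  else 'dnc'
                  for g, r in zip(channels, rates)]
        rates, new_energy = solve_sub_p1(channels, choice)
        # step 4: stop once the energy reduction drops below the threshold
        if energy - new_energy < eps:
            energy = new_energy
            break
        energy = new_energy
    return choice, rates, energy
....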
high power in a narrow frequency band ( spectral lines ) are common features of an interferometric gravitational wave ( gw ) detector s output .although continuous gravitational waves could show up as lines in the frequency domain , given the current sensitivity of gw detectors it is most likely that large spectral features are noise of terrestrial origin or statistical fluctuations .monochromatic signals of extraterrestrial origin are subject to a doppler modulation due to the detector s relative motion with respect to the extraterrestrial gw source , while those of terrestrial origin are not .matched filtering techniques to search for a monochromatic signal from a given direction in the sky demodulate the data based on the expected frequency modulation from a source in that particular direction .in general this demodulation procedure decreases the significance of a noise line and enhances that of a real signal .however , if the noise artifact is large enough , even after the demodulation it might still present itself as a statistically significant outlier , thus a candidate event .our idea to discriminate between an extraterrestrial signal and a noise line is based on the different effect that the demodulation procedure has on a real signal and on a spurious one .if the data actually contains a signal , the detection statistic presents a very particular pattern around the signal frequency which , in general , a random noise artifact does not .we propose here a chi - square test based on the shape of the detection statistic as a function of the signal frequency and demonstrate its safety and its efficiency .we use the detection statistic described in and adopt the same notation as . for applications of the statistic search on real data , see for example .we consider in this paper a continuous gw signal such as we would expect from an isolated non - axisymmetric rotating neutron star . following the notation of ,the parameters that describe such signal are its emission frequency , the position in the sky of the source , the amplitude of the signal , the inclination angle , the polarization angle and the initial phase of the signal . in the absence of a signal a distribution with four degrees of freedom ( which will be denoted by ) . in the presence of a signal follows a non - central distribution . given a set of template parameters ,the detection statistic is the likelihood function maximized with respect to the parameters . is constructed by combining appropriately the complex amplitudes and representing the complex matched filters for the two gw polarizations . and given the template parameters and the values of and it is possible to derive the maximum likelihood values of let us refer to these as .it is thus possible for every value of the detection statistic to estimate the parameters of the signal that have most likely generated it .so , if we detect a large outlier in we can estimate the associated signal parameters : .let us indicate with the corresponding signal estimate .let be the original data set , and define a second data set if the outlier were actually due to a signal and if were a good approximation to , then constructed from would be distributed .since filters for different values of are not orthogonal , in the presence of a signal the detection statistic presents some structure also for values of search frequency that are not the actual signal frequency . 
for these other frequencies is also distributed if is a good approximation to .we thus construct the veto statistic by summing the values of over more frequencies .in particular we sum over all the neighbouring frequency bins that , within a certain frequency interval , are above a fixed significance threshold .we regard each such collection of frequencies as a single `` candidate event '' and assign to it the frequency of the bin that has the highest value of the detection statistic .the veto statistic is then : in reality , since our templates lie on a discrete grid , the parameters of a putative signal will not exactly match any templates parameters and the signal estimate will not be exactly correct . as a consequence will still contain a residual signal and will not exactly be distributed. the larger the signal , the larger the residual signal and the larger the expected value of .therefore , our veto threshold will not be fixed but will depend on the value of .we will find such -dependent threshold for based on monte carlo simulations .the signal - to - noise ratio ( snr ) for any given value of the detection statistic can be expressed in terms of the detection statistic as , as per eq .( 79 ) of . therefore we will talk equivalently of an snr - dependent or -dependent veto threshold .let us first examine the ideal case where the detector output consists of stationary random gaussian noise plus a systematic time series ( a noise line or a pulsar signal ) that produces a candidate in the detection statistic for some template sky position and at frequency .the question that we want to answer is : is the shape of around the frequency of the candidate consistent with what we would expect from a signal ?our basic observables are the four real inner products between the observed time series and the four filters : where runs from to .the inner product is defined by eq.(42 ) of .the four filters depend on the target frequency and the target sky location .the hypothesis that we would like to examine is where is the detector noise and is the template , which in this case perfectly matches the signal . the parameters are the maximum likelihood estimators of derived from the data and the template parameters and .the definitions of the four coefficients are given in .given that the template parameters exactly match the parameters of the actual signal , then the waveform exactly matches the actual signal . in this case the four variables : are four correlated random gaussian variables .the paper constructs the detection statistic from the data .similarly , we construct from the data . is also centrally distributed in the presence of a signal and perfect signal - template match .we obtain the veto statistic by summing over the different frequencies of the event where is the number of the frequency bins in the event .if the value of is not consistent with a distribution , we reject the hypothesis .note that the degrees of freedom of the veto statistic is , as we use four data points to infer the four parameters .in the real analysis the signal parameters will not exactly match the values of one of our templates . 
as a consequence, will not match exactly the actual parameters and the frequency where the maximum of the detection statistic occurs , , will _ not _ be the actual frequency of the signal .however we can still set up a procedure to answer the question : is the shape of the statistic event consistent with what we would expect from a signal with parameters _ close to _ ?suppose that an event has been identified for a position template and for a value of the signal frequency .this is how the veto analysis would proceed : 1 .we determine and for each of the event .we generate a veto signal and compute the four variables 3 .we construct the variables : + 4 .( [ eq : nu ] ) we compute and then .if is a good approximation to , then follows the distribution .as already outlined at the end of section [ s : summary ] , the veto statistic does not in general follow a distribution because in general the signal parameters do not exactly match the template parameters . due to this mismatch when step 3 is performed in the procedure described in the previous section ,not all the signal is removed from .consequently acquires a non - zero centrality parameter .since this scales as in the presence of a signal , the veto statistic threshold has to change with the snr of the candidate event in order to keep the false dismissal rate constant for a range of different signal strengths .we will thus adopt a snr - dependent veto threshold on our veto statistic .we will determine the threshold via monte - carlo simulations .an snr - dependent threshold in a similar context was first used by the tama group who performed snr- studies to veto out candidate events in their inspiral waves searches . see also for a detailed description of a time - frequency test . in a context of a resonant bar detectorburst search , see .to determine the false dismissal rate , the false alarm rate and the threshold equation for the veto statistic , we have performed a set of monte carlo simulations on artificial and real noise .we have used 10 hours of fake gaussian stationary noise and of real science data from the ligo hanford 4 km interferometer .the results presented here are thus valid for a 10 hour observation time , which is the observation time of the all - sky , wide - band search that we plan to conduct on data from the second science run of the ligo detectors .we do not take into account spin down of pulsars .this may be justified for the short time length of the data . as it will be explained belowwe have injected both signals and spurious noise artifacts of the type that we observe in the detector output .the parameters of the gravitational waves signals which are injected into the noise are uniformly chosen at random in the following ranges : ] , ] , ] and then perform the steps below 50 times : 1 .we randomly choose the noise line parameters .the e - fold decay rate varies between and , where is the total observation timewe generate a 10 hour long data set consisting of random gaussian noise with standard deviation 1 and the noise line defined by the parameters above .we perform a search in a frequency band around the frequency of the noise line and identify an event , i.e. a value of and . 
from the values of the complex component of the detection statistic at we determine and for every frequency of the event .we generate a veto signal 5 .we compute for the veto signal for all the frequencies of the event .we obtain .[ fig : efficiencygauss ] shows the snr- plot .it may seem that the data points are densely distributed in the left upper region with large snrs and small .this deceptive appearance is due to the coarse graphical resolution of the figure .this can be clearly seen in the estimated probability distributions , shown in fig .[ fig : efficiencygausspdf ] .in fact , if we take our nominal threshold line , eq .( [ eq : thresholdline ] ) , shown as the solid line in fig .[ fig : efficiencygauss ] , the false alarm rate is estimated to be 8.4 . , and snr .the straight line represents eq .( [ eq : thresholdline ] ) .the detector is assumed to be lho detector .the number of the data points with is 954063 ., height=302 ] with snr in the four selected ranges corresponding to fig .[ fig : efficiencygauss ] ., height=302 ] we have performed monte - carlo simulations injecting noise lines as described above into real data , avoiding frequency bands with large noise artifacts .the resulting scatter plot is similar to that obtained for the gaussian random noise case .indeed , we obtain false alarm rate for the nominal threshold eq .( [ eq : thresholdline ] ) .having observed safety and efficiency of our veto method , we now show an application of the method to real data ( no signal nor noise lines injected ) .we take the following steps iteratively 1200 times : * we randomly choose a template sky direction over the whole sky 1 .we perform a wide - band search over the interval [ 100,500 ] hz in the 10 hour real data set .we identify events in the detection statistic and to each of these events we apply our veto test .this procedure yields a value of and for each candidate event .the scatter plot snr- is shown in fig .[ fig : realveto ] .two distinct branches along the solid line at higher snrs are evident .both branches are due to spectral features in the data : the highest snr branch to a line at 465.7 hz , the lower branch to a line at 128.0 hz .these spectral features `` trigger - off '' a whole set of templates giving rise to the observed structure in the scatter plot . , and snr .the straight line represents eq .( [ eq : thresholdline ] ) .the number of the data points is 68388 .the maximum is 66.6 ., height=302 ] if we adopt the threshold line eq .( [ eq : thresholdline ] ) , 70 of the events are rejected .we have defined a veto statistic to reject or accept candidate cw events based on a consistency shape test of the measured detection statistic .we have shown how to derive the snr - dependent threshold for the veto test , through monte carlo simulations on a playground data set similar to the one that one intends to analyze .the veto method demonstrated in this paper does not require any a - priori information on the source of noise lines .however , we expect that the effectiveness of this veto technique can greatly benefit from data characterization studies aimed at identifying spectral contamination of instrumental origin .we are now further investigating methods to veto out family of outliers identified in the scatter plots above the solid line in fig .[ fig : realveto ] . 
natural candidates are those noise lines whose properties are known experimentally , for example the 16 hz harmonics in the lho data due to the data acquisition system .it is precisely these harmonics that give rise to one of the major branches above the solid line in fig .[ fig : realveto ] , as shown in fig .[ fig : realvetomike ] . in this paper, we have used a 10 hour long data set . for a longer observational time , the difference between an extraterrestrial line and a terrestrial one becomes larger because the doppler modulation patterns of a putative signal carry a more specific signature , that of the motion of the earth around the sun .we have not included spin down of pulsars in our current study , as we have used short enough time length data .spin down effects of pulsars become more important for a longer observation time , and spin down effects generate characteristic feature in the statistic shape .we thus expect that our veto method will become more efficient and safer for longer observation times .finally , we note that a veto threshold line varies depending on observational data time length and noise behavior. the threshold line eq .( [ eq : thresholdline ] ) is specifically for 10 hours lho data , of particular band , and we recommend that any other search that uses quite different data set from our play ground data should determine a threshold line based on a play ground data in each analysis . , but after removing the 16 hz harmonics .the number of the data points is 60375 .the maximum is 66.6 . ,height=302 ] we would like to thank m. landry , who has provided us with the tables of the experimentally - measured noise lines of the data sets that we have analyzed .the work of xs was supported by national science foundation grants phy 0071028 and phy 0079683 .999 jaranowski p , krlak a and schutz b. f 1998 phys .rev . d*58 * 063001 astone p _et al . _ 2003 class . quantum grav . * 20 * , s665 abbott b _ et al .( ligo scientific collaboration ) _2004 phys .d * 69 * , 082004 allen b _ et al ._ 2004 class .quantum grav .* 21 * , s671 tagoshi h _( tama collaboration ) _2001 phys .d * 63 * 062001 allen b 2004 _ preprint _gr - qc/0405045 baggio l _2000 phys .d * 61 * 102001 | in a blind search for continuous gravitational wave signals scanning a wide frequency band one looks for _ candidate events _ with significantly large values of the detection statistic . unfortunately , a noise line in the data may also produce a moderately large detection statistic . in this paper , we describe how we can distinguish between noise line events and actual continuous wave ( cw ) signals , based on the shape of the detection statistic as a function of the signal s frequency . we will analyze the case of a particular detection statistic , the statistic , proposed by jaranowski , krlak , and schutz . we will show that for a broad - band 10 hour search , with a false dismissal rate smaller than , our method rejects about of the large candidate events found in a typical data set from the second science run of the hanford ligo interferometer . |
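to make the veto procedure described above concrete , the sketch below subtracts the maximum - likelihood signal estimate from the data , re - evaluates the detection statistic on the residual at every frequency bin of the candidate event , sums the result , and compares it against an snr - dependent threshold . the helper callables , the rough conversion from the detection statistic to an snr , and the linear form of the threshold are assumptions standing in for the actual search code and the monte carlo fit , which are not reproduced here .
....
import numpy as np

def veto_candidate(data, freqs_in_event, estimate_signal, f_statistic,
                   threshold_slope, threshold_offset):
    """shape-consistency veto for a candidate cw event (schematic).

    data            : the observed time series
    freqs_in_event  : frequency bins above threshold forming the event
    estimate_signal : callable f -> maximum-likelihood signal estimate at f
    f_statistic     : callable (series, f) -> value of 2F at frequency f
    the threshold is taken to be linear in the candidate snr, with
    coefficients to be fitted on playground data (assumed form).
    """
    # candidate frequency = bin with the largest detection statistic
    two_f = np.array([f_statistic(data, f) for f in freqs_in_event])
    f_cand = freqs_in_event[int(np.argmax(two_f))]
    snr = np.sqrt(max(two_f.max() - 4.0, 0.0))   # rough snr from 2F (assumed)

    # subtract the estimated signal and re-evaluate 2F over the event bins
    residual = data - estimate_signal(f_cand)
    chi2 = sum(f_statistic(residual, f) for f in freqs_in_event)

    passes = chi2 <= threshold_slope * snr + threshold_offset
    return passes, chi2, snr, f_cand
....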
phase transitions are amongst the most remarkable and ubiquitous phenomena in nature .they involve sudden changes in measurable macroscopic properties of systems and are brought about by varying external parameters such as temperature or pressure .familiar examples include the transitions from ice to water , water to steam , and the demagnetization of certain metals at high temperature .these dramatic phenomena are described mathematically by non - analytic behaviour of thermodynamic functions , which reflect the drastic changes taking place in the system at a microscopic level . besides materials science, phase transitions play vital roles in cosmology , particle physics , chemistry , biology , sociology and beyond ; the universe began in a symmetric manner and went through a series of phase transitions through which the particles of matter with which we are familiar ( electrons , protons , the higgs boson , etc . ) materialised .more mundane examples include traffic flow ( where there is a transition between jammed and free - flowing states ) , growth phenomena and wealth accumulation ( where there may be a transition to a condensation phase , for example ) .while the latter examples refer to non - equilibrium systems , the emphasis in this article is on a famous transition which exists in equilbrium systems . the mathematical physics which describes such phenomena belongs to the realm of equilibrium statistical mechanics , one of the most beautiful , sophisticated and successful theories in physics .equilibrium statistical physics is based on the following premise : the probability that a system is in a state with energy at a temperature is where and is a universal constant , and is a normalising factor known as the partition function , here , the subscript indicates the linear extent of the system .a related fundamental quantity is the free energy , , given by where is the dimensionality of the system .phase transitions can only occur when the system under consideration has an infinite number of states in which it can exist for example , in the thermodynamic limit . in the modern classification scheme, such phase transitions are categorised as first- , second- ( or higher- ) order if the lowest derivative of the free energy that displays non - analytic behaviour is the first , second ( or higher ) one .transitions of infinite order brake no system symmetries .the most famous of these is the berezinskii - kosterlitz - thouless ( bkt ) transition in the two - dimensional model .the model is defined on a two - dimensional regular lattice , whose sites are labeled by the index , each of which is occupied by a spin or rotator .these two - dimensional unit vectors have or symmetry .the energy of a given configuration is where the summation runs over nearest neighbouring sites or links .this model is used to study systems such as films of superfluid helium , superconducting materials , fluctuating surfaces , josephson - junctions as well as certain magnetic , gaseous and liquid - crystal systems . the scenario proposed in seminal papers by and kosterlitz and thouless is that at a temperature above the critical one ( or ) positively and negatively charged vortices ( i.e. , vortices and antivortices ) which are present ( see fig . 1 )are unbound ( dissociated from each other ) and disorder the system . below the critical temperature ( or ) they are bound together and are relevant as dynamical degrees of freedom . 
there, long - range correlations between spins at sites and ( separated by a distance , say ) exist and are described by the correlation function whose leading behaviour in the thermodynamic infinite - volume limit is the correlation length ( which measures to what extent spins at different sites are correlated ) diverges and this massless low - temperature phase persists , with the system remaining critical with varying , up to at which . above this point ,correlations decay exponentially fast with leading behaviour here is the correlation length , and measures the distance from the critical point . as this critical pointis approached , the leading scaling behaviour of the correlation length , the specific heat and the susceptibility ( which respectively measure the response of the system to variations in the temperature and application of an external magnetic field ) are in which and is a non - universal constant .this exponential behavour is known as essential scaling , to distinguish it from more conventional power - law scaling behaviour ( in which , for example , ) . in summary ,the bkt scenario means a transition which ( i ) is mediated by vortex unbinding and ( ii ) exhibits essential scaling .besides the two - dimensional model , transitions of the bkt type exist in certain models with long - range interactions , antiferromagnetic models , the ice - type model and in string theory amongst others .thus a thorough and quantitative understanding of the paradigmatic model is crucial to a wide breadth of theoretical physics .for many years monte carlo and high - temperature analyses of the model sought to verify the analytical bkt renormalisation - group ( rg ) prediction that and and to determine the value of .typically was determined by firstly fixing .subsequent measurements of yielded a value incompatible with the the bkt prediction .because of the elusiveness of its non - perturbative corroboration , the essential nature of the transition was questioned .see table 1 of for an extensive overview of the status of the model up to that point .the fundamental realization of was that the thermal scaling forms ( [ b ] ) and ( [ c ] ) ( and similar formulae for related thermodynamic functions ) are inconsistent as they stand .instead , they must be modified to include logarithmic corrections : with and where rg indications implicit in are that .eq.([cc ] ) is an analytic prediction , which , since it is based on perturbation theory , requires confirmation through non - perturbative approaches .however , infinite lattices are not attainable using finite numerical resources .instead , one simulates finite systems , where , at criticality ( ) , the lattice size plays the role of in this model .the finite - size scaling ( fss ) prediction for the susceptibility is then to verify the bkt scaling scenario then , the thermal formula ( [ cc ] ) and/or the fss formula ( [ ccfss ] ) needs to be confirmed numerically .sophisticated fss techniques ( involving partition function zeros ) were used in to resolve , for the first time , the hitherto conflicting results for , and . 
however , the analysis resulted in an estimate of for , a value in conflict with the rg prediction of from .thus , in recent years , the focus of numerical studies of the model shifted to the determination of the logarithm exponent .indeed , the fss analyses of , using ( [ ccfss ] ) , yielded values compatible with that of but incompatible with ( see table 1 ) .nonetheless , it was clear that taking the logarithmic corrections into account leads to the resolution of the -- controversy .the most precise estimate for the critical temperature in the literature for the model with the standard action of ( [ xy ] ) is contained in and is .this value was obtained by mapping the model onto the exactly solvable body centered solid - on - solid model , and thereby circumventing the issue of logarithmic corrections . as demonstrated in table 1 ,recent analyses which have inluded the logarithmic corrections have resulted in estimates for compatible with this value ( while keeping ) . in and respectively , phase transitions in a lattice grain boundary model and a lattice gauge theory are studied . because these are in the same universality class as the model in two dimensions , they have the same scaling behaviour ( [ ccfss ] ) . however , the critical temperatures in these models bear no relationship to that of the counterpart .although contains a study of the model , it employs the villain formulation , which is different to ( [ xy ] ) , and leads to a different value for the nonuniversal quantity ..estimates for and for the model from a selection of recent papers with indications of the method used to obtain them ( rg related , fss or thermal scaling ) .[ cols= " < , < , < , < , < " , ] it was suggested in that just as taking the logarithmic corrections into account leads to the resolution of the leading scaling behaviour , in the same spirit it is conceivable that numerical measurements for may become compatible with the rg prediction if sub - leading corrections are taken into account . in this case ,( [ ccfss ] ) is more fully expressed as this was tested in to determine if the discrepancies between the numerical and theoretical estimates for can be ascribed to sub - leading corrections .however , the lattice sizes available ( up to ) were too small to resolve the issue .recently this issue was again addressed in a very high precision numerical simulation ( using lattices as big as ) in .using the ansatz and fitting to gives if large enough lattices are used .this value is compatible with the analytic prediction , . notwithstanding this result ,which is fss based , it is rather surprising that all of the analyses of the thermal scaling formula ( [ cc ] ) in table 1 yield positive values of far from the rg prediction that .it appears , therefore , that the fss approach is more powerful than that based on thermal scaling , although more extensive analyses would be required to test how this approach compares with that of .while large scale simulations were required to resolve these puzzles , a different technique , not requiring extensive simulations was used in . a conformal mappingwas used to switch from a confined lattice to the semi - infinite half plane . in this way, could be deduced from the correlation function at any value of . 
in particular, is accurately recovered at the transition temperature and clear evidence for existence of logarithmic corrections in the correlation function is presented .the vortex - binding scenario is crucial to the bkt phase transition in the two - dimensional model .because the energy of a single vortex increases with the system size as , at low temperature they can only occur in vortex - antivortex pairs .mutual cancelation of their individual ordering effects means that such a pair can only affect nearby spins and can not significantly disorder the whole system .topological long - range order exists in the system at low temperature .however at high temperature , the number of vortices proliferates and the distance between erstwhile partners becomes so large that they are effectively free and render the system disordered .it was for a long time believed that altering the energetics of the model to disable the vortex - binding scenario may lead to a transition different to the berezinskii - kosterlitz - thouless one .the step model is obtained from the model by replacing the hamiltonian ( [ xy ] ) by the energy associated with a single vortex for this system is expected to be independent of the lattice size .therefore , on the basis of this argument , vortices could exist at all temperatures , disordering the system .i.e. , there was expected to be no vortex - driven phase transition in the step model if there is a phase transition , it was expected not to be of the bkt type .indeed , early studies supported this assertion .however , in , very strong numerical evidence was presented that ( i ) there is a phase transition in the step model and ( ii ) it is of the bkt type ( with even the corrections to scaling being the same as those for the model ) .given the very different vortex energetics for the two models , this came as a surprise .the issue was further addressed in , where evidence for the existence of a bkt transition in the step model was again proffered .the approach of focused on numerical analyses of the helicity modulus , which experiences a jump at the transition . a similar approach to the model is contained in .the main idea of , which explains the occurence of the bkt transition in the step model , is that while the energy associated with a single fixed vortex in the system remains finite , the _ free energy _ grows as .this fact inhibits proliferation of free vortices in the low - temperature phase .it further implies that the harmonic properties of the interaction ( [ xy ] ) do not form a necessary condition for a bkt transition .consequently , and as pointed out in , the bkt phase transition may be an even more general phenomenon than hitherto recognised .* asymptotic freedom of models : * it is generally believed that there are differences of a fundamental nature between abelian and nonabelian models .the model has an symmetry group and is abelian , while all models with are nonabelian .the mermin - wagner theorem states that a continuous symmetry of the type can not be broken in two dimensions .thus there can not be a transition to a phase with long - range order in either of the or scenarios there .however , in a two - dimensional theory , topological defects of dimension can exist if the homotopy group , , of the order parameter space is non - trivial . 
for models , this space is the hypersphere .the only non - trivial group is which is isomorphic to the set of integers under addition .this is the condition that gives rise to point defects ( vortices ) with integer charge in the case ( the model ) .the binding of these vortices at low temperature is the mechanism giving rise to the bkt phase transition . for , conditions are not supportive of the existence of topological defects of this type and the widely held belief is that there is no phase transition , there being no distinct low - temperature phase .perturbation theory predicts that the models are asymptotically free .there is , however , no rigorous proof to this effect .this belief has been questioned and have given numerical evidence for the existence of phase transitions of the bkt type in these models as well as heuristic explanations of why such transition could occur and a rigorous proof that this would be incompatible with asymptotic freedom .perturbative and monte carlo calculations for the and models have been performed in , which do not support the existence of such a bkt - like phase transition there and instead are in agreement with perturbation theory and the asymptotic freedom scenario .nonetheless , the controversy has not entirely gone away , and one may argue that inclusion of logarithmic considerations could help for a precise unambiguous resolution . + * the diluted model : * an interesting current topic of research has been the question of the role of impurities in the model .the presence of impurities brings the model closer to real systems , where such physical defects are present .impurities are modeled by randomly diluting the number of sites ( or bonds ) on the lattice .clearly , if the dilution is so strong as to inhibit the percolation of spin - spin interactions across the lattice ( such that it is effectively broken into finite disconnected sets ) no phase transition can occur for any model .thus moderate dilution generally is expected to decrease the location of the transition temperature . however , the special additional feature of the model is the presence of vortices and the fact that they drive the transition .vortices are attracted to and , to some extent , anchored by impurities and the vortex energy is reduced at such a vacancy . therefore with increasing dilution , morevortices can be formed and the amount of disorder in the system is increased .this effect may enhance the lowering of the critical temperature to such an extent that it vanishes before the percolation threshold is reached .this is the issue addressed in .the percolation threshold occurs when the density of site vacancies is .the critical vacancy density is identified by the vanishing of the critical temperature , which in turn is identified as being the location at which . in critical temperature was reported to vanish above the percolation threshold at vacancy density . however , in it was suggested that the critical density is , in fact , closer to the percolation threshold .support for the latter result recently appeared in .if this is true , it means that the vortices do not , in fact , strongly enhance the lowering of the critical temperature . from the collective experience with the pure model as reported above , it is clear that a more precise identification of the critical temperature through would require taking account of the logarithmic corrections , although ignoring them may suffice as first approximation . 
indeed, ignoring these corrections in the pure model also leads to an estimate for the critical temperature which is higher than accurate values .besides the value of the critical temperature in a diluted model , one is also interested in the scaling behaviour of the thermodynamic functions at the phase transition there .the harris criterion predicts that disorder does not change the leading scaling behaviour of a model if the critical exponent associated with the specific heat of the pure model is negative .this is the case for the model in two dimensions .however , it is unclear what effect dilution can have on the quantitative nature of the exponents of the logarithmic corrections in such a case .this would be an interesting avenue for future research , and the model ( which has negative ) offers an ideal platform upon which to base such pursuits .+ * other models with logarithmic corrections : * logarithmic corrections to scaling also exist in other important models .while their existence has been unambiguously verified in models , this is not so in most cases . in particular , kim and landau applied fss techniques to the four - state potts model .again , fss behaviour could only be described if the logarithmic corrections are included .however , here inclusion of the leading multiplicative logarithmic corrections is insufficient and sub - leading additive corrections are required .the puzzle of why unusually large numbers of correction terms are necessary ( despite the availability of the exact value of in this model ) could , perhaps , now be resolved by an analysis on the scale of and this would be another interesting avenue to pursue .similar problems concerning logarithmic corrections and their detection exist in other two - dimensional models such as the diluted ising model .theoretical progress on the general issue of logarithmic corrections will be reported elsewhere .the issue of topologically driven phase transitions characterized by the two - dimensional model has been revisited and a timely review of the status of scaling at the famous berezinskii - kosterlitz - thouless transition given .after two decades of work , the perturbative renormalization group precictions of for the leading scaling behaviour were confirmed in and there is now little or no doubt about the correctness of the analytic predictions , and .the recent controversy over the value of the leading logarithmic correction exponent ( summarized in table 1 ) has been re - examined and claims as to its resolution ( at least in the context of finite - size scaling ) and state - of - the - art calculations summarized . besides the model , such multiplicative logarithmic corrections are manifest in a variety of other models , and their resolution in these contexts is now at the forefront current modern numerical investigations in statistical and lattice physics .a summary of some recent work on such models , as well as likely future directions has been given .finally , recent work confirming the vortex - binding scenario as the phase transition mechanism in the model has also been reviewed to complete a full account of the current status of one of the most remarkable and beautiful models in theoretical physics .this work was supported by eu marie curie research grants chbict941125 and fmbict961757 .b. berche , a.i .farias sanchez and r. paredes v. , europhys .( 2002 ) 539 ; b. berche , phys . lett . a 302 ( 2002 )336 ; b. berche , j. phys . a 36 ( 2003 ) 585 ; b. berche and l. shchur , jetp lett .79 ( 2004 ) 213 . 
| in statistical physics , the model in two dimensions provides the paradigmatic example of phase transitions mediated by topological defects ( vortices ) . over the years , a variety of analytical and numerical methods have been deployed in an attempt to fully understand the nature of its transition , which is of the berezinskii - kosterlitz - thouless type . these met with only limited success until it was realized that subtle effects ( logarithmic corrections ) that modify leading behaviour must be taken into account . this realization prompted renewed activity in the field and significant progress has been made . this paper contains a review of the importance of such subtleties , the role played by vortices and of recent and current research in this area . directions for desirable future research endeavours are outlined . |
zebrafish ( danio rerio ) of the ab and ab / tl genetic background were maintained , raised and staged as previously described . for protein over - expression in germ cells ,the mrna was injected into the yolk at one - cell stage .capped sense rna was synthesized with the mmessage mmachine kit ( ambion , + http://www.ambion.com/index.html ) . to direct protein expression to pgcs , the corresponding open reading frames ( orfs )were fused upstream to the 3utr of the nanos1 ( nos1 - 3utr ) gene , facilitating translation and stabilization of the rna in these cells . for global protein expression ,the respective orfs were cloned into the psp64ts ector that contains the 5 and 3 utrs of the xenopus globin gene .the injected rna amounts are as provided below .the following constructs were used : * ypet - ypet - rascaax - nos-1 ( 240 pg . )was used to label membrane in germ cells . * lifeact - pruby - nos-1 ( 240 pg . )was used to label actin in germ cells . *dn - rok - nos-1 ( 300 pg . ) was used to interfere with rok function in pgcs * aqp1a - nos-1 ( 300 pg . ) was used to over - express aquaporin-1a in pgcs * aqp3a - nos-1 ( 300 pg . ) was used to over - express aquaporin-3a in pgcs * aqp1a egfp - nos-1 ( 360 pg . ) was used to visualize the subcellular localization of aquaporin1a in pgcs * aqp3a egfp - nos-1 ( 300pg . ) was used to visualize the subcellular localization of aquaporin3a in pgcs the morpholinos for knocking down protein translation were obtained from genetools , llc http://www.gene - tools.com/. the following sequences were used : aquapoin1a : 5 aagccttgctcttcagctcgttcat3 ( injected at 400 m ) ; aquaporin 3a : 5 acgcttttctgccaacccatctttc 3 ( injected at 400);. for the control , standard morpholino 5cctcttacctcagttacaatttata 3 was used .time - lapse movies of blebbing cells in live zebrafish embryos were acquired with the zeiss lsm710 bi - photon microscope using one - photon mode . the 20x water - dipping objective with the numerical aperture 1.0 was used .the bit depth used was 16 and the scanning speed ranged between 150 to 250 ms / frame for fast imaging of bleb formation .images were preprocessed with fiji software to eliminate the background . the bleach correction tool ( embl )was used to correct for the reduction in fluorescence intensity during prolonged time - lapse movies .sequences of of image stacks were then processed using the 3d active meshes algorithm implemented in the icy software http://icy.bioimageanalysis.org/. the algorithm performs three - dimensional segmentation and tracking using a triangular mesh that is optimized using the original signal as a target . from the resulting three dimensional mesh one can then measure the cell volume its surface area .three dimensional rendering of the meshes was done using paraview ( http://www.paraview.org/ ) .statistical significance was evaluated using kolmogorov - smirnov tests implemented in custom made python codes . 
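The mesh-based measurement mentioned above (cell volume and surface area obtained from the triangulated surface returned by the 3D active meshes algorithm) can be summarised by the following sketch. This is an illustration, not the Icy implementation: it only shows how the volume (via the divergence theorem) and the area follow from a closed, consistently oriented triangular mesh.

```python
# Illustrative sketch, not the Icy implementation: volume and surface
# area of a closed triangular mesh.  `vertices` is an (N, 3) float array,
# `faces` an (M, 3) integer array of vertex indices with a consistent
# outward orientation.
import numpy as np

def mesh_volume_area(vertices, faces):
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # sum of signed tetrahedron volumes with respect to the origin
    volume = abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
    return volume, area

# quick check on a unit tetrahedron: volume 1/6, area 1.5 + sqrt(3)/2
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(mesh_volume_area(verts, tris))
```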
to confirm that the volume fluctuations observed in the zebrafish cells in vivo are real, we have to exclude possible systematic errors induced by the three dimensional mesh reconstruction algorithm .our experimental analysis shows smaller volume fluctuations for cells where the bleb formation has been suppressed ( dn - rok and aqp- mutants ) , with respect to wild type cells ( wt ) or those for which blebbing has been enhanced ( aqp+ ) .in particular , as shown in fig .1(a ) , wt and aqp+ cells display a more complex morphology when compared to dn - rok or aqp- , which appear instead to have a rounded shape .hence , the first question to be answered is whether in presence of complex morphological shapes , the algorithm introduces a systematic bias in the measured cell volume .in other words the question is : does the algorithm produce a larger error while calculating the volume of blebbing cells , with respect to those where blebs are absent ? to answer to this question we generate a set of synthetic ellipsoidal cells whose volume is in the range of the zebrafish cells analyzed in our experiments ( ) , as shown in fig.[fig : synthetic_blebs ] .synthetic cells are generated through the imagej software ( http://imagej.nih.gov/ij/ ) by creating 3d stacks having the shape of ellipsoids with semi - axes and .initially , we set the voxel size to along the three directions ( see fig.[fig : synthetic_blebs](a ) ) . the volume is then calculated both according to the formula and by counting the number of voxels belonging to the synthetic cell , . the relative error between the two estimatesis , so that we can safely consider the estimate as the real volume of the synthetic stacks generated .the main sources of error in analyzing confocal image stacks stems from the anisotropic voxel . in our experimentsthe resolution along the direction is , while it is in the xy plane . to reproduce the voxel anisotropy in the synthetic stacks , we select just one single xy plane every 6 composing the original z - stack .a resulting typical cell is shown in fig.[fig : synthetic_blebs](b ) .we then extract the mesh of this newly obtained stack ( see fig.[fig : synthetic_blebs](c ) ) and calculate the ensuing volume .our set of synthetic cells consists of 35 ellipsoids of different semi - axes , for which we calculate the true volume reported in fig.[fig : volume_ellipsoids_comparison ] .then , each ellipsoid is first processed by the anysotropic voxelization in the z direction , and subsequently analyzed by the 3d active mesh algorithm .the volumes of the extracted meshes are reported in fig.[fig : volume_ellipsoids_comparison ] ( ) .it is apparent that the algorithm systematically underestimates the volume by roughly .we checked that is error is greatly reduced for isotropic voxels .a constant systematic error is not worrying , since we are only interested in changes in volume and all the images have the same voxel anisotropy and therefore the same error . 
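A minimal sketch of the synthetic-cell construction just described, with placeholder semi-axes and grid size: build an ellipsoidal voxel stack, compare the voxel-count volume with the analytic value, then mimic the anisotropic z-resolution by retaining one plane in six before the stack is handed to the mesh algorithm.

```python
# Hedged sketch of the synthetic-cell test; semi-axes, grid size and the
# 1-in-6 plane selection stand in for the values used in the study.
import numpy as np

a, b, c = 20.0, 15.0, 10.0                     # semi-axes in voxel units (placeholders)
n = 64
z, y, x = np.mgrid[-n:n, -n:n, -n:n].astype(float)
cell = (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0

v_formula = 4.0 / 3.0 * np.pi * a * b * c      # analytic ellipsoid volume
v_voxel = cell.sum()                           # volume by counting voxels
print("relative error:", (v_voxel - v_formula) / v_formula)

# anisotropic sampling: keep only one xy plane out of every six,
# reproducing the coarser z-resolution of the experimental stacks
cell_aniso = cell[::6, :, :]
```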
before addressing the fluctuations of this error , we focus on possible spurious changes in the measured volume induced by a change in shape .for each of the synthetic ellipsoidal cells , we create a synthetic cell with the same volume but presenting 1 , 2 , or 3 blebs on the surface .blebs are generated as spherical caps of different radii with centers placed randomly on the ellipsoid surface ( see fig.[fig : synthetic_blebs](d ) ) .we then perform the same anysotropic voxelization of the original z - stack done for the plane ellipsoids ( see fig.[fig : synthetic_blebs](e ) ) . from this imagewe extract the active mesh ( fig.[fig : synthetic_blebs](f ) ) and calculate its volume .the volumes of the meshes of synthetic cells with blebs , , are displayed in fig.[fig : volume_ellipsoids_comparison ] .one can only see a very small difference between the values of and , but they both appear underestimate the true value by about .what is surprising , however , is that cells with blebs appear to approximate the real volume slightly better than cells without blebs . to confirm this, we report in fig.[fig : volume - error - vss ] the relative volume fluctuations as a function of the measured mesh surface fluctuations .if no errors were made by the algorithm in estimating the synthetic volumes one would expect since pair of cells were constructed with the same volume but different shapes .if complex shape with blebs would lead to an overestimation of the volume with respect an ellipsoidal cell with no blebs , one would expect and to be positively correlated . to the contrary, the linear regression of the data in fig.[fig : volume - error - vss ] shows a small but clear anti - correlation between volume and surface fluctuations .this is in contrast with experimental results showing that changes in shape are positively correlated with changes in volume ( fig .[ fig : vs_correlations ] ) .hence , this result can not be considered an artefact of the measurement but a real feature of the cells .the previous analysis clearly demonstrates that the volume fluctuations observed in our experiments do not depend on the shape of the cells , but the algorithm systematically underestimates the volume of both of about .the next question is whether these systematic fluctuations are of the same order of magnitude as the observed ones .are the fluctuations reported in fig .1(b ) real or just an artefact introduced by the mesh reconstruction algorithm ?this question is particularly compelling in the case of wt and aqp+ cells , since for dn - rok and aqp- cells we can accept that the volume might remain constant . to answer to this questiongenerate a set of 120 synthetic cells , 60 with blebs randomly placed and of different sizes , and 60 without blebs , each cell having its own real volume calculated with imagej .we then process each synthetic cell according to the protocol previously outlined : anysotropic voxelization in the z direction and subsequent mesh analysis .finally we calculate the fluctuations and its cumulative distribution . 
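A sketch of this last step, with placeholder inputs rather than the actual mesh measurements: the empirical cumulative distribution of the synthetic-cell volume errors is built and its constant systematic bias removed, so that it can be overlaid on the measured fluctuation distributions in the comparison that follows.

```python
# Placeholder sketch of the cumulative-distribution comparison; the
# arrays stand in for the mesh measurements, and the bias value is
# arbitrary rather than the one quoted in the text.
import numpy as np
import matplotlib.pyplot as plt

def ecdf(x):
    xs = np.sort(np.asarray(x, dtype=float))
    return xs, np.arange(1, xs.size + 1) / xs.size

rng = np.random.default_rng(2)
err_synth = -0.10 + 0.01 * rng.standard_normal(120)   # synthetic-cell volume errors (placeholder)
dv_wt = 0.03 * rng.standard_normal(500)               # experimental fluctuations (placeholder)

xs, cs = ecdf(err_synth - err_synth.mean())            # remove the average systematic bias
xw, cw = ecdf(dv_wt)

plt.plot(xs, cs, label="synthetic cells (bias removed)")
plt.plot(xw, cw, label="WT cells")
plt.xlabel("relative volume fluctuation")
plt.ylabel("cumulative fraction")
plt.legend()
plt.show()
```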
in fig.[fig : cumulative_fluctuation_ellipsoids_comparison_cell ] we compare with the corresponding cumulative distributions of wt , aqp+ , dn - rok and aqp- cells , once it has been shifted by the average systematic volume bias .this figure shows that systematic errors made by icy in the estimation of the cell volume , are compatible with the observed volume fluctuations of dn - rok and aqp- cells , but not with those aqp+ and wt cells , which instead appear to be significantly larger .we thus conclude that the difference in volume fluctuation between aqp+/wt cells and dn - rok / aqp- is not an artefact of the analysis . by performing numerical simulations, we notice that a straightforward formulation of the method suffers from poor volume conservation .this general drawback of the immersed boundary method has been already pointed out by many other authors in different contexts . to overcome this problemwe implement the method proposed in ref . and enforce the incompressibility constraint on the discrete lagrangian grid in the weak sense : where is the outward unit normal to the membrane at the position and is a discrete measure of the arclength in the actual configuration at the position .the above constraint is satisfied by adding a corrective term to the equation of motion , where satisfies the incompressibility constraint and is given by with this correction , we observe that the incompressibility constraint is satisfied and the cell volume is perfectly conserved .aqp3 is expressed by pgcs . a pgc expressing egfp fusion of aquaporin-3a and an rfp - tagged membrane marker .scale bar is 10 m . ]aquaporin knockdown and overexpression does not induce significant changes in the average cellular volume .a similar result holds for the dn - rok mutant .the results show the dispersion of the data , the average and the standard error .statistical significance ( values ) is evaluated according to the kolmogorov - smirnov test . ]two example of synthetic image stacks representing cells without ( a - c ) and with ( d - f ) blebs .the original stacks of equal volume ( a and d ) , are first transformed removing a set of planes ( 1 every 6 ) to obtain an anisotropic voxel corresponding to the experimental resolution ( b and e ) , i.e. m in the x , y directions and m along z. the resulting stacks ( b and e ) are analyzed to obtain a three - dimensional mesh ( shown in c and f ) ] volume comparison of synthetic cells .we compare the measured volumes of cells without blebs ( ) with those corresponding to cells with blebs ( ) and with the true volumes ( ) , which is the same for each pair of cells .the results show a large constant systematic error , and small fluctuations for cells with and without blebs . ]the relative difference in the measured volume for pairs of cells of the same true volume but different surface due to the presence or absence of blebs .the data show fluctuations of and a small negative correlation between volume and surface changes . ]the cumulative distribution of relative error fluctuations for the volume of a large number of synthetic cells with and without blebs compared with experimental measurements .the cumulative distribution of synthetic cells is shifted by to allow a visual comparison with the experimental quantities . 
corresponds the time averaged volume for aqp+ , aqp- , dn - rok and wt cells , whilst it is for synthetic cells .volume fluctuations for wt and aqp+ cells are significantly larger than those observed in synthetic cells , whereas dn - rok and aqp- mutant cells volume fluctuations seem to be compatible with the systematic errors induced by the mesh algorithm . ]volume and surface fluctuations are correlated .principal component analysis of volume and surface relative values .scatter plots for a ) wt , b ) dn - rok , c ) aqp+ and d ) aqp- are reported together with an ellipse with axis given by the two eigenvectors of the cross - correlation matrix , whose amplitude is reported in panel e ) and f ) for the largest and smaller axis .the dashed line represents the expected result for the ideal case of the isotropic deformation of a sphere . ]time evolution of the pressure drop across the membrane from numerical simulations.a representative example of the evolution of the pressure drop in numerical simulations shows that the average is very different from the maximum ( top ) .furthermore , the standard deviation of the distribution fluctuates intermittently in time in correspondence to the blebbing activity ( bottom ) .results are obtained for a permeability . ] a schematic representation of the bleb formation process .a ) the cortex contracts squeezing water outside of the cell .b ) the membrane buckles and the cortex - membrane interface fractures .c ) the bleb expands as the interface fails and water flows inside the cell as the internal fluid pressure is relieved . ].results of statistical significance tests for validity of gaussian statistics for volume and surface fluctuations .we report the p - values obtained from the kolmogorov - smirnov test .a small p - value ( e.g. ) would imply that we can reject the hypothesis that the distribution is described by gaussian statistics . in the present case , the p - value is large indicating that a guassian distribution provides a good fit to the data . [table : pvalue ] [ cols="<,^,^,^,^",options="header " , ]* a representative example of the time evolution of a pgc in wt conditions .the movie is obtained using an average intensity 3d projection in imagej . * a representative example of the time evolution of a pgc in dnrok conditions .the movie is obtained using an average intensity 3d projection in imagej . * a representative example of the time evolution of a pgc in aqp- conditions .the movie is obtained using an average intensity 3d projection in imagej . * a simulation of the computational model using a porous membrane .the color represents fluid pressure ( see fig .3a ) . * a simulation of the computational model using an impermeable .the color represents fluid pressure ( see fig . | cells modify their volume in response to changes in osmotic pressure but it is usually assumed that other active shape variations do not involve significant volume fluctuations . here we report experiments demonstrating that water transport in and out of the cell is needed for the formation of blebs , commonly observed protrusions in the plasma membrane driven by cortex contraction . we develop and simulate a model of fluid mediated membrane - cortex deformations and show that a permeable membrane is necessary for bleb formation which is otherwise impaired . taken together our experimental and theoretical results emphasize the subtle balance between hydrodynamics and elasticity in actively driven cell morphological changes . 
cells can change their shape to explore their environment , communicate with other cells and self - propel . these macroscopic changes are driven by the coordinated action of localized motors transforming chemical energy into motion . active processes in biological systems can be linked to a large variety of collective non - equilibrium phenomena such as phase - transitions , unconventional fluctuations , oscillations and pattern formation . a vivid example of actively driven non - equilibrium shape fluctuations is provided by cellular blebs , the rounded membrane protrusions formed by the separation of the plasma membrane from the cortex as a result of acto - myosin contraction . blebs occur in various physiological conditions , as for instance during zebrafish embryogenesis , or cancer invasion . while some questions concerning the mechanisms governing bleb formation and its relation to migration have been resolved , key aspects of bleb mechanics remain unclear . geometrical constraints dictate that active shape changes associated with blebs should necessarily involve either fluctuations in the membrane surface or in cellular volume , and possibly both . it is generally believed , however , that the cellular volume is not significantly altered during bleb formation , so that the cell is usually considered incompressible . yet , experimental evidence in vitro suggests that aquaporins ( aqps ) , a family of transmembrane water channel proteins , are involved in cell migration and blebbing . the implied significance of fluid transport through the membrane suggests that an interplay between hydrodynamic flow and active mechanics has an important but still unclear role in blebbing . in this letter , we reveal the role of the membrane permeability in the formation , expansion and retraction of cellular blebs . we show by direct experiments in vivo and numerical simulations that bleb formation involves volume fluctuations , considerable water flow through the membrane , and relatively smaller surface fluctuations . _ experiment : _ one of the limitations impeding the experimental studies of the bleb dynamics is the lack of proper tools to generate high - resolution spatial - temporal data of bleb dynamics . this is due to the fact that the time scale of bleb formation is relatively short ( about 1 minute starting from initiation of the bleb to its retraction ) , which requires fast imaging and photostable markers . here , we create an improved membrane marker which we inject in one - cell stage zebrafish embryos ( see supplemental material for experimental methods ) . together with wild type ( wt ) zebrafish pgcs , a well studied biological model to investigate blebbing in vivo , we also consider cells expressing dominant - negative rho kinase mutant ( dn - rok ) which inhibits acto - myosin contractility suppressing blebbing activity . zebrafish expresses a large number aqps contributing to the water permeability of the membrane . here we focus on aqp1 and aqp3 , the most ubiquitously expressed aquaporins . to asses their role in volume change and blebbing , we consider pgcs with aqp1 and aqp3 overexpression ( aqp+ ) and knockdown ( aqp- ) . the imaging of blebbing is done during 12 - 16 hours post fertilization , when we record time - series of confocal images for a large number of cells . sequences of image stacks are then processed using the 3d active meshes algorithm implemented in the icy software . 
the algorithm performs three - dimensional segmentation and tracking using a triangular mesh that is optimized using the original signal as a target . from the resulting three dimensional mesh one can then measure the cell volume and its surface area ( see also ) . in fig . [ fig : volume - surface]a , we illustrate representative phenotypes of pgcs under different conditions ( see also the corresponding movies ) . these observation show that wt cells display a marked blebbing activity that , as expected , is strongly suppressed when active contraction is hindered , as in dn - rok cells . remarkably , we also observe a strong reduction in bleb activity in aqp- cells , where water flow is hindered . in contrast , water flow enhances blebbing as manifested by the presence of larger blebs in the aqp+ condition . to quantify these qualitative observations , we measure the cell volume and its surface sampling the results over a large number of time - frames taken on different cells . to account for cell - to - cell variability , we consider relative volumes and surfaces changes , where ( ) is the time - averaged volume ( surface ) of each cell . the average value of the volume does not change significantly for the four cases . ( color online ) volume and surface fluctuations during blebbing are controlled by aqps . a ) representative phenotypes for bleb formation in pgcs under different conditions : wild type ( wt ) , cells expressing dn - rok mutant which impairs contractility by interfering with acto - myosin contraction ( dn - rok ) , cells where aqp1a and 3a is suppressed ( aqp- ) , cells with over - expression of aqp1a/3a ( aqp+ ) . the scale bar corresponds to . the cumulative distribution of relative volume ( b ) and surface ( c ) fluctuations for the four conditions illustrated in a ) together with a gaussian fit ( dashed lines ) . the distributions are sampled over different time - frames corresponding to different cells . ( wt : cells and time - frames ; dn - rok : and , aqp- : and ; aqp+ : and ) . ] in fig . [ fig : volume - surface]b , we report the cumulative distribution of relative volume changes which indicates significant fluctuations , reaching up to 10% , in the wt case . volume fluctuations are strongly reduced for the dn - rok and aqp- cases , while they are enhanced in the aqp+ case . the relative volume and surface distributions themselves are well described by gaussian statistics , as also confirmed by a kolmogorov - smirnov test . a statistical test also indicates that the differences between wt and both aqp- and dn - rok are significant ( ) , but the differences between wt and aqp+ and between aqp- and dn - rok are not . relative surface fluctuations are small that volume fluctuations in the wt case and are further reduced for dn - rok and aqp- and slightly increased for aqp+ ( see fig . [ fig : volume - surface]c ) . we also checked that relative surface and volume fluctuations are correlated , suggesting a direct link between blebbing activity and volume fluctuations induced by water transport . suppressing water flow has the same effect as suppressing active contraction , in both cases blebs are hindered . furthermore , the volume fluctuations we observe follow closely bleb expansion and retraction as illustrated in fig . [ fig : volume - bleb - time ] . expansion of a bleb correspond to a visible volume increase while when a bleb retracts the volume decreases . 
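The statistical treatment described above (per-cell normalisation by the time-averaged volume or surface, pooling over cells and frames, Kolmogorov-Smirnov comparisons, and the Gaussian description of the pooled distributions) can be sketched as follows; the input data are synthetic placeholders, not the measurements.

```python
# Illustrative post-processing sketch with placeholder data.
import numpy as np
from scipy.stats import ks_2samp, kstest, norm

def pooled_relative_fluctuations(series_by_cell):
    """series_by_cell: {cell_id: 1-D time series of volume or surface}."""
    out = []
    for series in series_by_cell.values():
        x = np.asarray(series, dtype=float)
        out.append(x / x.mean() - 1.0)          # (X - <X>_t) / <X>_t, per cell
    return np.concatenate(out)

rng = np.random.default_rng(1)
vol_wt = {i: 1000.0 + 40.0 * rng.standard_normal(60) for i in range(25)}    # placeholders
vol_aqm = {i: 1000.0 + 15.0 * rng.standard_normal(60) for i in range(25)}

dv_wt = pooled_relative_fluctuations(vol_wt)
dv_aqm = pooled_relative_fluctuations(vol_aqm)

# significance of the difference between two conditions
print(ks_2samp(dv_wt, dv_aqm))

# Gaussian description of one condition (parameters estimated from the same
# data, so strictly a Lilliefors-type correction would apply)
print(kstest(dv_wt, norm(loc=dv_wt.mean(), scale=dv_wt.std(ddof=1)).cdf))
```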
here we concentrate on aqp+ pgc since the blebs are distinctly visible and the analysis clearer , but the same result holds for wt cells where , however , several blebs may form and retract simultaneously . ( color online ) bleb formation is directly related to volume changes . a ) time lapse of a single aqp+ cell . two dimensional sections are shown in the upper panels and the corresponding reconstructed three - dimensional meshes in the lower ones . arrows indicate blebs . the scale bar corresponds to . b ) evolution of the volume for the same cell . time points corresponding to the panels in a ) are denoted in yellow . an increase in volume is observed in correspondence with each newly formed bleb . ] _ model : _ in order to better understand the physical role of water flow in bleb formation , we resort to numerical simulations of a two - dimensional model of the biomechanics of cortex - membrane deformations including fluid transport through the plasma membrane . several existing computational models for cellular blebs simulate the detachment and expansion of the membrane due to the active contraction of the cortex , assuming cell volume conservation . here , we relax this assumption by introducing and varying the membrane permeability in a model based on the immersed boundary method in the stokes approximation , considering a contracting discretized elastic cortex coupled to an elastic membrane . we describe both the membrane and the cortex by a set of discrete nodes connected by springs on one dimensional closed curves parametrized by their initial arc length with ] is the pressure jump . using the normal stress jump condition = \nicefrac{\mathbf { \mathbf f}\cdot \mathbf n } { \delta s(\mathbf r_{\mathrm{m}_{k}})}$ ] , the porous velocity reads where and is the force on the node . finally , we reach the equation used in the simulations . _ simulations : _ to simulate the model , we assume a square fluid domain that we discretize using a square grid with a discretization step and , where is the length of the domain in each direction and is the number of eulerian coordinates . the fluid domain covers both the inside and outside of the membrane and has an area of . the discretization steps and are of size . ( color online ) numerical simulations show that membrane porosity is needed for blebbing . a ) a series of snapshots of the numerical simulations of bleb formation and retraction . the color represents the local fluid pressure : red for positive and blue for negative pressures while the cell membrane is green . a ) results obtained with physiological permeability : . b ) for , bleb formation is impaired . c ) the cumulative distribution of relative area fluctuations for different permeabilities . d ) the standard deviation of the distribution of relative cell areas as a function of permeability . the physiological range is depicted in grey . ] the membrane permeability in zebrafish embryos has been measured experimentally and is reported to be in the range depending on the developmental stage . here , we perform numerical simulations under different values of , ranging from to , to account for aqp overexpression and knockdown ( see table s1 for a complete list of parameters ) . when is in the physiological range , we observe realistic bleb formation and retraction ( fig . [ fig : simulations]a and movie s5 ) . when we delete membrane permeability , setting and enforcing strict cell volume conservation , bleb activity is suppressed ( fig . [ fig : simulations]b and movie s6 ) . 
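To make the discretisation described above concrete, the following minimal sketch computes the Hookean tension forces on a closed chain of nodes, the building block used for both membrane and cortex. The spring constant and rest length are placeholders, and the actual model additionally includes membrane-cortex linkers, active cortical contraction, the fluid coupling and the porous-membrane term, none of which is shown here.

```python
# Minimal sketch of the elastic forces on a closed chain of nodes
# (membrane or cortex).  Parameters are placeholders; the linkers,
# active contraction and fluid coupling of the full model are omitted.
import numpy as np

def spring_forces(nodes, k, ds0):
    """nodes: (N, 2) positions on a closed curve; returns (N, 2) forces."""
    f = np.zeros_like(nodes)
    n = len(nodes)
    for i in range(n):
        j = (i + 1) % n                        # periodic index: the curve is closed
        d = nodes[j] - nodes[i]
        dist = np.linalg.norm(d)
        fij = k * (dist - ds0) * d / dist      # tension along the segment
        f[i] += fij
        f[j] -= fij
    return f

# example: a ring of 100 nodes with a slightly shortened rest length,
# mimicking a uniform contraction
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ring = np.column_stack((np.cos(theta), np.sin(theta)))
forces = spring_forces(ring, k=1.0, ds0=0.9 * 2.0 * np.pi / 100)
```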
we can relate the simulations to the experimental results by noticing that the cell area , the two dimensional analogue of the three dimensional cell volume , fluctuates more or less when the permeability is increased or reduced ( fig . [ fig : simulations]cd ) , in correspondence with aqps over - expression or knock - down , respectively . both experiments and simulations suggest that blebs occur as long as the fluid is able to flow sufficiently fast through the membrane . ( color online ) a ) spatio - temporal evolution of the pressure jump across the membrane computed from the normal stress jump condition in numerical simulations with . the pressure drop is strongly enhanced in localized regions corresponding to blebs . b ) the corresponding distribution of pressure jumps displays long tails . ] _ discussion : _ our model allows us to better understand why volume fluctuations are crucial for blebbing . acto - myosin driven cortex contraction leads to a shrinkage of the cell by squeezing some water outside . indeed the fluid pressure inside the cell is initially larger than the one outside ( see fig . [ fig : simulations]a ) . the contraction of the cortex induces stretching in the membrane - cortex linkers leading the membrane to buckle . buckling provides an effective way for membranes to avoid considerable elastic compression and is associated with a structural softening of the system . furthermore , it allows to generate large membrane deflections needed to form a bleb . in differentiated cells additional membrane surface can be obtained by disassembling caveolae , but this can not happen in pgcs where caveolin is not expressed . when the interface fractures , the mechanical stress on the detached part of the membrane decreases , but it increases on the interface that is still attached , inducing crack propagation . as the bleb expands , the fluid pressure inside the cell is reduced ( see fig . [ fig : simulations]a ) leading to an inflow of water . in fig . [ fig : drop]a , we display the spatio - temporal evolution of the pressure jump across the membrane , showing large fluctuations in correspondence to bleb formation . these pressure spikes , whose distribution is long - tailed ( fig . [ fig : drop]b ) , are a manifestation of the stress concentrations around cracks and are needed to account for the observed volume fluctuations , because otherwise the average pressure jump generated by a uniform cortex contraction ( around pa ) would not displace sufficient amount of fluid during the short lifetime of a bleb . healing of the membrane - cortex interface eventually leads to bleb retraction and to an increased fluid pressure inside the cell . the mechanism described above does not work for an impermeable membrane : isochoric buckling is possible in principle but , in addition to bending , it necessarily causes considerable stretching which is energetically expensive . thus , when the interface fractures the bleb does not form and interface delamination takes place without localized membrane expansion . our numerical results show that volume fluctuations during blebbing are due to mechanically induced highly non - uniform pressure drops across the membrane . the same mechanism could also explain the experimental observation that blebs are nucleated preferentially in regions of negative membrane curvature . changes in osmotic pressure gradients could also contribute to the process , as suggested previously , but are not explicitly included in our model . 
future experimental work will clarify if this and other assumptions present in our model are correct , but our results should stimulate both new experiments and the development of more elaborate theories and models . this would also allow to better understand the role of transmembrane water transport for other cellular protrusions , given that past experimental results relate the presence of aquaporins to the formation of lamellipodia and filopodia . the present methodology provides the basis for a physical explanation of this broad class of phenomena . we thank a. dufour , a. l. sellerio and d. vilone for useful discussions . a. t. , o. u. s. , and s. z. are supported by the european research council through the advanced grant no . 291002 sizeffects . s. z. acknowledges support from the academy of finland fidipro progam , project no . 13282993 . c. a. m. l. p. acknowledges financial support from miur through prin 2010 . e. k. acknowledges the support of a febs long - term postdoctoral fellowship while writing this manuscript . a. taloni , e. kardash and o. u. salman contributed equally to this paper . 43 natexlab#1#1bibnamefont # 1#1bibfnamefont # 1#1citenamefont # 1#1url # 1`#1`urlprefix[2]#2 [ 2][]#2 , , , , , , , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , , , , , , * * , ( ) . , , , , , , , , , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , , , * * , ( ) . , , , , * * , ( ) . , , , , , , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , , , , , , , * * , ( ) . , , , , , , , , * * , ( ) . , , , , , * * , ( ) . , * * , ( ) . see supplemental material for detailed experimental methods , image processing methodology , supplemental figures with experimental and numerical data , animations of experiments and simulations . , , , , , , * * , ( ) . , , , , , * * , ( ) . , * * , ( ) . , , , * * , ( ) . , , , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , , , * * , ( ) . , , , * * , ( ) . , * * , ( ) . , , , , , * * , ( ) . , ph.d . thesis , ( ) . , , , , , , , * * , ( ) . , , , * * , ( ) . , , , , , * * , ( ) . , * * , ( ) . , , , , , , , , , , , * * , ( ) . , , , , * * , ( ) . , , , , * * , ( ) . , _ _ ( , , ) . , , , , , * * , ( ) . , , , , * * , ( ) . |
this work deals with a one dimensional inelastic kinetic model , introduced in , that can be thought of as a generalization of the boltzmann - like equation due to kac ( ) .motivations for research into equations for inelastic interactions can be found in many papers , generally devoted to maxwellian molecules . among them , in addition to the already mentioned pulvirenti and toscani s paper , it is worth quoting : , , , , , .see , in particular , the short but useful review in .returning to the main subject of this paper , the one - dimensional inelastic model we want to study reduces to the equation where stands for the probability density function of the velocity of a molecule at time and being a nonnegative parameter . when , ( [ eq1 ] ) becomes the kac equation .it is easy to check that the fourier transform of satisfies equation where stands for the fourier transform of . equation ( [ eq2 ] ) can be considered independently of ( [ eq1 ] ) , thinking of , for , as fourier stieltjes transform of a probability measure , with . in this case , differently from ( [ eq1 ] ) , need nt be absolutely continuous , i.e. it need nt have a density function with respect to the lebesgue measure .following , can be expressed as where and is the so called _ wild product_. the wild representation ( [ eq3 ] ) can be used to prove that the kac equations ( [ eq1 ] ) and ( [ eq2 ] ) have a unique solution in the class of all absolutely continuous probability measures and , respectively , in the class of the fourier stieltjes transforms of _ all _ probability measures on . moreover , this very same representation , as pointed out by , can be reformulated in such a way to show that is the characteristic function of a completely specified sum of real valued random variables .this represents an important point for the methodological side of the present work , consisting in studying significant asymptotic properties of , as .indeed , thanks to the mckean interpretation , our study will take advantage of methods and results pertaining to the _ central limit theorem _ of probability theory . as to the organization of the paper , in the second part of the present section we provide the reader with preliminary information mainly of a probabilistic nature that is necessary to understand the rest of the paper . in section [ s2 ]we present the new results , together with a few hints to the strategies used to prove them .the most significant steps of the proofs are contained in section [ s3 ] , devoted to asymptotics for weighted sums of independent random variables .the methods used in this section are essentially inspired to previous work of harald cramr and to its developments due to peter hall .see , .completion of the proofs is deferred to the appendix .it is worth lingering over the mckean reformulation of ( [ eq4 ] ) , following .consider the product spaces with , being a set of certain _ binary trees _ with leaves .these trees are defined so that each node has either zero or two `` children '' , a `` left child '' and a `` right child '' .see figure [ figure1 ] .now , equip with the where , given any set , denotes the power set of and , if is a topological space , indicates the borel on .define , with and , to be the coordinate random variables of . at this stage , for each tree in fix an order on the set of all the nodes and , accordingly , associate the random variable with the node .see ( a ) in figure [ figure1 ] .moreover , call the leaves following a left to right order . 
see ( b ) in figure [ figure1 ] .define the depth of leaf in symbols , to be the number of generations which separate from the `` root '' node , and for each leaf of a tree , form the product where : equals if is a `` left child '' or if is a `` right child '' , and is the element of associated to the parent node of ; equals or depending on the parent of is , in its turn , a `` left child '' or a `` right child '' , being the element of associated with the grandparent of ; and so on . for the unique tree in it is assumed that . for instance , as to leaf 1 in ( a ) of figure [ figure1 ] , and , for leaf 6 , . from the definition of the random variables it is plain to deduce that holds true for any tree in , with for further information on this construction , see .it is easy to verify that there is one and only one probability measure on such that where , for each , * is a well specified probability on , for every .* is the probability distribution that makes the independent and identically distributed with continuous uniform law on .* is the probability distribution according to which the random variables turn out to be independent and identically distributed with common law .expectation with respect to will be denoted by and integrals over a measurable set will be often indicated by . in this framework onegets the following proposition , a proof of which can be obtained from obvious modifications of the proofs of theorem 3 and lemma 1 in .( ) _ the solution [ , respectively ] of ( [ eq1 ] ) [ ( [ eq2 ] ) , respectively ] can be viewed as a probability density function [ the characteristic function , respectively ] of for any .moreover , converges in distribution to zero as . _ as a first application of this proposition , one easily gets \\ & = e^{-t}\phi_0(\xi)+e^{-t}\sum_{n \geq 2 } ( 1-e^{-t})^{n-1 } { \hat{q}_n}(\xi;\phi_0 ) .\end{split}\ ] ] then , since for any with part of the conditional characteristic function of , given , coincides with the characteristic function of when is replaced by its real part .whence , with part of .the distribution corresponding to is symmetric and is called _ even part _ of .in fact , turns out to be an even real valued characteristic function , and this fact generally makes easier certain computations . 
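The representation above lends itself to direct Monte Carlo sampling. The sketch below is exploratory and rests on assumptions that are stated rather than recovered from the garbled formulas: the number of leaves is geometric with parameter e^{-t}, the tree is grown by repeatedly splitting a uniformly chosen leaf, the angles are uniform on (0, 2 pi), and the inelastic weights attached to a split are taken as cos(theta)|cos(theta)|^p and sin(theta)|sin(theta)|^p, which satisfy |c|^alpha + |s|^alpha = 1 with alpha = 2/(1+p).

```python
# Exploratory sketch of the weighted-sum representation.  The weight
# rule, the uniform angle law and the leaf-splitting construction are
# assumptions flagged above, not formulas taken from this paper.
import numpy as np

def sample_v(t, p, draw_x0, rng):
    """One draw of the weighted sum over the leaves of a random binary tree."""
    n = rng.geometric(np.exp(-t))              # P(N = n) = e^{-t} (1 - e^{-t})^{n-1}
    weights = [1.0]                            # weight carried by each current leaf
    while len(weights) < n:
        i = rng.integers(len(weights))         # split a uniformly chosen leaf
        w = weights.pop(i)
        th = rng.uniform(0.0, 2.0 * np.pi)
        c = np.cos(th) * abs(np.cos(th)) ** p  # assumed inelastic weights
        s = np.sin(th) * abs(np.sin(th)) ** p
        weights.extend([w * c, w * s])
    x = draw_x0(n, rng)                        # i.i.d. draws from the initial law
    return float(np.dot(weights, x))

rng = np.random.default_rng(0)
# p = 1 gives alpha = 1; Cauchy initial data are then themselves alpha-stable
samples = [sample_v(3.0, 1.0, lambda m, r: r.standard_cauchy(m), rng)
           for _ in range(10000)]
```

Under these assumptions the leaf weights obey sum_j |beta_j|^alpha = 1, so an exactly alpha-stable symmetric initial law is reproduced at every time, consistent with the role the stable laws play as limits in what follows.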
it should be pointed out that if the initial datum is a symmetric probability distribution , then the distribution of is the same as the distribution of .it can be proved that the possible limits ( in distribution ) of , as , have characteristic functions which are solutions of this result has been communicated to us by filippo riccardi , who proved it by resorting to a suitable modification of the skorokhod representation used in the appendix of the present paper .it is interesting to note that also the stationary solutions of ( [ eq2 ] ) must satisfy ( [ eq6 ] ) .we did nt succeed in finding all the solutions of ( [ eq6 ] ) , but it is easy to check that is a solution of ( [ eq6 ] ) , for any .it is well known that ( [ eq7 ] ) is strictly connected with certain sums of random variables .indeed , it is a stable real valued characteristic function with characteristic exponent and , in view of a classical lvy s theorem , ( ) _ if are independent and identically distributed real valued random variables , with symmetric common distribution function , then in order that the random variable be the limit in distribution of the normed sum it is necessary and sufficient that has characteristic function ( [ eq7 ] ) for some ._ one could guess that ( ) may be used to get a direct proof of the fact that converges in distribution to a stable random variable with characteristic function ( [ eq7 ] ) .this way , one would obtain that these characteristic functions are all possible pointwise limits , as , of solutions of ( [ eq2 ] ) .in point of fact , direct application of results like ( ) is inadmissible since is a weighted sum of a random number of summands , affected by random weights which are not stochastically independent . in spite of this , by resorting to suitable forms of conditioning for , one can take advantage of classical propositions pertaining to the central limit theorem .in addition to the problem of determining the class of all possible limit distributions for , an obvious question which arises is that of singling out necessary and sufficient conditions on , in order that converges in distribution to some specific random variable . as to the classical setting mentioned in ( ) , it is worth recalling ( ) _ if are independent and identically distributed real valued random variables , with ( not necessarily symmetric ) common distribution function , then in order that converge in law to a random variable with characteristic function ( [ eq7 ] ) with some specific value for or , in other words , that belong to the domain of normal attraction of ( [ eq7 ] ) it is necessary and sufficient that satisfies as and as , i.e. _ for more information on stable laws and central limit theorem see , for example , chapter 2 of and chapter 6 of . to complete the description of certain facts that will be mentioned throughout the paper , it is worth enunciating ( ) _ if stands for the fourier stieltjes transform of a probability distribution function satisfying ( [ nda - f ] ) , then where is bounded and as . 
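The displayed formulas in the statements above are garbled in this copy. In standard notation, and as a hedged reconstruction rather than text recovered from the original, the objects being referred to are the symmetric stable characteristic function of exponent alpha (equation (eq7)), the tail condition defining its domain of normal attraction, and the corresponding small-argument behaviour of the initial characteristic function:

```latex
% Hedged reconstruction in standard notation (symmetric case); the
% constants lambda and c_0 are related, but that relation is omitted here.
\begin{align}
  \hat g_\alpha(\xi) &= \exp\!\bigl(-\lambda |\xi|^{\alpha}\bigr),
      \qquad 0 < \alpha < 2,\ \lambda > 0, \\
  1 - F_0(x) &\sim c_0\, x^{-\alpha} \ \ (x \to +\infty), \qquad
  F_0(x) \sim c_0\, |x|^{-\alpha} \ \ (x \to -\infty), \\
  \hat F_0(\xi) &= 1 - |\xi|^{\alpha}\bigl(\lambda + h(\xi)\bigr),
      \qquad h \ \text{bounded}, \quad h(\xi) \to 0 \ \text{as}\ \xi \to 0 .
\end{align}
```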
_ ( ) , which is a paraphrase of thorme 1.3 of , can be proved by mimicking the argument used for theorem 2.6.5 of .in the present paper our aims are : firstly , to find initial distribution functions ( or initial characteristic functions ) so that the respective solutions of ( [ eq2 ] ) may converge pointwise to ( [ eq7 ] ) .secondly , to determine the rate of convergence of the probability distribution function , corresponding to , to a stable distribution function with characteristic exponent , with respect both to specific weighted and to kolmogorov s distance .it is well known from the lvy continuity theorem that pointwise convergence of sequences of characteristic functions is equivalent to _ weak convergence _ of the corresponding distribution functions . in particular , in our present case , since the limiting distribution function is ( absolutely ) continuous , weak convergence is equivalent to uniform convergence , i.e. left hand side of ( [ eq9 ] ) is just the _ kolmogorov distance _ ( , in symbols ) between and . as to the above mentioned first aim , besides sufficient conditions for convergence reducing to the fact that belongs to the domain of normal attraction of ( [ eq7 ] ) a necessary condition for convergence is given . as far as rates of convergenceare concerned , results can be found in the paper of pulvirenti and toscani , with respect to a specific _ weighted _ , used to study convergence to equilibrium of boltzmann like equations starting from .see also . denoting this distance by , being some positive number, one has with reference to ( [ eq1 ] ) , after writing for a density of , theorem 6.2 in reads : ( ) _ let with such that is finite for some in .then holds true for every , with moreover , ( [ eq11 ] ) is still valid if and if finite for some in ] , while is verified for in ] ._ from ( [ eq22 ] ) with and , then , if belongs to n \to + \infty ] . __ we start from the definitions of and to obtain , via ( [ eq21_bis ] ) , which , in view of , yields where for these quantities one can write with , and combination of these inequalities with the definition of ( see proposition [ prop3 ] ) gives us using this inequality , we obtain it remains to study integrals like for . following the argument used in to prove lemma 7 , one can state the inequality with . to complete the proof of the main part of the proposition it is enough to use ( [ eq36bis ] ) to obtain a bound for the right - hand side of ( [ eq28 ] ) and , then , to replace this bound for the first sum in the right hand side of ( [ eq27_bis ] ) . as to the latter claim , recall that , and use the additional condition . [ prop6 ]let be in and let the additional hypothesis that is monotonic on be valid for some .then , moreover , if is such that for any , in and some constant , one gets for every in ] and , then , of ( [ eq16 ] ) and ( [ eq17]). the remaining theorems from [ thm5 ] to [ thm9 ] can be proved following the same line of reasoning , according to the scheme : resort to proposition [ prop4 ] and to ( [ eq16 ] ) for theorem [ thm5 ] . apply proposition [ prop5 ] and ( [ eq16])-([eq17 ] ) to prove theorems [ thm6 ] and [ thm7 ] .finally , use proposition [ prop6 ] and ( [ eq16])-([eq17 ] ) to prove theorems [ thm8 ] and [ thm9 ] .it remains to prove theorem [ thm1 ] .its former part is a straightforward consequence of theorem [ thm4 ] . as to the latter, we use the same argument as in the proof of theorem 1 in , based on . 
accordingly , for every we define where : stands for a conditional distribution of , given ; is the -fold convolution of ; indicates unit mass at ; ^c) ] ; indicates the set of all probability measures on the borel class on some metric space ; is a distinguished metrizable compactification of .these spaces are endowed with topologies specified in subsection 3.2 of , which make a separable compact metric space .now recall that , under the assumption of the latter part of theorem [ thm1 ] , must converge in distribution .next , from lemma 3 in , with slight changes , the sequence of the laws of the vectors contains a subsequence which is weakly convergent to a probability measure supported by . at this stage ,an application of the skorokhod representation theorem ( see , e.g. , , ) , combined both with the properties of the support of and with , entails the existence of random vectors defined on a suitable space , in such a way that and have the same law ( for every ) .moreover , where the convergence must be understood as pointwise convergence on and designates weak convergence of probability measures . from ( [ a3 ] ) andtheorem 16.24 of , there is a random lvy measure , symmetric about zero , such that holds pointwise on for every . to complete the proof, we assume that and show that this assumption contradicts ( [ a4 ] ) .indeed , the assumption implies that for any there is such that for every and , therefore , since ( [ a3 ] ) yields , then , which contradicts ( [ a4 ] ) in view of the arbitrariness of .kac , m. ( 1956 ) .foundations of kinetic theory . in _ proceedings of the third berkeley symposium on mathematical statistics and probability , 19541955 _ * 3 * 171197 .university of california press , berkeley and los angeles . | this paper deals with a one dimensional model for granular materials , which boils down to an inelastic version of the kac kinetic equation , with inelasticity parameter . in particular , the paper provides bounds for certain distances such as specific weighted and the kolmogorov distance between the solution of that equation and the limit . it is assumed that the even part of the initial datum ( which determines the asymptotic properties of the solution ) belongs to the domain of normal attraction of a symmetric stable distribution with characteristic exponent . with such initial data , it turns out that the limit exists and is just the aforementioned stable distribution . a necessary condition for the relaxation to equilibrium is also proved . some bounds are obtained without introducing any extra condition . sharper bounds , of an exponential type , are exhibited in the presence of additional assumptions concerning either the behaviour , near to the origin , of the initial characteristic function , or the behaviour , at infinity , of the initial probability distribution function . |
the objective of the work described here has been to develop a general purpose parallel lattice - boltzmann code ( lb ) , called _ludwig _ , capable of simulating the hydrodynamics of complex fluids in 3-d .such a simulation program should eventually be able to handle multicomponent fluids , amphiphilic systems , and flow in porous media as well as colloidal particles and polymers .in due course we would like to address a wide variety of these problems including detergency , binary fluids in porous media , mesophase formation in amphiphiles , colloidal suspensions , and liquid crystal flows .so far , however , we have restricted our attention to simple binary fluids , and it is this version of the code that will be described below in more detail . nonetheless , the generic elements related to the structure of the code are valid for any multicomponent fluid mixture , as defined through an appropriate free energy , expressed as a functional of fluid density and one or more composition variables ( scalar order parameters ) .we discuss in some detail also how to include solid objects , such as static and moving walls and/or freely suspended colloids , in contact with a binary fluid .more generally , the modular structure of _ ludwig _ facilitates its extension to many other of the above problems without extensive redesign . but note that , with several of these problems ( such as liquid crystal flows which require tensor order parameters ) , it is not yet clear how to proceed even at the serial level , and only first attempts have begun to appear in the literature .the lattice - boltzmann model ( lb ) simulates the boltzmann equation with linearized collisions on a lattice .both the changes in position and velocity are discretized .it can be shown that , at sufficiently large length and time scales , lb simulates the dynamics of nearly incompressible viscous flows .for the simplest case of a one - component fluid , it describes the evolution of a discrete set of particle densities on the sites ( or _ nodes _ ) of a lattice : the quantity is the density of particles with velocity resident at node at time .this particle density will , in unit time increment , be convected ( or _ propagate _ ) to a neighboring site . here is a lattice vector , or _ link _ vector , and the modelis characterized by a finite set of velocities .the quantity is the equilibrium distribution of , and is one of the key ingredients of the model .it characterizes the type of fluid that _ ludwig _ will simulate , and determines the equilibrium properties of such a fluid ( see section [ ssec : binaries ] below ) .the right hand side of equation [ eqn : lb ] describes a mixing of the different particle densities , or _ collision _ : the distribution relaxes towards at a rate determined by , the relaxation parameter .the relaxation parameter is related ( through ) to the viscosity of the fluid , and gives us control of its dynamics . 
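To make the two stages of equation [ eqn : lb ] concrete, the following is a minimal, self-contained sketch of a single-component BGK update on a one-dimensional three-velocity (D1Q3) lattice. It is an illustration only, not the _ludwig_ implementation (which works on the three-dimensional lattices described next); the second-order equilibrium used here is the standard form for a lattice speed of sound c_s^2 = 1/3.

```c
/* Minimal single-component LB sketch on a 1-D, 3-velocity (D1Q3) lattice.
 * Illustrates the two stages of the LB equation: BGK collision followed by
 * propagation.  This is an illustrative toy, not the ludwig code itself. */
#include <stdio.h>

#define NX   64            /* number of lattice nodes        */
#define NVEL 3             /* velocities c_i = {0, +1, -1}   */

static const int    cx[NVEL] = { 0, 1, -1 };
static const double w[NVEL]  = { 4.0/6.0, 1.0/6.0, 1.0/6.0 };

static double f[NX][NVEL], ftmp[NX][NVEL];

/* second-order equilibrium for lattice speed of sound c_s^2 = 1/3 */
static double feq(int i, double rho, double u)
{
    double cu = cx[i] * u;
    return w[i] * rho * (1.0 + 3.0*cu + 4.5*cu*cu - 1.5*u*u);
}

static void collide_and_propagate(double tau)
{
    for (int x = 0; x < NX; x++) {
        double rho = 0.0, mom = 0.0;
        for (int i = 0; i < NVEL; i++) { rho += f[x][i]; mom += f[x][i]*cx[i]; }
        double u = mom / rho;

        for (int i = 0; i < NVEL; i++) {
            /* collision: relax towards the equilibrium at rate 1/tau */
            double fpost = f[x][i] - (f[x][i] - feq(i, rho, u)) / tau;
            /* propagation: copy the post-collision density along c_i
             * (periodic boundaries) */
            int xdest = (x + cx[i] + NX) % NX;
            ftmp[xdest][i] = fpost;
        }
    }
    for (int x = 0; x < NX; x++)
        for (int i = 0; i < NVEL; i++) f[x][i] = ftmp[x][i];
}

int main(void)
{
    /* initialise a small density perturbation at rest */
    for (int x = 0; x < NX; x++)
        for (int i = 0; i < NVEL; i++)
            f[x][i] = w[i] * (1.0 + 0.01 * (x == NX/2));

    for (int t = 0; t < 1000; t++) collide_and_propagate(1.0);

    double rho = f[NX/2][0] + f[NX/2][1] + f[NX/2][2];
    printf("density at centre after 1000 steps: %f\n", rho);
    return 0;
}
```

Note that the collision loop touches only local data, while the propagation step is a pure memory copy to a neighbouring node; this separation is exactly what is exploited later when the two stages are optimized separately.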
to specify a particular model , besides the equilibrium properties given through , one has to choose the geometry of the lattice in which the density of particles move .such a geometry should specify both the arrangement of nodes and the set of allowed velocities .the only restrictions in such a choice lie on the fact that they should have sufficient symmetry to ensure that at the hydrodynamic level the behavior is isotropic and independent of the underlying lattice .the hydrodynamic quantities , such as the local density , , momentum , and stress , are given as moments of the densities of particles , namely , , and .the dynamics of lb , as expressed in equation [ eqn : lb ] , provides immediate insight into the implementation and underlying optimization issues .it is characterized by two basic dynamic stages : * the propagation stage ( left - hand side of equation [ eqn : lb ] ) , consisting of a set of nested loops performing memory - to - memory copies ; * the collision stage ( right hand side ) , which has a strong degree of spatial locality and relies on basic add / multiply operations : its implementation is straightforward and can be highly optimized .the lb model described so far can be extended to describe a binary mixture of fluids , of tunable miscibility , by adding a second distribution function , .( further distribution functions would allow still more complicated mixtures to be described . ) as in single - fluid lb , the relevant hydrodynamic variables related to the order parameter are also moments of the additional distribution function , namely the composition ( order parameter ) , and the flux .for each site ( including solid sites ) , the distributions and are stored in a structure element of type * site * : xxx = xxx = typedef struct\ { + float f[nvel ] , + g[nvel ] ; + } site ; where * nvel * is the number of velocity vectors used by the model . for example , for the cubic lattices described later on , where _ ludwig _ has been implemented so far , the number of velocity vectors has been 15 and 19 .figure [ fig : d3qx ] shows the sets of velocities for the two 3-d models developed .( to nearest neighbors ) , and eight with ( to next next nearest neighbors ) .the d3q19 model has nineteen velocities : one with speed zero ( a rest particle ) , six with ( to nearest neighbors ) , and 12 with ( to next nearest neighbors).,title="fig : " ] ( to nearest neighbors ) , and eight with ( to next next nearest neighbors ) .the d3q19 model has nineteen velocities : one with speed zero ( a rest particle ) , six with ( to nearest neighbors ) , and 12 with ( to next nearest neighbors).,title="fig : " ] we follow the procedure of swift et al . ( see also for a schematic description ) in which describes the density field , whilst describes the order parameter field , .both distribution functions have relaxational dynamics of the type of equation [ eqn : lb ] but are characterized by different relaxation parameters .the second relaxation parameter , associated with the order parameter field , will determine its diffusivity . 
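As a small illustration of how the hydrodynamic fields are obtained from the stored distributions, the sketch below computes density, fluid velocity and order parameter as moments of the *f* and *g* arrays held in one *site* structure. The D3Q15 velocity set is written out explicitly so that the example compiles on its own; the weights, equilibria and actual table layout used by _ludwig_ are not reproduced here.

```c
/* Hydrodynamic fields as moments of the two per-site distributions. */
#include <stdio.h>

#define NVEL 15

typedef struct { float f[NVEL]; float g[NVEL]; } Site;

/* rest vector, 6 nearest neighbours, 8 (+-1,+-1,+-1) next-next-nearest */
static const int cv[NVEL][3] = {
    { 0, 0, 0},
    { 1, 0, 0}, {-1, 0, 0}, { 0, 1, 0}, { 0,-1, 0}, { 0, 0, 1}, { 0, 0,-1},
    { 1, 1, 1}, { 1, 1,-1}, { 1,-1, 1}, { 1,-1,-1},
    {-1, 1, 1}, {-1, 1,-1}, {-1,-1, 1}, {-1,-1,-1}
};

static void site_moments(const Site *s, double *rho, double u[3], double *phi)
{
    double mom[3] = {0.0, 0.0, 0.0};
    *rho = 0.0;  *phi = 0.0;

    for (int i = 0; i < NVEL; i++) {
        *rho += s->f[i];                      /* density rho = sum_i f_i      */
        *phi += s->g[i];                      /* order parameter = sum_i g_i  */
        for (int d = 0; d < 3; d++)
            mom[d] += s->f[i] * cv[i][d];     /* momentum = sum_i f_i c_i     */
    }
    for (int d = 0; d < 3; d++) u[d] = mom[d] / *rho;   /* fluid velocity */
}

int main(void)
{
    Site s;
    for (int i = 0; i < NVEL; i++) { s.f[i] = 1.0f / NVEL; s.g[i] = 0.0f; }

    double rho, u[3], phi;
    site_moments(&s, &rho, u, &phi);
    printf("rho = %g  u = (%g %g %g)  phi = %g\n", rho, u[0], u[1], u[2], phi);
    return 0;
}
```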
by studying appropriate moments of the distribution functions, one can construct a relaxational dynamics that will describe , in the continuum limit , the dynamics of a near - incompressible , isothermal binary fluid with an arbitrary local free energy functional ] where obeys equation [ eqn : fe ] and the integral is over the solid surface .the two solid - fluid interfacial tensions are found by minimizing this expression near a flat solid - fluid interface to find the equilibrium free energy , subtracting the contribution \ , d\mathbf{r}$ ] of the same volume of bulk fluid , and dividing by the interfacial area . the functional minimization also gives the composition profile near the wall , and the boundary condition satisfied at the solid surface , which is where is normal to the wall . in general , is a function of the local order parameter .the classical work on wetting has shown that a functional relation of the form is enough to reproduce the various different wetting scenarios . by tuning the parameters and , we modify the properties of the surface in a thermodynamically controlled manner , so that the fluid - solid interfacial tensions can be tuned at will . since we are dealing with a symmetric mixture ,if the two phases will have neutral wetting and show a local variation in composition near the wall ( ) of the same magnitude .nonzero allows then for an asymmetry in the surface value of the order parameter for the two coexisting phases and a contact angle different from degrees .the main difficulty to implement the general boundary condition , equation [ eqn : sfe ] , is that it depends on the value of the order parameter at the surface , , which is itself a dynamical variable .moreover , due to bbl , the solid surface lies between the sites thus making the calculation of and by finite difference from neighboring sites using equations [ eq : grad_def ] and [ eq : lap_def ] impossible . to circumvent this, we use a predictor - corrector scheme to estimate the gradient at the solid wall as follows ( see figure [ fig : wetting ] ) : 1 . determine which sites are next to a wall ( boundary sites ) , and hence which links cross the wall ( i.e. dry links ) ; 2 .estimate using finite differences on all wet links ; 3 . from this estimate of ,extrapolate to halfway along the dry links , and calculate ; using on the dry links , calculate on these links ; 4 .calculate and for the boundary sites using _ all _ the gradients estimated on the links .this scheme gives good quantitative results of the wetting angles in accordance with thermodynamic predictions .results from case studies , both for a droplet and for planar interfaces are discussed in section [ sec : results ] .production runs on the phase separation kinetics of binary fluid mixtures have been carried out on the cray t3d and the hitachi sr-2201 at epcc , and on the cray t3e-1200 at csar .in addition to these distributed memory systems , we also investigated the performance of the code on shared - memory platforms such as the sun hpc-3500 at epcc .distributed on 8 pes for 12,000 steps .the shear has been applied through fixed walls at constant velocity , thus requiring a total of bbls per step .the order parameter and velocity fields were saved 60 times and the full configuration only once , thus representing a combined output of 17 gb for the simulation . the darkest color at the top of the bar chart represents the remaining stages of the simulation ( i.e.initialization , bbl , measurements , and i / o ) . 
] the profile information reproduced in figure [ fig : benchmark ] provides us with an interesting insight on the critical sections of the code .firstly , one notices that over 90% of the simulation time is spent in the collision and propagation stages which are both intrinsically serial and well load - balanced for most problems .the raw performance ( excluding all i / o and measurements ) varies from to gridpoint - updates per second for the cray t3e and sun hpc-3500 respectively .details of the timing information for the various stages of the simulation of spinodal decomposition under shear is given in figure [ fig : benchmark ] .note that the halo swap and bbl only account for 4 - 7% .a comparison of the profiles obtained on the sun hpc-3500 and cray t3e-1200 also shows up that the critical parts of the code are highly system dependent .as expected , the increased clock speed of the t3e-1200 ( 600mhz compared to only 400mhz on the hpc-3500 ) benefits the collision stage which is a highly - localized algorithm with basic arithmetic operators ( add / multiply ) .this routine has been highly optimized and makes a good use of the t3e memory hierarchy . on the other hand ,the memory - to - memory copies performed in the propagation stage do not benefit from this increase in clock speed as much .indeed , the hpc-3500 significantly outperforms its rival by over 23% even though the algorithm for the propagation had been tuned for the t3e by rearranging loops to make an efficient use of its streams ( see for further information about stream optimization ) .particular attention should also be paid to finding the optimal ordering for the velocity set .it is important to order the velocity set such that they correspond , as much as possible , to sequential positions in memory for the distribution functions .the first production platform for this package was the cray t3d which , due to its lack of second - level cache , was particularly sensitive to data locality .the arrangement reproduced in table [ tab : velset ] proved to be the most effective to preserve data locality with a performance increase of over 20% on the t3d ( compared to an unoptimized sequence ) for the combined collision and propagation stages .note that some orderings can speed - up one of these stages alone and be detrimental to the second one .the gain in performance resulting from data locality is not as significant on systems with second - level cache though .the best ordering of the velocity vectors is therefore often system - specific ..optimal velocity set for the cray t3d [ cols="^,^,^ " , ] as shown in figure [ fig : scaleup ] , _ ludwig _ also demonstrates near - linear scaling from 16 up to 512 processors . however , the overall cost of the i / o can become a major bottleneck for the unwary ( e.g. a system will generate in excess of 31 gb per configuration dump ) .the i / o has been optimized by performing parallel i / o . the pool of pes is split into groups of processors , thus providing concurrent i / o streams ( typically , ) .each group has a root pe which will perform all i / o operations .the remaining pes send their data in turn to the i / o pes which pack these data and write them to disk .this approach usually offers high bandwidth without having to use platform specific calls such as disk striping . 
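The grouped output strategy can be sketched as follows. This is an illustrative reconstruction rather than _ludwig_'s actual I/O routines: the communicator handling, file naming and data layout are assumptions made purely for the example, and only standard MPI calls are used.

```c
/* Sketch of grouped parallel output: the pool of PEs is split into NIO
 * groups; the root of each group collects the local data of its members
 * and writes one file per group.  Names and file layout are illustrative
 * only, not those used by ludwig. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NIO    8          /* number of concurrent output streams */
#define NLOCAL 1024       /* data owned by each PE (floats)      */

int main(int argc, char **argv)
{
    int rank, size;
    float local[NLOCAL];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < NLOCAL; i++) local[i] = (float) rank;

    /* split the PEs into NIO groups; rank 0 of each group does the I/O */
    int colour = rank % NIO;
    MPI_Comm io_comm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, rank, &io_comm);

    int io_rank, io_size;
    MPI_Comm_rank(io_comm, &io_rank);
    MPI_Comm_size(io_comm, &io_size);

    if (io_rank == 0) {
        char name[64];
        sprintf(name, "output.%03d.dat", colour);
        FILE *fp = fopen(name, "wb");
        fwrite(local, sizeof(float), NLOCAL, fp);        /* own data first */

        float *buf = malloc(NLOCAL * sizeof(float));
        for (int src = 1; src < io_size; src++) {        /* then each member in turn */
            MPI_Recv(buf, NLOCAL, MPI_FLOAT, src, 0, io_comm, MPI_STATUS_IGNORE);
            fwrite(buf, sizeof(float), NLOCAL, fp);
        }
        free(buf);
        fclose(fp);
    } else {
        MPI_Send(local, NLOCAL, MPI_FLOAT, 0, 0, io_comm);
    }

    MPI_Comm_free(&io_comm);
    MPI_Finalize();
    return 0;
}
```

Because each group writes to its own file through its own root PE, the number of concurrent streams (here NIO) can be matched to the filesystem without resorting to platform-specific facilities, which is the point made above.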
.details of the simulation are similar to that given in figure [ fig : benchmark ] with the exception of the system size which has been increased to .the only quantity measured during this benchmark is the order parameter field , which was measured and saved to disk every 50 iterations .speed - up data have been included for simulations with and without the i / o ( note that in both cases , the time taken for dumping full re - start configurations was not included ) .the i / o consists in 240 data dumps of 191 mb written to disks through eight parallel output streams . ]note that mpi2-io had initially to be discounted on the ground of portability and performance .we conclude this discussion by deploring the lack of a full support for mpi-2 single - sided communications on some platforms .indeed , this functionality proves invaluable for the implementation of the lees - edwards boundary conditions and moving particles ._ ludwig _ has already been used to study a number of problems for binary fluid mixtures of current interest : 1 .establishment of the role of inertia in late - stage coarsening ; 2 .study of the effect of an applied shear flow on the coarsening process ; 3 .persistence exponents in a symmetric binary fluid mixture .these results have been published elsewhere and will not be discussed further here . a number of validation exercises relating to binary mixtures are also described in . herewe focus on discussing new validation results obtained using the novel predictor - corrector scheme for the thermodynamically consistent simulation of wetting phenomena , as presented earlier .we present results for two types of tests for the properties of binary fluids near a solid wall .first , we verify that the modified bounce - back procedure , equation [ eqn : mod_bbl ] , gives the expected balance both for the momentum and for the order parameter .second , we check the numerical accuracy of the modified boundary condition that accounts for the wetting properties of the wall , equation [ eqn : sfe ] , as implemented through the predictor - corrector step . to test the validity of the modified bounce - back rule for the order parameter field , we have looked at the motion of a pair of planar interfaces perpendicular to two parallel planar solid walls when the whole system is moving at a constant velocity parallel to the walls .the initial condition corresponds to a stripe of one equilibrium fluid ( ) , oriented perpendicular to the walls , surrounded by a region of the other fluid ( ) , with periodic boundary conditions . due to galilean invariance , the profile should remain undistorted and move at the same velocity as it is moving with initially , but with lb this is not guaranteed _ a priori _ and has to be validated .we have considered both the case of neutral wetting and the asymmetric case where is nonzero .( open circles ) , and also for a bounce - back of the order parameter field in which the advection induced by the moving walls is neglected ( filled diamonds ) . ] for neutral surfaces , we show in figure [ fig : interface1 ] the leading interfacial profiles at different times , both for the bounce - back rules described in equation [ eqn : bbl ] , and for an improperly formulated bounce - back of the order parameter that does not take into account the motion of the wall .( the latter is equivalent to assuming only in the equations describing the bounce - back of the order parameter distribution function . 
)as can be seen , although the profile is advected due to the existence of a net momentum at the wall , if the bounce - back is not done in the frame of reference of the moving wall ( leading to equation [ eqn : mod_bbl ] ) the fluid - fluid interface has a spurious curvature and it moves more slowly than it should . from the rectilinear shape of both the leading ( shown in figure [ fig : interface1 ] ) and trailing interfaces of the rectangular strip , we have found that galilean invariance is in fact well satisfied with equation [ eqn : mod_bbl ] .figure [ fig : interface1 ] corresponds to a situation of high viscosity and low diffusion ( ) but the same features have been observed for a number of different physical parameters .the magnitudes of the errors made by using the inappropriate bounce - back , in this geometry , are found to decrease upon increasing the mobility , probably because a large mobility allows a faster relaxation to the imposed velocity profile in the interfacial region ( especially near the contact with the walls ) . .the walls wet partially , and therefore the stripe relaxes from its initial configuration to a bent interface .we show the interfaces for the bounce - back described in equation [ eqn : bbl ] ( open circles ) , and also for a bounce - back of the order parameter field in which the advection induced by the moving walls is neglected ( filled diamonds ) . in open squareswe show the trailing interface at time 4000 to show the magnitude of the deviations with respect to galilean invariance ( see text ) . ] in figure [ fig : interface2 ] we show the same configuration as described in the previous paragraph , but for the case in which the solid surface partially wets one of the two phases , so that is nonzero and rather than 90 degrees . in this casethe profile should relax from the initial perpendicular stripe to a curved interface .again , the use of inappropriate bounce - back for the order parameter leads to a slower motion of the interface , and a significant distortion away from the equilibrium interfacial shape . in order to test galilean invariance here , for the final interface ( timesteps )we have compared the leading and trailing edge of the stripe .there is a slight deviation in this case , implying that the asymmetry entering via couples through to the overall fluid motion relative to the underlying lattice , which ( by galilean invariance ) it should not .however , the resulting violation is very small , and the interfacial deviations do not grow beyond one lattice spacing . we have also verified that if the velocity of the walls is perpendicular to their own plane , then an order parameter profile , initially in equilibrium , remains stationary .this confirms that the chosen boundary conditions can account for generically moving interfaces for the case of a binary mixture .finally , for stationary walls , we have computed the contact angles for the simplest asymmetric case in which ( see equation [ eqn : sfe ] ) . 
in this situationthe order parameter at the wall will deviate by the same magnitude , but with opposite sign , in the two bulk phases .for this choice ( with ) the contact angle , , is predicted to depend on the parameter according to \ ] ] we have considered a geometry in which the two solid walls have the same wetting properties .we start , as in the previous case , with an initial stripe perpendicular to the walls , defining two regions with opposite equilibrium values for the order parameter .the equilibrium profile for the interface then corresponds to a cylindrical cap . by fitting the cylindrical cap it is possible to get a numerical value for the contact angle . in figure[ fig : angle ] we show the measured contact angles as a function of for different interfacial widths , and compare these with the above theoretical prediction .( note that , by maintaining fixed have kept constant the values of the equilibrium order parameters in the two coexisting phases . ) as can be seen , the agreement between the theoretical prediction and the measured contact angle is quantitative .is the liquid / liquid interfacial width expressed in lattice units . ]the parameters that characterize the binary mixture have been chosen to ensure fairly wide fluid - fluid interfaces .( in fact , the smallest interfacial width , , is at least twice as large as that used previously in production runs for binary fluid demixing . ) for narrower interfaces than this , the contact angles can differ significantly from the predictions due to anisotropies induced by the lattice , whose effects were studied ( for fluid - fluid interfaces only ) in .this effect should be more relevant for small contact angles .indeed , when a narrow fluid - fluid interface has a glancing incidence with the solid wall , the discrete separation of lattice planes will lead to significant errors in the estimation of the order parameter gradients ; the direction of these not only determines the surface normal of the fluid - fluid interface , and hence the contact angle , but is crucial to an accurate estimate of the free energies near the contact line . however , except perhaps for small contact angles , there is not much accuracy gained from choosing larger than .because of the finite width of the interfaces , one has to be careful to measure the contact angle by extrapolation from regions of fluid - fluid interface that are more than about from the wall . to check that we did this correctly , we have also numerically and analytically computed the three interfacial tensions directly by the method outlined in section [ ssec : wetting ] . 
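For reference, the cross-check mentioned here amounts to evaluating Young's law, gamma_12 cos(theta) = gamma_s2 - gamma_s1, from the three measured tensions; a trivial helper of this kind (with illustrative numbers only) might look as follows.

```c
/* Contact angle from the three interfacial tensions via Young's law,
 *   gamma_12 cos(theta) = gamma_s2 - gamma_s1 .
 * Used only as a cross-check of the angles fitted from the droplet shape. */
#include <math.h>
#include <stdio.h>

double young_angle(double gamma_s1, double gamma_s2, double gamma_12)
{
    double c = (gamma_s2 - gamma_s1) / gamma_12;
    if (c >  1.0) c =  1.0;           /* guard against round-off */
    if (c < -1.0) c = -1.0;
    return acos(c) * (180.0 / 3.141592653589793);   /* degrees */
}

int main(void)
{
    /* illustrative numbers only */
    printf("theta = %.1f degrees\n", young_angle(0.010, 0.015, 0.020));
    return 0;
}
```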
from the values of the surface tensions obtained ,it is the possible to get values for the contact angles , through young s law , equation [ eqn : young ] .as expected , the values agree with the theoretical predictions and the contact angles measured from the profiles ( figure [ fig : angle ] ) .we have described a versatile parallel lattice boltzmann code for the simulation of complex fluids .the objective has been to develop a piece of code that allows to study the hydrodynamics of a broad class of multicomponent and complex fluids , focusing initially on binary fluid mixtures with or without solid surfaces present .it combines a parallelization strategy , making it suitable to exploit the capabilities of supercomputers , with a modular structure , which allows its use without the need to know its computational details , and with the possibility of focusing on the physical analysis of the results .this strategy has led to a code that is in principle adaptable to several different uses within the academic collaboration involved .we have discussed how to introduce generic fluid - solid boundary conditions , and discussed which structures were developed to combine the requirements of specific physical features with the generic structure of the code .the performance of the code in different computers shows its portability , and it scales up efficiently on parallel computers .we have implemented generic boundary conditions for a binary mixture in contact with moving solid interfaces .we have shown how one recovers appropriate behavior of the momentum and the fluid order parameter so long as the bounce - back rule , in the moving frame of the wall , is performed with the distribution function that characterizes the order parameter as well as that for momentum ( equation [ eqn : mod_bbl ] ) . 
a mesoscopic boundary condition that accounts for the wetting properties of a binary mixture near a solid surface has been described .it has been shown how to deal appropriately with the gradients of the order parameter at the wall , and with the role of the finite interfacial width when analyzing the results .the values obtained for the contact angle agree with the predictions of the model simulated , showing the absence of lattice artifacts , at least for contact angles larger than about 20 degrees .these results are , however , for planar solid interfaces oriented along a lattice direction .we have not checked in detail the dependence of the contact angle on the orientation of the solid surface , and it may require further work on the discretization of order parameter derivatives before this isotropy can be relied upon .analogously to the problem of small wetting angles mentioned above , the problem may prove most acute for low - angle inclination of the solid surface , where the naive discretization of the solid phase leads to a series of well - separated steps in the wall position .current and planned work with _ludwig _ includes the hydrodynamic simulation of multicomponent fluid flow in a porous networks with controlled wetting ; implementation of lees - edwards ( sliding periodic ) boundary conditions ; large - scale simulations of binary fluids under shear , and the improvement of the gradients to make the thermodynamics of this model more fully independent of the underlying symmetries of the lattice .longer term plans include studying colloid hydrodynamics and extending _ ludwig _ to study amphiphilic systems under shear ( see for an example of this studied by dpd ) .the authors would like to acknowledge michael cates , simon jury , alexander wagner , patrick warren , and julia yeomans for valuable discussions .they thank michael cates for assistance with the manuscript .this work has been funded in part under the maxwell institute s project on fluid flow in soft and porous matter and the epsrc e7 grand challenge and gr / m56234 . c.p .lowe , d. frenkel and a.j .masters , j. chem.phys . *103 * , 1582 ( 1995 ) .hagen , d. frenkel and c.p .lowe , physica a * 272 * , 376 ( 1999 ) . j. rowlinson and b. widom , _ molecular theory of capilarity _ , clarendon press , oxford 1982 . | this paper describes _ ludwig _ , a versatile code for the simulation of lattice - boltzmann ( lb ) models in 3-d on cubic lattices . in fact _ ludwig _ is not a single code , but a set of codes that share certain common routines , such as i / o and communications . if _ ludwig _ is used as intended , a variety of complex fluid models with different equilibrium free energies are simple to code , so that the user may concentrate on the physics of the problem , rather than on parallel computing issues . thus far , _ ludwig _ s main application has been to symmetric binary fluid mixtures . we first explain the philosophy and structure of _ ludwig _ which is argued to be a very effective way of developing large codes for academic consortia . next we elaborate on some parallel implementation issues such as parallel i / o , and the use of mpi to achieve full portability and good efficiency on both mpp and smp systems . finally , we describe how to implement generic solid boundaries , and look in detail at the particular case of a symmetric binary fluid mixture near a solid wall . 
we present a novel scheme for the thermodynamically consistent simulation of wetting phenomena , in the presence of static and moving solid boundaries , and check its performance . lattice - boltzmann . wetting . computer simulations . parallel computing . binary fluid mixtures . |
luc moreau is a postdoctoral fellow of the fund for scientific research - flanders . work done while a recipient of an honorary fellowship of the belgian american educational foundation , while visiting the princeton university mechanical and aerospace engineering department .this paper presents research results of the belgian programme on inter - university poles of attraction , initiated by the belgian state , prime minister s office for science , technology and culture .the scientific responsibility rests with its authors . | some biological systems operate at the critical point between stability and instability and this requires a fine - tuning of parameters . we bring together two examples from the literature that illustrate this : neural integration in the nervous system and hair cell oscillations in the auditory system . in both examples the question arises as to how the required fine - tuning may be achieved and maintained in a robust and reliable way . we study this question using tools from nonlinear and adaptive control theory . we illustrate our approach on a simple model which captures some of the essential features of neural integration . as a result , we propose a large class of feedback adaptation rules that may be responsible for the experimentally observed robustness of neural integration . we mention extensions of our approach to the case of hair cell oscillations in the ear . persistent neural activity is prevalent throughout the nervous system . numerous experiments have demonstrated that persistent neural activity is correlated with short - term memory . a prominent example concerns the oculomotor system see for a review and experimental facts . the brain moves the eyes with quick saccadic movements . between saccades , it keeps the eyes still by generating a continuous and constant contraction of the eye muscles ; thus requiring a constant level of neural activity in the motor neurons controlling the eye muscles . this constant neural activity level serves as a short - term memory for the desired eye position . during a saccade , a brief burst of neural activity in premotor command neurons induces a persistent change in the neural activity of the motor neurons , via a mechanism equivalent to integration in the sense of calculus . neural activity of an individual neuron , however , has a natural tendency to decay with a relaxation time of the order of milliseconds . therefore the question arises as to how a transient stimulus can cause persistent changes in neural activity . according to a long - standing hypothesis , persistent neural activity is maintained by synaptic feedback loops . positive feedback can oppose the tendency of a pattern of neural activity to decay . if the feedback is weak , then the natural tendency to decay dominates and neural activity decays . as the feedback strength is increased , the neural dynamics undergo a bifurcation and become unstable . when the feedback is tuned to exactly balance the decay , then neural activity neither increases nor decreases but persists without change . this , however , requires a fine - tuning of the synaptic feedback strength and the question arises as to how a biological system can achieve and maintain this fine - tuning . some gradient descent and function approximation algorithms performing this fine - tuning have been proposed and a feedback learning mechanism based on differential anti - hebbian synaptic plasticity has been studied in . 
nevertheless , it is still unclear how the required fine - tuning is physiologically feasible . for this reason , a different model for neural integration based upon bistability has recently been proposed in . in the present paper , we do not follow the line of research based upon bistability . instead , we pursue the hypothesis of precisely tuned synaptic feedback . the present paper proposes an adaptation mechanism that may be responsible for the fine - tuning of neural integrators and that may explain the experimentally observed robustness of neural integrators with respect to perturbations . before we present this adaptation mechanism in detail , we first discuss a similar phenomenon in the auditory system . in order to detect the sounds of the outside world , hair cells in the cochlea operate as nanosensors which transform acoustic stimuli into electric signals . in these hair cells are described as active systems capable of generating spontaneous oscillations . ions such as are believed to contribute to the hair cell s tendency to self - oscillate . for low concentrations of the ions , damping forces dominate and the hair cell oscillations are damped . as the concentration increases the system undergoes a hopf bifurcation , the dynamics become unstable , and the hair cells exhibit spontaneous oscillations . in the hair cells are postulated to operate near the critical point , where the activity of the ions exactly compensates for the damping effects . as before , this requires a fine - tuning of parameters ( the ion concentrations ) and again the question arises as to how this fine - tuning can be achieved and maintained . in a feedback mechanism has been proposed which could be responsible for maintaining this fine - tuning . it thus seems that operating in the vicinity of a bifurcation is a recurrent theme in biology . and the question as to how proximity to the bifurcation point may be achieved and maintained in a noisy environment may be of considerable , general interest . we view the two presented examples as special instances of the following general problem . consider a forced dynamical system , described by a differential equation . the right - hand side of this equation depends on a parameter and the unforced dynamics are assumed to exhibit a bifurcation when equals a critical value . the problem consists of finding a feedback adaptation rule for the parameter which guarantees proximity to the bifurcation point ; that is , which steers toward its critical value . this adaptation law may depend on and but should be independent of , since this critical value is not known precisely . this abstract formulation captures common features of both biological examples and suggests some unexpected links with the literature . questions very similar to the present one have been studied extensively in the literature on adaptive control and stabilization ; and the general problem is closely related to extremum seeking , and to instability detection , where an operating parameter is adapted on - line in order to experimentally locate bifurcations . although the above general formulation is convenient , there is little hope that a complete and satisfactory theory can be developed that applies to all possible instances of the problem . simplifying assumptions make it more tractable . in this letter , we study in detail what is probably the most simple but nontrivial instance of the general problem . 
we consider the one - dimensional system which captures some of the essential features of neural integration and is in fact closely related to the autapse model from . with this interpretation , is a strictly positive variable representing neural activity in the integrator network and represents the signal generated by the premotor command neurons . the term corresponds to the natural decay of neural activity and represents a positive , synaptic feedback loop . of course , when studying neural integration , questions can be investigated at varying levels of detail . it is clear that a simple model as ( [ e : ni ] ) has several limitations . because of its one - dimensional nature , the present model is , for example , unable of reproducing the distributed nature of persistent activity patterns observed in the brain . nevertheless , eq . ( [ e : ni ] ) captures a key feature of neural integration : when the feedback is tuned to exactly balance the decay , eq . ( [ e : ni ] ) behaves as an integrator and produces persistent neural activity . eq . ( [ e : ni ] ) is therefore a valuable model when studying fine - tuning of neural integrator networks . we are interested in the fine - tuning of eq . ( [ e : ni ] ) and study this question using tools from nonlinear and adaptive control theory . first , we ignore the presence of the input and consider the simpler equation we present a large class of feedback adaptation laws for ( [ e : pa ] ) which steer to its critical value ; thus enabling the automatic self - tuning of parameters and the spontaneous generation of persistent neural activity . we consider adaptation laws could come from synaptic plasticity . in particular , the term might be related to types of synaptic plasticity that depend on the temporal ordering of presynaptic and postsynaptic spiking , as in . ] of the form we show that , under three very mild conditions , this adaptation rule guarantees convergence to the bifurcation point for ( [ e : pa ] ) . the first condition requires that is a strictly increasing function . this means that the term in ( [ e : adaptation ] ) acts as a negative feedback . as a consequence , if the neural activity were constant in ( [ e : adaptation ] ) , then the synaptic feedback gain would naturally relax to a rest value depending on via the equation . the second condition states that there exists such that . this condition implies that , if the neural activity would be constant and equal to in ( [ e : adaptation ] ) , then the synaptic feedback gain would naturally relax to its critical , desired value . of course there is no guarantee that the neural activity would be equal to , or even converge to , this special value . instead , the level of neural activity is governed by eq . ( [ e : pa ] ) . therefore , in order for the adaptation law ( [ e : adaptation ] ) to work , we need to impose a last condition , that is a decreasing function . this means that the level of neural activity negatively regulates the synaptic feedback strength . we now show that , under these three conditions , the feedback adaptation law ( [ e : adaptation ] ) indeed tunes the synaptic feedback gain to exactly balance the natural decay rate . we begin with noticing that the combined system of equations ( [ e : pa])([e : adaptation ] ) has a unique rest point . this equilibrium is determined by setting the right - hand sides of ( [ e : pa])([e : adaptation ] ) equal to zero , yielding and . 
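Since the model and adaptation equations themselves are not reproduced in this excerpt, the following sketch simulates one concrete system satisfying the three conditions above: the decay rate is normalised to one (so the critical gain is alpha = 1), the gain relaxes linearly (a strictly increasing f), and the activity feeds back through a decreasing logarithmic term that equals the critical gain at a target activity x*. These functional forms are illustrative assumptions, not the forms used in the original work.

```c
/* Toy simulation of an adaptive "neural integrator"
 *     dx/dt     = (alpha - 1) x                 (decay rate normalised to 1,
 *                                                so the critical gain is 1)
 *     dalpha/dt = eps * ( h(x) - alpha ),   h(x) = 1 - k*log(x/xstar)
 * Illustrative forms only: -alpha is a strictly increasing negative feedback,
 * h is decreasing in the activity x, and h(xstar) equals the critical gain. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double xstar = 10.0;    /* desired rest activity          */
    const double k     = 1.0;     /* strength of activity feedback  */
    const double eps   = 0.05;    /* adaptation is slow             */
    const double dt    = 0.01;

    double x = 3.0;               /* start well away from xstar ... */
    double alpha = 0.8;           /* ... with a mistuned gain       */

    for (long n = 0; n <= 400000; n++) {
        double h  = 1.0 - k * log(x / xstar);
        double dx = (alpha - 1.0) * x;
        double da = eps * (h - alpha);

        x     += dt * dx;         /* forward Euler */
        alpha += dt * da;

        if (n % 50000 == 0)
            printf("t = %7.1f   x = %8.4f   alpha = %.6f\n", n*dt, x, alpha);
    }
    /* alpha approaches the critical value 1 and x approaches xstar:
     * the integrator self-tunes to the bifurcation point. */
    return 0;
}
```

Running it, the gain settles at the critical value 1 and the activity at x*, with the damped mass-spring-damper behaviour described in the next paragraph.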
although the precise value of is unknown , if we are able to prove that all trajectories of ( [ e : pa])([e : adaptation ] ) converge to this ( unknown ) fixed point , then it follows that indeed converges to its desired , critical value . in order to prove this , we introduce a coordinate transformation and . this transforms ( [ e : pa])([e : adaptation ] ) into and . in these new coordinates , the dynamics take the form of a nonlinear mass - spring - damper system ( with unit mass , nonlinear spring characteristic and nonlinear damping function ) . it follows from physical energy considerations that this system exhibits damped oscillations . this shows that all trajectories of ( [ e : pa])([e : adaptation ] ) indeed converge to the unique fixed point , where . the above coordinate transformation reveals a subtle relationship between self - tuning of bifurcations and the internal model principle ( `` imp '' ) from robust control theory ( see for a discussion of the imp from a systems biology perspective ) . this relation is made explicit by the equation , which represents an integrator and corresponds to integral action studied in robust control theory . one regards the constant as an unknown perturbation acting on the system . the imp implies that , in order to track this constant perturbation , the system dynamics should contain integral action . the integral action is generated by the biological system itself , and not by the feedback adaptation law . we have so far ignored the presence of the signal . we showed that the adaptation law ( [ e : adaptation ] ) tunes the synaptic feedback gain to exactly compensate for the natural decay rate , resulting in the spontaneous generation of persistent neural activity . at these equilibrium conditions , the action potential firing rate equals , which is related to by . in the next paragraphs , we take into account the effect of the input . in this case , the value will play the role of a parameter that influences the accuracy with which the feedback adaptation law guarantees proximity to the bifurcation point . the signal will in general result in a time - varying action potential firing rate . the mechanism with which this happens , is determined by the neural integrator equation ( [ e : ni ] ) and the adaptation law ( [ e : adaptation ] ) . for the purpose of analysis , we make two simplifying assumptions , both of which seem to be natural and physically relevant for neural integration . first , we assume that , over any sufficiently large time interval ] is approximately independent of . in more mathematical terms , we assume the existence of a function such that for every test function , the time average converges to as , uniformly with respect to . secondly , we assume that the adaptation law acts on a much slower time scale than the time variations in . under these assumptions , the effect of the action potential firing rate on the adaptation law ( [ e : adaptation ] ) may be approximated by the average effect . it is now clear when the adaptation law guarantees proximity to the bifurcation point : if the compatibility condition is satisfied , then time scale separation arguments suggest that will converge approximately to and the neural integrator will approximately behave as a perfect integrator . the compatibility condition may by interpreted as follows . when the premotor command signal has zero time - average and the adaptation law acts on a slow time scale , then eq . 
( [ e : ni ] ) behaves as a good integrator and the firing rate equals the time - integral of plus an integration constant . the compatibility condition ensures that this integration constant is compatible with the desired range for the firing rate . we illustrate this result on a particular example representative for saccadic eye movements . we consider the case of periodic saccadic eye movements asking for an action potential firing rate in the motor neurons alternating between and every second . at each saccade , a brief burst of neural activity in premotor command neurons changes the actual firing rate . we assume that this change is such that immediately after each saccade , the actual firing rate equals the desired firing rate . between saccades , we assume that no input is applied . at its desired level between saccades , which is consistent with experimental observations . ] if the neural integrator is perfectly tuned , then the actual firing rate will remain constant between saccades and equal to the desired firing rate ( eyes are fixed ) . if the neural integrator is not perfectly tuned , then the actual firing rate will deviate from the desired firing rate ( eyes drift ) until a new saccade occurs which brings the actual firing rate to its new desired value . fig . [ f2 ] shows the results of a simulation where the adaptation law satisfies the compatibility condition of the previous paragraph . in the beginning of the simulation , we have mis - tuned the neural integrator . clearly , after a short transient , the adaptation law achieves excellent tuning and the drift between two successive saccades becomes negligible . we have thus shown that an adaptation law can tune a neural integrator with great accuracy to its bifurcation point . in order to achieve perfect tuning , however , the adaptation law itself needs to satisfy a compatibility condition . it seems that we have merely moved the problem of fine - tuning from the neural integrator to the adaptation law . the crucial observation and one of the main contributions of the present paper , however , is that this results in a significant decrease in sensitivity . _ the adaptation law is robust with respect to perturbations in its parameters . _ in order to illustrate this significant increase in robustness , let us first summarize the well - known sensitivity properties of neural integration . experiments suggest that the actual time constant obtained in a tuned neural integrator circuit is typically greater than ; that is , . this requires for the fine - tuning of a relative precision ranging from to , depending on whether the intrinsic time constant equals or ( typical values suggested in the literature ) . the required precision for should be contrasted with the required precision for the parameters of the adaptation law proposed in the present paper . the simulations of fig . [ f3 ] show that , in order to have as observed in experiments , the parameters of the adaptation law need to be tuned with a precision of , independent of the intrinsic time constant . comparing this with the originally required precision for the synaptic feedback strength , we conclude that _ the proposed adaptation mechanism could improve the robustness of neural integration with a factor ranging from to . we have studied a simple model for neural integration and proposed a class of feedback adaptation rules that could explain the experimentally observed robustness of neural integration with respect to perturbations . 
the analysis tools that we have introduced extend to the study of fine - tuning involved in other systems such as hair cell oscillations in the ear . consider the nonlinear oscillator equation , which captures some of the essential features of hair cell oscillations . inspired by our previous analysis , we consider a feedback adaptation law for the parameter of the form , with a positive variable characterizing the magnitude of oscillations and related to and via the expression . fig . [ f4 ] shows that , in the absence of the stimulus , this type of adaptation law is indeed able to bring and keep the bifurcation parameter close to its critical value , resulting in the spontaneous generation of oscillations . |
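The oscillator equation and adaptation law of this last example are only referenced above, so the sketch below substitutes a generic van der Pol-type normal form with a Hopf bifurcation at mu = 0 and a slow feedback of the instantaneous oscillation magnitude x^2 + v^2 on mu. It is a stand-in chosen to reproduce the qualitative behaviour described in the text, not the authors' model.

```c
/* Self-tuning oscillator: an illustrative stand-in for the hair-cell model.
 *     dx/dt  = v
 *     dv/dt  = (mu - x*x) * v - x        (Hopf-type bifurcation at mu = 0)
 *     dmu/dt = eps * ( r0*r0 - (x*x + v*v) )
 * The oscillation "magnitude" x^2 + v^2 regulates the bifurcation
 * parameter mu; all functional forms and constants are assumptions. */
#include <stdio.h>

int main(void)
{
    double x = 0.2, v = 0.0;      /* small initial disturbance          */
    double mu = -0.05;            /* start on the damped side           */
    const double r0  = 0.3;       /* target oscillation magnitude       */
    const double eps = 0.05;      /* adaptation much slower than cycle  */
    const double dt  = 0.005;

    for (long n = 0; n <= 400000; n++) {
        double dv  = (mu - x*x) * v - x;
        double dmu = eps * (r0*r0 - (x*x + v*v));

        v  += dt * dv;            /* semi-implicit Euler: update v first */
        x  += dt * v;
        mu += dt * dmu;

        if (n % 40000 == 0)
            printf("t = %7.1f   mu = %+.4f   x = %+.4f\n", n*dt, mu, x);
    }
    /* mu settles just above its critical value 0 and the system keeps
     * oscillating spontaneously with magnitude of order r0. */
    return 0;
}
```

Started on the damped side (mu < 0), the parameter drifts up through the bifurcation and then hovers just above it, sustaining a small spontaneous oscillation whose size is set by r0.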
recent advances in wireless communications and microelectronic devices are leading the trend of research toward cognitive radios ( crs ) .the main feature of crs is the opportunistic usage of spectrum .cr systems try to improve the spectrum efficiency by using the spectrum holes in frequency , time , and space domains .this means that secondary users ( sus ) are allowed to utilize the spectrum , provided that their transmissions do not interfere with the communication of primary users ( pus ) .the fundamental components of cr systems that allow them to avoid interference are spectrum sensing and resource allocation .however , in a practical cr network , spectrum occupancy measurements for all the frequency channels at all times are not available .this is partially because of energy limitations and network failures .another highly important and very common reason for occurrence of missing entries in the data set is the hardware limitation .each su may want to use different frequency channels , but it may not be capable of sensing all the channels simultaneously . on the other hand , a complete and reliable spectrum sensing data set is needed for a reliable resource allocation .therefore , we need to develop a method to estimate the missing spectrum sensing measurements . this task is especially more challenging in dynamic environments .there are different approaches toward the problem of data analysis in the cr networks . in ,a learning approach is introduced based on support vector machine ( svm ) for spectrum sensing in multi - antenna cognitive radios .svm classification techniques are applied to detect the presence of pus .several algorithms have been been proposed using dictionary learning framework .these approaches try to find the principal components of data using dictionary learning and exploit the components to extract information .the goal of this paper is to estimate the missing spectrum sensing data as accurate as possible in the time varying environments .an approach is introduced based on nonnegative matrix factorization ( nmf ) to represent the spectrum measurements as additive , not subtractive , combination of several components .each component reflects signature of one pu , therefore the data can be factorized as the product of signatures matrix times an activation matrix .dimension reduction is an inevitable pre - processing step for high dimensional data analysis .nmf is a dimension reduction technique that has been employed in diverse fields .the most important feature of nmf , which makes it distinguished from other component analysis methods , is the non - negativity constraint .thus the original data can be represented as additive combination of its parts . in our proposed method ,a new framework is introduced to decompose the spectrum measurements in cr networks using a piecewise constant nmf algorithm in presence of missing data .piecewise constant nmf and its application in video structuring is introduced in . in the proposed method, we try to handle the missing entries in the data and also take a different approach to solve the corresponding optimization problem using an iterative reweighed technique . 
in the context of cr networks, nmf is utilized in to estimate the power spectra of the sources in a cr network by factorizing the fourier transform of the correlation matrix of the received signals .our proposed method estimates the missing entries in power spectral density measurements by enforcing a temporal structure on the activity of the pus and can be used in scenarios when the number of the pus is not known .the introduced method takes advantage of a prior information about the activity of the pus and exploits piecewise constant constraint to improve the performance of the factorization .moreover , a solution for the introduced minimization problem is suggested using the majorization - minimization ( mm ) framework .the rest of the paper is organized in the following order . in section [ model ] , the system model and the problem statement are introduced .section [ pcnmf ] describes the proposed new nmf problem . in section [ mm ], a method is presented to solve the piecewise constant nmf problem in mm framework with missing data .section [ results ] presents the simulation results and finally section [ conclusions ] draws conclusions .due to the nature of wireless environments , trustworthy information can not be extracted from measurements of a single su . to find the spectrum holes in frequency , time , and space , there exists a fusion center that collects and combines the measurements from all the sus .cooperative spectrum sensing makes the missing data estimation algorithm more robust .fusion center predicts the missing entries by using the collected measurements.however , since each su is not able to sense the whole spectrum all the time , the data set collected from the sus contains missing entries .network failures , energy limitations , and shadowing can also cause loss of data . without loss of generality ,we want to reconstruct the power map in a single frequency band .the network consists of primary transmitters and spectrum sensors that are randomly spread over the entire area of interest .figure [ powermap ] illustrates an example of a network with pus and sus in a area .the received power of the sensor at time can be written as where is the transmit - power of the pu at time , is the channel gain from the pu to the su , and is the zero - mean gaussian measurement noise at the sensor with variance . considering a rayleigh fading model ,the channel gain coefficient can be modeled as : where the channel constant , is the carrier frequency , is the speed of light , and and are the transmitter and receiver antenna gains . is the path loss exponent which determines the rate at which power decays with the separation distance between the su and the pu and models the fading effect . at timeslot , measurements from sus can be stacked in a vector , given as where ^t} ] , and ^t} ] is a matrix , which consists of channel gain vectors in the -dimensional space of data , and is an matrix that indicates the power levels of pus in each time slot ( if the pu is inactive at time ) . here , the goal is to estimate the missing data using the partial observations . 
to achieve this goal ,the data is decomposed using piecewise constant nmf .then the components of data and the activation matrix are used to estimate the missing data .promoted by ( [ compact_form ] ) , it is easy to see that the measurements of each time slot can be represented as an additive , not subtractive , combination of few vectors .this algebraic representation has a geometric interpretation .figure [ pyramid ] helps us to visualize the structure of data in a -dimensional space of data . in this figure, sus are measuring power levels in an area with pus .it is easy to notice that measurement vectors lie within a pyramid in the positive orthant with edges proportional to .this is due to fact that all the points in the pyramid can be written as an additive combination of the edges .although it is assumed that the channel gains are stationary for a time window of length , pus can become activated / deactivated in this time window any number of times and can change their transmission power in each activation .hence , the power levels of pus tend to be piecewise constant .nmf is a widely - used technique to decompose data to its nonnegative components . here, the structure of the power level matrix is exploited while handling the missing entries . as a result ,the general objective function is presented as follows : where is a weighted measure of fit and is a penalty , which favors piecewise constant solutions . is a nonnegative scalar weighting the penalty .the constraints denote that all the entries of and are nonnegative . is an weight matrix that is used to estimate the weighted distance between and .the coefficients of the weight matrix denote the presence of data ( if the measurement of the su at time slot is unavailable / available ) .nmf algorithms utilize different measures of fit such as euclidean distance , generalized kullback - leibler ( kl ) divergence , and the itakura - saito divergence . in all the cases ,the distance can be calculated as the sum of the distances between different coefficients . in our case , euclidean distance is used as the measure of fit .this objective function is commonly used for problems with gaussian noise model , a common noise model in communication systems , hence : since there exist sharp transitions in power level of pus and power level of each pu is constant in each transmission period , rows of tend to be piecewise constant . in order to favor the piecewise constant solutions ,the penalty function is defined as : when tends to , this penalty function represents the sum of norm of the transition vectors , i.e. , where is an vector containing power levels of pus in time .this penalty favors the solutions with a lower number of transitions .however , since it is not differentiable , it can be replaced with a differentiable approximation : where is a small positive constant and is much less than all the non - zero elements of to avoid division by zero . 
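As a concrete reference point, the sketch below implements the lambda = 0 special case of these updates, i.e. the standard weighted (binary-masked) multiplicative rules for the Euclidean cost, on a tiny synthetic problem. The piecewise-constant penalty term and the rescaling step are omitted, the matrix names (G for gains, P for activations) and all dimensions and data are the example's own, and the point is only to show how the weight matrix confines the fit to the observed entries and how the missing entries are then predicted from the product of the two factors.

```c
/* Weighted (masked) multiplicative NMF updates, Euclidean cost.
 * lambda = 0 special case only: no piecewise-constant penalty, no rescaling.
 * Dimensions are kept tiny for clarity. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define M 8     /* secondary users       */
#define T 40    /* time slots            */
#define K 2     /* assumed number of PUs */

static double X[M][T], W[M][T], G[M][K], P[K][T], GP[M][T];

static void product(void)                    /* GP = G * P */
{
    for (int i = 0; i < M; i++)
        for (int t = 0; t < T; t++) {
            GP[i][t] = 0.0;
            for (int k = 0; k < K; k++) GP[i][t] += G[i][k] * P[k][t];
        }
}

int main(void)
{
    srand(1);
    /* toy ground truth: random gains, piecewise constant activations */
    double Gt[M][K], Pt[K][T];
    for (int i = 0; i < M; i++) for (int k = 0; k < K; k++)
        Gt[i][k] = (double) rand() / RAND_MAX;
    for (int k = 0; k < K; k++) for (int t = 0; t < T; t++)
        Pt[k][t] = (t / 10 + k) % 2 ? 1.0 : 0.2;

    for (int i = 0; i < M; i++) for (int t = 0; t < T; t++) {
        X[i][t] = 0.0;
        for (int k = 0; k < K; k++) X[i][t] += Gt[i][k] * Pt[k][t];
        W[i][t] = ((double) rand() / RAND_MAX < 0.7) ? 1.0 : 0.0;  /* ~30% missing */
    }

    /* random positive initialisation */
    for (int i = 0; i < M; i++) for (int k = 0; k < K; k++)
        G[i][k] = 0.1 + (double) rand() / RAND_MAX;
    for (int k = 0; k < K; k++) for (int t = 0; t < T; t++)
        P[k][t] = 0.1 + (double) rand() / RAND_MAX;

    for (int it = 0; it < 500; it++) {
        /* P  <-  P .* (G' (W.*X)) ./ (G' (W.*(G P))) */
        product();
        for (int k = 0; k < K; k++)
            for (int t = 0; t < T; t++) {
                double num = 0.0, den = 1e-12;
                for (int i = 0; i < M; i++) {
                    num += G[i][k] * W[i][t] * X[i][t];
                    den += G[i][k] * W[i][t] * GP[i][t];
                }
                P[k][t] *= num / den;
            }
        /* G  <-  G .* ((W.*X) P') ./ ((W.*(G P)) P') */
        product();
        for (int i = 0; i < M; i++)
            for (int k = 0; k < K; k++) {
                double num = 0.0, den = 1e-12;
                for (int t = 0; t < T; t++) {
                    num += W[i][t] * X[i][t]  * P[k][t];
                    den += W[i][t] * GP[i][t] * P[k][t];
                }
                G[i][k] *= num / den;
            }
    }

    /* reconstruct and report the error on the entries that were missing */
    product();
    double err = 0.0; int nmiss = 0;
    for (int i = 0; i < M; i++) for (int t = 0; t < T; t++)
        if (W[i][t] == 0.0) { double d = GP[i][t] - X[i][t]; err += d*d; nmiss++; }
    printf("rmse on %d missing entries: %g\n", nmiss, nmiss ? sqrt(err/nmiss) : 0.0);
    return 0;
}
```

On this noiseless rank-2 toy the held-out entries should be recovered to within a small rmse after a few hundred iterations; the penalised updates of the paper additionally smooth the activation matrix along the time axis.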
in section [ mm ] , an algorithm is derived to find the minimizer of the following problem : after estimating and , the missing entries of can be approximated using the equation .in this section , an iterative algorithm is described to find the solution of the optimization problem proposed in ( [ scale_objective ] ) .for that , majorization - minimization ( mm ) framework is employed .mm algorithm and its variants have been used in various applications such as parameter learning and image processing .the update rules are derived to calculate the entries of given the entries of and then the entries of given the entries of , using an iterative reweighed algorithm .first , the update rules for given are derived .then , the update rules for will be derived in a similar manner . as it is clear in ( [ seperable_measure ] ) , we can write the distance measure as a sum of different time slots : where is the weighted euclidean distance between and , given and . in mm framework , the update rules are derived by minimizing an auxiliary function . by definition, is an auxiliary function of if and only if and for .if is chosen such that it is easier to minimize , the optimization of can be replaced with iterative minimization of over .thus , in the literature , convex functions are frequently used as the auxiliary functions .it is shown in that is non - increasing under the update this is due to the fact that in the iteration we have . following a similar approach as ,the auxiliary function for the weighted euclidean distance can be formulated as : where is an diagonal matrix with and is the diagonal entry of and is element - wise multiplication . to solve the problem presented in ( [ scale_objective ] ), the contribution of should be considered in the auxiliary function .for that , a convex version of is employed : now the update rules can be obtained using the iterative version of ( [ reweighted_penalty ] ) .this means that is updated in each iteration using the values of in the previous iteration . to form the penalized auxiliary function , , we add up with the contribution of to .thus , can be written as : .\end{gathered}\ ] ] it is worthwhile to mention that . since is convex , it can be easily minimized over by setting the gradient to zero .hence the update rule is attained as : where is the element of the gradient . finding the update rule for is simple .this is due to the fact that is not a function of .hence , the update rule for is similar to the update rule for standard nmf , except the missing entries must be taken into account .the update rules can be written in matrix form as : where is the element - wise multiplication and the division is also performed in an element - wise manner .the obtained update rules in ( [ p_update_rule ] ) and ( [ gamma_update_rule ] ) are exploited alternatively to estimate and . then the missing entries of are predicted by .however , by using the objective function in ( [ scale_objective ] ) , the optimization problem results in solutions with entries of tend toward and tends toward .we take advantage of the scale ambiguity between and to avoid this issue .let be a diagonal matrix with its diagonal entry equal to . in each iteration, the rescaled matrix pair is used instead of the original matrix pair . as a practical scenario, we should also consider the case when the secondary network has no information about the number of the pus , i.e. . 
in this case, the common dimension of matrices and is not known. there have been some efforts in model order selection for nmf. in the numerical experiments, is used as the common dimension to factorize the data under such conditions. this is only possible if the secondary network has some information about an upper bound on . for the numerical experiments, one frequency channel is considered with active pus in the area. figure [ topol ] illustrates the topology of the network. incomplete measurements are collected from sus. we use the same simulation environment and the same network topology as in . the simulation parameters are set as follows, unless otherwise stated. the path loss is computed as , where is the distance, , and . is computed by multiplying the path loss by the fading coefficient , where , and is circularly symmetric zero-mean complex gaussian noise with variance . pu activity is modeled by a first-order markov model. all the pus utilize the spectrum for of the time slots. the transition matrix of the pu is and . is the probability that the pu stops transmitting from time to , and is the probability that the pu starts transmitting. the parameter is uniformly distributed over the interval . each su makes a measurement with probability . the measurements are contaminated by additive white gaussian noise. the noise variance is for all the sus. partial measurements are generated for time slots. to reduce the computational burden, the first time slots are used to estimate . next, by using the obtained and the update rule ( [ p_update_rule ] ), is estimated for all time slots. the regularization factor is set to , and factors are used to factorize the data. figure [ truevspredicted ] shows the true power levels and the reconstructed ones at a randomly selected su versus time for a time window of samples. it can be seen that the missing entries are accurately recovered by the proposed method, and the algorithm can easily track abrupt changes in the power level. figure [ nmf_dl ] compares the rmse, averaged over the sus, of the proposed method with two similar methods. the method introduced in exploits the spatial correlation between adjacent sus' measurements and semi-supervised dictionary learning ( ss-dl ) to estimate the missing entries. for the numerical results, the batch version of ss-dl is employed and its parameters are set to their optimal values. furthermore, to emphasize the effect of the piecewise constant penalty, the results are also compared with weighted nmf ( wnmf ). wnmf employs a binary weight matrix to deal with the missing entries. this figure shows that the proposed method outperforms its competitors at different noise levels ( figure [ nmf_dl].(a ) ) and different probabilities of miss ( figure [ nmf_dl].(b ) ). denotes the ratio of missing entries among the spectrum data.
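a simulation of this kind is straightforward to reproduce in outline. the sketch below generates piecewise constant pu power levels from a first-order markov on / off chain, path-loss-plus-fading channel gains, and a partially observed, noisy measurement matrix; every numerical value (numbers of pus, sus and slots, transition and miss probabilities, noise level, path-loss exponent) is a placeholder chosen for illustration, since the paper's exact settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, S, T = 4, 20, 500                      # PUs, SUs, time slots (placeholder values)
p_off, p_on, p_miss = 0.1, 0.1, 0.5       # Markov transition and miss probabilities (assumed)

# piecewise constant PU power levels driven by a first-order Markov on/off chain;
# a new power level is drawn at each activation
P = np.zeros((K, T))
state = rng.random(K) < 0.5
level = rng.uniform(0.5, 2.0, K)
for t in range(T):
    switch = rng.random(K)
    turn_off = state & (switch < p_off)
    turn_on = (~state) & (switch < p_on)
    state = (state & ~turn_off) | turn_on
    level = np.where(turn_on, rng.uniform(0.5, 2.0, K), level)
    P[:, t] = state * level

# channel gains: path loss times an exponential (Rayleigh-power) fading factor
dist = rng.uniform(10.0, 100.0, (S, K))
G = dist ** -3.0 * rng.exponential(1.0, (S, K))

# noisy, partially observed measurements and the binary weight (mask) matrix
X = G @ P + rng.normal(0.0, 1e-3, (S, T))
W = (rng.random((S, T)) > p_miss).astype(float)    # 1 = observed, 0 = missing
```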
( figure [ nmf_dl ], panels [ vsnoise ] and [ vspmiss ]: rmse averaged over monte carlo trials. ) this figure shows that wnmf and ss-dl perform almost the same for low noise variance and low . however, for harsh environments with high noise variance or high , ss-dl produces more accurate results. the pc-nmf method outperforms both methods at different noise levels and different probabilities of miss. for instance, pc-nmf has a lower rmse than the ss-dl method for and . this improvement in performance does not increase the computational burden of the algorithm. table [ time ] shows the running times for the different methods averaged over monte carlo trials for , , and . ( table [ time ] ) it is known that nmf methods converge much faster than methods based on gradient descent. however, table [ time ] also illustrates that the proposed method does not require more computational resources than wnmf. to study the effect of the piecewise constant penalty on the output of the algorithm, figure [ pupowerlevel ] depicts the power levels of two pus and the estimated activation levels obtained using the introduced method and wnmf. both methods can estimate the power levels up to a scale factor. the number of factors is set to , i.e. . this figure illustrates that the proposed method produces a more accurate factorization by taking advantage of the piecewise constant constraint as prior information. as expected, the power levels estimated by pc-nmf are piecewise constant, while the results generated by wnmf are noisy. in fact, the piecewise constant penalty decreases the effect of noise and fading. moreover, the sharp transitions are preserved in the factors returned by pc-nmf. by exploiting an inherent structural feature of cognitive radio networks, we proposed a piecewise constant nmf approach that can decompose the data set into its components. the majorization-minimization framework is utilized to solve the optimization problem of the piecewise constant nmf. numerical simulations suggest that this method is able to predict the missing entries in the spectrum sensing database accurately. | in this paper, we propose a missing spectrum data recovery technique for cognitive radio ( cr ) networks using nonnegative matrix factorization ( nmf ). it is shown that the spectrum measurements collected from secondary users ( sus ) can be factorized as the product of a channel gain matrix times an activation matrix. then, an nmf method with piecewise constant activation coefficients is introduced to analyze the measurements and estimate the missing spectrum data. the proposed optimization problem is solved by a majorization-minimization technique. numerical simulations verify that the proposed technique is able to accurately estimate the missing spectrum data in the presence of noise and fading. nonnegative matrix factorization, cognitive radio network, spectrum sensing, missing data estimation
quantum key distribution ( qkd ) involves the generation of a shared secret key between two parties via quantum signal transmission [ 1 ] , [ 2 ] .( among other possible terms , we will often use the more appropriate generation " in lieu of distribution , " ignoring their fine distinction in conventional cryptography [ 3 ] . ) qkd is widely perceived to have been proved secure in various protocols [ 1 ] , [ 2 ] , in contrast to the lack of security proofs for conventional methods of encryption for privacy or key distribution. security proofs in qkd are highly technical and are also multi - disciplinary in nature , as is the case with the subject area of quantum cryptography itself .theoretical qkd involves in its description and treatment various areas in quantum physics , information theory , and cryptography at an abstract and conceptual level .it is difficult for non - experts in qkd security to make sense of the literature ; moreover , even experts are often not aware of certain basics in some of the relevant fields .many who perform assessments on qkd security follow the vague community consensus on qkd security being guaranteed by rigorous proofs .a common perception is that qkd gives perfect secrecy , " as asserted for example in a useful recent monograph on conventional cryptography [ 3 , p. 589 ] .it is interesting to note that qkd is commonly taught to physics students as being an important application of quantum optics because qkd is provably secure . to our knowledge, the provable security property is often taught as being self evident and is not questioned on any level ( recent advances in quantum hacking may be an exception ; however , such attacks are based on discrepancies between the model and real systems as opposed to the security of the model itself ) .the commonly cited reason for no - cloning or quantum entanglement is very far from sufficient . even in the technical literature, a qkd - generated key is often regarded as perfect whenever it is used in an application .one main purpose of this paper is to correct such a misconception and to demonstrate how the imperfect generated key affects the security proofs themselves .security proofs by their nature are conceptual , logical and mathematical yet indispensable for guaranteeing security .cryptographic security can not be guaranteed by experiment , if only because possible attack scenarios can not be exhausted via experiment .however , security is a most serious issue in cryptography and must be thoroughly and carefully analyzed [ 4 , two prefaces and ch . 1 ; see also quotes in appendix i ] ._ the burden of proof _ is on one who makes the security claim , not on others to produce specific successful general attacks . in this paper, we will describe the actual security theory situation of qkd with just enough technical materials for accurate statements on the results .we will be able to describe some main security issues without going into the physics , and we can treat everything at a _ classical _ probability level , to which a _ quantum _ description invariably reduces .we will discuss in what ways these security issues have been handled inadequately . some major work in the qkd security literature will be mentioned and also discussed in appendix i , which may help clarify the issues and illuminate the development that led to the current security situation . 
in appendix ii , we compare qkd to conventional cryptography and provide a preliminary assessment on the usefulness of qkd when conventional cryptography appears adequate .( note that cryptography is a small and relatively minor subarea of computer security [ 4 ] .it is the latter that results in news headlines . ) in appendix iii , some possible objections to certain points of this paper from the viewpoint of the current qkd literature are answered .table 1 in section viii.b gives a summary comparison of various numerical values .generally , perfect security can not be obtained in any key distribution scheme with security dependent on physical characteristics due to system imperfections mixed with the attacker s disturbance , which must be considered in the security model .this is especially the case with qkd , which involves small signal levels .we use the term qkd " in this paper to _ refer _ to protocols with security depending on information - disturbance tradeoffs [ 1 ] , [ 2 ] , excluding those based on other principles such as the kcq " approach in [ 5 ] , which permits stronger signals and for which no general security proof has yet been claimed . in qkd , one can at best generate a key that is close to perfect in some sense. this immediately raises the issue of a security criterion , its operational significance and its quantitative level .security is very much a quantitative issue .quantitative security is quite hard to properly define and to rigorously evaluate ; thus , there are few such results in the literature on conventional mathematics - based cryptography .it is at least as hard in physics - based cryptography , and there is yet no true valid quantification of qkd security under all possible attacks . that there are problems and gaps in qkd security proofs has been discussed since 2003 in [ 6 , app . a ] , [ 7 ] , [ 5 , app .a and b ] , [ 8 ] , culminating in the numerical adequacy issue in [ 9 ] in 2012 , which provides the trace distance criterion level for a so - called near - perfect " key .this last numerical adequacy point is emphasized in [ 10 ] , and a reply is given in [ 11 ] , which in turn is replied to in [ 12 ] ; no further exchange on this topic has resulted . the basic point of [ 11 ] is that a trace distance level of is sufficient for security .there have since been several arxiv papers that elaborate upon the several qkd security issues that have yet to be resolved .this paper summarizes and supersedes those papers in a coherent framework for analyzing qkd security .this paper shows in what ways , even at a value of , which is ten orders of magnitude beyond what is currently achievable , such a trace distance level does not provide adequate security guarantees in a cryptosystem that involves the use of a vast number of keys at such a level ; see section viii . before proceeding to a detailed treatment , we list the major problems of currently available qkd security proofs as follows : 1 . the chosen security criterion ,namely , a quantum trace distance , has been misinterpreted .the operational security guarantee that it yields does not cover some important security concerns .the numerical security level that has been obtained is far from adequate .the very strong level of guarantee asserted recently is derived from erroneous reasoning .3 . 
the known security proofs are not complete nor justified at various stages in a valid manner , especially in connection with the major necessary step of error correction , which has not been treated in a rigorous manner .a trace distance level guarantee of a key limits the information - theoretic security level that can be obtained when is used for message authentication , taking away an otherwise available security parameter that allows arbitrarily high levels of message authentication security .there are many other serious issues facing qkd security proofs , many of which relating to physics and implementation .these issues will not be discussed in this paper , which concentrates on a careful exposition of the above four points .much of the technical content in this paper is conceptual analysis , especially on the use of probability in real - world applications .the applications are not essentially mathematical or physical in nature , which is partly why they are easy to miss and result in various confusions .sections iii.a , iv , v.c , viii.b , and appendix iii contain relevant clarifications on the subtle meaning of probability in real - world applications .note that knowledge of physics , classical or quantum , is _ not _ required to understand the content of this paper .furthermore , the relevant basic cryptography concepts will be explained when being introduced .we briefly review the different representations of conventional and quantum cryptography in regard to the cryptographic goals of privacy and key distribution [ 3 ] .may or may not be uniformly distributed.,width=288 ] in fig .1 , the conventional stream cipher encryption of a data sequence for privacy is depicted .a user alice transmits , which is the xor of the data bits and running key bits for each : we use uppercase to denote random variables and lower case to denote the values they take on .thus , from ( 1 ) , alice transmits for given and .a prior shared key bit sequence unknown to the attacker eve is known to alice and the other user bob , who can decrypt from by knowing .the attacker eve would then learn nothing about from without knowing something about .she knows nothing about when the are ( statistically ) independent bits with equal probability of being 0 or 1 .this so - called one - time pad " encryption , her probability of obtaining the sequence correctly given that she knows through interception is equal to her _ a priori _ probability on .for uniformly distributed , where is the bit length of the sequences .( the vertical bar is always used in this paper to denote the bit length of the sequence within it . lower case are discrete probability distributions , and no continuous distributions will be used . )we may represent this by , namely , the uniform random variable to eve .thus , in addition to , which is the _ a priori _ probability of each , there is also no correlation of any type between the that eve can find .this is perfect secrecy . "the security under discussion is the _ information - theoretic security _ of the intrinsic uncertainty to eve .( see section iii and appendix ii . )the correlation among bits in is a most important feature often missed in cryptography , especially in connection with the trace distance ( statistical distance ) criterion , which is presented in section iv.a and also in section iv.b in the context of distinguishability .the goal of qkd is to generate a key , which ideally is , by transmitting bits from alice to bob via quantum signals with no use of shared secret keys . 
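before turning to the qkd key generation process itself, the perfect secrecy statement (1)-(2) for the one-time pad of fig. 1 can be checked directly on a toy example. the snippet below (the three-bit length and the deliberately non-uniform data prior are assumptions chosen only for illustration) verifies that, with a uniform key, the posterior on the data given any ciphertext equals the prior, so eve learns nothing from interception.

```python
from itertools import product
from collections import Counter

n = 3                                   # toy data/key length
msgs = list(product([0, 1], repeat=n))  # all 2^n data sequences
keys = msgs                             # uniform key over the same space

# an arbitrary (non-uniform) prior on the data, for illustration
prior = {x: (i + 1) for i, x in enumerate(msgs)}
tot = sum(prior.values())
prior = {x: v / tot for x, v in prior.items()}

# joint distribution of (data, ciphertext) under y_i = x_i XOR k_i with a uniform key
joint = Counter()
for x in msgs:
    for k in keys:
        y = tuple(xi ^ ki for xi, ki in zip(x, k))
        joint[(x, y)] += prior[x] / len(keys)

# for every ciphertext y, p(x | y) equals the prior p(x): perfect secrecy
for y in msgs:
    py = sum(joint[(x, y)] for x in msgs)
    for x in msgs:
        assert abs(joint[(x, y)] / py - prior[x]) < 1e-12
print("one-time pad: p(x|y) = p(x) for every ciphertext y")
```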
in reality, a prior shared secret key is needed to start executing the qkd protocol , at least for message authentication against man - in - the - middle attacks ..,width=288 ] the qkd key generation process is depicted in fig .2 . for definiteness ,the original bb84 protocol [ 1 ] is schematically described in the following , which contains the key idea of qkd .a sequence of quantum optical signals is modulated by the data and sent from alice to bob , with each bit modulating a separate quantum signal .( we use to distinguish these key generation data from the data on which the qkd key is used to encrypt , as in fig . 1 . ) in bb84 , each quantum signal is a single photon in a so - called qubit" , a two - dimensional quantum state space. eve could intercept and set her probe on the qubits during signal transmission .bob measures on one of the two bb84 bases randomly upon receiving each qubit signal and obtains a bit value of 0 or 1 .after the entire sequence is measured , bob publicly announces which basis he measured on each , and the ones mismatched " to alice s transmitted signal are discarded .then , a portion of the remaining matched ones is used to check the frequency of bit error , which is called the _ quantum bit error rate _ ( qber ) .the other portion is called the _ sifted key _ which the final key is to be generated . for our purposes, there is __no need _ _ to understand the exact physics and underlying rationale of the above procedure. the only thing that matters for the purposes of this paper is the representation of eve s knowledge on the generated via her final classical observed value by the joint distribution .eve s probe inevitably disturbs the quantum signals if she learns anything from the probe , a characteristic quantum effect of the information - disturbance tradeoff , which has no classical analog .it is usually assumed that the users would regard all disturbance as indicated by the qber level to be from eve s interception .they would estimate how much information " eve can learn about with such a disturbance. ( the vague word information " would be specified precisely in context . )the users have to correct the errors in to obtain a useful key .such errors would always be present from system imperfections due to the low signal level .error correction is typically accomplished by an ordinary error correcting code ( ecc ) on , as indicated in fig .2 . the users then transfer the estimate of eve s information on to the error - free .if eve s information is below a certain threshold level , the users may employ privacy amplification" [ 1 ] on to eliminate it .the privacy amplification code ( pac ) usually involves linear hashing compression [ 3 ] on to a final generated key with bit length , which is a small fraction of . 
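the bookkeeping of the bb84 sifting and qber estimation steps just described can be sketched as a purely classical simulation. the sketch below does not model the quantum states or eve's probe at all; the number of transmitted signals, the idealized noiseless channel, and the half-and-half split between error checking and the sifted key are assumptions made only to make the protocol flow concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                                   # number of transmitted qubits (toy value)
bits = rng.integers(0, 2, N)                 # Alice's raw data
bases_a = rng.integers(0, 2, N)              # Alice's BB84 bases
bases_b = rng.integers(0, 2, N)              # Bob's measurement bases

# idealized channel: matched bases reproduce the bit, mismatched bases give a random
# outcome; an intercept-resend attack would add roughly 25% errors on the matched
# positions, which is not modeled here
received = np.where(bases_a == bases_b, bits, rng.integers(0, 2, N))

matched = bases_a == bases_b                 # announced publicly; mismatched slots discarded
sifted_a, sifted_b = bits[matched], received[matched]

# spend half of the sifted bits to estimate the QBER, keep the rest for key generation
half = len(sifted_a) // 2
qber = np.mean(sifted_a[:half] != sifted_b[:half])
print(f"sifted fraction ~ {matched.mean():.2f}, estimated QBER = {qber:.3f}")
```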
what is the security desired and claimed for the qkd - generated key ?because this is a physical and in particular a quantum cryptosystem , there are many mutually exclusive different observed values that eve could obtain from her choice of quantum probe and based on the quantum measurement on the probe .she could estimate various properties of from the that she obtains , each with a certain probability of success .she also would gather side information relevant to improving her estimates before she measures her probe and makes estimates .such side information would include the bb84 bases open announcement and the specific pac employed in the qkd round .it may also involve the specific ecc used .the users goal is to make eve s probability of success in obtaining any characteristic of close to the level whereby is perfect , i.e. , when .an especially significant attack on privacy encryption is the _ known - plaintext attack _ ( kpa ) , which is the main vulnerability of conventional mathematics - based encryption . the _ ciphertext - only attack _ , for which to eve , that qkd security analyses focus upon is usually not considered as a serious risk for symmetric - key ciphers [ 3 ] , [ 4 ] , [ 8 ] , which can be further substantiated in an information - theoretic manner .a brief summary is given in appendix ii , which compares qkd with conventional cryptography .a kpa runs as follows .eve may know a portion of the data that is encrypted with and hence knows part of because is open .she may then learn something about the remainder of through correlations among the bits in an imperfect and hence something about the unknown portion of .it is such a kpa when a qkd key is being used that must be protected against .this implies that correlations between bits in must be addressed in qkd security .perfect secrecy _ against all attack possibilities would require that is uniformly distributed to eve for any that she may possibly obtain that is allowed by the laws of quantum physics together with all her side information .thus far , a security criterion is chosen to measure the difference between an imperfect key and a perfect key with a quantitative security level .the term __unconditional security _ _ was coined [ 13 ] and widely used to include the following two conditions : + _ unconditional security : _ 1 .complete generality on possible attacks ; 2 .quantitative security level can be made as close to perfect as desired .the data bit length or in a qkd round is often taken to be a_ security parameter _ , namely , the quantitative security level improves with increasing and becomes perfect as [ 13 ] . 
such unconditional security for qkd is often claimed because it distinguishes qkd from conventional cryptography ; moreover , it is the only advantage of qkd ( see appendix ii ) .security against only some attacks means that the qkd approach may not lead to good security in the future when other attacks become practical .the latter is often taken to be the situation with conventional cryptography .unconditional security in qkd remains asserted on occasion , both theoretically for a given protocol and physical models and experimentally as potential or even actualized possibility .however , _ no _ security parameter has ever been found for any qkd protocol at any key generation rate , and certainly , and are not such parameters , as will be shown in section iii.b .to explain the process and requirements of a security proof , we proceed to quantitatively describe the information - theoretic security of a cryptosystem.it is sometimes asserted that a cryptographic security criterion is a matter of definition " and interpretation , " although this is highly misleading and can be considered incorrect .there are definite specific characteristics in a cryptographic goal on which users want to protect against successful attacks. a security criterion would be inadequate for a security task it is supposed to serve if it does not lead to a guarantee at an adequate quantitative level .the range of adequate levels may depend on the specific application ; however , one can not sensibly define " a protocol to be secure if its security criterion and level do not cover such possible attacks . interpretation " of a mathematical statement , on security or any other matter , may be correct or incorrect when it is applied to a real - world situation .moreover , there have been many erroneous interpretations in qkd security analysis .these errors are mainly conceptual errors , not mathematical nor mainly mathematical mistakes .these errors often involve reading ordinary meanings into a word that in context is a technical term that carries only a precise technical meaning .we will see many examples of these errors in sections iii to viii of this paper .the operationally or empirically meaningful security criteria on the secrecy of any shared key string for privacy or key distribution , whether it is generated by qkd or any method , are the attacker eve s probabilities or rates of success in correctly obtaining various parts or characteristics of , including itself in its entirety .this is the case even in complexity - based security , as we will see . the quantitative information - theoretic security of a key is often described by a single - number security criterion , such as eve s shannon mutual information on [ 14 ] through her observation , which she may obtain by intercepting the signal transmission .this is defined [ 14 ] from the joint distribution , where is the distribution of the generated obtained by the users under a given quantum probe from eve .the conditional probability is specified through the cryptosystem representation and eve s measurement result , from which she derives her estimate of a characteristic , denoted by , which is a function of . for example , from , she could estimate as . because she takes to be , we can simplify our notation by simply writing itself instead of , i.e. 
, is being observed from eve s viewpoint .thus , gives the conditional probability that the observed by eve given is the actual key generated .eve can now derive for all possible through and bayes rule .the criterion gives the number of shannon bits concerning known to eve because , in this case , one can write where is the conditional entropy of given [ 14 ] . note that , as we just indicated , eve has a _ full distribution _ on her knowledge of given any observation with the conditional distribution .we may assume that all the side information that eve may possess has been considered in her final .we order the possible values of entering and suppress its dependence so that , in various abbreviated notations , eve has with this probability profile is the complete information " eve has on given her observation .any single number criterion , such as , _ merely _ expresses a constraint on . when , we have , and eve knows nothing about . for a given level ,what does this imply for the security of ? this question arises for any criterion that is used as a theoretic quantity . a single - number information - theoretic measure on the uncertainty or information " is a theoretical quantity whose operational or empirical meaning needs to be independently explained [ 15 , preface ] . in the context of ordinary communications , the two theoretic quantities entropy and mutual information are related to the empirical data rate and error rate through the shannon source and channel coding theorems. what would be the operational meaning of these quantities in the context of cryptography ?one __cannot _ _ simply assume the word information " for a technical concept would carry its ordinary meaning in any application , especially not quantitatively .shannon himself emphasized such a danger early on [ 16 ] . in shannon s cryptography paper [ 17 ] ,he used such information measures ; however , except for the ideal case of for a one - time pad , he did not explain their operational cryptographic significance . in cryptography , oneis concerned that eve should not be able to correctly estimate various quantities associated with a key from her observation and side information .such success is generally obtained only probabilistically. therefore , this operational security requirement translates to eve _ not _ being able to estimate such quantities well , i.e. , not with appreciable probability . in the case of perfect secrecy ,eve s above equals the uniform . therefore ,in the general imperfect case , her estimate probabilities as derived from should be close to that derived from . in particular , the exact level needs to be quantitatively compared to that from and its numerical adequacy to ensure security for a given application .thus , given a security criterion level that sets a constraint on above , we would need to ascertain what success probabilities eve may possibly obtain .specifically , we would compare the from any not ruled out by the security criterion constraint to the level : we use the notation to explicitly demonstrate that the level refers to the with a distribution .clearly , , eve s maximum probability of correctly obtaining , needs to be sufficiently small for any meaningful claim to security even if it may be far larger than the level. we would address such a general security probability guarantee based on security measures in sections iv and v. 
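as a concrete illustration of these quantities, the probability profile, the conditional entropy and the mutual information can all be computed from a joint distribution of the key and eve's observation. the snippet below is a sketch on an assumed toy joint distribution (a three-bit key and two possible observations); it is meant only to make the relation between the profile maxima and the single-number criterion explicit.

```python
import numpy as np

def eve_quantities(p_ky):
    """p_ky: 2-D array, rows indexed by key values k, columns by Eve's observations y."""
    p_y = p_ky.sum(axis=0)
    post = p_ky / p_y                              # p(k | y): one posterior profile per column
    p1_per_y = post.max(axis=0)                    # Eve's best guessing probability for each y
    h_ky = -np.sum(p_ky * np.log2(np.where(p_ky > 0, post, 1.0)))   # H(K|Y) in bits
    p_k = p_ky.sum(axis=1)
    h_k = -np.sum(p_k * np.log2(np.where(p_k > 0, p_k, 1.0)))       # H(K)
    return p1_per_y, h_ky, h_k - h_ky              # profile maxima, H(K|Y), I(K;Y)

# toy example: start from an uninformative joint and let one observation bias Eve slightly
p_ky = np.full((8, 2), 1 / 16)
p_ky[0, 0] += 0.05
p_ky[1, 0] -= 0.05
print(eve_quantities(p_ky))
```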
their connections to the numerical security levels of concrete protocols are discussed in section viii .when is used to encrypt data , part of may be known to eve in a kpa , as discussed in section ii .let be the subsequence of known to eve , say , when is used as a one - time pad , and let be a subsequence of , being the remainder of excluding . then , eve s optimal success probabilities from such an attack should be compared to the perfect security level when , where , , and are the sequences obtained from with the same bit positions as , , and , respectively . in general , eve may possess only statistical information on without knowing part of it exactly .we will not address this more complicated situation in this paper .it is important to note that we may write 1 .operational guarantee for an event = + rule out its possible occurrence with a high probability an average number of occurrences ( sample mean ) is _ not _ an operational guarantee because the number of occurrence is itself a random quantity from a finite number of trials , each of which has the same probability distribution .this does not mean that an average can not be used as a measure of security .it means that an average is a less accurate measure compared to a probability statement on individual occurrences or on a multiple - use sample mean .if the variance is known in addition to the average ( mean ) , the probability statement on the sample mean can be made , and operational guarantees can be restored .if only the average is known , the markov inequality can be used to provide an accurate individual probability statement , as shown in section v.c .complexity - based security is operationally equivalent to the following success probability characterization .let be the total number of possibilities that eve can attempt computationally among the possible values to determine if a particular value is the correct value , as in opening a safe .it is easy to show that , with a uniform chance of success for each trial , her overall probability of success is , for a uniform probability distribution on the possible cases , eq .( 7 ) can be readily generalized when eve s success probability for each trial is not uniform but known [ 5 ] .indeed , it can be observed that there is no difference between complexity - based security and information - theoretic ( probabilistic ) security if eve is given a sufficient number of attempts to determine the correctness of a given , as in the case of cracking a safe .she would need at most trials and on average trials .the problem of complexity - based security is that it is very hard to prove a lower bound on the number of trials eve needs for a given problem , and no such proof exists for any common problem .it will be observed that it is also very hard to prove qkd security , and no such proof yet exists as well. in the literature on classical noise - based key generation within information - theoretic security [ 18 ] , [ 19 ] , [ 20], both before and after the emergence of qkd in 1984 [ 21 ] , the security criterion used is the mutual information between the generated key and eve s observation .no relation of this information - theoretic quantity to eve s operational success probability was given until [ 5]. 
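for reference, the two elementary facts invoked above can be written out; this is an illustrative restatement rather than a formula taken from the paper. if only the average value of a nonnegative security quantity is guaranteed, the markov inequality converts it into an individual-use probability statement, and exhaustive trials against a uniform key succeed additively:

```latex
% Markov inequality on a nonnegative security quantity D with guaranteed mean \bar d,
% and the success probability of N computational trials against a uniform |K|-bit key
\Pr\!\left[\, D \ge \lambda\,\bar d \,\right] \;\le\; \frac{1}{\lambda}, \quad \lambda > 0;
\qquad
\Pr[\text{success within } N \text{ trials}] \;=\; \frac{N}{2^{|K|}} .
```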
the issue will be discussed further in the next section in connection with the statistical distance criterion .here , we would like to remove a major misconception about security proofs that use the mutual information criterion , first discussed in appendix a of ref [ 5 ] and which directly carries over to the qkd case for the criterion as well [ 5 ] , [ 22 ] .apart from the problem of bounding from , the latter we will abbreviate as , the asymptotic security proofs that show , with , were erroneously supposed as proofs that is asymptotically perfect .that is confusing the meaning of a limit because is not a number .what occurs here is that the convergence rate of to determines the asymptotic security level as follows . from lemma 2 in [ 5 ] , for any , it is possible that eq .( 9 ) states that , under the constraint of a given level of , there are possible with at the same quantitative level as .thus , a very insecure , compared to , can satisfy ( 8) , even when converges exponentially in : for a constant .it is possible given eq .( 10 ) that , as compared to for a uniform key .apart from condition ( i ) of unconditional security in section ii , such an asymptotic proof of ( 8) does not imply condition ( ii ) for unconditional security . indeed , it does not even imply is in any sense near perfect , as the above case ( 10 ) shows .we may mention that the current quantum criterion suffers from the same exponential problem as will be discussed in the following section . although can be directly used in a finite protocol, this exponential problem is why relatively large and insecure levels of are obtained in a real protocol with sizable key rates .the quantum generalization of is called accessible information " , which is the maximum mutual information that eve can obtain from any quantum measurement on her probe. such an additional issue of measurement optimization is characteristic of quantum detection [ 23 ] , [ 24 ] .this issue plays no role in our context after we take to be eve s accessible information so that the quantum security situation reduces to a classical one under such .the early proofs of qkd security up until 2004 are based on the use of such an accessible information criterion as well as the current proof on the so - called measurement - device - independent approach [ 25 ] .thus , the proofs all suffer directly from the problems explained above and remain in error even after the proof is converted to one with the trace distance criterion . a particularly influential early security proof is given in [ 26 ] , which is the basis of the heuristic generalizations used to include various system imperfections in [ 27 ] .the side information that eve has from the open announcement of ecc and pac of fig .2 are considered in the proofs of [ 13 ] , [ 26 ] , [ 27 ] . in section vii, we will discuss how they considered in more recent proofs and what problems are yet to be resolved.the mutual information criterion does not directly guarantee security against known - plaintext attacks ( kpa ) .we require bounds on the conditional probability ( 6 ) when part of , namely , , is known to eve so that correlation between the bits in will not leak much information about , namely , the remainder of . 
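before turning to how the kpa issue has been handled in the literature, the exponential problem of (8)-(10) can be made numerically concrete. the simplest posterior of the "one spike plus uniform background" form, assumed here only for illustration (it is not the construction of lemma 2 of [5] itself), already has an exponentially small mutual information while eve's success probability is many orders of magnitude above the perfect level :

```python
import numpy as np

def mutual_info_one_spike(n, delta):
    """I(K;Y) in bits for an n-bit key whose posterior (given Eve's observation)
    puts probability delta on one sequence and spreads the rest uniformly;
    the a-priori key is taken as uniform, so H(K) = n bits."""
    rest = (1.0 - delta) / (2.0 ** n - 1.0)
    h_cond = -delta * np.log2(delta) - (1.0 - delta) * np.log2(rest)
    return n - h_cond

n = 40
for delta in [2.0 ** -10, 2.0 ** -20]:
    print(f"n={n}: p1={delta:.1e} (perfect level {2.0**-n:.1e}), "
          f"I_E={mutual_info_one_spike(n, delta):.2e} bits")
```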
in qkd , this kpa problem is considered as one of universal composition " [ 28 ] , [ 29 ] , [ 30 ] in which the security when is being used in an application is taken to be a composition " security issue .although kpa security is crucial and is the usual concern of privacy in conventional ciphers , as noted above , it was not addressed or discussed in the qkd literature until ref [ 28 ] twenty years after ref [ 21 ] appeared .this topic was addressed in [ 28 ] as a composition security issue , with the conclusion that security is ensured when the accessible information goes to exponentially in .that is directly contradicted by ( 10 ) above even simply for ciphertext - only attack security .then , in ref [ 31 ] , it was noted in a specific counter - example that a single - bit kpa leak is possible under the accessible information criterion due to the phenomenon of quantum information locking " , and the trace distance criterion was proposed as an improved criterion ( the criterion was also discussed in [ 28 ] ) with the claim that , under , the probability that is not perfect is at most .specifically , is claimed to be the maximum probability that the generated is not perfect , such probability being called the _ failure probability ._ apparently , the accessible information criterion is much worse .under such a criterion , knowing number of bits in may fully reveal the remainder of [ 32 ] .such a maximum failure probability interpretation of , as originally given in [ 29 ] , [ 30 ] and continuously maintained in many subsequent papers and in the general review [ 2 ] , is _ incorrect _ ; however , it has been maintained publicly in the qkd community to date , despite its flaw having been revealed and explained in early 2009 [ 33 ] , [ 34 ] , in [ 5 ] and [ 8 ] , and in several arxiv papers until the fall of 2014 in [ 22 ] . only in ref[ 35 ] is such an interpretation vaguely combined with a correct security consequence of ( eq . ( 14 ) below ) but with no acknowledgment of previous errors .part of the reason is likely that the indistinguishability advantage " interpretation of is employed instead for validation of this incorrect interpretation , which serves to justify qkd security that can not be otherwise obtained . in sectionsiv and v , we will treat this issue in detail to identify the major security issues involved and those that have not been resolved with the criterion .there are _ two _ different derivations of the failure probability interpretation of in the qkd literature , which we will treat in sections iv.a and iv.b .this incorrect failure probability interpretation of the qkd security criterion is prevalent , and the distinguishability advantage " derivation in section iv.b remains widely quoted and discussed as validation of the interpretation .the issue is of major importance because the criterion issue and its ramifications lie at the foundation of information theoretically secure key generation , both classically and in the quantum case .thus , the full treatment of this issue in this section is very much warranted .we proceed by first explaining how a security criterion functions in a physical cryptosystem where signal transmission can be intercepted . 
as described in section iii.a , based on her attack and the physical system representation , eve could derive a conditional probability distribution on the various possible values of given her observation .she also has side information from the execution of the protocol , namely , the public announcements in the qkd case , which we can label as .we use the following notational abbreviations by suppressing the and dependences specifically , the distribution applies to a given and . the are ordered as in ( 4 ) so that a value that leads to the value is a most likely estimate of from eve s given and .note that it is this whole probability profile that represents the general results of eve s attack , which _ are not _ simply an estimate of , and as we will see , the results can not be replaced by a single numerical criterion . in a classical situation , is obtained from the channel " transition probability and the side information . in the quantum case , there would be infinitely many such transition probabilities , depending on what quantum probe and what quantum measurement eve chooses to make .a security proof has to address all such possible under a specific class of attacks or all possible attacks allowed by the law of physics , as in condition ( i ) of unconditional security .it has been explicitly shown in section iii that an information - theoretic single - number security criterion merely puts a constraint on what possible eve may obtain , and it must satisfy the criterion constraint . for mutual information , the constraint states that must not give a higher value that is ruled out by the security proof . under the criterion , given by eqs .( 9 ) and ( 10 ) shows that exponentially in does not imply that provides good security asymptotically . here, we ignore the random variations in all parameters except by focusing on .security will be weakened when these random variations are considered in sections v and viii .classically , it is already more convenient theoretically to measure the imperfection of by its statistical distance ( variational distance [ 14 ] , kolmogorov distance ) from the uniform distribution than by as follows. for two probability distributions and on the same sample space , the is defined to be thus , . from the well - known inequality in[ 14 , eq .( 11.137 ) ] , it immediately follows that , for any subsequence or segment " of and denoting by similar to , the result in [ 14 , eq .( 11.137 ) ] applies to any probability value and hence to the maximum in particular . under the constraint , it is easy to show by explicit construction [ 5 ] that the bound ( 13 ) can be achieved by many permissible .( we will often omit the unnecessary or in the following and simply set a or level . ) in particular , for any , one can achieve the bound ( 13 ) with equality .the case of the entire key for the total compromise probability is of special importance .these classical results directly apply to the quantum case in which a quantum trace distance is defined between eve s probe and an ideal quantum state to the users . after eve measures on her probe ,a trace distance bound on eve s attack simply translates to a bound on . such a bound constrains the that eve could obtain from any probe and quantum measurement .thus , in this paper , one can _ regard _ as in the context of a quantum protocol . 
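the definition (12), the subsequence bound (13) and its achievability with equality can all be checked on a small example. the snippet below constructs one such permissible distribution (the toy key length, the chosen level and the convention that the first bits are the most significant index bits are assumptions for illustration): its statistical distance from uniform is exactly the chosen level, its largest probability attains the bound, and every subsequence marginal stays within that level of the uniform value.

```python
import numpy as np

n = 10                                     # toy key length
N = 2 ** n
eps = 1e-3                                 # target statistical distance from uniform

# a permissible distribution with delta(P, U) = eps: one sequence gets 2^-n + eps,
# the deficit is spread evenly over the remaining sequences
P = np.full(N, (1.0 - (2.0 ** -n + eps)) / (N - 1))
P[0] = 2.0 ** -n + eps

delta = 0.5 * np.sum(np.abs(P - 1.0 / N))
print(delta, P.max(), 2.0 ** -n + eps)     # delta equals eps, and p1 attains 2^-n + eps

# bound (13) for a subsequence: marginals of the first m bits stay within delta of 2^-m
m = 3
marg = P.reshape(2 ** m, -1).sum(axis=1)   # marginalize over the last n - m bits
assert np.all(np.abs(marg - 2.0 ** -m) <= delta + 1e-12)
```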
note that ( 14 ) shows the _exponential problem _ in numerical security guarantee through similar to ( 10 ) above .an exponentially small only gives security on corresponding to an -bit uniform key after dropping the small factor in ( 14 ) . in particular , achieving ( 14 ) with equality already _ shows _ that the failure probability interpretation of is logically incorrect because it does not include the factor .the original interpretation of the quantum trace distance , which may be abbreviated as , is based on a failure probability " interpretation on the classical , which would reduce to upon eves measurement on her probe .a key is called -secure " when .it is stated in [ 30 , p.414 ] that an -secure key can be considered _identical _ to an _ ideal _ ( perfect ) key- except with probability " ( emphasis in original statement ) .in addition , in [ 29 , p.414 ] , it is stated that the real and the ideal setting can be considered identical with probability at least " .therefore , the failure " in the failure probability " refers to being not perfect , and a _ failure probability _ means that it is rigorously proved that there is a maximum probability that fails to be perfect .this unambiguous and incorrect interpretation is repeated in many papers ; see note [ 25 ] of ref [ 8 ] for a collection of cases .this is also explicitly asserted in the review [ 2 ] and in the most complete qkd security proof available to date [ 36 ] . this error has never been acknowledged , and the failure probability interpretation is widely perceived to be correct . the valid consequence , eq .( 14 ) , of a guarantee is stated explicitly in [ 35 ] ; however , the incorrect interpretation is maintained as a vague paraphrasing without noting the difference , and an indistinguishability argument is offered for such an interpretation .the security consequences of an incorrect interpretation will be presented in sections v to viii .removing such a misinterpretation is important to obtaining true and proven security . in section iv.a , we will analyze the errors committed in drawing the failure probability interpretation . in section iv.b , we will do the same for the indistinguishability " argument , which is often taken to imply the same incorrect interpretation . in sections iv - viii, we will see in many places how the wrong interpretation misrepresents the security situation , attributing a security guarantee to that it does not provide .the above failure probability claim was drawn from prop .2.1.1 in [ 29 ] , which is the same as lemma 1 in [ 30 ] .it is re - stated as theorem a.6 in [ 35 ] .the claim states that , for two random variables and in the same space with probability distributions and that are marginals of a joint distribution , one may obtain generally , for arbitrary , one obtains the following coupling inequality " [ 37 , sections i.2 and i.5 ] , thus , ( 15 ) amounts to the assertion that the bound ( 16 ) can be achieved by some . 
applying ( 15 ) with and , the probability is taken to be a probability that is not , and the failure probability interpretation of was thus drawn .this is an example of interpreting mathematical symbols incorrectly when addressing real - world applications .there are several other such examples of incorrectly connecting mathematics and the real world in qkd security analysis , say , in connection with indistinguishability " and universal composition " , as we will see later .the symbol merely abbreviates the probability of an event in which the outcome of equals the outcome of from an applicable joint distribution , namely , this does not say anything about the whole and themselves , as the failure probability interpretation claims .more importantly , there is no joint distribution at play in this qkd situation other than the independent product distribution , much less one that achieves the bound ( 16 ) .the incorrect failure probability interpretation reads into ( 15 ) meaning which is not warranted . that it is wrongcan be observed directly [ 5 ] from a that achieves the bound ( 14 ) , which has the additional factor exceeding what is given by the failure probability interpretation .when , the two distributions are necessarily different because if and only if . in what sensethen can hold with a probability when ?such a probabilistic interpretation for given and may be represented mathematically by the existence of a distribution such that , from the theorem of total probability , for a probability , in this case , . because is a probability distribution , eq . ( 18 ) is easily shown [ 38 ] to hold if and only if , for and , for large , up to which varies , eq .( 19 ) implies all take essentially the same value of approximately , and hence , must be nearly uniform .this condition ( 19 ) can not be satisfied for [ 8 ] . for any ,( 19 ) implies a uniformity on that does not follow from simply a guarantee on .specific counter examples can be easily constructed .thus , several errors are committed in the original derivation of the incorrect failure probability interpretation , any of which would invalidate the derivation .we omit a detailed discussion on the first two points , which are rather self evident . 1 .there is no reason to expect that maximizing is in effect so that eq .( 15 ) holds despite ( 16 ) .the mathematical representation of the failure probability interpretation of is not given via a joint distribution , which is irrelevant to such an interpretation .the correct representation of the failure probability interpretation is given by eq .( 18 ) , which can not hold for and which is also not warranted for any because of ( 19 ) .note that from ( 14 ) gives the _ total compromise probability " _ of the whole associated with , which is _ not _ the probability that turns out to be non - uniform , apart from the factor .this is because implies but not the other way around , as we have shown .a quantum trace distance measure obtained by dividing by and called the _ failure probability per bit _ is introduced in [ 35 ] , which clearly gives a substantially lower failure rate than does itself .it is a misleading terminology because it suggests that the bits in are statistically independent .with such an interpretation , the total compromise probability becomes not ( 14 ) but the following : two errors are committed in above obtained from a given . 
instead of applying to as a whole , it is arbitrarily reduced by to give a per bit " level _ and _ is then applied to each bit of independently regardless of the length . as a result , the level is greatly underestimated as the in ( 20 ) . generally , dividing a quantity such as and by the size does not produce a bit - independent quantity , as a matter of course .this incorrect interpretation of failure probability per bit " is used in [ 35 , p.14 ] : 1 . for example , if an implementation of a qkd protocol produces a key at a rate of 1 mbit / s with a failure per bit of , then this protocol can be run for the age of the universe and still have an accumulated failure strictly less than 1 . "the failure probability per bit here is with the of ( 20 ) .the numerical security situation of ( f ) is given in section viii.b .the failure probability per bit interpretation misses the crucial point that the significance of a given level depends strongly on . a level of may be good for but is poor for .on the other hand , the guaranteed level ( 13 ) gives the same bound on the difference from a uniform distribution value independently of the length .thus , a value that appears to be small may actually be relatively large for a long or subsequence .this confusion occurs in the same manner in the following distinguishability advantage interpretation of .the indistinguishability argument was originally used in [ 28 ] and previously to argue that the trace distance in the quantum case or the statistical distance in the classical case is a good measure of how close is to an ideal situation for the users .it is precisely formulated as a distinguishability advantage statement for the binary decision problem of discriminating between the two distributions for and .we will simply consider the case because the quantum detection problem reduces to a classical one once the ( optimal ) quantum measurement is fixed .for the distinguishability advantage interpretation to serve as a functional security criterion , say , on kpa , one must first write down what the interpretation asserts quantitatively for a given problem .it appears , as this section will describe in detail , that the incorrect failure probability interpretation of section iv.a is being asserted .specifically , distinguishability is supposed to provide a justification of the interpretation .in addition to the counter examples of sections iv.a and v.a to the failure probability interpretation of , we show in this section how such a justification is conceptually invalid .consider the well - known classical binary decision problem of deciding between two hypotheses h~0~ and h~1~ from an observed random variable with conditional distribution and for the two hypotheses . the maximum probability of a correct decision is given by where and are the _ a priori _ probabilities of h~0~ and h~1~. in ( 21 ) , the is defined exactly as in ( 12 ) , with and taking the place of and .when , the second term on the right - hand side of ( 21 ) becomes in terms of an usual statistical distance .for this equal _ a priori _ probability case , becomes the distinguishability advantage " of knowing compared to the no observation case with the a posteriori probabilities of h~0~ and h~1~ equal to the _ a priori _ probability .thus , it is thought that if is small , is hardly distinguishable " from .it is easily observed from ( 21 ) that is biased toward hypothesis h~0~ when and similarly for h~1~. 
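the equal-prior form of (21)-(22) is easy to verify numerically. in the sketch below the two distributions, the size of the observation space and the equal a-priori probabilities are arbitrary illustrative choices; the optimal bayes probability of a correct decision coincides with one half plus half the statistical distance, which is the quantity that becomes the "distinguishability advantage".

```python
import numpy as np

rng = np.random.default_rng(2)

# two arbitrary distributions on the same finite observation space
q0 = rng.random(16); q0 /= q0.sum()          # hypothesis H0 (e.g., the "ideal" situation)
q1 = rng.random(16); q1 /= q1.sum()          # hypothesis H1 (the real situation)
p0, p1 = 0.5, 0.5                            # equal a-priori probabilities, as in (22)

# optimal (Bayes) decision: pick the hypothesis with the larger posterior mass
pc_optimal = np.sum(np.maximum(p0 * q0, p1 * q1))
delta = 0.5 * np.sum(np.abs(q0 - q1))        # statistical distance, as in (12)
print(pc_optimal, 0.5 + 0.5 * delta)         # the two numbers coincide, cf. (21)-(22)
```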
when goes to 1 , for h~0~ also goes to 1 , as it should intuitively .it is not known how ( 21 ) is related to the distinguishability advantage when .thus , as it can already been observed from simply the problem formulation , the criterion can only admit a distinguishability advantage interpretation in cryptography , if at all , for , with being the _ a priori _ probability of the hypothesis h~0~ that the situation is perfect for the users and for the actual situation where eve has made an observation described by her .it is our contention that this requirement of is not realistically meaningful , and furthermore , _ no _ quantitative security conclusion on eve s success probabilities can be drawn from such interpretation ; in particular , the bound ( 13 ) or ( 14 ) must be derived mathematically from the mathematical expression of with _ no _ extraneous interpretation .the following already presents that the conclusion of the real situation being indistinguishable " from an ideal one or having distinguishability advantage " can not be validly drawn .the conditional probability whereby the situation is ideal for the case from binary discrimination is given by why would the ideal situation have such a high probability close to for any ?this is because the _ a priori _probability is taken to be 1/2 to begin with even though it should be 0 .indeed , if it makes sense to assign an _ a priori _ probability to the real and ideal situations in a binary discrimination problem , the _ a priori _ probability of the real situation should be , and the ideal situation should be .this is also the conclusion drawn from ( 18)-(19 ) above .however , what if one simply ponders the decision problem with ? then , the conclusion of such a problem can not be applied to any real - world problem .this is because such a discrimination problem has no empirical meaning because we all know we are in the real situation where eve s presence is assured .if this may not be the case , the problem should be formulated as one with all possible unknown probabilities of eve s absence or false alarms " [ 39 ] and not one with a fixed _ a priori _ probability .furthermore , eve never cares to make such a discrimination ; her objective is to learn about .this is another case of reading into mathematics an unwarranted assertion about the real world .we will elaborate this point further in the remainder of this section because there is a similar use of in conventional cryptography that we can not go into in this paper and that lends unwarranted security significance to indistinguishability " .( in particular , correlations between the future bits are not accounted for in the single - bit distinguisher " prediction , similar to what we indicated above at the end of subsection iv.a . )the problem of a metaphysical " distinguishability interpretation can be observed from the fact that there are _ many _ hypothetical situations , say , one with any level , in addition to the ideal case .should we conduct multiple - hypothesis decision making ?why not a binary one with one situation less secure than the real one ? 
why would such a decision allow one to conclude all the features of the decided upon hypothesis , which are simply given by fiat ?one major problem of using such an distinguishability advantage argument is that it _ becomes _ in one s mind an _ indistinguishability _ statement when the distinguishability advantage is small .indeed , the real situation and the ideal situation are then taken to be distinguishable and hence different only with probability .thus , becomes the failure probability interpretation of the previous section !such an explicit interpretation of quantitative indistinguishability as failure probability is common ; see for example [ 40 , p.3 ] .moreover , it appears to be used by many as a valid derivation of the failure probability interpretation of despite the errors in the original derivation discussed in section iv.a and the abundance of counter - examples to such a claim , the reasons for its invalidity notwithstanding .the following may help further clarify what went wrong .there is a common - sense meaning of two items being indistinguishable , " withthe leibniz metaphysical principle concerning the identity of indiscernibles " implicitly used .that such an indistinguishability conclusion can not be drawn from a binary decision problem can be directly observed from the common problem of radar detection as to whether there is an incoming flying object . in a militaristic situation, the object of concern could be an enemy airplane , say , with or without a warhead .the yes - no target detection problem of whether an enemy airplane is present _ can not _ alone determine whether a warhead is on board . in the absence of further information, one can not infer that the airplane has a warhead because that is hypothesis h~1~ in the binary decision problem formulation , which one simply applies _ by hand_.there are many possibilities on the details of the incoming target ; it is not valid to pick one and exclude others and then use binary discrimination to affirm the picked possibility .similarly , the occurrence of an ideal situation is an unwarranted conclusion that one can not make use of in other problems .one has to _ mathematically derive _ a result for a problem from the given mathematical statement .indistinguishability arguments in terms of are supposed to be universally composable " in that they justify the use of in any application to which it would be applied [ 28 ] , [ 29 ] .we would later run into such issues in connection with known - plaintext attacks and error correction . here, we may simply emphasize that there is no such automatic universal composability from or , with or without a distinguishability advantage . even in the casewhereby it is composable , the proof from may be far from trivial , as we will also observe .the main point is that an intuitive interpretation may only serve as a guide to the general situation .valid logical and mathematical deduction from premise to conclusion is required to establish proof .this is especially clear when a quantitative level is desired .summarizing , the distinguishability advantage " justification of operational guarantees from or is incorrect in several ways : 1 .the indistinguishable probability is for a specific binary decision problem , which does not imply that the two situations are indistinguishable in other physical senses .2 . 
in the real world , the _ a priori _ probability for the ideal situation can not be ; instead , it should be . the ideal situation can not be inferred to be the real situation from the binary decision because it includes other features not included in the binary hypothesis testing formulation . the key question to ask concerning the " distinguishability advantage " argument is : what is the quantitative security assertion ? it seems that the answer so far is the failure probability interpretation , to which we have given various counter - examples in section iv.a and will give another in section v . the bound ( 13 ) provides a security guarantee on the security of and its subsequences when eve attacks during its generation process before it is used . as mentioned before , when is used for privacy , it is more important to protect against known - plaintext attacks to maintain the secrecy of when , namely , the remainder of , is known to eve . how does such security follow from the incorrect failure probability interpretation of ? it seems that this issue is addressed explicitly only in [ 35 , section 5.1 ] , with the conclusion that the security is the same as that originally obtained from . indeed , the failure probability interpretation alone , without the indistinguishability and other considerations in [ 35 ] , would appear to give such a conclusion already . thus , regardless of the known to eve , remains uniform to her except for a probability . note that the distinguishability interpretation gives exactly the same quantitative security conclusion as the failure probability interpretation , as noted in section iv.b . such a conclusion is _ incorrect _ , as the following counter - example shows . let be a specific sequence of with probability . we denote the first bits of by and the remaining bits by . let all the other sequences with the first bits have . the other sequences with the first bits different from are assigned a probability as in . then , . when the known in a kpa turns out to be , eve knows that the remainder is with certainty , _ not _ with probability . the underlying reason that the failure probability can not be used to obtain correct results in kpa is that there is no way to account for _ conditioning _ with just such an interpretation , and " universal composition " is a vague argument and not universal . its validity needs to be established for each composition situation . in kpa , there is apparently no composition , and thus , the original result is inferred in [ 35 ] as noted above , which is numerically very incorrect . apparently , kpa security on of ( 6 ) can be obtained directly from ( 13 ) by writing its left - hand side as an average , with being the remainder of apart from ; it is observed from ( 23 ) that the probability guarantee for kpa is now an average over . this is fully in accord with the above counter - example . when we remove the average to obtain an individual guarantee , we need to apply a markov inequality for . the drawing of a specific from possible pacs and a specific from an observation also requires a markov inequality . this will be discussed in section v.c. in section v.d , we will make clear that all these classical results are what quantum results reduce to .
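the counter - example above can be checked numerically . the sketch below uses one concrete ( hypothetical ) probability assignment of the type described , the exact constants of the construction are not reproduced here , and verifies that the statistical distance from uniform is small while the conditional distribution of the remaining bits , given the known prefix , is a point mass .

from itertools import product

n, n1 = 12, 6                               # toy sizes: 12-bit key, 6-bit known prefix
keys = [''.join(b) for b in product('01', repeat=n)]
k1 = '0' * n                                # the special sequence
prefix = k1[:n1]                            # its first n1 bits (the known plaintext)

P = {}
for k in keys:
    if k == k1:
        P[k] = 2.0 ** (-n1)                 # all the mass of this prefix sits on k1
    elif k.startswith(prefix):
        P[k] = 0.0
    else:
        P[k] = 2.0 ** (-n)                  # every other sequence stays uniform

U = 2.0 ** (-n)
d = 0.5 * sum(abs(P[k] - U) for k in keys)
print("statistical distance from uniform:", d)          # about 2**-6, i.e. small

# eve's conditional distribution on the remaining bits once the prefix is known
cond = {k[n1:]: P[k] for k in keys if k.startswith(prefix) and P[k] > 0}
s = sum(cond.values())
print({suffix: pr / s for suffix, pr in cond.items()})   # a point mass: certainty

the distance is about 2 ** ( -6 ) here , yet conditioned on the observed prefix the remainder is known with probability one , not with a probability of the order of the distance .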
the following security question arises : how many bits will eve correctly obtain even though her estimate of or is incorrect as a sequence under a guarantee through ( 13 ) or ( 23 ) ?for example , eve guessing a 4-bit to be when it is actually makes a sequence estimate error but only a one - bit error out of four , not the uniformly random result of two errors .however , an error rate leak in security that differs from the uniformly random level of is equivalent to a non - uniform _ a priori _distribution , which is known to eve .for instance , eve knowing that six out of ten bits of a are correct but not knowing which are the correct bits is equivalent to having an _ a priori _ with a biased probability on each single bit in the case that the bits are independent .hence , the issue must be addressed in assessing ultimate security . in ordinary communications , this is called the _ bit error rate _ ( ber ) issue in coded systems , in contrast to the sequence error rate addressed in most performance analyses .this does not represent a serious issue there because , typically , a sequence error rate itself can already be driven to zero and because ber is in any case an improvement over the sequence error rate .the main information - theoretic problem to cryptography users concerns eve s performance from the users viewpoint , which is _ opposite _ to the performance concern of the users themselves .it turns out that the relative importance of the different issues may be different in cryptography in addition to the fact that the required performance analysis is often more difficult , say , in lower bounding instead of upper bounding eve s error probability .the ber is when under any attack. from the failure probability interpretation of , one would obtain , for any subset of in the absence of known - plaintext attacks , the following bound on such a bit error rate . counter - examples to ( 25 ) can be readily constructed for small . the actual ber needs to be validly bounded from a given .eve s ber , which is less than , gives her information that is not available for a perfect , and its quantitative security consequence needs to be obtained .it is important to observe that the ber is a very important security criterion in addition to those of ( 6 ) ; however , it alone is not sufficient as a security guarantee .all these different probability criteria arise naturally for different security concerns for a given , and all have direct operational meaning .they are perfectly protected against eve when .an approximate bound for the whole can be derived [ 41 ] from standard information theory results through the entropy of .let the bit error probability be where is the probability that the bit in is incorrectly obtained from her regardless of the estimate of . with being the binary entropy function , we have from the fano inequality [ 8] that the right - hand side of ( 27 ) can be bounded via for by neglecting compared to and using the theorem in [ 14 , p.664], a bound on follows from combining eqs .( 27)-(28 ) ; see [ 41 ] for a discussion on the relatively weak bound on such in comparison to sequence errors .note that the bound ( 13 ) for a single - bit does not concern the ber , which is obtained from a sequence estimate of a long with some bits being correct even though the sequence is wrong . for a general subsequence , there is no known result on the ber guarantee from or and none for under kpa with known . 
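the fano - type route to a ber bound can be illustrated as follows . the sketch assumes only that eve s conditional entropy on the n - bit key is at least n times some per - bit value ( the exact quantities entering ( 27)-(28 ) are not reproduced ) ; concavity of the binary entropy then gives h_2 of the average bit error rate at least that per - bit value , and inverting h_2 gives a lower bound on the ber .

import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(h, tol=1e-12):
    # smaller root of h2(p) = h on [0, 1/2], by bisection
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h2(mid) < h else (lo, mid)
    return 0.5 * (lo + hi)

for h_bit in (0.9, 0.5, 0.1):                # hypothetical entropy per bit left to eve
    print("entropy/bit >= %.1f  ->  average ber >= %.4f" % (h_bit, h2_inv(h_bit)))

even when almost a full bit of entropy per bit is left to eve , this route only forces her average ber above roughly 0.32 , far from the uniformly random value of 1/2 , which is the relatively weak bound referred to above .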
the bound ( 25 ) from the failure probability interpretation on the ber for kpa is contradicted by the same counter - example in section v.a. summarizing , it is uncertain what ber guarantee for eve one can have under . useful bounds on eve s ber for these cases are _ open _ problems with basic security significance , as we will see in connection with error correction in section vi . in the industrial control of product manufacturing , the criterion employed is usually the probability that an item fails to meet a pre - set standard , not the average number of failures . the former is a more stringent and operationally meaningful criterion , as we will see . if we have a zero - one random variable , the probability that the variable equals one is the same as its average . otherwise , from the average one may evaluate or bound , via ( 29 ) , the probability that the variable exceeds a desired threshold level . in qkd security proofs , the itself is a probability , as observed in sections iv and v , and it is a random quantity depending on the values of several other random system parameters . if we did not use these two probabilities , we would not know quantitatively what individual probability guarantee we may obtain . we could address such uses of ( 29 ) in the final security guarantee by adjusting the exchange to minimize any " failure probability " of the protocol as follows . consider the of ( 13 ) when is negligible compared to . then , with denoting the conditional probability of given , etc . , and with the average set equal to , one obtains from ( 29 ) and minimization over ( f , g ) the bounds ( 30 ) and ( 33 ) . these bounds are loose ; however , it appears that there is no way to tighten them without further knowledge of the random system parameters . because the ber is a very nonlinear function of the security criterion , it is not known how the average can be converted into an individual guarantee , in contrast to the or case above . of course , we do not even have a ber bound without such an average , except for the whole from ( 27)-(28 ) . the numerical security guarantee from ( 30 ) and ( 33 ) is devastatingly worse than the original - level guarantee . indeed , even with the incorrect failure probability interpretation of discussed in section iv.a , one application of ( 29 ) is required to obtain an individual guarantee on eve s probability of successfully estimating the whole , even without any side information on during its use , as discussed above ; see section viii for numerical examples . at this point , it is appropriate to emphasize that the classical analysis of and that we presented in this paper applies directly to the quantum case . this is because , regardless of the quantum criterion utilized , or otherwise , the criterion reduces to a classical quantity once eve makes her measurement on her quantum probe . the trace distance reduces to a classical statistical distance , and the accessible information reduces to classical mutual information . however , different quantum quantities may lead to the same classical quantity while being essentially different quantum mechanically . this turns out to be the case for the quantum accessible information and the holevo quantity ; a guarantee from the former allows quantum information locking leaks , which is not the case for the latter [ 43 ] . the holevo quantity guarantee is essentially equivalent to the trace distance guarantee .
from this quantum equivalence[ 28 ] , [ 43 ] , one immediately has the following bounds , which establish the essential equivalence of and in a classical protocol and which provide a general security guarantee to classical protocol security proofs via similar to that provided by given in this paper : the in ( 31 ) is an average over the observation in classical protocols , which is automatically included in the quantum trace distance .this is exactly as in the case.in this section , we consider the problem of quantifying the security of the ecc output and the pac output , the generated key in fig .2 , as well as how ecc and pac affect the final key generation rate .the information leaks from error correction and privacy amplification were not considered in the earlier security proofs [ 7 ] , [ 26 ] , [ 27 ] .this is sometimes justified by the invalid reason that the open exchange in these two steps is performed after eve sets her quantum probe .however , eve may make her quantum measurement and key estimate after the open exchange .apparently , the pac step can be rigorously quantified if the ecc step has also been quantified ; however , the ecc step can not be quantified , and there is no hint as to how a rigorous quantification of error correction may be performed in a qkd protocol . before discussing the error correction problem ,we first discuss privacy amplification and its effect on the key rate .the basic idea of privacy amplification is to increase the security level by compressing the input bit sequence into a shorter output bit sequence . intuitively , this is well known to be possible when the input bits are statistically independent to eve .for example , given two bits and , each known to eve with error probability , the bit is known to her with error probability , which is larger than . when the input bits are correlated , if simply from eve s possible attack , the situation is far less simple .useful results can be obtained using linear hashing compression represented by a matrix via the so - called leftover hash lemma [ 44 ] , which has a direct quantum generalization [ 45 ] .the leftover hash lemma for universal hashing , " which covers all pac in use , provides the tradeoff between the -level of and its length by the following formula , with , because it is not known whether greater than the minimum on the right - hand side of ( 2 ) could be obtained , the guaranteed key rate is given by the quantity .note that and .furthermore , the minimum one can obtain is , from , we can simply take the quantum in this paper to be the largest statistical distance that eve may obtain .this leftover hash lemma guarantee is an average over the family of possible hash functions from which the pac is drawn .the specific pac used is openly announced , and the performance is an average over possible codes , which is common in random coding "- type arguments .a specific or level has to be first guaranteed in the security analysis to remove the pac averaging .there are evidently some pacs with poor security , say , whenever the pac matrix is degenerate ( rank less than ) , for which a degeneracy of would leak shannon bits with certainty . if such degeneracy is first tested , a daunting practical task given that is tens of thousands and given that is a multiple of , the resulting family is not known to obey the universal family " condition required for the proof of the leftover hash lemma . 
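a commonly quoted classical form of the leftover hash lemma states that a key of length |k| extracted by universal hashing is within statistical distance ( 1/2 ) 2 ** ( ( |k| - h_min ) / 2 ) of uniform given eve s side information , where h_min is her min - entropy on the input to the pac . the sketch below uses only this form , since the exact expression in ( 35 ) is not reproduced here , and the numbers are hypothetical placeholders ; the point is how much output length must be given up for a smaller level .

import math

def max_key_length(h_min, eps):
    # largest |k| whose guaranteed distance (1/2)*2**((|k| - h_min)/2) is <= eps
    return int(math.floor(h_min + 2.0 * math.log2(2.0 * eps)))

h_min = 100000                         # hypothetical min-entropy of the pac input
for eps in (1e-6, 1e-10, 1e-20):
    k = max_key_length(h_min, eps)
    print("eps = %7.0e  ->  |k| <= %6d  (give up %4d bits)" % (eps, k, h_min - k))

the binding quantity is of course h_min itself , which the rest of the protocol must supply , and the lemma only guarantees the level on average over the hash family ; a poorly chosen specific pac , such as a degenerate matrix , is not excluded by the average .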
a high - probability guarantee on an adequate specific -level is therefore essential .the following inequality evidently holds for the s in fig . 2 : the first inequality in ( 37 ) follows from eve possibly possessing more knowledge from the error correction in estimating .the second inequality follows from privacy amplification being an open many - to - one transformation . as will be discussed in section vi.b , the users could and indeed may have to cover the chosen ecc via shared secret bits ; therefore, one would obtain assuming that _ correctness _( alice and bob agree on the same ) is obtained with a sufficiently high probability . in such a case , when the ecc is covered by an imperfect key , there is no known bound on , as will be observed in section vi.b .hence , there is also _ no _ guarantee on the -level of the final from ( 35 ) , and the pac step justification from ( 35 ) and ( 38 ) is lost because ( 38 ) is no longer valid .the rigorous validity of the final level is correspondingly _ lost _ from simply this problem .note that it is not possible to cover a pac using shared secret bits because this would require a bit cost substantially greater than the number of key bits generated because is typically many times .the open announcement of pac is fully considered in the leftover hash lemma .there is no similar result that would yield the pac information leak automatically from another known approach [ 46 ] .privacy amplification exemplifies the exchange of key rate and privacy level inherent in qkd protocols . for pacsto which the leftover hash lemma is applicable , there is the limit ( 35 ) on how small can be made whereby remains positive . in general , from ( 35 ) , sets a limit , via , on the number of uniform key bits that can be generated , and is constrained by ( 3 ) in a guarantee and by ( 14 ) in a or guarantee on .such an exchange is fundamental .it has not been shown how , and it appears impossible , one obtains a key at a given rate of per round with made arbitrarily close to perfect by increasing a security parameter in either a finite or an asymptotic protocol .in particular , it is not possible to obtain arbitrarily close to from repeated use of linear pac , which is a direct consequence of ( 35)-(36 ) .it is not known whether a pac may exist that leads to a better exchange than ( 35 ) . on the other hand ,substantially more secure keys than those reported in the literature can be obtained from ( 35 ) at the expense of a decreased key rate [ 9 ] . in particular , a near - perfect " key with may be obtained , although that alone does not address the unsolved security issues concerning the ber and ecc .the error correction step is called reconciliation " in the early qkd literature and is to be achieved by an open exchange cascade protocol [ 47 ] .there is no valid quantitative result on cascade [ 48 ] because complicated nonlinearly random problems are involved .furthermore , the difficulty of bounding the resulting means that the subsequent pac step can not be quantified if one uses cascade .the same situation is obtained when the error correction step is performed openly , as further discussed later in this subsection .currently , ecc is universally employed for error correction in qkd protocols . 
in particular , large ldpc codes are used , the performance of which is difficult to analyze [ 44 ] . the problem of ecc information leaking to eve was not mentioned in the earlier security proofs [ 13 ] , [ 26 ] , [ 27 ] , in [ 35 ] or in the recent review [ 2 ] ; however , the added side information of an ecc on would help eve in her estimate of if the ecc is openly known . in particular , if the ecc is too powerful , it may even correct all of eve s errors in . as discussed in section iv.a , for security quantification , one would need to bound , which is an impossible task even classically for any given long ecc . there is a further quantum issue [ 9 ] similar to quantum information locking concerning the accessible information criterion . thus , the _ only _ viable security approach is to cover the ecc using shared secret bits between uses and subtract its cost from to obtain the final generated key rate . indeed , the following formula is currently used : with the factor arbitrarily taken to be , the case being the asymptotic limit . the justification of ( 39 ) is given in [ 36 ] by citing the whole book [ 14 ] , which does not address such reconciliation issues or even eccs . we give the following argument for the case , which appears to be what is intended in the earlier paper [ 49 ] . consider a linear ecc with information digits and code digits [ 50 ] such that the number of parity - check digits is . if one assumes that the from the transformation in fig . 2 can be represented by a binary symmetric channel [ 8 ] with crossover probability given by the qber , then for given by the channel capacity 1 - h_2 ( qber ) , there exists a linear code that would correct the errors in the received bits for large by shannon s channel coding theorem , which is applicable to random coding over linear codes only . hence , the number of parity - check bits that are to be covered by a one - time pad , with of the code being , is |k''| h_2 ( qber ) ; thus , ( 39 ) for the case is obtained . we would first remark that the accounting in ( 39 ) regards the sequence as a codeword of an ecc , which is sometimes explicitly stated in qkd security analysis . in such a situation , covering the parity - check bits is not sufficient to uphold ( 38 ) needed for the pac step . this is because the structural information on the specific ecc used , which is open because it would take an excessive number of shared secret bits to cover it , would induce correlations among the bits in such that it becomes impossible to estimate the increase in to . even the effective itself is changed when it is taken as a codeword . on the other hand , by regarding as the information digits of a linear ecc in systematic form , alice may simply add further parity - check digits and cover them by a one - time pad , hence preserving ( 38 ) . if the covered parity - check digits are assumed to be error free , then ( 39 ) continues to hold .
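the accounting just described can be put into a small sketch . the assumptions are exactly those stated above , a binary symmetric channel with crossover probability equal to the qber and a capacity - achieving linear code , and the numbers ( block length , qber values , the ad hoc factor ) are hypothetical placeholders , not values from the paper .

import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n = 100000                                   # hypothetical length of the corrected sequence
for qber in (0.01, 0.03, 0.05):
    for f in (1.0, 1.2):                     # f = 1 is the asymptotic case argued above
        cost = f * n * h2(qber)
        print("qber = %.2f  f = %.1f  covered parity bits ~ %6.0f  (%.1f%% of n)"
              % (qber, f, cost, 100.0 * cost / n))

the covered bits are a direct subtraction from the generated key , before the further reduction from privacy amplification .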
in reality , the digits have to be error protected for the classical channel used for their transmission . if that channel is taken to have the same error rate given by the qber , a different cost is obtained because the rate of the code now refers to the full transmitted codeword ; the resulting number of covered digits is larger than ( 40 ) . the resulting key rate will be correspondingly smaller . the combined key - rate reduction effect of the pac and of ( 39 ) alone is quite pronounced . in addition to the intrinsic physical inefficiency of qkd , they further severely limit the obtainable key rate in a full protocol . there are several basic problems with such an approach to quantifying ecc security [ 9 ] , [ 51 ] . the assumption of a binary symmetric channel is not valid under general attack by eve ; otherwise , there would have been no problem in quantifying the security since qkd day one . the pulling back of the asymptotic limit to a finite with an ad hoc factor is completely unjustified . although the parity - check covering bit cost in a concrete protocol may be smaller than ( 40 ) when the protocol continues to function correctly ( from other issues , such as correctness , that we do not discuss in this paper ; however , see section vi.c ) , the hand - waving assignment of or in the literature shows that qkd security has _ not _ been rigorously quantified in principle . this is because no correctness is guaranteed if an empirically measured quantity is used for the bit cost in lieu of ( 39 ) . some formal results on information leakage in open eccs are presented in [ 52 ] ; however , ( 39 ) is employed in actual evaluations [ 53 ] . substantially more serious is the following basic security issue . the importance of qkd derives from the fact that key bits can be continuously generated between two users ; in particular , such bits can be used to execute a future qkd protocol . otherwise , one does not obtain effective key generation . we can perform the analysis above for ecc security only by assuming that the shared key bits used to cover the parity - check digits are perfect one - time pad bits . when is not perfect , what would the information leak be ? this problem is _ never _ explicitly addressed in the literature . in the following , we will ascertain whether " universal composition " may be of assistance . universal composition has been based on two different arguments . the standard one [ 28 ] , [ 29 ] is the metric property of or of the quantum trace distance . for application to the present ecc problem , we have the triangle inequality ( 43 ) , where , , and refer to eve s distribution on for the ideal case , the case when a specific ecc is used and the case when no ecc is used . in the quantum situation , the classical would be replaced by the trace distance between the corresponding density operators , i.e. , the quantum counterparts of the classical distributions . from ( 43 ) , we need to bound to obtain a level with eccs , which appears to be an impossible task , and no result has ever been reported for carrying through this universal composition argument . there is no valid proof if is taken to be the level of the key used to cover the ecc .
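the metric step in ( 43 ) is elementary and can be checked numerically ; the sketch below does so for the classical total - variation distance with three hypothetical distributions standing in for eve s distributions in the ideal , no - ecc and specific - ecc cases . what the sketch can not supply , and what the argument needs , is a bound on the middle term , the distance contributed by the openly known ecc .

def tv(P, Q):
    keys = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in keys)

# hypothetical distributions over a 2-bit key
P_ideal = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
P_noecc = {0: 0.28, 1: 0.24, 2: 0.24, 3: 0.24}
P_ecc   = {0: 0.40, 1: 0.30, 2: 0.15, 3: 0.15}

lhs = tv(P_ideal, P_ecc)
rhs = tv(P_ideal, P_noecc) + tv(P_noecc, P_ecc)
print("d(ideal, ecc) = %.3f  <=  d(ideal, no-ecc) + d(no-ecc, ecc) = %.3f" % (lhs, rhs))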
in particular , the ber leaks of discussed in sectionv.b would alone give eve significant side information to improve her estimate of and hence of .there is a complicated nonlinearity involved in these levels .the other argument [ 54 ] uses an incorrect and thus invalid failure probability interpretation of or .if the argument were to be valid , then one would add the -level of the ecc covering key to the overall -level .as we have observed in sections iv and v , some details concerning are not protected by ; however , they are protected under the incorrect failure probability interpretation .in particular , the interactions of the different parts of a protocol indicates that one may not need an entire portion to be correctly estimated to improve an overall estimate on another portion .how a ber leak of would affect eve s success probabilities through the ecc appears to be a complicated function of the given ecc .there is no reason why would equal in the absence of an explicit proof .moreover , such a level can not be generally correct because when the ecc is sufficiently powerful to correct all errors , the parity check digits would reveal all bits of .however , the failure probability derivation of universal composition [ 54 ] needs such further proof .using extraneous interpretation is not an alternative to a valid mathematical deduction . the severity of the -level limit on a qkd key in applications will be described in the following sections on message authentication . in the present ecc case, it appears extremely difficult to derive reliable estimates .one may thereby conclude that the security of the ecc step in a qkd protocol has not been , and appears that it can not be , analyzed quantitatively in a valid manner . as a consequence, the pac step is not justified due to the lack of a rigorous bound on , as discussed in section vii.a .hence , the security of the entire qkd protocol has not been reliably , and certainly not rigorously , quantified .this defect is not one of complexity in numerical evaluation but one of fundamental validity of reasoning . in this section, we will summarize and explain the important basic and practical limits on the exchange between and , including the possible adjustments of what is in a qkd round for such an exchange .indeed , what constitutes a qkd round ?let us first ignore the practical limits on processing long eccs and pacs and simply attempt to determine what is a good choice of .because has to be error corrected , we need to introduce a measure of correctness , namely , the probability that the users agree on the same . in a realistic protocol , there are various system imperfections that would compromise correctness ; however , the necessity of error correction alone implies that long eccs need to be used .this arises from the fact that being broken into small pieces for error correction is equivalent to using a shorter ecc in sequence as a longer ecc , which has never been found to be an efficient method of correcting errors .we simply have to use a sufficiently long ecc , or equivalently a sufficiently long , to achieve an adequate level of correctness , i.e. 
, of correcting all the errors in with a sufficiently high probability .let us simply consider a pac from using the leftover hash lemma ( 35 ) because it is the only known way of quantifying actual security levels .even more generally , from the nature of privacy amplification as bit sequence compression , we can see that a long prior to compression is needed to obtain a good security level using a sufficiently high compression bit ratio .in contrast to ecc , one can break up into shorter pieces and compress each piece .there is no correctness constraint ; the only fundamental limit is whether the of the smaller pieces is sufficiently small to ensure security from ( 35 ) , assuming that ( 38 ) holds . as discussed in sections vi.a - vi.b, this assumption does not hold when the ecc is covered by an imperfect key , as in the qkd case .apart from this crucial issue , ( 35 ) provides the fundamental exchange between the key rate and the security level , other than the need to use the markov inequality multiple times , as discussed in section v.a .the key point of this connection is that it is not possible to have a key that can be made arbitrarily close to perfect by a security parameter , namely , condition ( ii ) of unconditional security in section ii , from only the asymptotic vanishing of mutual information or statistical distance , as explained in sections iii.a and iv.a . with ( 35 ) , the limit on the exchange is explicit .this limit can be relaxed in one direction by sacrificing security to obtain a better key rate with the -smooth entropy formulation commented on in [ 43 ] . however , relaxing security is not satisfactory given the current inadequate values to be discussed in section viii .furthermore , relaxing security for longer keys is what conventional cryptography is apt to do . in practice ,the use of long eccs and pacs is limited by the complexity of the processing involved .both ecc processing and large matrix multiplication have been studied for decades , and it appears that it will be impossible in practice to address eccs on broken into pieces , whereby each of which is significantly longer than , in the foreseeable future . in the absence of a full protocol including message authentication , a qkd _ round _ may thus be defined by the stages of the protocol that check qber and generate a sifted key with subsequent ecc and pac applied , as in fig .2 , the length being limited by current technology . a more important concept of a qkd _ block _ may be defined by the pac through ( 35 ) , with security level determined from for input blocks of length to the pac .thus , has a maximum value of for a round but may be considerably shorter . under the assumption of correctness, the discussion on being broken down into many is based on practical considerations . however, this impacts on security because the blocks within a round may be correlated , and we also need to bound instead of .message authentication , in which a data message is checked to determine whether it has been altered , is often considered as a cryptographic task more important than privacy [ 3 ] , [ 4 ] .a qkd protocol necessarily involves message authentication on the open exchange between the users for basis matching , qber checking , and concerning the choice of ecc and pac after eve sets her probe . 
at the very least, message authentication is required to thwart a man - in - the - middle attack eve may launch by pretending to be alice to bob and bob to alice .the security of a message authentication code ( mac ) , which is a hash function for bit compression , is sometimes based purely on complexity . in qkd protocols ,mac has to have information - theoretic security ; otherwise , it would contradict the qkd claim of being information theoretically secure .a review of information theoretically secure message authentication can be found in [ 55 , ch . 4 ] and [ 56 ] .a brief summary for our purpose is given as follows .for a data message of a given bit length , a data tag of much shorter bit length is obtained by applying a hash function " to , say , by a compression matrix , as in a pac , which is chosen from a given family of hash functions . in a substitution attack in the open tag case , given and , eve finds with .if the is chosen with a uniform secret key , eve s success probability is bounded by when the family of the hash function is an -asu family .concerning both substitution and impersonation attacks , in the latter , eve finds for a given such that for the correct with success probability : there is a general lower bound on the achievable for a given tap bit length that may be achieved : when the key is a qkd key with , it can be shown that [ 57 ] which may go to 1 and be achieved with equality for some .the average of over possible , , is bounded by , as is [ 57 ] : it follows from ( 47 ) that and , not to say for individual , can not be decreased with longer or longer so long as the level of is given .in particular , the authentication security parameter , which allows security to be arbitrarily close to perfect from ( 44 ) , is _ lost _ due to the use of an imperfect .we do not yet know how to rigorously remove the average and average guarantee in ( 47 ) via the markov inequality to obtain an individual guarantee because the problem is nonlinear .if we apply ( 33 ) as a guess , numerically , after averaging is removed via ( 24 ) for obtaining an individual and an individual guarantee , one would need to achieve the same security as a 32-bit from ( 45 ) and ( 47 ) or on a 64-bit .these values are completely unrealistic , as observed in section viii .note , however , that that is simply the level and does not cover ber leaks that provide information on to eve .similar results are available for multiple uses of a hash function with tags covered by an imperfect key with , say , for uses of [ 58 ] : a large number of uses of are needed in one qkd round because many uses are needed for authenticating a long sequence of bits .thus , the security guarantee is further lowered with ( 48 ) .its quantitative effect on the final security of is unknown because the universal composition argument is not valid due to the problem nonlinearity alone .equally significantly , the authentication steps in a qkd protocol have not been specified within the context of the other steps , with its imperfect levels considered .as we have seen , one can not obtain a valid derivation of the final security level by declaring universal composition without explicitly detailing the justification of the argument in context .the execution of a qkd round requires a significant number of shared secret key bits for message authentication and error correction .it is yet unclear how quantitative security would emerge if the key used for such purposes is imperfect , as it must be for most qkd rounds . 
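for the uniform - key baseline , the kind of guarantee involved can be made concrete with a toy information - theoretically secure mac . the sketch below uses the strongly universal family h_{a , b}( m ) = ( a m + b ) mod p over a small prime field , with ( a , b ) the shared secret key ; a substitution attack then succeeds with probability 1/p , of the order of 2 to the minus tag length , matching the type of bound quoted above . the degraded bounds ( 47)-(48 ) for an imperfect qkd key are not reproduced here .

import random

p = 257                                       # tiny prime field; tag is ~8 bits

def tag(m, a, b):
    return (a * m + b) % p

def substitution_attack(trials=100000):
    wins = 0
    for _ in range(trials):
        a, b = random.randrange(p), random.randrange(p)    # uniform secret key
        m = random.randrange(p)
        t = tag(m, a, b)                                   # eve observes (m, t)
        m_forged = (m + 1) % p                             # any message != m
        t_guess = random.randrange(p)                      # any tag guess wins w.p. 1/p here
        wins += (t_guess == tag(m_forged, a, b))
    return wins / trials

print("forgery rate ~ %.4f   (1/p = %.4f)" % (substitution_attack(), 1.0 / p))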
as emphasized in section iv.a , the criterion applies to a key generated in a single qkd round .it would _ not _ be meaningful to cite a level without stating the length of to which it applies , in effect making it a -dependent .eve s maximum possible probability of obtaining the entire -bit is , from ( 14 ) , for security quantification , it is important to make the following two distinctions .first , as discussed in section vi.d , the key length in may refer to that of a qkd round or to that of a qkd block .second , within a block , we can have many different mutually disjoint segments of consecutive bits under attack by eve with bit gaps between them .security is prescribed by whether or not .the maximum probability of leaking different , , within a block is given , from ( 13 ) , by with kpa conditioning , eq .( 50 ) is replaced by an average over on the left - hand side similar to ( 23 ) with respect to ( 13 ) .note the nature of eq .( 50 ) in contrast to the failure probability per bit of [ 35],[36 ] , with such bit failure probability taken to be independent among bits of a block , as in ( 20 ) .the latter vastly underestimates the of ( 50 ) .in particular , it is not the case that a single bit would be leaked with probability , as the failure probability per bit interpretation implies .rather , a number of bits equal to the block length is leaked with probability ( apart from the factor ) .generally , it is misleading to evaluate key rate or security level on a per - bit basis .the actual data rate per unit time should be employed for practical assessment , as is the compromise probability per unit time . to bound eves success probabilities , one may not assume that the blocks within a round are independent ; however , one may make such an assumption for the from different rounds . certainly , the segments within a round are not independent , as shown by ( 50 ) . even when assuming that the blocks are independent , the segment s total compromise probability ( 50 ) is far larger than that given by the failure probability per bit interpretation .the division of by is fortuitous and misleading .the numerical solution is illustrated in the next subsection .we consider here the use of many keys generated in different protocol rounds of a qkd system to guarantee that the worst - case parameter from from each round is used , considering that there is no average on itself , because there is no distribution for the complete .( see also objection b in appendix iii . )when there is a distribution on a random system parameter , we do not employ an average as a security measure for reasons discussed in section v.c .( see also objection c in appendix iii . ) in particular , according to the operational guarantee statement ( og ) of section iii.a , a finite sample average in an experiment is a random quantity on which probability statements can be made instead of approximating it using the nonrandom average .although the maximum of is an unknown nonrandom parameter without a distribution rather than a random parameter [ 39 ] , it is the probability of an event , and we can talk about averages or expectation values [ 41 ] . we can not estimate its spread as in the random parameter case and will simply consider the probability as a fractional average .this is in contrast to the situation wherein a distribution exists and a markov inequality ( 29 ) can be used to produce an operational probability statement from the average ( mean ) . 
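the way the markov inequality ( 29 ) converts such an average into an operational statement can be illustrated with a simple equal split of the exponent ; the paper s actual bounds ( 30 ) and ( 33 ) involve a minimization that is not reproduced here , so the numbers below are indicative only .

# if E[delta] = eps, then pr[ delta >= t ] <= eps / t.  choosing t = eps ** 0.5
# gives an individual level eps ** 0.5 except with probability eps ** 0.5;
# removing a second layer of averaging (e.g. over the pac choice and over
# eve's observation) in the same way pushes the usable level to roughly eps ** (1/3).
for eps in (1e-24, 1e-20, 1e-12):
    print("average %8.1e   one markov step %8.1e   two steps %8.1e"
          % (eps, eps ** 0.5, eps ** (1.0 / 3.0)))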
in the present case , the collection of in different rounds can be given instead .the following average values cited are to be understood as having the same import as a probability strictly speaking . here, we can not disentangle the possible conceptual subtleties of probabilities in real - world applications .( however , see [ 41 ] . )the theoretical numerical values of [ 29 ] for single - photon bb84 provide a tradeoff between key rate and security . is obtained for .if the key rate is bps , a segment leak of a block or a total of bits within a block may be leaked on average every 100 days if the level is individual .the leak becomes 300 blocks per day after one use of the markov inequality from ( 30 ) . against a kpa ,the average leakage becomes one block every 10 seconds from ( 33 ) .the experimental results in [ 59 ] give a key rate of bps and for , which are approximately equal to the values given in [ 36 ] .this amounts to an average maximum of 6 blocks or bits per day leaked against ciphertext - only attacks . against a kpa ,the rate is 100 blocks or bits per day .note that , as shown in ( 50 ) , these leak levels could apply to many different segment leak combinations distributed across a single block .if the segments spread across more than one block , one should compute the leak probability of those within a block from and then multiply them from independence to obtain the total joint leak probability . only observing the block - leak probability, it appears that the above numerical guarantee is far from adequate for almost all applications and is certainly not adequate for commercial banking .it is often argued that is the practical " probability level for guaranteeing impossibility . with realistic numerical values of blocks per day with , a -level guarantee of a practically perfect key ( but only from the viewpoint of ,i.e. , not failure probability " ) in one day of operation would require a -level of for ciphertext - only attacks alone .such a level is 35 orders of magnitude above the available values of from theory alone .there is no indication of how the numerical gap can be closed in any significant manner , again only in theory and even assuming that the security analysis in the literature is completely valid , which is not the case . on the other hand, security can be increased to a near - uniform level by further sacrificing key rate , as presented in [ 9 ] , although that would require a larger than the literature value , which is limited by eccs and other processing complexities ; see section vi.c . 
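the unit conversions behind these statements are elementary and worth making explicit . in the sketch below , the key rate , block length and per - block compromise probabilities are hypothetical placeholders ( they are not the elided values from [ 29 ] , [ 36 ] or [ 59 ] ) ; the point is only how a per - block level translates into expected compromised blocks per day at a given rate .

def blocks_per_day(rate_bps, block_len, eps):
    # expected number of compromised blocks per day of continuous operation
    return (rate_bps / block_len) * eps * 86400

rate, n = 1.0e6, 1.0e6                      # hypothetical: 1 mbps, 10**6-bit blocks
for eps, label in [(1e-7, "averaged per-block level"),
                   (3e-4, "after one markov step"),
                   (1e-1, "a further degraded level")]:
    print("%-28s eps = %7.0e  ->  %10.3g blocks/day"
          % (label, eps, blocks_per_day(rate, n, eps)))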
according to the failure probability per bit interpretation discussed in section iv.a under statement ( f ) ,a failure probability per bit of for all generated bits means that the qkd protocol can be run for the age of the universe and still have an accumulated failure strictly less than 1 " .this conclusion is obtained from the incorrect of ( 20 ) .the block value is not specified .we take for 1 second of operation , which is a sensible value .these numbers imply a possible average leak of bits per day for ciphertext - only attacks and 100 bps for kpas .this strongly contradicts the quote ( f ) that the accumulated failure " is strictly less than 1 over the age of the universe .table 1 compares the numerical guarantees for a -bit block at and the current theoretical [ 36 ] as well as experimental [ 59 ] values at mbps key rates .there are five cases that can be compared : the incorrect failure probability per bit interpretation [ 35 ] , the correct average guarantee from eqs .( 13 ) and ( 50 ) , the individual operational guarantee eqs .( 30 ) and ( 33 ) , the case where is uniform , and the symmetric cipher results ( see app .ii ) from expanding a -bit seed key to bits . for this nearly -fold symmetric key expansion, the seed key cost of bits per bits is needed in qkd for message authentication within the qkd key generation round of .these results show that , even simply as an average guarantee , there are only bits of security in the qkd system for the bits compared to the bits of security for conventional ciphers under ciphertext - only attacks . further discussion is given in appendix ii .the relevance of known - plaintext attacks is discussed in appendices ii.b and ii.c .in this paper , we do not address the problems of the very low efficiency of qkd , some of which are discussed in [ 60 ] and many of which are intrinsic due to the small signal level and resulting sensitivity to disturbance .more significantly , we do not address the physics - based security issues that have not been addressed in the qkd literature but which are fundamental to a valid security claim [ 61 ] .in addition , especially important are the detector hacking attacks [ 62 ] that break most current qkd implementations , including the checking of the bell inequality in establishing epr pairs [ 63 ] .this demonstrates that cryptosystem representation is a tricky and difficult issue in physics - based cryptography , especially in qkd , where many physical details may affect a single photon or a small signal that would not matter for stronger signals .it is not yet clear what would be a reliable justification to ensure that a particular qkd system representation has incorporated all the essential features of simply the cryptosystem operation in a mathematical model that would address hacking , in addition to other extraneous loopholes .this is a point first emphasized in [ 64 ] and is found to be prescient in several ways . in this paper , we have analyzed the fundamental information - theoretic security guarantees in cryptosystems and shown in what ways current qkd security analysis falls short .a brief history of some security works in the literature is given in appendix i , which also contains a summary of the contents of the different sections in the body of the paper . in appendix ii, a brief comparison of qkd with conventional cryptography is given to put the significance of qkd into perspective . 
in appendix iii , some possible points of objection or confusion are addressed .a most important point of our foundational analysis is that security must involve eve s success probabilities in various problems .the incorrect failure probability interpretation dissected in section iv implicitly recognizes such importance , and it has the following faulty security consequences : 1 .known - plaintext attack security is incorrectly quantified , as shown in section v.a .the important criterion concerning eve s bit error rate is incorrectly bounded , as described in section v.b .universal composition is obtained when it is not valid , as in known - plaintext attacks , or when it requires a further justification that appears impossible to provide due to nonlinearity , as in error correction treated in section vi.b .the security situation in message authentication is misrepresented as an individual guarantee , as discussed in section vii .the failure probability per bit interpretation is seriously incorrect , as discussed in section iv.a with numerical security levels illustrated in section iii.b .generally , the failure probability interpretation ascribes substantially improved quantitative security to what can be validly deduced both qualitatively for problems in which the trace distance criterion is yet to provide a guarantee and quantitatively to problems the interpretation does give a guarantee to by neglecting the difference between average and individual guarantees .it appears that current qkd security is fundamentally no different than the uncertain security of conventional mathematics - based cryptography .one may offer plausibility arguments for security and quantify security under some restrictive assumptions ; however , there is no proof against all possible attacks. it may be useful to conduct research to develop new features for a qkd system that would permit a general security proof that is both transparent and valid .it would also be useful to utilize quantum effects on larger signals to obtain information - theoretic security .some such attempts have been undertaken in [ 5 ] and [ 65 ] in the kcq and dbm approaches .it remains to be observed the extent to which qkd can be so broadened usefully .i would like to begin this appendix with the following quotations : + the variety in this field is what makes cryptography such a fascinating area to work on .it is really a mixture of widely different fields .there is always something new to learn , and new ideas come from all directions .it is impossible to understand it all .there is nobody in the world who knows everything about cryptography .there is nt even anybody who knows most of it .we certainly do nt know everything there is to know about the subject of this book .so here is your first lesson in cryptography : keep a critical mind .do nt blindly trust anything , even if it is in print .you ll soon see that this critical mind is an essential ingredient of what we call professional paranoia . "] + it is very easy for people to take criticism of their work as a personal attack , with all the resulting problems . "[ 4 , p. 10 ] these words were written on conventional cryptography .they are even more appropriate for qkd . 
in this appendix, we briefly outline the history of security proofs on bb84-type qkd protocols .there are a very large number of papers on security proofs in qkd , many of which are referenced in [ 2 ] .we will touch upon mainly those that have been mentioned in the body of the paper , including the more influential proofs on the security quantification of concrete qkd systems .we will also take the opportunity to mention some relations between security analysis and qkd experiments thus far and to discuss some major physics security issues not addressed in the body of the paper .security proofs for bb84 are the most well developed in the field .other security proofs share almost all the difficulties bb84 proofs face and more .we will summarize at the end a list of problems that no proof in qkd has yet overcome , with the exception of the kcq - dbm approach ; however , the details of why and how that is possible are yet to appear . it may be noted that security proofs , in qkd or any cryptosystem concerning privacy and key distribution , are a very complicated matter .errors and incompleteness are to be expected during the early stages of their development .these theoretical defects can not be glossed over in cryptography , although such defects are often justifiably neglected in physics and engineering when a final working experimental system is what decides success or failure .security can not be proved experimentally , if only because there are an infinite variety of possible attacks , which can not all be described. there were many surprises in the history of cryptography ; thus, whether there is a valid proof in an important issue , especially in qkd , where provable security appears to be the only real advantage compared to conventional cryptography . as in the case of many mathematical propositions ,it is not always possible to produce counter - examples to the main conclusion .sometimes , the statement is actually true , such as the poincare conjecture and fermat s last theorem , yet a valid proof is a separate matter from assuming the truth . in the body of this paper, we could only produce counter - examples to specific spots of reasoning in a purported proof .we did not give a specific attack that would always succeed .the burden is on those who claim that there is a proof to produce a valid one .one can always change the proof claim to a plausibility claim , and we need to draw sharp boundaries in cryptography .the discussions of this appendix should be read with this in mind . the earliest general bb84 security proofs in [ 13 ] and [ 26 ] are mainly on the security of the sifted key , namely , in fig .earlier versions of [ 13 ] appeared a few years before it did , and [ 66 ] provided an important direction for [ 26 ] .there are several noteworthy problems in these proofs , some of which are misinterpretations from others and not by the authors ; however , such errors have perpetuated ._ first _ , these proofs are asymptotic existence proofs asserting the existence of a protocol that would yield a purportedly perfect key in the limit of long bit length .they use the mutual information criterion , which we have shown in section iii.b cannot lead to such a conclusion by its mere vanishing asymptotically. this conclusion could not be drawn with the trace distance going to either ( section iv ) . however , the prevailing impression is that it could , and the issue is not addressed in the recent review [ 2 ] . 
_second _ , there is no treatment of known - plaintext attacks when the generated key is used for encryption .apparently , the quantum accessible mutual information criterion is not sufficient for proving security against kpas , as discussed in section iv .a weakened protection against kpas is provided by , as presented in section v.b . _third _ , eve s side information from error correction and privacy amplification are not considered and were later addressed in different approaches , as discussed in section vi .the ecc problem remains to be rigorously treated for any type of qkd protocol , of which we call qkd or otherwise ._ fourth _ , these proofs are on qubits ( two - dimensional quantum state spaces ) and lossless systems . in all implementations , we have infinite - dimensional photon state spaces with loss .for example , coherent detection by eve is ruled out by the qubit model .loss is ubiquitous in optical systems .no reason has been offered as to why it would only affect throughput but not security in bb84 , although it is known that it does affect security in b92 [ 1]. see [ 61 ] for further discussions of these and other physics - related security issues . _fifth _ , although [ 26 ] is an existence proof among the class of what is called css eccs with associated privacy amplification , it has been widely taken to have proved the ( asymptotic ) security of any specific ecc and any pac .this error is found later in both experimental and theoretical studies .there are various spots of uncertain validity in the reasoning of these papers .although they are relevant to security , for the sake of this paper , we can assume that they are valid .the main concern in this regard is that the issues involved are not purely mathematical but concern the relation of a mathematical statement to its real - world implication .we have observed some such examples in section iv for cryptographic relations .it is a special problem for qkd in which quantum physics at the small scale is tied to various classical physical or engineering phenomena .an important sequel to [ 26 ] is the widely quoted [ 27 ] , which extends [ 26 ] to include various system imperfections by adjusting the final result in [ 20 ] using the attainable key rate with purported asymptotic perfect key generation .the derivations of these adjustments are brief and heuristic and are based on ad hoc estimates .there is no general formulation of the problem including an imperfect feature that would demonstrate how the original proof would address all possible attacks with such imperfection .the pac in [ 26 ] is a nonlinear hash function ; however , is treated as if it is linear .this [ 27 ] is used as the basis of the security claim on the use of decoy states for laser instead of single - photon sources ; some problems with such a connection are discussed in [ 61 ] . 
in particular , it is not realized that a weak laser pulse is itself coherent and not a mere multi - photon qubit [ 61 ] . ref . [ 27 ] is also used in the security claim of the so - called measurement - device - independent approach [ 25 ] . security proofs for a finite and more specific protocol were developed and culminated in [ 36 ] for lossless bb84 with various imperfections . many approaches to bounding eve s information on the sifted key have been attempted , therein settling on the " smoothed " minimum entropy , which is used in the numerical evaluations in [ 36 ] . such smooth entropy is equivalently eve s maximum probability of obtaining but with greater flexibility in terms of giving up some level of security for a higher key rate . ( the use of these smooth entropies can not increase security by lowering the key rate . ) the trace distance criterion is used because the small kpa leak in the example of [ 31 ] was already considered unacceptable , and the incorrect failure probability claim from a guarantee was maintained . we have discussed in section v.d that and accessible information are indeed very different guarantees in the quantum domain but are essentially equivalent classically from ( 34 ) . the errors in misinterpreting are analyzed in sections iv - v . quantitatively , the numerical values of that were obtained are far from adequate simply on the probability of compromising the entire generated key in a block , as discussed in section viii . the actual security guarantee from is detailed in section v. it is not given by the incorrect failure probability interpretation , and it is not known whether it can cover ber leaks , which , for example , eve could use to attack the qkd - key - covered parity - check digits of a linear ecc , as discussed in section vi.b . the pac information leak is fully considered in [ 36 ] , although the ecc leak is not , as discussed in section vi . there is the serious problem of using an imperfect key for the purpose of covering the ecc parity - check digits mentioned above and for message authentication in future rounds , the latter being discussed in section vii . the approach of [ 26 ] is generalized in [ 45 ] , [ 51 ] , [ 52 ] for a finite protocol . it is difficult to assess the formal results in these papers . in [ 52 ] , the same ad hoc formula for ecc information leaks is used in an actual evaluation , as in [ 36 ] . in a concrete protocol , there are no advantages , only disadvantages , to these css - code - based approaches in addressing the ecc and pac information leak problems as compared to the approach of [ 36 ] . we will simply provide some general remarks to indicate certain problems in the qkd experimental literature and will not dwell on specific analysis of the errors in specific papers ; that is a separate subject matter , i.e. , not that of basic security analysis . given the complexity of qkd security analysis , it is a highly nontrivial task to integrate all the components of a protocol for an experimental system .
to begin with , no complete qkd protocol that includesmessage authentication and error correction with an imperfect qkd key has been analyzed .in particular , the message authentication steps are not interlaced with exactly how the bulk of the protocol runs or with what is being authenticated at what time , and the ecc information leak is only considered by an ad hoc formula without considering the relatively large level of the key used to cover its parity check digits .qkd experiments do not usually concern the entire cryptosystem , the necessary message authentication , or error correction and privacy amplification .often , a key rate is cited with no security level attached , which is nonsensical for a concrete protocol , as we observed in sections ii - iii .part of the cause is apparently the use of [ 26 ] , as discussed in the first point of appendix i.a , with the belief that security can be made arbitrarily close to perfect for a given key rate below the threshold formula of [ 26 ] or [ 27 ] . such key rate results from [ 26 ] , [ 27 ] , with or without system imperfections and assuming that they are completely valid , have yet to consider ecc and pac leaks .more significantly , they are often quoted for a system that employs error correction and privacy amplification methods , which is not a css code .thus , those formulas so quoted are not relevant because they have never been shown in any way to hold outside of css codes , and even then , such proofs are merely existence proofs and do not pin down the working codes .the situation is evidently better for the approach of [ 30 ] , the problems of which we have analyzed in the bulk of this paper and briefly mentioned in appendix i.b . even if we assume that everything is valid , the obtainable security level is insufficient , i.e. , is too large for many applications , as discussed in section viii .although a level guarantee is not complete , it appears to be a useful criterion and needs to be ascertained for any concrete protocol analyzed without giving eve s full success probability profile ( 4 ) .however , dbm [ 65 ] promises a new direct security approach yet to be made public .there are three major security analysis problems that have not yet been solved for any qkd protocol , with exceptions noted below . 1 . in the presence of inevitable losses, it has not been proved that only throughput is affected but not security .when using an imperfect key in executing a qkd protocol , it is not known what the error correction information leak would be .no analysis has ever been given on a full protocol involving message authentication with an imperfect key , therein demonstrating the effects of key imperfection on the security level of the final generated key .note that point ( 1 ) does not apply to cv - qkd ( continuous variable ) , which is also immune to detector blinding attacks [ 62 ] .( we do not discuss such very serious hacking problems in this paper .) however , cv - qkd suffers from other major problems [ 61 ] not found in bb84 .the recent approach [ 67 ] , which allegedly dispenses with the information - disturbance tradeoff without considering even intercept - resend attacks , is subject to all three points .the coherent - state kcq approach in [ 5 ] is not subject to points ( 1)-(2 ) and may not require error correction due to its substantially larger signal level . 
with error correction , the new dbm technique [ 56 ] may be required .by conventional ciphers " , we mean mathematics - based ciphers , which cover essentially all practical ciphers in commercial use [ 3 ] , [ 4 ] .these are different from classical ciphers " , which rely on simply classical physics and include physical - noise - based cryptography such as that described in [ 18 ] , [ 19 ] and [ 20 ] .quantum - physics - based cryptography is yet another gene ; however , by qkd , we mean the smaller subset defined in section i with bb84 as representative .qkd covers key generation and direct encryption with the generated key .we will compare both to some typical conventional ciphers in current use .such a comparison is important for assessing the potential , progress , and future of qkd .qkd has often been contrasted with asymmetric or public key cryptography , which only includes complexity - based security and no information - theoretic security other than that in the sense of ( 7 ) in section iii.a .however , it is substantially more appropriate to compare qkd with symmetric key ciphers [ 8 ] because a pre - shared secret key is needed to execute a qkd protocol other than for the purpose of agent identification .message authentication is needed to prevent man - in - the - middle attacks . for this purpose alone, one would need information theoretically secure message authentication to preserve the overall information - theoretic security of the qkd protocol , which requires a shared secret key .as shown in section vi , the error correction step of a qkd protocol also requires a pre - shared secret key .of course , the qkd - generated key represents a pre - shared key for future protocols .we have shown in section vi how the imperfect security level of such a key prevents a valid security proof and its quantitative level from being obtained . in this subsection , we will simply make some comments on the contrast between public key cryptography and qkd .asymmetric key protocols can be used for both key generation and direct encryption for privacy .they are not used for the latter in practice due to their relative inefficiency compared to symmetric key ciphers .qkd in practice is more inefficient and more complex to operate compared to asymmetric key ciphers due to a number of fundamental reasons such as low signal levels and inevitable large losses .they have the advantage of being provably information theoretically secure , which is however not yet realized , as we show in this paper .the advantage is often claimed that qkd encryption is resistant to future compromise of the secret key in conventional ciphers .that is surely the case in comparison to asymmetric key ciphers because decryption of the public ciphertext may be obtained in the future based on mere computational power .however , qkd has no real advantage in this regard compared to symmetric key ciphers because the shared secret key can simply be deleted permanently .furthermore , when the generated key from qkd is used on a conventional cipher , such a key shares the same problem , if any , as in the case of symmetric key ciphers .symmetric key ciphers can be used for key expansion " , effectively generating new session keys " from a master key , or for privacy encryption . 
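To make the key-expansion picture concrete, here is a minimal sketch of a linear feedback shift register used as a running-key generator; the register length, tap positions, and seed below are arbitrary toy choices and do not correspond to any cipher analyzed in this paper.

```python
def lfsr_keystream(seed_bits, taps, n_out):
    """Expand a seed key into a running key with a Fibonacci-type LFSR.

    seed_bits : list of 0/1 ints (the shared seed key)
    taps      : 0-indexed stage positions XORed to form the feedback bit
    n_out     : number of running-key bits to produce
    """
    state = list(seed_bits)
    out = []
    for _ in range(n_out):
        out.append(state[-1])            # emit the last stage as a keystream bit
        feedback = 0
        for t in taps:
            feedback ^= state[t]         # XOR of the tapped stages
        state = [feedback] + state[:-1]  # shift the register, insert the feedback bit
    return out

# Toy usage: a 16-bit seed expanded into 64 running-key bits. Against a
# ciphertext-only attacker, guessing the entire keystream of a non-degenerate
# cipher succeeds with probability 2**-16 (uniform over the possible seeds).
seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
print("".join(str(b) for b in lfsr_keystream(seed, taps=[15, 13, 12, 10], n_out=64)))
```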
when used for key expansion , they are very similar to qkd generation schematically .they have information - theoretic security because they are similar in their security as the plaintext security under ciphertext - only attacks when the cipher is used for encryption .specifically , the generated ( running ) key sequence from the cipher with a uniform seed key , which can be regarded as a pseudo random number sequence , would have the following probability of leaking the whole to eve : this is obtained because each possible value leads to a different sequence ( non - degenerate cipher ) , which may be used in eq .( 1 ) as the .it is important to realize that this _ is _ information - theoretic security for , and it is very favorable for typical values of from to , as compared to that obtained from the qkd value ( 50 ) with the values in the literature . for comparison to qkd, a block cipher can be run in stream cipher mode for the generation of a running key , as in figsubset leaks depend on the specific conventional cipher . for ( non - degenerate ) linear feedback shift registers [ 3 ], the level is perfect for a single sequence of consecutive bits [ 6 ] . in general , the correlations between bits in are difficult to quantify , whereas a qkd key obtains a security guarantee under ( 13 ) . in any case ,key expansion symmetric key ciphers do not have perfect forward secrecy"[3 ] due to the correlations between bits in .moreover , a qkd - generated key does _ not _ have such secrecy either , especially not at the large level given in the literature , because it is imperfect .the following numerical comparison of the qkd system of [ 59 ] with only a linear feedback shift register ( lfsr ) cipher against ciphertext - only attacks is revealing . with a seed key of only 128 bits ,the level of an lfsr is from ( ii.51 ) for any , which compares quite favorably to for bits in [ 59 ] even before the markov inequality is applied for an individual guarantee .the lsfr protects a segment of up to 128 consecutive bits with perfect security , whereas the system of [ 59 ] only does so at the same level from ( 13 ) .it is not known what the lsfr information - theoretic security is for many scattered segments , and [ 59 ] gives the same probability for segments within a block from ( 50 ) .there are many other uncertain securities in both systems .it is also not clear if one is superior to another security wise .however , it is clear that the lfsr is substantially faster and cheaper to operate .the numerical comparison of qkd and symmetric - key ciphers is included in table 1 of section viii.b . note that there is no kpa in key expansion until the expanded key is used in an application because the plaintext is chosen to be , at least in principle , by the encrypter .this is why there is information - theoretic security in conventional key expansion before the key is used , given the possibility of kpa .non - degenerate symmetric key ciphers in current use do not have any kpa information - theoretic security because only a length of known input bits together with the corresponding ciphertext would uniquely fix in principle .security relies on the complexity of locating . on the other hand ,the qkd key remains secure for sufficiently small from ( 23 ) .note that it may be possible to obtain information - theoretic security against kpa with a conventional cipher .the theoretical possibility is presented in [ 68 , app . 
], especially when the known plaintext is not too long .it has been proposed that the qkd generated key can be used as the seed key in a conventional cipher . in that case , the plaintext so encrypted only obtains the protection of the conventional cipher but worse considering that is no longer perfect .how the imperfection affects the conventional symmetric key cipher security is _unknown_. in any case , as a pure conventional cipher , there is no more information - theoretic security against kpa .we believe the following remarks are important when comparing qkd with conventional cipher security . in many specialized applications ,it does _ not _ seem possible to launch a kpa , in contrast to most commercial applications .examples include military applications with encryption on board an aircraft , a ship , a satellite , or a protected ground station . in such cases ,it is not clear what advantage of significance qkd provides compared to conventional encryption , as discussed in the above subsection with a numerical example .this is especially true given that the qkd security advantage has yet to be rigorously established ; in addition , it is inefficient and is vulnerable to hacking . more broadly , for such specialized applications , it is not clear why kirchhoff s principle [ 4 ] should be assumed . that principle states that the only security-relevant feature of the cryptosystem that an attacker does not know and that the users do is the shared secret key between the usersthe cipher structure and the encryption algorithm are assumed to be openly known .this does _ not _ appear to be a reasonable assumption in military situations .if the encryption structure or algorithm is unknown to the attacker , it appears next to impossible for her to obtain substantial amounts of information for any reasonable cipher the users choose because the possibilities between structures and algorithms are endless and equivalent to a huge number of shared secret bits. they can be readily and often changed under software implementations .even under kpa and kirchoff s principle , there is no known vulnerability of conventional strong ciphers such as aes . 
in specialized applications , a huge number of seed key bits can be pre - stored .weaker ciphers are commonly employed due to their efficiency .the notable security risks are not from the known strong ciphers .is there a serious problem that awaits qkd as its solution ?it appears that efficient bulk encryption of large ( elephant ) data flows in optical links is the one clear area that would benefit from efficient qkd .this appendix addresses some possible objections or concerns on various points of this paper .+ objection a : security is a matter of definition .why is your definition better than other ones ?answer : security is not a matter of definition .the cryptosystem designer must decide on an acceptable probability of a successful attack by eve on any characteristic of the generated key from a qkd round .consider for example of length bits .if eve has a total compromise probability ( for ) of correctly identifying the entire , which is substantially larger than the uniform level of , is this acceptable ?suppose that it is not ; then , regardless of the security definition used at any quantitative level , security is not guaranteed if the total compromise probability above is not ruled out .this is formalized by the operational guarantee statement ( og ) in section iii.a .a theoretic security criterion has to yield operational probability guarantees , which must be the concern of cryptographic security .such an operational guarantee is difficult to obtain and has been ignored in qkd , except through the incorrect failure probability interpretation discussed in detail in section iv .it is also neglected in some but not all information - theoretic security studies in conventional cryptography . in this paper , we detailed some basic operational guarantees for the trace distance ( statistical distance ) criterion ; however , not all important operational guarantees have been covered by .in particular , eve s ber , discussed in section v.b , is not covered .when eve identifies incorrectly as a sequence , she may still correctly obtain , say , 60% of the bits , similar to the case whereby the distribution is known to eve with a per bit error probability of 0.6 , which would not be considered a secure key by most designers .most designers would want a proof against such a possibility at any designed ber level .there are questions concerning the average versus worst - case guarantee , average versus individual guarantee , and security of multiple uses of different keys at given d levels .these questions are discussed in section iii.a , v.c , and viii.b as well as in the following objections b and c. note that eq .( 50 ) with implies that many bits and bit segments may be leaked for operation in one , say , at a key rate of 1 mbps ; see section viii .+ objection b : the average instead of the worst case should be employed in quantifying security leaks .answer : for a rigorous assessment of a problem on performance depending on a parameter , usually , only a relevant upper or lower bound can be obtained over the range of values that may take .the worst - case performance , say , concerning the time complexity of an algorithm or the security level of a cryptosystem provides a guaranteed level of performance that may or may not result from an attack but that can not be exceeded . if the parameter has a probability distribution , one can also discuss the average performance ; see objection c. 
however , it may not be meaningful , in the sense of being applicable to reality , to assign a probability distribution to what is called a _ nonrandom parameter _ " [ 39 ] , which is not described by a probability distribution .this may occur when the parameter appears in only one sample instance with no repeated trials ( although there remains meaning to assign probability to such a situation in various theories of probability [ 41 ] , such as the probability that president john kennedy was shot by more than one gunman .the warren commission addressed such question .see also objection c. ) in decision theory , an unknown nonrandom parameter [39 ] is then used .this often happens , for example , when the parameter takes on a continuum of possibility . in this paper ,the unknown parameter is , namely , eve s distribution on the generated key from her attack , as presented in section iii.a as well as in the beginning of section iv .the function is the parameter under consideration , the range of which has a cardinality of the continuum .more significantly, eve can pick any attack for which there is no distribution , and in any event , the users do not know the distribution or if one exists .thus , we can not average over or , the maximum value of from eq .we also can not average over the of a specific even though that may make sense if only because we do not know the value of that specific .thus , we have to bound as the worst case to provide a valid guarantee .+ objection c : an average can be used for the guarantee instead of a probability . in particular , there is no need to apply a markov inequality to convert an average guarantee into an individual guarantee .answer : some parameters in qkd do have reasonable probability distributions , although only for a given attack in a given round .thus , the choice of pac is taken to be uniform . the known part to eve of in a kpais specified by the marginal distribution of from the joint distribution with eve s observation . in section v.c , we detailed the main reasons why a probabilistic guarantee is more accurate than an average guarantee .one reason is that the average has no operational meaning when the total number of trials ( pertaining to that underlying distribution ) is small , similar to the single - trial case .( think of the above kennedy assassination example in objection b. ) this is codified in the statement ( og ) in section iii.a on operational guarantees .equally significantly , a finite sample average remains a random quantity with its own probability distribution .a probability statement can be made on it to satisfy ( og ) .for example , an estimate from variance information could lead to such an estimate , not further information on the distribution .the markov inequality estimates ( 30 ) and ( 33 ) from the average alone are weak because no other statistical information is available .one may be stuck with a weaker guarantee , such as the average without a probability statement , and even simply relying on a single theoretic criterion without analyzing its proper operational meaning , as has been the case in qkd until now , if that is all one can obtain . however , comparing quantitative security on various characteristics of to the uniform is the concern of rigorous security .a uniform gives far better and far more detailed security guarantees than does a trace distance guarantee , especially at the relatively large level that can be obtained . 
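To make the Markov-inequality step of objection C explicit, the following is a minimal statement written for a nonnegative security quantity D (for instance, the trace-distance level realized in a given round under a given attack) whose average is bounded by \(\bar d\); the threshold \(\sqrt{\bar d}\) is merely the usual balancing choice, not one taken from the papers under discussion.
\[
\Pr\!\left[D \ge a\right] \;\le\; \frac{\mathbb{E}[D]}{a} \;\le\; \frac{\bar d}{a}
\quad\Longrightarrow\quad
\Pr\!\left[\,D \ge \sqrt{\bar d}\,\right] \;\le\; \sqrt{\bar d}.
\]
Thus, an average guarantee \(\bar d\) alone only yields the weaker individual statement that D stays below \(\sqrt{\bar d}\) except with probability at most \(\sqrt{\bar d}\), which is the quantitative weakening referred to above.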
at the very least, it can not be claimed that the qkd - generated key is perfect " , can be made as close to perfect as desired , or is perfect except for a small probability .the many problems presented in this paper should make clear the dangers of such an exaggeration .+ objection d : distinguishability advantage is a great criterion in cryptography .is there a definite counter - example on why it is not satisfactory ?answer : distinguishability advantage is a vague and very misleadingly phrased security guarantee . as detailed in section iv.b , it leads to the an incorrect failure probability interpretation of a statistical distance guarantee ( which the trace distance criterion reduces to upon eve s measurement on her probe ) as a definite and general quantitative consequence .alone , it serves no purpose other than what is given mathematically , namely , a bound on the statistical distance .in particular , there can be no counter - example until one gives the quantitative guarantee that derives from distinguishability advantage . using the failure probability interpretation as its consequence , all the counter - examples to the failure probability interpretation are counter - examples to the distinguishability advantage interpretation .these include the examples in section iv.a and the kpa counter - example in section v.a .distinguishability advantage as a statistical distance bound on is a useful criterion , as demonstrated by this paper .it simply does not have the operational significance that has been ascribed to it .in particular , the bound of eq .( 13 ) that results is the same for a one - bit subsequence of as it is for the whole .this may give the impression that the cryptosystem is substantially more secure than it actually is and apparently led to the incorrect failure probability per bit interpretation discussed in section iv.a , which grossly overestimates security .+ objection e : is nt your kpa counter - example of section v.a not one of known - plaintext attack but one of chosen - plaintext attack ?answer : there is no difference between kpas and chosen - plaintext attacks for the symmetric key additive stream ciphers of fig . 1 , and the counter - example concerns such a cipher .this is because the additive key stream in symmetric key ciphers is blind to the data .a kpa reveals part of the running key that happens to be uncovered from the known data , with the following depending on and .of course , the level provides an average guarantee over , as given in eq .thus , a bad can only occur with a small probability for small ( which reduces to ) .however , such an average needs to be removed for an individual guarantee .the incorrect failure probability interpretation produces an incorrect answer for a given ; see section v.a .+ objection f : the trace distance guarantee may be sufficient in practice .what is the evidence to the contrary ?answer : this paper is concerned with information - theoretic security foundation and rigorous proofs of security , the latter being proclaimed for qkd for almost twenty years .it is not clear what is meant by sufficient in practice " , which would vary from application to application .many problems , including the lack of a real proof on qkd security , are noted in this paper .they constitute evidence of possible practical security problems .there is no such thing as proof by no counter - example yet " .the burden is on those who claim qkd has been proved secure to produce a valid proof for a given model. 
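For reference, the classical statistical distance to which the trace distance reduces after Eve measures her probe, together with its binary-distinguishing reading invoked in objection D, can be written as follows (in our notation):
\[
\delta(P,Q) \;=\; \tfrac12 \sum_{x}\bigl|P(x)-Q(x)\bigr| ,
\qquad
P_{\mathrm{distinguish}} \;=\; \tfrac12 + \tfrac12\,\delta(P,Q),
\]
where \(P_{\mathrm{distinguish}}\) is the optimal probability of telling P from Q when one of the two is presented with equal prior probability. By itself this bounds only that single binary decision; it does not, without further argument, translate into bounds on the probability of compromising particular bits or segments of the key.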
it is the task of cryptanalysis , a major component of cryptology , to scrutinize security , which is rarely performed in the qkd literature apart from implementation issues .+ objection g : why can not security be brought arbitrarily close to perfect by privacy amplification ?answer : the trace distance level is bounded by ( 36 ) in terms of the total compromise probability of the shift key .it can not be made arbitrarily small from the leftover hash lemma .it is not known whether there is any way to make only arbitrarily close to the uniform level ; see section vi.a .in addition , note from sections iii.b and iv before iv.a that asymptotic vanishing of eve s accessible information or the trace distance not only does _ not _ imply that the key is arbitrarily close to perfect but also may even imply that the key suffers from a serious weakness of having a very relatively large .+ objection h : there is no problem in assigning a numerical value to in the error correction cost ( 39 ) .this can be taken from the actual ecc used in the protocol. answer : in that case , there is then no need to present formula ( 39 ) , which is irrelevant to the actual bit cost .this conveys the misleading impression that there is a general justification . as outlined in section vi.b, there is a rationale for ( 39 ) when because of the asymptotic number of bits needed to cover a linear ecc for guaranteed error correction , although only for a binary symmetric channel , which is not obtained under a general attack .the point is that correctness of the round ( the users agree on the same key ) is then guaranteed .when a finite code is used with usually a bit cost of even less than ( 39 ) for , correctness from an ecc can not be theoretically guaranteed and must be established with high probability by other means not given in the security proof of the protocol .this is acceptable in practice whenever it works but can not be confused with a security proof on the model .the assumption must at least be made clear that it is not logically incorporated in the security proof .the more serious problem with error correction is what is focused on in section vi.b ; it has not been quantitatively shown how an imperfect key covering the ecc would degrade security or why the imperfect key bit cost of any quantitative level can be used to account for any bit leaks in any reconciliation procedure .many unstated and strictly invalid assumptions are used in qkd security proofs , as outlined in this paper and in [ 61 ] , any of which would invalidate any claim to proven security .the security of qkd protocols requires a substantial amount of further careful study .objection i : it has not been explained how an imperfect key would affect qkd security when used for error correction and what the overall complexity security becomes when used in a conventional symmetric - key cipher .answer : the first question is a major open problem in qkd security theory .the second occurs when a qkd key is used in ciphers such as aes ; however , it is not very relevant because an imperfect key can only weaken the complexity security compared to a uniform key .the substantive question is what the complexity security becomes if the imperfect seed key is changed more often compared to a uniform key .both problems appear to be very difficult and seemingly not amenable to analysis .this paper never claims to address , let alone solve , all security problems associated with qkd - generated keys .the paper provides some fundamental results on the security 
of any key generation scheme , quantum as well as classical , and notes some serious unsolved problems .whoever claims security has the burden of providing a valid proof .i would like to thank greg kanter for his discussions that helped clarify some of the issues treated in this paper .my cryptography research has been supported by the defense advanced research project agency and the united states air force .h. p. yuen , mathematical modeling of physical and engineering systems in quantum information " , in _ proceedings of the qcmc _ , o. hirota , j. h. shapiro , and m. sasaki , eds , nict press , p.163 - 168 ( 2007 ) .h. p. yuen , on the foundations of quantum key distribution- reply to renner and beyond " , arxiv;1210.2804 , 2012 ; also in the tamagawa university quantum ict research institute bulletin , vol.3 , no.1 , p.1 - 8 , 2013 .m. ben - or , m. horodecki , d.w .leung , d. mayers , and j. oppenheim , universally composable security of quantum key distribution , " second theory of cryptography conference ( tcc ) , lecture notes in comnputer science , vol . 3378 , springer , new york , p. 386406, 2005 ; also quant - ph 0409078 . r. renner and r. konig , universally composable privacy amplification against quantum adversaries , " second theory of cryptography conference ( tcc ) , lecture notes in computer science , vol .3378 , springer , new york , p. 407425, 2005 .h. p. yuen , essential lack of security proof in quantum key distribution " , arxiv:1310.0842v2 , 2013 ; also in _ proceedings of the spie conference on quantum - physics - based information security _ , sep 1013 .k. yamazaki , r. nair , and h. p. yuen , problem of cascade protocol and its application to classical and quantum key generation " in _ proc .8th international conference on quantum communication , measurement , and computing _ , ed .o. hirota , j. h. shapiro , and m. sasaki , nict press , p. 201204( 2007 ) .j. jogenfors , a. m. elhassan , j. abrens , m. bourennane , and j. larsson . hacking the bell test using classical light in energy time entanglement based quantum key distribution " , sci . adv . 1 , : e1500793 , 2015 . | the security issues facing quantum key distribution (qkd ) are explained , herein focusing on those issues that are cryptographic and information theoretic in nature and not those based on physics. the problem of security criteria is addressed . it is demonstrated that an attacker s success probabilities are the fundamental criteria of security that any theoretic security criterion must relate to in order to have operational significance . the errors committed in the prevalent interpretation of the trace distance criterion are analyzed . the security proofs of qkd protocols are discussed and assessed in regard to three main features : their validity , completeness , and adequacy of the achieved numerical security level . problems are identified in all these features . it appears that the qkd security situation is quite different from the common perception that a qkd - generated key is nearly perfectly secure . built into our discussion is a simple but complete quantitative description of the information theoretic security of classical key distribution that is also applicable to the quantum situation . in the appendices , we provide a brief outline of the history of some major qkd security proofs , a rather unfavorable comparison of current qkd proven security with that of conventional symmetric key ciphers , and a list of objections and answers concerning some major points of the paper . 
pacs # : 03.67dd |
entanglement is a genuine quantum correlation between two or more parties , with no analogue in classical physics . during last decadesit has been recognized as a fundamental tool in several quantum information protocols , such as quantum teleportation , quantum cryptography and quantum key distribution , and distributed quantum computing . nowadays , spontaneous parametric down - conversion ( spdc ) , a process where the interaction of a strong pump beam with a nonlinear crystal mediates the emission of two lower - frequency photons ( signal and idler ) , is a very convenient way to generate photonic entanglement .photons generated in spdc can exhibit entanglement in the polarization degree of freedom , frequency and spatial shape .one can also make use of a combination of several degrees of freedom .two - photon entanglement in the polarization degree of freedom is undoubtedly the most common type of generated entanglement , due both to its simplicity , and that it suffices to demonstrate a myriad of important quantum information applications .but the amount of entanglement is restricted to ebit of entropy of entanglement , since each photon of the pair can be generally described by the superposition of two orthogonal polarizations ( two - dimensional hilbert space ) . on the other hand , frequency and spatial entanglementoccurs in an infinite dimensional hilbert space , offering thus the possibility to implement entanglement that inherently lives in a higher dimensional hilbert space ( qudits ) .entangling systems in higher dimensional systems ( frequency and spatial degrees of freedom ) is important both for fundamental and applied reasons .for example , noise and decoherence tend to degrade quickly quantum correlations . however , theoretical investigations predict that physical systems with increasing dimensions can maintain non - classical correlations in the presence of more hostile noise .higher dimensional states can also exhibit unique outstanding features .the potential of higher - dimensional quantum systems for practical applications is clearly illustrated in the demonstration of the so - called _ quantum coin tossing _, where the power of higher dimensional spaces is clearly visible .the amount of spatial entanglement generated depends of the spdc geometry used ( collinear vs non - collinear ) , the length of the nonlinear crystal ( ) and the size of the pump beam ( ) . to obtain an initial estimate ,let us consider a collinear spdc geometry . under certain approximations ,the entropy of entanglement can be calculated analytically .its value can be shown to depend on the ratio , where is the rayleigh range of the pump beam and is its longitudinal wavenumber .therefore , large values of the pump beam waist and short crystals are ingredients for generating high entanglement .however , the use of shorter crystals also reduce the total flux - rate of generated entangled photon pairs .moreover , certain applications might benefit from the use of focused pump beams .for instance , for a mm long stoichiometric lithium tantalate ( slt ) crystal , with pump beam waist m , pump wavelength nm and extraordinary refractive index , one obtains . for a longer crystal of mm , the amount of entanglementis severely reduced to ebits .we put forward here a scheme to generate massive spatial entanglement , i. e. 
, a staggeringly large value of the entropy of entanglement, largely independently of relevant experimental parameters such as the crystal length or the pump beam waist. This would allow one to reach even larger amounts of entanglement than is possible nowadays with the usual configurations, or to attain the same amount of entanglement with other values of the nonlinear crystal length or pump beam waist better suited for specific experiments. Our approach is based on a scheme originally used to increase the bandwidth of parametric down-conversion. A schematic view of the SPDC configuration is shown in Fig. [fig1]. It makes use of chirped quasi-phase-matching (QPM) gratings whose spatial frequency varies linearly along the propagation direction, characterized by its value at the entrance face (z = 0) and by a parameter that represents the degree of linear chirp. The period of the grating changes accordingly with the distance z, so that the chirp parameter can be written in terms of the period at the entrance face of the crystal and the period at its output face. [Figure 1: Scheme of the SPDC configuration. The transverse wave numbers of the signal and idler photons and the grating wave-vectors at the input and output faces of the nonlinear crystal are indicated. The signal and idler photons can have different polarizations or frequencies. The different colors (or different directions of the arrows) represent domains with different signs of the nonlinear coefficient.] The key idea is that at different points along the nonlinear crystal, signal and idler photons with different frequencies and transverse wavenumbers can be generated, since the continuous change of the period of the QPM grating allows the phase-matching conditions to be fulfilled for different frequencies and transverse wavenumbers. If appropriately designed narrow-band interference filters allow us to neglect the frequency degree of freedom of the two-photon state, the linearly chirped QPM grating enhances only the number of spatial modes generated, leading to a corresponding enhancement of the amount of generated spatial entanglement. [Figure 2: Schmidt coefficients for (a) and (b); the nonlinear crystal length (mm) and the pump beam waist are fixed.]
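As a minimal sketch of the linearly chirped grating just described (the sign convention and the explicit expression for the chirp parameter below are our reconstruction, since the original formulas are not reproduced here), one may write
\[
K(z) \;=\; K_{0} - \alpha z ,\qquad
\Lambda(z) \;=\; \frac{2\pi}{K(z)} ,\qquad
\alpha \;=\; \frac{2\pi}{L}\left(\frac{1}{\Lambda_{0}} - \frac{1}{\Lambda_{L}}\right),
\]
where \(K_{0} = 2\pi/\Lambda_{0}\) is the grating's spatial frequency at the entrance face (z = 0), \(\Lambda_{L}\) is the period at the output face (z = L), and \(\alpha\) quantifies the degree of linear chirp.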
The joint amplitude that describes the quantum state of the paired photons generated in a linearly chirped QPM crystal, in the paraxial approximation, is given by Eq. ([state2]), where the prefactor is a normalization constant ensuring proper normalization of the state. Notice that the grating's spatial frequency at the entrance face does not appear explicitly in Eq. ([state2]), since we make use of the fact that phase matching holds at a certain location inside the nonlinear crystal, which in our case turns out to be the input face (z = 0). After integration along the z-axis one obtains an amplitude proportional to
\[
\mathrm{erf}\!\left(\frac{3\sqrt{\alpha}\,L}{4\sqrt{i}}+\frac{|\mathbf{p}-\mathbf{q}|^{2}}{4k_{p}\sqrt{i\alpha}}\right)-\mathrm{erf}\!\left(-\frac{\sqrt{\alpha}\,L}{4\sqrt{i}}+\frac{|\mathbf{p}-\mathbf{q}|^{2}}{4k_{p}\sqrt{i\alpha}}\right),
\]
where erf refers to the error function; we refer to the full expression as Eq. ([state3]). Notice that Eq. ([state3]) is similar to the expression describing the joint spectrum of photon pairs in the frequency domain when the spatial degree of freedom is omitted. The reason is that both expressions originate in the phase-matching conditions along the propagation direction (the z axis). Since all the configuration parameters that define the down-conversion process show rotational symmetry about the propagation direction, the joint amplitude given by Eq. ([state3]) can be written in polar coordinates in the transverse wave-vector domain of the signal and idler photons, in terms of the corresponding azimuthal angles and radial coordinates. The specific dependence of the Schmidt decomposition on the azimuthal variables reflects the conservation of orbital angular momentum in this SPDC configuration, so that a signal photon with OAM winding number l is always accompanied by a corresponding idler photon with winding number -l. The probability of such a coincidence detection for each value of l is the spiral spectrum of the two-photon state, i.e., the set of these probabilities. Recently, the spiral spectra of some selected SPDC configurations have been measured. The Schmidt decomposition of the spiral function is the tool used to quantify the amount of entanglement present: its eigenvalues \(\lambda_n\) are the Schmidt coefficients, and the corresponding eigenvectors are the Schmidt modes. Here we obtain the Schmidt decomposition by means of a singular-value-decomposition method. Once the Schmidt coefficients are obtained, one can compute the entropy of entanglement as \(E = -\sum_n \lambda_n \log_2 \lambda_n\). An estimate of the overall number of spatial modes generated is obtained via the Schmidt number \(K = 1/\sum_n \lambda_n^2\), which can be interpreted as a measure of the effective dimensionality of the system. Finally, the spiral spectrum is obtained from the Schmidt coefficients belonging to each OAM sector. For the sake of comparison, let us consider first the usual case of a QPM SLT crystal with no chirp, i.e., \(\alpha = 0\), and length mm, pumped by a Gaussian beam with beam waist and wavelength nm. In this case, the integration of Eq. ([state2]) leads to a closed-form expression for the joint amplitude. The Schmidt coefficients are plotted in Fig. [fig2](a), and the corresponding spiral spectrum is shown in Fig. [fig3](a). The main contribution to the spiral spectrum comes from the spatial modes with the lowest winding numbers. The entropy of entanglement for this case is ebits and the Schmidt number is . Nonzero values of the chirp parameter lead to an increase of the number of generated modes, as can be readily seen in Fig. [fig2](b) for and m. This broadening effect is also reflected in the corresponding broadening of the spiral spectrum, as shown in Fig. [fig3](b).
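The singular-value-decomposition route to the Schmidt coefficients, the entropy of entanglement, and the Schmidt number described above can be illustrated with a short numerical sketch; the double-Gaussian kernel below is only a stand-in for the actual joint amplitude of Eq. ([state3]) restricted to one OAM sector, and the grid, widths, and units are arbitrary toy choices.

```python
import numpy as np

# Toy joint amplitude A(rho_s, rho_i) on a radial grid; a stand-in for the
# chirped-QPM amplitude restricted to one OAM sector.
N = 400
rho = np.linspace(0.0, 5.0, N)                     # radial transverse wavenumber (arb. units)
RS, RI = np.meshgrid(rho, rho, indexing="ij")
sigma_p, sigma_c = 0.3, 2.0                        # assumed pump and phase-matching widths
A = np.exp(-(RS + RI) ** 2 / (2 * sigma_p ** 2)) * np.exp(-(RS - RI) ** 2 / (2 * sigma_c ** 2))

# Schmidt decomposition via SVD of the discretized kernel
s = np.linalg.svd(A, compute_uv=False)
lam = s ** 2 / np.sum(s ** 2)                      # Schmidt coefficients lambda_n (sum to 1)

nz = lam[lam > 1e-15]
entropy = -np.sum(nz * np.log2(nz))                # entropy of entanglement E in ebits
schmidt_number = 1.0 / np.sum(lam ** 2)            # effective number of modes K
print(f"E = {entropy:.3f} ebits, K = {schmidt_number:.2f}")
```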
Indeed, Fig. [fig4](a) shows that the entropy of entanglement increases with increasingly larger values of the chirp parameter, even though, for a given value of the other parameters, its increase saturates for large values of the chirp. For and , we reach a value of ebits. In contrast, the Schmidt number rises linearly with the chirp parameter, as can be observed in Fig. [fig4](b), for all values of . For sufficiently large values of and , it reaches values of several thousand spatial modes, i.e., for the same and . For large values of the entropy, a further increase requires a much larger increase of the number of spatial modes involved, which explains why an increase of the number of modes produces only a modest increase of the entropy of entanglement. Notice that the spiral spectrum presented in Fig. [fig3](b) is discrete. Nevertheless, it might look continuous, since it results from the presence of several hundred OAM modes with slightly decreasing weights. [Figure 3: Spiral spectra for (a) and (b); the nonlinear crystal length (mm) and the pump beam waist are fixed.] [Figure 4: (a) The entropy of entanglement and (b) the Schmidt number as functions of the chirp parameter, for three values of a second parameter (solid black, dashed blue, and dotted-and-dashed red lines).] We have discussed entanglement in terms of transverse modes that arise from the Schmidt decomposition of the two-photon amplitude; as such, they attain appreciable values over the whole transverse plane. Alternatively, the existing spatial correlations between the signal and idler photons can also be discussed using second-order intensity correlation functions. In this approach, correlations are quantified by the size of the correlated area, i.e., the region where it is highly probable to detect a signal photon provided that its idler twin has been detected with a fixed transverse wave vector. We note that the azimuthal width of the correlated area decreases with the increasing width of the distribution of Schmidt eigenvalues along the OAM winding number. On the other hand, an increasing width of the distribution of Schmidt eigenvalues along the remaining (radial) mode number results in a narrower radial extension of the correlated area. An increase in the number of modes thus results in a diminishing correlation area, both in the radial and azimuthal directions. The correlated area drops to zero in the limit of plane-wave pumping, where it attains the form of a delta function. The use of such correlations in parallel processing of information represents the easiest way to exploit the massively multi-mode character of the generated beams. For the sake of comparison, when considering frequency entanglement, the entropy of entanglement depends on the ratio between the bandwidth of the pump beam (typically MHz) and the bandwidth of the down-converted two-photon state. For type II SPDC, one typically has values of .
increasing the bandwidth of the two - photon state, one can reach values of thz , therefore allowing typical ratios greater than , with .in conclusion , we have presented a new way to increase significantly the amount of two - photon spatial entanglement generated in spdc by means of the use of chirped quasi - phase - matching nonlinear crystals .this opens the door to the generation of high entanglement under various experimental conditions , such as different crystal lengths and sizes of the pump beam .qpm engineering can also be an enabling tool to generate truly massive spatial entanglement , with state of the art qpm technologies potentially allowing to reach entropies of entanglement of tens of ebits .therefore , the promise of reaching extremely high degrees of entanglement , offered by the use of the spatial degree of freedom , can be fulfilled with the scheme put forward here .the experimental tools required are available nowadays .the use of extremely high degrees of spatial entanglement , as consider here , would demand the implementation of high aperture optical systems .for instance , for a spatial bandwidth of , the aperture required for nm is .the shaping of qpm gratings are commonly used in the area of non - linear optics for multiple applications such as beam and pulse shaping , harmonic generation and all - optical processing . in the realm of quantum optics ,its uses are not so widespread , even though qpm engineering has been considered , and experimentally demonstrated , as a tool for spatial and frequency control of entangled photons . in view of the results obtained here concerning the enhancement of the degree of spatial entanglement, it could be possible to devise new types of gratings that turn out to be beneficial for other applications in the area of quantum optics .this work was supported by the government of spain ( project fis2010 - 14831 ) and the european union ( project phorbitech , fet - open 255914 ) .j. s. thanks the project fi - dgr 2011 of the catalan government .this work has also supported in part by projects cost oc 09026 , cz.1.05/2.1.00/03.0058 of the ministry of education , youth and sports of the czech republic and by project prf-2012 - 003 of palack university . c. h. bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , phys .lett . * 70 * , 1895 ( 1993 ) .a. k. ekert , phys .lett . * 67 * , 661 ( 1991 ) .g. ribordy , j. brendel , j. d. gautier , n. gisin , and h. zbinden , phys .a * 63 * , 012309 ( 2000 ) .a. serafini , s. mancini , and s. bose , phys .lett . * 96 * , 010503 ( 2006 ) .j. p. torres , k. banaszek and i. a. walmsley , progress in optics * 56 * ( chapter v ) , 227 ( 2011 ) .p. g. kwiat , k. mattle , h. weinfurter , a. zeilinger , a. v. sergienko , and y. shih , phys .lett . * 75 * , 4337 ( 1995 ) . c. k. law , i. a. walmsley , and j. h. eberly , phys .84 * , 5304 ( 2000 ) .h. h. arnaut and g. a. barbosa , phys .* 85 * * , 286 ( 2000 ) .a. mair , a. vaziri , g. weihs and a. zeilinger , nature * 412 * , 313 ( 2001 ) .j. t. barreiro , n. k. langford , n. a. peters , and p. g. kwiat , phys .lett . * 95 * , 260501 ( 2005 ) .e. nagali , f. sciarrino , f. de martini , l. marrucci , b. piccirillo , e. karimi and e. santamato , phys .103 * , 013601 ( 2009 ) . 
for a two - photon state with density matrix ,the entropy of entanglement is defined as , where is the partial trace over the variables describing subsystem of the global density matrix .the entropy of entanglement of a maximally entangled quantum state , whose two parties live in a -dimensional system , is .since the state of polarization of a single photon is a two - dimensional system , the maximum entropy of entanglement is .d. kaszlikowski , p. gnacinski , m. zukowski , w. miklaszewski , and a. zeilinger , phys .* 85 * * , 4418 ( 2000 ) .d. collins , n. gisin , n. linden , s. massar , and s. popescu , phys .* 88 * * , 040404 ( 2002 ) .g. molina - terriza , a. vaziri , r. ursin and a. zeilinger , phys . rev . lett . ** 94 * * , 040501 ( 2005 ) .the approximation consist of substituting the sinc function appearing later on in eq .( [ state4 ] ) by a gaussian function , i.e. }$ ] with , so that both functions coincide at the -intensity . for a detailed calculation ,see k. w. chan , j. p. torres , and j. h. eberly , phys .a * 75 * , 050101(r ) ( 2007 ) .s. s. r. oemrawsingh , x. ma , d. voigt , a. aiello , e. r. eliel , g. w. t hooft , and j. p. woerdman , phys .lett . * 95 * , 240501 ( 2005 ) .a. bruner , d. eger , m. b. oron , p. blau , m. katz , and s. ruschin , opt . lett . * 28 * , 194 ( 2003 ) .s. carrasco , j. p. torres , l. torner , a. sergienko , b. e. a. saleh , and m. c. teich , opt . lett . * 29 * , 2429 ( 2004 ) .m. b. nasr , s. carrasco , b. e. a. saleh , a. v. sergienko , m. c. teich , j. p. torres , l. torner , d. s. hum , and m. m. fejer , phys .* 100 * , 183601 ( 2008 ) .j. svozilk , and j. peina jr . , phys .a * 80 * , 023819 ( 2009 ) . c. i. osorio , g. molina - terriza , and j. p. torres , phys .a * 77 * , 015810 ( 2008 ) .j. p. torres , a. alexandrescu , and l. torner , phys . rev . a * 68 * , 050301 ( 2003 ) . h. di lorenzo pires , h. c. b. florijn , and m. p. van exter , phys . rev104 * , 020505 ( 2010 ) .a. ekert and p. l. knight , am .* 63 * , 415 ( 1995 ) .walborn , a.n .de oliveira , s. padua , and c.h .monken , phys .lett * 90 * , 143601 ( 2003 ) s. parker , s. bose , and m. b. plenio , phys .a * 61 * , 032305 ( 2000 ) .m. hamar , j. peina jr . , o. haderka , and v. michlek , phys .a * 81 * , 043827 ( 2010 ) .j. p. torres , a. alexandrescu , s. carrasco , and l. torner , opt .* 29 * , 376 ( 2004 ) .x. q. yu , p. xu , z. d. xie , j. f. wang , h.y .leng , j. s. zhao , s. n. zhu , and n. b. ming , phys . rev* 101 * , 233601 ( 2008 ) . | by making use of the spatial shape of paired photons , parametric down - conversion allows the generation of two - photon entanglement in a multidimensional hilbert space . how much entanglement can be generated in this way ? in principle , the infinite - dimensional nature of the spatial degree of freedom renders unbounded the amount of entanglement available . however , in practice , the specific configuration used , namely its geometry , the length of the nonlinear crystal and the size of the pump beam , can severely limit the value that could be achieved . here we show that the use of quasi - phase - matching engineering allows to increase the amount of entanglement generated , reaching values of tens of ebits of entropy of entanglement under different conditions . our work thus opens a way to fulfill the promise of generating massive spatial entanglement under a diverse variety of circumstances , some more favorable for its experimental implementation . |
despite the eternal doubts whether social phenomena can be described quantitatively , mathematical modeling of interpersonal relations has long tradition .the idea is tempting : a bottom - up path from understanding to control and predict our own behaviour seems to promise a higher level of human existence . on the other hand ,any progress on this path is absorbed by the society in a kind of self - transformation , what makes the object of research even more complex .as scholars belong to the society , an observer can not be separated from what is observed ; this precludes the idea of an objective observation . yet , for scientists , the latter idea is paradigmatic ; their strategy is to conduct research as usual . as a consequence ,the hermeneutically oriented multi - branched mainstream is accompanied by a number of works based on agent - based simulations , statistical physics and traditional , positivist sociology .theory of the heider balance is one of such mathematical strongholds in the body of social science .based on the concept of the removal of cognitive dissonance ( rcd ) , it has got a combinatorial formulation in terms of graph theory . in a nutshell ,the concept is as follows : interpersonal relations in a social network are either friendly or hostile .the relations evolve over time as to implement four rules : friend of my friend is my friend , friend of my enemy is my enemy , enemy of my friend is my enemy , enemy of my enemy is my friend . in a final balanced state ,the cognitive dissonance is absent .there , the network is divided into two parts , both internally friendly and mutually hostile . a special case when all relations are friendly ( so - called paradise )is also allowed .more recently , monte - carlo based discrete algorithms have been worked out to simulate the dynamics of the process of rcd on a social network . in parallel , a set of deterministic differential equations has been proposed as a model of rcd .this approach has been generalized to include asymmetric relations as well as the mechanism of direct reciprocity , which was supposed to remove the asymmetry .our aim here is to add yet another mechanism , i.e. an influence of the rate of the change of relations to the relations themselves .this mechanism has been described years ago by elliot aronson and termed as the gain and loss of esteem ; see also the description and literature in .briefly , an increase of sympathy of an actor about another actor appears to be an independent cause of sympathy of the actor about the actor .by independent we mean : not coming from , but from the time derivative . in summary , both the relation itself and its time derivative influence the relation .the efficiencies of these impacts and the rate of rcd play the roles of parameters in our model .we note that the concept of gain and loss of esteem has triggered a scientific discussion which is not finished until now . among implications ,let us mention two : for man - machine cooperation and for evaluations of leaders as dependent on the time evolution of their behaviour ( the so - called st .augustine effect ) . in our opinion , it is worthwhile to try to include the effect in the existing theory of rcd. 
here we are interested in three phases of the system of interpersonal relations : the jammed phase , the balanced phase with two mutually hostile groups , and the phase of so - called paradise , where all relations are friendly .the two latter phases are known from early considerations of the heider balance in a social network .the jammed phase is the stationary state of relations , where the heider balance is not attained .jammed states have been discussed for the first time for the case of symmetric relations ( i.e. ) by tibor antal and coworkers .the authors have shown , that this kind of states appear rather rarely , and the probability of reaching such a state decreases with the system size .a similar conclusion has been drawn also for the evolution ruled by the differential equations .our goal here is twofold .first , we provide a proof that with asymmetric relations , the number of jammed states is at least times larger than the number of balanced states , where is the number of nodes .the conclusion of this proof is that if the jammed phase is possible , it is generic .second , we construct a phase diagram , with the model parameters as coordinates , where ranges of parameters are identified where the three above states appear . in the next sectionwe give a proof that for asymmetric relations the majority of stationary states are jammed states .third section is devoted to the generalized form of the model differential equations which govern rcd , with all discussed mechanisms included .numerical results on the phase diagram are shown in section 4 .final remarks close the text .in , a discrete algorithm ( the so - called constrained triad dynamics ) has been proposed to model rcd . for each pair of nodes of a fully connected graph , an initial sign ( friendly or hostile )is assigned to the link between nodes and .for this initial configuration , the number of imbalanced triads ( such that ) is calculated . this number can be seen as an analogue to energy ; let us denote it as .the evolution goes as follows .a link is selected randomly .if the change of its sign lowers , the sign is changed ; if increases , the change is withdrawn ; if remains constant , the sign is changed with probability 1/2 .next , another link is selected , and so on . as a consequence , in a local minimum of system can not evolve ; the state is either balanced or jammed .according to , the minimal network size where jammed states are possible is nodes .an example of jammed states for is the case of three separate triads , where each link within each triad is positive ( friendly ) and each link between nodes of different triad is negative ( hostile ) .as shown in , each modification of a link leads to an increase of the number of unbalanced triads above the minimal value .this can be seen easily when we notice that for each unbalanced triangle , each its vertex is in different triad ; hence the number of these triangles is .a simple inspection tells us that a change of any friendly link to hostile state enhances energy to , while any opposite change gives ; hence the configuration is stable .the same can be shown for the differential equations ; for the purpose of this check we can limit the equations to the simplest form the condition of stability of a saturated state where for all links is that ; either is positive and increases , or it is negative and decreases . 
for the above jammed state , all positive links are within a triad , and each negative link is between different triads .provided that the product of the functions is equal to one , the r.h.s .( [ simple ] ) is equal to for positive links , and to for negative links ; hence this configuration is stable again .this equivalence of the results for two different frames of calculation , discrete and continuous , comes from the fact that in both frames the state of one link can be changed independently on other links .this is not so , if we modify a whole triangle , as in the local triad dynamics ; there , jammed states do not appear .now let us consider jammed states in asymetric matrix of relations , where the condition does not hold . in ,a series of such states has been identified by a numerical check . herewe show that all of them can be captured in the following scheme .take one node ( say -th one ) , and set all links , and ( or the opposite ) . as the -th node can be selected in ways ,there are such states. now pick up one node again and divide the remaining network of nodes into two non - empty , internally friendly and mutually hostile sets , with nodes marked by and , respectively . without the preselected -th node , the system is balanced . the division can be performed in ways .the relations of the -th node with the others are as follows : either , , , , or , , , .these two options mean that we have jammed states . in summary , we get jammed states .to means that is positive ( friendly ) .a broken arrow means that the relation is negative ( hostile ) . ]the stability of these jammed states can be verified analytically .consider a configuration of nodes in part i of network , nodes in part ii , and one pre - selected node ; then , .as above , the system is balanced ( against ) except node .suppose that for i and ii we have and , as in fig .again , the stability can be verified by inspection of the r.h.s .( [ simple ] ) for all links .first , each internal links within i is positive .its time derivative is ( through other nodes in i ) , plus ( through nodes in ii ) , minus one ( through 1 ) ; the sum is positive .( writing through node , we mean the contribution to the time derivative of . )the same applies to links within ii . for ( positive ) ,its time derivative is ( through nodes in i ) , plus ( through nodes in ii ) , what is positive . for ( positive ) ,its time derivative is ( through other nodes of ii ) , plus ( through nodes in i , the product of two negative links in each time ) , what is positive . in the same way , for and (both negative ) we get , what is negative . on the other hand , there is only balanced states , including the paradise state of all relations friendly .therefore , the number of jammed states is at least times larger , than the number of balanced states .we write at least because the numerical identification of jammed states has been done for ; it is likely , than for larger systems we would observe even more complex configurations . 
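The constrained triad dynamics described earlier can be stated compactly in code; the following is a minimal sketch for symmetric ±1 relations on a complete graph (the network size, number of update attempts, and random seed are arbitrary choices). The run halts either in a balanced state, with zero unbalanced triads, or, for long enough runs, in a jammed local minimum.

```python
import numpy as np

def unbalanced_triads(s):
    """Energy E: the number of triads (i, j, k) with s_ij * s_jk * s_ik < 0."""
    n = s.shape[0]
    e = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if s[i, j] * s[j, k] * s[i, k] < 0:
                    e += 1
    return e

def constrained_triad_dynamics(n=9, max_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))             # random initial relations
    s = np.triu(s, 1); s = s + s.T                   # symmetric, zero diagonal
    e = unbalanced_triads(s)
    for _ in range(max_steps):
        i, j = rng.choice(n, size=2, replace=False)  # pick a random link
        s[i, j] *= -1; s[j, i] *= -1                 # tentatively flip its sign
        e_new = unbalanced_triads(s)
        if e_new < e or (e_new == e and rng.random() < 0.5):
            e = e_new                                # accept the flip
        else:
            s[i, j] *= -1; s[j, i] *= -1             # withdraw the change
        if e == 0:                                   # Heider balance reached
            break
    return s, e   # e > 0 on exit means the run stopped away from balance (jammed for long runs)

links, energy = constrained_triad_dynamics()
print("final number of unbalanced triads:", energy)
```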
yet, the numerical time increases with the number of configurations , which is .with all three mechanisms built into the equation , the time derivative of the relation is \label{main}\ ] ] here the coefficient is a measure of intensity of the process of evening the relations between and out because of direct reciprocity .accordingly , the coefficient is a measure of intensity of rcd .the former mechanism has been introduced in ; for initially asymmetric relations , its appearance is a necessary condition for the heider balance .varying , we control the mutual rate of the two processes .however , in stationary states the time derivatives of the relations are equal to zero ; therefore the third coefficient , which is a measure of importance of the dynamics , is kept independent on .the prefactor plays the same role as the product of functions in eq .[ simple ] , i.e. to keep the relations in the prescribed range . for numerical calculations ,this prefactor is more feasible .it is not convenient to have time derivatives on the r.h.s . of an equation .then , after a short algebra , we get \right ] \end{multlined } \label{main2}\ ] ] it is easy to notice that close to the stationary state , where ( ) , the equation reduces to the case where . yet , as we have found already in , the action of the mechanism of direct reciprocity is to reduce the asymmetry of the relations before the system is trapped in a jammed state .if the coefficient is too small , the process is too slow and the heider balance is not attained .therefore what is meaningful for the final state is the mutual rate of the processes .below we present the results of the numerical solution of the system of equations which display the time evolution of .how the mechanism of the gain and loss of esteem modifies the action of the direct reciprocity ?to solve this issue , a series of numerical solutions of eqs .( [ main2 ] ) have been performed , for different values of the parameters and and different probability distributions of the initial values of the relations . as a rule ,the distributions are uniform and different from zero in the range ; then , is yet another parameter .the width of the distribution is chosen to be for all calculations .if the width is too small , the effect of asymmetry is reduced , and too large values do not allow to modify because the values of are limited to the range . for each set of trajectories , , we intend to qualify the final state as one of three phases : the paradise state , the balanced state and the jammed state . as a rule ,after a predictable number of time steps the system attained the saturated state where all relations were close to .exceptions from this rule have been found to be very rare and almost all of them happened for .numerical inspection allowed to classify these exceptions as examples of states which varied periodically in time , as identified in .these cases have been eliminated from the statistics .the state of paradise has been identified from the condition that all relations are .similarly , the criterion of the balanced state is that the number of unbalanced triads ( ) is zero .if neither of these conditions is fulfilled , the state is classified as a jammed state . in all but one cases ,the jammed character of the state was equivalent to a clear asymmetry of the relations , i.e. . 
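As a minimal numerical sketch of how such stationary states can be obtained and classified into the three phases introduced above (we integrate only a plain RCD term with a saturating prefactor, i.e., neither the reciprocity term nor the esteem term of the full model, and the step size, range R, and initial spread are arbitrary toy choices):

```python
import numpy as np

def rcd_evolve(n=20, R=5.0, r0=0.5, dt=0.005, steps=40000, seed=1):
    """Euler integration of dr_ij/dt = (1 - (r_ij/R)^2) * sum_k r_ik r_kj (plain RCD term only)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(-r0, r0, size=(n, n))           # asymmetric initial relations
    np.fill_diagonal(r, 0.0)
    for _ in range(steps):
        drive = r @ r                               # (r @ r)[i, j] = sum_k r_ik * r_kj
        np.fill_diagonal(drive, 0.0)
        r += dt * (1.0 - (r / R) ** 2) * drive
        r = np.clip(r, -R, R)                       # keep relations in [-R, R]
        np.fill_diagonal(r, 0.0)
    return r

def classify(r):
    """Label the saturated state as 'paradise', 'balanced' (Heider balance), or 'jammed'."""
    n = r.shape[0]
    s = np.sign(r)
    off = ~np.eye(n, dtype=bool)
    if np.all(s[off] > 0):
        return "paradise"
    unbalanced = sum(s[i, j] * s[j, k] * s[i, k] < 0
                     for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))
    return "balanced" if np.array_equal(s, s.T) and unbalanced == 0 else "jammed"

print(classify(rcd_evolve()))
```

In the full model of Eq. ([main2]) the direct-reciprocity and esteem terms are added to this drive, but the same classification of the saturated relations applies.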
the very rare exception was that symmetric relations happened to appear simultaneously with a non - zero number of unbalanced triads ; such states have been discussed in . the case of three separated triads , found in , is thoroughly discussed in our section 2 . most of our simulations have been performed for nodes , which is equivalent to 380 equations of motion . this choice is motivated by our observation that the accuracy of the numerical solution ( limited by the length of the time steps ) should be high enough to prevent a jump from one final state of all links to another . this condition in turn makes the time of calculation quite long if the number of nodes is about 30 . below , such jumps have not been observed . our aim here is the phase diagram , i.e. the ranges of parameters where each of the three phases appears . here we work with three parameters : , and . we proceed as follows : for a fixed value of all parameters , we perform a series of simulations and count how many times each of the three phases appears . the boundary between two phases is set at the set of parameters where each of the two phases appears with the same frequency . the values of are identified from a linear fit between 30 and 70 percent of appearance of a given phase . ( figure : phase diagram with the phases j , hb , p ( jammed , balanced , paradise ) ( upper plot ) and an inset ( lower plot ) ; the lines are for 0.0 , 0.1 and 0.2 . ) numerical results of these investigations are shown in fig . ( [ f2 ] ) . there are three plots of for three different values of . the value is the critical value ; for , the jammed phase is rarer than the alternative balanced state or the paradise state . we see that for larger than about 0.06 , the paradise phase appears for all values of ; this means that the evolution drives all relations to be friendly ( equal ) . of course , asymmetry vanishes there , even if the symmetrizing mechanism of direct reciprocity is not present for . below close to , the obtained line is the boundary between the jammed phase ( ) and the balanced phase ( ) . an important result is that when the coefficient increases , decreases . a less distinct effect is that the transition to the paradise phase is slightly shifted towards smaller when increases . the proof that the majority of stationary states are unbalanced ( jammed ) , given in section 2 , highlights the role of the asymmetry of the relations . recall that in the case of symmetric relations , jammed states are not generic ; this is so both for the discrete and the continuous dynamics . basically , however , experimentally collected data on relations are not symmetric , as discussed in ; a recent example has been reported in . exceptions from this rule can be induced by the experimental method , as when the intensity of communication in pairs is measured . a jammed state means that the heider balance is not attained ; we conclude that this is so in most stationary states of networks of social relations . as discussed in , in some circumstances the lack of balance can prevent conflicts . there are two aspects of the phase diagram described above : the mathematical and the sociological one . let us start with the mathematics . below the critical value of , the mechanism driven by direct reciprocity is not efficient and the system is stuck in a jammed state .
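the final - state classification described above ( paradise if all relations saturate near +1 , balanced if no unbalanced triads remain , jammed otherwise ) is simple to implement . in the sketch below the saturation tolerance and the use of the sign of the symmetrized relations for the triad test are assumptions made only for illustration ; near a balanced final state x_ij and x_ji nearly coincide , which is what motivates the symmetrization .

```python
# a minimal sketch of the final-state classification described above (python/numpy).
import numpy as np
import itertools

def count_unbalanced_triads(x):
    s = np.sign(x + x.T)                          # symmetrized signs of the relations
    n = x.shape[0]
    return sum(1 for i, j, k in itertools.combinations(range(n), 3)
               if s[i, j] * s[j, k] * s[i, k] < 0)

def classify(x, tol=0.05):
    off = ~np.eye(x.shape[0], dtype=bool)
    if np.all(x[off] > 1.0 - tol):
        return "paradise"                         # all relations friendly
    if count_unbalanced_triads(x) == 0:
        return "balanced"                         # heider balance: no unbalanced triads
    return "jammed"                               # neither condition holds

# demo on a balanced two-clique configuration (two internally friendly,
# mutually hostile groups):
n, n1 = 10, 4
x_demo = np.array([[0.0 if i == j else (1.0 if (i < n1) == (j < n1) else -1.0)
                    for j in range(n)] for i in range(n)])
print(classify(x_demo))                           # expected: "balanced"
```

the boundary values reported above can then be estimated by sweeping one parameter , recording the fraction of runs falling into each phase , and fitting a line to the 30 - 70 percent window , as described in the text .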
what matters is the ratio of efficiencies of both mechanisms , the one proportional to ( direct reciprocity ) and the one proportional to ( rcd ) . in eq .( [ main ] ) , the term proportional to produces another mechanism of evening the asymmetric relations out , because an increase of is correlated with an increase of . in the presence of this term ,the asymmetry is removed by two mechanisms and not only by one .therefore , to gain the balance , a smaller value of is necessary ; hence , the critical value of decreases with , as we see in fig .( [ f2 ] ) . on the sociological side ,the results become part of current discussions on the role of cognitive processes in shaping interpersonal relations ( and references therein ) . expressed opinions allow to evaluate the level of friendliness toward us .results of experimental research show that a positive evaluation of a person triggers a reverse sympathy stronger than similarity of attitudes . in , the roles of attitude similarity and attitude alignment have been found to act separately and interact ; similarity predicted attraction only if attitude alignment was absent . in our formulation rcd ( term proportional to ) , positive evaluation ( term proportional to ) and attitude alignment ( term proportional to ) act separately .our phase diagram indicates that they also interact ; the decrease of with means , that if the attitude alignment is strong enough , the positive evaluation is not needed to restore the balanced state .one of the authors ( k.k . ) is grateful to katarzyna krawczyk - szczepanek for kind and helpful comments .the work of p.g . was partially supported by the pl - grid infrastructure .28 m. weber , _ the methodology of the social sciences _ , the free press of glencoe , illinois 1949 .n. elias , _ what is sociology ? _ , columbia up , ny 1978 .m. burawoy , _ the extended case method _ ,university of california press , berkeley 2009 .l. festinger , _ a theory of cognitive dissonance _ , stanford up , 1957 .f. heider , _ the psychology of interpersonal relations _ , lawrence erlbaum assoc . ,. j. s. coleman , _ an introduction to mathematical sociology _ , free press , 1964 .r. r. vallacher , a. nowak ( eds . ) , _ dynamical systems in social psychology _ , academic press , san diego 1994 .d. helbing , _ quantitative sociodynamics _ , springer - verlag , berlin heidelberg 2010 .n. e. friedkin and e. c. johnsen , _ social influence network theory . a sociological examination of small group dynamics _ ,cambridge up , cambridge 2011 .p. bonacich and p. lu , _ introduction to mathematical sociology _ , princeton up , princeton , 2012 .d. cartwright and f. harary , _ structural balance : a generalization of heider s theory _ , the psychological review 63 , 277 ( 1956 ) .e. aronson and v. cope , _ my enemy s enemy is my friend _, j. of personality and social psychology 8/1 ( 1968 ) 8 - 12 .t. antal , p. l. krapivsky and s. redner , _ dynamics of social balance on networks _ , phys .e 72 , 036121 ( 2005 ) .k. kuakowski , p. gawroski and p. gronek , _ the heider balance - a continuous approach _ , int .j. modern phys .c 16 , 707 ( 2005 ) .p. gawroski and k. kuakowski , _ heider balance in human networks _ , aip conf .779 , 9395 ( 2005 ) .p. gawroski , m. j. krawczyk and k. kuakowski , _ emerging communities in networks a flow of ties _ , acta phys .b 46 , 911 ( 2015 ) . p. gawroski and k. 
kuakowski , _ a numerical trip to social psychology : long - living states of cognitive dissonance _ , lecture notes in computer science 4490 , 43 ( 2007 ) .m. j. krawczyk , m. del castillo - mussot , e. hernandez - ramirez , g. g. naumis and k. kuakowski , _ heider balance , asymmetric ties , and gender segregation _ ,physica a 439 , 66 ( 2015 ) .e. aronson and d. linder , _ gain and loss of esteem as determinants of interpersonal attractiveness _, j. of experimental social psychology 1 , 156 ( 1965 ) .e. aronson , _ the social animal _ , w. h. freeman and co. , ny 1992 . j. tognoli and r. keisner , _ gain and loss of esteem as determinants of interpersonal attraction : a replication and extension _ , j. of personality and social psychology 23 , 201 ( 1972 ). j. w. lawrence , c. s. carver and m. f. scheier , _ velocity toward goal attainment in immediate experience as a determinant of affect _ , j. of applied social psychology 32 , 788 ( 2002 ) .a. t. lehr and g. geller , _ differential effects of reciprocity and attitude - similarity across long- versus short - term mating context _ , j. of social psychology 146 , 423 ( 2006 ) .a. filipowicz , s. barsade and s. melwani , _ understanding emotional transitions : the interpersonal consequences of changing emotions in negotiations _ , j. of personality and social psychology 101 , 541 ( 2011 ) . c. reid , j. l. davis and j. d. green , _ the power of change : interpersonal attraction as a function of attitude similarity and attitude alignment _ , j. of social psychology 153 , 700 ( 2013 ) . t. komatsu , h. kaneko and t. komeda , _ investigating the effects of gain and loss of esteem _ , lecture notes in computer science 5744 , 87 ( 2009 ). s. t. allison , d. eylon , j. k. beggan and j. bachelder , _ the demise of leadership : positivity and negativity biases in evaluation of dead leaders _ ,leadership quarterly 20 , 115 ( 2009 ) .s. a. marvel , j. kleinberg , r. d. kleinberg and s. h. strogatz , _ continuous - time model of structural balance _ , pnas 108/5 ( 2011 ) 1771 - 1776 .r. m. montoya and c. a. insko , _ toward a more complete understanding of the reciprocity of liking effect _, european j. of social psychology 38 , 477 ( 2008 ) .r. singh , r. ng , e. l. ong and p.k. f. lin , _ differente mediators for the age , sex , and attitude similarity effects in interpersonal attraction _ , basic and applied social psychology 30 , 1 ( 2008 ) .k. m. carley and d. krackhart , _ cognitive inconsistencies and non - symmetric friendship _ , social networks 18 , 1 ( 1996 ) .w. w. zachary , _ an information flow model for conflict and fission in small groups _ , j. of anthropological res .33 , 452 ( 1977 ) .t. antal , p. l. krapivsky and s. redner , _ social balance on networks : the dynamics of friendship and enmity _ , physica d 224 , 130 ( 2006 ) . | the effect of gain and loss of esteem is introduced into the equations of time evolution of social relations , hostile or friendly , in a group of actors . the equations allow for asymmetric relations . we prove that in the presence of this asymmetry , the majority of stable solutions are jammed states , i.e. the heider balance is not attained there . a phase diagram is constructed with three phases : the jammed phase , the balanced phase with two mutually hostile groups , and the phase of so - called paradise , where all relations are friendly . social systems , heider dynamics , asymmetric relations , jammed states 05.45.-a , 89.65.ef , 89.75.fb |
physicists have long speculated that the fundamental constants might not , in fact , be constant , but instead might vary with time .dirac was the first to suggest this possibility , and time variation of the fundamental constants has been investigated numerous times since then . among the various possibilities , the fine structure constant and the gravitational constant have received the greatest attention , but work has also been done , for example , on constants related to the weak and strong interactions , the electron - proton mass ratio , and several others .it is well - known that only time variation of dimensionless fundamental constants has any physical meaning .here we consider the time variation of a dimensionless constant not previously discussed in the literature : .it is impossible to overstate the significance of this constant .indeed , nearly every paper in astrophysics makes use of it .( for a randomly - selected collection of such papers , see refs . ) ..the value of measured at the indicated location at the indicated time . [ cols=">,>,>,>,>",options="header " , ] in the next section , we discuss the observational evidence for the time variation of . in sec .iii , we present a theoretical model , based on string theory , which produces such a time variation , and we show that this model leads naturally to an accelerated expansion for the universe . the oklo reactor is discussed in sec .iv , and directions for future research are presented in sec . v.the value of has been measured in various locations over the past 4000 years . in table 1, we compile a list of representative historical measurements .we see evidence for both spatial and time variation of .we will leave the former for a later investigation , and concentrate on the latter . in fig . 1, we provide a graph illustrating the time variation more clearly .= 3.8truein = 3.8truein the values of show a systematic trend , varying monotonically with time and converging to the present - day measured value .the evidence for time variation of is overwhelming .inspired by string theory , we propose the following model for the time variation of .consider the possibility that our observable universe is actually a 4-dimensional brane embedded in a 5-dimensional bulk . in this case , slices " of can leak into the higher dimension , resulting in a value of that decreases with time .this leakage into a higher dimension results in a characteristic geometric distortion , illustrated in fig .such leakage " has been observed previously in both automobile and bicycle tires .however , it is clear that more controlled experiments are necessary to verify this effect .it might appear that the observational data quoted in the previous section suggest a value of that increases with time , rather than decreasing as our model indicates .since our theoretical model is clearly correct , this must be attributed to 4000 years of systematic errors .now consider the cosmological consequences of this time variation in .the friedmann equation gives where is the scale factor and is the total density . 
at late times is dominated by matter , so that .hence , if increases faster than , the result will be an accelerated expansion .of course , our model gives the opposite sign for the time - variation of , but this is a minor glitch which is probably easy to fix .this model for the time variation of has several other consequences .it provides a model for the dark matter , and it can be used to derive a solution to the cosmological constant coincidence problem .further , it can be developed into a quantum theory of gravity .no discussion of the time - variation of fundamental constants would be complete without a mention of the oklo natural fission reactor .this investigation clearly opens up an entirely new direction in the study of the time variation of fundamental constants . the next obvious possibility is the investigation of the time variation of . following this, there is a plethora of other constants that could be examined : the euler - mascheroni constant , the golden ratio , soldner s constant , and catelan s constant .more speculatively , one might consider the possibility that the values of the integers could vary with time , a result suggested by several early fortran simulations .this possibility would have obvious implications for finance and accounting .a number of colleagues were kind enough to comment on the manuscript .for some reason they did not want me to use their names , so i will identify them by their initials : s. dodelson , a.l .melott , d.n .spergel , and t. j. weiler . | we examine the time variation of a previously - uninvestigated fundamental dimensionless constant . constraints are placed on this time variation using historical measurements . a model is presented for the time variation , and it is shown to lead to an accelerated expansion for the universe . directions for future research are discussed . |
the two - way relay network shown in figure [ fig : relay ] .user requires an approximate copy of the data from user , and user requires an approximate copy of the data from user .the users are physically separated and direct communication is not possible . instead , indirect communication is achieved via a relay and a two - phase communication protocol . in phase ( uplink transmission ), each user encodes its data to a codeword that is transmitted over a multiple access channel to the relay . in phase ( downlink transmission ), the relay completely or partly decodes the noise - corrupted codewords it receives from the multiple access channel , and it transmits a new codeword over a broadcast channel to both users . from this broadcast transmission , user decodes and user decodes . in this paper , we study the downlink for the case where and have been perfectly decoded by the relay after the uplink transmission ( figure [ fig : lossy - broadcast ] ) .we are interested in the lossy setting where and need to satisfy average distortion constraints .we have a source coding problem ( figure [ fig : lossy - broadcast-1a ] ) when the broadcast channel is noiseless , and we have a joint source - channel coding problem when the broadcast channel is noisy ( figure [ fig : lossy - broadcast-1b ] ) . in figure[ fig : lossy - broadcast ] we have relabelled the relay as the transmitter , user as receiver and user as receiver .we note that the source coding problem is a special case of the joint source - channel coding problem ; however , we will present each problem separately for clarity .it is worthwhile to briefly discuss some of the implicit assumptions in the two - way relay network setup .the no direct communication assumption has been adopted by many authors including oechtering , _ et al . _ , gndz , tuncel and nayak as well as wyner , wolf and willems .it is appropriate when the users are separated by a vast physical distance and communication is via a satellite .it is also appropriate when direct communication is prevented by practical system considerations . in cellular networks , for example , two mobile phones located within the same cellwill communicate with each other via their local base - station .we note that this assumption differs from shannon s classic formulation of the two - way communication problem .specifically , those works assume that the users exchange data directly over a discrete memoryless channel without using a relay . the two - phase communication protocol assumption ( uplink and downlink ) is appropriate when the users and relay can not transmit and receive at the same time on the same channel .this again contrasts to shannon s two - way communication problem as well as gndz , tuncel and nayak s separated relay , where simultaneous transmission and reception is permitted . finally , this relay network is restricted in the sense that it does not permit feedback ; that is , each user can not use previously decoded data when encoding new data . _notation : _ the non - negative real numbers are written . random variables and random vectors are identified by uppercase and bolded uppercase letters , respectively .the alphabet of a random variable is identified by matching calligraphic typeface , and a generic element of an alphabet is identified by a matching lowercase letter .for example , represent a random variable that takes values from a finite alphabet , and denotes a vector of random variables with each taking values from . 
the length of a random vector will be clear from context .the -fold cartesian product of a single set is identified by a superscript .for example , is the -fold product of ._ paper outline : _ in section [ sec:2 ] , we formally state the problem and review some basic rd functions .we present our main results in section [ sec:3 ] , and we prove these results in sections [ sec:4 ] and [ sec:5 ] . the paper is concluded in section [ sec:6 ] .let , , and be finite alphabets , and let ] denotes the expectation operator .[ def : source - coding ] let .a rate is said to be -achievable if for arbitrary there exists an rd code , , for some sufficiently large with [ eqn : rd - def - ach ] let denote the set of all -admissible rates , and let definition [ def : source - coding ] does not require that the two receivers agree on the exact realizations of and .for example , receiver need not know the exact realization of . in some scenarios ,it is appropriate that the receivers exactly agree on and .the notion of common reconstructions is useful for such scenarios . a common - reconstructions rate - distortion ( cr - rd ) code is a tuple of mappings , , , , , where and are given by and [ eqn : cr - enc - dec ] here denotes the `` common - reconstruction '' decoder at receiver , see figure [ fig : cr - sc - code ] . the rate and average distortion of a cr - rd code are defined in the same manner as and .additionally , we define the average probability of common - reconstruction decoding error by ,\pr[\tilde{{\mathbf{y } } } \neq \hat{{\mathbf{y}}}]\big\}\ , \ ] ] where and .[ def : cr - source - coding ] let .a rate is said to be -achievable with common reconstructions if for arbitrary there exists a cr - rd code , , , , with , satisfying and .let denote the set of all -admissible rates with common reconstructions , and let the next proposition follows directly from definitions [ def : source - coding ] and [ def : cr - source - coding ] .[ pro : rc - cr - convex ] the rd function and the cr - rd function are continuous , non - increasing and convex on .moreover , for we have that _ general remark : _ the common - reconstruction condition used in this paper was inspired by steinberg s study of common reconstructions for the wyner - ziv problem .consider the joint source - channel coding problem .suppose that the source emits symbols at the rate , and that the channel accepts and emits symbols at the rate .let denote the channel input alphabet , let denote the product of the channel output alphabets , and let the transitions from to be governed by the conditional pmf ] and where we can view as resulting from the equation . here is uniform on , denotes modulo - two addition , and is independent of and takes values from with probability and .[ exa : dsbs ] if is the dsbs with cross - over probability and and are hamming measures , then for all ] , and are hamming distortion measures , and then the cr - rd function satisfies the following : 1 . 
for all ] + [ thm : dsbs - cr : part3 ] in figure [ fig : dsbs-1 ] we plot , , and the upper bound for that is given in .it can be seen from these plots that the threshold is reasonably large , and most interesting distortion pairs can be achieved by a cr - rd code .our next result characterises joint source - channel coding rates with common reconstructions .it is the joint source - channel coding extension of the theorem [ thm : cr - rd ] .[ thm : jscc - cr ] a distortion pair is achievable with common reconstructions and bandwidth expansion if and only if there exists a pmf on and such that [ eqn : thm : jscc - cr-1 ] as was the case for source coding , characterising joint source - channel coding rates without common - reconstructions ( i.e. definition [ def : jscc ] ) is difficult , and we have succeeded only in giving complete results for a few special cases .the next proposition reviews a special case that is known in the literature .this proposition follows from tuncel ( * ? ? ?6 ) , and it can be thought of as the joint source - channel coding extension of sgarro s result ( proposition [ prop : losslesssc ] ) . [ prop : losslessjscc ] suppose and are hamming distortion measures .zero distortion is achievable with bandwidth expansion if and only if there exists a pmf on such that [ eqn : tuncel ] tuncel s result is ideal because it characterises achievability simply and explicitly ; it does not require auxiliary random variables and difficult optimization problems to be solved .the following consequences of this result are worth noting : the physical separation of source and channel codes is suboptimal the side - information takes a particular `` complimentary '' form , and in some circumstances it may be appropriate to use this side - information in the channel code ; for example , see . ]; an optimal joint source - channel code exhibits a `` partial '' separation of source and channel coding at the transmitter , which results in the separation of source and channel random variables in ; an optimal joint source - channel code exploits randomness in the broadcast channel to perform a `` virtual binning , '' which is analogous to the random binning used in the proof of proposition [ prop : losslesssc ] ; if the broadcast channel is such that the same maximises and , then all channels can be used to full capacity .this last property is not shared by broadcast channels in general . like sgarro s result for lossless source coding ( proposition [ prop : losslesssc ] ), tuncel s result does not easily extend to more general distortion measures and distortions .this difficulty is evidenced by the growing body of work concerning the lossy extension of .our next result gives necessary conditions for a distortion pair to be achievable .it is the joint source - channel coding extension of the cut - set lower bound for , see proposition [ prop : cut - set ] . a proof of this result is given in section [ sec:4 ] .[ thm : jscc - cut - set ] if is achievable with bandwidth expansion , then there exists a pmf on such that [ eqn : thmjscc - cut - set ] [ eqn : jscc - cut - set ] in the hamming distortion setting , we have that and .therefore , theorem [ thm : jscc - cut - set ] gives the necessary ( `` only if '' ) condition of proposition [ prop : losslessjscc ] .similarly , in the high - distortion regime we have that and is satisfied by any .we are left with , which is the necessary condition of shannon s joint source - channel coding theorem ( * ? ? 
?it is an open problem as to whether the conditions of theorem [ thm : jscc - cut - set ] are both necessary and sufficient .the next result shows that these conditions are necessary and sufficient for small distortions .[ thm : jscc - small - distortions ] suppose has support and and are hamming distortion measures .there exists a strictly positive surface in such that every on or below is achievable with bandwidth expansion if and only if there exists a pmf on such that holds .the proof of theorem [ thm : jscc - small - distortions ] follows in a similar manner to the proof of theorem [ thm : small - distortions ] .specifically , we match the single - letter characterisation of theorem [ thm : jscc - cr ] with the necessary conditions in theorem [ thm : jscc - cut - set ] .we have already reviewed the cut - set lower bound in the introduction .we now review an upper bound for that , together with , gives a good approximation of .let su and el .gamal called this bound the compress - linear upper bound the reason will become clear shortly .if and are difference distortion measures , let the next result bounds from above and below , and it approximates when and are difference distortion measures .[ thm : rd - bounds ] for , we have that ( * ? ? ?2 ) if and are difference distortion measures , then the minimax capacity bound shows that the gap between and can not be arbitrarily large .the inequalities in were obtained independently and contemporaneously by su and el . gamal in .this proof of theorem [ thm : rd - bounds ] is relevant to the following discussion , so it is worthwhile to give a brief outline .the fact that follows from the cut - set argument given in the introduction . to show we combine two wyner - ziv codes with a simple linear - network code . at the transmitter , is mapped to a binary vector using an optimal wyner - ziv code .this code treats as side - information at receiver , but it ignores at the transmitter .similarly , is mapped to a binary vector using a wyner - ziv code that treats as side - information at receiver , but it ignores at the transmitter .the transmitter sends the modulo - two sum of these codewords ( in the same way as example [ exa : dsbs ] ) over the noiseless bc , and each receiver recovers their desired codeword by eliminating ( subtracting ) the codeword destined for the other receiver .it is possible to perform this elimination because each receiver can calculate ( from its side - information ) the wyner - ziv codeword intended for the other receiver .note , if conditional rd codes were used in place of wyner - ziv codes , then each receiver can not calculate the codeword intended for the other user and this elimination is not possible .the second result follows directly from zamir s work on rate - loss in the wyner - ziv problem . is called the compress - linear upper bound because it is obtained by combining two wyner - ziv compression codes with a linear - network code. 
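the modulo - two elimination step at the heart of the compress - linear scheme just outlined can be illustrated with plain bit vectors . in the sketch below , w1 and w2 stand for the binary wyner - ziv code indices intended for receivers 1 and 2 ; the wyner - ziv encoders themselves are not implemented , and the ability of each receiver to reproduce , from its own side information , the index intended for the other receiver is simply assumed , exactly as the argument above requires . all names and the index length are placeholders .

```python
# a minimal sketch (python/numpy) of the linear-network-code step in the
# compress-linear scheme described above.
import numpy as np

rng = np.random.default_rng(0)
m = 16                                   # index length in bits (illustrative)

w1 = rng.integers(0, 2, m)               # index destined for receiver 1
w2 = rng.integers(0, 2, m)               # index destined for receiver 2

broadcast = w1 ^ w2                      # transmitter sends the modulo-two sum

# receiver 1 knows (can compute) w2 and eliminates it; receiver 2 does the same with w1
w1_hat = broadcast ^ w2
w2_hat = broadcast ^ w1

assert np.array_equal(w1_hat, w1) and np.array_equal(w2_hat, w2)
print("both indices recovered from a single broadcast of", m, "bits")
```

since the two indices are added bit - wise , only the longer of the two index strings needs to be broadcast ( the shorter one can be zero - padded ) , which is what makes the scheme efficient ; the elimination fails , as noted above , if conditional rd codes are used instead , because then neither receiver can compute the other receiver 's index .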
the gap between and can be no larger than the `` rate loss '' of the wyner - ziv rd function over the conditional rd function .if and and are such that there is no rate loss , then theorem [ thm : rd - bounds ] characterises .the following examples outline a number of such scenarios .[ thm : rd - bounds : cor : cond - ind ] if and where forms a markov chain , then for all distortion pairs we have that in particular , if and are independent , then we have if and where forms a markov chain , then also forms a markov chain .moreover , we have where follows from the markov chain , follows because is a component of the source ; therefore , and are equal .] , and follows because . on combining with the fact that , it follows that . a similar argument yields .substituting these equalities into the definitions of and , and applying theorem [ thm : rd - bounds ] completes the proof .[ thm : rd - bounds : cor:2deterministic ] if and are finite sets , and are mappings , , , , , then we have that the conditional rd function and the wyner - ziv rd function are both continuous by willems in .the continuity of the conditional rate distortion function at follows from willems result because is a special case of the wyner - ziv rate distortion function when the source and distortion measure are chosen appropriately . ] at .we have that where the minimum is taken over all channels with suppose that achieves the above minimum .since when and when , implies that when we have that must satisfy that is , almost surely .therefore and .we also have that where the minimization is taken over all choices of an auxiliary random variable with a joint pmf satisfying the markov chain and the distortion constraint where .setting gives and therefore .a similar argument gives .the proof is completed by applying theorem [ thm : rd - bounds ] . using standard techniques , theorem [ thm : rd - bounds ] can be extended from discrete finite alphabets to real - valued alphabets .this extension yields the following example for jointly gaussian sources .[ exa : rd - bounds : exa:1gaussian ] if , and \ , \end{aligned}\ ] ] and [ eqn : squared - error ] then for all distortion pairs we have where and [ eqn : crd - gaussian ] corollaries [ thm : rd - bounds : cor : cond - ind ] and [ thm : rd - bounds : cor:2deterministic ] include the results of ( * ? ? ?iii.b ) as special cases . example [ exa : rd - bounds : exa:1gaussian ] was independently given in .the next result characterises for one large distortion and shows that the upper bound can be loose .its proof follows directly from the lower bound in theorem [ thm : rd - bounds ] and the coding theorem for the conditional rd function .this proof is omitted .[ thm : rd - bounds : cor : max - distortion ] for , we have that in summary , the compress - linear upper bound and the cut - set lower bound well approximate when and are difference distortion measures .specifically , the ideas of zamir can be used to show that the gap between and is no larger than the maximum of two minimax capacities .the bounds yield an exact characterisation of for sources with zero rate - loss in the wyner - ziv problem ; however , it is well known that this condition is very restrictive ( * ? ? ?* remark 5 ) .two sources that satisfy this condition are the jointly gaussian source with a squared - error distortion measure ( see ( * ? ? ? 
* remark 6 ) and example [ exa : rd - bounds : exa:1gaussian ] ) and the erasure side - information source with a hamming distortion measure .corollary [ thm : rd - bounds : cor : max - distortion ] and example [ exa : dsbs ] demonstrated that the compress - linear upper bound can be loose . we conjecture that is also loose in general , but no counterexample has been found to date .we give a different lower bound for in appendix [ app : newlowerbound ] . in this section ,we review an upper bound for that was proposed by kimura and uyematsu in ( * ? ? ?1 ) , and we compare this bound to the compress - linear upper bound .we then formulate a new upper bound for using a result of heegard and berger .the main purpose of this section is to unify the achievability results of .let be a finite set of cardinality let denote the set channels randomly mapping to such that there exist functions and with define [ lem : one - description - upper ] for , we have that lemma [ lem : one - description - upper ] is called the one - description upper bound because its proof follows from a random coding argument that describes both and with one description . the one - description bound and the compress - linear bound both involve difficult minimizations ,so it is not immediately clear when one bound outperforms the other .the next result resolves this question and shows that is always better than .[ lem : one - des - v - com - lin ] for , we have that we have that where the auxiliary random variables and satisfy the markov chains and . note that and do not appear together in any of the mutual information or distortion conditions , so we can combine these minima into a minimum where forms a markov chain . to this end ,let denote the set of channels mapping to such that the following properties hold : 1 . the joint distribution , , factors to form the long markov chain .there exist functions and such that + note that the long markov chain in condition is implied by the markov chains , and .we now have that the constraint implies which , in turn , implies .therefore , we have similarly , we have combining with and completes the proof where follows because .the results of heegard and berger ( * ? ? ?2 ) ( see also ) can be modified to further strengthen the one - description upper bound .let denote the set of _ all _ channels mapping to . for , define \ , \ ] ] where and are the conditional rd functions of given and given , respectively .the proof of the next result follows directly from ( * ? ? ?2 ) and lemma [ lem : one - des - v - com - lin ] and is omitted .[ thm : rd*-upper - bound ] for , we have that in summary , the compress - linear upper bound and the cut - set lower bound well approximate for difference distortion measures .the compress - linear bound is weaker than the one - description bound , i.e. , and this inequality is strict for the dsbs with hamming distortion measures ( example [ exa : dsbs ] ) .finally , the one - description bound is potentially weaker than heegard and berger s bound , i.e. ; however , we have not found an example where this inequality is strict . in theorem[ thm : cr - rd ] , we claimed that the cr - rd function is equal to .we now prove this result .the coding theorem is a special case of the one - description bound , where is chosen to be .we omit the proof .it remains to prove the converse theorem .if is -admissible , then by definition there exists the following : 1 . a monotonically decreasing sequence with , and a monotonically increasing sequence ; 2 . 
a sequence of common reconstruction rd codes , where , , , \leq \epsilon_i ] .we now show that for all , where . to this end, the following inequalities will be useful : [ eqn : thm : cr - rd : fano1 ] where this inequality is a consequence of fano s inequality , the common - reconstruction property & \leq \epsilon_i\ \text { and}\\ \pr[\phi_2^{(n_i)}(m,{\mathbf{x } } ) \neq g_1^{(n_i)}(m,{\mathbf{y } } ) ] & \leq \epsilon_i\ , \end{aligned}\ ] ] and the fact that the cardinality of the range of , , can be no more than . note that . by definition , we also have \\ \label{eqn : thm : cr - rd:1:8 } & \geq \frac{1}{n_i}h\big(g_1^{(n_i)}(m,{\mathbf{y}}),\phi_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}})\big|{\mathbf{y}}\big ) -\varepsilon(n_i,\epsilon_i)\\ \label{eqn : thm : cr - rd:1:9 } & \geq \frac{1}{n_i } h\big(g_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}})\big|{\mathbf{y}}\big ) -\varepsilon(n_i,\epsilon_i)\\ \label{eqn : thm : cr - rd:1:10 } & \geq \frac{1}{n_i } i\big({\mathbf{x}};g_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}})\big|{\mathbf{y}}\big ) -\varepsilon(n_i,\epsilon_i)\\ \label{eqn : thm : cr - rd:1:11 } & = \frac{1}{n_i } \sum_{j=1}^{n_i } i\big(x_j;g_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}})\big|{\mathbf{y}},x_1^{j-1}\big ) -\varepsilon(n_i,\epsilon_i)\\ \label{eqn : thm : cr - rd:1:12 } & = \frac{1}{n_i } \sum_{j=1}^{n_i } i\big(x_j;g_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}}),x_1^{j-1},y_1^{j-1},y_{j+1}^n\big|y_i\big ) -\varepsilon(n_i,\epsilon_i)\\ \label{eqn : thm : cr - rd : rate-1 } & \geq \frac{1}{n_i } \sum_{j=1}^{n_i } i\big(x_j;g_1^{(n_i)}(m,{\mathbf{y}}),g_2^{(n_i)}(m,{\mathbf{x}})\big|y_j\big ) - \varepsilon(n_i,\epsilon_i)\ , \end{aligned}\ ] ] where through follow from standard identities , follows from , through follow from standard identities , and follows because the source is iid .a similar procedure yields let and denote the elements of and , respectively .i.e. and are the symbols reconstructed by the receivers .expanding the conditions and gives & \leq d_1 + \epsilon_i\\ \label{eqn : thm : cr - rd : dist-2 } \mathbb{e}\left[\frac{1}{n_i}\sum_{j=1}^{n_i } \delta_2(y_j,\hspace{0.8mm}\hat{y}_j)\hspace{0.8mm}\right ] & \leq d_2 + \epsilon_i\ .\end{aligned}\ ] ] recall , is drawn i.i.d . according to . for each , let denote the conditional probability of given ; that is , combining with characterises the joint pmf of .define the `` time - shared '' channel from and , we have consequently , .we have that where we have used jensen s inequality together with the convexity of and in when the joint pmf of ( here ) is fixed ( see lemma [ lem : cmi - convex ] below ) . finally , combining and with the definition of we have which is the desired result . the converse is completed by noting that , , and is a continuous function of and .[ lem : cmi - convex ] suppose the random vector on is characterised by the joint pmf .the condition mutual information is convex in for fixed .fix . from the convexity of mutual information ( * ? ? 
?2.7.4 ) , we have that is convex in for each .the lemma follows by noting that is a convex combination of .further details can be found in appendix [ app : lemma - convexity ] .the next result shows that if one source is required to be reconstructed with vanishing hamming distortion , then the rd function and the cr - rd function both collapse to the cut - set lower bound .[ thm : cr - rd : cor : onelossless ] if is a hamming distortion measure , then for all we have that from proposition [ pro : rc - cr - convex ] and theorems [ thm : cr - rd ] and [ thm : rd - bounds ] we have that let be a channel that achieves the minimum for the conditional rd function .this channel and together define a joint pmf for .in addition , set to obtain a joint pmf for .this joint pmf belongs to the set .note , we have the markov chain and therefore the chain . on substituting this joint pmf intowe obtain the following upper bound for : where follows because and forms a markov chain , and follows because was chosen to achieve the minimum in the definition of . the proof is completed by noting that the next result covers the one large distortion setting .the proof follows directly from theorem [ thm : cr - rd ] and is omitted .note that it may differ from corollary [ thm : rd - bounds : cor : max - distortion ] the large distortion result for .[ thm : cr - rd : cor : max - distortion ] for we have that where denotes the set of all test channels mapping to such that the following result gives a useful upper bound for .we will use this bound to prove the small distortion result theorem [ thm : small - distortions ] .[ cor : thm : cr - rd ] for we have that let achieve the minimum for the joint rate distortion function . then , where the remaining mutual information terms are evaluated using .note that where the last inequality follows from the definition of .similarly , we also have that , and thus on combining this result with proposition [ pro : rc - cr - convex ] and theorem [ thm : rd - bounds ] , we have from this chain of inequalities , it is clear that if [ eqn : imtired1 ] then we have that the rd function and the cr - rd function both meet the cut - set lower bound .the next two examples give situations where holds . if and are independent ( ) , then and therefore , if and are hamming measures , then and therefore , this idea of matching the lower and upper bounds in is not just useful for these simple examples . our main result , theorem [ thm : small - distortions ] , showed that it is also useful for sources with hamming distortions with small distortions .the proof of this result is a simple consequence of corollary [ cor : thm : cr - rd ] .let us recall gray s results for the extended shannon lower bounds of joint , conditional and marginal rd functions . specifically , from ( * ? ? ?3.2 cor .3.2 ) there exists a strictly positive surface in such that for all that lies on or below . combining this result withproves the theorem .the joint pmf of the dsbs can be thought of as resulting from using as a uniform input to a binary symmetric channel ( bsc ) with cross over probability , see figure [ fig : dsbs - pmf ] . 
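the representation of the dsbs just described ( a uniform binary input driving a binary symmetric channel with crossover probability p ) can be sampled directly . the sketch below draws pairs in exactly this way and checks the empirical joint pmf ; the value of p and the sample size are illustrative choices .

```python
# a small sketch sampling the doubly symmetric binary source (dsbs) as described above:
# x is uniform on {0,1} and y is the output of a bsc with crossover probability p driven by x.
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.1, 200_000

x = rng.integers(0, 2, n)                         # uniform input
flip = (rng.random(n) < p).astype(int)            # bsc crossover events
y = x ^ flip                                      # channel output

# empirical joint pmf: target is 0.5*(1-p) on the diagonal and 0.5*p off it
joint = np.zeros((2, 2))
for a in (0, 1):
    for b in (0, 1):
        joint[a, b] = np.mean((x == a) & (y == b))
print(joint)

# by symmetry the same pair statistics are obtained by driving a bsc(p) with y instead,
# so the empirical crossover rates in the two directions agree
print("p(y != x | x = 0):", np.mean(y[x == 0]), "   p(x != y | y = 0):", np.mean(x[y == 0]))
```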
by symmetry, we can also think of resulting from using as a uniform input to a bsc with cross over probability ..,title="fig:",width=321 ] + in example [ exa : dsbs ] , it was shown that the rd function without common reconstructions equals the cut - set lower bound .since for all ] , we now construct a test channel that belongs to and ..,title="fig:",width=453 ] + fix ] and ] .we choose a channel that achieves the bound given in .let , and define where and and are indicator functions ( equal one if the subscripts are equal and zero otherwise ) .the channel is depicted in figure [ fig : fourinputchannel ] . .the transitions represented by dotted lines each have probability .,title="fig:",width=340 ] + set and .note that are both bscs with a crossover probability .therefore , = d ] .finally , the rate of the channel is given by \ .\end{aligned}\ ] ] by symmetry , we also have , which completes the proof . the channel can be view as the natural continuation of the channel , which was used to prove . specifically , is formed by passing through a bsc with crossover probability .this latter quantity is chosen because now extend the source coding results of section [ sec:4 ] to the joint source - channel coding setting ( definitions [ def : jscc ] and [ def : jscc - cr ] ) .we begin by proving theorem [ thm : jscc - cut - set ] .if is admissible with bandwidth expansion factor , then by definition there exists for every a joint source - channel code with , .let denote the codeword that is produced by the encoder .let denote the marginal pmf for the symbol .define a new `` time - shared '' random variable on with pmf since is a concave function for fixed , we have from jensen s inequality we further have \\ \label{eqn : bc : gerhard-1 } & \leq \sum_{i=1}^{\kappa_c t } \big [ h(u_i ) - h(u_i|w_i ) \big]\\ \label{eqn : bc : gerhard-2 } & = \sum_{i=1}^{\kappa_c t } i(w_i;u_i)\ , \end{aligned}\ ] ] where follows because forms a markov chain. then we have where follows from the data - processing inequality , follows because is iid , follows from the data - processing inequality and the fact that is a function of , follows from the definition of the conditional rate - distortion function where , and combines jensen s inequality and the convexity of in , and follows because non - increasing in .similarly , it can be shown that the theorem follows from the continuity of and on and the fact that is arbitrary .we now adapt an achievability result of nayak , tuncel and g to give a sufficient condition for joint source - channel coding . when combined with theorem [ thm : jscc - cut - set ] , this condition will give necessary and sufficient conditions for joint source - channel coding of jointly gaussian random variables with squared - error distortion measures .[ lem : nayak ] let be a finite set .a distortion pair is admissible with bandwidth expansion if the following conditions are satisfied : 1 .there exist random variables on and on ; 2 .there exist functions and with + [ eqn : nayak - distortion ] & \leq d_1\\ \mathbb{e}\big[\delta_2(y,\pi_2(c , x))\big ] & \leq d_2\ ; \end{aligned}\ ] ] 3 .the following inequalities hold + lemma [ lem : nayak ] is the joint source - channel coding extension of the one - description upper bound given in lemma [ lem : one - description - upper ] .the lemma is actually a special case of a stronger result ( * ? ? ?1 ) ; however , this weaker result will suffice for the following discussion .note also the markov constraints in ( * ? ? 
?* cor.1 ) do not play a role here as the side - information is available to the transmitter .the next two corollaries combine theorem [ thm : jscc - cut - set ] and lemma [ lem : nayak ] to give necessary and sufficient conditions for the following two special cases : the source has zero - rate loss in the wyner - ziv problem , and one source has to be reconstructed vanishing hamming distortion .[ cor : jscc : zero - wz - rate - loss ] if has zero rate - loss in the wyner - ziv problem ( i.e. , and ) , then is achievable with bandwidth expansion if and only if there exists a pmf on such that holds .as discussed before , the zero wyner - ziv rate - loss condition is very restrictive and few sources are known to satisfy it .however , an interesting example that does satisfy this condition is given next .if are jointly gaussian random variables and are squared error distortion measures ( see example [ exa : rd - bounds : exa:1gaussian ] ) , then is achievable with bandwidth expansion if and only if there exists a pmf on such that holds .the conditional rd functions and are given in .[ cor : jscc : one - lossless - reconstruction ] if is a hamming distortion measure , then is achievable with bandwidth expansion if and only if there exists a pmf on such that the necessary condition ( `` only if '' ) is given by theorem [ thm : jscc - cut - set ] . the sufficient condition ( ``if '' ) is proved by constructing an auxiliary random variable that meets the conditions of lemma [ lem : nayak ] with and . recall that let and be joint pmfs on and that achieve the aforementioned minima .let be the joint pmf on defined by by construction , the and marginals of are and , and satisfies the chain . recall that satisfies the chain , and satisfies the chain .combining these chains yields the long chain .set and .note that is a valid auxiliary random variable for lemma [ lem : nayak ] .moreover , we have where follows because implies , follows because is an optimal test channel for the wyner - ziv rd function , and follows by assumption .similarly , we have .the necessary condition ( `` only if '' ) is follows from theorem [ thm : jscc - cut - set ] and .the sufficient condition ( `` if '' ) is proved by constructing an auxiliary random variable that meets the conditions of lemma [ lem : nayak ] as well as and . recall the joint pmf of used to prove corollary [ thm : cr - rd : cor : onelossless ] . choose and note this choice of meets the conditions of lemma [ lem : nayak ] .as before , we also have that and .the sufficient condition is a special case of lemma [ lem : nayak ] .we now give the necessary condition .if a distortion pair is achievable with bandwidth expansion , then by definition there exists for every a cr - jsc code with as well as & \leq \epsilon_t \text { and}\\ \label{eqn : app - g : cr - error - prob-2 } \pr\big[\phi_1^{(t)}({\mathbf{u}},{\mathbf{y } } ) \neq g_2^{(t)}({\mathbf{v}},{\mathbf{x}})\big ] & \leq \epsilon\ , \end{aligned}\ ] ] as in the proof of theorem [ thm : jscc - cut - set ] , let , let denote the pmf for the symbol , and define the time shared random variable on via we will show that for some test channel . 
the next inequality , which will be useful later , follows from fano s inequality and : where we first invoke the techniques used in the converse proof of theorem [ thm : jscc - cut - set ] ; specifically , we have we now invoke the techniques used in the converse proof of theorem [ thm : cr - rd ] .specifically , we have \\ \label{eqn : app - g : s-2 } & \geq \frac{1}{t } i\big({\mathbf{x}};{\mathbf{u}},g_1^{(t)}({\mathbf{u}},{\mathbf{y}}),\phi_1^{(t)}({\mathbf{u}},{\mathbf{y}}),g_2^{(t)}({\mathbf{v}},{\mathbf{x}})\big|{\mathbf{y}}\big ) - \varepsilon(\kappa_s , t,\epsilon)\\\label{eqn : app - g : s-3 } & \geq \frac{1}{t } i\big({\mathbf{x}};g_1^{(t)}({\mathbf{u}},{\mathbf{y}}),g_2^{(t)}({\mathbf{v}},{\mathbf{x}})\big|{\mathbf{y}}\big ) - \varepsilon(\kappa_s , t,\epsilon)\\ & = \frac{1}{t } \sum_{j = 1}^{\kappa_s t } i\big(x_j;g_1^{(t)}({\mathbf{u}},{\mathbf{y}}),g_2^{(t)}({\mathbf{v}},{\mathbf{x}})\big|{\mathbf{y}},x_1^{j-1}\big ) - \varepsilon(\kappa_s , t,\epsilon)\\ & = \frac{1}{t } \sum_{j = 1}^{\kappa_s t } i\big(x_j;g_1^{(t)}({\mathbf{u}},{\mathbf{y}}),g_2^{(t)}({\mathbf{v}},{\mathbf{x}}),x_1^{j-1},y_1^{j-1},y_{j+1}^{\kappa_s t}\big|y_j\big ) - \varepsilon(\kappa_s , t,\epsilon)\\\label{eqn : app - g : s-4 } & \geq \frac{1}{t } \sum_{j = 1}^{\kappa_s t } i\big(x_j;g_1^{(t)}({\mathbf{u}},{\mathbf{y}}),g_2^{(t)}({\mathbf{v}},{\mathbf{x}})\big|y_j\big ) - \varepsilon(\kappa_s , t,\epsilon)\ , \end{aligned}\ ] ] where follows because forms a markov chain , and follows from and a similar procedure yields for , let and denote the symbols of and , respectively .let denote the conditional probability of given , and define the average distortion requirement on the code guarantees we further have where we have used jensen s inequality together with the convexity of and in .thus , we have shown that there exists a condition pmf and a pmf such that the necessary condition follows from theorem [ thm : jscc - cut - set ] .we now show that this necessary condition is also sufficient for small distortions . from theorem [ thm : jscc - cr ], a sufficient condition for to be achievable is that there exists a pmf on and such that holds .choose to achieve the minimum in the definition of the joint rd function .in a similar manner to the proof of corollary [ cor : thm : cr - rd ] , we have that from gray ( * ? ? ?3.2 ) , there exists a strictly positive surface in such that and whenever lies on or below . 
for these small distortions, we have that and .the downlink broadcast channel of the two - way relay network was studied in the source coding and joint source - channel coding settings .single - letter necessary and sufficient conditions for reliable communication were given for the following special cases : common - reconstructions ( theorems [ thm : cr - rd ] and [ thm : jscc - cr ] ) , small distortions ( theorems [ thm : small - distortions ] and [ thm : jscc - small - distortions ] ) , conditionally independent sources ( corollary [ thm : rd - bounds : cor : cond - ind ] ) , deterministic distortion measures ( corollary [ thm : rd - bounds : cor:2deterministic ] ) , and sources with zero rate - loss for the wyner - ziv problem .additionally , the notion of small distortions was explicitly characterised for the doubly symmetric binary source with hamming distortion measures in theorem [ thm : dsbs - cr ] .each of the aforementioned results followed , in part , from the necessary conditions presented in theorems [ thm : jscc - cut - set ] and [ thm : rd - bounds ] .it remains to be verified that these necessary conditions are , or are not , sufficient .more generally , the source coding problem is a special case of the wyner - ziv problem with two receivers , and the joint source - channel coding problem is a special case of the wyner - ziv coding over broadcast channels problem .it would be interesting to see if the small distortion results in this paper carry over to these problems .the authors are grateful to gottfried lechner , badri n. vellambi , terence chan and tobias oechtering for many interesting discussions on the contents of this paper .in this section , we present an alternative to the cut - set lower bound given in theorem [ thm : rd - bounds ] ( see the end of section [ sec:4a ] ) . for this purpose ,let , and be finite alphabets of cardinality denote the set of pmfs on where 1 . forms a markov chain , i.e. , 2 . is independent of , i.e. , 3 .there exist functions + + such that + & \leq d_1 , \text { and}\\ \mathbb{e}_p\big[\delta_2\big(y,\pi_2(b , c , x)\big)\big ] & \leq d_2\ .\end{aligned}\ ] ] define \ .\end{aligned}\ ] ] the next theorem gives a lower bound for .[ thm : rd*-lower - bound ] for we have that if is -admissible , then there exists a monotonically decreasing sequence with limit zero ; a monotonically increasing sequence ; and a sequence of rd codes such that , and .then we have \\ \label{rd - converse-13 } & = \frac{1}{n_i}\sum_{j=1}^{n_i}\big [ i(x_j;m|x_1^{j-1},{\mathbf{y } } ) + i(y_j;m|y_1^{j-1})\big]\\ \label{rd - converse-14 } & = \frac{1}{n_i}\sum_{j=1}^{n_i}\big [ i(x_j;m , x_1^{j-1},y_1^{j-1},y_{j+1}^{n_i}|y_j ) + i(y_j;m , y_1^{j-1})\big]\\ \label{rd - converse-15 } & \geq \frac{1}{n_i}\sum_{j=1}^{n_i}\big [ i(x_j;m , y_1^{j-1},y_{j+1}^{n_i}|y_j ) + i(y_j;m)\big]\\ \label{rd - converse-16 } & = \frac{1}{n_i}\sum_{j=1}^{n_i}\big [ i(x_j;m|y_j ) + i(x_j;y_1^{j-1},y_{j+1}^{n_i}|m , y_j ) + i(y_j;m)\big]\\ \label{rd - converse-17 } & = \frac{1}{n_i}\sum_{j=1}^{n_i}\big [ i(x_j , y_j;m ) + i(x_j;y_1^{j-1},y_{j+1}^{n_i}|m , y_j ) \big]\ , \end{aligned}\ ] ] where follows from the definition of a -admissible rate , through follow from standard identities , follows because is i.i.d ., through follows from standard identities . in a similar manner, it can also be shown that \ .\ ] ] for , define , , and .we consider , , to be a class of disjoint sets .similarly , we consider and to be disjoint sets . 
now define let denote the resultant joint pmf on that characterises the random variables , , , and . by construction, we have 1 . is independent of , i.e. 2 .there exists a function such that , 3 .there exists a function such that .now define , and , and the `` time - shared '' pmf on . using this definition, it can be verified that furthermore , by definition , we have \\ & = \mathbb{e}_{p^*}\big[\delta_1(x,\pi_1(a , c , y)\big]\ , \end{aligned}\ ] ] where the last expectation is taken with respect to , and is defined by where is arbitrary .similarly , \ .\ ] ] at this point we have that \ , \end{aligned}\ ] ] where the infimum is taken over all satisfying as well as \ \text { and}\\ d_2 + \epsilon_i & \geq \mathbb{e}_{p^*}\big[\delta_2(y,\pi_2(b , c , x))\big]\ .\end{aligned}\ ] ] note the this infimum is not altered if we impose the markov chain .finally , we apply the support lemma to bound the cardinality of by , and and by .( and can be bounded simultaneously since forms a markov chain . )enlarge the set by removing the constraints and ] should be understood as the conditional mutual information when the joint probability of is defined by and . for this purpose ,we write explicitly as a function of : where the conditional probability is a function of the other arguments then we have & = \sum_{i=1}^2 \sum_{a , b , c } \alpha_i p_b(b ) p_{a|b}(a|b ) p^{(i)}_{c|ab}(c|a , b ) \log \frac{p^{(i)}_{c|ab}(c|a , b)}{p^{(i)}_{c|b}(c|b ) } \\ & = \sum_b p_b(b ) \sum_{i=1}^2 \alpha_i \sum_{a , c } p_{a|b}(a|b ) p^{(i)}_{c|ab}(c|a , b ) \log \frac{p^{(i)}_{c|ab}(c|a , b)}{p^{(i)}_{c|b}(c|b)}\\ & \geq \sum_b p_b(b ) \sum_{a , c } p_{a|b}(a|b ) p^{*}_{c|ab}(c|a , b ) \log \frac{p^{*}_{c|ab}(c|a , b)}{p^{*}_{c|b}(c|b ) } \\ & = i(a;c|b)\ [ p^*_{c|ab}]\ , \end{aligned}\ ] ] where the inequality follows from the convexity of mutual information in the channel for a fixed input distribution .y. wu , p. a. chou , and s. y. kung , `` information exchange in wireless networks with network coding and physical - layer broadcast , '' in _ proceedings conference on information sciences and systems _ , the john hopkins university , baltimore , 2005 .r. timo , l. ong , and g. lechner , `` the two - way relay network with arbitrarily correlated sources and an orthogonal mac , '' institute for telecommunications research , the university of south australia , tech .november 2010 .d. gndz , j. nayak , and e. tuncel , `` wyner - ziv coding over broadcast channels using hybrid digital / analog transmission , '' in _ proceedings ieee international symposium on information theory _ , toronoto , canada , 2008 , pp. 15431547 . | the broadcast phase ( downlink transmission ) of the two - way relay network is studied in the source coding and joint source - channel coding settings . the rates needed for reliable communication are characterised for a number of special cases including : small distortions , deterministic distortion measures , and jointly gaussian sources with quadratic distortion measures . the broadcast problem is also studied with common - reconstruction decoding constraints , and the rates needed for reliable communication are characterised for all discrete memoryless sources and per - letter distortion measures . rate distortion theory , joint source - channel coding , two - way relay network . |
i would first like to thank my supervisors hani mehrpouyan and alexandre graell i amat for their support and guidance during the whole thesis process .i highly appreciate their attitude towards me .they have taught me how an academic should approach the problems in the field of research .they have also trusted me and given me a lot of freedom .they respect my independent and somehow arrogant way of performing research .i would also like to thank my friends in communication engineering masters programme for the discussions and shared ideas .i want to also thank to the employees at ericsson for showing me how things are done in industry in a short time .thanks also to my family and friends in turkey for supporting me through all these years of studying .i have to thank my swedish family , jan and ann - charlotte fonselius , for creating such a nice environment for me at the house which i share with my beloved friend kiryl kustanovich .last but not least , i want to thank olric . i would not be able to finish this thesis without him .arif onder isikman , gteborgin recent years the demand for high bandwidth data services has increased with the evolution of the third generation ( 3 g ) and fourth generation ( 4 g ) cellular networks .rapid escalation in the use of bandwidth hungry devices also increases the throughput requirements of the base station ( bs ) , base station controller ( bsc ) and master switching center ( msc ) , which are the fundamental components of a cellular network .the user connects to the network through the bs .each bs is connected to a bsc via a wired or a wireless link .the bsc routes the data from the bs to the msc and controls the functionality of the bs .the msc holds all the network information and controls all calls and data management functionalities .in other words , the msc is the brain of any cellular network .the portion of a wireless mobile network from the bs to the msc is called as _backhaul network_. the backhaul links serves the medium to carry traffic from the bs to the msc via the bsc .the point - to - point microwave radio links are commonly used in backhaul networks .they are cost efficient and can be deployed rapidly .microwave radio transmission is operated at certain frequency bands .lower bands such as 7 , 18 , 23 and 35ghz have better radio propagation characteristics . on the other hand ,these frequency bands fail to provide sufficient bandwidth since the spectrum is mostly allocated . with the release of the e - band , 10ghz of bandwith in the spectrum at 70ghz ( 71 - 76ghz ) and 80ghz ( 81 - 86ghz )have been made available for point - to - point microwave links . to meet high data rate requirements point - to - point microwave systemsare equipped with multiple transmit and multiple receive antennas . _ line - of - sight _( los ) _ multi - input multi - output _ ( mimo )systems are effectively used for backhaul networking .local oscillators are utilized to carry the baseband signal to the operating band . 
due to the hardware limitations ,every oscillator suffers from an instability of its phase , resulting in phase noise .phase noise can dramatically limit the performance of a wireless communication system if left unaddressed .phase noise interacts with the transmitted symbols both at the transmitter and the receiver side in a non - linear manner and significantly distorts the received signal .digital signal processing algorithms need to be employed to achieve synchronous transmission in the presence of phase noise .several algorithms are proposed for _ single - input single - output _ ( siso ) systems to mitigate the effect of time varying phase noise . in the case of los - mimo systems ,each transmit and receive antenna is equipped with a different oscillator since the antennas are placed far apart .similarly , in the case of multi - user mimo systems or _space division multiple access _ ( sdma ) systems independent oscillators are used by different users to transmit their data to common receiver . as a result ,a single oscillator can not be employed and phase noise compensation algorithms proposed for siso systems are not directly applicable to mimo systems . achieving channel capacitywas seen far from reality until two decades ago . the introduction of turbo codes and the rediscovery of _ low - density - parity - check ( ldpc ) _codes has demonstrated the power of the iterative processing paradigm in improving the performance of communication systems and in operating close to the theoretical limits .subsequently , the iterative coding structure has been applied to facilitate and improve many functions including synchronization .parameter estimation can be performed jointly with data detection in an iterative fashion .it is well - known that the application of turbo codes and lpdc codes improves the data detection process at the receiver , which in turn can be applied to improve the performance of decision - directed estimators .the improved estimation and tracking accuracy allows for more accurate compensation of impairments such as time varying phase noise at the receiver which can also improve data detection .thus , by jointly performing data detection and estimation , the performance of wireless communication systems can be significantly improved .this approach , known as turbo synchronization " , was initially proposed in and has since been formalized in with the use of the _ expectation - maximization ( em ) _framework . in ,different frameworks for turbo synchronization based on the gradient method and the sum - product algorithms are studied .this work is extended to the problem of estimation of time varying phase noise for siso systems in . in ,based on the assumption of small phase noise values within each block and removing the data dependency from the observed signal , the tracking is carried out via a modified em - based algorithm that applies a soft decision - directed linearized kalman smoother . 
in addition , to enhance phase noise tracking performance for very high phase noise variances , proposes to employ a maximum - likelihood ( ml ) estimator in conjunction with a kalman smoother , labeled as ( ks - mla ) .a soft decision - directed extended kalman filter - smoother ( eks ) is also suggested to provide phase noise estimation .however , the performance of the ks - mla degrades with increasing block length .more importantly , the linearization applied in is not applicable to mimo systems and the estimation performance of the proposed tracking algorithm is not investigated .mimo technology allows communication systems to more efficiently use the available spectrum , . _bit - interleaved - coded - modulation ( bicm ) _ is one of the popular schemes that enables communication systems to fully exploit the spectrum efficiency promised by mimo technology .however , the performance of mimo systems degrades dramatically in the presence of synchronization errors .code - aided synchronization based on the em framework for joint channel estimation , frequency and time synchronization for a bicm - mimo system is proposed in .however , in , the synchronization parameters are assumed to be constant and deterministic over the length of a block which is not a valid assumption for time varying phase noise .a wiener filter approach that applies spatial correlation to improve phase noise estimation in mimo systems is proposed in .however , the proposed solution is only applicable to uncoded mimo systems and the algorithm in introduces significant overhead to phase noise estimation process since it requires frequent transmission of _ orthogonal _ pilot symbols .the problem of joint data detection and phase noise estimation for coded mimo systems over block fading channels is still unaddressed and will be the main focus of this thesis . in chapter 2 the phase noise modelis introduced and digital communication system for siso systems over the additive white gaussian noise ( awgn ) channel affected by phase noise is presented . in chapter 3 the performance of both uncoded and coded siso systems affected by phase noiseare investigated .the iterative code - aided em - based approach used in is modified and derived analytically .the em - based algorithm is implemented and its components are explained in detail .two estimators that are proposed in , the ks - mla and the eks , are evaluated and their performances are compared against one another . in chapter 4 , the mimo system model for both uncoded and the coded mimo systems over rician fading channels in the presence of phase noise is described in detail .an iterative joint phase noise estimation and data detection algorithm based on the em framework is derived analytically . a low complexity _ extended kalman filter - smoother _ ( ekfs ) is proposed to estimate the time varying phase noise processes of each oscillator .bicm scheme is used to decrease the detection complexity .the performance of the proposed algorithm is investigated via computer simulations . in chapter 5 conclusion and future research directionsare discussed .the primary contributions of this thesis are summarized as follows : * the system model for both uncoded siso and uncoded mimo systems in the presence of phase noise are outlined in detail and an extended kalman filter with symbol - by - symbol feedback is proposed for each system .the error rate performance of the proposed estimators are investigated through numerical results . 
*the iterative code - aided em - based algorithm proposed in for coded siso systems is modified and derived analytically . moreover ,the performances of two estimators , the ks - mla and the eks , are numerically compared with the help of computer simulations .* an em - based receiver is proposed to perform iterative joint phase noise estimation and data detection for bicm - mimo systems . *it is analytically demonstrated that a computationally efficient ekfs can be applied to carry out the maximization step of the em algorithm .* a new low complexity soft decision - directed ekfs for tracking phase noise over the length of a frame is proposed and the filtering and smoothing equations are derived . * extensive simulationsare carried out for different phase noise variances to show that the performance of a mimo system employing the proposed receiver structure is very close to the ideal case of perfect knowledge of phase noise .simulation results demonstrate that error rate performances of a 2 los - mimo system using the proposed em - based receiver is very close to that of the perfectly synchronized system for low - to - medium signal - to - noise ratios .it is also shown that the _ mean square - error _ ( mse ) of the phase noise estimates improves with every em iteration .in wireless communication systems , the baseband signal is multiplied by a high frequency sine wave to operate at a certain frequency band , called carrier frequency .local oscillators produce the carrier frequency waveforms .phase - locked loop ( pll ) is the major phase recovery block in a communication system .pll calculates the phase difference between the input and the output signal .the difference is then filtered by a low pass filter and applied to the voltage controlled oscillator ( vco ) . the controlled voltage on the vco changes the oscillator frequency to minimize the phase difference of the input and the output signal . however , the output of the vco circuit is a non - ideal sine wave due to some hardware limitations .the power spectrum of the output signal is not strictly concentrated at the carrier frequency .the instantaneous output of a oscillator is given by where denotes the carrier frequency , denotes the amplitude , is amplitude noise and is phase noise .demir _ et . al ._ show in that amplitude noise decays over time , since the system stabilizes itself .the amplitude noise may thus be ignored and the normalized oscillator output signal can be written as the oscillator phase noise can be seen as a widening of the spectral peak of the oscillator . the frequency domain single - sideband phase noise power , is defined as the ratio of the noise power in a 1hz sideband at an offset hertz away from the carrier , , to the total signal power , .since we have no absolute time reference , the phase disturbances accumulate over time and can be represented by where is a white gaussian process with a constant power spectral density ( psd ) .then , the phase noise process can be modeled as a wiener process and the oscillator power spectrum is a lorentzian , given by where denotes the 3db bandwidth .it is seen that the spectrum is characterized by a single parameter , .the phase noise process is sampled every seconds , sampling time interval .then , the discrete time phase noise process is defined as , and can be modeled as a random walk in accordance with [ eqn : contphasenoise ] , i.e. 
discrete - time wiener process in ( [ eqn : thatadelta1 ] ) , the innovation term , is a discrete zero - mean gaussian random variable with variance , denoted as .the phase noise innovation variance is given by note that the discrete innovation process is also white , in fig .[ fig : figwienerpn ] a realization of the discrete time wiener phase noise process is plotted ..,width=377 ] at the transmitter , a group of data bits are modulated onto an -point quadrature amplitude modulation ( -qam ) constellation , displayed in fig .[ fig : fig16qam ] .symbols are then transmitted through an awgn channel . in a communication system without the phase noise disturbances the received signal at time is given by where is the received signal , is the complex transmitted symbol , is the zero - mean awgn with variance per dimension , i.e. , as shown in fig .[ fig : fig16qamawgn ] . the received signal is also effected by time varying phase noise both at transmitter and receiver .let }(k) ] denote the discrete time phase noise sample at transmitter and receiver , respectively .the received signal at time is given by }(k)}+\tilde w(k))e^{j\theta^{[r]}(k)}\\ & = & s(k ) e^{j(\theta^{[t]}(k)+\theta^{[r]}(k))}+ w(k)\label{eqn : recsign2}\\ & = & s(k ) e^{j\theta(k)}+w(k)\label{eqn : obspn}\end{aligned}\ ] ] where }(k)} ] where }} ] denote the innovation variance of the phase noise process at the transmitter an at the receiver , respectively . the total phase noise process rotates the signal constellation as displayed in fig [ fig : fig16qampn ] .the received signal which is affected by the awgn and rotated by the phase noise is shown in fig .[ fig : fig16qamawgnandpn ] .the main focus of this thesis will be on the phase noise estimation for coded mimo systems . in this chapter , to better understand the effect of phase noise , the problem of phase noise estimation for siso systems over awgn channel is investigated .it is assumed that perfect frame synchronization and phase recovery are performed at the beginning of each frame by transmitting sufficient number of pilot symbols .therefore , the problem under consideration for siso systems is simplified to the problem of phase noise estimation before investigating the problem for coded mimo systems .first , an uncoded siso system is taken under consideration in sec .[ sec : uncodedsiso ] .an extended kalman filter ( ekf ) is suggested to track time varying phase noise and the set of equations for the ekf is derived . in sec .[ sec : codedsiso ] , the problem of joint phase noise estimation and detection for a coded siso system is discussed .an algorithm is analytically derived from the em framework and is applied to iteratively solve the problem .the enhancement of the ldpc codes and the kalman filters into the em - based algorithm is explained in detail . in order to de -rotate the signal space and to achieve synchronous communication , time varying phase noise process should be estimated .since the parameter to be estimated is not deterministic , an estimator based on the bayesian approach should be used . 
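Before turning to the estimators, the received-signal model of this section is straightforward to prototype. The sketch below is illustrative only: the frame length, innovation variance and SNR convention (symbol energy normalised to one) are assumed values, not parameters taken from the thesis. It draws a 16-QAM frame, accumulates a discrete-time Wiener phase noise process and adds complex AWGN.

```python
import numpy as np

rng = np.random.default_rng(0)

def qam16_symbols(n, rng):
    # 16-QAM alphabet on {-3,-1,1,3}^2, normalised to unit average symbol energy.
    levels = np.array([-3, -1, 1, 3])
    return (rng.choice(levels, n) + 1j * rng.choice(levels, n)) / np.sqrt(10.0)

def wiener_phase_noise(n, var_delta, rng):
    # theta(k) = theta(k-1) + delta(k), delta ~ N(0, var_delta).
    delta = rng.normal(0.0, np.sqrt(var_delta), n)
    delta[0] = 0.0                      # frame start assumed phase-synchronised
    return np.cumsum(delta)

L_f, var_delta, snr_db = 1000, 1e-4, 20.0
s = qam16_symbols(L_f, rng)
theta = wiener_phase_noise(L_f, var_delta, rng)
noise_var = 10 ** (-snr_db / 10.0)      # total complex noise variance, Es = 1
w = np.sqrt(noise_var / 2) * (rng.normal(size=L_f) + 1j * rng.normal(size=L_f))
r = s * np.exp(1j * theta) + w          # r(k) = s(k) exp(j theta(k)) + w(k)
```

The array `r` plays the role of the received frame used by the trackers discussed next.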
in the bayesian approach , prior knowledge about the random parameteris also taken into account .this approach is commonly used for the systems which can be represented with a dynamical model .kalman filtering can be considered as a sequential minimum mean square error ( mmse ) estimator which works according to the bayesian framework .the received signal in ( [ eqn : obspn ] ) and the phase noise process are used to construct a state space signal model .the state and observation equations at time are given as where is the received signal , is the complex transmitted symbol belonging to the -qam constellation , is the phase noise value , is the additive complex gaussian noise with variance and is the phase noise innovation , which is assumed to be gaussian distributed with variance . since the observation equation is nonlinear , a hard decision - directed ekfis used instead .the observation equation can be rewritten as where the nonlinear function is defined as in ( [ eqn : zfunc1 ] ) , is the hard decision of the transmitted symbol at time instance . note that for uncoded siso systems hard decision symbol , , is obtained by the demodulator at each time instance .next , is input to the hard decision - directed ekf at each time instance . in other words , decision feedback is performed symbol - by - symbol .the ekf provides phase noise estimate , .the ekf first predicts the mean and the minimum prediction mse of the state ahead , and , respectively , given the previous values .then , the ekf updates the estimates with the observation and computes the mean and the minimum mse of a posteriori state estimate , and , respectively .the kalman gain , , indicates the amount of correction required for an observation sample .since is a nonlinear function , it is linearized with a first - order taylor expansion .therefore , denotes the jacobian of with respect to .the first and the second moments of the state ahead are predicted as after the observation , the posteriori state estimate statistics are updated as where the kalman gain is determined as the jacobian of with respect to is given by in ( [ eqn : kalmangain ] ) , is the observation noise variance , given by the ekf needs to be properly initialized to compute and the mse .it is assumed that a sufficient number of pilots is used for frequency , frame and phase synchronization at the beginning of the frame .therefore , there is no phase shift of the first received signal i.e. .moreover , at the first time instant , the minimum prediction mse is set to since this error amounts to the estimation of without any data .the ekf is applied for different phase noise innovation variance values .the bit error rate ( ber ) performance of the system is investigated .ber vs. for 16-qam is shown in fig .[ fig : fig1 ] .it can be seen that as increases , the ekf performs better and can track phase noise processes with higher variances .for 16-qam for various values of .,width=453 ] in coded siso systems , the data bits are first encoded with an ldpc encoder .then , the encoded bits are modulated onto complex symbols from an -qam constellation and transmitted through the awgn channel .the phase noise process should be estimated in order to de - rotate the signal space .channel coding adds redundant bits to protect information bits and to correct erroneous bits .a rate regular ldpc encoder from the nasa goddard technical standard is used .it is a regular code with variable node degree 4 and check node degree 32 .the ber vs. 
for both uncoded and coded systems without phase noise is shown in fig .[ fig : nopncomparison ] .data bits are mapped to 16-qam symbols . as expected ,very low bers can be achieved with channel coding at medium and high levels .for 16-qam uncoded and coded system without phase noise , width=415 ] in a coded siso system in the presence of phase noise , decoding and phase noise estimation can be performed jointly in an iterative way to complement one another .the em framework is used in to estimate the constant phase offset . in , a modified em - based algorithmis suggested to estimate time varying phase noise .the tracking is performed by both an extended kalman filter and smoother ( eks ) and a linearized kalman filter and smoother with maximum - likelihood average ( ks - mla ) .the mla algorithm removes the phase noise average over the frame . in ,the modified em - based algorithm is not derived analytically . in the following ,we demonstrate analytically how an em - based algorithm can be utilized for code - aided synchronization .the performance of the proposed estimators is investigated numerically .first , recall that the signal model and the phase noise model for the uncoded siso system is given by let denote the number of symbols in one frame .we define the vectors ^t \\\mathbf s & \triangleq & [ s(1 ) , s(2 ) , \dots , s(l_f)]^t \\\theta & \triangleq & [ \theta(1 ) , \theta(2 ) , \dots , \theta(l_f)]^t .\end{aligned}\ ] ] given the observation vector , , _ the maximum a posteriori _( map ) estimate of is given by where is the a priori probability of parameters .the em framework is an iterative approach to solve an estimation problem where the observation , , depends not only on the parameter to be estimated , , but also on some nuisance parameter , .note that transmitted symbols , , and phase noise are independent .the em algorithm iteratively maximizes the conditional a posteriori expectation , , where is the conditional log likelihood function ( llf ) and denotes the expectation operator . in the expectationstep the block of previous phase noise estimates , , is kept fixed and it is updated in the maximization step .the em framework for the iteration is defined by the em algorithm has been shown to converge to the map solution if it is accurately initialized . given the transmitted data and the phase noise process , the received signal has a gaussian distribution .after taking logarithm and dropping constant terms , the conditional llf is given by taking the conditional expectation over , given and , equation for the expectation step can be written as where the weighted or soft symbol is defined as in ( [ eqn : alpha1]), are the marginal a posteriori symbol probabilities ( apps ) of the transmitted symbol . for 16-qam ,if gray mapping is used , ( [ eqn : alpha1 ] ) can be computed as where and is the log likelihood ratio ( llr ) of the a posteriori probability of the th bit of the symbol at time instant , .the llr is defined as the block diagram of the receiver structure is shown in fig .[ fig : figrs_siso ] where superscript denotes the iteration of the em algorithm .the llrs are first computed by a soft demapper ( demodulator ) .then , they are passed to the ldpc decoder .next , the ldpc decoder runs a sufficient number of iterations to achieve more accurate llrs .after that , the llrs computed by the ldpc decoder are modulated into soft symbols according to ( [ eqn : alpha ] ) by a soft mapper . 
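The soft-mapping step can be prototyped as follows. Since the closed-form Gray-mapping expression is not reproduced here, the sketch computes the a posteriori symbol probabilities generically from the decoder bit LLRs, assuming the bits within a symbol are conditionally independent given the decoder output; the labelling array and LLR sign convention are our assumptions, not the thesis' exact formulas.

```python
import numpy as np

def soft_symbols(llrs, constellation, bit_labels):
    """A posteriori mean (soft) symbols from per-bit LLRs.

    llrs          : (L_f, m) LLRs per symbol, convention LLR = log P(b=0)/P(b=1).
    constellation : (M,) complex constellation points.
    bit_labels    : (M, m) 0/1 bit pattern mapped to each constellation point.
    """
    p1 = 1.0 / (1.0 + np.exp(llrs))          # P(b = 1), shape (L_f, m)
    p0 = 1.0 - p1
    probs = np.ones((llrs.shape[0], constellation.size))
    for i, label in enumerate(bit_labels):
        # Symbol APP as the product of its bit probabilities (independence assumed).
        bit_prob = np.where(label == 1, p1, p0)
        probs[:, i] = bit_prob.prod(axis=1)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs @ constellation             # alpha(k) = E[s(k) | decoder LLRs]
```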
as a result, the expectation step computes the block of apps denoted as ^t .\end{aligned}\ ] ] the maximization step maximizes in ( [ eqn : mstepsiso ] ) with respect to . since the system can be modeled in a state - space form , the solution can be given by kalman filtering .the kalman filter is an optimal mmse estimator which can be considered as a map estimator .then , a decision directed kalman filter where the transmitted symbol vector , , is replaced by the soft decision symbol vector , , estimates as by setting , the conditional llf in ( [ eqn : lnestepsiso ] ) can be rewritten as using ( [ eqn : lnestepsiso2 ] ) in ( [ eqn : mapthetasiso ] ) we achieve ( [ eqn : qfunc ] ) which is the function to be maximized in the m - step . as a result ,a decision directed kalman filter can be used to carry the m - step . in ,two soft decision - directed kalman filters , the ks - mla and the eks , are proposed . in the following , two estimators will be explained .[ [ the - kalman - filter - and - smoother - with - maximum - likelihood - average ] ] the kalman filter and smoother with maximum likelihood average + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the ks - mla proposes a new observation equation which is a linear function of . assuming that , the ks - mla first removes the data dependency from the measurement via where is the new observation equation .then , the ks - mla assumes that the phase noise values are small enough such that subsequently , new observation equation is given by where and is a zero mean gaussian random variable with variance and is the average symbol energy . since the observation equation is linear , a regular kalman filter and smoother ( ks ) can be used .note that the new observation equation does not depend on the complex transmitted symbol .instead , it is a function of the amplitude square of the soft symbol .phase noise process reaches large values within a block .depending on and , it has a nonzero average over the block .the mla algorithm estimates the ml average of the phase noise process over each block and removes it from each received block , , at the input of ks .then , the parameter to be estimated is given by where the mla algorithm is directly embedded into the em framework and performed before the estimator in the m - step .[ [ extended - kalman - filter - and - smoother ] ] extended kalman filter and smoother + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + second estimator proposed in is the soft decision - directed eks .the observation equation in ( [ eqn : obsuncoded ] ) is a nonlinear function of .therefore , a suboptimal decision directed eks is used to track the phase noise process in .the state and observation equations at time are given as note that ( [ eqn : obsuncoded ] ) and ( [ eqn : obscoded ] ) are similar .we can rewrite the observation equation as where stands for the nonlinear function for the coded system .the eks approximates as where is the estimate of based on the previous data .the eks first filters the received signal and estimates the block of parameters .afterwards , it smooths the estimations with a backward recursion .it is initialized with state estimate and a posteriori mse .the filtering equations compute and in a recursive fashion according to ( [ eqn : hattheta1]-[eqn : ck ] ) where is replaced by .the smoother runs a backward recursion to find better estimates of the parameter block .we will use subscript to refer to 
smoothing . denotes the sequence from to .the set of equations for smoothing are given by ldpc codes are linear block codes that have a sparse parity check matrix .the input of the ldpc decoder are the llrs that are computed by the soft demapper ( demodulator ) .the output of the ldpc decoder are the updated llrs that will be used by the soft mapper ( modulator ) .an example of a parity check matrix of a ( 7,4 ) irregular block code , i.e. the number of 1 s in each row and in each column is not constant , is given by .\label{eqn : hldpc}\end{aligned}\ ] ] the corresponding tanner graph is shown in fig .[ fig : tanner1 ] .there are 7 variable nodes and 4 check nodes for .let an be the index of the rows and columns of , respectively . if [k , l ] is 1 , then there exists a connection between the check node and the variable node .note that the information exchange is done in two ways for each connection , one from variable node to check node and one for the check node to variable node .in ( [ eqn : hldpc]).,width=415 ] the ldpc decoder runs the sum product algorithm to estimate the llrs of the transmitted bits .it computes the llr of each path from variable nodes to check nodes , and from check nodes to variable nodes iteratively . denotes the llr of the bit of the transmitted codeword .let denote the llr that belongs to the connection in the direction of check node to variable node .similarly , denotes the llr of the same connection in the opposite direction .an example is also shown in fig . [ fig : tanner1 ] . denotes the updated llr of the bit of the transmitted codeword .the sum product algorithm is performed as follows where represents the set of indexes of all the variable nodes connected to the check node except for the variable node .similarly , represents the set of indexes of all the check nodes connected to the variable node except for the check node . for example , if and , , and .[ [ improving - the - speed - of - the - algorithm ] ] improving the speed of the algorithm + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + should be initialized for the first iteration .normally , it is initialized with the input llrs as as suggested in , instead of initializing with the input llrs , is initialized by the last update of it from the latest em algorithm iteration .thus , where denotes the algorithm iteration .it is seen that significant complexity reduction can be achieved with negligible degradation in ber with this approximate version .the number of the decoder iterations can be reduced and the number of the em algorithm iterations can be increased .since the complexity heavily depends on the number of decoder iterations , the overall complexity is much less than that of the original em - based algorithm .first , the advantage of keeping internal information at the decoder is investigated for the 256-qam system with the eks . in fig .[ fig : decode_no_restart_256 ] , the ber is calculated for each em iteration for different number of decoding iterations with and without keeping internal information at the decoder for a fixed .the dashed lines represent the algorithm where the ldpc decoder is reinitialized between each iteration of the em algorithm . 
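For concreteness, one iteration of the sum-product updates described above can be written as follows. This is the generic tanh-rule formulation for a binary LDPC code, given as our own sketch rather than the thesis' implementation; `H` is the parity-check matrix and `llr_ch` the channel LLRs delivered by the soft demapper.

```python
import numpy as np

def sum_product_iteration(H, llr_ch, msg_c2v):
    """One sum-product iteration; returns updated c->v messages and posterior LLRs.

    H       : (n_checks, n_vars) 0/1 parity-check matrix.
    llr_ch  : (n_vars,) channel LLRs from the soft demapper.
    msg_c2v : (n_checks, n_vars) check-to-variable messages (0 on non-edges).
    """
    n_checks, n_vars = H.shape
    # Variable-to-check: channel LLR plus all incoming check messages except
    # the one on the edge being updated (extrinsic principle).
    totals = llr_ch + msg_c2v.sum(axis=0)
    msg_v2c = np.where(H == 1, totals - msg_c2v, 0.0)
    # Check-to-variable: tanh rule over the other edges of each check node.
    new_c2v = np.zeros_like(msg_c2v, dtype=float)
    for c in range(n_checks):
        vs = np.flatnonzero(H[c])
        tanhs = np.tanh(msg_v2c[c, vs] / 2.0)
        for i, v in enumerate(vs):
            prod = np.prod(np.delete(tanhs, i))
            new_c2v[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    # Posterior LLRs handed back to the soft mapper / used for hard decisions.
    llr_post = llr_ch + new_c2v.sum(axis=0)
    return new_c2v, llr_post
```

Initialising `msg_c2v` with zeros reproduces the conventional restart; keeping the returned messages between EM iterations corresponds to the speed-up discussed above.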
in fig .[ fig : decode_no_restart_256 ] , the solid lines represent the new algorithm where the internal information in the ldpc decoder is kept between the em algorithm iterations .it is seen that with keeping internal information overall complexity can be reduced due to fewer iterations .when the ldpc decoder is reinitialized the ber tends to reach a floor and the system can not perform under this level even with more em iterations . on the other hand ,when we keep the internal information at the ldpc decoder , lower bers can be achieved with increasing number of em iterations since better llrs are obtained from the decoder in each em iteration .db for different number of decoder iterations , with and without keeping internal decoder information.,width=415 ] in fig .[ fig : bervssnr1e43dec ] , the results of the em algorithm are shown for different number of iterations for both the eks and the ks - mla for a 16-qam system where . the number of iterations performed by the ldpc decoder is set to 3 .it is seen that the ks - mla performs slightly better than the eks .for 16-qam coded system with both the eks and the ks - mla where and 3 decoding iteration.,width=415 ] the ks - mla suggests a new linear observation equation ( [ eqn : newobs ] ) by assuming .moreover , the received signal is multiplied by the soft decision symbol which also increases the noise power .ad hoc _ method is proposed in to alleviate the effects of the assumptions in ( [ eqn : sintheta ] ) and ( [ eqn : costheta ] ) .when the phase noise values are small , is a reliable estimate of .then , the ks - mla algorithm performs better than the eks. however , the ks - mla fails to track more severe phase noise processes . in fig .[ fig : bervssnr3e43dec ] , the phase noise innovation variance is set to where all the other parameters remain unchanged . it is concluded that the assumptions ( [ eqn : sintheta ] ) and ( [ eqn : costheta ] ) are violated and is a less reliable estimate of . thus , the kfs - mla performs significantly worse than the eks .moreover , the ks - mla has an irreducible error floor . for 16-qam coded system with both the eks and the ks - mla where and 3 decoding iteration.,width=415 ] in fig .[ fig : bervssnr1e43dec_256 ] , the results for both the eks and the ks - mla are shown for a 256-qam system where . it is well - known that the reliability of the estimate of a 256-qam symbol is less than that of a 16-qam symbol for the same phase noise process .the reason is that phase noise effects the outer symbols in the signal constellation .in addition , the 256-qam constellation is more densely packed than the 16-qam constellation , i.e. , the distance and the phase difference between two neighbor constellation points is smaller . therefore , the performance of the ks - mla is degraded since the noise power is also amplified . in fig .[ fig : bervssnr1e43dec_256 ] , we observe that the eks performs better than the ks - mla . as a result ,the eks is more advantageous than the ks - mla for higher order modulations .for 256-qam coded system with both the eks and the ks - mla where and 3 decoding iteration.,width=415 ]the demand for high data rate wireless communications has spearheaded the research in the field of mimo systems .it is shown that the bandwidth efficiency of a wireless link can be significantly improved with the usage of mimo systems .however , synchronization errors decrease the performance of mimo system dramatically . 
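Before moving to the MIMO setting, the tracking machinery used throughout this chapter can be summarised in code. The sketch below is our own compact restatement of the decision-directed EKF recursion for the scalar Wiener phase state, with the observation linearised around the one-step prediction; the initial MSE value is an illustrative assumption.

```python
import numpy as np

def ddekf_track(r, s_hat, var_delta, noise_var, theta0=0.0, M0=None):
    """Scalar decision-directed EKF for Wiener phase noise tracking.

    r         : (L_f,) received samples, r(k) = s(k) exp(j theta(k)) + w(k).
    s_hat     : (L_f,) hard (or soft) symbol decisions fed back to the filter.
    var_delta : phase-noise innovation variance.
    noise_var : total complex AWGN variance (split equally between I and Q).
    """
    L_f = len(r)
    theta_hat = np.zeros(L_f)
    theta = theta0
    M = var_delta if M0 is None else M0   # assumed initial MSE
    R = (noise_var / 2.0) * np.eye(2)     # observation noise covariance
    for k in range(L_f):
        # Prediction under the random-walk state model.
        theta_pred, M_pred = theta, M + var_delta
        # Linearise z(theta) = s_hat(k) exp(j theta) around the prediction.
        z = s_hat[k] * np.exp(1j * theta_pred)
        Hk = np.array([-z.imag, z.real])  # d[Re(z), Im(z)] / d theta
        e = np.array([r[k].real - z.real, r[k].imag - z.imag])
        S = M_pred * np.outer(Hk, Hk) + R
        K = M_pred * np.linalg.solve(S, Hk)   # Kalman gain, shape (2,)
        theta = theta_pred + K @ e
        M = (1.0 - K @ Hk) * M_pred
        theta_hat[k] = theta
    return theta_hat
```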
in sec .[ sec : uncodedmimo ] the system model for an uncoded mimo systems is introduced .detection in mimo systems is also briefly discussed .an ekf is applied to track multiple phase noise processes . in sec .[ sec : codedmimo ] , the main contribution of this thesis is provided .joint phase noise estimation and detection in coded mimo systems is performed with the help of an em - based algorithm for the first time in the literature .exploiting ldpc codes , ekfs and bicm , the proposed em - based algorithm iteratively solves the problem of joint phase noise estimation and detection .simulation results are also evaluated for a los - mimo system over rician fading mimo channels .an uncoded mimo system with transmit and receive antennas is under consideration . at the transmitter ,a group of data bits are modulated onto an -qam constellation .they are demultiplexed into substreams of symbols of length . subsequently , using spatial multiplexing the symbols are transmitted simultaneously from antennas .quasi - static block fading channels are considered , i.e. , the channel gains remain constant over the length of a frame but change from frame to frame .similar to previous work in the literature , it is assumed that the channel matrix is estimated using orthogonal training sequences that are transmitted at the beginning of each frame . to ensure that the proposed scheme is applicable to los and multi - user mimo systems, it is assumed that independent oscillators are deployed at each transmit and receive antenna .awgn is also taken into consideration . the received signal is also effected by time varying phase noise both at the transmitter and the receiver side . to ensure that the proposed scheme is applicable to los and multi - user mimo systems, it is assumed that independent oscillators are deployed at each transmit and receive antenna .the performance of the uncoded mimo system is severely degraded in the presence of phase noise . in , pilot - aided estimation of phase noise in an uncoded mimo system is investigated .the phase noise parameters corresponding to the transmit and receive antennas are estimated by applying a wiener filtering approach . however , the proposed scheme is bandwidth inefficient and significant overhead is introduced . in ,joint channel and phase noise estimation is performed . a data aided least square ( ls ) estimator , a decision directed weighted least squares ( wls ) estimator and a decision directed ekf are proposed to track time varying phase noise . based on the above set of assumptions , the received signal at time instance , ^t ] is an diagonal matrix and }(k ) \triangleq \textrm{diag}\big(e^{j\theta_1^{[t]}(k)},e^{j\theta_2^{[t]}(k)},\dots , e^{j\theta_{n_t}^{[t]}(k ) } \big) ] and }(k) ] with ^t ] is the vector of transmitted symbols , * ^t ] and }(k) ] and }(k)\thicksim \mathcal n(0,\sigma^2_{\delta_m^{[t]}}) ] .we want to detect the transmitted symbol vector in the maximum - likelihood ( ml ) sense . 
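The uncoded MIMO signal model above can be prototyped directly. In the sketch below the antenna counts, innovation variances and modulation are illustrative assumptions, and the Rician/LOS structure of the channel is replaced by a simple i.i.d. draw for brevity; each transmit and receive antenna carries its own Wiener phase noise process, and the received vector is formed by the two diagonal phase matrices acting on the channel.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_r, L_f = 2, 2, 500
var_delta_t, var_delta_r, noise_var = 1e-4, 1e-4, 1e-2

# Quasi-static channel, fixed over the frame (i.i.d. complex Gaussian here).
H = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)

# Independent Wiener phase noise per transmit and per receive oscillator.
theta_t = np.cumsum(rng.normal(0, np.sqrt(var_delta_t), (L_f, n_t)), axis=0)
theta_r = np.cumsum(rng.normal(0, np.sqrt(var_delta_r), (L_f, n_r)), axis=0)

# QPSK symbols per antenna (any M-QAM alphabet could be substituted).
s = (rng.choice([-1, 1], (L_f, n_t)) + 1j * rng.choice([-1, 1], (L_f, n_t))) / np.sqrt(2)

y = np.empty((L_f, n_r), dtype=complex)
for k in range(L_f):
    Gamma_t = np.diag(np.exp(1j * theta_t[k]))
    Gamma_r = np.diag(np.exp(1j * theta_r[k]))
    w = np.sqrt(noise_var / 2) * (rng.normal(size=n_r) + 1j * rng.normal(size=n_r))
    y[k] = Gamma_r @ H @ Gamma_t @ s[k] + w   # y(k) = Gamma_r(k) H Gamma_t(k) s(k) + w(k)
```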
for an -qam modulated mimo system , ml detection of the transmitted symbol vector at time instant given by where }(k ) \mathbf h \boldsymbol \gamma^{[t]}(k ) \mathbf s.\end{aligned}\ ] ] the ml detection search over combinations , , of the transmitted symbol vector .the problem in ( [ eqn : uncmimodetection ] ) is computationally infeasible especially for higher order modulations .there are several detectors that provide approximate solutions with low complexity , such as zero - forcing detection with decision feedback , sphere detection , and lattice reduction aided detection . however , low complexity mimo detection algorithms are beyond the scope of this thesis . the ml detection problem in ( [ eqn : uncmimodetection ] ) is directly solved by searching over all possible transmitted symbol vectors .the ml detector provides hard decision symbol vector , .next , is input to the ekf which estimates phase noise vector , . a hard decision directed ekfis applied to estimate phase noise parameters .let us define }(k ) \mathbf h \boldsymbol \gamma^{[t]}(k) ] as a reference phase noise value , in ( [ eq : received_signal2 ] )can be rewritten as }(k)\right)}{\boldsymbol \gamma}^{[r]}(k ) \mathbf h { \boldsymbol \gamma}^{[t]}(k)e^{-j\left(\theta_{n_t}^{[t]}(k)\right ) } \notag\\ & = \tilde { \boldsymbol \gamma}^{[r]}(k ) \mathbf h \tilde { \boldsymbol \gamma}^{[t]}(k),\end{aligned}\ ] ] where * }(k)\triangleq \textrm{diag}\big\{e^{j(\phi_1(k ) ) } , \dots , e^{j(\phi_{n_r}(k))} ] , * ^t ] , depends not only on the parameters to be estimated , {(n_r+n_t ) \times l_f } \triangleq [ \boldsymbol \theta(1 ) , \boldsymbol \theta(2 ) , \dots , \boldsymbol \theta(l_f)] ] .given the observation sequence , the map estimate of is given by where .the em algorithm consists of the _ expectation step _ ( e - step ) and _ maximization step _ ( m - step ) . for the em iteration ,the e - step and m - step equations are given by respectively .the em algorithm converges to the map solution if the initial estimates of the parameters of interest , , are sufficiently close to the true values of the parameters .otherwise , the em algorithm may converge to a saddle point or a local maximum . to ensure the convergence ,pilot symbols are inserted into the data stream every time instances . in the following subsectionswe derive the e - step and m - step of the em algorithm for coded mimo systems .let us define }(k ) \mathbf h \boldsymbol \gamma^{[t]}(k) ] is an matrix . by setting , ( [ eqn : channellf ] )can be rewritten as note that ( [ eqn : channellf2 ] ) and the e - step equation in ( [ eqn : estepfinal ] ) are equivalent to one another except for the term . however , when the soft decisions reach their true values , we can assume that .thus , using and , we can conclude that a kalman filter can be applied to carry out the m - step of the em algorithm .note that the _ observation _ equation for the kalman filter is given by while based on , the _ state _ equation at time is given by where }(k ) , \dots , \delta_{n_r}^{[r]}(k ) , \delta_1^{[t]}(k ) , \dots , \delta_{n_t}^{[t]}(k)\right]^t$ ] .the transmitted symbols , can be replaced by their a posteriori means , i.e. 
, soft decisions computed at the e - step using the iterative map detector in section [ sec : apps ] .subsequently , the _ observation _ equation in can be rewritten as since the _ observations _ are a nonlinear function of the parameters of interest , , an _ extended _ kalman filter - smoother needs to be used instead to carry out the m - step of the em algorithm .note that ( [ eq : obsmimounc2 ] ) and ( [ eqn : received_approxmc ] ) are similar. we can rewrite the observation equation for the coded mimo system as the ekfs first filters the received signal and provides phase noise estimates with a forward recursion over the frame .the filtering equations compute the posteriori estimate of the state vector , , and the error covariance matrix , in a recursive fashion according to ( [ eqn : hatphimimo]-[eqn : hatmpostmimo ] ) where is replaced by . in other words , instead of the hard decision vector provided by the ml detector in the uncoded mimo system , , the soft decision vector computed by the iterative detector , ,is provided to the ekfs for the coded mimo system .the ekfs is initialized with the state estimate and the error covariance estimate .after filtering is completed for the whole block , a backward recursion is performed to smooth the estimates .finally , the smoothed estimates of the phase noise parameters are carried out to be fed to the detector .the smoothed estimate of the phase noise vector , , and the error covariance matrix are given by after the backward recursion is completed the block of phase noise estimates , is fed to the iterative detector for the next em algorithm iteration .it is shown in sec .[ sec : mstep ] that soft decisions , i.e. , the marginal posterior probabilities of the coded symbol vectors are required for the ekfs .the computation of the true posterior probabilities has a complexity that increases exponentially with the frame length .therefore , a near optimal iterative detector , operating according to the turbo principle ,, is used to obtain the marginal a posteriori bit probabilities given the phase noise estimates and the channel gain matrix .then , a soft modulator maps the a posteriori bit probabilities to symbol probabilities and constructs the soft decisions . the block diagram of the proposed em - based receiver structure , including both the ekfs and the iterative detector , is shown in fig .[ fig : fig3 ] .the iterative detector first computes conditional likelihoods of the symbol vectors given the phase noise estimates as note that the conditional likelihoods are computed by the equalizer from the received signal before the execution of the detector iterations .iterative detection is then performed with 3 nested iterations , as seen in fig .[ fig : fig3 ] . the iterative part of the detector operates according to the turbo principle .the conditional _ a posteriori _ probability of the transmitted symbol at the transmit antenna , at time instance , , is factored as the product of an _ a priori _ probability ( with subscript ) , and an _ extrinsic _ probability ( with subscript ) such as where is a normalization constant .note that since an interleaver at the transmitter side is used , the transmitted symbols on each antenna and the transmitted bits within each constellation symbol are independent . 
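The backward recursion of the EKFS described above can be stated compactly for a random-walk state model. The sketch below is a generic Rauch-Tung-Striebel-type smoothing pass written for the vector phase-noise state; it should be read as an assumed standard form rather than the thesis' exact smoother equations.

```python
import numpy as np

def rts_smooth(theta_filt, P_filt, theta_pred, P_pred):
    """Backward smoothing for a random-walk state (state transition = identity).

    theta_filt : (L_f, d) filtered estimates theta(k|k).
    P_filt     : (L_f, d, d) filtered error covariances.
    theta_pred : (L_f, d) one-step predictions theta(k|k-1).
    P_pred     : (L_f, d, d) prediction covariances.
    """
    L_f, d = theta_filt.shape
    theta_sm, P_sm = theta_filt.copy(), P_filt.copy()
    for k in range(L_f - 2, -1, -1):
        # Smoother gain C(k) = P(k|k) P(k+1|k)^{-1}, since the transition is identity.
        C = P_filt[k] @ np.linalg.inv(P_pred[k + 1])
        theta_sm[k] = theta_filt[k] + C @ (theta_sm[k + 1] - theta_pred[k + 1])
        P_sm[k] = P_filt[k] + C @ (P_sm[k + 1] - P_pred[k + 1]) @ C.T
    return theta_sm, P_sm
```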
the extrinsic symbol probabilities in ( [ eqn : symbpostfact ] ) are computed by the equalizer in fig .[ fig : fig3 ] .the extrinsic symbol probability given the estimated phase noise parameters and the received signal for , is given by the a priori symbol probabilities are computed by the soft modem and fed to the equalizer after decoding is performed . therefore , the first iteration is between the equalizer and the soft modem and there are iterations .the soft modem consists of the demapper , the interleaver , the iterative map decoder , the deinterleaver and the mapper as it is shown in fig .[ fig : fig3 ] .the demapper takes the extrinsic _ symbol _ probabilities from the equalizer and computes the extrinsic _ bit _ probabilities .the bit posterior probabilities of the bit of the bit sequence mapped to the symbol , denoted as , is factored similar to ( [ eqn : symbpostfact ] ) as where is a normalization constant .then , the extrinsic bit probability for is computed by the demapper as the demapper also needs the a priori _ bit _ probabilities computed by the map decoder .the second iteration of the 3 nested iterations of the detector is performed between the demapper and the decoder times .after the demapper computes all the extrinsic bit probabilities , they are deinterleaved and provided to the map decoder by the deinterleaver .the decoder takes the deinterleaved extrinsic bit probabilities and computes both the deinterleaved _ posterior _ and the deinterlaved _ a priori _ bit probabilities exploiting code properties with iterations .the posterior bit probabilities are sent out of the detector , interleaved and used to construct soft decisions for the ekfs , described in section [ sec : ekfs ] .the posterior bit probabilities are also used for hard decision after the algorithm terminates .the a priori bit probabilities are sent to the interleaver which constructs the a priori bit probabilities that are used as a priori information by the demapper .finally , the mapper converts a priori bit probabilities to a priori symbol probabilities which are used as a priori information by the equalizer according to estimator.,width=340 ] aforementioned , the iterative detector provides the ekfs with the marginal posterior probabilities of the coded symbol vectors , .the e - step is finished by the computation of these probabilities as where and are normalization constants .then , these posterior probabilities are used to construct the soft decision , .the a priori symbol probabilities and bit probabilities are initialized with a uniform distribution at the first em algorithm iteration. a sufficient number of iterations inside of the detector is required for convergence if the detector is reinitialized with uniform probabilities at each em iteration . instead, the detector can be initialized with the a priori probabilities obtained at the previous em iteration .in addition , only 1 iteration is allowed inside of the iterative detector for each nested iteration , i.e. , as suggested in .this alternative approach requires more em iterations but less detector iterations to converge . in this way, the computational complexity of both the detector and the receiver can be reduced significantly . 
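The demapper operation described above can be sketched as follows. This is a generic BICM demapper with a priori information; the bit labelling and probability conventions are our assumptions rather than the thesis' exact expressions.

```python
import numpy as np

def demap_extrinsic_bits(p_sym_ext, p_bit_apri, bit_labels):
    """Extrinsic bit probabilities from extrinsic symbol probabilities.

    p_sym_ext  : (M,) extrinsic probabilities of the M constellation points.
    p_bit_apri : (m,) a priori probabilities P(b_j = 1) from the decoder.
    bit_labels : (M, m) 0/1 labelling of the constellation points.
    Returns (m,) extrinsic probabilities P_e(b_i = 1).
    """
    M, m = bit_labels.shape
    # A priori probability of each bit of each constellation point.
    p_bit = np.where(bit_labels == 1, p_bit_apri, 1.0 - p_bit_apri)   # (M, m)
    p_ext = np.zeros(m)
    for i in range(m):
        # Product of a priori probabilities of all bits except bit i (extrinsic).
        apri_others = np.prod(np.delete(p_bit, i, axis=1), axis=1)    # (M,)
        num = np.sum(p_sym_ext * apri_others * (bit_labels[:, i] == 1))
        den = np.sum(p_sym_ext * apri_others)
        p_ext[i] = num / den
    return p_ext
```

The same marginalisation run in the opposite direction (bits to symbols) gives the a priori symbol probabilities fed back to the equalizer.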
, and da initial estimation with .,width=340 ] at the transmitter ,data bits are first encoded by a rate regular ldpc encoder from the nasa goddard technical standard .it is a regular code with variable node degree 4 and check node degree 32 .the number of data bits in each frame , is equal to 7154 .then , encoded bits are modulated onto 16-qam symbols .therefore , there are symbol vectors in each frame .performance will be measured as a function of , where denotes the transmitted energy per information bit and is the power spectral density of the awgn , i.e , .the convergence of the em based estimator is severely dependent on the initialization of the estimation parameters .first , the em based algorithm is initialized by a data aided ( da ) estimator to provide the initial phase noise estimates .pilots are inserted into the data stream every time instance . at the first em algorithm iteration ,the ekfs makes use of the pilots and estimates phase noise values at each time instance . afterwardsa linear interpolation is performed between two consecutive phase noise estimates . as a result ,initial phase noise estimates are obtained and sent to the iterative detector to initialize the em - based algorithm .da denotes the da estimator with pilot rate . for several phase noise processes and em algorithm iterations.,width=340 ] in fig .[ fig : berpr14 ] , the effect of the phase noise on the ber performance corresponding to the em - based algorithm with da estimator for different phase noise innovation variance levels , , after the 3rd em algorithm iteration is shown .as compared to perfect synchronization , no phase noise scenario , the proposed em - based algorithm gives rise to a ber degradation of about 1db in the presence of the slowly time varying phase noise , i.e. , .the ber degradation amounts to about 2db when .it is possible to track a stronger phase noise process with innovation variance in expense of 4db of comparing to perfectly synchronized system .note that the performance of the system tends to stay constant as increases , yielding an error floor .the main reason is that the prediction error of the ekfs is determined by the phase noise innovation variance . as increases , the error floor is observed at higher error rates . for severalnumber of decoder iterations , .,width=340 ] code - aided synchronization techniques offer to improve the overall accuracy and performance of the system .first , we investigate the performance of the em - based algorithm at each em algorithm iteration. the em - based algorithm does not converge to the global solution at some of the erroneous frames resulting in an arbitrary large number of erroneous bits .for this reason the ber performance of the system may not increase at each em algorithm iteration .therefore , the fer performance of the system is investigated .[ fig : fer2emits ] shows the fer performance at several em algorithm iterations for .we observe that the fer performance of the system improves at each em algorithm iteration . 
in order to operate the system at fer at the 10th iteration , additional of about 2dbis required .secondly , we investigate the estimation accuracy of the em - based algorithm .the mse of the phase noise estimates at the first receive antenna is averaged over the successfully decoded frames .[ fig : mseemits ] shows the mse results of different levels of phase noise innovation variance at both the 1st and the 5th em algorithm iteration .it is seen that the proposed em - based algorithm not only increases the overall performance of the system but also yields better estimates at each em algorithm iteration .we also investigate the performance of the proposed em - based algorithm for the system that is affected by a severe phase noise process such that .[ fig : ferldec ] shows the effect of the number of decoder iterations on the fer performance of the system for a fixed .we observe that the performance of the system does not improve significantly after a few em algorithm iterations .in contrast , the fer performance of the system can be slightly improved with more iterations inside of the decoder . in order to achieve lower error rates in the presence of strong phase noisea stronger channel encoder can be used , i.e , the rate of the ldpc code , , needs to be decreased .note that the spectral efficiency of the system also reduces with decreasing yielding low throughput ., da initial estimation with , and rate code.,width=340 ] fig [ fig : fer5e4rate12 ] shows the fer performance of the em - based algorithm for where data bits are encoded by a rate ldpc encoder .we observe that the fer performance can be improved significantly by increasing the number of em iteration .it is possible to achieve fer in expense of 4db of comparing to no phase noise scenario .however , the error floor still occurs .it is concluded that the number of the decoder iterations of the em - based algorithm , , should be tuned to a sufficiently large number to be able to track time varying phase noise .in chapter [ chap : siso - model ] , phase noise tracking is performed by a hard decision directed ekf for the uncoded siso system .numerical results shows that the ekf is able to track slowly time varying phase noise processes .unsurprisingly , the ber degradation reaches large values when the phase noise innovation variance is high .additionally , in chapter [ chap : siso - model ] , the problem of joint phase noise estimation and detection in a coded siso system with ldpc codes is discussed . the em - based algorithm proposed in is modified and analytically derived .two estimators , a soft decision directed ks - mla and a soft decision directed eks proposed in are applied to carry out the maximization step of the em - based algorithm . in ,the ks - mla is claimed to have superior performance than the eks .however , numerical results in chapter [ chap : siso - model ] indicates that when phase noise over the frame reaches very large values , i.e. , in the case of large block length and/or phase noise innovation variance , the performance of the ks - mla degrades significantly and the eks performs better than the ks - mla .the ks - mla removes the data dependency by multiplying the observed signal with the soft decision symbol . as a result, the performance of the ks - mla degrades faster than the eks when the soft decisions are less reliable .for instance , for the same phase noise innovation variance , the ekfs performs better than the ks - mla when the constellation density increases . 
a trick to increase the algorithm speedis also discussed and shown to decrease the overall complexity required for convergence . in chapter [ chap : mimo - model ] , a low complexity hard decision directed ekf is derived and applied to an uncoded mimo system .simulation results show that the ekf performs close to the synchronized system in the case of slowly time varying phase noise process .the problem of joint estimation of the time varying phase noise and data detection for a los - mimo system using bit interleaved coded modulation and ldpc codes is also discussed . an iterative em - based receiver to perform code - aided synchronization is proposed .a new low complexity soft - decision directed ekfs is derived and embedded into the em - based algorithm .the performance of the system is investigated in terms of the ber and the fer .estimation accuracy of the phase noise parameters is presented with average mse curves .computational complexity of the system is also discussed .computer simulations show that the number of bit errors does not decrease at each em algorithm iteration if the algorithm fails to converge .instead , the fer performance is shown to decrease at each em iteration .simulation results demonstrate that the proposed em - based algorithm estimates and compensates the time varying phase noise with a small degradation of the performance for a wide range of phase noise innovation variances. however , an error floor occurs at high signal - to - noise ratio levels . to reduce the error floor and to track the phase noise process with large innovation variance decoding performance can be increased by setting the number of decoder iterations to larger values .however , the achieved performance gain is not significant comparing to the introduced complexity . on the other hand ,coding rate can be reduced further to protect data bits , yielding better soft decisions .the fer performance can be significantly improved by utilizing low rate ldpc codes at the expense of a decrease in throughput . as a result, the ekfs can be applied to a coded los - mimo system for the wide range of phase noise innovation variances .the system parameters should be set to achieve the target performance . in chapter [ chap : mimo - model ] , the channel gains are assumed to be known at the receiver side .the effects of the estimation errors of the channel gains on the performance of the em - based algorithm is not investigated .the channel gains can be obtained by a conventional data - aided estimator .additionally , the channel estimation of the block fading mimo systems can be embedded into the em - based algorithms and initial estimates can be improved at every em algorithm iteration .therefore , this work can be extended to joint phase noise and channel estimation and data detection . in ,a data - aided ls estimator a decision - directed wls estimator and a new decision - directed ekf is proposed .these estimators can be used in the em - based algorithm to carry out the maximization step .finally , statistics of the phase noise estimates can be used by taking into account in the decoding process .a receiver employing factor graphs can be implemented and to track strong phase noise processes .a. demir , a. mehrotra , and j. roychowdhury , `` phase noise in oscillators : a unifying theory and numerical methods for characterization , '' _ ieee trans .circuits syst .i , fundam .theory appl ._ , vol . 47 , no . 5 , pp .655674 , may 2000 .h. meyr , m. moeneclaey , and s. 
fechtel , _ digital communication receivers : synchronization , channel estimation , and signal processing_.1em plus 0.5em minus 0.4emnew york , ny , usa : john wiley & sons , inc . , 1997 .s. godtmann , n. hadaschik , a. pollok , g. ascheid , and h. meyr , `` iterative code - aided phase noise synchronization based on the lmmse criterion , '' in _ ieee workshop on signal process .advances in wireless commun ._ , june 2007 , pp . 1 5 .a. stefanov and t. duman , `` turbo - coded modulation for systems with transmit and receive antenna diversity over block fading channels : system model , decoding approaches , and practical considerations , '' _ ieee j. sel .areas commun ._ , vol . 19 , no . 5 , pp . 958968 , may 2001 .j. boutros , f. boixadera , and c. lamy , `` bit - interleaved coded modulations for multiple - input multiple - output channels , '' in _ proc .symposium on spread spectrum techniques and applications _ , vol . 12000 , pp .123126 .n. hadaschik , m. drpinghaus , a. senst , o. harmjanz , u. kufer , g. ascheid , and h. meyr , `` improving mimo phase noise estimation by exploiting spatial correlations , '' in _ proc .conf . on acoustics , speech and signal process . _ , philadelphia , 2005 , pp .833836 .h. mehrpouyan , a. a. nasir , s. d. blostein , t. eriksson , g. karagiannidis , and t. svensson , `` joint estimation of channel and oscillator phase noise in mimo systems , '' _ ieee trans .signal process ._ , vol . 60 , no . 9 , pp. 47904807 , sep .g. golden , c. foschini , r. valenzuela , and p. wolniansky , `` detection algorithm and initial laboratory results using v - blast space - time communication architecture , '' _ electronics letters _ , vol .35 , no . 1 ,14 16 , jan . 1999 .p. wolniansky , g. foschini , g. golden , and r. valenzuela , `` v - blast : an architecture for realizing very high data rates over the rich - scattering wireless channel , '' in _ ursi int .symposium on signals , systems , and electronics _ , sep .1998 , pp . 295300 .v. tarokh , a. naguib , n. seshadri , and a. calderbank , `` space - time codes for high data rate wireless communication : performance criteria in the presence of channel estimation errors , mobility , and multiple paths , '' _ ieee trans ._ , vol .47 , no . 2 , pp . 199 207 , feb . 1999 . | new generation cellular networks have been forced to support high data rate communications . the demand for high bandwidth data services has rapidly increased with the advent of bandwidth hungry applications . to fulfill the bandwidth requirement , high throughput backhaul links are required . microwave radio links operating at high frequency bands are used to fully exploit the available spectrum . generating high carrier frequency becomes problematic due to the hardware limitations . non - ideal oscillators both at the transmitter and the receiver introduces time varying phase noise which interacts with the transmitted data in a non - linear fashion . phase noise becomes a detrimental problem in digital communication systems and needs to be estimated and compensated . in this thesis receiver algorithms are derived and evaluated to mitigate the effects of the phase noise in digital communication systems . the thesis is organized as follows : in chapter [ chap : siso - model ] phase noise estimation in single - input single - output ( siso ) systems is investigated . first , a hard decision directed extended kalman filter ( ekf ) is derived and applied to track time varying phase noise for an uncoded system . 
next , the problem of phase noise estimation for coded siso system is investigated . an iterative receiver algorithm performing code - aided turbo synchronization is derived using the expectation maximization ( em ) framework . two soft - decision directed estimators in the literature based on kalman filtering , the kalman filter and smoother with maximum likelihood average ( ks - mla ) and the extended kalman filter and smoother ( eks ) , are evaluated . low density parity check ( ldpc ) codes are proposed to calculate marginal a posteriori probabilities and to construct soft decision symbols . error rate performance of both estimators , the ks - mla and the eks , are determined and compared through simulations . simulations indicate that comparison on the performance of the existing estimators heavily depends on the system parameters such as block length and modulation order which are not taken into consideration in the literature . in chapter [ chap : mimo - model ] the thesis focuses on phase noise estimation in multi - input multi - output ( mimo ) systems . mimo technology is commonly used in microwave radio links to improve spectrum efficiency . first , an uncoded mimo system is taken under consideration . a low complexity hard decision directed ekf is derived and evaluated . a new mimo receiver algorithm that iterates between the estimator and the detector , based on the em framework for joint estimation and detection in coded mimo systems in the presence of time varying phase noise is proposed . a low complexity soft decision directed extended kalman filter and smoother ( ekfs ) that tracks the phase noise parameters over a frame is proposed in order to carry out the maximization step . the proposed ekfs based approach is combined with an iterative detector that utilizes bit interleaved coded modulation and employs ldpc codes to calculate the marginal a posteriori probabilities of the transmitted symbols , i.e. , soft decisions . numerical investigations show that for a wide range of phase noise variances the estimation accuracy of the proposed algorithm improves at every iteration . finally , simulation results confirm that the error rate performance of the proposed em - based approach is close to the scenario of perfect knowledge of phase noise at low - to - medium signal - to - noise ratios . |
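the decision directed phase tracking summarized above can be pictured with a minimal scalar ekf sketch . this is not the receiver derived in the thesis : the qpsk constellation , the wiener phase noise model , the noise levels and the use of the true symbols as hard decisions are simplifying assumptions made here only for illustration .

import numpy as np

rng = np.random.default_rng(0)

# illustrative values only (not taken from the thesis)
n = 2000
sig_w = 0.01                        # phase noise innovation std (wiener model)
sig_n = 0.05                        # awgn std per real dimension
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
s = qpsk[rng.integers(0, 4, n)]     # transmitted symbols
theta = np.cumsum(sig_w * rng.standard_normal(n))
y = s * np.exp(1j * theta) + sig_n * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

Q, R = sig_w ** 2, sig_n ** 2 * np.eye(2)
th, P = 0.0, 1.0                    # phase estimate and its variance
err = np.empty(n)
for k in range(n):
    P = P + Q                                           # prediction (random walk phase)
    h = s[k] * np.exp(1j * th)                          # predicted noiseless observation
    H = np.array([[-h.imag], [h.real]])                 # jacobian of [re(h), im(h)] w.r.t. phase
    z = np.array([y[k].real, y[k].imag])
    S = H @ (P * H.T) + R                               # innovation covariance (2x2)
    K = (P * H.T) @ np.linalg.inv(S)                    # kalman gain, shape (1, 2)
    th = th + (K @ (z - np.array([h.real, h.imag]))).item()
    P = ((1.0 - K @ H) * P).item()
    err[k] = th - theta[k]

print("rms phase error [rad]:", np.sqrt(np.mean(err ** 2)))

in a practical decision directed receiver the detected symbols would replace s[k] in the update , and the em based receivers discussed above instead feed soft symbol information from the ldpc decoder into this step .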
the observational evidence for an acceleration of the expansion of the universe is now overwhelming , although the precise cause of this phenomenon is still unknown ( see , e.g. , for recent reviews ) . in this concern , and besides the need for more accurate estimates of cosmological parameters , the current state of affairs also brings to light some other important aspects regarding the physics of the mechanism behind cosmic acceleration .certainly , one of these aspects concerns the thermodynamical behavior of a dark energy - dominated universe , and questions such as `` what is the thermodynamic behavior of the dark energy in an expanding universe ? '' or , more precisely , `` what is its temperature evolution law ? '' must be answered in the context of this new conceptual set up .another interesting aspect in this discussion is whether thermodynamics in the accelerating universe can place constraints on the time evolution of the dark energy and can also reveal some physical properties of this energy component .the aim of this paper is twofold .first , to derive physical constraints on the dark energy from the second law of thermodynamics and to deduce the temperature evolution law for a dark component with a general equation - of - state ( eos ) parameter ; second , to perform a joint statistical analysis involving current observational data together with the thermodynamic bounds on . to do that , we assume the following generalized formula for the time evolution of which recovers some well - known eos parameterizations in the following limits : where and stand for the dark energy pressure and energy density , respectively ( see also for other eos parameterizations ) .the analyses are performed using one the most recent type ia supernovae ( sne ia ) observations , the nearby + sdss + essence + snls + hubble space telescope ( hst ) set of 288 sne ia discussed in ref . ( which we refer to as sdss compilation ) .we consider two sub - samples of this latter compilation that use salt2 and mlcs2k2 sn ia light - curve fitting method . along with the sneia data , and to help break the degeneracy between the dark energy parameters we also use the baryonic acoustic oscillation ( bao ) peak at and the current estimate of the cmb shift parameter .we work in units where c = 1 . throughout this paper a subscript 0 stands for present - day quantities and a dot denotes time derivative .let us first consider a homogeneous , isotropic , spatially flat cosmologies described by the friedmann - robertson - walker ( frw ) flat line element , , where is the cosmological scalar factor .the matter content is assumed to be composed of baryons , cold dark matter and a dark energy component .in such a background , the thermodynamic states of a relativistic fluid are characterized by an energy momentum tensor ( perfect - type fluid ) a particle current , and an entropy current , where is the usual projector onto the local rest space of and and are the particle number density and the specific entropy ( per particle ) , respectively .the conservation laws for energy and particle number densities read wheresemi - colons mean covariant derivative , is the scalar of expansion and the quantities , , and are related to the temperature trough the gibbs law : . from the energy conservation equation above , it follows that the energy density for a general component can be written as \;.\ ] ] following standard lines ( see , e.g. 
, ) , it is possible to show that the temperature evolution law is given by where we have split the dark energy pressure as where . by combining the above equations , we also find that \;,\ ] ] where we have used that , as given by the conservation of particle number density [ eq . ( [ nalpha ] ) ] . for parameterization p , shown in eq .( [ pbeta ] ) , eq .( [ temp ] ) can be rewritten as \;,\ ] ] which reduces to the generalized stefan - boltzmann law for ( see also ) . from the above expressions , we confirm the results of ref . ( for a constant eos parameter ) and find that dark energy becomes hotter in the course of the cosmological expansion since the its eos parameter must be a negative quantity .a possible physical explanation for this behavior is that thermodynamic work is being done on the system ( see , e.g. , fig . 1 of ref .in particular , for the vacuum state ( ) we obtain . to illustrate this behavior , fig .1 shows the dark energy temperature as a function of the scale factor for ( p2 ) and ( p3 ) , with the dark energy density blowing up as at high- .we , therefore , do not consider this parameterization in our analyses .] by assuming arbitrary values of , and , where k. from this analysis , it is clear that an important point for the thermodynamic fate of the universe is to know how long the dark energy temperature will take to become the dominant temperature of the universe .a basic difficulty in estimating such a time interval , however , is that the present - day dark energy temperature has not been measured , being completely unknown . 0.1 in by considering that the chemical potential for this -fluid is null ( as occurs for ), the euler s relation defines its specific entropy , i.e. , now , by combining the above equations , it is straightforward to show that the product , so that for a constant eos parameter , the above expression recovers some of the results of ref .note also that the vacuum entropy is zero ( ) whereas for phantom dark energy ( ) , which violates all the energy conditions , the entropy assumes negative values being , therefore , meaningless ( for a discussion on the behavior of a phanton fluid with nonzero chemical potential , see .see also for an alternative explanation in which the temperature of the phantom component takes negative values and for other thermodynamic analyses of dark energy ) .two cases of interest arise directly from eq .( [ entropy ] ) .the case in which implies necessarily that for all the above parameterizations are imcompatible with the case . ] .the second case is more interesting , with the -fluid mimicking a fluid with bulk viscosity where the viscosity term is identified with the varying part of the dark energy pressure . for this latter case, we note that the positiveness of implies that which clearly is not defined at , where . finally , by combining eqs .( [ euler ] ) and ( [ entropy ] ) with the conservation of the particle number density shown earlier , we obtain from the second law of thermodynamics that or , equivalently , .in this section , we combine the above physical constraints ( [ c1 ] ) and ( [ c2 ] ) with current observational data in order to impose bounds on the dark energy parameters .we use one of the most recent sne ia data sets available , namely , the sdss compilation discussed in ref . 
.this compilation comprises 288 sne ia and uses both salt2 and mlcs2k2 light - curve fitters ( see also for a discussion on these light - curve fitters ) and is distributed in redshift interval .along with the sne ia data , and to help break the degeneracy between the dark energy parameters and we use the bao and shift parameters where ^{1/3}$ ] is the so - called dilation scale , defined in terms of the dimensionless comoving distance , and . in our analyses , we minimize the function , which takes into account all the data sets mentioned above and marginalize over the present values of the matter density and hubble parameters . figures 2 and 3 show the main results of our joint analyses .we plot contours of in the parametric space - for p2 and p3 , respectively .the light gray region displayed in the plots stands for the physical constraint ( [ c1 ] ) . since this inequality is a function of time , the region is plotted by assuring its validity from up to today at .the resulting parametric space , when all the observational data discussed above are combined with the constraints ( [ c1 ] ) and ( [ c2 ] ) , corresponds to the small hachured area right below the line .these results clearly illustrate the effect that the thermodynamic bounds discussed in the previous section may have on the determination of the dark energy eos parameters .in particular , we note that the resulting allowed regions are even tighter for the logarithmic parameterization p2 than for p3 ( cpl ) . since the salt2 compilation allows for more negative values of , the joint constraints involving this sne ia sub - sample are also more restrictive ( figs .2b and 3b ) . for completeness, we display in table i the changes in the 2 estimates of and due to the thermodynamic bounds ( [ c1 ] ) and ( [ c2 ] ) . [ n - table ]lccc + test & & & + + sne ia ( mlcs2k2 ) .................... & p2 & & + sne ia ( mlcs2k2) + t ] .......... & p2 & & + sne ia ( salt2) ........................ & p2 & & + sne ia ( salt2) + t ............... & p2 & & + sne ia ( mlcs2k2) .................... & p3 & & + sne ia ( mlcs2k2) + t .......... & p3 & & + sne ia ( salt2) ........................ & p3 & & + sne ia ( salt2) + t ............... & p3 & & +in spite of its fundamental importance for an actual understanding of the evolution of the universe , the relevant physical properties of the dominant dark energy component remain completely unknown . in this paperwe have investigated some thermodynamic aspects of this energy component assuming that its constituents are massless quanta with a general time - dependent eos parameter .we have discussed its temperature evolution law and derived constraints from the second law of thermodynamics on the values of and for a family of parameterizations given by eq .( [ pb ] ) .when combined with current data from sne ia , bao and cmb observations , we have shown that such constraints provide very restrictive limits on the parametric space - ( see figs . 2 and 3 ) . finally , it is also worth mentioning that in the present analysis we have assumed that the chemical potential for the -fluid representing the dark energy is null . a more general analysis relaxing this condition ( ) is currently under preparation and will appear in a forthcoming communication .b. ratra & m. s. vogeley , pasp * 120 * , 235 ( 2008 ) ; r. r. caldwell & m. kamionkowski , ann .nucl . part .sci . * 59 * , 397 ( 2009 ) ; a. silvestri & m. trodden , rept .. phys . *72 * , 09690 ( 2009 ) ; m. sami , curr .sci . * 97 * , 887 ( 2009 ) .y. 
wang and p. m. garnavich , astrophys .j. * 552 * , 445 ( 2001 ) ; c. r. watson and r. j. scherrer , phys .d * 68 * , 123524 ( 2003 ) ; p.s .corasaniti et al . , phys .d * 70 * , 083006 ( 2004 ) ; v. b. johri , astro - ph/0409161 ; y. wang and m. tegmark , phys .* 92 * , 241302 ( 2004 ) ; h. k. jassal , j. s. bagla , and t. padmanabhan , mon . not .. soc . * 356 * , l11 ( 2005 ) ; e. m. barboza jr . and j. s. alcaniz , phys .b * 666 * , 415 ( 2008 ) .m. m. phillips , astrophys . j. * 413 * , l105 ( 1993 ) ; a. g. riess , w. h. press , and r. p. kirshner , astrophys .j. * 438 * , l17 ( 1995 ) ; s. jha , a. g. riess , and r. p. kirshner , astrophys . j. * 659 * , 122 ( 2007 ) .g. izquierdo and d. pavon , phys .b * 633 * , 420 ( 2006 ) ; n. bilic , fortsch .phys . * 56 * , 363 ( 2008 ) ; y. s. myung , phys .b * 671 * , 216 ( 2009 ) ; e. n. saridakis , p. f. gonzalez - diaz and c. l. siguenza , class .. grav .* 26 * , 165003 ( 2009 ) . | a significant observational effort has been directed to unveil the nature of the so - called dark energy . however , given the large number of theoretical possibilities , it is possible that such a task can not be performed on the basis only of the observational data . in this article we discuss some thermodynamic properties of this energy component assuming a general time - dependent equation - of - state parameter , where and are constants and may assume different forms . we show that very restrictive bounds can be placed on the - space when current observational data are combined with the thermodynamic constraints derived . |
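the temperature evolution discussed above can be evaluated numerically for time dependent equations of state . the sketch below assumes that the law takes the form dT/T = -3 w(a) da/a , which reproduces the constant - w generalized stefan - boltzmann behaviour mentioned in the text ; the functional forms used for the logarithmic ( p2 ) and cpl ( p3 ) parameterizations and the parameter values are illustrative assumptions , not the expressions or fitted values of the paper .

import numpy as np
from scipy.integrate import quad

def temperature_ratio(a, w0, w1, f):
    """T(a)/T0 under the assumed law dT/T = -3 w(a) da/a, with w(a) = w0 + w1 f(a)."""
    val, _ = quad(lambda x: (w0 + w1 * f(x)) / x, 1.0, a)
    return np.exp(-3.0 * val)

f_p2 = lambda a: np.log(a)      # assumed logarithmic form for p2
f_p3 = lambda a: 1.0 - a        # assumed cpl form for p3

for a in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(a, temperature_ratio(a, -1.0, 0.2, f_p2), temperature_ratio(a, -1.0, 0.2, f_p3))

with w < 0 the ratio grows with the scale factor , i.e. the dark component heats up as the expansion proceeds , consistent with the behaviour described in the text .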
since the seminal work of koenker and bassett ( 1978 ) , quantile regression has received substantial scholarly attention as an important alternative to conventional mean regression . indeed , there now exists a large literature on the theory of quantile regression ( see , for example , koenker ( 2005 ) , yu _ et al_. ( 2003 ) , and buchinsky ( 1998 ) for an overview ) .notably , quantile regression can be used to analyse the relationship between the conditional quantiles of the response distribution and a set of regressors , while conventional mean regression only examines the relationship between the conditional mean of the response distribution and the regressors .quantile regression can thus be used to analyse data that include censored responses .powell ( 1984 ; 1986 ) proposed a tobit quantile regression ( tqr ) model utilising the equivariance of quantiles under monotone transformations .hahn ( 1995 ) , buchinsky and hahn ( 1998 ) , bilias _ et al_. ( 2000 ) , chernozhukov and hong ( 2002 ) , and tang _ et al_. ( 2012 ) considered alternative approaches to estimate tqr .more recent works in the area of censored quantile regression include wang and wang ( 2009 ) for random censoring using locally weighted censored quantile regression , wang and fygenson ( 2009 ) for longitudinal data , chen ( 2010 ) and lin _ et al_. ( 2012 ) for doubly censored data using the maximum score estimator and weighted quantile regression , respectively , and xie _ et al_. ( 2015 ) for varying coefficient models .in the bayesian framework , yu and stander ( 2007 ) considered tqr by extending the bayesian quantile regression model of yu and moyeed ( 2001 ) and proposed an estimation method based on markov chain monte carlo ( mcmc ) .a more efficient gibbs sampler for the tqr model was then proposed by kozumi and kobayashi ( 2011 ) .further extensions of bayesian tqr have also been considered .kottas and krnjaji ( 2009 ) and taddy and kottas ( 2012 ) examined semiparametric and nonparametric models using dirichlet process mixture models .reich and smith ( 2013 ) considered a semiparametric censored quantile regression model where the quantile process is represented by a linear combination of basis functions . to accommodate nonlinearity in data , zhao andlian ( 2015 ) proposed a single - index model for bayesian tqr .furthermore , kobayashi and kozumi ( 2012 ) proposed a model for censored dynamic panel data . for variable selection in bayesian tqr ,ji _ et al_. ( 2012 ) applied the stochastic search , alhamzawi and yu ( 2014 ) considered a -prior distribution with a ridge parameter that depends on the quantile level , and alhamzawi ( 2014 ) employed the elastic net .as in the case of ordinary least squares , standard quantile regression estimators are biased when one or more regressors are correlated with the error term .many authors have analysed quantile regression for uncensored response variables with endogenous regressors , such as amemiya ( 1982 ) , powell ( 1983 ) , abadie _ et al_. ( 2002 ) , kim and muller ( 2004 ) , ma and koenker ( 2006 ) , chernozhukov and hansen ( 2005 ; 2006 ; 2008 ) , and lee ( 2007 ) . extending the quantile regression model to simultaneously account for censored response variables and endogenous variablesis a challenging issue . 
in the case of the conventional tobit model with endogenous regressors ,a number of studies were published in the 1970s and 1980s , such as nelson and olsen ( 1978 ) , amemiya ( 1979 ) , heckman ( 1978 ) , and smith and blundell ( 1986 ) , with more efficient estimators proposed by newey ( 1987 ) and blundell and smith ( 1989 ) . on the contrary, few studies have estimated censored quantile regression with endogenous regressors . while blundell and powell ( 2007 ) introduced control variables as in lee ( 2007 ) to deal with the endogeneity in censored quantile regression , their estimation method involved a high dimensional nonparametric estimation and can be computationally cumbersome .chernozhukov _ et al_. ( 2014 ) also introduced control variables to account for endogeneity .they proposed using quantile regression and distribution regression ( chernozhukov _ et al_. , 2013 ) to construct the control variables and extended the estimation method of chernozhukov and hong ( 2002 ) . in the bayesian framework , meanregression models with endogenous variables have garnered a great deal of research attention from both the theoretical and the computational points of view ( _ e.g . _rossi _ et al_. , 2005 ; hoogerheide _ et al_. , 2007a , 2007b ; conely _ et al_. , 2008 ; lopes and polson , 2014 ) .however , despite the growing interest in and demand for bayesian quantile regression , the literature on bayesian quantile regression with endogenous variables remains sparse .lancaster and jun ( 2010 ) utilised the exponentially tilted empirical likelihood and employed the moment conditions used in chernozhukov and hansen ( 2006 ) . in the spirit of lee ( 2007 ) , ogasawara and kobayashi ( 2015 ) employed a simple parametric model using two asymmetric laplace distributions for panel quantile regression. however , these methods are only applicable to uncensored data .furthermore , the model of ogasawara and kobayashi ( 2015 ) can be restrictive because of the shape limitation of the asymmetric laplace distribution , which can affect the estimates .indeed , the modelling of the first stage error in this approach remains to be discussed .based on the foregoing , this study proposes a flexible parametric bayesian endogenous tqr model .the -th quantile regression of interest is modelled parametrically following the usual bayesian quantile regression approach .following lee ( 2007 ) , we introduce a control variable such that the conditional quantile of the error term is corrected to be zero and the parameters are correctly estimated . as in the approach of lee ( 2007 ) , the -th quantile of the error term in the regression of the endogenous variable on the exogenous variables , which is often called the first stage regression , is also assumed to be zero .we discuss the modelling approach for the first stage regression and consider a number of parametric and semiparametric models based on the extensions of ogasawara and kobayashi ( 2015 ) .specifically , following wichitaksorn _ et al_. ( 2014 ) and naranjo _ et al_. ( 2015 ) , we employ the first stage regression models based on the asymmetric laplace distribution , skew normal distribution , and asymmetric exponential power distribution , for which the -th quantile is always zero and is modelled by the regression function . 
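before the first stage error distributions are specified , the control variable idea itself can be illustrated with a simple two step frequentist analogue on an uncensored linear design : the residual of a first stage quantile regression of the endogenous regressor on the instrument is added to the second stage quantile regression . this is only a sketch of the mechanism exploited by lee ( 2007 ) and by the present models , not the bayesian estimator proposed in the paper , and all numerical values below are placeholders .

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
z = rng.standard_normal(n)                     # instrument
u = rng.standard_normal(n)                     # first stage error
d = 1.0 + 1.0 * z + u                          # endogenous regressor
eps = 0.7 * u + rng.standard_normal(n)         # correlated with u -> endogeneity
y = 1.0 + 0.5 * d + eps                        # uncensored response, true slope 0.5

p = alpha = 0.5
Z = sm.add_constant(z)
v = d - sm.QuantReg(d, Z).fit(q=alpha).predict(Z)    # control variable (first stage residual)

print("naive  :", sm.QuantReg(y, sm.add_constant(d)).fit(q=p).params)
print("control:", sm.QuantReg(y, sm.add_constant(np.column_stack([d, v]))).fit(q=p).params)

in this location shift design the naive slope is biased upward , while the control variable version recovers a slope close to 0.5 ; the censoring and the estimation of the first stage quantile level handled by the proposed models are ignored here .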
to introduce more flexibility into the tail behaviour of the models based on the asymmetric laplace and skew normal distributions, we also consider a semiparametric extension using the dirichlet process mixture of scale parameters as in kottas and krnjaji ( 2011 ) .the value of is a priori unknown , while the choice of can affect the estimates . in this study , hence , is treated as a parameter to incorporate uncertainty and is estimated from the data .the performance of the proposed models is demonstrated in a simulation study under various settings , which is a novel contribution of the present study .we also illustrate the influence of the prior distributions on the posterior in the cases where valid and weak instruments are used .the rest of this paper is organised as follows .section [ sec : tobit ] introduces the standard bayesian tqr model with a motivating example .then , section [ sec : approach ] proposes bayesian tqr models to deal with the endogenous variables .the mcmc methods adopted to make inferences about the models are also described .the simulation study under various settings is presented in section [ sec : sim ] .the models are also illustrated by using the real data on the working hours of married women in section [ sec : real ] .finally , we conclude in section [ sec : conc ] .suppose that the response variables are observed according to then , consider the -th quantile regression model for given by where is the vector of regressors , is the coefficient parameter , and is the error term whose -th quantile is zero .the -th conditional quantile of is modelled as .the equivariance under the monotone transformation of quantiles implies that the -th conditional quantile of is given by the tqr model can be estimated by minimising the sum of asymmetrically weighted absolute errors where and denotes the indicator function ( powell , 1986 ) .the bayesian approach assumes that follows the asymmetric laplace distribution , since minimising ( [ eqn : check ] ) is equivalent to maximising the likelihood function of the asymmetric laplace distribution ( koenker and machado , 1999 ; chernozhukov and hong , 2003 ) .the probability density function of the asymmetric laplace distribution , denoted by , is given by where is the scale parameter and is the shape parameter ( yu and zhang , 2005 ) .the mean and variance are given by =\sigma\frac{1 - 2p}{p(1-p)} ] and ( see wichitaksorn _ et al_. , 2014 ) .when the actual error distribution is close to the normal distribution , this distribution would lead to better performance than the asymmetric laplace distribution .however , just as the asymmetric laplace distribution , the skewness and the quantile level of the mode are controlled by the single parameter .second , we consider the asymmetric exponential power distribution treated by zhu and zinde - walsh ( 2009 ) , zhu and galbraith ( 2011 ) , and naranjo _ et al_. ( 2015 ) .the probability density function of the asymmetric exponential power distribution , denoted by , is given by where is the scale parameter , is the skewness parameter , is the shape parameter for the left tail , and is the shape parameter for the right tail . after some reparameterisation, the distribution reduces to the asymmetric laplace distribution when and to the skew normal distribution when .the tails of the asymmetric exponential power distribution are controlled separately by and , respectively , and the overall skewness is controlled by . 
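the equivalence between the check loss and the asymmetric laplace likelihood stated above is easy to verify numerically ; the small sketch below uses the yu and zhang ( 2005 ) parameterization quoted in the text .

import numpy as np

def check_loss(u, p):
    """rho_p(u) = u * (p - 1{u < 0}), the asymmetrically weighted absolute error."""
    return u * (p - (u < 0))

def ald_logpdf(u, p, sigma):
    """log density of AL(0, sigma, p)."""
    return np.log(p * (1.0 - p) / sigma) - check_loss(u, p) / sigma

p, sigma = 0.3, 1.2
u = np.linspace(-4.0, 4.0, 9)
# the log likelihood differs from -sum(rho_p)/sigma only by a constant,
# so maximizing one is the same as minimizing the other
print(ald_logpdf(u, p, sigma) + check_loss(u, p) / sigma)

# the p-th quantile of AL(0, sigma, p) is zero: the probability mass below 0 equals p
grid = np.linspace(-60.0, 60.0, 120001)
dx = grid[1] - grid[0]
pdf = np.exp(ald_logpdf(grid, p, sigma))
print(pdf[grid <= 0.0].sum() * dx)      # close to p

the asymmetric exponential power distribution introduced above , by contrast , lacks such convenient structure for posterior simulation .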
although the distribution is more flexible than the above two distributions , the posterior computation using mcmc would be inefficient , because it includes two additional shape parameters and it has no convenient mixture representation , apart from the mixture of uniforms that is inefficient , to facilitate an efficient mcmc algorithm .the computational efficiency is also compared in section [ sec : sim ] .in addition to the three parametric models , we also consider the semiparametric extension of the models based on the asymmetric laplace and skew normal distributions to achieve both flexibility and computational efficiency .more specifically , the following two models using the dirichlet process mixtures of scales are considered : where denotes the dirichlet process with the precision parameter and the base measure .for both models , we set as it is computationally convenient .while those mixture models have the same limitation as the parametric versions in terms of skewness , they extend the tail behaviour of the error distribution preserving ( [ eqn : alpha ] ) ( kottas and krnjaji , 2009 ) .hereafter , the models with the asymmetric laplace , skew normal , and asymmetric exponential power first stage errors are respectively denoted by al , sn , and aep , and those with the dirichlet process mixtures are denoted by aldp and sndp .we must take care when selecting the value in ( [ eqn : alpha ] ) , as it is a part of the model specification and can thus affect the estimates ( lee , 2007 ) .we treat as a parameter and estimate its value along with the other parameters . since determines the quantile level of the mode for all models considered here , our approach to modelling the first stage regression can also be regarded as a kind of mode regression ( see wichitaksorn _ et al_. , 2014 ) . to gain further flexibility , we might extend the model through a fully nonparametric mixture . several semiparametric models in the context of bayesian quantile regression with exogenous variableshave been proposed by kottas and gelfand ( 2001 ) , kottas and krnjaji ( 2009 ) , and reich _ et al_. ( 2010 ) .for example , kottas and krnjaji ( 2009 ) considered the nonparametric mixture of uniform distributions for any unimodal density on the real line with the quantile restriction at the mode using the dirichlet process mixture ( see also kottas and gelfand , 2001 ) . in the more flexible model proposed by reich_ et al_. ( 2010 ) , the mode of the error distribution does not have to coincide with zero .this is achieved by using a nonparametric mixture of the quantile - restricted two - component mixtures of normal distributions .however , their approaches are not directly applicable in the present context where the value of is estimated . if we were to estimate the quantile level for which the quantile restriction holds , the computation under the former model is expected to be extremely inefficient and unstable as the model involves many indicator functions , and and the intercept would be highly correlated .the intercept would not be identifiable in the latter model .we could further extend the model to account for heteroskedasticity such that for , where for all and the first element of is fixed to one ( _ e.g ._ reich , 2010 ) . 
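the dirichlet process mixture of scales used in aldp and sndp can be pictured through sethuraman 's stick breaking construction ; the truncated sketch below is only illustrative ( the samplers described later avoid a fixed truncation by using the slice and retrospective samplers ) , and the base measure and precision values are placeholders rather than the defaults of the paper .

import numpy as np

def stick_breaking(c, n_atoms, rng):
    """truncated stick breaking weights of DP(c, G0): v_k ~ Beta(1, c)."""
    v = rng.beta(1.0, c, size=n_atoms)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

rng = np.random.default_rng(2)
c = 1.0                                    # dp precision (placeholder)
a0, b0 = 3.0, 1.0                          # inverse gamma base measure (placeholder)
w = stick_breaking(c, 200, rng)
phi = 1.0 / rng.gamma(a0, 1.0 / b0, size=200)    # atom scales drawn from G0
print("weights sum to ~1:", w.sum())
print("mean scale of this random mixture:", (w * phi).sum())

each observation in the aldp and sndp first stages then picks one of these scales , which frees the tails of the error while preserving its zero alpha - th quantile at the mode . the same remark applies to the heteroskedastic variant of the first stage introduced just above .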
in this case , the -th quantile of is given by as in the usual quantile regression .however , since the first stage regression model is built based on ( [ eqn : alpha ] ) , models ( [ eqn:1st ] ) and ( [ eqn : hetero ] ) would produce identical estimates .we next turn to the model of the new second stage error , , in ( [ eqn:2nd ] ) . since the -th conditional quantile of is now zero, we assume that as in the standard bayesian quantile regression approach .we utilise the location scale mixture of normals representation for the asymmetric laplace distribution to facilitate an efficient mcmc method following kozumi and kobayashi ( 2011 ) ( see also kotz _ et al_. , 2001 ) .the model is expressed in the hierarchical form given by for , where , , denotes the exponential distribution with mean , and the coefficient parameter is common to all first stage regression specifications .first , we assume the normal prior for , since it is computationally convenient for the al , sn , aldp , and sndp models .since we do not have information on the coefficient values , the variances are set such that the prior distributions are relatively diffuse .our default choice is . for the scale parameters , for the al , sn , and aep distributions ,a relatively diffuse inverse gamma distribution is assumed and the default choice is set to . for aep ,we assume , where denotes the normal distribution with the mean and variance truncated on the interval .a similar prior specification is found in naranjo _ et al_. ( 2015 ) . for all models , is assumed . for the semiparametric models , we need to specify the parameters of the inverse gamma base measure .assuming that the data have been rescaled , and are chosen such that the variance of takes values between and with high probability ( _ e.g ._ ishwaran and james , 2002 ) .our default choice is and for aldp and for sndp . under this choice ,when for aldp , as .similarly , when , . for sndp , when and when .for the precision parameter of the dirichlet process , , we assume such that both small and large values for , hence the number of clusters , are allowed . for the coefficient parameters in the second stage , and , we also assume relatively diffuse normal distributions .our default choice of prior is .similar to in the parametric first stage , we assume an inverse gamma prior for the scale of the al pseudo likelihood .our default choice is .the parameter accounts for the endogeneity and we need to take care in prior elicitation . when the data follow the bivariate normal distribution , as in the motivating example ( [ eqn : example ] ) , is equal to , where is the correlation coefficient and and are the standard deviations of the first and second stage errors , respectively . in this case , we may follow lopes and polson ( 2014 ) to determine the variance of the normal prior implied from an inverse wishart prior for the covariance matrix . however , we do not limit ourselves to normal data as the quantile regression approach is suitable for heteroskedastic and non - normal data , and the non - normal models are used in the first stage . in the literature on bayesian non - normal selection models , the prior distribution of is normal typically with a very small variance , such as ( _ e.g . _munkin and trivedi , 2003 , 2008 ; deb _ et al_. , 2006 ) .on the other hand , we use a more diffused prior to reflect our ignorance about and set our default choice of prior to be . 
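the location scale mixture of normals representation invoked above for the second stage error can be checked directly by simulation . the sketch below uses the standard kozumi and kobayashi ( 2011 ) form , with theta_p = (1 - 2p)/(p(1 - p)) , tau_p^2 = 2/(p(1 - p)) and an exponential mixing variable with mean sigma ; these expressions are stated here as assumptions because they are not reproduced in this excerpt .

import numpy as np

rng = np.random.default_rng(3)
p, sigma, m = 0.25, 1.5, 200000
theta_p = (1.0 - 2.0 * p) / (p * (1.0 - p))
tau2_p = 2.0 / (p * (1.0 - p))

g = rng.exponential(scale=sigma, size=m)              # mixing variable ~ Exp(mean sigma)
z = rng.standard_normal(m)
u = theta_p * g + np.sqrt(tau2_p * sigma * g) * z     # should be ~ AL(0, sigma, p)

print("empirical p-th quantile (should be ~0):", np.quantile(u, p))
print("empirical mean vs sigma*(1-2p)/(p(1-p)):", u.mean(), sigma * theta_p)

conditionally on the mixing variable the error is gaussian , which is what makes the gibbs updates described below tractable .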
when the instrument is weak , it is expected that our quantile regression models face the problem of prior sensitivity and that the posterior distributions exhibit sharp behaviour , as in the case of the bayesian instrumental variable regression model .section [ sec : sim ] considers the alternative choices of the hyperparameters to study the prior sensitivity .the proposed models are estimated by using the mcmc method based on the gibbs sampler .we describe the gibbs sampler for the semiparametric models with aldp and sndp , which is an extension of the gibbs sampler described in kozumi and kobayashi ( 2011 ) and ogasawara and kobayashi ( 2015 ) .the algorithms for the al and sn models can be obtained straightforwardly .we also mention the algorithm for the aep model .the variables involved in the dirichlet process are sampled by using the retrospective sampler ( papaspiliopoulos and roberts , 2008 ) and the slice sampler ( walker , 2007 ) .first , we introduce and , such that .then , as in walker ( 2007 ) , the gibbs sampler is constructed by working on the following joint densities where , , , and denotes the beta distribution with the parameters and ( sethuraman , 1994 ) .we also let denote the minimum integer such that . for the aldp model , we utilise the mixture representation for the asymmetric laplace distribution to sample efficiently such that , , , where and are defined as in ( [ eqn : theta ] ) .let us denote and .our gibbs sampler proceeds by alternately sampling , , , , , , , , , , , and . * * sampling : * generate from for . ** sampling : * generate from where for . ** sampling : * generate from the multinomial distribution with probabilities for . ** sampling : * generate from where + d_0.\ ] ] * * sampling : * assuming the gamma prior , , we use the method described by escobar and west ( 1995 ) to sample . by introducing , the full conditional distribution of is the mixture of two gamma distributions given by where is the number of distinct clusters and . ** sampling : * assuming , is sampled from where ^{-1},\\ { \mathbf{g}}_1&=&{\mathbf{g}}_1\left[\sum_{i=1}^n{\mathbf{z}}_i\left(-\frac{\eta_p(y_i^*-{\mathbf{x}}_i'{\text{\boldmath{}}}_p - \eta_pd_i-\theta_pg_i)}{\tau_p^2\sigma g_i}+\frac{d_i-\theta_\alpha h_i}{\tau_\alpha^2\phi_{k_i}h_i } \right)+{\mathbf{g}}_0^{-1}{\mathbf{g}}_0\right ] , \end{aligned}\ ] ] as the density of the full conditional distribution denoted by is given by * * sampling : * the full conditional distribution of is the generalised inverse gaussian distribution , denoted by .the probability density function of is given by where is the modified bessel function of the third kind ( barndorff - nielsen and shephard , 2001 ) .for , we sample from where ** sampling : * the density of the full conditional distribution of is given by where and denote the full conditional and prior density of , respectively .we use the random walk metropolis hastings ( mh ) algorithm to sample from this distribution . * * sampling : * the full conditional distribution of is given by * * sampling : * we sample in one block . assuming , the full conditional distribution is given by where ^{-1 } , \quad \tilde{{\mathbf{b}}}_1=\tilde{{\mathbf{b}}}_1\left[\sum_{i=1}^n\frac{\tilde{{\mathbf{x}}}_i(y_i^*-\theta_pg_i)}{\tau_p^2\sigma g_i}+ \tilde{{\mathbf{b}}}_0^{-1}\tilde{{\mathbf{b}}}_0\right].\ ] ] * * sampling : * assuming , we sample from where and . 
** sampling : * similar to , is sampled from where the gibbs sampler for sndp consists of sampling , , , , , , , , , , and .the sampling algorithms for , , , , , , and remain the same as in the case of aldp .the sampling scheme of and can be obtained by replacing with .similar to the case of aldp , the density of the full conditional distribution is given by where ^{-1 } , \\{ \mathbf{g}}_1({\text{\boldmath{}}})&=&{\mathbf{g}}_1({\text{\boldmath{}}})\left[\sum_{i=1}^n{\mathbf{z}}_i \left ( -\frac{\eta_p(y_i^*-{\mathbf{x}}_i'{\text{\boldmath{}}}_p-\eta_pd_i-\theta_p g_i)}{\tau_p^2\sigma g_i}+\frac{4d_i(\alpha - i(d_i\leq{\mathbf{z}}_i'{\text{\boldmath{}}}))^2}{\phi_{k_i}}\right ) + { \mathbf{g}}_0^{-1}{\mathbf{g}}_0\right ] , \end{aligned}\ ] ] which is similar to the density of the normal distribution .therefore , we sample by using the mh algorithm with the proposal distribution given by . since no convenient representation for the aep distribution is available , the full conditional distributions of the parameters in the first stage regression , , , , , and , are not in the standard forms .therefore , we employ the adaptive random walk mh algorithm . although naranjo _ et al_. ( 2015 ) proposed the scale mixture of uniform representation for the aep distribution , the algorithm based on this representation would be inefficient , because it consists of sampling from a series of distributions that are truncated on some intervals such that the mixture representation holds and such intervals move quite slowly as sampling proceeds ( see also kobayashi , 2015 ) .since the additional shape parameters in aep free up the role of , controls the overall skewness by allocating the weights on the left and right sides of the mode .hence , the mcmc sample would exhibit relatively high correlation between and .the models considered in the previous section are demonstrated using simulated data .the aims of this section are ( 1 ) to compare the performance of the proposed models ( section [ sec : default ] ) , ( 2 ) to study the sensitivity to the prior settings , and ( 3 ) to illustrate the behaviour of the posterior distribution when the instrument is weak ( section [ sec : alt ] ) .the data are generated from the model given by for , where assuming that a valid instrument is available , , , and .the performance of the models is compared by considering the various settings for , while the distributions of are kept relatively simple in order that the true values of the quantile regression coefficients are tractable .the following five settings are considered : + * setting 1 * , , + * setting 2 * , , + * setting 3 * , , + * setting 4 * , , + * setting 5 * , , + where denotes the skew distribution with the location parameter , scale parameter , skewness parameter , , and degree of freedom ( see azzalini and capitanio , 2003 ; fr - schnatter and pyne , 2010 ) , and we set . in setting 1 ,the error terms follow the bivariate normal distribution as in the motivating example in section [ sec : tobit ] .setting 2 considers the fat tailed first stage regression . setting 3considers a more difficult situation where the first stage error is fat tailed and skewed .setting 4 replaces the first stage error of setting 1 with the heteroskedastic error with respect to the instrument .setting 5 is also a challenging situation where the first stage error is fat tailed , skewed , and heteroskedastic . 
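since the numerical values of the design are not reproduced in this excerpt , the sketch below only illustrates the structure of setting 1 ( bivariate normal errors , a valid instrument , left censoring at zero ) with placeholder coefficients and a placeholder error correlation .

import numpy as np

def simulate_setting1_like(n, rho, rng):
    """one replication with the structure of setting 1; all numbers are placeholders."""
    z = rng.standard_normal(n)                       # instrument
    x = rng.standard_normal(n)                       # exogenous regressor
    cov = np.array([[1.0, rho], [rho, 1.0]])
    eps, u = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    d = 0.5 + 1.0 * z + 0.5 * x + u                  # first stage (endogenous regressor)
    y_star = 0.5 + 1.0 * x + 1.0 * d + eps           # latent response
    return np.maximum(y_star, 0.0), d, x, z          # tobit censoring at zero

rng = np.random.default_rng(5)
y, d, x, z = simulate_setting1_like(n=300, rho=0.6, rng=rng)
print("censoring rate in this replication:", np.mean(y == 0.0))

the remaining settings keep this structure and change only the first stage error draw .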
in settings 3 and 5 , the location parameters of the first stage error distributions are set such that the mode of is zero and the quantile level of the mode is .the average censoring rates for the settings are around . for each setting , the data are replicated times .we first estimated the proposed models under the default prior specifications ( see section [ sec : prior ] ) for and by running the mcmc for iterations and discarding the first draws as the burn - in period .the standard bayesian tqr model was also estimated .the bias and root mean squared error ( rmse ) of the parameters were computed over the replications . to assess the efficiency of the mcmc algorithm, we also recorded the inefficiency factor , which was defined as a ratio of the numerical variance of the sample mean of the markov chain to the variance of the independence draws ( chib , 2001 ) .table [ tab : default ] presents the biases , rmses , and median inefficiency factors for the parameters over the replications .first , we examined the inefficiency factors .overall , our sampling algorithms appear to be efficient , especially for al , sn , aldp , and sndp .the table shows that the inefficiency factors for al , sn , aldp , and sndp are reasonably small for , , , , and . since and determine the quantile level of the mode and location of the mode , respectively , the mcmc sample exhibits correlation between and and this results in higher inefficiency factors for them .hence , the inefficiency factors for tend to be higher than those for the other parameters .this pattern is more profound in the case of aep where the inefficiency factors for , , and are quite high . since the additional shape parameters in aep free up the role of , the mcmc sample exhibits higher correlation between and .furthermore , the inefficiency factors for the other parameters for aep are also higher than those for the other endogenous models .next , we turn to the performance of the models .as expected , tqr produces biased estimates in all cases .the rmses for the proposed endogenous models are generally larger for , which is below the censoring point , than for .the al and aldp models result in similar performance .the aep model shows the largest rmses for and among the proposed models for all cases .combined with the high inefficiency factors for those parameters , the convergence of the mcmc algorithm for aep may be difficult to ensure in the given simulation setting .this finding suggests a considerable practical limitation and , thus , aep will not be considered henceforth .the same limitation applies to the potentially more flexible nonparametric models discussed in section [ sec:1st ] .table [ tab : default ] also shows that the estimation of the first stage regression can influence the second stage parameters .for example , in setting 1 , the rmses for for sn and sndp are smaller than those for al and aldp , as the true model is the normal and thus sn and sndp produce smaller rmses for . 
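the inefficiency factor reported above can be computed from a chain as one plus twice the sum of its autocorrelations ; the truncation rule below ( stop at the first non positive estimated autocorrelation ) is a simple choice made here and not necessarily the rule used by the authors following chib ( 2001 ) .

import numpy as np

def inefficiency_factor(chain, max_lag=None):
    """ratio of the numerical variance of the sample mean to that of independent draws."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    m = len(x)
    max_lag = m // 10 if max_lag is None else max_lag
    acov = np.array([x[: m - k] @ x[k:] / m for k in range(max_lag + 1)])
    rho = acov / acov[0]
    s = 0.0
    for r in rho[1:]:
        if r <= 0.0:
            break
        s += r
    return 1.0 + 2.0 * s

# an ar(1) chain with coefficient 0.9 has inefficiency factor (1 + 0.9)/(1 - 0.9) = 19
rng = np.random.default_rng(6)
chain = np.zeros(50000)
for t in range(1, chain.size):
    chain[t] = 0.9 * chain[t - 1] + rng.standard_normal()
print(inefficiency_factor(chain))

turning back to the estimation performance , the pattern noted above for setting 1 recurs in the other settings .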
similarly , in setting 4 ,the rmses for for sn and sndp are smaller than those for al and aldp .in addition , the heteroskedasticity in the first stage influences the performance of the slope parameters , resulting in slightly smaller rmses for for sn and sndp than for al and aldp .however , the performance of the sn model becomes worse when the first stage error is fat tailed , since the skew normal distribution can not accommodate a fat tailed distribution .while the results in setting 2 are somewhat comparable across the models , the table shows that sn results in larger biases and rmses in setting 3 and , especially , setting 5 . in setting 3 ,sn results in larger rmses for than for al , aldp , and sndp . in setting 5 ,given the heteroskedasticity of the first stage , the biases and rmses for the intercept and slope parameters for sn are larger than those for al , aldp , and sndp .on the other hand , compared with sn , the semiparametric sndp model is able to cope with fat tailed errors and this produces results comparable with those for al and aldp . while the models result in reasonable overall performance , the results for settings 3 and 5 also illustrate the limitation of our modelling approach to some extent . in setting 3 , the models exhibit some bias in because of the lack of fit in the first stage .this lack of fit , which is represented by the bias for , is reflected in the bias for .the entire coefficient vector may be influenced by this lack of fit in the first stage in the presence of heteroskedasticity as in setting 5 .the lack of fit in the first stage is also indicated by the biases in .this finding implies that an inflexible first stage model can fail to estimate the true quantile such that ( [ eqn : alpha ] ) holds and that choosing the value of a priori could lead to biased estimates ( see the discussion in section [ sec:1st ] ) . for comparison purposes ,we consider two alternative specifications for the inverse gamma base measure for the semiparametric models .the following slightly less diffuse settings than the default are considered . for aldp , we consider such that and such that when . for sndp, we consider such that and such that . for the other parameters, we use the default prior specifications .table [ tab : base ] presents the biases and rmses for aldp and sndp under the alternative base measures for and .the results in table [ tab : base ] are essentially identical to those in table [ tab : default ] , suggesting that the default choice of the base measures provides reasonable performance .next , the two alternative prior specifications for , , and are considered to study the prior sensitivity .the first alternative specification considers the more diffuse priors given by , , and . the second alternative specification is the even more diffuse setting given by , , and .for aldp and sndp , the default base measures are used . for , , and , we use the default specification .table [ tab : prior ] presents the biases and rmses for al , sn , aldp , and sndp under the five simulation settings for and , showing that the result is robust with respect to the choice of hyperparameters .we also considered some different prior choices for and , and obtained robust results .these findings thus confirm the robustness of the results with respect to the choice of base measures and prior distributions provided that a valid instrument is available . 
in the context of mean regression models ,however , when the instrument is weak , the posterior distribution is known to exhibit sharp behaviour in the vicinity of non - identifiability ( hoogerheide _ et al_. , 2007b ) and the posterior distribution is greatly affected by the prior specification ( _ e.g . _ lopes and polson , 2014 ) . here, we illustrate the behaviour of the posterior distribution by using a weak instrument .the data are generated from ( [ eqn : sim1 ] ) without the regressor : for , where , , , , and .the al and sn models are estimated for by running the mcmc for 20000 iterations and discarding the first 5000 draws as the burn - in period under the three prior specifications previously considered .figure [ fig : fig2 ] presents the joint posterior distribution of and for al and sn under the three prior specifications and shows that the posterior distribution is greatly affected by the prior specification .the posterior distribution of becomes more diffuse as approaches zero .this trend becomes more profound as we use more diffuse prior distributions , producing star shapes .the figure also suggests that the prior distribution can act as an informative prior about the linear relationship between and .similar results were also obtained under different prior specifications for , , and as well as for aldp and sndp . and for al ( top row ) and sn ( bottom row ) ]the proposed endogenous models are applied to the dataset on the labour supply of married women of mroz ( 1987 ) .the dataset includes observations on individuals .the response variable is the total number of hours in every hours the wife worked for a wage outside the home during 1975 . in the data , 325 of the 753 women worked zero hours and the corresponding responsesare treated as left censored at zero .hence , the censoring rate is approximately .the regressors of our model include years of education ( _ educ _ ) , years of experience ( _ exper _ ) and its square ( _ expersq _ ) , age of the wife ( _ age _ ) , number of children under 6 years old ( _ kidslt6 _ ) , number of children equal to or greater than 6 years old ( _ kidsge6 _ ) , and non - wife household income ( _ nwifeinc _ ) .we treat _ nwifeinc _ as an endogenous variable because it may be correlated with the unobserved household preference for the labour force participation of the wife . as an instrument, we include the years of education of the husband ( _ huseduc _ ) , since this can influence both his income and the non - wife household income , but it should not influence the decision of the wife to participate in the labour force .smith and blundell ( 1986 ) considered a similar setting where non - wife income was considered to be endogenous and the education of the husband was employed as the instrumental variable .they applied the endogenous tobit model to data derived from the 1981 family expenditure survey in the united kingdom . using the default prior specifications ,the aldp and sndp models are estimated for by running the mcmc for iterations and discarding the first draws as the burn - in period .convergence is monitored by using the trace plots and gelman - rubin statistic for two chains with widespread starting values ( gelman _ et al_. 
, 2014 ) .the upper bounds of the gelman - rubin confidence intervals for the selected parameters , , , , , , and , for sndp in the case of are , , , , , and , respectively .figure [ fig : fig3 ] presents the post burn - in trace plots for these parameters and shows the evidence of convergence of the chains . ]first , we present the results for the representative quantiles , , , and .table [ tab : real ] shows the posterior means , 95% credible intervals , and inefficiency factors for aldp and sndp for these quantiles .the table shows that the sampling algorithm worked efficiently as the inefficiency factors are reasonably small .the posterior means for the instrument , _ huseduc _ , are positive and the 95% credible intervals do not include zero for all cases for both models , implying that _ huseduc _ is a valid instrument . for ,the posterior means for are and for aldp and sndp , respectively , and the 95% credible intervals do not include zero .therefore , it is suggested that non - wife income be treated as an endogenous variable for the median regression . to study the endogeneity in non - wife household income across quantiles , the posterior distributions of are presented .the results across the quantiles can be best understood by plotting the posterior distributions as a function of .figure [ fig : fig4 ] shows the posterior means and 95% credible intervals of for aldp and sndp for .the figure shows that the two models produced similar results and that the posterior distributions of are concentrated away from zero for the mid quantiles .specifically , for , the 95% credible intervals do not include zero for either model .there are notable peaks around , where the posterior means of under the default prior specifications are and with the 95% credible intervals and for aldp and sndp , respectively .this is an interesting result considering that the censoring rate is .the result implies that the effect of the endogeneity of non - wife income is the most profound when the wife is about to decide whether to enter the labour force .when the opportunity cost of labour supply is very high ( lower quantile ) or the wife works on a more regular basis ( higher quantile ) , such endogeneity diminishes .smith and blundell ( 1986 ) also reported that non - wife income is endogenous by using the endogenous tobit regression model .the mean of our dataset is , which approximately corresponds to the -th quantile . for , the posterior mean of for aldp is with the 95% credible interval and that for sndp is with the 95% credible interval .the figure also shows the posterior means and 95% credible intervals under the two alternative prior specifications considered in section [ sec : alt ] , confirming that our results are robust with respect to the prior specifications . under the default and alternative priors for ]figure [ fig : fig5 ] compares the posterior means and 95% credible intervals of for sndp , aldp , and tqr for .the results for sndp and aldp are quite similar .the figure clearly shows that the posterior distributions for the key variable , _ nwifeinc _, for the proposed models and tqr exhibit some differences for , where _ nwifeinc _ is indicated to be endogenous .the difference becomes the most profound around for which the posterior mean for _ nwifeinc _ is for aldp , for sndp , and for tqr , implying a stronger effect of non - wife income when endogeneity is taken into account .the posterior distributions for _ nwifeinc _ for aldp and sndp are more dispersed than that for tqr for all . 
while the 95% credible intervals include zero for all models for the upper quantiles , for the lower quantiles , such as , those for aldp and sndp include zero and those for tqr do not .differences in the results are also observed for other variables . for ,the posterior means for _ educ _ and _ age _ are respectively and for aldp , and for sndp , and and for tqr . for the upper quantiles , , the 95% credible intervals for _ educ _ include zero for the proposed models , while those for tqr do not , implying that an additional year of education does not increase the working hours for those quantiles when the endogeneity from non - wife income is taken into account . for _ expersq_ , the endogenous models result in slightly more dispersed posterior distributions for .the posterior means for are , , and for aldp , sndp , and tqr , respectively . for _kidsge6 _ , the posterior means for are , , and for aldp , sndp , and tqr , respectively. however , the 95% credible intervals include zero for all for all models .on the other hand , the figure also shows that the models produced similar results for _ exper _ and _ kidslt6 _ for all . for aldp , sndp , and tqr for ]we proposed bayesian endogenous tqr models using parametric and semiparametric first stage regression models built around the zero -th quantile assumption .the value of determines the quantile level of the mode of the error distribution and is estimated from the data . from the simulation study ,the al , aldp , and sndp models worked relatively well for the various situations , while they faced the same limitation pointed out by kottas and krnjaji ( 2011 ) . on the other hand, the sn model could not accommodate the fat tailed first stage errors .although aep could be a promising model in terms of flexibility , the inefficiency of the mcmc algorithm largely limits its applicability in practice .the development of a more convenient mixture representation for the aep distribution is thus required . from application to data on the labour supply of married women , the effect of the endogeneity in non - wife income was found to be the most profound for the quantile level close to the censoring rate . for this quantile ,some differences in the parameter estimates between the endogenous and standard models were found , such as the stronger effect of non - wife income on working hours .this study only considered the case of continuous endogenous variables .we are also interested in incorporating endogenous binary variables into a bayesian quantile regression model .an important extension might therefore be addressing multiple endogenous dummy variables to represent selection among multiple alternatives , such as the choice of a hospital and insurance plan , as considered in geweke _ et al_. ( 2003 ) and deb _ et al_. ( 2006 ) .however , such an extension would be challenging with respect to the assumptions that must be imposed on the multivariate error terms .we leave these issues to future research .the author would like to thank the seminar participants at the second baysm and esobe 2014 and the anonymous referees for the valuable comments to improve the manuscript .the computational results were obtained by using ox version 6.21 ( doornik , 2007 ) .this study was supported by jsps kakenhi grant numbers 25245035 , 26245028 , 26380266 , and 15k17036 .azzalini , a. and capitanio , a. 
( 2003 ) .`` distributions generated by perturbation of symmetry with emphasis on a multivariate skew -distribution , '' _ journal of royal statistical society series b _ , * 65 * , 367389 .barndorff - nielsen , o. e. and shephard , n. ( 2001 ) .`` non - gaussian ornstein - uhlenbeck - based models and some of their uses in financial econometrics , '' _ journal of royal statistical society series b _ , * 63 * , 167241 .deb , p. , munkin , k. m. , and trivedi , k. p. ( 2006 ) .`` bayesian analysis of the two - part model with endogeneity : application to health care expenditure , '' _ journal of applied econometrics _ , * 21 * , 10811099 .fr - schnatter , s. and pyne , d. ( 2010 ) .`` bayesian inference for finite mixtures of univariate and multivariate skew - normal and skew- distributions , '' _ biostatistics _ , * 11 * , 317336 .hoogerheide , l. f. , kleibergen , f. , van dijk , h. k. ( 2007a ) .`` natural conjugate priors for the instrumental variables regression model applied to the angrist - krueger data , '' _ journal of econometrics_**138 * * , 63103 .hoogerheide , l. , kaashoek , j. f. , and van dijk , h. k. ( 2007b ) .`` on the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank : an application of flexible sampling methods using neural networks , '' _ journal of econometrics _ , * 139 * , 154180 .ishwaran , h. and james , l. f. ( 2002 ) . `` approximate dirichlet process computing in finite normal mixtures : smoothing and prior information , '' _ journal of computational and graphical statistics _ , * 11 * , 508532 .kobayashi , g. ( 2015 ) .`` skew exponential power stochastic volatility model for analysis of skewness , non - normal tails , quantiles and expectiles , '' _ computational statistics _ , doi:10.1007/s00180 - 015 - 0596 - 4 ., kozubowski , t. j. , and podgrski , k. ( 2001 ) . _ the laplace distribution and generalizations : a revisit with applications to communications , economics , engineering , and finance _ , birkh , boston .munkin , m. k. and trivedi , p. k. ( 2003 ) . `` bayesian analysis of a self - selection model with multiple outcomes using simulation - based estimation : an application to the demand for healthcare , '' _ journal of econometrics _, * 114 * , 197220 .ogasawara , k. and kobayashi , g. ( 2015 ) .`` the impact of social workers on infant mortality in inter - war tokyo : bayesian dynamic panel quantile regression with endogenous variables , '' _ cliometrica _ , * 9 * , 97130 .wichitaksorn , n. , choy , s. t. b. , and gerlach , r. ( 2014 ) .`` a generalized class of skew distributions and associated robust quantile regression models , '' _ the canadian journal of statistics _ , * 42 * , 5779596. zhu , d. and galbraith , j. w. ( 2011 ) . ``modeling and forecasting expected shortfall with the generalized asymmetric student- and asymmetric exponential power distributions , '' _ journal of empirical finance _ , * 18 * , 765778 . | this study proposes -th tobit quantile regression models with endogenous variables . in the first stage regression of the endogenous variable on the exogenous variables , the assumption that the -th quantile of the error term is zero is introduced . then , the residual of this regression model is included in the -th quantile regression model in such a way that the -th conditional quantile of the new error term is zero . 
the error distribution of the first stage regression is modelled around the zero -th quantile assumption by using parametric and semiparametric approaches . since the value of is a priori unknown , it is treated as an additional parameter and is estimated from the data . the proposed models are then demonstrated by using simulated data and real data on the labour supply of married women . * keywords * : asymmetric laplace distribution ; bayesian tobit quantile regression ; dirichlet process mixture ; endogenous variable ; markov chain monte carlo ; skew normal distribution ; |
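as a rough illustration of the residual - inclusion idea summarized in this abstract , the following sketch implements a crude frequentist two - stage analogue : a first - stage quantile regression of the endogenous variable on the exogenous variables , followed by a censored ( powell - type ) quantile regression of the outcome that includes the first - stage residual . it is not the bayesian estimator of the paper ( no asymmetric laplace likelihood , no dirichlet process mixture , no mcmc ) ; the data are simulated and all variable names are hypothetical .

```python
# minimal sketch of a two-stage "residual inclusion" tobit quantile regression;
# a frequentist stand-in for illustration only, not the paper's bayesian sampler.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def check_loss(u, q):
    """quantile (pinball) loss rho_q(u) summed over observations."""
    return np.sum(u * (q - (u < 0)))

# simulated data with an endogenous regressor x (shared shock eta creates endogeneity)
n = 500
w = rng.normal(size=(n, 2))                      # exogenous variables / instruments
eta = rng.normal(size=n)
x = 1.0 + w @ np.array([0.8, -0.5]) + eta + rng.normal(scale=0.5, size=n)
y_star = 0.5 + 1.2 * x + w[:, 0] + 0.7 * eta + rng.normal(size=n)
y = np.maximum(y_star, 0.0)                      # tobit censoring at zero

p = 0.5        # quantile of interest in the outcome equation
theta = 0.5    # quantile level assumed for the first-stage error

# stage 1: theta-th quantile regression of x on the exogenous variables
W = np.column_stack([np.ones(n), w])
gamma0 = np.linalg.lstsq(W, x, rcond=None)[0]
res1 = minimize(lambda g: check_loss(x - W @ g, theta), gamma0,
                method="Nelder-Mead", options={"maxiter": 5000})
v = x - W @ res1.x                               # first-stage residual (control function)

# stage 2: censored quantile regression of y including the residual v
Z = np.column_stack([np.ones(n), x, w[:, 0], v])
beta0 = np.linalg.lstsq(Z, y, rcond=None)[0]
obj = lambda b: check_loss(y - np.maximum(Z @ b, 0.0), p)   # powell-type objective (non-convex)
res2 = minimize(obj, beta0, method="Nelder-Mead", options={"maxiter": 20000})
print("second-stage coefficients (const, x, w1, residual):", np.round(res2.x, 3))
```

the second - stage objective is non - convex because of the censoring , so a serious implementation would use multiple starting values or the mcmc machinery developed in the paper ; the sketch only shows where the first - stage residual enters .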
the exosphere is the upper layer of any planetary atmosphere : it is a quasi - collisionless medium where the particle trajectories are more dominated by gravity than by collisions . above the exobase , the lower limit of the exosphere ,the knudsen number becomes large , collisions become scarce , the distribution function can not be considered as maxwellian anymore and , gradually , the trajectories of particles are essentially determined by the gravitation and radiation pressure by the sun .the trajectories of particles , subject to the gravitational force , are completely solved with the equations of motion , but it is not the case with the radiation pressure . in the absence of radiation pressure , we can distinguish three types of trajectories for the exospheric particles : * the escaping particles come from the exobase and have a positive mechanical energy : they can escape from the gravitational influence of the planet with a velocity larger than the escape velocity .these particles are responsible for the jeans escape .they can also be defined as crossing only once the exobase , * the ballistic particles also come from the exobase but have a negative mechanical energy , they are gravitationally bound to the planet .they reach a maximum altitude and fall down on the exobase if they do not undergo collisions .they cross the exobase twice , * the satellite particles never cross the exobase .they also have a negative mechanical energy but their periapsis is above the exobase : they orbit along an entire ellipse around the planet without crossing the exobase .the satellite particles result in their major part from ballistic particles undergoing few collisions mainly near the exobase ( ) .thus , they do not exist in a collisionless model of the exosphere .the radiation pressure disturbs the conics ( ellipses or hyperbolas ) described by the particles under the influence of gravity .the resonant scattering of solar photons leads to a total momentum transfer from the photon to the atom or molecule . in the non - relativistic case , assuming an isotropic reemission of the solar photon , this one is absorbed in the sun direction and scattered with the same probability in all directions . for a sufficient flux of photons in the absorption wavelength range, the reemission in average does not induce any momentum transfer from the atom / molecule to the photon .the momentum variation , each second , between before and after the scattering imparts a force , the radiation pressure . proposed to analyze its effect on the structure of planetary exospheres . in particular, they highlighted analytically the tail " phenomenon at earth : the density for atomic hydrogen , which is sensitive to the lyman- photons , is higher in the nightside direction than in the dayside direction in the earth corona .nevertheless , their work was limited only to the sun - planet axis , with a null component assumed for the angular momentum around the sun - planet axis .we thus generalize here their work to a full 3d calculation , in order to investigate the influence of the radiation pressure on the trajectories ( this paper ) , as well as the density profiles and escape flux ( following works ) .this problem is similar to the so - called stark effect : the effect of a constant electric field on the atomic hydrogen s electron .its study can be transposed to celestial mechanics in order to describe the orbits of artificial and natural satellites in the perturbed ( e.g. 
by the radiation pressure force ) two - body problem .a recent description of the stark effect solutions was already given by .however , they give the analytical solutions of all trajectories only in the 2d case ( and for bounded trajectories in the 3d case ) , and their formulas have some issues as will be discussed later .another analytical study proposed by uses the weierstrassian formulations to solve the motions for bounded and unbounded trajectories and to find periodic motions .also , the motion can be approached numerically by developing the equations of motion in taylor series but this leads to some issues for high eccentricities . compared recently these methods and their computing efficiencies . in this paper , based on the same formalism as , we provide for the first time the complete exact 3d solutions of the stark effect ( and its celestial mechanics analogue ) for any initial condition and for both bounded and unbounded trajectories .the first section describes the formalism used , before the sections [ model]/[section3]/[time ] provide the equations of motion and time .we then discuss about circular orbits in section [ circular ] , while a comparison with previous works is given in section 6 , before we conclude in section [ summary ] .in this work , we decide to study the effect of the radiation pressure on atomic hydrogen in particular . nevertheless , this formalism can be applied to any species subject to this force or to the interplanetary dust .we model the radiation pressure by a constant acceleration coming from the sun . according to , this acceleration depends on the line center solar lyman- flux , in photons..s. : in spherical coordinates, the hamiltonian of one hydrogen atom can be written : with the distance from the planet , the solar angle , the angle with respect to the ecliptic plane , , and the conjugate momenta . represents the gravitational potential and the potential energy from the radiation pressure acceleration .an example of trajectory of a atom subject to the radiation pressure is given in the figure [ trajectoire ] .this problem is similar to the classical stark effect : a constant electric field ( here the radiation pressure ) is applied to an electron ( here an hydrogen atom ) attached to a proton ( here the planet ) .both systems are equivalent because the force applied by the proton ( the planet ) to the electron ( the hydrogen atom ) , i.e. the electrostatic force , varies in as the gravitational force from the planet on the hydrogen atom .thus , we adopt the same formalism as and use the parabolic coordinates .we use the transformation : with and always positive .consequently , the hamiltonian becomes : independent from and . according to hamiltonian canonical relations, we have : in this new system of coordinates , we study this hamiltonian .first , is independent from explicitly . is a conserved quantity along the time and corresponds to the mechanical energy of the system .moreover , as is independent from , according to canonical relations : thus , is another constant of the motion .once and defined , the equation [ hamiltonuw ] can be rewritten : the left hand side is a function dependent only on and , the right hand side depends only on and . 
as both functions are equal and independent , they are equal to a constant , a separation constant : the motion possesses three constants : , and .the equation [ a ] allows to express ( respectively ) as a functions of , , and ( respectively ) : with we have already introduced the hamiltonian of the system .we can extend the approach according to hamilton - jacobi equations : where is the hamilton s principal function or action .this function depends on initial conditions ( as , , and ) and the actual position of the particle ( as , , and ) .as previously demonstrated , and are constants .thus , and leads to : \ ] ] with the part of the action independent from , , and .moreover , the action can be separated into two parts : one written as a function of and coordinates , the other one with and . by definition , according to the hamilton - jacobi equations , we have : with ( resp . ) a function only of ( resp . ) , assuming , and values already fixed by initial conditions .then , we can separate again the action , leading to : =\mathcal{s}_{u}[u , e , a , p_{\phi}]+\mathcal{s}_{w}[w , e , a , p_{\phi}]\ ] ] according to the equation [ pupw ] , we have the following relations : with and are effective potentials applied in and directions ( represented in the figure [ potentiel ] ) .these potentials play key roles for the motion because they constrained the motion in and directions independently . for the motion of the particle , we must respect two conditions : and .these conditions are analogous to : ( corresponding to , blue line ) and ( corresponding to , red line ) for a set of , and values .the motion is possible only in the area where the potential is below the mechanical energy , represented by the black horizontal line .the different roots of and are displayed and correspond to the intersection of the potentials with the horizontal black line . is the dimensionless value referring to ( cf .section [ dimensionless_section ] , table [ dimensionless ] ) .notice that will cross only once the energy level for too high or too low values . ]these both conditions are more restrictive than the usual where is the potential energy . is a polynom of degree 3 with .this polynom possesses three roots , whose one is real at least . as ,one of these roots is real negative , according to intermediate value theorem since .nevertheless , the motion occurs for positive values , and we know this motion exists .it implies there is an interval in such as ( otherwise , there is no motion in -direction , no possible physically , cf .eq . [ conditions ] ) .to comply with this last condition , both other roots are real and positive . in summary , has three real roots : one negative and two positive .we call each root , and such as and the -motion is restricted to ] in term of dimensionless quantities , as can be seen in the figure [ potentiel ] ) . is a polynom of degree 3 with .this polynom possesses three roots , whose one is real at least . as , one of these roots is real positive , according to intermediate value theorem since .nevertheless , the motion occurs for positive values .we have restrictions on both other roots : they must be both real positive , both real negative or both complex conjugates . 
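before working through the individual cases below , the root - based classification just described can be sketched numerically : one collects the real roots of the two cubic polynomials and keeps the sub - intervals of the positive half - line on which each polynomial is non - negative , which is where motion is allowed . the closed - form coefficients in terms of the constants of motion are those of eq . [ pupw ] and are not reproduced here ; the coefficient arrays below are purely illustrative placeholders with the qualitative root structure described in the text .

```python
# sketch: classify allowed u- and w-intervals from the sign of the two cubics.
# the coefficient arrays are placeholders, not the article's formulas.
import numpy as np

def allowed_intervals(coeffs, upper=50.0):
    """return sub-intervals of [0, upper] where the polynomial (highest degree
    first, as for np.roots / np.polyval) is non-negative."""
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    breaks = [0.0] + [r for r in real if 0.0 < r < upper] + [upper]
    intervals = []
    for a, b in zip(breaks[:-1], breaks[1:]):
        if np.polyval(coeffs, 0.5 * (a + b)) >= 0.0:
            intervals.append((round(a, 3), round(b, 3)))
    return real, intervals

# hypothetical u-cubic: negative leading coefficient, roots -1, 2, 5
#   -> a bounded band between the two positive roots
u_poly = -np.poly([-1.0, 2.0, 5.0])
# hypothetical w-cubic: positive leading coefficient, roots 0.5, 3, 8
#   -> a bounded band plus an unbounded (escape) branch
w_poly = np.poly([0.5, 3.0, 8.0])

for name, c in [("u", u_poly), ("w", w_poly)]:
    roots, ivals = allowed_intervals(c)
    print(f"{name}-polynomial real roots: {np.round(roots, 3)}; allowed intervals: {ivals}")
```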
in the case where the three roots are real positive, we call each root , and such as and the motion is restricted to \cup[w_{0};+\infty[ ] ( respectively - 2k(k_{w_0});2k(k_{w_0})[ ] , range where it is continuous .with the solutions previously derived , we can know the exact motion of a bounded or unbounded particle as a function of the time such as given in figure [ sample ] .it is clear that even a bounded trajectory the motion has no periodicity at all ( especially for the motion ) .nevertheless , it could be interesting to focus on stable bounded orbits and search for periodic motions ( as in ) , and thus investigate in particular the circular stable orbits for spacecrafts or the possible positions for satellite particles produced by collisions in the exosphere .thus , we dedicate this section to the conditions to obtain such orbits . for a specific set of initial conditions , it can be possible to obtain circular orbits .this orbit occurs when , on the one hand , the attraction of the planet projected along the -axis is equal to acceleration due to the radiation pressure : in dimensionless quantity , this will be expressed by : on the other hand , it is also necessary that the centrifugal force induced by the rotation around the -axis is equal to the acceleration around the planet in the perpendicular plane to the -axis .thus , we obtain the secondary equality : in dimensionless quantity , this will be expressed by : combining these two equations , we obtain : we need to study the polynom with ] with . here, is equal to 8/9 .nevertheless , if is positive then does not have solutions and inversely , if is negative then we have two solutions .this value is : a critical maximum value of thus exists to allow for circular orbits and is {3}}\ ] ] above this value , we can not find any bounded trajectories : there is no any equilibrium point and thus no circular orbits ( stable or not ) .for lower values of , we have two solutions for : one stable and one unstable as shown by and theirs plots of the equipotentials .these two solutions correspond respectively to the stable point around which the equipotentials are closed and to the saddle point , which is the last limit where one can find closed equipotentials , and is the only point where two equipotentials can cross . as long as ,these both specific points exist : they have the same .physically , the potential has two extrema as plotted in figure [ potentiel ] .when reaches , the local minimum goes to the right and the local maximum goes to the left at the same location . for higher values , have no extremum anymore : the potential is strictly decreasing and the particles are unbounded ( escaping ) .thus , the bounded particles , satellite and ballistic particles , have .the distinction between them thus depend on if they cross or not the exobase .the critical values are given in coordinates ( is for us , in comparison with ) .in dimensionless unities and in using the equations [ centrifuge ] then [ r ] , the critical orbit is : or in coordinates : the real positive roots of the polynomial [ polynom ] combined with the equality [ r ] give the positions of the circular orbits ( two coordinates are necessary ) allowed to spacecraft or particles under the influence of both gravity and radiation pressure .the knowledge of the exact trajectories of particles or satellites under the influence of gravity and radiation pressure needs the calculation of the spatial coordinates , i.e. 
the motions , as well as the time evolution . we summarize all needed equations in table [ summary ] . [ table [ summary ] , overview of the equations ( the row and column labels were mathematical symbols and are lost here ) : motion -- [ umotion ] (l2d) for the u - part ; [ wmotion1 ] (l2d) , [ wmotion2 ] (l2d) and [ wmotion3 ] (s) for the three w - cases ; time -- [ tu ] (l2d) , [ tw1 ] (l2d) , [ tw2 ] *new* , [ tw3 ] *new* ; angular motion -- [ phiu ] (l3d) , [ phiw1 ] (l3d) , [ phiw2 ] *new* , [ phiw3 ] *new* . ] the -motion is provided by the equation [ umotion ] . the -motion is provided by the equations [ wmotion1 ] , [ wmotion2 ] or [ wmotion3 ] . the -motion is provided by the equations [ phiu ] , [ phiw1 ] , [ phiw2 ] and [ phiw3 ] . all the expressions are functions of , which is not the real time . we thus have implicit expressions as a function of time . the function is bijective but can not be inverted analytically , so a numerical inversion is needed to derive the real time . [ figure [ sample ] : example of a trajectory -- the motion in the plane ( upper left panel ) , the time as a function of ( upper right panel ) , the motion in polar coordinates ( lower left panel ) and the coordinates as a function of the time ( red and blue curves , lower right panel ) . the two coordinates do not show any periodicity because their periods are not commensurable ( i.e. their ratio is not a rational number ) , and thus neither is commensurable with the time . ] besides , our 3d solutions can be easily applied to the 2d case . indeed , in the 2d case , and thus , one of the roots for each polynomial and is null : it could be or for ( if , there is no possible motion ) and any roots of . we note that in this case the -motion is not important because the motion is planar . compared with , our formulations are first developed for the 3d case and can be used easily for the 2d case , whereas gave only the methodology to obtain the 3d solutions based on 2d ones but not the expressions , which apparently leads to complex expressions . provided also the exact formulas for bounded and unbounded trajectories using the weierstrass functions , but this formulation is also difficult , particularly because of the need to use the inverse weierstrass function , not implemented in all computer software , and the need to work with complex values ( e.g. the complex logarithm function ) .
in this paper, we solved the motion for the bounded and unbounded trajectories in the 3d case ; we provide the exact formulas for all cases , as well as the definitions used in [ appendixa ] .we also highlight in table [ summary ] which solutions are simpler compared with , which are completely new and which were only provided in the 2d case by .moreover , beyond the new exact solutions given in this paper , the derivation of our solutions based on jacobi elliptic functions allows a good computing time and accuracy . compared three types of solutions for the stark effect : two exact ones , proposed by ( jacobi elliptic functions ) and ( weierstrass elliptic functions ) , and a numerical one by ( based on taylor series ) .they compared the cpu time , the number of calls for each analytic elliptic function and the accuracy between and .even if we do not agree with the number of evaluations of each jacobi elliptic function mentioned by ( e.g. to call and is similar to call then and : and ) , two arguments show our solutions are efficient in terms of cpu time and accuracy : first , the solutions expressed in terms of jacobi elliptic functions ( such as in this paper or by ) are more efficient than weierstrass elliptic functions ( used by ) ; second , several solutions given above are less complex to implement than those by , e.g. equation [ wmotion3 ] .also , the analytical formulations are preferable for long duration motions .we determined analytically the trajectories of the particles or spacecraft under the influence of both planetary gravity and stellar radiation pressure .we thus provide for the first time the complete exact solutions of the well - known stark effect ( effect of a constant electric field on the atomic hydrogen s electron ) with jacobi elliptic functions , for both bounded and unbounded orbits .these expressions may be implemented for modeling spacecraft or particles trajectories : instead of solving the equation of the motion , based on differential equations , with numerical methods such as the runge - kutta method where one cumulates errors along the time , it is here possible to obtain precise expressions of the motion with only periodic errors , due to the precision on the evaluation of the elliptic functions used . in particular, we provide the analytical conditions for stable circular orbits .moreover , we discuss about the possible issues inherent to the formalism used and the importance of being extremely careful with the routines implemented . the formalism used herewill allow us in a next paper to generalize the work by to derive the exact neutral densities and escape flux in planetary exospheres , under the influence of both gravity and stellar radiation pressure .this is important for understanding the atmospheric structure and escape of planets in the inner solar system , as well as the atmospheric erosion during the early ages where the radiation pressure ( and uv flux ) of the sun was extreme .this work was supported by the centre national dtudes spatiales ( cnes ) .in this paper , we use the three incomplete elliptic integrals , and : these expressions are not exactly , and .they agree with the previous formulas [ goodformule ] in the range -\pi/2;\pi/2[$ ] . did not precise which formulations they used . 
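the practical consequence of choosing one definition or the other is easy to exhibit numerically . the sketch below builds an incomplete elliptic integral of the first kind that remains continuous across the boundaries of the principal range by using its standard quasi - periodicity , f(phi + n pi , m) = 2 n k(m) + f(phi , m) . scipy routines are assumed ; whether a given library already performs this reduction internally has to be checked case by case , which is exactly the issue raised here .

```python
# continuous evaluation of the incomplete elliptic integral of the first kind,
# built explicitly from the principal range via the quasi-periodicity relation.
import numpy as np
from scipy.special import ellipk, ellipkinc

def F_continuous(phi, m):
    """F(phi, m), continuous in phi for all real phi."""
    n = np.round(phi / np.pi)            # nearest multiple of pi
    phi_r = phi - n * np.pi              # reduced angle in (-pi/2, pi/2]
    return 2.0 * n * ellipk(m) + ellipkinc(phi_r, m)

m = 0.7
phis = np.linspace(-3 * np.pi, 3 * np.pi, 13)
vals = F_continuous(phis, m)
print(np.all(np.diff(vals) > 0.0))       # strictly increasing, i.e. no jumps: True
print(np.allclose(F_continuous(np.pi / 3, m), ellipkinc(np.pi / 3, m)))  # agrees on the principal range
```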
according to their formulas and results , they used the left - hand side of the equations [ badformule ] . this may be a problem for bounded trajectories : for , the integrals [ badformule ] are not continuous , contrary to [ goodformule ] . depending on the computer software , the routines and the definitions used for these functions , the results can show some issues ( e.g. no continuous motion ) . | the planetary exospheres are poorly known in their outer parts , since the neutral densities are low compared with the instruments ' detection capabilities . the exospheric models are thus often the main source of information at such high altitudes . we present a new way to take into account analytically the additional effect of the radiation pressure on planetary exospheres . in a series of papers , we present with a hamiltonian approach the effect of the radiation pressure on dynamical trajectories , density profiles and escaping thermal flux . our work is a generalization of the study by . in this first paper , we present the complete exact solutions of particle trajectories , which are not conics , under the influence of the solar radiation pressure . this problem was recently partly solved by and completely by . we give here the full set of solutions , including solutions not previously derived , as well as simpler formulations for previously known cases and comparisons with recent works . the solutions given may also be applied to the classical stark problem : we thus provide here for the first time the complete set of solutions for this well - known effect in terms of jacobi elliptic functions . exosphere , radiation pressure , stark effect , trajectories |
graph theory has recently received increasing attention for applications to complex systems in various disciplines ( gernert 1997 , paton 2002a , b , bornholdt and schuster 2003 ) . the characterization of systems ( with interrelated constituents ) by graphs ( with linked vertices ) is comparably general to their characterization in terms of categories ( with elements related by morphisms ) . despite its generality , graph theory has turned out to be a powerful tool for gaining very specific insight into structural and dynamical properties of complex systems ( see jost and joy 2002 , atmanspacher et al . 2005 for examples ) . an area of particularly intense interest , in which complex systems abound , is biological information processing . this ranges from evolutionary biology through genetics to the study of neural systems . theoretical and computational neuroscience have become rapidly growing fields ( hertz et al . 1991 , haykin 1999 , dayan and abbott 2001 ) in which graph theoretical methods have gained considerable significance ( cf . sejnowski 2001 ) . two basic classes of biological networks are feedforward and recurrent networks . in networks with purely feedforward ( directed ) connectivities , neuronal input is mapped onto neuronal output through a feedforward synaptic weight matrix . in recurrent networks , there are additional ( directed or bi - directed ) connectivities between outputs and other network elements , giving rise to a recurrent synaptic weight matrix . much recurrent modeling incorporates the theory of nonlinear and complex dynamical systems ( cf . smolensky 1988 , see also beim graben 2004 for discussion ) . hopfield networks are an example of a fully recurrent network in which all connectivities are bidirectional and the output is a deterministic function of the input . their stochastic generalizations are known as boltzmann machines . another important distinction with respect to the implementation of neural networks refers to the way in which the neuronal states are characterized : the two main options are firing rates and action potentials ( for more details see haykin 1999 ) . a key topic of information processing in complex biological networks is learning , for which three basically different scenarios are distinguished in the literature ( see dayan and abbott 2001 , chap . iii ) : unsupervised , supervised and reinforcement learning . in unsupervised ( also self - supervised ) learning a network responds to inputs solely on the basis of its intrinsic structure and dynamics . a network learns by evolving into a state that is constrained by its own properties and the given inputs , an important modelling strategy for implicit learning processes . in contrast , supervised learning presupposes the definition of desired input - output relations , so the learned state of the network is additionally constrained by its outputs . usually , the learning process in this case develops by minimizing the difference between the actual output and the desired output . the corresponding optimization procedure is not intrinsic to the evolution of the system itself , but has to be externally arranged , hence the learning is called supervised . if the supervision is in some sense `` naturalized '' by coupling a network to an environment , which provides evaluative feedback , one speaks of reinforcement learning .
in this contributionwe are interested in supervised learning ( see duda et al .2000 for a review ) on small , fully recurrent networks implemented on graphs ( cf .jordan 1998 ) .we start with a general formal characterization in terms of dynamical systems ( sec .2.1 ) , describe how they are implemented on graphs ( sec .2.2 ) , and show how it reaches asymptotically stable states ( attractors ) when the learning process is terminated , i.e. is optimized for given inputs and ( random ) initial conditions with respect to predetermined outputs ( sec .. we shall characterize the learning operations by a multiplicative structure characterizing successively presented inputs in sec . 3.1 . in this contextwe confirm and specify earlier conjectures ( e.g. , gernert 1997 ) about the non - commutativity of learning operations for a concrete model . in sec .3.2 , we study how the size of the set of attractors representing the derived structure changes during the process for perfectly and imperfectly optimized networks .the number of attractors is proposed to indicate the complexity of learning , and in sec . 4 thisis tentatively related to pragmatic information as a particular measure of meaning .let be a set , and let , with , be a partition of into two disjoint subsets .if is some closed subset of , may be the boundary of .( later we will specify as the vertices of a graph , as a set of `` external '' or `` boundary '' vertices , and as a set of `` internal '' vertices . )we consider the dynamics of fields , where , , represents time as parametrized discretely or continuously , and is the space of admissible state values for the fields .the dynamics of can be described by an equation =0 \ , .\ ] ] for a continuous time variable and , a typical example is the diffusion equation = \frac{\partial u(x , t)}{\partial t } - \lambda \delta u(x , t)\ ] ] where is the laplace operator and the diffusion constant . the only constraint on eq .[ eq1 ] is that a state at time determines uniquely the solution for any time .we now define a set of external conditions specifying field values on which will be kept fixed during the time evolution of the fields on . this is to say that the dynamics of fields is effectively restricted to : =0\ , .\ ] ] since the state of the system at time uniquely determines the states for all , we can define a mapping , the so - called time evolution operator , acting on the set of field states . for an initial state at , $ ] yields the state of the system at . taking into accountthat different external conditions initiate different evolutions , we have to specify the time evolution operator as a mapping , where is the set of states , by the following construction : let be the initial condition for eq .( [ eq2 ] ) , then (x ) = u(x , b_i , t)\ ] ] is the state of the corresponding solution at time under the external condition . in principle, the state space of can be the entire set of states .however , for reasons which will become clear below , we are interested in dissipative systems evolving into attractors in the limit of large .if one of the states belonging to an attractor is chosen as an initial condition , the image of will again be one of the attractor states .this allows us to reduce the number of possible states on which the mappings close . 
denoting the flow operator as the input under the external condition , we now consider the set of states belonging to attractors after time . then all mappings , applied to an attractor , lead to images in : \in a \hspace{1 cm } \mbox{for } a\in a \ , .\ ] ] in general , the set of all attractor states does not contain a proper subset which is mapped onto itself by the set of mappings ; otherwise can be reduced to such a subset . each single mapping may not be surjective , but the union of the images of all equals . due to condition ( [ eq3 ] ) , we can define a composition of mappings . in this way , the external conditions give rise to an associative multiplicative structure . this structure is represented on the set of attractors . we now implement the general notions developed so far on graphs ( see wilson 1985 for an introduction to graph theory ) and specify the set as the set of vertices of a graph . for simplicity we consider directed graphs with single connections for each direction between any two vertices and without self - loops . such a graph gives rise to non - reflexive relations on and can be represented by an adjacency matrix . for two vertices and we have : if is symmetric , the graph is undirected . the set of vertices is decomposed into a set of external vertices and a set of internal vertices . if is the total number of vertices , the number of external vertices and the number of internal vertices , we have . next we consider fields on a graph with vertices evolving in discrete time steps according to : the value of the field at vertex and time depends only on the sum of the field values at neighboring vertices at time . the fields assume integer values , and the function is defined as : where denotes the nearest integer - rounded . the function is shown in fig . [ fx ] . [ figure [ fx ] : plot of the function -- it rises from zero , reaches a maximum and decreases back to zero , remaining zero for larger arguments ; the axis labels were mathematical symbols and are not reproduced here . ] the restriction of to integer values implies that there is only a finite number of states . starting from an arbitrary initial state in , the system runs into an attractor after a few time steps . in many cases , this attractor is a fixed point , i.e. one single state that is asymptotically stable . sometimes the attractor is a limit cycle , i.e. a periodic succession of several ( usually few ) states . strange attractors do not occur since the number of states is finite . the external conditions are defined as fixed states on the external vertices , i.e. , the state values on the external vertices remain unaffected by the dynamics .
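a minimal simulation of these dynamics can be sketched as follows . the exact thresholds of the function in fig . [ fx ] and the precise graph sizes used in the article did not survive extraction , so the response function , the vertex numbers and the maximal field value below are hypothetical stand - ins with the qualitative shape and scale described in the text .

```python
# sketch of the graph dynamics: integer fields, synchronous update from
# neighbour sums, external vertices clamped to an input pattern.
import numpy as np

rng = np.random.default_rng(1)

N_EXT, N_INT = 16, 8        # hypothetical split into clamped external and free internal vertices
N = N_EXT + N_INT
SIGMA_MAX = 10              # hypothetical maximal field value

def f(x):
    """hypothetical response function: rises to SIGMA_MAX, falls back to zero."""
    y = np.where(x <= 20, x * SIGMA_MAX / 20.0,
                 np.maximum(0.0, SIGMA_MAX * (40 - x) / 20.0))
    return np.rint(np.clip(y, 0, SIGMA_MAX)).astype(int)

# random directed graph with single links and no self-loops
A = (rng.random((N, N)) < 0.3).astype(int)
np.fill_diagonal(A, 0)

def run_to_attractor(A, external_pattern, phi0=None, max_steps=10000):
    """iterate the dynamics from phi0 (default: all zeros) until a state repeats;
    return the periodic part (length 1 = fixed point, length > 1 = limit cycle)."""
    phi = np.zeros(N, dtype=int) if phi0 is None else phi0.copy()
    phi[:N_EXT] = external_pattern            # clamped boundary condition
    seen, trajectory = {}, []
    for t in range(max_steps):
        key = tuple(phi)
        if key in seen:                       # state revisited -> attractor found
            return trajectory[seen[key]:]
        seen[key] = t
        trajectory.append(phi.copy())
        new = f(A @ phi)                      # synchronous update from neighbour sums
        new[:N_EXT] = external_pattern        # external vertices stay fixed
        phi = new
    raise RuntimeError("no attractor found within max_steps")

pattern = rng.integers(0, SIGMA_MAX + 1, size=N_EXT)
attractor = run_to_attractor(A, pattern)
print("attractor length (1 = fixed point):", len(attractor))
```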
of course , the external conditions are supposed to influence the dynamics of the internal vertices . the graphs used in our investigations consist of a total of vertices with external vertices and internal vertices . the maximal value of is defined to be . we consider 11 different input patterns which are shown in fig . [ fig1 ] . [ figure [ fig1 ] : the 11 input patterns , numbered 1 to 11 , each drawn as a 4 x 4 block of cells in the original figure ; the patterns themselves are not reproduced here . ] [ table : example of a mapping diagram for 11 inputs and a system with 10 different attractors . the entries show the number of the attractor state which is obtained by applying ( plotted vertically ) to ( plotted horizontally ) ; the table entries themselves are not reproduced here . ] the multiplicative structure associated with tab . [ tab2 ] consists of the 11 elements which are idempotent , and satisfy the relation hence they are non - commutative , though associative : since the optimal reaction of a graph to an input is not uniquely related to that input , the attractor providing an optimal output can be identical for different inputs . therefore , the multiplicative structure of input operations can be even simpler in the sense that some of the attractors are identical . table [ tab0 ] shows a corresponding example with less than 11 attractors . deviations from eq . ( [ eqproj ] ) indicate a more complicated structure of learning operations . if the elements in the same row ( i.e.
for the same input ) of the mapping diagram differ from each other , the reaction of the graph with respect to an input depends on the previous input .this means that the result of a learning process depends on the sequence in which successive learning steps are carried out .this implies that the multiplicative structure of input operations deviates from eq .( [ eqproj ] ) .since the are mappings , associativity is valid trivially .however , the structure will generally be non - commutative , although it may happen that particular inputs commute , for instance when they project onto the same attractor , such as and , or and , or and in tab .[ tab0 ] . we can now understand how an optimal learner differs from a perfect learner , which recognizes inputs independently of the sequence of their presentation . comparing tabs .[ tab0a ] and [ tab1 ] shows that attractor leads to the optimal output ( field values on the first two vertices ) for input , attractors and yield the optimal output for inputs , and attractors and yield the optimal output for inputs . in these cases ,optimal learning coincides with perfect learning . from tab .[ tab0 ] we see that inputs are recognized independently of previous inputs .by contrast , inputs , and are recognized correctly only if the previous input is , and , respectively .table [ tab0a ] shows that attractors lead to an `` almost '' correct output for inputs , and the output of differs considerably from any optimal output .although these situations represent optimal learning , they are different or even far from perfect learning . if the attractor for a particular input does not consist of one single state ( fixed point ) , but of a perodic sequence of states ( limit cycle ) , idempotency [ eqidemp ] does no longer hold .( strictly speaking , this is only correct if the number of time steps in the mapping and the length of the cycle have no common denominator .otherwise , the attractor may consist of more states than can be detected by the mapping diagram or the set of inputs . )note that the structure of learning operations derived here is more general than an algebra ( as conjectured by gernert 2000 ) .there is no identity element , there is no neutral element , and no addition of the elements is defined . in order to investigate the evolution of the set of attractors during the learning process, we focus on the number of attractor states as a function of learning steps for the entire sequence of graphs starting from a random graph until a graph with optimal learning is reached .since a large number of attractors intuitively relates to quite complex structures of the graph during the learning process , we propose to refer to the size of the set of attractors as a possible measure for the _ complexity of learning_. however , it should be emphasized that a rigorous definition of complexity ( cf .wackerbauer et al .1994 ) is not yet associated with this notion .initially , the graphs are ( almost ) random and exhibit large variances of the order of . for these graphsthe number of attractor states with respect to the inputs varies over a large range ; typical are numbers between 30 and 50 .as learning begins , the variance decreases , but the number of attractor states increases , sometimes up to a few hundred .a further decrease in variance , below a value of 6000 , causes the number of attractor states to decrease again . 
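continuing the sketch above , the two quantities discussed in this section -- the number of distinct attractor states over all inputs , taken as the complexity of learning , and the mapping diagram that reveals whether the response to an input depends on the previous input -- can be computed as follows . the functions and the graph are reused from the earlier code block , and the 11 input patterns are random stand - ins for those of fig . [ fig1 ] .

```python
# counting attractor states and building the mapping diagram; reuses
# run_to_attractor, f, A, N_EXT and SIGMA_MAX from the previous sketch.
import numpy as np

rng = np.random.default_rng(2)
inputs = [rng.integers(0, 2, size=N_EXT) * SIGMA_MAX for _ in range(11)]   # binary stand-in patterns

attractors = [run_to_attractor(A, b) for b in inputs]      # attractor for each input from the zero state
n_states = len({tuple(s) for att in attractors for s in att})
print("number of attractor states over all inputs:", n_states)

def label(att):
    """label an attractor by its (unordered) set of states."""
    return frozenset(map(tuple, att))

# mapping diagram: entry (i, j) is the attractor reached when input j is applied
# to a state of the attractor previously reached under input i
diagram = [[label(run_to_attractor(A, b_j, phi0=attractors[i][0])) for b_j in inputs]
           for i in range(len(inputs))]
rows_equal = all(row == diagram[0] for row in diagram)
print("response independent of the previous input:", rows_equal)
```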
for optimal learners ( graphs with vanishing variance ) the number of attractor states terminates at around .a typical example is shown in fig .[ fig3 ] .we now select a sample of 116 learning sequences starting from random graphs and terminating as ( almost ) optimal learners . for this samplewe count the number of attractor states , i.e. the complexity of learning , for those graphs which were accepted during the process , i.e. , for which the variance was always smaller than for any previous graph in the sequence .their behavior can be seen in fig .[ fig4 ] , where is plotted as a function of .it confirms the impression from fig .[ fig3 ] that , as learning proceeds , its complexity evolves non - monotonically .in about 50% of the cases the sequence started with less than attractor states .the final was much smaller , and for intermediate stages of learning reached a maximum during the learning process . in about 85% of all casesthe final number of attractor configurations was smaller than 20 .the largest final number of attractor states for an optimal learner was 56 .exceptions from this behavior occur if the initial ( random ) graph has a number of attractor states that is extremely large , exceeding any other number of attractor states in the sequence .for this case we find a total number of 15 sequences . in 12 of these sequencesthe initial number of attractors is larger than 100 ( with a maximum of 747 ) .figure [ fig5 ] shows a plot of number of attractors as a function of variance for 98 non - optimal learners whose final variance is . keeping in mind that decreasing variance corresponds to progressive learning , the general trend of figs .[ fig3 ] and [ fig4 ] reappears : the size of the set of attractors , i.e. the complexity of learning , evolves non - monotonically as learning proceeds . as the main observation of the present subsection, we can state that the number of attractors required to optimally map a given input onto a predetermined output evolves non - monotonically during the process of learning . while increases during the initial phase of learning , it decreases again until the learning process is terminated .we interpret this behavior as a non - monotonic complexity of the learning process .non - monotonic as opposed to monotonic measures of complexity have been developed and investigated for about two decades ; for a comparative overview see wackerbauer et al .the property of monotonicity is usually understood as a function of ( some measure of ) randomness of the pattern or process considered .monotonic complexity essentially increases as randomness increases : most random features are also most complex .non - monotonic complexity shows convex behavior as a function of increasing randomness : highest complexity is assigned to features with a mixture of random and non - random elements , while both very low and very high randomness yield minimal complexity .there is an interesting relationship between the two classes of complexity measures and measures of information ; for more details see atmanspacher ( 1994 ) or atmanspacher ( 2005 ) .it turns out that monotonic complexity usually corresponds to syntactic information , whereas non - monotonic ( convex ) complexity corresponds to semantic information or other measures of meaning ( see fig .[ fig6 ] ) . 
as a particularly interesting approach ,pragmatic information has been proposed ( weizscker 1972 ) as an operationalized measure of meaning .its essence is that purely random messages keep providing complete novelty ( or primordiality ) as they are delivered , while purely non - random messages keep providing complete confirmation ( after initial transients ) .pragmatic information refers to meaning in terms of a mixture of confirmation and novelty . extracting meaning from a message depends on the capability to transform novel elements into knowledge using confirming elements .it has been speculated ( atmanspacher 1994 ) that systems having this capacity are able to reorganize themselves in order to flexibly modify their complexity relative to the task that they are supposed to solve .a learning process , in which insight is gained and meaning is understood , may start at low complexity ( high randomness , much novelty ) and terminate at low complexity ( high regularity , much confirmation ) , but it passes through an intermediate stage of maximal complexity .the notion of pragmatic information was earlier utilized in this sense for non - equilibrium phase transitions in multimode lasers ( atmanspacher and scheingraber 1990 ) .it could be shown that a particular well - defined type of pragmatic information , adapted to that case , behaves precisely as indicated above .pragmatic information is maximal at the unstable stage of the phase transition , and it is low in the preceding and successive stages .however , lasers are physical systems , and it is problematic to ascribe something like an `` understanding of meaning '' to their behavior .biological networks such as studied in this paper are more realistic systems for a concrete demonstration of the basic idea .the non - monotonic complexity of learning processes as indicated in sec .3.2 starts with random graphs and ends with graphs of minimized variance ( maximized fitness ) , which are as non - random as possible under the given conditions . in this sense, a scenario has been established in which the complexity of learning on graphs qualitatively satisfies the conditions required for relating it to a measure of pragmatic information . 
within this scenario ,our approach suggests that the actual `` release of meaning '' during learning does not occur when the output is optimized but rather when the complexity is maximized .it is a long - standing desideratum to identify meaning - related physiological features in the brain ( freeman 2003 ) .since learning is a key paradigm in which the emergence of meaning can be studied , we hope that our approach may offer a useful perspective for progress concerning this problem .in this contribution an example of supervised learning in recurrent networks of small size implemented on graphs is studied numerically .the elements of the network are treated as vertices of graphs and the connections among the elements are treated as links of graphs .eleven inputs and two outputs are predefined , and the learning process within the remaining six internal vertices is carried out such as to minimize the difference between the actual output and the predetermined output .optimization of outputs is achieved by stable configurations at the internal vertices that can be characterized as attractors .two particular features of the learning behavior of the network are investigated in detail .first , it is shown that , in general , the mapping from inputs to outputs depends on the sequence of inputs .thus , the associative multiplicative structure of input operations represented by sets of attractors is , in general , non - commutative .second , the size of the set of attractors changes as the learning process evolves .with increasing optimization ( fitness ) , the number of attractors increases up to a maximum and then decreases down to a usually small final set for optimal network performance . assuming that the size of the set of attractors indicates the complexity of learning , its non - monotonic behavior is of special interest .since non - monotonic measures of complexity can be related to pragmatic information as a measure of meaning , it is tempting to consider the maximum of complexity as reflecting the release of meaning in learning processes .further work will be necessary to substantiate this speculation .atmanspacher , h. , 1994 , complexity and meaning as a bridge across the cartesian cut . journal of consciousness studies 1 , 168181 .freeman , w.j . , 2003 ,a neurobiological theory of meaning in perception , part i : information and meaning in nonconvergent and nonlocal brain dynamics .international journal of bifurcation and chaos 13 , 24932511 . | we present results from numerical studies of supervised learning operations in recurrent networks considered as graphs , leading from a given set of input conditions to predetermined outputs . graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors which form a representation space for an associative multiplicative structure of input operations . as the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs , this structure is generally non - commutative . moreover , the size of the set of attractors , indicating the complexity of learning , is found to behave non - monotonically as learning proceeds . a tentative relation between this complexity and the notion of pragmatic information is indicated . |
a recent influx of academic monographs and popular books manifests a keen cultural and scientific interest in complex networks , which appeal to both applied and theoretical problems in national defense , sociology , epidemiology , computer science , statistics , and mathematics .the erds rnyi random graph remains the most widely studied network model .its simple dynamics endow it with remarkable mathematical properties , but this simplicity overpowers any ability to replicate realistic structure .many other network models have been inspired by empirical observations .chief among these is the _ scale - free _ phenomenon , which has garnered attention since the initial observation of power law behavior for internet statistics .celebrated is barabsi and albert s preferential attachment model , whose dynamics are tied to the _ rich get richer _ or _ matthew effect_. citing overlooked attributes of network sampling schemes , other authors have questioned the power law s apparent ubiquity .otherwise , watts and strogatz proposed a model that replicates milgram s _ small - world _phenomenon , the vernacular notion of _ six degrees of separation _ in social networks .networks arising in many practical settings are dynamic , they change with time .consider a population of individuals . for each , let indicate a social relationship between and and let comprise the indicators for the whole population at time .for example , can indicate whether and are co - workers , friends , or family , have communicated by phone , email , or telegraph within the last week , month , or year , or subscribe to the same religious , political , or philosophical ideology . within the narrow scope of social networks ,the potential meanings of seem endless ; expanding to other disciplines , the possible interpretations grow . in sociology , records changes of social relationships in a population ; in other fields , the network dynamics reflect different phenomena and , therefore , can exhibit vastly different behaviors . in each case , is a time - varying network .time - varying network models have been proposed previously in the applied statistics literature .the _ temporal exponential random graph model _ ( tergm ) in incorporates temporal dependence into the _ exponential random graph model _ ( ergm ) .the authors highlight select properties of the tergm , but consistency under subsampling is not among them . from the connection between sampling consistency and lack of interference, it is no surprise that the exponential random graph model is sampling consistent only under a choking restriction on its sufficient statistics .mccullagh argues unequivocally the importance of consistency for statistical models .presently , no network model both meets these logical requirements and reflects empirical observations . in this paper , rather thanfocus on a particular application , we discuss network modeling from first principles .we model time - varying networks by stochastic processes with a few natural invariance properties , specifically , exchangeable , consistent markov processes .the paper is organized as follows . 
in section [ section : modeling preliminaries ] , we discuss first principles for modeling time - varying networks ; in section [ section : informal description ] , we describe the rewiring process informally ; in section [ section : rewiring maps ] , we introduce the workhorse of the paper , the rewiring maps ; in sections [ section : discrete ] and [ section : exchangeable rewiring maps ] , we discuss a family of time - varying network models in discrete - time ; in section [ section : continuous ] , we extend to continuous - time ; in section [ section : poissonian structure ] , we show a poisson point process construction for the rewiring process , and we use this technique to establish the feller property ; and in section [ section : concluding remarks ] , we make some concluding remarks .we prove some technical lemmas and theorems in section [ section : proof ] .for now , we operate with the usual definition of a graph / network as a pair of vertices and edges . we delay formalities until they are needed .let be a random collection of graphs indexed by , denoting _ time_. we may think of as a collection of social networks ( for the same population ) that changes as a result of social forces , for example , geographical relocation , broken relationships , new relationships , etc . , but our discussion generalizes to other applications . in practice , we can observe only a finite sample of individuals . since the population size is often unknown , we assume an infinite population so that our model only depends on known quantities .thus , each is a graph with infinitely many vertices , of which we observe a finite sub - network }_t ] , where is the sample size , and the population graph is infinite with vertex set , the natural numbers .the models we consider are _ markovian _ , _ exchangeable _ , and _consistent_. the process has the _ markov property _if , for every , its pre- and post- -fields are conditionally independent given the present state .put another way , the current state incorporates all past and present information about the process , and so the future evolution depends on only through .it is easy to conceive of counterarguments to this assumption : in a social network , suppose there is no edge between individuals and or between and at time .then , informally , and for the sake of illustration . ]we expect the future ( marginal ) evolution of edges and to be identically distributed .but if , in the past , and have been frequently connected and and have not , we might infer that the latent relationships among these individuals are different and , thus , their corresponding edges should evolve differently .for instance , given their past behavior , we might expect that and are more likely than and to reconnect in the future . despite such counterarguments ,the markov property is widely used and works well in practice .generalizations to the markov property may be appropriate for specific applications , but they run the risk of overfitting . structure and changes to structure drive our study of networks .vertex labels carry no substantive meaning other than to keep track of this structure over time ; thus , a suitable model is _ exchangeable _ , that is , its distributions are invariant under relabeling of the vertices . for a model on finite networks ( i.e. 
, finitely many vertices ), exchangeability can be induced trivially by averaging uniformly over all permutations of the vertices .but we assume an infinite population , for which the appropriate invariance is _ infinite exchangeability _ , the combination of exchangeability and consistency under subsampling ( section [ section : consistency ] ) .unlike the finite setting , infinite exchangeability can not be imposed arbitrarily by averaging ; it must be an inherent feature of the model . for any graph with vertex set , there is a natural and obvious restriction to an induced subgraph with vertex set by removing all vertices and edges that are not fully contained in .the assumption of _ markovian consistency _ , or simply _ consistency _, for a graph - valued markov process implies that , for every , the restriction } ] is , itself , a markov process .note that this property does not follow immediately from the markov assumption for because the restriction operation is a many - to - one function and , in general , a function of a markov process need not be markov .also note that the behavior of the restriction } ] . at any time ,given , we can generate a transition to a new state as follows . independently for each pair , we flip a coin to determine whether to put an edge between and in : if is _ on _ in , we flip a -coin ; otherwise , we flip a -coin .this description results in a simple , exchangeable markov chain on finite graphs , which we call the _ erds rnyi rewiring chain _ ( section [ section : er ] ) .more general transitions are possible , for example , edges need not evolve independently .we use the next markov chain as a running example of a discrete - time rewiring chain .we fix and regard an undirected graph with vertex set ] can be represented by its symmetric _ adjacency matrix _ for which if has an edge between and , and otherwise . by convention, we always assume for all .we write to denote the finite collection of all graphs with vertex set ] are measurable for every . moreover , both and come equipped with a product - discrete topology induced , for example , by the ultrametric }=w'|_{[n]}\bigr\},\quad\quad w , w'\in \wiren.\ ] ] the metric on is analogous .both and are compact , complete , and separable metric spaces .much of our development hinges on the following proposition , whose proof is straightforward .[ prop : lipschitz ] rewiring maps are associative under composition and lipschitz continuous in the metric ( [ eq : metric ] ) , with lipschitz constant 1 .let denote the collection of _ finite _ permutations of , that is , permutations for which .we call any random array _ weakly exchangeable _ if and where denotes _ equality in law_. aldous defines weak exchangeability using only the latter condition ; see , chapter 14 , page 132 .we impose symmetry for convenience in this paper , all graphs and rewiring maps are symmetric arrays . 
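the rewiring chain with coin probabilities p0 and p1 sketched earlier in this section is straightforward to simulate : every unordered pair of vertices independently keeps or gains an edge with probability p1 if the edge is currently present and p0 if it is absent . the stationary edge density quoted in the check below , p0 / ( 1 - p1 + p0 ) , is our own elementary two - state - chain calculation , included only to validate the simulation .

```python
# simulation of the (p0, p1) edge-flipping rewiring chain on n vertices.
import numpy as np

rng = np.random.default_rng(3)
n, p0, p1 = 60, 0.1, 0.7
iu = np.triu_indices(n, k=1)                    # unordered pairs i < j

edges = rng.random(len(iu[0])) < 0.5            # arbitrary initial graph
densities = []
for t in range(2000):
    u = rng.random(len(edges))
    edges = np.where(edges, u < p1, u < p0)     # p1-coin if the edge is on, p0-coin if off
    densities.append(edges.mean())

print("empirical long-run edge density :", np.mean(densities[500:]))
print("stationary value p0/(1-p1+p0)   :", p0 / (1 - p1 + p0))
```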
from the discussion in section [ section : exchangeability ] , we are interested in models for random graphs that are _ exchangeable _ , meaning the adjacency matrix is a weakly exchangeable -valued array .likewise , we call a random rewiring map _ exchangeable _ if its associated -valued array is weakly exchangeable .de finetti s theorem represents any infinitely exchangeable sequence in a polish space with a ( non - unique ) measurable function ^ 2\rightarrow\mathcal{s} ] .the aldous hoover theorem extends de finetti s representation ( [ eq : de finetti rep ] ) to weakly exchangeable -valued arrays : to any such array , there exists a ( non - unique ) measurable function ^ 4\rightarrow\mathcal{s} ] .the function has a statistical interpretation that reflects the structure of the random array .in particular , decomposes the law of into individual , row , column , and overall effects .the overall effect plays the role of the mixing measure in the de finetti interpretation .if in ( [ eq : de finetti rep ] ) is constant with respect to its first argument , that is , for all ] , then is _ dissociated _ , that is }\mbox { is independent of } x^*|_{\{n+1,n+2,\ldots\ } } \mbox { for all } n\in\mathbb{n}.\ ] ] the aldous hoover representation ( [ eq : a - h rep ] ) spurs the sequel to de finetti s interpretation : see aldous , chapter 14 , for more details .we revisit the theory of weakly exchangeable arrays in section [ section : exchangeable rewiring maps ] .throughout the paper , we use the rewiring maps to construct markov chains on . from any probability distribution on , we generate i.i.d . from and a random graph ( independently of ) .we then define a markov chain on by we call _ exchangeable _ if is an exchangeable rewiring map , that is , for all permutations \rightarrow[n] ] .moreover , the law of satisfies and , for any fixed and , satisfies the -entry of . therefore , and , for any exchangeable graph and exchangeable rewiring map , we have hence , the transition law of is equivariant with respect to relabeling .since the initial state is exchangeable , so is the markov chain .we call an _ -rewiring markov chain_. from the discussion in section [ section : rewiring maps ] , we can define an exchangeable measure on as the restriction to of an exchangeable probability measure on , where }=w\bigr\}\bigr),\quad\quad w\in \mathcal { w}_n .\ ] ] denote by the transition probability measure of an -rewiring markov chain on , as defined in ( [ eq : rewiring tps ] ) .[ thm : consistent rewiring chain ] for any exchangeable probability measure on , is a consistent family of exchangeable transition probabilities in the sense that for every }=g\} ] , the _ -erds rnyi chain _ has finite - dimensional transition probabilities for , the -erds rnyi rewiring chain has unique stationary distribution , with . by assumption , both and are strictly between and and , thus , ( [ eq : er fidi ] ) assigns positive probability to every transition in , for every .therefore , each finite - dimensional chain is aperiodic and irreducible , and each possesses a unique stationary distribution . by consistency of the transition probabilities ( theorem [ thm : consistent rewiring chain ] ) , the finite - dimensional stationary measures must be exchangeable and consistent and , therefore , they determine a unique measure on , which is stationary for .furthermore , by conditional independence of the edges of , given , the stationary law must be erds rnyi with some parameter . 
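As a quick numerical illustration of the stationary behaviour just established, one can iterate the hypothetical er_rewiring_step sketch from above and watch the empirical edge density settle. The check below relies only on the elementary two-state heuristic that, per edge, the stationary probability of being present is the switch-on probability divided by the sum of the switch-on and switch-off probabilities; the specific parameter values are arbitrary.

    import random

    def edge_density(adj):
        """Fraction of unordered pairs {i, j} that carry an edge."""
        n = len(adj)
        m = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
        return m / (n * (n - 1) / 2)

    rng = random.Random(0)
    G = [[0] * 30 for _ in range(30)]               # empty graph on 30 vertices
    for _ in range(200):                            # run the chain for a while
        G = er_rewiring_step(G, 0.9, 0.2, rng)      # reuses the sketch above
    # per-edge two-state heuristic: stationary on-probability 0.2/(0.2 + 0.1) = 2/3,
    # so the observed density should be close to 0.667
    print(edge_density(G))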
in an -random graph ,all edges are present or not independently with probability .therefore , it suffices to look at the probability of the edge between vertices labeled 1 and 2 . in this case , we need to choose so that which implies .some elementary special cases of the -erds rnyi rewiring chain are worth noting .first , for either or , this chain is degenerate at either the empty graph or the complete graph and has unique stationary measure or , respectively . on the other hand ,when , the chain is degenerate at its initial state and so its initial distribution is stationary .however , if , then the chain alternates between its initial state and its complement , where for all ; in this case , the chain is periodic and does not have a unique stationary distribution .we also note that when for some , the chain is simply an i.i.d .sequence of -random graphs with stationary distribution , where , as it must . for , we define the _ mixed erds rnyi rewiring chain _ through , the mixture of -laws with respect to the beta law with parameter . writing we derive } \varepsilon _ p^{(n)}(g)\frac{\gamma(\alpha+\beta)}{\gamma(\alpha)\gamma ( \beta)}p^{\alpha-1}(1-p)^{\beta-1}\,\mathrm{d}p \\[-1pt ] & = & \frac{\gamma(\alpha+\beta)}{\gamma(\alpha)\gamma(\beta ) } \frac{\gamma(\alpha+n_1)\gamma(\beta+n_0)}{\gamma(\alpha+\beta + n)}\int_{[0,1 ] } \mathscr{b}_{\alpha+n_1,\beta+n_0}(\mathrm{d}p ) \\[-1pt ] & = & \frac{\alpha^{\uparrow n_1}\beta^{\uparrow n_0}}{(\alpha+\beta ) ^{\uparrow n}},\end{aligned}\ ] ] where , , and . for , we define _ mixed erds rnyi transition probabilities _ by an interesting special case takes and for . in this case ,( [ eq : mixture er ] ) becomes is reversible with respect to . for fixed ,we write and .note that .therefore , we have & = & \frac{\alpha^{\uparrow n'_{00}}\beta^{\uparrow n'_{10}}\alpha ' ^{\uparrow n'_{11}}\beta^{\uparrow n'_{01}}}{(\alpha+2\beta+\alpha ' ) ^{\uparrow n } } \\[-1pt ] & = & \varepsilon^{(n)}_{\alpha+\beta,\alpha'+\beta}\bigl(g ' \bigr)p_{(\beta , \alpha),(\alpha',\beta)}^{(n)}\bigl(g',g\bigr),\end{aligned}\ ] ] establishing detailed balance and , thus , reversibility . a mixed erds rnyi markov chain is directed by \times[0,1]}\omega_{p_0,p_1}(\mathrm{d}w ) ( \mathscr { b}_{\alpha_0,\beta_0}\otimes\mathscr{b}_{\alpha_1,\beta _ 1 } ) ( \mathrm{d}p_0,\mathrm{d}p_1),\quad\quad w\in\wiren,\ ] ] where is determined by its finite - dimensional distributions for , for every . in the next section, we see that a representation of the directing measure as a mixture of simpler measures holds more generally .notice that is dissociated for all fixed . by the aldous hoover theorem , we can express any exchangeable measure on as a mixture of dissociated measures .to more precisely describe the mixing measure , we extend the theory of _ graph limits _ to its natural analog for rewiring maps .we first review the related theory of graph limits , as surveyed by lov ' asz .a graph limit is a statistic that encodes a lot of structural information about an infinite graph .in essence , the graph limit of an exchangeable random graph contains all relevant information about its distribution . for any injection \rightarrow[n] ] by the vertices in the range of .given and , we define to equal the number of injections \rightarrow [ n] ] . 
the _ limiting density _ of in any infinite graph is }),\quad\quad f\in \graphsm , \quad\quad\mbox{if it exists}.\ ] ] the collection is countable and so we can define the _ graph limit _ of by provided exists for all .any graph limit is an element in ^{\mathcal{g}^*} ] , which we denote by .we implicitly equip ^{\mathcal{g}^*} ] for which . for an infinite rewiring map , we define }),\quad\quad w\in\mathcal { w}_m,\quad\quad \mbox{if it exists}.\ ] ] as for graphs , the collection is countable and so we can define the _ rewiring limit _ of by provided exists for all . we write ^{\mathcal{w}^*} ] for every , for all , and * for every . by definition of , we may assume that is the rewiring limit of some so that , for every .from the definition of the rewiring limit ( [ eq : rewiring limit ] ) , })}{n^{\downarrow m}}=\lim _ { n\rightarrow\infty}\sum_{w\in\wirem } \frac{\ind ( w , w^*|_{[n]})}{n^{\downarrow m}}=1,\ ] ] where the interchange of sum and limit is justified by the bounded convergence theorem because })/n^{\downarrow m}\leq1 ] there are injections \rightarrow[k] ] .[ lemma : compact rewiring ] is a compact metric space .[ prop : dissociated ] let be a dissociated exchangeable rewiring map .then , with probability one , exists and is nonrandom .we delay the proofs of lemma [ lemma : compact rewiring ] and theorem [ prop : dissociated ] until section [ section : proof ] .[ cor : existence rewiring limit ] let be an exchangeable random rewiring map. then exists almost surely . by theorem [ prop : dissociated ] , every dissociated rewiring map possesses a nonrandom rewiring limit almost surely . by the aldous hoover theorem , is a mixture of dissociated rewiring maps and the conclusion follows . by lemma [ lemma : rewiring structure ] , any determines a probability measure on in a straightforward way : for each , we define as the probability distribution on with [ prop : rewiring measure ] for any , is a collection of exchangeable and consistent probability distributions on . in particular , determines a unique exchangeable probability measure on for which -almost every has . by lemma [ lemma : rewiring structure ] ,the collection in ( [ eq : measure on wiren ] ) is a consistent family of probability distributions on .exchangeability follows because }) ] for all permutations and . by kolmogorovs extension theorem , determines a unique measure on the limit space .finally , is dissociated and so , by theorem [ prop : dissociated ] , almost surely .we call in proposition [ prop : rewiring measure ] a _ rewiring measure _ directed by . for any measure on , we define the -mixture of rewiring measures by to any exchangeable rewiring map , there exists a unique probability measure on such that .this follows by the aldous hoover theorem and proposition [ prop : rewiring measure ] . from theorem [ prop : dissociated ] and proposition [ prop : rewiring measure ] , any probability measure on corresponds to an -rewiring chain as in theorem [ thm : consistent rewiring chain ] .we now refine our discussion to rewiring chains in continuous - time , for which infinitely many transitions can `` bunch up '' in arbitrarily small intervals , but individual edges jump only finitely often in bounded intervals . 
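The finite-n densities entering these definitions can be computed directly for small motifs. The sketch below is an illustration based on the definition rather than code from the paper: it counts the injections of [m] into [n] under which the sampled entries of the graph reproduce a motif F, and divides by the falling factorial n(n-1)...(n-m+1).

    from itertools import permutations

    def motif_density(F, G):
        """Empirical density of the motif F (m x m adjacency matrix) in the
        finite graph G (n x n adjacency matrix): the proportion of injections
        psi of [m] into [n] with G[psi[i]][psi[j]] == F[i][j] for all i < j."""
        m, n = len(F), len(G)
        hits, total = 0, 0
        for psi in permutations(range(n), m):       # all injections [m] -> [n]
            total += 1
            if all(G[psi[i]][psi[j]] == F[i][j]
                   for i in range(m) for j in range(i + 1, m)):
                hits += 1
        return hits / total

    # toy usage: density of a single edge in the 4-cycle
    edge = [[0, 1], [1, 0]]
    C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
    print(motif_density(edge, C4))                  # 4 edges of 6 pairs -> 2/3

Evaluating such motif densities on larger and larger restrictions of an infinite graph gives increasingly accurate approximations of the limiting densities defined above.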
henceforth , we write to denote the identity and , for , we write to denote the identity .let be an exchangeable measure on such that }\neq\idn\}\bigr)<\infty \quad\quad\mbox{for every } n\geq2.\ ] ] similar to our definition of in section [ section : discrete ] , we use to define the transition rates of continuous - time -rewiring chain .briefly , we assume because the identity map is immaterial for continuous - time processes .the finiteness assumption on the right of ( [ eq : regularity omega ] ) ensures that the paths of the finite restrictions are cdlg . for each , we write to denote the restriction of to and define [ prop : finite q - omega ] for each , is a finite , exchangeable conditional measure on .moreover , the collection satisfies for all , for all , for every , where is the restriction map defined in ( [ eq : restriction - graph ] ) .finiteness of follows from ( [ eq : regularity omega ] ) since , for every , exchangeability of follows by proposition [ prop : exchangeable rewiring chain ] and exchangeability of .consistency of results from lipschitz continuity of rewiring maps ( proposition [ prop : lipschitz ] ) and consistency of the finite - dimensional marginals associated to : for fixed and , }=g'}q_{\omega}^{(n ) } \bigl(g^*,g''\bigr ) \\ & = & \sum_{g'':g''|_{[m]}=g'}\omega^{(n)}\bigl(\bigl\{w\in \mathcal { w}_n \dvt w\bigl(g^*\bigr)=g''\bigr\ } \bigr ) \\ & = & \omega^{(n)}\bigl(\bigl\{w\in \mathcal{w}_n \dvt w|_{[m]}(g)=g'\bigr\}\bigr ) \\ & = & \omega^{(m)}\bigl(\bigl\{w\in\wirem\dvt w(g)=g'\bigr\ } \bigr ) \\ & = & q_{\omega}^{(m)}\bigl(g , g'\bigr).\end{aligned}\ ] ] from , we define a collection of infinitesimal jump rates by [ cor : inf - q ] the infinitesimal generators are exchangeable and consistent and , therefore , define the infinitesimal jump rates of an exchangeable markov process on .consistency when was already shown in proposition [ prop : finite q - omega ] .we must only show that is consistent for .fix and . then , for any , we have in section [ section : informal description ] , we mentioned local and global discontinuities for graph - valued processes . in the next two sections ,we formally incorporate these discontinuities into a continuous - time rewiring process : in section [ section : the rewiring measure ] , we extend the notion of random rewiring from discrete - time ; in section [ section : local - edge ] , we introduce transitions for which , at the time of a jump , only a single edge in the network changes . over time, the local changes can accumulate to cause a non - trivial change to network structure . in this section, we specialize to the case where for some measure on satisfying where is the rewiring limit of and is the entry of corresponding to , for each . for each , we write to denote for , and likewise for the infinitesimal generator .[ lemma : upsilon - omega equiv ] for satisfying ( [ eq : regularity upsilon ] ) , the rewiring measure satisfies ( [ eq : regularity omega ] ) . by theorem [ prop : dissociated ] , implies .we need only show that implies for every . 
for any , }\neq\idn\}\bigr)&= & \omega _ { \upsilon } \biggl(\bigcup_{1\leq i < j\leq n}\{w\in \wiren\dvt w|_{\{i , j\ } } \neq\idij\ } \biggr ) \\& \leq&\sum_{1\leq i < j\leq n}\omega_{\upsilon}\bigl(\{w\in \wiren\dvt w|_{\ { i , j\}}\neq\idij\}\bigr ) \\ & = & \sum_{1\leq i< j\leq n}\omega_{\upsilon}^{(2 ) } \bigl(\mathcal { w}_2\setminus\{\idtwo\}\bigr ) \\ & = & \frac{n(n-1)}{2}\bigl(1-\upsilon^{(2)}_*\bigr).\end{aligned}\ ] ] hence , by ( [ eq : regularity upsilon ] ) , }\neq\idn\}\bigr)\leq \int_{\rewiringlimits}\frac{n(n-1)}{2}\bigl(1-\upsilon^{(2 ) } _ * \bigr)\upsilon ( \mathrm{d}\upsilon)<\infty,\ ] ] for every . for each , is a finite , exchangeable conditional measure on .moreover , satisfies this follows directly from propositions [ prop : rewiring measure ] , [ prop : finite q - omega ] , and lemma [ lemma : upsilon - omega equiv ] .we may , therefore , define an infinitesimal generator for a markov chain on by [ thm : upsilon - rewiring process ] for each satisfying ( [ eq : regularity upsilon ] ) , there exists an exchangeable markov process on with finite - dimensional transition rates as in ( [ eq : infinitesimal - q ] ) .we call in theorem [ thm : upsilon - rewiring process ] a _ rewiring process _ directed by , or with rewiring measure . for and ,let denote the rewiring map that acts by mapping , in words , puts an edge between and ( if ) or no edge between and ( if ) and keeps every other edge fixed . for fixed , let denote the _ empty graph, that is , the graph with no edges .we generate a continuous - time process on as follows .first , we specify a constant and , independently for each pair \times[n] ] on by }_0:=\gamma_0|_{[n]} ] , then we put }_{t}:=w^{[n]}_t(\gamma^{[n]}_{t-}) ] . [prop : thinned ppp ] for each , } ] from by removing any atom times for which }:=w_t|_{[n]}=\idn ] . by the thinning property of poisson point processes , } ] , the jump rate to state is and the conclusion follows .[ thm : existence rewiring ] for any satisfying ( [ eq : regularity omega ] ) , the -rewiring process on exists and can be constructed from a poisson point process with intensity as above .let be a poisson point process with intensity and construct }\}_{n\in\mathbb{n}} ] determined by . by proposition [ prop : thinned ppp] , each } ] is compatible by construction , that is , }_t=\rmn\gamma^{[n]}_t ] defines a process on .as we have shown previously , the infinitesimal rates given by are consistent and exchangeable ; hence , has infinitestimal generator and is an -rewiring process .any markov process on is characterized by its semigroup , defined as an operator on the space of continuous , bounded functions by where denotes the expectation operator with respect to the initial distribution , the point mass at .we say has the _ feller property _ if , for all bounded , continuous functions , its semigroup satisfies * as for all , and * is continuous for all . the semigroup of any -rewiring process enjoys the feller property . 
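Before turning to the proof of the Feller property, here is a small simulation sketch in the spirit of the Poissonian construction above, specialised to single-edge updates: every unordered pair carries an independent exponential clock and, when a clock rings, that edge alone is resampled. The constant rate, the resampling probability and the empty initial graph are simplifying assumptions made for illustration; they are not the general intensity measure of the theorem.

    import random

    def simulate_edge_flips(n, rate, p_edge, t_max, seed=0):
        """Continuous-time single-edge dynamics on n vertices: superpose the
        per-pair exponential clocks into one clock of total rate
        rate * n(n-1)/2, and at each arrival pick a uniform pair and set its
        edge to 1 with probability p_edge (0 otherwise)."""
        rng = random.Random(seed)
        adj = [[0] * n for _ in range(n)]
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        total_rate = rate * len(pairs)
        t = rng.expovariate(total_rate)
        while t < t_max:
            i, j = rng.choice(pairs)                # the clock that rang
            bit = 1 if rng.random() < p_edge else 0
            adj[i][j] = adj[j][i] = bit
            t += rng.expovariate(total_rate)
        return adj

    G = simulate_edge_flips(n=20, rate=1.0, p_edge=0.3, t_max=5.0)

Restricting such a simulation to the first n vertices and discarding the arrivals that act as the identity there mirrors the thinning argument used above.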
to show the first point in the feller property , we let and be an -rewiring process with initial state and directing measure satisfying ( [ eq : regularity omega ] ) .we define }=g'|_{[n ] } \rightarrow h(g)=h\bigl(g'\bigr)\bigr\}.\ ] ] by ( [ eq : regularity omega ] ) and finiteness of , }_t\rightarrow g|_{[n]} ] in probability as and , therefore , by the bounded convergence theorem .right - continuity at zero for all bounded , continuous follows by the stone weierstrass theorem .for the second point , let have for some and construct and from the same poisson point process but with initial states and . by lipschitz continuity of the rewiring maps ( proposition [ prop : lipschitz ] ) , and can never be more than distance apart , for all .continuity of , for each , follows . by the feller property, any -rewiring process has a cdlg version and its jumps are characterized by an infinitesimal generator . in section [ section: continuous ] , we described the infinitesimal generator through its finite restrictions .ethier and kurtz give an extensive treatment of the general theory of feller processes .we have presented a family of time - varying network models that is markovian , exchangeable , and consistent , natural statistical properties that impose structure without introducing logical pitfalls .external to statistics , exchangeable models are flawed : they produce _ dense _ graphs when conventional wisdom suggests real - world networks are _ sparse_. the erds rnyi model s storied history cautions against dismay . though it replicates little real - world network structure , the erds rnyi model has produced a deluge of insight for graph - theoretic structures and is a paragon of the utility of the probabilistic method . while our discussion is specific to exchangeable processes , the general descriptions in sections [ section : discrete ] and [ section : continuous ] can be used to construct processes that are not exchangeable , and possibly even sparse .the most immediate impact of the rewiring process may be for analyzing information spread on dynamic networks . under the heading of _ finite markov information exchange _( fmie ) processes , aldous recently surveyed interacting particle systems models for social network dynamics .informally , fmie processes model a random spread of information on a network .some of the easiest to describe fmie processes coincide with well - known interacting particle systems , such as the voter and contact processes ; others mimic certain social behaviors , for example , _ fashionista _ and _ compulsive gambler_. simulation is a valuable practical tool for developing intuition about intractable problems .aldous s expository account contains some hard open problems for time - invariant networks .considering these same questions on dynamic networks seems an even greater challenge . despite these barriers ,policymakers and scientists alike desire to understand how trends , epidemics , and other information spread on networks .the poisson point process construction in section [ section : poissonian structure ] could be fruitful for deriving practical answers to these problems .in this section , we prove some technical results from our previous discussion .we now show that is a compact metric space . recall that is equipped with the metric since ^{\mathcal{w}^*} ] . 
by lemma [ lemma : rewiring structure ], every satisfies }=w}\upsilon \bigl(w^*\bigr ) \quad\quad\mbox{for every } w\in \mathcal{w}_n\ ] ] and for all .then , for any ^{\mathcal{w}^*}\setminus\rewiringlimits ] denote the -ball around .now , take any . by this assumption , and so whence , by the triangle inequality, we have }}x^{(n+1 ) } \bigl(w^*\bigr)\biggr{\vert}\\ & \leq&\sum_{w\in\mathcal{w}_n}\bigl{\vert}x^{(n)}(w)-x'^{(n)}(w ) \bigr{\vert}+\sum_{w\in\mathcal{w}_n}\biggl{\vert}\sum _ { w^*:w^*|_{[n]}=w}\bigl(x^{(n+1)}\bigl(w^*\bigr)-x'^{(n+1 ) } \bigl(w^*\bigr)\bigr)\biggr{\vert}\\ & & { } + \sum_{w\in\mathcal{w}_n}\biggl{\vert}x'^{(n)}(w)- \sum_{w^*:w^*|_{[n]}=w}x'^{(n+1)}\bigl(w^ * \bigr)\biggr{\vert}\\ & \leq&\varepsilon_x/4+\sum_{w\in\mathcal{w}_n}\sum _ { w^*:w^*|_{[n]}=w}\bigl{\vert}x^{(n+1)}\bigl(w^ * \bigr)-x'^{(n+1)}\bigl(w^*\bigr)\bigr{\vert}\\ & & { } + \sum _ { w\in\mathcal{w}_n}\biggl{\vert}x'^{(n)}(w)-\sum _ { w^*:w^*|_{[n]}=w}x'^{(n+1)}\bigl(w^*\bigr ) \biggr{\vert}\\ & \leq&\varepsilon_x/4+\varepsilon_x/2+\sum _ { w\in\mathcal { w}_n}\biggl{\vert}x'^{(n)}(w)-\sum _ { w^*:w^*|_{[n]}=w}x'^{(n+1)}\bigl(w^*\bigr ) \biggr{\vert}.\end{aligned}\ ] ] therefore , }=w}x'^{(n+1)}\bigl(w^ * \bigr)\biggr{\vert}\geq\varepsilon_x/4>0,\ ] ] which implies ^{\mathcal{w}^*}\setminus\rewiringlimits ] is open and is closed .since ^{\mathcal{w}^*} ] for which ( i ) and ( ii ) .more precisely , we assume , for each , where are i.i.d .uniform random variables on ] almost surely , for every , . to do this , we first show that is a martingale with respect to its natural filtration , for every .we can then appeal to azuma s inequality and the borel cantelli lemma to show that as .note that \rightarrow[n]}\sum _ { w\in \mathcal{w}_n } e\bigl(\mathbf{1}\{w|_{[n]}=w\ } \mid { w}|_{[k]}\bigr)\mathbf{1}\bigl\{w^{\psi}=v\bigr\}\ ] ] and } ) \mid m_{k , n}\bigr).\ ] ] on the inside , we have } ) \\ & & \quad = e \biggl ( \frac{1}{n^{\downarrow m}}\sum_{\mathrm{injections\ } \psi:[m]\rightarrow[n]}\sum _ { w\in \mathcal{w}_n } \mathbf{1}\bigl\{w^{\psi}=v\bigr\}e\bigl(\mathbf { 1}\bigl\{w|_{[n]}^{\psi}=w\bigr\}\mid w|_{[k]}\bigr ) \bigm| m_{k , n},w|_{[k ] } \biggr ) \\ & & \quad=\frac{1}{n^{\downarrow m}}\sum_{\mathrm{injections\ } \psi : [ m]\rightarrow[n]}\sum _ { w\in \mathcal{w}_n } \mathbf{1}\bigl\{w^{\psi}=v\bigr\}e\bigl ( \mathbf{1}\bigl\{w|_{[n]}^{\psi}=w\bigr\ } \mid w|_{[k ] } \bigr);\end{aligned}\ ] ] whence , })\mid m_{k , n}\bigr ) \\ & & \quad = e \biggl(\frac { 1}{n^{\downarrow m}}\sum _ { \mathrm{injections\ } \psi:[m]\rightarrow [ n]}\sum_{w\in \mathcal{w}_n } \mathbf{1}\bigl\ { w^{\psi}=v\bigr\}e\bigl(\mathbf{1}\bigl\{w|_{[n]}^{\psi}=w \bigr\}\mid w|_{[k]}\bigr ) \bigm| m_{k , n } \biggr ) \\ & & \quad=\frac{1}{n^{\downarrow m}}\sum_{\mathrm{injections\ } \psi :[ m]\rightarrow[n]}\sum _ { w\in \mathcal{w}_n } \mathbf{1}\bigl\{w^{\psi}=v\bigr\}e \bigl(e \bigl(\mathbf{1}\{w|_{[n]}=w\}\mid w|_{[k]}\bigr ) \mid m_{k , n } \bigr ) \\ & & \quad=\frac{1}{n^{\downarrow m}}\sum_{\mathrm{injections\ } \psi : [ m]\rightarrow[n]}\sum _ { w\in \mathcal{w}_n } \mathbf{1}\bigl\{w^{\psi}=v\bigr\}e\bigl ( \mathbf{1}\{w|_{[n]}=w\ } \mid m_{k , n}\bigr ) \\ & & \quad = m_{k , n}.\end{aligned}\ ] ] therefore , is a martingale for every .furthermore , for every , \rightarrow[n]}e\bigl(\mathbf{1}\bigl\{w|_{[n]}^{\psi } = v \bigr\}\mid w|_{[k+1]}\bigr)-e\bigl(\mathbf{1}\bigl\{w|_{[n]}^{\psi}=v \bigr\}\mid w|_{[k]}\bigr)\biggr{\vert}\\ & & \quad\leq\frac{1}{n^{\downarrow m}}\sum_{\mathrm{injections\ } \psi : 
[ m]\rightarrow[n]}\bigl|e\bigl ( \mathbf{1}\bigl\{w|_{[n]}^{\psi}=v\bigr\}\mid w|_{[k+1 ] } \bigr)-e\bigl(\mathbf{1}\bigl\{w|_{[n]}^{\psi}=v\bigr\}\mid w|_{[k]}\bigr)\bigr| \\ & & \quad\leq m(n-1)^{\downarrow(m-1)}/n^{\downarrow m } \\ & & \quad\leq m / n,\end{aligned}\ ] ] since }^{\psi}=v\}\mid w|_{[k+1]})-e(\mathbf { 1}\{w|_{[n]}^{\psi}=v\}\mid w|_{[k]})=0 ] exists with probability one for every .therefore , with probability one , the rewiring limit exists .we have already shown , by the assumption that is dissociated , that is non - random for every ; hence , the limit is non - random .this completes the proof .this work is partially supported by nsf grant dms-1308899 and nsa grant h98230 - 13 - 1 - 0299 . | we introduce the _ exchangeable rewiring process _ for modeling time - varying networks . the process fulfills fundamental mathematical and statistical properties and can be easily constructed from the novel operation of _ random rewiring_. we derive basic properties of the model , including consistency under subsampling , exchangeability , and the feller property . a reversible sub - family related to the erds rnyi model arises as a special case . ./style / arxiv - general.cfg |
the analytic hierarchy process ( ahp ) is a method for ranking alternatives in multi - criteria decision making problems .developed by saaty , it consists of a three layer hierarchical structure : the overall goal is at the top ; the criteria are in the next level ; and the alternatives are in the bottom level .the ahp has been used in many different areas including manufacturing systems , finance , politics , education , business and industry ; for more details on the method , see the monographs by saaty - vargas and vaidya - kumar .the essence of the ahp can be described as follows . given alternatives we construct a _ pairwise comparison matrix _ ( _ pc_-matrix ) , for each criterion , in which indicates the strength of alternative relative to alternative for that criterion .pc_-matrix with the property that for all and for all is called a _ symmetrically reciprocal _ matrix ( _ sr_-matrix ) .( note that this abbreviation might clash with the strongly regular matrices of butkovi , but not in this paper . )sr_-matrix is constructed , the next step in the ahp is to derive a vector of positive weights , which can be used to rank the alternatives , with quantifying the weight of alternative .as observed by elsner and van den driessche , the ideal situation is where , in which case the _ sr_-matrix is _transitive_. in practice , this will rarely be the case and it is necessary to approximate with a transitive matrix , where for some positive weight vector .the problem is then how to construct given .several approaches have been proposed including saaty s suggestion to take to be the perron vector of , or the approach of farkas et al . , which chooses to minimise the euclidean error . elsner and van dendriessche suggested selecting to be the max algebraic eigenvector of .this is similar in spirit to saaty s approach and also generates a transitive matrix that minimises the maximal relative error . as noted in , minimising this functional is equivalent to minimising the different approaches to approximating an _sr_-matrix with a transitive matrix will in general produce different rankings of the alternatives .the question of how these rankings are affected by the choice of scheme is considered in the recent paper of ngoc . in the classical ahp involving multiple criteria , a set of _sr_-matrices is constructed : one for each criterion .one additional _sr_-matrix is constructed based on comparisons of the different criteria .once weight vectors are obtained for each individual criterion , these are then combined using the entries of the weight vector for the criteria - comparison matrix . as an illustration, we take the following numerical example from saaty and show how the perron vectors of the comparison matrices are used to construct a weight vector .[ ex : saaty ] the problem considered is deciding where to go for a one week vacation among the alternatives : 1 .short trips , 2 .quebec , 3 .denver , 4 .five criteria are considered : 1 .cost of the trip , 2 .sight - seeing opportunities , 3 .entertainment , 4 .means of travel and 5 . dining .pc_-matrix for the criteria and its perron vector are given by \quad \text{and}\quad c=\left [ \begin{array}{c } 0.179 \\ 0.239 \\0.431 \\ 0.818 \\ 0.237 \\ \end{array } \right ] .\ ] ] the above matrix describes the pairwise comparisons between the different _criteria_. 
for instance , as , criterion 2 is rated more important than criterion 1 ; indicates that criterion 3 is rated more important than criterion 2 and so on .the vector contains the weights of the criteria ; in this method , criterion 4 is given most weight , followed by criterion 3 and so on .the _ sr_-matrices , , for each of the 5 criteria , their perron vectors and corresponding ranking schemes are given below .for instance , for criterion 1 , the first alternative is preferred to the second as the entry of is .similarly , for criterion 3 , the 4th alternative is preferred to the 1st as the entry of is .for the cost of the trip : , \quad v^{(1)}=\left [ \begin{array}{c } 0.877 \\ 0.46 \\0.123 \\ 0.064 \\ \end{array } \right ] , \quad 1>2>3>4\ ] ] for the sight - seeing opportunities : , \quad v^{(2)}=\left [ \begin{array}{c } 0.091 \\ 0.748 \\ 0.628 \\ 0.196 \\ \end{array } \right ] , \quad 2>3>4>1\ ] ] for the entertainment : , \quad v^{(3)}=\left [ \begin{array}{c } 0.57 \\ 0.096 \\ 0.096 \\ 0.81 \\ \end{array } \right ] , \quad 4>1>2=3\ ] ] for the means of travel : , \quad v^{(4)}=\left [ \begin{array}{c } 0.396 \\ 0.355 \\ 0.768 \\ 0.357 \\ \end{array } \right ] , \quad 3>1>4>2\ ] ] for the dining : , \quad v^{(5)}=\left [ \begin{array}{c } 0.723 \\ 0.642 \\0.088 \\ 0.242 \\ \end{array } \right ] , \quad 1>2>4>3\ ] ] to obtain the overall weight vector , we compute the weighted sum .this gives \ ] ] with the associated ranking : .our work here is inspired by the max - algebraic approach to the ahp introduced by elsner and van den driessche and extends it in the following manner . in , the max eigenvector is used as a weight vector for a _ single criterion _ and it is shown to be optimal in the sense of minimising the maximal relative error as discussed above .this work naturally raises the question of how to treat multiple criteria within the max - algebraic framework .we address this question here by considering the multi - criteria ahp as a multi - objective optimisation problem , in which we have an objective function of the form ( [ eq : errf ] ) for each criterion ( and associated _sr_-matrix ) . rather than combining individual weight vectors as in example [ ex : saaty ] , we consider three approaches within the framework of multi - objective optimisation , and use the optimal solution as a weight vector in each case .the advantage of this approach is that the weight vector can be interpreted in terms of the maximal relative error functions ( [ eq : errf ] ) associated with the _ sr_-matrices given as data for the problem .the optimisation problems we consider are the following .first , we investigate the existence of a single _ transitive _ matrix with a minimum distance to all matrices in the set simultaneously .we remark that this amounts to finding a common subeigenvector of the given matrices .clearly , this will not in general be possible .the second problem we consider is to obtain a transitive matrix that minimises the maximal distance to any of the given _ sr_-matrices .the third problem concerns the existence of a transitive matrix that is pareto optimal for the given set of matrices . to illustrate our results , we revisit example [ ex : saaty ] towards the end of the paperthe set of all nonnegative real numbers is denoted by ; the set of all -tuples of nonnegative real numbers is denoted by and the set of all matrices with nonnegative real entries is denoted by .we denote the set of all -tuples of positive real numbers by . 
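For later comparison with the max-algebraic approach, the following numpy sketch reproduces the classical aggregation used in the vacation example: a Perron vector is extracted from each pairwise comparison matrix and the local weight vectors are combined using the Perron weights of the criteria matrix. The two-criterion, three-alternative data below are an illustrative toy, not the matrices of the example, and normalising to unit Euclidean length is our own choice.

    import numpy as np

    def perron_vector(A):
        """Positive eigenvector of the entrywise-positive matrix A associated
        with its Perron root, normalised to unit Euclidean length."""
        vals, vecs = np.linalg.eig(A)
        v = np.abs(vecs[:, np.argmax(vals.real)].real)
        return v / np.linalg.norm(v)

    # toy data: criteria comparison matrix C and one SR-matrix per criterion
    C  = np.array([[1.0, 3.0], [1 / 3, 1.0]])
    A1 = np.array([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
    A2 = np.array([[1.0, 0.5, 1 / 3], [2.0, 1.0, 0.5], [3.0, 2.0, 1.0]])

    c = perron_vector(C)                              # weights of the criteria
    V = np.column_stack([perron_vector(A) for A in (A1, A2)])
    overall = V @ c                                   # weighted sum of local weights
    ranking = np.argsort(-overall) + 1                # alternatives, best first
    print(overall, ranking)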
for and , refers to the entry of .the matrix ] is a common subeigenvector .however , it can be readily verified that .in general , it will not be possible to find a single vector that is globally optimal for the set of _ sr_-matrices given by ( [ eq : psi ] ) . with this in mind ,in this short section we consider a different notion of optimal solution for the multiple objective functions , .in fact , we consider the following optimisation problem . in words , we are seeking a weight vector that minimises the maximal relative error where the maximum is taken over the criteria ( _ sr_-matrices ) .corollary [ cor : cprprop ] has the following interpretation in terms of the optimisation problem given in ( [ eq : opt2 ] ) .[ pro : opt2 ] consider the set given by ( [ eq : psi ] ) . then : * ; * solves ( [ eq : opt2 ] ) if and only if .corollary [ cor : cprprop ] shows that there exists some with if and only if .( i ) follows from this observation .the result of ( ii ) is then immediate from the definition of .0.2 cm thus far , we have considered two different approaches to the multi - objective optimisation problem associated with the ahp . in this section we turn our attention to what is arguably the most common framework adopted in multi - objective optimisation : pareto optimality . asabove , we are concerned with the existence of optimal points for the set of objective functions , for associated with the set ( [ eq : psi ] ) of _ sr_-matrices .we first recall the notion of _ weak pareto optimality_. [ def : wpop ] is said to be a _ weak pareto optimal point _ for the functions if there does not exist such that for all .the next lemma shows that every point in the set is a weak pareto optimal point for .[ lem : wpop ] let be given by ( [ eq : psi ] ) . any is a weak pareto optimal point for .let be given .then for .if there exists some such that for , then for this for .this contradicts proposition [ pro : opt2 ] .we next recall the usual definition of a pareto optimal point .[ def : pop ] is said to be a _pareto optimal point _ for the functions if for implies for all .we later show that the multi - objective optimisation problem associated with the ahp always admits a pareto optimal point .we first present some simple facts concerning such points . [ thm : pops ] let be given by ( [ eq : psi ] ) .then : * if is unique up to a scalar multiple , then it is a pareto optimal point for ; * if is unique up to a scalar multiple for some , then it is a pareto optimal point for .observe that both conditions imply .\(i ) assume that is unique up to a scalar multiple .pick some such that for all . then , for all which implies that .thus , for some .hence , for all and is a pareto optimal point .\(ii ) assume that for some , is unique up to a scalar multiple .suppose is such that for all .in particular , , and this implies that for some .further , it is immediate that for any other ( ) , we have .thus , is a pareto optimal point . by proposition [ pro : unique ] ,condition ( i ) is equivalent to and to be irreducible , and condition ( ii ) is equivalent to and to be irreducible for some .[ cor : pops ] let the set given by ( [ eq : psi ] ) consist of _ sr_-matrices . 
for ,any is a pareto optimal point for .notice that from remark [ rem:3by3set ] for case , there exists a unique subeigenvector up to a scalar multiple in each for .this is also true for the case because and is irreducible .the result directly follows from ( ii ) in theorem [ thm : pops ] .the following example demonstrates point ( i ) in theorem [ thm : pops ] .[ ex : popsdpu ] consider the following matrices given by b=\left [ \begin{array}{cccc } 1 & 1/2 & 4 & 1/8 \\ 2 & 1 & 3 & 2 \\ 1/4 & 1/3 & 1 & 5 \\ 8 & 1/2 & 1/5 & 1 \\ \end{array } \right ] .\ ] ] matrix is obtained as follows \ ] ] where and is irreducible .from proposition [ pro : unique ] , we have a unique vector ( up to a scalar multiple ) in : ^t$ ] .figure [ fig : ex1 ] below represents the values of and at and some points in , and .remark that pareto optimality is observed at where ., scaledwidth=80.0% ] our objective in the remainder of this section is to show that the multi - objective optimisation problem associated with the ahp _ always _ admits a pareto optimal solution .we first recall the following general result giving a sufficient condition for the existence of a pareto optimal point with respect to a set . essentially , this is a direct application of the fact that a continuous function on a compact set always attains its minimum .[ thm : existence ] let be nonempty and compact .let a set of continuous functions be given where for .there exists such that , for implies for .this result follows from elementary real analysis and the observation that if minimises the ( continuous ) weighted sum where for , then must be pareto optimal for the functions . a point satisfying the conclusion of the theorem [ thm : existence ] is said to be pareto optimal for _ with respect to _ .thus , for any multi - objective optimisation problem with continuous objective functions defined on a compact set , there exists a point that is pareto optimal with respect to the given set . to apply the above result to the ahp, we first note that for a set of _sr_-matrices , is compact by theorem [ thm : geop ] ( recall that _ sr_-matrices are positive and hence irreducible ) .we now show that any point in that is pareto optimal with respect to is also pareto optimal with respect to the set .although we do nt specifically use the _ sr _ property in the following , we assume to consist of _ sr_-matrices as we are primarily interested in the ahp application . instead , we could just assume that are irreducible implying that is compact and a pareto optimal point exists .[ lem : existence ] consider the set in ( [ eq : psi ] ) and assume that is an _sr_-matrix for .let be a pareto optimal point for with respect to . then , is also pareto optimal for with respect to .assume that is a pareto optimal point with respect to .suppose .then from the definition of ( [ eq : cpr ] ) , it follows that as , for .it follows immediately from ( [ eq : par1 ] ) that for any , it can not happen that for .let in be such that as for all , , and is pareto optimal with respect to , it follows that for . 
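In the spirit of the weighted-sum argument behind the compactness theorem above, a point that is Pareto optimal for the error functions can be computed numerically by minimising a strictly positive combination of the maximal relative errors, here taken in the standard form: the maximum over i, j of a_ij w_j / w_i. The sketch below uses scipy and optimises over log-weights so that positivity is automatic; the two SR-matrices, the equal combination weights and the choice of Nelder-Mead are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def max_relative_error(A, w):
        """max_{i,j} a_ij * w_j / w_i for a positive weight vector w."""
        W = np.outer(1.0 / w, w)                  # W[i, j] = w_j / w_i
        return float(np.max(A * W))

    # two illustrative 3x3 SR-matrices (a_ji = 1 / a_ij)
    A1 = np.array([[1.0, 2.0, 6.0], [0.5, 1.0, 4.0], [1 / 6, 0.25, 1.0]])
    A2 = np.array([[1.0, 0.5, 2.0], [2.0, 1.0, 5.0], [0.5, 0.2, 1.0]])
    mats, lam = [A1, A2], [0.5, 0.5]              # strictly positive weights

    def objective(x):                             # optimise over x = log(w)
        w = np.exp(x)
        return sum(l * max_relative_error(A, w) for l, A in zip(lam, mats))

    res = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
    w_opt = np.exp(res.x)
    print(w_opt / w_opt.sum(), [max_relative_error(A, w_opt) for A in mats])

Any minimiser of such a strictly positive combination is Pareto optimal for the error functions, which is exactly the mechanism used in the existence argument above.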
our next step is to show that there exists a point that is pareto optimal with respect to .[ pro : existence ] consider the set in ( [ eq : psi ] ) and assume that is an _sr_-matrix for .there exists that is pareto optimal for with respect to .first note that since .theorem [ thm : geop ] shows that is compact .furthermore , for any irreducible matrix , the function is a composition of continuous functions and hence continuous on a compact set .theorem [ thm : existence ] implies that there exists in that is pareto optimal with respect to .combining proposition [ pro : existence ] with lemma [ lem : existence ] , we immediately obtain the following result .[ cor : existence ] consider the set in ( [ eq : psi ] ) and assume that is an _sr_-matrix for .there exists that is pareto optimal for with respect to .corollary [ cor : existence ] means that there exists a vector of positive weights that is simultaneously pareto optimal and also optimal in the _ min - max _ sense of section [ sec : minmax ] for the error functions , . finally , to illustrate the above results , we revisit example [ ex : saaty ] . [ ex : saaty2 ] let be as in example [ ex : saaty ] .taking , to be the entry of the max eigenvector of , normalised so that , we apply theorem [ thm : existence ] to compute pareto optimal solutions in the set by minimising the weighted sum using the matlab function _ fminsearch_. observe that , so there is no common subeigenvector in this case .next , we calculate the max eigenvector of \ ] ] we find that there are multiple pareto optimal points giving at least two possible distinct rankings : and .notice that first ranking scheme is the same as the one obtained from the classical method used in example [ ex : saaty ] .the second ranking scheme is also reasonable , since if we analyse the local rankings associated with the set of _ sr_-matrices in detail , we see that for , and .in particular , is preferred to all other alternatives for .building on the work of elsner and van den driessche , we have considered a max - algebraic approach to the multi - criteria ahp within the framework of multi - objective optimisation .papers characterise the max eigenvectors and subeigenvectors of a _ single _ _ sr_-matrix as solving an optimisation problem with a single objective .we have extended this work to the multi - criteria ahp by directly considering several natural extensions of this basic optimisation problem to the multiple objective case .specifically , we have presented results concerning the existence of : globally optimal solutions ; min - max optimal solutions ; pareto optimal solutions .the principal contribution of the paper is to draw attention to this max - algebraic perspective on the multi - criteria ahp , with the main results in this direction being : establishing the connection between the generalised spectral radius and min - max optimal solutions ( proposition [ pro : opt2 ] ) ; proving the existence of pareto optimal solutions and showing that it is possible to simultaneously solve the pareto and min - max optimisation problems ( proposition [ pro : existence ] and corollary [ cor : existence ] ) .we have also related the existence of globally optimal solutions to the existence of common subeigenvectors and highlighted connections between this question and commutativity ( theorem [ thm:3b3cond ] ) .buket benek gursoy and oliver mason acknowledge the support of irish higher educational authority ( hea ) prtli network mathematics grant ; serge sergeev is supported by epsrc grant 
rrah15735 , rfbr - cnrs grant 11 - 01 - 93106 and rfbr grant 12 - 01 - 00886 .b. heidergott , g. j. olsder , j. van der wounde , max plus at work : modeling and analysis of synchronized systems , a course on max - plus algebra and its applications , princeton university press , princeton , nj ( 2006 ) .d. hershkowitz , h. schneider , one - sided simultaneous inequalities and sandwich theorems for diagonal similarity and diagonal equivalence of nonnegative matrices , elec .j. of linear algebra 10 ( 2003 ) 81 - 101 .krivulin , evaluation of bounds on the mean rate of growth of the state vector of a linear dynamical stochastic system in idempotent algebra , vestnik st .petersburg university : mathematics 38 ( 2005 ) 45 - 54 . | the analytic hierarchy process ( ahp ) is widely used for decision making involving multiple criteria . elsner and van den driessche introduced a max - algebraic approach to the single criterion ahp . we extend this to the multi - criteria ahp , by considering multi - objective generalisations of the single objective optimisation problem solved in these earlier papers . we relate the existence of globally optimal solutions to the commutativity properties of the associated matrices ; we relate min - max optimal solutions to the generalised spectral radius ; and we prove that pareto optimal solutions are guaranteed to exist . + _ keywords : _ analytic hierarchy process ( ahp ) , _ sr_-matrix , max algebra , subeigenvector , generalised spectral radius , multi - objective optimization . + _ ams codes : _ 91b06 , 15a80 , 90c29 |
convection - diffusion equations oftentimes arise in mathematical models for fluid dynamics , environmental modeling , petroleum reservoir simulation , and other applications . among them ,the most challenging case for numerical simulation is the convection - dominated problems ( when diffusion effect is very small compared with the convection effect ) .dominated convection phenomena could appear in many real world problems ; for example , convective heat transport with large pclet numbers , simulation of oil extraction from underground reservoirs , reactive solute transport in porous media , etc .the solutions of these problems usually have sharp moving fronts and complex structures ; their nearly hyperbolic nature presents serious mathematical and numerical difficulties .classical numerical methods , developed for diffusion - dominated problems , suffer from spurious oscillations for convection - dominated problems .many innovative ideas , like upwinding , method of characteristics , and local discontinuous galerkin methods , have been introduced to handle these numerical difficulties efficiently ; see , for example , and references therein . for problems with nearly hyperbolic nature ,it is nature to explore the idea of the so - called method of characteristics ; and , this idea has been combined with different spatial discretizations like finite difference ( fd ) , finite element ( fe ) , and finite volume ( fv ) methods . along this line of research ,the semi - lagrangian method ( or , in the finite element context , the eulerian lagrangian method ) treats the convection and capacity terms together to carry out the temporal discretization in the lagrangian coordinate .the eulerian lagrangian method ( elm ) gives rise to symmetric discrete linear systems , stabilizes the numerical approximation , and the corresponding diffusion problems are solved on a fixed mesh .this method and its variants have been successfully applied not only on the linear convection - diffusion problem , but also the incompressible naiver - stokes equations and viscoelastic flow problems ; see , for example , . adaptive mesh refinement ( amr ) for partial differential equations ( pdes )has been the object of intense study for more than three decades .amr techniques have been proved to be successful to deal with multiscale phenomena and to reduce the computation work without losing accuracy when solution is not smooth . in general , the adaptive algorithm for static problems generates graded meshes and iterations in the following form : in the estimate procedure , we usually employ some computable local indicators to estimate the local error of the approximate solution we obtain from the solve procedure .these indicators only depend on the datum and/or the approximate solution , and show in which part(s ) of the domain the local error is relatively too big or too small in the current mesh .we then mark these subdomains and refine or coarsen them accordingly .local error indicators determine whether the whole adaptive procedure is effective or not .therefore , a reliable and efficient error indicator is the key to a successful amr method and a posteriori error analysis is often used to obtain such an error indicator in practice . in the context of finite element methods , the theory of a posteriori error analysis and adaptive algorithms for linear elliptic problem is now rather mature . 
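The solve-estimate-mark-refine iteration sketched above can be written as a short generic driver. The fragment below is purely schematic: the four callables stand in for a problem-specific discrete solver, a local error estimator, a marking strategy and a mesh refiner, none of which are specified at this point.

    def adaptive_loop(mesh, solve, estimate, mark, refine, tol, max_iter=50):
        """Generic solve -> estimate -> mark -> refine driver.  solve(mesh)
        returns a discrete solution, estimate(mesh, u) returns per-element
        indicators, mark(mesh, eta) selects elements to refine or coarsen,
        and refine(mesh, marked) returns the adapted mesh."""
        u = solve(mesh)
        for _ in range(max_iter):
            eta = estimate(mesh, u)
            if sum(e ** 2 for e in eta) ** 0.5 < tol:
                break
            marked = mark(mesh, eta)
            mesh = refine(mesh, marked)
            u = solve(mesh)
        return mesh, u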
convergence and optimality of adaptive methods for linear elliptic problemshave been proved as the outcome of a sequence of work ; see the recent review by nochetto , siebert , and veeser and references therein . on the other hand , for the nonlinear and time - dependent problems ,the theory is still far from satisfactory .a posteriori error analysis for nonlinear evolution problems is even more challenging .adaptivity time - stepping is very important for time dependent problems because the practical problems sometimes have singularities or are multiscale in time .uniform time step size can not capture these phenomena .there are considerable amount of work in the literature devoted to the development of efficient adaptive algorithms for evolution problems .a posteriori error estimators for linear parabolic problems are studied in and are also derived for nonlinear problems ; see for example .there have been also some efforts for extending a posteriori error analysis for the time - dependent stokes as well as the navier - stokes equations . in particular ,a posteriori error estimates for convection - diffusion problems have been discussed in .it is nature to employ arm techniques for convection - dominated problems because of the complex structures of the solutions and evolution of these structures in time .we also notice that spatial mesh adaptivity plays an important role in elm to reduce numerical oscillations and smearing effect when inexact numerical integrations are employed .early adoption of adaptive characteristic methods has been seen since late 1980 s .a posteriori error estimates for characteristic - galerkin method for convection - dominated problems have been proposed : demokowicz and oden considered the method of characteristics combined with petrov - galerkin finite element method for spatial discretization .houston and sli give an a posteriori error estimator in -norm for the linear time - dependent convection - diffusion problem using the duality argument . chen and ji give sharp a posteriori error estimations of elm in -norm for linear and nonlinear convection - diffusion problems , respectively . a related a posteriori error bound can be found in chen , nochetto , and schmidt for the continuous casting problem ( convection - dominated and nonlinearly degenerated ) . in the previous error estimators mentioned above , the time residual , which measures the difference between the solutions of two successive time steps ,is employed as a local indicator for temporal error . on the other hand ,it is observed that elm is essentially transforming a convection - diffusion problem to a parabolic problem along the characteristics . in this paper, we consider a posteriori error estimators for elm , with focus on temporal error estimators .motivated by nochetto , savar , and verdi , in which adaptive time - stepping scheme for an abstract evolution equation in hilbert spaces is analyzed , we can obtain an a posteriori error bound for the temporal error along the characteristics .combined with space adaptivity , adaptive method has been designed and implemented .our numerical experiments in section [ sec : numer ] suggest that the new a posteriori error estimator is effective .moreover , the numerical results also indicate that the proposed error estimators are very efficient and can take advantage of the fact that elm allows larger time stepsize .the outline of this paper is as follows . 
in section [ sec :elm ] , we introduce a model problem and its eulerian lagrangian discretization . in section [ sec : time ] , we discuss temporal a posteriori and a priori error analysis for the linear convection - diffusion problem . in section [ sec : full ] , we present a posteriori error estimation for the full - discretization . to simplify the discussion ,we only consider a linear convection - diffusion model problem and restrict ourselves to the standard residual - type estimator for the spatial error , although the technique discussed in this paper can be potentially extended to other problems and spatial error estimators . in section [ sec: numer ] , we consider the implementation of our adaptive algorithm and present some numerical experiments . throughout this paper , we will use the following notation .the symbol denotes the space of all square integrable functions and its norm is denoted by .let be the standard sobolev space of the scalar function whose weak derivatives up to order are square integrable , and , let and denote the standard sobolev norm and its corresponding seminorm on , respectively . furthermore , and denote the norm and the semi - norm restricted to the domain , respectively .we also use the notation for the functions that belong to and their trace vanish on .the dual space of is denoted by .we use the notation to denote the existence of a generic constant , which depends only on , such that .in this paper , we consider the following linear convection - diffusion model problem \ ] ] with the initial condition and the boundary condition .\ ] ] here ( ) is bounded polygonal domain .we assume that is divergence free and vanishes on for ] ( ) has the same original labeling .assume that and .\ ] ] we discretize the material derivative in using the backward euler method as follows here , , , and .it is easy to see that is a function of in the lagrangian coordinate .we usually refer to as the temporal semi - discretization scheme . our a posteriori error estimate base on the assumption that the characteristics are solved exactly , which preserves the determinant of the jacobian of the flow .in the numerical analysis , it is difficult to do that and it was apparent that a computational realization of preserving the determinant of the jacobian for the flow map to be one was crucial .therefore , we discuss how to integrate the following ordinary differential equation for the computation of the characteristic feet .the numerical scheme we discuss here has second order accuracy and preserves the determinant of the jacobian of the flow . where the equation ( [ eq : source ] ) is often called the source - free dynamical systems due to ( [ div : free ] ) .for such a system , the solution is often called the flow map or phase flow and it has the following property . we shall begin with introducing some popular scheme to compute ( [ eq : source ] ) and showing that the scheme is indeed volume - preserving scheme for but not for .we will then introduce some volume preserving scheme to solve ( [ eq : source ] ) for , which is due to feng and shang . 
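Before the characteristic solvers are discussed, the following one-dimensional sketch fixes ideas about a single ELM step: every grid point is traced back along the characteristic over one time step, the previous solution is interpolated at the foot, and a backward-Euler diffusion solve is applied to the interpolated values. The uniform periodic grid, constant velocity and diffusion coefficients, and linear interpolation are simplifying assumptions made for illustration; this is not the discretisation analysed in the paper.

    import numpy as np

    def elm_step_1d(u, b, a, dx, dt):
        """One Eulerian-Lagrangian step for u_t + b u_x - a u_xx = 0 on a
        periodic uniform grid: trace back to x - b*dt, linearly interpolate
        the old solution there, then apply backward Euler to the diffusion."""
        n = len(u)
        x = np.arange(n) * dx
        feet = (x - b * dt) % (n * dx)              # characteristic feet
        k = np.floor(feet / dx).astype(int) % n     # left neighbours of the feet
        theta = feet / dx - np.floor(feet / dx)
        u_star = (1 - theta) * u[k] + theta * u[(k + 1) % n]
        r = a * dt / dx ** 2                        # (I - dt*a*D2) u_new = u_star
        D2 = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        D2[0, -1] = D2[-1, 0] = 1                   # periodic wrap-around
        return np.linalg.solve(np.eye(n) - r * D2, u_star)

    # toy usage: advect and diffuse a Gaussian bump
    n, dx, dt = 100, 0.01, 0.002
    u = np.exp(-((np.arange(n) * dx - 0.3) ** 2) / 0.002)
    for _ in range(50):
        u = elm_step_1d(u, b=1.0, a=1e-3, dx=dx, dt=dt)

Because the convection is handled by the trace-back, the remaining linear system is symmetric and the time step is not restricted by a convection CFL condition, which is the practical appeal of the ELM noted above.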
in literatures , the following second order numerical scheme for solving ( [ eq : source ] )is popular and it seems to first appear in .first , we integrate the equation ( [ eq : source ] ) using the mid - point rule to obtain : where .second , the right hand side is approximated by a second order accurate extrapolation .namely , the following approximation shall also be used : for notational conveniences , let us denote and hence we have the following implicit approximations : to see that the mid - point rule ( [ mid : rule ] ) results in the volume - preserving scheme , let us take the derivative with respect to for both sides of ( [ mid : rule ] ) .we then obtain the following : where and we solve ( [ eq : mid ] ) for and obtain that under the assumption that some appropriate finite element space for the velocity field is used so that the divergence free condition of the velocity field is imposed in the discrete sense , we have from ( [ eq : tr ] ) , we conclude that identically .the main reason why such an algorithm is popular seems that it has the volume preserving property . on the other hand , it is easy to see that the algorithm may not result in the volume - preserving scheme for .note that for , under the assumption that , for given as follows , we have that our purpose here is that by reviewing the volume - preserving scheme in three dimension developed by feng and shang in , we wish to make sure such a volume preserving scheme can be devised in three dimension and confirm our numerical scheme can be implemented .as far as the author is concerned , such a special algorithmic detail has not been implemented in the context of the semi - lagrangian scheme .the basic idea of constructing the volume preserving scheme for is based upon the following observation .following the idea of h. weyl , we have : where the actual expressions for , , and as follows : from this , it is easy to see that ( [ hw : decomp ] ) holds and by construction and given as follows are divergence free : let us now denote by the volume preserving scheme for with , then the following composition is trivially volume preserving : moreover , assuming is of second order accurate , it is easy to see that the composition ( [ com : vol ] ) is of second order .the above idea on the composition is from feng and shang , . discretizing with suitable spatial discretizations , we obtain fully discrete numerical schemes . in this paper , we focus on the finite element method .first , we define the weak forms of and as usual .we define the bilinear form as follow we denote the potential by and its frechet derivative by , i.e. , it is easy to see that is convex and satisfies the following inequality in fact , it is well - known that we have the following identity , where . furthermore , by taking the test function as in and applying , we obtain that the weak forms of ( [ eqn : sl_model ] ) and ( [ eqn : td_sl_model ] ) can be written as and respectively . on a shape - regular triangulation of , we introduce the piecewise continuous linear finite element space such that let and denote the mesh and the finite element space at time respectively .then the fully discrete scheme can be written as : suppose is known , find such that where is some suitable approximation of .in this section , we focus on the a posteriori error estimation for temporal semi - discretization ( [ eqn : td_sl_model ] ) . 
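A compact realisation of the characteristic-foot computation described earlier in this section: the implicit midpoint relation is combined with a second-order extrapolation of the velocity to the half time level and solved by a few fixed-point sweeps. The extrapolation coefficients 3/2 and -1/2 and the use of plain fixed-point iteration are standard choices inserted here for illustration, since the nonlinear solver is not spelled out; the rotating test field at the end is a toy divergence-free example.

    import numpy as np

    def characteristic_foot(x, b_n, b_nm1, dt, iters=3):
        """Approximate foot of the characteristic through the point x at the
        new time level, using the midpoint rule with the velocity at the half
        time level extrapolated as 1.5*b_n - 0.5*b_nm1.  The implicit relation
        X = x - dt * b_half((x + X) / 2) is solved by fixed-point iteration."""
        def b_half(y):
            return 1.5 * b_n(y) - 0.5 * b_nm1(y)
        X = x - dt * b_half(x)                     # explicit Euler initial guess
        for _ in range(iters):
            X = x - dt * b_half(0.5 * (x + X))
        return X

    # toy usage: a divergence-free rotating field b(x, y) = (-y, x)
    def rotation(p):
        return np.array([-p[1], p[0]])

    foot = characteristic_foot(np.array([1.0, 0.0]), rotation, rotation, dt=0.1)
    print(foot)                                    # close to (cos 0.1, -sin 0.1)

In two dimensions this midpoint update preserves the determinant of the Jacobian of the discrete flow, as shown above; in three dimensions one would instead compose analogous volume-preserving substeps following the Feng-Shang splitting.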
for the sake of simplicity , we assume that in this section .different from the standard time error indicators for elm , which measure the solution difference along the time direction , our new error indicator measures the difference along the characteristics .we will establish a posteriori error estimation based on the new time error indicator and show its efficiency .we will also show an optimal priori error bound as a byproduct . hereoptimality does not only mean the optimal convergence rate which can also be achieved by classic error analysis , but also mean the optimal regularity requirement which have not been proved by standard a priori error analysis .we first show that the solution of satisfies an energy identity along characteristics .in fact , by taking inner product with on both side of and applying the following identity we immediately obtain that we can see that the convection diffusion equation preserve the total energy along characteristics in continuous level . on the other hand , by choosing the test function in and employing , we have an discrete energy inequality : for any integer , we note that the equality holds in if there is no temporal discretization error .this motivates the following definition : and we can view the computable quantity as a measure of the deviation of numerical solution from satisfying the energy conservation .we can use as a time error indicator for adaptive time stepping in elm . in order to give an a posteriori error bound for our new time error indicator, we construct the following linear interpolation where since the original labeling of the characteristics does not depend on time , we have .\ ] ] substituting back to and then applying , we have by adding and subtracting , for any , we have where is the residual ( which does not depend on the choice of test function ) : we notice that is a convexity functional , i.e. , .\ ] ] hence , can be bounded by by choosing in , and in , adding these two equations , and using , we end up with the following inequality .\ ] ] then the following upper bound of the new time error indicator holds : [ thm : td_upper ] let be the exact solution of and be the time semi - discrete solution in .assume that is defined as .for any integer the folltowing upper bound holds : integration in time , we can obtain since is divergence free , which implies , we have , after changing of variables , that then follows directly from summing up above inequality from to . traditional a priori error analysis for elm treats the temporal semi - discretization as a finite difference method and derive the error estimation based on the taylor expansion ( see ) . as a result, we obtain an optimal convergence rate but the regularity requirement is suboptimal . taking advantage of the posteriori error estimation of new time error indicator , we can derive an optimal order priori error estimation with minimal regularity requirement on the datum and the solution . from the definition of the new a posteriori error estimator , and , we have by the definition of and , we have substitute back into , we have set and use the elementary inequality , we obtain now using the assumption that is divergence free and summing up from to , we obtain the following optimal a priori error estimation with minimal regularity requirement : let be the solution of with initial value , and is the time semi - discrete numerical solution of the temporal semi - discretization . 
for any integer , we have the following priori error estimate only on initial guess .in this section , a posteriori error estimation for the fully - discrete scheme is considered .the characteristic feet is defined by and is computed exactly . besides the new time error indicator , residual type error estimator is used as a spatial error indicator .let be the _ interior residual _, i.e. , and be the _ jump residual _ also defined , i.e. , where is the edge / face shared by and , is the unit normal vector of from to . as the analysis of time semi - discretization, we define the linear interpolation of as following similar with , we have now , the following a posteriori error estimation holds [ thm : fd_upper ] let and be the solutions of and respectively .for any integer , there exists a constant which does not depend on the mesh size and the time step size , such that the following a posteriori error bound holds , where the temporal error indicator and the spatial error indicator are defined as and for any and , we have let in and in , we have inserting , we have let and in , we have then let and where is the so called clement interpolation operator. moreover , let in and add to above inequality , we can obtain that then by the definitions , and , we have \mathrm{d}s \\ & + ( f - f_h^n , u- u_h )\\ \le & ( t - t_{n } ) \xi_{n } + c \eta_{n}^{1/2 } |\!|\!| u - u_{h}^{n } |\!|\!|+ |\!| f - f_h^n |\!|_{-a } |\!|\!| u- u_h |\!|\!|\\ \le & ( t - t_{n } ) \xi_{n } + c \eta_{n } + \frac{1}{4 } |\!|\!| u - u_{h}^{n } |\!|\!|^{2 } + \frac{1}{2}|\!|f - f^n_h|\!|^2_{-a } + \frac{1}{2 } |\!|\!|u - u_h|\!|\!|^2.\end{aligned}\ ] ] here in last two inequalities , we use the interpolation property of the clement interpolation , the cauchy schwartz inequality , and the young s inequality .the norm is defined by .now , we have for any ] , and we give the dirichlet boundary condition .[ exp : shock-1d ] given the initial condition and , the exact solution for the model problem ( if is constant ) is given by where is the so - called complementary error function .the computational domain is ] in .the following two tests are generalizations of the one - dimensional problem , problem [ exp : shock-1d ] .[ exp : shock1 - 2d ] given the velocity field , and the initial condition and . then the exact solution is the computational domain is \times[0,1] ] in .( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) .left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=32.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=32.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) . left : time step size comparison ; middle : numerical solution ; right : adaptive mesh.,title="fig:",scaledwidth=33.0%,height=192 ] ( ) . 
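the time-step comparisons referenced in the figures below come down to one observation: for a solution that is essentially transported, the change measured at a fixed point is large while the change measured along characteristics is small. a short numerical illustration follows; the travelling-front profile, speed and step size are made-up toy values, not the data of examples [ exp : shock-1d ] through [ exp : shock2 - 2d ].

```python
import math

a, dt, t0 = 1.0, 0.2, 0.0
u = lambda x, t: math.tanh((x - a * t) / 0.05)             # steep travelling front

xs = [-1.0 + i * 0.005 for i in range(401)]
eulerian   = max(abs(u(x, t0 + dt) - u(x, t0))          for x in xs)   # fixed-x difference
lagrangian = max(abs(u(x, t0 + dt) - u(x - a * dt, t0)) for x in xs)   # along characteristics
print(eulerian, lagrangian)                                 # ~1.9 versus 0.0
```

a standard indicator built on the fixed-x difference would force a small time step merely because the front moves, whereas the characteristic-based indicator introduced above only reacts to genuine changes of the profile in the lagrangian frame.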
figures [ fig : peak-2d ] , [ fig : shock1 - 2d ] and [ fig : shock2 - 2d ] show the numerical results for examples [ exp : peak-2d ] , [ exp : shock1 - 2d ] and [ exp : shock2 - 2d ] respectively ( each figure shows the time step size comparison on the left , the numerical solution in the middle , and the adaptive mesh on the right ) . as shown in the pictures , the new time error indicator allows a larger time step size and the space error indicator captures the singularity . overall , the adaptive finite element method based on our error estimate is effective and reliable for convection dominated diffusion problems .
in this paper , we discuss the adaptive elm for linear convection - diffusion equations . we * derive a new temporal error indicator along the characteristics , and show the optimal convergence rate of the temporal semi - discretization with minimal regularity of the exact solution ; * combine the new temporal error indicator with a residual - type spatial error indicator and obtain a posteriori error estimates for the fully discrete elm ; * design an efficient adaptive algorithm based on the a posteriori error estimators . numerical results show the robustness of the new adaptive method , which in general allows larger time steps than the standard temporal error indicator for elm . for future work , we are generalizing the a posteriori error estimation to nonlinear convection dominated problems , where the characteristics have to be solved approximately ; in this case , ode solvers that preserve the determinant of the jacobian of the flow will be important . meanwhile , we will also generalize the algorithm to the navier - stokes equations .
l. demkowicz and j. t. oden . an adaptive characteristic petrov - galerkin finite element method for convection - dominated linear and nonlinear parabolic problems in two space variables . , 55(1 - 2):63 - 87 , apr . 1986 .
j. douglas , jr . and t. f. russell . numerical methods for convection - dominated diffusion problems based on combining the method of characteristics with finite element or finite difference procedures . , 19(5):871 - 885 , 1982 .
k. u. totsche , p. knabner , and i. kögel - knabner . the modeling of reactive solute transport with sorption to mobile and immobile sorbents , 1 . experimental evidence and model development . , 32:1611 - 1622 , 1996 .
| in this paper , we consider the adaptive eulerian lagrangian method ( elm ) for linear convection - diffusion problems . unlike the classical a posteriori error estimations , we estimate the temporal error along the characteristics and derive a new a posteriori error bound for elm semi - discretization . with the help of this proposed error bound , we are able to show the optimal convergence rate of elm for solutions with minimal regularity . furthermore , by combining this error bound with a standard residual - type estimator for the spatial error , we obtain a posteriori error estimators for a fully discrete scheme . we present numerical tests to demonstrate the efficiency and robustness of our adaptive algorithm .
the concept of symmetry plays an important role both in physics and mathematics .symmetries are described by transformations of the system , which result in the same object after the transformation is carried out .they are described mathematically by parameter groups of transformations .their importance ranges from fundamental and theoretical aspects to concrete applications , having profound implications in the dynamical behavior of the systems , and in their basic qualitative properties .another fundamental notion in physics and mathematics is the one of conservation law .typical application of conservation laws in the calculus of variations and optimal control is to reduce the number of degrees of freedom , and thus reducing the problems to a lower dimension , facilitating the integration of the differential equations given by the necessary optimality conditions .emmy noether was the first who proved , in 1918 , that the notions of symmetry and conservation law are connected : when a system exhibits a symmetry , then a conservation law can be obtained .one of the most important and well known illustration of this deep and rich relation , is given by the conservation of energy in mechanics : the autonomous lagrangian , correspondent to a mechanical system of conservative points , is invariant under time - translations ( time - homogeneity symmetry ) , and to denote the partial derivative of function with respect to its -th argument . ] = 0 \end{gathered}\ ] ] follows from noether s theorem , , the total energy of a conservative closed system always remain constant in time , `` it can not be created or destroyed , but only transferred from one form into another '' .expression is valid along all the euler - lagrange extremals of an autonomous problem of the calculus of variations .the conservation law is known in the calculus of variations as the 2nd erdmann necessary condition ; in concrete applications , it gains different interpretations : conservation of energy in mechanics ; income - wealth law in economics ; first law of thermodynamics ; etc .the literature on noether s theorem is vast , and many extensions of the classical results of emmy noether are now available for the more general setting of optimal control ( see and references therein ) .here we remark that in all those results conservation laws always refer to problems with integer derivatives . nowadaysfractional differentiation plays an important role in various fields : physics ( classic and quantum mechanics , thermodynamics , etc ) , chemistry , biology , economics , engineering , signal and image processing , and control theory .its origin goes back three centuries , when in 1695 lhopital and leibniz exchanged some letters about the mathematical meaning of for .after that , many famous mathematicians , like j. fourier , n. h. abel , j. liouville , b. riemann , among others , contributed to the development of the fractional calculus .the study of fractional problems of the calculus of variations and respective euler - lagrange type equations is a subject of current strong research .f. riewe obtained a version of the euler - lagrange equations for problems of the calculus of variations with fractional derivatives , that combines the conservative and non - conservative cases . in 2002 o.agrawal proved a formulation for variational problems with right and left fractional derivatives in the riemann - liouville sense .then , these euler - lagrange equations were used by d. baleanu and t. 
avkar to investigate problems with lagrangians which are linear on the velocities . in problems of the calculus of variations with symmetric fractional derivatives are considered and correspondent euler - lagrange equations obtained , using both lagrangian and hamiltonian formalisms . in all the above mentioned studies , euler - lagrange equations depend on left and right fractional derivatives , even when the problem depend only on one type of them . in problems depending on symmetric derivativesare considered for which euler - lagrange equations include only the derivatives that appear in the formulation of the problem . in - liouville fractional integral functionals , depending on a parameter but not on fractional - order derivatives of order , are introduced and respective fractional euler - lagrange type equations obtained .more recently , the authors have used the results of to generalize the classical noether s theorem for the context of the fractional calculus of variations . differently from , where the lagrangian point of view is considered , herewe adopt an hamiltonian point of view .fractional hamiltonian dynamics is a very recent subject but the list of publications has become already a long one due to many applications in mechanics and physics .we extend the previous optimal control noether results of to the wider context of fractional optimal control ( theorem [ thm : mainresult : fda06 ] ) .this is accomplished by means ( i ) of the fractional version of noether s theorem , ( ii ) and the lagrange multiplier rule . as a consequence of our main result, it follows that the `` total energy '' ( the autonomous hamiltonian ) of a fractional system is not conserved : a new expression appears ( corollary [ cor : mainresult ] ) which also depends on the fractional - order of differentiation , the adjoint variable , and the fractional derivative of the state trajectory .we briefly recall the definitions of right and left riemann - liouville fractional derivatives , as well as their main properties .let be a continuous and integrable function in the interval ] , the left riemann - liouville fractional derivative , and the right riemann - liouville fractional derivative , of order , are defined in the following way : where , , and is the euler gamma function .if is an integer , then from and one obtains the standard derivatives , that is , let and be two continuous functions on ] , the following properties hold : 1 . for , 2 .for , 3 . for , ( fundamental property of the riemann - liouville fractional derivatives ) .[ rem : der : const : nz ] in general , the fractional derivative of a constant is not equal to zero .the fractional derivative of order of function , , is given by when one reads `` riemann - liouville fractional derivative '' in the literature , it is usually meant ( implicitly ) the left riemann - liouville fractional derivative . in physics, often denotes the time - variable , and the right riemann - liouville fractional derivative of is interpreted as a future state of the process . 
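the defining integrals above did not survive extraction; for a concrete feel, the left riemann-liouville derivative of order 0 < alpha < 1 can be approximated numerically by the grünwald-letnikov scheme, and the sketch below uses it to illustrate remark [ rem : der : const : nz ] : the fractional derivative of a constant is not zero. the grid, the order alpha = 1/2 and the step size are illustrative choices only.

```python
import math

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the left Riemann-Liouville derivative
    on the grid t_i = a + i*h (0 < alpha < 1):
        D^alpha f(t_i) ~ h**(-alpha) * sum_{k=0}^{i} w_k * f(t_{i-k}),
    with weights w_0 = 1 and w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, len(f_vals)):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return [sum(w[k] * f_vals[i - k] for k in range(i + 1)) / h**alpha
            for i in range(len(f_vals))]

alpha, a, h, N = 0.5, 0.0, 1e-3, 2000
d_const = gl_fractional_derivative([1.0] * (N + 1), alpha, h)

# closed form for a constant C: C * (t - a)**(-alpha) / Gamma(1 - alpha), which is nonzero
t_end = a + N * h
print(d_const[-1], t_end ** (-alpha) / math.gamma(1.0 - alpha))   # both ~ 0.399
```

the printed values agree with the closed form c (t - a)^(-alpha) / gamma(1 - alpha), which tends to zero only as alpha approaches an integer, consistent with the ordinary derivative of a constant.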
for this reason ,right derivatives are usually neglected in applications : the present state of a process does not depend on the results of the future development .following , and differently from , in this work we focus on problems with left riemann - liouville fractional derivatives only .this has the advantage of simplifying greatly the theory developed in , making possible the generalization of the results to the fractional optimal control setting .we refer the interested reader in additional background on fractional theory , to the comprehensive book .in a formulation of the euler - lagrange equations is given for problems of the calculus of variations with fractional derivatives .let us consider the following fractional problem of the calculus of variations : to find function that minimizes the integral functional = \int_a^b l\left(t , q(t),{_ad_t^\alpha } q(t)\right ) dt \ , , \end{gathered}\ ] ] where the lagrangian \times \mathbb{r}^{n } \times \mathbb{r}^{n } \rightarrow \mathbb{r} ] , we define the following operator : .\ ] ] for , operator is reduced to the linearity of the operators and imply the linearity of the operator .[ eq : fcl ] we say that is a _ fractional conservation law _ if and only if it is possible to write in the form of a sum of products , for some , and for each the pair and satisfy one of the following relations : or along all the fractional euler - lagrange extremals ( along all the solutions of the fractional euler - lagrange equations ) .we then write .[ rem : pp ] for and coincide , and is reduced to which is the standard meaning of _ conservation law _ , a function preserved along all the euler - lagrange extremals , ] .having in mind that condition is to be satisfied for any subinterval \subseteq [ a , b] ] and the velocity vector \times \mathbb{r}^{n } \times \mathbb{r}^{m } \rightarrow\mathbb{r}^n ] , is called a _process_. [ th : p ] if is an optimal process for problem , then there exists a co - vector function such that the following conditions hold : * the hamiltonian system * the stationary condition with the hamiltonian defined by in classical mechanics , the lagrange multiplier is called the _ generalized momentum_. in the language of optimal control , is known as the _ adjoint variable_. [ def : extpont ] any triplet satisfying the conditions of theorem [ th : p ] will be called a _fractional pontryagin extremal_. for the fractional problem of the calculus of variations one has , and we obtain from theorem [ th : p ] that comparing the two expressions for , one arrives to the euler - lagrange differential equations : .we define the notion of invariance for problem in terms of the hamiltonian , by introducing the augmented functional as in : \\ = \int_a^b \left[{\cal h}\left(t , q(t),u(t),p(t)\right)-p(t ) \cdot { _q(t)\right]dt \ , , \end{gathered}\ ] ] where is given by .theorem [ th : p ] is easily obtained applying the necessary optimality condition to problem .[ def : inv : gt ] a fractional optimal control problem is said to be invariant under the -parameter local group of transformations if , and only if , d\bar{t } \\= \left[{\cal h}(t , q(t),u(t),p(t))-p(t ) \cdot { _q(t)\right ] dt \ , .\end{gathered}\ ] ] [ thm : mainresult : fda06 ] if the fractional optimal control problem is invariant under , then \tau - p(t ) \cdot \xi\ ] ] is a fractional conservation law , that is , \tau - p(t ) \cdot\xi \right\ } = 0\ ] ] along all the fractional pontryagin extremals . 
for , the fractional optimal control problem is reduced to the classical optimal control problem = \int_a^b l\left(t , q(t),u(t)\right ) dt \longrightarrow \min \ , , \\\dot{q}(t)=\varphi\left(t , q(t),u(t)\right ) \ , , \end{gathered}\ ] ] and we obtain from theorem [ thm : mainresult : fda06 ] the optimal control version of noether s theorem : invariance under a one - parameter group of transformations imply that is constant along any pontryagin extremal ( one obtains from setting ) .the fractional conservation law is obtained applying theorem [ theo : tndf ] to the augmented functional .theorem [ thm : mainresult : fda06 ] provides a new interesting insight for the fractional autonomous variational problems .let us consider the autonomous fractional optimal control problem , the situation when the lagrangian and the fractional velocity vector do not depend explicitly on time : = \int_a^b l\left(q(t),u(t)\right ) dt \longrightarrow \min \ , , \\ _ad_t^\alpha q(t)=\varphi\left(q(t),u(t)\right ) \ , .\end{gathered}\ ] ] [ cor : mainresult ] for the autonomous problem the following fractional conservation law holds : in the classical framework of optimal control theory one has and our operator coincides with .we then get from the classical result : the hamiltonian is a preserved quantity along any pontryagin extremal of the problem .the hamiltonian does not depend explicitly on time , and it is easy to check that is invariant under time - translations : invariance condition is satisfied with , , and . in fact , given that , holds trivially proving that : using the notation in , one has and .conclusion follows from theorem [ thm : mainresult : fda06 ] .we begin by illustrating our results with two lagrangians that do not depend explicitly on the time variable .these two examples are borrowed from and , where the authors write down the respective fractional euler - lagrange equations . here, we use our corollary [ cor : mainresult ] to obtain new fractional conservation laws .we begin by considering a simple fractional problem of the calculus of variations ( see ( * ? ? ?* example 1 ) and ) : = \frac{1}{2}\int_0 ^ 1 \left(_{0}d_{1}^{\alpha}q(t ) \right)^2dt\longrightarrow \min \ , , \quad \alpha > \frac{1}{2 } \ , .\ ] ] equation takes the form we conclude from corollary [ cor : mainresult ] that is a fractional conservation law .let us now consider the following fractional optimal control problem : = \frac{1}{2}\int_0 ^ 1 \left[q^2(t)+ u^2(t)\right ] dt \longrightarrow \min \ , , \\ _ { 0}d_{1}^{\alpha}q(t)=-q(t)+u(t ) \ , , \notag\end{gathered}\ ] ] under the initial condition .the hamiltonian has the form from corollary [ cor : mainresult ] it follows that is a fractional conservation law . for , the fractional conservation laws and give conservation of energy .finally , we give an example of an optimal control problem with three state variables and two controls ( , ) .the problem is inspired in ( * ? ? ?* example 2 ) .we consider the following fractional optimal control problem : for the control system serves as model for the kinematics of a car and - reduces to example 2 of . from corollary [ cor : mainresult ]one gets that is a fractional conservation law .main difficulty of our approach is related with the computation of the invariance transformations . to illustrate this issue ,let us consider problem with in the classical case , since does not appear both in and , such a problem is trivially invariant under translations on the variable , condition is verified for with , , and . 
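as a quick sanity check of the alpha = 1 statement, the linear-quadratic example above can be integrated numerically. with the common convention h = l + p * phi (the paper's sign convention is elided, so this is an assumption), the stationarity condition gives u = -p and the canonical equations become q' = -q - p, p' = -q + p; the hamiltonian is then constant along any extremal, in line with the corollary. the initial values below are arbitrary and only serve to produce one pontryagin extremal.

```python
def hamiltonian(q, p):
    u = -p                                    # stationarity: dH/du = u + p = 0
    return 0.5 * (q * q + u * u) + p * (-q + u)

def rhs(q, p):
    # canonical equations for this example: q' = dH/dp = -q - p,  p' = -dH/dq = -q + p
    return -q - p, -q + p

def rk4_step(q, p, dt):
    k1 = rhs(q, p)
    k2 = rhs(q + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1])
    k3 = rhs(q + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1])
    k4 = rhs(q + dt * k3[0], p + dt * k3[1])
    q += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    p += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q, p

q, p, dt = 1.0, 0.3, 1e-3
h0 = hamiltonian(q, p)
for _ in range(1000):                         # integrate the extremal over [0, 1]
    q, p = rk4_step(q, p, dt)
print(h0, hamiltonian(q, p))                  # agree to many digits: H is conserved
```

repeating the experiment with a genuinely fractional dynamic constraint would require a fractional solver and, per the corollary, the plain autonomous hamiltonian would no longer stay constant.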
in the fractional casethis is not in general true : we have , but condition is not satisfied since and the second term on the right - hand side is in general not equal to zero ( remark [ rem : der : const : nz ] ) .the fractional euler - lagrange equations are a subject of strong current study because of its numerous applications . in fractional noether s theorem is proved .the fractional hamiltonian perspective is a quite recent subject , being investigated in a serious of publications .one can say , however , that the fractional variational theory is still in its childhood .much remains to be done .this is particularly true in the area of fractional optimal control where results are a rarity .the main study of fractional optimal control problems seems to be , where the euler - lagrange equations for fractional optimal control problems ( theorem [ th : p ] ) are obtained , using the traditional approach of the lagrange multiplier rule .here we use the lagrange multiplier technique to derive , from the results in , a new noether - type theorem for fractional optimal control systems .main result generalizes the results of . as an application, we have considered the fractional autonomous problem , proving that the hamiltonian defines a conservation law only in the integer case .gf was supported by ipad ( instituto portugus de apoio ao desenvolvimento ) ; dt by ceoc ( centre for research on optimization and control ) through fct ( portuguese foundation for science and technology ) , cofinanced by the european community fund feder / poci 2010 .we are grateful to professor tenreiro machado for drawing our attention to the _2nd ifac workshop on fractional differentiation and its applications _, 1921 july , 2006 , porto , portugal , and for encouraging us to write the present work .inspiring discussions with jacky cresson at the universit de pau et des pays de ladour , france , are very acknowledged .el - nabulsi , r.a . , torres , d.f.m .: necessary optimality conditions for fractional action - like integrals of variational calculus with riemann - liouville derivatives of order appl ._ * 30*(15 ) , 19311939 ( 2007 ) | using the recent formulation of noether s theorem for the problems of the calculus of variations with fractional derivatives , the lagrange multiplier technique , and the fractional euler - lagrange equations , we prove a noether - like theorem to the more general context of the fractional optimal control . as a corollary , it follows that in the fractional case the autonomous hamiltonian does not define anymore a conservation law . instead , it is proved that the fractional conservation law adds to the hamiltonian a new term which depends on the fractional - order of differentiation , the generalized momentum , and the fractional derivative of the state variable . |
the catalog and atlas of cataclysmic variables ( edition 1 - and edition 2 - ) has been a valuable source of information for the cataclysmic variable ( cv ) community .one of the goals of the catalog was to have the basic information on the objects ( i.e. coordinates , type , magnitude range , and finding charts ) in one central location , thus making it easy for observers to obtain data on the objects .however , the impracticality of reprinting the finding charts in their entirety means that , with each new edition , they are spread among more publications , taking us further from our goal of a central location .furthermore , as new objects are discovered , and known ones examined in greater detail , the printed editions can not keep pace with discovery , a `` living '' edition is therefore highly desirable , so that observers can access a complete and current list of cvs at any time . for the above reasons , as well asthe need to simplify the tracking of the objects ( there are over 1200 objects in the catalog ) , we have decided to generate a web - based version of the catalog .this version will have all the information ( as well some additional information detailed below ) from the first two editions , plus information on over 150 new objects discovered since 1996 may .those objects with revised finding charts will only have one chart presented , thus eliminating a possible confusion which necessarily exists when `` paper '' catalogs are generated .the web site will also allow for easy searching of the catalog , and for generation of basic statistics ( e.g. how many dwarf novae , how many cvs have _ hubble space telescope _data , etc . ) .the catalog consists of ( as of 2000 december ) 1034 cvs , and another 194 objects that are non - cvs ( objects originally classified erroneously as cvs ) .most of the objects are dwarf novae ( 40% ) , with another 30% being novae , and the rest mostly novalike variables . a large fraction ( 90% ) of the cvs have references to published finding charts , while 64% of the objects have published spectra ( 49% quiescent spectra and 15% outburst spectra ) .we have taken this opportunity to make several enhancements to the catalog . in conjunction with hans ritter and ulrich kolb, we have added orbital period data to the catalog ; about one - third of the objects have periods .the period information is from , plus updated and additional values . in conjunction with hilmar duerbeck , we now include finding charts of novae ( when possible ) , and have measured coordinates for many in the _ hubble space telescope _ gsc v1.1 guide star reference frame ( as is the case for the non - novae ) . finally , in the first edition we introduced ( out of necessity ) a pseudo - gcvs name for certain objects ( e.g. phe1 ) , which was continued in the second edition . with the web - based catalog ,these names are no longer needed , so we will cease generating new ones .for those objects that already had such names ( some of which have appeared in subsequent papers in the literature ) and now have a formal gcvs designation , we will adopt the formal gcvs name , although we will keep the pseudo - gcvs name in the `` other name '' field for continuity .the site can be reached via : http://icarus.stsci.edu//cvcat/ and is described in detail below . the home page ( figure [ fig1 ] ) for the catalog contains six links : * * search * - a link to the search page , from which the catalog may be accessed . 
* * description * - a description of the catalog , following the format of the previous editions . a description of all the fields is given . * * references * - a complete listing of the references mentioned in the catalog .note that from each individual object page , you can go directly to the reference of interest . * * statistics * - a listing of a fixed set of basic statistics from the catalog , generated in real - time . ** ascii report * - a listing of the entire catalog in the format of the previously published versions ( i.e. containing most but not all of the fields ) , sorted by right ascension. this output can be down - loaded to an ascii file . ** change log * - a listing , by object , of the changes made since the initial release of this edition the search page ( figure [ fig2 ] ) is the main page for access to the catalog .it allows the user to search the catalog on any field or combination of fields .the following text fields can be searched in a case - insensitive manner : gcvs name , other name , and the five reference fields ( coordinate , chart , type , spectrum , and period ) ; the object type and notes fields can be searched in a case - sensitive manner .all textual searches support the use of wildcards .a coordinate search may be performed by specifying either a right ascension / declination range , or by specifying a set of coordinates and a radius .numerical searches ( supporting a `` '' and `` '' capability ) can be performed for the following fields : galactic latitude , minimum and maximum magnitude , outburst year ( for novae ) , and period . finally , a search for space - based observations using any of 10 observatories can be performed .an on - line help file is available detailing the search capabilities for each field , as well as providing instructions for the use of wildcards .after a search is initiated , the search results page ( figure [ fig3 ] ) presents the results of the search .this page indicates the number of objects in the catalog that match the selection criteria , and presents an abbreviated view of the catalog entries for such entries , showing the basic information such as the coordinates , type , magnitude range , and period . to obtain the full information ( including the finding chart ) , one clicks on the object of interest .the individual object page ( figure [ fig4 ] ) presents the complete information on the selected object . for the finding charts , the field size , source ( dss ( digitized sky survey ) , _ hubble space telescope _data , ground - based image ) , filter / emulsion , and exposure time are given to allow the user to estimate the depth of the image .for most objects , the dss image is used .however , for particularly crowded fields ( such as in globular clusters ) , _ hubble space telescope _data is used when available .similarly , for particularly faint targets , ground - based ccd images are provided when possible . on this page, one may click on any of the reference codes to go directly to the full reference on the references page .we plan to update the site with new objects and information on a continual basis , although period and spaced - based updates will occur roughly every six months .we encourage users to inform us of any updates that should be implemented ( e.g. revised identifications , new objects , etc . 
) , and if appropriate to send us improved / original finding charts ( as either postscript or jpeg images ) .the charts will be particularly useful for recent novae recovered in quiescence , and for the faintest objects where deep ccd imaging clearly reveals the correct identification .we wish to thank anne gonnella , steve hulbert , calvin tullos , and mike wiggs for the excellent work in creating the site .we also wish to thank matt mcmaster for assistance in generating the multitude of finding charts , and the director s discretionary research fund at stsci for financial support .paula szkody , john thorstensen , and steve howell provided helpful comments on the initial version of the site .rfw gratefully acknowledges the support of nsf grant ast-9618462 , and sabbatical support from stsci .hwd acknowledges the hospitality and support of stsci . | the catalog and atlas of cataclysmic variables ( edition 1 - and edition 2 - ) has been a valuable source of information for the cataclysmic variable ( cv ) community . however , the goal of having a central location for all objects is slowly being lost as each new edition is generated . there can also be a long time delay between new information becoming available on an object and its publication in the catalog . to eliminate these concerns , as well as to make the catalog more accessible , we have created a web site which will contain a `` living '' edition of the catalog . we have also added orbital period information , as well as finding charts for novae , to the catalog . |
metamaterials in the form of particulate composite mediums are currently of considerable scientific and technological interest walser . provided that wavelengths are sufficiently long compared with the length scales of inhomogeneities ,such a metamaterial may be envisaged as a homogenized composite medium ( hcm ) , arising from two homogeneous component mediums l96 , mackay03 .hcms with especially interesting properties may be conceptualized if the real parts of the relative permittivities ( and/or relative permeabilities ) of the two component mediums have opposite signs lijimmw .this possibility arises for metal in insulator dielectric composites herwin , mlw01 and has recently become feasible with the fabrication of dielectric magnetic materials displaying a negative index of refraction in the microwave frequency range helby , smith . over many years, several theoretical formalisms have been developed in order to estimate the effective constitutive parameters of particulate composite mediums l96 .in particular , the maxwell garnett and the bruggeman homogenization formalisms have been widely used ward .generally , the maxwell garnett formalism is seen to hold only for dilute composite mediums mg .more widely applicable is the bruggeman formalism that was initially founded on the intuition that the total polarization field is zero throughout the hcm brugge .a rigorous basis for the bruggeman formalism is also available , within the framework of the strong permittivity fluctuation theory ( spft ) k81 , mlw00 .estimates of hcm constitutive parameters generated by homogenization formalisms may be required to lie within certain bounds .in particular , the wiener bounds wiener , aspnes and the hashin shtrikman bounds s are often invoked . the hashin shtrikman bounds coincide with the constitutive parameter estimates of the maxwell garnett homogenization formalism aspnes .the applicability of theoretical bounds on the hcm permittivity has recently been the focus of attention for composites specified by relative permittivities with positive valued real parts ihvola . in this communication, we consider the application of the bruggeman formalism , together with the wiener and hashin shtrikman bounds , to isotropic dielectric hcms which arise from component mediums characterized by complex valued relative permittivities whose real parts have opposite signs .this is scenario is typical of metal in insulator hcms aspnes , milton , for example . by duality, our analysis extends to isotropic magnetic hcms .it also extends to isotropic dielectric magnetic hcms , because the permeability and the permittivity are then independent of each other in the bruggeman formalism kampia ( as also in the maxwell garnett formalism lak92 ) .therefore , our findings are very relevant to the application of homogenization formalisms lijimmw to mediums displaying negative index of refraction lmw03 , for example .furthermore , the implications of our mathematical study extend beyond the bruggeman formalism to the spft as well mackay03 . a note on notation : an time dependence is implicit in the following sections ; and the real and imaginary parts of complex valued quantities are denoted by and , respectively .consider the homogenization of two isotropic dielectric component mediums labelled and .let their relative permittivities be denoted by and , respectively . 
for later convenience , we define the bruggeman estimate of the hcm relative permittivity , namely , is provided implicitly via the relation ward wherein and are the respective volume fractions of component mediums and , and the particles of both component mediums are assumed to be spherical .the bruggeman equation emerges naturally within the spft framework mackay03 .a rearrangement of gives the quadratic equation only those of are valid under the principle of causality as encapsulated by the kramers kronig relations bh which conform to the restriction .let be the discriminant of the quadratic equation ; i.e. , since , we may express as an insight into the applicability of the bruggeman formalism may be gained by considering the of the equation ; these are as follows : on restricting attention to nondissipative component mediums ( i.e. , ) , it is clear that are complex valued if .consequently , which implies that .on the other hand , are real valued if .thus , the bruggeman estimate for may be complex valued with nonzero imaginary part , even though neither component medium is dissipative .various bounds on the hcm relative permittivity have been developed .two of the most widely used are the wiener bounds wiener , aspnes and the hashin shtrikman bounds s while both the wiener bounds and the hashin shtrikman bounds were originally derived for real valued constitutive parameters , generalizations to complex valued constitutive parameters have been established milton .the hashin shtrikman bound is equivalent to the maxwell garnett estimate of the hcm relative permittivity based on spherical particles of component medium embedded in the host component medium .similarly , is equivalent to the maxwell garnett estimate of the hcm relative permittivity based on spherical particles of component medium embedded in the host component medium .the estimate is valid for , whereas the estimate is valid for ; but see the footnote in section 1 . to gain insights into the asymptotic behaviour of the wiener and hashin shtrikman bounds , let us again restrict attention to the case of nondissipative component mediums ( i.e. , ) . from , we see that remains finite for all values of , but may become infinite for since in a similar vein , from we find that thus , for all values of there exists a value of at which is unbounded .analogously , so we can always find a value of at which is unbounded , provided that .let us now present , calculated values of the hcm relative permittivity , along with the corresponding values of the bounds and , for some representative examples .both nondissipative and dissipative hcms are considered for .the effects of dissipation may be very clearly appreciated through first considering the idealized situation wherein the components mediums are nondissipative vandehulst .furthermore , although the absence of dissipation is unphysical due to the dictates of causality bh , weak dissipation in a particular spectral regime is definitely possible and is then often ignored ( * ? ? ?* sec.2.5 ) .thus , it is instructive to begin with the commonplace scenario wherein both and .for example , let and . 
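the mixing relation, its quadratic rearrangement and the bound formulas are elided above; the sketch below therefore uses the standard textbook forms for spherical particles (which the elided equations presumably coincide with) and performs the kind of sweep over the volume fraction that figures 1-6 report. the permittivity values are illustrative and are not the exact parameters of those figures.

```python
import numpy as np

def bruggeman(eps_a, eps_b, fa):
    """Bruggeman estimate for two isotropic components with spherical particles:
        f_a (e_a - e)/(e_a + 2e) + f_b (e_b - e)/(e_b + 2e) = 0,
    rearranged to  2 e^2 - [(3 f_a - 1) e_a + (3 f_b - 1) e_b] e - e_a e_b = 0,
    keeping the causal root, Im(e) >= 0."""
    fb = 1.0 - fa
    b = (3.0 * fa - 1.0) * eps_a + (3.0 * fb - 1.0) * eps_b
    roots = np.roots([2.0, -b, -eps_a * eps_b])
    causal = [r for r in roots if r.imag >= -1e-12]
    # when both roots are real (and hence both "causal"), keep the larger real part,
    # which is the physically continuous branch for the all-positive case
    return max(causal, key=lambda r: (r.imag, r.real))

def wiener_bounds(eps_a, eps_b, fa):
    """Harmonic and arithmetic mixtures; they act as lower/upper bounds only when
    both real parts are positive."""
    fb = 1.0 - fa
    return 1.0 / (fa / eps_a + fb / eps_b), fa * eps_a + fb * eps_b

def maxwell_garnett(eps_incl, eps_host, f_incl):
    """Maxwell Garnett estimate = Hashin-Shtrikman bound: spherical inclusions in a host."""
    den = eps_incl + 2.0 * eps_host - f_incl * (eps_incl - eps_host)
    return eps_host + 3.0 * f_incl * eps_host * (eps_incl - eps_host) / den

def hashin_shtrikman_bounds(eps_a, eps_b, fa):
    return (maxwell_garnett(eps_a, eps_b, fa),        # a dispersed in host b
            maxwell_garnett(eps_b, eps_a, 1.0 - fa))  # b dispersed in host a

# weakly dissipative components with opposite-sign real parts: note how one
# Hashin-Shtrikman bound blows up as fa approaches the resonance near 0.5
eps_a, eps_b = -15.0 + 0.1j, 3.0 + 0.1j
for fa in (0.2, 0.4, 0.49, 0.6, 0.8):
    print(fa, bruggeman(eps_a, eps_b, fa), hashin_shtrikman_bounds(eps_a, eps_b, fa))
```

sweeping fa for strongly dissipative components (real and imaginary parts of comparable magnitude) reproduces the well-behaved situation of figure 6, while letting the imaginary parts shrink towards zero makes the bound resonances and the large bruggeman imaginary parts discussed in the text reappear.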
in figure 1, is plotted against , along with the corresponding wiener bounds and hashin shtrikman bounds .the latter bounds are stricter than the former bounds in the sense that the close agreement between and the lower hashin shtrikman bound at low volume fractions is indicative of the fact that .similarly , agrees closely with the upper hashin shtrikman bound at high values of since .a markedly different situation develops if the real valued and have opposite signs .for example , the values of calculated for and are graphed against in figure 2 , together with the corresponding wiener and hashin shtrikman bounds .the bruggeman estimate is complex valued with nonzero imaginary part for .this estimate is not physically reasonable . the bruggeman homogenization formalism unlike the spft which is its natural generalization has no mechanism for taking coherent scattering losses into account .furthermore , no account has been taken in the bruggeman equation for the finite size of the particles lijaem , pls , shan96 .therefore , the bruggeman estimate of the hcm relative permittivity is required to be real valued if the component mediums are nondissipative . while in figure 2 is complex valued , the wiener bounds and the hashin shtrikman bounds are both real valued . in accordance with , we see that as .similarly , in the limit , as may be anticipated from .furthermore , since , the maxwell garnett formalism is clearly inappropriate here .we also observe that the inequalities which hold for , do not hold for .let us now investigate and its associated bounds when the component mediums are dissipative ; i.e. , .we begin with those cases for which : for example , we take and . in figure 3, is plotted against , and the associated wiener bounds and the hashin shtrikman bounds are also presented .the behaviour of the real parts of , and closely resembles that displayed in the nondissipative example of figure 1 .in fact , the following generalization of holds : however , this ordering does not extend to the imaginary parts of , and . turning to the cases for , we let and , for example . the corresponding bruggeman estimate is graphed as function of , along with the wiener bounds and the hashin shtrikman bounds in figure 4 . since ,the real parts of and remain finite , unlike in the corresponding nondissipative scenario presented in figure 2 . however , the real and imaginary parts of and exhibit strong resonances in the vicinity of ( for ) and ( for ) .these resonances become considerably more pronounced if the degree of dissipation exhibited by the component mediums is reduced . for example , in figure 5 the graphs corresponding to figure 4 are reproduced for and .we observe in particular that for .thus , the bruggeman estimate vastly exceeds both the wiener bounds and the hashin shtrikman bounds for a wide range of . since , the estimates of are clearly unreasonable .furthermore , since the real and imaginary parts of exhibit sharp resonances at , we may infer that the maxwell garnett formalism is inapplicable for . on comparing figures 4 and 5 ,we conclude that the bruggeman formalism , the weiner bounds and the hashin shtrikman bounds become increasing inappropriate as the degree of dissipation decreases towards zero .this means that all three _ could _ be applicable rather well when the dissipation is not weak .therefore , let us examine the scenario wherein the real and imaginary parts of the relative permittivities of the component medium are of the same order of magnitude ; i.e. 
, we take and .the corresponding plots of the bruggeman estimate together with the wiener bounds and the hashin shtrikman bounds are presented in figure 6 .the real and imaginary parts of the bruggeman estimate are physically plausible , and both lie within the hashin shtrikman bounds . the hashin shtrikman bounds themselves do not exhibit resonances , and the weiner bounds do not exhibit strong resonances . accordingly , we conclude that many previously published results are not erroneous , but caution is still advised .the bruggeman homogenization formalism is well established in the context of isotropic dielectric hcms , as well as more generally l96 . however, this formalism was shown in section 3.2 to be inapplicable for hcms which arise from two isotropic dielectric component mediums , characterized by relative permittivities and , with * and having opposite signs ; and * .since the bruggeman formalism provides the comparison medium which underpins the spft , it may be inferred that the spft is likewise not applicable to the scenarios of ( i ) with ( ii ) .it is also demonstrated in section 3.2 that both the wiener bounds and the hashin shtrikman bounds can exhibit strong resonances when the component mediums are characterized by ( i ) with ( ii ) . in the vicinity of resonances , these bounds clearly do not constitute tight bounds on the hcm relative permittivity . as a direct consequence , the maxwell garnett homogenization formalism , like the bruggeman homogenization formalism , is inapplicable to the scenarios of ( i ) with ( ii ) .this limitation also extends to the recently developed incremental img and differential dmg variants of the maxwell garnett formalism .if the component mediums are sufficiently dissipative then the bruggeman formalism and the hashin shtrikman bounds ( and therefore also the maxwell garnett formalism ) provide physically plausible estimates , despite the real parts of the component medium relative permittivities having opposite signs as shown in section 3.3 . the explicit delineation of the appropriate parameter range(s ) for the bruggeman formalism and the hashin shtrikman bounds is a matter for future investigation .bounds can , of course , be violated by a formalism if the underlying conditions for the formalism are in conflict with those used for deriving the bounds .sihvola ihvola has catalogued the following conflicts : * bounds derived for nondissipative component mediums can be invalid for the real parts of either or for a composite medium containing dissipative component mediums .* percolation can not be can not be captured by the maxwell garnett formalism herwin , sasta .hence , the hashin shtrikman bounds , being based on the maxwell garnett formalism , can be violated by the bruggeman estimate for a percolative composite medium . 
*the derivations of bounds generally assume that the particles in a composite medium have simple shapes .if the particle shapes are complicated , the composite medium may display properties not characteristic of the either of the component mediums .for instance , magnetic properties can be displayed when the particles in a composite medium have complex shapes , even though the component mediums are nonmagnetic .clearly , the magnetic analogs of , , and are then inapplicable .* , , , and as well as their magnetic analogs are also invalid _ prima facie _ when the component mediums exhibit magnetoelectric properties mackay03,lijaem , mich00 .* bounds derived for electrically small particles become inapplicable with increasing frequency , due to the emergence of finite size effects pls .even the concept of homogenization becomes questionable with increasing electrical size .in contrast , the bounds and the homogenization formalisms studied in this paper share the same premises ; yet , a conflict arises in certain situations because the bounds exhibit resonance while the homogenization estimates do not .as several conventional approaches to homogenization are not appropriate to the hcms arising from component mediums characterized by ( i ) with ( ii ) , there is a requirement for new theoretical techniques to treat this case .this requirement is all the more pressing , given the growing scientific and technological importance of new types of metamaterials walser , lmw03 . | the bruggeman formalism provides an estimate of the effective permittivity of a particulate composite medium comprising two component mediums . the bruggeman estimate is required to lie within the wiener bounds and the hashin shtrikman bounds . considering the homogenization of weakly dissipative component mediums characterized by relative permittivities with real parts of opposite signs , we show that the bruggeman estimate may not be not physically reasonable when the component mediums are weakly dissipative ; furthermore , both the wiener bounds and the hashin shtrikman bounds exhibit strong resonances . * a limitation of the bruggeman formalism for homogenization * tom g. mackay + _ school of mathematics , university of edinburgh , edinburgh eh9 3jz , uk _ + akhlesh lakhtakia + _ catmas computational & theoretical materials sciences group + department of engineering science and mechanics + pennsylvania state university , university park , pa 168026812 , usa _ * keywords : * homogenization ; negative permittivity ; bruggeman formalism ; maxwell garnett formalism ; hashin shtrikman bounds ; wiener bounds |
the _ sparse representation _ problem involves solving the system of linear equations where is assumed to be -sparse ; i.e. is allowed to have ( at most ) non - zero entries .the matrix is typically referred to as the _ dictionary _ with elements or _atoms_. it is well - known that can be uniquely identified if satisfies the so called _ _ spark condition __ columns of are linearly independent . ] .meanwhile , there exist tractable and efficient convex relaxations of the combinatorial problem of finding the ( unique ) -sparse solution of with provable recovery guarantees . a related problem is _ dictionary learning _ or _sparse coding _ which can be expressed as a sparse factorization of the data matrix ( where both and are assumed unknown ) given that each column of is -sparse and satisfies the spark condition as before .a crucial question is how many data samples ( ) are needed to _ uniquely _ identify and from ? unfortunately , the existing lower bound is ( at best ) exponential assuming an equal number of data samples over each -sparse support pattern in . in this paper, we address a more challenging problem .in particular , we are interested in the above sparse matrix factorization problem ( with both sparsity and spark conditions ) when only random linear measurements from each column of is available .we would like to find lower bounds for for the ( partially observed ) matrix factorization to be unique .this problem can also be seen as recovering both the dictionary and the sparse coefficients from compressive measurements of data .for this reason , this problem has been termed _ blind compressed sensing _ ( bcs ) before , although the end - goal of bcs is the recovery of .we start by establishing that the uniqueness of the learned dictionary over random data measurements is a sufficient condition for the success of bcs .perfect recovery conditions for bcs are derived under two different scenarios . in the first scenario ,fewer random linear measurements are available from each data sample .it is stated that having access to a large number of data samples compensates for the inadequacy of sample - wise measurements . meanwhile , in the second scenario , it is assumed that slightly more random linear measurements are available over each data sample and the measurements are partly fixed and partly varying over the data .this measurement scheme results in a significant reduction in the required number of data samples for perfect recovery .finally , we address the computational aspects of bcs based on the recent non - iterative dictionary learning algorithms with provable convergence guarantees to the generating dictionary .bcs was initially proposed in where it was assumed that , for a given random gaussian sampling matrix ( ) , is observed .the conclusion was that , assuming the factorization is unique , factorization would also be unique with a high probability when is an orthonormal basis. 
however , it would be impossible to recover from when .it was suggested that structural constraints be imposed over the space of admissible dictionaries to make the inverse problem well - posed .some of these structures were sparse bases under known dictionaries , finite set of bases and orthogonal block - diagonal bases .while these results can be useful in many applications , some of which are mentioned in , they do not generalize to unconstrained overcomplete dictionaries .subsequently , there has been a line of empirical work on showing that dictionary learning from compressive data a sufficient step for bcs can be successful given that a different sampling matrix is employed for each data sample is no longer valid which is possibly a reason for the lack of a theoretical extension of bcs to this case . ]( i.e. each column of ) .for example , uses a modified k - svd to train both the dictionary and the sparse coefficients from the incomplete data .meanwhile , use generic gradient descent optimization approaches for dictionary learning when only random projections of data are available .the empirical success of dictionary learning with partial as well as compressive or projected data triggers more theoretical interest in finding the uniqueness bounds of the unconstrained bcs problem .finally , we must mention the theoretical results presented in the pre - print on bcs with overcomplete dictionaries while is assumed to lie in a structured union of disjoint subspaces .it is also proposed that the results of this work extend to the generic sparse coding model if the ` one - block sparsity ' assumption is relaxed .we argue that the main theoretical result in this pre - print is incomplete and technically flawed as briefly explained here . in the proof of theorem 1 of , it is proposed that ( with adjustment of notation ) _ `` assignment [ of s columns to rank- disjoint subsets ] can be done by the ( admittedly impractical ) procedure of testing the rank of all possible matrices constructed by concatenating subsets of column vectors , as assumed in ''_. however , it is ignored that the entries of are missing at random and the rank of an incomplete matrix can not be measured . as it becomes more clear later , the main challenge in the uniqueness analysis of unconstrained bcs is in addressing this particular issue .two strategies to tackle this issue that are presented in this paper are : 1 ) increasing the number of data samples and 2 ) designing and employing measurement schemes that preserve the low - rank structure of s sub - matrices .this paper is organized as follows . in section [ sec : problem ] , we provide the formal problem definition for bcs .our main results are presented in section [ sec : main ] .we present the proofs in section [ sec : proofs ] .practical aspects of bcs are treated in section [ sec : algorithm ] where we explain how provable dictionary learning algorithms , such as , can be utilized for bcs . finally , we conclude the paper and present future directions in section [ sec : conclusion ] .our general convention throughout this paper is to use capital letters for matrices and small letters for vectors and scalars . for a matrix , denotes its entry on row and column , denotes its column and denotes its column - major vectorized format .the inner product between two matrices and ( of the same sizes ) is defined as .let denote the smallest number of s columns that are linearly dependent . 
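since the symbols of the definition were stripped in extraction, a brute-force computation of the spark may be the clearest way to restate it; the helper below simply checks every column subset for linear dependence (exponential cost, so only usable for tiny matrices), and the example matrix is invented purely for illustration.

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of columns of A that are linearly dependent (brute force)."""
    n, m = A.shape
    for size in range(1, m + 1):
        for cols in itertools.combinations(range(m), size):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                return size
    return m + 1             # every subset independent (only possible when m <= n)

A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(spark(A))              # 3: the first three columns satisfy a1 + a2 - a3 = 0
```

as recalled earlier in the paper, a spark condition of the form spark(A) > 2k is exactly what makes a k-sparse solution of the linear system unique.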
is -coherent if we have .finally , let \coloneqq\{1,2,\dots , m\} ] denote the set of all subsets of ] to denote the observations and to denote the projected matrix that is a concatenation of all . specifically ,when entries of each are drawn independently from a random gaussian distribution with mean zero and variance , we use the notations and .assume where denotes the dictionary ( in the overcomplete setting ) and is a sparse matrix with exactly non - zero entries per column and .additionally , assume that each column of is randomly drawn by first selecting its support \choose k} ] .we denote by the set of feasible under the described sparse coding model .note that the assumption is necessary to ensure a unique even when is known and fixed . as noted and proved in , when , with probability one , no subset of ( or less ) columns of is linearly dependent . also with probability one ,if a subset of columns of are linearly dependent , then all of the columns must have the same support .given the above definitions , we can now formally express the problem definition for bcs : recover from given , and .our results throughout this paper are mainly developed for the class of gaussian measurements .however , it is not difficult to extend these results to the larger class of continuous sub - gaussian distributions for .to start with , assume that there are exactly columns in for each support pattern where \choose k}\downbracefill\upbracefill\upbracefill ] where stands for the fixed part of sampling matrix and stands for the varying part of the sampling matrix .the total number of measurements per column is . in a hybrid gaussian measurement scheme , and through assumed to be drawn independently from an i.i.d .zero - mean gaussian distribution with variance .the observations corresponding to and s are denoted by and respectively . as mentioned earlier ,the hybrid measurement scheme was designed to reduce the required number of data samples for perfect bcs recovery . in particular , as formalized in lemma [ lem : rank - check ] , the fixed part of the measurements is designed to retain the low - rank structure of each -dimensional subspace associated with a particular . meanwhile ,the varying part of the measurements is essential for the uniqueness of the learned dictionary .assume and there are exactly columns in for each . then can be perfectly recovered from hybrid gaussian measurements with probability one given that .[ cor : gell-4k ] similar to the statement of corollary [ cor : gn-2k ] , it can be stated that bcs with hybrid gaussian measurement succeeds with probability at least given that .the proof follows the proof of corollary [ cor : gn-2k ] .although we mainly follow the stochastic approach of in this paper , we could also employ the deterministic approach of to arrive at the uniqueness bound in theorem [ cor : gell-4k ] . in , an algorithm ( which is not necessarily practical )is proposed to uniquely recover and from .this algorithm starts by finding subsets of size of s columns that are linearly dependent by testing the rank of every subset .dismissing the degenerate possibilities are dismissed by adding extra assumptions in the deterministic sparse coding model .meanwhile , as pointed out in , such degenerate instances of would have a probability measure of zero in a random sparse coding model ] , these detected subsets would correspond to samples with the same support pattern in . 
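most symbols of the sampling model were lost above, so a toy generator may help fix notation: it draws a random dictionary, exactly k-sparse coefficient columns with uniformly random supports, and hybrid measurements made of a fixed gaussian block shared by all samples plus a fresh gaussian block per sample. all sizes are small illustrative values; the thresholds on the block sizes and on the number of samples required by the theorems are the (elided) ones stated above, not the numbers used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_bcs_instance(n=20, m=40, k=3, N=200, ell1=4, ell2=4):
    """Toy instance of the BCS sampling model.
    n, m  : signal dimension and number of atoms;  k : sparsity per column
    ell1  : rows of the fixed part of every sampling matrix (shared)
    ell2  : rows of the varying part (freshly drawn per column)"""
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)                    # unit-norm atoms
    X = np.zeros((m, N))
    for j in range(N):
        support = rng.choice(m, size=k, replace=False)
        X[support, j] = rng.standard_normal(k)        # exactly k nonzero entries
    Y = A @ X
    Phi_fixed = rng.standard_normal((ell1, n))        # same block for every sample
    Z, Phis = np.zeros((ell1 + ell2, N)), []
    for j in range(N):
        Phi_var = rng.standard_normal((ell2, n))      # redrawn for every sample
        Phi_j = np.vstack([Phi_fixed, Phi_var])
        Phis.append(Phi_j)
        Z[:, j] = Phi_j @ Y[:, j]
    return A, X, Y, Phi_fixed, Phis, Z

A, X, Y, Phi_fixed, Phis, Z = synth_bcs_instance()
print(Z.shape)            # (8, 200): ell1 + ell2 measurements per data sample
```

the fixed block is what the rank-check lemma relies on to test linear dependence of projected column subsets, while the per-sample blocks supply the diversity needed for uniqueness of the learned dictionary, matching the division of labour described above.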
under the assumptions in theorem[ cor : gell-4k ] , it is possible to test whether columns in are linearly dependent ( with probability one ) , as a consequence of lemma [ lem : rank - check ] in the following section . until now , our goal was to show that ( and subsequently ) is unique given only cs measurements .as we mentioned before , uniqueness of is a _ sufficient _ condition for the success of bcs .consider the scenario where not all support patterns are realized in or for some there is not enough samples to guarantee recovery .for such scenarios , we present the following theorem .assume and let where and denotes the set of indices of columns of with support .then , under hybrid gaussian measurement , for all can be perfectly recovered with probability one .[ the : latest ]the following crucial lemma from handles the permutation ambiguity of sparse coding .assume for and let \choose k} ] denote the set of indices of s columns that have the sparsity pattern . by definition , where .due to the pigeon - hole principle , there must be at least columns within that share some particular support pattern .in other words , if denotes the set of indices of s columns that have the support pattern , then . for simplicity , denote .clearly , ( because ) , and we have according to lemma [ lem : manifold ] , if or equivalently , then with probability one .meanwhile , since , necessitates that finally , since satisfies the spark condition , it is not difficult to see that is a bijective map . to explain more , assume there exists some such that combining with ( [ eq : map2 ] ) we arrive at which contradicts the spark condition for for .therefore , must be injective . now, since is a finite set and is an injective mapping from to itself , it must also be surjective and , thus , bijective . in order to have at least columns in for each support in the random sparse coding model , we must have more than just data samples . the following result from the number of required data samples to ensure at least columns per each with a tunable probability of success . for a randomly generated with and ] denote the set of indices of s columns that have the same sparsity pattern .clearly , therefore , if , then and according to lemma [ lem : rank - check ] : with probability one .hence , . again using lemma [ lem : rank - check ] with , with probability one . therefore, all the columns in must have the same support , namely .note that since , .meanwhile , necessitates that for every .therefore , .now , given according to lemma [ lem : manifold ] , if , then with probability one . 
meanwhile , since , necessitates that finally , since satisfies the spark condition , is a bijective map and for some diagonal and permutation matrix according to lemma [ lem : apd ] .recall that for every we have .assume and as before .having allows testing whether a subset of columns of are linearly dependent ( have a rank of ) with probability one .therefore , by doing an exhaustive search among every sub - matrix with \choose k+1} ] and =0 ] where .similar to , in the rest of paper we will assume .derived results generalize to the case by loosing constant factors in guarantees .two dictionaries are column - wise -close , if there exists a permutation and such that \colon \|a_i-\theta_i b_{\pi(i)}\|_2\leq \epsilon ] based on corollary [ cor : mu ] , it can be deduced that with high probability .therefore , replacing in the original theorem [ the : arora ] with introduces slightly stronger sparsity requirement for the success of the algorithm . at this stage , we only exploit the varying part of the measurements and use in place of for simplicity .let be the discovered overlapping clusters from the previous stage and define the empirical covariance matrix for the cluster .the svd approach ) selective averaging and ) the svd - based approach .we selected to work with the svd approach due to its more abstract and versatile nature .] of estimates by which is the principal eigenvector of .let denote the empirical covariance matrix resulting from the compressive measurements where as before .similarly , let denote the principal eigenvector of .our goal in this section is to show that is bounded by a small constant for finite and approaches zero for large . for this purpose, we use the recent results from the area of _ subspace learning _ , specifically , subspace learning from compressive measurements . a critical factor in estimation accuracy of the principle eigenvector of a _ perturbed _ covariance matrix is the _ eigengap _ between the principal and the second eigenvalues of the original covariance matrix .this is a well - known result from the works of chandler davis and william kahan known as the davis - kahan sine theorem .consider the following notation .let and denote projection operators onto the principal -dimensional subspaces of and respectively ( i.e. the projection onto the top- eigenvectors ) .let denote the spectral norm of the difference between and .define the eigengap as the distance between the and largest eigenvalues of .suppose is computed from at least data samples ( for all ) .moreover , assume the data samples have bounded norms , i.e. \colon\|y_j\|_2 ^ 2\leq \eta ] [ lem : temp ] clearly , and . as we mentioned in the definition of -closeness, is implicit in the error expression requiring that and consequently .also note that , by definition , for any now let . then therefore and the rest follows from lemma [ lem : akshay ] . to obtaina lower - bound for the eigengap we need to review some of the intermediate results in .in fact , we compute a lower - bound for of which serves as a close approximation of when the number of data samples is large . 
for every ] .meanwhile , it has been established that for , = r_i^2 ] .define .it can be shown that \ ] ] note that the first and the second largest eigenvalues correspond to projected variances of on the directions of and , respectively .therefore , based on the derived ranges for and , we are able to find the following lower - bound for : note that becomes very small as the problem size ( , , ) becomes large , resulting in . therefore , given a sufficient number of samples , it can be guaranteed that is an accurate estimation of and , in turn , an accurate estimation of even when only measurements per sample is available . once the dictionary has been approximated to within a close distance from the optimal dictionary , iterative algorithms such as can assure convergence to a local optimum and therefore perfect recovery as suggested in .finally , perfect recovery of the dictionary results in perfect recovery of and given the cs bounds for the number of measurements which are generally weaker than the stated bounds for the recovery of the dictionary .in this work , we studied the conditions for perfect recovery of both the dictionary and the sparse coefficients from linear measurements of the data .the first part of this work brings together some of the recent theories about the uniqueness of dictionary learning and the blind compressed sensing problem .moreover , we described a ` hybrid ' random measurement scheme that reduces the theoretical bounds for the minimum number of data samples to guarantee a unique dictionary and thus perfect recovery for blind compressed sensing . in the second part , we discussed the algorithmic aspects of dictionary learning under random linear measurements .it was shown that a polynomial - time algorithm can assure convergence to the generative dictionary given a sufficient number of data samples with high probability .it would be interesting to explore dictionary learning and blind compressed sensing for non - gaussian random measurements .in particular , when the data matrix is partially observed ( i.e. an incomplete matrix ) , data recovery becomes a matrix completion problem where the elements of the data matrix are assumed to lie in a union of interconnected rank- subspaces .this is a subject of future work .let . note that .thus , . our goal is to show and thus . to prove , we must show that for every $ ] , results in with probability one . for simplicity , we omit the sample index in the rest of the proof .let and respectively denote the sets of non - zero indices of and where . rewrite as .note that is supported on where .therefore , we must show that , with probability one , \choose 2k}\colon rank(\phi a_t)=|t|\ ] ] necessitating or . since ,every columns of are linearly independent and we are able to perform a gram - schmidt orthogonalization on to get where is orthonormal ( ) and is a full - rank square matrix .hence , is distributed according to i.i.d .gaussian and is full - rank with probability one .we conclude the proof by noticing that since is a full - rank square matrix. 
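the rank - preservation fact used at the end of this proof is also easy to verify numerically . the sketch below ( hypothetical sizes , illustration only ) projects a rank - r matrix with an i.i.d . gaussian matrix of p rows and checks that the projected rank equals min(p , r) , so no rank information is lost once p is at least r .

import numpy as np

rng = np.random.default_rng(1)
d, n, r = 30, 12, 4                       # hypothetical ambient dimension, number of columns, rank
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, n))   # a rank-r matrix

for p in (2, 4, 8):                       # number of gaussian measurement rows
    Phi = rng.standard_normal((p, d))     # i.i.d. gaussian measurement
    print(p, np.linalg.matrix_rank(Phi @ M))
# with probability one the printed ranks are min(p, r): here 2, 4 and 4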
then .specifically , under the gaussian measurement scheme for bcs , we have : \in\mathbb{r}^{p n\times d n}\ ] ] where non - zero entries of are i.i.d .gaussian with mean zero and variance .the following result from gives the required number of linear measurements to guarantee ( with probability one ) that a rank- matrix does not fall into the null - space of the measurement operator .let be a -dimensional continuously differentiable manifold over the set of real matrices .suppose we take linear measurements from .assume there exists a constant such that for every with .further assume that for each that the random variables are independent .then with probability one , .[ lem : r11 ] a careful inspection of the derivation of the above theorem in reveals that this result can be extended to include the manifolds over the set of rectangular matrices .specifically , for the manifold over rank- matrices we have ( see for example ) .let denote the manifold over the set of rank- matrices and let denote the manifold over the set of rank- matrices . also let with . then , for any , implies with probability one .clearly , implies for any over the set of rank- matrices with . also note that , thus .now , since and ( with probability one , according to lemma [ lem : r11 ] ) , we must have or with probability one .it only remains to show that satisfies the requirements of lemma [ lem : r11 ] .as noted in , the requirement requires that the densities of do not spike at the origin ; a sufficient condition for this to hold for every with is that each has i.i.d .entries with a continuous density .note that non - zero entries of are i.i.d .gaussian and cover every column in . therefore, none of the entries of would spike at the origin or equivalently there exists so that with given that the vector is drawn from a continuous distribution .let and .perform a gram - schmidt orthogonalization on to obtain where has orthogonal columns and is full - rank ; hence , given , we have .note that , since is orthogonal and is i.i.d .gaussian , is also i.i.d .gaussian . hence , with probability one , is full - rank and . to conclude the proof ,note that when , necessarily we have .e. candes , j. romberg and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee transactions on information theory _ , vol .2 , pp . 489-509 , 2006 . | blind compressed sensing ( bcs ) is an extension of compressed sensing ( cs ) where the optimal sparsifying dictionary is assumed to be unknown and subject to estimation ( in addition to the cs sparse coefficients ) . since the emergence of bcs , dictionary learning , a.k.a . sparse coding , has been studied as a matrix factorization problem where its sample complexity , uniqueness and identifiability have been addressed thoroughly . however , in spite of the strong connections between bcs and sparse coding , recent results from the sparse coding problem area have not been exploited within the context of bcs . in particular , prior bcs efforts have focused on learning constrained and complete dictionaries that limit the scope and utility of these efforts . in this paper , we develop new theoretical bounds for perfect recovery for the general _ unconstrained _ bcs problem . these unconstrained bcs bounds cover the case of overcomplete dictionaries , and hence , they go well beyond the existing bcs theory . 
our perfect recovery results integrate the combinatorial theories of sparse coding with some of the recent results from low - rank matrix recovery . in particular , we propose an efficient cs measurement scheme that results in practical recovery bounds for bcs . moreover , we discuss the performance of bcs under polynomial - time sparse coding algorithms . |
the problem of community detection in networks has received wide attention .it has proved to be a problem of remarkable subtlety , computationally challenging and with deep connections to other areas of research including machine learning , signal processing , and spin - glass theory .a large number of algorithmic approaches to the problem have been considered , but interest in recent years has focused particularly on statistical inference methods , partly because they give excellent results , but also because they are mathematically principled and , at least in some cases , provably optimal . in this paperwe study two of the most fundamental community inference methods , based on the so - called stochastic block model or its degree - corrected variant .we show that it is possible to map both methods onto the well - known minimum - cut graph partitioning problem , which allows us to adapt any of the large number of available methods for graph partitioning to solve the community detection problem . as an example, we apply the laplacian spectral partitioning method of fiedler to derive a community detection method competitive with the best currently available algorithms in terms of both speed and quality of results .the first method we consider is based on the stochastic block model , sometimes also called the planted partition model , a well studied model of community structure in networks .this model supposes a network of vertices divided into some number of groups or communities , with different probabilities for connections within and between groups .we will here focus on the simplest case of just two groups ( of any size , not necessarily equal ) . in the commonest version of the model edgesare placed independently at random between vertex pairs with probability for pairs in the same group and for pairs in different groups . in this paperwe use the slightly different poisson version of the model described in , in which we place between each pair of vertices a poisson - distributed number of edges with mean for pairs in the same group and for pairs in different groups . in essentially allreal - world networks the fraction of possible edges that are actually present in the network is extremely small ( usually modeled as vanishing in the large- limit ) , in which case the two versions of the model become indistinguishable , but the poisson version is preferred because its analysis is more straightforward . at its heart ,the statistical inference of community structure is a matter of answering the following question : if we assume an observed network is generated according to our model , what then must the parameters of that model have been ?in other words , what were the values of and used to generate the network and , more importantly , which vertices fell in which groups ?even though the model is probably not a good representation of the process by which most real - world networks are generated , the answer to this question often gives a surprisingly good estimate of the true community structure . 
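as a concrete illustration of the poisson variant of the model described above , the following minimal sketch ( parameter values are arbitrary and chosen only for illustration ) generates a two - group network by drawing a poisson - distributed number of edges for every vertex pair , with one mean inside groups and another between them .

import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 500, 500                        # group sizes (hypothetical)
omega_in, omega_out = 0.030, 0.005       # within- and between-group poisson means

g_true = np.concatenate([np.zeros(n1, dtype=int), np.ones(n2, dtype=int)])   # planted labels
omega = np.where(g_true[:, None] == g_true[None, :], omega_in, omega_out)

# multigraph adjacency: poisson number of edges per pair, symmetric, no self-edges
A = rng.poisson(omega)
A = np.triu(A, 1)
A = A + A.T

treating the adjacency matrix as a multigraph ( entries can exceed one ) matches the poisson formulation ; for sparse parameter choices like these , multi - edges are rare and the result is essentially indistinguishable from the bernoulli version of the model , as noted above .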
to answer the question , we make use of a maximum likelihood method .let us label the two groups or communities in our model group 1 and group 2 , and denote by the group to which vertex belongs .the edges in the network will be represented by an adjacency matrix having elements then the likelihood of generating a particular network or graph , given the complete set of group memberships , which we ll denote by the shorthand , and the poisson parameters , which we ll denote by , is where denotes the expected number of edges between vertices and or , depending on whether the vertices are in the same or different groups .we are assuming there are no self - edges in the network edges that connect vertices to themselves so for all .given the likelihood , one can maximize it to find the most likely values of the group labels and parameters , which can be done in a number of different ways . in ref . , for example , the likelihood was maximized first with respect to the parameters and by differentiation . applying this method to eq .gives most likely values of where and are the observed numbers of edges within and between groups respectively for a given candidate division of the network , and and are the numbers of vertices in each group . substituting these values back into eq .gives the profile likelihood , which depends on the group labels only .in fact , one typically quotes not the profile likelihood itself but its logarithm , which is easier to work with . neglecting an unimportant additive constant ,the log of the profile likelihood for the present model is the communities can now be identified by maximizing this quantity over all possible assignments of the vertices to the groups .this is still a hard task , however .there are an exponentially large number of possible assignments , so an exhaustive search through all of them is unfeasible for all but the smallest of networks. one can apply standard heuristics like simulated annealing to the problem , but in this paper we take a different approach . in the calculation above, the likelihood is maximized over first , for fixed group assignments , then over the group assignments .but we can also take the reverse approach , maximizing first over the group assignments , for given , and then over at the end .this approach is attractive for two reasons .first , as we will show , the problem of maximizing with respect to the group assignment when is given is equivalent to the standard problem of minimum - cut graph partitioning , a problem for which many excellent heuristics are already available .second , after maximizing with respect to the group assignments the remaining problem of maximizing with respect to is a one - parameter optimization that can be solved trivially .the net result is that the problem of maximum - likelihood community detection is reduced to one of performing a well - understood task graph partitioning plus one undemanding extra step .the resulting algorithm is fast and , as we will see , gives good results .so consider the problem of maximizing the likelihood , eq ., with respect to the group labels , for given values of the parameters and . we will actually maximize the logarithm of the likelihood , , \label{eq : logl1}\ ] ] which gives the same result but is usually easier .to proceed we write and as where is the kronecker delta . substituting these into eq . 
anddropping overall additive and multiplicative constants , which have no effect on the position of the maximum , the log - likelihood can be rearranged to read where which is positive whenever , meaning we have traditional community structure in our network .( it is possible to repeat the calculations for the case and derive methods for detecting such structure as well , although we will not do that here . )the quantity is the cut size of the network partition represented by our two communities , i.e. , the number of edges connecting vertices in different communities , which we previously denoted , and where as previously and are the numbers of vertices in communities 1 and 2 .thus we can also write the log - likelihood in the form the maximization of this log - likelihood corresponds to the minimization of the cut size , with an additional penalty term that favors groups of equal size .this is similar , though not identical , to the so - called ratio cut problem , in which one minimizes the ratio , which also favors groups of equal size , although the nature of the penalty for unbalanced groups is different .the catch with maximizing eq .is that we do nt know the value of , which depends on the unknown quantities and via eq . , but we can get around this problem by the following trick . we first perform a limited maximization of in which the sizes and of the groups are held fixed at some values that we choose .this means that the term is a constant and hence drops out of the problem and we are left maximizing only , or equivalently minimizing the cut - size .this problem is now precisely the standard minimum - cut problem of graph partitioning the minimization of the cut size for divisions of a graph into groups of given sizes .there are possible choices of the sizes of the two groups , ranging from putting all vertices in group 1 to all vertices in group 2 , and everything in between . if we solve the minimum - cut problem for each of these choices we get a set of solutions and we know that one of these must be the solution to our overall maximum likelihood problem .it remains only to work out which one . butchoosing between them is easy , since we know that the true maximum also maximizes the profile likelihood , eq . .so we can simply calculate the profile likelihood for each solution in turn and find the one that gives the largest result . in effect, our approach narrows the exponentially large pool of candidate divisions of the network to a one - parameter family of just solutions ( parametrized by group size ) , from which it is straightforward to pick the overall winner by exhaustive search .moreover , the individual candidate solutions are all themselves solutions of the standard minimum - cut partitioning problem , a problem that has been well studied for many years and about which a great deal is known . although partitioning problems are , in general , hard to solve exactly , there exist many heuristics that give good answers in practical situations .the approach developed here allows us to apply any of these heuristics directly to the maximum - likelihood community detection problem . 
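written out explicitly , the profile log - likelihood described above can be evaluated as in the sketch below . it uses the maximum - likelihood values omega_in_hat = 2 m_in / ( n1^2 + n2^2 ) and omega_out_hat = m_out / ( n1 n2 ) that follow from the poisson model , drops additive constants that do not affect the location of the maximum , and uses the n^2 approximation to n(n-1) for the within - group pair counts ; this is a sketch consistent with the model as described , not necessarily the exact normalization used by the authors .

import numpy as np

def profile_loglike(A, g):
    """profile log-likelihood of a two-group division, up to additive constants.

    A : symmetric adjacency (or multigraph) matrix with zero diagonal
    g : array of 0/1 group labels
    """
    same = (g[:, None] == g[None, :])
    m_in = (A * same).sum() / 2.0             # edges within groups
    m_out = (A * ~same).sum() / 2.0           # edges between groups
    n1 = int(np.sum(g == 0))
    n2 = int(np.sum(g == 1))
    if n1 == 0 or n2 == 0:
        return -np.inf                        # trivial divisions are excluded
    score = 0.0
    if m_in > 0:
        score += m_in * np.log(2.0 * m_in / (n1 ** 2 + n2 ** 2))   # m_in * log(omega_in_hat)
    if m_out > 0:
        score += m_out * np.log(m_out / (n1 * n2))                 # m_out * log(omega_out_hat)
    return score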
as an example of this approach , we demonstrate a fast and simple spectral algorithm based on the laplacian spectral bisection method for graph partitioning introduced by fiedler description of this method can be found , for example , in , where it is shown that a good approximation to the minimum - cut division of a network into two parts of specified sizes can be found by calculating the fiedler vector , which is the eigenvector of the graph laplacian matrix corresponding to the second smallest eigenvalue .( the graph laplacian is the symmetric matrix , where is the adjacency matrix and is the diagonal matrix with equal to the degree of vertex . )having calculated the fiedler vector one divides the network into groups of the required sizes and by inspecting the vector elements and assigning the vertices with the largest ( most positive ) elements to group 1 and the rest to group 2 .although the method gives only an approximation to the global minimum - cut division , practical experience ( and some rigorous results ) show that it gives good answers under commonly occurring conditions .a nice feature of this approach is that , in a single calculation , it gives us the entire one - parameter family of minimum - cut divisions of the network .we need calculate the fiedler vector only once , sort its elements in decreasing order , then cut them into two groups in each of the possible ways and calculate the profile likelihood for the resulting divisions of the network .the one with the highest score is ( an approximation to ) the maximum - likelihood community division of the network .these developments are for the standard stochastic block model . as shown in ref . , however , the standard block model gives poor results when applied to most real - world networks because the model fails to take into account the broad degree distribution such networks possess .this problem can be fixed by a relatively simple modification of the model in which the expected number of edges between vertices and is replaced by where is the degree of vertex and again depends only on which groups the vertices and belong to .all the developments for the standard block model above generalize in straightforward fashion to this `` degree - corrected '' model .the log - likelihood and log - profile likelihood become where and are the sums of the degrees of the vertices in the two groups . in other words ,the expressions are identical to those for the uncorrected model except for the replacement of the group sizes by .the maximization of is thus once again reduced to a generalized minimum - cut partitioning problem , with a penalty term proportional to , which again favors balanced groups .although we do nt know the value of , we can reduce the problem to a variant of the minimum - cut problem by the equivalent of our previous approach , holding and constant . andagain we can derive a spectral algorithm for this problem based on the graph laplacian . 
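a minimal sketch of the spectral sweep just described is given below , reusing the profile_loglike function from the previous sketch . a dense eigensolver is used for brevity ; for large networks one would use a sparse lanczos routine such as scipy.sparse.linalg.eigsh , which is the iterative computation referred to later in the text .

import numpy as np

def spectral_communities(A):
    """fiedler-vector sweep; assumes profile_loglike(A, g) from the sketch above."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    lap = np.diag(deg) - A                     # graph laplacian L = D - A
    _, vecs = np.linalg.eigh(lap)              # eigenvalues returned in ascending order
    order = np.argsort(-vecs[:, 1])            # sort vertices by their fiedler-vector element
    best_g, best_score = None, -np.inf
    for n1 in range(1, n):                     # the n - 1 non-trivial splits
        g = np.ones(n, dtype=int)
        g[order[:n1]] = 0                      # the n1 most positive elements form group 1
        score = profile_loglike(A, g)
        if score > best_score:
            best_score, best_g = score, g
    return best_g, best_score

for the degree - corrected variant described next , one would instead solve the generalized problem l v = lambda d v ( e.g. scipy.linalg.eigh(lap, np.diag(deg)) , assuming no isolated vertices so that d is positive definite ) and replace the group sizes n1 , n2 in the likelihood by the degree sums of the two groups .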
by a derivation analogous to that for the standard spectral methodwe can show that a good approximation to the problem of minimum - cut partitioning with fixed ( as opposed to fixed ) is given not by the second eigenvector of but by the second eigenvector of the generalized eigensystem , where , as previously , is the diagonal matrix of vertex degrees .once again we calculate the vector and split the vertices into two groups according to the sizes of their corresponding vector elements and once again this gives us a one - parameter family of candidate solutions from which we can choose an overall winner by finding the one with the highest profile likelihood , eq . .vertices , generated using the standard ( uncorrected ) stochastic block model with equal group sizes of 5000 vertices each and a range of strengths of the community structure . defining , ,the curves are ( top to bottom ) , 75 , 70 , 65 , and 60 , and .the dashed vertical line indicates the true size of the planted communities .the curves have been displaced from one another vertically for clarity .the vertical axis units are arbitrary because additive and multiplicative constants have been neglected in the definition of the log - likelihood .( b ) profile likelihoods for the same parameter values but unequal groups of size and .( c ) the average fraction of vertices classified correctly for networks of vertices each and two equally sized groups .each point is an average over 100 networks .statistical errors are smaller than the points in all cases .the vertical dashed line indicates the position of the `` detectability threshold '' at which community structure becomes formally undetectable . ]we have tested this method on a variety of networks , and in practice it appears to work well . figure [ fig : synthetic ] shows results from tests on a large group of synthetic ( i.e. , computer - generated ) networks .these networks were themselves generated using the standard stochastic block model ( which is commonly used as a benchmark for community detection ) .the two left panels in the figure show the value of the profile likelihood for the families of candidate solutions generated by the spectral calculation for networks with two equally sized groups ( top ) and with unequal groups ( bottom ) . 
in each casethere is a clear peak in the profile likelihood at the correct group sizes , suggesting that the algorithm has correctly identified the group membership of most vertices .the third panel in fig .[ fig : synthetic ] tests this conclusion by calculating the fraction of correctly identified vertices as a function of the strength of the community structure for equally sized groups ( which is the most difficult case ) .as the figure shows , the algorithm correctly identifies most vertices over a large portion of the parameter space .the vertical dashed line represents the `` detectability threshold '' identified by previous authors , below which it is believed that every method of community detection must fail .our algorithm fails below this point also , but appears to work well essentially all the way down to the transition , and there are reasons to believe this result to be exact , at least for networks that are not too sparse .+ + figure [ fig : results ] shows the results of applications of the algorithm to two well - studied real - world networks , zachary s `` karate club '' network and adamic and glance s network of political blogs .both are known to have pronounced community structure and the divisions found by our spectral algorithm mirror closely the accepted communities in both cases .in addition to being effective , the algorithm is also fast .the computation of the eigenvector can be done using , for instance , the lanczos method , an iterative method which takes time per iteration , where is the number of edges in the network .the number of iterations required is typically small , although the exact number is not known in general .the search for the division that maximizes the profile likelihood can also be achieved in time . of the different divisions of the network that must be considered ,each one differs from the previous one by the movement of just a single vertex from one group to the other .the movement of vertex between groups causes the quantities appearing in eq . to change according to where equals the number of edges between and vertices in group 1 minus the number between and vertices in group 2 .these quantities and the resulting change in the profile likelihood can be calculated in time proportional to the degree of the vertex and hence all vertices can be moved in time proportional to the sum of all degrees in the network , which is equal to .thus , to leading order , the total running time of the algorithm goes as times the number of lanczos iterations , the latter typically being small , and in practice the method is about as fast as the best competing algorithms .in this paper we have shown that the widely - studied maximum likelihood method for community detection in networks can be reduced to a search through a small family of candidate solutions , each of which is itself the solution to a minimum - cut graph partitioning problem , which is a well studied problem about which much is known .this mapping allows us to use trusted partitioning heuristics to solve the community detection problem . as an examplewe have adapted the method of laplacian spectral partitioning to derive a spectral likelihood maximization algorithm and tested its performance on both synthetic and real - world networks . 
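returning briefly to the running - time discussion above , the single - vertex bookkeeping can be made explicit in code . the sketch below ( illustrative only , using the same likelihood expression as the earlier sketches ) maintains m_in , m_out , n1 and n2 as vertices are moved one at a time along the sorted fiedler order , which is what makes the whole sweep linear in the number of edges .

import numpy as np

def sweep_scores(A, order):
    """score every split along `order` using the single-vertex updates described above.

    A : adjacency matrix with zero diagonal, order : vertices sorted by fiedler element.
    all vertices start in group 2 and are moved into group 1 one at a time;
    with an adjacency-list representation each move costs time ~ degree of the vertex.
    """
    n = A.shape[0]
    in1 = np.zeros(n, dtype=bool)
    m_in = A.sum() / 2.0                       # initially every edge lies inside group 2
    m_out, n1, n2 = 0.0, 0, n
    scores = []
    for v in order[:-1]:                       # keep at least one vertex in group 2
        d1 = A[v, in1].sum()                   # edges from v to the current group 1
        d2 = A[v, ~in1].sum()                  # edges from v to the current group 2
        in1[v] = True
        m_in += d1 - d2                        # k_v = d1 - d2, as in the text
        m_out += d2 - d1
        n1, n2 = n1 + 1, n2 - 1
        s = 0.0
        if m_in > 0:
            s += m_in * np.log(2.0 * m_in / (n1 ** 2 + n2 ** 2))
        if m_out > 0:
            s += m_out * np.log(m_out / (n1 * n2))
        scores.append(s)
    return scores                              # scores[r - 1] : split with r vertices in group 1

the best division is then simply the argmax of the returned scores .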
in terms of both accuracy and speedwe find the algorithm to be competitive with the best current methods .a number of extensions of our approach would be possible , including extensions with more general forms for the parameters , such as different values of and for different groups , or extensions to more than two groups , but we leave these for future work .the author would like to thank charlie doering , tammy kolda , and raj rao nadakuditi for useful conversations and lada adamic for providing the data for the network of political blogs .this work was funded in part by the national science foundation under grant dms1107796 and by the air force office of scientific research ( afosr ) and the defense advanced research projects agency ( darpa ) under grant fa95501210432 . | many methods have been proposed for community detection in networks . some of the most promising are methods based on statistical inference , which rest on solid mathematical foundations and return excellent results in practice . in this paper we show that two of the most widely used inference methods can be mapped directly onto versions of the standard minimum - cut graph partitioning problem , which allows us to apply any of the many well - understood partitioning algorithms to the solution of community detection problems . we illustrate the approach by adapting the laplacian spectral partitioning method to perform community inference , testing the resulting algorithm on a range of examples , including computer - generated and real - world networks . both the quality of the results and the running time rival the best previous methods . |
while many biological processes can be well described with classical mechanics , there has been much interest and debate as to the role of quantum effects in biological systems ranging from photosynthetic energy transfer , to photoinduced isomerization in the vision cycle and avian magnetoreception .for example , nuclear quantum effects , such as tunneling and zero - point energy ( zpe ) , have been observed to lead to _ kinetic _ isotope effects of greater than 100 in biological proton and proton - coupled electron transfer processes .however , the role of nuclear quantum effects in determining the ground state thermodynamic properties of biological systems , manifesting as _ equilibrium _ isotope effects , has gained significantly less attention . intermediate and ksi complex .schematic depiction of ( a ) the ksi complex during the catalytic cycle ( fig .s1 ) and ( b ) a complex between ksi and phenol , an inhibitor which acts as an intermediate analog .both the intermediate and inhibitor are stabilized by a hydrogen bond network in the active site of ksi .( c ) image of ksi with the tyrosine triad enlarged and the atoms o16 , h16 , o32 , h32 and o57 labeled ( shown with tyr57 deprotonated ) . ] ketosteroid isomerase ( ksi ) possesses one of the highest enzyme unimolecular rate constants and is thus considered a paradigm of proton transfer catalysis in enzymology .ksi s remarkable rate is intimately connected to the formation of a hydrogen bond network in its active site ( fig .1a ) which acts to stabilize a charged dienolate intermediate , lowering its free energy by kcal / mol relative to solution ( fig .s1 ) . this extended hydrogen bond network in the active site links the substrate to asp103 and tyr16 , with the latter further hydrogen bonded to tyr57 and tyr32 as shown in fig .1a . the mutant ksi preserves the structure of the wild - type enzyme while mimicking the protonation state of residue 40 in the intermediate complex ( fig .1b ) , therefore permitting experimental investigation of an intermediate - like state of the enzyme .experiments have identified that , in the absence of an inhibitor , one of the residues in the active site of ksi is deprotonated . although one might expect the carboxylic acid of asp103 to be deprotonated, the combination of recent nmr and uv - vis experiments has shown that the ionization resides primarily on the hydroxyl group of tyr57 , which possesses an anomalously low of 6.3 0.1 .such a large tyrosine acidity is often associated with specific stabilizing electrostatic interactions ( such as a metal ion or cationic residue in close proximity ) , which is not the case here suggesting an additional stabilization mechanism is at play .one possible explanation is suggested by the close proximity of the oxygen atoms ( o ) on the side chains of the adjacent residues tyr16 ( o16 ) and tyr32 ( o32 ) to the deprotonated o on tyr57 ( o57 , fig .1c ) . in several high - resolution crystal structures ,these distances are found to be around 2.6 , which are much shorter than those observed in hydrogen bonded liquids such as water , where o o distances are typically around 2.85 .such short heavy atom distances are only slightly larger than those typically associated with low - barrier hydrogen bonds ( lbhb ) , where extensive proton sharing is expected to occur between the atoms . 
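for orientation , the thermal de broglie wavelength of a proton at room temperature can be estimated in a few lines ( a standard textbook estimate , included only to make the comparison with the roughly 2.6 angstrom o o separations noted above concrete ) .

import math

h = 6.62607015e-34        # planck constant, J s
kB = 1.380649e-23         # boltzmann constant, J / K
m_p = 1.6726219e-27       # proton mass, kg
T = 300.0                 # temperature, K

lam = h / math.sqrt(2.0 * math.pi * m_p * kB * T)   # thermal de broglie wavelength
print(lam * 1e10)         # ~1.0 angstrom, a substantial fraction of a 2.6 angstrom o-o separation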
in addition , at these short distances the proton s position uncertainty ( de broglie wavelength ) becomes comparable to the o o distance , indicating that nuclear quantum effects could play an important role in stabilizing the deprotonated residue ( fig 1c ) . in this work, we demonstrate how nuclear quantum effects determine the properties of protons in ksi s active site hydrogen bond network in the absence and presence of an intermediate analog by combining path integral simulations and isotope effect experiments .to assess the impact of nuclear quantum effects on the anomalous acidity of tyr57 , we measured the isotope effect on the acid dissociation constant upon substituting hydrogens ( h ) in the hydrogen bond network with deuterium ( d ) .because tyrosinate absorbs light at 300 nm more intensely than tyrosine , titration curves were generated by recording uv spectra of ksi at different s ( where _ l _ is h or d ) .these experiments ( fig .2 and table s1 ) reveal a change in upon h / d substitution ( ) of 1.1 0.14 for tyrosine in the ksi active site .this isotope effect is much larger than that observed for tyrosine in solution ( = 0.53 0.08 ) , and is also , to the best of our knowledge , the largest recorded isotope effect .changes in static equilibrium properties , such as the , upon isotope substitution arise entirely from the quantum mechanical nature of nuclei .such a large excess isotope effect , defined as , of 0.57 0.16 thus indicates that the tyrosine triad in the active site of ksi exhibits much larger nuclear quantum effects than those observed for tyrosine in aqueous solution .given the possible role of nuclear quantum effects in ksi , can one estimate how much the quantum nature of protons changes the acidity of tyr57 compared to a situation in which all the nuclei in the enzyme active site were classical ( ) ?in the quasi - harmonic limit one can show that the varies as the inverse square root of the particle mass ( m ) . by using this relation and the experimental values, we can extrapolate to the classical ( ) limit which yields that the of tyr57 in ksi would be 10.1 0.5 if the hydrogens were classical particles ( fig .relative to the observed of 6.3 0.1 , this implies nuclear quantum effects lower the or tyr57 by 3.8 0.5 units : an almost four orders of magnitude change in the acid dissociation constant . to provide insights into the molecular origins of the nuclear and electronic quantum effects which stabilize the deprotonated tyr57 residue , we performed simulations of ksi . to treat the electronic structure in the active site we performed _ab initio _ molecular dynamics ( aimd ) simulations using a qm / mm approach in which the qm region was treated by density functional theory at the b3lyp - d3 level ( see methods ) .these simulations allow for bond breaking and formation as dictated by the instantaneous electronic structure rather than pre - defined bonding rules .aimd simulations are typically performed treating the nuclei as classical particles . 
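the classical - limit extrapolation quoted above can be reproduced with a short calculation . it assumes , as stated , the quasi - harmonic scaling in which the isotope - dependent part of the pka varies as the inverse square root of the particle mass , i.e. pka(m) = pka_classical + c / sqrt(m) with m_h = 1 and m_d = 2 ; this is a sketch of that extrapolation , not the authors' exact fitting procedure .

import math

pKa_H = 6.3                      # measured pKa of tyr57 with protium
dpKa = 1.1                       # measured pKa(D) - pKa(H)

# pKa(m) = pKa_cl + c / sqrt(m)  =>  dpKa = c * (1/sqrt(2) - 1)
c = dpKa / (1.0 / math.sqrt(2.0) - 1.0)
pKa_classical = pKa_H - c        # classical nuclei correspond to m -> infinity
print(round(pKa_classical, 1))   # ~10.1, i.e. quantum nuclei lower the pKa by ~3.8 units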
however, a classical treatment of the nuclei would predict that the would not change upon isotope substitution .nuclear quantum effects can be exactly included in the static equilibrium properties for a given description of the electronic structure using the path integral formalism of quantum mechanics , which exploits the exact mapping of a system of quantum mechanical particles onto a classical system of ring polymers .we combined this formalism with on - the - fly electronic structure calculations and performed _ ab initio _ path integral molecular dynamics ( ai - pimd ) simulations of ksi .these simulations treat both the nuclear and electronic degrees of freedom quantum mechanically in the active site qm region , and also incorporate the fluctuations of the protein and solvent environment in the mm region .the simulations consisted of between 47 and 68 qm atoms and more than 52,000 mm atoms describing the rest of the protein and solvent ( table s2 ) .these simulations , which until recently would have been computationally prohibitive , were made possible by accelerating the pimd convergence using a generalized langevin equation , utilizing new methods to accelerate the extraction of isotope effects and exploiting graphical processing units ( gpus ) to perform efficient electronic structure theory evaluations via an interface to the terachem code .such a combination yielded almost three orders of magnitude speed - up compared to existing ai - pimd approaches , allowing 1.1 ps / day of simulation to be obtained using 6 nvidia titan gpus .we have recently shown that ai - pimd simulations using the b3lyp - d3 functional give excellent predictions of isotope effects in water , validating such a combination for the simulation of isotope effects in hydrogen bonded systems . the excess isotope effect , , obtained from our simulations ( see si methods ic ) was 0.50 0.03 , in excellent agreement with the experimental value of 0.57 0.16 .the average distance between o57 and the adjacent o16 and o32 atoms obtained in our simulations were 2.56 and 2.57 , with a standard deviation in both cases of 0.09 .the distribution of distances between o16 and o57 explored in the simulation is shown in the fig .these average o o distances are slightly smaller than ( and within the margin of error of ) those in the starting crystal structure ( 2.6 ) .as we will discuss below , the close proximity of the neighboring o16 and o32 groups plays a crucial role in the origins of the observed isotope effect .figure 3a - c shows snapshots from aimd simulations in which the nuclei are treated classically ( fig .3a ) or quantum mechanically using the path integral formalism ( ai - pimd , fig . 3b and c ) , while videos of the simulation trajectories are provided in si videos 1 - 3 . for the quantum simulationsthe h16 and h32 protons are shown as their full ring polymers , which arise from the path integral quantum mechanics formalism .the spread of the ring polymer representing each proton is related to its de broglie wavelength ( quantum mechanical position uncertainty ) .the uncertainty principle dictates that localization of a quantum mechanical particle increases its quantum kinetic energy .the protons will thus attempt to delocalize , i.e. 
spread their ring polymers , to reduce this energetic penalty .the resulting proton positions in fig .3b arise from the interplay between the chemical environment , such as the covalent o h bond , which acts to localize the proton and the quantum kinetic energy penalty that must be paid to confine a quantum particle .inclusion of nuclear quantum effects thus allows the protons to delocalize between the hydroxyl oxygens to mitigate the quantum kinetic energy penalty ( fig .3b ) , which is not observed classically ( fig . 3a ) .confinement of d , which due to its larger mass has a smaller position uncertainty , leads to a much less severe quantum kinetic energy penalty and hence less delocalization ( fig .3c ) . to characterize the degree of proton delocalizationwe define a proton sharing coordinate , where is the distance of proton hx from oxygen atom ox and x=16 or 32 .hence corresponds to a proton that is equidistant between the oxygen atoms of tyrx and tyr57 , while a positive value indicates proton transfer to tyr57 from tyrx .figures 3 d - f show the probability distribution along the proton sharing coordinates and for classical nuclei and quantum nuclei for h and d , respectively .the free energies along and are provided in fig .s3 . in the classical aimd simulation , h16 and h32remain bound to their respective oxygens throughout the simulation ( and are negative ) with tyr57 ionized 99.96% of the time ( fig .3d ) . however , upon including nuclear quantum effects ( ai - pimd simulations ) there is a dramatic increase in the range of values and can explore ( fig .3e ) . in particular ,the probability that tyr57 is protonated ( ) increases by about 150-fold for h upon including quantum effects ( fig .3e ) , with the proton `` hole '' equally shifted onto the adjacent tyr16 or tyr32 residues .proton transfers between the residues are observed frequently ( si videos 2 and 3 ) with site lifetimes on the order of 60 and 200 fs in the h and d simulations , respectively .although pimd simulations exactly include nuclear quantum effects for calculating static properties , they do not allow rigorous extraction of time - dependent properties ; nevertheless , they offer a crude way to assess the time - scale of the proton motion .the frequent transfers observed are also consistent with fig .3e and f , which show a monotonic decrease in the probability along both and , i.e. although the proton transferred state is lower in probability , the proton transfer process along each of the proton sharing coordinates contains no free energy barrier ( fig .s3 ) and is thus kinetically fast . 
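the proton - sharing coordinate used above is straightforward to evaluate from simulation coordinates . the sketch below assumes the coordinate for proton hx is the o_x h_x distance minus the h_x o57 distance , consistent with the sign convention described above ( zero when the proton is equidistant , positive once it has transferred to tyr57 ) ; the array names , shapes and example numbers are hypothetical .

import numpy as np

def sharing_coordinate(r_Ox, r_Hx, r_O57):
    """nu_x = |r_Hx - r_Ox| - |r_Hx - r_O57| for each frame (or ring-polymer bead).

    inputs are (n_frames, 3) arrays of cartesian coordinates in angstrom.
    """
    d_donor = np.linalg.norm(r_Hx - r_Ox, axis=-1)
    d_acceptor = np.linalg.norm(r_Hx - r_O57, axis=-1)
    return d_donor - d_acceptor

# toy usage: fraction of frames in which the proton sits closer to tyr57 (nu > 0)
rng = np.random.default_rng(0)
r_O16 = np.zeros((1000, 3))
r_O57 = np.tile([2.56, 0.0, 0.0], (1000, 1))
r_H16 = np.column_stack([rng.normal(1.05, 0.25, 1000), np.zeros(1000), np.zeros(1000)])
nu = sharing_coordinate(r_O16, r_H16, r_O57)
print((nu > 0).mean())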
as an experimental counterpart, we used chemical shifts of -tyr labeled ksi as a measure of fractional ionization of each tyr residue ( si methods ib ) .this analysis yielded values of 79% for the tyr57 ionization for h and 86% for d compared with simulated values of 94.2% and 98.3% ( 0.3% ) , respectively .this represents good quantitative agreement , since the population difference amounts to a difference in the relative free energy between experiment and theory of 0.7 kcal / mol an error which is within the expected accuracy of the electronic structure approach employed .in addition , the change in the ionization of tyr57 obtained experimentally upon exchanging h for d ( 7% ) is in good agreement with the value predicted from our simulations ( 4.1% ) .the under - prediction of the isotope effect on fractional ionization from our simulations is in line with the slightly low value of the simulated excess isotope effect , consistent with recent observations that the b3lyp - d3 density functional slightly underestimates the degree of proton sharing , and hence isotope effects , in hydrogen bonded systems upon including nuclear quantum effects .the large degree of proton sharing with the deprotonated tyr57 residue upon including nuclear quantum effects can be elucidated by considering the potential energy required , , to move a proton in the ksi tyrosine triad from its energetic minimum to a perfectly shared position between the two tyrosine groups ( ) . depends strongly on the positions of the residues comprising the triad , and in particular on the separation between the proton donor and acceptor oxygen atoms .figure 4a shows computed as a function of the distance between o16 and o57 , , for the tyrosine triad in the absence of the protein environment .removing the protein environment allows us to examine how changes in the triad distances from their positions in the enzyme affects the proton delocalization behavior without introducing steric overlaps with other active site residues ( see si methods i d ) .figure 4a shows that for the range of oxygen distances observed in the tyrosine triad ( = 2.50 2.65 ) for h16 is 3 - 6 kcal / mol .this is 6 - 12 times the thermal energy ( t ) available at 300 k , leading to a very low thermal probability of the proton shared state ( lower than ) .however , upon including nuclear quantum effects the system possesses zpe , which in this system is kcal / mol .the zpe closely matches and thus floods the potential energy wells along the proton sharing coordinate ( fig .4b ) allowing facile proton sharing ( fig .3 ) , i.e. inducing a transition to a lbhb type regime where the protons are quantum mechanically delocalized between the hydrogen bonded heavy atoms . this leads to qualitatively different behavior of the protons in the active site of ksi : _ from classical hydrogen bonding to quantum delocalization . 
_the proton delocalization between the residues allows for ionization to be shared among three tyrosines to stabilize the deprotonation of tyr57 , leading to the large observed shift relative to the value in the classical limit ( fig .this change in proton behavior gives rise to the large excess isotope effect since an o d stretch possesses a zpe of kcal / mol , which is no longer sufficient to fully flood the potential energy well in the proton sharing coordinate .as the o o separation is decreased below the values observed in ksi s tyrosine triad , becomes negligible compared to the thermal energy ( .6 kcal / mol at 300 k ) .hence , at very short distances ( 2.3 ) thermal fluctuations alone permit extensive proton sharing between the residues and the zpe plays a negligible role in determining the protons positions . thus one would expect a small isotope effect . on the other hand , at bond lengths in excess of 2.7 , becomes so large ( 8 kcal / mol , fig .4a ) that the zpe is not sufficient to flood the barriers , also resulting in a small expected isotope effect .the large excess isotope effect in ksi thus arises from the close matching of the zpe and depth of the energetic well ( ) which is highly sensitive to the o o distance .hence although proton delocalization can occur classically at short o o distances ( 2.3 ) , nuclear quantum effects allow this to occur for a much wider range of o o distances ( up to .6 ) , making delocalization feasible without incurring the steep steric costs that would be associated with bringing oxygen atoms any closer .the distances in ksi s active site triad motif thus maximize quantum proton delocalization , which acts to stabilize the deprotonated residue . ) as a function of the hydrogen bond donor - acceptor o o distance ( ) as compared to the zero - point energy .( a ) as a function of the oxygen - oxygen distance ( ) between o16 and o57 using the tyrosine triad geometry from a crystal structure ( details in si methods i d ) .inset figure shows the probability distribution of obtained from the ai - pimd simulation of ksi with ionized tyr57 .the probabilities are normalized by their maximum values .( b ) potential energy as a function of the proton transfer coordinate , , for = 2.6 indicating values for the hydrogen and deuterium ( o h and o d ) zero point energies , and .the position of tyr32 is fixed as the proton h16 is scanned along . ] finally , we considered the role of nuclear quantum effects when an intermediate analog participates in the active site hydrogen bond network .recent experiments have investigated how the binding of intermediate analogs to ksi affects the sharing of ionization along the extended hydrogen bond network that is formed ( fig .1b ) .these experiments identified that ionization sharing is maximized when phenol , whose solution of 10 equals that of the actual intermediate of ksi ( fig .1a ) , is bound .we thus performed ai - pimd simulations of the ksi complex .the protons were observed to be delocalized across the network ( fig .5 ) with partial ionizations of tyr57 , tyr16 and phenol calculated to be 18.5% , 56.7% and 22.5% ( 0.7% ) , compared with estimates from previous experiments using nmr of 40% , 40% , 20% . hence simulation and experiment are in good agreement that the ionization is shared almost equally among the three residues , i.e. 
that there is almost no difference ( 2 t ) in the free energy upon shifting the ionization among any of the three groups .therefore , the ability of protons to delocalize within the ksi tyrosine triad , initially found in the enzyme in the absence of the intermediate analog and manifested as a strongly perturbed , extends upon incorporation of the intermediate analog , which shifts the center of the ionization along the network from tyr57 to tyr16 . in both cases ,proton delocalization acts to share the ionization of a negatively charged group , which suggests that ksi could utilize quantum delocalization in its active site hydrogen bond network to distribute the ionization arising in its intermediate complex ( fig .1a ) so as to provide energetic stabilization .in conclusion , ksi exhibits a large equilibrium isotope effect in the acidity of its active site tyrosine residues arising from a highly specialized triad motif , consisting of several short o o distances , whose positions enhance quantum delocalization of protons within the active site hydrogen bond network .this delocalization manifests in a very large isotope effect and substantial acidity shift .our simulations , which include electronic quantum effects and exactly treat the quantum nature of the nuclei , show qualitatively and quantitatively different proton behavior compared to conventional simulations in which the nuclei are treated classically , and provide good agreement with experiment . the ability to perform such simulations thus offers the opportunity to investigate in unprecedented detail the plethora of systemsin which short - strong hydrogen bonds occur , where incorporating both nuclear and electronic quantum effects is crucial to understand their functions .wild - type and d40n ksi from pseudomonas putida were over - expressed in bl-21 a1 cells ( invitrogen ) , isolated by affinity chromatography using a custom - designed deoxycholate - bound column resin , and purified by gel - filtration chromatography ( ge healthcare ) as described previously .for nmr experiments , -tyrosine was incorporated into ksi according to methods described previously .a series of buffers was prepared with a pl between 4 and 10 by weighing portions of a weak acid and its sodium - conjugate base salt , and adding the appropriate form of distilled deionized water ( millipore h , spectra stable isotopes sterile - filtered d ( % ) ) .buffers were prepared at 40 mm .tyr57 is solvent accessible , so the tyrosine residues in the active site network are expected to be fully deuterated in d solution .the following buffer systems were used for the following pl ranges : acetic acid / sodium acetate , 45.25 ; sodium monobasic phosphate / dibasic phosphate , 5.58.25 ; sodium bicarbonate / sodium carbonate , 8.510 .buffers were stored at room temperature with caps firmly sealed .after preparation of buffers , pl was recorded using an orion2 star glass electrode ( thermo ) , immediately following calibration with standard buffers at ph 4 , 7 , and 10 . in h ,the ph of the buffer was taken as the reading on the electrode . 
in d , the pd of the bufferwas calculated by adding 0.41 to the operational ph * from the electrode reading .a series of samples for titration was prepared by combining 60 protein ( 100 m stock in buffer - free l ) , buffer ( 150 of 40 mm stock ) , and extra l .the final samples were 600 , 10 m protein , 10 mm buffer .uv - vis measurements were carried out on the samples on a lambda 25 spectrophotometer ( perkin elmer ) , acquiring data from 400 to 200 nm with a 1.0 nm data interval and 960 nm / min scan rate and 1.00 nm slit width . for each measurementa background was taken to the pure buffer of a given pl , before acquiring on the protein - containing sample .spectra were recorded in duplicate to control for random detector error .the spectra were baselined by setting the absorption at 320 nm to zero , and the change in absorption at 300 nm was followed at varying pl s , employing a previous established method to determine the fractional ionization of a tyrosine - tyrosinate pair .the error in a from comparing duplicate spectra following baselining was generally between 02% .for each pl , the average a was calculated and converted to an extinction coefficient ( ) .the titration experiment was repeated on two independently prepared buffer stocks to control for error in buffer preparation .ai - pimd and aimd simulations were performed using a qm / mm approach of ksi with tyr57 protonated , ksi with tyr57 ionized , ksi with the intermediate analog bound , and tyrosine in aqueous solution .the simulations were carried out in the nvt ensemble at 300 k with a time step of 0.5 fs .the path integral generalized langevin equation ( piglet ) approach was utilized , which allowed results within the statistical error bars to be obtained using only 6 path integral beads to represent each particle .the electronic structure in the qm region was evaluated using the b3lyp functional with dispersion corrections .the 6 - 31 g * basis set was used as we found it to produce proton transfer potential energy profiles with a mean absolute error of less than 0.4 kcal / mol for this system ( see fig .s6 ) . energies and forces in the qm region and the electrostatic interactions between the qm and mm regions were obtained using an mpi interface to the gpu - accelerated terachem package .atoms in the mm region were described using the amber03 force field , and the tip3p water model .the simulations were performed using periodic boundary conditions with ewald summation to treat long - range electrostatic interactions .the energies and forces within the mm region , and the lennard - jones interactions between the qm and mm regions were calculated by mpi calls to the lammps molecular dynamics package .the qm region of ksi contained the p - methylene phenol side chains of residues tyr16 , tyr32 and tyr57 ( fig .s5a ) . 
for ksi with the intermediate analog , residue asp103 and the bound intermediate analogwere also included in the qm region ( fig .the qm region of tyrosine in solution contained the side chain of the tyrosine residue and the 41 water molecules within 6.5 of the side chain o all bonds across the qm / mm interface were capped with hydrogen link atoms in the qm region .these capping atoms were constrained to be along the bisected bonds and do not interact with the mm region .the initial configuration of ksi was obtained from a crystal structure ( pdb i d 1ogx ) .for ksi with the intermediate analog , a crystal structure ( pdb i d 3vgn ) was used with the ligand changed to phenol .the crystal structures were solvated in tip3p water and energy minimized before performing ai - pimd simulations . the initial configuration for tyrosine in aqueous solutionwas obtained by solvating the amino acid in tip3p using the amber03 force field and equilibrating for 5 ns in the npt ensemble at a temperature of 300 k and pressure of 1 bar .each system was then equilibrated for 10 ps followed by production runs of 30 ps . to calculate the excess isotope effect , , ( si methods ic ) we used the thermodynamic free energy perturbation ( td - fep ) path integral estimator . combined with an appropriate choice of the integration variable to smooth the free energy derivatives , this allowed us to evaluate the isotope effects in the liquid phase using only a single ai - pimd trajectory .simulations performed with d substitution showed no change within the statistical error bars reported .t.e.m . acknowledges support from a terman fellowship , an alfred p. sloan research fellowship , a hellman faculty scholar fund fellowship and stanford university start - up funds .l.w . acknowledges a postdoctoral fellowship from the stanford center for molecular analysis and design .this work used the extreme science and engineering discovery environment ( xsede ) , which is supported by national science foundation grant number aci-1053575 ( project number tg - che140013 ) .s.d.f . 
would like to thank the nsf predoctoral fellowship program and the stanford bio - x interdisciplinary graduate fellowship for support .this work was supported in part by a grant from the nih ( grant gm27738 to sgb ) .the authors are extremely grateful to christine isborn , nathan luehr and todd martinez for their assistance in interfacing our program to the terachem code .48ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] http://dx.doi.org/10.1038/nphys2474 [ * * , ( ) ] http://dx.doi.org/10.1046/j.1432-1033.2002.03020.x [ * * , ( ) ] link:\doibase 10.1146/annurev - biochem-051710 - 133623 [ * * , ( ) ] link:\doibase 10.1021/ja102004b [ * * , ( ) ] http://www.jbc.org/content/275/52/41100.abstract [ * * , ( ) ] link:\doibase 10.1021/bi026873i [ * * , ( ) ] , http://www.sciencedirect.com/science/article/pii/s0045206804000550 [ * * , ( ) ] link:\doibase 10.1021/bi061752u [ * * , ( ) ] , link:\doibase 10.1021/ja102714u [ * * , ( ) ] , link:\doibase 10.1021/bi101428e [ * * , ( ) ] , link:\doibase 10.1021/bi4000113 [ * * , ( ) ] , link:\doibase 10.1021/bi301491r [ * * , ( ) ] , http://www.pnas.org/content/early/2012/01/10/1111566109.abstract n2 - understanding the electrostatic forces and features within highly heterogeneous , anisotropic , and chemically complex enzyme active sites and their connection to biological catalysis remains a longstanding challenge , in part due to the paucity of incisive experimental probes of electrostatic properties within proteins . to quantitatively assess the landscape of electrostatic fields at discrete locations and orientations within an enzyme active site , we have incorporated site - specific thiocyanate vibrational probes into multiple positions within bacterial ketosteroid isomerase .a battery of x - ray crystallographic , vibrational stark spectroscopy , and nmr studies revealed electrostatic field heterogeneity of 8 mv / cm between active site probe locations and widely differing sensitivities of discrete probes to common electrostatic perturbations from mutation , ligand binding , and ph changes .electrostatic calculations based on active site ionization states assigned by literature precedent and computational pka prediction were unable to quantitatively account for the observed vibrational band shifts .however , electrostatic models of the d40n mutant gave qualitative agreement with the observed vibrational effects when an unusual ionization of an active site tyrosine with a pka near 7 was included .uv - absorbance and 13c nmr experiments confirmed the presence of a tyrosinate in the active site , in agreement with electrostatic models .this work provides the most direct measure of the heterogeneous and anisotropic nature of the electrostatic environment within an enzyme active site , and these measurements provide incisive benchmarks for further developing accurate computational models and a foundation for future tests of electrostatics in enzymatic catalysis .[ * * , ( ) ] http://www.pnas.org/content/110/30/12271.abstract [ * * , ( ) ] http://www.pnas.org/content/110/28/e2552.abstract [ * * , ( ) ] link:\doibase 10.1021/bi401083b [ * * , ( ) ] link:\doibase 10.1021/ja803928 m [ * * , ( ) ] http://www.sciencemag.org/content/264/5167/1887.abstract [ * * , ( ) ] link:\doibase 10.1126/science.7661899 [ * * , ( ) ] , http://www.pnas.org/content/93/16/8220.abstract [ * * , ( ) ] http://books.google.com/books?id=scgx1ru6uxic[__ ] ( , 
| enzymes utilize protein architectures to create highly specialized structural motifs that can greatly enhance the rates of complex chemical transformations . here we use experiments , combined with _ ab initio _ simulations that exactly include nuclear quantum effects , to show that a triad of strongly hydrogen bonded tyrosine residues within the active site of the enzyme ketosteroid isomerase ( ksi ) facilitates quantum proton delocalization . this delocalization dramatically stabilizes the deprotonation of an active site tyrosine residue , resulting in a very large isotope effect on its acidity . when an intermediate analog is docked , it is incorporated into the hydrogen bond network , giving rise to extended quantum proton delocalization in the active site . these results shed light on the role of nuclear quantum effects in the hydrogen bond network that stabilizes the reactive intermediate of ksi , and the behavior of protons in biological systems containing strong hydrogen bonds . |
the efficient market hypothesis ( emh ) has a significant influence on theory as well as on practical business in the financial literature . various evidence concerning market efficiency has been discussed recently . the predictability of future price changes in the stock market is also a very interesting area in the financial field . the degree of efficiency and the predictability are generally known to be intimately related . in the weak - form emh , a lower degree of efficiency means that past price changes are highly useful for the prediction of future price changes , whereas it is relatively difficult to predict future price changes when the degree of efficiency is higher . however , the relationship between the degree of efficiency and prediction power has received relatively little empirical study . in this study , we investigate this relationship using stock market indices . the present study needs a method to quantify the degree of efficiency and the prediction power . the concept of efficiency used here corresponds to the weak - form emh , which concerns whether the information of past price change patterns is useful for the prediction of future price changes . therefore , we employ quantified measurements of the degree of efficiency and a prediction method that uses this property directly , both based on the degree of similarity of price change patterns in the time series . we use the hurst exponent to observe the long - term memory , and the approximate entropy ( apen ) to observe the randomness in the time series . for the quantitative measurement of prediction power , the hit - rate estimated by the nearest - neighbor prediction method ( nn , also called a time - delay embedding technique or a local linear predictor ) is used . the nn method , which uses data reconstructed by the embedding , is useful for predicting future price changes based on the degree of similarity of price change patterns . the existence of a long - term memory property in financial time series is a well - known research topic in the financial field as well as in other scientific fields . when a market has a long - term memory property , the market does not reflect the existing information immediately and sufficiently . therefore , if a financial time series has a long - term memory property , the information of past price changes becomes valuable for predicting future price changes . the long - term memory property of the return and volatility in stock markets has been an active research field . the hurst exponent , a measurement of the long - term memory property , quantifies the degree of market efficiency of a stock market . however , empirical evidence on whether the degree of efficiency measured through the long - term memory property relates directly to the power to predict future price changes from the patterns of past price changes has not yet been provided .
in this study , we empirically investigate the relationship between the hurst exponent values , which quantify the degree of efficiency of a stock market , and the prediction power for the direction of future price changes . the apen is also a measurement for quantifying the degree of efficiency of a financial time series , as it quantitatively estimates the degree of randomness in the series ; it reflects complexity , randomness , and predictability . let us consider the similarity of price change patterns in a financial time series . when the frequency of similar patterns is high , the randomness of the time series is low and the apen is also low . however , the apen has a high value if this frequency is low . previous works have introduced and applied the apen measurement to financial data and have argued , using foreign exchange data , that the apen value carries significant information for measuring the degree of efficiency . according to these previous studies , the correlation between the hurst exponent and the apen value is negative . however , evidence on whether the degree of efficiency due to randomness relates directly to the power to predict future price changes from the patterns of past price changes has not yet been provided . in this study , we empirically investigate the relationship between the apen values , which quantify the degree of efficiency of stock markets , and the prediction power for future price changes . we also study a prediction method that directly uses the similarity of past price change patterns for the prediction of future prices . we employ the nn method to predict future price changes based on the similarity of price change patterns . according to the results of previous studies , the nn method is useful for predicting financial time series within a short time frame . in this study , we also investigate the relationship between the prediction power estimated by the nn method and the hurst exponent and the apen , respectively . this investigation reveals the relationship between prediction power and the degree of efficiency in a financial time series . the prediction power that we use in this paper is a hit - rate , which quantifies the consistency between the actual price change and the one predicted by the nn method . according to the established research purpose and previous studies , we originally expected a positive relationship between the hurst exponent and prediction power . in other words , a market index with a higher hurst exponent shows , on average , a higher prediction power than one with a lower hurst exponent . we also expected a negative relationship between the apen value and prediction power ; a market index with a lower apen value has , on average , a higher prediction power than one with a higher apen . through this study , we find the following results . first , the relationship between the average hurst exponent , as the long - term memory property , and the average apen value , as the randomness , is negative , where the coefficient represents the cross - correlation over the stock markets . second , the average prediction power for the direction of future price changes shows a positive relationship with the hurst exponent , and a negative one with the apen . according to the above results , we find that the relationship between the degree of efficiency and the prediction power is negative .
in other words , a market index with a lower degree of efficiency has a higher predictability of future price changes from past price change patterns than one with a higher degree of efficiency . moreover , we find that the hurst exponent , as the measurement of the long - term memory property , provides valuable information for the prediction of future price changes . in the next section , we describe the data and methods of the verification process used in this paper . in section [ sec : results ] , we present the results obtained according to the established research purpose . finally , we summarize the findings and conclusions of the study . we study the daily indices of 27 stock markets , which were obtained from datastream ( _ http://www.datastream.net_ ) . the market indices are composed of 13 markets in the asia - pacific region , 7 in north and south america , and 7 in europe . they are china ( shenzhen composite ) , hong kong ( hangseng ) , india ( bombay se200 ) , indonesia ( jakarta se composite ) , japan ( nikkei225 ) , korea ( kospi200 ) , malaysia ( kuala lumpur composite ) , philippines ( se composite ) , singapore ( straits times ) , taiwan ( se weighted ) , thailand ( bangkok s.e.t ) , australia ( asx ) , new zealand ( nzx ) , argentina ( base general ) , brazil ( bovespa ) , chile ( general ) , mexico ( ipc ) , peru ( lima se general ) , canada ( tsx composite index ) , usa ( s&p 500 ) , austria ( atx ) , denmark ( copenhagen kbx ) , france ( cac 40 ) , germany ( dax 30 ) , italy ( milan comit general ) , netherlands ( aex index ) , and the uk ( ftse100 ) . the data period is 15 years , from january 1992 to december 2006 , except for australia ( from june 1992 ) , argentina ( from june 2000 ) , and denmark ( from january 1996 ) . the return time series of each market index used in this paper is calculated as the logarithmic change of the price , $r_{t}=\ln p_{t}-\ln p_{t-1}$ , where $p_{t}$ is the stock price on day $t$ .
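as a small illustration of the return definition above , the following python sketch computes daily log - returns from a price series ; the `prices` array is a hypothetical placeholder rather than one of the actual datastream series .

```python
import numpy as np

def log_returns(prices):
    """daily log-returns r_t = ln(p_t) - ln(p_{t-1}) of a price series."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

# hypothetical example with a short synthetic price series
prices = [100.0, 101.5, 100.8, 102.3, 103.0]
print(log_returns(prices))
```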
in previous studies , researchers utilized various methods such as the hurst exponent , arfima ( autoregressive fractional integration moving average ) , and figarch ( fractionally integrated generalized autoregressive conditional heteroscedasticity ) to quantitatively observe the long - term memory property in a financial time series . however , the long - term memory property in a financial time series does not depend on the method used . we use the hurst exponent to quantify the degree of the long - term memory property . there are many methods to calculate the hurst exponent : the classical re - scaled range analysis , the generalized hurst exponent method , modified r / s analysis , the gph method , and detrended fluctuation analysis ( dfa ) . according to previous works , the dfa method is the most efficient method for accurately calculating the hurst exponent . therefore , we select the hurst exponent calculated by the dfa method as a measurement to quantify the degree of efficiency in a financial time series . the hurst exponent calculated by the dfa method can be explained by the following steps . first , after the subtraction of the mean from the original time series , one accumulates the series as defined by
$$y(k)=\sum_{i=1}^{k}\left[ x(i)-\bar{x}\right ] , \tag{1a}$$
where $x(i)$ is the original series , $\bar{x}$ its mean , and $n$ is the number of points in the time series . next , the accumulated time series is divided into windows of the same length $s$ . we estimate the trend line $y_{s}(k)$ by ordinary least squares ( ols ) in each window . we eliminate the trend existing within each window by subtracting $y_{s}(k)$ from the accumulated time series . this process is applied to every window , and the fluctuation magnitude is defined as
$$f(s)=\sqrt{\frac{1}{n}\sum_{k=1}^{n}\left[ y(k)-y_{s}(k)\right]^{2 } } . \tag{1b}$$
the process mentioned above is repeated for every scale $s$ . in addition , we investigate whether a scaling relationship exists over the scales . that is , the scaling relationship is defined by
$$f(s)\sim c\,s^{h } , \tag{1c}$$
where $c$ is a constant and $h$ is the hurst exponent . if $h\leq 0.5$ , the time series has a short - term memory ; if $h>0.5$ , it has a long - term memory . we use the hurst exponent as the measurement to quantify the degree of market efficiency of each stock market . as the hurst exponent increases , the persistence of similar patterns in past price changes is high . in other words , the pattern of past price changes is valuable information when predicting the patterns of future price changes . in addition , in order to estimate and use a hurst exponent that is robust to time variation , we use the average hurst exponent estimated repeatedly until december 2005 , using a time window with a width of 5 years shifted by 1 year over the whole period until december 2006 . we omit the data of the year 2006 when calculating the mean hurst exponent because an out - of - sample 1 - year prediction period is required in the nn method . the apen , which measures the degree of randomness in a time series , was also introduced as a measurement of the degree of efficiency of financial time series . we use the apen as the second measurement of the degree of efficiency in this study . the apen is defined as
$$\mathrm{apen}(m , r)=\phi^{m}(r)-\phi^{m+1}(r ) , \tag{2a}$$
where $m$ is the embedding dimension and $r$ is the tolerance used to determine the similarity between price change patterns . the quantity $\phi^{m}(r)$ is given by
$$\phi^{m}(r)=\frac{1}{n - m+1}\sum_{i=1}^{n - m+1}\ln c_{i}^{m}(r ) , \tag{2b } \\
c_{i}^{m } ( r ) = \frac{\displaystyle b_{i}(r)}{(n - m+1 ) } , \tag{2c}$$
where $b_{i}(r)$ is the number of data pairs within the tolerance of similarity $r$ .
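to make the two measurements concrete , the following python sketch gives simplified implementations of the dfa estimate of the hurst exponent and of the apen ; it is written for clarity rather than efficiency , and the chosen scales , embedding dimension $m=2$ and tolerance $r=0.2\,\sigma$ are assumptions made only for the example , not parameters taken from this study .

```python
import numpy as np

def hurst_dfa(x, scales=(4, 8, 16, 32, 64)):
    """estimate the hurst exponent of a series x by detrended fluctuation analysis."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                    # accumulated profile, eq. (1a)
    f = []
    for s in scales:
        n_win = len(y) // s
        sq_res = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)           # ols trend within the window
            sq_res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        f.append(np.sqrt(np.mean(sq_res)))         # fluctuation f(s), eq. (1b)
    # scaling f(s) ~ c * s**h, eq. (1c): slope of the log-log fit
    h, _ = np.polyfit(np.log(scales), np.log(f), 1)
    return h

def apen(x, m=2, r_factor=0.2):
    """approximate entropy apen(m, r) = phi_m(r) - phi_{m+1}(r) of a series x."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                         # tolerance of similarity
    def phi(m):
        n = len(x) - m + 1
        patterns = np.array([x[i:i + m] for i in range(n)])
        c = []
        for i in range(n):
            d = np.max(np.abs(patterns - patterns[i]), axis=1)   # max-norm distance
            c.append(np.sum(d <= r) / n)           # c_i^m(r), eq. (2c)
        return np.mean(np.log(c))                  # phi^m(r), eq. (2b)
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 2000)              # placeholder return series
print(hurst_dfa(returns), apen(returns))
```

for a purely random return series such as the placeholder above , both quantities should indicate a high degree of efficiency , i.e. a hurst exponent near 0.5 and a comparatively high apen .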
also , we calculate the similarity between price change patterns in the time series by the distance between the corresponding reconstructed pattern vectors , and we quantify the prediction power by the hit - rate , the ratio of the number of trading days on which the predicted direction is consistent with the actual one to the number of trading days in the prediction period . next , following the same moving procedure ( with a 1 - year shift ) as used for the hurst exponent and the apen , we calculate the hit - rate of future price changes using the second reconstruction period . this process is repeated until the reconstruction period reaches the last day of the prediction period . we use the average hit - rates for the consistency of direction between the actual price changes and those estimated by the nn model as the prediction power , in order to observe its relationship with the hurst exponent and the apen as measures of the degree of efficiency . we present the observed relationship between the hurst exponent and the apen as measures of the degree of efficiency , and the relationship between the degree of efficiency and the prediction power . in this section , using the stock market indices of 27 markets , we present the results for the relationship between the hurst exponent and the apen value in terms of the degree of efficiency . as mentioned above , we expected a negative relationship between the hurst exponent and the apen . in other words , from the viewpoint of the hurst exponent , a lower degree of efficiency means a higher hurst exponent , which reveals long - term memory . on the other hand , a lower degree of efficiency in terms of the apen means a lower apen value , which indicates a lower degree of randomness . of course , based on both measurements , a lower degree of efficiency occurs when similar patterns occur repeatedly in the past price changes . furthermore , the observed results are negative evidence concerning the weak - form emh . the results are presented in fig . 1 . fig . 1 shows the average hurst exponent and apen calculated repeatedly using the estimation period ( 5 years ) and the moving period ( 1 year ) for each stock index . in fig . 1 , the x - axis and y - axis indicate the apen and the hurst exponent , respectively . fig . 1(a ) and ( b ) show the results obtained using the time series of the actual stock markets and using random time series with the mean and standard deviation of the actual stock index time series , respectively . in the case of the random time series , the hurst exponent and apen are also averaged over the whole period . the market index of each country is denoted according to its continent . that is , the circles ( red ) , triangles ( magenta ) , squares ( blue ) , and diamonds ( green ) indicate the asia - pacific region ( 13 countries ) , south and north america ( 7 countries ) , and europe , respectively . according to the results , we find that the relationship between the hurst exponent and the apen is negative . however , we could not find such a relationship in the random time series . that is , the apen , reflecting the randomness of the price change patterns , is low when the financial time series has a strong long - term memory property . therefore , both methods can be used complementarily for measuring the degree of efficiency in a financial time series . market indices located at the top left have either a higher long - term memory or a lower randomness . this indicates a low degree of efficiency in the time series . we find that most stock indices of south america or asia belong to this area .
on the other hand , market indices located at the bottom right have a lower long - term memory or a higher randomness . that is , they demonstrate a high degree of efficiency . most stock indices in this position are in europe ( france , the uk , germany ) , north america ( canada , the usa ) , and asia ( australia , japan , new zealand and korea ) . in addition , we investigated whether the degree of efficiency has increased relative to the past period in each market index . the results are presented in fig . 2 . based on the hurst exponent and apen estimated for each sub - period , including 2006 , we compare the measurements in the first and the last sub - periods . the first sub - period for the stock indices is 5 years , from january 1992 to december 1996 , except for three countries , australia ( june 1992 ) , argentina ( june 2000 ) , and denmark ( january 1996 ) . the last sub - period is 5 years , from january 2002 to december 2006 . fig . 2(a ) and ( b ) show the apen and hurst exponents of the market indices of the 27 countries . in fig . 2 , the first ( the past ) and last ( the present ) sub - periods are denoted by squares ( blue ) and circles ( red ) . from the results , on the whole , we find that the degree of efficiency of the market indices has increased relative to the past periods . in other words , the hurst exponent of asia decreased from 0.56 to 0.51 , and that of south america decreased from 0.61 to 0.57 . the apen for asia increased from 1.59 to 1.65 , and for south america from 1.56 to 1.63 . on the other hand , the hurst exponent and apen of the stock indices of europe reveal a different result . that is , the hurst exponent of both the past and the present periods is close to 0.5 on average , but the apen decreases on average from 1.69 to 1.53 . here we could not find consistent changes in the degree of efficiency . through these results , we confirmed that both methods can serve as a measure of the degree of efficiency of a financial time series . in addition , we can use both methods complementarily as standard indices of the degree of efficiency because the relationship between them is negative . although we could not find consistent results between the apen measurement and the hurst exponent for europe , the present degree of efficiency of the market index of each country has increased relative to the past periods . in this section , we present the results of the empirical relationship between the degree of efficiency and the prediction power of future price changes estimated by the nn prediction method . generally , the predictability of a time series having a lower degree of efficiency is higher than that of a time series having a higher degree of efficiency . therefore , the expected relationships between the hurst exponent and apen values , as measures of the degree of efficiency , and the prediction power estimated by the nn prediction method are as follows . we expect a positive relationship between the hurst exponent and the prediction power of the nn prediction method . that is , a market index having a higher hurst exponent value has a strong long - term memory property because similar patterns of past price changes occur continuously . therefore , the predictability of future price changes using the patterns of past price changes increases as the hurst exponent of the market index increases . however , we expect a negative relationship between the apen and the prediction power . that is , a market index having a lower apen has a lower randomness because similar patterns of past price changes happen frequently .
therefore , the predictability of future price changes using the patterns of past price changes increases as the apen of the market index decreases . the results are presented in fig . 3 . in fig . 3 , the prediction power of the nn prediction method is the average value of the hit - rates for the direction of future price changes in each sub - period . fig . 3(a ) shows the relationship between the average apen and the average prediction power , and fig . 3(c ) shows the relationship between the average hurst exponent and the average prediction power . moreover , using random time series created with the mean and standard deviation of the original market indices , the relationships between the nn prediction power and the apen or the hurst exponent are presented in figs . 3(b ) and 3(d ) , respectively . the market index of each country is denoted according to continent and color . the asia - pacific region , south and north america , and europe are denoted by circles ( red ) , triangles ( magenta ) , squares ( blue ) , and diamonds ( green ) , respectively . according to the results , we can confirm the relationship between the degree of efficiency and the prediction power that is generally acknowledged in the financial literature . that is , in fig . 3(a ) , the relationship between the average hit - rates estimated by the nn method and the average apen is negative , while in fig . 3(c ) , the relationship between the average hit - rates and the average hurst exponent shows a strong positive relation . therefore , stock indices with a lower degree of efficiency ( a higher hurst exponent or a lower apen ) have a higher prediction power for the direction of future price changes than those with a higher degree of efficiency . of course , we could not find such relationships in figs . 3(b ) and 3(d ) , which use the random time series . the indices with a lower degree of efficiency and higher predictability appear in the top left of fig . 3(a ) , which displays the relationship between the apen and the prediction power , and in the top right of fig . 3(c ) , which shows the relationship between the hurst exponent and the prediction power . these market indices are generally found in south america and asia . however , indices with a higher degree of efficiency and lower predictability appear in the bottom right of fig . 3(a ) and the bottom left of fig . 3(c ) . the market indices displayed here are found in europe ( france , the uk , germany _ et al ._ ) , north america ( the usa ) , and asia ( australia , japan , new zealand _ et al ._ ) . according to the above results , we find that the degree of efficiency of a financial time series is significantly related to the prediction power of future price changes . we also find that the hurst exponent and the apen provide significant information for the nn prediction method , which uses the similarity of past price change patterns . in particular , we find that the hurst exponent , used to measure the long - term memory property , is closely correlated with the prediction power of the nn prediction method . we empirically investigated the relationship between the degree of efficiency and the prediction power using the market indices of various countries .
from the viewpoint of the weak - form emh , a lower degree of efficiency means that the information of past price changes is useful for predicting future price changes , while a higher degree of efficiency means that such prediction is relatively difficult . the efficiency in this paper is based on the weak - form emh , which concerns whether the information of past price change patterns is useful for the prediction of future price changes . we employed measurements such as the hurst exponent and the apen , which quantify the degree of efficiency , and a prediction method ( the nn prediction method ) that makes use of the same property . the measurements and the method are based on the similarity of price change patterns in the time series . we summarize the results as follows . first , we found that the relationship between the hurst exponent and the apen , the measurements of the degree of efficiency , is negative . therefore , the hurst exponent and the apen can be used complementarily to measure the degree of efficiency in a financial time series . second , we found that the degree of efficiency is negatively related to the prediction power . market indices having a lower degree of efficiency ( a high hurst exponent or a low apen ) have a higher prediction power for the direction of future price changes than those having a higher degree of efficiency ( a low hurst exponent or a high apen ) . furthermore , we found that the hurst exponent has a very strong correlation with the prediction power of the nn prediction method . according to the above results , we _ empirically _ confirm the relationship between the degree of efficiency and the prediction power , which has been acknowledged _ qualitatively _ in the financial field . we also found that the hurst exponent and the apen provide useful information for the nn prediction method , which uses the similarity of past price change patterns . in addition , our study provides useful information for future works concerning the consistency of international funds , which have received considerable attention in the financial field , the selection of active and passive investment countries as an investment strategy , and the choice of investment weights . further complementary studies are also expected . | this study investigates empirically whether the degree of stock market efficiency is related to the prediction power of future price change using the indices of twenty seven stock markets . efficiency refers to the weak - form efficient market hypothesis ( emh ) in terms of the information of past price changes . the prediction power corresponds to the hit - rate , which is the rate of consistency between the direction of the actual price change and that of the predicted one , calculated by the _ nearest neighbor prediction method _ ( nn method ) using the out - of - sample data . in this manuscript , the _ hurst exponent _ and the _ approximate entropy _ ( apen ) are used as quantitative measurements of the degree of efficiency . the relationship between the hurst exponent , reflecting the time correlation property , and the apen value , reflecting the randomness in the time series , shows a negative correlation ( in terms of the cross - correlation between the hurst exponent and the apen ) .
however , the average prediction power for the direction of future price changes has a strongly positive correlation with the hurst exponent ( where the average prediction power is calculated by the nn method ) , and a negative correlation with the apen . therefore , a market index with less market efficiency has a higher prediction power for future price changes than one with higher market efficiency when we analyze the market using past price change patterns . furthermore , we show that the hurst exponent , a measurement of the long - term memory property , provides more significant information for the prediction of future price changes than the apen and the nn method . |
quantum computers could solve some problems faster than classical computers . performing a quantum computation relies on the ability to preserve the coherence of quantum states long enough for the gates composing the algorithm to be implemented . in practice , the quantum coherence is sensitive to the uncontrolled environment and is easily damaged by the interactions with the environment , a process called decoherence . to protect the fragile quantum coherence needed for quantum computation , schemes of quantum error correction ( qec ) and fault - tolerant quantum computation have been developed . the 3 - bit qec code was implemented in a liquid - state nmr quantum information processor in 1998 as the first experimental demonstration of qec . more recently , it has been implemented in trapped - ion and solid - state systems . here we report on using the grape algorithm to implement a high - fidelity version of the 3 - bit qec code for phase errors in liquid - state nmr . the errors due to natural transverse relaxation are shown to be suppressed to first order . in comparison with the work performed in 1998 , the pulse sequence fidelity is improved by about , and the reduction of the first - order term in the decay of the remaining polarization after error correction is improved by a factor of . the advantage of qec is obtained although the extra operations for protecting the quantum states in qec are subject to errors in implementation . in the current implementation , we use labelled trichloroethylene ( tce ) dissolved in d - chloroform as the sample . data were taken with a bruker drx 700 mhz spectrometer . the structure of the molecule and the parameters of the spin qubits are shown in fig . [ figmol ] , where we denote h as qubit 1 , c as qubit 2 and c as qubit 3 . the hamiltonian of the three - spin system can be written as
$$\mathcal{h}=\sum_{j=1}^{3}\pi\nu_{j}\,\sigma_{z}^{j}+\sum_{j<k}\frac{\pi}{2}\,j_{jk}\,\vec{\sigma}^{\,j}\cdot\vec{\sigma}^{\,k } , \label{ham}$$
where $\sigma_{x , y , z}^{j}$ denote the pauli matrices with $j$ indicating the spin location , $\nu_{j}$ denotes the chemical shift of spin $j$ , and $j_{jk}$ denotes the spin - spin coupling between spins $j$ and $k$ . the two carbon spins are treated in the strongly coupled regime , because the difference in frequencies between the two carbons is not large enough for the weak coupling approximation . we exploit radio - frequency ( r.f . ) spin selection techniques to improve the linewidth , and hence the coherence , of the ensemble qubits . the effect of pulse imperfections due to r.f . inhomogeneities is reduced by spatially selecting molecules from a small region in the sample through the r.f . power . we choose c as the qubit to carry the state for encoding and the output state after decoding and error correction . the labelled pseudo - pure states used as the reference states with blank ancilla are prepared by the circuit in ref . , with the qubits arranged in the order 1 to 3 . the qubit readout is performed on c , and the signals are normalized with respect to the corresponding reference state for the different input states . the quantum network used for implementing the qec code is shown in fig . [ figcir ] ( a ) , where the input state is chosen , in separate sequences , from a set of test states . we optimize the encoding operation , and the decoding operation combined with the error correction , as two grape pulses with high theoretical fidelity .
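before describing the experiment in detail , a toy calculation may help illustrate why the 3 - bit code suppresses dephasing errors to first order . the python sketch below is an assumption - laden simplification of the scheme : all three qubits are given the same dephasing time , each suffers an independent phase flip with probability p during the delay , and encoding , decoding and correction are taken to be perfect , so the recovered fidelity is ( 1 - p )^3 + 3 p ( 1 - p )^2 = 1 - 3 p^2 + 2 p^3 , with no term linear in p .

```python
import numpy as np

def p_flip(t, t2):
    """phase-flip probability of one qubit after a delay t with dephasing time t2."""
    return 0.5 * (1.0 - np.exp(-t / t2))

def fidelity_unprotected(t, t2):
    """toy fidelity of a bare qubit under pure dephasing (decays linearly in p)."""
    return 1.0 - p_flip(t, t2)

def fidelity_protected(t, t2):
    """toy fidelity after the 3-bit phase-error code: any single flip is corrected."""
    p = p_flip(t, t2)
    return (1 - p) ** 3 + 3 * p * (1 - p) ** 2    # = 1 - 3p^2 + 2p^3

t2 = 1.0                                          # arbitrary units, same for all qubits
for t in (0.05, 0.1, 0.2):
    print(t, fidelity_unprotected(t, t2), fidelity_protected(t, t2))
```

in the real experiment the three relaxation times differ and the grape pulses themselves introduce errors , so the measured decay curves deviate from this idealized behaviour .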
to test the ability of the code to correct the natural dephasing errors due to the transverse relaxation of the spins , the internal spin hamiltonian ( [ ham ] ) is refocused during the time delay implemented between the encoding and decoding processes . the refocusing pulse sequence is shown in fig . [ figcir ] ( b ) , where the selective pulses applied to spin h are hard rectangle pulses with a duration of , while the pulses applied to c or c are grape pulses with a duration of ms . taking into account the strong coupling in the hamiltonian ( [ ham ] ) , we choose the phases of the pulses shown in fig . [ figcir ] ( b ) to obtain a fidelity , where denotes the simulated unitary implemented by the pulse sequence , and denotes the identity operation . we choose the input states as , and , and measure the polarization that remains after error correction in each case . the polarization ratios are denoted as , and . we use the `` entanglement fidelity '' , represented as , to characterize how well the quantum information is preserved . the experimental results for qec are shown in fig . [ figres ] ( a ) . for each delay time , five experiments are repeated in order to average out the random experimental errors in the implementation . the results of error correction ( ec ) are represented by . by averaging the points for each delay time , we obtain the averaged entanglement fidelity , which can be fitted to , with relative fitting error , shown as the thick dash - dotted curve . in order to estimate the performance of the error correction for the encoded states , we calculate the entanglement fidelity of decoding ( de ) by measuring the remaining polarization before the application of the toffoli gate , which is used as the error - correcting step . in this case , the decoding operation is implemented by one grape pulse with theoretical fidelity . similar to the measurement for error correction , we also repeat five experiments for each delay time . the results are shown in fig . [ figres ] ( a ) , and the averaged data points , marked by + , can be fitted as , with relative fitting error , shown as the thick solid curve . here the ratio of the first - order decay terms for the two fits is found to be . the important reduction of the first - order decay term indicates the high quality of the state stabilization provided by qec . as a comparison , we include the experimental data from ref . , which are marked in fig . [ figres ] ( b ) for the results of qec and decoding . the data can be fitted in the same form , with the corresponding relative fitting errors ; the ratio of the first - order decay terms is . in implementing the qec code , the operations associated with encoding , decoding and error correction are subject to errors , which would lower the ability of the code to protect the quantum states . to estimate the effects of these errors , we measure the free evolution decay ( fed ) of the input states under the refocusing sequence shown in fig . [ figcir ] ( b ) . five experiments are repeated for each delay time , and the experimental data are shown in fig . [ figres ] ( a ) . the averaged points can be fitted as , with relative fitting error , shown as the dashed curve . the ratio of the first - order decay terms in the fits of fed and ec is .
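the comparison of first - order decay terms described above can be reproduced numerically by fitting the averaged entanglement fidelities to a quadratic decay f ( t ) = 1 - a t + b t^2 and taking the ratio of the fitted coefficients a ; the sketch below does this with scipy on invented placeholder data , since the measured values are not reproduced here .

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b):
    """quadratic decay model f(t) = 1 - a*t + b*t**2 used for the fits."""
    return 1.0 - a * t + b * t ** 2

# hypothetical delay times (s) and averaged entanglement fidelities (placeholders)
t = np.array([0.0, 0.05, 0.1, 0.15, 0.2])
f_ec = np.array([1.00, 0.985, 0.94, 0.88, 0.80])   # after error correction
f_de = np.array([1.00, 0.93, 0.86, 0.79, 0.72])    # decoding only

(a_ec, b_ec), _ = curve_fit(decay, t, f_ec)
(a_de, b_de), _ = curve_fit(decay, t, f_de)
print("first-order ratio a_de / a_ec =", a_de / a_ec)
```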
by comparing the results of qec and fed , one finds that the errors removed by the qec code can exceed the errors introduced by the extra operations required by the code for delay time s ( of c s ) . the pulse durations for encoding , decoding , and the combination of decoding and error correction are 8 ms , 8 ms , and 13.6 ms , respectively . we exploit the results from simulations with ideal pulses to estimate the errors due to imperfections in the pulse implementation . in the simulation , we choose an uncorrelated error model for errors and ignore errors . we represent the measured fidelity as , where denotes the ideal fidelity obtained by simulation and denotes a factor that estimates the deviation between experiment and simulation . one should note that the theoretical entanglement fidelity of de is the same as that of fed . by fitting the data , we obtain , and for ec , de and fed , respectively . the fitting results are shown as the thin dash - dotted , solid and dashed curves in fig . [ figres ] ( a ) . from the simulation results , we estimate that the errors in implementing the operations associated with the qec code are about for de and for ec . we optimize the encoding , decoding , and error correction as grape pulses with high theoretical fidelities . the refocusing sequence is exploited to suspend the evolution of the hamiltonian ( [ ham ] ) with high fidelity . the quality of the readout signals is further improved by r.f . selection . compared with the experimental results of qec obtained in 1998 , the pulse sequence fidelity is improved by about . by comparison with the free evolution decay , one can benefit from qec even when errors exist in implementing the operations required for qec . the improvement provided by the error correction is also demonstrated by the reduction of the first - order term in the decay of the remaining polarization after error correction , compared with the decay of the encoded states recovered by decoding and with the free evolution decay of the input states . the `` qec advantage '' for the encoded states is improved by a factor of from the 1998 result . in the current experiment , the second - order term in the decay after error correction is larger than in the previous experiment , because of the larger phase errors due to the shorter relaxation time constants [ see fig . [ figmol ] ( b ) ] , noting the longer values reported for h , c and c in the previous experiment . the experimental errors arise mainly from imperfections in implementing the grape pulses . additionally , inhomogeneities of the magnetic fields and the limitation of also contribute to errors . the authors acknowledge professor d. g. cory for helpful discussions . + current address : department of physics , massachusetts institute of technology , cambridge , ma , usa 02139 . e. knill , r. laflamme and w. h. zurek , science * 279 * , 342 ( 1998 ) ; a. yu . kitaev , russ . math . surv . * 52 * , 1191 - 1249 ( 1997 ) ; d. aharonov and m. ben - or , _ proceedings of the 29th annual acm symposium on theory of computing _ , 176 ( acm press , 1997 ) ; e. knill , nature * 434 * , 39 ( 2005 ) ; d. gottesman , _ encyclopedia of mathematical physics _ , edited by j .- p . francoise , g. l. naber , and s. t. tsou , 196 ( elsevier , oxford , 2006 ) . n. khaneja , t. reiss , c. kehlet , t. schulte - herbruggen and s. j. glaser , j. magn . reson . * 172 * , 296 ( 2005 ) ; c. a. ryan , c. negrevergne , m. laforest , e. knill and r. laflamme , phys . rev . a * 78 * , 012328 ( 2008 ) .
[ fig . [ figmol ] caption : ( a ) structure of the labelled tce molecule with spins h , c and c . ( b ) the relaxation times are measured by the standard inversion recovery sequence ; the transverse relaxation times are measured by the hahn - echo with one refocusing pulse , ignoring the strong coupling in the hamiltonian ( [ ham ] ) . ]
[ fig . [ figcir ] caption : ( a ) quantum network for the qec code ; noise is introduced by a variable time delay implemented by the pulse sequence in ( b ) , which refocuses the evolution of the hamiltonian ( [ ham ] ) to an identity operation with high theoretical fidelity . in ( b ) the unfilled rectangle represents a hard pulse and the filled rectangle a grape pulse selective for c or c ; the phases of the pulses are denoted above the rectangles . ]
[ fig . [ figres ] caption : ( a ) entanglement fidelities for ec , de and fed ; the averages can be fitted to quadratic decays , shown as the thick dash - dotted , solid and dashed curves , while the thin curves show fits obtained from ideal simulated data . ( b ) results of the previous experiment for ec and de with the corresponding fits . ]
| more than ten years ago a first step towards quantum error correction ( qec ) was implemented [ phys . rev . lett . 81 , 2152 ( 1998 ) ] . the work showed there was sufficient control in nuclear magnetic resonance ( nmr ) to implement qec , and demonstrated that the error rate changed from to approximately . in the current work we reproduce a similar experiment using control techniques that have been developed since , such as grape pulses . we show that the fidelity of the qec gate sequence , and the comparative advantage of qec , are appreciably improved . this advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states . |
the first compound light microscopes constructed in the 16th and 17th centuries enabled scientists to inspect matter and biological specimens at the microscopic level . in 1873 , ernst abbe formulated a fundamental limit for the resolution of an optical imaging system based on the diffraction theory of light .at the same time the fabrication and development of microscopes and lenses were transformed from empirical optimizations to schemes based on quantitative calculations and theoretical considerations . in the 20th century various contrast modalities were developed that allow one to detect very small signals and to measure characteristic properties of a specimen with high specificity . finally , during the last two decades several revolutionary methods were conceived and experimentally demonstrated , which substantially enhanced the optical resolution down to the nanometer scale ( shown in fig .[ fig_opticalresolution ] ) .the awarding of the 2014 nobel prize in chemistry to eric betzig , stefan hell and william e. moerner for their pioneering work in `` super - resolution '' fluorescence microscopy corroborates its promise for many advanced investigations in physics , chemistry , materials science and life sciences .fluorescence microscopy down to the single molecule level has been reviewed in many recent articles and books . despite the immense success of fluorescence microscopy, this technique has several fundamental shortcomings . as a result, many ongoing efforts aim to conceive alternative modes of microscopy based on other contrast mechanisms .furthermore , having overcome the dogma of the resolution limit , scientists now focus on other important factors such as phototoxicity and compatibility with live imaging , higher speed , multiscale imaging and correlative microscopy . in this review article, we present a concise account of some of the current trends and challenges .every measurement needs an observable , i.e. a signal . in the case of optical microscopy ,one correlates a certain optical signal from the sample with the spatial location of the signal source .scattering is the fundamental origin of the most common signal or contrast mechanism in imaging . indeed ,when one images a piece of stone with our eyes we see the light that is scattered by it although in common language one might speak of reflection .the scattering interaction also leads to a shadow in transmission ( see fig .[ fig_transrefl ] ) . in conventional microscopy , one speaks of trans - illumination if one detects the light transmitted through the sample and epi - illumination if one detects the signal in reflection . already the pioneers of early microscopy experimented with different types of illumination to generate stronger contrasts .even today , a good deal of instrumentation in common microscopes focuses on the illumination path .for example , in a particularly interesting scheme one adjusts the illumination angle such that a negligible amount of it is captured by the finite solid angle of the detection optics . in this so - called dark - field microscopy, one emphasizes parts of the sample that scatter light in an isotropic fashion .such oblique illumination was already exploited in the early days of microscopy . 
during the past century, various methods were developed to improve the contrast in standard microscopy .for example , polarization microscopy techniques can be used to examine the anisotropy of birefringent materials such as minerals .some of the most important contrast mechanisms exploit the spectroscopic information of the sample and thus introduce a certain degree of specificity .the prominent example of these is fluorescence microscopy , where fluorophores of different absorption and emission wavelengths are employed to label various parts of a biological species . over the years , the developments of fluorescence labeling techniques such as immunofluorescence , engineered organic fluorescent molecules and fluorescent proteins have continuously fueled this area of activity .however , fluorescence labeling has many disadvantages such as photobleaching and most importantly the need for labeling itself . to address some of the limitations of standard fluorescence imaging, scientists have investigated a range of multiphoton fluorescence methods such as two - photon absorption .the strong dependence of multiphoton excitation processes on intensity allows one to excite a small volume of the sample selectively only around the focus of the excitation beam .this leads to minimizing the fluorescence background and photobleaching . aside from its capacity for optical sectioning , this technique makes it possible to perform tissue imaging because the long - wavelength excitation light penetrates deeper into the biological tissue .the ultimate freedom from fluorescence markers is the use of label - free contrast mechanisms .for example , raman microscopy generates contrast through the inelastic scattering of light that is selective on the vibrational and rotational modes of the sample molecules and is , thus , very specific to a certain molecule in the specimen .the main difficulty of raman microscopy lies in its extremely weak cross section .methods such as coherent anti - stokes raman scattering ( cars ) or stimulated raman scattering improve on the sensitivity to some extent although they remain limited well below the single molecule detection level .some other interesting label - free contrasts are based on the harmonic generation of the illumination or four - wave mixing processes through the nonlinear response of the sample .for instance , it has been shown that collagen can be nicely imaged through second harmonic generation ( shg ) . 
to conclude this section , we return to the most elementary contrast mechanisms , namely transmission and reflection . the fundamental underlying process in these measurements is interference . consider again the situation of fig . [ fig_transrefl ] , whereby now we reduce the size of the object to a subwavelength scatterer such as a nanoparticle ( shown in fig . [ fig_iscat]a ) . a laser beam with power illuminates the nanoparticle of diameter lying on a glass substrate . a fraction of the incoming power is scattered by the object and another part of it serves as a reference , which can either be the transmitted light or the light that is reflected from the glass interface . the scattered and the reference components of the light reach the detector and interfere if their path difference is much smaller than the coherence length of the illumination beam . in the case of a reflection configuration with field reflectivity $r$ , the power reaching the detector is given by
$$p_{\rm det}=p_{\rm inc}\left[ r^{2}+|s|^{2}+2r|s|\cos\phi\right ] , \label{interference}$$
where the scattered field amplitude $s$ is related to the scattering cross - section of the particle and $\phi$ denotes a phase term that includes the gouy phase . interferometric scattering contrast was explicitly formulated in the context of detection of nanoscopic particles in our laboratory and has been coined iscat . the central process is , however , quite general and related to the measurement of extinction by an object on a beam of light . according to the optical theorem , the extinction signal can stem from scattering loss or absorption . we note that although the interaction of light with objects that are larger than a wavelength is expressed in terms of reflection instead of scattering , the signal on the detector can still be written in the form of eq . ( [ interference ] ) . having understood the underlying interferometric nature of an extinction measurement , it is now clear that one can also perform the measurement by manipulating the reference beam in eq . ( [ interference ] ) . an early example of such a variation was demonstrated by zernike in the context of phase contrast microscopy . here a part of the illumination is phase shifted and then mixed with the light that is transmitted through the sample . another related technique is known as differential interference contrast microscopy ( dic ) , put forth by nomarski . a very similar method to dic that is somewhat simpler to implement and therefore very common in commercial microscopes is hoffman modulation contrast . although simple transmission , reflection , phase contrast and dic share the same fundamental contrast as that of iscat , only the latter has tried to address the issue of sensitivity and its extension to very small nanoparticles and single molecules . this brings us to the general question of sensitivity . for any given contrast mechanism , one can ask `` how small an object can one detect ? '' ; in other words , what is the sensitivity ? to have high sensitivity , one needs a sufficiently large signal from the object of interest , and the trouble is that usually the signal diminishes quickly as the size of the object is reduced . so , one needs to collect the signal efficiently and employ very sensitive detectors . the detector can either be a point detector , as is most often used in scanning confocal techniques , or a camera in the case of wide - field imaging . the performance of a light detector can be generally described by its quantum efficiency , the available dynamic range and its time resolution .
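before turning to detectors , the short sketch below makes eq . ( [ interference ] ) concrete by evaluating its three contributions for a small particle ; the reflectivity , phase and prefactor are illustrative assumptions , and the scattered field amplitude is simply taken to scale with the particle volume , as discussed further below .

```python
import numpy as np

# illustrative parameters (assumptions, not values from the text)
r = 0.1          # field reflectivity of the glass interface
phi = 0.0        # interferometric phase (includes the gouy phase)

def detected_power(d, p_inc=1.0, c=1e-3):
    """toy version of eq. (interference): p_det = p_inc*(r**2 + s**2 + 2*r*s*cos(phi)),
    with a scattered-field amplitude s assumed to scale as the particle volume d**3."""
    s = c * d ** 3                                   # polarizability ~ d^3 (d in relative units)
    pure_scattering = p_inc * s ** 2                 # ~ d^6, collapses quickly for small d
    interference = 2 * p_inc * r * s * np.cos(phi)   # ~ d^3, dominates for small d
    return p_inc * r ** 2 + pure_scattering + interference

for d in (1.0, 0.5, 0.2):                            # relative particle diameters
    print(d, detected_power(d) - r ** 2)             # signal on top of the reference
```

the printout illustrates that the pure scattering term ( scaling as d^6 ) vanishes much faster than the interference term ( scaling as d^3 ) as the particle shrinks , which is why the cross term carries the detectable signal for very small objects .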
for the longest time in the history of microscopy the human eye was the only available detector . with its detection threshold of only a few photons it was a better detector than photographic plates and films even long after these became available . photon counting detectors had emerged by the 1970s , starting with photo - multiplier tubes ( pmt ) and later followed by semiconductor devices that are able to detect single photons . these single - photon avalanche diodes ( spad ) can nowadays achieve quantum efficiencies above 50 % with timing resolutions on the order of tens of picoseconds . in the 1990s , cameras like the charge - coupled device ( ccd ) and fast active pixel sensors using complementary metal - oxide - semiconductor ( cmos ) technology reached single - photon sensitivity with high quantum efficiencies . today , the best ccd cameras using electron multiplication can achieve quantum efficiencies better than 95 % in the visible part of the spectrum with a readout noise of a fraction of a photo - electron .
[ fig . [ fig_iscat ] caption : ( a ) a nanoparticle on a glass substrate is illuminated by a light beam ; part of the light serves as a reference beam and part is scattered by the particle , and the two interfere in the imaging plane , enhancing the scattering signal . ( b ) examples of differential iscat images of two protein types , with individual molecules marked by arrows . ]
an additional important issue regarding detection sensitivity concerns the background signal , which can swamp the signal . if the detector dynamic range ( the ratio of the highest and the smallest possible amount of light that can be measured ) is large enough , one can subtract the background . however , temporal fluctuations of the background usually limit this procedure in practice . in general , the image sensitivity can be quantified in terms of the signal - to - noise ratio ( snr ) , ${\rm snr}=\mu/\sigma$ , where $\mu$ denotes the mean and $\sigma$ the standard deviation . the sensitivity of fluorescence microscopy was taken to the single - molecule limit towards the end of the early 1990s . the factors leading to the success of this field were 1 ) access to detectors capable of single - photon counting , 2 ) the ability to suppress the background by using spectral filters , 3 ) preparation of clean samples . although single - molecule fluorescence microscopy has enabled a series of spectacular studies in biophysics , fluorescence blinking and bleaching as well as the low fluorescence quantum yield of most fluorophores pose severe limits on the universal applicability of this technique . the low cross sections of raman and multiphoton microscopy methods also hinder the sensitivity of these methods . only in isolated cases , where local field enhancement has been employed , have there been reports of single - molecule sensitivity . for a long time , single - molecule sensitivity in extinction was also believed not to be within reach because the extinction contrast of a single molecule is of the order of , making it very challenging to decipher the signal on top of laser intensity fluctuations . nevertheless , various small objects have been detected and imaged using iscat , including very small metallic nanoparticles , single unlabeled virus particles , quantum dots even after photobleaching , single molecules and even single unlabeled proteins down to a size of 60 kda ( shown in fig . [ fig_iscat]b ) .
here , it is important to note that the power that is scattered by a rayleigh particle ( $d\ll\lambda$ ) follows a $d^{6}$ law , where $d$ denotes the particle diameter . the interference term ( cf . eq . ( [ interference ] ) ) contains the scattering field , or in other words the polarizability of the particle , which is proportional to $d^{3}$ . therefore , this term will dominate the detected power for very small objects . the high sensitivity of iscat means , however , that any slight variation in the index of refraction or topography can lead to a sizable contrast . hence , it is important to account for fluctuations of the index of refraction , length or absorption in the sample . one of the most immediate functions that the layperson associates with a microscope is its ability to reveal the small features of a sample . the principle of operation of a microscope is typically described using ray optics . however , when the dimensions to be investigated are of the order of the wavelength of visible light , i.e. 400 - 800 nm , we must consider the wave properties of light such as interference and diffraction . therefore one can not achieve an arbitrarily high resolution simply by increasing the magnification of the lens arrangement . ernst abbe pioneered a quantitative analysis of the resolution limit of an optical microscope . he considered imaging a diffraction grating under illumination by coherent light . abbe argued that one would recognize the grating if one could detect at least the first diffraction order ( see fig . [ fig_psf]a ) . in the case of an immersion microscope objective with circular aperture and direct on - axis illumination , the abbe diffraction limit of resolution reads
$$d_{\min}=\frac{\lambda}{\mathrm{na } } ,$$
where $\lambda$ is the wavelength of light and $\mathrm{na}=n\sin\alpha$ was introduced as the numerical aperture ( illustrated in fig . [ fig_psf]b ) . here $n$ is the refractive index of the medium in which the microscope objective is placed , and $\alpha$ denotes the half - angle of the light cone that can enter the microscope objective . air has a refractive index of about 1 , limiting the na of dry microscope objectives to less than unity . by filling the space between cover glass and an immersion microscope objective with a high index material , the numerical aperture can be increased . already in the original publication , ernst abbe discussed how this diffraction - limited resolution can be improved if the illumination comes at an angle with respect to the optical axis , making it possible to collect higher diffraction orders ( cf . [ fig_psf]a ) . in this case , the diffraction limit is determined by the sum of the numerical apertures of the illumination lens and the collection lens . if the angles of incidence and collection are identical , a factor of 2 is obtained , leading to the famous abbe formula
$$d_{\min}=\frac{\lambda}{2\,\mathrm{na } } . \label{eqn_incoherent}$$
considering that the optical response of an arbitrary object can be fourier decomposed , abbe 's formula can be used as a general criterion for resolving its spatial features . at about the same time , hermann von helmholtz developed a more elaborate mathematical treatment that he published one year after abbe . helmholtz also discussed incoherent illumination and showed that eq . ( [ eqn_incoherent ] ) holds in that case too .
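the diffraction limits discussed above are easy to evaluate numerically ; the sketch below computes the abbe limit for on - axis coherent illumination , the abbe limit for incoherent ( or symmetrically oblique ) illumination , and the rayleigh criterion quoted below , for an assumed wavelength of 500 nm and a numerical aperture of 1.4 .

```python
def abbe_onaxis(wavelength, na):
    """abbe limit for direct on-axis coherent illumination: d = lambda / na."""
    return wavelength / na

def abbe_incoherent(wavelength, na):
    """abbe limit for incoherent (or symmetric oblique) illumination: d = lambda / (2 na)."""
    return wavelength / (2 * na)

def rayleigh(wavelength, na):
    """rayleigh criterion discussed below: d = 0.61 lambda / na."""
    return 0.61 * wavelength / na

wl, na = 500e-9, 1.4          # assumed values: 500 nm, oil-immersion objective
for f in (abbe_onaxis, abbe_incoherent, rayleigh):
    print(f.__name__, f(wl, na) * 1e9, "nm")
```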
in a nutshell , for an incoherent illumination , the light is scattered from the individual nanoscopic parts of the structure rather than diffracted in a well - defined order . thus one can consider the illumination to come from every direction . lord rayleigh was the first to consider self - luminous objects , which also emit incoherently . additionally , he discussed different types of objects , different aperture shapes and the similarities in the diffraction limit for microscopes and telescopes . although abbe 's resolution criterion is more rigorous , a more commonly known formulation of the resolution for spectrometers and imaging instruments is the rayleigh criterion ,
$$d_{\min}=0.61\,\frac{\lambda}{\mathrm{na } } , \label{eqn_rayleigh}$$
which states that two close - lying points are considered resolved if the first intensity maximum of one diffraction pattern , described by an airy disc , coincides with the first intensity minimum of the other diffraction pattern . equation ( [ eqn_rayleigh ] ) is somewhat arbitrary but it comes very close to abbe 's criterion . rayleigh based his definition upon the physiological property of the human eye , which can only distinguish two points of a certain intensity difference . in practice , the full - width at half - maximum ( fwhm ) of the point - spread function ( psf ) also provides a useful criterion for the resolution of a microscope because two overlapping psfs that are much closer than their widths can no longer be resolved . for an oil - immersion microscope objective with ${\rm na}=1.49$ operating at a wavelength of 500 nm , the psf is about 220 nm ( see fig . [ fig_psf]b ) . the axial width of the psf is about 2 - 3 times larger , in this case amounting to about 400 nm .
[ fig . [ fig_psf ] caption : point - spread function calculated for a wavelength of 500 nm and a numerical aperture na = 1.49 ; the lateral and axial fwhms of the psf amount to about 220 nm and 400 nm , respectively . ]
it is worth mentioning that the psf can also take different forms depending on the employed optical beam . for example , it has been shown that the psf of a focused doughnut beam with radial polarization can be made somewhat smaller than that of a conventional tem$_{00}$ mode . the origin of this effect is the vectorial character of optical beams . fluorescence is one of the most important contrast mechanisms because it offers the possibility of specific labeling . however , since the spontaneous emission of a fluorophore does not preserve the coherence of the illumination , the signal is incoherent . as we shall see shortly , one can engineer the illumination to increase the resolution by a factor of two . one of the possibilities for improving the resolution is confocal microscopy . although the principle was patented by marvin minsky in 1957 , it took 20 years until the invention of suitable lasers and the progress in computer - controlled data acquisition opened the door for its widespread use . in contrast to conventional wide - field illumination , where the full field of view is illuminated and imaged onto a camera , scanning confocal microscopy uses spatial filtering to ensure that only a diffraction - limited focal volume is illuminated and that only light from this focal volume can reach the detector . an image is then produced by raster scanning this confocal volume across the sample . since out - of - focus light is effectively suppressed , the method allows for higher contrast and offers the ability to perform optical sectioning to acquire 3d images . the lateral size of the psf can be improved by a factor of $\sqrt{2}$ in confocal microscopy .
however , in reality this factor depends on the coherence properties of the imaging light and the finite size of the detection pinhole .the latter is usually set to a value about the size of the point spread function so as to not lose any signal . as a result, only a marginal improvement of the resolution can be achieved in confocal microscopy .a particularly interesting mode of confocal microscopy is image scanning microscopy , where an image is recorded on a camera at each scan point .it has been shown that one can computationally reconstruct an image with a resolution that is improved by up to a factor of from the resulting image stack .the scanning feature of confocal microscopy limits its speed and , thus , wide - field approaches are generally favored .an attractive and powerful strategy to improve the lateral resolution of wide - field fluorescence microscopy by up to a factor of is offered by structured - illumination microscopy ( sim ) . here, the sample is illuminated using a patterned light field , typically sinusoidal stripes produced by interference of two beams that are split by a diffraction grating .the resulting image is a product of the illumination pattern and the fluorescence image of the sample . assuming that the dye concentration follows a certain pattern that can be fourier decomposed , one obtains a moir pattern for each component , resulting from the product of the array of illumination lines and the fluorescence signal ( fig .[ fig_sim]a ) .the key concept in sim is that the periodicity of a moir pattern is lower than the individual arrays .let us consider a sample with a periodic array of dyes at distance illuminated by an array of lines spaced by .if we take the angle of the two line arrays to be zero , the period of the moir pattern is given by . now , consider an objective lens that accepts spatial frequencies .the highest periodicity moir pattern that is detected by this objective is , so that the decisive criterion becomes furthermore , considering that the highest periodicity illumination that is compatible with the objective is . putting all this information together, one finds that it is possible to detect fourier components of the sample at . in other words, one can resolve sample features at a spatial frequency up to twice larger than usual . in resolution ; see text for details . *( b , c ) * actin cytoskeleton at the edge of a hela cell imaged by conventional microscopy and sim , respectively . *( d , e ) * insets show that the widths ( fwhm ) of the finest protruding fibers ( small arrows ) are lowered to 110 - 120 nm in * ( c ) * , compared to 280 - 300 nm in * ( b)*. ( b - e : reproduced with permission from . ) ] by recording a series of images for different orientations and phases of the stripe pattern , one can reconstruct the full image with an improved resolution .interestingly , the resulting moir pattern has also a three - dimensional structure , which allows the reconstruction program to obtain an enhanced resolution in the axial direction .figure [ fig_sim]c shows an example of a high resolution image obtained by sim and the comparison to conventional microscopy ( fig .[ fig_sim]b ) .there are also approaches that combine structured illumination with other techniques using interference and two opposing objectives to gain high axial resolution ( i m , ) .recently , sim has also been combined with light - sheet microscopy ( coined lattice light - sheet microscopy ) . 
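returning to the moiré argument above, the factor of two can be written down in one line: expressed as spatial frequencies, the camera only records the difference between the sample frequency and the illumination frequency, and both that difference and the illumination itself must pass the objective. the numbers below are illustrative, not taken from the text.

```python
# spatial frequencies in cycles per micrometre (illustrative values)
k_max = 1.0 / 0.2        # highest frequency transmitted by the objective (200 nm period)
k_ill = k_max            # finest illumination pattern the same objective can project

# a sample frequency k_s is detectable if the moire (difference) frequency still fits:
#   |k_s - k_ill| <= k_max   =>   k_s <= k_ill + k_max = 2 * k_max
k_s_max = k_ill + k_max
print(f"conventional limit: {1e3 / k_max:.0f} nm period")
print(f"sim limit         : {1e3 / k_s_max:.0f} nm period (gain factor {k_s_max / k_max:.1f})")
```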
in an intriguing demonstration ,3d in - vivo imaging of relatively fast dynamical processes is shown using a bound optical lattice as the light sheet .it is worth mentioning that sim strategies can not yield an extra factor of two improvement for coherent imaging modalities .in 1928 , edward synge proposed to use a thin opaque metal sheet with sub - wavelength holes to illuminate a sample placed at sub - wavelength separation from it . by scanning the specimen through this point illumination , an image could be recorded with an optical resolution better than the diffraction limit .technical limitations in the fabrication of nanoscopic apertures , their nano - positioning and sensitive light detection made the experimental realization of a scanning near - field optical microscope ( snom ) only possible in the early 1980s .near - field microscopy gets around the diffraction limit in a complete and general fashion .the essential point is that the limitations imposed by diffraction do not apply to the distances very close to the source , where non - propagating evanescent fields dominate .these fields contain the high spatial frequency information of the source and sample , but their intensity decays exponentially with a characteristic length of the order of the wavelength of light .there are two conceptual ways of performing snom . in the first case ,one follows synge s proposal and illuminates the sample through a subwavelength light source ( see fig . [ fig_nearfield]a ) .conventionally , the light source is realized by sending light through a tapered fiber that is metalized and has a subwavelength aperture at its end ( see fig . [fig_nearfield]b ) .this mode of operation is called aperture snom . herethe size and shape of the aperture dictate the range of spatial frequencies that can be coupled to and scattered by the sample .the detection can be performed in the far field or back through the aperture although the latter leads to a very unfavorable signal - to - background ratio .the main limitations of this arrangement are 1 ) small transmission of through the aperture , and 2 ) the fundamental limit of aperture given by the skin depth of metals . in practice ,the nanoscopic light source , aperture or scatterer is placed at the end of a sharp tip and various distance regulation mechanisms such as the shear force are used to raster scan it at nanometer separation from the sample .the most common realization of the aperture snom employs a metal - coated tapered optical fiber with a small aperture of the order of a few nanometers at its apex ( see fig .[ fig_nearfield]b ) .however , as mentioned earlier , reaching a resolution below 50 nm proves to be exceedingly hard due to the low tip throughput and the fact that the effective size of the aperture can not be reduced below about 20 nm . in principle, this problem can be circumvented by replacing the aperture with a nanoscopic source of light such as a single molecule or a single color center ( fig .[ fig_nearfield]c ) .however , the difficulties of placing the emitter at the very end of the tip and its photostability at room temperature have hampered the wide - spread adoption of this method . in the second snom approach ,one uses far - field illumination and scans a nanoscopic scatterer or antenna very close to the sample ( cf .[ fig_nearfield]d ) .the near field of this nanostructure again contains high spatial frequencies , which can be scattered by the sample . 
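the statement above that sub-wavelength information is carried by exponentially decaying fields can be made quantitative: a field component with transverse spatial frequency k_x larger than k_0 = 2*pi/lambda has an imaginary axial wavevector and decays as exp(-z*sqrt(k_x^2 - k_0^2)). the sketch below evaluates that 1/e decay length for a few grating periods; the wavelength and periods are illustrative assumptions.

```python
import numpy as np

wavelength_nm = 500.0
k0 = 2.0 * np.pi / wavelength_nm                  # free-space wavenumber in 1/nm

for period_nm in [600.0, 250.0, 100.0, 50.0, 20.0]:
    kx = 2.0 * np.pi / period_nm                  # transverse spatial frequency of the feature
    if kx <= k0:
        print(f"{period_nm:5.0f} nm period: propagating, reaches the far field")
    else:
        decay_nm = 1.0 / np.sqrt(kx**2 - k0**2)   # 1/e decay length of the evanescent field
        print(f"{period_nm:5.0f} nm period: evanescent, decays over ~{decay_nm:.0f} nm")
```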
in general , this so - called apertureless snom mode is much more challenging .the main difficulty is that the far - field illumination creates a large background from an area ( minimum of a diffraction - limited area ) that is much larger than the near - field domain of the antenna .furthermore , this background light easily results in interferometric artifacts when the height of the antenna or sample are changed .the most common platform for performing apertureless snom has been the solid tip , which might be metalized ( fig . [fig_nearfield]e ) .although such experiments in the infrared domain have become well established , reproducible fabrication and efficient characterization of the suitable tips have made apertureless snom in the visible regime an uphill battle . as a result , many scientists turned to using more well - defined metallic nanoparticles placed at the end of a dielectric tip ( fig .[ fig_nearfield]f ) .this approach has been particularly successful in the context of antenna - based snom and in producing quantitative data on the near - field enhancement of fluorescence from single molecules .an alternative way of imaging in the near field has emerged in the context of metamaterials .these artificial materials are structures with sub - wavelength - sized unit cells and can be engineered to exhibit intriguing properties such as a negative index of refraction . in the year 2000 , a so - called perfect lens was proposed by john pendry using a slab of negative index material . as illustrated in fig .[ fig_nearfield2]a , the idea with such a lens is that an object that is embedded in a material with refractive index can be perfectly imaged by a slab of a material with refractive index , assuming that it is perfectly impedance matched and completely lossless .the exponential decay of the evanescent wave intensity inside the material is completely reversed by an exponential amplification inside the material with refractive index . thus both propagating and evanescent fields are imaged by this lens yielding a perfect image .such a perfect lens has not yet been experimentally realized because of the extremely delicate constraints on the properties of the negative index material .there are , however , experimental demonstrations of a superlens in the optical regime where sub - diffraction imaging was shown using metamaterial structures .there have also been efforts for the realization of a hyperlens ( see fig .[ fig_nearfield2]b ) to project the near field into the far field using a cylindrical or a spherical hyperlens .fabrication issues , material properties and the requirement that the object of interest must be placed in the near field of the hyperlens , make the practical usage of these interesting imaging techniques very limited although they might possibly find use in nanofabrication and optical data storage .fluorescence microscopy witnessed several important developments in the 1980s .this included commercialization of scanning confocal microscopy and the invention of two - photon absorption microscopy .a young doctoral student in heidelberg , stefan hell , took upon himself not to accept the diffraction limit in its usual formulation . 
shortly after finishing his doctoral work , hell proposed to exploit stimulated emission to deplete ( sted ) the fluorescence of molecules in the outer part of the illumination and thereby reduce the size of the effective fluorescence spot .the first experimental realization of this idea was reported just a few years later .hell was awarded the nobel prize in chemistry in 2014 for his achievements in this area . upon excitation, a fluorescent molecule is usually brought from its singlet ground state s to a higher vibrational state of the singlet electronic state s ( see fig .[ fig_sted]a ) , which then relaxes on a picosecond time scale to the lowest vibrational level .if the quantum efficiency of the molecule is high , a photon is emitted within a few nanoseconds .hell had the idea to suppress this fluorescence in the outer part of the excitation beam by stimulating the emission much faster than the nanosecond spontaneous emission in that region . to do this , he used a doughnut - shaped laser beam profile ( see fig .[ fig_sted]b ) that is overlapped on the excitation focal spot .the stimulated emission takes place at a wavelength that is red - shifted with respect to the main part of the fluorescence line , allowing one to spectrally separate the stimulated emission from various fluorescence components .it is important to note that the depletion doughnut beam itself is also diffraction limited , but the effective size of its hole in the middle can be adjusted by the beam intensity .in other words , the higher the intensity of the depletion beam , the farther one goes into the saturation of the fluorophores . as a result of this nonlinear behavior, the fluorescence psf can be sculpted .it follows that the resulting sub - diffraction resolution can be described by a modified form of ernst abbe s equation after considering the degree of saturation , here , is the peak intensity of the depletion laser and denotes the saturation intensity of the fluorophore .the resolution becomes sub - diffraction limited when the ratio between and becomes larger than one .the best resolution that has been demonstrated with sted is about 20 nm in the case of standard fluorescence labeling assays with organic fluorophores and about 50 nm using genetically expressed fluorescent proteins in live cells . in the case of a very robust fluorophore such as a nitrogen - vacancy color center in diamond , a resolution down to 2 - 3 nmwas successfully demonstrated .sub - diffraction imaging by using a doughnut - shaped excitation pattern for fluorescence suppression can be generalized to other photophysical mechanisms besides stimulated emission .for example , one can exploit the saturated depletion of fluorophores , which can be reversibly switched between a bright state and a dark state .this dark state can have a variety of origins such as the ground state of a fluorophore in sted , its triplet state in ground - state depletion ( gsd ) microscopy , or a nonfluorescent isomer of an over - expressed fluorescent protein .this general principle was coined `` resolft '' as an abbreviation for reversible saturable optically linear fluorescence transitions , which is also a pun on the way germans might pronounce `` resolved '' .the resolft concept can also be applied to sim , also known as saturated sim ( ssim ) . 
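the saturation scaling described above is usually written as d ≈ lambda / (2 NA sqrt(1 + I/I_sat)); since the displayed equation was lost here, this form is quoted from the general sted literature rather than from the text, and the wavelength and numerical aperture below are illustrative assumptions. the same saturation parameter reappears in the ssim discussion that follows.

```python
import numpy as np

wavelength_nm = 600.0        # depletion wavelength (illustrative)
na = 1.4                     # numerical aperture (illustrative)

def sted_resolution(saturation):
    """modified abbe formula d = lambda / (2 NA sqrt(1 + I / I_sat))."""
    return wavelength_nm / (2.0 * na * np.sqrt(1.0 + saturation))

for s in [0, 1, 10, 100, 1000]:
    print(f"I/I_sat = {s:5d}  ->  d = {sted_resolution(s):6.1f} nm")
```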
in this regime ,a sinusoidal illumination pattern becomes effectively more and more rectangular as the excitation intensity increases .this leads to higher order fourier terms in the illumination pattern periodicity . using similar back - of - the - envelope considerations as for standard sim , , one can show that spatial frequencies become detectable .indeed , using ssim , lateral spatial resolutions on the order of 50 nm have been demonstrated .the general class of resolft methods has offered a very clever strategy for circumventing the diffraction limit in fluorescence microscopy . nevertheless , these methods are accompanied with challenges , which will call for more innovations in the years to come .one of the issues is the fact that so far resolft has been a raster scan technique with a certain temporal resolution , which might limit live cell imaging applications or the study of dynamic processes .this issue can be resolved to some extent by massive parallelization , e.g. via a square grid similar to structured illumination microscopy .a second challenge concerns the photophysics of the fluorophores .excitation to higher electronic states can efficiently compete with the stimulated emission process , opening pathways for photobleaching .the company abberior has addressed this problem by developing a range of suitable dye molecules and other fluorophores covering the greater part of the visible spectrum . a further area of development will be multicolor applications as it is common in biological fluorescence microscopy .since already two lasers are necessary for resolft methods , the implementation of two or more color channels is somewhat more complex than in standard fluorescence microscopy .nevertheless , sted with two different fluorescence labels and two pairs of excitation and sted lasers has been demonstrated .there have also been efforts for sharing the excitation and sted lasers , or separating a third fluorophore that lies in the same spectral region by its fluorescence lifetime .imaging deep in the sample also poses difficulties for resolft microscopy . as the attainable resolution critically depends on the quality of the intensity minimum and phase fronts in the center of the depletion beam , slight changes of the depletion beam profile by scattering deteriorates the imaging performance .this problem can be , in principle , addressed by recent developments using adaptive optics . in parallel to the developments of snom in the 1980s and 1990s , scientists worked hard to detect matter at the level of single ions and single molecules .these efforts were originally more motivated by fundamental issues in spectroscopy and to a good part by the community that had invented methods such as spectral hole burning for obtaining high - resolution spectra in the condensed phase .although already in the 1970s and 80s there had been indications of reaching single - molecule sensitivity in fluorescence detection , the first reliable and robust proof came from the work of w. e. moerner , who showed in an impressive experiment that a single pentacene molecule embedded in an organic crystal could be detected in absorption at liquid helium temperature . soon after that m. orrit demonstrated a much better signal - to - noise ratio by recording the fluorescence signal in the same arrangement .the ease of this measurement kick - started the field of single molecule fluorescence detection .however , it was not until 1993 that e. 
betzig provided the first images of single molecules at room temperature .betzig used a fiber - based aperture snom to excite conventional dye molecules on a surface . at this point, there was a strong belief that far - field excitation would not be favorable because it would cause a large background .interestingly , shortly after that r. zare s lab demonstrated scanning confocal images of single molecules .this achievement was the final step towards a widespread use of single molecule fluorescence microscopy .a particularly interesting feature of fluorescence that was revealed by single molecule detection is photoblinking , i.e. the reversible transition between bright and dark states of a fluorescent molecule .different physical mechanisms may cause fluorescence intermittencies depending on the type of fluorophore as well as its surroundings .for example , an excited molecule can undergo a transition to a metastable triplet state with a much longer lifetime than the singlet excited state ( see fig .[ fig_triplet ] ) . during this time , the molecule is off because it can not be excited .the evidence for triplet state blinking is an off - time distribution that follows an exponential law .however , in some cases the off - time statistics reveal a power law similar to the blinking observed in semiconductor nanocrystals .a proposed mechanism for such a fluorescence intermittency is the formation of a radical dark state .there have been several studies on the topic of blinking , but there is only a limited amount of room - temperature and low - temperature data available and many questions remain open .fluorophores also undergo photobleaching , i.e. an irreversible transition to a non - fluorescent product . at room temperature dye moleculestypically photobleach within several tens of seconds or a few minutes if sophisticated antifading reagents are used in the buffer .however , the survival times at cryogenic temperatures can go beyond an hour or even days in the case of a crystalline matrix .in the special case of terrylene in p - terphenyl a comparable photostability has been achieved even at room temperature .unfortunately , the combination of aromatic molecules and crystalline host matrices is not compatible with the labeling strategies in the life sciences .the idea behind localization microscopy is to find the position of each fluorophore by determining the center of its diffraction - limited psf with a higher precision than its width .this is accomplished by fitting the distribution of the pixel counts on the camera with a model function that describes this distribution ( see fig . [ fig_sml ] ) .the principle was already conceived by werner heisenberg in the 1920s and experimentally demonstrated in the 1980s in the context of localizing a single nanoscale object with nanometer precision . in this scheme , single emitters can be localized with arbitrarily high precision only dependent on the available snr .the localization precision is mainly determined by the number of photons that reach the detector , the size of the psf , the level of background noise and the pixel size .the background noise is in turn affected by the luminescence of the cover slip or other materials on the sample as well as the dark counts and readout noise of the camera . 
the attainable localization precision ( )can be written as using a maximum likelihood estimation procedure with a 2d gaussian function .this predicts a localization error close to the information limit .here , denotes the detected number of photons , stands for the half - width of the psf given by the standard deviation of a gaussian profile , is the level of background noise and denotes the pixel size .the limiting factor is typically the finite value of caused by irreversible photobleaching of the fluorophore .the photon budget of commonly used photoactivatable fluorescent proteins lies in the range of a few hundred detected photons , which typically leads to a localization precision on the order of 20 nm . to improve on this limitation, several efforts have optimized the choices of fluorophores and the buffer conditions , engineered the dye molecule itself , or carefully controlled its environment .the best localization precision for single molecules that has been reported is just under 0.3 nanometers .a particularly powerful tool based on the concept of localization is single particle tracking .localizing a fluorescent marker or non - fluorescent nano - object of interest as a function of time allows one to study dynamical processes such as diffusion in lipid membranes .single particle tracking has been performed with various imaging modalities including fluorescence , scattering and absorption .however , high temporal and spatial precisions call for a trade - off because smaller integration times lead to a lower signal and lower snr .interferometric scattering microscopy ( iscat ) can provide an ideal solution , offering up to mhz frame rates in combination with nanometer localization precision in the case of small scatterers like gold nanoparticles .interestingly , localization techniques and particle tracking have also found applications in a wide range of studies such as coherent quantum control or cold atoms .for example , identification of atoms down to the single lattice site of an optical lattice has provided an avenue to manipulating single qubits and studying many - body effects like the quantum phase transition from a superfluid to a mott insulator .given that a single molecule can be localized to an arbitrarily high precision , one could also resolve two nearby molecules if only one could address them individually .this was formulated by e. betzig in 1995 as a general concept , but it was already demonstrated experimentally in 1994 by the group of urs wild in cryogenic studies . in the latter ,the inhomogeneous distribution of the molecular resonance lines allows one to address each molecule separately by tuning the frequency of a narrow - band excitation laser .we have recently demonstrated that the same principle of spectral selection can also be used to address single ions in a solid . by combining cryogenic high - resolution spectroscopy and local electric field gradients ,it was also shown in our laboratory that two individual molecules could be three - dimensionally resolved with nanometer resolution . 
to our knowledge, those results still establish the highest three - dimensional optical resolution .several groups have tried different strategies for distinguishing neighboring fluorophores .one example was to use semiconductor nanocrystals with different emission spectra or stepwise bleaching of single molecules .however , extension of these methods to very large number of fluorophores was not practical .the decisive breakthrough came in 2006 , when three groups reported very similar strategies based on stochastic photoactivation processes that switched the fluorophores between a dark state and a fluorescent state ( see also fig . [ fig_storm]b ) .eric betzig and colleagues called their method photoactivated localization microscopy ( palm ) , sam hess and his team coined the term fluorescence photoactivation localization microscopy ( fpalm ) , and x. zhuang and her group used the term stochastic image reconstruction microscopy ( storm ) . in each case , the fluorophores are placed on the target structure by different labeling techniques . while commonly used antibody - based assays have a label - to - target distance of about 20 nm , using nanobodies , aptamers or fluorescent proteins can reduce that distance to a few nanometers .m , main image ; 100 nm , insets .( c , d : reproduced with permission from . ) ] figure [ fig_storm ] illustrates the data acquisition procedure for super - resolution imaging based on single molecule localization . by shining light on the sample with a blue - shifted activation laser beam, one can stochastically switch on a sparse subset of fluorophores .next , one turns on the excitation laser and collects the fluorescence from the few activated fluorescent molecules until they become deactivated . by adjusting the intensity of the activation beam, one can control the average number of activated fluorescent labels to ensure that psfs from individual fluorophores do not overlap .one then performs a localization analysis for each recorded psf to determine the positions of all molecules .the process of activation , recording and localization is then repeated for many other random subsets of fluorophores until one is satisfied with the number of labels for reconstructing a super - resolution image .there are also variations of this acquisition procedure using , for instance , asynchronous activation and deactivation of fluorophores or assays where diffusing fluorophores get activated upon binding to the target structure . 
in the standard super - resolution imaging modalities such as palm and storm , usually photochemistry is employed in order to exert some degree of control on the photoswitching kinetics .this control is necessary since the achievable resolution also depends on the ratio of the on- and off - switching rates of the used fluorophores .examples of such photochemistry is the chromophore cis - trans isomerization or protonation change in the case of fluorescent proteins , and the interplay of reduction and oxidation using enzymatic oxygen - scavenging systems and photochromic blinking for organic dye molecules .the state of the art in resolution for localization microscopy is about 10 nm limited by the total number of photons emitted before photobleaching .however , even sub - nanometer localization precision has been reported in cases where photobleaching could be delayed by using oxygen scavengers or cryogenic conditions .the latter measurements offer the additional advantage of a more rigid sample fixation than chemical fixatives .indeed , first studies exploring palm imaging at low temperature have also recently surfaced .the full arsenal of methodologies developed for cryogenic electron microscopy may be applied to prepare samples for cryogenic super - resolution microscopy and even dynamic processes could be studied either by stopping processes at different times or by employing methods like local heating with an infrared laser .a quantitative assessment of the molecules positions critically depends on both the precision and accuracy of the employed method .precision in localization microscopy is determined by the standard deviation of the estimated position of an emitter assuming repeated measurements , whereas the accuracy quantifies how close the estimated position lies to the true position . in other words , even if the measurement precision is high , absolute distance information might be compromised by technical sources of bias like pixel response non - uniformity of the camera or sample drift .an important systematic source of error concerns the dipolar emission characteristics of single molecules .it is known that the image of an arbitrarily oriented dipolar light source deviates from a simple isotropic psf ( see fig .[ fig_dipoleorientation ] ) . as a result ,localization of the individual fluorophores requires a fitting procedure that takes this effect into account . in the presence of nearby interfaces, this can become a nontrivial task .however , the psf asymmetry is much less pronounced if a microscope objective with low numerical aperture is used . of course , fitting the data with full theoretical treatment of the psf or good approximations provides accurate values for the position and orientation of the fluorophore even in the case of high numerical aperture . the most severe limit in localization microscopy is the difficulty of high - density labeling .here it is to be remembered that the image in this method is constructed by joining the centers of the individual fluorophores ( see fig .[ fig_storm ] ) .this means that a resolution of 5 nm in deciphering the details of a figure would need at least two fluorophores that are spaced by about 2.5 nm according to the sampling theorem .the first problem with this requirement is the difficulty of labeling at such high densities .second , once one manages to place the fluorophores at the right place , one faces the problem that such closely - spaced fluorophores undergo resonance energy transfer ( homo - fret ) . 
as a result , the emission can not be attributed to one or the other fluorophore , and the basic concept of localization microscopy breaks down .another issue to be considered is that of accidental overlapping psfs from neighboring fluorophores . in high - precision co - localization microscopythis situation can usually be avoided by rejecting the affected psfs .however , it would be more advantageous to localize these psfs as well .it turns out that the fluorophore density can be increased to achieve faster acquisition times by using appropriate algorithms .and the distance between the molecule and the interface is . *( b ) * examples for psfs for two different numerical apertures and four inclination angles at = 2 nm . ]the lateral resolution in the above - mentioned super - resolution methods is currently of the order of 10 - 50 nm , but these techniques are not fundamentally limited by any particular physical phenomenon .the resolution is rather hampered by practical issues , which can be addressed to various degrees in different applications .thus , only emphasizing a record resolution outside a specific context is not a meaningful exercise .an important development concerns super - resolution in three dimensions .one possibility to localize a fluorophore along the optical axis is via astigmatism . by inserting a cylindrical lens in the detection path ,the psf becomes elliptical , from the degree of its ellipticity and orientation one can deduce the additional axial position of the fluorophore .lateral localization precision of about 25 nm with an axial localization precision of about 50 nm was already reported in 2008 .recently , an isotropic localization precision of about 15 nm in all three spatial dimensions was reported using storm in combination with an airy - beam psf . an alternative way to obtain 3dsuper - resolution is multi - focal plane imaging . here, different focal planes are imaged on various regions of the camera by splitting the fluorescence light and introducing different path lengths .the height can then be deduced from the degree of defocusing .another approach uses an engineered psf that encodes the axial position of the emitter in the rotation angle of two lobes , a double - helix psf .a crucial requirement for practical biological microscopy is the ability to image many entities simultaneously .the most convenient and common approach is to label different parts of the specimen with fluorophores of distinct absorption or emission spectra . considering that the new super - resolution methods , including resolft , palm , storm , etc ., all rely on the photophysical properties of the fluorophores , it is not a trivial matter to marry them with multicolor imaging . 
first , fluorescence probes with the desired switching properties must be available with the correct excitation , emission and activation wavelengths .interestingly , scientists began to develop multicolor super - resolution solutions shortly after the introduction of localization microscopy .indeed , two - color and even three - color super - resolution imaging has been demonstrated .a second challenge concerns the crosstalk that is caused by the spectral overlap of the emission bands of different fluorophores , which are typically several tens of nanometers broad at room temperature .a possible solution would be to perform super - resolution microscopy at low temperatures because even though dye molecules suited for labeling in life science do not show lifetime - limited linewidths at cryogenic temperatures , their spectra can become narrower by orders of magnitude . as super - resolution optical microscopy becomes a workhorse, it becomes more and more important that the methods can also handle live cell imaging . here, imaging speed and phototoxicity pose important problems . on the one hand, fast imaging often requires a large excitation dose to be able to collect lots of photons in a short time window .high light dose , however , causes the production of free radicals through the photo - induced reaction of the fluorophore with molecular oxygen .furthermore , excess light also brings about fast photobleaching and short observation times .an interesting approach to minimize these phototoxic effects is light - sheet microscopy . here ,the wide - field detection is disentangled from the illumination , which consist of a thin sheet of light perpendicular to the focal plane . by performing tomographic recordings at different sample orientations , onecan then obtain impressive three - dimensional images of whole organs in small animals such as zebrafish .of course , diffraction limits the thickness of the light sheet to dimensions well above a wavelength , especially if the illumination area is to be large . by employing slowly diffracting beams such as bessel beams, one can minimize the problem of beam divergence .light - sheet microscopy is very popular in developmental biology , where super - resolution is less important than large - scale information about the whole system over a longer time .an application example of the technique is the four - dimensional imaging of embryos at single - cell resolution .if we now relax the requirements for routine biological microscopy , we find that several experiments have already extended super - resolution microscopy to the nanometer and even sub - nanometer level .the first demonstration of nanometer resolution in all three spatial dimensions used low - temperature fluorescence excitation spectroscopy in combination with recording the position - dependent stark shift of the molecular transition in an electric field gradient . in another experiment, a distance of about 7 nm was measured between two different dye molecules with an accuracy of 0.8 nm using a feedback loop for the registration of two color channels and oxygen - reducing agents .the most recent achievement concerns angstrom accuracy in cryogenic colocalization . 
here , two identical fluorophores attached to a double - stranded dna at well - defined separations as small as 3 nm were resolved with a distance accuracy better than 1 nm .figure [ fig_coloc]a illustrates how this can be used to determine the positions of the dye molecules and their separation .the intensity trace retrieved from the image stack of a dna molecule with the attached fluorescent molecules shows three levels .the upper level corresponds to the state where both fluorophores are on , the middle one to the case that only one is fluorescent , and in the lowest level both fluorophores are in a dark state .after localizing the single fluorophores in the frames where there is only one present , one can subtract the image frames from those where both fluorophores were on . in this fashion, one can also localize the second fluorophore .an alternative route is to find the center of mass of each fluorophore and then compute the distance . using the latter approach ,both sub - nanometer localization precision and accuracy for the distance measurement have been demonstrated ( see fig . [ fig_coloc]b ) .the holy grail of super - resolution microscopy is to break free from the shackles of fluorescent labels .the use of fluorescence markers introduces a variety of difficulties , starting with the labeling process itself and ending with the inevitable photobleaching of the labels . in the past years, there have been several proof - of - principle demonstrations to obtain fluorescence - free super - resolution images .one possibility is a technique called optical diffraction tomography ( odt ) . here, the sample is illuminated using every possible angle of incidence allowed by the numerical aperture of the microscope objective .then the intensity , phase and polarization state of the scattered far field are recorded for different angles , and the distribution of the permittivity of the object of interest is reconstructed numerically .recently , an optical resolution of about one - fourth of the wavelength was experimentally demonstrated .label - free super - resolution has also been demonstrated using surface - enhanced raman scattering ( sers ) , where one performs a stochastic reconstruction analysis on the temporal intensity fluctuations of the sers signal .another intriguing method employs ground - state depletion of the charge carriers with a doughnut shaped beam resulting in the transient saturation of the electronic transition by using a pump - probe scheme .there have also been several approaches to achieve super - resolution imaging using the photon statistics of the emitters .as discussed earlier , it is also possible to detect unlabeled biomolecules such as proteins via iscat detection of their rayleigh scattering . in this methodthe image of a single protein can be localized in the same manner as in fig .[ fig_sml ] . heretoo , one needs to turn the proteins on and off individually if one wants to resolve them beyond the diffraction limit . in dynamic experiments ,the arrival time of each protein can serve as a time tag . 
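the level analysis described above can be sketched numerically. assuming the recorded frames have already been sorted by their total intensity into "both emitters on" and "only one emitter on" groups, the lone emitter is localized from the single-on frames, its average image is subtracted from the both-on frames, and the second emitter is localized from the difference. the one-dimensional image model, brightness and noise levels below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(64.0)                                      # pixel coordinate (1d for brevity)
psf = lambda x0: np.exp(-0.5 * ((x - x0) / 3.0) ** 2)    # gaussian psf model, sigma = 3 px

pos_a, pos_b = 30.0, 33.0                                # two emitters 3 px apart, unresolved
frames_both = [400 * (psf(pos_a) + psf(pos_b)) + rng.poisson(10, x.size) for _ in range(50)]
frames_a = [400 * psf(pos_a) + rng.poisson(10, x.size) for _ in range(50)]    # b is dark

def centroid(img):
    img = img - np.median(img)                           # crude background subtraction
    return np.sum(x * img) / np.sum(img)

loc_a = centroid(np.mean(frames_a, axis=0))
loc_b = centroid(np.mean(frames_both, axis=0) - np.mean(frames_a, axis=0))
print(f"recovered separation {abs(loc_b - loc_a):.2f} px (ground truth 3.00 px)")
```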
the localization precision andtherefore the attainable resolution is determined by the signal - to - noise ratio , which in turn depends on the size of the biomolecule in iscat .this size - dependent signal also provides a certain level of specificity that in fluorescence modalities can only be achieved by employing different fluorophores .another report has used the saturation of scattering in plasmonic nanoparticles .although the saturation effect in the absorption of small plasmonic nanoparticles has been studied for many years , saturation of scattering has only been reported recently .this saturation stems from a depletion of the plasmon resonance .similar to the case of super - resolution imaging in ssim , the saturation effect in scattering allows one to record images with a resolution beyond the diffraction - limit . by recording images at different light intensities ,super - resolved images were obtained and a resolution of was demonstrated .finally , combination of optical microscopy with other imaging modes such as scanning probe techniques or electron microscopy can offer very useful additional information about the sample .some of the recent examples of correlative microscopy are the combination of optical super - resolution microscopy with electron microscopy and with atomic force microscopy ( afm ) .the quest for inventing new imaging mechanisms and pushing the spatial and temporal resolutions is a fundamental challenge for physicists and of great practical importance in science and technology .abbe s formulation of the diffraction limit at the end of the nineteenth century put a harsh spell on optical microscopy , which lasted for about one hundred years .the advent of scanning near - field microscopy broke this spell , and once the dogma of a fundamental limitation of resolution was eliminated , scientists reconsidered many scenarios and explored fascinating techniques , which we have discussed in this review article .it is now fully accepted that resolving two small objects at very close distances could be , in principle , achieved with an arbitrary resolution and accuracy . the key concept in breakingthe diffraction barrier has been to exploit more information from the system , e.g. taking advantage of the spectroscopic energy levels or transitions of fluorophores . in this spirit, scientists continue to develop optical imaging methods by devising clever schemes that rely on nonlinear phenomena , quantum optics or ultrafast laser spectroscopy .the many elegant ideas do not , however , all have practical implications .in particular , one has to consider many restrictions in biological imaging .for example , the amount of laser power that one can shine onto a live cell before it is damaged is orders of magnitude lower than what a diamond sample can take .moreover , there are important issues concerning labeling techniques and the influence of the label on the functionality of its environment .one subtle point regards the production of free radicals in a photochemical reaction of the excited fluorophore with the surrounding oxygen molecules . 
to minimize the effect of phototoxicity , it is helpful to acquire images as efficiently as possibleof course , this is also highly desirable because one gets access to more of the dynamics of the biological and biochemical processes .the nobel prize in chemistry in 2014 has honored the contributions of optical scientists in the area of super - resolution fluorescence microscopy and single - molecule detection .this is the fourth prize after dark - field microscopy ( zsigmondy 1925 ) , phase contrast microscopy ( zernike 1953 ) , and the green fluorescent protein ( chalfie , shimomura and tsien 2008 ) , which is dedicated to optical microscopy , spread over nearly one hundred years .the recent achievements are a testimony to the livelihood of light microscopy as a research field in fundamental science .they show a new trend against the older belief that the physics of imaging is fully understood and that its development belongs to engineering departments .we are convinced that the combination of concepts from laser spectroscopy , quantum optics , photophysics , photochemistry , nanotechnology , and biophysics will introduce many new avenues for optical imaging. currently resolution at around 10 - 50 nm is routinely reported in various configurations , but we have shown that this limit can be pushed by another one hundred times to the sub - nanometer level . more importantly various imaging contrasts , e.g. label - free techniques , promise to open the door to whole new classes of information .finally , temporal resolution will be an area of innovation and growth .the dynamics of interest in biomedical processes range from femtosecond for electronic and vibrational degrees of freedom to days and years for growth and disease progression .this astronomical span of the time scales will certainly require totally different techniques .progress in all these cases will ultimately confront the _ signal - to - noise barrier_. development of methods for efficient collection of photons and their combinations with lab - on - chip solutions , advances in photochemistry and photophysics of new labels as well as better detector and laser technologies will all contribute to pushing this barrier .experimental physicists and in particular optical scientists are well positioned to lead the ongoing revolution of optical microscopy if they manage to achieve a high degree of cross fertilization between biology , chemistry , medicine and physics .we thank anna lippert , richard w. taylor , simon vassant and luxi wei for discussions and critical comments on the manuscript . we acknowledge support from the alexander von humboldt foundation and the max planck society . | optical microscopy is one of the oldest scientific instruments that is still used in forefront research . ernst abbe s nineteenth century formulation of the resolution limit in microscopy let generations of scientists believe that optical studies of individual molecules and resolving sub - wavelength structures were not feasible . the nobel prize in 2014 for super - resolution fluorescence microscopy marks a clear recognition that the old beliefs have to be revisited . in this article , we present a critical overview of various recent developments in optical microscopy . in addition to the popular super - resolution fluorescence methods , we discuss the prospects of various other techniques and imaging contrasts and consider some of the fundamental and practical challenges that lie ahead . [ 1995/12/01 ] |
we consider the convex minimization problem with linear constraints and a separable objective function where and are continuous closed convex ( could be nonsmooth ) functions ; and are given matrices ; is a given vector ; and are nonempty closed convex subsets of and , respectively . throughout, the solution set of ( [ cp ] ) is assumed to be nonempty ; and and are assumed to be simple in the sense that it is easy to compute the projections under the euclidean norm onto them ( e.g. , positive orthant , spheroidal or box areas ) .let be the augmented lagrangian function for that defined by in which is the multiplier associated to the linear constraint and is a penalty parameter .based on the classic douglas - rachford operator splitting method , the alternating direction method of multipliers was proposed by gabay and mercier , glowinski and marrocco in the mid-1970s , which generates the iterative sequence via the following recursion : [ alx ] x^k+1=_x_(x , y^k,^k ) , + [ aly]y^k+1=_y_(x^k+1,y,^k ) , + [ all]^k+1=^k-(ax^k+1+by^k+1-b ) . based on another classic operator splitting method , i.e. , peaceman - rachford operator splitting method , one can derive the following method for : [ alxp ] x^k+1=_x_(x , y^k,^k ) , + [ allb]^k+=^k-(ax^k+1+by^k+1-b ) , + [ alyp]y^k+1=_y_(x^k+1,y,^k+ ) , + [ allp]^k+1=^k+-(ax^k+1+by^k+1-b ) .while the global convergence of the alternating direction method of multipliers - can be established under very mild conditions , the convergence of the peaceman - rachford - based method - can not be guaranteed without further conditions .most recently , he et al . propose a modification of - by introducing a parameter to the update scheme of the dual variable in and , yielding the following procedure : [ he1 ] x^k+1=_x_(x , y^k,^k ) , + [ he2]^k+=^k-(ax^k+1+by^k+1-b ) , + [ he3]y^k+1=_y_(x^k+1,y,^k+ ) , + [ he4]^k+1=^k+-(ax^k+1+by^k+1-b ) .note that when , - is exactly the same as - .they explained the nonconvergence behavior of - from the contract perspective , i.e. , the distance from the iterative point to the solution set is merely nonexpansive , but not contractive .the parameter in - plays the essential role in forcing the strict contractiveness of the generated sequence . under the condition that , they proved the same sublinear convergence rate as that for admm .particularly , they showed that - achieves an approximate solution of with the accuracy of after iterations convergence rate means the accuracy to a solution under certain criteria is of the order after iterations of an iterative scheme ; or equivalently , it requires at most iterations to achieve an approximate solution with an accuracy of . ] , both in the ergodic sense and the nonergodic sense .note that the parameter plays different roles in and : the former only affects the update of the variable in while the latter is for the update of the dual variable .hence , it is natural to choose different parameters in these two equalities . in this paper , we give such a scheme by introducing a new parameter in , i.e. 
, the dual variable is updated by the following manner : for convenience , we introduce the whole update scheme of the _ modified strictly contractive semi - proximal peaceman - rachford splitting method _ ( sp - prsm ) as [ equ : sp - prsm ] [ equ : sp - prsm1 ] x^k+1=_x_(x , y^k,^k ) + 12 x - x^k_^2 , + [ equ : sp - prsm2 ] ^k+=^k-(ax^k+1+by^k - b ) , + [ equ : sp - prsm3 ] y^k+1=_y _(x^k+1,y,^k+ ) + 12 y - y^k_^2 , + [ equ : sp - prsm4 ] ^k+1=^k+-(ax^k+1+by^k+1-b ) where and are two positive semi - definite matrices . in applications , by choosing different matrices and customizing the problems structures , we can obtain different efficient methods .our main contributions are 1 .motivated by the nice analysis techniques in and , we proved that the sequence generated by sp - prsm is strictly contractive and thus convergent , under the requirement that moreover , we proved that sp - prsm is sublinearly convergent both in the ergodic and nonergodic sense .note that the nonergodic convergence rate requires that ] such that besides , it is straightforward to see from that is bounded .this together with shows that are all bounded . obviously , we claim that is bounded .moreover , we see that is bounded as .notice that , with , this implies that is bounded .recalling that and is bounded , it is safe to say that is also bounded . \(ii ) to show that any cluster point of the sequence is an optimal solution of .let be a subsequence of the sequence and . since the graphs of and are both closed , taking the limit with respect on both sides of and using , we have that which means that is an optimal solution of .\(iii ) to show that the sequence has only one cluster point .we first replace with in the analysis of steps ( i ) and ( ii ) .it follows from that .owing to the monotonicity of the sequence , we can see that this together with shows that using again the inequality and , we have combing and , and using that and , we immediately have the proof is completed .the rate of convergence of an algorithm can help us have a deeper understanding of the algorithm .thus , in this section , we establish the sublinear rate of convergence of sp - prsm , in ergodic sense and nonergodic sense , respectively .we now give the sublinear rate of convergence of sp - prsm in the ergodic sense , which is very easy due to the key inequality .[ thm : ergodic ] let the sequence be generated by sp - prsm .for any integer , define then for any , we have that it follows from and following from that for all , summing the above inequality from both sides and noting the notation of , we derive that since holds for any , we can easily have .the proof is completed .* remark * : consider the case when the set is compact , implies the ergodic convergence of sp - prsm .[ lemma : decrease ] let the sequence be generated by sp - prsm . 
if we choose and according to , then there holds that note that also holds with , then we have choosing to be and , respectively , in and leads to and adding and and noting , following from , we obtain that \nn \\ \ge { } & ( 1 - \alpha - \gamma ) \beta \| r^{k+1 } - r^{k+2}\|^2 + ( 1 - \alpha)\beta \left \langle b\left[(y^k - y^{k+1 } ) - ( y^{k+2 } - y^{k+1})\right ] , r^{k+1 } - r^{k+2}\right\rangle .\label{equ : lemma : opt : nonergodic : b3}\end{aligned}\ ] ] following the deriving process of , we have that \right\|^2 + ( \alpha + \gamma ) \beta \|r^{k+1 } - r^{k+2}\|^2.\label{equ : lemma : opt : nonergodic : b4}\end{aligned}\ ] ] thus we conclude that + \|(w^{k+1}-w^{k } ) - ( w^{k+2 } - w^{k+1 } ) \|_h^2 \nn \\ \ge { } & ( 2 - \alpha - \gamma ) \beta \| r^{k+1 } - r^{k+2}\|^2 + 2(1 - \alpha)\beta \left \langle b\left[(y^k - y^{k+1 } ) - ( y^{k+2 } - y^{k+1})\right ] , r^{k+1 } - r^{k+2}\right\rangle \nn \\ & + ( 1-\alpha)\beta \left\|b [ ( y^{k+1 } - y^k ) - ( y^{k+2 } - y^{k+1})]\right\|^2 \nn \\ \geq { } & ( 1 -\gamma )\beta \| r^{k+1 } - r^{k+2}\|^2 \geq \frac{1 - \gamma}{\alpha + \gamma } \|(w^{k+1}-w^{k } ) - ( w^{k+2 } - w^{k+1 } ) \|_g^2,\nn \ ] ] where the first inequality is due to and , the second inequality is trivial and the last inequality owes to . the proof is completed .[ theorem : nonergodic ] let the sequence be generated by sp - prsm . if ] , it follows from that , which together with can easily suggest .the proof is completed .for simplicity , we only consider the case when and . in this case, we always have that [ ass : y ] the set .[ ass : gradient_lips ] the gradient is lipschtiz continuous on with .thus for any there holds that and [ ass : strong_convex ] the function is strongly convex on with constant .thus for any there holds that since , it follows from that similarly , we obtain from that [ thm : linear ] let the sequence be generated by sp - prsm . under assumptions [ ass : y ] ,[ ass : strong_convex ] , [ ass : gradient_lips ] , there must exists some constant such that letting in , with , we can obtain next we will estimate the two terms of the righthand side of , respectively .firstly , with , we see that it follows from , and the convexity of that for any , consider the convex combination of and , we know from , and that there holds that by applying the inequality with , and , we have plugging into , we further have where , . with the definition of , we can easily see that there exits a positive constant such that combing , and , we see that where the proof is completed .* remark*. let us consider as a special case , namely , . here , and reduces to . hence .letting , we have that where , and the equality attains when .the largest is which reduces to ( 3.16 ) in when .in this section , we demonstrate the potential efficiency of our method sp - prsm by solving the -regularized least square problem : where is the data matrix , is the number of the data points , is the number of features , is the vector of feature coefficients to be estimated and , is the observation vector and is the regularization parameter . given , and a -sparse vector ( is the number of nonzero elements in over ) , the matlab codes for generating the data of are given as .... xbar = sprandn(n,1,p ) ; d = randn(m , n ) ; d = d*spdiags(1./sqrt(sum(d.^2))',0,n , n ) ; % normalize columns b = d*xbar + sqrt(0.001)*randn(m,1 ) ; mu = 0.1 * norm(d'*b , ' inf ' ) . .... by introducing an auxiliary variable , we reformulate the problem as we consider to apply the sp - prsm to solve . 
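before the numerical section, it may help to restate the sp-prsm iteration introduced earlier, whose latex markup was lost in extraction; read together with the surrounding text it appears to be the following (the assignment of the positive semi-definite matrices S and T to the x- and y-subproblems, respectively, is our assumption):

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x\in\mathcal{X}} \ \mathcal{L}_{\beta}(x, y^{k}, \lambda^{k})
           + \tfrac{1}{2}\,\|x - x^{k}\|_{S}^{2},\\
\lambda^{k+\frac12} &= \lambda^{k} - \alpha\beta\,(A x^{k+1} + B y^{k} - b),\\
y^{k+1} &= \arg\min_{y\in\mathcal{Y}} \ \mathcal{L}_{\beta}(x^{k+1}, y, \lambda^{k+\frac12})
           + \tfrac{1}{2}\,\|y - y^{k}\|_{T}^{2},\\
\lambda^{k+1} &= \lambda^{k+\frac12} - \gamma\beta\,(A x^{k+1} + B y^{k+1} - b).
\end{aligned}
```

a minimal runnable sketch of this scheme (with S = T = 0 and closed-form subproblems) on a toy instance is given below. the choice f1(x) = ||x - p||^2, f2(y) = ||y - q||^2 with the constraint x - y = 0 is an assumption made purely so that every step has an explicit solution, and the relaxation factors are picked inside the convergence region discussed above; setting alpha = gamma recovers the strictly contractive prsm, and alpha = gamma = 1 the original scheme.

```python
import numpy as np

# toy instance: min ||x - p||^2 + ||y - q||^2  s.t.  x - y = 0, exact solution (p + q) / 2
p, q = np.array([1.0, -2.0]), np.array([3.0, 4.0])
beta, alpha, gamma = 1.0, 0.8, 0.9          # penalty and the two relaxation factors
x, y, lam = np.zeros(2), np.zeros(2), np.zeros(2)

for _ in range(100):
    x = (2 * p + lam + beta * y) / (2 + beta)     # x-subproblem in closed form
    lam = lam - alpha * beta * (x - y)            # intermediate multiplier update
    y = (2 * q - lam + beta * x) / (2 + beta)     # y-subproblem in closed form
    lam = lam - gamma * beta * (x - y)            # final multiplier update

print("computed x:", np.round(x, 4), " expected:", (p + q) / 2)
```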
in our implementation, we always choose and . starting from and , with some suitably chosen proximal matrix , the iterative scheme is given as where the shrinkage operator is defined as and if and if ., we stop the iterative scheme when or the iterative counter . considering that the role of and in the sp - prsm are equal, we can easily extend the convergence domain to note that also establish the convergence of the sp - prsm with and when the union domain defined by and will be considered below .we generate the mesh grid with respect to and by equally dividing the intervals ] into 10 parts , respectively .we set , .the corresponding scatter diagram is depicted in figure [ figure:1 ] . for consideration of space ,only the detailed results for are shown in table [ results:1 ] . from this table, we see that when and , the sp - prsm always perform best ; when and satisfy , the results are not so good .+ , , the intervals ] are both equally divided into 10 parts .note that the ` iter'-axis and ` alpha'-axis are both in reverse order.,width=384 ] .the number of iterations taken by sp - prsm with different and for solving with , , ` - ' means that the corresponding and do not satisfy . [ cols="^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] from table 4 we can observe that we can use indefinite matrices in the subproblems and , which can further improve the efficiency of the algorithm .in fact , under certain conditions on the problems data and suitable requirements on the matrices , we can theoretically establish its convergence . since the analysis is very similar to those in this paper ,we do not describe them here .in this paper , we proposed a modification of the peaceman - rachford splitting method by introducing two different parameters in updating the dual variable , and by introducing semi - proximal terms to the subproblems in updating the primal variables .we established the relationship between the two parameters under which we proved the global convergence of the algorithm .we also analyzed the sublinear rate convergence under ergodic and non - ergodic senses , respectively .under further conditions of one of the objective functions , we proved the linear convergence of the algorithm .finally , we reported extensive numerical results , indicating the efficiency of the proposed algorithm .note that the parameters and are essential to the efficiency of the algorithm , which should be variable along with the iteration . allowing the parameter and with the process of the iterate may give us the freedom of choosing them in a self - adaptive manner .suitable updating rules are among our future research tasks . on the other hand , considering that the main task in the algorithm is to solve the -optimization problem ( e.g. , ) orthe -optimization problem ( e.g. , ) , solving them in an inexact manner may improve the efficiency of the algorithm .an approximate version of the proposed sp - prsm with practical accuracy criteria is also our future research topic . , _ some reformulation and applications of the alternating directions method of multipliers_. in : hager , w.w ., large scale optimization : state of the art , pp .115 - 134 .kluwer academic publishers , 1994 . , _ sur lapproximation , par lments finis dordre un , et la rsolution , par pnalisation - dualit dune classe de problmes de dirichlet non linaires _ , esaim : mathematical modelling and numerical analysis - modlisation mathmatique et analyse numrique , 9 ( 1975 ) , pp . 
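for reference, the shrinkage operator invoked in the numerical section above is, in its standard form, the component-wise soft-thresholding map shrink(v, tau)_i = sign(v_i) * max(|v_i| - tau, 0), i.e. the proximal mapping of tau*||.||_1; in the usual derivation, the x-subproblem of the reformulated problem then reduces to a single such shrinkage, while the y-subproblem is a linear least-squares solve. the snippet below is a generic sketch, not the authors' code, and the example vector is arbitrary.

```python
import numpy as np

def shrink(v, tau):
    """soft-thresholding: the proximal mapping of tau * ||.||_1, applied component-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

print(shrink(np.array([0.3, -0.05, 1.2]), 0.1))   # entries below the threshold become exactly 0
```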
| the peaceman - rachford splitting method is very efficient for minimizing the sum of two functions , each of which depends on its own variable , subject to a linear equality constraint . however , its convergence is not guaranteed without extra requirements . very recently , he _ et al . _ ( siam j. optim . 24 : 1011 - 1040 , 2014 ) proved the convergence of a strictly contractive peaceman - rachford splitting method by employing a suitable underdetermined relaxation factor . in this paper , we further extend the so - called strictly contractive peaceman - rachford splitting method by using two different relaxation factors , and , to make the method more flexible , we introduce semi - proximal terms to the subproblems . we characterize the relation between these two factors , and show that one factor is always underdetermined while the other one is allowed to be larger than 1 . such flexible conditions make it possible to cover glowinski s admm with larger stepsizes . we show that the proposed modified strictly contractive peaceman - rachford splitting method is convergent and also prove convergence rates in the ergodic and nonergodic senses , respectively . numerical tests on an extensive collection of problems demonstrate the efficiency of the proposed method . * key words * : semi - proximal , strictly contractive , peaceman - rachford splitting method , convex minimization , convergence rate . |
a fact usually assumed in astrophysics is that the main part of the mass of a typical spiral galaxy is concentrated in a thin disk ( ) .accordingly , the obtention of the gravitational potential generated by an idealized thin disk is a problem of great astrophysical relevance and so , through the years , different approaches has been used to obtain such kind of thin disk models .wyse and mayall ( ) studied thin disks by superposing an infinite family of elementary disks of different radii .brandt ( ) and brandt and belton ( ) constructed flat galaxy disks by the flattening of a distribution of matter whose surface of equal density were similar spheroids .a simple potential - density pair for a thin disk model was introduced by kuzmin ( ) and then rederived by toomre ( ) as the first member of a generalized family of models .the toomre models are obtained by solving the laplace equation in cylindrical coordinates subject to appropriated boundary conditions on the disk and at infinity .the kuzmin and toomre models of thin disks , although they have surface densities and rotation curves with remarkable properties , represent disks of infinite extension and thus they are rather poor flat galaxy models .accordingly , in order to obtain more realistic models of flat galaxies , is better to consider methods that permit the obtention of finite thin disk models . a simple method to obtain the surface density , the gravitational potential and the rotation curve of thin disks of finite radius was developed by . the hunter method is based in the obtention of solutions of laplace equation in terms of oblate spheroidal coordinates , which are ideally suited to the study of flat disks of finite extension . by superposition of solutions of laplace equation, expressions for the surface density of the disks , the gravitational potential and its rotational velocity can be obtained as series of elementary functions .the simplest example of a thin disk obtained by means of the hunter method is the well known kalnajs disk ( ) , which can also be obtained by flattening a uniformly rotating spheroid ( ) .the kalnajs disk have a well behaved surface density and represents a uniformly rotating disk , so that its circular velocity is proportional to the radius , and its stability properties have been extensively studied ( see , for instance , hunter ( ) , and ) . in this paperwe use the hunter method in order to obtain an infinite family of thin disks of finite radius .we particularize the hunter general model by considering a family of thin disks with a well behaved surface mass density .we will require that the surface density be a monotonically decreasing function of the radius , with a maximum at the center of the disk and vanishing at the edge , in such a way that the mass distribution of the higher members of the family be more concentrated at the center .the paper is organized as follows . in sec .2 we present a summary of the hunter method used to obtain the thin disk models of finite radius and also we obtain the general expressions for the gravitational potential , the surface density and the circular velocity . in the next section , sec .3 , we present the particular family of models obtained by imposing the required behavior of the surface densities and then , in sec . 4 , we analyze its physical behavior . finally , in sec . 
5, we summarize our main results .in order to obtain finite axially symmetric thin disk models , we need to find solutions of the laplace equation that represents the outer potential of a thin disklike source .according with this , we need to solve the laplace equation for an axially symmetric potential , where are the usual cylindrical coordinates .we will suppose that , besides the axial symmetry , the gravitational potential has symmetry of reflection with respect to the plane , so that the normal derivative of the potential , , satisfies the relation in agreement with the attractive character of the gravitational field .we also assume that do not vanishes on the plane , in order to have a thin distribution of matter that represents the disk .given a potential with the above properties , the density of the surface distribution of matter can be obtained using the gauss law ( ) .so , using the equation ( [ eq : con2 ] ) , we obtain {z = 0^{+}}.\label{eq : sigma}\ ] ] now , in order to have a surface density corresponding to a finite disklike distribution of matter , we impose the boundary conditions so that the matter distribution is restricted to the disk , .we introduce now the oblate spheroidal coordinates , whose symmetry adapts in a natural way to the geometry of the model .this coordinates are related to the usual cylindrical coordinates by the relation ( ) , where and .the disk has the coordinates , .on crossing the disk , changes sign but does not change in absolute value .this singular behavior of the coordinate implies that an even function of is a continuous function everywhere but has a discontinuous derivative at the disk. in terms of the oblate spheroidal coordinates , the laplace equation can be written as {,\xi } + [ ( 1 - \eta^2 ) \phi_{,\eta}]_{,\eta},\ ] ] and we need to find solutions that be even functions of and with the boundary conditions where is an even function which can be expanded in a series of legendre polynomials in the interval ( ) . according with this , the newtonian gravitational potential for the exterior of a finite thin disk with an axially symmetric matter densitycan be written as ( ) , where are arbitrary constants , and are the usual legendre polynomials and the legendre functions of second kind , respectively . with this general solution for the gravitational potential ,the surface matter density is given by and , as we will shown later , the arbitrary constants must be chosen properly so that the surface density presents a physically reasonable behavior .besides the matter density , another quantity commonly used to characterize galactic matter distributions is the circular velocity , also called the rotation curve , defined as the tangential velocity of the stars in circular orbits around the center .now , given , we can easily evaluate through the relation {z=0 } , \label{eq:1.22}\ ] ] in such a way that , by using ( [ eq : potencial ] ) , we obtain for the circular velocity .we will now particularize the above general model by considering a family of finite thin disk with a well behaved surface mass density .we will require that the surface density will be a monotonically decreasing function of the radius , with a maximum at the center of the disk and vanishing at the edge . in order to do this ,we impose the conditions and we also require that where is the total mass of the disk . 
now , by using the boundary condition ( [ eq : bc1 ] ) , the surface density can be written in the form where is an even function of , monotonically increasing at the interval , and such that furthermore , we must to impose the condition in agreement with ( [ eq:4.12 ] ) .a simple function that agrees with all the above requirements was given by and can be written as where , in order to fulfill the condition ( [ eq : limit ] ) , we must take . with this particular choice of obtain an infinite family of finite disks with surface mass densities given by ^{m-1/2}. \label{eq:4.20}\ ] ] as we can easily see , the disk with corresponds to the well known kalnajs disk ( ) .accordingly , this family of finite thin disks can then be considered as a generalization of the kalnajs disk now , from the equation ( [ eq:4.10 ] ) , the function can be written as with the coefficients are founded , by using the orthogonality property of the legendre polynomials , through the expression the above equation can be expressed as ( ) ,\ ] ] so that , using the gamma function properties , we obtain : ,\ ] ] for and for .with the above values of the we can compute the different physical quantities that characterize the behavior of the models .so , for instance , the gravitational potential of the first three members of the family are given by , \label{eq:4.22 } \\\phi_{2}(\xi,\eta ) & = - \frac{mg}{a } [ \cot^{-1 } \xi + \frac{10 a}{7 } ( 3\eta^{2 } - 1 ) \nonumber \\ & \quad \quad + \ b ( 35 \eta^{4 } - 30 \eta^{2 } + 3 ) ] , \label{eq:4.23 } \\\phi_{3}(\xi,\eta ) & = - \frac{mg}{a } [ \cot^{-1 }\xi + \frac{10 a}{6 } ( 3 \eta^{2 } - 1 ) \nonumber \\ & \quad \quad + \\frac{21 b}{11 } ( 35 \eta^{4 } - 30 \eta^{2 } + 3 ) \nonumber \\ & \quad \quad + \ c ( 231 \eta^{6 } - 315 \eta^{4 } + 105 \eta^{2 } - 5 ) ] , \label{eq:4.24 } \end{aligned}\ ] ] where , \\ b & = \frac{3}{448 } [ ( 35 \xi^{4 } + 30 \xi^{2 } + 3 ) \cot^{-1 } \xi - 35 \xi^{3 } - \frac{55}{3 } \xi ] , \\ c & = \frac{5}{8448 } [ ( 231 \xi^{6 } + 315 \xi^{4 } + 105 \xi^{2 } + 5 ) \cot^{-1 } \xi \nonumber \\ & \quad \quad - 231 \xi^{5 } - 238 \xi^{3 } - \frac{231}{5 } \xi ] , \end{aligned}\ ] ] with similar , but more involved , expressions for greater values of . by taking at the above expressions , we can obtain the value of the potential at the plane of the disk .now , is easy to see that , for all the members of the family , we will obtain finite values for these quantities .in particular , for the first member of the family , the potential at the disk is given by for , and this expression is completely equivalent to the corresponding expression in . in the same way ,we obtain for the surface mass densities of the first three members of the family the expressions and for the corresponding circular velocities the expressions where has been introduced the dimensionless radial variable . 
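up to normalization , the surface densities of this family behave as ( 1 - R^2/a^2 )^( m - 1/2 ) ; requiring the integral over the disk to equal the total mass M fixes the constant to ( 2m+1 ) M / ( 2 pi a^2 ) , which for m = 1 reduces to the familiar kalnajs value 3 M / ( 2 pi a^2 ) . the short check below evaluates these profiles and verifies the normalization numerically ; the constant is my own derivation from the stated requirements , not a formula quoted from the text .
....
import numpy as np

def sigma_m(R, a, M, m):
    # m-th member of the family of surface densities, normalized so that the
    # integral over the disk 0 <= R <= a equals the total mass M:
    #   sigma_m(R) = (2m+1) M / (2 pi a^2) * (1 - R^2/a^2)^(m - 1/2)
    sigma0 = (2 * m + 1) * M / (2.0 * np.pi * a ** 2)
    return sigma0 * (1.0 - (R / a) ** 2) ** (m - 0.5)

# quick numerical check of the normalization
a, M = 1.0, 1.0
R = np.linspace(0.0, a, 200001)
for m in (1, 2, 3):
    f = sigma_m(R, a, M, m) * 2.0 * np.pi * R
    mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(R))   # trapezoidal rule
    print(m, mass)   # each value should be close to M = 1
....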
in order to graphically illustrate the behavior of the different particular models, we first introduce the dimensionless surface density of the disks , defined as for .so , in fig .[ fig : dens ] are depicted the dimensionless surface mass densities for the models corresponding to .as we can see , the disks with higher values of present a mass distribution that is more concentrated at the center and less at the edge .accordingly , these disks can then be considered as appropriated models of galaxies with a central bulb .now , in order to graphically illustrate the behavior of the circular velocities or rotation curves , we introduce the dimensionless quantity we plot , in fig .[ fig : vel1 ] the dimensionless rotation curves for the models corresponding to .the circular velcotiy corresponding to is proportional to the radius , representing thus a uniformly rotating disk . on the other hand , for , the circular velocity increases from a value of zero at the center of the disks , until it attain a maximum at a critical radius and then decreases to a finite value at the edge of the disk .also we can see that the value of the critical radius decreases as the value of increases .we presented an infinite family of axially symmetric thin disks of finite radius obtained by means of a particularization of the hunter method .the disk models so obtained are generalizations of the well known kalnajs disk , which corresponds to the first member of the family .the particularization of the hunter model was obtained by requiring that the surface density was a monotonically decreasing function of the radius , with a maximum at the center of the disk and vanishing at the edge , in such a way that the mass distribution of the higher members of the family were more concentrated at the center .we also analyzed the rotation curves of the models and we find for the first member of the family , the kalnajs disk , a circular velocity proportional to the radius , representing thus a uniformly rotating disk , whereas for the other members of the family the circular velocity increases from a value of zero at the center of the disks until reach a maximum at a critical radius and then decreases to a finite value at the edge of the disk .also we find that the value of the critical radius decreases as the value of increases .we believe that the obtained thin disk models have some remarkable properties and so they can be considered as appropriated realistic flat galaxy models , in particular if the superposition of these thin disks with appropriated halo distributions ( ) is considered .we are now considering some research in this direction .we are now also working in the non axially symmetric generalization of the here presented disks models and also in the obtention of the relativistic generalization of them for the axially symmetric case .the authors want to thank the financial support from colciencias , colombia . | an infinite family of axially symmetric thin disks of finite radius is presented . the family of disks is obtained by means of a method developed by hunter and contains , as its first member , the kalnajs disk . the surface densities of the disks present a maximum at the center of the disk and then decrease smoothly to zero at the edge , in such a way that the mass distribution of the higher members of the family is more concentrated at the center . the first member of the family have a circular velocity proportional to the radius , representing thus a uniformly rotating disk . 
on the other hand , the circular velocities of the other members of the family increase from a value of zero at the center of the disks until they reach a maximum and then decrease smoothly to a finite value at the edge of the disks , in such a way that for the higher members of the family the maximum value of the circular velocity is attained nearer to the center of the disks . stellar dynamics galaxies : kinematics and dynamics . |
telomeres are tandem repeated noncoding sequences of nucleotides at both ends of the dna in eukaryotic chromosomes stabilizing the chromosome ends and preventing them from end - to - end fusion or degradation .polymerase can not completely replicate the 3 end of linear dna , so telomeres are shortened at each dna replication .this end replication problem leads to a finite replicative capacity for normal somatic cells .they can only divide up to a certain threshold , the hayflick limit .the enzyme telomerase , repressed in most normal somatic cells , synthesizes and elongates telomere repeat sequences at the end of dna strands so that certain cells like germline cells are immortal and indefinite in growth .most forms of cancer follow from the accumulation of somatic mutations .cancer - derived cell lines and 85 - 90% of primary human cancers are able to synthesize high levels of telomerase and thus are able to prevent further shortening of their telomeres and proliferate indefinitely .but if cells are premalignant or already cancerous and telomerase is not yet activated , the proliferative capacity of these cells and therefore the accumulation of mutations is determined by the remaining telomere length .so the frequency of malignant cancer should be higher for longer telomeres in normal somatic cells .recently published data show that longer telomeric dna increased the life span of nematode worms .experiments are running whether there is a positive effect on the longevity also for organisms with renewing tissues if the telomere length in fetal cells is increased .as the probability for the incidence of cancer is correlated to the replicative potential of the mutated cells , one can ask the following question : is an extension of life span possible if telomeres in embryonic cells are elongated and cancer is considered ?an answer to this question could be given by computer simulations as the presented model focuses on organismal ageing due to the loss of telomeres in dna and neglects other effects which lead to a decreasing survival probability with age .as shortening of telomeres is one of the supposed mechanisms of ageing on cellular level , most stochastical and analytical studies investigate this relationship . a theoretical model which directly relates telomere attrition to human ageingwas first suggested by aviv et al . . herewe present a different telomere dynamics model providing requirements to study the effects of ( a ) different mean telomere lengths in constituting cells , as well as of ( b ) somatic mutations leading to cancer progression and indefinite proliferation due to telomerase activity on the ageing of organisms .every organism is developed from a single progenitor cell , the zygote ( figure [ bild01 ] ) .the initial telomere length of zygote cells is assumed to be normally distributed with mean and standard deviation .telomere repeats lost per division ( trlpd ) are randomly chosen at each division of every cell from a normal distribution with mean and standard deviation .a dividing cell produces a clone who inherits the replicative capacity of the progenitor cell at this age .cells can divide until nearly all their original telomeres are lost . 
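the cell - level rules just described ( a normally distributed initial telomere length in the zygote , a normally distributed number of repeats lost at every division , and loss of replicative capacity once the telomere budget is exhausted ) can be condensed into the short python sketch below . the numerical values are placeholders rather than the parameters used in the simulations ; the organism - level schedule of divisions is described next .
....
import numpy as np

rng = np.random.default_rng(0)

class Cell:
    def __init__(self, telomere_length):
        self.telomere = telomere_length

    def can_divide(self):
        return self.telomere > 0.0

    def divide(self, trlpd_mean, trlpd_std):
        # telomere repeats lost per division are drawn afresh for every division;
        # the daughter inherits the replicative capacity of the mother at this age
        loss = max(0.0, rng.normal(trlpd_mean, trlpd_std))
        self.telomere -= loss
        return Cell(self.telomere)

# zygote with a normally distributed initial telomere length (placeholder numbers)
cell = Cell(rng.normal(3000.0, 100.0))
divisions = 0
while cell.can_divide():
    cell = cell.divide(trlpd_mean=100.0, trlpd_std=10.0)
    divisions += 1
print("replicative capacity of this lineage:", divisions)
....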
for every organismthe dynamics of the model is as follows : divisions of the zygote and the stem cells derived from it occur 6 times in the early embryo .each of these cells is the progenitor of one tissue .this is followed by a period of population doublings where all cells divide once in every timestep until cells are present in each tissue . in the following maturation stage ,cells are chosen randomly for division until each tissue reaches the adult size of cells .it takes about 26 timesteps until an organism is mature .ageing starts now . in everytimestep first cells die with 10% probability due to events like necrosis or apoptosis .10% of the cells of the corresponding tissue are then randomly chosen for division to fill this gap .the replacement does not have to be complete as the chosen cell could probably not divide anymore due to telomere attrition .after some time the tissue will start shrinking .the random choice of dying and dividing cells in differentiated tissues is in accordance with nature as for example in epithelium the choice of cells to be exported from the basal layer is random .the organism dies , if its total cell population size shrinks to 50% of the mature size .the results presented in the following do not depend qualitatively on the choice of this threshold .in the implementation of this model , linear congruential generators are used to produce the required random numbers and normally distributed variables are generated with the box - muller algorithm . resulting age distributions for different mean telomere lengths in the zygote cells are shown in ( figure [ bild02 ] ) .the shape of these distributions is analogous to empirical data of many human and animal populations .we obtain a positive effect on the longevity of the organisms if the mean telomere length in the precursor cell is increased .the chosen mean doubling potentials of the zygote cells are 30 , 40 and 50 with the choice of and .the number of mitotic divisions observed in human fibroblasts is higher , but the choice of this parameter is reasonable as the number of considered cells per mature organism ( 640000 ) in this model is also much lower than in human organisms where the total number of cells is of the order of .non - dividing cells are not included in the model as we focus on ageing due to the progressive shrinking of tissues driven by telomere shortening .our results will be given below as part of fig.5 .clonal cancer is now introduced in the model . in accordance to the model of moolgavkaret al . , one of our assumptions is that malignant tumors arise from independently mutated progenitor cells . for most forms of carcinoma , transformation of a susceptible stem cell into a cancer cellis suggested to be a multistage process of successive mutations with a relatively low probability for the sequential stages .two independent and irreversible hereditary mutation stages are considered here , which can occur at every level of development of the organism during cell division .the first premalignant stage to be considered is a promotion stage : a dividing cell can mutate with small probability .all descendant cells inherit this mutation .this mutation leads to a partial escape from homeostatic control of growth by the local cellular environment .cells on the promotion stage have a selective advantage over unaffected cells . 
in our modelthey are chosen first for division during maturation and for filling up the gap in the ageing period .the subsequent transition can occur again with probability during division .if a cell reaches this second stage of mutation it is a progenitor of a carcinoma .an explosive clonal expansion to a fully malignant compartment happens .this cell and the clonal progeny doubles in the current timestep until it is no more possible due to telomere attrition .this expansion leads only to an increase of the malignant cell population size . as a certain fraction of cellsis killed per unit time and clonal expansions only occur with a very small probability , the tumor environments may not continue growing , eventually shrink , or even die .we assume that it is necessary for advanced cancer progression and therefore for the development of a deadly tumor that fully mutated cells are able to activate telomerase . in our model ,telomerase activation is possible at every age of the organism in normal and mutated cells during division with a very low probability .the irreversible loss of replicative potential is stopped in these cells . as the contribution of telomerase to tumorigenicity is not yet completely understood , we assume that death of an organism due to cancer occurs if telomerase is reactived in at least one fully mutated cell .we treat the time interval between the occurence of the deadly tumor and death as constant , so we set this interval to zero .figure [ bild03 ] shows simulation results for and .as we consider a lower complexity by chosing a lower number of tissues and cells per organism , we assume higher mutation rates for the incidence of cancer than observed in nature .the age distribution for shorter initial telomere lengths considering cancer is shifted to the left but still very old organisms exist . for longer telomeres the age distributionis again shifted to the left but even behind the distributions for shorter telomeres with and without considering carcinogenesis .thus without considering cancer , organisms with longer zygote telomeres live longer , as the life expectancy of the organisms increases linear for longer telomeres .but if cancer is considered this effect is reversed for longer initial mean telomere lengths ( fig .[ bild04 ] ) .the force of mortality resulting from this model is shown for with and without considering cancer in comparison to empirical human mortality data ( figure [ bild05 ] ) .it agrees to some extent with human mortality functions provided cancer is incorporated into the model and decelerates at advanced ages , as claimed for human and animal populations .the hump in the curve at younger ages , occurring also for other parameter sets , fits to data of many human mortality tables .the expected simulation result of the basic model without cancer is an increase of life span of most organisms with longer initial telomeres . 
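before turning to the effect of these events on survival , the per - division rules introduced above ( two irreversible mutation stages reached with a small probability at each division , and a rare telomerase activation that stops further telomere loss ) can be summarized as a simple update of two cell flags . the probabilities below are placeholders , not the values used in the simulations , and the sketch is only one possible reading of the rules .
....
import numpy as np

rng = np.random.default_rng(1)

P_MUT = 1e-3         # probability of one mutation step per division (placeholder)
P_TELOMERASE = 1e-4  # probability of telomerase activation per division (placeholder)

def division_events(stage, telomerase_on):
    """update the (mutation stage, telomerase flag) of a dividing cell.

    stage: 0 = normal, 1 = promotion stage, 2 = fully mutated (carcinoma progenitor).
    the organism is counted as dying of cancer once a stage-2 cell has telomerase.
    """
    if stage < 2 and rng.random() < P_MUT:
        stage += 1                   # irreversible, inherited by the progeny
    if not telomerase_on and rng.random() < P_TELOMERASE:
        telomerase_on = True         # stops further telomere shortening
    deadly_tumor = (stage == 2) and telomerase_on
    return stage, telomerase_on, deadly_tumor
....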
after introducing somatic mutations promoting cancer and telomerase activation in this model ,the survival probability is lower for each considered initial telomere length in certain time intervals in adult ages .but even low probabilities for the two mutation stages and for the activation of telomerase lead to a strong reduction of life span for longer telomeres .so the implication of two - stage carcinogenesis for the incidence of cancer in this simple model of cell proliferation in organisms is that life expectancy and life span of complex organisms can not be increased by artificially elongating telomeres in primary cells , for example during a cloning procedure .further improvements , extentions and applications of this model are possible . with respect to the role of telomeres and telomerase in carcinogenesis ,maybe this computational approach can contribute to the development of a comprehensive theoretical model in oncology uniting mutagenesis and cell proliferation .+ + * acknowledgements * + we wish to thank the european project cost - p10 for supporting visits of mm and ds to the cebrat group at wrocaw university and the julich supercomputer center for computing time on their crayt3e .cs was supported by foundation for polish science .blackburn , nature 350 ( 1991 ) 569 .harley , a.b .futcher , c.w .greider , nature 345 ( 1990 ) 458 .a.m. olovnikov , j theor biol 41 ( 1973 ) 181 .l. hayflick , p.s .moorhead , exp cell res . 25 ( 1961 ) 585 .l. hayflick , exp gerontology 38 ( 2003 ) 1231 .greider , e.h .blackburn , cell 43 ( 1985 ) 405 .shay , w.e .wright , nature rev 1 ( 2000 ) 72 .nordling , br j cancer 7 ( 1953)68 .fearon , b. vogelstein , cell 61 ( 1990 ) 759 .kim , m.a .piatyszek , k.r .prowse , c.b .harley , m.d .west , p.l . ho , g.m .coviello , w.e .wright , s.l .weinrich , j.w .shay , science 266 ( 1994 ) 2011 .moolgavkar , e.g. luebeck , cancer 38 ( 2003 ) 302 .joeng , e.j .song , k.j .lee , j. lee , genet 36 ( 2004 ) 6071 .lanza , j.b .cibelli , c. blackwell , v.j .cristofalo , m.k .francis , g.m .baerlocher , j. mak , m. schertzer , e.a .chavez , n. sawyer , p.m. lansdorp , m.d .west , science 288 ( 2000 ) 665 .shay , w.e .wright , radiat res 155 ( 2001 ) 188 .levy , r.c .allsopp , a.b .futcher , c.w .greider , c.b .harley , j mol biol 225 ( 1992 ) 951 .o. arino , m. kimmel , g.f .webb , j theor biol .177 ( 1995 ) 45 .p. olofsson , m. kimmel , math biosci 158 ( 1999 ) 75 .i. rubelj , z. vondracek , j theor biol . 197( 1999 ) 425 .sozou , t.b .kirkwood , j theor biol 213 ( 2001 ) 573 .a. aviv , d. levy , m. mangel , mech ageing dev 124 ( 2003 ) 829 .j. op den buijs , p.p .van den bosch , m.w .musters , n.a .van riel , mech ageing dev 125 ( 2004 ) 437 .tu , e. fischbach , int j mod phys c 16 ( 2005 ) 281 .box , m.e .muller , ann math stat 29 ( 1958 ) 610 .l. cairns , nature 255 ( 1975 ) 197 .frank , y. iwasa , m.a .nowak , genetics 163 ( 2003 ) 1527 . r.c .allsopp , h. vaziri , c. patterson , s. goldstein , e.v .younglai , a.b .futcher , c.w .greider , c.b .harley , proc natl acad sci usa 89 ( 1992 ) 10114 .moolgavkar , a. dewanji , d.j .venzon , risk anal 8 ( 1998 ) 383 .p. armitage , r. doll , br j cancer 8 ( 1954 ) 1 .a. sarasin , mutat res 544 ( 2003 ) 99 .p. armitage , r. doll , br j cancer 11 ( 1957 ) 161 .e. hiyama , k. hiyama , t. yokoyama , y. matsuura , m.a .piatyszek , j.w .shay nat med 1 ( 1995 ) 249 .shay , j natl cancer inst 91 ( 1999 ) 4 .t. hiyama , h. yokozaki , y. kitadai , k. haruma , w. yasui , g. kajiyama , e. 
tahara , virchows arch 434 ( 1999 ) 483 .v. kanjuh , m. kneevi , e. ivka , m. ostoji , b. beleslin , archive of oncology 9 ( 2001 ) 3 .blasco , w.c .hahn , trends cell biol 13 ( 2003 ) 289 .moolgavkar , a.g .knudson jr ., j natl cancer inst 66 ( 1998 ) 1037 .e.g. luebeck , s.h .moolgavkar , proc natl acad sci usa 99 ( 2002 ) 15095 .drake , b. charlesworth , d. charlesworth , j.f .crow , genetics 148 ( 1998 ) 1667 .thatcher , v. kannisto , j.w .vaupel , _ the force of mortality at ages 80 to 120 ._ monographs on population aging , vol . 5 , odense university press , 1998 .robine , j.w .vaupel , exp gerontology 36 ( 2001 ) 915 .vaupel , j.r .carey , k. christensen , t.e .johnson , a.i .yashin , n.v .holm , i.a .iachine , a.a .khazaeli , p. liedo , v.d .longo , yi zeng , k.g .manton , j.w .curtsinger , science 280 ( 1998 ) 855 .gatenby , p.k .maini , nature 421 ( 2003 ) 321 . | as cell proliferation is limited due to the loss of telomere repeats in dna of normal somatic cells during division , telomere attrition can possibly play an important role in determining the maximum life span of organisms as well as contribute to the process of biological ageing . with computer simulations of cell culture development in organisms , which consist of tissues of normal somatic cells with finite growth , we otain an increase of life span and life expectancy for longer telomeric dna in the zygote . by additionally considering a two - mutation model for carcinogenesis and indefinite proliferation by the activation of telomerase , we demonstrate that the risk of dying due to cancer can outweigh the positive effect of longer telomeres on the longevity . * does telomere elongation lead to a longer lifespan if cancer is considered ? * michael masa , stanisaw cebrat and dietrich stauffer institute for theoretical physics , cologne university , d-50923 kln , euroland institute of genetics and microbiology , university of wrocaw , + ul . przybyszewskiego 63/77 , pl-54148 wrocaw , poland _ keywords _ : biological ageing ; computer simulations ; telomeres ; telomerase ; cancer |
modern power networks are increasingly dependent on information technology in order to achieve higher efficiency , flexibility and adaptability .the development of more advanced sensing , communications and control capabilities for power grids enables better situational awareness and smarter control .however , security issues also arise as more complex information systems become prominent targets of cyber - physical attacks : not only can there be data attacks on measurements that disrupt situation awareness , but also control signals of power grid components including generation and loads can be hijacked , leading to immediate physical misbehavior of power systems . furthermore , in addition to hacking control messages , a powerful attacker can also implement physical attacks by directly intruding upon power grid components .therefore , to achieve reliable and secure operation of a smart power grid , it is essential for the system operator to minimize ( if not eliminate ) the feasibility and impact of physical attacks .there are many closely related techniques that can help achieve secure power systems .firstly , coding and encryption can better secure control messages and communication links , and hence raise the level of difficulty of cyber attacks . to prevent physical attacks ,grid hardening is another design choice .however , grid hardening can be very costly , and hence may only apply to a small fraction of the components in large power systems .secondly , power systems are subject to many kinds of faults and outages , which are in a sense _ unintentional _ physical attacks . as such outagesare not inflicted by attackers , they are typically modeled as random events , and detecting outages is often modeled as a hypothesis testing problem . however , this event and detection model is not necessarily accurate for _ intentional _ physical attacks , which are the focus of this paper .indeed , an intelligent attacker would often like to strategically _ optimize _ its attack , such that it is not only hard to detect , but also the most viable to implement ( e.g. , with low execution complexity as well as high impact ) .recently , there has been considerable research concerning data injection attacks on sensor measurements from supervisory control and data acquisition ( scada ) systems .a common and important goal among these works is to pursue the integrity of network _ state estimation _ , that is , to successfully detect the injected data attack and recover the correct system states .the feasibility of constructing data injection attacks to pass bad data detection schemes and alter estimated system states was first shown in . there, a natural question arises as to how to find the _sparsest unobservable _ data injection attack , as sparsity is used to model the complexity of an attack , as well as the resources needed for an attacker to implement it . however , finding such an _ optimal attack _ requires solving an np - hard minimization problem . 
while efficiently finding the sparsest unobservable attacks in general remains an open problem , interesting and exact solutions under some special problem settings have been developed in .another important aspect of a data injection attack is its impact on the power system .as state estimates are used to guide system and market operation of the grid , several interesting studies have investigated the impact of data attacks on optimal power flow recommendation and location marginal prices in a deregulated power market .furthermore , as phasor measurement units ( pmus ) become increasingly deployed in power systems , network situational awareness for grid operators is significantly improved compared to using legacy scada systems only . however , while pmus provide accurate and secure sampling of the system states , their high installation costs prohibit ubiquitous deployment .thus , the problem of how to economically deploy pmus such that the state estimator can best detect data injection attacks is an interesting problem that many studies have addressed ( see , e.g. among others . ) compared to data attacks that target state estimators , physical attacks that directly disrupt power network physical processes can have a much faster impact on power grids .in addition to physical attacks by hacking control signals or directly intruding upon grid components , several types of load altering attacks have been shown to be practically implementable via internet - based message attacks .topological attacks are another type of physical attack which have been considered in .dynamic power injection attacks have also been analyzed in several studies .for example , in , conditions for the existence of undetectable and unidentifiable attacks were provided , and the sizes of the sets of such attacks were shown to be bounded by graph - theoretic quantities . alternatively , in and , state estimation is considered in the presence of both power injection attacks and data attacks .specifically , in these works , the maximum number of attacked nodes that still results in correct estimation was characterized , and effective heuristics for state recovery under sparse attacks were provided . in this paper, we investigate a specific type of physical attack in power systems called _ power injection attacks _ , that alter generation and loads in the network .a linearized power network model - the dc power flow model - is employed for simplifying the analysis of the problem and obtaining a simple solution that yields considerable insight .we consider a grid operator that employs pmus to ( partially ) monitor the network for detecting power injection attacks .since power injection attacks disrupt the power system states immediately , the timeliness of pmu measurement feedback is essential .furthermore , our model allows for the power injections at some buses to be `` unalterable '' .this captures the cases of `` zero injection buses '' with no generation and load , and buses that are protected by the system operator . 
under this modelwe study the open minimization problem of finding the sparsest unobservable attacks given any set of pmu locations .we start with a feasibility problem for unobservable attacks .we prove that the existence of an unobservable power injection attack restricted to any given set of buses can be determined with probability one by computing a quantity called the structural rank .next , we prove that the np - hard problem of finding the sparsest unobservable attacks has a simple solution with probability one .specifically , the sparsity of the optimal solution is , where is the `` vulnerable vertex connectivity '' that we define for an augmented graph of the original power network .meanwhile , the entire set of globally optimal solutions ( there can be many of them ) is found in polynomial time .we further introduce a notion of potential impacts of unobservable attacks .accordingly , among all the sparsest unobservable attacks , an attacker can easily find the one with the greatest potential impact .finally , given optimized pmu placement , we evaluate the sparsest unobservable attacks in terms of their sparsity and potential impact in the ieee 30 , 57 , 118 and 300-bus , and the polish 2383 , 2737 and 3012-bus systems .the remainder of the paper is organized as follows . in section [ secform ] , models of the power network , power injection attacks , pmus and unalterable busesare established .in addition , the minimum sparsity problem of unobservable attacks is formulated . in section [ secfeas ]we provide the feasibility condition for unobservable attacks restricted to any subset of the buses . in section [ secmin ]we prove that the minimum sparsity of unobservable attacks can be found in polynomial time with probability one . in section [ secnum ] , a pmu placement algorithm for countering power injection attacksis developed , and numerical evaluation of the sparsest unobservable attacks in ieee benchmark test cases and large - scale polish power systems are provided .conclusions are drawn in section [ secconc ] .we consider a power network with buses , and denote the set of buses and the set of transmission lines by and respectively . for a line that connects buses and , denote its reactance by as well as , and define its _ incidence vector _ as follows : based on the power network topology and line reactances, we construct a weighted graph where the edge weight .the power system is generally modeled by nonlinear ac power flow equations . in this paper , a linearized model - the dc power flow model - is employed as an approximation of the ac model , which allows us to find a simple closed - form solution to the problem from which we glean significant insights . 
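the dc model used below relates the bus power injections to the voltage phase angles through the weighted laplacian of the network graph . the sketch assembles that laplacian from a list of lines , assuming the usual dc - power - flow convention that the edge weight of a line is its susceptance 1/x ; the text above leaves the weight unspecified , so this choice is an assumption , and the 3 - bus example data are made up .
....
import numpy as np

def dc_laplacian(n_bus, lines):
    """weighted laplacian B of the dc power flow model.

    lines: list of (i, j, x) with i, j zero-based bus indices and x > 0 the reactance.
    the edge weight is assumed to be the susceptance 1/x, so that p = B @ theta and
    the flow on line (i, j) is (theta_i - theta_j) / x.
    """
    B = np.zeros((n_bus, n_bus))
    for i, j, x in lines:
        w = 1.0 / x
        B[i, i] += w
        B[j, j] += w
        B[i, j] -= w
        B[j, i] -= w
    return B

# tiny 3-bus example (reactances are made up)
B = dc_laplacian(3, [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)])
theta = np.array([0.0, -0.02, -0.05])    # bus 0 taken as the angle reference
p = B @ theta                             # injections; rows of B sum to zero
print(p, p.sum())
....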
under the dc model , the real power injections andthe voltage phase angles satisfy , where is the _laplacian _ of the weighted graph .we assume that is positive which is typically true for transmission lines ( cf .chapter 4 of ) .furthermore , the power flow on line from bus to bus equals .we consider attackers inflicting power injection attacks that alter the generation and loads in the power network .we denote the power injections in normal conditions by , and denote a power injection attack by .thus the post - attack power injections are .we consider the use of pmus by the system operator for monitoring the power network in order to detect power injection attacks .with pmus installed at the buses , we consider the following two different sensor models : 1 .a pmu securely measures the voltage phasor of the bus at which it is installed .a pmu securely measures the voltage phasor of the bus at which it is installed , as well as the current phasors on all the lines connected to this bus .we denote the set of buses with pmus by , and let be the total number of pmus , where denotes the cardinality of a set . without loss of generality ( wlog ) , we choose one of the buses in to be the angle reference bus .we say that a power injection attack is _ unobservable _ if it leads to _ zero _ changes in all the quantities measured by the pmus . with the first pmu model described above, we have the following definition : an attack is unobservable if and only if where denotes the sub - vector of obtained by keeping its entries whose indices are in .is a weighted laplacian matrix , the elements of sum to . ] with the second pmu model described above , for any bus , it is immediate to verify that the following three conditions are equivalent : 1 .there are no changes of the voltage phasor at and of the current phasors on all the lines connected to .there are no changes of the voltage phasor at and of the power flows on all the lines connected to .3 . ] is the closed neighborhood of which includes and its neighboring buses .thus , for forming unobservable attacks , the following two situations are equivalent to the attacker : * the system operator monitors the set of buses with the second pmu model ;* the system operator monitors the set of buses ] is the closed neighborhood of which includes all the buses in and their neighboring buses .thus , the unobservability condition with the second pmu model is obtained by replacing with ] , which contains ( not necessarily exclusively ) all the remaining buses in after removing the cut set .an illustrative example with a cut of size 2 is depicted in figure [ 3spgen ] in section [ upsec ] .we note that there is a slight abuse of notation in : in general , a cut set does not necessarily consist of exactly all the neighboring nodes of .nonetheless , as will be shown in the remainder of the paper , we need only care about the _ minimum _cut set , which indeed consists of exactly all the neighboring nodes of , namely , .leveraging the above notation , we now introduce a key type of vertex cut on .a vulnerable vertex cut of a connected augmented graph is a vertex cut for which \vert \ge \vert n({\mathcal}{s})\vert+1 ] is no less than the cut size plus one .the reason for calling such a vertex cut `` vulnerable '' will be made exact later in section [ upsec ] .the basic intuition is the following . 
in order to have ( unobservability ) ,the key is to have the phase angle changes on the cut be zero , with power injection changes ( which can only happen on the alterable buses ) restricted in ] , is also a _ vulnerable vertex cut_. this contradicts the minimum vulnerable vertex cut having size at least .we now state the following theorem that gives an explicit solution of the sparsest unobservable attack problem in terms of the vulnerable vertex connectivity .[ mainthm ] for a connected grid , assume that the line reactances are independent continuous random variables strictly bounded away from zero from below .given any and , the minimum sparsity of unobservable attacks , i.e. , the global optimum of , equals with probability one .we note that finding the minimum vulnerable vertex connectivity of a graph is computationally efficient . for polynomial timealgorithms we refer the readers to and . in particular , vertex cuts are enumerated starting from the minimum and with increasing sizes , until a minimum vulnerable vertex cut is identified .we now prove theorem [ mainthm ] by upper and lower bounding the minimum sparsity of unobservable attacks in the following two subsections .we show that _ any _ vulnerable vertex cut provides an upper bound on the optimum of as follows .[ upthm ] for a connected grid and a set of pmus , for any vulnerable vertex cut of denoted by ( cf .notation [ ntn1 ] ) , there exists an unobservable attack of sparsity no higher than . a vulnerable vertex cut partitions into , and } ]now , pick any set of alterable buses in ] by \backslash { \mathcal}{a} ] ) has columns but only rows , and is hence column rank deficient .now , we let be a non - zero vector in the null space of : then , we construct an attack vector : it has some possibly non - zero values at the indices that correspond to , and has _zero values at all other indices . _thus , theorem [ upthm ] explains our terminology of a `` vulnerable vertex cut '' , since if a vertex cut is vulnerable , it leads to an unobservable attack . if a vulnerable vertex cut of exists , applying theorem [ upthm ] to the _ minimum _ one , we have that the optimum of is upper bounded by .if no vulnerable vertex cut exists , is a trivial upper bound .we now provide a graph - theoretic interpretation of theorem [ upthm ] .as shown in figure [ multicol ] and [ 3spgen ] , all the buses can be partitioned into three subsets and ] .the sparse attack ( cf . )is formed by injecting / extracting power at alterable buses in ] _ on the measurements taken in .thus , by taking control of all the buses in , an attacker can successfully _ hide _ from the system operator a power injection attack with a zero norm as large as the potential impact of unobservable attacks associated with a vulnerable vertex cut is defined as | ] . 
in comparison ,cut disconnects all the vertices above from , and hence its potential impact equals | \gg 3 ] .we first define the following property of a matrix , which will be shown to be equivalent to having .[ cond1def ] we have the following lemma whose proof is relegated to appendix [ secprfcond1 ] : [ cond1equ ] property [ cond1def ] is equivalent to having .we now prove the lower bounding part of theorem [ mainthm ] , namely , with probability one , all unobservable power injection attacks must have .the key idea is in showing that the equivalence between property [ cond1def ] and a full structural rank ( cf .lemma [ cond1equ ] ) implies a connection between the vulnerable vertex connectivity and the feasibility condition of unobservable attacks ( cf .lemma [ lemfeas1 ] ) .we focus on and consider its corresponding laplacian .suppose there exists a power injection attack such that denote the buses with non - zero power injection changes by , and hence . from , , and , implying that is column rank deficient .we first consider the case that a vulnerable vertex cut exists , i.e. , .the proof for the case of follows similarly . for notational simplicity, we will use instead of in the remainder of the proof .[ [ if - a - vulnerable - vertex - cut - exists - i.e .- barkappa - infty ] ] if a vulnerable vertex cut exists , i.e. , + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we will prove that , for all with , _ is of full column rank with probability one _ , i.e. , can only happen with probability zero . from lemma [ kplem ] , .it is then sufficient to prove for the `` worst cases '' with , i.e. , and is a square matrix . from theorem [ feasthm ] and lemma [ cond1equ ] , it is sufficient to show that _ satisfies property [ cond1def ] _ , and hence is of full rank with probability one .recall from the definition of the laplacian that , for any column ( or row ) of , , its non - zero entries correspond to bus and those buses that are connected to bus . with this, we now prove that satisfies property [ cond1def ] .consider any set of ( ) buses in , denoted by .\i ) if : based on the definition of the laplacian , the columns of _ that correspond to the buses themselves _ each has at least one non - zero entry .ii ) if : we prove that must contain at least buses .this is because , otherwise , , contradicting that is the minimum size of vulnerable vertex cuts for the following reasons : 1 . , and thus has at least alterable buses . 2 . implies that , and thus is a vertex cut that separates and .3 . because and are pairwise connected in , ] , and thus ] ; + & among them , find the cut with the greatest potential impact , + & denoted by }) ] by }) ] , place the next pmu + & at the one such that the resulting maximum potential impact + & among all the remaining unobservable attacks is minimized . 
+in this section , we evaluate the sparsest unobservable attacks and their potential impacts when the system operator deploys pmus at optimized locations .we first provide an efficient algorithm for optimizing pmu placement by the system operator .next , we provide comprehensive evaluation of our analysis and algorithms in multiple ieee power system test cases as well as large - scale polish power systems .our matlab codes are openly available for download .we have seen in section [ secmin ] that the minimum sparsity and potential impacts of unobservable attacks are determined fully by the network topology , the locations of the alterable buses , and the pmu placement .note that , unlike network states and parameters which can vary over short and medium time scales , the transmission network topology and the alterable buses typically stay the same over relatively long time scales .this motivates the system operator to optimize the pmu placement according to this information . for the best performance in countering power injection attacks, the system operator wants to _ raise the minimum sparsity of unobservable attacks , as well as mitigate the maximum potential impact of unobservable attacks_. algorithm 1 ( cf . table [alg1table ] ) is developed for the system operator to greedily place pmus to pursue both objectives . in this algorithm, we have assumed that the _ second pmu model _ in section [ secsensemod ] is employed , and the algorithm can be adapted to the first pmu model by replacing ] , assign it as the _ destination _ node , and compute all the minimum vulnerable vertex cuts that separate such a source - destination pair .3 . among all the computed source - destination vertex cuts that have the same minimum size ,compute their corresponding potential impacts , and select the minimum vertex cut with the greatest potential impact , denoted by }) ] found in step 1 _ does not remain a legitimate vertex cut _ after placing the next pmu .this can be achieved by placing the next pmu among the buses disconnected from ] as well as those in }) ] by .denote the number of connected components of the induced subgraph ] , and is diagonal , which we write in a block diagonal form whose each block is itself a diagonal matrix with _ non - negative _ entries , since the original graph is connected , each connected component of the induced subgraph ] , the dimension of the null space of is one , and is spanned by the all one vector ^t ] , and hence .[ lemcond1 ] for a matrix , if the following conditions are satisfied , ,i } \text { has at least one } { \nonumber}\\ & ~~~~~~~~~~~~~~~~~~\text{non - zero entry , } \label{cond13}\end{aligned}\ ] ] \ii ) assume that the lemma is true for all .for : first , because the upper left submatrix of satisfies the induction assumption , each of s first left columns must contain at least one non - zero entry .from , the last column of has at least one non - zero entry .thus , the case of in property [ cond1def ] holds for . *if the last rows of are selected to form , ( i.e. ) , from , the columns each has one non - zero entry , namely , .* otherwise , there exists a row , , which is not selected in ( cf .figure [ cond1fig ] ) . in this case, the row indices of can be partitioned into two subsets : and . on the one hand ,note that the upper left submatrix of satisfies the induction assumption .thus , _ among the first columns _ of the rows , there exists columns each of which has one non - zero entry . 
on the other hand , from , all non - zero , and none of these non - zero entries appears in the first columns .therefore , there exist columns of such that each of them has at least one non - zero entry . for , from the induction assumption b ) , there exists a non - zero permutated diagonal for the submatrix ,[1:n]} ] must be all zero , because otherwise any non - zero entry within } ] must be all - zero , because otherwise any non - zero entry within } ] that has at least one non - zero entry .wlog , assume that the sub - column ,2} ] satisfies , and , and hence satisfies property [ cond1def ] by lemma [ lemcond1 ] . from the induction assumption ii ), , [ 1:r-1]} ] must be all - zero , because otherwise any non - zero entry within } ] and . therefore , the submatrix ,[n , n]} ] and . because is on a non - zero permuted diagonal of , the submatrix , [ 1:t]} ] is of full rank with probability one .when , [ 1:t]} ] .then , is rank - deficient if and only if } \bm{\alpha } \label{rankdef1}\end{aligned}\ ] ] note that , except for itself , there are only three other entries in the laplacian that are correlated with : however , as , _ none _ of the above three entries is selected into the submatrix .therefore , _ is independent to all other entries in _ , and is hence independent to } \bm{\alpha} ] and ,t+1} ] has a non - zero permuted diagonal . from the induction assumption , , [ 1:t]} ] is of full rank , let , [ 1:t]}\bm{b}'_{[2:t+1 ] , t+1}. \label{alphadef}\end{aligned}\ ] ] thus , , t+1 } = \bm{b}'_{[2:t+1 ] , [ 1:t]}\bm{\alpha} ]is in the range space of , [ 2:t]} ] and , t+1} ] that is correlated with and is where .note that depends on only via the _ sum _ .* , and are independent to all the entries in , [ 2:t]} ] and , [ 2:t]} ] and , [ 2:t]} ] still satisfies the induction assumption , and is hence of _ full rank with probability one_. thus , before the change of distributions , , 1 } + \bm{b}'_{[2:t+1 ] , t+1} ] _ with probability zero_. therefore , with probability zero , hence the probability that is satisfied is zero . as a result, is of full rank with probability one . | physical security of power networks under power injection attacks that alter generation and loads is studied . the system operator employs phasor measurement units ( pmus ) for detecting such attacks , while attackers devise attacks that are _ unobservable _ by such pmu networks . it is shown that , given the pmu locations , the solution to finding the sparsest unobservable attacks has a simple form with probability one , namely , , where is defined as the vulnerable vertex connectivity of an augmented graph . the constructive proof allows one to find the entire set of the sparsest unobservable attacks in polynomial time . furthermore , a notion of the potential impact of unobservable attacks is introduced . with optimized pmu deployment , the sparsest unobservable attacks and their potential impact as functions of the number of pmus are evaluated numerically for the ieee 30 , 57 , 118 and 300-bus systems and the polish 2383 , 2737 and 3012-bus systems . it is observed that , as more pmus are added , the maximum potential impact among all the sparsest unobservable attacks drops quickly until it reaches the minimum sparsity . |
early studied in the 1930 s , the counter - rotating machines arouse a greater interest in the turbomachinery field , particularly for their potential improvement of the efficiency with respect to conventional machines by recovering kinetic energy from the front rotor exit - flow and by adding energy to the flow .the first counter - rotating machines have appeared in aeronautic and marine applications in open configuration .conventional designs of high speed counter - rotating fans are based on quite expensive methods and require a systematic coming and going between theoretical methods such as the lifting line theory or the strip - analysis approach and cfd analysis .moreover , the axial spacing , which has a major role on the rotors interaction and consequently on the noise , is a key parameter to find a compromise between high aerodynamic and good acoustic performance for high speed fans . in order to reduce this interaction ,the axial spacing of high speed fans has to be relatively large , resulting in a decrease in the aerodynamic performance .for the same reason , the rear rotor ( rr ) diameter has to be smaller ( about 10 according to ) than the front rotor ( fr ) diameter to reduce interaction between the fr tip vortex and the rr blade tip .contrary to that , in the case of low speed fans axial spacing could be shortened using the benefit of a relatively low rotor interaction .therefore these machines see a revival of interest in several distinct configurations open and ducted flows , shrouded or not shrouded rotors in various subsonic regime applications .recent research work dealt with the effects of global parameters like rotation speed ratio , local phenomena such as tip vortex flows and improvement of cavitation performance for pumps .all previous studies have shown the benefit of rr in improving the global efficiency and in increasing the operating flow - rate range while maintaining high efficiency .the counter - rotating systems ( crs ) moreover allow to reduce the fans diameter and/or to reduce the rotation rate .more axial spacing is needed compared to one simple fan , but not much more than a rotor - stator stage .however , it requires a more complex shaft system .another interesting feature of crs is that it makes it possible to design axial - flow fans with very low angular specific speed with the mean angular velocity , the flow rate , the total pressure rise , and the fluid density . with such advantages ,the crs becomes a very interesting solution and the interaction between the rotors needs to be better understood in order to design highly efficient crs .however , only a few studies have been concerned with , on the one hand , the effect of the axial spacing , and , on the other hand , the design method , particularly with rotors load distribution for a specified design point .this paper focuses on two major parameters of ducted counter - rotating axial - flow fans in subsonic regime : the rotation rate ratio , and the relative axial spacing , . in some cases ,these systems are studied by using two identical rotors or the rr is not specifically designed to operate with the fr . in this study , the fr is designed as conventional rotor and the rr is designed on purpose to work with the fr at very small axial spacing . in this first design , the total work to perform by the crs was arbitrarily set up approximately to two halves one half respectively for the fr and rr . 
in [ sec : design ] the method that has been used to design the front and the rear rotors is firstly described .the experimental set - up is presented in [ sec : setup ] .then the overall performances of the system in its default configuration and the effects of varying the rotation ratio and the relative axial spacing between the rotors are discussed in [ sec : results ] .the design of the rotors is based on the use of the software mft ( mixed flow turbomachinery ) , a 1d code developed by the dynfluid laboratory based on the inverse method with simplified radial equilibrium to which an original method has been added specifically for the design of the rr of the counter - rotating system . from the specified total pressure rise , volume flow - rate and rotating speed , optimal values of the radii and first proposed . in a second step, the tip and the hub radii as well as the radial distribution of the circumferential component of the velocity at the rotor outlet , , could be changed by the user .the available vortex models are the free vortex ( ) , the constant vortex ( ) and the forced vortex ( ) .the velocity triangles are then computed for radial sections , based on the euler equation for perfect fluid with a rough estimate of the efficiency of and on the equation of simplified radial equilibrium ( radial momentum conservation ) .the blades can then be defined by the local resolution of an inverse problem considering a 2d flow and searching for the best suited cascade to the proposed velocity triangles by the following parameters : the stagger angle , computed from the incidence angle , giving the lower pressure variation on the suction surface of the blade using equations [ eq : gamma ] and [ eq : a ] .the solidity , and the chord length , are thus computed at the hub and at the tip using equations [ eq : sigma ] and [ eq : c ] where denotes the lieblein s diffusion factor .the intermediate chords are obtained by linearisation . finally , the camber coefficients are computed using equation [ eq : coef_port ] . empirical equations have been validated for naca-65 cascades , for and .velocity triangles for the crs .the fluid is flowing from left to right ., width=294 ] the behaviour of the designed machine resulting from the above method can then be analysed using a direct method in order to determine whether the design point is achieved and what are the characteristics of the machine at the neighbourhood of the design point .the effects due to real fluid are taken partially into account with in - house loss models and the introduction of an axial - velocity distribution which considers the boundary layers at the hub and casing .thus , the characteristics of the machine can be obtained in the vicinity of the design - point discharge . regarding the crs , the geometrical dimensions , the number of blades of fr and of rr and their rotation rates are imposed . in particular , the number of blades of each rotor was chosen in order to prevent to have the same blade passing frequency or harmonics for both rotors in the lower frequencies range .the system that is presented here has moreover been designed to have a pure axial exit - flow .an iterative procedure is then performed .the pressure rise of the fr is initially chosen and then designed and quickly analysed as explained .an estimate of the pressure rise that rr would made is then performed , based on this analysis .if the total pressure rise of the crs is not met , the design pressure rise of fr is varied and the calculus are made again . 
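as a rough illustration of the iterative load split described above, the following is a minimal python sketch of the loop; the helper functions (design_fr, estimate_rr_rise) and the relaxation factor are hypothetical stand-ins for the mft inverse-design and quick direct-analysis steps, not part of the actual software.

```python
# Minimal sketch of the iterative FR/RR load-splitting loop.
# design_fr(dp) and estimate_rr_rise(fr) are hypothetical stand-ins for the
# MFT inverse-design and quick direct-analysis steps described in the text.

def split_crs_load(dp_crs_target, design_fr, estimate_rr_rise,
                   tol=1.0, max_iter=50, relax=0.5):
    """Find the FR design pressure rise such that FR + RR meet the CRS target."""
    dp_fr = 0.5 * dp_crs_target            # first guess: one half of the total rise
    for _ in range(max_iter):
        fr_geometry = design_fr(dp_fr)             # inverse design of the front rotor
        dp_rr = estimate_rr_rise(fr_geometry)      # estimated rise of the rear rotor
        residual = dp_crs_target - (dp_fr + dp_rr)
        if abs(residual) < tol:                    # total pressure rise met
            return fr_geometry, dp_fr, dp_rr
        dp_fr += relax * residual                  # under-relaxed update of the FR load
    raise RuntimeError("load split did not converge")

# Toy stand-ins so the sketch runs on its own (NOT the real MFT models):
demo_design_fr = lambda dp: {"dp_fr": dp}
demo_estimate_rr = lambda fr: 0.9 * fr["dp_fr"]    # pretend RR adds 90 % of the FR rise
print(split_crs_load(420.0, demo_design_fr, demo_estimate_rr))
```

in this form the loop simply redistributes the load between the two rotors until the specified total pressure rise is recovered, mirroring the procedure described above.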
in this method, losses and interactions in-between the two rotors are not taken into account. any recirculation happening near the blade passage or near the blade hub or tip is not predicted by mft, as it is based on simplified radial equilibrium.

[table tab:speci — design point of the counter-rotating system for air; the numerical entries did not survive extraction.]

[figure fig:caract_r1_r2_r12_nominal — fans characteristics: (a) static pressure rise _vs_ flow rate; (b) static efficiency _vs_ flow rate, for fr rotating alone (rr removed), rr rotating alone (fr removed), and the crs; the dashed lines stand for the design point of the crs.]

the fr rotating alone has a very flat curve (fig. [fig:caract_r1_r2_r12_nominal]). the nominal flow-rate of fr is slightly greater than the design flow-rate. the measured static pressure rise at the design point is pa, with a relatively low static efficiency. this is not surprising with no shroud and a large radial gap; moreover, it is consistent with the static pressure rise estimated by mft, which is around pa. the rr rotating alone has a steeper curve (fig. [fig:caract_r1_r2_r12_nominal]) and its nominal flow-rate is lower than the design flow-rate of fr and of the crs. this is consistent with the larger stagger angle of its blades (see tab. [tab:geospeci]) and can be explained by examining the velocity triangles in fig. [fig:trianglevit]. consider first the case with the fr coupled to the rr: the incoming velocity has an axial component as well as a tangential component, and the flow angle in the relative reference frame is given by eq. ([eq:tanbeta]). now consider the case without the fr, assuming that the flow through the honeycomb is axial: since the tangential component no longer exists, the relative flow angle at a given axial velocity is larger. mft estimates the mid-span velocities (in m.s^{-1}), which set the design value of the relative flow angle at the blade mid-span. supposing that rr rotating alone reaches its maximum efficiency at this same relative flow angle, equation [eq:tanbeta] implies a lower axial velocity, _i.e._ a lower flow-rate. this is exactly the nominal flow-rate of rr rotating alone (see fig. [fig:caract_r1_r2_r12_nominal] and tab. [tab:rendmax]). it is clear from the above analysis why the nominal flow-rate of rr is lower than the design flow-rate.
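the shift of the nominal flow-rate of rr when it rotates alone can be illustrated with a short numerical example. the sketch below assumes the usual velocity-triangle relation tan(beta) = (u + v_theta)/v_z at mid-span for counter-rotation, where u is the rr blade speed, v_theta the pre-swirl left by fr and v_z the axial velocity; the numerical values are placeholders, not the mft estimates of the actual design.

```python
import math

# Placeholder mid-span values (NOT the values of the actual design):
u = 20.0        # RR blade speed at mid-span, m/s
v_theta = 8.0   # tangential velocity left by FR at the RR inlet, m/s
v_z = 12.0      # axial velocity at the design flow rate, m/s

# Relative flow angle with FR present (counter-rotation: the pre-swirl adds to u).
beta_design = math.atan2(u + v_theta, v_z)

# Without FR the honeycomb delivers an axial flow (v_theta = 0). If RR alone
# reaches its best efficiency at the same relative flow angle, the axial
# velocity, and hence the flow rate, must drop:
v_z_alone = u / math.tan(beta_design)
print("flow-rate ratio, RR alone / design:", v_z_alone / v_z)   # u/(u + v_theta) < 1
```

with these placeholder numbers the ratio is u/(u + v_theta), about 0.71, i.e. the nominal flow-rate of rr alone falls well below the design flow-rate, which reproduces qualitatively the trend discussed above.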
the characteristic curve of the crs (fig. [fig:caract_r1_r2_r12_nominal]) is steeper than the characteristic curve of fr and is roughly parallel to the rr curve. the nominal flow-rate of the crs matches well with the design flow-rate. the static pressure rise at the nominal discharge ( pa) is lower than the design point ( pa), which is not so bad in view of the rough approximations used to design the system. please notice that the static pressure rise of the crs is not equal to the sum of the static pressure rises of the fr and of the rr taken separately. the crs has a high static efficiency compared to a conventional axial-flow fan or to a rotor-stator stage with similar dimensions working at such reynolds numbers. the gain in efficiency with respect to the fr is points, whilst an order of magnitude of the maximum gain using a stator is typically points. awaiting more accurate local measurements of the flow angle at the exit of the crs, a simple flow-visualization test with threads affixed downstream of the crs was performed. it has been observed that without the rr the flow is very disorganized. when the rr is operating at the design configuration, the flow is less turbulent and the threads are oriented with a small angle at the exit. this small angle seems, however, to decrease when the rotation-rate ratio is increased. this is consistent with the results in section [subsec:influence], where it is found that the nominal operating point is observed for a value of the rotation-rate ratio higher than the design value. the flow-rate range for which the static efficiency lies in the range is m^3.h^{-1}, that is from of the nominal flow-rate up to of the nominal flow-rate. one open question is to what extent the global performances of the crs are affected by the axial spacing and the speed ratio, and whether the efficient range could be extended by varying the speed ratio.

[figure — crs characteristics at fixed fr rotation rate over a range of rotation-rate ratios.]

one could also imagine working at a constant flow-rate with high static efficiency; for instance, in the present case the system could deliver a constant flow-rate over a range of static pressure rises simply by varying the rr rotation rate. within the explored range of relative axial spacings, the overall performances do not change significantly and the variation is within the uncertainty range; the efficiency does not vary significantly either. in other studies it was reported that the axial spacing had a more significant influence on the overall performances, and this was noticed as well in this study: for two of the spacings tested, the global performances are decreased compared to the other spacings.
however, even for these spacings, the crs still shows good performances with high efficiency compared to conventional fan systems.

a counter-rotating axial-flow fan has been designed according to an iterative method that is relatively fast. it is based on semi-empirical modelling that partly takes into account the losses, the boundary layers at hub and casing, and the effects of low reynolds numbers. the overall performances at the nominal design point are slightly lower than predicted, the static pressure rise being lower than the design value. the static efficiency is however remarkably high and corresponds to a gain of points with respect to the maximal efficiency of the fr and of points with respect to that of the rr. the overall measurements thus provide a first validation of the design method. the counter-rotating system allows very flexible operation: it can work at constant flow-rate over a wide range of static pressure rises, or at constant pressure rise over a wide range of flow-rates, with a consistently high static efficiency, simply by varying the rr rotation rate. one could thus imagine an efficient closed-loop-controlled axial-flow fan. the overall performances moreover do not vary significantly with the axial spacing within the explored range; however, for two of the spacings tested the overall performances slightly decrease.

bechet, s., negulescu, c., chapin, v., and simon, f., 2011. `` integration of cfd tools in aerodynamic design of contra-rotating propellers blades ''. in 3rd ceas conference (council of european aerospace societies).
blandeau, v. p., joseph, p. f., and tester, b. j., 2009. `` broadband noise prediction from rotor-wake interaction in contra-rotating propfans ''. in 15th aiaa/ceas aeroacoustics conference (30th aiaa aeroacoustics conference), aiaa 2009-3137.
shigemitsu, t., furukawa, a., watanabe, s., and okuma, k., 2005. `` air/water two-phase flow performance of contra-rotating axial flow pump and rotational speed control of rear rotor ''. in asme 2005 fluids engineering division summer meeting, june 19-23, 2005, houston, texas, usa, pp. 1069-1074.
cho, l., choi, h., lee, s., and cho, j., 2009. `` numerical and experimental analyses for the aerodynamic design of high performance counter-rotating axial flow fans ''. in proceedings of the asme 2009 fluids engineering division summer meeting, colorado, usa, fedsm2009-78507.
noguera, r., rey, r., massouh, f., bakir, f., and kouidri, s., 1993. `` design and analysis of axial pumps ''. in asme fluids engineering, second pumping machinery symposium, washington, usa, pp. 95-111.
lieblein, s., schwenk, f. c., and broderick, r. l., 1953. diffusion factor for estimating losses and limiting blade loading in axial-flow-compressor blade elements. tech. rep. tm e53d01, national advisory committee for aeronautics.
sarraf, c., nouri, h., ravelet, f., and bakir, f., 2011. `` experimental study of blade thickness effects on the global and local performances of a controlled vortex designed axial-flow fan '', p. 684.

| _ an experimental study on the design of counter-rotating axial-flow fans was carried out. the fans were designed using an inverse method. in particular, the system is designed to have a pure axial discharge flow. the counter-rotating fans operate in a ducted-flow configuration and the overall performances are measured in a normalized test bench. the rotation rate of each fan is independently controlled.
the relative axial spacing between fans can vary from to . the results show that the efficiency is strongly increased compared to a conventional rotor or to a rotor-stator stage. the effects of varying the rotation-rate ratio on the overall performances are studied and show that the system allows very flexible operation, with a large region of highly efficient operating points in the parameter space. the increase of axial spacing causes only a small decrease of the efficiency. _ |
the theory of open quantum systems plays a central role in the description of realistic quantum systems due to unavoidable interaction with the environment .as is well known , the system - environment interaction can lead to energy dissipation and decoherence , posing a major challenge to the development of modern technologies based on quantum coherence . due to its fundamental character and practical implications ,the investigation of dissipative processes has been a subject of vigorous research , where the standard approach assumes a system - environment weak coupling and a memoryless quantum dynamics ( the born - markov approximation ) . under such assumptions ,system dynamics are determined by a quantum markovian master equation , i.e. , a completely positive quantum dynamical map with a generator in the lindblad form .although the markovian approach has been widely used , there is a growing interest in understanding and controlling non - markovianity . in quantum metrology ,for example , entangled states can be used to overcome the shot noise limit in precision spectroscopy , even in the presence of decoherence . however , as suggested in refs . , higher precision could be achieved in a non - markovian environment , since a small markovian noise would be enough to restore the shot noise limit .non - markovian dynamics also play an important role in quantum biology , where interaction with a non - markovian environment can be used to optimize energy transport in photosynthetic complexes , and can be observed in condensed matter devices like quantum dots and superconducting qubits .furthermore , as pointed out recently in studies involving quantum key distribution , quantum correlation generation , optimal control , and quantum communication , the use of non - markovian dynamics could offer an advantage over markovian dynamics . this scenario has motivated studies aimed at characterizing and quantifying non - markovian aspects of the time evolution of an open quantum system . however , unlike the classical case , the definition of non - markovianity in the scope of quantum dynamics is still a controversial issue .for example , breuer , laine and piilo ( blp ) have proposed a measure for non - markovianity using the fact that all completely positive - trace preserving ( cptp ) maps increase the indistinguishability between quantum states . from a physical perspective, a quantum dynamics would be non - markovian if there were a temporary back - flow of information from the environment to the system .on the other hand , for rivas , huelga and plenio ( rhp ) , a quantum dynamics would be non - markovian if it could not be described by a _ divisible _ cptp map .formally , for such cases , one could not find a cptp map , describing the evolution of the density operator from time to , such that , where and are two cptp maps .therefore , the indivisibility of a map would be the signature of non - markovian dynamics .these two different concepts of non - markovianity are not equivalent : although all divisible maps are markovian with respect to the blp criterion , the converse is not always valid . in this paper , we explore the idea of how one might manipulate the markovian nature of a dissipative subsystem , by exploiting features of its being a part of a composite system . for that, we study the dynamics of interacting two - state systems ( tss ) coupled to a common thermal reservoir . 
by changing the composite initial state and/or the tss couplings ,we show that it is possible to modify _ in situ _ the characteristics of the subsystem s dissipation , enabling one to induce a transition from markovian to non - markovian dynamics and _vice versa_. moreover , we observe the possibility of having different behaviors for the composite and subsystem , even when they are coupled to a common thermal environment .finally , we provide a qualitative and quantitative description of how the environmental tss acts as part of the subsystem environment .we initiate our analysis by choosing an exactly soluble analytical model that is capable of presenting the physics we want to exploit from dissipative composite systems . therefore , our starting point is the dephasing model for two interacting two - state systems ( 2-tss ) with , where is the diagonal pauli matrix and . the choice of this model is also motivated by the possibility of implementation in different experimental settings . for example , it could be realized in superconducting qubits , trapped ions , ultracold atoms in an optical lattice , and nmr systems .in addition , such a model , without tss - tss couplings , is also considered as a paradigm of quantum registers .the bath of oscillators , introduced by the canonical bosonic creation and annihilation operators and , is characterized by its spectral density , and is responsible for imposing a nonunitary evolution for the 2-tss . since =0 ] , where ]( ) denotes the standard ( anti)commutator , and is the time - dependent dephasing rate .coupling is renormalized , such that , when working with the system s reduced dynamics . throughout the manuscript, the values used for the system s reduced dynamics stand for their renormalized values . ]since no approximation has been made whatsoever , it is worth noting that eq .( [ meq ] ) constitutes a genuine cptp quantum dynamical map .the master equation ( [ meq ] ) has a very suitable form for the analysis of quantum markovianity , since it can be directly compared with the well - known lindblad theory for open systems . indeed , if for all , the time - local master equation describes a divisible cptp map .therefore , under this condition , the dynamics would fall into the class of problems considered as paradigms of quantum markovian processes , since there is only a single decoherence channel .the simplest example one can find for the system hamiltonian eq .( [ horiginal ] ) that has , is the case of an ohmic bath , i.e. , , one finds that , if , is a positive - valued function for any temperature and therefore the system dynamics is markovian . on the other hand ,if , depending upon the temperature , can present negative values , given a non - divisible map . ] . 
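the sign behaviour just described can be checked numerically. for pure dephasing, a standard open-system expression (stated here as an assumption, since prefactor conventions vary) gives the rate as an integral of J(omega) coth(omega/2T) sin(omega t)/omega over the bath frequencies; the sketch below evaluates it for a power-law spectral density with exponential cutoff.

```python
import numpy as np

def dephasing_rate(t, s=1.0, lam=0.1, w_c=1.0, T=1.0, n_w=4000, w_max=50.0):
    """gamma(t) = int_0^inf dw J(w) coth(w/(2T)) sin(w t)/w   (assumed convention),
    with J(w) = lam * w**s * w_c**(1-s) * exp(-w/w_c)."""
    w = np.linspace(1e-6, w_max * w_c, n_w)
    J = lam * w**s * w_c**(1.0 - s) * np.exp(-w / w_c)
    integrand = J * (1.0 / np.tanh(w / (2.0 * T))) * np.sin(w * t) / w
    return np.sum(integrand) * (w[1] - w[0])      # simple Riemann sum

times = np.linspace(0.0, 20.0, 400)
ohmic      = [dephasing_rate(t, s=1.0, T=1.0)  for t in times]  # stays non-negative
superohmic = [dephasing_rate(t, s=3.0, T=0.01) for t in times]  # dips below zero at low T
print("minimum rate, ohmic      :", min(ohmic))
print("minimum rate, super-ohmic:", min(superohmic))
```

a rate that is non-negative at all times corresponds to a divisible (markovian) dephasing map, while the negative excursions of the super-ohmic, low-temperature case signal indivisibility, in line with the discussion above.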
indeed , as depicted in fig .[ fig](a ) , for any bath temperature , the dephasing rate satisfies the condition of being non - negative for each fixed .consequently , the 2-tss dissipative dynamics would be categorized as markovian .another important case happens when the environment seen by the system of interest presents a pronounced peak ( often referred as a lorentzian peak ) at a characteristic frequency .relevant examples are superconducting qubits coupled to readout dc - squids , electron transfer in biological and chemical systems and semicondutor quantum dots , to name just a few .figure [ fig](b ) shows the dephasing rate assuming a lorentzian shape for the bath spectral density .as one can observe , for the case in which the resonance width is of the same order as the frequency peak , the dephasing rate can present negative values ( solid curve ) or be a positive - valued function ( dashed and dot - dashed curves ) , depending upon the bath temperature . thus the composite dissipative dynamics would be markovian as long as the thermal energy scale is comparable to or larger than the resonance parameters ( dashed and dot - dashed curves ) , i.e. , .thus far , we have only been concerned with characterizing the dissipative dynamics of the composite system , i.e. , the open 2-tss .what we now want to address is whether or not the single tss dissipative dynamics can be tuned _ in situ _ such that its behavior would differ from that observed for the composite system .in fact , one could envision that , due to its interaction with the other tss ( hereafter labeled the _ auxiliary _ tss ) , a single tss would be coupled to a structured bath ( auxiliary tss+environment ) , which would play the role of an effective bath , and could induce a different nature for the dissipative process .if so , it might be possible to change dynamically the nature of such a structured bath by varying the tss - tss coupling and/or the initial state of the auxiliary tss . as matter of fact , the approach of considering the subsystem s environment to be composed a common environment plus the rest of the composite system was employed long ago .indeed , regarding to the characterization of an effective dissipative mechanism , it was successfully applied to the class of problems mapped onto a tss coupled to a harmonic oscillator in the presence of a markov bath , where perturbative , non - perturbative , spectral density series representation and semi - infinite chain representation techniques were proposed to describe such an effective dynamics in several contexts .in addition , a multi - spin environment coupled to local bosonic baths has also been studied in the context of markovianity .it is also noteworthy that the presence of initial correlations in an environment composed of several subparts can lead to changes in the subsystem dynamics .such an influence has been successfully investigated , with regard to the blp measure , theoretically and experimentally for the specific case of non interacting tss - tss systems , which interact locally with correlated multimode fields . in spite of the results mentioned above , no systematic investigation of their markovianityhas been undertaken using both rhp and blp measures . 
nor have interacting tss - tss composite systems in the presence of thermal baths been analyzed , where tunability seems more natural for manipulating the dissipative dynamics nature of several physical implementations .we shall address this question by looking at the single tss reduced density matrix ={\rm tr}_2[\rho(t)] ] means the trace of the auxiliary tss degrees of freedom .its matrix representation in the eigenbasis ( ) can be cast in the simple form where and the matrix elements are given by and .hence , one finds that , in general , the dynamical map describing eq .( [ redmatrix ] ) is non - linear , since it depends on the composite initial state .indeed , such a feature becomes clear when one tries to write the reduced density matrix eq .( [ redmatrix ] ) in a similar form to a kraus representation , i.e. , , with operation elements given by observe that those operators are dependent on the composite initial state , which would not be a genuine kraus representation . therefore , it is clear that , in general , the dynamical map describing the evolution of the single tss reduced matrix is itself dependent on the initial condition , which _ breaks _ the linearity of the map .furthermore , the reduced dynamics are not necessarily described by a completely positive dynamical map .it is worthy of mentioning that , under such circumstances , the characterization of the single tss dynamics as markovian or not is disputable with current measures , since both rhp and blp proposals rely on linear maps .nevertheless , it is still possible to find conditions where the dynamical map of the single tss reduced matrix is a linear cptp map , thus being consistent with rhp and blp constructions .those happen when i ) the composite initial state is a product state , i.e. , , and ii ) there is no tss - tss interaction ( ) , since for both cases one finds . in other words , in these cases the reduced dynamics is described by a genuine kraus representation .it follows from eq .( [ redmatrix ] ) that the master equation for the single tss reduced density matrix reads +\frac{\tilde{\gamma}(t)}{2}\big(\sigma_1^z\tilde{\rho}_1(t ) \sigma_1^z-\tilde{\rho}_1(t)\big),~~ \label{redmeq}\end{aligned}\ ] ] where , with , and .note the lindblad structure of the master equation ( [ redmeq ] ) , where and play the roles of effective single tss hamiltonian and dephasing rate , respectively , and are manifestly dependent on the composite system initial state condition .moreover , since depends only on the initial state and the tss - tss coupling , neither the unitary part of eq .( [ redmeq ] ) nor the extra term in its dephasing rate is influenced by the system - bath coupling .consequently , the bath and auxiliary tss contributions for each term of eq .( [ redmeq ] ) can be identified and quantified .two results immediately become apparent : i ) as expected , if the tss - tss coupling is zero ( ) , the single tss dissipative dynamics is completely enforced by the environment ; and ii ) if the auxiliary tss initial state is set in an eigenstate of , one finds that and , where $ ] .these results highlight the fact that the auxiliary tss dynamics will be locked into its initial state , since is a constant of motion .therefore the interacting term will only play the role of a fixed external field for the single tss , in which dissipative dynamics would follow the process imposed by the environment .thus , for such cases , both the composite and single tss dissipative dynamics would have the same behavior . 
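for the single-tss states above, the blp construction reduces to monitoring the trace distance along the evolution and summing its increases. the sketch below does this for generic qubit density matrices; the oscillating coherence envelope used in the demonstration is purely illustrative and is not the decoherence function entering eq. ([redmatrix]).

```python
import numpy as np

def trace_distance(rho1, rho2):
    """D = (1/2) Tr |rho1 - rho2| for density matrices of equal dimension."""
    delta = rho1 - rho2
    eigvals = np.linalg.eigvalsh(delta)        # delta is Hermitian
    return 0.5 * np.sum(np.abs(eigvals))

def blp_increase(rhos1, rhos2):
    """Sum of all increases of D along a trajectory: > 0 signals non-Markovianity
    in the BLP sense for the chosen pair of initial states."""
    d = np.array([trace_distance(a, b) for a, b in zip(rhos1, rhos2)])
    dd = np.diff(d)
    return np.sum(dd[dd > 0.0]), d

# Example: two dephasing trajectories with an illustrative non-monotonic envelope.
def dephased(rho0, c):
    out = rho0.copy()
    out[0, 1] *= c
    out[1, 0] *= np.conj(c)
    return out

rho_plus  = 0.5 * np.array([[1.0,  1.0], [ 1.0, 1.0]], dtype=complex)
rho_minus = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]], dtype=complex)
ts = np.linspace(0.0, 10.0, 500)
envelope = np.exp(-0.2 * ts) * np.cos(1.5 * ts)   # illustrative coherence envelope
traj1 = [dephased(rho_plus,  c) for c in envelope]
traj2 = [dephased(rho_minus, c) for c in envelope]
measure, d_of_t = blp_increase(traj1, traj2)
print("accumulated increase of the trace distance:", measure)
```

for this pair of states the trace distance equals the modulus of the coherence envelope, so any revival of the coherence shows up directly as a positive contribution to the accumulated increase.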
in spite of the two cases discussed above , in general not determined only by the environment dephasing rate and is not a positive - valued function .therefore , with regard to the concordance between blp and rhp measures , it is not possible to infer from the master equations ( [ meq ] ) and ( [ redmeq ] ) whether the single tss dissipative dynamics will follow the same behavior as that observed for the composite system .consequently , the two measures have to be determined in order to find the cases of agreement in our system .as already mentioned , the figure of merit for blp is the distinguishability of quantum states .an important tool that has been used as a measure for the distinguishability of quantum states is the trace distance between two quantum states and . according to blp ,since the trace distance has the feature that under cptp maps its value can not increase beyond its initial value , i.e. , , where , the trace distance could also be used as a definition of non - markovianity .that definition is based upon the idea that a markovian dynamics has to be a process in which any two quantum states become less and less distinguishable as the dynamics flows , leading necessarily to a monotonic decrease of its trace distance .thus , a non - monotonic behavior is interpreted as a back - flow of information from the environment to the system . from an experimental perspective, the trace distance could be calculated by the tomography of the density operator , as was recently done by bi - heng liu _. _ using all - optical experimental setups , or inferred from current fluctuations . for states given by eq .( [ redmatrix ] ) , the trace distance can be easily determined as which time behavior is manifestly dependent on the 2-tss initial state . for instance , panels ( c ) ( ohmic ) and ( d ) ( lorentzian bath ) of fig .[ fig ] show a representative initial condition , in which the auxiliary tss is set in an equal superposition of eigenstates , i.e. , .the bath temperature is chosen such that both ohmic and lorentzian baths produce a markovian process for the composite system .however , even though it asymptotically vanishes for both cases , the trace distance for single tss states is non - monotonic , indicating a non - markovian process .moreover , this non - monotonic behavior implies that the tss dynamics is indivisible .thus the tss dynamics is non - markovian not only in terms of the back - flow of information , but also with respect to the rhp criterion . 
on the other hand, one can find a situation in which the single tss dynamics is markovian even when the composite system presents a non - markovian behavior .this unexpected scenario is shown in fig .the negative values assumed by the dephasing rate , panel ( a ) , ensure that the 2-tss dynamics is indivisible .furthermore , as shown in panel ( b ) , the trace distance can be found having a non - monotonic behavior .therefore , the composite system conforms to a non - markovian quantum process with respect to the blp and rhp criteria .however , since the effective rate is always positive for the fixed initial condition , inset of panel ( a ) , we have a divisible dynamics , implying that the trace distance is a monotonic function of time , as illustrated in panel ( b ) .consequently , the single tss dynamics is a markovian process in the standard way .a similar result was obtained for non - interacting tss - tss systems coupled locally with correlated multimode fields .the results and examples offered here demonstrate i ) that the single tss hamiltonian has no influence regarding the markovianity of both single tss and composite system ; ii ) if one tries to push the characterization of the markovianity of the single tss dynamics for cases where the map is non - linear , it is found that a 2-tss entangled state is not a sufficient condition for non - markovianity of the single tss dynamics .indeed , a simple example would be having one of the bell states , e.g. , , as the 2-tss initial state .for those states , one finds in eq .( [ redmatrix ] ) , which leads to a non - dissipative dynamics for the single tss dynamics .this result explicitly shows that the single tss reduced dynamics has a convoluted dependence with the 2-tss initial state , which does not allow one drawing immediate conclusions about the role played by the presence of entanglement in the initial state ; and iii ) that the presence of the auxiliary tss not only has quantitative effects , but may also have , with regard to blp and rhp criteria , a decisive impact on the nature of the tss dynamics .such an influence can be made more explicit if one focuses on cases where the composite initial state is a product state ( ) .in fact , for those the extra term in , namely , , can be written in the simple form furthermore , if one considers problems having the same , the figure of merit for both criteria becomes , since the monotonicity of the trace distance eq .( [ tracered ] ) is determined by . for this case , the agreement between the two criteria is guaranteed because eq .( [ redmeq ] ) describes a single decoherence channel with the same for all .thus , as , it is clear that can be established a competition when setting the nature of the reduced tss dynamics , which can be tailored through the knobs and due to the presence of the auxiliary tss . indeed , except for the trivial case , eq .( [ gammaaux ] ) shows that the term due to coupling with the auxiliary tss will induce an _ad infinitum _ pattern of momentary loss and recurrence of quantum coherence observed for the reduced tss dynamics , which is a characteristic of the entanglement created between a tiny number of degrees of freedom. 
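since only a single dephasing channel is involved, divisibility is decided entirely by the sign of the effective rate, and a convenient numerical indicator of indivisibility is the integrated negative part of that rate. in the sketch below the environmental contribution and the auxiliary-tss contribution are purely illustrative placeholders (a flat rate plus an oscillatory term), not the expressions of eqs. ([redmeq]) and ([gammaaux]); the point is only to show how increasing the coupling knob can switch the reduced dynamics from divisible to indivisible.

```python
import numpy as np

def indivisibility(ts, gamma_eff):
    """Integral of the negative part of the rate: zero iff the (single-channel)
    dephasing map is divisible, positive otherwise (RHP-style indicator)."""
    negative_part = np.clip(gamma_eff, None, 0.0)
    return -np.sum(negative_part) * (ts[1] - ts[0])

ts = np.linspace(0.0, 30.0, 3000)
gamma_env = 0.05 * np.ones_like(ts)              # placeholder: flat environmental rate
for coupling in (0.0, 0.02, 0.2):                # "knob": auxiliary-TSS contribution
    gamma_aux = coupling * np.sin(0.5 * ts)      # illustrative oscillatory term
    N = indivisibility(ts, gamma_env + gamma_aux)
    print(f"coupling = {coupling:4.2f}  ->  indivisibility = {N:.4f}")
```

with the weak oscillatory term the total rate never changes sign and the indicator vanishes, while the stronger coupling drives it negative during part of each cycle and the indicator becomes finite.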
thus the coupling with the auxiliary tss constitutes a channel , here reversible , for exchanging information .the reversibility of such a channel is only possible here because of the diagonal nature of the interactions , which maintains the amplitude of constant .consequently , the irreversibility of the dephasing process observed for the reduced tss is only led by its direct coupling to the environment . following that reasoning, one could expect that the same results would be found for the reduced tss dynamics if an independent bath model were assumed , instead of the common environment model eq .( [ horiginal ] ) . as a matter of fact ,as long as the diagonal nature of the interactions is preserved , the same results eqs .( [ redmatrix])-([gammaaux ] ) are obtained , with reading the dephasing rates due to the individual baths coupled to the single tsss .a final note on the single tss dissipative dynamics comes from the observation that , if is assumed to be a thermal state , our problem resembles very much with the one having a unique ( structured ) thermal bath initially decoupled from the single tss . despite several known examples of the tss - harmonic oscillator problem , because of the different character between the spin and bosonic degrees of freedom , it does not seem possible to assign an effective spectral density for the tss - tss casenevertheless , for the case of a constant tss - tss interaction , one can single out a spectral density due to the spin character of the structured bath .following from eq .( [ gammaaux ] ) , one finds that , which leads to the spin component of the spectral density .those finds reveal that the auxiliary tss behaves as a mode filter for the natural frequency and its harmonics , which are weighted by a boltzmann factor of effective temperature .the analysis of the exact soluble model eq .( [ horiginal ] ) gives evidence that the presence of an auxiliary system coupled to the system of interest could create a knob to manipulate _ in situ _ its markovianity .however , since it was obtained for the particular situation where the system - environment interaction commutes with the system hamiltonian , one could wonder whether such a feature would be spoiled if the system - bath interaction would not commute with the system hamiltonian .unfortunately , for such cases , exact solutions are rare , if not impossible .nevertheless , we now present a case study , which is representative of some physical systems , where one finds that the idea of having a knob for markovianity , due to the presence of an auxiliary system , does also apply .our case study has the feature that , _ by construction _ , the composite system dynamics is assumed to be markovian , but depending on its initial state , the single tss dynamics can be found to be both markovian or non - markovian .the hamiltonian of interest is exactly the one given by eq .( [ horiginal ] ) , except for the system - environment interaction , which now reads assuming that the system - environment interaction is weak and the composite system dynamics is markovian , such that the born - markov approximation is aplicable , the system s reduced master equation is found to have a lindblad form with positive rates .as for the single tss dynamics , as expected , its markovian character is dependent on the 2-tss initial state .for example , for the environment temperature , if the 2-tss initial state is the separable state , one can put the single tss master equation in a lindblad form given by + 
\left(\frac{\gamma_-e^{-\gamma_- t}}{1+e^{-\gamma_- t}}\right)\left(2\sigma_-\tilde{\rho}_1\sigma_+ -\tilde{\rho}_1\sigma_+\sigma_--\sigma_+\sigma_-\tilde{\rho}_1\right),\end{aligned}\ ] ] where . , and .it is worthy of note that , for a general initial state , even for a separable state , it may not be possible writing the single tss master equation in a lindblad form . ] thus , for that initial condition , the single tss dynamics would be markovian . on other hand , if the 2-tss initial state is still separable , but given by another condition , e.g. , , with , one finds , using the trace distance , that the single tss dynamics would be characterized as non - markovian , according to both blp and rhp criteria ( see fig .[ fig3 ] ) .therefore , such examples make clear that the 2-tss initial state can be used as a knob to determine the markovianity of the single tss , even having the system - environment interaction not commuting with the system dynamics .in summary , we have shown the possibility of manipulating the markovianity of a subsystem of interest without having to change the common thermal environment properties or to assume a correlated environment state , which can be daunting requirements for some physical implementations . by choosing an exactly soluble model and a case study for a non - commuting system - environment interaction , we illustrated how one can induce a transition from markovian to non - markovian dynamics , and _ vice versa _ , by changing the characteristics of the composite system .such analises build evidences that a knob for markovianity can be introduced when one couples an auxiliary system to the system of interest . among the perspectives offered by this work, one could envision points regarding whether it would be possible having the concept of markovianity and non - markovianity defined for those cases which dynamics is described by non - linear maps , the extension to other non - commutative tss - tss interactions and the role of the number of degree of freedom involved in the coupling auxiliary system . on this last point, it would be interesting to investigate whether there would be a limitation on the maximum number of degrees of freedom allowing for the existence of the knob + f.b .is supported by instituto nacional de cincia e tecnologia - informao quntica ( inct - iq ) and by fundao de amparo pesquisa do estado de so paulo ( fapesp ) under grant number 2012/51589 - 1 .t.w . acknowledges financial support from cnpq ( brazil ) through grant no .478682/2013 - 1 . s. f. huelga _et al_. , phys .. lett . * 108 * , 160402 ( 2012 ) ; f. f. fanchini _et al . _ , phys . rev .a * 81 * , 052107 ( 2010 ) ; jin - shi xu _et al . _ ,commun . * 4 * , 2851 ( 2013 ) ; a. darrigo _ et al ._ , ann . phys . * 350 * , 211 ( 2014 ) ; a. orieux _et al . _ , sci .* 5 * , 8575 ( 2015 ) ; r. lo franco _ et al . _ ,b * 27 * , 1345053 ( 2013 ) .d. chruciski _ et al .lett . * 112 * , 120404 ( 2014 ) ; b. bylicka _et al . _ , sci. rep . * 4 * , 5720 ( 2014 ) ; x .- m .a * 82 * , 042103 ( 2010 ) ; s. luo _et al . _ ,a * 86 * , 044101 ( 2012 ) ; m. m. wolf et al .lett . , 150402 ( 2008 ) .a. friedenauer _phys . * 4 * , 757 ( 2008 ) ; d. porras _et al . _ ,a * 78 * , 010101 ( 2008 ) ; k. kim __ new j. phys . * 13 * , 105003 ( 2011 ) ; p. schindler _ et al .* 9 * , 361 ( 2013 ) ; j. simon _et al . _ ,nature * 472 * , 307 ( 2011 ) ; a. recati _et al . _ ,a * 94 * , 040404 ( 2005 ) ; p. p.a * 77 * , 051601 ( 2008 ) ; p. haikka _et al . _ ,a * 84 * , 031602 ( 2011 ) . 
| we study the markovianity of a composite system and its subsystems. we show how the dissipative nature of a subsystem's dynamics can be modified without having to change the properties of the composite system's environment. by preparing different system initial states or by dynamically manipulating the subsystem coupling, we find that it is possible to induce a transition from markovian to non-markovian behavior, and _ vice versa_. |
blends of polymers and nanoparticles , commonly called `` polymer nanocomposites '' ( pnc ) , have garnered much attention due to the possibility of dramatic improvement of polymeric properties with the addition of a relatively small fraction of nanoparticles . successfully making use of these materialsdepends upon a firm understanding of both their mechanical and flow properties .numerous computational and theoretical studies have examined the clustering and network formation of nanoparticles and their effect on both the structural and rheological properties of pncs . the vast majority of these efforts have focused on nanoparticles that are either spherical , polyhedral or otherwise relatively symmetric , although there are some notable exceptions . in contrast, experiments have tended to emphasize highly asymmetric nanoparticles , such as layered silicates or carbon nanotubes .it is generally appreciated that these highly asymmetric nanoparticles have the potential to be even more effective than spherical ( or nearly spherical ) nanoparticles in changing the properties of the polymer matrix to which they are added .in addition to the large enhancements in viscosity and shear modulus expected from continuum hydrodynamic and elasticity theories , extended nanoparticles can more easily form network structures both through direct interaction between the nanoparticles , or through chain bridging between the nanoparticles , where a `` bridging '' chain is a chain in contact with at least two different nanoparticles .these non - continuum mechanisms are believed to play a significant role in property enhancement , though the dominant mechanism depends on the properties considered , particle - polymer and particle - particle interactions , sample preparation , etc .given that the majority of previous computational efforts have focused on symmetric nanoparticles , we wish to elucidate the role of _ nanoparticle shape _ in determining basic material properties , such as the viscosity , and material `` strength '' , ( i.e. , breaking stress ) .computer simulations are well suited to examine the role of nanoparticle shape , since it is possible to probe the effects of changing the shape without the alteration of any of the intermolecular interactions . in this way, the changes due to nanoparticle shape can be isolated from other effects .such a task is complicated experimentally , since it is difficult to modify the shape of a nanoparticle without dramatically altering its intermolecular interactions . in this paperwe evaluate the viscosity and ultimate isotropic tensile strength of model pnc systems with either ( i ) highly symmetric icosahedral nanoparticles ( compact particles ) , ( ii ) elongated rod - like nanoparticles , and ( iii ) sheet - like nanoparticles .these nanoparticles can be thought of as idealizations of common nanoparticles , such as gold nanoparticles and fullerenes ( polyhedral ) , nanotubes and fibers , and nanoclay and graphene sheet materials , respectively .our results are based on molecular dynamics ( md ) computer simulations , using non - equilibrium methods to evaluate , and exploiting the `` inherent structure '' formalism to determine .we find that the rod - like nanoparticles give the largest enhancement to , which we correlate with the presence of chains that bridge between the nanoparticles .the sheet nanoparticles offer the weakest increase in , and correspondingly have the smallest fraction of bridging chains . 
for the ultimate isotropic strength , we find opposite results : the sheets provide the greatest reinforcement , while the rods the least . for both of these properties , the property changes induced by the icosahedral nanoparticles fall between those of the extended nanoparticles .the present simulations are idealized mixtures of polymers and nanoparticles in which the polymer - nanoparticle interactions are highly favorable so as to promote nanoparticle dispersion .moreover , we have chosen to work at relatively high temperature in order to avoid contributions to from the complex physics of slowing dynamics approaching the glass transition .previous work has shown that polymer - surface interaction effects in this low temperature range can alter , and potentially dominate the nanocomposite properties .we also limit the range of chain length studied to avoid effects of significant polymer entanglement .these limitations on interaction , temperature , and chain length are advantageous in order to develop a clear understanding of the origin of the observed changes in properties .such a reference calculation provides a reference starting point to understand behavior when these constraints are relaxed . with this in mind , caution is needed when comparing these results with experimental data where these complicating additional factors may be present along with other possible effects , such as crystallization or phase separation .we organize this paper as follows : in section [ sec : simulation ] , we describe the details of the model and method , focusing on the differences between the nanoparticle types used in each system .section [ sec : composite_rheology ] describes our investigation of the rheological properties of the nanocomposites , while section [ sec : isotropic_tensile_strength ] considers the effects of shape on .we conclude in section [ sec : conclusion ] .to directly compare to experiments , it is desirable to use as realistic a molecular model as possible . while a chemically accurate md simulation is possible in principle , it is often more difficult to identify basic physical trends with such models .such attempts at chemical realism are also demanding in terms of the computational times required , which restricts the class of problems which can be investigated .coarse - grained models of polymeric materials provide a good compromise between the the opposing needs of realism and computational feasibility .such models can reproduce qualitative experimental trends of nanocomposites , but precise quantitative predictions can not be expected . building on nanocomposite models introduced before , we study a coarse - grained model for polymers and nanoparticles that allow us to consider pnc systems over a wide range of physically interesting systems . herewe consider several nanoparticle shapes built by connecting spherical force sites .we perform md simulations of systems consisting of a small fraction of model nanoparticles in a dense polymer melt . 
for referencewe also simulate the corresponding pure polymer melt .the polymers are modeled via the common `` bead - spring '' approach , where polymers are represented by chains of monomers ( beads ) connected by bond potentials ( springs ) .all monomers interact via a modified lennard jones ( lj ) potential - v_{\rm lj}(r_c ) & r_{\rm ij } \le r_c \\ 0 & r_{\rm ij } > r_c \end{array } \right ., \label{equation : vlj}\ ] ] where is the distance between two monomers , where is the depth of the well of the lj potential and is the monomer size. the potential is truncated and shifted at , so that the potential and force are continuous at the cutoff . bonded monomers in a chaininteract via a finitely extensible , non - linear elastic ( fene ) spring potential , \label{equation : vfene}\ ] ] where and are adjustable parameters that have been chosen as in ref . . since we do not aim to study a specific polymer , we use reduced units where ( is the monomer mass ) .length is defined in dimensionless units relative to , time in units of and temperature is expressed in units of , where is boltzmann s constant .we use three different types of nanoparticles for our calculations .[ fig : nanoparticles ] shows representative images of the nanoparticles .the first type of nanoparticle , an icosahedron , was previously studied in ref . which focused on factors controlling nanoparticle dispersion and low temperature effects on transport for a similar polymer matrix , respectively .practical realizations include fullerene particles , primary carbon black particles , quantum dots , and metal nanoparticle additives .the nanoparticle force sites interact with each other via an identical given in eq .( [ equation : vlj ] ) , with . to maintain the icosahedral shape , the force site at each vertexis bonded to its 5 nearest neighbors via a harmonic spring potential where and equals the minimum of the force - shifted lj potential , approximately . to further reinforce the icosahedral geometry ,a central particle is bonded to the vertices with the same potential eq .( [ equation : vharm ] ) with the same value of , but with a slightly smaller preferred bond length , , the radius of the sphere circumscribed around an icosahedron .the resulting nanoparticles have some flexibility , but are largely rigid , thereby preserving their icosahedral shape .the second type of nanoparticle is a semiflexible rod represented by 10 lj force sites with neighboring monomers bonded by the same as used for the polymers .we choose the shape to represent nanoparticles such as carbon nanotubes or nanofibers .carbon nanotubes typically have lengths up to several m and a diameter of 1 nm to 2 nm ( single - walled tubes ) or 2 nm to 25 nm ( multiwalled tubes can have even larger diameters ) .unfortunately , such large rods are not feasible to simulate , as the system size needed to avoid finite size effects exceeds current computational resources . as a compromise , we simulate rods with a length - to - diameter ratio of 10 .there is an additional bond potential between nanoparticle force sites where is the angle between three consecutive force sites .this imparts stiffness to the rod .we choose , so that like the icosahedra , the rods have some flexibility , but are largely rigid .the third type of nanoparticle is square `` sheet '' comprised of 100 force sites . 
while the sheets are obviously different from the rods , the sheets have the same aspect ratio as the rods , where aspect ratio is defined by the ratio of the largest and smallest length scales .aspect ratio is often considered to be one of the most important properties for anisotropic particles , apart from the interactions with the surrounding matrix .the sheets represent a coarse - grained model of clay - silicate nanoparticles , and in the literature these objects are also termed `` tethered membranes '' .each monomer in the array has non - bonded interactions described by the same lj forces as the polymers .the monomers are also bonded to their 4 nearest neighbors via the same fene bond potential of eq .( [ equation : vlin ] ) .the particles at edges and corners of the sheet are only bonded to 3 and 2 neighboring particles , respectively . like the rods, the sheets are stiffened with a potential to prevent the nanoparticle from folding in on itself .the sheets also include a perpendicular bonding potential which limits distortions from a square geometry . without the potential , the sheet can deform into a rhombus . as in ref . , we choose and for the sheets .thus far , we have not defined a nanoparticle - monomer potential . in previous work , the same lj potential of eq .( [ equation : vlj ] ) was used for nanoparticle - monomer interactions , with the well depth replaced by .the soft form of the attraction makes it relatively easy for monomers which are first and second nearest neighbors of a nanoparticle to exchange . as a result ,the chains can `` slide ''relatively easily along the nanoparticle surface , thereby reducing any benefits of networks built by chain bridging . to make monomer exchange at the surface less favorable, we use a `` 12 - 24 '' potential rather than the standard `` 6 - 12 '' powers of .thus , the the nanoparticle - monomer interactions are of the form - v_{12 - 24}(r_c ) & r_{\rm ij } \le r_c \\ 0 & r_{\rm ij } > r_c \end{array } \right . , \label{equation : deepwell}\ ] ] where and .this potential has shorter range and stronger forces binding the first neighbor monomers to the nanoparticle . so that the total energy of the potential well , i.e. , is the same as that of the lj potential used in ref . , we choose . in this way , the total potential energy is comparable to that used in ref . with , which led to well - dispersed nanoparticles in that study .thus , we expect our systems will be well - dispersed .however , we shall see this is _ not _ entirely the case for the systems with sheet nanoparticles .figure [ fig : systems ] shows snapshots of a typical configuration for each system studied .we generate initial configurations using the same approach as in ref . , namely growing vacancies so that the nanoparticles can be accommodated . however , this process generates artificial initial configurations , and so we generate subsequent `` seed '' configurations by simulating at where reorganization occurs on relatively short time scales . 
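as a concrete reference for the interaction model described above, the following minimal sketch implements the truncated-and-shifted lj pair potential, the fene bond and the shorter-ranged `` 12-24 '' nanoparticle-monomer attraction in reduced units. the fene parameters (k = 30, r0 = 1.5) are the commonly used kremer-grest values and the cutoffs are assumptions, since the source only cites an earlier reference for the exact numbers.

```python
import numpy as np

def v_lj(r, eps=1.0, sigma=1.0, r_cut=2.5):
    """Truncated-and-shifted 6-12 Lennard-Jones pair potential (reduced units)."""
    def bare(x):
        sr6 = (sigma / x) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6)
    return np.where(r <= r_cut, bare(r) - bare(r_cut), 0.0)

def v_fene(r, k=30.0, r0=1.5):
    """FENE bond (assumed Kremer-Grest parameters); diverges as r -> r0."""
    return -0.5 * k * r0**2 * np.log(1.0 - (r / r0) ** 2)

def v_12_24(r, eps_np=1.0, sigma=1.0, r_cut=1.5):
    """Shorter-ranged 12-24 nanoparticle-monomer attraction, truncated and shifted.
    eps_np and r_cut are placeholders; the source fixes the well depth by matching
    the integrated well of the plain LJ interaction."""
    def bare(x):
        sr12 = (sigma / x) ** 12
        return 4.0 * eps_np * (sr12 * sr12 - sr12)
    return np.where(r <= r_cut, bare(r) - bare(r_cut), 0.0)

r = np.linspace(0.9, 2.4, 6)
print(v_lj(r))
print(v_fene(np.clip(r, None, 1.45)))   # FENE is only defined for r < r0
print(v_12_24(r))
```

bonded monomers feel v_lj + v_fene, which produces the usual anharmonic bond with a minimum slightly below one monomer diameter.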
depending on the nanoparticle type , we extract independent starting configurations every to time steps .subsequently , we make the necessary changes in density and chain length before cooling and relaxing at .a possible concern is that at , monomers will stick to the nanoparticle surface on a time scale that is long compared to the simulation , since ; however , we confirmed that monomers of chains exchange with the nanoparticle surface many times over the simulation .the equilibration time depends on the system type and is directly related to particle size .pure systems relax relatively quickly , needing only time steps .icosahedral systems need roughly time steps before reaching thermodynamic equilibrium .the rods and sheet nanoparticles are much larger , and thus diffuse more slowly . as a result , these systems require in excess of time steps to reach equilibrium .given that equilibration requires more than time steps at , equilibration under conditions of entanglement or supercooling would be computationally prohibitive .we integrate the equations of motion via the reversible reference system propagator algorithm ( rrespa ) , a multiple time step algorithm used to improve simulation speed .we use a basic time step of 0.002 for a 3-cycle velocity verlet version of rrespa with forces divided into `` fast '' bonded ( , , , ) and `` slow '' non - bonded ( , ) components .the temperature is controlled using the nose - hoover method where the adjustable `` mass '' of the thermostat is selected to match the intrinsic frequency obtained from theoretical calculations of a face - centered cubic lj system .to study rheological properties , we shear equilibrated configurations using the sslod equations of motion , integrated using the same rrespa algorithm used for equilibrium simulation . the sllod method generates the velocity profile for couette flow . if we choose the flow along the -axis and the gradient of the flow along the -axis , the shear rate dependent viscosity is given by where is the average of the - components of the pressure tensor ( sometimes also called the stress tensor ) , and is the shear rate .we limit our simulations to small enough shear rates to avoid potential instabilities associated with breaking fene bonds in the simulation .we choose a loading fraction , defined by the ratio of the number of nanoparticle force sites to the total number of system force sites ; values of this order of magnitude are common in experimental studies . since all force sites have the same diameter , should also be roughly equal to the volume fraction . due to the fact both polymers and nanoparticles consist of a discrete number of force sites, varies slightly between systems . for the icosahedral systems , and for the rods and sheets . the system size for all three nanoparticle systems is much larger than either the radius of gyration of the polymers , or the largest nanoparticle dimension , to avoid finite size effects . the number of chains ranges from in the smallest system up to for the largest systems . for each systemwe examine chain lengths , , and . 
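given a time series of the xy component of the pressure tensor from an sllod run at shear rate gamma_dot, the shear-rate-dependent viscosity follows from the steady-state average, eta = -<p_xy>/gamma_dot. the sketch below performs this average after discarding a start-up transient and attaches a crude block-average error bar; the synthetic data at the end simply stand in for an md output file.

```python
import numpy as np

def viscosity_from_stress(p_xy, gamma_dot, discard_fraction=0.2, n_blocks=10):
    """eta(gamma_dot) = -<P_xy>/gamma_dot from a steady-shear stress time series."""
    steady = p_xy[int(discard_fraction * len(p_xy)):]     # drop the start-up transient
    eta = -np.mean(steady) / gamma_dot
    blocks = np.array_split(steady, n_blocks)             # crude error bar from blocks
    block_etas = np.array([-np.mean(b) / gamma_dot for b in blocks])
    err = block_etas.std(ddof=1) / np.sqrt(n_blocks)
    return eta, err

# Synthetic stand-in for an MD stress output (true viscosity of the fake signal: 40).
rng = np.random.default_rng(0)
gamma_dot = 0.005
p_xy = -gamma_dot * 40.0 + 0.5 * rng.standard_normal(200_000)
print(viscosity_from_stress(p_xy, gamma_dot))
```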
for this polymer model , the entanglement length 30 to 40 .hence , only the longest chains studied may exhibit any effects of entanglement and the effects should be small in the present work .we reiterate that effects of dynamics due to proximity to the glass transition should not play a significant role at the relative high of our systems .a summary of the system parameters can be found in table [ table1 ] .in this section , we evaluate the role of nanoparticle geometry on the magnitude of and clarify the influence of chain bridging on for a range of shear rates ( , 0.007 , 0.01 , and 0.02 ) and chain lengths . in the nanocomposite simulations of , became nearly constant at in other words , we are near the newtonian limit where approaches a stationary value for .thus , we expect our lowest shear rates approach the newtonian regime ; in this sense , the shear rates we use are fairly low . however, if we consider in physical units , these rates would be extremely high by experimental standards .ideally , we would estimate in the limit directly ( for example , by using an einstein or green - kubo relation ) , but the accurate evaluation of using these methods is difficult .figure [ fig : shear_figure_1 ] shows for the three nanocomposites and pure polymer as a function of .these data show a clear change in between the systems for all simulated , with the rods having the largest , followed by the icosahedra , sheets , and finally the bulk polymer .the decrease of with increasing is indicative of shear thinning .we note that is not constant at the lowest , indicating we have not yet reached the newtonian regime .nonetheless , it is clear that the pure polymer has a viscosity approximately an order of magnitude smaller than any of the composites .comparable differences in have been observed in experiments .this demonstrates an enhancement of through the addition of nanoparticles , which is often a desired goal of adding nanoparticles to a polymer melt .while the figure only shows for the and , other chain lengths and densities indicate the same trend .figure [ fig : shear_figure_2 ] ( a ) shows the effect of varying on at and .an increase in with increasing is expected from basic polymer physics since the chain friction coefficient increases with .interchain interactions and `` entanglement '' interactions enhance this rate of increase since the friction coefficient of each chain increases linearly with . while it is clear that addition of nanoparticles increases of the resulting composite , the reasons for this are not obvious . since the pure polymer has a relatively low it is reasonable to assume that the rigid nanoparticles themselves inherently raise as in any suspension of particles in fluid matrix . to separate out this continuum hydrodynamic effect of the nanoparticles from the effects specifically due to polymer - nanoparticle interactions ,we calculate intrinsic viscosity ] for spheres in three dimensions .it is also known that ] for our particles .the computational method involves enclosing an arbitrary - shaped object within a sphere and launching random walks from the border or launch surface .the probing trajectories either hit the object or return to the launch surface . from these path integration calculations , ] of the nanoparticles as an average over different confirmations found in the equilibrium system as they adopt a range of shapes due to bond flexibility . 
using zeno , we find that the icosahedral nanoparticles have = 3.93 , and the sheets have = 10.8 . the value for the rods is generally consistent with a larger value of , but the reversal of the sheet and icosahedra values indicates that nanoparticle and/or polymer interactions play a role in the overall viscosity . moreover , the continuum theory indicates that should be _ independent _ of , while we see from fig . [ fig : shear_figure_2 ] ( b ) that strongly depends on . hence we conclude that the continuum model of does not provide an adequate explanation of the changes we see in our simulations . evidently , we must also examine other contributions to the nanoparticle viscosity . one possibility that has recently been discussed is that slip rather than stick boundary conditions might provide a better description of nanoparticle boundary conditions . however , this would result in a _ decrease _ of the predicted value of relative to the stick case , which is in the wrong direction for explaining our results . at a molecular level , the boundary conditions are clearly neither stick nor slip , as particles have finite residence times at the surface , although it is difficult to determine hydrodynamic boundary conditions directly from molecular considerations . nanoparticle - polymer interactions and nanoparticle clustering are evidently factors that might cause the property enhancements we observe , and in the next section we focus on this possibility . we next turn to the structural results obtained from md simulation to better quantify the relationship of polymer and nanoparticle structure to . we first calculate the fraction of nanoparticles in contact with other nanoparticles , . nanoparticles are said to be `` in contact '' if one or more comprising force sites of a nanoparticle are within the nearest neighbor distance of a force site belonging to another nanoparticle . the nearest - neighbor distance is defined by the first minimum in the radial particle distribution function ; the first minimum is at for all systems . figure [ fig : np - np ] shows as a function of for and for both and . while it is not readily apparent from fig . [ fig : systems ] ( c ) , the polymers `` intercalate '' between the sheets to some degree , and hence , the nanoparticles are not actually in direct contact with each other , resulting in a very low value of for the sheets . with the exception of the equilibrium system , the sheets have the fewest nanoparticle - nanoparticle contacts . for the systems under steady shear , evidently stays fairly constant as varies . while the order of the systems for is consistent with the trend for , the similarity between the rods and icosahedra suggests that the fraction of nanoparticles alone is not responsible for determining composite viscosity . of note is the large fraction of nanoparticle contacts for the sheet system at equilibrium . more in - depth analysis shows that the sheets favor a clustered state at lower density , i.e. chains no longer fully intercalate between the sheets . both the and sheet systems have intercalating polymers , while the case does not . the tendency of the sheets to stack is due in part to entropic interactions that result from depletion attractions between the sheets ; the depletion interactions arise due to physically adsorbed polymers on the sheet surface , much like the depletion interactions of polymer coated colloids .
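the contact criterion just described can be sketched as follows ( our own names ; the cutoff must come from the first minimum of the measured radial distribution function , and periodic boundaries are omitted for brevity ) :

import numpy as np
from itertools import combinations

def fraction_in_contact(nanoparticles, cutoff):
    # nanoparticles: list of (n_sites, 3) arrays of force-site coordinates;
    # two nanoparticles are "in contact" if any pair of their force sites is
    # closer than the nearest-neighbour cutoff taken from the first minimum
    # of the radial distribution function
    in_contact = set()
    for i, j in combinations(range(len(nanoparticles)), 2):
        d = np.linalg.norm(nanoparticles[i][:, None, :] - nanoparticles[j][None, :, :], axis=-1)
        if np.any(d < cutoff):
            in_contact.update((i, j))
    return len(in_contact) / len(nanoparticles)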
this interaction is most pronounced for the sheets , presumably due to the large and relatively flat surface of a sheet , which more effectively reduces the entropy of chains at the surface . given that nanoparticle - nanoparticle contacts do not appear to be the origin of the relative ordering , we next consider the role of polymer - nanoparticle interactions . to gauge the possible role of such interactions , we calculate the fraction of polymer chains in direct contact with a nanoparticle , . using the same nearest - neighbor criterion as used to determine nanoparticle - nanoparticle contacts , we compute as a function of for at and in fig . [ fig : fraction ] . we find that is approximately the same for both the quiescent and sheared systems . there are also clear trends : increases with increasing and for every value of the system containing the rods has the largest value of , followed by the system containing the icosahedra , and finally the system containing the sheets . these same trends in appear in fig . [ fig : shear_figure_2 ] , suggesting that the nanoparticle - polymer contacts indeed have a significant effect on the relative magnitude of . the correlation of with is consistent with the idea that bridging between nanoparticles plays a major role in determining the rheological properties of polymer nanocomposites . thus , we extend our analysis to test for a correlation between and chain bridging . we define a `` bridging chain '' as a chain that is simultaneously in contact with two or more nanoparticles . figure [ fig : bridging ] shows as a function of for . the trend in is consistent with both and , showing an increase in with increasing as well as a clear ordering between systems for every chain length . although not shown , the results seen in fig . [ fig : bridging ] occur for every value of that we have simulated . hence , the fraction of bridging chains seems to be a useful `` order parameter '' for characterizing the polymer - nanoparticle interactions . if the polymer - nanoparticle interactions were sufficiently small , the bridging would not be expected to play a significant role , and it is likely that the continuum hydrodynamic approach would be applicable . similar to the notion of bridging is the idea that the nanoparticles can act as transient cross - linkers that can lead to effects equivalent to the formation of higher molecular weight chains . to test this idea , we define an `` effective chain '' as the collection of chains that are connected by nanoparticles . we then define an effective chain length as the mean mass of chains connected by the nanoparticles . fig . [ fig : effective_chains ] shows that is almost an order of magnitude larger than . hence , we expect that the largest contribution to the increase comes from this effect . even in the simplest rouse theory , an order of magnitude increase in would lead to an order of magnitude increase in . the formation of longer effective chains is expected to lead to entanglement interactions that would further amplify the increase , as emphasized by ref . . however , this entanglement contribution is hard to quantitatively interpret in the present context . the ultimate isotropic tensile strength of a material is defined as the maximum tension a homogeneously stretched material can sustain before fracture . while this definition can be directly probed experimentally , the situation is less straightforward and less directly accessible in an md simulation . ref .
has developed an approach to estimate based on the potential energy - landscape ( pel ) formalism that is accessible by md simulation . although the pel is complex and multidimensional , one can envision it as a series of energy minima connected by higher energy transition pathways . by definition , the minima , or inherent structures ( is ) , are mechanically stable , i.e. there is no net force at the minimum . the method of ref . relates to the maximum tension the is can sustain , determined by an appropriate mapping from equilibrium configurations on the pel . in other words , is the upper bound on the tension at the breaking point of an ideal glass state in the limit . here we evaluate for the various possible nanoparticle geometries . as a cautionary note , we point out that this method has not been directly validated by comparison with experiments ; thus conclusions drawn from these results should be considered tentative . ref . discusses an alternate approach using the pel to evaluate the elastic constants . to determine , we must first generate the energy minimized configurations from the equilibrium configurations . for a system of atoms in the canonical ensemble , the most commonly used mapping from equilibrium configurations to the is takes each force site along the steepest descent path of the energy of the system . this configurational mapping procedure corresponds to the physical process of instantaneous cooling to the limit to obtain an ideal glass with no kinetic energy . therefore , by sampling thermally equilibrated configurations and minimizing their energies we can estimate the average inherent structure pressure , , which will be negative when the system is under tension . the maximum tension then defines . the value of has been found to be weakly dependent on the at which the sampling is performed . for example , while cyclopentane has a strongly temperature dependent structure , ref . has shown that equilibrium only weakly affects . since the structures in our systems do not display such dependence , we expect our results also to be independent of the starting equilibrium . thus , we can use the same systems at that we used to determine and still obtain reliable results for even though the starting configuration is a highly fluid state . this calculation of should be an _ upper bound _ for the tensile strength that would be obtained for any finite system , and hence is referred to as the ultimate tensile strength . the energy minimization process eliminates the high frequency effects that would normally be associated with such high states and thus the method refers to the strength of an ideal glass material having essentially no configurational entropy . figure [ fig : congrad_n20 ] shows the curves generated for the variant of each system . we find that decreases at large , until a minimum value is reached . the density of the minimum , referred to as the `` sastry density '' , characterizes the density where the system fractures , and voids first start to appear in the minimized configurations . as decreases below this point , begins to rise . since this is the greatest tension achievable , we have .
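once the average inherent - structure pressure has been tabulated against density , the sastry density and the ultimate strength follow from a one - line search ; a sketch with our own names :

import numpy as np

def ultimate_strength(densities, p_is):
    # p_is[i] is the average inherent-structure pressure at densities[i],
    # negative when the minimized configurations are under tension; the
    # minimum of the curve marks the "sastry density" where voids first
    # appear, and the largest sustainable tension is the ultimate strength
    i = int(np.argmin(p_is))
    return densities[i], -float(p_is[i])   # (sastry density, ultimate tension)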
for pure , icosahedral , and rod systems we find , while for the sheets it is much lower . similarly , is considerably larger for the sheets than for the other systems . we find that the increase in for the sheets is not as sizable as observed experimentally . this may be related to the fact that ( experimentally ) the chains often form stronger associations with layered silicates than in our simulations ; additionally , the chain lengths we examine are small compared to experiments , and we will see that increases with . we note that at this chain length , the icosahedra and rods actually _ reduce _ the strength of the nanocomposite as compared to the pure melt . to quantify the chain length dependence of , we plot the minimum of each curve as a function of in fig . [ fig : total ] . for the pure polymers , we find that _ decreases _ with increasing chain length . this is non - trivial , since one might naively expect longer chains to exhibit more interchain coupling . this is the same trend with chain length as observed in ref . for -alkanes . it is reassuring that the simple bead - spring model leads to similar results , but this does not help us build intuition about the physical meaning of these results . reference finds a maximum for , and a tendency for to saturate near . since the smallest chain we simulate is , we are above the regime where this feature occurs . consistent with these facts , decreases with and is roughly independent of . most importantly , fig . [ fig : total ] shows that the addition of the nanoparticles _ reverses _ the dependence of when compared to the pure system . thus , while the presence of icosahedral or rod - like nanoparticles decreases the material strength for most chain lengths studied , the trend of is increasing , and if this continues , will surpass the pure melt for all nanoparticle shapes at large enough . indeed , for the icosahedra nanocomposite already exceeds that of the melt at . to better understand the predicted dependence of , we perform a parallel analysis to the structural analysis of the shear runs discussed above . the analysis of figs . [ fig : np - np]-[fig : bridging ] focused on . we relate the structure to by examining the structure at . evidently , differs for the sheets in relation to the other systems studied , and it is possible that dependent changes in connectivity properties are responsible for the difference of . thus , we calculate the quantities , , and at for each system using the equilibrium and is configurations at the sastry density of the system . the results for the equilibrium and is configurations show the same qualitative trends , so we will present only data for the is . we first focus on because our investigations into the rheological properties suggested that bridging chains played a large role in determining . however , when we plot in fig . [ fig : bridging_minimized ] for each system at , we see that the dependence is the _ reverse _ of that seen for , even though this quantity evidently follows .
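for reference , the bridging - chain fraction and the effective chain length used in this analysis can be computed from the chain - nanoparticle contact lists roughly as follows ( our own names ; treating the effective chain length as a simple number - average over the connected groups is our assumption ) :

import numpy as np

def bridging_and_effective_chains(chain_np_contacts, chain_length):
    # chain_np_contacts[c] is the set of nanoparticle indices that chain c
    # touches under the same nearest-neighbour criterion as above; a
    # "bridging chain" touches two or more nanoparticles, and an "effective
    # chain" is a group of chains linked together through shared nanoparticles
    n_chains = len(chain_np_contacts)
    f_bridge = sum(len(s) >= 2 for s in chain_np_contacts) / n_chains

    parent = list(range(n_chains))          # union-find over chains
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    first_chain_on_np = {}
    for c, nps in enumerate(chain_np_contacts):
        for p in nps:
            if p in first_chain_on_np:
                parent[find(c)] = find(first_chain_on_np[p])
            else:
                first_chain_on_np[p] = c
    cluster_sizes = {}
    for c in range(n_chains):
        r = find(c)
        cluster_sizes[r] = cluster_sizes.get(r, 0) + 1
    # mean mass of the effective chains, in monomers
    n_eff = chain_length * float(np.mean(list(cluster_sizes.values())))
    return f_bridge, n_eff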
even for the sheets which have a significantly lower value of than the other systems , the ordering of among the systems is the same as for in the case of the calculations . the sheet composites have the smallest , followed by those with icosahedra , and those with rods . while bridging chains do increase with increasing , does _ not _ seem to be a major factor in the relative value of between the composites . for example , the nanocomposites with icosahedra and sheets have almost the same value of , yet they have significantly different values for . the fewer bridging chains in the systems with square sheets suggest that the sheets may be clustered , and we find that this is indeed the case for ( fig . [ fig : np - np ] ( a ) ) . thus , we plot in fig . [ fig : np - np_minimized ] and find indeed that for the sheets . the sheets prefer a stacked state at low due to entropic interactions with the polymers , which leads to a reduced explicit energetic interaction with the surrounding polymers . in particular , the relative ordering of matches that of . naively , this would seem to suggest a potential correlation between and . however , this apparent correlation is problematic . firstly , the dependences of and are different , namely monotonically increases , while is roughly constant . secondly , if the nanoparticle interactions are the origin of the increased strength , then one might expect to find the largest effect when the nanoparticles are well dispersed . this is not the case , since the clustered sheets give the _ largest _ effect . ref . demonstrated that the rupture of the system occurs in `` weak spots . '' to confirm that the nanoparticles actually impart additional strength , we visually examined the location of fracture in our systems at . to find empty spaces that are at least the size of a particle , we discretize the system into a cubic lattice of overlapping spheres , each with a radius and a nearest neighbor separation . we then identify spheres that do not contain any system force site . by adjusting the parameters and , we can ensure that the voids we find are at least large enough for a single particle . since the force sites of both the nanoparticles and polymers have diameter , we must choose to have physical relevance . we find that values of and work best for visualizing voids . figure [ fig : voids ] shows that the voids in the system occur in regions of pure polymer for the icosahedral system . the same is also true for the rods and square sheets . quantitative analysis of the force sites within nearest - neighbor distance of the voids confirmed this suggestion , with over 99 % of the resulting force sites belonging to polymer chains . thus , while the nanoparticles impart an increased strength to the nanocomposite , the correlation to is not apparently causal . evidently , there is something more subtle controlling the magnitude of in the sheet filled nanocomposite that we have not yet identified . to complete the structural analysis , we calculate for the is . figure [ fig : fraction_minimized ] shows that the sheet nanoparticles are not in contact with many polymers , consistent with the fact that . in fact , after minimization the difference in between the rods and the sheets is even larger than observed in the equilibrium configurations shown in fig . [ fig : fraction ] .
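returning to the void analysis described above , a simple grid - based implementation looks roughly like this ( our own names ; the probe radius and lattice spacing are placeholders for the values the authors report as working best ) :

import numpy as np

def find_voids(positions, box_length, probe_radius, spacing):
    # lay a cubic lattice of overlapping probe spheres over the box and flag
    # every probe sphere that contains no force site at all; probe_radius and
    # spacing are chosen so a flagged sphere can hold at least one particle
    grid = np.arange(0.0, box_length, spacing)
    xs, ys, zs = np.meshgrid(grid, grid, grid, indexing="ij")
    centres = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    voids = []
    for c in centres:
        d = positions - c
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        if np.min(np.linalg.norm(d, axis=1)) > probe_radius:
            voids.append(c)
    return np.array(voids)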
figure [ fig : fraction_minimized ] shows that the relative ordering of is _ opposite _ to that of , hence the number of nanoparticle - polymer contacts alone also does not provide a good indicator of . a potential clue to the increase of of the sheet nanocomposite is the fact that is _ significantly smaller _ than for the other systems . this allows the sheet composites to undergo a greater deformation before fracture . this large increase in both the strength and toughness of the polymer matrix with the incorporation of nanoparticles is reminiscent of the changes in the properties of natural and synthetic rubbers with the inclusion of carbon black and nanofiller additives . extraordinarily large increases in both the strength and toughness of materials have recently been observed in the case of exfoliated clay sheets dispersed in the polymer polyvinylidene fluoride ( pvdf ) . although some of this change is associated with the modification of the crystallization morphology by the clay nanoparticles , this does not in itself explain the observed toughening mechanism . it is known that nanofiller particles can behave as temporary cross - linking agents that impart viscoelastic characteristics to the fluid . gersappe further emphasizes the role of this transient network formation in impeding cavitation events that initiate material rupture and the potential significance of this phenomenon in understanding the nature of biological adhesives ( abalone ) , fibers ( spider silk ) , and shell material ( nacre ) . gersappe further suggests that the relative mobility of nanoparticles has a role in the toughening of materials . however , such reasoning is inconsistent with the fact that the sheet nanoparticles lead to the strongest material while being the _ least mobile _ of our additives . recent work indicates that flexible sheet - like structures , such as those studied here , are characterized by a _ negative _ poisson ratio , _ i.e. _ the material expands normal to the direction of stretching , as opposed to normal materials which expand in the same direction as the stretching . the formation of a composite material with negative poisson ratio ( `` auxetic '' materials ) is expected to lead to a reduction of the poisson ratio of the composite as a whole . a large reduction in is expected to give rise to materials that are strong and fracture resistant . related to this , we observe in our simulations that the voids that form in the sheet - filled nanocomposites tend to be fewer and smaller than those for the other nanoparticles at . as we further reduce for the sheet system , the voids grow more numerous as opposed to growing larger as in the other nanocomposites . microvoid formation has also been established as a mechanism for toughening polymer materials . previous work devoted to understanding the high impact strength of polycarbonate and other glassy polymers has likewise emphasized the importance of large `` free volume '' within the polymer material as a necessary condition for large toughness . based on our simulation results and previous observations , we tentatively suggest that the addition of these `` springy '' sheet materials reduces the effective poisson ratio of our nanocomposites , and that the microvoid formation process that we observe is a manifestation of the non - uniformities in the elastic constants within the nanocomposite .
in principle , the elastic constants can be determined by the pel approach , but this analysis will be deferred to a future work since it is rather involved . our tentative interpretation of the predicted variation in the sheet nanocomposites also suggests the need for better characterization of the geometric rigidity properties of the sheet nanoparticles , since these variables may be important for understanding how such particles modify the stiffness and toughness of nanocomposites . in this work we have focused on how nanoparticle shape influences the viscosity of polymer - nanoparticle melt mixtures at high temperature and the ultimate tensile strength ( estimated from the pel formalism ) of polymer nanocomposites in the ideal glass state . our results suggest that chain bridging between the nanoparticles can have a large effect on of the mixture when the polymer - particle interactions are attractive , so that the nanoparticles disperse readily . in addition , there is a relatively weak increase in for sheet nanoparticle composites , which tend to cluster in our simulations , supporting the expectation that nanoparticle clustering diminishes the viscosity enhancement . ( however , the formation of `` open '' or percolating fractal clusters may have the opposite effect on viscosity ) . curiously , the tensile strength of the sheet nanocomposite is greatest , in spite of the sheet stacking . one of the most intriguing effects of the nanoparticles is that , regardless of shape , the dependence of the tensile strength on chain length for the nanocomposites is _ opposite _ to that for the pure polymer melt . we reiterate that our results for rely on the pel approach , and this approach should be carefully compared with experimental measurements to test the expected relationship between from simulations and tensile strength found experimentally . the trends observed for the are more difficult to understand than those for . there is evidently no clear - cut correlation of with the formation of bridging chains or with the attractive particle - particle interactions . in the absence of such a correlation , we suggest that the nanoparticles affect the mechanical properties by modifying the ratio of the bulk and shear moduli in the low temperature nanocomposite state ( i.e. the nanocomposite poisson ratio ) . recent work has noted that molecularly thin sheets have a negative poisson ratio when they are flexible enough to crumple by thermal fluctuations . such additives could reduce the poisson ratio of the material as a whole and provide a potential rationale for interpreting the increases in both the strength and toughness observed in our simulation of sheets dispersed in a polymer matrix , as well as in experiments on clay nanocomposites . the authors thank v. ganesan , s. kumar , and k. schweizer for helpful discussions . we thank the nsf for support under grant number dmr-0427239 . the table details all system variants simulated , listing chain length , number of chains , loading fraction , number of force sites per nanoparticle , and number of nanoparticles .
figure caption : as a function of shear rate for chain length at . the rods show the largest , followed by the icosahedra , and lastly the square sheets . we show the statistical uncertainty of for each system at , where the fluctuations are largest , and hence represents an upper bound for the relative uncertainty for all calculations . the uncertainty is the result of `` block averaging '' . the inset includes the bulk polymer and shows that the pure system has much lower viscosity than any of the nanocomposites by roughly an order of magnitude . the lines in this and subsequent figures are drawn only as a guide for the reader s eyes .
figure caption : and ( b ) reduced viscosity as a function of chain length . all values are from systems at and . chain length appears to have no effect on the ordering of amongst the systems .
figure caption : on the fraction of bridging chains in each of the three nanoparticle systems at for ( a ) and ( b ) . the order of with respect to nanoparticle type matches the trend in fig . [ fig : shear_figure_1 ] for .
figure caption : as a function of for the pure polymer and three nanocomposite systems at . the pure , icosahedron , and rod systems all have a sastry density , while for the sheets . the uncertainty intervals ( `` error bars '' ) represent the statistical uncertainty in our average for from block averaging .
figure caption : as a function of chain length for the pure polymer and three nanocomposite systems . we find that increases as increases . this is in contrast to the behavior of the pure polymer where decreases with increasing . only for systems with the sheet nanoparticles or the system with icosahedral nanoparticles at does the addition of nanoparticles produce a net benefit relative to the bulk polymer .
| nanoparticles can influence the properties of polymer materials by a variety of mechanisms . with fullerene , carbon nanotube , and clay or graphene sheet nanocomposites in mind , we investigate how particle shape influences the melt shear viscosity and the tensile strength , which we determine via molecular dynamics simulations . our simulations of compact ( icosahedral ) , tube or rod - like , and sheet - like model nanoparticles , all at a volume fraction , indicate an order of magnitude increase in the viscosity relative to the pure melt . this finding evidently can not be explained by continuum hydrodynamics and we provide evidence that the increase in our model nanocomposites has its origin in chain bridging between the nanoparticles . we find that this increase is the largest for the rod - like nanoparticles and least for the sheet - like nanoparticles . curiously , the enhancements of and exhibit _ opposite trends _ with increasing chain length and with particle shape anisotropy . evidently , the concept of bridging chains alone can not account for the increase in and we suggest that the deformability or flexibility of the sheet nanoparticles contributes to nanocomposite strength and toughness by reducing the relative value of the poisson ratio of the composite . we note that the molecular dynamics simulations in the present work focus on the reference case where the modification of the melt structure associated with glass - formation and entanglement interactions should not be an issue . since many applications require good particle dispersion , we also focus on the case where the polymer - particle interactions favor nanoparticle dispersion .
our simulations point to a substantial contribution of nanoparticle shape to both mechanical and processing properties of polymer nanocomposites . |
we consider the diffusion processes pertaining to the following distributed control system , with small random perturbations ( see fig .[ fig - dcs ] ) where * is an -valued diffusion process that corresponds to the - subsystem ( with ) , * the functions are uniformly lipschitz , with bounded first derivatives , is a small positive number ( which is related to the random perturbation level in the system ) , * is lipschitz with the least eigenvalue of uniformly bounded away from zero , i.e. , for some , * ( with ) is a -dimensional standard wiener process , * is a -valued measurable control process to the - subsystem , i.e. , an admissible control from the measurable set . in this paper , we identify two admissible controls , for , being the same on ] .if , then , for every , there exists a borel measurable function , \mathbb{r}^m \bigr ) \rightarrow \mathcal{u}_i ] [ r1 ] in general , the hypoellipticity is related to a strong accessibility property of controllable nonlinear systems that are driven by white noise ( e.g. , see concerning the controllability of nonlinear systems , which is closely related to and ) .that is , the hypoellipticity assumption implies that the diffusion process has a transition probability density , which is on , with a strong feller property .let , for , be bounded open domains with smooth boundaries ( i.e. , is a manifold of class ) .moreover , let be the open sets that are given by suppose that , for a fixed , the distributed control system , which is compatible with expanding construction , is formed by the first subsystems ( i.e. , obtained by adding one after the other , until all subsystems are included ) .furthermore , assume that the newly constructed distributed control system is composed with some admissible controls , , for .let be the exit - time for the diffusion process ( corresponding to the - subsystem ) , for a fixed , with , from the given domain , i.e. , which depends on the behavior of the following ( deterministic ) distributed control system in this paper , we specifically consider a risk - sensitive version of the mean escape time criterion with respect to the - subsystem , i.e. , where , for each , are positive design parameters and the expectation is conditioned on the initial point as well as on the admissible controls .notice that in the exit - time for the diffusion process ( which corresponds to the - subsystem ) from the domain with respect to the admissible ( optimal ) control , , with . ] [ r2 ] here we remark that the criterion in equation makes sense only if we have the following conditions moreover , such conditions depend on the constituting subsystems , the admissible controls from the measurable sets , as well as on the given bounded open domains , for ( see section [ s3(2 ) ] for further discussion ) .then , the problem of risk - sensitive escape control ( with respect to the - subsystem ) will amount to obtaining a supremum value for , i.e. 
, with respect to some progressively measurable control , for each . notice that , for a fixed admissible control from the measurable set , if we obtain a representation for equation as a minimal cost for an associated stochastic optimal control problem , then we will be able to obtain a representation for as a value function for a stochastic differential game . this further allows us to link this progressively measurable control in the original control problem with a strategy for the maximizing player of the associated stochastic differential game . furthermore , such a connection between the risk - sensitive value function and a deterministic differential game can be made immediately , when the small random perturbation vanishes in the limit . before concluding this section , it is worth mentioning that some interesting studies on risk - sensitive control problems for dynamical systems with small random perturbations have been reported in the literature ( for example , see using pde viscosity solution techniques ; see using the probabilistic argumentation and the variational representation for degenerate diffusion processes ; see also , or for some connections between the risk - sensitive stochastic control and dynamic games ) . an outline of the paper is as follows . in section [ s2 ] , we introduce a family of two - player differential games where _ player_- will attempt to maximize the mean escape time criterion corresponding to each of the subsystems ; while _ player_- will attempt to minimize it . in this section , we also provide some preliminary results that are useful for proving our main results . in section [ s3 ] , we present our main results where we consider a risk - sensitive version of the mean escape time criterion with respect to each of the subsystems . using the variational representation , we characterize the risk - sensitive escape control for the diffusion process as the lower and upper values of the associated stochastic differential game . finally , we comment on the implication of our results , where one is also interested in evaluating the performance of the risk - sensitive escape control for the diffusion process , when there is some norm - bounded modeling error in the distributed control system . in this subsection , we consider a family of two - player differential games . for a fixed , at each time , _player_- picks a strategy from the admissible control space , and _player_- picks a control from in such a way that the functions and belong to the strategy sets and respectively . here , we also identify that and , for any , as metric spaces under any metric which is equivalent to convergence in , \mathbb{r}^{r_i } \bigr) ] . suppose that both players have played the game up to the - stage ( see footnote [ fn ] ) . let , , for , be the admissible control strategies picked by the maximizing _ player_- . then , at the - stage , the dynamics of the game is given by the following differential equations with an associated cost criterion where . note that the goal of _ player_- is to maximize with respect to and while that of _ player_- is to minimize it with respect to , for each . here , we remark that _ player_- will attempt to prevent the diffusion process from leaving the given domain ( i.e. , representing the exact control in the risk - sensitive problem ) ; while _ player_- will attempt to force the diffusion process out of the domain ( i.e.
, acting the role of the disturbance in the distributed control system ) . picked by _player_- affects only the dynamics of the game , not directly the cost criterion . ] furthermore , a mapping is said to be a strategy for the maximizing player if it is measurable and , for , v_{\ell}(t ) = \hat{v}_{\ell}(t ) , \quad \forall t \in [ 0,\ , s ] , implies \alpha_{\ell}[v_{\ell}](t ) = \alpha_{\ell}[\hat{v}_{\ell}](t ) , \quad \forall t \in [ 0,\ , s ] , almost everywhere , for every . similarly , a mapping is a strategy for the minimizing player if it is measurable and , for , u_{\ell}(t ) = \hat{u}_{\ell}(t ) , \quad \forall t \in [ 0,\ , s ] , implies \beta_{\ell}[u_{\ell}](t ) = \beta_{\ell}[\hat{u}_{\ell}](t ) , \quad \forall t \in [ 0,\ , s ] , almost everywhere , for every . note that during each expanding construction ( i.e. , when a new subsystem is added to the existing distributed control system ) , we assume that both players play a differential game . for example , for , the dynamics of the game is given by with an associated cost criterion and an exit - time such that _ player_- optimally picks a strategy ( in the sense of best - response correspondence ) to the strategy of _ player_- . then , the game advances to the next stage , i.e. , , and continues until . ] let us denote the set of all maximizing strategies by and the set of all minimizing strategies by . furthermore , let us define the lower and the upper values of the differential game at the - stage by and for each , respectively . moreover , if then the differential game has a value . [ r3 ] note that the greatest payoff that _ player_- ( i.e. , the maximizing player ) can force is called a lower value of the game and , similarly , the least value that _ player_- ( i.e. , the minimizing player ) can force is termed an upper value of the game . in section [ s3 ] , we provide conditions under which these values coincide . in this subsection , we provide additional results that will be useful for proving our main results in section [ s3 ] . [ d1 ] we define to be the set of all -valued -progressively measurable processes , that satisfies for each . [ l1 ] [ variational representation formula ( cf . proposition 2.5 or theorem 5.1 ) ] for a fixed , let , \mathbb{r}^m \bigr ) \rightarrow \mathbb{r} ] by under . then , the relative entropy of with respect to satisfies the following . let be a polish space ( i.e. , a complete separable metric space ) , with a borel -algebra , and let be the set of measures defined on that satisfy the usual hypotheses ( e.g. , see ) . [ l3 ] [ cf . theorem 2.1 ] consider a sequence of measures in satisfying where . let be a borel - measurable function . then , the following hold : a. if weakly converges to another measure as , then b. if is a sequence of uniformly bounded functions that almost surely converges to , then . in this subsection , we relate the lower and upper values of the associated differential game with the risk - sensitive escape control problem for the diffusion process . in particular , using the variational representation ( e.g. , see or ) , we present our main results , i.e. , proposition [ p1 ] and proposition [ p2 ] .
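because the symbols of the cost functional are lost in this version of the text , the sketch below only illustrates , under assumed notation , how a risk - sensitive ( exponential ) escape - time functional could be estimated by monte carlo for a fixed admissible control folded into the drift ; all names and the exact form of the functional are our own assumptions , not the paper s :

import numpy as np
from scipy.special import logsumexp

def risk_sensitive_escape_value(drift, dispersion, x0, in_domain, eps, theta,
                                dt=1e-3, t_max=50.0, n_paths=2000, seed=0):
    # euler-maruyama simulation of dx = drift(x) dt + sqrt(eps) dispersion(x) dW
    # up to the first exit time from the open domain, followed by an assumed
    # risk-sensitive (exponential) average of the exit times; theta > 0 plays
    # the role of a design parameter
    rng = np.random.default_rng(seed)
    exit_times = np.full(n_paths, t_max)
    dim = len(x0)
    for k in range(n_paths):
        x = np.array(x0, dtype=float)
        t = 0.0
        while t < t_max:
            dw = rng.normal(scale=np.sqrt(dt), size=dim)
            x = x + drift(x) * dt + np.sqrt(eps) * (dispersion(x) @ dw)
            t += dt
            if not in_domain(x):
                exit_times[k] = t
                break
    # (eps / theta) * log E[ exp(theta * tau / eps) ], computed stably
    return (eps / theta) * (logsumexp(theta * exit_times / eps) - np.log(n_paths))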
for each fixed admissible control , the following proposition ( which is a direct consequence of lemma [ l1 ] ) characterizes the risk - sensitive escape control problem ( cf . equations and ) in terms of an associated stochastic differential game ( cf . equations below ) . [ p1 ] suppose that , for a fixed , the admissible optimal controls , , for , are given . consider any admissible control . further , for every , let be a borel measurable function such that for ] such that the upper value of the game satisfies with $ ] ( i.e. , when the maximizing player picks such a strategy ) , c. if the lower and upper values of the game coincide , i.e. , then the game has a value . in this subsection , we briefly remark on the implication of our main results where one is also interested in evaluating the robust performance of the risk - sensitive escape control , when there is some norm - bounded modeling error in the distributed control system ( see for a related discussion , but in a different context ) . in what follows , we assume that the statements in propositions [ p1 ] and [ p2 ] are true . suppose that , for a fixed , , , for , are the admissible optimal control strategies picked by _ player_- . further , we consider the following distributed control system ( which contains subsystems ) where . define the value function as where , with . then , for any , with , we have the following inequalities and suppose that , where is interpreted as a modeling error in equation . further , assume that the value is used as a qualitative measure of the performance of the distributed control system . for a given specification , with , if there exists a design parameter such that then , we obtain an upper bound on the norm of the modeling error , which guarantees the desired performance against all modeling errors satisfying such a norm bound . that is , if , then we have . moreover , the above equation ( together with equation ) further implies the following for , with . [ r4 ] note that , for each , the norm on the modeling error is inversely proportional to the design specification , and , therefore , the robustness of the distributed control system increases as the bound on the performance measure decreases . befekadu gk , antsaklis pj ( 2014 ) on the minimum exit rate for a diffusion process pertaining to a chain of distributed control systems with random perturbations . http://arxiv.org/abs/1409.2751[arxiv:1408.6260 ] http://arxiv.org/archive/math.ct[[math.ct ] ] | in this paper , we consider an expanding construction of a distributed control system , which is obtained by adding a new subsystem one after the other , until all subsystems , where , are included in the distributed control system . it is assumed that a small random perturbation enters only into the first subsystem and is then subsequently transmitted to the other subsystems . moreover , for any , the distributed control system , compatible with the expanding construction , which is obtained from the first subsystems , satisfies an appropriate hörmander condition . as a result of this , the diffusion process is degenerate , i.e. , the backward operator associated with it is a degenerate parabolic equation . our main interest here is to prevent the diffusion process ( that corresponds to a particular subsystem ) from leaving a given bounded open domain . in particular , we consider a risk - sensitive version of the mean escape time criterion with respect to each of the subsystems .
using a variational representation , we characterize the risk - sensitive escape control for the diffusion process as the lower and upper values of an associated stochastic differential game . finally , we comment on the implication of our results , where one is also interested in evaluating the performance of the risk - sensitive escape control , when there is some modeling error in the distributed control system . |
among the technologies for biometric identification and verification , face recognition has become a widely - used method , as it is non - intrusive and reliable . however , the robustness of a facial recognition system is constrained by the degree of head pose that is involved , and the recognition rates of state - of - the - art systems drop significantly for large pose angles . especially for tasks where a great variation in head pose has to be expected , like in human - robot interaction , pose - invariant face recognition is crucial . the rise of collaborative and assistive robots in industrial and home environments will increase the demand for algorithms that can adapt to changing settings and uncontrolled conditions . a humanoid robot s cognitive ability to interpret non - verbal communication during conversations also relies heavily on the face of the human counterpart . the impact of head pose on the face analysis performance can be minimised by using normalisation techniques that transform non - frontal faces into frontal representations . this type of image pre - processing can loosely be classified into two categories : * 2d methods ( cylindrical warping , aam ( active appearance models ) , 2d warping ) * 3d methods ( 3dmm , gem ( generic elastic models ) , mean face shape ) as the 2d methods are trained on 2d data , the warping will only be an approximation of the underlying 3d rotation . they also do not model imaging parameters like the pose of the face and illumination , and face parameters like expressions , explicitly . when dealing with larger pose ranges than around in yaw angle , a single 2d model is often no longer sufficient and a multi - model approach must be used ( e.g. , ) . the use of a 3d face model has several advantages , especially when further analysis involving the shape of the face is required , like in emotion detection . to correct the pose for face recognition , a 3d representation of the face can be rotated easily into a frontal view . at close ranges , the use of a 3d or rgb - d sensor can provide the additional depth information for the model . however , since the error of depth measurements increases quadratically with increasing distance when using rgb - d sensors like microsoft kinect , they are not suitable for acquiring image data from a distance . on the other hand , 2d cameras are more widely used in existing platforms and cost - efficient , so that a method combining the advantages of 2d image acquisition and the analysis with a 3d model is needed . the 3d morphable model ( 3dmm ) satisfies this need by providing a parameterised principal component analysis ( pca ) model for shape and albedo , which can be used for the synthesis of 3d faces from 2d images . a morphable model can be used to reconstruct a 3d representation from a 2d image through fitting . fitting is the process of adapting the 3d morphable model in such a way that the difference to a 2d input image is as small as possible . depending on the implementation , the fitter s cost function uses different model parameters to iteratively minimise the difference between the modeled image and the original input image . existing fitting algorithms solve a complex cost function , using the image information to perform shape from shading , edge detection , and solve for both shape and texture coefficients ( ) .
due to the complexity of the cost functions , these algorithms require several minutes to fit a single image . while the execution time may not be crucial for offline applications , real - time robotics applications require faster solutions .more recently , new methods were introduced that use local features or geometric features ( landmarks , edges ) with a fitting algorithm similar to the iterative closest point algorithm . in this paper, we present and evaluate a complete , fully automatic face modelling framework consisting of landmark detection , landmark - based 3d morphable model fitting , and pose - normalisation for face recognition purposes .the whole process is designed to be lightweight enough to run in real - time on a standard pc , but is also especially intended to be used for human - robot interaction in uncontrolled , in the wild environments .the theory is explained in section [ sec : lm_3dmm ] . in section [ sec : exp ] , we show how the approach improves the recognition rates of a regular cots ( commercial off - the - shelf ) face recognition system when it is used as a pre - processing method for pose - normalisation on non - frontal and uncontrolled images from two image databases .finally , we show how our approach is suitable for real - time robotics applications by presenting its integration into the hmi framework of our scitos robot platform in section [ sec : app ] .section [ sec : conclusion ] concludes the paper and gives an outlook to future work .all parts of the framework are publicly available to support other researchers : the c++ implementations of the landmark detection and the fitting algorithm and the face model , as well as a demo app that combines both .in this section , we will give a brief introduction to the 3d morphable model and then introduce our algorithm to recover pose and shape from a given 2d image .we will then present our pose - invariant texture representation in form of a so - called isomap .a 3d morphable model consists of a shape and albedo ( colour ) pca model constructed from 169 3d scans of real faces .these meshes first have to be brought in dense correspondence , that is , vertices with the same index in the mesh correspond to the same semantic point on each face . the model used for this implementationconsists of 3448 vertices .a 3d shape is expressed in the form of ^\mathrm{t} ] are the coordinates of the vertex and is the number of mesh vertices .the rgb colour values are stacked in a similar manner .pca is then applied to both the vertex- and colour data matrices separately , each consisting of stacked 3d face meshes , resulting in shape eigenvectors , their variances , and a mean shape , and similarly for the colour model ( , and ) .a face can then be approximated as a linear combination of the respective basis vectors : where ^\mathrm{t} ] are vectors of shape and colour coefficients respectively .given an image with a face , we would like fit the 3d morphable model to that face and obtain a pose invariant representation of the subject . 
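the linear generative model just described amounts to a single line of code ; a minimal sketch with our own names ( whether the stored basis vectors are already scaled by the per - component standard deviations depends on the model file , so the scaling below is an assumption ) :

import numpy as np

def synthesize_shape(mean_shape, shape_basis, shape_variances, alpha):
    # mean_shape: (3V,) stacked x, y, z coordinates of the mean face,
    # shape_basis: (3V, K) pca eigenvectors, shape_variances: (K,),
    # alpha: (K,) shape coefficients in units of standard deviations;
    # a face instance is the mean plus a linear combination of the basis
    return mean_shape + shape_basis @ (alpha * np.sqrt(shape_variances))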
at the core of our fitting algorithmis an affine camera model , shape reconstruction from landmarks , and a pose invariant textural representation of the face .we thus only use the shape pca model from the 3dmm and not the colour pca model , and instead use the original texture from the image to obtain the best possible image quality for subsequent face analysis steps .similar to aldrian and smith , we obtain a linear solution by decomposing the problem into two steps which can be alternated . the first step in our frameworkis to estimate the pose of the face . given a set of 2d landmark locations and their known correspondences in the 3d morphable model , we compute an affine camera matrix .the detected 2d landmarks and the corresponding 3d model points ( both represented as homogeneous coordinates ) are normalised by similarity transforms that translate the centroid of the image and model points to the origin and scale them so that the root - mean - square distance from their origin is for the landmark and for the model points respectively : with , and with .using landmark points , we then compute a normalised camera matrix using the _ gold standard algorithm _ and obtain the final camera matrix after denormalising : . the second step in our frameworkconsists of reconstructing the 3d shape using the estimated camera matrix .we estimate the vector of pca shape coefficients : where is the number of landmarks and is a weighting parameter for the regularisation that is needed to only allow plausible shapes . are the detected landmark locations and is the projection of the 3d morphable model shape to 2d using the estimated camera matrix .subsequently , the camera matrix can then be re - estimated using the now obtained identity specific face shape , instead of only the mean face .both steps are iterated for a few times - each of them only involves solving a small linear system of equations . in the experiments with automatically detected landmarks , we use a cascaded regression based approach similar to feng et al . .after an initial estimate , i.e. placing the landmarks in the region found by a face detector , we extract local features ( hog ) to update the landmark locations until convergence towards their actual position .the detected 2d landmarks have known corresponding points in the 3d morphable model , and these points are then used to fit the model .the combination of the regression based landmark detection and the landmark - based 3d morphable model fitting results in a lightweight framework that is feasible to run on videos ( all components run in the order of milliseconds on a standard cpu ) . 
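before moving on to the shape step , the camera estimation just described can be written as a plain least - squares problem ( a sketch with our own names ; the point - set normalisation used by the gold standard algorithm is omitted here for brevity ) :

import numpy as np

def estimate_affine_camera(landmarks_2d, model_points_3d):
    # landmarks_2d: (n, 2) detected image points, model_points_3d: (n, 3)
    # corresponding 3dmm vertices; solves x = C [X; 1] for the top two rows
    # of a 3x4 affine camera matrix in the least-squares sense
    n = landmarks_2d.shape[0]
    X_h = np.hstack([model_points_3d, np.ones((n, 1))])             # (n, 4)
    rows, _, _, _ = np.linalg.lstsq(X_h, landmarks_2d, rcond=None)  # (4, 2)
    return np.vstack([rows.T, [0.0, 0.0, 0.0, 1.0]])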
after fitting the pose and shape ,a correspondence between the 3d mesh and the face in the 2d image is known for each point in the image .we use this correspondence to remap the original face texture from the image onto a pose - invariant 2d surface that covers the face from all angles .we create such a generic representation with the isomap algorithm : it finds a projection from the 3d vertices to a 2d plane that preserves the geodesic distance between the mesh vertices .our mapping is computed with the algorithm from tena .the isomap of different persons are in dense correspondence with each other , meaning each location in the map corresponds to the same physical point in the face of every subject ( for example , a hypothetical point $ ] is always the center of the right eye ) .it can consequently show the accuracy of the fitting and is therefore a plausible representation of the result .figure [ fig_isomap ] shows an isomap and a frontal rendering side by side .the isomap captures the whole face , while the rendering only shows the frontal parts of the face . as the face in the input imageis frontal , with most parts of the face visible , there is little self occlusion . in case of large poses , a major part of the isomap can not be filled with face texture , and these regions will be marked black ( for example , a small black spot next to the nose can be observed in the figure ) .to analyse the performance of our approach , we have run experiments on two different image databases ( muct and point and shoot challenge ( pasc ) ) .our experiments are face verification experiments , which means that the system has to verify the identity of probe images compared to a set of gallery images via matching .the results are then statistically analysed regarding the ratio of the two possible error types , the false acceptance rate ( far ) and false rejection rate ( frr ) and plotted into detection error tradeoff ( det ) curves , using the ground - truth subject - id information provided in the metadata of the image databases . in the experiments on the pasc database ( section [ sec : pasc ] ) , we filter the probe and gallery images according to different head poses . we used a market - leading , commercial face recognition engine for the process of enrolment and matching . as a reference measure ,all experiments were also conducted using the original , unprocessed images from the databases ( denoted as _ org _ ) . with this reference , we are able to analyse the impact of using our approach compared to a conventional face recognition framework without 3d modelling .the muct database consists of 3755 faces of 276 subjects and is published by the university of cape town . for our experiments only persons without glasses and their mouths closed were used . after this filtering, 1221 pictures were left .the pictures are taken from 5 different cameras that cause different pose angles and there are in total 10 different lighting schemes applied to the pictures . moreover , the size of the faces within the images is quite large and the subjects are placed in a controlled lab environment .muct comes with 76 manually labeled landmarks given for every image , from which we used 16 for the initialisation of our landmark - based 3dmm fitting ( _ lm _ ) . in total, we used the unprocessed images ( _ org _ ) and the pose normalised renderings coming from our fitting method ( _ lm _ ) .we ran a verification experiment on these images , leading to similarity matrices with 1 490 841 scores for both methods . 
using these scores ,a det curve was plotted , of which a relevant part is depicted in figure [ fig_muct ] .it is clear to see that the verification errors of the face recognition drop significantly when our landmark - based modelling approach is used to correct the head pose .this experiment shows that when pose is the major uncontrolled factor , like on muct , the usage of our _ lm _ aproach leads to a clearly improved face recognition performance . for our proposed use case in assistive or humanoid robotics ,muct is not representative of typical in the wild images we encounter . to verify the performance in heavily uncontrolled settings, we decided to do further investigations on a larger , more challenging database .the point and shoot challenge ( pasc ) is a performance evaluation challenge for developers of face recognition systems initiated by the colorado state university .as the name depicts , the images and videos used in the challenge are taken with consumer point and shoot cameras and not with the help of professional equipment .the database offers 9376 still images of varying resolution up to 4000 pixels and labeled metadata .the arrangement of the image sceneries raises some major challenges for the automated recognition of faces .the pictures were taken in various locations , indoors and outdoors , with complex backgrounds and harsh lighting conditions , accompanied by a low image quality due to blur , noise or incorrect white balance . additionally , the regions of interest for face recognition are also relatively small and of low resolution because the photographed people are not the main subject of the scenery . in a direct comparison ,the verification rate on pasc equals only 0.22 at 0.001 far , in contrast to the more controlled databases mbe ( 0.997 ) , gbu ( 0.8 ) and lfw ( 0.54 ) .because of these harsh , uncontrolled conditions , which are similar to a real life scenario , we chose this image database for our experiments and applied our fitting algorithm ( _ lm _ ) to the images .like the previous experiment on muct ( sec .[ sec : muct ] ) , it was a verification experiment using the same cots face recognition algorithm .our experiments on pasc are intended to be as near to a realistic scenario as possible .therefore , we used the automatic landmark detection of instead of using manually annotated facial landmarks .the detector and our fitting algorithm ( _ lm _ ) allow the use of different landmark schemes , which can vary in their amount .we tested three different sets of 7 , 13 and 49 landmarks , which are visualised on the 3d morphable model in figure [ fig_det_landmarks ] ( a ) . to compare the different amounts of landmarks ,their corresponding results for the face recognition experiment on pasc are shown in figure [ fig_det_landmarks ] ( b ) .it can be observed that a higher number of facial landmarks leads to a higher quality 3dmm fitting , which then leads to a better verification performance .one of the key questions of this work is whether our approach can improve the performance of face recognition under uncontrolled conditions , especially when dealing with head pose . to allow a more detailed interpretation of the results, we generated head pose annotations for the pasc database .we used opencv s implementation of the posit algorithm to calculate the yaw angles for each image in the database .posit estimates the pose of an object from an image using a 3d reference model and corresponding feature points of the object in the image . 
in our case , the 3d model was the 3d morphable model and the feature points were facial landmarks .we used this yaw angle annotation to filter the scores into different groups of yaw angles .the results obtained with our method were then compared to the results without any face modelling . for comparison, we also tested the performance of a commercial face modelling system ( _ cfm _ ) using the same workflow .the _ cfm _ uses additional modelling capabilities like mirroring ( to reconstruct occluded areas of the face ) and deillumination .figure [ fig : ce_lm ] depicts frontalised renderings of the commercial system and our fitter . in total , we compared all images of pasc against each other and the yaw angle filter separated the scores into 10 bins that span from -70 to + 70 . to better see how the pose normalisation methods improve the performance , the resulting curves show the absolute difference of the false rejection rate to the baseline without face modelling ( _ org _ ) at a specific operating point .this type of diagram allows a more simple and clearly arranged representation of the results at different head poses . in figure [ fig_curvedeltayaw_0 ] ,such a curve from the experiment is shown .a set of gallery images of 0 yaw is compared to sets of probe images from 0 to 70 . as gallery databases in face recognition applications often contain frontal images , this experiment reflects a common use case .at the operating point 0.01 far , we then plot the improvement for each group of yaw angles .the commercial face modelling solution shows its first improvements over the conventional face recognition starting at 30 yaw .surprisingly , our 3d morphable model based method is even able to slightly improve the matching performance for frontal images .although the cots face recognition system was used outside of the specification for the large yaw angles , we are able to improve the performance for the whole pose range . by using the original image s texture , our fitter ( _ lm _ ) does not alter the face characteristics and corrects only rotations in the pitch or roll angles . in the next two experiments ,we take a look at how well the system operates when the gallery is also not frontal . in figure[ fig_curvedeltayaw_20_40 ] , the results for the experiment with a 20 and a 40 gallery can be seen . in this case , matching the faces is a much more difficult task , especially when the subjects look in different directions and large areas of the faces are not visible due to self - occlusion . while we are still able to improve the face recognition capabilities in the experiment with a 20 gallery , our method _ lm _ struggles to keep up with the commercial solution for a 40 gallery .the ability to reconstruct parts of the non - visible texture is a key component of the commercial modelling system ( _ cfm _ ) in such a use case which our system currently does nt offer .although , for the robotics application , which we present in section [ sec : app ] of this paper , it is likely that frontal images are used for the gallery enrolment .our fast and accurate 3d face modelling method is particularly suitable for face analysis tasks in assistive or humanoid robotics . in the following ,we demonstrate our efforts in this field by illustrating how we integrate our approach into a mobile robot s software framework . 
a mobile robot platform combined with an hmi systemis the basis for our research on human - robot interaction .the robot is based on a scitos g5 drive system with onboard industrial pc which is able to navigate autonomously in indoor buildings . for mapping and localisation ability , as well as collision avoidance ,two laser scanners are attached to the base ( sick s300 and hokuyo urg-04lx ) . using this base platform ,the robot is able to approach humans safely .the hmi part of the robot consists of a touch - based display and a head with moveable , human - like eyes . on top of the robot, we mounted a pan - tilt - unit ( ptu ; directed perception d46 - 17 ) , where additional sensors and cameras can be installed .the ptu adds an additional degree of freedom which is used for camera positioning to enhance the tracking process . to align the cameras to the region of interest , we track people by applying an adaptive approach which uses a particle filter to track the position and size of the target and estimates the target motion using an optical flow based prediction model .furthermore , an svm - based learning strategy is implemented to calculate the particle weights and to prevent bad updates while staying adaptive to changes in object pose . at this point , the 3dmmpose normalisation of our approach can be used for the image of the tracked face .it can be either applied continuously or if the pose of the subject is greater than a certain threshold ( using the knowledge obtained in the experiments of section [ sec : pasc ] ) .key frames of an example video taken from our robot system are presented in the upper row of figure [ fig_filmstreifen_philipp ] .the opencv object detection with a face cascade was used to initialise the regression based landmark detector , which then automatically found 15 facial landmarks .the fitting results , represented in isomaps , are shown in the second row of figure [ fig_filmstreifen_philipp ] .on the robot , we use the frontal renderings as an input for the same face recognition engine that was also used for the experiments . to distinguish the person s identity properly, the system should generate a rather high score for this positive match . the pose normalised renderings allow the face recognition algorithm to achieve a clearly higher score compared to the original , unprocessed images .possible use cases for a robot of this kind are industrial mobile robotics , shop or museum tour guides or a surveillance system ( e.g. in a supermarket ) . in all these cases ,the robot has to interact naturally with technically unskilled people .it is therefore convenient to be able to analyse people independent of their pose , without requiring them to look at the robot directly .identification , age and gender estimation on pose normalised renderings offer the possibility to personalise the robot s behaviour .we proposed a powerful method for head pose correction using a fully automatic landmark - based approach for fitting a 3d morphable model . by using an efficient fitting algorithm ,our approach can be used for tasks which require a fast real - time solution that can be also used on live video streams .in contrast to existing work , we focus on an evaluation of our approach with automatically found landmarks and in - the - wild images and publish the whole 3dmm fitting pipeline as open source software ( see section [ sec : intro ] ). 
an experimental evaluation on the two commonly - used image databases muct and pasc showed the significance of our approach as a means of image processing for facial recognition in both controlled and heavily unconstrained settings .for recognition algorithms that are mostly trained for frontal faces , a rotation of the head means a loss of information that results in a lower matching score .we showed that our methodology is capable of improving the recognition rates for larger variations in head pose .compared to a commercial face modelling solution , our approach keeps up well and even outperforms it in certain scenarios .the addition of 3dmm pose normalisation certainly brings advantages compared to a conventional face recognition framework when the setting is uncontrolled - like in robotics , surveillance or consumer electronics .furthermore , we presented our software framework on a mobile robot platform which uses our pose normalisation approach to allow a more reliable face analysis .we are currently doing further research on age- and emotion - estimation systems to expand the abilities of our robot . in the future, we also plan to statistically evaluate the effect on their performance when using our landmark - based pose normalisation , like we did for face recognition in this paper .the authors would like to thank huan fui lee and the rt - lions robocup team of reutlingen university .we would also like to thank cyberextruder , inc . for supporting our research . | face analysis techniques have become a crucial component of human - machine interaction in the fields of assistive and humanoid robotics . however , the variations in head - pose that arise naturally in these environments are still a great challenge . in this paper , we present a real - time capable 3d face modelling framework for 2d in - the - wild images that is applicable for robotics . the fitting of the 3d morphable model is based exclusively on automatically detected landmarks . after fitting , the face can be corrected in pose and transformed back to a frontal 2d representation that is more suitable for face recognition . we conduct face recognition experiments with non - frontal images from the muct database and uncontrolled , in the wild images from the pasc database , the most challenging face recognition database to date , showing an improved performance . finally , we present our scitos g5 robot system , which incorporates our framework as a means of image pre - processing for face analysis . |
flows of most liquid substances are usually studied by modeling the liquid as a continuum , but there are some substances that allow the study of flows at the kinetic level , i.e. , at the level of the individual constituent particles . as examples , we can mention chute flows in granular materials and capillary flows in colloids .the solid particles in these soft materials are large enough that their motion can be tracked by video microscopy , allowing experimenters to record their positions and velocities . like granular materials and colloids ,dusty plasmas also allow direct observation of individual particle motion .dusty plasma is a four - component mixture consisting of micron - size particles of solid matter , neutral gas atoms , free electrons , and free positive ions .these particles of solid matter , which are referred to as `` dust particles , '' gain a large negative charge , which is about elementary charges under typical laboratory conditions .the motion of the dust particles is dominated by electric forces , corresponding to the local electric field , where is due to confining potentials and is due to coulomb collisions with other dust particles .due to their high charges , coulomb collisions amongst dust particles have a dominant effect . the interaction force amongst dust particles is so strong that the dust particles do not move easily past one another , but instead self - organize and form a structure that is analogous to that of atoms in a solid or liquid .in other words , the collection of dust particles is said to be a strongly - coupled plasma . in a strongly - coupled plasma ,the pressure is due mainly to interparticle electric forces , with only a small contribution from thermal motion .even when it has a solid structure , a collection of dust particles is still very soft , as characterized by a sound speed on the order of 1 cm / s . as a result , a dusty plasma in a solid phase is very easily deformed by small disturbances , and it can be made to flow .flows can be generated , for example , by applying shear using a laser beam that exerts a spatially - localized radiation force .in such an experiment , the reynolds number is usually very low , typically , indicating that the flow is laminar .this paper provides further analysis and details of the experiment that was reported in .we now list some of the major points of these two papers , to indicate how they are related and how they differ . in this paper, we present : ( 1 ) a detailed treatment of the continuity equations for both momentum and energy , ( 2 ) our method of simultaneously determining two transport coefficients ( viscosity and thermal conductivity ) , ( 3 ) values of these two coefficients , and ( 4 ) spatially - resolved profiles of the terms of the energy equation , including the terms for viscous heating and thermal conduction , as determined by experimental measurements . 
in , we reported : ( 1 ) a discovery of peaks in a spatially - resolved measurement of kinetic temperature , ( 2 ) a demonstration that these peaks are due to viscous heating in a region of a highly sheared flow velocity , and ( 3 ) a quantification of the role of viscous heating , in competition with thermal conduction , by reporting a dimensionless number of fluid mechanics called the brinkman number which we found to have an unusually large value due to the extreme properties of dusty plasma as compared to other substances .the values of viscosity and thermal conduction found in this paper are used as inputs for the calculations of dimensionless numbers in .the identification of viscous heating as the cause of the temperature peaks reported in is supported by the spatially - resolved measurements reported here . in the experiment ,the dust particles are electrically levitated and confined by the electric field in a sheath above a horizontal lower electrode in a radio - frequency ( rf ) plasma , forming a single layer of dust particles , fig .the dust particles can move easily within their layer , but very little in the perpendicular direction , so that their motion is mainly two dimensional ( 2d ) .they interact with each other through a shielded coulomb ( yukawa ) potential , due to the screening provided by the surrounding free electrons and ions .as the dust particles move , they also experience frictional drag since individual dust particles in general move at velocities different from those of the molecules of the neutral gas .this friction can be modeled as epstein drag , and characterized by the gas damping rate . using experimental measurements of their positions and velocities , the dust particles can of coursebe described in a _ particle _ paradigm .they can also be described by a _ continuum _ paradigm by averaging the particle data on a spatial grid . in transport theory ,momentum and energy transport equations are expressed in a continuum paradigm , while transport coefficients such as viscosity and thermal conductivity are derived using the particle paradigm because these transport coefficients are due to collisions amongst individual particles . in our experiment, we average the data for particles , such as velocities , to obtain the spatial profiles for the continuum quantities , such as flow velocity . in the continuum paradigm, a substance obeys continuity equations that express the conservation of mass , momentum , and energy .these continuity equations , which are also known as navier - stokes equations , characterize the transport of mass , momentum , and energy . in a multi - phase or multi - component substance , these equations can be written separately for each component .the component of interest in this paper is dust . in sec .ii , we review the continuity equations for dusty plasmas . in secs .iii and iv , we provide details of our experiment and data analysis method .we designed our experiment to have significant flow velocity and significant gradients in the flow velocity , i.e. , velocity shear . in sec .v , we simplify the continuity equations using the spatial symmetries and steady conditions of the experiment , and including the effects of external forces .we will use our experimental data as inputs in these simplified continuity equations in secs .vi and vii .we now review the continuity equations for mass , momentum , and energy for dusty plasmas . 
we will then discuss the significance of some of the terms for our experiments .the equation of mass continuity , i.e. , conservation of mass , is in this paper , the mass density and the fluid velocity describe the dust continuum .the momentum equation is \nabla(\nabla \cdot { \bf v } ) & + & { \bf f}_{\rm ext},\end{aligned}\ ] ] here , , , and are the charge density , shear viscosity , and bulk viscosity , respectively ; and is called the kinematic viscosity . in this paper , these parameters describe the dust continuum .equation ( [ momentum ] ) describes the force per unit mass , i.e. , acceleration , for the continuum .the confining field and pressure have been discussed in sec .i. the last term in eq .( [ momentum ] ) is due to the momentum contribution from forces such as gas friction , laser manipulation , ion drag , and any other forces that are external to the layer of dust particles , as discussed in sec .the other terms on the right - hand - side of eq .( [ momentum ] ) correspond to viscous dissipation , which arises from coulomb collisions amongst the charged dust particles .the viscous term was studied in a previous dusty plasma experiment . for our experiment, we can consider eq .( [ momentum ] ) for two conditions , with and without the application of the external force . without the external force ,the fluid velocity is zero so that eq .( [ momentum ] ) is reduced to , meaning that the confining electric force is in balance with the pressure ( which mainly arises from interparticle electric forces ) . with the external force ,the external confining force remains as it was without ; moreover , the pressure will be affected only weakly because the density is unchanged ( as we will show later ) so that the interparticle electric forces that dominate the pressure will also be unchanged .therefore , in using eq .( [ momentum ] ) , we will assume that the first two terms on the right hand side always cancel everywhere . the internal energy equation , as it is expressed commonly in fluid dynamics , is here, is the entropy per unit mass , is the thermal conducitivity , and is the thermodynamic temperature of the dust continuum .the last term is due to the energy contribution from any external forces .we assume that the continuity equations , eqs .( [ mass]-[energy ] ) , are valid for the dust particles separately from other components of dusty plasmas that occupy the same volume , such as neutral gas atoms .the coupling between the dust particles with other components is indicated in and , so that the momentum and energy of the dust particle motion is treated for the dust particles separately from other components .other external forces , such as those due to laser manipulation , are also indicated in and . the first term on the right - hand - side of eq .( [ energy ] ) is due to viscous heating .the viscous heating term depends on the square of the shear , i.e. , the square of the gradient of flow velocity .a general expression for has many terms ( cf .( 3.4.5 ) of ref . or eq .( 49.5 ) of ref . ) , but it can be simplified for our experiment by taking advantage of symmetries , as explained in sec .iv . the second term on the right - hand - side of eq .( [ energy ] ) is due to thermal conduction .it arises from a temperature gradient .previous experiments with 2d dusty plasmas include a study of the thermal conduction term in eq .( [ energy ] ) . in this paper ,most of our attention will be devoted to the first two terms on the right - hand - side of eq .( [ energy ] ) . 
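the display equations of this review did not survive the text extraction (only a fragment of the momentum equation's latex remains above). for readability, the block below restates the standard forms that eqs. ([mass])-([energy]) correspond to, in the usual navier-stokes / landau-lifshitz notation; the symbol choices (ρ for mass density, v for flow velocity, σ for charge density, p for pressure, η and ζ for shear and bulk viscosity, ν = η/ρ for kinematic viscosity, s for entropy per unit mass, κ for thermal conductivity, T for temperature) are our assumptions and may differ from the symbols used in the original manuscript.

```latex
% mass continuity
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{v}) = 0

% momentum (with confining field E and external force per unit mass f_ext)
\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\!\cdot\!\nabla)\mathbf{v}
  = \frac{\sigma}{\rho}\,\mathbf{E} \;-\; \frac{1}{\rho}\nabla p
    \;+\; \nu\,\nabla^{2}\mathbf{v}
    \;+\; \frac{1}{\rho}\Bigl(\zeta + \frac{\eta}{3}\Bigr)\nabla(\nabla\!\cdot\!\mathbf{v})
    \;+\; \mathbf{f}_{\mathrm{ext}}

% internal energy (heat) equation
\rho\,T\Bigl[\frac{\partial s}{\partial t} + (\mathbf{v}\!\cdot\!\nabla)s\Bigr]
  = \Phi_{\mathrm{visc}} \;+\; \nabla\!\cdot\!(\kappa\,\nabla T) \;+\; q_{\mathrm{ext}}
```

here Φ_visc stands for the general viscous dissipation function (the shear-dependent heating term discussed above) and q_ext for the volumetric heating or cooling supplied by external forces.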
using our experimental data, we will compare the magnitudes of these terms . in , we demonstrated that viscous heating is measurable and significant , when evaluated using only _ global _ measures like the brinkman number .here we further evaluate viscous heating by characterizing it _ locally _ using spatially - resolved profiles for the terms in eq .( [ energy ] ) .we also develop a method to simultaneously obtain values of two transport coefficients of and .here we provide a more detailed explanation of the experiment than in .an argon plasma was generated in a vacuum chamber at ( or 2.07 pa ) , powered by rf voltages at and peak - to - peak .we used the same chamber and electrodes as in .the dust particles were diameter melamine - formaldehyde microspheres of mass .the dust particles settled in a single layer above the powered lower electrode .the layer of dust particles had a circular boundary with a diameter of and contained dust particles .as individual dust particles moved about within their plane , they experienced a frictional damping with a rate due to the surrounding argon gas .the particles were illuminated by a -nm argon laser beam that was dispersed to provide a thin horizontal sheet of light , fig .1(a ) . using a cooled 14-bit digital camera ( pco model 1600 ) viewing from above , we recorded the motion of individual dust particles .this top - view camera imaged a central portion of the dust layer , as sketched in fig .the movie is available for viewing in the supplemental material of .the portion of the camera s field of view that we will analyze was , and it contained particles .we recorded frames at a rate of 55 frame / s , with a resolution of .our choice of 55 frame / s was sufficient for accurate measurement of various dynamical quantities , including the kinetic temperature , although a slightly higher frame rate would have been optimal .in addition to the top - view camera , we also operated a side - view camera to verify that there was no significant out - of - plane motion ; this was due to a strong vertical confining electric field .thus , we will analyze the particle motion data taking into consideration only the motion within a horizontal plane . at first , the dust particles self - organized in their plane to form a 2d crystalline lattice .the particle spacing , as characterized by a wigner - seitz radius , was , corresponding to a lattice constant , an areal number density , and a mass density . using the wave - spectra method for thermal motion of particles in the undisturbed lattice, we found the following parameters for the dust layer : , , and , where is the nominal 2d dusty plasma frequency , is the particle charge , is the elementary charge , and is the screening length of the yukawa potential .we used laser manipulation to generate stable flows in the same 2d dusty plasma layer .a pair of continuous - wave -nm laser beams struck the layer at a downward angle . in this manner , we applied radiation forces that pushed particles in the directions , as shown in fig . 1 .the power of each beam was as measured inside the vacuum chamber . 
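before describing the laser-driven flow in more detail, we note for reference that the layer parameters quoted above are related by the conventional two-dimensional definitions used in the dusty-plasma literature. the snippet below simply collects these textbook relations; it is not taken from the original analysis, and the argument values are placeholders.

```python
import numpy as np

EPS0 = 8.854e-12       # vacuum permittivity, F/m
E_CHARGE = 1.602e-19   # elementary charge, C

def layer_parameters(n_areal, Q_e, m_dust):
    """Conventional 2D dusty-plasma layer parameters.

    n_areal : areal number density of the dust layer, 1/m^2
    Q_e     : particle charge in units of the elementary charge
    m_dust  : dust particle mass, kg
    """
    a = 1.0 / np.sqrt(np.pi * n_areal)            # Wigner-Seitz radius, n*pi*a^2 = 1
    b = np.sqrt(2.0 / (np.sqrt(3.0) * n_areal))   # triangular-lattice constant
    Q = Q_e * E_CHARGE
    # nominal 2D dusty plasma frequency (standard definition in this literature)
    omega_pd = np.sqrt(Q**2 / (2.0 * np.pi * EPS0 * m_dust * a**3))
    return a, b, omega_pd
```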
to generate a wider flow than in , the laser beams were rastered in the both and directions in a lissajous pattern as in , with frequencies of and .we chose these frequencies to be to avoid exciting coherent waves .the rastered laser beams filled a rectangular region , which crossed the entire dust layer and beyond , as sketched in fig .this laser manipulation scheme resulted in a circulating flow pattern with three stable vortices , as sketched in fig .we designed the experiment so that in the analyzed region the flow is straight in the directions , without any significant curvature .curvature in the flow , which is necessary for the flow pattern to close on itself as sketched in fig .1(b ) , was limited in our experiment to the extremities of the dust layer , where it would not affect our observations .( color online ) .( a ) side - view sketch of the apparatus , not to scale . a single layer of dust particles of charge and mass levitated against gravity by a vertical dc electric field .there is also a weaker radial dc electric field which prevents the dust particles from escaping in the horizontal direction .further details of the chamber are shown in .the two laser beams are rastered in the direction so that they have a finite expanse , and they are offset in the direction as shown in ( b ) .( b ) top - view sketch of laser - driven flows in the 2d dusty plasma . in the region of interest , the flow is straight , with curvature of the flow limited to the extremities of the dust layer .a video image of the dust particles within the region of interest is also shown .the region of interest is divided into 89 bins of width so that particle data can be converted to continuum data . ]we start our data analysis by analyzing data for individual particles , i.e. , by working in the _ particle _ paradigm . using image analysis software , with a method optimized as in to minimize measurement error , we identify individual particles in each video image and calculate their coordinates .we then track a dust particle between two consecutive frames and calculate its velocity as the difference in its position divided by the time interval between frames . now having the position and velocity of all the particles in the analyzed region , we can study motion at the particle level .for example , in sec .vi we will use data for the individual particles to calculate the rate of their energy dissipation due to their frictional drag with the neutral gas , .we next convert our data for individual particles to continuum data , i.e. , we change from the _ particle _ paradigm to the _ continuum _ paradigm .this is done by averaging particle data within spatial regions of finite area , which we call bins .there are 89 bins , which are all long narrow rectangles aligned in the direction , as shown in fig .each bin contains dust particles .we choose the shape of these bins to exploit the symmetry of the experiment , which in the analyzed region has an ignorable coordinate , .the width of each bin is the same as the wigner - seitz radius , . 
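the particle-tracking and binning steps just described can be illustrated with two short sketches. both are simplified stand-ins for the optimised analysis code actually used; the array shapes, the displacement cut-off and the coordinate naming (flow along x, bins along y) are assumptions. the first sketch links particle positions between consecutive frames by nearest neighbour and forms velocities as displacement divided by the frame interval.

```python
import numpy as np

def link_and_velocity(pos_prev, pos_next, dt, max_disp):
    """Link each particle in frame k to its nearest neighbour in frame k+1
    and return velocities v = (x_{k+1} - x_k) / dt.

    pos_prev, pos_next : (N, 2) and (M, 2) arrays of particle coordinates
    dt                 : time between frames (1/55 s at the frame rate quoted above)
    max_disp           : largest plausible displacement per frame (assumed cut-off)
    """
    velocities, pairs = [], []
    for i, p in enumerate(pos_prev):
        d = np.linalg.norm(pos_next - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_disp:                      # reject spurious links across the lattice
            velocities.append((pos_next[j] - p) / dt)
            pairs.append((i, j))
    return np.array(velocities), pairs
```

the second sketch bins the resulting particle data into the narrow rectangular bins described above, splitting each particle linearly between the two nearest bins (anticipating the cloud-in-cell weighting discussed in the next paragraph), and forms the flow velocity and the mean-square velocity fluctuation that enters the kinetic temperature.

```python
import numpy as np

def continuum_profiles(y, vx, vy, y_edges):
    """Bin particle data along y with a linear (cloud-in-cell-style) weighting.

    y       : particle y-coordinates, pooled over all analysed frames, shape (N,)
    vx, vy  : corresponding particle velocity components, shape (N,)
    y_edges : bin edges; bins are long rectangles with x as the ignorable coordinate

    Returns the bin centres, the flow velocity components, and the mean-square
    velocity fluctuation (proportional to the kinetic temperature in 2D,
    k_B T = (m/2) <|v - v_flow|^2>).
    """
    dy = y_edges[1] - y_edges[0]
    centers = 0.5 * (y_edges[:-1] + y_edges[1:])
    nbins = centers.size

    weight = np.zeros(nbins)
    sums = {"vx": np.zeros(nbins), "vy": np.zeros(nbins),
            "vx2": np.zeros(nbins), "vy2": np.zeros(nbins)}

    # Cloud-in-cell: split each particle linearly between the two nearest bin centres.
    f = (y - centers[0]) / dy
    i0 = np.clip(np.floor(f).astype(int), 0, nbins - 2)
    w1 = np.clip(f - i0, 0.0, 1.0)               # fraction assigned to the upper bin
    for idx, w in ((i0, 1.0 - w1), (i0 + 1, w1)):
        np.add.at(weight, idx, w)
        np.add.at(sums["vx"], idx, w * vx)
        np.add.at(sums["vy"], idx, w * vy)
        np.add.at(sums["vx2"], idx, w * vx**2)
        np.add.at(sums["vy2"], idx, w * vy**2)

    weight = np.where(weight > 0, weight, np.nan)  # empty bins give NaN profiles
    vbar_x = sums["vx"] / weight
    vbar_y = sums["vy"] / weight
    fluct = (sums["vx2"] / weight - vbar_x**2) + (sums["vy2"] / weight - vbar_y**2)
    return centers, vbar_x, vbar_y, fluct
```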
to reduce the effect of particle discreteness as a particle crosses the boundaries between bins , we use the cloud - in - cell weighting method which has the effect of smoothing data so that a particle contributes its mass , momentum and energy mostly to the bin where it is currently located and to a lesser extent to the next nearest bin .the data are binned in this way regardless of their positions , since is treated as an ignorable coordinate .we also time - average these binned data , exploiting the steady conditions of the experiment .this procedure ( binning , cloud - in - cell - weighting , and time averaging ) yields our continuum quantities , such as the flow velocity .it also yields a kinetic temperature which is calculated from the individual particle velocities ; this kinetic temperature is not necessarily identical to the thermodynamic temperature . here , is the boltzmann constant .we assume that it is valid to use a continuum model when gradients are concentrated in a region as small as a few particle spacings .in fact , it has been shown experimentally that the momentum equation and the energy equation remain useful in 2d dusty plasma experiments with gradients that are as strong as in our experiment .the notation we use in this paper distinguishes velocities and other quantities according to whether they correspond to individual particles or continuum quantities .parameters for individual dust particles are denoted by a subscript , for example for the velocity of an individual dust particle .continuum quantities in a theoretical expression are indicated without any special notation , for example for the hydrodynamic velocities in eqs .( [ mass ] ) and ( [ momentum ] ) .finally , continuum quantities that we compute with an input of experimental data , as described above , are indicated by a bar over the symbol , for example .here , we present our simplification of the continuity equations , eqs .( [ mass]-[energy ] ) , to describe our 2d dust layer .these simplifications involve three approximations suitable for the conditions in our 2d layer , and a treatment of two external forces , laser manipulation and gas friction , that are responsible for and .we describe these simplifications and the resulting continuity equations , next .the first of our three approximations is .this approximation is suitable for the steady overall conditions of our experiment . aside from the particle - level fluctuations that one desires to average away ,when adopting a continuum model , the only time - dependent processes in the experiment were the rastering of the laser beam at and the rf electric fields that powered the plasma .these frequencies are too high for the dust particles to respond , and the rastering of the beams is not a factor anyway because we will only use the continuum equations outside the laser beams .the second approximation is that is negligibly small . due to the symmetry in our experiment design , as mentioned in sec .iii , is treated as an ignorable coordinate , i.e. , . as a verification of this assumption , we observe that the ratio is of order , as a measure of the slightly imperfect symmetry of our experiment .thus , for our experiment , we consider to be of zeroth order and to be two orders of magnitude smaller , when we approximate eqs .( [ mass]-[energy ] ) .the third approximation is that is negligibly small , based on our observation of the flow velocity of our 2d layer .our results for the calculated flow velocity of the dust layer are shown in fig .2 . 
from the velocity in fig 2(a ) , the flow can be easily identified from the two peaks with broad edges .however , the flow velocity in the direction is two orders of magnitude smaller than in the direction , as shown in fig .2(b ) .( color online ) .profiles of continuum parameters during laser manipulation , including ( a - b ) flow velocity , ( c ) areal number density , and ( d ) a measure of local structural order that would be a value of 1 for a perfect crystal . disorder , as indicated by a small value in ( d ) , is found to be greatest where the shear is largest , not where the flow is fastest . ] using the second and third approximations , and omitting terms that are small by at least two orders of magnitude , we can easily see that we can approximate and .the latter indicates that the dust layer can be treated as an incompressible fluid in our experiment .using these three approximations in eq .( [ mass ] ) , we find that . in other words , for our approximations , the density is uniform , which is confirmed by our experimental observation of fig .because the density is so uniform , we can also assume that plasma parameters , such as the dust charge , are also spatially uniform within the analyzed region .in addition to these approximations , we also assume that and are valid transport coefficients for our system .there are theoretical reasons to question whether 2d systems ever have valid transport coefficients , and this is typically tested _ theoretically _ using long - time tails in correlation functions . it could be tested _ experimentally _ by repeating the determination of and for vastly different length scales for the gradients of velocity and temperature and verifying that they do not depend on the length scale .however , such a test is not practical for experiments like ours , which tend to have a limited range of diameters of dust layers that can be prepared .now we consider external forces that contribute momentum and energy to the two equations , eqs .( [ momentum ] ) and ( [ energy ] ) . for our experiment, we can mention six _ external _ forces acting on the dust layer : gas friction , laser manipulation , electric confining force , gravity , electric levitating force , and ion drag . since we only study the 2d motion of dust particles within their plane , the last three forces are of no interest because they are in the perpendicular direction , and will not affect the horizontal motion of particles that is of interest here .the electric confining force is balanced by the pressure inside the 2d dusty plasma lattice , , as described in sec .thus , only two of the six forces need to be considered : gas friction and laser manipulation .gas friction is the main dissipation mechanism in our experiment .we can consider the effect of this friction first at the level of a single dust particle , and then at the level of a continuum .for the _ momentum _ equation , we note first that at the particle level , a single dust particle moving at a speed of experiences a drag acceleration of . at the continuum level , the contribution of this drag to eq .( [ momentum ] ) is simply the average acceleration experienced by all dust particles in a given spatial region , .for the _ energy _ equation , at the particle level the rate of energy dissipation for one dust particle is , which is the product of a drag force and velocity , where is the kinetic energy of one dust particle . 
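since the inline expressions here were also lost in extraction, we restate the particle-level epstein-drag relations being referred to. the notation (ν_g for the gas damping rate, v_i and E_i for the velocity and kinetic energy of particle i) is our assumed choice rather than the original symbols.

```latex
% per-particle Epstein drag: acceleration and dissipated power
\mathbf{a}_{\mathrm{drag}} = -\,\nu_{g}\,\mathbf{v}_i ,
\qquad
\frac{dE_i}{dt}\bigg|_{\mathrm{gas}}
  = \mathbf{F}_{\mathrm{drag}}\!\cdot\!\mathbf{v}_i
  = -\,m\,\nu_{g}\,|\mathbf{v}_i|^{2}
  = -\,2\,\nu_{g}\,E_i ,
\qquad
E_i = \tfrac{1}{2}\,m\,|\mathbf{v}_i|^{2} .
```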
at the continuum level ,averaging over all the dust particles in the given spatial region , the rate of energy loss per unit mass in eq .( [ energy ] ) is .using the three approximations listed above and taking into account gas friction and laser manipulation forces , the mass , momentum , and energy continuity equations become where is the viscous heating term , after simplifications based on the assumptions of , , and . in this paper , we will restrict our analysis to a spatial region where the laser force and power are zero , i.e. , and . in this region ,the momentum and energy continuity equations will be further simplified : to simplify the problem , we will assume that and are independent of temperature , as discussed in .we can comment on the meaning of these two equations .equation ( [ momentum_dpg ] ) indicates a balance of the sideways transfer of momentum due to two mechanisms : viscosity arising from interparticle electric forces and frictional loss of momentum due to collisions with gas atoms .equation ( [ energy_dpg ] ) describes the energy transferred from the viscous heating and thermal conduction ( the second term ) as being balanced by the energy dissipated due to friction as expressed in the last term of eq .( [ energy_dpg ] ) .we will use eqs .( [ momentum_dpg]-[energy_dpg ] ) only in spatial regions where and , i.e. , outside the laser beam . in the next section, we will present our calculation of terms appearing in eqs .( [ momentum_dp]-[energy_dpg ] ) using the dust particles position and velocity data from our experiment .the terms of interest in these equations are , , , and .( color online ) . profiles of the first ( a ) and second ( b ) derivatives of flow velocity . to make a comparison , the flow velocity profile also provided in ( c ) . ] our results for the first and second derivatives of the flow velocity are presented in fig .these results are calculated using the flow velocity profile in fig .2(a ) , which we reproduce in fig .3(c ) . the first and second derivatives in fig. 3(a - b ) will be used in eqs .( [ viscous_heat ] ) and ( [ momentum_dpg ] ) , respectively . from fig .3(a ) , we can identify four points of maximum shear , i.e. , maximum ; these are at , , , and .these points of maximum shear coincide with other features of interest : the minimum in the structural order , fig .2(d ) , and peaks in the mean - square velocity fluctuation profile , which we will present below .we find that disorder , as indicated by a small value in fig .2(d ) , is greatest where the shear is largest , not where the flow is fastest . comparing panels ( b ) and ( c ) of fig . 3, we find that the profiles for the flow velocity and its second derivative are similar , in regions without laser manipulation , for example in the central region .this similarity is expected from the momentum equation , eq .( [ momentum_dpg ] ) , provided that the viscosity is spatially uniform .we do not expect that would be spatially uniform since viscosity in general depends on temperature and the temperature is highly non - uniform , as we will show below .nevertheless , we find that the two curves are nearly similar , with a small discrepancy that we will quantify below when we present data for the residual of eq .( [ momentum_dpg ] ) .( color online ) .profiles of the mean squared particle velocity shown separately for particle motion in the ( a ) and ( b ) directions .these quantities correspond to the mean particle kinetic energy , i.e. 
, .the thin curve in each panel is the flow velocity profile ; its scale is shown in fig .3(c ) . ]profiles of the mean squared particle velocity , corresponding to the kinetic energy in the energy equation , eq .( [ energy_dpg ] ) , are shown in fig . 4 .this kinetic energy includes energy associated with both the macroscopic flow and the fluctuations at the particle level .we will use these profiles in determining in the next section , where we will find the residual of eq .( [ energy_dpg ] ) .( color online ) .profiles of the mean - square particle velocity fluctuation shown separately for particle motion in the and directions , ( a ) and ( b ) , respectively .these quantities combined correspond to the kinetic temperature , eq .( [ kt ] ) .the profile of were reported in , where we discovered peaks in the kinetic temperature where the shear is largest .the second derivative of the averaged mean - square particle velocity fluctuation for the motion in the and directions ( c ) will be used to determine the second derivative of the thermodynamic temperature in eq .( [ energy_dpg ] ) .the thin curve in each panel is the flow velocity profile ; its scale is shown in fig .] in fig . 5, we present our results for the mean - square velocity fluctuation , which corresponds to the kinetic temperature as in eq .( [ kt ] ) .we will use this kinetic temperature in place of the thermodynamic temperature in the energy equation , eq .( [ energy_dpg ] ) . unlike the kinetic energy ,the kinetic temperature only includes the energy associated with the fluctuations of particle velocity about the flow velocity . as reported in , there are peaks in the kinetic temperature profile that coincide with the position of maximum shear .these peaks can also be seen in fig .5(a - b ) . in , we attributed these peaks to viscous heating .as an intuitive explanation of viscous heating , consider that higher shear conditions lead to collisions of particles flowing at different speeds , causing scattering of momentum and energy that leads to higher random velocity fluctuations , and therefore higher kinetic temperature . in the next section , we provide further verification that the temperature peaks are due to viscous heating ; we do this by confirming that three terms in the energy equation , including viscous heating , are in balance as indicated by their summing to zero .we now examine the momentum and energy equations , eqs .( [ momentum_dpg ] ) and ( [ energy_dpg ] ) , which are written so that the right - hand - side is zero .when we use these equations with an input of experimental data , however , the terms will not sum exactly to zero , but will instead sum to a finite residual .we calculate these residuals , and we vary two free parameters , the viscosity and the thermal conductivity , to minimize the residuals .( specifically , we minimize the square residual summed over all bins in the central region of . )this minimization procedure yields the best estimation for the values for and , which will be our first chief result .we will then , as our second chief result , be able to make a spatially - resolved comparison of the magnitude of different terms in the energy equation , eq .( [ energy_dpg ] ) .( color online ) .( a ) a profile of the residual of the momentum equation , eq .( [ momentum_dpg ] ) , assuming a kinematic viscosity of . 
for the central region ,magnified in ( b ) , the summation of the squared residual reaches its minimum when ; this minimization process is how we determine , which is one of our main results .the thin curve in ( a ) is the flow velocity profile ; its scale is shown in fig .3(c ) . ]figure 6 shows the residual of the momentum equation , eq .( [ momentum_dpg ] ) .this result is shown for , which is the best estimation of the kinematic viscosity .the data are shown as a spatial profile because we calculated eq .( [ momentum_dpg ] ) separately for each bin , i.e. , each value of .two peaks in fig .6(a ) , located within the laser manipulation region , are due to the momentum contribution from the laser .we do not use eq .( [ momentum_dpg ] ) with these peaks because of a finite laser force there .instead we will use the flatter region between these peaks , where , as magnified in fig .the small residuals in this flatter region indicate that the momentum equation , eq .( [ momentum_dpg ] ) , is able to accurately account for the momentum of our 2d dust layer , and that the minimization process in this region yields a value for the viscosity .( color online ) .( a ) a profile of the residual of the energy equation , eq .( [ energy_dpg ] ) , assuming a thermal diffusivity of . for the central region ,magnified in ( b ) , the summation of the squared residual reaches its minimum when .this minimization process is how we determine , which is another of our main results .the thin curve in ( a ) is the flow velocity profile ; its scale is shown in fig .3(c ) . ]figure 7 shows the residual of the energy equation , eq .( [ energy_dpg ] ) , for , which is the best estimation of the thermal diffusivity . in this calculation, we used the value of from the momentum equation above , and we varied the value of to minimize the residuals as described above .two large negative peaks in fig .7(a ) are due to the energy contribution from the laser manipulation , which is not included in eq .( [ energy_dpg ] ) .the small values of residuals in the flatter region between these peaks , as magnified in fig .7(b ) , show that the energy equation , eq .( [ energy_dpg ] ) , accurately describe energy transport in our 2d dust layer , and that the minimization process in this region yields a value for the thermal diffusivity . by achieving a small value of the residual ,we have verified that the three terms in the energy equation , eq .( [ energy_dpg ] ) , are in balance .since one of these terms is viscous heating and another is computed from the temperature profile which has peaks , the balance we observe here is consistent with the conclusion of that the temperature peaks are due to viscous heating . as the first chief result of this paper , we obtain the kinematic viscosity value of and thermal diffusivity value of .these values are obtained simultaneously in a single experiment . a source of uncertainty in these valuesis systematic error in , for example due to particle size dispersion or uncertainty in the epstein drag coefficients .this is so because our method actually yields results for and .another source of uncertainty is our simplification that we neglect heating sources other than laser manipulation ; in a test we determined that is in a range from 7.5 to , depending on whether these small heating effects are accounted for .we note that our results of and agree with the results of previous experiments using the same size of dust particles and a similar value of .( color online ) . 
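for completeness, the two-step parameter estimation described above lends itself to a compact sketch. the snippet below is a schematic reconstruction only: the residual forms follow the reduced one-dimensional balances as we read eqs. ([momentum_dpg]) and ([energy_dpg]) (viscosity times the curvature of the flow-velocity profile balancing gas drag, and viscous heating plus conduction balancing gas dissipation), while the exact prefactors, smoothing and fitting ranges of the published analysis may differ. all inputs are placeholder arrays.

```python
import numpy as np

def fit_transport_coefficients(y, vbar_x, T_over_m, v2_mean, nu_gas,
                               nu_grid, chi_grid, central):
    """Estimate kinematic viscosity and thermal diffusivity by residual minimisation.

    y         : bin centre positions (m)
    vbar_x    : binned flow-velocity profile (m/s)
    T_over_m  : binned kB*T/m profile, temperature in velocity-squared units (m^2/s^2)
    v2_mean   : binned mean-square particle velocity (m^2/s^2)
    nu_gas    : Epstein gas damping rate (1/s)
    nu_grid, chi_grid : candidate transport-coefficient values (m^2/s)
    central   : boolean mask selecting the laser-free central bins
    """
    dvx = np.gradient(vbar_x, y)
    d2vx = np.gradient(dvx, y)
    d2T = np.gradient(np.gradient(T_over_m, y), y)

    # Step 1: kinematic viscosity from the momentum residual (viscosity vs. gas drag).
    def momentum_cost(nu):
        residual = nu * d2vx - nu_gas * vbar_x
        return np.sum(residual[central] ** 2)
    nu_best = min(nu_grid, key=momentum_cost)

    # Step 2: thermal diffusivity from the energy residual, using nu_best
    # (viscous heating + conduction vs. gas dissipation).
    def energy_cost(chi):
        residual = nu_best * dvx**2 + chi * d2T - nu_gas * v2_mean
        return np.sum(residual[central] ** 2)
    chi_best = min(chi_grid, key=energy_cost)

    return nu_best, chi_best
```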
( a )profiles of three terms in eq .( [ energy_dpg ] ) , assuming the transport coefficients , and , obtained above . for the central region without laser manipulation ( b ) , thermal conduction is one order magnitude larger than the viscous heating .this is our second chief result : a spatially - resolved comparison of the different mechanisms for energy transfer in our 2d dust layer .the thin curve in ( a ) is the flow velocity profile ; its scale is shown in fig .3(c ) . ]our experiment allows us to obtain spatially - resolved quantitative measurements of three heat transfer effects : viscous heating , thermal conduction , and dissipation due to gas friction ( i.e. , cooling ) .these three effects appear as the three terms on the left - hand - side of eq .( [ energy_dpg ] ) . as the second chief result of this paper ,we plot these three terms , presented as spatial profiles , in fig .8 . examining the spatial profiles for these terms , in fig . 8, we see the most prominent features are two large peaks for the gas dissipation term where the flow velocity is fastest ; in this region the energy dissipation due to gas friction reaches its maximum . despite the prominence of these features , however , they are not what interest us here . instead , we are more interested in the regions of high shear , near the edge of the laser manipulation . recall that in these high shear regions , the flow velocity gradient is largest , and there are peaks in the profiles of the kinetic temperature and the second derivative of the flow velocity , figs . 3 and 5 . in fig . 8, our spatially - resolved profiles reveal that viscous heating and thermal conduction terms are peaked in regions of high shear .the viscous heating term , eq .( [ viscous_heat ] ) , is always positive , meaning that viscous dissipation is always a source of heat wherever it occurs .the thermal conduction term partial , on the other hand , can be either positive or negative , indicating that heat is conducted toward or away from the point of interest , respectively . in locations where the shear is strongest , for example at ,the thermal conduction term is negative indicating that heat is conducted away from that point .although viscous heating has great importance in all kinds of fluids , and it has been understood theoretically for a very long time , a spatially - resolved measurement of it is uncommon . in most physical systemsviscous heating is usually hard to measure either because the temperature increase is overwhelmingly suppressed by rapid thermal conduction , as we discussed in , or because the thinness of the shear layer does not allow convenient _ in - situ _ temperature measurements .most experimental observations of temperature increases due to viscous heating are either external or global measurements , and not spatially - resolved measurements like those that we report here . indeed , in our literature search , we found no previous spatially - resolved experimental measurements of the viscous heating term , not only for dusty plasma , but also for any other physical system . as we explained in , our ability to detect strong effects of viscous heating is due to the extreme properties of dusty plasma , as compared to other substances .our ability to make spatially - resolved measurements is due to our use of video imaging of particle motion .( color online ) . 
profiles of ( a ) the ratio of the viscous heating and thermal conduction terms in eq .( [ energy_dpg ] ) , and ( b ) the absolute value of this ratio .the sign of determines the sign of the ratio in ( a ) .a positive ratio indicates that heat is conducted toward the position of interest .the large values of the ratio in ( b ) at high shear regions indicate significant viscous heating at those locations .for comparison , in we found a brinkman number br = 0.5 ( indicated by the arrow ) , which is a global measure of the flow that provides less detailed information than the spatially resolved ratio shown here .the thin curve in ( b ) is the flow velocity profile ; its scale is shown in fig .3(c ) . ]to further analyze the second chief result of this paper , the spatial profiles of the viscous heating and thermal conduction terms in fig .8(a ) , we plot the ratio of these two terms in fig. 9(a ) .this ratio has its largest positive and negative values in the regions of high shear . in fig .9(a ) , negative values of this ratio are observed to occur in the high shear regions , which indicates that heat is conducted away from these regions .this result is consistent with observation of kinetic temperature peaks here . to characterize the magnitude of these two terms , we plot in fig .9(b ) the absolute value of this ratio .we can see that , within regions of high shear , this ratio can be as large as unity , or even larger .a typical value of this ratio in the shear region is of order 0.5 for our experiment .this matches the value of the brinkman number , br = 0.5 that we found in for the same experiment .the brinkman number is a global measure of the viscous heating , in competition with thermal conduction .in summary , we reported further details of the laser - driven flow experiment in a dusty plasma that was first reported in .we simplified the momentum and energy continuity equations , exploiting the symmetry and steady conditions of the experiment .we developed a method to obtain transport coefficients by minimizing the residuals of continuity equations using the input of experimental data .as our first chief result , we use this method to simultaneously determine , from the same experiment , two transport coefficients : kinematic viscosity and thermal diffusivity , which are based on viscosity and thermal conductivity , respectively . as our second chief result , we obtained spatially - resolved measurements of various terms in the energy equation .we found that , in a laser - driven dusty plasma flow , viscous heating is significant in regions with high shear , which is consistent with the interpretation of that the peaks in the temperature profile are due to viscous heating .a. melzer and j. goree , in _ low temperature plasmas : fundamentals , technologies and techniques _ , 2nd ed . , edited by r. hippler , h. kersten , m. schmidt , and k. h. schoenbach ( wiley - vch , weinheim , 2008 ) , p. 129 .the external forces can add or remove energy from the collection dust particles , as expressed by , which is the final term of the energy equation , eq .( [ energy ] ) . besides heating from laser manipulation and cooling from the gas friction as discussed in the text , we can mention two other contributions to for our experiment : heating from random kicks from collisions with the gas atoms and heating by fluctuating electric fields in the plasma . 
in the absence of any laser manipulation , the other three terms have their background levels which are in balance , so that their total is zero . in the presence of laser manipulation , even outside the region where the laser beams strike the dust particles , there are flows that result in a cooling by gas friction that is enhanced above its background level . when using the energy equation with data from our experiment with laser manipulation , we ignore two heating effects , gas atom collisions and fluctuating electric fields . using data from an experimental run without laser manipulation , we estimated those effects , so that we could report the range of values for in the text .our laser manipulation method introduces anisotropy in the particle velocities . since our laser manipulation drives a flow only in the directions , .similarly , the kinetic energy is mostly due to the flow motion in the direction , i.e. , , as shown in fig .4(a - b ) .however , the kinetic temperature due to the velocity fluctuations in the direction is almost the same as for fluctuations in the direction . | a shear flow of particles in a laser - driven two - dimensional ( 2d ) dusty plasma are observed in a further study of viscous heating and thermal conduction . video imaging and particle tracking yields particle velocity data , which we convert into continuum data , presented as three spatial profiles : mean particle velocity ( i.e. , flow velocity ) , mean - square particle velocity , and mean - square fluctuations of particle velocity . these profiles and their derivatives allow a spatially - resolved determination of each term in the energy and momentum continuity equations , which we use for two purposes . first , by balancing these terms so that their sum ( i.e. , residual ) is minimized while varying viscosity and thermal conductivity as free parameters , we simultaneously obtain values for and in the same experiment . second , by comparing the viscous heating and thermal conduction terms , we obtain a spatially - resolved characterization of the viscous heating . = 1 |