text
stringlengths 1
6.27k
| id
int64 0
3.07M
| raw_id
stringlengths 2
9
| shard_id
int64 0
0
| num_shards
int64 16
16
|
---|---|---|---|---|
Hamiltonian (2.9) is not separable and the system is nonintegrable. 9 On the other hand, for a point particle all constant-curvature black holes have a full set of integrals of motion leading to the integrability of geodesics: for the sphere, the additional integrals (besides E) are L 2 and L z from SO(3), and for the pseudosphere these are K 2 and K z from SO(2, 1). For the planar black hole we obviously have P x,y , the momenta, as the integrals of motion. Of course, if we consider compactified surfaces, the symmetries become discrete and do not yield integrals of motion anymore. Therefore, truly topological black holes are in general nonintegrable even for geodesics. 10 8 In this and the next section we put α = 1/π, as we only consider classical equations of motion, which are independent of α . In section 4, when calculating the quantities of the dual gauge theory, we restore α as it is related to the 't Hooft coupling, a physical quantity. 9 One can prove within Picard-Vessiot theory that no canonical transformation exists that would yield a separable Hamiltonian, so the system is nonintegrable. We will not derive the proof here, as it is not very instructive; the nonintegrability of the spherical case was already proven in [26,29], and the existence of nonzero Lyapunov exponents will de facto prove the nonintegrability for the other cases. One extra caveat is in order for the planar case. For k = 0 and sinkΦ = Φ, the Hamiltonian is still | 300 | 145856149 | 0 | 16 |
not separable, and dynamics is nonintegrable. One could change variables in the metric (2.1) as (φ1, φ2) → (φ 1 = φ1 cos φ2, φ 2 = φ1 sin φ2), and the string with the wrapping Φ 2 = nσ would provide an integrable system, with the separable Hamiltonian H = f But that is a different system from (2.9): even though a change of variables is clearly of no physical significance, the wrapping Φ 2 = nσ is physically different from Φ2 = nσ. Integrability clearly depends on the specific string configuration. 10 For special, fine-tuned topologies and parameters, one finds integrable cases (even for string motion!) but these are special and fine-tuned; we will consider these cases elsewhere as they seem peripheral for our main story on the chaos bound. Fixed points and near-horizon dynamics For a better overall understanding of chaos in string motion, let us sketch the general trends in dynamics first. For spherical black holes, this job was largely done in [26,29,44] and for similar geometries also in [27,28]. We will emphasize mainly the properties of near-horizon dynamics that we find important for the main story. Typical situation can be grasped from figure 1, where the Poincare sections of orbits starting near the horizon are shown for increasing temperatures of the horizon, as well as figures 2 and 3 where we show typical orbits in the x − y plane for different temperatures and initial conditions. 1. Higher temperatures generally increase chaos, with lower and lower numbers of periodic orbits (continuous | 301 | 145856149 | 0 | 16 |
lines in the Poincare section in figure 1) and increasing areas covered with chaotic (area-filling) orbits. This is also obvious from the figure 2. 2. Orbits closer to the horizon are more chaotic than those further away; this will be quantified by the analysis of the Lyapunov exponents. This is logical, since the equations of motion for strings in pure AdS space are integrable, and far away from the horizon the spacetime probed by the string becomes closer and closer to pure AdS. An example of this behavior is seen in figure 3(A). 3. The previous two trends justify the picture of the thermal horizon as the generator of chaos. However, for an extremal or near-extremal hyperbolic horizon there is a slight discrepancy -in this case, moving away from the horizon increases the chaos. In other words, there is yet another mechanism of chaos generation, independent of the temperature and not located precisely at the horizon, which is subleading and not very prominent, except when it is (almost) the only one, i.e., when the horizon is (near-)extremal. This is demonstrated in figure 3(B). When we come to the consideration of the Lyapunov exponents, we will identify the horizoninduced scrambling and the chaotic scattering as the chaos-inducing mechanisms at work for r → r h and for intermediate r, respectively. Consider now the radial motion from the Hamiltonian (2.9). Radial motion exhibits an effective attractive potential E 2 /2f which diverges at the horizon. The Φ-dependent terms Figure 2. Thermal horizon as the generator of chaos. We | 302 | 145856149 | 0 | 16 |
show the orbits in the vicinity of the spherical (A) and hyperbolic (B) horizon, at T = 0.01 (left) and T = 0.10 (right); obviously, hot horizons generate more chaos than cold ones. The light blue dot is the initial condition of the orbit (the position of the point on the string with Φ = 0 at τ = 0). Figure 3. Thermal horizon and hyperbolic scattering as generators of chaos. In (A) and (B), we show the orbits in the vicinity of the spherical and hyperbolic horizon, respectively, at the small temperature T = 0.01 and starting at increasing distances from the horizon. In (A), the further from the horizon, the more regular the orbit becomes. But in the hyperbolic geometry (B), the thermally-generated chaos is negligible; instead, the orbit becomes chaotic as it explores larger and larger area of the hyperbolic manifold. Hence for hyperbolic horizons, an additional, non-thermal generator of chaos exists: it is the hyperbolic scattering. Light blue dots are again the initial positions of the string origin (Φ = 0). JHEP12(2019)150 proportional to R 2 and 1/R 2 are repulsive and balance out the gravitational attraction to some extent but they remain finite for all distances. For R large, the repulsion proportional to n 2 dominates so for large enough distances the string will escape to infinity. For intermediate distances more complex behavior is possible: the string might escape after some number of bounces from the black hole, or it might escape after completing some (nonperiodic, in general) orbits around the | 303 | 145856149 | 0 | 16 |
black hole. The phase space has invariant planes given by (R, P R , Φ, P Φ ) = (R 0 + Eτ, E/f 0 , N π, 0), with R 0 = const. and f 0 ≡ f (R 0 ) and N an integer. It is easy to verify this solution by first plugging inΦ = 0 into (2.8) to find Φ; eq. (2.7) and the constraint (2.9) then reduce to one and the same conditionṘ 2 = E 2 . We discard the solution with the minus sign (with R = R 0 − Eτ ) as R is bounded from below. Pictorially, this solution means that a string with a certain orientation just moves uniformly toward the black hole and falls in, or escapes to infinity at uniform speed, all the while keeping the same orientation. Besides, there is a trivial fixed point at infinity, (R, P R , Φ, P Φ ) = (∞, 0, N π, 0), found also in [26,29]. We are particularly interested if a string can hover at a fixed radial slice R = r 0 = const.. Let us start from the spherical case. Inserting R = r 0 ,Ṙ = 0 into eq. (2.8) leads to the solution in terms of the incomplete Jacobi sine integral sn (Jacobi elliptic function of the first kind, Jacobi E-function), and two integration constants to be determined. The other equation, (2.7), is a first-order relation for Φ acting as a constraint. Solving it gives a Jacobi elliptic function again, | 304 | 145856149 | 0 | 16 |
with one undetermined constant, and we can match the constants to obtain a consistent solution: The value of r 0 is found from the need to satisfy also the Hamiltonian constraint. The constraint produces a Jacobi elliptic function with a different argument, and the matching to (2.10) reads 2f (r 0 ) + r 0 f (r 0 ) = 0. (2.11) This turns out to be a cubic equation independent of the black hole charge, as the terms proportional to q cancel out. It has one real solution, which is never above the horizon. The solution approaches the horizon as f (r 0 ), approaches zero, and r = r h is obviously a solution of (2.11) for f (r h ) = 0. However, the r → r h limit is subtle in the coordinates we use because some terms in equations of motion diverge, so we need to plug in f (r) = 0 from the beginning. Eqs. (2.6), (2.8) then implyṘ = E, i.e., there is no solution at constant R except for E = 0. This is simply because the energy is infinitely red-shifted at the horizon, i.e., E scales with f (eq. (2.6)), thus indeed unlessṪ → ∞, which is unphysical, we need E = 0. Now solving eq. (2.7) gives the same solution as before, of the form sn(C 1 τ, C 2 ), with undetermined constants C 1,2 , which are chosen so as to establish continuity with the solution (2.10). For an extremal horizon of the from | 305 | 145856149 | 0 | 16 |
f ∼ a(r − r 2 h ) ≡ a 2 , a smooth and finite limit is obtained by rescaling E → E 2 . Now expanding the sn function in produces simply a linear function at first order in : Therefore, a string can hover at the extremal horizon, at strict zero temperature, when its motion (angular rotation) becomes a simple linear winding with a single frequency. Such an orbit is expected to be linearly stable, and in the next section we show it is also stable according to Lyapunov and thus has zero Lyapunov exponent. Finally, from (2.7) and (2.11) the radial velocityṘ in the vicinity of a non-extremal horizon behaves as: meaning thatṘ grows quadratically as the distance from the horizon increases. This will allow us to consider near-horizon dynamics at not very high temperatures as happening at nearly constant radius: the string only slowly runs away. For a hyperbolic horizon the calculation is similar, changing sin → sinh in the solution (2.10). The constraint (2.11) is also unchanged (save for the sign of k in the redshift function), and the final conclusion is the same: the string can only balance at the zero temperature horizon (but now such a horizon need not be charged, as we mentioned previously). The zero temperature limit is the same linear function (2.12). For a planar horizon things are different. ForṘ = 0, we get simply harmonic motion Φ = C 1 cos nτ + C 2 sin nτ , which is consistent with the constraint | 306 | 145856149 | 0 | 16 |
H = 0. But eq. (2.7) implies exponential motion instead, D 1 sinh nτ + D 2 cosh nτ . Obviously, there is no way to make these two forms consistent. Accordingly, no hovering on the horizon (nor at any other fixed radial slice) is possible for a planar black hole. But the same logic that lead to (2.13) now predicts oscillating behavior: (2.14) Therefore, even though there are no orbits at all which stay at exactly constant R, we now have orbits which oscillate in the vicinity of the horizon forever. Averaging over long times now again allows us to talk of a string that probes some definite local temperature, determined by the average distance from the horizon. The point of this (perhaps tedious and boring) qualitative analysis of possible orbits is the following. No orbits at fixed distance from the horizon are possible, but at low temperatures a string that starts near the horizon will spend a long time in the nearhorizon area. Therefore, we can study the influence of the low-temperature horizon as the main chaos-generating mechanism by expanding the variational equations for the Lyapunov exponents in the vicinity of the horizon, This we shall do in the next section. Lyapunov exponents and the bound on chaos In general, Lyapunov exponents are defined as the coefficients λ of the asymptotic exponential divergence of initially close orbits; in other words, of the variation δX of a coordinate X: and the variation is expected to behave as δX ∼ δX(0) exp(λt) for t large and | 307 | 145856149 | 0 | 16 |
δX(0) small enough in practice. This definition makes sense for classical systems; in quantum -10 - JHEP12(2019)150 mechanics, the linearity of the state vector evolution guarantees zero exponent but the intuition that initially small perturbations eventually grow large in a strongly coupled system remains when we look at appropriately defined correlation functions, like the OTOC used in [1]. We should first make the following point clear. In a classical nonlinear system, the presence of deterministic chaos leads to positive Lyapunov exponents even in absence of temperature or noise. Quantum mechanically, as we explained, the linearity of evolution means that exponential divergence is only possible in a thermal state, and this situation leads to the temperature bound on the Lyapunov exponents. This is easy to see upon restoring dimensionful constants, when the bound from [1] takes the form λ ≤ 2πk B T / , and indeed in a classical system where → 0 no bound exists. In the context of our work, which effectively reduces to the classical Hamiltonian (2.9) which has no gravitational degrees of freedom, it is not a priori clear if one should expect any connection to the bound on chaos: instead of a QFT correlation function or its gravity dual, we have classical dynamics, and the Hawking temperature of the black hole is not the local temperature probed by the string. But we will soon see that analytical and numerical estimates of λ nevertheless have a form similar to the chaos bound of [1]. Before we proceed one final clarification is in | 308 | 145856149 | 0 | 16 |
order. One might worry that the Lyapunov exponents are gauge-dependent, as we consider equations of motion in terms of the worldsheet coordinate τ , and for different worldsheet coordinates the variational equations would be manifestly different; in other words, the definition (3.1) depends on the choice of the time coordinate (denoted schematically by t in (3.1)). Indeed, the value of λ clearly changes with coordinate transformations, however it has been proven that the positivity of the largest exponent (the indicator of chaos) is gauge-invariant; the proof was derived for classical general relativity [45] and carries over directly to the worldsheet coordinate transformations. This is all we need, because we will eventually express the τ -exponent in terms of proper time for an inertial observer, making use of the relationṪ = −E/f . This could fail if a coordinate change could translate an exponential solution into an oscillating one (because then λ drops to zero and it does not make sense to re-express it units of proper time); but since we know that cannot happen we are safe. Thermal horizon Consider first a thermal black hole horizon at temperature T , with the redshift function behaving as with on-shell solutions R(τ ), Φ(τ ). This system looks hopeless, but it is not hard to extract the leading terms near the horizon which, as we explained, makes sense at low temperatures. JHEP12(2019)150 Therefore, we start from the solutions (2.10), (2.12), (2.14), adding a small correction (r 0 , Φ (τ )) → (r 0 + ∆R (τ ) | 309 | 145856149 | 0 | 16 |
, Φ(τ ) + ∆Φ (τ )). Then we expand in inverse powers of r 0 − r h , and express the angular combinationsΦ 2 ±sink 2 Φ making use of the constraint (2.9). When the dust settles, the leading-order equations simplify to: where C = C(k, E) is a subleading (at low temperature) correction whose form differs for spherical, planar and hyperbolic horizons. From the above we read off that angular motion has zero Lyapunov exponent (the variational equation is oscillatory, because cosk 2 (2Φ) ≥ 0) but the radial component has an exponent scaling as Now we have calculated the Lyapunov exponent in worldsheet time τ . The gauge-invariant quantity, natural also within the black hole scrambling paradigm, is the proper Lyapunov exponent λ, so that 1/λ is the proper Lyapunov time for an asymptotic observer. To relatẽ λ to λ, we remember first that the Poincare time t is related to the worldsheet time τ through (2.6) as |dt| ∼ E/f × dτ . Then we obtain the proper time as t p = t √ −g 00 = t √ f , where near the thermal horizon we can write f ≈ 4πT (r − r h ). This gives 11 At leading order, we get the estimate 2πT n, with the winding number n acting as correction to the original bound. Away from the horizon At intermediate radii we can do a similar linear stability analysis starting from f ∼ r 2 + k + A/r where A is computed | 310 | 145856149 | 0 | 16 |
by series expansion (with just the AdS term r 2 + k in f , without the leading black hole contribution A/r, we would trivially have integral motion and zero λ; but this approximation applies at large, not at intermediate distances). In this case the equations of motion yield R ∼ τ √ E 2 − 1, and the variational equations, after some algebra, take the form One can show again that δΦ is always bounded in absolute value, thus the third term determines the Lyapunov exponent. The exponent vanishes for k > −1/3 (because the equations becomes oscillatory) and for k ≤ −1/3 we get λ ∼ −(3k + 1)E. (3.9) 11 We introduce the notation ≡ r − r h . JHEP12(2019)150 Since the curvature only takes the values −1, 0, 1, the prediction (3.9) always holds for hyperbolic horizons. Notice that this same term (the third term in (3.8)) appears as subleading in the near-horizon expansion, so we can identify it with C(k, E) and write (3.7) as λ(T ) ∼ 2πT n (1 − |(3k + 1)E|/ (φ 0 n)). This holds for any k, and we see that C ≤ 0; thus the bound is only approached from below as it should be. In absence of negative curvature, i.e., for k > 0, we have vanishing C at leading order in 1/R but subleading contributions still exist, so both the slight non-saturation of the limit 2πT n near-horizon (for small ) and a parametrically small non-zero Lyapunov exponent at intermediate distances | 311 | 145856149 | 0 | 16 |
will likely appear, which we see also in the numerics. That the motion is chaotic on a pseudosphere (negative curvature) is of course no surprise; it is long known that both particles and waves have chaotic scattering dynamics on pseudospheres [46]. We dub this contribution the scattering contribution to the Lyapunov exponent, as opposed to the scrambling contribution. It is largely independent of temperature and largely determined by the geometry of the spacetime away from the horizon. Extremal horizon For an extremal horizon we replace f by f ∼ a(r − r h ) 2 = a 2 , and plug in this form into the variational equations. Now the result is (for concreteness, for the spherical horizon) leading to a vanishing exponent value: Obviously, this also means λ = 0 -there is no chaos at the extremal horizon. This is despite the fact that the string motion in this case is still nonintegrable, which is seen from the fact that no new symmetries or integrals of motion arise in the Hamiltonian in this case. The horizon scrambling is proportional to temperature and does not happen at T = 0, but the system is still nonintegrable and the chaos from other (scattering) origins is still present. In particular, the estimate (3.8)-(3.9) remains unchanged. The estimates (3.7), (3.9), (3.12) are the central sharp results of the paper. We can understand the following physics from them: 1. At leading order, we reproduce (and saturate) the factor 2πT of the Maldacena-Shenker-Stanford bound, despite considering classical dynamics only. 2. The | 312 | 145856149 | 0 | 16 |
bound is however multiplied by the winding number n of the ring string. The spirit of the bound is thus preserved but an extra factor -the winding numberenters the story. 3. Taking into account also the scattering chaos described by (3.9), the results are in striking accordance with the idea of [2]: there are two contributions to chaos, one -13 - JHEP12(2019)150 proportional to the black hole temperature and solely determined by the scrambling on the horizon, with the universal factor 2πT expected from the concept of black holes as the fastest scramblers in nature, and another determined by the (slower) propagation of signals from the horizon toward the AdS boundary, which we call the scattering term, as it is determined also by dynamics at large distances. 4. For a particle (n = 0), we correctly get λ = 0, as the geodesics are integrable. 5. The temperature appearing in (3.7) is always the Hawking temperature of the black hole T . In the next section, when we consider the AdS/CFT interpretation, we will try to shed some more light on where the modification of the bound 2πT → 2πT n comes from. Lyapunov time versus event time In the above derivations we have left one point unfinished. We have essentially assumed that R(τ ) ≈ const. = r and treated the difference = r − r h as a fixed small parameter. This is only justified if the local Lyapunov time 1/λ is much shorter than the time to escape far away from r h | 313 | 145856149 | 0 | 16 |
and the horizon, or to fall into the black hole. In other words, it is assumed that the Lyapunov time is much shorter than the "lifetime" of the string (let us call it event time t E ). Now we will show that this is indeed so. For the spherical black hole, upon averaging over the angle Φ, we are left with a one-dimensional systeṁ which predicts the event time as (3.14) In other words, the event times are roughly by a factor 1/ longer than Lyapunov times, therefore our estimate for λ should be valid. In (3.14), we have considered both the infalling orbits ending at r h , and the escaping orbits going to infinity (for the latter, we really integrate to some r ∞ > r 0 and then expand over 1/r ∞ ). An orbit will be infalling or escaping depending on the sign of the combination under the square root, and to leading order both cases yield a time independent of r 0 (and the cutoff r ∞ for the escaping case). The hyperbolic case works exactly the same way, and in the planar case since R(τ ) oscillates the event time is even longer (as there is no uniform inward or outward motion). For extremal horizons, there is no issue either as r = r h is now the fixed point. Dimensionful constants One might wonder what happens when dimensionful constants are restored in our results for the Lyapunov exponents like (3.7) or (3.9): the original chaos bound really states | 314 | 145856149 | 0 | 16 |
λ ≤ 2πk B T / , and we have no in our system so far. The resolution is simple: the role of is played by the inverse string tension 2πα , which is obvious from the standard form JHEP12(2019)150 of the string action (2.3); the classical string dynamics is obtained for α → 0. Therefore, the dimensionful bound on chaos for our system reads λ = 2πk B T n/2πα = k B T n/α . Another way to see that α takes over the role of in the field-theory derivation [1] is that the weight in computing the correlation functions for a quantum field is given by the factor exp −1/ L , whereas for a string the amplitudes are computed with the weight exp −1/2πα L . In the next section, we will also look for the interpretation in the framework of dual field theory. In this context, α is related to the number of degrees of freedom in the gauge dual of the string, just like the Newton's constant G N is related to the square of the number of colors N 2 in the gauge dual of a pure gravity theory. But the issues of gauge/string correspondence deserve more attention and we treat them in detail in section 4. Numerical checks We will now inspect the results (3.7), (3.9), (3.12) numerically. Figure 4 tests the basic prediction for the horizon scrambling, λ ≈ 2πT n at low temperatures: both the n-dependence at fixed temperature (A), and the T -dependence at fixed | 315 | 145856149 | 0 | 16 |
n (B) are consistent with the analytical prediction. All calculations were done for the initial conditionṘ(0) = 0, and with energy E chosen to ensure a long period of hovering near the horizon. The temperatures are low enough that the scattering contribution is almost negligible. In figure 5 we look at the scattering term in more detail. First we demonstrate that at zero temperature, the orbits in non-hyperbolic geometries are regular (A): the scattering term vanishes at leading order, and the scrambling vanishes at T = 0. In the (B) panel, scattering in hyperbolic space at intermediate radial distances gives rise to chaos which is independent of the winding number, in accordance to (3.7). To further confirm the logic of (3.7), one can look also at the radial dependence of the Lyapunov exponent: at zero temperature, there is no chaos near-horizon (scrambling is proportional to T and thus equals zero; scattering only occurs at finite r − r h ), scattering yields a nonzero λ at intermediate distances and the approach to pure AdS at still larger distances brings it to zero again; at finite temperature, we start from λ = 2πT n near-horizon, observe a growth due to scattering and fall to zero approaching pure AdS. Dual gauge theory interpretation The ring string wrapped along the σ coordinate is a very intuitive geometry from the viewpoint of bulk dynamics. However it has no obvious interpretation in terms of the gauge/gravity duality, and the Hamiltonian (2.9) itself, while simple-looking, is rather featureless at first glance: essentially | 316 | 145856149 | 0 | 16 |
a forced nonlinear oscillator, it does not ring a bell on why to expect the systematic modification of the Maldacena-Shenker-Stanford bound and what the factor n means. Thus it makes sense to do two simple exercises: first, to estimate the energy and spin of the operators corresponding to (2.5) to understand if it has to do with some Regge trajectory; second, to consider some other string configurations, with a more straightforward connection to the operators in gauge theory. Of course, finite temperature horizons are crucial for our work on chaos, and saying anything precise about the gauge theory dual of a string in the black hole background is extremely difficult; we will only build some qualitative intuition on what our chaotic strings do in field theory, with no rigorous results at all. Let us note in passing that the ring string configurations considered so far are almost insensitive to spacetime dimension. Even if we uplift from the four-dimensional spacetime described by (t, r, φ 1 , φ 2 ) to a higher-dimensional spacetime (t, r, φ 1 , φ 2 , . . . φ N −2 ), with the horizon being an N − 2-dimensional sphere/plane/pseudosphere, the form of the equations of motion does not change if we keep the same ring configuration, with Φ 1 = Φ 1 (τ, σ), Φ 2 = nσ, Φ 3 = const., . . . Φ N −2 = const. -this is a solution of the same eqs. (2.6)-(2.8) with the same constraint (2.9). The difference lies in | 317 | 145856149 | 0 | 16 |
the redshift function f (r) which depends on dimensionality. This, however, does not change the main story. We can redo the calculation of the radial fixed point from the second section, to find a similar result -a string can oscillate or run away/fall slowly in the vicinity of a horizon, and the variational equations yield the same result for the Lyapunov exponent as before. It is really different embeddings, i.e., different Polyakov actions, that might yield different results. Operators dual to a ring string? We largely follow the strategy of [47] in calculating the energy and the spin of the string and relating them to the dual Yang-Mills theory. In fact, the ring string is quite close to what the authors of [47] call the oscillating string, except that we allow one more angle to fluctuate independently (thus making the system nonintegrable) and, less crucially, that in [47] only the winding number n = 1 is considered. Starting from the action for the ring string (2.3), we write down the expressions for energy and momentum: where the second worldsheet integral gives simply dσ = 2π as R, Φ do not depend on σ, and we have expressed dτ = dΦ/Φ; finally, the canonical momentum is conserved, P T = E, JHEP12(2019)150 and in the expression for the spin we need to invert the solution Φ(τ ) into τ (Φ) in order to obtain the function R(Φ). We are forced to approximate the integrals. ExpressingΦ from the Hamiltonian constraint (2.4), we can study the energy in two | 318 | 145856149 | 0 | 16 |
regimes: small amplitude φ 0 π which translates to E/T 1, and large amplitude φ 0 ∼ π, i.e., E/T ∼ 1. For these two extreme cases, we get: For the spin similar logic gives The bottom line is that in both extreme regimes (and then presumably also in the intermediate parameter range) we have E ∝ E/α n and S ∝ E 2 /α nT ; as before = r − r h and it should be understood as a physical IR cutoff (formally, for r → r h the spin at finite temperature diverges; but we know from section 2 that in fact no exact fixed point at constant r exists, and the average radial distance is always at some small but finite distance ). Therefore, we have E 2 ∝ S/α nT . The presence of temperature in the above calculation makes it hard to compare the slope to the familiar Regge trajectories. But in absence of the black hole, when f (r) = 1, we get E = 4E/α n, S = 8E 2 /α n ⇒ E 2 = 2S/α n. For n = 1, this is precisely the leading Regge trajectory. For higher n the slope changes, and we get a different trajectory. Therefore, the canonical Lyapunov exponent value λ = 2πT precisely corresponds to the leading Regge trajectory. We can tentatively conclude that the winding string at finite temperature describes complicated thermal mixing of largedimension operators of different dimensions and spins, and these might well be sufficiently nonlocal that | 319 | 145856149 | 0 | 16 |
the OTOC never factorizes and the bound from [1] does not apply. Planetoid string In this subsubsection we consider so-called planetoid string configurations, also studied in [47] in the zero-temperature global AdS spacetime and shown to reproduce the leading Regge trajectory in gauge theory. This is again a closed string in the same black hole background (2.1) but now the solution is of the form 12 The authors of [47] work mostly with the Nambu-Goto action but consider also the Polyakov formulation in the conformal gauge; we will stick to the Polyakov action from the beginning for notational uniformity with the previous section. For the same reason we keep the same coordinate system as in (2.1). JHEP12(2019)150 where the auxiliary field e is picked so as to satisfy the conformal gauge, and any additional coordinates Φ 3 , Φ 4 , . . . and Θ 1 , Θ 2 , . . . are fixed. The Lagrangian has the invariant submanifold Φ 1 = ωτ, Φ 2 = const. when the dynamics becomes effectively one-dimensional, the system is trivially integrable and, in absence of the black hole, it is possible to calculate exactly the energy and spin of the dual field theory operator. This is the integrable case studied in [47,48], and allowing Φ 2 to depend on σ seems to be the only meaningful generalization, because it leads to another submanifold of integrable dynamics with R = r 0 = const., Φ 2 = nσ and the pendulum solution for Φ 1 : where | 320 | 145856149 | 0 | 16 |
2 =Φ 1 2 − n 2 sin 2 Φ 1 is the adiabatic invariant on this submanifold. With two integrable submanifolds, a generic orbit will wander between them and exhibit chaos. The variational equations can be analyzed in a similar fashion as in the previous section. Here, the chaotic degree of freedom is Φ 1 (τ ), with the variational equation which in the near-horizon regime yields the Lyapunov exponent λ = 2πT n, (4.12) in the vicinity of the submanifold (4.10). In the vicinity of the other solution (Φ 1 = ωτ, Φ 2 = const.), we get λ = 0. Chaos only occurs in the vicinity of the winding string solution, and the winding number again jumps in front of the universal 2πT factor. Now let us see if this kind of string reproduces a Regge trajectory. In the presence of the black hole the calculation results in very complicated special functions, but we are only interested in the leading scaling behavior of the function E 2 (S). Repeating the calculations from (4.1)-(4.2), we first reproduce the results of [47] in the vicinity of the solution Φ 1 = ωτ : for short strings, we get E ∼ 2/ωT, S ∼ 2/ω 2 T 2 and thus E 2 ∝ 2S, precisely the result for the leading Regge trajectory. Now the Regge slope does not depend on the temperature (in the short string approximation!). This case, as we found, trivially satisfies the original chaos bound (λ = 0, hence for sure λ < | 321 | 145856149 | 0 | 16 |
2πT ). In the vicinity of the other solution, with R = r 0 , things are different. Energy has the following behavior: For the spin, the outcome is so in this case there is no Regge trajectory at all, i.e., no simple relation for E 2 (S) because the scale r 0 and the quantity show up in the E 2 (S) dependence even at zero temperature. In conclusion, the strings that can violate the chaos bound have a strange Regge behavior in the gauge/string duality, in this case in a more extreme way than for the ring strings (even for n = 1 no Regge trajectory is observed). The strings which have λ = 0 and thus trivially satisfy the bound on the other hand obey the leading Regge trajectory. The limits of quasiclassicality One more thing needs to be taken into account when considering the modification of the chaos bound. Following [8], one can suspect that the violating cases are not self-consistent in the sense that they belong to the deep quantum regime when semiclassical equations (in our case for the string) cease to be valid and quantum effects kill the chaos. For a ring string this seems not to be the case. To check the consistency of the semiclassical limit, consider the energy-time uncertainty relation ∆E∆t ≥ 1. The energy uncertainty is of the order of E/α n as we found in (4.3)-(4.4), and the time uncertainty is precisely of the order of the Lyapunov time 1/2πT n; the uncertainty relation then | 322 | 145856149 | 0 | 16 |
gives E ≥ 2πT n 2 α . On the other hand, we require that the spin S should be large in the classical regime: S 1. This implies E 2 4πT nα or, combining with the uncertainty relation, T n 3 α . Roughly speaking, we need to satisfy simultaneously T n 2 ≤ 1/α and T n 3 /α , which is perfectly possible: first, we need to have small enough α (compared to T n 2 ), as could be expected for the validity of the semiclassical regime; second, we need to have sufficiently large n/ 1, which can be true even for n = 1 for small , and for sure is satisfied for sufficiently large n even for ∼ 1. In conclusion, there is a large window when the dynamics is well-described by the classical equations (and this window even grows when n 1 and the violation of the chaos bound grows). Ring string scattering amplitude and the relation to OTOC So far our efforts to establish a field theory interpretation of a ring string in black hole background have not been very conclusive, which is not a surprise knowing how hard it is in general to establish a gauge/string correspondence in finite-temperature backgrounds and for complicated string geometries. Now we will try a more roundabout route and follow the logic of [4][5][6], constructing a gravity dual of the OTOC correlation function, which has a direct interpretation in field theory; it defines the correlation decay rate and the scrambling time of | 323 | 145856149 | 0 | 16 |
some boundary operator. In [17] this formalism was already applied to study the OTOC of field theory operators (heavy quarks) dual to an open string in BTZ black hole background, hanging from infinity to infinity through the horizon in eikonal approximation. That case has a clear interpretation: the endpoints of the string describe the Brownian motion of a heavy quark in a heath bath. As we already admitted, we do not have such a clear view of what our case means in field theory, but we can still construct the out-of-time ordered correlator corresponding to whatever complicated boundary operator our string describes. We will be delibarately sketchy in describing the basic framework of the calculation as it is already given in great detail in [4][5][6]. The idea is to look at the correlation func- of some operators V, W at finite temperature (hence the expectation value (. . .) includes both quantum-mechanical and thermal ensemble averaging). The time moments need not be ordered; we are often interested in the case t 1 = t 3 ≡ 0, t 2 = t 4 = t. 13 This correlation function corresponds to the scattering amplitude between the in and out states of a perturbation sourced from the boundary. The propagation of the perturbation is described by the bulk-to-boundary propagators K. The perturbation has the highest energy at the horizon since the propagation in Schwarzschild time becomes a boost in Kruskal coordinates, and the pertubation, however small at the boundary, is boosted to high energy in the vicinity of | 324 | 145856149 | 0 | 16 |
the horizon. In the Kruskal coordinates defined the usual way: the scattering amplitude becomes (4.18) The propagators are expressed in terms of the Kruskal momenta p i = (p U i , p V i ) and the coordinates x i = (x 1 i , x 2 i ) in the transverse directions. The in-state is defined by (p U 3 , x 3 ) at U = 0, and by (p V 4 , x 4 ) at V = 0, and analogously for the out-state. The form of the propagators is only known in the closed form for a BTZ black hole (in 2+1 dimensions), but we are happy enough with the asymptotic form near the horizon. For simplicity, consider a scalar probe of zero bulk mass, i.e., the conformal dimension ∆ = D, and at zero black hole charge, i.e., for a Schwarzschild black hole. The propagator then behaves as (ω ≡ ω/4πT ): The task is thus to calculate the amplitude (4.18) with the propagators (4.19). In the eikonal approximation used in most of the literature so far, the problem boils down to evaluating the classical action at the solution. However, it is not trivial to justify the eikonal approximation for a ring string. Let us first suppose that the eikonal aproximation works and then we will see how things change if it doesn't. Eikonal approximation If the energy in the local frame near the horizon is high enough, then we have approximately and for a short enough scattering event (again | 325 | 145856149 | 0 | 16 |
satisfied if the energy and thus the velocity is high enough) the coordinates are also roughly conserved, therefore the amplitude out|in is diagonal and can be written as a phase shift exp(ıδ). The point of the eikonal approximation is that the JHEP12(2019)150 shift δ equals the classical action. The action of the ring configuration is (4.20) We will consider again the string falling slowly in the vicinity of the horizon (see eqs. (2.10)-(2.14)) and putṘ → 0, R(τ ) ≈ r 0 , r 0 − r h r h . Now we need to pass to the Kruskal coordinates and then introduce the new variables T = (V + U )/2, X = (V − U )/2. In these coordinates the near-horizon geometry in the first approximation is Minkowskian and we can easily expand around it as required for the eikonal approximation. The action and the energy (to quartic order in the fluctuations) are now so that, as the perturbation dies out, the string approaches the locus T 0 = 0 ⇒ U = −V ⇒ t → ∞. Inserting (4.23) into (4.21), we obtain, after regularizing the action: Therefore, S (0) = E (0) 2 × nr h α /4 (where we have plugged in r 0 ≈ r h ): the action is proportional to the square of energy, which equals E 2 = pq in the center-of-mass frame. This is perfectly in line with the fast scrambling hypothesis. Plugging in δ = S (0) into the amplitude in (4.18) and rescaling we | 326 | 145856149 | 0 | 16 |
obtain: with N ω containing the first two factors in (4.19) which only depend on ω and T . Introducing the change of variables p = Q sin γ, q = Q cos γ, we can reduce (4.28) to an exponential integral. With the usual contour choice for OTOC t i = − i , t 1 = t 3 = 0, t 2 = t 4 = t, we end up at leading order with Therefore, the Lyapunov time as defined by the OTOC in field theory precisely saturates the predicted bound 2πT , and in the eikonal approximation is not influenced by the winding number n. On the other hand, the scrambling time t * is multipled by a factor of log(1/α n) (the horizon radius can be rescaled to an arbitrary value by rescaling the AdS radius, thus we can ignore the factor of r h ). The factor 1/α appears also in [17] and plays the role of a large parameter, analogous to the large N 2 factor in large-N field theories: the entropy of the string (the number of degrees of freedom to be scrambled) certainly grows with 1/α . For a ring string, this factor is however divided by n, as the number of excitations is reduced by the implementation of the periodic winding boundary condition. Therefore, the winding of the ring string indeed speeds up the chaotic diffusion, by speeding up the scrambling. However, the faster scrambling is not seen in the timescale of local divergence which, unlike the | 327 | 145856149 | 0 | 16 |
classical Lyapunov exponent, remains equal to 2πT ; it is only seen in the timescale on which the perturbation permeates the whole system. In conclusion, the violation of the Maldacena-Shenker-Stanford limit for the bulk Lyapunov exponent in AdS space in the eikonal approximation likely corresponds to a decrease of scrambling time in dual field theory, originating from reduction in the number of degrees of freedom. Beyond the eikonal approximation: waves on the string What is the reason to worry? Even if the scattering is still elastic and happens at high energies and momenta (therefore the overlap of the initial and final state is diagonal in the momenta), it might not be diagonal in the coordinates if the string ocillations are excited during the scattering. These excitations might be relevant for the outcome. 14 However, the quantum mechanics of the string in a non-stationary background is no easy matter and we plan to address it in a separate work. In short, one should write the amplitude (4.18) in the worldsheet theory and then evaluate it in a controlled diagrammatic expansion. For the black hole scrambling scenario, the leading-order stringy corrections are considered in [6]; the Regge (flat-space) limit is the pure gravity black hole scrambling with the Lyapunov exponent 2πT and scrambling time determined by the large N . We need to do the same for the string action (4.21) but, as we said, we can only give a rough sketch now. 14 With an open string hanging from the boundary to the horizon as in [17] | 328 | 145856149 | 0 | 16 |
this is not the case, since it stretches along the radial direction and the scattering event -which is mostly limited to near-horizon dynamics because this is where the energy is boosted to the highest values -remains confined to a small segment of the string, whereas any oscillations propagate from end to end. However, a ring string near the horizon is wholly in the near-horizon region all the time, and the string excitations may happilly propagate along it when the perturbation reaches the area U V ≈ 0. JHEP12(2019)150 The amplitude (4.18) is given by the worldsheet expectation value with the action (4.20), or (4.21) in the target-space coordinates (T, X) accommodated to the shock-wave perturbation. Here, we have introduced the usual complex worldsheet coordinates z = τ + ıσ,z = τ − ıσ. We thus need to compute a closed string scattering amplitude for the tachyon of the Virasoro-Shapiro type, but with nontrivial target-space metric and consequently with the vertex operators more complicated than the usual planewave form. This requires some drastic approximations. We must first expand the non-Gaussian functional integral over the fields T(z,z), X(z,z), Φ(z,z) perturbatively, and then we can follow [6] and [49] and use the operator-product expansion (OPE) to simplify the vertex operators and decouple the functional integral over the target-space coordinates from the worldsheet integration. First we can use the worldsheet reparametrization to fix as usual z 1 = ∞, z 2 = z, z 3 = 1, z 4 = 0. The most relevant regime is that of the highly | 329 | 145856149 | 0 | 16 |
boosted pertrubation near the horizon, with |z| ∼ 1/s. At leading order in the expansion over T, X, the action (4.21) decouples the Gaussian functional integral over the (T, X) coordinates from the pendulum dynamics of the Φ coordinate. We can just as easily use the (U, V ) dynamics, with 1/2(Ṫ 2 −Ẋ 2 ) → −2UV ; this is just a linear transformation and the functional integral remains Gaussian. The states in U and V coordinates are just the plane waves with p 1 = p 3 = p, p 2 = p 4 = q, but the Φ states are given by some nontrivial wavefunctions ψ(Φ). Alltogether we get where we denote by the index i = 1, 2, 3, 4 the coordinates depending on z i and the coordinates in the worldsheet action in the first line depend on z which is not explicitly written out to save space. The higher-order metric corrections in U and V give rise to the weak nonplane-wave dependence of the vertices on U and V , encapsulated in the functions g above. We will disregard them completely, in line with considering the decoupled approximation of the metric as written explicitly in the action in (4.31). The functional integral over U, V is easily performed but the Φ-integral is formidable. However, for small |z|, we can expand the ground state solution (2.10) in z,z, which corresponds to the linearized oscillatory behavior and the functional integral becomes Gaussian:Φ 2 + n 2 sin 2 Φ →Φ 2 + | 330 | 145856149 | 0 | 16 |
n 2 Φ 2 . With the effective potential for the tachyon V eff (Φ) = n 2 Φ 2 , the worldsheet propagator takes the form For the plane wave states we take the ansatz ψ(Φ) = e ı Φ , where = l−ıν, with l ∈ Z being the angular momentum and 0 < ν 1 the correction from the interactions (fortunately we will not need the value of ν). The worldhseet propagator for the flat (U, V ) coordinates -24 - JHEP12(2019)150 has the standard logarithmic form. Now we use the fact that 1/|z| ∼ s = pq to expand the vertices forŴ 2 andŴ 4 in OPE. The OPE reads :Ŵ 2Ŵ4 :∼ exp ıqz∂V 2 + ıqz∂V 2 exp ı z∂Φ 2 + ı z∂Φ 2 |z| which follow from the action of the Laplace operator on the state e ı Φ . This finally gives (4.34) The above integral results in a complicated ratio of the 1 F 1 hypergeometric functions and gamma functions. We still have three possible poles, as in the Virasoro-Shapiro amplitude. In the stringy regime at large pq, the dominant contribution must come from ∼ l = 0, for the other pole brings us back to the purely gravitational scattering, with S ∝ pq, whereby the local scrambling rate remains insensitive to n, as we have shown in the eikonal approximation. The stringy pole yields the momentum-integrated amplitude showing that the Lyapunov scale 2πT is modified (we again take r h = 1 for simplicity). We | 331 | 145856149 | 0 | 16 |
conclude that in the strong stringy regime the Lyapunov exponent in dual field theory behaves as 2π(1 + πα n 2 )T , differing from the expected chaos bound for nonzero winding numbers n. Thus, if the classical gravity eikonal approximation does not hold, the modification of the bulk Lyapunov exponent also has an effect on the OTOC decay rate in field theory. Once again, the above reasoning has several potential loopholes: (1) we completely disregard the higher-order terms in the metric, which couple that radial and transverse dynamics (2) we assume only small oscillations in Φ (3) we disregard the corrections to vertex operators (4) we disregard the corrections to the OPE coefficients. These issues remain for future work. Discussion and conclusions Our study has brought us to a sharp formal result with somewhat mystifying physical meaning. We have studied classical chaos in the motion of closed strings in black hole backgrounds, and we have arrived, analytically and numerically, at the estimate λ = 2πT n for the Lyapunov exponent, with n being the winding number of the string. This is a correction by the factor of n of the celebrated chaos bound λ ≤ 2πT . However, one should think twice before connecting these things. From the bulk perspective, what we have is different from classical gravity -it includes string degrees of freedom, and no gravity degrees of freedom. Therefore, the fast scrambler hypothesis that the black holes in Einstein gravity exactly saturate the bound is not expected to be relevant for our system | 332 | 145856149 | 0 | 16 |
-25 - JHEP12(2019)150 anyway, but the question remains why the bound is modified upwards instead of simply being unsaturated (in other words, we would simply expect to get λ < 2πT ). The twist is that the Lyapunov exponent in the bulk is related to but in general distinct from the Lyapunov exponent in field theory, usually defined in terms of OTOC. Apparently, one just should not uncritically apply the chaos bound proven for the correlation function decay rates in flat-space quantum fields to worldsheet classical string dynamics. Therefore, it might be that the field theory Lyapunov time does not violate the bound at all. The timescale of OTOC decay for a field theory dual to the fluctuating string is calculated in [17]: OTOC equals the expectation value of the scattering operator for bulk strings with appropriate boundary conditions. The field-theory Lyapunov time is then determined by the phase shift of the collision. In particular, [17] finds the saturated bound λ = 2πT as following from the fact that the phase shift is proportional to the square of the center-of-mass energy. On the other hand, [6] predicts that the Lyapunov exponent is lower than the bound when stringy effects are considered. We have done first a completely classical calculation of OTOC and have found, expectedly perhaps, that the 2πT bound is exactly obeyed. Then we have followed the approximate scheme of [49] to include the one-loop closed string tachyon amplitude as the simplest (and hopefully representative enough?!) stringy process. For a ring string background, this gives | 333 | 145856149 | 0 | 16 |
an increased value for the field-theory Lyapunov rate, yielding some credit to the interpretation that complicated string configurations encode for strongly nonlocal operators, which might indeed violate the bound. But as we have explained, the approximations we took are rather drastic. We regard a more systematic study of loop effects in string chaos as one of the primary tasks for future work. To gain some more feeling on the dual field theory, we have looked also at the Regge trajectories. In one configuration, the strings that violate the bound n times are precisely those whose Regge trajectory has the slope n times smaller than the leading one (and thus for n = 1 the original bound is obeyed and at the same time we are back to the leading Regge trajectory). In another configuration, the strings that violate the bound describe no Regge trajectory at all. However, it is very hard to say anything precise about the gauge theory operators at finite temperature. Deciphering which operators correspond to our strings is an important but very ambitious task; we can only dream of moving toward this goal in very small steps. What we found so far makes it probable that complicated, strongly non-local operators correspond to the bound-violating strings, so that (as explained in the original paper [1]) their OTOC cannot be factorized and the bound is not expected to hold. 15 15 In relation to the gauge/string duality it is useful to look also at the gauge theories with N f flavors added, which corresponds to | 334 | 145856149 | 0 | 16 |
the geometry deformed by N f additional D-branes in the bulk. In [50] it was found that the system becomes nonintegrable in the presence of the flavor branes (expectedly, as it becomes non-separable), but the Lyapunov exponent does not grow infinitely with the number of flavors, saturating instead when the number of colors Nc and the number of flavors become comparable. This is expected, as the D3-D7 brane background of [50] formally becomes separable again when N f /Nc → ∞ (although in fact this regime cannot be captured, the calculation of the background ceases to be valid in this case). In our case the winding number n is a property of the string solution, not geometry, and the Hamiltonian (2.9) seems to have no useful limit for n → ∞, thus we do not expect the estimate 2πT n will saturate. JHEP12(2019)150 Preparing the final version of the paper, we have learned also of the work [52] where the n-point OTOCs are studied following closely the logic of [1] and the outcome is a factor of n enlargement, formally the same as our result. This is very interesting but, in the light of the previous paragraph, we have no proof that this result is directly related to ours. It certainly makes sense to investigate if the winding strings are obtained as some limit of the gravity dual for the n-point correlations functions. We know that n-point functions in AdS/CFT are a complicated business. The Witten diagrams include bulk propagators carrying higher spin fields that might | 335 | 145856149 | 0 | 16 |
in turn be obtained as string excitations. Just how far can one go in making all this precise we do not know for now. In relation to [15,16] one more clarifying remark should be given. In these works, particles in the vicinity of the horizon are found to exhibit chaos (either saturating the bound or violating it, depending on the spin of the background field). At first glance, this might look inconsistent with our finding that for n = 0, when the string degenerates to a particle, no chaos occurs; after all, we know that geodesic motion in the background of spherically symmetric black holes is integrable, having a full set of the integrals of motion. But in fact there is no problem, because in [15,16] an additional external potential (scalar, vector, or higher-spin) is introduced that keeps the particle at the horizon, balancing out the gravitational attraction. Such a system is of course not integrable anymore, so the appearance of chaos is expected. The modification of the bound in the presence of higherspin fields might have to do with the findings [51] that theories with higher-spin fields can only have gravity duals in very restricted situations (in particular, higher spin CFTs with a sparse spectrum and large central charge or, roughly speaking, massive higher spin fields, are problematic). Another task on the to-do list, entirely doable although probably demanding in terms of calculations, is the (necessarily approximate) calculation of the black hole scattering matrix, i.e., the backreactrion of the black hole upon scattering or absorbing | 336 | 145856149 | 0 | 16 |
a string, along the lines of [7]. In this paper we have worked in the probe limit (no backreaction), whereas the true scrambling is really the relaxation time of the black hole (the time it needs to become hairless again), which cannot be read off solely from the Lyapunov time; this is the issue we also mentioned in the Introduction, that local measures of chaos like the Lyapunov exponent do not tell the whole story of scrambling. Maybe even a leading-order (tree-level) backreaction calculation can shed some light on this question. Figure 6. Check of the Hamiltonian constraint H = 0 during an integration for the spherical, planar and hyperbolic black hole (black, blue, red respectively), at temperature T = 0.01 (left) and T = 0.10 (right). The accuracy of the constraint is a good indicator of the overall integration accuracy, it is never above 10 −6 and has no trend of growth but oscillates. | 337 | 145856149 | 0 | 16 |
Discordant Amyloid Status Diagnosis in Alzheimer’s Disease Introduction: Early and accurate Alzheimer’s disease (AD) diagnosis has evolved in recent years by the use of specific methods for detecting its histopathological features in concrete cases. Currently, biomarkers in cerebrospinal fluid (CSF) and imaging techniques (amyloid PET) are the most used specific methods. However, some results between both methods are discrepant. Therefore, an evaluation of these discrepant cases is required. Objective: The aim of this work is to analyze the characteristics of cases showing discrepancies between methods for detecting amyloid pathology. Methodology: Patients from the Neurology Department of La Fe Hospital (n = 82) were diagnosed using both methods (CSF biomarkers and amyloid-PET). Statistical analyses were performed using logistic regression, and sex and age were included as covariables. Additionally, results of standard neuropsychological evaluations were taken into account in our analyses. Results: The comparison between CSF biomarker (Aβ42) and amyloid PET results showed that around 18% of cases were discrepant—mainly CFS-negative and PET-positive cases had CSF levels close to the cut-off point. In addition, a correlation between the episodic memory test and CSF biomarkers levels was observed. However, the same results were not obtained for other neuropsychological domains. In general, CSF- and PET-discrepant cases showed altered episodic memory in around 66% of cases, while 33% showed normal performance. Conclusions: In common clinical practice at tertiary memory centers, result discrepancies between tests of amyloid status are far more common than expected. However, episodic memory tests remain an important support method for AD diagnosis, especially in cases with discrepant results | 338 | 253458763 | 0 | 16 |
between amyloid PET and CSF biomarkers. Introduction Increasingly, middle-aged people suffer from memory loss, subjective complaints without any medical evidence, cognitive impairment derived from other disorders (e.g., anxiety and depression), and dementia, with Alzheimer's disease (AD) being the most common [1]. AD diagnosis requires a complete clinical evaluation based on neuroimaging, neuropsychological assessment, and amyloid status determination. This status is evaluated by means of biomarker levels (β-amyloid-42 (Aβ42), total Tau (t-Tau), and phosphorylated Tau (p-Tau)) in cerebrospinal fluid (CSF) or by amyloid positron emission tomography (amyloid PET) [2]; however, these techniques show some discrepant results. High consistency between the two diagnostic techniques has been described and is expected; however, in common specialized practice, this consistency needs to be confirmed. Regarding the histopathological characteristics of AD, the extracellular formation of amyloid plaques and the intracellular formation of neurofibrillary tangles are the main hallmarks; therefore, diagnostic criteria are based on these specific biomarkers. They are classified into amyloid status biomarkers (fibrillary amyloid retention by PET (amyloid PET), and Aβ42, t-Tau, and p-Tau in CSF) and neurodegeneration/topographic biomarkers (temporoparietal metabolism by PET-FDG and medial temporal atrophy by NMR) [3]. The amyloid status biomarkers consist of CSF levels of Aβ42, t-Tau, and p-Tau, which provide relevant information for early AD diagnosis [4]. However, the clinical variability in late-onset AD requires us to consider other biomarkers. In this sense, some trials aim to investigate Aβ42/Aβ40 or APP669-711/Aβ42 ratios [5,6]. Another study suggested that the Aβ42/Aβ40 ratio improved the diagnostic capacity of Aβ42 [7]. Moreover, amyloid PET molecular imaging provides information about the complex interactions | 339 | 253458763 | 0 | 16 |
between Aβ42, Tau, and neuroinflammation in AD and mild cognitive impairment (MCI), allowing us to differentiate among the changes associated with the normal aging process or the physiopathological processes of the disease. In fact, PET imaging enables preclinical AD detection [8], and it has some advantages, such as being a less invasive technique with high precision and standardization levels. Nevertheless, amyloid PET is more expensive and has some limitations regarding the use of tracers. In fact, tracers are difficult to produce and handle since they have a short half-life. In addition, the different amyloid tracers available in clinical practice could show different information, and their diagnosis indexes (sensitivity, specificity) are not 100% [9]. The recent development of CSF biomarkers and amyloid PET techniques in the early diagnosis of AD demonstrates important advances in the knowledge of the role of the amyloid status [10]. However, some studies from the literature showed discrepant results between CSF analysis and the amyloid PET technique [11,12]. Specifically, atypical CSF patterns showing high levels for p-Tau and/or t-Tau but normal levels for Aβ1-42 are relatively frequent in patients with MCI and AD [13,14]. On the other hand, neuropsychological evaluation tests have shown their utility in the detection of neurodegenerative diseases, such as AD, and could constitute a useful tool to discern doubtful cases [15]. In this sense, the present study is focused on the evaluation of the concordance between the results of CSF biomarkers levels (Aβ1-42, T-tau, and P-tau) and amyloid PET in the diagnosis of patients with cognitive impairment to provide | 340 | 253458763 | 0 | 16 |
additional tools, such as neuropsychological tests, in order to help in discrepant case diagnosis in clinical practice. Participants and Sample Collection The study involved patients from the neurology unit of the Hospital Universitari i Politècnic La Fe (Valencia, Spain) (n = 82), with ages between 48 and 80 years. All of them underwent two procedures for AD diagnosis: (i) lumbar puncture to obtain CSF samples for the determination of AD biomarkers (amyloid β42 (Aβ42), t-Tau, and p-Tau); and (ii) the amyloid PET technique (florbetapir, flutemetamol). In addition, we evaluated all patients with neuropsychological tests (CDR, RBANS) (see Section 2.2). We classified participants into two groups: (i) concordant cases (n = 67), in which CSF biomarkers and amyloid PET imaging provided the same diagnosis; and (ii) discordant cases (n = 15), in which CSF biomarkers and amyloid PET imaging provided different diagnoses. We determined CSF positivity by Aβ42 levels with a cut-off of 750 pg mL −1 . The study protocol was approved by the Ethics Committee (CEIC) of the Health Research Institute La Fe (Valencia, Spain). All participants signed the informed consent form. Neuropsychological Evaluation The neuropsychological evaluation of the participants was based on the CDR and RBANS tests. The CDR test (clinical dementia rating; Morris, 1993) establishes 5 possible states based on the evaluation of 6 areas of cognitive and behavioral autonomous functioning (memory, orientation, reasoning and problem solving, activities of daily life, domestic tasks and hobbies, and personal care and hygiene). The established states are: 0 (normal cognitive and behavioral functioning), 0.5 (questionable dementia), 1 (mild dementia), 2 (moderate dementia), and 3 | 341 | 253458763 | 0 | 16 |
(severe dementia) [16]. The RBANS (repeatable battery for neuropsychological status assessment) test is a neuropsychological battery that evaluates 5 areas: immediate memory (IM), language (L), attention (A), delayed memory (DM), and visuospatial construction (VC). A resulting score lower than 85 in some areas would indicate cognitive alteration [15,17]. Statistical Analysis We summarized participants' demographic and clinical data using mean (standard deviation) and median (interquartile range) for numerical variables, and absolute frequency (%) for qualitative variables. We performed an evaluation of the relationship between the results of amyloid PET, biomarkers in CSF (Aβ42, t-Tau, and p-Tau), and neuropsychological evaluation (RBANS and CDR) through logistic regression, including sex and age as covariates. We performed statistical analyses using R software version 4.0 (1) and clickR package version 0.4.39. Table 1 summarizes the demographic and clinical data from the two participant groups. Regarding gender, more discordant and concordant-positive cases were obtained for females compared to males, while concordant-negative cases were mainly males. Additionally, CSF biomarkers (t-Tau, p-Tau, and Aβ42) showed differences between groups. Specifically, as expected, higher levels of Aβ42 and lower levels of t-Tau and p-Tau were found in the negative concordant group compared with the positive concordant group, while medium levels were present in discordant cases. In the same way, neuropsychological scores are, in general, altered in the positive concordant group and are normal in the negative concordant group, while the discordant group showed values closer to the cut-off values. CSF Biomarkers and Amyloid PET Imaging Figure 1 shows the levels of the CSF biomarker (Aβ42) and the amyloid PET results in both participant | 342 | 253458763 | 0 | 16 |
groups, showing that around 18% of cases were discrepant. Additionally, it can be seen that discrepant results are mainly from intermediate CSF levels, which are close to the cut-off point (750 pg mL −1 ). In addition, Figure 2 shows that most of the discrepant cases correspond to patients with positive amyloid PET (1) and negative CSF biomarkers (Aβ42 levels ≥ 750 pg mL −1 ). Similarly, a comparison between amyloid PET and the ratio between t-Tau and Aβ42 showed 18% of discrepant cases. | 343 | 253458763 | 0 | 16 |
Biomarkers and Neuropsychological Evaluation The diagnostic potential of the neuropsychological evaluation was examined with special attention to discordant cases; specifically, the scores obtained from the CDR and RBANS.DM tests were assessed as complementary tools to the biochemical and imaging tests. For discrepant cases, patients with negative CSF biomarkers and positive PET manifested alterations in episodic memory in 60% of cases (see Figure 2). By contrast, all the cases (only two patients) with positive CSF biomarkers and negative PET showed normal episodic memory performance. The relationship between Aβ42 levels in CSF and RBANS.DM scores is shown in Figure 3. As observed, PET-negative cases are shown on the left side, with PET-positive cases on the right. Among negative PET cases, concordant cases showed Aβ42 levels > 750 pg mL −1 , and most of them had high RBANS.DM scores (dark values); however, discordant cases showed Aβ42 levels < 750 pg mL −1 and high RBANS.DM scores. Among positive PET cases, concordant cases showed Aβ42 levels < 750 pg mL −1 , and most of them showed low RBANS.DM scores; furthermore, discordant cases showed Aβ42 levels > 750 pg mL −1 and low RBANS.DM scores. | 344 | 253458763 | 0 | 16 |
In Figure 4b, the shaded part represents the confidence interval. As can be seen, the confidence intervals are wide at high Aβ42 levels and RBANS.DM scores, which could | 345 | 253458763 | 0 | 16 |
reflect considerable variability in the classification of patients according to amyloid PET results, especially in cases with high levels of Aβ42 in CSF and high scores in RBANS.DM. Among the CDR results, there was no correlation with CSF biomarker levels, in the same way as for the other RBANS subsets. Figure 5 shows a decision tree for AD diagnosis based on these results. As can be seen, patients with positivity and negativity for both tests should be diagnosed with AD and non-AD, respectively. In the discrepant cases, RBANS.DM could be used to clarify the diagnosis. In cases with altered RBANS.DM, patients should be diagnosed with AD, while patients with normal scores for RBANS.DM should be followed up to define their diagnosis. | 346 | 253458763 | 0 | 16 |
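The decision logic summarized in Figure 5 can be written down compactly. The sketch below is an illustrative rendering, not the authors' software; it assumes the 750 pg mL −1 Aβ42 cut-off and the RBANS cut-off of 85 mentioned in the Methods, and the returned labels are simplified summaries of the recommended outcomes.

```python
# Illustrative rendering of the Figure 5 decision tree (not the authors' code).
# Thresholds: CSF Abeta42 cut-off of 750 pg/mL and RBANS cut-off of 85, both taken
# from the Methods section; the return labels are simplified summaries.
ABETA42_CUTOFF = 750.0   # pg/mL; lower values count as CSF-positive
RBANS_DM_CUTOFF = 85.0   # scores below this indicate altered delayed (episodic) memory

def amyloid_decision(abeta42_csf: float, pet_positive: bool, rbans_dm: float) -> str:
    csf_positive = abeta42_csf < ABETA42_CUTOFF
    if csf_positive and pet_positive:
        return "AD"            # both amyloid tests positive
    if not csf_positive and not pet_positive:
        return "non-AD"        # both amyloid tests negative
    # Discrepant case: use episodic (delayed) memory to support the decision.
    if rbans_dm < RBANS_DM_CUTOFF:
        return "AD"            # altered episodic memory
    return "follow-up"         # normal episodic memory: re-evaluate later

print(amyloid_decision(abeta42_csf=800.0, pet_positive=True, rbans_dm=70.0))  # -> AD
```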
Discussion At present, the gold standard | 347 | 253458763 | 0 | 16 |
Aβ42/Aβ40 ratio showed better concordance with amyloid PET results [24]. However, the t-Tau/Aβ42 ratio showed similar results to Aβ42 in our study. These differences among studies could be explained by the different cut-off points used, as well as by the differences in the selection of participants. In the present work, the participants were patients from the Cognitive Disorders Discussion At present, the gold standard for the diagnosis of AD is based on the evaluation of the amyloid status by means of CSF biomarker Aβ42 or amyloid PET imaging [11]. In this work, an assessment of the concordance between the results from CSF biomarker levels (Aβ42) and amyloid PET tests was carried out in order to compare their reliability in the diagnosis of patients with cognitive impairment in a real clinical practice context. Sometimes test results are inconclusive or lack agreement. Moreover, some studies showed discrepancies between both techniques [11,12]. Additionally, previous studies indicated that amyloid PET and CSF biomarkers might not reflect identical clinical information; therefore, a combination of both techniques could be the best option to characterize clinically unclear cognitive impairment [18]. In the present work, most of the discrepant cases were patients with negative CSF biomarkers and positive PET results. These results are contrary to those described by Hye et al., who concluded that CSF biomarkers were more sensitive than PET for AD diagnosis [19]. In addition, a previous study showed similar discordant results between Aβ42 and amyloid PET, with 25% discrepancies [20]. In general, this is a very high discrepancy level for two | 348 | 253458763 | 0 | 16 |
analytical techniques considered the gold standard in AD diagnosis. Regarding the amyloid deposition, previous studies described the deposition of peptides Aβ37 or Aβ39 in extracellular plaques [21]. In addition, perivascular deposits in some amyloid plaques could contribute to high variability in amyloid PET results. Therefore, CSF Aβ42 and amyloid PET may not exactly reflect the same information. Additionally, a previous study found that the Aβ42/Aβ40 ratio showed better concordance with amyloid PET results [22]. However, the t-Tau/Aβ42 ratio showed similar results to Aβ42 in our study. These differences among studies could be explained by the different cut-off points used, as well as by the differences in the selection of participants. In the present work, the participants were patients from the Cognitive Disorders Unit in the Hospital La Fe, implying some selection bias. Regarding the clinical implications of the discrepant results, it is important to highlight that both tests (CSF Aβ42 and amyloid PET) were only applied to cases showing some inconsistent results between a diagnosis test and the clinical manifestations. Similarly, previous studies showed that both techniques were applied when there was a mismatch between the clinical diagnosis and the biomarker result [23]; furthermore, a study by Wilde et al. showed that discordant results provided important information about clinical progression [24]. In this sense, any discordant negative result (Aβ42 CSF or amyloid PET) should be validated by means of the other test. According to gender differences, we found higher discrepancies and higher positive results in females. The higher proportion of positive females could be explained by | 349 | 253458763 | 0 | 16 |
their higher risk levels for AD [25], as well as the higher number of female participants in this study. In addition, these could be explained by the influence of other factors, such as the ApoE genotype (not available in our data) [26]. Therefore, studies in other cohorts are necessary to confirm these results. Regarding neuropsychology tests, they are mainly used to detect cognitive impairment cases and to evaluate disease progression [27]. In this sense, the CDR test is the most used classification system in clinical dementia research [28]. It consists of a global clinical scale, which measures social, behavioral, and functional changes in patients. Among its advantages is that it is independent of other psychometric tests, it does not require a baseline evaluation, and it can be used as a control for each individual [29]. On the other hand, RBANS is a short neuropsychological battery with high sensitivity for the detection of cognitive disorders in degenerative and non-degenerative pathologies. It has been adapted into several languages and is widely used in some countries [30]. RBANS scores produce excellent estimates of diagnosis accuracy and it constitutes a useful tool in the detection of cognitive deficits associated with AD [17,31,32]. Specifically, the RBANS.DM domain could be a cost-effective tool for identifying the early signs of AD pathology, improving clinical decisions about the progression to dementia due to AD [15]. In the present study, the agreement between a CSF biomarker (Aβ42), and the amyloid PET technique was evaluated. The results showed that those subjects with alterations in amyloid PET | 350 | 253458763 | 0 | 16 |
(positive) could be AD patients; however, if CSF Aβ42 levels are ≥ the cut-off point (negative), the AD diagnosis was not corroborated. In these discrepant cases, the patients' cognitive impairment was evaluated according to the RBANS.DM domain, which constitutes an important tool in establishing diagnosis, much as it is in cases without studying amyloid status. An episodic memory assessment (RBANS.DM) would allow us to establish the phase of the disease (the lower the score obtained in this area, the greater the memory deficit), as the study by Hammers et al. already pointed out. In fact, they claimed that RBANS.DM could illuminate clinical decision making regarding the possible progression to dementia due to AD [27]. In addition, evaluations of episodic memory with tests such as the RBANS.DM scale would remain a key tool to identify patients with AD, even when specific amyloid detection techniques are fully available. Thus, their use could considerably reduce economic costs in those countries where amyloid PET cannot be performed by public health systems. Similar results were obtained from a previous study [27]. In the present study, RBANS.DM showed a great capacity to predict amyloid PET status. For that reason, RBANS.DM could be employed as an AD diagnosis approach in cases where economic conditions do not allow the performance of more expensive tests, such as amyloid PET. In addition, episodic memory tests (e.g., RBANS.DM) could be useful as a screening test for AD diagnosis due to their high sensitivity. However, their application requires further resources (specialized staff and time), which are not available in | 351 | 253458763 | 0 | 16 |
most health systems. The main limitations of this study are the small sample size and the reduced number of discordant cases. In addition, the participants were patients from the Neurology Unit; therefore, the sample could be biased. Moreover, the specific cut-off point used according to clinical practice in this unit could be different from others, making it difficult to extrapolate the results. Regarding amyloid PET, the interpretation of these results can be more subjective, especially in the most borderline cases, so highly specialized personnel are required to be in charge of these tasks. Similarly, for the neuropsychological evaluation, specialized staff are required. Conclusions Alzheimer's disease diagnosis is mainly based on amyloid status (Aβ42 in CSF and amyloid PET). However, in some cases, both techniques presented discordant results. In these cases, classical complementary non-invasive and cost-effective neuropsychological tools continue to provide key data to support AD diagnosis. In this sense, episodic memory assessments still constitute a useful tool in supporting the diagnosis of patients at risk of developing Alzheimer's disease. Therefore, neuropsychological evaluations could not only help to characterize the patient's degree of cognitive impairment or disease progression but also help to identify AD patients. In general, this study highlights the high percentage of discrepancies between two techniques considered the gold standard in the diagnosis of AD. In this sense, there is an increasing need to carry out further research in plasma samples to identify new specific biomarkers that are minimally invasive and economically affordable. Funding: This work was supported by the Instituto de Salud Carlos III Project PI19/00570 (Spanish | 352 | 253458763 | 0 | 16 |
Ministry of Economy and Competitiveness) and co-funded by European Union, ERDF "A way to make Europe". CCP acknowledges CPII21/00006. CPB acknowledges PFIS FI20/00022. LAS acknowledges Río Hortega CM20/00140. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Hospital Universitari I Politècnic La Fe (2021-454-1, 23 June 2021). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. | 353 | 253458763 | 0 | 16 |
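As a minimal illustration of the statistical analysis described in the Statistical Analysis section (logistic regression of amyloid PET status on CSF Aβ42 and RBANS.DM, adjusting for sex and age), here is a hedged sketch. The study used R (version 4.0) with the clickR package; the Python translation below and its synthetic data and column names are placeholders for illustration only.

```python
# Sketch of the logistic-regression analysis from the Statistical Analysis section:
# amyloid-PET status regressed on CSF Abeta42 and RBANS delayed memory, adjusting
# for sex and age. The study used R with the clickR package; this Python version
# and the synthetic data below are illustrative placeholders only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 82                                           # cohort size reported in the paper
df = pd.DataFrame({
    "abeta42": rng.normal(750, 200, n),          # pg/mL, synthetic
    "rbans_dm": rng.normal(85, 15, n),           # delayed-memory score, synthetic
    "age": rng.integers(48, 81, n),
    "sex": rng.choice(["F", "M"], n),
})
# Synthetic outcome: lower Abeta42 and lower RBANS.DM raise the odds of a positive PET.
logit_p = 0.01 * (750 - df["abeta42"]) + 0.05 * (85 - df["rbans_dm"])
df["pet_positive"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("pet_positive ~ abeta42 + rbans_dm + C(sex) + age", data=df).fit(disp=0)
print(model.summary())
```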
NEXGB: A Network Embedding Framework for Anticancer Drug Combination Prediction Compared to single-drug therapy, drug combinations have shown great potential in cancer treatment. Most of the current methods employ genomic data and chemical information to construct drug–cancer cell line features, but there is still a need to explore methods to combine topological information in the protein interaction network (PPI). Therefore, we propose a network-embedding-based prediction model, NEXGB, which integrates the corresponding protein modules of drug–cancer cell lines with PPI network information. NEXGB extracts the topological features of each protein node in a PPI network by struc2vec. Then, we combine the topological features with the target protein information of drug–cancer cell lines, to generate drug features and cancer cell line features, and utilize extreme gradient boosting (XGBoost) to predict the synergistic relationship between drug combinations and cancer cell lines. We apply our model on two recently developed datasets, the Oncology-Screen dataset (Oncology-Screen) and the large drug combination dataset (DrugCombDB). The experimental results show that NEXGB outperforms five current methods, and it effectively improves the predictive power in discovering relationships between drug combinations and cancer cell lines. This further demonstrates that the network information is valid for detecting combination therapies for cancer and other complex diseases. Introduction In cancer therapy, it is difficult to provide effective treatment using a single drug [1]. However, drug combinations can increase therapeutic efficacy and reduce the toxic effects of drugs by targeting multiple molecular mechanisms in cancer cells [2,3]. At the same time, drug combination therapy also shows great potential in overcoming | 354 | 251998482 | 0 | 16 |
drug resistance [4]. For cancer patients, choosing the right combination of drugs to improve the efficiency of their treatment can greatly reduce their suffering. Traditional drug combination identification methods often consider clinical trials or high-throughput screening (HTS) [5], but they still have many problems. The time and money spent on clinical trials are unknown, and these methods are prone to errors and may even expose patients to harmful treatments, so they are not an effective way to identify large numbers of drug combinations [6]. Recently, HTS has been widely used to identify effective combinations in preclinical settings [7]. The method measured a large number of drug combinations at different doses and was applied to different cancer cell lines [8]. Compared with clinical trials, HTS can identify drug combinations in a reasonable time through multiple drug combination databases [6,9]. However, with the dramatic increase in data volume, it is impractical to use HTS to consider all combinations [10,11]. Therefore, more effective methods are needed to explore the drug combination space, improve efficiency, and reduce errors as much as possible. With the progress of computer technology, machine learning models have been developed to explore drug combinations for cancer diseases [12]. More and more biological knowledge has been used in the prediction models of machine learning. Peng et al. integrated the molecular and pharmacological features of drugs using a Bayesian network [13]. Cheng and Zhao applied five algorithms (naive Bayes, decision tree, k-nearest neighbor, logistic regression, and support | 355 | 251998482 | 0 | 16 |
vector machine) and employed four features based on drug-drug similarity to identify the drug combinations [14]. Yu et al. applied similarity networks to predict drug-target interactions and demonstrated the feasibility of predicting drug combinations through network embeddings [15,16]. Advances in biological technology have provided abundant biological data, with a huge amount of data available. Recently, in order to utilize biological information, deep learning has begun to be applied to drug synergy prediction. Kristina and colleagues used chemical and cancer genomic information to construct a new model (DeepSynergy) to predict combinations of anticancer drugs for multiple types of cancer [17]. MatchMaker, proposed by Halil et al., combines the chemical features of each drug with the gene expression features of the cell line [18]. TranSynergy, proposed by Liu et al., uses PPI network information and gene profiles to construct the features of drugs and cell lines. Notably, they applied the random walk with restart algorithm (RWR) in the network to infer drug representations [19]. However, they considered drug target genes but ignored the topology between drug targets and disease proteins in PPI networks, which also have complex biological interactions. In recent network science research, the relationships between and the importance of drug targets and disease proteins have been investigated through cluster detection algorithms [20] and drug-disease proximity measures [21]. Cheng and his colleagues quantified the relationship between drug targets and disease proteins in the human protein-protein interactome, and they found that all possible drug-drug-disease combinations can be divided into six topologically different categories [22]. This confirmed the predictability | 356 | 251998482 | 0 | 16 |
of considering potential drug-drug-disease relationships through a PPI network. Graph convolutional neural networks and self-attention mechanisms have been applied in the fields of predicting frequently occurring diseases and predicting compound-protein interactions [23,24]. Yang et al. proposed a method based on a graph convolution network (GraphSynergy), which utilized proteins related to drug targeting and specific cancer cell lines [25]. However, the application of existing network science methods to drug combination therapy still has a large space for exploration. In this work, we propose a novel framework, extreme gradient boosting for network embedding (NEXGB), that utilizes PPI network information to identify drug combinations with synergistic effects in specific cancer cell lines. NEXGB combines the struc2vec [26] component, which can effectively capture the topological information of the target protein to generate drug features and cancer cell line features [21,22]. Two targets with the same local network structure should have the same potential features. Therefore, we obtain the drug features and cell line features from the potential information of targets and protein information. Then, the obtained features are put into XGBoost to identify drug combinations with synergistic effects in cancer treatment [27]. To validate the effectiveness of NEXGB, we apply NEXGB on two datasets: a recently developed unbiased oncology compound screen (Oncology-Screen) [28] and a large drug combination dataset (DrugCombDB) [29]. We also compare NEXGB with other recent advanced methods to further illustrate the performance of NEXGB. Results In this part, we list the key parameters of the model, illustrating the effect of the feature dimension on model performance. Furthermore, | 357 | 251998482 | 0 | 16 |
we demonstrate the superiority of the NEXGB model and its predictive performance in specific tissues and specific cell lines. Parameter Relation In this study, struc2vec is used to generate feature vectors for proteins. We set the length of random walk sequences to 80, and each node generates 20 random walk sequences. Finally, a 64-dimensional vector is generated for each gene through the Skip-Gram model. Table 1 provides the relevant parameters of this part. The XGBoost model performs subsequent classification training. The key parameters are listed in Table 2. We apply the grid search method to adjust the parameters of this part to find the optimal parameters possible. Comparison Study of Existing Methods We evaluate performance on two recently developed large drug combination datasets, Oncology-Screen [28] and DrugCombDB [29], and use five metrics: accuracy (ACC), recall, area under receiver operating characteristic curve (AUC-ROC), area under precision-recall curve (AUC-PR), and F1 score. In this study, ACC represents the number of correct drug combinations predicted. Recall represents the probability of being predicted to be synergistic among all synergistic drug combination data. AUC-ROC is the area under the receiver operating characteristic curve calculated from the predicted scores. AUC-PR is the mean precision calculated from the prediction score. The F1 value combines the precision and recall score. The Oncology-Screen data provided synergistic information for 4176 drug combinations, covering 21 drugs and 29 tumor-associated cell lines. DrugCombDB is much larger and is the largest drug combination dataset by far, containing 764 unique drugs and 69,436 drug combinations in 76 unique cell lines. | 358 | 251998482 | 0 | 16 |
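The tuning and evaluation protocol described above (grid search over XGBoost parameters with five-fold cross-validation and ROC-AUC scoring) can be sketched as follows. This is an illustrative outline rather than the released NEXGB code; the parameter grid and the feature and label arrays are placeholders.

```python
# Illustrative sketch of the tuning/evaluation protocol described above:
# grid search over XGBoost hyperparameters with five-fold cross-validation,
# scored by ROC-AUC. The feature matrix X (concatenated drug/cell-line vectors)
# and labels y are random placeholders, not the actual NEXGB data.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 192))          # placeholder 192-dim combination features
y = rng.integers(0, 2, size=500)         # placeholder synergy labels (1 = synergistic)

param_grid = {
    "n_estimators": [200, 400],
    "max_depth": [4, 6],
    "learning_rate": [0.05, 0.1],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                      param_grid, scoring="roc_auc", cv=cv)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("mean ROC-AUC over 5 folds:", round(search.best_score_, 3))
```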
We compare the performance of NEXGB with other baselines. The compared methods include network proximity (NP [22]), matrix factorization (GraRep [30]), random walk (DeepWalk [31]), deep neural network (DeepSynergy [17]), and graph convolutional network (GraphSynergy [25]). NP quantifies the proximity between drug target proteins and disease target proteins by z-score and separation measures in the PPI network. The embedding dimension of GraRep and DeepWalk is 64. DeepSynergy utilizes related proteins as drugs and cell line features. The embedding dimension of GraphSynergy is 32 for Oncology-Screen and 64 for DrugCombDB. We perform five-fold cross-validation on each of the two datasets. Cross-validation randomly splits the dataset and reports the average performance after repeating the experiment five times (Table 3). Our model outperforms all baselines on the Oncology-Screen dataset and shows decent performance on DrugCombDB. Specifically, both GraphSynergy and DeepSynergy are deep learning models specially designed for drug combination prediction tasks. The performance of DeepSynergy is mediocre, suggesting that the inability to capture graphical information may be the reason for this. The random-walk-based model (DeepWalk) outperforms the matrix-factorization-based model (GraRep) and the network-proximity-based model (NP), which indicates that capturing sufficient structural information can greatly improve the predictive ability of the model. The data with the highest score are listed in bold for readability. Discussion on Feature Dimension The output of our choice is a 64-dimensional vector of nodes. Therefore, the feature vectors for drugs and cancer cell lines are 64-dimensional, and the input to training is 192-dimensional drug combination-cancer cell line data. The struc2vec component can choose the dimension | 359 | 251998482 | 0 | 16 |
of the output node feature, and we output the 32-dimensional, 64-dimensional, and 128-dimensional node feature vectors through the PPI network. To illustrate the effect of feature dimension on the model, we train three different dimensions of features on the Oncology-Screen dataset. Through the ROC curve and AUC value, we find that the dimension of features has no significant effect on the performance of NEXGB. This proves that struc2vec has learned the features of the nodes, but the increase in dimension does not mean that the learned feature vector is correct and effective. Thus, in order to facilitate subsequent operations, we use all 64-dimensional feature vectors for experiments and explanations. The area under receiver operating characteristic curve (AUC-ROC) under five-fold cross-validation is shown in Figure 1. We run it five times in each of the three dimensions, and the average AUC values are all around 0.85. | 360 | 251998482 | 0 | 16 |
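To give a flavor of the embedding step, the sketch below trains Skip-Gram vectors on node walks with gensim, echoing the walk length of 80, 20 walks per node, and 64 dimensions listed in Table 1. Note that struc2vec performs biased walks on a multilayer structural-similarity graph; the plain uniform walks on a toy graph used here are only a simplified stand-in for that procedure.

```python
# Simplified stand-in for the embedding step: Skip-Gram on node walk sequences.
# struc2vec performs biased walks on a multilayer structural-similarity graph;
# here, for brevity, plain uniform random walks on the graph itself are used,
# so this only illustrates the walk -> Skip-Gram -> 64-dim vector pipeline.
import random
import networkx as nx
from gensim.models import Word2Vec

graph = nx.karate_club_graph()           # toy graph standing in for the PPI network

def random_walk(g, start, length=80):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(g.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(n) for n in walk]

walks = [random_walk(graph, node) for node in graph.nodes() for _ in range(20)]

model = Word2Vec(sentences=walks, vector_size=64, window=5, sg=1,  # sg=1: Skip-Gram
                 min_count=0, workers=2, epochs=5)
print(model.wv[str(0)].shape)            # 64-dimensional vector for node 0
```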
Classification Method We apply the XGBoost method, which has been widely used in recent years, for classification and compare it with some classic machine learning methods: random forests (RF) [32], k-nearest neighbor (KNN), support vector machines (SVM) [33], and logistic regression (LR). In order to compare the differences between them and exclude the influence of other factors, the above methods use 64-dimensional feature vectors, and the parameters are tuned to train on the Oncology-Screen dataset. We use 6 performance metrics: accuracy (ACC), recall, area under receiver operating characteristic curve (AUC-ROC), area under precision-recall curve (AUC-PR), precision (PR), and F1 score. Each experiment was carried out 10 times, and the last 6 | 361 | 251998482 | 0 | 16 |
indexes were taken as the average value. We find that on the Oncology-Screen dataset, the recall value of XGBoost is not as good as that of SVM, but the other metrics are the best. Among the recall values, XGBoost is 0.827 and SVM is 0.829. Although the recall value of SVM is high, its performance on the other five metrics is not satisfactory. The results under the five-fold cross-validation are shown in Figure 2. | 362 | 251998482 | 0 | 16 |
Prediction Performance on Specific Tissues and Specific Cell Lines The predictive performance of the model is closely related to the associated proteins of the drug and cell line. We further investigate the performance of the NEXGB model on the Oncology-Screen dataset for specific cell lines and specific tissues. The performance is represented by the ROC-AUC value under cross-validation. Figure 3 shows the performance of NEXGB in different cell lines and tissues. NEXGB has strong predictive power (ROC-AUC greater than 0.7) for over 75% of cell lines (Figure 3a). The ROC-AUC values of different cell lines range from 0.587 to 0.835. Among all 29 cell lines, the ES2 cell line, belonging to ovary tissue, has | 363 | 251998482 | 0 | 16 |
the lowest ROC-AUC value. The ES2 cell line has the fewest associated target proteins (56 associated target proteins), which may be one of the reasons for the poor performance of ES2 cell lines. However, the number of related target proteins is also less than 100, and DLD1 (75 related target proteins) belonging to colon tissue had a good performance. We believe that ES2 cell lines may be somewhat different from other cell lines (see Figure 3b). We also explore the performance of the NEXGB model on six different tissues: breast, colon, lung, ovary, prostate, and skin. The mean ROC-AUC values of the six tissues are 0.801 for breast cancer, 0.845 for colon cancer, 0.848 for lung cancer, 0.794 for ovarian cancer, 0.699 for prostate cancer, and 0.835 for skin cancer. Except for the prostate, the performance of NEXGB in the other five tissues is consistent. Among all tissues included in our data, prostate tissue has the lowest ROC-AUC value of all tissues. VCAP is the only prostate cancer cell line and underperforms among all cell lines. The target proteins associated with the VCAP cell line are the most numerous (830 target proteins) of all cell lines. The reason for the poor performance of prostate tissue may be related to the fact that it has only one cell line, VCAP, or that there are significant differences between VCAP and other cell lines. Table 4 shows the number of relevant target proteins of each cancer cell line. We performed t-SNE analysis on high-dimensional vector representations of cell lines, | 364 | 251998482 | 0 | 16 |
reflecting the relationships between cell lines in 2D space. Figure 4 shows that all cell lines are divided into two clusters by the density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm. We find that the ES2 cell line belongs to a separate category, and the rest of the cell lines belong to another category. In addition, the VCAP cell line is on the fringes of a cluster. This suggests that both the ES2 cell line and the VCAP cell line may have unique characteristics. Ovarian cancer is mainly classified into five histological types due to the characteristics of ovarian tissue: high-grade serous, low-grade serous, clear cell, endometrioid, and mucinous [34]. Ovarian clear cell carcinoma (OCCC) is a common ovarian epithelial cancer that accounts for 5% to 25% of ovarian cancers and has unique clinical and molecular features [35]. This may be one of the reasons why ES2 (the only clear cell carcinoma line in our dataset) is different from other ovarian cancer cell lines. In recent years, studies have shown that ES2 is different from other ovarian CCC cell lines. When the expression of hepatocyte nuclear factor 1β (HNF-1β) in ovarian CCC cell lines was examined using immunocytochemistry, intense nuclear staining was observed in OVMANA, OVISE, and OVSAYO, while no nuclear expression of HNF-1β was found in ES2 [36]. The unique characteristics of ES2 cell lines in ovarian clear cell carcinoma may be another reason for its unusual performance in our dataset. Discussion NEXGB is inspired by the latest | 365 | 251998482 | 0 | 16 |
advances in network medicine [21,22]. To better learn the potential information between nodes in the PPI network, we use struc2vec to explore the synergistic relationships between anticancer drug combinations and cell lines. The input features of XGBoost are the concatenated vectors of the drugs and the cell line, and the output is the predicted probability that the drug combination is synergistic. In contrast to other models, NEXGB does not use detailed chemical and biological descriptions (genomic profiles). It is only necessary to consider the drug-target and cancer cell line-target associations and their topological relationships in the PPI network. We apply NEXGB to the Oncology-Screen and DrugCombDB datasets and demonstrate the superiority of NEXGB. NEXGB has discovered potential relationships between drug combinations and cell lines. Based on the performance of the model on a specific cell line, we speculate that the ES2 cell line differs from other cell lines and has unique properties among ovarian-cancer-related cell lines. Overall, NEXGB can attempt to identify novel combination therapies for other intractable diseases based on biological knowledge in the PPI network. This adaptability to other diseases still requires further exploration. The model provides new ideas for medical fields such as cancer therapy. As more biological network data become available, we will be able to acquire more network information to train NEXGB. However, the prescription of clinical combinations requires knowledge of pharmacokinetic interactions between drugs. Personalized treatment needs to take into account the individual conditions of different patients, such as tolerance to the drug and unique genomic information. These factors are mostly ignored by NEXGB | 366 | 251998482 | 0 | 16 |
and other existing methods. The increase in available information brings corresponding problems. Different measurement standards for the same drug combination lead to different results. How to balance these biological experimental data is also a critical problem. Nonetheless, we believe that further studies in pharmacology, pharmacokinetics, toxicology, and genetic heterogeneity, accompanied by new computational methods, can rapidly overcome these limitations. Datasets Drug-Drug-Cell Line Synergy. We evaluate NEXGB on two anti-cancer drug combination datasets: (a) Oncology-Screen [28], an oncology screening dataset (accessed on 25 August 2021) including 21 drugs and 29 tumor cell lines, with a total of 4176 combinations, using Loewe values as the synergy score; (b) DrugCombDB [29], a much larger dataset, which is currently the largest database (accessed on 12 September 2021) for the number of drug combinations, containing 764 unique drugs and 69,436 drug combinations in 76 unique cell lines, using ZIP values as the synergy score. In the two datasets, we divide the positive and negative samples according to the synergy value. A synergy value greater than zero indicates a positive sample, and a synergy value less than zero indicates a negative sample. The number of positive and negative samples in Oncology-Screen is 2257 and 1919, respectively, while there are 31,623 positive samples and 37,813 negative samples in DrugCombDB. Protein-Protein Interaction (PPI) Network. The PPI network (accessed on 17 September 2021) contains 15,970 nodes (unique proteins) and 217,160 edges (interactions). The network is composed of 15 commonly used databases and experimental evidence [22,25]. Proteins are represented using gene numbers, with the coding mapped | 367 | 251998482 | 0 | 16 |
by GeneCards. Drug-Protein Associations. A total of 15,051 drug-protein associations are established based on FDA-approved or clinically investigated drugs [22]. The dataset contains 4428 drugs and 2256 unique human proteins (accessed on 17 September 2021). Cell-Protein Associations. The cell-protein association dataset was obtained from the Cancer Cell Line Encyclopedia (CCLE) [37]. This dataset contains 749,551 associations, 1035 cancer cell lines, and 18,022 proteins (accessed on 20 September 2021). Framework of NEXGB The related process of NEXGB is illustrated in Figure 5. NEXGB takes a drug-drug-cell line combination as input and outputs a predicted probability of synergy for the combination. The input to the struc2vec component is the PPI network with nodes labeled by coding gene numbers, and the output is a 64-dimensional latent feature vector for each of the n proteins. The target protein data for drug and cell line effects are handled in the same way as the PPI network data. We take the mean value of the feature vectors of the proteins directly affected by the drug (the same holds for the cell line features), so that the feature vector D_i of drug i is D_i = (P_1 + P_2 + · · · + P_n)/n, (1) where P_1, . . . , P_n are the feature vectors of the n proteins targeted by the drug. | 368 | 251998482 | 0 | 16 |
We further formulate the synergistic drug combination prediction problem as a binary classification problem. According to the definition of the synergy score, a synergy value greater than 0 indicates a synergistic effect between two drugs [38]. We regard a synergy value greater than zero as a positive sample (label: 1) in the drug combination data, and a value less than zero as a negative sample (label: 0). We concatenate the obtained drug features and cell line features according to the drug-drug-cell line combination data, input the combined features and labels into XGBoost for training, and output the prediction results. Network Embedding In the PPI network, a node represents the gene encoding a protein, and the existence of an edge between two nodes indicates that the two proteins interact. The PPI network is an undirected graph. We utilize the struc2vec [26] model in the network embedding approach to learn the structural features of the PPI network, using a vector to represent each gene in the network. The struc2vec model encodes structural similarities by building multi-layer graphs and generates structural contexts for nodes. Gene pairs that are far apart but structurally similar are embedded close together. The main steps of the struc2vec component are as follows: Structural Similarity The calculation of the structural similarity f (u, v) between each pair of | 369 | 251998482 | 0 | 16 |
nodes u and v can be denoted as f_k(u, v) = f_{k-1}(u, v) + g(s(R_k(u)), s(R_k(v))), where f_{-1} = 0, R_k(u) is the set of nodes with distance k (k ≥ 0) from node u, s(R_k(u)) is the ordered degree sequence of the node set R_k(u), and g(s(R_k(u)), s(R_k(v))) ≥ 0 is a measure of the distance between the ordered sequences s(R_k(u)) and s(R_k(v)). In general, s(R_k(u)) and s(R_k(v)) are of different sizes, and the node degrees are integers. In order to compare two such degree sequences of different sizes, we use dynamic time warping (DTW) [39], with the elementwise cost d(a, b) = max(a, b)/min(a, b) − 1 for degrees a and b. Construction of Multilayer Weighted Graph Each layer k = 0, . . . , k* consists of a weighted undirected graph, and the edge weight between nodes u and v in the same layer is defined as w_k(u, v) = e^{-f_k(u,v)}, where k* is the diameter of the original network. In the k-th layer, each node u is connected to the corresponding node u in the (k + 1)-th layer and the (k − 1)-th layer. The weights of the edges between different layers are defined as w(u_k, u_{k+1}) = log(Γ_k(u) + e) and w(u_k, u_{k−1}) = 1, where Γ_k(u) is the number of edges incident to u that have weight larger than the average edge weight of the complete graph in layer k. Generation of Node Sequence A biased random walk is applied to generate a sequence of nodes in a multi-layer | 370 | 251998482 | 0 | 16 |
graph. When performing a random walk, the walk stays at the current layer with probability q, and the probability of moving from node u to node v in the k-th layer is p_k(u, v) = e^{-f_k(u,v)} / Z_k(u), where Z_k(u) = ∑_{v ≠ u} e^{-f_k(u,v)} is the normalization factor for vertex u in layer k. The random walk switches to another layer with probability 1 − q, and the probabilities of selecting layer k + 1 and layer k − 1 are as follows: p_k(u_k, u_{k+1}) = w(u_k, u_{k+1}) / (w(u_k, u_{k+1}) + w(u_k, u_{k−1})) and p_k(u_k, u_{k−1}) = w(u_k, u_{k−1}) / (w(u_k, u_{k+1}) + w(u_k, u_{k−1})). After generating the node sequences, the Skip-Gram [40] model is trained on them to generate the node vectors. Supervised Classification Model XGBoost is an improvement of the gradient boosting algorithm, and its internal decision trees are regression trees [27]. We apply extreme gradient boosting (XGBoost) in supervised learning to classify the constructed drug-cancer cell line features and predict their synergistic relationships. Conclusions In this study, we propose a novel network embedding model, NEXGB, for predicting the relationships between drug combinations and cancer cell lines. The results show that | 371 | 251998482 | 0 | 16 |
cancer cell lines. In addition, the results on specific cell lines suggest that ES2 cell lines may have unique biological properties among ovarian cancer cell lines. Author Contributions: F.M. and F.L. jointly contributed to the design of the study. F.M. and Y.L. designed and implemented the method, performed the experiments, and drafted the manuscript. J.-X.L. and J.S. participated in the design of the study and performed the statistical analysis. X.L. and Y.L. contributed to the data analysis. All authors have read and agreed to the published version of the manuscript. | 372 | 251998482 | 0 | 16 |
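To make the feature construction of Eq. (1) and the subsequent classification concrete, here is a small illustrative sketch (not the authors' implementation): each drug or cell-line vector is the mean of its target proteins' 64-dimensional embeddings, two drug vectors and one cell-line vector are concatenated into a 192-dimensional sample, and XGBoost is trained on the labeled combinations. All names and toy arrays below are placeholders.

```python
# Illustrative sketch of the NEXGB feature construction (Eq. (1)) and classification:
# each drug or cell-line vector is the mean of its target proteins' 64-dim embeddings,
# and [drug1 | drug2 | cell line] gives a 192-dim sample fed to XGBoost.
# The embedding dictionary, target sets, and combination list are toy placeholders.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
proteins = ["P1", "P2", "P3", "P4", "P5"]
protein_embedding = {p: rng.normal(size=64) for p in proteins}   # stand-in for struc2vec output
targets = {"drugA": ["P1", "P2"], "drugB": ["P3"],
           "cellX": ["P2", "P4", "P5"], "cellY": ["P1", "P5"]}

def entity_vector(name):
    """Eq. (1): mean of the embeddings of the proteins associated with a drug or cell line."""
    return np.mean([protein_embedding[p] for p in targets[name]], axis=0)

# (drug 1, drug 2, cell line, label); label 1 = synergistic, 0 = non-synergistic.
combinations = [("drugA", "drugB", "cellX", 1), ("drugA", "drugB", "cellY", 0)]
X = np.array([np.concatenate([entity_vector(d1), entity_vector(d2), entity_vector(c)])
              for d1, d2, c, _ in combinations])
y = np.array([label for _, _, _, label in combinations])

clf = XGBClassifier(n_estimators=50, max_depth=4, eval_metric="logloss")
clf.fit(X, y)
print(clf.predict_proba(X)[:, 1])   # predicted synergy probabilities
```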
Light scattering by a vacuum-like sphere with magnetoelectric gyrotropy An exact transition matrix was formulated for electromagnetic scattering by a vacuum-like sphere with magnetoelectric gyrotropy. Both the total scattering and forward scattering efficiencies are lower when the magnetoelectric gyrotropy vector of the sphere is co/anti-parallel to the electric field or magnetic field of an incident plane wave than when the magnetoelectric gyrotropy vector is coparallel to the propagation vector of the incident plane wave. Backscattering is absent when the propagation vector is co/anti-parallel to the magnetoelectric gyrotropy vector. Free space, i.e., gravitationally unaffected vacuum, is the reference medium in electromagnetics [12]. A metric with g_0ℓ = 0 ∀ℓ ∈ {1, 2, 3} identifies the 0th coordinate as time and delinks it from the remaining three coordinates (space); hence, the equivalent bianisotropic continuum is an anisotropic dielectric-magnetic material which is devoid of magnetoelectric properties (i.e., Γ = 0) and is impedance-matched to free space. What would happen if a metric were such that its equivalent bianisotropic continuum is free space endowed with magnetoelectric gyrotropy (i.e., Γ ≠ 0)? This communication arose from an attempt to answer that question. Let u and w_{1,2,3} be four real scalars, of which only u is constrained to be non-zero and positive. Let these four scalars be used to construct the metric (2). Then ḡ u^2 = −1 and Eqs. (1) turn out to be the constitutive relations (3), where the magnetoelectric gyrotropy vector w = w_1 x̂ + w_2 ŷ + w_3 ẑ in the Cartesian coordinate system. Clearly, the bianisotropic continuum equivalent to the metric | 373 | 119259491 | 0 | 16 |
(2) is like free space with magnetoelectric properties. If an object made of a linear homogeneous material described by Eqs. (3) were to be placed in conventional free space, its scattering characteristics should depend on the magnitude and direction of w. We decided to theoretically examine this proposition by considering the scattering of light by a sphere made of this material. For that purpose, we employed a recently formulated analytic procedure that relies on closed-form vector spherical wavefunctions for an orthorhombic dielectric-magnetic material with magnetoelectric gyrotropy [13]. In this procedure, a transition matrix (commonly called the "T matrix") describes scattering by the homogeneous sphere made of the chosen material. The derivation of the T matrix for general nonspherical scatterers being available [13], we provide essential expressions and final results in Sec. 2. Section 3 presents numerical results to explicate the effects of magnetoelectric gyrotropy on the scattering of an incident plane wave. Special attention is paid to the total scattering efficiency, the forward scattering efficiency, and the backscattering efficiency as functions of (i) the size parameter of the sphere and (ii) the magnitude and direction of w in relation to the incident plane wave. The dependency exp(−iωt) on time t is present but suppressed, k_0 = ω/c_0 is the free-space wave number, and η_0 = √(µ_0/ε_0) is the intrinsic impedance of free space. The asterisk denotes the complex conjugate. Theory Suppose that the center of a homogeneous sphere of radius a and made of a material with constitutive relations (3) is | 374 | 119259491 | 0 | 16 |
located at the origin of a Cartesian coordinate system (x, y, z). The ambient medium is free space. The sphere is illuminated by a plane wave with the field phasors (4). Without any loss of generality, we fix k_inc = k_0 ẑ, e_inc ∥ x̂, and h_inc = (ωµ_0)^{−1} (k_inc × e_inc) ∥ ŷ. Incident-field representation In order to formulate the T matrix, we must represent the incident field phasors (4) in terms of the vector spherical wavefunctions defined for free space [14], with j_n(·) denoting the spherical Bessel function of order n, and P_n^m(·) the associated Legendre function of order n and degree m. The spherical coordinate system (r, θ, φ) is equivalent to the Cartesian coordinate system (x, y, z). The expansions [14,15] follow from Eqs. (4). However, as the scattering sphere is made of a bianisotropic material, it is convenient to recast Eqs. (6) and (7) more generally as in [13], where the normalization factor employs the Kronecker delta δ_{mn} and the expansion coefficients follow from Eqs. (4). Scattered-field representation The scattered electric and magnetic field phasors are represented as in [13]. In these expressions, the vector spherical wavefunctions M_{smn} have to be determined by the solution of a boundary-value problem [13]. In the far zone, the scattered electric field and the scattered magnetic field may be approximated in terms of the vector far-field scattering amplitude F_sca(θ, φ) [16], where r̂ = r/r. The differential scattering efficiency is | 375 | 119259491 | 0 | 16 |
given in terms of F_sca(θ, φ), and the total scattering efficiency is obtained by integrating the differential efficiency over all observation directions. Internal-field representation The electric and magnetic field phasors excited inside the vacuum-like sphere with magnetoelectric gyrotropy are represented by expansions of the form given in [13,17], the coefficients b_{smn} and c_{smn} being unknown. Equations (28) and (29) lead to a linear relation between the scattered-field and incident-field coefficients, whose connecting matrix is the T matrix of the chosen sphere suspended in free space. Because of the structure of the matrix Y^(j), the T matrix can be partitioned into blocks. Numerical results and discussion We set up a Mathematica™ program to compute the T matrix. In the program, we truncated the summations over n′ ∈ {1, 2, 3, ...} to n′ ∈ {1, 2, 3, ..., N} and similarly n ∈ {1, 2, 3, ...} to n ∈ {1, 2, 3, ..., N}. We chose sufficiently high values of N, such that the extinction efficiency Q_ext, the total scattering efficiency Q_sca, the forward scattering efficiency Q_f, and the backscattering efficiency Q_b [16] converged to a pre-set tolerance of 0.1% (an illustrative sketch of this convergence check is given after the Figure 1 caption below). Smaller values of |w| and k_0 a required smaller N, with N = 11 being adequate for |w| = 0.25 and k_0 a = 4.0. We confirmed that our program yielded negligibly tiny values of the coefficients A_{smn} and B_{smn} when we set w = 0. When w ⊥ ẑ, reversal of the direction of w was tantamount to the multiplication of A_{smn}, b_{smn}, and c_{smn} by negative unity, which | 376 | 119259491 | 0 | 16 |
left Q_ext, Q_sca, Q_f, and Q_b unchanged. When w ∥ ẑ, both w and the direction of propagation of the incident plane wave had to be reversed together for Q_ext, Q_sca, Q_f, and Q_b to remain unchanged. Regardless of the choice of w, we found that Q_ext = Q_sca, implying that the absorption efficiency Q_abs = 0. This was expected because Eqs. (3) satisfy the conditions for the absence of dissipation [18,11]. Although the magnetoelectric gyrotropy vector w can be arbitrarily oriented, three cases are of particular interest because the incident light is a plane wave: • w is parallel to the incident electric field (i.e., w ∥ x̂), • w is parallel to the incident magnetic field (i.e., w ∥ ŷ), and • w is parallel to the propagation vector of the incident plane wave (i.e., w ∥ ẑ). Figure 1 shows plots of Q_sca, Q_f, and Q_b as functions of the size parameter k_0 a when w ∥ e_inc and |w| ∈ {0.05, 0.15, 0.25}. An increase in the magnitude of the magnetoelectric gyrotropy vector has a more pronounced effect on Q_f than on Q_sca and Q_b. Whereas Q_sca is higher than Q_f for smaller values of |w| and k_0 a, the reverse is true for larger values of |w| and k_0 a. The backscattering efficiency shows oscillatory behavior, and the peaks of the oscillations increase as | 377 | 119259491 | 0 | 16 |
the size parameter increases. Efficiencies The plots of Q_sca, Q_f, and Q_b as functions of the size parameter k_0 a when w ∥ h_inc are identical to those when w ∥ e_inc. Thus, the effect of magnetoelectric gyrotropy is independent of its orientation when w ⊥ k_inc. Figure 2 shows plots of Q_sca, Q_f, and Q_b as functions of the size parameter k_0 a when w ∥ k_inc and |w| ∈ {0.05, 0.15, 0.25}. The influence of magnetoelectric gyrotropy is maximal when w ∥ k_inc, as is evident from a comparison of Figs. 1 and 2. The maximum value of Q_sca is an order of magnitude higher, and that of Q_f is two orders of magnitude higher, when w ∥ k_inc than when w ⊥ k_inc (Fig. 1). Moreover, there is no backscattering (i.e., Q_b = 0) when w ∥ k_inc, which makes the sphere invisible in the monostatic configuration. The absence of backscattering when w ∥ k_inc has an analog [19] in the reflection of a plane wave incident normally at the planar interface of free space and the material with constitutive relations (3) such that w is oriented wholly normal to the interface. Simple algebraic manipulations show that reflection is then absent (and transmission is perfect). Figure 3 shows the same plots as Fig. 2, except that w_3 < 0. A change in the sign of w_3 affects both Q_sca | 378 | 119259491 | 0 | 16 |
and Q_f, as is clear from comparing Figs. 2 and 3. Both Q_sca and Q_f are higher when w is coparallel, than when w is antiparallel, to the propagation vector k_inc of the incident plane wave. Given the foregoing trends, for arbitrarily directed w it is reasonable to expect that the effects of the component of w that is co/anti-parallel to the propagation vector of the incident plane wave would dominate those of the component of w that is perpendicular to the propagation vector. Several calculations (not shown) validated that expectation. Differential scattering efficiency For k_0 a = 4, |w| = 0.25, and four different orientations of w, the differential scattering efficiencies Q_D(θ, 0°) and Q_D(θ, 90°) are plotted versus the observation angle θ ∈ [0°, 180°] in Fig. 4. The curve of Q_D(θ, 0°) when w ∥ e_inc [Fig. 4(a)] is identical to that of Q_D(θ, 90°) when w ∥ h_inc [Fig. 4(b)]. Likewise, the curve of Q_D(θ, 90°) for w ∥ e_inc is identical to that of Q_D(θ, 0°) for w ∥ h_inc. This shows that the impact of magnetoelectric gyrotropy is largely independent of its orientation when w ⊥ k_inc. More lobes appear in the curve of Q_D(θ, 0°) as compared to Q_D(θ, 90°), and the | 379 | 119259491 | 0 | 16 |
maximum magnitude of the former is smaller than that of the latter, when w ∥ e_inc. When w is co/anti-parallel to k_inc, the differential scattering appears identical in the φ = 0° and φ = 90° planes, as shown in Figs. 4(c) and 4(d). However, more lobes exist when w is co-parallel than when it is anti-parallel to k_inc. Rayleigh scattering A long-wavelength approximation yields closed-form analytical results for scattering by homogeneous and electrically small objects [20]. Accordingly, Rayleigh scattering by the chosen sphere is equivalent to radiation jointly by an electric dipole moment and a magnetic dipole moment, both located at the centroid of the sphere, where k̂_inc = k_inc/k_0. Clearly from these expressions, both equivalent dipole moments vanish as |w| → 0. Therefore, the Rayleigh estimate of the vector far-field scattering amplitude F^Rayleigh_sca follows [16,20], wherefrom the Rayleigh estimates of the various efficiencies were obtained. Neither w_1 nor w_2 occurs by itself in the foregoing expressions, but always as w_1² + w_2². Therefore, when w ⊥ k_inc, the three efficiencies contain the quadratic form w · w and are invariant with respect to the orientation of the magnetoelectric gyrotropy vector. In contrast, when w ∥ k_inc, the efficiencies do depend on the orientation of the magnetoelectric gyrotropy vector. However, from Eq. (37) it follows that Q^Rayleigh_b = 0, so the backscattering efficiency does not. The Rayleigh expressions are expected to hold when | 380 | 119259491 | 0 | 16 |
the radius of the sphere is less than a tenth of the free-space wavelength. As an example, Fig. 5 depicts plots of Q^Rayleigh_sca and Q_sca versus k_0 a ∈ (0, 0.6] for |w| = 0.25. Clearly, the long-wavelength approximation agrees well over the entire range of k_0 a when w is coparallel to k_inc. When w is parallel to the incident electric/magnetic field or w is antiparallel to k_inc, the results match well for k_0 a ∈ (0, 0.4], but the difference between the exact and approximate results begins to rise as the value of k_0 a increases beyond 0.4. Concluding remarks Electromagnetic scattering by a vacuum-like sphere with magnetoelectric gyrotropy was formulated in terms of the T matrix, after simplifying recently derived vector spherical wavefunctions in closed form. The total scattering, extinction, forward scattering, and backscattering efficiencies were computed to explicate the effects of the magnitude and the direction of the magnetoelectric gyrotropy vector in relation to the directions of the propagation vector, the magnetic field, and the electric field of a plane wave incident on the chosen sphere. Since the permittivity and the permeability of the sphere are exactly the same as those of the surrounding vacuum, any scattering must be attributed solely to the magnetoelectric gyrotropy vector of the sphere. In general, all scattering efficiencies grow as the magnetoelectric gyrotropy grows in magnitude. A growing trend in all efficiencies with increase in the electrical size of the sphere was also found, though the growth may not | 381 | 119259491 | 0 | 16 |
be monotonic but undulatory. Both the total scattering and forward scattering efficiencies are generally lower when the magnetoelectric gyrotropy vector of the sphere is perpendicular to the propagation vector of the incident plane wave than when it is anti-parallel to the propagation vector. Further enhancements occur when the magnetoelectric gyrotropy vector is co-parallel to the propagation vector. Furthermore, the sphere is invisible in the monostatic configuration provided that the magnetoelectric gyrotropy vector is co/anti-parallel to the propagation vector. Figure 1: Q_sca, Q_f, and Q_b as functions of the size parameter k_0 a, when w is parallel to the incident electric field; w_2 = w_3 = 0, but (a) w_1 = 0.05, (b) w_1 = 0.15, and (c) w_1 = 0.25. These plots also hold true when w is parallel to the incident magnetic field; w_1 = w_3 = 0, but (a) w_2 = 0.05, (b) w_2 = 0.15, and (c) w_2 = 0.25. | 382 | 119259491 | 0 | 16 |
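The truncation-and-convergence procedure described in the numerical section above, together with the decomposition of an arbitrarily oriented gyrotropy vector into components parallel and perpendicular to the propagation direction, can be sketched as follows. The function compute_efficiencies is a hypothetical placeholder for the full T-matrix evaluation (which the authors implemented in Mathematica); only the convergence logic, with the 0.1% tolerance quoted in the text, and the vector decomposition are illustrated here.

```python
import numpy as np

def compute_efficiencies(N, w, k0a):
    """Placeholder for the T-matrix evaluation truncated at order N.

    It should return (Q_ext, Q_sca, Q_f, Q_b) for gyrotropy vector w and
    size parameter k0a; the actual T-matrix computation is not reproduced.
    """
    raise NotImplementedError

def converged_efficiencies(w, k0a, tol=1e-3, N_start=2, N_max=40):
    """Raise the truncation order N until all four efficiencies change by
    less than the preset relative tolerance (0.1% in the paper)."""
    prev = np.array(compute_efficiencies(N_start, w, k0a))
    for N in range(N_start + 1, N_max + 1):
        curr = np.array(compute_efficiencies(N, w, k0a))
        if np.all(np.abs(curr - prev) <= tol * np.maximum(np.abs(prev), 1e-12)):
            return curr, N
        prev = curr
    raise RuntimeError("efficiencies did not converge up to N_max")

# Decomposing an arbitrarily oriented gyrotropy vector w relative to the
# propagation direction (k_hat = z, as fixed for the incident plane wave):
k_hat = np.array([0.0, 0.0, 1.0])
w = np.array([0.1, 0.05, 0.2])
w_par = np.dot(w, k_hat) * k_hat      # component co/anti-parallel to k_inc
w_perp = w - w_par                    # component perpendicular to k_inc
```

Raising N only when needed keeps the computation cheap for small |w| and k_0 a, consistent with the observation in the text that N = 11 sufficed for |w| = 0.25 and k_0 a = 4.0.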
Genetic variations in GBA1 and LRRK2 genes: Biochemical and clinical consequences in Parkinson disease Variants in the GBA1 and LRRK2 genes are the most common genetic risk factors associated with Parkinson disease (PD). Both genes are associated with lysosomal and autophagic pathways, with the GBA1 gene encoding the lysosomal enzyme glucocerebrosidase (GCase) and the LRRK2 gene encoding the leucine-rich repeat kinase 2 enzyme. GBA1-associated PD is characterized by earlier age at onset and more severe non-motor symptoms compared to sporadic PD. Mutations in the GBA1 gene can be stratified into severe, mild and risk variants depending on the clinical presentation of disease. Both loss- and gain-of-function hypotheses have been proposed for GBA1 variants, and the functional consequences associated with each variant are often linked to mutation severity. On the other hand, LRRK2-associated PD is similar to sporadic PD, but with a more benign disease course. Mutations in the LRRK2 gene occur in several structural domains and affect phosphorylation of GTPases. Biochemical studies suggest a possible convergence of GBA1 and LRRK2 pathways, with double mutant carriers showing a milder phenotype compared to GBA1-associated PD. This review compares GBA1- and LRRK2-associated PD, and highlights possible genotype-phenotype associations for GBA1 and LRRK2 separately, based on the biochemical consequences of single variants. Introduction Parkinson disease (PD) is the second most common neurodegenerative disorder. The disease is characterized by the progressive loss of dopaminergic neurons in the substantia nigra pars compacta (SNpc) and the presence of intracellular proteinaceous inclusions, named Lewy bodies, which are made up primarily | 383 | 251518480 | 0 | 16 |
of alpha-synuclein protein aggregates (1,2). PD patients exhibit a classic triad of motor symptoms including bradykinesia, rigidity and resting tremor. A spectrum of non-motor symptoms has also been described, including cognitive decline, sleep disturbances, hyposmia and psychiatric symptoms (3). Approximately 10-15% of all PD is caused by an identifiable genetic mutation (4), with large genome-wide association studies (GWAS) having identified several additional genes and genetic loci important in familial and sporadic PD, many of which are associated with lysosomal and autophagic functions. Among these are the GBA1 gene (OMIM 606463), which encodes the lysosomal hydrolase enzyme glucocerebrosidase (GCase; EC 3.2.1.45), and LRRK2 (OMIM 609007), which encodes the leucine-rich repeat kinase 2 enzyme. Variants in these genes are widely recognized as the two most common genetic risk factors of PD worldwide (5)(6)(7). In this review, we highlight the differences between GBA1 and LRRK2 variants, from both a clinical and biochemical perspective, and disentangle the complexity and heterogeneity of GBA1- and LRRK2-associated PD. We also summarize the recent findings on PD patients carrying both GBA1 and LRRK2 variants, their particular clinical phenotype compared to single respective mutants, and the possible pathomechanisms involved. Understanding the functional consequences of these variants and how they ultimately lead to specific PD phenotypes is crucial to developing novel, gene-targeting therapies and directing patients to appropriate clinical trials. The GBA1 gene to protein The GBA1 gene is located on chromosome 1 (1q21) and is made up of 11 exons and 10 introns spanning a sequence of 7.6 kb. It encodes a 60 | 384 | 251518480 | 0 | 16 |
kDa lysosomal hydrolase enzyme, glucocerebrosidase (GCase). The mature GCase peptide consists of 497 residues and is comprised of three noncontinuous domains (as shown in Figure 1). The active site is located in Domain III, which is a (β/α)_8 triosephosphate isomerase (TIM) barrel. Domain I consists of an antiparallel β-sheet, and Domain II resembles an immunoglobulin fold made up of 8 β-sheets (8)(9)(10). Within the mature GCase structure are three important flexible loops, which cap the active site. In an acidic environment, the conformation of loop 3 changes to allow substrates to access the active site (11,12). GCase cleaves the sphingolipid glucosylceramide (GlcCer) into glucose and ceramide at the lysosome. Bi-allelic GBA1 mutations cause the lysosomal storage disorder Gaucher disease (GD), which presents as widespread accumulation of GlcCer and glucosylsphingosine (GlcSph) within the lysosomes of many cell types, particularly macrophages, across several tissues and organs. GCase is folded in the endoplasmic reticulum (ER) and binds to the lysosomal integral membrane protein type 2 (LIMP-2) to be trafficked to the lysosome through the secretory pathway, where it undergoes N-linked glycosylation (15-17). These post-translational modifications are thought to be imperative to the production of a fully active enzyme (18). FIGURE 1 The X-ray structure of glucocerebrosidase (PDB code GXI). Domain I is shown in orange. Domain II is shown in pink. Domain III, the catalytic domain, is shown in blue and contains the two catalytic glutamate (E) active-site residues, which are shown as ball-and-stick models. The six significant glucocerebrosidase variants (R120W, L444P, E326K, N370S, D409H, and | 385 | 251518480 | 0 | 16 |
RecNcil) are shown with spheres. The color of the spheres corresponds to the odds ratio associated with the variant (green: lowest; yellow: intermediate; red: highest). This figure was created using The PyMOL Molecular Graphics System (Schrödinger, LLC). Common GBA1 variants Almost 300 unique variants have been reported in the GBA1 gene, which span the entire protein (Figure 1). These include missense or non-sense mutations, insertions or deletions, complex alleles and splice junction mutations. The point mutations c.1226A>G (N370S) and c.1448T>C (L444P) are those most commonly associated with GD (19,20). Generally, the L444P variant causes severe, neuronopathic type II or III GD, whereas the N370S variant is generally associated with non-neuronopathic type I GD (21). Some GBA1 mutations arise from recombination events between the functional GBA1 gene and a highly homologous pseudogene (GBA1P), an example of which is the complex allele RecNcil (19,20). Many mutations in the GBA1 gene, including the common R120W variant, occur in and around the active site, influencing its stability and affecting enzyme activity. Other common mutations, including D409H and L444P, occur far from the active site, suggesting important roles for Domains I and II (9). In the case of the L444P variant, the substitution of leucine to proline causes rigidity in the protein backbone, potentially disrupting the hydrophobicity of the domain (22), which may influence protein folding. This variant is also thought to be influenced by a lack of N-linked glycosylation and subsequent structural instability (23). To date, the crystal structure | 386 | 251518480 | 0 | 16 |
of N370S GBA1 is the only X-ray structure resolved. The N370S mutation occurs at the interface of domains II and III (9) and prevents stabilization of loop 3 at an acidic pH, impairing the ability of GCase to bind its substrate (12, 24). Within PD, GBA1 gene variants are stratified into complex, severe, mild and risk variants. The severity of a GBA1 mutation is based upon the phenotype it presents when homozygous in those with GD. Risk variants are referred to as such because they do not present any clinical features of GD when homozygous or compound heterozygous, but increase the risk of PD (30-32). In terms of cognitive function, GBA1-PD patients with mild or risk variants showed slower occurrence of cognitive impairment compared to those with complex or severe variants (42,45,46), or to non-carriers (47). Psychiatric symptoms, hallucinations and hyposmia are also more common in GBA1-PD vs. non-carriers (40,42,44,48), and these are more frequent in carriers of severe and complex variants compared to mild or risk variants (42,49). Controversy surrounds disease progression in GBA1-PD. In one study, GBA1-PD was characterized by more aggressive progression and reduced survival rates compared to non-carriers (41); however, in another longitudinal study evaluating AJ patients, no significant effect on survival of either severe or mild variants was detected (50). When stratifying by variant type, risk variants were associated with similar mortality rates compared to non-carriers (51), with the greatest association with increased mortality in patients carrying severe variants (46). Severe variants are generally associated with faster development of motor complications (42,51). | 387 | 251518480 | 0 | 16 |
However, more recent longitudinal studies suggest that GBA1 status does not influence the risk of developing motor complications, even where different types of variants were considered separately (52)(53)(54). Evaluating the biochemical consequences of GBA1 variants and their relationship with clinical features may aid in understanding the complexity of GBA1-PD. Among markers of GBA1 dysfunction, GCase enzymatic activity is the most studied. GCase activity was found to be reduced in leucocytes (42), dried blood spots (55)(56)(57), and cerebrospinal fluid (CSF) (58) of patients with GBA1-PD compared to non-carriers. GCase activity presented a steeper decline among GBA1-PD patients according to variant severity (42). In a longitudinal analysis, increasing severity of GBA1 variants was associated with increasingly steeper decline in GCase activity; however, the latter was not correlated overall with increasing severity of motor or cognitive features (56). Similarly, no genotype-phenotype correlation was found between GCase enzymatic activity and disease severity outcomes in a cross-sectional study (57), suggesting that GCase enzymatic activity might not be a reliable marker of disease severity or progression in GBA1-PD. Longitudinal studies evaluating other biochemical consequences of GBA1 dysfunction (e.g., sphingolipid metabolism), perhaps in combination with GCase deficiency, and their ultimate impact on disease course are needed. GBA1 variants and Parkinson disease: Pathogenic mechanisms Both loss- and gain-of-function pathways are proposed to influence PD risk and onset (59, 60), and it is thought that these two hypotheses are not mutually exclusive. An overview of the pathogenic mechanisms associated with individual GBA1 mutations can be found in Table 1. Variants in the GBA1 gene | 388 | 251518480 | 0 | 16 |
often lead to a loss of GCase function. Analysis of GCase activity in the blood of PD patients has demonstrated that patients with severe GBA1 mutations exhibit a greater reduction in GCase activity when compared to those with mild GBA1 mutations and risk variants (56). This is supported by functional analysis of recombinant GCase protein, showing that risk variants reduce GCase activity to a lesser extent than GD-causing variants. The L444P and N370S variants reduce catalytic activity by 75-97 and 65-97%, respectively, whereas the E326K variant was associated with a 43-58% reduction (10, 61-63). The same pattern has been observed in fibroblast lines from patients harboring these mutations (64). TABLE 1 Overview of pathogenic mechanisms associated with individual GBA1 variants; columns: Variant, Severity, GCase Activity, ALP function, Lipid homeostasis, ER stress, Alpha-synuclein pathology, Mitochondrial function. (↓) denotes reduction in function, (↑) denotes an increase and (-) denotes unchanged or no literature surrounding this mechanism. ALP, autophagy lysosomal pathway; ER, endoplasmic reticulum. However, this genotype-phenotype correlation is absent in one study in induced pluripotent stem cell (iPSC)-derived dopamine neurons, where GCase activity was similarly reduced in L444P and N370S variants (65). A loss of GCase activity may explain some of the downstream pathogenic mechanisms observed in models of GBA1 variants, as in human cells a GCase deficiency was associated with lysosomal dysfunction and alpha-synuclein pathology (66). In iPSC-derived midbrain dopamine neurons, the N370S variant has been associated with a significant reduction in GCase activity and protein, accompanied by impairment of the lysosome, altered distribution of GlcCer and increased extracellular release of alpha-synuclein (67). Similarly, in neural crest stem | 389 | 251518480 | 0 | 16 |
cell-derived midbrain dopamine neurons, heterozygous N370S mutations cause a loss of GCase function, impaired macroautophagy and alpha-synuclein pathology. This was rescued by the small molecular chaperone ambroxol, suggesting these defects arose from improper trafficking and activity of the N370S GCase protein (68). In cells harboring the L444P mutation, impaired lysosomal and autophagic function has been demonstrated, accompanied by a significant reduction in GCase activity and protein (69)(70)(71). However, contrary to the hypothesis that a loss of GCase function is imperative for cellular pathology, in iPSC-derived dopamine neurons from patients with homozygous and heterozygous L444P and N370S mutations, activity did not correlate with pathology. In homozygous lines, GCase activity was reduced to a greater extent than in heterozygous lines; however, no difference was observed in alpha-synuclein pathology and autophagic defects (65). Improper function of the autophagy-lysosomal pathway (ALP) can lead to the aberrant metabolism of alpha-synuclein. This has been shown in models of L444P and N370S variants (67,68). In L444P heterozygous mice, a significant loss of GCase activity led to an abundance of alpha-synuclein inclusions in the brain and altered levels of GlcSph (72). This variant has also been associated with increased neuronal vulnerability to, and accelerated spread of, alpha-synuclein pathology in mice (73, 74). It has also been proposed that there may be a genotype-phenotype correlation between severe and mild GBA1 variants and alpha-synuclein pathology. In SH-SY5Y cells, the L444P variant was associated with a greater increase in alpha-synuclein accumulation and stabilization, compared to N370S and wildtype (75). Another study in fibroblasts and SH-SY5Y cells demonstrated that both | 390 | 251518480 | 0 | 16 |
L444P and N370S fibroblasts exhibited an increase in the release of extracellular vesicles compared to control lines. However, alpha-synuclein pathology in SH-SY5Y cells was only promoted when incubated with vesicles isolated from L444P lines, and not N370S lines (76). In addition, a recent study showed that the E326K and L444P variants, despite different GCase activity, both presented comparable levels of alpha-synuclein aggregates, suggesting that loss of GCase activity is not the only mechanism involved in alpha-synuclein pathology and that other mechanisms are involved in this process, especially for risk variants (64). In addition to alpha-synuclein, the metabolism of lipids can be affected by impairment of the ALP or mitochondria, the latter of which has also been demonstrated in models of L444P (70,71) and N370S (77) variants. Changes in the composition of glycosphingolipids have been demonstrated in models of GBA1 variants, likely due to a loss of GCase function and poor lysosomal and autophagic degradation. In mice with N370S and L444P variants, a reduction in GCase function was concurrent with an accumulation of GlcSph, which promoted alpha-synuclein aggregation (78). Similarly, in N370S iPSC-derived dopamine neurons an accumulation of GlcCer and alpha-synuclein was observed (79). Accumulation of glycosphingolipids may be key to the pathology of L444P and N370S GBA1 variants, as in dopamine neurons with these variants, reducing the levels of GlcCer/GlcSph rescued alpha-synuclein pathology (79,80). Interestingly, in one study of L444P mice an accumulation of GlcSph alone was observed, which accelerated alpha-synuclein aggregation (72). In fibroblasts from L444P heterozygous patients, a significant increase in glycosphingolipids has been | 391 | 251518480 | 0 | 16 |
demonstrated, which correlated with decreased GCase activity. When these lipids were extracted and incubated with recombinant alpha-synuclein, an increase in the pathogenic aggregation of alpha-synuclein was observed, due to a higher content of short-chain lipids in the L444P cells (81). This may occur because lipid membrane dynamics are required for macroautophagy and chaperone-mediated autophagy (CMA) (82). In addition to glycosphingolipids, the level of fatty acids may be altered by GBA1 variants. In SH-SY5Y cells, expression of the E326K variant led to increased accumulation and formation of lipid droplets, which was accompanied by alpha-synuclein aggregation (64), suggesting alterations in the metabolism of several lipid types may be key to GBA1 pathology. An additional pathogenic mechanism that has been proposed for GBA1-associated PD arises from the toxic gain-of-function hypothesis. As the majority of GBA1 variants are missense, a misfolded protein is often produced and retained in the endoplasmic reticulum (ER). This can activate ERAD and lead to a deficiency in enzyme level through degradation, and can activate pathways such as the unfolded protein response (UPR) and eventual ER stress. In some studies, in fibroblasts and Drosophila, activation of the UPR has been demonstrated in L444P and N370S variants (83,84). Conversely, other studies have suggested a genotype-phenotype correlation between variant severity and UPR activation. In fibroblasts and SH-SY5Y cells, L444P has been associated with ER retention and ER stress, which was absent in N370S and E326K cells (64, 84). In another study, the severe L444P variant displayed extensive ERAD (85), suggesting that the extent of ER stress | 392 | 251518480 | 0 | 16 |
may correlate with disease severity, perhaps due to more pronounced conformational changes. However, another fibroblast study has demonstrated heterogeneity in ER retention and degradation across lines with the N370S genotype (86), weakening the genotype-phenotype correlation argument. Overall, current evidence suggests that the mechanisms by which GBA1 variants predispose to PD are multifaceted. Different pathogenic mechanisms could explain the differences in risk and phenotypes of PD for single variants, and future studies will need to address these questions. The reasons why the majority of GD patients or heterozygous carriers do not develop PD also remain unexplained. GBA1-Parkinson disease: Current and future therapeutic strategies The discovery of the GBA1 gene in PD has opened a new avenue to develop novel therapeutics for PD, with several GBA1-targeted strategies under development with the aim of enhancing GCase activity [reviewed in Smith et al. (87)]. Significant focus is on the development of molecular chaperones that penetrate the blood-brain barrier (BBB) to bind and refold GCase in the ER, facilitating trafficking and rescuing enzyme activity (88). Within this class is the inhibitory, pH-dependent small molecular chaperone ambroxol (89), which has been shown to increase GCase activity and reduce alpha-synuclein pathology in several cell and animal models (68,(90)(91)(92)(93)(94)(95). Ambroxol has also demonstrated the ability to reduce UPR activation in Drosophila models of GCase deficiency (84, 96). In Type 1 GD patients, ambroxol has been shown to be safe and tolerable (ClinicalTrials.gov Identifier: NCT03950050) (97), and results from a phase II, single-centre trial in PD patients with and without GBA1 mutations demonstrate that ambroxol | 393 | 251518480 | 0 | 16 |
can cross the BBB and enter the CSF, where it can alter GCase activity and protein level (ClinicalTrials.gov Identifier: NCT02941822) (98). Ambroxol also increased the alpha-synuclein concentration in the CSF and, importantly, improved motor function. A phase III clinical trial of ambroxol in treating PD is expected to commence in early 2023. In addition to inhibitory chaperones, development of non-inhibitory chaperones for GCase is underway. Two compounds, NCGC758 and NCGC607, have been shown to improve GCase trafficking and rescue glycosphingolipid and alpha-synuclein accumulation in iPSC-derived dopamine neurons from GBA1-PD patients (99,100). Allosteric modulator small molecules, which can bind and enhance GCase activity, are also an area of interest. One example is LT1-291, which has been shown to cross the BBB (Trialregister.nl ID: NTR7299) (101). Pre-clinical studies have demonstrated that LT1-291 can reduce substrate accumulation (101), and this was also shown in a phase 1b placebo-controlled trial in GBA1-PD patients (NL6574). Further clinical trials are expected. Small molecules are also being developed to modulate GCase activity by targeting other proteins. One example is the class of histone deacetylase inhibitors (HDACis), which have been shown to increase GCase activity by preventing its ubiquitination and degradation (102, 103) or improving GCase folding and trafficking (104) in GD fibroblasts. Enzyme replacement therapy (ERT) has shown great efficacy in improving the visceral symptoms of GD but fails to cross the BBB (105). Currently, research is underway to improve the delivery of wild-type GCase enzyme and enhance its ability to cross the BBB. This involves ligating a peptide, usually a virus-associated | 394 | 251518480 | 0 | 16 |
protein, to the GCase enzyme (106). Denali Therapeutics have recently developed the transport-vehicle-modified recombinant GCase enzyme (ETV:GBA1) compound, using their transport vehicle platform technology, which has the potential to actively transport enzymes across the BBB (107). Preclinical research is underway with this compound, but further studies are needed to investigate its efficacy in GBA1-PD patients. Another avenue being explored to deliver wild-type GCase enzyme to the brain is gene therapy. Most commonly, the GBA1 gene is ligated into the adeno-associated virus (AAV) vector and delivered to the brain. In mouse models of GD this method has been shown to rescue GCase activity and expression, reduce alpha-synuclein pathology and decrease glycosphingolipid accumulation (108-111). Prevail Therapeutics are currently testing their PR001A compound, which delivers the GBA1 gene using the AAV-9 vector, in phase I clinical trials (ClinicalTrials.gov Identifier: NCT04127578 and NCT04411654). Strategies targeted to GCase to reduce the accumulation of glycosphingolipid substrates are also under development. The substrate reduction therapy (SRT) miglustat has shown efficacy in reducing lipid accumulation in dopamine neurons from PD patients with GBA1 mutations, and can reduce alpha-synuclein pathology when coupled with GCase overexpression (79). However, miglustat cannot cross the BBB. Novel brain-penetrant SRTs are therefore being developed. Sanofi's venglustat (GZ667161) had shown promise in GCase-deficient synucleinopathy mouse models, reducing alpha-synuclein and glycosphingolipid accumulation and improving cognitive function (112). The phase I trials of venglustat demonstrated successful target engagement (ClinicalTrials.gov Identifier: NCT01674036 and NCT01710826); however, the phase II trial failed to show a benefit, with patients with GBA1 mutations exhibiting a | 395 | 251518480 | 0 | 16 |
decline in motor function in PD (ClinicalTrials.gov Identifier: NCT02906020). LRRK2 is expressed ubiquitously in the brain, including in neurons and glial cells, as well as in the kidneys, lungs, liver, heart and immune cells (118)(119)(120)(121). The LRRK2 protein is thought to be primarily cytosolic but can also localize to a subset of organelles and inner cellular membranes, including mitochondria, ER, Golgi apparatus and microtubules (122, 123). However, the physiological roles of LRRK2 remain unclear, although it is suggested to be involved in many different processes such as adult neurogenesis, scaffolding, homeostasis of lysosome-related organelles, the innate immune response and neuroinflammation (124)(125)(126). Common LRRK2 variants There are several LRRK2 missense variants that have been confirmed to increase PD risk, including the most common variant G2019S, as well as N1437H, R1441C/G/H/S, Y1699C and I2020T (127,128). As seen in Figures 2A,B, G2019S resides in the activation loop of LRRK2's ATP-binding site, which regulates LRRK2 kinase activity (129). A computational prediction study suggests that G2019S may decrease the flexibility of the loop and improve the stability of the kinase domain, enabling it to remain in an active conformation for an extended period (130). This has been shown to increase phosphorylation of substrates by 2- to 3-fold (131). Another variant associated with increased PD risk, I2020T, is also located in the activation loop of the kinase domain and has been reported to significantly increase LRRK2 autophosphorylation by around 40% relative to the native enzyme (122). Other variants that do not reside in the kinase domain may also modify LRRK2 kinase activity. The | 396 | 251518480 | 0 | 16 |
ROC domain contains motifs that are conserved amongst GTP-binding proteins, suggesting that LRRK2 is a functioning GTPase that can regulate LRRK2 kinase activity (132)(133)(134). An in vitro study showed that the R1441C/G/H/S mutations, located in the ROC domain, increase kinase activity while decreasing GTP hydrolysis and weakening LRRK2 dimerisation (132). N1437H in the ROC domain has been proposed to impair monomer-dimer conformational dynamics and hinder GTPase activity, permanently locking LRRK2 into a dimeric state (135). T1410M, found in the ROC domain, is a novel variant with unclear pathogenicity and may distort the tertiary structure of LRRK2 and disrupt GTP hydrolysis (136). Meanwhile, the Y1699C variant resides in the COR domain and is proposed to strengthen ROC-COR interactions, weaken ROC-COR dimerization and reduce GTPase activity (137). Y2189C, identified in Arab-Berber populations (138), is located within the WD40 domain and is presumed to have a deleterious effect on LRRK2, inducing high levels of cellular toxicity (139); however, there is still controversy surrounding its pathogenicity for PD (128, 138). The G2385R and R1628P variants act as potential genetic risk factors in Chinese and Malaysian populations (140-142). G2385R is also located within the WD40 domain and causes dysfunctional synaptic vesicle trafficking (128, 143, 144), while R1628P is located in the COR domain. LRRK2 gene variants and Parkinson disease Worldwide, LRRK2 G2019S is found in 1% of sporadic PD and 4% of familial PD cases (145). It is most frequently found in sporadic PD cases of North African Arab and of AJ descent (30 and | 397 | 251518480 | 0 | 16 |
10% of cases, respectively), whereas the variant is rarely found in Asians (only 0.1%) (145). The penetrance of PD in subjects carrying an LRRK2 mutation is not fully elucidated and varies with age, which may explain both the high prevalence of mutations in sporadic PD cases and the detection of mutations in unaffected individuals (145). Although this finding has been repeatedly reported, the precise mutation penetrance rates vary across studies due to the different populations considered and methodologies applied, and it is unclear whether distinct variants can differentially impact penetrance. Overall, cumulative risk has been estimated to be around 30-40% at age 80, with variable figures ranging from 7 to 80% (145)(146)(147)(148)(149)(150). In one study considering the effects of pathogenic LRRK2 mutations on penetrance, carriers of G2019S showed a lower penetrance compared to carriers of other pathogenic mutations combined, although the group of non-G2019S carriers was relatively small (145). LRRK2-Parkinson disease: Clinical picture and genotype-phenotype associations LRRK2-PD patients are clinically very similar to sporadic PD patients. There are no differences in age at onset between LRRK2-PD patients carrying pathogenic variants vs. non-carriers (151,152), as well as between carriers of different pathogenic mutations (G2019S vs. R1441C/G/H) (127), or carriers of risk variants vs. non-carriers (141) or vs. carriers of pathogenic variants (153). Interestingly, the male predominance seen in PD is less represented within LRRK2-PD patients (151,152). The motor phenotype of LRRK2-PD is that of levodopa-responsive parkinsonism, with sustained response over time, later onset of levodopa-induced dyskinesia (145,151), and milder progression in motor symptoms over time (152) compared to non-carriers.
Although data comparing different genotypes are limited, there may be genotype-phenotype associations within LRRK2-PD, with risk variants showing a more rapid progression and G2019S a more benign course. A higher incidence of the postural instability gait difficulty (PIGD) sub-type has been reported in PD patients of both AJ origin carrying G2019S (151,152,154) and Chinese origin carrying G2385R (155), when compared to non-carriers. Similar rates of the PIGD sub-type were found in G2019S and G2385R when compared together (156). Within pathogenic variants, PD patients with G2019S showed more frequent PIGD when compared to patients carrying the R1441G variant (127). When analyzing disease course, carriers of pathogenic variants showed a more sustained response to levodopa and lower motor scores when compared to carriers of risk variants (153,156), and survival curves of AJ G2019S PD carriers were also not different from those of non-carriers (50, 157). Within pathogenic mutations, motor fluctuations were more frequently reported in carriers of the p.R1441C/G/H mutation than in carriers of the p.G2019S mutation (127). From a non-motor perspective, the phenotype of all LRRK2-PD patients seems to be more benign compared to that of non-carriers. Slower cognitive decline has been observed in LRRK2-PD compared to sporadic PD or GBA1-PD (145,158). G2019S-carrying PD patients also showed better olfactory function, less severe mood disorders, and less frequent REM sleep behavior disorders (RBD) (159, 160) compared to non-carriers (156). In a cohort of Chinese patients, carriers of G2385R presented better cognitive performance and more severe RBD symptoms compared to non-carriers (155). Overall, a genotype-phenotype relationship among LRRK2-PD patients might exist, with pathogenic